
Thread: Battlefield 1

  1. #1
    Join Date
    Aug 2008
    Location
    UK
    Posts
    5,704

    Default Battlefield 1

    New trailer out for Battlefield 1.

    Once you get past the CGI at the start (lasts for about 40 secs), the actual game graphics look awesome. Only trouble is my experience of EA and Battlefield is the games are usually great at release, then dumbed down and spoilt shortly after, so not sure I can be tempted.


  2. #2
    Join Date
    May 2016
    Posts
    104

    Default

The issue with these modern AAA titles is that you get realistic graphics with an unrealistic feel. I'm confident the intro CGI was rendered by the game's engine in real time, not in Maya or 3D Studio.
Let me explain: a game can be split into these parts: graphics, audio, input, physics simulation and AI.
While the first three have improved quite a bit, the last two haven't changed much for about a decade or more.
So you have a nice trailer that looks photorealistic, but then you play the game and you get robotic helper and enemy AI, almost indestructible environments, your car/bike/plane getting stuck between two pieces of invisible collision geometry... it just doesn't feel as realistic as it looks, which kills the immersion. On top of that, the traditional third-person camera and the helper-HUD spam aren't helping.
Of course they try to hide these flaws in the "gameplay trailers" by showing only the best shots.

    Just to show what I mean, check 3:58 in this video: https://www.youtube.com/watch?v=z3RKj9-X8dg
    This is not realistic, sorry.

Rated against other games, though, it's pretty good.
    Last edited by neskusen; 06-19-2016 at 02:33.

  3. #3
    Join Date
    Aug 2008
    Location
    UK
    Posts
    5,704

    Default

Yeah, granted, although with PC versions you don't get auto-aim assistance etc., so they're usually much more realistic and skilful than console versions, especially in multiplayer. Personally I've never bothered with single player, as playing against AI isn't much fun anyway. In previous Battlefield games and associated titles, e.g. Crysis, there has been a concerted effort to make environments destructible. If you look at the original video, at several points you can see buildings being destroyed, and even a whole village when the blimp crashes onto it.

e.g. at 46 secs and again at 47 secs, a trailer of some sort and the front half of a building collapse, and the blimp crash at 1 min 12 secs.

We have seen this before in Battlefield games, although generally the destruction gets more complete with every generation as graphics and processing power increase, e.g. in Battlefield 4:

    https://www.youtube.com/watch?v=NEZ5p3H-M5k

That's not scripted CGI but gameplay, with the destruction occurring only in response to events on the ground. The only scripted part is the final collapse: once total damage reaches critical, the building always comes down the same way, i.e. the rubble falls into the same place every time with the same cavities and spaces.
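As a toy illustration of that hybrid scheme (dynamic, physics-driven damage up to a threshold, then a pre-baked collapse that plays out identically every time), here is a hypothetical sketch; the class, names and numbers are all made up for illustration, not real engine code:

```python
# Toy sketch of hybrid destruction: damage accumulates from gameplay
# events, but once it crosses a critical threshold, a pre-authored
# ("scripted") collapse plays, so the rubble always lands the same way.

class Building:
    CRITICAL_DAMAGE = 100.0

    def __init__(self):
        self.damage = 0.0
        self.collapsed = False
        self.events = []  # log of what the player would see

    def apply_hit(self, amount):
        if self.collapsed:
            return
        self.damage += amount
        # Dynamic partial destruction up to the threshold.
        self.events.append(f"chunk breaks off ({amount} damage)")
        if self.damage >= self.CRITICAL_DAMAGE:
            self.collapse()

    def collapse(self):
        # Deterministic, pre-baked collapse: identical every time.
        self.collapsed = True
        self.events.append("play canned collapse animation")

b = Building()
for hit in (30, 30, 50):   # e.g. tank shells of varying strength
    b.apply_hit(hit)
print(b.events[-1])        # the final event is always the scripted collapse
```

However the damage arrives, the end state is the same canned animation, which matches what you see in-game: varied partial damage, but the same rubble pile every time.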

I do agree that mesh collisions that don't result in destruction can be unrealistic and frustrating. However, the biggest limiting factor in creating truly destructible environments is always graphics, processing and memory capability: destruction requires a lot of calculation, a lot of graphics power due to the high number of particles on screen at any one time, plus a lot of memory to remember the changes. It can only get better over time, although the decline in PC game sales, probably due to the high cost of constantly upgrading, may well be a limiting factor. Consoles generally don't have the power to render as much as games built for the very latest PCs.

Personally, I'm not a fan of EA games because of the way they usually make gameplay changes after release. That pretty much stops me buying any EA title these days. That said, I have to admit that at first glance Battlefield 1 looks very impressive, far better than I ever imagined. Star Wars Battlefront was a total turn-off for me, pretty much on gameplay grounds: the beta maps felt poor and the gameplay very wooden and artificial.

  4. #4
    Join Date
    Aug 2008
    Location
    UK
    Posts
    5,704

    Default

Just found some multiplayer beta action here; not much evidence of destruction, but the gameplay does look like classic Battlefield, with some nice graphics:


  5. #5
    Join Date
    May 2016
    Posts
    104

    Default

I understand the limiting factor is hardware; I'm not trying to imply the hardware is there and the software developers just aren't using it (well, except maybe for AI code).
But I do think hardware developers should gradually invest more in general-purpose or specialised processing units for faster physics simulation or AI, rather than pumping everything into the graphics processing units. It's time physics and AI in games caught up with the realism reached on the graphical side. They tried it at one point but abandoned the idea: https://en.wikipedia.org/wiki/Physics_processing_unit
It doesn't have to be a separate component on the motherboard like the GPU, either; it could be another chip on the graphics card, since in other applications where the GPU is used for 3D graphics (3D animation programs, simulation programs, etc.) physics simulation is also used, so almost anyone buying a GPU would make use of it. Today GPUs are allowed to perform this kind of calculation (e.g. via compute shaders), but they just aren't designed for it, the same way a CPU is slow at rendering.
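To make the compute-shader point concrete, here is a minimal sketch of a data-parallel physics tick in Python/NumPy, used as a stand-in for GPU threads: every particle is updated by the same code with no dependence on its neighbours, which is exactly the shape of workload a compute shader (or a dedicated physics unit) accelerates. All names and numbers here are illustrative, not from any real engine.

```python
import numpy as np

# Minimal data-parallel physics step: one semi-implicit Euler update
# applied to every particle at once. On a GPU, each particle would be
# one shader thread; NumPy's vectorised ops play that role here.

N = 10_000
dt = 1.0 / 60.0                       # fixed 60 Hz physics tick
gravity = np.array([0.0, -9.81, 0.0])

pos = np.random.rand(N, 3) * 10.0     # random initial positions (metres)
vel = np.zeros((N, 3))                # start at rest

def step(pos, vel):
    # Semi-implicit Euler: update velocity first, then position.
    vel = vel + gravity * dt
    new_pos = pos + vel * dt
    # Crude ground plane at y = 0: clamp and kill downward velocity.
    below = new_pos[:, 1] < 0.0
    new_pos[below, 1] = 0.0
    vel[below, 1] = 0.0
    return new_pos, vel

for _ in range(600):                  # simulate 10 seconds
    pos, vel = step(pos, vel)

# After 10 s of free fall from at most 10 m, every particle has landed.
print(pos[:, 1].max())
```

The point is that the per-particle work is independent and uniform, so throughput scales with how many of these updates the hardware can run in parallel, regardless of whether that hardware is called a GPU, a compute shader, or a PPU.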

    The better graphics just don't sell it for me anymore and of course not everyone shares this feeling. VR headsets might get me back into gaming and game development though.
    Last edited by neskusen; 06-19-2016 at 10:02.

  6. #6
    Join Date
    Oct 2012
    Location
    Germany
    Posts
    1,479

    Default

... from '87 to '90 I did my diploma thesis on programming and simulating "massively parallel networks" built with Transputers. I found some methods for very fast calculation and exchange of 3D data across these networks in a sort of "holographic live-stream", totally different from the ray-tracing and radiosity algorithms used then ... so my expectation was that this way of interacting with 3D data would bring better real-time simulation of complex 3D worlds, or better/faster game engines.

But it seems to be hard to bring alternative ideas into a 'prospering' market ...

    Viktor

  7. #7
    Join Date
    Aug 2008
    Location
    UK
    Posts
    5,704

    Default

It's interesting, actually: I've just noticed, watching the 43-minute video from which the earlier footage seems to have been extracted, that all the way through the commentator says pretty much the whole environment is destructible, and apart from a single village that starts off in ruins, the whole landscape starts out pristine. It's at 33 mins in here:

    https://www.youtube.com/watch?v=CN0xHRZQmms

That video also has some great camera fly-throughs, as they're using the free-camera viewing mode where a third-party viewer can choose to watch any of the action or players rather than following a single player's screen.

Viktor, why not offer your services to EA or similar? They have European offices, and I'm sure if you have something truly innovative they'd be interested in taking a look and maybe taking you on. Anything that takes graphics forward, be it in gaming, laser simulation or other uses such as 3D virtual worlds or mapping, can only be good for the future of software and tech generally.

  8. #8
    Join Date
    Oct 2012
    Location
    Germany
    Posts
    1,479

    Default

... if only it were that easy :-/

I was in contact with a company developing and building 3D displays (at the time based on displays with lenticular sheets) to present them with a more "holographic" optical mode, also without goggles, and able not only to display but also to record live 3D content.

It's a pretty simple method, actually used today for light-field cameras, but over 80 years old; I first found it mentioned in a book for optical engineers from 1925.

In a 'passive' display mode it's already used for printing 3D sheets - http://www.mireco.net/

Used in an 'active' mode, you can record or transmit a 3D scene from a "plenoptic" camera with a lens array in front of the sensor to a display with a corresponding lens array.
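For the technically minded, the lens-array idea maps onto the standard light-field parameterisation: each sensor pixel under a lenslet records one ray, addressed by the lenslet's grid position and the pixel offset beneath it, and a display with a matching array re-emits rays at the same addresses. A hypothetical sketch of that indexing (all sizes are made up for illustration):

```python
# Sketch of light-field indexing for a plenoptic camera/display pair.
# A lenslet at grid position (s, t) with pixel offset (u, v) underneath
# it defines one ray; a matching display re-emits rays using the same
# (s, t, u, v) address.

LENSLETS = (8, 8)      # lenslet grid (s, t)
PIXELS = (5, 5)        # sensor pixels behind each lenslet (u, v)

def ray_to_index(s, t, u, v):
    """Flatten a ray address (s, t, u, v) to a linear sensor index."""
    assert 0 <= s < LENSLETS[0] and 0 <= t < LENSLETS[1]
    assert 0 <= u < PIXELS[0] and 0 <= v < PIXELS[1]
    return ((s * LENSLETS[1] + t) * PIXELS[0] + u) * PIXELS[1] + v

def index_to_ray(i):
    """Invert the mapping: linear sensor index back to (s, t, u, v)."""
    i, v = divmod(i, PIXELS[1])
    i, u = divmod(i, PIXELS[0])
    s, t = divmod(i, LENSLETS[1])
    return s, t, u, v

# Round trip: the display addresses rays exactly as the camera recorded them.
assert index_to_ray(ray_to_index(3, 1, 2, 4)) == (3, 1, 2, 4)
```

The optics do the rest: the camera's lens array sorts incoming rays by direction onto pixels, and the display's array sends each pixel's light back out along the corresponding direction.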

This 3D content can also be computed virtually, or mixed in for augmented-reality displays, but then you'll need a beefy PC to calculate the simulation.

This technology would allow "holo-walls", which would virtually connect two rooms over the internet, the other "room" being recorded and displayed simultaneously for true 3D video-conferencing ... or simply a synced 3D camera/display pair for holographic windows, showing nice 3D sceneries from all over the world, or virtual/artificial 3D content.

In short -- the company was not interested, as they "have first to sell their actual (newly developed) displays before starting a new development" ... they told me this technology sounds too much like "Star Trek holodecks", more SciFi than real-world stuff :-(

    Viktor
