Sprites continued to be the weapon of choice for graphics for quite some time. The NES and SNES were the kings, and it was all about 2D. Games got higher resolutions and more colors. Arcade games even went as far as filming green-screened actors and digitizing them into sprites for even more realism (see Pit Fighter from 1990, and the later Mortal Kombat series).
Years later, after graduating with a degree in architecture and working in the post production business, I got my first job making video games. Our target platform was the Sony PlayStation, and we were making an RPG. Screen resolution was 320x240 for full-screen gameplay, and a letterboxed 320x160 for cinematic sequences. We used Silicon Graphics workstations and software costing many times our annual salaries to put everything together.
The way we got around the processor limitations was to use prerendered backgrounds with real-time characters rendered on top. Polygon counts were in the low hundreds, and models mostly used vertex colors. For the battle sequences we transitioned to full 3D environments, with simple textured ground planes and limited use of special effects. It sounds primitive now, but given the lower resolution you could cheat here and there to make things look (or at least trick the brain into thinking things looked) more detailed than they actually were. The game used 3D characters over 2D prerendered backgrounds for exploration, full 3D for combat, and fully prerendered 3D for cinematic sequences.
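The core of the prerendered-background trick is a per-pixel depth test: the static background ships with a precomputed depth map, so a real-time character can correctly pass behind foreground scenery. Here is a minimal sketch of that compositing step, using plain nested lists for pixels and depths; the function and argument names are illustrative, not from any actual engine.

```python
def composite(background, bg_depth, char_color, char_depth):
    """Composite a rendered character over a prerendered background.

    Each argument is a rows x cols grid. char_color uses None where the
    character does not cover a pixel. Smaller depth values are closer.
    """
    rows, cols = len(background), len(background[0])
    out = [row[:] for row in background]  # start from the background plate
    for y in range(rows):
        for x in range(cols):
            color = char_color[y][x]
            # Draw the character pixel only where it is in front of the scenery.
            if color is not None and char_depth[y][x] < bg_depth[y][x]:
                out[y][x] = color
    return out
```

On real hardware this was done with the depth buffer rather than a software loop, but the comparison per pixel is the same idea.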
The handheld gaming and mobile phone markets have gone through similar growth in processing power and display size. My first phone game used prerendered sprites running on J2ME. Following a path similar to consoles, things eventually moved to 3D as power increased. Today my phone is capable of running a full VR experience in a completely untethered headset. It can push 48 times the pixels my first PlayStation had, at twice the frame rate.
A related topic is the use of polygon reduction tools. Early tools could reduce complex meshes to performant levels, but the results still needed considerable cleanup. Modern tools have fortunately gotten much better at this, preserving UV coordinates and the overall shape of objects, so if art resources are scarce, this can be a viable option. Another, more recent option is voxel-based (or other non-polygonal) modeling tools, where the creation software itself handles the reduction and export. This gives you the flexibility of a high-resolution mesh from which to generate maps, as well as meshes at variable levels of detail depending on the export platform.
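To make the idea concrete, here is a minimal sketch of one of the simplest polygon reduction schemes, vertex clustering: vertices are snapped to a uniform grid, merged per cell, and triangles that collapse are dropped. Production tools use far better methods (quadric error metrics, UV-aware collapses), so treat this purely as an illustration; all names are my own.

```python
def decimate(vertices, triangles, cell_size):
    """Reduce a triangle mesh by clustering vertices on a uniform grid.

    vertices: list of (x, y, z) tuples
    triangles: list of (i, j, k) vertex index tuples
    Returns (new_vertices, new_triangles).
    """
    cluster_of = {}    # grid cell -> new vertex index
    new_vertices = []
    remap = []         # old vertex index -> new vertex index
    for x, y, z in vertices:
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        if cell not in cluster_of:
            cluster_of[cell] = len(new_vertices)
            # Use the first vertex in the cell as its representative.
            new_vertices.append((x, y, z))
        remap.append(cluster_of[cell])
    new_triangles = []
    seen = set()
    for i, j, k in triangles:
        a, b, c = remap[i], remap[j], remap[k]
        if a == b or b == c or a == c:
            continue  # triangle collapsed to an edge or a point
        key = tuple(sorted((a, b, c)))
        if key not in seen:  # drop duplicate faces created by merging
            seen.add(key)
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

A coarser grid (larger `cell_size`) merges more vertices and discards more triangles, which is the same quality-for-performance dial the article describes, just with none of the shape or UV preservation that modern tools provide.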
There is no magic bullet when it comes to creating great art that also performs well on device. It is easy to get excited about new capabilities and start throwing in everything we can find, without regard to the cumulative (or sometimes compounding) effects they may have. Ultimately it comes down to a combination of things: