Monday, August 2, 2010

Evolution of 3D games

3D games never fail to surprise me. The precision with which they reproduce real life is astounding. From human figures to cars, from birds to the clouds in the sky, everything looks almost as convincing as the real thing. From the textures to the themes of the games, everything has shaped a bright future for 3D and secured its popularity for a long time to come.

3D graphics did not happen in a day. A series of techniques built on one another: textures were formed, lights came to life and shadows added depth. Let’s have a look at a few of the major techniques without which 3D would not have happened.

Techniques leading to 3D graphics:
  • 2D Vector Graphics
“Vector: A vector is a point in space defined as the distance from the origin along each axis.”

Every geometrical shape is represented by a group of points… ring any bells? Anyone who has studied trigonometry knows that each of these points has an X value, its horizontal location, and a Y value, its vertical location. These are also known as 2D vectors, and they are connected to create shapes. The earliest videogames were drawn this way.
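To make that concrete, here’s a tiny Python sketch (the names are illustrative, not from any particular engine) of a shape stored as a list of 2D vectors and “drawn” by connecting consecutive points with lines:

```python
# A shape is just a list of (x, y) vectors; drawing it means connecting
# consecutive points with line segments.
triangle = [(0.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]

def edges(points):
    """Yield the line segments connecting a closed loop of 2D points."""
    for i in range(len(points)):
        yield points[i], points[(i + 1) % len(points)]

for start, end in edges(triangle):
    print(f"draw line from {start} to {end}")
```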
  • MIP Mapping

Simple texture mapping with high-resolution textures uses both memory and processing power inefficiently, and tends to exhibit shimmering artefacts (see ‘complex filtering’). MIP mapping solves both of these problems by storing a series of scaled-down versions of each texture map, with each successive map being half the size of its parent. When a texture is applied to a polygon, the rasteriser computes how far the pixel is from the camera. This determines which MIP map to use, though several MIP levels are typically sampled and interpolated between to get smooth transitions. MIP mapping predates the commercial use of GPUs, but the most notable examples occur after the GPU’s introduction, such as Rage Software’s shooter, Incoming. Released in 1998, its fast, smooth 3D was a showcase for GPU-enabled effects.
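As a rough illustration, here’s a small Python sketch of the idea, assuming a texture stored as a chain of pre-scaled levels. Real rasterisers derive the level from how quickly the texture coordinates change per screen pixel; on-screen minification simply stands in for that here:

```python
import math

def build_mip_chain(base_size):
    """Sizes of each MIP level: every level is half the size of its parent."""
    sizes = [base_size]
    while sizes[-1] > 1:
        sizes.append(max(1, sizes[-1] // 2))
    return sizes

def select_mip_level(texels_per_pixel):
    """Pick the MIP level whose texels roughly match one screen pixel."""
    return max(0.0, math.log2(max(texels_per_pixel, 1e-6)))

levels = build_mip_chain(256)       # [256, 128, 64, 32, 16, 8, 4, 2, 1]
level = select_mip_level(4.0)       # texture minified 4x on screen -> level 2.0
lo, hi = int(level), min(int(level) + 1, len(levels) - 1)
blend = level - lo                  # interpolate between levels lo and hi
print(levels, lo, hi, blend)
```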

  • Texture Mapping

One of the main challenges in rendering 3D graphics is creating surface detail, because representing every little element as coloured, shaded polygons is computationally expensive. Texture mapping is therefore employed to simulate surface detail, a technique that’s fundamentally important to 3D graphics. In its basic form, texture mapping is pasting an image on to a polygon. The process assigns the vertices in a polygon to specific pixels in a 2D image called a texture map. When the polygon is rasterised, the 2D texture coordinates are interpolated to find out which pixel in the texture map (the ‘texel’) matches each screen pixel to map the texture onto the surface of the polygon. Developers had attempted to use texture mapping since the birth of filled 3D polygons, but it wasn’t until Descent was released in 1995 that the process showed its full potential, with a complete 3D environment of walls, floors and in-game objects rendered as fully textured polygons.
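A simplified Python sketch of that lookup is shown below; the checkerboard texture and UV values are made up for illustration, and perspective correction is left out to keep the idea clear:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b."""
    return a + (b - a) * t

def sample_texture(texture, u, v):
    """Map interpolated (u, v) in [0, 1] to a texel in a 2D list of colours."""
    height, width = len(texture), len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A tiny 2x2 checkerboard texture and the UVs at the ends of one scanline span.
checker = [["black", "white"],
           ["white", "black"]]
uv_left, uv_right = (0.0, 0.0), (1.0, 0.0)

for px in range(4):                          # four pixels across the span
    t = px / 3
    u = lerp(uv_left[0], uv_right[0], t)
    v = lerp(uv_left[1], uv_right[1], t)
    print(px, sample_texture(checker, u, v))
```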

  • Scaled Sprites

In order to match the visual detail and colour featured in 2D games in early 3D worlds, developers looked to sprites. In 2D games these small images would normally be rasterised at full scale on to the screen, but for 3D games they needed to be scaled depending on their distance from the viewpoint. In such systems the sprites representing gameworld objects are positioned at 3D vectors – as the image is rasterised, the pixels from the source image are copied multiple times to adjacent pixels or skipped in order to scale the sprite correctly, a simple form of nearest-neighbour scaling. It’s a technique made famous by Sega with games like 1985’s Space Harrier, which ran on its now legendary System 16 arcade hardware. Ironically, modern GPUs actually emulate sprites using textured polygons aligned to square up to the camera’s view angle, exactly what Sega was trying to simulate, such as the fronds of leaves on Oblivion’s trees, the ‘sprites’ in Castle Crashers or the smoke from skidding wheels in racers like PGR.
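Here’s a small Python sketch of that scaling step, with characters standing in for pixel colours; the scale_sprite helper is hypothetical, not Sega’s actual routine:

```python
def scale_sprite(sprite, scale):
    """Scale a 2D sprite by repeating or skipping source pixels (nearest-neighbour)."""
    src_h, src_w = len(sprite), len(sprite[0])
    dst_h, dst_w = max(1, int(src_h * scale)), max(1, int(src_w * scale))
    return [[sprite[int(y / scale)][int(x / scale)] for x in range(dst_w)]
            for y in range(dst_h)]

sprite = ["#.",
          ".#"]
for row in scale_sprite(sprite, 2.0):    # close to the camera: pixels repeated
    print("".join(row))
for row in scale_sprite(sprite, 0.5):    # far from the camera: pixels skipped
    print("".join(row))
```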

  • Filled Polygons

While many early wireframe games used vector beam displays, the raster-based CRT was by far the more common display until LCD and plasma technology came along. A raster display builds its image from rows of thousands or millions of coloured pixels. Rendering, or rasterising, polygons on such displays is a process in which the image is ‘painted’, pixel by pixel, as the display scans each pixel on every line of the screen from left to right, top to bottom. By computing where the edges of each polygon cross each scan line, the pixels between them can be painted with colour so the polygon appears solid. This approach is still used at the core of GPU rasterisers and builds on the family of Bresenham line-drawing algorithms, which determine how pixels can best represent straight lines. Though its atypical game design prevented it from being a hit, Atari’s I, Robot, released in 1983, is now considered the forefather of the modern 3D videogame, with its use of filled 3D polygons and camera controls.
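The Python sketch below shows that inner loop in miniature, on a small character raster rather than a real framebuffer: for each row, find where the polygon’s edges cross it and paint the pixels in between. It’s a bare-bones illustration, not how a real GPU is structured:

```python
def fill_polygon(points, width, height):
    """Rasterise a simple polygon given as (x, y) vertices on to a text raster."""
    raster = [["." for _ in range(width)] for _ in range(height)]
    for y in range(height):
        crossings = []
        for i in range(len(points)):
            (x0, y0), (x1, y1) = points[i], points[(i + 1) % len(points)]
            # Does this edge cross the horizontal line through the pixel centres?
            if (y0 <= y + 0.5) != (y1 <= y + 0.5):
                t = (y + 0.5 - y0) / (y1 - y0)
                crossings.append(x0 + t * (x1 - x0))
        crossings.sort()
        # Paint between each pair of crossings so the polygon appears solid.
        for left, right in zip(crossings[0::2], crossings[1::2]):
            for x in range(int(left), int(right)):
                raster[y][x] = "#"
    return raster

for row in fill_polygon([(1, 1), (8, 2), (4, 7)], 10, 8):
    print("".join(row))
```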

  • Gouraud Shading

Though polygons are often employed to represent smoothly contoured shapes, they’re let down by the fact that polygons are inherently angular. Gouraud shading is a technique that blends gradients of colour across their facets, visually softening them. In the process, each vertex that defines a polygon is given its own colour, with the differences between adjacent vertices interpolated during rasterisation, so the shading changes smoothly across the polygon. It’s a method often used alongside basic realtime lighting, which is calculated by computing the angle between the vector of a simple directional light and a polygon’s ‘surface normal’ – that is, the direction its flat face is pointing. The angle determines how much light falls on the polygon, which is shaded accordingly. If the surface normals of adjacent polygons are averaged, however, you can, in effect, turn them into ‘vertex normals’, and therefore use Gouraud shading to make lit surfaces look smooth, a process used to great effect by Star Wars: TIE Fighter, released in 1994.
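A compact Python sketch of both halves of the idea, with vectors as plain (x, y, z) tuples and made-up example normals, might look like this:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalise(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def vertex_intensity(normal, to_light):
    """Lambertian term: how directly the vertex normal faces the light."""
    return max(0.0, dot(normalise(normal), normalise(to_light)))

to_light = (0.0, 0.0, -1.0)               # direction from the surface to the light
vertex_normals = [(0.0, 0.0, -1.0),       # averaged from the adjacent faces
                  (0.7, 0.0, -0.7),
                  (0.0, 0.7, -0.7)]
intensities = [vertex_intensity(n, to_light) for n in vertex_normals]

# During rasterisation each pixel blends the vertex values with its
# barycentric weights, so the shading varies smoothly across the triangle.
def shade_pixel(weights, intensities):
    return sum(w * i for w, i in zip(weights, intensities))

print(intensities, shade_pixel((0.2, 0.3, 0.5), intensities))
```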

  • Wireframe 3D

If 2D vector graphics are represented with two numbers for each point on a plane, 3D requires three, with the extra number representing depth (the Z location). The mathematics needed to manipulate 3D points and project them on to a 2D plane has been understood for centuries so, in some ways, 3D games were inevitable once computer hardware became powerful enough to manipulate and display enough 3D points to represent a game world. Released in 1980, Atari’s Battlezone was the first truly 3D videogame. It used a similar vector beam display to that of Spacewar! and, indeed, Atari’s own Asteroids, to show tanks and the battlefield as simple vector outlines. After all, the hardware, even though it featured a custom maths co-processor, could only handle a handful of points each frame.
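Here’s a minimal Python sketch of what such wireframe rendering boils down to (the focal length and the cube are arbitrary example values): project each 3D point on to the screen plane with a perspective divide, then draw lines between the projected points.

```python
def project(point, focal_length=2.0):
    """Project a 3D point on to the 2D screen plane (larger z = further away)."""
    x, y, z = point
    scale = focal_length / (focal_length + z)
    return (x * scale, y * scale)

# A wireframe cube: eight corners and the twelve edges that connect them.
corners = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (1, 3)]
edges = [(a, b) for a in range(8) for b in range(a + 1, 8)
         if sum(u != v for u, v in zip(corners[a], corners[b])) == 1]

for a, b in edges:
    print(f"line {project(corners[a])} -> {project(corners[b])}")
```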


(This article is inspired by http://bit.ly/a3lkoL)
