Guest Blogger

Brian Jens

Digital artists are perfecting their craft to the point that the line between physical and virtual is becoming difficult to delineate. Theatrical and gaming revenue are fueling the demand for even more ambitious projects. This blog post shows some of the approaches that these artists use to create their masterpieces.

Almost all computer games and movies use computer-generated imagery (“CGI”), and CGI artists are in demand as never before. Here is a basic overview of the tools – renderers, modelers, animators – and of the production stages, often referred to as the animation pipeline.

Features of the Movie Creation Process

Creating computer graphics for movies is a challenging task that requires the work of hundreds of professionals, from writers and producers to the 3D artists who model, texture, animate, and render the virtual characters.

For movies, the main factors in creating CGI are:

  • The deadline
  • The level of complexity needed to sell the shot
  • The budget

Features of the Video Game Creation Process

Unlike films, games are an interactive exchange between a virtual world and a human player. Therefore, the main factors in the creation of a video game are:

  • Gameplay & Interactivity
  • Technical capabilities of the target platform
  • Creating an entire world for the player to visually explore


Six Main Stages That Every 3D Model Undergoes

  1. Modeling – the creation of three-dimensional objects. Modeling for movies and for games is quite similar, but there are some differences, namely:
    1. Modeling method. Film models are built with both NURBS modeling and polygonal modeling, while computer games use only the latter, since polygonal models are much easier to render.
    2. The number of polygons in the model. The more polygons an object has, the greater its level of detail and quality. Models are classified as high poly or low poly, and game modelers face the difficult task of conveying a sense of high quality within a limited polygon budget. Autodesk Maya is often used in big-budget projects; Autodesk 3ds Max and Cinema 4D are popular commercial options, while Blender is widely used and developed as open-source software.
  2. Texturing – applying textures and materials to the 3D model. Texturing is not just picking colors and materials for the model – it is a real art for texture designers. Before the designers start their work, the modelers create a UV map (an "unwrap") – a two-dimensional layout of the model's surface. The UV map ensures the texture wraps correctly onto the model. The next steps are painting the textures and attaching them to the model. Texture maps can be created in the same programs as the models.
  3. Rigging – creating a virtual "skeleton" for subsequent character animation. Setup artists build bones and controllers that define the rules governing the model's movement. In movies, every aspect of anatomy is thought through so that models can express emotion; in games, rigging mostly drives movement and character actions.
  4. Animating the 3D character. The animator's main task is to make the model's motion as realistic as possible. This is especially important in movies where three-dimensional characters share the frame with real actors. The simplest method is keyframe animation: the animator sets the character's position in the initial and final frames of a motion, and the program calculates its position in the intermediate frames. This method is the easiest, but creating complex movements with it is laborious and demands great skill from the animator. There is also procedural animation, in which a special program drives the character's motion.
  5. Rendering – the visualization of the finished scenes. There are two types of rendering: real-time and pre-rendering.
    1. Real-time rendering is commonly used in computer games. In real-time rendering, color, shadows, and lighting come from precomputed maps and textures while objects are projected in perspective onto the screen.
    2. Pre-rendering is used in movies, where the main factor is image quality – photorealism with physically correct interplay of light and shadow. Rendering a single element can take up to 100 hours. There are three main rendering methods: scanline rasterization, ray tracing, and radiosity. Very often, ray tracing and radiosity are combined to achieve impressive photorealistic results.
  6. Compositing – combining the individual elements into the final scene (for example, integrating 3D scenes into the footage, color correction, and adding effects). It is the final stage of post-production. A compositor assembles all the pieces into a coherent whole – three-dimensional characters and other elements layered over the footage – eliminates defects, and works on a variety of other effects. In short, the compositor creates a realistic scene and is responsible for the final image.
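To make the rigging idea in step 3 more concrete, here is a minimal Python sketch (not any particular package's API) of forward kinematics: each bone rotates relative to its parent, and a joint's world position is found by accumulating the parent rotations down the chain. Real rigs add constraints, controllers, and skin weighting on top of this.

```python
import math

def chain_positions(bone_lengths, angles_deg):
    """Return the world (x, y) of each joint in a 2D bone chain.

    Each angle is relative to the parent bone, so a child bone
    inherits every rotation above it in the hierarchy.
    """
    x = y = 0.0
    angle = 0.0
    positions = [(x, y)]
    for length, a in zip(bone_lengths, angles_deg):
        angle += math.radians(a)        # accumulate parent rotations
        x += length * math.cos(angle)   # step along the rotated bone
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions

# A simple "arm": upper arm 2 units, forearm 1.5 units, elbow bent 90 degrees.
joints = chain_positions([2.0, 1.5], [0.0, 90.0])
print(joints)  # [(0.0, 0.0), (2.0, 0.0), (2.0, 1.5)]
```

Bending the elbow angle alone moves only the forearm joint, which is exactly why animators key bone rotations rather than raw vertex positions.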
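The keyframe method from step 4 can also be sketched in a few lines of Python. This is a deliberately simplified illustration: the program stores positions only at the keyed frames and linearly interpolates everything in between, whereas real animation software interpolates spline curves per channel.

```python
def lerp(a, b, t):
    """Blend between values a and b; t runs from 0.0 to 1.0."""
    return a + (b - a) * t

def sample(keyframes, frame):
    """Return the interpolated (x, y, z) position at a given frame.

    keyframes: list of (frame_number, (x, y, z)) sorted by frame number.
    """
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, p0), (f1, p1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)  # fraction of the way between keys
            return tuple(lerp(a, b, t) for a, b in zip(p0, p1))

# Two keyframes: at frame 0 the character is at the origin,
# at frame 24 it has moved 10 units along x.
keys = [(0, (0.0, 0.0, 0.0)), (24, (10.0, 0.0, 0.0))]
print(sample(keys, 12))  # halfway through: (5.0, 0.0, 0.0)
```

The animator only authors the two keyed positions; the in-between frames are computed, which is what makes keyframing so much faster than posing every frame by hand.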
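The ray tracing mentioned in step 5 boils down to a geometric test repeated millions of times per frame: does a ray of light hit an object, and where? A minimal sketch of the classic ray–sphere intersection (solving a quadratic for the hit distance) looks like this; production renderers add shading, reflections, and full light transport on top.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    assuming direction is a unit vector.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    # Quadratic coefficients (a == 1 for a unit direction vector)
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                     # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0    # nearer of the two roots
    return t if t > 0 else None

# A camera at the origin looking down +z at a unit sphere 5 units away.
hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(hit)  # 4.0 — the ray enters the sphere one radius before its center
```

Tracing one such ray per pixel, plus secondary rays for shadows and reflections, is why pre-rendered frames can take so long: the per-ray test is cheap, but the ray count is enormous.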
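Finally, the compositing in step 6 rests on one basic building block: the "over" operator, which layers a partially transparent foreground pixel (a CG element) on top of a background pixel (the filmed plate). Here is a minimal per-pixel sketch using straight (non-premultiplied) colors; compositing packages apply this across whole image sequences.

```python
def over(fg, bg):
    """Composite a foreground (r, g, b, a) pixel over a background one.

    Channels are straight (non-premultiplied), in the 0.0-1.0 range;
    a is the alpha (opacity) channel.
    """
    fr, fg_, fb, fa = fg
    br, bg_, bb, ba = bg
    out_a = fa + ba * (1.0 - fa)        # combined coverage
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)     # fully transparent result

    def blend(f, b):
        # Foreground weighted by its alpha; background by what shows through.
        return (f * fa + b * ba * (1.0 - fa)) / out_a

    return (blend(fr, br), blend(fg_, bg_), blend(fb, bb), out_a)

# A half-transparent red CG element over an opaque blue plate.
result = over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0))
print(result)  # (0.5, 0.0, 0.5, 1.0)
```

Stacking many layers is just repeated application of this operator, bottom to top, which is why the compositor can rearrange, color-correct, or replace individual elements without re-rendering the whole shot.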

Of course, becoming a good 3D artist takes a long time – it's all about years of practice. Whichever path you choose, try not to be discouraged if your first works are far from masterpieces. Remember that what you see on the screen is the result of years of painstaking work by hundreds of professionals. I hope this post was useful and that you've learned something new about 3D graphics in movies and video games.


Brian Jens is a researcher and a blogger at DesignContest. He loves doing in-depth investigations of the design market and related areas. Feel free to ask Brian to write something for you: he will likely agree to do the research if the idea seems fresh and interesting. Brian is someone who's constantly looking for new things to bring to his life.