February 25, 2017 at 7:05 pm #4096
Animation Pagoda Staff (Moderator)
Parallax scrolling creates the illusion that background objects are far away. Layers of the background move at different speeds relative to the faster movement of the player in the foreground. It is most commonly seen in 2D side-scrolling games.
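The layer-speed idea can be sketched in a few lines. This is a minimal illustration, not any engine's API; the layer factors are made-up example values.

```python
def parallax_offsets(camera_x, layer_factors):
    """Return the horizontal draw offset for each background layer.

    A factor of 1.0 moves with the camera (foreground);
    a factor near 0.0 barely moves (distant background).
    """
    return [camera_x * f for f in layer_factors]

# Camera has moved 100 px: far mountains (0.2), hills (0.5), foreground (1.0)
offsets = parallax_offsets(100, [0.2, 0.5, 1.0])
print(offsets)  # [20.0, 50.0, 100.0]
```

Because the distant layers scroll a fraction of the camera's distance, they appear to sit much farther back than they really are.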
Procedural generation is used to generate random maps, levels, or assets. It can theoretically produce an infinite number of combinations that will never be exactly the same, and it doesn’t require a lot of memory space. Procedural generation works best for grid-based terrain and maps of limited size. Random enemy encounters are also an example of procedural generation. Most studios use procedural generation sparingly since custom level design allows greater control and provides a better player experience.
Fractals, Perlin noise, the midpoint displacement algorithm, and recursion are programming concepts that relate to procedural generation. Imperfect factories add slight variations when generating large quantities of assets like trees, foliage, rocks, or NPCs, ensuring no two are exactly alike.
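Midpoint displacement is one of the simplest of these techniques to demonstrate. Here is a minimal 1D heightmap sketch; the roughness value and seed are arbitrary assumptions for the example.

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, seed=42):
    """Recursively subdivide a line into 2**depth + 1 height samples."""
    rng = random.Random(seed)  # fixed seed: the same terrain every run
    heights = [left, right]
    spread = 1.0
    for _ in range(depth):
        new = []
        for a, b in zip(heights, heights[1:]):
            # Displace each midpoint by a random amount, then shrink
            # the allowed displacement so detail gets finer each pass.
            mid = (a + b) / 2 + rng.uniform(-spread, spread)
            new.extend([a, mid])
        new.append(heights[-1])
        heights = new
        spread *= roughness
    return heights

terrain = midpoint_displacement(0.0, 0.0, depth=4)
print(len(terrain))  # 17 samples: 2**4 + 1
```

The same seed always reproduces the same terrain, which is why procedural worlds can be "infinite" yet shareable with a single number.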
Modular elements are time-saving assets that get copy and pasted to create larger structures without having to model every detail from scratch. Corridors, walls, caves, floor tiles, and hallways are usually modular, along with other architectural features. The trick is to create interchangeable components that have the potential to be used as many times as possible. Slight variations break up the tiling effect. Substance textures or procedural generation can be implemented to produce small alterations.
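A small sketch of the "slight variation" idea: each placed instance of a modular asset gets a random snap rotation and a tiny scale and tint jitter. The field names here are assumptions for illustration, not any particular engine's instancing API.

```python
import random

def place_modular_tile(base_asset, position, rng):
    """Instance a modular asset with slight variation to break up tiling."""
    return {
        "asset": base_asset,
        "position": position,
        "rotation_y": rng.choice([0, 90, 180, 270]),  # snap rotations keep seams aligned
        "scale": 1.0 + rng.uniform(-0.03, 0.03),      # +/- 3% scale jitter
        "tint": rng.uniform(0.9, 1.0),                # subtle brightness variation
    }

rng = random.Random(7)
corridor = [place_modular_tile("wall_section", (x, 0, 0), rng) for x in range(10)]
```

Snapping rotations to 90-degree steps is deliberate: interchangeable pieces must still line up at their seams, so the variation lives in rotation choice, scale, and tint rather than free transforms.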
Voxels are the 3D cube equivalent of pixels. Voxels are easy to generate, making them a potentially useful tool for rapidly building large detailed worlds that don’t take forever to load. Voxels can also be used to create blocky low-poly props or characters.
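A voxel world can be represented as nothing more than a set of occupied integer coordinates, and a face only needs drawing when the neighboring cell is empty. A toy sketch:

```python
# The six axis-aligned neighbor directions of a voxel cube.
NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def exposed_faces(voxels):
    """Count faces bordering empty space -- the only faces worth rendering."""
    count = 0
    for (x, y, z) in voxels:
        for dx, dy, dz in NEIGHBOURS:
            if (x + dx, y + dy, z + dz) not in voxels:
                count += 1
    return count

# A solid 2x2x2 block of voxels.
cube = {(x, y, z) for x in range(2) for y in range(2) for z in range(2)}
print(exposed_faces(cube))  # 24: each of the 8 voxels exposes 3 outer faces
```

Skipping interior faces like this is why large blocky worlds stay renderable: most of a solid voxel volume never reaches the GPU.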
Point of View
Most 2D games have a fixed camera looking directly from the side. Point of view plays a larger role in 3D and isometric games. The smart choice for an inexperienced game designer is to implement a first-person perspective: it avoids most of the character animation work, and it can make the player feel more engrossed in the game world.
Third-person view lets the player see over the head of their avatar model and offers better peripheral visibility. The camera doesn't necessarily have to be fixed; a lot of strategy and level-building games offer 360 degrees of camera movement and zoom. In action-based games, though, a free camera can cause undesired control issues, so most games resort to cinematic cutscenes to focus on any important actions where the player needs to see character animation.
Games have to perform in real time, which means geometry and in-game assets must render at a frame rate of around 30-60 FPS. Highly detailed models with thousands of vertices would cause performance to lag, so game designers cheat a little by creating 3D models with less geometry that appear to have the same amount of detail as a high-quality model. This is accomplished by reducing the overall faces and vertices to the simplest workable 3D form and then using normal or displacement map textures to reproduce the fine details and surfaces.
A high poly model is generally made first, then copies are made reducing the geometry to a mid-poly and low-poly model. This provides the option to run a game on different quality settings and on new and older game systems without affecting performance.
3D game models must adhere to a rule of being modeled only in quadrilaterals or triangles; faces with more sides (n-gons) are not allowed. Most game engines can automatically convert quads to triangles on import, as long as the mesh doesn't contain any stray or overlapping vertices.
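The quad-to-triangle conversion is mechanical enough to sketch directly. This assumes convex quads with consistent winding order; it is an illustration of the idea, not any engine's importer.

```python
def triangulate(faces):
    """Split quads into two triangles; pass triangles through unchanged."""
    tris = []
    for face in faces:
        if len(face) == 3:
            tris.append(tuple(face))
        elif len(face) == 4:
            # A convex quad (v0, v1, v2, v3) splits along the v0-v2 diagonal.
            v0, v1, v2, v3 = face
            tris.append((v0, v1, v2))
            tris.append((v0, v2, v3))
        else:
            raise ValueError("n-gons are not allowed in a game mesh")
    return tris

print(triangulate([(0, 1, 2, 3)]))  # [(0, 1, 2), (0, 2, 3)]
```

Note both triangles keep the quad's winding direction, which matters later for backface culling.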
Wireframes, Culling, and Clipping
A computer reads a stored video game model as pure data, in this case a string of coordinate numbers corresponding to a series of points in virtual space. To store the model information, the game engine translates the wireframe vertices into a triangle mesh. That polygon mesh can then be controlled by programming functions and preset animations.
Textures are not directly affected by the gameplay programming; it is far less taxing on processors and memory to track only the points and source files. The texture files remain attached to the wireframe points, however, and they reappear in the correct areas once the wireframes are rasterized into pixels on screen. This method is the most efficient way to deliver high-quality graphics without long render times.
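The points-plus-attachments idea can be shown with a bare-bones mesh structure: per-vertex positions and UV texture coordinates, plus a triangle index list. The dictionary layout is an assumption for illustration; real engines use packed binary buffers.

```python
# A tiny mesh as an engine might conceptually store it.
mesh = {
    "positions": [(0, 0, 0), (1, 0, 0), (0, 1, 0)],  # xyz per vertex
    "uvs":       [(0, 0),    (1, 0),    (0, 1)],     # texture coords per vertex
    "triangles": [(0, 1, 2)],                        # indices into the arrays above
}

def translate(mesh, dx, dy, dz):
    """Gameplay code moves the vertices; the UVs (and texture) ride along untouched."""
    mesh["positions"] = [(x + dx, y + dy, z + dz) for x, y, z in mesh["positions"]]

translate(mesh, 5, 0, 0)
print(mesh["positions"][0])  # (5, 0, 0)
```

Only the position array changed; the UVs never move, which is exactly why the texture "resurfaces" in the right place after rasterization.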
Hidden surface determination and culling decide which polygon faces are visible at any given time. Faces that are not visible are not rendered, saving processing power.
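Backface culling, the most common form of this, reduces to a single dot product: a triangle whose normal points away from the camera can be skipped. A minimal sketch, assuming counter-clockwise winding for front faces:

```python
def subtract(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_front_facing(v0, v1, v2, view_dir):
    """A triangle is drawn only if its normal opposes the view direction."""
    normal = cross(subtract(v1, v0), subtract(v2, v0))
    return dot(normal, view_dir) < 0

# Camera looks down -z; a counter-clockwise triangle in the xy plane faces it.
print(is_front_facing((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, -1)))  # True
```

Reversing the vertex order flips the normal, so the same triangle seen from behind is culled for free.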
Importing 3D Models into an Engine
Models need to be scaled to match the world before they are imported. It is also recommended that all assets and scenes be oriented along the same xyz axes. If you are importing something that will have complex animations, such as a character, it is better to keyframe and bake the animation in a program like Maya or Blender. Blendshapes seem to cause problems in Unity.
Draw distance determines how far from the player camera the world will render. Anything beyond the draw distance does not appear until the player gets closer, which saves processing power, especially in large open worlds. Fog is often used to make distant landmasses fade into the horizon. Another trick is an adaptive level-of-detail system where geometry reverts to a low-poly state when far away; as the player gets closer to an object, more detailed geometry and textures swap in.
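Both tricks combine into one per-object decision each frame: cull, or pick a detail tier by distance. The thresholds below are invented example values, not any engine's defaults.

```python
# Distance cutoffs (world units) paired with which model variant to draw.
LOD_LEVELS = [
    (50.0,  "high_poly"),   # within 50 units: full detail
    (150.0, "mid_poly"),
    (400.0, "low_poly"),
]
DRAW_DISTANCE = 400.0

def pick_lod(distance):
    """Return which model variant to draw, or None if beyond the draw distance."""
    if distance > DRAW_DISTANCE:
        return None  # culled: not rendered at all
    for cutoff, model in LOD_LEVELS:
        if distance <= cutoff:
            return model

print(pick_lod(30.0))   # high_poly
print(pick_lod(500.0))  # None
```

This is also why the same asset is exported at high-, mid-, and low-poly versions, as described above: the engine needs something cheap to show at every distance band.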
Physics engines simulate Newtonian physics using discrete math formulas. Collision detection, particle systems, gravity, rigid-body simulations, and soft-body simulations all fall under this category. 3D games require more advanced physics formulas than 2D games.
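"Discrete" here means the engine advances physics in fixed time steps rather than solving continuous equations. A minimal sketch of one common scheme, semi-implicit Euler integration, applied to gravity at a 60 FPS fixed timestep:

```python
GRAVITY = -9.81   # m/s^2
DT = 1.0 / 60.0   # fixed timestep matching a 60 FPS update loop

def step(position, velocity):
    """Advance one frame: update velocity first, then position (semi-implicit Euler)."""
    velocity += GRAVITY * DT
    position += velocity * DT
    return position, velocity

pos, vel = 10.0, 0.0      # object dropped from 10 m, starting at rest
for _ in range(60):       # simulate one second of free fall
    pos, vel = step(pos, vel)
print(round(vel, 2))      # -9.81 m/s after one second of falling
```

Updating velocity before position keeps the integration more stable than naive Euler, which is one reason this variant is popular in game physics loops.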
Slow-Motion and Bullet Time
A popular embellishment to gameplay is to create special slow-mo animations or cutscenes for enemy takedowns. This adds complexity for game designers, but it looks pretty cool.
Slow-motion shots are generally done by reducing the global time scale of the world relative to the player. The player can still move normally, and the speed can easily be switched back to normal without causing gameplay issues. Bullet time, as filmed in movies, uses multiple cameras taking shots from different points around the action so the shot can spin around frozen action; a game achieves the same effect by simply moving its virtual camera through 3D space while time is slowed.
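The time-scale trick amounts to handing the world a scaled delta time while the player keeps the real one. A minimal sketch, with a hypothetical `Clock` class (not any engine's API):

```python
class Clock:
    def __init__(self):
        self.time_scale = 1.0  # 1.0 = normal speed, 0.25 = slow motion

    def world_dt(self, real_dt):
        """Delta time for world objects: scaled during slow-mo."""
        return real_dt * self.time_scale

    def player_dt(self, real_dt):
        """Delta time for the player: unscaled, so they move at full speed."""
        return real_dt

clock = Clock()
frame = 1.0 / 60.0
clock.time_scale = 0.25          # enter bullet time
print(clock.world_dt(frame))     # world advances at quarter speed
clock.time_scale = 1.0           # back to normal, no gameplay side effects
```

Because nothing in the world's state changed, only how fast it advances, flipping the scale back to 1.0 is always safe.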
Making a game takes a lot of work, so the tendency is to want to use everything you made. The reality is that you should keep only the aspects that are absolutely necessary to make the game feel complete. Extra stuff is always cool, but sometimes having more just adds clutter and distracts from the core mechanics. Having more enemy types or a few extra powers probably isn't going to be the deciding factor in whether people like the game. There will be leftover ideas, scrapped mechanics, and story threads that never make it into the final game. Hopefully, if the game succeeds, there will be a chance to revisit those old ideas and implement them in a sequel.
If a game is successful, the game designer has a big decision to make figuring out what to do next. It is generally much easier to create a sequel reusing the old game engine, textures, and assets than to risk starting from scratch creating a new original game. However, some artists don’t like getting creatively limited to one series or franchise for the rest of their careers, and they may want to leave to do something else. Plan ahead, and have some backup ideas for other games just in case.