The Cartesian XY-plane

• The Cartesian xy-plane provides a mechanism for translating variables into a graphical
format.
• A Cartesian XY plane consists of axes and quadrants.
• Measurements to the right of the origin are positive and to the left are negative; measurements above the origin are positive and below it are negative.

Function Graphs
• Different types of functions create characteristic shapes that make the function easy to identify: linear functions are straight lines, quadratics are parabolas, cubics have an ‘s’ shape, and trigonometric functions have a wave-like trace.
• Such graphs are used in computer animation to control the movement of objects, lights
and the virtual camera.

 The figure shows an example where the horizontal axis marks the progress of time in animation frames, and the vertical axis records the corresponding brightness of a virtual light source.
 Such a graph helps the animator make changes to the function with the aid of interactive software tools and achieve the desired animation.
Geometric Shapes
• Computer graphics requires 2D shapes and 3D objects that have some sort of numerical
description.
• Shapes can include polygons, circles, arbitrary curves, mathematical functions, etc., and
objects can be faceted, smooth, bumpy, furry, gaseous, etc.
• The Cartesian plane also provides a way to represent 2D shapes numerically, which permits
them to be manipulated mathematically.
Polygonal Shapes
• A polygon is constructed from a sequence of vertices (points).
• A straight line is assumed to link each pair of neighbouring vertices; intermediate points on
the line are not explicitly stored.
• There is no fixed starting point for the chain of vertices, but software will often specify whether polygons use a clockwise or anti-clockwise vertex sequence.
Position Vectors
• Given any point P (x, y, z), a position vector p can be created by assuming that P is the
vector’s head and the origin is its tail.
• In other words, a position vector is a vector whose tail is at the origin, so the coordinates of its tail are (0, 0, 0).
• Because the tail coordinates are (0, 0, 0), the position vector’s components are simply x, y, z.
• Consequently, the position vector’s magnitude is ||p|| = √(x² + y² + z²).
• For example, the point P(4, 5, 6) creates a position vector p relative to the origin: p = [4 5 6] and ||p|| = √(4² + 5² + 6²) = √77 ≈ 8.775.
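As a quick check of the arithmetic, here is a minimal C# sketch (illustrative only, not part of the original notes) that computes the magnitude of a position vector:

    using System;

    class PositionVectorDemo
    {
        // Magnitude of a position vector with components (x, y, z),
        // i.e. the distance from the origin to the point P(x, y, z).
        static double Magnitude(double x, double y, double z)
        {
            return Math.Sqrt(x * x + y * y + z * z);
        }

        static void Main()
        {
            // P(4, 5, 6): sqrt(16 + 25 + 36) = sqrt(77) ≈ 8.775
            Console.WriteLine(Magnitude(4, 5, 6));
        }
    }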
Unit Vectors
• By definition, a unit vector has a magnitude of 1.
• A simple example is i, where i = [ 1 0 0 ] and ||i|| = 1.
• Unit vectors are extremely useful when we come to vector multiplication, because vector products involve the vectors’ magnitudes, and unit magnitudes greatly simplify the arithmetic.
• Furthermore, in computer graphics applications, vectors are used to specify the
orientation of surfaces, the direction of light sources and the virtual camera. Again, if
these vectors have a unit length, the computation time associated with vector operations
can be minimized.
Cartesian Vectors
• We can combine scalar multiplication of vectors, vector addition and unit vectors to permit the algebraic manipulation of vectors.
• To begin with, we define three Cartesian unit vectors i, j, k aligned with the x-, y- and z-axes respectively:
i = [ 1 0 0 ], j = [ 0 1 0 ], k = [ 0 0 1 ]
• Therefore, any vector aligned with the x-, y- or z-axis can be defined by a scalar multiple of the corresponding unit vector i, j or k.
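For instance, a general vector such as r = 2i + 3j + 4k has components [2 3 4]. A minimal C# sketch of that idea, using UnityEngine's Vector3 (an assumption made purely for illustration; 'right', 'up' and 'forward' stand in for i, j and k):

    using UnityEngine;

    public static class CartesianVectorDemo
    {
        public static Vector3 Build()
        {
            Vector3 i = Vector3.right;    // (1, 0, 0)
            Vector3 j = Vector3.up;       // (0, 1, 0)
            Vector3 k = Vector3.forward;  // (0, 0, 1)

            // r = 2i + 3j + 4k  ->  (2, 3, 4)
            return 2f * i + 3f * j + 4f * k;
        }
    }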
The Dot Product in Lighting Calculations
• Lambert’s law states that the intensity of illumination on a diffuse surface is proportional to
the cosine of the angle between the surface normal vector and the light source direction.

• In this figure, the light source is located at (20, 20, 40) and the illuminated point is (0, 10, 0). We are interested in calculating cos(β), which, when multiplied by the light source intensity, gives the incident light intensity on the surface.
• To begin with, we are given the normal vector n to the surface. In this case n is a unit vector, so its magnitude ||n|| = 1:
n = [ 0 1 0 ]
• The direction of the light source from the surface is defined by the vector s:
s = [ 20 − 0, 20 − 10, 40 − 0 ] = [ 20 10 40 ]
||s|| = √(20² + 10² + 40²) = 45.826
• Applying the dot product:
||n|| · ||s|| · cos(β) = 0 × 20 + 1 × 10 + 0 × 40 = 10
1 × 45.826 × cos(β) = 10
cos(β) = 10 / 45.826 = 0.218
• Therefore, the light intensity at the point (0, 10, 0) is 0.218 of the original light intensity at (20, 20, 40).
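A minimal C# sketch of this Lambertian calculation (not part of the notes; Vector3 and Vector3.Dot are UnityEngine types used here for convenience):

    using UnityEngine;

    public static class LambertDemo
    {
        // Returns cos(beta) between the surface normal and the direction to the
        // light, clamped to zero so back-lit points receive no light.
        public static float DiffuseFactor(Vector3 surfacePoint, Vector3 lightPos, Vector3 normal)
        {
            Vector3 s = (lightPos - surfacePoint).normalized; // unit vector towards the light
            Vector3 n = normal.normalized;                    // unit surface normal
            return Mathf.Max(0f, Vector3.Dot(n, s));          // Lambert's cosine term
        }
    }

    // Example: DiffuseFactor(new Vector3(0, 10, 0), new Vector3(20, 20, 40), Vector3.up)
    // returns 10 / 45.826 ≈ 0.218, matching the worked example above.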
The Dot Product in Back-Face Detection
• Back-face detection means determination of whether a face of an object is facing backward
and therefore that face is invisible.
• A standard way of identifying back-facing polygons relative to the virtual camera is to
compute the angle between the polygon’s surface normal and the line of sight between the
camera and the polygon.
• If this angle is less than 90◦ the polygon is visible.
• If it is equal to or greater than 90◦ the polygon is invisible.

• Let’s prove this concept algebraically. Let the camera be located at (0, 0, 0) and the polygon’s vertex at (10, 10, 40). The polygon’s normal vector is n = [ 5 5 −2 ]ᵀ:
||n|| = √(5² + 5² + (−2)²) = 7.348
• The camera vector c is:
c = [ 0 − 10, 0 − 10, 0 − 40 ] = [ −10 −10 −40 ]
||c|| = √((−10)² + (−10)² + (−40)²) = 42.426
• Therefore,
||n|| · ||c|| · cos(β) = 5 × (−10) + 5 × (−10) + (−2) × (−40) = −20
7.348 × 42.426 × cos(β) = −20
cos(β) = −20 / (7.348 × 42.426) = −0.064
β = cos⁻¹(−0.064) ≈ 93.7◦
which shows that the polygon is invisible.
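The same test is easy to express in code. A minimal C# sketch (illustrative; it reuses UnityEngine's Vector3):

    using UnityEngine;

    public static class BackFaceDemo
    {
        // A polygon is treated as visible when the angle between its normal and
        // the vector from the polygon towards the camera is less than 90 degrees,
        // i.e. when their dot product is positive.
        public static bool IsVisible(Vector3 polygonVertex, Vector3 cameraPos, Vector3 normal)
        {
            Vector3 c = cameraPos - polygonVertex;   // vector from the polygon to the camera
            return Vector3.Dot(normal, c) > 0f;
        }
    }

    // Example: IsVisible(new Vector3(10, 10, 40), Vector3.zero, new Vector3(5, 5, -2))
    // evaluates 5 × (−10) + 5 × (−10) + (−2) × (−40) = −20, which is negative,
    // so the polygon is reported as invisible, matching the worked example.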
UNIT-2
CHAPT-1
GPU vs CPU Architectures
CPU
1. CPU stands for Central Processing Unit.
2. Handles general-purpose tasks like game logic, AI, physics calculations, input
handling, and managing the overall flow of the game.
3. Focuses on executing complex sequential tasks. It has fewer cores but each core is
powerful and designed for single-threaded performance.
4. Handles a wide range of tasks and can switch between different operations rapidly.
5. Good at sequential processing and handling complex decision-making tasks that
require logical steps.
6. Uses a hierarchical memory system with large caches to reduce latency and optimize
access to frequently used data.
GPU
1. GPU stands for Graphics Processing Unit.
2. Specialized for rendering graphics, managing tasks like drawing 3D models, textures,
shadows, lighting,etc.
3. Designed for parallel processing with thousands of smaller cores. It excels in
executing multiple operations at once.
4. Optimized for specific, repetitive tasks like rendering frames.
5. Excels at parallelism, processing thousands of simple tasks simultaneously.
6. The GPU’s architecture includes high-bandwidth memory that supports quick access
to large datasets needed for real-time rendering and parallel computation.
Solving Problems with GPUs (using DirectX)
DirectX, a set of APIs developed by Microsoft, includes components for handling
multimedia and gaming tasks. When solving problems using GPUs via DirectX the process
typically involves:
1. Programming Model:
- Utilizing GPU-accelerated APIs within DirectX (like DirectCompute or HLSL shaders) to
offload computations to the GPU.
- Writing shaders (small programs executed on the GPU) that define how data is processed
and rendered.
2. Data Parallelism:
- Identifying tasks that can be executed in parallel across multiple threads on the GPU.
- Utilizing GPU-specific data structures (like textures or buffers) to efficiently manage and
access data in parallel.
3. Optimizing for GPU Architecture:
- Understanding memory access patterns and ensuring data locality to maximize throughput.
- Minimizing branching and conditional statements in shaders to maintain high performance.
4. Integration with CPU Tasks:
- Leveraging the strengths of CPUs and GPUs together, where CPUs handle complex logic,
orchestration, and serial tasks, while GPUs handle parallelizable tasks that benefit from their
massive parallel processing capabilities.
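As a concrete illustration of offloading a data-parallel task to the GPU, here is a hedged C# sketch using Unity's ComputeShader wrapper (which on Windows compiles HLSL compute shaders through DirectX); the shader asset, its kernel name "CSMain", its thread-group size and the buffer name "results" are assumptions made for the example:

    using UnityEngine;

    public class GpuOffloadExample : MonoBehaviour
    {
        public ComputeShader shader;   // assumed to contain an HLSL kernel named "CSMain"

        void Start()
        {
            const int count = 1024;
            var buffer = new ComputeBuffer(count, sizeof(float));   // GPU-visible data
            int kernel = shader.FindKernel("CSMain");
            shader.SetBuffer(kernel, "results", buffer);

            // Launch one GPU thread per element, assuming [numthreads(64,1,1)] in the shader.
            shader.Dispatch(kernel, count / 64, 1, 1);

            var results = new float[count];
            buffer.GetData(results);   // read the results back to the CPU
            buffer.Release();
        }
    }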
CHAPT-2
THE SWAP CHAIN AND PAGE FLIPPING
Swap Chain
A swap chain is a collection of buffers used to hold rendered frames before they are displayed on the screen. It usually includes front buffers and back buffers. When rendering of a frame is complete, the buffers are swapped so the back buffer becomes the front buffer, making the new frame visible.
Components of a Swap Chain:
 Front Buffer: Contains the current frame that is being displayed.
 Back Buffer: Contains the next frame being rendered. Once rendering is complete, it is
swapped with the front buffer.
 Additional Buffers (for Triple Buffering): Helps reduce tearing and provides smoother
frame transitions.

Page Flipping
Page flipping is the technique used to swap the front and back buffers. Instead of copying the
content of the back buffer to the front buffer, the pointers of these buffers are swapped. This
improves performance by reducing the time taken for display updates.
Types of Page Flipping:
 Double Buffering: Involves two buffers – one front and one back buffer. Page flipping
switches the two buffers after each frame is ready.
 Triple Buffering: Adds an extra buffer, allowing more time to render the next frame. This
helps to reduce screen tearing, which happens when parts of different frames are mixed
together on the screen.
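Conceptually, page flipping just exchanges buffer references rather than copying pixel data. A minimal C# sketch of the idea (purely illustrative; it is not the DirectX API, whose swap chains perform the flip internally when Present is called):

    public sealed class SwapChainSketch
    {
        private byte[] frontBuffer;  // frame currently shown on screen
        private byte[] backBuffer;   // frame currently being rendered

        public SwapChainSketch(int bufferSizeInBytes)
        {
            frontBuffer = new byte[bufferSizeInBytes];
            backBuffer  = new byte[bufferSizeInBytes];
        }

        public byte[] BackBuffer => backBuffer;

        // "Flipping" swaps the two references; no pixels are copied, which is why
        // page flipping is cheaper than blitting the back buffer to the front.
        public void Present()
        {
            (frontBuffer, backBuffer) = (backBuffer, frontBuffer);
        }
    }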

DEPTH BUFFERING
Depth buffering calculates a depth value for each pixel and performs a depth test. This test compares pixels competing for the same position in the back buffer; the pixel whose depth value is closest to the viewer is drawn at that position.
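A minimal sketch of the per-pixel depth test in C# (illustrative only; on real hardware this comparison happens in fixed-function logic during output merging):

    public sealed class DepthBufferSketch
    {
        private readonly float[] depth;   // one depth value per pixel, initialised to "far"
        private readonly uint[]  color;   // back-buffer colours

        public DepthBufferSketch(int pixelCount)
        {
            depth = new float[pixelCount];
            color = new uint[pixelCount];
            for (int i = 0; i < pixelCount; i++) depth[i] = float.MaxValue;
        }

        // Write the pixel only if it is closer to the viewer than what is already stored.
        public void WritePixel(int index, float pixelDepth, uint pixelColor)
        {
            if (pixelDepth < depth[index])
            {
                depth[index] = pixelDepth;
                color[index] = pixelColor;
            }
        }
    }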
TEXTURE RESOURCE VIEWS
Texture resource views are interfaces that define how a texture will be accessed by the GPU.
Different views can be created for a single texture resource depending on how the texture
data needs to be used.
Types of Texture Resource Views:
 Shader Resource View (SRV): Used to read texture data in shaders, like pixel shaders or
vertex shaders. SRVs allow for sampling textures during rendering.
 Render Target View (RTV): Used when a texture acts as a render target. It allows a
texture to be written to during rendering (e.g., off-screen rendering, post-processing
effects).
 Depth-Stencil View (DSV): Used for depth and stencil buffers, allowing textures to store
depth information for depth testing and stencil operations.
 Unordered Access View (UAV): Allows random read/write access to texture data, often
used in compute shaders for advanced effects like particle systems or post-processing.

CHAPT-3
Rendering Pipeline
1. Input Assembler Stage (IA):
- Function: Gathers vertex data from buffers based on the input layout and topology.
- Input: Vertex data (position, normal, texture coordinates) from buffers.
- Output: Vertex data passed to the Vertex Shader stage.
2. Vertex Shader Stage (VS):
- Function: Processes each vertex independently.
- Input: Vertex attributes (position, normal, texture coordinates).
- Output: Transformed vertex positions in homogeneous clip space.
3. Tessellation Stage (TS):
- Function: Increases mesh detail based on tessellation factors. This is an optional stage.
- Output: More vertices generated for finer surface detail.
4. Geometry Shader Stage (GS):
- Function: Allows creation or modification of primitives. This is an optional stage.
- Output: Additional or modified primitives passed to the next stage.
5. Pixel Shader Stage (PS):
- Function: Computes final pixel colors.
- Input: Interpolated vertex attributes, texture coordinates, and other interpolated data.
- Output: Final pixel colors.
6. Output Merger Stage (OM):
- Function: Combines pixel shader outputs with existing data in the render target.
- Input: Final pixel colors, depth values.
- Output: Updated render targets.

Meshes or Objects
- Meshes/Objects: Represented by vertices and indices stored in vertex buffers.
- Input Assembler Stage: Assembles vertex data into primitives (points, lines, triangles)
based on topology (triangle list, strip, etc.).
Texturing
- Texture Mapping: Applied during the Pixel Shader stage using texture coordinates
interpolated from vertex data.
- Types: Supports various texture types (2D, cube maps, volume textures) for different effects
(diffuse, specular, normal maps).
Lighting
- Vertex Shader: Computes vertex lighting (position and normal transformations, light
calculations).
- Pixel Shader: Computes per-pixel lighting effects (Phong shading, reflections, shadows) using interpolated normals and texture data.
Blending
- Output Merger Stage: Blends final pixel colors with existing render target contents based
on blend modes (alpha blending, additive blending, etc.).

Game Engine Architecture

A game engine is a software framework used for the creation and development of video
games. It provides essential features such as rendering, physics, and input handling.
Core Components of a Game Engine
 Rendering Engine: Handles drawing and rendering of graphics on the screen. It manages 2D/3D models, textures, lighting, and shaders.
 Physics Engine: Simulates physical interactions in the game world, including collision detection, gravity, and rigid body dynamics.
 Input System: Captures and processes user inputs from various devices like keyboards, mice, and game controllers.
 Scripting/Logic System: Allows developers to write game logic and behaviors using scripting languages like Lua or Python.
 Audio Engine: Manages sound playback, including background music, sound effects, and 3D spatial audio.
 AI (Artificial Intelligence): Controls non-player character (NPC) behavior, pathfinding, and decision-making algorithms.

Game engine architecture is designed to be modular and flexible, allowing developers to focus on building game content rather than low-level details. Key components work together to provide a seamless experience in game development.

Architecture of a Game
The architecture of a game is similar to that of other software, but it includes some additional components that set it apart. Every game has the following components:
• Graphics Engine
• Sound/Audio Engine
• Rendering & Vision-Input Engine
• I/O Devices (like, Mouse, keyboard, speaker, monitor etc)
• DLL files and Drivers/Device APIs
Graphics Engine
A graphics engine is software that, working with an application program, draws graphics on the computer's display device. The graphics engine improves a game's visuals by increasing the resolution and the number of pixels per unit area, and it helps the game's scenes render clearly and run smoothly.
Sound/Audio Engine
The audio/sound engine is the component containing the algorithms and built-in routines that handle the sound effects embedded in the game. It can perform its calculations on the CPU or on a dedicated ASIC (Application-Specific Integrated Circuit). Abstraction APIs such as OpenAL, SDL Audio, XAudio2 and Web Audio may be available within this engine.
Rendering and Vision-Input Engine
The rendering engine, together with the vision-input system, produces 3D animated graphics using techniques such as rasterization and ray tracing. The majority of rendering engines are built upon one or more rendering APIs, such as Direct3D and/or OpenGL, which provide a software abstraction layer over the Graphics Processing Unit (GPU).
I/O Devices
Input devices read data and programs and convert them into a form the computer can use, while output devices present the results of machine processing in a form usable by humans. A game requires strong interaction between the user and the game being played, so peripheral devices such as the mouse, keyboard, joystick and monitor play a major role in making the game interactive.
DLL files and Drivers/Device APIs
A DLL (Dynamic Link Library) file is a file containing instructions, written as programs, that can be called or used by other programs to perform certain tasks. In this way, different programs can share the abilities and characteristics programmed into a single file.

Engine support systems


These are the systems that support the core functionality of the game engine, such as memory management, multithreading, task scheduling, and debugging tools. They are crucial for optimizing performance and maintaining stability.
Corona SDK
Corona SDK is a software development kit that is available on Windows and OS X and uses
Lua as a scripting language. Using Corona, one can develop mobile games for free.
However, to create a game or app with more elaborate features, we need to opt for an
enterprise model that offers native libraries and APIs. The tool is the best mobile game
development solution if we want to develop a cross-platform game.
SpriteKit
Available on iOS and OS X, SpriteKit is Apple’s proprietary 2D game development framework that supports both Swift and Objective-C. SpriteKit offers great convenience to developers: with SKView, scene management is made easy, and the SKAction class can be leveraged to move, scale or rotate game objects. It also supports sound and custom code. SpriteKit offers a scene editor for designing levels, and a particle editor helps with building different particle systems.
Unity
Unity is a mobile game development engine that supports C# and UnityScript, Unity’s own JavaScript-like language. It comes in free as well as professional editions. It is a cross-platform tool deployable to many platforms. Like other tools, its built-in editor allows us to edit images and organize animations from the Animator window, and we can also design particle systems in the Unity editor.
Cocos2D
Cocos2D is an open-source framework that game developers can use for free. It works with both Swift and Objective-C and supports iOS and OS X. If coding is done in Objective-C, it supports Android through the SpriteBuilder Android plug-in. SpriteBuilder is used for creating projects and provides a graphical design environment where we can prototype and build games.
Marmalade
Marmalade offers a free suite of tools that enable easy game development and porting. It is a fast, high-performance cross-platform engine for creating 2D and 3D games. The SDK can be used to code in C++; Marmalade Quick supports app development using the Lua scripting language, whereas Marmalade Web facilitates creating hybrid apps using HTML5, CSS and JavaScript.
CryEngine
Developed by the German company Crytek, CryEngine is a game development engine used to create 3D games for consoles and Windows PC. We can create first-person shooter games and other advanced games with CryEngine using C++, ActionScript, Lua script and Visual Studio. CryEngine offers a number of advanced features.

Resources and File systems


Resource Processing
Pre-processing data before handing it to the next step in a pipeline is not uncommon outside games. For game engines, preprocessing means modifying resources so that they are better suited to different circumstances. For assets such as textures this usually means reducing file size at the expense of quality, so that the data fits in memory better and, depending on the asset, requires less processing power from the hardware. Another objective is changing the file format to better suit the game engine's and target platform's requirements.
File system:
File systems read data from and write data to storage such as DVD-ROM, hard disks, and SD cards. Code in this subsystem is generally responsible for managing game resource files and for loading and saving the game state. Managing resource files can be considerably more complicated than simply opening a JPG or an MP3 file.

Game profiling
Game profiling is the process of analyzing a game’s performance to identify bottlenecks and
optimize the game’s code, ensuring smooth gameplay across different devices and
configurations. It involves measuring various aspects like CPU usage, memory consumption,
frame rate, and rendering efficiency.

Key Metrics in Game Profiling

 Frame Rate (FPS): Measures how many frames are rendered per second. A high and
stable FPS is crucial for smooth gameplay.
 CPU and GPU Usage: Analyzes how much processing power is being utilized and
identifies if there’s any bottleneck caused by the CPU or GPU.
 Memory Usage: Tracks how much RAM is being consumed and detects memory leaks
that could cause crashes or slowdowns.
 Draw Calls and Render Time: Evaluates the efficiency of rendering scenes by
measuring how many draw calls are made and how long it takes to render each frame.

Types of Profiling
 Real-time Profiling: Measures performance metrics in real-time during gameplay,
helping identify issues as they occur.
 Frame-by-Frame Analysis: Breaks down each frame to see how time is spent on tasks
like rendering, physics calculations, AI, and input processing.
 Memory Profiling: Tracks memory allocation, usage, and deallocation, helping find
memory leaks and optimize resource management.
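In Unity, for example, you can mark regions of your own code so that they appear as labelled samples in the Profiler window. A minimal hedged sketch (Profiler.BeginSample/EndSample are the standard Unity profiling calls; the method being measured is hypothetical):

    using UnityEngine;
    using UnityEngine.Profiling;

    public class PathfindingProfiler : MonoBehaviour
    {
        void Update()
        {
            Profiler.BeginSample("Pathfinding");   // label shown in the CPU profiler
            RecomputePaths();                      // hypothetical expensive per-frame work
            Profiler.EndSample();
        }

        void RecomputePaths()
        {
            // placeholder for the game's own pathfinding update
        }
    }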

Unit-3
APPLICATION LAYER

The Physics Abstraction Layer (PAL) is an open-source library allowing developers to work
with multiple physics SDKs in a single project. It provides integration with various SDKs
like PhysX, Newton, ODE, OpenTissue, and others. This enables developers to use different
physics engines without needing to rewrite core game code, offering flexibility and
compatibility in physics simulation.

The application layer manages various aspects of a game's interaction with devices, the
operating system, and the overall game lifecycle. It handles reading input from devices like
keyboards, mice, and gamepads, translating these inputs into game commands. It also
manages system clocks for synchronization, ensuring smooth animation and gameplay.
Additionally, the application layer deals with string handling for localization, dynamically
loaded libraries (DLLs) to swap or add components, threads for handling multitasking, and
network communication for multiplayer support. It also oversees the game's initialization,
main loop, and shutdown, coordinating all core systems to ensure the game runs efficiently.

Game Logic:

Game Logic defines the core mechanics of a game, dictating the game universe, its objects,
and their interactions. It reacts to external inputs like player commands or AI actions. Here's
an overview of its main components:

1. Game State and Data Structures: Manages game objects. Simple games use lists, while
complex ones require more flexible structures for fast object state changes and
property handling.
2. Physics and Collision: Governs rules of motion, gravity, and object interactions.
Realistic physics enhance gameplay, while poor physics can ruin the experience.
3. Events: Systems like graphics, audio, and AI respond to changes (e.g., creating or
moving an object) via events. These events notify only the relevant subsystems.
4. Process Manager: Handles simple processes, such as moving actors or executing
scripts, and allocates CPU time to each process in the game loop.
5. Command Interpreter: Translates player inputs or AI commands into actions within
the game logic. This interface separates the logic from the view, allowing flexibility,
such as creating custom mods or scripts for the game.

Game View:
Game Views represent how a game presents itself to different observers, whether it's a
human player or an AI. Here's a breakdown of the main components:

1. Graphics Display: Renders game objects, UI, and possibly video. It aims to deliver a
high frame rate while scaling for different hardware setups, focusing on efficiency in
drawing 3D scenes.
2. Audio: Encompasses sound effects, music, and speech. Game audio involves 3D
positioning of sounds, music integration for gameplay, and syncing speech with
character movements.
3. User Interface Presentation: Game UI is creatively designed, often unique to each
game, featuring custom controls for player interaction, with options like licensing
tools to streamline the process.
4. Process Manager: Similar to the game logic, a process manager in the view handles
things like animations and media playback.
5. Memory Management: Effective memory management is crucial, often requiring
custom memory managers to handle allocations and track memory budgets for optimal
performance

Initialization, the Main Loop, and Shutdown:

Games operate differently from typical applications. Instead of waiting for user input like
many software programs, games continue processing, simulating, and interacting with the
game world regardless of player interaction. This is done through a main loop, which
controls the ongoing activity in the game.

Components of the Main Loop:

1. Player Input: Grabs and queues input from the player.
2. Game Logic: Ticks game logic like AI, physics, and animations.
3. Game Views: Renders the game state, plays sounds, or sends updates to online components.

At the highest level, the game application layer initializes and loads the game logic, attaches
the game views to this logic, and provides CPU time for each system to function properly.

Controlling The Main Loop:

A game’s main loop runs a series of operations repeatedly to present and update the game for
the player. Unlike most software that only reacts when the user triggers something, games
need constant processing. A typical main loop involves:

 Receiving player input.
 Running creature AI.
 Updating animations.
 Updating the physics system.
 Running world simulations.
 Rendering the scene.
 Playing sounds and music.
The main loop is critical for ensuring that the game runs continuously, even when the player
does nothing. This cycle needs to run quickly, typically completing each iteration within 33
milliseconds (or 30 iterations per second).
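A minimal C# sketch of such a loop (purely illustrative; engines such as Unity hide this loop from the developer, and the IGame interface below is a hypothetical stand-in for the game's own systems):

    using System.Diagnostics;

    public interface IGame
    {
        bool QuitRequested { get; }
        void ReadPlayerInput();
        void UpdateLogic(float deltaTime);
        void RenderViews();
    }

    public static class MainLoopSketch
    {
        public static void Run(IGame game)
        {
            var timer = Stopwatch.StartNew();
            double previous = 0.0;

            while (!game.QuitRequested)
            {
                double now = timer.Elapsed.TotalSeconds;
                float deltaTime = (float)(now - previous);   // time since the last iteration
                previous = now;

                game.ReadPlayerInput();        // grab and queue player input
                game.UpdateLogic(deltaTime);   // AI, physics, animation, world simulation
                game.RenderViews();            // draw the scene, play sounds, send net updates
            }
        }
    }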

Organizing The Main Loop:

There are different ways to organize a main loop:

1. Hard-Coded Updates: The simplest method, where each system is updated once per
frame. This approach is straightforward but lacks flexibility. Early games often used
this method.
2. Multithreaded Main Loops: This method divides the update into sections that can
run concurrently. A common approach is splitting game logic and rendering into
different threads. Since rendering often causes the CPU to wait for the GPU, moving it
to a separate thread allows better utilization of the CPU. Modern games can benefit
from multithreaded main loops, as modern processors have multiple cores.

User Interface (UI) Management: The UI in games refers to the methods (e.g., keyboard,
mouse) and screens (e.g., inventory, map) through which the player interacts with the game.
Unlike other UI designs, game UIs integrate fiction, meaning the player becomes part of the
story, much like a narrator.

Types of Game UI:

1. Diegetic: UI elements exist within the game world, allowing the player and their
avatar to interact with them (e.g., in-world maps or control panels). These enhance
immersion.
2. Meta: UI elements outside the game's geometry but still within the narrative, like
blood splatter on the screen as a health bar.
3. Spatial: Provides information outside what the character should know, but still within
the game environment (e.g., distance markers). These keep immersion intact without
needing separate menu screens.
4. Non-Diegetic: Traditional UI elements completely separate from the game’s fiction or
geometry, like health bars or menus. These often follow the game's art style and are
used when other UI types become restrictive.

1. DirectX

 Overview: DirectX is a collection of APIs by Microsoft, focused on multimedia and gaming, particularly for high-performance 2D and 3D game development.
 Pros:
o Direct hardware access enables high-performance graphics.
o Excellent for Windows-based games, especially AAA titles.
 Cons:
o Windows-specific, limiting cross-platform development.
 Usage: Best for performance-critical games that need direct control over the system's
hardware, like rendering intense 3D environments.
2. Java

 Overview: A cross-platform, high-level programming language known for portability and its "Write once, run anywhere" philosophy.
 Pros:
o Portable across multiple platforms (Windows, Linux, macOS).
o Good for education, and there's extensive community support.
 Cons:
o Slower performance compared to C++ or other native languages.
o Less common for high-performance or AAA games.
 Usage: Popular for indie games, educational projects, and mobile game development.
Suitable for cross-platform needs.

3. Python

 Overview: A high-level language that's easy to learn and versatile, often used for
scripting and rapid prototyping.
 Pros:
o Quick development cycles, great for prototyping.
o Simple and easy to use for beginners, with a large community.
 Cons:
o Slower than compiled languages like C++.
o Not ideal for performance-intensive games.
 Usage: Mostly used in indie and experimental games, educational projects, or for
quick prototyping due to its ease of use.

4. Unity

 Overview: A popular, cross-platform game engine that supports both 2D and 3D game
development. It uses C# for scripting and has a robust editor.
 Pros:
o User-friendly with a large asset store and cross-platform support.
o Used for a wide range of platforms including mobile, PC, console, and VR/AR.
o Extensive documentation and community support.
 Cons:
o Resource-intensive and has a steeper learning curve for non-programmers.
 Usage: Widely adopted for AAA games, indie games, and mobile games due to its
versatility and broad platform support.

UNIT-4
RENDERING ENGINES
 Rendering engine is the module that is responsible for generating the graphical output.
 The job of a rendering engine is to convert the application's internal model into a series of pixel values that can be displayed by a monitor.
 For example, in a 3D game the rendering engine might take a collection of 3D polygons as input and use them to generate the 2D images output to the monitor.
 Render engines fall into two general categories:
 CPU-based render engines
 GPU-based render engines
 There is also a third category, the hybrid render engine, which can utilize the power of both the CPU and GPU at the same time.
 There are two approaches to rendering:
 Biased render engine:
 Fast but less realistic: it speeds up the process by making smart approximations about how light behaves. The image still looks good, but it is not always 100% accurate.
 Examples: V-Ray, Redshift, Mental Ray, RenderMan.
 Unbiased render engine:
 Realistic but slow: it tries to create images as realistically as possible by simulating how light works in real life. This can take a long time because it does not cut any corners.
 Examples: Maxwell, Octane, Indigo, FStorm, Corona.

AUGMENTED REALITY:
Augmented reality (AR) is an enhanced version of the real physical world that is achieved through the use of digital visual elements, sound, or other sensory stimuli delivered via technology. The most famous example of AR technology is the mobile app Pokémon Go.
Advantage:
1. Accessible to everyone.
2. Improves medical training and saves lives.
3. Safe military simulations without real danger.
4. Enhances information sharing and learning.
5. Real-time shared experiences.
6. Immersive gaming experiences.
Disadvantages:
1. Expensive to produce.
2. May increase aggression in violent AR games.
3. Privacy concerns with personal data exposure.
4. Risk of information overload

Applications of Augmented Reality (AR):

1. Medical Training: Enhances training for medical professionals by simulating complex procedures and equipment use.
2. Retail: Allows customers to visualize and customize products in real-time, improving
the shopping experience.
3. Design & Modeling: Helps architects, engineers, and designers visualize and modify
their creations in real-world environments.
4. Classroom Education: Improves student engagement by visualizing complex subjects
like astronomy and music in interactive ways.
5. Entertainment: Engages audiences with branded characters, as seen with AR games
like Pokémon Go.
6. Military: Improves situational awareness with technologies like Microsoft’s IVAS,
which includes night vision, battlefield navigation, and thermal imaging.
VIRTUAL REALITY:
Virtual Reality is an artificial environment that is created with the software and presented to
the user in such a way that the user starts to believe and accept it as a real environment.
It implies a complete immersion experience that shuts out the physical world.
Applications of Virtual Reality (VR):
• Entertainment: Immersive video games with enhanced 3D graphics and simple
accessories.
• Education: Enables virtual museum tours, building design, and astronomy learning.
• Medicine: Used for surgery simulations and therapy for phobias and traumas.
• Commercial: Virtual stores and real estate tours, offering personalized shopping
experiences and reducing costs.
• Tourism and Hospitality: Virtual tours of vacation spots, hotels, and landmarks to
motivate bookings, while also training staff in simulated scenarios.

MIXED REALITY:
Mixed Reality (MR) combines elements of both augmented reality (AR) and virtual reality
(VR). It allows digital objects to interact with the real world in real-time, creating an
experience where physical and digital worlds coexist and interact. An example is Microsoft's HoloLens, one of the most notable mixed reality devices.
Applications of Mixed Reality (MR): Same as above

Smart Glasses Overview:

Smart glasses are wearable devices that overlay information on the real world through visual displays or audio. Equipped with sensors, accelerometers, touchpads, and voice controls, they provide various functionalities:
 Messaging and calls
 Navigation
 Interaction with apps: control apps like search, fitness, and music.
Applications:
1. Entertainment: Watch movies in 3D, replacing the need for a TV.
2. Lifelogging: Record and store experiences while traveling.
3. Voice Commands: Hands-free phone calls, scheduling, and navigation.
4. Training: Enhance workout sessions with real-time data.
5. Facial Recognition: Used by military and police for security purposes.

UNITY ENGINE:
• Unity is a game engine developed by Unity Technologies. It is one of the most widely used engines in the game development industry.
• Since it is a cross-platform engine, it can be used to create games for different platforms like Windows, iOS, Linux, and Android.
• The engine has been adopted by industries outside video gaming, such as film, automotive, architecture, engineering, and construction. As of now, the engine supports as many as 25 platforms.
• It has its own Integrated Development Environment (IDE) and is famous for creating interactive games.
• It contains many elements like Assets, GameObjects, Components, Scenes, and Prefabs.
Essential Unity Concept
• Assets: Building blocks of Unity projects, like images, 3D models, and sound files. Stored in the "Assets" folder of any project.
• Scenes: Represent individual levels or areas (menus, gameplay). Help distribute loading times and allow independent testing.
• GameObject: When an asset is added to a game scene, it becomes a GameObject. Every GameObject starts with a Transform component, which controls its position, rotation, and scale in X, Y, and Z coordinates. You can modify this component in scripts and add more components to create the desired functionality for any game scenario.
• Components: Components define an object's behavior, appearance, or other functions in a game. Attaching components to a GameObject applies specific game engine features. Unity provides built-in components like Rigidbody, lights, and cameras. You can also create custom components with scripts to add interactivity.
• Scripts: In Unity, you can create custom components using scripts to trigger game events, modify component properties, and handle user input. Unity natively supports C#. Scripts add functionality to GameObjects, allowing for dynamic game behavior.
• Prefabs: Prefabs are blueprints of a GameObject. They allow you to create copies of a
GameObject that can be reused and added to a scene, even during gameplay.
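A minimal C# sketch of spawning copies of a Prefab at runtime (the enemyPrefab field and spawn point are assumptions made for illustration and are assigned in the Inspector):

    using UnityEngine;

    public class EnemySpawner : MonoBehaviour
    {
        public GameObject enemyPrefab;   // the Prefab blueprint, assigned in the Inspector
        public Transform spawnPoint;

        void Start()
        {
            // Instantiate creates a new GameObject in the scene from the Prefab blueprint.
            Instantiate(enemyPrefab, spawnPoint.position, spawnPoint.rotation);
        }
    }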

TIMELINE IN UNITY:
The Timeline Editor in Unity is a tool for creating cut-scenes, cinematics, and gameplay sequences by visually arranging tracks and clips. It allows users to manage and animate GameObjects within the scene. For each sequence, two key components are saved:

1. Timeline Asset: This stores the tracks, clips, and recorded animations but is not linked to specific GameObjects. It is saved to the project and can be reused in different scenes.
2. Timeline Instance: This stores the links, or bindings, to the specific GameObjects animated by the Timeline Asset. These links are saved to the scene, ensuring that the animations work with the correct GameObjects.

When key animations are recorded during the creation process, the recorded animations are saved as children of the Timeline Asset, making the Timeline Editor a versatile tool for handling complex sequences.

INTRODUCTION TO SCRIPTING:

• Most applications need scripts to respond to input from the player and to arrange for
events in the gameplay to happen when they should.
• Beyond that, scripts can be used to create graphical effects, control the physical behavior
of objects or even implement a custom AI system for characters in the game
• Scripting is the process of writing blocks of code that are attached like components to
GameObjects in the scene.

Creating Scripts:
Scripts are usually created within Unity directly. You can create a new script from the Create
menu at the top left of the Project panel or by selecting Assets > Create > C# Script from the
main menu.

Anatomy of a Script file:

In Unity, scripts are usually opened in Visual Studio by default, but you can change the
editor via Unity > Preferences > External Tools. When you create a new script, Unity
automatically generates a class that derives from MonoBehaviour, which connects the script
to Unity’s system. The class name must match the file name for the script to be attachable to
GameObjects.

• The Update function is the place to put code that will handle the frame update for the
GameObject.
• This might include movement, triggering actions and responding to user input, basically
anything that needs to be handled over time during gameplay.
• To enable the Update function to do its work, it is often useful to be able to set up
variables, read preferences and make connections with other GameObjects before any
game action takes place.
• The Start function will be called by Unity before gameplay begins (i.e., before the Update function is called for the first time).

To control a GameObject, you must attach the script to it, which can be done by dragging the
script onto the GameObject in the hierarchy or inspector. You can also find your script under
the Component > Scripts menu. Once attached, the script will run when you press Play, and
Unity will begin calling its lifecycle functions.

MonoBehaviour Class

MonoBehaviour is the base class from which every Unity script derives. This class doesn't
support the null-conditional operator (?.) and the null-coalescing operator (??).

• Start() - Start is called on the frame when a script is enabled just before any of the Update
methods are called the first time. Start is called exactly once in the lifetime of the script.
• Update() - Update is called once per rendered frame, if the MonoBehaviour is enabled, so its frequency depends on the frame rate (e.g. 60 times per second at 60 fps). Not every MonoBehaviour script needs Update.
• FixedUpdate() - FixedUpdate runs on a fixed timestep, so its frequency can be more or less than Update's. If the application runs at 25 frames per second (fps), Unity calls FixedUpdate approximately twice per frame.
• LateUpdate() - LateUpdate is called every frame, if the Behaviour is enabled. LateUpdate
is called after all Update functions have been called. This is useful to order script
execution.
• OnGUI() - OnGUI is called for rendering and handling GUI events.
• OnDisable() - This function is called when the behavior becomes disabled.
• OnEnable() - This function is called when the object becomes enabled and active.
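A minimal example script showing the most common of these callbacks (illustrative; the speed field and the forward movement are assumptions made for the example):

    using UnityEngine;

    public class LifecycleExample : MonoBehaviour
    {
        public float speed = 5f;

        void Start()
        {
            // Called once, just before the first Update.
            Debug.Log("Script started");
        }

        void Update()
        {
            // Called once per rendered frame; scale movement by Time.deltaTime
            // so it is independent of the frame rate.
            transform.Translate(Vector3.forward * speed * Time.deltaTime);
        }

        void FixedUpdate()
        {
            // Called at the fixed timestep; physics-related code belongs here.
        }
    }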
SETTING UP A MULTIPLAYER PROJECT

• Network Manager:
The Network Manager is essential for managing the networking aspects of the game. It handles the creation and connection of multiplayer game sessions. Unity provides a built-in Network Manager component, which simplifies these tasks. It ensures only one Network Manager is active in the scene at a time and manages the connection between the host and client computers.
• User Interface (UI):
A UI allows players to find, create, and join multiplayer game sessions, commonly
known as the "lobby." Unity’s NetworkManagerHUD provides a basic UI for creating
matches, though it is limited in functionality and design. Developers typically create a
custom UI to better fit the game’s design before finalizing the project.
• Networked Player Prefabs:
In multiplayer games, players control objects, such as characters or cars, represented by
networked GameObjects. These GameObjects should be designed as Prefabs and
assigned to the Network Manager. When players connect, the Network Manager creates
instances of these Prefabs, ensuring each player has control over their own object.
• Multiplayer-aware Scripts:
The scripts attached to GameObjects must differentiate between actions performed by the
host computer and the client computers. Since both host and clients are connected
simultaneously, the scripts should be capable of handling input from different sources
while ensuring smooth multiplayer interactions.

NAVIGATION AND PATH FINDING:

• The navigation system allows you to create characters that can intelligently move around
the game world, using navigation meshes that are created automatically from your Scene
geometry.

• Dynamic obstacles allow you to alter the navigation of the characters at runtime, while off-
mesh links let you build specific actions like opening doors or jumping down from a ledge.

The Unity NavMesh system consists of the following pieces:

1. NavMesh (short for Navigation Mesh) is a data structure which describes the walkable
surfaces of the game world and allows users to find paths from one walkable location to
another in the game world.

2. NavMesh Agent component helps you to create characters which avoid each other while
moving towards their goal.

3. Off-Mesh Link component allows you to incorporate navigation shortcuts which cannot be represented using a walkable surface. For example, jumping over a ditch or a fence, or opening a door before walking through it, can all be described as off-mesh links.
4. NavMesh Obstacle component allows you to describe moving obstacles the agents should
avoid while navigating the world.
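A minimal C# sketch of driving a character with a NavMesh Agent (NavMeshAgent.SetDestination is the standard Unity call; the target Transform is an assumption, assigned in the Inspector):

    using UnityEngine;
    using UnityEngine.AI;

    [RequireComponent(typeof(NavMeshAgent))]
    public class MoveToTarget : MonoBehaviour
    {
        public Transform target;        // e.g. the player, assigned in the Inspector
        private NavMeshAgent agent;

        void Start()
        {
            agent = GetComponent<NavMeshAgent>();
        }

        void Update()
        {
            // The agent plans a path over the baked NavMesh and avoids other agents.
            agent.SetDestination(target.position);
        }
    }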

CREATING USER INTERFACES (UI): Unity provides three UI systems

• UI Toolkit is the newest UI system in Unity. It is designed to optimize performance across platforms and is based on standard web technologies. You can use UI Toolkit to create extensions for the Unity Editor, and to create runtime UI for games and applications (when you install the UI Toolkit package).
• The Unity UI (uGUI) package: The Unity User Interface (Unity UI) package (also called uGUI) is an older, GameObject-based UI system that you can use to develop runtime UI for games and applications. In Unity UI, you use components and the Game view to arrange, position, and style the user interface. It supports advanced rendering and text features.
• Immediate Mode Graphical User Interface (IMGUI) is a code-driven UI toolkit that uses the OnGUI function, and the scripts that implement it, to draw and manage user interfaces. You can use IMGUI to create custom Inspectors for script components, extensions for the Unity Editor, and in-game debugging displays. It is not recommended for building runtime UI, and unlike Unity UI it is not GameObject-based.
• The Canvas is a GameObject that contains all UI elements in Unity. When you create a
UI element (e.g., Image), a Canvas is automatically generated if none exists. All UI
elements are children of the Canvas, which appears as a rectangle in the Scene View,
allowing easy UI positioning. The Canvas also works with the EventSystem for managing
input events.

Visual Components

1. Text (Label): Displays text with customizable font, size, style, alignment, and overflow
options. It can auto-resize with "Best Fit."

2. Image: Displays a sprite with scaling options: Simple, Sliced, Tiled, or Filled. Sprites can
be 9-sliced to preserve corner integrity when resized.

3. Raw Image: Similar to Image but supports textures instead of sprites.

4. Mask: Restricts child elements to the shape of the parent, making only parts within the
parent's bounds visible.

5. Effects: Apply simple visual effects like drop shadows or outlines.

Interaction Components

1. Button: Triggers an action via the OnClick event when clicked.

2. Toggle: A checkbox that flips between on/off, with an OnValueChanged event.

3. Toggle Group: Groups toggles to ensure only one can be active at a time.
4. Slider: Allows users to select a value by dragging a handle, with OnValueChanged.

5. Scrollbar: Scrolls content between values 0 and 1, often paired with Scroll Rect for
scrollable areas.

6. Dropdown: A selectable list of options, with OnValueChanged for selection changes.

7. Input Field: Editable text input, with events for text changes and editing completion.

8. Scroll Rect: Displays large content in a small area with scrolling functionality, often
combined with a Mask and Scrollbars.
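A minimal C# sketch of hooking up one of these interaction components, a Button, from script (Button.onClick.AddListener is the standard uGUI call; the startButton field is an assumption, assigned in the Inspector):

    using UnityEngine;
    using UnityEngine.UI;

    public class StartButtonHandler : MonoBehaviour
    {
        public Button startButton;   // drag the UI Button here in the Inspector

        void Start()
        {
            // Register a callback for the Button's OnClick event described above.
            startButton.onClick.AddListener(OnStartClicked);
        }

        void OnStartClicked()
        {
            Debug.Log("Start button pressed");
        }
    }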

AUGMENTED REALITY (AR) vs VIRTUAL REALITY (VR)
• Augmented reality enhances real life with artificial images, adding graphics and sounds to the natural world as it exists; virtual reality replaces the real world with an artificial one.
• In AR the user is not cut off from reality and can interact with the real world while seeing both the real and the virtual world at the same time; in VR the user enters an entirely immersive world and is cut off from the real world.
• AR uses a device such as a smartphone or wearable containing software, sensors, a compass and a small digital projector that displays images onto real-world objects. Such phones can obtain information about a particular geographical location, onto which tags, images, videos, etc. can be overlaid.
• VR works well for video games and social networking in a virtual environment such as Second Life or PlayStation Home; head-mounted displays (HMDs) and input devices block out the external world and present a view that is under the complete control of the computer.
