Intro
Types of graphics
|  | Raster | Vector |
| --- | --- | --- |
| Image Representation | Images are represented as a grid of pixels, storing a color value for each pixel | Images are represented as a list of shapes and the coordinates of their components |
| Software | Can be made using painting programs | Can be made using drawing programs |
| File Formats | GIF, PNG, JPEG, and WebP | SVG |
| Coordinate System | A two-dimensional grid in which each pixel corresponds to a pair of integers (x, y) giving the column and row that contain the pixel | Real-number coordinates |
| Characteristics | Static pixels (bitmaps) | Shapes have attributes that can be used to adjust the image |
| Image Size | Determined by the image resolution | Independent of resolution (scales without quality loss) |
Raster
- Data compression: image data usually contains a lot of redundancy, so it can be compressed to reduce its size
- GIF and PNG use lossless compression algorithms: the original image can be recovered perfectly from the compressed data
- JPEG uses a lossy compression algorithm: the image recovered from a JPEG file is not exactly the same as the original; some information has been lost
- WebP supports both lossless and lossy compression (see the sketch below)
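The difference is easy to observe in the browser. A minimal sketch, assuming an image has already been drawn on an HTML canvas (canvas.toDataURL() is a standard API; the 0.8 quality value is an arbitrary choice):

```js
// Encode the same canvas with a lossless and a lossy format.
const canvas = document.querySelector("canvas");

const pngData  = canvas.toDataURL("image/png");        // lossless: pixels survive exactly
const jpegData = canvas.toDataURL("image/jpeg", 0.8);  // lossy: 0.8 trades quality for size

// The lossy encoding is usually much smaller, but decoding it will not
// reproduce the original pixel values exactly.
console.log(pngData.length, jpegData.length);
```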
Vector
- SVG:
- XML-based language for describing two-dimensional vector graphics images
- Stands for Scalable Vector Graphics
- Scalable means that there is no loss of quality when the size of the image is increased
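For example, a minimal hand-written SVG file describes an image as a list of shapes with real-number coordinates (the shapes and colors below are arbitrary):

```xml
<!-- Two shapes described by coordinates rather than pixels; the image
     can be rendered at any size with no loss of quality. -->
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <rect x="10" y="10" width="80" height="80" fill="steelblue"/>
  <circle cx="150" cy="50" r="40" fill="orange"/>
</svg>
```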
Elements of 3D Graphics
1. Geometry
- Geometric primitives: the most basic shapes available in a graphics system (such as points, lines, triangles, etc.)
- Hierarchical modeling: building a complex geometric model out of smaller, simpler component models
- Geometric transforms: used to adjust the size (scaling), orientation (rotation), and position (translation) of a geometric object
- Scaling, rotation, and translation are the most basic kinds of geometric transforms (see the sketch below)
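A minimal plain-JavaScript sketch of the three basic transforms applied to a 2D point (illustrative only; real graphics systems express these as matrices):

```js
// Scale, rotate (about the origin, angle in radians), and translate a point.
function scale(p, sx, sy)     { return { x: p.x * sx, y: p.y * sy }; }
function rotate(p, angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return { x: c * p.x - s * p.y, y: s * p.x + c * p.y };
}
function translate(p, dx, dy) { return { x: p.x + dx, y: p.y + dy }; }

// Transforms compose: size the object, orient it, then position it.
let p = { x: 1, y: 0 };
p = translate(rotate(scale(p, 2, 2), Math.PI / 2), 5, 3);  // -> { x: 5, y: 5 }
```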
2. Appearance
- Material
- Texture
- Lighting
3. Image
- The goal of creating 3D graphics is to produce 2D images
- This is done through:
- Viewing
- Projection
- Rasterization: assigning colors to individual pixels
- The whole process of producing an image is called rendering
Hardware and Software
- Common graphics APIs:
- OpenGL (Free)
- OpenGL ES (for embedded systems)
- WebGL (a port of OpenGL ES, for web browsers)
- Vulkan (Free)
- A modern, low-level API that gives high performance
- Direct3D (Windows)
- Metal (macOS)
GPU
- The GPU is better than the CPU at graphics-related computations, so graphics APIs provide a way for the CPU to communicate with the GPU, sending it commands and graphical data to process.
- It has hundreds or thousands of processors that can operate in parallel.
- The individual processors are much less powerful than a CPU, but typical per-vertex and per-fragment computations are not very complicated
OpenGL
- OpenGL itself is just an API specification, not an implementation.
- It was designed as a client/server system
- The server is the GPU: it controls the computer’s display, performs graphics computations, and carries out commands issued by the client. The GPU has its own memory to keep graphics data close.
- The client is the CPU: it sends the OpenGL commands and data to the server (GPU)
- The channel between the client and the server can be a limiting factor in graphics performance
- The channel can even be a network link, when the client machine is separate from the server
- Shaders: programs written in GLSL (OpenGL Shading Language) that run on the GPU and specify how each vertex and each fragment (pixel) is processed (see the sketch below)
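A minimal sketch of what such a shader pair can look like, written in WebGL 1 style GLSL and stored as JavaScript strings; the attribute and uniform names (a_position, u_mvpMatrix, u_color) are illustrative choices, not fixed by the API:

```js
const vertexShaderSource = `
  attribute vec3 a_position;   // per-vertex data sent from the client (CPU)
  uniform mat4 u_mvpMatrix;    // transformation matrix supplied by the client
  void main() {
    // Runs once per vertex: transform it into clip coordinates.
    gl_Position = u_mvpMatrix * vec4(a_position, 1.0);
  }
`;

const fragmentShaderSource = `
  precision mediump float;
  uniform vec4 u_color;
  void main() {
    // Runs once per fragment (pixel): assign its color.
    gl_FragColor = u_color;
  }
`;
```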
Basics of Geometry
Pixels
- A computer image is usually represented as a discrete grid/array of picture elements a.k.a. pixels. The number of pixels determines the resolution of the image
- B&W: each pixel is usually stored as an integer between 0 (black) and 255 (white)
- Colored: each pixel is described by a triple of numbers representing the intensity of red, green, and blue (see the sketch below)
- Aliasing: the most classical form of aliasing is the jaggy appearance of lines. This problem is reduced by using intermediate gray levels to smooth the appearance of the line
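A quick way to see this representation, assuming an HTML canvas in a browser (getImageData() is a standard 2D-canvas call; the fill color is arbitrary):

```js
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");

ctx.fillStyle = "rgb(200, 100, 50)";
ctx.fillRect(0, 0, 10, 10);

// data is a flat array of bytes: [r, g, b, a, r, g, b, a, ...], each 0-255.
const { data } = ctx.getImageData(0, 0, 1, 1);
console.log(data[0], data[1], data[2]);  // -> 200 100 50
```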
Geometric Model
Polygons
- The polygon is the basic building block used for modeling 3D geometry.
- A 3D object is a polygonal mesh; each polygon is described by the 3D coordinates of its list of vertices (see the sketch below)
- Triangles are used most of the time, for simplicity and generality
- Polygons produce a flat, faceted appearance. Therefore, techniques called smoothing or interpolation are used to improve that.
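For instance, in Three.js (introduced later in these notes) a triangle mesh is built directly from a flat list of vertex coordinates; a minimal sketch, assuming the three package is available:

```js
import * as THREE from "three";

// One triangle, described by the 3D coordinates of its three vertices.
const vertices = new Float32Array([
   0, 1, 0,   // vertex 0: x, y, z
  -1, 0, 0,   // vertex 1
   1, 0, 0,   // vertex 2
]);

const geometry = new THREE.BufferGeometry();
geometry.setAttribute("position", new THREE.BufferAttribute(vertices, 3));

const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0xff0000 }));
```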
Primitives
- These refer to classical geometric entities such as cubes, cylinders, spheres, and cones
- Polyline: is a connected sequence of straight lines
- The edges of a polyline can cross one another but a polyline does not have to be closed
- A polygon is a closed polyline
- Subdivision surfaces: a solution for obtaining smooth surfaces without compromising the simplicity of polygons
Rendering
- The image is a 2D projection of the 3D objects
- The projection can be computed given:
- The position of the viewpoint
- The right camera parameters, such as the field of view (near and far planes, and the view angle)
- Rasterization: the geometric entities are then rasterized, meaning that all of their visible pixels are drawn
Visibility
- Occlusions: an issue that arises when the scene contains more than one object and some objects are hidden by others
- Solutions:
- The Painter’s Algorithm: sort the objects or polygons from back to front, and rasterize them in that order
- Ray-tracing Algorithm: send one ray from the eye through each pixel of the image; compute the intersections between this ray and the objects of the scene, and keep only the closest intersection
- Z-Buffer Method: the most common method nowadays (see the sketch after this list)
- It stores, for each pixel, the depth (z) of the closest surface drawn so far
- When a new polygon is rasterized, for each pixel, the algorithm compares the depth of the current polygon with the stored depth
- If the new polygon is closer, the color and depth stored for that pixel are updated
- Otherwise, it means that for this pixel, a formerly drawn polygon hides the current polygon
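A minimal plain-JavaScript sketch of the z-buffer test (illustrative, not a real rasterizer; the resolution is an arbitrary choice):

```js
const W = 640, H = 480;
const zBuffer = new Float32Array(W * H).fill(Infinity);  // depth of closest fragment so far
const colorBuffer = new Uint32Array(W * H);              // that fragment's color

// Called for each pixel covered by the polygon currently being rasterized.
function writeFragment(x, y, depth, color) {
  const i = y * W + x;
  if (depth < zBuffer[i]) {   // new fragment is closer than what is stored
    zBuffer[i] = depth;
    colorBuffer[i] = color;
  }                           // otherwise an earlier polygon hides this one here
}
```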

Three.js
- It is built on top of WebGL and has a simpler API
- It uses the `<canvas>` HTML element
- It is made up of many classes. Three of the most basic are:
- THREE.Scene
- It is the root node of the scene graph
- It is a holder (a list) for all the objects of the 3D world, including lights, visible objects, and cameras.
- THREE.Camera
- It is a special object that represents a viewpoint.
- It represents a combination of a viewing transformation and a projection
- THREE.WebGLRenderer
- This is the most common renderer
- It uses WebGL 2 if available, or WebGL 1 if v2 isn’t available
- It is an object that can create an image from a scene graph
- There are other types of cameras:
- `camera = new THREE.OrthographicCamera( left, right, top, bottom, near, far );`
- Similar to glOrtho() in OpenGL
- `camera = new THREE.PerspectiveCamera( fieldOfViewAngle, aspect, near, far );`
- Similar to gluPerspective() in OpenGL’s GLU library
- `fieldOfViewAngle`: the vertical extent of the view volume, given as an angle measured in degrees
- `aspect`: the ratio between the horizontal and vertical extents (like the shape of the window); it should usually be set to canvas.width/canvas.height
- `near`, `far`: the z-limits of the view volume, given as distances from the camera. For a perspective projection, both must be positive, with near less than far
- This command is essential in every Three.js app, as it produces the final image:
- `renderer.render( scene, camera );`
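Putting the three basic classes together, a minimal sketch of a complete Three.js program (the cube, its color, and the camera settings are arbitrary illustrative choices):

```js
import * as THREE from "three";

// Scene: the root of the scene graph; holds objects, lights, and cameras.
const scene = new THREE.Scene();

// Camera: a viewpoint plus a perspective projection.
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 5;

// Renderer: draws the scene onto a <canvas> element using WebGL.
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A visible object: geometry (shape) plus material (appearance).
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x2194ce })
);
scene.add(cube);

// Produce the final 2D image from the 3D scene.
renderer.render(scene, camera);
```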
- THREE.Object3D
- The scene graph is made up of objects of type THREE.Object3D
- Cameras, lights, visible objects, and even THREE.Scene itself are all Object3Ds
- A THREE.Object3D object can hold a list of child THREE.Object3D objects, so the scene graph forms a tree
- Using `node.add(obj)` and `node.remove(obj)`
- Every node has a pointer to its parent, `obj.parent`, which is set automatically and shouldn’t be set directly
- The children of a THREE.Object3D are stored in `obj.children`, which is a JS array (see the sketch below)
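A small sketch of building such a hierarchy (the group, geometry, and positions are arbitrary):

```js
import * as THREE from "three";

// A two-level scene graph: a group node with two child meshes.
const wheelGeometry = new THREE.CylinderGeometry(1, 1, 0.5);
const material = new THREE.MeshBasicMaterial({ color: 0x333333 });

const car = new THREE.Group();   // Group is a plain Object3D used as a container
const frontWheel = new THREE.Mesh(wheelGeometry, material);
const backWheel  = new THREE.Mesh(wheelGeometry, material);
frontWheel.position.x =  2;
backWheel.position.x  = -2;

car.add(frontWheel);             // sets frontWheel.parent = car automatically
car.add(backWheel);

const scene = new THREE.Scene();
scene.add(car);

console.log(car.children.length);        // -> 2 (obj.children is a JS array)
console.log(frontWheel.parent === car);  // -> true

// Transforming the parent transforms the entire subtree:
car.position.y = 1;
```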