
How to optimize for GPU / Graphics workload performance

This part covers key principles to follow to avoid critical performance flaws when creating or optimizing graphics apps. The following recommendations come from real industry experience and the developers within it.

  1. Understand the Target Device
    • Learn as much as possible about the device and its graphics architecture so that you can use the device in the most efficient manner possible.
  2. Profile the Workload
    • Identify the bottlenecks in the apps you are optimizing and determine whether there are opportunities for improvement.
  3. Perform Clear Well
    • Perform a clear on a framebuffer’s contents to avoid fetching the previous frame’s data on tile-based graphics architectures; this reduces memory bandwidth usage.
  4. Do Not Update Data Buffers Mid-Frame
    • Avoid touching any buffer when a frame is mid-flight to reduce stalls and temporary buffer stores.
  5. Use Texture Compression
    • Reduce the memory footprint and bandwidth cost of texture assets.
  6. Use Mipmapping
    • This increases texture cache efficiency, which reduces bandwidth and increases performance.
  7. Do Not Use Discard
    • Avoid forcing depth-test processing in the texture stage, as this decreases performance on architectures with early depth rejection.
  8. Do Not Force Unnecessary Synchronization
    • Avoid API functionality that could stall the graphics pipeline and do not access any hardware buffer directly.
  9. Move Calculations “To the Front”
    • Reduce the overall number of calculations by moving them earlier in the pipeline, where there are fewer instances to process.
  10. Group per Material
    • Grouping geometry and texture data by material can improve app performance.
  11. Do Not Use Depth Pre-pass
    • Depth pre-pass is redundant on deferred rendering architectures.
  12. Prefer Explicit APIs
    • Graphics apps made with explicit APIs tend to run more efficiently, if set up correctly.
  13. Prefer Lower Data Precision
    • Lower-precision shader variables should be used, where appropriate, to improve performance (see the sketch after this list).
  14. Use All CPU Cores
    • Using multi-threading in apps is critical to efficient CPU use.
  15. Use Indexed Lists
    • Indexed lists can reduce mesh storage requirements by eliminating redundant vertices.
  16. Use On-chip Memory Efficiently for Deferred Rendering
    • Making better use of on-chip memory reduces overall system memory bandwidth usage.
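
To make item 13 concrete, here is a minimal Metal Shading Language sketch of a fragment function that does its color math in lower-precision half variables where full float precision isn't needed. The shader, its struct layout, and all names are illustrative assumptions, not code from any particular app:

    #include <metal_stdlib>
    using namespace metal;

    // Interpolated inputs for a simple lit fragment (illustrative layout).
    struct FragmentIn {
        float4 position [[position]];
        half3  normal;     // unit vectors fit comfortably in half precision
        half3  lightDir;
    };

    // Color math rarely needs 32-bit floats; doing it in half precision can
    // reduce register pressure and improve throughput on many mobile GPUs.
    fragment half4 lit_fragment(FragmentIn in [[stage_in]]) {
        half  diffuse = max(dot(in.normal, in.lightDir), 0.0h);
        half3 base    = half3(0.8h, 0.2h, 0.2h);  // illustrative material color
        return half4(base * diffuse, 1.0h);
    }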

What is the Graphics Rendering Pipeline?

[Figure: Graphics (GPU) Rendering Pipeline]

The GPU pipeline takes the vertices through several stages, during which their coordinates are transformed between various spaces. Programmers using different graphics frameworks care about different stages of the pipeline. Normally, the GPU vendor takes care of the “Vertex Fetch”, “Primitive Assembly”, “Rasterization”, and “Framebuffer” stages in its drivers, while the graphics programmer only needs to care about the Vertex and Fragment Processing stages, which are programmable through graphics APIs such as Metal, Vulkan, or OpenGL.

1. Vertex Fetch

Different graphics Application Programming Interfaces (APIs) have different names for this stage. In order to start rendering 3D content, we first need a scene. A scene consists of models, which have meshes of vertices. For example, one of the simplest models is the cube, which consists of six faces (12 triangles). A specific hardware unit called the Scheduler sends the vertices and their attributes on to the Vertex Processing stage.
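
For instance, a cube's mesh can be described by eight shared corner vertices plus an index list that assembles them into the 12 triangles. This is a minimal C++ sketch of such a layout; the struct, ordering, and winding here are illustrative assumptions:

    #include <array>
    #include <cstdint>

    // One vertex attribute: a position in model space.
    struct Vertex { float x, y, z; };

    // The 8 corners of a unit cube centered at the origin.
    constexpr std::array<Vertex, 8> kCubeVertices = {{
        {-0.5f, -0.5f, -0.5f}, { 0.5f, -0.5f, -0.5f},
        { 0.5f,  0.5f, -0.5f}, {-0.5f,  0.5f, -0.5f},
        {-0.5f, -0.5f,  0.5f}, { 0.5f, -0.5f,  0.5f},
        { 0.5f,  0.5f,  0.5f}, {-0.5f,  0.5f,  0.5f},
    }};

    // 6 faces x 2 triangles x 3 indices = 36 indices referencing only 8
    // vertices -- the storage saving behind "Use Indexed Lists" above.
    constexpr std::array<std::uint16_t, 36> kCubeIndices = {{
        0, 1, 2,  2, 3, 0,   // back
        4, 6, 5,  6, 7, 4,   // front
        0, 3, 7,  7, 4, 0,   // left
        1, 5, 6,  6, 2, 1,   // right
        3, 2, 6,  6, 7, 3,   // top
        0, 4, 5,  5, 1, 0,   // bottom
    }};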

2. Vertex Processing

At this stage, vertices are processed one by one. Programmers write code to calculate per-vertex lighting and color, and to send vertex coordinates through various coordinate spaces to reach their position in the final framebuffer. The CPU sends the GPU a vertex buffer that the programmer created from the model mesh. The programmer configures the vertex buffer using a vertex descriptor, which tells the GPU how the vertex data is structured. On the GPU, the programmer creates a struct to encapsulate the vertex attributes. The vertex shader takes this struct as a function argument and, through a qualifier, knows that the position comes from the CPU via its location in the vertex buffer. The vertex shader then processes all the vertices and returns their positions as a float4. A specific hardware unit called the Distributer sends the grouped blocks of vertices on to the Primitive Assembly stage.
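
Here is a minimal Metal Shading Language sketch of such a vertex function; the struct fields, the Uniforms matrix, and the buffer index are illustrative assumptions:

    #include <metal_stdlib>
    using namespace metal;

    // Vertex attributes, laid out to match the vertex descriptor.
    struct VertexIn {
        float3 position [[attribute(0)]];
    };

    // Transform matrices supplied by the CPU (name and index illustrative).
    struct Uniforms {
        float4x4 modelViewProjection;
    };

    // Runs once per vertex and returns its clip-space position as a float4.
    vertex float4 vertex_main(const VertexIn in [[stage_in]],
                              constant Uniforms &uniforms [[buffer(1)]]) {
        return uniforms.modelViewProjection * float4(in.position, 1.0);
    }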

3. Primitive Assembly

The former stage sent processed vertices, grouped into blocks of data, to this stage. One thing to keep in mind is that vertices belonging to the same geometrical shape (primitive) are always in the same block: the one vertex of a point, the two vertices of a line, or the three vertices of a triangle will always be in the same block, so a second block fetch is never necessary. At this stage, primitives are fully assembled from connected vertices and move on to the rasterizer.

4. Rasterization

For every object in the scene, rays are sent back into the screen to check which pixels are covered by the object. Depth information is kept, so the pixel color is updated only if the current object is closer than the previously saved one. At this point, all connected vertices sent from the previous stage need to be represented on a two-dimensional grid using their X/Y coordinates. This step is called triangle setup. Then a process called scan conversion runs on each line of the screen, looking for intersections to determine what is visible and what is not. For drawing onto the screen, only the vertices and the slopes they determine are needed. The stored depth information is used to determine whether each point is in front of other points in the scene. After this stage, the Scheduler unit again dispatches work to the shader cores, but this time it is the rasterized fragments that are sent for Fragment Processing.
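
The depth test can be pictured with a short C++ sketch (purely illustrative; on real hardware this is done by fixed-function units, not application code):

    #include <cstddef>
    #include <cstdint>
    #include <limits>
    #include <vector>

    struct Framebuffer {
        int width, height;
        std::vector<float>         depth;  // one depth value per pixel
        std::vector<std::uint32_t> color;  // one packed RGBA color per pixel

        Framebuffer(int w, int h)
            : width(w), height(h),
              depth(static_cast<std::size_t>(w) * h,
                    std::numeric_limits<float>::max()),
              color(static_cast<std::size_t>(w) * h, 0) {}

        // Write the fragment only if it is closer than what is stored.
        void writeFragment(int x, int y, float z, std::uint32_t rgba) {
            const int i = y * width + x;
            if (z < depth[i]) {  // smaller z = closer to the camera
                depth[i] = z;
                color[i] = rgba;
            }
        }
    };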

5. Fragment Processing

The primitive processing in the previous stages was sequential, because there is only one Primitive Assembly unit and one Rasterization unit. However, as soon as fragments reach the Scheduler, the work can be forked (divided) into many segmented sections, and each section is given to an available shader core, so thousands of cores do the processing in parallel. This fragment processing stage is another programmable stage. The programmer creates a fragment shader function that receives the lighting, texture coordinate, depth, and color information that the vertex function outputs, with all the attributes interpolated for each fragment. The fragment shader's output is a single color for that fragment, and each of these fragments contributes to the color of the final pixel in the framebuffer.
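
A minimal Metal Shading Language sketch of such a fragment function (the struct and names are illustrative assumptions):

    #include <metal_stdlib>
    using namespace metal;

    // The vertex stage's interpolated output, one sample per fragment.
    struct VertexOut {
        float4 position [[position]];
        float4 color;
    };

    // Runs once per rasterized fragment and returns a single color, which
    // the Color Writing unit later writes into the framebuffer.
    fragment float4 fragment_main(VertexOut in [[stage_in]]) {
        return in.color;
    }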

6. Framebuffer

As soon as fragments have been processed into pixels, the Distributer unit sends them to the Color Writing unit. This unit is responsible for writing the final color to a special memory location called the framebuffer. From there, the view gets its colored pixels refreshed every frame. A technique called double-buffering is used for this last step: while the first buffer is being shown on the display, the second one is updated in the background. Then the two buffers are swapped, the second one is displayed while the first is updated, and the cycle continues.
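
Double-buffering itself is simple enough to sketch in a few lines of C++ (illustrative only; in practice the swap is performed by the windowing system or display controller):

    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct DoubleBuffer {
        std::vector<std::uint32_t> front;  // currently shown on the display
        std::vector<std::uint32_t> back;   // being rendered in the background

        explicit DoubleBuffer(std::size_t pixels)
            : front(pixels), back(pixels) {}

        // Called once per frame, after rendering into `back` completes.
        void present() { std::swap(front, back); }
    };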

What is the benefit of using half-cycle-path?

Until now, we have focused on the timing of the one-cycle-path, or full-cycle-path. Sometimes a half-cycle-path may exist in a design. One example is when the launch flop is negedge triggered while the capture flop is still posedge triggered.

We discussed clock skew and how it affects STA in a previous post. Equivalently, a half-cycle-path can be modeled as a one-cycle-path with clock skew δ = +T/2. It is obvious that hold time closure is easier while setup time closure is harder for a half-cycle-path.
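
Concretely, for a negedge-launch to posedge-capture path with clock period $T$, clock-to-Q delay $t_{cq}$, combinational delay $t_{comb}$, and the capture flop's $t_{setup}$ and $t_{hold}$, the two checks become (a sketch in the usual STA notation, not taken from the earlier post):

$$t_{cq} + t_{comb} + t_{setup} \le \frac{T}{2} \qquad \text{(setup: only half a period is available)}$$

$$t_{cq} + t_{comb} \ge t_{hold} - \frac{T}{2} \qquad \text{(hold: relaxed by half a period)}$$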

You may wonder: what is the benefit of using a half-cycle-path in a design?

Continue reading → What is the benefit of using half-cycle-path?

Design a simple ALU and draw its logical block diagram

Given the ALU pseudo code below, write the Verilog code and draw its logical block diagram using only one full adder, bitwise OR/AND, and as few MUXes as possible.

The Verilog implementation should be straightforward:

Continue reading → Design a simple ALU and draw its logical block diagram