In computer graphics, a shader is a type of computer program originally used for shading in 3D scenes (producing appropriate levels of light, darkness, and color in a rendered image). Shaders now perform a variety of specialized functions across computer graphics special effects, do video post-processing unrelated to shading, or even perform computations unrelated to graphics at all.
The usual rendering technique on most GPUs is called Immediate Mode Rendering (IMR), in which geometry is sent to the GPU and drawn straight away. This simple architecture is relatively inefficient, wasting processing power and memory bandwidth: pixels are often rendered despite never being visible on the screen, such as when a car is completely obscured by a closer building.
A Tile-Based Deferred Rendering architecture works in a much more intelligent way. It captures the whole scene before starting to render, so occluded pixels can be identified and rejected before they are processed. The hardware splits the geometry data into small rectangular regions, called “tiles”, that are each processed as one image. Every tile is rasterized and processed separately, and because each tile is so small, all of its data can be kept in very fast on-chip memory.
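To make the tiling step concrete, here is a minimal sketch of how geometry might be binned into tiles before rasterization. The 32×32 tile size, the data layout, and the bounding-box test are all illustrative assumptions, not a description of any real GPU's hardware binner.

```python
# Illustrative sketch of tile binning (not a real GPU driver): each
# triangle's screen-space bounding box determines which tiles must
# process it. The 32x32 tile size is a hypothetical choice.

TILE = 32  # tile width/height in pixels (illustrative)

def bin_triangles(triangles, screen_w, screen_h):
    """Map each tile (tx, ty) to the list of triangle indices whose
    screen-space bounding box overlaps that tile."""
    bins = {}
    for i, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Clamp the bounding box to the screen, then convert to tile indices.
        x0 = max(int(min(xs)) // TILE, 0)
        y0 = max(int(min(ys)) // TILE, 0)
        x1 = min(int(max(xs)) // TILE, (screen_w - 1) // TILE)
        y1 = min(int(max(ys)) // TILE, (screen_h - 1) // TILE)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

tris = [((2, 2), (20, 4), (10, 28)),      # fits inside tile (0, 0)
        ((40, 40), (90, 45), (60, 90))]   # spans several tiles
bins = bin_triangles(tris, 128, 128)
```

Once binning is done, each tile's triangle list can be rasterized independently, entirely in on-chip memory.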
Deferred rendering means that the architecture will defer all texturing and shading operations until all objects have been tested for visibility. This significantly reduces system memory bandwidth requirements, which in turn increases performance and reduces power requirements. This is a critical advantage for phones, tablets, and other devices where battery life makes all the difference.
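A toy model helps show where the deferred savings come from: every fragment is depth-tested first, and only the single visible (nearest) fragment per pixel is ever shaded. The pixel coordinates, depths, and surface names below are made up for illustration.

```python
# Toy model of deferred shading: depth-test every fragment first, then
# shade only the nearest fragment per pixel. Smaller depth = closer.

def visible_fragments(fragments):
    """fragments: list of (pixel, depth, surface). Returns the nearest
    fragment per pixel -- the only ones a deferred architecture shades."""
    nearest = {}
    for pixel, depth, surface in fragments:
        if pixel not in nearest or depth < nearest[pixel][0]:
            nearest[pixel] = (depth, surface)
    return nearest

frags = [((0, 0), 0.8, "car"),       # obscured by a nearer building
         ((0, 0), 0.3, "building"),
         ((1, 0), 0.5, "road")]
shaded = visible_fragments(frags)
# An IMR GPU might run the fragment shader for all 3 fragments;
# a deferred architecture shades only the 2 visible ones.
```

The texturing and shading work skipped for the hidden "car" fragment is exactly the bandwidth and power saving described above.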
What if you were asked the very basic question: how do computers work?
A brief answer:
A more comprehensive answer:
This part covers key principles to follow to avoid critical performance flaws when creating or optimizing graphics apps. The following recommendations come from the real-world experience of the industry and its developers.
- Understand the Target Device
- Learn as much as possible about the target device and its graphics architecture so that you can use the device in the most efficient manner possible.
- Profile the Workload
- Identify the bottlenecks in the apps you are optimizing and determine whether there are opportunities for improvement.
- Perform Clears Properly
- Clear a framebuffer’s contents at the start of a frame to avoid fetching the previous frame’s data on tile-based graphics architectures; this reduces memory bandwidth.
- Do Not Update Data Buffers Mid-Frame
- Avoid touching any buffer when a frame is mid-flight to reduce stalls and temporary buffer stores.
- Use Texture Compression
- Reduce the memory footprint and bandwidth cost of texture assets.
- Use Mipmapping
- This increases texture cache efficiency, which reduces bandwidth and increases performance.
- Do Not Use Discard
- Avoid forcing depth-test processing in the texture stage, as this decreases performance on architectures with early depth rejection.
- Do Not Force Unnecessary Synchronization
- Avoid API functionality that could stall the graphics pipeline and do not access any hardware buffer directly.
- Move Calculations “To the Front”
- Reduce the overall number of calculations by moving them earlier in the pipeline, where there are fewer instances to process.
- Group per Material
- Grouping geometry and texture data can improve app performance.
- Do Not Use Depth Pre-pass
- Depth pre-pass is redundant on deferred rendering architectures.
- Prefer Explicit APIs
- Graphics apps made with explicit APIs tend to run more efficiently, if set up correctly.
- Prefer Lower Data Precision
- Lower precision shader variables should be used, where appropriate, to improve performance.
- Use All CPU Cores
- Using multi-threading in apps is critical to efficient CPU use.
- Use Indexed Lists
- Indexed lists can reduce mesh storage requirements by eliminating redundant vertices.
- Use On-chip Memory Efficiently for Deferred Rendering
- Making better use of on-chip memory reduces overall system memory bandwidth usage.
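To put a number on the mipmapping recommendation above: a full mip chain adds only about one third to a texture's memory footprint, while letting the GPU sample from a level whose texels map closely to screen pixels, which keeps the texture cache efficient. A quick illustrative calculation, assuming a square power-of-two texture at 4 bytes per texel:

```python
def mip_chain_bytes(size, bytes_per_texel=4):
    """Total bytes for a square texture plus its full mip chain,
    halving each dimension down to 1x1."""
    total = 0
    while size >= 1:
        total += size * size * bytes_per_texel
        size //= 2
    return total

base = 1024 * 1024 * 4         # 1024x1024 RGBA8 base level: 4 MiB
full = mip_chain_bytes(1024)   # base level plus all mip levels
overhead = full / base - 1.0   # roughly 1/3 extra memory
```

A third more storage in exchange for far fewer cache misses on distant surfaces is almost always a good trade.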
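The "group per material" tip above can be made concrete by counting state changes: sorting draw calls so that draws sharing a material run back-to-back minimizes expensive pipeline and texture rebinds. The mesh and material names in this sketch are made up:

```python
def count_state_changes(draws):
    """Number of times the bound material changes while issuing
    the draw calls in the given order."""
    changes = 0
    current = None
    for mesh, material in draws:
        if material != current:
            changes += 1
            current = material
    return changes

draws = [("rock", "stone"), ("crate", "wood"), ("boulder", "stone"),
         ("barrel", "wood"), ("statue", "stone")]

unsorted_changes = count_state_changes(draws)
sorted_changes = count_state_changes(sorted(draws, key=lambda d: d[1]))
# Issuing the draws in submission order rebinds the material 5 times;
# sorted by material, only 2 binds are needed.
```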
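On the lower-precision tip above: halving a variable's width halves the bandwidth and register space its values consume; for example, a 16-bit half-float attribute costs 2 bytes instead of 4. A back-of-the-envelope calculation (the vertex count is illustrative):

```python
import struct

# Storage cost of one (x, y, z) position attribute per vertex.
VERTICES = 100_000
fp32_bytes = VERTICES * 3 * struct.calcsize("f")  # 32-bit floats
fp16_bytes = VERTICES * 3 * 2                     # 16-bit half floats
savings = fp32_bytes - fp16_bytes                 # bytes saved per copy
```

The "where appropriate" caveat matters: half precision is usually fine for colors and normals, but can introduce visible error in large world-space positions.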
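The indexed-list tip above can be sketched as deduplicating shared vertices into one vertex buffer plus an index buffer. Even two triangles forming a quad share an edge, so two of six vertices are redundant; the byte sizes assume 12-byte positions and 16-bit indices:

```python
def index_mesh(triangles):
    """Deduplicate vertices across triangles: return (vertices, indices)."""
    vertices, lookup, indices = [], {}, []
    for tri in triangles:
        for v in tri:
            if v not in lookup:
                lookup[v] = len(vertices)
                vertices.append(v)
            indices.append(lookup[v])
    return vertices, indices

# Two triangles forming a quad share an edge (two vertices).
quad = [((0, 0, 0), (1, 0, 0), (1, 1, 0)),
        ((0, 0, 0), (1, 1, 0), (0, 1, 0))]
verts, idx = index_mesh(quad)

flat_bytes = 6 * 12                              # 6 unindexed vertices
indexed_bytes = len(verts) * 12 + len(idx) * 2   # 4 vertices + 6 indices
```

For large meshes, where interior vertices are shared by up to six triangles, the saving is far bigger than in this tiny example.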
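For the "use all CPU cores" tip above, a common pattern is to split per-frame work (animation, culling, command building) into independent items and farm them out to a thread pool. A minimal sketch using Python's `concurrent.futures`; the `simulate_object` work item is a made-up stand-in:

```python
from concurrent.futures import ThreadPoolExecutor
import os

def simulate_object(obj_id):
    # Stand-in for independent per-object frame work
    # (animation, culling, command building, ...).
    return obj_id * obj_id

objects = range(1000)
# Size the pool to the number of available CPU cores.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(simulate_object, objects))
```

In CPU-bound Python the GIL limits true parallelism, so the structure is the point here, not the language; in a C++ or Rust engine the same fan-out pattern uses every core.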