There are three common ways to light a scene:
Compute lighting per vertex in the vertex shader; let DDA/Bresenham interpolate the resulting colors to fragments.
Include texture maps that are already lit, with shadows and so on pre-computed.
Also called Phong shading (not to be confused with Phong lighting), in this model the data needed to perform the lighting is interpolated via DDA/Bresenham to each pixel, where the actual lighting happens.
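The first and third approaches differ mainly in where the lighting math runs. As a sketch of per-vertex lighting (the attribute and uniform names here are hypothetical), a WebGL2 vertex shader might look like:

```glsl
#version 300 es
in vec4 position;
in vec3 normal;
uniform mat4 mv, p;        // hypothetical modelview and projection matrices
uniform vec3 lightdir;     // hypothetical directional light (unit length)
out vec4 vColor;           // DDA/Bresenham interpolates this to each fragment

void main() {
    gl_Position = p * mv * position;
    vec3 n = normalize(mat3(mv) * normal);
    // the lighting happens here, once per vertex, not once per pixel
    vColor = vec4(vec3(max(dot(n, lightdir), 0.0)), 1.0);
}
```

Per-pixel lighting instead interpolates `n` (and whatever else the lighting needs) and moves the `dot(n, lightdir)` work into the fragment shader.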
Within the per-pixel lighting model there are two common approaches:
Light every fragment, even though some fragments will later be discarded by the depth test; this is what forward rendering does.
Also called deferred shading (some people use deferred rendering and deferred shading to mean two different but related ideas, but most sources I consulted in 2022 used them as synonyms), this is a two-pass system.
Pass 1: instead of a color buffer, the data needed to compute colors are stored in several G-buffers, filtered by the depth buffer so that we end up with just one position, normal, and material per pixel.
Pass 2: iterate over the G-buffers, computing one color per pixel (not per fragment like forward rendering does).
You’ll need (at least) two GLSL programs to implement deferred rendering.
The first pass's geometry inputs and vertex shader source are the same as you'd use for forward rendering.
The first pass's fragment shader should have several `out vec4`s. WebGL2 guarantees you can use at least 4, and depending on the GPU and browser maybe as many as 16; most GPUs and browsers I checked in 2022 supported either 4 or 8.
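The actual limit can be queried at run time; a small sketch, assuming `gl` is your WebGL2 rendering context:

```js
// how many out vec4s a fragment shader can write on this GPU + browser;
// WebGL2 guarantees both of these values are at least 4
const maxOuts = Math.min(gl.getParameter(gl.MAX_DRAW_BUFFERS),
                         gl.getParameter(gl.MAX_COLOR_ATTACHMENTS))
```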
The fragment shader should do as little work as possible. If you have few enough inputs that all of them can fit into the available `out vec4`s, just put them there and end. If you have too many, do the minimal amount of work needed to get the remaining information to fit in those `out vec4`s. For example, if you have a base color with a translucent texture overlay you could look up the texture and combine the two into a single color – but only do this if you can't fit the texture coordinate and mip-map level into the `out vec4`s.
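Putting that together, a pass-1 fragment shader that just stores its inputs might look like the following sketch (the varying, uniform, and output names are illustrative, not required):

```glsl
#version 300 es
precision highp float;
in vec3 vPosition;       // hypothetical varyings from the pass-1 vertex shader
in vec3 vNormal;
uniform vec4 material;   // hypothetical per-object material color

layout(location = 0) out vec4 gPosition;
layout(location = 1) out vec4 gNormal;
layout(location = 2) out vec4 gMaterial;

void main() {
    // no lighting here: just store what pass 2 will need
    gPosition = vec4(vPosition, 1.0);
    gNormal   = vec4(normalize(vNormal), 0.0);
    gMaterial = material;
}
```

`layout(location = i)` pairs each output with the `gl.COLOR_ATTACHMENT0 + i` texture attached in the G-buffer setup below.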
Pass 1 needs to render into a G-buffer, which we implement using an off-screen structure called a frame buffer:
```js
var gBuffer = gl.createFramebuffer()        // make the G-buffer
gl.bindFramebuffer(gl.FRAMEBUFFER, gBuffer) // and tell GL to use it
```
The frame buffer needs to store its various out vectors into textures for the second pass to use:
```js
gl.activeTexture(gl.TEXTURE0) // in pass 1, can re-use one active texture

// set up the output buffers
var outBuffers = []
var targets = []
for(let i = 0; i < numberOfOutputs; i += 1) {
    var target = gl.createTexture()
    gl.bindTexture(gl.TEXTURE_2D, target)
    // ... set up pixelStore and texParameteri here like usual ...
    gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA16F,
        gl.drawingBufferWidth, gl.drawingBufferHeight)
    gl.framebufferTexture2D(gl.FRAMEBUFFER,
        gl.COLOR_ATTACHMENT0 + i, gl.TEXTURE_2D, target, 0);
    targets.push(target)
    outBuffers.push(gl.COLOR_ATTACHMENT0 + i)
}
gl.drawBuffers(outBuffers)

// also need a depth buffer
var depthTexture = gl.createTexture()
gl.bindTexture(gl.TEXTURE_2D, depthTexture)
// ... set up pixelStore and texParameteri here like usual ...
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.DEPTH_COMPONENT16,
    gl.drawingBufferWidth, gl.drawingBufferHeight)
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
    gl.TEXTURE_2D, depthTexture, 0);
```
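Two sanity checks are worth adding here. In WebGL2, rendering to a floating-point color attachment like `gl.RGBA16F` requires the `EXT_color_buffer_float` extension, and an incomplete framebuffer silently draws nothing; a sketch:

```js
// RGBA16F color attachments are only renderable with this extension
if (!gl.getExtension('EXT_color_buffer_float')) {
    console.error('floating-point G-buffers not supported; consider gl.RGBA8')
}
// verify the attachments form a complete, renderable framebuffer
if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
    console.error('G-buffer framebuffer is incomplete')
}
```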
The second pass setup needs all of the G-buffer textures, and may need other textures for the usual texture look-ups too:
```js
for(let i = 0; i < targets.length; i += 1) {
    gl.activeTexture(gl.TEXTURE0 + i)
    gl.bindTexture(gl.TEXTURE_2D, targets[i])
}
// bind other textures using `gl.TEXTURE0 + targets.length` and beyond
```
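Binding textures to texture units is only half the connection; the second pass's `sampler2D` uniforms must also be told which unit to read. A sketch, assuming the pass-2 shader names its samplers `gBuffer0`, `gBuffer1`, and so on (any names work as long as they match):

```js
gl.useProgram(glslProgramForSecondPass)
for(let i = 0; i < targets.length; i += 1) {
    // 'gBuffer' + i is a hypothetical naming scheme; use whatever
    // sampler names your pass-2 fragment shader declares
    gl.uniform1i(gl.getUniformLocation(glslProgramForSecondPass, 'gBuffer' + i), i)
}
```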
The first pass drawing code needs to use the framebuffer and depth tests:
```js
gl.bindFramebuffer(gl.FRAMEBUFFER, gBuffer)
gl.useProgram(glslProgramForFirstPass)
gl.enable(gl.DEPTH_TEST)
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)
// draw the geometry here
```
The second pass drawing code needs to use the default framebuffer and no depth test:
```js
gl.bindFramebuffer(gl.FRAMEBUFFER, null)
gl.useProgram(glslProgramForSecondPass)
gl.disable(gl.DEPTH_TEST)
gl.clear(gl.COLOR_BUFFER_BIT)
// draw the full-screen quad here
```
The second pass geometry should be a simple quad that fills the screen, with a "where on the screen is it" texture coordinate.
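That "where on the screen" coordinate can be derived from the clip-space positions, so no extra attribute buffer is needed. A sketch of a pass-2 vertex shader (the names are illustrative):

```glsl
#version 300 es
in vec4 position;   // the quad corners in clip space: (±1, ±1, 0, 1)
out vec2 screenUV;  // hypothetical "where on the screen is it" coordinate

void main() {
    gl_Position = position;
    screenUV = position.xy * 0.5 + 0.5;   // map [-1,1] clip space to [0,1]
}
```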
The second pass fragment shader should look up the data stored by the first pass in the supplied `uniform sampler2D`s and use them to complete the lighting and shading computation.
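As a sketch of that final step, a pass-2 fragment shader for a single directional light might look like this (the sampler and uniform names are hypothetical and must match the JavaScript setup):

```glsl
#version 300 es
precision highp float;
in vec2 screenUV;                                 // from the full-screen quad
uniform sampler2D gPosition, gNormal, gMaterial;  // the pass-1 G-buffers
uniform vec3 lightdir, lightcolor;                // hypothetical light parameters
out vec4 fragColor;                               // one color per pixel

void main() {
    vec3 n = normalize(texture(gNormal, screenUV).xyz);
    vec4 m = texture(gMaterial, screenUV);
    float lambert = max(dot(n, normalize(lightdir)), 0.0);
    fragColor = vec4(m.rgb * lightcolor * lambert, m.a);
}
```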