WebGPU Engine from Scratch Part 6: Diffuse Lighting

This content originally appeared on DEV Community and was authored by ndesmic

The scene is bland and could use some lighting so let's implement that. First we start with the Light entity:

//light.js
/**
 * @typedef {"point" | "directional" | "spot" } LightType
 */

export class Light {
    /**@type {LightType} */
    #type;
    #position;
    #direction;
    #color;

    constructor(light){
        this.#type = light.type ?? "point";
        this.#position = light.position ?? [0, 0, 0, 0];
        this.#direction = light.direction ?? [0, 0, 0, 0];
        this.#color = light.color ?? [1,1,1,1];
    }

    set type(val){
        this.#type = new Float32Array(val);
    }
    get type(){
        return this.#type;
    }

    set position(val){
        this.#position = new Float32Array(val);
    }
    get position(){
        return this.#position;
    }

    set direction(val){
        this.#direction = new Float32Array(val);
    }
    get direction(){
        return this.#direction;
    }

    set color(val){
        this.#color = new Float32Array(val);
    }
    get color(){
        return this.#color;
    }
}

This is similar to the WebGL version. Then let's create a light:

initializeLights(){
    this.#lights.set("light", new Light({
        type: "point",
        position: sphericalToCartesian([ Math.PI / 4, Math.PI / 4, 2]),
    }));
}

This follows the same pattern as the other initialization methods and requires this.#lights to be initialized as a Map. Then we need to pass the light to the shader in a bind group.
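The assumed setup can be condensed into a sketch like this (not the article's full engine class; the plain object stands in for a Light instance and the method names follow the article):

```javascript
//a condensed sketch: #lights is a Map on the engine class and
//initializeLights runs during overall engine initialization
class Engine {
    #lights = new Map();

    initialize(){
        this.initializeLights();
        //...cameras, meshes, pipelines...
    }
    initializeLights(){
        this.#lights.set("light", { type: "point", position: [1, 1, 1.414] });
    }
    get lights(){ return this.#lights; }
}
```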

I struggled for a while to get the correct size for the buffer and the correct offsets. Lots of little mistakes. I finally came up with a genericized way to pack structs:

//buffer-utils.js
/**
 * @typedef {[string,GpuType]} Prop
 * @typedef {Prop[]} Schema
 * @param {object} data 
 * @param {Schema} schema 
 * @param {number} minSize
 */
export function packStruct(data, schema, minSize){
    const { offsets, totalSize } = getAlignments(schema.map(s => s[1]), minSize);
    const buffer = new ArrayBuffer(totalSize);
    const dataView = new DataView(buffer);

    for(let i = 0; i < schema.length; i++){
        const [name, type] = schema[i];
        const value = data[name];
        //TODO: add other GPU Types
        switch(type){
            case "u32": {
                dataView.setUint32(offsets[i], value, true);
                break;
            }
            case "vec3f32": {
                dataView.setFloat32(offsets[i], value[0], true);
                dataView.setFloat32(offsets[i] + 4, value[1], true);
                dataView.setFloat32(offsets[i] + 8, value[2], true);
                break;
            }
            case "vec4f32": {
                dataView.setFloat32(offsets[i], value[0], true);
                dataView.setFloat32(offsets[i] + 4, value[1], true);
                dataView.setFloat32(offsets[i] + 8, value[2], true);
                dataView.setFloat32(offsets[i] + 12, value[3], true);
                break;
            }
            default: {
                throw new Error(`Cannot pack type ${type} at prop index ${i} with value ${value}`);
            }
        }
    }

    return buffer;
}

This is only partially complete since it covers just the types I need for the light right now, but filling in the rest should be easy and is left as an exercise for the reader (and my future self). With that we can pack a light into a buffer.
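packStruct leans on getAlignments, which isn't shown here. As a sketch of what it plausibly looks like (the real one lives in buffer-utils.js in the repo), it applies WGSL's alignment rules: u32 aligns to 4 bytes, vec3/vec4 of f32 to 16 bytes, and the struct's total size rounds up to the largest member alignment, clamped to minSize:

```javascript
//a sketch of getAlignments under WGSL alignment rules (assumed, not the repo's exact code)
const GPU_TYPE_INFO = {
    "u32":     { size: 4,  align: 4 },
    "f32":     { size: 4,  align: 4 },
    "vec2f32": { size: 8,  align: 8 },
    "vec3f32": { size: 12, align: 16 },
    "vec4f32": { size: 16, align: 16 }
};

function getAlignments(types, minSize = 0){
    const offsets = [];
    let offset = 0;
    let maxAlign = 1;
    for(const type of types){
        const { size, align } = GPU_TYPE_INFO[type];
        offset = Math.ceil(offset / align) * align; //round up to this member's alignment
        offsets.push(offset);
        offset += size;
        maxAlign = Math.max(maxAlign, align);
    }
    //round the struct size up to a multiple of the largest member alignment
    const totalSize = Math.max(Math.ceil(offset / maxAlign) * maxAlign, minSize);
    return { offsets, totalSize };
}
```

For the light schema below this yields offsets 0, 16, 32, 48 and a total size of 64 bytes.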

//buffer-utils.js
/**
 * 
 * @param {Light} light 
 */
export function packLight(light){
    const schema = [
        ["typeInt", "u32"],
        ["position", "vec3f32"],
        ["direction", "vec3f32"],
        ["color", "vec4f32"]
    ];
    return packStruct(light, schema, 64);
}

This illustrates how to use packStruct. It only works for shallow structs: you give it tuples of name and type, and the name looks up the property on the object. The schema is an array because I want to maintain strict ordering so there are no surprises.
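To make the layout concrete, here is the 64-byte buffer packLight produces, written by hand with a DataView. The offsets 0/16/32/48 come from the alignment rules; the schema's typeInt implies the light type gets converted to an integer somewhere, and mapping "point" to 0 here is my assumption, not something the article shows:

```javascript
//hand-rolled version of the layout packLight produces for a default point light
const lightBytes = new ArrayBuffer(64);
const view = new DataView(lightBytes);
view.setUint32(0, 0, true);                                            //typeInt at offset 0 ("point" -> 0, assumed mapping)
[1, 1, 1].forEach((v, i) => view.setFloat32(16 + i * 4, v, true));     //position at offset 16 (vec3, 4 bytes padding after)
[0, 0, 0].forEach((v, i) => view.setFloat32(32 + i * 4, v, true));     //direction at offset 32 (vec3, 4 bytes padding after)
[1, 1, 1, 1].forEach((v, i) => view.setFloat32(48 + i * 4, v, true));  //color at offset 48 (vec4)
```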

We can then set the bind group. I've made it a third group and added it to setMainBindGroups.

setMainLightBindGroup(passEncoder, bindGroupLayouts, lights){
    const lightData = packLight(lights.get("light"));
    const bufferSize = lightData.byteLength;
    const lightBuffer = this.#device.createBuffer({
        size: bufferSize,
        usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST | GPUBufferUsage.STORAGE,
        label: "main-light-buffer"
    });
    this.#device.queue.writeBuffer(lightBuffer, 0, lightData);
    const lightCountBuffer = this.#device.createBuffer({
        size: 4,
        usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
        label: "main-light-count-buffer"
    });
    this.#device.queue.writeBuffer(lightCountBuffer, 0, new Uint32Array([lights.size])); //the shader declares count as u32, so pack it unsigned
    const lightBindGroup = this.#device.createBindGroup({
        label: "main-light-bind-group",
        layout: bindGroupLayouts.get("lights"),
        entries: [
            {
                binding: 0,
                resource: {
                    buffer: lightBuffer,
                    offset: 0,
                    size: bufferSize
                }
            },
            {
                binding: 1,
                resource: {
                    buffer: lightCountBuffer,
                    offset: 0,
                    size: 4
                }
            }
        ]
    });
    passEncoder.setBindGroup(2, lightBindGroup);
}
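The article doesn't show the "lights" bind group layout this method binds against. Assuming it mirrors the shader declarations, it needs a read-only storage buffer at binding 0 (the light array) and a uniform buffer at binding 1 (the count); a plausible descriptor looks like:

```javascript
//a guess at the "lights" bind group layout descriptor (the repo has the real one);
//GPUShaderStage.FRAGMENT has the numeric value 2
const lightsBindGroupLayoutDescriptor = {
    label: "lights-bind-group-layout",
    entries: [
        { binding: 0, visibility: 2 /* GPUShaderStage.FRAGMENT */, buffer: { type: "read-only-storage" } },
        { binding: 1, visibility: 2 /* GPUShaderStage.FRAGMENT */, buffer: { type: "uniform" } }
    ]
};
//bindGroupLayouts.set("lights", this.#device.createBindGroupLayout(lightsBindGroupLayoutDescriptor));
```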

This process should be well understood by now. We are passing a light (we'll see soon that this can generalize to many lights) and the count of lights (the reason for which is explained below). Also keep in mind that lightBuffer has the usage flag GPUBufferUsage.STORAGE. We'll test things out in a shader based on our previous one.

//textured-lit.wgsl
struct VertexOut {
    @builtin(position) position : vec4<f32>,
    @location(0) uv : vec2<f32>
};
struct Uniforms {
    view_matrix: mat4x4<f32>,
    projection_matrix: mat4x4<f32>,
    model_matrix: mat4x4<f32>,
    normal_matrix: mat3x3<f32>,
    camera_position: vec3<f32>
}
struct Light {
    light_type: u32,
    position: vec3<f32>,
    direction: vec3<f32>,
    color: vec4<f32>
}
struct LightCount {
    count: u32
}

@group(0) @binding(0) var<uniform> uniforms : Uniforms;
@group(1) @binding(0) var main_sampler: sampler;
@group(1) @binding(1) var texture: texture_2d<f32>;
@group(2) @binding(0) var<storage, read> lights: array<Light>;
@group(2) @binding(1) var<uniform> light_count: LightCount;

@vertex
fn vertex_main(@location(0) position: vec3<f32>, @location(1) uv: vec2<f32>) -> VertexOut
{
    var output : VertexOut;
    output.position =  uniforms.projection_matrix * uniforms.view_matrix * uniforms.model_matrix * vec4<f32>(position, 1.0);
    output.uv = uv;
    return output;
}
@fragment
fn fragment_main(fragData: VertexOut) -> @location(0) vec4<f32>
{
    var light = lights[0];
    var lc = light_count.count;

    var tex = textureSample(texture, main_sampler, fragData.uv);
    return light.color;
}

We need to define the struct for the Light, which is pretty straightforward. What's a little more odd is LightCount, which holds a single u32. LightCount must be a struct: we cannot bind directly to scalar values, so they always need to be boxed in a struct. The other interesting part is how we define an array of lights with var<storage, read> lights: array<Light>;. This is cool because, unlike WebGL, we can actually pass arrays of dynamic length. The length is determined by the size of the bound buffer (WGSL's arrayLength() built-in can query it at runtime, but here we pass the count manually via a uniform). This array construct is also why the buffer usage was tagged with GPUBufferUsage.STORAGE: storage buffers are treated differently by the GPU.

In the fragment shader we're just doing a test to see if we can output the light color (white). This will help ensure that we got all the data packing correct. We also need some dummy reads of the light count and the texture; otherwise the unused bindings get stripped at compile time and we'll get errors (I hate this about WebGPU).

If everything went well and we pray to the alignment gods, we should see something like this.

White teapot and rug

Cool.

With that hooked up correctly now we can actually get to making our shader use normal diffuse lighting.

struct VertexOut {
    @builtin(position) frag_position : vec4<f32>,
    @location(0) world_position: vec4<f32>,
    @location(1) uv : vec2<f32>,
    @location(2) normal : vec3<f32>
};
struct Uniforms {
    view_matrix: mat4x4<f32>,
    projection_matrix: mat4x4<f32>,
    model_matrix: mat4x4<f32>,
    normal_matrix: mat3x3<f32>,
    camera_position: vec3<f32>
}
struct Light {
    light_type: u32,
    position: vec3<f32>,
    direction: vec3<f32>,
    color: vec4<f32>
}
struct LightCount {
    count: u32
}

@group(0) @binding(0) var<uniform> uniforms : Uniforms;
@group(1) @binding(0) var main_sampler: sampler;
@group(1) @binding(1) var texture: texture_2d<f32>;
@group(2) @binding(0) var<storage, read> lights: array<Light>;
@group(2) @binding(1) var<uniform> light_count: LightCount;

@vertex
fn vertex_main(@location(0) position: vec3<f32>, @location(1) uv: vec2<f32>, @location(2) normal: vec3<f32>) -> VertexOut
{
    var output : VertexOut;
    output.frag_position =  uniforms.projection_matrix * uniforms.view_matrix * uniforms.model_matrix * vec4<f32>(position, 1.0);
    output.world_position = uniforms.model_matrix * vec4<f32>(position, 1.0);
    output.uv = uv;
    output.normal = normal;
    return output;
}
@fragment
fn fragment_main(frag_data: VertexOut) -> @location(0) vec4<f32>
{   
    var surface_color = textureSample(texture, main_sampler, frag_data.uv);
    var diffuse = vec4(0.0);

    for(var i: u32 = 0; i < light_count.count; i++){
        var light = lights[i];
        var light_position = vec4(light.position.xyz, 1.0);
        var to_light = normalize(light_position - frag_data.world_position);
        var light_intensity = max(dot(normalize(frag_data.normal), to_light.xyz), 0.0);
        diffuse += light.color * vec4(light_intensity, light_intensity, light_intensity, 1);
    }

    return surface_color * diffuse;
}

We need to modify the pipeline to pass in the normals and pass them through the vertex shader along with the world position of the fragment. In the fragment shader we get the base color and apply each light in sequence. We take the direction from the fragment to the light and dot it with the normal to figure out how much light is reflected (if the light direction matches the normal, all of it is). Then we multiply the light color, surface color, and intensity together for the final color. Repeat for each light.
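The same math can be sanity-checked on the CPU. This is a plain JS mirror of the shader's per-light term, not engine code:

```javascript
//CPU-side mirror of the shader's diffuse term:
//intensity = max(dot(normalize(N), normalize(lightPos - fragPos)), 0)
const dot = (a, b) => a.reduce((sum, v, i) => sum + v * b[i], 0);
const normalize = v => { const len = Math.hypot(...v); return v.map(x => x / len); };

function diffuseIntensity(normal, fragPosition, lightPosition){
    const toLight = normalize(lightPosition.map((v, i) => v - fragPosition[i]));
    return Math.max(dot(normalize(normal), toLight), 0);
}
```

A light directly above an upward-facing fragment gives full intensity; a light directly below it gets clamped to zero.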

This works with one light, but what about an actual array of lights? We need a small modification.

//buffer-utils.js
export function packArray(data, schema, minSize){
    const { totalSize: structSize } = getAlignments(schema.map(s => s[1]), minSize);
    const totalSize = structSize * data.length;
    const buffer = new ArrayBuffer(totalSize);
    for(let i = 0; i < data.length; i++){
        packStruct(data[i], schema, minSize, buffer, i * structSize);
    }
    return buffer;
}

To pack an array we need to take the max alignment of the struct and make sure each element starts on that boundary. Luckily the padding from packing a single struct already gives us this, so we just need to put the elements into a single buffer. To do this, I modified packStruct to optionally take a buffer and an offset for where to place the packed bytes. It's not worth showing here, but you can look at the final code. Then we just change packLight into packLights and use packArray instead.
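Since the modified packStruct isn't shown, here is a condensed, self-contained sketch of the idea (only the types used so far; the repo has the real version). packStruct writes through an optional shared buffer at a byte offset, and packArray reuses the struct's padded size as the element stride:

```javascript
//condensed sketch, not the repo's exact code
const TYPE_INFO = {
    "u32":     { size: 4,  align: 4 },
    "vec3f32": { size: 12, align: 16 },
    "vec4f32": { size: 16, align: 16 }
};
function getAlignments(types, minSize = 0){
    const offsets = [];
    let offset = 0, maxAlign = 1;
    for(const type of types){
        const { size, align } = TYPE_INFO[type];
        offset = Math.ceil(offset / align) * align; //round up to member alignment
        offsets.push(offset);
        offset += size;
        maxAlign = Math.max(maxAlign, align);
    }
    return { offsets, totalSize: Math.max(Math.ceil(offset / maxAlign) * maxAlign, minSize) };
}
function packStruct(data, schema, minSize, existingBuffer, byteOffset = 0){
    const { offsets, totalSize } = getAlignments(schema.map(s => s[1]), minSize);
    const buffer = existingBuffer ?? new ArrayBuffer(totalSize);
    const view = new DataView(buffer, byteOffset, totalSize); //window into the shared buffer
    for(let i = 0; i < schema.length; i++){
        const [name, type] = schema[i];
        const value = data[name];
        if(type === "u32"){
            view.setUint32(offsets[i], value, true);
        } else { //vec3f32 or vec4f32
            const count = type === "vec3f32" ? 3 : 4;
            for(let j = 0; j < count; j++){
                view.setFloat32(offsets[i] + j * 4, value[j], true);
            }
        }
    }
    return buffer;
}
function packArray(data, schema, minSize){
    const { totalSize: structSize } = getAlignments(schema.map(s => s[1]), minSize);
    const buffer = new ArrayBuffer(structSize * data.length);
    for(let i = 0; i < data.length; i++){
        packStruct(data[i], schema, minSize, buffer, i * structSize); //each element at its stride
    }
    return buffer;
}
```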

When we apply it something weird happens.

Side of teapot with red light. Rug is lit toward camera but teapot is lit from bottom

Here I have a red light at (0, 0, -1). This should be slightly in front of the camera, and in the top-down view of the surface mesh (looking along -Y) we can see that it is.

Top down of teapot with red light on bottom of plane

But the bottom of the teapot is lit up, not the side. -X and +X work just fine though, so it appears as though Y and Z are flipped! It took a while to figure out, but it's because when we bake the transforms we don't transform the normals! Recall that to properly transform normals we need to take the top-left 3x3 of the model matrix and use the transpose of its inverse (in this case we do not need to multiply by the view matrix since we're working in world space). This also requires us to alter the matrix-vector multiplication, because we had assumed it was always a 4x4 matrix times a 4-component vector, but now we need variable sizes.

//vector.js
/**
 * 
 * @param {Float32Array} vector 
 * @param {Float32Array} matrix
 * @param {number} size
 * @returns 
 */
export function multiplyMatrixVector(vector, matrix, size) {
    const newVector = new Float32Array(size);
    for(let i = 0; i < size; i++){
        let sum = 0;
        for(let j = 0; j < size; j++){
            sum += vector[j] * matrix[i * size + j] 
        }
        newVector.set([sum], i);
    }

    return newVector;
}
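As a sanity check on why the transpose of the inverse is needed (rather than the model matrix itself), consider a non-uniform scale. For a diagonal matrix the inverse-transpose is just the reciprocal of each diagonal entry, so the effect is easy to verify by hand. This is a standalone illustration, not engine code:

```javascript
//a 45-degree surface normal (plane x + y = c) under a scale of 2 along x;
//the scaled plane is x/2 + y = c, so the correct normal must tilt toward +y
const dot = (a, b) => a.reduce((sum, v, i) => sum + v * b[i], 0);
const normalize = v => { const len = Math.hypot(...v); return v.map(x => x / len); };

const scale = [2, 1, 1];
const normal = normalize([1, 1, 0]);
const tangent = [2, -1, 0]; //the plane's tangent (1, -1, 0) after scaling

const naive = normalize(normal.map((v, i) => v * scale[i]));   //transformed by M: wrong
const correct = normalize(normal.map((v, i) => v / scale[i])); //transformed by inverse-transpose: right
```

Only the inverse-transpose result stays perpendicular to the transformed surface.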

The whole bakeTransforms method:

//mesh.js
bakeTransforms(){
    //positions
    const modelMatrix = this.getModelMatrix();
    const transformedPositions = chunk(this.positions, this.positionSize)
        .map(values => {
            const lengthToPad = 4 - values.length;
            switch(lengthToPad){
                case 1:{
                    return [...values, 1.0]
                }
                case 2:{
                    return [...values, 0.0, 1.0];
                }
                case 3: {
                    return [...values, 0.0, 0.0, 1.0];
                }
                default: {
                    return [0.0, 0.0, 0.0, 1.0];
                }
            }
        })
        .map(values => multiplyMatrixVector(values, modelMatrix, this.positionSize + 1)) //need homogenous coordinates for positions
        .toArray();
    const normalMatrix = getTranspose(
        getInverse(
            trimMatrix(modelMatrix, [4,4], [this.normalSize,this.normalSize]), 
        [this.normalSize,this.normalSize]), 
    [this.normalSize,this.normalSize]);
    const transformedNormals = chunk(this.normals, this.normalSize)
        .map(values => multiplyMatrixVector(values, normalMatrix, this.normalSize))
        .map(values => normalizeVector(values))
        .toArray();
    //collect
    const newPositionsBuffer = new Float32Array(this.vertexLength * this.positionSize);
    for(let i = 0; i < transformedPositions.length; i++){
        newPositionsBuffer.set(transformedPositions[i].slice(0, this.positionSize), i * this.positionSize)
    }
    const newNormalsBuffer = new Float32Array(this.vertexLength * this.normalSize);
    for (let i = 0; i < transformedNormals.length; i++) {
        newNormalsBuffer.set(transformedNormals[i].slice(0, this.normalSize), i * this.normalSize)
    }
    this.positions = newPositionsBuffer;
    this.normals = newNormalsBuffer;
    this.resetTransforms();
    return this;
}

Before we are done we need to double check a few things that tripped me up:

  • The normalMatrix should not be multiplied by the view matrix; it should be in world space (if everything were in view space it would work, but that's more work).
  • Packing a mat3x3 is unintuitive: each vec3 column must be padded out to 16 bytes, so it's laid out like three vec4 columns (48 bytes, not 36). This took ages to debug because it's yet another alignment rule.
  • Matrices are column-major, meaning they index like M[col][row]. This is flipped from how we usually index in JS, so you need to double-check that your indexing matches.
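The mat3x3 padding rule can be sketched as a small packing helper (illustrative only; names are mine, not the repo's):

```javascript
//packs a column-major mat3x3 per the rule above: each vec3 column occupies
//a padded 16-byte slot, so the matrix takes 48 bytes instead of 36
function packMat3x3(m){ //m: 9 floats, column-major
    const buffer = new ArrayBuffer(48);
    const view = new DataView(buffer);
    for(let col = 0; col < 3; col++){
        for(let row = 0; row < 3; row++){
            view.setFloat32(col * 16 + row * 4, m[col * 3 + row], true);
        }
        //bytes 12-15 of each 16-byte slot remain zero padding
    }
    return buffer;
}
```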

I also went ahead and started using packStruct to pack the matrices, and renamed "uniforms" to "scene" because it sounded better. After sorting out all the alignment bugs we can finally have two lights that work properly.

teapot on rug with red and green lights

Code

https://github.com/ndesmic/geo/releases/tag/v0.5

