Raw WebGPU

An overview on how to write a WebGPU application. Learn what key data structures and types are needed to draw in WebGPU.

Alain Galvan · 12/27/2019 @ 12:30 AM

WebGPU brings the architecture of modern computer graphics APIs such as Vulkan, DirectX 12, and Metal to the web. This shift in paradigm for web graphics APIs lets users take advantage of the same benefits native graphics APIs bring: faster applications thanks to the ability to keep the GPU busy with work, fewer graphics-driver-specific bugs, and the potential for new features, should they be added in the future either as vendor extensions or in the specification itself.

WebGPU isn't for the faint of heart, however: it's arguably the most complex of all the rendering APIs on the web, though that cost is offset by both the increase in performance and the guarantee of future support that the API provides.

⚠️ Note: The WebGPU specification is still a work in progress, so the following is subject to change.

I've prepared a GitHub repo with everything you need to get started. We'll walk through writing a WebGPU Hello Triangle application in TypeScript (JavaScript with optional type checking).

Check out my other post on WebGL for writing graphics applications with a more mature and widely supported web graphics API.

Setup

First, install Node.js and Git.

Then type the following in your terminal:

# 🐑 Clone the repo
git clone https://github.com/alaingalvan/webgpu-seed

# 💿 go inside the folder
cd webgpu-seed

# 🔨 Start building the project
npm start

Refer to this blog post on designing web libraries and apps for more details on Node.js, packages, etc.

Project Layout

As your project becomes more complex, you'll want to separate files and organize your application into something more akin to a game or renderer. Check out this post on game engine architecture and this one on real-time renderer architecture for more details.

├─ 📂 node_modules/   # 👶 Dependencies
│  ├─ 📁 gl-matrix      # ➕ Linear Algebra
│  └─ 📁 ...            # 🕚 Other Dependencies (TypeScript, Webpack, etc.)
├─ 📂 src/            # 🌟 Source Files
│  ├─ 📄 renderer.ts    # 🔺 Triangle Renderer
│  └─ 📄 main.ts        # 🏁 Application Main
├─ 📄 .gitignore      # 👁️ Ignore certain files in git repo
├─ 📄 package.json    # 📦 Node Package File
├─ 📄 license.md      # ⚖️ Your License (Unlicense)
└─ 📃 readme.md       # 📖 Read Me!

Dependencies

  • gl-matrix - A JavaScript library that lets you write GLSL-like JavaScript code, with types for vectors, matrices, etc. While not used in this sample, it's incredibly useful for more advanced topics such as camera matrices.

  • TypeScript - JavaScript with types; it makes programming web apps significantly easier thanks to instant autocomplete and type checking.

  • Webpack - A JavaScript build tool that produces minified outputs and lets us test our apps faster.

Overview

In this application we will need to do the following:

  1. Initialize the API - Check if navigator.gpu exists, and if it does, request a GPUAdapter, then request a GPUDevice, and get that device's default GPUQueue.

  2. Setup Frame Backings - Create a GPUSwapChain to receive a GPUTexture output for the current frame, as well as any other attachments you might need (such as a depth-stencil texture, etc.). Create GPUTextureViews for those textures.

  3. Initialize Resources - Create your Vertex and Index GPUBuffers, pre-compile your shaders to SPIR-V and load the binaries as GPUShaderModules, and create your GPURenderPipeline by describing every stage of a graphics or compute pipeline. Finally, build your GPUCommandEncoder with the render passes you intend to run, then a GPURenderPassEncoder with all the draw calls you intend to execute for that render pass.

  4. Render - Call .finish() on your GPUCommandEncoder and submit the resulting command buffer to your GPUQueue. Refresh the swapchain by calling requestAnimationFrame.

  5. Destroy - Destroy any data structures after you're done using the API.

The following explains snippets that can be found in the GitHub repo, with certain parts omitted and member variables (this.memberVariable) declared inline without the this. prefix, so their types are easier to see and the examples here can work on their own.
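
Putting those steps together, the overall flow reads something like the sketch below. The stub functions are hypothetical stand-ins for the snippets in the sections that follow:

// 🧭 A sketch of the application flow; each stub maps to a section of this post.
const initializeAPI = async () => { /* 1. adapter, device, queue */ };
const resizeBackings = () => { /* 2. swapchain and depth-stencil attachments */ };
const initializeResources = async () => { /* 3. buffers, shaders, pipeline */ };
const render = () => { /* 4. encode commands, submit, requestAnimationFrame */ };

const start = async () => {
    await initializeAPI();
    resizeBackings();
    await initializeResources();
    render();
};
start();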

Initialize API

Entry Point

Entry Point Diagram

To access the WebGPU API, check whether a gpu object exists on the global navigator object.

// 🏭 Entry to WebGPU
const entry: GPU = navigator.gpu;
if (!entry) {
    throw new Error('WebGPU is not supported on this browser.');
}

Adapter

Adapter Diagram

An Adapter describes the physical properties of a given GPU, such as its name, extensions, and device limits.

// ✋ Declare adapter handle
let adapter: GPUAdapter = null;

// 🙏 Inside an async function...

// 🔌 Physical Device Adapter
adapter = await entry.requestAdapter();
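
Once you have an adapter, you can inspect some of those properties yourself. Note that the exact members are still in flux in the specification:

// 🔍 Inspect the adapter (these members may change as the spec evolves)
console.log(adapter.name);       // e.g. a vendor/device string
console.log(adapter.extensions); // extensions this adapter supports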

Device

Device Diagram

A Device is how you access the core of the WebGPU API, and will allow you to create the data structures you'll need.

// ✋ Declare device handle
let device: GPUDevice = null;

// 🙏 Inside an async function...

// 💻 Logical Device
device = await adapter.requestDevice();

Queue

Queue Diagram

A Queue allows you to send work asynchronously to the GPU. As of the writing of this post, you can only access a defaultQueue from a given GPUDevice.

// ✋ Declare queue handle
let queue: GPUQueue = null;

// 📦 Queue
queue = device.defaultQueue;

Frame Backings

Swapchain

Canvas Element Diagram

In order to see what you're drawing, you'll need an HTMLCanvasElement and to create a Swapchain from that canvas.

// ✋ Declare Swapchain handle
let swapchain: GPUSwapChain = null;

const context: GPUCanvasContext = canvas.getContext('gpupresent') as any;

// ⛓️ Create Swapchain
const swapChainDesc: GPUSwapChainDescriptor = {
    device: device,
    format: 'bgra8unorm',
    usage: GPUTextureUsage.OUTPUT_ATTACHMENT | GPUTextureUsage.COPY_SRC
};
swapchain = context.configureSwapChain(swapChainDesc);
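
One thing the snippet above assumes is a canvas whose backing store has already been sized. You'll typically want to match it to its displayed size, scaled by the device pixel ratio (plain DOM, nothing WebGPU specific):

// 📐 Size the canvas backing store to match its displayed size
canvas.width = canvas.clientWidth * window.devicePixelRatio;
canvas.height = canvas.clientHeight * window.devicePixelRatio;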

Frame Buffer Attachments

Texture Attachments Diagram

When executing different passes of your rendering system, you'll need output textures to write to, be it depth textures for depth testing or shadows, or attachments for various aspects of a deferred renderer such as view space normals, PBR reflectivity/roughness, etc.

// ✋ Declare attachment handles
let depthTexture: GPUTexture = null;
let depthTextureView: GPUTextureView = null;

// 🤔 Create Depth Backing
const depthTextureDesc: GPUTextureDescriptor = {
    size: {
        width: canvas.width,
        height: canvas.height,
        depth: 1
    },
    arrayLayerCount: 1,
    mipLevelCount: 1,
    sampleCount: 1,
    dimension: '2d',
    format: 'depth24plus-stencil8',
    usage: GPUTextureUsage.OUTPUT_ATTACHMENT | GPUTextureUsage.COPY_SRC
};

depthTexture = device.createTexture(depthTextureDesc);
depthTextureView = depthTexture.createView();

// ✋ Declare swapchain image handles
let colorTexture: GPUTexture = null;
let colorTextureView: GPUTextureView = null;

colorTexture = swapchain.getCurrentTexture();
colorTextureView = colorTexture.createView();

Initialize Resources

Buffers

Buffers Diagram

A Buffer is an array of data, such as a mesh's positional data, color data, index data, etc. When rendering triangles with a raster based graphics pipeline, you'll need 1 or more buffers of vertex data (commonly referred to as Vertex Buffer Objects or VBOs), and 1 buffer of the indices that correspond with each triangle vertex that you intend to draw (otherwise known as an Index Buffer Object or IBO).

// 📈 Position Vertex Buffer Data
const positions = new Float32Array([
    1.0, -1.0, 0.0,
   -1.0, -1.0, 0.0,
    0.0,  1.0, 0.0
]);

// 🎨 Color Vertex Buffer Data
const colors = new Float32Array([
    1.0, 0.0, 0.0, // 🔴
    0.0, 1.0, 0.0, // 🟢
    0.0, 0.0, 1.0  // 🔵
]);

// 🗄️ Index Buffer Data
const indices = new Uint16Array([ 0, 1, 2 ]);

// ✋ Declare buffer handles
let positionBuffer: GPUBuffer = null;
let colorBuffer: GPUBuffer = null;
let indexBuffer: GPUBuffer = null;

// 👋 Helper function for creating GPUBuffer(s) out of Typed Arrays
let createBuffer = (arr: Float32Array | Uint16Array, usage: number) => {
    let desc = { size: arr.byteLength, usage };
    let [ buffer, bufferMapped ] = device.createBufferMapped(desc);

    const writeArray =
        arr instanceof Uint16Array ? new Uint16Array(bufferMapped) : new Float32Array(bufferMapped);
    writeArray.set(arr);
    buffer.unmap();
    return buffer;
};

positionBuffer = createBuffer(positions, GPUBufferUsage.VERTEX);
colorBuffer = createBuffer(colors, GPUBufferUsage.VERTEX);
indexBuffer = createBuffer(indices, GPUBufferUsage.INDEX);

Compiling Shaders

Compiling Shaders

In this example, the following was used as our vertex shader source:

#version 450

layout (location = 0) in vec3 inPos;
layout (location = 1) in vec3 inColor;

layout (location = 0) out vec3 outColor;

void main()
{
    outColor = inColor;
    gl_Position = vec4(inPos.xyz, 1.0);
}

And the following was used as our fragment shader source:

#version 450

// Varying
layout (location = 0) in vec3 inColor;

// Return Output
layout (location = 0) out vec4 outFragColor;

void main()
{
  outFragColor = vec4(inColor, 1.0);
}

With glslang installed and accessible in your terminal's PATH, run the following:

glslangValidator -V triangle.vert -o triangle.vert.spv
glslangValidator -V triangle.frag -o triangle.frag.spv

Shader Modules

Shader Modules Diagram

Shader Modules are pre-compiled shader binaries that execute on the GPU when executing a given pipeline.

As shaders need to be pre-compiled to be used in WebGPU, you'll need a tool to compile your shaders to SPIR-V. I'm currently working on a new version of CrossShader that will allow for this, but there's certainly room for usability improvements, such as Webpack shader loaders.

// ✋ Declare shader module handles
let vertModule: GPUShaderModule = null;
let fragModule: GPUShaderModule = null;

// 👋 Helper function for creating GPUShaderModule(s) out of SPIR-V files
let loadShader = (shaderPath: string) =>
    fetch(new Request(shaderPath), { method: 'GET', mode: 'cors' }).then((res) =>
        res.arrayBuffer().then((arr) => new Uint32Array(arr))
    );

// 🙏 inside an async function...
// ⚠️ Note: You could include these binaries as variables in your javascript source.

const vsmDesc: any = { code: await loadShader('triangle.vert.spv') };
vertModule = device.createShaderModule(vsmDesc);

const fsmDesc: any = { code: await loadShader('triangle.frag.spv') };
fragModule = device.createShaderModule(fsmDesc);

Bind Group

Uniform Buffer Diagram

You'll often need to feed data directly to your shader modules, and to do this you'll need to specify a uniform. To create a uniform buffer in your shader, declare the following prior to your main function:

// 🕸️ In your Vertex Shader
layout (set = 0, binding = 0) uniform UBO
{
  mat4 modelViewProj;
  vec4 primaryColor;
  vec4 accentColor;
};

// ❗ Then in your vertex shader's main function, replace the second-to-last line with:
gl_Position = modelViewProj * vec4(inPos, 1.0);
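
For reference, the complete vertex shader with the uniform block reads:

#version 450

layout (location = 0) in vec3 inPos;
layout (location = 1) in vec3 inColor;

layout (set = 0, binding = 0) uniform UBO
{
    mat4 modelViewProj;
    vec4 primaryColor;
    vec4 accentColor;
};

layout (location = 0) out vec3 outColor;

void main()
{
    outColor = inColor;
    gl_Position = modelViewProj * vec4(inPos, 1.0);
}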

Then in your JavaScript code, create a Uniform Buffer as you would with an index/vertex buffer.

// 👔 Uniform Data
const uniformData = new Float32Array([

    // ♟️ ModelViewProjection Matrix
    1.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    0.0, 0.0, 0.0, 1.0,

    // 🔴 Primary Color
    0.9, 0.1, 0.3, 1.0,

    // 🟣 Accent Color
    0.8, 0.2, 0.8, 1.0
]);

// ✋ Declare buffer handles
let uniformBuffer: GPUBuffer = null;

uniformBuffer = createBuffer(uniformData, GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST);

You'll want to use a library like gl-matrix in order to better manage linear algebra calculations such as matrix multiplication.
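
For instance, here's one way the modelViewProj entry above could be computed with gl-matrix. This is just a sketch; the camera values are arbitrary:

import { mat4, vec3 } from 'gl-matrix';

// 🎥 Perspective projection (45° field of view, canvas aspect ratio)
const projection = mat4.create();
mat4.perspective(projection, Math.PI / 4, canvas.width / canvas.height, 0.1, 100.0);

// 👀 A camera 2 units back from the origin, looking at it
const view = mat4.create();
mat4.lookAt(view, vec3.fromValues(0, 0, 2), vec3.fromValues(0, 0, 0), vec3.fromValues(0, 1, 0));

// 🧮 modelViewProj = projection * view (the model matrix is identity here)
const modelViewProj = mat4.create();
mat4.multiply(modelViewProj, projection, view);

// ♻️ mat4 is a Float32Array under the hood, so copy it straight into uniformData
uniformData.set(modelViewProj, 0);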

// ✋ Declare handles
let uniformBindGroupLayout: GPUBindGroupLayout = null;
let uniformBindGroup: GPUBindGroup = null;
let layout: GPUPipelineLayout = null;

// 🗂️ Bind Group Layout
uniformBindGroupLayout = device.createBindGroupLayout({
    bindings: [{
        binding: 0,
        visibility: GPUShaderStage.VERTEX,
        type: "uniform-buffer"
    }]
});

// 🗄️ Bind Group
uniformBindGroup = device.createBindGroup({
    layout: uniformBindGroupLayout,
    bindings: [{
        binding: 0,
        resource: {
            buffer: uniformBuffer
        }
    }]
});

// 🏢 Pipeline Layout
layout = device.createPipelineLayout({ bindGroupLayouts: [uniformBindGroupLayout] });
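
If you wire this layout into your pipeline, you'll also need to attach the bind group while encoding your render pass later on:

// 🖇️ During command encoding, bind the uniform bind group at set = 0
passEncoder.setBindGroup(0, uniformBindGroup);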

Graphics Pipeline

Pipeline Diagram

A Graphics Pipeline describes all the data that's to be fed into the execution of a raster based graphics pipeline. This includes:

  • 🔣 Input Assembly - What does each vertex look like? Which attributes are where, and how do they align in memory?

  • 🖍️ Shader Modules - What shader modules will you be using when executing this graphics pipeline?

  • ✏️ Depth/Stencil State - Should you perform depth testing? If so, what function should you use to test depth?

  • 🍥 Blend State - How should colors be blended between the previously written color and current one?

  • 🔺 Rasterization - How does the rasterizer behave when executing this graphics pipeline? Does it cull faces? Which direction should the face be culled?

  • 💾 Uniform Data - What kind of uniform data should your shaders expect? In WebGPU this is done by describing a Pipeline Layout.

// ✋ Declare pipeline handle
let pipeline: GPURenderPipeline = null;

// ⚗️ Graphics Pipeline

// 🔣 Input Assembly
const positionAttribDesc: GPUVertexAttributeDescriptor = {
    shaderLocation: 0, // [[attribute(0)]]
    offset: 0,
    format: 'float3'
};
const colorAttribDesc: GPUVertexAttributeDescriptor = {
    shaderLocation: 1, // [[attribute(1)]]
    offset: 0,
    format: 'float3'
};
const positionBufferDesc: GPUVertexBufferLayoutDescriptor = {
    attributes: [ positionAttribDesc ],
    arrayStride: 4 * 3, // sizeof(float) * 3
    stepMode: 'vertex'
};
const colorBufferDesc: GPUVertexBufferLayoutDescriptor = {
    attributes: [ colorAttribDesc ],
    arrayStride: 4 * 3, // sizeof(float) * 3
    stepMode: 'vertex'
};

const vertexState: GPUVertexStateDescriptor = {
    indexFormat: 'uint16',
    vertexBuffers: [ positionBufferDesc, colorBufferDesc ]
};

// 🖍️ Shader Modules
const vertexStage = {
    module: vertModule,
    entryPoint: 'main'
};

const fragmentStage = {
    module: fragModule,
    entryPoint: 'main'
};

// ✏️ Depth/Stencil State
const depthStencilState: GPUDepthStencilStateDescriptor = {
    depthWriteEnabled: true,
    depthCompare: 'less',
    format: 'depth24plus-stencil8'
};

// 🍥 Blend State
const colorState: GPUColorStateDescriptor = {
    format: 'bgra8unorm',
    alphaBlend: {
        srcFactor: 'src-alpha',
        dstFactor: 'one-minus-src-alpha',
        operation: 'add'
    },
    colorBlend: {
        srcFactor: 'src-alpha',
        dstFactor: 'one-minus-src-alpha',
        operation: 'add'
    },
    writeMask: GPUColorWrite.ALL
};

// 🔺 Rasterization
const rasterizationState: GPURasterizationStateDescriptor = {
    frontFace: 'cw',
    cullMode: 'none'
};

// 💾 Uniform Data
const pipelineLayoutDesc = { bindGroupLayouts: [] };
const layout = device.createPipelineLayout(pipelineLayoutDesc);

const pipelineDesc: GPURenderPipelineDescriptor = {
    layout,

    vertexStage,
    fragmentStage,

    primitiveTopology: 'triangle-list',
    colorStates: [ colorState ],
    depthStencilState,
    vertexState,
    rasterizationState
};
pipeline = device.createRenderPipeline(pipelineDesc);

Command Encoder

Command Encoder Diagrams

Command Encoders encode all the draw commands you intend to execute in groups of Render Pass Encoders. Once you've finished encoding commands, you'll receive a Command Buffer that you could submit to your queue.

In that sense a command buffer is analogous to a callback that executes draw functions on the GPU once it's submitted to the queue.

// ✋ Declare command handles
let commandEncoder: GPUCommandEncoder = null;
let passEncoder: GPURenderPassEncoder = null;

// ✍️ Write commands to send to the GPU
function encodeCommands() {
    const colorAttachment: GPURenderPassColorAttachmentDescriptor = {
        attachment: colorTextureView,
        loadValue: { r: 0, g: 0, b: 0, a: 1 },
        storeOp: 'store'
    };

    const depthAttachment: GPURenderPassDepthStencilAttachmentDescriptor = {
        attachment: depthTextureView,
        depthLoadValue: 1,
        depthStoreOp: 'store',
        stencilLoadValue: 'load',
        stencilStoreOp: 'store'
    };

    const renderPassDesc: GPURenderPassDescriptor = {
        colorAttachments: [ colorAttachment ],
        depthStencilAttachment: depthAttachment
    };

    commandEncoder = device.createCommandEncoder();

    // 🖌️ Encode drawing commands
    passEncoder = commandEncoder.beginRenderPass(renderPassDesc);
    passEncoder.setPipeline(pipeline);
    passEncoder.setViewport(0, 0, canvas.width, canvas.height, 0, 1);
    passEncoder.setScissorRect(0, 0, canvas.width, canvas.height);
    passEncoder.setVertexBuffer(0, positionBuffer);
    passEncoder.setVertexBuffer(1, colorBuffer);
    passEncoder.setIndexBuffer(indexBuffer);
    passEncoder.drawIndexed(3, 1, 0, 0, 0);
    passEncoder.endPass();

    queue.submit([ commandEncoder.finish() ]);
}

Render

Triangle Raster Gif

Rendering in WebGPU is a simple matter of updating any uniforms you intend to update, getting the next attachments from your swapchain, submitting your command encoders to be executed, and using the requestAnimationFrame callback to do all of that again.

let render = () => {
    // ⏭ Acquire next image from swapchain
    colorTexture = swapchain.getCurrentTexture();
    colorTextureView = colorTexture.createView();

    // 📦 Write and submit commands to queue
    encodeCommands();

    // ➿ Refresh canvas
    requestAnimationFrame(render);
};
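
Finally, don't forget to kick off the loop once initialization has finished:

// 🏁 Start the render loop
render();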

Conclusion

WebGPU might be more difficult than other graphics APIs, but it more closely aligns with the design of modern graphics cards, and as a result it should yield not only faster applications, but also applications that last longer.

There were a few things I didn't cover, as they would have been beyond the scope of this post, such as:

  • Matrices (for camera calculations)

  • A detailed overview of every possible state of a graphics pipeline

  • Compute pipelines

  • Loading textures

Additional Resources

You can find all the source for this post in the GitHub Repo here.