WebGPU: The Future of Web Graphics
Pipelines, bind groups, timestamp queries—and a tiny triangle to get your hands dirty.
WebGPU is the modern, low-overhead graphics and compute API for the web. It exposes explicit control over pipelines, bind groups, command encoders, and compute—finally giving browsers the same muscle as native APIs like Vulkan/Metal/DX12. Translation: fewer hidden stalls, more predictable perf. If WebGL was a family sedan, WebGPU is a stick‑shift with a turbo. You can stall it, but you’ll grin while doing it.
What it changes in practice (aka, “less magic, more power”):
- Explicit pipelines: create pipelines once, reuse them across frames.
- Bind groups: batch resource bindings once and reuse them across draws; avoid per-draw churn (see the sketch right after this list).
- Timestamp queries: profile GPU passes precisely. No more guessing which pass “vibes” slow.
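To make the bind-group bullet concrete, here's the "create once, bind cheaply" pattern in miniature. This is a sketch, not the full example: it assumes a pipeline whose shader declares a uniform at @group(0) @binding(0), and uniformBuf, material, and matrixData are made-up names; device, pipeline, and pass are the same kinds of objects you'll meet in the triangle example below.
// Created once, at setup time (not per frame)
const uniformBuf = device.createBuffer({ size: 64, usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST });
const material = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [{ binding: 0, resource: { buffer: uniformBuf } }]
});
// Per frame: upload new data into the same buffer and rebind the same group
device.queue.writeBuffer(uniformBuf, 0, matrixData);
pass.setBindGroup(0, material);
The point is that the expensive objects (buffers, bind groups, pipelines) live across frames; only the bytes inside them change.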
Minimal triangle—end‑to‑end setup (no frameworks):
// 1) Init
const canvas = document.querySelector('canvas');
const adapter = await navigator.gpu?.requestAdapter();
if (!adapter) throw new Error('WebGPU not supported in this browser');
const device = await adapter.requestDevice();
const context = canvas.getContext('webgpu');
const format = navigator.gpu.getPreferredCanvasFormat();
context.configure({ device, format });
// 2) Shaders (WGSL)
const shader = device.createShaderModule({ code: `
  @vertex fn v_main(@builtin(vertex_index) vi: u32) -> @builtin(position) vec4f {
    var p = array<vec2f, 3>(
      vec2f(0.0, 0.6), vec2f(-0.6, -0.6), vec2f(0.6, -0.6)
    );
    return vec4f(p[vi], 0.0, 1.0);
  }
  @fragment fn f_main() -> @location(0) vec4f {
    return vec4f(0.9, 0.3, 0.2, 1.0);
  }
`});
// 3) Pipeline
const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module: shader, entryPoint: 'v_main' },
  fragment: { module: shader, entryPoint: 'f_main', targets: [{ format }] },
  primitive: { topology: 'triangle-list' }
});
// 4) Draw
function frame() {
  const encoder = device.createCommandEncoder();
  const view = context.getCurrentTexture().createView();
  const pass = encoder.beginRenderPass({
    colorAttachments: [{ view, loadOp: 'clear', storeOp: 'store', clearValue: { r: 0.08, g: 0.09, b: 0.1, a: 1 } }]
  });
  pass.setPipeline(pipeline);
  pass.draw(3);
  pass.end();
  device.queue.submit([encoder.finish()]);
  requestAnimationFrame(frame);
}
frame();
Next, a compute pass that does a prefix sum (toy example), so you can mix graphics and GPGPU. The version below runs the whole scan inside one workgroup, so it tops out at 256 elements. If the phrase "prefix-sum" just made you nostalgic and a little tired, congrats—you're an engineer.
const n = 256;
const input = new Uint32Array(n).map((_,i) => i);
const inBuf = device.createBuffer({ size: input.byteLength, usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST });
const outBuf = device.createBuffer({ size: input.byteLength, usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC });
const readBuf = device.createBuffer({ size: input.byteLength, usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST });
device.queue.writeBuffer(inBuf, 0, input);
const cshader = device.createShaderModule({ code: `
  @group(0) @binding(0) var<storage, read> In: array<u32>;
  @group(0) @binding(1) var<storage, read_write> Out: array<u32>;
  var<workgroup> tmp: array<u32, 256>; // scratch for a Hillis-Steele inclusive scan; n is 256 here
  @compute @workgroup_size(256) fn main(@builtin(local_invocation_id) lid: vec3u) {
    let i = lid.x;
    tmp[i] = In[i];
    for (var offset = 1u; offset < 256u; offset = offset * 2u) {
      workgroupBarrier();                                      // previous round's writes are visible
      let prev = select(0u, tmp[i - min(offset, i)], i >= offset);
      workgroupBarrier();                                      // all reads finish before anyone writes
      tmp[i] = tmp[i] + prev;
    }
    Out[i] = tmp[i];
  }
`});
const cpipe = device.createComputePipeline({ layout: 'auto', compute: { module: cshader, entryPoint:'main' } });
const group = device.createBindGroup({ layout: cpipe.getBindGroupLayout(0), entries: [
  { binding: 0, resource: { buffer: inBuf } },
  { binding: 1, resource: { buffer: outBuf } }
]});
const enc = device.createCommandEncoder();
const passC = enc.beginComputePass();
passC.setPipeline(cpipe);
passC.setBindGroup(0, group);
passC.dispatchWorkgroups(1); // one 256-wide workgroup covers all n = 256 elements
passC.end();
enc.copyBufferToBuffer(outBuf, 0, readBuf, 0, input.byteLength);
device.queue.submit([enc.finish()]);
await readBuf.mapAsync(GPUMapMode.READ);
console.log(new Uint32Array(readBuf.getMappedRange())); // 0, 1, 3, 6, 10, ... (inclusive sums of 0..255)
Rules of thumb: design pipeline layouts up front, minimize buffer re‑creations, and batch updates with queue.writeBuffer or staging buffers. Use GPUQuerySet for timings and treat regressions like boss fights—name them, time them, beat them.
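Since GPUQuerySet just came up, here's roughly what timing a render pass looks like. This is a sketch, not a drop-in snippet: it assumes the adapter supports the 'timestamp-query' feature and that you passed it in requiredFeatures when creating the device, and it reuses the encoder and colorAttachments from the triangle example.
const querySet = device.createQuerySet({ type: 'timestamp', count: 2 });
const resolveBuf = device.createBuffer({ size: 16, usage: GPUBufferUsage.QUERY_RESOLVE | GPUBufferUsage.COPY_SRC });
const timingBuf = device.createBuffer({ size: 16, usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST });
const pass = encoder.beginRenderPass({
  colorAttachments: [/* ... as in the triangle example ... */],
  timestampWrites: { querySet, beginningOfPassWriteIndex: 0, endOfPassWriteIndex: 1 }
});
// ... draw as usual, then pass.end() ...
encoder.resolveQuerySet(querySet, 0, 2, resolveBuf, 0);
encoder.copyBufferToBuffer(resolveBuf, 0, timingBuf, 0, 16);
// After submit + mapAsync, timingBuf holds two u64 ticks in nanoseconds; end minus start is the pass duration
Read timingBuf back the same way as readBuf above, just view the mapped range as a BigUint64Array instead of a Uint32Array.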