In the previous article we covered a few super basics about WebGPU. In this article we’re going to go over the basics of inter-stage variables.
Inter-stage variables come into play between a vertex shader and a fragment shader.
When a vertex shader outputs 3 positions, a triangle gets rasterized. The vertex shader can output extra values at each of those positions and, by default, those values will be interpolated between the 3 points.
Let’s make a small example. We’ll start with the triangle shaders from the previous article. All we’re going to do is change the shaders.
```js
const module = device.createShaderModule({
-  label: 'our hardcoded red triangle shaders',
+  label: 'our hardcoded rgb triangle shaders',
  code: `
+    struct OurVertexShaderOutput {
+      @builtin(position) position: vec4f,
+      @location(0) color: vec4f,
+    };

    @vertex fn vs(
      @builtin(vertex_index) vertexIndex : u32
-    ) -> @builtin(position) vec4f {
+    ) -> OurVertexShaderOutput {
      let pos = array(
        vec2f( 0.0,  0.5),  // top center
        vec2f(-0.5, -0.5),  // bottom left
        vec2f( 0.5, -0.5)   // bottom right
      );
+      var color = array<vec4f, 3>(
+        vec4f(1, 0, 0, 1), // red
+        vec4f(0, 1, 0, 1), // green
+        vec4f(0, 0, 1, 1), // blue
+      );

-      return vec4f(pos[vertexIndex], 0.0, 1.0);
+      var vsOutput: OurVertexShaderOutput;
+      vsOutput.position = vec4f(pos[vertexIndex], 0.0, 1.0);
+      vsOutput.color = color[vertexIndex];
+      return vsOutput;
    }

-    @fragment fn fs() -> @location(0) vec4f {
-      return vec4f(1, 0, 0, 1);
+    @fragment fn fs(fsInput: OurVertexShaderOutput) -> @location(0) vec4f {
+      return fsInput.color;
    }
  `,
});
```
First off we declare a struct. This is one easy way to coordinate the inter-stage variables between a vertex shader and a fragment shader.
```wgsl
struct OurVertexShaderOutput {
  @builtin(position) position: vec4f,
  @location(0) color: vec4f,
};
```
We then declare our vertex shader to return a structure of this type
```wgsl
@vertex fn vs(
  @builtin(vertex_index) vertexIndex : u32
-) -> @builtin(position) vec4f {
+) -> OurVertexShaderOutput {
```
We create an array of 3 colors.
```wgsl
var color = array<vec4f, 3>(
  vec4f(1, 0, 0, 1), // red
  vec4f(0, 1, 0, 1), // green
  vec4f(0, 0, 1, 1), // blue
);
```
And then, instead of returning just a vec4f for position, we declare an instance of the structure, fill it out, and return it
```wgsl
-return vec4f(pos[vertexIndex], 0.0, 1.0);
+var vsOutput: OurVertexShaderOutput;
+vsOutput.position = vec4f(pos[vertexIndex], 0.0, 1.0);
+vsOutput.color = color[vertexIndex];
+return vsOutput;
```
In the fragment shader we declare it to take one of these structs as an argument to the function
```wgsl
@fragment fn fs(fsInput: OurVertexShaderOutput) -> @location(0) vec4f {
  return fsInput.color;
}
```
And just return the color
If we run that we’ll see that, every time the GPU called our fragment shader, it passed in a color that was interpolated between all 3 points.
Inter-stage variables are most often used to interpolate texture coordinates across a triangle, which we’ll cover in the article on textures. Another common use is interpolating normals across a triangle, which we’ll cover in the first article on lighting.
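To give a rough idea of what that looks like, here is a sketch that extends this article’s struct with two hypothetical inter-stage variables. The texcoord and normal fields and their locations are made up for illustration and aren’t used anywhere in this article’s examples.

```wgsl
struct OurVertexShaderOutput {
  @builtin(position) position: vec4f,
  @location(0) texcoord: vec2f,  // hypothetical: interpolated across the triangle per pixel
  @location(1) normal: vec3f,    // hypothetical: interpolated across the triangle per pixel
};
```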
Inter-stage variables connect by location
An important point: like nearly everything in WebGPU, the connection between the vertex shader and the fragment shader is by index. For inter-stage variables, they connect by location index.
To see what I mean, let’s change only the fragment shader to take a vec4f parameter at location(0) instead of the struct
```wgsl
@fragment fn fs(@location(0) color: vec4f) -> @location(0) vec4f {
  return color;
}
```
Running that we see it still works.
@builtin(position)
That helps point out another quirk. Our original shader that used the same struct in both the vertex and fragment shaders had a field called position but it didn’t have a location. Instead it was declared as @builtin(position).
```wgsl
struct OurVertexShaderOutput {
*  @builtin(position) position: vec4f,
  @location(0) color: vec4f,
};
```
That field is NOT an inter-stage variable. Instead, it’s a builtin. It happens that @builtin(position) has a different meaning in a vertex shader vs a fragment shader.

In a vertex shader @builtin(position) is the output that the GPU needs to draw triangles/lines/points.

In a fragment shader @builtin(position) is an input. It’s the pixel coordinate of the pixel the fragment shader is currently being asked to compute a color for.
Pixel coordinates are specified by the edges of pixels. The values provided to the fragment shader are the coordinates of the center of the pixel. If the texture we were drawing to was 3x2 pixels in size, these would be the coordinates.
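In other words, @builtin(position) in the fragment shader would take one of these six pixel-center values:

```
(0.5, 0.5)  (1.5, 0.5)  (2.5, 0.5)   <- top row of pixels
(0.5, 1.5)  (1.5, 1.5)  (2.5, 1.5)   <- bottom row of pixels
```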
We can change our shader to use this position. For example let’s draw a checkerboard.
```js
const module = device.createShaderModule({
  label: 'our hardcoded checkerboard triangle shaders',
  code: `
    struct OurVertexShaderOutput {
      @builtin(position) position: vec4f,
-      @location(0) color: vec4f,
    };

    @vertex fn vs(
      @builtin(vertex_index) vertexIndex : u32
    ) -> OurVertexShaderOutput {
      let pos = array(
        vec2f( 0.0,  0.5),  // top center
        vec2f(-0.5, -0.5),  // bottom left
        vec2f( 0.5, -0.5)   // bottom right
      );

-      var color = array<vec4f, 3>(
-        vec4f(1, 0, 0, 1), // red
-        vec4f(0, 1, 0, 1), // green
-        vec4f(0, 0, 1, 1), // blue
-      );

      var vsOutput: OurVertexShaderOutput;
      vsOutput.position = vec4f(pos[vertexIndex], 0.0, 1.0);
-      vsOutput.color = color[vertexIndex];
      return vsOutput;
    }

    @fragment fn fs(fsInput: OurVertexShaderOutput) -> @location(0) vec4f {
-      return fsInput.color;
+      let red = vec4f(1, 0, 0, 1);
+      let cyan = vec4f(0, 1, 1, 1);
+
+      let grid = vec2u(fsInput.position.xy) / 8;
+      let checker = (grid.x + grid.y) % 2 == 1;
+
+      return select(red, cyan, checker);
    }
  `,
});
```
The code above takes fsInput.position, which was declared as @builtin(position), and converts its xy coordinates to a vec2u, which is 2 unsigned integers. It then divides them by 8, giving us a count that increases every 8 pixels. It then adds the x and y grid coordinates together, computes modulo 2, and compares the result to 1. This will give us a boolean that is true or false every other integer. Finally it uses the WGSL function select which, given 2 values, selects one or the other based on a boolean condition. In JavaScript select would be written like this
```js
// If condition is false return `a`, otherwise return `b`
select = (a, b, condition) => condition ? b : a;
```
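Walking through the math for one arbitrary (hypothetical) pixel:

```wgsl
// say the fragment shader is called for the pixel whose center is at (100.5, 50.5)
// vec2u(fsInput.position.xy)  ->  vec2u(100, 50)   the fractions are truncated
// vec2u(100, 50) / 8          ->  vec2u(12, 6)     a new grid cell every 8 pixels
// (12 + 6) % 2 == 1           ->  18 % 2 == 1      ->  false
// select(red, cyan, false)    ->  red              a false condition picks the first value
```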
Even if you don’t use @builtin(position) in a fragment shader, it’s convenient that it’s there because it means we can use the same struct for both a vertex shader and a fragment shader. What’s important to take away is that the position struct field in the vertex shader vs the fragment shader is entirely unrelated. They’re completely different variables.
As pointed out above though, for inter-stage variables, all that matters is the @location(?). So, it’s not uncommon to declare different structs for a vertex shader’s output vs a fragment shader’s input.
To hopefully make this more clear, the fact that the vertex shader and fragment shader are in the same string in our examples is just a convenience. We could also split them into separate modules
```js
-const module = device.createShaderModule({
-  label: 'hardcoded checkerboard triangle shaders',
+const vsModule = device.createShaderModule({
+  label: 'hardcoded triangle',
  code: `
    struct OurVertexShaderOutput {
      @builtin(position) position: vec4f,
    };

    @vertex fn vs(
      @builtin(vertex_index) vertexIndex : u32
    ) -> OurVertexShaderOutput {
      let pos = array(
        vec2f( 0.0,  0.5),  // top center
        vec2f(-0.5, -0.5),  // bottom left
        vec2f( 0.5, -0.5)   // bottom right
      );

      var vsOutput: OurVertexShaderOutput;
      vsOutput.position = vec4f(pos[vertexIndex], 0.0, 1.0);
      return vsOutput;
    }
+  `,
+});
+
+const fsModule = device.createShaderModule({
+  label: 'checkerboard',
+  code: `
-    @fragment fn fs(fsInput: OurVertexShaderOutput) -> @location(0) vec4f {
+    @fragment fn fs(@builtin(position) pixelPosition: vec4f) -> @location(0) vec4f {
      let red = vec4f(1, 0, 0, 1);
      let cyan = vec4f(0, 1, 1, 1);

-      let grid = vec2u(fsInput.position.xy) / 8;
+      let grid = vec2u(pixelPosition.xy) / 8;
      let checker = (grid.x + grid.y) % 2 == 1;

      return select(red, cyan, checker);
    }
  `,
});
```
And we’d have to update our pipeline creation to use these
```js
const pipeline = device.createRenderPipeline({
  label: 'hardcoded checkerboard triangle pipeline',
  layout: 'auto',
  vertex: {
-    module,
+    module: vsModule,
    entryPoint: 'vs',
  },
  fragment: {
-    module,
+    module: fsModule,
    entryPoint: 'fs',
    targets: [{ format: presentationFormat }],
  },
});
```
And this would also work
The point is, the fact that both shaders are in the same string in most WebGPU examples is just a convenience. In reality, first WebGPU parses the WGSL to make sure it’s syntactically correct. Then, WebGPU looks at the entryPoint you specify. From there it looks at the parts that entryPoint references and nothing else for that entryPoint. It’s useful because you don’t have to type things like structures or binding and group locations twice if two or more shaders share bindings or structures or constants or functions. But, from the POV of WebGPU, it’s as though you did duplicate all of them, once for each entryPoint.
Note: It is not that common to generate a checkerboard using the @builtin(position). Checkerboards or other patterns are far more commonly implemented using textures. In fact you’ll see an issue if you resize the window. Because the checkerboard is based on the pixel coordinates of the canvas, it’s relative to the canvas, not relative to the triangle.
Interpolation settings

We saw above that inter-stage variables, the outputs from a vertex shader, are interpolated when passed to the fragment shader. There are 2 sets of settings that can be changed for how the interpolation happens. Setting them to anything other than the defaults is not that common, but there are use cases which will be covered in other articles.
Interpolation type:

* perspective: Values are interpolated in a perspective correct manner (default)
* linear: Values are interpolated in a linear, non-perspective correct manner.
* flat: Values are not interpolated. Interpolation sampling is not used with flat interpolation.

Interpolation sampling:

* center: Interpolation is performed at the center of the pixel (default)
* centroid: Interpolation is performed at a point that lies within all the samples covered by the fragment within the current primitive. This value is the same for all samples in the primitive.
* sample: Interpolation is performed per sample. The fragment shader is invoked once per sample when this attribute is applied.

You specify these as attributes. For example
```wgsl
@location(2) @interpolate(linear, center) myVariableFoo: vec4f;
@location(3) @interpolate(flat) myVariableBar: vec4f;
```
Note that if the inter-stage variable is an integer type then you must set its interpolation to flat.
If you set the interpolation type to flat, the value passed to the fragment shader is the value of the inter-stage variable for the first vertex in that triangle.
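A minimal sketch of both points, with made-up names, might look like this:

```wgsl
struct VSOut {
  @builtin(position) position: vec4f,
  // hypothetical integer inter-stage variable: it must be marked @interpolate(flat).
  // Every fragment in a triangle then gets the value from that triangle's first vertex.
  @location(0) @interpolate(flat) triangleIndex: u32,
};
```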
In the next article we’ll cover uniforms as another way to pass data into shaders.