feat(webgl): Implement ShaderLayout.bufferMap in webgl #1780

Merged · 2 commits · Aug 15, 2023
68 changes: 68 additions & 0 deletions docs/api-guide/attributes.md
@@ -0,0 +1,68 @@
# Attributes

In traditional 3D graphics, the purpose of *attributes* is typically described
as providing vertex data to the GPU.

However, luma.gl favors thinking about a GPU as operating on "binary columnar tables".
- In this mental model, attributes are binary columnar arrays that all have the same number of rows, each element containing one value per row.
- Each column is an array of either floating point values, or signed or unsigned integers.
- Each row holds either a single value or a vector of 2, 3 or 4 components.
- All rows in a column must have the same format (single value or vector).
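
To make the columnar mental model concrete, here is a minimal sketch (the data and names are illustrative, not part of the luma.gl API) of a three-row table stored as two binary columns:

```typescript
// A "table" with 3 rows, stored as two binary columns (attributes).
// Every column has exactly one entry per row, and a fixed format.

// Column 1 - each row is a vector of 2 floats (e.g. a 2D position)
const positions = new Float32Array([
  0.0, 0.0, // row 0
  1.0, 0.0, // row 1
  0.5, 1.0  // row 2
]);

// Column 2 - each row is 4 unsigned bytes (e.g. an RGBA color)
const colors = new Uint8Array([
  255, 0, 0, 255, // row 0
  0, 255, 0, 255, // row 1
  0, 0, 255, 255  // row 2
]);
```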

## VertexFormat

The format of a vertex attribute indicates how data from a vertex buffer
will be interpreted and exposed to the shader. Each format has a name that encodes
the order of components, bits per component, and vertex data type for the component.
The `VertexFormat` type is a string union of all the defined vertex formats.
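
Schematically, the union contains the format names listed in the tables below (a sketch of the type, built from those tables):

```typescript
// String union of the defined vertex formats (as listed in the tables below)
type VertexFormat =
  | 'uint8x2' | 'uint8x4' | 'sint8x2' | 'sint8x4'
  | 'unorm8x2' | 'unorm8x4' | 'snorm8x2' | 'snorm8x4'
  | 'uint16x2' | 'uint16x4' | 'sint16x2' | 'sint16x4'
  | 'unorm16x2' | 'unorm16x4' | 'snorm16x2' | 'snorm16x4'
  | 'float16x2' | 'float16x4'
  | 'float32' | 'float32x2' | 'float32x3' | 'float32x4'
  | 'uint32' | 'uint32x2' | 'uint32x3' | 'uint32x4'
  | 'sint32' | 'sint32x2' | 'sint32x3' | 'sint32x4';
```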

Each vertex data type can map to any WGSL scalar type of the same base type, regardless of the bits per component:

| Vertex format prefix | Vertex data type | Compatible WGSL types | Compatible GLSL types |
| -------------------- | --------------------- | --------------------- | --------------------- |
| `uint` | `unsigned int` | `u32` | `uint`, `uvec2-4` |
| `sint` | `signed int` | `i32` | `int`, `ivec2-4` |
| `unorm` | `unsigned normalized` | `f16`, `f32` | `float`, `vec2-4` |
| `snorm` | `signed normalized` | `f16`, `f32` | `float`, `vec2-4` |
| `float` | `floating point` | `f16`, `f32` | `float`, `vec2-4` |

| Vertex Format | Data Type | WGSL types   | GLSL Types        |
| ------------- | --------- | ------------ | ----------------- |
| 'uint8x2'     | `uint8`   | `u32`        | `uint`, `uvec2-4` |
| 'uint8x4'     | `uint8`   | `u32`        | `uint`, `uvec2-4` |
| 'sint8x2'     | `sint8`   | `i32`        | `int`, `ivec2-4`  |
| 'sint8x4'     | `sint8`   | `i32`        | `int`, `ivec2-4`  |
| 'unorm8x2'    | `unorm8`  | `f16`, `f32` | `float`, `vec2-4` |
| 'unorm8x4'    | `unorm8`  | `f16`, `f32` | `float`, `vec2-4` |
| 'snorm8x2'    | `snorm8`  | `f16`, `f32` | `float`, `vec2-4` |
| 'snorm8x4'    | `snorm8`  | `f16`, `f32` | `float`, `vec2-4` |
| 'uint16x2'    | `uint16`  | `u32`        | `uint`, `uvec2-4` |
| 'uint16x4'    | `uint16`  | `u32`        | `uint`, `uvec2-4` |
| 'sint16x2'    | `sint16`  | `i32`        | `int`, `ivec2-4`  |
| 'sint16x4'    | `sint16`  | `i32`        | `int`, `ivec2-4`  |
| 'unorm16x2'   | `unorm16` | `f16`, `f32` | `float`, `vec2-4` |
| 'unorm16x4'   | `unorm16` | `f16`, `f32` | `float`, `vec2-4` |
| 'snorm16x2'   | `snorm16` | `f16`, `f32` | `float`, `vec2-4` |
| 'snorm16x4'   | `snorm16` | `f16`, `f32` | `float`, `vec2-4` |
| 'float16x2'   | `float16` | `f16`        | `float`, `vec2-4` |
| 'float16x4'   | `float16` | `f16`        | `float`, `vec2-4` |
| 'float32'     | `float32` | `f32`        | `float`, `vec2-4` |
| 'float32x2'   | `float32` | `f32`        | `float`, `vec2-4` |
| 'float32x3'   | `float32` | `f32`        | `float`, `vec2-4` |
| 'float32x4'   | `float32` | `f32`        | `float`, `vec2-4` |
| 'uint32'      | `uint32`  | `u32`        | `uint`, `uvec2-4` |
| 'uint32x2'    | `uint32`  | `u32`        | `uint`, `uvec2-4` |
| 'uint32x3'    | `uint32`  | `u32`        | `uint`, `uvec2-4` |
| 'uint32x4'    | `uint32`  | `u32`        | `uint`, `uvec2-4` |
| 'sint32'      | `sint32`  | `i32`        | `int`, `ivec2-4`  |
| 'sint32x2'    | `sint32`  | `i32`        | `int`, `ivec2-4`  |
| 'sint32x3'    | `sint32`  | `i32`        | `int`, `ivec2-4`  |
| 'sint32x4'    | `sint32`  | `i32`        | `int`, `ivec2-4`  |


Note:
- 8 and 16 bit values only support 2 or 4 components. This is a WebGPU-specific limitation that does not exist in WebGL, but is enforced for portability.
- WebGL: GLSL supports `bool` and `bvec*`, but these are not portable to WebGPU and are not included here.
- WebGL: The GLSL types `double` and `dvec*` are not supported in any WebGL version.
- WebGPU: WGSL `f64` (a hypothetical double type) is not supported. Perhaps in a future extension?
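
To show how the format strings tie an attribute to its buffer layout, here is a hedged sketch of attribute declarations in a `ShaderLayout` (see the `ShaderLayout` documentation changed in this PR; the attribute names are illustrative):

```typescript
// Each vertex format string describes the memory layout of one attribute column.
const shaderLayout = {
  attributes: {
    // 'float32x3': 3 x 32-bit floats = 12 bytes per vertex
    positions: {location: 0, format: 'float32x3'},
    // 'unorm8x4': 4 x 8-bit unsigned bytes, normalized to [0, 1] floats in the shader
    colors: {location: 1, format: 'unorm8x4'}
  }
};
```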
7 changes: 7 additions & 0 deletions docs/api-guide/background/webgpu-vs-webgl.md
@@ -74,6 +74,13 @@ In WebGL many parameters are set on the WebGL context using individual function

- and a range of WebGL features are no longer available (uniforms and transform feedback just to mention a few). There are of course good reasons for this (and in many cases these incompatibilities reflect choices made by the underlying next-gen APIs) but WebGPU does create quite an upgrade shock for existing WebGL-based frameworks.

## Attributes

In WebGPU:
- Unlike WebGL, WebGPU attribute sizes must be multiples of 2, which means that attributes with 1 or 3 bytes per vertex are not possible.
- Attribute formats (type, components, normalization, etc.) are specified when creating a pipeline (program). They cannot be re-specified when rebinding an attribute, as they can in WebGL.
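
For comparison, here is a minimal sketch using the standard WebGPU API (not luma.gl; the TypeScript annotation assumes `@webgpu/types`) showing how attribute formats are fixed at pipeline creation time:

```typescript
// In WebGPU, vertex attribute formats are baked into the pipeline descriptor.
// Binding a buffer with a different layout later requires creating a new pipeline.
const vertexBufferLayout: GPUVertexBufferLayout = {
  arrayStride: 16, // bytes per vertex: float32x3 position (12) + unorm8x4 color (4)
  stepMode: 'vertex',
  attributes: [
    {shaderLocation: 0, offset: 0, format: 'float32x3'},
    {shaderLocation: 1, offset: 12, format: 'unorm8x4'}
  ]
};
```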


## No TransformFeedback

- and a range of WebGL features are no longer available (uniforms and transform feedback just to mention a few). There are of course good reasons for this (and in many cases these incompatibilities reflect choices made by the underlying next-gen APIs) but WebGPU does create quite an upgrade shock for existing WebGL-based frameworks.
Expand Down
69 changes: 60 additions & 9 deletions docs/api-reference/api/shader-layout.md
@@ -4,18 +4,21 @@
The luma.gl v9 API is currently in [public review](/docs/public-review) and may be subject to change.
:::

luma.gl defines the `ShaderLayout` type to collect a description of a (pair of) shaders.
A `ShaderLayout` is used when creating a `RenderPipeline` or `ComputePipeline`.

The binding of data is performed in JavaScript on the CPU. For this to work,
a certain amount of metadata is needed in JavaScript that describes the layout of a
render or compute pipeline's specific shaders.

Shader code contains declarations of attributes, uniform blocks, samplers etc in WGSL or GLSL code. After compilation and linking of fragment and vertex shaders into a pipeline, the resolved declarations collectively define the layout of the data that needs to be bound before the shader can execute on the GPU.

`ShaderLayout`s can be created manually by a programmer (by reading the shader code
and copying the relevant declarations).

:::info
A default `ShaderLayout` can be extracted programmatically by the `RenderPipeline` in WebGL, but this is not yet possible in WebGPU. Therefore it is necessary to provide an explicit `layout` property to any `RenderPipeline` that is expected to run in WebGPU. This restriction may be lifted in the future.
:::

```typescript
type ShaderLayout = {
  ...
};
```

### attributes

The `attributes` field declares structural information about the shader pipeline.
It contains fixed information about each attribute, such as its location (the index in the attribute bank, typically between 0-15) and whether the attribute is instanced.

```typescript
attributes: {
  instancePositions: {location: 0, format: 'float32x2', stepMode: 'instance'},
  ...
}
```

### bufferMap

The `bufferMap` field describes how the buffers that are bound map to the attributes
when calling `Model.setAttributes()` or `RenderPipeline.setAttributes()`.

The simplest use case is to provide a non-default vertex type:

```typescript
bufferMap: [
  {name: 'instancePositions', format: 'float32x3'},
  ...
  // RGBA colors can be efficiently encoded as 4 8-bit bytes, instead of 4 32-bit floats
  {name: 'instanceColors', format: 'unorm8x4'}
],
```

A more advanced use case is interleaving: two attributes access the same buffer in an interleaved way.

```typescript
bufferMap: [
  {name: 'particles', attributes: [
    {name: 'instancePositions'},
    {name: 'instanceVelocities'}
  ]}
],
```

In the above case, a new buffer name `particles` is defined, and `setAttributes()`
calls will recognize that name and bind the provided buffer to all the interleaved
attributes.
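
A hedged sketch of how the interleaved buffer might then be created and bound (the exact `device.createBuffer()` props are an assumption; the `particles` name matches the `bufferMap` declaration above):

```typescript
// Interleaved data: for each particle, a float32x3 position followed by
// a float32x3 velocity (24 bytes per particle).
const particleData = new Float32Array([
  // posX, posY, posZ, velX, velY, velZ
  0, 0, 0, 1, 0, 0, // particle 0
  1, 1, 0, 0, 1, 0  // particle 1
]);
const particleBuffer = device.createBuffer({data: particleData}); // assumed props

// 'particles' matches the buffer name in bufferMap, so this one buffer
// backs both instancePositions and instanceVelocities.
model.setAttributes({particles: particleBuffer});
```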


### bindings

Bindings cover textures, samplers and uniform buffers. The `location` (index on the GPU)
and `type` are the key pieces of information that need to be provided.

```typescript
bindings?: {
  projectionUniforms: {location: 0, type: 'uniforms'},
  textureSampler: {location: 1, type: 'sampler'},
  texture: {location: 2, type: 'texture'}
}
```
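
As an illustration, resources could then be bound by name, assuming a `setBindings()`-style method (the method name is an assumption based on the luma.gl v9 API, and the bound variables are presumed to be created elsewhere):

```typescript
// Names and locations must match the bindings declared in the shader layout above.
model.setBindings({
  projectionUniforms: uniformBuffer, // a Buffer bound as a uniform buffer
  textureSampler: sampler,           // a Sampler
  texture: texture                   // a Texture
});
```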


### uniforms

And "free" uniforms (not part of a uniform buffer) are declared in this field.

:::caution
Uniforms are a WebGL-only concept, and it is strongly recommended to use uniform
buffers instead.
:::
33 changes: 19 additions & 14 deletions docs/developer-guide/debugging.md
@@ -1,16 +1,20 @@
# Debugging

Debugging GPU code can be challenging. Standard CPU-side debugging tools like
breakpoints and single stepping are not available in GPU shaders. When shaders fail, the result is often a blank screen that does not provide much information about what went wrong.
In addition, the error behind a failed render can be located in very different parts of the code:
- it can be in the shader code itself
- but it can also be in the data that was provided to the GPU (attributes, bindings, uniforms etc)
- or in one of the many GPU pipeline settings
- or in the way the APIs were called.

The good news is that luma.gl provides a number of facilities for debugging your GPU code,
to help you save time during development. These features include:

- All GPU objects have auto-populated but configurable `id` fields.
- Configurable logging of GPU operations, with optional verbose logs that display all values being passed to each draw call.
- Detailed logs of errors and warnings during shader compilation.
- WebGL parameter validation.
- Spector.js integration.
- Khronos WebGL debug integration: synchronous WebGL error capture (optional module).
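
For example, an explicit `id` can be passed when creating GPU objects so that log messages point at a recognizable name; a minimal sketch (props other than `id` are assumptions):

```typescript
// An explicit id makes this buffer easy to identify in luma.gl's logs
// and error messages, instead of an auto-generated name.
const positionBuffer = device.createBuffer({
  id: 'instance-positions', // shows up in logs and debug output
  data: new Float32Array([0, 0, 1, 1])
});
```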

@@ -78,15 +82,16 @@ The most flexible way to enable WebGL API tracing is by typing the following command

Note that the developer tools module is loaded dynamically when a device is created with the debug flag set, so the developer tools can be activated in production code by opening the browser console and typing:

```typescript
luma.set('debug', true)
```

and then reloading the browser tab.

While usually not recommended, it is also possible to activate the developer tools manually. Call [`luma.createDevice`](/docs/api-reference-v8/webgl-legacy/context/context-api) with `debug: true` to create a WebGL context instrumented with the WebGL developer tools:

```typescript
import {luma} from '@luma.gl/api';
import '@luma.gl/debug';
const device = luma.createDevice({type: 'webgl', debug: true});
```

1 change: 1 addition & 0 deletions docs/sidebar.json
@@ -41,6 +41,7 @@
"api-guide/rendering",
"api-guide/parameters",
"api-guide/bindings",
"api-guide/attributes",
"api-guide/transforms",
"api-guide/shader-modules"
]