
chore: Improve CommandEncoder implementation (#2265)
ibgreen authored Sep 20, 2024
1 parent 798b3d9 commit fe10255
Showing 18 changed files with 267 additions and 142 deletions.
60 changes: 36 additions & 24 deletions docs/api-guide/gpu/gpu-textures.mdx
Original file line number Diff line number Diff line change
@@ -5,24 +5,27 @@ import {DeviceTabs, Format as Ft, Filter as L, Render as R} from '@site/src/reac
While the idea behind textures is simple in principle (a grid of pixels stored on GPU memory), GPU Textures are surprisingly complex objects.

Capabilities

- Shaders can read (sample) from textures
- Textures can be set up as render targets (by attaching them to a framebuffer).
- **Arrays** Textures can have multiple images (texture arrays, cube textures, 3d textures...), indexed by a `depth` parameter.
- **Mipmaps** Each texture image can have a "pyramid" of "mipmap" images representing the image at successively lower resolutions.

Textures are supported by additional GPU objects:

- **Samplers** - The specifics of how shaders read from textures (interpolation methods, edge behaviors etc) are controlled by **GPU Sampler objects**. luma.gl will create a default sampler object for each texture, but the application can override it if desired.
- **TextureViews** - A texture view specifies a subset of the images in a texture, enabling operations to be performed on such subsets. luma.gl will create a default TextureView for each texture, but the application can create additional TextureViews.
- **Framebuffers** - A framebuffer is a map of "attachment points" to one or more textures that can be used when creating `RenderPasses` and made available to shaders.

Setting texture data from CPU data:

- There is a fast path for setting texture data from "images", which can also be used for 8-bit RGBA data.
- General data transfer is more complicated; it needs to go through a GPU Buffer and a CommandEncoder object.

Notes:

- Some GPUs offer additional texture-related capabilities (such as availability of advanced image formats, or more formats being filterable or renderable).
- If you would like to take advantage of such capabilities, check `DeviceFeatures` to discover which features are implemented by the current `Device` (i.e. the current WebGPU or WebGL environment / browser the application is running on).

## Texture Dimension

@@ -69,7 +72,6 @@ defining a common super-compressed format which can be decoded after load into a

To use Basis supercompressed textures in luma.gl, see the [loaders.gl](https://loaders.gl) `BasisLoader`, which can extract compressed textures from a Basis encoded texture.


## Texture Data

Textures may not be completely packed; there may be padding (per row).
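The per-row padding can be illustrated with a standalone sketch (the function name is hypothetical, not a luma.gl API). The 256-byte alignment value follows the WebGPU requirement for buffer-to-texture copies:

```typescript
// WebGPU requires bytesPerRow in buffer<->texture copies to be a
// multiple of 256 bytes; the trailing bytes in each row are padding.
const BYTES_PER_ROW_ALIGNMENT = 256;

function getPaddedBytesPerRow(width: number, bytesPerPixel: number): number {
  const unpadded = width * bytesPerPixel;
  // Round up to the nearest multiple of the alignment
  return Math.ceil(unpadded / BYTES_PER_ROW_ALIGNMENT) * BYTES_PER_ROW_ALIGNMENT;
}

// A 100-pixel-wide RGBA8 row occupies 400 bytes but is stored in 512.
console.log(getPaddedBytesPerRow(100, 4)); // 512
```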
@@ -92,47 +94,57 @@ A primary purpose of textures is to allow the GPU to read from them.
```

### Filtering

Texture formats that are filterable can be interpolated by the GPU, meaning that during sampling texel values are blended for better results.
Sampling is a highly configurable process, specified by binding a separate `Sampler` object for each texture.

Sampling can be specified separately for:

- magnification
- minification
- anisotropy

Notes:

- Not all texture formats are filterable. For less common texture formats it is possible to query the device to determine if filtering is supported.
- Filtering with transparent textures can result in undesirable artifacts (darkening and false color halos) unless you work in premultiplied colors.
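The premultiplied-color workaround mentioned above can be sketched as a standalone helper (illustrative; not part of the luma.gl API):

```typescript
// Converting a straight-alpha RGBA color to premultiplied alpha.
// Filtering premultiplied colors avoids dark and false-color halos,
// because fully transparent texels contribute (0, 0, 0, 0) to the
// filtered result instead of their (invisible) RGB values.
type RGBA = [number, number, number, number]; // components in [0, 1]

function premultiply([r, g, b, a]: RGBA): RGBA {
  return [r * a, g * a, b * a, a];
}

// A fully transparent white texel no longer leaks white into its neighbors:
console.log(premultiply([1, 1, 1, 0])); // [0, 0, 0, 0]
```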

### Mipmaps

To improve sampling quality when sampling from a distance (`minFilter`), mipmap filtering can be used.
Mipmaps are a pyramid of lower resolution images that are stored for each texture image and used by the GPU when the texture is sampled at a distance.

Using mipmap filtering requires some extra setup. The texture being sampled must be created with the `mipLevels` property set to the appropriate number of mip levels,
and each mip level must be initialized with a scaled down version of the mip level 0 image.
Note that mip levels can be generated automatically by luma.gl, or each mip level can be set explicitly by the application.
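As a standalone sketch (the helper name is hypothetical, not a luma.gl API), each mip level's image is the previous one halved and floored, down to a minimum of 1:

```typescript
// Dimensions of the scaled down image expected at a given mip level,
// assuming each level halves the base (level 0) dimensions.
function getMipLevelSize(width: number, height: number, level: number): [number, number] {
  return [
    Math.max(1, Math.floor(width / 2 ** level)),
    Math.max(1, Math.floor(height / 2 ** level))
  ];
}

// A 16x8 texture has 5 mip levels: 16x8, 8x4, 4x2, 2x1, 1x1
for (let level = 0; level < 5; level++) {
  console.log(getMipLevelSize(16, 8, level));
}
```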

Mipmap usage is controlled via `SamplerProps.mipmapFilter`:

| `mipmapFilter` | `minFilter` | Description                                     | Linearity             | Speed   |
| -------------- | ----------- | ----------------------------------------------- | --------------------- | ------- |
| `none`         | `nearest`   | No filtering, no mipmaps                        | none                  |         |
| `none`         | `linear`    | Filtering, no mipmaps                           | bilinear              | slowest |
| `nearest`      | `nearest`   | No filtering, sharp switching between mipmaps   | none                  |         |
| `nearest`      | `linear`    | No filtering, smooth transition between mipmaps | linear                |         |
| `linear`       | `nearest`   | Filtering, sharp switching between mipmaps      | bilinear with mipmaps | fastest |
| `linear`       | `linear`    | Filtering, smooth transition between mipmaps    | trilinear             |         |
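The "Linearity" column of the table above can be mirrored as a small lookup (a sketch; the string values follow `SamplerProps`, the helper itself is illustrative):

```typescript
// Filter values as used by sampler properties
type Filter = 'nearest' | 'linear';
type MipmapFilter = 'none' | Filter;

// Returns the linearity mode named in the table for a given combination
function getLinearity(mipmapFilter: MipmapFilter, minFilter: Filter): string {
  if (minFilter === 'nearest') {
    // No filtering within a mip level; only mipmap switching differs
    return mipmapFilter === 'linear' ? 'bilinear with mipmaps' : 'none';
  }
  // minFilter === 'linear'
  if (mipmapFilter === 'none') return 'bilinear';
  return mipmapFilter === 'linear' ? 'trilinear' : 'linear';
}

console.log(getLinearity('linear', 'linear')); // trilinear
```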

In addition, the `anisotropy` sampler property controls how many mip levels are used during sampling.

Notes:

- Enabling mipmap filtering not only improves visual quality, but can also improve performance: when a scaled down, lower resolution mip level is selected, memory bandwidth requirements are reduced.
- Linear filtering is considered "bilinear" because a linear filter is applied in two directions, sequentially: first along the image's x axis (width), and then along the y axis (height).
- Linear mipmap filtering is considered "trilinear" since it also interpolates linearly between mip levels.
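The two sequential linear filters described above can be sketched as plain arithmetic (illustrative only; the GPU does this in hardware):

```typescript
// Linear interpolation between two values
function lerp(a: number, b: number, t: number): number {
  return a + (b - a) * t;
}

// Bilinear filtering: interpolate the four texel values surrounding a
// sample point, first along x, then along y.
function bilinear(
  t00: number, t10: number, t01: number, t11: number,
  tx: number, ty: number
): number {
  const top = lerp(t00, t10, tx); // linear filter along x (top row)
  const bottom = lerp(t01, t11, tx); // linear filter along x (bottom row)
  return lerp(top, bottom, ty); // second linear filter along y
}

console.log(bilinear(0, 1, 0, 1, 0.5, 0.5)); // 0.5
```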

## Binding textures

Before textures can be sampled in the fragment shader, they must be bound. A sampler must also be bound for each texture, though luma.gl will bind the texture's default sampler if one is not supplied.

## Texture Rendering (Writing to Textures on the GPU)

Texture formats that are renderable can be bound to framebuffer color or depthStencil attachments so that shaders can write to them.

## Blending

41 changes: 31 additions & 10 deletions modules/core/src/adapter/device.ts
@@ -19,6 +19,7 @@ import type {Framebuffer, FramebufferProps} from './resources/framebuffer';
import type {RenderPass, RenderPassProps} from './resources/render-pass';
import type {ComputePass, ComputePassProps} from './resources/compute-pass';
import type {CommandEncoder, CommandEncoderProps} from './resources/command-encoder';
import type {CommandBuffer} from './resources/command-buffer';
import type {VertexArray, VertexArrayProps} from './resources/vertex-array';
import type {TransformFeedback, TransformFeedbackProps} from './resources/transform-feedback';
import type {QuerySet, QuerySetProps} from './resources/query-set';
@@ -360,6 +361,7 @@ export abstract class Device {
/** type of this device */
abstract readonly type: 'webgl' | 'webgpu' | 'null' | 'unknown';
abstract readonly handle: unknown;
/** The default CommandEncoder for this device */
abstract commandEncoder: CommandEncoder;

/** A copy of the device props */
readonly props: Required<DeviceProps>;
@@ -409,9 +411,10 @@ export abstract class Device {
return textureCaps;
}

/** Calculates the number of mip levels for a texture of width, height and, for 3d textures only, depth */
getMipLevelCount(width: number, height: number, depth3d: number = 1): number {
const maxSize = Math.max(width, height, depth3d);
return 1 + Math.floor(Math.log2(maxSize));
}
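A standalone copy of the `getMipLevelCount` logic above, with example values (one base level, plus one level per halving of the largest dimension):

```typescript
// Mirrors Device.getMipLevelCount: the mip chain halves the largest
// dimension until it reaches 1, so the count is 1 + floor(log2(maxSize)).
function getMipLevelCount(width: number, height: number, depth3d: number = 1): number {
  const maxSize = Math.max(width, height, depth3d);
  return 1 + Math.floor(Math.log2(maxSize));
}

console.log(getMipLevelCount(1024, 512)); // 11 (1024, 512, ..., 2, 1)
console.log(getMipLevelCount(1, 1)); // 1 (only the base level)
```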

/** Check if data is an external image */
@@ -444,6 +447,20 @@ return isTextureFormatCompressed(format);
return isTextureFormatCompressed(format);
}

// DEBUG METHODS

pushDebugGroup(groupLabel: string): void {
this.commandEncoder.pushDebugGroup(groupLabel);
}

popDebugGroup(): void {
this.commandEncoder?.popDebugGroup();
}

insertDebugMarker(markerLabel: string): void {
this.commandEncoder?.insertDebugMarker(markerLabel);
}
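A standalone mock (not the actual implementation, which forwards to the device's default `CommandEncoder`) illustrating the nesting semantics of the debug methods above: groups form a stack, and markers attach to the currently open group.

```typescript
// Records debug events the way pushDebugGroup / popDebugGroup /
// insertDebugMarker nest: a stack of group labels, with markers
// attributed to the group that is open when they are inserted.
class DebugGroupStack {
  private readonly stack: string[] = [];
  readonly events: string[] = [];

  pushDebugGroup(groupLabel: string): void {
    this.stack.push(groupLabel);
    this.events.push(`begin:${this.stack.join('/')}`);
  }

  popDebugGroup(): void {
    this.events.push(`end:${this.stack.join('/')}`);
    this.stack.pop();
  }

  insertDebugMarker(markerLabel: string): void {
    this.events.push(`marker:${[...this.stack, markerLabel].join('/')}`);
  }
}

const debug = new DebugGroupStack();
debug.pushDebugGroup('frame');
debug.insertDebugMarker('draw');
debug.popDebugGroup();
console.log(debug.events); // ['begin:frame', 'marker:frame/draw', 'end:frame']
```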

// Device loss

/** `true` if device is already lost */
@@ -492,7 +509,7 @@ export abstract class Device {
abstract createCanvasContext(props?: CanvasContextProps): CanvasContext;

/** Call after rendering a frame (necessary e.g. on WebGL OffscreenCanvas) */
abstract submit(commandBuffer?: CommandBuffer): void;

// Resource creation

@@ -523,19 +540,23 @@ export abstract class Device {
/** Create a vertex array */
abstract createVertexArray(props: VertexArrayProps): VertexArray;

/** Create a CommandEncoder */
abstract createCommandEncoder(props?: CommandEncoderProps): CommandEncoder;

/** Create a transform feedback (immutable set of output buffer bindings). WebGL only. */
abstract createTransformFeedback(props: TransformFeedbackProps): TransformFeedback;

/** Create a QuerySet */
abstract createQuerySet(props: QuerySetProps): QuerySet;

/** Create a RenderPass using the default CommandEncoder */
beginRenderPass(props?: RenderPassProps): RenderPass {
return this.commandEncoder.beginRenderPass(props);
}

/** Create a ComputePass using the default CommandEncoder */
beginComputePass(props?: ComputePassProps): ComputePass {
return this.commandEncoder.beginComputePass(props);
}

/**
* Determines what operations are supported on a texture format, checking against supported device features
* Subclasses override to apply additional checks
13 changes: 12 additions & 1 deletion modules/core/src/adapter/resources/command-encoder.ts
@@ -8,6 +8,9 @@ import {Resource, ResourceProps} from './resource';
import {Buffer} from './buffer';
import {Texture} from './texture';
import {QuerySet} from './query-set';
import type {RenderPass, RenderPassProps} from './render-pass';
import type {ComputePass, ComputePassProps} from './compute-pass';
import type {CommandBuffer, CommandBufferProps} from './command-buffer';

// WEBGPU COMMAND ENCODER OPERATIONS

@@ -131,6 +134,8 @@ export type CommandEncoderProps = ResourceProps & {
* Encodes commands to queue that can be executed later
*/
export abstract class CommandEncoder extends Resource<CommandEncoderProps> {
abstract readonly handle: unknown;

override get [Symbol.toStringTag](): string {
return 'CommandEncoder';
}
@@ -140,7 +145,13 @@ export abstract class CommandEncoder extends Resource<CommandEncoderProps> {
}

/** Completes recording of the commands sequence */
abstract finish(props?: CommandBufferProps): CommandBuffer;

/** Create a RenderPass that records into this CommandEncoder */
abstract beginRenderPass(props?: RenderPassProps): RenderPass;

/** Create a ComputePass that records into this CommandEncoder */
abstract beginComputePass(props?: ComputePassProps): ComputePass;

/** Add a command that copies data from a sub-region of a Buffer to a sub-region of another Buffer. */
abstract copyBufferToBuffer(options: CopyBufferToBufferOptions): void;
15 changes: 8 additions & 7 deletions modules/core/test/adapter/resources/command-encoder.spec.ts
@@ -27,8 +27,8 @@ test('CommandBuffer#copyBufferToBuffer', async t => {
destinationBuffer,
size: 2 * Float32Array.BYTES_PER_ELEMENT
});
let commandBuffer = commandEncoder.finish();
device.submit(commandBuffer);

receivedData = await readAsyncF32(destinationBuffer);
expectedData = new Float32Array([1, 2, 6]);
@@ -42,8 +42,8 @@ test('CommandBuffer#copyBufferToBuffer', async t => {
destinationOffset: 2 * Float32Array.BYTES_PER_ELEMENT,
size: Float32Array.BYTES_PER_ELEMENT
});
commandBuffer = commandEncoder.finish();
device.submit(commandBuffer);

receivedData = await readAsyncF32(destinationBuffer);
expectedData = new Float32Array([1, 2, 2]);
@@ -167,8 +167,8 @@ async function testCopyTextureToBuffer(
destinationBuffer,
byteOffset: dstByteOffset
});
const commandBuffer = commandEncoder.finish();
device_.submit(commandBuffer);

const color =
srcPixel instanceof Uint8Array
@@ -219,7 +219,8 @@ function testCopyToTexture(

const commandEncoder = device_.createCommandEncoder();
commandEncoder.copyTextureToTexture({sourceTexture, destinationTexture});
const commandBuffer = commandEncoder.finish();
device_.submit(commandBuffer);

// Read data from destination texture
const color = device_.readPixelsToArrayWebGL(destinationTexture);
1 change: 1 addition & 0 deletions modules/engine/src/compute/texture-transform.ts
@@ -95,6 +95,7 @@ export class TextureTransform {
const renderPass = this.device.beginRenderPass({framebuffer, ...options});
this.model.draw(renderPass);
renderPass.end();
this.device.submit();
}

getTargetTexture(): Texture {
4 changes: 2 additions & 2 deletions modules/engine/src/model/model.ts
@@ -50,8 +50,8 @@ const LOG_DRAW_TIMEOUT = 10000;

export type ModelProps = Omit<RenderPipelineProps, 'vs' | 'fs' | 'bindings'> & {
source?: string;
vs?: string | null;
fs?: string | null;

/** shadertool shader modules (added to shader code) */
modules?: ShaderModule[];
7 changes: 4 additions & 3 deletions modules/engine/test/compute/texture-transform.spec.ts
@@ -144,9 +144,10 @@ async function readU8(
): Promise<Uint8Array> {
const destinationBuffer = webglDevice.createBuffer({byteLength});
try {
const commandEncoder = webglDevice.createCommandEncoder();
commandEncoder.copyTextureToBuffer({sourceTexture, destinationBuffer});
const commandBuffer = commandEncoder.finish();
webglDevice.submit(commandBuffer);
return destinationBuffer.readAsync();
} finally {
destinationBuffer.destroy();
16 changes: 4 additions & 12 deletions modules/test-utils/src/null-device/null-device.ts
@@ -17,9 +17,6 @@ import type {
RenderPipelineProps,
ComputePipeline,
ComputePipelineProps,
RenderPassProps,
ComputePass,
ComputePassProps,
CommandEncoderProps,
TransformFeedbackProps,
QuerySetProps
@@ -32,7 +29,7 @@ import {NullCanvasContext} from './null-canvas-context';
import {NullBuffer} from './resources/null-buffer';
import {NullFramebuffer} from './resources/null-framebuffer';
import {NullShader} from './resources/null-shader';
import {NullCommandEncoder} from './resources/null-command-encoder';
import {NullSampler} from './resources/null-sampler';
import {NullTexture} from './resources/null-texture';
import {NullRenderPass} from './resources/null-render-pass';
@@ -57,6 +54,8 @@ export class NullDevice extends Device {
readonly info = NullDeviceInfo;

readonly canvasContext: NullCanvasContext;
override commandEncoder: NullCommandEncoder;

readonly lost: Promise<{reason: 'destroyed'; message: string}>;

constructor(props: DeviceProps) {
@@ -65,6 +64,7 @@
const canvasContextProps = props.createCanvasContext === true ? {} : props.createCanvasContext;
this.canvasContext = new NullCanvasContext(this, canvasContextProps);
this.lost = new Promise(resolve => {});
this.commandEncoder = new NullCommandEncoder(this, {id: 'null-command-encoder'});
}

/**
@@ -128,18 +128,10 @@
return new NullRenderPipeline(this, props);
}

createComputePipeline(props?: ComputePipelineProps): ComputePipeline {
throw new Error('ComputePipeline not supported in WebGL');
}

override createCommandEncoder(props: CommandEncoderProps = {}): NullCommandEncoder {
return new NullCommandEncoder(this, props);
}
