@antv/g-device-api
This is a set of Device APIs, also known as a hardware adaptation layer (HAL). It is implemented on top of WebGL 1/2 & WebGPU and inspired by noclip.
We currently use it in several downstream projects.
npm install @antv/g-device-api
Resource Creation
Submit
Query
Debug
GPU Resources
A device is the logical instantiation of the GPU.
import {
  Device,
  BufferUsage,
  WebGLDeviceContribution,
  WebGPUDeviceContribution,
} from '@antv/g-device-api';

// Create a WebGL based device contribution.
const deviceContribution = new WebGLDeviceContribution({
  targets: ['webgl2', 'webgl1'],
});
// Or create a WebGPU based device contribution.
const deviceContribution = new WebGPUDeviceContribution({
  shaderCompilerPath: '/glsl_wgsl_compiler_bg.wasm',
  // shaderCompilerPath:
  //   'https://unpkg.com/@antv/g-device-api@1.4.9/rust/pkg/glsl_wgsl_compiler_bg.wasm',
});

const swapChain = await deviceContribution.createSwapChain($canvas);
swapChain.configureSwapChain(width, height);
const device = swapChain.getDevice();
A Buffer represents a block of memory that can be used in GPU operations. Data is stored in a linear layout.
We reference the WebGPU design:
createBuffer: (descriptor: BufferDescriptor) => Buffer;
The parameters follow the WebGPU design:

viewOrSize (required): set buffer data directly, or allocate a fixed length (in bytes).
usage (required): the allowed usage for this buffer.
hint (optional): known as the usage hint when calling bufferData in WebGL.

interface BufferDescriptor {
  viewOrSize: ArrayBufferView | number;
  usage: BufferUsage;
  hint?: BufferFrequencyHint;
}
We can set buffer data directly, or allocate a fixed length for later use, e.g. with setSubData:

const buffer = device.createBuffer({
  viewOrSize: new Float32Array([1, 2, 3, 4]),
  usage: BufferUsage.VERTEX,
});

// or
const buffer = device.createBuffer({
  viewOrSize: 4 * Float32Array.BYTES_PER_ELEMENT, // in bytes
  usage: BufferUsage.VERTEX,
});
buffer.setSubData(0, new Uint8Array(new Float32Array([1, 2, 3, 4]).buffer));
The allowed usages for a buffer. They can also be combined, e.g. BufferUsage.VERTEX | BufferUsage.STORAGE.

enum BufferUsage {
  MAP_READ = 0x0001,
  MAP_WRITE = 0x0002,
  COPY_SRC = 0x0004,
  COPY_DST = 0x0008,
  INDEX = 0x0010,
  VERTEX = 0x0020,
  UNIFORM = 0x0040,
  STORAGE = 0x0080,
  INDIRECT = 0x0100,
  QUERY_RESOLVE = 0x0200,
}
This parameter is called usage in WebGL. We renamed it to hint to avoid duplicate naming.

enum BufferFrequencyHint {
  Static = 0x01,
  Dynamic = 0x02,
}
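For example, a frequently-updated uniform buffer might be allocated with a Dynamic hint (a minimal sketch based on the descriptors above; the size is arbitrary):

const dynamicUniformBuffer = device.createBuffer({
  viewOrSize: 16 * Float32Array.BYTES_PER_ELEMENT, // e.g. room for a mat4
  usage: BufferUsage.UNIFORM,
  hint: BufferFrequencyHint.Dynamic, // rewritten every frame via setSubData
});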
This method references the WebGPU design to create a Texture:
createTexture: (descriptor: TextureDescriptor) => Texture;

The parameters follow the WebGPU design:

interface TextureDescriptor {
  usage: TextureUsage;
  format: Format;
  width: number;
  height: number;
  depthOrArrayLayers?: number;
  dimension?: TextureDimension;
  mipLevelCount?: number;
  pixelStore?: Partial<{
    packAlignment: number;
    unpackAlignment: number;
    unpackFlipY: boolean;
  }>;
}
usage (required): the allowed usages for this GPUTexture.
format (required): the format of this GPUTexture.
width (required): the width of this GPUTexture.
height (required): the height of this GPUTexture.
depthOrArrayLayers (optional): the depth or layer count of this GPUTexture. Defaults to 1.
dimension (optional): the dimension of the set of texels for each of this GPUTexture's subresources. Defaults to TextureDimension.TEXTURE_2D.
mipLevelCount (optional): the number of mip levels of this GPUTexture. Defaults to 1.
pixelStore (optional): specifies the pixel storage modes in WebGL:
gl.PACK_ALIGNMENT
gl.UNPACK_ALIGNMENT
gl.UNPACK_FLIP_Y_WEBGL
The TextureUsage enum is as follows:

enum TextureUsage {
  SAMPLED,
  RENDER_TARGET, // When rendering to texture, choose this usage.
}

The TextureDimension enum is as follows:

enum TextureDimension {
  TEXTURE_2D,
  TEXTURE_2D_ARRAY,
  TEXTURE_3D,
  TEXTURE_CUBE_MAP,
}
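For example, a plain 2D texture for sampling might be created like this (a sketch; Format.U8_RGBA_NORM also appears in the examples further below):

const texture = device.createTexture({
  format: Format.U8_RGBA_NORM,
  width: 256,
  height: 256,
  usage: TextureUsage.SAMPLED,
});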
Samplers are created via createSampler().

createSampler: (descriptor: SamplerDescriptor) => Sampler;

The parameters reference GPUSamplerDescriptor.

interface SamplerDescriptor {
  addressModeU: AddressMode;
  addressModeV: AddressMode;
  addressModeW?: AddressMode;
  minFilter: FilterMode;
  magFilter: FilterMode;
  mipmapFilter: MipmapFilterMode;
  lodMinClamp?: number;
  lodMaxClamp?: number;
  maxAnisotropy?: number;
  compareFunction?: CompareFunction;
}
AddressMode describes the behavior of the sampler if the sample footprint extends beyond the bounds of the sampled texture.

enum AddressMode {
  CLAMP_TO_EDGE,
  REPEAT,
  MIRRORED_REPEAT,
}

FilterMode and MipmapFilterMode describe the behavior of the sampler if the sample footprint does not exactly match one texel.

enum FilterMode {
  POINT,
  BILINEAR,
}
enum MipmapFilterMode {
  NO_MIP,
  NEAREST,
  LINEAR,
}
CompareFunction specifies the behavior of a comparison sampler. If a comparison sampler is used in a shader, an input value is compared to the sampled texture value, and the result of this comparison test (0.0f for pass, or 1.0f for fail) is used in the filtering operation.

enum CompareFunction {
  NEVER = GL.NEVER,
  LESS = GL.LESS,
  EQUAL = GL.EQUAL,
  LEQUAL = GL.LEQUAL,
  GREATER = GL.GREATER,
  NOTEQUAL = GL.NOTEQUAL,
  GEQUAL = GL.GEQUAL,
  ALWAYS = GL.ALWAYS,
}
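Putting the enums together, a typical sampler might be created like this (a sketch using only fields from the descriptor above):

const sampler = device.createSampler({
  addressModeU: AddressMode.CLAMP_TO_EDGE,
  addressModeV: AddressMode.CLAMP_TO_EDGE,
  minFilter: FilterMode.POINT,
  magFilter: FilterMode.BILINEAR,
  mipmapFilter: MipmapFilterMode.LINEAR,
});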
createRenderTarget: (descriptor: RenderTargetDescriptor) => RenderTarget;

interface RenderTargetDescriptor {
  format: Format;
  width: number;
  height: number;
  sampleCount: number;
  texture?: Texture;
}

createRenderTargetFromTexture: (texture: Texture) => RenderTarget;
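For example, a render target can be derived from a texture created with RENDER_TARGET usage (a sketch; Format.U8_RGBA_RT is an assumed render-target format, and width/height come from the surrounding code):

const renderTarget = device.createRenderTargetFromTexture(
  device.createTexture({
    format: Format.U8_RGBA_RT,
    width,
    height,
    usage: TextureUsage.RENDER_TARGET,
  }),
);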
createProgram: (program: ProgramDescriptor) => Program;

wgsl will be used directly in WebGPU, while glsl will be compiled internally. Since WebGL doesn't support compute shaders, compute is only available in WebGPU.

interface ProgramDescriptor {
  vertex?: {
    glsl?: string;
    wgsl?: string;
  };
  fragment?: {
    glsl?: string;
    wgsl?: string;
  };
  compute?: {
    wgsl: string;
  };
}
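A minimal program might look like this (a sketch; the GLSL conventions used here, such as gl_Position and outputColor, are explained in the shader language section below):

const program = device.createProgram({
  vertex: {
    glsl: `
layout(location = 0) in vec2 a_Position;

void main() {
  gl_Position = vec4(a_Position, 0.0, 1.0);
}
`,
  },
  fragment: {
    glsl: `
out vec4 outputColor;

void main() {
  outputColor = vec4(1.0, 0.0, 0.0, 1.0);
}
`,
  },
});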
createBindings: (bindingsDescriptor: BindingsDescriptor) => Bindings;

interface BindingsDescriptor {
  bindingLayout: BindingLayoutDescriptor;
  pipeline?: RenderPipeline | ComputePipeline;
  uniformBufferBindings?: BufferBinding[];
  samplerBindings?: SamplerBinding[];
  storageBufferBindings?: BufferBinding[];
  storageTextureBindings?: TextureBinding[];
}
BufferBinding has the following properties:

binding (required): should match the binding in the shader.
buffer (required).
offset (optional): the offset, in bytes, from the beginning of buffer to the beginning of the range exposed to the shader by the buffer binding. Defaults to 0.
size (optional): the size, in bytes, of the buffer binding. If not provided, specifies the range starting at offset and ending at the end of buffer.

interface BufferBinding {
  binding: number;
  buffer: Buffer;
  offset?: number;
  size?: number;
}
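For instance, bindings for a render pipeline might be created like this (a sketch; renderPipeline, uniformBuffer, texture and sampler are assumed to have been created as shown elsewhere, and the shape of the samplerBindings entries is an assumption based on the descriptor above):

const bindings = device.createBindings({
  pipeline: renderPipeline,
  uniformBufferBindings: [
    {
      binding: 0, // matches binding = 0 in the shader
      buffer: uniformBuffer,
    },
  ],
  samplerBindings: [
    {
      texture,
      sampler,
    },
  ],
});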
InputLayout defines the layout of vertex attribute data in a vertex buffer used by a pipeline.

createInputLayout: (inputLayoutDescriptor: InputLayoutDescriptor) =>
  InputLayout;

A vertex buffer is, conceptually, a view into buffer memory as an array of structures. arrayStride is the stride, in bytes, between elements of that array. Each element of a vertex buffer is like a structure with a memory layout defined by its attributes, which describe the members of the structure.

interface InputLayoutDescriptor {
  vertexBufferDescriptors: (InputLayoutBufferDescriptor | null)[];
  indexBufferFormat: Format | null;
  program: Program;
}

interface InputLayoutBufferDescriptor {
  arrayStride: number; // in bytes
  stepMode: VertexStepMode; // per vertex or instance
  attributes: VertexAttributeDescriptor[];
}

interface VertexAttributeDescriptor {
  shaderLocation: number;
  format: Format;
  offset: number;
  divisor?: number;
}
shaderLocation (required): the numeric location associated with this attribute, which will correspond with a "@location" attribute declared in the vertex module.
format (required): the VertexFormat of the attribute.
offset (required): the offset, in bytes, from the beginning of the element to the data for the attribute.
divisor (optional).
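For example, an input layout for a single per-vertex vec2 attribute might look like this (a sketch; VertexStepMode.VERTEX and Format.F32_RG are assumed enum members):

const inputLayout = device.createInputLayout({
  vertexBufferDescriptors: [
    {
      arrayStride: 2 * Float32Array.BYTES_PER_ELEMENT, // one vec2 per vertex
      stepMode: VertexStepMode.VERTEX,
      attributes: [
        {
          shaderLocation: 0, // matches layout(location = 0) in the vertex shader
          offset: 0,
          format: Format.F32_RG,
        },
      ],
    },
  ],
  indexBufferFormat: null, // no index buffer
  program,
});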
Create a Readback to read a GPU resource's data on the CPU side:

createReadback: () => Readback;

readBuffer: (
  b: Buffer,
  srcByteOffset?: number,
  dst?: ArrayBufferView,
  dstOffset?: number,
  length?: number,
) => Promise<ArrayBufferView>;

const readback = device.createReadback();
readback.readBuffer(buffer);
⚠️ WebGL2 & WebGPU only:

createQueryPool: (type: QueryPoolType, elemCount: number) => QueryPool;

queryResultOcclusion(dstOffs: number): boolean | null
A RenderPipeline is a kind of pipeline that controls the vertex and fragment shader stages.

createRenderPipeline: (descriptor: RenderPipelineDescriptor) => RenderPipeline;
The descriptor is as follows:
colorAttachmentFormats (required): the formats of the color attachments.
topology (optional): the type of primitive to be constructed from the vertex inputs. Defaults to TRIANGLES.
megaStateDescriptor (optional).
depthStencilAttachmentFormat (optional): the format of the depth & stencil attachment.
sampleCount (optional): used in MSAA. Defaults to 1.

interface RenderPipelineDescriptor extends PipelineDescriptor {
  topology?: PrimitiveTopology;
  megaStateDescriptor?: MegaStateDescriptor;
  colorAttachmentFormats: (Format | null)[];
  depthStencilAttachmentFormat?: Format | null;
  sampleCount?: number;
}

enum PrimitiveTopology {
  POINTS,
  TRIANGLES,
  TRIANGLE_STRIP,
  LINES,
  LINE_STRIP,
}

interface MegaStateDescriptor {
  attachmentsState: AttachmentState[];
  blendConstant?: Color;
  depthCompare?: CompareFunction;
  depthWrite?: boolean;
  stencilFront?: Partial<StencilFaceState>;
  stencilBack?: Partial<StencilFaceState>;
  stencilWrite?: boolean;
  cullMode?: CullMode;
  frontFace?: FrontFace;
  polygonOffset?: boolean;
  polygonOffsetFactor?: number;
  polygonOffsetUnits?: number;
}
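A minimal pipeline combining the pieces created so far might look like this (a sketch; depending on the version, additional PipelineDescriptor fields may be required):

const renderPipeline = device.createRenderPipeline({
  inputLayout,
  program,
  colorAttachmentFormats: [Format.U8_RGBA_RT],
});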
createComputePipeline: (descriptor: ComputePipelineDescriptor) =>
  ComputePipeline;

type ComputePipelineDescriptor = PipelineDescriptor;
interface PipelineDescriptor {
  bindingLayouts: BindingLayoutDescriptor[];
  inputLayout: InputLayout | null;
  program: Program;
}
A RenderPass is usually created at the beginning of each frame.
createRenderPass: (renderPassDescriptor: RenderPassDescriptor) => RenderPass;

export interface RenderPassDescriptor {
  colorAttachment: (RenderTarget | null)[];
  colorAttachmentLevel?: number[];
  colorClearColor?: (Color | 'load')[];
  colorResolveTo: (Texture | null)[];
  colorResolveToLevel?: number[];
  colorStore?: boolean[];
  depthStencilAttachment?: RenderTarget | null;
  depthStencilResolveTo?: Texture | null;
  depthStencilStore?: boolean;
  depthClearValue?: number | 'load';
  stencilClearValue?: number | 'load';
  occlusionQueryPool?: QueryPool | null;
}
⚠️ WebGPU only.

createComputePass: () => ComputePass;
A RenderBundle records draw calls during one frame and replays the recording in all subsequent frames.

const renderBundle = device.createRenderBundle();

// On each frame.
if (frameCount === 0) {
  renderPass.beginBundle(renderBundle);
  // Omit other renderpass commands.
  renderPass.endBundle();
} else {
  renderPass.executeBundles([renderBundle]);
}
Call this method at the beginning of each frame.

device.beginFrame();
const renderPass = device.createRenderPass({});
// Omit other commands.
renderPass.draw();
device.submitPass(renderPass);
device.endFrame();
Schedules the execution of the command buffers by the GPU on this queue.
submitPass(o: RenderPass | ComputePass): void;
Call endFrame at the end of each frame.
copySubTexture2D: (
  dst: Texture,
  dstX: number,
  dstY: number,
  src: Texture,
  srcX: number,
  srcY: number,
  depthOrArrayLayers?: number,
) => void;
// @see https://www.w3.org/TR/webgpu/#gpusupportedlimits
queryLimits: () => DeviceLimits;

interface DeviceLimits {
  uniformBufferWordAlignment: number;
  uniformBufferMaxPageWordSize: number;
  supportedSampleCounts: number[];
  occlusionQueriesRecommended: boolean;
  computeShadersSupported: boolean;
}
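For example (a sketch):

const limits = device.queryLimits();
if (limits.computeShadersSupported) {
  // Safe to create ComputePipelines on this device.
}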
Query whether the device's context has been lost:

queryPlatformAvailable(): boolean

WebGL / WebGPU will trigger a Lost event, after which:

device.queryPlatformAvailable(); // false
queryTextureFormatSupported(format: Format, width: number, height: number): boolean;

const shadowsSupported = device.queryTextureFormatSupported(
  Format.U16_RG_NORM,
  0,
  0,
);
WebGL 1/2 & WebGPU use different viewport origins and clip-space conventions:

queryVendorInfo: () => VendorInfo;

interface VendorInfo {
  readonly platformString: string;
  readonly glslVersion: string;
  readonly explicitBindingLocations: boolean;
  readonly separateSamplerTextures: boolean;
  readonly viewportOrigin: ViewportOrigin;
  readonly clipSpaceNearZ: ClipSpaceNearZ;
  readonly supportMRT: boolean;
}
When using Spector.js to debug our application, we can set a name on the relevant GPU resource.

setResourceName: (o: Resource, s: string) => void;

For instance, we can add a label for a RenderTarget and Spector.js will show us the metadata:

device.setResourceName(renderTarget, 'Main Render Target');

The label is also visible in the WebGPU devtools.
Checks if there are currently leaking GPU resources. We keep track of every GPU resource object created; calling this method prints the currently undestroyed objects, along with the stack information from where each resource was created, to the console, making it easy to troubleshoot memory leaks.

It is recommended to call this when destroying the scene to determine whether any resources have not been destroyed correctly. For example, if a WebGL Buffer has not been destroyed, we should call buffer.destroy() to avoid OOM.
See https://developer.mozilla.org/en-US/docs/Web/API/GPUCommandEncoder/pushDebugGroup.

pushDebugGroup(debugGroup: DebugGroup): void;

interface DebugGroup {
  name: string;
  drawCallCount: number;
  textureBindCount: number;
  bufferUploadCount: number;
  triangleCount: number;
}

See https://developer.mozilla.org/en-US/docs/Web/API/GPUCommandEncoder/popDebugGroup.
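A sketch of how a debug group might wrap a block of commands (assuming pushDebugGroup/popDebugGroup are exposed on the device and the counters are accumulated by the implementation):

device.pushDebugGroup({
  name: 'shadow pass',
  drawCallCount: 0,
  textureBindCount: 0,
  bufferUploadCount: 0,
  triangleCount: 0,
});
// ...record draw calls...
device.popDebugGroup();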
A Buffer represents a block of memory that can be used in GPU operations. Data is stored in linear layout.
We can set data in a buffer with this method:

dstByteOffset (required): offset into the destination buffer, in bytes.
src (required): source buffer data; must be a Uint8Array.
srcByteOffset (optional): offset into the source buffer, in bytes. Defaults to 0.
byteLength (optional): defaults to the whole length of the source buffer.

setSubData: (
  dstByteOffset: number,
  src: Uint8Array,
  srcByteOffset?: number,
  byteLength?: number,
) => void;
One texture consists of one or more texture subresources, each uniquely identified by a mipmap level and, for 2D textures only, array layer and aspect.

We can set data in a texture with this method:

data (required): array of TexImageSource or ArrayBufferView.
lod (optional): mip level. Defaults to 0.

setImageData: (
  data: (TexImageSource | ArrayBufferView)[],
  lod?: number,
) => void;
Create a cubemap texture:

// The order of the array layers is [+X, -X, +Y, -Y, +Z, -Z]
const imageBitmaps = await Promise.all(
  [
    '/images/posx.jpg',
    '/images/negx.jpg',
    '/images/posy.jpg',
    '/images/negy.jpg',
    '/images/posz.jpg',
    '/images/negz.jpg',
  ].map(async (src) => loadImage(src)),
);
const texture = device.createTexture({
  format: Format.U8_RGBA_NORM,
  width: imageBitmaps[0].width,
  height: imageBitmaps[0].height,
  depthOrArrayLayers: 6,
  dimension: TextureDimension.TEXTURE_CUBE_MAP,
  usage: TextureUsage.SAMPLED,
});
texture.setImageData(imageBitmaps);
A GPUSampler encodes transformations and filtering information that can be used in a shader to interpret texture resource data.
The RenderPass has several methods which affect how draw commands are executed.
Sets the viewport used during the rasterization stage to linearly map from normalized device coordinates to viewport coordinates.

x (required): minimum X value of the viewport in pixels.
y (required): minimum Y value of the viewport in pixels.
w (required): width of the viewport in pixels.
h (required): height of the viewport in pixels.
minDepth (optional): minimum depth value of the viewport.
maxDepth (optional): maximum depth value of the viewport.

setViewport: (
  x: number,
  y: number,
  w: number,
  h: number,
  minDepth?: number, // WebGPU only
  maxDepth?: number, // WebGPU only
) => void;
Sets the scissor rectangle used during the rasterization stage. After transformation into viewport coordinates any fragments which fall outside the scissor rectangle will be discarded.
x (required): minimum X value of the scissor rectangle in pixels.
y (required): minimum Y value of the scissor rectangle in pixels.
w (required): width of the scissor rectangle in pixels.
h (required): height of the scissor rectangle in pixels.

setScissorRect: (x: number, y: number, w: number, h: number) => void;
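For example, to rasterize into the full canvas (a sketch; width/height are the swap chain dimensions):

renderPass.setViewport(0, 0, width, height);
renderPass.setScissorRect(0, 0, width, height);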
Sets the current RenderPipeline.
setPipeline(pipeline: RenderPipeline)
Bindings define the interface between a set of bound resources and their accessibility in shader stages.

setBindings: (bindings: Bindings) => void;
setVertexInput: (
  inputLayout: InputLayout | null,
  buffers: (VertexBufferDescriptor | null)[] | null,
  indexBuffer: IndexBufferDescriptor | null,
) => void;

Bind vertex & index buffer(s) like this:

interface VertexBufferDescriptor {
  buffer: Buffer;
  offset?: number; // in bytes
}
type IndexBufferDescriptor = VertexBufferDescriptor;
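For example (a sketch reusing the inputLayout and vertexBuffer created earlier):

renderPass.setVertexInput(
  inputLayout,
  [{ buffer: vertexBuffer }],
  null, // no index buffer
);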
Sets the stencilReference value used during stencil tests with the "replace" GPUStencilOperation.
setStencilReference: (value: number) => void;
Draws primitives.
vertexCount (required): the number of vertices to draw.
instanceCount (optional): the number of instances to draw.
firstVertex (optional): offset into the vertex buffers, in vertices, to begin drawing from.
firstInstance (optional): first instance to draw.

draw: (
  vertexCount: number,
  instanceCount?: number,
  firstVertex?: number,
  firstInstance?: number,
) => void;
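For example, drawing a single non-instanced triangle (a sketch):

renderPass.draw(3); // 3 vertices; instanceCount, firstVertex and firstInstance use defaults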
Draws indexed primitives.
indexCount (required): the number of indices to draw.
instanceCount (optional): the number of instances to draw.
firstIndex (optional): offset into the index buffer, in indices, to begin drawing from.
baseVertex (optional): added to each index value before indexing into the vertex buffers.
firstInstance (optional): first instance to draw.

drawIndexed: (
  indexCount: number,
  instanceCount?: number,
  firstIndex?: number,
  baseVertex?: number,
  firstInstance?: number,
) => void;
⚠️ WebGPU only.
Draws primitives using parameters read from a GPUBuffer.
drawIndirect: (indirectBuffer: Buffer, indirectOffset: number) => void;

// Create drawIndirect values
const uint32 = new Uint32Array(4);
uint32[0] = 3; // The vertexCount value
uint32[1] = 1; // The instanceCount value
uint32[2] = 0; // The firstVertex value
uint32[3] = 0; // The firstInstance value

// Create a GPUBuffer and write the draw values into it
const drawValues = device.createBuffer({
  viewOrSize: uint32,
  usage: BufferUsage.INDIRECT,
});

// Draw the vertices
renderPass.drawIndirect(drawValues, 0);
⚠️ WebGPU only.
Draws indexed primitives using parameters read from a GPUBuffer.
drawIndexedIndirect: (indirectBuffer: Buffer, indirectOffset: number) => void;

// Create drawIndexedIndirect values
const uint32 = new Uint32Array(5);
uint32[0] = 6; // The indexCount value
uint32[1] = 1; // The instanceCount value
uint32[2] = 0; // The firstIndex value
uint32[3] = 0; // The baseVertex value
uint32[4] = 0; // The firstInstance value
// Create a GPUBuffer and write the draw values into it
const drawValues = device.createBuffer({
  viewOrSize: uint32,
  usage: BufferUsage.INDIRECT,
});

// Draw the vertices
renderPass.drawIndexedIndirect(drawValues, 0);
⚠️ WebGL2 & WebGPU only.
Occlusion query is only available on render passes, to query the number of fragment samples that pass all the per-fragment tests for a set of drawing commands, including scissor, sample mask, alpha to coverage, stencil, and depth tests. Any non-zero result value for the query indicates that at least one sample passed the tests and reached the output merging stage of the render pipeline, 0 indicates that no samples passed the tests.
When beginning a render pass, occlusionQuerySet must be set to be able to use occlusion queries during the pass. An occlusion query is begun and ended by calling beginOcclusionQuery() and endOcclusionQuery() in pairs that cannot be nested.

beginOcclusionQuery: (queryIndex: number) => void;
⚠️ WebGL2 & WebGPU only.
endOcclusionQuery: () => void;
Start recording draw calls into a render bundle.

beginBundle: (renderBundle: RenderBundle) => void;

Stop recording.

endBundle: () => void;

Replay the commands recorded in render bundles.

executeBundles: (renderBundles: RenderBundle[]) => void;
⚠️ WebGPU only.
Computing operations provide direct access to the GPU's programmable hardware. Compute shaders do not have shader stage inputs or outputs; their results are side effects from writing data into storage bindings.
Dispatch work to be performed with the current ComputePipeline.
The X/Y/Z dimensions of the grid of workgroups to dispatch.

dispatchWorkgroups: (
  workgroupCountX: number,
  workgroupCountY?: number,
  workgroupCountZ?: number,
) => void;
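A sketch of a typical dispatch, assuming the WGSL compute shader declares @workgroup_size(64) and that ComputePass exposes setPipeline/setBindings like RenderPass:

computePass.setPipeline(computePipeline);
computePass.setBindings(bindings);
// One workgroup per 64 elements.
computePass.dispatchWorkgroups(Math.ceil(elementCount / 64));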
Dispatch work to be performed with the current GPUComputePipeline using parameters read from a GPUBuffer.
dispatchWorkgroupsIndirect: (
  indirectBuffer: Buffer,
  indirectOffset: number,
) => void;
⚠️ Only WebGL1 needs this method.

setUniformsLegacy: (uniforms: Record<string, any>) => void;

program.setUniformsLegacy({
  u_ModelViewProjectionMatrix: modelViewProjectionMatrix,
  u_Texture: texture,
});
Readback can read data from Texture or Buffer.
Read pixels from texture.
t (required): the texture to read.
x (required): X coordinate.
y (required): Y coordinate.
width (required): width of the dimension.
height (required): height of the dimension.
dst (required): destination buffer view.
dstOffset (optional).
length (optional).

readTexture: (
  t: Texture,
  x: number,
  y: number,
  width: number,
  height: number,
  dst: ArrayBufferView,
  dstOffset?: number,
  length?: number,
) => Promise<ArrayBufferView>;
For instance, if we want to read pixels from a texture:

const texture = device.createTexture({
  format: Format.U8_RGBA_NORM,
  width: 1,
  height: 1,
  usage: TextureUsage.SAMPLED,
});
texture.setImageData([new Uint8Array([1, 2, 3, 4])]);

const readback = device.createReadback();

let output = new Uint8Array(4);
// x/y 0/0
await readback.readTexture(texture, 0, 0, 1, 1, output);
expect(output[0]).toBe(1);
expect(output[1]).toBe(2);
expect(output[2]).toBe(3);
expect(output[3]).toBe(4);
⚠️ WebGL1 & WebGL2 only.
readTextureSync: (
  t: Texture,
  x: number,
  y: number,
  width: number,
  height: number,
  dst: ArrayBufferView,
  dstOffset?: number,
  length?: number,
) => ArrayBufferView;
⚠️ WebGL2 & WebGPU only.
Read buffer data.
src (required): source buffer.
srcOffset (required): offset in bytes into the source buffer. Defaults to 0.
dst (required): destination buffer view.
dstOffset (optional): offset in bytes into the destination buffer. Defaults to 0.
length (optional): length in bytes of the destination buffer. Defaults to its whole size.

readBuffer: (
  src: Buffer,
  srcOffset: number,
  dst: ArrayBufferView,
  dstOffset?: number,
  length?: number,
) => Promise<ArrayBufferView>;
BufferUsage.COPY_SRC must be used if this buffer will be read later:

const vertexBuffer = device.createBuffer({
  viewOrSize: new Float32Array([0, 0.5, -0.5, -0.5, 0.5, -0.5]),
  usage: BufferUsage.VERTEX | BufferUsage.COPY_SRC,
  hint: BufferFrequencyHint.Dynamic,
});
const data = await readback.readBuffer(vertexBuffer, 0, new Float32Array(6));
Since WebGL 1/2 & WebGPU use different shader languages, we do a lot of transpiling work at runtime.
We use a syntax very close to GLSL 300 and transpile it for different targets as follows:

// raw
layout(location = 0) in vec4 a_Position;

// compiled GLSL 100
attribute vec4 a_Position;

// compiled GLSL 300
layout(location = 0) in vec4 a_Position;

// compiled GLSL 440
layout(location = 0) in vec4 a_Position;

// compiled WGSL
var<private> a_Position_1: vec4<f32>;
@vertex
fn main(@location(0) a_Position: vec4<f32>) -> VertexOutput {
  a_Position_1 = a_Position;
}

// raw
out vec4 a_Position;

// compiled GLSL 100
varying vec4 a_Position;

// compiled GLSL 300
out vec4 a_Position;

// compiled GLSL 440
layout(location = 0) out vec4 a_Position;

// compiled WGSL
struct VertexOutput {
  @location(0) v_Position: vec4<f32>,
}
We need to use SAMPLER_2D / SAMPLER_Cube to wrap our texture.

// raw
uniform sampler2D u_Texture;
outputColor = texture(SAMPLER_2D(u_Texture), v_Uv);

// compiled GLSL 100
uniform sampler2D u_Texture;
outputColor = texture2D(u_Texture, v_TexCoord);

// compiled GLSL 300
uniform sampler2D u_Texture;
outputColor = texture(u_Texture, v_Uv);

// compiled GLSL 440
layout(set = 1, binding = 0) uniform texture2D T_u_Texture;
layout(set = 1, binding = 1) uniform sampler S_u_Texture;
outputColor = texture(sampler2D(T_u_Texture, S_u_Texture), v_Uv);

// compiled WGSL
@group(1) @binding(0)
var T_u_Texture: texture_2d<f32>;
@group(1) @binding(1)
var S_u_Texture: sampler;
outputColor = textureSample(T_u_Texture, S_u_Texture, _e5);
WebGL2 uses Uniform Buffer Objects.

// raw
layout(std140) uniform Uniforms {
  mat4 u_ModelViewProjectionMatrix;
};

// compiled GLSL 100
uniform mat4 u_ModelViewProjectionMatrix;

// compiled GLSL 300
layout(std140) uniform Uniforms {
  mat4 u_ModelViewProjectionMatrix;
};

// compiled GLSL 440
layout(std140, set = 0, binding = 0) uniform Uniforms {
  mat4 u_ModelViewProjectionMatrix;
};

// compiled WGSL
struct Uniforms {
  u_ModelViewProjectionMatrix: mat4x4<f32>,
}
@group(0) @binding(0)
var<uniform> global: Uniforms;
⚠️ We don't allow an instance_name for now:

// wrong
layout(std140) uniform Uniforms {
  mat4 projection;
  mat4 modelview;
} matrices;
We still use gl_Position to represent the output of the vertex shader:

// raw
gl_Position = vec4(1.0);

// compiled GLSL 100
gl_Position = vec4(1.0);

// compiled GLSL 300
gl_Position = vec4(1.0);

// compiled GLSL 440
gl_Position = vec4(1.0);

// compiled WGSL
struct VertexOutput {
  @builtin(position) member: vec4<f32>,
}
// raw
out vec4 outputColor;
outputColor = vec4(1.0);

// compiled GLSL 100
vec4 outputColor;
outputColor = vec4(1.0);
gl_FragColor = vec4(outputColor);

// compiled GLSL 300
out vec4 outputColor;
outputColor = vec4(1.0);

// compiled GLSL 440
layout(location = 0) out vec4 outputColor;
outputColor = vec4(1.0);

// compiled WGSL
struct FragmentOutput {
  @location(0) outputColor: vec4<f32>,
}
It is worth mentioning that, since WGSL does not support it natively, naga performs conditional compilation during the GLSL 440 -> WGSL translation process. Defines use the form #define KEY VAR, e.g.:

#define PI 3.14
@group(x) in WGSL should obey the following order:

group(0): uniforms, e.g. var<uniform> time : Time;
group(1): texture & sampler pairs
group(2): storage buffers, e.g. var<storage, read_write> atomic_storage : array<atomic<i32>>;
group(3): storage textures, e.g. var screen : texture_storage_2d<rgba16float, write>;
For example:
@group(1) @binding(0) var myTexture : texture_2d<f32>;
@group(1) @binding(1) var mySampler : sampler;

@group(1) @binding(0) var myTexture : texture_2d<f32>;
@group(1) @binding(1) var mySampler : sampler;
@group(2) @binding(0) var<storage, read_write> input : array<i32>;
Uniform and storage buffers can be assigned binding numbers:

device.createBindings({
  pipeline: computePipeline,
  uniformBufferBindings: [
    {
      binding: 0,
      buffer: uniformBuffer,
    },
  ],
  storageBufferBindings: [
    {
      binding: 1,
      buffer: storageBuffer,
    },
  ],
});

@group(0) @binding(0) var<uniform> params : SimParams;
@group(0) @binding(1) var<storage, read_write> input : array<i32>;

@group(1) @binding(0) var myTexture : texture_2d<f32>;
@group(1) @binding(1) var mySampler : sampler;
Currently we don't support dynamicOffsets when setting a bind group:

// Won't support for now.
passEncoder.setBindGroup(1, dynamicBindGroup, dynamicOffsets);