
Conversation

@litherum
Contributor

This group has agreed that the officially-supported human-readable language for writing WebGPU shaders is WHLSL, and WHLSL strives to be source-compatible with HLSL.

In HLSL, it's pretty common to see things like:

SamplerState theSampler : register(s0);
ConstantBuffer<Foo> theBuffer : register(b0);

Note that these two resources share the same number, but differ by their type. Therefore, in order to bind resources to these bind points, a WebGPU program would create a GPUBindGroupLayoutDescriptor that looks like:

[{binding: 0, type: "sampler"}, {binding: 0, type: "uniform-buffer"}]

So far, we haven't specified the validation requirements for GPUBindGroupLayouts, but the above code requires that (binding, type) be unique, rather than just (binding).

This is fine, except when the author wants to go ahead and actually GPUDevice.createBindGroup(). If the GPUBindGroupBinding doesn't list the type, it doesn't know which GPUBindGroupLayoutBinding each item corresponds to. Therefore, this patch removes the binding member of the GPUBindGroupBinding dictionary. Instead, the sequence<GPUBindGroupBinding> bindings member of GPUBindGroupDescriptor must be parallel to the sequence<GPUBindGroupLayoutBinding> bindings member of GPUBindGroupLayoutDescriptor.

Another way we could do this is to instead add a GPUBindingType type member of GPUBindGroupBinding, but using parallel arrays is simpler and easier to understand.
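A rough sketch (not spec text) of what pairing-by-position would mean, where `pairBindings` is a hypothetical helper and the names are illustrative only:

```javascript
// Hypothetical helper sketching the parallel-arrays idea: the i-th resource in
// GPUBindGroupDescriptor.bindings corresponds to the i-th entry of
// GPUBindGroupLayoutDescriptor.bindings, so no binding member is needed.
function pairBindings(layoutBindings, resources) {
  if (layoutBindings.length !== resources.length) {
    throw new Error("bind group bindings must be parallel to the layout");
  }
  return layoutBindings.map((entry, i) => ({ ...entry, resource: resources[i] }));
}

const layout = [
  { binding: 0, type: "sampler" },
  { binding: 0, type: "uniform-buffer" },
];
const paired = pairBindings(layout, ["theSampler", { buffer: "theBuffer" }]);
console.log(paired[1].type); // "uniform-buffer"
```

Under this scheme the two entries sharing binding 0 are disambiguated purely by position, which is the crux of the proposal.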

@kainino0x
Contributor

This is interesting and sounds like it's primarily a mismatch between HLSL's binding model (s0 != b0) and the one we have used so far in WebGPU (just 0, 1, ...; the numbers must all be different) (iirc modeled after Vulkan).

The example you gave ([{binding: 0, type: "sampler"}, {binding: 0, type: "uniform-buffer"}]) is invalid in Dawn and afaik wasn't intended to work when Corentin wrote the sketch originally.

If we keep things that way, then this change isn't holistic enough - instead, we need to look at both BindGroupDescriptor and BindGroupLayoutDescriptor and refactor them together (ideally in a similar way to whatever we do for vertex buffer input).

@kainino0x
Contributor

kainino0x commented Jan 29, 2019

@dneto0 If I'm understanding the problem correctly, HLSL+Vulkan will have had this same problem as well. Do you know how it is handled there?

@litherum
Contributor Author

litherum commented Jan 29, 2019

If we keep things that way

"keep things that way" isn't accurate - we never discussed the semantics of the calls in the IDL, and Dawn's implementation does not dictate the spec.

Putting Dawn's current binding model into the spec would be unfortunate because there would be a fundamental mismatch between the API and the shading language.

@magcius

magcius commented Jan 29, 2019

HLSL/SPIR-V uses an additional field to the register() pragma, called space, to determine the bind group layout, as documented in https://github.com/Microsoft/DirectXShaderCompiler/blob/master/docs/SPIR-V.rst#implicit-binding-number-assignment

So developers are required to say register(b0, space1) in their shaders when porting to Vulkan.

Spiregg also uses separate register spaces between b0, r0, etc. So b0 might be placed at descriptor index 0, and r0 at descriptor index 256. This is customizable through command-line arguments.
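The register-class shifting magcius describes can be sketched as a tiny mapping; the base offsets here are purely illustrative (DXC's actual shifts are supplied on the command line, e.g. via its `-fvk-*-shift` options), not defaults of any real compiler:

```javascript
// Sketch of per-register-class shifting: each HLSL register class (b, t, s, u)
// gets a disjoint base offset in a flat Vulkan-style binding space.
// The offsets below are made up for illustration.
const classShift = { b: 0, t: 128, s: 256, u: 384 };

function toVulkanBinding(register) {
  const cls = register[0];                       // "b", "t", "s", or "u"
  const index = parseInt(register.slice(1), 10); // numeric part, e.g. 0 in "b0"
  if (!(cls in classShift) || Number.isNaN(index)) {
    throw new Error(`unrecognized register ${register}`);
  }
  return classShift[cls] + index;
}

console.log(toVulkanBinding("b0")); // 0
console.log(toVulkanBinding("t0")); // 128
```

This is how `b0` and `t0` can coexist in HLSL yet land on distinct descriptor indices after translation.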

@litherum
Contributor Author

HLSL/SPIR-V uses an additional field to the register() pragma, called space

Right, space is used to determine which bind group inside the layout to access.

That link describes how the compiler re-maps resource IDs in an arbitrary manner. It would be unfortunate if WebGPU had to arbitrarily re-map resource IDs for the officially-supported shading language.

@magcius

magcius commented Jan 29, 2019

I mean, there's a mismatch in the bind model, so you're going to have to resolve this some way. You cannot use the raw HLSL solution, since DX12's root signature bindless model will not work on non-bindless APIs like Vulkan.

You can either do this by providing some mapping and building API conveniences to match, or by changing HLSL so that it more closely matches Vulkan's bind model, as the DXC team has partially done.

@Kangz
Contributor

Kangz commented Jan 29, 2019

@litherum D3D12 has the same issue where both register t0 and b0 can coexist and it is unclear where they should be in the root signature. To solve this they use an additional "root signature layout" that remaps d3d11-style register IDs to disjoint descriptors either directly in the root signature or in descriptor tables pointed to from the root signature.

One way they solve this is by allowing applications to specify the remap directly in HLSL.

What WHLSL could do is assume that registers with a "spaceN" declaration point directly into bindgroups and are disjoint. Then for registers without a "spaceN", the application would have to provide an equivalent of the root signature layout, either in the source or as an extra argument to the ShaderModuleDescriptor (only if the API ingests WHLSL).
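One way to picture the root-signature-layout idea is as a plain lookup table supplied alongside the shader; the table contents and the `resolve` helper below are hypothetical, just to make the shape concrete:

```javascript
// Hypothetical "root signature layout": maps D3D11-style registers to
// (group, binding) pairs in the WebGPU binding model. An application porting
// old HLSL would author this table; its contents here are invented.
const rootSignatureLayout = {
  s0: { group: 0, binding: 0 },
  b0: { group: 0, binding: 1 },
  t0: { group: 1, binding: 0 },
};

function resolve(register) {
  const slot = rootSignatureLayout[register];
  if (!slot) throw new Error(`unmapped register ${register}`);
  return slot;
}

console.log(resolve("b0")); // { group: 0, binding: 1 }
```

The point is that `s0` and `b0`, which collide in the flat model, resolve to disjoint (group, binding) destinations once the table is applied.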

Putting Dawn's current binding model into the spec would be unfortunate because there would be a fundamental mismatch between the API and the shading language.

Agreed that Dawn's model shouldn't mandate what WebGPU does, but it should be seen as a model that's known to work seamlessly on D3D12, Metal, and Vulkan, with samples exercising multiple bindgroups at once.

That said, the mismatch between HLSL and Dawn is the same mismatch there is between HLSL and Metal2 (with argument buffers), D3D12 (root signatures), and Vulkan (descriptor sets), and I suggest WHLSL follow a solution close to what Microsoft found for D3D12.

typedef (GPUSampler or GPUTextureView or GPUBufferBinding) GPUBindingResource;

dictionary GPUBindGroupBinding {
u32 binding;
Contributor


This should use the same approach as in #186 WDYT?

@kainino0x
Contributor

Strong +1 to match D3D12's solution.

I want to reiterate that I absolutely was NOT arguing WebGPU should do what Dawn does. It clearly goes the other way.

However, Dawn already has a working implementation, and its behavior is what was in mind when the sketch was originally written. Therefore it is a useful "originalist" reference point when trying to understand why we have the current thing (and trying to determine whether the current thing is wrong).

@kainino0x
Contributor

Also, Re: "we never discussed the semantics of the calls in the IDL"

IIRC, we did. We tentatively agreed on the Vulkan binding model.

@litherum
Contributor Author

litherum commented Jan 30, 2019

One of the goals of WHLSL is to preserve source compatibility with existing HLSL shaders, and only 9% of shaders in the corpus use the RootSignature block.

What WHLSL could do is assume that registers with a "spaceN" declaration point directly in bindgroups and are disjoint.

Yes, exactly.

Then for register without a "spaceN" the application would have to provide an equivalent of the root signature layout, either in the source or as an extra argument to the ShaderModuleDescriptor (only if the API ingests WHLSL).

Only 2% of shaders in the corpus use the spaceN construct, so we should not require it to make the binding model work.

The same mismatch there is between HLSL and Metal2 (with argument buffers), D3D12 (root signatures) and Vulkan (descriptor sets)

Right. Metal backends will have to do the same sort of internal mapping that Vulkan and D3D12 backends will have to do.

The question is "Should the WebGPU binding model work seamlessly with the officially-supported shading language, or should it work seamlessly with another shading language which may or may not be provided as a wire format." The fact that implementations internally map binding numbers to other numbers is much less important than making the WebGPU API and the shading language consistent.

If the binding model doesn't change to match the officially-supported shading language, then a mapping has to occur between the JavaScript API and the shading language. The corpus shows that shader authors generally don't provide that mapping, so it's egregious to expect them to do it for WebGPU. If the spec describes this mapping, programmers have to read specs (which they generally don't do) and understand these arbitrary details, which makes the API harmful to beginners. If the mapping is provided as an API call, the WebGPU author's code needs to be flexible enough to work with any possible mapping that could be returned by the browser. All of these are quite unfortunate.

Instead, making them match seamlessly is quite straightforward from a Web author's point of view. A GPUBindGroupLayoutBinding has three identifying pieces of information: A binding number, a type, and its BindGroup’s index into the PipelineLayout, which are exactly the three pieces of information that HLSL & WHLSL resources have. It works out quite nicely, and the cost is that all implementations need to perform internal mapping to expose a consistent API to the Web author. (And Metal backends already have to do mapping to make vertex buffers coexist with argument buffers.)

@Kangz
Contributor

Kangz commented Jan 30, 2019

If the binding model doesn't change to match the officially-supported shading language, then a mapping has to occur between the JavaScript API and the shading language.

Indeed, and D3D12 itself has it in the form of the root signature layout. It is always provided, either in the shading language or via a programmatically created object at pipeline compilation time. The corpus you have seems to show that most developers are using the programmatic way when using D3D12.

If the application doesn't specify a spaceN then the implementation has no way to know which bindgroup the resource is extracted from, so we need something similar to D3D12's root descriptor layout, ideally both programmatic and in the source language. Since we'll have that mapping from flat register space to (group, binding index) we don't need to have parallel register tables for texture, uniform buffers, samplers and storage resources.

IMHO the API would become more fragile if we require the bindings to be listed in the same order for GPUBindGroup and GPUBindGroupLayout creation. It hides an important interface matching concept behind "order must match" when we should instead be very explicit about it.

If the mapping is provided as an API call, the WebGPU author's code needs to be flexible enough to work with any possible mapping that could be returned by the browser.

For clarification, in D3D12 the root signature layout is totally under the application's control.

@kainino0x
Contributor

IIUC, if the shader doesn't specify a spaceN, then it uses space0 and an automatic mapping:

For compatibility reasons, the HLSL compiler may automatically assign resource registers for ranges declared in space0. If ‘space’ is omitted in the register clause, the default space0 is used. The compiler uses the first-hole-fits heuristic to assign the registers. The assignment can be retrieved via the reflection API, which has been extended to add the Space field for space, while the BindPoint field indicates the lower bound of the resource register range.

[source] (not clear why it's in this section)
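A toy sketch of that first-hole-fits assignment, based on my reading of the quoted behavior (not DXC's actual code): explicitly numbered registers keep their slots, and unnumbered resources take the lowest free slot.

```javascript
// First-hole-fits register assignment (illustrative). Resources with an
// explicit register keep it; the rest fill the lowest unused index in order.
function assignRegisters(resources) {
  const used = new Set(
    resources.filter(r => r.register != null).map(r => r.register));
  const out = {};
  let next = 0;
  for (const r of resources) {
    if (r.register != null) {
      out[r.name] = r.register;
      continue;
    }
    while (used.has(next)) next++;  // skip slots taken by explicit registers
    used.add(next);
    out[r.name] = next;
  }
  return out;
}

console.log(assignRegisters([
  { name: "a", register: 1 },
  { name: "b" },  // gets 0, the first hole
  { name: "c" },  // gets 2, since 1 is taken
]));
```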

@litherum
Contributor Author

litherum commented Jan 30, 2019

if the shader doesn't specify a spaceN, then it uses space0

Yep.

The root signature layout is totally under the application's control

Right. I forgot the fourth option which is to have the WebGPU programmer describe their desired mapping (instead of the WHLSL author). We could add something like the root signature to the WebGPU API.

However, this is significant complexity that is unnecessary. We have already been discussing topics such as how to describe holes in the binding model to satisfy the use case of a WebGPU programmer wanting to fit their pipeline layout to a particular shader. We don't need yet another mechanism for this.

It already takes several hundred lines to draw a triangle, and I don't think adding another required layer of indirection should be necessary, given that the binding model and the shading language already match up almost perfectly.

@dneto0
Contributor

dneto0 commented Jan 30, 2019

Just catching up on this.

Thanks @magcius for pointing at the docs for the SPIR-V backend on DXC. (For reference I'd start a little higher up in that doc, at https://github.com/Microsoft/DirectXShaderCompiler/blob/master/docs/SPIR-V.rst#hlsl-register-and-vulkan-binding )

That DXC compiler aims to be a quality solution for HLSL code we've encountered and which customers care deeply about. That's why DXC has N different ways of specifying Vulkan descriptor set and layout, with varying degrees of convenience vs. control.

Fine. But what's appropriate for WebGPU?

Only 2% of shaders in the corpus use the spaceN construct, so we should not require it to make the binding model work.

It's debatable whether that says more about the utility/importance of the spaceN construct or the quality/coverage/appropriateness of the corpus.

It's well-known that for maximum performance on Vulkan, descriptors that are updated with similar frequency should be grouped together into descriptor sets. That's a key motivation for the two-level (set,binding) binding model. One reference is from Valve: http://on-demand.gputechconf.com/gtc/2016/events/vulkanday/High_Performance_Vulkan.pdf . (about halfway through). Another reference is https://renderdoc.org/vulkan-in-30-minutes.html

So does WebGPU want to expose that optimization opportunity?

@magcius

magcius commented Jan 30, 2019

Question: What do we do about arrays? That is:

RWBuffer<float> MyBuffer[4] : register(t0);
RWBuffer<float> OtherBuffer : register(t1);

How does the user fill in the entries for MyBuffer[0], MyBuffer[1], MyBuffer[2], MyBuffer[3] and OtherBuffer? Vulkan has the concept of multiple descriptors per slot, but WebGPU does not adopt that and instead has a flat list of bindable items per bind group.

@Kangz
Contributor

Kangz commented Jan 31, 2019

@magcius I don't know for sure but it doesn't look valid in the D3D11 binding model that Apple is looking at for WHLSL right now.

Basically the problem with this PR is that it over-specializes the API to the D3D11 flavor of HLSL by adding a third dimension (the register type) to bindings in WebGPU. The precedent with HLSL for D3D12 and HLSL for SPIR-V is that it is OK to modify the "register" attributes to match the API's binding model while still claiming HLSL compatibility.

In that spirit we could simply require that registers are disjoint in a single register space, which would match exactly the binding model we have for WebGPU. I'd suggest not defaulting to space0, because we don't want to make it easy to stuff everything in a single bindgroup; that's not the efficient way to use the API.

@magcius

magcius commented Jan 31, 2019

I'm not 100% sure but I believe that will cause issues with Vulkan.

Given these two shader layouts:

RWBuffer<float> MyBuffer[2] : register(t0);
RWBuffer<float> OtherBuffer : register(t2);

RWBuffer<float> MyBuffer_0 : register(t0);
RWBuffer<float> MyBuffer_1 : register(t1);
RWBuffer<float> OtherBuffer : register(t2);

They would appear to be compatible in WebGPU, meaning my binding layout is three storage buffers in slots 0, 1, and 2. However, these layouts are not compatible in Vulkan: one has two descriptors in slot 0, and the other has a single descriptor in slot 0 and a single descriptor in slot 1. The driver would have to create two Vulkan binding layouts and binding sets behind the scenes for the shader in use, which maybe isn't the worst thing, but isn't as clean as it should be and relies on the backend doing extra platform work.

This is my cursory reading of the spec and my own limited experiences with dxc, so I could be wrong.

@litherum
Contributor Author

litherum commented Feb 1, 2019

Currently, WHLSL disallows arrays of resources. We'll have to see how reasonable that is in light of the compatibility goal.

@litherum
Contributor Author

litherum commented Feb 1, 2019

So does WebGPU want to expose that optimization opportunity?

This wasn't the reason we (Apple) agreed to initially pursue the Vulkan binding model. Instead, we agreed to pursue it just because it seemed to be the most general of the three models. A mapping from the Vulkan model to the Metal and D3D models is fairly straightforward, which isn't necessarily true in the reverse direction.

The additional performance benefits are interesting, but I'd like to see some repeatable measurements. Unfortunately, both of those links are light on the details regarding performance of descriptor sets. I've been thinking today about how one would measure this, and it's not immediately obvious. Perhaps an A/B test where "A" is 1000 bind groups, each holding one resource, and "B" is 200 bind groups, each holding 5 resources, and the benchmark measures the time to create and destroy 5 resources again and again in a loop?
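The proposed A/B measurement could be driven by a loop like the following; `createBindGroup` is injected so the same harness could wrap a real `GPUDevice.createBindGroup` or a stub, and everything here is a sketch of the benchmark shape, not a proposed API:

```javascript
// Rough harness for the A/B test described above: time creating `groupCount`
// bind groups of `groupSize` resources each, repeated `iterations` times.
function timeBindGroupCreation(createBindGroup, groupCount, groupSize, iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    for (let g = 0; g < groupCount; g++) {
      createBindGroup(new Array(groupSize).fill({ binding: 0 }));
    }
  }
  return Date.now() - start;
}

// Case "A": 1000 bind groups of 1 resource; case "B": 200 groups of 5.
// A stub stands in for the device so the harness runs anywhere.
const stub = bindings => ({ bindings });
const a = timeBindGroupCreation(stub, 1000, 1, 10);
const b = timeBindGroupCreation(stub, 200, 5, 10);
console.log({ a, b });
```

Comparing `a` against `b` on real implementations would show whether fewer, larger bind groups amortize per-group creation cost.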

@kvark
Contributor

kvark commented Feb 1, 2019

I'd like to summarize the dialog as I see it. So there are two different binding models that could be exposed to the user:

  • Vulkan-like model, where each descriptor is addressed by (bind group index, resource index) pair. Whether the arrays involve an extra dimension in WebGPU (like they do in Vulkan) isn't clear yet.
  • HLSL-like model, where each descriptor is addressed by (space, type, index) trio, and the space is often implicitly zero.

If the users provide code in SPIR-V, and the binding model is HLSL-like, then the mapping is fairly straightforward (since HLSL with spaces can be seen as a superset of the Vulkan model, having an extra type dimension), and the user would be able to define an appropriate pipeline layout.

If the users provide the code in (existing) HLSL, and the binding model we expose is Vulkan-like, then there are two options:

  1. require the users to modify the sources and make the resource indices disjoint (i.e. not use "s0" and "t0" at the same time) as well as specify "spaceX" explicitly.
  2. require the users to provide a layer of mapping in some way between shader-specified resource indices and the binding mode, aka the D3D12 root signature. This would say, for example, that "s0 -> bind group 2, index 5" for all the used resources.

Both of these choices are non-ideal: modifying existing shaders neglects the appeal of HLSL/WHLSL in general as a digestion format, having extra mapping is awkward because, well, it's an extra mapping in addition to the pipeline layout we declare, which is supposed to be mapping the shader to data.

At the end of the day, it doesn't seem like the binding model can be considered independently of the primary digesting shader language. If existing HLSL is what we need most, then it becomes tempting to just define our bind group layouts as maps instead of sequences:

var bindGroupLayout = device.createBindGroupLayout(new Map([
  ["s0", {
    visibility: WebGPUShaderStageBit.VERTEX,
    type: "sampler"
  }],
  ["t0", {
    visibility: WebGPUShaderStageBit.FRAGMENT,
    type: "texture"
  }]
]));

Technically, those "s0", "t0", etc semantics could be agnostic of WebGPU (as long as they are consistent between the layout creation and actual bind group creation), and the user would just specify indices for the layouts they want to use with SPIR-V inputs:

var bindings = [];
bindings[1] = {
    visibility: WebGPUShaderStageBit.FRAGMENT,
    type: "texture"
};

All backends would need to do some sort of mapping into their internal descriptor set representations. I'd be fine with this overhead as long as it's strictly at the bind group (layout) creation and not the binding: binding the resource groups should stay as light as possible.

@litherum
Contributor Author

litherum commented Feb 1, 2019

If existing HLSL is what we need most

I thought we agreed that WHLSL is the officially-supported shading language and its source compatibility with existing HLSL is an important tenet of the language.

@kvark I like the Map idea.

@Kangz
Contributor

Kangz commented Feb 4, 2019

@magcius that bindGroupLayout creation from sources would probably live in a JS reflection helper library, at least that's what we intend to provide for SPIR-V for WebGPU.

@litherum as explained in the previous comment, there are different flavors of HLSL and right now you're trying to be compatible with HLSL for D3D11. It uses a dated register-based binding model, and we should not compromise WebGPU's binding model to cater to it. Also WHLSL is a language the group agreed could be "blessed" but not part of the WebGPU API itself.

In a way the register attributes are more part of the API than the shading language as the modifications for D3D12 and Vulkan show. I wouldn't consider that part of compatibility, especially if we can have something like D3D12's root signature layout to remap from D3D11 semantics to WebGPU semantics, for people who really don't want to touch their existing HLSL codebase.

@kvark
Contributor

kvark commented Feb 5, 2019

Let me re-iterate on some of the key points in this discussion:

  1. there are multiple flavors of HLSL, but D3D11 style is prevalent, since "Only 2% of shaders in the corpus use the spaceN construct".
  2. when using D3D11 style HLSL, native ISVs describe the root signature (in either HLSL code or in C++) that maps from registers (aka "s1") to descriptors. This is essentially what our pipeline layout needs to define.
  3. the API's chosen binding model glues together the shader and the resources. It impacts both usability (i.e. API complexity) and performance (implementation complexity). Let's dive into these parts independently.

Mapping shader -> WebGPU binding model

D3D11 HLSL typically addresses resources by (type, index) pair. SPIR-V addresses resources by (descriptor_set, index) pair:

  • matching SPIR-V to HLSL is straightforward
  • matching HLSL to SPIR-V, in the absence of the pipeline layout, is impossible without extra information, since only 2% of the shaders are aware of the descriptor set index. This means that we wouldn't be able to have a WASM module converting (existing D3D11) shaders if we don't directly ingest (W)HLSL (that is compatible with D3D11).

Mapping WebGPU binding model -> native API

  1. SPIR-V model maps fairly well to all of the native APIs:
    - in Vulkan, resource group == descriptor set
    - in Metal, resource group == argument buffer. Resources in an argument buffer are encoded contiguously, and the type is not a part of the address.
    - in D3D12, resource group == contiguous range of descriptors inside a D3D12 descriptor heap.
  2. D3D11 HLSL model doesn't match any of the native APIs we target. If we accept this model, we'll have a layer of indirection between (index, type) and actual contiguous ranges of descriptors built into the API. It could be seen as a higher-level API that doesn't map directly to the native APIs.

Conclusion

I think it would be a wrong decision to make D3D10/D3D11 binding model (which is more than a decade old) affect our API. We are fortunate that all current native APIs pretty much agree on the binding model, and we should just expose it without indirection or extra complexity.

D3D11 HLSL code bases may be prevalent today, but that's just because of huge inertia of the industry, and is subject to change in time. Any new project started in the last ~5 years is designed up-front to support descriptor sets as first-class citizens, and we should be looking forward to it instead of being dragged by the past.

@kvark
Contributor

kvark commented Feb 5, 2019

Small correction: D3D12 doesn't always associate a resource group with a range in a descriptor heap. Samplers have to go into a separate heap, so whenever both samplers and non-samplers are involved in a bind group layout, the mapping has to split into two contiguous ranges.
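That split is mechanical enough to sketch; `splitRanges` below is an illustrative helper, not an implementation, assuming the string type tags used elsewhere in this thread:

```javascript
// Sketch of the D3D12 sampler-heap split: samplers cannot share a descriptor
// heap with CBV/SRV/UAV descriptors, so one bind group layout may map to up to
// two contiguous ranges, one per heap.
function splitRanges(layoutBindings) {
  return {
    samplerRange: layoutBindings.filter(b => b.type === "sampler"),     // sampler heap
    viewRange: layoutBindings.filter(b => b.type !== "sampler"),        // CBV/SRV/UAV heap
  };
}

const ranges = splitRanges([
  { binding: 0, type: "sampler" },
  { binding: 1, type: "uniform-buffer" },
  { binding: 2, type: "sampled-texture" },
]);
console.log(ranges.samplerRange.length, ranges.viewRange.length); // 1 2
```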

@kainino0x
Contributor

Great argument summary, thanks!

There's one thing I'm not sure how you would handle. D3D12 HLSL, like D3D11 HLSL, still uses t0, b0 etc. even though it is not present in D3D12's binding model itself. This is solved with root signatures (in HLSL or at the API level). In Vulkan it is solved with attributes ([[vk::binding(5, 1)]]) or compiler flags (-fvk-s-shift etc) [great post on the topic].

How would you propose we solve it?

@devshgraphicsprogramming

The additional performance benefits are interesting, but I'd like to see some repeatable measurements. Unfortunately, both of those links are light on the details regarding performance of descriptor sets. I've been thinking today about how one would measure this, and it's not immediately obvious. Perhaps an A/B test where "A" is 1000 bind groups, each holding one resource, and "B" is 200 bind groups, each holding 5 resources, and the benchmark measures the time to create and destroy 5 resources again and again in a loop?

Whatever you measure in native benchmarks will most probably be magnified 100x in actual WebGPU implementations, with the burdensome validation they have to do.

Also don't forget that, for me as a developer, descriptor sets are a convenience:
#192 (comment)

@devshgraphicsprogramming

Yes, it is a slight concern that 90%+ of the existing HLSL uses a D3D11 binding model, but 65%+ of the existing GLSL on the web doesn't use explicit layouts either, forcing developers to set up the slot-to-variable mappings in C before linking.

HLSL will get written to D3D12 standards, as will GLSL; the future should be accounted for, and we shouldn't look to past technical debt.

Furthermore, porting shaders in general has gotten much easier in recent times with the advent of cross-compilation tools; there's even one that works in a browser:
http://confettishadertranslator.azurewebsites.net/

I think it would be a wrong decision to make D3D10/D3D11 binding model (which is more than a decade old) affect our API. We are fortunate that all current native APIs pretty much agree on the binding model, and we should just expose it without indirection or extra complexity.

D3D11 HLSL code bases may be prevalent today, but that's just because of huge inertia of the industry, and is subject to change in time. Any new project started in the last ~5 years is designed up-front to support descriptor sets as first-class citizens, and we should be looking forward to it instead of being dragged by the past

Ditto, if you're going to base on HLSL at least base on HLSL 5.1 or 6.0

D3D11 binding model doesn't even map onto D3D12 (same API, different versions) itself.

@devshgraphicsprogramming

Also IIRC, you lose more performance with a D3D12 binding model on Vulkan than with a Vulkan model on D3D12; see my comment on the other issue.

#192 (comment)

@Kangz
Contributor

Kangz commented Feb 6, 2019

Thanks @kvark for the summary, I have only a small question:

This means that we wouldn't be able to have a WASM module converting (existing D3D11) shaders if we don't directly ingest (W)HLSL (that is compatible with D3D11).

Wouldn't this be possible if we had the equivalent of a root signature layout either in source, or passed to the WASM module as a compilation argument? (you could even imagine it taking an array of GPUBindGroupLayoutDescriptors with register annotations).

@kainino0x: applications with D3D12 HLSL would probably be D3D12 application that already have a concept of root signature layout so they can swap that to the WebGPU equivalent of root signature layouts as part of the porting process.

@kvark
Contributor

kvark commented Feb 6, 2019

@Kangz that was said in the context of "without extra information". We could, of course, provide (or rather, require the user to provide) an extra mapping table for the WASM module to resolve this.

@kainino0x
Contributor

@Kangz: What about new WHLSL written for WebGPU in particular? Would it still need to use a root signature layout?

Also, how would an app porting from D3D12 communicate its root signature layout to WHLSL/WebGPU?

@Kangz
Contributor

Kangz commented Feb 7, 2019

@kainino0x WHLSL for WebGPU would use register(tX, spaceN) with no overlap; an app porting from D3D12 already has a notion of root signature layout, so it can pass it at WHLSL shader compilation (either as source or as an object).

@kainino0x
Contributor

By "with overlap" you mean that b0, t0, etc all refer to the same binding? (and therefore must not collide)

Makes sense to me. 👍

@devshgraphicsprogramming

Maybe you can come up with a better name than with overlap, which kind of suggests the exact opposite of what you intend it to mean.

Maybe with unique_reg_index or something?

@Kangz
Contributor

Kangz commented Feb 8, 2019

Fixed to say "with no overlap". My bad.

@kainino0x
Contributor

did you mean to use with no overlap as syntax? I assumed it was not syntax.

@Kangz
Contributor

Kangz commented Feb 9, 2019

Your assumption was correct.

@litherum
Contributor Author

litherum commented Feb 13, 2019

The fact of the matter is that HLSL is the dominant language today. Even if the JS API's binding model matches SPIR-V's, that doesn't match the shader source code that the author is writing.

Even if the API ingests SPIR-V, and even if the binding model matches SPIR-V, that still isn't sufficient. We would be in a situation where shader authors would have to know the intimate details of how their offline WHLSL -> SPIR-V compiler rewrites semantics. Being in a world where the shading language is incompatible with the API's binding model would be a failure of WebGPU.

@RafaelCintron do you have any thoughts?

@magcius

magcius commented Feb 13, 2019

Being in a world where the shading language is incompatible with the API's binding model would be a failure of WebGPU.

It doesn't seem to have stopped D3D12 or Vulkan. I can't stress this enough -- this awkward incompatibility is there across both of those.

The WebGPU working group should not be using this as an opportunity to invent a new binding model, especially when this thread seems to suggest that existing API binding models are poorly understood by some WG members. This is not meant as an attack aimed at anyone here, these things are difficult and subtle and hard to understand.

@kainino0x
Contributor

Being in a world where the shading language is incompatible with the API's binding model would be a failure of WebGPU.

IIUC, the "WebGPU-canon subset" of (W)HLSL Corentin is suggesting, where for example (b0,space0) and (t0,space0) conflict (i.e. cannot both be specified together) seems to me to be compatible with the suggested Vulkan/D3D12 style binding model. Am I wrong about this? (Aside: I'm guessing it would be equivalent to DXC with -fvk-*-shift=0, which is probably DXC's default, but I haven't used it)

Only applications porting existing shaders would (need to) use a d3d12-like root descriptor.

@kainino0x
Contributor

kainino0x commented Feb 13, 2019

I think it would be useful to describe an example here. Here's a super basic one, according to my understanding (ignoring descriptor ranges for now (numDescriptors)). Others who suggested a scheme like this: please let me know if I got it wrong.

WebGPU-first app:

SamplerState theSampler : register(s0, space0);
ConstantBuffer<Foo> theBuffer : register(b1, space0);

Default mapping puts:

  • (s0, space0) in (bind group 0, binding 0)
  • (b1, space0) in (bind group 0, binding 1).
const bgl = device.createBindGroupLayout({
  bindings: [
    { binding: 0, visibility: ..., type: 'sampler' },
    { binding: 1, visibility: ..., type: 'uniform-buffer' },
  ],
});
const pl = device.createPipelineLayout({
  bindGroupLayouts: [ bgl ]
});

const code = compileWHLSL(that_hlsl);
// (please disregard whether the hlsl is compiled or ingested in this example)
const module = device.createShaderModule({ code });

D3D11-compatible app:

SamplerState theSampler : register(s0);
ConstantBuffer<Foo> theBuffer : register(b0);

s0 and b0 collide, making this invalid without a root signature.

// [Same bind group layout and pipeline layout]

const code = compileWHLSL(that_hlsl, {
  signature: { // (format tbd - could be inverted like D3D12's root signature)
    'space0': {
      's0': { space: 0, slot: 0 },
      'b0': { space: 0, slot: 1 },
    },
  },
});
const module = device.createShaderModule({ code });

@Kangz
Contributor

Kangz commented Feb 14, 2019

@kainino0x thanks for making examples, that's exactly what I'm suggesting!

@kvark
Contributor

kvark commented Feb 14, 2019

@kainino0x I think the difference between WebGPU and D3D11 styles of HLSL is slightly larger.
For instance, specifying the kind of register (the "t" in "t0") is redundant (and even confusing, since it makes it non-obvious that the indices can't overlap) when targeting WebGPU-first. It could be:

SamplerState theSampler : register(0, group0);
ConstantBuffer<Foo> theBuffer : register(1, group0);

The D3D11-style HLSL is unlikely to have "spaceX":

SamplerState theSampler : register(s0);
ConstantBuffer<Foo> theBuffer : register(b0);

That difference could be resolved at the parser level, just like the other differences with HLSL that WHLSL wants to keep.

@kainino0x
Contributor

Good point about space0 in D3D11, let me update that.

I think I would be inclined to keep the t,s,u,b just because it uses the same syntax. But I don't really care.

@grorg
Contributor

grorg commented Mar 4, 2019

Discussed in the 4 March 2019 meeting

@litherum
Contributor Author

litherum commented Mar 4, 2019

We resolved to:

  1. If shaders have distinct binding numbers, they will work out-of-the-box
  2. Otherwise, there will be new API surface for mapping the shader variables to the WebGPU binding model

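The resolution can be restated as a simple predicate; `needsExplicitMapping` is a hypothetical helper (not proposed API) that checks whether a shader's register indices are already disjoint:

```javascript
// Sketch of the resolution: shaders whose register indices are disjoint work
// out of the box; colliding indices (like s0 + b0) would require the new
// mapping API surface. Illustrative only.
function needsExplicitMapping(registers) {
  const indices = registers.map(r => parseInt(r.slice(1), 10));
  return new Set(indices).size !== indices.length;
}

console.log(needsExplicitMapping(["s0", "b1"])); // false: works out of the box
console.log(needsExplicitMapping(["s0", "b0"])); // true: needs the mapping API
```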
@litherum litherum closed this Mar 4, 2019
@kainino0x
Contributor

BTW, one more thought I had during the meeting: I think it would be nice to require space annotations for the out-of-the-box case. (I.e. require signature for any shader that has a binding without a space, and for any shader that has collisions)
