
Handling out-of-memory when resizing canvas backing store #2520

@greggman


In WebGL, the browser handles creating the drawing buffer for a canvas. The spec says:

If the requested width or height cannot be satisfied, either when the drawing buffer is first created or when the width and height attributes of the HTMLCanvasElement or OffscreenCanvas are changed, a drawing buffer with smaller dimensions shall be created. The dimensions actually used are implementation dependent and there is no guarantee that a buffer with the same aspect ratio will be created. The actual drawing buffer size can be obtained from the drawingBufferWidth and drawingBufferHeight attributes.

In WebGPU how would this be implemented?

In WebGL this all happens synchronously, so:

canvas.width  = devicePixelRatio * canvas.clientWidth;
canvas.height = devicePixelRatio * canvas.clientHeight;
// see what size I got
console.log(gl.drawingBufferWidth, gl.drawingBufferHeight);

Part of the reason for this is that many apps do the thing above (resize to match the CSS or window size), and a user might have multiple monitors; sizing a window across monitors could easily run out of memory. Ideally your app shouldn't crash at that point.

Generally in WebGL I have code like that above called just before rendering, though I usually check whether the size changed before setting canvas.width and canvas.height.
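That "check whether the size changed" step can be sketched as a small helper. This is just my own sketch (the name resizeCanvasToDisplaySize is mine, not from any spec); it works on any canvas-like object with width/height/clientWidth/clientHeight:

```javascript
// Sketch: resize the canvas backing store to match its display size,
// but only when it actually changed, to avoid re-allocating the
// drawing buffer every frame. Returns true if a resize happened.
function resizeCanvasToDisplaySize(canvas, devicePixelRatio = 1) {
  const width  = Math.max(1, Math.round(canvas.clientWidth  * devicePixelRatio));
  const height = Math.max(1, Math.round(canvas.clientHeight * devicePixelRatio));
  if (canvas.width === width && canvas.height === height) {
    return false;  // nothing to do
  }
  canvas.width  = width;
  canvas.height = height;
  return true;
}
```

Because it takes any canvas-shaped object, the same helper works for HTMLCanvasElement and OffscreenCanvas.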

Given that I can't check allocation success or failure synchronously in WebGPU, it seems I have a few options:

  1. Resize asynchronously and only switch buffers on success; keep rendering continuously

    The problem here is I'll be using the wrong size for the canvas for a few frames, and I'll need to keep the old canvas drawing buffer around so I can continue to render until the new one is ready. That means I need double the memory (or more) when sizing up. Worse, between the time I allocate the new size and the time I get the error message back, the user may have resized the canvas to some other size, so I have to throttle the creation of new buffers.

  2. Stop rendering on resize until allocation succeeds

    That is, when you notice the canvas has changed size, stop rendering, delete the old textures, create the new ones, wait for success, then start rendering again. This way you don't use double the memory. I'm curious what the resize experience would be with this method. (Update: it's not good at the moment.)

  3. Resize asynchronously, switch immediately, barf if out of memory

    This is what all the current samples I've seen do. They don't check for errors, so if the size is too large or they're out of memory, they'll start generating rendering errors, since the GPUTextures they're using are invalid. This is kind of gross. It also means that if you're checking for validation errors at all, you'll likely get errors you weren't expecting.

  4. Resize asynchronously, switch immediately, check for errors

    The problem with this solution is you'd get several frames of missing/bad rendering until you tried a smaller size and recovered. Each smaller size requires another round of waiting for the results. And, like the previous option, you'll start getting validation errors. (I suppose you could turn off any error checking while resizing.)
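The "try a smaller size and recover" loop in options 2 and 4 could look something like the sketch below. The shrink factor, the helper names, and the injected tryAllocate callback are all my own assumptions, not anything from the spec; the point is only that each retry preserves aspect ratio and that the loop is async so it can await an error-scope result:

```javascript
// Sketch: generate successively smaller candidate sizes, preserving
// the requested aspect ratio, until both dimensions hit a floor.
function* fallbackSizes(width, height, { factor = 0.75, minDim = 1 } = {}) {
  let w = width, h = height;
  while (w >= minDim && h >= minDim) {
    yield { width: Math.round(w), height: Math.round(h) };
    w *= factor;
    h *= factor;
  }
}

// Sketch: try each candidate until the (hypothetical) tryAllocate
// callback reports success; async so the caller can await the
// device's out-of-memory result for each attempt.
async function allocateWithFallback(width, height, tryAllocate) {
  for (const size of fallbackSizes(width, height)) {
    if (await tryAllocate(size)) return size;
  }
  return null;  // even the smallest candidate failed
}
```

Each failed attempt still costs a round trip waiting for the error result, which is exactly the "several frames of missing/bad rendering" problem described above.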

This is even more perplexing given that if I call someGPUCanvasContext.configure and it fails with out-of-memory, my canvas is now unusable until I recover by configuring with a smaller size.

Note: all the same problems exist if you're creating textures based on the size of the canvas, e.g. for post-processing.
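For those app-side textures, the mechanism WebGPU does provide for observing out-of-memory is an error scope around the allocation (device.pushErrorScope / device.popErrorScope). A sketch, where the wrapper name is mine and the device is passed in so the flow can be exercised with a stub:

```javascript
// Sketch: allocate a texture inside an 'out-of-memory' error scope.
// Resolves to the texture on success, or null if allocation failed.
async function tryCreateTexture(device, descriptor) {
  device.pushErrorScope('out-of-memory');
  const texture = device.createTexture(descriptor);
  const error = await device.popErrorScope();  // resolves to GPUError or null
  if (error) {
    texture.destroy();  // the texture is invalid; release it explicitly
    return null;
  }
  return texture;
}
```

Note this is still asynchronous: you only learn about the failure when popErrorScope resolves, which is the root of options 1-4 above.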

What's best or recommended practice here?
