
Custom WebGL Layers API #6456

Closed
mourner opened this issue Apr 5, 2018 · 17 comments

@mourner
Member

mourner commented Apr 5, 2018

Currently the only way to extend Mapbox GL JS with custom WebGL visualizations is overlaying the map with another canvas. This approach is used in several GL JS-powered visualization platforms such as Deck.gl (with Kepler.gl) and Carto VL. It has a few major drawbacks:

  • Severe performance hit; browsers have a hard time compositing two GL contexts.
  • You can only draw on top of a Mapbox map — there’s no way to draw something in between.
  • Syncing the camera view is unnecessarily difficult and redundant.

To solve this, we need to expose a way to hook into the GL JS rendering loop and draw something using the same GL context. We want to do that very carefully though, because it’s a volatile public API that is likely to change in a breaking way multiple times as we iterate on it, and has a risk of breaking from internal changes. Committing to it can significantly increase our maintenance burden.

To move this forward while reducing the risks, we'd like to introduce it in a way that doesn’t touch the style specification. To achieve this, instead of adding a new layer type, we could introduce a new event that fires after each layer draw so that you could do custom drawing in between layers. Additionally, we would expose public map methods for getting the GL context and invalidating the GL state.

One way this could look is:

// render something custom after "water" layer translucent draw pass
map.on('renderlayer', 'water', (e) => {
  const gl = map.getGLContext();
  gl.clearColor(0, 0.5, 0, 1);
  gl.clear(gl.COLOR_BUFFER_BIT);
});

// if you need to asynchronously clean up state:
map.resetGLState();

Open questions:

  • Do we want to add complexity by exposing internal rendering passes? To optimize performance, we currently render layers in three different passes — offscreen, opaque, and translucent. Are we OK with only exposing the translucent pass for drawing custom layers?
  • How do we prevent depth-buffer-related gotchas / rendering artifacts such as this one?
  • Is there any GL state we don't yet track in context.js (besides #6273) that we should, prior to releasing the custom layers API?
  • How do we make it easy for custom layers to align their features with the current map view (including pitched/rotated views), and allow projecting data purely on the shader side? This would require exposing the projection matrix in a usable form. (Proposed PR in native)
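For illustration, here is a minimal sketch of what shader-side alignment could look like under this proposal. The renderlayer event, getGLContext, and especially the e.projectionMatrix field are hypothetical at this point; only the mercator math matches what GL JS uses internally:

// Hypothetical sketch: assumes the proposed `renderlayer` event passes the
// current projection matrix as `e.projectionMatrix` (not an existing API).
// Mercator world coordinates in the [0, 1] range; the exposed matrix would
// map these into clip space, so projection can happen purely on the GPU.
function mercatorX(lng) {
  return (180 + lng) / 360;
}
function mercatorY(lat) {
  return (180 - (180 / Math.PI) * Math.log(Math.tan(Math.PI / 4 + lat * Math.PI / 360))) / 360;
}

map.on('renderlayer', 'water', (e) => {
  const gl = map.getGLContext();
  // `program`, `u_matrix` and `vertexCount` come from the renderer's own
  // setup (omitted here); vertices are in mercator [0, 1] coordinates.
  gl.useProgram(program);
  gl.uniformMatrix4fv(u_matrix, false, e.projectionMatrix);
  gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
});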

Previous proposal by the Carto team: #6124
Original issue: #281

cc @mapbox/gl-core @davidmanzanares @philogb

@ibgreen

ibgreen commented Apr 5, 2018

How do we make it easy for custom layers to align their features with the current map view (including pitched/rotated views), and allow projecting data purely on the shader side? This would require exposing the projection matrix in a usable form.

@mourner Very excited to see that this is progressing. I wanted to provide some input from deck.gl as we have been integrating with mapbox-gl for a long time.

  • It would certainly be nice to be able to get the matrix directly from mapbox. In fact, I am not sure I see how the API would work if the matrix isn't exposed.

  • However, for deck.gl's purposes it would not be that useful. The reason is that deck.gl lets the user view the scene from any pitch, or e.g. from dual cameras (for WebVR). In these cases we currently have to hide the mapbox map whenever the pitch etc. exceeds the values mapbox supports, and show it again once the values return to the range you currently accept. So we still need to be able to calculate matrices when the map cannot be shown.

What would be valuable for deck.gl is the ability to provide a matrix to mapbox-gl. Then we could keep showing the map at any (or at least a wider range of) angles and not have the map "pop in and out" as is currently the case.

There is an example of this interaction in our react-map-gl repo

FWIW - As you probably know, deck.gl provides a dedicated module, viewport-mercator-project, that is essentially devoted to generating mapbox-compatible projection matrices from a {lng, lat, zoom, pitch, bearing}-style object. It is in part a duplication of your transform code, but it may be worth a peek for inspiration.
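For reference, a rough usage sketch (the exact export and option names may differ between versions of the module):

// Sketch only — check the viewport-mercator-project docs for the exact API.
import WebMercatorViewport from 'viewport-mercator-project';

const viewport = new WebMercatorViewport({
  width: 800,
  height: 600,
  longitude: -122.45,
  latitude: 37.78,
  zoom: 12,
  pitch: 40,
  bearing: 30
});

// Projects a lng/lat pair to screen pixels using the same mercator math
// as mapbox-gl; the instance also exposes the underlying matrices.
const [x, y] = viewport.project([-122.45, 37.78]);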

@anandthakker
Contributor

What would be valuable for deck.gl is the ability to provide a matrix to mapbox-gl.

@ibgreen 👍 I think this aligns with the proposal in #2801 (a bit embedded in the discussion) for a declarative camera API. (It's not exactly the same, since if what you have is an already computed matrix, you'd have to work backwards to the camera's zoom/center/pitch/bearing abstraction.) #5743 brought us a step closer on this front.

@mourner
Member Author

mourner commented Apr 5, 2018

@ibgreen @anandthakker as far as I remember, the original reason to limit pitch is that we don't have level-of-detail tile loading logic — all tiles load on the same zoom level, so a pitch too high would trigger tiles loading into infinity. What if we try to fix this root issue instead of exposing a leaky abstraction (accepting an external projection matrix)?

@davidmanzanares

Hi,

We completely share your point of view regarding the opportunities and the risks of this feature, particularly the performance improvement, the ability to draw between Mapbox GL layers, and the risk of sharing a single WebGL context.

Regarding the idea of using events instead of “custom-webgl” layers: I think we'll be able to adapt to that API, although it could require some workarounds from third-party renderers. For instance, in your example, if the water layer were removed by the user, the custom layer would no longer be rendered. A similar problem arises if the user only wants a custom layer. The workaround would be to have a “dummy” layer (of one of the existing Mapbox GL layer types), as proposed by @jgoizueta.

Regarding the list of open questions:

  • I think we are OK with exposing just the translucent pass. As I understand it, the opaque pass is “just” a performance optimization to allow early depth testing. We don't have many use cases that would benefit from it, since most of the time we need transparency, or there is almost no pixel overdraw.
  • The second question is a great one. There needs to be a way to avoid those gotchas, since painting below an opaque layer won't work out of the box. I think this could be solved by passing the Z near and Z far values to the external renderer. By reading the code I got to https://github.com/mapbox/mapbox-gl-js/blob/v0.44.2/src/render/painter.js#L255. However, it would be great if you could explain why the Z near value is computed the way it is. In particular, I didn't expect the depth ranges of the render layers to overlap, but they do: by debugging I saw something like Layer0=[0.11, 0.99], Layer1=[0.10, 0.98], Layer2=[0.09, 0.97], and so on. Also, it would be better if the external layer had a Z range greater than epsilon, since otherwise Z testing cannot be used for features within the same layer.
  • Yes, I think there is, but I think it doesn't matter as long as the documentation spells out a contract like this:
      • External renderers cannot assume any particular state of the WebGL context provided by Mapbox GL.
      • External modifications of the WebGL state must be reverted to the WebGL default state, except for those covered in TrackedWebGLState.
      • External modifications of the WebGL state must be followed by a call to resetGLState after yielding the context.
      • TrackedWebGLState includes: “GL_FRAMEBUFFER_BINDING”, ...

Tracking all state would probably incur a big performance hit. With a contract like this, a correct, API-compliant external renderer should work with any Mapbox GL release as long as Mapbox GL doesn't break the API promise. If more state is tracked in the future, it shouldn't matter to external renderers; removing tracked state, however, would be a breaking change. (A minimal sketch of this contract follows the list below.)

  • Regarding a “matrix getter”, it would be nice to have, although it's not a huge deal from our point of view given the existing getters for pitch/rotation.
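To make the contract above concrete, here is a minimal sketch of an API-compliant external renderer. The renderlayer event, getGLContext and resetGLState are the methods proposed in this issue; program, buffer and vertexCount are assumed to be set up elsewhere:

map.on('renderlayer', 'water', () => {
  const gl = map.getGLContext();

  // assume nothing about the current state: set everything we depend on
  gl.useProgram(program);
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
  gl.drawArrays(gl.TRIANGLES, 0, vertexCount);

  // revert untracked state to WebGL defaults before yielding the context
  gl.disable(gl.BLEND);
  gl.blendFunc(gl.ONE, gl.ZERO);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);

  // let Mapbox GL invalidate whatever tracked state we may have touched
  map.resetGLState();
});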

Thank you!

Cc: @rochoa @Jesus89 @IagoLast

@ibgreen

ibgreen commented Apr 16, 2018

@mourner Here is a GIF of the simple example in react-map-gl mentioned above, where we hide the map whenever mapbox can no longer match the view projection matrix. I realize it looks strange in a simple example like this, but when we have a lot of deck.gl geometry on top of the map and carefully selected colors it makes more sense, as the map is just one of many visual components.

[GIF: mapbox-pitch]

@ibgreen

ibgreen commented Apr 16, 2018

as far as I remember, the original reason to limit pitch is that we don't have level-of-detail tile loading logic — all tiles load on the same zoom level, so a pitch too high would trigger tiles loading into infinity. What if we try to fix this root issue instead of exposing a leaky abstraction (accepting an external projection matrix)?

@mourner

  • Supporting higher pitch levels would certainly be a big improvement; it would allow us to keep drawing the map at higher pitch levels.
  • That said, pitch is only one of the restrictions imposed by mapbox. In particular, the field of view and the exact viewing direction are things we need to control in first-person view and WebVR usage, so there would still be many cases where we need to hide the map.

Now, the requirement to specify an external matrix can probably be regarded as a separate issue from the custom layers API, so maybe we can leave this discussion here. But I do think this is a feature mapbox should consider offering as a complement to the API.

If there is any interest from your side, I'd be happy to help explain use cases or brainstorm for ways to work around your concerns.

@ibgreen

ibgreen commented Apr 16, 2018

I think this aligns with the proposal in #2801 (a bit embedded in the discussion) for a declarative camera API. (It's not exactly the same, since if what you have is an already computed matrix, you'd have to work backwards to the camera's zoom/center/pitch/bearing abstraction.) #5743 brought us a step closer on this front.

@anandthakker

That looks promising, thanks for the links!

FWIW, we did a lot of work in deck.gl on a similar system. We ended up creating a hierarchy of "Viewport" classes (essentially geospatially aware cameras that are used to generate matrices and uniforms) and a separate "TransitionManager" to handle the various transition and animation cases. Our Viewport classes include a "WebMercatorViewport" that is mapbox-compatible, but also more "standard" viewports like "FirstPersonViewport" (when using these we just turn off the map).
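Schematically, the shape of that hierarchy looks something like this (a sketch, not deck.gl's actual classes):

// Schematic sketch of the hierarchy described above — not deck.gl source code.
class Viewport {
  constructor({ width, height, viewMatrix, projectionMatrix }) {
    this.width = width;
    this.height = height;
    this.viewMatrix = viewMatrix;
    this.projectionMatrix = projectionMatrix;
  }
  // project(), unproject(), shader uniforms, etc.
}

class WebMercatorViewport extends Viewport {
  // built from {longitude, latitude, zoom, pitch, bearing};
  // produces matrices compatible with the mapbox-gl camera
}

class FirstPersonViewport extends Viewport {
  // built from a position and view direction; the mapbox map is
  // simply turned off while one of these is active
}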

@ibgreen

ibgreen commented Apr 16, 2018

When does it make sense for us to do a test integration with deck.gl? Is the Carto PR in good condition, or is it still too early?

Our main initial focus would be quite simple:

  • enabling deck.gl to draw into the same context and framebuffer as mapbox, using a shared Z buffer.
  • We would do our drawing after mapbox; I don't think we need to inject between mapbox styles at this stage.
  • As long as we are not drawing translucent stuff, the Z buffer should presumably make deck layers and mapbox styles play nice together.
  • I think we want to quickly discover and flush out potential issues (with e.g. far clipping plane or similar).

@ansis
Contributor

ansis commented Apr 18, 2018

To move this forward while reducing the risks, we'd like to introduce it in a way that doesn’t touch the style specification. To achieve this, instead of adding a new layer type, we could introduce a new event that fires after each layer draw so that you could do custom drawing in between layers. Additionally, we would expose public map methods for getting the GL context and invalidating the GL state.

I think we could also expose this as a CustomStyleLayer interface that people can implement and then add with addLayer, etc. This could be done without touching the style specification, and would have the advantage of being able to use the regular methods for adding/removing layers.

class MyCustomStyleLayer extends CustomStyleLayer {
    constructor(...) {}
    renderTranslucent(context, matrix) {
        // draw
    }
}

map.addLayer(new MyCustomStyleLayer(...));

I think we may eventually want to expose the style spec format and style property evaluation so that custom layers can be specified within styles. If that happens, I think it would be easier to expand the CustomStyleLayer interface than to extend the event-based interface.

Additionally, we would expose public map methods for getting the GL context and invalidating the GL state.

Is there some way we could support async use of the context while not relying on the user to manually invalidate it? Either by assuming the state is invalid whenever mapboxgl code resumes control from user code, or by adding a map.withContext((gl) => { // gl is only valid in here, and throws if stored and called outside this function call }).
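A rough sketch of what that second option could look like (purely illustrative; the property path and helper below are made up):

// Illustrative only: `painter.context.gl` and `_invalidateTrackedState()`
// stand in for whatever internals would actually back this.
mapboxgl.Map.prototype.withContext = function (callback) {
  let valid = true;
  const gl = this.painter.context.gl;
  const proxy = new Proxy(gl, {
    get(target, prop) {
      if (!valid) {
        throw new Error('GL context used outside of a withContext callback');
      }
      const value = target[prop];
      return typeof value === 'function' ? value.bind(target) : value;
    }
  });
  try {
    callback(proxy);
  } finally {
    valid = false;                     // any stored reference now throws
    this._invalidateTrackedState();    // assume user code dirtied the GL state
  }
};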

Do we want to add complexity by exposing internal rendering passes? To optimize performance, we currently render layers in three different passes — offscreen, opaque and translucent . Are we OK with only exposing the translucent pass for drawing custom layers?

I think we'll need to expose some concept of render passes so that we can support both flat and 3d layers. If we're doing this we might as well expose the opaque pass if the optimization is worthwhile enough for us to use it internally (is it @kkaefer?).

How do we prevent depth-buffer-related gotchas / rendering artifacts such as this one?

The matrix we provide users will transform a z value from some units (question: which units? pixels? % of the world? meters?) into the gl coords used by us. We could expose near and far values if that helps in some way but I think this should be mainly handled by the matrix we provide.

But what if we eventually want to switch to logarithmic depth in order to increase depth precision? This would be a breaking change for all custom 3d layers because we'd need them to add extra custom shader code. I'm not sure what we can do here besides possibly breaking things in the future.
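For context, the standard logarithmic-depth trick means every custom vertex shader would need to remap gl_Position along these lines (a generic technique, not anything GL JS does today):

// Generic logarithmic-depth remapping in GLSL, kept as a string the way a
// custom layer would embed it; `u_far` is the far plane distance.
const logDepthShaderChunk = `
uniform float u_far;

void applyLogDepth() {
    float fcoef = 2.0 / log2(u_far + 1.0);
    gl_Position.z = (log2(max(1e-6, 1.0 + gl_Position.w)) * fcoef - 1.0) * gl_Position.w;
}
`;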

Some more open questions:

  • will the eventual possible use of webgl2 affect this in any way?
  • would support for non-mercator projections affect this in any way by requiring extra custom shader code?
  • would implementing 3d terrain (and rendering custom layers draped on 3d terrain) require anything extra from custom style layers?
  • would exposing a bigger public api for sources so that a custom layer can use geojson sources be an eventual goal?

@jfirebaugh
Contributor

I'm also in favor of sticking with the layer concept for this feature, rather than introducing a new paradigm.

@markusjohnsson
Contributor

markusjohnsson commented Apr 20, 2018

Actually, I would really like to use a custom shader for { "type": "fill" } layers. We are doing weather rendering and would like to mask our layers by the "water" source-layer, for example a wave-height layer, which sometimes overlaps fine features of coasts:

[screenshot: wave-height layer overlapping fine coastal features]

Edit: added description for screen shot.

In older versions of our systems we used a "land" layer above "water", rather than a "water" layer above "land", but that's not how mapbox/openstreetmap data is defined.

@planemad

@markusjohnsson is this the same as the 'inverted fill' used to mask a map area in #6267?

@markusjohnsson
Contributor

@planemad our problem could be solved using either method, but what I'm suggesting here is not the same thing, technically.

@ibesora

ibesora commented Jul 17, 2018

As someone who pitched for something like this in #4327 more than a year ago, I'm very interested in following this discussion. I currently resort to Tangram whenever I need to do some custom rendering that goes beyond overlaying something.
One thing I think is missing from the proposal is the ability to customize the shaders used while rendering a mapbox layer. I'm thinking about custom-encoded raster data, for example. That would have made it possible to implement #6110 without adding another raster-dem source type.
Alternatively, that could be solved by giving access to the loaded tiles in the callback function proposed in the opening post.
Generally speaking, I think we should have access not only to the model-view-projection matrix but also to the loaded tiles. That would give us the option to use mapbox's tile management and render the tiles as we please.

@mourner
Member Author

mourner commented Jul 17, 2018

@ibesora thanks for chiming in! Customizing the shaders is something I explored a while ago in #281 (comment), but I think we can follow up on that independently of this feature — it's a different kind of "custom" that doesn't overlap much.

BTW, we're sprinting on custom layers this week in Kyiv with @ansis — stay tuned for updates!

@anandthakker
Contributor

This is in progress over at #7039

@mollymerp
Contributor

closed in #7039
