Proposal: Rework the creation and rasterisation of textures. #98

Open
zkburke opened this issue Aug 12, 2024 · 5 comments

Comments

@zkburke

zkburke commented Aug 12, 2024

First off, I want to say thank you for creating this great library. I don't want to come off as negative by opening an issue; I'm just interested in improvement and making the library more usable.

I've been working on a dvui backend for my engine, and I've found that dvui asks the backend to lazily allocate a large number of textures when navigating complex UIs (I'm mostly talking about the demo window here). This is mostly a problem with my renderer, since it doesn't have the best strategy for allocating and freeing GPU memory, but I can see it being an issue in other backends too, and it's just not ideal.

I understand that dvui rasterizes new font textures and icons as elements are zoomed in for visual clarity, but perhaps there's a better way to handle vector graphics: defer such things to the backend. I propose letting the backend control the actual rendering of fonts while dvui handles glyph placement and sizing. This would give backends complete control over texture allocation, and it would let me implement SDF font rasterisation (and let others do similar things).

Such a change wouldn't necessarily make the backends more complex, as the existing font rasterisation (using freetype and such) could just be moved down to the backend and called as a helper utility, so that the existing font logic could still be used by most backends.
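
To make that a bit more concrete, here's a very rough sketch of what a backend-side hook could look like (every name here is something I made up for illustration, not anything that exists in dvui today):

```zig
// Rough sketch only; `GlyphRequest`, `GlyphBitmap`, and `Backend` are made-up names.
const GlyphRequest = struct {
    codepoint: u21,
    font_id: u32,
    pixel_size: f32,
};

const GlyphBitmap = struct {
    width: u32,
    height: u32,
    // Backend-owned pixel data (could equally be an SDF field or a texture handle).
    pixels: []const u8,
};

const Backend = struct {
    ctx: *anyopaque,
    // dvui would call this instead of rasterising glyphs itself; a backend
    // could forward to the existing freetype-based helper or do SDF rendering.
    rasterizeGlyph: *const fn (ctx: *anyopaque, request: GlyphRequest) GlyphBitmap,
};
```

The point is just that dvui would hand the backend a glyph request and get pixels (or eventually a texture handle) back, so the backend decides when and how textures get allocated.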

I'd be happy to help implement this if it's accepted; I just wanted to throw the idea out there before doing anything too radical like opening a PR.

@david-vanderson
Owner

I am excited about this, and I am sure we can figure out something.

One idea I've been thinking about is allowing backends to receive the rendering commands at a higher level. Instead of getting triangles, the backend could get commands like "fill this rect", "stroke this path", "render this text", and possibly even mix and match.
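
As a sketch of what I mean (these types are invented for illustration, not anything dvui has today):

```zig
// Sketch only; Rect, Color, and RenderCommand are illustrative names.
const Rect = struct { x: f32, y: f32, w: f32, h: f32 };
const Color = struct { r: u8, g: u8, b: u8, a: u8 };

const RenderCommand = union(enum) {
    fill_rect: struct { rect: Rect, color: Color },
    stroke_path: struct { points: []const [2]f32, thickness: f32, color: Color },
    render_text: struct { text: []const u8, rect: Rect, color: Color },
};

// A backend would switch on the command and draw however it likes.
fn execute(cmd: RenderCommand) void {
    switch (cmd) {
        .fill_rect => |c| {
            _ = c; // e.g. emit two triangles, or call a 2D drawing API
        },
        .stroke_path => |c| {
            _ = c; // stroke the polyline in c.points
        },
        .render_text => |c| {
            _ = c; // backend-controlled font rasterisation goes here
        },
    }
}
```

The backend could translate these into triangles itself (roughly what it gets today) or hand them to whatever higher level drawing API it has.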

Text is a bit trickier than the rest because of the need to measure glyphs, but I'm sure there is a way to thread that needle.

Does that kind of thing sound like it would work for you?

@zkburke
Author

zkburke commented Aug 12, 2024

As someone writing a backend, I would really love that higher level command approach. You may be familiar with the nuklear GUI library; it does exactly that, and it gives a lot of control to the backend side. It would also fit well with the idea of custom font rendering. Thanks for being receptive to the idea!

I had a go at writing an immediate mode UI in Zig a few months back (it didn't get that far), and I did something like this. The way I handled fonts was to create an abstract representation of the glyphs (a mapping of characters to glyphs, where each glyph had a bounding box and other size information) that the "frontend" (the actual library) would use. When it emitted a "draw_text" command it used this representation to do layout (although my layout code didn't get that far either). Anyway, the design is ultimately up to you; I'm just trying to write a backend.

@zkburke
Author

zkburke commented Aug 12, 2024

Just so my ideas aren't too abstract, here's my toy immediate mode library. I haven't worked on it in months, as I moved on to other projects and hit a few dead ends, but the design for all this high level rendering stuff is there: https://github.com/zkburke/somegui

Essentially, all the drawing is encoded in a CommandBuffer that contains commands like draw_rect and draw_text. The frontend just sees a font as a collection of glyphs, which are agnostic of their actual visual content and only contain width, height, and padding fields: just enough to compute a conservative bounding box per text draw for layout. The backend then ingests this command buffer and takes over all rendering, including generating geometry and rasterizing text (I had only done this using raylib's drawing library, but it's entirely possible to do custom font rendering). This allows the widget code to be completely agnostic of how things are drawn, caring only about size and draw order.
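
The font side of it looks roughly like this (simplified and with renamed fields, so don't read it as the exact somegui API):

```zig
const std = @import("std");

// Simplified sketch; field names are illustrative rather than somegui's exact API.
const GlyphMetrics = struct {
    width: f32,
    height: f32,
    padding: f32,
    advance: f32,
};

// The frontend only ever sees this: enough to compute a conservative
// bounding box per draw_text command, nothing about the glyph's pixels.
const Font = struct {
    glyphs: std.AutoHashMap(u21, GlyphMetrics),
    line_height: f32,

    // ASCII-only for brevity; real code would decode UTF-8 codepoints.
    fn measureWidth(self: *const Font, text: []const u8) f32 {
        var width: f32 = 0;
        for (text) |byte| {
            if (self.glyphs.get(byte)) |g| {
                width += g.advance + g.padding;
            }
        }
        return width;
    }
};
```

The frontend only ever touches the metrics; the backend is the only thing that ever sees actual glyph pixels.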

This design may also have benefits for the whole "not updating the UI sometimes" thing you've innovated on, as the command buffer could just be retained between frames that don't need to be "repainted", and the backend could also skip redrawing command buffers that haven't changed if it wants to.

@david-vanderson
Owner

Okay, that sounds pretty good. We already have a GlyphInfo struct. Do you want dvui to communicate the GlyphInfo for each character we want to render? We'd need some extra info as well. For icons, I guess dvui would say "render this tvg_bytes into this screen rect".
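
Very roughly, maybe something like this, just so we have something concrete to poke at (every name and field below is a placeholder, including the GlyphInfo fields):

```zig
// Placeholder sketch; GlyphInfo exists in dvui but these fields are made up.
const Point = struct { x: f32, y: f32 };
const Rect = struct { x: f32, y: f32, w: f32, h: f32 };
const Color = struct { r: u8, g: u8, b: u8, a: u8 };
const GlyphInfo = struct { advance: f32, offset: Point, size: Point };

const PlacedGlyph = struct {
    info: GlyphInfo,
    position: Point, // dvui does placement/sizing, the backend does the pixels
};

// Backend entry points dvui could call (signatures are just a guess):
// fn renderText(backend: *Backend, glyphs: []const PlacedGlyph, color: Color) void
// fn renderIcon(backend: *Backend, tvg_bytes: []const u8, rect: Rect) void
```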

I don't have a clear idea of what the full API would look like, though, so please suggest something.

@david-vanderson
Owner

Oh, separately: maybe I'm not understanding, but "not updating the UI sometimes" for dvui is purely a matter of waiting between frames for input (or animations/timers). dvui doesn't do anything like saving painting commands or even a whole window buffer (except for the implicit double-buffering happening down in the OpenGL layer).

If that wasn't clear, can you remember what you read that led you to believe we were doing something like that? I'm worried I miswrote some docs or something.
