Proposal: Rework the creation and rasterisation of textures. #98
Comments
I am excited about this, and I am sure we can figure out something. One idea I've been thinking about is allowing backends to receive the rendering commands at a higher level. Instead of getting triangles, the backend could get commands like "fill this rect", "stroke this path", "render this text", and possibly even mix and match. Text is a bit trickier than the rest due to needing to measure glyphs, but I'm sure there is a way to thread that needle. Does that kind of thing sound like it would work for you?
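For illustration only, here is one way such a command stream could look in Zig. All of the names here (`RenderCommand`, `execute`, and the payload fields) are invented for this sketch and are not part of dvui's API:

```zig
// Hypothetical names only; nothing here exists in dvui today.
pub const Rect = struct { x: f32, y: f32, w: f32, h: f32 };
pub const Color = struct { r: u8, g: u8, b: u8, a: u8 = 255 };

/// One high-level drawing command handed to the backend instead of raw triangles.
pub const RenderCommand = union(enum) {
    fill_rect: struct { rect: Rect, color: Color },
    stroke_path: struct { points: []const [2]f32, thickness: f32, color: Color },
    draw_text: struct { utf8: []const u8, pos: [2]f32, size_pts: f32, color: Color },
};

/// A backend walks the list and turns each command into its own GPU work.
pub fn execute(commands: []const RenderCommand) void {
    for (commands) |cmd| {
        switch (cmd) {
            .fill_rect => |fr| {
                // e.g. emit two triangles, or call a native 2D API directly
                _ = fr;
            },
            .stroke_path => |sp| {
                // tessellate here, or hand the path off to a vector renderer
                _ = sp;
            },
            .draw_text => |dt| {
                // the backend decides how glyphs become pixels
                _ = dt;
            },
        }
    }
}
```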
As someone writing a backend, I would really love that higher-level command approach. You may be familiar with the nuklear gui library; it does exactly that, and it gives the backend a lot of control. It would also fit well with the idea of custom font rendering. Thanks for being receptive to the idea! I had a go at writing an immediate mode UI in Zig a few months back (it didn't get that far), and I did something like this. The way I handled fonts was to create an abstract representation of the glyphs (a mapping from characters to glyphs, where each glyph carried a bounding box and other size information) that the "frontend" (the actual library) would use. It consulted that representation when emitting a "draw_text" command, and also used it to do layout (although my layout code didn't get very far). Anyway, the design is ultimately up to you; I'm just trying to write a backend.
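For concreteness, a minimal Zig sketch of that kind of metrics-only glyph table; the names `GlyphMetrics`, `FontInfo`, and `measure` are made up for this example and are not taken from dvui or somegui:

```zig
const std = @import("std");

/// Everything the frontend needs in order to lay text out; no pixels involved.
pub const GlyphMetrics = struct {
    advance: f32, // horizontal advance to the next glyph
    bearing_x: f32, // offset from the pen position to the glyph's left edge
    bearing_y: f32, // offset from the baseline up to the glyph's top edge
    width: f32,
    height: f32,
};

pub const FontInfo = struct {
    line_height: f32,
    ascent: f32,
    /// Codepoint -> metrics. How a glyph becomes pixels (bitmap atlas, SDF,
    /// vector paths, ...) stays the backend's business.
    glyphs: std.AutoHashMap(u21, GlyphMetrics),

    /// Measure a UTF-8 string's width using only the metrics table.
    pub fn measure(self: *const FontInfo, utf8: []const u8) f32 {
        var w: f32 = 0;
        var it = std.unicode.Utf8Iterator{ .bytes = utf8, .i = 0 };
        while (it.nextCodepoint()) |cp| {
            if (self.glyphs.get(cp)) |g| w += g.advance;
        }
        return w;
    }
};
```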
Just so my ideas aren't too abstract, here's my toy immediate mode library. I haven't worked on it in months as I moved on to other projects and hit a few dead ends, but the design for all this high-level rendering stuff is there: https://github.com/zkburke/somegui Essentially all the drawing is encoded in a command buffer. This design may also have benefits for the whole "not updating the ui sometimes" thing you've innovated on, as that command buffer could just be retained between frames that don't need to be "repainted", and the backend could also skip redrawing command buffers that don't change if it wants to.
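A rough sketch of that retention idea, reusing the hypothetical `RenderCommand` type from the earlier sketch; `CommandBuffer`, `maybeRepaint`, and the generation counter are invented here and are not existing dvui or somegui API:

```zig
// `RenderCommand` is the hypothetical type from the sketch above.
pub const CommandBuffer = struct {
    commands: []const RenderCommand, // re-recorded by the frontend when the ui changes
    generation: u64, // bumped every time `commands` is re-recorded
};

/// The backend remembers the last generation it painted and only repaints
/// when the buffer has actually changed; otherwise it can simply re-present
/// its previous framebuffer untouched.
pub fn maybeRepaint(backend: anytype, buf: CommandBuffer, last_painted: *u64) void {
    if (buf.generation == last_painted.*) return;
    backend.repaint(buf.commands);
    last_painted.* = buf.generation;
}
```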
Okay, that sounds pretty good. We already have a ... I don't have a clear idea of what the API would look like, so please suggest something.
Oh, separately, maybe I'm not understanding, but "not updating the ui sometimes" for dvui is purely a matter of waiting between frames for input (or animations/timers). dvui doesn't do anything like saving painting commands or even a whole window buffer (except for the implicit double-buffering happening down in the opengl layer). If that wasn't clear, can you remember what you read that led you to believe we were doing something like that? I'm worried I miswrote some docs or something.
First off I want to say thank you for creating this great library. I don't want to come off as negative by creating an issue; I'm just interested in improving the library and making it more usable.
I've been working on creating a dvui backend for my engine, and I've found that dvui asks the backend to lazily allocate a large number of textures when navigating complex UIs (I'm mostly talking about the demo window here). This is mostly a problem with my renderer, since it doesn't have the best strategy for allocating and freeing GPU memory, but I can see it being an issue in other backends too, and it's just not ideal.
I understand that dvui rasterises new font textures and icons as elements are zoomed in, for visual clarity, but perhaps there's a better way to handle vector graphics by deferring such things to the backend. I propose letting the backend control the actual rendering of fonts while dvui handles glyph placement and sizing. That would give backends complete control over texture allocation, and it would let me implement SDF font rasterisation (and let others do similar things).
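To make the proposal concrete, here is a hypothetical sketch of what that split could look like, reusing the invented `GlyphMetrics` and `Color` types from the sketches above; none of these names exist in dvui today:

```zig
const std = @import("std");

/// Hypothetical interface: dvui keeps placement and sizing, while the backend
/// owns texture allocation and rasterisation (bitmap atlas, SDF, whatever it likes).
pub const FontBackend = struct {
    ptr: *anyopaque,

    /// Metrics for one codepoint at a given size, so the frontend can place it.
    metricsFn: *const fn (ptr: *anyopaque, codepoint: u21, size_pts: f32) GlyphMetrics,

    /// Draw one already-positioned glyph; how it becomes pixels is up to the backend.
    drawGlyphFn: *const fn (ptr: *anyopaque, codepoint: u21, pos: [2]f32, size_pts: f32, color: Color) void,
};

/// The frontend's whole text path only needs the two calls above:
/// query metrics, advance a pen position, and hand off positioned glyphs.
pub fn drawText(fb: FontBackend, utf8: []const u8, baseline: [2]f32, size_pts: f32, color: Color) void {
    var pen_x = baseline[0];
    var it = std.unicode.Utf8Iterator{ .bytes = utf8, .i = 0 };
    while (it.nextCodepoint()) |cp| {
        const m = fb.metricsFn(fb.ptr, cp, size_pts);
        fb.drawGlyphFn(fb.ptr, cp, .{ pen_x + m.bearing_x, baseline[1] - m.bearing_y }, size_pts, color);
        pen_x += m.advance;
    }
}
```

The point of the sketch is that the frontend only ever needs metrics plus a positioned-glyph callback; whether the backend rasterises from an atlas, from SDFs, or straight from outlines would be invisible to dvui.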
Such a change wouldn't necessarily make backends more complex, since the existing font rasterisation (using freetype and such) could simply be moved down to the backend and called as a helper utility, so the existing font logic could still be used by most backends.
I'd be happy to help implement this if it's accepted; I just wanted to throw the idea out there before doing anything too radical like creating a PR.