
Proposal: Bounded Field Functions / Premodifiers #696

Open
NoTalkOnlyProx opened this issue Sep 3, 2024 · 9 comments

Comments

@NoTalkOnlyProx

Is your feature request related to a problem?
Hi! We've quickly realized that it will be completely impractical for us to save our procedurally generated world to disk, so destructive methods for generating it are out of the question. That means we need more advanced tools for non-destructive worldgen changes.

Modifiers are already a brilliant tool for this, and we anticipate we will be using them extremely heavily (with our own custom modifier implementations).

However, they are limited by the fact that they apply after the terrain generation has finished. It would be very useful to us if modifiers could somehow influence the terrain gen step itself. I'll give some concrete examples of why a little later.

Describe the solution you'd like

The solution we arrived at is pre-modifiers. These would essentially be very simple modifier regions that non-destructively define custom channel fields which would be accessible to terrain generators (such as the graph generator).

Each pre-modifier would have an associated channel and an associated Bounded Field Function (BFF). Essentially, BFFs are just SDFs, but with the association between the field value and an isosurface intentionally dropped.

BFFs instead simply define a real-valued field that can be sampled at any location in space, and used, for example, as an input to a voxel generator graph. The key thing that makes them bounded is that each BFF has a well-defined AABB region outside which it can be assumed to output the value zero.

So, extremely similar to SDFs. You can make a BFF from any SDF by negating the SDF and clamping the result to be non-negative, i.e. max(-sdf, 0).
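
For illustration, that conversion is a one-liner (a hypothetical helper, not part of godot_voxel):

```cpp
#include <algorithm>

// Hypothetical helper: converts one SDF sample into a BFF sample.
// Inside the shape the SDF is negative, so -sdf is positive; outside,
// the clamp forces the BFF's "rest value" of zero.
float bff_from_sdf(float sdf_value) {
    return std::max(-sdf_value, 0.0f);
}
```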

Each channel is sample-able by the generator. When a channel is sampled within a chunk, the output values for all of the BFFs with that channel index are simply summed. This list can be filtered to only include BFFs whose AABB intersects the AABB of the chunk being processed, so I believe this could be quite efficient. And, being completely non-destructive, it would come with little to no memory cost.

This simple summation is why the "rest value" of each BFF should be zero, and why I want to distinguish BFFs from SDFs. But the idea overall is very similar.
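
To make the sampling model concrete, here is a self-contained sketch of the summation with the AABB filter (all names are illustrative, not existing godot_voxel types):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Box { Vec3 min, max; };

static bool boxes_intersect(const Box &a, const Box &b) {
    return a.min.x <= b.max.x && b.min.x <= a.max.x &&
           a.min.y <= b.max.y && b.min.y <= a.max.y &&
           a.min.z <= b.max.z && b.min.z <= a.max.z;
}

struct Bff {
    Box bounds;                // outside this AABB the BFF outputs 0
    int channel;               // which channel this BFF contributes to
    float (*sample)(Vec3 pos); // the field function itself
};

// Sum every BFF on `channel` whose bounds touch the chunk being processed.
// BFFs rest at zero, so the ones filtered out would contribute nothing anyway.
float sample_channel(const std::vector<Bff> &bffs, int channel,
        const Box &chunk_bounds, Vec3 pos) {
    float sum = 0.0f;
    for (const Bff &bff : bffs) {
        if (bff.channel == channel && boxes_intersect(bff.bounds, chunk_bounds)) {
            sum += bff.sample(pos);
        }
    }
    return sum;
}
```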

So, one very simple example: Suppose I wanted to create a road from point A to point B in my terrain. I could create a path BFF that is zero anywhere outside the region where the road exists, and write that value to, say, channel 10.

Then in my generator, I could include logic which samples channel 10, and anywhere that channel 10 is above a threshold, blend away anything creating rough terrain, and place "rocky path" texture paints into the weight data.
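
A sketch of what that generator-side logic could look like (the threshold and fade behavior are assumptions for illustration):

```cpp
#include <algorithm>

// Hypothetical: fade out the rough-terrain contribution wherever the road
// channel exceeds a threshold. `road_value` would come from sampling
// channel 10 as described above.
float blend_road(float base_height, float rough_height, float road_value) {
    const float threshold = 0.5f;
    if (road_value <= threshold) {
        return base_height + rough_height; // untouched terrain
    }
    // Fades roughness from full to none as road_value goes from 0.5 to 1.0.
    const float fade = std::min((road_value - threshold) / (1.0f - threshold), 1.0f);
    return base_height + rough_height * (1.0f - fade);
}
```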

Hopefully that simple example demonstrates why this could be a very powerful feature.

The nice thing is, with the architecture I've laid out here, there would be no practical limit to the number of channels, and the number of pre-modifiers could be quite high with minimal impact. So this could allow for very detailed non-destructive terrain customization!

Describe alternatives you've considered
We need to be able to arbitrarily and non-destructively define structures in the generated terrain that do not derive simply from parallelizable noise distributions, but which affect characteristics of the terrain generation.

This proposal is, IMO, the de facto way of doing that efficiently. The key decision is really whether we should try to integrate it into godot_voxel directly, or if we would be better off implementing it as part of a custom generator class. And that really depends on whether you have interest in having it integrated.

So: What do you think of the idea? We'll absolutely be happy to implement it if you like it.

@NoTalkOnlyProx (Author) commented Sep 5, 2024

We decided we are probably better off, for now, developing a less versatile version of this independently without attempting to upstream it.

But do let us know if the idea seems reasonable for upstreaming; maybe at some point we will be in a better place to do that.

@Zylann (Owner) commented Sep 8, 2024

Hi! We've quickly realized that it will be completely impractical for us to save our procedurally generated world to disk, so destructive methods for generating it are out of the question

Depends, you can generate and save on the fly like Minecraft does. It just takes more space, but nowadays space is relatively cheap.

Modifiers are already a brilliant tool for this

Hmmm I don't feel very good seeing this coming ^^" Modifiers haven't been on my maintenance list for a while, and I have no plans for them.

Modifiers are already a brilliant tool for this, and we anticipate we will be using them extremely heavily

Arghhh

However, they are limited by the fact that they apply after the terrain generation has finished. It would be very useful to us if modifiers could somehow influence the terrain gen step itself. I'll give some concrete examples of why a little later.

They can't do that because they were not meant to (and they are also more a proof of concept than anything else). If you want a generation system that mixes up modifiers and global stuff, you pretty much need to implement your own generator, which then would have a collection of your own kind of modifiers.

The solution we arrived at is pre-modifiers

Nope... those are still modifiers. At least if they were integrated to the module... there is no pre/post "Types" of modifiers, only one, with an "order of application" or "layers". In short: everything is modifiers. "generators" just happen to pre-date modifiers and are global, while modifiers arrived later as an experiment to apply on top. But the evolution of such a system would pretty much be "everything is a modifier with either local or global bounds", including generators, and maybe even edits.
Maybe the only roadblock to this is the fact that generators are resources, which means modifiers must be part of them, not the terrain. Or the opposite. It comes into contradiction with changes done on a terrain or the generator. In Godot, the same generator could be on two different terrains in different worlds. Maybe it's just another composition layer (the node/resource separation makes it a bit awkward).

would be accessible to terrain generators (such as the graph generator).

Another multiplication of complexity: I have no idea how that would even work, but it sounds like a ton of work (and yet, only if you want to use such a system! Plenty of games don't need that).

Each pre-modifier would have an associated channel

There is a lexical conflict here, because "channel" is already used in VoxelBuffer, which is supposed to represent a box of sampled voxels. After reading a bit, the channels you are referring to don't seem to be the same.

The key thing that makes them bounded is that each BFF has a well-defined AABB region outside which it can be assumed to output the value zero.

Sounds like a rectangular seam generator. I would expect them to have a falloff eventually.

When a channel is sampled within a chunk, the output values for all of the BFFs with that channel index are simply summed

Sounds easy, but I have a feeling that at some point something different will be needed by someone. "Modifiers" were meant to interact in a way a bit similar to layers in a painting program, where each layer can have different blending modes. So depending on that and their order, landscape could be shaped manually ("add" being a possible choice).

And, being completely non-destructive, it would come with little to no memory cost.

Depends on what you do with them. A lot of edits done in the same place become increasingly expensive to sample, and memory does increase endlessly in the long run, if you consider doing edits with them. When I added modifiers, I meant them to be an off-line tool to manually place landscape features. I stopped working on them early on though.

So, one very simple example: Suppose I wanted to create a road from point A to point B in my terrain. I could create a path BFF

I would use a spline modifier.

Overall, you pretty much expanded over modifiers which I already had ideas about. However, as you might guess, I lost motivation in working on that because these are tools meant for manual landscape editing, while my main interests were fully-procedural generation with sampled in-game edits (in addition to things that are not specifically terrain stuff). And generally, a lot of work I don't have time to invest into currently (I'll say this often, because I usually do one thing at a time, and say that to everyone else xD).
Also, this is an approach that only works if you're into this workflow. This is not needed by every game. When I added modifiers I had to insert it along every step of the pipeline where generation was involved, and that annoyed me a bit. I often prefer if things can be modular and data-driven (adding code separately instead of complexifying existing systems; not paying for what you don't use), though the existing systems make this a little harder to do.
This is in part why I think making your own generator would pretty much make your system entirely modular by nature, though if it needs to be made more pervasive, that's where it would have to be inserted in other places, and harder to modularize.

We decided we are probably better off, for now, developing a less versatile version of this independently without attempting to upstream it.

A bit curious how you're going to implement this (notably the resource/node awkwardness).

@NoTalkOnlyProx (Author) commented Sep 9, 2024

Depends, you can generate and save on the fly like Minecraft does. It just takes more space, but nowadays space is relatively cheap.

We estimate the space required for this at our scale would be, like, on the order of a terabyte. It is entirely possible I did my math wrong, but for a 35000 x 35000 x 1000 terrain, let's first suppose that throwing out empty chunks reduces that to more like a 35000 x 35000 x 100 terrain. Then, even at one byte per voxel, that's 35000 x 35000 x 100 ≈ 1.2e11 bytes, or about 0.1TB. Storage is cheap, yes, but we really don't feel we can go above 30GB.

Nope... those are still modifiers. At least if they were integrated to the module...

Sure, sure, though I think this is more a matter of semantics. I've switched to calling them Fields, rather than Pre-Modifiers, to better distinguish them, since these do not output SDF data, which is the main distinction I think is important to grasp.

Their value is fully defined before any SDF data gets operated on, which is really the main idea behind them (and they cannot have any knowledge of SDF data). So they bypass voxel storage completely.

Sounds like a rectangular seam generator. I would expect them to have a falloff eventually.

Maybe so! But for other reasons. Remember, I mentioned these are not SDF, so 0 is the fully-fallen-off value.

The best way to think about these is as, like, non-destructive, floating-point spatial tagging data that the generator and shader have access to, and which is not stored anywhere in the voxel structure itself.

The main thing right now that worries me, regarding seams, is that I really don't understand all this range-analysis code I keep seeing, I may be forced to implement that for this feature. But I am trudging along, we'll see how it goes.

Sounds easy, but I have a feeling that at some point something different will be needed by someone.

I imagine everyone will have slightly different uses, yeah. But I think if you go back to my description above: non-destructive, floating-point spatial tagging data, maybe you can see how that could be useful to a wide variety of people. I certainly can. For example, this seems like a half-decent way to handle biome tagging. You could even have a "2DCellField" which accepts (or if you're feeling fancy, computes) a list of its neighbors, and calculates a bounding box / field value from that. You could have a "warmth" tag and a "moisture" tag, a-la minecraft.

Or, you could have an integer-valued version of this concept, which would not be very useful for me, but could be extremely useful for the biomes use-case. IDK.

Or in the worst case, if you have four biomes, you could have four floating point fields representing the influence of each biome.

But you're right, integrating this into the engine in a truly clean way will require a lot of extra work, and maybe this is too niche for that to be worth it. So that is why we decided to just take the path of least resistance in our own fork. Pray for us that this decision does not come back to bite us xD

I would use a spline modifier.

Do spline modifiers exist somewhere? Cause we will need them, for other scenarios.

In this scenario, though, it would be nice to have a 2D path region within which the generator knows to turn down roughness contributors. We're faking it right now by using a cellular noise, but the key is that we need these paths to follow specific shapes. So a spline field (or premodifier, as I was calling them earlier) would get this job done. But a spline modifier would just write new SDF data for a spline.

To your credit, that's useful in other scenarios. For instance, should we move forward on this concept, we will want to add ravines, and these also need to meet very specific point-to-point guarantees. So in that case a spline modifier is 100% the right solution, with maybe a spline field alongside it to help the generator produce different weight paints in the area. Or perhaps we'll be forced to just implement weight paint outputs for modifiers, although I have a feeling that might turn into a slight headache. Really the choice between the two is gonna be path of least resistance for us.

When I added modifiers I had to insert it along every step of the pipeline where generation was involved, and that annoyed me a bit. I often prefer if things can be modular and data-driven.

Yeah, this is a valid concern. I think you could mitigate this by globalizing the GenContext struct you create locally inside voxel_data.cpp.

So, for example, in detail_rendering.cpp, you have query_sdf_with_edits, which takes const VoxelModifierStack &modifiers as an argument.

But what if this function were passed GenContext instead?

And what if the generation logic were centralized?

Like, right now, you have it implemented in at least two places: query_sdf_with_edits implements the X/Y/Z span generation signature, and thus requires direct awareness of VoxelModifierStack, whereas VoxelData does blockwise generation.

But what if you just had a single class, VoxelPipeline, that implements both of these generation schemes? All areas that need to rely on generated voxel data would call into this class, and the VoxelPipeline instance would be what gets passed around to all these areas, instead of the raw generation parameters.

VoxelPipeline itself could then be the owner of the VoxelModifierStack, and when it calls, for example, generator.generate_block(q), it could pass either a reference to itself, or a reference to a globalized version of the GenContext struct.

VoxelModifierStack could be left null inside the VoxelPipeline to totally disable that step.
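
To sketch the shape I'm imagining (none of these types exist in godot_voxel as written here; all names and members are assumptions):

```cpp
// All types below stand in for godot_voxel's real ones; this only shows the
// shape of the idea, not actual module API.
struct VoxelQueryData; // a block request (buffer + origin + LOD)

struct VoxelGenerator {
    virtual ~VoxelGenerator() = default;
    virtual void generate_block(VoxelQueryData &q) = 0;
};

struct VoxelModifierStack {
    void apply(VoxelQueryData &) {} // body omitted in this sketch
};

class VoxelPipeline {
public:
    // Single entry point for anything that needs generated voxels: the
    // generator runs first, then modifiers (if any) are applied on top.
    void generate_block(VoxelQueryData &q) {
        _generator->generate_block(q);
        if (_modifiers != nullptr) {
            _modifiers->apply(q);
        }
    }

private:
    VoxelGenerator *_generator = nullptr;
    VoxelModifierStack *_modifiers = nullptr; // left null to disable that step
};
```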

Meanwhile, VoxelFieldStack produces data that really only means anything to VoxelGenerator and VoxelMesher. So a reference to this would just be passed to either of them whenever they are generating their output, and it would be up to them to decide what to do with it.

Most generators would just ignore VoxelFieldStack. The TransvoxelMesher could (optionally) pass that data to the shader via a similar mechanism to weight paints. Only in this case, it would be dynamically sampled during meshing, rather than stored and then loaded from the voxel data, thus avoiding quite a few of the pitfalls involved with weight paints. This would ideally be available in a similar sort of "optional context parameter" struct to what I describe with GenContext/VoxelPipeline.

But, yeah, that's a pretty significant refactor. If you implement that refactor (just ignore the VoxelFieldStack aspect), we would of course build our system on top of it. But for us path of least resistance is to really just update the subsystems that we are using to support this feature, and not bother with refactoring everything else. So that is why we are developing this locally for now.

A bit curious how you're going to implement this (notably the resource/node awkwardness).

My current approach is this: I am essentially cloning your VoxelModifierStack concept, and creating a VoxelFieldStack (if we are leaning into the idea that these are floating point spatial tags, then maybe VoxelTagStack or VoxelMetadataStack would have been better names, but this is the name I am sticking with for now, as in my brain these are custom floating-point fields. Maybe VoxelCustomFieldStack).

I will be adding the VoxelFieldStack instance as a parameter to the VoxelGenerator generate functions.

Then, it will be up to each VoxelGenerator implementation to either ignore this provided VoxelFieldStack, or query it.

In our local version, this is going to be quite gross, because I won't be implementing the GenContext/VoxelPipeline refactor. I'll just be passing this stuff around directly.

In an upstreamable version of this feature, I would have implemented some kind of optional-attribute-context structure (such as the VoxelPipeline idea), and then added VoxelFieldStack as one of the properties.
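
For illustration, the kind of signature I mean (hypothetical names; the real generate functions differ):

```cpp
struct VoxelFieldStack; // the proposed field container, not existing API
struct VoxelQueryData;  // stand-in for the real block request type

// Invented context struct: optional attributes that generation might consume.
// Members default to null so callers and generators that don't care can
// ignore the whole thing.
struct VoxelGenContext {
    const VoxelFieldStack *fields = nullptr;
};

class VoxelGenerator {
public:
    virtual ~VoxelGenerator() = default;
    // Each implementation decides whether to query ctx.fields or ignore it.
    virtual void generate_block(VoxelQueryData &query,
            const VoxelGenContext &ctx = VoxelGenContext()) = 0;
};
```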

I would be happy to share with you what the final version ends up looking like, assuming we ever get it to work

@NoTalkOnlyProx (Author) commented Sep 9, 2024

Nope... those are still modifiers. At least if they were integrated to the module... there is no pre/post "Types" of modifiers, only one, with an "order of application" or "layers". In short: everything is modifiers

Rereading this, I think I understand a bit better.

Were this to be integrated, rather than implement a separate VoxelFieldStack, there would still only be VoxelModifierStack. These fields I propose would, then, be a new type of modifier, contrasting with the existing VoxelModifierSDF

I did forget that this exists when replying above.

The reason I chose not to go with this, at least for local implementation, is largely just simplicity. The sampling methodology is, IMO, better off being slightly different for VoxelSDFModifiers and VoxelFields. A separate stack is just the easiest way to accommodate that, IMO.

It does create quite a bit of code duplication, although this could be worked around with some code sharing. In my mind, this would be via a "VoxelEffect" class which contains just the logic surrounding transform/AABB, leaving the sampling logic and data structures up to subclasses.

So "VoxelField" and "VoxelModifier" would inherit from that, and then the two stack types would not worry about VoxelEffect, again due to the distinctions in how sampling logic probably should work.

But, I think you are saying that "VoxelEffect" could just as easily be "VoxelModifier".

In which case, I would rename "VoxelModifierStack" to "VoxelSDFModifierStack" and then I would create "VoxelFieldModifierStack".

But for non-upstreamable implementation, it suffices for me, and is easier, to just clone the VoxelModifierStack system and not do the refactors required for a cleaner integration.

@NoTalkOnlyProx (Author)

It occurs to me that there is no need for separate stacks, either. There could be a single stack type, and just multiple different implementations of apply. I.e., rename all of the apply functions to applySDF, and then create a new function sampleFields matching my field needs.

I think I will switch to this approach, since it will be less violent than the approach I had been considering.

@Zylann Would you accept a PR renaming the existing VoxelModifierStack::apply implementations to VoxelModifierStack::applySDF and making them filter by SDF modifier type?

This would allow our modifications to live more peacefully with upstream.
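
To make the ask concrete, roughly the shape I have in mind (a sketch only; the stack's real internals and signatures differ, and I'm using snake_case names like apply_sdf / sample_fields for illustration):

```cpp
#include <vector>

struct VoxelBufferLike; // stands in for VoxelBuffer

struct VoxelModifier {
    virtual ~VoxelModifier() = default;
    virtual bool is_sdf() const = 0;
    virtual void apply(VoxelBufferLike &voxels) = 0; // SDF path
    virtual float sample_field(int channel, float x, float y, float z) const {
        return 0.0f; // non-field modifiers rest at zero
    }
};

class VoxelModifierStack {
public:
    // Former `apply`, now filtering to SDF-type modifiers only.
    void apply_sdf(VoxelBufferLike &voxels) {
        for (VoxelModifier *m : _modifiers) {
            if (m->is_sdf()) {
                m->apply(voxels);
            }
        }
    }
    // New entry point: sums field-type modifiers on a channel.
    float sample_fields(int channel, float x, float y, float z) const {
        float sum = 0.0f;
        for (const VoxelModifier *m : _modifiers) {
            if (!m->is_sdf()) {
                sum += m->sample_field(channel, x, y, z);
            }
        }
        return sum;
    }

private:
    std::vector<VoxelModifier *> _modifiers;
};
```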

@Zylann (Owner) commented Sep 9, 2024

for a 35000 x 35000 x 1000 terrain

So you need LOD to see it all at once? Note that from what I said, you don't have to pre-generate all of that. But I guess you want your field system to manually author all this?

I really don't understand all this range-analysis code I keep seeing

The graph system is actually more like an expression evaluator. It has 2 backends: CPU and GPU. They produce the same results, but use different approaches.
The CPU one doesn't rely on an existing library or compiler, and is implemented in C++. This can get a bit slow, so range analysis is one of the optimizations it tries to make dynamically. The idea is, very often the area in which graph outputs are requested is known (cuboid for chunks). Therefore, like a "broad phase", the "compiled" nodes are evaluated to estimate the range of their output given that specific area as input. Then it may be that a bunch of nodes resolve as "locally constant" or "not affecting the result" in this area, therefore they get optimized out when running the "narrow phase" that will calculate all the values.
This optimization is not required. The instance generator noise graph doesn't use it. Not necessarily because it can't though, I just didn't consider using it there (it's a bit more work to handle results that come out as constant).
The GPU backend however leaves all this to the GLSL compiler, driver and GPU to do their own optimizations. However, GPU isn't always supported (some nodes don't work, and some usages can't use it, especially if the caller is not asynchronous). Btw, if you want to use GPU, that means you would also have to integrate your system to it (which is yet, yet, yet another pile of work)
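
To give an idea of what the broad phase does, here is a toy version of the interval arithmetic involved (not the actual graph code):

```cpp
// Each node computes the [min, max] its output can take over the queried
// area. If an interval collapses to a constant, the narrow phase can skip
// evaluating that node per-voxel.
struct Interval {
    float min;
    float max;
    bool is_constant() const { return min == max; }
};

Interval add(Interval a, Interval b) {
    return { a.min + b.min, a.max + b.max };
}

Interval mul_scalar(Interval a, float s) {
    // A negative scale flips which bound is the minimum.
    return s >= 0.0f ? Interval{ a.min * s, a.max * s }
                     : Interval{ a.max * s, a.min * s };
}
```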

I think if you go back to my description above: non-destructive, floating-point spatial tagging data, maybe you can see how that could be useful to a wide variety of people.

This is manual authoring of discrete items in the scene though. It requires user action to design this in the editor, and not every generator will want to work like that.
For example:

You could have a "warmth" tag and a "moisture" tag, a-la minecraft.

In Minecraft that's just a noise combination. Not something placed specifically at some XYZ coordinate with some specific size.
But I guess I see what you mean, that was kind of the idea behind modifiers.
I'm not sure if fields have to be arbitrary numbers necessarily, really it's anything in the scene that may affect generation. You don't always want to have to tinker with a combinator expression just to affect a region, but I guess that depends on the user and the project.

Do spline modifiers exist somewhere? Cause we will need them, for other scenarios.

No, it's just a concept. The idea is that modifiers/fields could be anything in the scene that may or may not have bounds, and affect generation. In that sense, they can be a path with control points, which is usually called a spline, or a Bézier curve. The distance to the spline can be used as the field. Then a path could have a bounding box or be further subdivided to optimize things a bit (if you use a lot of fields you will eventually need a spatial partitioning system to look them up quickly and in a thread-safe manner). Easier to describe than to implement, of course.
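
To illustrate the concept, the core of such a spline field boils down to distance-to-polyline with a falloff (a self-contained sketch, not an existing modifier):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist_point_segment(Vec3 p, Vec3 a, Vec3 b) {
    const Vec3 ab{ b.x - a.x, b.y - a.y, b.z - a.z };
    const Vec3 ap{ p.x - a.x, p.y - a.y, p.z - a.z };
    const float len2 = ab.x * ab.x + ab.y * ab.y + ab.z * ab.z;
    // Project p onto the segment, clamped to its endpoints.
    const float t = len2 > 0.f
            ? std::clamp((ap.x * ab.x + ap.y * ab.y + ap.z * ab.z) / len2, 0.f, 1.f)
            : 0.f;
    const Vec3 d{ a.x + ab.x * t - p.x, a.y + ab.y * t - p.y, a.z + ab.z * t - p.z };
    return std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
}

// Field is 1 on the path and falls off to 0 at `radius`, which satisfies the
// BFF convention that the rest value is zero.
float sample_spline_field(const std::vector<Vec3> &points, Vec3 pos, float radius) {
    float dist = 1e30f;
    for (size_t i = 0; i + 1 < points.size(); ++i) {
        dist = std::min(dist, dist_point_segment(pos, points[i], points[i + 1]));
    }
    return std::max(1.0f - dist / radius, 0.0f);
}
```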

So, for example, in detail_rendering.cpp, you have query_sdf_with_edits, which takes const VoxelModifierStack &modifiers as an argument.

And what if the generation logic were centralized?

If you think about it, this is literally what VoxelGenerator is. You might not even need the existing modifiers system. The only issue as I said earlier, is the absence of connection to the scene tree, which is partly why I ended up passing it around separately. It's an arbitrary Godot rule that resources should not contain anything about stuff in the scene tree. But that rule is not a requirement if you have specific needs. With a custom generator and a collection of custom nodes that recognize it and expose the interaction with fields, you can get a similar result without having to modify everything.

But what if you just had a single class, VoxelPipeline

Resource or node? That is the question. But so far everything discussed is about generation, so it's all been VoxelGenerator.
Also, another way to look at it is VoxelData. That class contains everything needed to evaluate voxels in any area, though it's not an abstract one, while VoxelGenerator is. And the modifier stack inside of it is actually optional; it's just not implemented as a null (instead, it is empty, which has the same result).

Meanwhile, VoxelFieldStack produces data that really only means anything to VoxelGenerator and VoxelMesher

Why VoxelMesher all of a sudden 🤔 I thought this was about generation?
Well if you want this to expand even more everywhere that's even more of a concern in terms of modularity and amount of work^^"

The TransvoxelMesher could (optionally) pass that data to the shader via a similar mechanism to weight paints

I don't think meshers should be aware of that system at all. Meshers should only work on the VoxelBuffer they are given, which represents the result of sampling an area. It's the work of the previous stage (generating/copying voxels) to pass this data along. It doesn't mean it gets stored; however, this is where things get very frustrating because this is disregarding destructive edits, which are the main way player edits happen in this module. All that data would have to be stored persistently in this case. That would mean the huge amount of changes and work that fields represent would be unusable for that other approach, or you'd have to use fields as edits too at runtime, which brings yet another bunch of work (unless of course you don't want any editing at runtime, which I think defeats a lot of the point of this module compared to just using Blender or some other static terrain tools and associated rendering optimizations).

In our local version, this is going to be quite gross, because I won't be implementing the GenContext/VoxelPipeline refactor. I'll just be passing this stuff around directly.

Or like I said, just make your generator have a reference to it directly (or contain it). I might not do that from a plugin point of view, but from a dedicated approach for a game project that might do.

Were this to be integrated, rather than implement a separate VoxelFieldStack, there would still only be VoxelModifierStack. These fields I propose would, then, be a new type of modifier, contrasting with the existing VoxelModifierSDF

Let's say that what I mentioned was the ultimate goal at the time; fields are just another generalization. Though such generalization means a lack of structure, like fields requiring tinkering with a generator, while modifiers were meant to work on their own.

The reason I chose not to go with this, at least for local implementation, is largely just simplicity. The sampling methodology is, IMO, better off being slightly different for VoxelSDFModifiers and VoxelFields. A separate stack is just the easiest way to accommodate that, IMO.

I don't think it's any simpler, just a different usage.
Also, it might not be a good idea to separate the two entirely (at least in terms of code re-use) because both need to exist as part of a spatial structure that would make them easy to query in specific areas (for every chunk). Right now it's just a vector, but that does not scale when you start to have hundreds of them.
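
For example (hypothetical, nothing like this is implemented), a coarse uniform grid keyed by cell coordinates would make lookups proportional to the cells a chunk overlaps instead of the total modifier count:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Pack three signed cell coordinates (21 bits each) into one hashable key.
inline uint64_t cell_key(int x, int y, int z) {
    const uint64_t mask = (uint64_t(1) << 21) - 1;
    return (uint64_t(uint32_t(x)) & mask) |
           ((uint64_t(uint32_t(y)) & mask) << 21) |
           ((uint64_t(uint32_t(z)) & mask) << 42);
}

// Cell -> indices of modifiers whose AABB overlaps that cell. A chunk query
// walks only the cells the chunk covers instead of the whole vector.
using ModifierGrid = std::unordered_map<uint64_t, std::vector<int>>;
```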

There could be a single stack type, and just multiple different implementations of apply

Not sure of that. It sounded to me like modifiers were doing their own thing inside apply, while fields were rather "used" in different ways by generators instead of actively doing their own specific thing.

Would you accept a PR renaming the existing VoxelModifierStack::apply implementations to VoxelModifierStack::applySDF and making them filter by SDF modifier type?

I don't know... also it's not the only thing you'd have to change to insert your system there. If you want nodes to derive from the same base, you might; Godot likes OOP, so exposed classes may reflect that workflow, but the internals aim to be lighter and would have to be split, or largely rewritten. Again, I didn't mean to maintain this anymore, at least for now, and even having to review PRs touching it is similar to taking time working indirectly on it.
If you try the approach where you use your own generator, you could avoid all of this maybe.

I realize I took a whole hour going through all this, overall I'm not really available to work in this area (whether it's code or reviews). I might come back to it if one day I get more interest for this kind of general design.
If you want to discuss more there is a Discord with voice channel (in readme), might be better than writing big posts.
But yeah, this is a very far-reaching system that I'm not planning for right now.

@NoTalkOnlyProx (Author) commented Sep 16, 2024

The graph system is actually more like an expression evaluator [...]

Thank you for the explanation, it is helpful for orienting myself in your codebase. It is especially a relief to hear that range analysis is not required, as that will allow us to skip it (for now) for prototyping.

Btw, if you want to use GPU, that means you would also have to integrate your system to it (which is yet, yet, yet another pile of work)

Heh, yeah, I saw that -- Skipping GPU support is one of the various shortcuts we will be taking, since we are not currently aiming to upstream this.

That being said, I hope to implement this feature in such a way that incompatibilities with upstream are minimized. So at least having a basic understanding of your generation systems (GPU included) seems prudent. So thank you for going into detail on that as well.

This is manual authoring of discrete items in the scene though. It requires user action to design this in the editor

I guess my intended meaning here is that "manual authoring of discrete items in the scene" may be a more common desire than you think. But this could just be my own bias speaking!

I don't think meshers should be aware of that system at all.

We are interested in meshers being aware, too, so that shader behavior can also be (optionally) affected by fields. Although we will probably save implementing that for when we are certain it makes artistic sense.

But, for instance, fields could present an alternative solution to the same problem that currently motivates us to use weight-painting. So we are interested in exploring that, especially since it suffers fewer of the drawbacks.

Actually, for that reason, I would have thought you would be very interested in this potential use-case!

[...] this is disregarding destructive edits, which are the main way player edits happen in this module. All that data would have to be stored persistently in this case. That would mean the huge amount of changes and work that fields represent would be unusable for that other approach

I don't really see a problem here -- You already demonstrated the solution with everything else in this engine which is non-destructive.

Destructive edits override non-destructive edits. Fields would/will be used only by non-destructive elements of the engine (generators and maybe meshers).

As you point out, this makes this feature far more useful to workflows that emphasize non-destructive editing. I think that's fine, personally. Not every user of your engine needs or desires a destructive workflow.

Or like I said, just make your generator have a reference to it directly (or contain it). I might not do that from a plugin point of view, but from a dedicated approach for a game project that might do.

Definitely an approach I considered, but having access to fields inside the graph editor would be useful, and taking advantage of the already existing infrastructure for defining, tracking, and selectively applying modifiers was enough to lure me into going with the "fields as a subtype of modifier" approach.

Also, it might not be a good idea to separate the two entirely

I agree, and this became more clear as I gained more familiarity with your existing implementation. Integrating this feature as a modifier subtype has (so far) been relatively painless.

Not sure of that. It sounded to me like modifiers were doing their own thing inside apply, while fields were rather "used" in different ways by generators instead of actively doing their own specific thing.

Yep, this is what I mean. But I decided not to rename apply, and I have simply added a new function, sample_field, which accomplishes that very purpose.

If you want to discuss more there is a Discord with voice channel (in readme), might be better than writing big posts.

I do like the big posts, because it creates a public, searchable record in a nice recognizable spot which can then be read by others who may have an interest in the issue, and further discussed. I actually am in the discord, but to me it is better to discuss in a forum format like this.

I realize I took a whole hour going through all this, overall I'm not really available to work in this area (whether it's code or reviews).

Very understandable. Right now, my primary interest is in minimizing the divergence between our custom solution, and upstream. I have a double purpose in that -- One, it may lead to a semi-upstreamable solution (in case you change your mind), and two, it may save us from nasty refactoring if we decide to upmerge.

Speaking on that, I will now turn my attention to what I actually returned here to discuss:

The only issue as I said earlier, is the absence of connection to the scene tree, which is partly why I ended up passing it around separately.

I see! This was not something I was aware of, I apologize. I did read a decent chunk of the Godot engine development docs (as your development docs recommend), but I missed this particular requirement, or perhaps I simply did not understand when I read it.

That does complicate things.

BUT! It seems to me you have worked around this problem already? Technically speaking, voxel_modifier_stack has no knowledge whatsoever of the scene tree. All it has is a list of active modifiers, and an ID lookup system that allows voxel_modifier_****_gd to efficiently access bound items.

But the class itself has no knowledge. (I think? It is entirely possible something is hiding right under my nose, I am still getting familiar)

So IMO, could voxel_modifier_stack not simply be directly owned by VoxelGenerator?

I ask because I am thinking about making this change in my local copy to improve modularity. The generator having direct access to the stack would mean I would not have to add it as an argument to VoxelGenerator::generate_series and associated functions.

I will admit, I am not yet 100% certain this is the best course to take. But your input would be valuable to me for deciding. This is the kind of tweak I could potentially upstream, for instance. The benefit to you, in that case, would be (hypothetically) a small incremental step toward better modularity. Would such a change seem desirable to you?

@Zylann (Owner) commented Sep 16, 2024

We are interested in meshers being aware, too, so that shader behavior can also be (optionally) affected by fields

What I meant by that is that meshers should not be aware of how those "fields" get generated. They could access the results the same way they access voxels without knowing how they got generated. Maybe one advantage of having access to evaluation would be to do it per vertex after the meshing process, but that depends a lot on what kind of transfer is being done.
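
In code terms, that per-vertex idea might look something like this (a sketch assuming some sample_fields-style query exists; none of this is current module API):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Stand-in for whatever object can evaluate fields at a point.
struct FieldSampler {
    float sample_fields(int channel, Vec3 pos) const { return 0.0f; } // stub
};

// After meshing, bake one field channel into a per-vertex attribute that the
// shader can read, without the values ever touching voxel storage.
void bake_field_attribute(const std::vector<Vec3> &vertex_positions,
        const FieldSampler &fields, int channel, std::vector<float> &out) {
    out.resize(vertex_positions.size());
    for (size_t i = 0; i < vertex_positions.size(); ++i) {
        out[i] = fields.sample_fields(channel, vertex_positions[i]);
    }
}
```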

this is disregarding destructive edits, which are the main way player edits happen in this module. All that data would have to be stored persistently in this case. That would mean the huge amount of changes and work that fields represent would be unusable for that other approach

I don't really see a problem here -- You already demonstrated the solution with everything else in this engine which is non-destructive.

Not sure what you mean here. By essence, a destructive edit has no choice but to "freeze and store" all the data it modified in place, which means if that happens to field data it will start blowing up taken space (in particular if you layer tons and tons of fields). I guess field data would never be edited, but that might have undesired side effects depending on how it is used. Like, if you use fields for texturing, then a player wants to place voxels with a different texture, those voxels need to override somehow the whole process in which you make those textures shown. Which usually means taking chunks intersecting with the edit, and starting to save/load their values so they replace all procedural source.

Destructive edits override non-destructive edits. Fields would/will be used only by non-destructive elements of the engine (generators and maybe meshers).

So you mean fields never get destructively edited I guess?

I do like the big posts, because it creates a public, searchable record in a nice recognizable spot which can then be read by others who may have an interest in the issue, and further discussed. I actually am in the discord, but to me it is better to discuss in a forum format like this.

Maybe, but to me it's currently huge walls of debating text with very little synthesis for whoever wants to get a summary. That's why when posts start to become too long-winded with so much quoting I begin to think about voice chat because there is just too much going on.

Technically speaking, voxel_modifier_stack has no knowledge whatsoever of the scene tree

This is not exactly what I wanted to highlight. Indeed, VoxelModifierStack doesn't know about the scene tree, but the source of its configuration is nodes in the scene tree. That kind of approach only works as long as a VoxelModifierStack is tied to something in the scene tree, in this case, the terrain. You can't do that with Generators because they are resources, which can be shared between multiple terrains and can even exist without one (you can load a generator from script and request some chunks without ever adding anything to the scene tree).

could voxel_modifier_stack not simply be directly owned by VoxelGenerator?

As shown above, you can't do that, unless you enforce the breakage of resource rules. That is, you would decide that your generator must not be shared and must have a lifetime tied to the terrain node.

I ask because I am thinking about making this change in my local copy to improve modularity

It doesn't improve modularity, it introduces coupling, with the constraints that come with it.
What you want inherently ties a generator to a specific terrain node, so unless you make fields optional when generating a block, your project will have to make your generator expect that it is only used in one specific terrain with nodes matching the IDs in it.

@NoTalkOnlyProx (Author) commented Sep 19, 2024

So you mean fields never get destructively edited I guess?

Yes, exactly. This would be an intentional limitation, and it would be the thing that makes fields distinct from, say, weight paints.

Maybe one advantage of having access to evaluation would be to do it per vertex after the meshing process,

Precisely, precisely. This is what I have been trying to describe. And the ability to do this without those values ever being stored in the voxel is one of the key advantages of fields (given the right circumstances/use-case, at least)

Not sure what you mean here. By essence, a destructive edit has no choice but to "freeze and store" all the data it modified in place, which means if that happens to field data it will start blowing up taken space (in particular if you layer tons and tons of fields).

I think this is covered by my above two responses, but just to reiterate the point: Fields, by design, should not be destructively editable. If a destructive edit happens, the only thing that has to be saved is the final SDF there. By the time destructive edits happen, field values are no longer relevant at all (except to the mesher, for per-vertex sampling to be passed off directly to the shader).

I guess field data would never be edited, but that might have undesired side effects depending on how it is used.

Yes, precisely. Those undesired side-effects are very much an intentional tradeoff of this proposed feature. It is not a feature for everyone, that is very much true.

Like, if you use fields for texturing, then a player wants to place voxels with a different texture, those voxels need to override somehow the whole process in which you make those textures shown.

Yes, exactly! Fields are simply not a good solution for this use-case. Generally, in this situation, you will be better off using weight paints (and dealing with the associated tradeoffs there).

If, on the other hand, you do not intend to allow destructive editing, or if you are okay with field values not being destructively editable, then fields might be quite useful to you, as is the case for us.

As shown above, you can't do that, unless you enforce the breakage of resource rules. That is, you would decide that your generator must not be shared and must have a lifetime tied to the terrain node.

Ahh, I see, thank you. I did not understand until this last explanation, I apologize for my confusion. Thank you for taking the time to explain it to me, this will help me stick to a more maintainable implementation.

What you want inherently ties a generator to a specific terrain node,

You're right, I see now that this would be a problem. I guess we are back to needing a more consistently used generator context of some kind.

But for my personal purposes, it seems my best bet is just to modify the signature of the generate_ functions in VoxelGenerator to accept the modifier stack after all.

Maybe, but to me it's currently huge walls of debating text with very little synthesis for whoever wants to get a summary. That's why when posts start to become too long-winded with so much quoting I begin to think about voice chat because there is just too much going on.

I can definitely understand that perspective. Though I would point out that even if we take this to VC, anyone exterior to us will still not have a good summary. At least this way there is some record. And with discussions like these, engineering decisions can be understood, supposing for instance that this feature ever gets upstreamed.

It's one of my go-to tricks when dealing with open-source projects to track down the PR or issue associated with a given commit, and see what the engineers were discussing. Those PRs also give me a good place to contact those engineers if I wind up having questions pertinent to the additions.

But, there is another issue, just personal to me: I find it much harder to discuss these things in voice format. I appreciate the time that text-based formats give me to formulate my thoughts and try to make them concise and clear. So if you are willing to continue to discuss in this format, that would be very helpful to me.

But I understand that I am using up your time, here, so if you'd prefer otherwise, I am happy to let it drop. You have already been greatly helpful and set us on the right track for our downstream implementation.
