
KHR_texture_transform #1015

Closed

Conversation

stevenvergenz

As a prerequisite to an upcoming baked lighting extension, I needed the ability to atlas textures. This is a good start I think. Any comments?

@lexaknyazev
Member

A couple of notes:

  • Are you sure this extension should be part of sampler rather than texture? Would such extended samplers be usable with different textures?
  • What are the valid ranges for the new properties? E.g., could they be negative? Could tileS/T be 0?
  • Are there any restrictions on the wrapping mode used (e.g., must it be REPEAT)?
  • Consider using number instead of float, to match JSON types.
  • Consider providing an example JSON.

@stevenvergenz
Author

@lexaknyazev

  1. You're right, offset/tile settings probably should depend on the source image. Updated to be part of textures.
  2. See updated schema and README.
  3. No restrictions on wrap mode, but I've added a caveat that mentions it in the README.
  4. Done
  5. Done

@@ -10,6 +10,9 @@ _Draft Khronos extensions are not ratified yet._
* [KHR_materials_common](Khronos/KHR_materials_common/README.md)
* [KHR_technique_webgl](Khronos/KHR_technique_webgl/README.md)

#### Vendor extensions
* [AVR_sampler_offset_tile](Vendor/AVR_sampler_offset_tile/README.md)
Member


Please update this line.


### Example JSON

This example utilizes only the top left quadrant of the source image.
Member


Given that offsetS is 0.5, it seems like the top right quadrant is used.
@stevenvergenz Could you check?

},
"tileS": {
"type": "number",
"description": "How many times the texture is repeated across the surface, in the S direction.",
Member


Since tile is [-1, 1], could the description be rephrased to not use "How many times"?
The text from the README ("size of region") seems more precise, imho.

@stevenvergenz
Author

Fixed those oversights, and reworded the descriptions in the schema.

@lexaknyazev
Member

Not a very strong opinion, but there could be benefits (as well as drawbacks) to moving this extension to the textureInfo object.


With the current design, the texture object needs to be duplicated for each atlas element. Implementations may need to go through all textures and group them by source.

{
    "materials": [
        {
            "emissiveTexture": {
                "index": 0
            }
        },
        {
            "emissiveTexture": {
                "index": 1
            }
        }
    ],
    "textures": [
        {
            "source": 0,
            "extensions": {
                "AVR_texture_offset_tile": {
                    "tileS": 0.5,
                    "tileT": 0.5
                }
            }
        },
        {
            "source": 0,
            "extensions": {
                "AVR_texture_offset_tile": {
                    "tileS": 0.5,
                    "tileT": 0.5,
                    "offsetS": 0.5
                }
            }
        }
    ]
}

However, this could be beneficial for glTF-to-glTF processing (e.g., individual textures could be converted to an atlas without touching materials). Also, this scheme allows storing only a texture atlas in a glTF asset, without anything else.
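The grouping step mentioned above can be sketched with a hypothetical helper (the function name is illustrative, not part of any loader):

```javascript
// Group texture indices by their `source` image so an engine can decode
// each atlas image once and share it between duplicated texture entries.
function groupTexturesBySource(textures) {
  const groups = new Map();
  textures.forEach((texture, index) => {
    if (!groups.has(texture.source)) groups.set(texture.source, []);
    groups.get(texture.source).push(index);
  });
  return groups;
}

// The two texture entries from the example above share source 0:
const groups = groupTexturesBySource([
  { source: 0, extensions: { AVR_texture_offset_tile: { tileS: 0.5, tileT: 0.5 } } },
  { source: 0, extensions: { AVR_texture_offset_tile: { tileS: 0.5, tileT: 0.5, offsetS: 0.5 } } }
]);
// groups.get(0) → [0, 1]: one image, two atlas regions
```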


Another approach would be to move the atlas information to textureInfo:

{
    "materials": [
        {
            "emissiveTexture": {
                "index": 0,
                "texCoord": 0,
                "extensions": {
                    "AVR_texture_offset_tile": {
                        "tileS": 0.5,
                        "tileT": 0.5
                    }
                }
            }
        },
        {
            "emissiveTexture": {
                "index": 0,
                "texCoord": 0,
                "extensions": {
                    "AVR_texture_offset_tile": {
                        "tileS": 0.5,
                        "tileT": 0.5,
                        "offsetS": 0.5,
                        "offsetT": 0.5
                    }
                }
            }
        }
    ],
    "textures": [
        {
            "source": 0
        }
    ]
}

Pros: more compact representation and possibly easier runtime processing.
Cons: Inability to store only texture atlas in an asset.

@stevenvergenz
Author

The way I see it, there are two primary use cases:

  1. A base color or metallic map texture is pulled from an atlas. In this case, it makes sense to have it on the textureInfo, as each material will probably have different settings here.
  2. A lightmap texture is pulled from an atlas. In this case, each individual node in the scene graph will probably have its own portion of the texture, so if the tile/offset settings are not on the node, something down the chain (texture, material) will need to be duplicated (for both transmission and consumption).

Since the lightmap implementation is at a disadvantage either way, I think I'll move it to textureInfo, and handle the odd case in that extension's semantics.

@donmccurdy
Contributor

donmccurdy commented Jun 24, 2017

Pros: more compact representation and possibly easier runtime processing.
Cons: Inability to store only texture atlas in an asset.

Reading through mrdoob/three.js#11577, my feeling is that the textureInfo approach may actually make runtime processing harder — one entry in the textures array previously mapped to a single Texture object, whereas now the data required for texture creation is split between two places (flipY in textures; scale and offset in materials). If the asset only has a single texture with tiling specified, we'll end up cloning that texture needlessly while parsing materials, because it might be reused by a later material with different tiling.

Despite the possible need for grouping by source, I think it's simpler to implement this extension, and to work out any necessary optimizations, with all properties in textures. IMO the compactness difference is a less critical issue.

@lexaknyazev
Member

If the asset only has a single texture with tiling specified, we'll end up cloning that texture needlessly while parsing materials because it might be reused by a later material with different tiling.

Why not use uniforms for handling tile and offset?

@donmccurdy
Contributor

Oh no, not copying the image data (we do use uniforms for tile and offset), but the THREE.Texture object that wraps it must be cloned to have different tile/offset values. So it's not particularly expensive, but it complicates implementation a bit.
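A minimal sketch of that point, with a plain object standing in for THREE.Texture (the wrapper shape here is illustrative, not three.js's actual class):

```javascript
// The decoded image is shared; only the lightweight wrapper that carries
// per-material tile/offset state has to be cloned.
function cloneWithTransform(texture, offset, repeat) {
  return { image: texture.image, offset, repeat };
}

const base = { image: 'atlas.png', offset: [0, 0], repeat: [1, 1] };
const left = cloneWithTransform(base, [0, 0], [0.5, 0.5]);
const right = cloneWithTransform(base, [0.5, 0], [0.5, 0.5]);
// left.image === right.image: one image upload, two cheap wrappers
```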

@emackey
Member

emackey commented Nov 3, 2017

Any updates here? Seems like this got a bit of an uptick in interest. Also, I notice that one of your sample usages provided in the docs here is the equivalent of a FLIP_Y flag, for people who wish that glTF's vertical texture coordinates went the other way.

"AVR_texture_offset_tile": {
    "offsetT": 1,
    "tileT": -1
}
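That flip falls out of the remap t' = offsetT + tileT * t; a quick sketch with an illustrative helper name:

```javascript
// Remap one texture coordinate: t' = offset + tile * t.
function applyOffsetTile(t, offset, tile) {
  return offset + tile * t;
}

// offsetT = 1 with tileT = -1 mirrors the T axis, i.e. a FLIP_Y equivalent:
const flipped = [0, 0.25, 1].map((v) => applyOffsetTile(v, 1, -1));
// flipped → [1, 0.75, 0]
```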

@stevenvergenz
Author

Are there any desired changes? I considered this extension complete. Though looking back, it's probably worthwhile to remove the [-1,1] restriction on the fields. There are use cases for having tile values greater than one that I hadn't considered when initially authoring this.

Plus, now that AltspaceVR has been acquired by Microsoft, I probably need to change the vendor extension.

@bghgary
Contributor

bghgary commented Nov 3, 2017

Are there any desired changes?

The link to the UnityGLTF can probably be updated to the Khronos one.

change the vendor extension

Does it make sense for this to be an EXT_?

@sonygod

sonygod commented Nov 6, 2017

Any updates here? I'm waiting for this feature in our project.
@stevenvergenz

@donmccurdy
Contributor

By "one" texture transform, I think that means the same transform must be equally applied to base, occlusion, normal, and emissive maps, with a different (or non-existent) transform able to be applied only to the occlusion map, correct?

I'm not sure what happens with an occlusion map actually, but otherwise that's correct.

If the user was diligent and took the time to correctly copy-and-paste the same transform to each of the textures in a material, would ThreeJS "diff" the offsets and understand that the same transform was applied?

three.js will take the first transform it finds, if any, and apply it to all textures (unsure about occlusion maps).

In theory, given current limitations, the idealized correct thing to do is for ThreeJS to throw a warning/error if there is any difference between requested transforms for base, MR, Normal, Emit, or indeed if a transform is specified on any of those channels but not exactly repeated identically on each of those other channels.... right?

Agreed.

It seems like for a client to understand how many "truly different" UV maps are in use, the client must compare different transform extensions within a single material, and check for equality. Equality would indicate that a single set of transformed UVs could be applied to multiple images in a material, whereas inequality (or presence-vs-absence difference) would indicate the need for different final sets of UVs.

Not sure I follow... I think you're describing how a client that doesn't support texture transforms but does support multiple UVs could bake the transform to UVs on the fly, if needed?
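That on-the-fly baking could look roughly like this (a sketch only; offset and scale use the two-element array form shown in this thread's KHR_texture_transform samples, and rotation is omitted for brevity):

```javascript
// Bake a texture transform into an interleaved [u0, v0, u1, v1, ...] UV
// attribute, producing the UV set a non-supporting client would need.
function bakeTransform(uvs, { offset = [0, 0], scale = [1, 1] }) {
  const out = new Float32Array(uvs.length);
  for (let i = 0; i < uvs.length; i += 2) {
    out[i] = offset[0] + scale[0] * uvs[i];
    out[i + 1] = offset[1] + scale[1] * uvs[i + 1];
  }
  return out;
}

const baked = bakeTransform(new Float32Array([0, 0, 1, 1]), {
  offset: [0, 0.1],
  scale: [0.2, 0.2]
});
// baked ≈ [0, 0.1, 0.2, 0.3]
```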

@donmccurdy
Contributor

^I'm not sure what change(s), if any, would follow from this. Certainly there is no need to constrain this extension to three.js's current limits. I'm mostly trying to understand our fallback recommendation, and whether the texCoord index override makes sense and will be used.

Generally my sentiment is that extensions meant for optimization should never be included by model catalogs like Sketchfab or Turbosquid, but added by later tools, in which case making all extensions optional is less important. I think the Draco extension is a clear example of that. This one I'm not sure about... the "texture atlas" use case we describe is an optimization, but texture animation is not. I'm hoping this extension will be widely implemented enough that it's a moot point.

@emackey
Member

emackey commented May 15, 2018

I'm not opposed to anything here, just trying to think this one through. It strikes me as a bit odd that the seemingly common case of applying the same transform to multiple images can only be done by repeating the same transform information per image, and can only be detected by comparing such information. But I don't have a better idea how to structure the extension without limiting its flexibility.

@donmccurdy
Contributor

Yes, I suppose this also means that (in some future version where texture transforms may be animated) the animation must target each texture with a separate channel? It's verbose but does seem like the more flexible choice.

@bghgary
Contributor

bghgary commented May 15, 2018

in which case making all extensions optional is less important

If an optimization happens and stays within a closed ecosystem, then it's okay. If an asset ever leaves that closed ecosystem intentionally, the asset should not have any required extensions.

Draco extension is a clear example of that

If someone serves an asset with Draco compression from a server intending for anyone to consume (i.e. not a closed ecosystem), then I would expect Draco to be optional. I would expect the same thing for this texture transform extension. These optimizations will offer better performance for clients that support the extension while keeping compatibility with clients that don't support the optimizations. If we don't do this, we risk fragmenting the ecosystem.

In the future, when we introduce a new version (probably 3.0 because of forward compatibility requirements for 2.x) where Draco and this extension is part of the spec, these kind of problems will go away.

@donmccurdy
Contributor

If someone serves an asset with Draco compression from a server intending for anyone to consume (i.e. not a closed ecosystem), then I would expect Draco to be optional. I would expect the same thing for this texture transform extension...

This makes sense, and I'm glad that we're ensuring optional extensions are possible. But, we're making a broad recommendation here that (as written) seems to imply all exporters should always include both. I don't expect exporters to do that, nor am I sure that they should. Things get complicated if an API wants to ensure an asset includes separate .bin files to prevent downloading unnecessary data:

  • draco buffer
  • uncompressed buffer
  • shared buffer
  • pre-transform UVs
  • post-transform UVs
  • pre/post transform UVs in draco/uncompressed variants?

Broadly I don't think there is any single recommendation we can make for all exporters. At some point I would like to write more detailed best practices for extensions in general, based on the functional role of the extension and the sort of ecosystem in which the model will appear.

I'm fine with moving forward with this extension as-is and potentially rephrasing this implementation note at a future date.


The points @emackey brings up are interesting; I imagine the first exporter implementation will run into some of that. So far it does still appear that this approach is the right one.

@bghgary
Contributor

bghgary commented May 15, 2018

... Broadly I don't think there is any single recommendation we can make for all exporters. ...

I basically agree with this. +1 on rephrasing the implementation note.

Things get complicated if an API wants to ensure an asset includes separate .bin files to prevent downloading unnecessary data

I think exporters will need to decide which optimization extensions to use. I don't think there will be a need for all possible permutations. For example, there can be one path for the optimized version (i.e. both Draco and texture transform together), all other cases can use core only. But I'm not sure this works out. Can we ensure that an asset is authored such that a client implementation will only load the optimized path if both Draco and texture transform are supported or else fall back to core?


## glTF Schema Updates

The `KHR_texture_transform` extension may be defined on `textureInfo` structures. It may contain the following properties:
Member


Does

textureInfo structures

cover inherited structures as well (i.e., occlusionTextureInfo and normalTextureInfo)?

Contributor


I would say yes since both occlusionTextureInfo and normalTextureInfo inherit from textureInfo.

Member


OK. It would be nice to have sample models showing those cases (I've run into this internally).

@donmccurdy
Contributor

For example, there can be one path for the optimized version (i.e. both Draco and texture transform together), all other cases can use core only. But I'm not sure this works out. Can we ensure that an asset is authored such that a client implementation will only load the optimized path if both Draco and texture transform are supported or else fall back to core?

I don't think that's feasible, without resorting to requirements specific to these two extensions. Let's assume both the compressed and uncompressed mesh would need to include transformed and untransformed UVs, as the client makes a decision on each extension separately. An exporter might split that as follows:

  • buffer1: compressed geometry, including UVs
  • buffer2: uncompressed geometry, excluding UVs
  • buffer3: uncompressed, transformed UVs
  • buffer4: uncompressed, untransformed UVs
  • buffer5: common data (e.g. textures, animation)

That's OK, and shouldn't be especially complex for the client to load efficiently. But whether that complexity is worthwhile probably depends on the situation:

  • For APIs serving glTF assets for immediate viewing in arbitrary engines, this approach seems appropriate. An alternative would be a dynamic serving mechanism like foo.gltf?draco=true&textureTransform=false, where the .gltf file is generated on the fly but buffers are stored statically.
  • For a downloadable asset catalog like Sketchfab's website, prefer avoiding optimization extensions entirely and using .glb.
  • For developer/artist workflows, where assets are being created in a modeling tool and exported for a particular experience, ... I'm not sure. I suspect the developer will want control in the application layer and not in their library's glTF loader, so they'd create different versions of assets targeted to desktop, mobile, etc. They'll know whether the engine supports Draco or texture transforms, and would enable/disable those optimizations themselves.

@pailhead

pailhead commented Jun 6, 2018

For example, if a user applies a transform to the base color, but does not specify one on any other channel, ThreeJS seems likely to end up using that transform for all of the textures (sans occlusion) in that same material, right?

Three.js is somewhat confusing when it comes to this. Depending on what combination of textures is in your material, it will randomly apply the offset to all the maps while sampling from only one.
mrdoob/three.js#12788

I think the relevant code is here:
https://github.com/mrdoob/three.js/blob/dev/src/renderers/WebGLRenderer.js#L1991-L2061 It's a priority list; whichever one is encountered first applies its transform to the others. I cannot think of a valid use case for this at all.

@pailhead

pailhead commented Jun 6, 2018

^I'm not sure what change(s), if any, would follow from this. Certainly there is no need to constrain this extension to three.js's current limits.

I agree with this. I had no idea that three.js is such a big authority on a format like glTF.

From which point should I try to observe this situation?

Three.js on its own may not be aware of a file format like glTF. The core of the library does not know how to handle this format at all. There is an example, which to me reads as:

if you like this rendering library, you can write various loaders for it (including glTF). Look at examples for an example of how to do this.

So someone sat down, used the core of the library and wrote an example for loading the glTF format specifically. Now the issue is, the core itself may not be built to specifically handle this proposed extension. This shouldn't matter, since the core was never built to specifically handle glTF either.

Three.js users can use the library out of the box, but can also infinitely extend it. The complicated way would be to fork it and implement your features, but it does have various APIs that allow one to extend the core features.

I've made a quick demo to test this claim:

It uses a single texture object, with a single image source, for two different map slots: albedo and specular. The gui transforms the specular map while keeping the albedo intact. Both maps use the uv0 channel.
http://dusanbosnjak.com/test/webGL/three-material-includes/

This one applies it to most of the maps found in three.js's rough/metal PBR material:
http://dusanbosnjak.com/test/webGL/three-materials-extended/webgl_materials_extended_multiple_uvs_props.html

I'm not sure if it tackles this specific issue, as I've somewhat lost track of the comments, but this is what it's supposed to do.

Outside the demo

  1. Three's built-in PBR material only supports roughness/metalness.
  2. Three's built-in materials use texture.transform for transformations; according to the list posted above, it is impossible to use two different transforms for two different textures/channels.

In the demo

  1. Three's built-in PBR can now use specular/gloss (while keeping everything else).
  2. Each material.someMap gets a set of transform properties: material.someMapOffset, material.someMapRepeat.

Would this be a valid way to approach this? Since GLTFLoader is not in the core of three.js, what requirements does it have for any of this stuff to also be in the core? I.e., how much should clients care about the specification of some 3D format?

This one uses the same extension applied to the GLTFLoader, i.e. this version of the loader has none of the limitations mentioned in this thread, but it depends on another third-party example, not the core:
http://dusanbosnjak.com/test/webGL/three-material-includes/webgl_loader_gltf_extensions.html

@pailhead
Copy link

pailhead commented Jun 6, 2018

buffer1: compressed geometry, including UVs
buffer2: uncompressed geometry, excluding UVs
buffer3: uncompressed, transformed UVs
buffer4: uncompressed, untransformed UVs
buffer5: common data (e.g. textures, animation)

Could the loaders transform this without sending the duplicated UVs over the wire (or even storing them)? There could still be multiple vertex buffers for each of the duplicated UVs, but at least they would be transferred as a single one. Although this seems like an optimization the client should make: should it be baked and consume more attributes, or should it be computed on the fly (in either the .fs or the .vs)? Writing and transferring these seems redundant.

I always thought of channels as something rather specific, holding a 2D version of the mesh, but not necessarily even having the same topology; i.e., UV0 could have N vertices, UV1 could have M vertices, while the mesh itself could have X vertices. This kind of stuff is important to store in its own channel. Whether some linear transformation is applied to it, I think, shouldn't matter.

@pailhead

pailhead commented Jun 7, 2018


I've opened up an ancient version of 3ds max to compare this, and they seem to couple this with the texture object. I'm mostly curious about how this extension would affect three.js.

This screenshot corresponds to how three.js is already structured: the texture has what's called Offset, Tiling, and Angle. It lacks the mapping, I think; or it might actually know about "explicit map channel" and something else (like cube mapping), but it doesn't know about the UV channel.

The source though is still the same bitmap, which could be a texture atlas. Animation though seems like it would be a completely different use case.

[screenshot: standard shader, new empty material]

Unity, though, looks like it has two (or more) channels with individual tiling controls on the material?

@pailhead

pailhead commented Jun 7, 2018

I have updated this demo, to include one of the gltf models from the three.js repo:


Exposed with the dat.gui are some transform parameters for the specularMap, but how it's done is just one implementation. I think it would make sense for this to be obtained from some texture object, that could share some source bitmaps.

@silvainSayduck

Hi everyone,

This is not really a direct contribution to the discussion, but rather a general question about how using this kind of extension could be made optional. I mean, if we have a glTF using this extension:

  ...
  "emissiveTexture" : {
    "index" : 2,
    "extensions": {
      "KHR_texture_transform": {
        "offset": [0, 0.1],
        "scale": [0.2,0.2]
      }
    }
  }
  ...

How can a client without support for it display anything remotely usable? If it's going to use a texture atlas without knowledge of the extra parameters in extensions, the rendering will be completely wrong, won't it?
What I mean is: shouldn't the default index point to a non-atlas texture, with another index added within the extensions object pointing to the atlased texture? The default texture could be ignored (and not downloaded) by a client supporting the extension, while other clients still provide an acceptable result using the default index.
This reasoning could be applied to many extensions, so I'm wondering if I'm just missing something here. Or maybe it just means that we will always put this kind of extension in extensionsRequired (which will in turn prevent them from being widely used, as they'd result in limited glTFs).

Thanks a lot in advance for the clarification.

@pailhead

pailhead commented Jun 20, 2018

I could reason about this better if it weren't an extension. I fear that what you're describing can be mitigated by a network of fallback options, but I don't like those solutions.

I went to a recent meetup on glTF extensions, but I missed the introduction and thus a lot of context. Looking forward to clarification.

@najadojo

@pailhead, is Meetup: Get your glTF on with WebGL/WebVR at Microsoft! the one you are referring to?

@najadojo

@silvainSayduck There is a note about how fallback should apply with this extension using the texCoord field:

Implementation Note: For maximum compatibility, it is recommended that exporters generate UV coordinate sets both with and without transforms applied, use the post-transform set in the texture texCoord field, then the pre-transform set with this extension. This way, if the extension is not supported by the consuming engine, the model still renders correctly. Including both will increase the size of the model, so if including the fallback UV set is too burdensome, either add this extension to extensionsRequired or use the same texCoord value in both places.

The texture would always be an atlas but one portion of it would be applicable for clients that don't implement the extension.
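Sketched as JSON (the texCoord indices are illustrative): TEXCOORD_1 holds the pre-baked, post-transform UVs that a non-supporting client will use, while the extension's own texCoord points a supporting client back at the untransformed TEXCOORD_0 set:

```json
{
  "emissiveTexture": {
    "index": 2,
    "texCoord": 1,
    "extensions": {
      "KHR_texture_transform": {
        "offset": [0, 0.1],
        "scale": [0.2, 0.2],
        "texCoord": 0
      }
    }
  }
}
```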

@pailhead

pailhead commented Jun 20, 2018

@najadojo

Correct. The example that stuck in my head was the LOD presentation. I didn't see what benefits the extension brings over what's already available in the format, i.e. why this type of management can't be achieved with a naming convention like myMesh_master + myMesh_lod. The argument over just storing multiple glTF files was, I believe, that it saves on the redundancy of duplicating the scene graph, but it left me wondering how much saving is achieved over, for example, adding an extra node called variousLods with just a flat list of corresponding LOD nodes.

I think the Draco one discussed fallback options, so I have a bit of bias in this topic when I try to understand something like this:

The texture would always be an atlas but one portion of it would be applicable for clients that don't implement the extension.

I imagine an atlas that includes itself and a fallback texture, i.e. the client without the extension would just read (half of?) the texture, which actually doesn't make sense, since it would still require some kind of scale and offset. How does this sentence apply to the layout of a bitmap (that is to be used for this example)?

Why would two models, one using textures and another an atlas, even be considered the same? This seems like something a server-side API would do: based on client capabilities, figure out what data the client needs. I understand this would now happen in the glTF header: it notifies the client of the presence of an additional system. I don't understand what happens next, conceptually.
Does this mean that a glTF loader needs to know how to handle the extensions, or that the client needs to know how to handle the extension with glTF loaders?

In this case, I think it would be up to an implementation to store gltf and gltfAtlased on some server and serve them appropriately through a generic gltfLoader, vs. storing gltfOverloaded and having the client's gltfSuperLoader manage that.

And then again, if it has fallback options, could it be a naming convention for storing in the scene graph, nodesPointingToGeometriesAtlased -> myNode_0_atlased, myNode_1_atlased..., and let the core wire up the appropriate attributes? If I'm literally describing what extensions are, I hope it made you chuckle at least :) Does this take the responsibility out of the tech artist's hands, where they don't have to figure out a convention, but instead have an engineer who can refer to a spec?

I think I have a hard time understanding how extensions should fit the entire ecosystem; point me in the right direction, as this seems to be a bit of a digression from this topic.

@silvainSayduck

silvainSayduck commented Jun 20, 2018

I think I have a hard time understanding how extensions should fit the entire ecosystem; point me in the right direction, as this seems to be a bit of a digression from this topic.

I agree 100% with this... :) If someone knows a more appropriate place for the general discussion around this, that would be awesome!

The texture would always be an atlas but one portion of it would be applicable for clients that don't implement the extension.

Thanks for the clarification, guys! However, we are actually planning to use this extension for tiling textures: basically, using a "low-resolution" texture that we tile across the whole mesh for high visual quality with a low texture file size. And obviously in that case, unless I'm mistaken, including both a "non-tiled" texture and the tile as an atlas would defeat the original purpose. So I guess we'd have to go the extensionsRequired way...

@sbtron
Contributor

sbtron commented Jun 21, 2018

Some of the questions raised here are also discussed in #1256.
Let's follow the discussion there for general extension best practices, so that this PR can stay focused on the KHR_texture_transform extension.

@pjcozzi
Member

pjcozzi commented Jul 3, 2018

Is this ready to merge?

I think just update https://github.com/KhronosGroup/glTF/blob/master/extensions/README.md so that KHR_texture_transform is under the Khronos extensions section.

@donmccurdy
Contributor

Merged via #1372 — opened a new PR to rebase.
