
Should an animation channel be able to target multiple nodes at once? #1520

Closed
ziriax opened this issue Dec 21, 2018 · 5 comments

ziriax commented Dec 21, 2018

With our current characters that have a lot of nodes (Maya joints, groups, locators), we often end up with glTF JSON files of about 200 thousand lines. This excludes the nodes that never animate relative to the base scene; otherwise we get over a whopping 700 thousand lines.

Although we can optimize this a bit by deleting some redundant nodes, the biggest reason why our files are so huge is because in the current spec, each node requires separate output sampler accessors.

In our case, animation is almost always sampled frame by frame, and all output samplers are arrays of the same length, so they could be grouped into a 2D array, requiring only a single accessor per translation, rotation, and scale. But then the animation channel would need to be able to target multiple nodes...

So for example, instead of having:

        {
          "sampler": 98,
          "target": {
            "node": 107,
            "path": "rotation"
          }
        },
        {
          "sampler": 99,
          "target": {
            "node": 109,
            "path": "rotation"
          }
        },
        {
          "sampler": 100,
          "target": {
            "node": 111,
            "path": "rotation"
          }
        },

one would have

        {
          "sampler": 666,
          "target": {
            "nodes": [107, 109, 111],
            "path": "rotation"
          }
        },

In our case, this would result in a very long "nodes" text line (say K nodes), but the overall JSON would be a lot smaller, since only 3 accessors are needed for all translation, rotation, and scaling, instead of 3*K. Most likely the animation processing itself would be more efficient too, since far fewer accessors need to be handled.

I realize that allowing this scenario would require the keys to be N-dimensional for translation, rotation and scaling, but since weight keys already have this, it doesn't really feel like a big deal, and besides an extra for loop, it wouldn't complicate existing code that much...
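The shape of the proposal can be sketched in a few lines of Python. This is only a toy illustration: the `"nodes"` key is the hypothetical extension proposed above, not part of the glTF 2.0 spec, and the sampler/node indices are made-up values.

```python
def per_node_channels(nodes, first_sampler):
    """Current spec: one channel and one output accessor per node."""
    return [
        {"sampler": first_sampler + i,
         "target": {"node": n, "path": "rotation"}}
        for i, n in enumerate(nodes)
    ]

def multi_node_channel(nodes, sampler):
    """Proposed: one channel targeting all N nodes at once; the shared
    sampler output would then hold N * K quaternions."""
    return [{"sampler": sampler,
             "target": {"nodes": nodes, "path": "rotation"}}]

nodes = [107, 109, 111]
print(len(per_node_channels(nodes, 98)))    # 3 channels, 3 output accessors
print(len(multi_node_channel(nodes, 666)))  # 1 channel, 1 output accessor
```

For K animated nodes this collapses K channels and K output accessors into one of each per animated path.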

@donmccurdy

...the biggest reason why our files are so huge is because in the current spec, each node requires separate output sampler accessors.

To clarify — each of N nodes has the same K keyframes, the shared sampler.input would have K values, and the shared sampler.output would have N * K values?
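As a toy check of that sizing (N and K are assumed example values; rotation outputs are VEC4 float32 quaternions per the glTF accessor types):

```python
N, K = 3, 120                        # nodes, keyframes (example values)
input_count = K                      # one SCALAR time value per keyframe
output_count = N * K                 # one quaternion per node per keyframe
output_bytes = output_count * 4 * 4  # 4 float32 components per quaternion
print(input_count, output_count, output_bytes)  # 120 360 5760
```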

On a similar note, I think it's a bit limiting that one morph target cannot be animated separately from the other targets on the same mesh.

@ziriax

ziriax commented Dec 21, 2018

Yes, exactly: the sampler output will have N×K components, unless some keyframe reducer has been applied, of course.

IMHO it depends on what one wants to do. For carefully crafted game models, where you want fine-grained control over animation, glTF could indeed allow animating a subset of the morph target weights (for example, allowing lip sync while showing emotions in a face). We achieve this using typical additive animation, where weighted differences from the setup pose are added together, so we don't need this fine-grained control.

However, if one wants to use glTF to represent a complex animated CGI rig full of constraints (dynamic switching between forward and inverse kinematics, multiple parent constraints, orientation constraints, expressions, etc.), this cannot be represented in glTF (after all, it's not aiming to be USD). So sampling every frame is needed, and the fine-grained animation channels introduce the large overhead I described.

I understand it is impossible to make a single file format that pleases everyone, JPEG doesn't have an alpha channel either :-)

@donmccurdy

@ziriax would you be able to take a look at #1301 (proposed EXT_property_animation) and suggest changes you think would fit your requirements? If a change like this were to be applied, I think that extension is a good place to put it initially.

@ziriax

ziriax commented Jan 4, 2019

The proposal looks very nice!

But I think this issue can be closed. You see, as soon as one starts to fit curves to the sampled data, one loses the N×K structure anyway.

Off topic, but still related: what I would like to see, however, is an animation fitting/compression technique, like the one used in Thief, e.g. based on wavelets. That would be another extension (and certainly another issue 😉)

@aaronfranke

is because in the current spec, each node requires separate output sampler accessors.

Where are you seeing that? I don't see anything in the animation specification that requires samplers to be unique. In fact, judging from the document structure, it seems that samplers were meant to be reusable.

What prevents you from de-duplicating identical samplers and using them in multiple animation channels?

        {
          "sampler": 98,
          "target": {
            "node": 107,
            "path": "rotation"
          }
        },
        {
          "sampler": 98,
          "target": {
            "node": 109,
            "path": "rotation"
          }
        },
        {
          "sampler": 98,
          "target": {
            "node": 111,
            "path": "rotation"
          }
        },
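The de-duplication suggested above could be done mechanically in an exporter. A sketch, assuming plain-dict glTF 2.0 animation JSON with toy index values; note it only helps when the samplers' input, output, and interpolation really are identical:

```python
import json

def dedupe_samplers(animation):
    """Collapse identical samplers in one animation and remap its channels."""
    seen = {}    # canonical sampler JSON -> new index
    remap = {}   # old sampler index -> new index
    unique = []
    for i, s in enumerate(animation["samplers"]):
        key = json.dumps(s, sort_keys=True)
        if key not in seen:
            seen[key] = len(unique)
            unique.append(s)
        remap[i] = seen[key]
    animation["samplers"] = unique
    for ch in animation["channels"]:
        ch["sampler"] = remap[ch["sampler"]]
    return animation

anim = {
    "samplers": [
        {"input": 0, "output": 1, "interpolation": "LINEAR"},
        {"input": 0, "output": 1, "interpolation": "LINEAR"},
        {"input": 0, "output": 1, "interpolation": "LINEAR"},
    ],
    "channels": [
        {"sampler": 0, "target": {"node": 107, "path": "rotation"}},
        {"sampler": 1, "target": {"node": 109, "path": "rotation"}},
        {"sampler": 2, "target": {"node": 111, "path": "rotation"}},
    ],
}
dedupe_samplers(anim)
print(len(anim["samplers"]))  # 1
```

This addresses the case where several nodes share the exact same curve; it does not help ziriax's original case, where each node has its own N×K output data.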

@emackey closed this as not planned Aug 2, 2024