feat: frame animations with time encoding and timer param #8921
base: main
Conversation
Thanks. Can we see whether you can run the formatting action as well, so we know the formatting doesn't break?
i was able to run
Great. I was hoping we could get the GitHub action for checking formatting to work somehow.
export const CURR = '_curr';

const animationSignals = (selectionName: string, scaleName: string): Signal[] => {
  return [
Are these selection-specific signals, or global signals (i.e., one set of signals that are used by all selections)? I think it's the former based on some of the references within. In which case, I think they'll all need to be prefixed by the selection name. Otherwise, if you have multiple timer selections within the same unit spec, you're going to get a duplicate signal error.
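A minimal sketch of the suggested per-selection prefixing (the helper and signal names below are hypothetical, not the PR's actual implementation), so that two timer selections in the same unit spec would not emit duplicate signal names:

```typescript
// Hypothetical sketch: prefix otherwise-global animation signals with the selection name.
type SignalLike = {
  name: string;
  init?: string;
  update?: string;
  on?: {events: {type: string; throttle?: number}; update: string}[];
};

const prefixed = (selectionName: string, signal: string): string => `${selectionName}_${signal}`;

const animationSignalsPrefixed = (selectionName: string, scaleName: string): SignalLike[] => [
  {
    // one clock per timer selection instead of a single shared "anim_clock"
    name: prefixed(selectionName, 'anim_clock'),
    init: '0',
    on: [
      {
        events: {type: 'timer', throttle: 16.67},
        update: `${prefixed(selectionName, 'anim_clock')} + 16.67`
      }
    ]
  },
  {
    // the wrap-around extent still comes from this unit's time scale
    name: prefixed(selectionName, 'max_range_extent'),
    update: `extent(range('${scaleName}'))[1]`
  }
];
```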
i think deciding what's selection-specific vs global matters for two situations:

- multiple timer selections in the same unit spec: i don't think this should be allowed, because what does it mean to animate a single set of marks two different ways? what's the best way to disallow this?
- multiple timer selections across multiple unit specs in a multi-view: we ideally want the animation clock `ANIM_CLOCK` to be global, so we can share a common timer across multiple units. but the problem is that the animation clock needs to know the `MAX_RANGE_EXTENT` to know when to wrap back around to 0, and that extent is calculated from the scale for each unit. imo for now this is fine since we didn't consider multi-view in the scope of the paper, but we'll have to figure this out when we think about multi-view / scale resolution. the extent of the clock needs to be determined from the unioned scale or the max of the independent scale extents.
Both of these make sense to me. But can we add some error checking and logging (see `src/log`) to guard against the compiler throwing a really hairy error because of duplicate signals (either at the top level or nested within a unit)? E.g., during parsing, we can check if more than one timer selection is defined and ignore the extras with a warning message.
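A standalone sketch of that parse-time guard (the types, helper names, and message below are hypothetical and don't reflect vega-lite's actual `src/log` API): keep the first timer selection in a unit and drop any additional ones with a warning.

```typescript
// Hypothetical sketch: warn about and drop all but the first timer selection in a unit.
interface SelectionParam {
  name: string;
  select: {type: string; on?: string};
}

// assumed shape of a timer selection: a point selection driven by a timer event
const isTimerSelection = (param: SelectionParam): boolean =>
  param.select.type === 'point' && param.select.on === 'timer';

function dropExtraTimerSelections(
  params: SelectionParam[],
  warn: (message: string) => void
): SelectionParam[] {
  let timerSeen = false;
  return params.filter(param => {
    if (!isTimerSelection(param)) {
      return true;
    }
    if (timerSeen) {
      warn(`Ignoring timer selection "${param.name}": only one timer selection is supported per unit spec.`);
      return false;
    }
    timerSeen = true;
    return true;
  });
}
```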
Re: (2) a global `ANIM_CLOCK`: I agree that we didn't consider multi-view in the paper, but it feels a little strange to introduce this constraint in production since, from an end-user perspective, it might seem like an arbitrary limitation. Could you humor me and help me think through how complex this would be to implement? If the issue is primarily about how to resolve `MAX_RANGE_EXTENT` for wrapping back around to 0, are there more than these two cases we would need to handle?

- No global `ANIM_CLOCK`: each unit can have only one timer selection, and the units are independent and loop independently.
- A shared `ANIM_CLOCK`: all units would wrap around at the same time; that means you would essentially need to take the max over the constituent `MAX_RANGE_EXTENT` values and use that to determine the wrap-around. The effect would be that one timer would "pause" while the other(s) finish, and then they all wrap.

These two cases feel like they map well to VL's existing notion of `resolve` (which is used for scales/axes) and which we overload for selection predicates in terms of `union`/`intersect`. So I feel like there's conceptual machinery already in place that we might be able to piggyback off of?
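For concreteness, a sketch of what the shared-`ANIM_CLOCK` case could look like at the signal level (the signal shapes and names below are illustrative assumptions, not the PR's implementation): the top-level clock wraps at the max of the per-unit range extents, so units with shorter extents sit on their final frame until the longest one finishes.

```typescript
// Hypothetical sketch of a shared animation clock that wraps at the unioned extent.
type SignalLike = {
  name: string;
  init?: string;
  update?: string;
  on?: {events: {type: string; throttle?: number}; update: string}[];
};

const sharedClockSignals = (unitExtentSignalNames: string[]): SignalLike[] => [
  {
    // "union"-style resolution: max over the constituent per-unit MAX_RANGE_EXTENT signals
    name: 'max_range_extent',
    update: `max(${unitExtentSignalNames.join(', ')})`
  },
  {
    // one global clock shared by every unit, wrapping back to 0 at the unioned extent
    name: 'anim_clock',
    init: '0',
    on: [
      {
        events: {type: 'timer', throttle: 16.67},
        update: '(anim_clock + 16.67) % max_range_extent'
      }
    ]
  }
];
```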
option 2 sounds like what we want eventually. i looked back at what we did for the paper and we did implement `layer` and `vconcat` in the prototypes, but they were two separate implementations with not a lot of overlap. i also spent most of the day trying to get facet to work, and it almost works modulo the range resolution stuff, but i am not really sure how to do it. we knew when we were doing the paper that conceptually there's overlap, but i suspect there isn't overlap in the implementation (at least right now), since we're using signals and not stores and can't make use of `vlSelectionResolve` directly.

all in all, i think it'd be a pretty big lift to get multi-view working in this PR. when we set out to do this PR, we thought of it as the minimal set of features to get people making basic animations that would cover the most common use cases (we also aren't scoping in things like interpolation, which i would prioritize over multi-view). i'm hesitant to increase the scope at this point because of 1) how hard it is for me to build velocity in this codebase due to its complexity, and 2) the accessibility libraries being more of a personal priority.
Sorry for the delay, y'all! A great start. I've left line-level comments throughout and will do a more holistic review next. In the meantime, some higher-level thoughts:
- Thanks for including example specs in your PR OP. Could you include them as json specs under `examples` as well, please? That'll slurp them into our CI process (and, I think, make them easier to expose in the documentation).
- It might behoove us to also include some runtime tests, since they've saved our (namely, my) behind a number of times when the compile-time specs appeared to be correctly constructed. (I'm happy to walk through the runtime test infrastructure post-CHI since it's a little complicated.)
first round of comments has been addressed. pending todos:
Hi @jonathanzong and @joshpoll! 👋 Would you mind updating this PR so the latest changes from the main repository are also included in this branch? I'm not brave enough to do it myself 🙈, but I'm sensing that this is the reason why the new deployment preview is not yet triggering.
Just thinking about making a map with the trajectory of the upcoming eclipse 🥳
Such a cool idea! Please share if you create it; I would love to see an animated VL chart for this. Btw, @jonathanzong, are you waiting for a review on this branch or are you planning to add more commits? (I saw you were still adding more since you last requested a review.)
I rebased
We are indeed just waiting for a review
Yes, apologies, it's been on my docket for a while but I've been underwater with various other deadlines. My plan is to wrap this up this month 🤞
i have rebased, fixed checks, and verified that the animation works as expected in the local editor. edit: i'm going to wait to fix the codecov until after we get a review, because otherwise we'll just have to do it again
"mark": "point", | ||
"params": [ | ||
{ | ||
"name": "avl", |
Minor point, but it'd be great to name this in a more semantically meaningful way rather than (what I assume is) an abbreviation of "Animated Vega-Lite".
i have added an error explaining that facet, layer, and concat animations are unsupported when a user tries to create them (and unit tests for this error). how serious are we about codecov? if you look at what's left, it's codecov asking for:
can we just bypass this?
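For reference, a rough sketch of the kind of compile-time guard described above (the spec shape, function name, and error message are hypothetical; the PR's actual check and unit tests will differ):

```typescript
// Hypothetical sketch: reject timer-driven animations in composite (facet/layer/concat)
// specs with a clear error instead of a confusing downstream duplicate-signal failure.
type AnySpec = {
  facet?: unknown;
  layer?: unknown[];
  concat?: unknown[];
  hconcat?: unknown[];
  vconcat?: unknown[];
};

function assertNoCompositeAnimation(spec: AnySpec, hasTimerSelection: boolean): void {
  const isComposite =
    spec.facet !== undefined ||
    spec.layer !== undefined ||
    spec.concat !== undefined ||
    spec.hconcat !== undefined ||
    spec.vconcat !== undefined;
  if (isComposite && hasTimerSelection) {
    throw new Error('Animations (timer selections) are not yet supported in facet, layer, or concat specs.');
  }
}
```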
This change implements basic features of Animated Vega-Lite. With this change, users can create frame animations using a `time` encoding, a `timer` point selection, and a `filter` transform. This change does not include more complex features, e.g. interpolation, custom predicates, rescale, interactive sliders, or data-driven pausing.
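For orientation, a minimal hypothetical spec showing how these three pieces fit together (this is not one of the PR's example specs; the data URL, field names, and the exact `select` shape are assumptions):

```typescript
// Hypothetical animated scatterplot spec. The `time` encoding and `on: 'timer'` selection
// are the additions introduced by this PR, so they are not in the released vega-lite
// typings; the spec is written as a plain object rather than a typed TopLevelSpec.
const animatedScatterSpec = {
  data: {url: 'data/example.json'}, // assumed dataset
  mark: 'point',
  params: [
    // the timer point selection drives the animation clock
    {name: 'frame', select: {type: 'point', fields: ['year'], on: 'timer'}}
  ],
  encoding: {
    x: {field: 'xValue', type: 'quantitative'},
    y: {field: 'yValue', type: 'quantitative'},
    // the time encoding declares which field's values become animation frames
    time: {field: 'year'}
  },
  // the filter keeps only the rows belonging to the current animation frame
  transform: [{filter: {param: 'frame'}}]
};
```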
At the implementation level, this change adds:

- a `time` encoding channel
- an `isTimerSelection` function to check if a selection is an animation selection
- a `_curr` animation dataset for `timer` selections to store the current animation frame
- signals for the animation clock (`anim_clock`), the current animation value (`anim_value`), the current position in the animation field's domain (`t_index`), etc.
- when a `time` encoding is present, updating associated marks' `from.data` to use the animation dataset (the current frame)

Relevant issue: #4060
Coauthor: @joshpoll
Example specs
Hop example:
Gapminder: