Merge pull request #11 from cBournhonesque/cb/fix-interpolation
Fix the bug with the interpolation start
cBournhonesque authored Dec 6, 2023
2 parents f31eb65 + 6db2aa8 commit 57efbfb
Showing 7 changed files with 84 additions and 16 deletions.
17 changes: 10 additions & 7 deletions NOTES.md
@@ -17,7 +17,7 @@ PROBLEMS/BUGS:
- also it seems like our frames are smaller than our fixed update. This could cause issues?
- CONFIRMATION:
- the problem is indeed that we receive a ping for tick 20, but the world state for tick 20 is only received after
so the rollback thinks there is a problem
so the rollback thinks there is a problem (mismatch at tick 20 + rolls back to a faulty state at tick 20)
- it was more prevalent with send_interval = 0 because the frame_duration was lower than tick_duration. So sometimes
we would have: frame1/tick20: send updates. frame2/tick20: send ping. And the ping would arrive before.
- BASICALLY, when we receive a packet for tick 10, we think the whole world is replicated at tick 10, but that's not the case.
@@ -39,18 +39,21 @@ PROBLEMS/BUGS:
- it's probably because the spawn message (from joining a room) arrives after the despawn message



- interpolation has some lag at the beginning; it looks like the entity isn't moving. Probably because we only got an end but no start?
- is it because the start history got deleted? or we should interpolate from current to end?
-
- the problem is that we get regular updates roughly every send_interval while the entity is moving. But when it's not moving, the delay between start and end becomes bigger.
- when we have start = X, end = None, we should keep pushing start forward at roughly send-interval rate? (see the sketch below)

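An editorial sketch of that "keep pushing start forward" idea (hypothetical helper names, not part of this commit; the actual fix lives in interpolate.rs below):

fn advance_interpolation_start(start_tick: &mut u16, current_tick: u16, send_interval_ticks: u16) {
    // if no newer snapshot has arrived and the start snapshot is older than roughly one send interval,
    // slide the start tick forward so the next received update is interpolated over a short window
    if current_tick.wrapping_sub(*start_tick) > send_interval_ticks {
        *start_tick = current_tick.wrapping_sub(send_interval_ticks);
    }
}

fn main() {
    let mut start_tick: u16 = 100;
    advance_interpolation_start(&mut start_tick, 110, 2);
    assert_eq!(start_tick, 108);
}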
- interpolation
- how come the interpolation_tick is not behind the latest_server_tick, even after setting the interpolation_delay to 50ms?
(server update is 80ms)
normally it should be fine because we already make sure that interpolation time is behind the latest_server_tick...
need to look into that.

- interpolation is very unsmooth when the server send interval is small.
- SOLVED: That's because we used interpolation delay = ratio, and the send_interval was 0.0
- we need a setting that is a ratio with a min-delay

- prediction/interpolation are sometimes not spawning anymore. This must be an ordering issue
- maybe because the Predicted/Interpolated component are only added if client/entity are in the same room
- or because we only run entity_spawn for the given NetworkTarget, but those are


ADD TESTS FOR TRICKY SCENARIOS:
- replication at the beginning while RTT is 0?
10 changes: 6 additions & 4 deletions examples/interest_management/client.rs
@@ -29,7 +29,7 @@ impl Plugin for MyClientPlugin {
let client_addr = SocketAddr::new(Ipv4Addr::LOCALHOST.into(), self.client_port);
let link_conditioner = LinkConditionerConfig {
incoming_latency: Duration::from_millis(100),
incoming_jitter: Duration::from_millis(20),
incoming_jitter: Duration::from_millis(5),
incoming_loss: 0.00,
};
let transport = match self.transport {
@@ -51,7 +51,9 @@ impl Plugin for MyClientPlugin {
prediction: PredictionConfig::default(),
// we are sending updates every frame (60fps), let's add a delay of 6 network-ticks
interpolation: InterpolationConfig::default().with_delay(
InterpolationDelay::default().with_min_delay(Duration::from_millis(150)),
InterpolationDelay::default()
.with_min_delay(Duration::from_millis(50))
.with_send_interval_ratio(2.0),
),
// .with_delay(InterpolationDelay::Ratio(2.0)),
};
@@ -178,11 +180,11 @@ pub(crate) fn log(
) {
let server_tick = client.latest_received_server_tick();
for confirmed_pos in confirmed.iter() {
info!(?server_tick, "Confirmed position: {:?}", confirmed_pos);
debug!(?server_tick, "Confirmed position: {:?}", confirmed_pos);
}
let client_tick = client.tick();
for predicted_pos in predicted.iter() {
info!(?client_tick, "Predicted position: {:?}", predicted_pos);
debug!(?client_tick, "Predicted position: {:?}", predicted_pos);
}
for event in interp_event.read() {
info!("Interpolated event: {:?}", event.entity());
2 changes: 1 addition & 1 deletion examples/interest_management/server.rs
@@ -29,7 +29,7 @@ impl Plugin for MyServerPlugin {
.with_key(KEY);
let link_conditioner = LinkConditionerConfig {
incoming_latency: Duration::from_millis(100),
incoming_jitter: Duration::from_millis(20),
incoming_jitter: Duration::from_millis(5),
incoming_loss: 0.00,
};
let transport = match self.transport {
4 changes: 2 additions & 2 deletions examples/interest_management/shared.rs
@@ -12,14 +12,14 @@ pub fn shared_config() -> SharedConfig {
enable_replication: true,
client_send_interval: Duration::default(),
// server_send_interval: Duration::default(),
server_send_interval: Duration::from_millis(80),
server_send_interval: Duration::from_millis(30),
tick: TickConfig {
// right now, we NEED the tick_duration to be smaller than the frame_duration
// (otherwise we can send multiple packets for the same tick at different frames)
tick_duration: Duration::from_secs_f64(1.0 / 64.0),
},
log: LogConfig {
level: Level::WARN,
level: Level::INFO,
filter: "wgpu=error,wgpu_hal=error,naga=warn,bevy_app=info,bevy_render=warn,quinn=warn"
.to_string(),
},
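A quick check of that tick/frame constraint (assuming the examples render at 60 fps, which is not set in this file): each frame should span at most one tick.

fn main() {
    let tick_duration = 1.0_f64 / 64.0; // ~15.6 ms, as configured above
    let frame_duration = 1.0_f64 / 60.0; // ~16.7 ms, assumed frame rate
    // tick_duration must stay below frame_duration so the same tick is never sent in two different frames
    assert!(tick_duration < frame_duration);
}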
63 changes: 62 additions & 1 deletion lightyear/src/client/interpolation/interpolate.rs
@@ -8,6 +8,13 @@ use crate::client::resource::Client;
use crate::protocol::Protocol;
use crate::shared::tick_manager::Tick;

// if we haven't received updates since UPDATE_INTERPOLATION_START_TICK_FACTOR * send_interval
// then we update the start_tick so that the interpolation looks good when we receive a new update
// - lower values (with a minimum of 1.0) will make the interpolation look better when we receive an update,
// but will also make it more likely to have a wrong interpolation when we have packet loss
// - however we can combat packet loss by having a bigger delay
const SEND_INTERVAL_TICK_FACTOR: f32 = 1.0;

// TODO: the inner fields are pub just for integration testing.
// maybe put the test here?
// NOTE: there's not a strict need for this, it just makes the logic easier to follow
@@ -22,6 +29,7 @@ pub struct InterpolateStatus<C: SyncComponent> {
}

/// At the end of each frame, interpolate the components between the last 2 confirmed server states
/// Invariant: start_tick <= current_interpolate_tick <= end_tick
pub(crate) fn update_interpolate_status<C: SyncComponent, P: Protocol>(
client: ResMut<Client<P>>,
mut query: Query<(&mut C, &mut InterpolateStatus<C>, &mut ConfirmedHistory<C>)>,
@@ -32,6 +40,12 @@ pub(crate) fn update_interpolate_status<C: SyncComponent, P: Protocol>(
if !client.is_synced() {
return;
}
// how many ticks between each interpolation (add 1 to roughly take the ceil)
let send_interval_delta_tick =
(SEND_INTERVAL_TICK_FACTOR * client.config().shared.server_send_interval.as_secs_f32()
/ client.config().shared.tick.tick_duration.as_secs_f32()) as i16
+ 1;

let current_interpolate_tick = client.interpolation_tick();
for (mut component, mut status, mut history) in query.iter_mut() {
let mut start = status.start.take();
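For orientation, a rough editorial sketch (not the crate's actual code) of why the invariant start_tick <= current_interpolate_tick <= end_tick matters: it keeps the interpolation fraction between 0 and 1.

// relies on the invariant above; panics in debug builds if interpolate_tick < start_tick
fn lerp_fraction(start_tick: u16, end_tick: u16, interpolate_tick: u16) -> f32 {
    if end_tick == start_tick {
        return 0.0;
    }
    (interpolate_tick - start_tick) as f32 / (end_tick - start_tick) as f32
}

fn main() {
    // e.g. start = 100, end = 104, interpolate tick = 101 gives 0.25
    assert_eq!(lerp_fraction(100, 104, 101), 0.25);
}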
@@ -66,7 +80,54 @@ pub(crate) fn update_interpolate_status<C: SyncComponent, P: Protocol>(
}
}

trace!(?current_interpolate_tick,
// NOTE: if we took enough margin, we should always have server snapshots (end tick) to interpolate towards,
// let's consider that this is the case.

// If start_tick < interpolation_tick < end_tick and end_tick - start_tick > UPDATE_FACTOR * send_interval
// that means that start_tick stopped changing because the component is fixed (we are not receiving updates)
// in that case we need to add a history at the correct time
if let (Some((start_tick, _)), Some((end_tick, end_component))) =
(&mut start, std::mem::take(&mut end))
{
if end_tick - *start_tick > send_interval_delta_tick {
info!(
?current_interpolate_tick,
?send_interval_delta_tick,
last_received_server_tick = ?client.latest_received_server_tick(),
start_tick = ?(*start_tick),
end_tick = ?*end_tick,
"situation");
let new_tick = end_tick - send_interval_delta_tick as u16;
if new_tick > current_interpolate_tick {
// put back the existing end in the history
history.buffer.add_item(end_tick, end_component);
// update end to be the current start component
end = Some((new_tick, component.clone()));
} else {
// advance the start
*start_tick = new_tick;
end = Some((end_tick, end_component));
}
} else {
end = Some((end_tick, end_component));
}
}

// // if we are not receiving any update for a long time (because there are no entity updates)
// // we need to bring the start tick forward so that the interpolation looks good when we receive a new update)
// if end.is_none() && start.is_some() {
// start.as_mut().map(|(tick, _)| {
// if current_interpolate_tick - *tick >= update_interpolation_start_tick {
// trace!(
// "no server update received for a long time, bringing the start tick forward"
// );
// *tick = current_interpolate_tick
// }
// });
// }

info!(?current_interpolate_tick,
last_received_server_tick = ?client.latest_received_server_tick(),
start_tick = ?start.as_ref().map(|(tick, _)| tick),
end_tick = ?end.as_ref().map(|(tick, _) | tick),
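A back-of-the-envelope check of the send_interval_delta_tick computed above, plugging in the example configuration from this PR (server_send_interval = 30 ms, tick_duration = 1/64 s, SEND_INTERVAL_TICK_FACTOR = 1.0):

fn main() {
    let factor = 1.0_f32;
    let send_interval_secs = 0.030_f32;
    let tick_duration_secs = 1.0_f32 / 64.0; // ~15.6 ms
    // 0.030 / 0.015625 = 1.92, truncated to 1, plus 1 => 2 ticks
    let send_interval_delta_tick = (factor * send_interval_secs / tick_duration_secs) as i16 + 1;
    assert_eq!(send_interval_delta_tick, 2);
    // so once end_tick - start_tick exceeds 2 ticks, the start tick gets pulled forward
}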
1 change: 1 addition & 0 deletions lightyear/src/client/interpolation/plugin.rs
@@ -53,6 +53,7 @@ impl InterpolationDelay {

/// How much behind the latest server update we want the interpolation time to be
pub(crate) fn to_duration(&self, server_send_interval: Duration) -> Duration {
// TODO: deal with server_send_interval = 0 (set to frame rate)
let ratio_value = server_send_interval.mul_f32(self.send_interval_ratio);
std::cmp::max(ratio_value, self.min_delay)
}
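With the values used by the interest_management example in this PR (server_send_interval = 30 ms, send_interval_ratio = 2.0, min_delay = 50 ms), this resolves to max(60 ms, 50 ms) = 60 ms. A standalone sketch of the same formula (an illustration, not the crate's API):

use std::time::Duration;

fn delay(server_send_interval: Duration, ratio: f32, min_delay: Duration) -> Duration {
    // take the larger of ratio * send_interval and the configured floor
    std::cmp::max(server_send_interval.mul_f32(ratio), min_delay)
}

fn main() {
    let d = delay(Duration::from_millis(30), 2.0, Duration::from_millis(50));
    assert_eq!(d, Duration::from_millis(60));
}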
3 changes: 2 additions & 1 deletion lightyear/src/client/sync.rs
@@ -238,7 +238,7 @@ impl SyncManager {
) -> WrappedTime {
// We want the interpolation time to be just a little bit behind the latest server time
// We add `duration_since_latest_received_server_tick` because we receive them intermittently
// TODO: maybe integrate?
// TODO: maybe integrate because of jitter?
let objective_time = WrappedTime::from_duration(
self.latest_received_server_tick.0 as u32 * tick_manager.config.tick_duration
+ self.duration_since_latest_received_server_tick,
@@ -248,6 +248,7 @@
let objective_delta =
chrono::Duration::from_std(interpolation_delay.to_duration(server_send_interval))
.unwrap();
// info!("objective_delta: {:?}", objective_delta);
objective_time - objective_delta
}
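A worked example of this objective time, with assumed values (tick_duration = 1/64 s, latest_received_server_tick = 1000, 10 ms elapsed since that update, and the 60 ms interpolation delay from the example above):

use std::time::Duration;

fn main() {
    let tick_duration = Duration::from_secs_f64(1.0 / 64.0); // 15.625 ms
    let latest_received_server_tick: u32 = 1000; // assumed
    let since_latest_update = Duration::from_millis(10); // assumed
    let interpolation_delay = Duration::from_millis(60); // from the example config
    // the interpolation time sits `interpolation_delay` behind the estimated current server time
    let objective_time = tick_duration * latest_received_server_tick + since_latest_update - interpolation_delay;
    assert_eq!(objective_time, Duration::from_micros(15_575_000)); // 15.575 s
}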
