
Don't trust the local machine's system time. #1453

Merged: 2 commits merged into H-uru:master from time_fix_fix on Aug 17, 2023
Conversation

@Hoikas (Member) commented on Aug 8, 2023

Previously, when asking for the server's time, the calculation was done by offsetting the local system time by a previously calculated delta. This was effectively useless because the user can easily change the system time out from under us and cause huge deltas. The attempt to fix things in #173 by using WM_TIMECHANGE was a good idea but ultimately incorrect, because hsTimer is a monotonic timer: at this point it uses std::chrono::steady_clock as its backend, so using hsTimer to recalculate the server/client time offset accomplishes nothing.
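To illustrate the flaw in the old scheme, here is a toy model (all names are hypothetical, not the engine's actual API). The client computed a delta between server time and system time once, then answered later queries as system time plus delta, so any user adjustment to the system clock shifts every subsequent answer by the same amount:

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the old delta-based scheme. Times are plain seconds
// for clarity; in the engine they would be real clock readings.
struct OldScheme {
    int64_t delta = 0;

    // Called once when the server tells us its time.
    void Sync(int64_t serverTime, int64_t systemTime) {
        delta = serverTime - systemTime;
    }

    // Later queries trust the (user-modifiable!) system clock.
    int64_t ServerTimeEstimate(int64_t systemTime) const {
        return systemTime + delta;
    }
};
```

With an honest system clock the estimate tracks the server correctly, but if the user turns the clock back an hour, the estimate moves back an hour with it, which is exactly what made exploits like instant pellet baking possible.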

Therefore, this change simply saves the last time value received from the game server along with the monotonic time at which we received that update. This is definitely not perfect, because we don't account for things like the message's round-trip time or time spent in the update loop. However, it *is* a clear improvement: the KI will now always display what the server thinks Mountain time is, and it should be impossible to perform system-clock-based exploits (such as instantly baking pellets)... without any platform-specific code!
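The new approach can be sketched roughly like this (a minimal sketch with illustrative names, not the engine's actual API): remember the server's last reported time and the steady_clock instant it arrived, then advance by elapsed monotonic time on each query.

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>

// Sketch of the fix: queries advance the last known server time by
// elapsed *monotonic* time, so changing the system clock has no effect.
class ServerTimeTracker {
public:
    using Clock = std::chrono::steady_clock;

    // Called whenever a server message carrying a time value arrives.
    void OnServerTimeReceived(uint32_t serverSecs, Clock::time_point recvAt) {
        fLastServerSecs = serverSecs;
        fLastRecvAt = recvAt;
        fHaveServerTime = true;
    }

    uint32_t GetServerSecs(Clock::time_point now) const {
        if (!fHaveServerTime)
            return 0; // the real client falls back to local time here
        auto elapsed = std::chrono::duration_cast<std::chrono::seconds>(
            now - fLastRecvAt);
        return fLastServerSecs + static_cast<uint32_t>(elapsed.count());
    }

private:
    bool fHaveServerTime = false;
    uint32_t fLastServerSecs = 0;
    Clock::time_point fLastRecvAt{};
};
```

Because steady_clock is monotonic and unaffected by the wall clock, the only remaining error sources are the ones noted above: network round-trip time and update-loop latency.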

Future work: currently, we implicitly depend on the server sending us periodic plNetMessages with the time-sent value filled in to synchronize the time. If that never happens, we simply return the client's time (yikes!). This is a decent assumption for all current content, because we will receive plNetMsgGameMessage with plServerReply for region enters on spawn, and plNetSDLMsg for the Age state (when an Age has any non-excluded SDL states). However, there is little guarantee that the server will continue to send us these messages if we are alone in an Age doing nothing. One option would be to periodically send an innocuous message such as plNetMsgGetSharedState. I have not implemented that here because it is not immediately apparent whether or not the various server types implement this message.
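The periodic-ping idea could be sketched generically like this (names and the 30-second threshold are hypothetical; the actual message type and send API would depend on the engine and, as discussed below, on what the servers tolerate):

```cpp
#include <cassert>
#include <chrono>
#include <functional>

// Hypothetical keep-alive helper: if no time-bearing message has
// arrived within kMaxSilence, invoke a callback that would send some
// innocuous request to the server to provoke a timestamped reply.
class TimeSyncKeepAlive {
public:
    using Clock = std::chrono::steady_clock;

    explicit TimeSyncKeepAlive(std::function<void()> sendPing)
        : fSendPing(std::move(sendPing)), fLastUpdate(Clock::now()) {}

    // Called whenever a server message with a time value arrives.
    void OnServerTimeReceived(Clock::time_point now) { fLastUpdate = now; }

    // Called from the client's update loop.
    void Update(Clock::time_point now) {
        if (now - fLastUpdate >= kMaxSilence) {
            fSendPing();
            fLastUpdate = now; // don't ping again while awaiting the reply
        }
    }

private:
    static constexpr std::chrono::seconds kMaxSilence{30};
    std::function<void()> fSendPing;
    Clock::time_point fLastUpdate;
};
```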

@dgelessus (Contributor) left a comment

This looks like a good solution to me.

Review thread on Sources/Plasma/PubUtilLib/plNetClient/plNetClientMgr.cpp (outdated; resolved)
@dgelessus (Contributor) commented on Aug 8, 2023

> If that's never done, then we simply return the client's time (yikes!).

This can never happen, I think. When joining an age instance, the client always sends a plNetMsgGameStateRequest and stays on the loading screen until it has received the corresponding plNetMsgInitialAgeStateSent reply from the server (plus the corresponding initial SDL states, if any). Assuming that message always has the TimeSent field filled out, this guarantees that the server time is known by the time the loading screen ends.

> One option would be to send an innocuous message such as plNetMsgGetSharedState periodically. I have not implemented that here because it is not immediately apparent whether or not the various server types implement this message.

According to my notes, none of the fan servers support plNetMsgGetSharedState. I haven't checked whether Cyan's server responds to it, but I feel it's best not to rely on it if the client doesn't otherwise use it - this looks like a legacy "proto-SDL" mechanism that they probably wanted to move away from...

Perhaps plNetMsgMembersListReq would be safe to send as a "ping"? Though right now that message is only sent exactly once when joining the age instance, so we should check that all servers are fine with that message being sent multiple times...

@dpogue (Member) left a comment
This looks good to me

@dgelessus (Contributor) left a comment
One really minor thing, otherwise LGTM.

Review thread on Sources/Plasma/PubUtilLib/plNetClient/plNetClientMgr.cpp (outdated; resolved)
Co-authored-by: dgelessus <dgelessus@users.noreply.github.com>
@Hoikas merged commit 3a27b23 into H-uru:master on Aug 17, 2023 (14 checks passed)
@Hoikas deleted the time_fix_fix branch on August 17, 2023 at 22:05