
P4 Runtime App read path performance improvement HLD. #1255

Merged
4 commits merged on Mar 29, 2023

Conversation

@rhalstea (Contributor) commented Feb 7, 2023

@rhalstea marked this pull request as draft February 7, 2023 16:34

Warmboot is not currently supported in the P4RT App. When it is, we should be able to read any data back from Redis (i.e. the same as we do today) and use those values to pre-populate the cache before warmboot completes.

To ensure the contents of the cache are in sync with Redis we will include verification logic that reads entries from Redis, translates them to PI and compares them to the cache. Operators can choose how often to run these checks.
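The check described above can be sketched as a two-way diff between the cache and the AppDB contents after translation to PI. This is only an illustration, not the HLD's actual implementation: `VerifyCacheAgainstAppDb` and `PiEntry` are hypothetical names, and a plain string stands in for the real PI protobuf.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in for a PI table entry (really a protobuf message).
using PiEntry = std::string;

// Compare the AppDB view (already read from Redis and translated to PI)
// against the cache, and report every key whose value differs or that is
// missing on either side. Mismatches are reported, not repaired.
std::vector<std::string> VerifyCacheAgainstAppDb(
    const std::map<std::string, PiEntry>& app_db_as_pi,
    const std::map<std::string, PiEntry>& cache) {
  std::vector<std::string> mismatches;
  // Entries missing from the cache, or cached with a different value.
  for (const auto& [key, value] : app_db_as_pi) {
    auto it = cache.find(key);
    if (it == cache.end() || it->second != value) mismatches.push_back(key);
  }
  // Entries in the cache that no longer exist in AppDB.
  for (const auto& [key, value] : cache) {
    if (app_db_as_pi.find(key) == app_db_as_pi.end()) mismatches.push_back(key);
  }
  return mismatches;
}
```

A periodic checker could run this at whatever interval the operator chooses and simply log or export the mismatched keys.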
@svshah-intel (Contributor) commented Feb 9, 2023

I guess you also meant to check the other way around, purging any entry from the cache that no longer exists in AppDB. That's just not clear from the description.

Would it be sub-optimal to simply invalidate the old cache and re-populate it (instead of comparing each entry)?

@rhalstea (Contributor, Author)

I added some clarifications around what will and will not be checked.

The contract between the P4RT App and the P4Orch is that updates only happen when the controller issues a request. This is needed since it's up to the controller to decide how long it waits between reads. With that in mind, any difference between the cache and AppDb is a much bigger concern, and we don't want to mask it by purging entries or trying to bring them back in sync.

Practically speaking, this verification logic will be most useful during development to ensure we don't break the contract. In deployments we really shouldn't see a difference, so operators can run it every hour, every few hours, once or twice a day, etc.

@svshah-intel (Contributor)

Sure, and I see that you have updated the design to report inconsistencies (rather than fix any inconsistency found by the checker), which is fine since controllers/operators can choose to fix inconsistencies via other means.

doc/pins/p4rt_app_read_cache_hld.md (outdated comment, resolved)
@rhalstea (Contributor, Author)

@svshah-intel thanks for the comments. Are there any other improvements or suggestions you think we should make?

@svshah-intel (Contributor)

> @svshah-intel thanks for the comments. Are there any other improvements or suggestions you think we should make?

I just responded to the earlier comment threads. There is just one remaining comment, about measuring the hit to programming time, if any, now that every entry needs additional processing to build the cache. Regardless, your caching proposal is required, and the hit will either be negligible or not. If it is not negligible, all we can do is make caching optional and let the use case choose. If it's found to be negligible, though, optional caching can be future work.

@rhalstea (Contributor, Author)

The performance hit ends up being negligible.

Given that P4Runtime is protobuf-based, we are already doing a lot of string manipulation and map lookups for each request. This is part of the reason why we keep the P4Info (i.e. without it we would have to do an iterative search). So one more map lookup at the end doesn't hurt much, especially since absl::flat_hash_map is basically a drop-in replacement for std::unordered_map and has good lookup times.
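The extra lookup per read can be sketched as below. This is an illustration only, with hypothetical names (`ReadCache`, `Insert`, `Get`) and a plain string standing in for the cached PI entry; `std::unordered_map` is used so the sketch is self-contained, since the comment notes absl::flat_hash_map is essentially a drop-in replacement for it.

```cpp
#include <optional>
#include <string>
#include <unordered_map>

// Minimal sketch of a read-path cache keyed by AppDB-style table keys.
// The real P4RT App could use absl::flat_hash_map here with the same API.
class ReadCache {
 public:
  // Populate or refresh an entry when the controller writes it.
  void Insert(const std::string& key, const std::string& pi_entry) {
    cache_[key] = pi_entry;
  }

  // One extra O(1)-average hash lookup per read; small next to the string
  // manipulation and P4Info lookups already done per request.
  std::optional<std::string> Get(const std::string& key) const {
    auto it = cache_.find(key);
    if (it == cache_.end()) return std::nullopt;
    return it->second;
  }

 private:
  std::unordered_map<std::string, std::string> cache_;
};
```

Since reads now come straight from this map rather than from Redis plus AppDB-to-PI translation, the read path avoids the per-request translation cost entirely.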

@rhalstea (Contributor, Author)

@bhagatyj @kishanps would you mind also reviewing?

@rhalstea marked this pull request as ready for review February 23, 2023 16:42
@zhangyanzhao (Collaborator)

@prsunny @lguohan can you please help to take a look and see if this can be merged? Thanks.

@rhalstea (Contributor, Author) commented Mar 8, 2023

Friendly ping to help merge this PR

@zhangyanzhao (Collaborator)

@rhalstea can you please add the code PRs to this HLD PR? Thanks.

6 participants