[APMSP-1317] Fetch agent info endpoint #619
Conversation
Force-pushed cf14d38 to 480c41f
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #619      +/-   ##
==========================================
+ Coverage   73.73%   73.84%   +0.10%
==========================================
  Files         255      257       +2
  Lines       37118    37309     +191
==========================================
+ Hits        27369    27549     +180
- Misses       9749     9760      +11
```
Force-pushed 6f37661 to 543db99
Benchmarks

Comparison

Benchmark execution time: 2024-09-25 09:47:15. Comparing candidate commit 1439225 in PR branch. Found 6 performance improvements and 34 performance regressions! Performance is the same for 11 metrics, 2 unstable metrics.

Affected scenarios:

- benching string interning on wordpress profile
- credit_card/is_card_number/
- credit_card/is_card_number/ 3782-8224-6310-005
- credit_card/is_card_number/ 378282246310005
- credit_card/is_card_number/37828224631
- credit_card/is_card_number/378282246310005
- credit_card/is_card_number/37828224631000521389798
- credit_card/is_card_number/x371413321323331
- credit_card/is_card_number_no_luhn/
- credit_card/is_card_number_no_luhn/ 3782-8224-6310-005
- credit_card/is_card_number_no_luhn/ 378282246310005
- credit_card/is_card_number_no_luhn/37828224631
- credit_card/is_card_number_no_luhn/37828224631000521389798
- credit_card/is_card_number_no_luhn/x371413321323331
- normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo...
- normalization/normalize_name/normalize_name/bad-name
- normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters
- normalization/normalize_service/normalize_service/[empty string]
- normalization/normalize_service/normalize_service/test_ASCII
- normalization/normalize_trace/test_trace
- redis/obfuscate_redis_string
- sql/obfuscate_sql_string
Candidate benchmark details (Groups 1–12) and baseline omitted due to size.
Force-pushed 868ebe6 to 506770d
Force-pushed 506770d to d1e4c37
agent-info/src/fetcher.rs (outdated):

```rust
/// given refresh interval.
///
/// # Example
/// ```no_run
```
I'm slightly uneasy about `no_run` examples, since it is easy for the example to drift from the implementation when the compiler isn't checking it. I'd much prefer a well-documented test or example module to showcase usability.
I think `no_run` doc tests are still compiled, just not run (running them is not possible since they make HTTP calls). But I can add some tests with the test agent.
I added some integration tests with the test agent. We need to wait for DataDog/dd-apm-test-agent#193 to get merged, since the test agent does not yet send the agent state header. I also added a `require_test_agent` module to the trace-utils integration tests to filter out all tests that use it in the pipeline.
Ah, TIL. I incorrectly assumed `no_run` was completely ignored by the compiler. In that case, I have no objection to using it.
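For reference, rustdoc treats these code-block attributes differently: `no_run` examples are compiled and type-checked by `cargo test` but never executed, while `ignore` skips compilation entirely. A minimal sketch of the distinction (the doc comment and item are illustrative, not from the PR):

```rust
/// # Example
/// ```no_run
/// // Compiled (so it cannot silently drift from the API), never executed,
/// // which suits examples that would make HTTP calls.
/// let interval = std::time::Duration::from_secs(300);
/// ```
///
/// ```ignore
/// // Neither compiled nor run; this one CAN drift from the implementation.
/// ```
fn placeholder() {}
```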
> I also added a require_test_agent module to the trace-utils integration test to filter out all tests that use it in the pipeline.

I'm not sure that's necessary. We already run integration tests separately on CI. I think the correct separation for tests is by type (unit, integration, fuzz, etc.). Also, by definition, I believe all trace-utils integration tests should be hitting the test agent anyway.
Yes. We currently do that with the `-E '!test(tracing_integration_tests::)'` flag.

But that filter won't apply to your test. What about changing it to something more generic, like just `integration_tests` instead of `require_test_agent`?
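For context, cargo-nextest filter expressions match on test path substrings, so a rename only changes the pattern. A sketch of what the CI invocations could look like (the generic commands are illustrative; only the first is from this thread):

```shell
# Current CI invocation: exclude tests under tracing_integration_tests::
cargo nextest run -E '!test(tracing_integration_tests::)'

# With a generic module name, one filter covers any crate's integration tests:
cargo nextest run -E 'test(integration_tests::)'     # run only integration tests
cargo nextest run -E '!test(integration_tests::)'    # run everything else
```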
Can we assume all integration tests will require the test agent?
I think we can assume all integration tests may (but don't necessarily) require external entities like containers running the test-agent or other services. I think we also want to assume integration tests should run separately from unit tests and other workflows in CI. These assumptions are why I think a less specific filter of just `integration_tests` is preferable to filtering on whether or not they use the test-agent.

For data-pipeline or trace-utils specifically, yes, I think we would assume the test-agent is being used. But I wouldn't necessarily say the same for other crates in libdatadog that may wish to write integration tests.
I'm okay with running the integration tests in a separate CI step. But we still need to be able to run the tests using the test agent separately, as they only run on Linux (at least, we only run them on Linux in CI) whereas the other integration tests run on all platforms.
If you are worried about operating systems, wouldn't it be better to just ignore tests on OSes we don't want to use with `#[cfg(not(target_os = "x"))]`?
Looks mostly good. Just need to update the info struct to reflect the changes we're making regarding some of the fields, namely `span_kinds_stats_computed`.
agent-info/src/schema.rs (outdated):

```rust
/// Schema of an agent info response
#[allow(missing_docs)]
#[derive(Clone, Deserialize, Default, Debug, PartialEq)]
pub struct AgentInfoStruct {
```
This should be updated with the latest changes to the RFC and here:
DataDog/datadog-agent#28861
Done.
Force-pushed 3ec4a0b to 9b1a402
Force-pushed b45818a to 39c22b5
FYI, this branch needs a Jira ticket associated with it.
```rust
// SPDX-License-Identifier: Apache-2.0

#[cfg(test)]
mod require_test_agent {
```
These tests don't seem to pass yet. Are you still waiting on changes from the test-agent to be deployed?
```rust
let current_info = self.info.load();
let current_hash = current_info.as_ref().map(|info| info.state_hash.as_str());
let res = fetch_info_with_state(&self.info_endpoint, current_hash).await;
if let Ok(FetchInfoStatus::NewState(new_info)) = res {
```
What about the non-Ok state? At a minimum, we should probably log something, no?
```rust
if let Ok(FetchInfoStatus::NewState(new_info)) = res {
    self.info.store(Some(Arc::new(*new_info)));
}
sleep(self.refresh_interval).await; // Wait 5 min between each call to /info
```
Comment should probably remove the `5 min` part?
Force-pushed ef9bc24 to dc65743
Requires https://github.com/DataDog/libddprof-build/pull/52 to be merged.
LGTM
What does this PR do?
Add `AgentInfoFetcher` to fetch the agent info endpoint and keep it updated.
Motivation
The info endpoint stores some essential parameters for client-side stats.
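As a rough sketch of how such a fetcher is typically consumed, something like the following shape is common (the method names and signature here are assumptions for illustration, not the PR's actual API):

```rust
// Hypothetical usage; the real types live in agent-info/src/fetcher.rs.
let fetcher = AgentInfoFetcher::new(info_endpoint, Duration::from_secs(5 * 60));
tokio::spawn(fetcher.run());      // background task polls /info on the refresh interval
let info = fetcher.get_info();    // cheap read of the latest cached agent info
```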
Additional Notes
How to test the change?