Mkp/gh/parse review test #85
Conversation
Tr/test author fix
…-fix-condition Test/mkp/bitbucket bug fix condition
…webhook-bug Test/tr/mkp/bitbucket webhook bug
…-fix-condition Test/mkp/bitbucket bug fix condition
…-fix-condition fix response status logic in get pr list
…-fix-condition add debug logs to fix get pr 404 error
…-fix-condition fixed webhook message parsing function
…-fix-condition add debug logs and fix keys
…-fix-condition fix keys in webhook data parsing
Mkp/ablejobs
…-fix-condition Test/mkp/bitbucket bug fix condition
Deploying all bugfixes to ablejobs
… mkp/gh/process-review
SonarCloud Quality Gate failed. 0 Bugs. No Coverage information. Catch issues before they fail your Quality Gate with our IDE extension SonarLint.
Walkthrough

The changes across the codebase reflect a significant expansion of the application's integration capabilities with GitHub and Bitbucket. New modules and functions have been added to handle authentication, webhooks, pull requests, and repository setup for both platforms. The refactoring includes renaming and reorganizing existing modules, improving code reusability, and enhancing security with the introduction of environment variables for GitHub app credentials. Overall, the application now supports more complex interactions with version control platforms and has improved its code organization and maintainability.
Review Status
Actionable comments generated: 76
Configuration used: CodeRabbit UI
Files ignored due to filter (2)
- cloudbuild-test.yaml
- vibi-dpu/Cargo.toml
Files selected for processing (40)
- Dockerfile (2 hunks)
- vibi-dpu/src/bitbucket/auth.rs (4 hunks)
- vibi-dpu/src/bitbucket/comment.rs (1 hunks)
- vibi-dpu/src/bitbucket/config.rs (1 hunks)
- vibi-dpu/src/bitbucket/prs.rs (5 hunks)
- vibi-dpu/src/bitbucket/repo.rs (1 hunks)
- vibi-dpu/src/bitbucket/reviewer.rs (1 hunks)
- vibi-dpu/src/bitbucket/user.rs (2 hunks)
- vibi-dpu/src/bitbucket/webhook.rs (1 hunks)
- vibi-dpu/src/core/bitbucket/mod.rs (1 hunks)
- vibi-dpu/src/core/bitbucket/setup.rs (3 hunks)
- vibi-dpu/src/core/coverage.rs (1 hunks)
- vibi-dpu/src/core/github/mod.rs (1 hunks)
- vibi-dpu/src/core/github/setup.rs (1 hunks)
- vibi-dpu/src/core/mod.rs (1 hunks)
- vibi-dpu/src/core/review.rs (1 hunks)
- vibi-dpu/src/core/utils.rs (1 hunks)
- vibi-dpu/src/db/bitbucket/auth.rs (3 hunks)
- vibi-dpu/src/db/bitbucket/mod.rs (1 hunks)
- vibi-dpu/src/db/github/auth.rs (1 hunks)
- vibi-dpu/src/db/github/mod.rs (1 hunks)
- vibi-dpu/src/db/mod.rs (2 hunks)
- vibi-dpu/src/db/prs.rs (3 hunks)
- vibi-dpu/src/db/webhook.rs (1 hunks)
- vibi-dpu/src/github/auth.rs (1 hunks)
- vibi-dpu/src/github/config.rs (1 hunks)
- vibi-dpu/src/github/mod.rs (1 hunks)
- vibi-dpu/src/github/prs.rs (1 hunks)
- vibi-dpu/src/github/repos.rs (1 hunks)
- vibi-dpu/src/github/webhook.rs (1 hunks)
- vibi-dpu/src/main.rs (1 hunks)
- vibi-dpu/src/pubsub/listener.rs (5 hunks)
- vibi-dpu/src/utils/bitbucket_auth_info.rs (1 hunks)
- vibi-dpu/src/utils/github_auth_info.rs (1 hunks)
- vibi-dpu/src/utils/github_webhook.rs (1 hunks)
- vibi-dpu/src/utils/gitops.rs (5 hunks)
- vibi-dpu/src/utils/mod.rs (1 hunks)
- vibi-dpu/src/utils/repo.rs (3 hunks)
- vibi-dpu/src/utils/reqwest_client.rs (1 hunks)
- vibi-dpu/src/utils/setup_info.rs (1 hunks)
Files skipped from review due to trivial changes (9)
- vibi-dpu/src/bitbucket/comment.rs
- vibi-dpu/src/bitbucket/webhook.rs
- vibi-dpu/src/core/bitbucket/mod.rs
- vibi-dpu/src/core/coverage.rs
- vibi-dpu/src/core/github/mod.rs
- vibi-dpu/src/core/mod.rs
- vibi-dpu/src/db/github/mod.rs
- vibi-dpu/src/main.rs
- vibi-dpu/src/utils/reqwest_client.rs
Additional comments: 44
vibi-dpu/src/db/bitbucket/mod.rs (1)
- 1-1: The addition of the `auth` module should be cross-checked to ensure it aligns with the project's structure and naming conventions. Also, verify that the module is properly integrated and used within the application.

vibi-dpu/src/db/mod.rs (1)
- 1-3: The removal of the `auth` module should be cross-checked to ensure that all authentication logic has been successfully migrated to the new structures and that no references to the `auth` module remain elsewhere in the codebase.

Dockerfile (2)
- 19-28: The addition of new ARG instructions for GitHub-related environment variables is consistent with the pull request's goal of enhancing GitHub integration. However, ensure that these new environment variables are documented and securely managed, as they likely contain sensitive information.
- 33-39: The corresponding ENV instructions are correctly set to pass the build-time ARG values to the runtime environment. This is a standard practice for Dockerfile configuration.
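At runtime, the binary can validate these variables early instead of failing deep in a request path. A sketch, with hypothetical variable names (the real names are whatever the Dockerfile's ARG/ENV pairs define):

```rust
use std::env;

// GITHUB_APP_ID and GITHUB_APP_PRIVATE_KEY_PATH are illustrative names,
// not the actual variables from the Dockerfile.
fn load_github_credentials() -> Result<(String, String), env::VarError> {
    let app_id = env::var("GITHUB_APP_ID")?;
    let key_path = env::var("GITHUB_APP_PRIVATE_KEY_PATH")?;
    Ok((app_id, key_path))
}
```

Calling this once at startup turns a missing credential into a single clear error instead of a scattered panic.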
vibi-dpu/src/core/review.rs (2)
- 11-15: The addition of `reqwest_client::get_client` to the imports suggests that HTTP client creation functionality is now being used within this module. Ensure that the `get_client` function is being used appropriately and that any necessary error handling is in place when it is called.
- 22-22: The `process_review` function is now an `async` function, which implies that it will be called with `.await` and that the calling context must also be `async` or within an `async` runtime context. Verify that all calls to `process_review` have been updated accordingly.

The rest of the code in this hunk does not show any obvious issues with logic, security, performance, data races, consistency, error handling, maintainability, modularity, complexity, or optimization. The code appears to follow best practices such as DRY and KISS principles, and the use of `async`/`await` suggests an understanding of asynchronous programming in Rust.

vibi-dpu/src/utils/github_auth_info.rs (3)
- 4-8: The `GithubAuthInfo` struct is well-defined with appropriate fields for managing GitHub authentication information. The use of `Option<String>` for `installation_id` is a good choice, as it allows for the possibility that an installation ID might not always be present.
- 13-18: The constructor `new` is correctly implemented, taking ownership of the passed-in `String` parameters and using the `Option<String>` type for `installation_id`, which aligns with the struct's definition.
- 22-32: The getter methods are implemented correctly, returning references to the struct's fields. This is efficient as it avoids unnecessary cloning of the data.
vibi-dpu/src/bitbucket/user.rs (2)
- 1-4: The import changes reflect the refactoring mentioned in the pull request summary. It's important to ensure that the new `bitbucket_auth_info` module provides the same or improved functionality as the old `auth_info`.
- 6-6: The function `get_and_save_workspace_users` now uses `bitbucket_auth_info`. This change should be verified to ensure that the new auth info structure is compatible with the existing function logic and that any necessary adjustments have been made.

vibi-dpu/src/bitbucket/reviewer.rs (2)
- 4-6: The `get_client` function is now being imported from a utility module, which is a good practice for code reusability and separation of concerns. Ensure that the `get_client` function is no longer defined in this module to avoid duplicate code.
- 8-8: The removal of the local `config` module import for `get_client` is appropriate given the new import from `crate::utils::reqwest_client`. Verify that no other functions from the `config` module are being used in this file, or else the import should not be completely removed.

vibi-dpu/src/core/github/setup.rs (1)
- 1-12: The imports and usage of various modules and functions are well-organized and seem to follow Rust's conventions. However, ensure that all the imported functions and structs are used within the module to avoid dead code.
vibi-dpu/src/utils/setup_info.rs (2)
- 3-8: The `SetupInfo` struct is well-defined with appropriate fields for provider, owner, and repositories. It's good to see that it implements common traits like `Debug`, `Deserialize`, `Serialize`, and `Clone`, which are often required for logging, serialization, and cloning operations.
- 11-18: The `new` method is correctly implemented for creating instances of `SetupInfo`. It follows the conventional Rust pattern for constructors.

vibi-dpu/src/utils/github_webhook.rs (2)
- 6-14: The `Webhook` struct is well-defined with appropriate data types for each field. The use of `HashMap<String, serde_json::Value>` for `config` is flexible and allows for various configurations. However, ensure that the `created_at` field is always in a consistent and parseable format, possibly ISO 8601 for datetime fields.
- 17-37: The constructor method `new` is correctly implemented and initializes the `Webhook` struct with the provided arguments. This is a good use of Rust's `Self` keyword to simplify the return type.

vibi-dpu/src/pubsub/listener.rs (2)
- 1-19: The new imports and the use of external crates like `futures_util`, `google_cloud_auth`, and `google_cloud_pubsub` should be verified to ensure they are correctly added to the project's dependencies and are compatible with the existing codebase.
- 134-147: The `listen_messages` function now includes a deduplication mechanism using a `VecDeque` to store message hashes. Ensure that the hashing mechanism is collision-resistant and that the deduplication logic does not inadvertently drop messages that should be processed.

vibi-dpu/src/utils/gitops.rs (4)
- 4-10: The new imports should be verified to ensure they are used in the code and are necessary. Unused imports should be removed to keep the code clean.
- 260-268: The `has_deletions` function is used to check if a hunk contains deletions before adding its index to `limiterpos`. This is a good use of a helper function to simplify the logic within `process_diff`. However, ensure that the `has_deletions` function is correctly identifying deletions and not encountering any edge cases.
- 332-344: The `has_deletions` function is a good addition for clarity and maintainability. It simplifies the logic within the `process_diff` function by abstracting the deletion check into its own function.
- 506-517: The `create_clone_url` function is a good utility for handling different repository providers. However, ensure that the token is being inserted correctly into the URL and that there are no issues with the URL format for each provider.

vibi-dpu/src/utils/repo.rs (3)
- 8-14: The change to make the `project` field an `Option<String>` is a significant alteration to the `Repository` struct. Ensure that all usages of this field across the application have been updated to handle the `Option` type correctly. This includes checking for `None` values and updating any serialization/deserialization logic that assumes `project` is always present.
- 22-28: The constructor has been correctly updated to accept an `Option<String>` for the `project` parameter. This change should be reflected wherever the `Repository::new` method is invoked to ensure that the code correctly handles the possibility of a `None` value for the `project`.
- 64-66: The return type of the `project` method has been updated to `&Option<String>`. This is consistent with the change to the `project` field. Ensure that all calls to `project()` are updated to handle the `Option` type.

vibi-dpu/src/core/bitbucket/setup.rs (2)
- 1-15: The imports have been cleaned up to remove unused ones and add necessary imports for the new functionality. This is good for maintainability and avoiding clutter in the codebase.
- 72-78: The `SetupInfo` struct is being populated and sent using the `send_setup_info` function. This is a good use of structuring data for setup purposes. However, ensure that the `send_setup_info` function has proper error handling and that the `pubreqs` vector is being used effectively in the context of the application.

vibi-dpu/src/db/prs.rs (7)
- 6-10: The function `update_pr_info_in_db` now correctly accepts an additional parameter `repo_provider`, which is used to construct the database key. This is a necessary change for supporting multiple repository providers.
- 29-30: Logging the successful update is good for traceability, but ensure that this does not leak sensitive information into logs.
- 32-54: The function `bitbucket_process_and_update_pr_if_different` has been renamed to match its functionality more accurately. This is a good practice for code clarity.
- 41-41: The comment indicates a TODO for removing the print statement. Ensure that this is addressed before merging the PR, as debug prints should not be present in production code.
- 57-61: The function `parse_bitbucket_webhook_data` has been correctly renamed and appears to parse the webhook data into a `PrInfo` struct. Ensure that the fields being accessed exist and are in the correct format.
- 109-144: The new function `github_process_and_update_pr_if_different` is introduced to handle GitHub webhook data. Ensure that the `event_action` strings match the expected GitHub webhook action types.
- 148-161: The function `parse_github_webhook_data` is added to parse GitHub webhook data. Ensure that the fields being accessed are correct and that the `pr_info` struct is constructed properly.

vibi-dpu/src/github/mod.rs (1)
- 1-5: The addition of new modules (`auth`, `repos`, `config`, `webhook`, and `prs`) is consistent with the pull request's goal of enhancing GitHub integration and organizing functionality into separate concerns. Ensure that these modules are properly integrated with the rest of the application and that their public interfaces are documented for maintainability.

vibi-dpu/src/utils/mod.rs (1)
- 1-15: The renaming of modules and addition of new ones seem to align with the pull request's goal of enhancing GitHub and Bitbucket integration and improving code organization. Ensure that all references to these modules throughout the codebase have been updated to reflect the new names and that the new modules are properly integrated.
vibi-dpu/src/github/config.rs (2)
- 78-98: The `get_all_pages` function has a potential infinite loop if the `next_url` is not properly updated within the loop. Ensure that the `next_url` is being updated correctly after each iteration.
- 174-187: The `extract_next_url` function is well-implemented and seems to correctly handle the extraction of the `next` URL from the `Link` header. However, ensure that the `Link` header format from GitHub API responses matches the expected format in this function.

vibi-dpu/src/utils/bitbucket_auth_info.rs (1)
- 4-44: The `BitbucketAuthInfo` struct and its associated methods are well-defined and follow Rust's idiomatic practices for struct definition and encapsulation of fields with public getter methods. The use of `Option<u64>` for the `timestamp` field allows for the possibility that the timestamp may not always be set, which is a good use of Rust's `Option` type to handle optional values. The implementation of common traits like `Debug`, `Deserialize`, `Serialize`, and `Clone` will facilitate debugging, serialization, and cloning of the struct instances, which is often required for structs that represent data entities.

However, it's important to ensure that the `expires_in` field, which represents the duration in seconds until the token expires, is used correctly throughout the application to prevent unauthorized access when the token has expired. The application should have logic to refresh the token using the `refresh_token` before the `expires_in` duration has elapsed.

Additionally, the `set_timestamp` method allows mutation of the `timestamp` field, which is acceptable, but it's crucial to ensure that the timestamp is always set in a consistent manner across the application to avoid issues with token expiration checks.

Overall, the struct and methods seem to be correctly implemented and should work as intended if used properly within the application.
vibi-dpu/src/github/prs.rs (3)
- 65-71: The code assumes that the response will always be an array of PRs. While this is likely, it would be more robust to handle the case where the response is not an array or the expected fields are not present.
- 111-116: The code does not handle the case where the JSON fields are not present or not strings. It would be safer to handle potential `None` values returned by `as_str()` without using `?`, which will cause the function to return `None` if any of the fields are not present or not strings.
- 122-130: The function `get_and_store_pr_info` does not handle the case where updating the PR info in the database fails. It would be good to handle potential errors from `update_pr_info_in_db`.
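A sketch of surfacing the db failure to the caller. The signature of `update_pr_info_in_db` here is a simplified stand-in, not the real one from `db/prs.rs`:

```rust
// Stand-in for the real db update; fails on an obviously invalid key so the
// error path is exercisable.
fn update_pr_info_in_db(key: &str, value: &str) -> Result<(), String> {
    if key.is_empty() {
        return Err("empty db key".to_string());
    }
    println!("stored {} => {}", key, value);
    Ok(())
}

// Returns whether the store succeeded instead of silently ignoring errors.
fn get_and_store_pr_info(key: &str, value: &str) -> bool {
    match update_pr_info_in_db(key, value) {
        Ok(()) => true,
        Err(e) => {
            eprintln!("Failed to update PR info in db: {}", e);
            false
        }
    }
}
```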
pub fn save_webhook_to_db<T: Serialize>(webhook: &T) where T: Serialize + Debug, {
    let db = get_db();
    // Generate unique ID
    let uuid = Uuid::new_v4();
The function `save_webhook_to_db` has been updated to be generic over types that implement `Serialize` and `Debug`. This is a good use of Rust's type system to make the function more flexible. However, the trait bound `Serialize` is specified twice, which is redundant and can be removed.

- pub fn save_webhook_to_db<T: Serialize>(webhook: &T) where T: Serialize + Debug, {
+ pub fn save_webhook_to_db<T>(webhook: &T) where T: Serialize + Debug, {
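To confirm the deduplicated bound compiles, here is a std-only sketch. The `Serialize` bound is omitted so the snippet has no external dependency; the shape of the fix is the same — each bound stated exactly once, in the `where` clause:

```rust
use std::fmt::Debug;

// Stand-in for the real function: renders the Debug form instead of
// serializing and writing to the db.
fn save_webhook_to_db<T>(webhook: &T) -> String
where
    T: Debug,
{
    format!("{:?}", webhook)
}
```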
pub mod hunk;
pub mod review;
pub mod repo_config;
pub mod prs;
pub mod bitbucket;
pub mod github;
The addition of `bitbucket` and `github` modules should be accompanied by a review of their contents to ensure that they follow best practices for modularity and maintainability. It's also important to verify that the new module structure does not introduce any circular dependencies and that the new code is covered by tests.
.expect("Unable to convert clone to array").iter().filter(|clone_val| {
    clone_val["name".to_string()].as_str() == Some("ssh")
}).collect::<Vec<&Value>>()[0]["href"].to_string().replace('\"',""),
repo_json["project"]["name"].to_string().trim_matches('"').to_string(),
Some(repo_json["project"]["name"].to_string().trim_matches('"').to_string()),
repo_json["workspace"]["slug"].to_string().trim_matches('"').to_string(),
None,
"bitbucket".to_string(),
The change wraps the `repo_json["project"]["name"]` field in a `Some` option, which is a good practice for handling potential `null` or missing fields in JSON data. However, ensure that the `Repository` struct and any functions that use it are updated to handle an `Option<String>` type for the project name. Also, the error handling for `is_private_res` could be improved by using a match statement instead of `expect`, which will panic if `None` is encountered. This could be a potential point of failure if the `is_private` field is missing or not a boolean.
if let Some(is_private_val) = is_private_res {
is_private = is_private_val;
} else {
eprintln!("Error in deserializing is_private: {:?}", &repo_json);
// Handle the error appropriately, maybe continue with a default value or skip this repo
}
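The fallback suggested above can be sketched with the standard library alone; `resolve_is_private` is a hypothetical helper, and `is_private_res` stands in for `repo_json["is_private"].as_bool()`, which yields `None` when the field is missing or not a boolean:

```rust
// Resolve a possibly-missing boolean without panicking.
fn resolve_is_private(is_private_res: Option<bool>) -> bool {
    match is_private_res {
        Some(v) => v,
        None => {
            eprintln!("Error deserializing is_private; defaulting to private");
            true // fail closed: treat the repository as private by default
        }
    }
}
```

Defaulting to private is the conservative choice here; a caller could instead skip the repository entirely.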
COPY ./vibi-dpu/target/debug/vibi-dpu /app/vibi-dpu
COPY ./pubsub-sa.json /app/pubsub-sa.json
COPY ./repo-profiler.pem /app/repo-profiler.pem
Adding the `repo-profiler.pem` file to the application's directory implies that it is a private key or certificate for GitHub App authentication. Ensure that this file is handled securely and that its permissions are set appropriately to prevent unauthorized access.
General note:
Ensure that the Docker image is scanned for vulnerabilities, especially since it includes manual installation of packages and addition of certificates. This is a best practice for maintaining secure containerized applications.
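On Unix, the permission check on the key file can be sketched with the standard library; `key_perms_ok` is an illustrative helper, not part of the PR:

```rust
use std::fs;
use std::os::unix::fs::PermissionsExt;

// True when the file has no group or world permission bits (e.g. mode 0600),
// the usual requirement for a private key.
fn key_perms_ok(path: &str) -> std::io::Result<bool> {
    let mode = fs::metadata(path)?.permissions().mode();
    Ok(mode & 0o077 == 0)
}
```

Running such a check at startup catches a `.pem` that was copied into the image with overly permissive mode.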
pub fn set_installation_id(&mut self, installation_id: &str) {
    self.installation_id = Some(installation_id.to_string());
}
The setter method `set_installation_id` correctly takes a string slice, converts it to a `String`, and wraps it in `Some` to set the `installation_id`. However, consider if there should be a corresponding method to unset the `installation_id` (i.e., set it to `None`), depending on the application's requirements.

Overall, the `GithubAuthInfo` struct and its associated methods are well-implemented, with clear and concise code. The implementation of traits like `Debug`, `Deserialize`, `Serialize`, and `Clone` will facilitate debugging, serialization, and cloning operations, which are common requirements for such data structures.
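A minimal sketch of the suggested unset method, with the struct reduced to the one relevant field (the real struct carries the full set of auth fields and serde derives):

```rust
#[derive(Debug, Default)]
struct GithubAuthInfo {
    installation_id: Option<String>, // other fields elided
}

impl GithubAuthInfo {
    fn set_installation_id(&mut self, installation_id: &str) {
        self.installation_id = Some(installation_id.to_string());
    }

    // Counterpart to the setter: reset the id to None.
    fn clear_installation_id(&mut self) {
        self.installation_id = None;
    }
}
```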
pub async fn add_webhook(repo_owner: &str, repo_name: &str, access_token: &str) {
    let url = format!("{}/repos/{}/{}/hooks", github_base_url(), repo_owner, repo_name);

    let headers_map_opt = prepare_headers(&access_token);
    if headers_map_opt.is_none() {
        return;
    }
    let headers_map = headers_map_opt.expect("Empty headers_map_opt");
    let callback_url = format!("{}/api/github/callbacks/webhook",
        env::var("SERVER_URL").expect("SERVER_URL must be set"));
    let payload = json!({
        "name": "web",
        "events": ["pull_request", "pull_request_review"],
        "config": { "url": callback_url, "content_type":"json", "insecure_ssl":"0"},
        "active": true,
    });
    let response = get_client()
        .post(&url)
        .headers(headers_map)
        .json(&payload)
        .send()
        .await;
    process_add_webhook_response(response).await;
}
The `add_webhook` function is also asynchronous and seems to be well-structured for adding a webhook to a GitHub repository. However, the error handling in lines 42-45 could be improved by using an `if let` construct to avoid the need for an `unwrap` or `expect`, which could cause a panic if `headers_map_opt` is `None`.

- if headers_map_opt.is_none() {
-     return;
- }
- let headers_map = headers_map_opt.expect("Empty headers_map_opt");
+ if let Some(headers_map) = headers_map_opt {
+     // ... rest of the code using headers_map
+ } else {
+     eprintln!("Headers could not be prepared");
+     return;
+ }
let parse_result = response.json::<Value>().await;
if parse_result.is_err() {
    let e = parse_result.expect_err("No error in parsing");
    eprintln!(
        "[get_list_prs_github] Failed to parse JSON: {:?}",
        e
    );
    return None;
}
Similar to the previous comment, the use of `expect_err` after `is_err` is not recommended. It's better to handle the error directly.

- let e = parse_result.expect_err("No error in parsing");
- eprintln!(
-     "[get_list_prs_github] Failed to parse JSON: {:?}",
-     e
- );
+ if let Err(e) = parse_result {
+     eprintln!("[get_list_prs_github] Failed to parse JSON: {:?}", e);
+ }
if response_result.is_err() {
    let e = response_result.expect_err("No error in getting PR response");
    eprintln!("Error getting PR info: {:?}", e);
    return None;
Again, the use of `expect_err` is not recommended. Use error handling instead.

- let e = response_result.expect_err("No error in getting PR response");
- eprintln!("Error getting PR info: {:?}", e);
+ if let Err(e) = response_result {
+     eprintln!("Error getting PR info: {:?}", e);
+ }
let parse_result = response.json::<Value>().await;
if parse_result.is_err() {
    let e = parse_result.expect_err("No error in parsing");
    eprintln!("Error parsing PR data: {:?}", e);
    return None;
The same issue as above with `expect_err` after `is_err`.

- let e = parse_result.expect_err("No error in parsing");
- eprintln!("Error parsing PR data: {:?}", e);
+ if let Err(e) = parse_result {
+     eprintln!("Error parsing PR data: {:?}", e);
+ }
async fn process_add_webhook_response(response: Result<Response, Error>){
    if response.is_err() {
        let err = response.expect_err("No error in response");
        eprintln!("Error in api call: {:?}", err);
        return;
    }
    let res = response.expect("Uncaught error in response");
    if !res.status().is_success() {
        eprintln!("Failed to add webhook. Status code: {}, Text: {:?}",
            res.status(), res.text().await);
        return;
    }
    let webhook_json = res.json::<Value>().await.expect("[process_add_webhook_response] Unable to deserialize res to Value");
    let webhook = Webhook::new(
        webhook_json["id"].to_string(),
        webhook_json["active"].as_bool().expect("Unable to deserialize active"),
        webhook_json["created_at"].to_string().replace('"', ""),
        webhook_json["events"].as_array().expect("Unable to deserialize events").into_iter()
            .map(|events| events.as_str().expect("Unable to convert event").to_string()).collect(),
        webhook_json["ping_url"].to_string().replace('"', ""),
        webhook_json["config"]["url"].to_string().replace('"', ""),
        webhook_json.get("config")
            .and_then(Value::as_object)
            .map(|config_obj| {
                config_obj.iter().map(|(k, v)| (k.clone(), v.clone())).collect::<HashMap<String, Value>>()
            }).expect("Config should be a JSON object")
    );
    save_webhook_to_db(&webhook);
}
The `process_add_webhook_response` function is handling the response from the GitHub API when adding a webhook. The error handling in lines 64-67 and 69-73 could be improved by using `if let` or `match` to avoid calling `expect`, which can cause panics. Additionally, the function should ideally return a `Result` to indicate success or failure to the caller.

async fn process_add_webhook_response(response: Result<Response, Error>) -> Result<(), String> {
    let res = match response {
        Ok(res) => res,
        Err(err) => return Err(format!("Error in API call: {:?}", err)),
    };
    if !res.status().is_success() {
        let error_text = res.text().await.unwrap_or_else(|_| "Failed to get error text".to_string());
        return Err(format!("Failed to add webhook. Status code: {}, Text: {}", res.status(), error_text));
    }
    // ... rest of the code to process the successful response
    Ok(())
}
General note for the entire hunk:
Ensure that the environment variable `SERVER_URL` is always set, as the code expects it to be present without a fallback. This could lead to a runtime panic if it's not set.
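One way to avoid that panic is to surface the missing variable as an error instead. A sketch, with the callback path taken from the hunk above:

```rust
use std::env;

// Build the webhook callback URL, reporting a missing SERVER_URL as an Err
// rather than panicking inside webhook setup.
fn callback_url() -> Result<String, String> {
    let base = env::var("SERVER_URL").map_err(|_| "SERVER_URL must be set".to_string())?;
    Ok(format!("{}/api/github/callbacks/webhook", base))
}
```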
Please check the action items covered in the PR -
Summary by CodeRabbit
New Features
Enhancements
Bug Fixes
Documentation
Refactor
- `reqwest_client` module.
- `setup_info` module.
Style
Tests
Chores