# Implement adding comment and auto assigning to relevant reviewers #6

## Conversation
**Walkthrough**

The changes introduce a comprehensive set of functionalities for interacting with Bitbucket and Google PubSub, processing code reviews, and managing data in a sled database. The code handles Bitbucket authentication, workspace, repository, user, webhook, and comment management. It also processes PubSub messages, calculates code coverage, and manages review data.
**vibi-dpu/src/bitbucket/webhook.rs** (Outdated)

```rust
let active = matches!(webhook_json["active"].to_string().trim_matches('"'), "true" | "false");
let webhook = Webhook::new(
    webhook_json["uuid"].to_string(),
    active,
    webhook_json["created_at"].to_string().replace('"', ""),
    webhook_json["events"].as_array().expect("Unable to deserialize events").into_iter()
        .map(|events| events.as_str().expect("Unable to convert event").to_string()).collect(),
    webhook_json["links"]["self"]["href"].to_string().replace('"', ""),
    webhook_json["url"].to_string().replace('"', ""),
);
```
The code is creating a new `Webhook` object by manually extracting and converting each field from the JSON response. This can be simplified and made more robust by defining a `From` trait implementation for `Webhook` that takes a `serde_json::Value` as input.
**vibi-dpu/src/bitbucket/workspace.rs** (Outdated)

```rust
use super::config::{bitbucket_base_url, get_api_values};
use crate::db::owner::save_workspace_to_db;
use crate::utils::owner::Workspace;

pub async fn get_bitbucket_workspaces(access_token: &str) -> Vec<Workspace> {
    let user_url = format!("{}/workspaces", bitbucket_base_url());
    let response = get_api_values(&user_url, access_token, None).await;
    let mut workspace_vec = Vec::new();
    for workspace_json in response {
        let val = serde_json::from_value::<Workspace>(workspace_json.clone()).expect("Unable to deserialize workspace");
        save_workspace_to_db(&val);
        workspace_vec.push(val);
    }
    return workspace_vec;
}
```
The function `get_bitbucket_workspaces` is using `expect` for error handling, which will cause the program to panic and terminate if deserialization fails. It's better to propagate the error up and handle it gracefully.
```diff
- let val = serde_json::from_value::<Workspace>(workspace_json.clone()).expect("Unable to deserialize workspace");
+ let val = match serde_json::from_value::<Workspace>(workspace_json.clone()) {
+     Ok(val) => val,
+     Err(e) => {
+         eprintln!("Failed to deserialize workspace: {}", e);
+         continue;
+     }
+ };
```
**vibi-dpu/src/bitbucket/workspace.rs** (Outdated)

```rust
pub async fn get_bitbucket_workspaces(access_token: &str) -> Vec<Workspace> {
    let user_url = format!("{}/workspaces", bitbucket_base_url());
    let response = get_api_values(&user_url, access_token, None).await;
    let mut workspace_vec = Vec::new();
    for workspace_json in response {
        let val = serde_json::from_value::<Workspace>(workspace_json.clone()).expect("Unable to deserialize workspace");
        save_workspace_to_db(&val);
        workspace_vec.push(val);
    }
    return workspace_vec;
}
```
There is no error handling for the `get_api_values` function call. If this function fails, it could return an error or unexpected value that might cause a runtime error in the following lines. Consider adding error handling here.
```diff
- let response = get_api_values(&user_url, access_token, None).await;
+ let response = match get_api_values(&user_url, access_token, None).await {
+     Ok(response) => response,
+     Err(e) => {
+         eprintln!("Failed to get API values: {}", e);
+         return Vec::new();
+     }
+ };
```

(Note: this assumes `get_api_values` is also changed to return a `Result` rather than a bare `Vec`.)
**vibi-dpu/src/core/setup.rs** (Outdated)

```rust
// get access token from installation code by calling relevant repo provider's api
// out of github, bitbucket, gitlab
let authinfo_opt = get_access_token_from_bitbucket(installation_code).await;
if authinfo_opt.is_none() {
    eprintln!("Unable to get authinfo in get_access_token_from_bitbucket");
    return;
}
```
When `get_access_token_from_bitbucket` returns `None`, the code logs and returns early, but the caller gets no signal that setup failed. Consider returning a `Result` here so the failure can be propagated instead of silently stopping.
**vibi-dpu/src/bitbucket/auth.rs** (Outdated)

```rust
.replace("git@", format!("https://x-token-auth:{{{access_token}}}@").as_str())
.replace("bitbucket.org:", "bitbucket.org/");
```
The replacement operation in these lines is hardcoding the protocol (`https://`) and the host (`bitbucket.org/`). This could lead to problems if the git URL uses a different protocol or host. Consider parsing the URL and replacing only the necessary parts.
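A std-only sketch of that suggestion (the function name is hypothetical): instead of hardcoding the host, split an scp-style remote (`git@host:path`) into its host and path, then rebuild the authenticated HTTPS form:

```rust
/// Convert an scp-like SSH remote (`git@host:owner/repo.git`) into an
/// authenticated HTTPS URL, without assuming a particular host.
/// Returns None if the input does not look like an scp-style URL.
fn to_authenticated_https(git_url: &str, access_token: &str) -> Option<String> {
    // scp-like form: user@host:path (no scheme)
    let rest = git_url.strip_prefix("git@")?;
    let (host, path) = rest.split_once(':')?;
    Some(format!("https://x-token-auth:{}@{}/{}", access_token, host, path))
}
```

This works unchanged for `bitbucket.org` or any other host, and makes the "not an SSH remote" case explicit instead of silently producing a malformed URL.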
**vibi-dpu/src/bitbucket/auth.rs** (Outdated)

```rust
fn set_git_remote_url(git_url: &str, directory: &str, access_token: &str) {
    let clone_url = git_url.to_string()
        .replace("git@", format!("https://x-token-auth:{{{access_token}}}@").as_str())
        .replace("bitbucket.org:", "bitbucket.org/");
    let output = Command::new("git")
        .arg("remote").arg("set-url").arg("origin")
        .arg(clone_url)
        .current_dir(directory)
        .output()
        .expect("failed to execute git pull");
    // Only for debug purposes
    match str::from_utf8(&output.stderr) {
        Ok(v) => println!("set_git_url stderr = {:?}", v),
        Err(e) => eprintln!("set_git_url stderr error: {}", e),
    };
    match str::from_utf8(&output.stdout) {
        Ok(v) => println!("set_git_urll stdout = {:?}", v),
        Err(e) => eprintln!("set_git_url stdout error: {}", e),
    };
    println!("git pull output = {:?}, {:?}", &output.stdout, &output.stderr);
}
```
The function `set_git_remote_url` is executing a shell command without any error handling. If the command fails for any reason (e.g., `git` is not installed, the directory does not exist), the program will panic. Consider adding error handling to this function.
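One possible shape for that error handling (the `_checked` suffix and the `String` error type are illustrative, not from the codebase):

```rust
use std::process::Command;

/// Variant of set_git_remote_url that reports failures instead of panicking.
/// Returns Err both when the process cannot be spawned (git missing,
/// directory absent) and when git itself exits non-zero.
fn set_git_remote_url_checked(clone_url: &str, directory: &str) -> Result<(), String> {
    let output = Command::new("git")
        .args(["remote", "set-url", "origin", clone_url])
        .current_dir(directory)
        .output()
        .map_err(|e| format!("failed to run git in {directory}: {e}"))?;
    if !output.status.success() {
        return Err(format!(
            "git remote set-url failed: {}",
            String::from_utf8_lossy(&output.stderr)
        ));
    }
    Ok(())
}
```

The caller can then decide whether a failed remote update should abort setup or just be logged.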
```rust
for prhunk in hunkmap.prhunkvec() {
    // calculate number of hunks for each userid
    let coverage_map = calculate_coverage(&hunkmap.repo_owner(), prhunk);
    if !coverage_map.is_empty() {
        // get user for each user id
        // add reviewers
        let mut author_set: HashSet<String> = HashSet::new();
        author_set.insert(prhunk.author().to_string());
        for blame in prhunk.blamevec() {
            if author_set.contains(blame.author()) {
                continue;
            }
            author_set.insert(blame.author().to_string());
            let author_id = blame.author();
            add_reviewers(blame.author(), review, &access_token).await;
        }
        // create comment text
        let comment = comment_text(coverage_map);
        // add comment
        add_comment(&comment, review, &access_token).await;
        // TODO - implement settings
    }
```
There is no error handling for the async functions `add_reviewers` and `add_comment`. If these functions fail due to network issues or API errors, the program may crash or behave unexpectedly. Consider adding error handling to these function calls.
**vibi-dpu/src/core/review.rs** (Outdated)

```rust
async fn commit_check(review: &Review) {
    if !commit_exists(&review.base_head_commit(), &review.clone_dir())
        || !commit_exists(&review.pr_head_commit(), &review.clone_dir()) {
        println!("Pulling repository {} for commit history", &review.repo_name());
        git_pull(review).await;
    }
}
```
The function `commit_check` performs a Git pull operation if either of the commits does not exist. However, it doesn't check whether the Git pull operation was successful. Consider adding error handling for the `git_pull` function.
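As a related sketch, `commit_exists` itself can absorb git failures instead of panicking. This assumes commit presence is probed with `git cat-file -e` (the helper name matches the snippet above, but the body here is illustrative):

```rust
use std::process::Command;

/// Check whether a commit is present locally; any failure to run git
/// (missing binary, bad directory) is treated as "not present", which
/// makes the caller fall back to a pull rather than crash.
fn commit_exists(commit: &str, clone_dir: &str) -> bool {
    Command::new("git")
        .args(["cat-file", "-e", commit])
        .current_dir(clone_dir)
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false)
}
```

`commit_check` could then treat a failed pull as an error too, e.g. by having `git_pull` return a `Result` and propagating it.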
```rust
let mut headers;
if headers_opt.is_none() {
    let headers_opt_new = prepare_headers(&access_token);
    if headers_opt_new.is_none() {
        eprintln!("Unable to prepare_headers, empty headers_opt");
        return None;
    }
    headers = headers_opt_new.expect("Empty headers_opt");
} else {
    headers = headers_opt.expect("Empty headers_opt");
}
```
The handling of headers in `get_api_response` can be simplified. Instead of checking if `headers_opt` is `None` twice (once in the `if` condition and once with `expect`), you can fall back to `prepare_headers` with `or_else` and handle the combined `Option` once. This will make the code cleaner and easier to read.
```diff
- let mut headers;
- if headers_opt.is_none() {
-     let headers_opt_new = prepare_headers(&access_token);
-     if headers_opt_new.is_none() {
-         eprintln!("Unable to prepare_headers, empty headers_opt");
-         return None;
-     }
-     headers = headers_opt_new.expect("Empty headers_opt");
- } else {
-     headers = headers_opt.expect("Empty headers_opt");
- }
+ let headers = match headers_opt.or_else(|| prepare_headers(&access_token)) {
+     Some(headers) => headers,
+     None => {
+         eprintln!("Unable to prepare_headers, empty headers_opt");
+         return None;
+     }
+ };
```
**vibi-dpu/src/bitbucket/config.rs** (Outdated)

```rust
let get_res = client.get(url).headers(headers).send().await;
if get_res.is_err() {
    let e = get_res.expect_err("No error in get_res");
    eprintln!("Error sending GET request without params to {}, error: {}", url, e);
    return None;
}
let response = get_res.expect("Uncaught error in get_res");
if !response.status().is_success() {
    eprintln!("Failed to call API {}, status: {}", url, response.status());
    return None;
}
```
Error handling in `get_api_response` could be improved by using the `?` operator to propagate errors upwards instead of returning `None`. This would allow the calling function to handle the error and potentially provide more context about what went wrong.
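The `?`-based shape this describes can be sketched without the network layer; `parse_counts` and `total` below are hypothetical std-only stand-ins for the reqwest call chain:

```rust
use std::error::Error;
use std::num::ParseIntError;

/// Stand-in for the API call chain: each fallible step uses `?` so the
/// error reaches the caller with its context intact, instead of being
/// flattened into None.
fn parse_counts(body: &str) -> Result<Vec<i64>, ParseIntError> {
    body.split(',')
        .map(|s| s.trim().parse::<i64>())
        .collect() // collects into Result<Vec<_>, _>, stopping at the first error
}

fn total(body: &str) -> Result<i64, Box<dyn Error>> {
    let counts = parse_counts(body)?; // error propagates to the caller
    Ok(counts.iter().sum())
}
```

In the real function, `client.get(url).headers(headers).send().await?` and `response.error_for_status()?` would play the role of the fallible steps, with the function returning `Result<Response, _>` instead of `Option<Response>`.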
**vibi-dpu/src/bitbucket/config.rs** (Outdated)

```rust
let mut values_vec = Vec::new();
if response_opt.is_none() {
    eprintln!("Response is None, can't deserialize");
    return (values_vec, None);
}
let response = response_opt.expect("Uncaught empty response_opt");
let parse_res = response.json::<serde_json::Value>().await;
if parse_res.is_err() {
    let e = parse_res.expect_err("No error in parse_res");
    eprintln!("Unable to deserialize response: {}", e);
    return (values_vec, None);
}
let response_json = parse_res
    .expect("Uncaught error in parse_res in deserialize_response");
let res_values_opt = response_json["values"].as_array();
if res_values_opt.is_none() {
    eprintln!("response_json[values] is empty");
    return (values_vec, None);
}
let values = res_values_opt.expect("res_values_opt is empty");
for value in values {
    values_vec.push(value.to_owned());
}
return (values_vec, Some(response_json["next"].to_string()));
}
```
In the `deserialize_paginated_response` function, there are several instances where you're using `expect` after checking if the result is an error. This is unnecessary and can lead to a panic if the expectation is not met. Instead, you can use `?` to propagate the error upwards.
**vibi-dpu/src/bitbucket/config.rs** (Outdated)

```rust
let mut values_vec = Vec::new();
let mut next_url_mut = next_url;
while next_url_mut.is_some() {
    let url_opt = next_url_mut.as_ref();
    if url_opt.is_none() {
        eprintln!("next_url is none");
        break;
    }
    let url = url_opt.expect("Empty next url_opt").trim_matches('"');
    if url == "null" {
        break;
    }
    let response_opt = get_api_response(url, None, access_token, params).await;
    let (mut response_values, url_opt) = deserialize_paginated_response(response_opt).await;
    next_url_mut = url_opt.clone();
    values_vec.append(&mut response_values);
}
return values_vec;
```
The function `get_all_pages` keeps calling the API in a loop until there are no more pages. If the API misbehaves (for example, a cyclic or never-ending `next` link), this loop never terminates. Consider bounding the number of pages fetched.
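Since the snippet above is already loop-based, the remaining risk is an unbounded or cyclic `next` chain. A std-only sketch with an injected fetcher and a page cap (all names hypothetical; in the real code the fetcher would wrap `get_api_response` + `deserialize_paginated_response`):

```rust
/// Fetch pages by following `next` links, but stop after `max_pages`
/// to guard against runaway or cyclic pagination. The fetcher is
/// injected so the loop itself stays testable without a network.
fn get_all_pages_bounded(
    mut fetch: impl FnMut(&str) -> (Vec<String>, Option<String>),
    start_url: &str,
    max_pages: usize,
) -> Vec<String> {
    let mut values = Vec::new();
    let mut next = Some(start_url.to_string());
    let mut pages = 0;
    while let Some(url) = next {
        if pages >= max_pages {
            eprintln!("Stopping pagination after {max_pages} pages");
            break;
        }
        let (mut page_values, next_url) = fetch(&url);
        values.append(&mut page_values);
        next = next_url;
        pages += 1;
    }
    values
}
```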
**vibi-dpu/src/bitbucket/user.rs** (Outdated)

```rust
pub async fn get_and_save_workspace_users(workspace_id: &str, access_token: &str) {
    let base_url = bitbucket_base_url();
    let members_url = format!("{}/workspaces/{}/members", &base_url, workspace_id);
    let response_json = get_api_values(&members_url, access_token, None).await;
    for user_json in response_json {
        let user_val = user_json.get("user").expect("Empty reviewers_opt");
        let user: BitbucketUser = serde_json::from_value(user_val.to_owned())
            .expect("Unable to deserialize user");
        add_bitbucket_user_to_workspace_user(user.clone());
    }
```
The function `get_and_save_workspace_users` does not handle potential errors that may occur during the API call or JSON parsing. If an error occurs, the function will panic and terminate the program. It would be better to return a `Result` from this function and handle errors gracefully.
```rust
let bitbucket_user = serde_json::from_value::<BitbucketUser>(bitbucket_user_json)
    .expect("error in deserializing BitbucketUser from bitbucket_user_json");
```
The `expect` call here can cause a panic if the JSON parsing fails. It would be better to handle this error gracefully by returning an `Err` result.
**vibi-dpu/src/db/auth.rs** (Outdated)

```rust
let authinfo_res = db.get(IVec::from(authinfo_key.as_bytes()));
if authinfo_res.is_err() {
    let e = authinfo_res.expect_err("No error in authinfo_res");
    eprintln!("Unable to get bb authinfo from db: {:?}", e);
```
Similar to line 25, the error here is printed with the `Debug` formatter (`{:?}`). Since the error type implements `Display`, using `{}` gives a cleaner, human-readable message.

```diff
- eprintln!("Unable to get bb authinfo from db: {:?}", e);
+ eprintln!("Unable to get bb authinfo from db: {}", e);
```
**vibi-dpu/src/db/auth.rs** (Outdated)

```rust
let authinfo_parse = serde_json::from_slice(&authinfo_ivec);
if authinfo_parse.is_err() {
    let e = authinfo_parse.expect_err("No error in authinfo_parse");
    eprintln!("Unable to deserialize authinfo_parse: {:?}", e);
```
Again, similar to lines 25 and 37, the error is printed with the `Debug` formatter. Using `{}` (the `Display` formatter) gives a cleaner message.

```diff
- eprintln!("Unable to deserialize authinfo_parse: {:?}", e);
+ eprintln!("Unable to deserialize authinfo_parse: {}", e);
```
**vibi-dpu/src/pubsub/listener.rs** (Outdated)

```rust
async fn setup_subscription(keypath: &str, topicname: &str) -> Subscription{
    let config = get_pubsub_client_config(keypath).await;
    let client = Client::new(config).await
        .expect("Unable to create pubsub client to listen to messages");
    let topic = client.topic(topicname);
    let topic_res = topic.exists(None).await;
    if topic_res.is_err() {
        let e = topic_res.expect_err("No error found in topic_res");
        if e.code() == Code::NotFound {
            client.create_topic(topicname, None, None).await
                .expect("Unable to create topic");
        }
        else {
            eprintln!("Error getting topic: {:?}", e);
        }
    }
    let sub_config = SubscriptionConfig {
        enable_message_ordering: true,
        ..Default::default()
    };
    let subscription_name = format!("{topicname}-sub");
    let subscription = client.subscription(&subscription_name);
    let subconfig = SubscriptionConfig {
        enable_message_ordering: true,
        ..Default::default()
    };
    if !subscription.exists(None).await.expect("Unable to get subscription information") {
        subscription.create(
            topic.fully_qualified_name(), subconfig, None)
            .await.expect("Unable to create subscription for listening to messages");
    }
    println!("sub = {:?}", &subscription);
    subscription
}
```
The function `setup_subscription` has similar issues as `get_pubsub_client_config`. It uses `.expect()`, which will cause the program to panic if it fails. Consider changing the function signature to return a `Result<Subscription, Box<dyn Error>>` and propagating errors with the `?` operator.

```diff
- async fn setup_subscription(keypath: &str, topicname: &str) -> Subscription{
+ async fn setup_subscription(keypath: &str, topicname: &str) -> Result<Subscription, Box<dyn Error>> {
```
**vibi-dpu/src/pubsub/listener.rs** (Outdated)

```rust
pub async fn listen_messages(keypath: &str, topicname: &str) {
    let queue_cap = 100;
    let mut message_hashes = VecDeque::with_capacity(queue_cap);
    let subscription = setup_subscription(keypath, topicname).await;
    let mut stream = subscription.subscribe(None).await
        .expect("Unable to subscribe to messages");
    while let Some(message) = stream.next().await {
        println!("Listening for messages...");
        let attrmap: HashMap<String, String> = message.message.attributes.clone().into_iter().collect();
        let message_hash = digest(&*message.message.data);
        if !message_hashes.contains(&message_hash) {
            message_hashes.push_back(message_hash);
            if message_hashes.len() > queue_cap {
                while message_hashes.len() > queue_cap {
                    message_hashes.pop_front();
                }
            }
            let msg_bytes = message.message.data.clone();
            process_message(&attrmap, &msg_bytes).await;
        }
        // Ack or Nack message.
        let _ = message.ack().await;
    }
}
```
The function `listen_messages` also uses `.expect()`, which will cause the program to panic if it fails. Consider changing the function signature to return a `Result<(), Box<dyn Error>>` and propagating errors with the `?` operator.

```diff
- pub async fn listen_messages(keypath: &str, topicname: &str) {
+ pub async fn listen_messages(keypath: &str, topicname: &str) -> Result<(), Box<dyn Error>> {
```
**vibi-dpu/src/pubsub/listener.rs** (Outdated)

```rust
    enable_message_ordering: true,
    ..Default::default()
};
let subscription_name = format!("{topicname}-sub");
```
Note: `format!("{topicname}-sub")` uses inline format arguments, which have been stable since Rust 1.58, so this is valid and equivalent to `format!("{}-sub", topicname)`. No change is needed here.
**vibi-dpu/src/pubsub/listener.rs** (Outdated)

```rust
let subconfig = SubscriptionConfig {
    enable_message_ordering: true,
    ..Default::default()
};
if !subscription.exists(None).await.expect("Unable to get subscription information") {
    subscription.create(
        topic.fully_qualified_name(), subconfig, None)
        .await.expect("Unable to create subscription for listening to messages");
}
```
There is a duplicate definition of the subscription config (`sub_config` at lines 84–87 and `subconfig` at lines 90–93). You can remove one of them.

```diff
- let sub_config = SubscriptionConfig {
-     enable_message_ordering: true,
-     ..Default::default()
- };
```
**vibi-dpu/src/pubsub/listener.rs** (Outdated)

```rust
let mut message_hashes = VecDeque::with_capacity(queue_cap);
let subscription = setup_subscription(keypath, topicname).await;
let mut stream = subscription.subscribe(None).await
    .expect("Unable to subscribe to messages");
while let Some(message) = stream.next().await {
    println!("Listening for messages...");
    let attrmap: HashMap<String, String> = message.message.attributes.clone().into_iter().collect();
    let message_hash = digest(&*message.message.data);
    if !message_hashes.contains(&message_hash) {
        message_hashes.push_back(message_hash);
        if message_hashes.len() > queue_cap {
            while message_hashes.len() > queue_cap {
                message_hashes.pop_front();
            }
        }
```
The logic for maintaining the `message_hashes` queue is more complex than it needs to be: the length is checked twice before evicting. A single check that pops the oldest entry is enough. (Note that `VecDeque::truncate` removes elements from the back, so it would evict the newest hashes rather than the oldest and is not a suitable replacement here.)

```diff
- if message_hashes.len() > queue_cap {
-     while message_hashes.len() > queue_cap {
-         message_hashes.pop_front();
-     }
- }
+ if message_hashes.len() > queue_cap {
+     message_hashes.pop_front();
+ }
```
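The dedup check and the eviction can also be folded into one small helper, which keeps the hot loop readable. A std-only sketch (the helper name is hypothetical):

```rust
use std::collections::VecDeque;

/// Record a message hash, evicting the oldest entry once the queue is
/// full. Returns true if the hash was new (i.e. the message should be
/// processed), false if it was a duplicate.
fn remember_hash(seen: &mut VecDeque<String>, cap: usize, hash: String) -> bool {
    if seen.contains(&hash) {
        return false;
    }
    if seen.len() >= cap {
        seen.pop_front(); // drop the oldest, not the newest
    }
    seen.push_back(hash);
    true
}
```

The listener body then reduces to `if remember_hash(&mut message_hashes, queue_cap, message_hash) { process_message(...).await; }`.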
Rename rustApp to dpu in url endpoint
**vibi-dpu/src/core/setup.rs** (Outdated)

```rust
pub async fn handle_install_bitbucket(installation_code: &str) {
    // get access token from installation code by calling relevant repo provider's api
    // out of github, bitbucket, gitlab
    let authinfo_opt = get_access_token_from_bitbucket(installation_code).await;
    if authinfo_opt.is_none() {
        eprintln!("Unable to get authinfo in get_access_token_from_bitbucket");
        return;
    }
    let authinfo = authinfo_opt.expect("Empty authinfo_opt");
    println!("AuthInfo: {:?}", authinfo);
    // let auth_info = { "access_token": access_token, "expires_in": expires_in_formatted, "refresh_token": auth_info["refresh_token"] }; db.insert("auth_info", serde_json::to_string(&auth_info).unwrap());
    let access_token = authinfo.access_token().clone();
    let user_workspaces = get_bitbucket_workspaces(&access_token).await;
    let mut pubreqs: Vec<SetupInfo> = Vec::new();
    for workspace in user_workspaces {
        let workspace_slug = workspace.slug();
        println!("=========<{:?}>=======", workspace_slug);
        let repos = get_workspace_repos(workspace.uuid(),
            &access_token).await;
        get_and_save_workspace_users(workspace.uuid(), &access_token).await;
        let mut reponames: Vec<String> = Vec::new();
        for repo in repos.expect("repos is None") {
            let token_copy = access_token.clone();
            let mut repo_copy = repo.clone();
            clone_git_repo(&mut repo_copy, &token_copy).await;
            let repo_name = repo.name();
            reponames.push(repo_name.clone());
            println!("Repo url git = {:?}", &repo.clone_ssh_url());
            println!("Repo name = {:?}", repo_name);
            process_webhooks(workspace_slug.to_string(),
                repo_name.to_string(),
                access_token.to_string()).await;
            let repo_name_async = repo_name.clone();
            let workspace_slug_async = workspace_slug.clone();
            let access_token_async = access_token.clone();
            // task::spawn(async move {
            //     get_prs(&workspace_slug_async,
            //         &repo_name_async,
            //         &access_token_async,
            //         "OPEN").await;
            // });
        }
        pubreqs.push(SetupInfo {
            provider: "bitbucket".to_owned(),
            owner: workspace_slug.clone(),
            repos: reponames });
    }
    send_setup_info(&pubreqs).await;
}
```
The function `handle_install_bitbucket` is doing too much. It's handling authentication, workspace retrieval, repository cloning, webhook processing, and setup-info sending. This violates the Single Responsibility Principle (SRP) of SOLID principles. Consider breaking this function into smaller ones, each with a clear, single responsibility.
**vibi-dpu/src/core/setup.rs** (Outdated)

```rust
async fn send_setup_info(setup_info: &Vec<SetupInfo>) {
    let installation_id = env::var("INSTALL_ID")
        .expect("INSTALL_ID must be set");
    println!("install_id = {:?}", &installation_id);
    let base_url = env::var("SERVER_URL")
        .expect("SERVER_URL must be set");
    let body = PublishRequest {
        installationId: installation_id,
        info: setup_info.to_vec(),
    };
    println!("body = {:?}", &body);
    let client = get_client();
    let setup_url = format!("{base_url}/api/dpu/setup");
    let post_res = client
        .post(&setup_url)
        .json(&body)
        .send()
        .await;
    if post_res.is_err() {
        let e = post_res.expect_err("No error in post_res in send_setup_info");
        eprintln!("error in send_setup_info post_res: {:?}, url: {:?}", e, &setup_url);
        return;
    }
    let resp = post_res.expect("Uncaught error in post_res");
    println!("Response: {:?}", resp.text().await);
}
```
Similar to the previous comment, the function `send_setup_info` is also doing too much. It's responsible for environment variable retrieval, request body creation, HTTP request sending, and response handling. Consider breaking this function into smaller ones, each with a clear, single responsibility.
**vibi-dpu/src/core/setup.rs** (Outdated)

```rust
async fn clone_git_repo(repo: &mut Repository, access_token: &str) {
    let git_url = repo.clone_ssh_url();
    let clone_url = git_url.to_string()
        .replace("git@", format!("https://x-token-auth:{{{access_token}}}@").as_str())
        .replace("bitbucket.org:", "bitbucket.org/");
    let random_string: String = thread_rng()
        .sample_iter(&Alphanumeric)
        .take(10)
        .map(char::from)
        .collect();
    let mut directory = format!("/tmp/{}/{}/{}", repo.provider(),
        repo.workspace(), random_string);
    // Check if directory exists
    let exists_res = fs::metadata(&directory).await;
    if exists_res.is_err() {
        let e = exists_res.expect_err("No error in exists_res");
        println!("executing metadata in {:?}, output: {:?}",
            &directory, e);
        if e.kind() != ErrorKind::NotFound {
            return;
        }
    }
    let remove_dir_opt = fs::remove_dir_all(&directory).await;
    if remove_dir_opt.is_err() {
        let e = remove_dir_opt.expect_err("No error in remove_dir_opt");
        println!("Execute in directory: {:?}, remove_dir_all: {:?}",
            &directory, e);
        if e.kind() != ErrorKind::NotFound {
            return;
        }
    }
    let create_dir_opt = fs::create_dir_all(&directory).await;
    if create_dir_opt.is_err() {
        let e = create_dir_opt.expect_err("No error in create_dir_opt");
        println!("Executing in directory: {:?}, create_dir_all: {:?}",
            &directory, e);
        if e.kind() != ErrorKind::NotFound {
            return;
        }
    }
    println!("directory exists? {}", fs::metadata(&directory).await.is_ok());
    let mut cmd = std::process::Command::new("git");
    cmd.arg("clone").arg(clone_url).current_dir(&directory);
    let output_res = cmd.output();
    if output_res.is_err() {
        let e = output_res.expect_err("No error in output_res in git clone");
        eprintln!("Executing in directory: {:?}, git clone: {:?}",
            &directory, e);
        return;
    }
    let output = output_res.expect("Uncaught error in output_res");
    match str::from_utf8(&output.stderr) {
        Ok(v) => println!("git pull stderr = {:?}", v),
        Err(e) => {/* error handling */ println!("git clone stderr error {}", e)},
    };
    match str::from_utf8(&output.stdout) {
        Ok(v) => println!("git pull stdout = {:?}", v),
        Err(e) => {/* error handling */ println!("git clone stdout error {}", e)},
    };
    directory = format!("{}/{}", &directory, repo.name());
    repo.set_local_dir(&directory);
    save_repo_to_db(repo);
}
```
The function `clone_git_repo` is doing too much. It's responsible for URL formatting, directory creation, git cloning, output handling, and database saving. This violates the Single Responsibility Principle (SRP) of SOLID principles. Consider breaking this function into smaller ones, each with a clear, single responsibility.
**vibi-dpu/src/core/setup.rs** (Outdated)

```rust
async fn process_webhooks(workspace_slug: String, repo_name: String, access_token: String) {
    let webhooks_data = get_webhooks_in_repo(
        &workspace_slug, &repo_name, &access_token).await;
    let webhook_callback_url = format!("{}/api/bitbucket/callbacks/webhook",
        env::var("SERVER_URL").expect("SERVER_URL must be set"));
    if webhooks_data.is_empty() {
        println!("Adding new webhook...");
        let repo_name_async = repo_name.clone();
        let workspace_slug_async = workspace_slug.clone();
        let access_token_async = access_token.clone();
        task::spawn(async move {
            add_webhook(
                &workspace_slug_async,
                &repo_name_async,
                &access_token_async).await;
        });
        return;
    }
    let matching_webhook = webhooks_data.into_iter()
        .find(|w| w.url().to_string() == webhook_callback_url);
    if matching_webhook.is_none() {
        println!("Adding new webhook...");
        let repo_name_async = repo_name.clone();
        let workspace_slug_async = workspace_slug.clone();
        let access_token_async = access_token.clone();
        task::spawn(async move {
            add_webhook(
                &workspace_slug_async,
                &repo_name_async,
                &access_token_async).await;
        });
        return;
    }
    let webhook = matching_webhook.expect("no matching webhook");
    println!("Webhook already exists: {:?}", &webhook);
    save_webhook_to_db(&webhook);
}
```
The function `process_webhooks` is doing too much. It's responsible for webhook retrieval, URL formatting, new webhook addition, and database saving. This violates the Single Responsibility Principle (SRP). Consider breaking this function into smaller ones, each with a clear, single responsibility.
vibi-dpu/src/core/setup.rs
Outdated
.replace("git@", format!("https://x-token-auth:{{{access_token}}}@").as_str()) | ||
.replace("bitbucket.org:", "bitbucket.org/"); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The access token is not being inserted into the URL correctly due to incorrect use of curly braces. Please verify this.
- let clone_url = git_url.to_string()
- .replace("git@", format!("https://x-token-auth:{{{access_token}}}@").as_str())
- .replace("bitbucket.org:", "bitbucket.org/");
+ let clone_url = git_url.to_string()
+ .replace("git@", &format!("https://x-token-auth:{}@", access_token))
+ .replace("bitbucket.org:", "bitbucket.org/");
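In Rust's `format!`, doubled braces are escapes for literal `{` and `}`, so `{{{access_token}}}` interpolates the token but wraps it in literal braces, which then end up in the clone URL. A quick demonstration:

```rust
fn main() {
    let access_token = "TOKEN";
    // Doubled braces are escapes: this embeds the token wrapped in literal { }.
    let wrong = format!("https://x-token-auth:{{{access_token}}}@");
    // Single braces interpolate the value only.
    let right = format!("https://x-token-auth:{access_token}@");
    println!("{wrong} vs {right}");
}
```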
vibi-dpu/src/core/setup.rs
Outdated
let exists_res = fs::metadata(&directory).await; | ||
if exists_res.is_err() { | ||
let e = exists_res.expect_err("No error in exists_res"); | ||
println!("executing metadata in {:?}, output: {:?}", | ||
&directory, e); | ||
if e.kind() != ErrorKind::NotFound { | ||
return; | ||
} | ||
} | ||
let remove_dir_opt = fs::remove_dir_all(&directory).await; | ||
if remove_dir_opt.is_err() { | ||
let e = remove_dir_opt.expect_err("No error in remove_dir_opt"); | ||
println!("Execute in directory: {:?}, remove_dir_all: {:?}", | ||
&directory, e); | ||
if e.kind() != ErrorKind::NotFound { | ||
return; | ||
} | ||
} | ||
let create_dir_opt = fs::create_dir_all(&directory).await; | ||
if create_dir_opt.is_err() { | ||
let e = create_dir_opt.expect_err("No error in create_dir_opt"); | ||
println!("Executing in directory: {:?}, create_dir_all: {:?}", | ||
&directory, e); | ||
if e.kind() != ErrorKind::NotFound { | ||
return; | ||
} | ||
} |
The logic for checking and creating the directory seems to be incorrect. It tries to get metadata of a directory that may not exist yet, then it tries to remove the directory (which may not exist), and finally creates the directory. This could be simplified by just trying to create the directory with `create_dir_all`, which will create all directories in the path if they do not exist.
- // Check if directory exists
- let exists_res = fs::metadata(&directory).await;
- if exists_res.is_err() {
- let e = exists_res.expect_err("No error in exists_res");
- println!("executing metadata in {:?}, output: {:?}",
- &directory, e);
- if e.kind() != ErrorKind::NotFound {
- return;
- }
- }
- let remove_dir_opt = fs::remove_dir_all(&directory).await;
- if remove_dir_opt.is_err() {
- let e = remove_dir_opt.expect_err("No error in remove_dir_opt");
- println!("Execute in directory: {:?}, remove_dir_all: {:?}",
- &directory, e);
- if e.kind() != ErrorKind::NotFound {
- return;
- }
- }
- let create_dir_opt = fs::create_dir_all(&directory).await;
- if create_dir_opt.is_err() {
- let e = create_dir_opt.expect_err("No error in create_dir_opt");
- println!("Executing in directory: {:?}, create_dir_all: {:?}",
- &directory, e);
- if e.kind() != ErrorKind::NotFound {
- return;
- }
- }
+ // Create the directory
+ if let Err(e) = fs::create_dir_all(&directory).await {
+ eprintln!("Failed to create directory: {:?}, error: {:?}", &directory, e);
+ return;
+ }
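If the intent is a clean checkout directory (not merely an existing one), the remove step can stay, but `NotFound` from it should count as success. A sketch of such a helper (`reset_dir` is an illustrative name, not from the PR):

```rust
use std::fs;
use std::io::ErrorKind;
use std::path::Path;

// Recreate `dir` as an empty directory. remove_dir_all's NotFound is not an
// error here: a missing directory is already "removed".
fn reset_dir(dir: &Path) -> std::io::Result<()> {
    match fs::remove_dir_all(dir) {
        Ok(()) => {}
        Err(e) if e.kind() == ErrorKind::NotFound => {}
        Err(e) => return Err(e),
    }
    fs::create_dir_all(dir) // creates all missing parents too
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("vibi_reset_dir_demo");
    reset_dir(&dir)?;
    println!("created {}", dir.display());
    Ok(())
}
```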
vibi-dpu/src/core/setup.rs
Outdated
let mut cmd = std::process::Command::new("git"); | ||
cmd.arg("clone").arg(clone_url).current_dir(&directory); | ||
let output_res = cmd.output(); | ||
if output_res.is_err() { | ||
let e = output_res.expect_err("No error in output_res in git clone"); | ||
eprintln!("Executing in directory: {:?}, git clone: {:?}", | ||
&directory, e); | ||
return; | ||
} | ||
let output = output_res.expect("Uncaught error in output_res"); | ||
match str::from_utf8(&output.stderr) { | ||
Ok(v) => println!("git pull stderr = {:?}", v), | ||
Err(e) => {/* error handling */ println!("git clone stderr error {}", e)}, | ||
}; | ||
match str::from_utf8(&output.stdout) { | ||
Ok(v) => println!("git pull stdout = {:?}", v), | ||
Err(e) => {/* error handling */ println!("git clone stdout error {}", e)}, | ||
}; |
The error handling for the git clone operation could be improved. Currently, it just prints the error and returns from the function. It would be better to propagate the error up to the caller so they can decide how to handle it.
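One way to sketch that propagation, using a generic command runner rather than the PR's exact function (names here are illustrative):

```rust
use std::process::Command;

// Run a command, returning stdout on success or a descriptive error the
// caller can decide how to handle (retry, log, abort, ...).
fn run_cmd(program: &str, args: &[&str]) -> Result<String, String> {
    let output = Command::new(program)
        .args(args)
        .output()
        .map_err(|e| format!("failed to spawn {program}: {e}"))?;
    if !output.status.success() {
        return Err(String::from_utf8_lossy(&output.stderr).into_owned());
    }
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}

fn main() {
    match run_cmd("echo", &["hello"]) {
        Ok(out) => print!("{out}"),
        Err(e) => eprintln!("command failed: {e}"),
    }
}
```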
Review Status
Actionable comments generated: 2
Files selected for processing (2)
- vibi-dpu/src/core/review.rs (4 hunks)
- vibi-dpu/src/utils/gitops.rs (2 hunks)
Additional comments (Suppressed): 5
vibi-dpu/src/core/review.rs (5)
- 14-18: The import of `core::coverage::process_coverage` has been uncommented, indicating that the `process_coverage` function will now be used in this module. Ensure that the function is implemented correctly and that it is compatible with the rest of the code in this module.
- 30-34: The `send_hunkmap` function is now asynchronous. This change could potentially improve performance by allowing other tasks to run while waiting for the `send_hunkmap` function to complete. However, ensure that the function's new asynchronous nature does not introduce any race conditions or other concurrency issues.
- 36-39: The `send_hunkmap` function has been changed from a regular function to an asynchronous function. This change could potentially improve performance by allowing other tasks to run while waiting for the `send_hunkmap` function to complete. However, ensure that the function's new asynchronous nature does not introduce any race conditions or other concurrency issues.
- 42-48: The `process_coverage` function is now being called with a cloned `hunkmap` and `review`. This change could potentially increase memory usage, as a copy of the `hunkmap` and `review` is now being created. However, this might be necessary if the `process_coverage` function modifies its arguments and these modifications should not affect the original `hunkmap` and `review`.
- 124-130: The author's UUID is now being used instead of the account ID. Ensure that this change is reflected in all parts of the codebase that use the author's identifier. Also, verify that the UUID is a more appropriate identifier to use in this context than the account ID.
vibi-dpu/src/bitbucket/reviewer.rs
Outdated
pub async fn add_reviewers(user_key: &str, review: &Review, access_token: &str) { | ||
let url = prepare_add_reviewers_url(review.repo_owner(), review.repo_name(), review.id()); | ||
let get_response = get_pr_info(&url, access_token).await; | ||
let put_request_body_opt = add_user_to_reviewers(get_response, user_key).await; |
This is the actual `prepare_add_reviewers_url` function.
use crate::bitbucket::auth::get_access_token_review; | ||
|
||
pub async fn process_coverage(hunkmap: &HunkMap, review: &Review) { | ||
let access_token_opt = get_access_token_review(review).await; |
Why is this called
Needs new PR
for prhunk in hunkmap.prhunkvec() { | ||
// calculate number of hunks for each userid | ||
let coverage_map = calculate_coverage(&hunkmap.repo_owner(), prhunk); | ||
if !coverage_map.is_empty() { |
Implement never nesting
After PR for email/display name fix
// process_coverage(&hunkmap_async).await; TODO - include in future PR | ||
let hunkmap_async = hunkmap.clone(); | ||
let review_async = review.clone(); | ||
process_coverage(&hunkmap_async, &review_async).await; |
We can add a status column to the hunks table in the database which we can update based on the success of this function for easier status report to the client.
Kudos, SonarCloud Quality Gate passed!
Review Status
Actionable comments generated: 6
Configuration used: CodeRabbit UI
Files selected for processing (2)
- vibi-dpu/src/bitbucket/reviewer.rs (1 hunks)
- vibi-dpu/src/core/coverage.rs (1 hunks)
Additional comments: 6
vibi-dpu/src/core/coverage.rs (6)
- 1-3: The import section is still quite cluttered and hard to read. Consider organizing your imports in a more structured way, for example by separating them into different categories (standard library, third-party libraries, internal modules) and sorting them alphabetically within each category.
- 7-37: The function `process_coverage` is still doing too much. It's handling access token retrieval, processing hunks, adding reviewers, and adding comments. This violates the Single Responsibility Principle (SRP). Consider breaking it down into smaller functions, each with a single responsibility.
- 9-12: There is still no error propagation when the access token cannot be retrieved. If this function is part of a larger process, it might be better to return an `Err` result instead of just printing an error message and returning nothing. This would allow the caller to decide how to handle the error.
- 13-13: Unwrapping an `Option` without checking if it's `None` can lead to a panic at runtime. Here, you've already checked that the `Option` is not `None` in the previous lines, but it's generally a good practice to avoid using `expect` unless absolutely necessary. Instead, consider using pattern matching or the `?` operator to handle the `Option` in a safer way.
- 14-35: There is still no error handling for the async functions `add_reviewers` and `add_comment`. If these functions fail due to network issues or API errors, the program may crash or behave unexpectedly. Consider adding error handling to these function calls.
- 77-90: The `comment_text` function is still creating a markdown table in a string. This is prone to errors and hard to maintain. Consider using a markdown library that provides a structured way to create tables.
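If pulling in a markdown crate isn't desirable, even a small helper keeps the table structure in one place. A sketch (the `markdown_table` helper is illustrative, not part of the PR):

```rust
// Minimal markdown-table builder: rows are validated to have the same width
// as the header, which ad-hoc string concatenation can't guarantee.
fn markdown_table(header: &[&str], rows: &[Vec<String>]) -> String {
    let mut out = format!("| {} |\n", header.join(" | "));
    out.push_str(&format!("|{}\n", " --- |".repeat(header.len())));
    for row in rows {
        assert_eq!(row.len(), header.len(), "row width must match header");
        out.push_str(&format!("| {} |\n", row.join(" | ")));
    }
    out
}

fn main() {
    let t = markdown_table(
        &["Reviewer", "Coverage"],
        &[vec!["alice".into(), "72.5%".into()]],
    );
    print!("{t}");
}
```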
let num_lines: f32 = blame.line_end().parse::<f32>().expect("lines_end invalid float") | ||
- blame.line_start().parse::<f32>().expect("lines_end invalid float") | ||
+ 1.0; |
Parsing the line numbers as floats could lead to precision issues. If the line numbers are integers, it would be better to keep them as integers and convert to float only when necessary for the division operation.
- let num_lines: f32 = blame.line_end().parse::<f32>().expect("lines_end invalid float")
- - blame.line_start().parse::<f32>().expect("lines_end invalid float")
- + 1.0;
+ let num_lines: f32 = (blame.line_end().parse::<i32>().expect("lines_end invalid integer")
+ - blame.line_start().parse::<i32>().expect("lines_start invalid integer")
+ + 1) as f32;
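A small illustration of the integer-first approach, also returning `Option` instead of panicking on malformed input (the helper name is hypothetical):

```rust
// Parse line bounds as integers and convert to f32 only once, for the later
// percentage arithmetic; bad input yields None instead of a panic.
fn hunk_line_count(line_start: &str, line_end: &str) -> Option<f32> {
    let start: i32 = line_start.parse().ok()?;
    let end: i32 = line_end.parse().ok()?;
    Some((end - start + 1) as f32)
}

fn main() {
    println!("{:?}", hunk_line_count("10", "14"));
}
```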
pub async fn add_reviewers(user_key: &str, review: &Review, access_token: &str) { | ||
let url = prepare_get_prinfo_url(review.repo_owner(), review.repo_name(), review.id()); | ||
let get_response = get_pr_info(&url, access_token).await; | ||
let reviewers_opt = add_user_to_reviewers(get_response, user_key).await; | ||
if reviewers_opt.is_none() { | ||
eprintln!("[add_reviewers] Unable to add reviewers for review: {}", review.id()); | ||
} | ||
let (reviewers, pr_info_json) = reviewers_opt.expect("Empty reviewers_opt"); | ||
let put_payload = prepare_put_body(&reviewers, &pr_info_json); | ||
put_reviewers(&url, access_token, &put_payload).await; | ||
} |
The function `add_reviewers` is a public function that is responsible for adding reviewers to a pull request. It does not return any value to indicate success or failure of the operation. It would be better to return a `Result` type to indicate whether the operation was successful or not. This would allow the caller to handle the result accordingly.
- pub async fn add_reviewers(user_key: &str, review: &Review, access_token: &str) {
+ pub async fn add_reviewers(user_key: &str, review: &Review, access_token: &str) -> Result<(), Box<dyn std::error::Error>> {
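The suggested signature can be sketched with a synchronous analogue to show the `?`-based short-circuiting (types and names here are illustrative, not the PR's):

```rust
#[derive(Debug)]
enum ReviewerError {
    MissingUser(String),
    Api(String), // a real version would map transport/API failures here
}

// Each fallible step either yields a value or short-circuits with `?`,
// so the caller sees exactly which step failed.
fn add_reviewer(users: &[&str], user_key: &str) -> Result<Vec<String>, ReviewerError> {
    let user = users
        .iter()
        .find(|u| **u == user_key)
        .ok_or_else(|| ReviewerError::MissingUser(user_key.to_string()))?;
    // A real version would PUT the updated reviewer list to the API here.
    Ok(vec![user.to_string()])
}

fn main() {
    match add_reviewer(&["alice", "bob"], "bob") {
        Ok(r) => println!("reviewers: {r:?}"),
        Err(e) => eprintln!("failed: {e:?}"),
    }
    let _ = ReviewerError::Api(String::new()); // silence unused-variant warning
}
```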
async fn add_user_to_reviewers(response_res: Option<Response>, user_key: &str) -> Option<(Vec<BitbucketUser>, Value)> { | ||
let reviewers_opt = parse_reviewers_from_prinfo(response_res).await; | ||
if reviewers_opt.is_none() { | ||
eprintln!("Unable to parse and add reviewers"); | ||
return None; | ||
} | ||
let (mut reviewers, get_response_json) = reviewers_opt.expect("Empty reviewers_opt"); | ||
println!("reviewers = {:?}", reviewers); | ||
// Get user from db who needs to be added to reviewers | ||
let user_from_db_opt = get_workspace_user_from_db(&user_key); | ||
if user_from_db_opt.is_none() { | ||
eprintln!("Empty user_from_db_opt"); | ||
return None; | ||
} | ||
let user_from_db = user_from_db_opt.expect("empty user_from_db_opt"); | ||
println!("user_from_db = {:?}", &user_from_db); | ||
// For each user in user_from_db.users()... | ||
for user in user_from_db.users().iter() { | ||
// If the reviewers vector doesn't contain the user... | ||
if !reviewers.contains(user) { | ||
// Add the user to reviewers | ||
reviewers.push(user.clone()); | ||
} | ||
} | ||
println!("Updated reviewers = {:?}", reviewers); | ||
return Some((reviewers, get_response_json)); | ||
} |
The function `add_user_to_reviewers` is responsible for adding a user to the list of reviewers for a pull request. It does not handle the case where the user is already a reviewer. It would be better to check if the user is already a reviewer before adding them to the list. This would prevent duplicate entries in the list of reviewers.
- for user in user_from_db.users().iter() {
- // If the reviewers vector doesn't contain the user...
- if !reviewers.contains(user) {
- // Add the user to reviewers
- reviewers.push(user.clone());
- }
- }
+ for user in user_from_db.users().iter() {
+ // If the reviewers vector doesn't contain the user...
+ if !reviewers.iter().any(|reviewer| reviewer.uuid == user.uuid) {
+ // Add the user to reviewers
+ reviewers.push(user.clone());
+ }
+ }
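A minimal sketch of the uuid-keyed deduplication the suggestion describes (the struct is trimmed to just the fields needed here):

```rust
#[derive(Clone, Debug, PartialEq)]
struct BitbucketUser {
    uuid: String,
    display_name: String,
}

// Merge workspace users into the reviewer list, keyed by uuid so that two
// records for the same account (e.g. differing display names) are not
// added twice, unlike a full-struct `contains` check.
fn merge_reviewers(mut reviewers: Vec<BitbucketUser>, users: &[BitbucketUser]) -> Vec<BitbucketUser> {
    for user in users {
        if !reviewers.iter().any(|r| r.uuid == user.uuid) {
            reviewers.push(user.clone());
        }
    }
    reviewers
}

fn main() {
    let merged = merge_reviewers(
        vec![BitbucketUser { uuid: "u1".into(), display_name: "Alice".into() }],
        &[
            BitbucketUser { uuid: "u1".into(), display_name: "Alice B.".into() },
            BitbucketUser { uuid: "u2".into(), display_name: "Bob".into() },
        ],
    );
    println!("{} reviewers", merged.len());
}
```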
fn prepare_put_body(updated_reviewers: &Vec<BitbucketUser>, pr_info_json: &Value) -> Option<Value> { | ||
// Serialize and add updated reviewers to response json | ||
let reviewers_obj_res = serde_json::to_value(updated_reviewers); | ||
if reviewers_obj_res.is_err() { | ||
let e = reviewers_obj_res.expect_err("No error in reviewers_obj_res"); | ||
eprintln!("Unable to serialize users: {:?}", e); | ||
return None; | ||
} | ||
let reviewers_obj = reviewers_obj_res.expect("Uncaught error in reviewers_obj_res"); | ||
let mut response_json = pr_info_json.to_owned(); | ||
let obj_opt = response_json.as_object_mut(); | ||
if obj_opt.is_none() { | ||
eprintln!("Unable to get mutable reviewer response obj"); | ||
return None; | ||
} | ||
// Update obj | ||
let obj = obj_opt.expect("empty obj_opt"); | ||
obj.insert("reviewers".to_string(), reviewers_obj); | ||
obj.remove("summary"); // API gives error if not removed | ||
return Some(response_json); | ||
} |
The function `prepare_put_body` is responsible for preparing the body of the PUT request to update the list of reviewers. It does not handle the case where serialization of the updated reviewers fails. It would be better to return an error in this case, so that the caller can handle it accordingly.
- if reviewers_obj_res.is_err() {
- let e = reviewers_obj_res.expect_err("No error in reviewers_obj_res");
- eprintln!("Unable to serialize users: {:?}", e);
- return None;
- }
+ let reviewers_obj = match reviewers_obj_res {
+ Ok(value) => value,
+ Err(e) => {
+ eprintln!("Unable to serialize users: {:?}", e);
+ return Err(Box::new(e));
+ }
+ };
async fn put_reviewers(url: &str, access_token: &str,put_body_opt: &Option<Value>) { | ||
if put_body_opt.is_none() { | ||
eprintln!("Empty put request body, not adding reviewers"); | ||
return; | ||
} | ||
let put_body = put_body_opt.to_owned().expect("Empty put_body_opt"); | ||
// Make the PUT API call | ||
let client = get_client(); | ||
let response_res = client | ||
.put(url) | ||
.bearer_auth(&access_token) | ||
.header("Accept", "application/json") | ||
.header("Content-Type", "application/json") | ||
.json(&put_body) | ||
.send().await; | ||
|
||
// Handle the response_res as necessary | ||
println!("response_res = {:?}", &response_res); | ||
// for debugging | ||
match response_res { | ||
Ok(v) => println!("response v = {:?}", v.text().await), | ||
Err(e) => println!("response err = {:?}", e) | ||
}; | ||
} |
The function `put_reviewers` is responsible for making a PUT request to update the list of reviewers. It does not handle the response of the PUT request. It would be better to handle the response and return a `Result` type to indicate whether the operation was successful or not.
- async fn put_reviewers(url: &str, access_token: &str,put_body_opt: &Option<Value>) {
+ async fn put_reviewers(url: &str, access_token: &str,put_body_opt: &Option<Value>) -> Result<(), Box<dyn std::error::Error>> {
async fn get_pr_info(url: &str, access_token: &str) -> Option<Response> { | ||
let client = get_client(); | ||
let headers_opt = prepare_headers(&access_token); | ||
if headers_opt.is_none() { | ||
eprintln!("Unable to prepare_headers, empty headers_opt"); | ||
return None; | ||
} | ||
let headers = headers_opt.expect("Empty headers_opt"); | ||
let get_res = client.get(url).headers(headers).send().await; | ||
if get_res.is_err() { | ||
let e = get_res.expect_err("No error in response_res"); | ||
eprintln!("Error in get request for adding reviewer - {:?}", e); | ||
return None; | ||
} | ||
let get_response = get_res.expect("Uncaught error in get_res"); | ||
return Some(get_response); | ||
} |
The function `get_pr_info` is responsible for making a GET request to retrieve information about a pull request. It does not handle the response of the GET request. It would be better to handle the response and return a `Result` type to indicate whether the operation was successful or not.
- async fn get_pr_info(url: &str, access_token: &str) -> Option<Response> {
+ async fn get_pr_info(url: &str, access_token: &str) -> Result<Response, Box<dyn std::error::Error>> {
Please check the action items covered in the PR -
In this PR, following flow is covered ->
-> User pushes to PR
-> next server gets webhook callback message, passes to dpu in pubsub
-> dpu receives pubsub message in listener.rs and fires up a task to process it
-> dpu calculates hunks
----------<> Everything up to this point is already implemented; the steps below are new in this PR <>-----------------
-> from hunks, dpu creates comment message and adds it
-> dpu adds relevant reviewers if they are present in sled db (are a member of the workspace) and not already added to the PR
Summary by CodeRabbit
`tokio` runtime.