This repository has been archived by the owner on May 12, 2021. It is now read-only.

agent: set sandbox id log field in CreateSandbox #309

Merged
bergwolf merged 1 commit into kata-containers:master from the agent_log branch on Jul 31, 2018

Conversation

bergwolf (Member):

This effectively reverts 04457e3 ("logging: Add sandbox field").
CreateSandbox() is the point at which we learn which sandbox id
this guest will use. If we rely on the kernel command line for it,
we have to put the sandbox id in the qemu args, which breaks the vm factory.

Fixes: #308

cc @jodh-intel
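
For illustration, a minimal sketch of the idea, assuming the agent's logrus-based logger; the `id`/`hostname` fields match the diff below, but the rest of the wiring is simplified and not the actual agent code:

```go
package agent

import (
	"context"

	"github.com/sirupsen/logrus"
)

// Stand-ins for the agent types; illustrative only.
type CreateSandboxRequest struct {
	Hostname  string
	SandboxId string
}

type sandbox struct {
	id       string
	hostname string
}

type agentGRPC struct {
	sandbox *sandbox
}

var agentLog = logrus.NewEntry(logrus.New())

// CreateSandbox is the first sandbox call the runtime makes, so it is
// the earliest point at which the guest learns its sandbox id.
func (a *agentGRPC) CreateSandbox(ctx context.Context, req *CreateSandboxRequest) error {
	a.sandbox.id = req.SandboxId
	a.sandbox.hostname = req.Hostname

	// Attach the sandbox id to all later log entries, so the runtime
	// no longer has to bake it into the qemu kernel command line.
	agentLog = agentLog.WithField("sandbox", req.SandboxId)
	return nil
}
```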

codecov bot commented Jul 27, 2018

Codecov Report

Merging #309 into master will increase coverage by 0.24%.
The diff coverage is 0%.


@@            Coverage Diff             @@
##           master     #309      +/-   ##
==========================================
+ Coverage    43.4%   43.64%   +0.24%     
==========================================
  Files          14       14              
  Lines        2336     2362      +26     
==========================================
+ Hits         1014     1031      +17     
- Misses       1187     1195       +8     
- Partials      135      136       +1
Impacted Files   Coverage Δ
config.go        89.13% <ø> (+3.71%) ⬆️
agent.go         31.44% <ø> (ø) ⬆️
grpc.go          31.88% <0%> (-0.15%) ⬇️
channel.go       50.72% <0%> (+3.82%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 76470f0...745fa71.

@@ -87,6 +87,7 @@ type sandbox struct {
sync.RWMutex

id string
hostname string
Contributor:

These hostname changes seem unrelated? Maybe put them in a separate commit with an explanation?

bergwolf (Member Author):

It's related... The original code saved the hostname to sandbox.id; I moved it to sandbox.hostname.

Contributor:

Ah - right, thanks.

@@ -251,12 +251,13 @@ message CreateSandboxRequest {
string hostname = 1;
repeated string dns = 2;
repeated Storage storages = 3;

Contributor:

Nit: Whitespace damage.

bergwolf (Member Author):

I deleted the extra whitespace because it was added unintentionally. I don't think we need whitespace there.

Contributor:

ack

// a shared pid namespace. All containers created will join this shared
// pid namespace.
bool sandbox_pidns = 4;
string sandbox_id = 5;
Contributor:

It might be worth adding a comment here explaining that we don't need this member in the other sandbox and network calls (as listed in kata-containers/tests#453 (comment)) since we implicitly require CreateSandbox() to be called before any other sandbox/network calls :)
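
For illustration, extending the sketch above: because CreateSandbox() is implicitly required to run before any other sandbox/network call, later handlers can rely on the stored id rather than carrying a sandbox_id field of their own. The request type and error handling here are hypothetical (add "errors" to the imports of the earlier sketch):

```go
// Hypothetical request type; the real one carries container details.
type CreateContainerRequest struct {
	ContainerId string
}

// Any call after CreateSandbox() can rely on the stored id, which is
// why only CreateSandboxRequest needs a sandbox_id field.
func (a *agentGRPC) CreateContainer(ctx context.Context, req *CreateContainerRequest) error {
	if a.sandbox.id == "" {
		return errors.New("CreateSandbox() must be called first")
	}
	// a.sandbox.id is authoritative here; req does not repeat it.
	return nil
}
```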

bergwolf (Member Author):

Sure, I'll add some text about it.

bergwolf (Member Author):

Updated. PTAL. @jodh-intel

This effectively reverts 04457e3 ("logging: Add sandbox field").
CreateSandbox() is the point at which we learn which sandbox id
this guest will use. If we rely on the kernel command line for it,
we have to put the sandbox id in the qemu args, which breaks the vm factory.

Fixes: kata-containers#308

Signed-off-by: Peng Tao <bergwolf@gmail.com>
jodh-intel (Contributor) commented Jul 27, 2018

lgtm!

Approved with PullApprove

devimc left a comment:

lgtm

sboeuf commented Jul 27, 2018

@chavafg @GabyCT Swarm CI is not happy, any idea here?

GabyCT (Contributor) commented Jul 27, 2018

@sboeuf, the problem is not Swarm: the test is detecting that some processes are already running before the Swarm tests start:

 `check_processes ${SHIM_PATH}' failed

sboeuf commented Jul 27, 2018

@GabyCT I see, but this check_processes() function is called during teardown, right?
So I would assume something went wrong during the swarm test that left the shim still running?

GabyCT (Contributor) commented Jul 27, 2018

@sboeuf, using the following configuration:

kata-runtime : 1.1.0
commit : cfbc974
OCI specs: 1.0.1
kata-proxy version 1.1.0-c416c9fc2ff8389c201075f0f26bfabd08a3b140
kata-shim version 1.1.0-6ddca0334d0e8dee6e1e00f0c120fde4b8c1c176
kata-agent version 1.1.0-c6ac153f0c4c03b475d2b65bb73427cd33459bf7

I ran the swarm tests in a loop 15 times and there were no issues. However, if I run the kubernetes tests first and then swarm, I get the failure that some processes were left behind.

bergwolf (Member Author):

As agreed, we do not trust codecov/patch, so I am merging this since all other checks have passed. I'll see how to disable codecov/patch globally (e.g., via the codecov team yaml).
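
For reference, turning off the patch status looks roughly like this; the coverage.status.patch key is documented Codecov configuration, though where the team-wide yaml lives is an assumption here:

```yaml
# Sketch: disable the codecov/patch status check in codecov.yml.
coverage:
  status:
    patch: off
```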

@bergwolf bergwolf merged commit 061f7a6 into kata-containers:master Jul 31, 2018
@bergwolf bergwolf deleted the agent_log branch April 8, 2019 08:28