Ignore InVMArtifactsProfile.OnHold once it becomes false (#481) #482

Closed
wants to merge 1 commit

Conversation

@jinhyunr (Contributor) commented Nov 7, 2016

InVMArtifactsProfile.OnHold is used to support the overprovisioning feature of CRP VMSS. The flag is therefore meaningful only at the very beginning of the VM lifecycle: once it becomes False, the Agent should never again put extension processing on hold. However, under certain circumstances, such as the VM losing outbound internet connectivity, the current Agent cannot make the correct decision because it loses access to the VMArtifactsProfile.

This PR resolves that issue.

With this change, the Agent decides whether to hold extension processing based on the following conditions:

  1. If VMArtifactsProfileBlobUrl is null/empty, do not hold.
  2. If the WasOnHoldEverFalse file exists, do not hold.
  3. If VMArtifactsProfile is null, hold.
  4. If VMArtifactsProfile is not null, honor its onHold flag.
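The four rules above can be sketched as a single predicate. This is a minimal, self-contained illustration only; the actual change threads these checks through the agent's protocol and config objects, and the parameter names here are assumptions:

```python
import os

def is_extension_handling_on_hold(blob_url, artifacts_profile, sentinel_path):
    """Simplified sketch of the four decision rules from the PR description.

    blob_url          -- the VMArtifactsProfileBlobUrl from the extensions config
    artifacts_profile -- the parsed InVMArtifactsProfile, or None when the
                         profile blob could not be fetched (e.g. the VM has
                         lost outbound internet connectivity)
    sentinel_path     -- path of the WasOnHoldEverFalse sentinel file
    """
    # 1. No profile blob URL configured: never hold.
    if not blob_url or blob_url.isspace():
        return False
    # 2. onHold has already been observed as False once: never hold again.
    if os.path.isfile(sentinel_path):
        return False
    # 3. Profile unreachable and onHold never yet seen as False: stay on hold.
    if artifacts_profile is None:
        return True
    # 4. Profile available: honor its onHold flag, recording the first
    #    transition to False so that rule 2 applies from now on.
    on_hold = artifacts_profile.is_extension_handlers_handling_on_hold()
    if not on_hold:
        open(sentinel_path, "w").close()
    return on_hold
```

Note how rule 2 makes the decision sticky: after onHold is seen as False once, a later loss of connectivity (rule 3) can no longer put extensions back on hold.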

@hglkrijger (Member) left a comment

Is there a cleaner way to check/indicate whether we are in a restored VM? I know we already write a sentinel file to determine clean shutdown. Can we tie into that, perhaps?

I think we should figure out if and how this applies to AzureStack as well.

# If in_vm_artifacts_profile_blob does not exist, do not hold
if ext_conf and \
        ext_conf.in_vm_artifacts_profile_blob and not \
        ext_conf.in_vm_artifacts_profile_blob.isspace():

should this be changed to the new function you just added?

    Silently remove file
    """
    try:
        os.remove(filepath)

we might want some verbose logging here for debugging purposes
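The suggested verbose logging could look roughly like this. The helper name `rm_file_quietly` and the use of the standard `logging` module are illustrative assumptions, not the agent's actual API:

```python
import errno
import logging
import os

logger = logging.getLogger(__name__)

def rm_file_quietly(filepath):
    """Remove a file without raising when it is already gone,
    but leave a verbose trail for debugging (hypothetical sketch)."""
    try:
        os.remove(filepath)
        logger.debug("Removed file: %s", filepath)
    except OSError as e:
        if e.errno != errno.ENOENT:
            # Unexpected failures (e.g. permission errors) still propagate.
            raise
        logger.debug("File already absent, nothing to remove: %s", filepath)
```

Keeping the exception filter narrow (only ENOENT is swallowed) means genuine failures such as permission errors remain visible rather than being silently ignored.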

in_vm_artifacts_profile = self.protocol.get_in_vm_artifacts_profile()
if in_vm_artifacts_profile and \
        in_vm_artifacts_profile.is_extension_handlers_handling_on_hold():
if self.protocol.is_ext_handling_on_hold():

Does this method need to be implemented or stubbed out in the Azure Stack / metadata protocol as well?

if not is_extension_handling_on_hold and not os.path.isfile(was_on_hold_ever_set_to_false):
    # If extension handling is not on hold, create the WAS_ON_HOLD_EVER_SET_TO_FALSE file.
    # Once on hold is set to false, WALA should never hold extension handling in the current VM.
    fileutil.write_file(was_on_hold_ever_set_to_false, "")

should probably add more logging throughout this method?

def is_extension_handling_on_hold(self):
    is_extension_handling_on_hold = False
    ext_conf = self.ext_conf
    was_on_hold_ever_set_to_false = os.path.join(conf.get_lib_dir(), WAS_ON_HOLD_EVER_SET_TO_FALSE)

Maybe it's just me, but this name confuses me. Can we pick something else?

@hglkrijger (Member)

Discussed offline; this is on hold, so closing the PR for now.

@hglkrijger hglkrijger closed this Dec 6, 2016