From 808b5340bd83483c312f9fad47e40a2e090d7b5e Mon Sep 17 00:00:00 2001 From: Cameron Motevasselani Date: Tue, 7 May 2019 17:18:04 -0700 Subject: [PATCH] chore(rebase): Update to upstream master (#4) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * feat(delivery): upsert and delete delivery configs through orca (#2672) * fix(redis): Fix intermittently failing test (#2675) Another test fails occasionally due to non-monotonic ULIDs over short timescales. * feat(cf): Move service polling into orca (#2671) spinnaker/spinnaker#3637 Co-Authored-By: Jason Chu Co-Authored-By: Jammy Louie * chore(serviceaccounts): Do not update service account if no change in roles (#2674) * chore(serviceaccounts): Do not update service account if no change in roles After the introduction of the OR mode for checking permissions, a regular user should be able to modify the pipeline if it has any of the roles in the pipeline (not necessarily all of them). However, the service user is created in every save operation, which prevents users from updating the pipeline when the OR mode is enabled. This patch skips creating/updating the service user in case the user already existed and the roles were the same as in the pipeline definition. This change allows the user to update the pipeline only when the roles are not changed, to avoid privilege escalation. * Return service account even though it's not updated ... so the save pipeline task updates the triggers accordingly. * chore(dependencies): Autobump spinnaker-dependencies (#2678) * fix(clouddriver): Expose attributes of `StageDefinition` (#2682) * fix(MPT): Propagate notifications from pipeline config (#2681) One can specify notifications on the pipeline in deck (even if the pipeline is templated). However, those notifications are not respected because they don't end up in the pipeline config. Copy them there so notifications work as expected.
* fix(execution): Ensure exceptions in stage planning are captured (#2680) Presently any exceptions that occur during stage planning are not captured and therefore will not show up in the execution JSON/UI for the end user to see. This can be very confusing, as there is no explanation of why a pipeline stage fails. (reference SPIN-4518) * fix(triggers): surface error if build not found (#2683) * fix(kayenta): fix NPE when moniker defined without a cluster name (#2684) * fix(orca-redis): delete stageIndex when deleting a pipeline (#2676) * test(core): Basic startup test (#2685) * test(core): Basic startup test Test that orca starts up with a very basic config, with just the baseUrl for each dependent service defined, and an in-memory execution queue. * test(core): Remove optional dependencies Also add some mock beans and parameters to avoid creating any beans depending on redis, and remove the redis url from the config. * fix(core): Add omitted copyright header * feat(kubernetes): add dynamic target selection to Patch Manifest Stage (#2688) * fix(execution): Honor 'if stage fails' config for synthetic stages. (#2686) * fix(templated-pipelines): handle a missing notifications field in the template config (#2687) * feat(gremlin): Adds a Gremlin stage (#2664) Gremlin is a fault-injection tool, and this addition wraps its API to allow for creating, monitoring, and halting a fault injection.
* feat(spel): add manifestLabelValue helper function (#2691) * fix(build): make gradle use https (#2695) https://github.com/spinnaker/spinnaker/issues/3997 * fix(expressions): Fix ConcurrentModificationException (#2698) Fixes error introduced by me in https://github.com/spinnaker/orca/pull/2653 * chore(artifacts): Clean up generics in ArtifactResolver (#2700) * feat(artifacts): Add stage artifact resolver (#2702) The new method is used to fully resolve a bound artifact on a stage that can either select an expected artifact ID for an expected artifact defined in a prior stage or as a trigger constraint OR define an inline expression-evaluable default artifact. * refactor(core): Allow registering custom SpEL expression functions (#2701) Provides a strategy for extending Orca's SpEL expression language with new helper functions. Additionally offers a more strongly typed and documented method of building these helpers, as well as namespacing capabilities in case an organization wants to provide a canned group of helper functions. * feat(redblack): pin min size of source server group * fix(redblack): unpin min size of source server group when deploy fails * fix(artifacts): handle bound artifact account missing (#2705) the bound artifact should use the matched artifact account if it doesn’t have one * feat(upsertScalingPolicyTask): make upsertScalingPolicyTask retryable (#2703) * feat(upsertScalingPolicyTask): make upsertScalingPolicyTask retryable * feat(upsertScalingPolicyTask): make upsertScalingPolicyTask retryable * feat(upsertScalingPolicyTask): make upsertScalingPolicyTask retryable * feat(MPTv2): Support artifacts when executing v2 templated pipelines. (#2710) * fix(imagetagging): retry for missing namedImages (#2713) * refactor(cf): Adopt artifacts model for CF deployments (#2714) * chore(jinja): Upgrade jinjava (#2717) The ModuleTag class breaks with jinjava >= 2.2.8 because of a fix in failsOnUnknownTokens. 
We need to catch this error in cases where we expect that a token may be unknown. The test also uses True as a literal which should be true (ie, lowercase) which worked before but breaks with the upgrade. * chore(dependencies): Autobump spinnaker-dependencies (#2716) * fix(artifacts): Revert double artifact resolution (#2719) This reverts commit 2f766c870a2268d8e7561cdcf62156f7d95d7d83. * feat(spel): New `currentStage()` function (#2718) This fn provides a direct reference to the currently executing `Stage`. * feat(core): add #stageByRefId SpEL helper function (#2699) * feat(spel): Moved `stageByRefId` to `StageExpressionFunctionProvider` (#2721) * fix(MPTv2): Supports artifact resolution for v2 MPTs. (#2725) * fix(artifacts): Make artifact resolution idempotent (#2731) Artifact resolution is not currently idempotent; this causes occasional bugs where trying to resolve again causes errors due to duplicate artifacts. In order to be truly idempotent, also changed the Set instances in the resolution to LinkedHashSet so that the order of operations (and resulting artifact lists in the pipeline) are stable and deterministic. * feat(rrb): add support for preconditions check as part of deploy stage Surprising things can happen when we start a deployment with a starting cluster configuration where there are multiple active server groups. With this change, we make an attempt to fail the deployment earlier, in particular before we make potentially dangerous changes to the infrastructure. * feat(core): Add support for an Artifactory Trigger (#2728) Co-Authored-By: Jammy Louie * debug(clouddriver): Log when initial target capacity cannot be found (#2734) * debug(clouddriver): Log when initial target capacity cannot be found (#2736) * fix(mpt): temporarily pin back jinjava (#2741) jinjava >= 2.2.9 breaks some v1 pipeline evaluation due to a change in unknown token behavior that needs to be handled. 
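The idempotency fix in #2731 above hinges on `LinkedHashSet` preserving insertion order, so repeated resolution passes yield the same artifact list and re-adding an already-resolved artifact is a no-op. A minimal sketch of just that ordering point (the class and method names here are illustrative, not Orca's actual resolver):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ArtifactOrderingSketch {
    // Stand-in for a set of resolved artifacts; LinkedHashSet keeps
    // insertion order stable and deterministic across runs.
    public static List<String> resolveTwice(List<String> incoming) {
        Set<String> resolved = new LinkedHashSet<>();
        resolved.addAll(incoming); // first resolution pass
        resolved.addAll(incoming); // second pass is a no-op: idempotent
        return new ArrayList<>(resolved);
    }

    public static void main(String[] args) {
        // Duplicates collapse, first-seen order is kept.
        System.out.println(resolveTwice(List.of("gcs://a", "docker://b", "gcs://a")));
    }
}
```

With a plain `HashSet` the resulting list order would depend on hash buckets rather than on the order artifacts were bound, which is what made repeated resolution unstable.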
* debug(clouddriver): Include cloudprovider in server group capacity logs (#2742) * debug(clouddriver): Include cloudprovider in server group capacity logs (#2744) * feat(cloudformation): support YAML templates (#2737) Adds support for YAML templates by attempting to deserialize using snakeyaml instead of the object mapper. Since YAML is a superset of JSON, snakeyaml can process either format properly. * fix(provider/azure): Failed to delete firewall (#2747) Background: deleting a firewall currently fails with a timeout exception in the force cache refresh stage. On investigation, the delete firewall task and the force cache refresh task were being processed concurrently; if the delete firewall task does not get a response from Azure within 20 seconds, the force cache refresh task throws a timeout exception. Fix: update the execution order for deleting a firewall so that the force cache refresh task runs only after the monitor delete task completes. With this change, firewall deletion succeeds. * feat(core): add save pipelines stage (#2715) * feat(core): add save pipelines stage This stage will be used to extract pipelines from an artifact and save them. Spinnaker is a tool for deploying code, so when we treat pipelines as code it makes sense to use Spinnaker to deploy them. Imagine you have your pipelines in a GitHub repo, and on each build you create an artifact that describes your pipelines. This CircleCI build is an example: https://circleci.com/gh/claymccoy/canal_example/14#artifacts/containers/0 You can now create a pipeline that is triggered by that build, grab the artifact produced, and (with this new stage) extract the pipelines and save them. It performs an upsert, where it looks for existing pipelines by app and name and uses the id to update in that case. The format of the artifact is a JSON object where the top level keys are application names with values that are a list of pipelines for the app.
The nested pipeline JSON is standard pipeline JSON. Here is an example: https://14-171799544-gh.circle-artifacts.com/0/pipelines/pipelines.json For now this simply upserts every pipeline in the artifact, but in the future it could allow you to specify a subset of apps and pipelines and effectively test and increase the scope of pipeline rollout. It (or a similar stage) could also save pipeline templates in the future. * Use constructors and private fields rather than autowired * summarize results of pipeline saves * Summarize save pipelines results as created, updated, or failed * fix(evaluateVariables): enable EvaluateVariables stage to run in dryRun (#2745) Variables are hard to get right, hence the EvaluateVariables stage, but if it doesn't run in `dryRun` it makes quickly iterating on it harder. * feat(clouddriver): Favor target capacity when current desired == 0 (#2746) This handles a situation where we appear to be getting an occasional stale min/max/desired when an asg id is re-used across deployments, i.e. app-v000 was deleted and re-created. Also reaching out to AWS for clarification as to whether this is expected or not. * feat(buildservices): Permission support for build services (CI's) (#2673) * refactor(triggers): Gate calls to igor from orca (#2748) Now that echo handles augmenting triggers with build info, and that manual triggers default to go through echo, all triggers should be arriving in Orca with their build information already populated. We should gate the logic in Orca to only run if it's not there. We can't completely remove this logic yet because while manual triggering via echo defaults to enabled, there's still a flag to turn it off. Once that flag is deprecated, and we're confident that all manual triggers (including any via the API) go through echo, we can completely remove this block of code. This commit also completely removes the populating of taggedImages, which has not been used since spinnaker/orca#837.
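The save pipelines stage described above (#2715) upserts by looking for an existing pipeline with the same application and name, reusing its id so the save becomes an update. A small sketch of that matching rule, using plain maps as an illustrative stand-in for the pipeline JSON (not the actual stage code):

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class SavePipelinesSketch {
    // Incoming pipelines come from the artifact: a JSON object keyed by
    // application name, each value a list of standard pipeline maps.
    // This helper applies the upsert rule for one incoming pipeline.
    public static Map<String, Object> upsert(List<Map<String, Object>> existing,
                                             Map<String, Object> incoming) {
        for (Map<String, Object> p : existing) {
            if (Objects.equals(p.get("application"), incoming.get("application"))
                    && Objects.equals(p.get("name"), incoming.get("name"))) {
                // Match by app + name: reuse the existing id so this is an update.
                incoming.put("id", p.get("id"));
                return incoming;
            }
        }
        return incoming; // no match: saved as a brand-new pipeline
    }
}
```

Per the later #2755 change, when several pipelines are being saved a single failure yields FAILED_CONTINUE rather than TERMINAL, so the stage can report a summary of created, updated, and failed pipelines.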
* feat(metrics): convert stage.invocations.duration to PercentileTimer (#2743) * feature(ci): Fetch artifacts from CI builds (#2723) * refactor(ci): Convert BuildService and IgorService to Java Both of these files are trivial to convert to Java. Also, in IgorService, replace deprecated @EncodedPath with the equivalent @Path(encode = false). * refactor(ci): Clean up inheritance of CI stages It's confusing that Travis and Wercker stages extend Jenkins stages; make an abstract CIStage that these all extend. Also convert these stages to Java, and clean up some of the string interpolation that was hard to read. * feature(ci): Fetch artifacts from CI builds Jenkins triggers support inflating artifacts from the build results based on a Jinja template specified in the build's properties. Add the same functionality to Jenkins stages. Use the property file defined in the CI stage to extract artifacts from the CI stage build. * refactor(ci): Pull getProperties into its own task The prior commit added general support for retrying tasks that communicate with Igor; update getProperties to be its own task and have it use that new support. * feat(cf): Add Sharing / Unsharing of services (#2750) - Also removed Autowired fields from com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker.*ServiceStage - Also converted all service-related tests from Spock to JUnit spinnaker/spinnaker#4065 Co-Authored-By: Jason Chu * fix(core): Fix startup (#2753) Orca no longer starts up without front50 because a dependency on front50 was added to a task bean. Halyard needs an orca without front50 to bootstrap deploy to kubernetes. * test(core): Remove front50 bean from startup test (#2752) When deploying to Kubernetes, halyard uses a bootstrap orca that doesn't have front50 enabled. This means that we need to keep orca starting without front50 around; set it to false in the test and remove the mock bean.
* fix(clouddriver): Revert change to pin source capacity for redblack deploys (#2756) * fix(authz): Fix copying pipelines with managed service accounts (#2754) * fix(authz): Fix copying pipelines with managed service accounts Copying pipelines currently fails when managed service accounts are enabled. This commit fixes that by generating a pipeline id (UUID) in Orca before generating the managed service account. * Set flag for cron trigger update, overwrite old managed service accounts * fix(pipelines): Remove gating of SavePipelineTask (#2759) * fix(ci): Fix cast error in CIStage (#2760) * fix(ci): Fix cast error in CIStage WaitForCompletion is a Boolean, but given that the prior groovy code accepted either a String or a Boolean, being defensive and handling both. * fix(ci): Fix default value * feat(pipelines): if saving multiple pipelines, continue on failure (#2755) This is a minor change to SavePipelineTask. By default it will still fail with a TERMINAL status exactly as before. I’ve added a unit test for this original behavior. But if it detects that multiple pipelines are being saved, then the failure status will be FAILED_CONTINUE instead. This allows the SavePipelinesFromArtifactStage to attempt to save all the pipelines from the artifact and then give a summary of the results. * fix(scriptStage): add get properties task (#2762) * chore(dependencies): Autobump spinnaker-dependencies (#2763) * fix(executions): Break up execution lookup script. (#2757) The previous implementation was expensive in terms of redis memory usage and eventually would cause slowdowns of redis command processing. * feat(expressions): adding #pipeline function (#2738) Added #pipeline function for usage in SpEL expressions. 
It returns the ID of the pipeline given the name of the pipeline (within the same app only). * fix(aws): fix NPE when image name is not yet available (#2764) * feat(kubernetes): add expression evaluation options to bake and deploy manifest stages (#2761) * fix(docker): fix gradle build step (#2765) * feat(core): Add support for Concourse triggers (#2770) * fix(tests): Use JUnit vintage engine so Spock tests still run (#2766) * test(ci): Add tests to JenkinsStage and ScriptStage (#2768) * test(ci): Add tests to JenkinsStage and ScriptStage There are no tests verifying that waitForCompletion is properly read when starting a Jenkins stage; this functionality recently broke, so add some tests to it. Also add some tests to verify that binding of artifacts works correctly. The ScriptStage has no tests at all; add a simple test that it fetches the properties after the script finishes. * refactor(ci): Move CIJobRequest and rename it CIStageDefinition This definition was originally only used by one specific task, but it really represents the full stage definition of a CIStage, so rename it and move it to a more appropriate location. * refactor(ci): Remove explicit casting from CIStage In order to quickly fix a bug, I explicitly added some code to coerce waitForCompletion to a boolean. Let Jackson handle this by adding it to the CIStageDefinition model, and also do the same with the pre-existing expectedArtifacts field. * test(pipelinetemplates): Add tests to ModuleTag (#2767) I was trying to reproduce the particular case that broke on the Jinjava upgrade. So far I've been unsuccessful, as these tests all pass both before and after the upgrade. But it's probably worth committing the tests anyway, just to prevent other issues in the future.
* fix(jenkins): Fix Jenkins trigger serialization (#2774) * chore(core): Remove noisy spel function registration log message (#2776) * fix(spel): Optionally auto-inject execution for spel helpers (#2772) * fix(MPTv2): Avoid resolving artifacts during v2 MPT plan. (#2777) * fix(unpin): bug caused us to not be able to unpin to 0 0 was being used in the Elvis operator but it is not Groovy-truthy. Fix that to an actual null check. * fix(unpin): touch up ResizeStrategySupportSpec Delete unused methods and add a few comments * chore(logging): add a log wrapper in chatty WaitForUpInstancesTask This allows us to consolidate the multiple log messages into a single one, and silences calls that don't pass a splainer object to calculateTargetDesiredSize. * chore(logging): add complete explanations The goal is to provide a single consolidated log in WaitForCapacityMatchTask and WaitForUpInstancesTask that explains every step of the decision. No more guessing! * fix(concourse): Fix concourse build info type (#2779) * feat(ecs): Grab container image from trigger or context (#2751) * fix(artifacts): Fix successful filter for find artifacts (#2780) When filtering to only include successful executions, the front-end sets the flag 'successful' in the stage config. Orca is incorrectly looking for a value 'succeeded', and thus ignores the value that was set on the front-end. While 'succeeded' is slightly better named as it matches the execution status, changing this on the backend means that existing pipelines will work automatically. (Whereas changing it on the front-end would require either a pipeline migrator or for anyone affected to re-configure the stage.) To avoid breaking anyone who manually edited their stage config to set 'succeeded' because that's what the back-end was looking for, continue to accept that in the stage config.
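The unpin bug above is the classic Groovy-truthiness trap: `x ?: y` falls back to `y` not only when `x` is null but also when it is 0, so an unpin to min size 0 was silently lost. A Java sketch of the two behaviors (method names are illustrative, not the actual resize-strategy code):

```java
public class UnpinSketch {
    // Buggy Elvis-style fallback: emulates Groovy truthiness, where 0 is
    // falsy, so a legitimate unpinned min of 0 is replaced by the pinned value.
    public static int buggyMin(Integer unpinnedMin, int pinnedMin) {
        return (unpinnedMin != null && unpinnedMin != 0) ? unpinnedMin : pinnedMin;
    }

    // Fixed: an actual null check lets 0 through as a real value.
    public static int fixedMin(Integer unpinnedMin, int pinnedMin) {
        return unpinnedMin != null ? unpinnedMin : pinnedMin;
    }
}
```

The same trap applies to any Groovy `?:` over a numeric field that may legitimately be zero.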
* chore(dependencies): Autobump spinnaker-dependencies (#2784) * feat(core): Add support for a condition aware deploy preprocessor (#2749) - adds ability to add a synthetic wait before a deploy stage - pauses a deployment if certain conditions are not met - provides visibility into currently unmet conditions * fix(triggers): Add Jenkins and Concourse build info types to allowed deserialization types (#2785) * fix(clouddriver): Reduce jardiff connect/read timeouts (#2786) This handles situations where a security group may prevent ingress and otherwise time out after the default (10s). We automatically retry 5 times so our deployments are taking _at least_ 50s longer than they need to. This isn't perfect but that 50s will be lowered to 10s. * fix(sql): Fix intermittently failing tests (#2788) Some of the execution repository tests assert the ordering of executions retrieved from the database, but ULIDs are only monotonic with millisecond resolution. The fix has been to sleep for 1ms between creating executions, which mostly fixed the tests but there are still occasional failures where two executions have the same timestamp bits in their ULID. To reduce these failures, just wait 5ms between adding executions to the database. * fix(conditions): Make task conditional per config (#2790) - s/@ConditionalOnBean/@ConditionalOnExpression * chore(dependencies): Autobump spinnaker-dependencies (#2789) * fix(queue): Ensure that after stages run with an authenticated context (#2791) This addresses a situation where the `DeployCanaryStage` failed to plan after stages that involved a lookup against a restricted account. * feat(gcb): Add Google Cloud Build stage (#2787) Add a stage that triggers a build using Google Cloud Build. At this point, the stage only accepts a build definition directly in the stage, and does not wait for the build to complete.
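The flaky-test fix above works because a ULID embeds a millisecond timestamp prefix: two IDs minted in the same millisecond share the same timestamp bits and then sort by their random component, which may invert creation order. A sketch of that property with a simplified time-prefixed ID (an illustrative stand-in, not a real ULID implementation):

```java
public class TimePrefixedIdSketch {
    // Simplified stand-in for a ULID: zero-padded millisecond timestamp
    // prefix plus an arbitrary "random" suffix.
    public static String mint(long millis, String randomPart) {
        return String.format("%013d-%s", millis, randomPart);
    }

    public static void main(String[] args) throws InterruptedException {
        String first = mint(System.currentTimeMillis(), "zzz");
        Thread.sleep(5); // the fix: 5ms apart guarantees distinct timestamp bits
        String second = mint(System.currentTimeMillis(), "aaa");
        // Lexicographic order now follows creation order despite the suffixes.
        System.out.println(first.compareTo(second) < 0);
    }
}
```

With identical timestamps the suffix decides the ordering, which is exactly the occasional failure the 5ms spacing avoids.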
* feat(clouddriver): Remove ec2-classic migration code (#2794) * fix(loadbalancer): wait for onDemand cache processing when supported (#2795) * chore(clouddriver): rest helper for deploying clouddriver-sql with an empty cache (#2690) * chore(logging): remove high volume logs used to debug old issue * fix(imagetagging): asset foundImage count >= upstreamImageIds count (#2799) * feat(runJob): support kubernetes jobs (#2793) adds support for preconfigured run job stages for kubernetes v2 * fix(preconfiguredJob): add tests preconfig job (#2802) adds tests for the preconfigured job stage for upcoming groovy -> java refactor. * Revert "fix(imagetagging): asset foundImage count >= upstreamImageIds count (#2799)" (#2807) This reverts commit 0d25d50cb7852b74a0747c969d88faec355ac9e5. * fix(MPTv2): Fix pipeline triggers for v2 templated pipelines. (#2803) * chore(dependencies): Autobump spinnaker-dependencies (#2800) * feat(gremlin): Add Halyard config for Gremlin (#2806) * fix(clouddriver): Enable the parameter of allowDeleteActive (#2801) * refactor(MPTv2): Change nomenclature from version to tag. (#2814) * fix(MPTv2): Allow unresolved SpEL in v2 MPT plan. (#2816) * fix(provider/cf): Bind clone manifest artifacts (#2815) * fix(expressions): make sure trigger is a map (#2817) Some places use `ContextParameterProcessor.buildExecutionContext` which (correctly) converts the `trigger` in the pipeline to an object (instead of keeping it as `Trigger` class). However, when tasks run, they use `.withMergedContext` which opts to build its own SpEL executionContext (in `.augmentContext`). This is needed so that fancy lookup in ancestor stages works. Fix `.augmentContext` to convert the trigger to an object/map.
This addresses issues evaluating expressions like `${trigger.buildNumber ?: #stage('s1').context.buildNumber}`` when no `buildNumber` is present on the `trigger` * feat(cf): Create Service Key Stage (#2819) spinnaker/spinnaker#4242 Co-Authored-By: Ria Stein * fix(MPTv2): Fix for #2803 (#2823) Many templatedPipelines don't have a `.schema` set and the code treats them as `v2` but they should stay as `v1` and not be processed. We saw a bunch of pipelines going through to `V2Util.planPipeline` increasing traffic to `front50` about 200x and many calls would fail (presumably due to `v1` being treated as `v2`?) * feat(deleteSnapshot): Adding deleteSnapshot stage and deleteSnapshot … (#2769) * feat(deleteSnapshot): Adding deleteSnapshot stage and deleteSnapshot task * feat(deleteSnapshot): Adding deleteSnapshot stage and deleteSnapshot task * feat(deleteSnapshot): Adding deleteSnapshot stage and deleteSnapshot task * fix(cloneservergrouptask): undo the move of CloneServerGroupTask (#2826) This change undoes the move of CloneServerGroupTask to a different package (this was introduced in #2815). Today, tasks can not be moved once they have been created because Orca will be unable to deserialize the tasks already in the queue created under a different package. * feat(conditions): Adding support for config based conditions (#2822) - support for config based conditions - support for config based whitelisted clusters * feat(exp): deployedServerGroups now grabs deployments from deploy result (#2812) * feat(exp): deployedServerGroups now grabs deployments from deploy results The deployment result is the place that cloud providers can put info about their deployments, but it is deeply nested on the context and hard to find. That can be eased by appending it to the info returned from the existing deployedServerGroups property/function. 
* Changed generics into nested classes I considered making this model generally useful, but after casting the context just to get a very specific path it doesn’t feel like it belongs with the more general purpose stuff under model. It actually seems to conflict with the existing stage/task/context model and would cause confusion. * feat(provider/kubernetes): Add traffic options to deploy manifest (#2829) * refactor(provider/kubernetes): Use constructor injection Refactor DeployManifestTask to use constructor injection, and change implementation-specific fields to be private and final. * feat(provider/kubernetes): Add traffic options to deploy manifest Add functionality to the deploy manifest task to handle the fields in the new trafficManagement section. In particular, pass along any specified service as well as whether to send traffic to new workloads. * chore(expressions): Don't create new objectmapper all the time (#2831) This is from additional PR comments on #2817 * chore(build): Upgrade to Gradle 5.0 * feat(webhook): retry on name resolution failures This adds a new, narrow use case where we retry, instead of retrying on broader exceptions. CreateWebhookTask was already quite specific about the failure modes that are retryable, so this change is consistent with that pattern. The reason for being conservative is that we don't want to potentially cause side effects where we hit the same webhook multiple times in a row. I chose to rely on orca's built-in scheduling mechanism (instead of internal retries within the task) as it seems to be properly configured already in terms of backoff and timeout.
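The webhook retry described above is deliberately narrow: only name-resolution failures are retryable, because a DNS failure means the request never left, so retrying cannot fire the webhook twice. A sketch of that classification (hypothetical method and enum, mirroring orca's retry-via-RUNNING-status convention in spirit only):

```java
import java.net.UnknownHostException;

public class WebhookRetrySketch {
    // RUNNING asks the scheduler to retry with its configured backoff;
    // TERMINAL fails the task outright.
    public enum Status { RUNNING, TERMINAL }

    public static Status classify(Exception e) {
        // Retry only when DNS resolution failed: the request never reached
        // the webhook endpoint, so a retry has no duplicate side effect.
        if (e instanceof UnknownHostException
                || e.getCause() instanceof UnknownHostException) {
            return Status.RUNNING;
        }
        // Everything else (e.g. a plain URL validation failure) fails fast.
        return Status.TERMINAL;
    }
}
```

This also matches the follow-up MonitorWebhookTask fix: IllegalArgumentExceptions fail the task unless they are caused by an UnknownHostException.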
* fix(webhook): catch URL validation failures in MonitorWebhookTask Also actually fail on regular IllegalArgumentExceptions that are not caused by UnknownHostException * fix(k8s): Deploy manifest now accepts stage-inlined artifacts (#2830) * fix(travis): Support timestamp in JenkinsBuildInfo (provided by Travis) (#2813) (Travis is piggybacking on the Jenkins trigger implementation in Orca) * feat(preconfiguredJobs): produces artifacts (#2835) Allow preconfigured jobs to produce artifacts. Kubernetes jobs default to true; this can be overridden. * fix(repository): don't generate invalid grammar (#2833) When performing a search with a pipeline name that doesn't exist, sqlrepository will generate a query with an empty IN clause: ``` SELECT * FROM ... WHERE IN () ``` This change short-circuits this evaluation. Additionally, set H2 into `MODE=MYSQL` so that we can actually catch this invalid SQL in unit tests. * fix(aws/imagetag): ensure matched images by name includes all upstream ids (#2839) * chore(*): Seeding initial OWNERS file (#2840) * fix(expressions): populate context for evaluateExpression endpoint (#2841) When calling `pipelines/{id}/evaluateExpression` we don't populate the eval context in the same way as we do for regular pipeline execution. This means expressions that work during regular execution don't work when using this EP. Most notably, nothing in `triggers` (or, more importantly, `trigger.parameters`) is accessible via `${parameters["myParam"]}`; instead one must modify the expression to be `${execution.trigger.parameters["myParam"]}` * feat(MPTv2): Inverts inherit -> exclude in template inheritance. (#2842) Previously by default, template consumers would have to specifically opt in to including triggers, parameters, and notifications from the parent template. This is an inversion of what is expected by template consumers and has resulted in a bunch of confusion and misuse.
We've changed the default to inherit and have users manually opt in to exclude the fields targeted by "inherit". * fix(logging): Correctly propagate stage IDs for logging (#2847) Currently, the stageID isn't sent over when we make e.g. clouddriver calls. This makes tracing the operation of a given stage VERY difficult. Add stageIDs to the MDC context for propagation with HTTP headers * chore(migrations): ignore empty migrations (#2846) Currently, any non-standard migrations are defined in the orca.yml as a list. Sometimes, it's nice to be able to run without those migrations (e.g. when developing both in OSS and private land). However, due to an issue in spring (which, I believe, should be fixed in boot2.0.3) it's impossible to override a list with an empty list, but you can override a list with a new list. Hence this change to not run empty entries. * fix(MPTv2): Restricts config var scope based on declared template vars. (#2849) * feat(kubernetes): support redblack and highlander strategies (#2844) * fix(provider/azure): Failed to disable azure server group when rollback (#2848) * Revert "fix(provider/azure): Failed to disable azure server group when rollback (#2848)" (#2850) This reverts commit 4c1a8960c69415612e1a24b516a8b41b65c88319. * feat(cf): Fetch created service key via SpEL (#2827) spinnaker/spinnaker#4260 Co-Authored-By: Ria Stein Co-Authored-By: Stu Pollock * refactor(gcb): Use generic maps for GCB objects (#2853) Orca currently deserializes the build configuration in the stage to a Build object only to send it immediately over to igor (which re-serializes it). As orca doesn't actually need to know any of the details about the Build object it's sending over, just use a generic Map. This addresses an issue where some fields are not correctly deserializing using the default objectMapper; I'll need to work around this in igor, but this reduces the scope of where that workaround needs to live.
(This also allows us to better encapsulate which microservices actually need to know about GCB objects.) * fix(kubernetes): remove unused imports from DeployManifestStage * feat(kubernetes): pass DeployManifestTask strategy to Clouddriver to enable downstream validation * fix(orca-core): Add CANCELED to list of COMPLETED statuses (#2845) This will enable cleanup of old CANCELED tasks * fix(FindImageFromCluster): only infer regions from deploy for aws (#2851) * fix(orca): if build stage fails and prop file exists, try and fetch (#2855) * chore(dependencies): Autobump spinnaker-dependencies (#2838) * chore(dependencies): Autobump spinnaker-dependencies (#2856) * fix(webhooks): Avoid a nasty HTTP 500 fetching preconfigured webhooks (#2858) This PR offers protection if `fiat` is unavailable when attempting to fetch preconfigured webhooks. The current behavior results in an HTTP 500 even if there were no restricted preconfigured webhooks. The proposed behavior would result in the unrestricted subset being returned with an error logged. * refactor(conditions): Do not inject waitForCondition on no conditions (#2859) - Updated interface to specify cluster, region and account - Make injecting the stage optional based on whether there are active conditions - Move to orca-clouddriver as this is deploy-centric * chore(gradle): Convert orca to use kork-bom (#2860) * chore(BOM): Make orca use kork-bom * Use `kork-bom` * Remove dependence on `netflix.servo` * Fixed a bunch of `TODOs` in gradle files along the way * chore(cf): Move cfServiceKey from orca-core to orca-integrations-cloudfoundry (#2857) * Revert "chore(gradle): Convert orca to use kork-bom (#2860)" (#2863) This reverts commit fedf50d648ce47a73ea71ebe711b20e643b0571e.
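The sqlrepository fix above (#2833) guards against emitting `WHERE ... IN ()` by short-circuiting before any SQL is built. A generic sketch of that guard (illustrative only, not the actual repository code; the SQL string is returned just to make the sketch testable):

```java
import java.util.Collections;
import java.util.List;

public class InClauseSketch {
    // Returns an empty result instead of building invalid SQL when there
    // is nothing to put inside the IN (...) clause.
    public static List<String> findByNames(List<String> names) {
        if (names == null || names.isEmpty()) {
            return Collections.emptyList(); // short-circuit: no "WHERE name IN ()"
        }
        String placeholders = String.join(", ", Collections.nCopies(names.size(), "?"));
        String sql = "SELECT id FROM pipelines WHERE name IN (" + placeholders + ")";
        // ... bind the names and execute; elided in this sketch ...
        return List.of(sql);
    }
}
```

Running the unit tests with H2 in `MODE=MYSQL` makes the invalid-grammar case fail at test time rather than only against a production MySQL.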
* chore(openstack): remove openstack provider (#2865) * chore(conditions): Adding logging (#2866) - added more logging around pausing deploys * refactor(provider/kubernetes): Add tests and simplify cache refresh (#2869) * test(provider/kubernetes): Add tests to ManifestForceCacheRefresh Most of the functionality in ManifestForceCacheRefreshTask is not tested. Add significant test coverage to this class to prepare for a bug fix/refactor. * refactor(provider/kubernetes): Replace nested maps with an object The ManifestForceCacheRefreshTask currently keeps track of its manifests as a Map> which leads to a lot of complex iteration over this structure. Create a new class ScopedManifest and flatten this structure into a List so that it's much easier to follow the processing that the class is doing. * refactor(provider/kubernetes): Add account to ScopedManifest Rather than thread account through multiple calls in this class, add it to the ScopedManifest class and set the account on each manifest when we create the initial list. This also makes the helper class Details identical to ScopedManifest, so replace instances of Details with ScopedManifest. Finally, replace pendingRefreshProcessed which both returns a status and mutates the stage context with getRefreshStatus, which only returns a status. Leave mutation of the context to the caller, checkPendingRefreshes. This allows us to avoid needing to mutate a ScopedManifest and keep the class immutable. * refactor(provider/kubernetes): Track ScopedManifests directly Now that we have a simple data class to represent a manifest to refresh that implements equals and hashCode, we don't need to manually serialize it with toManifestIdentifier. Just directly add these manifests to the collections we're using to track them. 
* chore(dependencies): ensure direct dependency on kork-secrets-aws (#2870) Prereq for spinnaker/kork#273 merge * fix(provider/kubernetes): Don't poll immediately after cache refresh (#2871) * test(provider/kubernetes): Re-order statements in tests This commit has no functional effect, it is just going to make the tests much easier to read in the next commit, where I change the order between checking on pending cache refresh requests and sending new ones. * refactor(provider/kubernetes): Change control flow in refresh Instead of having refreshManifests and checkPendingRefreshes contain logic for determining if the task is done, have them focus on mutating refreshedManifests and deployedManifests as they take actions. Then add allManifestsProcessed to check whether we're done, which is just checking if all deployed manifests have been processed. This removes the need to track the state of all manifests in the mutating functions and allows us to consolidate the return value of the task to one place. * fix(provider/kubernetes): Don't poll immediately after cache refresh We currently poll clouddriver to get the status of a cache refresh immediately after requesting the cache refresh, and schedule a re-refresh if we don't see the request we just made. For users with a read-only clouddriver pointed at a replica of redis, there will be some replication lag before the pending refresh appears in the cache, which will cause us to keep re-scheduling the same cache refresh. To address this, wait one polling cycle before checking on the status of a pending refresh. Do this by changing the order of operations so that we first check on any pending requests from the last cycle, then schedule any needed new requests. 
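The re-ordering described above (check last cycle's pending requests first, then schedule new ones) can be sketched as a small polling loop. This is an illustrative model only; the class and method names are assumptions, not the actual ManifestForceCacheRefreshTask API:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical model of the re-ordered cache-refresh polling cycle.
class CacheRefreshLoop {
  private final Set<String> pending = new HashSet<>();
  private final Set<String> refreshed = new HashSet<>();

  /** One polling cycle; returns true when every deployed manifest is processed. */
  boolean runCycle(List<String> deployed, Set<String> completedOnServer) {
    // 1. Check on refreshes requested in a *previous* cycle. Because a request
    //    is never checked in the same cycle it was issued, replication lag on
    //    a read-only cache replica cannot trigger a spurious re-request.
    for (String m : new HashSet<>(pending)) {
      if (completedOnServer.contains(m)) {
        pending.remove(m);
        refreshed.add(m);
      }
    }
    // 2. Then schedule any needed new requests.
    for (String m : deployed) {
      if (!refreshed.contains(m) && !pending.contains(m)) {
        pending.add(m); // stand-in for the clouddriver force-refresh call
      }
    }
    // 3. Done only when all deployed manifests have been processed
    //    (mirrors the allManifestsProcessed check described above).
    return refreshed.containsAll(deployed);
  }
}

class CacheRefreshLoopDemo {
  public static void main(String[] args) {
    CacheRefreshLoop loop = new CacheRefreshLoop();
    Set<String> server = new HashSet<>();
    System.out.println(loop.runCycle(List.of("m1"), server)); // first cycle only schedules
    server.add("m1"); // the refresh later appears in the (replicated) cache
    System.out.println(loop.runCycle(List.of("m1"), server)); // second cycle observes it
  }
}
```

Note how the mutating steps only update `pending` and `refreshed`, while the completion decision is a separate read-only check at the end, consolidating the task's return value in one place.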
* fix(clouddriver): Hoist inferredRegions var to parent scope so it is accessible to groovy code down below (#2874) * chore(conditions): Adding better log messages (#2867) - update log messages * feat(cf): Added support for rolling red black deployments (#2864) Co-Authored-By: Joris Melchior Co-Authored-By: Ria Stein * feat(cf): Delete Service Key pipeline stage (#2834) - Also introduced an intermediate package called `cf` spinnaker/spinnaker#4250 Co-Authored-By: Ria Stein Co-Authored-By: Stu Pollock * feat(gcb): Monitor GCB build status after starting a build (#2875) * refactor(gcb): Use a GoogleCloudBuild type as return from igor To make the task logic simpler, create a class GoogleCloudBuild that has the build fields that Orca cares about and have retrofit deserialize the result from igor. Also, add the resulting field to the stage context so it can be used by downstream tasks. * feat(gcb): Monitor GCB build status after starting a build Instead of immediately returning success once the new GCB build is accepted, wait until the build completes and set the status of the stage based on the result of the build. * fix(MPTv2): Fails plan on missing template variable value. (#2876) * feat(core): Delegate task/stage lookup to `TaskResolver` and `StageResolver` respectively (#2868) The `*Resolver` implementations will be able to look at both raw implementation classes _as well as_ any specified aliases. This supports (currently unsupported!) use cases that would be made easier if a `Task` or `StageDefinitionBuilder` could be renamed. * feat(core): Add support for more flexible execution preprocessors (#2798) * chore(conditions): Adding metrics around deploy pauses (#2877) - added a metric around deploy pauses * chore(*): Bump dependencies to 1.42.0 (#2879) * feat(core): Allow `ExecutionPreprocessor` to discriminate on type (#2881) This supports existing use cases around pipeline-centric preprocessors. 
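Letting `ExecutionPreprocessor` discriminate on type, as in #2881 above, amounts to a supports-style hook that pipeline-centric preprocessors can use to opt out of orchestrations. A hedged sketch of that shape, with names and signatures assumed for illustration rather than taken from the actual extension point:

```java
import java.util.Map;

// Hypothetical shape of a type-discriminating execution preprocessor.
interface ExecutionPreprocessor {
  enum Type { PIPELINE, ORCHESTRATION }

  /** Lets an implementation declare which execution types it applies to. */
  boolean supports(Map<String, Object> execution, Type type);

  /** Transforms the execution config before it is run. */
  Map<String, Object> process(Map<String, Object> execution);
}

// A pipeline-centric preprocessor simply declines orchestrations.
class PipelineOnlyPreprocessor implements ExecutionPreprocessor {
  @Override
  public boolean supports(Map<String, Object> execution, Type type) {
    return type == Type.PIPELINE;
  }

  @Override
  public Map<String, Object> process(Map<String, Object> execution) {
    return execution; // no-op transform for illustration
  }
}
```

The launcher would then call `supports` before `process`, so each preprocessor filters itself instead of every caller special-casing execution types.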
* refactor(ci): Generify RetryableIgorTask (#2880) * refactor(ci): Generify RetryableIgorTask I'd like to re-use the logic in RetryableIgorTask for some GCB tasks, but it requires that the stage map to a CIStageDefinition. Make the stage definition a parameter to the class so it can be re-used. At some point it might make sense to make this even more general than just for igor tasks, but at least this makes it a bit more general than it is now. * fix(ci): RetryableIgorTask should also retry on network errors We're currently immediately looking up the response status, which will NPE (and fail the task) if the error is a network error. We should retry network errors as these are likely to succeed on retry. * fix(web): s/pipeline/config (#2883) Doh! * fix(web): s/pipeline/config (#2884) * fix(cloudformation): Scope force update by account and region (#2843) When force updating the cloud formation cache on the AmazonCloudFormationCachingAgent, it can take some time if the number of accounts is quite big because the on demand update iterates over all accounts for a given type. This causes the force refresh task to fail with a timeout. This patch sends scoping information to clouddriver so the caching agents can skip doing the update if the force refresh does not involve its own account and/or region, making on demand update more efficient. * fix(gremlin): Remove Gremlin config template and let Halyard do it from scratch (#2873) * feat(gcb): Fetch artifacts produced by a GCB build (#2882) * feat(gcb): Fetch artifacts produced by a GCB build * refactor(gcb): Refactor monitor task to use generic retry logic Now that RetryableIgorTask is generic, MonitorGoogleCloudBuildTask can extend it instead of having its own retry logic. * fix(gcb): Add longer timeout to GCB polling stage We're currently using the default timeout of 1 minute, which will likely be too short. 
As a starting point, use the timeout and backoff period we use for other CI jobs, though we can change these in the future if needed. * refactor(TaskResult): Add a TaskResultBuilder and use it everywhere (#2872) There are a significant number of times when a variable named 'outputs' is passed to the 'context' field of TaskResult instead of the 'outputs' field. I don't know if these are bugs, but it should be less error-prone this way. * feat(gce): Add SetStatefulDisk{Task,Stage} (#2887) * fix(gcb): Properly set buildInfo in the context (#2890) The refactor of TaskResult dropped setting the context in StartGoogleCloudBuildTask; restore it. We also should update the context each time we run MonitorGoogleCloudBuildTask so the context has an up-to-date status; add this. * chore(conditions): Adding a flag to skip wait (#2885) - added config property tasks.evaluateCondition.skipWait - when skipWait=true, all paused deployments will proceed * refactor(headers): Update spinnaker headers (#2861) Update the headers to match https://github.com/spinnaker/kork/pull/270 * feat(core): ability to resolve targeted SG on different accounts and regions (#2862) * feat(kayenta): pass the accountId to Kayenta for deployments (#2889) * refactor(logging): improve logging for AbstractWaitForClusterWideClouddriverTask (#2888) Add specificity to the log message so you can tell which task is actually waiting for something to happen. So.. instead of: ``` Pipeline 01D9QSYBNK7ENG0BSM7V9V07ZZ is looking for [us-east-1->serverlabmvulfson-dev-v044] Server groups matching AbstractWaitForClusterWideClouddriverTask$DeployServerGroup(region:us-east-1, name:serverlabmvulfson-dev-v044) ... ``` we will have: ``` Pipeline 01D9R3HMZHZ82H0RNKG1D8JBRW:WaitForClusterDisableTask looking for server groups: [us-east-1->serverlabmvulfson-dev-v047] found: [[instances... 
``` * fix(logging): use logger.GetFactory over @slf4j (#2894) This gets the proper class/logger name instead of using the parent `AbstractWaitForClusterWideClouddriverTask` fixup for: https://github.com/spinnaker/orca/pull/2888 * chore(dependencies): Autobump spinnaker-dependencies (#2891) * feat(gcb): Allow the build definition to come from an artifact (#2896) * feat(gcb): Allow the build definition to come from an artifact As an alternative to having the build definition inline in the stage, allow it to come from an artifact. * chore(gcb): nest gcb artifact properties under one key * feat(runJob/kubernetes): extract log annotation (#2893) * refactor(jobRunner): refactor k8s job runner refactor job runner from groovy to java * feat(runJob/kubernetes): extract log url template pulls the annotation `jobs.spinnaker.io/logs` and injects it into the execution context where most UI components look for this link. this will be used for the UI to provide a link to an external logging platform. --- Dockerfile | 2 +- OWNERS.md | 5 + build.gradle | 16 +- gradle/init-publish.gradle | 2 +- gradle/wrapper/gradle-wrapper.properties | 3 +- orca-applications/orca-applications.gradle | 3 +- .../pipelines/DeleteProjectStage.groovy | 2 +- .../pipelines/UpsertProjectStage.groovy | 2 +- .../tasks/DeleteApplicationTask.groovy | 10 +- .../tasks/UpsertApplicationTask.groovy | 2 +- ...ifyApplicationHasNoDependenciesTask.groovy | 10 +- orca-bakery/orca-bakery.gradle | 1 + .../orca/bakery/pipeline/BakeStage.groovy | 2 +- .../bakery/tasks/CompletedBakeTask.groovy | 2 +- .../orca/bakery/tasks/CreateBakeTask.groovy | 4 +- .../orca/bakery/tasks/MonitorBakeTask.groovy | 6 +- .../tasks/manifests/BakeManifestContext.java | 55 ++ .../manifests/CreateBakeManifestTask.java | 33 +- .../bakery/tasks/CreateBakeTaskSpec.groovy | 80 +-- .../bakery/tasks/MonitorBakeTaskSpec.groovy | 2 +- orca-clouddriver/orca-clouddriver.gradle | 16 + .../CloudDriverCacheService.groovy | 5 +- 
.../clouddriver/DelegatingOortService.java | 5 + .../orca/clouddriver/OortService.groovy | 6 + .../config/JobConfigurationProperties.java | 5 +- .../KubernetesPreconfiguredJobProperties.java | 46 ++ .../PreconfiguredJobStageProperties.java | 40 +- .../TitusPreconfiguredJobProperties.java | 37 ++ ...ConditionAwareDeployStagePreprocessor.java | 90 +++ .../pipeline/MigratePipelineStage.java | 41 -- .../ClouddriverClearAltTablespaceStage.java | 17 + .../pipeline/conditions/Condition.java | 74 +++ .../ConditionConfigurationProperties.java | 85 +++ .../conditions/ConditionSupplier.java | 29 + .../ConfigurationBackedConditionSupplier.java | 58 ++ .../conditions/WaitForConditionStage.java | 94 +++ .../pipeline/job/PreconfiguredJobStage.groovy | 12 +- .../manifest/DeployManifestStage.java | 19 +- .../pipeline/manifest/PatchManifestStage.java | 4 +- .../SavePipelinesFromArtifactStage.java | 45 ++ .../ApplySourceServerGroupCapacityTask.groovy | 25 +- .../aws/AwsDeployStagePreProcessor.groovy | 56 +- ...aptureSourceServerGroupCapacityTask.groovy | 2 +- .../aws/ModifyAwsScalingProcessStage.groovy | 2 +- ...FoundryDeployServiceStagePreprocessor.java | 42 ++ .../CloudFoundryDeployStagePreProcessor.java | 95 +++ ...oundryDestroyServiceStagePreprocessor.java | 42 ++ ...dFoundryShareServiceStagePreprocessor.java | 40 ++ ...oundryUnshareServiceStagePreprocessor.java | 40 ++ .../cf/CreateServiceKeyStage.java} | 24 +- .../providers/gce/SetStatefulDiskStage.java | 59 ++ .../gce/WaitForGceAutoscalingPolicyTask.java | 4 +- .../DeleteSecurityGroupStage.groovy | 2 +- .../servergroup/MigrateServerGroupStage.java | 44 -- .../strategies/CFRollingRedBlackStrategy.java | 290 +++++++++ .../strategies/DeployStagePreProcessor.java | 6 +- .../strategies/RedBlackStrategy.groovy | 4 +- .../servicebroker/DeployServiceStage.java | 32 +- .../DeployServiceStagePreprocessor.java | 32 + .../servicebroker/DestroyServiceStage.java | 32 +- .../DestroyServiceStagePreprocessor.java | 32 + 
.../servicebroker/ShareServiceStage.java | 50 ++ .../ShareServiceStagePreprocessor.java | 32 + .../servicebroker/UnshareServiceStage.java | 50 ++ .../UnshareServiceStagePreprocessor.java | 32 + .../snapshot/DeleteSnapshotStage.java | 83 +++ .../clouddriver/pollers/PollerSupport.java | 6 +- .../RestorePinnedServerGroupsPoller.java | 10 +- .../orca/clouddriver/service/JobService.java | 17 +- .../tasks/DetermineHealthProvidersTask.java | 14 +- .../orca/clouddriver/tasks/MigrateTask.java | 63 -- .../clouddriver/tasks/MonitorKatoTask.groovy | 6 +- .../tasks/artifacts/CleanupArtifactsTask.java | 2 +- .../FindArtifactFromExecutionTask.java | 11 +- .../FindArtifactsFromResourceTask.java | 2 +- .../ClouddriverClearAltTablespaceTask.java | 66 ++ .../AbstractClusterWideClouddriverTask.groovy | 4 +- ...ctWaitForClusterWideClouddriverTask.groovy | 24 +- .../ClusterSizePreconditionTask.groovy | 10 +- .../DetermineRollbackCandidatesTask.java | 12 +- .../cluster/FindImageFromClusterTask.groovy | 26 +- .../cluster/WaitForClusterDisableTask.groovy | 2 +- .../conditions/EvaluateConditionTask.java | 142 ++++ .../entitytags/BulkUpsertEntityTagsTask.java | 4 +- .../entitytags/DeleteEntityTagsTask.java | 4 +- .../entitytags/UpsertEntityTagsTask.java | 4 +- .../tasks/image/DeleteImageTask.java | 2 +- .../tasks/image/FindImageFromTagsTask.java | 6 +- .../image/ImageForceCacheRefreshTask.java | 2 +- .../clouddriver/tasks/image/ImageTagger.java | 26 +- .../tasks/image/MonitorDeleteImageTask.java | 4 +- .../tasks/image/UpsertImageTagsTask.java | 31 +- .../image/WaitForUpsertedImageTagsTask.java | 4 +- ...nstanceLoadBalancerRegistrationTask.groovy | 4 +- .../AbstractInstancesCheckTask.groovy | 16 +- ...ractWaitForInstanceHealthChangeTask.groovy | 6 +- .../instance/CaptureInstanceUptimeTask.groovy | 6 +- .../tasks/instance/RebootInstancesTask.groovy | 4 +- ...InstanceAndDecrementServerGroupTask.groovy | 2 +- .../instance/TerminateInstancesTask.groovy | 2 +- 
.../tasks/instance/UpdateInstancesTask.groovy | 4 +- .../instance/VerifyInstanceUptimeTask.groovy | 4 +- .../WaitForTerminatedInstancesTask.groovy | 3 +- .../instance/WaitForUpInstancesTask.groovy | 152 +++-- .../DestroyJobForceCacheRefreshTask.groovy | 2 +- .../tasks/job/DestroyJobTask.groovy | 2 +- .../clouddriver/tasks/job/RunJobTask.groovy | 2 +- .../tasks/job/WaitOnJobCompletion.groovy | 2 +- .../DeleteLoadBalancerForceRefreshTask.groovy | 2 +- .../DeleteLoadBalancerTask.groovy | 2 +- .../UpsertLoadBalancerForceRefreshTask.groovy | 160 ++++- ...lancerResultObjectExtrapolationTask.groovy | 4 +- .../UpsertLoadBalancerTask.groovy | 2 +- .../UpsertLoadBalancersTask.groovy | 2 +- .../tasks/manifest/DeleteManifestTask.java | 2 +- .../tasks/manifest/DeployManifestContext.java | 108 +++ .../tasks/manifest/DeployManifestTask.java | 96 +-- .../manifest/DynamicResolveManifestTask.java | 4 +- .../manifest/GenericUpdateManifestTask.java | 2 +- .../ManifestForceCacheRefreshTask.java | 289 ++++---- .../manifest/ManifestHighlanderStrategy.java | 37 ++ .../ManifestNoneStrategy.java} | 15 +- .../manifest/ManifestRedBlackStrategy.java | 36 + .../tasks/manifest/ManifestStrategy.java | 24 + .../manifest/ManifestStrategyHandler.java | 95 +++ .../manifest/ManifestStrategyStagesAdder.java | 45 ++ .../tasks/manifest/ManifestStrategyType.java | 42 ++ .../tasks/manifest/PatchManifestTask.java | 2 +- .../PromoteManifestKatoOutputsTask.java | 2 +- .../manifest/WaitForManifestStableTask.java | 8 +- .../CheckForRemainingPipelinesTask.java | 36 + .../pipeline/CheckPipelineResultsTask.java | 77 +++ .../GetPipelinesFromArtifactTask.java | 134 ++++ .../pipeline/MigratePipelineClustersTask.java | 119 ---- .../tasks/pipeline/PipelineReferenceData.java | 29 + .../pipeline/PreparePipelineToSaveTask.java | 66 ++ .../pipeline/SavePipelineResultsData.java | 31 + .../pipeline/SavePipelinesCompleteTask.java | 52 ++ .../tasks/pipeline/SavePipelinesData.java | 32 + 
.../pipeline/UpdateMigratedPipelineTask.java | 72 -- ...orAppEngineServerGroupStopStartTask.groovy | 10 +- .../UpsertAppEngineLoadBalancersTask.groovy | 2 +- .../providers/aws/AmazonImageTagger.java | 27 + .../CloudFormationForceCacheRefreshTask.java | 22 +- .../DeployCloudFormationTask.java | 14 +- .../AbstractAwsScalingProcessTask.groovy | 4 +- .../cf/AbstractCloudFoundryServiceTask.java} | 35 +- .../cf/CloudFoundryCreateServiceKeyTask.java} | 17 +- .../cf/CloudFoundryDeployServiceTask.java | 82 +++ .../cf/CloudFoundryDestroyServiceTask.java | 32 + .../CloudFoundryMonitorKatoServicesTask.java | 125 ++++ .../cf/CloudFoundryServerGroupCreator.groovy | 77 --- .../cf/CloudFoundryServerGroupCreator.java | 105 +++ .../cf/CloudFoundryShareServiceTask.java | 32 + .../cf/CloudFoundryUnshareServiceTask.java | 32 + .../CloudFoundryWaitForDeployServiceTask.java | 52 ++ ...CloudFoundryWaitForDestroyServiceTask.java | 50 ++ .../tasks/providers/cf/Manifest.java | 141 ++++ .../ecs/EcsServerGroupCreator.groovy | 57 +- .../providers/gce/SetStatefulDiskTask.java | 97 +++ .../UpsertGceAutoscalingPolicyTask.java | 2 +- .../KubernetesContainerFinder.groovy | 8 +- .../kubernetes/KubernetesJobRunner.groovy | 57 -- .../kubernetes/KubernetesJobRunner.java | 79 +++ .../tasks/providers/kubernetes/Manifest.java | 32 + .../ManifestAnnotationExtractor.java | 28 + .../OpenstackSecurityGroupUpserter.groovy | 64 -- .../OpenstackServerGroupCreator.groovy | 69 -- .../tasks/providers/openstack/README.md | 1 - .../DeleteScalingPolicyTask.groovy | 4 +- .../UpsertScalingPolicyTask.groovy | 23 +- ...DeleteSecurityGroupForceRefreshTask.groovy | 2 +- .../DeleteSecurityGroupTask.groovy | 2 +- .../SecurityGroupForceCacheRefreshTask.groovy | 2 +- .../UpsertSecurityGroupTask.groovy | 2 +- .../WaitForUpsertedSecurityGroupTask.groovy | 2 +- .../AbstractBulkServerGroupTask.java | 2 +- .../AbstractServerGroupTask.groovy | 8 +- .../AddServerGroupEntityTagsTask.groovy | 8 +- 
.../BulkWaitForDestroyedServerGroupTask.java | 12 +- ...tInterestingHealthProviderNamesTask.groovy | 4 +- .../servergroup/CloneServerGroupTask.groovy | 40 +- .../servergroup/CreateServerGroupTask.groovy | 2 +- ...MigrateForceRefreshDependenciesTask.groovy | 69 -- .../ServerGroupCacheForceRefreshTask.groovy | 6 +- ...nnakerMetadataServerGroupTagGenerator.java | 2 +- .../servergroup/UpdateLaunchConfigTask.groovy | 4 +- .../UpsertServerGroupTagsTask.groovy | 4 +- .../WaitForCapacityMatchTask.groovy | 29 +- .../WaitForDestroyedServerGroupTask.groovy | 12 +- ...keryImageAccessDescriptionDecorator.groovy | 65 ++ .../clone/CloneDescriptionDecorator.java | 29 + ...CloudFoundryManifestArtifactDecorator.java | 75 +++ .../DetermineTargetServerGroupTask.groovy | 4 +- .../AbstractWaitForServiceTask.java | 58 ++ .../servicebroker/DestroyServiceTask.java | 59 -- .../tasks/snapshot/DeleteSnapshotTask.java | 83 +++ .../tasks/snapshot/RestoreSnapshotTask.groovy | 2 +- .../tasks/snapshot/SaveSnapshotTask.groovy | 2 +- .../kato/pipeline/ParallelDeployStage.groovy | 2 +- .../DetermineSourceServerGroupTask.groovy | 6 +- .../orca/kato/pipeline/strategy/Strategy.java | 1 + .../support/ResizeStrategySupport.groovy | 4 +- .../pipeline/support/SourceResolver.groovy | 27 +- .../kato/pipeline/support/StageData.groovy | 4 + .../orca/kato/tasks/AbstractAsgTask.groovy | 4 +- .../kato/tasks/AbstractDiscoveryTask.groovy | 4 +- .../tasks/CopyAmazonLoadBalancerTask.groovy | 2 +- .../orca/kato/tasks/CreateDeployTask.groovy | 2 +- .../tasks/DestroyAwsServerGroupTask.groovy | 4 +- .../kato/tasks/DetachInstancesTask.groovy | 4 +- .../tasks/DetermineTargetReferenceTask.groovy | 4 +- .../kato/tasks/DisableInstancesTask.groovy | 4 +- .../orca/kato/tasks/JarDiffsTask.groovy | 18 +- .../orca/kato/tasks/ModifyAsgTask.groovy | 4 +- .../tasks/PreconfigureDestroyAsgTask.groovy | 4 +- .../orca/kato/tasks/ResizeAsgTask.groovy | 4 +- ...erminateInstanceAndDecrementAsgTask.groovy | 4 +- 
.../kato/tasks/UpsertAmazonDNSTask.groovy | 2 +- .../UpsertAsgScheduledActionsTask.groovy | 4 +- .../tasks/quip/InstanceHealthCheckTask.groovy | 6 +- .../kato/tasks/quip/MonitorQuipTask.groovy | 6 +- .../tasks/quip/ResolveQuipVersionTask.groovy | 2 +- .../kato/tasks/quip/TriggerQuipTask.groovy | 2 +- .../kato/tasks/quip/VerifyQuipTask.groovy | 2 +- .../CheckForRemainingTerminationsTask.groovy | 4 +- .../tasks/rollingpush/CleanUpTagsTask.java | 8 +- .../DetermineTerminationCandidatesTask.groovy | 2 +- ...ermineTerminationPhaseInstancesTask.groovy | 2 +- .../WaitForNewUpInstancesLaunchTask.groovy | 4 +- .../AbstractScalingProcessTask.groovy | 4 +- .../CopySecurityGroupTask.groovy | 2 +- ...gurationBackedConditionSupplierSpec.groovy | 57 ++ .../job/PreconfiguredJobStageSpec.groovy | 58 ++ .../CFRollingRedBlackStrategyTest.java | 405 ++++++++++++ .../MigratePipelineClustersTaskSpec.groovy | 85 --- .../UpdateMigratedPipelineTaskSpec.groovy | 80 --- .../FindImageFromClusterTaskSpec.groovy | 28 +- .../EvaluateConditionTaskSpec.groovy | 146 +++++ .../WaitForUpInstancesTaskSpec.groovy | 24 +- ...ertLoadBalancerForceRefreshTaskSpec.groovy | 95 ++- .../manifest/DeployManifestTaskSpec.groovy | 132 ++++ .../ManifestForceCacheRefreshTaskSpec.groovy | 615 +++++++++++++++++- .../CheckForRemainingPipelinesTaskSpec.groovy | 56 ++ .../CheckPipelineResultsTaskSpec.groovy | 107 +++ .../GetPipelinesFromArtifactTaskSpec.groovy | 164 +++++ .../PreparePipelineToSaveTaskSpec.groovy | 73 +++ .../AppEngineBranchFinderSpec.groovy | 7 +- .../aws/AmazonImageTaggerSpec.groovy | 103 ++- ...dFormationForceCacheRefreshTaskSpec.groovy | 31 +- .../DeployCloudFormationTaskSpec.groovy | 17 +- .../CloudFoundryServerGroupCreatorSpec.groovy | 313 --------- .../gce/GoogleImageTaggerSpec.groovy | 18 +- .../OpenstackSecurityGroupUpserterSpec.groovy | 140 ---- .../OpenstackServerGroupCreatorSpec.groovy | 117 ---- .../UpsertScalingPolicyTaskSpec.groovy | 76 +++ .../CloneServerGroupTaskSpec.groovy | 27 +- 
...ateForceRefreshDependenciesTaskSpec.groovy | 115 ---- ...aitForRequiredInstancesDownTaskSpec.groovy | 2 +- .../snapshot/DeleteSnapshotTaskSpec.groovy | 57 ++ .../support/ResizeStrategySupportSpec.groovy | 49 ++ .../support/SourceResolverSpec.groovy | 49 ++ ...dryDeployServiceStagePreprocessorTest.java | 57 ++ ...oudFoundryDeployStagePreProcessorTest.java | 99 +++ ...ryDestroyServiceStagePreprocessorTest.java | 55 ++ ...ndryShareServiceStagePreprocessorTest.java | 55 ++ ...ryUnshareServiceStagePreprocessorTest.java | 55 ++ ...oundryWaitForServiceOperationTaskTest.java | 76 +++ .../CloudFoundryCreateServiceKeyTaskTest.java | 75 +++ .../cf/CloudFoundryDeployServiceTaskTest.java | 79 +++ .../CloudFoundryDestroyServiceTaskTest.java | 71 ++ ...oudFoundryMonitorKatoServicesTaskTest.java | 127 ++++ .../CloudFoundryServerGroupCreatorTest.java | 82 +++ ...udFoundryWaitForDeployServiceTaskTest.java | 54 ++ ...dFoundryWaitForDestroyServiceTaskTest.java | 49 ++ .../gce/SetStatefulDiskTaskTest.java | 89 +++ .../persistence/ExecutionRepositoryTck.groovy | 2 + orca-core/orca-core.gradle | 13 + .../spinnaker/orca/ExecutionContext.java | 13 +- .../spinnaker/orca/ExecutionStatus.java | 2 +- .../netflix/spinnaker/orca/StageResolver.java | 94 +++ .../java/com/netflix/spinnaker/orca/Task.java | 21 + .../netflix/spinnaker/orca/TaskResolver.java | 120 ++++ .../netflix/spinnaker/orca/TaskResult.java | 73 +-- ...faultApplicationConfigurationProperties.kt | 27 + .../orca/config/OrcaConfiguration.java | 36 +- .../config/PreprocessorConfiguration.java | 24 + .../orca/events/ExecutionListenerAdapter.java | 6 +- .../DefaultStageDefinitionBuilderFactory.java | 30 +- .../orca/pipeline/ExecutionLauncher.java | 1 + .../orca/pipeline/ExecutionRunner.java | 7 - .../RestrictExecutionDuringTimeWindow.java | 8 +- .../orca/pipeline/StageDefinitionBuilder.java | 22 + .../StageDefinitionBuilderFactory.java | 6 +- .../expressions/ExpressionFunctionProvider.kt | 32 + 
.../expressions/ExpressionTransform.java | 23 +- .../expressions/ExpressionsSupport.java | 114 ++-- .../PipelineExpressionEvaluator.java | 17 +- ...erverGroupsExpressionFunctionProvider.java | 128 ++++ ...tLabelValueExpressionFunctionProvider.java | 107 +++ .../StageExpressionFunctionProvider.java | 103 +++ .../whitelisting/ReturnTypeRestrictor.java | 13 +- .../orca/pipeline/model/ArtifactoryTrigger.kt | 37 ++ .../orca/pipeline/model/BuildInfo.kt | 12 + .../orca/pipeline/model/ConcourseTrigger.kt | 51 ++ .../orca/pipeline/model/JenkinsTrigger.kt | 51 +- .../orca/pipeline/model/SourceControl.kt | 11 + .../model/support/TriggerDeserializer.kt | 31 + .../orca/pipeline/tasks/AcquireLockTask.java | 4 +- .../pipeline/tasks/DetermineLockTask.java | 4 +- .../pipeline/tasks/EvaluateVariablesTask.java | 7 +- .../tasks/ExpressionPreconditionTask.java | 2 +- .../orca/pipeline/tasks/NoOpTask.java | 2 +- .../orca/pipeline/tasks/WaitTask.java | 10 +- .../orca/pipeline/tasks/WaitUntilTask.java | 8 +- .../artifacts/BindProducedArtifactsTask.java | 2 +- .../orca/pipeline/util/ArtifactResolver.java | 98 ++- .../pipeline/util/BuildDetailExtractor.java | 19 +- .../util/ContextFunctionConfiguration.java | 19 +- .../util/ContextParameterProcessor.java | 25 +- .../orca/pipeline/util/PackageInfo.java | 6 +- .../orca/pipelinetemplate/V2Util.java | 77 +++ ...DefaultApplicationExecutionPreprocessor.kt | 40 ++ .../spinnaker/orca/StageResolverSpec.groovy | 70 ++ .../spinnaker/orca/TaskResolverSpec.groovy | 72 ++ .../orca/locks/LockContextSpec.groovy | 2 - .../expressions/ExpressionsSupportSpec.groovy | 46 +- ...roupsExpressionFunctionProviderSpec.groovy | 189 ++++++ ...ValueExpressionFunctionProviderSpec.groovy | 88 +++ ...StageExpressionFunctionProviderSpec.groovy | 106 +++ .../orca/pipeline/model/ExecutionSpec.groovy | 4 +- .../orca/pipeline/model/TriggerSpec.groovy | 25 + .../tasks/EvaluateVariablesTaskSpec.groovy | 2 +- .../pipeline/util/ArtifactResolverSpec.groovy | 167 ++++- 
.../util/BuildDetailExtractorSpec.groovy | 12 +- .../util/ContextParameterProcessorSpec.groovy | 24 +- .../orca/pipeline/util/PackageInfoSpec.groovy | 25 +- .../pipeline/model/JenkinsTriggerTest.java | 76 +++ .../spinnaker/config/DryRunConfiguration.kt | 5 +- .../DryRunStageDefinitionBuilderFactory.kt | 16 +- .../spinnaker/orca/dryrun/DryRunTask.kt | 2 +- .../echo/pipeline/ManualJudgmentStage.groovy | 2 +- .../EchoNotifyingExecutionListener.groovy | 36 +- .../spring/EchoNotifyingStageListener.groovy | 15 +- .../orca/echo/tasks/CreateJiraIssueTask.java | 5 +- .../tasks/PageApplicationOwnerTask.groovy | 2 +- .../EchoNotifyingExecutionListenerSpec.groovy | 4 +- .../orca-extensionpoint.gradle | 5 + .../pipeline/ExecutionPreprocessor.java | 40 ++ .../flex/tasks/AbstractElasticIpTask.groovy | 2 +- orca-front50/orca-front50.gradle | 2 + .../front50/DependentPipelineStarter.groovy | 19 +- .../orca/front50/Front50Service.groovy | 19 +- .../config/Front50Configuration.groovy | 12 +- .../orca/front50/model/DeliveryConfig.java | 32 + .../pipeline/DeleteDeliveryConfigStage.java | 18 + .../PipelineExpressionFunctionProvider.java | 115 ++++ .../pipeline/UpsertDeliveryConfigStage.java | 20 + .../DependentPipelineExecutionListener.groovy | 35 +- .../front50/tasks/AbstractFront50Task.groovy | 2 +- .../tasks/DeleteDeliveryConfigTask.java | 79 +++ .../front50/tasks/MonitorFront50Task.java | 110 +++- .../front50/tasks/MonitorPipelineTask.groovy | 8 +- .../front50/tasks/ReorderPipelinesTask.java | 5 +- .../orca/front50/tasks/SavePipelineTask.java | 40 +- .../front50/tasks/SaveServiceAccountTask.java | 48 +- .../front50/tasks/StartPipelineTask.groovy | 2 +- .../tasks/UpsertDeliveryConfigTask.java | 76 +++ .../DependentPipelineStarterSpec.groovy | 17 +- ...endentPipelineExecutionListenerSpec.groovy | 52 +- .../front50/tasks/SavePipelineTaskSpec.groovy | 45 ++ .../tasks/SaveServiceAccountTaskSpec.groovy | 65 ++ orca-igor/orca-igor.gradle | 2 + .../spinnaker/orca/igor/BuildService.groovy | 
59 -- .../spinnaker/orca/igor/BuildService.java | 67 ++ .../spinnaker/orca/igor/IgorService.groovy | 51 -- .../spinnaker/orca/igor/IgorService.java | 87 +++ .../orca/igor/config/IgorConfiguration.groovy | 4 +- .../orca/igor/model/CIStageDefinition.java | 58 ++ .../orca/igor/model/GoogleCloudBuild.java | 72 ++ .../GoogleCloudBuildStageDefinition.java | 70 ++ .../igor/model/RetryableStageDefinition.java | 10 +- .../spinnaker/orca/igor/pipeline/CIStage.java | 117 ++++ .../igor/pipeline/GetPropertiesStage.java | 23 +- .../igor/pipeline/GoogleCloudBuildStage.java | 44 ++ .../orca/igor/pipeline/JenkinsStage.groovy | 87 --- .../orca/igor/pipeline/JenkinsStage.java | 19 +- .../orca/igor/pipeline/ScriptStage.groovy | 2 + .../{TravisStage.groovy => TravisStage.java} | 10 +- ...{WerckerStage.groovy => WerckerStage.java} | 18 +- .../igor/tasks/GetBuildArtifactsTask.java | 57 ++ .../igor/tasks/GetBuildPropertiesTask.java | 63 ++ .../orca/igor/tasks/GetCommitsTask.groovy | 18 +- .../GetGoogleCloudBuildArtifactsTask.java | 54 ++ .../tasks/MonitorGoogleCloudBuildTask.java | 63 ++ .../igor/tasks/MonitorJenkinsJobTask.groovy | 34 +- .../tasks/MonitorQueuedJenkinsJobTask.groovy | 6 +- .../tasks/MonitorWerckerJobStartedTask.groovy | 6 +- .../orca/igor/tasks/RetryableIgorTask.java | 92 +++ .../igor/tasks/StartGoogleCloudBuildTask.java | 95 +++ .../igor/tasks/StartJenkinsJobTask.groovy | 2 +- .../orca/igor/tasks/StartScriptTask.groovy | 2 +- .../orca/igor/tasks/StopJenkinsJobTask.groovy | 2 +- .../orca/igor/BuildServiceSpec.groovy | 3 +- .../pipeline/GoogleCloudBuildStageSpec.groovy | 49 ++ .../igor/pipeline/JenkinsStageSpec.groovy | 119 +++- .../orca/igor/pipeline/ScriptStageSpec.groovy | 44 ++ .../tasks/GetBuildArtifactsTaskSpec.groovy | 98 +++ .../tasks/GetBuildPropertiesTaskSpec.groovy | 178 +++++ ...etGoogleCloudBuildArtifactsTaskSpec.groovy | 85 +++ .../MonitorGoogleCloudBuildTaskSpec.groovy | 98 +++ .../tasks/MonitorJenkinsJobTaskSpec.groovy | 141 ---- 
.../igor/tasks/RetryableIgorTaskSpec.groovy | 124 ++++ .../StartGoogleCloudBuildTaskSpec.groovy | 56 ++ .../orca-integrations-cloudfoundry.gradle | 18 + .../cf/pipeline/DeleteServiceKeyStage.java | 37 ++ .../ServiceKeyExpressionFunctionProvider.java | 97 +++ .../CloudFoundryDeleteServiceKeyTask.java | 33 + .../config/CloudFoundryConfiguration.java | 25 + ...viceKeyExpressionFunctionProviderTest.java | 129 ++++ .../CloudFoundryDeleteServiceKeyTaskTest.java | 77 +++ .../orca-integrations-gremlin.gradle | 9 + .../orca/config/GremlinConfiguration.kt | 55 ++ .../orca/gremlin/GremlinConverter.java | 54 ++ .../spinnaker/orca/gremlin/GremlinService.kt | 46 ++ .../orca/gremlin/pipeline/GremlinStage.java | 72 ++ .../tasks/LaunchGremlinAttackTask.java | 58 ++ .../tasks/MonitorGremlinAttackTask.java | 75 +++ .../spinnaker/orca/kayenta/KayentaService.kt | 2 +- .../orca/kayenta/model/Deployments.kt | 2 +- .../pipeline/RunCanaryIntervalsStage.kt | 89 ++- .../tasks/AggregateCanaryResultsTask.kt | 20 +- .../kayenta/tasks/MonitorKayentaCanaryTask.kt | 12 +- .../PropagateDeployedServerGroupScopes.kt | 31 +- .../kayenta/tasks/RunKayentaCanaryTask.kt | 35 +- .../pipeline/RunCanaryIntervalsStageTest.kt | 2 - .../PropagateDeployedServerGroupScopesTest.kt | 29 +- .../kayenta/tasks/RunKayentaCanaryTaskTest.kt | 104 --- .../orca/keel/task/DeleteIntentTask.kt | 6 +- .../orca/keel/task/UpsertIntentTask.kt | 6 +- .../mine/pipeline/DeployCanaryStage.groovy | 4 +- .../orca/mine/tasks/CleanupCanaryTask.groovy | 2 +- .../orca/mine/tasks/CompleteCanaryTask.groovy | 10 +- .../orca/mine/tasks/DisableCanaryTask.groovy | 8 +- .../orca/mine/tasks/MonitorAcaTaskTask.groovy | 6 +- .../orca/mine/tasks/MonitorCanaryTask.groovy | 6 +- .../mine/tasks/RegisterAcaTaskTask.groovy | 2 +- .../orca/mine/tasks/RegisterCanaryTask.groovy | 2 +- .../orca-pipelinetemplate.gradle | 5 +- .../PipelineTemplatePreprocessor.kt | 18 +- .../TemplatedPipelineRequest.java | 11 + .../tasks/CreatePipelineTemplateTask.java | 4 +- 
.../tasks/DeletePipelineTemplateTask.java | 5 +- .../tasks/PlanTemplateDependentsTask.java | 10 +- .../tasks/UpdatePipelineTemplateTask.java | 4 +- .../v2/CreateV2PipelineTemplateTask.java | 10 +- .../v2/DeleteV2PipelineTemplateTask.java | 6 +- .../v2/UpdateV2PipelineTemplateTask.java | 10 +- .../v1schema/graph/v2/V2GraphMutator.java | 2 - .../V2DefaultVariableAssignmentTransform.java | 22 +- .../handler/V1TemplateLoaderHandler.kt | 1 + .../v1schema/handler/v2/V2Handlers.kt | 19 +- .../handler/v2/V2TemplateLoaderHandler.java | 8 +- .../v1schema/render/tags/ModuleTag.java | 23 +- .../v2schema/V2SchemaExecutionGenerator.java | 40 +- .../V2PipelineConfigInheritanceTransform.java | 53 -- .../model/V2TemplateConfiguration.java | 3 +- ...ineTemplatePipelinePreprocessorSpec.groovy | 7 +- .../v1schema/V1SchemaIntegrationSpec.groovy | 7 +- ...aultVariableAssignmentTransformTest.groovy | 43 ++ .../v1schema/render/tags/ModuleTagSpec.groovy | 142 +++- .../config/RedisOrcaQueueConfiguration.kt | 10 +- .../q/redis/migration/TaskTypeDeserializer.kt | 35 + .../migration/TaskTypeDeserializerTest.kt | 66 ++ .../spinnaker/orca/q/QueueIntegrationTest.kt | 22 +- orca-queue/orca-queue.gradle | 1 + .../orca/q/admin/HydrateQueueCommand.kt | 6 +- ...tionTrackingMessageHandlerPostProcessor.kt | 31 +- .../orca/q/handler/AuthenticationAware.kt | 1 + .../orca/q/handler/CancelStageHandler.kt | 8 +- .../orca/q/handler/CompleteStageHandler.kt | 42 +- .../orca/q/handler/ExpressionAware.kt | 27 +- .../q/handler/RescheduleExecutionHandler.kt | 7 +- .../orca/q/handler/ResumeTaskHandler.kt | 7 +- .../orca/q/handler/RunTaskHandler.kt | 6 +- .../orca/q/handler/StartTaskHandler.kt | 5 +- .../orca/q/admin/HydrateQueueCommandTest.kt | 4 +- .../orca/q/handler/CancelStageHandlerTest.kt | 10 +- .../q/handler/CompleteStageHandlerTest.kt | 80 ++- .../handler/RescheduleExecutionHandlerTest.kt | 4 +- .../orca/q/handler/RestartStageHandlerTest.kt | 11 +- .../orca/q/handler/ResumeTaskHandlerTest.kt | 4 +- 
 .../orca/q/handler/RunTaskHandlerTest.kt | 54 +-
 .../orca/q/handler/StartStageHandlerTest.kt | 25 +-
 .../orca/q/handler/StartTaskHandlerTest.kt | 4 +-
 .../jedis/RedisExecutionRepository.java | 111 ++--
 .../jedis/JedisExecutionRepositorySpec.groovy | 28 +
 .../orca/sql/SpringLiquibaseProxy.kt | 1 +
 .../persistence/SqlExecutionRepository.kt | 40 +-
 .../SqlExecutionRepositorySpec.groovy | 16 +-
 .../spinnaker/orca/sql/SqlTestUtil.java | 8 +-
 orca-web/config/orca.yml | 5 +
 orca-web/orca-web.gradle | 4 +
 .../com/netflix/spinnaker/orca/Main.groovy | 6 +-
 .../controllers/OperationsController.groovy | 59 +-
 .../orca/controllers/TaskController.groovy | 7 +-
 .../V2PipelineTemplateController.java | 23 +-
 .../exceptions/OperationFailedException.java | 7 +
 .../com/netflix/spinnaker/orca/MainSpec.java | 47 ++
 .../orca/StartupTestConfiguration.java | 37 ++
 .../OperationsControllerSpec.groovy | 30 +-
 .../controllers/TaskControllerSpec.groovy | 45 +-
 orca-web/src/test/resources/orca-test.yml | 19 +
 orca-webhook/orca-webhook.gradle | 1 +
 .../webhook/tasks/CreateWebhookTask.groovy | 30 +-
 .../webhook/tasks/MonitorWebhookTask.groovy | 24 +-
 .../tasks/CreateWebhookTaskSpec.groovy | 45 ++
 .../tasks/MonitorWebhookTaskSpec.groovy | 39 ++
 settings.gradle | 4 +-
 519 files changed, 15394 insertions(+), 3896 deletions(-)
 create mode 100644 OWNERS.md
 create mode 100644 orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/manifests/BakeManifestContext.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/KubernetesPreconfiguredJobProperties.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/TitusPreconfiguredJobProperties.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/ConditionAwareDeployStagePreprocessor.java
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/MigratePipelineStage.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/cache/ClouddriverClearAltTablespaceStage.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/Condition.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConditionConfigurationProperties.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConditionSupplier.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConfigurationBackedConditionSupplier.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/WaitForConditionStage.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/pipeline/SavePipelinesFromArtifactStage.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployServiceStagePreprocessor.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployStagePreProcessor.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDestroyServiceStagePreprocessor.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryShareServiceStagePreprocessor.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryUnshareServiceStagePreprocessor.java
 rename orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/{securitygroup/MigrateSecurityGroupStage.java => providers/cf/CreateServiceKeyStage.java} (52%)
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/gce/SetStatefulDiskStage.java
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/MigrateServerGroupStage.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/CFRollingRedBlackStrategy.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DeployServiceStagePreprocessor.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DestroyServiceStagePreprocessor.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/ShareServiceStage.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/ShareServiceStagePreprocessor.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/UnshareServiceStage.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/UnshareServiceStagePreprocessor.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/snapshot/DeleteSnapshotStage.java
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MigrateTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cache/ClouddriverClearAltTablespaceTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/conditions/EvaluateConditionTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestContext.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestHighlanderStrategy.java
 rename orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/{securitygroup/MigrateSecurityGroupTask.java => manifest/ManifestNoneStrategy.java} (66%)
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestRedBlackStrategy.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategy.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyHandler.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyStagesAdder.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyType.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckForRemainingPipelinesTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckPipelineResultsTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/GetPipelinesFromArtifactTask.java
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/MigratePipelineClustersTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PipelineReferenceData.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PreparePipelineToSaveTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelineResultsData.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelinesCompleteTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelinesData.java
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/UpdateMigratedPipelineTask.java
 rename orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/{servicebroker/DeployServiceTask.java => providers/cf/AbstractCloudFoundryServiceTask.java} (57%)
 rename orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/{loadbalancer/MigrateLoadBalancerTask.java => providers/cf/CloudFoundryCreateServiceKeyTask.java} (61%)
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDeployServiceTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDestroyServiceTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryMonitorKatoServicesTask.java
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreator.groovy
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreator.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryShareServiceTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryUnshareServiceTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDeployServiceTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDestroyServiceTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/Manifest.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/SetStatefulDiskTask.java
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesJobRunner.groovy
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesJobRunner.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/Manifest.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/ManifestAnnotationExtractor.java
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackSecurityGroupUpserter.groovy
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackServerGroupCreator.groovy
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/README.md
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateForceRefreshDependenciesTask.groovy
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/BakeryImageAccessDescriptionDecorator.groovy
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/CloneDescriptionDecorator.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/CloudFoundryManifestArtifactDecorator.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servicebroker/AbstractWaitForServiceTask.java
 delete mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servicebroker/DestroyServiceTask.java
 create mode 100644 orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/DeleteSnapshotTask.java
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConfigurationBackedConditionSupplierSpec.groovy
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/job/PreconfiguredJobStageSpec.groovy
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/CFRollingRedBlackStrategyTest.java
 delete mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MigratePipelineClustersTaskSpec.groovy
 delete mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/UpdateMigratedPipelineTaskSpec.groovy
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/conditions/EvaluateConditionTaskSpec.groovy
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestTaskSpec.groovy
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckForRemainingPipelinesTaskSpec.groovy
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckPipelineResultsTaskSpec.groovy
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/GetPipelinesFromArtifactTaskSpec.groovy
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PreparePipelineToSaveTaskSpec.groovy
 delete mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreatorSpec.groovy
 delete mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackSecurityGroupUpserterSpec.groovy
 delete mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackServerGroupCreatorSpec.groovy
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/UpsertScalingPolicyTaskSpec.groovy
 delete mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateForceRefreshDependenciesTaskSpec.groovy
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/DeleteSnapshotTaskSpec.groovy
 create mode 100644 orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/ResizeStrategySupportSpec.groovy
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployServiceStagePreprocessorTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployStagePreProcessorTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDestroyServiceStagePreprocessorTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryShareServiceStagePreprocessorTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryUnshareServiceStagePreprocessorTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/AbstractCloudFoundryWaitForServiceOperationTaskTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryCreateServiceKeyTaskTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDeployServiceTaskTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDestroyServiceTaskTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryMonitorKatoServicesTaskTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreatorTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDeployServiceTaskTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDestroyServiceTaskTest.java
 create mode 100644 orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/SetStatefulDiskTaskTest.java
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/StageResolver.java
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/TaskResolver.java
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/config/DefaultApplicationConfigurationProperties.kt
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/config/PreprocessorConfiguration.java
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionFunctionProvider.kt
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/DeployedServerGroupsExpressionFunctionProvider.java
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/ManifestLabelValueExpressionFunctionProvider.java
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/StageExpressionFunctionProvider.java
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/ArtifactoryTrigger.kt
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/BuildInfo.kt
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/ConcourseTrigger.kt
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/SourceControl.kt
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/V2Util.java
 create mode 100644 orca-core/src/main/java/com/netflix/spinnaker/orca/preprocessors/DefaultApplicationExecutionPreprocessor.kt
 create mode 100644 orca-core/src/test/groovy/com/netflix/spinnaker/orca/StageResolverSpec.groovy
 create mode 100644 orca-core/src/test/groovy/com/netflix/spinnaker/orca/TaskResolverSpec.groovy
 create mode 100644 orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/DeployedServerGroupsExpressionFunctionProviderSpec.groovy
 create mode 100644 orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/ManifestLabelValueExpressionFunctionProviderSpec.groovy
 create mode 100644 orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/StageExpressionFunctionProviderSpec.groovy
 create mode 100644 orca-core/src/test/java/com/netflix/spinnaker/orca/pipeline/model/JenkinsTriggerTest.java
 create mode 100644 orca-extensionpoint/src/main/java/com/netflix/spinnaker/orca/extensionpoint/pipeline/ExecutionPreprocessor.java
 create mode 100644 orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/model/DeliveryConfig.java
 create mode 100644 orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/DeleteDeliveryConfigStage.java
 create mode 100644 orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/PipelineExpressionFunctionProvider.java
 create mode 100644 orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/UpsertDeliveryConfigStage.java
 create mode 100644 orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/DeleteDeliveryConfigTask.java
 create mode 100644 orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/UpsertDeliveryConfigTask.java
 delete mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/BuildService.groovy
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/BuildService.java
 delete mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/IgorService.groovy
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/IgorService.java
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/CIStageDefinition.java
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/GoogleCloudBuild.java
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/GoogleCloudBuildStageDefinition.java
 rename orca-extensionpoint/src/main/java/com/netflix/spinnaker/orca/extensionpoint/pipeline/PipelinePreprocessor.java => orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/RetryableStageDefinition.java (72%)
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/CIStage.java
 rename orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/loadbalancer/MigrateLoadBalancerStage.java => orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/GetPropertiesStage.java (57%)
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/GoogleCloudBuildStage.java
 delete mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStage.groovy
 rename orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateServerGroupTask.java => orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStage.java (61%)
 rename orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/{TravisStage.groovy => TravisStage.java} (66%)
 rename orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/{WerckerStage.groovy => WerckerStage.java} (52%)
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildArtifactsTask.java
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildPropertiesTask.java
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetGoogleCloudBuildArtifactsTask.java
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorGoogleCloudBuildTask.java
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/RetryableIgorTask.java
 create mode 100644 orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartGoogleCloudBuildTask.java
 create mode 100644 orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/GoogleCloudBuildStageSpec.groovy
 create mode 100644 orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/ScriptStageSpec.groovy
 create mode 100644 orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildArtifactsTaskSpec.groovy
 create mode 100644 orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildPropertiesTaskSpec.groovy
 create mode 100644 orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetGoogleCloudBuildArtifactsTaskSpec.groovy
 create mode 100644 orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorGoogleCloudBuildTaskSpec.groovy
 create mode 100644 orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/RetryableIgorTaskSpec.groovy
 create mode 100644 orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/StartGoogleCloudBuildTaskSpec.groovy
 create mode 100644 orca-integrations-cloudfoundry/orca-integrations-cloudfoundry.gradle
 create mode 100644 orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/pipeline/DeleteServiceKeyStage.java
 create mode 100644 orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/pipeline/expressions/functions/ServiceKeyExpressionFunctionProvider.java
 create mode 100644 orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/tasks/CloudFoundryDeleteServiceKeyTask.java
 create mode 100644 orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/config/CloudFoundryConfiguration.java
 create mode 100644 orca-integrations-cloudfoundry/src/test/java/com/netflix/spinnaker/orca/cf/pipeline/expressions/functions/ServiceKeyExpressionFunctionProviderTest.java
 create mode 100644 orca-integrations-cloudfoundry/src/test/java/com/netflix/spinnaker/orca/cf/tasks/CloudFoundryDeleteServiceKeyTaskTest.java
 create mode 100644 orca-integrations-gremlin/orca-integrations-gremlin.gradle
 create mode 100644 orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/config/GremlinConfiguration.kt
 create mode 100644 orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/GremlinConverter.java
 create mode 100644 orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/GremlinService.kt
 create mode 100644 orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/pipeline/GremlinStage.java
 create mode 100644 orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/tasks/LaunchGremlinAttackTask.java
 create mode 100644 orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/tasks/MonitorGremlinAttackTask.java
 delete mode 100644 orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/RunKayentaCanaryTaskTest.kt
 delete mode 100644 orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/graph/V2PipelineConfigInheritanceTransform.java
 create mode 100644 orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/transform/V2DefaultVariableAssignmentTransformTest.groovy
 create mode 100644 orca-queue-redis/src/main/kotlin/com/netflix/spinnaker/orca/q/redis/migration/TaskTypeDeserializer.kt
 create mode 100644 orca-queue-redis/src/test/kotlin/com/netflix/spinnaker/orca/q/redis/migration/TaskTypeDeserializerTest.kt
 create mode 100644 orca-web/src/main/groovy/com/netflix/spinnaker/orca/exceptions/OperationFailedException.java
 create mode 100644 orca-web/src/test/groovy/com/netflix/spinnaker/orca/MainSpec.java
 create mode 100644 orca-web/src/test/groovy/com/netflix/spinnaker/orca/StartupTestConfiguration.java
 create mode 100644 orca-web/src/test/resources/orca-test.yml

diff --git a/Dockerfile b/Dockerfile
index 57f8fbdd6c..16c462e2ec 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -6,7 +6,7 @@
 COPY . workdir/
 WORKDIR workdir
-RUN GRADLE_USER_HOME=cache ./gradlew buildDeb -x test && \
+RUN GRADLE_USER_HOME=cache ./gradlew -I gradle/init-publish.gradle buildDeb -x test && \
   dpkg -i ./orca-web/build/distributions/*.deb && \
   cd .. && \
   rm -rf workdir
diff --git a/OWNERS.md b/OWNERS.md
new file mode 100644
index 0000000000..69b981c668
--- /dev/null
+++ b/OWNERS.md
@@ -0,0 +1,5 @@
+ajordens
+asher
+marchello2000
+robfletcher
+robzienert
diff --git a/build.gradle b/build.gradle
index d4025abe8e..f87864099c 100644
--- a/build.gradle
+++ b/build.gradle
@@ -17,7 +17,7 @@ buildscript {
   repositories {
     jcenter()
-    maven { url "http://spinnaker.bintray.com/gradle" }
+    maven { url "https://spinnaker.bintray.com/gradle" }
     maven { url "https://plugins.gradle.org/m2/" }
   }
   dependencies {
@@ -33,7 +33,7 @@ allprojects {
   group = "com.netflix.spinnaker.orca"
   ext {
-    spinnakerDependenciesVersion = '1.30.0'
+    spinnakerDependenciesVersion = '1.44.1'
     if (project.hasProperty('spinnakerDependenciesVersion')) {
       spinnakerDependenciesVersion = project.property('spinnakerDependenciesVersion')
     }
@@ -112,11 +112,13 @@ subprojects {
     }
   }
-  //c&p this because NetflixOss reverts it to 1.7 and ends up getting applied last..
-  project.plugins.withType(JavaBasePlugin) {
-    JavaPluginConvention convention = project.convention.getPlugin(JavaPluginConvention)
-    convention.sourceCompatibility = JavaVersion.VERSION_1_8
-    convention.targetCompatibility = JavaVersion.VERSION_1_8
+  project.afterEvaluate {
+    //c&p this because NetflixOss reverts it to 1.7 and ends up getting applied last..
+    project.plugins.withType(JavaBasePlugin) {
+      JavaPluginConvention convention = project.convention.getPlugin(JavaPluginConvention)
+      convention.sourceCompatibility = JavaVersion.VERSION_1_8
+      convention.targetCompatibility = JavaVersion.VERSION_1_8
+    }
   }
 }
diff --git a/gradle/init-publish.gradle b/gradle/init-publish.gradle
index 6dda0bd288..4f32b81f48 100644
--- a/gradle/init-publish.gradle
+++ b/gradle/init-publish.gradle
@@ -2,7 +2,7 @@ initscript {
   repositories {
     mavenLocal()
     jcenter()
-    maven { url 'http://dl.bintray.com/spinnaker/gradle/' }
+    maven { url 'https://dl.bintray.com/spinnaker/gradle/' }
     maven { url "https://plugins.gradle.org/m2/" }
   }
   dependencies {
diff --git a/gradle/wrapper/gradle-wrapper.properties b/gradle/wrapper/gradle-wrapper.properties
index a795ea3aae..ea13fdfd19 100644
--- a/gradle/wrapper/gradle-wrapper.properties
+++ b/gradle/wrapper/gradle-wrapper.properties
@@ -1,6 +1,5 @@
-#Thu Nov 29 10:16:52 PST 2018
 distributionBase=GRADLE_USER_HOME
 distributionPath=wrapper/dists
+distributionUrl=https\://services.gradle.org/distributions/gradle-5.3.1-bin.zip
 zipStoreBase=GRADLE_USER_HOME
 zipStorePath=wrapper/dists
-distributionUrl=https\://services.gradle.org/distributions/gradle-4.10.2-all.zip
diff --git a/orca-applications/orca-applications.gradle b/orca-applications/orca-applications.gradle
index 3abd54f5d7..acb49e51b3 100644
--- a/orca-applications/orca-applications.gradle
+++ b/orca-applications/orca-applications.gradle
@@ -20,6 +20,7 @@ dependencies {
   compile project(":orca-clouddriver")
   compile project(":orca-front50")
   compile project(":orca-retrofit")
-  compile spinnaker.dependency('lombok')
+  compileOnly spinnaker.dependency('lombok')
+  annotationProcessor spinnaker.dependency('lombok')
   testCompile project(":orca-test-groovy")
 }
diff --git a/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/pipelines/DeleteProjectStage.groovy b/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/pipelines/DeleteProjectStage.groovy
index 57d77c8085..04f85fc830 100644
--- a/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/pipelines/DeleteProjectStage.groovy
+++ b/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/pipelines/DeleteProjectStage.groovy
@@ -52,7 +52,7 @@ class DeleteProjectStage implements StageDefinitionBuilder {
         "notification.type": "deleteproject"
       ]
-      return new TaskResult(ExecutionStatus.SUCCEEDED, outputs)
+      return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build()
     }
   }
 }
diff --git a/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/pipelines/UpsertProjectStage.groovy b/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/pipelines/UpsertProjectStage.groovy
index 5615b2cc18..b00a8cc8e4 100644
--- a/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/pipelines/UpsertProjectStage.groovy
+++ b/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/pipelines/UpsertProjectStage.groovy
@@ -65,7 +65,7 @@ class UpsertProjectStage implements StageDefinitionBuilder {
         "notification.type": "upsertproject"
       ]
-      return new TaskResult(ExecutionStatus.SUCCEEDED, outputs)
+      return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build()
     }
   }
 }
diff --git a/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/DeleteApplicationTask.groovy b/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/DeleteApplicationTask.groovy
index afa9e2ba44..670f7c12e4 100644
--- a/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/DeleteApplicationTask.groovy
+++ b/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/DeleteApplicationTask.groovy
@@ -40,20 +40,20 @@ class DeleteApplicationTask extends AbstractFront50Task {
           front50Service.deletePermission(application.name)
         } catch (RetrofitError re) {
           if (re.response?.status == 404) {
-            return new TaskResult(ExecutionStatus.SUCCEEDED, [:], [:])
+            return TaskResult.SUCCEEDED
           }
           log.error("Could not create or update application permission", re)
-          return new TaskResult(ExecutionStatus.TERMINAL, [:], outputs)
+          return TaskResult.builder(ExecutionStatus.TERMINAL).outputs(outputs).build()
         }
       }
     } catch (RetrofitError e) {
       if (e.response?.status == 404) {
-        return new TaskResult(ExecutionStatus.SUCCEEDED, [:], [:])
+        return TaskResult.SUCCEEDED
      }
      log.error("Could not create or update application permission", e)
-     return new TaskResult(ExecutionStatus.TERMINAL, [:], outputs)
+     return TaskResult.builder(ExecutionStatus.TERMINAL).outputs(outputs).build()
    }
-   return new TaskResult(ExecutionStatus.SUCCEEDED, [:], outputs)
+   return TaskResult.builder(ExecutionStatus.SUCCEEDED).outputs(outputs).build()
  }

 @Override
diff --git a/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/UpsertApplicationTask.groovy b/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/UpsertApplicationTask.groovy
index 6a22e7968a..322384d36f 100644
--- a/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/UpsertApplicationTask.groovy
+++ b/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/UpsertApplicationTask.groovy
@@ -63,7 +63,7 @@ class UpsertApplicationTask extends AbstractFront50Task implements ApplicationNa
     }
     outputs.newState = application ?: [:]
-    return new TaskResult(ExecutionStatus.SUCCEEDED, [:], outputs)
+    return TaskResult.builder(ExecutionStatus.SUCCEEDED).outputs(outputs).build()
   }

 @Override
diff --git a/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/VerifyApplicationHasNoDependenciesTask.groovy b/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/VerifyApplicationHasNoDependenciesTask.groovy
index dad051d0f2..6a443646ed 100644
--- a/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/VerifyApplicationHasNoDependenciesTask.groovy
+++ b/orca-applications/src/main/groovy/com/netflix/spinnaker/orca/applications/tasks/VerifyApplicationHasNoDependenciesTask.groovy
@@ -61,7 +61,7 @@ class VerifyApplicationHasNoDependenciesTask implements Task {
     } catch (RetrofitError e) {
       if (!e.response) {
         def exception = [operation: stage.tasks[-1].name, reason: e.message]
-        return new TaskResult(ExecutionStatus.TERMINAL, [exception: exception])
+        return TaskResult.builder(ExecutionStatus.TERMINAL).context([exception: exception]).build()
       } else if (e.response && e.response.status && e.response.status != 404) {
         def resp = e.response
         def exception = [statusCode: resp.status, operation: stage.tasks[-1].name, url: resp.url, reason: resp.reason]
@@ -70,22 +70,22 @@ class VerifyApplicationHasNoDependenciesTask {
         } catch (ignored) {
         }
-        return new TaskResult(ExecutionStatus.TERMINAL, [exception: exception])
+        return TaskResult.builder(ExecutionStatus.TERMINAL).context([exception: exception]).build()
       }
     }
     if (!existingDependencyTypes) {
-      return new TaskResult(ExecutionStatus.SUCCEEDED)
+      return TaskResult.ofStatus(ExecutionStatus.SUCCEEDED)
     }
-    return new TaskResult(ExecutionStatus.TERMINAL, [exception: [
+    return TaskResult.builder(ExecutionStatus.TERMINAL).context([exception: [
       details: [
         error: "Application has outstanding dependencies",
         errors: existingDependencyTypes.collect {
           "Application is associated with one or more ${it}" as String
         }
       ]
-    ]])
+    ]]).build()
   }

 protected Map getOortResult(String applicationName) {
diff --git a/orca-bakery/orca-bakery.gradle b/orca-bakery/orca-bakery.gradle
index 91df3caf60..2741f5b6a7 100644
--- a/orca-bakery/orca-bakery.gradle
+++ b/orca-bakery/orca-bakery.gradle
@@ -22,6
+22,7 @@ dependencies { spinnaker.group('jackson') compile spinnaker.dependency('jacksonGuava') compileOnly spinnaker.dependency('lombok') + annotationProcessor spinnaker.dependency("lombok") testCompile project(":orca-test-groovy") testCompile "com.github.tomakehurst:wiremock:2.15.0" } diff --git a/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/pipeline/BakeStage.groovy b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/pipeline/BakeStage.groovy index 39736843a8..1fc1256af4 100644 --- a/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/pipeline/BakeStage.groovy +++ b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/pipeline/BakeStage.groovy @@ -126,7 +126,7 @@ class BakeStage implements StageDefinitionBuilder { return deploymentDetails } ] - new TaskResult(ExecutionStatus.SUCCEEDED, [:], globalContext) + TaskResult.builder(ExecutionStatus.SUCCEEDED).outputs(globalContext).build() } } diff --git a/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/CompletedBakeTask.groovy b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/CompletedBakeTask.groovy index f0ad4b937e..dd3826bef2 100644 --- a/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/CompletedBakeTask.groovy +++ b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/CompletedBakeTask.groovy @@ -55,6 +55,6 @@ class CompletedBakeTask implements Task { results.imageName = bake.imageName ?: bake.amiName } - new TaskResult(ExecutionStatus.SUCCEEDED, results) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(results).build() } } diff --git a/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/CreateBakeTask.groovy b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/CreateBakeTask.groovy index 131ca44bc2..9ed3174ddb 100644 --- a/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/CreateBakeTask.groovy +++ 
b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/CreateBakeTask.groovy @@ -125,7 +125,7 @@ class CreateBakeTask implements RetryableTask { } } - new TaskResult(ExecutionStatus.SUCCEEDED, stageOutputs) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(stageOutputs).build() } catch (RetrofitError e) { if (e.response?.status && e.response.status == 404) { try { @@ -138,7 +138,7 @@ class CreateBakeTask implements RetryableTask { // do nothing } - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } throw e } diff --git a/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/MonitorBakeTask.groovy b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/MonitorBakeTask.groovy index e8dd8766aa..32ec394cae 100644 --- a/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/MonitorBakeTask.groovy +++ b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/MonitorBakeTask.groovy @@ -52,13 +52,13 @@ class MonitorBakeTask implements OverridableTimeoutRetryableTask { if (isCanceled(newStatus.state) && previousStatus.state == BakeStatus.State.PENDING) { log.info("Original bake was 'canceled', re-baking (executionId: ${stage.execution.id}, previousStatus: ${previousStatus.state})") def rebakeResult = createBakeTask.execute(stage) - return new TaskResult(ExecutionStatus.RUNNING, rebakeResult.context, rebakeResult.outputs) + return TaskResult.builder(ExecutionStatus.RUNNING).context(rebakeResult.context).outputs(rebakeResult.outputs).build() } - new TaskResult(mapStatus(newStatus), [status: newStatus]) + TaskResult.builder(mapStatus(newStatus)).context([status: newStatus]).build() } catch (RetrofitError e) { if (e.response?.status == 404) { - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } throw e } diff --git 
a/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/manifests/BakeManifestContext.java b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/manifests/BakeManifestContext.java new file mode 100644 index 0000000000..e36dbe946e --- /dev/null +++ b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/manifests/BakeManifestContext.java @@ -0,0 +1,55 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.bakery.tasks.manifests; + +import com.fasterxml.jackson.annotation.JsonProperty; +import com.netflix.spinnaker.kork.artifacts.model.ExpectedArtifact; +import lombok.Getter; + +import java.util.List; +import java.util.Map; + +@Getter +public class BakeManifestContext { + private final List inputArtifacts; + private final List expectedArtifacts; + private final Map overrides; + private final Boolean evaluateOverrideExpressions; + private final String templateRenderer; + private final String outputName; + private final String namespace; + + // There does not seem to be a way to auto-generate a constructor using our current version of Lombok (1.16.20) that + // Jackson can use to deserialize. 
+ public BakeManifestContext( + @JsonProperty("inputArtifacts") List inputArtifacts, + @JsonProperty("expectedArtifacts") List expectedArtifacts, + @JsonProperty("overrides") Map overrides, + @JsonProperty("evaluateOverrideExpressions") Boolean evaluateOverrideExpressions, + @JsonProperty("templateRenderer") String templateRenderer, + @JsonProperty("outputName") String outputName, + @JsonProperty("namespace") String namespace + ) { + this.inputArtifacts = inputArtifacts; + this.expectedArtifacts = expectedArtifacts; + this.overrides = overrides; + this.evaluateOverrideExpressions = evaluateOverrideExpressions; + this.templateRenderer = templateRenderer; + this.outputName = outputName; + this.namespace = namespace; + } +} diff --git a/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/manifests/CreateBakeManifestTask.java b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/manifests/CreateBakeManifestTask.java index e746854aa7..9d82406a33 100644 --- a/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/manifests/CreateBakeManifestTask.java +++ b/orca-bakery/src/main/groovy/com/netflix/spinnaker/orca/bakery/tasks/manifests/CreateBakeManifestTask.java @@ -17,7 +17,6 @@ package com.netflix.spinnaker.orca.bakery.tasks.manifests; -import com.fasterxml.jackson.core.type.TypeReference; import com.fasterxml.jackson.databind.ObjectMapper; import com.netflix.spinnaker.kork.artifacts.model.Artifact; import com.netflix.spinnaker.kork.artifacts.model.ExpectedArtifact; @@ -28,6 +27,7 @@ import com.netflix.spinnaker.orca.bakery.api.manifests.helm.HelmBakeManifestRequest; import com.netflix.spinnaker.orca.pipeline.model.Stage; import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver; +import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor; import lombok.Data; import lombok.extern.slf4j.Slf4j; import org.springframework.beans.factory.annotation.Autowired; @@ -62,12 +62,15 @@ public long getTimeout() { 
@Autowired ObjectMapper objectMapper; + @Autowired + ContextParameterProcessor contextParameterProcessor; + @Nonnull @Override public TaskResult execute(@Nonnull Stage stage) { - Map context = stage.getContext(); + BakeManifestContext context = stage.mapTo(BakeManifestContext.class); - List inputArtifactsObj = objectMapper.convertValue(context.get("inputArtifacts"), new TypeReference>() {}); + List inputArtifactsObj = context.getInputArtifacts(); List inputArtifacts; if (inputArtifactsObj == null || inputArtifactsObj.isEmpty()) { @@ -84,7 +87,7 @@ public TaskResult execute(@Nonnull Stage stage) { return a; }).collect(Collectors.toList()); - List expectedArtifacts = objectMapper.convertValue(context.get("expectedArtifacts"), new TypeReference>() {}); + List expectedArtifacts = context.getExpectedArtifacts(); if (expectedArtifacts == null || expectedArtifacts.isEmpty()) { throw new IllegalArgumentException("At least one expected artifact to baked manifest must be supplied"); @@ -96,12 +99,22 @@ public TaskResult execute(@Nonnull Stage stage) { String outputArtifactName = expectedArtifacts.get(0).getMatchArtifact().getName(); + Map overrides = context.getOverrides(); + Boolean evaluateOverrideExpressions = context.getEvaluateOverrideExpressions(); + if (evaluateOverrideExpressions != null && evaluateOverrideExpressions) { + overrides = contextParameterProcessor.process( + overrides, + contextParameterProcessor.buildExecutionContext(stage, true), + true + ); + } + HelmBakeManifestRequest request = new HelmBakeManifestRequest(); request.setInputArtifacts(inputArtifacts); - request.setTemplateRenderer((String) context.get("templateRenderer")); - request.setOutputName((String) context.get("outputName")); - request.setOverrides(objectMapper.convertValue(context.get("overrides"), new TypeReference>() { })); - request.setNamespace((String) context.get("namespace")); + request.setTemplateRenderer(context.getTemplateRenderer()); + 
request.setOutputName(context.getOutputName()); + request.setOverrides(overrides); + request.setNamespace(context.getNamespace()); request.setOutputArtifactName(outputArtifactName); log.info("Requesting {}", request); @@ -110,11 +123,11 @@ public TaskResult execute(@Nonnull Stage stage) { Map outputs = new HashMap<>(); outputs.put("artifacts", Collections.singleton(result)); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).outputs(outputs).build(); } @Data - private static class InputArtifactPair { + protected static class InputArtifactPair { String id; String account; } diff --git a/orca-bakery/src/test/groovy/com/netflix/spinnaker/orca/bakery/tasks/CreateBakeTaskSpec.groovy b/orca-bakery/src/test/groovy/com/netflix/spinnaker/orca/bakery/tasks/CreateBakeTaskSpec.groovy index 7a62e4e16a..f830b6745a 100644 --- a/orca-bakery/src/test/groovy/com/netflix/spinnaker/orca/bakery/tasks/CreateBakeTaskSpec.groovy +++ b/orca-bakery/src/test/groovy/com/netflix/spinnaker/orca/bakery/tasks/CreateBakeTaskSpec.groovy @@ -21,10 +21,7 @@ import com.netflix.spinnaker.orca.bakery.api.BakeRequest import com.netflix.spinnaker.orca.bakery.api.BakeStatus import com.netflix.spinnaker.orca.bakery.api.BakeryService import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper -import com.netflix.spinnaker.orca.pipeline.model.Execution -import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger -import com.netflix.spinnaker.orca.pipeline.model.Stage -import com.netflix.spinnaker.orca.pipeline.model.Trigger +import com.netflix.spinnaker.orca.pipeline.model.* import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver import retrofit.RetrofitError import retrofit.client.Response @@ -34,9 +31,9 @@ import spock.lang.Shared import spock.lang.Specification import spock.lang.Subject import spock.lang.Unroll + import static com.netflix.spinnaker.orca.bakery.api.BakeStatus.State.COMPLETED import static 
com.netflix.spinnaker.orca.bakery.api.BakeStatus.State.RUNNING -import static com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger.* import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.pipeline import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage import static java.net.HttpURLConnection.HTTP_NOT_FOUND @@ -116,79 +113,82 @@ class CreateBakeTaskSpec extends Specification { ] @Shared - def buildInfo = new BuildInfo( - "name", 0, "http://jenkins", [ - new JenkinsArtifact("hodor_1.1_all.deb", "."), - new JenkinsArtifact("hodor-1.1.noarch.rpm", "."), - new JenkinsArtifact("hodor.1.1.nupkg", ".") - ], [], false, "SUCCESS" + def buildInfo = new JenkinsBuildInfo( + "name", 0, "http://jenkins", "SUCCESS", + [ + new JenkinsArtifact("hodor_1.1_all.deb", "."), + new JenkinsArtifact("hodor-1.1.noarch.rpm", "."), + new JenkinsArtifact("hodor.1.1.nupkg", ".") + ] ) @Shared - def buildInfoWithUrl = new BuildInfo( - "name", 0, "http://spinnaker.builds.test.netflix.net/job/SPINNAKER-package-echo/69/", + def buildInfoWithUrl = new JenkinsBuildInfo( + "name", 0, "http://spinnaker.builds.test.netflix.net/job/SPINNAKER-package-echo/69/", "SUCCESS", [ new JenkinsArtifact("hodor_1.1_all.deb", "."), new JenkinsArtifact("hodor-1.1.noarch.rpm", "."), new JenkinsArtifact("hodor.1.1.nupkg", ".") - ], [], false, "SUCCESS" + ] ) @Shared - def buildInfoWithFoldersUrl = new BuildInfo( - "name", 0, "http://spinnaker.builds.test.netflix.net/job/folder/job/SPINNAKER-package-echo/69/", + def buildInfoWithFoldersUrl = new JenkinsBuildInfo( + "name", 0, "http://spinnaker.builds.test.netflix.net/job/folder/job/SPINNAKER-package-echo/69/", "SUCCESS", [ new JenkinsArtifact("hodor_1.1_all.deb", "."), new JenkinsArtifact("hodor-1.1.noarch.rpm", "."), new JenkinsArtifact("hodor.1.1.nupkg", ".") - ], [], false, "SUCCESS" + ] ) @Shared - def buildInfoWithUrlAndSCM = new BuildInfo( - "name", 0, "http://spinnaker.builds.test.netflix.net/job/SPINNAKER-package-echo/69/", + 
def buildInfoWithUrlAndSCM = new JenkinsBuildInfo( + "name", 0, "http://spinnaker.builds.test.netflix.net/job/SPINNAKER-package-echo/69/", "SUCCESS", [ new JenkinsArtifact("hodor_1.1_all.deb", "."), new JenkinsArtifact("hodor-1.1.noarch.rpm", "."), new JenkinsArtifact("hodor.1.1.nupkg", ".") - ], [ - new SourceControl("refs/remotes/origin/master", "master", "f83a447f8d02a40fa84ec9d4d0dccd263d51782d") - ], false, "SUCCESS" + ], + [new SourceControl("refs/remotes/origin/master", "master", "f83a447f8d02a40fa84ec9d4d0dccd263d51782d")] ) @Shared - def buildInfoWithUrlAndTwoSCMs = new BuildInfo( - "name", 0, "http://spinnaker.builds.test.netflix.net/job/SPINNAKER-package-echo/69/", + def buildInfoWithUrlAndTwoSCMs = new JenkinsBuildInfo( + "name", 0, "http://spinnaker.builds.test.netflix.net/job/SPINNAKER-package-echo/69/", "SUCCESS", [ new JenkinsArtifact("hodor_1.1_all.deb", "."), new JenkinsArtifact("hodor-1.1.noarch.rpm", "."), new JenkinsArtifact("hodor.1.1.nupkg", ".") - ], [ - new SourceControl("refs/remotes/origin/master", "master", "f83a447f8d02a40fa84ec9d4d0dccd263d51782d"), - new SourceControl("refs/remotes/origin/some-feature", "some-feature", "1234567f8d02a40fa84ec9d4d0dccd263d51782d") - ], false, "SUCCESS" + ], + [ + new SourceControl("refs/remotes/origin/master", "master", "f83a447f8d02a40fa84ec9d4d0dccd263d51782d"), + new SourceControl("refs/remotes/origin/some-feature", "some-feature", "1234567f8d02a40fa84ec9d4d0dccd263d51782d") + ] ) @Shared - def buildInfoWithUrlAndMasterAndDevelopSCMs = new BuildInfo( - "name", 0, "http://spinnaker.builds.test.netflix.net/job/SPINNAKER-package-echo/69/", + def buildInfoWithUrlAndMasterAndDevelopSCMs = new JenkinsBuildInfo( + "name", 0, "http://spinnaker.builds.test.netflix.net/job/SPINNAKER-package-echo/69/", "SUCCESS", [ new JenkinsArtifact("hodor_1.1_all.deb", "."), new JenkinsArtifact("hodor-1.1.noarch.rpm", "."), new JenkinsArtifact("hodor.1.1.nupkg", ".") - ], [ - new SourceControl("refs/remotes/origin/master", 
"master", "f83a447f8d02a40fa84ec9d4d0dccd263d51782d"), - new SourceControl("refs/remotes/origin/develop", "develop", "1234567f8d02a40fa84ec9d4d0dccd263d51782d") - ], false, "SUCCESS" + ], + [ + new SourceControl("refs/remotes/origin/master", "master", "f83a447f8d02a40fa84ec9d4d0dccd263d51782d"), + new SourceControl("refs/remotes/origin/develop", "develop", "1234567f8d02a40fa84ec9d4d0dccd263d51782d") + ] ) @Shared - def buildInfoNoMatch = new BuildInfo( - "name", 0, "http://jenkins", [ - new JenkinsArtifact("hodornodor_1.1_all.deb", "."), - new JenkinsArtifact("hodor-1.1.noarch.rpm", "."), - new JenkinsArtifact("hodor.1.1.nupkg", ".") - ], [], false, "SUCCESS" + def buildInfoNoMatch = new JenkinsBuildInfo( + "name", 0, "http://jenkins", "SUCCESS", + [ + new JenkinsArtifact("hodornodor_1.1_all.deb", "."), + new JenkinsArtifact("hodor-1.1.noarch.rpm", "."), + new JenkinsArtifact("hodor.1.1.nupkg", ".") + ] ) @Shared diff --git a/orca-bakery/src/test/groovy/com/netflix/spinnaker/orca/bakery/tasks/MonitorBakeTaskSpec.groovy b/orca-bakery/src/test/groovy/com/netflix/spinnaker/orca/bakery/tasks/MonitorBakeTaskSpec.groovy index 3c345bd627..5f56bbb556 100644 --- a/orca-bakery/src/test/groovy/com/netflix/spinnaker/orca/bakery/tasks/MonitorBakeTaskSpec.groovy +++ b/orca-bakery/src/test/groovy/com/netflix/spinnaker/orca/bakery/tasks/MonitorBakeTaskSpec.groovy @@ -77,7 +77,7 @@ class MonitorBakeTaskSpec extends Specification { ) } task.createBakeTask = Mock(CreateBakeTask) { - 1 * execute(_) >> { return new TaskResult(ExecutionStatus.SUCCEEDED, [stage: 1], [global: 2]) } + 1 * execute(_) >> { return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([stage: 1]).outputs([global: 2]).build() } } when: diff --git a/orca-clouddriver/orca-clouddriver.gradle b/orca-clouddriver/orca-clouddriver.gradle index 5a0ea3a3ab..e2a2e25b5d 100644 --- a/orca-clouddriver/orca-clouddriver.gradle +++ b/orca-clouddriver/orca-clouddriver.gradle @@ -16,16 +16,32 @@ apply from: 
"$rootDir/gradle/groovy.gradle" +test { + useJUnitPlatform { + includeEngines "junit-vintage", "junit-jupiter" + } +} + dependencies { compile spinnaker.dependency('frigga') compileOnly spinnaker.dependency('lombok') + annotationProcessor spinnaker.dependency("lombok") compile project(":orca-retrofit") compile project(":orca-front50") compile project(":orca-bakery") compile project(":orca-core") compile 'com.netflix.spinnaker.moniker:moniker:0.2.0' + compile "com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:${spinnaker.version('jackson')}" + compile 'io.kubernetes:client-java:1.0.0-beta1' + testCompile project(":orca-test") testCompile project(":orca-test-groovy") testCompile "com.github.tomakehurst:wiremock:2.15.0" testCompile spinnaker.dependency('springTest') + testCompile spinnaker.dependency("junitJupiterApi") + testCompile spinnaker.dependency("assertj") + testCompile "org.mockito:mockito-core:2.25.0" + + testRuntime spinnaker.dependency("junitJupiterEngine") + testRuntime "org.junit.vintage:junit-vintage-engine:${spinnaker.version('jupiter')}" } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/CloudDriverCacheService.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/CloudDriverCacheService.groovy index b5f9599739..dfa57c6be6 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/CloudDriverCacheService.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/CloudDriverCacheService.groovy @@ -19,6 +19,7 @@ package com.netflix.spinnaker.orca.clouddriver import retrofit.client.Response import retrofit.http.Body import retrofit.http.POST +import retrofit.http.PUT import retrofit.http.Path interface CloudDriverCacheService { @@ -28,4 +29,6 @@ interface CloudDriverCacheService { @Path("type") String type, @Body Map data) -} \ No newline at end of file + @PUT("/admin/db/truncate/{namespace}") + Map clearNamespace(@Path("namespace") 
String namespace) +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/DelegatingOortService.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/DelegatingOortService.java index a73c709da7..9a46d261af 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/DelegatingOortService.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/DelegatingOortService.java @@ -134,4 +134,9 @@ public List getEntityTags(Map parameters) { public Map getCloudFormationStack(String stackId) { return getService().getCloudFormationStack(stackId); } + + @Override + public Map getServiceInstance(String account, String cloudProvider, String region, String serviceInstanceName) { + return getService().getServiceInstance(account, cloudProvider, region, serviceInstanceName); + } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/OortService.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/OortService.groovy index b934d68dd0..50d681b84a 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/OortService.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/OortService.groovy @@ -139,4 +139,10 @@ interface OortService { @GET("/aws/cloudFormation/stacks/{stackId}") Map getCloudFormationStack(@Path(value = "stackId", encode = false) String stackId) + + @GET("/servicebroker/{account}/serviceInstance") + Map getServiceInstance(@Path("account") String account, + @Query("cloudProvider") String cloudProvider, + @Query("region") String region, + @Query("serviceInstanceName") String serviceInstanceName) } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/JobConfigurationProperties.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/JobConfigurationProperties.java index 
4df1fabf77..0eea4d773e 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/JobConfigurationProperties.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/JobConfigurationProperties.java @@ -21,8 +21,9 @@ import java.util.List; -@ConfigurationProperties("job") +@ConfigurationProperties("job.preconfigured") @Data public class JobConfigurationProperties { - List preconfigured; + List titus; + List kubernetes; } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/KubernetesPreconfiguredJobProperties.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/KubernetesPreconfiguredJobProperties.java new file mode 100644 index 0000000000..e8f5593ae2 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/KubernetesPreconfiguredJobProperties.java @@ -0,0 +1,46 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.config; + +import com.google.common.collect.ImmutableList; +import io.kubernetes.client.models.V1Job; +import lombok.Data; +import lombok.EqualsAndHashCode; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; + +@Data +@EqualsAndHashCode(callSuper = true) +public class KubernetesPreconfiguredJobProperties extends PreconfiguredJobStageProperties { + + private String account; + private String application; + private V1Job manifest; + + public KubernetesPreconfiguredJobProperties() { + this.setProducesArtifacts(true); + } + + public List getOverridableFields() { + List overrideableFields = new ArrayList<>(Arrays.asList("account", "manifest", "application")); + overrideableFields.addAll(super.getOverridableFields()); + return overrideableFields; + } + +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/PreconfiguredJobStageProperties.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/PreconfiguredJobStageProperties.java index 7793e2d74b..9942fc27d7 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/PreconfiguredJobStageProperties.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/PreconfiguredJobStageProperties.java @@ -16,27 +16,35 @@ package com.netflix.spinnaker.orca.clouddriver.config; +import com.google.common.collect.ImmutableList; +import jdk.nashorn.internal.ir.annotations.Immutable; import lombok.Data; -import java.util.HashMap; -import java.util.List; -import java.util.Map; +import java.util.*; @Data +public abstract class PreconfiguredJobStageProperties { -public class PreconfiguredJobStageProperties { + private boolean enabled = true; + private String label; + private String description; + private String type; + private List parameters; + private boolean waitForCompletion = true; + private String cloudProvider; + private 
String credentials; + private String region; + private String propertyFile; + private boolean producesArtifacts = false; - // Fields are public as job stages use reflection to access these directly from outside the class - public boolean enabled = true; - public String label; - public String description; - public String type; - public List parameters; - public boolean waitForCompletion = true; - public String cloudProvider; - public String credentials; - public String region; - public String propertyFile; - public Map cluster = new HashMap<>(); + public List getOverridableFields() { + return Arrays.asList( + "cloudProvider", + "credentials", + "region", + "propertyFile", + "waitForCompletion" + ); + } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/TitusPreconfiguredJobProperties.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/TitusPreconfiguredJobProperties.java new file mode 100644 index 0000000000..8d5775013d --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/config/TitusPreconfiguredJobProperties.java @@ -0,0 +1,37 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.config; + +import com.google.common.collect.ImmutableList; +import lombok.Data; +import lombok.EqualsAndHashCode; + +import java.lang.reflect.Array; +import java.util.*; + +@Data +@EqualsAndHashCode(callSuper = true) +public class TitusPreconfiguredJobProperties extends PreconfiguredJobStageProperties { + + private Map cluster = new HashMap<>(); + + public List getOverridableFields() { + List overrideableFields = new ArrayList<>(Arrays.asList("cluster")); + overrideableFields.addAll(super.getOverridableFields()); + return overrideableFields; + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/ConditionAwareDeployStagePreprocessor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/ConditionAwareDeployStagePreprocessor.java new file mode 100644 index 0000000000..90721833ab --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/ConditionAwareDeployStagePreprocessor.java @@ -0,0 +1,90 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline; + +import com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.Condition; +import com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.ConditionSupplier; +import com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.WaitForConditionStage; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.strategies.DeployStagePreProcessor; +import com.netflix.spinnaker.orca.kato.pipeline.support.StageData; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.boot.autoconfigure.condition.ConditionalOnBean; +import org.springframework.boot.autoconfigure.condition.ConditionalOnExpression; +import org.springframework.stereotype.Component; + +import java.util.*; +import java.util.stream.Collectors; + +@Component +@ConditionalOnBean(ConditionSupplier.class) +@ConditionalOnExpression("${tasks.evaluateCondition.enabled:false}") +public class ConditionAwareDeployStagePreprocessor implements DeployStagePreProcessor { + private final Logger log = LoggerFactory.getLogger(ConditionAwareDeployStagePreprocessor.class); + private final WaitForConditionStage waitForConditionStage; + private final List conditionSuppliers; + + @Autowired + public ConditionAwareDeployStagePreprocessor( + WaitForConditionStage waitForConditionStage, + List conditionSuppliers + ) { + this.waitForConditionStage = waitForConditionStage; + this.conditionSuppliers = conditionSuppliers; + } + + @Override + public boolean supports(Stage stage) { + return true; + } + + @Override + public List beforeStageDefinitions(Stage stage) { + try { + final StageData stageData = stage.mapTo(StageData.class); + Set conditions = conditionSuppliers + .stream() + .flatMap(supplier -> supplier.getConditions( + stageData.getCluster(), + stageData.getRegion(), + stageData.getAccount() + 
).stream()).filter(Objects::nonNull) + .collect(Collectors.toSet()); + if (conditions.isEmpty()) { + // do not inject the stage if there are no active conditions + return Collections.emptyList(); + } + + Map ctx = new HashMap<>(); + // defines what is required by condition suppliers + ctx.put("region", stageData.getRegion()); + ctx.put("cluster", stageData.getCluster()); + ctx.put("account", stageData.getAccount()); + StageDefinition stageDefinition = new StageDefinition(); + stageDefinition.name = "Wait For Condition"; + stageDefinition.context = ctx; + stageDefinition.stageDefinitionBuilder = waitForConditionStage; + return Collections.singletonList(stageDefinition); + } catch (Exception e) { + log.error("Error determining active conditions. Proceeding with execution {}", stage.getExecution().getId(), e); + } + + return Collections.emptyList(); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/MigratePipelineStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/MigratePipelineStage.java deleted file mode 100644 index 6bec2b8a7a..0000000000 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/MigratePipelineStage.java +++ /dev/null @@ -1,41 +0,0 @@ -/* - * Copyright 2016 Netflix, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License") - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package com.netflix.spinnaker.orca.clouddriver.pipeline; - -import com.netflix.spinnaker.orca.clouddriver.tasks.MonitorKatoTask; -import com.netflix.spinnaker.orca.clouddriver.tasks.pipeline.MigratePipelineClustersTask; -import com.netflix.spinnaker.orca.clouddriver.tasks.pipeline.UpdateMigratedPipelineTask; -import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; -import com.netflix.spinnaker.orca.pipeline.TaskNode; -import com.netflix.spinnaker.orca.pipeline.model.Stage; -import org.springframework.stereotype.Component; - -@Component -public class MigratePipelineStage implements StageDefinitionBuilder { - - public static final String PIPELINE_CONFIG_TYPE = "migratePipeline"; - - @Override - public void taskGraph(Stage stage, TaskNode.Builder builder) { - builder - .withTask("migratePipelineClusters", MigratePipelineClustersTask.class) - .withTask("monitorMigration", MonitorKatoTask.class); - if (!Boolean.TRUE.equals(stage.getContext().get("dryRun"))) { - builder.withTask("updateMigratedPipeline", UpdateMigratedPipelineTask.class); - } - } -} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/cache/ClouddriverClearAltTablespaceStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/cache/ClouddriverClearAltTablespaceStage.java new file mode 100644 index 0000000000..2d8023a663 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/cache/ClouddriverClearAltTablespaceStage.java @@ -0,0 +1,17 @@ +package com.netflix.spinnaker.orca.clouddriver.pipeline.cache; + +import com.netflix.spinnaker.orca.clouddriver.tasks.cache.ClouddriverClearAltTablespaceTask; +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.jetbrains.annotations.NotNull; +import org.springframework.stereotype.Component; + 
+@Component +public class ClouddriverClearAltTablespaceStage implements StageDefinitionBuilder { + + @Override + public void taskGraph(@NotNull Stage stage, @NotNull TaskNode.Builder builder) { + builder.withTask("clouddriverClearAltTablespace", ClouddriverClearAltTablespaceTask.class); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/Condition.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/Condition.java new file mode 100644 index 0000000000..64df49182a --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/Condition.java @@ -0,0 +1,74 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.conditions; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonProperty; + +import java.util.Objects; + +public class Condition { + private String name; + private String description; + + @JsonCreator + public Condition(@JsonProperty("name") String name, @JsonProperty("description") String description) { + this.name = name; + this.description = description; + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public String getDescription() { + return description; + } + + public void setDescription(String description) { + this.description = description; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } if (o == null || getClass() != o.getClass()) { + return false; + } + + Condition that = (Condition) o; + return name.equals(that.name) && description.equals(that.description); + } + + @Override + public int hashCode() { + return Objects.hash(name, description); + } + + @Override + public String toString() { + return "Condition{" + + "name='" + name + '\'' + + ", description='" + description + '\'' + + '}'; + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConditionConfigurationProperties.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConditionConfigurationProperties.java new file mode 100644 index 0000000000..57e860d5b5 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConditionConfigurationProperties.java @@ -0,0 +1,85 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.conditions; + +import com.netflix.spinnaker.kork.dynamicconfig.DynamicConfigService; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.boot.context.properties.ConfigurationProperties; +import org.springframework.stereotype.Component; + +import java.util.List; +import java.util.concurrent.TimeUnit; + +@Component +@ConfigurationProperties("tasks.evaluateCondition") +public class ConditionConfigurationProperties { + private final DynamicConfigService configService; + private boolean enabled = false; + private Long backoffWaitMs = TimeUnit.MINUTES.toMillis(5); + private Long waitTimeoutMs = TimeUnit.MINUTES.toMillis(120); + private List clusters; + private List activeConditions; + + @Autowired + public ConditionConfigurationProperties(DynamicConfigService configService) { + this.configService = configService; + } + + public boolean isEnabled() { + return configService.getConfig(Boolean.class, "tasks.evaluateCondition", enabled); + } + + public void setEnabled(boolean enabled) { + this.enabled = enabled; + } + + public Long getBackoffWaitMs() { + return configService.getConfig(Long.class, "tasks.evaluateCondition.backoffWaitMs", backoffWaitMs); + } + + public void setBackoffWaitMs(Long backoffWaitMs) { + this.backoffWaitMs = backoffWaitMs; + } + + public long getWaitTimeoutMs() { + return configService.getConfig(Long.class, "tasks.evaluateCondition.waitTimeoutMs", waitTimeoutMs); + } + + public void setWaitTimeoutMs(long waitTimeoutMs) { + 
this.waitTimeoutMs = waitTimeoutMs; + } + + public List getClusters() { + return configService.getConfig(List.class, "tasks.evaluateCondition.clusters", clusters); + } + + public List getActiveConditions() { + return configService.getConfig(List.class, "tasks.evaluateCondition.activeConditions", activeConditions); + } + + public void setClusters(List clusters) { + this.clusters = clusters; + } + + public void setActiveConditions(List activeConditions) { + this.activeConditions = activeConditions; + } + + public boolean isSkipWait() { + return configService.getConfig(Boolean.class, "tasks.evaluateCondition.skipWait", false); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConditionSupplier.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConditionSupplier.java new file mode 100644 index 0000000000..8313c05a30 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConditionSupplier.java @@ -0,0 +1,29 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.conditions; + +import java.util.List; + +/** + * A provider of unmet conditions leading to a paused execution + */ +public interface ConditionSupplier { + /** + * returns a list of currently unmet conditions. 
+ */ + List getConditions(String cluster, String region, String account); +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConfigurationBackedConditionSupplier.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConfigurationBackedConditionSupplier.java new file mode 100644 index 0000000000..278c5e0ffd --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConfigurationBackedConditionSupplier.java @@ -0,0 +1,58 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
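[Editor's note: a `ConditionSupplier` reports the currently unmet conditions for a cluster/region/account triple; an empty result means the deploy may proceed. A self-contained sketch of that contract, with simplified stand-in types rather than the actual orca classes:]

```java
import java.util.Collections;
import java.util.List;

// Simplified stand-in for orca's Condition.
class Condition {
    private final String name;
    private final String description;
    Condition(String name, String description) { this.name = name; this.description = description; }
    String getName() { return name; }
    String getDescription() { return description; }
}

// Simplified stand-in for orca's ConditionSupplier interface.
interface ConditionSupplier {
    // Returns the currently unmet conditions; empty means nothing blocks the deploy.
    List<Condition> getConditions(String cluster, String region, String account);
}

// A supplier that pauses a single named cluster, analogous in spirit to the
// configuration-backed supplier in this patch.
class SingleClusterPauseSupplier implements ConditionSupplier {
    private final String pausedCluster;
    SingleClusterPauseSupplier(String pausedCluster) { this.pausedCluster = pausedCluster; }

    @Override
    public List<Condition> getConditions(String cluster, String region, String account) {
        if (!pausedCluster.equals(cluster)) {
            return Collections.emptyList();
        }
        return Collections.singletonList(
            new Condition("manualPause", "Deployments to " + cluster + " are paused"));
    }
}

public class ConditionSupplierDemo {
    public static void main(String[] args) {
        ConditionSupplier supplier = new SingleClusterPauseSupplier("orca-main");
        System.out.println(supplier.getConditions("orca-main", "us-east-1", "prod").size()); // 1
        System.out.println(supplier.getConditions("other", "us-east-1", "prod").isEmpty()); // true
    }
}
```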
+ */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.conditions; + +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.boot.autoconfigure.condition.ConditionalOnExpression; +import org.springframework.stereotype.Component; + +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * Allows statically defined conditions + * Aimed to be used for testing or as a pause-all deployments mechanism + */ +@Component +@ConditionalOnExpression("${tasks.evaluateCondition.enabled:false}") +public class ConfigurationBackedConditionSupplier implements ConditionSupplier { + private final ConditionConfigurationProperties conditionsConfigurationProperties; + + @Autowired + public ConfigurationBackedConditionSupplier(ConditionConfigurationProperties conditionsConfigurationProperties) { + this.conditionsConfigurationProperties = conditionsConfigurationProperties; + } + + @Override + public List getConditions(String cluster, String region, String account) { + final List clusters = conditionsConfigurationProperties.getClusters(); + final List activeConditions = conditionsConfigurationProperties.getActiveConditions(); + + if (clusters == null || clusters.isEmpty() || activeConditions == null || activeConditions.isEmpty()) { + return Collections.emptyList(); + } + + if (!clusters.contains(cluster)) { + return Collections.emptyList(); + } + + return activeConditions.stream() + .map(conditionName -> new Condition(conditionName, String.format("Active condition applies to: %s", conditionName))) + .collect(Collectors.toList()); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/WaitForConditionStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/WaitForConditionStage.java new file mode 100644 index 0000000000..a0e428afa9 --- /dev/null +++ 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/WaitForConditionStage.java @@ -0,0 +1,94 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.conditions; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonProperty; +import com.netflix.spinnaker.orca.clouddriver.tasks.conditions.EvaluateConditionTask; +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.jetbrains.annotations.NotNull; +import org.springframework.stereotype.Component; + +import javax.annotation.Nullable; + +@Component +public class WaitForConditionStage implements StageDefinitionBuilder { + public static String STAGE_TYPE = "waitForCondition"; + + @Override + public void taskGraph(@NotNull Stage stage, @NotNull TaskNode.Builder builder) { + builder.withTask(STAGE_TYPE, EvaluateConditionTask.class); + } + + public static final class WaitForConditionContext { + private Status status; + private String region; + private String cluster; + private String account; + + @JsonCreator + public WaitForConditionContext( + @JsonProperty("status") Status status, + @JsonProperty("region") @Nullable String region, + @JsonProperty("cluster") @Nullable String cluster, + 
@JsonProperty("account") @Nullable String account + ) { + this.status = status; + this.region = region; + this.cluster = cluster; + this.account = account; + } + + public enum Status { + SKIPPED, WAITING, ERROR + } + + public Status getStatus() { + return status; + } + + public void setStatus(Status status) { + this.status = status; + } + + public String getRegion() { + return region; + } + + public void setRegion(String region) { + this.region = region; + } + + public String getCluster() { + return cluster; + } + + public void setCluster(String cluster) { + this.cluster = cluster; + } + + public String getAccount() { + return account; + } + + public void setAccount(String account) { + this.account = account; + } + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/job/PreconfiguredJobStage.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/job/PreconfiguredJobStage.groovy index 286b7850db..fc00e51ca8 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/job/PreconfiguredJobStage.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/job/PreconfiguredJobStage.groovy @@ -22,21 +22,19 @@ import com.netflix.spinnaker.orca.clouddriver.exception.PreconfiguredJobNotFound import com.netflix.spinnaker.orca.clouddriver.service.JobService import com.netflix.spinnaker.orca.pipeline.TaskNode import com.netflix.spinnaker.orca.pipeline.model.Stage -import org.springframework.beans.factory.annotation.Autowired import org.springframework.stereotype.Component @Component class PreconfiguredJobStage extends RunJobStage { - @Autowired(required=false) private JobService jobService - def fields = PreconfiguredJobStageProperties.declaredFields.findAll { - !it.synthetic && !['props', 'enabled', 'label', 'description', 'type', 'parameters'].contains(it.name) - }.collect { it.name } + public PreconfiguredJobStage(Optional 
optionalJobService) { + this.jobService = optionalJobService.orElse(null) + } @Override - void taskGraph(Stage stage, TaskNode.Builder builder) { + public void taskGraph(Stage stage, TaskNode.Builder builder) { def preconfiguredJob = jobService.getPreconfiguredStages().find { stage.type == it.type } if (!preconfiguredJob) { @@ -48,7 +46,7 @@ class PreconfiguredJobStage extends RunJobStage { } private Map overrideIfNotSetInContextAndOverrideDefault(Map context, PreconfiguredJobStageProperties preconfiguredJob) { - fields.each { + preconfiguredJob.getOverridableFields().each { if (context[it] == null || preconfiguredJob[it] != null) { context[it] = preconfiguredJob[it] } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/manifest/DeployManifestStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/manifest/DeployManifestStage.java index 7230898b03..87428c1d57 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/manifest/DeployManifestStage.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/manifest/DeployManifestStage.java @@ -19,22 +19,31 @@ import com.netflix.spinnaker.orca.clouddriver.tasks.MonitorKatoTask; import com.netflix.spinnaker.orca.clouddriver.tasks.artifacts.CleanupArtifactsTask; +import com.netflix.spinnaker.orca.clouddriver.tasks.manifest.DeployManifestContext; +import com.netflix.spinnaker.orca.clouddriver.tasks.manifest.DeployManifestContext.TrafficManagement; import com.netflix.spinnaker.orca.clouddriver.tasks.manifest.DeployManifestTask; import com.netflix.spinnaker.orca.clouddriver.tasks.manifest.ManifestForceCacheRefreshTask; +import com.netflix.spinnaker.orca.clouddriver.tasks.manifest.ManifestStrategyStagesAdder; import com.netflix.spinnaker.orca.clouddriver.tasks.manifest.PromoteManifestKatoOutputsTask; import 
com.netflix.spinnaker.orca.clouddriver.tasks.manifest.WaitForManifestStableTask; import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.graph.StageGraphBuilder; import com.netflix.spinnaker.orca.pipeline.model.Stage; import com.netflix.spinnaker.orca.pipeline.tasks.artifacts.BindProducedArtifactsTask; +import javax.annotation.Nonnull; +import lombok.RequiredArgsConstructor; import org.springframework.stereotype.Component; @Component +@RequiredArgsConstructor public class DeployManifestStage implements StageDefinitionBuilder { public static final String PIPELINE_CONFIG_TYPE = "deployManifest"; + private final ManifestStrategyStagesAdder manifestStrategyStagesAdder; + @Override - public void taskGraph(Stage stage, TaskNode.Builder builder) { + public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) { builder.withTask(DeployManifestTask.TASK_NAME, DeployManifestTask.class) .withTask("monitorDeploy", MonitorKatoTask.class) .withTask(PromoteManifestKatoOutputsTask.TASK_NAME, PromoteManifestKatoOutputsTask.class) @@ -43,4 +52,12 @@ public void taskGraph(Stage stage, TaskNode.Builder builder) { .withTask(CleanupArtifactsTask.TASK_NAME, CleanupArtifactsTask.class) .withTask(BindProducedArtifactsTask.TASK_NAME, BindProducedArtifactsTask.class); } + + public void afterStages(@Nonnull Stage stage, @Nonnull StageGraphBuilder graph) { + DeployManifestContext context = stage.mapTo(DeployManifestContext.class); + TrafficManagement trafficManagement = context.getTrafficManagement(); + if (trafficManagement.isEnabled()) { + manifestStrategyStagesAdder.addAfterStages(trafficManagement.getOptions().getStrategy(), graph, context); + } + } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/manifest/PatchManifestStage.java 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/manifest/PatchManifestStage.java index 4fec1e1282..ed8cce5b80 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/manifest/PatchManifestStage.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/manifest/PatchManifestStage.java @@ -17,6 +17,7 @@ package com.netflix.spinnaker.orca.clouddriver.pipeline.manifest; import com.netflix.spinnaker.orca.clouddriver.tasks.MonitorKatoTask; +import com.netflix.spinnaker.orca.clouddriver.tasks.manifest.DynamicResolveManifestTask; import com.netflix.spinnaker.orca.clouddriver.tasks.manifest.ManifestForceCacheRefreshTask; import com.netflix.spinnaker.orca.clouddriver.tasks.manifest.PatchManifestTask; import com.netflix.spinnaker.orca.clouddriver.tasks.manifest.PromoteManifestKatoOutputsTask; @@ -33,7 +34,8 @@ public class PatchManifestStage implements StageDefinitionBuilder { @Override public void taskGraph(Stage stage, TaskNode.Builder builder) { - builder.withTask(PatchManifestTask.TASK_NAME, PatchManifestTask.class) + builder.withTask(DynamicResolveManifestTask.TASK_NAME, DynamicResolveManifestTask.class) + .withTask(PatchManifestTask.TASK_NAME, PatchManifestTask.class) .withTask("monitorPatch", MonitorKatoTask.class) .withTask(PromoteManifestKatoOutputsTask.TASK_NAME, PromoteManifestKatoOutputsTask.class) .withTask(ManifestForceCacheRefreshTask.TASK_NAME, ManifestForceCacheRefreshTask.class) diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/pipeline/SavePipelinesFromArtifactStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/pipeline/SavePipelinesFromArtifactStage.java new file mode 100644 index 0000000000..76ecfedc54 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/pipeline/SavePipelinesFromArtifactStage.java @@ -0,0 +1,45 @@ +/* 
+ * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.clouddriver.pipeline.pipeline; + +import com.netflix.spinnaker.orca.clouddriver.tasks.pipeline.*; +import com.netflix.spinnaker.orca.front50.tasks.MonitorFront50Task; +import com.netflix.spinnaker.orca.front50.tasks.SavePipelineTask; +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; +import com.netflix.spinnaker.orca.pipeline.TaskNode.Builder; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.springframework.stereotype.Component; + +@Component +public class SavePipelinesFromArtifactStage implements StageDefinitionBuilder { + + @Override + public void taskGraph(Stage stage, Builder builder) { + + builder + .withTask("getPipelinesFromArtifact", GetPipelinesFromArtifactTask.class) + .withLoop(subGraph -> { + subGraph + .withTask("preparePipelineToSaveTask", PreparePipelineToSaveTask.class) + .withTask("savePipeline", SavePipelineTask.class) + .withTask("waitForPipelineSave", MonitorFront50Task.class) + .withTask("checkPipelineResults", CheckPipelineResultsTask.class) + .withTask("checkForRemainingPipelines", CheckForRemainingPipelinesTask.class); + }) + .withTask("savePipelinesCompleteTask", SavePipelinesCompleteTask.class); + } + +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/ApplySourceServerGroupCapacityTask.groovy 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/ApplySourceServerGroupCapacityTask.groovy index 1eb2110abb..aa271a0600 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/ApplySourceServerGroupCapacityTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/ApplySourceServerGroupCapacityTask.groovy @@ -67,15 +67,22 @@ class ApplySourceServerGroupCapacityTask extends AbstractServerGroupTask { targetServerGroup.capacity.min as Long ) - if (context.cloudProvider == "aws") { - // aws is the only cloud provider supporting partial resizes - // updating anything other than 'min' could result in instances being - // unnecessarily destroyed or created if autoscaling has occurred - context.capacity = [min: minCapacity] - } else { - context.capacity = targetServerGroup.capacity + [ - min: minCapacity - ] + + switch (context.cloudProvider) { + case 'aws': + // aws is the only cloud provider supporting partial resizes + // updating anything other than 'min' could result in instances being + // unnecessarily destroyed or created if autoscaling has occurred + context.capacity = [min: minCapacity] + break + case 'cloudfoundry': + // cloudfoundry always wants to resize to the snapshot taken from desired capacity + context.capacity = sourceServerGroupCapacitySnapshot + break + default: + context.capacity = targetServerGroup.capacity + [ + min: minCapacity + ] } log.info("Restoring min capacity of ${context.region}/${targetServerGroup.name} to ${minCapacity} (currentMin: ${targetServerGroup.capacity.min}, snapshotMin: ${sourceServerGroupCapacitySnapshot.min})") diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/AwsDeployStagePreProcessor.groovy 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/AwsDeployStagePreProcessor.groovy
index 89f69f7e2a..16b8890bbf 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/AwsDeployStagePreProcessor.groovy
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/AwsDeployStagePreProcessor.groovy
@@ -24,11 +24,12 @@ import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.Targe
 import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.TargetServerGroupResolver
 import com.netflix.spinnaker.orca.kato.pipeline.support.ResizeStrategy
 import com.netflix.spinnaker.orca.kato.pipeline.support.StageData
+import com.netflix.spinnaker.orca.pipeline.CheckPreconditionsStage
 import com.netflix.spinnaker.orca.pipeline.model.Stage
 import org.springframework.beans.factory.annotation.Autowired
 import org.springframework.stereotype.Component
 
-import javax.annotation.Nullable
+import java.util.concurrent.TimeUnit
 
 import static com.netflix.spinnaker.orca.kato.pipeline.support.ResizeStrategySupport.getSource
 
@@ -43,6 +44,9 @@ class AwsDeployStagePreProcessor implements DeployStagePreProcessor {
   @Autowired
   TargetServerGroupResolver targetServerGroupResolver
 
+  @Autowired
+  CheckPreconditionsStage checkPreconditionsStage
+
   @Override
   List additionalSteps(Stage stage) {
     def stageData = stage.mapTo(StageData)
@@ -62,27 +66,47 @@ class AwsDeployStagePreProcessor implements DeployStagePreProcessor {
   @Override
   List beforeStageDefinitions(Stage stage) {
     def stageData = stage.mapTo(StageData)
+    def stageDefinitions = []
+
+    if (shouldCheckServerGroupsPreconditions(stageData)) {
+      stageDefinitions << new StageDefinition(
+        name: "Check Deploy Preconditions",
+        stageDefinitionBuilder: checkPreconditionsStage,
+        context: [
+          preconditionType: "clusterSize",
+          context: [
+            onlyEnabledServerGroups: true,
+            comparison: '<=',
+            expected: stageData.maxInitialAsgs,
+            regions: [ stageData.region ],
+            cluster: stageData.cluster,
+            application: stageData.application,
+            credentials: stageData.getAccount(),
+            moniker: stageData.moniker
+          ]
+        ]
+      )
+    }
+
     if (shouldPinSourceServerGroup(stageData.strategy)) {
       def optionalResizeContext = getResizeContext(stageData)
       if (!optionalResizeContext.isPresent()) {
         // this means we don't need to resize anything
         // happens in particular when there is no pre-existing source server group
-        return []
+        return stageDefinitions
       }
 
       def resizeContext = optionalResizeContext.get()
       resizeContext.pinMinimumCapacity = true
 
-      return [
-        new StageDefinition(
-          name: "Pin ${resizeContext.serverGroupName}",
-          stageDefinitionBuilder: resizeServerGroupStage,
-          context: resizeContext
-        )
-      ]
+      stageDefinitions << new StageDefinition(
+        name: "Pin ${resizeContext.serverGroupName}",
+        stageDefinitionBuilder: resizeServerGroupStage,
+        context: resizeContext
+      )
     }
 
-    return []
+    return stageDefinitions
   }
 
   @Override
@@ -130,6 +154,11 @@ class AwsDeployStagePreProcessor implements DeployStagePreProcessor {
     return strategy == "rollingredblack"
   }
 
+  private static boolean shouldCheckServerGroupsPreconditions(StageData stageData) {
+    // TODO(dreynaud): enabling cautiously for RRB only for testing, but we would ideally roll this out to other strategies
+    return stageData.strategy in ["rollingredblack"] && stageData.maxInitialAsgs != -1
+  }
+
   private Optional<Map<String, Object>> getResizeContext(StageData stageData) {
     def cleanupConfig = AbstractDeployStrategyStage.CleanupConfig.fromStage(stageData)
     def baseContext = [
@@ -177,8 +206,13 @@ class AwsDeployStagePreProcessor implements DeployStagePreProcessor {
     def resizeContext = optionalResizeContext.get()
     resizeContext.unpinMinimumCapacity = true
 
+    if (deployFailed) {
+      // we want to specify a new timeout explicitly here, in case the deploy itself failed because of a timeout
+      resizeContext.stageTimeoutMs = TimeUnit.MINUTES.toMillis(20)
+    }
+
     return new StageDefinition(
-      name: "Unpin ${resizeContext.serverGroupName}".toString(),
+      name: "Unpin ${resizeContext.serverGroupName} (deployFailed=${deployFailed})".toString(),
       stageDefinitionBuilder: resizeServerGroupStage,
       context: resizeContext
     )
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/CaptureSourceServerGroupCapacityTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/CaptureSourceServerGroupCapacityTask.groovy
index c18166ce52..4d77b2121e 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/CaptureSourceServerGroupCapacityTask.groovy
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/CaptureSourceServerGroupCapacityTask.groovy
@@ -72,6 +72,6 @@ class CaptureSourceServerGroupCapacityTask implements Task {
       }
     }
 
-    return new TaskResult(ExecutionStatus.SUCCEEDED, stageOutputs)
+    return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(stageOutputs).build()
   }
 }
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/ModifyAwsScalingProcessStage.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/ModifyAwsScalingProcessStage.groovy
index 689cc04bf1..3c8f13cf85 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/ModifyAwsScalingProcessStage.groovy
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/aws/ModifyAwsScalingProcessStage.groovy
@@ -109,7 +109,7 @@ class ModifyAwsScalingProcessStage extends TargetServerGroupLinearStageSupport {
       isComplete = suspendedProcesses?.intersect(stageData.processes) == stageData.processes
     }
 
-    return isComplete ? new TaskResult(ExecutionStatus.SUCCEEDED) : new TaskResult(ExecutionStatus.RUNNING)
+    return isComplete ? TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) : TaskResult.ofStatus(ExecutionStatus.RUNNING)
   }
 
   @CompileDynamic
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployServiceStagePreprocessor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployServiceStagePreprocessor.java
new file mode 100644
index 0000000000..1dc38831de
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployServiceStagePreprocessor.java
@@ -0,0 +1,42 @@
+/*
+ * Copyright 2019 Pivotal, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.cf;
+
+import com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker.DeployServiceStagePreprocessor;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryDeployServiceTask;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryMonitorKatoServicesTask;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryWaitForDeployServiceTask;
+import com.netflix.spinnaker.orca.kato.pipeline.support.StageData;
+import com.netflix.spinnaker.orca.pipeline.TaskNode;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import org.springframework.stereotype.Component;
+
+@Component
+public class CloudFoundryDeployServiceStagePreprocessor implements DeployServiceStagePreprocessor {
+  @Override
+  public boolean supports(Stage stage) {
+    return "cloudfoundry".equals(stage.mapTo(StageData.class).getCloudProvider());
+  }
+
+  @Override
+  public void addSteps(TaskNode.Builder builder, Stage stage) {
+    builder
+      .withTask("deployService", CloudFoundryDeployServiceTask.class)
+      .withTask("monitorDeployService", CloudFoundryMonitorKatoServicesTask.class)
+      .withTask("waitForDeployService", CloudFoundryWaitForDeployServiceTask.class);
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployStagePreProcessor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployStagePreProcessor.java
new file mode 100644
index 0000000000..f6b5c67709
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployStagePreProcessor.java
@@ -0,0 +1,95 @@
+/*
+ * Copyright 2019 Pivotal, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.cf;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.netflix.spinnaker.orca.clouddriver.pipeline.cluster.RollbackClusterStage;
+import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.ServerGroupForceCacheRefreshStage;
+import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.strategies.DeployStagePreProcessor;
+import com.netflix.spinnaker.orca.kato.pipeline.strategy.Strategy;
+import com.netflix.spinnaker.orca.kato.pipeline.support.StageData;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import lombok.AllArgsConstructor;
+import lombok.Data;
+import lombok.EqualsAndHashCode;
+import org.springframework.stereotype.Component;
+
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+@Component
+@AllArgsConstructor
+class CloudFoundryDeployStagePreProcessor implements DeployStagePreProcessor {
+  private RollbackClusterStage rollbackClusterStage;
+  private ServerGroupForceCacheRefreshStage serverGroupForceCacheRefreshStage;
+
+  @Override
+  public List onFailureStageDefinitions(Stage stage) {
+    CfRollingRedBlackStageData stageData = stage.mapTo(CfRollingRedBlackStageData.class);
+    List stageDefinitions = new ArrayList<>();
+    Strategy strategy = Strategy.fromStrategy(stageData.getStrategy());
+
+    if (strategy.equals(Strategy.CF_ROLLING_RED_BLACK) && (stageData.getRollback() != null && stageData.getRollback().isOnFailure())) {
+      StageDefinition forceCacheRefreshStageDefinition = new StageDefinition();
+      forceCacheRefreshStageDefinition.stageDefinitionBuilder = serverGroupForceCacheRefreshStage;
+      forceCacheRefreshStageDefinition.name = "Force Cache Refresh";
+      forceCacheRefreshStageDefinition.context = createBasicContext(stageData);
+      stageDefinitions.add(forceCacheRefreshStageDefinition);
+
+      StageDefinition rollbackStageDefinition = new StageDefinition();
+      Map rollbackContext = createBasicContext(stageData);
+      rollbackContext.put("serverGroup", stageData.getSource().getServerGroupName());
+      rollbackContext.put("stageTimeoutMs", TimeUnit.MINUTES.toMillis(30)); // timebox a rollback to 30 minutes
+      rollbackStageDefinition.stageDefinitionBuilder = rollbackClusterStage;
+      rollbackStageDefinition.name = "Rolling back to previous deployment";
+      rollbackStageDefinition.context = rollbackContext;
+      stageDefinitions.add(rollbackStageDefinition);
+    }
+
+    return stageDefinitions;
+  }
+
+  @Override
+  public boolean supports(Stage stage) {
+    return "cloudfoundry".equals(stage.mapTo(StageData.class).getCloudProvider());
+  }
+
+  private Map createBasicContext(CfRollingRedBlackStageData stageData) {
+    Map context = new HashMap<>();
+    String credentials = stageData.getCredentials() != null ? stageData.getCredentials() : stageData.getAccount();
+    context.put("credentials", credentials);
+    context.put("cloudProvider", stageData.getCloudProvider());
+    context.put("regions", Collections.singletonList(stageData.getRegion()));
+    context.put("deploy.server.groups", stageData.getDeployedServerGroups());
+    return context;
+  }
+
+
+  @EqualsAndHashCode(callSuper = true)
+  @Data
+  private static class CfRollingRedBlackStageData extends StageData {
+    private Rollback rollback;
+
+    @JsonProperty("deploy.server.groups")
+    Map<String, List<String>> deployedServerGroups = new HashMap<>();
+
+    @Data
+    private static class Rollback {
+      private boolean onFailure;
+    }
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDestroyServiceStagePreprocessor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDestroyServiceStagePreprocessor.java
new file mode 100644
index 0000000000..908a81e70d
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDestroyServiceStagePreprocessor.java
@@ -0,0 +1,42 @@
+/*
+ * Copyright 2019 Pivotal, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.cf;
+
+import com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker.DestroyServiceStagePreprocessor;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryDestroyServiceTask;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryMonitorKatoServicesTask;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryWaitForDestroyServiceTask;
+import com.netflix.spinnaker.orca.kato.pipeline.support.StageData;
+import com.netflix.spinnaker.orca.pipeline.TaskNode;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import org.springframework.stereotype.Component;
+
+@Component
+public class CloudFoundryDestroyServiceStagePreprocessor implements DestroyServiceStagePreprocessor {
+  @Override
+  public boolean supports(Stage stage) {
+    return "cloudfoundry".equals(stage.mapTo(StageData.class).getCloudProvider());
+  }
+
+  @Override
+  public void addSteps(TaskNode.Builder builder, Stage stage) {
+    builder
+      .withTask("destroyService", CloudFoundryDestroyServiceTask.class)
+      .withTask("monitorDestroyService", CloudFoundryMonitorKatoServicesTask.class)
+      .withTask("waitForDestroyService", CloudFoundryWaitForDestroyServiceTask.class);
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryShareServiceStagePreprocessor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryShareServiceStagePreprocessor.java
new file mode 100644
index 0000000000..3782b52c7e
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryShareServiceStagePreprocessor.java
@@ -0,0 +1,40 @@
+/*
+ * Copyright 2019 Pivotal, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.cf;
+
+import com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker.ShareServiceStagePreprocessor;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryMonitorKatoServicesTask;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryShareServiceTask;
+import com.netflix.spinnaker.orca.kato.pipeline.support.StageData;
+import com.netflix.spinnaker.orca.pipeline.TaskNode;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import org.springframework.stereotype.Component;
+
+@Component
+public class CloudFoundryShareServiceStagePreprocessor implements ShareServiceStagePreprocessor {
+  @Override
+  public boolean supports(Stage stage) {
+    return "cloudfoundry".equals(stage.mapTo(StageData.class).getCloudProvider());
+  }
+
+  @Override
+  public void addSteps(TaskNode.Builder builder, Stage stage) {
+    builder
+      .withTask("shareService", CloudFoundryShareServiceTask.class)
+      .withTask("monitorShareService", CloudFoundryMonitorKatoServicesTask.class);
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryUnshareServiceStagePreprocessor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryUnshareServiceStagePreprocessor.java
new file mode 100644
index 0000000000..2590ea0dc9
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryUnshareServiceStagePreprocessor.java
@@ -0,0 +1,40 @@
+/*
+ * Copyright 2019 Pivotal, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.cf;
+
+import com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker.UnshareServiceStagePreprocessor;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryMonitorKatoServicesTask;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryUnshareServiceTask;
+import com.netflix.spinnaker.orca.kato.pipeline.support.StageData;
+import com.netflix.spinnaker.orca.pipeline.TaskNode;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import org.springframework.stereotype.Component;
+
+@Component
+public class CloudFoundryUnshareServiceStagePreprocessor implements UnshareServiceStagePreprocessor {
+  @Override
+  public boolean supports(Stage stage) {
+    return "cloudfoundry".equals(stage.mapTo(StageData.class).getCloudProvider());
+  }
+
+  @Override
+  public void addSteps(TaskNode.Builder builder, Stage stage) {
+    builder
+      .withTask("unshareService", CloudFoundryUnshareServiceTask.class)
+      .withTask("monitorUnshareService", CloudFoundryMonitorKatoServicesTask.class);
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/securitygroup/MigrateSecurityGroupStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CreateServiceKeyStage.java
similarity index 52%
rename from orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/securitygroup/MigrateSecurityGroupStage.java
rename to orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CreateServiceKeyStage.java
index 7e98590e05..4ea52b08bc 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/securitygroup/MigrateSecurityGroupStage.java
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CreateServiceKeyStage.java
@@ -1,7 +1,7 @@
 /*
- * Copyright 2016 Netflix, Inc.
+ * Copyright 2019 Pivotal, Inc.
  *
- * Licensed under the Apache License, Version 2.0 (the "License")
+ * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
  * You may obtain a copy of the License at
  *
@@ -14,24 +14,24 @@
  */
 
-package com.netflix.spinnaker.orca.clouddriver.pipeline.securitygroup;
+package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.cf;
 
-import com.netflix.spinnaker.orca.clouddriver.tasks.MonitorKatoTask;
-import com.netflix.spinnaker.orca.clouddriver.tasks.securitygroup.MigrateSecurityGroupTask;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryCreateServiceKeyTask;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryMonitorKatoServicesTask;
+import com.netflix.spinnaker.orca.clouddriver.utils.CloudProviderAware;
 import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder;
 import com.netflix.spinnaker.orca.pipeline.TaskNode;
 import com.netflix.spinnaker.orca.pipeline.model.Stage;
 import org.springframework.stereotype.Component;
 
-@Component
-public class MigrateSecurityGroupStage implements StageDefinitionBuilder {
-
-  public static final String PIPELINE_CONFIG_TYPE = "migrateSecurityGroup";
+import javax.annotation.Nonnull;
 
+@Component
+class CreateServiceKeyStage implements StageDefinitionBuilder, CloudProviderAware {
   @Override
-  public void taskGraph(Stage stage, TaskNode.Builder builder) {
+  public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) {
     builder
-      .withTask("migrateSecurityGroup", MigrateSecurityGroupTask.class)
-      .withTask("monitorMigration", MonitorKatoTask.class);
+      .withTask("createServiceKey", CloudFoundryCreateServiceKeyTask.class)
+      .withTask("monitorCreateServiceKey", CloudFoundryMonitorKatoServicesTask.class);
   }
 }
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/gce/SetStatefulDiskStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/gce/SetStatefulDiskStage.java
new file mode 100644
index 0000000000..a6d5b79f23
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/gce/SetStatefulDiskStage.java
@@ -0,0 +1,59 @@
+/*
+ *
+ * * Copyright 2019 Google, Inc.
+ * *
+ * * Licensed under the Apache License, Version 2.0 (the "License")
+ * * you may not use this file except in compliance with the License.
+ * * You may obtain a copy of the License at
+ * *
+ * *     http://www.apache.org/licenses/LICENSE-2.0
+ * *
+ * * Unless required by applicable law or agreed to in writing, software
+ * * distributed under the License is distributed on an "AS IS" BASIS,
+ * * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * * See the License for the specific language governing permissions and
+ * * limitations under the License.
+ *
+ *
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.gce;
+
+import com.netflix.spinnaker.kork.dynamicconfig.DynamicConfigService;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.gce.SetStatefulDiskTask;
+import com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.ServerGroupCacheForceRefreshTask;
+import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder;
+import com.netflix.spinnaker.orca.pipeline.TaskNode;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import lombok.Data;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.stereotype.Component;
+
+@Component
+public class SetStatefulDiskStage implements StageDefinitionBuilder {
+
+  private final DynamicConfigService dynamicConfigService;
+
+  @Autowired
+  public SetStatefulDiskStage(DynamicConfigService dynamicConfigService) {
+    this.dynamicConfigService = dynamicConfigService;
+  }
+
+  @Override
+  public void taskGraph(Stage stage, TaskNode.Builder builder) {
+    builder.withTask("setStatefulDisk", SetStatefulDiskTask.class);
+
+    if (isForceCacheRefreshEnabled(dynamicConfigService)) {
+      builder.withTask("forceCacheRefresh", ServerGroupCacheForceRefreshTask.class);
+    }
+  }
+
+  @Data
+  public static class StageData {
+
+    public String accountName;
+    public String serverGroupName;
+    public String region;
+    public String deviceName;
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/gce/WaitForGceAutoscalingPolicyTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/gce/WaitForGceAutoscalingPolicyTask.java
index 1fbded7d19..e453b41e03 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/gce/WaitForGceAutoscalingPolicyTask.java
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/gce/WaitForGceAutoscalingPolicyTask.java
@@ -52,8 +52,8 @@ public TaskResult execute(@Nonnull Stage stage) {
         .getAutoscalingPolicy()
         .get("mode");
     return AutoscalingMode.valueOf(autoscalingMode) == data.getMode() ?
-        new TaskResult(ExecutionStatus.SUCCEEDED) :
-        new TaskResult(ExecutionStatus.RUNNING);
+        TaskResult.SUCCEEDED :
+        TaskResult.RUNNING;
   }
 
   private TargetServerGroup getTargetGroupForLocation(StageData data, String location) {
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/DeleteSecurityGroupStage.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/DeleteSecurityGroupStage.groovy
index ebab75ad09..61105180bb 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/DeleteSecurityGroupStage.groovy
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/DeleteSecurityGroupStage.groovy
@@ -32,7 +32,7 @@ class DeleteSecurityGroupStage implements StageDefinitionBuilder {
   void taskGraph(Stage stage, TaskNode.Builder builder) {
     builder
       .withTask("deleteSecurityGroup", DeleteSecurityGroupTask)
-      .withTask("forceCacheRefresh", DeleteSecurityGroupForceRefreshTask)
       .withTask("monitorDelete", MonitorKatoTask)
+      .withTask("forceCacheRefresh", DeleteSecurityGroupForceRefreshTask)
   }
 }
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/MigrateServerGroupStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/MigrateServerGroupStage.java
deleted file mode 100644
index 7e50630459..0000000000
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/MigrateServerGroupStage.java
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Copyright 2016 Netflix, Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License")
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup;
-
-import com.netflix.spinnaker.orca.clouddriver.tasks.MonitorKatoTask;
-import com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.MigrateForceRefreshDependenciesTask;
-import com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.MigrateServerGroupTask;
-import com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.ServerGroupCacheForceRefreshTask;
-import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder;
-import com.netflix.spinnaker.orca.pipeline.TaskNode;
-import com.netflix.spinnaker.orca.pipeline.model.Stage;
-import org.springframework.stereotype.Component;
-
-@Component
-public class MigrateServerGroupStage implements StageDefinitionBuilder {
-
-  public static final String PIPELINE_CONFIG_TYPE = "migrateServerGroup";
-
-  @Override
-  public void taskGraph(Stage stage, TaskNode.Builder builder) {
-    builder
-      .withTask("migrateServerGroup", MigrateServerGroupTask.class)
-      .withTask("monitorMigration", MonitorKatoTask.class);
-    if (!(Boolean) stage.getContext().getOrDefault("dryRun", true)) {
-      builder
-        .withTask("refreshDependencies", MigrateForceRefreshDependenciesTask.class)
-        .withTask("refreshServerGroup", ServerGroupCacheForceRefreshTask.class);
-    }
-  }
-}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/CFRollingRedBlackStrategy.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/CFRollingRedBlackStrategy.java
new file mode 100644
index 0000000000..f0099ba03f
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/CFRollingRedBlackStrategy.java
@@ -0,0 +1,290 @@
+/*
+ * Copyright 2019 Pivotal, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.strategies;
+
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.netflix.spinnaker.kork.artifacts.model.Artifact;
+import com.netflix.spinnaker.orca.clouddriver.OortService;
+import com.netflix.spinnaker.orca.clouddriver.pipeline.cluster.DisableClusterStage;
+import com.netflix.spinnaker.orca.clouddriver.pipeline.cluster.ShrinkClusterStage;
+import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.ResizeServerGroupStage;
+import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.DetermineTargetServerGroupStage;
+import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.TargetServerGroup;
+import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.TargetServerGroupResolver;
+import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.Manifest;
+import com.netflix.spinnaker.orca.front50.pipeline.PipelineStage;
+import com.netflix.spinnaker.orca.kato.pipeline.support.ResizeStrategy;
+import com.netflix.spinnaker.orca.kato.pipeline.support.ResizeStrategySupport;
+import com.netflix.spinnaker.orca.pipeline.WaitStage;
+import com.netflix.spinnaker.orca.pipeline.model.Execution;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import com.netflix.spinnaker.orca.pipeline.model.SyntheticStageOwner;
+import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver;
+import groovy.util.logging.Slf4j;
+import lombok.AllArgsConstructor;
+import lombok.Data;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.context.ApplicationContext;
+import org.springframework.context.ApplicationContextAware;
+import org.springframework.stereotype.Component;
+import org.yaml.snakeyaml.Yaml;
+import org.yaml.snakeyaml.constructor.SafeConstructor;
+import retrofit.client.Response;
+
+import javax.annotation.Nullable;
+import java.io.IOException;
+import java.util.*;
+
+import static com.netflix.spinnaker.orca.kato.pipeline.support.ResizeStrategySupport.getSource;
+
+/**
+ * CFRollingRedBlackStrategy is a rolling red/black strategy specifically made for Cloud Foundry
+ * to handle the differences in this type of rollout between an IaaS and Cloud Foundry.
+ * <p>
+ * If you run on any other platform you should probably be using "{@link RollingRedBlackStrategy}"
+ */
+@Slf4j
+@Data
+@Component
+@AllArgsConstructor
+public class CFRollingRedBlackStrategy implements Strategy, ApplicationContextAware {
+  private static final Logger log = LoggerFactory.getLogger(CFRollingRedBlackStrategy.class);
+  public final String name = "cfrollingredblack";
+
+  private ApplicationContext applicationContext;
+  private ArtifactResolver artifactResolver;
+  private Optional<PipelineStage> pipelineStage;
+  private ResizeStrategySupport resizeStrategySupport;
+  private TargetServerGroupResolver targetServerGroupResolver;
+  private ObjectMapper objectMapper;
+  private OortService oort;
+  private static final ThreadLocal<Yaml> yamlParser = ThreadLocal.withInitial(() -> new Yaml(new SafeConstructor()));
+
+  @Override
+  public List composeFlow(Stage stage) {
+    if (!pipelineStage.isPresent()) {
+      throw new IllegalStateException("Rolling red/black cannot be run without front50 enabled. Please set 'front50.enabled: true' in your orca config.");
+    }
+
+    List stages = new ArrayList<>();
+    RollingRedBlackStageData stageData = stage.mapTo(RollingRedBlackStageData.class);
+    AbstractDeployStrategyStage.CleanupConfig cleanupConfig = AbstractDeployStrategyStage.CleanupConfig.fromStage(stage);
+
+    Map baseContext = new HashMap<>();
+    baseContext.put(cleanupConfig.getLocation().singularType(), cleanupConfig.getLocation().getValue());
+    baseContext.put("cluster", cleanupConfig.getCluster());
+    baseContext.put("moniker", cleanupConfig.getMoniker());
+    baseContext.put("credentials", cleanupConfig.getAccount());
+    baseContext.put("cloudProvider", cleanupConfig.getCloudProvider());
+
+    if (stage.getContext().get("targetSize") != null) {
+      stage.getContext().put("targetSize", 0);
+    }
+
+    if (stage.getContext().get("useSourceCapacity") != null) {
+      stage.getContext().put("useSourceCapacity", false);
+    }
+
+    ResizeStrategy.Capacity savedCapacity = new ResizeStrategy.Capacity();
+    Map manifest = (Map) stage.getContext().get("manifest");
+    if (manifest.get("direct") == null) {
+      Artifact artifact = objectMapper.convertValue(manifest.get("artifact"), Artifact.class);
+      String artifactId = manifest.get("artifactId") != null ? manifest.get("artifactId").toString() : null;
+      Artifact boundArtifact = artifactResolver.getBoundArtifactForStage(stage, artifactId, artifact);
+
+      if (boundArtifact == null) {
+        throw new IllegalArgumentException("Unable to bind the manifest artifact");
+      }
+
+      Response manifestText = oort.fetchArtifact(boundArtifact);
+      try {
+        Object manifestYml = yamlParser.get().load(manifestText.getBody().in());
+        Map<String, List<Map<Object, Object>>> applicationManifests = objectMapper
+          .convertValue(manifestYml, new TypeReference<Map<String, List<Map<Object, Object>>>>() {});
+        List<Map<Object, Object>> applications = applicationManifests.get("applications");
+        Map applicationConfiguration = applications.get(0);
+        manifest.put("direct", applicationConfiguration);
+        manifest.remove("artifact");
+        manifest.remove("artifactId");
+      } catch (IOException e) {
+        log.warn("Failure fetching/parsing manifests from {}", boundArtifact, e);
+        throw new IllegalStateException(e);
+      }
+    }
+    Manifest.Direct directManifestAttributes = objectMapper.convertValue(manifest.get("direct"), Manifest.Direct.class);
+
+    if (!stage.getContext().containsKey("savedCapacity")) {
+      int instances = directManifestAttributes.getInstances();
+      savedCapacity.setMin(instances);
+      savedCapacity.setMax(instances);
+      savedCapacity.setDesired(instances);
+      stage.getContext().put("savedCapacity", savedCapacity);
+      stage.getContext().put("sourceServerGroupCapacitySnapshot", savedCapacity);
+    } else {
+      Map<String, Integer> savedCapacityMap = (Map<String, Integer>) stage.getContext().get("savedCapacity");
+      savedCapacity.setMin(savedCapacityMap.get("min"));
+      savedCapacity.setMax(savedCapacityMap.get("max"));
+      savedCapacity.setDesired(savedCapacityMap.get("desired"));
+    }
+
+    // FIXME: this clobbers the input capacity value (if any). Should find a better way to request a new asg of size 0
+    ResizeStrategy.Capacity zeroCapacity = new ResizeStrategy.Capacity();
+    zeroCapacity.setMin(0);
+    zeroCapacity.setMax(0);
+    zeroCapacity.setDesired(0);
+    stage.getContext().put("capacity", zeroCapacity);
+
+    // Start off with deploying one instance of the new version
+    ((Map) manifest.get("direct")).put("instances", 1);
+
+    Execution execution = stage.getExecution();
+    String executionId = execution.getId();
+    List<Integer> targetPercentages = stageData.getTargetPercentages();
+    if (targetPercentages.isEmpty() || targetPercentages.get(targetPercentages.size() - 1) != 100) {
+      targetPercentages.add(100);
+    }
+
+    Map findContext = new HashMap<>(baseContext);
+    findContext.put("target", TargetServerGroup.Params.Target.current_asg_dynamic);
+    findContext.put("targetLocation", cleanupConfig.getLocation());
+
+    Stage dtsgStage = new Stage(execution, DetermineTargetServerGroupStage.PIPELINE_CONFIG_TYPE, "Determine Deployed Server Group", findContext);
+    dtsgStage.setParentStageId(stage.getId());
+    dtsgStage.setSyntheticStageOwner(SyntheticStageOwner.STAGE_AFTER);
+    stages.add(dtsgStage);
+
+    ResizeStrategy.Source source;
+    try {
+      source = getSource(targetServerGroupResolver, stageData, baseContext);
+    } catch (Exception e) {
+      source = null;
+    }
+
+    if (source == null) {
+      log.warn("no source server group -- will perform RRB to exact fallback capacity {} with no disableCluster or scaleDownCluster stages", savedCapacity);
+    }
+
+    ResizeStrategy.Capacity sourceCapacity = source == null ?
+      savedCapacity :
+      resizeStrategySupport.getCapacity(source.getCredentials(), source.getServerGroupName(), source.getCloudProvider(), source.getLocation());
+
+    for (Integer percentage : targetPercentages) {
+      Map scaleUpContext = getScalingContext(stage, cleanupConfig, baseContext, savedCapacity, percentage, null);
+
+      log.info("Adding `Grow target to {}% of desired size` stage with context {} [executionId={}]", percentage, scaleUpContext, executionId);
+
+      Stage resizeStage = new Stage(execution, ResizeServerGroupStage.TYPE, "Grow target to " + percentage + "% of desired size", scaleUpContext);
+      resizeStage.setParentStageId(stage.getId());
+      resizeStage.setSyntheticStageOwner(SyntheticStageOwner.STAGE_AFTER);
+      stages.add(resizeStage);
+
+      // only generate the "disable p% of traffic" stages if we have something to disable
+      if (source != null) {
+        stages.addAll(getBeforeCleanupStages(stage, stageData));
+        Map scaleDownContext =
+          getScalingContext(stage, cleanupConfig, baseContext, sourceCapacity, 100 - percentage, source.getServerGroupName());
+        scaleDownContext.put("scaleStoppedServerGroup", true);
+
+        log.info("Adding `Shrink source to {}% of initial size` stage with context {} [executionId={}]", 100 - percentage, scaleDownContext, executionId);
+
+        Stage scaleDownStage = new Stage(execution, ResizeServerGroupStage.TYPE, "Shrink source to " + (100 - percentage) + "% of initial size", scaleDownContext);
+        scaleDownStage.setParentStageId(stage.getId());
+        scaleDownStage.setSyntheticStageOwner(SyntheticStageOwner.STAGE_AFTER);
+        stages.add(scaleDownStage);
+      }
+    }
+
+    if (source != null) {
+      // shrink cluster to size
+      Map shrinkClusterContext = new HashMap<>(baseContext);
+      shrinkClusterContext.put("allowDeleteActive", false);
+      shrinkClusterContext.put("shrinkToSize", stage.getContext().get("maxRemainingAsgs"));
+      shrinkClusterContext.put("retainLargerOverNewer", false);
+      Stage shrinkClusterStage = new Stage(execution, ShrinkClusterStage.STAGE_TYPE,
"shrinkCluster", shrinkClusterContext); + shrinkClusterStage.setParentStageId(stage.getId()); + shrinkClusterStage.setSyntheticStageOwner(SyntheticStageOwner.STAGE_AFTER); + stages.add(shrinkClusterStage); + + // disable old + log.info("Adding `Disable cluster` stage with context {} [executionId={}]", baseContext, executionId); + Map disableContext = new HashMap<>(baseContext); + Stage disableStage = new Stage(execution, DisableClusterStage.STAGE_TYPE, "Disable cluster", disableContext); + disableStage.setParentStageId(stage.getId()); + disableStage.setSyntheticStageOwner(SyntheticStageOwner.STAGE_AFTER); + stages.add(disableStage); + + // scale old back to original + Map scaleSourceContext = getScalingContext(stage, cleanupConfig, baseContext, sourceCapacity, 100, source.getServerGroupName()); + scaleSourceContext.put("scaleStoppedServerGroup", true); + log.info("Adding `Grow source to 100% of original size` stage with context {} [executionId={}]", scaleSourceContext, executionId); + Stage scaleSourceStage = new Stage(execution, ResizeServerGroupStage.TYPE, "Reset source to original size", scaleSourceContext); + scaleSourceStage.setParentStageId(stage.getId()); + scaleSourceStage.setSyntheticStageOwner(SyntheticStageOwner.STAGE_AFTER); + stages.add(scaleSourceStage); + + if (stageData.getDelayBeforeScaleDown() > 0L) { + Map waitContext = Collections.singletonMap("waitTime", stageData.getDelayBeforeScaleDown()); + Stage delayStage = new Stage(execution, WaitStage.STAGE_TYPE, "Wait Before Scale Down", waitContext); + delayStage.setParentStageId(stage.getId()); + delayStage.setSyntheticStageOwner(SyntheticStageOwner.STAGE_AFTER); + stages.add(delayStage); + } + } + + return stages; + } + + private List getBeforeCleanupStages(Stage parentStage, + RollingRedBlackStageData stageData) { + List stages = new ArrayList<>(); + + if (stageData.getDelayBeforeCleanup() != 0) { + Map waitContext = Collections.singletonMap("waitTime", stageData.getDelayBeforeCleanup()); + Stage 
stage = new Stage(parentStage.getExecution(), WaitStage.STAGE_TYPE, "wait", waitContext); + stage.setParentStageId(parentStage.getId()); + stage.setSyntheticStageOwner(SyntheticStageOwner.STAGE_AFTER); + stages.add(stage); + } + + return stages; + } + + private Map getScalingContext(Stage stage, + AbstractDeployStrategyStage.CleanupConfig cleanupConfig, + Map baseContext, + ResizeStrategy.Capacity savedCapacity, + Integer percentage, + @Nullable String serverGroupName) { + Map scaleContext = new HashMap<>(baseContext); + if (serverGroupName != null) { + scaleContext.put("serverGroupName", serverGroupName); + } else { + scaleContext.put("target", TargetServerGroup.Params.Target.current_asg_dynamic); + } + scaleContext.put("targetLocation", cleanupConfig.getLocation()); + scaleContext.put("scalePct", percentage); + scaleContext.put("pinCapacity", percentage < 100); // if p < 100, capacity should be pinned (min == max == desired) + scaleContext.put("unpinMinimumCapacity", percentage == 100); // if p == 100, min capacity should be restored to the original unpinned value from source + scaleContext.put("useNameAsLabel", true); // hint to deck that it should _not_ override the name + scaleContext.put("targetHealthyDeployPercentage", stage.getContext().get("targetHealthyDeployPercentage")); + scaleContext.put("action", ResizeStrategy.ResizeAction.scale_exact); + scaleContext.put("capacity", savedCapacity); // we always scale to what was part of the manifest configuration + + return scaleContext; + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/DeployStagePreProcessor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/DeployStagePreProcessor.java index f20ca347d1..88844bd0dd 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/DeployStagePreProcessor.java +++ 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/DeployStagePreProcessor.java @@ -55,8 +55,8 @@ class StepDefinition { } class StageDefinition { - String name; - StageDefinitionBuilder stageDefinitionBuilder; - Map context; + public String name; + public StageDefinitionBuilder stageDefinitionBuilder; + public Map context; } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/RedBlackStrategy.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/RedBlackStrategy.groovy index c3f70e3474..a1adaa7e44 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/RedBlackStrategy.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/RedBlackStrategy.groovy @@ -71,7 +71,7 @@ class RedBlackStrategy implements Strategy, ApplicationContextAware { if (stageData?.maxRemainingAsgs && (stageData?.maxRemainingAsgs > 0)) { Map shrinkContext = baseContext + [ shrinkToSize : stageData.maxRemainingAsgs, - allowDeleteActive : false, + allowDeleteActive : stageData.allowDeleteActive ?: false, retainLargerOverNewer: false ] stages << newStage( @@ -123,7 +123,7 @@ class RedBlackStrategy implements Strategy, ApplicationContextAware { } def scaleDown = baseContext + [ - allowScaleDownActive : false, + allowScaleDownActive : stageData.allowScaleDownActive ?: false, remainingFullSizeServerGroups: 1, preferLargerOverNewer : false ] diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DeployServiceStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DeployServiceStage.java index 50f47dcb97..539cc5d561 100644 --- 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DeployServiceStage.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DeployServiceStage.java @@ -1,11 +1,11 @@ /* - * Copyright 2018 Pivotal, Inc. + * Copyright 2019 Pivotal, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * - * http://www.apache.org/licenses/LICENSE-2.0 + * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -16,25 +16,35 @@ package com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker; -import com.netflix.spinnaker.orca.clouddriver.tasks.MonitorKatoTask; -import com.netflix.spinnaker.orca.clouddriver.tasks.servicebroker.DeployServiceTask; +import com.netflix.spinnaker.orca.clouddriver.utils.CloudProviderAware; import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; import com.netflix.spinnaker.orca.pipeline.TaskNode; import com.netflix.spinnaker.orca.pipeline.model.Stage; -import groovy.transform.CompileStatic; +import lombok.AllArgsConstructor; import org.springframework.stereotype.Component; import javax.annotation.Nonnull; +import java.util.ArrayList; +import java.util.List; +@AllArgsConstructor @Component -@CompileStatic -class DeployServiceStage implements StageDefinitionBuilder { +class DeployServiceStage implements StageDefinitionBuilder, CloudProviderAware { + public static final String PIPELINE_CONFIG_TYPE = "deployService"; + + List deployServiceStagePreprocessors = new ArrayList<>(); + + @Nonnull + @Override + public String getType() { + return PIPELINE_CONFIG_TYPE; + } @Override public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) { - builder - .withTask("deployService", 
DeployServiceTask.class) - .withTask("monitorDeployService", MonitorKatoTask.class); + deployServiceStagePreprocessors + .stream() + .filter(it -> it.supports(stage)) + .forEach(it -> it.addSteps(builder, stage)); } } - diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DeployServiceStagePreprocessor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DeployServiceStagePreprocessor.java new file mode 100644 index 0000000000..f6bcdd9af7 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DeployServiceStagePreprocessor.java @@ -0,0 +1,32 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker; + +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Stage; + +/** + * Supports generic modification of a Deploy Service stage. 
+ * + * Common use-cases: + * - injecting cloud-aware steps + */ +public interface DeployServiceStagePreprocessor { + boolean supports(Stage stage); + + void addSteps(TaskNode.Builder builder, Stage stage); +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DestroyServiceStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DestroyServiceStage.java index 615e655711..f17f1ade7f 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DestroyServiceStage.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DestroyServiceStage.java @@ -1,11 +1,11 @@ /* - * Copyright 2018 Pivotal, Inc. + * Copyright 2019 Pivotal, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * - * http://www.apache.org/licenses/LICENSE-2.0 + * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -16,25 +16,35 @@ package com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker; -import com.netflix.spinnaker.orca.clouddriver.tasks.MonitorKatoTask; -import com.netflix.spinnaker.orca.clouddriver.tasks.servicebroker.DestroyServiceTask; +import com.netflix.spinnaker.orca.clouddriver.utils.CloudProviderAware; import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; import com.netflix.spinnaker.orca.pipeline.TaskNode; import com.netflix.spinnaker.orca.pipeline.model.Stage; -import groovy.transform.CompileStatic; +import lombok.AllArgsConstructor; import org.springframework.stereotype.Component; import javax.annotation.Nonnull; +import java.util.ArrayList; +import java.util.List; +@AllArgsConstructor @Component 
-@CompileStatic -class DestroyServiceStage implements StageDefinitionBuilder { +class DestroyServiceStage implements StageDefinitionBuilder, CloudProviderAware { + public static final String PIPELINE_CONFIG_TYPE = "destroyService"; + + List destroyServiceStagePreprocessors = new ArrayList<>(); + + @Nonnull + @Override + public String getType() { + return PIPELINE_CONFIG_TYPE; + } @Override public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) { - builder - .withTask("destroyService", DestroyServiceTask.class) - .withTask("monitorDestroyService", MonitorKatoTask.class); + destroyServiceStagePreprocessors + .stream() + .filter(it -> it.supports(stage)) + .forEach(it -> it.addSteps(builder, stage)); } } - diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DestroyServiceStagePreprocessor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DestroyServiceStagePreprocessor.java new file mode 100644 index 0000000000..d31a6a9b06 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/DestroyServiceStagePreprocessor.java @@ -0,0 +1,32 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker;
+
+import com.netflix.spinnaker.orca.pipeline.TaskNode;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+
+/**
+ * Supports generic modification of a Destroy Service stage.
+ *
+ * Common use-cases:
+ * - injecting cloud-aware steps
+ */
+public interface DestroyServiceStagePreprocessor {
+  boolean supports(Stage stage);
+
+  void addSteps(TaskNode.Builder builder, Stage stage);
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/ShareServiceStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/ShareServiceStage.java
new file mode 100644
index 0000000000..5582e60d08
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/ShareServiceStage.java
@@ -0,0 +1,50 @@
+/*
+ * Copyright 2019 Pivotal, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker;
+
+import com.netflix.spinnaker.orca.clouddriver.utils.CloudProviderAware;
+import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder;
+import com.netflix.spinnaker.orca.pipeline.TaskNode;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import lombok.AllArgsConstructor;
+import org.springframework.stereotype.Component;
+
+import javax.annotation.Nonnull;
+import java.util.ArrayList;
+import java.util.List;
+
+@AllArgsConstructor
+@Component
+class ShareServiceStage implements StageDefinitionBuilder, CloudProviderAware {
+  public static final String PIPELINE_CONFIG_TYPE = "shareService";
+
+  List<ShareServiceStagePreprocessor> shareServiceStagePreprocessors = new ArrayList<>();
+
+  @Nonnull
+  @Override
+  public String getType() {
+    return PIPELINE_CONFIG_TYPE;
+  }
+
+  @Override
+  public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) {
+    shareServiceStagePreprocessors
+      .stream()
+      .filter(it -> it.supports(stage))
+      .forEach(it -> it.addSteps(builder, stage));
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/ShareServiceStagePreprocessor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/ShareServiceStagePreprocessor.java
new file mode 100644
index 0000000000..dea10c30ba
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/ShareServiceStagePreprocessor.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright 2019 Pivotal, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker;
+
+import com.netflix.spinnaker.orca.pipeline.TaskNode;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+
+/**
+ * Supports generic modification of a Share Service stage.
+ *
+ * Common use-cases:
+ * - injecting cloud-aware steps
+ */
+public interface ShareServiceStagePreprocessor {
+  boolean supports(Stage stage);
+
+  void addSteps(TaskNode.Builder builder, Stage stage);
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/UnshareServiceStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/UnshareServiceStage.java
new file mode 100644
index 0000000000..2ff0dfb2e3
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/UnshareServiceStage.java
@@ -0,0 +1,50 @@
+/*
+ * Copyright 2019 Pivotal, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker;
+
+import com.netflix.spinnaker.orca.clouddriver.utils.CloudProviderAware;
+import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder;
+import com.netflix.spinnaker.orca.pipeline.TaskNode;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import lombok.AllArgsConstructor;
+import org.springframework.stereotype.Component;
+
+import javax.annotation.Nonnull;
+import java.util.ArrayList;
+import java.util.List;
+
+@AllArgsConstructor
+@Component
+class UnshareServiceStage implements StageDefinitionBuilder, CloudProviderAware {
+  public static final String PIPELINE_CONFIG_TYPE = "unshareService";
+
+  List<UnshareServiceStagePreprocessor> unshareServiceStagePreprocessors = new ArrayList<>();
+
+  @Nonnull
+  @Override
+  public String getType() {
+    return PIPELINE_CONFIG_TYPE;
+  }
+
+  @Override
+  public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) {
+    unshareServiceStagePreprocessors
+      .stream()
+      .filter(it -> it.supports(stage))
+      .forEach(it -> it.addSteps(builder, stage));
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/UnshareServiceStagePreprocessor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/UnshareServiceStagePreprocessor.java
new file mode 100644
index 0000000000..867993ae28
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servicebroker/UnshareServiceStagePreprocessor.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright 2019 Pivotal, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.servicebroker;
+
+import com.netflix.spinnaker.orca.pipeline.TaskNode;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+
+/**
+ * Supports generic modification of an Unshare Service stage.
+ *
+ * Common use-cases:
+ * - injecting cloud-aware steps
+ */
+public interface UnshareServiceStagePreprocessor {
+  boolean supports(Stage stage);
+
+  void addSteps(TaskNode.Builder builder, Stage stage);
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/snapshot/DeleteSnapshotStage.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/snapshot/DeleteSnapshotStage.java
new file mode 100644
index 0000000000..ad04233832
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/snapshot/DeleteSnapshotStage.java
@@ -0,0 +1,83 @@
+/*
+ * Copyright 2019 Netflix, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.pipeline.snapshot;
+
+import java.util.Set;
+import javax.validation.constraints.NotNull;
+import com.netflix.spinnaker.orca.clouddriver.tasks.MonitorKatoTask;
+import com.netflix.spinnaker.orca.clouddriver.tasks.snapshot.DeleteSnapshotTask;
+import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder;
+import com.netflix.spinnaker.orca.pipeline.TaskNode;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import org.springframework.stereotype.Component;
+
+@Component
+public class DeleteSnapshotStage implements StageDefinitionBuilder {
+  @Override
+  public void taskGraph(@NotNull Stage stage, @NotNull TaskNode.Builder builder) {
+    builder
+      .withTask("deleteSnapshot", DeleteSnapshotTask.class)
+      .withTask("monitorDeleteSnapshot", MonitorKatoTask.class);
+  }
+
+  public static class DeleteSnapshotRequest {
+    @NotNull
+    private String credentials;
+
+    @NotNull
+    private String cloudProvider;
+
+    @NotNull
+    private String region;
+
+    @NotNull
+    private Set<String> snapshotIds;
+
+    public String getCredentials() {
+      return credentials;
+    }
+
+    public void setCredentials(String credentials) {
+      this.credentials = credentials;
+    }
+
+    public String getCloudProvider() {
+      return cloudProvider;
+    }
+
+    public void setCloudProvider(String cloudProvider) {
+      this.cloudProvider = cloudProvider;
+    }
+
+    public String getRegion() {
+      return region;
+    }
+
+    public void setRegion(String region) {
+      this.region = region;
+    }
+
+    public Set<String> getSnapshotIds() {
+      return snapshotIds;
+    }
+
+    public void setSnapshotIds(Set<String> snapshotIds) {
+      this.snapshotIds = snapshotIds;
+    }
+
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pollers/PollerSupport.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pollers/PollerSupport.java
index 96b44844ba..2339605cab 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pollers/PollerSupport.java
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pollers/PollerSupport.java
@@ -26,18 +26,18 @@
 import static java.lang.String.format;
 
-class PollerSupport {
+public class PollerSupport {
   private final ObjectMapper objectMapper;
   private final RetrySupport retrySupport;
   private final OortService oortService;
 
-  PollerSupport(ObjectMapper objectMapper, RetrySupport retrySupport, OortService oortService) {
+  public PollerSupport(ObjectMapper objectMapper, RetrySupport retrySupport, OortService oortService) {
     this.objectMapper = objectMapper;
     this.retrySupport = retrySupport;
     this.oortService = oortService;
   }
 
-  Optional<ServerGroup> fetchServerGroup(String account, String region, String name) {
+  public Optional<ServerGroup> fetchServerGroup(String account, String region, String name) {
     return retrySupport.retry(() -> {
       try {
         Response response = oortService.getServerGroup(account, region, name);
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pollers/RestorePinnedServerGroupsPoller.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pollers/RestorePinnedServerGroupsPoller.java
index 9c72d67de5..07afaa97a3 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pollers/RestorePinnedServerGroupsPoller.java
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pollers/RestorePinnedServerGroupsPoller.java
@@ -50,7 +50,7 @@
 @Slf4j
 @Component
 @ConditionalOnExpression(value = "${pollers.restorePinnedServerGroups.enabled:false}")
-class RestorePinnedServerGroupsPoller extends AbstractPollingNotificationAgent {
+public class RestorePinnedServerGroupsPoller extends AbstractPollingNotificationAgent {
   private static final Logger log = LoggerFactory.getLogger(RestorePinnedServerGroupsPoller.class);
 
   private final ObjectMapper objectMapper;
@@ -89,7 +89,7 @@ public RestorePinnedServerGroupsPoller(NotificationClusterLock notificationClust
   }
 
   @VisibleForTesting
-  RestorePinnedServerGroupsPoller(NotificationClusterLock notificationClusterLock,
+  public RestorePinnedServerGroupsPoller(NotificationClusterLock notificationClusterLock,
                                   ObjectMapper objectMapper,
                                   OortService oortService,
                                   RetrySupport retrySupport,
@@ -184,7 +184,7 @@ protected void tick() {
     }
   }
 
-  List<PinnedServerGroupTag> fetchPinnedServerGroupTags() {
+  public List<PinnedServerGroupTag> fetchPinnedServerGroupTags() {
     List<EntityTags> allEntityTags = retrySupport.retry(() -> objectMapper.convertValue(
       oortService.getEntityTags(ImmutableMap.builder()
         .put("tag:" + PINNED_CAPACITY_TAG, "*")
@@ -211,7 +211,7 @@ List fetchPinnedServerGroupTags() {
       .collect(Collectors.toList());
   }
 
-  boolean hasCompletedExecution(PinnedServerGroupTag pinnedServerGroupTag) {
+  public boolean hasCompletedExecution(PinnedServerGroupTag pinnedServerGroupTag) {
     try {
       Execution execution = executionRepository.retrieve(
         pinnedServerGroupTag.executionType, pinnedServerGroupTag.executionId
@@ -276,7 +276,7 @@ private Map buildResizeOperation(PinnedServerGroupTag pinnedServ
       .build();
   }
 
-  private static class PinnedServerGroupTag {
+  public static class PinnedServerGroupTag {
     public String id;
     public String cloudProvider;
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/service/JobService.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/service/JobService.java
index 5c2f3304fe..d1a96d4ca1 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/service/JobService.java
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/service/JobService.java
@@ -21,6 +21,7 @@
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;
 
+import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.stream.Collectors;
@@ -30,11 +31,21 @@ public class JobService {
   @Autowired
   JobConfigurationProperties jobConfigurationProperties;
 
-  List getPreconfiguredStages() {
-    if(jobConfigurationProperties.getPreconfigured()==null){
+  public List getPreconfiguredStages() {
+    if(jobConfigurationProperties.getTitus()==null && jobConfigurationProperties.getKubernetes()==null){
       return Collections.EMPTY_LIST;
     }
-    return jobConfigurationProperties.getPreconfigured().stream().filter(it -> it.enabled == true).collect(Collectors.toList());
+
+    List preconfiguredJobStageProperties = new ArrayList<>();
+    if (jobConfigurationProperties.getTitus() != null && !jobConfigurationProperties.getTitus().isEmpty()) {
+      preconfiguredJobStageProperties.addAll(jobConfigurationProperties.getTitus());
+    }
+
+    if (jobConfigurationProperties.getKubernetes() != null && !jobConfigurationProperties.getKubernetes().isEmpty()) {
+      preconfiguredJobStageProperties.addAll(jobConfigurationProperties.getKubernetes());
+    }
+
+    return preconfiguredJobStageProperties;
   }
 }
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/DetermineHealthProvidersTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/DetermineHealthProvidersTask.java
index 501c2070db..6feeeec408 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/DetermineHealthProvidersTask.java
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/DetermineHealthProvidersTask.java
@@ -83,18 +83,18 @@ public TaskResult execute(Stage stage) {
         results.put("interestingHealthProviderNames", interestingHealthProviderNames);
       }
 
-      return new TaskResult(ExecutionStatus.SUCCEEDED, results);
+      return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(results).build();
     }
 
     if (stage.getContext().containsKey("interestingHealthProviderNames")) {
       // should not override any stage-specified health providers
-      return new TaskResult(ExecutionStatus.SUCCEEDED);
+      return TaskResult.SUCCEEDED;
    }
 
     String platformSpecificHealthProviderName = healthProviderNamesByPlatform.get(getCloudProvider(stage));
     if (platformSpecificHealthProviderName == null) {
       log.warn("Unable to determine platform health provider for unknown cloud provider '{}'", getCloudProvider(stage));
-      return new TaskResult(ExecutionStatus.SUCCEEDED);
+      return TaskResult.SUCCEEDED;
     }
 
     try {
@@ -112,7 +112,7 @@ public TaskResult execute(Stage stage) {
 
       if (front50Service == null) {
         log.warn("Unable to determine health providers for an application without front50 enabled.");
-        return new TaskResult(ExecutionStatus.SUCCEEDED);
+        return TaskResult.SUCCEEDED;
       }
 
       Application application = front50Service.get(applicationName);
@@ -120,15 +120,15 @@ public TaskResult execute(Stage stage) {
       if (application.platformHealthOnly == Boolean.TRUE && application.platformHealthOnlyShowOverride != Boolean.TRUE) {
         // if `platformHealthOnlyShowOverride` is true, the expectation is that `interestingHealthProviderNames` will
         // be included in the request if it's desired ... and that it should NOT be automatically added.
-        return new TaskResult(ExecutionStatus.SUCCEEDED, Collections.singletonMap(
+        return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(Collections.singletonMap(
           "interestingHealthProviderNames", Collections.singletonList(platformSpecificHealthProviderName)
-        ));
+        )).build();
       }
     } catch (Exception e) {
       log.error("Unable to determine platform health provider (executionId: {}, stageId: {})", stage.getExecution().getId(), stage.getId(), e);
     }
 
-    return new TaskResult(ExecutionStatus.SUCCEEDED);
+    return TaskResult.SUCCEEDED;
   }
 
   @Override
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MigrateTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MigrateTask.java
deleted file mode 100644
index b8214a02a8..0000000000
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MigrateTask.java
+++ /dev/null
@@ -1,63 +0,0 @@
-/*
- * Copyright 2016 Netflix, Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License")
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package com.netflix.spinnaker.orca.clouddriver.tasks;
-
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import com.fasterxml.jackson.databind.ObjectMapper;
-import com.netflix.spinnaker.orca.ExecutionStatus;
-import com.netflix.spinnaker.orca.TaskResult;
-import com.netflix.spinnaker.orca.clouddriver.KatoService;
-import com.netflix.spinnaker.orca.clouddriver.model.TaskId;
-import com.netflix.spinnaker.orca.pipeline.model.Stage;
-import org.springframework.beans.factory.annotation.Autowired;
-
-public abstract class MigrateTask extends AbstractCloudProviderAwareTask {
-
-  public abstract String getCloudOperationType();
-
-  @Autowired
-  KatoService kato;
-
-  @Autowired
-  ObjectMapper mapper;
-
-  @Override
-  public TaskResult execute(Stage stage) {
-    String cloudProvider = getCloudProvider(stage);
-
-    Map operation = new HashMap<>();
-    operation.put(getCloudOperationType(), new HashMap<>(stage.getContext()));
-
-    List> operations = new ArrayList<>();
-    operations.add(operation);
-
-    TaskId taskId = kato.requestOperations(cloudProvider, operations)
-      .toBlocking()
-      .first();
-
-    Map outputs = new HashMap<>();
-    Map target = (Map) stage.getContext().get("target");
-    outputs.put("notification.type", getCloudOperationType().toLowerCase());
-    outputs.put("kato.last.task.id", taskId);
-    outputs.put("account.name", target.get("credentials"));
-    outputs.put("region", target.get("region"));
-    return new TaskResult(ExecutionStatus.SUCCEEDED, outputs);
-  }
-}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MonitorKatoTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MonitorKatoTask.groovy
index 611b496d23..2738208d57 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MonitorKatoTask.groovy
+++ 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MonitorKatoTask.groovy @@ -70,7 +70,7 @@ class MonitorKatoTask implements RetryableTask { TaskResult execute(Stage stage) { TaskId taskId = stage.context."kato.last.task.id" as TaskId if (!taskId) { - return new TaskResult(ExecutionStatus.SUCCEEDED) + return TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } Task katoTask @@ -101,7 +101,7 @@ class MonitorKatoTask implements RetryableTask { registry.counter("monitorKatoTask.taskNotFound.retry").increment() ctx['kato.task.notFoundRetryCount'] = ((stage.context."kato.task.notFoundRetryCount" as Integer) ?: 0) + 1 - return new TaskResult(ExecutionStatus.RUNNING, ctx) + return TaskResult.builder(ExecutionStatus.RUNNING).context(ctx).build() } else { throw re } @@ -152,7 +152,7 @@ class MonitorKatoTask implements RetryableTask { } - new TaskResult(status, outputs) + TaskResult.builder(status).context(outputs).build() } private static ExecutionStatus katoStatusToTaskStatus(Task katoTask, boolean katoResultExpected) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/CleanupArtifactsTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/CleanupArtifactsTask.java index 9c569286d0..e7f14c3c20 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/CleanupArtifactsTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/CleanupArtifactsTask.java @@ -67,6 +67,6 @@ public TaskResult execute(@Nonnull Stage stage) { .put("deploy.account.name", credentials) .build(); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/FindArtifactFromExecutionTask.java 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/FindArtifactFromExecutionTask.java
index fc2abdc7ff..ea1590bb22 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/FindArtifactFromExecutionTask.java
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/FindArtifactFromExecutionTask.java
@@ -62,24 +62,29 @@ public TaskResult execute(@Nonnull Stage stage) {
     if (match == null) {
       outputs.put("exception", "No artifact matching " + expectedArtifact + " found among " + priorArtifacts);
-      return new TaskResult(ExecutionStatus.TERMINAL, new HashMap<>(), outputs);
+      return TaskResult.builder(ExecutionStatus.TERMINAL).context(new HashMap<>()).outputs(outputs).build();
     }
 
     outputs.put("resolvedExpectedArtifacts", Collections.singletonList(expectedArtifact));
     outputs.put("artifacts", Collections.singletonList(match));
 
-    return new TaskResult(ExecutionStatus.SUCCEEDED, outputs, outputs);
+    return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).outputs(outputs).build();
   }
 
   @Data
   private static class ExecutionOptions {
+    // Accept either 'succeeded' or 'successful' in the stage config. The front-end sets 'successful', but due to a bug
+    // this class was only looking for 'succeeded'. Fix this by accepting 'successful' but to avoid breaking anyone who
+    // discovered this bug and manually edited their stage to set 'succeeded', continue to accept 'succeeded'.
     boolean succeeded;
+    boolean successful;
+
     boolean terminal;
     boolean running;
 
     ExecutionCriteria toCriteria() {
       List statuses = new ArrayList<>();
-      if (succeeded) {
+      if (succeeded || successful) {
         statuses.add("SUCCEEDED");
       }
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/FindArtifactsFromResourceTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/FindArtifactsFromResourceTask.java
index b5b04ce20c..83d8858fa5 100644
--- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/FindArtifactsFromResourceTask.java
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/artifacts/FindArtifactsFromResourceTask.java
@@ -62,7 +62,7 @@ public TaskResult execute(@Nonnull Stage stage) {
         + stageData.location);
     }
 
-    return new TaskResult(ExecutionStatus.SUCCEEDED, outputs, outputs);
+    return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).outputs(outputs).build();
   }
 
   public static class StageData {
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cache/ClouddriverClearAltTablespaceTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cache/ClouddriverClearAltTablespaceTask.java
new file mode 100644
index 0000000000..a2c3c3b788
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cache/ClouddriverClearAltTablespaceTask.java
@@ -0,0 +1,66 @@
+package com.netflix.spinnaker.orca.clouddriver.tasks.cache;
+
+import com.netflix.spinnaker.orca.ExecutionStatus;
+import com.netflix.spinnaker.orca.Task;
+import com.netflix.spinnaker.orca.TaskResult;
+import com.netflix.spinnaker.orca.clouddriver.CloudDriverCacheService;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import org.jetbrains.annotations.NotNull;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import 
org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.stereotype.Component;
+import retrofit.RetrofitError;
+import retrofit.mime.TypedByteArray;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static java.util.Collections.emptyList;
+
+@Component
+public class ClouddriverClearAltTablespaceTask implements Task {
+  private final CloudDriverCacheService river;
+
+  private final Logger log = LoggerFactory.getLogger(getClass());
+
+  @Autowired
+  public ClouddriverClearAltTablespaceTask(CloudDriverCacheService river) {
+    this.river = river;
+  }
+
+  @NotNull
+  @Override
+  public TaskResult execute(@NotNull Stage stage) {
+    String namespace = ((String) stage.getContext().get("namespace"));
+    if (namespace == null) {
+      throw new IllegalArgumentException("Missing namespace");
+    }
+
+    try {
+      Map result = river.clearNamespace(namespace);
+      log.info(
+        "Cleared clouddriver namespace {}, tables truncated: {}",
+        namespace,
+        result.getOrDefault("tables", emptyList())
+      );
+
+      return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(result).build();
+    } catch (RetrofitError e) {
+      Map output = new HashMap<>();
+      List errors = new ArrayList<>();
+
+      if (e.getResponse() != null && e.getResponse().getBody() != null) {
+        String error = new String(((TypedByteArray) e.getResponse().getBody()).getBytes());
+        log.error("Failed clearing clouddriver table namespace: {}", error, e);
+        errors.add(error);
+      } else {
+        errors.add(e.getMessage());
+      }
+      output.put("errors", errors);
+      return TaskResult.builder(ExecutionStatus.TERMINAL).context(output).build();
+    }
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/AbstractClusterWideClouddriverTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/AbstractClusterWideClouddriverTask.groovy
index c4df74eead..111791f524 100644
--- 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/AbstractClusterWideClouddriverTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/AbstractClusterWideClouddriverTask.groovy @@ -135,12 +135,12 @@ abstract class AbstractClusterWideClouddriverTask extends AbstractCloudProviderA } def taskId = katoService.requestOperations(clusterSelection.cloudProvider, katoOps).toBlocking().first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : getNotificationType(), "deploy.account.name" : clusterSelection.credentials, "kato.last.task.id" : taskId, "deploy.server.groups": locationGroups - ]) + ]).build() } private static boolean shouldSkipTrafficGuardCheck(List> katoOps) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/AbstractWaitForClusterWideClouddriverTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/AbstractWaitForClusterWideClouddriverTask.groovy index 30b3819e28..87861a5507 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/AbstractWaitForClusterWideClouddriverTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/AbstractWaitForClusterWideClouddriverTask.groovy @@ -26,12 +26,14 @@ import com.netflix.spinnaker.orca.clouddriver.utils.OortHelper import com.netflix.spinnaker.orca.pipeline.model.Stage import groovy.transform.Canonical import groovy.transform.ToString -import groovy.util.logging.Slf4j +import org.slf4j.Logger +import org.slf4j.LoggerFactory import org.springframework.beans.factory.annotation.Autowired import org.springframework.beans.factory.annotation.Value -@Slf4j abstract class AbstractWaitForClusterWideClouddriverTask extends AbstractCloudProviderAwareTask implements OverridableTimeoutRetryableTask 
{ + private Logger log = LoggerFactory.getLogger(getClass()) + @Override public long getBackoffPeriod() { 10000 } @@ -65,13 +67,11 @@ abstract class AbstractWaitForClusterWideClouddriverTask extends AbstractCloudPr // Possible issue here for GCE if multiple server groups are named the same in // different zones but with the same region. However, this is not allowable by // Spinnaker constraints, so we're accepting the risk. - log.info "currentServerGroup.region: $it.region, currentServerGroup.name: $it.name" - log.info " deployServerGroup.region: $it.region, deployServerGroup.name: $it.name" def isMatch = it.region == deployServerGroup.region && it.name == deployServerGroup.name - log.info "is match? $isMatch" isMatch }) - log.info "Server groups matching $deployServerGroup : $matchingServerGroups" + + log.info("Server groups matching $deployServerGroup : $matchingServerGroups") isServerGroupOperationInProgress(stage, interestingHealthProviderNames, matchingServerGroups) } @@ -84,6 +84,11 @@ abstract class AbstractWaitForClusterWideClouddriverTask extends AbstractCloudPr static class DeployServerGroup { String region String name + + @Override + String toString() { + return "${region}->${name}" + } } static class RemainingDeployServerGroups { @@ -113,8 +118,7 @@ abstract class AbstractWaitForClusterWideClouddriverTask extends AbstractCloudPr } def serverGroups = cluster.get().serverGroups.collect { new TargetServerGroup(it) } - log.info "Pipeline ${stage.execution?.id} found server groups ${serverGroups.collect { it.region + "->" + it.name }}" - log.info "Pipeline ${stage.execution?.id} is looking for ${remainingDeployServerGroups.collect { it.region + "->" + it.name }}" + log.info "Pipeline ${stage.execution?.id} looking for server groups: $remainingDeployServerGroups found: $serverGroups" if (!serverGroups) { return emptyClusterResult(stage, clusterSelection, cluster.get()) @@ -124,8 +128,8 @@ abstract class AbstractWaitForClusterWideClouddriverTask extends 
AbstractCloudPr List stillRemaining = remainingDeployServerGroups.findAll(this.&isServerGroupOperationInProgress.curry(stage, serverGroups, healthProviderTypesToCheck)) if (stillRemaining) { - log.info "Pipeline ${stage.execution?.id} still has ${stillRemaining.collect { it.region + "->" + it.name }}" - return new TaskResult(ExecutionStatus.RUNNING, [remainingDeployServerGroups: stillRemaining]) + log.info "Pipeline ${stage.execution?.id} still has $stillRemaining" + return TaskResult.builder(ExecutionStatus.RUNNING).context([remainingDeployServerGroups: stillRemaining]).build() } log.info "Pipeline ${stage.execution?.id} no server groups remain" diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/ClusterSizePreconditionTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/ClusterSizePreconditionTask.groovy index 829322468a..66d4bd1f5e 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/ClusterSizePreconditionTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/ClusterSizePreconditionTask.groovy @@ -54,6 +54,7 @@ class ClusterSizePreconditionTask extends AbstractCloudProviderAwareTask impleme int expected = 1 String credentials Set regions + Boolean onlyEnabledServerGroups = false public String getApplication() { moniker?.app ?: Names.parseName(cluster)?.app @@ -117,15 +118,20 @@ class ClusterSizePreconditionTask extends AbstractCloudProviderAwareTask impleme def failures = [] for (String region : config.regions) { def serverGroups = serverGroupsByRegion[region] ?: [] + + if (config.onlyEnabledServerGroups) { + serverGroups = serverGroups.findAll { it.disabled == false } + } + int actual = serverGroups.size() boolean acceptable = config.getOp().evaluate(actual, config.expected) if (!acceptable) { - failures << "$region - expected $config.expected server groups but found $actual 
: ${serverGroups*.name}" + failures << "expected $config.comparison $config.expected ${config.onlyEnabledServerGroups ? 'enabled ' : ''}server groups in $region but found $actual: ${serverGroups*.name}" } } if (failures) { - throw new IllegalStateException("Precondition failed: ${failures.join(',')}") + throw new IllegalStateException("Precondition check failed: ${failures.join(', ')}") } return TaskResult.SUCCEEDED diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/DetermineRollbackCandidatesTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/DetermineRollbackCandidatesTask.java index 2c5add1e58..f21e508b49 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/DetermineRollbackCandidatesTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/DetermineRollbackCandidatesTask.java @@ -127,7 +127,7 @@ public TaskResult execute(@Nonnull Stage stage) { stageData.serverGroup, e ); - return new TaskResult(ExecutionStatus.RUNNING); + return TaskResult.RUNNING; } } @@ -147,7 +147,7 @@ public TaskResult execute(@Nonnull Stage stage) { stageData.cloudProvider, e ); - return new TaskResult(ExecutionStatus.RUNNING); + return TaskResult.RUNNING; } List serverGroups = objectMapper.convertValue( @@ -228,14 +228,10 @@ public TaskResult execute(@Nonnull Stage stage) { } } - return new TaskResult( - ExecutionStatus.SUCCEEDED, - Collections.singletonMap("imagesToRestore", imagesToRestore), - ImmutableMap.builder() + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(Collections.singletonMap("imagesToRestore", imagesToRestore)).outputs(ImmutableMap.builder() .put("rollbackTypes", rollbackTypes) .put("rollbackContexts", rollbackContexts) - .build() - ); + .build()).build(); } private Map fetchCluster(String application, diff --git 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/FindImageFromClusterTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/FindImageFromClusterTask.groovy index 86ec489863..8e95697be5 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/FindImageFromClusterTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/FindImageFromClusterTask.groovy @@ -122,16 +122,18 @@ class FindImageFromClusterTask extends AbstractCloudProviderAwareTask implements Set imageNames = [] Map imageIds = [:] - - // Supplement config with regions from subsequent deploy/canary stages: - def deployRegions = regionCollector.getRegionsFromChildStages(stage) Set inferredRegions = new HashSet<>() - deployRegions.forEach { - if (!config.regions.contains(it)) { - config.regions.add(it) - inferredRegions.add(it) - log.info("Inferred and added region ($it) from deploy stage to FindImageFromClusterTask (executionId: ${stage.execution.id})") + if (cloudProvider == 'aws') { + // Supplement config with regions from subsequent deploy/canary stages: + def deployRegions = regionCollector.getRegionsFromChildStages(stage) + + deployRegions.forEach { + if (!config.regions.contains(it)) { + config.regions.add(it) + inferredRegions.add(it) + log.info("Inferred and added region ($it) from deploy stage to FindImageFromClusterTask (executionId: ${stage.execution.id})") + } } } @@ -181,7 +183,7 @@ class FindImageFromClusterTask extends AbstractCloudProviderAwareTask implements if (!locationsWithMissingImageIds.isEmpty()) { // signifies that at least one summary was missing image details, let's retry until we see image details log.warn("One or more locations are missing image details (locations: ${locationsWithMissingImageIds*.value}, cluster: ${config.cluster}, account: ${account}, executionId: ${stage.execution.id})") - return new 
TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } if (missingLocations) { @@ -290,12 +292,12 @@ class FindImageFromClusterTask extends AbstractCloudProviderAwareTask implements return artifact }.flatten() - return new TaskResult(ExecutionStatus.SUCCEEDED, [ + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ amiDetails: deploymentDetails, artifacts: artifacts - ], [ + ]).outputs([ deploymentDetails: deploymentDetails - ]) + ]).build() } private void resolveFromBaseImageName( diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/WaitForClusterDisableTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/WaitForClusterDisableTask.groovy index 3b0ac94f9d..416024f722 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/WaitForClusterDisableTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/WaitForClusterDisableTask.groovy @@ -56,7 +56,7 @@ class WaitForClusterDisableTask extends AbstractWaitForClusterWideClouddriverTas def duration = System.currentTimeMillis() - stage.startTime if (stage.context['deploy.server.groups'] && taskResult.status == ExecutionStatus.SUCCEEDED && duration < MINIMUM_WAIT_TIME_MS) { // wait at least MINIMUM_WAIT_TIME to account for any necessary connection draining to occur if there were actually server groups - return new TaskResult(ExecutionStatus.RUNNING, taskResult.context, taskResult.outputs) + return TaskResult.builder(ExecutionStatus.RUNNING).context(taskResult.context).outputs(taskResult.outputs).build() } return taskResult diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/conditions/EvaluateConditionTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/conditions/EvaluateConditionTask.java new file mode 100644 index 
0000000000..b78be75810 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/conditions/EvaluateConditionTask.java @@ -0,0 +1,142 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.conditions; + +import com.netflix.spectator.api.Id; +import com.netflix.spectator.api.Registry; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.RetryableTask; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.Condition; +import com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.ConditionConfigurationProperties; +import com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.ConditionSupplier; +import com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.WaitForConditionStage.WaitForConditionContext; +import com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.WaitForConditionStage.WaitForConditionContext.Status; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.boot.autoconfigure.condition.ConditionalOnBean; +import org.springframework.boot.autoconfigure.condition.ConditionalOnExpression; +import org.springframework.stereotype.Component; + +import 
javax.annotation.Nonnull; +import java.time.Clock; +import java.time.Duration; +import java.time.Instant; +import java.util.*; +import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; + +@Component +@ConditionalOnBean(ConditionSupplier.class) +@ConditionalOnExpression("${tasks.evaluateCondition.enabled:false}") +public class EvaluateConditionTask implements RetryableTask { + private static final Logger log = LoggerFactory.getLogger(EvaluateConditionTask.class); + private final ConditionConfigurationProperties conditionsConfigurationProperties; + private final List suppliers; + private final Registry registry; + private final Clock clock; + private final Id pauseDeployId; + + @Autowired + public EvaluateConditionTask( + ConditionConfigurationProperties conditionsConfigurationProperties, + List suppliers, + Registry registry, + Clock clock + ) { + this.conditionsConfigurationProperties = conditionsConfigurationProperties; + this.suppliers = suppliers; + this.registry = registry; + this.clock = clock; + this.pauseDeployId = registry.createId("conditions.deploy.pause"); + } + + @Override + public long getBackoffPeriod() { + return 3000L; + } + + @Override + public long getTimeout() { + return TimeUnit.SECONDS.toMillis(conditionsConfigurationProperties.getWaitTimeoutMs()); + } + + @Nonnull + @Override + public TaskResult execute(@Nonnull Stage stage) { + final WaitForConditionContext ctx = stage.mapTo(WaitForConditionContext.class); + if (conditionsConfigurationProperties.isSkipWait()) { + log.debug("Un-pausing deployment to {} (execution: {}) based on configuration", + ctx.getCluster(), stage.getExecution()); + ctx.setStatus(Status.SKIPPED); + } + + if (ctx.getStatus() == Status.SKIPPED) { + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(Collections.singletonMap("status", Status.SKIPPED)).build(); + } + + Duration backoff = Duration.ofMillis(conditionsConfigurationProperties.getBackoffWaitMs()); + Instant startTime = 
getStartTime(stage); + Instant now = clock.instant(); + if (ctx.getStatus() != null && startTime.plus(backoff).isAfter(now)) { + recordDeployPause(ctx); + log.debug("Deployment to {} has been conditionally paused (executionId: {})", + ctx.getCluster(), stage.getExecution().getId()); + return TaskResult.builder(ExecutionStatus.RUNNING) + .context(Collections.singletonMap("status", Status.WAITING)) + .build(); + } + + try { + Set conditions = suppliers + .stream() + .flatMap(supplier -> supplier.getConditions( + ctx.getCluster(), + ctx.getRegion(), + ctx.getAccount() + ).stream()).filter(Objects::nonNull) + .collect(Collectors.toSet()); + + final Status status = conditions.isEmpty() ? Status.SKIPPED : Status.WAITING; + if (status == Status.WAITING) { + recordDeployPause(ctx); + log.debug("Deployment to {} has been conditionally paused (executionId: {}). Conditions: {}", + ctx.getCluster(), stage.getExecution().getId(), conditions); + + return TaskResult.builder(ExecutionStatus.RUNNING).context(Collections.singletonMap("status", status)).outputs(Collections.singletonMap("conditions", conditions)).build(); + } + + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(Collections.singletonMap("status", status)).build(); + } catch (Exception e) { + log.error("Error occurred while fetching for conditions to eval.", e); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(Collections.singletonMap("status", Status.ERROR)).build(); + } + } + + private void recordDeployPause(WaitForConditionContext ctx) { + registry.counter( + pauseDeployId + .withTags("cluster", ctx.getCluster(), "region", ctx.getRegion(), "account", ctx.getAccount()) + ).increment(); + } + + private Instant getStartTime(Stage stage) { + return Instant.ofEpochMilli(Optional.ofNullable(stage.getStartTime()).orElse(clock.millis())); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/BulkUpsertEntityTagsTask.java 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/BulkUpsertEntityTagsTask.java index 9df158fa98..890b84966f 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/BulkUpsertEntityTagsTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/BulkUpsertEntityTagsTask.java @@ -46,10 +46,10 @@ public TaskResult execute(Stage stage) { }}) ).toBlocking().first(); - return new TaskResult(ExecutionStatus.SUCCEEDED, new HashMap() {{ + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(new HashMap() {{ put("notification.type", "bulkupsertentitytags"); put("kato.last.task.id", taskId); - }}); + }}).build(); } @Override diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/DeleteEntityTagsTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/DeleteEntityTagsTask.java index cf7a8bce29..6a1632d6a0 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/DeleteEntityTagsTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/DeleteEntityTagsTask.java @@ -47,10 +47,10 @@ public TaskResult execute(Stage stage) { }}) ).toBlocking().first(); - return new TaskResult(ExecutionStatus.SUCCEEDED, new HashMap() {{ + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(new HashMap() {{ put("notification.type", "deleteentitytags"); put("kato.last.task.id", taskId); - }}); + }}).build(); } @Override diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/UpsertEntityTagsTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/UpsertEntityTagsTask.java index ad66b4a02b..63512da170 100644 --- 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/UpsertEntityTagsTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/entitytags/UpsertEntityTagsTask.java @@ -47,10 +47,10 @@ public TaskResult execute(Stage stage) { }}) ).toBlocking().first(); - return new TaskResult(ExecutionStatus.SUCCEEDED, new HashMap() {{ + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(new HashMap() {{ put("notification.type", "upsertentitytags"); put("kato.last.task.id", taskId); - }}); + }}).build(); } @Override diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/DeleteImageTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/DeleteImageTask.java index 1798a0aad6..4e88bddfea 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/DeleteImageTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/DeleteImageTask.java @@ -74,7 +74,7 @@ public TaskResult execute(@Nonnull Stage stage) { outputs.put("delete.region", deleteImageRequest.getRegion()); outputs.put("delete.account.name", deleteImageRequest.getCredentials()); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } private void validateInputs(DeleteImageStage.DeleteImageRequest createIssueRequest) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/FindImageFromTagsTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/FindImageFromTagsTask.java index 92ebc105f4..7e2230032c 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/FindImageFromTagsTask.java +++ 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/FindImageFromTagsTask.java @@ -69,11 +69,7 @@ public TaskResult execute(Stage stage) { stageOutputs.put("amiDetails", imageDetails); stageOutputs.put("artifacts", artifacts); - return new TaskResult( - ExecutionStatus.SUCCEEDED, - stageOutputs, - Collections.singletonMap("deploymentDetails", imageDetails) - ); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(stageOutputs).outputs(Collections.singletonMap("deploymentDetails", imageDetails)).build(); } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/ImageForceCacheRefreshTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/ImageForceCacheRefreshTask.java index c09ad97742..377d18ba68 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/ImageForceCacheRefreshTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/ImageForceCacheRefreshTask.java @@ -49,7 +49,7 @@ public TaskResult execute(Stage stage) { // ) // } - return new TaskResult(ExecutionStatus.SUCCEEDED); + return TaskResult.SUCCEEDED; } @Override diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/ImageTagger.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/ImageTagger.java index 1a1340421f..d4a3100562 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/ImageTagger.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/ImageTagger.java @@ -63,11 +63,13 @@ protected ImageTagger(OortService oortService, ObjectMapper objectMapper) { } protected Collection findImages(Collection imageNames, Collection consideredStageRefIds, Stage stage, Class matchedImageType) { + List upstreamImageIds = new ArrayList<>(); + if 
(imageNames == null || imageNames.isEmpty()) { imageNames = new HashSet<>(); // attempt to find upstream images in the event that one was not explicitly provided - Collection upstreamImageIds = upstreamImageIds(stage, consideredStageRefIds, getCloudProvider()); + upstreamImageIds.addAll(upstreamImageIds(stage, consideredStageRefIds, getCloudProvider())); if (upstreamImageIds.isEmpty()) { throw new IllegalStateException("Unable to determine source image(s)"); } @@ -93,11 +95,15 @@ protected Collection findImages(Collection imageNames, Collection image.get("imageName").equals(targetImageName)) .findFirst() - .orElseThrow(() -> new ImageNotFound(format("No image found (imageName: %s)", targetImageName), false)); + .orElseThrow(() -> + new ImageNotFound(format("No image found (imageName: %s)", targetImageName), !upstreamImageIds.isEmpty()) + ); foundImages.add(objectMapper.convertValue(matchedImage, matchedImageType)); } + foundAllImages(upstreamImageIds, foundImages); + return foundImages; } @@ -130,6 +136,22 @@ public OperationContext(List> operations, Map e } } + /** + * This method is a helper for AmazonImageTagger; Amazon images are regional with a one-to-many relationship + * between image names (treated globally) and regional image ids. Clouddriver caches Amazon images and + * namedImages as distinct and eventually consistent collections but findImages() uses the output of a lookup + * by names to determine which imageIds to tag. If upstream images were baked in n regions for "myapp-1.0.0" but + * findImage("aws", "myapp-1.0.0") returns n-1 amis as the namedImages collection for an account/region + * is mid-update, UpsertImageTags would appear successful despite only applying tags to a subset of upstream images. 
+ * + * @param upstreamImageIds list of upstream image ids + * @param foundImages collection of cloudprovider specific MatchedImage objects + * + * Throws ImageNotFound with shouldRetry=true in AmazonImageTagger.foundAllImages + */ + protected void foundAllImages(List upstreamImageIds, Collection foundImages) { + } + protected static class Image { public final String imageName; public final String account; diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/MonitorDeleteImageTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/MonitorDeleteImageTask.java index 640014fcb7..b7d987e9ab 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/MonitorDeleteImageTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/MonitorDeleteImageTask.java @@ -74,10 +74,10 @@ public TaskResult execute(@Nonnull Stage stage) { outputs.put("delete.image.ids", deleteResult); if (deleteResult.containsAll(deleteImageRequest.getImageIds())) { - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } - return new TaskResult(ExecutionStatus.RUNNING, outputs); + return TaskResult.builder(ExecutionStatus.RUNNING).context(outputs).build(); } @Override diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/UpsertImageTagsTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/UpsertImageTagsTask.java index 1c04a9f9ca..d3ab909ea4 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/UpsertImageTagsTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/UpsertImageTagsTask.java @@ -17,6 +17,7 @@ package com.netflix.spinnaker.orca.clouddriver.tasks.image; import 
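The `foundAllImages` method above is a template-method hook: the base `ImageTagger` leaves it empty, and a provider-specific subclass can override it to verify that the eventually-consistent image lookup actually returned everything the upstream bake produced. A sketch of that pattern, with hypothetical class names standing in for the real taggers:

```java
import java.util.Collection;
import java.util.List;

// Verification-hook sketch: the base class trusts the lookup result, while an
// Amazon-style subclass fails with a retryable ImageNotFound when the
// eventually-consistent named-image cache returned fewer images than the
// upstream bake produced. Names here are illustrative.
class ImageNotFound extends RuntimeException {
  final boolean shouldRetry;

  ImageNotFound(String message, boolean shouldRetry) {
    super(message);
    this.shouldRetry = shouldRetry;
  }
}

class BaseTagger {
  // default: accept whatever the lookup returned
  protected void foundAllImages(List<String> upstreamImageIds, Collection<?> foundImages) {}
}

class AmazonLikeTagger extends BaseTagger {
  @Override
  protected void foundAllImages(List<String> upstreamImageIds, Collection<?> foundImages) {
    if (!upstreamImageIds.isEmpty() && foundImages.size() < upstreamImageIds.size()) {
      // shouldRetry=true: the cache may simply be mid-update, so keep polling
      throw new ImageNotFound(
          "only matched " + foundImages.size() + " of " + upstreamImageIds.size() + " upstream images",
          true);
    }
  }
}
```

Because the exception carries `shouldRetry=true`, the calling task can map it to a RUNNING status and poll again rather than failing the stage outright.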
com.google.common.collect.ImmutableMap; +import com.netflix.spinnaker.kork.core.RetrySupport; import com.netflix.spinnaker.orca.ExecutionStatus; import com.netflix.spinnaker.orca.RetryableTask; import com.netflix.spinnaker.orca.TaskResult; @@ -29,8 +30,11 @@ import org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Value; import org.springframework.stereotype.Component; +import retrofit.RetrofitError; +import java.util.ArrayList; import java.util.List; +import java.util.Map; import java.util.concurrent.TimeUnit; @Component @@ -43,6 +47,9 @@ public class UpsertImageTagsTask extends AbstractCloudProviderAwareTask implemen @Autowired List imageTaggers; + @Autowired + RetrySupport retrySupport; + @Value("${tasks.upsertImageTagsTimeoutMillis:600000}") private Long upsertImageTagsTimeoutMillis; @@ -55,22 +62,36 @@ public TaskResult execute(Stage stage) { .findFirst() .orElseThrow(() -> new IllegalStateException("ImageTagger not found for cloudProvider " + cloudProvider)); + List> operations = new ArrayList<>(); + try { ImageTagger.OperationContext result = tagger.getOperationContext(stage); - TaskId taskId = kato.requestOperations(cloudProvider, result.operations).toBlocking().first(); + operations.addAll(result.operations); + + TaskId taskId = retrySupport.retry(() -> + kato.requestOperations(cloudProvider, result.operations).toBlocking().first(), + 10, 5, false); - return new TaskResult(ExecutionStatus.SUCCEEDED, ImmutableMap.builder() + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(ImmutableMap.builder() .put("notification.type", "upsertimagetags") .put("kato.last.task.id", taskId) .putAll(result.extraOutput) - .build() - ); + .build()).build(); } catch (ImageTagger.ImageNotFound e) { if (e.shouldRetry) { log.error(String.format("Retrying... 
(reason: %s, executionId: %s, stageId: %s)", e.getMessage(), stage.getExecution().getId(), stage.getId())); - return new TaskResult(ExecutionStatus.RUNNING); + return TaskResult.RUNNING; } + throw e; + } catch (RetrofitError e) { + log.error( + "Failed creating clouddriver upsertimagetags task, cloudprovider: {}, operations: {}", + cloudProvider, + operations.isEmpty() ? "not found" : operations, + e + ); + throw e; } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/WaitForUpsertedImageTagsTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/WaitForUpsertedImageTagsTask.java index f86d15e250..8f8bebcc3a 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/WaitForUpsertedImageTagsTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/image/WaitForUpsertedImageTagsTask.java @@ -48,9 +48,7 @@ public TaskResult execute(Stage stage) { .orElseThrow(() -> new IllegalStateException("ImageTagger not found for cloudProvider " + cloudProvider)); StageData stageData = stage.mapTo(StageData.class); - return new TaskResult( - tagger.areImagesTagged(stageData.targets, stageData.consideredStages, stage) ? ExecutionStatus.SUCCEEDED : ExecutionStatus.RUNNING - ); + return TaskResult.ofStatus(tagger.areImagesTagged(stageData.targets, stageData.consideredStages, stage) ? 
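The `retrySupport.retry(..., 10, 5, false)` call above wraps the clouddriver request in up to 10 attempts with a fixed (non-exponential) backoff between them. A minimal sketch of that retry shape, purely illustrative and not kork's actual `RetrySupport` implementation:

```java
import java.util.function.Supplier;

// Fixed-backoff retry sketch: retry(fn, 10, 5, false) means up to 10
// attempts, sleeping 5 ms between them, with no exponential growth.
final class Retry {
  static <T> T retry(Supplier<T> fn, int maxRetries, long backoffMillis, boolean exponential) {
    long wait = backoffMillis;
    for (int attempt = 1; ; attempt++) {
      try {
        return fn.get();
      } catch (RuntimeException e) {
        if (attempt >= maxRetries) {
          throw e; // attempts exhausted: surface the last failure
        }
        try {
          Thread.sleep(wait);
        } catch (InterruptedException interrupted) {
          Thread.currentThread().interrupt();
          throw e;
        }
        if (exponential) {
          wait *= 2;
        }
      }
    }
  }
}
```

Note that the patch also captures the operations list before the retried call so the `RetrofitError` handler can log what was being submitted even if every attempt fails.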
ExecutionStatus.SUCCEEDED : ExecutionStatus.RUNNING); } @Override diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractInstanceLoadBalancerRegistrationTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractInstanceLoadBalancerRegistrationTask.groovy index 7ab4f81383..f64c4addcd 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractInstanceLoadBalancerRegistrationTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractInstanceLoadBalancerRegistrationTask.groovy @@ -38,10 +38,10 @@ abstract class AbstractInstanceLoadBalancerRegistrationTask extends AbstractClou def taskId = kato.requestOperations(getCloudProvider(stage), [[(getAction()): stage.context]]) .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : getAction().toLowerCase(), "kato.last.task.id" : taskId, interestingHealthProviderNames: HealthHelper.getInterestingHealthProviderNames(stage, ["LoadBalancer", "TargetGroup"]) - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractInstancesCheckTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractInstancesCheckTask.groovy index a017cb0e34..227e88d1cb 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractInstancesCheckTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractInstancesCheckTask.groovy @@ -95,14 +95,14 @@ abstract class AbstractInstancesCheckTask extends AbstractCloudProviderAwareTask Map> serverGroupsByRegion = getServerGroups(stage) if (!serverGroupsByRegion || !serverGroupsByRegion?.values()?.flatten()) 
{ - return new TaskResult(ExecutionStatus.TERMINAL) + return TaskResult.ofStatus(ExecutionStatus.TERMINAL) } try { Moniker moniker = MonikerHelper.monikerFromStage(stage) def serverGroups = fetchServerGroups(account, getCloudProvider(stage), serverGroupsByRegion, moniker) if (!serverGroups) { - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } Map seenServerGroup = serverGroupsByRegion.values().flatten().collectEntries { [(it): false] } @@ -139,7 +139,7 @@ abstract class AbstractInstancesCheckTask extends AbstractCloudProviderAwareTask } } newContext.currentInstanceCount = serverGroup.instances?.size() ?: 0 - return new TaskResult(ExecutionStatus.RUNNING, newContext) + return TaskResult.builder(ExecutionStatus.RUNNING).context(newContext).build() } } @@ -156,24 +156,24 @@ abstract class AbstractInstancesCheckTask extends AbstractCloudProviderAwareTask throw e } log.info "Waiting for server group to show up, ignoring error: $e.message" - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } else { throw e } } if (seenServerGroup.values().contains(false)) { - new TaskResult(ExecutionStatus.RUNNING) + TaskResult.ofStatus(ExecutionStatus.RUNNING) } else { - new TaskResult(ExecutionStatus.SUCCEEDED) + TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } } catch (RetrofitError e) { def retrofitErrorResponse = new RetrofitExceptionHandler().handle(stage.name, e) if (e.response?.status == 404) { - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } else if (e.response?.status >= 500) { log.error("Unexpected retrofit error (${retrofitErrorResponse})") - return new TaskResult(ExecutionStatus.RUNNING, [lastRetrofitException: retrofitErrorResponse]) + return TaskResult.builder(ExecutionStatus.RUNNING).context([lastRetrofitException: retrofitErrorResponse]).build() } throw e diff --git 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractWaitForInstanceHealthChangeTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractWaitForInstanceHealthChangeTask.groovy index 5bea68489a..e00071db2e 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractWaitForInstanceHealthChangeTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/AbstractWaitForInstanceHealthChangeTask.groovy @@ -37,7 +37,7 @@ abstract class AbstractWaitForInstanceHealthChangeTask implements OverridableTim @Override TaskResult execute(Stage stage) { if (stage.context.interestingHealthProviderNames != null && ((List)stage.context.interestingHealthProviderNames).isEmpty()) { - return new TaskResult(ExecutionStatus.SUCCEEDED) + return TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } String region = stage.context.region as String @@ -46,7 +46,7 @@ abstract class AbstractWaitForInstanceHealthChangeTask implements OverridableTim def instanceIds = getInstanceIds(stage) if (!instanceIds) { - return new TaskResult(ExecutionStatus.TERMINAL) + return TaskResult.ofStatus(ExecutionStatus.TERMINAL) } def stillRunning = instanceIds.find { @@ -54,7 +54,7 @@ abstract class AbstractWaitForInstanceHealthChangeTask implements OverridableTim return !hasSucceeded(instance, healthProviderTypesToCheck) } - return new TaskResult(stillRunning ? ExecutionStatus.RUNNING : ExecutionStatus.SUCCEEDED) + return TaskResult.ofStatus(stillRunning ? 
ExecutionStatus.RUNNING : ExecutionStatus.SUCCEEDED) } protected List getInstanceIds(Stage stage) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/CaptureInstanceUptimeTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/CaptureInstanceUptimeTask.groovy index bc8bb08e64..5518188d90 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/CaptureInstanceUptimeTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/CaptureInstanceUptimeTask.groovy @@ -47,7 +47,7 @@ class CaptureInstanceUptimeTask extends AbstractCloudProviderAwareTask implement @Override TaskResult execute(Stage stage) { if (!instanceUptimeCommand) { - return new TaskResult(ExecutionStatus.SUCCEEDED, [instanceUptimes: [:]]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([instanceUptimes: [:]]).build() } def cloudProvider = getCloudProvider(stage) @@ -65,9 +65,9 @@ class CaptureInstanceUptimeTask extends AbstractCloudProviderAwareTask implement return accumulator } - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "instanceUptimes": instanceUptimes - ]) + ]).build() } protected Map getInstance(String account, String region, String instanceId) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/RebootInstancesTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/RebootInstancesTask.groovy index 169d2a3044..8ff2c3f37d 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/RebootInstancesTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/RebootInstancesTask.groovy @@ -42,11 +42,11 @@ class RebootInstancesTask extends AbstractCloudProviderAwareTask 
implements Task .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : "rebootinstances", "reboot.account.name" : account, "kato.last.task.id" : taskId, interestingHealthProviderNames: HealthHelper.getInterestingHealthProviderNames(stage, ["Discovery"]) - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/TerminateInstanceAndDecrementServerGroupTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/TerminateInstanceAndDecrementServerGroupTask.groovy index d7b3dd3d4c..fefbfc76ec 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/TerminateInstanceAndDecrementServerGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/TerminateInstanceAndDecrementServerGroupTask.groovy @@ -77,6 +77,6 @@ class TerminateInstanceAndDecrementServerGroupTask extends AbstractCloudProvider ctx['kato.last.task.id'] = taskId } - return new TaskResult(ExecutionStatus.SUCCEEDED, ctx) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(ctx).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/TerminateInstancesTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/TerminateInstancesTask.groovy index a46172754c..a99bcbf129 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/TerminateInstancesTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/TerminateInstancesTask.groovy @@ -76,6 +76,6 @@ class TerminateInstancesTask extends AbstractCloudProviderAwareTask implements T ctx["kato.last.task.id"] = taskId ctx["kato.task.id"] = taskId // TODO retire this. 
} - new TaskResult(ExecutionStatus.SUCCEEDED, ctx) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(ctx).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/UpdateInstancesTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/UpdateInstancesTask.groovy index 44f6c4777d..073b17d739 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/UpdateInstancesTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/UpdateInstancesTask.groovy @@ -40,12 +40,12 @@ class UpdateInstancesTask extends AbstractCloudProviderAwareTask implements Task TaskId taskId = kato.requestOperations(cloudProvider, [[updateInstances: stage.context]]) .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : "updateinstances", "update.account.name": account, "update.region" : stage.context.region, "kato.last.task.id" : taskId, "serverGroupName" : stage.context.serverGroupName, - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/VerifyInstanceUptimeTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/VerifyInstanceUptimeTask.groovy index 9c3e6a1d2a..34564ea444 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/VerifyInstanceUptimeTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/VerifyInstanceUptimeTask.groovy @@ -47,7 +47,7 @@ class VerifyInstanceUptimeTask extends AbstractCloudProviderAwareTask implements @Override TaskResult execute(Stage stage) { if (!instanceUptimeCommand || !stage.context.instanceUptimes) { - return new TaskResult(ExecutionStatus.SUCCEEDED) + return 
TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } def cloudProvider = getCloudProvider(stage) @@ -67,7 +67,7 @@ class VerifyInstanceUptimeTask extends AbstractCloudProviderAwareTask implements } } - return new TaskResult(allInstancesHaveRebooted ? ExecutionStatus.SUCCEEDED : ExecutionStatus.RUNNING) + return TaskResult.ofStatus(allInstancesHaveRebooted ? ExecutionStatus.SUCCEEDED : ExecutionStatus.RUNNING) } protected Map getInstance(String account, String region, String instanceId) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForTerminatedInstancesTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForTerminatedInstancesTask.groovy index 28d43572ee..9158b7c17a 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForTerminatedInstancesTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForTerminatedInstancesTask.groovy @@ -41,8 +41,7 @@ class WaitForTerminatedInstancesTask extends AbstractCloudProviderAwareTask impl TaskResult execute(Stage stage) { List remainingInstances = instanceSupport.remainingInstances(stage) return remainingInstances ? 
- new TaskResult(ExecutionStatus.RUNNING, - [(TerminatingInstanceSupport.TERMINATE_REMAINING_INSTANCES): remainingInstances]) : + TaskResult.builder(ExecutionStatus.RUNNING).context([(TerminatingInstanceSupport.TERMINATE_REMAINING_INSTANCES): remainingInstances]).build() : TaskResult.SUCCEEDED } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForUpInstancesTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForUpInstancesTask.groovy index 48e4bcc667..8fe3bd61cf 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForUpInstancesTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForUpInstancesTask.groovy @@ -20,8 +20,11 @@ import com.netflix.spinnaker.orca.clouddriver.utils.HealthHelper import com.netflix.spinnaker.orca.clouddriver.utils.HealthHelper.HealthCountSnapshot import com.netflix.spinnaker.orca.pipeline.model.Stage import groovy.util.logging.Slf4j +import org.slf4j.MDC import org.springframework.stereotype.Component +import java.util.concurrent.TimeUnit + @Component @Slf4j class WaitForUpInstancesTask extends AbstractWaitingForInstancesTask { @@ -55,57 +58,75 @@ class WaitForUpInstancesTask extends AbstractWaitingForInstancesTask { static boolean allInstancesMatch(Stage stage, Map serverGroup, List instances, - Collection interestingHealthProviderNames) { - if (!(serverGroup?.capacity)) { - return false - } - int targetDesiredSize = calculateTargetDesiredSize(stage, serverGroup) - - if (targetDesiredSize == 0 && stage.context.capacitySnapshot) { - // if we've seen a non-zero value before, but we are seeing a target size of zero now, assume - // it's a transient issue with edda unless we see it repeatedly - Map snapshot = stage.context.capacitySnapshot as Map - Integer snapshotDesiredCapacity = snapshot.desiredCapacity as Integer - if 
(snapshotDesiredCapacity != 0) { - Integer seenCount = stage.context.zeroDesiredCapacityCount as Integer - return seenCount >= MIN_ZERO_INSTANCE_RETRY_COUNT + Collection interestingHealthProviderNames, + Splainer parentSplainer = null) { + def splainer = parentSplainer ?: new Splainer() + .add("Instances up check for server group ${serverGroup?.name} [executionId=${stage.execution.id}, stagedId=${stage.execution.id}]") + + try { + if (!(serverGroup?.capacity)) { + splainer.add("short-circuiting out of allInstancesMatch because of empty capacity in serverGroup=${serverGroup}") + return false } - } - if (targetDesiredSize > instances.size()) { - return false - } + int targetDesiredSize = calculateTargetDesiredSize(stage, serverGroup, splainer) + if (targetDesiredSize == 0 && stage.context.capacitySnapshot) { + // if we've seen a non-zero value before, but we are seeing a target size of zero now, assume + // it's a transient issue with edda unless we see it repeatedly + Map snapshot = stage.context.capacitySnapshot as Map + Integer snapshotDesiredCapacity = snapshot.desiredCapacity as Integer + if (snapshotDesiredCapacity != 0) { + Integer seenCount = stage.context.zeroDesiredCapacityCount as Integer + boolean noLongerRetrying = seenCount >= MIN_ZERO_INSTANCE_RETRY_COUNT + splainer.add("seeing targetDesiredSize=0 but capacitySnapshot=${snapshot} has non-0 desiredCapacity after ${seenCount}/${MIN_ZERO_INSTANCE_RETRY_COUNT} retries}") + return noLongerRetrying + } + } - if (interestingHealthProviderNames != null && interestingHealthProviderNames.isEmpty()) { - log.info("${serverGroup.name}: Empty health providers supplied; considering it healthy") - return true - } + if (targetDesiredSize > instances.size()) { + splainer.add("short-circuiting out of allInstancesMatch because targetDesiredSize=${targetDesiredSize} > instances.size()=${instances.size()}") + return false + } - def healthyCount = instances.count { Map instance -> - 
HealthHelper.someAreUpAndNoneAreDown(instance, interestingHealthProviderNames) - } + if (interestingHealthProviderNames != null && interestingHealthProviderNames.isEmpty()) { + splainer.add("empty health providers supplied; considering server group healthy") + return true + } - log.info("${serverGroup.name}: Instances up check - healthy: $healthyCount, target: $targetDesiredSize") - return healthyCount >= targetDesiredSize - } + def healthyCount = instances.count { Map instance -> + HealthHelper.someAreUpAndNoneAreDown(instance, interestingHealthProviderNames) + } - static int calculateTargetDesiredSize(Stage stage, Map serverGroup) { - Map capacity = getServerGroupCapacity(stage, serverGroup) - Integer targetDesiredSize = capacity.desired as Integer + splainer.add("returning healthyCount=${healthyCount} >= targetDesiredSize=${targetDesiredSize}") + return healthyCount >= targetDesiredSize + } finally { + // if we have a parent splainer, then it's not our job to splain + if (!parentSplainer) { + splainer.splain() + } + } + } + static int calculateTargetDesiredSize(Stage stage, Map serverGroup, Splainer splainer = NOOPSPLAINER) { // Don't wait for spot instances to come up if the deployment strategy is None. 
All other deployment strategies rely on // confirming the new serverGroup is up and working correctly, so doing this is only safe with the None strategy // This should probably be moved to an AWS-specific part of the codebase if (serverGroup?.launchConfig?.spotPrice != null && stage.context.strategy == '') { + splainer.add("setting targetDesiredSize=0 because the server group has a spot price configured and the strategy is None") return 0 } + Map capacity = getServerGroupCapacity(stage, serverGroup) + Integer targetDesiredSize = capacity.desired as Integer + splainer.add("setting targetDesiredSize=${targetDesiredSize} from the desired size in capacity=${capacity}") + if (stage.context.capacitySnapshot) { Integer snapshotCapacity = ((Map) stage.context.capacitySnapshot).desiredCapacity as Integer // if the server group is being actively scaled down, this operation might never complete, // so take the min of the latest capacity from the server group and the snapshot - log.info("${serverGroup.name}: Calculating target desired size from snapshot (${snapshotCapacity}) and server group (${targetDesiredSize})") - targetDesiredSize = Math.min(targetDesiredSize, snapshotCapacity) + def newTargetDesiredSize = Math.min(targetDesiredSize, snapshotCapacity) + splainer.add("setting targetDesiredSize=${newTargetDesiredSize} as the min of desired in capacitySnapshot=${stage.context.capacitySnapshot} and the previous targetDesiredSize=${targetDesiredSize})") + targetDesiredSize = newTargetDesiredSize } if (stage.context.targetHealthyDeployPercentage != null) { @@ -113,14 +134,17 @@ class WaitForUpInstancesTask extends AbstractWaitingForInstancesTask { if (percentage < 0 || percentage > 100) { throw new NumberFormatException("targetHealthyDeployPercentage must be an integer between 0 and 100") } - targetDesiredSize = Math.ceil(percentage * targetDesiredSize / 100D) as Integer - log.info("${serverGroup.name}: Calculating target desired size based on configured percentage 
(${percentage}) as ${targetDesiredSize} instances") + + def newTargetDesiredSize = Math.ceil(percentage * targetDesiredSize / 100D) as Integer + splainer.add("setting targetDesiredSize=${newTargetDesiredSize} based on configured targetHealthyDeployPercentage=${percentage}% of previous targetDesiredSize=${targetDesiredSize}") + targetDesiredSize = newTargetDesiredSize } else if (stage.context.desiredPercentage != null) { Integer percentage = (Integer) stage.context.desiredPercentage targetDesiredSize = getDesiredInstanceCount(capacity, percentage) + splainer.add("setting targetDesiredSize=${targetDesiredSize} based on desiredPercentage=${percentage}% of capacity=${capacity}") } - log.info("${serverGroup.name}: Target desired size is ${targetDesiredSize}") - targetDesiredSize + + return targetDesiredSize } @Override @@ -180,21 +204,51 @@ class WaitForUpInstancesTask extends AbstractWaitingForInstancesTask { private static Map getServerGroupCapacity(Stage stage, Map serverGroup) { def serverGroupCapacity = serverGroup.capacity as Map + def cloudProvider = stage.context.cloudProvider + + Optional taskStartTime = Optional.ofNullable(MDC.get("taskStartTime")); + if (taskStartTime.isPresent()) { + if (System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(10) > Long.valueOf(taskStartTime.get())) { + // expectation is reconciliation has happened within 10 minutes and that the + // current server group capacity should be preferred + log.error( + "Short circuiting initial target capacity determination after 10 minutes (serverGroup: {}, executionId: {})", + "${cloudProvider}:${serverGroup.region}:${serverGroup.name}", + stage.execution.id + ) + return serverGroupCapacity + } + } + def initialTargetCapacity = getInitialTargetCapacity(stage, serverGroup) if (!initialTargetCapacity) { + log.debug( + "Unable to determine initial target capacity (serverGroup: {}, executionId: {})", + "${cloudProvider}:${serverGroup.region}:${serverGroup.name}", + stage.execution.id + ) return 
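The target-size arithmetic above is easy to get wrong, so a worked sketch may help (helper names here are illustrative). Rounding up matters: a 90% threshold on a 4-instance group still requires all 4, since ceil(3.6) = 4.

```java
// Sketch of the two adjustments in calculateTargetDesiredSize.
final class DesiredSize {
  // take the min with a capacity snapshot to guard against a server group
  // being actively scaled down while we poll
  static int minWithSnapshot(int desired, int snapshotDesired) {
    return Math.min(desired, snapshotDesired);
  }

  // scale the desired size by targetHealthyDeployPercentage, rounding up
  static int withHealthyPercentage(int desired, int percentage) {
    if (percentage < 0 || percentage > 100) {
      throw new NumberFormatException("targetHealthyDeployPercentage must be an integer between 0 and 100");
    }
    return (int) Math.ceil(percentage * desired / 100D);
  }
}
```

For example, `withHealthyPercentage(10, 90)` yields 9, while `withHealthyPercentage(4, 90)` yields 4 because of the ceiling.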
serverGroupCapacity } - if (serverGroup.capacity.max == 0 && initialTargetCapacity.max != 0) { + if ((serverGroup.capacity.max == 0 && initialTargetCapacity.max != 0) || + (serverGroup.capacity.desired == 0 && initialTargetCapacity.desired > 0)) { log.info( "Overriding server group capacity (serverGroup: {}, initialTargetCapacity: {}, executionId: {})", - "${serverGroup.region}:${serverGroup.name}", + "${cloudProvider}:${serverGroup.region}:${serverGroup.name}", initialTargetCapacity, stage.execution.id ) serverGroupCapacity = initialTargetCapacity } + log.debug( + "Determined server group capacity (serverGroup: {}, serverGroupCapacity: {}, initialTargetCapacity: {}, executionId: {}", + "${cloudProvider}:${serverGroup.region}:${serverGroup.name}", + serverGroupCapacity, + initialTargetCapacity, + stage.execution.id + ) + return serverGroupCapacity } @@ -219,4 +273,24 @@ class WaitForUpInstancesTask extends AbstractWaitingForInstancesTask { return deployment?.capacity as Map } + + public static class Splainer { + List messages = new ArrayList<>() + + def add(String message) { + messages.add(message) + return this + } + + def splain() { + log.info(messages.join("\n - ")) + } + } + + private static class NoopSplainer extends Splainer { + def add(String message) {} + def splain() {} + } + + private static NoopSplainer NOOPSPLAINER = new NoopSplainer() } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/DestroyJobForceCacheRefreshTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/DestroyJobForceCacheRefreshTask.groovy index d5f02b23ee..a4b942983d 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/DestroyJobForceCacheRefreshTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/DestroyJobForceCacheRefreshTask.groovy @@ -42,7 +42,7 @@ class DestroyJobForceCacheRefreshTask extends 
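The `Splainer` introduced above is an explanation accumulator: each decision on the instances-up path is recorded as it happens and emitted as a single multi-line log entry at the end, while the no-op subclass (the default in `calculateTargetDesiredSize`) keeps callers silent when no one asked for an explanation. A standalone sketch, with the logger stubbed out via `System.out`:

```java
import java.util.ArrayList;
import java.util.List;

// Explanation-accumulator sketch mirroring the Splainer/NoopSplainer pair in
// the patch: add() records a decision, splain() emits everything at once.
class Splainer {
  final List<String> messages = new ArrayList<>();

  Splainer add(String message) {
    messages.add(message);
    return this;
  }

  void splain() {
    // one log entry instead of N scattered ones, joined for readability
    System.out.println(String.join("\n - ", messages));
  }
}

class NoopSplainer extends Splainer {
  @Override
  Splainer add(String message) {
    return this;
  }

  @Override
  void splain() {}
}
```

The parent/child protocol in `allInstancesMatch` follows from this design: whoever created the splainer is responsible for calling `splain()`, which is why the method only logs when no `parentSplainer` was passed in.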
AbstractCloudProviderAwareTask imp def model = [jobName: name, region: region, account: account, evict: true] cacheService.forceCacheUpdate(cloudProvider, REFRESH_TYPE, model) - new TaskResult(ExecutionStatus.SUCCEEDED) + TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/DestroyJobTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/DestroyJobTask.groovy index ed10f7023c..71d4914c71 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/DestroyJobTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/DestroyJobTask.groovy @@ -45,6 +45,6 @@ class DestroyJobTask extends AbstractCloudProviderAwareTask implements Task { "delete.region" : stage.context.region, "delete.account.name": account ] - new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/RunJobTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/RunJobTask.groovy index 91bb3a7b66..84258fc163 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/RunJobTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/RunJobTask.groovy @@ -76,6 +76,6 @@ class RunJobTask extends AbstractCloudProviderAwareTask implements RetryableTask creator.getAdditionalOutputs(stage, ops) ) - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/WaitOnJobCompletion.groovy 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/WaitOnJobCompletion.groovy index ca440ec4bf..e7344777c6 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/WaitOnJobCompletion.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/job/WaitOnJobCompletion.groovy @@ -122,6 +122,6 @@ public class WaitOnJobCompletion extends AbstractCloudProviderAwareTask implemen } } - new TaskResult(status, outputs, outputs) + TaskResult.builder(status).context(outputs).outputs(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/DeleteLoadBalancerForceRefreshTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/DeleteLoadBalancerForceRefreshTask.groovy index f10cba6c0e..7e0560ca7f 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/DeleteLoadBalancerForceRefreshTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/DeleteLoadBalancerForceRefreshTask.groovy @@ -45,6 +45,6 @@ class DeleteLoadBalancerForceRefreshTask extends AbstractCloudProviderAwareTask def model = [loadBalancerName: name, region: region, account: account, vpcId: vpcId, evict: true] cacheService.forceCacheUpdate(cloudProvider, REFRESH_TYPE, model) } - new TaskResult(ExecutionStatus.SUCCEEDED) + TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/DeleteLoadBalancerTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/DeleteLoadBalancerTask.groovy index 4b64d54f3c..ad4586a313 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/DeleteLoadBalancerTask.groovy +++ 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/DeleteLoadBalancerTask.groovy @@ -48,6 +48,6 @@ class DeleteLoadBalancerTask extends AbstractCloudProviderAwareTask implements T "delete.regions" : stage.context.regions?.join(',') ?: [], "delete.account.name": account ] - new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerForceRefreshTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerForceRefreshTask.groovy index 01c3760776..5e14c3f791 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerForceRefreshTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerForceRefreshTask.groovy @@ -16,33 +16,179 @@ package com.netflix.spinnaker.orca.clouddriver.tasks.loadbalancer +import com.fasterxml.jackson.core.type.TypeReference +import com.fasterxml.jackson.databind.ObjectMapper +import com.netflix.spinnaker.kork.core.RetrySupport import com.netflix.spinnaker.orca.ExecutionStatus -import com.netflix.spinnaker.orca.Task +import com.netflix.spinnaker.orca.RetryableTask import com.netflix.spinnaker.orca.TaskResult import com.netflix.spinnaker.orca.clouddriver.CloudDriverCacheService +import com.netflix.spinnaker.orca.clouddriver.CloudDriverCacheStatusService import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask import com.netflix.spinnaker.orca.pipeline.model.Stage +import groovy.util.logging.Slf4j import org.springframework.beans.factory.annotation.Autowired import org.springframework.stereotype.Component +import retrofit.client.Response +import retrofit.mime.TypedByteArray +import java.time.Duration 
+import java.util.concurrent.TimeUnit + +@Slf4j @Component -public class UpsertLoadBalancerForceRefreshTask extends AbstractCloudProviderAwareTask implements Task { +public class UpsertLoadBalancerForceRefreshTask extends AbstractCloudProviderAwareTask implements RetryableTask { static final String REFRESH_TYPE = "LoadBalancer" + static final int MAX_CHECK_FOR_PENDING = 3 + + private final CloudDriverCacheService cacheService + private final CloudDriverCacheStatusService cacheStatusService + private final ObjectMapper mapper + private final RetrySupport retrySupport + @Autowired - CloudDriverCacheService cacheService + UpsertLoadBalancerForceRefreshTask(CloudDriverCacheService cacheService, + CloudDriverCacheStatusService cacheStatusService, + ObjectMapper mapper, + RetrySupport retrySupport) { + this.cacheService = cacheService + this.cacheStatusService = cacheStatusService + this.mapper = mapper + this.retrySupport = retrySupport + } @Override TaskResult execute(Stage stage) { + LBUpsertContext context = stage.mapTo(LBUpsertContext.class) + + if (!context.refreshState.hasRequested) { + return requestCacheUpdates(stage, context) + } + + if (!context.refreshState.seenPendingCacheUpdates && context.refreshState.attempt >= MAX_CHECK_FOR_PENDING) { + log.info("Failed to see pending cache updates in {} attempts, short circuiting", MAX_CHECK_FOR_PENDING) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(getOutput(context)).build() + } + + checkPending(stage, context) + if (context.refreshState.allAreComplete) { + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(getOutput(context)).build() + } + TaskResult.builder(ExecutionStatus.RUNNING).context(getOutput(context)).build() + } + + @Override + long getTimeout() { + return TimeUnit.MINUTES.toMillis(10) + } + + @Override + long getBackoffPeriod() { + return TimeUnit.SECONDS.toMillis(5) + } + + @Override + long getDynamicBackoffPeriod(Stage stage, Duration taskDuration) { + LBUpsertContext context = 
stage.mapTo(LBUpsertContext.class) + if (context.refreshState.seenPendingCacheUpdates) { + return getBackoffPeriod() + } else { + // Some LB types don't support onDemand updates and we'll never observe a pending update for their keys, + // this ensures quicker short circuiting in that case. + return TimeUnit.SECONDS.toMillis(1) + } + } + + private TaskResult requestCacheUpdates(Stage stage, LBUpsertContext context) { String cloudProvider = getCloudProvider(stage) + + List requestStatuses = new ArrayList<>() + stage.context.targets.each { Map target -> target.availabilityZones.keySet().each { String region -> - cacheService.forceCacheUpdate( - cloudProvider, REFRESH_TYPE, [loadBalancerName: target.name, region: region, account: target.credentials, loadBalancerType: stage.context.loadBalancerType] - ) + Response response = retrySupport.retry({ + cacheService.forceCacheUpdate( + cloudProvider, + REFRESH_TYPE, + [loadBalancerName: target.name, + region : region, + account : target.credentials, + loadBalancerType: stage.context.loadBalancerType] + ) + }, 3, 1000, false) + + if (response != null && response.status != HttpURLConnection.HTTP_OK) { + requestStatuses.add(false) + + Map responseBody = mapper.readValue( + ((TypedByteArray) response.getBody()).getBytes(), + new TypeReference<Map<String, Object>>() {} + ) + + if (responseBody?.cachedIdentifiersByType?.loadBalancers) { + context.refreshState.refreshIds.addAll( + responseBody["cachedIdentifiersByType"]["loadBalancers"] as List + ) + } + } else { + requestStatuses.add(true) + } + } + } + + context.refreshState.hasRequested = true + if (requestStatuses.every { it } || context.refreshState.refreshIds.isEmpty()) { + context.refreshState.allAreComplete = true + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(getOutput(context)).build() + } else { + return TaskResult.builder(ExecutionStatus.RUNNING).context(getOutput(context)).build() + } + } + + private void checkPending(Stage stage, LBUpsertContext context) { + String
cloudProvider = getCloudProvider(stage) + + Collection pendingCacheUpdates = retrySupport.retry({ + cacheStatusService.pendingForceCacheUpdates(cloudProvider, REFRESH_TYPE) + }, 3, 1000, false) + + if (!pendingCacheUpdates.isEmpty() && !context.refreshState.seenPendingCacheUpdates) { + if (context.refreshState.refreshIds.every { refreshId -> + pendingCacheUpdates.any { it.id as String == refreshId as String} + }) { + context.refreshState.seenPendingCacheUpdates = true + } + } + + if (context.refreshState.seenPendingCacheUpdates) { + if (pendingCacheUpdates.isEmpty()) { + context.refreshState.allAreComplete = true + } else { + if (!pendingCacheUpdates.any { + context.refreshState.refreshIds.contains(it.id as String) + }) { + context.refreshState.allAreComplete = true + } } + } else { + context.refreshState.attempt++ } + } + + private Map getOutput(LBUpsertContext context) { + return mapper.convertValue(context, new TypeReference<Map<String, Object>>() {}) + } + + private static class CacheRefreshState { + Boolean hasRequested = false + Boolean seenPendingCacheUpdates = false + Integer attempt = 0 + Boolean allAreComplete = false + List refreshIds = new ArrayList<>() + } - new TaskResult(ExecutionStatus.SUCCEEDED) + private static class LBUpsertContext { + CacheRefreshState refreshState = new CacheRefreshState() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerResultObjectExtrapolationTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerResultObjectExtrapolationTask.groovy index 440c7b4c25..7d27e7330e 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerResultObjectExtrapolationTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerResultObjectExtrapolationTask.groovy @@ -33,7 +33,7 @@ class
UpsertLoadBalancerResultObjectExtrapolationTask implements Task { def lastKatoTask = katoTasks.find { it.id.toString() == lastTaskId.id } if (!lastKatoTask) { - return new TaskResult(ExecutionStatus.TERMINAL) + return TaskResult.ofStatus(ExecutionStatus.TERMINAL) } def resultObjects = lastKatoTask.resultObjects as List @@ -47,7 +47,7 @@ class UpsertLoadBalancerResultObjectExtrapolationTask implements Task { } } - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerTask.groovy index b6f9719162..9f1e4044f2 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerTask.groovy @@ -87,6 +87,6 @@ class UpsertLoadBalancerTask extends AbstractCloudProviderAwareTask implements R } ] - new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancersTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancersTask.groovy index fb2968e449..d1f1d292cb 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancersTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancersTask.groovy @@ -82,6 +82,6 @@ class UpsertLoadBalancersTask extends AbstractCloudProviderAwareTask implements ] } ] - new 
TaskResult(ExecutionStatus.SUCCEEDED, outputs) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeleteManifestTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeleteManifestTask.java index e642bed4be..b720ed3e8c 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeleteManifestTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeleteManifestTask.java @@ -56,6 +56,6 @@ public TaskResult execute(@Nonnull Stage stage) { .put("deploy.account.name", credentials) .build(); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestContext.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestContext.java new file mode 100644 index 0000000000..edcccee04f --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestContext.java @@ -0,0 +1,108 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.manifest; + +import com.fasterxml.jackson.annotation.JsonProperty; +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import lombok.Getter; +import org.jetbrains.annotations.Nullable; + +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Optional; + +@Getter +public class DeployManifestContext extends HashMap<String, Object> { + private final String source; + private final String manifestArtifactId; + private final Artifact manifestArtifact; + private final String manifestArtifactAccount; + private final Boolean skipExpressionEvaluation; + private final TrafficManagement trafficManagement; + private final List<String> requiredArtifactIds; + private final List<BindArtifact> requiredArtifacts; + + // There does not seem to be a way to auto-generate a constructor using our current version of Lombok (1.16.20) that + // Jackson can use to deserialize. + public DeployManifestContext( + @JsonProperty("source") String source, + @JsonProperty("manifestArtifactId") String manifestArtifactId, + @JsonProperty("manifestArtifact") Artifact manifestArtifact, + @JsonProperty("manifestArtifactAccount") String manifestArtifactAccount, + @JsonProperty("skipExpressionEvaluation") Boolean skipExpressionEvaluation, + @JsonProperty("trafficManagement") TrafficManagement trafficManagement, + @JsonProperty("requiredArtifactIds") List<String> requiredArtifactIds, + @JsonProperty("requiredArtifacts") List<BindArtifact> requiredArtifacts + ){ + this.source = source; + this.manifestArtifactId = manifestArtifactId; + this.manifestArtifact = manifestArtifact; + this.manifestArtifactAccount = manifestArtifactAccount; + this.skipExpressionEvaluation = skipExpressionEvaluation; + this.trafficManagement = Optional.ofNullable(trafficManagement).orElse(new TrafficManagement(false, null)); + this.requiredArtifactIds = requiredArtifactIds; + this.requiredArtifacts = requiredArtifacts; + } + + @Getter + public static class BindArtifact { 
@Nullable + private final String expectedArtifactId; + + @Nullable + private final Artifact artifact; + + + public BindArtifact(@JsonProperty("expectedArtifactId") @Nullable String expectedArtifactId, + @JsonProperty("artifact") @Nullable Artifact artifact) { + this.expectedArtifactId = expectedArtifactId; + this.artifact = artifact; + } + } + + @Getter + public static class TrafficManagement { + private final boolean enabled; + private final Options options; + + public TrafficManagement ( + @JsonProperty("enabled") Boolean enabled, + @JsonProperty("options") Options options + ) { + this.enabled = Optional.ofNullable(enabled).orElse(false); + this.options = Optional.ofNullable(options).orElse(new Options(false, Collections.emptyList(), null)); + } + + @Getter + public static class Options { + private final boolean enableTraffic; + private final List<String> services; + private final ManifestStrategyType strategy; + + public Options( + @JsonProperty("enableTraffic") Boolean enableTraffic, + @JsonProperty("services") List<String> services, + @JsonProperty("strategy") String strategy + ) { + this.enableTraffic = Optional.ofNullable(enableTraffic).orElse(false); + this.services = Optional.ofNullable(services).orElse(Collections.emptyList()); + this.strategy = ManifestStrategyType.fromKey(strategy); + } + } + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestTask.java index 1570f0756d..0d4a94a1b5 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestTask.java @@ -31,45 +31,33 @@ import com.netflix.spinnaker.orca.pipeline.model.Stage; import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver; import
com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor; +import lombok.RequiredArgsConstructor; import lombok.extern.slf4j.Slf4j; import org.apache.commons.lang.StringUtils; -import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Component; import org.yaml.snakeyaml.Yaml; import org.yaml.snakeyaml.constructor.SafeConstructor; import retrofit.client.Response; import javax.annotation.Nonnull; -import java.util.ArrayList; -import java.util.Collection; -import java.util.Collections; -import java.util.HashMap; -import java.util.List; -import java.util.Map; +import java.util.*; import java.util.stream.Collectors; import java.util.stream.StreamSupport; +import static java.util.Collections.emptyList; + @Component +@RequiredArgsConstructor @Slf4j public class DeployManifestTask extends AbstractCloudProviderAwareTask implements Task { - @Autowired - KatoService kato; - - @Autowired - OortService oort; - - @Autowired - ArtifactResolver artifactResolver; - - @Autowired - ObjectMapper objectMapper; + private final KatoService kato; + private final OortService oort; + private final ArtifactResolver artifactResolver; + private final ObjectMapper objectMapper; + private final ContextParameterProcessor contextParameterProcessor; private static final ThreadLocal<Yaml> yamlParser = ThreadLocal.withInitial(() -> new Yaml(new SafeConstructor())); - - @Autowired - ContextParameterProcessor contextParameterProcessor; - - RetrySupport retrySupport = new RetrySupport(); + private final RetrySupport retrySupport = new RetrySupport(); public static final String TASK_NAME = "deployManifest"; @@ -80,26 +68,29 @@ public TaskResult execute(@Nonnull Stage stage) { String cloudProvider = getCloudProvider(stage); List<Artifact> artifacts = artifactResolver.getArtifacts(stage); - Map task = new HashMap(stage.getContext()); - String artifactSource = (String) task.get("source"); + DeployManifestContext context = stage.mapTo(DeployManifestContext.class); + Map<String, Object>
task = new HashMap<>(context); + String artifactSource = context.getSource(); if (StringUtils.isNotEmpty(artifactSource) && artifactSource.equals("artifact")) { - if (task.get("manifestArtifactId") == null) { + Artifact manifestArtifact = artifactResolver.getBoundArtifactForStage(stage, context.getManifestArtifactId(), + context.getManifestArtifact()); + + if (manifestArtifact == null) { throw new IllegalArgumentException("No manifest artifact was specified."); } - if (task.get("manifestArtifactAccount") == null) { - throw new IllegalArgumentException("No manifest artifact account was specified."); + // Once the legacy artifacts feature is removed, all trigger expected artifacts will be required to define + // an account up front. + if(context.getManifestArtifactAccount() != null) { + manifestArtifact.setArtifactAccount(context.getManifestArtifactAccount()); } - Artifact manifestArtifact = artifactResolver.getBoundArtifactForId(stage, task.get("manifestArtifactId").toString()); - - if (manifestArtifact == null) { - throw new IllegalArgumentException("No artifact could be bound to '" + task.get("manifestArtifactId") + "'"); + if (manifestArtifact.getArtifactAccount() == null) { + throw new IllegalArgumentException("No manifest artifact account was specified."); } log.info("Using {} as the manifest to be deployed", manifestArtifact); - manifestArtifact.setArtifactAccount((String) task.get("manifestArtifactAccount")); Object parsedManifests = retrySupport.retry(() -> { try { Response manifestText = oort.fetchArtifact(manifestArtifact); @@ -119,14 +110,17 @@ public TaskResult execute(@Nonnull Stage stage) { Map<String, Object> manifestWrapper = new HashMap<>(); manifestWrapper.put("manifests", manifests); - manifestWrapper = contextParameterProcessor.process( + Boolean skipExpressionEvaluation = context.getSkipExpressionEvaluation(); + if (skipExpressionEvaluation == null || !skipExpressionEvaluation) { + manifestWrapper = contextParameterProcessor.process( manifestWrapper, 
contextParameterProcessor.buildExecutionContext(stage, true), true - ); + ); - if (manifestWrapper.containsKey("expressionEvaluationSummary")) { - throw new IllegalStateException("Failure evaluating manifest expressions: " + manifestWrapper.get("expressionEvaluationSummary")); + if (manifestWrapper.containsKey("expressionEvaluationSummary")) { + throw new IllegalStateException("Failure evaluating manifest expressions: " + manifestWrapper.get("expressionEvaluationSummary")); + } } return manifestWrapper.get("manifests"); @@ -140,10 +134,8 @@ public TaskResult execute(@Nonnull Stage stage) { task.put("source", "text"); } - List<String> requiredArtifactIds = (List<String>) task.get("requiredArtifactIds"); List<Artifact> requiredArtifacts = new ArrayList<>(); - requiredArtifactIds = requiredArtifactIds == null ? new ArrayList<>() : requiredArtifactIds; - for (String id : requiredArtifactIds) { + for (String id : Optional.ofNullable(context.getRequiredArtifactIds()).orElse(emptyList())) { Artifact requiredArtifact = artifactResolver.getBoundArtifactForId(stage, id); if (requiredArtifact == null) { throw new IllegalStateException("No artifact with id '" + id + "' could be found in the pipeline context."); @@ -152,10 +144,32 @@ public TaskResult execute(@Nonnull Stage stage) { requiredArtifacts.add(requiredArtifact); } + // resolve SpEL expressions in artifacts defined inline in the stage + for (DeployManifestContext.BindArtifact artifact : Optional.ofNullable(context.getRequiredArtifacts()).orElse(emptyList())) { + Artifact requiredArtifact = artifactResolver.getBoundArtifactForStage(stage, artifact.getExpectedArtifactId(), artifact.getArtifact()); + + if (requiredArtifact == null) { + throw new IllegalStateException("No artifact with id '" + artifact.getExpectedArtifactId() + "' could be found in the pipeline context."); + } + + requiredArtifacts.add(requiredArtifact); + } + log.info("Deploying {} artifacts within the provided manifest", requiredArtifacts); task.put("requiredArtifacts",
requiredArtifacts); task.put("optionalArtifacts", artifacts); + + if (context.getTrafficManagement() != null && context.getTrafficManagement().isEnabled()) { + task.put("services", context.getTrafficManagement().getOptions().getServices()); + task.put("enableTraffic", context.getTrafficManagement().getOptions().isEnableTraffic()); + task.put("strategy", context.getTrafficManagement().getOptions().getStrategy().name()); + } else { + // For backwards compatibility, traffic is always enabled to new server groups when the new traffic management + // features are not enabled. + task.put("enableTraffic", true); + } + Map<String, Map> operation = new ImmutableMap.Builder<String, Map>() .put(TASK_NAME, task) .build(); @@ -168,6 +182,6 @@ public TaskResult execute(@Nonnull Stage stage) { .put("deploy.account.name", credentials) .build(); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DynamicResolveManifestTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DynamicResolveManifestTask.java index a6d360308e..77940a38cc 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DynamicResolveManifestTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DynamicResolveManifestTask.java @@ -57,7 +57,7 @@ public TaskResult execute(@Nonnull Stage stage) { StageData stageData = fromStage(stage); if (StringUtils.isEmpty(stageData.criteria)) { - return new TaskResult(ExecutionStatus.SUCCEEDED); + return TaskResult.SUCCEEDED; } Manifest target = retrySupport.retry(() -> oortService.getDynamicManifest(credentials, @@ -76,7 +76,7 @@ public TaskResult execute(@Nonnull Stage stage) { .put("manifestName", target.getName()) .build(); - return new TaskResult(ExecutionStatus.SUCCEEDED, 
outputs, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).outputs(outputs).build(); } private StageData fromStage(Stage stage) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/GenericUpdateManifestTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/GenericUpdateManifestTask.java index 9bf6958821..8b7bc1645b 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/GenericUpdateManifestTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/GenericUpdateManifestTask.java @@ -63,6 +63,6 @@ public TaskResult execute(@Nonnull Stage stage) { ).build()) .build(); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestForceCacheRefreshTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestForceCacheRefreshTask.java index 358fb37a0d..b1812d5eee 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestForceCacheRefreshTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestForceCacheRefreshTask.java @@ -17,11 +17,12 @@ package com.netflix.spinnaker.orca.clouddriver.tasks.manifest; +import com.fasterxml.jackson.annotation.JsonIgnoreProperties; import com.fasterxml.jackson.annotation.JsonProperty; import com.fasterxml.jackson.core.type.TypeReference; import com.fasterxml.jackson.databind.ObjectMapper; -import com.google.common.collect.ImmutableMap; import com.netflix.spectator.api.Id; +import com.netflix.spectator.api.Registry; import com.netflix.spinnaker.orca.RetryableTask; import 
com.netflix.spinnaker.orca.Task; import com.netflix.spinnaker.orca.TaskResult; @@ -31,26 +32,20 @@ import com.netflix.spinnaker.orca.pipeline.model.Stage; import lombok.Data; import lombok.Getter; +import lombok.Value; import lombok.extern.slf4j.Slf4j; import org.apache.commons.lang.StringUtils; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Component; import retrofit.client.Response; +import javax.annotation.Nonnull; import java.io.IOException; import java.time.Clock; -import java.util.Collection; -import java.util.HashMap; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Optional; -import java.util.Set; +import java.util.*; import java.util.concurrent.TimeUnit; import java.util.stream.Collectors; -import com.netflix.spectator.api.Registry; - import static com.netflix.spinnaker.orca.ExecutionStatus.RUNNING; import static com.netflix.spinnaker.orca.ExecutionStatus.SUCCEEDED; import static java.net.HttpURLConnection.HTTP_OK; @@ -95,7 +90,8 @@ public ManifestForceCacheRefreshTask(Registry registry, } @Override - public TaskResult execute(Stage stage) { + @Nonnull + public TaskResult execute(@Nonnull Stage stage) { Long startTime = stage.getStartTime(); if (startTime == null) { throw new IllegalStateException("Stage has no start time, cannot be executing."); @@ -105,171 +101,142 @@ public TaskResult execute(Stage stage) { log.info("{}: Force cache refresh never finished processing... 
assuming the cache is in sync and continuing...", stage.getExecution().getId()); registry.timer(durationTimerId.withTags("success", "true", "outcome", "autoSucceed")) .record(duration, TimeUnit.MILLISECONDS); - return new TaskResult(SUCCEEDED); + return TaskResult.SUCCEEDED; } String cloudProvider = getCloudProvider(stage); - String account = getCredentials(stage); StageData stageData = fromStage(stage); - stageData.manifestNamesByNamespace = manifestNamesByNamespace(stage); + stageData.deployedManifests = getDeployedManifests(stage); + + checkPendingRefreshes(cloudProvider, stageData, startTime); - if (refreshManifests(cloudProvider, account, stageData)) { + refreshManifests(cloudProvider, stageData); + + if (allManifestsProcessed(stageData)) { registry.timer(durationTimerId.withTags("success", "true", "outcome", "complete")) .record(duration, TimeUnit.MILLISECONDS); - return new TaskResult(SUCCEEDED, toContext(stageData)); - } else { - TaskResult taskResult = checkPendingRefreshes(cloudProvider, account, stageData, startTime); - - // ignoring any non-success, non-failure statuses - if (taskResult.getStatus().isSuccessful()) { - registry.timer(durationTimerId.withTags("success", "true", "outcome", "complete")) - .record(duration, TimeUnit.MILLISECONDS); - } else if (taskResult.getStatus().isFailure()) { - registry.timer(durationTimerId.withTags("success", "false", "outcome", "failure")) - .record(duration, TimeUnit.MILLISECONDS); - } - return taskResult; + return TaskResult.builder(SUCCEEDED).context(toContext(stageData)).build(); } - } - private TaskResult checkPendingRefreshes(String provider, String account, StageData stageData, long startTime) { - Collection<PendingRefresh> pendingRefreshes = objectMapper.convertValue( - cacheStatusService.pendingForceCacheUpdates(provider, REFRESH_TYPE), - new TypeReference<Collection<PendingRefresh>>() { } - ); + return TaskResult.builder(RUNNING).context(toContext(stageData)).build(); + } - Map<String, List<String>> deployedManifests = stageData.getManifestNamesByNamespace(); - Set<String>
refreshedManifests = stageData.getRefreshedManifests(); - Set<String> processedManifests = stageData.getProcessedManifests(); - boolean allProcessed = true; + /** + * Checks whether all manifests deployed in the stage have been processed by the cache + * @return true if all manifests have been processed + */ + private boolean allManifestsProcessed(StageData stageData) { + return stageData.getProcessedManifests().containsAll(stageData.getDeployedManifests()); + } - for (Map.Entry<String, List<String>> entry : deployedManifests.entrySet()) { - String location = entry.getKey(); + /** + * Checks on the status of any pending on-demand cache refreshes. If a pending refresh has been processed, adds the + * corresponding manifest to processedManifests; if a pending refresh is not found in clouddriver or is invalid, + * removes the corresponding manifest from refreshedManifests + */ + private void checkPendingRefreshes(String provider, StageData stageData, long startTime) { + Set<ScopedManifest> refreshedManifests = stageData.getRefreshedManifests(); + Set<ScopedManifest> processedManifests = stageData.getProcessedManifests(); + + List<ScopedManifest> manifestsToCheck = refreshedManifests.stream() + .filter(m -> !processedManifests.contains(m)) + .collect(Collectors.toList()); + + if (manifestsToCheck.isEmpty()) { + return; + } - for (String name : entry.getValue()) { - String id = toManifestIdentifier(location, name); - if (processedManifests.contains(id)) { - continue; - } + Collection<PendingRefresh> pendingRefreshes = objectMapper.convertValue( + cacheStatusService.pendingForceCacheUpdates(provider, REFRESH_TYPE), + new TypeReference<Collection<PendingRefresh>>() { } + ); - Optional<PendingRefresh> pendingRefresh = pendingRefreshes.stream() - .filter(pr -> pr.getDetails() != null) - .filter(pr -> account.equals(pr.getDetails().getAccount()) && - (location.equals(pr.getDetails().getLocation()) || StringUtils.isNotEmpty(location) && StringUtils.isEmpty(pr.getDetails().getLocation())) && - name.equals(pr.getDetails().getName()) - ) - .findAny(); - - if (pendingRefresh.isPresent()) { - PendingRefresh
refresh = pendingRefresh.get(); - // it's possible the resource isn't supposed to have a namespace -- clouddriver reports this by removing it - // in the response. in this case, we make sure to set it to match between clouddriver and orca - if (StringUtils.isEmpty(refresh.getDetails().getLocation())) { - refresh.getDetails().setLocation(location); - } - if (pendingRefreshProcessed(refresh, refreshedManifests, startTime)) { - log.debug("Pending manifest refresh of {} in {} completed", id, account); - processedManifests.add(id); - } else { - log.debug("Pending manifest refresh of {} in {} still pending", id, account); - allProcessed = false; - } - } else { - log.warn("No pending refresh of {} in {}", id, account); - allProcessed = false; - refreshedManifests.remove(id); - } + for (ScopedManifest manifest : manifestsToCheck) { + RefreshStatus refreshStatus = pendingRefreshes.stream() + .filter(pr -> pr.getScopedManifest() != null) + .filter(pr -> refreshMatches(pr.getScopedManifest(), manifest)) + .map(pr -> getRefreshStatus(pr, startTime)) + .sorted() + .findFirst() + .orElse(RefreshStatus.INVALID); + + if (refreshStatus == RefreshStatus.PROCESSED) { + log.debug("Pending manifest refresh of {} completed", manifest); + processedManifests.add(manifest); + } else if (refreshStatus == RefreshStatus.PENDING) { + log.debug("Pending manifest refresh of {} still pending", manifest); + } else { + log.warn("No valid pending refresh of {}", manifest); + refreshedManifests.remove(manifest); } } + } - return new TaskResult(allProcessed ? 
SUCCEEDED : RUNNING, toContext(stageData)); + private boolean refreshMatches(ScopedManifest refresh, ScopedManifest manifest) { + return manifest.account.equals(refresh.account) + && (manifest.location.equals(refresh.location) || StringUtils.isEmpty(refresh.location)) + && manifest.name.equals(refresh.name); } - private boolean pendingRefreshProcessed(PendingRefresh pendingRefresh, Set<String> refreshedManifests, long startTime) { - PendingRefresh.Details details = pendingRefresh.getDetails(); - if (pendingRefresh.cacheTime == null || pendingRefresh.processedTime == null || details == null) { + private RefreshStatus getRefreshStatus(PendingRefresh pendingRefresh, long startTime) { + ScopedManifest scopedManifest = pendingRefresh.getScopedManifest(); + if (pendingRefresh.cacheTime == null || pendingRefresh.processedTime == null || scopedManifest == null) { log.warn("Pending refresh of {} is missing cache metadata", pendingRefresh); - refreshedManifests.remove(toManifestIdentifier(details.getLocation(), details.getName())); - return false; + return RefreshStatus.INVALID; } else if (pendingRefresh.cacheTime < startTime) { log.warn("Pending refresh of {} is stale", pendingRefresh); - refreshedManifests.remove(toManifestIdentifier(details.getLocation(), details.getName())); - return false; + return RefreshStatus.INVALID; } else if (pendingRefresh.processedTime < startTime) { log.info("Pending refresh of {} was cached as a part of this request, but not processed", pendingRefresh); - return false; + return RefreshStatus.PENDING; } else { - return true; + return RefreshStatus.PROCESSED; } } - private Map<String, List<String>> manifestsNeedingRefresh(StageData stageData) { - Map<String, List<String>> deployedManifests = stageData.getManifestNamesByNamespace(); - Set<String> refreshedManifests = stageData.getRefreshedManifests(); + private List<ScopedManifest> manifestsNeedingRefresh(StageData stageData) { + List<ScopedManifest> deployedManifests = stageData.getDeployedManifests(); + Set<ScopedManifest> refreshedManifests = stageData.getRefreshedManifests(); if
(deployedManifests.isEmpty()) { log.warn("No manifests were deployed, nothing to refresh..."); } - Map<String, List<String>> result = new HashMap<>(); - for (Map.Entry<String, List<String>> entry : deployedManifests.entrySet()) { - String location = entry.getKey(); - List<String> names = entry.getValue().stream() - .filter(n -> !refreshedManifests.contains(toManifestIdentifier(location, n))) - .collect(Collectors.toList()); - - if (!names.isEmpty()) { - result.put(location, names); - } - } + return deployedManifests.stream() + .filter(m -> !refreshedManifests.contains(m)) + .collect(Collectors.toList()); + } - return result; + private List<ScopedManifest> getDeployedManifests(Stage stage) { + String account = getCredentials(stage); + Map<String, List<String>> deployedManifests = manifestNamesByNamespace(stage); + return deployedManifests.entrySet().stream() + .flatMap(e -> e.getValue().stream().map(v -> new ScopedManifest(account, e.getKey(), v))) + .collect(Collectors.toList()); + } - private boolean refreshManifests(String provider, String account, StageData stageData) { - Map<String, List<String>> manifests = manifestsNeedingRefresh(stageData); - - final boolean[] allRefreshesSucceeded = {true}; - for (Map.Entry<String, List<String>> entry : manifests.entrySet()) { - String location = entry.getKey(); - entry.getValue().forEach(name -> { - String id = toManifestIdentifier(location, name); - Map<String, String> request = new ImmutableMap.Builder<String, String>() - .put("account", account) - .put("name", name) - .put("location", location) - .build(); - - try { - Response response = cacheService.forceCacheUpdate(provider, REFRESH_TYPE, request); - if (response.getStatus() == HTTP_OK) { - log.info("Refresh of {} in {} succeeded immediately", id, account); - stageData.getProcessedManifests().add(id); - } else { - allRefreshesSucceeded[0] = false; - } - - stageData.getRefreshedManifests().add(id); - } catch (Exception e) { - log.warn("Failed to refresh {}: ", id, e); - allRefreshesSucceeded[0] = false; - stageData.errors.add(e.getMessage()); + /** + * Requests an on-demand cache refresh for any manifest without a refresh
request that is either pending or + processed. Adds each manifest to refreshedManifests; if the request to clouddriver was immediately processed, + also adds the manifest to processedManifests. + */ + private void refreshManifests(String provider, StageData stageData) { + List<ScopedManifest> manifests = manifestsNeedingRefresh(stageData); + + for (ScopedManifest manifest : manifests) { + Map<String, String> request = objectMapper.convertValue(manifest, new TypeReference<Map<String, String>>() {}); + try { + Response response = cacheService.forceCacheUpdate(provider, REFRESH_TYPE, request); + if (response.getStatus() == HTTP_OK) { + log.info("Refresh of {} succeeded immediately", manifest); + stageData.getProcessedManifests().add(manifest); + }
- if (allRefreshesSucceeded[0] && !allRefreshesProcessed) { - log.warn("All refreshes succeeded, but not all have been processed yet..."); + stageData.getRefreshedManifests().add(manifest); + } catch (Exception e) { + log.warn("Failed to refresh {}: ", manifest, e); + stageData.errors.add(e.getMessage()); + } } - - return allRefreshesSucceeded[0] && allRefreshesProcessed; - } - - private String toManifestIdentifier(String namespace, String name) { - return namespace + ":" + name; } private StageData fromStage(Stage stage) { @@ -286,29 +253,47 @@ private Map<String, Object> toContext(StageData stageData) { @Data static private class PendingRefresh { - Details details; + @JsonProperty("details") + ScopedManifest scopedManifest; Long processedTime; Long cacheTime; Long processedCount; - - @Data - static private class Details { - String account; - String location; - String name; - } } @Data + @JsonIgnoreProperties(ignoreUnknown = true) static private class StageData { - Map<String, List<String>> manifestNamesByNamespace = new HashMap<>(); + List<ScopedManifest> deployedManifests = Collections.emptyList(); - @JsonProperty("refreshed.manifests") - Set<String> refreshedManifests = new HashSet<>(); + @JsonProperty("refreshed.scopedManifests") + Set<ScopedManifest> refreshedManifests = new HashSet<>(); - @JsonProperty("processed.manifests") - Set<String> processedManifests = new HashSet<>(); + @JsonProperty("processed.scopedManifests") + Set<ScopedManifest> processedManifests = new HashSet<>(); Set<String> errors = new HashSet<>(); } + + @Value + private static class ScopedManifest { + final String account; + final String location; + final String name; + + ScopedManifest( + @JsonProperty("account") String account, + @JsonProperty("location") String location, + @JsonProperty("name") String name + ) { + this.account = account; + this.location = location; + this.name = name; + } + } + + private enum RefreshStatus { + PROCESSED, + PENDING, + INVALID + } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestHighlanderStrategy.java
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestHighlanderStrategy.java new file mode 100644 index 0000000000..ad1448985b --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestHighlanderStrategy.java @@ -0,0 +1,37 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.manifest; + +import com.netflix.spinnaker.orca.pipeline.graph.StageGraphBuilder; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; + +@Component +public class ManifestHighlanderStrategy implements ManifestStrategy { + private final ManifestStrategyHandler strategyHandler; + + @Autowired + public ManifestHighlanderStrategy(ManifestStrategyHandler strategyHandler) { + this.strategyHandler = strategyHandler; + } + + public void composeFlow(DeployManifestContext parentContext, StageGraphBuilder graph) { + strategyHandler.disableOldManifests(parentContext, graph); + strategyHandler.deleteOldManifests(parentContext, graph); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/MigrateSecurityGroupTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestNoneStrategy.java similarity index 66% rename from 
orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/MigrateSecurityGroupTask.java rename to orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestNoneStrategy.java index bd2c02fef2..eaf45d571a 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/MigrateSecurityGroupTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestNoneStrategy.java @@ -1,5 +1,5 @@ /* - * Copyright 2016 Netflix, Inc. + * Copyright 2019 Google, Inc. * * Licensed under the Apache License, Version 2.0 (the "License") * you may not use this file except in compliance with the License. @@ -12,18 +12,15 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. + * */ -package com.netflix.spinnaker.orca.clouddriver.tasks.securitygroup; +package com.netflix.spinnaker.orca.clouddriver.tasks.manifest; -import com.netflix.spinnaker.orca.clouddriver.tasks.MigrateTask; +import com.netflix.spinnaker.orca.pipeline.graph.StageGraphBuilder; import org.springframework.stereotype.Component; @Component -public class MigrateSecurityGroupTask extends MigrateTask { - - @Override - public String getCloudOperationType() { - return "migrateSecurityGroup"; - } +public class ManifestNoneStrategy implements ManifestStrategy { + public void composeFlow(DeployManifestContext parentContext, StageGraphBuilder graph) {} } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestRedBlackStrategy.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestRedBlackStrategy.java new file mode 100644 index 0000000000..2d53702333 --- /dev/null +++ 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestRedBlackStrategy.java @@ -0,0 +1,36 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.manifest; + +import com.netflix.spinnaker.orca.pipeline.graph.StageGraphBuilder; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; + +@Component +public class ManifestRedBlackStrategy implements ManifestStrategy { + private final ManifestStrategyHandler strategyHandler; + + @Autowired + public ManifestRedBlackStrategy(ManifestStrategyHandler strategyHandler) { + this.strategyHandler = strategyHandler; + } + + public void composeFlow(DeployManifestContext parentContext, StageGraphBuilder graph) { + strategyHandler.disableOldManifests(parentContext, graph); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategy.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategy.java new file mode 100644 index 0000000000..647be3bc72 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategy.java @@ -0,0 +1,24 @@ +/* + * Copyright 2019 Google, Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.manifest; + +import com.netflix.spinnaker.orca.pipeline.graph.StageGraphBuilder; + +public interface ManifestStrategy { + void composeFlow(DeployManifestContext parentContext, StageGraphBuilder graph); +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyHandler.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyHandler.java new file mode 100644 index 0000000000..f03bf542d5 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyHandler.java @@ -0,0 +1,95 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.manifest; + +import com.netflix.spinnaker.orca.clouddriver.pipeline.manifest.DeleteManifestStage; +import com.netflix.spinnaker.orca.clouddriver.pipeline.manifest.DisableManifestStage; +import com.netflix.spinnaker.orca.clouddriver.utils.OortHelper; +import com.netflix.spinnaker.orca.pipeline.graph.StageGraphBuilder; +import lombok.RequiredArgsConstructor; +import org.springframework.stereotype.Component; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.stream.Collectors; + +@Component +@RequiredArgsConstructor +public class ManifestStrategyHandler { + private final OortHelper oortHelper; + + void disableOldManifests(DeployManifestContext parentContext, StageGraphBuilder graph) { + addStagesForOldManifests(parentContext, graph, DisableManifestStage.PIPELINE_CONFIG_TYPE); + } + + void deleteOldManifests(DeployManifestContext parentContext, StageGraphBuilder graph) { + addStagesForOldManifests(parentContext, graph, DeleteManifestStage.PIPELINE_CONFIG_TYPE); + } + + private Map<String, Object> getNewManifest(DeployManifestContext parentContext) { + List<Map<String, Object>> manifests = (List<Map<String, Object>>) parentContext.get("outputs.manifests"); + return manifests.get(0); + } + + private List<String> getOldManifestNames(String application, String account, String clusterName, String namespace, String newManifestName) { + Map cluster = oortHelper.getCluster(application, account, clusterName, "kubernetes") + .orElseThrow(() -> new IllegalArgumentException(String.format("Error fetching cluster %s in account %s and namespace %s", clusterName, account, namespace))); + + List<Map> serverGroups = Optional.ofNullable((List<Map>) cluster.get("serverGroups")) + .orElse(null); + + if (serverGroups == null) { + return new ArrayList<>(); + } + + return serverGroups.stream() + .filter(s -> s.get("region").equals(namespace)) + .filter(s -> !s.get("name").equals(newManifestName)) + .map(s -> (String)
s.get("name")) + .collect(Collectors.toList()); + } + + private void addStagesForOldManifests(DeployManifestContext parentContext, StageGraphBuilder graph, String stageType) { + Map deployedManifest = getNewManifest(parentContext); + String account = (String) parentContext.get("account"); + Map manifestMoniker = (Map) parentContext.get("moniker"); + String application = (String) manifestMoniker.get("app"); + + Map manifestMetadata = (Map) deployedManifest.get("metadata"); + String manifestName = String.format("replicaSet %s", (String) manifestMetadata.get("name")); + String namespace = (String) manifestMetadata.get("namespace"); + Map annotations = (Map) manifestMetadata.get("annotations"); + String clusterName = (String) annotations.get("moniker.spinnaker.io/cluster"); + String cloudProvider = "kubernetes"; + + List previousManifestNames = getOldManifestNames(application, account, clusterName, namespace, manifestName); + previousManifestNames.forEach(name -> { + graph.append((stage) -> { + stage.setType(stageType); + Map context = stage.getContext(); + context.put("account", account); + context.put("app", application); + context.put("cloudProvider", cloudProvider); + context.put("manifestName", name); + context.put("location", namespace); + }); + }); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyStagesAdder.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyStagesAdder.java new file mode 100644 index 0000000000..ec663375e5 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyStagesAdder.java @@ -0,0 +1,45 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.manifest; + +import com.netflix.spinnaker.orca.pipeline.graph.StageGraphBuilder; +import lombok.RequiredArgsConstructor; +import org.springframework.stereotype.Component; + +@Component +@RequiredArgsConstructor +public class ManifestStrategyStagesAdder { + private final ManifestHighlanderStrategy highlanderStrategy; + + private final ManifestRedBlackStrategy redBlackStrategy; + + private final ManifestNoneStrategy noneStrategy; + + public void addAfterStages(ManifestStrategyType strategy, StageGraphBuilder graph, DeployManifestContext parentContext) { + switch (strategy) { + case RED_BLACK: + redBlackStrategy.composeFlow(parentContext, graph); + break; + case HIGHLANDER: + highlanderStrategy.composeFlow(parentContext, graph); + break; + default: + noneStrategy.composeFlow(parentContext, graph); + } + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyType.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyType.java new file mode 100644 index 0000000000..b42c0cb7f1 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestStrategyType.java @@ -0,0 +1,42 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.manifest; + +public enum ManifestStrategyType { + RED_BLACK("redblack"), + HIGHLANDER("highlander"), + NONE("none"); + + String key; + + ManifestStrategyType(String key) { + this.key = key; + } + + public static ManifestStrategyType fromKey(String key) { + if (key == null) { + return NONE; + } + for (ManifestStrategyType strategy : values()) { + if (key.equals(strategy.key)) { + return strategy; + } + } + return NONE; + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/PatchManifestTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/PatchManifestTask.java index 9dbb8bc532..8a74f6d860 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/PatchManifestTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/PatchManifestTask.java @@ -111,7 +111,7 @@ public TaskResult execute(@Nonnull Stage stage) { .put("deploy.account.name", credentials) .build(); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } // TODO(dibyom) : Refactor into ManifestArtifact utils class for both Deploy and Patch. 
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/PromoteManifestKatoOutputsTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/PromoteManifestKatoOutputsTask.java index 9e3c2fda82..a85ecb62d6 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/PromoteManifestKatoOutputsTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/PromoteManifestKatoOutputsTask.java @@ -77,7 +77,7 @@ public TaskResult execute(@Nonnull Stage stage) { addToOutputs(outputs, allResults, CREATED_ARTIFACTS_KEY, ARTIFACTS_KEY); convertKey(outputs, ARTIFACTS_KEY, artifactListType); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).outputs(outputs).build(); } private void convertKey(Map outputs, String key, TypeReference tr) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/WaitForManifestStableTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/WaitForManifestStableTask.java index d314a9b179..35927fa49b 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/WaitForManifestStableTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/WaitForManifestStableTask.java @@ -83,7 +83,7 @@ public TaskResult execute(@Nonnull Stage stage) { manifest = oortService.getManifest(account, location, name); } catch (RetrofitError e) { log.warn("Unable to read manifest {}", identifier, e); - return new TaskResult(ExecutionStatus.RUNNING, new HashMap<>(), new HashMap<>()); + return TaskResult.builder(ExecutionStatus.RUNNING).context(new HashMap<>()).outputs(new HashMap<>()).build(); } catch (Exception e) { throw new RuntimeException("Execution '" + 
stage.getExecution().getId() + "' failed with unexpected reason: " + e.getMessage(), e); } @@ -137,11 +137,11 @@ public TaskResult execute(@Nonnull Stage stage) { Map context = builder.build(); if (!anyUnknown && anyFailed) { - return new TaskResult(ExecutionStatus.TERMINAL, context); + return TaskResult.builder(ExecutionStatus.TERMINAL).context(context).build(); } else if (allStable) { - return new TaskResult(ExecutionStatus.SUCCEEDED, context, new HashMap<>()); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(context).outputs(new HashMap<>()).build(); } else { - return new TaskResult(ExecutionStatus.RUNNING, context, new HashMap<>()); + return TaskResult.builder(ExecutionStatus.RUNNING).context(context).outputs(new HashMap<>()).build(); } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckForRemainingPipelinesTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckForRemainingPipelinesTask.java new file mode 100644 index 0000000000..c346de5c31 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckForRemainingPipelinesTask.java @@ -0,0 +1,36 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline; + +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.springframework.stereotype.Component; + +@Component +public class CheckForRemainingPipelinesTask implements Task { + + @Override + public TaskResult execute(Stage stage) { + final SavePipelinesData savePipelines = stage.mapTo(SavePipelinesData.class); + if (savePipelines.getPipelinesToSave() == null || savePipelines.getPipelinesToSave().isEmpty()) { + return TaskResult.SUCCEEDED; + } + return TaskResult.ofStatus(ExecutionStatus.REDIRECT); + } + +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckPipelineResultsTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckPipelineResultsTask.java new file mode 100644 index 0000000000..c6b780d746 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckPipelineResultsTask.java @@ -0,0 +1,77 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline; + +import com.fasterxml.jackson.core.type.TypeReference; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.springframework.stereotype.Component; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Optional; + +@Component +public class CheckPipelineResultsTask implements Task { + + private final ObjectMapper objectMapper; + + public CheckPipelineResultsTask(ObjectMapper objectMapper) { + this.objectMapper = objectMapper; + } + + @Override + public TaskResult execute(Stage stage) { + final SavePipelineResultsData previousSavePipelineResults = stage.mapTo(SavePipelineResultsData.class); + final SavePipelinesData savePipelinesData = stage.mapTo(SavePipelinesData.class); + final List previousCreated = previousSavePipelineResults.getPipelinesCreated(); + final List previousUpdated = previousSavePipelineResults.getPipelinesUpdated(); + final List previousFailedToSave = previousSavePipelineResults.getPipelinesFailedToSave(); + final SavePipelineResultsData savePipelineResults = new SavePipelineResultsData( + previousCreated == null ? new ArrayList() : previousCreated, + previousUpdated == null ? new ArrayList() : previousUpdated, + previousFailedToSave == null ? 
new ArrayList<>() : previousFailedToSave + ); + + stage.getTasks().stream().filter( task -> task.getName().equals("savePipeline")).findFirst() + .ifPresent(savePipelineTask -> { + final String application = (String) stage.getContext().get("application"); + final String pipelineName = (String) stage.getContext().get("pipeline.name"); + final String pipelineId = (String) stage.getContext().get("pipeline.id"); + final PipelineReferenceData ref = new PipelineReferenceData(application, pipelineName, pipelineId); + if (savePipelineTask.getStatus().isSuccessful()) { + final Boolean isExistingPipeline = (Boolean) Optional.ofNullable(stage.getContext().get("isExistingPipeline")) + .orElse(false); + if (isExistingPipeline) { + savePipelineResults.getPipelinesUpdated().add(ref); + } else { + savePipelineResults.getPipelinesCreated().add(ref); + } + } else { + savePipelineResults.getPipelinesFailedToSave().add(ref); + } + }); + + final Map<String, Object> output = objectMapper. + convertValue(savePipelineResults, new TypeReference<Map<String, Object>>() {}); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(output).build(); + } + +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/GetPipelinesFromArtifactTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/GetPipelinesFromArtifactTask.java new file mode 100644 index 0000000000..2ca6b902ab --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/GetPipelinesFromArtifactTask.java @@ -0,0 +1,134 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline; + +import com.fasterxml.jackson.annotation.JsonProperty; +import com.fasterxml.jackson.core.type.TypeReference; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.google.common.io.CharStreams; +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import com.netflix.spinnaker.kork.core.RetrySupport; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.OortService; +import com.netflix.spinnaker.orca.front50.Front50Service; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver; +import lombok.AllArgsConstructor; +import lombok.Getter; +import lombok.NoArgsConstructor; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty; +import org.springframework.stereotype.Component; +import retrofit.client.Response; + +import java.io.*; +import java.util.*; +import java.util.stream.Collectors; + +@Component +@ConditionalOnProperty("front50.enabled") +public class GetPipelinesFromArtifactTask implements Task { + + private Logger log = LoggerFactory.getLogger(getClass()); + RetrySupport retrySupport = new RetrySupport(); + + private final Front50Service front50Service; + private final OortService oort; + private final ObjectMapper objectMapper; + private final ArtifactResolver artifactResolver; 
+ + public GetPipelinesFromArtifactTask(Front50Service front50Service, + OortService oort, + ObjectMapper objectMapper, + ArtifactResolver artifactResolver) { + this.front50Service = front50Service; + this.oort = oort; + this.objectMapper = objectMapper; + this.artifactResolver = artifactResolver; + } + + @Getter + @NoArgsConstructor + @AllArgsConstructor + public static class PipelinesArtifactData { + @JsonProperty("pipelinesArtifactId") private String id; + @JsonProperty("pipelinesArtifact") private Artifact inline; + } + + @SuppressWarnings("unchecked") + @Override + public TaskResult execute(Stage stage) { + final PipelinesArtifactData pipelinesArtifact = stage.mapTo(PipelinesArtifactData.class); + Artifact resolvedArtifact = artifactResolver + .getBoundArtifactForStage(stage, pipelinesArtifact.getId(), pipelinesArtifact.getInline()); + if (resolvedArtifact == null) { + throw new IllegalArgumentException("No artifact could be bound to '" + pipelinesArtifact.getId() + "'"); + } + log.info("Using {} as the pipelines to be saved", pipelinesArtifact); + + String pipelinesText = getPipelinesArtifactContent(resolvedArtifact); + + Map<String, List<Map>> pipelinesFromArtifact = null; + try { + pipelinesFromArtifact = objectMapper.readValue(pipelinesText, new TypeReference<Map<String, List<Map>>>() {}); + } catch (IOException e) { + log.warn("Failure parsing pipelines from {}", pipelinesArtifact, e); + throw new IllegalStateException(e); // forces a retry + } + final Map<String, List<Map>> finalPipelinesFromArtifact = pipelinesFromArtifact; + final Set<String> appNames = pipelinesFromArtifact.keySet(); + final List<Map> newAndUpdatedPipelines = appNames.stream().flatMap(appName -> { + final List<Map<String, Object>> existingAppPipelines = front50Service.getPipelines(appName); + final List<Map> specifiedAppPipelines = finalPipelinesFromArtifact.get(appName); + return specifiedAppPipelines.stream().map(p -> { + final Map pipeline = p; + pipeline.put("application", appName); + final Optional<Map<String, Object>> matchedExistingPipeline = existingAppPipelines +
.stream().filter(existingPipeline -> existingPipeline.get("name").equals(pipeline.get("name"))).findFirst(); + matchedExistingPipeline.ifPresent(matchedPipeline -> { + pipeline.put("id", matchedPipeline.get("id")); + }); + return pipeline; + }).filter(pipeline -> !pipeline.isEmpty()); + }).collect(Collectors.toList()); + final SavePipelinesData output = new SavePipelinesData(null, newAndUpdatedPipelines); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(objectMapper.convertValue(output, new TypeReference<Map<String, Object>>() {})).build(); + } + + private String getPipelinesArtifactContent(Artifact artifact) { + return retrySupport.retry(() -> { + Response response = oort.fetchArtifact(artifact); + InputStream artifactInputStream; + try { + artifactInputStream = response.getBody().in(); + } catch (IOException e) { + log.warn("Failure fetching pipelines from {}", artifact, e); + throw new IllegalStateException(e); // forces a retry + } + try (InputStreamReader rd = new InputStreamReader(artifactInputStream)) { + return CharStreams.toString(rd); + } catch (IOException e) { + log.warn("Failure reading pipelines from {}", artifact, e); + throw new IllegalStateException(e); // forces a retry + } + }, 10, 200, true); + } + +} + diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/MigratePipelineClustersTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/MigratePipelineClustersTask.java deleted file mode 100644 index ee3bb7a115..0000000000 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/MigratePipelineClustersTask.java +++ /dev/null @@ -1,119 +0,0 @@ -/* - * Copyright 2016 Netflix, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License") - * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline; - -import java.util.*; -import java.util.stream.Collectors; -import com.netflix.spinnaker.orca.ExecutionStatus; -import com.netflix.spinnaker.orca.TaskResult; -import com.netflix.spinnaker.orca.clouddriver.KatoService; -import com.netflix.spinnaker.orca.clouddriver.model.TaskId; -import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask; -import com.netflix.spinnaker.orca.clouddriver.tasks.cluster.PipelineClusterExtractor; -import com.netflix.spinnaker.orca.front50.Front50Service; -import com.netflix.spinnaker.orca.pipeline.model.Stage; -import org.springframework.beans.factory.annotation.Autowired; -import org.springframework.stereotype.Component; - -@Component -public class MigratePipelineClustersTask extends AbstractCloudProviderAwareTask { - - @Autowired - KatoService katoService; - - @Autowired(required = false) - Front50Service front50Service; - - @Autowired - List<PipelineClusterExtractor> extractors; - - @Override - public TaskResult execute(Stage stage) { - if (front50Service == null) { - throw new UnsupportedOperationException("Cannot migrate pipeline clusters, front50 is not enabled.
Fix this by setting front50.enabled: true"); - } - - Map<String, Object> context = stage.getContext(); - Optional<Map<String, Object>> pipelineMatch = getPipeline(context); - - if (!pipelineMatch.isPresent()) { - return pipelineNotFound(context); - } - - List<Map> sources = getSources(pipelineMatch.get()); - List<Map<String, Map>> operations = generateKatoOperation(context, sources); - - TaskId taskId = katoService.requestOperations(getCloudProvider(stage), operations) - .toBlocking() - .first(); - Map<String, Object> outputs = new HashMap<>(); - outputs.put("notification.type", "migratepipelineclusters"); - outputs.put("kato.last.task.id", taskId); - outputs.put("source.pipeline", pipelineMatch.get()); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); - } - - private List<Map> getSources(Map<String, Object> pipeline) { - List<Map> stages = (List<Map>) pipeline.getOrDefault("stages", new ArrayList<>()); - return stages.stream().map(s -> { - Optional<PipelineClusterExtractor> extractor = PipelineClusterExtractor.getExtractor(s, extractors); - if (extractor.isPresent()) { - return extractor.get().extractClusters(s).stream() - .map(c -> Collections.singletonMap("cluster", c)) - .collect(Collectors.toList()); - } - return new ArrayList(); - }).flatMap(Collection::stream).collect(Collectors.toList()); - } - - private List<Map<String, Map>> generateKatoOperation(Map<String, Object> context, List<Map> sources) { - Map<String, Object> migrateOperation = new HashMap<>(); - migrateOperation.put("sources", sources); - Map<String, Map> operation = new HashMap<>(); - operation.put("migrateClusterConfigurations", migrateOperation); - addMappings(context, migrateOperation); - - List<Map<String, Map>> operations = new ArrayList<>(); - operations.add(operation); - return operations; - } - - private void addMappings(Map<String, Object> context, Map<String, Object> operation) { - operation.put("regionMapping", context.getOrDefault("regionMapping", new HashMap<>())); - operation.put("accountMapping", context.getOrDefault("accountMapping", new HashMap<>())); - operation.put("subnetTypeMapping", context.getOrDefault("subnetTypeMapping", new HashMap<>())); - operation.put("elbSubnetTypeMapping",
context.getOrDefault("elbSubnetTypeMapping", new HashMap<>())); - operation.put("iamRoleMapping", context.getOrDefault("iamRoleMapping", new HashMap<>())); - operation.put("keyPairMapping", context.getOrDefault("keyPairMapping", new HashMap<>())); - operation.put("dryRun", context.getOrDefault("dryRun", false)); - operation.put("allowIngressFromClassic", context.getOrDefault("allowIngressFromClassic", false)); - } - - private Optional<Map<String, Object>> getPipeline(Map<String, Object> context) { - String application = (String) context.get("application"); - String pipelineId = (String) context.get("pipelineConfigId"); - return front50Service.getPipelines(application).stream() - .filter(p -> pipelineId.equals(p.get("id"))).findFirst(); - } - - private TaskResult pipelineNotFound(Map<String, Object> context) { - Map<String, Object> outputs = new HashMap<>(); - outputs.put("exception", "Could not find pipeline with ID " + context.get("pipelineConfigId")); - return new TaskResult(ExecutionStatus.TERMINAL, outputs); - } - -} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PipelineReferenceData.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PipelineReferenceData.java new file mode 100644 index 0000000000..2ca2f71466 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PipelineReferenceData.java @@ -0,0 +1,29 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline; + +import lombok.AllArgsConstructor; +import lombok.Getter; +import lombok.NoArgsConstructor; + +@Getter +@NoArgsConstructor +@AllArgsConstructor +public class PipelineReferenceData { + private String application; + private String name; + private String id; +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PreparePipelineToSaveTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PreparePipelineToSaveTask.java new file mode 100644 index 0000000000..9f7e753bac --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PreparePipelineToSaveTask.java @@ -0,0 +1,66 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.core.type.TypeReference; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.stereotype.Component; + +import java.util.Base64; +import java.util.List; +import java.util.Map; + +@Component +public class PreparePipelineToSaveTask implements Task { + + private Logger log = LoggerFactory.getLogger(getClass()); + + private final ObjectMapper objectMapper; + + public PreparePipelineToSaveTask(ObjectMapper objectMapper) { + this.objectMapper = objectMapper; + } + + @Override + public TaskResult execute(Stage stage) { + final SavePipelinesData input = stage.mapTo(SavePipelinesData.class); + if (input.getPipelinesToSave() == null || input.getPipelinesToSave().isEmpty()) { + log.info("There are no pipelines to save."); + return TaskResult.ofStatus(ExecutionStatus.TERMINAL); + } + final Map pipelineData = input.getPipelinesToSave().get(0); + final String pipelineString; + try { + pipelineString = objectMapper.writeValueAsString(pipelineData); + } catch (JsonProcessingException e) { + throw new IllegalStateException(e); + } + final String encodedPipeline = Base64.getEncoder().encodeToString(pipelineString.getBytes()); + final List<Map> remainingPipelinesToSave = input.getPipelinesToSave().subList(1, input.getPipelinesToSave().size()); + final SavePipelinesData outputSavePipelinesData = new SavePipelinesData(encodedPipeline, remainingPipelinesToSave); + final Map<String, Object> output = objectMapper.convertValue(outputSavePipelinesData, new TypeReference<Map<String, Object>>() {}); + output.put("isExistingPipeline", pipelineData.get("id") != null); + return
TaskResult.builder(ExecutionStatus.SUCCEEDED).context(output).build(); + } + +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelineResultsData.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelineResultsData.java new file mode 100644 index 0000000000..5bf086f873 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelineResultsData.java @@ -0,0 +1,31 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline; + +import lombok.AllArgsConstructor; +import lombok.Getter; +import lombok.NoArgsConstructor; + +import java.util.List; + +@Getter +@NoArgsConstructor +@AllArgsConstructor +public class SavePipelineResultsData { + private List<PipelineReferenceData> pipelinesCreated; + private List<PipelineReferenceData> pipelinesUpdated; + private List<PipelineReferenceData> pipelinesFailedToSave; +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelinesCompleteTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelinesCompleteTask.java new file mode 100644 index 0000000000..5f958cafe3 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelinesCompleteTask.java @@ -0,0 +1,52 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline; + +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.stereotype.Component; + +import java.util.List; +import java.util.stream.Collectors; + +@Component +public class SavePipelinesCompleteTask implements Task { + + private final Logger log = LoggerFactory.getLogger(getClass()); + + @Override public TaskResult execute(Stage stage) { + final SavePipelineResultsData savePipelineResults = stage.mapTo(SavePipelineResultsData.class); + logResults(savePipelineResults.getPipelinesFailedToSave(), "Failed to save pipelines: "); + logResults(savePipelineResults.getPipelinesCreated(), "Created pipelines: "); + logResults(savePipelineResults.getPipelinesUpdated(), "Updated pipelines: "); + if (savePipelineResults.getPipelinesFailedToSave().isEmpty()) { + return TaskResult.SUCCEEDED; + } + return TaskResult.ofStatus(ExecutionStatus.TERMINAL); + } + + private void logResults(List<PipelineReferenceData> savePipelineSuccesses, String s) { + if (!savePipelineSuccesses.isEmpty()) { + log.info(s + savePipelineSuccesses.stream() + .map(ref -> ref.getApplication() + ":" + ref.getName()) + .collect(Collectors.joining(", "))); + } + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelinesData.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelinesData.java new file mode 100644 index 0000000000..09148540b8 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/SavePipelinesData.java @@ -0,0 +1,32 @@ +/* + * Copyright 2019 Pivotal, Inc.
+ * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline; + +import lombok.AllArgsConstructor; +import lombok.Getter; +import lombok.NoArgsConstructor; + +import java.util.List; +import java.util.Map; + +@Getter +@NoArgsConstructor +@AllArgsConstructor +public class SavePipelinesData { + private String pipeline; + private List<Map> pipelinesToSave; + private final Boolean isSavingMultiplePipelines = true; +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/UpdateMigratedPipelineTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/UpdateMigratedPipelineTask.java deleted file mode 100644 index 2339d32630..0000000000 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/UpdateMigratedPipelineTask.java +++ /dev/null @@ -1,72 +0,0 @@ -/* - * Copyright 2016 Netflix, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License") - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline; - -import java.util.*; -import java.util.stream.Collectors; -import com.netflix.spinnaker.orca.ExecutionStatus; -import com.netflix.spinnaker.orca.TaskResult; -import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask; -import com.netflix.spinnaker.orca.clouddriver.tasks.cluster.PipelineClusterExtractor; -import com.netflix.spinnaker.orca.front50.Front50Service; -import com.netflix.spinnaker.orca.pipeline.model.Stage; -import org.springframework.beans.factory.annotation.Autowired; -import org.springframework.stereotype.Component; - -@Component -public class UpdateMigratedPipelineTask extends AbstractCloudProviderAwareTask { - - @Autowired(required = false) - Front50Service front50Service; - - @Autowired - List<PipelineClusterExtractor> extractors; - - @Override - public TaskResult execute(Stage stage) { - if (front50Service == null) { - throw new UnsupportedOperationException("Unable to update migrated pipelines, front50 is not enabled.
Fix this by setting front50.enabled: true"); - } - Map<String, Object> context = stage.getContext(); - Map<String, Object> pipeline = (Map<String, Object>) context.get("source.pipeline"); - - List<Map> stages = (List<Map>) pipeline.getOrDefault("stages", new ArrayList<>()); - List<Map> katoTasks = (List<Map>) context.get("kato.tasks"); - List<Map> resultObjects = (List<Map>) katoTasks.get(0).get("resultObjects"); - List<Map> replacements = new ArrayList<>(resultObjects.stream().map(o -> (Map) o.get("cluster")).collect(Collectors.toList())); - stages.forEach(s -> - PipelineClusterExtractor.getExtractor(s, extractors).ifPresent(e -> e.updateStageClusters(s, replacements)) - ); - String newName = (String) context.getOrDefault("newPipelineName", pipeline.get("name") + " - migrated"); - pipeline.put("name", newName); - pipeline.remove("id"); - List<Map> triggers = (List<Map>) pipeline.getOrDefault("triggers", new ArrayList<>()); - triggers.forEach(t -> t.put("enabled", false)); - front50Service.savePipeline(pipeline); - String application = (String) context.get("application"); - Optional<Map<String, Object>> newPipeline = front50Service.getPipelines(application).stream() - .filter(p -> newName.equals(p.get("name"))).findFirst(); - if (!newPipeline.isPresent()) { - Map<String, Object> outputs = new HashMap<>(); - outputs.put("exception", "Pipeline migration was successful but could not find new pipeline with name " + newName); - return new TaskResult(ExecutionStatus.TERMINAL, outputs); - } - Map<String, Object> outputs = new HashMap<>(); - outputs.put("newPipelineId", newPipeline.get().get("id")); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); - } -} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/AbstractWaitForAppEngineServerGroupStopStartTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/AbstractWaitForAppEngineServerGroupStopStartTask.groovy index c3966e4789..50f109cca3 100644 ---
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/AbstractWaitForAppEngineServerGroupStopStartTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/AbstractWaitForAppEngineServerGroupStopStartTask.groovy @@ -57,24 +57,24 @@ abstract class AbstractWaitForAppEngineServerGroupStopStartTask extends Abstract def serverGroup = cluster.serverGroups.find { it.name == serverGroupName } if (!serverGroup) { log.info("${serverGroupName}: not found.") - return new TaskResult(ExecutionStatus.TERMINAL) + return TaskResult.ofStatus(ExecutionStatus.TERMINAL) } def desiredServingStatus = start ? "SERVING" : "STOPPED" if (serverGroup.servingStatus == desiredServingStatus) { - return new TaskResult(ExecutionStatus.SUCCEEDED) + return TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } else { log.info("${serverGroupName}: not yet ${start ? "started" : "stopped"}.") - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } } catch (RetrofitError e) { def retrofitErrorResponse = new RetrofitExceptionHandler().handle(stage.name, e) if (e.response?.status == 404) { - return new TaskResult(ExecutionStatus.TERMINAL) + return TaskResult.ofStatus(ExecutionStatus.TERMINAL) } else if (e.response?.status >= 500) { log.error("Unexpected retrofit error (${retrofitErrorResponse})") - return new TaskResult(ExecutionStatus.RUNNING, [lastRetrofitException: retrofitErrorResponse]) + return TaskResult.builder(ExecutionStatus.RUNNING).context([lastRetrofitException: retrofitErrorResponse]).build() } throw e diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/UpsertAppEngineLoadBalancersTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/UpsertAppEngineLoadBalancersTask.groovy index 1244b9c035..191ec2bab5 100644 --- 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/UpsertAppEngineLoadBalancersTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/UpsertAppEngineLoadBalancersTask.groovy @@ -87,7 +87,7 @@ class UpsertAppEngineLoadBalancersTask extends AbstractCloudProviderAwareTask im ] } ] - new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } String resolveTargetServerGroupName(Map loadBalancer, Map allocationDescription) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/AmazonImageTagger.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/AmazonImageTagger.java index 64952cb573..36b253c30c 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/AmazonImageTagger.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/AmazonImageTagger.java @@ -135,6 +135,10 @@ public ImageTagger.OperationContext getOperationContext(Stage stage) { */ @Override public boolean areImagesTagged(Collection<Image> targetImages, Collection<String> consideredStageRefIds, Stage stage) { + if (targetImages.stream().anyMatch(image -> image.imageName == null)) { + return false; + } + + Collection<MatchedImage> matchedImages = findImages( targetImages.stream().map(targetImage -> targetImage.imageName).collect(Collectors.toSet()), consideredStageRefIds, @@ -170,6 +174,29 @@ public boolean areImagesTagged(Collection targetImages, Collection<String> upstreamImageIds, Collection<MatchedImage> foundImages) { + Set<String> foundAmis = new HashSet<>(); + Collection<MatchedImage> matchedImages = ((Collection<MatchedImage>) foundImages); + + for (MatchedImage matchedImage : matchedImages) { + matchedImage.amis.values().forEach(foundAmis::addAll); + } + + if (foundAmis.size() < upstreamImageIds.size()) { + throw new
ImageNotFound( + format("Only found %d images to tag but %d were specified upstream (found imageIds: %s, found imageNames: %s)", + foundAmis.size(), + upstreamImageIds.size(), + foundAmis, + matchedImages.stream() + .map(i -> i.imageName) + .collect(Collectors.toSet())), + true + ); + } + } + @Override + public String getCloudProvider() { + return "aws"; diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/CloudFormationForceCacheRefreshTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/CloudFormationForceCacheRefreshTask.java index 8840198f60..a8f22ca491 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/CloudFormationForceCacheRefreshTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/CloudFormationForceCacheRefreshTask.java @@ -26,7 +26,9 @@ import org.springframework.stereotype.Component; import javax.annotation.Nonnull; -import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; @Component public class CloudFormationForceCacheRefreshTask extends AbstractCloudProviderAw public TaskResult execute(@Nonnull Stage stage) { String cloudProvider = getCloudProvider(stage); - cacheService.forceCacheUpdate(cloudProvider, REFRESH_TYPE, Collections.emptyMap()); - - return new TaskResult(ExecutionStatus.SUCCEEDED); + Map<String, Object> data = new HashMap<>(); + + String credentials = getCredentials(stage); + if (credentials != null) { + data.put("credentials", credentials); + } + + List<String> regions = (List<String>) stage.getContext().get("regions"); + if (regions != null && !regions.isEmpty()) { + data.put("region", regions); + } + +
cacheService.forceCacheUpdate(cloudProvider, REFRESH_TYPE, data); + + return TaskResult.SUCCEEDED; } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/DeployCloudFormationTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/DeployCloudFormationTask.java index 05649c38b0..429e4218df 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/DeployCloudFormationTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/DeployCloudFormationTask.java @@ -15,7 +15,6 @@ */ package com.netflix.spinnaker.orca.clouddriver.tasks.providers.aws.cloudformation; -import com.fasterxml.jackson.databind.ObjectMapper; import com.google.common.collect.ImmutableMap; import com.google.common.io.CharStreams; import com.netflix.spinnaker.kork.artifacts.model.Artifact; @@ -32,6 +31,8 @@ import org.apache.commons.lang3.StringUtils; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Component; +import org.yaml.snakeyaml.Yaml; +import org.yaml.snakeyaml.parser.ParserException; import retrofit.client.Response; import javax.annotation.Nonnull; @@ -49,9 +50,6 @@ public class DeployCloudFormationTask extends AbstractCloudProviderAwareTask imp @Autowired OortService oortService; - @Autowired - ObjectMapper objectMapper; - @Autowired ArtifactResolver artifactResolver; @@ -80,9 +78,13 @@ public TaskResult execute(@Nonnull Stage stage) { try { String template = CharStreams.toString(new InputStreamReader(response.getBody().in())); log.debug("Fetched template from artifact {}: {}", artifact.getReference(), template); - task.put("templateBody", objectMapper.readValue(template, Map.class)); + // attempt to deserialize template body to a map. supports YAML or JSON formatted templates. 
+ Map templateBody = (Map) new Yaml().load(template); + task.put("templateBody", templateBody); } catch (IOException e) { throw new IllegalArgumentException("Failed to read template from artifact definition "+ artifact, e); + } catch (ParserException e) { + throw new IllegalArgumentException("Template body must be valid JSON or YAML.", e); } } @@ -111,7 +113,7 @@ public TaskResult execute(@Nonnull Stage stage) { .put("kato.last.task.id", taskId) .build(); - return new TaskResult(ExecutionStatus.SUCCEEDED, context); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(context).build(); } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/scalingprocess/AbstractAwsScalingProcessTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/scalingprocess/AbstractAwsScalingProcessTask.groovy index 1e7539ccc3..d64c0bd8b7 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/scalingprocess/AbstractAwsScalingProcessTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/scalingprocess/AbstractAwsScalingProcessTask.groovy @@ -89,9 +89,9 @@ abstract class AbstractAwsScalingProcessTask extends AbstractCloudProviderAwareT stageOutputs."kato.last.task.id" = taskId } - return new TaskResult(ExecutionStatus.SUCCEEDED, stageOutputs, [ + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(stageOutputs).outputs([ ("scalingProcesses.${asgName}" as String): stageContext.processes - ]) + ]).build() } static class StageData { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servicebroker/DeployServiceTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/AbstractCloudFoundryServiceTask.java similarity index 57% rename from 
orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servicebroker/DeployServiceTask.java rename to orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/AbstractCloudFoundryServiceTask.java index 07ef03bf36..8733f0dd37 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servicebroker/DeployServiceTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/AbstractCloudFoundryServiceTask.java @@ -1,11 +1,11 @@ /* - * Copyright 2018-Present Pivotal, Inc. + * Copyright 2019 Pivotal, Inc. * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at * - * http://www.apache.org/licenses/LICENSE-2.0 + * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -14,7 +14,7 @@ * limitations under the License. 
*/ -package com.netflix.spinnaker.orca.clouddriver.tasks.servicebroker; +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; import com.google.common.collect.ImmutableMap; import com.netflix.spinnaker.orca.ExecutionStatus; @@ -22,37 +22,40 @@ import com.netflix.spinnaker.orca.TaskResult; import com.netflix.spinnaker.orca.clouddriver.KatoService; import com.netflix.spinnaker.orca.clouddriver.model.TaskId; -import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask; +import com.netflix.spinnaker.orca.clouddriver.utils.CloudProviderAware; import com.netflix.spinnaker.orca.pipeline.model.Stage; import org.springframework.stereotype.Component; import javax.annotation.Nonnull; -import java.util.*; +import java.util.Collections; +import java.util.Map; +import java.util.Optional; @Component -public class DeployServiceTask extends AbstractCloudProviderAwareTask implements Task { +public abstract class AbstractCloudFoundryServiceTask implements CloudProviderAware, Task { + private KatoService kato; - private final KatoService kato; - - public DeployServiceTask(KatoService kato) { + public AbstractCloudFoundryServiceTask(KatoService kato) { this.kato = kato; } + protected abstract String getNotificationType(); + @Nonnull @Override public TaskResult execute(@Nonnull Stage stage) { String cloudProvider = getCloudProvider(stage); String account = getCredentials(stage); Map operation = new ImmutableMap.Builder() - .put("deployService", stage.getContext()) + .put(getNotificationType(), stage.getContext()) .build(); TaskId taskId = kato.requestOperations(cloudProvider, Collections.singletonList(operation)).toBlocking().first(); Map outputs = new ImmutableMap.Builder() - .put("notification.type", "deployService") + .put("notification.type", getNotificationType()) .put("kato.last.task.id", taskId) - .put("service.region", stage.getContext().get("region")) + .put("service.region", Optional.ofNullable(stage.getContext().get("region")).orElse("")) 
.put("service.account", account) .build(); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/MigrateLoadBalancerTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryCreateServiceKeyTask.java similarity index 61% rename from orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/MigrateLoadBalancerTask.java rename to orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryCreateServiceKeyTask.java index 48e949cd67..faaa10e369 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/MigrateLoadBalancerTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryCreateServiceKeyTask.java @@ -1,7 +1,7 @@ /* - * Copyright 2016 Netflix, Inc. + * Copyright 2019 Pivotal, Inc. * - * Licensed under the Apache License, Version 2.0 (the "License") + * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * @@ -14,16 +14,19 @@ * limitations under the License. 
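Editor's sketch (not part of the patch): the refactor above turns `DeployServiceTask` into an abstract `AbstractCloudFoundryServiceTask` where each subclass supplies only its notification type. A minimal pure-Java sketch of that template-method shape, with `KatoService` and `Stage` stubbed and all names illustrative:

```java
import java.util.*;

// Template-method sketch: the base class builds the single-entry Kato
// operation and the common outputs; subclasses contribute only the
// notification type string (e.g. "createServiceKey", "destroyService").
abstract class CloudFoundryServiceTaskSketch {
    protected abstract String getNotificationType();

    // Stand-in for the operation map submitted to clouddriver via KatoService.
    Map<String, Object> buildOperation(Map<String, Object> stageContext) {
        return Collections.singletonMap(getNotificationType(), (Object) stageContext);
    }

    Map<String, Object> buildOutputs(String region, String account) {
        Map<String, Object> outputs = new LinkedHashMap<>();
        outputs.put("notification.type", getNotificationType());
        // Mirrors the diff's Optional.ofNullable(region).orElse("") guard:
        outputs.put("service.region", region == null ? "" : region);
        outputs.put("service.account", account);
        return outputs;
    }
}

class DestroyServiceSketch extends CloudFoundryServiceTaskSketch {
    @Override protected String getNotificationType() { return "destroyService"; }
}
```

This is why the later hunks adding `CloudFoundryCreateServiceKeyTask`, `CloudFoundryDestroyServiceTask`, `CloudFoundryShareServiceTask`, and `CloudFoundryUnshareServiceTask` are each only a constructor plus one overridden method.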
*/ -package com.netflix.spinnaker.orca.clouddriver.tasks.loadbalancer; +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; -import com.netflix.spinnaker.orca.clouddriver.tasks.MigrateTask; +import com.netflix.spinnaker.orca.clouddriver.KatoService; import org.springframework.stereotype.Component; @Component -public class MigrateLoadBalancerTask extends MigrateTask { +public class CloudFoundryCreateServiceKeyTask extends AbstractCloudFoundryServiceTask { + public CloudFoundryCreateServiceKeyTask(KatoService kato) { + super(kato); + } @Override - public String getCloudOperationType() { - return "migrateLoadBalancer"; + protected String getNotificationType() { + return "createServiceKey"; } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDeployServiceTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDeployServiceTask.java new file mode 100644 index 0000000000..28b7c93f39 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDeployServiceTask.java @@ -0,0 +1,82 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.fasterxml.jackson.databind.DeserializationFeature; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.google.common.collect.ImmutableMap; +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import com.netflix.spinnaker.orca.clouddriver.model.TaskId; +import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver; +import lombok.RequiredArgsConstructor; +import org.jetbrains.annotations.NotNull; +import org.springframework.stereotype.Component; + +import javax.annotation.Nonnull; +import java.util.*; + +@RequiredArgsConstructor +@Component +public class CloudFoundryDeployServiceTask extends AbstractCloudProviderAwareTask { + private final KatoService kato; + private final ArtifactResolver artifactResolver; + private final ObjectMapper artifactMapper = new ObjectMapper() + .disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES); + + @Nonnull + @Override + public TaskResult execute(@Nonnull Stage stage) { + String cloudProvider = getCloudProvider(stage); + String account = getCredentials(stage); + Map context = bindArtifactIfNecessary(stage); + Map operation = new ImmutableMap.Builder() + .put("deployService", context) + .build(); + TaskId taskId = kato.requestOperations(cloudProvider, Collections.singletonList(operation)).toBlocking().first(); + Map outputs = new ImmutableMap.Builder() + .put("notification.type", "deployService") + .put("kato.last.task.id", taskId) + .put("service.region", stage.getContext().get("region")) + .put("service.account", account) + .build(); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); + } + 
+ @NotNull + private Map bindArtifactIfNecessary(@Nonnull Stage stage) { + Map context = stage.getContext(); + Map manifest = (Map) context.get("manifest"); + if(manifest.get("artifactId") != null || manifest.get("artifact") != null) { + Artifact artifact = manifest.get("artifact") != null ? + artifactMapper.convertValue(manifest.get("artifact"), Artifact.class) : + null; + Artifact boundArtifact = artifactResolver.getBoundArtifactForStage(stage, (String) manifest.get("artifactId"), artifact); + if(boundArtifact == null) { + throw new IllegalArgumentException("Unable to bind the service manifest artifact"); + } + manifest.remove("artifactId"); // replacing with the bound artifact now + //noinspection unchecked + manifest.put("artifact", artifactMapper.convertValue(boundArtifact, Map.class)); + } + return context; + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDestroyServiceTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDestroyServiceTask.java new file mode 100644 index 0000000000..07eef51fe8 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDestroyServiceTask.java @@ -0,0 +1,32 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
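Editor's sketch (not part of the patch): `bindArtifactIfNecessary` above rewrites the stage context's service manifest in place, swapping an `artifactId`/inline artifact for the resolved bound artifact. A hedged pure-Java sketch of that logic, with `ArtifactResolver` replaced by a stand-in function and all names illustrative:

```java
import java.util.*;
import java.util.function.BiFunction;

// Sketch of the manifest-binding step: when the manifest references an
// artifact (by id or inline), resolve it against the stage's bound
// artifacts and replace the reference with the resolved artifact map.
final class ManifestBinderSketch {
    static Map<String, Object> bind(
            Map<String, Object> manifest,
            BiFunction<String, Object, Map<String, Object>> resolver) {
        if (manifest.get("artifactId") == null && manifest.get("artifact") == null) {
            return manifest; // nothing to bind, pass the manifest through untouched
        }
        Map<String, Object> bound =
            resolver.apply((String) manifest.get("artifactId"), manifest.get("artifact"));
        if (bound == null) {
            throw new IllegalArgumentException("Unable to bind the service manifest artifact");
        }
        manifest.remove("artifactId"); // replaced by the bound artifact
        manifest.put("artifact", bound);
        return manifest;
    }
}
```

Failing fast with `IllegalArgumentException` when resolution returns null matches the diff's behavior and surfaces a clear error in the execution instead of a downstream NPE.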
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import org.springframework.stereotype.Component; + +@Component +public class CloudFoundryDestroyServiceTask extends AbstractCloudFoundryServiceTask { + public CloudFoundryDestroyServiceTask(KatoService kato) { + super(kato); + } + + @Override + protected String getNotificationType() { + return "destroyService"; + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryMonitorKatoServicesTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryMonitorKatoServicesTask.java new file mode 100644 index 0000000000..0fb978d584 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryMonitorKatoServicesTask.java @@ -0,0 +1,125 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.google.common.collect.ImmutableMap; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.RetryableTask; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import com.netflix.spinnaker.orca.clouddriver.model.Task; +import com.netflix.spinnaker.orca.clouddriver.model.TaskId; +import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import groovy.transform.CompileStatic; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; + +import javax.annotation.Nonnull; +import java.util.*; +import java.util.stream.Collectors; + +import static com.netflix.spinnaker.orca.ExecutionStatus.*; + +@Component +@CompileStatic +public class CloudFoundryMonitorKatoServicesTask extends AbstractCloudProviderAwareTask implements RetryableTask { + private KatoService kato; + + @Autowired + public CloudFoundryMonitorKatoServicesTask(KatoService kato) { + this.kato = kato; + } + + @Override + public long getBackoffPeriod() { + return 10 * 1000L; + } + + @Override + public long getTimeout() { + return 30 * 60 * 1000L; + } + + + @Nonnull + @Override + public TaskResult execute(@Nonnull Stage stage) { + TaskId taskId = stage.mapTo("/kato.last.task.id", TaskId.class); + List> katoTasks = Optional + .ofNullable((List>) stage.getContext().get("kato.tasks")) + .orElse(new ArrayList<>()); + Map stageContext = stage.getContext(); + + Task katoTask = kato.lookupTask(taskId.getId(), true).toBlocking().first(); + ExecutionStatus status = katoStatusToTaskStatus(katoTask); + List results = Optional + .ofNullable(katoTask.getResultObjects()) + .orElse(Collections.emptyList()); + + ImmutableMap.Builder katoTaskMapBuilder = new ImmutableMap.Builder() + .put("id", katoTask.getId()) + 
.put("status", katoTask.getStatus()) + .put("history", katoTask.getHistory()) + .put("resultObjects", results); + + ImmutableMap.Builder builder = new ImmutableMap.Builder() + .put("kato.last.task.id", taskId) + .put("kato.task.firstNotFoundRetry", -1L) + .put("kato.task.notFoundRetryCount", 0); + + switch (status) { + case TERMINAL: { + results.stream() + .filter(result -> "EXCEPTION".equals(result.get("type"))) + .findAny() + .ifPresent(e -> katoTaskMapBuilder.put("exception", e)); + break; + } + case SUCCEEDED: { + builder + .put("service.region", Optional.ofNullable(stageContext.get("region")).orElse("")) + .put("service.account", getCredentials(stage)) + .put("service.operation.type", results.get(0).get("type")) + .put("service.instance.name", results.get(0).get("serviceInstanceName")); + break; + } + default: + } + + katoTasks = katoTasks.stream() + .filter(task -> !katoTask.getId().equals(task.get("id"))) + .collect(Collectors.toList()); + katoTasks.add(katoTaskMapBuilder.build()); + builder.put("kato.tasks", katoTasks); + + return TaskResult.builder(status).context(builder.build()).build(); + } + + private static ExecutionStatus katoStatusToTaskStatus(Task katoTask) { + Task.Status katoStatus = katoTask.getStatus(); + if (katoStatus.isFailed()) { + return TERMINAL; + } else if (katoStatus.isCompleted()) { + List results = katoTask.getResultObjects(); + if (results != null && results.size() > 0) { + return SUCCEEDED; + } + } + return RUNNING; + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreator.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreator.groovy deleted file mode 100644 index 4e96805f92..0000000000 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreator.groovy +++ /dev/null @@ -1,77 +0,0 @@ -/* - * Copyright 2018 
Pivotal, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf - -import com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.ServerGroupCreator -import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger -import com.netflix.spinnaker.orca.pipeline.model.Stage -import groovy.util.logging.Slf4j -import org.springframework.stereotype.Component - -@Slf4j -@Component -class CloudFoundryServerGroupCreator implements ServerGroupCreator { - - boolean katoResultExpected = false - String cloudProvider = "cloudfoundry" - - @Override - List getOperations(Stage stage) { - def operation = [ - application: stage.context.application, - credentials: stage.context.account, - manifest: stage.context.manifest, - region: stage.context.region, - startApplication: stage.context.startApplication, - artifact: stage.context.artifact - ] - - stage.context.stack?.with { operation.stack = it } - stage.context.freeFormDetails?.with { operation.freeFormDetails = it } - - if(stage.execution.trigger instanceof JenkinsTrigger) { - JenkinsTrigger jenkins = stage.execution.trigger as JenkinsTrigger - def artifact = stage.context.artifact - if(artifact.type == 'trigger') { - operation.artifact = getArtifactFromJenkinsTrigger(jenkins, artifact.account, artifact.pattern) - } - def manifest = stage.context.manifest - if(manifest.type == 'trigger') { - operation.manifest = getArtifactFromJenkinsTrigger(jenkins, 
manifest.account, manifest.pattern) - } - } - - return [[(OPERATION): operation]] - } - - private Map getArtifactFromJenkinsTrigger(JenkinsTrigger jenkinsTrigger, String account, String regex) { - def matchingArtifact = jenkinsTrigger.buildInfo.artifacts.find { it.fileName ==~ regex } - if(!matchingArtifact) { - throw new IllegalStateException("No Jenkins artifacts matched the pattern '${regex}'.") - } - return [ - type: 'artifact', - account: account, - reference: jenkinsTrigger.buildInfo.url + 'artifact/' + matchingArtifact.relativePath - ] - } - - @Override - Optional getHealthProviderName() { - return Optional.empty() - } -} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreator.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreator.java new file mode 100644 index 0000000000..e15c2ebc41 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreator.java @@ -0,0 +1,105 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
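Editor's sketch (not part of the patch): `katoStatusToTaskStatus` in the `CloudFoundryMonitorKatoServicesTask` hunk above maps a Kato task's state to an execution status. A hedged pure-Java sketch with the Kato status flags flattened to booleans and the enum standing in for `ExecutionStatus`:

```java
import java.util.*;

// Sketch of katoStatusToTaskStatus: a failed Kato task is TERMINAL, a
// completed task counts as SUCCEEDED only once result objects have
// appeared, and anything else stays RUNNING so the RetryableTask polls
// again (every 10s, up to the 30-minute timeout in the diff).
enum ExecStatus { TERMINAL, SUCCEEDED, RUNNING }

final class KatoStatusMapper {
    static ExecStatus map(boolean failed, boolean completed, List<?> resultObjects) {
        if (failed) {
            return ExecStatus.TERMINAL;
        }
        if (completed && resultObjects != null && !resultObjects.isEmpty()) {
            return ExecStatus.SUCCEEDED;
        }
        return ExecStatus.RUNNING;
    }
}
```

Requiring non-empty result objects before reporting success avoids a race where the Kato task is marked complete before its results (operation type, service instance name) are available to populate the stage outputs.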
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.fasterxml.jackson.databind.ObjectMapper; +import com.google.common.collect.ImmutableMap; +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.ServerGroupCreator; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver; +import lombok.Data; +import lombok.extern.slf4j.Slf4j; +import org.springframework.stereotype.Component; + +import javax.annotation.Nullable; +import java.util.*; + +@Slf4j +@Component +class CloudFoundryServerGroupCreator implements ServerGroupCreator { + private final ObjectMapper mapper; + private final ArtifactResolver artifactResolver; + + CloudFoundryServerGroupCreator(ObjectMapper mapper, ArtifactResolver artifactResolver) { + this.mapper = mapper; + this.artifactResolver = artifactResolver; + } + + @Override + public List getOperations(Stage stage) { + Map context = stage.getContext(); + ImmutableMap.Builder operation = ImmutableMap.builder() + .put("application", context.get("application")) + .put("credentials", context.get("account")) + .put("startApplication", context.get("startApplication")) + .put("region", context.get("region")) + .put("applicationArtifact", applicationArtifact(stage, context.get("applicationArtifact"))) + .put("manifest", manifestArtifact(stage, context.get("manifest"))); + + if (context.get("stack") != null) { + operation.put("stack", context.get("stack")); + } + + if (context.get("freeFormDetails") != null) { + operation.put("freeFormDetails", context.get("freeFormDetails")); + } + + return Collections.singletonList(ImmutableMap.builder() + .put(OPERATION, operation.build()) + .build()); + } + + private Artifact applicationArtifact(Stage stage, Object input) { + ApplicationArtifact applicationArtifactInput = mapper.convertValue(input, ApplicationArtifact.class); + Artifact artifact = 
artifactResolver.getBoundArtifactForStage(stage, applicationArtifactInput.getArtifactId(), + applicationArtifactInput.getArtifact()); + if(artifact == null) { + throw new IllegalArgumentException("Unable to bind the application artifact"); + } + + return artifact; + } + + private Artifact manifestArtifact(Stage stage, Object input) { + return mapper.convertValue(input, Manifest.class).toArtifact(artifactResolver, stage); + } + + @Override + public boolean isKatoResultExpected() { + return false; + } + + @Override + public String getCloudProvider() { + return "cloudfoundry"; + } + + @Override + public Optional getHealthProviderName() { + return Optional.empty(); + } + + @Data + private static class ApplicationArtifact { + @Nullable + private String artifactId; + + @Nullable + private Artifact artifact; + } +} \ No newline at end of file diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryShareServiceTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryShareServiceTask.java new file mode 100644 index 0000000000..668e5f6409 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryShareServiceTask.java @@ -0,0 +1,32 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import org.springframework.stereotype.Component; + +@Component +public class CloudFoundryShareServiceTask extends AbstractCloudFoundryServiceTask { + public CloudFoundryShareServiceTask(KatoService kato) { + super(kato); + } + + @Override + protected String getNotificationType() { + return "shareService"; + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryUnshareServiceTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryUnshareServiceTask.java new file mode 100644 index 0000000000..36fc3187ce --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryUnshareServiceTask.java @@ -0,0 +1,32 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import org.springframework.stereotype.Component; + +@Component +public class CloudFoundryUnshareServiceTask extends AbstractCloudFoundryServiceTask { + public CloudFoundryUnshareServiceTask(KatoService kato) { + super(kato); + } + + @Override + protected String getNotificationType() { + return "unshareService"; + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDeployServiceTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDeployServiceTask.java new file mode 100644 index 0000000000..ed0c5d1906 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDeployServiceTask.java @@ -0,0 +1,52 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.RetryableTask; +import com.netflix.spinnaker.orca.clouddriver.OortService; +import com.netflix.spinnaker.orca.clouddriver.tasks.servicebroker.AbstractWaitForServiceTask; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; + +import java.util.Map; +import java.util.Optional; + +@Component +public class CloudFoundryWaitForDeployServiceTask extends AbstractWaitForServiceTask { + @Autowired + public CloudFoundryWaitForDeployServiceTask(OortService oortService) { + super(oortService); + } + + protected ExecutionStatus oortStatusToTaskStatus(Map m) { + return Optional.ofNullable(m) + .map(myMap -> { + String state = Optional.ofNullable(myMap.get("status")).orElse("").toString(); + switch (state) { + case "FAILED": + return ExecutionStatus.TERMINAL; + case "SUCCEEDED": + return ExecutionStatus.SUCCEEDED; + case "IN_PROGRESS": + default: + return ExecutionStatus.RUNNING; + } + }) + .orElse(ExecutionStatus.TERMINAL); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDestroyServiceTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDestroyServiceTask.java new file mode 100644 index 0000000000..7d329bfbbd --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDestroyServiceTask.java @@ -0,0 +1,50 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.RetryableTask; +import com.netflix.spinnaker.orca.clouddriver.OortService; +import com.netflix.spinnaker.orca.clouddriver.tasks.servicebroker.AbstractWaitForServiceTask; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; + +import java.util.Map; +import java.util.Optional; + +@Component +public class CloudFoundryWaitForDestroyServiceTask extends AbstractWaitForServiceTask { + @Autowired + public CloudFoundryWaitForDestroyServiceTask(OortService oortService) { + super(oortService); + } + + protected ExecutionStatus oortStatusToTaskStatus(Map m) { + return Optional.ofNullable(m).map( + myMap -> { + String state = Optional.ofNullable(myMap.get("status")).orElse("").toString(); + switch (state) { + case "FAILED": + return ExecutionStatus.TERMINAL; + case "IN_PROGRESS": + default: + return ExecutionStatus.RUNNING; + } + } + ).orElse(ExecutionStatus.SUCCEEDED); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/Manifest.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/Manifest.java new file mode 100644 index 0000000000..881db3f76c --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/Manifest.java @@ -0,0 +1,141 @@ +/* + * Copyright 2019 Pivotal, Inc. 
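Editor's sketch (not part of the patch): the two wait-for tasks above share an `oortStatusToTaskStatus` shape but differ in how they treat a missing service. A hedged pure-Java sketch of both mappings, with the enum standing in for `ExecutionStatus` and all names illustrative:

```java
import java.util.*;

// Sketch of the two oortStatusToTaskStatus mappings. The notable design
// choice: when polling after a destroy, a missing service (null map)
// means the delete finished, so the default is SUCCEEDED; when polling
// after a deploy, a missing service is a failure, so the default is
// TERMINAL. An unrecognized or IN_PROGRESS status keeps the task RUNNING.
enum WaitStatus { TERMINAL, SUCCEEDED, RUNNING }

final class ServiceWaitSketch {
    static WaitStatus afterDeploy(Map<String, Object> service) {
        if (service == null) return WaitStatus.TERMINAL;
        String state = String.valueOf(service.getOrDefault("status", ""));
        switch (state) {
            case "FAILED":    return WaitStatus.TERMINAL;
            case "SUCCEEDED": return WaitStatus.SUCCEEDED;
            default:          return WaitStatus.RUNNING; // includes IN_PROGRESS
        }
    }

    static WaitStatus afterDestroy(Map<String, Object> service) {
        if (service == null) return WaitStatus.SUCCEEDED; // gone = destroyed
        String state = String.valueOf(service.getOrDefault("status", ""));
        return "FAILED".equals(state) ? WaitStatus.TERMINAL : WaitStatus.RUNNING;
    }
}
```

Note the destroy mapping never returns SUCCEEDED for a present service: as long as the service still exists, even in a SUCCEEDED provisioning state, the destroy is not done.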
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf;
+
+import com.fasterxml.jackson.annotation.JsonAlias;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.databind.PropertyNamingStrategy;
+import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;
+import com.fasterxml.jackson.dataformat.yaml.YAMLGenerator;
+import com.google.common.collect.ImmutableMap;
+import com.netflix.spinnaker.kork.artifacts.model.Artifact;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver;
+import lombok.Data;
+import lombok.Getter;
+import lombok.Setter;
+
+import javax.annotation.Nullable;
+import java.util.Base64;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+@Setter
+public class Manifest {
+  @Nullable
+  private Direct direct;
+
+  @Nullable
+  private String artifactId;
+
+  @Nullable
+  private Artifact artifact;
+
+  public Artifact toArtifact(ArtifactResolver artifactResolver, Stage stage) {
+    if (direct != null) {
+      return Artifact.builder()
+        .name("manifest")
+        .type("embedded/base64")
+        .artifactAccount("embedded-artifact")
+        .reference(Base64.getEncoder().encodeToString(direct.toManifestYml().getBytes()))
+        .build();
+    }
+
+    Artifact boundArtifact = artifactResolver.getBoundArtifactForStage(stage, artifactId, this.artifact);
+    if (boundArtifact == null) {
+      throw new IllegalArgumentException("Unable to bind the manifest artifact");
+    }
+    return boundArtifact;
+  }
+
+  public static class Direct {
+    private static ObjectMapper manifestMapper = new ObjectMapper(new YAMLFactory()
+      .enable(YAMLGenerator.Feature.MINIMIZE_QUOTES)
+      .enable(YAMLGenerator.Feature.INDENT_ARRAYS))
+      .setPropertyNamingStrategy(PropertyNamingStrategy.KEBAB_CASE);
+
+    @Getter
+    private final String name = "app"; // doesn't matter, has no effect on the CF app name.
+
+    @Getter
+    @Setter
+    private List<String> buildpacks;
+
+    @Getter
+    @Setter
+    @JsonProperty("disk_quota")
+    @JsonAlias("diskQuota")
+    private String diskQuota;
+
+    @Getter
+    @Setter
+    private String healthCheckType;
+
+    @Getter
+    @Setter
+    private String healthCheckHttpEndpoint;
+
+    @Getter
+    private Map<String, String> env;
+
+    public void setEnvironment(List<EnvironmentVariable> environment) {
+      this.env = environment.stream().collect(Collectors.toMap(EnvironmentVariable::getKey, EnvironmentVariable::getValue));
+    }
+
+    @Setter
+    private List<String> routes;
+
+    public List<Map<String, String>> getRoutes() {
+      return routes.stream()
+        .map(r -> ImmutableMap.<String, String>builder().put("route", r).build())
+        .collect(Collectors.toList());
+    }
+
+    @Getter
+    @Setter
+    private List<String> services;
+
+    @Getter
+    @Setter
+    private Integer instances;
+
+    @Getter
+    @Setter
+    private String memory;
+
+    String toManifestYml() {
+      try {
+        Map<String, List<Direct>> apps = ImmutableMap.<String, List<Direct>>builder()
+          .put("applications", Collections.singletonList(this))
+          .build();
+        return manifestMapper.writeValueAsString(apps);
+      } catch (JsonProcessingException e) {
+        throw new IllegalArgumentException("Unable to generate Cloud Foundry Manifest", e);
+      }
+    }
+  }
+
+  @Data
+  public static class EnvironmentVariable {
+    private String key;
+    private String value;
+  }
+}
diff --git
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/ecs/EcsServerGroupCreator.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/ecs/EcsServerGroupCreator.groovy index 8ad53de4c7..6e1fa12a03 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/ecs/EcsServerGroupCreator.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/ecs/EcsServerGroupCreator.groovy @@ -18,6 +18,8 @@ package com.netflix.spinnaker.orca.clouddriver.tasks.providers.ecs import com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.ServerGroupCreator import com.netflix.spinnaker.orca.kato.tasks.DeploymentDetailsAware +import com.netflix.spinnaker.orca.pipeline.model.DockerTrigger +import com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType import com.netflix.spinnaker.orca.pipeline.model.Stage import groovy.util.logging.Slf4j import org.springframework.stereotype.Component @@ -41,13 +43,62 @@ class EcsServerGroupCreator implements ServerGroupCreator, DeploymentDetailsAwar operation.credentials = operation.account } - def bakeStage = getPreviousStageWithImage(stage, operation.region, cloudProvider) + def imageDescription = (Map) operation.imageDescription - if (bakeStage) { - operation.put('dockerImageAddress', bakeStage.context.amiDetails.imageId.value.get(0).toString()) + if (imageDescription) { + if (imageDescription.fromContext) { + if (stage.execution.type == ExecutionType.ORCHESTRATION) { + // Use image from specific "find image from tags" stage + def imageStage = getAncestors(stage, stage.execution).find { + it.refId == imageDescription.stageId && it.context.containsKey("amiDetails") + } + + if (!imageStage) { + throw new IllegalStateException("No image stage found in context for $imageDescription.imageLabelOrSha.") + } + + imageDescription.imageId = 
imageStage.context.amiDetails.imageId.value.get(0).toString() + } + } + + if (imageDescription.fromTrigger) { + if (stage.execution.type == ExecutionType.PIPELINE) { + def trigger = stage.execution.trigger + + if (trigger instanceof DockerTrigger && trigger.account == imageDescription.account && trigger.repository == imageDescription.repository) { + imageDescription.tag = trigger.tag + } + + imageDescription.imageId = buildImageId(imageDescription.registry, imageDescription.repository, imageDescription.tag) + } + + if (!imageDescription.tag) { + throw new IllegalStateException("No tag found for image ${imageDescription.registry}/${imageDescription.repository} in trigger context.") + } + } + + if (!imageDescription.imageId) { + imageDescription.imageId = buildImageId(imageDescription.registry, imageDescription.repository, imageDescription.tag) + } + + operation.dockerImageAddress = imageDescription.imageId + } else if (!operation.dockerImageAddress) { + // Fall back to previous behavior: use image from any previous "find image from tags" stage by default + def bakeStage = getPreviousStageWithImage(stage, operation.region, cloudProvider) + + if (bakeStage) { + operation.dockerImageAddress = bakeStage.context.amiDetails.imageId.value.get(0).toString() + } } return [[(ServerGroupCreator.OPERATION): operation]] } + static String buildImageId(Object registry, Object repo, Object tag) { + if (registry) { + return "$registry/$repo:$tag" + } else { + return "$repo:$tag" + } + } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/SetStatefulDiskTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/SetStatefulDiskTask.java new file mode 100644 index 0000000000..733366de67 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/SetStatefulDiskTask.java @@ -0,0 +1,97 @@ +/* + * + * * Copyright 2019 Google, Inc. 
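The `buildImageId` helper added to `EcsServerGroupCreator` above simply prepends the registry when one is set; Groovy truthiness (`if (registry)`) treats both null and the empty string as "no registry". A standalone Java sketch of the equivalent logic:

```java
public class ImageIdSketch {
    // Mirrors EcsServerGroupCreator.buildImageId: the registry prefix is optional.
    // The null/empty check stands in for Groovy's truthiness on `if (registry)`.
    static String buildImageId(String registry, String repo, String tag) {
        if (registry != null && !registry.isEmpty()) {
            return registry + "/" + repo + ":" + tag;
        }
        return repo + ":" + tag;
    }

    public static void main(String[] args) {
        if (!buildImageId("my.registry.example", "app", "1.0").equals("my.registry.example/app:1.0")) {
            throw new AssertionError();
        }
        if (!buildImageId(null, "app", "1.0").equals("app:1.0")) {
            throw new AssertionError();
        }
        System.out.println("ok");
    }
}
```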
+ * * + * * Licensed under the Apache License, Version 2.0 (the "License") + * * you may not use this file except in compliance with the License. + * * You may obtain a copy of the License at + * * + * * http://www.apache.org/licenses/LICENSE-2.0 + * * + * * Unless required by applicable law or agreed to in writing, software + * * distributed under the License is distributed on an "AS IS" BASIS, + * * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * * See the License for the specific language governing permissions and + * * limitations under the License. + * + * + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.gce; + +import static com.google.common.base.Preconditions.checkArgument; +import static com.google.common.base.Preconditions.checkState; + +import com.google.common.collect.ImmutableList; +import com.google.common.collect.ImmutableMap; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import com.netflix.spinnaker.orca.clouddriver.model.TaskId; +import com.netflix.spinnaker.orca.clouddriver.pipeline.providers.gce.SetStatefulDiskStage.StageData; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.TargetServerGroup; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.TargetServerGroupResolver; +import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import java.util.List; +import java.util.Map; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; +import rx.Observable; + +@Component +public class SetStatefulDiskTask extends AbstractCloudProviderAwareTask { + + private static final String KATO_OP_NAME = "setStatefulDisk"; + + private final KatoService katoService; + private final TargetServerGroupResolver 
resolver;
+
+  @Autowired
+  public SetStatefulDiskTask(KatoService katoService, TargetServerGroupResolver resolver) {
+    this.katoService = katoService;
+    this.resolver = resolver;
+  }
+
+  @Override
+  public TaskResult execute(Stage stage) {
+
+    StageData data = stage.mapTo(StageData.class);
+
+    List<TargetServerGroup> resolvedServerGroups = resolver.resolve(stage);
+    checkArgument(
+        resolvedServerGroups.size() > 0,
+        "Could not find a server group named %s for %s in %s",
+        data.serverGroupName,
+        data.accountName,
+        data.region);
+    checkState(
+        resolvedServerGroups.size() == 1,
+        "Found multiple server groups named %s for %s in %s",
+        data.serverGroupName,
+        data.accountName,
+        data.region);
+    TargetServerGroup serverGroup = resolvedServerGroups.get(0);
+
+    ImmutableMap<String, Object> opData =
+        ImmutableMap.of(
+            "credentials", getCredentials(stage),
+            "serverGroupName", serverGroup.getName(),
+            "region", data.getRegion(),
+            "deviceName", data.deviceName);
+
+    Map<String, Object> operation = ImmutableMap.of(KATO_OP_NAME, opData);
+    Observable<TaskId> observable =
+        katoService.requestOperations(getCloudProvider(stage), ImmutableList.of(operation));
+    observable.toBlocking().first();
+
+    ImmutableMap<String, ImmutableList<String>> modifiedServerGroups =
+        ImmutableMap.of(data.getRegion(), ImmutableList.of(serverGroup.getName()));
+    ImmutableMap<String, Object> context =
+        ImmutableMap.of(
+            "notification.type", KATO_OP_NAME.toLowerCase(),
+            "serverGroupName", serverGroup.getName(),
+            "deploy.server.groups", modifiedServerGroups);
+    return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(context).build();
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/autoscaling/UpsertGceAutoscalingPolicyTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/autoscaling/UpsertGceAutoscalingPolicyTask.java index e894d6dc3c..a0f96207d8 100644
---
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/autoscaling/UpsertGceAutoscalingPolicyTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/autoscaling/UpsertGceAutoscalingPolicyTask.java @@ -93,6 +93,6 @@ public TaskResult execute(Stage stage) { .first(); stageOutputs.put("kato.last.task.id", taskId); - return new TaskResult(ExecutionStatus.SUCCEEDED, stageOutputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(stageOutputs).build(); } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesContainerFinder.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesContainerFinder.groovy index 9f17bd8cb4..24b9217ece 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesContainerFinder.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesContainerFinder.groovy @@ -65,7 +65,7 @@ class KubernetesContainerFinder { } containers.forEach { container -> - if (container.imageDescription.fromContext) { + if (container?.imageDescription?.fromContext) { def image = deploymentDetails.find { // stageId is used here to match the source of the image to the find image stage specified by the user. 
// Previously, this was done by matching the pattern used to the pattern selected in the deploy stage, but @@ -83,7 +83,7 @@ class KubernetesContainerFinder { throw new IllegalStateException("No image found in context for pattern $container.imageDescription.pattern.") } - if (container.imageDescription.fromTrigger) { + if (container?.imageDescription?.fromTrigger) { if (stage.execution.type == PIPELINE) { def trigger = stage.execution.trigger @@ -92,12 +92,12 @@ class KubernetesContainerFinder { } } - if (!container.imageDescription.tag) { + if (!container?.imageDescription?.tag) { throw new IllegalStateException("No tag found for image ${container.imageDescription.registry}/${container.imageDescription.repository} in trigger context.") } } - if (container.imageDescription.fromArtifact) { + if (container?.imageDescription?.fromArtifact) { def resolvedArtifact = artifactResolver.getBoundArtifactForId(stage, container.imageDescription.artifactId) container.imageDescription.uri = resolvedArtifact.reference } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesJobRunner.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesJobRunner.groovy deleted file mode 100644 index 05a272f9ba..0000000000 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesJobRunner.groovy +++ /dev/null @@ -1,57 +0,0 @@ -/* - * Copyright 2016 Google, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License") - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package com.netflix.spinnaker.orca.clouddriver.tasks.providers.kubernetes - -import com.netflix.spinnaker.orca.clouddriver.tasks.job.JobRunner -import com.netflix.spinnaker.orca.pipeline.model.Stage -import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver -import groovy.util.logging.Slf4j -import org.springframework.beans.factory.annotation.Autowired -import org.springframework.stereotype.Component - -@Slf4j -@Component -class KubernetesJobRunner implements JobRunner { - - boolean katoResultExpected = false - String cloudProvider = "kubernetes" - - @Autowired - ArtifactResolver artifactResolver - - @Override - List getOperations(Stage stage) { - def operation = [:] - - // If this stage was synthesized by a parallel deploy stage, the operation properties will be under 'cluster'. - if (stage.context.containsKey("cluster")) { - operation.putAll(stage.context.cluster as Map) - } else { - operation.putAll(stage.context) - } - - KubernetesContainerFinder.populateFromStage(operation, stage, artifactResolver) - - return [[(OPERATION): operation]] - } - - @Override - Map getAdditionalOutputs(Stage stage, List operations) { - return [:] - } -} - diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesJobRunner.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesJobRunner.java new file mode 100644 index 0000000000..ffbe2b291c --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/KubernetesJobRunner.java @@ -0,0 +1,79 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.tasks.providers.kubernetes;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.netflix.spinnaker.orca.clouddriver.tasks.job.JobRunner;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver;
+import lombok.Data;
+import org.springframework.stereotype.Component;
+
+import java.util.*;
+
+@Component
+@Data
+public class KubernetesJobRunner implements JobRunner {
+
+  private boolean katoResultExpected = false;
+  private String cloudProvider = "kubernetes";
+
+  private ArtifactResolver artifactResolver;
+  private ObjectMapper objectMapper;
+
+  public KubernetesJobRunner(ArtifactResolver artifactResolver, ObjectMapper objectMapper) {
+    this.artifactResolver = artifactResolver;
+    this.objectMapper = objectMapper;
+  }
+
+  public List<Map> getOperations(Stage stage) {
+    Map<String, Object> operation = new HashMap<>();
+
+    if (stage.getContext().containsKey("cluster")) {
+      operation.putAll((Map) stage.getContext().get("cluster"));
+    } else {
+      operation.putAll(stage.getContext());
+    }
+
+    KubernetesContainerFinder.populateFromStage(operation, stage, artifactResolver);
+
+    Map<String, Object> task = new HashMap<>();
+    task.put(OPERATION, operation);
+    return Collections.singletonList(task);
+  }
+
+  public Map<String, Object> getAdditionalOutputs(Stage stage, List<Map> operations) {
+    Map<String, Object> outputs = new HashMap<>();
+    Map<String, Object> execution = new HashMap<>();
+
+    // if the manifest contains the template annotation put it into the context
+    if (stage.getContext().containsKey("manifest")) {
+      Manifest manifest = objectMapper.convertValue(stage.getContext().get("manifest"), Manifest.class);
+      String logTemplate = ManifestAnnotationExtractor.logs(manifest);
+      if (logTemplate != null) {
+        execution.put("logs", logTemplate);
+        outputs.put("execution", execution);
+      }
+    }
+
+    return outputs;
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/Manifest.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/Manifest.java
new file mode 100644
index 0000000000..959cd6bc01
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/Manifest.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright 2019 Armory
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.tasks.providers.kubernetes;
+
+import lombok.Data;
+
+import java.util.HashMap;
+import java.util.Map;
+
+@Data
+public class Manifest {
+  public ManifestMeta metadata = new ManifestMeta();
+
+  @Data
+  public static class ManifestMeta {
+    Map<String, String> annotations = new HashMap<>();
+  }
+}
diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/ManifestAnnotationExtractor.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/ManifestAnnotationExtractor.java
new file mode 100644
index 0000000000..7a17476fa8
--- /dev/null
+++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/kubernetes/ManifestAnnotationExtractor.java
@@ -0,0 +1,28 @@
+/*
+ * Copyright 2019 Armory
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.kubernetes; + +public class ManifestAnnotationExtractor { + private static final String LOG_TEMPLATE_ANNOTATION = "job.spinnaker.io/logs"; + + public static String logs(Manifest manifest) { + if (!manifest.getMetadata().getAnnotations().containsKey(LOG_TEMPLATE_ANNOTATION)) { + return null; + } + return manifest.getMetadata().getAnnotations().get(LOG_TEMPLATE_ANNOTATION); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackSecurityGroupUpserter.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackSecurityGroupUpserter.groovy deleted file mode 100644 index c2b1f127d1..0000000000 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackSecurityGroupUpserter.groovy +++ /dev/null @@ -1,64 +0,0 @@ -/* - * Copyright 2016 Target, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License") - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
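The `ManifestAnnotationExtractor` added above is essentially a guarded lookup of the `job.spinnaker.io/logs` key in the manifest's metadata annotations. A standalone sketch, passing the annotations map directly rather than going through the `Manifest` class; note that since annotation values are plain strings, `containsKey` followed by `get` collapses to a single `get`, which already yields null for an absent key:

```java
import java.util.HashMap;
import java.util.Map;

public class LogAnnotationSketch {
    static final String LOG_TEMPLATE_ANNOTATION = "job.spinnaker.io/logs";

    // Mirrors ManifestAnnotationExtractor.logs: returns the log template, or null if unset.
    static String logs(Map<String, String> annotations) {
        return annotations.get(LOG_TEMPLATE_ANNOTATION);
    }

    public static void main(String[] args) {
        Map<String, String> annotations = new HashMap<>();
        if (logs(annotations) != null) throw new AssertionError();

        // hypothetical template value for illustration
        annotations.put(LOG_TEMPLATE_ANNOTATION, "http://logs.example/{{job}}");
        if (!"http://logs.example/{{job}}".equals(logs(annotations))) throw new AssertionError();
        System.out.println("ok");
    }
}
```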
- */ - -package com.netflix.spinnaker.orca.clouddriver.tasks.providers.openstack - -import com.netflix.spinnaker.orca.clouddriver.MortService -import com.netflix.spinnaker.orca.clouddriver.tasks.securitygroup.SecurityGroupUpserter -import com.netflix.spinnaker.orca.clouddriver.utils.CloudProviderAware -import com.netflix.spinnaker.orca.pipeline.model.Stage -import org.springframework.beans.factory.annotation.Autowired -import org.springframework.stereotype.Component -import retrofit.RetrofitError - -@Component -class OpenstackSecurityGroupUpserter implements SecurityGroupUpserter, CloudProviderAware { - - final String cloudProvider = 'openstack' - - @Autowired - MortService mortService - - @Override - SecurityGroupUpserter.OperationContext getOperationContext(Stage stage) { - def ops = [[(SecurityGroupUpserter.OPERATION): stage.context]] - - def targets = [ - new MortService.SecurityGroup(name: stage.context.securityGroupName, - region: stage.context.region, - accountName: getCredentials(stage)) - ] - - return new SecurityGroupUpserter.OperationContext(ops, [targets: targets]) - } - - @Override - boolean isSecurityGroupUpserted(MortService.SecurityGroup upsertedSecurityGroup, Stage stage) { - try { - return mortService.getSecurityGroup(upsertedSecurityGroup.accountName, - cloudProvider, - upsertedSecurityGroup.name, - upsertedSecurityGroup.region) - - } catch (RetrofitError e) { - if (e.response?.status != 404) { - throw e - } - } - return false - } - -} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackServerGroupCreator.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackServerGroupCreator.groovy deleted file mode 100644 index 192cb43d54..0000000000 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackServerGroupCreator.groovy +++ /dev/null @@ -1,69 +0,0 @@ -/* - * 
Copyright 2016 Target, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package com.netflix.spinnaker.orca.clouddriver.tasks.providers.openstack - -import com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.ServerGroupCreator -import com.netflix.spinnaker.orca.kato.tasks.DeploymentDetailsAware -import com.netflix.spinnaker.orca.pipeline.model.Stage -import groovy.util.logging.Slf4j -import org.springframework.stereotype.Component - -/** - * Create an openstack createServerGroup operation, potentially injecting the image to use - * from the pipeline context. - */ -@Slf4j -@Component -class OpenstackServerGroupCreator implements ServerGroupCreator, DeploymentDetailsAware { - - boolean katoResultExpected = false - String cloudProvider = 'openstack' - - @Override - List getOperations(Stage stage) { - def operation = [:] - - // If this stage was synthesized by a parallel deploy stage, the operation properties will be under 'cluster'. 
- if (stage.context.containsKey("cluster")) { - operation.putAll(stage.context.cluster as Map) - } else { - operation.putAll(stage.context) - } - - //let's not throw NPE's here, even if the request is invalid - operation.serverGroupParameters = operation.serverGroupParameters ?: [:] - - withImageFromPrecedingStage(stage, operation.region, cloudProvider) { - operation.serverGroupParameters.image = operation.serverGroupParameters.image ?: it.imageId - } - - withImageFromDeploymentDetails(stage, operation.region, cloudProvider) { - operation.serverGroupParameters.image = operation.serverGroupParameters.image ?: it.imageId - } - - if (!operation.serverGroupParameters.image) { - throw new IllegalStateException("No image could be found in ${stage.context.region}.") - } - - return [[(OPERATION): operation]] - } - - @Override - Optional getHealthProviderName() { - return Optional.of("Openstack") - } -} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/README.md b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/README.md deleted file mode 100644 index d9088f0e01..0000000000 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/README.md +++ /dev/null @@ -1 +0,0 @@ -Only the tasks which are specific to Openstack should belong to this package diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/DeleteScalingPolicyTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/DeleteScalingPolicyTask.groovy index fa00d9c1bc..06efcfd853 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/DeleteScalingPolicyTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/DeleteScalingPolicyTask.groovy @@ -37,10 +37,10 @@ class 
DeleteScalingPolicyTask extends AbstractCloudProviderAwareTask implements .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "deploy.account.name" : stage.context.credentials, "kato.last.task.id" : taskId, "deploy.server.groups": [(stage.context.region): [stage.context.serverGroupName]] - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/UpsertScalingPolicyTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/UpsertScalingPolicyTask.groovy index d155a81b2a..1d4e933670 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/UpsertScalingPolicyTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/UpsertScalingPolicyTask.groovy @@ -16,31 +16,42 @@ package com.netflix.spinnaker.orca.clouddriver.tasks.scalingpolicy +import java.util.concurrent.TimeUnit import com.netflix.spinnaker.orca.ExecutionStatus -import com.netflix.spinnaker.orca.Task +import com.netflix.spinnaker.orca.RetryableTask import com.netflix.spinnaker.orca.TaskResult import com.netflix.spinnaker.orca.clouddriver.KatoService import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask import com.netflix.spinnaker.orca.pipeline.model.Stage import org.springframework.beans.factory.annotation.Autowired import org.springframework.stereotype.Component +import groovy.util.logging.Slf4j @Component -class UpsertScalingPolicyTask extends AbstractCloudProviderAwareTask implements Task { +@Slf4j +class UpsertScalingPolicyTask extends AbstractCloudProviderAwareTask implements RetryableTask { @Autowired KatoService kato + long backoffPeriod = TimeUnit.SECONDS.toMillis(5) + long timeout = TimeUnit.SECONDS.toMillis(100) + @Override TaskResult execute(Stage stage) { - def taskId = 
kato.requestOperations(getCloudProvider(stage), [[upsertScalingPolicy: stage.context]]) + try { + def taskId = kato.requestOperations(getCloudProvider(stage), [[upsertScalingPolicy: stage.context]]) .toBlocking() .first() - - new TaskResult(ExecutionStatus.SUCCEEDED, [ + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "deploy.account.name" : stage.context.credentials, "kato.last.task.id" : taskId, "deploy.server.groups": [(stage.context.region): [stage.context.serverGroupName]] - ]) + ]).build() + } + catch (Exception e) { + log.error("Failed upsertScalingPolicy task (stageId: ${stage.id}, executionId: ${stage.execution.id})", e) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) + } } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/DeleteSecurityGroupForceRefreshTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/DeleteSecurityGroupForceRefreshTask.groovy index 49d8cfe0c1..31fa49dfb9 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/DeleteSecurityGroupForceRefreshTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/DeleteSecurityGroupForceRefreshTask.groovy @@ -45,6 +45,6 @@ class DeleteSecurityGroupForceRefreshTask extends AbstractCloudProviderAwareTask def model = [securityGroupName: name, vpcId: vpcId, region: region, account: account, evict: true] cacheService.forceCacheUpdate(cloudProvider, REFRESH_TYPE, model) } - new TaskResult(ExecutionStatus.SUCCEEDED) + TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/DeleteSecurityGroupTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/DeleteSecurityGroupTask.groovy index 08ba74be65..93b8374cc5 100644 --- 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/DeleteSecurityGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/DeleteSecurityGroupTask.groovy @@ -50,6 +50,6 @@ class DeleteSecurityGroupTask extends AbstractCloudProviderAwareTask implements if (stage.context.vpcId) { outputs["delete.vpcId"] = stage.context.vpcId } - new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/SecurityGroupForceCacheRefreshTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/SecurityGroupForceCacheRefreshTask.groovy index 56f27517c4..f2b188d804 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/SecurityGroupForceCacheRefreshTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/SecurityGroupForceCacheRefreshTask.groovy @@ -42,6 +42,6 @@ public class SecurityGroupForceCacheRefreshTask extends AbstractCloudProviderAwa ) } - new TaskResult(ExecutionStatus.SUCCEEDED) + TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/UpsertSecurityGroupTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/UpsertSecurityGroupTask.groovy index bd1ff21763..5532896ae0 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/UpsertSecurityGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/UpsertSecurityGroupTask.groovy @@ -58,6 +58,6 @@ class UpsertSecurityGroupTask extends AbstractCloudProviderAwareTask { 
"kato.last.task.id" : taskId, ] + result.extraOutput - new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/WaitForUpsertedSecurityGroupTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/WaitForUpsertedSecurityGroupTask.groovy index 495cce27f4..1614525e24 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/WaitForUpsertedSecurityGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/securitygroup/WaitForUpsertedSecurityGroupTask.groovy @@ -50,6 +50,6 @@ class WaitForUpsertedSecurityGroupTask implements RetryableTask, CloudProviderAw } } - return new TaskResult(status) + return TaskResult.ofStatus(status) } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AbstractBulkServerGroupTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AbstractBulkServerGroupTask.java index c9cad55c39..6ba0e4d6e4 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AbstractBulkServerGroupTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AbstractBulkServerGroupTask.java @@ -125,7 +125,7 @@ public TaskResult execute(Stage stage) { .collect(Collectors.toList())); result.put("deploy.server.groups", regionToServerGroupNames); - return new TaskResult(ExecutionStatus.SUCCEEDED, result); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(result).build(); } protected Location getLocation(Map operation) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AbstractServerGroupTask.groovy 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AbstractServerGroupTask.groovy index c169b3d553..ecdfd381b7 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AbstractServerGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AbstractServerGroupTask.groovy @@ -75,7 +75,7 @@ abstract class AbstractServerGroupTask extends AbstractCloudProviderAwareTask im }, 6, 5000, false) // retry for up to 30 seconds if (!operation) { // nothing to do but succeed - return new TaskResult(ExecutionStatus.SUCCEEDED) + return TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } def taskId = kato.requestOperations(cloudProvider, [[(serverGroupAction): operation]]) @@ -97,11 +97,7 @@ abstract class AbstractServerGroupTask extends AbstractCloudProviderAwareTask im ] } - new TaskResult( - ExecutionStatus.SUCCEEDED, - stageOutputs + getAdditionalContext(stage, operation), - getAdditionalOutputs(stage, operation) - ) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(stageOutputs + getAdditionalContext(stage, operation)).outputs(getAdditionalOutputs(stage, operation)).build() } Map convert(Stage stage) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AddServerGroupEntityTagsTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AddServerGroupEntityTagsTask.groovy index 8349cbd37c..0d1d5e8170 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AddServerGroupEntityTagsTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/AddServerGroupEntityTagsTask.groovy @@ -46,18 +46,18 @@ class AddServerGroupEntityTagsTask extends AbstractCloudProviderAwareTask implem try { List tagOperations = buildTagOperations(stage) if (!tagOperations) { - 
return new TaskResult(ExecutionStatus.SKIPPED) + return TaskResult.ofStatus(ExecutionStatus.SKIPPED) } TaskId taskId = kato.requestOperations(tagOperations).toBlocking().first() - return new TaskResult(ExecutionStatus.SUCCEEDED, new HashMap() { + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(new HashMap() { { put("notification.type", "upsertentitytags") put("kato.last.task.id", taskId) } - }) + }).build() } catch (Exception e) { log.error("Failed to tag deployed server groups (stageId: ${stage.id}, executionId: ${stage.execution.id})", e) - return new TaskResult(ExecutionStatus.FAILED_CONTINUE) + return TaskResult.ofStatus(ExecutionStatus.FAILED_CONTINUE) } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/BulkWaitForDestroyedServerGroupTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/BulkWaitForDestroyedServerGroupTask.java index 0feb33ba07..50bc361bd4 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/BulkWaitForDestroyedServerGroupTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/BulkWaitForDestroyedServerGroupTask.java @@ -64,25 +64,25 @@ public TaskResult execute(Stage stage) { ); if (response.getStatus() != 200) { - return new TaskResult(ExecutionStatus.RUNNING); + return TaskResult.RUNNING; } Map cluster = objectMapper.readValue(response.getBody().in(), Map.class); Map output = new HashMap<>(); output.put("remainingInstances", Collections.emptyList()); if (cluster == null || cluster.get("serverGroups") == null) { - return new TaskResult(ExecutionStatus.SUCCEEDED, output); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(output).build(); } List> serverGroups = getServerGroups(region, cluster, serverGroupNames); if (serverGroups.isEmpty()) { - return new TaskResult(ExecutionStatus.SUCCEEDED, output); + return 
TaskResult.builder(ExecutionStatus.SUCCEEDED).context(output).build(); } List> instances = getInstances(serverGroups); LOGGER.info("{} not destroyed, found instances {}", serverGroupNames, instances); output.put("remainingInstances", instances); - return new TaskResult(ExecutionStatus.RUNNING, output); + return TaskResult.builder(ExecutionStatus.RUNNING).context(output).build(); } catch (RetrofitError e) { return handleRetrofitError(stage, e); } catch (IOException e) { @@ -96,12 +96,12 @@ private TaskResult handleRetrofitError(Stage stage, RetrofitError e) { } switch (e.getResponse().getStatus()) { case 404: - return new TaskResult(ExecutionStatus.SUCCEEDED); + return TaskResult.SUCCEEDED; case 500: Map error = new HashMap<>(); error.put("lastRetrofitException", new RetrofitExceptionHandler().handle(stage.getName(), e)); LOGGER.error("Unexpected retrofit error {}", error.get("lastRetrofitException"), e); - return new TaskResult(ExecutionStatus.RUNNING, error); + return TaskResult.builder(ExecutionStatus.RUNNING).context(error).build(); default: throw e; } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CaptureParentInterestingHealthProviderNamesTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CaptureParentInterestingHealthProviderNamesTask.groovy index c4ba20cbf4..86010179e9 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CaptureParentInterestingHealthProviderNamesTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CaptureParentInterestingHealthProviderNamesTask.groovy @@ -33,9 +33,9 @@ class CaptureParentInterestingHealthProviderNamesTask implements Task, CloudProv def interestingHealthProviderNames = parentStage?.context?.interestingHealthProviderNames as List if (interestingHealthProviderNames != null) { - return new 
TaskResult(ExecutionStatus.SUCCEEDED, [interestingHealthProviderNames: interestingHealthProviderNames]); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([interestingHealthProviderNames: interestingHealthProviderNames]).build(); } - return new TaskResult(ExecutionStatus.SUCCEEDED) + return TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CloneServerGroupTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CloneServerGroupTask.groovy index b664f98a59..e019da82fb 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CloneServerGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CloneServerGroupTask.groovy @@ -22,17 +22,19 @@ import com.netflix.spinnaker.orca.Task import com.netflix.spinnaker.orca.TaskResult import com.netflix.spinnaker.orca.clouddriver.KatoService import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask +import com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.clone.CloneDescriptionDecorator import com.netflix.spinnaker.orca.clouddriver.utils.HealthHelper import com.netflix.spinnaker.orca.kato.tasks.DeploymentDetailsAware import com.netflix.spinnaker.orca.pipeline.model.Stage import groovy.util.logging.Slf4j import org.springframework.beans.factory.annotation.Autowired -import org.springframework.beans.factory.annotation.Value import org.springframework.stereotype.Component @Slf4j @Component class CloneServerGroupTask extends AbstractCloudProviderAwareTask implements Task, DeploymentDetailsAware { + @Autowired + Collection cloneDescriptionDecorators = [] @Autowired KatoService kato @@ -40,9 +42,6 @@ class CloneServerGroupTask extends AbstractCloudProviderAwareTask implements Tas @Autowired ObjectMapper mapper - 
@Value('${default.bake.account:default}') - String defaultBakeAccount - @Override TaskResult execute(Stage stage) { def operation = [:] @@ -62,7 +61,7 @@ class CloneServerGroupTask extends AbstractCloudProviderAwareTask implements Tas } String credentials = getCredentials(stage) - def taskId = kato.requestOperations(cloudProvider, getDescriptions(operation)).toBlocking().first() + def taskId = kato.requestOperations(cloudProvider, getDescriptions(stage, operation)).toBlocking().first() def outputs = [ "notification.type" : "createcopylastasg", @@ -78,35 +77,18 @@ class CloneServerGroupTask extends AbstractCloudProviderAwareTask implements Tas } } - new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } - private List> getDescriptions(Map operation) { - log.info("Generating descriptions (cloudProvider: ${operation.cloudProvider}, getCloudProvider: ${getCloudProvider(operation)}, credentials: ${operation.credentials}, defaultBakeAccount: ${defaultBakeAccount}, availabilityZones: ${operation.availabilityZones})") + private List> getDescriptions(Stage stage, Map operation) { + log.info("Generating descriptions (cloudProvider: ${operation.cloudProvider}, getCloudProvider: ${getCloudProvider(operation)}, credentials: ${operation.credentials}, availabilityZones: ${operation.availabilityZones})") - List> descriptions = [] - // NFLX bakes images in their test account. This rigmarole is to allow the prod account access to that image. - Collection targetRegions = operation.region ? [operation.region] : - operation.availabilityZones ? operation.availabilityZones.keySet() : [] - if (getCloudProvider(operation) == "aws" && // the operation is a clone of stage.context. 
- operation.credentials != defaultBakeAccount && - targetRegions && - operation.amiName) { - def allowLaunchDescriptions = targetRegions.collect { String region -> - [ - allowLaunchDescription: [ - account : operation.credentials, - credentials: defaultBakeAccount, - region : region, - amiName : operation.amiName - ] - ] + List> descriptions = [[cloneServerGroup: operation]] + cloneDescriptionDecorators.each { decorator -> + if (decorator.shouldDecorate(operation)) { + decorator.decorate(operation, descriptions, stage) } - descriptions.addAll(allowLaunchDescriptions) - - log.info("Generated `allowLaunchDescriptions` (allowLaunchDescriptions: ${allowLaunchDescriptions})") } - descriptions.add([cloneServerGroup: operation]) descriptions } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CreateServerGroupTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CreateServerGroupTask.groovy index e9572eed89..d44062e7ce 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CreateServerGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CreateServerGroupTask.groovy @@ -63,6 +63,6 @@ class CreateServerGroupTask extends AbstractCloudProviderAwareTask implements Re outputs.interestingHealthProviderNames = HealthHelper.getInterestingHealthProviderNames(stage, ["Amazon"]) } - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateForceRefreshDependenciesTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateForceRefreshDependenciesTask.groovy deleted file mode 100644 index b33ea0752c..0000000000 --- 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateForceRefreshDependenciesTask.groovy +++ /dev/null @@ -1,69 +0,0 @@ -/* - * Copyright 2016 Netflix, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License") - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package com.netflix.spinnaker.orca.clouddriver.tasks.servergroup - -import com.netflix.spinnaker.orca.ExecutionStatus -import com.netflix.spinnaker.orca.Task -import com.netflix.spinnaker.orca.TaskResult -import com.netflix.spinnaker.orca.clouddriver.CloudDriverCacheService -import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask -import com.netflix.spinnaker.orca.clouddriver.tasks.loadbalancer.UpsertLoadBalancerForceRefreshTask -import com.netflix.spinnaker.orca.clouddriver.tasks.securitygroup.SecurityGroupForceCacheRefreshTask -import com.netflix.spinnaker.orca.pipeline.model.Stage -import org.springframework.beans.factory.annotation.Autowired -import org.springframework.stereotype.Component - -@Component -class MigrateForceRefreshDependenciesTask extends AbstractCloudProviderAwareTask implements Task { - - @Autowired - CloudDriverCacheService cacheService - - @Override - TaskResult execute(Stage stage) { - String cloudProvider = getCloudProvider(stage) - Map target = stage.context.target as Map - Map migratedGroup = stage.context["kato.tasks"].get(0).resultObjects.find { it.serverGroupNames } as Map - - migratedGroup.securityGroups.each { Map securityGroup -> - 
migrateSecurityGroups(securityGroup, cloudProvider, target.region as String, securityGroup.credentials as String) - } - - migratedGroup.loadBalancers.each { Map loadBalancer -> - cacheService.forceCacheUpdate( - cloudProvider, - UpsertLoadBalancerForceRefreshTask.REFRESH_TYPE, - [loadBalancerName: loadBalancer.targetName, region: target.region, account: target.credentials] - ) - loadBalancer.securityGroups.each { Map securityGroup -> - migrateSecurityGroups(securityGroup, cloudProvider, target.region as String, loadBalancer.credentials as String) - } - } - - new TaskResult(ExecutionStatus.SUCCEEDED) - } - - private void migrateSecurityGroups(Map migratedGroup, String cloudProvider, String region, String targetCredentials) { - (migratedGroup.created + migratedGroup.reused).flatten().each { Map securityGroup -> - cacheService.forceCacheUpdate( - cloudProvider, - SecurityGroupForceCacheRefreshTask.REFRESH_TYPE, - [securityGroupName: securityGroup.targetName, region: region, account: targetCredentials ?: securityGroup.credentials, vpcId: securityGroup.vpcId] - ) - } - } -} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/ServerGroupCacheForceRefreshTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/ServerGroupCacheForceRefreshTask.groovy index 176a3c66f9..7bcec1c50e 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/ServerGroupCacheForceRefreshTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/ServerGroupCacheForceRefreshTask.groovy @@ -83,7 +83,7 @@ class ServerGroupCacheForceRefreshTask extends AbstractCloudProviderAwareTask im ) registry.counter(cacheForceRefreshTaskId.withTag("stageType", stage.type)).increment() - return new TaskResult(SUCCEEDED, ["shortCircuit": true]) + return TaskResult.builder(SUCCEEDED).context(["shortCircuit": true]).build() } 
def account = getCredentials(stage) @@ -104,7 +104,7 @@ class ServerGroupCacheForceRefreshTask extends AbstractCloudProviderAwareTask im registry.counter(cacheForceRefreshTaskId.withTag("stageType", stage.type)).increment() } - return new TaskResult(allAreComplete ? SUCCEEDED : RUNNING, convertAndStripNullValues(stageData)) + return TaskResult.builder(allAreComplete ? SUCCEEDED : RUNNING).context(convertAndStripNullValues(stageData)).build() } /** @@ -159,7 +159,7 @@ class ServerGroupCacheForceRefreshTask extends AbstractCloudProviderAwareTask im stageData.reset() } - return Optional.of(new TaskResult(status, convertAndStripNullValues(stageData))) + return Optional.of(TaskResult.builder(status).context(convertAndStripNullValues(stageData)).build()) } /** diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/SpinnakerMetadataServerGroupTagGenerator.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/SpinnakerMetadataServerGroupTagGenerator.java index ba5c6aff75..203b481872 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/SpinnakerMetadataServerGroupTagGenerator.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/SpinnakerMetadataServerGroupTagGenerator.java @@ -139,7 +139,7 @@ Map getPreviousServerGroupFromCluster(String application, }, 10, 3000, false); // retry for up to 30 seconds } - Map getPreviousServerGroupFromClusterByTarget(String application, + public Map getPreviousServerGroupFromClusterByTarget(String application, String account, String cluster, String cloudProvider, diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/UpdateLaunchConfigTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/UpdateLaunchConfigTask.groovy index fb06381c93..51f7c1a257 100644 --- 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/UpdateLaunchConfigTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/UpdateLaunchConfigTask.groovy @@ -55,13 +55,13 @@ class UpdateLaunchConfigTask implements Task, DeploymentDetailsAware, CloudProvi def taskId = kato.requestOperations(cloudProvider, ops) .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : "modifyasglaunchconfiguration", "modifyasglaunchconfiguration.account.name": getCredentials(stage), "modifyasglaunchconfiguration.region" : stage.context.region, "kato.last.task.id" : taskId, "deploy.server.groups" : [(stage.context.region): [stage.context.serverGroupName ?: stage.context.asgName]] - ]) + ]).build() } private getAwsOps(Stage stage) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/UpsertServerGroupTagsTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/UpsertServerGroupTagsTask.groovy index 90d4b16fc5..ccae8937f9 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/UpsertServerGroupTagsTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/UpsertServerGroupTagsTask.groovy @@ -48,11 +48,11 @@ class UpsertServerGroupTagsTask extends AbstractCloudProviderAwareTask implement } } - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : "upsertservergrouptags", "deploy.account.name" : getCredentials(stage), "kato.last.task.id" : taskId, "deploy.server.groups": deployServerGroups, - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForCapacityMatchTask.groovy 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForCapacityMatchTask.groovy index cfcb0bf45f..47c384f183 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForCapacityMatchTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForCapacityMatchTask.groovy @@ -40,11 +40,30 @@ class WaitForCapacityMatchTask extends AbstractInstancesCheckTask { @Override protected boolean hasSucceeded(Stage stage, Map serverGroup, List instances, Collection interestingHealthProviderNames) { - if (!serverGroup.capacity || serverGroup.capacity.desired != instances.size()) { - return false + def splainer = new WaitForUpInstancesTask.Splainer() + .add("Capacity match check for server group ${serverGroup?.name} [executionId=${stage.execution.id}, stageId=${stage.id}]") + + try { + if (!serverGroup.capacity) { + splainer.add("short-circuiting out of WaitForCapacityMatchTask because of empty capacity in serverGroup=${serverGroup}") + return false + } + + splainer.add("checking if capacity matches (capacity.desired=${serverGroup.capacity.desired}, instances.size()=${instances.size()}) ") + if (serverGroup.capacity.desired != instances.size()) { + splainer.add("short-circuiting out of WaitForCapacityMatchTask because expected and current capacity don't match") + return false + } + + if (serverGroup.disabled) { + splainer.add("capacity matches but server group is disabled, so returning hasSucceeded=true") + return true + } + + splainer.add("capacity matches and server group is enabled, so we delegate to WaitForUpInstancesTask to check for healthy instances") + return WaitForUpInstancesTask.allInstancesMatch(stage, serverGroup, instances, interestingHealthProviderNames, splainer) + } finally { + splainer.splain() } - return !serverGroup.disabled ? 
- WaitForUpInstancesTask.allInstancesMatch(stage, serverGroup, instances, interestingHealthProviderNames) : - true } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForDestroyedServerGroupTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForDestroyedServerGroupTask.groovy index a2fa1a3c82..da608749f5 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForDestroyedServerGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForDestroyedServerGroupTask.groovy @@ -55,27 +55,27 @@ class WaitForDestroyedServerGroupTask extends AbstractCloudProviderAwareTask imp def response = oortService.getCluster(appName, account, clusterName, cloudProvider) if (response.status != 200) { - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } Map cluster = objectMapper.readValue(response.body.in().text, Map) if (!cluster || !cluster.serverGroups) { - return new TaskResult(ExecutionStatus.SUCCEEDED, [remainingInstances: []]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([remainingInstances: []]).build() } def serverGroup = cluster.serverGroups.find { it.name == serverGroupName && it.region == serverGroupRegion } if (!serverGroup) { - return new TaskResult(ExecutionStatus.SUCCEEDED, [remainingInstances: []]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([remainingInstances: []]).build() } def instances = serverGroup.instances ?: [] log.info("${serverGroupName}: not yet destroyed, found instances: ${instances?.join(', ') ?: 'none'}") - return new TaskResult(ExecutionStatus.RUNNING, [remainingInstances: instances.findResults { it.name }]) + return TaskResult.builder(ExecutionStatus.RUNNING).context([remainingInstances: instances.findResults { it.name }]).build() } catch 
(RetrofitError e) { def retrofitErrorResponse = new RetrofitExceptionHandler().handle(stage.name, e) if (e.response?.status == 404) { - return new TaskResult(ExecutionStatus.SUCCEEDED) + return TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } else if (e.response?.status >= 500) { log.error("Unexpected retrofit error (${retrofitErrorResponse})") - return new TaskResult(ExecutionStatus.RUNNING, [lastRetrofitException: retrofitErrorResponse]) + return TaskResult.builder(ExecutionStatus.RUNNING).context([lastRetrofitException: retrofitErrorResponse]).build() } throw e diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/BakeryImageAccessDescriptionDecorator.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/BakeryImageAccessDescriptionDecorator.groovy new file mode 100644 index 0000000000..4012a3f185 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/BakeryImageAccessDescriptionDecorator.groovy @@ -0,0 +1,65 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.clone + +import com.netflix.spinnaker.orca.pipeline.model.Stage +import groovy.util.logging.Slf4j +import org.springframework.beans.factory.annotation.Value +import org.springframework.stereotype.Component + +/** + * Netflix bakes images in their test account. This rigmarole is to allow the + * prod account access to that image. + */ +@Slf4j +@Component +class BakeryImageAccessDescriptionDecorator implements CloneDescriptionDecorator { + @Value('${default.bake.account:default}') + String defaultBakeAccount + + @Override + boolean shouldDecorate(Map operation) { + Collection targetRegions = targetRegions(operation) + + return getCloudProvider(operation) == "aws" && // the operation is a clone of stage.context. + operation.credentials != defaultBakeAccount && + targetRegions && + operation.amiName + } + + @Override + void decorate(Map operation, List> descriptions, Stage stage) { + def allowLaunchDescriptions = targetRegions(operation).collect { String region -> + [ + allowLaunchDescription: [ + account : operation.credentials, + credentials: defaultBakeAccount, + region : region, + amiName : operation.amiName + ] + ] + } + descriptions.addAll(allowLaunchDescriptions) + + log.info("Generated `allowLaunchDescriptions` (allowLaunchDescriptions: ${allowLaunchDescriptions})") + } + + private static Collection targetRegions(Map operation) { + return operation.region ? [operation.region] : + operation.availabilityZones ? 
operation.availabilityZones.keySet() : [] + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/CloneDescriptionDecorator.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/CloneDescriptionDecorator.java new file mode 100644 index 0000000000..50cc895e2a --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/CloneDescriptionDecorator.java @@ -0,0 +1,29 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.clone; + +import com.netflix.spinnaker.orca.clouddriver.utils.CloudProviderAware; +import com.netflix.spinnaker.orca.pipeline.model.Stage; + +import java.util.List; +import java.util.Map; + +public interface CloneDescriptionDecorator extends CloudProviderAware { + boolean shouldDecorate(Map operation); + + void decorate(Map operation, List> descriptions, Stage stage); +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/CloudFoundryManifestArtifactDecorator.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/CloudFoundryManifestArtifactDecorator.java new file mode 100644 index 0000000000..2e3cf42449 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/clone/CloudFoundryManifestArtifactDecorator.java @@ -0,0 +1,75 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.clone; + +import com.fasterxml.jackson.databind.ObjectMapper; +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.Manifest; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver; +import lombok.Data; +import lombok.RequiredArgsConstructor; +import org.springframework.stereotype.Component; + +import java.util.List; +import java.util.Map; +import java.util.Optional; + +@RequiredArgsConstructor +@Component +public class CloudFoundryManifestArtifactDecorator implements CloneDescriptionDecorator { + private final ObjectMapper mapper; + private final ArtifactResolver artifactResolver; + + @Override + public boolean shouldDecorate(Map operation) { + return "cloudfoundry".equals(getCloudProvider(operation)); + } + + @Override + public void decorate(Map operation, List<Map> descriptions, Stage stage) { + CloudFoundryCloneServerGroupOperation op = mapper.convertValue(operation, CloudFoundryCloneServerGroupOperation.class); + + operation.put("applicationArtifact", Artifact.builder() + .type("cloudfoundry/app") + .artifactAccount(op.getSource().getAccount()) + .location(op.getSource().getRegion()) + .name(op.getSource().getAsgName()) + .build()); + operation.put("manifest", op.getManifest().toArtifact(artifactResolver, stage)); + operation.put("credentials", Optional.ofNullable(op.getAccount()).orElse(op.getCredentials())); + operation.put("region", op.getRegion()); + + operation.remove("source"); + } + + @Data + private static class CloudFoundryCloneServerGroupOperation { + private String account; + private String credentials; + private String region; + private Manifest manifest; + private Source source; + + @Data + static class Source { + String account; + String region; + String asgName; + } + } +} diff --git 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/support/DetermineTargetServerGroupTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/support/DetermineTargetServerGroupTask.groovy index c0cda5eb47..31989e882c 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/support/DetermineTargetServerGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/support/DetermineTargetServerGroupTask.groovy @@ -33,9 +33,9 @@ class DetermineTargetServerGroupTask implements Task { @Override TaskResult execute(Stage stage) { - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ targetReferences: getTargetServerGroups(stage) - ]) + ]).build() } List getTargetServerGroups(Stage stage) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servicebroker/AbstractWaitForServiceTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servicebroker/AbstractWaitForServiceTask.java new file mode 100644 index 0000000000..a7e59a4829 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servicebroker/AbstractWaitForServiceTask.java @@ -0,0 +1,58 @@ +/* + * Copyright 2019 Pivotal Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.servicebroker; + +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.RetryableTask; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.OortService; +import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask; +import com.netflix.spinnaker.orca.pipeline.model.Stage; + +import javax.annotation.Nonnull; +import java.util.Map; + +public abstract class AbstractWaitForServiceTask extends AbstractCloudProviderAwareTask implements RetryableTask { + protected OortService oortService; + + public AbstractWaitForServiceTask(OortService oortService) { + this.oortService = oortService; + } + + @Override + public long getBackoffPeriod() { + return 10 * 1000L; + } + + @Override + public long getTimeout() { + return 30 * 60 * 1000L; + } + + @Nonnull + @Override + public TaskResult execute(@Nonnull Stage stage) { + String cloudProvider = getCloudProvider(stage); + String account = stage.mapTo("/service.account", String.class); + String region = stage.mapTo("/service.region", String.class); + String serviceInstanceName = stage.mapTo("/service.instance.name", String.class); + + return TaskResult.ofStatus(oortStatusToTaskStatus(oortService.getServiceInstance(account, cloudProvider, region, serviceInstanceName))); + } + + abstract protected ExecutionStatus oortStatusToTaskStatus(Map m); +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servicebroker/DestroyServiceTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servicebroker/DestroyServiceTask.java deleted file mode 100644 index 3f48ae0790..0000000000 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servicebroker/DestroyServiceTask.java +++ /dev/null @@ 
-1,59 +0,0 @@ -/* - * Copyright 2018 Pivotal, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package com.netflix.spinnaker.orca.clouddriver.tasks.servicebroker; - -import com.google.common.collect.ImmutableMap; -import com.netflix.spinnaker.orca.ExecutionStatus; -import com.netflix.spinnaker.orca.Task; -import com.netflix.spinnaker.orca.TaskResult; -import com.netflix.spinnaker.orca.clouddriver.KatoService; -import com.netflix.spinnaker.orca.clouddriver.model.TaskId; -import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask; -import com.netflix.spinnaker.orca.pipeline.model.Stage; -import org.springframework.stereotype.Component; - -import javax.annotation.Nonnull; -import java.util.Collections; -import java.util.Map; - -@Component -public class DestroyServiceTask extends AbstractCloudProviderAwareTask implements Task { - - private final KatoService kato; - - public DestroyServiceTask(KatoService kato) { - this.kato = kato; - } - - @Nonnull - @Override - public TaskResult execute(@Nonnull Stage stage) { - String cloudProvider = getCloudProvider(stage); - String account = getCredentials(stage); - Map operation = new ImmutableMap.Builder() - .put("destroyService", stage.getContext()) - .build(); - TaskId taskId = kato.requestOperations(cloudProvider, Collections.singletonList(operation)).toBlocking().first(); - Map outputs = new ImmutableMap.Builder() - .put("notification.type", "destroyService") - 
.put("kato.last.task.id", taskId) - .put("service.region", stage.getContext().get("region")) - .put("service.account", account) - .build(); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); - } -} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/DeleteSnapshotTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/DeleteSnapshotTask.java new file mode 100644 index 0000000000..d5630bcc47 --- /dev/null +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/DeleteSnapshotTask.java @@ -0,0 +1,83 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.snapshot; + +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.TimeUnit; +import javax.annotation.Nonnull; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.RetryableTask; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import com.netflix.spinnaker.orca.clouddriver.model.TaskId; +import com.netflix.spinnaker.orca.clouddriver.pipeline.snapshot.DeleteSnapshotStage; +import com.netflix.spinnaker.orca.clouddriver.tasks.AbstractCloudProviderAwareTask; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; +import static java.util.stream.Collectors.toList; + +@Component +public class DeleteSnapshotTask extends AbstractCloudProviderAwareTask implements RetryableTask { + private final KatoService katoService; + + @Autowired + public DeleteSnapshotTask(KatoService katoService) { + this.katoService = katoService; + } + + @Override + public TaskResult execute(@Nonnull Stage stage) { + DeleteSnapshotStage.DeleteSnapshotRequest deleteSnapshotRequest = stage.mapTo(DeleteSnapshotStage.DeleteSnapshotRequest.class); + + List<Map<String, Map>> operations = deleteSnapshotRequest + .getSnapshotIds() + .stream() + .map(snapshotId -> { + Map operation = new HashMap<>(); + operation.put("credentials", deleteSnapshotRequest.getCredentials()); + operation.put("region", deleteSnapshotRequest.getRegion()); + operation.put("snapshotId", snapshotId); + return Collections.singletonMap("deleteSnapshot", operation); + } + ).collect(toList()); + + TaskId taskId = katoService + .requestOperations(deleteSnapshotRequest.getCloudProvider(), operations).toBlocking().first(); + + Map outputs = new HashMap<>(); + outputs.put("notification.type", "deleteSnapshot"); + 
outputs.put("kato.last.task.id", taskId); + outputs.put("delete.region", deleteSnapshotRequest.getRegion()); + outputs.put("delete.account.name", deleteSnapshotRequest.getCredentials()); + + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); + } + + @Override + public long getBackoffPeriod() { + return TimeUnit.SECONDS.toMillis(10); + } + + @Override + public long getTimeout() { + return TimeUnit.MINUTES.toMillis(2); + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/RestoreSnapshotTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/RestoreSnapshotTask.groovy index 0e5290eb69..f118fa8208 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/RestoreSnapshotTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/RestoreSnapshotTask.groovy @@ -46,7 +46,7 @@ class RestoreSnapshotTask extends AbstractCloudProviderAwareTask implements Task "restore.snapshot" : stage.context.snapshotTimestamp, "restore.account.name": account ] - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/SaveSnapshotTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/SaveSnapshotTask.groovy index 54ce31523b..653f81ade4 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/SaveSnapshotTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/SaveSnapshotTask.groovy @@ -44,7 +44,7 @@ class SaveSnapshotTask extends AbstractCloudProviderAwareTask implements Task { "snapshot.application" : stage.context.applicationName, "snapshot.account.name": account 
] - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/ParallelDeployStage.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/ParallelDeployStage.groovy index a4ed07702e..26cfba2970 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/ParallelDeployStage.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/ParallelDeployStage.groovy @@ -165,7 +165,7 @@ class ParallelDeployStage implements StageDefinitionBuilder { static class CompleteParallelDeployTask implements Task { TaskResult execute(Stage stage) { log.info("Completed Parallel Deploy") - new TaskResult(ExecutionStatus.SUCCEEDED, [:], [:]) + TaskResult.SUCCEEDED } } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/strategy/DetermineSourceServerGroupTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/strategy/DetermineSourceServerGroupTask.groovy index 8d5fd923cb..44a2d2c7a7 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/strategy/DetermineSourceServerGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/strategy/DetermineSourceServerGroupTask.groovy @@ -78,7 +78,7 @@ class DetermineSourceServerGroupTask implements RetryableTask { // to avoid later stages trying to dynamically resolve the source and actually get the newly deployed server group stageOutputs.source = [:] } - return new TaskResult(ExecutionStatus.SUCCEEDED, stageOutputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(stageOutputs).build() } catch (ex) { log.warn("${getClass().simpleName} failed with $ex.message on attempt ${stage.context.attempt ?: 1}") lastException = ex @@ -103,14 +103,14 @@ class 
DetermineSourceServerGroupTask implements RetryableTask { if (!stage.context.capacity) { throw new IllegalStateException("Could not find source server group to copy capacity from, and no capacity specified.") } - return new TaskResult(ExecutionStatus.SUCCEEDED, ctx) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(ctx).build() } if (ctx.consecutiveNotFound >= MIN_CONSECUTIVE_404 && useSourceCapacity(stage, null) || ctx.attempt > MAX_ATTEMPTS) { throw new IllegalStateException(lastException.getMessage(), lastException) } - return new TaskResult(ExecutionStatus.RUNNING, ctx) + return TaskResult.builder(ExecutionStatus.RUNNING).context(ctx).build() } Boolean useSourceCapacity(Stage stage, StageData.Source source) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/strategy/Strategy.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/strategy/Strategy.java index e5c2d1f566..ce194e095e 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/strategy/Strategy.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/strategy/Strategy.java @@ -24,6 +24,7 @@ public enum Strategy implements StrategyFlowComposer{ RED_BLACK("redblack"), ROLLING_RED_BLACK("rollingredblack"), + CF_ROLLING_RED_BLACK("cfrollingredblack"), HIGHLANDER("highlander"), ROLLING_PUSH("rollingpush"), CUSTOM("custom"), diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/ResizeStrategySupport.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/ResizeStrategySupport.groovy index 0e9df30387..93720830bd 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/ResizeStrategySupport.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/ResizeStrategySupport.groovy @@ -139,7 +139,9 @@ public class ResizeStrategySupport 
{ if (stageData.unpinMinimumCapacity) { Integer originalMin = null def originalSourceCapacity = stage.context.get("originalCapacity.${stageData.source?.serverGroupName}".toString()) as Capacity - originalMin = originalSourceCapacity?.min ?: stage.context.savedCapacity?.min + originalMin = originalSourceCapacity?.min != null + ? originalSourceCapacity?.min + : stage.context.savedCapacity?.min if (originalMin != null) { newCapacity = unpinMin(newCapacity, originalMin as Integer) diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/SourceResolver.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/SourceResolver.groovy index 2dde6c8d3a..a432f36d24 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/SourceResolver.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/SourceResolver.groovy @@ -20,6 +20,7 @@ import com.fasterxml.jackson.core.JsonParseException import com.fasterxml.jackson.databind.JsonMappingException import com.fasterxml.jackson.databind.ObjectMapper import com.netflix.spinnaker.orca.clouddriver.OortService +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.Location import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.TargetServerGroup import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.TargetServerGroupResolver import com.netflix.spinnaker.orca.pipeline.model.Stage @@ -43,8 +44,30 @@ class SourceResolver { StageData.Source getSource(Stage stage) throws RetrofitError, JsonParseException, JsonMappingException { def stageData = stage.mapTo(StageData) if (stageData.source) { - // has an existing source, return it - return stageData.source + // targeting a source in a different account and region + if (stageData.source.clusterName && stage.context.target) { + TargetServerGroup.Params params = new TargetServerGroup.Params( + 
cloudProvider: stageData.cloudProvider, + credentials: stageData.source.account, + cluster: stageData.source.clusterName, + target: TargetServerGroup.Params.Target.valueOf(stage.context.target as String), + locations: [Location.region(stageData.source.region)] + ) + + def targetServerGroups = resolver.resolveByParams(params) + + if (targetServerGroups) { + return new StageData.Source(account: params.credentials as String, + region: targetServerGroups[0].region as String, + serverGroupName: targetServerGroups[0].name as String, + asgName: targetServerGroups[0].name as String) + } else { + return null + } + } else { + // has an existing source, return it + return stageData.source + } } else if (stage.context.target) { // If no source was specified, but targeting coordinates were, attempt to resolve the target server group. TargetServerGroup.Params params = TargetServerGroup.Params.fromStage(stage) diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/StageData.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/StageData.groovy index 2de7b5a0eb..bf4a885592 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/StageData.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/StageData.groovy @@ -38,6 +38,9 @@ class StageData { boolean scaleDown Map<String, List<String>> availabilityZones int maxRemainingAsgs + Boolean allowDeleteActive + Boolean allowScaleDownActive + int maxInitialAsgs = 1 Boolean useSourceCapacity Boolean preferSourceCapacity Source source @@ -110,6 +113,7 @@ class StageData { String serverGroupName Boolean useSourceCapacity Boolean preferSourceCapacity + String clusterName } static class PipelineBeforeCleanup { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/AbstractAsgTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/AbstractAsgTask.groovy 
index f3d67420f2..075231b46a 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/AbstractAsgTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/AbstractAsgTask.groovy @@ -46,7 +46,7 @@ abstract class AbstractAsgTask implements Task { def taskId = kato.requestOperations([[("${asgAction}Description".toString()): operation]]) .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : getAsgAction().toLowerCase(), "kato.last.task.id" : taskId, "deploy.account.name" : operation.credentials, @@ -56,7 +56,7 @@ abstract class AbstractAsgTask implements Task { "deploy.server.groups" : (operation.regions as Collection).collectEntries { [(it): [operation.asgName]] } - ]) + ]).build() } Map convert(Stage stage) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/AbstractDiscoveryTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/AbstractDiscoveryTask.groovy index 773a0328cf..66892a26e7 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/AbstractDiscoveryTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/AbstractDiscoveryTask.groovy @@ -42,10 +42,10 @@ abstract class AbstractDiscoveryTask implements Task { def taskId = kato.requestOperations([["${action}": stage.context]]) .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : getAction().toLowerCase(), "kato.last.task.id" : taskId, interestingHealthProviderNames: HealthHelper.getInterestingHealthProviderNames(stage, ["Discovery"]) - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/CopyAmazonLoadBalancerTask.groovy 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/CopyAmazonLoadBalancerTask.groovy index 846c7702c5..149c9b26c1 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/CopyAmazonLoadBalancerTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/CopyAmazonLoadBalancerTask.groovy @@ -105,7 +105,7 @@ class CopyAmazonLoadBalancerTask implements Task { } ] - new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } @VisibleForTesting diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/CreateDeployTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/CreateDeployTask.groovy index 3ea8b243e1..e0f125b0f6 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/CreateDeployTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/CreateDeployTask.groovy @@ -69,7 +69,7 @@ class CreateDeployTask extends AbstractCloudProviderAwareTask implements Task, D outputs.interestingHealthProviderNames = HealthHelper.getInterestingHealthProviderNames(stage, ["Amazon"]) } - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } private Map deployOperationFromContext(String cloudProvider, Stage stage) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DestroyAwsServerGroupTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DestroyAwsServerGroupTask.groovy index d0019fd304..9e15edf027 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DestroyAwsServerGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DestroyAwsServerGroupTask.groovy @@ -52,14 +52,14 @@ class DestroyAwsServerGroupTask extends 
AbstractCloudProviderAwareTask implement TaskId taskId = kato.requestOperations(cloudProvider, [[destroyServerGroup: context]]) .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : "destroyservergroup", "deploy.account.name" : context.credentials, "kato.last.task.id" : taskId, "asgName" : context.serverGroupName, // TODO: Retire asgName "serverGroupName" : context.serverGroupName, "deploy.server.groups": ((Iterable) context.regions).collectEntries { [(it): [context.serverGroupName]] } - ]) + ]).build() } Map convert(Stage stage) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DetachInstancesTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DetachInstancesTask.groovy index ccc4da36bd..0d90386fd8 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DetachInstancesTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DetachInstancesTask.groovy @@ -61,12 +61,12 @@ class DetachInstancesTask implements RetryableTask, CloudProviderAware { .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : "detachinstances", "kato.last.task.id" : taskId, "terminate.instance.ids": stage.context.instanceIds, "terminate.account.name": getCredentials(stage), "terminate.region" : stage.context.region - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DetermineTargetReferenceTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DetermineTargetReferenceTask.groovy index f182363436..9a677f026d 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DetermineTargetReferenceTask.groovy +++ 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DetermineTargetReferenceTask.groovy @@ -33,8 +33,8 @@ class DetermineTargetReferenceTask implements Task { @Override TaskResult execute(Stage stage) { - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ targetReferences: targetReferenceSupport.getTargetAsgReferences(stage) - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DisableInstancesTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DisableInstancesTask.groovy index 811c46c0ee..d0726b765f 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DisableInstancesTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/DisableInstancesTask.groovy @@ -58,10 +58,10 @@ class DisableInstancesTask implements CloudProviderAware, Task { def taskId = katoService.requestOperations(actions) .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : 'disableinstances', "kato.last.task.id" : taskId, interestingHealthProviderNames: HealthHelper.getInterestingHealthProviderNames(stage, ["Discovery", "LoadBalancer"]) - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/JarDiffsTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/JarDiffsTask.groovy index 072040de27..1c09f15d23 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/JarDiffsTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/JarDiffsTask.groovy @@ -71,7 +71,7 @@ class JarDiffsTask implements DiffTask { def retriesRemaining = stage.context.jarDiffsRetriesRemaining != null ? 
stage.context.jarDiffsRetriesRemaining : MAX_RETRIES if (retriesRemaining <= 0) { log.info("retries exceeded") - return new TaskResult(ExecutionStatus.SUCCEEDED, [jarDiffsRetriesRemaining: retriesRemaining]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([jarDiffsRetriesRemaining: retriesRemaining]).build() } try { @@ -92,7 +92,7 @@ class JarDiffsTask implements DiffTask { if (!targetInstances || !sourceInstances) { log.debug("No instances found (targetAsg: ${targetAsg}, sourceAsg: ${sourceAsg})") - return new TaskResult(ExecutionStatus.SUCCEEDED) + return TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } // get jar json info @@ -104,19 +104,27 @@ class JarDiffsTask implements DiffTask { LibraryDiffs jarDiffs = libraryDiffTool.calculateLibraryDiffs(sourceJarList, targetJarList) // add the diffs to the context - return new TaskResult(ExecutionStatus.SUCCEEDED, [jarDiffs: jarDiffs]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([jarDiffs: jarDiffs]).build() } catch (Exception e) { // return success so we don't break pipelines log.error("error while fetching jar diffs, retrying", e) - return new TaskResult(ExecutionStatus.RUNNING, [jarDiffsRetriesRemaining: --retriesRemaining]) + return TaskResult.builder(ExecutionStatus.RUNNING).context([jarDiffsRetriesRemaining: --retriesRemaining]).build() } } InstanceService createInstanceService(String address) { + def okHttpClient = new OkHttpClient(retryOnConnectionFailure: false) + + // short circuit as quickly as possible if security groups don't allow ingress to :8077 + // (spinnaker applications don't allow this) + okHttpClient.setConnectTimeout(2, TimeUnit.SECONDS) + okHttpClient.setReadTimeout(2, TimeUnit.SECONDS) + RestAdapter restAdapter = new RestAdapter.Builder() .setEndpoint(address) - .setClient(new OkClient(new OkHttpClient(retryOnConnectionFailure: false))) + .setClient(new OkClient(okHttpClient)) + .build() + return restAdapter.create(InstanceService.class) } diff --git 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/ModifyAsgTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/ModifyAsgTask.groovy index 1b9c5e6fbc..9ca3b17c39 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/ModifyAsgTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/ModifyAsgTask.groovy @@ -38,10 +38,10 @@ class ModifyAsgTask implements Task { def deployServerGroups = AsgDescriptionSupport.convertAsgsToDeploymentTargets(stage.context.asgs) - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : "modifyasg", "deploy.server.groups" : deployServerGroups, "kato.last.task.id" : taskId, - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/PreconfigureDestroyAsgTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/PreconfigureDestroyAsgTask.groovy index 5cf9990724..21b34e51ff 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/PreconfigureDestroyAsgTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/PreconfigureDestroyAsgTask.groovy @@ -29,13 +29,13 @@ class PreconfigureDestroyAsgTask implements Task { @Override TaskResult execute(Stage stage) { - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "resizeAsg.credentials" : stage.context.credentials, "resizeAsg.regions" : stage.context.regions, "resizeAsg.asgName" : stage.context.asgName, "resizeAsg.capacity.min" : 0, "resizeAsg.capacity.max" : 0, "resizeAsg.capacity.desired": 0 - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/ResizeAsgTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/ResizeAsgTask.groovy index 79a4805790..5ae9ed4d9b 
100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/ResizeAsgTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/ResizeAsgTask.groovy @@ -46,7 +46,7 @@ class ResizeAsgTask implements Task { def taskId = kato.requestOperations([[resizeAsgDescription: operation]]) .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : "resizeasg", "deploy.account.name" : operation.credentials, "kato.last.task.id" : taskId, @@ -56,7 +56,7 @@ class ResizeAsgTask implements Task { [(it): [operation.asgName]] } - ]) + ]).build() } Map convert(Stage stage) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/TerminateInstanceAndDecrementAsgTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/TerminateInstanceAndDecrementAsgTask.groovy index 183c6fb613..2c9f063bf2 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/TerminateInstanceAndDecrementAsgTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/TerminateInstanceAndDecrementAsgTask.groovy @@ -35,14 +35,14 @@ class TerminateInstanceAndDecrementAsgTask implements Task { def taskId = kato.requestOperations([[terminateInstanceAndDecrementAsgDescription: stage.context]]) .toBlocking() .first() - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : "terminateinstanceanddecrementasg", "terminate.account.name": stage.context.credentials, "terminate.region" : stage.context.region, "kato.last.task.id" : taskId, "kato.task.id" : taskId, // TODO retire this. 
"terminate.instance.ids": [stage.context.instance], - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/UpsertAmazonDNSTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/UpsertAmazonDNSTask.groovy index a136b85007..ca03b4de44 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/UpsertAmazonDNSTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/UpsertAmazonDNSTask.groovy @@ -60,6 +60,6 @@ class UpsertAmazonDNSTask implements Task { "kato.last.task.id": taskId ] - return new TaskResult(SUCCEEDED, outputs) + return TaskResult.builder(SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/UpsertAsgScheduledActionsTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/UpsertAsgScheduledActionsTask.groovy index 593a7bf6a0..57d42f5305 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/UpsertAsgScheduledActionsTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/UpsertAsgScheduledActionsTask.groovy @@ -39,10 +39,10 @@ class UpsertAsgScheduledActionsTask implements Task { def deployServerGroups = AsgDescriptionSupport.convertAsgsToDeploymentTargets(stage.context.asgs) - new TaskResult(ExecutionStatus.SUCCEEDED, [ + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ "notification.type" : "upsertasgscheduledactions", "deploy.server.groups" : deployServerGroups, "kato.last.task.id" : taskId, - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/InstanceHealthCheckTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/InstanceHealthCheckTask.groovy index 2c73804f27..24067d1085 100644 --- 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/InstanceHealthCheckTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/InstanceHealthCheckTask.groovy @@ -48,7 +48,7 @@ class InstanceHealthCheckTask extends AbstractQuipTask implements RetryableTask ExecutionStatus executionStatus = ExecutionStatus.SUCCEEDED //skipped instances if (!instances) { - return new TaskResult(ExecutionStatus.SUCCEEDED) + return TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } // verify instance list, package, and version are in the context if(instances) { @@ -59,7 +59,7 @@ class InstanceHealthCheckTask extends AbstractQuipTask implements RetryableTask // ask kato for a refreshed version of the instance info instances = oortHelper.getInstancesForCluster(stage.context, null, true, false) stageOutputs << [instances: instances] - return new TaskResult(ExecutionStatus.RUNNING, stageOutputs) + return TaskResult.builder(ExecutionStatus.RUNNING).context(stageOutputs).build() } URL healthCheckUrl = new URL(instance.healthCheckUrl) @@ -74,6 +74,6 @@ class InstanceHealthCheckTask extends AbstractQuipTask implements RetryableTask } else { throw new RuntimeException("one or more required parameters are missing : instances") } - return new TaskResult(executionStatus, stageOutputs) + return TaskResult.builder(executionStatus).context(stageOutputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/MonitorQuipTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/MonitorQuipTask.groovy index 8551da8107..836f544197 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/MonitorQuipTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/MonitorQuipTask.groovy @@ -41,7 +41,7 @@ class MonitorQuipTask extends AbstractQuipTask implements RetryableTask { */ @Override TaskResult 
execute(Stage stage) { - def result = new TaskResult(ExecutionStatus.SUCCEEDED) + def result = TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) //we skipped instances that were up to date if (!stage.context.instances) { @@ -64,12 +64,12 @@ class MonitorQuipTask extends AbstractQuipTask implements RetryableTask { } else if(status == "Failed") { throw new RuntimeException("quip task failed for ${hostName} with a result of ${status}, see http://${hostName}:5050/tasks/${taskId}") } else if(status == "Running") { - result = new TaskResult(ExecutionStatus.RUNNING) + result = TaskResult.ofStatus(ExecutionStatus.RUNNING) } else { throw new RuntimeException("quip task failed for ${hostName} with a result of ${status}, see http://${hostName}:5050/tasks/${taskId}") } } catch(RetrofitError e) { - result = new TaskResult(ExecutionStatus.RUNNING) + result = TaskResult.ofStatus(ExecutionStatus.RUNNING) } } return result diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/ResolveQuipVersionTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/ResolveQuipVersionTask.groovy index 5acc19f738..01a596f3a4 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/ResolveQuipVersionTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/ResolveQuipVersionTask.groovy @@ -71,6 +71,6 @@ class ResolveQuipVersionTask implements RetryableTask { objectMapper) String version = stage.context?.patchVersion ?: packageInfo.findTargetPackage(allowMissingPackageInstallation)?.packageVersion - return new TaskResult(ExecutionStatus.SUCCEEDED, [version: version], [version:version]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([version: version]).outputs([version:version]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/TriggerQuipTask.groovy 
b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/TriggerQuipTask.groovy index 7bb2b487c3..5a5ebf47ed 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/TriggerQuipTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/TriggerQuipTask.groovy @@ -92,7 +92,7 @@ class TriggerQuipTask extends AbstractQuipTask implements RetryableTask { remainingInstances: remainingInstances, version: version ] - return new TaskResult(remainingInstances ? ExecutionStatus.RUNNING : ExecutionStatus.SUCCEEDED, stageOutputs) + return TaskResult.builder(remainingInstances ? ExecutionStatus.RUNNING : ExecutionStatus.SUCCEEDED).context(stageOutputs).build() } String getAppVersion(InstanceService instanceService, String packageName) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/VerifyQuipTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/VerifyQuipTask.groovy index b9ecdd029c..29cb43c4b6 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/VerifyQuipTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/quip/VerifyQuipTask.groovy @@ -57,7 +57,7 @@ class VerifyQuipTask extends AbstractQuipTask implements Task { } else { throw new RuntimeException("one or more of these parameters is missing : cluster || region || account || healthProviders") } - return new TaskResult(executionStatus, stageOutputs, [:]) + return TaskResult.builder(executionStatus).context(stageOutputs).build() } private boolean checkInstancesForQuip(Map instances) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/CheckForRemainingTerminationsTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/CheckForRemainingTerminationsTask.groovy index d4b6109558..d17dab124a 100644 --- 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/CheckForRemainingTerminationsTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/CheckForRemainingTerminationsTask.groovy @@ -35,9 +35,9 @@ class CheckForRemainingTerminationsTask implements Task { return TaskResult.SUCCEEDED } - return new TaskResult(ExecutionStatus.REDIRECT, [ + return TaskResult.builder(ExecutionStatus.REDIRECT).context([ skipRemainingWait: false, startTime: Instant.EPOCH - ]) + ]).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/CleanUpTagsTask.java b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/CleanUpTagsTask.java index a56515b661..2e75077bd4 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/CleanUpTagsTask.java +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/CleanUpTagsTask.java @@ -94,7 +94,7 @@ public TaskResult execute(Stage stage) { log.info("found tags to delete {}", tagsToDelete); if (tagsToDelete.isEmpty()) { - return new TaskResult(SUCCEEDED); + return TaskResult.SUCCEEDED; } // All IDs should be the same; use the first one @@ -105,10 +105,10 @@ public TaskResult execute(Stage stage) { operations(entityId, tagsToDelete) ).toBlocking().first(); - return new TaskResult(SUCCEEDED, new HashMap() {{ + return TaskResult.builder(SUCCEEDED).context(new HashMap() {{ put("notification.type", "deleteentitytags"); put("kato.last.task.id", taskId); - }}); + }}).build(); } catch (Exception e) { log.error( "Failed to clean up tags for stage {} of {} {}", @@ -117,7 +117,7 @@ public TaskResult execute(Stage stage) { stage.getExecution().getId(), e ); - return new TaskResult(SUCCEEDED); + return TaskResult.SUCCEEDED; } } diff --git 
a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/DetermineTerminationCandidatesTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/DetermineTerminationCandidatesTask.groovy index a5e86e7b89..549991aa6c 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/DetermineTerminationCandidatesTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/DetermineTerminationCandidatesTask.groovy @@ -52,7 +52,7 @@ class DetermineTerminationCandidatesTask implements Task { } int totalRelaunches = getNumberOfRelaunches(stage.context.termination, terminationInstancePool.size()) def terminationInstanceIds = terminationInstancePool.take(totalRelaunches) - new TaskResult(ExecutionStatus.SUCCEEDED, [terminationInstanceIds: terminationInstanceIds, knownInstanceIds: knownInstanceIds, skipRemainingWait: true, waitTime: stage.context.termination?.waitTime ?: 0 ]) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([terminationInstanceIds: terminationInstanceIds, knownInstanceIds: knownInstanceIds, skipRemainingWait: true, waitTime: stage.context.termination?.waitTime ?: 0 ]).build() } int getNumberOfRelaunches(Map termination, int totalAsgSize) { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/DetermineTerminationPhaseInstancesTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/DetermineTerminationPhaseInstancesTask.groovy index 3b26695015..bb48559ebb 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/DetermineTerminationPhaseInstancesTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/DetermineTerminationPhaseInstancesTask.groovy @@ -42,6 +42,6 @@ class DetermineTerminationPhaseInstancesTask implements Task { 
(TerminatingInstanceSupport.TERMINATE_REMAINING_INSTANCES): [] ] - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/WaitForNewUpInstancesLaunchTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/WaitForNewUpInstancesLaunchTask.groovy index c8c290c106..80cd017e3f 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/WaitForNewUpInstancesLaunchTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/rollingpush/WaitForNewUpInstancesLaunchTask.groovy @@ -66,8 +66,8 @@ class WaitForNewUpInstancesLaunchTask implements OverridableTimeoutRetryableTask int expectedNewInstances = (stage.context.instanceIds as List).size() if (newUpInstanceIds.size() >= expectedNewInstances) { knownInstanceIds.addAll(newUpInstanceIds) - return new TaskResult(ExecutionStatus.SUCCEEDED, [knownInstanceIds: knownInstanceIds.toList()]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([knownInstanceIds: knownInstanceIds.toList()]).build() } - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } } diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/scalingprocess/AbstractScalingProcessTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/scalingprocess/AbstractScalingProcessTask.groovy index 578db4dfef..6dd9cf7f22 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/scalingprocess/AbstractScalingProcessTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/scalingprocess/AbstractScalingProcessTask.groovy @@ -84,9 +84,9 @@ abstract class AbstractScalingProcessTask implements Task { 
stageOutputs."kato.last.task.id" = taskId } - return new TaskResult(ExecutionStatus.SUCCEEDED, stageOutputs, [ + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(stageOutputs).outputs([ ("scalingProcesses.${asgName}" as String): stageContext.processes - ]) + ]).build() } static class StageData { diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/securitygroup/CopySecurityGroupTask.groovy b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/securitygroup/CopySecurityGroupTask.groovy index f06e805ca4..e34d5ec1db 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/securitygroup/CopySecurityGroupTask.groovy +++ b/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/kato/tasks/securitygroup/CopySecurityGroupTask.groovy @@ -74,7 +74,7 @@ class CopySecurityGroupTask implements Task { ] } ] - new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } static class StageData { diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConfigurationBackedConditionSupplierSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConfigurationBackedConditionSupplierSpec.groovy new file mode 100644 index 0000000000..ba4b2d15ad --- /dev/null +++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/conditions/ConfigurationBackedConditionSupplierSpec.groovy @@ -0,0 +1,57 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.conditions + +import com.netflix.spinnaker.kork.dynamicconfig.DynamicConfigService +import com.netflix.spinnaker.orca.time.MutableClock +import spock.lang.Specification +import spock.lang.Subject +import spock.lang.Unroll + +class ConfigurationBackedConditionSupplierSpec extends Specification { + def configService = Stub(DynamicConfigService) { + getConfig(_ as Class, _ as String, _ as Object) >> { type, name, defaultValue -> return defaultValue } + isEnabled(_ as String, _ as Boolean) >> { flag, defaultValue -> return defaultValue } + } + + def conditionsConfigurationProperties = new ConditionConfigurationProperties(configService) + def clock = new MutableClock() + + @Subject + def conditionSupplier = new ConfigurationBackedConditionSupplier(conditionsConfigurationProperties) + + @Unroll + def "should return configured conditions"() { + given: + conditionsConfigurationProperties.setClusters(clusters) + conditionsConfigurationProperties.setActiveConditions(activeConditions) + + when: + def result = conditionSupplier.getConditions(cluster, "region", "account") + + then: + result.size() == numberOfResultingConditions + + where: + cluster | clusters | activeConditions | numberOfResultingConditions + "foo" | [] | [] | 0 + "foo" | ["foo", "bar"] | [] | 0 + "foo" | ["bar"] | [ "c1", "c2"] | 0 + "foo" | ["foo", "bar"] | [ "c1", "c2"] | 2 + "foo" | [] | [ "c1", "c2"] | 0 + } +} diff --git 
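The bulk of the hunks above perform one mechanical migration: every `new TaskResult(status, context)` / `new TaskResult(status, context, outputs)` constructor call becomes a `TaskResult.builder(status).context(...).outputs(...).build()` chain, and bare `new TaskResult(status)` becomes `TaskResult.ofStatus(status)` (or the `TaskResult.SUCCEEDED` constant). The sketch below is a simplified stand-in for Orca's real `TaskResult` class, written only to show the shape of the builder API the patch migrates to; the field and method names mirror the calls visible in the diff, but this is not the actual orca-core implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Orca's TaskResult, illustrating the builder shape
// these hunks migrate to (builder/context/outputs/build, ofStatus).
// The real class lives in orca-core; this is a sketch for illustration.
class TaskResult {
    enum ExecutionStatus { SUCCEEDED, RUNNING, REDIRECT }

    final ExecutionStatus status;
    final Map<String, Object> context;
    final Map<String, Object> outputs;

    private TaskResult(ExecutionStatus status, Map<String, Object> context,
                       Map<String, Object> outputs) {
        this.status = status;
        this.context = context;
        this.outputs = outputs;
    }

    // Replaces `new TaskResult(status)` for results with no context or outputs.
    static TaskResult ofStatus(ExecutionStatus status) {
        return builder(status).build();
    }

    static Builder builder(ExecutionStatus status) {
        return new Builder(status);
    }

    static class Builder {
        private final ExecutionStatus status;
        private Map<String, Object> context = new HashMap<>();
        private Map<String, Object> outputs = new HashMap<>();

        Builder(ExecutionStatus status) { this.status = status; }

        Builder context(Map<String, Object> context) { this.context = context; return this; }
        Builder outputs(Map<String, Object> outputs) { this.outputs = outputs; return this; }
        TaskResult build() { return new TaskResult(status, context, outputs); }
    }

    public static void main(String[] args) {
        // Old style: new TaskResult(SUCCEEDED, [version: version], [version: version])
        // New style, as in the ResolveQuipVersionTask hunk:
        Map<String, Object> version = new HashMap<>();
        version.put("version", "1.2.3");
        TaskResult result = TaskResult.builder(ExecutionStatus.SUCCEEDED)
            .context(version)
            .outputs(version)
            .build();
        assert result.status == ExecutionStatus.SUCCEEDED;
        assert "1.2.3".equals(result.context.get("version"));
        System.out.println("builder migration sketch ok");
    }
}
```

The builder form avoids the ambiguity of the old positional constructors, where `new TaskResult(status, map)` left it unclear whether the map was context or outputs; each chained call names its argument explicitly.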
a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/job/PreconfiguredJobStageSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/job/PreconfiguredJobStageSpec.groovy new file mode 100644 index 0000000000..b74aa3289d --- /dev/null +++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/job/PreconfiguredJobStageSpec.groovy @@ -0,0 +1,58 @@ +/* + * Copyright 2019 Armory + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.job + +import com.netflix.spinnaker.orca.clouddriver.config.PreconfiguredJobStageParameter +import com.netflix.spinnaker.orca.clouddriver.service.JobService +import com.netflix.spinnaker.orca.clouddriver.config.KubernetesPreconfiguredJobProperties +import spock.lang.Specification + +import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage + +class PreconfiguredJobStageSpec extends Specification { + + def "should should replace properties in context"() { + given: + def jobService = Mock(JobService) { + 1 * getPreconfiguredStages() >> { + return [ + preconfiguredJobProperties + ] + } + } + + def stage = stage { + type = stageName + context = stageContext + } + + when: + PreconfiguredJobStage preconfiguredJobStage = new PreconfiguredJobStage(Optional.of(jobService)) + preconfiguredJobStage.buildTaskGraph(stage) + + then: + stage.getContext().get(expectedField) == expectedValue + + where: + expectedField | expectedValue | stageName | stageContext | preconfiguredJobProperties + "cloudProvider" | "kubernetes" | "testJob" | [account: "test-account"] | new KubernetesPreconfiguredJobProperties(enabled: true, label: "testJob", type: "testJob", parameters: [], cloudProvider: "kubernetes") + "cloudProvider" | "titus" | "testJob" | [account: "test-account"] | new KubernetesPreconfiguredJobProperties(enabled: true, label: "testJob", type: "testJob", parameters: [new PreconfiguredJobStageParameter(mapping: "cloudProvider", defaultValue: "titus")], cloudProvider: "kubernetes") + "cloudProvider" | "somethingElse" | "testJob" | [account: "test-account", parameters: ["cloudProvider": "somethingElse"]] | new KubernetesPreconfiguredJobProperties(enabled: true, label: "testJob", type: "testJob", parameters: [new PreconfiguredJobStageParameter(mapping: "cloudProvider", defaultValue: "titus", "name": "cloudProvider")], cloudProvider: "kubernetes") + + + } +} diff --git 
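The data table in `PreconfiguredJobStageSpec` above exercises a precedence order: an explicit parameter in the stage context wins over a preconfigured parameter's `defaultValue`, which in turn wins over the value baked into the job properties themselves. The sketch below illustrates that precedence only; `resolve` and its signature are hypothetical, not Orca's actual `PreconfiguredJobStage` implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the precedence the PreconfiguredJobStageSpec data
// table asserts. The names (mapping, defaultValue) follow the spec; the
// resolution logic here is an illustration, not the real implementation.
class PreconfiguredParameterSketch {
    static Object resolve(String field, Object propertyValue,
                          Map<String, Object> parameterDefaults,
                          Map<String, Object> stageParameters) {
        if (stageParameters.containsKey(field)) return stageParameters.get(field); // user-supplied wins
        if (parameterDefaults.containsKey(field)) return parameterDefaults.get(field); // preconfigured default
        return propertyValue; // fall back to the job property's own field
    }

    public static void main(String[] args) {
        Map<String, Object> defaults = new HashMap<>();
        Map<String, Object> stageParams = new HashMap<>();

        // Row 1: no parameters -> the property value survives
        assert "kubernetes".equals(resolve("cloudProvider", "kubernetes", defaults, stageParams));

        // Row 2: a defaultValue on the parameter overrides the property
        defaults.put("cloudProvider", "titus");
        assert "titus".equals(resolve("cloudProvider", "kubernetes", defaults, stageParams));

        // Row 3: an explicit stage parameter overrides both
        stageParams.put("cloudProvider", "somethingElse");
        assert "somethingElse".equals(resolve("cloudProvider", "kubernetes", defaults, stageParams));

        System.out.println("precedence sketch ok");
    }
}
```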
a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/CFRollingRedBlackStrategyTest.java b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/CFRollingRedBlackStrategyTest.java new file mode 100644 index 0000000000..94809428cb --- /dev/null +++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/servergroup/strategies/CFRollingRedBlackStrategyTest.java @@ -0,0 +1,405 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.strategies; + +import com.fasterxml.jackson.databind.ObjectMapper; +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import com.netflix.spinnaker.kork.dynamicconfig.SpringDynamicConfigService; +import com.netflix.spinnaker.moniker.Moniker; +import com.netflix.spinnaker.orca.clouddriver.OortService; +import com.netflix.spinnaker.orca.clouddriver.pipeline.cluster.DisableClusterStage; +import com.netflix.spinnaker.orca.clouddriver.pipeline.cluster.ShrinkClusterStage; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.CreateServerGroupStage; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.ResizeServerGroupStage; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.DetermineTargetServerGroupStage; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.TargetServerGroup; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.TargetServerGroupResolver; +import com.netflix.spinnaker.orca.front50.pipeline.PipelineStage; +import com.netflix.spinnaker.orca.kato.pipeline.support.ResizeStrategy; +import com.netflix.spinnaker.orca.kato.pipeline.support.ResizeStrategySupport; +import com.netflix.spinnaker.orca.pipeline.WaitStage; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver; +import org.jetbrains.annotations.NotNull; +import org.junit.jupiter.api.Test; +import org.springframework.core.env.Environment; +import org.springframework.mock.env.MockEnvironment; +import retrofit.client.Response; +import retrofit.mime.TypedInput; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.util.*; +import java.util.stream.Collectors; +import java.util.stream.Stream; + +import static 
com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; +import static java.util.Collections.*; +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.*; + + +class CFRollingRedBlackStrategyTest { + private CFRollingRedBlackStrategy strategy; + private Environment env = new MockEnvironment(); + private SpringDynamicConfigService springDynamicConfigService = new SpringDynamicConfigService(); + private PipelineStage pipelineStage = mock(PipelineStage.class); + private ResizeStrategySupport resizeStrategySupport = mock(ResizeStrategySupport.class); + private ArtifactResolver artifactResolver = mock(ArtifactResolver.class); + private OortService oortService = mock(OortService.class); + private TargetServerGroupResolver targetServerGroupResolver = mock(TargetServerGroupResolver.class); + private final ResizeStrategy.Capacity zeroCapacity = new ResizeStrategy.Capacity(0, 0, 0); + private final ObjectMapper objectMapper = new ObjectMapper(); + + { + springDynamicConfigService.setEnvironment(env); + strategy = new CFRollingRedBlackStrategy( + null, + artifactResolver, + Optional.of(pipelineStage), + resizeStrategySupport, + targetServerGroupResolver, + objectMapper, + oortService); + } + + @Test + void composeFlowWithDelayBeforeScaleDown() { + List targetPercentageList = Stream.of(50, 100).collect(Collectors.toList()); + Map direct = new HashMap<>(); + direct.put("instances", 4); + direct.put("memory", "64M"); + Map manifest = new HashMap<>(); + manifest.put("direct", direct); + Map source = createSource(); + + Map context = createBasicContext(); + context.put("manifest", manifest); + context.put("targetPercentages", targetPercentageList); + context.put("source", source); + context.put("delayBeforeScaleDownSec", 5L); + Stage deployServerGroupStage = new Stage(new Execution(PIPELINE, "unit"), CreateServerGroupStage.PIPELINE_CONFIG_TYPE, context); + 
when(targetServerGroupResolver.resolve(any())).thenReturn(singletonList(new TargetServerGroup(Collections.emptyMap()))); + + List stages = strategy.composeFlow(deployServerGroupStage); + + assertThat(stages.stream().map(Stage::getType)) + .containsExactly( + DetermineTargetServerGroupStage.PIPELINE_CONFIG_TYPE, + ResizeServerGroupStage.TYPE, + ResizeServerGroupStage.TYPE, + ResizeServerGroupStage.TYPE, + ResizeServerGroupStage.TYPE, + ShrinkClusterStage.STAGE_TYPE, + DisableClusterStage.STAGE_TYPE, + ResizeServerGroupStage.TYPE, + WaitStage.STAGE_TYPE + ); + assertThat(stages.stream().filter(stage -> stage.getType().equals(WaitStage.STAGE_TYPE)) + .map(stage -> stage.getContext().get("waitTime"))) + .containsExactly(5L); + } + + @Test + void composeFlowWithDelayBeforeCleanup() { + List targetPercentageList = Stream.of(50, 100).collect(Collectors.toList()); + Map direct = new HashMap<>(); + direct.put("instances", 4); + direct.put("memory", "64M"); + Map manifest = new HashMap<>(); + manifest.put("direct", direct); + Map source = createSource(); + + Map context = createBasicContext(); + context.put("manifest", manifest); + context.put("targetPercentages", targetPercentageList); + context.put("source", source); + context.put("delayBeforeCleanup", 5L); + Stage deployServerGroupStage = new Stage(new Execution(PIPELINE, "unit"), CreateServerGroupStage.PIPELINE_CONFIG_TYPE, context); + + when(targetServerGroupResolver.resolve(any())).thenReturn(singletonList(new TargetServerGroup(Collections.emptyMap()))); + + List stages = strategy.composeFlow(deployServerGroupStage); + + assertThat(stages.stream().map(Stage::getType)) + .containsExactly( + DetermineTargetServerGroupStage.PIPELINE_CONFIG_TYPE, + ResizeServerGroupStage.TYPE, + WaitStage.STAGE_TYPE, + ResizeServerGroupStage.TYPE, + ResizeServerGroupStage.TYPE, + WaitStage.STAGE_TYPE, + ResizeServerGroupStage.TYPE, + ShrinkClusterStage.STAGE_TYPE, + DisableClusterStage.STAGE_TYPE, + ResizeServerGroupStage.TYPE + ); + 
+    assertThat(stages.stream().filter(stage -> stage.getType().equals(WaitStage.STAGE_TYPE))
+        .map(stage -> stage.getContext().get("waitTime")))
+        .containsExactly(5L, 5L);
+  }
+
+  @Test
+  void composeFlowWithNoSourceAndManifestDirect() {
+    List targetPercentageList = Stream.of(50, 75, 100).collect(Collectors.toList());
+    Map direct = new HashMap<>();
+    direct.put("instances", 4);
+    direct.put("memory", "64M");
+    Map manifest = new HashMap<>();
+    manifest.put("direct", direct);
+
+    Map context = createBasicContext();
+    context.put("manifest", manifest);
+    context.put("targetPercentages", targetPercentageList);
+
+    Map expectedDirect = new HashMap<>();
+    expectedDirect.put("memory", "64M");
+    expectedDirect.put("instances", 1);
+    Map expectedManifest = Collections.singletonMap("direct", expectedDirect);
+    ResizeStrategy.Capacity resizeTo4Capacity = new ResizeStrategy.Capacity(4, 4, 4);
+    Stage deployServerGroupStage = new Stage(new Execution(PIPELINE, "unit"), CreateServerGroupStage.PIPELINE_CONFIG_TYPE, context);
+
+    List stages = strategy.composeFlow(deployServerGroupStage);
+
+    assertThat(stages.stream().map(stage -> stage.getContext().get("capacity")))
+        .containsExactly(null, resizeTo4Capacity, resizeTo4Capacity, resizeTo4Capacity);
+    assertThat(stages.stream().map(stage -> stage.getContext().get("scalePct")))
+        .containsExactly(null, 50, 75, 100);
+    assertThat(stages.stream().map(Stage::getType))
+        .containsExactly(
+            DetermineTargetServerGroupStage.PIPELINE_CONFIG_TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE
+        );
+    assertThat(deployServerGroupStage.getContext().get("targetSize")).isNull();
+    assertThat(deployServerGroupStage.getContext().get("useSourceCapacity")).isNull();
+    assertThat(deployServerGroupStage.getContext().get("capacity")).isEqualTo(zeroCapacity);
+    assertThat(deployServerGroupStage.getContext().get("manifest")).isEqualTo(expectedManifest);
+    verifyZeroInteractions(artifactResolver);
+    verifyZeroInteractions(oortService);
+  }
+
+  @Test
+  void composeFlowWithSourceAndManifestDirect() {
+    List targetPercentageList = Stream.of(50, 75, 100).collect(Collectors.toList());
+    Map context = createBasicContext();
+    Map direct = new HashMap<>();
+    direct.put("instances", 4);
+    direct.put("memory", "64M");
+    Map manifest = new HashMap<>();
+    manifest.put("direct", direct);
+    context.put("manifest", manifest);
+    context.put("targetPercentages", targetPercentageList);
+    context.put("source", createSource());
+    Stage deployServerGroupStage = new Stage(new Execution(PIPELINE, "unit"), CreateServerGroupStage.PIPELINE_CONFIG_TYPE, context);
+    ResizeStrategy.Capacity initialSourceCapacity = new ResizeStrategy.Capacity(8, 8, 8);
+
+    when(targetServerGroupResolver.resolve(any())).thenReturn(singletonList(new TargetServerGroup(Collections.emptyMap())));
+    when(resizeStrategySupport.getCapacity(any(), any(), any(), any())).thenReturn(initialSourceCapacity);
+
+    Map expectedDirect = new HashMap<>();
+    expectedDirect.put("memory", "64M");
+    expectedDirect.put("instances", 1);
+    Map expectedManifest = Collections.singletonMap("direct", expectedDirect);
+
+    List stages = strategy.composeFlow(deployServerGroupStage);
+
+    ResizeStrategy.Capacity resizeTo4Capacity = new ResizeStrategy.Capacity(4, 4, 4);
+    assertThat(stages.stream().map(Stage::getType))
+        .containsExactly(
+            DetermineTargetServerGroupStage.PIPELINE_CONFIG_TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ShrinkClusterStage.STAGE_TYPE,
+            DisableClusterStage.STAGE_TYPE,
+            ResizeServerGroupStage.TYPE
+        );
+    assertThat(stages.stream().map(stage -> stage.getContext().get("scalePct")))
+        .containsExactly(null, 50, 50, 75, 25, 100, 0, null, null, 100);
+    assertThat(stages.stream().map(stage -> stage.getContext().get("capacity")))
+        .containsExactly(null, resizeTo4Capacity, initialSourceCapacity, resizeTo4Capacity, initialSourceCapacity,
+            resizeTo4Capacity, initialSourceCapacity, null, null, initialSourceCapacity);
+    assertThat(deployServerGroupStage.getContext().get("targetSize")).isNull();
+    assertThat(deployServerGroupStage.getContext().get("useSourceCapacity")).isNull();
+    assertThat(deployServerGroupStage.getContext().get("capacity")).isEqualTo(zeroCapacity);
+    assertThat(deployServerGroupStage.getContext().get("manifest")).isEqualTo(expectedManifest);
+    verifyZeroInteractions(artifactResolver);
+    verifyZeroInteractions(oortService);
+  }
+
+  @Test
+  void composeFlowWithNoSourceAndManifestArtifactConvertsManifestToDirect() throws IOException {
+    String artifactId = "artifact-id";
+    Map manifest = new HashMap<>();
+    manifest.put("artifactId", artifactId);
+    manifest.put("artifact", new HashMap<>());
+    Map context = createBasicContext();
+    List targetPercentageList = Stream.of(50, 75, 100).collect(Collectors.toList());
+    context.put("manifest", manifest);
+    context.put("targetPercentages", targetPercentageList);
+    Artifact boundArtifactForStage = new Artifact();
+    Map application = new HashMap<>();
+    application.put("instances", "4");
+    application.put("memory", "64M");
+    application.put("diskQuota", "128M");
+    Map body = singletonMap("applications", singletonList(application));
+    Response oortServiceResponse = createMockOortServiceResponse(body);
+    Stage deployServerGroupStage = new Stage(new Execution(PIPELINE, "unit"), CreateServerGroupStage.PIPELINE_CONFIG_TYPE, context);
+    ResizeStrategy.Capacity resizeTo4Capacity = new ResizeStrategy.Capacity(4, 4, 4);
+
+    when(artifactResolver.getBoundArtifactForStage(any(), any(), any())).thenReturn(boundArtifactForStage);
+    when(oortService.fetchArtifact(any())).thenReturn(oortServiceResponse);
+
+    Map expectedDirect = new HashMap<>();
+    expectedDirect.put("memory", "64M");
+    expectedDirect.put("diskQuota", "128M");
+    expectedDirect.put("instances", 1);
+    Map expectedManifest = Collections.singletonMap("direct", expectedDirect);
+
+    List stages = strategy.composeFlow(deployServerGroupStage);
+
+    assertThat(stages.stream().map(stage -> stage.getContext().get("capacity")))
+        .containsExactly(null, resizeTo4Capacity, resizeTo4Capacity, resizeTo4Capacity);
+    assertThat(stages.stream().map(stage -> stage.getContext().get("scalePct")))
+        .containsExactly(null, 50, 75, 100);
+    assertThat(stages.stream().map(Stage::getType))
+        .containsExactly(
+            DetermineTargetServerGroupStage.PIPELINE_CONFIG_TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE
+        );
+    assertThat(deployServerGroupStage.getContext().get("targetSize")).isNull();
+    assertThat(deployServerGroupStage.getContext().get("useSourceCapacity")).isNull();
+    assertThat(deployServerGroupStage.getContext().get("capacity")).isEqualTo(zeroCapacity);
+    assertThat(deployServerGroupStage.getContext().get("manifest")).isEqualTo(expectedManifest);
+    verify(artifactResolver).getBoundArtifactForStage(deployServerGroupStage, artifactId, new Artifact());
+    verify(oortService).fetchArtifact(boundArtifactForStage);
+  }
+
+  @Test
+  void composeFlowWithSourceAndManifestArtifactConvertsManifestToDirect() throws IOException {
+    String artifactId = "artifact-id";
+    Map artifact = new HashMap<>();
+    Map manifest = new HashMap<>();
+    manifest.put("artifactId", artifactId);
+    manifest.put("artifact", artifact);
+    Map expectedDirect = new HashMap<>();
+    expectedDirect.put("memory", "64M");
+    expectedDirect.put("diskQuota", "128M");
+    expectedDirect.put("instances", 1);
+    Map expectedManifest = Collections.singletonMap("direct", expectedDirect);
+    List targetPercentageList = Stream.of(50, 75, 100).collect(Collectors.toList());
+    Map context = createBasicContext();
+    context.put("manifest", manifest);
+    context.put("targetPercentages", targetPercentageList);
+    context.put("source", createSource());
+    ResizeStrategy.Capacity resizeTo4Capacity = new ResizeStrategy.Capacity(4, 4, 4);
+    ResizeStrategy.Capacity initialSourceCapacity = new ResizeStrategy.Capacity(8, 8, 8);
+
+    Stage deployServerGroupStage = new Stage(new Execution(PIPELINE, "unit"), "type", context);
+    Artifact boundArtifactForStage = new Artifact();
+    Map application = new HashMap<>();
+    application.put("instances", "4");
+    application.put("memory", "64M");
+    application.put("diskQuota", "128M");
+    Map body = singletonMap("applications", singletonList(application));
+    Response oortServiceResponse = createMockOortServiceResponse(body);
+
+    when(targetServerGroupResolver.resolve(any())).thenReturn(singletonList(new TargetServerGroup(Collections.emptyMap())));
+    when(resizeStrategySupport.getCapacity(any(), any(), any(), any())).thenReturn(initialSourceCapacity);
+    when(artifactResolver.getBoundArtifactForStage(any(), any(), any())).thenReturn(boundArtifactForStage);
+    when(oortService.fetchArtifact(any())).thenReturn(oortServiceResponse);
+
+    List stages = strategy.composeFlow(deployServerGroupStage);
+
+    assertThat(stages.stream().map(Stage::getType))
+        .containsExactly(
+            DetermineTargetServerGroupStage.PIPELINE_CONFIG_TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ResizeServerGroupStage.TYPE,
+            ShrinkClusterStage.STAGE_TYPE,
+            DisableClusterStage.STAGE_TYPE,
+            ResizeServerGroupStage.TYPE
+        );
+
+    assertThat(stages.stream().map(stage -> stage.getContext().get("capacity")))
+        .containsExactly(null, resizeTo4Capacity, initialSourceCapacity, resizeTo4Capacity,
+            initialSourceCapacity, resizeTo4Capacity, initialSourceCapacity, null, null, initialSourceCapacity);
+    assertThat(stages.stream().map(stage -> stage.getContext().get("scalePct")))
+        .containsExactly(null, 50, 50, 75, 25, 100, 0, null, null, 100);
+    assertThat(deployServerGroupStage.getContext().get("targetSize")).isNull();
+    assertThat(deployServerGroupStage.getContext().get("useSourceCapacity")).isNull();
+    assertThat(deployServerGroupStage.getContext().get("capacity")).isEqualTo(zeroCapacity);
+    assertThat(deployServerGroupStage.getContext().get("manifest")).isEqualTo(expectedManifest);
+    verify(artifactResolver).getBoundArtifactForStage(deployServerGroupStage, artifactId, new Artifact());
+    verify(oortService).fetchArtifact(boundArtifactForStage);
+  }
+
+  @NotNull
+  private Response createMockOortServiceResponse(Object body) throws IOException {
+    InputStream inputStream = new ByteArrayInputStream(objectMapper.writeValueAsBytes(body));
+    TypedInput typedInput = mock(TypedInput.class);
+    when(typedInput.in()).thenReturn(inputStream);
+    return new Response("url", 200, "success", emptyList(), typedInput);
+  }
+
+  @NotNull
+  private Map createSource() {
+    Map source = new HashMap<>();
+    source.put("account", "account");
+    source.put("region", "org > space");
+    source.put("asgName", "asg-name");
+    source.put("serverGroupName", "server-group-name");
+    return source;
+  }
+
+  private Map createBasicContext() {
+    Moniker moniker = new Moniker("unit", null, null, "test0", null);
+    Map rollbackOnFailure = Collections.singletonMap("onFailure", true);
+
+    Map context = new HashMap<>();
+    context.put("account", "testAccount");
+    context.put("application", "unit");
+    context.put("cloudProvider", "cloudfoundry");
+    context.put("freeFormDetails", "detail");
+    context.put("maxRemainingAsgs", 2);
+    context.put("moniker", moniker);
+    context.put("name", "Deploy in test > test");
+    context.put("provider", "cloudfoundry");
+    context.put("region", "test > test");
+    context.put("rollback", rollbackOnFailure);
+    context.put("scaleDown", "false");
+    context.put("stack", "test0");
+    context.put("startApplication", "true");
+    context.put("strategy", "cfrollingredblack");
+    context.put("type", "createServerGroup");
+    return context;
+  }
+}
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MigratePipelineClustersTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MigratePipelineClustersTaskSpec.groovy
deleted file mode 100644
index d23dbe8c86..0000000000
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/MigratePipelineClustersTaskSpec.groovy
+++ /dev/null
@@ -1,85 +0,0 @@
-/*
- * Copyright 2016 Netflix, Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License")
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package com.netflix.spinnaker.orca.clouddriver.tasks
-
-import com.netflix.spinnaker.orca.ExecutionStatus
-import com.netflix.spinnaker.orca.clouddriver.KatoService
-import com.netflix.spinnaker.orca.clouddriver.model.TaskId
-import com.netflix.spinnaker.orca.clouddriver.tasks.pipeline.MigratePipelineClustersTask
-import com.netflix.spinnaker.orca.front50.Front50Service
-import com.netflix.spinnaker.orca.kato.pipeline.ParallelDeployClusterExtractor
-import com.netflix.spinnaker.orca.pipeline.model.Execution
-import com.netflix.spinnaker.orca.pipeline.model.Stage
-import spock.lang.Specification
-import spock.lang.Subject
-
-class MigratePipelineClustersTaskSpec extends Specification {
-
-  @Subject
-  MigratePipelineClustersTask task = new MigratePipelineClustersTask()
-
-  Front50Service front50Service
-  KatoService katoService
-
-  void setup() {
-    front50Service = Mock()
-    katoService = Mock()
-    task.front50Service = front50Service
-    task.katoService = katoService
-    task.extractors = [new ParallelDeployClusterExtractor()]
-  }
-
-  void 'returns terminal status when pipeline not found'() {
-    when:
-    def result = task.execute(new Stage(null, "migratePipelineCluster", "migrate", [pipelineConfigId: 'abc', application: 'theapp']))
-
-    then:
-    1 * front50Service.getPipelines('theapp') >> [[id: 'def']]
-    result.status == ExecutionStatus.TERMINAL
-    result.context.exception == "Could not find pipeline with ID abc"
-  }
-
-  void 'extracts clusters, sends them to clouddriver, and puts pipeline into context for later retrieval'() {
-    given:
-    def pipeline = [
-      id    : 'abc',
-      stages: [
-        [type: 'deploy', clusters: [[id: 1], [id: 2]]],
-        [type: 'not-a-deploy', clusters: [[id: 3], [id: 4]]],
-        [type: 'deploy', clusters: [[id: 5], [id: 6]]]
-      ]
-    ]
-    def context = [
-      pipelineConfigId: 'abc',
-      application     : 'theapp',
-      regionMapping   : ['us-east-1': 'us-west-1']
-    ]
-    def stage = new Stage(Execution.newPipeline("orca"), "mpc", "m", context)
-
-    when:
-    def result = task.execute(stage)
-
-    then:
-    1 * front50Service.getPipelines('theapp') >> [pipeline]
-    1 * katoService.requestOperations('aws', {
-      it[0].migrateClusterConfigurations.sources.cluster.id == [1,2,3,4]
-      it[0].migrateClusterConfigurations.regionMapping == ['us-east-1': 'us-west-1']
-    }) >> rx.Observable.from([new TaskId(id: "1")])
-    result.context['source.pipeline'] == pipeline
-  }
-
-}
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/UpdateMigratedPipelineTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/UpdateMigratedPipelineTaskSpec.groovy
deleted file mode 100644
index 9c6b67cb5c..0000000000
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/UpdateMigratedPipelineTaskSpec.groovy
+++ /dev/null
@@ -1,80 +0,0 @@
-/*
- * Copyright 2016 Netflix, Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License")
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package com.netflix.spinnaker.orca.clouddriver.tasks
-
-import com.netflix.spinnaker.orca.ExecutionStatus
-import com.netflix.spinnaker.orca.clouddriver.KatoService
-import com.netflix.spinnaker.orca.clouddriver.tasks.pipeline.UpdateMigratedPipelineTask
-import com.netflix.spinnaker.orca.front50.Front50Service
-import com.netflix.spinnaker.orca.kato.pipeline.ParallelDeployClusterExtractor
-import com.netflix.spinnaker.orca.pipeline.model.Execution
-import com.netflix.spinnaker.orca.pipeline.model.Stage
-import spock.lang.Specification
-import spock.lang.Subject
-
-class UpdateMigratedPipelineTaskSpec extends Specification {
-
-  @Subject
-  UpdateMigratedPipelineTask task = new UpdateMigratedPipelineTask()
-
-  Front50Service front50Service
-  KatoService katoService
-  Map pipeline
-
-  void setup() {
-    front50Service = Mock()
-    katoService = Mock()
-    task.front50Service = front50Service
-    task.extractors = [new ParallelDeployClusterExtractor()]
-    pipeline = [
-      id    : 'abc',
-      name  : 'to migrate',
-      stages: [
-        [type: 'deploy', clusters: [[id: 1], [id: 2]]],
-        [type: 'not-a-deploy', clusters: [[id: 3], [id: 4]]],
-        [type: 'deploy', clusters: [[id: 5], [id: 6]]]
-      ]
-    ]
-  }
-
-  void 'applies new clusters to pipeline, removes id, updates name, saves, adds new ID to output'() {
-    when:
-    def context = [
-      application      : 'theapp',
-      "source.pipeline": pipeline,
-      "kato.tasks"     : [
-        [
-          resultObjects: [
-            [cluster: [id: 10]], [cluster: [id: 11]], [cluster: [id: 12]], [cluster: [id: 13]],
-          ]
-        ]
-      ]
-    ]
-    def result = task.execute(new Stage(Execution.newPipeline("orca"), "updatePipeline", "migrate", context))
-
-    then:
-    1 * front50Service.savePipeline({
-      it.id == null
-      it.stages[0].clusters.id == [10, 11]
-      it.stages[1].clusters.id == [3, 4]
-      it.stages[2].clusters.id == [12, 13]
-    })
-    1 * front50Service.getPipelines("theapp") >> [pipeline.clone(), [id: 'def', name: 'to migrate - migrated']]
-    result.status == ExecutionStatus.SUCCEEDED
-  }
-
-}
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/FindImageFromClusterTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/FindImageFromClusterTaskSpec.groovy
index 8aa9708da3..ca8fbbdd6c 100644
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/FindImageFromClusterTaskSpec.groovy
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/cluster/FindImageFromClusterTaskSpec.groovy
@@ -182,6 +182,7 @@ class FindImageFromClusterTaskSpec extends Specification {
     ['foo-x86_64-201603232351'] | ['foo-x86_64-201603232351'] as Set
   }
 
+  @Unroll
   def "should resolve images via find if not all regions exist in source server group"() {
     given:
     def pipe = pipeline {
@@ -189,7 +190,7 @@
     }
     def stage = new Stage(pipe, "findImage", [
       resolveMissingLocations: true,
-      cloudProvider          : "cloudProvider",
+      cloudProvider          : cloudProvider,
       cluster                : "foo-test",
       account                : "test",
       selectionStrategy      : "LARGEST",
@@ -204,21 +205,28 @@
     def result = task.execute(stage)
 
     then:
-    1 * oortService.getServerGroupSummary("foo", "test", "foo-test", "cloudProvider", location1.value,
+    1 * oortService.getServerGroupSummary("foo", "test", "foo-test", cloudProvider, location1.value,
       "LARGEST", FindImageFromClusterTask.SUMMARY_TYPE, false.toString()) >> oortResponse1
-    1 * oortService.getServerGroupSummary("foo", "test", "foo-test", "cloudProvider", location2.value,
+    findCalls * oortService.getServerGroupSummary("foo", "test", "foo-test", cloudProvider, location2.value,
       "LARGEST", FindImageFromClusterTask.SUMMARY_TYPE, false.toString()) >> {
       throw RetrofitError.httpError("http://clouddriver", new Response("http://clouddriver", 404, 'Not Found', [], new TypedString("{}")), null, Map)
     }
-    1 * oortService.findImage("cloudProvider", "ami-012-name-ebs*", "test", null, null) >> imageSearchResult
-    1 * regionCollector.getRegionsFromChildStages(stage) >> regionCollectorResponse
+    findCalls * oortService.findImage(cloudProvider, "ami-012-name-ebs*", "test", null, null) >> imageSearchResult
+    findCalls * regionCollector.getRegionsFromChildStages(stage) >> regionCollectorResponse
 
     assertNorth(result.outputs?.deploymentDetails?.find {
       it.region == "north"
     } as Map, [imageName: "ami-012-name-ebs"])
-    assertSouth(result.outputs?.deploymentDetails?.find {
-      it.region == "south"
-    } as Map, [sourceServerGroup: "foo-test", imageName: "ami-012-name-ebs1", foo: "bar"])
+
+    if (cloudProvider == "aws") {
+      assertSouth(result.outputs?.deploymentDetails?.find {
+        it.region == "south"
+      } as Map, [sourceServerGroup: "foo-test", imageName: "ami-012-name-ebs1", foo: "bar"])
+    } else {
+      assert !result.outputs?.deploymentDetails?.any {
+        it.region == "south"
+      }
+    }
 
     where:
     location1 = new Location(type: Location.Type.REGION, value: "north")
@@ -250,6 +258,10 @@
       ]
     ]
   ]
+
+    cloudProvider || findCalls
+    "aws"         || 1
+    "gcp"         || 0
   }
 
   def "should resolve images via find if not all regions exist in source server group without build info"() {
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/conditions/EvaluateConditionTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/conditions/EvaluateConditionTaskSpec.groovy
new file mode 100644
index 0000000000..a2acbe4e3e
--- /dev/null
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/conditions/EvaluateConditionTaskSpec.groovy
@@ -0,0 +1,146 @@
+/*
+ * Copyright 2019 Netflix, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License")
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.tasks.conditions
+
+import com.netflix.spectator.api.NoopRegistry
+import com.netflix.spinnaker.orca.ExecutionStatus
+import com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.Condition
+import com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.ConditionConfigurationProperties
+import com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.ConditionSupplier
+import com.netflix.spinnaker.orca.time.MutableClock
+import spock.lang.Specification
+import spock.lang.Subject
+import spock.lang.Unroll
+
+import java.time.Duration
+
+import static com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.WaitForConditionStage.*
+import static com.netflix.spinnaker.orca.clouddriver.pipeline.conditions.WaitForConditionStage.WaitForConditionContext.Status.*
+import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage
+
+class EvaluateConditionTaskSpec extends Specification {
+  def conditionSupplier = Mock(ConditionSupplier)
+  def conditionsConfigurationProperties = Mock(ConditionConfigurationProperties)
+  def clock = new MutableClock()
+
+  @Subject
+  def task = new EvaluateConditionTask(
+    conditionsConfigurationProperties,
+    [conditionSupplier],
+    new NoopRegistry(),
+    clock
+  )
+
+  def "should wait for conditions"() {
+    given:
+    def stage = stage {
+      type = STAGE_TYPE
+      startTime = clock.millis()
+      context = [
+        status: WAITING.toString(),
+        region: "region",
+        cluster: "cluster",
+        account: "account"
+      ]
+    }
+
+    and:
+    conditionsConfigurationProperties.getBackoffWaitMs() >> 5
+
+    when:
+    def result = task.execute(stage)
+
+    then:
+    0 * conditionSupplier.getConditions("cluster", "region", "account")
+    result.status == ExecutionStatus.RUNNING
+
+    when:
+    clock.incrementBy(Duration.ofMillis(5))
+
+    and:
+    result = task.execute(stage)
+
+    then:
+    1 * conditionSupplier.getConditions(
+      "cluster",
+      "region",
+      "account"
+    ) >> [new Condition("a", "b")]
+    result.status == ExecutionStatus.RUNNING
+
+    when:
+    result = task.execute(stage)
+
+    then:
+    1 * conditionSupplier.getConditions("cluster", "region", "account") >> []
+    result.status == ExecutionStatus.SUCCEEDED
+
+    when:
+    stage.context.status = SKIPPED
+    result = task.execute(stage)
+
+    then:
+    result.status == ExecutionStatus.SUCCEEDED
+    0 * conditionSupplier.getConditions(_, _, _)
+
+    when:
+    stage.context.status = WAITING
+    1 * conditionsConfigurationProperties.isSkipWait() >> true
+
+    and:
+    result = task.execute(stage)
+
+    then:
+    result.status == ExecutionStatus.SUCCEEDED
+    result.context.status == SKIPPED
+    0 * conditionSupplier.getConditions(_, _, _)
+  }
+
+  @Unroll
+  def "should wait for conditions reflecting wait status"() {
+    given:
+    def stage = stage {
+      refId = "1"
+      type = STAGE_TYPE
+      startTime = clock.millis()
+      context = [
+        status: initialWaitStatus.toString(),
+        region: "region",
+        cluster: "cluster",
+        account: "account"
+      ]
+    }
+
+    and:
+    conditionsConfigurationProperties.getBackoffWaitMs() >> 4
+    clock.incrementBy(Duration.ofMillis(5))
+
+    when:
+    def result = task.execute(stage)
+
+    then:
+    conditionSupplier.getConditions("cluster", "region", "account") >> conditions
+    result.status == executionStatus
+
+    where:
+    initialWaitStatus | conditions                | executionStatus
+    WAITING           | []                        | ExecutionStatus.SUCCEEDED
+    SKIPPED           | []                        | ExecutionStatus.SUCCEEDED
+    WAITING           | [new Condition("n", "d")] | ExecutionStatus.RUNNING
+    SKIPPED           | [new Condition("n", "d")] | ExecutionStatus.SUCCEEDED
+  }
+}
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForUpInstancesTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForUpInstancesTaskSpec.groovy
index c7aba61744..fbb1923327 100644
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForUpInstancesTaskSpec.groovy
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/instance/WaitForUpInstancesTaskSpec.groovy
@@ -21,12 +21,15 @@ import com.netflix.spinnaker.orca.clouddriver.OortService
 import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper
 import com.netflix.spinnaker.orca.pipeline.model.Execution
 import com.netflix.spinnaker.orca.pipeline.model.Stage
+import org.slf4j.MDC
 import retrofit.client.Response
 import retrofit.mime.TypedString
 import spock.lang.Specification
 import spock.lang.Subject
 import spock.lang.Unroll
 
+import java.util.concurrent.TimeUnit
+
 import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage
 
 class WaitForUpInstancesTaskSpec extends Specification {
@@ -40,6 +43,10 @@ class WaitForUpInstancesTaskSpec extends Specification {
 
   def mapper = OrcaObjectMapper.newInstance()
 
+  void cleanup() {
+    MDC.clear()
+  }
+
   void "should check cluster to get server groups"() {
     given:
     def pipeline = Execution.newPipeline("orca")
@@ -504,14 +511,19 @@ class WaitForUpInstancesTaskSpec extends Specification {
 
     def serverGroup = [name: "app-v001", region: "us-west-2", capacity: serverGroupCapacity]
 
+    and:
+    MDC.put("taskStartTime", taskStartTime.toString())
+
     expect:
     WaitForUpInstancesTask.getServerGroupCapacity(stage, serverGroup) == expectedServerGroupCapacity
 
     where:
-    katoTasks | serverGroupCapacity || expectedServerGroupCapacity
-    null | [min: 0, max: 0, desired: 0] || [min: 0, max: 0, desired: 0]
-    [[resultObjects: [[deployments: [deployment("app-v001", "us-west-2", 0, 1, 1)]]]]] | [min: 0, max: 0, desired: 0] || [min: 0, max: 1, desired: 1] // should take initial capacity b/c max = 0
-    [[resultObjects: [[deployments: [deployment("app-v001", "us-west-2", 0, 1, 1)]]]]] | [min: 0, max: 2, desired: 2] || [min: 0, max: 2, desired: 2] // should take current capacity b/c max > 0
+    katoTasks | taskStartTime | serverGroupCapacity || expectedServerGroupCapacity
+    null | startTime(0) | [min: 0, max: 0, desired: 0] || [min: 0, max: 0, desired: 0]
+    [[resultObjects: [[deployments: [deployment("app-v001", "us-west-2", 0, 1, 1)]]]]] | startTime(9) | [min: 0, max: 0, desired: 0] || [min: 0, max: 1, desired: 1] // should take initial capacity b/c max = 0
+    [[resultObjects: [[deployments: [deployment("app-v001", "us-west-2", 0, 1, 1)]]]]] | startTime(9) | [min: 0, max: 400, desired: 0] || [min: 0, max: 1, desired: 1] // should take initial capacity b/c desired = 0
+    [[resultObjects: [[deployments: [deployment("app-v001", "us-west-2", 0, 1, 1)]]]]] | startTime(9) | [min: 0, max: 2, desired: 2] || [min: 0, max: 2, desired: 2] // should take current capacity b/c max > 0
+    [[resultObjects: [[deployments: [deployment("app-v001", "us-west-2", 0, 1, 1)]]]]] | startTime(11) | [min: 0, max: 0, desired: 0] || [min: 0, max: 0, desired: 0] // should take current capacity b/c timeout
   }
 
   static Map deployment(String serverGroupName, String location, int min, int max, int desired) {
@@ -519,4 +531,8 @@
       serverGroupName: serverGroupName, location: location, capacity: [min: min, max: max, desired: desired]
     ]
   }
+
+  static Long startTime(int minutesOld) {
+    return System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(minutesOld)
+  }
 }
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerForceRefreshTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerForceRefreshTaskSpec.groovy
index 5e80fcb146..f502af132d 100644
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerForceRefreshTaskSpec.groovy
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/loadbalancer/UpsertLoadBalancerForceRefreshTaskSpec.groovy
@@ -16,19 +16,34 @@
 
 package com.netflix.spinnaker.orca.clouddriver.tasks.loadbalancer
 
+import com.fasterxml.jackson.databind.ObjectMapper
+import com.netflix.spinnaker.kork.core.RetrySupport
+import com.netflix.spinnaker.orca.ExecutionStatus
 import com.netflix.spinnaker.orca.clouddriver.CloudDriverCacheService
+import com.netflix.spinnaker.orca.clouddriver.CloudDriverCacheStatusService
+import retrofit.client.Response
+import retrofit.mime.TypedString
 import spock.lang.Specification
 import spock.lang.Subject
 
 import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage
 
 class UpsertLoadBalancerForceRefreshTaskSpec extends Specification {
+  def cloudDriverCacheService = Mock(CloudDriverCacheService)
+  def cloudDriverCacheStatusService = Mock(CloudDriverCacheStatusService)
+
   @Subject
-  def task = new UpsertLoadBalancerForceRefreshTask()
+  def task = new UpsertLoadBalancerForceRefreshTask(
+    cloudDriverCacheService,
+    cloudDriverCacheStatusService,
+    new ObjectMapper(),
+    new NoSleepRetry()
+  )
+
   def stage = stage()
 
   def config = [
     targets: [
-      [credentials: "fzlem", availabilityZones: ["us-west-1": []], name: "flapjack-frontend"]
+      [credentials: "spinnaker", availabilityZones: ["us-west-1": []], name: "flapjack-frontend"]
     ]
   ]
@@ -37,18 +52,78 @@
   }
 
   void "should force cache refresh server groups via oort when name provided"() {
-    setup:
-    task.cacheService = Mock(CloudDriverCacheService)
+    when:
+    1 * cloudDriverCacheService.forceCacheUpdate('aws', 'LoadBalancer', _) >> {
+      String cloudProvider, String type, Map body ->
+        assert cloudProvider == "aws"
+        assert body.loadBalancerName == "flapjack-frontend"
+        assert body.account == "spinnaker"
+        assert body.region == "us-west-1"
+    }
+    def result = task.execute(stage)
+
+    then:
+    result.status == ExecutionStatus.SUCCEEDED
+    result.context.refreshState.hasRequested == true
+    result.context.refreshState.allAreComplete == true
+  }
+
+  def "checks for pending onDemand keys and awaits processing"() {
+    // Create the forceCacheUpdate request
     when:
-    task.execute(stage)
+    1 * cloudDriverCacheService.forceCacheUpdate('aws', 'LoadBalancer', _) >> {
+      new Response("/cache", 202, "OK", [], new TypedString("""
+        {"cachedIdentifiersByType":
+          {"loadBalancers": ["aws:loadBalancers:spinnaker:us-west-1:flapjack-frontend"]}
+        }"""))
+    }
+
+    def result = task.execute(stage)
 
     then:
-    1 * task.cacheService.forceCacheUpdate('aws', UpsertLoadBalancerForceRefreshTask.REFRESH_TYPE, _) >> { String cloudProvider, String type, Map body ->
-      assert cloudProvider == "aws"
-      assert body.loadBalancerName == "flapjack-frontend"
-      assert body.account == "fzlem"
-      assert body.region == "us-west-1"
+    result.status == ExecutionStatus.RUNNING
+    result.context.refreshState.hasRequested == true
+    result.context.refreshState.allAreComplete == false
+    result.context.refreshState.refreshIds == ["aws:loadBalancers:spinnaker:us-west-1:flapjack-frontend"]
+
+    // checks for pending, receives empty list and retries
+    when:
+    1 * cloudDriverCacheStatusService.pendingForceCacheUpdates('aws', 'LoadBalancer') >> { [] }
+    stage.context = result.context
+    result = task.execute(stage)
+
+    then:
+    result.status == ExecutionStatus.RUNNING
+    result.context.refreshState.attempt == 1
+    result.context.refreshState.seenPendingCacheUpdates == false
+
+    // sees a pending onDemand key for our load balancers
+    when:
+    1 * cloudDriverCacheStatusService.pendingForceCacheUpdates('aws', 'LoadBalancer') >> {
+      [[id: "aws:loadBalancers:spinnaker:us-west-1:flapjack-frontend"]] }
+
+    stage.context = result.context
+    result = task.execute(stage)
+
+    then:
+    result.status == ExecutionStatus.RUNNING
+    result.context.refreshState.attempt == 1 // has not incremented
+    result.context.refreshState.seenPendingCacheUpdates == true
+
+    // onDemand key has been processed, task completes
+    when:
+    1 * cloudDriverCacheStatusService.pendingForceCacheUpdates('aws', 'LoadBalancer') >> { [] }
+    stage.context = result.context
+    result = task.execute(stage)
+
+    then:
+    result.context.refreshState.allAreComplete == true
+    result.status == ExecutionStatus.SUCCEEDED
+  }
+
+  static class NoSleepRetry extends RetrySupport {
+    void sleep(long time) {}
+  }
 }
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestTaskSpec.groovy
new file mode 100644
index 0000000000..eb67aa855a
--- /dev/null
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/DeployManifestTaskSpec.groovy
@@ -0,0 +1,132 @@
+/*
+ * Copyright 2019 Google, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License")
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.clouddriver.tasks.manifest
+
+import com.fasterxml.jackson.databind.ObjectMapper
+import com.netflix.spinnaker.orca.clouddriver.KatoService
+import com.netflix.spinnaker.orca.clouddriver.OortService
+import com.netflix.spinnaker.orca.clouddriver.model.TaskId
+import com.netflix.spinnaker.orca.pipeline.model.Execution
+import com.netflix.spinnaker.orca.pipeline.model.Stage
+import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver
+import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor
+import rx.Observable
+import spock.lang.Specification
+import spock.lang.Subject
+
+class DeployManifestTaskSpec extends Specification {
+  def TASK_ID = "12345"
+
+  def katoService = Mock(KatoService)
+  def artifactResolver = Stub(ArtifactResolver) {
+    getArtifacts(*_) >> []
+  }
+
+  @Subject
+  DeployManifestTask task = new DeployManifestTask(
+    katoService,
+    Stub(OortService),
+    artifactResolver,
+    new ObjectMapper(),
+    Stub(ContextParameterProcessor)
+  )
+
+  def "enables traffic when the trafficManagement field is absent"() {
+    given:
+    def stage = createStage([:])
+
+    when:
+    task.execute(stage)
+
+    then:
+    1 * katoService.requestOperations("kubernetes", {
+      Map it -> it.deployManifest.enableTraffic == true && !it.deployManifest.services
+    }) >> Observable.from(new TaskId(TASK_ID))
+    0 * katoService._
+  }
+
+  def "enables traffic when trafficManagement is disabled"() {
+    given:
+    def stage = createStage([
+      trafficManagement: [
+        enabled: false
+      ]
+    ])
+
+    when:
+    task.execute(stage)
+
+    then:
+    1 * katoService.requestOperations("kubernetes", {
+      Map it -> it.deployManifest.enableTraffic == true && !it.deployManifest.services
+    }) >> Observable.from(new TaskId(TASK_ID))
+    0 * katoService._
+  }
+
+  def "enables traffic when trafficManagement is enabled and explicitly enables traffic"() {
+    given:
+    def stage = createStage([
+      trafficManagement: [
+        enabled: true,
+        options: [
+          enableTraffic: true,
+          services: ["service my-service"]
+        ]
+      ]
+    ])
+
+    when:
+    task.execute(stage)
+
+    then:
+    1 * katoService.requestOperations("kubernetes", {
+      Map it -> it.deployManifest.enableTraffic == true && it.deployManifest.services == ["service my-service"]
+    }) >> Observable.from(new TaskId(TASK_ID))
+    0 * katoService._
+  }
+
+  def "does not enable traffic when trafficManagement is enabled and enableTraffic is disabled"() {
+    given:
+    def stage = createStage([
+      trafficManagement: [
+        enabled: true,
+        options: [
+          enableTraffic: false,
+          services: ["service my-service"]
+        ]
+      ]
+    ])
+
+    when:
+    task.execute(stage)
+
+    then:
+    1 * katoService.requestOperations("kubernetes", {
+      Map it -> it.deployManifest.enableTraffic == false && it.deployManifest.services == ["service my-service"]
+    }) >> Observable.from(new TaskId(TASK_ID))
+    0 * katoService._
+  }
+
+
+  def createStage(Map extraParams) {
+    return new Stage(Stub(Execution), "deployManifest", [
+      account: "my-k8s-account",
+      cloudProvider: "kubernetes",
+      source: "text",
+    ] + extraParams)
+  }
+}
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestForceCacheRefreshTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestForceCacheRefreshTaskSpec.groovy
index d06a0a8a8d..708d22d768 100644
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestForceCacheRefreshTaskSpec.groovy
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/manifest/ManifestForceCacheRefreshTaskSpec.groovy
@@ -18,10 +18,12 @@ package com.netflix.spinnaker.orca.clouddriver.tasks.manifest
 
 import com.fasterxml.jackson.databind.ObjectMapper
 import com.netflix.spectator.api.DefaultRegistry
+import com.netflix.spinnaker.orca.ExecutionStatus
 import com.netflix.spinnaker.orca.clouddriver.CloudDriverCacheService
 import com.netflix.spinnaker.orca.clouddriver.CloudDriverCacheStatusService
 import com.netflix.spinnaker.orca.pipeline.model.Execution
 import com.netflix.spinnaker.orca.pipeline.model.Stage
+import retrofit.client.Response
 import spock.lang.Specification
 import spock.lang.Subject
 
@@ -30,21 +32,33 @@ import java.time.Instant
 import java.time.ZoneId
 import java.util.concurrent.TimeUnit
 
+import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE
+import static java.net.HttpURLConnection.HTTP_ACCEPTED
+import static java.net.HttpURLConnection.HTTP_OK
+
 class ManifestForceCacheRefreshTaskSpec extends Specification {
+  static final String ACCOUNT = "k8s"
+  static final String PROVIDER = "kubernetes"
+  static final String REFRESH_TYPE = "manifest"
+
+  def now = Instant.now()
+  def cacheService = Mock(CloudDriverCacheService)
+  def cacheStatusService = Mock(CloudDriverCacheStatusService)
+  def objectMapper = new ObjectMapper()
+  def registry = new DefaultRegistry()
 
   @Subject
   task = new ManifestForceCacheRefreshTask(
     registry,
-    Mock(CloudDriverCacheService),
-    Mock(CloudDriverCacheStatusService),
-    Mock(ObjectMapper),
+    cacheService,
+    cacheStatusService,
+    objectMapper,
     Clock.fixed(now, ZoneId.of("UTC"))
   )
 
   def "auto Succeed from timeout increments counter"() {
     given:
-    def stage = new Stage(Execution.newPipeline("orca"), "whatever")
+    def stage = mockStage([:])
     stage.setStartTime(now.minusMillis(TimeUnit.MINUTES.toMillis(13)).toEpochMilli())
     def taskResult = task.execute(stage)
@@ -52,4 +66,597 @@ class ManifestForceCacheRefreshTaskSpec extends Specification {
     taskResult.getStatus().isSuccessful()
     registry.timer("manifestForceCacheRefreshTask.duration", "success", "true", "outcome", "autoSucceed").count() == 1
   }
+
+  def "returns RUNNING when the refresh request is accepted but not processed"() {
+    given:
+    def namespace = "my-namespace"
+    def manifest = "replicaSet my-replicaset-v014"
+    def context = [
+      account: ACCOUNT,
+      cloudProvider: PROVIDER,
+      "outputs.manifestNamesByNamespace": [
+        (namespace): [
+          manifest
+        ]
+ ], + ] + def refreshDetails = [ + account: ACCOUNT, + location: namespace, + name: manifest + ] + def stage = mockStage(context) + stage.setStartTime(now.toEpochMilli()) + + when: + def taskResult = task.execute(stage) + + then: + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, refreshDetails) >> mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + } + + def "returns SUCCEEDED when the refresh request is immediately processed"() { + given: + def namespace = "my-namespace" + def manifest = "replicaSet my-replicaset-v014" + def context = [ + account: ACCOUNT, + cloudProvider: PROVIDER, + "outputs.manifestNamesByNamespace": [ + (namespace): [ + manifest + ] + ], + ] + def refreshDetails = [ + account: ACCOUNT, + location: namespace, + name: manifest + ] + def stage = mockStage(context) + stage.setStartTime(now.toEpochMilli()) + + when: + def taskResult = task.execute(stage) + + then: + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, refreshDetails) >> mockResponse(HTTP_OK) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.SUCCEEDED + } + + def "waits for a pending refresh"() { + given: + def namespace = "my-namespace" + def manifest = "replicaSet my-replicaset-v014" + def context = [ + account: ACCOUNT, + cloudProvider: PROVIDER, + "outputs.manifestNamesByNamespace": [ + (namespace): [ + manifest + ] + ], + ] + def refreshDetails = [ + account: ACCOUNT, + location: namespace, + name: manifest + ] + def stage = mockStage(context) + + when: + def taskResult = task.execute(stage) + + then: + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, refreshDetails) >> mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> 
[pendingRefresh(refreshDetails)] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [processedRefresh(refreshDetails)] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.SUCCEEDED + } + + def "only returns succeeded if a processed refresh exactly matches"() { + given: + def namespace = "my-namespace" + def manifest = "replicaSet my-replicaset-v014" + def noMatch = "no-match" + def context = [ + account: ACCOUNT, + cloudProvider: PROVIDER, + "outputs.manifestNamesByNamespace": [ + (namespace): [ + manifest + ] + ], + ] + def refreshDetails = [ + account: ACCOUNT, + location: namespace, + name: manifest + ] + def stage = mockStage(context) + + when: + def taskResult = task.execute(stage) + + then: + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, refreshDetails) >> mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [ + processedRefresh(refreshDetails + [account: noMatch]), + processedRefresh(refreshDetails + [location: noMatch]), + processedRefresh(refreshDetails + [name: noMatch]) + ] + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, refreshDetails) >> mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + } + + def "retries when the cache does not know about the refresh request"() { + given: + def namespace = "my-namespace" + def manifest = "replicaSet my-replicaset-v014" + def context = [ + account: ACCOUNT, + cloudProvider: PROVIDER, + "outputs.manifestNamesByNamespace": [ + (namespace): [ + manifest + ] + ], + ] + def refreshDetails 
= [ + account: ACCOUNT, + location: namespace, + name: manifest + ] + def stage = mockStage(context) + + when: + def taskResult = task.execute(stage) + + then: + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, refreshDetails) >> mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [] + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, refreshDetails) >> mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [processedRefresh(refreshDetails)] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.SUCCEEDED + } + + def "waits until all manifests are processed when one is immediately processed"() { + given: + def namespace = "my-namespace" + def replicaSet = "replicaSet my-replicaset-v014" + def deployment = "deployment my-deployment" + def context = [ + account: ACCOUNT, + cloudProvider: PROVIDER, + "outputs.manifestNamesByNamespace": [ + (namespace): [ + replicaSet, + deployment + ] + ], + ] + def replicaSetRefreshDetails = [ + account: ACCOUNT, + location: namespace, + name: replicaSet + ] + def deploymentRefreshDetails = [ + account: ACCOUNT, + location: namespace, + name: deployment + ] + def stage = mockStage(context) + + when: + def taskResult = task.execute(stage) + + then: + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, replicaSetRefreshDetails) >> mockResponse(HTTP_OK) + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, deploymentRefreshDetails) >> mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == 
ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [pendingRefresh(deploymentRefreshDetails)] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [processedRefresh(deploymentRefreshDetails)] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.SUCCEEDED + } + + def "waits until all manifests are processed when all are accepted for later processing"() { + given: + def namespace = "my-namespace" + def replicaSet = "replicaSet my-replicaset-v014" + def deployment = "deployment my-deployment" + def context = [ + account: ACCOUNT, + cloudProvider: PROVIDER, + "outputs.manifestNamesByNamespace": [ + (namespace): [ + replicaSet, + deployment + ] + ], + ] + def replicaSetRefreshDetails = [ + account: ACCOUNT, + location: namespace, + name: replicaSet + ] + def deploymentRefreshDetails = [ + account: ACCOUNT, + location: namespace, + name: deployment + ] + def stage = mockStage(context) + + when: + def taskResult = task.execute(stage) + + then: + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, replicaSetRefreshDetails) >> mockResponse(HTTP_ACCEPTED) + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, deploymentRefreshDetails) >> mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [ + processedRefresh(replicaSetRefreshDetails), + pendingRefresh(deploymentRefreshDetails) + ] + 0 * cacheService._ + taskResult.getStatus() == 
ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [ + processedRefresh(replicaSetRefreshDetails), + processedRefresh(deploymentRefreshDetails) + ] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.SUCCEEDED + } + + def "returns RUNNING if there is an outstanding request, even if all requests in the current iteration succeeded"() { + given: + def namespace = "my-namespace" + def replicaSet = "replicaSet my-replicaset-v014" + def deployment = "deployment my-deployment" + def context = [ + account: ACCOUNT, + cloudProvider: PROVIDER, + "outputs.manifestNamesByNamespace": [ + (namespace): [ + replicaSet, + deployment + ] + ], + ] + def replicaSetRefreshDetails = [ + account: ACCOUNT, + location: namespace, + name: replicaSet + ] + def deploymentRefreshDetails = [ + account: ACCOUNT, + location: namespace, + name: deployment + ] + def stage = mockStage(context) + + when: + def taskResult = task.execute(stage) + + then: + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, replicaSetRefreshDetails) >> mockResponse(HTTP_ACCEPTED) + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, deploymentRefreshDetails) >> mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [ + pendingRefresh(replicaSetRefreshDetails) + ] + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, deploymentRefreshDetails) >> mockResponse(HTTP_OK) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * 
cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [ + processedRefresh(replicaSetRefreshDetails) + ] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.SUCCEEDED + } + + def "handles refreshing the cache for manifests in different namespaces"() { + given: + def replicaSetNamespace = "replicaSet-namespace" + def deploymentNamespace = "deployment-namespace" + def replicaSet = "replicaSet my-replicaset-v014" + def deployment = "deployment my-deployment" + def context = [ + account: ACCOUNT, + cloudProvider: PROVIDER, + "outputs.manifestNamesByNamespace": [ + (replicaSetNamespace): [ + replicaSet + ], + (deploymentNamespace): [ + deployment + ] + ], + ] + def replicaSetRefreshDetails = [ + account: ACCOUNT, + location: replicaSetNamespace, + name: replicaSet + ] + def deploymentRefreshDetails = [ + account: ACCOUNT, + location: deploymentNamespace, + name: deployment + ] + def stage = mockStage(context) + + when: + def taskResult = task.execute(stage) + + then: + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, replicaSetRefreshDetails) >> mockResponse(HTTP_ACCEPTED) + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, deploymentRefreshDetails) >> mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [ + processedRefresh(replicaSetRefreshDetails), + pendingRefresh(deploymentRefreshDetails) + ] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [ + processedRefresh(replicaSetRefreshDetails), + processedRefresh(deploymentRefreshDetails) + ] + 0 * cacheService._ + 
taskResult.getStatus() == ExecutionStatus.SUCCEEDED + } + + def "properly handles a manifest without a namespace"() { + given: + def namespace = "" + def manifest = "namespace new-namespace" + def context = [ + account: ACCOUNT, + cloudProvider: PROVIDER, + "outputs.manifestNamesByNamespace": [ + (namespace): [ + manifest + ] + ], + ] + def refreshDetails = [ + account: ACCOUNT, + location: namespace, + name: manifest + ] + def stage = mockStage(context) + + when: + def taskResult = task.execute(stage) + + then: + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, refreshDetails) >> mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [pendingRefresh(refreshDetails)] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [processedRefresh(refreshDetails)] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.SUCCEEDED + } + + def "properly handles a manifest without a namespace, even if incorrectly assigned a namespace"() { + given: + def namespace = "my-namespace" + def manifest = "namespace new-namespace" + def context = [ + account: ACCOUNT, + cloudProvider: PROVIDER, + "outputs.manifestNamesByNamespace": [ + (namespace): [ + manifest + ] + ], + ] + def refreshDetails = [ + account: ACCOUNT, + location: namespace, + name: manifest + ] + def refreshResponseDetails = [ + account: ACCOUNT, + location: "", + name: manifest + ] + def stage = mockStage(context) + + when: + def taskResult = task.execute(stage) + + then: + 1 * cacheService.forceCacheUpdate(PROVIDER, REFRESH_TYPE, refreshDetails) >> 
mockResponse(HTTP_ACCEPTED) + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [pendingRefresh(refreshResponseDetails)] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.RUNNING + + when: + context << taskResult.context + stage = mockStage(context) + taskResult = task.execute(stage) + + then: + 1 * cacheStatusService.pendingForceCacheUpdates(PROVIDER, REFRESH_TYPE) >> [processedRefresh(refreshResponseDetails)] + 0 * cacheService._ + taskResult.getStatus() == ExecutionStatus.SUCCEEDED + } + + private Response mockResponse(int status) { + return new Response("", status, "", [], null) + } + + private Stage mockStage(Map context) { + Stage stage = new Stage(new Execution(PIPELINE, "test"), "whatever", context) + stage.setStartTime(now.toEpochMilli()) + return stage + } + + private Map pendingRefresh(Map refreshDetails) { + return [ + details: refreshDetails, + processedCount: 0, + processedTime: -1, + cacheTime: now.plusMillis(10).toEpochMilli() + ] + } + + private Map processedRefresh(Map refreshDetails) { + return [ + details: refreshDetails, + processedCount: 1, + processedTime: now.plusMillis(5000).toEpochMilli(), + cacheTime: now.plusMillis(10).toEpochMilli() + ] + } + + private Map staleRefresh(Map refreshDetails) { + return [ + details: refreshDetails, + processedCount: 1, + processedTime: now.plusMillis(5000).toEpochMilli(), + cacheTime: now.minusMillis(10).toEpochMilli() + ] + } + } diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckForRemainingPipelinesTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckForRemainingPipelinesTaskSpec.groovy new file mode 100644 index 0000000000..a32d2fdba0 --- /dev/null +++ 
b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckForRemainingPipelinesTaskSpec.groovy @@ -0,0 +1,56 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline + +import com.fasterxml.jackson.databind.ObjectMapper +import com.netflix.spinnaker.orca.ExecutionStatus +import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper +import com.netflix.spinnaker.orca.pipeline.model.Execution +import com.netflix.spinnaker.orca.pipeline.model.Stage +import spock.lang.Specification +import spock.lang.Subject + +class CheckForRemainingPipelinesTaskSpec extends Specification { + + @Subject + final task = new CheckForRemainingPipelinesTask() + + void 'keep looping to save more tasks'() { + when: + def context = [ + pipelinesToSave: [ + [ name: "pipeline1" ] + ] + ] + def result = task.execute(new Stage(Execution.newPipeline("orca"), "whatever", context)) + + then: + result.status == ExecutionStatus.REDIRECT + } + + void 'stop looping when there are no more tasks to save'() { + when: + def context = [ + pipelinesToSave: [ + ] + ] + def result = task.execute(new Stage(Execution.newPipeline("orca"), "whatever", context)) + + then: + result.status == ExecutionStatus.SUCCEEDED + } + +} diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckPipelineResultsTaskSpec.groovy
b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckPipelineResultsTaskSpec.groovy new file mode 100644 index 0000000000..c8b72aa87c --- /dev/null +++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/CheckPipelineResultsTaskSpec.groovy @@ -0,0 +1,107 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline + +import com.fasterxml.jackson.databind.ObjectMapper +import com.netflix.spinnaker.orca.ExecutionStatus +import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper +import com.netflix.spinnaker.orca.pipeline.model.Execution +import com.netflix.spinnaker.orca.pipeline.model.Stage +import com.netflix.spinnaker.orca.pipeline.model.Task +import spock.lang.Specification +import spock.lang.Subject + +class CheckPipelineResultsTaskSpec extends Specification { + + final ObjectMapper objectMapper = OrcaObjectMapper.newInstance() + + @Subject + final task = new CheckPipelineResultsTask(objectMapper) + + void 'add created pipeline success to context'() { + when: + def context = [ + application: 'app1', + 'pipeline.name': 'pipeline1' + ] + final Task savePipelineTask = new Task().with { + setName('savePipeline') + setStatus(ExecutionStatus.SUCCEEDED) + return it + } + final Stage stage = new Stage(Execution.newPipeline("orca"), "whatever", context).with { + setTasks([savePipelineTask]) + 
return it + } + def result = task.execute(stage) + + then: + result.status == ExecutionStatus.SUCCEEDED + result.context.get("pipelinesCreated") == [[application: 'app1', name: 'pipeline1']] + result.context.get("pipelinesUpdated") == [] + result.context.get("pipelinesFailedToSave") == [] + } + + void 'add updated pipeline success to context'() { + when: + def context = [ + application: 'app1', + 'pipeline.name': 'pipeline1', + 'isExistingPipeline': true + ] + final Task savePipelineTask = new Task().with { + setName('savePipeline') + setStatus(ExecutionStatus.SUCCEEDED) + return it + } + final Stage stage = new Stage(Execution.newPipeline("orca"), "whatever", context).with { + setTasks([savePipelineTask]) + return it + } + def result = task.execute(stage) + + then: + result.status == ExecutionStatus.SUCCEEDED + result.context.get("pipelinesCreated") == [] + result.context.get("pipelinesUpdated") == [[application: 'app1', name: 'pipeline1']] + result.context.get("pipelinesFailedToSave") == [] + } + + void 'add saved pipeline failure to context'() { + when: + def context = [ + application: 'app1', + 'pipeline.name': 'pipeline1' + ] + final Task savePipelineTask = new Task().with { + setName('savePipeline') + setStatus(ExecutionStatus.TERMINAL) + return it + } + final Stage stage = new Stage(Execution.newPipeline("orca"), "whatever", context).with { + setTasks([savePipelineTask]) + return it + } + def result = task.execute(stage) + + then: + result.status == ExecutionStatus.SUCCEEDED + result.context.get("pipelinesCreated") == [] + result.context.get("pipelinesUpdated") == [] + result.context.get("pipelinesFailedToSave") == [[application: 'app1', name: 'pipeline1']] + } + +} diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/GetPipelinesFromArtifactTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/GetPipelinesFromArtifactTaskSpec.groovy new file mode 100644 index 
0000000000..2776843777 --- /dev/null +++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/GetPipelinesFromArtifactTaskSpec.groovy @@ -0,0 +1,164 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline + +import com.fasterxml.jackson.databind.ObjectMapper +import com.netflix.spinnaker.kork.artifacts.model.Artifact +import com.netflix.spinnaker.orca.ExecutionStatus +import com.netflix.spinnaker.orca.clouddriver.OortService +import com.netflix.spinnaker.orca.front50.Front50Service +import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper +import com.netflix.spinnaker.orca.pipeline.model.Execution +import com.netflix.spinnaker.orca.pipeline.model.Stage +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver +import retrofit.client.Response +import retrofit.mime.TypedString +import spock.lang.Specification +import spock.lang.Subject + +class GetPipelinesFromArtifactTaskSpec extends Specification { + + final Front50Service front50Service = Mock() + final OortService oortService = Mock() + final ArtifactResolver artifactResolver = Mock() + final ObjectMapper objectMapper = OrcaObjectMapper.newInstance() + + @Subject + final task = new GetPipelinesFromArtifactTask(front50Service, oortService, objectMapper, artifactResolver) + + void 'extract pipelines JSON from artifact'() { + when: + def context = [ + 
pipelinesArtifactId: '123' + ] + def result = task.execute(new Stage(Execution.newPipeline("orca"), "whatever", context)) + + then: + 1 * artifactResolver.getBoundArtifactForStage(_, '123', _) >> Artifact.builder().type('http/file') + .reference('url1').build() + 1 * oortService.fetchArtifact(_) >> new Response("url1", 200, "reason1", [], + new TypedString(pipelineJson)) + front50Service.getPipelines(_) >> [] + result.status == ExecutionStatus.SUCCEEDED + final pipelinesToSave = ((List) result.context.get("pipelinesToSave")) + pipelinesToSave.size() == 3 + pipelinesToSave.every { !it.containsKey("id") } + } + + void 'extract pipelines JSON from artifact with existing pipeline'() { + when: + def context = [ + pipelinesArtifactId: '123' + ] + def result = task.execute(new Stage(Execution.newPipeline("orca"), "whatever", context)) + + then: + 1 * artifactResolver.getBoundArtifactForStage(_, '123', _) >> Artifact.builder().type('http/file') + .reference('url1').build() + 1 * oortService.fetchArtifact(_) >> new Response("url1", 200, "reason1", [], + new TypedString(pipelineJson)) + front50Service.getPipelines("app1") >> [] + front50Service.getPipelines("app2") >> [ + [name: "just judging", id: "existingPipelineId"] + ] + result.status == ExecutionStatus.SUCCEEDED + final pipelinesToSave = ((List) result.context.get("pipelinesToSave")) + pipelinesToSave.size() == 3 + pipelinesToSave.find { it.name == "just judging"}.containsKey("id") + pipelinesToSave.findAll { it.name != "just judging"}.every { !it.containsKey("id") } + } + + void 'fail to extract pipelines JSON from artifact without bound artifact'() { + when: + def context = [ + pipelinesArtifactId: '123' + ] + def result = task.execute(new Stage(Execution.newPipeline("orca"), "whatever", context)) + + then: + 1 * artifactResolver.getBoundArtifactForStage(_, '123', _) >> null + IllegalArgumentException ex = thrown() + ex.message == "No artifact could be bound to '123'" + } + + final pipelineJson = ''' +{ + 
"app1": [{ + "name": "just waiting", + "description": "", + "parameterConfig": [], + "notifications": [], + "triggers": [], + "stages": [{ + "refId": "wait1", + "requisiteStageRefIds": [], + "type": "wait", + "waitTime": "420" + }], + "expectedArtifacts": [], + "keepWaitingPipelines": false, + "limitConcurrent": true + }], + "app2": [{ + "name": "just judging", + "description": "", + "parameterConfig": [], + "notifications": [], + "triggers": [], + "stages": [{ + "refId": "manualJudgment1", + "requisiteStageRefIds": [], + "instructions": "Judge me.", + "judgmentInputs": [], + "type": "manualJudgment" + }], + "expectedArtifacts": [], + "keepWaitingPipelines": false, + "limitConcurrent": true + }, + { + "name": "waiting then judging", + "description": "", + "parameterConfig": [], + "notifications": [], + "triggers": [], + "stages": [{ + "refId": "wait1", + "requisiteStageRefIds": [], + "type": "wait", + "waitTime": "420", + "comments": "Wait before judging me." + }, + { + "refId": "manualJudgment2", + "requisiteStageRefIds": [ + "wait1" + ], + "instructions": "Okay, Judge me now.", + "judgmentInputs": [], + "type": "manualJudgment" + } + ], + "expectedArtifacts": [], + "keepWaitingPipelines": false, + "limitConcurrent": true + } + ] +} + +''' + +} diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PreparePipelineToSaveTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PreparePipelineToSaveTaskSpec.groovy new file mode 100644 index 0000000000..79abf34aff --- /dev/null +++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/pipeline/PreparePipelineToSaveTaskSpec.groovy @@ -0,0 +1,73 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package com.netflix.spinnaker.orca.clouddriver.tasks.pipeline
+
+import com.fasterxml.jackson.databind.ObjectMapper
+import com.netflix.spinnaker.orca.ExecutionStatus
+import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper
+import com.netflix.spinnaker.orca.pipeline.model.Execution
+import com.netflix.spinnaker.orca.pipeline.model.Stage
+import spock.lang.Specification
+import spock.lang.Subject
+
+class PreparePipelineToSaveTaskSpec extends Specification {
+
+  final ObjectMapper objectMapper = OrcaObjectMapper.newInstance()
+
+  @Subject
+  final task = new PreparePipelineToSaveTask(objectMapper)
+
+  void 'prepare pipeline for save pipeline task'() {
+    when:
+    def context = [
+      pipelinesToSave: [
+        [ name: "pipeline1" ]
+      ]
+    ]
+    def result = task.execute(new Stage(Execution.newPipeline("orca"), "whatever", context))
+
+    then:
+    result.status == ExecutionStatus.SUCCEEDED
+    result.context.get("pipeline") == "eyJuYW1lIjoicGlwZWxpbmUxIn0="
+    result.context.get("pipelinesToSave") == []
+  }
+
+  void 'prepare pipeline for save pipeline task with multiple pipelines'() {
+    when:
+    def context = [
+      pipelinesToSave: [
+        [ name: "pipeline1" ],
+        [ name: "pipeline2" ]
+      ]
+    ]
+    def result = task.execute(new Stage(Execution.newPipeline("orca"), "whatever", context))
+
+    then:
+    result.status == ExecutionStatus.SUCCEEDED
+    result.context.get("pipeline") == "eyJuYW1lIjoicGlwZWxpbmUxIn0="
+    result.context.get("pipelinesToSave") == [ [name: "pipeline2"] ]
+  }
+
+  void 'fail to prepare pipeline for save pipeline task with no pipelines'() {
+    when:
+    def context = [:]
+    def result = task.execute(new Stage(Execution.newPipeline("orca"), "whatever", context))
+
+    then:
+    result.status == ExecutionStatus.TERMINAL
+  }
+
+}
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/AppEngineBranchFinderSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/AppEngineBranchFinderSpec.groovy
index cf0dd2820e..b220f58bd5 100644
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/AppEngineBranchFinderSpec.groovy
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/appengine/AppEngineBranchFinderSpec.groovy
@@ -16,11 +16,12 @@
 package com.netflix.spinnaker.orca.clouddriver.tasks.providers.appengine
 
+import com.netflix.spinnaker.orca.pipeline.model.GitTrigger
+import com.netflix.spinnaker.orca.pipeline.model.JenkinsBuildInfo
 import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger
 import spock.lang.Specification
 import spock.lang.Unroll
-import static com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger.BuildInfo
 
 class AppEngineBranchFinderSpec extends Specification {
 
   @Unroll
@@ -74,7 +75,7 @@ class AppEngineBranchFinderSpec extends Specification {
   def "(jenkins trigger) should resolve branch, using regex (if provided) to narrow down options"() {
     given:
     def trigger = new JenkinsTrigger("Jenkins", "poll_git_repo", 1, null)
-    trigger.buildInfo = new BuildInfo("poll_git_repo", 1, "http://jenkins", [], scm, false, "SUCCESS")
+    trigger.buildInfo = new JenkinsBuildInfo("poll_git_repo", 1, "http://jenkins", "SUCCESS", [], scm)
 
     def operation = [
       trigger: [
@@ -97,7 +98,7 @@ class AppEngineBranchFinderSpec extends Specification {
   def "(jenkins trigger) should throw appropriate error if method cannot resolve exactly one branch"() {
     given:
     def trigger = new JenkinsTrigger("Jenkins", "poll_git_repo", 1, null)
-    trigger.buildInfo = new BuildInfo("poll_git_repo", 1, "http://jenkins", [], scm, false, "SUCCESS")
+    trigger.buildInfo = new JenkinsBuildInfo("poll_git_repo", 1, "http://jenkins", "SUCCESS", [], scm)
 
     def operation = [
       trigger : [
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/AmazonImageTaggerSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/AmazonImageTaggerSpec.groovy
index 3a57b17ea4..98c1ab0e1f 100644
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/AmazonImageTaggerSpec.groovy
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/AmazonImageTaggerSpec.groovy
@@ -22,6 +22,7 @@ import com.netflix.spinnaker.orca.clouddriver.tasks.image.ImageTagger
 import com.netflix.spinnaker.orca.clouddriver.tasks.image.ImageTaggerSpec
 import com.netflix.spinnaker.orca.pipeline.model.Execution
 import com.netflix.spinnaker.orca.pipeline.model.Stage
+import com.netflix.spinnaker.orca.test.model.ExecutionBuilder
 import spock.lang.Unroll
 
 class AmazonImageTaggerSpec extends ImageTaggerSpec {
@@ -54,7 +55,16 @@ class AmazonImageTaggerSpec extends ImageTaggerSpec {
     pipeline.stages << stage1 << stage2
 
     and:
-    oortService.findImage("aws", "my-ami", null, null, null) >> { [] }
+    if (foundById) {
+      1 * oortService.findImage("aws", "ami-id", null, null, null) >> {
+        [["imageName": "ami-name"]]
+      }
+      1 * oortService.findImage("aws", "ami-name", null, null, null) >> { [] }
+    } else if (imageId != null) {
+      1 * oortService.findImage("aws", imageId, null, null, null) >> { [] }
+    } else {
+      1 * oortService.findImage("aws", imageName, null, null, null) >> { [] }
+    }
 
     when:
     imageTagger.getOperationContext(stage2)
@@ -64,9 +74,92 @@ class AmazonImageTaggerSpec extends ImageTaggerSpec {
     e.shouldRetry == shouldRetry
 
     where:
-    imageId | imageName || shouldRetry
-    "my-ami" | null || true
-    null | "my-ami" || false // do not retry if an explicitly provided image does not exist (user error)
+    imageId | imageName || foundById || shouldRetry
+    "ami-id" | null || false || true
+    "ami-id" | null || true || true
+    null | "ami-name" || false || false // do not retry if an explicitly provided image does not exist (user error)
+  }
+
+  def "retries when namedImage data is missing an upstream imageId"() {
+    given:
+    def name = "spinapp-1.0.0-ebs"
+    def pipeline = ExecutionBuilder.pipeline {}
+    def stage1 = new Stage(
+      pipeline,
+      "bake",
+      [
+        cloudProvider: "aws",
+        imageId      : "ami-1",
+        imageName    : name,
+        region       : "us-east-1"
+      ]
+    )
+
+    def stage2 = new Stage(
+      pipeline,
+      "bake",
+      [
+        cloudProvider: "aws",
+        imageId      : "ami-2",
+        imageName    : name,
+        region       : "us-west-1"
+      ]
+    )
+
+    def stage3 = new Stage(pipeline, "upsertImageTags", [imageName: name, cloudProvider: "aws"])
+
+    stage1.refId = stage1.id
+    stage2.refId = stage2.id
+    stage3.requisiteStageRefIds = [stage1.refId, stage2.refId]
+
+    pipeline.stages << stage1 << stage2 << stage3
+
+    when:
+    1 * oortService.findImage("aws", "ami-1", _, _, _) >> {
+      [[imageName: name]]
+    }
+
+    1 * oortService.findImage("aws", "ami-2", _, _, _) >> {
+      [[imageName: name]]
+    }
+
+    1 * oortService.findImage("aws", name, _, _, _) >> {
+      [[
+        imageName: name,
+        amis     : ["us-east-1": ["ami-1"]]
+      ]]
+    }
+
+    imageTagger.getOperationContext(stage3)
+
+    then:
+    ImageTagger.ImageNotFound e = thrown(ImageTagger.ImageNotFound)
+    e.shouldRetry == true
+
+    when:
+    1 * oortService.findImage("aws", "ami-1", _, _, _) >> {
+      [[imageName: name]]
+    }
+
+    1 * oortService.findImage("aws", "ami-2", _, _, _) >> {
+      [[imageName: name]]
+    }
+
+    1 * oortService.findImage("aws", name, _, _, _) >> {
+      [[
+        imageName: name,
+        amis     : [
+          "us-east-1": ["ami-1"],
+          "us-west-1": ["ami-2"]
+        ],
+        accounts : ["compute"]
+      ]]
+    }
+
+    imageTagger.getOperationContext(stage3)
+
+    then:
+    noExceptionThrown()
   }
 
   def "should build upsertMachineImageTags and allowLaunchDescription operations"() {
@@ -114,7 +207,7 @@ class AmazonImageTaggerSpec extends ImageTaggerSpec {
     def stage = new Stage(Execution.newOrchestration("orca"), "", [
       imageNames: ["my-ami-1", "my-ami-2"],
       tags : [
-        "tag1" : "value1"
+        "tag1": "value1"
       ]
     ])
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/CloudFormationForceCacheRefreshTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/CloudFormationForceCacheRefreshTaskSpec.groovy
index 04312b343b..25823a31be 100644
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/CloudFormationForceCacheRefreshTaskSpec.groovy
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/CloudFormationForceCacheRefreshTaskSpec.groovy
@@ -19,15 +19,16 @@ package com.netflix.spinnaker.orca.clouddriver.tasks.providers.aws.cloudformatio
 import com.netflix.spinnaker.orca.clouddriver.CloudDriverCacheService
 import spock.lang.Specification
 import spock.lang.Subject
+import spock.lang.Unroll
 
 import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage
 
 class CloudFormationForceCacheRefreshTaskSpec extends Specification {
   @Subject
   task = new CloudFormationForceCacheRefreshTask()
-  def stage = stage()
 
   void "should force cache refresh cloud formations via mort"() {
-    setup:
+    given:
+    def stage = stage()
     task.cacheService = Mock(CloudDriverCacheService)
 
     when:
@@ -36,4 +37,30 @@ class CloudFormationForceCacheRefreshTaskSpec extends Specification {
     then:
     1 * task.cacheService.forceCacheUpdate('aws', CloudFormationForceCacheRefreshTask.REFRESH_TYPE, Collections.emptyMap())
   }
+
+  @Unroll
+  void "should add scoping data if available"() {
+    given:
+    task.cacheService = Mock(CloudDriverCacheService)
+
+    and:
+    def stage = stage()
+    stage.context.put("credentials", credentials)
+    stage.context.put("regions", regions)
+
+    when:
+    task.execute(stage)
+
+    then:
+    1 * task.cacheService.forceCacheUpdate('aws', CloudFormationForceCacheRefreshTask.REFRESH_TYPE, expectedData)
+
+    where:
+    credentials | regions || expectedData
+    null | null || [:]
+    "credentials" | null || [credentials: "credentials"]
+    null | ["eu-west-1"] || [region: ["eu-west-1"]]
+    "credentials" | ["eu-west-1"] || [credentials: "credentials", region: ["eu-west-1"]]
+
+
+  }
 }
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/DeployCloudFormationTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/DeployCloudFormationTaskSpec.groovy
index c7c9388c90..443f7b07bc 100644
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/DeployCloudFormationTaskSpec.groovy
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/aws/cloudformation/DeployCloudFormationTaskSpec.groovy
@@ -38,8 +38,7 @@ class DeployCloudFormationTaskSpec extends Specification {
   def artifactResolver = Mock(ArtifactResolver)
 
   @Subject
-  def deployCloudFormationTask = new DeployCloudFormationTask(katoService: katoService, oortService: oortService,
-    objectMapper: objectMapper, artifactResolver: artifactResolver)
+  def deployCloudFormationTask = new DeployCloudFormationTask(katoService: katoService, oortService: oortService, artifactResolver: artifactResolver)
 
   def "should put kato task information as output"() {
     given:
@@ -104,25 +103,29 @@ class DeployCloudFormationTaskSpec extends Specification {
     def context = [
       credentials: 'creds',
       cloudProvider: 'aws',
-      source: 'artifact',
-      stackArtifactId: 'id',
-      stackArtifactAccount: 'account',
+      source: source,
+      stackArtifactId: stackArtifactId,
+      stackArtifactAccount: stackArtifactAccount,
       regions: ['eu-west-1'],
       templateBody: [key: 'value']]
     def stage = new Stage(pipeline, 'test', 'test', context)
-    def template = new TypedString('{ "key": "value" }')
 
     when:
     def result = deployCloudFormationTask.execute(stage)
 
     then:
     1 * artifactResolver.getBoundArtifactForId(stage, 'id') >> new Artifact()
-    1 * oortService.fetchArtifact(_) >> new Response("url", 200, "reason", Collections.emptyList(), template)
+    1 * oortService.fetchArtifact(_) >> new Response("url", 200, "reason", Collections.emptyList(), new TypedString(template))
     1 * katoService.requestOperations("aws", {
       it.get(0).get("deployCloudFormation").containsKey("templateBody")
     }) >> Observable.just(taskId)
     result.context.'kato.result.expected' == true
     result.context.'kato.last.task.id' == taskId
+
+    where:
+    source | stackArtifactId | stackArtifactAccount | template
+    'artifact' | 'id' | 'account' | '{"key": "value"}'
+    'artifact' | 'id' | 'account' | 'key: value'
   }
 }
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreatorSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreatorSpec.groovy
deleted file mode 100644
index e06ea23a7d..0000000000
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreatorSpec.groovy
+++ /dev/null
@@ -1,313 +0,0 @@
-/*
- * Copyright 2018 Pivotal, Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf
-
-import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger
-import spock.lang.Specification
-
-import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage
-
-class CloudFoundryServerGroupCreatorSpec extends Specification {
-
-  def "should get operations when an artifact is specified as the deployable"() {
-    given:
-    def ctx = [
-      application : "abc",
-      account : "abc",
-      region : "org > space",
-      deploymentDetails: [[imageId: "testImageId", zone: "north-pole-1"]],
-      artifact : [type: "artifact", account: "count-von-count", reference: "some-reference"],
-      manifest : [
-        type : "artifact",
-        account : "dracula",
-        reference: "https://example.com/mani-pedi.yml"
-      ],
-      startApplication : true,
-    ]
-    def stage = stage {
-      context.putAll(ctx)
-    }
-
-    when:
-    def ops = new CloudFoundryServerGroupCreator().getOperations(stage)
-
-    then:
-    ops == [
-      [
-        "createServerGroup": [
-          application : "abc",
-          credentials : "abc",
-          manifest : null,
-          region : "org > space",
-          startApplication: true,
-          artifact: [
-            type : "artifact",
-            account : "count-von-count",
-            reference: "some-reference"
-          ],
-          manifest : [
-            type : "artifact",
-            account : "dracula",
-            reference: "https://example.com/mani-pedi.yml"
-          ],
-        ]
-      ]
-    ]
-  }
-
-  def "should get operations when a triggered artifact is specified as the deployable"() {
-    given:
-    def ctx = [
-      application : "abc",
-      account : "abc",
-      region : "org > space",
-      deploymentDetails: [[imageId: "testImageId", zone: "north-pole-1"]],
-      artifact : [type: "trigger", account: "count-von-count", pattern: "that_s.*a.*m.ore.*.jar"],
-      manifest : [
-        type : "artifact",
-        account : "dracula",
-        reference: "https://example.com/mani-pedi.yml"
-      ],
-      startApplication : true,
-    ]
-    JenkinsTrigger.BuildInfo info = new JenkinsTrigger.BuildInfo(
-      "my-name",
-      0,
-      "https://example.com/",
-      [new JenkinsTrigger.JenkinsArtifact("that_s_father_m_oregon_to_you.jar", "sister_path/that_s_father_m_oregon_to_you.jar")],
-      [],
-      false,
-      ""
-    )
-    JenkinsTrigger trig = new JenkinsTrigger("master", "job", 1, "propertyfile")
-    trig.buildInfo = info
-    def stage = stage {
-      context.putAll(ctx)
-      execution.trigger = trig
-    }
-
-    when:
-    def ops = new CloudFoundryServerGroupCreator().getOperations(stage)
-
-    then:
-    ops == [
-      [
-        "createServerGroup": [
-          application : "abc",
-          credentials : "abc",
-          manifest : [
-            type : "artifact",
-            account : "dracula",
-            reference: "https://example.com/mani-pedi.yml"
-          ],
-          region : "org > space",
-          startApplication: true,
-          artifact: [
-            type : "artifact",
-            account : "count-von-count",
-            reference: "https://example.com/artifact/sister_path/that_s_father_m_oregon_to_you.jar"
-          ]
-        ],
-      ]
-    ]
-  }
-
-  def "should get operations when a package artifact is specified as the deployable"() {
-    given:
-    def ctx = [
-      application : "abc",
-      account : "abc",
-      region : "org > space",
-      deploymentDetails: [[imageId: "testImageId", zone: "north-pole-1"]],
-      artifact : [type: "package",
-        cluster: [name: "my-cloister"],
-        serverGroupName: "s-club-7",
-        account: "my-account",
-        region: "saar-region",
-      ],
-      startApplication: true,
-    ]
-    def stage = stage {
-      context.putAll(ctx)
-    }
-
-    when:
-    def ops = new CloudFoundryServerGroupCreator().getOperations(stage)
-
-    then:
-    ops == [
-      [
-        "createServerGroup": [
-          application : "abc",
-          credentials : "abc",
-          manifest : null,
-          region : "org > space",
-          startApplication: true,
-          artifact: [type: "package",
-            cluster: [name: "my-cloister"],
-            serverGroupName: "s-club-7",
-            account: "my-account",
-            region: "saar-region",
-          ]
-        ]
-      ]
-    ]
-  }
-
-  def "should get operations when an artifact is specified as the configuration"() {
-    given:
-    def ctx = [
-      application : "abc",
-      account : "abc",
-      region : "org > space",
-      deploymentDetails: [[imageId: "testImageId", zone: "north-pole-1"]],
-      artifact : [type: "artifact", account: "count-von-count", reference: "some-reference"],
-      manifest : [type: "artifact", account: "dracula", reference: "https://example.com/mani-pedi.yml"],
-      startApplication : true,
-    ]
-    def stage = stage {
-      context.putAll(ctx)
-    }
-
-    when:
-    def ops = new CloudFoundryServerGroupCreator().getOperations(stage)
-
-    then:
-    ops == [
-      [
-        "createServerGroup": [
-          application : "abc",
-          credentials : "abc",
-          manifest : [
-            type : "artifact",
-            account : "dracula",
-            reference: "https://example.com/mani-pedi.yml"
-          ],
-          region : "org > space",
-          startApplication: true,
-          artifact: [
-            type : "artifact",
-            account : "count-von-count",
-            reference: "some-reference"
-          ]
-        ]
-      ]
-    ]
-  }
-
-  def "should get operations when a triggered manifest regex is specified as the configuration"() {
-    given:
-    def ctx = [
-      application : "abc",
-      account : "abc",
-      region : "org > space",
-      deploymentDetails: [[imageId: "testImageId", zone: "north-pole-1"]],
-      artifact : [type: "artifact", account: "count-von-count", reference: "some-reference"],
-      manifest : [type: "trigger", account: "count-von-count", pattern: ".*.yml"],
-      startApplication : true,
-    ]
-    JenkinsTrigger.BuildInfo info = new JenkinsTrigger.BuildInfo(
-      "my-name",
-      0,
-      "https://example.com/",
-      [new JenkinsTrigger.JenkinsArtifact("deploy.yml", "deployment/deploy.yml")],
-      [],
-      false,
-      ""
-    )
-    JenkinsTrigger trig = new JenkinsTrigger("master", "job", 1, "propertyfile")
-    trig.buildInfo = info
-
-    def stage = stage {
-      context.putAll(ctx)
-      execution.trigger = trig
-    }
-
-    when:
-    def ops = new CloudFoundryServerGroupCreator().getOperations(stage)
-
-    then:
-    ops == [
-      [
-        "createServerGroup": [
-          application : "abc",
-          credentials : "abc",
-          manifest : [
-            type : "artifact",
-            account : "count-von-count",
-            reference: "https://example.com/artifact/deployment/deploy.yml"
-          ],
-          region : "org > space",
-          startApplication : true,
-          artifact: [
-            type : "artifact",
-            account : "count-von-count",
-            reference: "some-reference"
-          ]
-        ],
-      ]
-    ]
-  }
-
-  def "should get operations when a direct attributes are specified as the configuration"() {
-    given:
-    def ctx = [
-      application : "abc",
-      account : "abc",
-      region : "org > space",
-      deploymentDetails: [[imageId: "testImageId", zone: "north-pole-1"]],
-      artifact : [type: "artifact", account: "count-von-count", reference: "some-reference"],
-      manifest : [
-        type: "direct",
-        memory: "1024M",
-        diskQuota: "1024M",
-        instances: "1"
-      ],
-      startApplication : true,
-    ]
-    def stage = stage {
-      context.putAll(ctx)
-    }
-
-    when:
-    def ops = new CloudFoundryServerGroupCreator().getOperations(stage)
-
-    then:
-    ops == [
-      [
-        "createServerGroup": [
-          application : "abc",
-          credentials : "abc",
-          manifest : [
-            type: "direct",
-            memory: "1024M",
-            diskQuota: "1024M",
-            instances: "1"
-          ],
-          region : "org > space",
-          startApplication : true,
-          artifact: [
-            type : "artifact",
-            account : "count-von-count",
-            reference: "some-reference"
-          ]
-        ]
-      ]
-    ]
-  }
-
-}
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/GoogleImageTaggerSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/GoogleImageTaggerSpec.groovy
index 63727ec849..aa71e3960f 100644
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/GoogleImageTaggerSpec.groovy
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/GoogleImageTaggerSpec.groovy
@@ -53,7 +53,16 @@ class GoogleImageTaggerSpec extends ImageTaggerSpec {
     pipeline.stages << stage1 << stage2
 
     and:
-    oortService.findImage("gce", "my-gce-image", null, null, null) >> { [] }
+    if (foundById) {
+      1 * oortService.findImage("gce", "gce-image-id", null, null, null) >> {
+        [["imageName": "my-gce-image"]]
+      }
+      1 * oortService.findImage("gce", "my-gce-image", null, null, null) >> { [] }
+    } else if (imageId != null) {
+      1 * oortService.findImage("gce", imageId, null, null, null) >> { [] }
+    } else {
+      1 * oortService.findImage("gce", imageName, null, null, null) >> { [] }
+    }
 
     when:
     imageTagger.getOperationContext(stage2)
@@ -63,9 +72,10 @@ class GoogleImageTaggerSpec extends ImageTaggerSpec {
     e.shouldRetry == shouldRetry
 
     where:
-    imageId | imageName || shouldRetry
-    "my-gce-image" | null || true
-    null | "my-gce-image" || false // do not retry if an explicitly provided image does not exist (user error)
+    imageId | imageName || foundById || shouldRetry
+    "my-gce-image" | null || false || true
+    "gce-image-id" | null || true || true
+    null | "my-gce-image" || false || false // do not retry if an explicitly provided image does not exist (user error)
   }
 
   def "should build upsertImageTags operation"() {
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackSecurityGroupUpserterSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackSecurityGroupUpserterSpec.groovy
deleted file mode 100644
index 5cfa587d22..0000000000
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackSecurityGroupUpserterSpec.groovy
+++ /dev/null
@@ -1,140 +0,0 @@
-/*
- * Copyright 2016 Target, Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License")
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package com.netflix.spinnaker.orca.clouddriver.tasks.providers.openstack
-
-import com.netflix.spinnaker.orca.clouddriver.MortService
-import com.netflix.spinnaker.orca.pipeline.model.Stage
-import retrofit.RetrofitError
-import retrofit.client.Response
-import spock.lang.Specification
-import spock.lang.Subject
-
-import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.pipeline
-
-class OpenstackSecurityGroupUpserterSpec extends Specification {
-
-  @Subject
-  OpenstackSecurityGroupUpserter upserter
-
-  def "should return operations and extra outputs"() {
-    given:
-    upserter = new OpenstackSecurityGroupUpserter()
-    def context = [
-      securityGroupName : 'my-security-group',
-      region : 'west',
-      credentials : 'cred'
-    ]
-    def pipe = pipeline {
-      application = "orca"
-    }
-    def stage = new Stage(pipe, 'whatever', context)
-
-    when:
-    def results = upserter.getOperationContext(stage)
-
-    then:
-    results
-
-    def ops = results.operations
-    ops.size() == 1
-    (ops[0] as Map).upsertSecurityGroup == context
-
-    def extraOutputs = results.extraOutput
-    List targets = extraOutputs.targets
-    targets.size() == 1
-    targets[0].name == 'my-security-group'
-    targets[0].region == 'west'
-    targets[0].accountName == 'cred'
-  }
-
-  def "should return the correct result if the security group has been upserted"() {
-    given:
-    MortService.SecurityGroup sg = new MortService.SecurityGroup(
-      name: "my-security-group",
-      region: "west",
-      accountName: "abc")
-    MortService mortService = Mock(MortService) {
-      1 * getSecurityGroup("abc", "openstack", "my-security-group", "west") >> sg
-    }
-    upserter = new OpenstackSecurityGroupUpserter(mortService: mortService)
-
-    when:
-    def result = upserter.isSecurityGroupUpserted(sg, null)
-
-    then:
-    result
-  }
-
-  def "handles null when getting the security group"() {
-    given:
-    MortService.SecurityGroup sg = new MortService.SecurityGroup(
-      name: "my-security-group",
-      region: "west",
-      accountName: "abc")
-    MortService mortService = Mock(MortService) {
-      1 * getSecurityGroup("abc", "openstack", "my-security-group", "west") >> null
-    }
-    upserter = new OpenstackSecurityGroupUpserter(mortService: mortService)
-
-    when:
-    def result = upserter.isSecurityGroupUpserted(sg, null)
-
-    then:
-    !result
-  }
-
-  def "returns false for 404 retrofit error"() {
-    given:
-    MortService.SecurityGroup sg = new MortService.SecurityGroup(
-      name: "my-security-group",
-      region: "west",
-      accountName: "abc")
-    MortService mortService = Mock(MortService) {
-      1 * getSecurityGroup("abc", "openstack", "my-security-group", "west") >> {
-        throw RetrofitError.httpError("/", new Response("", 404, "", [], null), null, null)
-      }
-    }
-    upserter = new OpenstackSecurityGroupUpserter(mortService: mortService)
-
-    when:
-    def result = upserter.isSecurityGroupUpserted(sg, null)
-
-    then:
-    !result
-  }
-
-  def "throws error for non-404 retrofit error"() {
-    given:
-    MortService.SecurityGroup sg = new MortService.SecurityGroup(
-      name: "my-security-group",
-      region: "west",
-      accountName: "abc")
-    MortService mortService = Mock(MortService) {
-      1 * getSecurityGroup("abc", "openstack", "my-security-group", "west") >> {
-        throw RetrofitError.httpError("/", new Response("", 400, "", [], null), null, null)
-      }
-    }
-    upserter = new OpenstackSecurityGroupUpserter(mortService: mortService)
-
-    when:
-    def result = upserter.isSecurityGroupUpserted(sg, null)
-
-    then:
-    thrown(RetrofitError)
-  }
-
-}
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackServerGroupCreatorSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackServerGroupCreatorSpec.groovy
deleted file mode 100644
index 3291a39900..0000000000
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/providers/openstack/OpenstackServerGroupCreatorSpec.groovy
+++ /dev/null
@@ -1,117 +0,0 @@
-/*
- * Copyright 2016 Target, Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package com.netflix.spinnaker.orca.clouddriver.tasks.providers.openstack
-
-import com.netflix.spinnaker.orca.test.model.ExecutionBuilder
-import spock.lang.Specification
-
-class OpenstackServerGroupCreatorSpec extends Specification {
-
-  def "should get operations"() {
-    given:
-    def ctx = [
-      account : "abc",
-      region : "north-pole",
-      deploymentDetails: [[imageId: "testImageId", region: "north-pole"]]
-    ]
-    def stage = ExecutionBuilder.stage {
-      context.putAll(ctx)
-    }
-
-    when:
-    def ops = new OpenstackServerGroupCreator().getOperations(stage)
-
-    then:
-    ops == [
-      [
-        "createServerGroup": [
-          account : "abc",
-          region : "north-pole",
-          deploymentDetails : [[imageId: "testImageId", region: "north-pole"]],
-          serverGroupParameters: [
-            image: "testImageId",
-          ]
-        ]
-      ]
-    ]
-
-    when: "fallback to non-region matching image"
-    ctx.region = "south-pole"
-    stage = ExecutionBuilder.stage {
-      context.putAll(ctx)
-    }
-    ops = new OpenstackServerGroupCreator().getOperations(stage)
-
-    then:
-    ops == [
-      [
-        "createServerGroup": [
-          account : "abc",
-          region : "south-pole",
-          deploymentDetails: [[imageId: "testImageId", region: "north-pole"]],
-          serverGroupParameters: [
-            image: "testImageId",
-          ]
-        ],
-      ]
-    ]
-
-    when: "throw error if no image found"
-    ctx.deploymentDetails = []
-    stage = ExecutionBuilder.stage {
-      context.putAll(ctx)
-    }
-    new OpenstackServerGroupCreator().getOperations(stage)
-
-    then:
-    Throwable ise = thrown()
-    ise.message == "No image could be found in south-pole."
-  }
-
-  def "should get image from provider context"() {
-    given:
-    String imageId = UUID.randomUUID().toString()
-    def ctx = [
-      account : "abc",
-      region : "north-pole",
-      deploymentDetails: [[cloudProviderType: "openstack",
-        imageId: imageId]]
-    ]
-    def stage = ExecutionBuilder.stage {
-      context.putAll(ctx)
-    }
-
-    when:
-    def ops = new OpenstackServerGroupCreator().getOperations(stage)
-
-    then:
-    ops == [
-      [
-        "createServerGroup": [
-          account : "abc",
-          region : "north-pole",
-          serverGroupParameters: [
-            image: imageId,
-          ],
-          deploymentDetails : [[cloudProviderType: "openstack",
-            imageId: imageId]]
-        ]
-      ]
-    ]
-  }
-
-}
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/UpsertScalingPolicyTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/UpsertScalingPolicyTaskSpec.groovy
new file mode 100644
index 0000000000..cdd43f5524
--- /dev/null
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/scalingpolicy/UpsertScalingPolicyTaskSpec.groovy
@@ -0,0 +1,76 @@
+/*
+ * Copyright 2019 Netflix, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package com.netflix.spinnaker.orca.clouddriver.tasks.scalingpolicy
+
+import com.netflix.spinnaker.orca.clouddriver.KatoService
+import com.netflix.spinnaker.orca.clouddriver.model.TaskId
+import com.netflix.spinnaker.orca.pipeline.model.Execution
+import com.netflix.spinnaker.orca.pipeline.model.Stage
+import spock.lang.Shared
+import spock.lang.Specification
+import spock.lang.Unroll
+import rx.Observable
+
+class UpsertScalingPolicyTaskSpec extends Specification {
+
+  @Shared
+  def taskId = new TaskId(UUID.randomUUID().toString())
+
+  @Unroll
+  def "should retry task on exception"() {
+
+    given:
+    KatoService katoService = Mock(KatoService)
+    def task = new UpsertScalingPolicyTask(kato: katoService)
+    def stage = new Stage(Execution.newPipeline("orca"), "upsertScalingPolicy",
+      [credentials : "abc", cloudProvider: "aCloud",
+       estimatedInstanceWarmup : "300",
+       targetValue : "75",
+       targetTrackingConfiguration:
+         [predefinedMetricSpecification:
+            [predefinedMetricType: "ASGAverageCPUUtilization"]]])
+
+    when:
+    def result = task.execute(stage)
+
+    then:
+    1 * katoService.requestOperations(_, _) >> { throw new Exception() }
+    result.status.toString() == "RUNNING"
+
+  }
+
+  @Unroll
+  def "should set the task status to SUCCEEDED for successful execution"() {
+
+    given:
+    KatoService katoService = Mock(KatoService)
+    def task = new UpsertScalingPolicyTask(kato: katoService)
+    def stage = new Stage(Execution.newPipeline("orca"), "upsertScalingPolicy",
+      [credentials : "abc", cloudProvider: "aCloud",
+       estimatedInstanceWarmup : "300",
+       targetValue : "75",
+       targetTrackingConfiguration:
+         [predefinedMetricSpecification:
+            [predefinedMetricType: "ASGAverageCPUUtilization"]]])
+
+    when:
+    def result = task.execute(stage)
+
+    then:
+    1 * katoService.requestOperations(_, _) >> { Observable.from(taskId) }
+    result.status.toString() == "SUCCEEDED"
+  }
+}
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CloneServerGroupTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CloneServerGroupTaskSpec.groovy
index 06656be7e7..d9db316075 100644
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CloneServerGroupTaskSpec.groovy
+++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/CloneServerGroupTaskSpec.groovy
@@ -19,6 +19,7 @@ package com.netflix.spinnaker.orca.clouddriver.tasks.servergroup
 import com.fasterxml.jackson.datatype.guava.GuavaModule
 import com.netflix.spinnaker.orca.clouddriver.KatoService
 import com.netflix.spinnaker.orca.clouddriver.model.TaskId
+import com.netflix.spinnaker.orca.clouddriver.tasks.servergroup.clone.BakeryImageAccessDescriptionDecorator
 import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper
 import com.netflix.spinnaker.orca.pipeline.model.Stage
 import com.netflix.spinnaker.orca.test.model.ExecutionBuilder
@@ -47,6 +48,7 @@ class CloneServerGroupTaskSpec extends Specification {
     mapper.registerModule(new GuavaModule())
 
     task.mapper = mapper
+    task.cloneDescriptionDecorators = [new BakeryImageAccessDescriptionDecorator()]
 
     stage.execution.stages.add(stage)
     stage.context = cloneServerGroupConfig
@@ -67,10 +69,10 @@ class CloneServerGroupTaskSpec extends Specification {
 
     then:
     operations.size() == 3
-    operations[2].cloneServerGroup.amiName == "hodor-image"
-    operations[2].cloneServerGroup.application == "hodor"
-    operations[2].cloneServerGroup.availabilityZones == ["us-east-1": ["a", "d"], "us-west-1": ["a", "b"]]
-    operations[2].cloneServerGroup.credentials == "fzlem"
+    operations[0].cloneServerGroup.amiName == "hodor-image"
+    operations[0].cloneServerGroup.application == "hodor"
+    operations[0].cloneServerGroup.availabilityZones == ["us-east-1": ["a", "d"], "us-west-1": ["a", "b"]]
+    operations[0].cloneServerGroup.credentials == "fzlem"
   }
 
   def "can include optional parameters"() {
@@ -91,7 +93,7 @@ class CloneServerGroupTaskSpec extends Specification {
 
     then:
    operations.size() == 3
-    with(operations[2].cloneServerGroup) {
+    with(operations[0].cloneServerGroup) {
       amiName == "hodor-image"
       application == "hodor"
       availabilityZones == ["us-east-1": ["a", "d"], "us-west-1": ["a", "b"]]
@@ -120,7 +122,7 @@ class CloneServerGroupTaskSpec extends Specification {
 
     then:
     operations.size() == 3
-    with(operations[2].cloneServerGroup) {
+    with(operations[0].cloneServerGroup) {
       amiName == contextAmi
       application == "hodor"
       availabilityZones == ["us-east-1": ["a", "d"], "us-west-1": ["a", "b"]]
@@ -156,7 +158,7 @@ class CloneServerGroupTaskSpec extends Specification {
 
     then:
     operations.size() == 3
-    with(operations[2].cloneServerGroup) {
+    with(operations[0].cloneServerGroup) {
       amiName == bakeAmi
       application == "hodor"
       availabilityZones == ["us-east-1": ["a", "d"], "us-west-1": ["a", "b"]]
@@ -219,11 +221,11 @@ class CloneServerGroupTaskSpec extends Specification {
 
     then:
     operations.size() == 3
-    operations[0].allowLaunchDescription.amiName == contextAmi
-    operations[0].allowLaunchDescription.region == "us-east-1"
+    operations[0].cloneServerGroup.amiName == contextAmi
     operations[1].allowLaunchDescription.amiName == contextAmi
-    operations[1].allowLaunchDescription.region == "us-west-1"
-    operations[2].cloneServerGroup.amiName == contextAmi
+    operations[1].allowLaunchDescription.region == "us-east-1"
+    operations[2].allowLaunchDescription.amiName == contextAmi
+    operations[2].allowLaunchDescription.region == "us-west-1"
 
     where:
     contextAmi = "ami-ctx"
@@ -247,7 +249,6 @@ class CloneServerGroupTaskSpec extends Specification {
 
     then:
     operations.size() == 2
-    operations[0].allowLaunchDescription.region == "eu-west-1"
-
+    operations[1].allowLaunchDescription.region == "eu-west-1"
   }
 }
diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateForceRefreshDependenciesTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateForceRefreshDependenciesTaskSpec.groovy
deleted file mode 100644
index 23ef0a9e32..0000000000
--- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateForceRefreshDependenciesTaskSpec.groovy
+++ /dev/null
@@ -1,115 +0,0 @@
-/*
- * Copyright 2016 Netflix, Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License")
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */ - -package com.netflix.spinnaker.orca.clouddriver.tasks.servergroup - -import com.netflix.spinnaker.orca.clouddriver.CloudDriverCacheService -import com.netflix.spinnaker.orca.clouddriver.model.TaskId -import com.netflix.spinnaker.orca.pipeline.model.Execution -import com.netflix.spinnaker.orca.pipeline.model.Stage -import spock.lang.Specification -import spock.lang.Subject - -class MigrateForceRefreshDependenciesTaskSpec extends Specification { - - @Subject - def task = new MigrateForceRefreshDependenciesTask() - def stage = new Stage(Execution.newPipeline("orca"), "refreshTask") - def taskId = new TaskId(UUID.randomUUID().toString()) - - CloudDriverCacheService cacheService = Mock(CloudDriverCacheService) - - void setup() { - task.cacheService = cacheService - stage.context = [ - cloudProvider: 'aws', - target : [ - region : 'us-east-1', - credentials: 'test' - ] - ] - } - - void 'should not refresh anything if there is nothing to refresh'() { - given: - stage.context["kato.tasks"] = [ - [ - resultObjects: [ - [someBogusResult: true], - [serverGroupNames: ["new-asg-v002"], - securityGroups : [] - ] - ] - ] - ] - when: - task.execute(stage) - - then: - 0 * _ - } - - void 'should refresh all security groups from the server group result itself'() { - given: - stage.context["kato.tasks"] = [ - [ - resultObjects: [ - [someBogusResult: true], - [serverGroupNames: ["new-asg-v002"], - securityGroups : [ - [created: [[targetName: 'new-sg-1', credentials: 'test', vpcId: 'vpc-1']], - reused : [[targetName: 'new-sg-2', credentials: 'prod', vpcId: 'vpc-2']]] - ] - ] - ] - ] - ] - when: - task.execute(stage) - - then: - 1 * cacheService.forceCacheUpdate('aws', 'SecurityGroup', [securityGroupName: 'new-sg-1', region: 'us-east-1', account: 'test', vpcId: 'vpc-1']) - 1 * cacheService.forceCacheUpdate('aws', 'SecurityGroup', [securityGroupName: 'new-sg-2', region: 'us-east-1', account: 'prod', vpcId: 'vpc-2']) - 0 * _ - } - - void 'should refresh all security groups from the 
load balancers, and refresh the load balancers'() { - given: - stage.context["kato.tasks"] = [ - [ - resultObjects: [ - [serverGroupNames: ["new-asg-v002"], - securityGroups : [ - [created: [[targetName: 'new-sg-1', credentials: 'test', vpcId: 'vpc-1']], - reused : []] - ], - loadBalancers : [ - [targetName: 'newElb-vpc1', securityGroups: [[created: [], reused: [[targetName: 'new-sg-2', credentials: 'prod', vpcId: 'vpc-2']]]]] - ] - ] - ] - ] - ] - when: - task.execute(stage) - - then: - 1 * cacheService.forceCacheUpdate('aws', 'LoadBalancer', [loadBalancerName: 'newElb-vpc1', region: 'us-east-1', account: 'test']) - 1 * cacheService.forceCacheUpdate('aws', 'SecurityGroup', [securityGroupName: 'new-sg-1', region: 'us-east-1', account: 'test', vpcId: 'vpc-1']) - 1 * cacheService.forceCacheUpdate('aws', 'SecurityGroup', [securityGroupName: 'new-sg-2', region: 'us-east-1', account: 'prod', vpcId: 'vpc-2']) - 0 * _ - } -} diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForRequiredInstancesDownTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForRequiredInstancesDownTaskSpec.groovy index afab4af14c..117db2f401 100644 --- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForRequiredInstancesDownTaskSpec.groovy +++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/WaitForRequiredInstancesDownTaskSpec.groovy @@ -59,7 +59,7 @@ class WaitForRequiredInstancesDownTaskSpec extends Specification { getServerGroup(*_) >> new Response('oort', 200, 'ok', [], new TypedString(response)) } task.serverGroupCacheForceRefreshTask = Mock(ServerGroupCacheForceRefreshTask) { - 2 * execute(_) >> new TaskResult(ExecutionStatus.SUCCEEDED) + 2 * execute(_) >> TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) 0 * _ } task.oortHelper = Mock(OortHelper) { diff --git 
a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/DeleteSnapshotTaskSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/DeleteSnapshotTaskSpec.groovy new file mode 100644 index 0000000000..3bc394cf9c --- /dev/null +++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/snapshot/DeleteSnapshotTaskSpec.groovy @@ -0,0 +1,57 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.snapshot + +import com.netflix.spinnaker.orca.ExecutionStatus +import com.netflix.spinnaker.orca.clouddriver.KatoService +import com.netflix.spinnaker.orca.clouddriver.model.TaskId +import com.netflix.spinnaker.orca.pipeline.model.Execution +import com.netflix.spinnaker.orca.pipeline.model.Stage +import spock.lang.Specification + +class DeleteSnapshotTaskSpec extends Specification { + + def "Should delete a snapshot"() { + given: + def context = [ + cloudProvider: "aws", + credentials : "test", + region : "us-east-1", + snapshotIds : ["snap-08e97a12bceb0b750"] + ] + + def stage = new Stage(Execution.newPipeline("orca"), "deleteSnapshot", context) + + and: + List operations = [] + def katoService = Mock(KatoService) { + 1 * requestOperations("aws", _) >> { + operations = it[1] + rx.Observable.from(new TaskId(UUID.randomUUID().toString())) + } + } + def task = new DeleteSnapshotTask(katoService) + + when: + def result = task.execute(stage) + + then: + operations.size() == 1 + operations[0].deleteSnapshot.snapshotId == stage.context.snapshotIds[0] + result.status == ExecutionStatus.SUCCEEDED + } +} diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/ResizeStrategySupportSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/ResizeStrategySupportSpec.groovy new file mode 100644 index 0000000000..50e33f1423 --- /dev/null +++ b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/ResizeStrategySupportSpec.groovy @@ -0,0 +1,49 @@ +package com.netflix.spinnaker.orca.kato.pipeline.support + +import com.netflix.spinnaker.orca.pipeline.model.Stage +import com.netflix.spinnaker.orca.test.model.ExecutionBuilder +import spock.lang.Specification +import spock.lang.Subject +import spock.lang.Unroll + +class ResizeStrategySupportSpec extends Specification { + @Subject + ResizeStrategySupport 
resizeStrategySupport + + @Unroll + def "test min logic in performScalingAndPinning() with unpinMin=#unpinMin originalMin=#originalMin savedMin=#savedMin"() { + given: + resizeStrategySupport = new ResizeStrategySupport() + Stage stage = ExecutionBuilder.stage {} + stage.context = [ + unpinMinimumCapacity: unpinMin, + source: [ + serverGroupName: "app-v000" + ], + "originalCapacity.app-v000": [ + min: originalMin + ], + savedCapacity: [ + min: savedMin + ] + ] + ResizeStrategy.OptionalConfiguration config = Mock(ResizeStrategy.OptionalConfiguration) + + when: + def outputCapacity = resizeStrategySupport.performScalingAndPinning(sourceCapacity as ResizeStrategy.Capacity, stage, config) + + then: + outputCapacity == expectedCapacity as ResizeStrategy.Capacity + + where: + sourceCapacity | unpinMin | originalMin | savedMin || expectedCapacity + [min: 1, max: 3, desired: 2] | null | 1 | null || [min: 1, max: 3, desired: 2] + [min: 1, max: 3, desired: 2] | false | 1 | null || [min: 1, max: 3, desired: 2] + [min: 1, max: 3, desired: 2] | true | 1 | null || [min: 1, max: 3, desired: 2] + [min: 1, max: 3, desired: 2] | true | 2 | null || [min: 1, max: 3, desired: 2] // won't unpin to a higher min 2 + [min: 1, max: 3, desired: 2] | true | 0 | null || [min: 0, max: 3, desired: 2] + [min: 1, max: 3, desired: 2] | true | null | 2 || [min: 1, max: 3, desired: 2] + [min: 1, max: 3, desired: 2] | true | 0 | 2 || [min: 0, max: 3, desired: 2] // verify that 0 is a valid originalMin + [min: 1, max: 3, desired: 2] | true | null | 0 || [min: 0, max: 3, desired: 2] // picks the savedMin value + } +} diff --git a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/SourceResolverSpec.groovy b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/SourceResolverSpec.groovy index c032c9fa32..c10ee7f49a 100644 --- a/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/SourceResolverSpec.groovy +++ 
b/orca-clouddriver/src/test/groovy/com/netflix/spinnaker/orca/kato/pipeline/support/SourceResolverSpec.groovy @@ -139,6 +139,55 @@ class SourceResolverSpec extends Specification { source?.asgName == 'app-test-v009' } + void "should populate deploy stage 'source' with targeted server group if source contains the location of the target"() { + given: + OortService oort = Mock(OortService) + ObjectMapper mapper = new ObjectMapper() + RetrySupport retrySupport = Spy(RetrySupport) { + _ * sleep(_) >> { /* do nothing */ } + } + + SourceResolver resolver = new SourceResolver( + oortService: oort, + mapper: mapper, + resolver: new TargetServerGroupResolver(oortService: oort, mapper: mapper, retrySupport: retrySupport) + ) + + when: + def stage = new Stage( + Execution.newPipeline("orca"), + "test", + [ + cloudProvider: "cloudfoundry", + application: "app", + credentials: "test1", + region: "org > space", + source: [clusterName: "app-test", account: "test2", region: "org2 > space2"], + target: "current_asg_dynamic", + ] + ) + def source = resolver.getSource(stage) + + then: + 1 * oort.getTargetServerGroup( + 'app', + 'test2', + 'app-test', + 'cloudfoundry', + 'org2 > space2', + 'current_asg_dynamic') >> new Response('http://oort.com', 200, 'Okay', [], new TypedString('''\ + { + "name": "app-test-v009", + "region": "org2 > space2", + "createdTime": 1 + }'''.stripIndent())) + + source?.account == 'test2' + source?.region == 'org2 > space2' + source?.serverGroupName == 'app-test-v009' + source?.asgName == 'app-test-v009' + } + void "should ignore target if source is explicitly specified"() { given: SourceResolver resolver = new SourceResolver(mapper: new ObjectMapper()) diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployServiceStagePreprocessorTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployServiceStagePreprocessorTest.java new file 
mode 100644 index 0000000000..16ddd586ab --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployServiceStagePreprocessorTest.java @@ -0,0 +1,57 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.cf; + +import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryDeployServiceTask; +import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryMonitorKatoServicesTask; +import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryWaitForDeployServiceTask; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.junit.jupiter.api.Test; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; + +import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; +import static org.assertj.core.api.Assertions.assertThat; + +class CloudFoundryDeployServiceStagePreprocessorTest { + @Test + void ensureThatCorrectTasksAreAddedForDeployingCloudFoundryService() { + TaskNode.Builder expectedBuilder = TaskNode.Builder(TaskNode.GraphType.FULL); + expectedBuilder + .withTask("deployService", CloudFoundryDeployServiceTask.class) + .withTask("monitorDeployService", 
CloudFoundryMonitorKatoServicesTask.class) + .withTask("waitForDeployService", CloudFoundryWaitForDeployServiceTask.class); + + CloudFoundryDeployServiceStagePreprocessor preprocessor = new CloudFoundryDeployServiceStagePreprocessor(); + Map<String, Object> context = new HashMap<>(); + context.put("cloudProvider", "my-cloud"); + context.put("manifest", Collections.singletonMap("type", "direct")); + Stage stage = new Stage( + new Execution(PIPELINE, "orca"), + "deployService", + context); + + TaskNode.Builder builder = new TaskNode.Builder(TaskNode.GraphType.FULL); + preprocessor.addSteps(builder, stage); + + assertThat(builder).isEqualToComparingFieldByFieldRecursively(expectedBuilder); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployStagePreProcessorTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployStagePreProcessorTest.java new file mode 100644 index 0000000000..dc6a5dc497 --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDeployStagePreProcessorTest.java @@ -0,0 +1,99 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.cf; + +import com.netflix.spinnaker.orca.clouddriver.pipeline.cluster.RollbackClusterStage; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.ServerGroupForceCacheRefreshStage; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.strategies.DeployStagePreProcessor; +import com.netflix.spinnaker.orca.kato.pipeline.support.StageData; +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.junit.jupiter.api.Test; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import static java.util.Collections.singletonMap; +import static org.assertj.core.api.Assertions.assertThat; + +class CloudFoundryDeployStagePreProcessorTest { + private RollbackClusterStage rollbackClusterStage = new RollbackClusterStage(null, null); + private ServerGroupForceCacheRefreshStage serverGroupForceCacheRefreshStage = new ServerGroupForceCacheRefreshStage(); + private CloudFoundryDeployStagePreProcessor preProcessor = + new CloudFoundryDeployStagePreProcessor(rollbackClusterStage, serverGroupForceCacheRefreshStage); + + @Test + void onFailureStageDefinitionsReturnsEmptyListForRedBlack() { + Stage stage = new Stage(); + Map<String, Object> context = new HashMap<>(); + context.put("strategy", "redblack"); + context.put("cloudProvider", "cloudfoundry"); + context.put("rollback", singletonMap("onFailure", true)); + stage.setContext(context); + + List<DeployStagePreProcessor.StageDefinition> results = preProcessor.onFailureStageDefinitions(stage); + + assertThat(results).isEmpty(); + } + + @Test + void onFailureStageDefinitionsReturnsEmptyListIfRollbackIsNull() { + Stage stage = new Stage(); + Map<String, Object> context = new HashMap<>(); + context.put("strategy", "redblack"); + context.put("cloudProvider", "cloudfoundry"); + stage.setContext(context); + + List<DeployStagePreProcessor.StageDefinition> results = preProcessor.onFailureStageDefinitions(stage); + + assertThat(results).isEmpty(); + } + +
@Test + void onFailureStageDefinitionsReturnsEmptyListIfRollbackOnFailureIsFalse() { + Stage stage = new Stage(); + Map<String, Object> context = new HashMap<>(); + context.put("strategy", "redblack"); + context.put("cloudProvider", "cloudfoundry"); + context.put("rollback", singletonMap("onFailure", false)); + stage.setContext(context); + + List<DeployStagePreProcessor.StageDefinition> results = preProcessor.onFailureStageDefinitions(stage); + + assertThat(results).isEmpty(); + } + + @Test + void onFailureStageDefinitionsReturnsCacheRefreshAndRollbackForCfRollingRedBlack() { + Stage stage = new Stage(); + StageData.Source source = new StageData.Source(); + source.setServerGroupName("sourceServerGroupName"); + Map<String, Object> context = new HashMap<>(); + context.put("strategy", "cfrollingredblack"); + context.put("cloudProvider", "cloudfoundry"); + context.put("source", source); + context.put("rollback", singletonMap("onFailure", true)); + stage.setContext(context); + + List<DeployStagePreProcessor.StageDefinition> results = preProcessor.onFailureStageDefinitions(stage); + + assertThat(results.stream().map(stageDefinition -> stageDefinition.stageDefinitionBuilder.getType())) + .containsExactly(StageDefinitionBuilder.getType(ServerGroupForceCacheRefreshStage.class), + RollbackClusterStage.PIPELINE_CONFIG_TYPE); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDestroyServiceStagePreprocessorTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDestroyServiceStagePreprocessorTest.java new file mode 100644 index 0000000000..453ef16fbb --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryDestroyServiceStagePreprocessorTest.java @@ -0,0 +1,55 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.cf; + +import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.*; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.junit.jupiter.api.Test; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; + +import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; +import static org.assertj.core.api.Assertions.assertThat; + +class CloudFoundryDestroyServiceStagePreprocessorTest { + @Test + void ensureThatCorrectTasksAreAddedForDestroyingCloudFoundryService() { + TaskNode.Builder expectedBuilder = TaskNode.Builder(TaskNode.GraphType.FULL); + expectedBuilder + .withTask("destroyService", CloudFoundryDestroyServiceTask.class) + .withTask("monitorDestroyService", CloudFoundryMonitorKatoServicesTask.class) + .withTask("waitForDestroyService", CloudFoundryWaitForDestroyServiceTask.class); + + CloudFoundryDestroyServiceStagePreprocessor preprocessor = new CloudFoundryDestroyServiceStagePreprocessor(); + Map<String, Object> context = new HashMap<>(); + context.put("cloudProvider", "my-cloud"); + context.put("manifest", Collections.singletonMap("type", "direct")); + Stage stage = new Stage( + new Execution(PIPELINE, "orca"), + "destroyService", + context); + + TaskNode.Builder builder = new TaskNode.Builder(TaskNode.GraphType.FULL); + preprocessor.addSteps(builder, stage); + +
assertThat(builder).isEqualToComparingFieldByFieldRecursively(expectedBuilder); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryShareServiceStagePreprocessorTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryShareServiceStagePreprocessorTest.java new file mode 100644 index 0000000000..e9e8c4421b --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryShareServiceStagePreprocessorTest.java @@ -0,0 +1,55 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.cf; + +import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryMonitorKatoServicesTask; +import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryShareServiceTask; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.junit.jupiter.api.Test; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; + +import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; +import static org.assertj.core.api.Assertions.assertThat; + +class CloudFoundryShareServiceStagePreprocessorTest { + @Test + void ensureThatCorrectTasksAreAddedForSharingCloudFoundryService() { + TaskNode.Builder expectedBuilder = TaskNode.Builder(TaskNode.GraphType.FULL); + expectedBuilder + .withTask("shareService", CloudFoundryShareServiceTask.class) + .withTask("monitorShareService", CloudFoundryMonitorKatoServicesTask.class); + + CloudFoundryShareServiceStagePreprocessor preprocessor = new CloudFoundryShareServiceStagePreprocessor(); + Map<String, Object> context = new HashMap<>(); + context.put("cloudProvider", "my-cloud"); + context.put("manifest", Collections.singletonMap("type", "direct")); + Stage stage = new Stage( + new Execution(PIPELINE, "orca"), + "shareService", + context); + + TaskNode.Builder builder = new TaskNode.Builder(TaskNode.GraphType.FULL); + preprocessor.addSteps(builder, stage); + + assertThat(builder).isEqualToComparingFieldByFieldRecursively(expectedBuilder); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryUnshareServiceStagePreprocessorTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryUnshareServiceStagePreprocessorTest.java new file mode 100644 index
0000000000..d64971a135 --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/pipeline/providers/cf/CloudFoundryUnshareServiceStagePreprocessorTest.java @@ -0,0 +1,55 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.pipeline.providers.cf; + +import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryMonitorKatoServicesTask; +import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryUnshareServiceTask; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.junit.jupiter.api.Test; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; + +import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; +import static org.assertj.core.api.Assertions.assertThat; + +class CloudFoundryUnshareServiceStagePreprocessorTest { + @Test + void ensureThatCorrectTasksAreAddedForUnsharingCloudFoundryService() { + TaskNode.Builder expectedBuilder = TaskNode.Builder(TaskNode.GraphType.FULL); + expectedBuilder + .withTask("unshareService", CloudFoundryUnshareServiceTask.class) + .withTask("monitorUnshareService", CloudFoundryMonitorKatoServicesTask.class); + + CloudFoundryUnshareServiceStagePreprocessor preprocessor = new 
CloudFoundryUnshareServiceStagePreprocessor(); + Map<String, Object> context = new HashMap<>(); + context.put("cloudProvider", "my-cloud"); + context.put("manifest", Collections.singletonMap("type", "direct")); + Stage stage = new Stage( + new Execution(PIPELINE, "orca"), + "unshareService", + context); + + TaskNode.Builder builder = new TaskNode.Builder(TaskNode.GraphType.FULL); + preprocessor.addSteps(builder, stage); + + assertThat(builder).isEqualToComparingFieldByFieldRecursively(expectedBuilder); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/AbstractCloudFoundryWaitForServiceOperationTaskTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/AbstractCloudFoundryWaitForServiceOperationTaskTest.java new file mode 100644 index 0000000000..6243021c15 --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/AbstractCloudFoundryWaitForServiceOperationTaskTest.java @@ -0,0 +1,76 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.OortService; +import com.netflix.spinnaker.orca.clouddriver.tasks.servicebroker.AbstractWaitForServiceTask; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; + +import javax.annotation.Nullable; +import java.util.HashMap; +import java.util.Map; +import java.util.function.Function; + +import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.matches; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +class AbstractCloudFoundryWaitForServiceOperationTaskTest<T extends AbstractWaitForServiceTask> { + private final String operationType; + private final Function<OortService, T> subjectConstructor; + + AbstractCloudFoundryWaitForServiceOperationTaskTest( + String operationType, + Function<OortService, T> subjectConstructor) { + this.operationType = operationType; + this.subjectConstructor = subjectConstructor; + } + + void testOortServiceStatus(ExecutionStatus expectedStatus, @Nullable Map<String, Object> serviceInstance) { + OortService oortService = mock(OortService.class); + String credentials = "my-account"; + String cloudProvider = "cloud"; + String region = "org > space"; + String serviceInstanceName = "service-instance-name"; + when(oortService.getServiceInstance( + matches(credentials), + matches(cloudProvider), + matches(region), + matches(serviceInstanceName))) + .thenReturn(serviceInstance); + + T task = subjectConstructor.apply(oortService); + + Map<String, Object> context = new HashMap<>(); + context.put("cloudProvider", cloudProvider); + context.put("service.account", credentials); + context.put("service.region", region); + context.put("service.instance.name", serviceInstanceName); + + TaskResult result =
task.execute(new Stage( + new Execution(PIPELINE, "orca"), + operationType, + context)); + + assertThat(result.getStatus()).isEqualTo(expectedStatus); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryCreateServiceKeyTaskTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryCreateServiceKeyTaskTest.java new file mode 100644 index 0000000000..ff94b5a80b --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryCreateServiceKeyTaskTest.java @@ -0,0 +1,75 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.google.common.collect.ImmutableMap; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import com.netflix.spinnaker.orca.clouddriver.model.TaskId; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.junit.jupiter.api.Test; +import rx.Observable; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; + +import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.ArgumentMatchers.matches; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +class CloudFoundryCreateServiceKeyTaskTest { + @Test + void shouldMakeRequestToKatoToCreateServiceKey() { + String type = "createServiceKey"; + KatoService kato = mock(KatoService.class); + String cloudProvider = "my-cloud"; + String credentials = "cf-foundation"; + String region = "org > space"; + TaskId taskId = new TaskId("kato-task-id"); + Map<String, Object> context = new HashMap<>(); + context.put("cloudProvider", cloudProvider); + context.put("credentials", credentials); + context.put("region", region); + context.put("serviceInstanceName", "service-instance"); + context.put("serviceKeyName", "service-key"); + when(kato.requestOperations(matches(cloudProvider), + eq(Collections.singletonList(Collections.singletonMap(type, context))))) + .thenReturn(Observable.from(new TaskId[] { taskId })); + CloudFoundryCreateServiceKeyTask task = new CloudFoundryCreateServiceKeyTask(kato); + + Map<String, Object> expectedContext = new ImmutableMap.Builder<String, Object>() + .put("notification.type", type) + .put("kato.last.task.id", taskId) + .put("service.region", region) +
.put("service.account", credentials) + .build(); + TaskResult expected = TaskResult.builder(ExecutionStatus.SUCCEEDED).context(expectedContext).build(); + + TaskResult result = task.execute(new Stage( + new Execution(PIPELINE, "orca"), + type, + context)); + + assertThat(result).isEqualToComparingFieldByFieldRecursively(expected); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDeployServiceTaskTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDeployServiceTaskTest.java new file mode 100644 index 0000000000..831326cdf6 --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDeployServiceTaskTest.java @@ -0,0 +1,79 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.fasterxml.jackson.databind.DeserializationFeature; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import com.netflix.spinnaker.orca.clouddriver.model.TaskId; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver; +import org.junit.jupiter.api.Test; +import org.mockito.ArgumentCaptor; +import rx.Observable; + +import java.io.IOException; +import java.util.Collection; +import java.util.Map; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.*; + +class CloudFoundryDeployServiceTaskTest { + @Test + void bindArtifacts() throws IOException { + String stageJson = "{\n" + + " \"cloudProvider\": \"cloudfoundry\",\n" + + " \"credentials\": \"montclair\",\n" + + " \"manifest\": {\n" + + " \"artifact\": {\n" + + " \"artifactAccount\": \"spring-artifactory\",\n" + + " \"reference\": \"g:a:${expression}\",\n" + + " \"type\": \"maven/file\"\n" + + " }\n" + + " },\n" + + " \"name\": \"Deploy Service\",\n" + + " \"region\": \"development > development\",\n" + + " \"type\": \"deployService\"\n" + + "}"; + + ObjectMapper mapper = new ObjectMapper().disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES); + + KatoService katoService = mock(KatoService.class); + when(katoService.requestOperations(any(), any())).thenReturn(Observable.just(new TaskId("taskid"))); + + ArtifactResolver artifactResolver = mock(ArtifactResolver.class); + Artifact boundArtifact = Artifact.builder() + .reference("g:a:v").type("maven/file").artifactAccount("spring-artifactory").build(); + when(artifactResolver.getBoundArtifactForStage(any(), isNull(), any())).thenReturn(boundArtifact); + + CloudFoundryDeployServiceTask task = new 
CloudFoundryDeployServiceTask(katoService, artifactResolver); + Stage stage = new Stage(); + stage.setContext(mapper.readValue(stageJson, Map.class)); + task.execute(stage); + + ArgumentCaptor<Collection<Map<String, Map>>> captor = ArgumentCaptor.forClass(Collection.class); + verify(katoService).requestOperations(eq("cloudfoundry"), captor.capture()); + + Map<String, Map> operation = captor.getValue().iterator().next(); + Map manifest = (Map) operation.get("deployService").get("manifest"); + Object capturedBoundArtifact = manifest.get("artifact"); + assertThat(boundArtifact).isEqualTo(mapper.convertValue(capturedBoundArtifact, Artifact.class)); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDestroyServiceTaskTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDestroyServiceTaskTest.java new file mode 100644 index 0000000000..dc6b835675 --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryDestroyServiceTaskTest.java @@ -0,0 +1,71 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.google.common.collect.ImmutableMap; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import com.netflix.spinnaker.orca.clouddriver.model.TaskId; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.junit.jupiter.api.Test; +import rx.Observable; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; + +import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.matches; +import static org.mockito.Mockito.*; + +class CloudFoundryDestroyServiceTaskTest { + @Test + void shouldMakeRequestToKatoToDestroyService() { + KatoService kato = mock(KatoService.class); + String cloudProvider = "my-cloud"; + String credentials = "cf-foundation"; + String region = "org > space"; + TaskId taskId = new TaskId("kato-task-id"); + Map<String, Object> context = new HashMap<>(); + context.put("cloudProvider", cloudProvider); + context.put("credentials", credentials); + context.put("region", region); + when(kato.requestOperations(matches(cloudProvider), + eq(Collections.singletonList(Collections.singletonMap("destroyService", context))))) + .thenReturn(Observable.from(new TaskId[] { taskId })); + CloudFoundryDestroyServiceTask task = new CloudFoundryDestroyServiceTask(kato); + + String type = "destroyService"; + Map<String, Object> expectedContext = new ImmutableMap.Builder<String, Object>() + .put("notification.type", type) + .put("kato.last.task.id", taskId) + .put("service.region", region) + .put("service.account", credentials) + .build(); + TaskResult expected = TaskResult.builder(ExecutionStatus.SUCCEEDED).context(expectedContext).build(); + + TaskResult result = task.execute(new Stage( + new
Execution(PIPELINE, "orca"), + "destroyService", + context)); + + assertThat(result).isEqualToComparingFieldByFieldRecursively(expected); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryMonitorKatoServicesTaskTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryMonitorKatoServicesTaskTest.java new file mode 100644 index 0000000000..a4f32de3c1 --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryMonitorKatoServicesTaskTest.java @@ -0,0 +1,127 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.google.common.collect.ImmutableMap; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import com.netflix.spinnaker.orca.clouddriver.model.Task; +import com.netflix.spinnaker.orca.clouddriver.model.TaskId; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.junit.jupiter.api.Test; +import rx.Observable; + +import javax.annotation.Nullable; +import java.util.*; + +import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.ArgumentMatchers.matches; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +class CloudFoundryMonitorKatoServicesTaskTest { + private void testKatoServiceStatus(boolean completed, boolean failed, @Nullable List<Map> resultObjects, ExecutionStatus expectedStatus) { + KatoService katoService = mock(KatoService.class); + String taskIdString = "kato-task-id"; + String credentials = "my-account"; + String cloudProvider = "cloud"; + String region = "org > space"; + when(katoService.lookupTask( + matches(taskIdString), + eq(true))) + .thenReturn(Observable.from(new Task[] { new Task(taskIdString, new Task.Status(completed, failed), resultObjects, Collections.emptyList()) })); + + CloudFoundryMonitorKatoServicesTask task = new CloudFoundryMonitorKatoServicesTask(katoService); + + ImmutableMap.Builder<String, Object> katoTaskMapBuilder = new ImmutableMap.Builder<String, Object>() + .put("id", taskIdString) + .put("status", new Task.Status(completed, failed)) + .put("history", Collections.emptyList()) + .put("resultObjects", Optional.ofNullable(resultObjects).orElse(Collections.emptyList())); +
Optional.ofNullable(resultObjects) + .ifPresent(results -> results.stream() + .filter(result -> "EXCEPTION".equals(result.get("type"))) + .findFirst() + .ifPresent(r -> katoTaskMapBuilder.put("exception", r))); + + Map<String, Object> expectedContext = new HashMap<>(); + TaskId taskId = new TaskId(taskIdString); + expectedContext.put("kato.last.task.id", taskId); + expectedContext.put("kato.task.firstNotFoundRetry", -1L); + expectedContext.put("kato.task.notFoundRetryCount", 0); + expectedContext.put("kato.tasks", Collections.singletonList(katoTaskMapBuilder.build())); + TaskResult expected = TaskResult.builder(expectedStatus).context(expectedContext).build(); + + Map<String, Object> context = new HashMap<>(); + context.put("cloudProvider", cloudProvider); + context.put("kato.last.task.id", taskId); + context.put("credentials", credentials); + context.put("region", region); + + TaskResult result = task.execute(new Stage( + new Execution(PIPELINE, "orca"), + "deployService", + context)); + + assertThat(result).isEqualToComparingFieldByFieldRecursively(expected); + } + + @Test + void returnsStatusRunningWhenIncompleteAndNotFailedWithEmptyResults() { + testKatoServiceStatus(false, false, Collections.emptyList(), ExecutionStatus.RUNNING); + } + + @Test + void returnsStatusRunningWhenCompleteAndNotFailedWithNullResults() { + testKatoServiceStatus(true, false, null, ExecutionStatus.RUNNING); + } + + @Test + void returnsStatusRunningWhenCompleteAndNotFailedWithEmptyResults() { + testKatoServiceStatus(true, false, Collections.emptyList(), ExecutionStatus.RUNNING); + } + + @Test + void returnsStatusTerminalWhenCompleteAndFailedWithEmptyResults() { + testKatoServiceStatus(true, true, Collections.emptyList(), ExecutionStatus.TERMINAL); + } + + @Test + void returnsStatusTerminalWhenCompleteAndFailedWithAnInProgressResult() { + Map<String, Object> inProgressResult = new ImmutableMap.Builder<String, Object>() + .put("type", "CREATE") + .put("state", "IN_PROGRESS") + .put("serviceInstanceName", "service-instance-name") + .build(); +
testKatoServiceStatus(true, true, Collections.singletonList(inProgressResult), ExecutionStatus.TERMINAL); + } + + @Test + void returnsStatusTerminalWithExceptionWhenCompleteAndFailedWithAnExceptionResult() { + Map<String, Object> exceptionResult = new ImmutableMap.Builder<String, Object>() + .put("type", "EXCEPTION") + .put("operation", "my-atomic-operation") + .put("cause", "MyException") + .put("message", "Epic Failure") + .build(); + testKatoServiceStatus(true, true, Collections.singletonList(exceptionResult), ExecutionStatus.TERMINAL); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreatorTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreatorTest.java new file mode 100644 index 0000000000..05be3d99ea --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryServerGroupCreatorTest.java @@ -0,0 +1,82 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.fasterxml.jackson.databind.ObjectMapper; +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver; +import org.junit.jupiter.api.Test; + +import java.io.IOException; +import java.util.Base64; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.Mockito.mock; + +class CloudFoundryServerGroupCreatorTest { + @Test + void generateCloudFoundryManifestFromDirectInput() throws IOException { + String manifestPipelineJson = "{\n" + + " \"direct\": {\n" + + " \"buildpacks\": [\"java\"],\n" + + " \"diskQuota\": \"1024M\",\n" + + " \"environment\": [\n" + + " {\n" + + " \"key\": \"k\",\n" + + " \"value\": \"v\"\n" + + " }\n" + + " ],\n" + + " \"healthCheckHttpEndpoint\": \"http://healthme\",\n" + + " \"healthCheckType\": \"http\",\n" + + " \"instances\": 1,\n" + + " \"memory\": \"1024M\",\n" + + " \"routes\": [\"route\"],\n" + + " \"services\": [\"service\"]\n" + + " }\n" + + "}"; + + ObjectMapper mapper = new ObjectMapper(); + + ArtifactResolver artifactResolver = mock(ArtifactResolver.class); + Stage stage = mock(Stage.class); + + Artifact artifact = mapper.readValue(manifestPipelineJson, Manifest.class).toArtifact(artifactResolver, stage); + + assertThat(artifact.getType()).isEqualTo("embedded/base64"); + assertThat(new String(Base64.getDecoder().decode(artifact.getReference()))).isEqualTo( + "---\n" + + "applications:\n" + + " -\n" + + " name: app\n" + + " buildpacks:\n" + + " - java\n" + + " health-check-type: http\n" + + " health-check-http-endpoint: http://healthme\n" + + " env:\n" + + " k: v\n" + + " routes:\n" + + " -\n" + + " route: route\n" + + " services:\n" + + " - service\n" + + " instances: 1\n" + + " memory: 1024M\n" + + " disk_quota: 1024M\n" + ); + } +} \ No newline at end of file diff --git 
a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDeployServiceTaskTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDeployServiceTaskTest.java new file mode 100644 index 0000000000..e986cd04e5 --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDeployServiceTaskTest.java @@ -0,0 +1,54 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.netflix.spinnaker.orca.ExecutionStatus; +import org.junit.jupiter.api.Test; + +import java.util.Collections; + +class CloudFoundryWaitForDeployServiceTaskTest + extends AbstractCloudFoundryWaitForServiceOperationTaskTest<CloudFoundryWaitForDeployServiceTask> { + CloudFoundryWaitForDeployServiceTaskTest() { + super("deployService", CloudFoundryWaitForDeployServiceTask::new); + } + + @Test + void isTerminalWhenOortResultIsFailed() { + testOortServiceStatus(ExecutionStatus.TERMINAL, Collections.singletonMap("status", "FAILED")); + } + + @Test + void isSuccessWhenOortResultIsSucceeded() { + testOortServiceStatus(ExecutionStatus.SUCCEEDED, Collections.singletonMap("status", "SUCCEEDED")); + } + + @Test + void isRunningWhenOortResultIsInProgress() { + testOortServiceStatus(ExecutionStatus.RUNNING, Collections.singletonMap("status", "IN_PROGRESS")); + } + + @Test + void isRunningWhenOortResultsAreEmpty() { + testOortServiceStatus(ExecutionStatus.RUNNING, Collections.emptyMap()); + } + + @Test + void isTerminalWhenOortResultsAreNull() { + testOortServiceStatus(ExecutionStatus.TERMINAL, null); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDestroyServiceTaskTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDestroyServiceTaskTest.java new file mode 100644 index 0000000000..ca7c7db868 --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/cf/CloudFoundryWaitForDestroyServiceTaskTest.java @@ -0,0 +1,49 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf; + +import com.netflix.spinnaker.orca.ExecutionStatus; +import org.junit.jupiter.api.Test; + +import java.util.Collections; + +class CloudFoundryWaitForDestroyServiceTaskTest + extends AbstractCloudFoundryWaitForServiceOperationTaskTest<CloudFoundryWaitForDestroyServiceTask> { + CloudFoundryWaitForDestroyServiceTaskTest() { + super("destroyService", CloudFoundryWaitForDestroyServiceTask::new); + } + + @Test + void isTerminalWhenOortResultIsFailed() { + testOortServiceStatus(ExecutionStatus.TERMINAL, Collections.singletonMap("status", "FAILED")); + } + + @Test + void isRunningWhenOortResultIsInProgress() { + testOortServiceStatus(ExecutionStatus.RUNNING, Collections.singletonMap("status", "IN_PROGRESS")); + } + + @Test + void isRunningWhenOortResultsAreEmpty() { + testOortServiceStatus(ExecutionStatus.RUNNING, Collections.emptyMap()); + } + + @Test + void isSuccessWhenOortResultsAreNull() { + testOortServiceStatus(ExecutionStatus.SUCCEEDED, null); + } +} diff --git a/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/SetStatefulDiskTaskTest.java b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/SetStatefulDiskTaskTest.java new file mode 100644 index 0000000000..8464fc885b --- /dev/null +++ b/orca-clouddriver/src/test/java/com/netflix/spinnaker/orca/clouddriver/tasks/providers/gce/SetStatefulDiskTaskTest.java @@ -0,0 +1,89 @@ +/* + * + * * Copyright 2019 Google, Inc.
+ * * + * * Licensed under the Apache License, Version 2.0 (the "License") + * * you may not use this file except in compliance with the License. + * * You may obtain a copy of the License at + * * + * * http://www.apache.org/licenses/LICENSE-2.0 + * * + * * Unless required by applicable law or agreed to in writing, software + * * distributed under the License is distributed on an "AS IS" BASIS, + * * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * * See the License for the specific language governing permissions and + * * limitations under the License. + * + * + */ + +package com.netflix.spinnaker.orca.clouddriver.tasks.providers.gce; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +import com.google.common.collect.ImmutableList; +import com.google.common.collect.ImmutableMap; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import com.netflix.spinnaker.orca.clouddriver.model.TaskId; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.TargetServerGroup; +import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.support.TargetServerGroupResolver; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.JUnit4; +import rx.Observable; + +@RunWith(JUnit4.class) +public class SetStatefulDiskTaskTest { + + private SetStatefulDiskTask task; + + private KatoService katoService; + private TargetServerGroupResolver resolver; + + @Before + public void setUp() { + katoService = mock(KatoService.class); + resolver = mock(TargetServerGroupResolver.class); + task = new SetStatefulDiskTask(katoService, resolver); + } + + @Test + public void success() { + 
when(resolver.resolve(any())) + .thenReturn( + ImmutableList.of(new TargetServerGroup(ImmutableMap.of("name", "testapp-v000")))); + when(katoService.requestOperations(any(), any())) + .thenReturn(Observable.just(new TaskId("10111"))); + + Stage stage = new Stage(); + stage.getContext().put("cloudProvider", "gce"); + stage.getContext().put("credentials", "spinnaker-test"); + stage.getContext().put("serverGroupName", "testapp-v000"); + stage.getContext().put("region", "us-desertoasis1"); + stage.getContext().put("deviceName", "testapp-v000-1"); + + TaskResult result = task.execute(stage); + + ImmutableMap<String, Object> operationParams = + ImmutableMap.of( + "credentials", "spinnaker-test", + "serverGroupName", "testapp-v000", + "region", "us-desertoasis1", + "deviceName", "testapp-v000-1"); + verify(katoService) + .requestOperations( + "gce", ImmutableList.of(ImmutableMap.of("setStatefulDisk", operationParams))); + + assertThat(result.getContext().get("notification.type")).isEqualTo("setstatefuldisk"); + assertThat(result.getContext().get("serverGroupName")).isEqualTo("testapp-v000"); + assertThat(result.getContext().get("deploy.server.groups")) + .isEqualTo(ImmutableMap.of("us-desertoasis1", ImmutableList.of("testapp-v000"))); + } +} diff --git a/orca-core-tck/src/main/groovy/com/netflix/spinnaker/orca/pipeline/persistence/ExecutionRepositoryTck.groovy b/orca-core-tck/src/main/groovy/com/netflix/spinnaker/orca/pipeline/persistence/ExecutionRepositoryTck.groovy index d7d7cca47d..bbf472fce7 100644 --- a/orca-core-tck/src/main/groovy/com/netflix/spinnaker/orca/pipeline/persistence/ExecutionRepositoryTck.groovy +++ b/orca-core-tck/src/main/groovy/com/netflix/spinnaker/orca/pipeline/persistence/ExecutionRepositoryTck.groovy @@ -157,6 +157,8 @@ abstract class ExecutionRepositoryTck extends Spe when: repository.store(runningExecution) + // our ULID implementation isn't monotonic + sleep(5) repository.store(succeededExecution) def orchestrations =
repository.retrieveOrchestrationsForApplication( runningExecution.application, diff --git a/orca-core/orca-core.gradle b/orca-core/orca-core.gradle index dfc3f053a2..0aa6ed0efc 100644 --- a/orca-core/orca-core.gradle +++ b/orca-core/orca-core.gradle @@ -17,6 +17,12 @@ apply from: "$rootDir/gradle/kotlin.gradle" apply from: "$rootDir/gradle/spock.gradle" +test { + useJUnitPlatform { + includeEngines "junit-vintage", "junit-jupiter" + } +} + dependencies { compile project(":orca-extensionpoint") compile spinnaker.dependency('guava') @@ -42,9 +48,16 @@ dependencies { compile "org.apache.commons:commons-lang3:3.7" compile "de.huxhorn.sulky:de.huxhorn.sulky.ulid:8.1.1" compile "javax.servlet:javax.servlet-api:4.0.1" + compile('com.jayway.jsonpath:json-path:2.2.0') compileOnly spinnaker.dependency("lombok") + annotationProcessor spinnaker.dependency("lombok") testCompile project(":orca-test") testCompile project(":orca-test-groovy") + testCompile spinnaker.dependency("junitJupiterApi") + testCompile spinnaker.dependency("assertj") + + testRuntime spinnaker.dependency("junitJupiterEngine") + testRuntime "org.junit.vintage:junit-vintage-engine:${spinnaker.version('jupiter')}" } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/ExecutionContext.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/ExecutionContext.java index 0a23065234..f71211f3ab 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/ExecutionContext.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/ExecutionContext.java @@ -23,13 +23,20 @@ public class ExecutionContext { private final String authenticatedUser; private final String executionType; private final String executionId; + private final String stageId; private final String origin; - public ExecutionContext(String application, String authenticatedUser, String executionType, String executionId, String origin) { + public ExecutionContext(String application, + String authenticatedUser, + String executionType, + 
String executionId, + String stageId, + String origin) { this.application = application; this.authenticatedUser = authenticatedUser; this.executionType = executionType; this.executionId = executionId; + this.stageId = stageId; this.origin = origin; } @@ -62,4 +69,8 @@ public String getExecutionId() { } public String getOrigin() { return origin; } + + public String getStageId() { + return stageId; + } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/ExecutionStatus.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/ExecutionStatus.java index 30442d3b5c..9b034275bf 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/ExecutionStatus.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/ExecutionStatus.java @@ -102,7 +102,7 @@ public final boolean isHalt() { return halt; } - public static final Collection<ExecutionStatus> COMPLETED = Collections.unmodifiableList(Arrays.asList(SUCCEEDED, STOPPED, SKIPPED, TERMINAL, FAILED_CONTINUE)); + public static final Collection<ExecutionStatus> COMPLETED = Collections.unmodifiableList(Arrays.asList(CANCELED, SUCCEEDED, STOPPED, SKIPPED, TERMINAL, FAILED_CONTINUE)); private static final Collection<ExecutionStatus> SUCCESSFUL = Collections.unmodifiableList(Arrays.asList(SUCCEEDED, STOPPED, SKIPPED)); private static final Collection<ExecutionStatus> FAILURE = Collections.unmodifiableList(Arrays.asList(TERMINAL, STOPPED, FAILED_CONTINUE)); diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/StageResolver.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/StageResolver.java new file mode 100644 index 0000000000..52f84de677 --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/StageResolver.java @@ -0,0 +1,94 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca; + +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; + +import javax.annotation.Nonnull; +import java.util.Collection; +import java.util.HashMap; +import java.util.Map; + +import static java.lang.String.format; + +/** + * {@code StageResolver} allows for {@code StageDefinitionBuilder} retrieval via bean name or alias. + * <p> + * Aliases represent the previous bean names that a {@code StageDefinitionBuilder} registered as. + */ +public class StageResolver { + private final Map<String, StageDefinitionBuilder> stageDefinitionBuilderByAlias = new HashMap<>(); + + public StageResolver(Collection<StageDefinitionBuilder> stageDefinitionBuilders) { + for (StageDefinitionBuilder stageDefinitionBuilder : stageDefinitionBuilders) { + stageDefinitionBuilderByAlias.put(stageDefinitionBuilder.getType(), stageDefinitionBuilder); + for (String alias : stageDefinitionBuilder.aliases()) { + if (stageDefinitionBuilderByAlias.containsKey(alias)) { + throw new DuplicateStageAliasException( + format( + "Duplicate stage alias detected (alias: %s, previous: %s, current: %s)", + alias, + stageDefinitionBuilderByAlias.get(alias).getClass().getCanonicalName(), + stageDefinitionBuilder.getClass().getCanonicalName() + ) + ); + } + + stageDefinitionBuilderByAlias.put(alias, stageDefinitionBuilder); + } + } + } + + /** + * Fetch a {@code StageDefinitionBuilder} by {@code type} or {@code typeAlias}.
+ * + * @param type StageDefinitionBuilder type + * @param typeAlias StageDefinitionBuilder alias (optional) + * @return the StageDefinitionBuilder matching {@param type} or {@param typeAlias} + * @throws NoSuchStageDefinitionBuilderException if StageDefinitionBuilder does not exist + */ + @Nonnull + public StageDefinitionBuilder getStageDefinitionBuilder(@Nonnull String type, String typeAlias) { + StageDefinitionBuilder stageDefinitionBuilder = stageDefinitionBuilderByAlias.getOrDefault( + type, stageDefinitionBuilderByAlias.get(typeAlias) + ); + + if (stageDefinitionBuilder == null) { + throw new NoSuchStageDefinitionBuilderException(type, stageDefinitionBuilderByAlias.keySet()); + } + + return stageDefinitionBuilder; + } + + class DuplicateStageAliasException extends IllegalStateException { + DuplicateStageAliasException(String message) { + super(message); + } + } + + class NoSuchStageDefinitionBuilderException extends IllegalArgumentException { + NoSuchStageDefinitionBuilderException(String type, Collection knownTypes) { + super( + format( + "No StageDefinitionBuilder implementation for %s found (knownTypes: %s)", + type, + String.join(",", knownTypes) + ) + ); + } + } +} diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/Task.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/Task.java index 0d8167ed3b..488b960756 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/Task.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/Task.java @@ -18,6 +18,13 @@ import com.netflix.spinnaker.orca.pipeline.model.Stage; import javax.annotation.Nonnull; +import java.lang.annotation.ElementType; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; public interface Task { @Nonnull TaskResult execute(@Nonnull Stage stage); @@ -25,4 +32,18 @@ public interface Task { default void 
onTimeout(@Nonnull Stage stage) {} default void onCancel(@Nonnull Stage stage) {} + + default Collection aliases() { + if (getClass().isAnnotationPresent(Aliases.class)) { + return Arrays.asList(getClass().getAnnotation(Aliases.class).value()); + } + + return Collections.emptyList(); + } + + @Retention(RetentionPolicy.RUNTIME) + @Target(ElementType.TYPE) + @interface Aliases { + String[] value() default {}; + } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/TaskResolver.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/TaskResolver.java new file mode 100644 index 0000000000..f918dc2362 --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/TaskResolver.java @@ -0,0 +1,120 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca; + +import com.google.common.annotations.VisibleForTesting; + +import javax.annotation.Nonnull; +import java.util.Collection; +import java.util.HashMap; +import java.util.Map; + +import static java.lang.String.format; + +/** + * {@code TaskResolver} allows for {@code Task} retrieval via class name or alias. + *

+ * Aliases represent the previous class names of a {@code Task}. + */ +public class TaskResolver { + private final Map taskByAlias = new HashMap<>(); + + private final boolean allowFallback; + + @VisibleForTesting + public TaskResolver(Collection tasks) { + this(tasks, true); + } + + /** + * @param tasks Task implementations + * @param allowFallback Fallback to {@code Class.forName()} if a task cannot be located by name or alias + */ + public TaskResolver(Collection tasks, boolean allowFallback) { + for (Task task : tasks) { + taskByAlias.put(task.getClass().getCanonicalName(), task); + for (String alias : task.aliases()) { + if (taskByAlias.containsKey(alias)) { + throw new DuplicateTaskAliasException( + String.format( + "Duplicate task alias detected (alias: %s, previous: %s, current: %s)", + alias, + taskByAlias.get(alias).getClass().getCanonicalName(), + task.getClass().getCanonicalName() + ) + ); + } + + taskByAlias.put(alias, task); + } + } + + this.allowFallback = allowFallback; + } + + /** + * Fetch a {@code Task} by {@param taskTypeIdentifier}. 
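The alias mechanism this patch introduces (the `@Aliases` annotation with the `Task.aliases()` default method, indexed by `TaskResolver`) can be sketched as a small standalone program. Class names such as `AliasSketch`, `WaitTask`, and `com.example.OldWaitTask` are hypothetical stand-ins; the real types live under `com.netflix.spinnaker.orca`:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of the patch's alias mechanism: a task declares its
// previous class names via @Aliases, and the resolver indexes the task under
// both its canonical class name and every alias, rejecting duplicates.
class AliasSketch {
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  @interface Aliases {
    String[] value() default {};
  }

  interface Task {
    // Mirrors the default method added to com.netflix.spinnaker.orca.Task.
    default Collection<String> aliases() {
      if (getClass().isAnnotationPresent(Aliases.class)) {
        return Arrays.asList(getClass().getAnnotation(Aliases.class).value());
      }
      return Collections.emptyList();
    }
  }

  // Hypothetical task that was previously named com.example.OldWaitTask.
  @Aliases("com.example.OldWaitTask")
  static class WaitTask implements Task {}

  // Mirrors TaskResolver's constructor loop.
  static Map<String, Task> index(Collection<? extends Task> tasks) {
    Map<String, Task> byAlias = new HashMap<>();
    for (Task task : tasks) {
      byAlias.put(task.getClass().getCanonicalName(), task);
      for (String alias : task.aliases()) {
        if (byAlias.containsKey(alias)) {
          throw new IllegalStateException("Duplicate task alias detected: " + alias);
        }
        byAlias.put(alias, task);
      }
    }
    return byAlias;
  }

  public static void main(String[] args) {
    Map<String, Task> byAlias = index(List.of(new WaitTask()));
    // The same instance is reachable under the old and the new name.
    System.out.println(byAlias.get("com.example.OldWaitTask")
        == byAlias.get(WaitTask.class.getCanonicalName())); // prints "true"
  }
}
```

`StageDefinitionBuilder` gains the identical `@Aliases`/`aliases()` pair later in this patch, and `StageResolver` indexes builders the same way.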
+ * + * @param taskTypeIdentifier Task identifier (class name or alias) + * @return the Task matching {@param taskTypeIdentifier} + * @throws NoSuchTaskException if Task does not exist + */ + @Nonnull + public Task getTask(@Nonnull String taskTypeIdentifier) { + Task task = taskByAlias.get(taskTypeIdentifier); + + if (task == null) { + throw new NoSuchTaskException(taskTypeIdentifier); + } + + return task; + } + + /** + * @param taskTypeIdentifier Task identifier (class name or alias) + * @return Task Class + * @throws NoSuchTaskException if task does not exist + */ + @Nonnull + public Class getTaskClass(@Nonnull String taskTypeIdentifier) { + try { + return getTask(taskTypeIdentifier).getClass(); + } catch (IllegalArgumentException e) { + if (!allowFallback) { + throw e; + } + + try { + return (Class) Class.forName(taskTypeIdentifier); + } catch (ClassNotFoundException ex) { + throw e; + } + } + } + + class DuplicateTaskAliasException extends IllegalStateException { + DuplicateTaskAliasException(String message) { + super(message); + } + } + + class NoSuchTaskException extends IllegalArgumentException { + NoSuchTaskException(String taskTypeIdentifier) { + super("No task found for '" + taskTypeIdentifier + "'"); + } + } +} diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/TaskResult.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/TaskResult.java index 558b4bbffd..d553c32374 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/TaskResult.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/TaskResult.java @@ -15,61 +15,46 @@ */ package com.netflix.spinnaker.orca; -import java.util.Map; -import javax.annotation.Nonnull; import com.google.common.collect.ImmutableMap; -import static java.util.Collections.emptyMap; - +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import lombok.Builder; +import lombok.Data; +import lombok.NonNull; +import lombok.Singular; + +@Data +@Builder public final class TaskResult { - /** - * A 
useful constant for a success result with no outputs. - */ - public static final TaskResult SUCCEEDED = new TaskResult(ExecutionStatus.SUCCEEDED); - public static final TaskResult RUNNING = new TaskResult(ExecutionStatus.RUNNING); - - private final ExecutionStatus status; - private final ImmutableMap context; - private final ImmutableMap outputs; - - public TaskResult(ExecutionStatus status) { - this(status, emptyMap(), emptyMap()); - } + /** A useful constant for a success result with no outputs. */ + public static final TaskResult SUCCEEDED = TaskResult.ofStatus(ExecutionStatus.SUCCEEDED); - public TaskResult(ExecutionStatus status, Map context, Map outputs) { - this.status = status; - this.context = ImmutableMap.copyOf(context); - this.outputs = ImmutableMap.copyOf(outputs); - } - - public TaskResult(ExecutionStatus status, Map context) { - this(status, context, emptyMap()); - } + public static final TaskResult RUNNING = TaskResult.ofStatus(ExecutionStatus.RUNNING); - public @Nonnull ExecutionStatus getStatus() { - return status; - } + @NonNull private final ExecutionStatus status; /** - * Updates to the current stage context. + * Stage-scoped data. + * + *

Data stored in the context will be available to other tasks within this stage, but not to + * tasks in other stages. */ - public @Nonnull Map<String, Object> getContext() { - return context; - } + @Singular("context") + private final ImmutableMap<String, Object> context; /** - * Values to be output from the stage and potentially accessed by downstream - stages. + * Pipeline-scoped data. + * + *

Data stored in outputs will be available (via {@link Stage#getContext()}) to tasks in later + stages of the pipeline. */ - public @Nonnull Map<String, Object> getOutputs() { - return outputs; + @Singular("output") + private final ImmutableMap<String, Object> outputs; + + public static TaskResult ofStatus(ExecutionStatus status) { + return TaskResult.builder(status).build(); } - @Override - public String toString() { - return "TaskResult{" + - "status=" + status + - ", context=" + context + - ", outputs=" + outputs + - '}'; + public static TaskResultBuilder builder(ExecutionStatus status) { + return new TaskResultBuilder().status(status); } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/config/DefaultApplicationConfigurationProperties.kt b/orca-core/src/main/java/com/netflix/spinnaker/orca/config/DefaultApplicationConfigurationProperties.kt new file mode 100644 index 0000000000..1a8ee8ae26 --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/config/DefaultApplicationConfigurationProperties.kt @@ -0,0 +1,27 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.config + +import org.springframework.boot.context.properties.ConfigurationProperties + +/** + * @param defaultApplicationName When an execution is received which does not include an "application" field, this + * value will be used instead.
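The KDoc above describes the fallback behavior backed by `DefaultApplicationConfigurationProperties`. A minimal sketch of that behavior, with the default value `"spinnaker_unknown"` taken from the data class; `applyDefaultApplication` is a hypothetical helper, while the real wiring happens through an execution preprocessor registered via `PreprocessorConfiguration`:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the default-application fallback: when an incoming execution map
// has no "application" field, fill in the configured default name.
class DefaultApplicationSketch {
  // Default taken from DefaultApplicationConfigurationProperties.
  static final String DEFAULT_APPLICATION_NAME = "spinnaker_unknown";

  // Hypothetical helper; the real code runs as an execution preprocessor.
  static Map<String, Object> applyDefaultApplication(Map<String, Object> execution) {
    Map<String, Object> result = new HashMap<>(execution);
    result.putIfAbsent("application", DEFAULT_APPLICATION_NAME);
    return result;
  }
}
```

An execution that already names an application is left untouched; only the missing-field case picks up `spinnaker_unknown`.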
+ */ +@ConfigurationProperties("orca") +data class DefaultApplicationConfigurationProperties( + var defaultApplicationName: String = "spinnaker_unknown" +) diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/config/OrcaConfiguration.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/config/OrcaConfiguration.java index cb903da44e..895d20f35f 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/config/OrcaConfiguration.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/config/OrcaConfiguration.java @@ -18,6 +18,9 @@ import com.fasterxml.jackson.databind.ObjectMapper; import com.netflix.spectator.api.Registry; import com.netflix.spinnaker.kork.core.RetrySupport; +import com.netflix.spinnaker.orca.StageResolver; +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.TaskResolver; import com.netflix.spinnaker.orca.commands.ForceExecutionCancellationCommand; import com.netflix.spinnaker.orca.events.ExecutionEvent; import com.netflix.spinnaker.orca.events.ExecutionListenerAdapter; @@ -29,10 +32,10 @@ import com.netflix.spinnaker.orca.pipeline.DefaultStageDefinitionBuilderFactory; import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilderFactory; +import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionFunctionProvider; import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository; import com.netflix.spinnaker.orca.pipeline.util.ContextFunctionConfiguration; import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor; -import java.util.Arrays; import org.springframework.beans.factory.annotation.Qualifier; import org.springframework.beans.factory.annotation.Value; import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean; @@ -43,6 +46,7 @@ import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.ComponentScan; import 
org.springframework.context.annotation.Configuration; +import org.springframework.context.annotation.Import; import org.springframework.context.event.ApplicationEventMulticaster; import org.springframework.context.event.EventListenerFactory; import org.springframework.context.event.SimpleApplicationEventMulticaster; @@ -51,13 +55,15 @@ import org.springframework.scheduling.TaskScheduler; import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor; import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler; -import rx.Notification; import rx.Scheduler; import rx.schedulers.Schedulers; import java.time.Clock; import java.time.Duration; import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.Optional; import static java.time.temporal.ChronoUnit.MINUTES; import static org.springframework.context.annotation.AnnotationConfigUtils.EVENT_LISTENER_FACTORY_BEAN_NAME; @@ -68,9 +74,11 @@ "com.netflix.spinnaker.orca.pipeline", "com.netflix.spinnaker.orca.deprecation", "com.netflix.spinnaker.orca.pipeline.util", + "com.netflix.spinnaker.orca.preprocessors", "com.netflix.spinnaker.orca.telemetry", "com.netflix.spinnaker.orca.notifications.scheduling" }) +@Import(PreprocessorConfiguration.class) @EnableConfigurationProperties public class OrcaConfiguration { @Bean public Clock clock() { @@ -122,9 +130,13 @@ UserConfiguredUrlRestrictions userConfiguredUrlRestrictions(UserConfiguredUrlRes @Bean public ContextFunctionConfiguration contextFunctionConfiguration(UserConfiguredUrlRestrictions userConfiguredUrlRestrictions, - @Value("${spelEvaluator:v2}") - String spelEvaluator) { - return new ContextFunctionConfiguration(userConfiguredUrlRestrictions, spelEvaluator); + Optional<List<ExpressionFunctionProvider>> expressionFunctionProviders, + @Value("${spelEvaluator:v2}") String spelEvaluator) { + return new ContextFunctionConfiguration( + userConfiguredUrlRestrictions, + expressionFunctionProviders.orElse(Collections.emptyList()), + spelEvaluator + 
); } @Bean @@ -139,8 +151,8 @@ public ApplicationListener onCompleteMetricExecutionListenerAdap @Bean @ConditionalOnMissingBean(StageDefinitionBuilderFactory.class) - public StageDefinitionBuilderFactory stageDefinitionBuilderFactory(Collection stageDefinitionBuilders) { - return new DefaultStageDefinitionBuilderFactory(stageDefinitionBuilders); + public StageDefinitionBuilderFactory stageDefinitionBuilderFactory(StageResolver stageResolver) { + return new DefaultStageDefinitionBuilderFactory(stageResolver); } @Bean @@ -175,6 +187,16 @@ public TaskScheduler taskScheduler() { return scheduler; } + @Bean + public TaskResolver taskResolver(Collection tasks) { + return new TaskResolver(tasks, true); + } + + @Bean + public StageResolver stageResolver(Collection stageDefinitionBuilders) { + return new StageResolver(stageDefinitionBuilders); + } + @Bean(name = EVENT_LISTENER_FACTORY_BEAN_NAME) public EventListenerFactory eventListenerFactory() { return new InspectableEventListenerFactory(); diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/config/PreprocessorConfiguration.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/config/PreprocessorConfiguration.java new file mode 100644 index 0000000000..06928af1c3 --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/config/PreprocessorConfiguration.java @@ -0,0 +1,24 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.netflix.spinnaker.orca.config; + +import org.springframework.boot.context.properties.EnableConfigurationProperties; +import org.springframework.context.annotation.Configuration; + +@Configuration +@EnableConfigurationProperties(DefaultApplicationConfigurationProperties.class) +public class PreprocessorConfiguration { +} diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/events/ExecutionListenerAdapter.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/events/ExecutionListenerAdapter.java index 447eff1018..da91b06f17 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/events/ExecutionListenerAdapter.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/events/ExecutionListenerAdapter.java @@ -22,9 +22,9 @@ import com.netflix.spinnaker.orca.listeners.Persister; import com.netflix.spinnaker.orca.pipeline.model.Execution; import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository; +import com.netflix.spinnaker.security.AuthenticatedRequest; import org.slf4j.MDC; import org.springframework.context.ApplicationListener; -import static com.netflix.spinnaker.security.AuthenticatedRequest.SPINNAKER_EXECUTION_ID; /** * Adapts events emitted by the nu-orca queue to an old-style listener. 
@@ -43,14 +43,14 @@ public ExecutionListenerAdapter(ExecutionListener delegate, ExecutionRepository @Override public void onApplicationEvent(ExecutionEvent event) { try { - MDC.put(SPINNAKER_EXECUTION_ID, event.getExecutionId()); + MDC.put(AuthenticatedRequest.Header.EXECUTION_ID.getHeader(), event.getExecutionId()); if (event instanceof ExecutionStarted) { onExecutionStarted((ExecutionStarted) event); } else if (event instanceof ExecutionComplete) { onExecutionComplete((ExecutionComplete) event); } } finally { - MDC.remove(SPINNAKER_EXECUTION_ID); + MDC.remove(AuthenticatedRequest.Header.EXECUTION_ID.getHeader()); } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/DefaultStageDefinitionBuilderFactory.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/DefaultStageDefinitionBuilderFactory.java index 85f1a566a1..e76bd377f9 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/DefaultStageDefinitionBuilderFactory.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/DefaultStageDefinitionBuilderFactory.java @@ -16,36 +16,20 @@ package com.netflix.spinnaker.orca.pipeline; -import static java.util.stream.Collectors.toList; - -import java.util.Arrays; -import java.util.Collection; -import java.util.List; import javax.annotation.Nonnull; -import com.netflix.spinnaker.orca.pipeline.ExecutionRunner.NoSuchStageDefinitionBuilder; + +import com.netflix.spinnaker.orca.StageResolver; import com.netflix.spinnaker.orca.pipeline.model.Stage; public class DefaultStageDefinitionBuilderFactory implements StageDefinitionBuilderFactory { - private final Collection stageDefinitionBuilders; - - public DefaultStageDefinitionBuilderFactory(Collection stageDefinitionBuilders) { - this.stageDefinitionBuilders = stageDefinitionBuilders; - } + private final StageResolver stageResolver; - public DefaultStageDefinitionBuilderFactory(StageDefinitionBuilder... 
stageDefinitionBuilders) { - this(Arrays.asList(stageDefinitionBuilders)); + public DefaultStageDefinitionBuilderFactory(StageResolver stageResolver) { + this.stageResolver = stageResolver; } @Override - public @Nonnull StageDefinitionBuilder builderFor( - @Nonnull Stage stage) throws NoSuchStageDefinitionBuilder { - return stageDefinitionBuilders - .stream() - .filter((it) -> it.getType().equals(stage.getType()) || it.getType().equals(stage.getContext().get("alias"))) - .findFirst() - .orElseThrow(() -> { - List knownTypes = stageDefinitionBuilders.stream().map(it -> it.getType()).sorted().collect(toList()); - return new NoSuchStageDefinitionBuilder(stage.getType(), knownTypes); - }); + public @Nonnull StageDefinitionBuilder builderFor(@Nonnull Stage stage) { + return stageResolver.getStageDefinitionBuilder(stage.getType(), (String) stage.getContext().get("alias")); } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/ExecutionLauncher.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/ExecutionLauncher.java index 0b2dee6764..741dd400bb 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/ExecutionLauncher.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/ExecutionLauncher.java @@ -26,6 +26,7 @@ import com.netflix.spinnaker.orca.pipeline.model.Trigger; import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionNotFoundException; import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository; +import com.netflix.spinnaker.security.AuthenticatedRequest; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/ExecutionRunner.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/ExecutionRunner.java index 8198e0fcc0..df9e1fa872 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/ExecutionRunner.java +++ 
b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/ExecutionRunner.java @@ -47,11 +47,4 @@ default void cancel( @Nonnull String user, @Nullable String reason) throws Exception { throw new UnsupportedOperationException(); } - - class NoSuchStageDefinitionBuilder extends RuntimeException { - public NoSuchStageDefinitionBuilder(String type, Collection knownTypes) { - super(format("No StageDefinitionBuilder implementation for %s found. %s", type, - knownTypes == null || knownTypes.size() == 0 ? "There are no known stage types." : format(" Known stage types: %s", String.join(",", knownTypes)))); - } - } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/RestrictExecutionDuringTimeWindow.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/RestrictExecutionDuringTimeWindow.java index 5c5d2b27c5..ef33fd4bf5 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/RestrictExecutionDuringTimeWindow.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/RestrictExecutionDuringTimeWindow.java @@ -144,15 +144,15 @@ private static class SuspendExecutionDuringTimeWindowTask implements RetryableTa try { scheduledTime = getTimeInWindow(stage, now); } catch (Exception e) { - return new TaskResult(TERMINAL, Collections.singletonMap("failureReason", "Exception occurred while calculating time window: " + e.getMessage())); + return TaskResult.builder(TERMINAL).context(Collections.singletonMap("failureReason", "Exception occurred while calculating time window: " + e.getMessage())).build(); } if (now.equals(scheduledTime) || now.isAfter(scheduledTime)) { - return new TaskResult(SUCCEEDED); + return TaskResult.SUCCEEDED; } else if (parseBoolean(stage.getContext().getOrDefault("skipRemainingWait", "false").toString())) { - return new TaskResult(SUCCEEDED); + return TaskResult.SUCCEEDED; } else { stage.setScheduledTime(scheduledTime.toEpochMilli()); - return new TaskResult(RUNNING); + return TaskResult.RUNNING; 
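The hunk above migrates `RestrictExecutionDuringTimeWindow` from the removed `TaskResult` constructors to the new builder API (`@Builder` plus `@Singular`). A hand-expanded sketch of roughly what Lombok generates, simplified so it is self-contained: `ExecutionStatus` is reduced to a `String` and Guava's `ImmutableMap` to an unmodifiable map; `TaskResultSketch` is a hypothetical stand-in, not the real class:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Hand-expanded sketch of the builder Lombok generates for TaskResult.
final class TaskResultSketch {
  private final String status;
  private final Map<String, Object> context; // stage-scoped data
  private final Map<String, Object> outputs; // pipeline-scoped data

  private TaskResultSketch(String status, Map<String, Object> context, Map<String, Object> outputs) {
    this.status = status;
    this.context = Collections.unmodifiableMap(context);
    this.outputs = Collections.unmodifiableMap(outputs);
  }

  // Mirrors TaskResult.builder(status): a builder pre-seeded with the status.
  static Builder builder(String status) {
    return new Builder(status);
  }

  // Mirrors TaskResult.ofStatus(status): no context, no outputs.
  static TaskResultSketch ofStatus(String status) {
    return builder(status).build();
  }

  String getStatus() { return status; }
  Map<String, Object> getContext() { return context; }
  Map<String, Object> getOutputs() { return outputs; }

  static final class Builder {
    private final String status;
    private final Map<String, Object> context = new LinkedHashMap<>();
    private final Map<String, Object> outputs = new LinkedHashMap<>();

    private Builder(String status) { this.status = status; }

    // @Singular("context") generates a one-pair adder along these lines.
    Builder context(String key, Object value) {
      context.put(key, value);
      return this;
    }

    // @Singular("output") likewise for outputs.
    Builder output(String key, Object value) {
      outputs.put(key, value);
      return this;
    }

    TaskResultSketch build() {
      return new TaskResultSketch(status, context, outputs);
    }
  }
}
```

Call sites thus move from `new TaskResult(TERMINAL, contextMap)` to `TaskResult.builder(TERMINAL).context(contextMap).build()`, and the shared no-output results keep their constants (`TaskResult.SUCCEEDED`, `TaskResult.RUNNING`).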
} } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/StageDefinitionBuilder.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/StageDefinitionBuilder.java index cf5982c25d..05947794e0 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/StageDefinitionBuilder.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/StageDefinitionBuilder.java @@ -17,6 +17,7 @@ package com.netflix.spinnaker.orca.pipeline; import com.netflix.spinnaker.kork.dynamicconfig.DynamicConfigService; +import com.netflix.spinnaker.orca.Task; import com.netflix.spinnaker.orca.pipeline.TaskNode.TaskGraph; import com.netflix.spinnaker.orca.pipeline.graph.StageGraphBuilder; import com.netflix.spinnaker.orca.pipeline.model.Execution; @@ -25,6 +26,13 @@ import javax.annotation.Nonnull; import javax.annotation.Nullable; +import java.lang.annotation.ElementType; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; import java.util.List; import java.util.Map; @@ -186,4 +194,18 @@ default boolean isForceCacheRefreshEnabled(DynamicConfigService dynamicConfigSer return true; } } + + default Collection aliases() { + if (getClass().isAnnotationPresent(Aliases.class)) { + return Arrays.asList(getClass().getAnnotation(Aliases.class).value()); + } + + return Collections.emptyList(); + } + + @Retention(RetentionPolicy.RUNTIME) + @Target(ElementType.TYPE) + @interface Aliases { + String[] value() default {}; + } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/StageDefinitionBuilderFactory.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/StageDefinitionBuilderFactory.java index d6426c534d..0346d91009 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/StageDefinitionBuilderFactory.java +++ 
b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/StageDefinitionBuilderFactory.java @@ -17,13 +17,9 @@ package com.netflix.spinnaker.orca.pipeline; import javax.annotation.Nonnull; -import com.netflix.spinnaker.orca.pipeline.ExecutionRunner.NoSuchStageDefinitionBuilder; import com.netflix.spinnaker.orca.pipeline.model.Stage; @FunctionalInterface public interface StageDefinitionBuilderFactory { - - @Nonnull StageDefinitionBuilder builderFor( - @Nonnull Stage stage) throws NoSuchStageDefinitionBuilder; - + @Nonnull StageDefinitionBuilder builderFor(@Nonnull Stage stage); } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionFunctionProvider.kt b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionFunctionProvider.kt new file mode 100644 index 0000000000..ad3c989c36 --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionFunctionProvider.kt @@ -0,0 +1,32 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.pipeline.expressions + +interface ExpressionFunctionProvider { + val namespace: String? 
+ val functions: Collection<FunctionDefinition> + + data class FunctionDefinition( + val name: String, + val parameters: List<FunctionParameter> + ) + + data class FunctionParameter( + val type: Class<*>, + val name: String, + val description: String + ) +} diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionTransform.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionTransform.java index a14cf9f1d7..ea2be9b4b5 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionTransform.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionTransform.java @@ -17,6 +17,7 @@ package com.netflix.spinnaker.orca.pipeline.expressions; import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.pipeline.model.Execution; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.expression.EvaluationContext; @@ -37,13 +38,18 @@ public class ExpressionTransform { private final Logger log = LoggerFactory.getLogger(getClass()); - private static final List<String> EXECUTION_AWARE_FUNCTIONS = Arrays.asList("judgment", "judgement", "stage", "stageExists", "deployedServerGroups"); + private static final List<String> EXECUTION_AWARE_FUNCTIONS = Arrays.asList("judgment", "judgement", "stage", "stageExists", "deployedServerGroups", "manifestLabelValue"); private static final List<String> EXECUTION_AWARE_ALIASES = Collections.singletonList("deployedServerGroups"); private static final List<Class> STRINGIFYABLE_TYPES = Collections.singletonList(ExecutionStatus.class); + + private final Collection<ExpressionFunctionProvider> expressionFunctionProviders; private final ParserContext parserContext; private final ExpressionParser parser; - public ExpressionTransform(ParserContext parserContext, ExpressionParser parser) { + public ExpressionTransform(Collection<ExpressionFunctionProvider> expressionFunctionProviders, + ParserContext parserContext, + ExpressionParser parser) { + this.expressionFunctionProviders = 
expressionFunctionProviders; this.parserContext = parserContext; this.parser = parser; } @@ -266,7 +272,18 @@ static String escapeSimpleExpression(String expression) { */ private String includeExecutionParameter(String e) { String expression = e; - for (String fn : EXECUTION_AWARE_FUNCTIONS) { + + // An expression aware function is any that takes an Execution as its first parameter + Set expressionAwareFunctions = new HashSet<>(EXECUTION_AWARE_FUNCTIONS); + expressionFunctionProviders.forEach(p -> { + p.getFunctions().forEach(f -> { + if (!f.getParameters().isEmpty() && f.getParameters().get(0).getType() == Execution.class) { + expressionAwareFunctions.add(f.getName()); + } + }); + }); + + for (String fn : expressionAwareFunctions) { if (expression.contains("#" + fn) && !expression.contains("#" + fn + "( #root.execution, ")) { expression = expression.replaceAll("#" + fn + "\\(", "#" + fn + "( #root.execution, "); } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionsSupport.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionsSupport.java index 0a27e4ddf7..203db3ca11 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionsSupport.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionsSupport.java @@ -16,15 +16,7 @@ package com.netflix.spinnaker.orca.pipeline.expressions; -import java.io.ByteArrayInputStream; -import java.io.IOException; -import java.io.UnsupportedEncodingException; -import java.net.URL; -import java.util.*; -import java.util.concurrent.atomic.AtomicReference; -import java.util.function.Predicate; import com.fasterxml.jackson.databind.ObjectMapper; -import com.netflix.spinnaker.orca.ExecutionStatus; import com.netflix.spinnaker.orca.pipeline.expressions.whitelisting.FilteredMethodResolver; import com.netflix.spinnaker.orca.pipeline.expressions.whitelisting.FilteredPropertyAccessor; import 
com.netflix.spinnaker.orca.pipeline.expressions.whitelisting.MapPropertyAccessor; @@ -39,6 +31,16 @@ import org.springframework.expression.common.TemplateParserContext; import org.springframework.expression.spel.support.StandardEvaluationContext; +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.UnsupportedEncodingException; +import java.net.URL; +import java.util.*; +import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Predicate; + +import static java.lang.String.format; + /** * Provides utility support for SPEL integration * Supports registering SPEL functions, ACLs to classes (via whitelisting) @@ -50,8 +52,6 @@ public class ExpressionsSupport { private static AtomicReference helperFunctionConfigurationAtomicReference = new AtomicReference<>(); private static Map>> registeredHelperFunctions = new HashMap<>(); - private static List DEPLOY_STAGE_NAMES = Arrays.asList("deploy", "createServerGroup", "cloneServerGroup", "rollingPush"); - ExpressionsSupport(ContextFunctionConfiguration contextFunctionConfiguration) { helperFunctionConfigurationAtomicReference.set(contextFunctionConfiguration); } @@ -100,7 +100,24 @@ public static StandardEvaluationContext newEvaluationContext(Object rootObject, registerFunction(evaluationContext, "stageExists", Object.class, String.class); registerFunction(evaluationContext, "judgment", Object.class, String.class); registerFunction(evaluationContext, "judgement", Object.class, String.class); - registerFunction(evaluationContext, "deployedServerGroups", Object.class, String[].class); + + ContextFunctionConfiguration contextFunctionConfiguration = helperFunctionConfigurationAtomicReference.get(); + for (ExpressionFunctionProvider p : contextFunctionConfiguration.getExpressionFunctionProviders()) { + for (ExpressionFunctionProvider.FunctionDefinition function : p.getFunctions()) { + String namespacedFunctionName = function.getName(); + if (p.getNamespace() != null) { + 
namespacedFunctionName = format("%s_%s", p.getNamespace(), namespacedFunctionName); + } + Class[] functionTypes = function.getParameters() + .stream() + .map(ExpressionFunctionProvider.FunctionParameter::getType) + .toArray(Class[]::new); + + evaluationContext.registerFunction( + namespacedFunctionName, p.getClass().getDeclaredMethod(function.getName(), functionTypes) + ); + } + } } } catch (NoSuchMethodException e) { // Indicates a function was not properly registered. This should not happen. Please fix the faulty function @@ -182,7 +199,7 @@ static String toJson(Object o) { return converted; } catch (Exception e) { - throw new SpelHelperFunctionException(String.format("#toJson(%s) failed", o.toString()), e); + throw new SpelHelperFunctionException(format("#toJson(%s) failed", o.toString()), e); } } @@ -196,7 +213,7 @@ static String fromUrl(String url) { URL u = helperFunctionConfigurationAtomicReference.get().getUrlRestrictions().validateURI(url).toURL(); return HttpClientUtils.httpGetAsString(u.toString()); } catch (Exception e) { - throw new SpelHelperFunctionException(String.format("#from(%s) failed", url), e); + throw new SpelHelperFunctionException(format("#from(%s) failed", url), e); } } @@ -213,7 +230,7 @@ static Object readJson(String text) { return mapper.readValue(text, Map.class); } catch (Exception e) { - throw new SpelHelperFunctionException(String.format("#readJson(%s) failed", text), e); + throw new SpelHelperFunctionException(format("#readJson(%s) failed", text), e); } } @@ -235,7 +252,7 @@ static Map propertiesFromUrl(String url) { try { return readProperties(fromUrl(url)); } catch (Exception e) { - throw new SpelHelperFunctionException(String.format("#propertiesFromUrl(%s) failed", url), e); + throw new SpelHelperFunctionException(format("#propertiesFromUrl(%s) failed", url), e); } } @@ -268,12 +285,12 @@ static Object stage(Object obj, String id) { .findFirst() .orElseThrow( () -> new SpelHelperFunctionException( - String.format("Unable to 
locate [%s] using #stage(%s) in execution %s", id, id, execution.getId()) + format("Unable to locate [%s] using #stage(%s) in execution %s", id, id, execution.getId()) ) ); } - throw new SpelHelperFunctionException(String.format("Invalid first param to #stage(%s). must be an execution", id)); + throw new SpelHelperFunctionException(format("Invalid first param to #stage(%s). must be an execution", id)); } /** @@ -290,7 +307,7 @@ static boolean stageExists(Object obj, String id) { .anyMatch(i -> id != null && (id.equals(i.getName()) || id.equals(i.getId()))); } - throw new SpelHelperFunctionException(String.format("Invalid first param to #stage(%s). must be an execution", id)); + throw new SpelHelperFunctionException(format("Invalid first param to #stage(%s). must be an execution", id)); } /** @@ -308,7 +325,7 @@ static String judgment(Object obj, String id) { .findFirst() .orElseThrow( () -> new SpelHelperFunctionException( - String.format("Unable to locate manual Judgment stage [%s] using #judgment(%s) in execution %s. " + + format("Unable to locate manual Judgment stage [%s] using #judgment(%s) in execution %s. " + "Stage doesn't exist or doesn't contain judgmentInput in its context ", id, id, execution.getId() ) @@ -319,54 +336,10 @@ static String judgment(Object obj, String id) { } throw new SpelHelperFunctionException( - String.format("Invalid first param to #judgment(%s). must be an execution", id) + format("Invalid first param to #judgment(%s). 
must be an execution", id) ); } - static List> deployedServerGroups(Object obj, String...id) { - if (obj instanceof Execution) { - List> deployedServerGroups = new ArrayList<>(); - ((Execution) obj).getStages() - .stream() - .filter(matchesDeployedStage(id)) - .forEach(stage -> { - String region = (String) stage.getContext().get("region"); - if (region == null) { - Map availabilityZones = (Map) stage.getContext().get("availabilityZones"); - if (availabilityZones != null) { - region = availabilityZones.keySet().iterator().next(); - } - } - - if (region != null) { - Map deployDetails = new HashMap<>(); - deployDetails.put("account", stage.getContext().get("account")); - deployDetails.put("capacity", stage.getContext().get("capacity")); - deployDetails.put("parentStage", stage.getContext().get("parentStage")); - deployDetails.put("region", region); - List existingDetails = (List) stage.getContext().get("deploymentDetails"); - if (existingDetails != null) { - existingDetails - .stream() - .filter(d -> deployDetails.get("region").equals(d.get("region"))) - .forEach(deployDetails::putAll); - } - - List serverGroups = (List) ((Map) stage.getContext().get("deploy.server.groups")).get(region); - if (serverGroups != null) { - deployDetails.put("serverGroup", serverGroups.get(0)); - } - - deployedServerGroups.add(deployDetails); - } - }); - - return deployedServerGroups; - } - - throw new IllegalArgumentException("An execution is required for this function"); - } - /** * Alias to judgment */ @@ -377,17 +350,4 @@ static String judgement(Object obj, String id) { private static Predicate isManualStageWithManualInput(String id) { return i -> (id != null && id.equals(i.getName())) && (i.getContext() != null && i.getType().equals("manualJudgment") && i.getContext().get("judgmentInput") != null); } - - private static Predicate matchesDeployedStage(String ...id) { - List idsOrNames = Arrays.asList(id); - if (!idsOrNames.isEmpty()){ - return stage -> 
DEPLOY_STAGE_NAMES.contains(stage.getType()) && - stage.getContext().containsKey("deploy.server.groups") && - stage.getStatus() == ExecutionStatus.SUCCEEDED && - (idsOrNames.contains(stage.getName()) || idsOrNames.contains(stage.getId())); - } else { - return stage -> DEPLOY_STAGE_NAMES.contains(stage.getType()) && - stage.getContext().containsKey("deploy.server.groups") && stage.getStatus() == ExecutionStatus.SUCCEEDED; - } - } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/PipelineExpressionEvaluator.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/PipelineExpressionEvaluator.java index 9f437d7621..c7c00fa89d 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/PipelineExpressionEvaluator.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/PipelineExpressionEvaluator.java @@ -21,6 +21,7 @@ import org.springframework.expression.spel.standard.SpelExpressionParser; import org.springframework.expression.spel.support.StandardEvaluationContext; +import java.util.Collection; import java.util.Map; public class PipelineExpressionEvaluator extends ExpressionsSupport implements ExpressionEvaluator { @@ -28,19 +29,29 @@ public class PipelineExpressionEvaluator extends ExpressionsSupport implements E public static final String ERROR = "Failed Expression Evaluation"; private final ExpressionParser parser = new SpelExpressionParser(); + private final ContextFunctionConfiguration contextFunctionConfiguration; public interface ExpressionEvaluationVersion { String V2 = "v2"; } - public PipelineExpressionEvaluator(final ContextFunctionConfiguration contextFunctionConfiguration) { + public PipelineExpressionEvaluator(ContextFunctionConfiguration contextFunctionConfiguration) { super(contextFunctionConfiguration); + + this.contextFunctionConfiguration = contextFunctionConfiguration; } @Override - public Map evaluate(Map source, Object rootObject, 
ExpressionEvaluationSummary summary, boolean allowUnknownKeys) { + public Map evaluate(Map source, + Object rootObject, + ExpressionEvaluationSummary summary, + boolean allowUnknownKeys) { StandardEvaluationContext evaluationContext = newEvaluationContext(rootObject, allowUnknownKeys); - return new ExpressionTransform(parserContext, parser).transformMap(source, evaluationContext, summary); + return new ExpressionTransform( + contextFunctionConfiguration.getExpressionFunctionProviders(), + parserContext, + parser + ).transformMap(source, evaluationContext, summary); } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/DeployedServerGroupsExpressionFunctionProvider.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/DeployedServerGroupsExpressionFunctionProvider.java new file mode 100644 index 0000000000..ddfe7769cf --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/DeployedServerGroupsExpressionFunctionProvider.java @@ -0,0 +1,128 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.netflix.spinnaker.orca.pipeline.expressions.functions; + +import com.fasterxml.jackson.annotation.JsonProperty; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionFunctionProvider; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.springframework.stereotype.Component; + +import java.util.*; +import java.util.function.Predicate; +import java.util.stream.Collectors; + +import static java.util.Collections.*; + +@Component +public class DeployedServerGroupsExpressionFunctionProvider implements ExpressionFunctionProvider { + + private static List<String> DEPLOY_STAGE_NAMES = Arrays.asList( + "deploy", "createServerGroup", "cloneServerGroup", "rollingPush" + ); + + @Nullable + @Override + public String getNamespace() { + return null; + } + + @NotNull + @Override + public Collection<FunctionDefinition> getFunctions() { + return singletonList( + new FunctionDefinition("deployedServerGroups", Arrays.asList( + new FunctionParameter(Execution.class, "execution", "The execution to search for stages within"), + new FunctionParameter(String[].class, "ids", "A list of stage names or stage IDs to search") + )) + ); + } + + public static List<Map<String, Object>> deployedServerGroups(Execution execution, String...
id) { + List> deployedServerGroups = new ArrayList<>(); + execution.getStages() + .stream() + .filter(matchesDeployedStage(id)) + .forEach(stage -> { + String region = (String) stage.getContext().get("region"); + if (region == null) { + Map availabilityZones = (Map) stage.getContext().get("availabilityZones"); + if (availabilityZones != null) { + region = availabilityZones.keySet().iterator().next(); + } + } + + if (region != null) { + Map deployDetails = new HashMap<>(); + deployDetails.put("account", stage.getContext().get("account")); + deployDetails.put("capacity", stage.getContext().get("capacity")); + deployDetails.put("parentStage", stage.getContext().get("parentStage")); + deployDetails.put("region", region); + List existingDetails = (List) stage.getContext().get("deploymentDetails"); + if (existingDetails != null) { + existingDetails + .stream() + .filter(d -> deployDetails.get("region").equals(d.get("region"))) + .forEach(deployDetails::putAll); + } + + List serverGroups = (List) ((Map) stage.getContext().get("deploy.server.groups")).get(region); + if (serverGroups != null) { + deployDetails.put("serverGroup", serverGroups.get(0)); + } + + DeploymentContext deploymentContext = stage.mapTo(DeploymentContext.class); + List> deployments = Optional.ofNullable(deploymentContext.tasks).orElse(emptyList()).stream() + .flatMap(task -> Optional.ofNullable(task.results).orElse(emptyList()).stream()) + .flatMap(result -> Optional.ofNullable(result.deployments).orElse(emptyList()).stream()) + .collect(Collectors.toList()); + deployDetails.put("deployments", deployments); + + deployedServerGroups.add(deployDetails); + } + }); + + return deployedServerGroups; + } + + static class DeploymentContext { + @JsonProperty("kato.tasks") List tasks; + } + + static class KatoTasks { + @JsonProperty("resultObjects") List results; + } + + static class ResultObject { + @JsonProperty("deployments") List> deployments; + } + + private static Predicate matchesDeployedStage(String... 
id) { + List idsOrNames = Arrays.asList(id); + if (!idsOrNames.isEmpty()){ + return stage -> DEPLOY_STAGE_NAMES.contains(stage.getType()) && + stage.getContext().containsKey("deploy.server.groups") && + stage.getStatus() == ExecutionStatus.SUCCEEDED && + (idsOrNames.contains(stage.getName()) || idsOrNames.contains(stage.getId())); + } else { + return stage -> DEPLOY_STAGE_NAMES.contains(stage.getType()) && + stage.getContext().containsKey("deploy.server.groups") && stage.getStatus() == ExecutionStatus.SUCCEEDED; + } + } +} diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/ManifestLabelValueExpressionFunctionProvider.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/ManifestLabelValueExpressionFunctionProvider.java new file mode 100644 index 0000000000..4aef3f74ae --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/ManifestLabelValueExpressionFunctionProvider.java @@ -0,0 +1,107 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.netflix.spinnaker.orca.pipeline.expressions.functions; + +import com.jayway.jsonpath.JsonPath; +import com.jayway.jsonpath.PathNotFoundException; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionFunctionProvider; +import com.netflix.spinnaker.orca.pipeline.expressions.SpelHelperFunctionException; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.springframework.stereotype.Component; + +import java.util.*; + +import static java.lang.String.format; + +@Component +public class ManifestLabelValueExpressionFunctionProvider implements ExpressionFunctionProvider { + @Nullable + @Override + public String getNamespace() { + return null; + } + + @NotNull + @Override + public Collection getFunctions() { + return Collections.singletonList( + new FunctionDefinition("manifestLabelValue", Arrays.asList( + new FunctionParameter(Execution.class, "execution", "The execution to search for stages within"), + new FunctionParameter(String.class, "stageName", "Name of a deployManifest stage to find"), + new FunctionParameter(String.class, "kind", "The kind of manifest to find"), + new FunctionParameter(String.class, "labelKey", "The key of the label to find") + )) + ); + } + + /** + * Gets value of given label key in manifest of given kind deployed by stage of given name + * @param execution #root.execution + * @param stageName the name of a `deployManifest` stage to find + * @param kind the kind of manifest to find + * @param labelKey the key of the label to find + * @return the label value + */ + public static String manifestLabelValue(Execution execution, String stageName, String kind, String labelKey) { + List validKinds = Arrays.asList("Deployment", "ReplicaSet"); + if (!validKinds.contains(kind)) { + throw new 
IllegalArgumentException("Only Deployments and ReplicaSets are valid kinds for this function"); + } + + if (labelKey == null) { + throw new IllegalArgumentException("A labelKey is required for this function"); + } + + Optional stage = execution.getStages() + .stream() + .filter(s -> s.getName().equals(stageName) && s.getType().equals("deployManifest") && s.getStatus() == ExecutionStatus.SUCCEEDED) + .findFirst(); + + if (!stage.isPresent()) { + throw new SpelHelperFunctionException("A valid Deploy Manifest stage name is required for this function"); + } + + List manifests = (List) stage.get().getContext().get("manifests"); + + if (manifests == null || manifests.size() == 0) { + throw new SpelHelperFunctionException("No manifest could be found in the context of the specified stage"); + } + + Optional manifestOpt = manifests.stream() + .filter(m -> m.get("kind").equals(kind)) + .findFirst(); + + if (!manifestOpt.isPresent()) { + throw new SpelHelperFunctionException(format("No manifest of kind %s could be found on the context of the specified stage", kind)); + } + + Map manifest = manifestOpt.get(); + String labelPath = format("$.spec.template.metadata.labels.%s", labelKey); + String labelValue; + + try { + labelValue = JsonPath.read(manifest, labelPath); + } catch (PathNotFoundException e) { + throw new SpelHelperFunctionException("No label of specified key found on matching manifest spec.template.metadata.labels"); + } + + return labelValue; + } +} diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/StageExpressionFunctionProvider.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/StageExpressionFunctionProvider.java new file mode 100644 index 0000000000..6a462b752a --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/functions/StageExpressionFunctionProvider.java @@ -0,0 +1,103 @@ +/* + * Copyright 2019 Netflix, Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.pipeline.expressions.functions; + +import com.netflix.spinnaker.orca.ExecutionContext; +import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionFunctionProvider; +import com.netflix.spinnaker.orca.pipeline.expressions.SpelHelperFunctionException; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.springframework.stereotype.Component; + +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; + +import static java.lang.String.format; + +@Component +public class StageExpressionFunctionProvider implements ExpressionFunctionProvider { + @Nullable + @Override + public String getNamespace() { + return null; + } + + @NotNull + @Override + public Collection getFunctions() { + return Arrays.asList( + new FunctionDefinition("currentStage", Collections.singletonList( + new FunctionParameter( + Execution.class, "execution", "The execution containing the currently executing stage" + ) + )), + new FunctionDefinition("stageByRefId", Arrays.asList( + new FunctionParameter( + Execution.class, "execution", "The execution containing the currently executing stage" + ), + new FunctionParameter( + String.class, "refId", "A valid stage reference identifier" + ) + )) + ); + } + + /** + * 
@param execution the current execution + * @return the currently executing stage + */ + public static Stage currentStage(Execution execution) { + ExecutionContext executionContext = ExecutionContext.get(); + if (executionContext == null) { + throw new SpelHelperFunctionException("An execution context is required for this function"); + } + + String currentStageId = ExecutionContext.get().getStageId(); + return execution + .getStages() + .stream() + .filter(s -> s.getId().equalsIgnoreCase(currentStageId)) + .findFirst() + .orElseThrow(() -> new SpelHelperFunctionException("No stage found with id '" + currentStageId + "'")); + } + + /** + * Finds a Stage by refId. This function should only be used by programmatic pipeline generators, as refIds are + * fragile and may change from execution-to-execution. + * + * @param execution the current execution + * @param refId the stage reference ID + * @return a stage specified by refId + */ + static Object stageByRefId(Execution execution, String refId) { + if (refId == null) { + throw new SpelHelperFunctionException(format( + "Stage refId must not be null in #stageByRefId in execution %s", execution.getId() + )); + } + return execution.getStages() + .stream() + .filter(s -> refId.equals(s.getRefId())) + .findFirst() + .orElseThrow(() -> new SpelHelperFunctionException(format( + "Unable to locate [%1$s] using #stageByRefId(%1$s) in execution %2$s", refId, execution.getId() + ))); + } +} diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/whitelisting/ReturnTypeRestrictor.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/whitelisting/ReturnTypeRestrictor.java index c5b3b90df7..f0642ea176 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/whitelisting/ReturnTypeRestrictor.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/expressions/whitelisting/ReturnTypeRestrictor.java @@ -17,10 +17,7 @@ package 
com.netflix.spinnaker.orca.pipeline.expressions.whitelisting; import com.netflix.spinnaker.orca.ExecutionStatus; -import com.netflix.spinnaker.orca.pipeline.model.Execution; -import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger; -import com.netflix.spinnaker.orca.pipeline.model.Stage; -import com.netflix.spinnaker.orca.pipeline.model.Trigger; +import com.netflix.spinnaker.orca.pipeline.model.*; import java.util.ArrayList; import java.util.Arrays; @@ -60,9 +57,11 @@ public interface ReturnTypeRestrictor extends InstantiationTypeRestrictor { Execution.class, Stage.class, Trigger.class, - JenkinsTrigger.BuildInfo.class, - JenkinsTrigger.JenkinsArtifact.class, - JenkinsTrigger.SourceControl.class, + BuildInfo.class, + JenkinsArtifact.class, + JenkinsBuildInfo.class, + ConcourseBuildInfo.class, + SourceControl.class, ExecutionStatus.class, Execution.AuthenticationDetails.class, Execution.PausedDetails.class diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/ArtifactoryTrigger.kt b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/ArtifactoryTrigger.kt new file mode 100644 index 0000000000..61d14d85f6 --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/ArtifactoryTrigger.kt @@ -0,0 +1,37 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.pipeline.model + +import com.netflix.spinnaker.kork.artifacts.model.Artifact +import com.netflix.spinnaker.kork.artifacts.model.ExpectedArtifact + +data class ArtifactoryTrigger +@JvmOverloads constructor( + override val type: String = "artifactory", + override val correlationId: String? = null, + override val user: String? = "[anonymous]", + override val parameters: Map = mutableMapOf(), + override val artifacts: List = mutableListOf(), + override val notifications: List> = mutableListOf(), + override var isRebake: Boolean = false, + override var isDryRun: Boolean = false, + override var isStrategy: Boolean = false, + val artifactorySearchName: String +) : Trigger { + override var other: Map = mutableMapOf() + override var resolvedExpectedArtifacts: List = mutableListOf() +} diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/BuildInfo.kt b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/BuildInfo.kt new file mode 100644 index 0000000000..5e493dab9d --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/BuildInfo.kt @@ -0,0 +1,12 @@ +package com.netflix.spinnaker.orca.pipeline.model + +abstract class BuildInfo(open val name: String?, + open val number: Int, + open val url: String?, + open val result: String?, + open val artifacts: List? = emptyList(), + open val scm: List? = emptyList(), + open val building: Boolean = false) { + var fullDisplayName: String? = null + get() = field ?: "$name#$number" +} diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/ConcourseTrigger.kt b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/ConcourseTrigger.kt new file mode 100644 index 0000000000..08064691c8 --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/ConcourseTrigger.kt @@ -0,0 +1,51 @@ +/* + * Copyright 2019 Pivotal, Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.pipeline.model + +import com.fasterxml.jackson.annotation.JsonCreator +import com.fasterxml.jackson.annotation.JsonProperty +import com.netflix.spinnaker.kork.artifacts.model.Artifact +import com.netflix.spinnaker.kork.artifacts.model.ExpectedArtifact + +data class ConcourseTrigger +@JvmOverloads constructor( + override val type: String = "concourse", + override val correlationId: String? = null, + override val user: String? = "[anonymous]", + override val parameters: Map<String, Any> = mutableMapOf(), + override val artifacts: List<Artifact> = mutableListOf(), + override val notifications: List<Map<String, Any>> = mutableListOf(), + override var isRebake: Boolean = false, + override var isDryRun: Boolean = false, + override var isStrategy: Boolean = false +) : Trigger { + override var other: Map<String, Any> = mutableMapOf() + override var resolvedExpectedArtifacts: List<ExpectedArtifact> = mutableListOf() + var buildInfo: ConcourseBuildInfo?
= null + var properties: Map = mutableMapOf() +} + +class ConcourseBuildInfo +@JsonCreator +constructor(@param:JsonProperty("name") override val name: String?, + @param:JsonProperty("number") override val number: Int, + @param:JsonProperty("url") override val url: String?, + @param:JsonProperty("result") override val result: String?, + @param:JsonProperty("artifacts") override val artifacts: List?, + @param:JsonProperty("scm") override val scm: List?, + @param:JsonProperty("building") override var building: Boolean = false) + : BuildInfo(name, number, url, result, artifacts, scm, building) diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/JenkinsTrigger.kt b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/JenkinsTrigger.kt index 4a5acf09ad..ccb61b6571 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/JenkinsTrigger.kt +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/JenkinsTrigger.kt @@ -17,7 +17,6 @@ package com.netflix.spinnaker.orca.pipeline.model import com.fasterxml.jackson.annotation.JsonCreator -import com.fasterxml.jackson.annotation.JsonIgnore import com.fasterxml.jackson.annotation.JsonProperty import com.netflix.spinnaker.kork.artifacts.model.Artifact import com.netflix.spinnaker.kork.artifacts.model.ExpectedArtifact @@ -41,34 +40,32 @@ data class JenkinsTrigger override var other: Map = mutableMapOf() override var resolvedExpectedArtifacts: List = mutableListOf() - var buildInfo: BuildInfo? = null + var buildInfo: JenkinsBuildInfo? = null var properties: Map = mutableMapOf() +} - data class BuildInfo - @JsonCreator constructor( - @param:JsonProperty("name") val name: String, - @param:JsonProperty("number") val number: Int, - @param:JsonProperty("url") val url: String, - @JsonProperty("artifacts") val artifacts: List? = emptyList(), - @JsonProperty("scm") val scm: List? 
= emptyList(), - @param:JsonProperty("building") val isBuilding: Boolean, - @param:JsonProperty("result") val result: String? - ) { - @get:JsonIgnore - val fullDisplayName: String - get() = name + " #" + number - } +class JenkinsArtifact +@JsonCreator +constructor(@param:JsonProperty("fileName") val fileName: String, + @param:JsonProperty("relativePath") val relativePath: String) - data class SourceControl - @JsonCreator constructor( - @param:JsonProperty("name") val name: String, - @param:JsonProperty("branch") val branch: String, - @param:JsonProperty("sha1") val sha1: String - ) +class JenkinsBuildInfo +@JsonCreator +constructor(@param:JsonProperty("name") override val name: String?, + @param:JsonProperty("number") override val number: Int, + @param:JsonProperty("url") override val url: String?, + @param:JsonProperty("result") override val result: String?, + @param:JsonProperty("artifacts") override val artifacts: List?, + @param:JsonProperty("scm") override val scm: List?, + @param:JsonProperty("building") override var building: Boolean = false, + @param:JsonProperty("timestamp") val timestamp: Long?) 
+ : BuildInfo(name, number, url, result, artifacts, scm, building) { - data class JenkinsArtifact - @JsonCreator constructor( - @param:JsonProperty("fileName") val fileName: String, - @param:JsonProperty("relativePath") val relativePath: String - ) + @JvmOverloads + constructor(name: String, + number: Int, + url: String, + result: String, + artifacts: List = emptyList(), + scm: List = emptyList()): this(name, number, url, result, artifacts, scm, false, null) } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/SourceControl.kt b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/SourceControl.kt new file mode 100644 index 0000000000..75d39063da --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/SourceControl.kt @@ -0,0 +1,11 @@ +package com.netflix.spinnaker.orca.pipeline.model + +import com.fasterxml.jackson.annotation.JsonCreator +import com.fasterxml.jackson.annotation.JsonProperty + +data class SourceControl +@JsonCreator constructor( + @param:JsonProperty("name") val name: String, + @param:JsonProperty("branch") val branch: String, + @param:JsonProperty("sha1") val sha1: String +) diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/support/TriggerDeserializer.kt b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/support/TriggerDeserializer.kt index 570f8a6f6a..86393ec434 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/support/TriggerDeserializer.kt +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/model/support/TriggerDeserializer.kt @@ -46,6 +46,20 @@ internal class TriggerDeserializer get("repository").textValue(), get("tag").textValue() ) + looksLikeConcourse() -> ConcourseTrigger( + get("type").textValue(), + get("correlationId")?.textValue(), + get("user")?.textValue() ?: "[anonymous]", + get("parameters")?.mapValue(parser) ?: mutableMapOf(), + get("artifacts")?.listValue(parser) ?: 
mutableListOf(), + get("notifications")?.listValue(parser) ?: mutableListOf(), + get("rebake")?.booleanValue() == true, + get("dryRun")?.booleanValue() == true, + get("strategy")?.booleanValue() == true + ).apply { + buildInfo = get("buildInfo")?.parseValue(parser) + properties = get("properties")?.parseValue(parser) ?: mutableMapOf() + } looksLikeJenkins() -> JenkinsTrigger( get("type").textValue(), get("correlationId")?.textValue(), @@ -77,6 +91,18 @@ internal class TriggerDeserializer get("parentExecution").parseValue(parser), get("parentPipelineStageId")?.textValue() ) + looksLikeArtifactory() -> ArtifactoryTrigger( + get("type").textValue(), + get("correlationId")?.textValue(), + get("user")?.textValue() ?: "[anonymous]", + get("parameters")?.mapValue(parser) ?: mutableMapOf(), + get("artifacts")?.listValue(parser) ?: mutableListOf(), + get("notifications")?.listValue(parser) ?: mutableListOf(), + get("rebake")?.booleanValue() == true, + get("dryRun")?.booleanValue() == true, + get("strategy")?.booleanValue() == true, + get("artifactorySearchName").textValue() + ) looksLikeGit() -> GitTrigger( get("type").textValue(), get("correlationId")?.textValue(), @@ -124,9 +150,14 @@ internal class TriggerDeserializer private fun JsonNode.looksLikeJenkins() = hasNonNull("master") && hasNonNull("job") && hasNonNull("buildNumber") + private fun JsonNode.looksLikeConcourse() = get("type")?.textValue() == "concourse" + private fun JsonNode.looksLikePipeline() = hasNonNull("parentExecution") + private fun JsonNode.looksLikeArtifactory() = + hasNonNull("artifactorySearchName") + private fun JsonNode.looksLikeCustom() = customTriggerSuppliers.any { it.predicate.invoke(this) } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/AcquireLockTask.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/AcquireLockTask.java index 51778d0033..f704586766 100644 --- 
a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/AcquireLockTask.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/AcquireLockTask.java @@ -53,7 +53,7 @@ public TaskResult execute(@Nonnull Stage stage) { LockContext lock = stage.mapTo("/lock", LockContext.LockContextBuilder.class).withStage(stage).build(); try { lockManager.acquireLock(lock.getLockName(), lock.getLockValue(), lock.getLockHolder(), lockingConfigurationProperties.getTtlSeconds()); - return new TaskResult(ExecutionStatus.SUCCEEDED, Collections.singletonMap("lock", lock)); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(Collections.singletonMap("lock", lock)).build(); } catch (LockFailureException lfe) { Map resultContext = new HashMap<>(); ExceptionHandler.Response exResult = new DefaultExceptionHandler().handle("acquireLock", lfe); @@ -74,7 +74,7 @@ public TaskResult execute(@Nonnull Stage stage) { // stages halfway through so the pipeline will proceed for any downstream join // points. 
resultContext.put("completeOtherBranchesThenFail", true); - return new TaskResult(ExecutionStatus.STOPPED, resultContext); + return TaskResult.builder(ExecutionStatus.STOPPED).context(resultContext).build(); } } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/DetermineLockTask.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/DetermineLockTask.java index ff595f5df1..a34f619c2a 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/DetermineLockTask.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/DetermineLockTask.java @@ -70,7 +70,7 @@ public TaskResult execute(@Nonnull Stage stage) { lockContext = stage.mapTo("/lock", LockContext.LockContextBuilder.class).build(); } - return new TaskResult(ExecutionStatus.SUCCEEDED, Collections.singletonMap("lock", lockContext)); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(Collections.singletonMap("lock", lockContext)).build(); } catch (Exception ex) { final boolean lockingEnabled = lockingConfigurationProperties.isEnabled(); final boolean learningMode = lockingConfigurationProperties.isLearningMode(); @@ -80,7 +80,7 @@ public TaskResult execute(@Nonnull Stage stage) { StructuredArguments.kv("locking.learningMode", learningMode), ex); LockContext lc = new LockContext.LockContextBuilder("unknown", null, "unknown").withStage(stage).build(); - return new TaskResult(ExecutionStatus.SUCCEEDED, Collections.singletonMap("lock", lc)); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(Collections.singletonMap("lock", lc)).build(); } throw new IllegalStateException("Unable to determine lock from context or previous lock stage", ex); } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/EvaluateVariablesTask.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/EvaluateVariablesTask.java index 6d600a87ef..62f87138a1 100644 --- 
a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/EvaluateVariablesTask.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/EvaluateVariablesTask.java @@ -29,6 +29,11 @@ import java.util.HashMap; import java.util.Map; +/** + * Copies previously evaluated expressions to the outputs map for consumption by subsequent stages. + * The variables aren't evaluated here because they would have been evaluated already by a call to + * e.g. ExpressionAware.Stage#withMergedContext + */ @Component public class EvaluateVariablesTask implements Task { @@ -47,6 +52,6 @@ public TaskResult execute(@Nonnull Stage stage) { outputs.put(v.getKey(), v.getValue()); } - return new TaskResult(ExecutionStatus.SUCCEEDED, stage.mapTo(Map.class), outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(stage.mapTo(Map.class)).outputs(outputs).build(); } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/ExpressionPreconditionTask.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/ExpressionPreconditionTask.java index 56765872cb..7c148929fc 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/ExpressionPreconditionTask.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/ExpressionPreconditionTask.java @@ -65,7 +65,7 @@ public ExpressionPreconditionTask(ContextParameterProcessor contextParameterProc Map context = (Map) stage.getContext().get("context"); context.put("expressionResult", expression); - return new TaskResult(status, singletonMap("context", context)); + return TaskResult.builder(status).context(singletonMap("context", context)).build(); } private static void ensureEvaluationSummaryIncluded(Map result, Stage stage, String expression) { diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/NoOpTask.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/NoOpTask.java index 
a22c225b05..66fc9aa97a 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/NoOpTask.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/NoOpTask.java @@ -26,6 +26,6 @@ public class NoOpTask implements Task { @Override public TaskResult execute(Stage ignored) { - return new TaskResult(SUCCEEDED); + return TaskResult.SUCCEEDED; } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/WaitTask.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/WaitTask.java index 04f43c0b43..bafca29842 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/WaitTask.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/WaitTask.java @@ -48,19 +48,19 @@ TaskResult execute(@Nonnull Stage stage) { WaitStage.WaitStageContext context = stage.mapTo(WaitStage.WaitStageContext.class); if (context.getWaitTime() == null) { - return new TaskResult(SUCCEEDED); + return TaskResult.SUCCEEDED; } Instant now = clock.instant(); if (context.isSkipRemainingWait()) { - return new TaskResult(SUCCEEDED); + return TaskResult.SUCCEEDED; } else if (context.getStartTime() == null || context.getStartTime() == Instant.EPOCH) { - return new TaskResult(RUNNING, singletonMap("startTime", now)); + return TaskResult.builder(RUNNING).context(singletonMap("startTime", now)).build(); } else if (context.getStartTime().plus(context.getWaitDuration()).isBefore(now)) { - return new TaskResult(SUCCEEDED); + return TaskResult.SUCCEEDED; } else { - return new TaskResult(RUNNING); + return TaskResult.RUNNING; } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/WaitUntilTask.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/WaitUntilTask.java index fdf070a30c..fa1aafb00e 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/WaitUntilTask.java +++ 
b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/WaitUntilTask.java @@ -48,17 +48,17 @@ TaskResult execute(@Nonnull Stage stage) { WaitUntilStage.WaitUntilStageContext context = stage.mapTo(WaitUntilStage.WaitUntilStageContext.class); if (context.getEpochMillis() == null) { - return new TaskResult(SUCCEEDED); + return TaskResult.SUCCEEDED; } Instant now = clock.instant(); if (context.getStartTime() == null || context.getStartTime() == Instant.EPOCH) { - return new TaskResult(RUNNING, singletonMap("startTime", now)); + return TaskResult.builder(RUNNING).context(singletonMap("startTime", now)).build(); } else if (context.getEpochMillis() <= now.toEpochMilli()) { - return new TaskResult(SUCCEEDED); + return TaskResult.SUCCEEDED; } else { - return new TaskResult(RUNNING); + return TaskResult.RUNNING; } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/artifacts/BindProducedArtifactsTask.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/artifacts/BindProducedArtifactsTask.java index 8d47aa2ce3..bb29263cf6 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/artifacts/BindProducedArtifactsTask.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/tasks/artifacts/BindProducedArtifactsTask.java @@ -68,6 +68,6 @@ public TaskResult execute(@Nonnull Stage stage) { outputs.put("artifacts", resolvedArtifacts); outputs.put("resolvedExpectedArtifacts", expectedArtifacts); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).outputs(outputs).build(); } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ArtifactResolver.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ArtifactResolver.java index 60d86b91fa..886cb563f4 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ArtifactResolver.java +++ 
b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ArtifactResolver.java @@ -37,14 +37,7 @@ import javax.annotation.Nonnull; import javax.annotation.Nullable; import java.io.IOException; -import java.util.ArrayList; -import java.util.Collections; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Objects; -import java.util.Optional; -import java.util.Set; +import java.util.*; import java.util.stream.Collectors; import java.util.stream.Stream; @@ -60,21 +53,24 @@ public class ArtifactResolver { private final ObjectMapper objectMapper; private final ExecutionRepository executionRepository; + private final ContextParameterProcessor contextParameterProcessor; @Autowired - public ArtifactResolver(ObjectMapper objectMapper, ExecutionRepository executionRepository) { + public ArtifactResolver(ObjectMapper objectMapper, ExecutionRepository executionRepository, + ContextParameterProcessor contextParameterProcessor) { this.objectMapper = objectMapper; this.executionRepository = executionRepository; + this.contextParameterProcessor = contextParameterProcessor; } public @Nonnull List getArtifacts(@Nonnull Stage stage) { if (stage.getContext() instanceof StageContext) { - return (List) Optional.ofNullable((List) ((StageContext) stage.getContext()).getAll("artifacts")) + return Optional.ofNullable((List) ((StageContext) stage.getContext()).getAll("artifacts")) .map(list -> list.stream() .filter(Objects::nonNull) - .flatMap(it -> ((List) it).stream()) - .map(a -> a instanceof Map ? objectMapper.convertValue(a, Artifact.class) : a) + .flatMap(it -> ((List) it).stream()) + .map(a -> a instanceof Map ? 
objectMapper.convertValue(a, Artifact.class) : (Artifact) a) .collect(Collectors.toList())) .orElse(emptyList()); } else { @@ -91,9 +87,9 @@ List getAllArtifacts(@Nonnull Execution execution) { List emittedArtifacts = Stage.topologicalSort(execution.getStages()) .filter(s -> s.getOutputs().containsKey("artifacts")) .flatMap( - s -> (Stream) ((List) s.getOutputs().get("artifacts")) + s -> ((List) s.getOutputs().get("artifacts")) .stream() - .map(a -> a instanceof Map ? objectMapper.convertValue(a, Artifact.class) : a) + .map(a -> a instanceof Map ? objectMapper.convertValue(a, Artifact.class) : (Artifact) a) ).collect(Collectors.toList()); Collections.reverse(emittedArtifacts); @@ -106,6 +102,26 @@ List getAllArtifacts(@Nonnull Execution execution) { return emittedArtifacts; } + /** + * Fully resolves a bound artifact on a stage. The stage can either select + * the ID of an expected artifact (one defined in a prior stage or as a + * trigger constraint), or define an inline, expression-evaluable default artifact. + * @param stage The stage containing context to evaluate expressions on the bound artifact. + * @param id An expected artifact id. Either id or artifact must be specified. + * @param artifact An inline default artifact. Either id or artifact must be specified. + * @return A bound artifact with expressions evaluated. + */ + public @Nullable Artifact getBoundArtifactForStage(Stage stage, @Nullable String id, @Nullable Artifact artifact) { + Artifact boundArtifact = id != null ? 
getBoundArtifactForId(stage, id) : artifact; + Map boundArtifactMap = objectMapper.convertValue(boundArtifact, new TypeReference>() { + }); + + Map evaluatedBoundArtifactMap = contextParameterProcessor.process(boundArtifactMap, + contextParameterProcessor.buildExecutionContext(stage, true), true); + + return objectMapper.convertValue(evaluatedBoundArtifactMap, Artifact.class); + } + public @Nullable Artifact getBoundArtifactForId( @Nonnull Stage stage, @Nullable String id) { @@ -115,11 +131,11 @@ Artifact getBoundArtifactForId( List expectedArtifacts; if (stage.getContext() instanceof StageContext) { - expectedArtifacts = (List) Optional.ofNullable((List) ((StageContext) stage.getContext()).getAll("resolvedExpectedArtifacts")) + expectedArtifacts = Optional.ofNullable((List) ((StageContext) stage.getContext()).getAll("resolvedExpectedArtifacts")) .map(list -> list.stream() .filter(Objects::nonNull) - .flatMap(it -> ((List) it).stream()) - .map(a -> a instanceof Map ? objectMapper.convertValue(a, ExpectedArtifact.class) : a) + .flatMap(it -> ((List) it).stream()) + .map(a -> a instanceof Map ? 
objectMapper.convertValue(a, ExpectedArtifact.class) : (ExpectedArtifact) a) .collect(Collectors.toList())) .orElse(emptyList()); } else { @@ -127,10 +143,20 @@ Artifact getBoundArtifactForId( expectedArtifacts = new ArrayList<>(); } - return expectedArtifacts + final Optional expectedArtifactOptional = expectedArtifacts .stream() .filter(e -> e.getId().equals(id)) - .findFirst() + .findFirst(); + + expectedArtifactOptional.ifPresent(expectedArtifact -> { + final Artifact boundArtifact = expectedArtifact.getBoundArtifact(); + final Artifact matchArtifact = expectedArtifact.getMatchArtifact(); + if (boundArtifact != null && matchArtifact != null && boundArtifact.getArtifactAccount() == null) { + boundArtifact.setArtifactAccount(matchArtifact.getArtifactAccount()); + } + }); + + return expectedArtifactOptional .map(ExpectedArtifact::getBoundArtifact) .orElse(null); } @@ -153,18 +179,23 @@ List getArtifactsForPipelineId( return execution == null ? Collections.emptyList() : getAllArtifacts(execution); } - public void resolveArtifacts(@Nonnull Map pipeline) { + public void resolveArtifacts(@Nonnull Map pipeline) { Map trigger = (Map) pipeline.get("trigger"); - List expectedArtifacts = (List) Optional.ofNullable((List) pipeline.get("expectedArtifacts")) + List expectedArtifacts = Optional.ofNullable((List) pipeline.get("expectedArtifacts")) .map(list -> list.stream().map(it -> objectMapper.convertValue(it, ExpectedArtifact.class)).collect(toList())) .orElse(emptyList()); - List receivedArtifactsFromPipeline = (List) Optional.ofNullable((List) pipeline.get("receivedArtifacts")) + + List receivedArtifactsFromPipeline = Optional.ofNullable((List) pipeline.get("receivedArtifacts")) .map(list -> list.stream().map(it -> objectMapper.convertValue(it, Artifact.class)).collect(toList())) .orElse(emptyList()); - List artifactsFromTrigger = (List) Optional.ofNullable((List) trigger.get("artifacts")) + List artifactsFromTrigger = Optional.ofNullable((List) trigger.get("artifacts")) 
.map(list -> list.stream().map(it -> objectMapper.convertValue(it, Artifact.class)).collect(toList())) .orElse(emptyList()); - List receivedArtifacts = Stream.concat(receivedArtifactsFromPipeline.stream(), artifactsFromTrigger.stream()).collect(toList()); + + List receivedArtifacts = Stream.concat( + receivedArtifactsFromPipeline.stream(), + artifactsFromTrigger.stream() + ).distinct().collect(toList()); if (expectedArtifacts.isEmpty()) { try { @@ -176,13 +207,13 @@ public void resolveArtifacts(@Nonnull Map pipeline) { } List priorArtifacts = getArtifactsForPipelineId((String) pipeline.get("id"), new ExecutionCriteria()); - Set resolvedArtifacts = resolveExpectedArtifacts(expectedArtifacts, receivedArtifacts, priorArtifacts, true); - Set allArtifacts = new HashSet<>(receivedArtifacts); - + LinkedHashSet resolvedArtifacts = resolveExpectedArtifacts(expectedArtifacts, receivedArtifacts, priorArtifacts, true); + LinkedHashSet allArtifacts = new LinkedHashSet<>(receivedArtifacts); allArtifacts.addAll(resolvedArtifacts); try { trigger.put("artifacts", objectMapper.readValue(objectMapper.writeValueAsString(allArtifacts), List.class)); + trigger.put("expectedArtifacts", objectMapper.readValue(objectMapper.writeValueAsString(expectedArtifacts), List.class)); trigger.put("resolvedExpectedArtifacts", objectMapper.readValue(objectMapper.writeValueAsString(expectedArtifacts), List.class)); // Add the actual expectedArtifacts we included in the ids. 
} catch (IOException e) { throw new ArtifactResolutionException("Failed to store artifacts in trigger: " + e.getMessage(), e); @@ -190,6 +221,9 @@ public void resolveArtifacts(@Nonnull Map pipeline) { } public Artifact resolveSingleArtifact(ExpectedArtifact expectedArtifact, List possibleMatches, boolean requireUniqueMatches) { + if (expectedArtifact.getBoundArtifact() != null) { + return expectedArtifact.getBoundArtifact(); + } List matches = possibleMatches .stream() .filter(expectedArtifact::matches) @@ -216,9 +250,9 @@ public Set resolveExpectedArtifacts(List expectedArt return resolveExpectedArtifacts(expectedArtifacts, receivedArtifacts, null, requireUniqueMatches); } - public Set resolveExpectedArtifacts(List expectedArtifacts, List receivedArtifacts, List priorArtifacts, boolean requireUniqueMatches) { - Set resolvedArtifacts = new HashSet<>(); - Set unresolvedExpectedArtifacts = new HashSet<>(); + public LinkedHashSet resolveExpectedArtifacts(List expectedArtifacts, List receivedArtifacts, List priorArtifacts, boolean requireUniqueMatches) { + LinkedHashSet resolvedArtifacts = new LinkedHashSet<>(); + LinkedHashSet unresolvedExpectedArtifacts = new LinkedHashSet<>(); for (ExpectedArtifact expectedArtifact : expectedArtifacts) { Artifact resolved = resolveSingleArtifact(expectedArtifact, receivedArtifacts, requireUniqueMatches); @@ -252,10 +286,6 @@ public Set resolveExpectedArtifacts(List expectedArt } private static class ArtifactResolutionException extends RuntimeException { - ArtifactResolutionException(String message) { - super(message); - } - ArtifactResolutionException(String message, Throwable cause) { super(message, cause); } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/BuildDetailExtractor.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/BuildDetailExtractor.java index b6d976a0d4..7a12a3b6b6 100644 --- 
a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/BuildDetailExtractor.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/BuildDetailExtractor.java @@ -18,8 +18,9 @@ import com.fasterxml.jackson.databind.ObjectMapper; import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper; -import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger.BuildInfo; -import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger.SourceControl; +import com.netflix.spinnaker.orca.pipeline.model.BuildInfo; +import com.netflix.spinnaker.orca.pipeline.model.JenkinsBuildInfo; +import com.netflix.spinnaker.orca.pipeline.model.SourceControl; import org.apache.commons.lang3.StringUtils; import java.util.*; @@ -37,7 +38,7 @@ public BuildDetailExtractor() { this.detailExtractors = Arrays.asList(new DefaultDetailExtractor(), new LegacyJenkinsUrlDetailExtractor()); } - public boolean tryToExtractBuildDetails(BuildInfo buildInfo, Map request) { + public boolean tryToExtractBuildDetails(BuildInfo buildInfo, Map request) { // The first strategy to succeed ends the loop. 
That is: the DefaultDetailExtractor is trying first // if it can not succeed the Legacy parser will be applied return detailExtractors.stream().anyMatch(it -> @@ -46,9 +47,9 @@ public boolean tryToExtractBuildDetails(BuildInfo buildInfo, Map } @Deprecated - public boolean tryToExtractBuildDetails(Map buildInfo, Map request) { + public boolean tryToExtractJenkinsBuildDetails(Map buildInfo, Map request) { try { - return tryToExtractBuildDetails(mapper.convertValue(buildInfo, BuildInfo.class), request); + return tryToExtractBuildDetails(mapper.convertValue(buildInfo, JenkinsBuildInfo.class), request); } catch (IllegalArgumentException e) { return false; } @@ -59,7 +60,7 @@ public boolean tryToExtractBuildDetails(Map buildInfo, Map request) { + public boolean tryToExtractBuildDetails(BuildInfo buildInfo, Map request) { if (buildInfo == null || request == null) { return false; } @@ -104,7 +105,7 @@ private List parseBuildInfoUrl(String url) { private static class DefaultDetailExtractor implements DetailExtractor { @Override - public boolean tryToExtractBuildDetails(BuildInfo buildInfo, Map request) { + public boolean tryToExtractBuildDetails(BuildInfo buildInfo, Map request) { if (buildInfo == null || request == null) { return false; @@ -133,9 +134,9 @@ private void extractBuildHost(String url, Map request) { //Common trait for DetailExtractor private interface DetailExtractor { - boolean tryToExtractBuildDetails(BuildInfo buildInfo, Map request); + boolean tryToExtractBuildDetails(BuildInfo buildInfo, Map request); - default void extractCommitHash(BuildInfo buildInfo, Map request) { + default void extractCommitHash(BuildInfo buildInfo, Map request) { // buildInfo.scm contains a list of maps. Each map contains these keys: name, sha1, branch. // If the list contains more than one entry, prefer the first one that is not master and is not develop. 
String commitHash = null; diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ContextFunctionConfiguration.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ContextFunctionConfiguration.java index 82bd00ba30..b16a917b21 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ContextFunctionConfiguration.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ContextFunctionConfiguration.java @@ -17,25 +17,38 @@ package com.netflix.spinnaker.orca.pipeline.util; import com.netflix.spinnaker.orca.config.UserConfiguredUrlRestrictions; +import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionFunctionProvider; + +import java.util.List; + import static com.netflix.spinnaker.orca.pipeline.expressions.PipelineExpressionEvaluator.ExpressionEvaluationVersion.V2; public class ContextFunctionConfiguration { private final UserConfiguredUrlRestrictions urlRestrictions; + private final List expressionFunctionProviders; private final String spelEvaluator; - public ContextFunctionConfiguration(UserConfiguredUrlRestrictions urlRestrictions, String spelEvaluator) { + public ContextFunctionConfiguration(UserConfiguredUrlRestrictions urlRestrictions, + List expressionFunctionProviders, + String spelEvaluator) { this.urlRestrictions = urlRestrictions; + this.expressionFunctionProviders = expressionFunctionProviders; this.spelEvaluator = spelEvaluator; } - public ContextFunctionConfiguration(UserConfiguredUrlRestrictions urlRestrictions) { - this(urlRestrictions, V2); + public ContextFunctionConfiguration(UserConfiguredUrlRestrictions urlRestrictions, + List expressionFunctionProviders) { + this(urlRestrictions, expressionFunctionProviders, V2); } public UserConfiguredUrlRestrictions getUrlRestrictions() { return urlRestrictions; } + public List getExpressionFunctionProviders() { + return expressionFunctionProviders; + } + public String getSpelEvaluator() { return spelEvaluator; } diff --git 
a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ContextParameterProcessor.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ContextParameterProcessor.java index 55cdc0f54b..06fb271b78 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ContextParameterProcessor.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/ContextParameterProcessor.java @@ -23,18 +23,13 @@ import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionEvaluationSummary; import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionEvaluator; import com.netflix.spinnaker.orca.pipeline.expressions.PipelineExpressionEvaluator; -import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger; -import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger.BuildInfo; -import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger.SourceControl; -import com.netflix.spinnaker.orca.pipeline.model.Stage; -import com.netflix.spinnaker.orca.pipeline.model.Trigger; +import com.netflix.spinnaker.orca.pipeline.expressions.functions.DeployedServerGroupsExpressionFunctionProvider; +import com.netflix.spinnaker.orca.pipeline.expressions.functions.ManifestLabelValueExpressionFunctionProvider; +import com.netflix.spinnaker.orca.pipeline.model.*; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.Optional; +import java.util.*; import static com.netflix.spinnaker.orca.pipeline.expressions.PipelineExpressionEvaluator.ExpressionEvaluationVersion.V2; import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; @@ -54,7 +49,14 @@ public class ContextParameterProcessor { private ExpressionEvaluator expressionEvaluator; public ContextParameterProcessor() { - this(new ContextFunctionConfiguration(new UserConfiguredUrlRestrictions.Builder().build(), V2)); + this(new ContextFunctionConfiguration( + 
new UserConfiguredUrlRestrictions.Builder().build(), + Arrays.asList( + new DeployedServerGroupsExpressionFunctionProvider(), + new ManifestLabelValueExpressionFunctionProvider() + ), + V2 + )); } public ContextParameterProcessor(ContextFunctionConfiguration contextFunctionConfiguration) { @@ -139,6 +141,9 @@ private Map precomputeValues(Map context) { if (context.get("scmInfo") == null && trigger instanceof JenkinsTrigger) { context.put("scmInfo", Optional.ofNullable(((JenkinsTrigger) trigger).getBuildInfo()).map(BuildInfo::getScm).orElse(emptyList())); } + if (context.get("scmInfo") == null && trigger instanceof ConcourseTrigger) { + context.put("scmInfo", Optional.ofNullable(((ConcourseTrigger) trigger).getBuildInfo()).map(BuildInfo::getScm).orElse(emptyList())); + } if (context.get("scmInfo") != null && ((List) context.get("scmInfo")).size() >= 2) { List scmInfos = (List) context.get("scmInfo"); SourceControl scmInfo = scmInfos diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/PackageInfo.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/PackageInfo.java index a21e965e09..59f3ffa719 100644 --- a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/PackageInfo.java +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipeline/util/PackageInfo.java @@ -22,8 +22,6 @@ import com.google.common.annotations.VisibleForTesting; import com.netflix.spinnaker.kork.artifacts.model.Artifact; import com.netflix.spinnaker.orca.pipeline.model.Stage; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; import static java.lang.String.format; import static java.util.Collections.emptyList; @@ -42,8 +40,6 @@ */ public class PackageInfo { - private final Logger log = LoggerFactory.getLogger(getClass()); - private final ObjectMapper mapper; private final Stage stage; private final List artifacts; @@ -231,7 +227,7 @@ private 
Map createAugmentedRequest(Map trigger, if (packageIdentifier != null) { if (extractBuildDetails) { Map buildInfoForDetails = !buildArtifact.isEmpty() ? buildInfoCurrentExecution : triggerBuildInfo; - buildDetailExtractor.tryToExtractBuildDetails(buildInfoForDetails, stageContext); + buildDetailExtractor.tryToExtractJenkinsBuildDetails(buildInfoForDetails, stageContext); } } } diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/V2Util.java b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/V2Util.java new file mode 100644 index 0000000000..2e3302cb56 --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/V2Util.java @@ -0,0 +1,77 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.pipelinetemplate; + +import com.netflix.spinnaker.kork.web.exceptions.ValidationException; +import com.netflix.spinnaker.orca.extensionpoint.pipeline.ExecutionPreprocessor; +import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor; +import lombok.extern.slf4j.Slf4j; + +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.stream.Collectors; + +@Slf4j +public class V2Util { + public static Map<String, Object> planPipeline(ContextParameterProcessor contextParameterProcessor, + List<ExecutionPreprocessor> pipelinePreprocessors, + Map<String, Object> pipeline) { + // TODO(jacobkiefer): Excise the logic in OperationsController that requires plan to avoid resolving artifacts. + pipeline.put("plan", true); // avoid resolving artifacts + + Map<String, Object> finalPipeline = pipeline; + List<ExecutionPreprocessor> preprocessors = pipelinePreprocessors + .stream() + .filter(p -> p.supports(finalPipeline, ExecutionPreprocessor.Type.PIPELINE)) + .collect(Collectors.toList()); + for (ExecutionPreprocessor pp : preprocessors) { + pipeline = pp.process(pipeline); + } + + List<Map<String, Object>> pipelineErrors = (List<Map<String, Object>>) pipeline.get("errors"); + if (pipelineErrors != null && !pipelineErrors.isEmpty()) { + throw new ValidationException( + "Pipeline template is invalid", pipelineErrors); + } + + Map<String, Object> augmentedContext = new HashMap<>(); + augmentedContext.put("trigger", pipeline.get("trigger")); + augmentedContext.put("templateVariables", pipeline.getOrDefault("templateVariables", Collections.EMPTY_MAP)); + Map<String, Object> spelEvaluatedPipeline = contextParameterProcessor.process( + pipeline, augmentedContext, true); + + Map<String, Object> expressionEvalSummary = + (Map<String, Object>) spelEvaluatedPipeline.get("expressionEvaluationSummary"); + if (expressionEvalSummary != null) { + List<String> failedTemplateVars = expressionEvalSummary.entrySet() + .stream() + .map(e -> e.getKey()) + .filter(v -> v.startsWith("templateVariables.")) + .map(v -> v.replace("templateVariables.", "")) + .collect(Collectors.toList()); + 
+ if (failedTemplateVars.size() > 0) { + throw new ValidationException( + "Missing template variable values for the following variables: %s", failedTemplateVars); + } + } + + return spelEvaluatedPipeline; + } +} diff --git a/orca-core/src/main/java/com/netflix/spinnaker/orca/preprocessors/DefaultApplicationExecutionPreprocessor.kt b/orca-core/src/main/java/com/netflix/spinnaker/orca/preprocessors/DefaultApplicationExecutionPreprocessor.kt new file mode 100644 index 0000000000..9103da9042 --- /dev/null +++ b/orca-core/src/main/java/com/netflix/spinnaker/orca/preprocessors/DefaultApplicationExecutionPreprocessor.kt @@ -0,0 +1,40 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.preprocessors + +import com.netflix.spinnaker.orca.config.DefaultApplicationConfigurationProperties +import com.netflix.spinnaker.orca.extensionpoint.pipeline.ExecutionPreprocessor +import org.springframework.stereotype.Component +import javax.annotation.Nonnull + +/** + * Populates an Execution config payload with a default application value if one is not provided. 
+ */ +@Component +class DefaultApplicationExecutionPreprocessor( + private val properties: DefaultApplicationConfigurationProperties +) : ExecutionPreprocessor { + + override fun supports(@Nonnull execution: MutableMap<String, Any>, + @Nonnull type: ExecutionPreprocessor.Type): Boolean = true + + override fun process(execution: MutableMap<String, Any>): MutableMap<String, Any> { + if (!execution.containsKey("application")) { + execution["application"] = properties.defaultApplicationName + } + return execution + } +} diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/StageResolverSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/StageResolverSpec.groovy new file mode 100644 index 0000000000..47e61e8d8a --- /dev/null +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/StageResolverSpec.groovy @@ -0,0 +1,70 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca + +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder +import com.netflix.spinnaker.orca.pipeline.WaitStage +import com.netflix.spinnaker.orca.pipeline.model.Stage +import com.netflix.spinnaker.orca.pipeline.tasks.WaitTask +import spock.lang.Specification +import spock.lang.Subject +import spock.lang.Unroll + +import javax.annotation.Nonnull; + +class StageResolverSpec extends Specification { + @Subject + def stageResolver = new StageResolver([ + new WaitStage(), + new AliasedStageDefinitionBuilder() + ]) + + @Unroll + def "should lookup stage by name or alias"() { + expect: + stageResolver.getStageDefinitionBuilder(stageTypeIdentifier, null).getType() == expectedStageType + + where: + stageTypeIdentifier || expectedStageType + "wait" || "wait" + "aliased" || "aliased" + "notAliased" || "aliased" + } + + def "should raise exception on duplicate alias"() { + when: + new StageResolver([ + new AliasedStageDefinitionBuilder(), + new AliasedStageDefinitionBuilder() + ]) + + then: + thrown(StageResolver.DuplicateStageAliasException) + } + + def "should raise exception when stage not found"() { + when: + stageResolver.getStageDefinitionBuilder("DoesNotExist", null) + + then: + thrown(StageResolver.NoSuchStageDefinitionBuilderException) + } + + @StageDefinitionBuilder.Aliases("notAliased") + class AliasedStageDefinitionBuilder implements StageDefinitionBuilder { + } +} diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/TaskResolverSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/TaskResolverSpec.groovy new file mode 100644 index 0000000000..51ad7e9f31 --- /dev/null +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/TaskResolverSpec.groovy @@ -0,0 +1,72 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca + +import com.netflix.spinnaker.orca.pipeline.model.Stage +import com.netflix.spinnaker.orca.pipeline.tasks.WaitTask +import spock.lang.Specification +import spock.lang.Subject +import spock.lang.Unroll + +import javax.annotation.Nonnull; + +class TaskResolverSpec extends Specification { + @Subject + def taskResolver = new TaskResolver([ + new WaitTask(), + new AliasedTask() + ], false) + + @Unroll + def "should lookup task by name or alias"() { + expect: + taskResolver.getTaskClass(taskTypeIdentifier) == expectedTaskClass + + where: + taskTypeIdentifier || expectedTaskClass + "com.netflix.spinnaker.orca.pipeline.tasks.WaitTask" || WaitTask.class + "com.netflix.spinnaker.orca.TaskResolverSpec.AliasedTask" || AliasedTask.class + "com.netflix.spinnaker.orca.NotAliasedTask" || AliasedTask.class + } + + def "should raise exception on duplicate alias"() { + when: + new TaskResolver([ + new AliasedTask(), + new AliasedTask() + ], false) + + then: + thrown(TaskResolver.DuplicateTaskAliasException) + } + + def "should raise exception when task not found"() { + when: + taskResolver.getTaskClass("DoesNotExist") + + then: + thrown(TaskResolver.NoSuchTaskException) + } + + @Task.Aliases("com.netflix.spinnaker.orca.NotAliasedTask") + class AliasedTask implements Task { + @Override + TaskResult execute(@Nonnull Stage stage) { + return TaskResult.SUCCEEDED + } + } +} diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/locks/LockContextSpec.groovy 
b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/locks/LockContextSpec.groovy index 358a3e6a7b..12baffe998 100644 --- a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/locks/LockContextSpec.groovy +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/locks/LockContextSpec.groovy @@ -118,7 +118,5 @@ class LockContextSpec extends Specification { type = stageType } } - - } } diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionsSupportSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionsSupportSpec.groovy index 60f25f4e83..4fae2f1e49 100644 --- a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionsSupportSpec.groovy +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/ExpressionsSupportSpec.groovy @@ -16,19 +16,23 @@ package com.netflix.spinnaker.orca.pipeline.expressions +import com.netflix.spinnaker.orca.config.UserConfiguredUrlRestrictions +import com.netflix.spinnaker.orca.pipeline.util.ContextFunctionConfiguration +import org.springframework.expression.EvaluationContext import spock.lang.Shared import spock.lang.Specification import spock.lang.Unroll import static com.netflix.spinnaker.orca.ExecutionStatus.SUCCEEDED import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.pipeline -import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage; +import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage class ExpressionsSupportSpec extends Specification { @Shared def pipeline = pipeline { stage { id = "1" + refId = "1" name = "My First Stage" context = [ "region": "us-east-1", @@ -37,6 +41,7 @@ class ExpressionsSupportSpec extends Specification { stage { id = "2" + refId = "2" name = "My Second Stage" context = [ "region": "us-west-1", @@ -45,6 +50,7 @@ class ExpressionsSupportSpec extends Specification { stage { id = "3" + refId = "3" status = SUCCEEDED type = 
"createServerGroup" name = "Deploy in us-east-1" @@ -73,6 +79,7 @@ class ExpressionsSupportSpec extends Specification { stage { id = "4" + refId = "4" status = SUCCEEDED type = "disableServerGroup" name = "disable server group" @@ -144,11 +151,40 @@ class ExpressionsSupportSpec extends Specification { "42" | false } - def "deployedServerGroup should resolve for valid stage type"() { + def "support registering custom expression functions"() { + given: + ContextFunctionConfiguration configuration = new ContextFunctionConfiguration( + new UserConfiguredUrlRestrictions.Builder().build(), + [new HelloExpressionFunctionProvider()] + ) + + ExpressionsSupport.helperFunctionConfigurationAtomicReference.set(configuration) + when: - def map = ExpressionsSupport.deployedServerGroups(pipeline) + EvaluationContext context = ExpressionsSupport.newEvaluationContext(pipeline, true) + + then: + context.variables.containsKey("test_hello") + } +} + +class HelloExpressionFunctionProvider implements ExpressionFunctionProvider { + + @Override + String getNamespace() { + return "test" + } + + @Override + Collection<FunctionDefinition> getFunctions() { + return [ + new FunctionDefinition("hello", [ + new FunctionParameter(String.class, "name", "Person's name to say hello to") + ]) + ] + } - then: "(deploy|createServerGroup|cloneServerGroup|rollingPush)" - map.serverGroup == ["app-test-v001"] + static String hello(String name) { + return "Hello, $name" + } } diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/DeployedServerGroupsExpressionFunctionProviderSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/DeployedServerGroupsExpressionFunctionProviderSpec.groovy new file mode 100644 index 0000000000..a4e29b44ba --- /dev/null +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/DeployedServerGroupsExpressionFunctionProviderSpec.groovy @@ -0,0 +1,189 @@ +/* + * Copyright 2019 Netflix, 
Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.pipeline.expressions.functions + +import spock.lang.Shared +import spock.lang.Specification + +import static com.netflix.spinnaker.orca.ExecutionStatus.SUCCEEDED +import static com.netflix.spinnaker.orca.pipeline.expressions.functions.DeployedServerGroupsExpressionFunctionProvider.deployedServerGroups +import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.pipeline +import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage + +class DeployedServerGroupsExpressionFunctionProviderSpec extends Specification { + @Shared + def pipeline = pipeline { + stage { + id = "1" + name = "My First Stage" + context = [ + "region": "us-east-1", + ] + } + + stage { + id = "2" + name = "My Second Stage" + context = [ + "region": "us-west-1", + ] + } + + stage { + id = "3" + status = SUCCEEDED + type = "createServerGroup" + name = "Deploy in us-east-1" + context.putAll( + "account": "test", + "deploy.account.name": "test", + "availabilityZones": [ + "us-east-1": [ + "us-east-1c", + "us-east-1d", + "us-east-1e" + ] + ], + "capacity": [ + "desired": 1, + "max" : 1, + "min" : 1 + ], + "deploy.server.groups": [ + "us-east-1": [ + "app-test-v001" + ] + ] + ) + } + + stage { + id = "4" + status = SUCCEEDED + type = "disableServerGroup" + name = "disable server group" + context.putAll( + "account": "test", + "deploy.account.name": "test", + "availabilityZones": 
[ + "us-east-1": [ + "us-east-1c", + "us-east-1d", + "us-east-1e" + ] + ], + "capacity": [ + "desired": 1, + "max" : 1, + "min" : 1 + ], + "deploy.server.groups": [ + "us-west-2": [ + "app-test-v002" + ] + ] + ) + } + } + + def "deployedServerGroup should resolve for valid stage type"() { + when: + def map = deployedServerGroups(pipeline) + + then: "(deploy|createServerGroup|cloneServerGroup|rollingPush)" + map.serverGroup == ["app-test-v001"] + } + + def "deployedServerGroup should resolve deployments for valid stage type"() { + + final pipelineWithDeployments = pipeline { + stage { + id = "1" + status = SUCCEEDED + type = "createServerGroup" + name = "Deploy in us-east-1" + context.putAll( + "account": "test", + "deploy.account.name": "test", + "availabilityZones": [ + "us-east-1": [ + "us-east-1c", + "us-east-1d", + "us-east-1e" + ] + ], + "capacity": [ + "desired": 1, + "max" : 1, + "min" : 1 + ], + "deploy.server.groups": [ + "us-east-1": [ + "app-test-v001", + "app-test-v002", + "app-test-v003" + ] + ], + "kato.tasks": [ + [ + "resultObjects": [ + [ + "deployments": [ + [ + "serverGroupName": "app-test-v001" + ] + ], + "serverGroupNames": [ + "us-east-1:app-test-v001" + ] + ] + ] + ], + [ + "resultObjects": [ + [ + "deployments": [ + [ + "serverGroupName": "app-test-v002" + ], + [ + "serverGroupName": "app-test-v003" + ] + ] + ] + ] + ], + [ + "resultObjects": [ [:] ] + ], + [:] + ] + ) + } + } + + when: + def map = deployedServerGroups(pipelineWithDeployments) + + then: + map.serverGroup == ["app-test-v001"] + map.deployments == [[ + [ "serverGroupName": "app-test-v001" ], + [ "serverGroupName": "app-test-v002" ], + [ "serverGroupName": "app-test-v003" ], + ]] + } +} diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/ManifestLabelValueExpressionFunctionProviderSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/ManifestLabelValueExpressionFunctionProviderSpec.groovy 
new file mode 100644 index 0000000000..d4c3bd753e --- /dev/null +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/ManifestLabelValueExpressionFunctionProviderSpec.groovy @@ -0,0 +1,88 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.pipeline.expressions.functions + + +import com.netflix.spinnaker.orca.pipeline.expressions.SpelHelperFunctionException +import spock.lang.Shared +import spock.lang.Specification +import spock.lang.Unroll + +import static com.netflix.spinnaker.orca.ExecutionStatus.SUCCEEDED +import static com.netflix.spinnaker.orca.pipeline.expressions.functions.ManifestLabelValueExpressionFunctionProvider.manifestLabelValue +import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.pipeline +import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage + +class ManifestLabelValueExpressionFunctionProviderSpec extends Specification { + + @Shared + def deployManifestPipeline = pipeline { + stage { + id = "1" + name = "Deploy ReplicaSet" + context.putAll( + "manifests": [ + [ + "kind": "ReplicaSet", + "spec": [ + "template": [ + "metadata": [ + "labels": [ + "my-label-key": "my-label-value", + "my-other-label-key": "my-other-label-value" + ] + ] + ] + ] + ] + ] + ) + status = SUCCEEDED + type = "deployManifest" + } + } + + @Unroll + def "manifestLabelValue should resolve label value for manifest of given 
kind deployed by stage of given name"() { + expect: + manifestLabelValue(deployManifestPipeline, "Deploy ReplicaSet", "ReplicaSet", labelKey) == expectedLabelValue + + where: + labelKey || expectedLabelValue + "my-label-key" || "my-label-value" + "my-other-label-key" || "my-other-label-value" + } + + def "manifestLabelValue should raise exception if stage, manifest, or label not found"() { + when: + manifestLabelValue(deployManifestPipeline, "Non-existent Stage", "ReplicaSet", "my-label-key") + + then: + thrown(SpelHelperFunctionException) + + when: + manifestLabelValue(deployManifestPipeline, "Deploy ReplicaSet", "Deployment", "my-label-key") + + then: + thrown(SpelHelperFunctionException) + + when: + manifestLabelValue(deployManifestPipeline, "Deploy ReplicaSet", "ReplicaSet", "non-existent-label") + + then: + thrown(SpelHelperFunctionException) + } +} diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/StageExpressionFunctionProviderSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/StageExpressionFunctionProviderSpec.groovy new file mode 100644 index 0000000000..ce5c2ff34a --- /dev/null +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/expressions/functions/StageExpressionFunctionProviderSpec.groovy @@ -0,0 +1,106 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.pipeline.expressions.functions + +import com.netflix.spinnaker.orca.ExecutionContext +import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionsSupport +import com.netflix.spinnaker.orca.pipeline.expressions.SpelHelperFunctionException +import spock.lang.Shared +import spock.lang.Specification +import spock.lang.Unroll + +import static com.netflix.spinnaker.orca.pipeline.expressions.functions.StageExpressionFunctionProvider.* +import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.pipeline +import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage; + +class StageExpressionFunctionProviderSpec extends Specification { + @Shared + def pipeline = pipeline { + stage { + id = "1" + refId = "1.0" + name = "My First Stage" + } + + stage { + id = "2" + refId = "2.0" + name = "My Second Stage" + } + } + + @Unroll + def "should resolve current stage"() { + given: + ExecutionContext.set( + new ExecutionContext( + null, null, null, null, currentStageId, null + ) + ) + + when: + def currentStage = currentStage(pipeline) + + then: + currentStage.name == expectedStageName + + where: + currentStageId || expectedStageName + "1" || "My First Stage" + "2" || "My Second Stage" + } + + @Unroll + def "should raise exception if current stage cannot be found"() { + given: + ExecutionContext.set( + executionContext + ) + + when: + currentStage(pipeline) + + then: + thrown(SpelHelperFunctionException) + + where: + executionContext << [ + new ExecutionContext( + null, null, null, null, "-1", null + ), + null + ] + } + + def "stageByRefId() should match on #matchedAttribute"() { + expect: + stageByRefId(pipeline, stageCriteria).name == expectedStageName + + where: + stageCriteria || matchedAttribute || expectedStageName + "1.0" || "refId" || "My First Stage" + "2.0" || "refId" || "My Second Stage" + } + + def "stageByRefId() should raise exception if stage not found"() { + when: + stageByRefId(pipeline, 
"does_not_exist") + + then: + thrown(SpelHelperFunctionException) + } +} diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/model/ExecutionSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/model/ExecutionSpec.groovy index 1838480c42..07aa9ce921 100644 --- a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/model/ExecutionSpec.groovy +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/model/ExecutionSpec.groovy @@ -34,8 +34,8 @@ class ExecutionSpec extends Specification { def "should build AuthenticationDetails containing authenticated details"() { given: MDC.clear() - MDC.put(AuthenticatedRequest.SPINNAKER_USER, "SpinnakerUser") - MDC.put(AuthenticatedRequest.SPINNAKER_ACCOUNTS, "Account1,Account2") + MDC.put(AuthenticatedRequest.Header.USER.header, "SpinnakerUser") + MDC.put(AuthenticatedRequest.Header.ACCOUNTS.header, "Account1,Account2") when: def authenticationDetails = Execution.AuthenticationDetails.build().get() diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/model/TriggerSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/model/TriggerSpec.groovy index 55ba6eeb7e..375a09e57d 100644 --- a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/model/TriggerSpec.groovy +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/model/TriggerSpec.groovy @@ -469,6 +469,31 @@ class TriggerSpec extends Specification { ''' } + def "can parse an artifactory trigger"() { + given: + def trigger = mapper.readValue(triggerJson, Trigger) + + expect: + trigger instanceof ArtifactoryTrigger + with(trigger) { + artifactorySearchName == "search-name" + } + + where: + triggerJson = ''' +{ + "account": "theaccount", + "enabled": true, + "job": "the-job", + "master": "master", + "organization": "org", + "artifactorySearchName": "search-name", + "artifactoryRepository": "libs-demo-local", + "type": "artifactory" +} +''' + } + def 
pubSubTrigger = ''' { "attributeConstraints": { diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/tasks/EvaluateVariablesTaskSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/tasks/EvaluateVariablesTaskSpec.groovy index 94aa7a0e9d..d325fd2b42 100644 --- a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/tasks/EvaluateVariablesTaskSpec.groovy +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/tasks/EvaluateVariablesTaskSpec.groovy @@ -26,7 +26,7 @@ class EvaluateVariablesTaskSpec extends Specification { @Subject task = new EvaluateVariablesTask() - void "Should correctly evaulate variables"() { + void "Should correctly copy evaluated variables"() { setup: def stage = stage { refId = "1" diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/ArtifactResolverSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/ArtifactResolverSpec.groovy index de30f34846..30a7011074 100644 --- a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/ArtifactResolverSpec.groovy +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/ArtifactResolverSpec.groovy @@ -17,19 +17,52 @@ package com.netflix.spinnaker.orca.pipeline.util +import com.fasterxml.jackson.core.type.TypeReference import com.fasterxml.jackson.databind.ObjectMapper import com.netflix.spinnaker.kork.artifacts.model.Artifact import com.netflix.spinnaker.kork.artifacts.model.ExpectedArtifact import com.netflix.spinnaker.orca.pipeline.model.DefaultTrigger import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository +import rx.Observable import spock.lang.Specification import spock.lang.Unroll + import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.pipeline import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage class ArtifactResolverSpec extends Specification { + ObjectMapper objectMapper = new ObjectMapper() + def 
executionRepository = Stub(ExecutionRepository) { + retrievePipelinesForPipelineConfigId(*_) >> Observable.empty(); + } + def makeArtifactResolver() { - return new ArtifactResolver(new ObjectMapper(), Mock(ExecutionRepository)) + return new ArtifactResolver(new ObjectMapper(), executionRepository, + new ContextParameterProcessor()) + } + + def "should resolve expressions in stage-inlined artifacts"() { + setup: + def execution = pipeline { + stage { + name = "upstream stage" + type = "stage1" + refId = "1" + } + } + + execution.trigger = new DefaultTrigger('manual') + execution.trigger.other['buildNumber'] = 100 + execution.trigger.artifacts.add(Artifact.builder().type('http/file').name('build/libs/my-jar-100.jar').build()) + + when: + def artifact = makeArtifactResolver().getBoundArtifactForStage(execution.stages[0], null, Artifact.builder() + .type('http/file') + .name('build/libs/my-jar-${trigger[\'buildNumber\']}.jar') + .build()) + + then: + artifact.name == 'build/libs/my-jar-100.jar' } def "should find upstream artifacts in small pipeline"() { @@ -45,28 +78,28 @@ class ArtifactResolverSpec extends Specification { artifacts.find { it.type == "extra" } != null where: - execution = pipeline { - stage { - name = "upstream stage" - type = "stage1" - refId = "1" - outputs.artifacts = [new Artifact(type: "1")] - } - stage { - name = "upstream stage" - type = "stage2" - refId = "2" - requisiteStageRefIds = ["1"] - outputs.artifacts = [new Artifact(type: "2"), new Artifact(type: "extra")] - } - stage { - name = "desired" - requisiteStageRefIds = ["2"] - } + execution = pipeline { + stage { + name = "upstream stage" + type = "stage1" + refId = "1" + outputs.artifacts = [new Artifact(type: "1")] + } + stage { + name = "upstream stage" + type = "stage2" + refId = "2" + requisiteStageRefIds = ["1"] + outputs.artifacts = [new Artifact(type: "2"), new Artifact(type: "extra")] } + stage { + name = "desired" + requisiteStageRefIds = ["2"] + } + } } - def "should find 
upstream artifacts only" () { + def "should find upstream artifacts only"() { when: def desired = execution.getStages().find { it.name == "desired" } def artifactResolver = makeArtifactResolver() @@ -98,7 +131,7 @@ class ArtifactResolverSpec extends Specification { } } - def "should find artifacts from trigger and upstream stages" () { + def "should find artifacts from trigger and upstream stages"() { when: def execution = pipeline { stage { @@ -124,7 +157,7 @@ class ArtifactResolverSpec extends Specification { artifacts.find { it.type == "trigger" } != null } - def "should find no artifacts" () { + def "should find no artifacts"() { when: def execution = pipeline { stage { @@ -146,7 +179,7 @@ class ArtifactResolverSpec extends Specification { artifacts.size == 0 } - def "should find a bound artifact from upstream stages" () { + def "should find a bound artifact from upstream stages"() { when: def execution = pipeline { stage { @@ -154,8 +187,8 @@ class ArtifactResolverSpec extends Specification { type = "stage1" refId = "1" outputs.resolvedExpectedArtifacts = [ - new ExpectedArtifact(id: "1", boundArtifact: new Artifact(type: "correct")), - new ExpectedArtifact(id: "2", boundArtifact: new Artifact(type: "incorrect")) + new ExpectedArtifact(id: "1", boundArtifact: new Artifact(type: "correct")), + new ExpectedArtifact(id: "2", boundArtifact: new Artifact(type: "incorrect")) ] } stage { @@ -175,7 +208,7 @@ class ArtifactResolverSpec extends Specification { artifact.type == "correct" } - def "should find a bound artifact from a trigger" () { + def "should find a bound artifact from a trigger"() { when: def execution = pipeline { stage { @@ -183,7 +216,7 @@ class ArtifactResolverSpec extends Specification { type = "stage1" refId = "1" outputs.resolvedExpectedArtifacts = [ - new ExpectedArtifact(id: "2", boundArtifact: new Artifact(type: "incorrect")) + new ExpectedArtifact(id: "2", boundArtifact: new Artifact(type: "incorrect")) ] } stage { @@ -237,7 +270,7 @@ class 
ArtifactResolverSpec extends Specification { new ExpectedArtifact(matchArtifact: new Artifact(type: "docker/.*", name: "none")) | [new Artifact(type: "docker/image", name: "bad"), new Artifact(type: "docker/image", name: "image")] } - def "should find all artifacts from an execution, in reverse order" () { + def "should find all artifacts from an execution, in reverse order"() { when: def execution = pipeline { stage { @@ -281,4 +314,82 @@ class ArtifactResolverSpec extends Specification { [new ExpectedArtifact(matchArtifact: new Artifact(type: "docker/.*"), usePriorArtifact: true)] | [new Artifact(type: "bad")] | [new Artifact(type: "docker/image")] || [new Artifact(type: "docker/image")] [new ExpectedArtifact(matchArtifact: new Artifact(type: "google/.*"), usePriorArtifact: true), new ExpectedArtifact(matchArtifact: new Artifact(type: "docker/.*"), useDefaultArtifact: true, defaultArtifact: new Artifact(type: "docker/image"))] | [new Artifact(type: "bad"), new Artifact(type: "more bad")] | [new Artifact(type: "google/image")] || [new Artifact(type: "docker/image"), new Artifact(type: "google/image")] } + + def "resolveArtifacts sets the bound artifact on an expected artifact"() { + given: + def matchArtifact = Artifact.builder().type("docker/.*").build() + def expectedArtifact = ExpectedArtifact.builder().matchArtifact(matchArtifact).build() + def receivedArtifact = Artifact.builder().name("my-artifact").type("docker/image").build() + def pipeline = [ + id: "abc", + trigger: [:], + expectedArtifacts: [expectedArtifact], + receivedArtifacts: [receivedArtifact], + ] + def artifactResolver = makeArtifactResolver() + + when: + artifactResolver.resolveArtifacts(pipeline) + + then: + pipeline.expectedArtifacts.size() == 1 + pipeline.expectedArtifacts[0].boundArtifact == receivedArtifact + } + + def "resolveArtifacts adds received artifacts to the trigger, skipping duplicates"() { + given: + def matchArtifact = 
Artifact.builder().name("my-pipeline-artifact").type("docker/.*").build() + def expectedArtifact = ExpectedArtifact.builder().matchArtifact(matchArtifact).build() + def receivedArtifact = Artifact.builder().name("my-pipeline-artifact").type("docker/image").build() + def triggerArtifact = Artifact.builder().name("my-trigger-artifact").type("docker/image").build() + def bothArtifact = Artifact.builder().name("my-both-artifact").type("docker/image").build() + def pipeline = [ + id: "abc", + trigger: [ + artifacts: [triggerArtifact, bothArtifact] + ], + expectedArtifacts: [expectedArtifact], + receivedArtifacts: [receivedArtifact, bothArtifact], + ] + def artifactResolver = makeArtifactResolver() + + when: + artifactResolver.resolveArtifacts(pipeline) + + then: + List triggerArtifacts = extractTriggerArtifacts(pipeline.trigger) + triggerArtifacts.size() == 3 + triggerArtifacts == [receivedArtifact, bothArtifact, triggerArtifact] + } + + def "resolveArtifacts is idempotent"() { + given: + def matchArtifact = Artifact.builder().name("my-pipeline-artifact").type("docker/.*").build() + def expectedArtifact = ExpectedArtifact.builder().matchArtifact(matchArtifact).build() + def receivedArtifact = Artifact.builder().name("my-pipeline-artifact").type("docker/image").build() + def triggerArtifact = Artifact.builder().name("my-trigger-artifact").type("docker/image").build() + def bothArtifact = Artifact.builder().name("my-both-artifact").type("docker/image").build() + def pipeline = [ + id: "abc", + trigger: [ + artifacts: [triggerArtifact, bothArtifact] + ], + expectedArtifacts: [expectedArtifact], + receivedArtifacts: [receivedArtifact, bothArtifact], + ] + def artifactResolver = makeArtifactResolver() + + when: + artifactResolver.resolveArtifacts(pipeline) + List initialArtifacts = extractTriggerArtifacts(pipeline.trigger) + artifactResolver.resolveArtifacts(pipeline) + List finalArtifacts = extractTriggerArtifacts(pipeline.trigger) + + then: + initialArtifacts == 
finalArtifacts + } + + private List<Artifact> extractTriggerArtifacts(Map trigger) { + return objectMapper.convertValue(trigger.artifacts, new TypeReference<List<Artifact>>(){}); + } } diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/BuildDetailExtractorSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/BuildDetailExtractorSpec.groovy index 9eaec9225a..13185460a9 100644 --- a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/BuildDetailExtractorSpec.groovy +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/BuildDetailExtractorSpec.groovy @@ -15,17 +15,13 @@ */ package com.netflix.spinnaker.orca.pipeline.util -import com.fasterxml.jackson.databind.ObjectMapper -import org.springframework.beans.factory.annotation.Autowired + import spock.lang.Shared import spock.lang.Specification import spock.lang.Unroll class BuildDetailExtractorSpec extends Specification { - @Autowired - ObjectMapper mapper - @Shared BuildDetailExtractor buildDetailExtractor = new BuildDetailExtractor() @@ -33,7 +29,7 @@ class BuildDetailExtractorSpec extends Specification { def "Default detail from buildInfo"() { when: - buildDetailExtractor.tryToExtractBuildDetails(buildInfo, result) + buildDetailExtractor.tryToExtractJenkinsBuildDetails(buildInfo, result) then: result == expectedResult @@ -48,7 +44,7 @@ class BuildDetailExtractorSpec extends Specification { def "Legacy Jenkins detail from the url"() { when: - buildDetailExtractor.tryToExtractBuildDetails(buildInfo, result) + buildDetailExtractor.tryToExtractJenkinsBuildDetails(buildInfo, result) then: result == expectedResult @@ -64,7 +60,7 @@ class BuildDetailExtractorSpec extends Specification { def "Extract detail, missing fields and edge cases"() { when: - buildDetailExtractor.tryToExtractBuildDetails(buildInfo, result) + buildDetailExtractor.tryToExtractJenkinsBuildDetails(buildInfo, result) then: result == expectedResult diff --git
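The trigger-merge behavior asserted in these specs — received artifacts prepended to existing trigger artifacts, duplicates dropped, and the whole operation idempotent — can be sketched with a `LinkedHashSet`. `Artifact` here is a simplified stand-in for the kork model, not the real class:

```java
import java.util.*;

public class TriggerArtifactMerge {
    // Simplified stand-in for com.netflix.spinnaker.kork.artifacts.model.Artifact.
    record Artifact(String name, String type) {}

    // Received artifacts come first; trigger artifacts already present are kept once.
    static List<Artifact> merge(List<Artifact> received, List<Artifact> trigger) {
        LinkedHashSet<Artifact> merged = new LinkedHashSet<>(received);
        merged.addAll(trigger);
        return new ArrayList<>(merged);
    }

    public static void main(String[] args) {
        Artifact recv = new Artifact("my-pipeline-artifact", "docker/image");
        Artifact both = new Artifact("my-both-artifact", "docker/image");
        Artifact trig = new Artifact("my-trigger-artifact", "docker/image");
        List<Artifact> out = merge(List.of(recv, both), List.of(trig, both));
        System.out.println(out.size()); // 3: duplicate "both" artifact dropped
        // Idempotent: merging the result with itself changes nothing.
        System.out.println(merge(out, out).equals(out));
    }
}
```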
a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/ContextParameterProcessorSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/ContextParameterProcessorSpec.groovy index 8529eee312..103a22345f 100644 --- a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/ContextParameterProcessorSpec.groovy +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/ContextParameterProcessorSpec.groovy @@ -22,17 +22,13 @@ import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionEvaluationSumma import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionTransform import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionsSupport import com.netflix.spinnaker.orca.pipeline.expressions.SpelHelperFunctionException -import com.netflix.spinnaker.orca.pipeline.model.DefaultTrigger -import com.netflix.spinnaker.orca.pipeline.model.Execution -import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger +import com.netflix.spinnaker.orca.pipeline.model.* import org.springframework.expression.spel.SpelEvaluationException import spock.lang.Specification import spock.lang.Subject import spock.lang.Unroll import static com.netflix.spinnaker.orca.ExecutionStatus.SUCCEEDED -import static com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger.BuildInfo -import static com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger.SourceControl import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.pipeline import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage @@ -306,7 +302,7 @@ class ContextParameterProcessorSpec extends Specification { @Unroll def "correctly compute scmInfo attribute"() { given: - context.trigger.buildInfo = new BuildInfo("name", 1, "http://jenkins", [], scm, false, "SUCCESS") + context.trigger.buildInfo = new JenkinsBuildInfo("name", 1, "http://jenkins", "SUCCESS", [], scm) def source = ['branch': '${scmInfo.branch}'] @@ -547,6 +543,7 @@ class 
ContextParameterProcessorSpec extends Specification { result.deployed.serverGroup == ["flex-test-v043", "flex-prestaging-v011"] result.deployed.region == ["us-east-1", "us-west-1"] result.deployed.ami == ["ami-06362b6e", "ami-f759b7b3"] + result.deployed.deployments == [ [[ "serverGroupName": "flex-test-v043" ]], [] ] when: 'specifying a stage name' source = ['deployed': '${#deployedServerGroups("Deploy in us-east-1")}'] @@ -557,7 +554,7 @@ class ContextParameterProcessorSpec extends Specification { result.deployed.serverGroup == ["flex-test-v043"] result.deployed.region == ["us-east-1"] result.deployed.ami == ["ami-06362b6e"] - + result.deployed.deployments == [ [[ "serverGroupName": "flex-test-v043" ]] ] where: execution = pipeline { @@ -628,6 +625,19 @@ class ContextParameterProcessorSpec extends Specification { "suspendedProcesses": [], "terminationPolicies": [ "Default" + ], + "kato.tasks": [ + [ + "resultObjects": [ + [ + "deployments": [ + [ + "serverGroupName": "flex-test-v043" + ] + ] + ] + ] + ] ] ) } diff --git a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/PackageInfoSpec.groovy b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/PackageInfoSpec.groovy index 495e913829..dae64ad8a1 100644 --- a/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/PackageInfoSpec.groovy +++ b/orca-core/src/test/groovy/com/netflix/spinnaker/orca/pipeline/util/PackageInfoSpec.groovy @@ -15,19 +15,16 @@ */ package com.netflix.spinnaker.orca.pipeline.util -import com.netflix.spinnaker.kork.artifacts.model.Artifact - -import java.util.regex.Pattern import com.fasterxml.jackson.databind.ObjectMapper -import com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger -import com.netflix.spinnaker.orca.pipeline.model.PipelineTrigger -import com.netflix.spinnaker.orca.pipeline.model.Stage +import com.netflix.spinnaker.kork.artifacts.model.Artifact +import com.netflix.spinnaker.orca.pipeline.model.* import 
com.netflix.spinnaker.orca.test.model.ExecutionBuilder import org.springframework.beans.factory.annotation.Autowired import spock.lang.Specification import spock.lang.Unroll -import static com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger.BuildInfo -import static com.netflix.spinnaker.orca.pipeline.model.JenkinsTrigger.JenkinsArtifact + +import java.util.regex.Pattern + import static com.netflix.spinnaker.orca.pipeline.util.PackageType.DEB import static com.netflix.spinnaker.orca.pipeline.util.PackageType.RPM import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.pipeline @@ -65,7 +62,7 @@ class PackageInfoSpec extends Specification { given: def execution = pipeline { trigger = new JenkinsTrigger("master", "job", 1, null) - trigger.buildInfo = new BuildInfo("name", 1, "http://jenkins", [new JenkinsArtifact("testFileName", ".")], [], false, "SUCCESS") + trigger.buildInfo = new JenkinsBuildInfo("name", 1, "http://jenkins", "SUCCESS", [new JenkinsArtifact("testFileName", ".")]) stage { context = [buildInfo: [name: "someName"], package: "testPackageName"] } @@ -395,7 +392,7 @@ class PackageInfoSpec extends Specification { given: def pipeline = pipeline { trigger = new JenkinsTrigger("master", "job", 1, null) - trigger.buildInfo = new BuildInfo("name", 1, "http://jenkins", [new JenkinsArtifact("api_1.1.1-h01.sha123_all.deb", ".")], [], false, "SUCCESS") + trigger.buildInfo = new JenkinsBuildInfo("name", 1, "http://jenkins", "SUCCESS", [new JenkinsArtifact("api_1.1.1-h01.sha123_all.deb", ".")]) stage { refId = "1" context["package"] = "another_package" @@ -431,7 +428,7 @@ class PackageInfoSpec extends Specification { given: def pipeline = pipeline { trigger = new JenkinsTrigger("master", "job", 1, null) - trigger.buildInfo = new BuildInfo("name", 1, "http://jenkins", [new JenkinsArtifact("api_2.2.2-h02.sha321_all.deb", ".")], [], false, "SUCCESS") + trigger.buildInfo = new JenkinsBuildInfo("name", 1, "http://jenkins", "SUCCESS", [new 
JenkinsArtifact("api_2.2.2-h02.sha321_all.deb", ".")]) stage { context = [package: 'api'] } @@ -518,10 +515,10 @@ class PackageInfoSpec extends Specification { pipelineTrigger << [ new PipelineTrigger(ExecutionBuilder.pipeline { trigger = new JenkinsTrigger("master", "job", 1, null) - trigger.buildInfo = new BuildInfo("name", 1, "http://jenkins", [new JenkinsArtifact("api_1.1.1-h01.sha123_all.deb", ".")], [], false, "SUCCESS") + trigger.buildInfo = new JenkinsBuildInfo("name", 1, "http://jenkins", "SUCCESS", [new JenkinsArtifact("api_1.1.1-h01.sha123_all.deb", ".")]) }), new JenkinsTrigger("master", "job", 1, null).with { - it.buildInfo = new BuildInfo("name", 1, "http://jenkins", [new JenkinsArtifact("api_1.1.1-h01.sha123_all.deb", ".")], [], false, "SUCCESS") + it.buildInfo = new JenkinsBuildInfo("name", 1, "http://jenkins", "SUCCESS", [new JenkinsArtifact("api_1.1.1-h01.sha123_all.deb", ".")]) it } ] @@ -530,7 +527,7 @@ class PackageInfoSpec extends Specification { def "should fetch artifacts from upstream stage when not specified on pipeline trigger"() { given: def jenkinsTrigger = new JenkinsTrigger("master", "job", 1, "propertyFile") - jenkinsTrigger.buildInfo = new BuildInfo("name", 0, "url", [], [], false, "result") + jenkinsTrigger.buildInfo = new JenkinsBuildInfo("name", 0, "url", "result") def pipeline = pipeline { trigger = jenkinsTrigger // has no artifacts! diff --git a/orca-core/src/test/java/com/netflix/spinnaker/orca/pipeline/model/JenkinsTriggerTest.java b/orca-core/src/test/java/com/netflix/spinnaker/orca/pipeline/model/JenkinsTriggerTest.java new file mode 100644 index 0000000000..41063711a3 --- /dev/null +++ b/orca-core/src/test/java/com/netflix/spinnaker/orca/pipeline/model/JenkinsTriggerTest.java @@ -0,0 +1,76 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.pipeline.model; + +import com.fasterxml.jackson.databind.DeserializationFeature; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.SerializationFeature; +import org.junit.jupiter.api.Test; + +import java.io.IOException; + +import static org.assertj.core.api.Assertions.assertThat; + +class JenkinsTriggerTest { + private String trigger = "{" + + "\"type\": \"jenkins\"," + + "\"master\": \"my-jenkins-master\"," + + "\"job\": \"my-job\"," + + "\"buildNumber\": 50," + + "\"buildInfo\": {" + + " \"artifacts\": [" + + " {" + + " \"displayPath\": \"props\"," + + " \"fileName\": \"props\"," + + " \"relativePath\": \"properties/props\"" + + " }" + + " ]," + + " \"building\": false," + + " \"duration\": 246," + + " \"fullDisplayName\": \"PropertiesTest #106\"," + + " \"name\": \"PropertiesTest\"," + + " \"number\": 106," + + " \"result\": \"SUCCESS\"," + + " \"scm\": [" + + " {" + + " \"branch\": \"master\"," + + " \"name\": \"refs/remotes/origin/master\"," + + " \"remoteUrl\": \"https://github.com/ezimanyi/docs-site-manifest\"," + + " \"sha1\": \"8d0e9525df913c3e42a070f515155e6de4d03f86\"" + + " }" + + " ]," + + " \"timestamp\": \"1552776586747\"," + + " \"url\": \"http://localhost:5656/job/PropertiesTest/106/\"" + + "}" + + "}"; + + /** + * Trigger serialization preserves generic type of {@link BuildInfo} + * through deserialization/serialization + */ + @Test + void jenkinsTriggerSerialization() throws IOException { + ObjectMapper mapper = new ObjectMapper() + 
.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES) + .enable(SerializationFeature.INDENT_OUTPUT); + JenkinsTrigger jenkinsTrigger = mapper.readValue(trigger, JenkinsTrigger.class); + String triggerSerialized = mapper.writeValueAsString(jenkinsTrigger); + assertThat(triggerSerialized) + .contains("\"fileName\" : \"props\"") + .contains("\"relativePath\" : \"properties/props\""); + } +} \ No newline at end of file diff --git a/orca-dry-run/src/main/kotlin/com/netflix/spinnaker/config/DryRunConfiguration.kt b/orca-dry-run/src/main/kotlin/com/netflix/spinnaker/config/DryRunConfiguration.kt index bdd294ddd8..14a88797a7 100644 --- a/orca-dry-run/src/main/kotlin/com/netflix/spinnaker/config/DryRunConfiguration.kt +++ b/orca-dry-run/src/main/kotlin/com/netflix/spinnaker/config/DryRunConfiguration.kt @@ -16,6 +16,7 @@ package com.netflix.spinnaker.config +import com.netflix.spinnaker.orca.StageResolver import com.netflix.spinnaker.orca.dryrun.DryRunStageDefinitionBuilderFactory import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilderFactory @@ -33,10 +34,10 @@ import org.springframework.context.annotation.Configuration class DryRunConfiguration { @Bean fun dryRunStageDefinitionBuilderFactory( - stageDefinitionBuilders: Collection + stageResolver: StageResolver ): StageDefinitionBuilderFactory { log.info("Dry run trigger support enabled") - return DryRunStageDefinitionBuilderFactory(stageDefinitionBuilders) + return DryRunStageDefinitionBuilderFactory(stageResolver) } private val log = LoggerFactory.getLogger(javaClass) diff --git a/orca-dry-run/src/main/kotlin/com/netflix/spinnaker/orca/dryrun/DryRunStageDefinitionBuilderFactory.kt b/orca-dry-run/src/main/kotlin/com/netflix/spinnaker/orca/dryrun/DryRunStageDefinitionBuilderFactory.kt index ef8f9ecc2d..181e6509c7 100644 --- a/orca-dry-run/src/main/kotlin/com/netflix/spinnaker/orca/dryrun/DryRunStageDefinitionBuilderFactory.kt +++ 
b/orca-dry-run/src/main/kotlin/com/netflix/spinnaker/orca/dryrun/DryRunStageDefinitionBuilderFactory.kt @@ -16,14 +16,15 @@ package com.netflix.spinnaker.orca.dryrun +import com.netflix.spinnaker.orca.StageResolver import com.netflix.spinnaker.orca.pipeline.CheckPreconditionsStage import com.netflix.spinnaker.orca.pipeline.DefaultStageDefinitionBuilderFactory import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder import com.netflix.spinnaker.orca.pipeline.model.Stage class DryRunStageDefinitionBuilderFactory( - stageDefinitionBuilders: Collection<StageDefinitionBuilder> -) : DefaultStageDefinitionBuilderFactory(stageDefinitionBuilders) { + stageResolver: StageResolver +) : DefaultStageDefinitionBuilderFactory(stageResolver) { override fun builderFor(stage: Stage): StageDefinitionBuilder = stage.execution.let { execution -> @@ -37,7 +38,13 @@ class DryRunStageDefinitionBuilderFactory( } private val Stage.shouldExecuteNormallyInDryRun: Boolean - get() = isManualJudgment || isPipeline || isExpressionPrecondition || isFindImage || isDetermineTargetServerGroup || isRollbackCluster + get() = isManualJudgment || + isPipeline || + isExpressionPrecondition || + isFindImage || + isDetermineTargetServerGroup || + isRollbackCluster || + isEvalVariables private val Stage.isManualJudgment: Boolean get() = type == "manualJudgment" @@ -68,4 +75,7 @@ class DryRunStageDefinitionBuilderFactory( private val Stage.isRollbackCluster: Boolean get() = type == "rollbackCluster" + + private val Stage.isEvalVariables: Boolean + get() = type == "evaluateVariables" } diff --git a/orca-dry-run/src/main/kotlin/com/netflix/spinnaker/orca/dryrun/DryRunTask.kt index 9e4f0a3775..d2538e29b1 100644 --- a/orca-dry-run/src/main/kotlin/com/netflix/spinnaker/orca/dryrun/DryRunTask.kt +++ b/orca-dry-run/src/main/kotlin/com/netflix/spinnaker/orca/dryrun/DryRunTask.kt @@ -34,7 +34,7 @@ class DryRunTask( stage.execution.also { execution ->
log.info("Dry run of ${execution.application} ${execution.name} ${execution.id} stage ${stage.type} ${stage.refId} outputting $outputs") } - TaskResult(SUCCEEDED, emptyMap(), outputs) + TaskResult.builder(SUCCEEDED).outputs(outputs).build() } private fun Stage.generateOutputs(): Map = diff --git a/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/pipeline/ManualJudgmentStage.groovy b/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/pipeline/ManualJudgmentStage.groovy index 69885395be..674d95f2da 100644 --- a/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/pipeline/ManualJudgmentStage.groovy +++ b/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/pipeline/ManualJudgmentStage.groovy @@ -90,7 +90,7 @@ class ManualJudgmentStage implements StageDefinitionBuilder, AuthenticatedStage Map outputs = processNotifications(stage, stageData, notificationState) - return new TaskResult(executionStatus, outputs) + return TaskResult.builder(executionStatus).context(outputs).build() } Map processNotifications(Stage stage, StageData stageData, String notificationState) { diff --git a/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingExecutionListener.groovy b/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingExecutionListener.groovy index 32121a93d1..20d985bfff 100644 --- a/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingExecutionListener.groovy +++ b/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingExecutionListener.groovy @@ -57,14 +57,16 @@ class EchoNotifyingExecutionListener implements ExecutionListener { if (execution.type == PIPELINE) { addApplicationNotifications(execution) } - echoService.recordEvent( - details: [ - source : "orca", - type : "orca:${execution.type}:starting".toString(), - application: execution.application, - ], - content: buildContent(execution) - ) + AuthenticatedRequest.allowAnonymous({ + 
echoService.recordEvent( + details: [ + source : "orca", + type : "orca:${execution.type}:starting".toString(), + application: execution.application, + ], + content: buildContent(execution) + ) + }) } } catch (Exception e) { log.error("Failed to send pipeline start event: ${execution?.id}", e) @@ -81,14 +83,16 @@ class EchoNotifyingExecutionListener implements ExecutionListener { if (execution.type == PIPELINE) { addApplicationNotifications(execution) } - echoService.recordEvent( - details: [ - source : "orca", - type : "orca:${execution.type}:${wasSuccessful ? "complete" : "failed"}".toString(), - application: execution.application, - ], - content: buildContent(execution) - ) + AuthenticatedRequest.allowAnonymous({ + echoService.recordEvent( + details: [ + source : "orca", + type : "orca:${execution.type}:${wasSuccessful ? "complete" : "failed"}".toString(), + application: execution.application, + ], + content: buildContent(execution) + ) + }) } } catch (Exception e) { log.error("Failed to send pipeline end event: ${execution?.id}", e) diff --git a/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingStageListener.groovy b/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingStageListener.groovy index f160de2e32..5bd1305933 100644 --- a/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingStageListener.groovy +++ b/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingStageListener.groovy @@ -24,6 +24,7 @@ import com.netflix.spinnaker.orca.pipeline.model.Stage import com.netflix.spinnaker.orca.pipeline.model.Task import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor +import com.netflix.spinnaker.security.AuthenticatedRequest import groovy.transform.CompileDynamic import groovy.transform.CompileStatic import groovy.util.logging.Slf4j @@ -31,8 +32,6 @@ import org.slf4j.MDC 
import org.springframework.beans.factory.annotation.Autowired import static com.netflix.spinnaker.orca.ExecutionStatus.* import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.ORCHESTRATION -import static com.netflix.spinnaker.security.AuthenticatedRequest.SPINNAKER_EXECUTION_ID -import static com.netflix.spinnaker.security.AuthenticatedRequest.SPINNAKER_USER /** * Converts execution events to Echo events. @@ -127,12 +126,14 @@ class EchoNotifyingStageListener implements StageListener { } try { - MDC.put(SPINNAKER_EXECUTION_ID, stage.execution.id); - MDC.put(SPINNAKER_USER, stage.execution?.authentication?.user ?: "anonymous") - echoService.recordEvent(event) + MDC.put(AuthenticatedRequest.Header.EXECUTION_ID.header, stage.execution.id) + MDC.put(AuthenticatedRequest.Header.USER.header, stage.execution?.authentication?.user ?: "anonymous") + AuthenticatedRequest.allowAnonymous({ + echoService.recordEvent(event) + }) } finally { - MDC.remove(SPINNAKER_EXECUTION_ID) - MDC.remove(SPINNAKER_USER) + MDC.remove(AuthenticatedRequest.Header.EXECUTION_ID.header) + MDC.remove(AuthenticatedRequest.Header.USER.header) } } catch (Exception e) { log.error("Failed to send ${type} event ${phase} ${stage.execution.id} ${maybeTask.map { Task task -> task.name }}", e) diff --git a/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/tasks/CreateJiraIssueTask.java b/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/tasks/CreateJiraIssueTask.java index 8a68054396..0196396791 100644 --- a/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/tasks/CreateJiraIssueTask.java +++ b/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/tasks/CreateJiraIssueTask.java @@ -59,9 +59,6 @@ public TaskResult execute(@Nonnull Stage stage) { .ifPresent(createIssueRequest::setReporter); CreateJiraIssueResponse createJiraIssueResponse = jiraService.createJiraIssue(createIssueRequest); - return new TaskResult( - ExecutionStatus.SUCCEEDED, - 
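The MDC handling in EchoNotifyingStageListener above follows a put-in-try / remove-in-finally discipline so that context never leaks across requests on a reused thread. A minimal sketch of that pattern, using a ThreadLocal map as a stand-in for `org.slf4j.MDC` (the header key here is illustrative):

```java
import java.util.*;

public class MdcPattern {
    // Simplified stand-in for org.slf4j.MDC: a per-thread key/value context.
    static final ThreadLocal<Map<String, String>> CTX = ThreadLocal.withInitial(HashMap::new);

    static void put(String k, String v) { CTX.get().put(k, v); }
    static String get(String k) { return CTX.get().get(k); }
    static void remove(String k) { CTX.get().remove(k); }

    static String recordEvent() {
        // Downstream code (e.g. an HTTP interceptor) reads the context here.
        return "user=" + get("X-SPINNAKER-USER");
    }

    public static void main(String[] args) {
        String event;
        put("X-SPINNAKER-USER", "anonymous");
        try {
            event = recordEvent();
        } finally {
            remove("X-SPINNAKER-USER"); // always cleaned up, even if recordEvent throws
        }
        System.out.println(event + " after=" + get("X-SPINNAKER-USER"));
    }
}
```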
ImmutableMap.of("createJiraIssueResponse", createJiraIssueResponse) - ); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(ImmutableMap.of("createJiraIssueResponse", createJiraIssueResponse)).build(); } } diff --git a/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/tasks/PageApplicationOwnerTask.groovy b/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/tasks/PageApplicationOwnerTask.groovy index 6cd6e1f424..2865d283d6 100644 --- a/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/tasks/PageApplicationOwnerTask.groovy +++ b/orca-echo/src/main/groovy/com/netflix/spinnaker/orca/echo/tasks/PageApplicationOwnerTask.groovy @@ -97,7 +97,7 @@ class PageApplicationOwnerTask implements RetryableTask { ) log.info("Sent page (key(s): ${allPagerDutyKeys.join(", ")}, message: '${stage.context.message}')") - return new TaskResult(SUCCEEDED) + return TaskResult.ofStatus(SUCCEEDED) } private String fetchApplicationPagerDutyKey(String applicationName) { diff --git a/orca-echo/src/test/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingExecutionListenerSpec.groovy b/orca-echo/src/test/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingExecutionListenerSpec.groovy index 22a22789c8..b31ae4e078 100644 --- a/orca-echo/src/test/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingExecutionListenerSpec.groovy +++ b/orca-echo/src/test/groovy/com/netflix/spinnaker/orca/echo/spring/EchoNotifyingExecutionListenerSpec.groovy @@ -190,8 +190,8 @@ class EchoNotifyingExecutionListenerSpec extends Specification { pipeline.notifications == [slackPipes] 1 * front50Service.getApplicationNotifications("myapp") >> { - assert MDC.get(AuthenticatedRequest.SPINNAKER_USER) == "user@schibsted.com" - assert MDC.get(AuthenticatedRequest.SPINNAKER_ACCOUNTS) == "someAccount,anotherAccount" + assert MDC.get(AuthenticatedRequest.Header.USER.header) == "user@schibsted.com" + assert MDC.get(AuthenticatedRequest.Header.ACCOUNTS.header) == 
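Several hunks in this patch apply the same mechanical migration: `new TaskResult(status, ...)` becomes `TaskResult.builder(status).context(...).outputs(...).build()`, or `TaskResult.ofStatus(status)` when there is no context. A simplified sketch of that builder shape — stand-in classes for illustration, not orca's actual implementation:

```java
import java.util.*;

public class TaskResultSketch {
    enum ExecutionStatus { SUCCEEDED, TERMINAL }

    static final class TaskResult {
        final ExecutionStatus status;
        final Map<String, Object> context;
        final Map<String, Object> outputs;

        private TaskResult(ExecutionStatus status, Map<String, Object> context,
                           Map<String, Object> outputs) {
            this.status = status;
            this.context = context;
            this.outputs = outputs;
        }

        static Builder builder(ExecutionStatus status) { return new Builder(status); }

        // Convenience for results that carry no context or outputs.
        static TaskResult ofStatus(ExecutionStatus status) { return builder(status).build(); }

        static final class Builder {
            private final ExecutionStatus status;
            private Map<String, Object> context = Map.of();
            private Map<String, Object> outputs = Map.of();

            Builder(ExecutionStatus status) { this.status = status; }
            Builder context(Map<String, Object> context) { this.context = context; return this; }
            Builder outputs(Map<String, Object> outputs) { this.outputs = outputs; return this; }
            TaskResult build() { return new TaskResult(status, context, outputs); }
        }
    }

    public static void main(String[] args) {
        TaskResult r = TaskResult.builder(ExecutionStatus.SUCCEEDED)
            .context(Map.of("createJiraIssueResponse", "ok"))
            .build();
        System.out.println(r.status + " " + r.context.get("createJiraIssueResponse"));
    }
}
```

The builder avoids the ambiguity of multiple positional-map constructors (context vs. outputs), which is presumably why the patch converts every call site.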
"someAccount,anotherAccount" return notifications } 1 * echoService.recordEvent(_) diff --git a/orca-extensionpoint/orca-extensionpoint.gradle b/orca-extensionpoint/orca-extensionpoint.gradle index e69de29bb2..3f53a362ea 100644 --- a/orca-extensionpoint/orca-extensionpoint.gradle +++ b/orca-extensionpoint/orca-extensionpoint.gradle @@ -0,0 +1,5 @@ +apply from: "$rootDir/gradle/spock.gradle" + +dependencies { + compile "com.google.code.findbugs:jsr305:3.0.2" +} diff --git a/orca-extensionpoint/src/main/java/com/netflix/spinnaker/orca/extensionpoint/pipeline/ExecutionPreprocessor.java b/orca-extensionpoint/src/main/java/com/netflix/spinnaker/orca/extensionpoint/pipeline/ExecutionPreprocessor.java new file mode 100644 index 0000000000..d7a6027709 --- /dev/null +++ b/orca-extensionpoint/src/main/java/com/netflix/spinnaker/orca/extensionpoint/pipeline/ExecutionPreprocessor.java @@ -0,0 +1,40 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.extensionpoint.pipeline; + +import javax.annotation.Nonnull; +import java.util.Map; + +/** + * A preprocessor that can modify an Execution upon initial receipt of the configuration. + */ +public interface ExecutionPreprocessor { + + /** + * Returns whether or not the preprocess can handle the inbound execution. + */ + boolean supports(@Nonnull Map execution, @Nonnull Type type); + + /** + * Allows modification of an execution configuration. 
+ */ + @Nonnull Map<String, Object> process(@Nonnull Map<String, Object> execution); + + enum Type { + PIPELINE, + ORCHESTRATION + } +} diff --git a/orca-flex/src/main/groovy/com/netflix/spinnaker/orca/flex/tasks/AbstractElasticIpTask.groovy b/orca-flex/src/main/groovy/com/netflix/spinnaker/orca/flex/tasks/AbstractElasticIpTask.groovy index a81d42c666..bb71ec7588 100644 --- a/orca-flex/src/main/groovy/com/netflix/spinnaker/orca/flex/tasks/AbstractElasticIpTask.groovy +++ b/orca-flex/src/main/groovy/com/netflix/spinnaker/orca/flex/tasks/AbstractElasticIpTask.groovy @@ -44,7 +44,7 @@ abstract class AbstractElasticIpTask implements Task { "elastic.ip.assignment": performRequest(stage.mapTo(StageData)) ] - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } static class StageData { diff --git a/orca-front50/orca-front50.gradle b/orca-front50/orca-front50.gradle index d8862158b3..83f58bf558 100644 --- a/orca-front50/orca-front50.gradle +++ b/orca-front50/orca-front50.gradle @@ -18,7 +18,9 @@ apply from: "$rootDir/gradle/groovy.gradle" dependencies { compile project(":orca-retrofit") + compile "com.netflix.servo:servo-core:0.12.21" compileOnly spinnaker.dependency("lombok") + annotationProcessor spinnaker.dependency("lombok") spinnaker.group("fiat") testCompile project(":orca-test-groovy") diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/DependentPipelineStarter.groovy b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/DependentPipelineStarter.groovy index ca17e327fd..5aa56b9957 100644 --- a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/DependentPipelineStarter.groovy +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/DependentPipelineStarter.groovy @@ -16,13 +16,11 @@ package com.netflix.spinnaker.orca.front50 +import com.fasterxml.jackson.databind.ObjectMapper import com.netflix.spectator.api.Id import
com.netflix.spectator.api.Registry - -import java.util.concurrent.Callable -import com.fasterxml.jackson.databind.ObjectMapper import com.netflix.spinnaker.kork.web.exceptions.InvalidRequestException -import com.netflix.spinnaker.orca.extensionpoint.pipeline.PipelinePreprocessor +import com.netflix.spinnaker.orca.extensionpoint.pipeline.ExecutionPreprocessor import com.netflix.spinnaker.orca.pipeline.ExecutionLauncher import com.netflix.spinnaker.orca.pipeline.model.Execution import com.netflix.spinnaker.orca.pipeline.model.Trigger @@ -36,6 +34,9 @@ import org.springframework.beans.factory.annotation.Autowired import org.springframework.context.ApplicationContext import org.springframework.context.ApplicationContextAware import org.springframework.stereotype.Component + +import java.util.concurrent.Callable + import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE @Component @@ -44,7 +45,7 @@ class DependentPipelineStarter implements ApplicationContextAware { private ApplicationContext applicationContext ObjectMapper objectMapper ContextParameterProcessor contextParameterProcessor - List<PipelinePreprocessor> pipelinePreprocessors + List<ExecutionPreprocessor> executionPreprocessors ArtifactResolver artifactResolver Registry registry @@ -52,13 +53,13 @@ class DependentPipelineStarter implements ApplicationContextAware { DependentPipelineStarter(ApplicationContext applicationContext, ObjectMapper objectMapper, ContextParameterProcessor contextParameterProcessor, - Optional<List<PipelinePreprocessor>> pipelinePreprocessors, + Optional<List<ExecutionPreprocessor>> executionPreprocessors, Optional<ArtifactResolver> artifactResolver, Registry registry) { this.applicationContext = applicationContext this.objectMapper = objectMapper this.contextParameterProcessor = contextParameterProcessor - this.pipelinePreprocessors = pipelinePreprocessors.orElse(null) + this.executionPreprocessors = executionPreprocessors.orElse(new ArrayList<>()) this.artifactResolver = artifactResolver.orElse(null) this.registry = registry } @@ -100,7 +101,9 @@ class
DependentPipelineStarter implements ApplicationContextAware { //keep the trigger as the preprocessor removes it. def expectedArtifacts = pipelineConfig.expectedArtifacts - for (PipelinePreprocessor preprocessor : (pipelinePreprocessors ?: [])) { + for (ExecutionPreprocessor preprocessor : executionPreprocessors.findAll { + it.supports(pipelineConfig, ExecutionPreprocessor.Type.PIPELINE) + }) { pipelineConfig = preprocessor.process(pipelineConfig) } diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/Front50Service.groovy b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/Front50Service.groovy index 89943e635b..95e08429f6 100644 --- a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/Front50Service.groovy +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/Front50Service.groovy @@ -18,6 +18,7 @@ package com.netflix.spinnaker.orca.front50 import com.netflix.spinnaker.fiat.model.resources.ServiceAccount import com.netflix.spinnaker.orca.front50.model.Application import com.netflix.spinnaker.orca.front50.model.ApplicationNotifications +import com.netflix.spinnaker.orca.front50.model.DeliveryConfig import com.netflix.spinnaker.orca.front50.model.Front50Credential import retrofit.client.Response import retrofit.http.* @@ -95,17 +96,17 @@ interface Front50Service { // v2 @POST("/v2/pipelineTemplates") - Response saveV2PipelineTemplate(@Query("version") String version, @Body Map pipelineTemplate) + Response saveV2PipelineTemplate(@Query("tag") String tag, @Body Map pipelineTemplate) @GET("/v2/pipelineTemplates/{pipelineTemplateId}/dependentPipelines") List> getDependentPipelinesForTemplate(@Path("pipelineTemplateId") String pipelineTemplateId) @PUT("/v2/pipelineTemplates/{pipelineTemplateId}") - Response updateV2PipelineTemplate(@Path("pipelineTemplateId") String pipelineTemplateId, @Query("version") String version, @Body Map pipelineTemplate) + Response 
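The loop above applies only the preprocessors whose `supports(...)` returns true, in registration order, threading the (possibly rewritten) config through each one. A minimal Java sketch of that chain, with simplified stand-in types:

```java
import java.util.*;

public class PreprocessorChain {
    enum Type { PIPELINE, ORCHESTRATION }

    // Simplified stand-in for the ExecutionPreprocessor extension point.
    interface ExecutionPreprocessor {
        boolean supports(Map<String, Object> execution, Type type);
        Map<String, Object> process(Map<String, Object> execution);
    }

    // Apply every supporting preprocessor in order; each sees the previous output.
    static Map<String, Object> preprocess(List<ExecutionPreprocessor> preprocessors,
                                          Map<String, Object> config, Type type) {
        for (ExecutionPreprocessor p : preprocessors) {
            if (p.supports(config, type)) {
                config = p.process(config);
            }
        }
        return config;
    }

    public static void main(String[] args) {
        ExecutionPreprocessor addLabel = new ExecutionPreprocessor() {
            public boolean supports(Map<String, Object> e, Type t) { return t == Type.PIPELINE; }
            public Map<String, Object> process(Map<String, Object> e) {
                Map<String, Object> out = new HashMap<>(e);
                out.put("preprocessed", true);
                return out;
            }
        };
        Map<String, Object> config =
            preprocess(List.of(addLabel), Map.of("id", "abc"), Type.PIPELINE);
        System.out.println(config.get("preprocessed"));
    }
}
```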
updateV2PipelineTemplate(@Path("pipelineTemplateId") String pipelineTemplateId, @Query("tag") String tag, @Body Map pipelineTemplate) @DELETE("/v2/pipelineTemplates/{pipelineTemplateId}") Response deleteV2PipelineTemplate(@Path("pipelineTemplateId") String pipelineTemplateId, - @Query("version") String version, + @Query("tag") String tag, @Query("digest") String digest) @GET("/strategies") @@ -129,6 +130,18 @@ interface Front50Service { @POST("/serviceAccounts") Response saveServiceAccount(@Body ServiceAccount serviceAccount) + @GET("/deliveries/{id}") + DeliveryConfig getDeliveryConfig(@Path("id") String id) + + @POST("/deliveries") + DeliveryConfig createDeliveryConfig(@Body DeliveryConfig deliveryConfig) + + @PUT("/deliveries/{id}") + DeliveryConfig updateDeliveryConfig(@Path("id") String id, @Body DeliveryConfig deliveryConfig) + + @DELETE("/applications/{application}/deliveries/{id}") + Response deleteDeliveryConfig(@Path("application") String application, @Path("id") String id) + static class Project { String id String name diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/config/Front50Configuration.groovy b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/config/Front50Configuration.groovy index 230235f508..fde934fa76 100644 --- a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/config/Front50Configuration.groovy +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/config/Front50Configuration.groovy @@ -17,10 +17,8 @@ package com.netflix.spinnaker.orca.front50.config import com.fasterxml.jackson.databind.ObjectMapper -import com.netflix.spinnaker.fiat.shared.FiatStatus import com.netflix.spinnaker.orca.events.ExecutionEvent import com.netflix.spinnaker.orca.events.ExecutionListenerAdapter -import com.netflix.spinnaker.orca.front50.DependentPipelineStarter import com.netflix.spinnaker.orca.front50.Front50Service import 
com.netflix.spinnaker.orca.front50.spring.DependentPipelineExecutionListener import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository @@ -40,6 +38,7 @@ import retrofit.RequestInterceptor import retrofit.RestAdapter import retrofit.client.Client import retrofit.converter.JacksonConverter + import static retrofit.Endpoints.newFixedEndpoint @Configuration @@ -81,15 +80,6 @@ class Front50Configuration { .create(Front50Service) } - @Bean - DependentPipelineExecutionListener dependentPipelineExecutionListener( - Front50Service front50Service, - DependentPipelineStarter dependentPipelineStarter, - FiatStatus fiatStatus - ) { - new DependentPipelineExecutionListener(front50Service, dependentPipelineStarter, fiatStatus) - } - @Bean ApplicationListener dependentPipelineExecutionListenerAdapter(DependentPipelineExecutionListener delegate, ExecutionRepository repository) { return new ExecutionListenerAdapter(delegate, repository) diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/model/DeliveryConfig.java b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/model/DeliveryConfig.java new file mode 100644 index 0000000000..dd8e0f6e68 --- /dev/null +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/model/DeliveryConfig.java @@ -0,0 +1,32 @@ +package com.netflix.spinnaker.orca.front50.model; + +import com.fasterxml.jackson.annotation.JsonAnyGetter; +import com.fasterxml.jackson.annotation.JsonAnySetter; +import lombok.Data; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +@Data +public class DeliveryConfig { + private String id; + private String application; + private Long lastModified; + private Long createTs; + private String lastModifiedBy; + private List> deliveryArtifacts; + private List> deliveryEnvironments; + + private Map details = new HashMap<>(); + + @JsonAnyGetter + Map details() { + return details; + } + + @JsonAnySetter + void set(String name, Object value) { + 
details.put(name, value); + } +} diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/DeleteDeliveryConfigStage.java b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/DeleteDeliveryConfigStage.java new file mode 100644 index 0000000000..8c9543670d --- /dev/null +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/DeleteDeliveryConfigStage.java @@ -0,0 +1,18 @@ +package com.netflix.spinnaker.orca.front50.pipeline; + +import com.netflix.spinnaker.orca.front50.tasks.DeleteDeliveryConfigTask; +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.springframework.stereotype.Component; + +import javax.annotation.Nonnull; + +@Component +public class DeleteDeliveryConfigStage implements StageDefinitionBuilder { + @Override + public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) { + builder + .withTask("deleteDeliveryConfig", DeleteDeliveryConfigTask.class); + } +} diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/PipelineExpressionFunctionProvider.java b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/PipelineExpressionFunctionProvider.java new file mode 100644 index 0000000000..70e061688b --- /dev/null +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/PipelineExpressionFunctionProvider.java @@ -0,0 +1,115 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.front50.pipeline;
+
+import com.netflix.servo.util.Strings;
+import com.netflix.spinnaker.kork.core.RetrySupport;
+import com.netflix.spinnaker.orca.front50.Front50Service;
+import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionFunctionProvider;
+import com.netflix.spinnaker.orca.pipeline.expressions.SpelHelperFunctionException;
+import com.netflix.spinnaker.orca.pipeline.model.Execution;
+import org.jetbrains.annotations.NotNull;
+import org.jetbrains.annotations.Nullable;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.stereotype.Component;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Map;
+import java.util.Optional;
+
+import static java.lang.String.format;
+
+@Component
+public class PipelineExpressionFunctionProvider implements ExpressionFunctionProvider {
+  // Static because it's needed during expression eval (which is a static)
+  private static Front50Service front50Service = null;
+
+  @Autowired
+  PipelineExpressionFunctionProvider(Optional<Front50Service> front50Service) {
+    if (front50Service.isPresent()) {
+      PipelineExpressionFunctionProvider.front50Service = front50Service.get();
+    }
+  }
+
+  @Nullable
+  @Override
+  public String getNamespace() {
+    return null;
+  }
+
+  @NotNull
+  @Override
+  public Collection<FunctionDefinition> getFunctions() {
+    return Arrays.asList(
+      new FunctionDefinition("pipelineId", Arrays.asList(
+        new FunctionParameter(
+          Execution.class, "execution", "The execution containing the currently executing stage"
+        ),
+        new FunctionParameter(
+          String.class, "pipelineName", "A valid stage reference identifier"
+        )
+      ))
+    );
+  }
+
+  /**
+   * Function to convert pipeline name to pipeline ID (within current application)
+   *
+   * @param execution the current execution
+   * @param pipelineName name of the pipeline to lookup
+   * @return the id of the pipeline or null if pipeline not found
+   */
+  public static String pipelineId(Execution execution, String pipelineName) {
+    if (Strings.isNullOrEmpty(pipelineName)) {
+      throw new SpelHelperFunctionException("pipelineName must be specified for function #pipelineId");
+    }
+
+    if (front50Service == null) {
+      throw new SpelHelperFunctionException("front50 service is missing. It's required when using #pipelineId function");
+    }
+
+    try {
+      String currentApplication = execution.getApplication();
+
+      RetrySupport retrySupport = new RetrySupport();
+      Map<String, Object> pipeline = retrySupport.retry(() -> front50Service.getPipelines(currentApplication)
+          .stream()
+          .filter(p -> pipelineName.equals(p.getOrDefault("name", null)))
+          .findFirst()
+          .orElse(null),
+        3, 1000, true);
+
+      if (pipeline == null) {
+        throw new SpelHelperFunctionException(
+          format(
+            "Pipeline with name '%s' could not be found on application %s",
+            pipelineName,
+            currentApplication
+          ));
+      }
+
+      return (String) pipeline.get("id");
+    }
+    catch (SpelHelperFunctionException e) {
+      throw e;
+    }
+    catch (Exception e) {
+      throw new SpelHelperFunctionException("Failed to evaluate #pipelineId function", e);
+    }
+  }
+}
diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/UpsertDeliveryConfigStage.java b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/UpsertDeliveryConfigStage.java
new file mode 100644
index 0000000000..af810b6968
--- /dev/null
+++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/pipeline/UpsertDeliveryConfigStage.java
@@ -0,0 +1,20 @@
+package com.netflix.spinnaker.orca.front50.pipeline;
+
+import
com.netflix.spinnaker.orca.front50.tasks.MonitorFront50Task; +import com.netflix.spinnaker.orca.front50.tasks.UpsertDeliveryConfigTask; +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.springframework.stereotype.Component; + +import javax.annotation.Nonnull; + +@Component +public class UpsertDeliveryConfigStage implements StageDefinitionBuilder { + @Override + public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) { + builder + .withTask("upsertDeliveryConfig", UpsertDeliveryConfigTask.class) + .withTask("monitorUpsert", MonitorFront50Task.class); + } +} diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/spring/DependentPipelineExecutionListener.groovy b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/spring/DependentPipelineExecutionListener.groovy index 91e9202a7c..e789edd0ef 100644 --- a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/spring/DependentPipelineExecutionListener.groovy +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/spring/DependentPipelineExecutionListener.groovy @@ -18,30 +18,47 @@ package com.netflix.spinnaker.orca.front50.spring import com.netflix.spinnaker.fiat.shared.FiatStatus import com.netflix.spinnaker.orca.ExecutionStatus +import com.netflix.spinnaker.orca.extensionpoint.pipeline.ExecutionPreprocessor import com.netflix.spinnaker.orca.front50.DependentPipelineStarter import com.netflix.spinnaker.orca.front50.Front50Service import com.netflix.spinnaker.orca.listeners.ExecutionListener import com.netflix.spinnaker.orca.listeners.Persister import com.netflix.spinnaker.orca.pipeline.model.Execution +import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor +import com.netflix.spinnaker.orca.pipelinetemplate.V2Util import com.netflix.spinnaker.security.User import 
groovy.transform.CompileDynamic import groovy.util.logging.Slf4j +import org.springframework.beans.factory.annotation.Autowired +import org.springframework.boot.autoconfigure.condition.ConditionalOnExpression +import org.springframework.stereotype.Component + import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE @Slf4j @CompileDynamic +@Component +@ConditionalOnExpression('${front50.enabled:true}') class DependentPipelineExecutionListener implements ExecutionListener { private final Front50Service front50Service - private DependentPipelineStarter dependentPipelineStarter + private final DependentPipelineStarter dependentPipelineStarter private final FiatStatus fiatStatus + private final List executionPreprocessors + + private final ContextParameterProcessor contextParameterProcessor + @Autowired DependentPipelineExecutionListener(Front50Service front50Service, DependentPipelineStarter dependentPipelineStarter, - FiatStatus fiatStatus) { + FiatStatus fiatStatus, + Optional> pipelinePreprocessors, + ContextParameterProcessor contextParameterProcessor) { this.front50Service = front50Service this.dependentPipelineStarter = dependentPipelineStarter this.fiatStatus = fiatStatus + this.executionPreprocessors = pipelinePreprocessors.orElse(null) + this.contextParameterProcessor = contextParameterProcessor } @Override @@ -51,8 +68,20 @@ class DependentPipelineExecutionListener implements ExecutionListener { } def status = convertStatus(execution) + def allPipelines = front50Service.getAllPipelines() + if (executionPreprocessors) { + // Resolve templated pipelines if enabled. 
+ allPipelines = allPipelines.collect { pipeline -> + if (pipeline.type == 'templatedPipeline' && pipeline?.schema != null && pipeline?.schema != "1") { + return V2Util.planPipeline(contextParameterProcessor, executionPreprocessors, pipeline) + } else { + return pipeline + } + } + } - front50Service.getAllPipelines().findAll { !it.disabled }.each { + allPipelines.findAll { !it.disabled } + .each { it.triggers.each { trigger -> if (trigger.enabled && trigger.type == 'pipeline' && diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/AbstractFront50Task.groovy b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/AbstractFront50Task.groovy index b7991666a1..10d869bcf9 100644 --- a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/AbstractFront50Task.groovy +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/AbstractFront50Task.groovy @@ -65,7 +65,7 @@ abstract class AbstractFront50Task implements Task { TaskResult taskResult = performRequest(application) outputs << taskResult.outputs - return new TaskResult(taskResult.status, outputs) + return TaskResult.builder(taskResult.status).context(outputs).build() } Application fetchApplication(String applicationName) { diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/DeleteDeliveryConfigTask.java b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/DeleteDeliveryConfigTask.java new file mode 100644 index 0000000000..9f3342e8be --- /dev/null +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/DeleteDeliveryConfigTask.java @@ -0,0 +1,79 @@ +package com.netflix.spinnaker.orca.front50.tasks; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.TaskResult; +import 
com.netflix.spinnaker.orca.front50.Front50Service; +import com.netflix.spinnaker.orca.front50.model.DeliveryConfig; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; +import retrofit.RetrofitError; + +import javax.annotation.Nonnull; +import java.util.Arrays; +import java.util.Optional; + +@Component +public class DeleteDeliveryConfigTask implements Task { + + private Logger log = LoggerFactory.getLogger(getClass()); + + private Front50Service front50Service; + private ObjectMapper objectMapper; + + @Autowired + public DeleteDeliveryConfigTask(Front50Service front50Service, ObjectMapper objectMapper) { + this.front50Service = front50Service; + this.objectMapper = objectMapper; + } + + @Nonnull + @Override + public TaskResult execute(@Nonnull Stage stage) { + StageData stageData = stage.mapTo(StageData.class); + + if (stageData.deliveryConfigId == null) { + throw new IllegalArgumentException("Key 'deliveryConfigId' must be provided."); + } + + Optional config = getDeliveryConfig(stageData.deliveryConfigId); + + if (!config.isPresent()) { + log.debug("Config {} does not exist, considering deletion successful.", stageData.deliveryConfigId); + return TaskResult.SUCCEEDED; + } + + try { + log.debug("Deleting delivery config: " + objectMapper.writeValueAsString(config.get())); + } catch (JsonProcessingException e) { + // ignore + } + + front50Service.deleteDeliveryConfig(config.get().getApplication(), stageData.deliveryConfigId); + + return TaskResult.SUCCEEDED; + } + + public Optional getDeliveryConfig(String id) { + try { + DeliveryConfig deliveryConfig = front50Service.getDeliveryConfig(id); + return Optional.of(deliveryConfig); + } catch (RetrofitError e) { + //ignore an unknown (404) or unauthorized (403, 401) + if (e.getResponse() != null && Arrays.asList(404, 403, 
401).contains(e.getResponse().getStatus())) { + return Optional.empty(); + } else { + throw e; + } + } + } + + private static class StageData { + public String deliveryConfigId; + } +} diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/MonitorFront50Task.java b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/MonitorFront50Task.java index 6c81ac4b1c..24f987556d 100644 --- a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/MonitorFront50Task.java +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/MonitorFront50Task.java @@ -17,22 +17,27 @@ package com.netflix.spinnaker.orca.front50.tasks; import com.fasterxml.jackson.annotation.JsonProperty; +import com.fasterxml.jackson.databind.ObjectMapper; import com.netflix.spinnaker.orca.ExecutionStatus; import com.netflix.spinnaker.orca.RetryableTask; import com.netflix.spinnaker.orca.TaskResult; import com.netflix.spinnaker.orca.front50.Front50Service; +import com.netflix.spinnaker.orca.front50.model.DeliveryConfig; import com.netflix.spinnaker.orca.pipeline.model.Stage; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Value; import org.springframework.stereotype.Component; +import retrofit.RetrofitError; import javax.annotation.Nonnull; +import java.util.Arrays; import java.util.List; import java.util.Map; import java.util.Optional; import java.util.concurrent.TimeUnit; +import java.util.function.Function; @Component public class MonitorFront50Task implements RetryableTask { @@ -42,11 +47,15 @@ public class MonitorFront50Task implements RetryableTask { private final int successThreshold; private final int gracePeriodMs; + private final ObjectMapper objectMapper; + @Autowired public MonitorFront50Task(Optional front50Service, + ObjectMapper objectMapper, 
@Value("${tasks.monitorFront50Task.successThreshold:0}") int successThreshold, @Value("${tasks.monitorFront50Task.gracePeriodMs:5000}") int gracePeriodMs) { this.front50Service = front50Service.orElse(null); + this.objectMapper = objectMapper; this.successThreshold = successThreshold; // some storage providers round the last modified time to the nearest second, this allows for a configurable @@ -78,33 +87,7 @@ public TaskResult execute(@Nonnull Stage stage) { StageData stageData = stage.mapTo(StageData.class); if (stageData.pipelineId != null) { try { - /* - * Some storage services (notably S3) are eventually consistent when versioning is enabled. - * - * This "dirty hack" attempts to ensure that each underlying instance of Front50 has cached an _updated copy_ - * of the modified resource. - * - * It does so by making multiple requests (currently only applies to pipelines) with the expectation that they - * will round-robin across all instances of Front50. - */ - for (int i = 0; i < successThreshold; i++) { - Optional> pipeline = getPipeline(stageData.pipelineId); - if (!pipeline.isPresent()) { - return TaskResult.RUNNING; - } - - Long lastModifiedTime = Long.valueOf(pipeline.get().get("updateTs").toString()); - if (lastModifiedTime < (stage.getStartTime() - gracePeriodMs)) { - return TaskResult.RUNNING; - } - - try { - // small delay between verification attempts - Thread.sleep(1000); - } catch (InterruptedException ignored) {} - } - - return TaskResult.SUCCEEDED; + return monitor(this::getPipeline, stageData.pipelineId, stage.getStartTime()); } catch (Exception e) { log.error( "Unable to verify that pipeline has been updated (executionId: {}, pipeline: {})", @@ -114,15 +97,63 @@ public TaskResult execute(@Nonnull Stage stage) { ); return TaskResult.RUNNING; } + } else if (stageData.deliveryConfig != null) { + String deliveryConfigId = stageData.deliveryConfig.getId(); + try { + return monitor(this::getDeliveryConfig, deliveryConfigId, stage.getStartTime()); + } 
catch (Exception e) { + log.error( + "Unable to verify that delivery config has been updated (executionId: {}, configId: {})", + stage.getExecution().getId(), + deliveryConfigId, + e + ); + return TaskResult.RUNNING; + } } else { log.warn( - "No pipeline id found, unable to verify that pipeline has been updated (executionId: {}, pipeline: {})", - stage.getExecution().getId(), - stageData.pipelineName + "No id found, unable to verify that the object has been updated (executionId: {})", + stage.getExecution().getId() ); } - return new TaskResult(ExecutionStatus.SUCCEEDED); + return TaskResult.SUCCEEDED; + } + + private TaskResult monitor(Function>> getObjectFunction, String id, Long startTime) { + /* + * Some storage services (notably S3) are eventually consistent when versioning is enabled. + * + * This "dirty hack" attempts to ensure that each underlying instance of Front50 has cached an _updated copy_ + * of the modified resource. + * + * It does so by making multiple requests (currently only applies to pipelines) with the expectation that they + * will round-robin across all instances of Front50. + */ + for (int i = 0; i < successThreshold; i++) { + Optional> object = getObjectFunction.apply(id); + if (!object.isPresent()) { + return TaskResult.RUNNING; + } + + Long lastModifiedTime; + if (object.get().containsKey("updateTs")) { + lastModifiedTime = Long.valueOf(object.get().get("updateTs").toString()); + } else { + lastModifiedTime = Long.valueOf(object.get().get("lastModified").toString()); + } + + if (lastModifiedTime < (startTime - gracePeriodMs)) { + return TaskResult.RUNNING; + } + + try { + // small delay between verification attempts + Thread.sleep(1000); + } catch (InterruptedException ignored) {} + } + + return TaskResult.SUCCEEDED; } private Optional> getPipeline(String id) { @@ -130,6 +161,21 @@ private Optional> getPipeline(String id) { return pipelines.isEmpty() ? 
Optional.empty() : Optional.of(pipelines.get(0)); } + @SuppressWarnings("unchecked") + private Optional> getDeliveryConfig(String id) { + try { + DeliveryConfig deliveryConfig = front50Service.getDeliveryConfig(id); + return Optional.of(objectMapper.convertValue(deliveryConfig, Map.class)); + } catch (RetrofitError e) { + //ignore an unknown (404) or unauthorized (403, 401) + if (e.getResponse() != null && Arrays.asList(404, 403, 401).contains(e.getResponse().getStatus())) { + return Optional.empty(); + } else { + throw e; + } + } + } + private static class StageData { public String application; @@ -138,5 +184,7 @@ private static class StageData { @JsonProperty("pipeline.name") public String pipelineName; + + public DeliveryConfig deliveryConfig; } } diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/MonitorPipelineTask.groovy b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/MonitorPipelineTask.groovy index 3acfd41835..4d21a103de 100644 --- a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/MonitorPipelineTask.groovy +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/MonitorPipelineTask.groovy @@ -44,7 +44,7 @@ class MonitorPipelineTask implements OverridableTimeoutRetryableTask { Execution childPipeline = executionRepository.retrieve(PIPELINE, pipelineId) if (childPipeline.status == ExecutionStatus.SUCCEEDED) { - return new TaskResult(ExecutionStatus.SUCCEEDED, [status: childPipeline.status], childPipeline.getContext()) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([status: childPipeline.status]).outputs(childPipeline.getContext()).build() } if (childPipeline.status.halt) { @@ -85,13 +85,13 @@ class MonitorPipelineTask implements OverridableTimeoutRetryableTask { ] } - return new TaskResult(ExecutionStatus.TERMINAL, [ + return TaskResult.builder(ExecutionStatus.TERMINAL).context([ status : childPipeline.status, exception: exceptionDetails - ]) + 
]).build() } - return new TaskResult(ExecutionStatus.RUNNING, [status: childPipeline.status]) + return TaskResult.builder(ExecutionStatus.RUNNING).context([status: childPipeline.status]).build() } private static String buildExceptionMessage(String pipelineName, String message, Stage stage) { diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/ReorderPipelinesTask.java b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/ReorderPipelinesTask.java index b88404f642..e9606bea7c 100644 --- a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/ReorderPipelinesTask.java +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/ReorderPipelinesTask.java @@ -73,10 +73,7 @@ public TaskResult execute(Stage stage) { outputs.put("notification.type", "reorderpipelines"); outputs.put("application", application); - return new TaskResult( - (response.getStatus() == HttpStatus.OK.value()) ? ExecutionStatus.SUCCEEDED : ExecutionStatus.TERMINAL, - outputs - ); + return TaskResult.builder((response.getStatus() == HttpStatus.OK.value()) ? 
ExecutionStatus.SUCCEEDED : ExecutionStatus.TERMINAL).context(outputs).build(); } private void validateTask(Stage stage) { diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/SavePipelineTask.java b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/SavePipelineTask.java index 88c4b7da3f..33e9ffaabc 100644 --- a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/SavePipelineTask.java +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/SavePipelineTask.java @@ -22,19 +22,16 @@ import com.netflix.spinnaker.orca.front50.Front50Service; import com.netflix.spinnaker.orca.front50.PipelineModelMutator; import com.netflix.spinnaker.orca.pipeline.model.Stage; -import org.apache.commons.lang.StringUtils; +import org.apache.commons.lang3.StringUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty; import org.springframework.http.HttpStatus; import org.springframework.stereotype.Component; import retrofit.client.Response; -import java.util.ArrayList; -import java.util.Base64; -import java.util.HashMap; -import java.util.List; -import java.util.Map; +import java.util.*; import java.util.concurrent.TimeUnit; @Component @@ -87,6 +84,13 @@ public TaskResult execute(Stage stage) { updateServiceAccount(pipeline, serviceAccount); } + if (stage.getContext().get("pipeline.id") != null && pipeline.get("id") == null) { + pipeline.put("id", stage.getContext().get("pipeline.id")); + + // We need to tell front50 to regenerate cron trigger id's + pipeline.put("regenerateCronTriggerIds", true); + } + pipelineModelMutators.stream().filter(m -> m.supports(pipeline)).forEach(m -> m.mutate(pipeline)); Response response = front50Service.savePipeline(pipeline); @@ -109,10 +113,19 @@ public TaskResult execute(Stage stage) { } } - return new TaskResult( - 
(response.getStatus() == HttpStatus.OK.value()) ? ExecutionStatus.SUCCEEDED : ExecutionStatus.TERMINAL, - outputs - ); + final ExecutionStatus status; + if (response.getStatus() == HttpStatus.OK.value()) { + status = ExecutionStatus.SUCCEEDED; + } else { + final Boolean isSavingMultiplePipelines = (Boolean) Optional + .ofNullable(stage.getContext().get("isSavingMultiplePipelines")).orElse(false); + if (isSavingMultiplePipelines) { + status = ExecutionStatus.FAILED_CONTINUE; + } else { + status = ExecutionStatus.TERMINAL; + } + } + return TaskResult.builder(status).context(outputs).build(); } @Override @@ -139,7 +152,12 @@ private void updateServiceAccount(Map pipeline, String serviceAc } // Managed Service account exists and roles are set; Update triggers - triggers.forEach(t -> t.putIfAbsent("runAsUser", serviceAccount)); + triggers.stream() + .filter(t -> { + String runAsUser = (String) t.get("runAsUser"); + return runAsUser == null || runAsUser.endsWith("@managed-service-account"); + }) + .forEach(t -> t.put("runAsUser", serviceAccount)); } private Map fetchExistingPipeline(Map newPipeline) { diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/SaveServiceAccountTask.java b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/SaveServiceAccountTask.java index 636dcb9794..580b66b31a 100644 --- a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/SaveServiceAccountTask.java +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/SaveServiceAccountTask.java @@ -27,16 +27,19 @@ import com.netflix.spinnaker.orca.TaskResult; import com.netflix.spinnaker.orca.front50.Front50Service; import com.netflix.spinnaker.orca.pipeline.model.Stage; -import javax.annotation.Nonnull; import lombok.extern.slf4j.Slf4j; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.http.HttpStatus; import org.springframework.stereotype.Component; import 
retrofit.client.Response;
+import javax.annotation.Nonnull;
+import java.util.HashMap;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
+import java.util.UUID;
 import java.util.concurrent.TimeUnit;
 import java.util.stream.Collectors;
@@ -102,33 +105,48 @@ public TaskResult execute(@Nonnull Stage stage) {
     if (!pipeline.containsKey("roles")) {
       log.debug("Skipping managed service accounts since roles field is not present.");
-      return new TaskResult(ExecutionStatus.SUCCEEDED);
+      return TaskResult.SUCCEEDED;
     }

     List<String> roles = (List<String>) pipeline.get("roles");
     String user = stage.getExecution().getTrigger().getUser();

+    Map<String, Object> outputs = new HashMap<>();
+
+    pipeline.computeIfAbsent("id", k -> {
+      String uuid = UUID.randomUUID().toString();
+      outputs.put("pipeline.id", uuid);
+      return uuid;
+    });
+
+    // Check if pipeline roles did not change, and skip updating a service account if so.
+    String serviceAccountName = generateSvcAcctName(pipeline);
+    if (!pipelineRolesChanged(serviceAccountName, roles)) {
+      log.debug("Skipping managed service account creation/updating since roles have not changed.");
+      return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(ImmutableMap.of("pipeline.serviceAccount", serviceAccountName)).build();
+    }
+
     if (!isUserAuthorized(user, roles)) {
       // TODO: Push this to the output result so Deck can show it.
       log.warn("User {} is not authorized with all roles for pipeline", user);
-      return new TaskResult(ExecutionStatus.TERMINAL);
+      return TaskResult.ofStatus(ExecutionStatus.TERMINAL);
     }

     ServiceAccount svcAcct = new ServiceAccount();
-    svcAcct.setName(generateSvcAcctName(pipeline));
+    svcAcct.setName(serviceAccountName);
     svcAcct.setMemberOf(roles);

     // Creating a service account with an existing name will overwrite it
     // i.e. perform an update for our use case
-    // TODO(dibyom): If roles are unmodified, skip the create/update below.
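The SaveServiceAccountTask hunk above short-circuits the save when the pipeline's roles already match the roles on the managed service account. The comparison is set-based, so role ordering and duplicates in the pipeline definition do not trigger a spurious re-save. A minimal standalone sketch of that decision logic (class and method names here are hypothetical, not Orca's API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the role-comparison check behind pipelineRolesChanged():
// compare the service account's current roles and the pipeline's roles as sets,
// so order and duplicates are irrelevant; missing data forces an update.
public class RoleComparison {
  static boolean rolesChanged(Set<String> currentRoles, List<String> pipelineRoles) {
    if (currentRoles == null || pipelineRoles == null) {
      return true; // no permission view or no roles: err on the side of re-saving
    }
    return !currentRoles.equals(new HashSet<>(pipelineRoles));
  }

  public static void main(String[] args) {
    Set<String> current = new HashSet<>(Arrays.asList("dev", "ops"));
    System.out.println(rolesChanged(current, Arrays.asList("ops", "dev"))); // false: same set, different order
    System.out.println(rolesChanged(current, Arrays.asList("dev")));        // true: a role was removed
  }
}
```

Skipping the save when nothing changed matters because, as the diff notes, saving a service account with an existing name overwrites it, and the earlier behavior re-created the account on every pipeline save.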
Response response = front50Service.saveServiceAccount(svcAcct); if (response.getStatus() != HttpStatus.OK.value()) { - return new TaskResult(ExecutionStatus.TERMINAL); + return TaskResult.ofStatus(ExecutionStatus.TERMINAL); } - return new TaskResult( - ExecutionStatus.SUCCEEDED, ImmutableMap.of("pipeline.serviceAccount", svcAcct.getName())); + outputs.put("pipeline.serviceAccount", svcAcct.getName()); + + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } private String generateSvcAcctName(Map pipeline) { @@ -165,4 +183,18 @@ private boolean isUserAuthorized(String user, List pipelineRoles) { return userRoles.containsAll(pipelineRoles); } + + private boolean pipelineRolesChanged(String serviceAccountName, List pipelineRoles) { + UserPermission.View permission = fiatPermissionEvaluator.getPermission(serviceAccountName); + if (permission == null || pipelineRoles == null) { // check if user has all permissions + return true; + } + + Set currentRoles = permission.getRoles() + .stream() + .map(Role.View::getName) + .collect(Collectors.toSet()); + + return !currentRoles.equals(new HashSet<>(pipelineRoles)); + } } diff --git a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/StartPipelineTask.groovy b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/StartPipelineTask.groovy index c9dfd60754..f5d47baed8 100644 --- a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/StartPipelineTask.groovy +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/StartPipelineTask.groovy @@ -95,7 +95,7 @@ class StartPipelineTask implements Task { getUser(stage.execution) ) - new TaskResult(ExecutionStatus.SUCCEEDED, [executionId: pipeline.id, executionName: pipelineConfig.name]) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([executionId: pipeline.id, executionName: pipelineConfig.name]).build() } // There are currently two sources-of-truth for the user: diff --git 
a/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/UpsertDeliveryConfigTask.java b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/UpsertDeliveryConfigTask.java new file mode 100644 index 0000000000..846e823793 --- /dev/null +++ b/orca-front50/src/main/groovy/com/netflix/spinnaker/orca/front50/tasks/UpsertDeliveryConfigTask.java @@ -0,0 +1,76 @@ +package com.netflix.spinnaker.orca.front50.tasks; + +import com.fasterxml.jackson.core.type.TypeReference; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.front50.Front50Service; +import com.netflix.spinnaker.orca.front50.model.DeliveryConfig; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; +import retrofit.RetrofitError; + +import javax.annotation.Nonnull; +import java.util.Arrays; +import java.util.HashMap; +import java.util.Map; + +@Component +public class UpsertDeliveryConfigTask implements Task { + + private Logger log = LoggerFactory.getLogger(getClass()); + + private Front50Service front50Service; + private ObjectMapper objectMapper; + + @Autowired + public UpsertDeliveryConfigTask(Front50Service front50Service, ObjectMapper objectMapper) { + this.front50Service = front50Service; + this.objectMapper = objectMapper; + } + + @Nonnull + @Override + public TaskResult execute(@Nonnull Stage stage) { + if (!stage.getContext().containsKey("delivery")) { + throw new IllegalArgumentException("Key 'delivery' must be provided."); + } + + //todo eb: base64 encode this if it will have expressions + DeliveryConfig deliveryConfig = objectMapper + .convertValue(stage.getContext().get("delivery"), new TypeReference(){}); + + 
DeliveryConfig savedConfig; + if (configExists(deliveryConfig.getId())) { + savedConfig = front50Service.updateDeliveryConfig(deliveryConfig.getId(), deliveryConfig); + } else { + savedConfig = front50Service.createDeliveryConfig(deliveryConfig); + } + + Map outputs = new HashMap<>(); + outputs.put("application", deliveryConfig.getApplication()); + outputs.put("deliveryConfig", savedConfig); + + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); + } + + private boolean configExists(String id) { + if (id == null) { + return false; + } + try { + front50Service.getDeliveryConfig(id); + return true; + } catch (RetrofitError e) { + if (e.getResponse() != null && Arrays.asList(404, 403, 401).contains(e.getResponse().getStatus())) { + return false; + } else { + throw e; + } + } + } +} diff --git a/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/DependentPipelineStarterSpec.groovy b/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/DependentPipelineStarterSpec.groovy index 71b4c1597d..94a8051f10 100644 --- a/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/DependentPipelineStarterSpec.groovy +++ b/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/DependentPipelineStarterSpec.groovy @@ -33,6 +33,7 @@ import org.springframework.context.ApplicationContext import org.springframework.context.support.StaticApplicationContext import spock.lang.Specification import spock.lang.Subject + import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.pipeline import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage @@ -43,7 +44,7 @@ class DependentPipelineStarterSpec extends Specification { ObjectMapper mapper = OrcaObjectMapper.newInstance() ExecutionRepository executionRepository = Mock(ExecutionRepository) - ArtifactResolver artifactResolver = Spy(ArtifactResolver, constructorArgs: [mapper, executionRepository]) + ArtifactResolver artifactResolver = Spy(ArtifactResolver, 
constructorArgs: [mapper, executionRepository, new ContextParameterProcessor()]) def "should only propagate credentials when explicitly provided"() { setup: @@ -73,7 +74,7 @@ class DependentPipelineStarterSpec extends Specification { applicationContext, mapper, new ContextParameterProcessor(), - Optional.of([]), + Optional.empty(), Optional.of(artifactResolver), new NoopRegistry() ) @@ -126,7 +127,7 @@ class DependentPipelineStarterSpec extends Specification { applicationContext, mapper, new ContextParameterProcessor(), - Optional.of([]), + Optional.empty(), Optional.of(artifactResolver), new NoopRegistry() ) @@ -184,7 +185,7 @@ class DependentPipelineStarterSpec extends Specification { applicationContext, mapper, new ContextParameterProcessor(), - Optional.of([]), + Optional.empty(), Optional.of(artifactResolver), new NoopRegistry() ) @@ -256,7 +257,7 @@ class DependentPipelineStarterSpec extends Specification { applicationContext, mapper, new ContextParameterProcessor(), - Optional.of([]), + Optional.empty(), Optional.of(artifactResolver), new NoopRegistry() ) @@ -322,7 +323,7 @@ class DependentPipelineStarterSpec extends Specification { applicationContext, mapper, new ContextParameterProcessor(), - Optional.of([]), + Optional.empty(), Optional.of(artifactResolver), new NoopRegistry() ) @@ -373,7 +374,7 @@ class DependentPipelineStarterSpec extends Specification { applicationContext, mapper, new ContextParameterProcessor(), - Optional.of([]), + Optional.empty(), Optional.of(artifactResolver), new NoopRegistry() ) @@ -423,7 +424,7 @@ class DependentPipelineStarterSpec extends Specification { applicationContext, mapper, new ContextParameterProcessor(), - Optional.of([]), + Optional.empty(), Optional.of(artifactResolver), new NoopRegistry() ) diff --git a/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/spring/DependentPipelineExecutionListenerSpec.groovy 
b/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/spring/DependentPipelineExecutionListenerSpec.groovy index e8b6c0c731..c0deb4505b 100644 --- a/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/spring/DependentPipelineExecutionListenerSpec.groovy +++ b/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/spring/DependentPipelineExecutionListenerSpec.groovy @@ -22,9 +22,12 @@ import com.netflix.spinnaker.orca.front50.DependentPipelineStarter import com.netflix.spinnaker.orca.front50.Front50Service import com.netflix.spinnaker.orca.front50.pipeline.PipelineStage import com.netflix.spinnaker.orca.pipeline.model.Task +import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor +import com.netflix.spinnaker.orca.pipelinetemplate.V2Util import com.netflix.spinnaker.security.User import spock.lang.Specification import spock.lang.Subject + import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.pipeline import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage @@ -33,7 +36,10 @@ class DependentPipelineExecutionListenerSpec extends Specification { def front50Service = Mock(Front50Service) def dependentPipelineStarter = Mock(DependentPipelineStarter) def pipelineConfig = buildPipelineConfig(null) + def v2MptPipelineConfig = buildTemplatedPipelineConfig() def pipelineConfigWithRunAsUser = buildPipelineConfig("my_run_as_user") + def contextParameterProcessor = new ContextParameterProcessor() + def templatePreprocessor = [process: {}] // Groovy thunk mock since the actual class is Kotlin and makes compilation fail. 
def pipeline = pipeline { application = "orca" @@ -49,7 +55,7 @@ class DependentPipelineExecutionListenerSpec extends Specification { @Subject DependentPipelineExecutionListener listener = new DependentPipelineExecutionListener( - front50Service, dependentPipelineStarter, fiatStatus + front50Service, dependentPipelineStarter, fiatStatus, Optional.of([templatePreprocessor]), contextParameterProcessor ) def "should trigger downstream pipeline when status and pipelines match"() { @@ -75,6 +81,31 @@ class DependentPipelineExecutionListenerSpec extends Specification { status << [ExecutionStatus.SUCCEEDED, ExecutionStatus.TERMINAL] } + def "should trigger downstream v2 templated pipeline when status and pipelines match"() { + given: + pipeline.stages.each { + it.status = status + it.tasks = [Mock(Task)] + } + + pipeline.pipelineConfigId = "97c435a0-0faf-11e5-a62b-696d38c37faa" + front50Service.getAllPipelines() >> [ + pipelineConfig, pipelineConfigWithRunAsUser, v2MptPipelineConfig + ] + GroovyMock(V2Util, global: true) + V2Util.planPipeline(_, _, v2MptPipelineConfig) >> v2MptPipelineConfig + + when: + listener.afterExecution(null, pipeline, null, true) + + then: + 2 * dependentPipelineStarter.trigger(_, _, _, _, _, null) + 1 * dependentPipelineStarter.trigger(_, _, _, _, _, { User user -> user.email == "my_run_as_user" }) + + where: + status << [ExecutionStatus.SUCCEEDED, ExecutionStatus.TERMINAL] + } + def "should not trigger downstream pipeline when conditions don't match"() { given: pipeline.stages.each { @@ -165,6 +196,25 @@ class DependentPipelineExecutionListenerSpec extends Specification { 0 * dependentPipelineStarter._ } + private static Map buildTemplatedPipelineConfig() { + return [ + schema: "v2", + type: "templatedPipeline", + triggers: [ + [ + "enabled" : true, + "type" : "pipeline", + "application": "rush", + "status" : [ + "successful", "failed" + ], + "pipeline" : "97c435a0-0faf-11e5-a62b-696d38c37faa", + "runAsUser" : null + ] + ] + ] + } + private 
static Map buildPipelineConfig(String runAsUser) { return [ triggers: [ diff --git a/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/tasks/SavePipelineTaskSpec.groovy b/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/tasks/SavePipelineTaskSpec.groovy index a348f0fa34..965f41d867 100644 --- a/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/tasks/SavePipelineTaskSpec.groovy +++ b/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/tasks/SavePipelineTaskSpec.groovy @@ -222,4 +222,49 @@ class SavePipelineTaskSpec extends Specification { then: runAsUser == null } + + def "should fail task when front 50 save call fails"() { + given: + def pipeline = [ + application: 'orca', + name: 'my pipeline', + stages: [] + ] + def stage = new Stage(Execution.newPipeline("orca"), "whatever", [ + pipeline: Base64.encoder.encodeToString(objectMapper.writeValueAsString(pipeline).bytes) + ]) + + when: + front50Service.getPipelines(_) >> [] + front50Service.savePipeline(_) >> { Map newPipeline -> + new Response('http://front50', 500, 'OK', [], null) + } + def result = task.execute(stage) + + then: + result.status == ExecutionStatus.TERMINAL + } + + def "should fail and continue task when front 50 save call fails and stage is iterating over pipelines"() { + given: + def pipeline = [ + application: 'orca', + name: 'my pipeline', + stages: [] + ] + def stage = new Stage(Execution.newPipeline("orca"), "whatever", [ + pipeline: Base64.encoder.encodeToString(objectMapper.writeValueAsString(pipeline).bytes), + isSavingMultiplePipelines: true + ]) + + when: + front50Service.getPipelines(_) >> [] + front50Service.savePipeline(_) >> { Map newPipeline -> + new Response('http://front50', 500, 'OK', [], null) + } + def result = task.execute(stage) + + then: + result.status == ExecutionStatus.FAILED_CONTINUE + } } diff --git a/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/tasks/SaveServiceAccountTaskSpec.groovy 
b/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/tasks/SaveServiceAccountTaskSpec.groovy index 42860bf1e6..c7861933d6 100644 --- a/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/tasks/SaveServiceAccountTaskSpec.groovy +++ b/orca-front50/src/test/groovy/com/netflix/spinnaker/orca/front50/tasks/SaveServiceAccountTaskSpec.groovy @@ -65,6 +65,35 @@ class SaveServiceAccountTaskSpec extends Specification { result.status == ExecutionStatus.SUCCEEDED } + def "should do nothing if roles are present and didn't change compared to the service user"() { + given: + def serviceAccount = 'pipeline-id@managed-service-account' + def pipeline = [ + application : 'orca', + name : 'my pipeline', + id : 'pipeline-id', + serviceAccount: serviceAccount, + stages : [], + roles : ['foo', 'bar'] + ] + def stage = stage { + context = [ + pipeline: Base64.encoder.encodeToString(objectMapper.writeValueAsString(pipeline).bytes) + ] + } + + when: + def result = task.execute(stage) + + then: + 1 * fiatPermissionEvaluator.getPermission(serviceAccount) >> { + new UserPermission().addResources([new Role('foo'), new Role('bar')]).view + } + 0 * front50Service.saveServiceAccount(_) + result.status == ExecutionStatus.SUCCEEDED + result.context == ImmutableMap.of('pipeline.serviceAccount', serviceAccount) + } + def "should create a serviceAccount with correct roles"() { given: def pipeline = [ @@ -163,4 +192,40 @@ class SaveServiceAccountTaskSpec extends Specification { result.context == ImmutableMap.of('pipeline.serviceAccount', expectedServiceAccount.name) } + def "should generate a pipeline id if not already present"() { + given: + def pipeline = [ + application: 'orca', + name: 'My pipeline', + stages: [], + roles: ['foo'] + ] + def stage = stage { + context = [ + pipeline: Base64.encoder.encodeToString(objectMapper.writeValueAsString(pipeline).bytes) + ] + } + def uuid = null + def expectedServiceAccountName = null + + when: + stage.getExecution().setTrigger(new 
DefaultTrigger('manual', null, 'abc@somedomain.io')) + def result = task.execute(stage) + + then: + 1 * fiatPermissionEvaluator.getPermission('abc@somedomain.io') >> { + new UserPermission().addResources([new Role('foo')]).view + } + + 1 * front50Service.saveServiceAccount({ it.name != null }) >> { ServiceAccount serviceAccount -> + uuid = serviceAccount.name - "@managed-service-account" + expectedServiceAccountName = serviceAccount.name + new Response('http://front50', 200, 'OK', [], null) + } + + result.status == ExecutionStatus.SUCCEEDED + result.context == ImmutableMap.of( + 'pipeline.id', uuid, + 'pipeline.serviceAccount', expectedServiceAccountName) + } } diff --git a/orca-igor/orca-igor.gradle b/orca-igor/orca-igor.gradle index 95f339cbcc..a9f2d555fd 100644 --- a/orca-igor/orca-igor.gradle +++ b/orca-igor/orca-igor.gradle @@ -22,6 +22,8 @@ dependencies { compile project(":orca-front50") compile spinnaker.dependency('kork') compile spinnaker.dependency('bootAutoConfigure') + compileOnly spinnaker.dependency("lombok") + annotationProcessor spinnaker.dependency("lombok") testCompile project(":orca-test-groovy") testCompile spinnaker.dependency("springTest") } diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/BuildService.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/BuildService.groovy deleted file mode 100644 index 28dba642e2..0000000000 --- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/BuildService.groovy +++ /dev/null @@ -1,59 +0,0 @@ -/* - * Copyright 2015 Netflix, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -package com.netflix.spinnaker.orca.igor - -import groovy.transform.CompileStatic -import org.springframework.beans.factory.annotation.Autowired -import org.springframework.stereotype.Component -import org.springframework.web.util.UriUtils - -@CompileStatic -@Component -class BuildService { - - @Autowired - IgorService igorService - - private String encode(uri) { - return UriUtils.encodeFragment(uri.toString(), "UTF-8") - } - - String build(String master, String jobName, Map queryParams) { - return igorService.build(master, encode(jobName), queryParams, "") - } - - String stop(String master, String jobName, String queuedBuild, Integer buildNumber) { - return igorService.stop(master, jobName, queuedBuild, buildNumber, '') - } - - Map queuedBuild(String master, String item) { - return igorService.queuedBuild(master, item) - } - - Map getBuild(Integer buildNumber, String master, String job) { - return igorService.getBuild(buildNumber, master, encode(job)) - } - - Map getPropertyFile(Integer buildNumber, String fileName, String master, String job) { - return igorService.getPropertyFile(buildNumber, fileName, master, encode(job)) - } - - List compareCommits(String repoType, String projectKey, String repositorySlug, Map requestParams) { - return igorService.compareCommits(repoType, projectKey, repositorySlug, requestParams) - } -} diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/BuildService.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/BuildService.java new file mode 100644 index 0000000000..4decc53539 
--- /dev/null +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/BuildService.java @@ -0,0 +1,67 @@ +/* + * Copyright 2015 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.igor; + +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import lombok.RequiredArgsConstructor; +import org.springframework.stereotype.Component; +import org.springframework.web.util.UriUtils; + +import java.io.UnsupportedEncodingException; +import java.util.List; +import java.util.Map; + +@Component +@RequiredArgsConstructor +public class BuildService { + private final IgorService igorService; + + private String encode(String uri) { + try { + return UriUtils.encodeFragment(uri, "UTF-8"); + } catch (UnsupportedEncodingException e) { + throw new RuntimeException(e); + } + } + + public String build(String master, String jobName, Map queryParams) { + return igorService.build(master, encode(jobName), queryParams, ""); + } + + public String stop(String master, String jobName, String queuedBuild, Integer buildNumber) { + return igorService.stop(master, jobName, queuedBuild, buildNumber, ""); + } + + public Map queuedBuild(String master, String item) { + return igorService.queuedBuild(master, item); + } + + public Map getBuild(Integer buildNumber, String master, String job) { + return igorService.getBuild(buildNumber, master, encode(job)); + } + + public Map getPropertyFile(Integer buildNumber, String fileName, 
String master, String job) { + return igorService.getPropertyFile(buildNumber, fileName, master, encode(job)); + } + + public List getArtifacts(Integer buildNumber, String fileName, String master, String job) { + return igorService.getArtifacts(buildNumber, fileName, master, encode(job)); + } + + public List compareCommits(String repoType, String projectKey, String repositorySlug, Map requestParams) { + return igorService.compareCommits(repoType, projectKey, repositorySlug, requestParams); + } +} diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/IgorService.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/IgorService.groovy deleted file mode 100644 index c8e7ac460c..0000000000 --- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/IgorService.groovy +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Copyright 2015 Netflix, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package com.netflix.spinnaker.orca.igor - -import retrofit.http.Body -import retrofit.http.EncodedPath -import retrofit.http.GET -import retrofit.http.PUT -import retrofit.http.Path -import retrofit.http.QueryMap - -interface IgorService { - - @PUT("/masters/{name}/jobs/{jobName}") - String build(@Path("name") String master, @Path(encode = false, value = "jobName") String jobName, @QueryMap Map queryParams, @Body String ignored) - - @PUT("/masters/{name}/jobs/{jobName}/stop/{queuedBuild}/{buildNumber}") - String stop(@Path("name") String master, @Path(encode = false, value = "jobName") String jobName, @Path(encode = false, value = "queuedBuild") String queuedBuild, @Path(encode = false, value = "buildNumber") Integer buildNumber, @Body String ignored) - - @GET("/builds/queue/{master}/{item}") - Map queuedBuild(@Path("master") String master, @Path("item") String item) - - @GET("/builds/status/{buildNumber}/{master}/{job}") - Map getBuild(@Path("buildNumber") Integer buildNumber, - @Path("master") String master, - @EncodedPath("job") String job) - - @GET("/builds/properties/{buildNumber}/{fileName}/{master}/{job}") - Map getPropertyFile(@Path("buildNumber") Integer buildNumber, - @Path("fileName") String fileName, - @Path("master") String master, - @EncodedPath("job") String job) - - @GET("/{repoType}/{projectKey}/{repositorySlug}/compareCommits") - List compareCommits(@Path("repoType") String repoType, @Path("projectKey") String projectKey, @Path("repositorySlug") String repositorySlug, @QueryMap Map requestParams) - -} diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/IgorService.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/IgorService.java new file mode 100644 index 0000000000..4fd3bedf50 --- /dev/null +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/IgorService.java @@ -0,0 +1,87 @@ +/* + * Copyright 2015 Netflix, Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.igor; + +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import com.netflix.spinnaker.orca.igor.model.GoogleCloudBuild; +import retrofit.http.*; + +import java.util.List; +import java.util.Map; + +public interface IgorService { + @PUT("/masters/{name}/jobs/{jobName}") + String build( + @Path("name") String master, + @Path(encode = false, value = "jobName") String jobName, + @QueryMap Map queryParams, + @Body String ignored); + + @PUT("/masters/{name}/jobs/{jobName}/stop/{queuedBuild}/{buildNumber}") + String stop( + @Path("name") String master, + @Path(encode = false, value = "jobName") String jobName, + @Path(encode = false, value = "queuedBuild") String queuedBuild, + @Path(encode = false, value = "buildNumber") Integer buildNumber, + @Body String ignored); + + @GET("/builds/queue/{master}/{item}") + Map queuedBuild( + @Path("master") String master, + @Path("item") String item); + + @GET("/builds/status/{buildNumber}/{master}/{job}") + Map getBuild( + @Path("buildNumber") Integer buildNumber, + @Path("master") String master, + @Path(encode = false, value = "job") String job); + + @GET("/builds/properties/{buildNumber}/{fileName}/{master}/{job}") + Map getPropertyFile( + @Path("buildNumber") Integer buildNumber, + @Path("fileName") String fileName, + @Path("master") String master, + @Path(encode = false, value = "job") String job); + + 
@GET("/{repoType}/{projectKey}/{repositorySlug}/compareCommits") + List compareCommits( + @Path("repoType") String repoType, + @Path("projectKey") String projectKey, + @Path("repositorySlug") String repositorySlug, + @QueryMap Map requestParams); + + @GET("/builds/artifacts/{buildNumber}/{master}/{job}") + List getArtifacts( + @Path("buildNumber") Integer buildNumber, + @Query("propertyFile") String propertyFile, + @Path("master") String master, + @Path(value = "job", encode = false) String job); + + @POST("/gcb/builds/create/{account}") + GoogleCloudBuild createGoogleCloudBuild( + @Path("account") String account, + @Body Map job); + + @GET("/gcb/builds/{account}/{buildId}") + GoogleCloudBuild getGoogleCloudBuild( + @Path("account") String account, + @Path("buildId") String buildId); + + @GET("/gcb/builds/{account}/{buildId}/artifacts") + List getGoogleCloudBuildArtifacts( + @Path("account") String account, + @Path("buildId") String buildId); +} diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/config/IgorConfiguration.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/config/IgorConfiguration.groovy index 438b932f18..6e0c9e93eb 100644 --- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/config/IgorConfiguration.groovy +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/config/IgorConfiguration.groovy @@ -29,6 +29,7 @@ import org.springframework.context.annotation.ComponentScan import org.springframework.context.annotation.Configuration import org.springframework.context.annotation.Import import retrofit.Endpoint +import retrofit.RequestInterceptor import retrofit.RestAdapter import retrofit.client.Client import retrofit.converter.JacksonConverter @@ -52,11 +53,12 @@ class IgorConfiguration { } @Bean - IgorService igorService(Endpoint igorEndpoint, ObjectMapper mapper) { + IgorService igorService(Endpoint igorEndpoint, ObjectMapper mapper, RequestInterceptor spinnakerRequestInterceptor) { new 
RestAdapter.Builder() .setEndpoint(igorEndpoint) .setClient(retrofitClient) .setLogLevel(retrofitLogLevel) + .setRequestInterceptor(spinnakerRequestInterceptor) .setLog(new RetrofitSlf4jLog(IgorService)) .setConverter(new JacksonConverter(mapper)) .build() diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/CIStageDefinition.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/CIStageDefinition.java new file mode 100644 index 0000000000..e3407d9023 --- /dev/null +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/CIStageDefinition.java @@ -0,0 +1,58 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.igor.model; + +import com.fasterxml.jackson.annotation.JsonProperty; +import com.netflix.spinnaker.kork.artifacts.model.ExpectedArtifact; +import lombok.Getter; + +import java.util.Collections; +import java.util.List; +import java.util.Optional; + +@Getter +public class CIStageDefinition implements RetryableStageDefinition { + private final String master; + private final String job; + private final String propertyFile; + private final Integer buildNumber; + private final boolean waitForCompletion; + private final List expectedArtifacts; + private final int consecutiveErrors; + + // There does not seem to be a way to auto-generate a constructor using our current version of Lombok (1.16.20) that + // Jackson can use to deserialize. + public CIStageDefinition( + @JsonProperty("master") String master, + @JsonProperty("job") String job, + @JsonProperty("property") String propertyFile, + @JsonProperty("buildNumber") Integer buildNumber, + @JsonProperty("waitForCompletion") Boolean waitForCompletion, + @JsonProperty("expectedArtifacts") List expectedArtifacts, + @JsonProperty("consecutiveErrors") Integer consecutiveErrors + ) { + this.master = master; + this.job = job; + this.propertyFile = propertyFile; + this.buildNumber = buildNumber; + this.waitForCompletion = Optional.ofNullable(waitForCompletion).orElse(true); + this.expectedArtifacts = Collections.unmodifiableList( + Optional.ofNullable(expectedArtifacts).orElse(Collections.emptyList()) + ); + this.consecutiveErrors = Optional.ofNullable(consecutiveErrors).orElse(0); + } +} diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/GoogleCloudBuild.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/GoogleCloudBuild.java new file mode 100644 index 0000000000..9f20967f97 --- /dev/null +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/GoogleCloudBuild.java @@ -0,0 +1,72 @@ +/* + * Copyright 2019 Google, Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.igor.model; + +import com.fasterxml.jackson.annotation.JsonCreator; +import com.fasterxml.jackson.annotation.JsonIgnoreProperties; +import com.fasterxml.jackson.annotation.JsonProperty; +import com.netflix.spinnaker.orca.ExecutionStatus; +import lombok.Builder; +import lombok.Getter; +import lombok.RequiredArgsConstructor; + +@Builder +@Getter +@RequiredArgsConstructor +@JsonIgnoreProperties(ignoreUnknown = true) +public class GoogleCloudBuild { + private final String id; + private final Status status; + private final String logUrl; + + @JsonCreator + public GoogleCloudBuild( + @JsonProperty("id") String id, + @JsonProperty("status") String status, + @JsonProperty("logUrl") String logUrl + ) { + this.id = id; + this.status = Status.fromString(status); + this.logUrl = logUrl; + } + + public enum Status { + STATUS_UNKNOWN(ExecutionStatus.RUNNING), + QUEUED(ExecutionStatus.RUNNING), + WORKING(ExecutionStatus.RUNNING), + SUCCESS(ExecutionStatus.SUCCEEDED), + FAILURE(ExecutionStatus.TERMINAL), + INTERNAL_ERROR(ExecutionStatus.TERMINAL), + TIMEOUT(ExecutionStatus.TERMINAL), + CANCELLED(ExecutionStatus.TERMINAL); + + @Getter + private ExecutionStatus executionStatus; + + Status(ExecutionStatus executionStatus) { + this.executionStatus = executionStatus; + } + + public static Status fromString(String status) { + try { + return valueOf(status); + } catch (NullPointerException | 
IllegalArgumentException e) { + return STATUS_UNKNOWN; + } + } + } +} diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/GoogleCloudBuildStageDefinition.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/GoogleCloudBuildStageDefinition.java new file mode 100644 index 0000000000..958c589b2b --- /dev/null +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/GoogleCloudBuildStageDefinition.java @@ -0,0 +1,70 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.igor.model; + +import com.fasterxml.jackson.annotation.JsonProperty; +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import lombok.Getter; + +import java.util.Map; +import java.util.Optional; + +@Getter +public class GoogleCloudBuildStageDefinition implements RetryableStageDefinition { + private final String account; + private final GoogleCloudBuild buildInfo; + private final Map<String, Object> buildDefinition; + private final String buildDefinitionSource; + private final GoogleCloudBuildDefinitionArtifact buildDefinitionArtifact; + private final int consecutiveErrors; + + // Our current version of Lombok (1.16.20) does not seem to offer a way to auto-generate a constructor that + // Jackson can use to deserialize. 
+ public GoogleCloudBuildStageDefinition( + @JsonProperty("account") String account, + @JsonProperty("buildInfo") GoogleCloudBuild build, + @JsonProperty("buildDefinition") Map<String, Object> buildDefinition, + @JsonProperty("buildDefinitionSource") String buildDefinitionSource, + @JsonProperty("buildDefinitionArtifact") GoogleCloudBuildDefinitionArtifact buildDefinitionArtifact, + @JsonProperty("consecutiveErrors") Integer consecutiveErrors + ) { + this.account = account; + this.buildInfo = build; + this.buildDefinition = buildDefinition; + this.buildDefinitionSource = buildDefinitionSource; + this.buildDefinitionArtifact = Optional.ofNullable(buildDefinitionArtifact) + .orElse(new GoogleCloudBuildDefinitionArtifact(null, null, null)); + this.consecutiveErrors = Optional.ofNullable(consecutiveErrors).orElse(0); + } + + @Getter + public static class GoogleCloudBuildDefinitionArtifact { + private final Artifact artifact; + private final String artifactAccount; + private final String artifactId; + + public GoogleCloudBuildDefinitionArtifact( + @JsonProperty("artifact") Artifact artifact, + @JsonProperty("artifactAccount") String artifactAccount, + @JsonProperty("artifactId") String artifactId + ) { + this.artifact = artifact; + this.artifactAccount = artifactAccount; + this.artifactId = artifactId; + } + } +} diff --git a/orca-extensionpoint/src/main/java/com/netflix/spinnaker/orca/extensionpoint/pipeline/PipelinePreprocessor.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/RetryableStageDefinition.java similarity index 72% rename from orca-extensionpoint/src/main/java/com/netflix/spinnaker/orca/extensionpoint/pipeline/PipelinePreprocessor.java rename to orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/RetryableStageDefinition.java index ff11faa87a..87b6bbbfd9 100644 --- a/orca-extensionpoint/src/main/java/com/netflix/spinnaker/orca/extensionpoint/pipeline/PipelinePreprocessor.java +++ 
b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/model/RetryableStageDefinition.java @@ -1,5 +1,5 @@ /* - * Copyright 2017 Netflix, Inc. + * Copyright 2019 Google, Inc. * * Licensed under the Apache License, Version 2.0 (the "License") * you may not use this file except in compliance with the License. @@ -14,10 +14,8 @@ * limitations under the License. */ -package com.netflix.spinnaker.orca.extensionpoint.pipeline; +package com.netflix.spinnaker.orca.igor.model; -import java.util.Map; - -public interface PipelinePreprocessor { - Map process(Map pipeline); +public interface RetryableStageDefinition { + int getConsecutiveErrors(); } diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/CIStage.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/CIStage.java new file mode 100644 index 0000000000..3d1e3ff273 --- /dev/null +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/CIStage.java @@ -0,0 +1,117 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package com.netflix.spinnaker.orca.igor.pipeline; + +import com.netflix.spinnaker.orca.CancellableStage; +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.igor.model.CIStageDefinition; +import com.netflix.spinnaker.orca.igor.tasks.*; +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.graph.StageGraphBuilder; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import com.netflix.spinnaker.orca.pipeline.tasks.artifacts.BindProducedArtifactsTask; +import lombok.RequiredArgsConstructor; +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.StringUtils; + +import javax.annotation.Nonnull; +import java.util.HashMap; +import java.util.Map; + +@RequiredArgsConstructor +@Slf4j +public abstract class CIStage implements StageDefinitionBuilder, CancellableStage { + private final StopJenkinsJobTask stopJenkinsJobTask; + + @Override + public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) { + CIStageDefinition stageDefinition = stage.mapTo(CIStageDefinition.class); + String jobType = StringUtils.capitalize(getType()); + builder + .withTask(String.format("start%sJob", jobType), StartJenkinsJobTask.class) + .withTask(String.format("waitFor%sJobStart", jobType), waitForJobStartTaskClass()); + + if (stageDefinition.isWaitForCompletion()) { + builder.withTask(String.format("monitor%sJob", jobType), MonitorJenkinsJobTask.class); + builder.withTask("getBuildProperties", GetBuildPropertiesTask.class); + builder.withTask("getBuildArtifacts", GetBuildArtifactsTask.class); + } + if (stageDefinition.getExpectedArtifacts().size() > 0) { + builder.withTask(BindProducedArtifactsTask.TASK_NAME, BindProducedArtifactsTask.class); + } + } + + protected Class<? extends Task> waitForJobStartTaskClass() { + return MonitorQueuedJenkinsJobTask.class; + } + + @Override + @SuppressWarnings("unchecked") + public void 
prepareStageForRestart(@Nonnull Stage stage) { + Object buildInfo = stage.getContext().get("buildInfo"); + if (buildInfo != null) { + Map<String, Object> restartDetails = (Map<String, Object>) stage.getContext() + .computeIfAbsent("restartDetails", k -> new HashMap<>()); + restartDetails.put("previousBuildInfo", buildInfo); + } + stage.getContext().remove("buildInfo"); + stage.getContext().remove("buildNumber"); + } + + @Override + public Result cancel(final Stage stage) { + log.info(String.format( + "Cancelling stage (stageId: %s, executionId: %s, context: %s)", + stage.getId(), + stage.getExecution().getId(), + stage.getContext() + )); + + try { + stopJenkinsJobTask.execute(stage); + } catch (Exception e) { + log.error( + String.format("Failed to cancel stage (stageId: %s, executionId: %s), e: %s", stage.getId(), stage.getExecution().getId(), e.getMessage()), + e + ); + } + return new Result(stage, new HashMap<>()); + } + + @Override + public void onFailureStages(@Nonnull Stage stage, @Nonnull StageGraphBuilder graph) { + CIStageDefinition stageDefinition = stage.mapTo(CIStageDefinition.class); + if (stageDefinition.getPropertyFile() != null && !stageDefinition.getPropertyFile().equals("")) { + log.info( + "Stage failed (stageId: {}, executionId: {}), trying to find requested property file in case it was archived.", + stage.getId(), + stage.getExecution().getId() + ); + graph.add((Stage s) -> { + s.setType(new GetPropertiesStage().getType()); + s.setName("Try to get properties file"); + Map<String, Object> context = new HashMap<>(); + context.put("master", stageDefinition.getMaster()); + context.put("job", stageDefinition.getJob()); + context.put("propertyFile", stageDefinition.getPropertyFile()); + context.put("buildNumber", stageDefinition.getBuildNumber()); + s.setContext(context); + } + ); + } + } +} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/loadbalancer/MigrateLoadBalancerStage.java 
b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/GetPropertiesStage.java similarity index 57% rename from orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/loadbalancer/MigrateLoadBalancerStage.java rename to orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/GetPropertiesStage.java index 38cb6ec47f..20449b37af 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/pipeline/loadbalancer/MigrateLoadBalancerStage.java +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/GetPropertiesStage.java @@ -1,5 +1,6 @@ /* - * Copyright 2016 Netflix, Inc. + * + * Copyright 2019 Netflix, Inc. * * Licensed under the Apache License, Version 2.0 (the "License") * you may not use this file except in compliance with the License. @@ -12,26 +13,22 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
+ * */ +package com.netflix.spinnaker.orca.igor.pipeline; -package com.netflix.spinnaker.orca.clouddriver.pipeline.loadbalancer; - -import com.netflix.spinnaker.orca.clouddriver.tasks.MonitorKatoTask; -import com.netflix.spinnaker.orca.clouddriver.tasks.loadbalancer.MigrateLoadBalancerTask; +import com.netflix.spinnaker.orca.igor.tasks.GetBuildPropertiesTask; import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; import com.netflix.spinnaker.orca.pipeline.TaskNode; import com.netflix.spinnaker.orca.pipeline.model.Stage; import org.springframework.stereotype.Component; -@Component -public class MigrateLoadBalancerStage implements StageDefinitionBuilder { - - public static final String PIPELINE_CONFIG_TYPE = "migrateLoadBalancer"; +import javax.annotation.Nonnull; +@Component +public class GetPropertiesStage implements StageDefinitionBuilder { @Override - public void taskGraph(Stage stage, TaskNode.Builder builder) { - builder - .withTask("migrateLoadBalancer", MigrateLoadBalancerTask.class) - .withTask("monitorMigration", MonitorKatoTask.class); + public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) { + builder.withTask("getPropertiesFile", GetBuildPropertiesTask.class); } } diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/GoogleCloudBuildStage.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/GoogleCloudBuildStage.java new file mode 100644 index 0000000000..ca21fda528 --- /dev/null +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/GoogleCloudBuildStage.java @@ -0,0 +1,44 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package com.netflix.spinnaker.orca.igor.pipeline; + +import com.netflix.spinnaker.orca.igor.model.GoogleCloudBuildStageDefinition; +import com.netflix.spinnaker.orca.igor.tasks.GetGoogleCloudBuildArtifactsTask; +import com.netflix.spinnaker.orca.igor.tasks.MonitorGoogleCloudBuildTask; +import com.netflix.spinnaker.orca.igor.tasks.StartGoogleCloudBuildTask; +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import lombok.RequiredArgsConstructor; +import lombok.extern.slf4j.Slf4j; +import org.springframework.stereotype.Component; + +import javax.annotation.Nonnull; + +@Component +@RequiredArgsConstructor +@Slf4j +public class GoogleCloudBuildStage implements StageDefinitionBuilder { + @Override + public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) { + GoogleCloudBuildStageDefinition stageDefinition = stage.mapTo(GoogleCloudBuildStageDefinition.class); + builder + .withTask("startGoogleCloudBuildTask", StartGoogleCloudBuildTask.class) + .withTask("monitorGoogleCloudBuildTask", MonitorGoogleCloudBuildTask.class) + .withTask("getGoogleCloudBuildArtifactsTask", GetGoogleCloudBuildArtifactsTask.class) + ; + } +} diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStage.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStage.groovy deleted file mode 100644 index 17d818ac7d..0000000000 --- 
a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStage.groovy +++ /dev/null @@ -1,87 +0,0 @@ -/* - * Copyright 2015 Netflix, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package com.netflix.spinnaker.orca.igor.pipeline - -import com.netflix.spinnaker.orca.CancellableStage -import com.netflix.spinnaker.orca.igor.tasks.MonitorJenkinsJobTask -import com.netflix.spinnaker.orca.igor.tasks.MonitorQueuedJenkinsJobTask -import com.netflix.spinnaker.orca.igor.tasks.StartJenkinsJobTask -import com.netflix.spinnaker.orca.igor.tasks.StopJenkinsJobTask -import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder -import com.netflix.spinnaker.orca.pipeline.TaskNode -import com.netflix.spinnaker.orca.pipeline.model.Stage -import com.netflix.spinnaker.orca.pipeline.tasks.artifacts.BindProducedArtifactsTask -import groovy.util.logging.Slf4j -import org.springframework.beans.factory.annotation.Autowired -import org.springframework.stereotype.Component - -@Slf4j -@Component -public class JenkinsStage implements StageDefinitionBuilder, CancellableStage { - @Autowired StopJenkinsJobTask stopJenkinsJobTask - - @Override - void taskGraph(Stage stage, TaskNode.Builder builder) { - builder - .withTask("start${getType().capitalize()}Job", startJobTaskClass()) - .withTask("waitFor${getType().capitalize()}JobStart", waitForJobStartTaskClass()) - - if (!stage.getContext().getOrDefault("waitForCompletion", 
"true").toString().equalsIgnoreCase("false")) { - builder.withTask("monitor${getType().capitalize()}Job", waitForCompletionTaskClass()) - } - - if (stage.context.containsKey("expectedArtifacts")) { - builder - .withTask(BindProducedArtifactsTask.TASK_NAME, BindProducedArtifactsTask.class) - } - } - - Class startJobTaskClass() { - return StartJenkinsJobTask.class - } - - Class waitForJobStartTaskClass() { - return MonitorQueuedJenkinsJobTask.class - } - - Class waitForCompletionTaskClass() { - return MonitorJenkinsJobTask.class - } - - @Override - void prepareStageForRestart(Stage stage) { - if (stage.context.buildInfo) { - if (!stage.context.restartDetails) stage.context.restartDetails = [:] - stage.context.restartDetails["previousBuildInfo"] = stage.context.buildInfo - } - stage.context.remove("buildInfo") - stage.context.remove("buildNumber") - } - - @Override - CancellableStage.Result cancel(Stage stage) { - log.info("Cancelling stage (stageId: ${stage.id}, executionId: ${stage.execution.id}, context: ${stage.context as Map})") - - try { - stopJenkinsJobTask.execute(stage) - } catch (Exception e) { - log.error("Failed to cancel stage (stageId: ${stage.id}, executionId: ${stage.execution.id}), e: ${e.message}", e) - } - - return new CancellableStage.Result(stage, [:]) - } -} diff --git a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateServerGroupTask.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStage.java similarity index 61% rename from orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateServerGroupTask.java rename to orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStage.java index 0bfce7ed11..420967fdc5 100644 --- a/orca-clouddriver/src/main/groovy/com/netflix/spinnaker/orca/clouddriver/tasks/servergroup/MigrateServerGroupTask.java +++ 
b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStage.java @@ -1,11 +1,11 @@ /* - * Copyright 2016 Netflix, Inc. + * Copyright 2015 Netflix, Inc. * - * Licensed under the Apache License, Version 2.0 (the "License") + * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * - * http://www.apache.org/licenses/LICENSE-2.0 + * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -13,17 +13,14 @@ * See the License for the specific language governing permissions and * limitations under the License. */ +package com.netflix.spinnaker.orca.igor.pipeline; -package com.netflix.spinnaker.orca.clouddriver.tasks.servergroup; - -import com.netflix.spinnaker.orca.clouddriver.tasks.MigrateTask; +import com.netflix.spinnaker.orca.igor.tasks.StopJenkinsJobTask; import org.springframework.stereotype.Component; @Component -public class MigrateServerGroupTask extends MigrateTask { - - @Override - public String getCloudOperationType() { - return "migrateServerGroup"; +public class JenkinsStage extends CIStage { + public JenkinsStage(StopJenkinsJobTask stopJenkinsJobTask) { + super(stopJenkinsJobTask); } } diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/ScriptStage.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/ScriptStage.groovy index 89573778f5..95e3b2cac0 100644 --- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/ScriptStage.groovy +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/ScriptStage.groovy @@ -17,6 +17,7 @@ package com.netflix.spinnaker.orca.igor.pipeline import com.netflix.spinnaker.orca.CancellableStage +import com.netflix.spinnaker.orca.igor.tasks.GetBuildPropertiesTask import 
com.netflix.spinnaker.orca.igor.tasks.MonitorJenkinsJobTask import com.netflix.spinnaker.orca.igor.tasks.MonitorQueuedJenkinsJobTask import com.netflix.spinnaker.orca.igor.tasks.StartScriptTask @@ -44,6 +45,7 @@ class ScriptStage implements StageDefinitionBuilder, CancellableStage { if (!stage.getContext().getOrDefault("waitForCompletion", "true").toString().equalsIgnoreCase("false")) { builder.withTask("monitorScript", MonitorJenkinsJobTask) + builder.withTask("getBuildProperties", GetBuildPropertiesTask.class) } } diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/TravisStage.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/TravisStage.java similarity index 66% rename from orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/TravisStage.groovy rename to orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/TravisStage.java index 72e03ca80a..6cc1a4d21b 100644 --- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/TravisStage.groovy +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/TravisStage.java @@ -13,10 +13,14 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -package com.netflix.spinnaker.orca.igor.pipeline +package com.netflix.spinnaker.orca.igor.pipeline; -import org.springframework.stereotype.Component +import com.netflix.spinnaker.orca.igor.tasks.StopJenkinsJobTask; +import org.springframework.stereotype.Component; @Component -class TravisStage extends JenkinsStage { +public class TravisStage extends CIStage { + public TravisStage(StopJenkinsJobTask stopJenkinsJobTask) { + super(stopJenkinsJobTask); + } } diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/WerckerStage.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/WerckerStage.java similarity index 52% rename from orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/WerckerStage.groovy rename to orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/WerckerStage.java index bc49047a03..7a3efd65d5 100644 --- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/WerckerStage.groovy +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/pipeline/WerckerStage.java @@ -6,16 +6,20 @@ * If a copy of the Apache License Version 2.0 was not distributed with this file, * You can obtain one at https://www.apache.org/licenses/LICENSE-2.0.html */ -package com.netflix.spinnaker.orca.igor.pipeline +package com.netflix.spinnaker.orca.igor.pipeline; -import com.netflix.spinnaker.orca.igor.tasks.MonitorWerckerJobStartedTask -import org.springframework.stereotype.Component +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.igor.tasks.MonitorWerckerJobStartedTask; +import com.netflix.spinnaker.orca.igor.tasks.StopJenkinsJobTask; +import org.springframework.stereotype.Component; @Component -class WerckerStage extends JenkinsStage { - - Class waitForJobStartTaskClass() { - return MonitorWerckerJobStartedTask.class +public class WerckerStage extends CIStage { + public WerckerStage(StopJenkinsJobTask stopJenkinsJobTask) { + super(stopJenkinsJobTask); } + 
@Override + public Class<? extends Task> waitForJobStartTaskClass() { + return MonitorWerckerJobStartedTask.class; + } } diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildArtifactsTask.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildArtifactsTask.java new file mode 100644 index 0000000000..ab46a47bbf --- /dev/null +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildArtifactsTask.java @@ -0,0 +1,57 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.igor.tasks; + +import com.netflix.spinnaker.kork.artifacts.model.Artifact; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.igor.BuildService; +import com.netflix.spinnaker.orca.igor.model.CIStageDefinition; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import lombok.RequiredArgsConstructor; +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.StringUtils; +import org.springframework.stereotype.Component; + +import javax.annotation.Nonnull; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +@Component +@RequiredArgsConstructor +@Slf4j +public class GetBuildArtifactsTask extends RetryableIgorTask<CIStageDefinition> { + private final BuildService buildService; + + @Override + protected @Nonnull TaskResult tryExecute(@Nonnull CIStageDefinition stageDefinition) { + if (StringUtils.isEmpty(stageDefinition.getPropertyFile())) { + return TaskResult.SUCCEEDED; + } + List<Artifact> artifacts = buildService.getArtifacts( + stageDefinition.getBuildNumber(), stageDefinition.getPropertyFile(), stageDefinition.getMaster(), stageDefinition.getJob() + ); + Map<String, List<Artifact>> outputs = Collections.singletonMap("artifacts", artifacts); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(Collections.emptyMap()).outputs(outputs).build(); + } + + @Override + protected @Nonnull CIStageDefinition mapStage(@Nonnull Stage stage) { + return stage.mapTo(CIStageDefinition.class); + } +} diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildPropertiesTask.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildPropertiesTask.java new file mode 100644 index 0000000000..0d225b4940 --- /dev/null +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildPropertiesTask.java @@ -0,0 +1,63 @@ +/* + * Copyright 2019 Netflix, Inc. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.igor.tasks; + +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.igor.BuildService; +import com.netflix.spinnaker.orca.igor.model.CIStageDefinition; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import lombok.RequiredArgsConstructor; +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.StringUtils; +import org.springframework.stereotype.Component; + +import javax.annotation.Nonnull; +import java.util.HashMap; +import java.util.Map; + +@Component +@RequiredArgsConstructor +@Slf4j +public class GetBuildPropertiesTask extends RetryableIgorTask<CIStageDefinition> { + private final BuildService buildService; + + @Override + protected @Nonnull TaskResult tryExecute(@Nonnull CIStageDefinition stageDefinition) { + if (StringUtils.isEmpty(stageDefinition.getPropertyFile())) { + return TaskResult.SUCCEEDED; + } + + Map<String, Object> properties = buildService.getPropertyFile( + stageDefinition.getBuildNumber(), + stageDefinition.getPropertyFile(), + stageDefinition.getMaster(), + stageDefinition.getJob() + ); + if (properties.size() == 0) { + throw new IllegalStateException(String.format("Expected properties file %s, but it was missing, empty, or contained invalid syntax", stageDefinition.getPropertyFile())); + } + HashMap<String, Object> outputs = new HashMap<>(properties); + outputs.put("propertyFileContents", 
properties); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).outputs(outputs).build(); + } + + @Override + protected @Nonnull CIStageDefinition mapStage(@Nonnull Stage stage) { + return stage.mapTo(CIStageDefinition.class); + } +} diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetCommitsTask.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetCommitsTask.groovy index beb850d962..f85570334d 100644 --- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetCommitsTask.groovy +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetCommitsTask.groovy @@ -60,12 +60,12 @@ class GetCommitsTask implements DiffTask { // is igor not configured or have we exceeded configured retries if (!buildService || retriesRemaining == 0) { log.info("igor is not configured or retries exceeded : buildService : ${buildService}, retries : ${retriesRemaining}") - return new TaskResult(ExecutionStatus.SUCCEEDED, [commits: [], getCommitsRetriesRemaining: retriesRemaining]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([commits: [], getCommitsRetriesRemaining: retriesRemaining]).build() } if (!front50Service) { log.warn("Front50 is not configured. 
Fix this by setting front50.enabled: true") - return new TaskResult(ExecutionStatus.SUCCEEDED, [commits: [], getCommitsRetriesRemaining: retriesRemaining]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([commits: [], getCommitsRetriesRemaining: retriesRemaining]).build() } Map repoInfo = [:] @@ -90,7 +90,7 @@ class GetCommitsTask implements DiffTask { if (!ancestorAmi) { log.info "could not determine ancestor ami, this may be a new cluster with no ancestor asg" - return new TaskResult(ExecutionStatus.SUCCEEDED, [commits: []]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([commits: []]).build() } //figure out the new asg/ami/commit @@ -121,26 +121,26 @@ class GetCommitsTask implements DiffTask { outputs << [buildInfo: [ancestor: sourceInfo.build, target: targetInfo.build]] } - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } catch (RetrofitError e) { if (e.kind == RetrofitError.Kind.UNEXPECTED) { // give up on internal errors log.error("internal error while talking to igor : [repoType: ${repoInfo?.repoType} projectKey:${repoInfo?.projectKey} repositorySlug:${repoInfo?.repositorySlug} sourceCommit:$sourceInfo targetCommit: $targetInfo]") - return new TaskResult(ExecutionStatus.SUCCEEDED, [commits: []]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([commits: []]).build() } else if (e.response?.status == 404) { // just give up on 404 log.error("got a 404 from igor for : [repoType: ${repoInfo?.repoType} projectKey:${repoInfo?.projectKey} repositorySlug:${repoInfo?.repositorySlug} sourceCommit:${sourceInfo} targetCommit: ${targetInfo}]") - return new TaskResult(ExecutionStatus.SUCCEEDED, [commits: []]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([commits: []]).build() } else { // retry on other status codes log.error("retrofit error (${e.message}) for : [repoType: ${repoInfo?.repoType} 
projectKey:${repoInfo?.projectKey} repositorySlug:${repoInfo?.repositorySlug} sourceCommit:${sourceInfo} targetCommit: ${targetInfo}], retrying") - return new TaskResult(ExecutionStatus.RUNNING, [getCommitsRetriesRemaining: retriesRemaining - 1]) + return TaskResult.builder(ExecutionStatus.RUNNING).context([getCommitsRetriesRemaining: retriesRemaining - 1]).build() } } catch (Exception f) { // retry on everything else log.error("unexpected exception for : [repoType: ${repoInfo?.repoType} projectKey:${repoInfo?.projectKey} repositorySlug:${repoInfo?.repositorySlug} sourceCommit:${sourceInfo} targetCommit: ${targetInfo}], retrying", f) - return new TaskResult(ExecutionStatus.RUNNING, [getCommitsRetriesRemaining: retriesRemaining - 1]) + return TaskResult.builder(ExecutionStatus.RUNNING).context([getCommitsRetriesRemaining: retriesRemaining - 1]).build() } catch (Throwable g) { log.error("unexpected throwable for : [repoType: ${repoInfo?.repoType} projectKey:${repoInfo?.projectKey} repositorySlug:${repoInfo?.repositorySlug} sourceCommit:${sourceInfo} targetCommit: ${targetInfo}], retrying", g) - return new TaskResult(ExecutionStatus.RUNNING, [getCommitsRetriesRemaining: retriesRemaining - 1]) + return TaskResult.builder(ExecutionStatus.RUNNING).context([getCommitsRetriesRemaining: retriesRemaining - 1]).build() } } diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetGoogleCloudBuildArtifactsTask.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetGoogleCloudBuildArtifactsTask.java new file mode 100644 index 0000000000..c87abbd454 --- /dev/null +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/GetGoogleCloudBuildArtifactsTask.java @@ -0,0 +1,54 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.igor.tasks;
+
+import com.netflix.spinnaker.kork.artifacts.model.Artifact;
+import com.netflix.spinnaker.orca.ExecutionStatus;
+import com.netflix.spinnaker.orca.TaskResult;
+import com.netflix.spinnaker.orca.igor.IgorService;
+import com.netflix.spinnaker.orca.igor.model.GoogleCloudBuildStageDefinition;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import lombok.RequiredArgsConstructor;
+import lombok.extern.slf4j.Slf4j;
+import org.springframework.stereotype.Component;
+
+import javax.annotation.Nonnull;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+@Component
+@RequiredArgsConstructor
+@Slf4j
+public class GetGoogleCloudBuildArtifactsTask extends RetryableIgorTask<GoogleCloudBuildStageDefinition> {
+  private final IgorService igorService;
+
+  @Override
+  public @Nonnull TaskResult tryExecute(@Nonnull GoogleCloudBuildStageDefinition stageDefinition) {
+    List<Artifact> artifacts = igorService.getGoogleCloudBuildArtifacts(
+      stageDefinition.getAccount(),
+      stageDefinition.getBuildInfo().getId()
+    );
+    Map<String, List<Artifact>> outputs = Collections.singletonMap("artifacts", artifacts);
+    return TaskResult.builder(ExecutionStatus.SUCCEEDED).outputs(outputs).build();
+  }
+
+  @Override
+  public @Nonnull GoogleCloudBuildStageDefinition mapStage(@Nonnull Stage stage) {
+    return stage.mapTo(GoogleCloudBuildStageDefinition.class);
+  }
+}
diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorGoogleCloudBuildTask.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorGoogleCloudBuildTask.java
new file mode 100644
index 0000000000..6e4ea46813
--- /dev/null
+++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorGoogleCloudBuildTask.java
@@ -0,0 +1,63 @@
+/*
+ * Copyright 2019 Google, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.igor.tasks;
+
+import com.netflix.spinnaker.orca.OverridableTimeoutRetryableTask;
+import com.netflix.spinnaker.orca.TaskResult;
+import com.netflix.spinnaker.orca.igor.IgorService;
+import com.netflix.spinnaker.orca.igor.model.GoogleCloudBuild;
+import com.netflix.spinnaker.orca.igor.model.GoogleCloudBuildStageDefinition;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import lombok.Getter;
+import lombok.RequiredArgsConstructor;
+import lombok.extern.slf4j.Slf4j;
+import org.springframework.stereotype.Component;
+
+import javax.annotation.Nonnull;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+@Component
+@RequiredArgsConstructor
+@Slf4j
+public class MonitorGoogleCloudBuildTask extends RetryableIgorTask<GoogleCloudBuildStageDefinition> implements OverridableTimeoutRetryableTask {
+  @Getter
+  protected long backoffPeriod = 10000;
+  @Getter
+  protected long timeout = TimeUnit.HOURS.toMillis(2);
+
+  private final IgorService igorService;
+
+  @Override
+  @Nonnull
+  public TaskResult tryExecute(@Nonnull GoogleCloudBuildStageDefinition stageDefinition) {
+    GoogleCloudBuild build = igorService.getGoogleCloudBuild(
+      stageDefinition.getAccount(),
+      stageDefinition.getBuildInfo().getId()
+    );
+    Map<String, Object> context = new HashMap<>();
+    context.put("buildInfo", build);
+    return TaskResult.builder(build.getStatus().getExecutionStatus()).context(context).build();
+  }
+
+  @Override
+  @Nonnull
+  protected GoogleCloudBuildStageDefinition mapStage(@Nonnull Stage stage) {
+    return stage.mapTo(GoogleCloudBuildStageDefinition.class);
+  }
+}
diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorJenkinsJobTask.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorJenkinsJobTask.groovy
index 4f2eb668b7..0217aefd94 100644
--- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorJenkinsJobTask.groovy
+++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorJenkinsJobTask.groovy
@@ -17,8 +17,6 @@
 package com.netflix.spinnaker.orca.igor.tasks
 
 import com.netflix.spinnaker.kork.core.RetrySupport
-
-import java.util.concurrent.TimeUnit
 import com.netflix.spinnaker.orca.ExecutionStatus
 import com.netflix.spinnaker.orca.OverridableTimeoutRetryableTask
 import com.netflix.spinnaker.orca.TaskResult
@@ -29,6 +27,8 @@ import org.springframework.beans.factory.annotation.Autowired
 import org.springframework.stereotype.Component
 import retrofit.RetrofitError
 
+import java.util.concurrent.TimeUnit
+
 @Slf4j
 @Component
 class MonitorJenkinsJobTask implements OverridableTimeoutRetryableTask {
@@ -56,7 +56,7 @@ class MonitorJenkinsJobTask implements OverridableTimeoutRetryableTask {
 
     if (!stage.context.buildNumber) {
       log.error("failed to get build number for job ${job} from master ${master}")
-      return new TaskResult(ExecutionStatus.TERMINAL)
+      return TaskResult.ofStatus(ExecutionStatus.TERMINAL)
     }
 
     def buildNumber = (int) stage.context.buildNumber
@@ -65,44 +65,24 @@ class MonitorJenkinsJobTask implements OverridableTimeoutRetryableTask {
      Map outputs =
[:] String result = build.result if ((build.building && build.building != 'false') || (build.running && build.running != 'false')) { - return new TaskResult(ExecutionStatus.RUNNING, [buildInfo: build]) + return TaskResult.builder(ExecutionStatus.RUNNING).context([buildInfo: build]).build() } outputs.buildInfo = build if (statusMap.containsKey(result)) { ExecutionStatus status = statusMap[result] - - if (stage.context.propertyFile) { - Map properties = [:] - try { - retrySupport.retry({ - properties = buildService.getPropertyFile(buildNumber, stage.context.propertyFile, master, job) - if (properties.size() == 0 && result == 'SUCCESS') { - throw new IllegalStateException("Expected properties file ${stage.context.propertyFile} but it was either missing, empty or contained invalid syntax") - } - }, 6, 5000, false) - } catch (RetrofitError e) { - if (e.response?.status == 404) { - throw new IllegalStateException("Expected properties file " + stage.context.propertyFile + " but it was missing") - } else { - throw e - } - } - outputs << properties - outputs.propertyFileContents = properties - } if (result == 'UNSTABLE' && stage.context.markUnstableAsSuccessful) { status = ExecutionStatus.SUCCEEDED } - return new TaskResult(status, outputs, outputs) + return TaskResult.builder(status).context(outputs).outputs(outputs).build() } else { - return new TaskResult(ExecutionStatus.RUNNING, [buildInfo: build]) + return TaskResult.builder(ExecutionStatus.RUNNING).context([buildInfo: build]).build() } } catch (RetrofitError e) { if ([503, 500, 404].contains(e.response?.status)) { log.warn("Http ${e.response.status} received from `igor`, retrying...") - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } throw e diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorQueuedJenkinsJobTask.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorQueuedJenkinsJobTask.groovy index 
773e5254fa..ef76a5aa05 100644 --- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorQueuedJenkinsJobTask.groovy +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorQueuedJenkinsJobTask.groovy @@ -46,14 +46,14 @@ class MonitorQueuedJenkinsJobTask implements OverridableTimeoutRetryableTask { try { Map build = buildService.queuedBuild(master, queuedBuild) if (build?.number == null) { - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } else { - return new TaskResult(ExecutionStatus.SUCCEEDED, [buildNumber: build.number]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([buildNumber: build.number]).build() } } catch (RetrofitError e) { if ([503, 500, 404].contains(e.response?.status)) { log.warn("Http ${e.response.status} received from `igor`, retrying...") - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } throw e } diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorWerckerJobStartedTask.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorWerckerJobStartedTask.groovy index 69ad933a39..3189e9709b 100644 --- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorWerckerJobStartedTask.groovy +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorWerckerJobStartedTask.groovy @@ -41,16 +41,16 @@ class MonitorWerckerJobStartedTask implements OverridableTimeoutRetryableTask { Map outputs = [:] if ("not_built".equals(build?.result) || build?.number == null) { //The build has not yet started, so the job started monitoring task needs to be re-run - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } else { //The build has started, so the job started monitoring task is completed - return new TaskResult(ExecutionStatus.SUCCEEDED, [buildNumber: build.number]) + 
return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([buildNumber: build.number]).build() } } catch (RetrofitError e) { if ([503, 500, 404].contains(e.response?.status)) { log.warn("Http ${e.response.status} received from `igor`, retrying...") - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } throw e diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/RetryableIgorTask.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/RetryableIgorTask.java new file mode 100644 index 0000000000..4165c5f48f --- /dev/null +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/RetryableIgorTask.java @@ -0,0 +1,92 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+
+package com.netflix.spinnaker.orca.igor.tasks;
+
+import com.google.common.collect.ImmutableMap;
+import com.netflix.spinnaker.orca.ExecutionStatus;
+import com.netflix.spinnaker.orca.RetryableTask;
+import com.netflix.spinnaker.orca.TaskResult;
+import com.netflix.spinnaker.orca.igor.model.RetryableStageDefinition;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import java.util.Collections;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import javax.annotation.Nonnull;
+import lombok.RequiredArgsConstructor;
+import lombok.extern.slf4j.Slf4j;
+import retrofit.RetrofitError;
+
+@RequiredArgsConstructor
+@Slf4j
+public abstract class RetryableIgorTask<T extends RetryableStageDefinition> implements RetryableTask {
+  public long getBackoffPeriod() {
+    return TimeUnit.SECONDS.toMillis(5);
+  }
+
+  public long getTimeout() {
+    return TimeUnit.MINUTES.toMillis(1);
+  }
+
+  protected int getMaxConsecutiveErrors() {
+    return 5;
+  }
+
+  @Override
+  public @Nonnull TaskResult execute(@Nonnull Stage stage) {
+    T stageDefinition = mapStage(stage);
+    int errors = stageDefinition.getConsecutiveErrors();
+    try {
+      TaskResult result = tryExecute(stageDefinition);
+      return resetErrorCount(result);
+    } catch (RetrofitError e) {
+      if (stageDefinition.getConsecutiveErrors() < getMaxConsecutiveErrors() && isRetryable(e)) {
+        return TaskResult.builder(ExecutionStatus.RUNNING).context(errorContext(errors + 1)).build();
+      }
+      throw e;
+    }
+  }
+
+  abstract protected @Nonnull TaskResult tryExecute(@Nonnull T stageDefinition);
+
+  abstract protected @Nonnull T mapStage(@Nonnull Stage stage);
+
+  private TaskResult resetErrorCount(TaskResult result) {
+    Map<String, Object> newContext = ImmutableMap.<String, Object>builder()
+      .putAll(result.getContext())
+      .put("consecutiveErrors", 0)
+      .build();
+    return TaskResult.builder(result.getStatus()).context(newContext).outputs(result.getOutputs()).build();
+  }
+
+  private Map<String, Object> errorContext(int errors) {
+    return Collections.singletonMap("consecutiveErrors", errors);
+  }
+
+  private boolean isRetryable(RetrofitError retrofitError) {
+    if (retrofitError.getKind() == RetrofitError.Kind.NETWORK) {
+      log.warn("Failed to communicate with igor, retrying...");
+      return true;
+    }
+
+    int status = retrofitError.getResponse().getStatus();
+    if (status == 500 || status == 503) {
+      log.warn(String.format("Received HTTP %s response from igor, retrying...", status));
+      return true;
+    }
+    return false;
+  }
+}
diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartGoogleCloudBuildTask.java b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartGoogleCloudBuildTask.java
new file mode 100644
index 0000000000..5b0bfcde5c
--- /dev/null
+++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartGoogleCloudBuildTask.java
@@ -0,0 +1,95 @@
+/*
+ * Copyright 2019 Google, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.netflix.spinnaker.orca.igor.tasks;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.netflix.spinnaker.kork.artifacts.model.Artifact;
+import com.netflix.spinnaker.kork.core.RetrySupport;
+import com.netflix.spinnaker.orca.ExecutionStatus;
+import com.netflix.spinnaker.orca.Task;
+import com.netflix.spinnaker.orca.TaskResult;
+import com.netflix.spinnaker.orca.clouddriver.OortService;
+import com.netflix.spinnaker.orca.igor.IgorService;
+import com.netflix.spinnaker.orca.igor.model.GoogleCloudBuild;
+import com.netflix.spinnaker.orca.igor.model.GoogleCloudBuildStageDefinition;
+import com.netflix.spinnaker.orca.pipeline.model.Stage;
+import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver;
+import lombok.RequiredArgsConstructor;
+import org.springframework.stereotype.Component;
+import org.yaml.snakeyaml.Yaml;
+import org.yaml.snakeyaml.constructor.SafeConstructor;
+import retrofit.client.Response;
+
+import javax.annotation.Nonnull;
+import java.io.IOException;
+import java.util.Map;
+
+@Component
+@RequiredArgsConstructor
+public class StartGoogleCloudBuildTask implements Task {
+  private final IgorService igorService;
+  private final OortService oortService;
+  private final ArtifactResolver artifactResolver;
+
+  private final ObjectMapper objectMapper = new ObjectMapper();
+  private final RetrySupport retrySupport = new RetrySupport();
+  private static final ThreadLocal<Yaml> yamlParser = ThreadLocal.withInitial(() -> new Yaml(new SafeConstructor()));
+
+  @Override
+  @Nonnull public TaskResult execute(@Nonnull Stage stage) {
+    GoogleCloudBuildStageDefinition stageDefinition = stage.mapTo(GoogleCloudBuildStageDefinition.class);
+
+    Map<String, Object> buildDefinition;
+    if (stageDefinition.getBuildDefinitionSource() != null && stageDefinition.getBuildDefinitionSource().equals("artifact")) {
+      buildDefinition = getBuildDefinitionFromArtifact(stage, stageDefinition);
+    } else {
+      buildDefinition = stageDefinition.getBuildDefinition();
+    }
+
+    GoogleCloudBuild result = igorService.createGoogleCloudBuild(stageDefinition.getAccount(), buildDefinition);
+    Map<String, Object> context = stage.getContext();
+    context.put("buildInfo", result);
+    return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(context).build();
+  }
+
+  private Map<String, Object> getBuildDefinitionFromArtifact(@Nonnull Stage stage, GoogleCloudBuildStageDefinition stageDefinition) {
+    Artifact buildDefinitionArtifact = artifactResolver.getBoundArtifactForStage(stage, stageDefinition.getBuildDefinitionArtifact().getArtifactId(),
+      stageDefinition.getBuildDefinitionArtifact().getArtifact());
+
+    if (buildDefinitionArtifact == null) {
+      throw new IllegalArgumentException("No manifest artifact was specified.");
+    }
+
+    if (stageDefinition.getBuildDefinitionArtifact().getArtifactAccount() != null) {
+      buildDefinitionArtifact.setArtifactAccount(stageDefinition.getBuildDefinitionArtifact().getArtifactAccount());
+    }
+
+    if (buildDefinitionArtifact.getArtifactAccount() == null) {
+      throw new IllegalArgumentException("No manifest artifact account was specified.");
+    }
+
+    return retrySupport.retry(() -> {
+      try {
+        Response buildText = oortService.fetchArtifact(buildDefinitionArtifact);
+        Object result = yamlParser.get().load(buildText.getBody().in());
+        return (Map<String, Object>) result;
+      } catch (IOException e) {
+        throw new RuntimeException(e);
+      }
+    }, 10, 200, false);
+  }
+}
+
diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartJenkinsJobTask.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartJenkinsJobTask.groovy
index 91b3f5e383..dd5003be9b 100644
--- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartJenkinsJobTask.groovy
+++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartJenkinsJobTask.groovy
@@ -39,6 +39,6 @@ class StartJenkinsJobTask implements Task {
     String master = stage.context.master
     String job = stage.context.job
     String queuedBuild = buildService.build(master, job,
stage.context.parameters) - new TaskResult(ExecutionStatus.SUCCEEDED, [queuedBuild: queuedBuild] ) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([queuedBuild: queuedBuild]).build() } } diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartScriptTask.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartScriptTask.groovy index 03af51cebb..994e9ae0dd 100644 --- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartScriptTask.groovy +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StartScriptTask.groovy @@ -81,7 +81,7 @@ class StartScriptTask implements Task { } String queuedBuild = buildService.build(master, job, parameters) - new TaskResult(ExecutionStatus.SUCCEEDED, [master: master, job: job, queuedBuild: queuedBuild]) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([master: master, job: job, queuedBuild: queuedBuild]).build() } } diff --git a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StopJenkinsJobTask.groovy b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StopJenkinsJobTask.groovy index 1c12aae8bf..f11a8867a3 100644 --- a/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StopJenkinsJobTask.groovy +++ b/orca-igor/src/main/groovy/com/netflix/spinnaker/orca/igor/tasks/StopJenkinsJobTask.groovy @@ -45,6 +45,6 @@ class StopJenkinsJobTask implements Task { buildService.stop(master, job, queuedBuild, buildNumber) - new TaskResult(ExecutionStatus.SUCCEEDED, [:]) + TaskResult.SUCCEEDED } } diff --git a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/BuildServiceSpec.groovy b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/BuildServiceSpec.groovy index dd12062e1c..d62cabf6e6 100644 --- a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/BuildServiceSpec.groovy +++ b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/BuildServiceSpec.groovy @@ -16,7 +16,6 @@ package 
com.netflix.spinnaker.orca.igor -import spock.lang.Shared import spock.lang.Specification class BuildServiceSpec extends Specification { @@ -33,7 +32,7 @@ class BuildServiceSpec extends Specification { void setup() { igorService = Mock(IgorService) - buildService = new BuildService(igorService: igorService) + buildService = new BuildService(igorService) } void 'build method encodes the job name'() { diff --git a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/GoogleCloudBuildStageSpec.groovy b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/GoogleCloudBuildStageSpec.groovy new file mode 100644 index 0000000000..0d98558d61 --- /dev/null +++ b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/GoogleCloudBuildStageSpec.groovy @@ -0,0 +1,49 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.igor.pipeline + + +import com.netflix.spinnaker.orca.igor.tasks.StartGoogleCloudBuildTask +import spock.lang.Specification + +import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage + +class GoogleCloudBuildStageSpec extends Specification { + def ACCOUNT = "my-account" + def BUILD = new HashMap() + + def "should start a build"() { + given: + def googleCloudBuildStage = new GoogleCloudBuildStage() + + def stage = stage { + type = "googleCloudBuild" + context = [ + account: ACCOUNT, + buildDefinition: BUILD, + ] + } + + when: + def tasks = googleCloudBuildStage.buildTaskGraph(stage) + + then: + tasks.findAll { + it.implementingClass == StartGoogleCloudBuildTask + }.size() == 1 + } +} diff --git a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStageSpec.groovy b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStageSpec.groovy index 030fc3776a..8bcd6f07ca 100644 --- a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStageSpec.groovy +++ b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/JenkinsStageSpec.groovy @@ -1,7 +1,9 @@ package com.netflix.spinnaker.orca.igor.pipeline +import com.netflix.spinnaker.orca.igor.tasks.MonitorJenkinsJobTask import com.netflix.spinnaker.orca.pipeline.tasks.artifacts.BindProducedArtifactsTask import spock.lang.Specification +import spock.lang.Unroll import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage @@ -33,7 +35,6 @@ class JenkinsStageSpec extends Specification { def tasks = jenkinsStage.buildTaskGraph(stage) then: - tasks.iterator().size() == 4 tasks.findAll { it.implementingClass == BindProducedArtifactsTask }.size() == 1 @@ -57,9 +58,123 @@ class JenkinsStageSpec extends Specification { def tasks = jenkinsStage.buildTaskGraph(stage) then: - tasks.iterator().size() == 3 tasks.findAll { it.implementingClass == BindProducedArtifactsTask }.size() == 0 } + + 
@Unroll + def "should wait for completion if set in the stage context"() { + given: + def jenkinsStage = new JenkinsStage() + + def stage = stage { + type = "jenkins" + context = [ + master: "builds", + job: "orca", + buildNumber: 4, + propertyFile: "sample.properties", + waitForCompletion: waitForCompletion + ] + } + + when: + def tasks = jenkinsStage.buildTaskGraph(stage) + def result = tasks.findAll { + it.implementingClass == MonitorJenkinsJobTask + }.size() == 1 + + then: + result == didWaitForCompletion + + where: + waitForCompletion | didWaitForCompletion + true | true + "true" | true + false | false + "false" | false + } + + def "should wait for completion when waitForCompletion is absent"() { + given: + def jenkinsStage = new JenkinsStage() + + def stage = stage { + type = "jenkins" + context = [ + master: "builds", + job: "orca", + buildNumber: 4, + propertyFile: "sample.properties" + ] + } + + when: + def tasks = jenkinsStage.buildTaskGraph(stage) + + then: + tasks.findAll { + it.implementingClass == MonitorJenkinsJobTask + }.size() == 1 + } + + def "should not bind artifacts if no expected artifacts were defined"() { + given: + def jenkinsStage = new JenkinsStage() + + def stage = stage { + type = "jenkins" + context = [ + master: "builds", + job: "orca", + buildNumber: 4, + propertyFile: "sample.properties" + ] + } + + when: + def tasks = jenkinsStage.buildTaskGraph(stage) + + then: + tasks.findAll { + it.implementingClass == BindProducedArtifactsTask + }.size() == 0 + } + + def "should bind artifacts if expected artifacts are defined"() { + given: + def jenkinsStage = new JenkinsStage() + + def stage = stage { + type = "jenkins" + context = [ + master: "builds", + job: "orca", + buildNumber: 4, + propertyFile: "sample.properties", + expectedArtifacts: [ + [ + defaultArtifact: [ + customKind: true, + id: "1d3af620-1a63-4063-882d-ea05eb185b1d", + ], + displayName: "my-favorite-artifact", + id: "547ac2ac-a646-4b8f-8ab4-d7337678b6b6", + name: 
"gcr.io/my-registry/my-container", + useDefaultArtifact: false, + usePriorArtifact: false, + ] + ] + ] + } + + when: + def tasks = jenkinsStage.buildTaskGraph(stage) + + then: + tasks.findAll { + it.implementingClass == BindProducedArtifactsTask + }.size() == 1 + } } diff --git a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/ScriptStageSpec.groovy b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/ScriptStageSpec.groovy new file mode 100644 index 0000000000..d909d19c7e --- /dev/null +++ b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/pipeline/ScriptStageSpec.groovy @@ -0,0 +1,44 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.igor.pipeline + +import com.netflix.spinnaker.orca.igor.tasks.GetBuildPropertiesTask +import spock.lang.Specification + +import static com.netflix.spinnaker.orca.test.model.ExecutionBuilder.stage + +class ScriptStageSpec extends Specification { + def "should fetch properties after running the script"() { + given: + def scriptStage = new ScriptStage() + + def stage = stage { + type = "script" + context = [ + command: "echo hello", + ] + } + + when: + def tasks = scriptStage.buildTaskGraph(stage) + + then: + tasks.findAll { + it.implementingClass == GetBuildPropertiesTask + }.size() == 1 + } +} diff --git a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildArtifactsTaskSpec.groovy b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildArtifactsTaskSpec.groovy new file mode 100644 index 0000000000..e16956c232 --- /dev/null +++ b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildArtifactsTaskSpec.groovy @@ -0,0 +1,98 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+
+package com.netflix.spinnaker.orca.igor.tasks
+
+import com.netflix.spinnaker.kork.artifacts.model.Artifact
+import com.netflix.spinnaker.orca.TaskResult
+import com.netflix.spinnaker.orca.igor.BuildService
+import com.netflix.spinnaker.orca.pipeline.model.Execution
+import com.netflix.spinnaker.orca.pipeline.model.Stage
+import spock.lang.Specification
+import spock.lang.Subject
+
+class GetBuildArtifactsTaskSpec extends Specification {
+  def buildService = Mock(BuildService)
+  def testArtifact = Artifact.builder().name("my-artifact").build()
+
+  def BUILD_NUMBER = 4
+  def MASTER = "builds"
+  def JOB = "orca"
+  def PROPERTY_FILE = "my-file"
+
+  @Subject
+  GetBuildArtifactsTask task = new GetBuildArtifactsTask(buildService)
+
+  def "retrieves artifacts and adds them to the stage outputs"() {
+    given:
+    def stage = createStage(PROPERTY_FILE)
+
+    when:
+    TaskResult result = task.execute(stage)
+    def artifacts = result.getOutputs().get("artifacts") as List
+
+    then:
+    1 * buildService.getArtifacts(BUILD_NUMBER, PROPERTY_FILE, MASTER, JOB) >> [testArtifact]
+    artifacts.size() == 1
+    artifacts.get(0).getName() == "my-artifact"
+  }
+
+  def "handles an empty artifact list"() {
+    given:
+    def stage = createStage(PROPERTY_FILE)
+
+    when:
+    TaskResult result = task.execute(stage)
+    def artifacts = result.getOutputs().get("artifacts") as List
+
+    then:
+    1 * buildService.getArtifacts(BUILD_NUMBER, PROPERTY_FILE, MASTER, JOB) >> []
+    artifacts.size() == 0
+  }
+
+  def "does not fetch artifacts if the property file is empty"() {
+    given:
+    def stage = createStage("")
+
+    when:
+    TaskResult result = task.execute(stage)
+
+    then:
+    0 * buildService.getArtifacts(*_)
+    result.outputs.size() == 0
+  }
+
+  def "does not fetch artifacts if the property file is null"() {
+    given:
+    def stage = createStage(null)
+
+    when:
+    TaskResult result = task.execute(stage)
+
+    then:
+    0 * buildService.getArtifacts(*_)
+    result.outputs.size() == 0
+  }
+
+  def createStage(String propertyFile)
{ + return new Stage(Stub(Execution), "jenkins", [ + master: MASTER, + job: JOB, + buildNumber: BUILD_NUMBER, + propertyFile: propertyFile + ]) + } +} diff --git a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildPropertiesTaskSpec.groovy b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildPropertiesTaskSpec.groovy new file mode 100644 index 0000000000..f3822ae3a1 --- /dev/null +++ b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetBuildPropertiesTaskSpec.groovy @@ -0,0 +1,178 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.igor.tasks + +import com.fasterxml.jackson.databind.ObjectMapper +import com.netflix.spinnaker.kork.artifacts.model.Artifact +import com.netflix.spinnaker.orca.ExecutionStatus +import com.netflix.spinnaker.orca.TaskResult +import com.netflix.spinnaker.orca.igor.BuildService +import com.netflix.spinnaker.orca.pipeline.model.Execution +import com.netflix.spinnaker.orca.pipeline.model.Stage +import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository +import com.netflix.spinnaker.orca.pipeline.tasks.artifacts.BindProducedArtifactsTask +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver +import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor +import retrofit.RetrofitError +import retrofit.client.Response +import spock.lang.Shared +import spock.lang.Specification +import spock.lang.Subject + +class GetBuildPropertiesTaskSpec extends Specification { + def executionRepository = Mock(ExecutionRepository) + def artifactResolver = new ArtifactResolver(new ObjectMapper(), executionRepository, new ContextParameterProcessor()) + def buildService = Stub(BuildService) + + def BUILD_NUMBER = 4 + def MASTER = "builds" + def JOB = "orca" + def PROPERTY_FILE = "sample.properties" + + @Subject + GetBuildPropertiesTask task = new GetBuildPropertiesTask(buildService) + + @Shared + def execution = Stub(Execution) + + def "retrieves values from a property file if specified"() { + given: + def stage = new Stage(execution, "jenkins", [master: MASTER, job: JOB, buildNumber: 4, propertyFile: PROPERTY_FILE]) + + and: + buildService.getPropertyFile(BUILD_NUMBER, PROPERTY_FILE, MASTER, JOB) >> [val1: "one", val2: "two"] + + when: + TaskResult result = task.execute(stage) + + then: + result.context.val1 == 'one' + result.context.val2 == 'two' + } + + def "retrieves complex values from a property file"() { + given: + def stage = new Stage(execution, "jenkins", [master: "builds", job: "orca", buildNumber: 4,
propertyFile: PROPERTY_FILE]) + + and: + buildService.getPropertyFile(BUILD_NUMBER, PROPERTY_FILE, MASTER, JOB) >> + [val1: "one", val2: [complex: true]] + + when: + TaskResult result = task.execute(stage) + + then: + result.context.val1 == 'one' + result.context.val2 == [complex: true] + } + + def "resolves artifact from a property file"() { + given: + def stage = new Stage(execution, "jenkins", [master : MASTER, + job : JOB, + buildNumber : BUILD_NUMBER, + propertyFile : PROPERTY_FILE, + expectedArtifacts: [[matchArtifact: [type: "docker/image"]],]]) + def bindTask = new BindProducedArtifactsTask() + + and: + buildService.getPropertyFile(BUILD_NUMBER, PROPERTY_FILE, MASTER, JOB) >> + [val1: "one", artifacts: [ + [type: "docker/image", + reference: "gcr.io/project/my-image@sha256:28f82eba", + name: "gcr.io/project/my-image", + version: "sha256:28f82eba"],]] + bindTask.artifactResolver = artifactResolver + bindTask.objectMapper = new ObjectMapper() + + when: + def jenkinsResult = task.execute(stage) + // We don't have an execution, so we pass context manually + stage.context << jenkinsResult.context + def bindResult = bindTask.execute(stage) + def artifacts = bindResult.outputs["artifacts"] as List + + then: + bindResult.status == ExecutionStatus.SUCCEEDED + artifacts.size() == 1 + artifacts[0].name == "gcr.io/project/my-image" + } + + def "queues the task for retry after a failed attempt"() { + given: + def stage = createStage(PROPERTY_FILE) + def igorError = Stub(RetrofitError) { + getResponse() >> new Response("", 500, "", Collections.emptyList(), null) + } + + and: + buildService.getPropertyFile(BUILD_NUMBER, PROPERTY_FILE, MASTER, JOB) >> + { throw igorError } + + when: + TaskResult result = task.execute(stage) + + then: + result.status == ExecutionStatus.RUNNING + } + + def "fails stage if property file is expected but not returned from jenkins and build passed"() { + given: + def stage = createStage(PROPERTY_FILE) + + and: +
buildService.getPropertyFile(BUILD_NUMBER, PROPERTY_FILE, MASTER, JOB) >> [:] + + when: + task.execute(stage) + + then: + IllegalStateException e = thrown IllegalStateException + e.message == "Expected properties file $PROPERTY_FILE but it was either missing, empty or contained invalid syntax" + } + + def "does not fetch properties if the property file is empty"() { + given: + def stage = createStage("") + + when: + task.execute(stage) + + then: + 0 * buildService.getPropertyFile(*_) + } + + def "does not fetch properties if the property file is null"() { + given: + def stage = createStage(null) + + when: + task.execute(stage) + + then: + 0 * buildService.getPropertyFile(*_) + } + + def createStage(String propertyFile) { + return new Stage(Stub(Execution), "jenkins", [ + master: MASTER, + job: JOB, + buildNumber: BUILD_NUMBER, + propertyFile: propertyFile + ]) + } +} diff --git a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetGoogleCloudBuildArtifactsTaskSpec.groovy b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetGoogleCloudBuildArtifactsTaskSpec.groovy new file mode 100644 index 0000000000..8242a03e2d --- /dev/null +++ b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/GetGoogleCloudBuildArtifactsTaskSpec.groovy @@ -0,0 +1,85 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.igor.tasks + +import com.netflix.spinnaker.kork.artifacts.model.Artifact +import com.netflix.spinnaker.orca.ExecutionStatus +import com.netflix.spinnaker.orca.TaskResult +import com.netflix.spinnaker.orca.igor.IgorService +import com.netflix.spinnaker.orca.pipeline.model.Execution +import com.netflix.spinnaker.orca.pipeline.model.Stage +import retrofit.RetrofitError +import spock.lang.Specification +import spock.lang.Subject + +class GetGoogleCloudBuildArtifactsTaskSpec extends Specification { + def ACCOUNT = "my-account" + def BUILD_ID = "f2526c98-0c20-48ff-9f1f-736503937084" + + Execution execution = Mock(Execution) + IgorService igorService = Mock(IgorService) + + @Subject + GetGoogleCloudBuildArtifactsTask task = new GetGoogleCloudBuildArtifactsTask(igorService) + + def "fetches artifacts from igor and returns success"() { + given: + def artifacts = [ + Artifact.builder().reference("abc").build(), + Artifact.builder().reference("def").build() + ] + def stage = new Stage(execution, "googleCloudBuild", [ + account: ACCOUNT, + buildInfo: [ + id: BUILD_ID + ], + ]) + + when: + TaskResult result = task.execute(stage) + + then: + 1 * igorService.getGoogleCloudBuildArtifacts(ACCOUNT, BUILD_ID) >> artifacts + 0 * igorService._ + result.getStatus() == ExecutionStatus.SUCCEEDED + result.getOutputs().get("artifacts") == artifacts + } + + def "task returns RUNNING when communication with igor fails"() { + given: + def stage = new Stage(execution, "googleCloudBuild", [ + account: ACCOUNT, + buildInfo: [ + id: BUILD_ID + ], + ]) + + when: + TaskResult result = task.execute(stage) + + then: + 1 * igorService.getGoogleCloudBuildArtifacts(ACCOUNT, BUILD_ID) >> { throw stubRetrofitError() } + 0 * igorService._ + result.getStatus() == ExecutionStatus.RUNNING + } + + def stubRetrofitError() { + return Stub(RetrofitError) { + getKind() >> RetrofitError.Kind.NETWORK + } + } +} diff --git
a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorGoogleCloudBuildTaskSpec.groovy b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorGoogleCloudBuildTaskSpec.groovy new file mode 100644 index 0000000000..0b27e89b0f --- /dev/null +++ b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorGoogleCloudBuildTaskSpec.groovy @@ -0,0 +1,98 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.igor.tasks + +import com.netflix.spinnaker.orca.ExecutionStatus +import com.netflix.spinnaker.orca.TaskResult +import com.netflix.spinnaker.orca.igor.IgorService +import com.netflix.spinnaker.orca.igor.model.GoogleCloudBuild +import com.netflix.spinnaker.orca.pipeline.model.Execution +import com.netflix.spinnaker.orca.pipeline.model.Stage +import retrofit.RetrofitError +import spock.lang.Specification +import spock.lang.Subject +import spock.lang.Unroll + +class MonitorGoogleCloudBuildTaskSpec extends Specification { + def ACCOUNT = "my-account" + def BUILD_ID = "0cc67a01-714f-49c7-aaf3-d09b5ec1a18a" + + Execution execution = Mock(Execution) + IgorService igorService = Mock(IgorService) + + @Subject + MonitorGoogleCloudBuildTask task = new MonitorGoogleCloudBuildTask(igorService) + + @Unroll + def "task returns #executionStatus when build returns #buildStatus"() { + given: + def igorResponse = GoogleCloudBuild.builder() + .id(BUILD_ID) + 
.status(GoogleCloudBuild.Status.valueOf(buildStatus)) + .build() + def stage = new Stage(execution, "googleCloudBuild", [ + account: ACCOUNT, + buildInfo: [ + id: BUILD_ID + ], + ]) + + when: + TaskResult result = task.execute(stage) + + then: + 1 * igorService.getGoogleCloudBuild(ACCOUNT, BUILD_ID) >> igorResponse + 0 * igorService._ + result.getStatus() == executionStatus + result.getContext().buildInfo == igorResponse + + where: + buildStatus | executionStatus + "STATUS_UNKNOWN" | ExecutionStatus.RUNNING + "QUEUED" | ExecutionStatus.RUNNING + "WORKING" | ExecutionStatus.RUNNING + "SUCCESS" | ExecutionStatus.SUCCEEDED + "FAILURE" | ExecutionStatus.TERMINAL + "INTERNAL_ERROR" | ExecutionStatus.TERMINAL + "TIMEOUT" | ExecutionStatus.TERMINAL + "CANCELLED" | ExecutionStatus.TERMINAL + } + + def "task returns RUNNING when communication with igor fails"() { + given: + def stage = new Stage(execution, "googleCloudBuild", [ + account: ACCOUNT, + buildInfo: [ + id: BUILD_ID + ], + ]) + + when: + TaskResult result = task.execute(stage) + + then: + 1 * igorService.getGoogleCloudBuild(ACCOUNT, BUILD_ID) >> { throw stubRetrofitError() } + 0 * igorService._ + result.getStatus() == ExecutionStatus.RUNNING + } + + def stubRetrofitError() { + return Stub(RetrofitError) { + getKind() >> RetrofitError.Kind.NETWORK + } + } +} diff --git a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorJenkinsJobTaskSpec.groovy b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorJenkinsJobTaskSpec.groovy index 1437ba32cd..dfad88d802 100644 --- a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorJenkinsJobTaskSpec.groovy +++ b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/MonitorJenkinsJobTaskSpec.groovy @@ -16,16 +16,10 @@ package com.netflix.spinnaker.orca.igor.tasks -import com.fasterxml.jackson.databind.ObjectMapper -import com.netflix.spinnaker.kork.core.RetrySupport import
com.netflix.spinnaker.orca.ExecutionStatus -import com.netflix.spinnaker.orca.TaskResult import com.netflix.spinnaker.orca.igor.BuildService import com.netflix.spinnaker.orca.pipeline.model.Execution import com.netflix.spinnaker.orca.pipeline.model.Stage -import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository -import com.netflix.spinnaker.orca.pipeline.tasks.artifacts.BindProducedArtifactsTask -import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver import retrofit.RetrofitError import retrofit.client.Response import spock.lang.Shared @@ -34,9 +28,6 @@ import spock.lang.Subject import spock.lang.Unroll class MonitorJenkinsJobTaskSpec extends Specification { - def executionRepository = Mock(ExecutionRepository) - def artifactResolver = new ArtifactResolver(new ObjectMapper(), executionRepository) - @Subject MonitorJenkinsJobTask task = new MonitorJenkinsJobTask() @@ -143,138 +134,6 @@ class MonitorJenkinsJobTaskSpec extends Specification { 400 || null } - def "retrieves values from a property file if specified"() { - - given: - def stage = new Stage(pipeline, "jenkins", [master: "builds", job: "orca", buildNumber: 4, propertyFile: "sample.properties"]) - - and: - task.buildService = Stub(BuildService) { - getBuild(stage.context.buildNumber, stage.context.master, stage.context.job) >> [result: 'SUCCESS', running: false] - getPropertyFile(stage.context.buildNumber, stage.context.propertyFile, stage.context.master, stage.context.job) >> [val1: "one", val2: "two"] - } - task.retrySupport = Spy(RetrySupport) { - _ * sleep(_) >> { /* do nothing */ } - } - - when: - TaskResult result = task.execute(stage) - - then: - result.context.val1 == 'one' - result.context.val2 == 'two' - - } - - def "retrieves complex from a property file"() { - - given: - def stage = new Stage(pipeline, "jenkins", [master: "builds", job: "orca", buildNumber: 4, propertyFile: "sample.properties"]) - - and: - task.buildService = Stub(BuildService) { - 
getBuild(stage.context.buildNumber, stage.context.master, stage.context.job) >> [result: 'SUCCESS', running: false] - getPropertyFile(stage.context.buildNumber, stage.context.propertyFile, stage.context.master, stage.context.job) >> - [val1: "one", val2: [complex: true]] - } - task.retrySupport = Spy(RetrySupport) { - _ * sleep(_) >> { /* do nothing */ } - } - - when: - TaskResult result = task.execute(stage) - - then: - result.context.val1 == 'one' - result.context.val2 == [complex: true] - - } - - def "resolves artifact from a property file"() { - - given: - def stage = new Stage(pipeline, "jenkins", [master: "builds", - job: "orca", - buildNumber: 4, - propertyFile: "sample.properties", - expectedArtifacts: [[matchArtifact: [type: "docker/image"]],]]) - def bindTask = new BindProducedArtifactsTask() - - and: - task.buildService = Stub(BuildService) { - getBuild(stage.context.buildNumber, stage.context.master, stage.context.job) >> [result: 'SUCCESS', running: false] - getPropertyFile(stage.context.buildNumber, stage.context.propertyFile, stage.context.master, stage.context.job) >> - [val1: "one", artifacts: [ - [type: "docker/image", - reference: "gcr.io/project/my-image@sha256:28f82eba", - name: "gcr.io/project/my-image", - version: "sha256:28f82eba"],]] - } - task.retrySupport = Spy(RetrySupport) { - _ * sleep(_) >> { /* do nothing */ } - } - bindTask.artifactResolver = artifactResolver - bindTask.objectMapper = new ObjectMapper() - - - when: - def jenkinsResult = task.execute(stage) - // We don't have a pipeline, so we pass context manually - stage.context << jenkinsResult.context - def bindResult = bindTask.execute(stage) - def artifacts = bindResult.outputs["artifacts"] - - then: - bindResult.status == ExecutionStatus.SUCCEEDED - artifacts.size() == 1 - artifacts[0].name == "gcr.io/project/my-image" - - } - - def "retrieves values from a property file if specified after a failed attempt"() { - - given: - def stage = new Stage(pipeline, "jenkins", [master: 
"builds", job: "orca", buildNumber: 4, propertyFile: "sample.properties"]) - - and: - task.buildService = Stub(BuildService) { - getBuild(stage.context.buildNumber, stage.context.master, stage.context.job) >> [result: 'SUCCESS', running: false] - getPropertyFile(stage.context.buildNumber, stage.context.propertyFile, stage.context.master, stage.context.job) >>> [[], [val1: "one", val2: "two"]] - } - task.retrySupport = Spy(RetrySupport) { - _ * sleep(_) >> { /* do nothing */ } - } - - when: - TaskResult result = task.execute(stage) - - then: - result.context.val1 == 'one' - result.context.val2 == 'two' - - } - - def "fails stage if property file is expected but not returned from jenkins and build passed"() { - given: - def stage = new Stage(pipeline, "jenkins", [master: 'builds', job: 'orca', buildNumber: 4, propertyFile: 'noexist.properties']) - - and: - task.buildService = Stub(BuildService) { - getBuild(stage.context.buildNumber, stage.context.master, stage.context.job) >> [result: 'SUCCESS', running: false] - getPropertyFile(stage.context.buildNumber, stage.context.propertyFile, stage.context.master, stage.context.job) >> [:] - } - task.retrySupport = Spy(RetrySupport) { - _ * sleep(_) >> { /* do nothing */ } - } - - when: - task.execute(stage) - - then: - IllegalStateException e = thrown IllegalStateException - e.message == 'Expected properties file noexist.properties but it was either missing, empty or contained invalid syntax' - } - def "marks 'unstable' results as successful if explicitly configured to do so"() { given: def stage = new Stage(pipeline, "jenkins", diff --git a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/RetryableIgorTaskSpec.groovy b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/RetryableIgorTaskSpec.groovy new file mode 100644 index 0000000000..67e4530b3a --- /dev/null +++ b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/RetryableIgorTaskSpec.groovy @@ -0,0 +1,124 @@ +/* + * Copyright 
2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.igor.tasks + +import com.netflix.spinnaker.orca.ExecutionStatus +import com.netflix.spinnaker.orca.TaskResult +import com.netflix.spinnaker.orca.igor.model.RetryableStageDefinition +import com.netflix.spinnaker.orca.pipeline.model.Stage +import retrofit.RetrofitError +import retrofit.client.Response +import spock.lang.Specification +import spock.lang.Subject + +class RetryableIgorTaskSpec extends Specification { + RetryableStageDefinition jobRequest = Stub(RetryableStageDefinition) + Stage stage = Mock(Stage) + + @Subject + RetryableIgorTask task = Spy(RetryableIgorTask) { + mapStage(stage) >> jobRequest + } + + def "should delegate to subclass"() { + given: + jobRequest.getConsecutiveErrors() >> 0 + + when: + TaskResult result = task.execute(stage) + + then: + 1 * task.tryExecute(jobRequest) >> TaskResult.SUCCEEDED + result.status == ExecutionStatus.SUCCEEDED + } + + def "should return RUNNING status when a retryable exception is thrown"() { + when: + def result = task.execute(stage) + + then: + 1 * task.tryExecute(jobRequest) >> { throw stubRetrofitError(500) } + jobRequest.getConsecutiveErrors() >> 0 + result.status == ExecutionStatus.RUNNING + result.context.get("consecutiveErrors") == 1 + } + + def "should return RUNNING status when a network error is thrown"() { + when: + def result = task.execute(stage) + + then: + 1 * 
task.tryExecute(jobRequest) >> { throw stubRetrofitNetworkError() } + jobRequest.getConsecutiveErrors() >> 0 + result.status == ExecutionStatus.RUNNING + result.context.get("consecutiveErrors") == 1 + } + + def "should propagate the error if a non-retryable exception is thrown"() { + when: + def result = task.execute(stage) + + then: + 1 * task.tryExecute(jobRequest) >> { throw stubRetrofitError(404) } + jobRequest.getConsecutiveErrors() >> 0 + thrown RetrofitError + } + + def "should propagate the error when we have reached the retry limit"() { + when: + def result = task.execute(stage) + + then: + 1 * task.tryExecute(jobRequest) >> { throw stubRetrofitError(500) } + jobRequest.getConsecutiveErrors() >> 5 + thrown RetrofitError + } + + def "should propagate a non-successful task status"() { + when: + def result = task.execute(stage) + + then: + 1 * task.tryExecute(jobRequest) >> TaskResult.ofStatus(ExecutionStatus.TERMINAL) + jobRequest.getConsecutiveErrors() >> 0 + result.status == ExecutionStatus.TERMINAL + } + + def "resets the error count on success"() { + when: + def result = task.execute(stage) + + then: + 1 * task.tryExecute(jobRequest) >> TaskResult.SUCCEEDED + jobRequest.getConsecutiveErrors() >> 3 + result.status == ExecutionStatus.SUCCEEDED + result.context.get("consecutiveErrors") == 0 + } + + def stubRetrofitError(int statusCode) { + return Stub(RetrofitError) { + getKind() >> RetrofitError.Kind.HTTP + getResponse() >> new Response("", statusCode, "", Collections.emptyList(), null) + } + } + + def stubRetrofitNetworkError() { + return Stub(RetrofitError) { + getKind() >> RetrofitError.Kind.NETWORK + } + } +} diff --git a/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/StartGoogleCloudBuildTaskSpec.groovy b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/StartGoogleCloudBuildTaskSpec.groovy new file mode 100644 index 0000000000..c54454f74e --- /dev/null +++
b/orca-igor/src/test/groovy/com/netflix/spinnaker/orca/igor/tasks/StartGoogleCloudBuildTaskSpec.groovy @@ -0,0 +1,56 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.igor.tasks + + +import com.netflix.spinnaker.orca.TaskResult +import com.netflix.spinnaker.orca.clouddriver.OortService +import com.netflix.spinnaker.orca.igor.IgorService +import com.netflix.spinnaker.orca.igor.model.GoogleCloudBuild +import com.netflix.spinnaker.orca.pipeline.model.Execution +import com.netflix.spinnaker.orca.pipeline.model.Stage +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver +import spock.lang.Specification +import spock.lang.Subject + +class StartGoogleCloudBuildTaskSpec extends Specification { + def ACCOUNT = "my-account" + def BUILD = new HashMap() + + Execution execution = Mock(Execution) + IgorService igorService = Mock(IgorService) + OortService oortService = Mock(OortService) + ArtifactResolver artifactResolver = Mock(ArtifactResolver) + + @Subject + StartGoogleCloudBuildTask task = new StartGoogleCloudBuildTask(igorService, oortService, artifactResolver) + + def "starts a build"() { + given: + def igorResponse = GoogleCloudBuild.builder() + .id("98edf783-162c-4047-9721-beca8bd2c275") + .build() + + when: + def stage = new Stage(execution, "googleCloudBuild", [account: ACCOUNT, buildDefinition: BUILD]) + TaskResult result = task.execute(stage) + + then: + 1 * 
igorService.createGoogleCloudBuild(ACCOUNT, BUILD) >> igorResponse + result.context.buildInfo == igorResponse + } +} diff --git a/orca-integrations-cloudfoundry/orca-integrations-cloudfoundry.gradle b/orca-integrations-cloudfoundry/orca-integrations-cloudfoundry.gradle new file mode 100644 index 0000000000..388f048448 --- /dev/null +++ b/orca-integrations-cloudfoundry/orca-integrations-cloudfoundry.gradle @@ -0,0 +1,18 @@ +test { + useJUnitPlatform() +} + +dependencies { + compileOnly spinnaker.dependency("lombok") + annotationProcessor spinnaker.dependency("lombok") + testAnnotationProcessor spinnaker.dependency("lombok") + + implementation project(":orca-clouddriver") + implementation project(":orca-core") + + testImplementation spinnaker.dependency("junitJupiterApi") + testImplementation spinnaker.dependency("assertj") + testImplementation "org.mockito:mockito-core:2.25.0" + + testRuntime spinnaker.dependency("junitJupiterEngine") +} diff --git a/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/pipeline/DeleteServiceKeyStage.java b/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/pipeline/DeleteServiceKeyStage.java new file mode 100644 index 0000000000..6828622011 --- /dev/null +++ b/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/pipeline/DeleteServiceKeyStage.java @@ -0,0 +1,37 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.cf.pipeline; + +import com.netflix.spinnaker.orca.cf.tasks.CloudFoundryDeleteServiceKeyTask; +import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.CloudFoundryMonitorKatoServicesTask; +import com.netflix.spinnaker.orca.clouddriver.utils.CloudProviderAware; +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.springframework.stereotype.Component; + +import javax.annotation.Nonnull; + +@Component +class DeleteServiceKeyStage implements StageDefinitionBuilder, CloudProviderAware { + @Override + public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) { + builder + .withTask("deleteServiceKey", CloudFoundryDeleteServiceKeyTask.class) + .withTask("monitorDeleteServiceKey", CloudFoundryMonitorKatoServicesTask.class); + } +} diff --git a/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/pipeline/expressions/functions/ServiceKeyExpressionFunctionProvider.java b/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/pipeline/expressions/functions/ServiceKeyExpressionFunctionProvider.java new file mode 100644 index 0000000000..1c8289d89e --- /dev/null +++ b/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/pipeline/expressions/functions/ServiceKeyExpressionFunctionProvider.java @@ -0,0 +1,97 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.cf.pipeline.expressions.functions; + +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.node.TreeTraversingParser; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper; +import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionFunctionProvider; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import lombok.Data; +import org.jetbrains.annotations.NotNull; +import org.jetbrains.annotations.Nullable; +import org.springframework.stereotype.Component; + +import java.io.IOException; +import java.util.*; +import java.util.function.Predicate; + +@Component +public class ServiceKeyExpressionFunctionProvider implements ExpressionFunctionProvider { + private static final String CREATE_SERVICE_KEY_STAGE_NAME = "createServiceKey"; + private static final ObjectMapper objectMapper = OrcaObjectMapper.newInstance(); + + @Nullable + @Override + public String getNamespace() { + return null; + } + + @NotNull + @Override + public Collection<FunctionDefinition> getFunctions() { + return Collections.singletonList( + new FunctionDefinition("cfServiceKey", Arrays.asList( + new FunctionParameter(Execution.class, "execution", "The execution within which to search for stages"), + new FunctionParameter(String.class, "idOrName", "A stage name or stage ID to match") + )) + ); + } + + public static Map<String, Object> cfServiceKey(Execution execution, String idOrName) { + return execution.getStages() + .stream() + .filter(matchesServiceKeyStage(idOrName)) + .findFirst() + .map(stage -> { + Map<String, Object> serviceKeyDetails = new HashMap<>(); + + Optional.ofNullable(stage.getContext().get("kato.tasks")) + .ifPresent(k -> { + List<Map<String, Object>> katoTasks = (List<Map<String, Object>>) k; + try { + ServiceKeyKatoTask katoTask = objectMapper.readValue( + new
TreeTraversingParser(objectMapper.valueToTree(katoTasks.get(0)), objectMapper), + ServiceKeyKatoTask.class); + serviceKeyDetails.putAll(katoTask.getResultObjects().get(0).getServiceKey()); + } catch (IOException e) { + } + }); + + return serviceKeyDetails; + }) + .orElse(Collections.emptyMap()); + } + + private static Predicate<Stage> matchesServiceKeyStage(String idOrName) { + return stage -> CREATE_SERVICE_KEY_STAGE_NAME.equals(stage.getType()) && + stage.getStatus() == ExecutionStatus.SUCCEEDED && + (Objects.equals(idOrName, stage.getName()) || Objects.equals(idOrName, stage.getId())); + } + + @Data + private static class ServiceKeyKatoTask { + private List<ServiceKeyResult> resultObjects; + + @Data + private static class ServiceKeyResult { + private Map<String, Object> serviceKey; + } + } +} diff --git a/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/tasks/CloudFoundryDeleteServiceKeyTask.java b/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/tasks/CloudFoundryDeleteServiceKeyTask.java new file mode 100644 index 0000000000..a05612ef2f --- /dev/null +++ b/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/cf/tasks/CloudFoundryDeleteServiceKeyTask.java @@ -0,0 +1,33 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package com.netflix.spinnaker.orca.cf.tasks; + +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import com.netflix.spinnaker.orca.clouddriver.tasks.providers.cf.AbstractCloudFoundryServiceTask; +import org.springframework.stereotype.Component; + +@Component +public class CloudFoundryDeleteServiceKeyTask extends AbstractCloudFoundryServiceTask { + CloudFoundryDeleteServiceKeyTask(KatoService kato) { + super(kato); + } + + @Override + protected String getNotificationType() { + return "deleteServiceKey"; + } +} diff --git a/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/config/CloudFoundryConfiguration.java b/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/config/CloudFoundryConfiguration.java new file mode 100644 index 0000000000..b8c2eb36d8 --- /dev/null +++ b/orca-integrations-cloudfoundry/src/main/java/com/netflix/spinnaker/orca/config/CloudFoundryConfiguration.java @@ -0,0 +1,25 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.config; + +import org.springframework.context.annotation.ComponentScan; +import org.springframework.context.annotation.Configuration; + +@Configuration +@ComponentScan({"com.netflix.spinnaker.orca.cf"}) +public class CloudFoundryConfiguration { +} diff --git a/orca-integrations-cloudfoundry/src/test/java/com/netflix/spinnaker/orca/cf/pipeline/expressions/functions/ServiceKeyExpressionFunctionProviderTest.java b/orca-integrations-cloudfoundry/src/test/java/com/netflix/spinnaker/orca/cf/pipeline/expressions/functions/ServiceKeyExpressionFunctionProviderTest.java new file mode 100644 index 0000000000..07598c0de8 --- /dev/null +++ b/orca-integrations-cloudfoundry/src/test/java/com/netflix/spinnaker/orca/cf/pipeline/expressions/functions/ServiceKeyExpressionFunctionProviderTest.java @@ -0,0 +1,129 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.cf.pipeline.expressions.functions; + +import com.google.common.collect.ImmutableMap; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionFunctionProvider.FunctionDefinition; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.junit.jupiter.api.Test; + +import java.util.*; + +import static com.netflix.spinnaker.orca.ExecutionStatus.RUNNING; +import static com.netflix.spinnaker.orca.ExecutionStatus.SUCCEEDED; +import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; +import static java.util.Collections.*; +import static org.assertj.core.api.Assertions.assertThat; + +class ServiceKeyExpressionFunctionProviderTest { + @Test + void getFunctionsShouldReturnOneFunctionWithTheCorrectNameWhichHasTwoParameters() { + ServiceKeyExpressionFunctionProvider functionProvider = new ServiceKeyExpressionFunctionProvider(); + Collection<FunctionDefinition> functionDefinitions = functionProvider.getFunctions(); + + assertThat(functionDefinitions.size()).isEqualTo(1); + functionDefinitions.stream().findFirst().ifPresent(functionDefinition -> { + assertThat(functionDefinition.getName()).isEqualTo("cfServiceKey"); + assertThat(functionDefinition.getParameters().get(0).getType()).isEqualTo(Execution.class); + assertThat(functionDefinition.getParameters().get(1).getType()).isEqualTo(String.class); + }); + } + + @Test + void serviceKeyShouldResolveForValidStageType() { + String user = "user1"; + String password = "password1"; + String url = "mysql.example.com"; + Map<String, Object> serviceKey = new ImmutableMap.Builder<String, Object>() + .put("username", user) + .put("password", password) + .put("url", url) + .build(); + Map<String, Object> katoTaskMapWithResults = createKatoTaskMap(SUCCEEDED, singletonList(singletonMap("serviceKey", serviceKey))); + Map<String, Object> katoTaskMapWithoutResults = createKatoTaskMap(SUCCEEDED, emptyList()); + Map<String, Object> 
katoTaskMapRunning = createKatoTaskMap(RUNNING, emptyList()); + Execution execution = new Execution(PIPELINE, "stage-name-1", "application-name"); + Map<String, Object> contextWithServiceKey = createContextMap(katoTaskMapWithResults); + Map<String, Object> contextWithoutServiceKey = createContextMap(katoTaskMapWithoutResults); + Map<String, Object> contextWithRunningTask = createContextMap(katoTaskMapRunning); + + Stage stage1 = new Stage(new Execution(PIPELINE, "orca"), "createServiceKey", "stage-name-1", contextWithServiceKey); + stage1.setStatus(SUCCEEDED); + Stage stage2 = new Stage(new Execution(PIPELINE, "orca"), "deployService", "stage-name-2", contextWithoutServiceKey); + stage2.setStatus(SUCCEEDED); + Stage stage3 = new Stage(new Execution(PIPELINE, "orca"), "createServiceKey", "stage-name-3", contextWithoutServiceKey); + stage3.setStatus(SUCCEEDED); + Stage stage4 = new Stage(new Execution(PIPELINE, "orca"), "createServiceKey", "stage-name-4", contextWithRunningTask); + stage4.setStatus(RUNNING); + + execution.getStages().add(stage1); + execution.getStages().add(stage2); + execution.getStages().add(stage3); + execution.getStages().add(stage4); + + Map<String, Object> expectedServiceKey = new ImmutableMap.Builder<String, Object>() + .put("username", user) + .put("password", password) + .put("url", url) + .build(); + + Map<String, Object> resultServiceKey = ServiceKeyExpressionFunctionProvider.cfServiceKey(execution, "stage-name-1"); + + assertThat(resultServiceKey).isEqualTo(expectedServiceKey); + } + + @Test + void serviceKeyShouldReturnEmptyMapForNoResult() { + Map<String, Object> katoTaskMapWithoutResults = new ImmutableMap.Builder<String, Object>() + .put("id", "task-id") + .put("status", SUCCEEDED) + .put("history", emptyList()) + .put("resultObjects", emptyList()) + .build(); + Execution execution = new Execution(PIPELINE, "stage-name-1", "application-name"); + Map<String, Object> contextWithoutServiceKey = createContextMap(katoTaskMapWithoutResults); + + Stage stage = new Stage(new Execution(PIPELINE, "orca"), "createServiceKey", "stage-name-3", contextWithoutServiceKey); + 
stage.setStatus(SUCCEEDED); + execution.getStages().add(stage); + + Map<String, Object> resultServiceKey = ServiceKeyExpressionFunctionProvider.cfServiceKey(execution, "stage-name-1"); + + assertThat(resultServiceKey).isEqualTo(Collections.emptyMap()); + } + + private Map<String, Object> createKatoTaskMap(ExecutionStatus status, List<Map<String, Object>> resultObjects) { + return new ImmutableMap.Builder<String, Object>() + .put("id", "task-id") + .put("status", status) + .put("history", emptyList()) + .put("resultObjects", resultObjects) + .build(); + } + + private Map<String, Object> createContextMap(Map<String, Object> katoTaskMap) { + Map<String, Object> context = new HashMap<>(); + context.put("parentStage", "parent-stage"); + context.put("account", "account-name"); + context.put("credentials", "my-account"); + context.put("region", "org > space"); + context.put("kato.tasks", singletonList(katoTaskMap)); + return context; + } +} diff --git a/orca-integrations-cloudfoundry/src/test/java/com/netflix/spinnaker/orca/cf/tasks/CloudFoundryDeleteServiceKeyTaskTest.java b/orca-integrations-cloudfoundry/src/test/java/com/netflix/spinnaker/orca/cf/tasks/CloudFoundryDeleteServiceKeyTaskTest.java new file mode 100644 index 0000000000..f1fd027069 --- /dev/null +++ b/orca-integrations-cloudfoundry/src/test/java/com/netflix/spinnaker/orca/cf/tasks/CloudFoundryDeleteServiceKeyTaskTest.java @@ -0,0 +1,77 @@ +/* + * Copyright 2019 Pivotal, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca.cf.tasks; + +import com.google.common.collect.ImmutableMap; +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.clouddriver.KatoService; +import com.netflix.spinnaker.orca.clouddriver.model.TaskId; +import com.netflix.spinnaker.orca.pipeline.model.Execution; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.junit.jupiter.api.Test; +import rx.Observable; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; + +import static com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.PIPELINE; +import static org.assertj.core.api.Assertions.assertThat; +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.ArgumentMatchers.matches; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +class CloudFoundryDeleteServiceKeyTaskTest { + @Test + void shouldMakeRequestToKatoToDeleteServiceKey() { + String type = "deleteServiceKey"; + KatoService kato = mock(KatoService.class); + String cloudProvider = "my-cloud"; + String credentials = "cf-foundation"; + String region = "org > space"; + String serviceInstanceName = "service-instance"; + String serviceKeyName = "service-key"; + TaskId taskId = new TaskId("kato-task-id"); + Map<String, Object> context = new HashMap<>(); + context.put("cloudProvider", cloudProvider); + context.put("credentials", credentials); + context.put("region", region); + context.put("serviceInstanceName", serviceInstanceName); + context.put("serviceKeyName", serviceKeyName); + when(kato.requestOperations(matches(cloudProvider), + eq(Collections.singletonList(Collections.singletonMap(type, context))))) + .thenReturn(Observable.from(new TaskId[] { taskId })); + CloudFoundryDeleteServiceKeyTask task = new CloudFoundryDeleteServiceKeyTask(kato); + + Map<String, Object> expectedContext = new ImmutableMap.Builder<String, Object>() + .put("notification.type", type) + 
.put("kato.last.task.id", taskId) + .put("service.region", region) + .put("service.account", credentials) + .build(); + TaskResult expected = TaskResult.builder(ExecutionStatus.SUCCEEDED).context(expectedContext).build(); + + TaskResult result = task.execute(new Stage( + new Execution(PIPELINE, "orca"), + type, + context)); + + assertThat(result).isEqualToComparingFieldByFieldRecursively(expected); + } +} diff --git a/orca-integrations-gremlin/orca-integrations-gremlin.gradle b/orca-integrations-gremlin/orca-integrations-gremlin.gradle new file mode 100644 index 0000000000..06be5ee4a9 --- /dev/null +++ b/orca-integrations-gremlin/orca-integrations-gremlin.gradle @@ -0,0 +1,9 @@ +apply from: "$rootDir/gradle/kotlin.gradle" + +dependencies { + compile project(":orca-core") + compile project(":orca-kotlin") + compile project(":orca-retrofit") + + testCompile project(":orca-core-tck") +} diff --git a/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/config/GremlinConfiguration.kt b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/config/GremlinConfiguration.kt new file mode 100644 index 0000000000..64a1f7ef19 --- /dev/null +++ b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/config/GremlinConfiguration.kt @@ -0,0 +1,55 @@ +package com.netflix.spinnaker.orca.config; + +import com.fasterxml.jackson.databind.PropertyNamingStrategy +import com.fasterxml.jackson.databind.SerializationFeature +import com.netflix.spinnaker.orca.gremlin.GremlinConverter +import com.netflix.spinnaker.orca.gremlin.GremlinService +import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper +import com.netflix.spinnaker.orca.retrofit.logging.RetrofitSlf4jLog +import org.springframework.beans.factory.annotation.Value +import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty +import org.springframework.context.annotation.Bean +import org.springframework.context.annotation.ComponentScan; +import 
org.springframework.context.annotation.Configuration; +import retrofit.Endpoint +import retrofit.Endpoints +import retrofit.RequestInterceptor +import retrofit.RestAdapter +import retrofit.client.Client + +@Configuration +@ConditionalOnProperty("integrations.gremlin.enabled") +@ComponentScan( + "com.netflix.spinnaker.orca.gremlin.pipeline", + "com.netflix.spinnaker.orca.gremlin.tasks" +) +class GremlinConfiguration { + + @Bean + fun gremlinEndpoint( + @Value("\${integrations.gremlin.baseUrl}") gremlinBaseUrl: String): Endpoint { + return Endpoints.newFixedEndpoint(gremlinBaseUrl) + } + + @Bean + fun gremlinService( + retrofitClient: Client, + gremlinEndpoint: Endpoint, + spinnakerRequestInterceptor: RequestInterceptor + ): GremlinService { + val mapper = OrcaObjectMapper + .newInstance() + .setPropertyNamingStrategy( + PropertyNamingStrategy.SNAKE_CASE) + .disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS) // we want Instant serialized as ISO string + return RestAdapter.Builder() + .setRequestInterceptor(spinnakerRequestInterceptor) + .setEndpoint(gremlinEndpoint) + .setClient(retrofitClient) + .setLogLevel(RestAdapter.LogLevel.BASIC) + .setLog(RetrofitSlf4jLog(GremlinService::class.java)) + .setConverter(GremlinConverter(mapper)) + .build() + .create(GremlinService::class.java) + } +} diff --git a/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/GremlinConverter.java b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/GremlinConverter.java new file mode 100644 index 0000000000..e909c98584 --- /dev/null +++ b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/GremlinConverter.java @@ -0,0 +1,54 @@ +package com.netflix.spinnaker.orca.gremlin; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.JavaType; +import com.fasterxml.jackson.databind.ObjectMapper; +import retrofit.converter.ConversionException; +import retrofit.converter.Converter; 
+import retrofit.mime.TypedByteArray; +import retrofit.mime.TypedInput; +import retrofit.mime.TypedOutput; + +import java.io.BufferedReader; +import java.io.IOException; +import java.io.InputStreamReader; +import java.io.UnsupportedEncodingException; +import java.lang.reflect.Type; +import java.util.stream.Collectors; + +public class GremlinConverter implements Converter { + private static final String MIME_TYPE = "application/json; charset=UTF-8"; + + private final ObjectMapper objectMapper; + + public GremlinConverter() { + this(new ObjectMapper()); + } + + public GremlinConverter(ObjectMapper objectMapper) { + this.objectMapper = objectMapper; + } + + @Override public Object fromBody(TypedInput body, Type type) throws ConversionException { + try { + if (type.getTypeName().equals(String.class.getName())) { + return new BufferedReader(new InputStreamReader(body.in())) + .lines().collect(Collectors.joining("\n")); + } else { + JavaType javaType = objectMapper.getTypeFactory().constructType(type); + return objectMapper.readValue(body.in(), javaType); + } + } catch (final IOException ioe) { + throw new ConversionException(ioe); + } + } + + @Override public TypedOutput toBody(Object object) { + try { + String json = objectMapper.writeValueAsString(object); + return new TypedByteArray(MIME_TYPE, json.getBytes("UTF-8")); + } catch (JsonProcessingException | UnsupportedEncodingException e) { + throw new AssertionError(e); + } + } +} diff --git a/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/GremlinService.kt b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/GremlinService.kt new file mode 100644 index 0000000000..63277182a6 --- /dev/null +++ b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/GremlinService.kt @@ -0,0 +1,46 @@ +package com.netflix.spinnaker.orca.gremlin + +import retrofit.http.*; + +interface GremlinService { + @POST("/attacks/new") + @Headers( + "Content-Type: 
application/json", + "X-Gremlin-Agent: spinnaker/0.1.0" + ) + fun create( + @Header("Authorization") authHeader: String, + @Body attackParameters: AttackParameters + ): String + + @GET("/executions") + @Headers( + "X-Gremlin-Agent: spinnaker/0.1.0" + ) + fun getStatus( + @Header("Authorization") authHeader: String, + @Query("taskId") attackGuid: String + ): List<AttackStatus> + + @DELETE("/attacks/{attackGuid}") + @Headers( + "X-Gremlin-Agent: spinnaker/0.1.0" + ) + fun haltAttack( + @Header("Authorization") authHeader: String, + @Path("attackGuid") attackGuid: String + ): Void +} + +data class AttackParameters( + val command: Map<String, Any>, + val target: Map<String, Any> +) + +data class AttackStatus( + val guid: String, + val stage: String, + val stageLifecycle: String, + val endTime: String?, + val output: String? +) diff --git a/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/pipeline/GremlinStage.java b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/pipeline/GremlinStage.java new file mode 100644 index 0000000000..02fcbb15d9 --- /dev/null +++ b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/pipeline/GremlinStage.java @@ -0,0 +1,72 @@ +package com.netflix.spinnaker.orca.gremlin.pipeline; + +import com.netflix.spinnaker.orca.CancellableStage; +import com.netflix.spinnaker.orca.gremlin.GremlinService; +import com.netflix.spinnaker.orca.gremlin.tasks.LaunchGremlinAttackTask; +import com.netflix.spinnaker.orca.gremlin.tasks.MonitorGremlinAttackTask; +import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilder; +import com.netflix.spinnaker.orca.pipeline.TaskNode; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; + +import javax.annotation.Nonnull; +import java.util.Map; +import java.util.Optional; + +@Component +public class GremlinStage implements StageDefinitionBuilder, CancellableStage { + 
public final static String APIKEY_KEY = "gremlinApiKey"; + public final static String COMMAND_TEMPLATE_ID_KEY = "gremlinCommandTemplateId"; + public final static String TARGET_TEMPLATE_ID_KEY = "gremlinTargetTemplateId"; + public final static String GUID_KEY = "gremlinAttackGuid"; + public final static String TERMINAL_KEY = "isGremlinTerminal"; + + @Autowired + private GremlinService gremlinService; + + @Override + public void taskGraph(@Nonnull Stage stage, @Nonnull TaskNode.Builder builder) { + builder + .withTask("launchGremlinAttack", LaunchGremlinAttackTask.class) + .withTask("monitorGremlinAttack", MonitorGremlinAttackTask.class); + } + + @Override + public Result cancel(Stage stage) { + final Map<String, Object> ctx = stage.getContext(); + final boolean isAttackCompleted = Optional.ofNullable(ctx.get(TERMINAL_KEY)) + .map(s -> { + try { + return Boolean.parseBoolean((String) s); + } catch (final Exception ex) { + return false; + } + }) + .orElse(false); + + if (!isAttackCompleted) { + gremlinService.haltAttack(getApiKey(ctx), getAttackGuid(ctx)); + return new CancellableStage.Result(stage, ctx); + } + return null; + } + + public static String getApiKey(final Map<String, Object> ctx) { + final String apiKey = (String) ctx.get(APIKEY_KEY); + if (apiKey == null || apiKey.isEmpty()) { + throw new RuntimeException("No API Key provided"); + } else { + return "Key " + apiKey; + } + } + + public static String getAttackGuid(final Map<String, Object> ctx) { + final String guid = (String) ctx.get(GUID_KEY); + if (guid == null || guid.isEmpty()) { + throw new RuntimeException("Could not find an active Gremlin attack GUID"); + } else { + return guid; + } + } +} diff --git a/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/tasks/LaunchGremlinAttackTask.java b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/tasks/LaunchGremlinAttackTask.java new file mode 100644 index 0000000000..980224da3c --- /dev/null +++ 
b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/tasks/LaunchGremlinAttackTask.java @@ -0,0 +1,58 @@ +package com.netflix.spinnaker.orca.gremlin.tasks; + +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.gremlin.AttackParameters; +import com.netflix.spinnaker.orca.gremlin.GremlinService; +import com.netflix.spinnaker.orca.gremlin.pipeline.GremlinStage; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; + +import javax.annotation.Nonnull; +import java.util.HashMap; +import java.util.Map; + +import static com.netflix.spinnaker.orca.gremlin.pipeline.GremlinStage.COMMAND_TEMPLATE_ID_KEY; +import static com.netflix.spinnaker.orca.gremlin.pipeline.GremlinStage.GUID_KEY; +import static com.netflix.spinnaker.orca.gremlin.pipeline.GremlinStage.TARGET_TEMPLATE_ID_KEY; + +@Component +public class LaunchGremlinAttackTask implements Task { + private static final String GREMLIN_TEMPLATE_ID_KEY = "template_id"; + + @Autowired + private GremlinService gremlinService; + + @Nonnull + @Override + public TaskResult execute(@Nonnull Stage stage) { + final Map<String, Object> ctx = stage.getContext(); + + final String apiKey = GremlinStage.getApiKey(ctx); + + final String commandTemplateId = (String) ctx.get(COMMAND_TEMPLATE_ID_KEY); + if (commandTemplateId == null || commandTemplateId.isEmpty()) { + throw new RuntimeException("No command template provided"); + } + + final String targetTemplateId = (String) ctx.get(TARGET_TEMPLATE_ID_KEY); + if (targetTemplateId == null || targetTemplateId.isEmpty()) { + throw new RuntimeException("No target template provided"); + } + + final Map<String, Object> commandViaTemplate = new HashMap<>(); + commandViaTemplate.put(GREMLIN_TEMPLATE_ID_KEY, commandTemplateId); + + final Map<String, Object> targetViaTemplate = new 
HashMap<>(); + targetViaTemplate.put(GREMLIN_TEMPLATE_ID_KEY, targetTemplateId); + + final AttackParameters newAttack = new AttackParameters(commandViaTemplate, targetViaTemplate); + + final String createdGuid = gremlinService.create(apiKey, newAttack); + final Map<String, Object> responseMap = new HashMap<>(); + responseMap.put(GUID_KEY, createdGuid); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(responseMap).build(); + } +} diff --git a/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/tasks/MonitorGremlinAttackTask.java b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/tasks/MonitorGremlinAttackTask.java new file mode 100644 index 0000000000..13222f0286 --- /dev/null +++ b/orca-integrations-gremlin/src/main/java/com/netflix/spinnaker/orca/gremlin/tasks/MonitorGremlinAttackTask.java @@ -0,0 +1,75 @@ +package com.netflix.spinnaker.orca.gremlin.tasks; + +import com.netflix.spinnaker.orca.ExecutionStatus; +import com.netflix.spinnaker.orca.OverridableTimeoutRetryableTask; +import com.netflix.spinnaker.orca.Task; +import com.netflix.spinnaker.orca.TaskResult; +import com.netflix.spinnaker.orca.gremlin.AttackStatus; +import com.netflix.spinnaker.orca.gremlin.GremlinService; +import com.netflix.spinnaker.orca.gremlin.pipeline.GremlinStage; +import com.netflix.spinnaker.orca.pipeline.model.Stage; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; + +import javax.annotation.Nonnull; +import java.util.List; +import java.util.Map; +import java.util.concurrent.TimeUnit; + +import static com.netflix.spinnaker.orca.gremlin.pipeline.GremlinStage.TERMINAL_KEY; + +@Component +public class MonitorGremlinAttackTask implements OverridableTimeoutRetryableTask, Task { + @Autowired + private GremlinService gremlinService; + + private final Logger log = LoggerFactory.getLogger(getClass()); + + @Nonnull + @Override 
+ public TaskResult execute(@Nonnull Stage stage) { + final Map<String, Object> ctx = stage.getContext(); + + final String apiKey = GremlinStage.getApiKey(ctx); + final String attackGuid = GremlinStage.getAttackGuid(ctx); + + final List<AttackStatus> statuses = gremlinService.getStatus(apiKey, attackGuid); + + boolean foundFailedAttack = false; + String failureType = ""; + String failureOutput = ""; + + for (final AttackStatus status : statuses) { + if (status.getEndTime() == null) { + return TaskResult.builder(ExecutionStatus.RUNNING).context(ctx).build(); + } + if (isFailure(status.getStageLifecycle())) { + foundFailedAttack = true; + failureType = status.getStage(); + failureOutput = status.getOutput(); + } + } + ctx.put(TERMINAL_KEY, "true"); + if (foundFailedAttack) { + throw new RuntimeException("Gremlin run failed (" + failureType + ") with output : " + failureOutput); + } else { + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(ctx).build(); + } + } + + @Override + public long getBackoffPeriod() { + return TimeUnit.SECONDS.toMillis(10); + } + + @Override + public long getTimeout() { + return TimeUnit.MINUTES.toMillis(15); + } + + private boolean isFailure(final String gremlinStageName) { + return gremlinStageName.equals("Error"); + } +} diff --git a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/KayentaService.kt b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/KayentaService.kt index 9f4e910669..c91ef1c656 100644 --- a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/KayentaService.kt +++ b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/KayentaService.kt @@ -67,7 +67,7 @@ data class CanaryScope( val start: Instant, val end: Instant, val step: Long = 60, // TODO: would be nice to use a Duration - val extendedScopeParams: Map = emptyMap() + val extendedScopeParams: Map = emptyMap() ) data class Thresholds( diff --git a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/model/Deployments.kt 
b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/model/Deployments.kt index 31b1e64b78..87d5e5798c 100644 --- a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/model/Deployments.kt +++ b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/model/Deployments.kt @@ -83,7 +83,7 @@ internal data class ServerGroupSpec( */ internal val ServerGroupSpec.cluster: String get() = when { - moniker != null -> moniker.cluster + moniker != null && moniker.cluster != null -> moniker.cluster application != null -> { val builder = AutoScalingGroupNameBuilder() builder.appName = application diff --git a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/pipeline/RunCanaryIntervalsStage.kt b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/pipeline/RunCanaryIntervalsStage.kt index 66848e3ffe..6edc9ade8d 100644 --- a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/pipeline/RunCanaryIntervalsStage.kt +++ b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/pipeline/RunCanaryIntervalsStage.kt @@ -16,6 +16,7 @@ package com.netflix.spinnaker.orca.kayenta.pipeline +import com.fasterxml.jackson.annotation.JsonCreator import com.fasterxml.jackson.databind.SerializationFeature.WRITE_DATES_AS_TIMESTAMPS import com.fasterxml.jackson.module.kotlin.convertValue import com.netflix.spinnaker.orca.ext.mapTo @@ -46,6 +47,18 @@ class RunCanaryIntervalsStage(private val clock: Clock) : StageDefinitionBuilder override fun taskGraph(stage: Stage, builder: TaskNode.Builder) { } + private fun getDeployDetails(stage: Stage) : DeployedServerGroupContext? 
{ + val deployedServerGroupsStage = stage.parent?.execution?.stages?.find { + it.type == DeployCanaryServerGroupsStage.STAGE_TYPE && it.parentStageId == stage.parentStageId + } + if (deployedServerGroupsStage == null) { + return null + } + val deployedServerGroups = deployedServerGroupsStage.outputs["deployedServerGroups"] as List<*> + val data = deployedServerGroups.first() as Map<String, String?> + return DeployedServerGroupContext.from(data) + } + override fun beforeStages(parent: Stage, graph: StageGraphBuilder) { val canaryConfig = parent.mapTo<KayentaCanaryContext>("/canaryConfig") @@ -89,7 +102,7 @@ class RunCanaryIntervalsStage(private val clock: Clock) : StageDefinitionBuilder canaryConfig.configurationAccountName, canaryConfig.storageAccountName, canaryConfig.canaryConfigId, - buildRequestScopes(canaryConfig, i, canaryAnalysisInterval), + buildRequestScopes(canaryConfig, getDeployDetails(parent), i, canaryAnalysisInterval), canaryConfig.scoreThresholds ) @@ -104,6 +117,7 @@ class RunCanaryIntervalsStage(private val clock: Clock) : StageDefinitionBuilder private fun buildRequestScopes( config: KayentaCanaryContext, + deploymentDetails: DeployedServerGroupContext?, interval: Int, intervalDuration: Duration ): Map<String, CanaryScopes> { @@ -127,22 +141,55 @@ class RunCanaryIntervalsStage(private val clock: Clock) : StageDefinitionBuilder start = end.minus(config.lookback) } - val controlScope = CanaryScope( - scope.controlScope, - scope.controlLocation, + val controlExtendedScopeParams = mutableMapOf() + controlExtendedScopeParams.putAll(scope.extendedScopeParams) + var controlLocation = scope.controlLocation + var controlScope = scope.controlScope + if (deploymentDetails != null) { + if (!controlExtendedScopeParams.containsKey("dataset")) { + controlExtendedScopeParams["dataset"] = "regional" + } + controlLocation = deploymentDetails.controlLocation + controlScope = deploymentDetails.controlScope + controlExtendedScopeParams["type"] = "asg" + if (deploymentDetails.controlAccountId != null) { + 
controlExtendedScopeParams["accountId"] = deploymentDetails.controlAccountId + } + } + + val experimentExtendedScopeParams = mutableMapOf() + experimentExtendedScopeParams.putAll(scope.extendedScopeParams) + var experimentLocation = scope.experimentLocation + var experimentScope = scope.experimentScope + if (deploymentDetails != null) { + if (!experimentExtendedScopeParams.containsKey("dataset")) { + experimentExtendedScopeParams["dataset"] = "regional" + } + experimentLocation = deploymentDetails.experimentLocation + experimentScope = deploymentDetails.experimentScope + experimentExtendedScopeParams["type"] = "asg" + if (deploymentDetails.experimentAccountId != null) { + experimentExtendedScopeParams["accountId"] = deploymentDetails.experimentAccountId + } + } + + val controlScopeData = CanaryScope( + controlScope, + controlLocation, start, end, config.step.seconds, - scope.extendedScopeParams + controlExtendedScopeParams ) - val experimentScope = controlScope.copy( - scope = scope.experimentScope, - location = scope.experimentLocation + val experimentScopeData = controlScopeData.copy( + scope = experimentScope, + location = experimentLocation, + extendedScopeParams = experimentExtendedScopeParams ) requestScopes[scope.scopeName] = CanaryScopes( - controlScope = controlScope, - experimentScope = experimentScope + controlScope = controlScopeData, + experimentScope = experimentScopeData ) } return requestScopes @@ -164,3 +211,25 @@ private val KayentaCanaryContext.startTime: Instant? private val KayentaCanaryContext.step: Duration get() = Duration.ofSeconds(scopes.first().step) + +data class DeployedServerGroupContext @JsonCreator constructor( + val controlLocation: String, + val controlScope: String, + val controlAccountId: String?, + val experimentLocation: String, + val experimentScope: String, + val experimentAccountId: String? 
+) { + companion object { + fun from(data: Map<String, String?>) : DeployedServerGroupContext { + return DeployedServerGroupContext( + data["controlLocation"].orEmpty(), + data["controlScope"].orEmpty(), + data["controlAccountId"], + data["experimentLocation"].orEmpty(), + data["experimentScope"].orEmpty(), + data["experimentAccountId"] + ) + } + } +} diff --git a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/AggregateCanaryResultsTask.kt b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/AggregateCanaryResultsTask.kt index 52235e8d2c..c533613947 100644 --- a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/AggregateCanaryResultsTask.kt +++ b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/AggregateCanaryResultsTask.kt @@ -46,30 +46,30 @@ class AggregateCanaryResultsTask : Task { val finalCanaryScore = runCanaryScores[runCanaryScores.size - 1] return if (canaryConfig.scoreThresholds?.marginal == null && canaryConfig.scoreThresholds?.pass == null) { - TaskResult(SUCCEEDED, mapOf( + TaskResult.builder(SUCCEEDED).context(mapOf( "canaryScores" to runCanaryScores, "canaryScoreMessage" to "No score thresholds were specified." - )) + )).build() } else if (canaryConfig.scoreThresholds.marginal != null && finalCanaryScore <= canaryConfig.scoreThresholds.marginal) { - TaskResult(TERMINAL, mapOf( + TaskResult.builder(TERMINAL).context(mapOf( "canaryScores" to runCanaryScores, "canaryScoreMessage" to "Final canary score $finalCanaryScore is not above the marginal score threshold." - )) + )).build() } else if (canaryConfig.scoreThresholds.pass == null) { - TaskResult(SUCCEEDED, mapOf( + TaskResult.builder(SUCCEEDED).context(mapOf( "canaryScores" to runCanaryScores, "canaryScoreMessage" to "No pass score threshold was specified." 
- )) + )).build() } else if (finalCanaryScore < canaryConfig.scoreThresholds.pass) { - TaskResult(TERMINAL, mapOf( + TaskResult.builder(TERMINAL).context(mapOf( "canaryScores" to runCanaryScores, "canaryScoreMessage" to "Final canary score $finalCanaryScore is below the pass score threshold." - )) + )).build() } else { - TaskResult(SUCCEEDED, mapOf( + TaskResult.builder(SUCCEEDED).context(mapOf( "canaryScores" to runCanaryScores, "canaryScoreMessage" to "Final canary score $finalCanaryScore met or exceeded the pass score threshold." - )) + )).build() } } } diff --git a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/MonitorKayentaCanaryTask.kt b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/MonitorKayentaCanaryTask.kt index 5acdfca6fe..61b9b38454 100644 --- a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/MonitorKayentaCanaryTask.kt +++ b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/MonitorKayentaCanaryTask.kt @@ -54,7 +54,7 @@ class MonitorKayentaCanaryTask( return if (canaryScore <= context.scoreThresholds.marginal) { val resultStatus = if (stage.context["continuePipeline"] == true) FAILED_CONTINUE else TERMINAL - TaskResult(resultStatus, mapOf( + TaskResult.builder(resultStatus).context(mapOf( "canaryPipelineStatus" to SUCCEEDED, "lastUpdated" to canaryResults.endTimeIso?.toEpochMilli(), "lastUpdatedIso" to canaryResults.endTimeIso, @@ -62,16 +62,16 @@ class MonitorKayentaCanaryTask( "canaryScore" to canaryScore, "canaryScoreMessage" to "Canary score is not above the marginal score threshold.", "warnings" to warnings - )) + )).build() } else { - TaskResult(SUCCEEDED, mapOf( + TaskResult.builder(SUCCEEDED).context(mapOf( "canaryPipelineStatus" to SUCCEEDED, "lastUpdated" to canaryResults.endTimeIso?.toEpochMilli(), "lastUpdatedIso" to canaryResults.endTimeIso, "durationString" to canaryResults.result.canaryDuration.toString(), "canaryScore" to canaryScore, "warnings" to 
warnings - )) + )).build() } } @@ -85,10 +85,10 @@ class MonitorKayentaCanaryTask( } // Indicates a failure of some sort. - return TaskResult(TERMINAL, stageOutputs) + return TaskResult.builder(TERMINAL).context(stageOutputs).build() } - return TaskResult(RUNNING, mapOf("canaryPipelineStatus" to canaryResults.executionStatus)) + return TaskResult.builder(RUNNING).context(mapOf("canaryPipelineStatus" to canaryResults.executionStatus)).build() } fun getResultWarnings(context: MonitorKayentaCanaryContext, canaryResults: CanaryResults): List<String> { diff --git a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/PropagateDeployedServerGroupScopes.kt b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/PropagateDeployedServerGroupScopes.kt index 448563d75f..0199dc83e0 100644 --- a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/PropagateDeployedServerGroupScopes.kt +++ b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/PropagateDeployedServerGroupScopes.kt @@ -21,6 +21,7 @@ import com.fasterxml.jackson.annotation.JsonProperty import com.netflix.spinnaker.orca.ExecutionStatus import com.netflix.spinnaker.orca.Task import com.netflix.spinnaker.orca.TaskResult +import com.netflix.spinnaker.orca.clouddriver.MortService import com.netflix.spinnaker.orca.ext.mapTo import com.netflix.spinnaker.orca.kayenta.pipeline.DeployCanaryServerGroupsStage.Companion.DEPLOY_CONTROL_SERVER_GROUPS import com.netflix.spinnaker.orca.kayenta.pipeline.DeployCanaryServerGroupsStage.Companion.DEPLOY_EXPERIMENT_SERVER_GROUPS @@ -28,31 +29,44 @@ import com.netflix.spinnaker.orca.pipeline.model.Stage import org.springframework.stereotype.Component @Component -class PropagateDeployedServerGroupScopes : Task { +class PropagateDeployedServerGroupScopes( + private val mortService: MortService +) : Task { + + private fun findAccountId(accountName: String): String?
{ + val details = mortService.getAccountDetails(accountName) + val accountId = details["accountId"] as String? + return accountId + } override fun execute(stage: Stage): TaskResult { val serverGroupPairs = stage.childrenOf(DEPLOY_CONTROL_SERVER_GROUPS) zip stage.childrenOf(DEPLOY_EXPERIMENT_SERVER_GROUPS) val scopes = serverGroupPairs.map { (control, experiment) -> - val scope = mutableMapOf<String, String>() - control.mapTo<DeployServerGroupContext>().deployServerGroups.entries.first().let { (location, serverGroups) -> + val scope = mutableMapOf<String, String?>() + val controlContext = control.mapTo<DeployServerGroupContext>() + controlContext.deployServerGroups.entries.first().let { (location, serverGroups) -> scope["controlLocation"] = location scope["controlScope"] = serverGroups.first() } - experiment.mapTo<DeployServerGroupContext>().deployServerGroups.entries.first().let { (location, serverGroups) -> + scope["controlAccountId"] = findAccountId(controlContext.accountName) + val experimentContext = experiment.mapTo<DeployServerGroupContext>() + experimentContext.deployServerGroups.entries.first().let { (location, serverGroups) -> scope["experimentLocation"] = location scope["experimentScope"] = serverGroups.first() } + scope["experimentAccountId"] = findAccountId(experimentContext.accountName) scope } - return TaskResult(ExecutionStatus.SUCCEEDED, emptyMap(), mapOf( - "deployedServerGroups" to scopes - )) + return TaskResult.builder(ExecutionStatus.SUCCEEDED) + .outputs(mapOf("deployedServerGroups" to scopes)) + .build() } } + private fun Stage.childrenOf(name: String): List<Stage> { val stage = execution.stages.find { it.name == name @@ -66,5 +80,6 @@ private fun Stage.childrenOf(name: String): List<Stage> { data class DeployServerGroupContext @JsonCreator constructor( - @param:JsonProperty("deploy.server.groups") val deployServerGroups: Map<String, List<String>> + @param:JsonProperty("deploy.server.groups") val deployServerGroups: Map<String, List<String>>, + @param:JsonProperty("deploy.account.name") val accountName: String ) diff --git a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/RunKayentaCanaryTask.kt
b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/RunKayentaCanaryTask.kt index 55c83eff00..71326605f9 100644 --- a/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/RunKayentaCanaryTask.kt +++ b/orca-kayenta/src/main/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/RunKayentaCanaryTask.kt @@ -16,14 +16,11 @@ package com.netflix.spinnaker.orca.kayenta.tasks -import com.fasterxml.jackson.module.kotlin.convertValue import com.netflix.spinnaker.orca.ExecutionStatus.SUCCEEDED import com.netflix.spinnaker.orca.Task import com.netflix.spinnaker.orca.TaskResult import com.netflix.spinnaker.orca.ext.mapTo -import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper import com.netflix.spinnaker.orca.kayenta.CanaryExecutionRequest -import com.netflix.spinnaker.orca.kayenta.CanaryScopes import com.netflix.spinnaker.orca.kayenta.KayentaService import com.netflix.spinnaker.orca.kayenta.model.RunCanaryContext import com.netflix.spinnaker.orca.pipeline.model.Stage @@ -40,12 +37,6 @@ class RunKayentaCanaryTask( override fun execute(stage: Stage): TaskResult { val context = stage.mapTo<RunCanaryContext>() - // The `DeployCanaryServerGroups` stage will deploy a list of experiment/control - // pairs, but we will only canary the first pair in `deployedServerGroups`.
- val scopes = stage.context["deployedServerGroups"]?.let { - val pairs = OrcaObjectMapper.newInstance().convertValue<List<DeployedServerGroupPair>>(it) - context.scopes.from(pairs.first()) - } ?: context.scopes val canaryPipelineExecutionId = kayentaService.create( context.canaryConfigId, @@ -54,31 +45,9 @@ class RunKayentaCanaryTask( context.metricsAccountName, context.configurationAccountName, context.storageAccountName, - CanaryExecutionRequest(scopes, context.scoreThresholds) + CanaryExecutionRequest(context.scopes, context.scoreThresholds) )["canaryExecutionId"] as String - return TaskResult(SUCCEEDED, singletonMap("canaryPipelineExecutionId", canaryPipelineExecutionId)) + return TaskResult.builder(SUCCEEDED).context("canaryPipelineExecutionId", canaryPipelineExecutionId).build() } } - -private fun Map<String, CanaryScopes>.from(pair: DeployedServerGroupPair): Map<String, CanaryScopes> { - return entries.associate { (key, scope) -> - key to scope.copy( - controlScope = scope.controlScope.copy( - scope = pair.controlScope, - location = pair.controlLocation - ), - experimentScope = scope.experimentScope.copy( - scope = pair.experimentScope, - location = pair.experimentLocation - ) - ) - } -} - -internal data class DeployedServerGroupPair( - val controlLocation: String, - val controlScope: String, - val experimentLocation: String, - val experimentScope: String -) diff --git a/orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/pipeline/RunCanaryIntervalsStageTest.kt b/orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/pipeline/RunCanaryIntervalsStageTest.kt index d4f4464d30..f4e05a9dd9 100644 --- a/orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/pipeline/RunCanaryIntervalsStageTest.kt +++ b/orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/pipeline/RunCanaryIntervalsStageTest.kt @@ -335,7 +335,6 @@ object RunCanaryIntervalsStageTest : Spek({ .allMatch { it == attributes } } } - } }) @@ -379,4 +378,3 @@ val Int.minutesInSeconds: Int val Long.minutesInSeconds: Long get() =
Duration.ofMinutes(this).seconds - diff --git a/orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/PropagateDeployedServerGroupScopesTest.kt b/orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/PropagateDeployedServerGroupScopesTest.kt index 0f6b2c18c0..16bded9aa0 100644 --- a/orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/PropagateDeployedServerGroupScopesTest.kt +++ b/orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/PropagateDeployedServerGroupScopesTest.kt @@ -17,14 +17,18 @@ package com.netflix.spinnaker.orca.kayenta.tasks import com.fasterxml.jackson.module.kotlin.convertValue +import com.netflix.spinnaker.orca.clouddriver.MortService import com.netflix.spinnaker.orca.clouddriver.pipeline.servergroup.CreateServerGroupStage import com.netflix.spinnaker.orca.fixture.pipeline import com.netflix.spinnaker.orca.fixture.stage import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper import com.netflix.spinnaker.orca.kato.pipeline.ParallelDeployStage +import com.netflix.spinnaker.orca.kayenta.CanaryScopes import com.netflix.spinnaker.orca.kayenta.pipeline.DeployCanaryServerGroupsStage import com.netflix.spinnaker.orca.kayenta.pipeline.DeployCanaryServerGroupsStage.Companion.DEPLOY_CONTROL_SERVER_GROUPS import com.netflix.spinnaker.orca.kayenta.pipeline.DeployCanaryServerGroupsStage.Companion.DEPLOY_EXPERIMENT_SERVER_GROUPS +import com.nhaarman.mockito_kotlin.doReturn +import com.nhaarman.mockito_kotlin.mock import org.assertj.core.api.Assertions.assertThat import org.assertj.core.api.Assertions.fail import org.jetbrains.spek.api.Spek @@ -33,8 +37,12 @@ import org.jetbrains.spek.api.dsl.it object PropagateDeployedServerGroupScopesTest : Spek({ + val mort = mock<MortService> { + on { getAccountDetails("foo") } doReturn mapOf("accountId" to "abc123") + on { getAccountDetails("bar") } doReturn mapOf("accountId" to "def456") + } - val subject = PropagateDeployedServerGroupScopes() + val subject =
PropagateDeployedServerGroupScopes(mort) val objectMapper = OrcaObjectMapper.newInstance() given("upstream experiment and control deploy stages") { @@ -52,6 +60,7 @@ object PropagateDeployedServerGroupScopesTest : Spek({ context["deploy.server.groups"] = mapOf( "us-central1" to listOf("app-control-a-v000") ) + context["deploy.account.name"] = "foo" } stage { @@ -59,6 +68,7 @@ object PropagateDeployedServerGroupScopesTest : Spek({ context["deploy.server.groups"] = mapOf( "us-central1" to listOf("app-control-b-v000") ) + context["deploy.account.name"] = "bar" } } @@ -71,6 +81,7 @@ object PropagateDeployedServerGroupScopesTest : Spek({ context["deploy.server.groups"] = mapOf( "us-central1" to listOf("app-experiment-a-v000") ) + context["deploy.account.name"] = "foo" } stage { @@ -78,6 +89,7 @@ object PropagateDeployedServerGroupScopesTest : Spek({ context["deploy.server.groups"] = mapOf( "us-central1" to listOf("app-experiment-b-v000") ) + context["deploy.account.name"] = "bar" } } } @@ -88,16 +100,20 @@ object PropagateDeployedServerGroupScopesTest : Spek({ objectMapper.convertValue<List<DeployedServerGroupPair>>(pairs).let { assertThat(it).containsExactlyInAnyOrder( DeployedServerGroupPair( + experimentAccountId = "abc123", experimentScope = "app-experiment-a-v000", experimentLocation = "us-central1", + controlAccountId = "abc123", controlScope = "app-control-a-v000", controlLocation = "us-central1" ), DeployedServerGroupPair( + experimentAccountId = "def456", experimentScope = "app-experiment-b-v000", experimentLocation = "us-central1", controlScope = "app-control-b-v000", - controlLocation = "us-central1" + controlLocation = "us-central1", + controlAccountId = "def456" ) ) } @@ -105,3 +121,12 @@ object PropagateDeployedServerGroupScopesTest : Spek({ } ?: fail("Task should output `deployedServerGroups`") } }) + +internal data class DeployedServerGroupPair( + val controlLocation: String, + val controlScope: String, + val controlAccountId: String?, + val experimentLocation: String, + val
experimentScope: String, + val experimentAccountId: String? +) diff --git a/orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/RunKayentaCanaryTaskTest.kt b/orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/RunKayentaCanaryTaskTest.kt deleted file mode 100644 index 6c3e4d35ce..0000000000 --- a/orca-kayenta/src/test/kotlin/com/netflix/spinnaker/orca/kayenta/tasks/RunKayentaCanaryTaskTest.kt +++ /dev/null @@ -1,104 +0,0 @@ -/* - * Copyright 2018 Google, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License") - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package com.netflix.spinnaker.orca.kayenta.tasks - -import com.netflix.spinnaker.orca.fixture.pipeline -import com.netflix.spinnaker.orca.fixture.stage -import com.netflix.spinnaker.orca.kayenta.KayentaService -import com.netflix.spinnaker.orca.kayenta.pipeline.DeployCanaryServerGroupsStage -import com.netflix.spinnaker.orca.kayenta.pipeline.RunCanaryPipelineStage -import com.nhaarman.mockito_kotlin.* -import org.assertj.core.api.Assertions.assertThat -import org.jetbrains.spek.api.Spek -import org.jetbrains.spek.api.dsl.given -import org.jetbrains.spek.api.dsl.it -import org.jetbrains.spek.api.dsl.on - -import java.time.Duration -import java.time.Instant -import java.util.* - -object RunKayentaCanaryTaskTest : Spek({ - - val kayenta: KayentaService = mock() - val subject = RunKayentaCanaryTask(kayenta) - - given("an upstream stage that generated a summary of upstream control & experiment server groups") { - val pipeline = pipeline { - stage { - refId = "1" - type = DeployCanaryServerGroupsStage.STAGE_TYPE - name = "deployCanaryServerGroups" - outputs["deployedServerGroups"] = listOf( - mapOf( - "controlScope" to "app-control-v000", - "controlLocation" to "us-central1", - "experimentScope" to "app-experiment-v000", - "experimentLocation" to "us-central1" - ) - ) - } - stage { - refId = "2" - requisiteStageRefIds = listOf("1") - type = RunCanaryPipelineStage.STAGE_TYPE - context = mapOf( - "canaryConfigId" to UUID.randomUUID().toString(), - "parentPipelineExecutionId" to "ABC", - "scopes" to mapOf( - "default" to mapOf( - "controlScope" to mapOf( - "start" to Instant.now().epochSecond, - "end" to Instant.now().epochSecond - ), - "experimentScope" to mapOf( - "start" to Instant.now().epochSecond, - "end" to Instant.now().epochSecond - ) - ) - ), - "scoreThresholds" to mapOf( - "pass" to 90, - "marginal" to 50 - ), - "lifetimeDuration" to Duration.parse("PT1H") - ) - } - } - - beforeGroup { - whenever(kayenta.create(anyOrNull(), anyOrNull(), anyOrNull(), 
anyOrNull(), anyOrNull(), anyOrNull(), anyOrNull())).thenReturn( - mapOf("canaryExecutionId" to "ABC") - ) - } - - on("executing the task") { - subject.execute(pipeline.stageByRef("2")) - } - - it("executes Kayenta canary request with hydrated scopes") { - verify(kayenta).create(anyOrNull(), anyOrNull(), anyOrNull(), anyOrNull(), anyOrNull(), anyOrNull(), check { - it.scopes["default"].let { - assertThat(it?.controlScope?.location).isEqualTo("us-central1") - assertThat(it?.controlScope?.scope).isEqualTo("app-control-v000") - assertThat(it?.experimentScope?.location).isEqualTo("us-central1") - assertThat(it?.experimentScope?.scope).isEqualTo("app-experiment-v000") - } - }) - } - } -}) diff --git a/orca-keel/src/main/kotlin/com/netflix/spinnaker/orca/keel/task/DeleteIntentTask.kt b/orca-keel/src/main/kotlin/com/netflix/spinnaker/orca/keel/task/DeleteIntentTask.kt index c14a66d249..4502a99a53 100644 --- a/orca-keel/src/main/kotlin/com/netflix/spinnaker/orca/keel/task/DeleteIntentTask.kt +++ b/orca-keel/src/main/kotlin/com/netflix/spinnaker/orca/keel/task/DeleteIntentTask.kt @@ -47,10 +47,8 @@ class DeleteIntentTask val outputs = mapOf("intent.id" to intentId) - return TaskResult( - if (response.status == HttpStatus.NO_CONTENT.value()) ExecutionStatus.SUCCEEDED else ExecutionStatus.TERMINAL, - outputs - ) + val executionStatus = if (response.status == HttpStatus.NO_CONTENT.value()) ExecutionStatus.SUCCEEDED else ExecutionStatus.TERMINAL + return TaskResult.builder(executionStatus).context(outputs).build() } override fun getBackoffPeriod() = TimeUnit.SECONDS.toMillis(15) diff --git a/orca-keel/src/main/kotlin/com/netflix/spinnaker/orca/keel/task/UpsertIntentTask.kt b/orca-keel/src/main/kotlin/com/netflix/spinnaker/orca/keel/task/UpsertIntentTask.kt index f0db0d1353..ba2982fe6e 100644 --- a/orca-keel/src/main/kotlin/com/netflix/spinnaker/orca/keel/task/UpsertIntentTask.kt +++ b/orca-keel/src/main/kotlin/com/netflix/spinnaker/orca/keel/task/UpsertIntentTask.kt @@ -78,10 
+78,8 @@ class UpsertIntentTask throw e } - return TaskResult( - if (response.status == HttpStatus.ACCEPTED.value()) ExecutionStatus.SUCCEEDED else ExecutionStatus.TERMINAL, - outputs - ) + val executionStatus = if (response.status == HttpStatus.ACCEPTED.value()) ExecutionStatus.SUCCEEDED else ExecutionStatus.TERMINAL + return TaskResult.builder(executionStatus).context(outputs).build() } override fun getBackoffPeriod() = TimeUnit.SECONDS.toMillis(15) diff --git a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/pipeline/DeployCanaryStage.groovy b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/pipeline/DeployCanaryStage.groovy index 69b2fbf4be..315c1d298e 100644 --- a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/pipeline/DeployCanaryStage.groovy +++ b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/pipeline/DeployCanaryStage.groovy @@ -159,7 +159,7 @@ class DeployCanaryStage extends ParallelDeployStage implements CloudProviderAwar // if the canary is configured to continue on failure, we need to short-circuit if one of the deploys failed def unsuccessfulDeployStage = deployStages.find { s -> s.status != ExecutionStatus.SUCCEEDED } if (unsuccessfulDeployStage) { - return new TaskResult(ExecutionStatus.TERMINAL) + return TaskResult.ofStatus(ExecutionStatus.TERMINAL) } def deployedClusterPairs = [] for (Map pair in context.clusterPairs) { @@ -222,7 +222,7 @@ class DeployCanaryStage extends ParallelDeployStage implements CloudProviderAwar ) canary.canaryDeployments = deployedClusterPairs - new TaskResult(ExecutionStatus.SUCCEEDED, [canary: canary, deployedClusterPairs: deployedClusterPairs]) + TaskResult.builder(ExecutionStatus.SUCCEEDED).context([canary: canary, deployedClusterPairs: deployedClusterPairs]).build() } } diff --git a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/CleanupCanaryTask.groovy b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/CleanupCanaryTask.groovy index 
22be83e21c..fb28cc6c83 100644 --- a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/CleanupCanaryTask.groovy +++ b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/CleanupCanaryTask.groovy @@ -47,7 +47,7 @@ class CleanupCanaryTask extends AbstractCloudProviderAwareTask implements Task { log.info "Cleaning up canary clusters in ${stage.id} with ${ops}" String cloudProvider = ops && !ops.empty ? ops.first()?.values().first()?.cloudProvider : getCloudProvider(stage) ?: 'aws' def taskId = katoService.requestOperations(cloudProvider, ops).toBlocking().first() - return new TaskResult(ExecutionStatus.SUCCEEDED, ['kato.last.task.id': taskId]) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(['kato.last.task.id': taskId]).build() } static class Canary { diff --git a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/CompleteCanaryTask.groovy b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/CompleteCanaryTask.groovy index 8c64ff6132..53aa427521 100644 --- a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/CompleteCanaryTask.groovy +++ b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/CompleteCanaryTask.groovy @@ -28,15 +28,13 @@ class CompleteCanaryTask implements Task { TaskResult execute(Stage stage) { Map canary = stage.context.canary if (canary.status?.status == 'CANCELED') { - return new TaskResult(ExecutionStatus.CANCELED) + return TaskResult.ofStatus(ExecutionStatus.CANCELED) } else if (canary.canaryResult?.overallResult == 'SUCCESS') { - return new TaskResult(ExecutionStatus.SUCCEEDED) + return TaskResult.ofStatus(ExecutionStatus.SUCCEEDED) } else if (canary?.health?.health in ["UNHEALTHY", "UNKNOWN"] || canary.canaryResult?.overallResult == 'FAILURE') { - return new TaskResult( - stage.context.continueOnUnhealthy == true + return TaskResult.ofStatus(stage.context.continueOnUnhealthy == true ? 
ExecutionStatus.FAILED_CONTINUE - : ExecutionStatus.TERMINAL - ) + : ExecutionStatus.TERMINAL) } else { throw new IllegalStateException("Canary in unhandled state") } diff --git a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/DisableCanaryTask.groovy b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/DisableCanaryTask.groovy index 7ab028a2d6..eccbc41e14 100644 --- a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/DisableCanaryTask.groovy +++ b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/DisableCanaryTask.groovy @@ -45,10 +45,10 @@ class DisableCanaryTask extends AbstractCloudProviderAwareTask implements Task { def canary = mineService.getCanary(stage.context.canary.id) if (canary.health?.health == 'UNHEALTHY' || stage.context.unhealthy != null) { // If unhealthy, already disabled in MonitorCanaryTask - return new TaskResult(ExecutionStatus.SUCCEEDED, [ + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ waitTime : waitTime, unhealthy : true - ]) + ]).build() } } catch (RetrofitError e) { log.error("Exception occurred while getting canary status with id {} from mine, continuing with disable", @@ -63,11 +63,11 @@ class DisableCanaryTask extends AbstractCloudProviderAwareTask implements Task { String cloudProvider = ops && !ops.empty ? 
ops.first()?.values().first()?.cloudProvider : getCloudProvider(stage) ?: 'aws' def taskId = katoService.requestOperations(cloudProvider, ops).toBlocking().first() - return new TaskResult(ExecutionStatus.SUCCEEDED, [ + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context([ 'kato.last.task.id' : taskId, 'deploy.server.groups' : dSG, disabledCluster : selector, waitTime : waitTime - ]) + ]).build() } } diff --git a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/MonitorAcaTaskTask.groovy b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/MonitorAcaTaskTask.groovy index e47195cfcd..f0fd853a43 100644 --- a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/MonitorAcaTaskTask.groovy +++ b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/MonitorAcaTaskTask.groovy @@ -50,15 +50,15 @@ class MonitorAcaTaskTask extends AbstractCloudProviderAwareTask implements Overr ] } catch (RetrofitError e) { log.error("Exception occurred while getting canary with id ${context.canary.id} from mine service", e) - return new TaskResult(ExecutionStatus.RUNNING, outputs) + return TaskResult.builder(ExecutionStatus.RUNNING).context(outputs).build() } if (outputs.canary.status?.complete) { log.info("Canary $stage.id complete") - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).outputs(outputs).build() } log.info("Canary in progress: ${outputs.canary}") - return new TaskResult(ExecutionStatus.RUNNING, outputs) + return TaskResult.builder(ExecutionStatus.RUNNING).context(outputs).build() } } diff --git a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/MonitorCanaryTask.groovy b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/MonitorCanaryTask.groovy index b5d65751b0..3de9267975 100644 --- a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/MonitorCanaryTask.groovy +++ 
b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/MonitorCanaryTask.groovy @@ -59,12 +59,12 @@ class MonitorCanaryTask extends AbstractCloudProviderAwareTask implements Overri ] } catch (RetrofitError e) { log.error("Exception occurred while getting canary with id ${context.canary.id} from mine service", e) - return new TaskResult(ExecutionStatus.RUNNING, outputs) + return TaskResult.builder(ExecutionStatus.RUNNING).context(outputs).build() } if (outputs.canary.status?.complete) { log.info("Canary $stage.id complete") - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).outputs(outputs).build() } if (outputs.canary.health?.health == 'UNHEALTHY' && !context.disableRequested) { @@ -91,7 +91,7 @@ class MonitorCanaryTask extends AbstractCloudProviderAwareTask implements Overri } log.info("Canary in progress: ${outputs.canary}") - return new TaskResult(ExecutionStatus.RUNNING, outputs) + return TaskResult.builder(ExecutionStatus.RUNNING).context(outputs).build() } String getCloudProvider(List operations, Stage stage){ diff --git a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/RegisterAcaTaskTask.groovy b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/RegisterAcaTaskTask.groovy index a3b74588e0..1e93077cca 100644 --- a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/RegisterAcaTaskTask.groovy +++ b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/RegisterAcaTaskTask.groovy @@ -59,7 +59,7 @@ class RegisterAcaTaskTask implements Task { stageTimeoutMs: getMonitorTimeout(canary), ] - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } Map buildCanary(String app, Stage stage) { diff --git a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/RegisterCanaryTask.groovy 
b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/RegisterCanaryTask.groovy index 4034f1052b..8bae4902a1 100644 --- a/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/RegisterCanaryTask.groovy +++ b/orca-mine/src/main/groovy/com/netflix/spinnaker/orca/mine/tasks/RegisterCanaryTask.groovy @@ -85,7 +85,7 @@ class RegisterCanaryTask implements Task { outputs.account = deployStage.context.deployedClusterPairs[0].canaryCluster.accountName } - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build() } Map buildCanary(String app, Stage stage) { diff --git a/orca-pipelinetemplate/orca-pipelinetemplate.gradle b/orca-pipelinetemplate/orca-pipelinetemplate.gradle index 06625a55f4..f07245743e 100644 --- a/orca-pipelinetemplate/orca-pipelinetemplate.gradle +++ b/orca-pipelinetemplate/orca-pipelinetemplate.gradle @@ -7,7 +7,9 @@ dependencies { compile project(":orca-front50") compile project(":orca-clouddriver") - compile('com.hubspot.jinjava:jinjava:2.2.3') + compile('com.hubspot.jinjava:jinjava:2.2.3') { + force = true + } compile "com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:${spinnaker.version('jackson')}" compile "com.fasterxml.jackson.module:jackson-module-kotlin:${spinnaker.version("jackson")}" @@ -23,6 +25,7 @@ dependencies { compile spinnaker.dependency("okHttp3") compileOnly spinnaker.dependency("lombok") + annotationProcessor spinnaker.dependency("lombok") testCompile spinnaker.dependency("slf4jSimple") testCompile 'org.spockframework:spock-unitils:1.1-groovy-2.4-rc-2' diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/PipelineTemplatePreprocessor.kt b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/PipelineTemplatePreprocessor.kt index 406508a098..b62077d8d1 100644 --- 
a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/PipelineTemplatePreprocessor.kt +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/PipelineTemplatePreprocessor.kt @@ -18,12 +18,17 @@ package com.netflix.spinnaker.orca.pipelinetemplate import com.fasterxml.jackson.databind.ObjectMapper import com.netflix.spectator.api.BasicTag import com.netflix.spectator.api.Registry -import com.netflix.spinnaker.orca.extensionpoint.pipeline.PipelinePreprocessor -import com.netflix.spinnaker.orca.pipelinetemplate.handler.* +import com.netflix.spinnaker.orca.extensionpoint.pipeline.ExecutionPreprocessor +import com.netflix.spinnaker.orca.pipelinetemplate.handler.DefaultHandlerChain +import com.netflix.spinnaker.orca.pipelinetemplate.handler.GlobalPipelineTemplateContext +import com.netflix.spinnaker.orca.pipelinetemplate.handler.PipelineTemplateContext +import com.netflix.spinnaker.orca.pipelinetemplate.handler.PipelineTemplateErrorHandler +import com.netflix.spinnaker.orca.pipelinetemplate.handler.SchemaVersionHandler import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.model.V2PipelineTemplate import org.slf4j.LoggerFactory import org.springframework.beans.factory.annotation.Autowired import org.springframework.stereotype.Component +import javax.annotation.Nonnull import javax.annotation.PostConstruct @Component("pipelineTemplatePreprocessor") @@ -33,18 +38,17 @@ class PipelineTemplatePreprocessor private val schemaVersionHandler: SchemaVersionHandler, private val errorHandler: PipelineTemplateErrorHandler, private val registry: Registry -) : PipelinePreprocessor { +) : ExecutionPreprocessor { private val log = LoggerFactory.getLogger(javaClass) private val requestsId = registry.createId("mpt.requests") @PostConstruct fun confirmUsage() = log.info("Using ${javaClass.simpleName}") - override fun process(pipeline: MutableMap<String, Any>?): MutableMap<String, Any> { - if (pipeline == null) { - return mutableMapOf() - } + override fun
supports(@Nonnull execution: MutableMap<String, Any>, + @Nonnull type: ExecutionPreprocessor.Type): Boolean = true + override fun process(pipeline: MutableMap<String, Any>): MutableMap<String, Any> { // TODO(jacobkiefer): We push the 'toplevel' v2 config into a 'config' field to play nice // with MPT v1's opinionated TemplatedPipelineRequest. When we cut over, the template configuration // should be lifted to the top level like users will specify them. diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/TemplatedPipelineRequest.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/TemplatedPipelineRequest.java index 42dfef7e97..8942dcbdc8 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/TemplatedPipelineRequest.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/TemplatedPipelineRequest.java @@ -17,7 +17,9 @@ import com.fasterxml.jackson.annotation.JsonProperty; import com.netflix.spinnaker.kork.artifacts.model.ExpectedArtifact; +import com.netflix.spinnaker.orca.pipelinetemplate.v1schema.model.NamedHashMap; +import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; @@ -30,6 +32,7 @@ public class TemplatedPipelineRequest { Map<String, Object> trigger = new HashMap<>(); Map<String, Object> config; Map<String, Object> template; + List<NamedHashMap> notifications = new ArrayList<>(); String executionId; Boolean plan = false; boolean limitConcurrent = true; @@ -92,6 +95,14 @@ public Map<String, Object> getTemplate() { return template; } + public void setNotifications(List<NamedHashMap> notifications) { + this.notifications = notifications; + } + + public List<NamedHashMap> getNotifications() { + return notifications; + } + public void setTemplate(Map<String, Object> template) { this.template = template; } diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/CreatePipelineTemplateTask.java
b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/CreatePipelineTemplateTask.java index 7dbc545310..28e9a72d01 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/CreatePipelineTemplateTask.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/CreatePipelineTemplateTask.java @@ -75,10 +75,10 @@ public TaskResult execute(Stage stage) { outputs.put("pipelineTemplate.id", pipelineTemplate.getId()); if (response.getStatus() == HttpStatus.OK.value()) { - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } - return new TaskResult(ExecutionStatus.TERMINAL, outputs); + return TaskResult.builder(ExecutionStatus.TERMINAL).context(outputs).build(); } @Override diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/DeletePipelineTemplateTask.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/DeletePipelineTemplateTask.java index 41472806c1..64ae96d26a 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/DeletePipelineTemplateTask.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/DeletePipelineTemplateTask.java @@ -53,10 +53,7 @@ public TaskResult execute(Stage stage) { outputs.put("notification.type", "deletepipelinetemplate"); outputs.put("pipeline.id", templateId); - return new TaskResult( - (response.getStatus() == HttpStatus.OK.value()) ? ExecutionStatus.SUCCEEDED : ExecutionStatus.TERMINAL, - outputs - ); + return TaskResult.builder((response.getStatus() == HttpStatus.OK.value()) ? 
ExecutionStatus.SUCCEEDED : ExecutionStatus.TERMINAL).context(outputs).build(); } @Override diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/PlanTemplateDependentsTask.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/PlanTemplateDependentsTask.java index 43b1102245..809a184b82 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/PlanTemplateDependentsTask.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/PlanTemplateDependentsTask.java @@ -19,7 +19,7 @@ import com.netflix.spinnaker.orca.ExecutionStatus; import com.netflix.spinnaker.orca.RetryableTask; import com.netflix.spinnaker.orca.TaskResult; -import com.netflix.spinnaker.orca.extensionpoint.pipeline.PipelinePreprocessor; +import com.netflix.spinnaker.orca.extensionpoint.pipeline.ExecutionPreprocessor; import com.netflix.spinnaker.orca.front50.Front50Service; import com.netflix.spinnaker.orca.pipeline.model.Stage; import com.netflix.spinnaker.orca.pipelinetemplate.v1schema.model.PipelineTemplate; @@ -44,7 +44,7 @@ public class PlanTemplateDependentsTask implements RetryableTask { private ObjectMapper pipelineTemplateObjectMapper; @Autowired - private PipelinePreprocessor pipelineTemplatePreprocessor; + private ExecutionPreprocessor pipelineTemplatePreprocessor; @Nonnull @Override @@ -93,11 +93,7 @@ public TaskResult execute(@Nonnull Stage stage) { context.put("pipelineTemplate.dependentErrors", errorResponses); } - return new TaskResult( - errorResponses.isEmpty() ? ExecutionStatus.SUCCEEDED : ExecutionStatus.TERMINAL, - context, - Collections.emptyMap() - ); + return TaskResult.builder(errorResponses.isEmpty() ? 
ExecutionStatus.SUCCEEDED : ExecutionStatus.TERMINAL).context(context).outputs(Collections.emptyMap()).build(); } @Override diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/UpdatePipelineTemplateTask.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/UpdatePipelineTemplateTask.java index fe2ed6cb5c..538ea4eb77 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/UpdatePipelineTemplateTask.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/UpdatePipelineTemplateTask.java @@ -88,10 +88,10 @@ public TaskResult execute(Stage stage) { outputs.put("pipelineTemplate.id", pipelineTemplate.getId()); if (response.getStatus() == HttpStatus.OK.value()) { - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } - return new TaskResult(ExecutionStatus.TERMINAL, outputs); + return TaskResult.builder(ExecutionStatus.TERMINAL).context(outputs).build(); } @Override diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/CreateV2PipelineTemplateTask.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/CreateV2PipelineTemplateTask.java index 723635d008..24b887bcdb 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/CreateV2PipelineTemplateTask.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/CreateV2PipelineTemplateTask.java @@ -67,21 +67,21 @@ public TaskResult execute(Stage stage) { validate(pipelineTemplate); - String version = (String) stage.getContext().get("version"); - Response response = front50Service.saveV2PipelineTemplate(version, + String tag = (String) stage.getContext().get("tag"); + Response response = 
front50Service.saveV2PipelineTemplate(tag, (Map) stage.decodeBase64("/pipelineTemplate", Map.class, pipelineTemplateObjectMapper)); // TODO(jacobkiefer): Reduce duplicated code. - String templateId = StringUtils.isEmpty(version) ? pipelineTemplate.getId() : String.format("%s:%s", pipelineTemplate.getId(), version); + String templateId = StringUtils.isEmpty(tag) ? pipelineTemplate.getId() : String.format("%s:%s", pipelineTemplate.getId(), tag); Map outputs = new HashMap<>(); outputs.put("notification.type", "createpipelinetemplate"); outputs.put("pipelineTemplate.id", templateId); if (response.getStatus() == HttpStatus.OK.value()) { - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } - return new TaskResult(ExecutionStatus.TERMINAL, outputs); + return TaskResult.builder(ExecutionStatus.TERMINAL).context(outputs).build(); } @Override diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/DeleteV2PipelineTemplateTask.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/DeleteV2PipelineTemplateTask.java index 004db6bfc5..d2f6d21740 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/DeleteV2PipelineTemplateTask.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/DeleteV2PipelineTemplateTask.java @@ -47,16 +47,16 @@ public TaskResult execute(Stage stage) { } String templateId = (String) stage.getContext().get("pipelineTemplateId"); - String version = (String) stage.getContext().get("version"); + String tag = (String) stage.getContext().get("tag"); String digest = (String) stage.getContext().get("digest"); - Response _ = front50Service.deleteV2PipelineTemplate(templateId, version, digest); + Response _ = front50Service.deleteV2PipelineTemplate(templateId, tag, digest); Map outputs = new 
HashMap<>(); outputs.put("notification.type", "deletepipelinetemplate"); outputs.put("pipeline.id", templateId); - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } @Override diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/UpdateV2PipelineTemplateTask.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/UpdateV2PipelineTemplateTask.java index a6a50e36a3..163049ea38 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/UpdateV2PipelineTemplateTask.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/tasks/v2/UpdateV2PipelineTemplateTask.java @@ -84,21 +84,21 @@ public TaskResult execute(Stage stage) { validate(pipelineTemplate); - String version = (String) stage.getContext().get("version"); + String tag = (String) stage.getContext().get("tag"); Response response = front50Service.updateV2PipelineTemplate((String) stage.getContext().get("id"), - version, (Map) stage.decodeBase64("/pipelineTemplate", Map.class, pipelineTemplateObjectMapper)); + tag, (Map) stage.decodeBase64("/pipelineTemplate", Map.class, pipelineTemplateObjectMapper)); // TODO(jacobkiefer): Reduce duplicated code. - String templateId = StringUtils.isEmpty(version) ? pipelineTemplate.getId() : String.format("%s:%s", pipelineTemplate.getId(), version); + String templateId = StringUtils.isEmpty(tag) ? 
pipelineTemplate.getId() : String.format("%s:%s", pipelineTemplate.getId(), tag); Map outputs = new HashMap<>(); outputs.put("notification.type", "updatepipelinetemplate"); outputs.put("pipelineTemplate.id", templateId); if (response.getStatus() == HttpStatus.OK.value()) { - return new TaskResult(ExecutionStatus.SUCCEEDED, outputs); + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputs).build(); } - return new TaskResult(ExecutionStatus.TERMINAL, outputs); + return TaskResult.builder(ExecutionStatus.TERMINAL).context(outputs).build(); } @Override diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/V2GraphMutator.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/V2GraphMutator.java index 2f0564de97..540a8bd2a8 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/V2GraphMutator.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/V2GraphMutator.java @@ -19,7 +19,6 @@ import com.netflix.spinnaker.orca.pipelinetemplate.v1schema.graph.v2.transform.V2ConfigStageInjectionTransform; import com.netflix.spinnaker.orca.pipelinetemplate.v1schema.graph.v2.transform.V2DefaultVariableAssignmentTransform; import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.V2PipelineTemplateVisitor; -import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.graph.V2PipelineConfigInheritanceTransform; import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.model.V2PipelineTemplate; import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.model.V2TemplateConfiguration; @@ -32,7 +31,6 @@ public class V2GraphMutator { public V2GraphMutator(V2TemplateConfiguration configuration) { visitors.add(new V2DefaultVariableAssignmentTransform(configuration)); - visitors.add(new V2PipelineConfigInheritanceTransform(configuration)); visitors.add(new 
V2ConfigStageInjectionTransform(configuration)); } diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/transform/V2DefaultVariableAssignmentTransform.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/transform/V2DefaultVariableAssignmentTransform.java index a714ed3ff6..bb9e2a12f4 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/transform/V2DefaultVariableAssignmentTransform.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/transform/V2DefaultVariableAssignmentTransform.java @@ -17,10 +17,10 @@ package com.netflix.spinnaker.orca.pipelinetemplate.v1schema.graph.v2.transform; import com.netflix.spinnaker.orca.pipelinetemplate.exceptions.IllegalTemplateConfigurationException; -import com.netflix.spinnaker.orca.pipelinetemplate.v1schema.model.TemplateConfiguration; import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.V2PipelineTemplateVisitor; import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.model.V2PipelineTemplate; import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.model.V2TemplateConfiguration; +import lombok.extern.slf4j.Slf4j; import org.apache.commons.lang3.StringUtils; import java.util.Collection; @@ -29,6 +29,7 @@ import java.util.Map; import java.util.stream.Collectors; +@Slf4j public class V2DefaultVariableAssignmentTransform implements V2PipelineTemplateVisitor { private V2TemplateConfiguration templateConfiguration; @@ -45,7 +46,7 @@ public void visitPipelineTemplate(V2PipelineTemplate pipelineTemplate) { } Map configVars = templateConfiguration.getVariables() != null - ? templateConfiguration.getVariables() + ? 
configurationVariables(pipelineTemplateVariables, templateConfiguration.getVariables()) : new HashMap<>(); // if the config is missing vars and the template defines a default value, assign those values from the config @@ -120,4 +121,21 @@ private boolean isInteger(V2PipelineTemplate.Variable templateVar, Object actual return expectedtype.equalsIgnoreCase("int") && (actualVar instanceof Integer || (noDecimal && instanceOfDouble) || (noDecimal && instanceOfFloat)); } + + /** + * Filter out configuration variables that don't exist in the template. + * + * @param pipelineTemplateVariables Variables from the pipeline template. + * @param configVariables Variables from the pipeline configuration referencing the template. + * @return Map of configuration variables that exist in the declared template variables. + */ + public static Map configurationVariables(List pipelineTemplateVariables, + Map configVariables) { + List templateVariableNames = pipelineTemplateVariables.stream() + .map(var -> var.getName()).collect(Collectors.toList()); + return configVariables.entrySet() + .stream() + .filter(configVar -> templateVariableNames.contains(configVar.getKey())) + .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); + } } diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/V1TemplateLoaderHandler.kt b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/V1TemplateLoaderHandler.kt index be8a03b6b5..455c82650f 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/V1TemplateLoaderHandler.kt +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/V1TemplateLoaderHandler.kt @@ -37,6 +37,7 @@ class V1TemplateLoaderHandler( override fun handle(chain: HandlerChain, context: PipelineTemplateContext) { val config = objectMapper.convertValue(context.getRequest().config, 
TemplateConfiguration::class.java) + config.configuration.notifications.addAll(context.getRequest().notifications.orEmpty()) // Allow template inlining to perform plans without publishing the template if (context.getRequest().plan && context.getRequest().template != null) { diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/v2/V2Handlers.kt b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/v2/V2Handlers.kt index 7bc6482faf..77ecc58d73 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/v2/V2Handlers.kt +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/v2/V2Handlers.kt @@ -17,6 +17,7 @@ package com.netflix.spinnaker.orca.pipelinetemplate.v1schema.handler.v2 import com.fasterxml.jackson.databind.ObjectMapper +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor import com.netflix.spinnaker.orca.pipelinetemplate.handler.Handler import com.netflix.spinnaker.orca.pipelinetemplate.handler.HandlerChain @@ -34,7 +35,8 @@ class V2SchemaHandlerGroup @Autowired constructor( private val templateLoader: V2TemplateLoader, private val objectMapper: ObjectMapper, - private val contextParameterProcessor: ContextParameterProcessor + private val contextParameterProcessor: ContextParameterProcessor, + private val artifactResolver: ArtifactResolver ): HandlerGroup { override fun getHandlers(): List @@ -43,7 +45,7 @@ class V2SchemaHandlerGroup V2ConfigurationValidationHandler(), V2TemplateValidationHandler(), V2GraphMutatorHandler(), - V2PipelineGenerator() + V2PipelineGenerator(artifactResolver) ) } @@ -55,10 +57,19 @@ class V2GraphMutatorHandler : Handler { } } -class V2PipelineGenerator : Handler { +class V2PipelineGenerator ( + private val artifactResolver: ArtifactResolver +) : 
Handler { override fun handle(chain: HandlerChain, context: PipelineTemplateContext) { val ctx = context.getSchemaContext() val generator = V2SchemaExecutionGenerator() - context.getProcessedOutput().putAll(generator.generate(ctx.template, ctx.configuration, context.getRequest())) + + // Explicitly resolve artifacts after preprocessing to support artifacts in templated pipelines. + // TODO(jacobkiefer): Refactor /orchestrate so we don't have to special case v2 artifact resolution. + val generatedPipeline = generator.generate(ctx.template, ctx.configuration, context.getRequest()) + if (!context.getRequest().plan) { + artifactResolver.resolveArtifacts(generatedPipeline) + } + context.getProcessedOutput().putAll(generatedPipeline) } } diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/v2/V2TemplateLoaderHandler.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/v2/V2TemplateLoaderHandler.java index 5a8c9b6754..455331e565 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/v2/V2TemplateLoaderHandler.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/handler/v2/V2TemplateLoaderHandler.java @@ -53,15 +53,9 @@ public V2TemplateLoaderHandler(V2TemplateLoader templateLoader, ContextParameter public void handle(@NotNull HandlerChain chain, @NotNull PipelineTemplateContext context) { V2TemplateConfiguration config = objectMapper.convertValue(context.getRequest().getConfig(), V2TemplateConfiguration.class); - // Allow template inlining to perform plans without publishing the template - if (context.getRequest().getPlan() && context.getRequest().getTemplate() != null) { - V2PipelineTemplate template = objectMapper.convertValue(context.getRequest().getTemplate(), V2PipelineTemplate.class); - context.setSchemaContext(new V2PipelineTemplateContext(config, template)); 
- return; - } - Map trigger = context.getRequest().getTrigger(); // Allow the config's source to be dynamically resolved from trigger payload. + // TODO(jacobkiefer): Reevaluate whether we should enable dynamically resolved templates. renderPipelineTemplateSource(config, trigger); // If a template source isn't provided by the configuration, we're assuming that the configuration is fully-formed. diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/render/tags/ModuleTag.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/render/tags/ModuleTag.java index b4d8f96d5c..263cc6f2f3 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/render/tags/ModuleTag.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/render/tags/ModuleTag.java @@ -18,11 +18,7 @@ import com.fasterxml.jackson.core.JsonProcessingException; import com.fasterxml.jackson.databind.ObjectMapper; import com.google.common.base.Splitter; -import com.hubspot.jinjava.interpret.Context; -import com.hubspot.jinjava.interpret.InterpretException; -import com.hubspot.jinjava.interpret.JinjavaInterpreter; -import com.hubspot.jinjava.interpret.TemplateStateException; -import com.hubspot.jinjava.interpret.TemplateSyntaxException; +import com.hubspot.jinjava.interpret.*; import com.hubspot.jinjava.lib.tag.Tag; import com.hubspot.jinjava.tree.TagNode; import com.hubspot.jinjava.util.HelperStringTokenizer; @@ -37,11 +33,7 @@ import com.netflix.spinnaker.orca.pipelinetemplate.validator.Errors.Error; import org.apache.commons.lang3.StringUtils; -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.Optional; +import java.util.*; import java.util.function.Supplier; import java.util.stream.Collectors; @@ -103,12 +95,12 @@ public String interpret(TagNode tagNode, 
JinjavaInterpreter interpreter) { List missing = new ArrayList<>(); for (NamedHashMap var : module.getVariables()) { // First try to assign the variable from the context directly - Object val = interpreter.resolveELExpression(var.getName(), tagNode.getLineNumber()); + Object val = tryResolveExpression(interpreter, var.getName(), tagNode.getLineNumber()); if (val == null) { // Try to assign from a parameter (using the param value as a context key first, then as a literal) if (paramPairs.containsKey(var.getName())) { val = Optional.ofNullable( - interpreter.resolveELExpression(paramPairs.get(var.getName()), tagNode.getLineNumber()) + tryResolveExpression(interpreter, paramPairs.get(var.getName()), tagNode.getLineNumber()) ).orElse(paramPairs.get(var.getName())); } @@ -216,4 +208,11 @@ private static String removeTrailingCommas(String token) { } return token; } + + private Object tryResolveExpression(JinjavaInterpreter interpreter, String expression, int lineNumber) { + try { + return interpreter.resolveELExpression(expression, lineNumber); + } catch (UnknownTokenException ignored) { } + return null; + } } diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/V2SchemaExecutionGenerator.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/V2SchemaExecutionGenerator.java index 1fc51086a9..5558be7c46 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/V2SchemaExecutionGenerator.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/V2SchemaExecutionGenerator.java @@ -19,6 +19,7 @@ import com.netflix.spinnaker.orca.pipelinetemplate.TemplatedPipelineRequest; import com.netflix.spinnaker.orca.pipelinetemplate.generator.V2ExecutionGenerator; import com.netflix.spinnaker.orca.pipelinetemplate.v1schema.TemplateMerge; +import 
com.netflix.spinnaker.orca.pipelinetemplate.v1schema.graph.v2.transform.V2DefaultVariableAssignmentTransform; import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.model.V2PipelineTemplate; import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.model.V2TemplateConfiguration; @@ -46,7 +47,8 @@ public Map generate(V2PipelineTemplate template, V2TemplateConfi addNotifications(pipeline, template, configuration); addParameters(pipeline, template, configuration); addTriggers(pipeline, template, configuration); - pipeline.put("templateVariables", configuration.getVariables()); + pipeline.put("templateVariables", + V2DefaultVariableAssignmentTransform.configurationVariables(template.getVariables(), configuration.getVariables())); if (request.getTrigger() != null && !request.getTrigger().isEmpty()) { pipeline.put("trigger", request.getTrigger()); @@ -56,35 +58,35 @@ public Map generate(V2PipelineTemplate template, V2TemplateConfi } private void addNotifications(Map pipeline, V2PipelineTemplate template, V2TemplateConfiguration configuration) { - if (configuration.getInherit().contains("notifications")) { + if (configuration.getExclude().contains("notifications")) { pipeline.put( "notifications", - TemplateMerge.mergeDistinct( - (List>) template.getPipeline().get("notifications"), - configuration.getNotifications() - ) + Optional.ofNullable(configuration.getNotifications()).orElse(Collections.emptyList()) ); } else { pipeline.put( "notifications", - Optional.ofNullable(configuration.getNotifications()).orElse(Collections.emptyList()) + TemplateMerge.mergeDistinct( + (List>) template.getPipeline().get("notifications"), + configuration.getNotifications() + ) ); } } private void addParameters(Map pipeline, V2PipelineTemplate template, V2TemplateConfiguration configuration) { - if (configuration.getInherit().contains("parameters")) { + if (configuration.getExclude().contains("parameters")) { pipeline.put( "parameterConfig", - TemplateMerge.mergeDistinct( - (List>) 
template.getPipeline().get("parameterConfig"), - configuration.getParameters() - ) + Optional.ofNullable(configuration.getParameters()).orElse(Collections.emptyList()) ); } else { pipeline.put( "parameterConfig", - Optional.ofNullable(configuration.getParameters()).orElse(Collections.emptyList()) + TemplateMerge.mergeDistinct( + (List>) template.getPipeline().get("parameterConfig"), + configuration.getParameters() + ) ); } } @@ -92,18 +94,18 @@ private void addParameters(Map pipeline, V2PipelineTemplate temp private void addTriggers(Map pipeline, V2PipelineTemplate template, V2TemplateConfiguration configuration) { - if (configuration.getInherit().contains("triggers")) { + if (configuration.getExclude().contains("triggers")) { pipeline.put( "triggers", - TemplateMerge.mergeDistinct( - (List>) template.getPipeline().get("triggers"), - configuration.getTriggers() - ) + Optional.ofNullable(configuration.getTriggers()).orElse(Collections.emptyList()) ); } else { pipeline.put( "triggers", - Optional.ofNullable(configuration.getTriggers()).orElse(Collections.emptyList()) + TemplateMerge.mergeDistinct( + (List>) template.getPipeline().get("triggers"), + configuration.getTriggers() + ) ); } } diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/graph/V2PipelineConfigInheritanceTransform.java b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/graph/V2PipelineConfigInheritanceTransform.java deleted file mode 100644 index 83c5efac26..0000000000 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/graph/V2PipelineConfigInheritanceTransform.java +++ /dev/null @@ -1,53 +0,0 @@ -/* - * Copyright 2018 Google, Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package com.netflix.spinnaker.orca.pipelinetemplate.v2schema.graph; - -import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.V2PipelineTemplateVisitor; -import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.model.V2PipelineTemplate; -import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.model.V2TemplateConfiguration; - -import java.util.Collections; -import java.util.List; -import java.util.Map; - -public class V2PipelineConfigInheritanceTransform implements V2PipelineTemplateVisitor { - - private V2TemplateConfiguration templateConfiguration; - - public V2PipelineConfigInheritanceTransform(V2TemplateConfiguration templateConfiguration) { - this.templateConfiguration = templateConfiguration; - } - - @Override - public void visitPipelineTemplate(V2PipelineTemplate pipelineTemplate) { - List inherit = templateConfiguration.getInherit(); - Map pipeline = pipelineTemplate.getPipeline(); - - if (!inherit.contains("triggers")) { - pipeline.put("triggers", Collections.emptyList()); - } - if (!inherit.contains("parameterConfig")) { - pipeline.put("parameterConfig", Collections.emptyList()); - } - if (!inherit.contains("expectedArtifacts")) { - pipeline.put("expectedArtifacts", Collections.emptyList()); - } - if (!inherit.contains("notifications")) { - pipeline.put("notifications", Collections.emptyList()); - } - } -} diff --git a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/model/V2TemplateConfiguration.java 
b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/model/V2TemplateConfiguration.java index dc0cf4f417..a145016535 100644 --- a/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/model/V2TemplateConfiguration.java +++ b/orca-pipelinetemplate/src/main/java/com/netflix/spinnaker/orca/pipelinetemplate/v2schema/model/V2TemplateConfiguration.java @@ -33,12 +33,11 @@ public class V2TemplateConfiguration implements VersionedSchema { private Artifact template; private Map variables = new HashMap<>(); private List stages = new ArrayList<>(); - private List inherit = new ArrayList<>(); + private List exclude = new ArrayList<>(); private Map concurrentExecutions = new HashMap<>(); private List> triggers = new ArrayList<>(); private List> parameters = new ArrayList<>(); private List> notifications = new ArrayList<>(); - private List> expectedArtifacts = new ArrayList<>(); private String description; private final String runtimeId = UUID.randomUUID().toString(); diff --git a/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/PipelineTemplatePipelinePreprocessorSpec.groovy b/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/PipelineTemplatePipelinePreprocessorSpec.groovy index 6f0fcf8612..97cf4d584c 100644 --- a/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/PipelineTemplatePipelinePreprocessorSpec.groovy +++ b/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/PipelineTemplatePipelinePreprocessorSpec.groovy @@ -19,6 +19,8 @@ import com.fasterxml.jackson.databind.ObjectMapper import com.netflix.spectator.api.* import com.netflix.spinnaker.orca.clouddriver.OortService import com.netflix.spinnaker.orca.front50.Front50Service +import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver import 
com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor import com.netflix.spinnaker.orca.pipelinetemplate.handler.PipelineTemplateErrorHandler import com.netflix.spinnaker.orca.pipelinetemplate.handler.SchemaVersionHandler @@ -47,6 +49,9 @@ class PipelineTemplatePipelinePreprocessorSpec extends Specification { V2TemplateLoader v2TemplateLoader = new V2TemplateLoader(oortService, objectMapper) ContextParameterProcessor contextParameterProcessor = new ContextParameterProcessor() + ExecutionRepository executionRepository = Mock(ExecutionRepository) + ArtifactResolver artifactResolver = Spy(ArtifactResolver, constructorArgs: [objectMapper, executionRepository, new ContextParameterProcessor()]) + Renderer renderer = new JinjaRenderer( new YamlRenderedValueConverter(), objectMapper, Mock(Front50Service), [] ) @@ -65,7 +70,7 @@ class PipelineTemplatePipelinePreprocessorSpec extends Specification { objectMapper, new SchemaVersionHandler( new V1SchemaHandlerGroup( templateLoader, renderer, objectMapper, registry), - new V2SchemaHandlerGroup(v2TemplateLoader, objectMapper, contextParameterProcessor)), + new V2SchemaHandlerGroup(v2TemplateLoader, objectMapper, contextParameterProcessor, artifactResolver)), new PipelineTemplateErrorHandler(), registry ) diff --git a/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/V1SchemaIntegrationSpec.groovy b/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/V1SchemaIntegrationSpec.groovy index 1e8a6858da..4acfcde946 100644 --- a/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/V1SchemaIntegrationSpec.groovy +++ b/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/V1SchemaIntegrationSpec.groovy @@ -19,6 +19,8 @@ import com.fasterxml.jackson.databind.ObjectMapper import com.netflix.spectator.api.* import com.netflix.spinnaker.orca.clouddriver.OortService import 
com.netflix.spinnaker.orca.front50.Front50Service +import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository +import com.netflix.spinnaker.orca.pipeline.util.ArtifactResolver import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor import com.netflix.spinnaker.orca.pipelinetemplate.PipelineTemplatePreprocessor import com.netflix.spinnaker.orca.pipelinetemplate.handler.PipelineTemplateErrorHandler @@ -52,6 +54,9 @@ class V1SchemaIntegrationSpec extends Specification { V2TemplateLoader v2TemplateLoader = new V2TemplateLoader(oortService, objectMapper) ContextParameterProcessor contextParameterProcessor = new ContextParameterProcessor() + ExecutionRepository executionRepository = Mock(ExecutionRepository) + ArtifactResolver artifactResolver = Spy(ArtifactResolver, constructorArgs: [objectMapper, executionRepository, new ContextParameterProcessor()]) + Renderer renderer = new JinjaRenderer( new YamlRenderedValueConverter(), objectMapper, Mock(Front50Service), [] ) @@ -72,7 +77,7 @@ class V1SchemaIntegrationSpec extends Specification { objectMapper, new SchemaVersionHandler( new V1SchemaHandlerGroup(templateLoader, renderer, objectMapper, registry), - new V2SchemaHandlerGroup(v2TemplateLoader, objectMapper, contextParameterProcessor)), + new V2SchemaHandlerGroup(v2TemplateLoader, objectMapper, contextParameterProcessor, artifactResolver)), new PipelineTemplateErrorHandler(), registry ) diff --git a/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/transform/V2DefaultVariableAssignmentTransformTest.groovy b/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/transform/V2DefaultVariableAssignmentTransformTest.groovy new file mode 100644 index 0000000000..7e65d9bfb5 --- /dev/null +++ b/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/graph/v2/transform/V2DefaultVariableAssignmentTransformTest.groovy @@ 
-0,0 +1,43 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.pipelinetemplate.v1schema.graph.v2.transform + +import com.netflix.spinnaker.orca.pipelinetemplate.v2schema.model.V2PipelineTemplate +import spock.lang.Specification + +class V2DefaultVariableAssignmentTransformTest extends Specification { + def "configurationVariables filters config vars not present in template vars"() { + when: + def actualConfigVars = V2DefaultVariableAssignmentTransform.configurationVariables(templateVars, configVars) + + then: + actualConfigVars == expectedVars + + where: + templateVars | configVars | expectedVars + [newTemplateVar("wait")] | [wait: "OK"] | [wait: "OK"] + [] | [wait: "OK"] | [:] + [newTemplateVar("wait")] | [wait: "OK", alsoWait: "NO"] | [wait: "OK"] + [newTemplateVar("wait")] | [:] | [:] + } + + V2PipelineTemplate.Variable newTemplateVar(String name) { + def var = new V2PipelineTemplate.Variable() + var.name = name + return var + } +} diff --git a/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/render/tags/ModuleTagSpec.groovy b/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/render/tags/ModuleTagSpec.groovy index 54d708528c..ca6c54f3f6 100644 --- a/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/render/tags/ModuleTagSpec.groovy +++ 
b/orca-pipelinetemplate/src/test/groovy/com/netflix/spinnaker/orca/pipelinetemplate/v1schema/render/tags/ModuleTagSpec.groovy @@ -58,9 +58,149 @@ class ModuleTagSpec extends Specification { context.variables.put("m", [myKey: 'myValue']) when: - def result = renderer.render("{% module myModule myOtherVar=world, subject=testerName, job=trigger.job, concat=m['my' + 'Key'], filtered=trigger.nonExist|default('hello', True) %}", context) + def result = renderer.render("{% module myModule myOtherVar=world, subject=testerName, job=trigger.job, concat=m['my' + 'Key'], filtered=trigger.nonExist|default('hello', true) %}", context) then: result == 'hello world, Mr. Tester Testington. You triggered myJob myValue hello' } + + def 'should correctly fall back to defaults defined in variable'() { + given: + PipelineTemplate pipelineTemplate = new PipelineTemplate( + modules: [ + new TemplateModule( + id: 'myModule', + variables: [ + [name: 'myVar'] as NamedHashMap, + ], + definition: "{{ myVar }}") + ] + ) + RenderContext context = new DefaultRenderContext('myApp', pipelineTemplate, [job: 'myJob', buildNumber: 1234]) + context.variables.put("trigger", [myKey: 'triggerValue', otherKey: 'Key']) + context.variables.put("overrides", [myKey: 'overrideValue']) + + when: + def result = renderer.render("{% module myModule myVar=trigger.myKey|default(overrides['my'+'Key'])|default('1') %}", context) + + then: + result == 'triggerValue' + + when: + result = renderer.render("{% module myModule myVar=trigger.nonExistentKey|default(overrides['my'+'Key'])|default('1') %}", context) + + then: + result == 'overrideValue' + + when: + result = renderer.render("{% module myModule myVar=trigger.nonExistentKey|default(overrides['my'+'NonExistentKey'])|default('1') %}", context) + + then: + result == '1' + + when: + result = renderer.render("{% module myModule myVar=trigger.nonExistentKey|default(overrides['my'+trigger.otherKey])|default('1') %}", context) + + then: + result == 'overrideValue' + + 
when: + result = renderer.render("{% module myModule myVar=trigger.nonExistentKey|default(overrides['my'+trigger.missingKey])|default('1') %}", context) + + then: + result == '1' + } + + def 'should correctly fall back to defaults defined in template'() { + given: + PipelineTemplate pipelineTemplate = new PipelineTemplate( + modules: [ + new TemplateModule( + id: 'myModule', + variables: [ + [name: 'myVar'] as NamedHashMap, + [name: 'overrides'] as NamedHashMap, + ], + definition: "{{ trigger.noKey|default(trigger.stillNoKey)|default(overrides['myKey']) }}") + ] + ) + RenderContext context = new DefaultRenderContext('myApp', pipelineTemplate, [job: 'myJob', buildNumber: 1234]) + context.variables.put("trigger", [myKey: 'triggerValue', otherKey: 'Key']) + context.variables.put("overrides", [myKey: 'overrideValue']) + + when: + def result = renderer.render("{% module myModule myVar='' %}", context) + + then: + result == 'overrideValue' + } + + def 'can access one template variable in the key of another'() { + given: + PipelineTemplate pipelineTemplate = new PipelineTemplate( + modules: [ + new TemplateModule( + id: 'myModule', + variables: [ + [name: 'myVar'] as NamedHashMap, + [name: 'overrides'] as NamedHashMap, + ], + definition: "{{ trigger.noKey|default(trigger.stillNoKey)|default(overrides['my' + trigger.otherKey]) }}") + ] + ) + RenderContext context = new DefaultRenderContext('myApp', pipelineTemplate, [job: 'myJob', buildNumber: 1234]) + context.variables.put("trigger", [myKey: 'triggerValue', otherKey: 'Key']) + context.variables.put("overrides", [myKey: 'overrideValue']) + + when: + def result = renderer.render("{% module myModule myVar='' %}", context) + + then: + result == 'overrideValue' + } + + def 'can handle a null context variable in the template'() { + given: + PipelineTemplate pipelineTemplate = new PipelineTemplate( + modules: [ + new TemplateModule( + id: 'myModule', + variables: [ + [name: 'overrides'] as NamedHashMap, + ], + definition: "{{ 
trigger.noKey.noOtherKey|default(overrides['abc'+trigger.none.nope])|default('1') }}") + ] + ) + RenderContext context = new DefaultRenderContext('myApp', pipelineTemplate, [job: 'myJob', buildNumber: 1234]) + context.variables.put("overrides", [myKey: 'overrideValue']) + + when: + def result = renderer.render("{% module myModule %}", context) + + then: + result == '1' + } + + def 'can handle a null context variable in another variable'() { + given: + PipelineTemplate pipelineTemplate = new PipelineTemplate( + modules: [ + new TemplateModule( + id: 'myModule', + variables: [ + [name: 'myVar'] as NamedHashMap, + [name: 'overrides'] as NamedHashMap, + ], + definition: "{{ myVar }}") + ] + ) + RenderContext context = new DefaultRenderContext('myApp', pipelineTemplate, [job: 'myJob', buildNumber: 1234]) + context.variables.put("overrides", [myKey: 'overrideValue']) + + when: + def result = renderer.render("{% module myModule myVar=trigger.noKey.noOtherKey|default(overrides['abc'+trigger.none.nope])|default('1') %}", context) + + then: + result == '1' + } } diff --git a/orca-queue-redis/src/main/kotlin/com/netflix/spinnaker/config/RedisOrcaQueueConfiguration.kt b/orca-queue-redis/src/main/kotlin/com/netflix/spinnaker/config/RedisOrcaQueueConfiguration.kt index ed3e27b33a..5a78452554 100644 --- a/orca-queue-redis/src/main/kotlin/com/netflix/spinnaker/config/RedisOrcaQueueConfiguration.kt +++ b/orca-queue-redis/src/main/kotlin/com/netflix/spinnaker/config/RedisOrcaQueueConfiguration.kt @@ -19,9 +19,12 @@ import com.fasterxml.jackson.databind.DeserializationFeature.FAIL_ON_UNKNOWN_PRO import com.fasterxml.jackson.databind.ObjectMapper import com.fasterxml.jackson.databind.module.SimpleModule import com.fasterxml.jackson.module.kotlin.KotlinModule +import com.netflix.spinnaker.orca.Task +import com.netflix.spinnaker.orca.TaskResolver import com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType import 
com.netflix.spinnaker.orca.q.redis.migration.ExecutionTypeDeserializer import com.netflix.spinnaker.orca.q.redis.migration.OrcaToKeikoSerializationMigrator +import com.netflix.spinnaker.orca.q.redis.migration.TaskTypeDeserializer import com.netflix.spinnaker.orca.q.redis.pending.RedisPendingExecutionService import com.netflix.spinnaker.q.metrics.EventPublisher import com.netflix.spinnaker.q.migration.SerializationMigrator @@ -41,13 +44,16 @@ import java.util.* @EnableConfigurationProperties(ObjectMapperSubtypeProperties::class) class RedisOrcaQueueConfiguration : RedisQueueConfiguration() { - @Autowired fun redisQueueObjectMapper(mapper: ObjectMapper, - objectMapperSubtypeProperties: ObjectMapperSubtypeProperties) { + @Autowired + fun redisQueueObjectMapper(mapper: ObjectMapper, + objectMapperSubtypeProperties: ObjectMapperSubtypeProperties, + taskResolver: TaskResolver) { mapper.apply { registerModule(KotlinModule()) registerModule( SimpleModule() .addDeserializer(ExecutionType::class.java, ExecutionTypeDeserializer()) + .addDeserializer(Class::class.java, TaskTypeDeserializer(taskResolver)) ) disable(FAIL_ON_UNKNOWN_PROPERTIES) diff --git a/orca-queue-redis/src/main/kotlin/com/netflix/spinnaker/orca/q/redis/migration/TaskTypeDeserializer.kt b/orca-queue-redis/src/main/kotlin/com/netflix/spinnaker/orca/q/redis/migration/TaskTypeDeserializer.kt new file mode 100644 index 0000000000..9df350bfb5 --- /dev/null +++ b/orca-queue-redis/src/main/kotlin/com/netflix/spinnaker/orca/q/redis/migration/TaskTypeDeserializer.kt @@ -0,0 +1,35 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.q.redis.migration + +import com.fasterxml.jackson.core.JsonParser +import com.fasterxml.jackson.databind.DeserializationContext +import com.fasterxml.jackson.databind.JsonDeserializer +import com.netflix.spinnaker.orca.TaskResolver + +class TaskTypeDeserializer( + private val taskResolver: TaskResolver +) : JsonDeserializer<Class<*>>() { + override fun deserialize( + p: JsonParser, + ctxt: DeserializationContext + ) = if (p.parsingContext.currentName == "taskType") { + taskResolver.getTaskClass(p.valueAsString) + } else { + Class.forName(p.valueAsString) + } +} diff --git a/orca-queue-redis/src/test/kotlin/com/netflix/spinnaker/orca/q/redis/migration/TaskTypeDeserializerTest.kt b/orca-queue-redis/src/test/kotlin/com/netflix/spinnaker/orca/q/redis/migration/TaskTypeDeserializerTest.kt new file mode 100644 index 0000000000..d2ddf70df4 --- /dev/null +++ b/orca-queue-redis/src/test/kotlin/com/netflix/spinnaker/orca/q/redis/migration/TaskTypeDeserializerTest.kt @@ -0,0 +1,66 @@ +/* + * Copyright 2019 Netflix, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca.q.redis.migration + +import com.fasterxml.jackson.databind.ObjectMapper +import com.fasterxml.jackson.databind.module.SimpleModule +import com.fasterxml.jackson.module.kotlin.KotlinModule +import com.netflix.spinnaker.orca.Task +import com.netflix.spinnaker.orca.TaskResolver +import com.netflix.spinnaker.orca.TaskResult +import com.netflix.spinnaker.orca.pipeline.model.Stage +import org.assertj.core.api.Assertions +import org.jetbrains.spek.api.Spek +import org.jetbrains.spek.api.dsl.describe + +object TaskTypeDeserializerTest : Spek({ + val taskResolver = TaskResolver(listOf(DummyTask()), false) + + val objectMapper = ObjectMapper().apply { + registerModule(KotlinModule()) + registerModule( + SimpleModule() + .addDeserializer(Class::class.java, TaskTypeDeserializer(taskResolver)) + ) + } + + describe("when 'taskType' is deserialized") { + val canonicalJson = """{ "taskType" : "${DummyTask::class.java.canonicalName}" }""" + Assertions.assertThat( + objectMapper.readValue(canonicalJson, Target::class.java).taskType + ).isEqualTo(DummyTask::class.java) + + val aliasedJson = """{ "taskType" : "anotherTaskAlias" }""" + Assertions.assertThat( + objectMapper.readValue(aliasedJson, Target::class.java).taskType + ).isEqualTo(DummyTask::class.java) + + val notTaskTypeJson = """{ "notTaskType" : "java.lang.String" }""" + Assertions.assertThat( + objectMapper.readValue(notTaskTypeJson, Target::class.java).notTaskType + ).isEqualTo(String::class.java) + } +}) + +class Target(val taskType: Class<*>?, val notTaskType: Class<*>?) 
+ +@Task.Aliases("anotherTaskAlias") +class DummyTask : Task { + override fun execute(stage: Stage): TaskResult { + return TaskResult.SUCCEEDED + } +} \ No newline at end of file diff --git a/orca-queue-tck/src/main/kotlin/com/netflix/spinnaker/orca/q/QueueIntegrationTest.kt b/orca-queue-tck/src/main/kotlin/com/netflix/spinnaker/orca/q/QueueIntegrationTest.kt index 52183cbcc6..13af4a64e6 100644 --- a/orca-queue-tck/src/main/kotlin/com/netflix/spinnaker/orca/q/QueueIntegrationTest.kt +++ b/orca-queue-tck/src/main/kotlin/com/netflix/spinnaker/orca/q/QueueIntegrationTest.kt @@ -142,7 +142,7 @@ class QueueIntegrationTest { repository.store(pipeline) whenever(dummyTask.timeout) doReturn 2000L - whenever(dummyTask.execute(any())) doReturn TaskResult(RUNNING) doReturn TaskResult.SUCCEEDED + whenever(dummyTask.execute(any())) doReturn TaskResult.RUNNING doReturn TaskResult.SUCCEEDED context.runToCompletion(pipeline, runner::start, repository) @@ -264,7 +264,7 @@ class QueueIntegrationTest { } repository.store(pipeline) - whenever(dummyTask.execute(any())) doReturn TaskResult(TERMINAL) + whenever(dummyTask.execute(any())) doReturn TaskResult.ofStatus(TERMINAL) context.runToCompletion(pipeline, runner::start, repository) @@ -304,7 +304,7 @@ class QueueIntegrationTest { repository.store(pipeline) whenever(dummyTask.timeout) doReturn 2000L - whenever(dummyTask.execute(argThat { refId == "2a1" })) doReturn TaskResult(TERMINAL) + whenever(dummyTask.execute(argThat { refId == "2a1" })) doReturn TaskResult.ofStatus(TERMINAL) whenever(dummyTask.execute(argThat { refId != "2a1" })) doReturn TaskResult.SUCCEEDED context.runToCompletion(pipeline, runner::start, repository) @@ -352,7 +352,7 @@ class QueueIntegrationTest { repository.store(pipeline) whenever(dummyTask.timeout) doReturn 2000L - whenever(dummyTask.execute(argThat { refId == "2a1" })) doReturn TaskResult(TERMINAL) + whenever(dummyTask.execute(argThat { refId == "2a1" })) doReturn TaskResult.ofStatus(TERMINAL) 
whenever(dummyTask.execute(argThat { refId != "2a1" })) doReturn TaskResult.SUCCEEDED context.runToCompletion(pipeline, runner::start, repository) @@ -407,7 +407,7 @@ class QueueIntegrationTest { repository.store(pipeline) whenever(dummyTask.timeout) doReturn 2000L - whenever(dummyTask.execute(argThat { refId == "2a1" })) doReturn TaskResult(TERMINAL) + whenever(dummyTask.execute(argThat { refId == "2a1" })) doReturn TaskResult.ofStatus(TERMINAL) whenever(dummyTask.execute(argThat { refId != "2a1" })) doReturn TaskResult.SUCCEEDED context.runToCompletion(pipeline, runner::start, repository) @@ -445,7 +445,7 @@ class QueueIntegrationTest { repository.store(childPipeline) repository.store(parentPipeline) - whenever(dummyTask.execute(argThat { refId == "1" })) doReturn TaskResult(CANCELED) + whenever(dummyTask.execute(argThat { refId == "1" })) doReturn TaskResult.ofStatus(CANCELED) context.runParentToCompletion(parentPipeline, childPipeline, runner::start, repository) repository.retrieve(PIPELINE, parentPipeline.id).apply { @@ -481,7 +481,7 @@ class QueueIntegrationTest { } repository.store(pipeline) - whenever(dummyTask.execute(argThat { refId == "2" })) doReturn TaskResult(TERMINAL) + whenever(dummyTask.execute(argThat { refId == "2" })) doReturn TaskResult.ofStatus(TERMINAL) context.runToCompletion(pipeline, runner::start, repository) @@ -602,7 +602,7 @@ class QueueIntegrationTest { repository.store(pipeline) whenever(dummyTask.timeout) doReturn 2000L - whenever(dummyTask.execute(any())) doReturn TaskResult(SUCCEEDED, mapOf("output" to "foo")) + whenever(dummyTask.execute(any())) doReturn TaskResult.builder(SUCCEEDED).context(mapOf("output" to "foo")).build() context.runToCompletion(pipeline, runner::start, repository) @@ -695,7 +695,7 @@ class QueueIntegrationTest { whenever(dummyTask.execute(any())) doAnswer { val stage = it.arguments.first() as Stage if (stage.refId == "1") { - TaskResult(SUCCEEDED, emptyMap(), mapOf("foo" to false)) + 
TaskResult.builder(SUCCEEDED).outputs(mapOf("foo" to false)).build() } else { TaskResult.SUCCEEDED } @@ -752,7 +752,7 @@ class QueueIntegrationTest { whenever(dummyTask.execute(any())) doAnswer { val stage = it.arguments.first() as Stage if (stage.refId == "1") { - TaskResult(SUCCEEDED, emptyMap(), mapOf("foo" to false)) + TaskResult.builder(SUCCEEDED).outputs(mapOf("foo" to false)).build() } else { TaskResult.SUCCEEDED } @@ -789,7 +789,7 @@ class QueueIntegrationTest { whenever(dummyTask.execute(any())) doAnswer { val stage = it.arguments.first() as Stage if (stage.refId == "1") { - TaskResult(TERMINAL) + TaskResult.ofStatus(TERMINAL) } else { TaskResult.SUCCEEDED } diff --git a/orca-queue/orca-queue.gradle b/orca-queue/orca-queue.gradle index c53e1ac5cc..01ed334c10 100644 --- a/orca-queue/orca-queue.gradle +++ b/orca-queue/orca-queue.gradle @@ -25,6 +25,7 @@ dependencies { compile "org.funktionale:funktionale-partials:1.2" compile spinnaker.dependency("logstashEncoder") compile "com.netflix.spinnaker.keiko:keiko-spring:$keikoVersion" + compile "javax.ws.rs:jsr311-api:1.1.1" testCompile project(":orca-test-kotlin") testCompile project(":orca-queue-tck") diff --git a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/admin/HydrateQueueCommand.kt b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/admin/HydrateQueueCommand.kt index 2fe8ef4d7a..3db8c137dc 100644 --- a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/admin/HydrateQueueCommand.kt +++ b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/admin/HydrateQueueCommand.kt @@ -19,6 +19,7 @@ import com.fasterxml.jackson.annotation.JsonIgnore import com.netflix.spinnaker.orca.ExecutionStatus.NOT_STARTED import com.netflix.spinnaker.orca.ExecutionStatus.RUNNING import com.netflix.spinnaker.orca.RetryableTask +import com.netflix.spinnaker.orca.TaskResolver import com.netflix.spinnaker.orca.ext.afterStages import com.netflix.spinnaker.orca.ext.allAfterStagesSuccessful import 
com.netflix.spinnaker.orca.ext.allBeforeStagesSuccessful @@ -55,7 +56,8 @@ import kotlin.reflect.full.memberProperties @Component class HydrateQueueCommand( private val queue: Queue, - private val executionRepository: ExecutionRepository + private val executionRepository: ExecutionRepository, + private val taskResolver: TaskResolver ) : (HydrateQueueInput) -> HydrateQueueOutput { private val log = LoggerFactory.getLogger(javaClass) @@ -258,7 +260,7 @@ class HydrateQueueCommand( @Suppress("UNCHECKED_CAST") private val com.netflix.spinnaker.orca.pipeline.model.Task.type - get() = Class.forName(implementingClass) as Class + get() = taskResolver.getTaskClass(implementingClass) private fun Task.isRetryable(): Boolean = RetryableTask::class.java.isAssignableFrom(type) diff --git a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/audit/ExecutionTrackingMessageHandlerPostProcessor.kt b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/audit/ExecutionTrackingMessageHandlerPostProcessor.kt index 46494bd2e0..97fb00b463 100644 --- a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/audit/ExecutionTrackingMessageHandlerPostProcessor.kt +++ b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/audit/ExecutionTrackingMessageHandlerPostProcessor.kt @@ -16,10 +16,13 @@ package com.netflix.spinnaker.orca.q.audit +import com.netflix.spinnaker.orca.q.ApplicationAware import com.netflix.spinnaker.orca.q.ExecutionLevel +import com.netflix.spinnaker.orca.q.StageLevel +import com.netflix.spinnaker.orca.q.TaskLevel import com.netflix.spinnaker.q.Message import com.netflix.spinnaker.q.MessageHandler -import com.netflix.spinnaker.security.AuthenticatedRequest.SPINNAKER_EXECUTION_ID +import com.netflix.spinnaker.security.AuthenticatedRequest import org.slf4j.MDC import org.springframework.beans.factory.config.BeanPostProcessor import org.springframework.stereotype.Component @@ -41,12 +44,32 @@ class ExecutionTrackingMessageHandlerPostProcessor : BeanPostProcessor { ) : 
MessageHandler by delegate { override fun invoke(message: Message) { try { - if (message is ExecutionLevel) { - MDC.put(SPINNAKER_EXECUTION_ID, message.executionId) + when(message) { + is TaskLevel -> { + MDC.put( + AuthenticatedRequest.Header.EXECUTION_ID.header, + "${message.executionId}:${message.stageId}:${message.taskId}") + } + is StageLevel -> { + MDC.put( + AuthenticatedRequest.Header.EXECUTION_ID.header, + "${message.executionId}:${message.stageId}") + } + is ExecutionLevel -> { + MDC.put( + AuthenticatedRequest.Header.EXECUTION_ID.header, + message.executionId) + } } + + if (message is ApplicationAware) { + MDC.put(AuthenticatedRequest.Header.APPLICATION.header, message.application) + } + delegate.invoke(message) } finally { - MDC.remove(SPINNAKER_EXECUTION_ID) + MDC.remove(AuthenticatedRequest.Header.EXECUTION_ID.header) + MDC.remove(AuthenticatedRequest.Header.APPLICATION.header) } } } diff --git a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/AuthenticationAware.kt b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/AuthenticationAware.kt index 2c3dac8913..e4a0962685 100644 --- a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/AuthenticationAware.kt +++ b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/AuthenticationAware.kt @@ -44,6 +44,7 @@ interface AuthenticationAware { currentUser.username, execution.type.name.toLowerCase(), execution.id, + this.id, execution.origin )) AuthenticatedRequest.propagate(block, false, currentUser).call() diff --git a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/CancelStageHandler.kt b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/CancelStageHandler.kt index 353f22f865..4cd46e04d8 100644 --- a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/CancelStageHandler.kt +++ b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/CancelStageHandler.kt @@ -18,7 +18,7 @@ package 
com.netflix.spinnaker.orca.q.handler import com.netflix.spinnaker.orca.CancellableStage import com.netflix.spinnaker.orca.ExecutionStatus.RUNNING -import com.netflix.spinnaker.orca.Task +import com.netflix.spinnaker.orca.TaskResolver import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilderFactory import com.netflix.spinnaker.orca.pipeline.model.Execution.ExecutionType.* import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository @@ -35,7 +35,9 @@ class CancelStageHandler( override val queue: Queue, override val repository: ExecutionRepository, override val stageDefinitionBuilderFactory: StageDefinitionBuilderFactory, - @Qualifier("messageHandlerPool") private val executor: Executor + + @Qualifier("messageHandlerPool") private val executor: Executor, + private val taskResolver: TaskResolver ) : OrcaMessageHandler, StageBuilderAware { override val messageType = CancelStage::class.java @@ -101,5 +103,5 @@ class CancelStageHandler( @Suppress("UNCHECKED_CAST") private val com.netflix.spinnaker.orca.pipeline.model.Task.type - get() = Class.forName(implementingClass) as Class + get() = taskResolver.getTaskClass(implementingClass) } diff --git a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/CompleteStageHandler.kt b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/CompleteStageHandler.kt index c7519bc3bf..2607382500 100644 --- a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/CompleteStageHandler.kt +++ b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/CompleteStageHandler.kt @@ -17,10 +17,11 @@ package com.netflix.spinnaker.orca.q.handler import com.netflix.spectator.api.Registry -import com.netflix.spectator.api.histogram.BucketCounter +import com.netflix.spectator.api.histogram.PercentileTimer import com.netflix.spinnaker.orca.ExecutionStatus import com.netflix.spinnaker.orca.ExecutionStatus.* import com.netflix.spinnaker.orca.events.StageComplete +import 
com.netflix.spinnaker.orca.exceptions.ExceptionHandler import com.netflix.spinnaker.orca.ext.* import com.netflix.spinnaker.orca.pipeline.StageDefinitionBuilderFactory import com.netflix.spinnaker.orca.pipeline.graph.StageGraphBuilder @@ -28,6 +29,7 @@ import com.netflix.spinnaker.orca.pipeline.model.Stage import com.netflix.spinnaker.orca.pipeline.model.Task import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor +import com.netflix.spinnaker.orca.pipeline.util.StageNavigator import com.netflix.spinnaker.orca.q.* import com.netflix.spinnaker.q.Queue import org.springframework.beans.factory.annotation.Qualifier @@ -40,12 +42,14 @@ import java.util.concurrent.TimeUnit class CompleteStageHandler( override val queue: Queue, override val repository: ExecutionRepository, + override val stageNavigator: StageNavigator, @Qualifier("queueEventPublisher") private val publisher: ApplicationEventPublisher, private val clock: Clock, + private val exceptionHandlers: List, override val contextParameterProcessor: ContextParameterProcessor, private val registry: Registry, override val stageDefinitionBuilderFactory: StageDefinitionBuilderFactory -) : OrcaMessageHandler, StageBuilderAware, ExpressionAware { +) : OrcaMessageHandler, StageBuilderAware, ExpressionAware, AuthenticationAware { override fun handle(message: CompleteStage) { message.withStage { stage -> @@ -62,7 +66,9 @@ class CompleteStageHandler( // check to see if this stage has any unplanned synthetic after stages var afterStages = stage.firstAfterStages() if (afterStages.isEmpty()) { - stage.planAfterStages() + stage.withAuth { + stage.planAfterStages() + } afterStages = stage.firstAfterStages() } if (afterStages.isNotEmpty() && afterStages.any { it.status == NOT_STARTED }) { @@ -76,7 +82,13 @@ class CompleteStageHandler( status = SKIPPED } } else if (status.isFailure) { - if (stage.planOnFailureStages()) { + var 
hasOnFailureStages = false + + stage.withAuth { + hasOnFailureStages = stage.planOnFailureStages() + } + + if (hasOnFailureStages) { stage.firstAfterStages().forEach { queue.push(StartStage(it)) } @@ -87,7 +99,10 @@ class CompleteStageHandler( stage.status = status stage.endTime = clock.millis() } catch (e: Exception) { - log.error("Failed to construct after stages for $stage.id", e) + log.error("Failed to construct after stages for ${stage.name} ${stage.id}", e) + + val exceptionDetails = exceptionHandlers.shouldRetry(e, stage.name + ":ConstructAfterStages") + stage.context["exception"] = exceptionDetails stage.status = TERMINAL stage.endTime = clock.millis() } @@ -134,20 +149,11 @@ class CompleteStageHandler( } ?: id } - BucketCounter - .get(registry, id, { v -> bucketDuration(v) }) - .record((stage.endTime ?: clock.millis()) - (stage.startTime ?: 0)) + PercentileTimer + .get(registry, id) + .record((stage.endTime ?: clock.millis()) - (stage.startTime ?: 0), TimeUnit.MILLISECONDS) } - private fun bucketDuration(duration: Long) = - when { - duration > TimeUnit.MINUTES.toMillis(60) -> "gt60m" - duration > TimeUnit.MINUTES.toMillis(30) -> "gt30m" - duration > TimeUnit.MINUTES.toMillis(15) -> "gt15m" - duration > TimeUnit.MINUTES.toMillis(5) -> "gt5m" - else -> "lt5m" - } - override val messageType = CompleteStage::class.java /** @@ -221,7 +227,7 @@ class CompleteStageHandler( val afterStageStatuses = afterStages().map(Stage::getStatus) return when { allStatuses.isEmpty() -> NOT_STARTED - allStatuses.contains(TERMINAL) -> TERMINAL + allStatuses.contains(TERMINAL) -> failureStatus() // handle configured 'if stage fails' options correctly allStatuses.contains(STOPPED) -> STOPPED allStatuses.contains(CANCELED) -> CANCELED allStatuses.contains(FAILED_CONTINUE) -> FAILED_CONTINUE diff --git a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/ExpressionAware.kt b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/ExpressionAware.kt index 
41228eba80..70bec8c83d 100644 --- a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/ExpressionAware.kt +++ b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/ExpressionAware.kt @@ -16,7 +16,10 @@ package com.netflix.spinnaker.orca.q.handler +import com.fasterxml.jackson.databind.ObjectMapper +import com.fasterxml.jackson.module.kotlin.convertValue import com.netflix.spinnaker.orca.exceptions.ExceptionHandler +import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper import com.netflix.spinnaker.orca.pipeline.expressions.ExpressionEvaluationSummary import com.netflix.spinnaker.orca.pipeline.expressions.PipelineExpressionEvaluator import com.netflix.spinnaker.orca.pipeline.model.Execution @@ -33,6 +36,11 @@ import org.slf4j.LoggerFactory interface ExpressionAware { val contextParameterProcessor: ContextParameterProcessor + + companion object { + val mapper: ObjectMapper = OrcaObjectMapper.newInstance() + } + val log: Logger get() = LoggerFactory.getLogger(javaClass) @@ -69,12 +77,19 @@ interface ExpressionAware { // context. 
Otherwise, it's very confusing in the UI because the value is clearly correctly evaluated but // the error is still shown if (hasFailedExpressions()) { - val failedExpressions = this.context[PipelineExpressionEvaluator.SUMMARY] as MutableMap<String, Any> + try { + val failedExpressions = this.context[PipelineExpressionEvaluator.SUMMARY] as MutableMap<String, Any> + + val keysToRemove: List<String> = failedExpressions.keys.filter { expressionKey -> + (evalSummary.wasAttempted(expressionKey) && !evalSummary.hasFailed(expressionKey)) + }.toList() - failedExpressions.keys.forEach { expressionKey -> - if (evalSummary.wasAttempted(expressionKey) && !evalSummary.hasFailed(expressionKey)) { + keysToRemove.forEach { expressionKey -> failedExpressions.remove(expressionKey) } + } catch (e: Exception) { + // Best effort clean up, if it fails just log the error and leave the context be + log.error("Failed to remove stale expression errors", e) } } @@ -121,9 +136,13 @@ interface ExpressionAware { ) ) + // TODO (mvulfson): Ideally, we opt out of this method and use ContextParameterProcessor.buildExecutionContext + // but that doesn't generate StageContext preventing us from doing recursive lookups... 
An investigation for another day private fun StageContext.augmentContext(execution: Execution): StageContext = if (execution.type == PIPELINE) { - this + mapOf("trigger" to execution.trigger, "execution" to execution) + this + mapOf( + "trigger" to mapper.convertValue<Map<String, Any>>(execution.trigger), + "execution" to execution) } else { this } diff --git a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/RescheduleExecutionHandler.kt b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/RescheduleExecutionHandler.kt index d7475cfa4b..d3b37f882a 100644 --- a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/RescheduleExecutionHandler.kt +++ b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/RescheduleExecutionHandler.kt @@ -17,7 +17,7 @@ package com.netflix.spinnaker.orca.q.handler import com.netflix.spinnaker.orca.ExecutionStatus -import com.netflix.spinnaker.orca.Task +import com.netflix.spinnaker.orca.TaskResolver import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository import com.netflix.spinnaker.orca.q.RescheduleExecution import com.netflix.spinnaker.orca.q.RunTask @@ -27,7 +27,8 @@ import org.springframework.stereotype.Component @Component class RescheduleExecutionHandler( override val queue: Queue, - override val repository: ExecutionRepository + override val repository: ExecutionRepository, + private val taskResolver: TaskResolver ) : OrcaMessageHandler<RescheduleExecution> { override val messageType = RescheduleExecution::class.java @@ -45,7 +46,7 @@ class RescheduleExecutionHandler( queue.reschedule(RunTask(message, stage.id, it.id, - Class.forName(it.implementingClass) as Class<out Task> + taskResolver.getTaskClass(it.implementingClass) )) } } diff --git a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/ResumeTaskHandler.kt b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/ResumeTaskHandler.kt index 35a7e631f0..b47e8df41a 100644 --- 
a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/ResumeTaskHandler.kt +++ b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/ResumeTaskHandler.kt @@ -18,7 +18,7 @@ package com.netflix.spinnaker.orca.q.handler import com.netflix.spinnaker.orca.ExecutionStatus.PAUSED import com.netflix.spinnaker.orca.ExecutionStatus.RUNNING -import com.netflix.spinnaker.orca.Task +import com.netflix.spinnaker.orca.TaskResolver import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository import com.netflix.spinnaker.orca.q.ResumeTask import com.netflix.spinnaker.orca.q.RunTask @@ -28,7 +28,8 @@ import org.springframework.stereotype.Component @Component class ResumeTaskHandler( override val queue: Queue, - override val repository: ExecutionRepository + override val repository: ExecutionRepository, + private val taskResolver: TaskResolver ) : OrcaMessageHandler { override val messageType = ResumeTask::class.java @@ -48,5 +49,5 @@ class ResumeTaskHandler( @Suppress("UNCHECKED_CAST") private val com.netflix.spinnaker.orca.pipeline.model.Task.type - get() = Class.forName(implementingClass) as Class + get() = taskResolver.getTaskClass(implementingClass) } diff --git a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/RunTaskHandler.kt b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/RunTaskHandler.kt index e11a7ba300..a856fe41a3 100644 --- a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/RunTaskHandler.kt +++ b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/RunTaskHandler.kt @@ -41,7 +41,7 @@ import com.netflix.spinnaker.orca.time.toDuration import com.netflix.spinnaker.orca.time.toInstant import com.netflix.spinnaker.q.Message import com.netflix.spinnaker.q.Queue -import org.apache.commons.lang.time.DurationFormatUtils +import org.apache.commons.lang3.time.DurationFormatUtils import org.slf4j.MDC import org.springframework.stereotype.Component import java.time.Clock @@ -222,7 
+222,7 @@ class RunTaskHandler( val durationString = formatTimeout(elapsedTime.toMillis()) val msg = StringBuilder("${javaClass.simpleName} of stage ${stage.name} timed out after $durationString. ") msg.append("pausedDuration: ${formatTimeout(pausedDuration.toMillis())}, ") - msg.append("elapsedTime: ${formatTimeout(elapsedTime.toMillis())},") + msg.append("elapsedTime: ${formatTimeout(elapsedTime.toMillis())}, ") msg.append("timeoutValue: ${formatTimeout(actualTimeout.toMillis())}") log.warn(msg.toString()) @@ -305,7 +305,6 @@ class RunTaskHandler( private fun Stage.withLoggingContext(taskModel: com.netflix.spinnaker.orca.pipeline.model.Task, block: () -> Unit) { try { - MDC.put("application", this.execution.application) MDC.put("stageType", type) MDC.put("taskType", taskModel.implementingClass) @@ -318,7 +317,6 @@ class RunTaskHandler( MDC.remove("stageType") MDC.remove("taskType") MDC.remove("taskStartTime") - MDC.remove("application") } } } diff --git a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/StartTaskHandler.kt b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/StartTaskHandler.kt index 2bed925a49..80d7c306c3 100644 --- a/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/StartTaskHandler.kt +++ b/orca-queue/src/main/kotlin/com/netflix/spinnaker/orca/q/handler/StartTaskHandler.kt @@ -17,7 +17,7 @@ package com.netflix.spinnaker.orca.q.handler import com.netflix.spinnaker.orca.ExecutionStatus.RUNNING -import com.netflix.spinnaker.orca.Task +import com.netflix.spinnaker.orca.TaskResolver import com.netflix.spinnaker.orca.events.TaskStarted import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor @@ -35,6 +35,7 @@ class StartTaskHandler( override val repository: ExecutionRepository, override val contextParameterProcessor: ContextParameterProcessor, @Qualifier("queueEventPublisher") private val publisher: 
ApplicationEventPublisher, + private val taskResolver: TaskResolver, private val clock: Clock ) : OrcaMessageHandler, ExpressionAware { @@ -55,5 +56,5 @@ class StartTaskHandler( @Suppress("UNCHECKED_CAST") private val com.netflix.spinnaker.orca.pipeline.model.Task.type - get() = Class.forName(implementingClass) as Class + get() = taskResolver.getTaskClass(implementingClass) } diff --git a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/admin/HydrateQueueCommandTest.kt b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/admin/HydrateQueueCommandTest.kt index 9fdb964697..1c2d16f322 100644 --- a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/admin/HydrateQueueCommandTest.kt +++ b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/admin/HydrateQueueCommandTest.kt @@ -19,6 +19,7 @@ import com.netflix.spinnaker.orca.ExecutionStatus import com.netflix.spinnaker.orca.ExecutionStatus.NOT_STARTED import com.netflix.spinnaker.orca.ExecutionStatus.RUNNING import com.netflix.spinnaker.orca.ExecutionStatus.SUCCEEDED +import com.netflix.spinnaker.orca.TaskResolver import com.netflix.spinnaker.orca.ext.beforeStages import com.netflix.spinnaker.orca.fixture.pipeline import com.netflix.spinnaker.orca.fixture.stage @@ -65,9 +66,10 @@ object HydrateQueueCommandTest : SubjectSpek({ val queue: Queue = mock() val repository: ExecutionRepository = mock() + val taskResolver = TaskResolver(emptyList()) subject(CachingMode.GROUP) { - HydrateQueueCommand(queue, repository) + HydrateQueueCommand(queue, repository, taskResolver) } fun resetMocks() = reset(queue, repository) diff --git a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/CancelStageHandlerTest.kt b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/CancelStageHandlerTest.kt index 7cee7c8781..8e4fb03e0f 100644 --- a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/CancelStageHandlerTest.kt +++ 
b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/CancelStageHandlerTest.kt @@ -19,6 +19,8 @@ package com.netflix.spinnaker.orca.q.handler import com.google.common.util.concurrent.MoreExecutors import com.netflix.spinnaker.orca.CancellableStage import com.netflix.spinnaker.orca.ExecutionStatus.* +import com.netflix.spinnaker.orca.StageResolver +import com.netflix.spinnaker.orca.TaskResolver import com.netflix.spinnaker.orca.fixture.pipeline import com.netflix.spinnaker.orca.fixture.stage import com.netflix.spinnaker.orca.pipeline.DefaultStageDefinitionBuilderFactory @@ -39,14 +41,18 @@ object CancelStageHandlerTest : SubjectSpek({ val queue: Queue = mock() val repository: ExecutionRepository = mock() val executor = MoreExecutors.directExecutor() + val taskResolver: TaskResolver = TaskResolver(emptyList()) + val cancellableStage: CancelableStageDefinitionBuilder = mock() + val stageResolver = StageResolver(listOf(singleTaskStage, cancellableStage)) subject(GROUP) { CancelStageHandler( queue, repository, - DefaultStageDefinitionBuilderFactory(singleTaskStage, cancellableStage), - executor + DefaultStageDefinitionBuilderFactory(stageResolver), + executor, + taskResolver ) } diff --git a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/CompleteStageHandlerTest.kt b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/CompleteStageHandlerTest.kt index f78f464fb4..9600ecd5c9 100644 --- a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/CompleteStageHandlerTest.kt +++ b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/CompleteStageHandlerTest.kt @@ -18,7 +18,10 @@ package com.netflix.spinnaker.orca.q.handler import com.netflix.spectator.api.NoopRegistry import com.netflix.spinnaker.orca.ExecutionStatus.* +import com.netflix.spinnaker.orca.StageResolver import com.netflix.spinnaker.orca.events.StageComplete +import com.netflix.spinnaker.orca.exceptions.DefaultExceptionHandler +import 
com.netflix.spinnaker.orca.exceptions.ExceptionHandler import com.netflix.spinnaker.orca.fixture.pipeline import com.netflix.spinnaker.orca.fixture.stage import com.netflix.spinnaker.orca.fixture.task @@ -33,6 +36,7 @@ import com.netflix.spinnaker.orca.pipeline.model.SyntheticStageOwner.STAGE_AFTER import com.netflix.spinnaker.orca.pipeline.model.SyntheticStageOwner.STAGE_BEFORE import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor +import com.netflix.spinnaker.orca.pipeline.util.StageNavigator import com.netflix.spinnaker.orca.q.* import com.netflix.spinnaker.q.Message import com.netflix.spinnaker.q.Queue @@ -46,14 +50,15 @@ import org.jetbrains.spek.api.dsl.* import org.jetbrains.spek.api.lifecycle.CachingMode.GROUP import org.jetbrains.spek.subject.SubjectSpek import org.springframework.context.ApplicationEventPublisher -import java.time.Duration -import java.time.Duration.* +import java.time.Duration.ZERO object CompleteStageHandlerTest : SubjectSpek({ val queue: Queue = mock() val repository: ExecutionRepository = mock() + val stageNavigator: StageNavigator = mock() val publisher: ApplicationEventPublisher = mock() + val exceptionHandler: ExceptionHandler = DefaultExceptionHandler() val clock = fixedClock() val registry = NoopRegistry() val contextParameterProcessor: ContextParameterProcessor = mock() @@ -103,22 +108,28 @@ object CompleteStageHandlerTest : SubjectSpek({ CompleteStageHandler( queue, repository, + stageNavigator, publisher, clock, + listOf(exceptionHandler), contextParameterProcessor, registry, DefaultStageDefinitionBuilderFactory( - singleTaskStage, - multiTaskStage, - stageWithSyntheticBefore, - stageWithSyntheticAfter, - stageWithParallelBranches, - stageWithTaskAndAfterStages, - stageThatBlowsUpPlanningAfterStages, - stageWithSyntheticOnFailure, - stageWithNothingButAfterStages, - stageWithSyntheticOnFailure, - emptyStage + StageResolver( + listOf( 
+ singleTaskStage, + multiTaskStage, + stageWithSyntheticBefore, + stageWithSyntheticAfter, + stageWithParallelBranches, + stageWithTaskAndAfterStages, + stageThatBlowsUpPlanningAfterStages, + stageWithSyntheticOnFailure, + stageWithNothingButAfterStages, + stageWithSyntheticOnFailure, + emptyStage + ) + ) ) ) } @@ -431,6 +442,12 @@ object CompleteStageHandlerTest : SubjectSpek({ assertThat(pipeline.stageById(message.stageId).status).isEqualTo(TERMINAL) } + it("correctly records exception") { + assertThat(pipeline.stageById(message.stageId).context).containsKey("exception") + val exceptionContext = pipeline.stageById(message.stageId).context["exception"] as ExceptionHandler.Response + assertThat(exceptionContext.exceptionType).isEqualTo(RuntimeException().javaClass.simpleName) + } + it("runs cancellation") { verify(queue).push(CancelStage(pipeline.stageById(message.stageId))) } @@ -1036,6 +1053,43 @@ object CompleteStageHandlerTest : SubjectSpek({ } } } + + given("a synthetic stage's task ends with $TERMINAL status and parent stage should continue on failure") { + val pipeline = pipeline { + stage { + refId = "1" + context = mapOf("continuePipeline" to true) // should continue on failure + type = stageWithSyntheticBefore.type + stageWithSyntheticBefore.buildBeforeStages(this) + stageWithSyntheticBefore.plan(this) + } + } + val message = CompleteStage(pipeline.stageByRef("1<1")) + + beforeGroup { + pipeline.stageById(message.stageId).apply { + status = RUNNING + singleTaskStage.plan(this) + tasks.first().status = TERMINAL + } + + whenever(repository.retrieve(PIPELINE, message.executionId)) doReturn pipeline + } + + on("receiving the message") { + subject.handle(message) + } + + afterGroup(::resetMocks) + + it("rolls up to the parent stage") { + verify(queue).push(message.copy(stageId = pipeline.stageByRef("1").id)) + } + + it("runs the parent stage's complete routine") { + verify(queue).push(CompleteStage(message.copy(stageId = pipeline.stageByRef("1").id))) + } + } 
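The `allStatuses.contains(TERMINAL) -> failureStatus()` change earlier in this patch makes a TERMINAL child roll up according to the parent stage's "if stage fails" configuration, which the new test exercises via `context = mapOf("continuePipeline" to true)`. A minimal sketch of that decision, with boolean parameters standing in as assumptions for the real stage-context lookups:

```java
public class FailureStatusSketch {
    enum ExecutionStatus { TERMINAL, STOPPED, FAILED_CONTINUE }

    // Hedged sketch of the failureStatus() choice: continuePipeline=true means
    // "ignore the failure and carry on" (FAILED_CONTINUE); failPipeline=true
    // fails the whole execution (TERMINAL); otherwise only this branch stops
    // (STOPPED). Parameter names are ours, not the handler's.
    static ExecutionStatus failureStatus(boolean continuePipeline, boolean failPipeline) {
        if (continuePipeline) return ExecutionStatus.FAILED_CONTINUE;
        if (failPipeline)     return ExecutionStatus.TERMINAL;
        return ExecutionStatus.STOPPED;
    }

    public static void main(String[] args) {
        System.out.println(failureStatus(true, true));   // FAILED_CONTINUE
        System.out.println(failureStatus(false, true));  // TERMINAL
        System.out.println(failureStatus(false, false)); // STOPPED
    }
}
```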
} describe("branching stages") { diff --git a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RescheduleExecutionHandlerTest.kt b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RescheduleExecutionHandlerTest.kt index 0b542e43f6..3e5d4eca20 100644 --- a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RescheduleExecutionHandlerTest.kt +++ b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RescheduleExecutionHandlerTest.kt @@ -18,6 +18,7 @@ package com.netflix.spinnaker.orca.q.handler import com.netflix.spinnaker.orca.ExecutionStatus import com.netflix.spinnaker.orca.Task +import com.netflix.spinnaker.orca.TaskResolver import com.netflix.spinnaker.orca.fixture.pipeline import com.netflix.spinnaker.orca.fixture.stage import com.netflix.spinnaker.orca.fixture.task @@ -35,9 +36,10 @@ object RescheduleExecutionHandlerTest : SubjectSpek( val queue: Queue = mock() val repository: ExecutionRepository = mock() + val taskResolver = TaskResolver(emptyList()) subject(CachingMode.GROUP) { - RescheduleExecutionHandler(queue, repository) + RescheduleExecutionHandler(queue, repository, taskResolver) } fun resetMocks() = reset(queue, repository) diff --git a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RestartStageHandlerTest.kt b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RestartStageHandlerTest.kt index cbd2041b84..67aa83dc58 100644 --- a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RestartStageHandlerTest.kt +++ b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RestartStageHandlerTest.kt @@ -18,6 +18,7 @@ package com.netflix.spinnaker.orca.q.handler import com.netflix.spinnaker.orca.ExecutionStatus import com.netflix.spinnaker.orca.ExecutionStatus.* +import com.netflix.spinnaker.orca.StageResolver import com.netflix.spinnaker.orca.fixture.pipeline import com.netflix.spinnaker.orca.fixture.stage import 
com.netflix.spinnaker.orca.pipeline.DefaultStageDefinitionBuilderFactory @@ -49,9 +50,13 @@ object RestartStageHandlerTest : SubjectSpek({ queue, repository, DefaultStageDefinitionBuilderFactory( - singleTaskStage, - stageWithSyntheticBefore, - stageWithNestedSynthetics + StageResolver( + listOf( + singleTaskStage, + stageWithSyntheticBefore, + stageWithNestedSynthetics + ) + ) ), pendingExecutionService, clock diff --git a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/ResumeTaskHandlerTest.kt b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/ResumeTaskHandlerTest.kt index d034c1a612..aa32188fda 100644 --- a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/ResumeTaskHandlerTest.kt +++ b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/ResumeTaskHandlerTest.kt @@ -18,6 +18,7 @@ package com.netflix.spinnaker.orca.q.handler import com.netflix.spinnaker.orca.ExecutionStatus.PAUSED import com.netflix.spinnaker.orca.ExecutionStatus.RUNNING +import com.netflix.spinnaker.orca.TaskResolver import com.netflix.spinnaker.orca.fixture.pipeline import com.netflix.spinnaker.orca.fixture.stage import com.netflix.spinnaker.orca.fixture.task @@ -38,9 +39,10 @@ object ResumeTaskHandlerTest : SubjectSpek({ val queue: Queue = mock() val repository: ExecutionRepository = mock() + val taskResolver = TaskResolver(emptyList()) subject(GROUP) { - ResumeTaskHandler(queue, repository) + ResumeTaskHandler(queue, repository, taskResolver) } fun resetMocks() = reset(queue, repository) diff --git a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RunTaskHandlerTest.kt b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RunTaskHandlerTest.kt index abaa7683f1..907f0e0feb 100644 --- a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RunTaskHandlerTest.kt +++ b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/RunTaskHandlerTest.kt @@ -44,7 +44,6 @@ import 
org.jetbrains.spek.api.dsl.on import org.jetbrains.spek.api.lifecycle.CachingMode.GROUP import org.jetbrains.spek.subject.SubjectSpek import org.threeten.extra.Minutes -import java.lang.RuntimeException import java.time.Duration import kotlin.reflect.jvm.jvmName @@ -90,7 +89,7 @@ object RunTaskHandlerTest : SubjectSpek({ val message = RunTask(pipeline.type, pipeline.id, "foo", pipeline.stages.first().id, "1", DummyTask::class.java) and("has no context updates outputs") { - val taskResult = TaskResult(SUCCEEDED) + val taskResult = TaskResult.SUCCEEDED beforeGroup { whenever(task.execute(any())) doReturn taskResult @@ -120,7 +119,7 @@ object RunTaskHandlerTest : SubjectSpek({ and("has context updates") { val stageOutputs = mapOf("foo" to "covfefe") - val taskResult = TaskResult(SUCCEEDED, stageOutputs, emptyMap()) + val taskResult = TaskResult.builder(SUCCEEDED).context(stageOutputs).build() beforeGroup { whenever(task.execute(any())) doReturn taskResult @@ -142,7 +141,7 @@ object RunTaskHandlerTest : SubjectSpek({ and("has outputs") { val outputs = mapOf("foo" to "covfefe") - val taskResult = TaskResult(SUCCEEDED, emptyMap(), outputs) + val taskResult = TaskResult.builder(SUCCEEDED).outputs(outputs).build() beforeGroup { whenever(task.execute(any())) doReturn taskResult @@ -167,7 +166,7 @@ object RunTaskHandlerTest : SubjectSpek({ "foo" to "covfefe", "stageTimeoutMs" to Long.MAX_VALUE ) - val taskResult = TaskResult(SUCCEEDED, emptyMap(), outputs) + val taskResult = TaskResult.builder(SUCCEEDED).outputs(outputs).build() beforeGroup { whenever(task.execute(any())) doReturn taskResult @@ -202,7 +201,7 @@ object RunTaskHandlerTest : SubjectSpek({ } } val message = RunTask(pipeline.type, pipeline.id, "foo", pipeline.stages.first().id, "1", DummyTask::class.java) - val taskResult = TaskResult(RUNNING) + val taskResult = TaskResult.RUNNING val taskBackoffMs = 30_000L beforeGroup { @@ -235,7 +234,7 @@ object RunTaskHandlerTest : SubjectSpek({ } } val message = 
RunTask(pipeline.type, pipeline.id, "foo", pipeline.stages.first().id, "1", DummyTask::class.java) - val taskResult = TaskResult(taskStatus) + val taskResult = TaskResult.ofStatus(taskStatus) and("no overrides are in place") { beforeGroup { @@ -1123,6 +1122,47 @@ object RunTaskHandlerTest : SubjectSpek({ } } + describe("can reference non-existent trigger props") { + mapOf( + "\${trigger.type == 'manual'}" to true, + "\${trigger.buildNumber == null}" to true, + "\${trigger.quax ?: 'no quax'}" to "no quax" + ).forEach { expression, expected -> + given("an expression $expression in the stage context") { + val pipeline = pipeline { + stage { + refId = "1" + type = "whatever" + context["expr"] = expression + trigger = DefaultTrigger ("manual") + task { + id = "1" + startTime = clock.instant().toEpochMilli() + } + } + } + val message = RunTask(pipeline.type, pipeline.id, "foo", pipeline.stageByRef("1").id, "1", DummyTask::class.java) + + beforeGroup { + whenever(task.execute(any())) doReturn TaskResult.SUCCEEDED + whenever(repository.retrieve(PIPELINE, message.executionId)) doReturn pipeline + } + + afterGroup(::resetMocks) + + action("the handler receives a message") { + subject.handle(message) + } + + it("evaluates the expression") { + verify(task).execute(check { + assertThat(it.context["expr"]).isEqualTo(expected) + }) + } + } + } + } + given("a reference to deployedServerGroups in the stage context") { val pipeline = pipeline { stage { diff --git a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/StartStageHandlerTest.kt b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/StartStageHandlerTest.kt index 6af8e1ec8a..aa064917a9 100644 --- a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/StartStageHandlerTest.kt +++ b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/StartStageHandlerTest.kt @@ -20,6 +20,7 @@ import com.fasterxml.jackson.databind.ObjectMapper import com.netflix.spectator.api.NoopRegistry import 
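The "can reference non-existent trigger props" tests above depend on the `augmentContext` change that converts the trigger to a `Map` with Jackson before expression evaluation, so a reference to a property the trigger type lacks resolves to null rather than erroring. A toy illustration of why map-based access gives that behavior (plain `HashMap`, no SpEL involved):

```java
import java.util.HashMap;
import java.util.Map;

public class TriggerMapSketch {
    public static void main(String[] args) {
        // A manual trigger rendered as a map: keys that were never set
        // (e.g. buildNumber on a manual trigger) just come back null,
        // mirroring `${trigger.buildNumber == null}` evaluating to true.
        Map<String, Object> trigger = new HashMap<>();
        trigger.put("type", "manual");

        System.out.println(trigger.get("type"));                      // manual
        System.out.println(trigger.get("buildNumber") == null);       // true
        System.out.println(trigger.getOrDefault("quax", "no quax"));  // no quax
    }
}
```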
com.netflix.spinnaker.assertj.assertSoftly import com.netflix.spinnaker.orca.ExecutionStatus.* +import com.netflix.spinnaker.orca.StageResolver import com.netflix.spinnaker.orca.events.StageStarted import com.netflix.spinnaker.orca.exceptions.ExceptionHandler import com.netflix.spinnaker.orca.fixture.pipeline @@ -65,16 +66,20 @@ object StartStageHandlerTest : SubjectSpek({ repository, stageNavigator, DefaultStageDefinitionBuilderFactory( - singleTaskStage, - multiTaskStage, - stageWithSyntheticBefore, - stageWithSyntheticAfter, - stageWithParallelBranches, - rollingPushStage, - zeroTaskStage, - stageWithSyntheticAfterAndNoTasks, - webhookStage, - failPlanningStage + StageResolver( + listOf( + singleTaskStage, + multiTaskStage, + stageWithSyntheticBefore, + stageWithSyntheticAfter, + stageWithParallelBranches, + rollingPushStage, + zeroTaskStage, + stageWithSyntheticAfterAndNoTasks, + webhookStage, + failPlanningStage + ) + ) ), ContextParameterProcessor(), publisher, diff --git a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/StartTaskHandlerTest.kt b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/StartTaskHandlerTest.kt index ae294065b0..1e13b35a7d 100644 --- a/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/StartTaskHandlerTest.kt +++ b/orca-queue/src/test/kotlin/com/netflix/spinnaker/orca/q/handler/StartTaskHandlerTest.kt @@ -17,6 +17,7 @@ package com.netflix.spinnaker.orca.q.handler import com.netflix.spinnaker.orca.ExecutionStatus.RUNNING +import com.netflix.spinnaker.orca.TaskResolver import com.netflix.spinnaker.orca.events.TaskStarted import com.netflix.spinnaker.orca.fixture.pipeline import com.netflix.spinnaker.orca.fixture.stage @@ -40,10 +41,11 @@ object StartTaskHandlerTest : SubjectSpek({ val queue: Queue = mock() val repository: ExecutionRepository = mock() val publisher: ApplicationEventPublisher = mock() + val taskResolver = TaskResolver(emptyList()) val clock = fixedClock() subject(GROUP) { - 
StartTaskHandler(queue, repository, ContextParameterProcessor(), publisher, clock) + StartTaskHandler(queue, repository, ContextParameterProcessor(), publisher, taskResolver, clock) } fun resetMocks() = reset(queue, repository, publisher) diff --git a/orca-redis/src/main/java/com/netflix/spinnaker/orca/pipeline/persistence/jedis/RedisExecutionRepository.java b/orca-redis/src/main/java/com/netflix/spinnaker/orca/pipeline/persistence/jedis/RedisExecutionRepository.java index 68e89b03f2..2f1077833b 100644 --- a/orca-redis/src/main/java/com/netflix/spinnaker/orca/pipeline/persistence/jedis/RedisExecutionRepository.java +++ b/orca-redis/src/main/java/com/netflix/spinnaker/orca/pipeline/persistence/jedis/RedisExecutionRepository.java @@ -54,6 +54,7 @@ import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.stream.Collectors; +import java.util.stream.Stream; import static com.google.common.collect.Maps.filterValues; import static com.netflix.spinnaker.orca.ExecutionStatus.BUFFERED; @@ -81,23 +82,6 @@ public class RedisExecutionRepository implements ExecutionRepository { new TypeReference>() { }; - private static String GET_EXECUTIONS_FOR_PIPELINE_CONFIG_IDS_SCRIPT = String.join("\n", - "local executions = {}", - "for k,pipelineConfigId in pairs(KEYS) do", - " local pipelineConfigToExecutionsKey = 'pipeline:executions:' .. pipelineConfigId", - " local ids = redis.call('ZRANGEBYSCORE', pipelineConfigToExecutionsKey, ARGV[1], ARGV[2])", - " for k,id in pairs(ids) do", - " table.insert(executions, id)", - " local executionKey = 'pipeline:' .. id", - " local execution = redis.call('HGETALL', executionKey)", - " table.insert(executions, execution)", - " local stageIdsKey = executionKey .. 
':stageIndex'", - " local stageIds = redis.call('LRANGE', stageIdsKey, 0, -1)", - " table.insert(executions, stageIds)", - " end", - "end", - "return executions"); - private final RedisClientDelegate redisClientDelegate; private final Optional previousRedisClientDelegate; private final ObjectMapper mapper = OrcaObjectMapper.newInstance(); @@ -1009,52 +993,67 @@ private void deleteInternal(RedisClientDelegate delegate, ExecutionType type, St // do nothing } finally { c.del(key); + c.del(key + ":stageIndex"); c.srem(alljobsKey(type), id); } }); } - private List getPipelinesForPipelineConfigIdsBetweenBuildTimeBoundaryFromRedis(RedisClientDelegate redisClientDelegate, List pipelineConfigIds, long buildTimeStartBoundary, long buildTimeEndBoundary) { - List executions = new ArrayList<>(); - - redisClientDelegate.withScriptingClient(c -> { - Object response = c.eval(GET_EXECUTIONS_FOR_PIPELINE_CONFIG_IDS_SCRIPT, pipelineConfigIds, Arrays.asList(Long.toString(buildTimeStartBoundary), Long.toString(buildTimeEndBoundary))); - /* - * - * Response of eval script is in this format: - * - * For N executions, - * - * Type - * [ - * for(i = 0; i < N; i++) - * execution ID String - * execution hash List - * stage IDs List - * ] - */ - List lists = (List) response; - - int i = 0; - while (i < lists.size()) { - String id = (String) lists.get(i); - i++; - - final Map map = buildExecutionMapFromRedisResponse((List) lists.get(i)); - i++; - - final List stageIds = (List) lists.get(i); - i++; + /** + * + * Unpacks the following redis script into several roundtrips: + * + * local pipelineConfigToExecutionsKey = 'pipeline:executions:' .. pipelineConfigId + * local ids = redis.call('ZRANGEBYSCORE', pipelineConfigToExecutionsKey, ARGV[1], ARGV[2]) + * for k,id in pairs(ids) do + * table.insert(executions, id) + * local executionKey = 'pipeline:' .. id + * local execution = redis.call('HGETALL', executionKey) + * table.insert(executions, execution) + * local stageIdsKey = executionKey .. 
':stageIndex' + local stageIds = redis.call('LRANGE', stageIdsKey, 0, -1) + table.insert(executions, stageIds) + end + + The script is intended to build a list of executions for a pipeline config id in a given time boundary. + + @param delegate Redis delegate + @param pipelineConfigId Pipeline config Id we are looking up executions for + @param buildTimeStartBoundary + @param buildTimeEndBoundary + @return Stream of executions for pipelineConfigId between the time boundaries. + */ + private Stream<Execution> getExecutionForPipelineConfigId(RedisClientDelegate delegate, + String pipelineConfigId, + Long buildTimeStartBoundary, + Long buildTimeEndBoundary) { + String executionsKey = executionsByPipelineKey(pipelineConfigId); + Set<String> executionIds = delegate.withCommandsClient(c -> { + return c.zrangeByScore(executionsKey, buildTimeStartBoundary, buildTimeEndBoundary); + }); - if (stageIds.isEmpty()) { - stageIds.addAll(extractStages(map)); - } + return executionIds.stream() + .map(executionId -> { + String executionKey = pipelineKey(executionId); + Map<String, String> executionMap = delegate.withCommandsClient(c -> { + return c.hgetAll(executionKey); + }); + String stageIdsKey = String.format("%s:stageIndex", executionKey); + List<String> stageIds = delegate.withCommandsClient(c -> { + return c.lrange(stageIdsKey, 0, -1); + }); + Execution execution = new Execution(PIPELINE, executionId, executionMap.get("application")); + return buildExecution(execution, executionMap, stageIds); + }); + } - Execution execution = new Execution(PIPELINE, id, map.get("application")); - executions.add(buildExecution(execution, map, stageIds)); - } - }); - return executions; + private List<Execution> getPipelinesForPipelineConfigIdsBetweenBuildTimeBoundaryFromRedis(RedisClientDelegate redisClientDelegate, + List<String> pipelineConfigIds, + long buildTimeStartBoundary, + long buildTimeEndBoundary) { + return pipelineConfigIds.stream() + .flatMap(pipelineConfigId -> getExecutionForPipelineConfigId(redisClientDelegate, 
pipelineConfigId, buildTimeStartBoundary, buildTimeEndBoundary)) + .collect(Collectors.toList()); } protected Observable all(ExecutionType type, RedisClientDelegate redisClientDelegate) { diff --git a/orca-redis/src/test/groovy/com/netflix/spinnaker/orca/pipeline/persistence/jedis/JedisExecutionRepositorySpec.groovy b/orca-redis/src/test/groovy/com/netflix/spinnaker/orca/pipeline/persistence/jedis/JedisExecutionRepositorySpec.groovy index b443c81293..faffd6dcad 100644 --- a/orca-redis/src/test/groovy/com/netflix/spinnaker/orca/pipeline/persistence/jedis/JedisExecutionRepositorySpec.groovy +++ b/orca-redis/src/test/groovy/com/netflix/spinnaker/orca/pipeline/persistence/jedis/JedisExecutionRepositorySpec.groovy @@ -135,6 +135,34 @@ class JedisExecutionRepositorySpec extends ExecutionRepositoryTck pipelinePreprocessors + List executionPreprocessors = new ArrayList<>(); @Autowired(required = false) private List pipelineModelMutators = new ArrayList<>(); @@ -184,7 +186,9 @@ class OperationsController { public Map parseAndValidatePipeline(Map pipeline, boolean resolveArtifacts) { parsePipelineTrigger(executionRepository, buildService, pipeline, resolveArtifacts) - for (PipelinePreprocessor preprocessor : (pipelinePreprocessors ?: [])) { + for (ExecutionPreprocessor preprocessor : executionPreprocessors.findAll { + it.supports(pipeline, ExecutionPreprocessor.Type.PIPELINE) + }) { pipeline = preprocessor.process(pipeline) } @@ -264,9 +268,24 @@ class OperationsController { } } + @Deprecated private void decorateBuildInfo(Map trigger) { - if (trigger.master && trigger.job && trigger.buildNumber) { - def buildInfo = buildService.getBuild(trigger.buildNumber, trigger.master, trigger.job) + // Echo now adds build information to the trigger before sending it to Orca, and manual triggers now default to + // going through echo (and thus receive build information). 
We still need this logic to populate build info for + // manual triggers when the 'triggerViaEcho' deck feature flag is off, or to handle users still hitting the old + // API endpoint manually, but we should short-circuit if we already have build info. + if (trigger.master && trigger.job && trigger.buildNumber && !trigger.buildInfo) { + log.info("Populating build information in Orca for trigger {}.", trigger) + def buildInfo + try { + buildInfo = buildService.getBuild(trigger.buildNumber, trigger.master, trigger.job) + } catch (RetrofitError e) { + if (e.response?.status == 404) { + throw new IllegalStateException("Build ${trigger.buildNumber} of ${trigger.master}/${trigger.job} not found") + } else { + throw new OperationFailedException("Failed to get build ${trigger.buildNumber} of ${trigger.master}/${trigger.job}", e) + } + } if (buildInfo?.artifacts) { if (trigger.type == "manual") { trigger.artifacts = buildInfo.artifacts @@ -285,14 +304,10 @@ class OperationsController { if (e.response?.status == 404) { throw new IllegalStateException("Expected properties file " + trigger.propertyFile + " (configured on trigger), but it was missing") } else { - throw e + throw new OperationFailedException("Failed to get properties file ${trigger.propertyFile}", e) } } } - } else if (trigger?.registry && trigger?.repository && trigger?.tag) { - trigger.buildInfo = [ - taggedImages: [[registry: trigger.registry, repository: trigger.repository, tag: trigger.tag]] - ] } } @@ -317,11 +332,19 @@ class OperationsController { } def webhooks = webhookService.preconfiguredWebhooks - if (fiatStatus.isEnabled()) { - String user = AuthenticatedRequest.getSpinnakerUser().orElse("anonymous") - UserPermission.View userPermission = fiatService.getUserPermission(user) + if (webhooks && fiatStatus.isEnabled()) { + if (webhooks.any { it.permissions }) { + def userPermissionRoles = [new Role.View(new Role("anonymous"))] as Set + try { + String user = 
AuthenticatedRequest.getSpinnakerUser().orElse("anonymous") + UserPermission.View userPermission = fiatService.getUserPermission(user) + userPermissionRoles = userPermission.roles + } catch (Exception e) { + log.error("Unable to determine roles for current user, falling back to 'anonymous'", e) + } - webhooks = webhooks.findAll { it.isAllowed("READ", userPermission.roles) } + webhooks = webhooks.findAll { it.isAllowed("READ", userPermissionRoles) } + } } return webhooks.collect { @@ -349,6 +372,7 @@ class OperationsController { waitForCompletion: it.waitForCompletion, noUserConfigurableFields: true, parameters: it.parameters, + producesArtifacts: it.producesArtifacts, ] } } @@ -385,6 +409,13 @@ class OperationsController { applyStageRefIds(config) } injectPipelineOrigin(config) + + for (ExecutionPreprocessor preprocessor : executionPreprocessors.findAll { + it.supports(config, ExecutionPreprocessor.Type.ORCHESTRATION) + }) { + config = preprocessor.process(config) + } + def json = objectMapper.writeValueAsString(config) log.info('requested task:{}', json) def pipeline = executionLauncher.start(ORCHESTRATION, json) diff --git a/orca-web/src/main/groovy/com/netflix/spinnaker/orca/controllers/TaskController.groovy b/orca-web/src/main/groovy/com/netflix/spinnaker/orca/controllers/TaskController.groovy index bf5a33f8b6..1b24bdf0c1 100644 --- a/orca-web/src/main/groovy/com/netflix/spinnaker/orca/controllers/TaskController.groovy +++ b/orca-web/src/main/groovy/com/netflix/spinnaker/orca/controllers/TaskController.groovy @@ -508,9 +508,14 @@ class TaskController { @RequestParam("expression") String expression) { def execution = executionRepository.retrieve(PIPELINE, id) + def context = [ + execution: execution, + trigger: mapper.convertValue(execution.trigger, Map.class) + ] + def evaluated = contextParameterProcessor.process( [expression: expression], - [execution: execution], + context, true ) return [result: evaluated?.expression, detail: 
evaluated?.expressionEvaluationSummary] diff --git a/orca-web/src/main/groovy/com/netflix/spinnaker/orca/controllers/V2PipelineTemplateController.java b/orca-web/src/main/groovy/com/netflix/spinnaker/orca/controllers/V2PipelineTemplateController.java index 6992667228..fb9d4f53c7 100644 --- a/orca-web/src/main/groovy/com/netflix/spinnaker/orca/controllers/V2PipelineTemplateController.java +++ b/orca-web/src/main/groovy/com/netflix/spinnaker/orca/controllers/V2PipelineTemplateController.java @@ -16,10 +16,10 @@ package com.netflix.spinnaker.orca.controllers; +import com.netflix.spinnaker.orca.extensionpoint.pipeline.ExecutionPreprocessor; import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor; +import com.netflix.spinnaker.orca.pipelinetemplate.V2Util; import lombok.extern.slf4j.Slf4j; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.autoconfigure.condition.ConditionalOnExpression; import org.springframework.web.bind.annotation.RequestBody; @@ -27,8 +27,8 @@ import org.springframework.web.bind.annotation.RequestMethod; import org.springframework.web.bind.annotation.RestController; -import java.util.Collections; -import java.util.HashMap; +import java.util.ArrayList; +import java.util.List; import java.util.Map; @RestController @@ -38,18 +38,13 @@ public class V2PipelineTemplateController { @Autowired - private OperationsController operationsController; + private ContextParameterProcessor contextParameterProcessor; - @Autowired - ContextParameterProcessor contextParameterProcessor; + @Autowired(required = false) + private List executionPreprocessors = new ArrayList<>(); @RequestMapping(value = "/plan", method = RequestMethod.POST) - Map orchestrate(@RequestBody Map pipeline) { - pipeline = operationsController.parseAndValidatePipeline(pipeline); - - Map augmentedContext = new HashMap<>(); - augmentedContext.put("trigger", 
pipeline.get("trigger")); - augmentedContext.put("templateVariables", pipeline.getOrDefault("templateVariables", Collections.EMPTY_MAP)); - return contextParameterProcessor.process(pipeline, augmentedContext, false); + Map plan(@RequestBody Map pipeline) { + return V2Util.planPipeline(contextParameterProcessor, executionPreprocessors, pipeline); } } diff --git a/orca-web/src/main/groovy/com/netflix/spinnaker/orca/exceptions/OperationFailedException.java b/orca-web/src/main/groovy/com/netflix/spinnaker/orca/exceptions/OperationFailedException.java new file mode 100644 index 0000000000..d27e0510f7 --- /dev/null +++ b/orca-web/src/main/groovy/com/netflix/spinnaker/orca/exceptions/OperationFailedException.java @@ -0,0 +1,7 @@ +package com.netflix.spinnaker.orca.exceptions; + +public class OperationFailedException extends RuntimeException { + public OperationFailedException(String message) { super(message);} + + public OperationFailedException(String message, Throwable cause) { super(message, cause);} +} diff --git a/orca-web/src/test/groovy/com/netflix/spinnaker/orca/MainSpec.java b/orca-web/src/test/groovy/com/netflix/spinnaker/orca/MainSpec.java new file mode 100644 index 0000000000..48e1150edd --- /dev/null +++ b/orca-web/src/test/groovy/com/netflix/spinnaker/orca/MainSpec.java @@ -0,0 +1,47 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package com.netflix.spinnaker.orca; + +import com.netflix.spinnaker.orca.locks.LockManager; +import com.netflix.spinnaker.orca.notifications.NotificationClusterLock; +import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.springframework.boot.test.context.SpringBootTest; +import org.springframework.boot.test.mock.mockito.MockBean; +import org.springframework.test.context.ContextConfiguration; +import org.springframework.test.context.TestPropertySource; +import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; + +@RunWith(SpringJUnit4ClassRunner.class) +@SpringBootTest(classes = {Main.class}) +@ContextConfiguration(classes = {StartupTestConfiguration.class}) +@TestPropertySource(properties = {"spring.config.location=classpath:orca-test.yml"}) +public class MainSpec { + @MockBean + ExecutionRepository executionRepository; + + @MockBean + LockManager lockManager; + + @MockBean + NotificationClusterLock notificationClusterLock; + + @Test + public void startupTest() { + } +} diff --git a/orca-web/src/test/groovy/com/netflix/spinnaker/orca/StartupTestConfiguration.java b/orca-web/src/test/groovy/com/netflix/spinnaker/orca/StartupTestConfiguration.java new file mode 100644 index 0000000000..8c8fccc6f3 --- /dev/null +++ b/orca-web/src/test/groovy/com/netflix/spinnaker/orca/StartupTestConfiguration.java @@ -0,0 +1,37 @@ +/* + * Copyright 2019 Google, Inc. + * + * Licensed under the Apache License, Version 2.0 (the "License") + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package com.netflix.spinnaker.orca; + +import com.netflix.spinnaker.q.Queue; +import com.netflix.spinnaker.q.memory.InMemoryQueue; +import com.netflix.spinnaker.q.metrics.EventPublisher; +import org.springframework.boot.test.context.TestConfiguration; +import org.springframework.context.annotation.Bean; +import org.springframework.context.annotation.Primary; + +import java.time.Clock; +import java.time.Duration; +import java.util.Collections; + +@TestConfiguration +class StartupTestConfiguration { + @Bean + @Primary + Queue queue(Clock clock, EventPublisher publisher) { + return new InMemoryQueue(clock, Duration.ofMinutes(1), Collections.emptyList(), publisher); + } +} diff --git a/orca-web/src/test/groovy/com/netflix/spinnaker/orca/controllers/OperationsControllerSpec.groovy b/orca-web/src/test/groovy/com/netflix/spinnaker/orca/controllers/OperationsControllerSpec.groovy index 3014aec3c6..ec9b0b4389 100644 --- a/orca-web/src/test/groovy/com/netflix/spinnaker/orca/controllers/OperationsControllerSpec.groovy +++ b/orca-web/src/test/groovy/com/netflix/spinnaker/orca/controllers/OperationsControllerSpec.groovy @@ -21,6 +21,7 @@ import com.netflix.spinnaker.fiat.model.resources.Account import com.netflix.spinnaker.fiat.model.resources.Role import com.netflix.spinnaker.fiat.shared.FiatService import com.netflix.spinnaker.fiat.shared.FiatStatus + import com.netflix.spinnaker.orca.front50.Front50Service import javax.servlet.http.HttpServletResponse @@ -178,7 +179,7 @@ class OperationsControllerSpec extends Specification { trigger : [ type : "manual", parentPipelineId: "12345", - parentExecution : [name: "abc"] + parentExecution : [name: "abc", type: PIPELINE, id: "1", application: "application"] ] ] } @@ -263,7 +264,7 @@ class OperationsControllerSpec extends Specification { buildService.getBuild(buildNumber, master, job) >> buildInfo if (queryUser) { - 
MDC.put(AuthenticatedRequest.SPINNAKER_USER, queryUser) + MDC.put(AuthenticatedRequest.Header.USER.header, queryUser) } when: controller.orchestrate(requestedPipeline, Mock(HttpServletResponse)) @@ -481,7 +482,7 @@ class OperationsControllerSpec extends Specification { startedPipeline } executionRepository.retrievePipelinesForPipelineConfigId(*_) >> Observable.empty() - ArtifactResolver realArtifactResolver = new ArtifactResolver(mapper, executionRepository) + ArtifactResolver realArtifactResolver = new ArtifactResolver(mapper, executionRepository, new ContextParameterProcessor()) // can't use @subject, since we need to test the behavior of otherwise mocked-out 'artifactResolver' def tempController = new OperationsController( @@ -492,7 +493,7 @@ class OperationsControllerSpec extends Specification { executionLauncher: executionLauncher, contextParameterProcessor: new ContextParameterProcessor(), webhookService: webhookService, - artifactResolver: realArtifactResolver + artifactResolver: realArtifactResolver, ) def reference = 'gs://bucket' @@ -716,6 +717,27 @@ class OperationsControllerSpec extends Specification { ] } + def "should return unrestricted preconfigured webhooks if fiat is unavailable"() { + given: + UserPermission userPermission = new UserPermission() + userPermission.addResource(new Role("test")) + + when: + def preconfiguredWebhooks = controller.preconfiguredWebhooks() + + then: + 1 * fiatService.getUserPermission(*_) >> { + throw new IllegalStateException("Sorry, Fiat is unavailable") + } + 1 * controller.fiatStatus.isEnabled() >> { return true } + 1 * webhookService.preconfiguredWebhooks >> [ + createPreconfiguredWebhook("Webhook #1", "Description #1", "webhook_1", [:]), + createPreconfiguredWebhook("Webhook #2", "Description #2", "webhook_2", ["READ": ["some-role"], "WRITE": ["some-role"]]), + createPreconfiguredWebhook("Webhook #3", "Description #3", "webhook_3", ["READ": ["anonymous"], "WRITE": ["anonymous"]]) + ] + preconfiguredWebhooks*.label 
== ["Webhook #1", "Webhook #3"] + } + def "should start pipeline by config id with provided trigger"() { given: Execution startedPipeline = null diff --git a/orca-web/src/test/groovy/com/netflix/spinnaker/orca/controllers/TaskControllerSpec.groovy b/orca-web/src/test/groovy/com/netflix/spinnaker/orca/controllers/TaskControllerSpec.groovy index 5f5cba5d07..0d30f3033f 100644 --- a/orca-web/src/test/groovy/com/netflix/spinnaker/orca/controllers/TaskControllerSpec.groovy +++ b/orca-web/src/test/groovy/com/netflix/spinnaker/orca/controllers/TaskControllerSpec.groovy @@ -25,6 +25,7 @@ import com.netflix.spinnaker.orca.jackson.OrcaObjectMapper import com.netflix.spinnaker.orca.pipeline.ExecutionRunner import com.netflix.spinnaker.orca.pipeline.model.* import com.netflix.spinnaker.orca.pipeline.persistence.ExecutionRepository +import com.netflix.spinnaker.orca.pipeline.util.ContextParameterProcessor import groovy.json.JsonSlurper import org.springframework.http.MediaType import org.springframework.mock.web.MockHttpServletResponse @@ -69,7 +70,8 @@ class TaskControllerSpec extends Specification { numberOfOldPipelineExecutionsToInclude: numberOfOldPipelineExecutionsToInclude, clock: clock, mapper: mapper, - registry: registry + registry: registry, + contextParameterProcessor: new ContextParameterProcessor() ) ).build() } @@ -111,7 +113,8 @@ class TaskControllerSpec extends Specification { application = "covfefe" stage { type = "test" - tasks = [new Task(name: 'jobOne'), new Task(name: 'jobTwo')] + tasks = [new Task(id:'1', name: 'jobOne', startTime: 1L, endTime: 2L, implementingClass: 'Class' ), + new Task(id:'2', name: 'jobTwo', startTime: 1L, endTime: 2L, implementingClass: 'Class' )] } }]) @@ -214,6 +217,33 @@ class TaskControllerSpec extends Specification { results.id == ['not-started', 'also-not-started', 'older2', 'older1', 'newer'] } + void '/applications/{application}/evaluateExpressions precomputes values'() { + given: + 
executionRepository.retrieve(Execution.ExecutionType.PIPELINE, "1") >> { + pipeline { + id = "1" + application = "doesn't matter" + startTime = 1 + pipelineConfigId = "1" + trigger = new DefaultTrigger("manual", "id", "user", [param1: "param1Value"]) + } + } + + when: + def response = mockMvc.perform( + get("/pipelines/1/evaluateExpression") + .param("id", "1") + .param("expression", '${parameters.param1}')) + .andReturn().response + Map results = new ObjectMapper().readValue(response.contentAsString, Map) + + then: + results == [ + result: "param1Value", + detail: null + ] + } + void '/pipelines should only return the latest pipelines for the provided config ids, newest first'() { given: def pipelines = [ @@ -308,6 +338,9 @@ class TaskControllerSpec extends Specification { ], [pipelineConfigId: "1", id: "test-3", startTime: clock.instant().minus(daysOfExecutionHistory, DAYS).minus(2, HOURS).toEpochMilli(), trigger: new JenkinsTrigger("master", "job", 1, "test-property-file") + ], + [pipelineConfigId: "1", id: "test-4", startTime: clock.instant().minus(daysOfExecutionHistory, DAYS).minus(2, HOURS).toEpochMilli(), + trigger: new ArtifactoryTrigger("libs-demo-local") ] ] @@ -335,7 +368,7 @@ class TaskControllerSpec extends Specification { List results = new ObjectMapper().readValue(response.contentAsString, List) then: - results.id == ['test-1', 'test-2', 'test-3'] + results.id == ['test-1', 'test-2', 'test-3', 'test-4'] } void '/applications/{application}/pipelines/search should only return pipelines of given types'() { @@ -353,6 +386,9 @@ class TaskControllerSpec extends Specification { ], [pipelineConfigId: "1", id: "test-4", startTime: clock.instant().minus(daysOfExecutionHistory, DAYS).minus(2, HOURS).toEpochMilli(), trigger: new JenkinsTrigger("master", "job", 1, "test-property-file") + ], + [pipelineConfigId: "1", id: "test-5", startTime: clock.instant().minus(daysOfExecutionHistory, DAYS).minus(2, HOURS).toEpochMilli(), + trigger: new 
ArtifactoryTrigger("libs-demo-local") ] ] @@ -399,6 +435,9 @@ class TaskControllerSpec extends Specification { ], [pipelineConfigId: "1", id: "test-4", startTime: clock.instant().minus(daysOfExecutionHistory, DAYS).minus(2, HOURS).toEpochMilli(), trigger: new JenkinsTrigger("master", "job", 1, "test-property-file"), eventId: eventId + ], + [pipelineConfigId: "1", id: "test-5", startTime: clock.instant().minus(daysOfExecutionHistory, DAYS).minus(2, HOURS).toEpochMilli(), + trigger: new ArtifactoryTrigger("libs-demo-local"), eventId: wrongEventId ] ] diff --git a/orca-web/src/test/resources/orca-test.yml b/orca-web/src/test/resources/orca-test.yml new file mode 100644 index 0000000000..c8f9b5f2f1 --- /dev/null +++ b/orca-web/src/test/resources/orca-test.yml @@ -0,0 +1,19 @@ +front50: + enabled: false + +igor: + enabled: false + +bakery: + enabled: false + +echo: + enabled: false + +monitor: + activeExecutions: + redis: false + +executionRepository: + redis: + enabled: false diff --git a/orca-webhook/orca-webhook.gradle b/orca-webhook/orca-webhook.gradle index 323a66f8c4..ea3472f003 100644 --- a/orca-webhook/orca-webhook.gradle +++ b/orca-webhook/orca-webhook.gradle @@ -23,6 +23,7 @@ dependencies { compile spinnaker.dependency('korkWeb') compile spinnaker.dependency('bootAutoConfigure') compile spinnaker.dependency('lombok') + annotationProcessor spinnaker.dependency("lombok") compile('com.jayway.jsonpath:json-path:2.2.0') compile spinnaker.dependency("okHttp3") compile spinnaker.dependency('retrofit1okHttp3Client') diff --git a/orca-webhook/src/main/groovy/com/netflix/spinnaker/orca/webhook/tasks/CreateWebhookTask.groovy b/orca-webhook/src/main/groovy/com/netflix/spinnaker/orca/webhook/tasks/CreateWebhookTask.groovy index 1474139f97..6b68729c41 100644 --- a/orca-webhook/src/main/groovy/com/netflix/spinnaker/orca/webhook/tasks/CreateWebhookTask.groovy +++ b/orca-webhook/src/main/groovy/com/netflix/spinnaker/orca/webhook/tasks/CreateWebhookTask.groovy @@ -59,6 +59,18 
@@ class CreateWebhookTask implements RetryableTask { def response try { response = webhookService.exchange(stageData.method, stageData.url, stageData.payload, stageData.customHeaders) + } catch (IllegalArgumentException e) { + if (e.cause instanceof UnknownHostException) { + String errorMessage = "name resolution failure in webhook for pipeline ${stage.execution.id} to ${stageData.url}, will retry." + log.warn(errorMessage, e) + outputs.webhook << [error: errorMessage] + return TaskResult.builder(ExecutionStatus.RUNNING).context(outputs).build() + } else { + String errorMessage = "an exception occurred in webhook to ${stageData.url}: ${e}" + log.error(errorMessage, e) + outputs.webhook << [error: errorMessage] + return TaskResult.builder(ExecutionStatus.TERMINAL).context(outputs).build() + } } catch (HttpStatusCodeException e) { def statusCode = e.getStatusCode() @@ -86,7 +98,7 @@ class CreateWebhookTask implements RetryableTask { String webhookMessage = "Received a status code configured to fail fast, terminating stage." outputs.webhook << [error: webhookMessage] - return new TaskResult(ExecutionStatus.TERMINAL, outputs) + return TaskResult.builder(ExecutionStatus.TERMINAL).context(outputs).build() } if (statusCode.is5xxServerError() || statusCode.value() == 429) { @@ -95,13 +107,13 @@ class CreateWebhookTask implements RetryableTask { outputs.webhook << [error: errorMessage] - return new TaskResult(ExecutionStatus.RUNNING, outputs) + return TaskResult.builder(ExecutionStatus.RUNNING).context(outputs).build() } String errorMessage = "Error submitting webhook for pipeline ${stage.execution.id} to ${stageData.url} with status code ${statusCode}." 
outputs.webhook << [error: errorMessage] - return new TaskResult(ExecutionStatus.TERMINAL, outputs) + return TaskResult.builder(ExecutionStatus.TERMINAL).context(outputs).build() } def statusCode = response.statusCode @@ -130,7 +142,7 @@ class CreateWebhookTask implements RetryableTask { statusUrl = new JsonContext().parse(response.body).read(path) } catch (PathNotFoundException e) { outputs.webhook << [error: e.message] - return new TaskResult(ExecutionStatus.TERMINAL, outputs) + return TaskResult.builder(ExecutionStatus.TERMINAL).context(outputs).build() } } if (!statusUrl || !(statusUrl instanceof String)) { @@ -138,11 +150,11 @@ class CreateWebhookTask implements RetryableTask { error : "The status URL couldn't be resolved, but 'Wait for completion' was checked", statusEndpoint: statusUrl ] - return new TaskResult(ExecutionStatus.TERMINAL, outputs) + return TaskResult.builder(ExecutionStatus.TERMINAL).context(outputs).build() } stage.context.statusEndpoint = statusUrl outputs.webhook << [statusEndpoint: statusUrl] - return new TaskResult(ExecutionStatus.SUCCEEDED, outputsDeprecated + outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputsDeprecated + outputs).build() } if (stage.context.containsKey("expectedArtifacts") && !((List) stage.context.get("expectedArtifacts")).isEmpty()) { try { @@ -150,13 +162,13 @@ class CreateWebhookTask implements RetryableTask { outputs << [artifacts: artifacts] } catch (Exception e) { outputs.webhook << [error: "Expected artifacts in webhook response none were found"] - return new TaskResult(ExecutionStatus.TERMINAL, outputs) + return TaskResult.builder(ExecutionStatus.TERMINAL).context(outputs).build() } } - return new TaskResult(ExecutionStatus.SUCCEEDED, outputsDeprecated + outputs) + return TaskResult.builder(ExecutionStatus.SUCCEEDED).context(outputsDeprecated + outputs).build() } else { outputs.webhook << [error: "The webhook request failed"] - return new TaskResult(ExecutionStatus.TERMINAL, 
outputsDeprecated + outputs) + return TaskResult.builder(ExecutionStatus.TERMINAL).context(outputsDeprecated + outputs).build() } } diff --git a/orca-webhook/src/main/groovy/com/netflix/spinnaker/orca/webhook/tasks/MonitorWebhookTask.groovy b/orca-webhook/src/main/groovy/com/netflix/spinnaker/orca/webhook/tasks/MonitorWebhookTask.groovy index eecd2a37f1..7f4bcf1ab6 100644 --- a/orca-webhook/src/main/groovy/com/netflix/spinnaker/orca/webhook/tasks/MonitorWebhookTask.groovy +++ b/orca-webhook/src/main/groovy/com/netflix/spinnaker/orca/webhook/tasks/MonitorWebhookTask.groovy @@ -80,12 +80,20 @@ class MonitorWebhookTask implements OverridableTimeoutRetryableTask { stage.execution.id, stage.id ) + } catch (IllegalArgumentException e) { + if (e.cause instanceof UnknownHostException) { + log.warn("name resolution failure in webhook for pipeline ${stage.execution.id} to ${statusEndpoint}, will retry.", e) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) + } + + throw e } catch (HttpStatusCodeException e) { def statusCode = e.getStatusCode() if (statusCode.is5xxServerError() || statusCode.value() == 429) { log.warn("error getting webhook status from ${statusEndpoint}, will retry", e) - return new TaskResult(ExecutionStatus.RUNNING) + return TaskResult.ofStatus(ExecutionStatus.RUNNING) } + throw e } @@ -106,11 +114,11 @@ class MonitorWebhookTask implements OverridableTimeoutRetryableTask { result = JsonPath.read(response.body, statusJsonPath) } catch (PathNotFoundException e) { responsePayload.webhook.monitor << [error: String.format(JSON_PATH_NOT_FOUND_ERR_FMT, "status", statusJsonPath)] - return new TaskResult(ExecutionStatus.TERMINAL, responsePayload) + return TaskResult.builder(ExecutionStatus.TERMINAL).context(responsePayload).build() } if (!(result instanceof String || result instanceof Number || result instanceof Boolean)) { responsePayload.webhook.monitor << [error: "The json path '${statusJsonPath}' did not resolve to a single value", resolvedValue: result] - 
return new TaskResult(ExecutionStatus.TERMINAL, responsePayload) + return TaskResult.builder(ExecutionStatus.TERMINAL).context(responsePayload).build() } if (progressJsonPath) { @@ -119,11 +127,11 @@ class MonitorWebhookTask implements OverridableTimeoutRetryableTask { progress = JsonPath.read(response.body, progressJsonPath) } catch (PathNotFoundException e) { responsePayload.webhook.monitor << [error: String.format(JSON_PATH_NOT_FOUND_ERR_FMT, "progress", statusJsonPath)] - return new TaskResult(ExecutionStatus.TERMINAL, responsePayload) + return TaskResult.builder(ExecutionStatus.TERMINAL).context(responsePayload).build() } if (!(progress instanceof String)) { responsePayload.webhook.monitor << [error: "The json path '${progressJsonPath}' did not resolve to a String value", resolvedValue: progress] - return new TaskResult(ExecutionStatus.TERMINAL, responsePayload) + return TaskResult.builder(ExecutionStatus.TERMINAL).context(responsePayload).build() } if (progress) { responsePayload << [progressMessage: progress] // TODO: deprecated @@ -137,12 +145,12 @@ class MonitorWebhookTask implements OverridableTimeoutRetryableTask { def status = result == 100 ? ExecutionStatus.SUCCEEDED : ExecutionStatus.RUNNING responsePayload << [percentComplete: result] // TODO: deprecated responsePayload.webhook.monitor << [percentComplete: result] - return new TaskResult(status, responsePayload) + return TaskResult.builder(status).context(responsePayload).build() } else if (statusMap.containsKey(result.toString().toUpperCase())) { - return new TaskResult(statusMap[result.toString().toUpperCase()], responsePayload) + return TaskResult.builder(statusMap[result.toString().toUpperCase()]).context(responsePayload).build() } - return new TaskResult(ExecutionStatus.RUNNING, response ? responsePayload : [:]) + return TaskResult.builder(ExecutionStatus.RUNNING).context(response ? 
responsePayload : [:]).build() } private static Map createStatusMap(String successStatuses, String canceledStatuses, String terminalStatuses) { diff --git a/orca-webhook/src/test/groovy/com/netflix/spinnaker/orca/webhook/tasks/CreateWebhookTaskSpec.groovy b/orca-webhook/src/test/groovy/com/netflix/spinnaker/orca/webhook/tasks/CreateWebhookTaskSpec.groovy index 593accec06..6dc6c029c0 100644 --- a/orca-webhook/src/test/groovy/com/netflix/spinnaker/orca/webhook/tasks/CreateWebhookTaskSpec.groovy +++ b/orca-webhook/src/test/groovy/com/netflix/spinnaker/orca/webhook/tasks/CreateWebhookTaskSpec.groovy @@ -188,6 +188,51 @@ class CreateWebhookTaskSpec extends Specification { ] } + def "should retry on name resolution failure"() { + setup: + def stage = new Stage(pipeline, "webhook", "My webhook", [:]) + + createWebhookTask.webhookService = Stub(WebhookService) { + exchange(_, _, _, _) >> { + // throwing it like UserConfiguredUrlRestrictions::validateURI does + throw new IllegalArgumentException("Invalid URL", new UnknownHostException("Temporary failure in name resolution")) + } + } + + when: + def result = createWebhookTask.execute(stage) + + then: + result.status == ExecutionStatus.RUNNING + (result.context as Map) == [ + webhook: [ + error: "name resolution failure in webhook for pipeline ${stage.execution.id} to ${stage.context.url}, will retry." 
+ ] + ] + } + + def "should return TERMINAL on URL validation failure"() { + setup: + def stage = new Stage(pipeline, "webhook", "My webhook", [url: "wrong://my-service.io/api/"]) + + createWebhookTask.webhookService = Stub(WebhookService) { + exchange(_, _, _, _) >> { + throw new IllegalArgumentException("Invalid URL") + } + } + + when: + def result = createWebhookTask.execute(stage) + + then: + result.status == ExecutionStatus.TERMINAL + (result.context as Map) == [ + webhook: [ + error: "an exception occurred in webhook to wrong://my-service.io/api/: java.lang.IllegalArgumentException: Invalid URL" + ] + ] + } + def "should parse response correctly on failure"() { setup: def stage = new Stage(pipeline, "webhook", "My webhook", [ diff --git a/orca-webhook/src/test/groovy/com/netflix/spinnaker/orca/webhook/tasks/MonitorWebhookTaskSpec.groovy b/orca-webhook/src/test/groovy/com/netflix/spinnaker/orca/webhook/tasks/MonitorWebhookTaskSpec.groovy index 7b2beffdbf..feb98d78c6 100644 --- a/orca-webhook/src/test/groovy/com/netflix/spinnaker/orca/webhook/tasks/MonitorWebhookTaskSpec.groovy +++ b/orca-webhook/src/test/groovy/com/netflix/spinnaker/orca/webhook/tasks/MonitorWebhookTaskSpec.groovy @@ -70,6 +70,45 @@ class MonitorWebhookTaskSpec extends Specification { ex.message == "Missing required parameters 'statusEndpoint', 'statusJsonPath'" as String } + def "should fail in case of URL validation error"() { + setup: + def stage = new Stage(pipeline, "webhook", [ + statusEndpoint: 'https://my-service.io/api/status/123', + statusJsonPath: '$.status']) + + monitorWebhookTask.webhookService = Stub(WebhookService) { + getStatus(_, _) >> { + throw new IllegalArgumentException("Invalid URL") + } + } + + when: + monitorWebhookTask.execute stage + + then: + def ex = thrown IllegalArgumentException + ex.message == "Invalid URL" + } + + def "should retry in case of name resolution error"() { + setup: + def stage = new Stage(pipeline, "webhook", [ + statusEndpoint: 
'https://my-service.io/api/status/123', + statusJsonPath: '$.status']) + + monitorWebhookTask.webhookService = Stub(WebhookService) { + getStatus(_, _) >> { + throw new IllegalArgumentException("Invalid URL", new UnknownHostException()) + } + } + + when: + def result = monitorWebhookTask.execute stage + + then: + result.status == ExecutionStatus.RUNNING + } + def "should do a get request to the defined statusEndpoint"() { setup: monitorWebhookTask.webhookService = Mock(WebhookService) { diff --git a/settings.gradle b/settings.gradle index 5d5e2c7f30..1d4600e9a1 100644 --- a/settings.gradle +++ b/settings.gradle @@ -46,7 +46,9 @@ include "orca-extensionpoint", "orca-qos", "orca-migration", "orca-sql", - "orca-sql-mysql" + "orca-sql-mysql", + "orca-integrations-gremlin", + "orca-integrations-cloudfoundry" rootProject.name = "orca"
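The webhook task changes in this patch replace every deprecated `new TaskResult(status, context)` constructor call with the fluent `TaskResult.builder(status).context(context).build()` API. A minimal, self-contained sketch of that builder shape, using simplified stand-in classes rather than Spinnaker's actual `TaskResult` and `ExecutionStatus`:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified stand-ins for Spinnaker's types, illustrating
// only the constructor-to-builder migration performed by this patch.
enum ExecutionStatus { RUNNING, SUCCEEDED, TERMINAL }

final class TaskResult {
    private final ExecutionStatus status;
    private final Map<String, Object> context;

    private TaskResult(ExecutionStatus status, Map<String, Object> context) {
        this.status = status;
        this.context = Collections.unmodifiableMap(context);
    }

    static Builder builder(ExecutionStatus status) {
        return new Builder(status);
    }

    ExecutionStatus getStatus() { return status; }
    Map<String, Object> getContext() { return context; }

    static final class Builder {
        private final ExecutionStatus status;
        private final Map<String, Object> context = new HashMap<>();

        private Builder(ExecutionStatus status) { this.status = status; }

        // Merge a context map into the result, mirroring .context(outputs).
        Builder context(Map<String, Object> context) {
            this.context.putAll(context);
            return this;
        }

        TaskResult build() { return new TaskResult(status, context); }
    }
}

public class BuilderSketch {
    public static void main(String[] args) {
        Map<String, Object> outputs = new HashMap<>();
        outputs.put("error", "The webhook request failed");

        // Old style (removed by the patch): new TaskResult(ExecutionStatus.TERMINAL, outputs)
        // New style (introduced by the patch):
        TaskResult result = TaskResult.builder(ExecutionStatus.TERMINAL).context(outputs).build();
        System.out.println(result.getStatus() + " " + result.getContext().get("error"));
        // prints "TERMINAL The webhook request failed"
    }
}
```

The builder reads the same at every call site regardless of how many optional pieces (context, outputs) a result carries, which is why the patch can migrate `CreateWebhookTask` and `MonitorWebhookTask` mechanically.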
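The `getExecutionForPipelineConfigId` helper added near the top of this patch selects execution ids from a Redis sorted set (member = execution id, score = buildTime) via `zrangeByScore`, then hydrates each execution with `hgetAll` and its `:stageIndex` list. A rough in-memory analogue of just the score-range selection, with a hypothetical `BuildTimeIndex` standing in for the Redis client:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// In-memory stand-in for the sorted set behind executionsByPipelineKey:
// member = execution id, score = buildTime. Hypothetical class; the real
// code calls Jedis's zrangeByScore through a RedisClientDelegate.
public class BuildTimeIndex {
    private final TreeMap<Long, List<String>> byBuildTime = new TreeMap<>();

    public void add(String executionId, long buildTime) {
        byBuildTime.computeIfAbsent(buildTime, t -> new ArrayList<>()).add(executionId);
    }

    // Analogue of ZRANGEBYSCORE key start end (both boundaries inclusive),
    // matching how the patch passes buildTimeStartBoundary/EndBoundary.
    public List<String> idsBetween(long start, long end) {
        List<String> ids = new ArrayList<>();
        for (List<String> bucket : byBuildTime.subMap(start, true, end, true).values()) {
            ids.addAll(bucket);
        }
        return ids;
    }

    public static void main(String[] args) {
        BuildTimeIndex index = new BuildTimeIndex();
        index.add("exec-1", 1000L);
        index.add("exec-2", 2000L);
        index.add("exec-3", 3000L);
        System.out.println(index.idsBetween(1500L, 3000L)); // prints [exec-2, exec-3]
    }
}
```

Keeping buildTime as the sorted-set score is what lets the patch filter executions to a time window on the Redis side instead of loading every execution for the pipeline config id and filtering in the JVM.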