Introduce ContinueOnFailure for Ordered containers
Ordered containers that are also decorated with ContinueOnFailure will not stop running specs after the first spec fails.

Also - this commit fixes a separate bug where timed-out specs were not correctly treated as failures when determining whether or not to run AfterAlls in an Ordered container.
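To see what the AfterAll fix changes, here is a minimal sketch (suite and node names are invented; SpecContext/NodeTimeout are Ginkgo's interruptible-node API): a spec that times out inside an Ordered container now counts as a failure, so the AfterAll cleanup still runs:

package migration_test

import (
	"time"

	. "github.com/onsi/ginkgo/v2"
)

var _ = Describe("a migration", Ordered, func() {
	It("finishes within its deadline", func(ctx SpecContext) {
		<-ctx.Done() // hangs until the NodeTimeout expires; the spec is marked as timed out
	}, NodeTimeout(100*time.Millisecond))

	It("is skipped because an earlier spec failed", func() {})

	AfterAll(func() {
		// with this fix the timed-out spec is treated as a failure,
		// so this cleanup closure still runs
	})
})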
onsi committed Jan 9, 2023
1 parent 89dda20 commit e0123ca
Showing 11 changed files with 390 additions and 29 deletions.
14 changes: 12 additions & 2 deletions decorator_dsl.go
@@ -46,22 +46,32 @@ const Pending = internal.Pending

/*
Serial is a decorator that allows you to mark a spec or container as serial. These specs will never run in parallel with other specs.
-Tests in ordered containers cannot be marked as serial - mark the ordered container instead.
+Specs in ordered containers cannot be marked as serial - mark the ordered container instead.
You can learn more here: https://onsi.github.io/ginkgo/#serial-specs
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const Serial = internal.Serial

/*
-Ordered is a decorator that allows you to mark a container as ordered. Tests in the container will always run in the order they appear.
+Ordered is a decorator that allows you to mark a container as ordered. Specs in the container will always run in the order they appear.
They will never be randomized and they will never run in parallel with one another, though they may run in parallel with other specs.
You can learn more here: https://onsi.github.io/ginkgo/#ordered-containers
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const Ordered = internal.Ordered

+/*
+ContinueOnFailure is a decorator that allows you to mark an Ordered container to continue running specs even if failures occur. Ordinarily an ordered container will stop running specs after the first failure occurs. Note that if a BeforeAll or a BeforeEach/JustBeforeEach annotated with OncePerOrdered fails, then no specs will run, as the precondition for the Ordered container is considered to have failed.
+ContinueOnFailure only applies to the outermost Ordered container. Attempting to place ContinueOnFailure in a nested container will result in an error.
+You can learn more here: https://onsi.github.io/ginkgo/#ordered-containers
+You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
+*/
+const ContinueOnFailure = internal.ContinueOnFailure

/*
OncePerOrdered is a decorator that allows you to mark outer BeforeEach, AfterEach, JustBeforeEach, and JustAfterEach setup nodes to run once
per ordered context. Normally these setup nodes run around each individual spec, with OncePerOrdered they will run once around the set of specs in an ordered container.
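To make the new decorator concrete, a minimal usage sketch (container and spec names invented): ContinueOnFailure is passed alongside Ordered on the outermost container:

package checks_test

import (
	. "github.com/onsi/ginkgo/v2"
)

var _ = Describe("independent checks with shared setup", Ordered, ContinueOnFailure, func() {
	BeforeAll(func() {
		// expensive shared setup, e.g. starting a server or seeding a database
	})

	It("runs check A", func() {})

	It("runs check B", func() {
		// with ContinueOnFailure this spec still runs even if check A failed
	})

	AfterAll(func() {
		// shared teardown runs once the group finishes
	})
})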
11 changes: 10 additions & 1 deletion docs/index.md
@@ -2264,7 +2264,11 @@ Lastly, the `OncePerOrdered` container cannot be applied to the `ReportBeforeEac

Normally, when a spec fails Ginkgo moves on to the next spec. This is possible because Ginkgo assumes, by default, that all specs are independent. However `Ordered` containers explicitly opt in to a different behavior. Spec independence cannot be guaranteed in `Ordered` containers, so Ginkgo treats failures differently.

-When a spec in an `Ordered` container fails all subsequent specs are skipped. Ginkgo will then run any `AfterAll` node closures to clean up after the specs. This failure behavior cannot be overridden.
+When a spec in an `Ordered` container fails all subsequent specs are skipped. Ginkgo will then run any `AfterAll` node closures to clean up after the specs.

+You can override this behavior by decorating an `Ordered` container with `ContinueOnFailure`. This is useful in cases where `Ordered` is being used to provide shared, expensive setup for a collection of specs. When `ContinueOnFailure` is set, Ginkgo will continue running specs even if an earlier spec in the `Ordered` container has failed. If, however, a `BeforeAll` or `OncePerOrdered` `BeforeEach` node has failed, then Ginkgo will skip all subsequent specs, as the setup for the collection of specs is presumed to have failed.

+`ContinueOnFailure` can only be applied to the outermost `Ordered` container. It is an error to apply it to a nested container.
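A sketch of the two failure modes described above (all names invented, assuming the usual dot-import of github.com/onsi/ginkgo/v2): a failing spec does not stop the container, while a failing `BeforeAll` skips everything that follows:

var _ = Describe("a fleet of services", Ordered, ContinueOnFailure, func() {
	BeforeAll(func() {
		// if this setup fails, every spec below is skipped:
		// the shared setup for the collection is presumed broken
	})

	It("service A responds", func() {
		Fail("A is down") // this failure does not stop the container...
	})

	It("service B responds", func() {
		// ...so this spec still runs - the specs are independent
	})
})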

#### Combining Serial and Ordered

@@ -4819,6 +4823,11 @@ The `Ordered` decorator applies to container nodes only. It is an error to try
When a spec in an `Ordered` container fails, all subsequent specs in the ordered container are skipped. Only `Ordered` containers can contain `BeforeAll` and `AfterAll` setup nodes.
+#### The ContinueOnFailure Decorator
+The `ContinueOnFailure` decorator applies to outermost `Ordered` container nodes only. It is an error to try to apply the `ContinueOnFailure` decorator to anything other than an `Ordered` container - and that `Ordered` container must not have any parent `Ordered` containers.
+When an `Ordered` container is decorated with `ContinueOnFailure`, the failure of one spec in the container will not prevent other specs from running. This is useful in cases where `Ordered` containers are used to share common (expensive) setup for a collection of specs but the specs themselves don't rely on one another.
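For contrast, a sketch of the disallowed nesting (invented names) - Ginkgo will reject a suite like this with an error when it builds the spec tree:

var _ = Describe("outer", Ordered, func() {
	Describe("inner", Ordered, ContinueOnFailure, func() { // error: ContinueOnFailure on a nested Ordered container
		It("will never run", func() {})
	})
})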
#### The OncePerOrdered Decorator
The `OncePerOrdered` decorator applies to setup nodes only. It is an error to try to apply the `OncePerOrdered` decorator to a container or subject node.
1 change: 1 addition & 0 deletions dsl/decorators/decorators_dsl.go
@@ -30,6 +30,7 @@ const Focus = ginkgo.Focus
const Pending = ginkgo.Pending
const Serial = ginkgo.Serial
const Ordered = ginkgo.Ordered
+const ContinueOnFailure = ginkgo.ContinueOnFailure
const OncePerOrdered = ginkgo.OncePerOrdered
const SuppressProgressReporting = ginkgo.SuppressProgressReporting

41 changes: 29 additions & 12 deletions internal/group.go
@@ -94,15 +94,19 @@ type group struct {
	runOncePairs   map[uint]runOncePairs
	runOnceTracker map[runOncePair]types.SpecState

-	succeeded bool
+	succeeded              bool
+	failedInARunOnceBefore bool
+	continueOnFailure      bool
}

func newGroup(suite *Suite) *group {
	return &group{
-		suite:          suite,
-		runOncePairs:   map[uint]runOncePairs{},
-		runOnceTracker: map[runOncePair]types.SpecState{},
-		succeeded:      true,
+		suite:                  suite,
+		runOncePairs:           map[uint]runOncePairs{},
+		runOnceTracker:         map[runOncePair]types.SpecState{},
+		succeeded:              true,
+		failedInARunOnceBefore: false,
+		continueOnFailure:      false,
	}
}

@@ -137,10 +141,14 @@ func (g *group) evaluateSkipStatus(spec Spec) (types.SpecState, types.Failure) {
	if !g.suite.deadline.IsZero() && g.suite.deadline.Before(time.Now()) {
		return types.SpecStateSkipped, types.Failure{}
	}
-	if !g.succeeded {
+	if !g.succeeded && !g.continueOnFailure {
		return types.SpecStateSkipped, g.suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
			"Spec skipped because an earlier spec in an ordered container failed")
	}
+	if g.failedInARunOnceBefore && g.continueOnFailure {
+		return types.SpecStateSkipped, g.suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
+			"Spec skipped because a BeforeAll node failed")
+	}
	beforeOncePairs := g.runOncePairs[spec.SubjectID()].withType(types.NodeTypeBeforeAll | types.NodeTypeBeforeEach | types.NodeTypeJustBeforeEach)
	for _, pair := range beforeOncePairs {
		if g.runOnceTracker[pair].Is(types.SpecStateSkipped) {
@@ -168,7 +176,8 @@ func (g *group) isLastSpecWithPair(specID uint, pair runOncePair) bool {
	return lastSpecID == specID
}

-func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
+func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) bool {
+	failedInARunOnceBefore := false
	pairs := g.runOncePairs[spec.SubjectID()]

	nodes := spec.Nodes.WithType(types.NodeTypeBeforeAll)
@@ -194,6 +203,7 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
		}
		if g.suite.currentSpecReport.State != types.SpecStatePassed {
			terminatingNode, terminatingPair = node, oncePair
+			failedInARunOnceBefore = !terminatingPair.isZero()
			break
		}
	}
@@ -216,7 +226,7 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
			//this node has already been run on this attempt, don't rerun it
			return false
		}
-		pair := runOncePair{}
+		var pair runOncePair
		switch node.NodeType {
		case types.NodeTypeCleanupAfterEach, types.NodeTypeCleanupAfterAll:
			// check if we were generated in an AfterNode that has already run
@@ -246,9 +256,13 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
			if !terminatingPair.isZero() && terminatingNode.NestingLevel == node.NestingLevel {
				return true //...or, a run-once node at our nesting level was skipped which means this is our last chance to run
			}
-		case types.SpecStateFailed, types.SpecStatePanicked: // the spec has failed...
+		case types.SpecStateFailed, types.SpecStatePanicked, types.SpecStateTimedout: // the spec has failed...
			if isFinalAttempt {
-				return true //...if this was the last attempt then we're the last spec to run and so the AfterNode should run
+				if g.continueOnFailure {
+					return isLastSpecWithPair || failedInARunOnceBefore //...we're configured to continue on failures - so we should only run if we're the last spec for this pair or if we failed in a runOnceBefore (which means we _are_ the last spec to run)
+				} else {
+					return true //...this was the last attempt and continueOnFailure is false therefore we are the last spec to run and so the AfterNode should run
+				}
			}
			if !terminatingPair.isZero() { // ...and it failed in a run-once. which will be running again
				if node.NodeType.Is(types.NodeTypeCleanupAfterEach | types.NodeTypeCleanupAfterAll) {
@@ -281,10 +295,12 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
		includeDeferCleanups = true
	}

+	return failedInARunOnceBefore
}

func (g *group) run(specs Specs) {
	g.specs = specs
+	g.continueOnFailure = specs[0].Nodes.FirstNodeMarkedOrdered().MarkedContinueOnFailure
	for _, spec := range g.specs {
		g.runOncePairs[spec.SubjectID()] = runOncePairsForSpec(spec)
	}
@@ -301,8 +317,8 @@ func (g *group) run(specs Specs) {
	skip := g.suite.config.DryRun || g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates|types.SpecStateSkipped|types.SpecStatePending)

	g.suite.currentSpecReport.StartTime = time.Now()
+	failedInARunOnceBefore := false
	if !skip {
-
		var maxAttempts = 1

		if g.suite.currentSpecReport.MaxMustPassRepeatedly > 0 {
@@ -327,7 +343,7 @@ func (g *group) run(specs Specs) {
			}
		}

-		g.attemptSpec(attempt == maxAttempts-1, spec)
+		failedInARunOnceBefore = g.attemptSpec(attempt == maxAttempts-1, spec)

		g.suite.currentSpecReport.EndTime = time.Now()
		g.suite.currentSpecReport.RunTime = g.suite.currentSpecReport.EndTime.Sub(g.suite.currentSpecReport.StartTime)
@@ -355,6 +371,7 @@ func (g *group) run(specs Specs) {
		g.suite.processCurrentSpecReport()
		if g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates) {
			g.succeeded = false
+			g.failedInARunOnceBefore = g.failedInARunOnceBefore || failedInARunOnceBefore
		}
		g.suite.selectiveLock.Lock()
		g.suite.currentSpecReport = types.SpecReport{}
(The remaining 7 changed files in this commit are not shown here.)
