Add support for VSCode problem-tracking for failed assertions #1700
Conversation
@nohwnd Got any ideas on how to build tests for output like this? We could mock Write-Host.
Not sure if I understand what you mean here. But yeah, writing P tests (those in .ts.ps1) and invoking the actual Pester tests in a job would be a good way forward. You can capture the output directly by receiving the job output.
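As a rough sketch of that job-based idea (assuming PowerShell 5 or newer, where Write-Host goes to the information stream and can be redirected when receiving the job; the test path is a placeholder):

```powershell
# Minimal sketch: run Pester in a background job and capture its console output.
# Assumes PowerShell 5+; 'C:\temp\Demo.Tests.ps1' is a placeholder path.
$job = Start-Job -ScriptBlock {
    Import-Module Pester
    Invoke-Pester -Path 'C:\temp\Demo.Tests.ps1'
}
Wait-Job -Job $job | Out-Null

# Merge the information stream (6) into the output so Write-Host text is captured too
$output = Receive-Job -Job $job 6>&1
Remove-Job -Job $job

$output
```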
Can I mock in P-tests? Got a quick example? Prior to PSv5, you can't redirect or catch the output of Write-Host.

```powershell
Describe "VSCode Problem Matcher Output" {
    BeforeAll {
        <#
        # Some way to catch the output. Only officially supported in PSv5 and above
        $TestFile = 'TestDrive:/VSCodeOutputTest.txt'
        Mock Write-Host {
            $Object | Out-File -FilePath $TestFile -NoNewline:$NoNewline
        }
        #>

        # Storing the original pattern from vscode-powershell below for easier comparison
        $titlePattern = ('^\\s*(?:\\[-\\]\\s+)(.*?)(?:\\s+\\d+\\.?\\d*\\s*m?s)\\s*?(?:\\([\\w\\|]+\\))?\\s*?$' -replace '\\\\','\')
        $atPattern = ('^\\s+[Aa]t\\s+([^,]+,)?(.+?):(\\s+line\\s+)?(\\d+)(\\s+char:\\d+)?$' -replace '\\\\','\')

        # Fake running as a VSCode task to trigger the VSCode problem-matcher output style
        $org_TERMPROGRAM = $env:TERM_PROGRAM
        $env:TERM_PROGRAM = 'vscode'
        $psEditor = $null

        $sb = {
            Describe 'VSCode Output Test' {
                It 'Single error' {
                    1 | Should -Be 2
                }
                It 'Multiple errors' {
                    1 | Should -Be 2
                    1 | Should -Be 3
                }
            }
        }

        $configuration = [PesterConfiguration]::Default
        $configuration.Run.ScriptBlock = $sb
        $configuration.Run.PassThru = $true
        $configuration.Should.ErrorAction = 'Continue'
        $results = Invoke-Pester -Configuration $configuration
        # Need to catch Write-Host called in Invoke-Pester somehow, let's say into $hostOut
    }

    It "Tests failed as expected" {
        $results.FailedCount | Should -Be 2
    }

    It "Matches test with single error" {
        $hostSingleError = $hostOut | Select-String 'Single error' -Context 0,1
        $hostSingleError.Line | Should -Match $titlePattern
        $hostSingleError.PostContext[0] | Should -Match $atPattern
    }

    It "Matches test with multiple errors" {
        $hostMultipleErrors = $hostOut | Select-String 'Multiple error' -Context 0,1
        $hostMultipleErrors.Count | Should -Be 2
        $hostMultipleErrors[0].Line | Should -Match $titlePattern
        $hostMultipleErrors[0].PostContext[0] | Should -Match $atPattern
        $hostMultipleErrors[1].Line | Should -Match $titlePattern
        $hostMultipleErrors[1].PostContext[0] | Should -Match $atPattern
    }

    AfterAll {
        # Restore the environment variable and remove the fake $psEditor
        $env:TERM_PROGRAM = $org_TERMPROGRAM
        Remove-Variable -Name psEditor -Scope Local
    }
}
```

Maybe I'm overcomplicating and you've got some simpler way of doing this?
@fflaten Looks like you are overcomplicating it. You can't mock in P tests, and you should not need to mock there. The idea is to run some tests as they would run in VSCode, and then inspect the results in the P test. Right now you are running Pester in Pester, trying to assert in the Pester test that the output is correct; don't do that. The Pester test is the SUT (system under test), so it should not test itself. Instead of running Pester in Pester, write a P test that runs a simple Pester test and inspects the result. You are right that we cannot capture the output prior to PowerShell 5 when running in a job. So we can either test this only on PowerShell 5 and newer, OR run it in a separate process, which gives us the serialized text output (IMHO better), just as it would print to a console.

```powershell
# in a new file Pester.RSpec.Output.ts.ps1
param ([switch] $PassThru)

Get-Module Pester.Runtime, Pester.Utility, P, Pester, Axiom, Stack | Remove-Module
Import-Module $PSScriptRoot\p.psm1 -DisableNameChecking
Import-Module $PSScriptRoot\axiom\Axiom.psm1 -DisableNameChecking

& "$PSScriptRoot\..\build.ps1"
Import-Module $PSScriptRoot\..\bin\Pester.psd1

$global:PesterPreference = @{
    Debug = @{
        ShowFullErrors         = $true
        WriteDebugMessages     = $false
        WriteDebugMessagesFrom = "Mock"
        ReturnRawResultObject  = $true
    }
    Output = @{
        Verbosity = "None"
    }
}
$PSDefaultParameterValues = @{}

function Invoke-PesterInProcess ([ScriptBlock] $ScriptBlock) {
    # get the path of the currently loaded Pester to re-import it in the child process
    $pesterPath = Get-Module Pester | Select-Object -ExpandProperty Path
    $powershell = Get-Process -Id $pid | Select-Object -ExpandProperty Path
    # run the test in a separate process to be able to grab all the output
    $command = {
        param ($PesterPath, [ScriptBlock] $ScriptBlock)
        Import-Module $PesterPath
        $container = New-TestContainer -ScriptBlock $ScriptBlock
        Invoke-Pester -Container $container
    }.ToString()
    $cmd = "& { $command } -PesterPath ""$PesterPath"" -ScriptBlock { $ScriptBlock }"
    & $powershell -NoProfile -ExecutionPolicy Bypass -Command $cmd
}

i -PassThru:$PassThru {
    b "Output in VSCode mode" {
        t "Outputs errors with the location marker on top" {
            $sb = {
                Describe "d" {
                    BeforeAll {
                        # set the env variables and don't bother cleaning up,
                        # they will go away when the process exits
                    }
                    It "i" {
                        # fail the test
                        $false | Should -Be $true
                    }
                }
            }
            $output = Invoke-PesterInProcess $sb
            # do your verification here, right now it will fail
            # to show that we captured some output
            $output | Verify-Null
        }
    }
}
```
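For reference, such a .ts.ps1 file would be invoked directly like the other P test files; the relative path below is an assumption about where it would live in the repository:

```powershell
# Run the P test file on its own; -PassThru is the switch declared at the top of the file.
# The tst\ location is an assumption based on the ..\build.ps1 and ..\bin paths it uses.
& .\tst\Pester.RSpec.Output.ts.ps1 -PassThru
```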
@nohwnd Omg, I discarded the child-process approach along with jobs, thinking they would behave the same due to the same host (e.g. conhost). Thanks! 👏 Will find some time in the next days to update with tests and start QA on the new regex pattern. Any thoughts on removing WriteVSCodeMarker?
WriteVSCodeMarker <- you are free to remove it everywhere it is used in the code. Just don't remove it from the config object; we should deprecate it somehow.
It's never been used in Pester 5.0.0+ besides a simple default-configuration test and a comment block with placeholder code for the VSCode output, which is removed in this PR. So there's not much to deprecate, since v5.0 changed the configuration object and broke old scripts anyway. The only reason to keep it would be to easily revert to the option value later, or maybe to implement "autodetect OR option" for the VSCode output in this PR. We could move the autodetect logic inside the assembly and use it for the default option value (if I'm able to check the psEditor variable from it). Then people could still use the option to override the default.
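A rough sketch of that autodetect idea, using the same signals the draft test above fakes ($psEditor and $env:TERM_PROGRAM); the helper name is made up and this is not the code that ships in this PR:

```powershell
# Hypothetical helper: decide whether we are running under VSCode / the PowerShell
# extension, so the VSCode-style output could become the default. Illustration only.
function Test-RunningInVSCode {
    $psEditorValue = Get-Variable -Name psEditor -Scope Global -ValueOnly -ErrorAction SilentlyContinue
    ($null -ne $psEditorValue) -or ($env:TERM_PROGRAM -eq 'vscode')
}
```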
/azp run |
Azure Pipelines successfully started running 1 pipeline(s). |
@fflaten OK, remove it then. I know it did not work; I just did not want to break people who might have used it by accident. But it's probably very low risk.
Some of these comments are outside the scope of this PR, though they could probably be addressed as part of opening up Output.ps1, either by fixing the issues below or ignoring them in the file. Others might warrant separate issues; let me know if I should open any that seem important.

```powershell
Invoke-ScriptAnalyzer .\Output.ps1
```
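For what it's worth, "ignoring them in the file" usually means an in-file suppression attribute; a generic sketch below, with PSAvoidUsingWriteHost only as an example rule, not a confirmed finding in Output.ps1:

```powershell
# Generic PSScriptAnalyzer suppression sketch; the rule name is only an example.
[Diagnostics.CodeAnalysis.SuppressMessageAttribute('PSAvoidUsingWriteHost', '')]
param()

Write-Host 'Output helpers intentionally write to the host'
```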
@asears Are you commenting on the correct PR? I don't see how most of those are relevant to this issue 😵
To support regex problemMatcher-pattern in vscode-powershell extension (force-pushed from 30bd8a6 to 0094d7c)
I believe this is good to go now. The WIP PR in vscode-powershell is ready as well, but it's waiting for this to be implemented, just in case they want to test. The tests add about 5 seconds due to creating two PowerShell sessions. I could replace them with one.
Looks good. Do you need me to release this so you can test it with the VSCode extension?
It can wait for the next release with more changes. I've already tested it by manually editing the package.json in the extension and using a local Pester build. The other PR is pending this one, just in case the output changes further during review. If the vscode-powershell maintainers want to test, they can use a local Pester build and a suitably configured task.
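As a rough illustration (not the snippet from the comment above), the command such a task might run could look like this; the paths are placeholders, and only the Should.ErrorAction setting comes from this PR:

```powershell
# Hypothetical command for a VSCode test task using a local Pester build.
# Module and test paths are placeholders.
Import-Module 'C:\src\Pester\bin\Pester.psd1' -Force

$configuration = [PesterConfiguration]::Default
$configuration.Run.Path = '.\tests'
# Continue past the first failed assertion so each failed Should line
# is reported as its own failed test (and therefore its own VSCode problem)
$configuration.Should.ErrorAction = 'Continue'

Invoke-Pester -Configuration $configuration
```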
1. General summary of the pull request
In previous versions, VSCode's problem matcher could report failed Pester tests using a regex pattern and tasks.json setup included in the vscode-powershell extension. The new test-output format introduced in Pester v5 broke this feature. See PowerShell/vscode-powershell#2931.
This PR, together with a related PR in vscode-powershell, restores the feature by producing an output format similar to previous versions when tests are run via VSCode tasks.
When test tasks are configured to continue beyond the first failed assertion (configuration option [PesterConfiguration]@{Should=@{ErrorAction="Continue"}}), every failed assertion gets its own failed-test output. This lets VSCode detect and show one problem per failed assertion, which also highlights every failed Should line in a test for easier troubleshooting.

Fix #1577
Fix #1452
Todos
- Update the problemMatcher pattern to support the new (UserDuration|FrameworkDuration) timings in v5 (see the sketch after this list)
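As a quick sanity check, the title and "at" patterns quoted in the draft test earlier in this thread can be exercised against hand-written lines; the sample strings below are assumptions about the shape of the v5 output, not captured output:

```powershell
# Illustrative regex check; the sample lines are made up, only the patterns come
# from the draft test earlier in this conversation.
$titlePattern = '^\s*(?:\[-\]\s+)(.*?)(?:\s+\d+\.?\d*\s*m?s)\s*?(?:\([\w\|]+\))?\s*?$'
$atPattern    = '^\s+[Aa]t\s+([^,]+,)?(.+?):(\s+line\s+)?(\d+)(\s+char:\d+)?$'

'[-] Single error 23ms (11ms|12ms)' -match $titlePattern                   # expect True
'   at <ScriptBlock>, C:\demo\Demo.Tests.ps1: line 12' -match $atPattern   # expect True
```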