
V5 not honoring WriteVSCodeMarker option #1577

Closed
rkeithhill opened this issue May 28, 2020 · 9 comments · Fixed by #1700
rkeithhill commented May 28, 2020

1. General summary of the issue

I've specified the WriteVSCodeMarker option to work with the PowerShell extension's Pester problem matcher. That causes Pester failures to show up in the Problems window and tests get red squiggles. But this is no longer working with Pester v5.0.0.

2. Describe Your Environment

Pester version     : 5.0.0 C:\Users\Keith\Documents\WindowsPowerShell\Modules\Pester\5.0.0\Pester.psd1
PowerShell version : 5.1.18362.752
OS version         : Microsoft Windows NT 10.0.18363.0

3. Expected Behavior

I expect this one failure to show up in the Problems window because the PowerShell extension's problem matcher was able to find the error along with filename and line info.

4. Current Behavior

(screenshot omitted)

5. Possible Solution

Part of the reason Dave (IIRC) added the VSCode marker was that we wanted a text format that didn't change between different versions of Pester. Also, the VSCode problem matcher is regex driven so it can be a bit of a pain to use. We found it hard to come up with a problem matcher spec when the error filename/line number occurred on varying lines below the initial error line. Here is the current pester problem matcher:

      {
        "name": "pester",
        "owner": "powershell",
        "fileLocation": [
          "absolute"
        ],
        "severity": "error",
        "pattern": [
          {
            "regexp": "^\\s*(?:\\[-\\]\\s+)(.*?)(?:\\s+\\d+\\.?\\d*\\s*m?s)\\s*$",
            "message": 1
          },
          {
            "regexp": "^\\s+[Aa]t\\s+([^,]+,)?(.+?):(\\s+line\\s+)?(\\d+)(\\s+char:\\d+)?$",
            "file": 2,
            "line": 4
          }
        ]
      }
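The two patterns above chain: the first captures the failure message from the `[-]` line, the second captures the file and line number from the `at ...` line that follows. A minimal sketch of that behavior using Python's `re` (close enough to VS Code's JavaScript regex engine for these patterns; the sample output lines here are made up):

```python
import re

# Regexes transcribed from the problem matcher above (JSON escaping removed).
FAIL = re.compile(r"^\s*(?:\[-\]\s+)(.*?)(?:\s+\d+\.?\d*\s*m?s)\s*$")
LOC = re.compile(r"^\s+[Aa]t\s+([^,]+,)?(.+?):(\s+line\s+)?(\d+)(\s+char:\d+)?$")

sample = [
    "[-] Should be true 123ms",
    " at <ScriptBlock>, /workspaces/Pester/demo.tests.ps1: line 14",
]

m1 = FAIL.match(sample[0])
m2 = LOC.match(sample[1])
print(m1.group(1))          # "message": 1 -> the failure message
print(m2.group(2).strip())  # "file": 2    -> the file path
print(m2.group(4))          # "line": 4    -> the line number
```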

6. Context

I configure tasks.json in VSCode to run all my tests and I bind that to a keyboard shortcut (typically ctrl+shift+t).

My Pester test runner task in my various VSCode projects no longer works like it used to.

nohwnd added this to the 5.1 milestone May 28, 2020
nohwnd (Member) commented May 28, 2020

Thanks for the report. This indeed needs to be addressed, but the last time I tried, VS Code itself was broken and I never got back to it. Putting this into 5.1. I think detecting VS Code via an environment variable will be a better option, and will align better with adding automatic colors to AzDO output, or similar.

fflaten (Collaborator) commented Sep 30, 2020

I've been playing around with this tonight. The easiest option would be to output the Pester 3/4 format to keep a single problemMatcher for Pester in vscode-powershell. We'd just need to extend it to support the new UserDuration/FrameworkDuration timings, which is easy enough. There are some other decisions to discuss as well.

  1. Are we sure about environment-variable detection? If so, I'm thinking of using $env:TERM_PROGRAM -eq 'vscode' -and -not $psEditor to detect VS Code while ignoring the integrated console. This would avoid downgrading the output for interactive usage in the terminal or for CodeLens links. Tasks (which are where the problemMatcher is used) run outside the integrated terminal (no $psEditor variable). A WriteVSCodeMarker option would be less fragile to potential changes in the env variables in the future and would allow a user to trigger it manually. Thoughts?

  2. Pester 5 allows us to run all checks in a test before generating output (Should.ErrorAction = "Continue"). Should we generate one problem per failed check (which would highlight all the failing lines) or one per test (highlighting only one line) like today?

If we want to loop through the errors, we need to choose between two approaches, as the problemMatcher format is quite strict; see the example options below.

Alternative 1 - individual output per assertion.
It will show e.g. two failed tests, while the summary will say "Failed: 1", which is a bit confusing.

[-] VSCode Problem demo.MyContext Pester5.multiple errors 15ms (13ms|1ms)
 at <ScriptBlock>, /workspaces/Pester/demoProblemMatcher.tests.ps1:14
 Expected 2, but got 1.
 at 1 | Should -Be 2, /workspaces/Pester/demoProblemMatcher.tests.ps1:14
[-] VSCode Problem demo.MyContext Pester5.multiple errors 15ms (13ms|1ms)
 at <ScriptBlock>, /workspaces/Pester/demoProblemMatcher.tests.ps1:15
 Expected 3, but got 1.
 at 1 | Should -Be 3, /workspaces/Pester/demoProblemMatcher.tests.ps1:15

Alternative 2 - using loop in problemMatcher.
The failed-count matches, but I personally don't like this very much. It looks clean, but users lose all the readable text that helps them troubleshoot the failed test.

[-] VSCode Problem demo.MyContext Pester5.multiple errors 14ms (13ms|2ms)
 at <ScriptBlock>, /workspaces/Pester/demoProblemMatcher.tests.ps1:14
 at <ScriptBlock>, /workspaces/Pester/demoProblemMatcher.tests.ps1:16
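For context, a `"loop": true` flag on the last pattern of a VS Code problemMatcher makes it re-apply to each subsequent line, which is what alternative 2 relies on: one failure line fans out into one problem per location line. A rough Python simulation of that matching (these regexes are simplified stand-ins, not the actual matcher patterns):

```python
import re

# Simplified stand-ins for the matcher patterns (not the real ones above).
FAIL = re.compile(r"^\s*\[-\]\s+(.*?)\s+\d+ms \(\d+ms\|\d+ms\)\s*$")
LOC = re.compile(r"^\s+at\s+[^,]+,\s*(.+?):(\d+)\s*$")

output = [
    "[-] VSCode Problem demo.MyContext Pester5.multiple errors 14ms (13ms|2ms)",
    " at <ScriptBlock>, /workspaces/Pester/demoProblemMatcher.tests.ps1:14",
    " at <ScriptBlock>, /workspaces/Pester/demoProblemMatcher.tests.ps1:16",
]

problems = []
message = None
for line in output:
    if (m := FAIL.match(line)):
        message = m.group(1)  # start of a failed-test block
    elif message and (m := LOC.match(line)):
        # "loop" behavior: every following location line becomes a problem
        problems.append((m.group(1), int(m.group(2)), message))

print(len(problems))  # one problem per location line
```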

nohwnd (Member) commented Oct 1, 2020

  1. Yes, that would be nice. I am guessing that VS Code interactive runs are probably where the majority of tests are run when they are not run in CI, so I don't want to make the usage in VS Code worse.

  2. I think alternative 1 is better. I just don't like that the location is written twice, but that might be just my problem.

fflaten (Collaborator) commented Oct 1, 2020

  1. Yes, that would be nice. I am guessing that VS Code interactive runs are probably where the majority of tests are run when they are not run in CI, so I don't want to make the usage in VS Code worse.

That's a GO for $env:TERM_PROGRAM -eq 'vscode' -and -not $psEditor, correct? Just to clarify since I threw in the codemarker-option at the end 😅
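Spelled out, the proposed check keys off two signals: the TERM_PROGRAM environment variable that VS Code sets in terminals it spawns, and the $psEditor variable that only exists in the PowerShell extension's integrated console. A sketch of the equivalent logic in Python (the function name and parameters are illustrative only, not part of Pester):

```python
def wants_vscode_markers(env, ps_editor_present):
    # Equivalent of: $env:TERM_PROGRAM -eq 'vscode' -and -not $psEditor
    # Per the proposal, tasks run in a VS Code-spawned shell (TERM_PROGRAM
    # set, no $psEditor), while the integrated console also defines $psEditor.
    return env.get("TERM_PROGRAM") == "vscode" and not ps_editor_present

# Task spawned by VS Code: emit problem-matcher friendly output.
print(wants_vscode_markers({"TERM_PROGRAM": "vscode"}, False))  # True
# Interactive integrated console: keep the normal Pester 5 output.
print(wants_vscode_markers({"TERM_PROGRAM": "vscode"}, True))   # False
```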

  2. I think alternative 1 is better. I just don't like that the location is written twice, but that might be just my problem.

There is also the alternative of NOT showing all errors: keeping it like it is today and just displaying a single problem per test. I figured it would be nice for users to find all errors in the first run if they choose to enable the option in PesterConfiguration, but it also might get noisy, since a failed test could trigger 5+ asserts due to the same root issue. Any experience with how other languages and test frameworks handle this?

nohwnd (Member) commented Oct 1, 2020

  1. Yes
  2. They don't; most other frameworks just report a single failure, except the Go testing framework, I think. That is the only one I am aware of that reports such combined errors. It would be nice to report all errors, so I would make that the default, and if it gets noisy we can go from there.

asears (Contributor) commented Nov 30, 2020

It seems as though $configuration.Debug.WriteVSCodeMarker was removed recently? The README and documentation may need updating. I see the readme mentions it will be removed in the future, but it broke a recent pipeline script.

++ @nohwnd @fflaten

fflaten (Collaborator) commented Nov 30, 2020

Yes. We didn't expect anyone to use it, as it was renamed and never implemented in Pester v5, so we considered it low-risk.

The option and all mentions of it were removed in #1700, except for one sentence in the "Additional issues to be solved in 5.1" section of the README, which I guess should be updated now anyway.

nohwnd (Member) commented Dec 1, 2020

@asears did you experience the problem? Is that how you noticed?

asears (Contributor) commented Dec 1, 2020

After the upgrade, one of my scripts that set this option broke. I removed all accesses to the $configuration.Debug object to address it.
