
Enable AUTHORING UX approval tests creation #3868

Closed
18 of 26 tasks
Tracked by #4281
DavidKarlas opened this issue Sep 6, 2021 · 23 comments
Labels: 7.0 · Cost:L (work that requires one engineer up to 4 weeks) · Priority:1 (work that is critical for the release, but we could probably ship without) · triaged (the issue was evaluated by the triage team, placed on the correct area, next action defined) · User Story (a single user-facing feature; can be grouped under an epic)

@DavidKarlas
Contributor

DavidKarlas commented Sep 6, 2021

Background

Today there is no good story for template authors to test their templates and ensure they work as intended after they make changes or the environment around them changes (.NET Framework, Template Engine, ...).

We have https://github.com/dotnet/templating/tree/main/tools/ProjectTestRunner, but it's pretty hard for a novice template author to understand and navigate; also, the tooling is not compiled as a dotnet global tool that one could just use.
We run tests in our repo using Process.Start("dotnet new console") and then check the output, see example here. Again, not a very good way for a template author to run/maintain tests.

Outcomes

This enables the template development inner loop. We want to support approval tests, which means a template author would:

  1. Run dotnet new console once to create the initial content.
  2. On test run, TemplateEngine will provide the ability to run tests that compare the content from step 1 with what it would generate now; if a change is intentional, the author re-runs step 1 and commits the changes to git.
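The inner loop above boils down to "snapshot once, diff on every run". A minimal plain-shell sketch of that idea (directory names and file contents are illustrative stand-ins, not the proposed tooling):

```shell
# Approval-test inner loop, sketched in plain shell.
# "snapshot" stands in for the content created once in step 1;
# "current" stands in for what the engine would regenerate on a test run.
mkdir -p snapshot current
echo 'class Program { }' > snapshot/Program.cs
cp snapshot/Program.cs current/Program.cs     # step 2 would regenerate; here we copy
if diff -r snapshot current > /dev/null; then
  echo "approved: output matches snapshot"
else
  echo "changed: re-run step 1 and commit if the change is intentional"
fi
```

In the real tooling the "current" side would be produced by re-running the template through TemplateEngine rather than by copying files.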

Justification

  • Customer impact - 1st-party customers have easy tools to test their templates (a popular request)
  • Engineering impact - automated testing contributes to fewer bugs; automated testing reduces the amount of manual testing; teams don't need to invent and support their own testing tooling

Prerequisite

What needs to be solved: how to handle random values like PortNumber or GUIDs...
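One common way to handle such values is to scrub them to stable placeholders before comparison. A sketch with sed (the patterns, placeholders, and file names are illustrative, not the engine's actual scrubbing mechanism):

```shell
# Scrub nondeterministic values (GUIDs, port numbers) to stable
# placeholders before diffing, so snapshots stay deterministic.
printf 'ConnectionId=3f2504e0-4f89-11d3-9a0c-0305e82c3301\nPort=52341\n' > output.txt
sed -E \
  -e 's/[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}/<GUID>/g' \
  -e 's/Port=[0-9]+/Port=<PORT>/g' \
  output.txt > scrubbed.txt
cat scrubbed.txt   # prints ConnectionId=<GUID> and Port=<PORT>
```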

Subtasks

Investigations:

  • Get stats on usage of nondeterministic generators (Guid, Now, Port, Random) - @vlada-shubina
  • Investigate ways of using the XUnit Verifier so that multiple verifications can be performed and reported (even if multiple are failing)
    • Verify.Net doesn't support verification of multiple files at the moment.
    • Simon is considering implementing it in the near future.
    • We stick to 1-by-1 file comparison at the moment.
  • Go through CommonTemplatesTests to assess what functionality we'll need from the test framework in order to transform and adopt these tests.
    • stdout and stderr content comparison
    • content regex matching
    • content substring matching, absence of patterns
    • newlines normalization
    • custom content checking (xml parsing)
  • Investigate options to programmatically change the dotnet SDK version used to run an SDK tool (as a fallback we can programmatically create and then discard global.json).
    We will leverage global.json for this - simplified approach:

      ren global.json global.json.bak
      dotnet new globaljson --sdk-version <version>
      ren global.json.bak global.json
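The same swap can be written as a POSIX shell script (`ren` becomes `mv`); the `dotnet new globaljson` call is shown commented out and faked with `printf` so the sketch is self-contained:

```shell
# global.json swap: temporarily pin a different SDK version, run the tool,
# then restore the original file. Versions here are illustrative.
printf '{ "sdk": { "version": "6.0.100" } }\n' > global.json   # pre-existing pin
mv global.json global.json.bak                                 # ren global.json global.json.bak
# dotnet new globaljson --sdk-version 7.0.100                  # would write the temporary pin
printf '{ "sdk": { "version": "7.0.100" } }\n' > global.json   # stand-in for the dotnet call
grep '"version"' global.json                                   # the tool would run against 7.0.100 here
mv global.json.bak global.json                                 # ren global.json.bak global.json
```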

Subtasks for MVP (not to be exposed to customers):

V2 (preparation for customers exposed toolset):

  • Define Verification module API. CLI and MSBuild interfaces should call the logic through the API
  • Extract external process wrapping logic (Command) from Microsoft.DotNet.Cli.Utils or find another utility for wrapping CLI processes - and get rid of copied code within the Microsoft.TemplateEngine.Authoring.TemplateVerifier. (this task might be joined with Consider dropping dependency to Microsoft.DotNet.Cli.Utils. #5296)
  • Support batch execution logic for multiple test cases (probably configured by files)
  • Support filter/ignore lists (with default behavior that should suffice in most common cases) - e.g. to be able to ignore images, bin/* outputs etc.
  • Support for not installed templates (arbitrary location on disk)
  • Support for switching sdk versions
  • Documenting API and CLI in docs/wiki
  • Simplify and unify working with paths (include/exclude patterns, paths given in the custom scrubber and custom verifier, etc.) - the tooling should be permissive and able to accept a mix of directory separator chars, and it should have settings for enforcing the separator char used in paths it passes out (to the custom scrubber and verifier). Paths passed out should probably use a custom type - allowing fetching the relative path (to the template root or test root), the absolute path, and paths with custom separators.
  • Telemetry opt out in our integration tests (currently: https://github.com/dotnet/sdk/blob/main/src/Tests/Microsoft.NET.TestFramework/Commands/DotnetNewCommand.cs#L20, possibilities: explicitly setting the env var in the integration test fixture; or the ability to inject env vars into the instantiator process via the API)

Next iterations (ideally part of the first customer facing version):

  • Review (and adjust if needed) signing of tooling imposed by source build - is it required for shipping?
  • Rewrite more snapshots-based integration test for templating instantiation in sdk to use the tooling
  • Add telemetry
  • Implement context detection and extraction for nondeterministic generators handling (so e.g. for Port generator, the logic should be able to detect the resulting value in the generated output and then process the output by replacing all instances of the generator being used).
  • Add Template Validator as another tool in the authoring toolset. Implement just a sample of the most important validations.
    • Create MSBuild Task version of the Template Validator
    • Design and use continuable errors during validation - so that as many errors as possible can be reported during a single run (while not reporting nonsense issues caused by inconsistent data detected in previous steps).
  • Investigate, Design and implement deterministic mode for Macros (and hence generators): macros 2.0 #5223
  • Build in property based testing
@bekir-ozturk bekir-ozturk added the Epic Groups multiple user stories. Can be grouped under a theme. label Sep 13, 2021
@bekir-ozturk bekir-ozturk added this to the Backlog milestone Sep 13, 2021
@bekir-ozturk bekir-ozturk added .NET 7 candidate triaged The issue was evaluated by the triage team, placed on correct area, next action defined. labels Sep 13, 2021
@vlada-shubina vlada-shubina removed the Epic Groups multiple user stories. Can be grouped under a theme. label Sep 15, 2021
@RussKie
Member

RussKie commented Nov 22, 2021

@vlada-shubina @DavidKarlas a number of repos are starting to create .NET 7 templates (e.g., dotnet/winforms#6206), and having a test infra for templates could be very helpful.

@vlada-shubina vlada-shubina added User Story A single user-facing feature. Can be grouped under an epic. and removed .NET 7 candidate labels Mar 31, 2022
@vlada-shubina vlada-shubina mentioned this issue Apr 4, 2022
3 tasks
@donJoseLuis donJoseLuis added 7.0 Priority:1 Work that is critical for the release, but we could probably ship without Cost:L Work that requires one engineer up to 4 weeks labels Apr 5, 2022
@donJoseLuis donJoseLuis changed the title Make it simple for template authors to create approval tests Enable AUTHORING UX approval tests creation Apr 5, 2022
@vlada-shubina vlada-shubina modified the milestones: Backlog, July 2022 May 4, 2022
@vlada-shubina vlada-shubina modified the milestones: July 2022, August 2022 Jul 12, 2022
@vlada-shubina
Member

This feature will be useful for testing #3418, to avoid the need to use a custom settings location and install the template.

@JanKrivanek
Member

JanKrivanek commented Aug 19, 2022

Based on a brainstorming session with @vlada-shubina, these are the tasks we came up with:

Investigations:

  • Get stats on usage of nondeterministic generators (Guid, Now, Port, Random) - @vlada-shubina
  • Investigate ways of using the XUnit Verifier so that multiple verifications can be performed and reported (even if multiple are failing)
    • Verify.Net doesn't support verification of multiple files at the moment.
    • Simon is considering implementing it in the near future.
    • We stick to 1-by-1 file comparison at the moment.
  • Go through CommonTemplatesTests to assess what functionality we'll need from the test framework in order to transform and adopt these tests.
    • stdout and stderr content comparison
    • content regex matching
    • content substring matching, absence of patterns
    • newlines normalization
    • custom content checking (xml parsing)
  • Investigate options to programmatically change the dotnet SDK version used to run an SDK tool (as a fallback we can programmatically create and then discard global.json).
    We will leverage global.json for this - simplified approach:

      ren global.json global.json.bak
      dotnet new globaljson --sdk-version <version>
      ren global.json.bak global.json

Subtasks for MVP (not to be exposed to customers):

  • Generalize and repurpose Microsoft.TemplateEngine.TemplateLocalizer as the templates authoring toolset. Packaged as a NuGet package - @vlada-shubina
  • Define configuration model for a single test case ({template to be tested; dotnet sdk version; parameter values; approvals location}). Create System.CommandLine Parser transforming CLI arguments to this configuration model
  • Verification logic module (the API and actual logic doesn't have to be polished for first version) - @JanKrivanek
  • Add a programmatic way of simple scrubbing and/or replacing, keyed by files.
  • Transform and onboard CommonTemplatesTests to the new framework

V2 (preparation for customers exposed toolset):

  • Define Verification module API. CLI and MSBuild interfaces should call the logic through the API
  • (In Progress) Implement context detection and extraction for nondeterministic generators handling (so e.g. for Port generator, the logic should be able to detect the resulting value in the generated output and then process the output by replacing all instances of the generator being used).
  • Support batch execution logic for multiple test cases (probably configured by files)
  • Support filter/ignore lists (with default behavior that should suffice in most common cases) - e.g. to be able to ignore images, bin/* outputs etc.

Next iterations (ideally part of the first customer facing version):

  • Add telemetry
  • Add Template Validator as another tool in the authoring toolset. Implement just a sample of the most important validations (more comprehensive list: Authoring tools: templates & template packages validation #2623)
    • Create MSBuild Task version of the Template Validator
    • Design and use continuable errors during validation - so that as many errors as possible can be reported during a single run (while not reporting nonsense issues caused by inconsistent data detected in previous steps).
  • Investigate, Design and implement deterministic mode for Macros (and hence generators)

@RussKie
Member

RussKie commented Aug 23, 2022

Great plan!

  • Investigate ways of usage of XUnit Verifier so that multiple verification can be performed and reported (even if multiple are failing)

This point sounds strange... Perhaps I don't quite understand the intent here, could you elaborate on this please?

@vlada-shubina
Member

vlada-shubina commented Aug 23, 2022

We plan to use Verify.NET to build the framework. We need to investigate whether it can do verification of multiple files out of the box. So far we are using it only for single file/object validation.

@RussKie
Member

RussKie commented Aug 23, 2022

We use Verify in Windows Forms repos quite a bit, and I don't think it's possible to verify multiple files simultaneously; such verifications need to be serialized. (That is, how do you present multiple failures in a diff tool?)
This is how we verify multiple files in a single test:

    protected async Task VerifyAsync(
        IVisualStudioDocument mainFile,
        IVisualStudioDocument codeBehindFile,
        [CallerMemberName] string testMethodName = "")
    {
        var (TestMethodName, AdditionalPath) = GetTestContext(testMethodName);

        await VerifyAsync(mainFile.GetTextBuffer().CurrentSnapshot, $"{TestMethodName}_{MainFileSuffix}", AdditionalPath);
        await VerifyAsync(codeBehindFile.GetTextBuffer().CurrentSnapshot, $"{TestMethodName}_{CodeBehindFileSuffix}", AdditionalPath);
    }

    private static Task VerifyAsync(ITextSnapshot textSnapshot, string methodName, string additionalPath)
        => Verifier.Verify(textSnapshot.GetText())
            .UseDirectory($@"TestData\{additionalPath}")
            .UseFileName(methodName);

Pinging @SimonCropp for sharing his thoughts on this.

@SimonCropp
Contributor

@RussKie yeah that's not ideal. where do i find that code so i can have a go at making it better?

@vlada-shubina
Member

Just to explain our use case better: we would like to verify the template output, which can be N files with an arbitrary folder structure.
Example:

  • src
    • Project1
      • project 1 content goes here (multiple files)
    • Project2
      • project 2 content goes here (multiple files)
  • test
    • test files go here

The template output is pretty static (with some exceptions like guids, dates etc.), so approval tests seem to be a good match for testing it.

Ideally we do the following:

await VerifyFiles(pathToTemplateOutput); 

and this produces a snapshot identical to the folder/files in the pathToTemplateOutput folder.
Ideally, in case of failure, all the failures will be shown in one window. Certain settings decorators should still be applicable.
The initial plan was to do something similar to what @RussKie mentioned above, but we have many files to check and don't want to fail on the first incorrect file.

Imho, this use case (verifying all the files in a folder) is pretty generic, and implementing it in Verify.NET would be of benefit.

@RussKie
Member

RussKie commented Aug 24, 2022

Thank you for the context, now I think I get it - executing a template will produce several files, and each template may have a different number of those files. So essentially we need to verify a folder's content. I don't think it'd be difficult to create a helper method that verifies the content of a folder, though it'd make our lives easier if that helper was provided out of the box. :)

In Windows Forms scenarios we generally compare one or two known files (more like entities that represent files, and those may not even exist on disk).

Ideally in case of failure, all the failures will be shown in the one window.

I can interpret this wish in a few different ways, though I'm not sure my interpretation aligns with yours.

  • Tooling: some diff/merge tools provide folder comparison functionality (e.g., BeyondCompare 4 IIRC) but a lot of tools don't.
  • Test result presentation: collect results from all failed file verifications and concatenate them together. It's possible to catch every "verification failed" exception (there's a specific exception type, I don't remember the name of it) and then provide a custom report to the user. Depending on how much data is presented to a developer, this may degrade the developer experience.

@RussKie
Member

RussKie commented Aug 24, 2022

Thank you @SimonCropp for jumping in. In essence, here we're talking about verifying the result of the dotnet new command. E.g., dotnet new winforms, dotnet new winforms -n MyApp, dotnet new winforms -f net5.0, etc. will produce folders with their own sets of files, and we'll need to verify the content of each file. (@vlada-shubina please correct me if I misinterpret it) We will also need to be able to exclude some folders (e.g., obj).

D:\Development\throwaway\foo>dotnet new winforms -f net5.0
The template "Windows Forms App" was created successfully.

Processing post-creation actions...
Restoring D:\Development\throwaway\foo\foo.csproj:
  Determining projects to restore...
  Restored D:\Development\throwaway\foo\foo.csproj (in 68 ms).
Restore succeeded.


D:\Development\throwaway\foo>dir /b
foo.csproj
foo.csproj.user
Form1.cs
Form1.Designer.cs
obj
Program.cs

Do you think it's a worthy addition to your already awesome library?

The sample I mentioned earlier is in closed source, but I'm sure @vlada-shubina or @JanKrivanek may be able to explain their test procedures in more detail.
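The folder-exclusion need mentioned above (e.g. skipping obj) can be sketched with plain diff (directory names and file contents are illustrative):

```shell
# Verify a folder's content while excluding build artifacts such as obj/.
mkdir -p expected/obj actual/obj
printf 'static void Main() { }\n' > expected/Program.cs
cp expected/Program.cs actual/Program.cs
printf 'assets-a\n' > expected/obj/project.assets.json
printf 'assets-b\n' > actual/obj/project.assets.json   # differs, but is ignored
diff -r -x obj expected actual && echo "match (obj ignored)"
```

A real verification framework would express the same thing as an ignore/filter list rather than a diff flag.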

@vlada-shubina
Member

vlada-shubina commented Aug 26, 2022

Added a couple of test drafts in #5163.
Note that this syntax is not set in stone, it may be different.

@vlada-shubina
Member

now.txt
port.txt
random.txt
guid.txt

Usage stats of the non-deterministic macros attached:

  • port - 78 templates
  • random - 11 templates
  • now - 51 templates
  • guid - 298 templates

@SimonCropp
Contributor

@vlada-shubina i should have a beta "verifydirectory" for you in a couple of days

@SimonCropp
Contributor

@vlada-shubina if you update to the current beta of Verify, you can try this https://github.com/VerifyTests/Verify#verifydirectory

@vlada-shubina
Member

vlada-shubina commented Sep 8, 2022

@vlada-shubina if you update to the current beta of Verify, you can try this https://github.com/VerifyTests/Verify#verifydirectory

Awesome work! Thanks. First very basic test passed.

The only obstacle I'm stuck with is here:
https://github.com/VerifyTests/Verify/blob/33f951c84a2e08d5aed9743158abb88795f3067f/src/Verify/Splitters/Target.cs#L83-L86
The filenames can easily contain . in the name. Also, subfolders do not work, for the same reason as above.

update: if possible I would prefer different naming

- <testname folder>.received
  - content received goes here with unchanged names
- <testname folder>.verified
  - verified content goes here with unchanged names

Having verified and received mixed into the file name is a bit confusing (see the attached screenshot).

@SimonCropp
Contributor

The filenames can easily contain .

i have deployed a new version with that constraint removed. It no longer makes sense anyway given the current file naming logic.

if possible I would prefer different naming

hmm. let me think on that and get back to you

@SimonCropp
Contributor

@vlada-shubina how does this look https://github.com/VerifyTests/Verify/blob/main/docs/naming.md#usesplitmodeforuniquedirectory
it is in the 18.0.0-beta.17 nuget

@vlada-shubina
Member

@vlada-shubina how does this look https://github.com/VerifyTests/Verify/blob/main/docs/naming.md#usesplitmodeforuniquedirectory it is in the 18.0.0-beta.17 nuget

Thanks, it works great.
The only improvement I can think of is to do the folder diff in a diff tool, if one is available.
For example, DiffMerge has a folder mode, and it's much more convenient to use that instead of running multiple DiffMerge processes for individual files. CodeCompare supports it as well.

@JanKrivanek
Member

Closing as work in context of 7.0 was done.
Tracking the next iteration ideas in a new item: #5705
