I’ve been writing many stories here about enumeration in .NET. I know it’s hard to remember everything, especially when developing large projects with several other developers. So I decided to develop NetFabric.Hyperlinq.Analyzer, an analyzer that peer reviews the code as it’s typed. I actually use it myself to develop NetFabric.Hyperlinq.
The easiest way to start developing an analyzer is to use the template available with Visual Studio. One of its great features is that it also generates the unit testing project.
Notice that it applies the analyzer and the code fixer to source code strings. I find this approach to have several disadvantages:
- It doesn’t allow syntax color coding and IntelliSense. These make it so much easier to visualize and write the code.
- No compiler errors are generated. The code doesn’t have to compile correctly for the analyzer to run, but it’s good to have the compiler’s help.
- It’s not possible to use the Syntax Visualizer on the strings. This tool is essential in the development of diagnostic analyzers.
A workaround is to have another project and copy/paste the source between files. I find this too cumbersome.
Adding the test sources to the unit tests project
I came up with this idea while developing another project that I spun off from the analyzer. The NetFabric.CodeAnalysis repository is used to generate two NuGet packages:
- NetFabric.CodeAnalysis — contains the logic used by the analyzer to check if a type is an enumerable. It covers more scenarios than just checking if it implements IEnumerable or IAsyncEnumerable.
- NetFabric.Reflection — contains the exact same logic but for reflection. It’s used by my fluent assertions library NetFabric.Assertive.
These packages are used in different contexts (compile time vs. run time) but have the exact same functionality, so I wanted to use the same testing data. This means a lot less code to maintain…
For the analyzer, I added a TestData folder to my unit testing project. Inside it, I added a folder tree for each diagnostic analyzer, with the source files split into tests that do or do not report diagnostics:
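As a sketch, the tree looks something like this (the diagnostic identifiers and file names here are illustrative):

```
UnitTests/
└── TestData/
    ├── HLQ001/
    │   ├── Diagnostic/
    │   │   └── Enumerable.cs
    │   └── NoDiagnostic/
    │       └── Enumerable.cs
    └── HLQ002/
        ├── Diagnostic/
        └── NoDiagnostic/
```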
Once the files are added, they become part of the project and are compiled just like any other source file. One thing you’ll have to worry about now is naming conflicts. To avoid this, I use namespaces with the base name equal to the diagnostic identifier, which should be unique by definition.
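For example, a file under an HLQ001 folder might declare its types in a namespace rooted at that diagnostic identifier (a sketch; the names are illustrative):

```csharp
// TestData/HLQ001/NoDiagnostic/Enumerable.cs
namespace HLQ001.NoDiagnostic
{
    // Types declared here can share names with types in other
    // TestData folders without colliding, because the namespace
    // is unique per diagnostic.
    class Enumerable
    {
    }
}
```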
Using the source files for unit testing
The classes DiagnosticVerifier and CodeFixVerifier used to validate the diagnostics are prepared to receive source strings. We now need to change the testing code to use the files.
I like to use xUnit but you can use any other unit testing framework:
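A sketch of what such tests can look like, assuming the DiagnosticVerifier base class and DiagnosticResult types generated by the analyzer template (the diagnostic id, paths, and locations below are illustrative):

```csharp
using System.IO;
using Microsoft.CodeAnalysis;
using Xunit;

public class AnalyzerTests : DiagnosticVerifier
{
    [Theory]
    [InlineData("TestData/HLQ001/NoDiagnostic/Enumerable.cs")]
    public void Verify_NoDiagnostics(string path) =>
        // load the source file and verify no diagnostics are reported
        VerifyCSharpDiagnostic(File.ReadAllText(path));

    [Theory]
    [InlineData("TestData/HLQ001/Diagnostic/Enumerable.cs", 8, 13)]
    public void Verify_Diagnostic(string path, int line, int column)
    {
        var expected = new DiagnosticResult
        {
            Id = "HLQ001",
            Severity = DiagnosticSeverity.Warning,
            Locations = new[] { new DiagnosticResultLocation("Test0.cs", line, column) },
        };
        VerifyCSharpDiagnostic(File.ReadAllText(path), expected);
    }
}
```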
Notice that each test method is used to test multiple scenarios. I define only two methods per diagnostic: one for no diagnostics reported and another for a single reported diagnostic. The path to the file and any other required information are passed as parameters.
Notice also the use of File.ReadAllText(path), so that the file is loaded and its content is used for the validation.
Having simple tests that report only one diagnostic makes it easier to debug. There won’t be multiple threads running at the same time.
It’s important that the build and tests can be executed anywhere the repository is cloned into, including any continuous-integration agent.
I added the following to the unit tests .csproj file:
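Something along these lines (a sketch; the exact item group in the repository may differ):

```xml
<ItemGroup>
  <!-- keep compiling the test sources, but also copy them to the
       output folder so tests can load them via relative paths -->
  <Compile Update="TestData\**\*.cs">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Compile>
</ItemGroup>
```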
This copies all the .cs files under the TestData folder, maintaining the folder structure and still compiling during build. This allows the use of relative paths for the test source code files.
I found two reasons to have source files excluded from the build:
- The source doesn’t have to compile correctly for the analyzer to run. It’s important that unit tests include these scenarios.
- The fixed source files used to test the code fixers may not have enough changes to avoid naming collisions.
In Visual Studio, you can exclude each of these files by right-clicking it in the Solution Explorer and then clicking Exclude From Project.
Alternatively, you can use a naming pattern and set a rule. I add a .Fix suffix to the names of the files used to test the code fixer and add the following to the .csproj file:
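A sketch of such a rule (the exact fragment in the repository may differ):

```xml
<ItemGroup>
  <!-- exclude the .Fix.cs files from compilation... -->
  <Compile Remove="TestData\**\*.Fix.cs" />
  <!-- ...but still copy them to the output folder on build -->
  <None Include="TestData\**\*.Fix.cs">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```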
This guarantees that the .Fix.cs files are not compiled but still copied on build.
Please note that, after this, when a new file is added, Visual Studio will override these settings for the new file. You’ll have to delete what it adds to the .csproj file.
The test verifiers create an internal project where the source strings are added as documents:
- The VerifyCSharpDiagnostic method has an overload that accepts an array of strings. These are all added to the internal project and the analyzer is run on all of them.
- The VerifyCSharpFix method accepts only one string. The analyzer and the code fixer are run on it and the result is compared to the other string parameter.
I wanted to be able to share auxiliary code between tests, for example, the enumerable definitions I use in most of the testing scenarios. Neither verifier allows this scenario, so I tweaked their implementation a bit to fit my requirements. You can copy them from my repository.
I can now pass an array of strings to either verifier, but the analyzer is run only on the first one:
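With the tweaked verifiers, a test can then look something like this (a sketch; the paths and the shared-definitions file name are illustrative):

```csharp
[Theory]
[InlineData("TestData/HLQ001/NoDiagnostic/Enumerable.cs")]
public void Verify_NoDiagnostics(string path) =>
    VerifyCSharpDiagnostic(new[]
    {
        File.ReadAllText(path),                             // the analyzer runs only on this one
        File.ReadAllText("TestData/Common/Enumerables.cs"), // shared auxiliary definitions
    });
```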
It’s possible to take advantage of the IDE tools to develop the source code used to test an analyzer.
You can always access the source code in my analyzer repository to check on any detail that I missed.
I hope you found this information useful.