Andrew Lock | .NET Escapades


November 30, 2021, 2:00 a.m



In this short post, I'll answer a question I've been asked several times: "How do I update my ASP.NET Core 5 application that uses a Startup class to the .NET 6 minimal hosting APIs?"

Background: minimum hosting on .NET 6

I recently got some emails from people about some of the changes introduced in .NET 6 that I've described in this series. Most of the questions relate to the "minimal hosting" and "minimal API" changes and what this means for your existing .NET 5 applications.

I covered the new WebApplication and WebApplicationBuilder types in a lot of detail in this series, so if you're not familiar with them, I suggest checking out the second post in this series. In short, your typical "empty" .NET 5 application goes from two files, Program.cs and Startup.cs, to just one, containing something like this:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/", () => "Hello World!");

app.Run();

There are lots of C# features that went into .NET 6 to make this file as simple as possible, such as global usings and top-level statements, but it's the new WebApplication types that really seem to trip people up.

The "problem" is that the templates (and, to be fair, many of the demos) take all the logic that was previously spread across at least two files and multiple methods, and put it in a single file.

The argument for this is reasonable: it's all "procedural" setup code, so why not just put it in a single "script-like" file, and avoid unnecessary layers of abstraction? The trouble is that for larger applications this file can get pretty big. So what do I suggest?

Option 1: do nothing

If you have a .NET 5 application that you're updating to .NET 6, and you're worried about what to do with your Program.cs and Startup.cs, then the simple answer is: do nothing.

The "old" style of initialization, using the generic host and a Startup class, is completely supported. After all, under the hood, the fancy new WebApplication just uses the generic host anyway. Literally the only changes you'll likely have to make are updating the target framework in your .csproj file, and updating some NuGet packages:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <!-- 👇 -->
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>

</Project>

Your Program.cs will work as before. So will your Startup class. Just enjoy those sweet performance improvements and move on 😄

Option 2: Reuse your Startup class

But what if you actually want to use the new WebApplication style, but you don't want to put everything in Program.cs? I get it, it's shiny and new, but that file could get looooong, depending on how much work you're doing in your Startup class.

One approach you can take is to keep your Startup class, but to call it manually, instead of using the "magic" UseStartup<T>() method. Say your Startup class is relatively typical:

  • It takes an IConfigurationRoot in the constructor
  • It has a ConfigureServices() method for configuring DI
  • It has a Configure() method for configuring your middleware and endpoints, including other services injected from the built DI container (IHostApplicationLifetime in this case).

In general it looks like this:

public class Startup
{
    public Startup(IConfigurationRoot configuration)
    {
        Configuration = configuration;
    }

    public IConfigurationRoot Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // ...
    }

    public void Configure(IApplicationBuilder app, IHostApplicationLifetime lifetime)
    {
        // ...
    }
}

In pre-.NET 6 versions, using the UseStartup<T>() method with the generic host meant the Startup class was magically instantiated, and its methods invoked at the appropriate points. But there's nothing to stop you doing those same steps "manually" with the new WebApplicationBuilder hosting model.

For example, if we start with the hello world of minimal hosting:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/", () => "Hello World!");

app.Run();

We can update it to use our Startup class:

var builder = WebApplication.CreateBuilder(args);

// Manually create an instance of the Startup class
var startup = new Startup(builder.Configuration);
// Manually call ConfigureServices()
startup.ConfigureServices(builder.Services);

var app = builder.Build();

// Fetch any dependencies needed from the built DI container
// var hostLifetime = app.Services.GetRequiredService<IHostApplicationLifetime>();
// As David Fowler pointed out, IHostApplicationLifetime is exposed directly on the ApplicationBuilder

// Call Configure(), passing in the dependencies
startup.Configure(app, app.Lifetime);

app.Run();

This is probably the easiest approach for reusing your Startup class if you want to move to the new WebApplication approach.

Be aware that there are some subtle differences when using WebApplication that you might not notice at first. For example, you can't change settings such as the application name or environment after the WebApplicationBuilder is created. See the documentation for more details on these subtle differences.
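As a concrete sketch of that restriction: settings like the environment have to be supplied up front via WebApplicationOptions when the builder is created, rather than changed afterwards (the application name below is invented for illustration):

```csharp
var builder = WebApplication.CreateBuilder(new WebApplicationOptions
{
    Args = args,
    EnvironmentName = Environments.Staging,
    ApplicationName = "MyTestApp", // hypothetical name, for illustration
});

var app = builder.Build();
app.Run();
```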

Option 3: Local methods in Program.cs

If you're starting a new .NET 6 application, you probably wouldn't choose to create a Startup class, but I probably would add similar methods to my Program.cs file, to give it some structure.

For example, a typical structure I might choose would look something like this:

var builder = WebApplication.CreateBuilder(args);

ConfigureConfiguration(builder.Configuration);
ConfigureServices(builder.Services);

var app = builder.Build();

ConfigureMiddleware(app, app.Services);
ConfigureEndpoints(app, app.Services);

app.Run();

void ConfigureConfiguration(ConfigurationManager configuration) { }

void ConfigureServices(IServiceCollection services) { }

void ConfigureMiddleware(IApplicationBuilder app, IServiceProvider services) { }

void ConfigureEndpoints(IEndpointRouteBuilder app, IServiceProvider services) { }

Overall, this looks very similar to the Startup-based version in option 2, but we've removed some of the boilerplate of having a separate class. A few other points to note:

  • I have separate methods for configuring the middleware and the endpoints - the former is order-sensitive while the latter isn't, so I like to keep them separate.
  • I used the IApplicationBuilder and IEndpointRouteBuilder types in the method signatures to make the intended usage of each method clearer.
  • It's easy to update or split the method signatures as and when we need more flexibility.

Overall, I think this is as good a pattern as any for many cases, but it really doesn't matter: you can apply as much or as little structure here as you like.
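To make the structure concrete, a hypothetical filled-in version of those local methods might look like the following (the settings file, health-check service, and endpoint path are invented for illustration):

```csharp
var builder = WebApplication.CreateBuilder(args);

ConfigureConfiguration(builder.Configuration);
ConfigureServices(builder.Services);

var app = builder.Build();

ConfigureMiddleware(app, app.Services);
ConfigureEndpoints(app, app.Services);

app.Run();

// Add an extra (hypothetical) settings file to the configuration
void ConfigureConfiguration(ConfigurationManager configuration)
    => configuration.AddJsonFile("customSettings.json", optional: true);

// Register services with the DI container
void ConfigureServices(IServiceCollection services)
    => services.AddHealthChecks();

// Order-sensitive middleware pipeline
void ConfigureMiddleware(IApplicationBuilder app, IServiceProvider services)
    => app.UseHttpsRedirection();

// Endpoint registration (not order-sensitive)
void ConfigureEndpoints(IEndpointRouteBuilder app, IServiceProvider services)
    => app.MapHealthChecks("/healthz");
```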

Bonus: extract modules, use Carter

If you do actually like the new WebApplication style, it should be very easy to "inline" your Startup class into Program.cs, copying and pasting the contents of the ConfigureServices() and Configure() methods into place, similar to the above.

Of course, at some point you may find yourself drunk with power, and your Program.cs is now a giant mess of configuration code and endpoints. What then?

One of the things some people like about the new hosting model is that it doesn't impose any specific standards or requirements on its code. This gives you plenty of leeway to apply any pattern that works for you.

Personally, I'd prefer my frameworks to have fewer options, to choose a "winning" pattern, so that all projects look very similar. The trouble is, the "blessed" pattern in ASP.NET/ASP.NET Core hasn't always been very good in the past. I'm looking at you, Controllers folder…

One approach to tackling this is to extract related features into "modules". This can be especially useful if you're using minimal APIs and don't want them all listed in Program.cs!
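As a sketch of the idea, without any library at all, a "module" can be as simple as an extension method that registers one feature's endpoints (the module and route names here are invented):

```csharp
// A hand-rolled feature module: everything for the (hypothetical)
// weather feature lives in one file, registered with one call
public static class WeatherModule
{
    public static IEndpointRouteBuilder MapWeatherEndpoints(this IEndpointRouteBuilder app)
    {
        app.MapGet("/weather", () => "Sunny");
        app.MapGet("/weather/{city}", (string city) => $"Sunny in {city}");
        return app;
    }
}

// In Program.cs, each feature then becomes a single line:
// app.MapWeatherEndpoints();
```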

Now, if you're thinking "surely someone has already created a library to support this", then you're in luck: Carter is what you're looking for.

Carter includes a variety of helpers for working with minimal APIs, one of which is a useful grouping of "modules" (taken from their documentation):

public class HomeModule : ICarterModule
{
    public void AddRoutes(IEndpointRouteBuilder app)
    {
        app.MapGet("/", () => "Hello from Carter!");
        app.MapGet("/connect", (HttpResponse res) => res.Negotiate(new { Name = "Dave" }));
        app.MapPost("/validation", HandlePost);
    }

    private IResult HandlePost(HttpContext ctx, Person person, Database database)
    {
        // ...
    }
}

You can use these modules to add some structure to your app, if you find you've gone too far down the de-structuring rabbit hole converting your web APIs to minimal APIs! If Carter looks interesting, I highly recommend checking out their introduction on the .NET Community Standup.
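To give a feel for the wiring involved (based on Carter's documentation at the time; check their README for the current API), hooking Carter into the minimal hosting model looks something like this:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Scans the assembly for ICarterModule implementations and registers them
builder.Services.AddCarter();

var app = builder.Build();

// Calls AddRoutes() on each registered module to map its endpoints
app.MapCarter();

app.Run();
```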

Summary

In this short post, I described how to upgrade a .NET 5 ASP.NET Core application to .NET 6. The simplest approach is to skip the minimal hosting WebApplicationBuilder APIs, and just update your target framework! If you want to use the minimal hosting APIs, but you have a big Startup class you don't want to rewrite, I showed how you can call your Startup class directly. Finally, if you want to go the other way and add some structure to your minimal APIs, check out Carter!

December 8, 2021, 7 p.m



This post is my contribution to the .NET advent calendar. Check out other great posts too!

In this post I describe how to create an incremental source generator. As a case study, I describe a source generator that generates an extension method for an enum, called ToStringFast(). This method is much faster than the built-in ToString() equivalent, and using a source generator means it's just as easy to use.

This is based on a source generator I recently created called NetEscapades.EnumGenerators. You can find it on GitHub or NuGet.

I'll start by giving a little background on source generators, and the problem with calling ToString() on an enum. In the rest of the post, I'll walk through creating the incremental generator. The end result is a source generator that works, albeit with limitations, which I describe at the end of the post.

  1. Create the source generator project
  2. Collecting details about the enums
  3. Add a marker attribute
  4. Create the incremental source generator
  5. Build the incremental generator pipeline
  6. Implementing the pipeline stages
  7. Parsing EnumDeclarationSyntax to create an EnumToGenerate
  8. Generating the source code
  9. Limitations

Background: Source generators

Source generators were added as a built-in feature in .NET 5. They perform code generation at compile time, giving you the ability to automatically add source code to your project. This opens up a huge realm of possibilities, but the ability to use source generators to replace things that would otherwise need reflection is a firm favourite.

I have written many posts about source code generators, for example:

  • Using source generators to find all routable components in a Blazor WebAssembly app
  • Improving logging performance with source generators
  • Source generator updates: incremental generators

If you're completely new to source generators, I highly recommend Jason Bock's introduction to source generators from .NET Conf. It's only half an hour (he also has a longer version of the talk), and will get you up to speed in no time.

.NET 6 introduced a new API for creating "incremental generators". These have broadly the same functionality as the source generators in .NET 5, but are designed to take advantage of caching to significantly improve performance, so that your IDE doesn't slow down. The main downside to incremental generators is that they are only supported in the .NET 6 SDK (and so only in VS 2022).

The domain: enums and ToString()

The humble enum in C# is a handy little construct for representing a choice of options. Under the hood, it's represented by a numeric value (typically an int), but instead of having to remember in your code that 0 means "red" and 1 means "blue", you can use an enum that holds that information for you:

public enum Colour // Yes, I'm British
{
    Red = 0,
    Blue = 1,
}

In your code, you pass instances of the enum Colour around, but behind the scenes the runtime really just uses an int. The trouble is, sometimes you want to get the name of the colour. The built-in way to do that is to call ToString():

public void PrintColour(Colour colour)
{
    Console.WriteLine("You chose " + colour.ToString()); // You chose Red
}

This is probably familiar to everyone reading this post. What's perhaps less well known is that this is slow. We'll see just how slow shortly, but first let's look at a fast implementation, using modern C#:

public static class EnumExtensions
{
    public static string ToStringFast(this Colour colour)
        => colour switch
        {
            Colour.Red => nameof(Colour.Red),
            Colour.Blue => nameof(Colour.Blue),
            _ => colour.ToString(),
        };
}

This simple switch statement checks for each of the known values of Colour, and uses nameof to return the textual representation of the enum. If it's an unknown value, the underlying value is returned as a string.

You always have to be careful about these unknown values: for example, PrintColour((Colour)123) is valid C#.
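To make that fallback behaviour concrete, here's a self-contained sketch (repeating the Colour enum from above) showing what ToString() does when the value doesn't match any defined member:

```csharp
using System;

public enum Colour
{
    Red = 0,
    Blue = 1,
}

public static class Program
{
    public static void Main()
    {
        // A defined value returns its name
        Console.WriteLine(Colour.Red.ToString());    // prints "Red"

        // An undefined value falls back to its numeric representation
        Console.WriteLine(((Colour)123).ToString()); // prints "123"
    }
}
```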

Comparing this simple switch statement to the default ToString() implementation using BenchmarkDotNet, for a known colour, you can see how much faster our implementation is:

BenchmarkDotNet=v0.13.1, OS=Windows 10.0.19042.1348 (20H2/October 2020 Update)
Intel Core i7-7500U CPU 2.70GHz (Kaby Lake), 1 CPU, 4 logical and 2 physical cores
.NET SDK=6.0.100
  DefaultJob : .NET Framework 4.8 (4.8.4420.0), X64 RyuJIT
  DefaultJob : .NET 6.0.0 (6.0.21.52210), X64 RyuJIT

| Method       | FX     | Mean       | Error     | StdDev    | Ratio | Gen 0  | Allocated |
|--------------|--------|-----------:|----------:|----------:|------:|-------:|----------:|
| EnumToString | net48  | 578.276 ns | 3.3109 ns | 3.0970 ns | 1.000 | 0.0458 |      96 B |
| ToStringFast | net48  |   3.091 ns | 0.0567 ns | 0.0443 ns | 0.005 |      - |         - |
| EnumToString | net6.0 | 17.9850 ns | 0.1230 ns | 0.1151 ns | 1.000 | 0.0115 |      24 B |
| ToStringFast | net6.0 |  0.1212 ns | 0.0225 ns | 0.0199 ns | 0.007 |      - |         - |

The first thing worth mentioning is that ToString() in .NET 6 is over 30 times faster, and allocates only a quarter of the bytes, compared to the same method in .NET Framework. Compare it to the "fast" version though, and it's still super slow!
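For reference, the kind of harness that produces a table like the one above might look like this (a sketch using BenchmarkDotNet's attributes; the multi-runtime job configuration that produces the net48/net6.0 rows is omitted):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // adds the Gen 0 / Allocated columns
public class EnumBenchmarks
{
    private readonly Colour _colour = Colour.Red;

    [Benchmark(Baseline = true)]
    public string EnumToString() => _colour.ToString();

    [Benchmark]
    public string ToStringFast() => _colour.ToStringFast();
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<EnumBenchmarks>();
}
```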

Creating the ToStringFast() method is a bit of a chore though, as you have to remember to update it whenever your enum changes. Luckily, this is a perfect use case for a source generator!

I know of a few community enum generators, namely this one and this one, but neither of them did quite what I wanted, so I created my own!

In this post, we'll see how to create a source generator that generates the ToStringFast() method, using the new incremental source generators supported in the .NET 6 SDK.

1. Create the source generator project

First we need to create a C# project. Source generators must target netstandard2.0, and you'll need to add some standard packages to get access to the source generator types.

Start by creating a class library. The following uses the SDK to create a solution and project in the current folder:

dotnet new sln -n NetEscapades.EnumGenerators
dotnet new classlib -o ./src/NetEscapades.EnumGenerators
dotnet sln add ./src/NetEscapades.EnumGenerators

Replace the contents of NetEscapades.EnumGenerators.csproj with the following. I've described what each of the properties does in the comments:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <!-- 👇 Source generators must target netstandard 2.0 -->
    <TargetFramework>netstandard2.0</TargetFramework>
    <!-- 👇 We don't want to reference the source generator dll directly in consuming projects -->
    <IncludeBuildOutput>false</IncludeBuildOutput>
    <!-- 👇 New project, why not! -->
    <Nullable>enable</Nullable>
    <ImplicitUsings>true</ImplicitUsings>
    <LangVersion>Latest</LangVersion>
  </PropertyGroup>

  <!-- The following libraries include the source generator interfaces and types we need -->
  <ItemGroup>
    <PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.3.2" PrivateAssets="all" />
    <PackageReference Include="Microsoft.CodeAnalysis.CSharp" Version="4.0.1" PrivateAssets="all" />
  </ItemGroup>

  <!-- This ensures the library will be packaged as a source generator when we use `dotnet pack` -->
  <ItemGroup>
    <None Include="$(OutputPath)\$(AssemblyName).dll" Pack="true" PackagePath="analyzers/dotnet/cs" Visible="false" />
  </ItemGroup>

</Project>

That's pretty much it for now, so let's get to the code.

2. Collecting details about the enums

Before we create the generator itself, let's think about the extension method we're trying to create. At a minimum, we need to know:

  • The full type name of the enum
  • The name of all the values in the enum

And that's it. There's a lot more information we could collect for a better user experience, but for now let's stick with these to get something working. Given that, we can create a simple type to hold the details about the enums we discover:

public readonly struct EnumToGenerate
{
    public readonly string Name;
    public readonly List<string> Values;

    public EnumToGenerate(string name, List<string> values)
    {
        Name = name;
        Values = values;
    }
}

3. Add a marker attribute

We also need to think about how users will choose which enums to generate extension methods for. We could do it for every enum in the project, but that seems a bit overkill. Instead, we could use a "marker attribute". A marker attribute is a simple attribute that has no functionality, and only exists so that something else (in this case, our source generator) can locate the type. Users would decorate their enum with the attribute, so we know to generate the extension method for it:

[EnumExtensions] // Our marker attribute
public enum Colour
{
    Red = 0,
    Blue = 1,
}

We'll create a simple marker attribute as shown below, but we won't define this attribute directly in code. Instead, we create a string that contains the C# code for the [EnumExtensions] marker attribute. We'll have the source generator automatically add this to the compilation of consuming projects at runtime, so the attribute is available.

public static class SourceGenerationHelper
{
    public const string Attribute = @"
namespace NetEscapades.EnumGenerators
{
    [System.AttributeUsage(System.AttributeTargets.Enum)]
    public class EnumExtensionsAttribute : System.Attribute
    {
    }
}";
}

We'll add more to the SourceGenerationHelper class later, but for now, it's time to create the generator itself.

4. Create the incremental source generator

To create an incremental source generator, you need to do three things:

  1. Include the Microsoft.CodeAnalysis.CSharp package in your project. Note that incremental generators were introduced in version 4.0.0, and are only supported in .NET 6/VS 2022.
  2. Create a class that implements IIncrementalGenerator
  3. Decorate the class with the [Generator] attribute

We've already done the first step, so let's create our EnumGenerator implementation:

namespace NetEscapades.EnumGenerators;

[Generator]
public class EnumGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // Add the marker attribute to the compilation
        context.RegisterPostInitializationOutput(ctx => ctx.AddSource(
            "EnumExtensionsAttribute.g.cs",
            SourceText.From(SourceGenerationHelper.Attribute, Encoding.UTF8)));

        // TODO: implement the remainder of the source generator
    }
}

IIncrementalGenerator only requires you to implement a single method, Initialize(). In this method you can register your "static" source code (such as marker attributes), as well as build a pipeline for identifying syntax of interest, and transforming that syntax into source code.

In the implementation above, I've already included the code that registers our marker attribute with the compilation. In the next section, we'll create the code that identifies enums that have been decorated with the marker attribute.

5. Build the incremental generator pipeline

One of the most important things to remember when creating source generators is that there are a LOT of changes happening as the user writes their source code. Every change the user makes could cause the source generator to run again, so it needs to be efficient, otherwise you'll kill the user's IDE experience.

This isn't just anecdotal either: preview versions of the [LoggerMessage] generator ran into exactly this problem.

Incremental generators are designed to create a "pipeline" of transforms and filters, memoizing the results at each layer to avoid re-doing work if there are no changes. It's important that the first layer of the pipeline (the predicate, as we'll see shortly) is very efficient, as it will be called a LOT, essentially for every change to the source code. Later layers should still be efficient, but there's a bit more leeway there. If you design your pipeline well, the later layers will only be called when users edit the code that matters to your generator.

I wrote about this design in a recent blog post.

With that in mind (and taking inspiration from the [LoggerMessage] generator), let's create a simple generator pipeline that does the following:

  • Filter the syntax for only enums that have one or more attributes. This should be very fast, and will include all the enums we're interested in.
  • Filter the syntax for only enums that have the [EnumExtensions] attribute. This is slightly more costly than the first layer, as it uses the semantic model (not just the syntax), but still isn't very expensive.
  • Extract all the information we need using the Compilation. This is the most expensive step, and combines the Compilation for the project with the previously selected enum syntax. This is where we can create our collection of EnumToGenerate, generate the source, and register it as output from the source generator.

In code, the pipeline looks like the following. The three steps above correspond to IsSyntaxTargetForGeneration(), GetSemanticTargetForGeneration(), and Execute() respectively, which we'll cover in the next section.

namespace NetEscapades.EnumGenerators;

[Generator]
public class EnumGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // Add the marker attribute
        context.RegisterPostInitializationOutput(ctx => ctx.AddSource(
            "EnumExtensionsAttribute.g.cs",
            SourceText.From(SourceGenerationHelper.Attribute, Encoding.UTF8)));

        // Do a simple filter for enums
        IncrementalValuesProvider<EnumDeclarationSyntax> enumDeclarations = context.SyntaxProvider
            .CreateSyntaxProvider(
                predicate: static (s, _) => IsSyntaxTargetForGeneration(s), // select enums with attributes
                transform: static (ctx, _) => GetSemanticTargetForGeneration(ctx)) // select the enum with the [EnumExtensions] attribute
            .Where(static m => m is not null)!; // filter out attributed enums that we don't care about

        // Combine the selected enums with the `Compilation`
        IncrementalValueProvider<(Compilation, ImmutableArray<EnumDeclarationSyntax>)> compilationAndEnums
            = context.CompilationProvider.Combine(enumDeclarations.Collect());

        // Generate the source using the compilation and enums
        context.RegisterSourceOutput(compilationAndEnums,
            static (spc, source) => Execute(source.Item1, source.Item2, spc));
    }
}

The first stage of the pipeline uses CreateSyntaxProvider() to filter the incoming list of syntax tokens. The predicate, IsSyntaxTargetForGeneration(), provides the first layer of filtering. The transform, GetSemanticTargetForGeneration(), can be used to transform the syntax tokens, but in this case we only use it to provide additional filtering after the predicate. The subsequent Where() clause looks like LINQ, but it's actually a method on IncrementalValuesProvider, which performs that second level of filtering for us.

The next stage of the pipeline simply combines our collection of EnumDeclarationSyntax emitted from the first stage with the current Compilation.

Finally, we use the combined tuple of (Compilation, ImmutableArray<EnumDeclarationSyntax>) to actually generate the source code for the EnumExtensions class, using the Execute() method.

Now let's take a look at each of these methods.

6. Implementing the pipeline stages

The first stage of the pipeline needs to be very fast, so we operate only on the SyntaxNode passed in, filtering to select only EnumDeclarationSyntax nodes that have at least one attribute:

static bool IsSyntaxTargetForGeneration(SyntaxNode node)
    => node is EnumDeclarationSyntax m && m.AttributeLists.Count > 0;

As you can see, this is a very efficient predicate. It uses simple pattern matching to check the type and properties of the node.

In C# 10 you can also write this as node is EnumDeclarationSyntax { AttributeLists.Count: > 0 }, but personally I prefer the former.

With that efficient filtering in place, we can be a little more critical. We don't want any attribute, we only want our specific marker attribute. In GetSemanticTargetForGeneration() we loop over each of the nodes that passed the previous test, looking for our marker attribute. If the node has the attribute, we return the node so it can take part in the later generation stage. If the enum doesn't have the marker attribute, we return null, and filter it out in the next stage.

private const string EnumExtensionsAttribute = "NetEscapades.EnumGenerators.EnumExtensionsAttribute";

static EnumDeclarationSyntax? GetSemanticTargetForGeneration(GeneratorSyntaxContext context)
{
    // we know the node is an EnumDeclarationSyntax thanks to IsSyntaxTargetForGeneration
    var enumDeclarationSyntax = (EnumDeclarationSyntax)context.Node;

    // loop through all the attributes on the enum
    foreach (AttributeListSyntax attributeListSyntax in enumDeclarationSyntax.AttributeLists)
    {
        foreach (AttributeSyntax attributeSyntax in attributeListSyntax.Attributes)
        {
            if (context.SemanticModel.GetSymbolInfo(attributeSyntax).Symbol is not IMethodSymbol attributeSymbol)
            {
                // weird, we couldn't get the symbol, ignore it
                continue;
            }

            INamedTypeSymbol attributeContainingTypeSymbol = attributeSymbol.ContainingType;
            string fullName = attributeContainingTypeSymbol.ToDisplayString();

            // Is the attribute the [EnumExtensions] attribute?
            if (fullName == "NetEscapades.EnumGenerators.EnumExtensionsAttribute")
            {
                // return the enum
                return enumDeclarationSyntax;
            }
        }
    }

    // we didn't find the attribute we were looking for
    return null;
}

Note that we're still trying to be efficient where we can, hence the foreach loops instead of LINQ.

After running this stage of the pipeline, we have a collection of EnumDeclarationSyntax that we know have the [EnumExtensions] attribute. In the Execute() method, we create an EnumToGenerate to hold the details we need from each enum, pass it to our SourceGenerationHelper class to generate the source code, and add it to the compilation output:

static void Execute(Compilation compilation, ImmutableArray<EnumDeclarationSyntax> enums, SourceProductionContext context)
{
    if (enums.IsDefaultOrEmpty)
    {
        // nothing to do yet
        return;
    }

    // I'm not sure if this is actually necessary, but `[LoggerMessage]` does it, so seems like a good idea!
    IEnumerable<EnumDeclarationSyntax> distinctEnums = enums.Distinct();

    // Convert each EnumDeclarationSyntax to an EnumToGenerate
    List<EnumToGenerate> enumsToGenerate = GetTypesToGenerate(compilation, distinctEnums, context.CancellationToken);

    // If there were errors in the EnumDeclarationSyntax, we won't create an
    // EnumToGenerate for it, so make sure we have something to generate
    if (enumsToGenerate.Count > 0)
    {
        // generate the source code and add it to the output
        string result = SourceGenerationHelper.GenerateExtensionClass(enumsToGenerate);
        context.AddSource("EnumExtensions.g.cs", SourceText.From(result, Encoding.UTF8));
    }
}

We're getting close now; there are only two more methods to complete: GetTypesToGenerate() and SourceGenerationHelper.GenerateExtensionClass().

7. Parsing EnumDeclarationSyntax to create an EnumToGenerate

The GetTypesToGenerate() method is where most of the typical work associated with Roslyn happens. We need to use a combination of the syntax trees and the semantic Compilation to get the details we need, namely:

  • The full type name of the enum
  • The name of all the values in the enum

The following code loops through each of the EnumDeclarationSyntax and gathers this data.

static List<EnumToGenerate> GetTypesToGenerate(Compilation compilation, IEnumerable<EnumDeclarationSyntax> enums, CancellationToken ct)
{
    // Create a list to hold our output
    var enumsToGenerate = new List<EnumToGenerate>();
    // Get the semantic representation of our marker attribute
    INamedTypeSymbol? enumAttribute = compilation.GetTypeByMetadataName("NetEscapades.EnumGenerators.EnumExtensionsAttribute");

    if (enumAttribute == null)
    {
        // If this is null, the compilation couldn't find the marker attribute type
        // which suggests there's something very wrong! Bail out..
        return enumsToGenerate;
    }

    foreach (EnumDeclarationSyntax enumDeclarationSyntax in enums)
    {
        // stop if we're asked to
        ct.ThrowIfCancellationRequested();

        // Get the semantic representation of the enum syntax
        SemanticModel semanticModel = compilation.GetSemanticModel(enumDeclarationSyntax.SyntaxTree);
        if (semanticModel.GetDeclaredSymbol(enumDeclarationSyntax) is not INamedTypeSymbol enumSymbol)
        {
            // something went wrong, bail out
            continue;
        }

        // Get the full type name of the enum e.g. Colour,
        // or OuterClass<T>.Colour if it was nested in a generic type (for example)
        string enumName = enumSymbol.ToString();

        // Get all the members in the enum
        ImmutableArray<ISymbol> enumMembers = enumSymbol.GetMembers();
        var members = new List<string>(enumMembers.Length);

        // Get all the fields from the enum, and add their name to the list
        foreach (ISymbol member in enumMembers)
        {
            if (member is IFieldSymbol field && field.ConstantValue is not null)
            {
                members.Add(member.Name);
            }
        }

        // Create an EnumToGenerate for use in the generation phase
        enumsToGenerate.Add(new EnumToGenerate(enumName, members));
    }

    return enumsToGenerate;
}

All that remains is to generate the source code from our List<EnumToGenerate>!

8. Source code generation

The final method, SourceGenerationHelper.GenerateExtensionClass(), shows how we take our list of EnumToGenerate and generate the EnumExtensions class. This one is relatively simple conceptually (if a little hard to visualise!), as it's just building up a string:

public static string GenerateExtensionClass(List<EnumToGenerate> enumsToGenerate)
{
    var sb = new StringBuilder();
    sb.Append(@"
namespace NetEscapades.EnumGenerators
{
    public static partial class EnumExtensions
    {");
    foreach (var enumToGenerate in enumsToGenerate)
    {
        sb.Append(@"
        public static string ToStringFast(this ").Append(enumToGenerate.Name).Append(@" value)
            => value switch
            {");
        foreach (var member in enumToGenerate.Values)
        {
            sb.Append(@"
                ").Append(enumToGenerate.Name).Append('.').Append(member)
                .Append(" => nameof(")
                .Append(enumToGenerate.Name).Append('.').Append(member).Append("),");
        }
        sb.Append(@"
                _ => value.ToString(),
            };
");
    }
    sb.Append(@"
    }
}");
    return sb.ToString();
}

And we're done! We now have a fully functioning source generator. Adding the source generator to a project containing the Colour enum from the start of the post generates an extension method like this:

public static partial class EnumExtensions
{
    public static string ToStringFast(this Colour colour)
        => colour switch
        {
            Colour.Red => nameof(Colour.Red),
            Colour.Blue => nameof(Colour.Blue),
            _ => colour.ToString(),
        };
}

Limitations

When your source generator is ready, you could package it up by running dotnet pack -c Release and upload it to NuGet!

Wait, don't actually do that.

There are many limitations with this code, not least the fact that we haven't tested it yet! Off the top of my head:

  • The EnumExtensions class always has the same name, and is always in the same namespace. It would be nice for users to be able to control this.
  • We don't take the visibility of the enum into account. If the enum is internal, the generated code won't compile, as it's a public extension method.
  • We need to mark the code as auto-generated, and suppress warnings with #pragma warning disable, as the code's formatting may not match the project's conventions.
  • We haven't tested it, so we don't actually know if it works!
  • Adding marker attributes directly to the compilation can sometimes be a problem; more on that in a later post.
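On the auto-generated point above: a generated file conventionally opens with a header like the following. This is an illustrative sketch of the convention, not the output of this post's generator:

```
// <auto-generated/>
// Marks the file as tool-generated, so analyzers and style rules skip it
#pragma warning disable
```

The generator would simply prepend these lines to the string it builds before calling AddSource().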

With all that in mind, I hope you've found this post useful. I'll be addressing many of the issues above in future posts, but the code in this post should provide a good framework if you want to create your own incremental generator.

Summary

In this post I described all the steps needed to create an incremental source generator. I showed how to set up the project file, how to add a marker attribute to the compilation, how to implement IIncrementalGenerator, and how to think about performance so that consumers of your generator don't experience lag in their IDE. The resulting implementation has many limitations, but it demonstrates the basic process. In future posts in this series we'll address many of these limitations.

You can find my NetEscapades.EnumGenerators project on GitHub, and the source code for the basic, simplified version used in this post in my blog samples.

December 14, 2021, 2:00 a.m

Next: NuGet Packaging and Integration Testing: Creating a Source Generator - Part 3

Previous: Creating an Incremental Generator: Creating a Source Generator - Part 1

Andrew Lock | .NET Escapades

Testing an Incremental Generator with Snapshot Tests: Creating a Source Generator - Part 2

In my previous post I gave a detailed description of how to create a source generator, but I missed out a very important step: testing. In this post I describe one of the ways I like to test my source generators, which is to run the source generator manually against a known string and assert the output. Snapshot testing provides a great way to ensure your generator keeps working correctly, and in this post I use the excellent Verify library.

Summary: The EnumExtensions generator

As a quick recap: in the previous post I discussed the problem with calling ToString() on an enum (it's slow), and described how we can use a source generator to create an extension method that provides the same functionality, but is 100x faster.

So for a simple enum like this:

public enum Colour
{
    Red = 0,
    Blue = 1,
}

We generate an extension method that looks like this:

public static partial class EnumExtensions
{
    public static string ToStringFast(this Colour colour)
        => colour switch
        {
            Colour.Red => nameof(Colour.Red),
            Colour.Blue => nameof(Colour.Blue),
            _ => colour.ToString(),
        };
}

As you can see, this implementation consists of a simple switch expression and uses nameof, so that Colour.Red.ToStringFast() returns "Red", as you'd expect.

I won't cover the generator's implementation in this post; see the previous post if that's what you're looking for.

Instead, in this post we'll look at a way to test that our source generator is generating the correct code. My favourite approach for this is to use "snapshot testing".

Snapshot testing for source generators

I haven't written about snapshot testing before, and this post is long enough without going into detail here, but the concept is fairly simple: instead of asserting against a property or two, snapshot testing asserts that an entire object (or another file) is identical to the expected result. There's a lot more to it than that, but that will do for now!

Fortunately, Dan Clarke recently wrote an excellent introduction to snapshot testing as his contribution to the .NET Advent Calendar.

It turns out that source generators are a great fit for snapshot testing. Source generators are all about producing a deterministic output for a given input (the source code), and we want the output to be exactly the same every time. By taking a "snapshot" of our required output and comparing the actual output against it, we can be confident our source generator is working correctly.

Now, you could write the code for all this manually, but you don't need to, because there's a great library for it called Verify, written by Simon Cropp. The library does the serialization and comparison for you, handles the file naming, and even integrates with diff tools, making it easy to compare and visualise the differences between your objects when a test fails.

Verify also has a plethora of extensions for snapshot testing almost anything: in-memory objects, EF Core queries, images, Blazor components, HTML, XAML, WinForms UIs; the list seems endless! The extension we're interested in, though, is Verify.SourceGenerators.

I didn't realise until recently that Verify had explicit support for testing source generators. I'd used Verify "manually" before, but when I heard Simon discussing it with Dan Clarke on the Unhandled Exception podcast, I just had to try it out!

The extensions and helpers provided by Verify.SourceGenerators work with both "original" source generators (ISourceGenerator) and incremental source generators (IIncrementalGenerator), and they have two main advantages over the "manual" approach I was using before:

  • They automatically handle multiple files being added to the compilation by the generator
  • They gracefully handle any diagnostics added to the compilation

For these reasons, I'll be going back and updating the source generators I maintain to use the library!

That covers the basics of snapshot testing for now, so it's time to add a test project and start testing our incremental source generator!

1. Create a test project

I'm going to pick up where I left off in the previous post, where we have a solution containing a single project called NetEscapades.EnumGenerators. This project contains our source generator.

In the following script I do the following:

  • Create an xunit test project
  • Add it to the solution
  • Add a reference from the test project to the src project
  • Add some packages we need to the test project:
    • Microsoft.CodeAnalysis.CSharp and Microsoft.CodeAnalysis.Analyzers contain the methods for running a source generator in memory and examining its output.
    • Verify.XUnit contains the Verify snapshot-testing integration for xunit. There are corresponding adapters for other test frameworks.
    • Verify.SourceGenerators contains the Verify extensions specifically for working with source generators. It's not strictly necessary, but it makes things much easier!
dotnet new xunit -o ./tests/NetEscapades.EnumGenerators.Tests
dotnet sln add ./tests/NetEscapades.EnumGenerators.Tests
dotnet add ./tests/NetEscapades.EnumGenerators.Tests reference ./src/NetEscapades.EnumGenerators

# Add some helper packages to the test project
dotnet add ./tests/NetEscapades.EnumGenerators.Tests package Microsoft.CodeAnalysis.CSharp
dotnet add ./tests/NetEscapades.EnumGenerators.Tests package Microsoft.CodeAnalysis.Analyzers
dotnet add ./tests/NetEscapades.EnumGenerators.Tests package Verify.SourceGenerators
dotnet add ./tests/NetEscapades.EnumGenerators.Tests package Verify.XUnit

After running the above script, your test project's .csproj file should look something like this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <Nullable>enable</Nullable>
    <IsPackable>false</IsPackable>
    <ImplicitUsings>true</ImplicitUsings>
  </PropertyGroup>

  <!-- Add this 👇 to the base template -->
  <ItemGroup>
    <PackageReference Include="Verify.XUnit" Version="14.7.0" />
    <PackageReference Include="Verify.SourceGenerators" Version="1.2.0" />
    <PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.3.2" PrivateAssets="all" />
    <PackageReference Include="Microsoft.CodeAnalysis.CSharp" Version="4.0.1" PrivateAssets="all" />
  </ItemGroup>

  <!-- Add 👇 a reference to the generator project -->
  <ItemGroup>
    <ProjectReference Include="..\..\src\NetEscapades.EnumGenerators\NetEscapades.EnumGenerators.csproj" />
  </ItemGroup>

  <!-- 👇 This is all part of the base template -->
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.11.0" />
    <PackageReference Include="xunit" Version="2.4.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.4.3">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
    <PackageReference Include="coverlet.collector" Version="3.1.0">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
  </ItemGroup>

</Project>

Now that we have all the dependencies installed, let's write a test!

2. Create a simple snapshot test

Testing a source generator requires a certain amount of setup, so let's create a helper class that takes a string, creates a Compilation from it, runs our source generator against it, and then uses snapshot testing to assert the output.

Before we get to that, let's see what our test will look like:

using VerifyXunit;
using Xunit;

namespace NetEscapades.EnumGenerators.Tests;

[UsesVerify] // 👈 Adds hooks for Verify into XUnit
public class EnumGeneratorSnapshotTests
{
    [Fact]
    public Task GeneratesEnumExtensionsCorrectly()
    {
        // The source code to test
        var source = @"
using NetEscapades.EnumGenerators;

[EnumExtensions]
public enum Colour
{
    Red = 0,
    Blue = 1,
}";

        // Pass the source code to our helper and snapshot test the output
        return TestHelper.Verify(source);
    }
}

The TestHelper does all the work here, so, rather than bury the lede, the following shows the initial implementation, annotated to describe what's going on:

using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using VerifyXunit;

namespace NetEscapades.EnumGenerators.Tests;

public static class TestHelper
{
    public static Task Verify(string source)
    {
        // Parse the provided string into a C# syntax tree
        SyntaxTree syntaxTree = CSharpSyntaxTree.ParseText(source);

        // Create a Roslyn compilation for the syntax tree.
        CSharpCompilation compilation = CSharpCompilation.Create(
            assemblyName: "Tests",
            syntaxTrees: new[] { syntaxTree });

        // Create an instance of our EnumGenerator incremental source generator
        var generator = new EnumGenerator();

        // The GeneratorDriver is used to run our generator against a compilation
        GeneratorDriver driver = CSharpGeneratorDriver.Create(generator);

        // Run the source generator!
        driver = driver.RunGenerators(compilation);

        // Use Verify to snapshot test the source generator output!
        return Verifier.Verify(driver);
    }
}

When you run your snapshot test, Verify tries to compare a snapshot of the GeneratorDriver output with an existing snapshot. As this is the first time the test has run, there is no existing snapshot, so the test fails and Verify automatically opens your default diff tool, VS Code in my case. However, the diff probably doesn't show what you expect!

[Screenshot: the Verify diff tool, showing only {} on the left and an empty pane on the right]

The pane on the right is empty because we don't have an existing snapshot. But instead of showing the output of our source generator on the left, all we see is {}. Something seems to have gone wrong.

Well, it turns out that's because I didn't read the documentation. The Verify.SourceGenerators readme is very clear that for the source generator outputs to be handled by the Verify calls, you need to initialize the converters by calling VerifySourceGenerators.Enable(); once per assembly.

The correct way to do this in modern C# is to use a [ModuleInitializer] attribute. As described in the spec, this code runs once before any other code in your assembly.

You can create a module initializer by decorating any static void method in your project with the [ModuleInitializer] attribute. In our case, we'd do the following:

using System.Runtime.CompilerServices;
using VerifyTests;

namespace NetEscapades.EnumGenerators.Tests;

public static class ModuleInitializer
{
    [ModuleInitializer]
    public static void Init()
    {
        VerifySourceGenerators.Enable();
    }
}

Note that module initializers are a C# 9 feature, which means you can use them even when targeting earlier versions of .NET. However, the [ModuleInitializer] attribute is only available in .NET 5+. If you're targeting earlier versions of .NET, you can create your own implementation of the attribute, similar to the approach I describe in this post for the .NET [DoesNotReturn] attribute.
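For reference, such a polyfill is just an internal copy of the attribute declared in the expected namespace; the compiler matches it by name alone. A minimal sketch (the conditional compilation symbol is one possible choice, not prescribed by the post):

```
#if !NET5_0_OR_GREATER
// The compiler only looks for an attribute with this exact name in this
// exact namespace, so an internal copy works on older target frameworks.
namespace System.Runtime.CompilerServices
{
    [AttributeUsage(AttributeTargets.Method, Inherited = false)]
    internal sealed class ModuleInitializerAttribute : Attribute { }
}
#endif
```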

After adding the initializer, if we run our test again we get something that looks a bit better: it's our custom [EnumExtensions] attribute, which we add to the compilation as part of our source generator:

[Screenshot: the diff tool showing the generated [EnumExtensions] attribute source]

That attribute looks like what we expected, but there's still something wrong: no other source code is being generated. Our source generator added the attribute, but it should also be generating the EnumExtensions class. 🤔

3. Debugging an error: missing references

The nice thing about testing source generators like this is that they're super easy to debug. You don't need to spin up separate instances of your IDE or anything like that. You're literally running the source generator in the context of the unit test, so when testing in your IDE (I'm using JetBrains Rider), you can click debug and step through the code.

As the test wasn't throwing an exception, just not producing the correct output, I suspected my logic must be wrong somewhere in the source generator. I placed a breakpoint in GetSemanticTargetForGeneration(), the first of the "transform" methods in our incremental generator pipeline. I then started debugging and confirmed that we hit the breakpoint.

[Screenshot: the debugger paused at the breakpoint in GetSemanticTargetForGeneration()]

As you can see above, we hit the breakpoint in GetSemanticTargetForGeneration(), and the enumDeclarationSyntax variable contains the Colour enum from our test code, so everything looks good so far. I stepped through the method, where we loop through the attributes on the enum declaration, trying to find our [EnumExtensions] attribute. Strangely, though, trying to use the SemanticModel to access the symbol for the [EnumExtensions] attribute syntax returned null, so we bailed out! That explains why our source generator wasn't working. The next question is why?

[Screenshot: inspecting the GetSymbolInfo() result in the immediate window]

Before I stopped debugging, I checked the value of context.SemanticModel.GetSymbolInfo(attributeSyntax).CandidateSymbols using the immediate window. This returned a single value, so the failure wasn't due to ambiguity or a similar issue. Checking context.SemanticModel.GetSymbolInfo(attributeSyntax).CandidateReason returned NotAnAttributeType.

Huh? NotAnAttributeType?

After a bit of digging, I realised the problem was that the compilation had no references by default. That meant it couldn't find System.Attribute, so it couldn't create the [EnumExtensions] attribute correctly. The solution was to update my TestHelper to add a reference to the correct DLL. I took a reference to the assembly containing object (System.Private.CoreLib in this case) and added it to the compilation. The full TestHelper class is shown below:

using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using VerifyXunit;

namespace NetEscapades.EnumGenerators.Tests;

public static class TestHelper
{
    public static Task Verify(string source)
    {
        SyntaxTree syntaxTree = CSharpSyntaxTree.ParseText(source);

        // Create references for the assemblies we require
        // We could add multiple references if required
        IEnumerable<PortableExecutableReference> references = new[]
        {
            MetadataReference.CreateFromFile(typeof(object).Assembly.Location)
        };

        CSharpCompilation compilation = CSharpCompilation.Create(
            assemblyName: "Tests",
            syntaxTrees: new[] { syntaxTree },
            references: references); // 👈 Pass the references to the compilation

        var generator = new EnumGenerator();

        GeneratorDriver driver = CSharpGeneratorDriver.Create(generator);
        driver = driver.RunGenerators(compilation);

        return Verifier.Verify(driver)
            .UseDirectory("Snapshots");
    }
}

After making this change and running the test, Verify opens our diff tool again, and this time it contains two diffs: the [EnumExtensions] attribute as before, but also the generated EnumExtensions class:

[Screenshot: the diff tool showing both the attribute and the generated EnumExtensions class]

At this point we can accept the diffs as verified files, which saves them to disk. You can manually copy the diffs across, or run the commands that Verify places on your clipboard in a terminal, for example:

cmd /c del "C:\repo\sourcegen\tests\NetEscapades.EnumGenerators.Tests\EnumGeneratorSnapshotTests.GeneratesEnumExtensionsCorrectly.00.verified.txt"
cmd /c del "C:\repo\sourcegen\tests\NetEscapades.EnumGenerators.Tests\EnumGeneratorSnapshotTests.GeneratesEnumExtensionsCorrectly.02.verified.cs"
cmd /c del "C:\repo\sourcegen\tests\NetEscapades.EnumGenerators.Tests\EnumGeneratorSnapshotTests.GeneratesEnumExtensionsCorrectly.verified.cs"
cmd /c del "C:\repo\sourcegen\tests\NetEscapades.EnumGenerators.Tests\EnumGeneratorSnapshotTests.GeneratesEnumExtensionsCorrectly.verified.txt"
cmd /c move /Y "C:\repo\sourcegen\tests\NetEscapades.EnumGenerators.Tests\EnumGeneratorSnapshotTests.GeneratesEnumExtensionsCorrectly.00.received.cs" "C:\repo\sourcegen\tests\NetEscapades.EnumGenerators.Tests\EnumGeneratorSnapshotTests.GeneratesEnumExtensionsCorrectly.00.verified.cs"
cmd /c move /Y "C:\repo\sourcegen\tests\NetEscapades.EnumGenerators.Tests\EnumGeneratorSnapshotTests.GeneratesEnumExtensionsCorrectly.01.received.cs" "C:\repo\sourcegen\tests\NetEscapades.EnumGenerators.Tests\EnumGeneratorSnapshotTests.GeneratesEnumExtensionsCorrectly.01.verified.cs"

Once we've updated our snapshots, if we run the tests again, they pass! 🎉

4. Moar tests!

Now that we've written one snapshot test for our source generator, it's trivial to add more. I decided to test the following cases:

  • An enum without the attribute: doesn't generate an extension method
  • An enum missing the correct namespace import: doesn't generate an extension method
  • Two enums in one file: generates extensions for both enums
  • Two enums, one without the attribute: only generates an extension for the attributed enum

You can find the source code for these examples on GitHub, but they look virtually identical to our existing test. The only things that change are the test source code and snapshots.

5. Testing diagnostics

One aspect of source generators we haven't looked at yet is diagnostics. Source generators can also act as analyzers, which allows them to report problems in the user's source code. This is useful if, for example, you need to tell users that they're using your generator incorrectly in some way.

We don't have any diagnostics in our source generator, but just to show that they work well with snapshot testing, let's add a dummy one!

First, we create a helper method in the source generator that produces a diagnostic for an enum:

static Diagnostic CreateDiagnostic(EnumDeclarationSyntax syntax)
{
    var descriptor = new DiagnosticDescriptor(
        id: "TEST01",
        title: "A test diagnostic",
        messageFormat: "A description about the problem",
        category: "Tests",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    return Diagnostic.Create(descriptor, syntax.GetLocation());
}

Next, we call this method to create the diagnostic in the Execute() method of our source generator, and register it with the output using the SourceProductionContext provided to the method:

static void Execute(Compilation compilation, ImmutableArray<EnumDeclarationSyntax> enums, SourceProductionContext context)
{
    if (enums.IsDefaultOrEmpty)
    {
        return;
    }

    // Add a dummy diagnostic
    context.ReportDiagnostic(CreateDiagnostic(enums[0]));

    // ...
}

Remember, this is just to demonstrate snapshot testing; we don't actually want random diagnostics popping up!

If we run our tests again, we get failures. Verify helpfully extracts both the additional source code added to the compilation and the diagnostic. The diagnostic is a C# object, so it's serialized into a JSON-like document, like this:

{
  Diagnostics: [
    {
      Id: TEST01,
      Title: A test diagnostic,
      Severity: Warning,
      WarningLevel: 1,
      Location: : (3,0)-(8,1),
      MessageFormat: A description about the problem,
      Message: A description about the problem,
      Category: Tests
    }
  ]
}

Verify launches the diff tool again, showing that there's now an additional file to verify, containing the diagnostic:

[Screenshot: the diff tool showing the additional diagnostic file]

Source generators feel like a near-perfect use case for snapshot testing, as there's generally a very specific, deterministic output required for a given input. You could certainly design your source generators to allow more granular unit testing, but for the most part I find that snapshot testing, with a bit of debugging where necessary, gives me everything I need.

Summary

In this post I showed how to use snapshot testing to test the source generator created in the previous post. I gave a brief introduction to snapshot testing, then showed how you can use Verify.SourceGenerators to test your generator's output. We fixed a couple of issues with the tests, and finally showed how Verify handles both the diagnostics and the syntax trees that a source generator creates.

December 21, 2021, 2:00 a.m

Next: Customising Generated Code with Marker Attributes: Creating a Source Generator - Part 4

Previous: Testing an Incremental Generator with Snapshot Tests: Creating a Source Generator - Part 2


Andrew Lock | .NET Escapades

NuGet Packaging and Integration Testing: Creating a Source Generator - Part 3

In the first post in this series I described how to create a .NET 6 incremental source generator, and in the second post I described how to unit test your generators using snapshot tests with Verify. These are important first steps in building a source generator, and snapshot testing provides a fast, debugging-friendly approach to testing.

Another essential part of testing your package is integration testing. By that I mean testing the source generator as it's used in practice, as part of the build process of a project. Similarly, if you're going to ship your source generator as a NuGet package, you should test that the NuGet package works correctly when used by consuming projects.

In this post I do three things with the source generator I've created in this series:

  • Create an integration test project
  • Create a NuGet package
  • Test the NuGet package in an integration test project

Everything in this post builds on the work from the previous posts, so go back and check those if you find anything confusing!

  1. Create the integration test project
  2. Add integration test
  3. Create a NuGet package
  4. Create a local NuGet repository with a custom NuGet configuration
  5. Add a NuGet package test project
  6. Run the NuGet package integration test

1. Create the integration test project

The first step is to create the integration test project. The following script creates a new xunit test project, adds it to the solution, and adds a reference to the source generator project:

dotnet new xunit -o ./tests/NetEscapades.EnumGenerators.IntegrationTests
dotnet sln add ./tests/NetEscapades.EnumGenerators.IntegrationTests
dotnet add ./tests/NetEscapades.EnumGenerators.IntegrationTests reference ./src/NetEscapades.EnumGenerators

This creates a normal project reference between the test project and the source code generator project, something like this:

<ProjectReference Include="..\..\src\NetEscapades.EnumGenerators\NetEscapades.EnumGenerators.csproj" />

Unfortunately, for source generator (or analyzer) projects, you need to tweak this element a little for everything to work correctly. Specifically, you need to add the OutputItemType and ReferenceOutputAssembly attributes:

  • OutputItemType="Analyzer" tells the compiler to load the project as an analyzer/source generator as part of the build process.
  • ReferenceOutputAssembly="false" means the project does not take a reference on the source generator project's dll.

This provides a project reference similar to the following:

<ProjectReference Include="..\..\src\NetEscapades.EnumGenerators\NetEscapades.EnumGenerators.csproj" OutputItemType="Analyzer" ReferenceOutputAssembly="false" />

With these changes, your integration test project should look like this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <Nullable>enable</Nullable>
    <IsPackable>false</IsPackable>
  </PropertyGroup>

  <ItemGroup>
    <!-- 👇 This project reference is added by the script...-->
    <ProjectReference Include="..\..\src\NetEscapades.EnumGenerators\NetEscapades.EnumGenerators.csproj"
                      OutputItemType="Analyzer"
                      ReferenceOutputAssembly="false" />
    <!-- 👆 But you must add these attributes yourself-->
  </ItemGroup>

  <!-- 👇 Added by the default template -->
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.11.0" />
    <PackageReference Include="xunit" Version="2.4.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.4.3">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
    <PackageReference Include="coverlet.collector" Version="3.1.0">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
  </ItemGroup>

</Project>

With the project file organized, let's add some basic tests to confirm that the source generator is working properly.

2. Add the integration test

The first thing we need for our tests is an enum for the source generator to create an extension class for. The following is very simple, but note that I've given it the [Flags] attribute for extra complexity. This isn't necessary, but it gives our tests a slightly more complex example that we want to make sure we handle:

using System;

namespace NetEscapades.EnumGenerators.IntegrationTests;

[EnumExtensions]
[Flags]
public enum Colour
{
    Red = 1,
    Blue = 2,
    Green = 4,
}

For our first tests, let's just confirm two things:

  1. The source generator generates an extension method called ToStringFast() when an enum is decorated with the [EnumExtensions] attribute.
  2. The result of calling ToStringFast() is the same as calling ToString().

The following test does just that, testing 5 different values for the Colour enum, including:

  • Valid values (Colour.Red)
  • Invalid values ((Colour)15)
  • Composite values (Colour.Green | Colour.Blue)

This test confirms both that the extension method exists (it wouldn't compile otherwise), and that we get the expected results for all the values above:

using Xunit;

namespace NetEscapades.EnumGenerators.IntegrationTests;

public class EnumExtensionsTests
{
    [Theory]
    [InlineData(Colour.Red)]
    [InlineData(Colour.Green)]
    [InlineData(Colour.Green | Colour.Blue)]
    [InlineData((Colour)15)]
    [InlineData((Colour)0)]
    public void FastToStringIsSameAsToString(Colour value)
    {
        var expected = value.ToString();
        var actual = value.ToStringFast();

        Assert.Equal(expected, actual);
    }
}

And that's it. We can run all our tests by running dotnet test in the solution.

Note that if you make changes to your source generator, you may need to close and re-open your IDE before your integration test project picks up the changes.

If you're creating a source generator for a specific project, this level of integration testing may be all you need. However, if you intend to distribute the source generator, you'll likely want to create a NuGet package.

3. Create a NuGet package

Creating a NuGet package for a source generator is very similar to creating one for a standard library, but the contents of the NuGet package are laid out differently. Specifically, you must:

  • Make sure the build output ends up in the analyzers/dotnet/cs folder of the NuGet package.
  • Make sure the dll does not end up in the "normal" lib folder of the NuGet package.

For the first point, make sure you have the following <ItemGroup> in your project file:

<ItemGroup>
  <None Include="$(OutputPath)\$(AssemblyName).dll"
        Pack="true"
        PackagePath="analyzers/dotnet/cs"
        Visible="false" />
</ItemGroup>

This ensures the source generator assembly is packed into the correct location in the NuGet package, so the compiler will load it as a source generator/analyzer.

You should also set the IncludeBuildOutput property to false, so the consuming project doesn't get a reference to the source generator dll itself:

<PropertyGroup>
  <IncludeBuildOutput>false</IncludeBuildOutput>
</PropertyGroup>

With this in place, you can simply dotnet pack the project. In the example below, I set the version number to 0.1.0-beta and make sure the output is placed in the ./artifacts folder:

dotnet pack -c Release -o ./artifacts -p:Version=0.1.0-beta

This creates a NuGet package with the following name:

NetEscapades.EnumGenerators.0.1.0-beta.nupkg

When you open the package in NuGet Package Explorer, the layout should look like the image below, with the dll in the analyzers/dotnet/cs folder, and no other dlls/folders included.

[Image: NuGet Package Explorer showing the dll in the analyzers/dotnet/cs folder]

Now, testing this package is where things get tricky. We don't want to push the NuGet package to a repository before testing it. We also don't want to "pollute" our local NuGet cache with the test package. Getting around that requires jumping through a number of hoops.

4. Create a local NuGet repository with a custom NuGet configuration

First, we need to create a local NuGet repository. By default, when you run dotnet restore, packages are restored from nuget.org, but we want to make sure our NuGet test project uses the local test package. That means we need to configure a custom restore source.

The typical way to do this is to create a nuget.config file and list additional sources. You can include remote sources (like nuget.org, or private NuGet feeds like myget.org) and "local" sources, which are just a folder containing NuGet packages. This last option is exactly what we want.

However, for our tests, we don't necessarily want to give the config file the "default" nuget.config name, otherwise this source would be used for every restore in our solution. Ideally, only the NuGet integration test project should use the local source containing our beta package. To achieve this, we'll give our config file a different name, so it isn't picked up automatically, and pass that name explicitly when needed.

The following script creates a nuget.config file, renames it to nuget.integration-tests.config, and adds the ./artifacts directory (where we packed our test NuGet package) as a NuGet source called local-packages:

dotnet new nugetconfig
mv nuget.config nuget.integration-tests.config
dotnet nuget add source ./artifacts -n local-packages --configfile nuget.integration-tests.config

The resulting nuget.integration-tests.config file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!--To inherit the global NuGet package sources remove the <clear/> line below -->
    <clear />
    <add key="nuget" value="https://api.nuget.org/v3/index.json" />
    <add key="local-packages" value="./artifacts" />
  </packageSources>
</configuration>

Now that we have the configuration file, it's time to create our NuGet integration test project.

6. Add a NuGet package test project

In the first part of this post, we created an integration test project to confirm that the source generator worked correctly when run inline in the compiler. For the NuGet package tests, I use a little MSBuild trick to use exactly the same test files in the NuGet package test as in the "normal" integration test to reduce duplication and ensure consistency.

The following script creates a new xunit test project and adds a reference to our test NuGet package:

dotnet new xunit -o ./tests/NetEscapades.EnumGenerators.NugetIntegrationTestsdotnet add ./tests/NetEscapades.EnumGenerators.NugetIntegrationTests Paket NetEscapades.EnumGenerators --Version 0.1.0-beta

Note that we are NOT adding this project to the solution file, so it isn't part of the normal restore/build/test development cycle. That simplifies things a lot, and since we already have the integration test, it's not a big problem. This project tests that the NuGet package is packaged correctly, and I think it's fine to only run it as part of the CI process.

Another option is to add the project to the solution but remove the project from any build configurations in the solution.

After running the script above, we need to make some manual changes to the .csproj to include all the C# files from the "normal" integration test project in this NuGet integration test project. We can use a <Compile> element for this, with a wildcard in the Include attribute that points to the other project's .cs files. The final project file should look something like this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <Nullable>enable</Nullable>
    <IsPackable>false</IsPackable>
  </PropertyGroup>

  <!-- 👇 Add the test package -->
  <ItemGroup>
    <PackageReference Include="NetEscapades.EnumGenerators" Version="0.1.0-beta" />
  </ItemGroup>

  <!-- 👇 Link all the files from the integration test project, so we run the same tests -->
  <ItemGroup>
    <Compile Include="..\NetEscapades.EnumGenerators.IntegrationTests\*.cs" Link="%(Filename)%(Extension)" />
  </ItemGroup>

  <!-- Standard test project packages -->
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.11.0" />
    <PackageReference Include="xunit" Version="2.4.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.4.3">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
    <PackageReference Include="coverlet.collector" Version="3.1.0">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
  </ItemGroup>

</Project>

And that's it. This project literally consists of just the project file and the nuget.integration-tests.config file. All that remains is to run it.

7. Run the NuGet package integration test

Unfortunately, running the project is another case where we have to be careful. We don't want to "pollute" our machine's NuGet caches with the test NuGet package, and we need to use our custom nuget.config file. That means running the restore/build/test steps separately, passing the necessary command-line switches at each step.

To make sure we don't pollute the NuGet caches with the test package, we restore the NuGet packages to a local folder, ./packages. This uses more disk space and network during the restore (because it downloads packages to that folder which are already cached elsewhere). But trust me, it's worth it. Otherwise, prepare yourself for nasty bugs later, when you can't update your test package, or a completely different project starts using it.

The following script runs the restore/build/test steps for the NuGet integration test project. It assumes you have already created the NuGet package, as described in section 3.

# Restore the project using the custom config file, restoring packages to a local folder
dotnet restore ./tests/NetEscapades.EnumGenerators.NugetIntegrationTests --packages ./packages --configfile "nuget.integration-tests.config"

# Build the project (no restore), using the packages restored to the local folder
dotnet build ./tests/NetEscapades.EnumGenerators.NugetIntegrationTests -c Release --packages ./packages --no-restore

# Test the project (no build or restore)
dotnet test ./tests/NetEscapades.EnumGenerators.NugetIntegrationTests -c Release --no-build --no-restore

If everything went well, your tests should pass, and you can be confident that the NuGet package you created is packaged correctly and ready for distribution.

Summary

In this post I showed how to create an integration test project for a source generator, using a project reference to run the generator as part of the compilation, just as you would with a normal reference. I also showed how to create a NuGet package for your source generator, and how to create an integration test for the package to make sure it is packaged correctly. This last process is trickier, because you have to be careful not to pollute your local NuGet package caches with the test package.

January 4, 2022 at 2:00 am.

Next Finding the Namespace and Type Hierarchy of a Type Declaration: Building a Source Code Generator – Part 5

Previous NuGet Packaging and Integration Testing: Building a Source Code Generator - Part 3


In previous posts in this series, I showed how to create an incremental source generator, unit and integration test it, and package it into a NuGet package. In this post, I describe how to customise the behaviour of the source generator by extending the marker attribute with additional properties.

Extending the source generator marker attribute

One of the first steps for any source generator is identifying which code in the project needs to take part in source generation. The source generator can look for specific types or members, but another common approach is to use a marker attribute. This is the approach I described in the first post in this series.

The [EnumExtensions] attribute I described in the first post was a simple attribute with no additional properties. That meant there was no way to customise the code generated by the source generator, which was one of the limitations I discussed at the end of that post.

A common way to provide this functionality is to add extra properties to the marker attribute. In this post, I'll show how to do this for a single setting: the name of the extension method class to generate.

By default, the name EnumExtensions is used for the extension method class. This change allows you to specify a different name by using the ExtensionClassName property. For example, the following:

[EnumExtensions(ExtensionClassName = "DirectionExtensions")]
public enum Direction
{
    Left,
    Right,
    Up,
    Down,
}

would generate a class named DirectionExtensions, which looks like this:

// Hint name: EnumExtensions.g.cs
namespace NetEscapades.EnumGenerators
{
    public static partial class DirectionExtensions // 👈 Note the custom name
    {
        public static string ToStringFast(this Direction value)
            => value switch
            {
                Direction.Left => nameof(Direction.Left),
                Direction.Right => nameof(Direction.Right),
                Direction.Up => nameof(Direction.Up),
                Direction.Down => nameof(Direction.Down),
                _ => value.ToString(),
            };
    }
}

In the rest of the post, I'll go over the changes needed to the original source generator to achieve this.

I won't show the full generator source code here, just the incremental changes from the original in the first post. The complete code can be found on GitHub.

1. Update the marker attribute

The first step is to update the marker attribute with the new property:

[System.AttributeUsage(System.AttributeTargets.Enum)]
public class EnumExtensionsAttribute : System.Attribute
{
    public string ExtensionClassName { get; set; } // 👈 New property
}

The source generator automatically adds this marker attribute to the compilation, as described in the first post, so in practice we're updating a string here, not a standalone attribute class. If you want to add more customisations, you can add extra properties to this attribute, such as the ability to customise the namespace of the generated code.
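As a rough sketch (the AttributeSource class name is mine, not the generator's actual code), the attribute the generator registers can be pictured as a plain string constant, which is why changing it is just a string edit:

```csharp
// Hedged sketch: a source generator typically holds the marker attribute
// source as a string constant and adds it to the compilation at startup.
static class AttributeSource
{
    public const string Attribute = @"
namespace NetEscapades.EnumGenerators
{
    [System.AttributeUsage(System.AttributeTargets.Enum)]
    public class EnumExtensionsAttribute : System.Attribute
    {
        public string ExtensionClassName { get; set; } // the new property
    }
}";
}
```

Adding the new property is then just a matter of editing this string.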

2. Allow a different extension class name to be defined for each enum

With this change, users can now define a different extension class name for each enum, so we need to record the extension name when we extract the details about the enum into an EnumToGenerate object:

public readonly struct EnumToGenerate
{
    public readonly string ExtensionName; // 👈 New field
    public readonly string Name;
    public readonly List<string> Values;

    public EnumToGenerate(string extensionName, string name, List<string> values)
    {
        Name = name;
        Values = values;
        ExtensionName = extensionName;
    }
}

Note that because we make the extension class partial, and each ToStringFast() is a different overload, it doesn't matter whether a user specifies the same extension class name for more than one enum.
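To see why duplicate names are safe, here is a compilable sketch (these enums and the SharedExtensions name are hypothetical, standing in for generated output): the two partial declarations merge into one class, and the two ToStringFast methods are distinct overloads.

```csharp
public enum First { A }
public enum Second { B }

// First "generated" declaration of the shared class
public static partial class SharedExtensions
{
    public static string ToStringFast(this First value)
        => value switch { First.A => nameof(First.A), _ => value.ToString() };
}

// Second "generated" declaration: merges with the one above,
// and the method is simply another overload
public static partial class SharedExtensions
{
    public static string ToStringFast(this Second value)
        => value switch { Second.B => nameof(Second.B), _ => value.ToString() };
}
```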

3. Update code generation

We're working a little backwards here, but the updated code for generating the extension classes is shown below. There's nothing complicated in it, it's just a little fiddly working with the StringBuilder. The main difference from the previous iteration is that we generate a separate class for each enum (instead of one class with many methods), and that the name of the class comes from the EnumToGenerate:

public static string GenerateExtensionClass(List<EnumToGenerate> enumsToGenerate)
{
    var sb = new StringBuilder();
    sb.Append(@"
namespace NetEscapades.EnumGenerators
{");
    foreach (var enumToGenerate in enumsToGenerate)
    {
        sb.Append(@"
    public static partial class ").Append(enumToGenerate.ExtensionName).Append(@"
    {
        public static string ToStringFast(this ").Append(enumToGenerate.Name).Append(@" value)
            => value switch
            {");
        foreach (var member in enumToGenerate.Values)
        {
            sb.Append(@"
                ").Append(enumToGenerate.Name).Append('.').Append(member)
                .Append(" => nameof(")
                .Append(enumToGenerate.Name).Append('.').Append(member).Append("),");
        }
        sb.Append(@"
                _ => value.ToString(),
            };
    }");
    }
    sb.Append('}');

    return sb.ToString();
}

All that remains is to update the source generator to read the ExtensionClassName value from the marker attribute.

4. Reading the property value of a marker attribute

So far we've only had to make minor changes to support this new functionality, but we haven't yet done the hard part: reading the configured value. When you set a property on an attribute, you are semantically setting a named argument.

To find the ExtensionClassName property value, we first need to find the AttributeData for the [EnumExtensions] attribute. We can then check its NamedArguments for the specific property. A simplified version of the code to extract the property value, if it exists, is shown below:

static List<EnumToGenerate> GetTypesToGenerate(Compilation compilation, IEnumerable<EnumDeclarationSyntax> enums, CancellationToken ct)
{
    var enumsToGenerate = new List<EnumToGenerate>();

    // Get a reference to the [EnumExtensions] attribute symbol
    INamedTypeSymbol? enumAttribute = compilation.GetTypeByMetadataName("NetEscapades.EnumGenerators.EnumExtensionsAttribute");

    // ... error checking elided

    foreach (EnumDeclarationSyntax enumDeclarationSyntax in enums)
    {
        // Get the semantic model of the enum symbol
        SemanticModel semanticModel = compilation.GetSemanticModel(enumDeclarationSyntax.SyntaxTree);
        INamedTypeSymbol enumSymbol = semanticModel.GetDeclaredSymbol(enumDeclarationSyntax);

        // Set the default extension name
        string extensionName = "EnumExtensions";

        // Loop through all of the attributes on the enum
        foreach (AttributeData attributeData in enumSymbol.GetAttributes())
        {
            if (!enumAttribute.Equals(attributeData.AttributeClass, SymbolEqualityComparer.Default))
            {
                // This isn't the [EnumExtensions] attribute
                continue;
            }

            // This is the attribute, check all of the named arguments
            foreach (KeyValuePair<string, TypedConstant> namedArgument in attributeData.NamedArguments)
            {
                // Is this the ExtensionClassName argument?
                if (namedArgument.Key == "ExtensionClassName"
                    && namedArgument.Value.Value?.ToString() is { } n)
                {
                    extensionName = n;
                }
            }

            break;
        }

        // ... elided: existing code to get the name and members of the enum

        // Record the extension name
        enumsToGenerate.Add(new EnumToGenerate(extensionName, enumName, members));
    }

    return enumsToGenerate;
}

With these changes in place, you can add any number of customisations to your source generator by extending the marker attribute.

5. Supporting attribute constructors

In the example above, we only check the NamedArguments of the attribute, because the attribute doesn't have a constructor, so that's the only way to set the ExtensionClassName property. But what if the marker attribute was defined differently, and did have a constructor? For example, what if we make ExtensionClassName required, and add a new optional property, ExtensionNamespaceName:

[System.AttributeUsage(System.AttributeTargets.Enum)]
public class EnumExtensionsAttribute : System.Attribute
{
    public EnumExtensionsAttribute(string extensionClassName)
    {
        ExtensionClassName = extensionClassName;
    }

    public string ExtensionClassName { get; }
    public string ExtensionNamespaceName { get; set; }
}

In that case, the code from the previous section doesn't work. And when you have multiple properties and multiple constructors, things get complicated again. The code below shows the general approach for extracting these values in the source generator. In particular, you must read both the ConstructorArguments and the NamedArguments of the AttributeData, and work out which values were actually set:

INamedTypeSymbol enumSymbol = semanticModel.GetDeclaredSymbol(enumDeclarationSyntax);

// Placeholder variables for the ExtensionClassName and ExtensionNamespaceName
string className = null;
string namespaceName = null;

// Loop through all of the attributes on the enum until we find the [EnumExtensions] attribute
foreach (AttributeData attributeData in enumSymbol.GetAttributes())
{
    if (!enumAttribute.Equals(attributeData.AttributeClass, SymbolEqualityComparer.Default))
    {
        // This isn't the [EnumExtensions] attribute
        continue;
    }

    // This is the right attribute, check the constructor arguments
    if (!attributeData.ConstructorArguments.IsEmpty)
    {
        ImmutableArray<TypedConstant> args = attributeData.ConstructorArguments;

        // Make sure we don't have any errors
        foreach (TypedConstant arg in args)
        {
            if (arg.Kind == TypedConstantKind.Error)
            {
                // Have an error, so don't try to do any generation
                return;
            }
        }

        // Use the position of the argument to infer which value was set
        switch (args.Length)
        {
            case 1:
                className = (string)args[0].Value;
                break;
        }
    }

    // Now check for named arguments
    if (!attributeData.NamedArguments.IsEmpty)
    {
        foreach (KeyValuePair<string, TypedConstant> arg in attributeData.NamedArguments)
        {
            TypedConstant typedConstant = arg.Value;
            if (typedConstant.Kind == TypedConstantKind.Error)
            {
                // Have an error, so don't try to do any generation
                return;
            }
            else
            {
                // Use the constructor argument or property name to infer which value was set
                switch (arg.Key)
                {
                    case "ExtensionClassName":
                        className = (string)typedConstant.Value;
                        break;
                    case "ExtensionNamespaceName":
                        namespaceName = (string)typedConstant.Value;
                        break;
                }
            }
        }
    }

    break;
}
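The positional-then-named precedence above can be modelled without Roslyn types at all. The following is a hypothetical stand-in using plain collections (AttributeArgs and Infer are my names, not part of the generator), just to make the ordering rule concrete:

```csharp
using System.Collections.Generic;

// Models the precedence: positional constructor arguments are mapped by
// index first, then named arguments overwrite by key.
static class AttributeArgs
{
    public static (string ClassName, string NamespaceName) Infer(
        IReadOnlyList<object> constructorArgs,
        IReadOnlyDictionary<string, object> namedArgs)
    {
        // A single positional argument maps to ExtensionClassName
        string className = constructorArgs.Count >= 1 ? constructorArgs[0] as string : null;
        string namespaceName = null;

        // Named arguments take precedence, keyed by property name
        if (namedArgs.TryGetValue("ExtensionClassName", out var c)) className = c as string;
        if (namedArgs.TryGetValue("ExtensionNamespaceName", out var n)) namespaceName = n as string;

        return (className, namespaceName);
    }
}
```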

This is of course more complex, but it may be necessary to provide a better user experience to the consumer of your source code generator.

Summary

In this post, I described how you can provide customisation options to users of a source generator by adding properties to the marker attribute. It takes a bit of gymnastics to parse the supplied values, especially if your attribute uses required constructor arguments as well as named properties. Overall though, this is a good way to expand the capabilities of your source generator.

January 11, 2022 at 2:00 am.

Next Saving Source Generator Output in Source Control: Building a Source Generator – Part 6

Previous Customising Generated Code with Marker Attributes: Building a Source Code Generator – Part 4


In this next post about source generators, I'll show some common patterns I've needed when creating source generators, namely:

  • How to determine the namespace of a given class/struct/enum syntax
  • How to handle nested types when calculating the name of a class/struct/enum

At first glance, these seem like simple tasks, but there are subtleties that can make things more complicated than expected.

Finding the namespace for a class syntax

A common requirement for a source generator is to determine the namespace of a given class or other syntax. For example, so far in this series, the EnumExtensions generator I've described generates its extension method in a fixed namespace: NetEscapades.EnumGenerators. An improvement would be to generate the extension method in the same namespace as the original enum.

For example, if we have this enum:

namespace MyApp.Domain
{
    [EnumExtensions]
    public enum Colour
    {
        Red = 0,
        Blue = 1,
    }
}

we may want to generate the extension method in the MyApp.Domain namespace:

namespace MyApp.Domain
{
    public static partial class EnumExtensions
    {
        public static string ToStringFast(this Colour colour)
            => colour switch
            {
                Colour.Red => nameof(Colour.Red),
                Colour.Blue => nameof(Colour.Blue),
                _ => colour.ToString(),
            };
    }
}

At first glance this looks like it should be easy, but unfortunately there are several cases we have to handle:

  • File-scoped namespaces - introduced in C# 10, these omit the curly braces and apply the namespace to the whole file, for example:

namespace MyApp.Domain; // file-scoped namespace

[EnumExtensions]
public enum Colour
{
    Red = 0,
    Blue = 1,
}

  • Multiple nested namespaces - somewhat unusual, but you can have multiple nested namespace declarations:

namespace MyApp
{
    namespace Domain // nested namespace
    {
        [EnumExtensions]
        public enum Colour
        {
            Red = 0,
            Blue = 1,
        }
    }
}

  • Default namespace - if you don't specify a namespace, the default namespace is used. This is typically global::, but it can also be overridden in the csproj file using <RootNamespace>.

[EnumExtensions]
public enum Colour // no namespace specified, so the default is used
{
    Red = 0,
    Blue = 1,
}

The following annotated snippet is based on the code the LoggerMessage generator uses to handle all of these cases. It can be used whenever you have a "type" syntax that derives from BaseTypeDeclarationSyntax (which includes EnumDeclarationSyntax, ClassDeclarationSyntax, StructDeclarationSyntax, RecordDeclarationSyntax etc.), and it should handle most cases.

// determine the namespace the class/enum/struct is declared in, if any
static string GetNamespace(BaseTypeDeclarationSyntax syntax)
{
    // If we don't have a namespace at all we return an empty string
    // This accounts for the "default namespace" case
    string nameSpace = string.Empty;

    // Get the containing syntax node for the type declaration
    // (could be a nested type, for example)
    SyntaxNode? potentialNamespaceParent = syntax.Parent;

    // Keep moving "out" of nested classes etc until we get to a namespace
    // or until we run out of parents
    while (potentialNamespaceParent != null &&
           potentialNamespaceParent is not NamespaceDeclarationSyntax &&
           potentialNamespaceParent is not FileScopedNamespaceDeclarationSyntax)
    {
        potentialNamespaceParent = potentialNamespaceParent.Parent;
    }

    // Build up the final namespace by looping until we no longer have a namespace declaration
    if (potentialNamespaceParent is BaseNamespaceDeclarationSyntax namespaceParent)
    {
        // We have a namespace. Use that as the type
        nameSpace = namespaceParent.Name.ToString();

        // Keep moving "out" of the namespace declarations until we
        // run out of nested namespace declarations
        while (true)
        {
            if (namespaceParent.Parent is not NamespaceDeclarationSyntax parent)
            {
                break;
            }

            // Add the outer namespace as a prefix to the final namespace
            namespaceParent = parent;
            nameSpace = $"{namespaceParent.Name}.{nameSpace}";
        }
    }

    // return the final namespace
    return nameSpace;
}

With this code we can handle all of the namespace cases defined above. For the default/global namespace, we return string.Empty, which tells the source generator NOT to emit a namespace declaration. This ensures the generated code ends up in the same namespace as the target type, whether that's global:: or some other value set in <RootNamespace>.
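The "empty string means no namespace declaration" convention can be captured in a tiny helper. This is a sketch, with a helper name of my own choosing, not code from the generator:

```csharp
static class SourceEmitter
{
    // If GetNamespace returned string.Empty, emit the body as-is, so the code
    // lands in the default namespace (global:: or whatever <RootNamespace> set).
    public static string WrapInNamespace(string nameSpace, string body)
        => string.IsNullOrEmpty(nameSpace)
            ? body
            : $"namespace {nameSpace}\n{{\n{body}\n}}";
}
```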

With this code, we can now generate our extension method in the same namespace as the original enum. That should give consumers of the source generator a better experience, given that the extension methods for an enum are easier to discover when they're in the same namespace as the enum itself.

Finding the full type hierarchy from a type declaration syntax

So far in this series we have implicitly supported nested enums in our extension methods, because we call ToString() on an INamedTypeSymbol, which accounts for nested types. For example, if you have an enum defined like this:

public record Outer
{
    public class Nested
    {
        [EnumExtensions]
        public enum Colour
        {
            Red = 0,
            Blue = 1,
        }
    }
}

then calling ToString() on the Colour symbol returns Outer.Nested.Colour, which we can happily use in our extension method:

public static partial class EnumExtensions
{
    public static string ToStringFast(this Outer.Nested.Colour value)
        => value switch
        {
            Outer.Nested.Colour.Red => nameof(Outer.Nested.Colour.Red),
            Outer.Nested.Colour.Blue => nameof(Outer.Nested.Colour.Blue),
            _ => value.ToString(),
        };
}

Unfortunately, this fails if you have a generic outer type, e.g. Outer<T>. Substituting Outer<T> for Outer in the snippet above gives an EnumExtensions class that doesn't compile:

public static partial class EnumExtensions
{
    public static string ToStringFast(this Outer<T>.Nested.Colour value) // 👈 Invalid C#
    // ...
}

There are a few ways to deal with this, but in most cases we need to understand the whole type hierarchy. We can't simply "replicate" the hierarchy for our extension class (extension methods can't be declared in nested types), but extending the types in other ways may solve your problem. For example, I have a source generator project called StronglyTypedId that adds members to struct types. If you decorate a nested struct like this:

public partial record Outer
{
    public partial class Generic<T> where T : new()
    {
        public partial struct Nested
        {
            [StronglyTypedId]
            public readonly partial struct TestId
            {
            }
        }
    }
}

then we need to generate code similar to the following, replicating the hierarchy:

public partial record Outer
{
    public partial class Generic<T> where T : new()
    {
        public partial struct Nested
        {
            public readonly partial struct TestId
            {
                public TestId(int value) => Value = value;
                public int Value { get; }
                // ... etc
            }
        }
    }
}

This saves us from adding any special handling for generic types and the like, and is generally very versatile. It's the same approach the LoggerMessage generator uses for the high-performance logging implementation in .NET 6.

To implement this in our source generator, we need a helper (which we'll call ParentClass) to capture the details of each "parent" type of the nested target (Colour). We need to record 3 pieces of information:

  • The keyword of the type, i.e. class/struct/record
  • The Name of the type, e.g. Outer, Nested, Generic<T>
  • Any constraints on a generic type, e.g. where T : new()

We also need to record the parent-child relationship between the classes. We could use a Stack/Queue for this, but the implementation below uses a linked-list approach, where each ParentClass contains a reference to its child:

internal class ParentClass
{
    public ParentClass(string keyword, string name, string constraints, ParentClass? child)
    {
        Keyword = keyword;
        Name = name;
        Constraints = constraints;
        Child = child;
    }

    public ParentClass? Child { get; }
    public string Keyword { get; }
    public string Name { get; }
    public string Constraints { get; }
}

From the enum declaration itself we can build the linked list of ParentClasses, with code similar to the following. As before, this code works for any type declaration (class/struct etc.):

static ParentClass? GetParentClasses(BaseTypeDeclarationSyntax typeSyntax)
{
    // Try and get the parent syntax. If it isn't a type like class/struct, this will be null
    TypeDeclarationSyntax? parentSyntax = typeSyntax.Parent as TypeDeclarationSyntax;
    ParentClass? parentClassInfo = null;

    // Keep looping while we're in a supported nested type
    while (parentSyntax != null && IsAllowedKind(parentSyntax.Kind()))
    {
        // Record the parent type keyword (class/struct etc), name, and constraints
        parentClassInfo = new ParentClass(
            keyword: parentSyntax.Keyword.ValueText,
            name: parentSyntax.Identifier.ToString() + parentSyntax.TypeParameterList,
            constraints: parentSyntax.ConstraintClauses.ToString(),
            child: parentClassInfo); // set the child link (null initially)

        // Move to the next outer type
        parentSyntax = (parentSyntax.Parent as TypeDeclarationSyntax);
    }

    // return a link to the outermost parent type
    return parentClassInfo;
}

// We can only be nested in class/struct/record
static bool IsAllowedKind(SyntaxKind kind) =>
    kind == SyntaxKind.ClassDeclaration ||
    kind == SyntaxKind.StructDeclaration ||
    kind == SyntaxKind.RecordDeclaration;

This code builds the list starting from the type closest to our target type and working outwards. So for the example above, it creates a ParentClass hierarchy equivalent to this:

var parent = new ParentClass(
    keyword: "record",
    name: "Outer",
    constraints: "",
    child: new ParentClass(
        keyword: "class",
        name: "Generic<T>",
        constraints: "where T : new()",
        child: new ParentClass(
            keyword: "struct",
            name: "Nested",
            constraints: "",
            child: null
        )
    )
);
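Walking the Child links visits the parents from outermost to innermost, which is exactly the order needed to open the nested partial type declarations. The following is a self-contained sketch (ParentClass is reduced to a record here for brevity, and the Hierarchy helper is my own illustration):

```csharp
using System.Collections.Generic;

// Stand-in for the ParentClass helper above, reduced to a record for brevity
record ParentClass(string Keyword, string Name, string Constraints, ParentClass? Child);

static class Hierarchy
{
    // Walks the Child links, producing one "partial <keyword> <name> <constraints>"
    // opening declaration per parent, outermost first
    public static List<string> OpeningDeclarations(ParentClass? parent)
    {
        var lines = new List<string>();
        for (var p = parent; p is not null; p = p.Child)
        {
            lines.Add($"partial {p.Keyword} {p.Name} {p.Constraints}".TrimEnd());
        }
        return lines;
    }
}
```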

We can then reconstruct this hierarchy in our source generator when creating the output. The snippet below shows one simple way to use both the ParentClass hierarchy and the namespace from the previous section:

static string GetResource(string nameSpace, ParentClass? parentClass)
{
    var sb = new StringBuilder();
    int parentsCount = 0;

    // If we don't have a namespace, generate the code in the "default"
    // namespace, either global:: or a different <RootNamespace>
    var hasNamespace = !string.IsNullOrEmpty(nameSpace);
    if (hasNamespace)
    {
        // We could use a file-scoped namespace here, which would be a little simpler,
        // but that requires C# 10, which may not be available.
        // Depends what you want to support!
        sb.Append("namespace ")
          .Append(nameSpace)
          .AppendLine(@"
{");
    }

    // Loop through the full parent type hierarchy, starting with the outermost
    while (parentClass is not null)
    {
        sb.Append("    partial ")
          .Append(parentClass.Keyword) // e.g. class/struct/record
          .Append(' ')
          .Append(parentClass.Name) // e.g. Outer/Generic<T>
          .Append(' ')
          .Append(parentClass.Constraints) // e.g. where T : new()
          .AppendLine(@"
    {");
        parentsCount++; // keep track of how many layers deep we are
        parentClass = parentClass.Child; // repeat with the next child
    }

    // Write the actual target generation code here. Not shown for brevity
    sb.AppendLine(@"public readonly partial struct TestId
{
}");

    // We need to "close" each of the parent types, so write
    // the required number of '}'
    for (int i = 0; i < parentsCount; i++)
    {
        sb.AppendLine("}");
    }

    // Close the namespace, if we had one
    if (hasNamespace)
    {
        sb.Append('}').AppendLine();
    }

    return sb.ToString();
}

The example above isn't complete and won't work in every situation, but it shows one possible approach that may work for you, as I have found it useful in a number of situations.

Summary

In this post I showed how to calculate two specific pieces of information that are useful in source generators: the namespace of a type declaration syntax, and the nested type hierarchy of a type declaration syntax. These aren't always required, but they can be useful for handling complexities like generic parent types, or for making sure you generate your code in the same namespace as the original type.

January 18, 2022 at 2:00 am

Next Solving the Source Generator Marker Attribute Problem - Part 1: Building a Source Generator – Part 7

Previous Finding the Namespace and Type Hierarchy of a Type Declaration: Building a Source Code Generator – Part 5


In this post, I describe how to persist the output of your source generator to disk so it can be part of source control and code reviews, how to control where the files are written, and how to handle your source generator producing different output depending on the target framework.

By default, source generators do not produce any artifacts

One of the big attractions of source generators is that they run inside the compiler. This makes them more convenient than other source generation techniques such as T4 templates, as you don't need a separate build step.

However, there is also a possible downside to the source generator running inside the compiler: it can make it difficult to see the effect of a source generator when you're not in the context of an IDE.

For example, if you're reviewing a pull request on GitHub for a project that uses source generators, and a change is made that affects the generated code, it can be helpful to have that output visible in the PR. This can be especially important for "critical" code.

For example, in the Datadog Tracer we recently started using source generators to generate methods that are called by the "native" part of the tracer, which controls which integrations are enabled. This is a crucial part of the tracer, so it's important to be aware of changes to it. We wanted changes to be visible in PRs, so we needed the source generator output to be written to files.

Compiler-generated output files

There is a simple switch to enable persisting source generator files to the file system: EmitCompilerGeneratedFiles. You can set this property in your project file:

```xml
<PropertyGroup>
  <EmitCompilerGeneratedFiles>true</EmitCompilerGeneratedFiles>
</PropertyGroup>
```

Or you can set the MSBuild property in other ways, such as on the command line when building:

```bash
dotnet build /p:EmitCompilerGeneratedFiles=true
```
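If you want to enable this for every project in a repository, one option (a sketch not from the original post, relying on the standard MSBuild behaviour of importing a Directory.Build.props file found above a project) is to set the property once at the repository root:

```xml
<!-- Directory.Build.props at the repository root:
     MSBuild imports this into every project beneath it by default -->
<Project>
  <PropertyGroup>
    <EmitCompilerGeneratedFiles>true</EmitCompilerGeneratedFiles>
  </PropertyGroup>
</Project>
```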

If you set this property alone, the compiler emits the generated files to disk. For example, if we take the NetEscapades.EnumGenerators package and enable the EmitCompilerGeneratedFiles property, we can see that the generated source files are written to the obj folder:

[Image: generated source files written to the obj folder]

In particular, the output of the source generator is written to a folder defined as follows:

{BaseIntermediateOutputPath}/generated/{Assembly}/{SourceGeneratorName}/{GeneratedFile}

In the example above we have

  • BaseIntermediateOutputPath: obj/Debug/net6.0
  • Assembly: NetEscapades.EnumGenerators
  • SourceGeneratorName: NetEscapades.EnumGenerators.EnumGenerator
  • GeneratedFile: ColoursExtensions_EnumExtensions.g.cs, EnumExtensionsAttribute.g.cs

Writing the files to the obj folder is fine, but it doesn't really solve our problem, as the bin and obj folders are typically excluded from source control. We could explicitly include them in source control, but a better option is to emit the files somewhere else.

Output location control

You can control the location of the files emitted by the compiler using the CompilerGeneratedFilesOutputPath property. This is a path relative to the project root folder. For example, if you set the following in your project file:

```xml
<PropertyGroup>
  <EmitCompilerGeneratedFiles>true</EmitCompilerGeneratedFiles>
  <CompilerGeneratedFilesOutputPath>generated</CompilerGeneratedFilesOutputPath>
</PropertyGroup>
```

This writes the files to the generated folder in the project directory:

[Image: generated source files written to the generated folder in the project root]

Whatever value you provide for CompilerGeneratedFilesOutputPath replaces the {BaseIntermediateOutputPath}/generated prefix in the file path, so the files are written to:

{CompilerGeneratedFilesOutputPath}/{Assembly}/{SourceGeneratorName}/{GeneratedFile}

At first glance, this seems to solve all our problems: the source generator output is emitted to the file system, at a location that's included in source control. Problem solved, right?

The difficulty is that when you try to build a second time, after the files have already been written, you get a series of errors:

```
ColoursExtensions_EnumExtensions.g.cs(31,28): error CS0111: Type 'ColoursExtensions' already defines a member called 'IsDefined' with the same parameter types
ColoursExtensions_EnumExtensions.g.cs(40,28): error CS0111: Type 'ColoursExtensions' already defines a member called 'TryParse' with the same parameter types
```

This is because the compiler is including the emitted files in addition to the in-memory output of the source generator. That causes the duplicate types and the errors above. The answer is to exclude the emitted files from the compilation.

Excluding the emitted files from the compilation

The simple solution to this problem is to remove the emitted files from the project compilation, so that only the in-memory source generator output is part of the build. You can exclude them individually (e.g. by right-clicking a file in Visual Studio), or more conveniently, use a wildcard pattern to exclude all the .cs files in those folders:

```xml
<PropertyGroup>
  <EmitCompilerGeneratedFiles>true</EmitCompilerGeneratedFiles>
  <CompilerGeneratedFilesOutputPath>generated</CompilerGeneratedFilesOutputPath>
</PropertyGroup>
<ItemGroup>
  <!-- Exclude the output of source generators from the compilation -->
  <Compile Remove="$(CompilerGeneratedFilesOutputPath)/**/*.cs" />
</ItemGroup>
```

With this change we now have the best of all worlds: the source generator output is emitted to disk, is included in source control so it can be reviewed in PRs etc., and doesn't affect the compilation itself.

Splitting by target framework

We originally used the properties above when adding our first source generator to the Datadog Tracer. However, this caused us some problems later.

For context, the Datadog Tracer currently supports several target frameworks: net461, netstandard2.0, netcoreapp3.1. However, some of our integrations only apply to specific target frameworks. For example, the ASP.NET integration only applies to net461, so we use #if NETFRAMEWORK to exclude it from the .NET Core assemblies.
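As a sketch of what that conditional compilation looks like (the type name here is hypothetical, not the actual Datadog code):

```csharp
#if NETFRAMEWORK
// Only compiled into the net461 build; the netstandard2.0 and
// netcoreapp3.1 builds never see this type
public static class AspNetIntegration
{
    // ...
}
#endif
```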

The difficulty is that the output of our source generator is different for each target framework, yet the output for every target framework is written to the same folder. Each time the compiler runs for a target framework, it overwrites the existing files at generated/AssemblyName/GeneratorName/Filename.cs! Three different source generator outputs, but only one of them is persisted to disk.

To fix this problem, we add the target framework to the output file path using the $(TargetFramework) property:

```xml
<PropertyGroup>
  <!-- Persist source generator (and other) files to disk -->
  <EmitCompilerGeneratedFiles>true</EmitCompilerGeneratedFiles>
  <!-- 👇 The "base" path for the source generators -->
  <GeneratedFolder>generated</GeneratedFolder>
  <!-- 👇 Write the output for each target framework to a different sub-folder -->
  <CompilerGeneratedFilesOutputPath>$(GeneratedFolder)\$(TargetFramework)</CompilerGeneratedFilesOutputPath>
</PropertyGroup>
<ItemGroup>
  <!-- 👇 Exclude everything in the base folder -->
  <Compile Remove="$(GeneratedFolder)/**/*.cs" />
</ItemGroup>
```

With this change, the output of the source code generator for each framework is written into a separate folder so that we can easily see the difference between the assemblies.

[Image: a separate generated output folder for each target framework]

Obviously this approach is only necessary if you're multi-targeting and your source generator produces different output for different target frameworks, but it's an easy approach to take if you need it.

Summary

In this post I described how you can ensure source generators emit their generated output to disk. This can be useful if you want to track changes in the source generator output, or if you want to view that output in a non-IDE scenario, e.g. a pull request on GitHub. I then showed how to control where the files are written, and described an approach for when your source generator produces different output for different target framework builds of your project.

January 25, 2022 at 2:00 am

Next: Solving the source generator marker attribute problem - Part 2: Creating a source generator - Part 8

Previous: Saving source generator output in source control: Creating a source generator - Part 6

Andrew Lock | .NET Escapades

In this post, I describe an issue I've wrestled with for source generators: where to put the "marker attributes" that drive the source generator. I describe what marker attributes are, why they're useful for source generators, and why deciding where to put them can be problematic. Finally, I describe the solution I settled on, which strikes me as the best of both worlds.

Marker attributes and source generators

I'm a big fan of C# source generators and have written several posts on how to use them in your applications. I recently updated a library of mine called StronglyTypedId, for generating strongly-typed IDs, to use the built-in .NET source generator support instead of a custom Roslyn build task.

One of the most important phases of most source code generators is identifying the syntax in your application that needs to be involved in generating the code. This entirely depends on the purpose of the source code generator, but a very common approach is to use attributes to decorate the code that needs to participate in the code generation process.

For example, the LoggerMessage source generator that is part of the Microsoft.Extensions.Logging library in .NET 6 uses a [LoggerMessage] attribute to define the code to be generated:

```csharp
using Microsoft.Extensions.Logging;

public partial class TestController
{
    // Adding the attribute here indicates the LogHelloWorld
    // method should have its code generated
    [LoggerMessage(0, LogLevel.Information, "Writing hello world response to {person}")]
    partial void LogHelloWorld(Person person);
}
```

Likewise, my StronglyTypedId package uses a [StronglyTypedId] attribute applied to structs to indicate that the type should be a strongly-typed ID:

```csharp
using StronglyTypedIds;

[StronglyTypedId]
public partial struct MyCustomId { }
```

In both cases, the attribute itself is just a marker; it's used at compile time to tell the source generator what to generate. It doesn't need to be in the final compiled output, though if it is, that's usually not a problem.

The question I address in this post is: where should these marker attributes be defined?

Defining the marker attribute

In some cases there is a trivial answer. If the generator is an enhancement to an existing library that provides functionality the user needs anyway, the generator can simply be bundled with that library.

For example, the LoggerMessage generator is part of the Microsoft.Extensions.Logging.Abstractions library. It's shipped in the same NuGet package users would install anyway, and the marker attributes are included in the referenced DLL, so they're always available. This is the "best case" scenario as far as marker attributes are concerned.

[Image: the Microsoft.Extensions.Logging.Abstractions package containing both the generator and the marker attributes]

But what if you have a library that is only a source generator? Users still need to reference the attributes somehow, and at first glance you have three main options:

  1. Use the source generator to automatically add the attributes to the user's compilation.
  2. Ask users to add the attributes to their compilation themselves.
  3. Put the attributes in an external DLL, and ensure the project references it.

Each of these has pros and cons, so in this post I'll walk through each of them and explain which I think is best.

1. Adding Attributes to a User Build

Source generators have the ability to add source code to a consuming project. In general, source generators can't access the code they've added to the compilation, which avoids a lot of recursion problems. There is one exception: a source generator can register a "post-initialization" hook, which lets you add some fixed source code to the compilation.

In the .NET 6 incremental generator API, this hook is called RegisterPostInitializationOutput(). At this point you don't have access to any user code, so it's only useful for adding fixed code, but the user can reference that code, and you can rely on it in your source generator. For example:

```csharp
[Generator]
public class HelloWorldGenerator : IIncrementalGenerator
{
    /// <inheritdoc />
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // Register the attribute source
        context.RegisterPostInitializationOutput(i =>
        {
            var attributeSource = @"
namespace HelloWorld
{
    public class MyExampleAttribute : System.Attribute {}
}";
            i.AddSource("MyExampleAttribute.g.cs", attributeSource);
        });

        // ... generator implementation
    }
}
```

This hook is seemingly tailor-made for adding marker attributes to the user's compilation, which you can then use in the generator. In fact, this scenario is explicitly called out in the source generators cookbook as the way to handle marker attributes.

And most of the time it works perfectly.

Where things fall apart is when a user references your source generator in more than one project. The MyExampleAttribute class would then be added to both projects, in the HelloWorld namespace. If one of the projects references the other, you get a CS0436 compiler warning like:

```
warning CS0436: The type 'MyExampleAttribute' in 'HelloWorldGenerator\MyExampleAttribute.g.cs' conflicts with the imported type 'MyExampleAttribute' in 'MyProject, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.
```

The problem is that we are defining the same type in two different projects and the compiler cannot tell them apart. How can we solve this?

The obvious solution is to make the attribute internal instead of public. That way, each project only refers to the MyExampleAttribute added to that specific project. And that works 🎉

Except, it doesn't work if someone uses [InternalsVisibleTo]. At that point, all the internal types are effectively public to the referencing assembly, so we're back to square one.

Now you might be thinking, "surely people don't really use [InternalsVisibleTo] that much?" Well, I originally used this approach in my StronglyTypedId project, and I can confirm that they do. And I'm in no position to judge: the AssemblyInfo.cs file at my day job contains 22 [InternalsVisibleTo] attributes!

The big problem is that there's no workaround for users here; they're simply broken in this scenario. So let's look at another option.
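To make the failure concrete, here's a minimal sketch (the project names are hypothetical): both projects receive their own internal copy of the generated attribute, and [InternalsVisibleTo] then exposes one copy to the other:

```csharp
// Generated identically into both ProjectA and ProjectB:
namespace HelloWorld
{
    internal class MyExampleAttribute : System.Attribute { }
}

// Meanwhile, in ProjectA's AssemblyInfo.cs:
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("ProjectB")]

// ProjectB now sees two HelloWorld.MyExampleAttribute types: its own
// internal copy, plus ProjectA's copy via InternalsVisibleTo, and the
// compiler reports warning CS0436
```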

2. Ask users to create it themselves

The next option is to ask users to add the attribute themselves. You might wonder how or why that helps, but the key is that the user can add it once, in a single project, and use the same attribute throughout their solution. Instead of the source generator adding MyExampleAttribute to every project, the user creates it in their "domain helpers" project (for example).

This approach isn't as strange or backwards as it might seem at first. In fact, several C# features use exactly this approach. I mentioned one such case in a recent post, when discussing the [DoesNotReturn] attribute. This attribute is used for nullable flow analysis, among other things, but it's only defined in the BCL for .NET Core 3.0 and above. That means you can't use it if you're targeting .NET Core 2.x or .NET Standard, right?

Well, no! The C# compiler takes the "add it yourself" approach: it doesn't care where the attribute is defined, as long as it's defined somewhere. That means you can add it to your own project (making sure to use the correct namespace), and the C# compiler will "magically" treat it the same as the "original":

```csharp
#if !NETCOREAPP3_0_OR_GREATER
namespace System.Diagnostics.CodeAnalysis
{
    [AttributeUsage(AttributeTargets.Method)]
    public class DoesNotReturnAttribute : Attribute { }
}
#endif
```

We could do the same for source generators, but it feels like a lot of work to ask of users. It's fine for a super-simple attribute like [DoesNotReturn], but what about a complex attribute like [StronglyTypedId]?

```csharp
using System;

namespace StronglyTypedIds
{
    [AttributeUsage(AttributeTargets.Struct, Inherited = false, AllowMultiple = false)]
    [System.Diagnostics.Conditional("STRONGLY_TYPED_ID_USAGES")]
    public sealed class StronglyTypedIdAttribute : Attribute
    {
        public StronglyTypedIdAttribute(
            StronglyTypedIdBackingType backingType = StronglyTypedIdBackingType.Default,
            StronglyTypedIdConverter converters = StronglyTypedIdConverter.Default,
            StronglyTypedIdImplementations implementations = StronglyTypedIdImplementations.Default)
        {
            BackingType = backingType;
            Converters = converters;
            Implementations = implementations;
        }

        public StronglyTypedIdBackingType BackingType { get; }
        public StronglyTypedIdConverter Converters { get; }
        public StronglyTypedIdImplementations Implementations { get; }
    }
}
```

Asking a user to add this, and to get everything exactly right so as not to break the generator, seems like a non-starter to me. What's more, you lose the ability to evolve your API, as users would have to update this code every time they update your package. That sounds like a recipe for support requests...

This leaves us with only one option.

3. Reference marker attributes in an external DLL

With this approach, neither the generator nor the user adds the marker attributes to the compilation. Instead, the source generator relies on the attributes being defined in a DLL referenced by the user's project.

Note that I'm being deliberately vague about how and where this DLL comes from, as there are lots of options. The [LoggerMessage] generator, for example, relies on the attributes in the Microsoft.Extensions.Logging.Abstractions NuGet package, which also contains the generator. This is particularly useful, as the generator can be sure the attributes are always available, and vice versa: if the attribute is available, so is the generator.

If your generator is an "optional extra" for a "main" DLL, this approach makes a lot of sense. A similar argument can be made for putting the generator in a separate package that the "main" package takes a dependency on, similar to what some projects do for analyzers. Source generators really are just fancy analyzers, so many of the same patterns apply. For example, the main xunit package has a dependency on the xunit.analyzers package.


[Image: the xunit package's dependency on the xunit.analyzers package]

This approach makes sense if your generator is an optional add-on to a main package. By maintaining the dependency chain this way, you ensure that if the marker attributes are present (in the xunit package), the generator is always referenced too.

While it's possible to reference the generator package (e.g. xunit.analyzers) without the main xunit package, attempting to use the marker attributes would then be a compiler error, so you get the expected behaviour.

But back to the original problem: what if you have a "standalone" generator, i.e. your package is just a source generator? Surely we don't need to introduce a NuGet package containing only the attributes to fix this, do we?

Another option is to include the attributes in the source generator DLL itself. By default, the DLL containing the source generator isn't referenced by the user's project, but it could be. Crazy enough to work?

I tried different approaches to solve the problem with my StronglyTypedId generator project. And instead of jumping straight into the solution, in the next post I'm going to make you suffer as I talk about some of the approaches I've tried, how I failed, and finally the solution I found.

Summary

In this post I described what "marker attributes" are in the context of source generators, and how they can drive code generation. I then discussed where the attributes should be defined and how they get added to the compilation.

The conventional approach is for the source generator itself to add them to the compilation, but this can cause problems when users use the [InternalsVisibleTo] attribute. As an alternative, we could ask users to define the attribute themselves, as the C# compiler does in some cases. Or we can put the attributes in a DLL and have the project reference that DLL somehow. There are many different ways to achieve this; in the next post I examine some of them and describe the solution I chose.

February 1, 2022, 2:00 a.m.

Next: NetEscapades.EnumGenerators: a source generator for enum performance

Previous: Solving the source generator marker attribute problem - Part 1: Creating a source generator - Part 7

Andrew Lock | .NET Escapades

In the previous post I described the marker attributes used by source generators, and the problem of deciding how to reference them in a user's project. In this post, I describe some of the approaches I tried, along with the final approach I settled on.

Reference marker attributes in an external DLL

In short, marker attributes are simple attributes used to indicate which types a source generator should use for code generation, and they provide a way of passing options to the source generator.

For example, with my StronglyTypedId project, you decorate a struct with the [StronglyTypedId] attribute. The source generator uses the presence of this attribute to trigger generation of type converters and properties for the struct.

Similarly, the [LoggerMessage] attribute in Microsoft.Extensions.Logging.Abstractions is used to generate efficient logging infrastructure.

The question is: where should the marker attributes go? In the previous post I described three options:

  1. Added to the compilation by the source generator.
  2. Created manually by users.
  3. Contained in a referenced DLL.

Option 1 is the default approach, but it doesn't work when users use [InternalsVisibleTo], as the same type may end up defined more than once. In this post I look at variations on option 3, roughly in the order I tried them while solving this problem myself.

1. Directly refer to the build output

The first option is almost brilliant in its simplicity. The source generator/analyzer DLL is normally not referenced in the usual way when you add the generator package to a project. With this approach, we change that!

The beauty of this approach is how easy it is. You just define the attributes in the source generator project, and remove the <IncludeBuildOutput>false</IncludeBuildOutput> element you would usually have in a source generator project. For example:

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <!-- 👇 Don't set this, so the DLL ends up in the build output -->
    <!-- <IncludeBuildOutput>false</IncludeBuildOutput> -->
  </PropertyGroup>

  <!-- Standard source generator references -->
  <ItemGroup>
    <PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.3.3" PrivateAssets="all" />
    <PackageReference Include="Microsoft.CodeAnalysis.CSharp" Version="4.0.1" PrivateAssets="all" />
  </ItemGroup>

  <!-- Pack the build output into the "analyzer" slot in the NuGet package -->
  <ItemGroup>
    <None Include="$(OutputPath)\$(AssemblyName).dll" Pack="true" PackagePath="analyzers/dotnet/cs" Visible="false" />
  </ItemGroup>

</Project>
```

That's only a single change to the generator project, so far so good! When packed into a NuGet package, the DLL is added both to the analyzers/dotnet/cs path (required for source generators) and to the normal lib folder, for direct referencing by consuming projects:

[Image: NuGet package layout with the generator DLL in both analyzers/dotnet/cs and lib]

All consumers of the NuGet package reference the marker attributes contained in the generator DLL itself, so there are no conflicting-type problems. Problem solved!

If you're referencing the source generator project within the same solution, either for testing purposes or because it's a solution-specific generator, you need to set ReferenceOutputAssembly="true" on the <ProjectReference> element in the consuming project. For example:

```xml
<ItemGroup>
  <ProjectReference Include="..\StronglyTypedId\StronglyTypedId.csproj"
                    OutputItemType="Analyzer"
                    ReferenceOutputAssembly="true" /> <!-- 👈 This is usually false -->
</ItemGroup>
```

So that's it, problem solved, right? Maybe. But I really don't like this approach. Your generator DLL is now part of the user's references, which feels gross. There are also potential problems around the Microsoft.CodeAnalysis.CSharp dependencies, among others. For example, in my testing, although my projects compiled fine, there were many warnings about incompatible versions of System.Collections.Immutable:

```
warning MSB3277: Found conflicts between different versions of "System.Collections.Immutable" that could not be resolved.
warning MSB3277: There was a conflict between "System.Collections.Immutable, Version=1.2.5.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" and "System.Collections.Immutable, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a".
```

None of my projects referenced System.Collections.Immutable directly, but it's a transitive reference of the generator, hence the problems. The potential for issues was too high for my liking, so I abandoned this approach and tried something different.

2. Create a separate NuGet package just for the DLL

Rather than referencing the source generator DLL and all the dependencies it relies on, what we really want is a small DLL that contains only the marker attributes (and their associated types). The logical step, then, is to create a NuGet package containing exactly those marker types. We can then add a dependency from the attributes package to the generator package, so that when the attributes package is added to a consuming project, the generator package is automatically added as well.

My main concern with this approach wasn't really the technical difficulties. Instead, I was more bothered about the naming of things, and the general awkwardness of it.

I did run into some technical difficulties with it, but I think those were more down to the specifics of my project, so I don't consider them a real blocker.

Take my StronglyTypedId project, for example. Should the "marker attributes" package be called StronglyTypedId.Attributes, and the "generator" package StronglyTypedId? It seems likely that users would add the StronglyTypedId package alone, and then not understand why the generator doesn't seem to work (as they don't have a reference to the marker attributes).

Alternatively, you could call the marker attributes package StronglyTypedId, and the source generator package StronglyTypedId.Generator. That hierarchy seems to work better, but there's still the chance someone adds the generator package without the attributes. After all, it's the generator they want; the attributes are a by-product! Documentation is great, but people don't read it 😉

3. Make the additional attributes package optional

The above solution felt close to right, but I didn't like that all users had to think about two different packages. As I played around with it, I realised I was trying to solve a problem that possibly affects only a small subset of the project's users, and maybe that should shape my focus.

As I mentioned in the previous post, there is a "standard" way to use marker attributes with source generators: the source generator adds them as part of its initialization phase. This works well, except when users have [InternalsVisibleTo] attributes and use the source generator in multiple projects.

With that in mind, I decided: why not use the source generator initialization phase to add the attributes automatically, and provide a separate attributes package for the users who hit problems?

That would mean 99% of users would have just a single package, use the automatically-added attributes as usual, and never need to worry about the other package. The main generator package would be called StronglyTypedId, and the supplementary attributes package StronglyTypedId.Attributes. The hierarchy feels right, and people are (hopefully) steered towards the right package.

The remaining problem is that users who hit the [InternalsVisibleTo] issue need a way to "turn off" the automatically-added attributes. The best way I could think of was to wrap the generated attribute code in an #if/#endif. For example, something like the following:

```csharp
#if !STRONGLY_TYPED_ID_EXCLUDE_ATTRIBUTES

using System;

namespace StronglyTypedIds
{
    [AttributeUsage(AttributeTargets.Struct, Inherited = false, AllowMultiple = false)]
    [System.Diagnostics.Conditional("STRONGLY_TYPED_ID_USAGES")]
    internal sealed class StronglyTypedIdAttribute : Attribute
    {
        public StronglyTypedIdAttribute(
            StronglyTypedIdBackingType backingType = StronglyTypedIdBackingType.Default,
            StronglyTypedIdConverter converters = StronglyTypedIdConverter.Default,
            StronglyTypedIdImplementations implementations = StronglyTypedIdImplementations.Default)
        {
            BackingType = backingType;
            Converters = converters;
            Implementations = implementations;
        }

        public StronglyTypedIdBackingType BackingType { get; }
        public StronglyTypedIdConverter Converters { get; }
        public StronglyTypedIdImplementations Implementations { get; }
    }
}
#endif
```

By default, the STRONGLY_TYPED_ID_EXCLUDE_ATTRIBUTES constant isn't set, so the attributes are part of the compilation. If a user hits the [InternalsVisibleTo] problem, they can define this constant in their project, and the automatically-generated attributes are no longer part of the compilation. Instead, they can reference the StronglyTypedId.Attributes package to use the generator:

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <!-- Define the MSBuild constant -->
    <DefineConstants>STRONGLY_TYPED_ID_EXCLUDE_ATTRIBUTES</DefineConstants>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="StronglyTypedId" Version="1.0.0" PrivateAssets="all" />
    <PackageReference Include="StronglyTypedId.Attributes" Version="1.0.0" PrivateAssets="all" />
  </ItemGroup>

</Project>
```

The main advantage of this approach is that the majority of users never need to think about the extra package. Only when you hit the problem do you need to dig deeper, and at that point you're far more motivated to read the documentation 😉

4. Pack the DLL into the generator package

Shortly after implementing and publishing the above approach, I realised I'd missed a trick. Instead of requiring users to install a separate package to fix the problem, you can just bundle the attributes DLL in the generator package, and skip the automatic adding of the marker attributes entirely.

This is the same approach the [LoggerMessage] generator uses. I face-palmed when I realised, given I'd pointed to that project as a reference earlier in this post 🤦‍♂️

The net result is a NuGet package layout similar to the following, with the "generator" DLL StronglyTypedId.dll in the analyzers/dotnet/cs folder, so it's used during compilation, and the marker attributes DLL StronglyTypedId.Attributes.dll in the lib folder, referenced directly by user code.

Note that in my case I also want to reference the marker attributes from my generator code, so StronglyTypedId.Attributes.dll is packed into analyzers/dotnet/cs as well. That probably won't be required for all source generator projects.

[Image: NuGet package layout with both DLLs in analyzers/dotnet/cs and the attributes DLL in lib/netstandard2.0]

Achieving this layout required a little csproj magic to make sure dotnet pack puts the DLLs in the right places, but nothing too arcane:

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <IncludeBuildOutput>false</IncludeBuildOutput>
  </PropertyGroup>

  <!-- Standard source generator references -->
  <ItemGroup>
    <PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.3.3" PrivateAssets="all" />
    <PackageReference Include="Microsoft.CodeAnalysis.CSharp" Version="4.0.1" PrivateAssets="all" />
  </ItemGroup>

  <!-- Reference the attributes project for compilation -->
  <!-- Set PrivateAssets so it doesn't become a NuGet dependency -->
  <ItemGroup>
    <ProjectReference Include="..\StronglyTypedIds.Attributes\StronglyTypedIds.Attributes.csproj" PrivateAssets="all" />
  </ItemGroup>

  <ItemGroup>
    <!-- Pack the generator dll into the analyzers/dotnet/cs path -->
    <None Include="$(OutputPath)\$(AssemblyName).dll" Pack="true" PackagePath="analyzers/dotnet/cs" Visible="false" />
    <!-- Pack the attributes dll into the analyzers/dotnet/cs path -->
    <None Include="$(OutputPath)\StronglyTypedIds.Attributes.dll" Pack="true" PackagePath="analyzers/dotnet/cs" Visible="false" />
    <!-- Pack the attributes dll into the lib\netstandard2.0 path -->
    <None Include="$(OutputPath)\StronglyTypedIds.Attributes.dll" Pack="true" PackagePath="lib\netstandard2.0" Visible="true" />
  </ItemGroup>

</Project>
```

There are probably "better" ways to do this, but it worked, so it will work for me.

When referencing the NuGet package, you don't need to do anything special:

```xml
<ItemGroup>
  <PackageReference Include="StronglyTypedId" Version="1.0.0" PrivateAssets="all" />
</ItemGroup>
```

I've used PrivateAssets="all" here to prevent downstream projects from also getting a reference to the source generator, but this is entirely optional. Note that with this setup, the marker attributes dll, StronglyTypedId.Attributes.dll, appears in the project's bin folder. However, the attributes themselves are marked [Conditional], so there is no runtime dependency on the dll.
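To see why a [Conditional] attribute class creates no runtime dependency, here is a minimal self-contained sketch (the OrderId struct is illustrative, not from the package): when the named compilation symbol is not defined, the compiler still lets you compile against the attribute, but strips its usages from the emitted metadata.

```csharp
using System;
using System.Diagnostics;

// Because STRONGLY_TYPED_ID_USAGES is not defined in this compilation,
// usages of this attribute are removed from the emitted IL entirely.
[Conditional("STRONGLY_TYPED_ID_USAGES")]
[AttributeUsage(AttributeTargets.Struct)]
public class StronglyTypedIdAttribute : Attribute { }

[StronglyTypedId] // compiles fine, but is not emitted into OrderId's metadata
public partial struct OrderId { }

public static class Program
{
    public static void Main()
    {
        // Reflection shows no attributes on the struct
        Console.WriteLine(typeof(OrderId).GetCustomAttributes(inherit: false).Length); // prints "0"
    }
}
```

So the attribute dll is only needed at compile time; if it goes missing at runtime, nothing breaks.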

You can make sure the dll is not copied to the build output by setting ExcludeAssets="runtime" on the <PackageReference> element:

```xml
<ItemGroup>
  <PackageReference Include="StronglyTypedId" Version="1.0.0" PrivateAssets="all" ExcludeAssets="runtime" />
</ItemGroup>
```

This lets you still compile against the marker attributes, but the dll won't be in your bin folder.

If you're referencing the source generator project in the same solution, you need to add a <ProjectReference> to the attributes project too. In my case it was a bit more complicated, because I needed both the source generator and the target project to have a reference to the attributes dll.

Source generators live in their own little bubble in terms of references. Even though the consuming project has a reference to the attributes project, the source generator does not have access to it, or to any of the other references in the consuming project.

It's all a bit confusing, but for the source generator project to be able to access the attributes dll used by the consuming project, you need to tell the consuming project to treat the attributes project as an analyzer. The source generator "analyzer" can then reference it and generate correctly. Because we want the consuming project to also reference the marker attributes dll, we set ReferenceOutputAssembly="true".

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <!-- Reference the source generator project -->
    <ProjectReference Include="..\StronglyTypedIds\StronglyTypedIds.csproj"
                      OutputItemType="Analyzer"
                      ReferenceOutputAssembly="false" /> <!-- Don't reference the generator dll -->

    <!-- Reference the attributes project, "treating it as an analyzer" -->
    <ProjectReference Include="..\StronglyTypedIds.Attributes\StronglyTypedIds.Attributes.csproj"
                      OutputItemType="Analyzer"
                      ReferenceOutputAssembly="true" /> <!-- DO reference the attributes dll -->
  </ItemGroup>

</Project>
```

With this final setup, we have the best of all worlds, in my opinion:

  • Only a single NuGet package to worry about
  • No issues when users use [InternalsVisibleTo]
  • Users can remove the marker attributes dll from their build output using ExcludeAssets="runtime"
  • Users can just run dotnet add package StronglyTypedId and it will work; the extra <PackageReference> attributes are entirely optional

Bonus: embed the attributes if you want!

For StronglyTypedId I went a step further, and gave users the option of embedding the attributes in their project's dll using the source generator, by defining an MSBuild constant, STRONGLY_TYPED_ID_EMBED_ATTRIBUTES. The attributes are always added to the compilation, but they're only made available when the constant is defined, like this:

```csharp
#if STRONGLY_TYPED_ID_EMBED_ATTRIBUTES
using System;

namespace StronglyTypedIds
{
    [AttributeUsage(AttributeTargets.Struct, Inherited = false, AllowMultiple = false)]
    [System.Diagnostics.Conditional("STRONGLY_TYPED_ID_USAGES")]
    internal sealed class StronglyTypedIdAttribute : Attribute
    {
        // ...
    }
}
#endif
```

If users do enable this, they will initially run into problems with duplicate types, because the "internal" attributes embedded by the source generator clash with the public types in the attributes dll. To fix this, you can add compile to the ExcludeAssets for the package:

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <!-- Define this constant to trigger embedding the attributes -->
    <DefineConstants>STRONGLY_TYPED_ID_EMBED_ATTRIBUTES</DefineConstants>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="StronglyTypedId" Version="1.0.0"
                      ExcludeAssets="compile;runtime" PrivateAssets="all" />
    <!-- Add compile ☝ so you don't compile against the marker attributes dll -->
  </ItemGroup>

</Project>
```

Now, I honestly can't imagine why someone would want to do this, but as I had already written the code for the original approach, I left it in there for anyone who needs it! 😄

Summary

In this post, I described the journey I took in deciding how to handle the marker attributes for my source generator. I described 4 main approaches: referencing the source generator dll directly in the consuming project; creating two separate NuGet packages; making the marker attribute NuGet package optional by using conditional compilation; and embedding both the marker attributes dll and the generator dll in the same NuGet package. The last option seemed to be the best approach, and gives users the smoothest experience.

February 8, 2022 at 2:00 am.

Next Waiting for your ASP.NET Core app to be ready from an IHostedService in .NET 6

Previous Solving the source generator marker attribute problem - Part 2: Creating a source generator - Part 8


In this post I describe a source generator I created to improve the performance of enum operations. It's available as a NuGet package, so you can use it in your projects too!

The NetEscapades.EnumGenerators NuGet package currently generates 7 useful enum methods that are much faster than their built-in equivalents:

  • ToStringFast() (replaces ToString())
  • IsDefined(T value) (replaces Enum.IsDefined<T>(T value))
  • IsDefined(string name) (new; checks whether a string is the name of a member of the enum)
  • TryParse(string? name, bool ignoreCase, out T value) (replaces Enum.TryParse())
  • TryParse(string? name, out T value) (replaces Enum.TryParse())
  • GetValues() (replaces Enum.GetValues())
  • GetNames() (replaces Enum.GetNames())

You can see the benchmarks for these methods below, or read on to learn why you'd want to use them, and how to use the source generator in your project.

Why use a source generator for enums? Performance

One of the first questions to ask is why use a source generator at all? The simple answer is that enums can be very slow in some cases. With a source generator, you can claw back some of that performance.

For example, imagine you have this simple enum:

```csharp
public enum Colour
{
    Red = 0,
    Blue = 1,
}
```

At some point you want to print out the name of the enum using ToString(). No problem, right?

```csharp
public void PrintColour(Colour colour)
{
    Console.WriteLine("You chose " + colour.ToString()); // You chose Red
}
```

So what's the problem? Well, unfortunately, calling ToString() on an enum is very slow. We'll look at just how slow shortly, but first let's look at a fast implementation, using modern C#:

```csharp
public static class ColourExtensions
{
    public static string ToStringFast(this Colour colour)
        => colour switch
        {
            Colour.Red => nameof(Colour.Red),
            Colour.Blue => nameof(Colour.Blue),
            _ => colour.ToString(),
        };
}
```

This simple switch statement checks each of the known values of Colour and uses nameof to return the textual representation of the enum. If the value is unknown, the underlying value is returned using the built-in ToString() implementation.

You always have to be careful about these unknown values: for example, this is valid C#: PrintColour((Colour)123)
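A minimal runnable sketch of that unknown-value behaviour (reusing the Colour enum and ToStringFast() shown above): any integer of the underlying type can be cast to the enum, and the fallback arm returns the numeric string.

```csharp
using System;

public enum Colour
{
    Red = 0,
    Blue = 1,
}

public static class ColourExtensions
{
    public static string ToStringFast(this Colour colour)
        => colour switch
        {
            Colour.Red => nameof(Colour.Red),
            Colour.Blue => nameof(Colour.Blue),
            _ => colour.ToString(), // fallback for values outside the defined members
        };
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(Colour.Red.ToStringFast());    // prints "Red"
        Console.WriteLine(((Colour)123).ToStringFast()); // prints "123" — compiles and runs fine
    }
}
```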

If we compare this simple switch statement to the default ToString() implementation using BenchmarkDotNet, for a known colour, you can see how much faster our implementation is:

```
BenchmarkDotNet=v0.13.1, OS=Windows 10.0.19042.1348 (20H2/October 2020 Update)
Intel Core i7-7500U CPU 2.70GHz (Kaby Lake), 1 CPU, 4 logical and 2 physical cores
  DefaultJob (net48)  : .NET Framework 4.8 (4.8.4420.0), X64 RyuJIT
.NET SDK=6.0.100
  DefaultJob (net6.0) : .NET 6.0.0 (6.0.21.52210), X64 RyuJIT
```

| Method       | Runtime | Mean       | Error     | StdDev    | Ratio | Gen 0  | Allocated |
|--------------|---------|-----------:|----------:|----------:|------:|-------:|----------:|
| ToString     | net48   | 578.276 ns | 3.3109 ns | 3.0970 ns | 1.000 | 0.0458 | 96 B      |
| ToStringFast | net48   | 3.091 ns   | 0.0567 ns | 0.0443 ns | 0.005 | -      | -         |
| ToString     | net6.0  | 17.985 ns  | 0.1230 ns | 0.1151 ns | 1.000 | 0.0115 | 24 B      |
| ToStringFast | net6.0  | 0.1212 ns  | 0.0225 ns | 0.0199 ns | 0.007 | -      | -         |

The first thing worth mentioning is that ToString() in .NET 6 is over 30× faster, and allocates only a quarter of the bytes, compared to the .NET Framework version. Compare either to the "fast" version though, and they're still super slow!

Creating the ToStringFast() method is a bit of a chore though, as you have to keep it up to date whenever your enum changes. That's where the NetEscapades.EnumGenerators source generator comes in!

Installing the NetEscapades.EnumGenerators source generator

You can install the NetEscapades.EnumGenerators NuGet package containing the source generator by running the following from your project directory:

dotnet add package NetEscapades.EnumGenerators --prerelease

Note that this NuGet package uses the .NET 6 incremental generator APIs, so you must have the .NET 6 SDK installed, though you can target older frameworks.

This will add the package to your project file:

```xml
<PackageReference Include="NetEscapades.EnumGenerators" Version="1.0.0-beta04" />
```

I suggest you update this to set PrivateAssets="all" and ExcludeAssets="runtime":

```xml
<PackageReference Include="NetEscapades.EnumGenerators" Version="1.0.0-beta04"
                  PrivateAssets="all" ExcludeAssets="runtime" />
```

Setting PrivateAssets="all" means that any projects referencing yours won't get a reference to the NetEscapades.EnumGenerators package. Setting ExcludeAssets="runtime" ensures the NetEscapades.EnumGenerators.Attributes.dll file used by the source generator is not copied to your build output (it is not needed at runtime).

This package uses the marker attribute approach I described in my previous post, to avoid issues with transitive project references.

Using the source generator

Adding the package to your project automatically adds a marker attribute, [EnumExtensions], to your project. To use the generator, add the [EnumExtensions] attribute to an enum. For example:

```csharp
using NetEscapades.EnumGenerators;

[EnumExtensions]
public enum Colour
{
    Red = 0,
    Blue = 1,
}
```

This generates various extension methods for your enum, including ToStringFast(). You can use this method anywhere you would normally call ToString() on the enum, and benefit from the improved performance for known values:

```csharp
public void PrintColour(Colour colour)
{
    Console.WriteLine("You chose " + colour.ToStringFast()); // You chose Red
}
```

You can see the definition of ToStringFast() by navigating to its definition in your IDE.

By default, source generators don't write their output to disk. In a previous post I described how you can set <EmitCompilerGeneratedFiles> and <CompilerGeneratedFilesOutputPath> to persist these files to disk.
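As a sketch, the csproj settings in question look something like this (the "Generated" folder name is an arbitrary choice of mine, not a required value):

```xml
<PropertyGroup>
  <!-- Write generator output to disk... -->
  <EmitCompilerGeneratedFiles>true</EmitCompilerGeneratedFiles>
  <!-- ...in this folder (relative paths are resolved against the project directory) -->
  <CompilerGeneratedFilesOutputPath>Generated</CompilerGeneratedFilesOutputPath>
</PropertyGroup>
```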

The ToStringFast() method above is the easy enum speed-up, and it's one that a lot of people know about. But many of the methods on enums are pretty slow. The source generator can help there too!

Source-generating other helper methods

A recent tweet from Bartosz Adamczewski showed how slow another enum method is, Enum.IsDefined<T>(T value):

The Enum type in C# .NET has an interesting way of checking if a numeric type is really an enum, but unfortunately its performance is much slower than a simple switch or check.

Credit to @hypeartistmusic for finding the issue. #dotnet pic.twitter.com/pMFSVEmFQB

— Bartosz Adamczewski (@badamczewski01) February 5, 2022

As the benchmarks above show, calling Enum.IsDefined<T>(T value) can be slower than you might expect! Luckily, if you use NetEscapades.EnumGenerators, you get a fast version of this method generated for free:

```csharp
internal static partial class ColourExtensions
{
    public static bool IsDefined(Colour value)
        => value switch
        {
            Colour.Red => true,
            Colour.Blue => true,
            _ => false,
        };
}
```

Note that instead of being generated as an extension method, this method is exposed as a static method on the generated static class. The same is true of the other additional helper functions the source generator creates.

The benchmarks for this method are in line with Bartosz's:

```
BenchmarkDotNet=v0.13.1, OS=Windows 10.0.19042.1348 (20H2/October 2020 Update)
Intel Core i7-7500U CPU 2.70GHz (Kaby Lake), 1 CPU, 4 logical and 2 physical cores
.NET SDK=6.0.100
  [Host]     : .NET 6.0.0 (6.0.21.52210), X64 RyuJIT
  DefaultJob : .NET 6.0.0 (6.0.21.52210), X64 RyuJIT
```

| Method              | Mean        | Error     | StdDev    | Median      | Ratio | Gen 0  | Allocated |
|---------------------|------------:|----------:|----------:|------------:|------:|-------:|----------:|
| EnumIsDefined       | 123.6001 ns | 1.0314 ns | 0.9648 ns | 123.7756 ns | 1.000 | 0.0114 | 24 B      |
| ExtensionsIsDefined | 0.0016 ns   | 0.0044 ns | 0.0039 ns | 0.0000 ns   | 0.000 | -      | -         |

That shows the benefit of two of the source-generated methods, ToStringFast() and IsDefined(). The following shows the complete generated code for the ColourExtensions class created by the source generator, including all 7 methods:

```csharp
#nullable enable
internal static partial class ColourExtensions
{
    public static string ToStringFast(this Colour value)
        => value switch
        {
            Colour.Red => nameof(Colour.Red),
            Colour.Blue => nameof(Colour.Blue),
            _ => value.ToString(),
        };

    public static bool IsDefined(Colour value)
        => value switch
        {
            Colour.Red => true,
            Colour.Blue => true,
            _ => false,
        };

    public static bool IsDefined(string name)
        => name switch
        {
            nameof(Colour.Red) => true,
            nameof(Colour.Blue) => true,
            _ => false,
        };

    public static bool TryParse(
#if NETCOREAPP3_0_OR_GREATER
        [System.Diagnostics.CodeAnalysis.NotNullWhen(true)]
#endif
        string? name,
        bool ignoreCase,
        out Colour value)
        => ignoreCase ? TryParseIgnoreCase(name, out value) : TryParse(name, out value);

    private static bool TryParseIgnoreCase(
#if NETCOREAPP3_0_OR_GREATER
        [System.Diagnostics.CodeAnalysis.NotNullWhen(true)]
#endif
        string? name,
        out Colour value)
    {
        switch (name)
        {
            case { } s when s.Equals(nameof(Colour.Red), System.StringComparison.OrdinalIgnoreCase):
                value = Colour.Red;
                return true;
            case { } s when s.Equals(nameof(Colour.Blue), System.StringComparison.OrdinalIgnoreCase):
                value = Colour.Blue;
                return true;
            case { } s when int.TryParse(name, out var val):
                value = (Colour)val;
                return true;
            default:
                value = default;
                return false;
        }
    }

    public static bool TryParse(
#if NETCOREAPP3_0_OR_GREATER
        [System.Diagnostics.CodeAnalysis.NotNullWhen(true)]
#endif
        string? name,
        out Colour value)
    {
        switch (name)
        {
            case nameof(Colour.Red):
                value = Colour.Red;
                return true;
            case nameof(Colour.Blue):
                value = Colour.Blue;
                return true;
            case { } s when int.TryParse(name, out var val):
                value = (Colour)val;
                return true;
            default:
                value = default;
                return false;
        }
    }

    public static Colour[] GetValues()
    {
        return new[]
        {
            Colour.Red,
            Colour.Blue,
        };
    }

    public static string[] GetNames()
    {
        return new[]
        {
            nameof(Colour.Red),
            nameof(Colour.Blue),
        };
    }
}
```

As you can see, that's a lot of code you get generated for free! And just for completeness, here's a whole suite of benchmarks comparing the source-generated methods to their framework equivalents:

```
BenchmarkDotNet=v0.13.1, OS=Windows 10.0.19042.1348 (20H2/October 2020 Update)
Intel Core i7-7500U CPU 2.70GHz (Kaby Lake), 1 CPU, 4 logical and 2 physical cores
.NET SDK=6.0.100
  [Host]     : .NET 6.0.0 (6.0.21.52210), X64 RyuJIT
  DefaultJob : .NET 6.0.0 (6.0.21.52210), X64 RyuJIT
```

| Method       | Mean       | Error     | StdDev    | Ratio | Gen 0  | Allocated |
|--------------|-----------:|----------:|----------:|------:|-------:|----------:|
| EnumToString | 17.9850 ns | 0.1230 ns | 0.1151 ns | 1.000 | 0.0115 | 24 B      |
| ToStringFast | 0.1212 ns  | 0.0225 ns | 0.0199 ns | 0.007 | -      | -         |

| Method              | Mean        | Error     | StdDev    | Median      | Ratio | Gen 0  | Allocated |
|---------------------|------------:|----------:|----------:|------------:|------:|-------:|----------:|
| EnumIsDefined       | 123.6001 ns | 1.0314 ns | 0.9648 ns | 123.7756 ns | 1.000 | 0.0114 | 24 B      |
| ExtensionsIsDefined | 0.0016 ns   | 0.0044 ns | 0.0039 ns | 0.0000 ns   | 0.000 | -      | -         |

| Method                  | Mean      | Error     | StdDev    | Ratio | Allocated |
|-------------------------|----------:|----------:|----------:|------:|----------:|
| EnumIsDefinedName       | 60.735 ns | 0.3510 ns | 0.3284 ns | 1.00  | -         |
| ExtensionsIsDefinedName | 5.757 ns  | 0.0875 ns | 0.0730 ns | 0.09  | -         |

| Method                       | Mean     | Error    | StdDev    | Median   | Ratio | RatioSD | Allocated |
|------------------------------|---------:|---------:|----------:|---------:|------:|--------:|----------:|
| EnumTryParseIgnoreCase       | 75.20 ns | 3.956 ns | 10.962 ns | 70.55 ns | 1.00  | 0.00    | -         |
| ExtensionsTryParseIgnoreCase | 14.27 ns | 0.486 ns | 1.371 ns  | 13.91 ns | 0.19  | 0.03    | -         |

| Method              | Mean       | Error     | StdDev     | Ratio | Gen 0  | Allocated |
|---------------------|-----------:|----------:|-----------:|------:|-------:|----------:|
| EnumGetValues       | 470.613 ns | 9.3125 ns | 16.3101 ns | 1.00  | 0.0534 | 112 B     |
| ExtensionsGetValues | 4.705 ns   | 0.1455 ns | 0.1290 ns  | 0.01  | 0.0191 | 40 B      |

| Method             | Mean     | Error    | StdDev   | Ratio | RatioSD | Gen 0  | Allocated |
|--------------------|---------:|---------:|---------:|------:|--------:|-------:|----------:|
| EnumGetNames       | 27.88 ns | 1.557 ns | 4.540 ns | 1.00  | 0.00    | 0.0229 | 48 B      |
| ExtensionsGetNames | 12.28 ns | 0.315 ns | 0.323 ns | 0.42  | 0.08    | 0.0229 | 48 B      |

Essentially, all the benchmarks show improved execution time, and most show reduced allocations too. These Are All Good Things™.

Summary

In this post I described the NetEscapades.EnumGenerators NuGet package. This provides a number of helper methods for working with enums that have better performance than the built-in methods, and requires nothing more than adding the package and an [EnumExtensions] attribute. If it sounds interesting, give it a try, and feel free to raise issues or PRs on GitHub.

February 15, 2022 at 2:00 am

Next Stop lying about .NET Standard 2.0 compatibility!

Previous NetEscapades.EnumGenerators: a source generator for enum performance


In this post, I describe how you can wait for your ASP.NET Core app to be ready to receive requests from inside an IHostedService/BackgroundService in .NET 6. This can be useful if your IHostedService needs to send requests to your ASP.NET Core app, whether to find the URLs the app is listening on, or to wait until the app has fully started.

Why do we need to find the URLs from a hosted service?

One of my most popular blog posts is "5 ways to set the URLs for an ASP.NET Core app". In a follow-up post, I showed how you can tell ASP.NET Core to choose a random free port, instead of you assigning a specific one. The difficulty with that approach is then detecting which port ASP.NET Core has selected.

I recently wrote a small test app where I needed to work out which URLs the app was listening on, from inside a BackgroundService. The details of why aren't very important, but I wanted the BackgroundService to call some public endpoints in the app as a "self-test":

  • The app starts up, and starts listening on a random port
  • The hosted service calls the public endpoint in the app
  • After receiving the response, the service triggers the application to shut down

The question was, how can I tell which URL Kestrel is listening on, from the hosted service?

Finding out which URLs ASP.NET Core is listening on

As I mentioned in a previous post, it's relatively easy to find out the URLs an ASP.NET Core app is listening on. If you retrieve an IServer instance from dependency injection, you can check its Features property for the IServerAddressesFeature. This exposes the Addresses property, which lists the addresses:

```csharp
void PrintAddresses(IServiceProvider services)
{
    Console.WriteLine("Checking addresses...");
    var server = services.GetRequiredService<IServer>();
    var addressFeature = server.Features.Get<IServerAddressesFeature>();
    foreach (var address in addressFeature.Addresses)
    {
        Console.WriteLine("Listening on address: " + address);
    }
}
```

So if it's that easy, surely it shouldn't be a problem to retrieve the addresses from an IHostedService/BackgroundService and send requests to them? Not exactly…

IHostedService start order in .NET 6

Back in .NET Core 2.x, before the introduction of the generic IHost abstraction, the IHostedServices in your app would start after Kestrel was fully configured and had started listening for requests. I discussed this back then in a series on running async startup tasks. Ironically, the reason IHostedService was not suitable for running async startup tasks at the time (they started after Kestrel) would make it perfect for my use case now, as I could retrieve the addresses from Kestrel knowing they would be available.

In .NET Core 3.0, when ASP.NET Core was re-platformed on top of the generic IHost, things changed. Now Kestrel runs as an IHostedService itself, and it starts last, after all the other IHostedServices. That made IHostedService perfect for async initialization tasks, but now I couldn't rely on Kestrel being available when an IHostedService runs.

Things changed slightly again in .NET 6 with the introduction of the minimal hosting APIs. These hosting APIs let you create impressively terse programs (no need for a Startup class, "magic" method names, etc.), but there are some differences in how things are built and started. In particular, the hosted services start when you call WebApplication.Run(), which is typically after you have configured your middleware and endpoints:

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHostedService<TestHostedService>();

var app = builder.Build();

app.MapGet("/", () => "Hello World!");

app.Run(); // 👈 TestHostedService starts here
```

This is subtly different from the .NET Core 3.x/.NET 5 IHost scenario, in which the hosted services would start before the Startup.Configure() method was called. Now, all your endpoints and middleware are added first, and the hosted services only start when you call WebApplication.Run().

This difference doesn't necessarily change anything for our scenario, but it's something to bear in mind if you need your IHostedService to start before the middleware pipeline is configured. See this GitHub issue for more details.

The bottom line is that we can't rely on Kestrel having started and being available when our IHostedService/BackgroundService runs, so we need some way to wait for it in our service.

Getting notified of the application lifecycle with IHostApplicationLifetime

Luckily, there's a service available in all ASP.NET Core 3.x+ apps that can notify you when your app has started and is handling requests: IHostApplicationLifetime. This interface contains three properties that notify you about stages in your application's lifecycle, plus a method for triggering your application to shut down:

```csharp
public interface IHostApplicationLifetime
{
    CancellationToken ApplicationStarted { get; }
    CancellationToken ApplicationStopping { get; }
    CancellationToken ApplicationStopped { get; }

    void StopApplication();
}
```

As you can see, each of the properties is a CancellationToken. That might seem like an odd choice for notifications (nothing is being cancelled when your app has just started! 🤔), but it provides a convenient way to safely run callbacks when an event occurs. For example:

```csharp
public void PrintStartedMessage(IHostApplicationLifetime lifetime)
{
    lifetime.ApplicationStarted.Register(() => Console.WriteLine("The application has started!"));
}
```

As this shows, you can call Register() and pass an Action that runs when the application has started. Similarly, you can be notified of the other lifecycle stages, such as "stopping" and "stopped".

The "stopping" callback is particularly useful, for example, because it blocks shutdown until the callback completes, giving you a chance to drain resources or perform other long-running clean-up.

While this is useful, it's only one piece of the puzzle. We need to run some async code (calling an HTTP API) when the app has started. So how can we do that safely?

Waiting for Kestrel to be ready in a background service

Let's start with something concrete: a BackgroundService that we want to "block" until the application has started:

```csharp
public class TestHostedService : BackgroundService
{
    private readonly IServiceProvider _services;
    public TestHostedService(IServiceProvider services)
    {
        _services = services;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // TODO: wait here until Kestrel is ready
        PrintAddresses(_services);
        await DoSomethingAsync();
    }
}
```

As a first approximation, we could use the IHostApplicationLifetime and a simple bool to wait for the app to be ready, looping until we get the signal:

```csharp
public class TestHostedService : BackgroundService
{
    private readonly IServiceProvider _services;
    private volatile bool _ready = false; // 👈 New field

    public TestHostedService(IServiceProvider services, IHostApplicationLifetime lifetime)
    {
        _services = services;
        lifetime.ApplicationStarted.Register(() => _ready = true); // 👈 Set the field when Kestrel has started
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!_ready)
        {
            // The app hasn't started yet, keep looping!
            await Task.Delay(1_000);
        }

        PrintAddresses(_services);
        await DoSomethingAsync();
    }
}
```

This works, but it's not exactly pretty. Every second, the ExecuteAsync method checks the _ready field, and goes back to sleep if it's not set. The loop probably won't run many times (unless your app is very slow to start), but it still feels messy.

I'm deliberately ignoring the stoppingToken passed to the method for now; we'll come back to that shortly!

The cleanest approach I've found is to use a helper as an intermediary between the "started" cancellation token signal and the async code we need to run. Ideally, we want to await a Task that completes when the ApplicationStarted signal fires. The following code uses TaskCompletionSource to do exactly that:

```csharp
public class TestHostedService : BackgroundService
{
    private readonly IServiceProvider _services;
    private readonly IHostApplicationLifetime _lifetime;
    private readonly TaskCompletionSource _source = new(); // 👈 New field

    public TestHostedService(IServiceProvider services, IHostApplicationLifetime lifetime)
    {
        _services = services;
        _lifetime = lifetime;
        // 👇 Set the result on the TaskCompletionSource
        _lifetime.ApplicationStarted.Register(() => _source.SetResult());
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await _source.Task.ConfigureAwait(false); // Wait for the task to complete!

        PrintAddresses(_services);
        await DoSomethingAsync();
    }
}
```

This approach is much better. Instead of polling a field in a loop, we have a single await on a Task that completes when the ApplicationStarted event fires. This is the suggested approach when you need to "tap into" a CancellationToken like this.
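The same "token to Task" pattern can be written as a small standalone helper; this sketch (the AsTask name is my own, not from the post) is runnable without any hosting dependencies:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class CancellationTokenExtensions
{
    // Wrap a CancellationToken in a Task that completes when the token fires
    public static Task AsTask(this CancellationToken token)
    {
        var tcs = new TaskCompletionSource();
        token.Register(() => tcs.SetResult());
        return tcs.Task;
    }
}

public static class Program
{
    public static void Main()
    {
        using var cts = new CancellationTokenSource();
        Task task = cts.Token.AsTask();

        Console.WriteLine(task.IsCompleted); // False — the token hasn't fired yet
        cts.Cancel();                        // the Register callback runs here
        Console.WriteLine(task.IsCompleted); // True
    }
}
```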

There's one potential problem in the code though: what happens if the app never starts?

If the ApplicationStarted token never fires, then the TaskCompletionSource.Task will never complete, and the ExecuteAsync method will never complete either! This is unlikely, but it could happen if there's a problem starting your app, for example.

Luckily, there's a way around this using the stoppingToken passed to ExecuteAsync, and another TaskCompletionSource! For example:

```csharp
public class TestHostedService : BackgroundService
{
    private readonly IServiceProvider _services;
    private readonly IHostApplicationLifetime _lifetime;
    private readonly TaskCompletionSource _source = new();

    public TestHostedService(IServiceProvider services, IHostApplicationLifetime lifetime)
    {
        _services = services;
        _lifetime = lifetime;
        _lifetime.ApplicationStarted.Register(() => _source.SetResult());
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // 👇 Create a TaskCompletionSource for the stopping token
        var tcs = new TaskCompletionSource();
        stoppingToken.Register(() => tcs.SetResult());

        // Wait for _either_ of the sources to complete
        await Task.WhenAny(tcs.Task, _source.Task).ConfigureAwait(false);

        // If cancellation was requested, stop
        if (stoppingToken.IsCancellationRequested)
        {
            return;
        }

        // Otherwise the app is ready, do your thing
        PrintAddresses(_services);
        await DoSomethingAsync();
    }
}
```

This code is a bit more complex, but it handles everything we need. We could even extract it into a handy helper method:

```csharp
public class TestHostedService : BackgroundService
{
    private readonly IServiceProvider _services;
    private readonly IHostApplicationLifetime _lifetime;

    public TestHostedService(IServiceProvider services, IHostApplicationLifetime lifetime)
    {
        _services = services;
        _lifetime = lifetime;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        if (!await WaitForAppStartup(_lifetime, stoppingToken))
        {
            return;
        }

        PrintAddresses(_services);
        await DoSomethingAsync();
    }

    static async Task<bool> WaitForAppStartup(IHostApplicationLifetime lifetime, CancellationToken stoppingToken)
    {
        var startedSource = new TaskCompletionSource();
        lifetime.ApplicationStarted.Register(() => startedSource.SetResult());

        var cancelledSource = new TaskCompletionSource();
        stoppingToken.Register(() => cancelledSource.SetResult());

        Task completedTask = await Task.WhenAny(startedSource.Task, cancelledSource.Task).ConfigureAwait(false);

        // If the completed task was the "app started" task, return true, otherwise false
        return completedTask == startedSource.Task;
    }
}
```

Whichever approach you choose, you can now run your task's code in the background, safe in the knowledge that Kestrel is listening!

Summary

In this post I described how to wait, in a BackgroundService/IHostedService, for your ASP.NET Core app to finish starting up, so that you can send requests to Kestrel or retrieve the URLs it is using, for example. The approach uses the IHostApplicationLifetime service, available via dependency injection. You can register a callback with the ApplicationStarted cancellation token it exposes to trigger a TaskCompletionSource, which you can then await in your ExecuteAsync method. This avoids the need for polling loops, or for running async code in a sync context.

February 22, 2022 at 2:00 am.

Next Cancelling await calls in .NET 6 with Task.WaitAsync()

Previous Waiting for your ASP.NET Core app to be ready from an IHostedService in .NET 6


This post is a bit of a rant about a problem I've been struggling with more and more recently: NuGet packages that lie about their .NET Standard 2.0 compatibility, in that they don't work on .NET Core 2.x/3.0.

A quick history lesson: .NET Standard 2.0

When Microsoft first released .NET Core 1.0, it contained only a small subset of the APIs available in .NET Framework at the time. To make it easier to write libraries that could be used on both .NET Framework and .NET Core (without requiring multi-targeting), they introduced the concept of .NET Standard.

Unlike .NET Core and .NET Framework, which are platforms you can download and run, .NET Standard is just an interface definition. Each version of .NET Standard includes a list of APIs that a platform must support in order to implement that version of the standard. For example, here are the APIs in .NET Standard 1.0.

Each version of .NET Standard is a strict superset of previous versions, so all the APIs from earlier versions are available in later versions (pretending .NET Standard 1.5/1.6 didn't happen). For example, if a platform implements .NET Standard 1.4, by definition it also implements .NET Standard 1.0–1.3.

You can also think of .NET Standard in terms of C# classes and interfaces; I like this analogy from David Fowler.
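A rough sketch of that analogy (the interface and class names here are illustrative, not real APIs): each .NET Standard version is an interface inheriting from the previous version, and a platform is a class implementing the highest version it supports — which forces it to implement all the lower versions too.

```csharp
using System;

// Each .NET Standard version only adds APIs on top of the previous one
interface INetStandard10 { void Api10(); }
interface INetStandard13 : INetStandard10 { void Api13(); }
interface INetStandard14 : INetStandard13 { void Api14(); }

// A platform implementing 1.4 must, by construction, implement 1.0-1.3 as well
class SomePlatform : INetStandard14
{
    public void Api10() => Console.WriteLine("1.0 API");
    public void Api13() => Console.WriteLine("1.3 API");
    public void Api14() => Console.WriteLine("1.4 API");
}

static class Program
{
    static void Main()
    {
        // The platform is usable anywhere an earlier standard is required
        INetStandard10 platform = new SomePlatform();
        platform.Api10(); // prints "1.0 API"
    }
}
```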

.NET Standard 2.0 was released along with .NET Core 2.0, and introduced a load of new APIs compared with previous versions. It was heavily based on the surface area of .NET Framework 4.6.1. The result was that you should be able to write a library targeting .NET Standard 2.0 that works on both .NET Framework 4.6.1+ and .NET Core 2.0+.

And it worked (mostly). Until recently, when it stopped working.

That's a lot of integration testing

In Datadog's APM crawler library, we use multiple targets to achieve maximum client compatibility. We currently support .NET 4.6.1, .NET Standard 2.0 and .NET Core 3.1, which means we can run any app that targets .NET Framework 4.6.1+ or .NET Core 2.1+.

Datadog's tracer integrates with a variety of libraries to add APM trace capabilities for a variety of old and new libraries in those frameworks. Obviously this means we need to make oneChargeof integration tests, therefore we perform extensive integration tests for the packages we support in a variety of TFMs:

<property group> <!-- Only run .NET Framework tests on Windows --> <target frame illness="'$(SO)'=='Windows_NT'">net461;netcoreapp2.1;netcoreapp3.0;netcoreapp3.1;net5.0;net6.0</target frame> <target frame illness="'$(SO)'!='Windows_NT'">netcoreapp2.1;netcoreapp3.0;netcoreapp3.1;net5.0;net6.0</target frame></property group>

Datadog's tracer relies on the internal implementation details of many of the libraries it supports, which means we have to be very careful to watch for breaking changes. Similarly, we test against the latest minor version of each package, for all supported framework versions. We generate this list programmatically for each library, based on our supported version range and the frameworks that the library itself supports. For example, for Npgsql we generate the following xUnit theory data:

```csharp
public static IEnumerable<object[]> Npgsql => new List<object[]>
{
#if NET461
    new object[] { "4.0.12" },
    new object[] { "4.1.10" },
    new object[] { "5.0.12" },
    new object[] { "6.0.3" },
#endif
#if NETCOREAPP2_1
    new object[] { "4.0.12" },
    new object[] { "4.1.10" },
    new object[] { "5.0.12" },
#endif
#if NETCOREAPP3_0
    new object[] { "4.0.12" },
    new object[] { "4.1.10" },
    new object[] { "5.0.12" },
#endif
#if NETCOREAPP3_1
    new object[] { "4.0.12" },
    new object[] { "4.1.10" },
    new object[] { "5.0.12" },
    new object[] { "6.0.3" },
#endif
#if NET5_0
    new object[] { "4.0.12" },
    new object[] { "4.1.10" },
    new object[] { "5.0.12" },
    new object[] { "6.0.3" },
#endif
#if NET6_0
    new object[] { "4.0.12" },
    new object[] { "4.1.10" },
    new object[] { "5.0.12" },
    new object[] { "6.0.3" },
#endif
};
```

The eagle-eyed among you will notice that the `#if` blocks for .NET Core 2.1 and .NET Core 3.0 do not contain a 6.x version of npgsql, even though npgsql clearly states that it supports .NET Standard 2.0, and so should support .NET Core 2.1 and .NET Core 3.0:


The problem is, it doesn't. So what's going on?

An example: a package that claims .NET Standard 2.0 support

To be clear, it isn't npgsql doing the lying; it's one of its dependencies, System.Runtime.CompilerServices.Unsafe. This package claims to support:

  • .NET Framework 4.6.1
  • .NET Standard 2.0
  • .NET Core 3.1
  • .NET 6

So you can happily install it in a .NET Core 2.1 or .NET Core 3.0 app, thanks to the .NET Standard 2.0 target:

```bash
> dotnet add package System.Runtime.CompilerServices.Unsafe
  Determining projects to restore...
  Writing C:\Users\Sock\AppData\Local\Temp\tmp9167.tmp
info : Adding PackageReference for package 'System.Runtime.CompilerServices.Unsafe' into project 'C:\repos\temp\temp.csproj'.
info : GET https://api.nuget.org/v3/registration5-gz-semver2/system.runtime.compilerservices.unsafe/index.json
info : OK https://api.nuget.org/v3/registration5-gz-semver2/system.runtime.compilerservices.unsafe/index.json 423ms
info : Restoring packages for C:\repos\temp\temp.csproj...
info : Package 'System.Runtime.CompilerServices.Unsafe' is compatible with all the specified frameworks in project 'C:\repos\temp\temp.csproj'.
info : PackageReference for package 'System.Runtime.CompilerServices.Unsafe' version '6.0.0' added to file 'C:\repos\temp\temp.csproj'.
info : Committing restore...
info : Generating MSBuild file C:\repos\temp\temp5\obj\temp5.csproj.nuget.g.props.
info : Generating MSBuild file C:\repos\temp\temp5\obj\temp5.csproj.nuget.g.targets.
info : Writing assets file to disk. Path: C:\repos\temp\temp5\obj\project.assets.json
log  : Restored C:\repos\temp\temp.csproj (in 151ms).
```

But if you then run `dotnet restore` or `dotnet build`, suddenly there's an error:

```bash
C:\Users\Sock\.nuget\packages\system.runtime.compilerservices.unsafe\6.0.0\buildTransitive\netcoreapp2.0\System.Runtime.CompilerServices.Unsafe.targets(4,5):
error : System.Runtime.CompilerServices.Unsafe doesn't support netcoreapp2.1. Consider updating your TargetFramework to netcoreapp3.1 or later. [C:\repos\temp\temp.csproj]
```

So what's going on here? The package targets .NET Standard 2.0, so why can't you use it in a project that supports .NET Standard 2.0?

Well, you can. You just can't use it in a .NET Core 2.1 or .NET Core 3.0 application. You can use it in a Xamarin app, a UWP app, a Unity app, or anything else that implements .NET Standard 2.0. Just not .NET Core prior to .NET Core 3.1.

But why?

The short answer is that .NET Core 2.1 and .NET Core 3.0 are no longer supported. According to the original PR and the associated breaking-change document:

Continuing to build for all frameworks increases the complexity and size of a package. In the past, .NET solved this problem by building only for current frameworks and harvesting binaries for older frameworks. Harvesting means that during the build, the earlier version of the package is downloaded and its binaries are extracted.

While using a harvested binary means you can always update without worrying about a framework being dropped, it also means you don't get any bug fixes or new features. In other words, harvested assets cannot be serviced. This is hidden from you, as you can happily update the package to a newer version even though you're still using the same old binary that is no longer updated.

As of .NET 6, .NET no longer performs any harvesting, to ensure that all shipped assets can be serviced.

On the face of it, this all seems reasonable to me. Unsupported versions shouldn't hold back development. They're not supported, and they don't need new development.

But if the package does still support .NET Standard 2.0, then you don't have to make any effort to support those unsupported platforms; you get that support for free.

The whole point of .NET Standard 2.0 is that any platform which implements it can use packages that target .NET Standard 2.0.

So you can remove all the build complexity you like, and if you target .NET Standard 2.0 it shouldn't matter: consumers should still be able to use the package! Instead, the package throws an explicit error when you try to use it in a .NET Core 2.1/3.0 app.

I don't see anything in the doc or the PR that explains the need for the error. All I can find is a suggestion that you would otherwise get runtime errors, though I haven't seen any in my limited testing. 🤷‍♂️

Fundamentally, this looks like a big change in support policy, where "not supported" no longer means "you're on your own if it doesn't work", but rather "we actively break it".

So does it really matter? Is it reasonable?

So, the question is, am I getting worked up over nothing here? Several NuGet packages (~50) imply that they are compatible with .NET Core 2.1, but then restore/build fails. Is that a big deal? If you're working on such an old app, you'll find out quickly, and can (hopefully) just stay on an older version of the package.

I would say there are two fundamental problems with this:

  • .NET Standard doesn't mean anything anymore
  • These packages are often used as transitive dependencies.

On the first point: this fundamentally breaks the contract that .NET Standard was meant to provide. Yes, .NET Standard is becoming less relevant as Xamarin/MAUI are folded into .NET itself, but that shouldn't mean Microsoft gets to sidestep it by being actively hostile to its whole purpose.


If these packages really don't support .NET Standard 2.0 (because they don't work on the platforms that implement .NET Standard 2.0), then I think they could/should use more specific platform TFMs to make their support explicit. Yes, that means more targets in the packages again, but at least those targets would be correct.
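As a sketch of that idea, a package that genuinely only runs on certain platforms could declare those platforms explicitly rather than hiding behind netstandard2.0. The TFM list here is illustrative, not the actual System.Runtime.CompilerServices.Unsafe project file:

```xml
<!-- Hypothetical package project file: declare only the platforms actually supported,
     instead of a netstandard2.0 target that silently excludes some of them -->
<PropertyGroup>
  <TargetFrameworks>net461;netcoreapp3.1;net6.0</TargetFrameworks>
</PropertyGroup>
```

A consumer on an unsupported platform would then get an honest "package is not compatible" error at `dotnet add package` time, rather than a surprise at restore/build time.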

But the real problem is when these packages are used as transitive dependencies. Take the npgsql package, for example. As we've already seen, it targets .NET Standard 2.0, but due to a transitive dependency on System.Runtime.CompilerServices.Unsafe, it can no longer be used with .NET Core 2.1/3.0.

Now, npgsql's author Shay Rojansky works for Microsoft, so he certainly understands the implications of taking the change in a major release; fair enough. But what about other package authors?

  • The CouchbaseNetClient package broke on .NET Core 2.1 in version 3.2.6, when it had worked in version 3.2.5.
  • StackExchange.Redis 2.2.x fails at runtime on .NET Core 2.1/3.0 with "Could not load assembly System.Runtime.CompilerServices.Unsafe".

I'm sure there will be more, and I want to reiterate that I don't expect or want package authors to support these old frameworks. The problem is that .NET Standard doesn't mean anything anymore.

And just to be clear, the first .NET 7 builds show that the .NET Core 3.1 and .NET 5 targets will be removed next (as they will be out of support by November 2022). So bear that in mind going forward.

So how do they do it, anyway?

Some of you may be wondering how these packages cause the restore/build errors. You can find the answer in the NuGet package itself: there is a buildTransitive folder, containing a netcoreapp2.0 folder with a .targets file, and an empty netcoreapp3.1 folder.


Files placed in the buildTransitive folder (.targets and .props files) apply both to the consuming project and to all downstream projects that consume that project.

By placing the .targets file in a netcoreapp2.0 folder, along with an empty netcoreapp3.1 folder, the .targets file is only applied to projects targeting one of the following frameworks:

  • .NET Core 2.0
  • .NET Core 2.1
  • .NET Core 2.2
  • .NET Core 3.0

The .targets file itself simply raises an error (unless the SuppressTfmSupportBuildWarnings variable is set):

```xml
<Project InitialTargets="NETStandardCompatError_System_Runtime_CompilerServices_Unsafe_netcoreapp3_1">
  <Target Name="NETStandardCompatError_System_Runtime_CompilerServices_Unsafe_netcoreapp3_1"
          Condition="'$(SuppressTfmSupportBuildWarnings)' == ''">
    <Error Text="System.Runtime.CompilerServices.Unsafe doesn't support $(TargetFramework). Consider updating your TargetFramework to netcoreapp3.1 or later." />
  </Target>
</Project>
```

This causes any restore/build to fail, as we saw earlier. It's quite elegant, in its own way.

As the file above shows, you can try setting SuppressTfmSupportBuildWarnings to build anyway and run using the .NET Standard 2.0 assets. Based on my (limited) testing, this seems to work fine on .NET Core 2.1 and .NET Core 3.0. But do you really want to risk it? 🤔
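If you do decide to take the risk, setting the property is a one-line change in the consuming project file. This is only a sketch: as discussed above, it suppresses the compatibility error rather than adding real support:

```xml
<!-- Suppresses the NETStandardCompatError raised by packages
     like System.Runtime.CompilerServices.Unsafe -->
<PropertyGroup>
  <SuppressTfmSupportBuildWarnings>true</SuppressTfmSupportBuildWarnings>
</PropertyGroup>
```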

So, am I overreacting here?

Yes, possibly.

Summary

In this post I described how some NuGet packages that target .NET Standard 2.0 no longer support .NET Core 2.1/.NET Core 3.0. You can install these NuGet packages, as they appear to be compatible with .NET Core 2.1 projects, but if you then run `dotnet restore`/`dotnet build`, you get an error saying that the package is not supported. In my opinion, this fundamentally breaks the promise of .NET Standard.

March 8, 2022 at 2:00 am.

Next A deep dive into the new Task.WaitAsync() API in .NET 6

Previous Stop lying about .NET Standard 2.0 compatibility!

André Blocking | .NET adventures (31)

In this post I discuss the new Task.WaitAsync() APIs introduced in .NET 6, how you can use them to make an await cancellable, and how they can replace other approaches you may currently be using.

The new Task.WaitAsync API in .NET 6

In a recent post I described how to use a TaskCompletionSource with IHostApplicationLifetime to "pause" a background service until the application has started. In that code I used the following function, which waits for a TaskCompletionSource.Task to complete, but also supports cancellation via a CancellationToken:

```csharp
static async Task<bool> WaitForAppStartup(IHostApplicationLifetime lifetime, CancellationToken stoppingToken)
{
    var startedSource = new TaskCompletionSource();
    var cancelledSource = new TaskCompletionSource();

    using var registration1 = lifetime.ApplicationStarted.Register(() => startedSource.SetResult());
    using var registration2 = stoppingToken.Register(() => cancelledSource.SetResult());

    Task completedTask = await Task.WhenAny(startedSource.Task, cancelledSource.Task).ConfigureAwait(false);

    // If the completed task was the "app started" task, return true, otherwise false
    return completedTask == startedSource.Task;
}
```

This code works on many versions of .NET, but in that post I specifically mentioned it was for .NET 6, so Andreas Gehrke pointed out that I could have used a simpler approach:

Good post! Since this is .NET 6, couldn't you just call WaitAsync() on your tcs' Task and pass the stopping token?

— Andreas Gehrke (@agehrke) February 15, 2022

Andreas is referring to a new API introduced on Task (and Task<T>) that allows you to await a Task while also making that await cancellable:

```csharp
namespace System.Threading.Tasks;

public class Task
{
    public Task WaitAsync(CancellationToken cancellationToken);
    public Task WaitAsync(TimeSpan timeout);
    public Task WaitAsync(TimeSpan timeout, CancellationToken cancellationToken);
}
```

As you can see, there are three new methods added to Task, all overloads of WaitAsync(). These are useful for the exact scenario I described above: you want to await a Task, but you want that await to be cancellable via a CancellationToken.

Using this new API, we could rewrite the WaitForAppStartup function as follows:

```csharp
static async Task<bool> WaitForAppStartup(IHostApplicationLifetime lifetime, CancellationToken stoppingToken)
{
    try
    {
        var tcs = new TaskCompletionSource();
        using var _ = lifetime.ApplicationStarted.Register(() => tcs.SetResult());
        await tcs.Task.WaitAsync(stoppingToken).ConfigureAwait(false);
        return true;
    }
    catch (TaskCanceledException)
    {
        return false;
    }
}
```

I think this is a lot easier to read, so thanks Andreas for pointing that out!

Waiting for a Task with a timeout

The Task.WaitAsync(CancellationToken cancellationToken) method (and its counterpart on Task<T>) is useful when you want to make an await cancellable via a CancellationToken. The other overloads are useful when you want to add a timeout.

For example, consider the following pseudocode:

```csharp
public async Task<int> GetResult()
{
    var cachedResult = await LoadFromCache();
    if (cachedResult is not null)
    {
        return cachedResult.Value;
    }

    return await LoadDirectly(); // TODO: cache the result

    async Task<int?> LoadFromCache()
    {
        // Simulate something quick
        await Task.Delay(TimeSpan.FromMilliseconds(10));
        return 123;
    }

    async Task<int> LoadDirectly()
    {
        // Simulate something slow
        await Task.Delay(TimeSpan.FromSeconds(30));
        return 123;
    }
}
```

This code shows a single public method with two local functions:

  • GetResult() returns the result of an expensive operation, which may be cached
  • LoadFromCache() returns the result from a cache, with a small delay
  • LoadDirectly() loads the result from the original source, which takes much longer

This pattern is common when you need to cache the result of an expensive operation. Note, though, that the "cache API" in this example is asynchronous. This could be because you're using IDistributedCache in ASP.NET Core, for example.

If everything goes well, calling GetResult() multiple times should look something like this:

```csharp
var result1 = await GetResult(); // takes ~30s (cache miss)
var result2 = await GetResult(); // takes ~10ms, as the result is cached
var result3 = await GetResult(); // takes ~10ms, as the result is cached
```

In this case, the cache does a great job of speeding up subsequent requests for the result.

But what if something goes wrong with distributed caching?

For example, maybe you're using Redis as your distributed cache. Most of the time it's extremely fast, but for some reason your Redis server suddenly becomes unavailable: maybe the server crashed, there are network issues, or the network is very slow.

Suddenly your LoadFromCache() method is actually making GetResult() slower, not faster! 😱

Ideally, you want to be able to say "try loading this from the cache, but if it takes longer than X milliseconds, stop trying". In other words, you want to set a timeout.

Now, you may be able to add a suitable timeout in the Redis connection library itself, but let's assume for the moment that you can't, or that your caching API doesn't expose those options. In that case, you can use .NET 6's Task<T>.WaitAsync(TimeSpan):

```csharp
public async Task<int> GetResult()
{
    // Set a limit on how long we'll wait for the cached result
    var cacheTimeout = TimeSpan.FromMilliseconds(100);
    try
    {
        var cachedResult = await LoadFromCache().WaitAsync(cacheTimeout);
        if (cachedResult is not null)
        {
            return cachedResult.Value;
        }
    }
    catch (TimeoutException)
    {
        // The cache took too long
    }

    return await LoadDirectly(); // TODO: cache the result

    // ...
}
```

With this change, GetResult() will wait no more than 100ms for the cache to respond. If LoadFromCache() exceeds that timeout, Task.WaitAsync() throws a TimeoutException, and the function immediately loads from LoadDirectly() instead.

Note that when you use the CancellationToken overload of WaitAsync(), you get a TaskCanceledException when the task is cancelled. When you use a timeout, you get a TimeoutException instead.
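As a minimal sketch of that difference (the variable names are illustrative, and the task here deliberately never completes, so WaitAsync always loses the race):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// A task that never completes, so WaitAsync always gives up first
var never = new TaskCompletionSource().Task;

// Cancellation: produces a TaskCanceledException
using var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(50));
try
{
    await never.WaitAsync(cts.Token);
}
catch (TaskCanceledException)
{
    Console.WriteLine("cancelled");
}

// Timeout: produces a TimeoutException instead
try
{
    await never.WaitAsync(TimeSpan.FromMilliseconds(50));
}
catch (TimeoutException)
{
    Console.WriteLine("timed out");
}
```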

If you wanted this behaviour before .NET 6, you could replicate it with an extension method like this:

```csharp
// Extension method on Task<T>
public static async Task<TResult> TimeoutAfter<TResult>(this Task<TResult> task, TimeSpan timeout)
{
    // We need to be able to cancel the "timeout" task, so create a token source
    var cts = new CancellationTokenSource();

    // Create the timeout task (don't await it)
    var timeoutTask = Task.Delay(timeout, cts.Token);

    // Run the task and the timeout in parallel, and get whichever completes first
    var completedTask = await Task.WhenAny(task, timeoutTask).ConfigureAwait(false);

    if (completedTask == task)
    {
        // Cancel the "timeout" task so we don't leak a timer
        cts.Cancel();
        // Await the task, so that any errors etc. bubble up
        return await task.ConfigureAwait(false);
    }
    else
    {
        throw new TimeoutException($"Task timed out after {timeout}");
    }
}
```

Of course, having this code as part of the .NET base class library is very handy, but it also helps you avoid subtle errors in writing this code yourself. For example, with the above extension you could easily forget to cancel the Task.Delay() call. That would leak a Timer instance until the delay fires in the background. In high-performance code, this could easily become a problem!

On top of that, .NET 6 adds an additional overload that supports both a timeout and a CancellationToken, saving you from writing yet another extension method 🙂 In the next post I'll look at how this is implemented behind the scenes, as there's a lot more to it than the extension method above!
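As a quick sketch of the combined overload (the names here are illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

using var cts = new CancellationTokenSource();
var tcs = new TaskCompletionSource<int>();

// Waits at most 5 seconds, and can ALSO be cancelled via cts.Token
Task<int> pending = tcs.Task.WaitAsync(TimeSpan.FromSeconds(5), cts.Token);

tcs.SetResult(123);
Console.WriteLine(await pending); // prints 123
```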

Summary

In this post I discussed the new Task.WaitAsync() method overloads introduced in .NET 6, and how you can use them to simplify any code where you want to await a Task, but want that await to be cancellable via a CancellationToken, or after a given timeout.

March 15, 2022 at 3:00 a.m.

Next Just because you stopped waiting doesn't mean the task isn't running anymore.

Previous Cancelling await calls in .NET 6 with Task.WaitAsync()


In this post I look at how the new Task.WaitAsync() API is implemented in .NET 6, by examining the internal types used to implement it.

Adding timeout or cancellation support for await Task

In my previous post I showed how you can "cancel" an await Task call for a Task that doesn't directly support cancellation, using the new WaitAsync() API in .NET 6.

I used WaitAsync() in that post to improve some code that waits for the IHostApplicationLifetime.ApplicationStarted event to fire. The final code I settled on is shown below:

```csharp
static async Task<bool> WaitForAppStartup(IHostApplicationLifetime lifetime, CancellationToken stoppingToken)
{
    try
    {
        // Create a TaskCompletionSource which completes when
        // the lifetime.ApplicationStarted token fires
        var tcs = new TaskCompletionSource();
        using var _ = lifetime.ApplicationStarted.Register(() => tcs.SetResult());

        // Wait for _either_ the TaskCompletionSource or the stopping token to fire,
        // using the new .NET 6 API, WaitAsync()
        await tcs.Task.WaitAsync(stoppingToken).ConfigureAwait(false);
        return true;
    }
    catch (TaskCanceledException)
    {
        // stoppingToken fired
        return false;
    }
}
```

In this post I look at how the .NET 6 Task.WaitAsync() API is actually implemented.

Diving into the Task.WaitAsync implementation

For the rest of this post I'll walk through the implementation behind the API. There's nothing very surprising in there, but I haven't spent much time looking at the code behind Task and its relatives, so it was interesting to see some of the details.

Task.WaitAsync() was introduced in this PR by Stephen Toub.

We'll start with the Task.WaitAsync methods:

```csharp
public class Task
{
    public Task WaitAsync(CancellationToken cancellationToken)
        => WaitAsync(Timeout.UnsignedInfinite, cancellationToken);

    public Task WaitAsync(TimeSpan timeout)
        => WaitAsync(ValidateTimeout(timeout, ExceptionArgument.timeout), default);

    public Task WaitAsync(TimeSpan timeout, CancellationToken cancellationToken)
        => WaitAsync(ValidateTimeout(timeout, ExceptionArgument.timeout), cancellationToken);
}
```

Ultimately, these three methods delegate to another, private, WaitAsync overload (shown shortly) that takes a timeout in milliseconds. That timeout is calculated and validated in the ValidateTimeout method, shown below, which asserts that the timeout is in the allowed range and converts it to a uint of milliseconds.

```csharp
internal static uint ValidateTimeout(TimeSpan timeout, ExceptionArgument argument)
{
    long totalMilliseconds = (long)timeout.TotalMilliseconds;
    if (totalMilliseconds < -1 || totalMilliseconds > Timer.MaxSupportedTimeout)
    {
        ThrowHelper.ThrowArgumentOutOfRangeException(argument, ExceptionResource.Task_InvalidTimerTimeSpan);
    }

    return (uint)totalMilliseconds;
}
```

Now we come to the private WaitAsync method that all the public APIs delegate to. I've annotated the method below:

```csharp
private Task WaitAsync(uint millisecondsTimeout, CancellationToken cancellationToken)
{
    // If the task has already completed, or if we have neither a timeout nor a
    // cancellation token, there's nothing to do: WaitAsync is a no-op returning the original Task
    if (IsCompleted || (!cancellationToken.CanBeCanceled && millisecondsTimeout == Timeout.UnsignedInfinite))
    {
        return this;
    }

    // If the cancellation token has already fired, we can immediately return a cancelled Task
    if (cancellationToken.IsCancellationRequested)
    {
        return FromCanceled(cancellationToken);
    }

    // If the timeout is 0, we immediately return a faulted Task
    if (millisecondsTimeout == 0)
    {
        return FromException(new TimeoutException());
    }

    // CancellationPromise<T> is where most of the heavy lifting happens
    return new CancellationPromise<VoidTaskResult>(this, millisecondsTimeout, cancellationToken);
}
```

Most of this method is checking whether we can take a fast path and avoid the extra work involved in creating a CancellationPromise<T>; if not, we have to deal with that type. Before we do, it's worth discussing the VoidTaskResult generic parameter used in the returned CancellationPromise<T>.

VoidTaskResult is an internal, nested type of Task, used a bit like the unit type in functional programming; it signals that you can ignore the T.

```csharp
// Special internal struct that we use to signify that we are not interested
// in a Task<VoidTaskResult>'s result.
internal struct VoidTaskResult { }
```

Using VoidTaskResult means the implementations for Task and Task<T> can be shared. In this case, the same CancellationPromise<T> implementation backs both the Task.WaitAsync() implementation shown above and the generic versions of these methods exposed by Task<TResult>.

Let's take a look at the implementation of CancellationPromise<T> to see where the magic happens.

Under the hood of CancellationPromise<T>

There are a few types involved in CancellationPromise that you're probably not familiar with unless you regularly browse the .NET source code, so we'll take this slowly.

First of all, we have the type signature of the nested type CancellationPromise<T>:

```csharp
public class Task
{
    private protected sealed class CancellationPromise<TResult> : Task<TResult>, ITaskCompletionAction
    {
        // ...
    }
}
```

Looking at the signature alone, there are a few things to note:

  • private protected: this modifier means that the CancellationPromise<T> type can only be accessed from classes that derive from Task and are in the same assembly. So you can't use it directly in your own code.
  • Task<TResult>: CancellationPromise<T> derives from Task<TResult>. For the most part it's a "normal" task, which can be cancelled, faulted, or completed just like any other Task.
  • ITaskCompletionAction: this is an internal interface that essentially lets you register a lightweight action to run when a Task completes. It's similar to a standard continuation created with ContinueWith, but with less overhead. Again, this is internal, so you can't use it in your own types. We'll look at this in more detail shortly.

That's the signature covered; now let's look at its private fields. The descriptions of these in the source cover it pretty well, I think:

```csharp
/// <summary>The source task. It's stored so that we can remove the continuation from it upon timeout or cancellation.</summary>
private readonly Task _task;
/// <summary>Cancellation registration used to unregister from the token source upon timeout or the task completing.</summary>
private readonly CancellationTokenRegistration _registration;
/// <summary>The timer used to implement the timeout. It's stored so that it's rooted, and so that we can dispose it upon cancellation or the task completing.</summary>
private readonly TimerQueueTimer? _timer;
```

So we have three fields:

  • The original Task on which we called WaitAsync()
  • The CancellationTokenRegistration received when we registered with the CancellationToken. If the default CancellationToken was used, this is a "dummy" default instance.
  • The timer used to implement the timeout behaviour (if required).

Note that the _timer field is of type TimerQueueTimer. This is another internal class, this time part of the overall timer implementation. We're going deep enough in this post as it is, so I'll only touch briefly on how this is used. For now, it's enough to know that it behaves similarly to a regular System.Threading.Timer.

So, to recap: CancellationPromise<T> is a class deriving from Task<T> that holds a reference to the original Task, a CancellationTokenRegistration, and a TimerQueueTimer.

The CancellationPromise constructor

Now let's look at the constructor. We'll take this in four bite-sized pieces. First, the arguments passed from Task.WaitAsync() have some debug assertions applied, and then the original Task is stored in _task. Finally, the CancellationPromise<T> instance is registered as a completion action for the source Task (we'll come back to what that means shortly).

```csharp
internal CancellationPromise(Task source, uint millisecondsDelay, CancellationToken token)
{
    Debug.Assert(source != null);
    Debug.Assert(millisecondsDelay != 0);

    // Register with the target task.
    _task = source;
    source.AddCompletionAction(this);

    // ... rest of the constructor covered shortly
}
```

Next we have the timeout handling. This creates a TimerQueueTimer, passing in a callback to be executed after millisecondsDelay (and which does not execute periodically). A static lambda is used to avoid capturing state, which is instead passed as the second argument to the TimerQueueTimer. The callback tries to mark the CancellationPromise<T> as faulted by setting a TimeoutException (remember that CancellationPromise<T> is itself a Task), and then does some cleanup we'll see shortly.

Note also that flowExecutionContext is false, which avoids capturing and restoring the execution context, for performance reasons. For more about execution context, see this post by Stephen Toub.

```csharp
// Register with a timer if it's needed.
if (millisecondsDelay != Timeout.UnsignedInfinite)
{
    _timer = new TimerQueueTimer(static state =>
    {
        var thisRef = (CancellationPromise<TResult>)state!;
        if (thisRef.TrySetException(new TimeoutException()))
        {
            thisRef.Cleanup();
        }
    },
    state: this,
    dueTime: millisecondsDelay,
    period: Timeout.UnsignedInfinite,
    flowExecutionContext: false);
}
```

With the timeout configured, the constructor then sets up the CancellationToken support. This registers a callback to fire when the CancellationToken is cancelled. Note that this again uses UnsafeRegister() (instead of the normal Register()) to avoid flowing the execution context into the callback.

```csharp
// Register with the cancellation token.
_registration = token.UnsafeRegister(static (state, cancellationToken) =>
{
    var thisRef = (CancellationPromise<TResult>)state!;
    if (thisRef.TrySetCanceled(cancellationToken))
    {
        thisRef.Cleanup();
    }
}, this);
```

Finally, the constructor does some housekeeping. This accounts for the situation where the source Task completes while the constructor is still executing, before the timeout and cancellation callbacks were registered; or where the timeout fires before the cancellation registration completes. Without the following block, you could have resource leaks that are never cleaned up.

```csharp
// If one of the callbacks fired, it's possible it did so prior to our having registered the
// other callbacks, and thus cleanup may have missed those additional registrations. Just
// in case, check here, and if we're already completed, unregister everything again.
// Unregistration is idempotent and thread-safe.
if (IsCompleted)
{
    Cleanup();
}
```

That's all the code in the constructor. Once constructed, the CancellationPromise<T> is returned from the WaitAsync() method as a Task (or a Task<T>), and can be awaited just like any other Task. In the next section we'll see what happens when the source Task completes.

Implementing ITaskCompletionAction

In the CancellationPromise<T> constructor, we registered a completion action with the source Task (the one we called WaitAsync() on):

```csharp
_task = source;
source.AddCompletionAction(this);
```

The object passed to AddCompletionAction() must implement ITaskCompletionAction (as CancellationPromise<T> does). ITaskCompletionAction is a simple interface, consisting of a single method (invoked when the source Task completes) and a single property:

```csharp
internal interface ITaskCompletionAction
{
    // Invoked to run the action
    void Invoke(Task completingTask);

    // Should only return false for specialised scenarios, for performance reasons.
    // Controls whether the action may be run as a synchronous continuation.
    bool InvokeMayRunArbitraryCode { get; }
}
```

CancellationPromise<T> implements this interface as shown below. It sets InvokeMayRunArbitraryCode to true (as in all non-specialised scenarios), and implements the Invoke() method, with the source Task passed in as the argument.

The implementation essentially "copies" the state of the source Task into the CancellationPromise<T> task:

  • If the source Task was cancelled, it calls TrySetCanceled, re-using the exception dispatch info to "hide" the details of the CancellationPromise<T>
  • If the source Task faulted, it calls TrySetException()
  • If the Task completed successfully, it calls TrySetResult

Note that whatever the status of the source Task, the TrySet* method may fail if cancellation was requested, or if the timeout expired, in the meantime. In those cases the bool variable is set to false, and the call to Cleanup() can be skipped (as the path that won the race will call it).

```csharp
class CancellationPromise<TResult> : ITaskCompletionAction
{
    bool ITaskCompletionAction.InvokeMayRunArbitraryCode => true;

    void ITaskCompletionAction.Invoke(Task completingTask)
    {
        Debug.Assert(completingTask.IsCompleted);
        bool set = completingTask.Status switch
        {
            TaskStatus.Canceled => TrySetCanceled(completingTask.CancellationToken, completingTask.GetCancellationExceptionDispatchInfo()),
            TaskStatus.Faulted => TrySetException(completingTask.GetExceptionDispatchInfos()),
            _ => completingTask is Task<TResult> taskTResult ? TrySetResult(taskTResult.Result) : TrySetResult(),
        };

        if (set)
        {
            Cleanup();
        }
    }
}
```

We've now seen all three callbacks for the three possible outcomes of WaitAsync(). Whichever happens first (the task completing, the timeout firing, or the cancellation token firing), there's something to clean up.

Cleaning up

One of the things that's easy to forget when working with CancellationTokens and timers is to clean up after yourself. CancellationPromise<T> makes sure to do this by calling Cleanup() in every case. This does three things:

  • Disposes the CancellationTokenRegistration returned from CancellationToken.UnsafeRegister()
  • Closes the TimerQueueTimer (if one exists), which cleans up the underlying resources
  • Removes the callback from the source Task, so that the ITaskCompletionAction.Invoke() method on CancellationPromise<T> won't be called
private void Cleanup()
{
    _registration.Dispose();
    _timer?.Close();
    _task.RemoveContinuation(this);
}

Each of these methods is idempotent and thread-safe, so it's safe to call Cleanup() from multiple callbacks — which can happen if, for example, something fires while we're still running the CancellationPromise<T> constructor.

One point to note is that even if the timeout expires, or the cancellation token fires, and the CancellationPromise<T> completes, the source Task keeps running in the background. The caller who executed source.WaitAsync() will never observe the output of that Task, but if the Task has side effects, they will still occur.

And that's it! It took a while to get through, but it doesn't take much code to implement WaitAsync(). It's somewhat comparable to the "naive" approach you might have used in previous versions of .NET, but it takes advantage of some of .NET's internal performance features. I hope it was interesting!

Summary

In this post I took an in-depth look at the new Task.WaitAsync() method in .NET 6, exploring how it is implemented using internal types of the BCL. I showed that the Task returned from WaitAsync() is actually a CancellationPromise<T> instance, which derives from Task<T> but supports cancellation and timeouts directly. Finally, I walked through the implementation of CancellationPromise<T>, showing how it wraps the source Task.

March 22, 2022 at 3:00 a.m.


Andrew Lock | .NET Adventures

At the end of my previous post, which covered the new .NET 6 Task.WaitAsync() API, I added a small side note about what happens to your Task when you use Task.WaitAsync(). Namely, even if the WaitAsync() call is cancelled or times out, the source Task continues running in the background.

Depending on your familiarity with the Task Parallel Library (TPL) or .NET in general, this may or may not be news to you, so I thought I'd take some time to describe some of the potential pitfalls you can run into when abandoning Tasks (or "timing them out").

Without special handling, a Task always runs to completion

Let's start by looking at what happens to the "source" Task when you use the new WaitAsync() API in .NET 6.

A point you might not consider when calling WaitAsync() is that even if the timeout expires, or the cancellation token fires, the source Task keeps running in the background. The caller who executed source.WaitAsync() will never observe the result of the Task, but if the Task has side effects, they will still occur.

For example, in this trivial example, we have a function that loops 10 times, printing to the console every second. We call this method and invoke WaitAsync():

using System;
using System.Threading.Tasks;

try
{
    await PrintHello().WaitAsync(TimeSpan.FromSeconds(3));
}
catch (Exception)
{
    Console.WriteLine("Bored of waiting");
}

// don't exit
Console.ReadLine();

async Task PrintHello()
{
    for (var i = 0; i < 10; i++)
    {
        Console.WriteLine("Hello number " + i);
        await Task.Delay(1_000);
    }
}

The output shows that the Task we were awaiting was abandoned after 3s, but the PrintHello() task carried on:

Hello number 0
Hello number 1
Hello number 2
Hello number 3
Bored of waiting
Hello number 4
Hello number 5
Hello number 6
Hello number 7
Hello number 8
Hello number 9

WaitAsync() lets you control how long you're willing to wait for a Task to complete. It does not allow you to arbitrarily interrupt a running Task. The same applies if you pass a CancellationToken to WaitAsync(): the source Task runs to completion, but its result is never observed.
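As a hedged sketch of the token-based overload (reusing the same PrintHello loop as above), cancelling the token abandons only the await, not the loop itself:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Cancel the token after 3 seconds
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(3));

try
{
    // The token cancels the *await*; PrintHello() itself keeps running
    await PrintHello().WaitAsync(cts.Token);
}
catch (OperationCanceledException)
{
    Console.WriteLine("Bored of waiting");
}

// don't exit
Console.ReadLine();

async Task PrintHello()
{
    for (var i = 0; i < 10; i++)
    {
        Console.WriteLine("Hello number " + i);
        await Task.Delay(1_000);
    }
}
```

The output is the same as the timeout-based example: the "Bored of waiting" line appears after three seconds, and the remaining "Hello number" lines keep printing afterwards.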

You get similar behaviour when using a "poor man's" WaitAsync() (an approach you could use before .NET 6):

using System;
using System.Threading.Tasks;

var printHello = PrintHello();
var completedTask = await Task.WhenAny(printHello, Task.Delay(TimeSpan.FromSeconds(3)));

if (completedTask == printHello)
{
    Console.WriteLine("Hello printing completed"); // Not called, due to the timeout
}
else
{
    Console.WriteLine("Bored of waiting");
}

// don't exit
Console.ReadLine();

async Task PrintHello()
{
    for (var i = 0; i < 10; i++)
    {
        Console.WriteLine("Hello number " + i);
        await Task.Delay(1_000);
    }
}

As before, the output shows that the PrintHello task continues even after we stop waiting for it:

Hello number 0
Hello number 1
Hello number 2
Hello number 3
Bored of waiting
Hello number 4
Hello number 5
Hello number 6
Hello number 7
Hello number 8
Hello number 9

So what can you do if you want to stop an in-flight Task, and stop it consuming resources?

Actually cancelling a task

The only way to get true cancellation of a background Task is for the Task itself to support it. That's why async APIs should almost always take a CancellationToken — to give the caller a mechanism to ask the Task to stop processing!

For example, we could rewrite the previous program to use a CancellationToken instead:

using System;
using System.Threading;
using System.Threading.Tasks;

try
{
    using var cts = new CancellationTokenSource();
    cts.CancelAfter(TimeSpan.FromSeconds(3));

    await PrintHello(cts.Token);
}
catch (Exception)
{
    Console.WriteLine("Bored of waiting");
}

// don't exit
Console.ReadLine();

async Task<bool> PrintHello(CancellationToken ct)
{
    for (var i = 0; i < 10; i++)
    {
        Console.WriteLine("Hello number " + i);
        ct.ThrowIfCancellationRequested(); // We could exit gracefully instead, but here we just throw
        await Task.Delay(TimeSpan.FromSeconds(1), ct);
    }
    return true;
}

Running this program gives the following output:

Hello number 0
Hello number 1
Hello number 2
Bored of waiting

Alternatively, we could rewrite the PrintHello method so that it doesn't throw when cancellation is requested:

async Task<bool> PrintHello(CancellationToken ct)
{
    try
    {
        for (var i = 0; i < 10; i++)
        {
            Console.WriteLine("Hello number " + i);
            if (ct.IsCancellationRequested)
            {
                return false;
            }

            // This throws if ct is cancelled while we're waiting,
            // hence the try...catch
            await Task.Delay(TimeSpan.FromSeconds(1), ct);
        }
        return true;
    }
    catch (TaskCanceledException)
    {
        return false;
    }
}

However, note that in a recent blog post, Stephen Cleary points out that in general you should not return quietly when cancellation is requested. Instead, you should throw.

Handling cancellation cooperatively with a CancellationToken is generally best practice, as callers typically want a Task to stop processing as soon as they stop waiting for it. But what if you want something different?

If the Task is still running, can I still get the result?

While writing this post, I realised there's an interesting scenario that the new WaitAsync() API in .NET 6 enables: you can await the source Task after WaitAsync() has completed. For example, you can wait a little while for a Task to complete, and if it doesn't, do something else in the meantime before coming back to it later:

using System;
using System.Threading.Tasks;

var task = PrintHello();

try
{
    // Wait with a timeout
    await task.WaitAsync(TimeSpan.FromSeconds(3));
    // If this completes successfully, the task finished before the timeout
}
catch (TimeoutException)
{
    // Timed out, so do something else for a while
    Console.WriteLine("Bored of waiting, doing some other work...");
}

// OK, we really need that result now
var result = await task;
Console.WriteLine("Received: " + result);

async Task<bool> PrintHello()
{
    for (var i = 0; i < 10; i++)
    {
        Console.WriteLine("Hello number " + i);
        await Task.Delay(TimeSpan.FromSeconds(1));
    }
    return true;
}

This is similar to the first example in this post, where the task keeps running after the timeout. But in this case we later retrieve the result of the completed task, even though the WaitAsync() task timed out:

Hello number 0
Hello number 1
Hello number 2
Hello number 3
Bored of waiting, doing some other work...
Hello number 4
Hello number 5
Hello number 6
Hello number 7
Hello number 8
Hello number 9
Received: True

Supporting cancellation in your own async methods gives callers more flexibility, by allowing them to opt out. And you probably should cancel tasks when you're no longer waiting for them, even when they have no side effects.

Cancelling calls to Task.Delay()

One example of a Task with no side effects is Task.Delay(). You've probably used this API before; it waits asynchronously (without blocking a thread) for a period of time before continuing.

It's possible to use Task.Delay() as a "timeout", similar to the "poor man's WaitAsync" I showed earlier, something like this:

// Start the actual task we care about (don't await it)
var task = DoSomethingAsync();

// Create the timeout task (don't await it)
var timeout = TimeSpan.FromSeconds(10);
var timeoutTask = Task.Delay(timeout);

// Run the task and the timeout in parallel, and wait for the first to complete
var completedTask = await Task.WhenAny(task, timeoutTask);

if (completedTask == task)
{
    // Await the task to bubble up any errors etc.
    return await task.ConfigureAwait(false);
}
else
{
    throw new TimeoutException($"Task timed out after {timeout}");
}

I'm not saying this is the "best" way to create a timeout; you could also use CancellationTokenSource.CancelAfter().
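As a hedged sketch of that alternative — assuming the task you're calling (here the same hypothetical DoSomethingAsync, taking a token) observes the token cooperatively:

```csharp
using var cts = new CancellationTokenSource();
cts.CancelAfter(TimeSpan.FromSeconds(10)); // arm the timeout

try
{
    // Assumes DoSomethingAsync honours the token cooperatively
    return await DoSomethingAsync(cts.Token);
}
catch (OperationCanceledException)
{
    throw new TimeoutException("Task timed out after 10 seconds");
}
```

Note the trade-off: this version actually cancels the underlying work (if it cooperates), whereas the Task.WhenAny() approach only stops waiting for it.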

In the Task.WhenAny() version, we have both the "main" async task and the Task.Delay(timeout) task, without awaiting either directly. We then use Task.WhenAny() to wait for either the task to complete, or the timeout Task to expire, and handle the result accordingly.

The "nice" thing about this approach is that you don't have to use exception handling. You can throw if you want (as I did for the timeout case in the example above), but you could equally use an exception-free approach.

The one thing to remember is that whichever Task finishes first, the other keeps running.

So why does it matter if a Task.Delay() keeps running in the background? Well, Task.Delay() uses a timer under the hood (specifically a TimerQueueTimer). That's mostly an implementation detail. But if you create a lot of calls to Task.Delay(), for whatever reason, those references can leak. The TimerQueueTimer instances are cleaned up when the Task.Delay() call expires, but if you're creating Task.Delay() calls faster than they're finishing, you have a memory leak.
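To make that failure mode concrete, here's a sketch of the leaking pattern (DoSomethingAsync is hypothetical): every iteration queues a long-lived timer that outlives the work it was guarding:

```csharp
// Anti-pattern sketch: each iteration creates a 10-minute Task.Delay.
// If the work typically completes in seconds, the un-elapsed delays
// (and their underlying TimerQueueTimer instances) accumulate.
while (true)
{
    var work = DoSomethingAsync(); // hypothetical, usually fast
    var timeoutTask = Task.Delay(TimeSpan.FromMinutes(10));

    var completed = await Task.WhenAny(work, timeoutTask);
    if (completed == timeoutTask)
    {
        throw new TimeoutException("Task timed out");
    }
    // The timeoutTask keeps its timer alive for the rest of the 10 minutes
}
```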

So how can you prevent this leak? As before, the "simple" answer is to cancel the Task when you're finished with it. For example:

var task = DoSomethingAsync();

var timeout = TimeSpan.FromSeconds(10);

// 👇 Use a CancellationTokenSource, and pass the token to Task.Delay
var cts = new CancellationTokenSource();
var timeoutTask = Task.Delay(timeout, cts.Token);

var completedTask = await Task.WhenAny(task, timeoutTask);

if (completedTask == task)
{
    cts.Cancel(); // 👈 Cancel the Task.Delay
    return await task.ConfigureAwait(false);
}
else
{
    throw new TimeoutException($"Task timed out after {timeout}");
}

This approach stops Task.Delay() from leaking, but be aware that CancellationTokenSource is itself quite heavyweight, so bear that in mind if you're creating a lot of these!

Summary

This post showed a number of different scenarios around Task cancellation, and what happens to Tasks that don't support cooperative cancellation with a CancellationToken. In every case, the Task keeps running in the background. If the Task has side effects, you should be aware that they may still occur. Likewise, even if the Task has no side effects, it may leak resources if it's left running.

March 29, 2022 at 3:00 a.m.


Andrew Lock | .NET Adventures

In this post, I describe how we tracked down a hanging xUnit test in our CI build. To achieve this, we wanted xUnit to log which tests were running. This proved rather harder than we anticipated, and we ended up creating a custom XunitTestFramework implementation. This post shares our pain with the world.

The problem: a hanging test in CI

In the Datadog APM tracer library we maintain a lot of tests. Obviously, we have many unit tests, which run quickly and rarely break. But given the nature of the APM tracer, where we're tightly coupled to the CLR profiler APIs, we also run a lot of integration tests, instrumenting sample applications and confirming that they behave as expected.

Running these tests is expensive, as each test requires starting a separate process, attaching the tracer, and performing the expected behaviour (e.g. sending web requests, accessing a database, etc.). What's more, as these are real applications, with all the concurrency and edge cases that come with them, a test will occasionally fail.

The problem is that sometimes a test hangs in CI. And by default, you don't know which one. xUnit doesn't indicate which test is currently running. And if you can't reproduce the problem locally in an IDE, how do you know which test hung? 🤔

It seemed like this should be easy to enable: a log message when a test starts, and another when a test ends. That way we could identify the hanging culprit. It wasn't as easy as we'd hoped.

Creating a custom test framework in xUnit

xUnit is a very opinionated testing framework. It provides the minimal functionality required of a framework, and leaves the rest up to you. There are many ways to extend and hook into the framework, but it doesn't give you much out of the box.

My colleague Kevin Gosse gets the credit/blame for this code, which you can find on GitHub.

After a lot of research, we (Kevin) found that the only way to log when a test starts and ends is to write a custom test framework for xUnit.

In this section I'll walk through all the layers required to do this. As a quick test project, I created a new project using the .NET CLI by running:

dotnet new xunit -n XunitCustomFrameworkTests
dotnet new sln
dotnet sln add ./XunitCustomFramework.csproj

So I added a simple test:

using Xunit;

namespace XunitCustomFrameworkTests;

public class CalculatorTests
{
    [Fact]
    public void ItWorks()
    {
        Assert.True(true);
    }
}

Now that we have a test project, let's hook into the start and end of each test method.

Creating the custom TestFramework

The TestFramework in xUnit is responsible for discovering and executing all the tests in your application. The default implementation is XunitTestFramework, but you can create your own test framework and "register" it with xUnit by adding an [assembly: Xunit.TestFramework] attribute.

The following creates our CustomTestFramework, derived from the default XunitTestFramework.

using Xunit.Abstractions;
using Xunit.Sdk;

namespace XunitCustomFrameworkTests;

public class CustomTestFramework : XunitTestFramework
{
    public CustomTestFramework(IMessageSink messageSink)
        : base(messageSink)
    {
    }
}

The IMessageSink is an important interface we'll use to write messages to the console, outside the context of a given test. We use it to log when a test starts and ends.

If you want to write messages to the output of a specific test, you need to use the ITestOutputHelper interface instead, injecting it into the constructor of your test class.
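As a quick sketch of that pattern (separate from the custom framework we're building), xUnit injects the helper into the test class constructor:

```csharp
using Xunit;
using Xunit.Abstractions;

namespace XunitCustomFrameworkTests;

public class CalculatorTests
{
    private readonly ITestOutputHelper _output;

    // xUnit provides the ITestOutputHelper via constructor injection
    public CalculatorTests(ITestOutputHelper output) => _output = output;

    [Fact]
    public void ItWorks()
    {
        _output.WriteLine("This message is attached to this test's own output");
        Assert.True(true);
    }
}
```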

xUnit won't use the CustomTestFramework automatically; you have to add the [TestFramework] attribute to the assembly. For example:

[assembly: Xunit.TestFramework("XunitCustomFrameworkTests.CustomTestFramework", "XunitCustomFrameworkTests")]

Remember that you must place assembly attributes before any namespace declarations. That's easy to get wrong with the new file-scoped namespace declarations in C# 10!
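For illustration, a sketch of correct placement in a file that also uses a file-scoped namespace (the file and class names here are arbitrary):

```csharp
using Xunit;

// Assembly-level attributes must come before the namespace declaration
[assembly: TestFramework("XunitCustomFrameworkTests.CustomTestFramework", "XunitCustomFrameworkTests")]

namespace XunitCustomFrameworkTests;

// With file-scoped namespaces, everything below the declaration is inside
// the namespace, so the attribute above is the only valid position
public class AssemblyInfoPlaceholder
{
}
```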

Adding a diagnostic message

With the CustomTestFramework in place, let's add a message to the constructor to confirm we're using the type as expected:

public CustomTestFramework(IMessageSink messageSink)
    : base(messageSink)
{
    messageSink.OnMessage(new DiagnosticMessage("Using CustomTestFramework"));
}

To write a message, you must create a DiagnosticMessage and pass it to the IMessageSink.OnMessage() method. We can test it out by running dotnet test:

> dotnet test --no-build --nologo --no-restore
Test run for C:\repos\blog-examples\XunitCustomFramework\bin\Debug\net6.0\XunitCustomFramework.dll (.NETCoreApp,Version=v6.0)
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.

Passed! - Failed: 0, Passed: 1, Skipped: 0, Total: 1, Duration: 11 ms - XunitCustomFramework.dll (net6.0)

Hmmm, there are no diagnostic messages, suggesting the CustomTestFramework isn't being called. That's because diagnostic messages must be enabled in the xUnit configuration.

Enabling diagnostic messages

The suggested approach to configuring xUnit is to use a xunit.runner.json file that is copied to the build output. Create the file xunit.runner.json in the root of your test project, and add the following:

{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "diagnosticMessages": true
}

The $schema key enables IntelliSense in most editors, and we also enable diagnostic messages. You need to make sure the JSON file is copied to the build output, so make sure you have the following <ItemGroup> in your csproj file:

<ItemGroup>
  <None Update="xunit.runner.json" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>

Now when we run dotnet test, we can see the diagnostic message:

> dotnet test --no-build --nologo --no-restore
Test run for C:\repos\blog-examples\XunitCustomFramework\bin\Debug\net6.0\XunitCustomFramework.dll (.NETCoreApp,Version=v6.0)
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
[xUnit.net 00:00:00.43] XunitCustomFramework: Using CustomTestFramework

Passed! - Failed: 0, Passed: 1, Skipped: 0, Total: 1, Duration: 11 ms - XunitCustomFramework.dll (net6.0)

OK, now that we know we're hooked in successfully, we can achieve our goal: logging when each test starts and ends.

Creating a custom TestMethodRunner

The TestFramework is the "top" of the stack that we need to replace, but I'm going to jump to the "bottom" piece for now, the custom TestMethodRunner. The RunTestCaseAsync method on this class is invoked once for each test case (a [Fact], or a single instance of a [Theory] test), so we can log the behaviour there nicely.

For simplicity, we can derive from the default implementation, XunitTestMethodRunner, and override the RunTestCaseAsync() method to add the behaviour we need. The commented code below shows how to do this:

class CustomTestMethodRunner : XunitTestMethodRunner
{
    private readonly IMessageSink _diagnosticMessageSink;

    // We need to pass all the injected values to the base constructor
    public CustomTestMethodRunner(ITestMethod testMethod, IReflectionTypeInfo @class, IReflectionMethodInfo method,
        IEnumerable<IXunitTestCase> testCases, IMessageSink diagnosticMessageSink, IMessageBus messageBus,
        ExceptionAggregator aggregator, CancellationTokenSource cancellationTokenSource, object[] constructorArguments)
        : base(testMethod, @class, method, testCases, diagnosticMessageSink, messageBus, aggregator, cancellationTokenSource, constructorArguments)
    {
        _diagnosticMessageSink = diagnosticMessageSink;
    }

    protected override async Task<RunSummary> RunTestCaseAsync(IXunitTestCase testCase)
    {
        // Create a text representation of the test parameters (for theory tests)
        var parameters = string.Empty;
        if (testCase.TestMethodArguments != null)
        {
            parameters = string.Join(", ", testCase.TestMethodArguments.Select(a => a?.ToString() ?? "null"));
        }

        // Build the full name of the test (class + method + parameters)
        var test = $"{TestMethod.TestClass.Class.Name}.{TestMethod.Method.Name}({parameters})";

        // Write a log to the output that we're starting the test
        _diagnosticMessageSink.OnMessage(new DiagnosticMessage($"STARTED: {test}"));

        try
        {
            // Execute the test and get the result
            var result = await base.RunTestCaseAsync(testCase);

            // Work out the final status of the test
            var status = result.Failed > 0
                ? "FAILURE"
                : (result.Skipped > 0 ? "SKIPPED" : "SUCCESS");

            // Write the result of the test to the output
            _diagnosticMessageSink.OnMessage(new DiagnosticMessage($"{status}: {test} ({result.Time}s)"));

            return result;
        }
        catch (Exception ex)
        {
            // Something went wrong trying to execute the test
            _diagnosticMessageSink.OnMessage(new DiagnosticMessage($"ERROR: {test} ({ex.Message})"));
            throw;
        }
    }
}

Hopefully the code in the snippet above is easy to follow. The main question now is how to plumb this into the CustomTestFramework we've built so far. This is where things get a bit tedious.

Creating the test framework class hierarchy

So far, we have implemented:

  • CustomTestFramework (an implementation of TestFramework)
  • CustomTestMethodRunner (an implementation of TestMethodRunner<IXunitTestCase>)

To connect them, so that our CustomTestFramework uses the CustomTestMethodRunner, we also need to create the following types:

  • CustomExecutor (an implementation of TestFrameworkExecutor<IXunitTestCase>)
  • CustomAssemblyRunner (an implementation of TestAssemblyRunner<IXunitTestCase>)
  • CustomTestCollectionRunner (an implementation of TestCollectionRunner<IXunitTestCase>)
  • CustomTestClassRunner (an implementation of TestClassRunner<IXunitTestCase>)

The CustomTestFramework creates the CustomExecutor, which creates the CustomAssemblyRunner, which creates the CustomTestCollectionRunner, which creates a CustomTestClassRunner, which finally creates the CustomTestMethodRunner! The sequence diagram looks something like this:

[Image: sequence diagram of the test framework class hierarchy, from CustomTestFramework down to CustomTestMethodRunner]

This is all boilerplate; none of it is custom behaviour, but because of the way the class structures are designed, we have to implement each of them. The final, complete hierarchy looks something like this:

public class CustomTestFramework : XunitTestFramework
{
    public CustomTestFramework(IMessageSink messageSink)
        : base(messageSink)
    {
        messageSink.OnMessage(new DiagnosticMessage("Using CustomTestFramework"));
    }

    protected override ITestFrameworkExecutor CreateExecutor(AssemblyName assemblyName)
        => new CustomExecutor(assemblyName, SourceInformationProvider, DiagnosticMessageSink);

    private class CustomExecutor : XunitTestFrameworkExecutor
    {
        public CustomExecutor(AssemblyName assemblyName, ISourceInformationProvider sourceInformationProvider, IMessageSink diagnosticMessageSink)
            : base(assemblyName, sourceInformationProvider, diagnosticMessageSink)
        {
        }

        protected override async void RunTestCases(IEnumerable<IXunitTestCase> testCases, IMessageSink executionMessageSink, ITestFrameworkExecutionOptions executionOptions)
        {
            using var assemblyRunner = new CustomAssemblyRunner(TestAssembly, testCases, DiagnosticMessageSink, executionMessageSink, executionOptions);
            await assemblyRunner.RunAsync();
        }
    }

    private class CustomAssemblyRunner : XunitTestAssemblyRunner
    {
        public CustomAssemblyRunner(ITestAssembly testAssembly, IEnumerable<IXunitTestCase> testCases, IMessageSink diagnosticMessageSink, IMessageSink executionMessageSink, ITestFrameworkExecutionOptions executionOptions)
            : base(testAssembly, testCases, diagnosticMessageSink, executionMessageSink, executionOptions)
        {
        }

        protected override Task<RunSummary> RunTestCollectionAsync(IMessageBus messageBus, ITestCollection testCollection, IEnumerable<IXunitTestCase> testCases, CancellationTokenSource cancellationTokenSource)
            => new CustomTestCollectionRunner(testCollection, testCases, DiagnosticMessageSink, messageBus, TestCaseOrderer, new ExceptionAggregator(Aggregator), cancellationTokenSource)
                .RunAsync();
    }

    private class CustomTestCollectionRunner : XunitTestCollectionRunner
    {
        public CustomTestCollectionRunner(ITestCollection testCollection, IEnumerable<IXunitTestCase> testCases, IMessageSink diagnosticMessageSink, IMessageBus messageBus, ITestCaseOrderer testCaseOrderer, ExceptionAggregator aggregator, CancellationTokenSource cancellationTokenSource)
            : base(testCollection, testCases, diagnosticMessageSink, messageBus, testCaseOrderer, aggregator, cancellationTokenSource)
        {
        }

        protected override Task<RunSummary> RunTestClassAsync(ITestClass testClass, IReflectionTypeInfo @class, IEnumerable<IXunitTestCase> testCases)
            => new CustomTestClassRunner(testClass, @class, testCases, DiagnosticMessageSink, MessageBus, TestCaseOrderer, new ExceptionAggregator(Aggregator), CancellationTokenSource, CollectionFixtureMappings)
                .RunAsync();
    }

    private class CustomTestClassRunner : XunitTestClassRunner
    {
        public CustomTestClassRunner(ITestClass testClass, IReflectionTypeInfo @class, IEnumerable<IXunitTestCase> testCases, IMessageSink diagnosticMessageSink, IMessageBus messageBus, ITestCaseOrderer testCaseOrderer, ExceptionAggregator aggregator, CancellationTokenSource cancellationTokenSource, IDictionary<Type, object> collectionFixtureMappings)
            : base(testClass, @class, testCases, diagnosticMessageSink, messageBus, testCaseOrderer, aggregator, cancellationTokenSource, collectionFixtureMappings)
        {
        }

        protected override Task<RunSummary> RunTestMethodAsync(ITestMethod testMethod, IReflectionMethodInfo method, IEnumerable<IXunitTestCase> testCases, object[] constructorArguments)
            => new CustomTestMethodRunner(testMethod, this.Class, method, testCases, this.DiagnosticMessageSink, this.MessageBus, new ExceptionAggregator(this.Aggregator), this.CancellationTokenSource, constructorArguments)
                .RunAsync();
    }

    private class CustomTestMethodRunner : XunitTestMethodRunner
    {
        private readonly IMessageSink _diagnosticMessageSink;

        public CustomTestMethodRunner(ITestMethod testMethod, IReflectionTypeInfo @class, IReflectionMethodInfo method, IEnumerable<IXunitTestCase> testCases, IMessageSink diagnosticMessageSink, IMessageBus messageBus, ExceptionAggregator aggregator, CancellationTokenSource cancellationTokenSource, object[] constructorArguments)
            : base(testMethod, @class, method, testCases, diagnosticMessageSink, messageBus, aggregator, cancellationTokenSource, constructorArguments)
        {
            _diagnosticMessageSink = diagnosticMessageSink;
        }

        protected override async Task<RunSummary> RunTestCaseAsync(IXunitTestCase testCase)
        {
            var parameters = string.Empty;

            if (testCase.TestMethodArguments != null)
            {
                parameters = string.Join(", ", testCase.TestMethodArguments.Select(a => a?.ToString() ?? "null"));
            }

            var test = $"{TestMethod.TestClass.Class.Name}.{TestMethod.Method.Name}({parameters})";

            _diagnosticMessageSink.OnMessage(new DiagnosticMessage($"STARTED: {test}"));

            try
            {
                var result = await base.RunTestCaseAsync(testCase);

                var status = result.Failed > 0
                    ? "FAILURE"
                    : (result.Skipped > 0 ? "SKIPPED" : "SUCCESS");

                _diagnosticMessageSink.OnMessage(new DiagnosticMessage($"{status}: {test} ({result.Time}s)"));

                return result;
            }
            catch (Exception ex)
            {
                _diagnosticMessageSink.OnMessage(new DiagnosticMessage($"ERROR: {test} ({ex.Message})"));
                throw;
            }
        }
    }
}

With the full hierarchy in place, if we now run dotnet test, we can see each test start and complete:

> dotnet test --no-build --nologo --no-restore
Test run for C:\repos\blog-examples\XunitCustomFramework\bin\Debug\net6.0\XunitCustomFramework.dll (.NETCoreApp,Version=v6.0)
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
[xUnit.net 00:00:00.43] XunitCustomFramework: Using CustomTestFramework
[xUnit.net 00:00:01.11] XunitCustomFramework: STARTED: XunitCustomFrameworkTests.CalculatorTests.ItWorks()
[xUnit.net 00:00:01.13] XunitCustomFramework: SUCCESS: XunitCustomFrameworkTests.CalculatorTests.ItWorks() (0.0075955s)

Passed! - Failed: 0, Passed: 1, Skipped: 0, Total: 1, Duration: 11 ms - XunitCustomFramework.dll (net6.0)

With the custom test framework in place, when something hangs in CI, we can now see which test started but never finished.

Making it easier to identify hung tests

Using the custom test framework to track the currently-running tests is certainly useful, but it still requires you to trawl through the output and try to identify which of the (potentially thousands of) tests doesn't have a corresponding end record.

We can improve on that a little by starting a timer for each test, and logging an explicit warning when a test has been running for too long.

For example, we can update the CustomTestMethodRunner.RunTestCaseAsync() method to start a timer before calling base.RunTestCaseAsync():

var deadlineMinutes = 2;
using var timer = new Timer(
    _ => _diagnosticMessageSink.OnMessage(new DiagnosticMessage($"WARNING: {test} has been running for more than {deadlineMinutes} minutes")),
    null,
    TimeSpan.FromMinutes(deadlineMinutes),
    Timeout.InfiniteTimeSpan);

If the test takes longer than the deadline (2 minutes in the example above), a warning is logged to the output. For example, if we create an intentionally long-running test:

public class CalculatorTests
{
    [Fact]
    public void VerySlowTest()
    {
        Thread.Sleep(TimeSpan.FromMinutes(3));
    }
}

then when we run dotnet test, we see a warning logged in the output:

> dotnet test --no-build --nologo --no-restore
Test run for C:\repos\blog-examples\XunitCustomFramework\bin\Debug\net6.0\XunitCustomFramework.dll (.NETCoreApp,Version=v6.0)
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
[xUnit.net 00:00:00.43] XunitCustomFramework: Using CustomTestFramework
[xUnit.net 00:00:00.73] XunitCustomFramework: STARTED: XunitCustomFrameworkTests.CalculatorTests.VerySlowTest()
[xUnit.net 00:02:00.74] XunitCustomFramework: WARNING: XunitCustomFrameworkTests.CalculatorTests.VerySlowTest() has been running for more than 2 minutes

It's now much easier to see which of the tests in CI has hung, without having to cross-reference a long list of started and completed tests.

Summary

In this post I described how to create a custom xUnit TestFramework. This lets you insert "hooks" into the test execution process, so you can track which tests are running and which have finished. To help further, we can track when tests run longer than a threshold, by starting a timer and logging a warning for each test. The whole process is rather more work than I'd like, but I hope this helps if you ever need to do something similar!

April 5, 2022 at 3:00 a.m.


Andrew Lock | .NET Adventures

In this post, I describe some scenarios where you need to change Git branches frequently, and why that can sometimes be annoying. I then cover a few ways to avoid switching branches. Finally, I describe how git worktree lets you check out multiple branches at once, so you can work on two branches at the same time, without them impacting each other.

Scenarios that require frequent branch changes

Have you ever had to switch back and forth between different Git branches, to work on two different features? It's relatively easy to do, but it can still be a bit tedious and time-consuming. I've run into several scenarios where you need to switch from one branch to another.

Scenario 1: Helping out a colleague

The first scenario is when you're deep in your code, working on a feature in the my-feature branch, when a colleague messages you asking for help with something on their branch, other-feature. You offer to check out their branch to take a look, but that involves a series of steps:

  1. Save the code you are working on. You can use git stash --all to stash your changes and any new files. Alternatively, create a "dummy" commit on your branch with git commit -am "WIP" (which I prefer).
  2. Switch to the other branch. You can use the UI in your IDE (e.g. Visual Studio, Rider), or you can use the command line to run git checkout other-feature or git switch other-feature.
  3. Wait for your IDE to catch up. I think this is often the most painful step, whether you're using Visual Studio or Rider. For large solutions, it can take a while for the IDE to notice all the changes, re-analyze the files, and do whatever else it needs to.
  4. Make your changes. From here you can work as normal, committing the changes and pushing them to the other-feature branch. When you're done, it's time to switch back: go to step 1.
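Those four steps can be sketched as a short shell session. This is just an illustration: the throwaway repository, the file, and the branch names (my-feature, other-feature) are stand-ins created purely for the example.

```shell
#!/bin/sh
set -e

# Scaffolding: a throwaway repo with the two branches from the scenario
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo base > file.txt
git add . && git commit -qm "initial"
git branch other-feature
git checkout -qb my-feature

# 1. Save the code you are working on
echo wip > file.txt
git stash --all

# 2. Switch to the other branch (your IDE now has to catch up...)
git checkout -q other-feature

# 4. ...make and commit changes there, then switch back and restore your work
git checkout -q my-feature
git stash pop

current_branch=$(git rev-parse --abbrev-ref HEAD)
final_content=$(cat file.txt)
echo "$current_branch: $final_content"
```

The stash/pop pair is what makes step 1 reversible: after popping, the working tree is exactly as you left it.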

This is a conceptually simple series of steps to follow. The most painful step, in my experience, is step 3: waiting for the IDE to finish whatever it needs to do before you can be productive again. Luckily, this scenario probably crops up rarely enough that it's not worth worrying about too much.

Interestingly, I've found that IDEs are a lot less "flaky" when you use their built-in support for switching Git branches, rather than switching on the command line and waiting for the IDE to "notice" the changes.

Scenario 2: Fixing a bug

In this scenario, you've just finished a feature and merged it. Unfortunately, it has a bug, and you need to fix it quickly. As you've already started work on my-feature, fixing it involves exactly the same steps as the previous scenario.

Scenario 3: Working on two features at the same time

This last scenario, working on two separate features at the same time, sounds like a bad idea. Aside from the technical issues described in this post, the constant context switching costs productivity. Unfortunately, it's a scenario I find myself in fairly regularly.

In my day job, I usually work on the CI build process. We're constantly trying to tweak and improve our builds, and while we use Nuke to ensure consistency between our local and CI builds, some things simply have to be tested in CI.

As anyone who has worked with CI knows, working on a CI branch tends to produce commit histories like these:

(Image: a commit history on a CI branch, made up of many small "fix the build" commits)

Each of these commits makes a small change, which has to be pushed to the server, and then you wait for a CI build to complete. Depending on your CI process, this can lead to a long cycle time, where you have to wait (for example) an hour to see the results of your changes.

Because of this cycle time, I usually work on something else while waiting to see the fruits of my CI work. That means following the same steps as scenario 1 above, every hour or so. When the CI results come back, I stash my work in progress and switch to the ci-feature branch, make my changes, trigger another build, and switch back to the my-feature branch.

Add in the IDE's branch-switching tax, and this quickly becomes frustrating. To avoid it, I looked for ways to make working on two branches at the same time easier.

Working on multiple Git branches at the same time

Just to be clear, switching branches with git itself is quick and easy. It's working in a large solution that creates the friction, as it makes branch changes more expensive for IDEs (they have to do more work to check for changes, update their internal representations, and so on). This friction prompted me to look for ways to avoid switching at all.

Solution 1: Work in the GitHub UI

The simplest solution to avoid the problems of switching branches locally is: don't switch branches locally. It seems like an odd suggestion, but often, particularly when working on CI, I simply edit a branch directly in the GitHub UI. This is especially convenient with the new github.dev experience built into GitHub.

To open github.dev for a repository, press the . (period) key in any github.com repository.

https://github.dev offers a browser-based VS Code editing experience that is far superior to the experience on https://github.com. From here you can create and modify branches, edit multiple files, and commit them. This is usually more than enough to make a quick edit or fix a typo.

(Image: editing a repository in the browser-based VS Code experience at github.dev)

Where this falls short is for more complex tasks, when you need the full IDE experience.

Solution 2: Clone the repository again

The brute-force way to work on two Git branches at the same time is to clone the entire repository into a second folder and check out a different branch in each clone. The following example clones the same repository into app-example and app-example-2:
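As a sketch, the setup looks something like this; a local repository stands in for the real remote URL, and the branch names are hypothetical:

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
cd "$work"

# Stand-in for the remote server: a local repo with an extra branch
git init -q app-example-origin
git -C app-example-origin config user.email demo@example.com
git -C app-example-origin config user.name Demo
git -C app-example-origin commit -q --allow-empty -m initial
git -C app-example-origin branch other-feature

# Clone the same repository twice, into app-example and app-example-2
git clone -q app-example-origin app-example
git clone -q app-example-origin app-example-2

# Check out a different branch in each clone
git -C app-example checkout -qb my-feature
git -C app-example-2 checkout -q other-feature

branch1=$(git -C app-example rev-parse --abbrev-ref HEAD)
branch2=$(git -C app-example-2 rev-parse --abbrev-ref HEAD)
echo "$branch1 / $branch2"
```

Each clone is now a completely independent copy of the repository, with its own checked-out branch.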

(Image: the app-example and app-example-2 clones side by side, each with its own .git folder)

This certainly does the job. You can open the solution from each clone in a separate instance of your IDE and never have to switch branches. Your IDE is happy, there's no re-parsing, and switching branches is as simple as switching windows.

Unfortunately, this has some disadvantages.

  • Data duplication. As you can see in the image above, the two clones each have their own .git folder containing the full history of the repository. This folder is essentially identical between the two clones.
  • Duplicated update processes. Because the two clones are completely independent, every time you update one local clone with a git fetch or git pull, you have to repeat the process in the other clone if you want it to stay up to date.
  • No shared local branches. Again, since the two clones are independent, changes on a branch in app-example will not be visible on the corresponding branch in app-example-2. The only (reasonable) way to sync local branches between the two clones is to push the branch to the server and pull it in the other clone. That adds friction to something that doesn't seem like it should be this hard.

I say the only "reasonable" way because you could add the app-example clone as another Git remote in the app-example-2 clone, but that way lies madness.

Solution 3: git worktree

The final solution I know of is to use git worktree. It works much like solution 2, but it manages to solve all the problems mentioned above. In the rest of this post I describe how to use git worktree, and how it lets you work on two Git branches at the same time.

Multiple working trees with git worktree

git has the concept of a "working tree". This is the actual set of files you see in the folder when you check out a branch (excluding the special .git folder). When you check out a different branch, git updates all the files on disk to match the files in the new branch. You can have many branches in your repository, but only one of them is "checked out" as the working tree so that you can work in it and make changes.

git worktree adds the concept of additional working trees. This means you can have two (or more) branches checked out at the same time. Each working tree is checked out in a different folder, much like the multiple-clones solution in the previous section. But in contrast to that solution, the working trees are all linked to the same clone. We'll explore the implications of this shortly.

Managing working trees with git worktree

as always withgitThere are many different ways of using itGit working tree, and a variety of options you can provide. In this section, I'll show you the basics of how to use itGit working treebased on the scenarios I showed at the beginning of this post.

Creating a working tree from an existing branch

In the first scenario, a colleague asks you to look at an existing branch. You're in the middle of a big refactoring on your own branch, so instead of stashing your changes, you decide to create a new working tree to take a look. The branch you need to look at is called other-feature, so you run the following from the root of your repository:

$ git worktree add ../app-example-2 other-feature
Preparing worktree (checking out 'other-feature')
HEAD is now at d6a507b Attempt to fix

In this case, the git worktree command has three additional arguments:

  • add indicates that you want to create a new working tree
  • ../app-example-2 is the path to the folder where the working tree will be created. As we ran the command from the repository root, this creates a folder called app-example-2 next to the folder containing the clone.
  • other-feature is the name of the branch to check out in the working tree

After running the command, you can see that git has created the app-example-2 directory, containing the checked-out files:

(Image: the app-example-2 directory created next to app-example, containing the checked-out files)

The eagle-eyed among you will notice that there is no .git directory in the app-example-2 working tree. Instead, there is a .git file. This file points to the git directory of the original clone, which means all your git commands work inside the app-example-2 directory just as they do in the original app-example directory.

I won't go into the technical details of how this all works; see the documentation for more details if you're interested.

After you've helped your colleague, you no longer need the other-feature working tree. To remove it, run the following from your "main" working tree (in the app-example directory):

git worktree remove ../app-example-2

This removes the app-example-2 directory. The branch you had checked out there is unaffected, of course; it's simply no longer checked out.

If you have uncommitted changes in the linked working tree, git will block its removal. You can pass --force if you're sure you want to lose the uncommitted changes.
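As an aside, you can see which working trees are linked to a clone at any time with git worktree list. A self-contained sketch (using a throwaway repository; the branch name other-feature is hypothetical):

```shell
#!/bin/sh
set -e
parent=$(mktemp -d)
cd "$parent"
git init -q app-example
cd app-example
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m initial
git branch other-feature

# Create a linked working tree, list all working trees, then remove it
git worktree add ../app-example-2 other-feature
git worktree list
count_before=$(git worktree list | wc -l | tr -d ' ')
git worktree remove ../app-example-2
count_after=$(git worktree list | wc -l | tr -d ' ')
```

git worktree list shows one line per working tree: the path, the checked-out commit, and the branch name.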

Create a working tree from a new branch

In the second scenario, imagine you need to quickly fix a bug. In this case, you don't have a branch yet. git worktree has a handy -b option to create a new branch and check it out in the new working tree:

> git worktree add ../app-example-2 origin/main -b bug-fix
Preparing worktree (new branch 'bug-fix')
Branch 'bug-fix' set up to track remote branch 'main' from 'origin'.
HEAD is now at 37ae55f Merge pull request #417

This example creates a new branch, bug-fix, based on the origin/main branch, and checks it out in ../app-example-2. It can be removed from the main working tree by running git worktree remove ../app-example-2, as before.

Working on two long-running branches

This brings us to the final scenario, where I have two long-running branches that I want checked out at the same time. From git worktree's perspective, this is the same as the previous scenarios: just create a new working tree (and optionally a new branch) in a different location. You can work on it from your IDE, and it's treated like a "normal" Git clone.

In my case, I tend to create "long-lived" working trees. The "main" feature I'm working on goes in my "main" working tree; the CI work goes in the "linked" working tree. But instead of deleting the linked working tree when I'm done, I keep it around. That way I have a "permanent" linked working tree, app-example-2, that I can use to check out another branch whenever I need to.

This sounds like a lot of overhead, but luckily it isn't, as it solves most of the problems associated with juggling multiple clones.

The benefits of git worktree

While having two working trees is obviously more overhead than a single working tree, git worktree solves all the problems associated with having multiple clones:

  • Data duplication. A linked working tree uses the same data (i.e. the .git folder) as your main working tree, so there is no duplication.
  • Duplicated update processes. If you fetch in one working tree, or rename a branch in another, the changes are immediately reflected in all working trees, since they all operate on the same underlying data.
  • No shared local branches. Again, since the working trees use the same data, local branches are simply shared between them, so you have none of the problems you get with multiple clones.
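That last point is easy to demonstrate: a branch created in a linked working tree is immediately visible from the main one, because both use the same .git data. Another throwaway sketch, with hypothetical branch names:

```shell
#!/bin/sh
set -e
parent=$(mktemp -d)
cd "$parent"
git init -q app-example
cd app-example
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m initial

# Create a linked working tree on a new branch
git worktree add -b other-feature ../app-example-2

# A branch created from within the linked working tree...
git -C ../app-example-2 branch bug-fix

# ...is immediately visible from the main working tree
git branch --list bug-fix
visible=$(git branch --list bug-fix | wc -l | tr -d ' ')
```

With multiple clones, the bug-fix branch would only exist in the clone that created it; here, there is only one branch database.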

I've only shown some of the most basic uses of git worktree here, but there's a lot more you can do if you need it (creating a working tree without checking out a branch, locking a working tree, adding custom configuration, etc.). See the documentation for details.

Disadvantages of git worktree

The biggest mark against git worktree is the simple overhead of having two "top-level" folders for a single repository. But then, that's about the biggest mark against it. If this is a pattern you use all the time, I'd suggest nesting all of your working trees inside a single parent directory, like this:

(Image: the main and linked working trees nested inside a single parent folder, with the main working tree in a folder called _)

I suggest using _ as the name of the folder holding the main working tree, so that it typically sorts to the top of the folder list in Explorer.

Also note that you cannot check out the same branch in more than one working tree. For example, if you have main checked out in one working tree and you try to check it out in another, you'll get an error like the following:

PS> git worktree add ../main
Preparing worktree (checking out 'main')
fatal: 'main' is already checked out at 'C:/repos/SampleApplication/_'

If you try to switch to a branch that is checked out in another working tree, you will also get an error:

PS> git switch bug-fix
fatal: 'bug-fix' is already checked out at 'C:/repos/app-example/linked'

These are generally fairly minor issues with obvious fixes; they're just something to be aware of. The biggest downside is the cognitive overhead of having two folders linked to the same clone, but if that's what you need, git worktree can be a very useful solution.

Summary

In this post I described several scenarios where you need to switch back and forth between Git branches. When you're in the middle of a big refactoring or some complex work, dealing with these scenarios can be frustrating. git worktree provides a solution to the problem by allowing you to check out a different branch into a different folder, with the working tree "linked" to your main repository. I've found this useful when working on two separate feature branches at the same time, though it comes with some cognitive overhead, so I tend to only use it in those "long-running branch" scenarios.

April 12, 2022 at 3:00 a.m.

Next Run JavaScript in a .NET app with JavaScriptEngineSwitcher

Previous Working on two Git branches at the same time with git worktree


In a recent post I looked at a new API, Task.WaitAsync(), introduced in .NET 6. That API was pointed out to me on Twitter by Andreas Gehrke, and it raises an interesting question: how do you keep up with all the new APIs introduced in each new version of .NET? I was asked this recently, so I thought I'd quickly document some of the sources I use to stay up to date when a new version of .NET is released.

Read the blog posts

Each new version of .NET contains a large number of blog posts. For example, this is the .NET 6 release post, which links to similar posts specific to ASP.NET Core, Entity Framework, .NET MAUI, and more. These posts are often the best place to start, as they provide a high-level overview and step-by-step guides to many of the new features.

Watch .NET Conf videos

I'm a big fan of blog posts, but if you prefer to learn from videos: all recent .NET versions have been announced at .NET Conf, an online-only conference. It consists of hours of content from presenters, covering everything from what's new in the latest version of .NET to getting started with your first application. There are also many community contributions, so you'll find a wide variety of speakers and content.

See documentation

In addition to the announcement blog posts, Microsoft provides upgrade guides and lists of breaking changes for each .NET version at https://docs.microsoft.com. For example, this link shows the breaking changes between .NET 5 and .NET 6.

(Image: the list of breaking changes between .NET 5 and .NET 6 in the Microsoft documentation)

Each breaking change has its own link describing what changed, why, which version it was introduced in, and the recommended action. This list is invaluable when upgrading a project. Most changes won't affect you, but I strongly encourage you to read the list and do your due diligence.

Listen to the community

The official channels (blog posts, documentation, .NET Conf) are a great way to learn about most features released in a new version of .NET, but you can always find additional content created by the community. This often takes a different angle than the official Microsoft channels, so I recommend keeping an eye out for it. Personally, I get most of my content from my Twitter community, RSS feeds, and the .NET Community Standup.

I mainly consume written content, but there's also a lot of .NET content in other forms. For example, Nick Chapsas has a huge following on YouTube, while Jeff Fritz streams regularly on Twitch. As for podcasts, I subscribe to and listen to the following .NET-related podcasts (among others!):

  • .NET Rocks!
  • Adventures in .NET
  • The Azure DevOps Podcast
  • Hanselminutes
  • Herding Code
  • Merge Conflict
  • No Dogma Podcast
  • The .NET Core Podcast
  • The 6 Figure Developer
  • The Unhandled Exception Podcast
  • Weekly Dev Tips

Try the previews

.NET is developed openly on GitHub, and the first preview builds of the next version will be shipped shortly after the release of the previous stable version. The first previews of .NET 7 were released in February 2022, just 3 months after the release of .NET 6.

These releases are always accompanied by blog posts detailing what's new, and they can be a great way to keep track of what's coming in the next release. Even if you don't actually install and use the preview builds, reading these posts can help: keeping up with the changes in small chunks is often easier than trying to digest everything that's new in a stable release all at once. And if you do try the preview builds, you can help shape the final release by reporting issues or commenting on GitHub.

Follow the GitHub repositories

As I mentioned, .NET is developed openly on GitHub, so you can see pretty much everything that's happening by following the organization https://github.com/dotnet. Depending on where your interests lie, you may want to look into one or more of the following:

  • https://github.com/dotnet/runtime - The .NET runtime and libraries
  • https://github.com/dotnet/aspnetcore – ASP.NET Core
  • https://github.com/dotnet/efcore – Entity Framework Core
  • https://github.com/dotnet/roslyn - The Roslyn compiler
  • https://github.com/dotnet/fsharp - The F# compiler and core libraries

There are many more repositories, but be warned: these are very busy repositories, and trying to keep up with everything as it happens can be very difficult. But when you're exploring a new feature, learning to navigate these repositories is extremely rewarding.

See API differences for base class libraries

This last point is one of the lesser-known ways to learn about the changes in new versions of .NET. The https://github.com/dotnet/core repository contains release notes for each .NET version. These include the usual links to downloads and documentation, but one item I've found incredibly useful is the api-diff list.

api-diff describes all the API changes to the .NET Core (and ASP.NET Core/Windows desktop) libraries for a given version. For example, if you check the api-diff for System.Threading.Tasks in .NET 6, you'll see all the WaitAsync() methods added to Task and Task<T> (as well as the new Parallel.ForEachAsync() method):

(Image: the api-diff for System.Threading.Tasks in .NET 6, showing the new WaitAsync() methods)

You certainly don't have to look through all of these API diffs when a new version of .NET is released, but I personally find them very useful for spotting small quality-of-life improvements that aren't big enough to be mentioned elsewhere. For example, did you know that you can now control your application's behavior when a BackgroundService throws an exception? There's a documented breaking change about this, but you could also discover it by looking at the api-diff for Microsoft.Extensions.Hosting:

(Image: the api-diff for Microsoft.Extensions.Hosting in .NET 6)

If you want to learn how to use the new APIs, you'll inevitably have to browse the code in the repository (or read the documentation) to see how they interact with other features. But I really like the diffs because they give a high-level view of what's changed, without getting bogged down in implementation details.

Summary

.NET is big, so keeping track of all the changes in a new version can be quite a task. In this post, I described some of the content and resources I use to understand both the headline features introduced in a new release and the implementation details. I generally get the overview from the announcement blog posts and documentation, dig into implementation details on GitHub, and get alternative takes from the community. I also highlighted the api-diff files on GitHub, which list the API changes as easily consumable diffs.

Apr 19, 2022 3:00 am

Next Why isn't my ASP.NET Core app working on Docker?

Previous Stay current with .NET: Learn about new features and APIs


I was recently working on a side project and realized I really needed to use some JavaScript functionality. I couldn't face the idea of switching to Node.js and npm, so I decided to run JavaScript inside a .NET application. Crazy, right? It's actually surprisingly easy!

Why would you do that?

As much as I love the .NET ecosystem, there are some things the JavaScript ecosystem simply does better. One of those things is having a library for everything, particularly anything web-related.

Take syntax highlighting as an example. You can do this directly in C#, but the options aren't particularly polished. For example, the TextMateSharp project provides an interpreter for TextMate grammar files, the files VS Code uses to provide basic syntax highlighting for a language. However, it's a native dependency, which adds some complexity when you want to distribute your app.

In contrast, JavaScript has a wealth of mature syntax-highlighting libraries. To name a few, there are Highlight.js, Prism.js (used on this blog), and shiki.js. The first two in particular are very mature, with lots of plugins and themes and simple APIs.

The obvious problem with JavaScript, as a .NET developer, is that you have to learn and adopt a completely separate toolchain built around Node.js and npm. That seems like a lot of overhead just to use one small feature.

So we seem to be stuck. We can go the C# (plus native dependency) route, or jump over to JavaScript.

Or… we could call JavaScript directly from our .NET app 🤯

Approaches to running JavaScript in .NET

Once you've accepted that you want to run JavaScript from your .NET code, a few options come to mind. You could install a JavaScript runtime (like Node.js) and have your application invoke it, but that doesn't really solve the problem: you'd still need Node.js installed.

Another option is to embed a JavaScript engine directly in your application. This isn't as crazy as it sounds, and there are several NuGet packages that take this approach, exposing a C# layer for interacting with the engine. The following is a selection of just some of the packages you can use.

Jering.Javascript.NodeJS

This library takes the first of the approaches above. It does not include Node.js in the package. Instead, it exposes a C# API for running JavaScript code, and calls into the Node.js installed on your machine. This can be useful in environments where you know Node.js is installed, but it doesn't really solve the logistical problem we were trying to avoid.

ChakraCore

ChakraCore was the original JavaScript engine used by Microsoft Edge before Edge was based on Chromium. According to the GitHub project:

ChakraCore is a JavaScript engine with a C API that allows you to add JavaScript support to any C or C-compatible project. It can be compiled for x64 processors on Linux, macOS and Windows, and for x86 and ARM on Windows only.

So ChakraCore comes with a native dependency, but since C# can call native libraries via P/Invoke, this isn't a problem per se. It can, however, present some deployment challenges.

ClearScript (V8)

The V8 JavaScript engine powers Node.js, Chromium, Chrome, and the latest Edge. The Microsoft.ClearScript package provides a wrapper around the library, exposing a C# interface for calling into V8. As with ChakraCore, the V8 engine itself is a native dependency. The ClearScript library takes care of the P/Invoke calls and provides a nice C# API, but you still need to make sure you deploy the correct native libraries for your target platform.

Jint

Jint is interesting because it's a JavaScript interpreter that runs entirely on .NET: there are no native dependencies to manage! It has full ECMAScript 5.1 (ES5) support and targets .NET Standard 2.0, so you can use it in all your projects!

Jurassic

Jurassic is another .NET implementation of a JavaScript engine, similar to Jint. Like Jint, it supports all of ES5, and it appears to partially support ES6 as well. Unlike Jint, Jurassic is not an interpreter; it compiles JavaScript to IL, which makes it very fast, and it has no native dependencies!

So, with all these options, which one should you choose?

JavaScriptEngineSwitcher - when one JS engine is not enough

I've buried the lede a bit here, as there's another great project that makes it easy to try all of these engines. While all the libraries above let you run JavaScript, they each have slightly different C# APIs for interacting with them. That can make comparing them a bit tricky, as you have to learn a different API for each one.

Enter JavaScriptEngineSwitcher. This library provides wrapper packages for all of the libraries I mentioned above, and more:

  • Jering.Javascript.NodeJS
  • ChakraCore
  • Microsoft ClearScript.V8
  • Jint
  • Jurassic
  • MSIE JavaScript Engine for .NET
  • NiL.JS
  • VroomJs

Each of the libraries is supported via a separate package (with an additional native package required for the engines with native dependencies), plus a "core" package that provides the common API surface. Even if you don't intend to switch JS engines, I'd be inclined to use the JavaScriptEngineSwitcher wrapper libraries where possible, just so you don't have to learn a new API if you need to switch engines later.

Contrary to the old trope of "how often do you really change your database?", changing the JavaScript engine used in your .NET project seems quite plausible to me. For example, I started with Jint, but when I needed to run larger scripts I ran into performance issues and switched to Jurassic. JavaScriptEngineSwitcher made this as easy as adding a new package to my project and changing the startup code.

I only discovered JavaScriptEngineSwitcher recently, but the latest version has almost a million downloads, and it's used in the .NET static-site generator Statiq. In the final part of this post, I'll give a quick example of its most basic usage.

A case study: running Prism.js in a console application using JavaScriptEngineSwitcher

I started this post by discussing a specific scenario: syntax highlighting blocks of code. In this section, I'll show how to use Prism.js to highlight a small piece of code, running it all in a console application.

First, add a reference to the JavaScriptEngineSwitcher.Jurassic NuGet package:

dotnet add package JavaScriptEngineSwitcher.Jurassic

Then, download the JavaScript file you want to run. For example, I downloaded the prism.js file from their website, adding C# to the default set of supported languages. After placing the file in the root of the project folder, I set it to be an embedded resource. You can do this from your IDE, or manually by editing the project file:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="JavaScriptEngineSwitcher.Jurassic" Version="3.17.4" />
  </ItemGroup>

  <!-- 👇 Make prism.js an embedded resource -->
  <ItemGroup>
    <None Remove="prism.js" />
    <EmbeddedResource Include="prism.js" />
  </ItemGroup>

</Project>

Now all that's left is to write the code that runs the script in our program. The following snippet configures the JavaScript engine, loads the embedded prism.js library from the assembly, and executes it:

using JavaScriptEngineSwitcher.Jurassic;

// Create an instance of the JavaScript engine
IJsEngine engine = new JurassicJsEngine();

// Execute the embedded resource called JsInDotnet.prism.js from the given assembly
engine.ExecuteResource("JsInDotnet.prism.js", typeof(Program).Assembly);

Now we can run our own JavaScript in the same context. We can pass values between C# and the JavaScript engine using SetVariableValue, Execute, and Evaluate:

// This is the code we want to highlight
string code = @"using System;
public class Test : ITest
{
    public int Id { get; set; }
    public string Name { get; set; }
}";

// Set the JavaScript variable called "input" to the value of the C# variable "code"
engine.SetVariableValue("input", code);

// Set the JavaScript variable called "lang" to the string "csharp"
engine.SetVariableValue("lang", "csharp");

// Run the Prism.highlight() function, and store the result in the "highlighted" variable
engine.Execute("highlighted = Prism.highlight(input, Prism.languages.csharp, lang)");

// "Extract" the value of "highlighted" from JavaScript into C#
string result = engine.Evaluate<string>("highlighted");

Console.WriteLine(result);

Putting everything together will print the highlighted code to the console:

<span class="token keyword">using</span> <span class="token namespace">System</span><span class="token punctuation">;</span>
<span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">Test</span> <span class="token punctuation">:</span> <span class="token type-list"><span class="token class-name">ITest</span></span>
<span class="token punctuation">{</span>
    <span class="token keyword">public</span> <span class="token return-type class-name"><span class="token keyword">int</span></span> Id <span class="token punctuation">{</span> <span class="token keyword">get</span><span class="token punctuation">;</span> <span class="token keyword">set</span><span class="token punctuation">;</span> <span class="token punctuation">}</span>
    <span class="token keyword">public</span> <span class="token return-type class-name"><span class="token keyword">string</span></span> Name <span class="token punctuation">{</span> <span class="token keyword">get</span><span class="token punctuation">;</span> <span class="token keyword">set</span><span class="token punctuation">;</span> <span class="token punctuation">}</span>
<span class="token punctuation">}</span>

Which when rendered looks like this:

using System;
public class Test : ITest
{
    public int Id { get; set; }
    public string Name { get; set; }
}

I was surprised how easy this whole process was. Spinning up a new JavaScript engine, loading the prism.js library, and running our custom code was a breeze. It was the perfect solution for my scenario.

Of course, I wouldn't suggest this for every app. If you have a lot of JavaScript to run, it's probably easier to use the Node.js ecosystem's languages and tools directly. But if you just want to use a small, self-contained tool (like Prism.js), this is a great option.

Summary

In this post, I showed how to use the JavaScriptEngineSwitcher NuGet package to run JavaScript inside a .NET application. This package provides a consistent interface over many different JavaScript engines. Some of the engines (like ChakraCore and V8) have a native component, while others (like Jint and Jurassic) use only managed code. Finally, I showed how to use JavaScriptEngineSwitcher to run the Prism.js code-highlighting library from inside a .NET application.

April 26, 2022 at 3:00 a.m.

Next Generating sortable Guids with NewId

Previous Run JavaScript in a .NET app with JavaScriptEngineSwitcher


In this post, I describe an issue I ran into the other day that had me confused for a while: why was my ASP.NET Core app running in Docker not responding when I tried to navigate to it? The problem was related to how ASP.NET Core binds to ports by default.

Background: Testing ASP.NET Core on CentOS

I ran into my problem the other day while responding to a CentOS-related issue report. To diagnose the problem, I needed to run an ASP.NET Core app on CentOS. Unfortunately, while ASP.NET Core supports CentOS, there are no Docker images with it preinstalled. The Linux Docker images currently provided are based on:

  • Debian
  • Ubuntu
  • Alpine

Also, while you can install CentOS on WSL, it's a lot more complicated than something like Ubuntu, which you can install straight from the Microsoft Store.

This left me with an obvious answer: create my own CentOS Docker image and install ASP.NET Core on it "by hand".

Creating the sample app with a Dockerfile

I started by creating a sample web app using Visual Studio. I could have used the CLI to build the app, but I chose Visual Studio because I knew it would also give me the option to automatically generate the Dockerfile. That would save a few minutes.

I went with ASP.NET Core Web API, used minimal APIs, disabled https, enabled Docker support (Linux) and generated the solution:

[Screenshot: creating the ASP.NET Core Web API project in Visual Studio, with Docker support (Linux) enabled]

This defaults to a Debian-based Dockerfile (the mcr.microsoft.com/dotnet/aspnet:6.0 images are based on Debian unless you choose other tags), which looks like this:

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["WebApplication1.csproj", "."]
RUN dotnet restore "./WebApplication1.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "WebApplication1.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

This Dockerfile uses multi-stage build best practices to ensure your runtime images are as small as possible. It has 4 distinct stages:

  • mcr.microsoft.com/dotnet/aspnet:6.0 AS base. This stage defines the base image that will be used to run your application. It contains the minimal dependencies needed to run your app.
  • mcr.microsoft.com/dotnet/sdk:6.0 AS build. This stage defines the Docker image that will be used to build your application. It includes the full .NET SDK, along with various other dependencies. Your application is actually built in this stage.
  • FROM build AS publish. This stage is used to publish your application.
  • FROM base AS final. The final stage is what you would actually deploy to production. It is based on the base image, but with the publish assets copied in.

Multi-stage builds like this are always best practice when deploying with Docker, but this Dockerfile is more complex than it generally needs to be. It has additional stages to make Visual Studio's "fast mode" quicker when you're developing in Docker images too. If you're only deploying to Docker, not developing in Docker, then you can simplify this file.
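For example, if you don't need Visual Studio's fast mode, a simplified two-stage version might look something like the following. This is just an illustrative sketch (assuming the project is named WebApplication1, as in this example), not the file Visual Studio generates:

```dockerfile
# Build stage: restore, build, and publish in one step
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app/publish

# Runtime stage: only the published output and the ASP.NET Core runtime
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]
```

The trade-off is that combining restore/build/publish into one RUN step gives up some layer caching, but the file is much easier to follow.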

Building an ASP.NET Core image based on CentOS

For my test, I only needed to run the application on CentOS; I didn't need to build it on CentOS, which meant I could leave the build stage as it was, based on Debian. It was only the first stage, base, that I needed to switch to a CentOS-based image.

I started by looking for instructions on how to install ASP.NET Core on CentOS. Each Linux distribution is a little different: some versions use package managers, others use Snap packages, and so on. For CentOS, we can use the yum package manager.

Luckily, installing ASP.NET Core is very simple. You just need to add the Microsoft package repository and then install ASP.NET Core using yum. Starting from the CentOS 7 Docker image, we can build our ASP.NET Core Docker image:

FROM centos:7 AS base

# Add the Microsoft package repository and install ASP.NET Core
RUN rpm -Uvh https://packages.microsoft.com/config/centos/7/packages-microsoft-prod.rpm \
    && yum install -y aspnetcore-runtime-6.0

WORKDIR /app

# ... rest of the Dockerfile as before

With this change to the base image, we can now build and run our sample ASP.NET Core application on CentOS using commands like these:

docker build -t centos-test .
docker run --rm -p 8000:5000 centos-test

Which when you run it and navigate to http://localhost:8000/weatherforecast looks like this:

[Screenshot: the browser failing to load http://localhost:8000/weatherforecast]

Oh dear.

Debugging why the application isn't responding

I didn't expect this result. I assumed it would be a simple case of installing ASP.NET Core, and the app would just work. My first thought was that I'd introduced a bug somewhere that was stopping the application from starting, but the logs written to the console suggested the application was listening:

info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app/

I could also see that the app was listening on the correct port, port 5000. In my docker run command, I specified that Docker should map port 5000 inside the container to port 8000 outside the container, so that looked right too.

At this point I checked the documentation, to make sure I had the 8000:5000 the right way around, and yes, the format is host:container.

It all seemed very strange. Seemingly, the application wasn't receiving the request, but just to be sure, I bumped the logging up to Debug level and tried again:

docker run --rm -p 8000:5000 `
    -e Logging__LogLevel__Default=Debug `
    -e Logging__LogLevel__Microsoft.AspNetCore=Debug `
    centos-test

Sure enough, the logs were more detailed, but there was no indication that any requests were being received.

dbug: Microsoft.Extensions.Hosting.Internal.Host[1]
      Hosting starting
info: Microsoft.AspNetCore.Server.Kestrel[0]
      Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'.
dbug: Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer[1]
      Unable to locate an appropriate development https certificate.
dbug: Microsoft.AspNetCore.Server.Kestrel[0]
      No listening endpoints were configured. Binding to http://localhost:5000 by default.
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://localhost:5000
dbug: Microsoft.AspNetCore.Hosting.Diagnostics[13]
      Loaded hosting startup assembly WebApplication1
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app/
dbug: Microsoft.Extensions.Hosting.Internal.Host[2]
      Hosting started

So at this point, I had two possible scenarios:

  • The app wasn't handling requests at all.
  • The app wasn't being correctly exposed outside of the container.

To test the first case, I decided to exec into the container and curl the endpoint while it was running. This would tell me whether the app was running correctly inside the container, on the expected port. I could have done this from the CLI using docker exec ..., but for simplicity I used Docker Desktop to open a command prompt inside the container and curl the endpoint:

[Screenshot: a Docker Desktop terminal inside the container, curl-ing the endpoint]

Sure enough, curl-ing the endpoint from inside the container (using the container port, 5000) returned the expected data. So the app was working, and it was responding on the correct port. That narrowed down the possible failure modes.

At this point I was running low on options. Luckily, a word in the app's logs suddenly caught my eye and pointed me in the right direction: loopback.

ASP.NET Core URLs: loopback vs. IP address

One of my most popular blog posts (two years after I wrote it) is "5 ways to set the URLs for an ASP.NET Core app". In that post, I describe the various ways you can control which URLs ASP.NET Core binds to on startup, but the relevant section here is the one titled "What URLs can you use?". That section explains that there are essentially three types of URL you can bind to:

  • A "loopback" hostname for IPv4 and IPv6 (e.g. http://localhost:5000), in the format {scheme}://{loopbackAddress}:{port}
  • A specific IP address available on your machine (e.g. http://192.168.8.31:5005), in the format {scheme}://{IPAddress}:{port}
  • "Any" IP address for a given port (e.g. http://*:6264), in the format {scheme}://*:{port}

The "loopback" address is the network address that refers to "the current machine". So if you access http://localhost:5000, you're trying to access port 5000 on the current machine. This is typically what you want when you're developing, and it's the default URL that ASP.NET Core apps bind to. So if you run an ASP.NET Core app locally and navigate to http://localhost:5000 in your browser, everything works, because everything is coming from the same network interface, on the same machine.

However, when you're running in a Docker container, requests don't come from the same network interface. Essentially, you can think of the Docker container as a separate machine. Binding to localhost inside the Docker container means your app is never exposed outside of the container, rendering it pretty useless.

To fix this, you need to ensure your app binds to any IP address, using the {scheme}://*:{port} syntax.

As I mentioned in my previous post, you don't have to use * in this pattern: you can use anything that isn't an IP address or localhost, so you can use http://*:5000, http://+:5000, or http://example.com:5000, etc. All of these behave identically.
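Environment variables are only one of the ways to set the URLs; you can also do it in code in Program.cs. A minimal sketch of a .NET 6 minimal API doing this (the /weatherforecast handler here is just an illustrative stand-in, not the template's actual implementation):

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Bind to port 5000 on any IP address, instead of the
// default loopback-only http://localhost:5000
app.Urls.Add("http://+:5000");

app.MapGet("/weatherforecast", () => new[] { "Sunny", "Rainy", "Cloudy" });

app.Run();
```

Note that values set via environment variables or command-line arguments can still override what you configure here, which is usually what you want for container deployments.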

By binding the ASP.NET Core app to any IP address, the request is "forwarded" by the host to your application to handle. We can set the URL at runtime when we run the Docker image, for example with:

docker run --rm -p 8000:5000 -e DOTNET_URLS=http://+:5000 centos-test

or we can bake it into the Dockerfile, as shown below. This is the complete, final Dockerfile I used:

FROM centos:7 AS base

# Add the Microsoft package repository and install ASP.NET Core
RUN rpm -Uvh https://packages.microsoft.com/config/centos/7/packages-microsoft-prod.rpm \
    && yum install -y aspnetcore-runtime-6.0

# Ensure the app listens on any IP address
ENV DOTNET_URLS=http://+:5000
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["WebApplication1.csproj", "."]
RUN dotnet restore "./WebApplication1.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "WebApplication1.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

With this change, we can rebuild the Docker image and run the app again:

docker build -t centos-test .
docker run --rm -p 8000:5000 centos-test

and finally we can call the endpoint from our browser:

[Screenshot: the browser successfully showing the /weatherforecast response]

So the important lesson here is:

When building your own ASP.NET Core Docker images, make sure you configure your app to bind to any IP address, not just localhost.

Of course, the official .NET Docker images already do this for you, configuring apps to bind to port 80 with ASPNETCORE_URLS=http://+:80.

Summary

In this post, I described a situation where I was trying to build a CentOS Docker image to run ASP.NET Core. I described how I built the image by following the ASP.NET Core installation instructions, but found that my ASP.NET Core app wasn't responding to requests. I walked through my debugging process to try to find the source of the problem, and found that the app was binding to the loopback address. This meant the application was accessible from inside the Docker container, but not from outside it. To fix the problem, I made sure my ASP.NET Core app bound to any IP address, not just localhost.
