Category Archives: ASP.NET Core

Logging and Application Insights with ASP.NET core

When you’re running an ASP.NET Core application in Azure, you’re obviously going to want the ability to capture logs there, which usually means logging to Application Insights.

Adding Logging

Let’s start out by just looking at what we need to do to enable logging from ASP.NET core.

Logging is included by default in the shape of the ILogger interface (ILogger<T>), which we can inject into our code like this (this example uses minimal API)

app.MapGet("/test", (ILogger<Program> logger) =>
{
    logger.LogCritical("Critical Log");
    logger.LogDebug("Debug Log");
    logger.LogError("Error Log");
    logger.LogInformation("Information Log");
    logger.LogTrace("Trace Log");
    logger.LogWarning("Warning Log");
})
.WithName("Test")
.WithOpenApi();

To enable/filter logging we have something like the following within the appsettings.json file

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  }
}

The Default entry under LogLevel sets the minimum logging level for all categories. So, for example, a Default of Information means only logs of Information level and above (i.e. Information, Warning, Error and Critical) are captured.

The Microsoft.AspNetCore entry is a category-specific setting: it applies the supplied log level to anything logged from the Microsoft.AspNetCore namespace. Because we can configure by namespace, we can also use categories such as Microsoft, System and Microsoft.Hosting.Lifetime. We can do the same with our own code, e.g. MyApp.Controllers, which allows us to tailor what gets captured in the logs for different sections of our application.
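
As a rough sketch, the same category-based filtering can also be applied in code in Program.cs (the MyApp.Controllers category below is purely illustrative):

builder.Logging.AddFilter("Microsoft.AspNetCore", LogLevel.Warning);
// Hypothetical category covering our own controllers
builder.Logging.AddFilter("MyApp.Controllers", LogLevel.Debug);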

Logging Levels

The various logging levels are as follows

  • LogLevel.Trace: The most detailed level, use for debugging and tracing (useful for entering/exiting methods and logging variables).
  • LogLevel.Debug: Detailed but less so than Trace (useful for debugging and workflow logging).
  • LogLevel.Information: Information messages at a higher level than the previous two levels (useful for logging steps of processing code).
  • LogLevel.Warning: Indicates potential problems that do not warrant error level logging.
  • LogLevel.Error: Use for logging errors and exceptions and other failures.
  • LogLevel.Critical: Critical issues that may cause an application to fail, such as those that might crash your application. Could also include things like missing connection strings etc.
  • LogLevel.None: Essentially disables logging

Application Insights

Once you’ve created an Azure resource group and/or Application Insights service, you’ll be able to copy the connection string to connect to Application Insights from your application.

Before we can use Application Insights in our application we’ll need to

  • Add the nuget package Microsoft.ApplicationInsights.AspNetCore to our project
  • Add the ApplicationInsights section to the appsettings.json file, something like this
    "ApplicationInsights": {
      "InstrumentationKey": "InstrumentationKey=xxxxxx",
      "LogLevel": {
        "Default": "Information",
        "Microsoft": "Warning"
      }
    },
    

    We can obviously set the InstrumentationKey in code if preferred, but the LogLevel here is specific to what is captured within Application Insights

  • Add the following to the Program.cs file below CreateBuilder

    var configuration = builder.Configuration;
    
    builder.Services.AddApplicationInsightsTelemetry(options =>
    {
        options.ConnectionString = configuration["ApplicationInsights:InstrumentationKey"];
    });
    

Logging Providers in code

We can also add logging via code, so for example after the CreateBuilder line in Program.cs we might have

builder.Logging.ClearProviders();
builder.Logging.AddConsole();
builder.Logging.AddDebug(); 

In the above we start by clearing all currently configured logging providers, then we add providers for logging to the console and to debug output. The appsettings.json log levels still control which logs we wish to capture.
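
Tying this back to Application Insights: when AddApplicationInsightsTelemetry is registered (as above), the Application Insights logger provider is added for us, and as a hedged sketch we can also filter what that specific provider captures in code, for example

using Microsoft.Extensions.Logging.ApplicationInsights;

// Only send Warning (and above) from the Microsoft categories to Application Insights
builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("Microsoft", LogLevel.Warning);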

Using secrets in your appsettings.json via Visual Studio 2022 and dotnet CLI

You’ve got yourself an appsettings.json file for your ASP.NET core application and you’re using sensitive data, such as passwords or other secrets. Now you obviously don’t want to commit those secrets to source control, so you’re not going to want to store these values in your appsettings.json file.

There are several ways to achieve this. One of those is to use the Visual Studio 2022 “Manage User Secrets” option, which is on the context menu of your project file. There’s also the ability to use the dotnet CLI for this, as we’ll see later.

This context menu option will create a secrets.json in %APPDATA%\Microsoft\UserSecrets\{Guid}. The GUID is stored within your .csproj in a PropertyGroup like this

<UserSecretsId>0e6abf63-deda-47fc-9a80-1cb56abaeead</UserSecretsId>

So the secrets file can be used like this

{
  "ConnectionStrings:DefaultConnection": "my-secret"
}

and this will map to your appsettings.json, that might look like this

{
  "ConnectionStrings": {
    "DefaultConnection": "not set"
  }
}

Now we can access the configuration in the usual way, for example

builder.Configuration.AddUserSecrets<Program>();

var app = builder.Build();
var connectionString = app.Configuration.GetSection("ConnectionStrings:DefaultConnection");
var defaultConnection = connectionString.Value;
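
Alternatively, and this is just the standard configuration API rather than anything secrets-specific, we can read the same value with GetConnectionString

var defaultConnection = app.Configuration.GetConnectionString("DefaultConnection");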

When somebody else clones your repository they’ll need to recreate the secrets file, for example using dotnet user-secrets

dotnet user-secrets set "ConnectionStrings:DefaultConnection" "YourConnectionString"

and you can list the secrets using

dotnet user-secrets list

Disable the Kestrel server header

We generally don’t want to expose information about the server we’re running our ASP.NET core application on.

In the case of Kestrel we can disable the server header using

var builder = WebApplication.CreateBuilder(args); 

builder.WebHost.UseKestrel(options => 
   options.AddServerHeader = false);

Azure web app with IIS running ASP.NET core/Kestrel

When you deploy your ASP.NET Core (.NET 8) application to an Azure web app, you’ll likely have created the app to work with Kestrel (so you can deploy to pretty much any environment). But when you deploy as an Azure Web App, you’re essentially deploying to an IIS application.

So we need IIS to simply proxy across to our Kestrel app. We achieve this by adding a Web.config to the root of our published app, with configuration such as below

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="." inheritInChildApplications="false">
    <system.webServer>
      <handlers>
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      </handlers>
      <aspNetCore processPath=".\MyAspNetCoreApp.exe" stdoutLogEnabled="false" stdoutLogFile="\\?\%home%\LogFiles\stdout" hostingModel="inprocess" />
    </system.webServer>
  </location>
</configuration>

Is my ASP.NET core application running in Kubernetes (or Docker) ?

I came across a situation where I needed to change some logic in my ASP.NET Core application dependent upon whether it was running inside Kubernetes or an Azure Web app. Luckily there is an environment variable that we can use (and whilst we’re at it, there’s one for Docker as well)

var isHostedInKubernetes = 
   Environment.GetEnvironmentVariable("KUBERNETES_SERVICE_HOST") != null;
var isHostedInDocker = 
   Environment.GetEnvironmentVariable("DOTNET_RUNNING_IN_CONTAINER") == "true";

SignalR and React

I haven’t touched SignalR in a while. I wanted to see how to work with SignalR in a React app.

Let’s start by creating a simple ASP.NET Core Web API server

  • Create an ASP.NET Core Web API application, I’m going to use minimal API
  • Add the NuGet package Microsoft.AspNetCore.SignalR.Client (strictly speaking, the server-side SignalR bits ship in the ASP.NET Core shared framework, so this package is only needed if you also want to connect from a .NET client)
  • Add the following to the Program.cs
    builder.Services.AddSignalR();
    builder.Services.AddCors();
    
  • We’ve added CORS support as we’re going to need this for testing locally; we’ll also need the following code
    app.UseCors(options =>
    {
      options.AllowAnyHeader()
        .AllowAnyMethod()
        .AllowCredentials()
        .SetIsOriginAllowed(origin => true);
    });
    

Before we can use SignalR we’ll need to add a hub. I’m going to add a file NotificationHub.cs with the following code

public class NotificationHub : Hub;
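
That empty hub is all we need here, because the server only pushes messages out via IHubContext. As a sketch (the Broadcast method name is hypothetical), a hub can also expose methods that connected clients invoke directly

public class NotificationHub : Hub
{
    // Hypothetical method a client could invoke; it simply relays the message to all connected clients
    public async Task Broadcast(string message) =>
        await Clients.All.SendAsync("NotifyMe", message);
}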

Now return to Program.cs and add the following

  • We need to map our hub into the application using
    app.MapHub<NotificationHub>("/notifications");
    
  • Finally let’s map an endpoint to allow us to send messages via Swagger to our clients. After the line above, add the following
    app.MapGet("/test", async (IHubContext<NotificationHub> hub, string message) =>
      await hub.Clients.All.SendAsync("NotifyMe",$"Message: {message}"));
    

Now we need a client, so create yourself a React application (I’m using TypeScript as usual with mine).

  • Add the package @microsoft/signalr, i.e. from yarn yarn add @microsoft/signalr
  • In the App.tsx we’re going to create the HubConnectionBuilder against our ASP.NET Core API server. We’ll then start the connection and finally watch for messages on the “NotifyMe” name as previously set up in the ASP.NET app. The code looks like this
    import { useEffect, useState } from 'react';
    import { HubConnectionBuilder } from '@microsoft/signalr';
    
    function App() {
      const [message, setMessage] = useState("");
    
      useEffect(() => {
        const connection = new HubConnectionBuilder()
          .withUrl("http://localhost:5021/notifications")
          .build();
      
        connection.start();  
        connection.on("NotifyMe", data => {
          setMessage(data);
        });
      }, [])  
    
      return (
        <div className="App">
          {message}
        </div>
      );
    }
    

Make sure you start the ASP.NET server first, then start your React application. Now from the Swagger page we can send messages into the server and out to the React clients connected to SignalR.

Messing around with MediatR

MediatR is an implementation of the Mediator pattern. It doesn’t match the pattern exactly, but as its creator, Jimmy Bogard, states: “It matches the problem description (reducing chaotic dependencies), the implementation doesn’t exactly match…”. It’s worth reading his post You Probably Don’t Need to Worry About MediatR.

This pattern is aimed at decoupling the likes of business logic from a UI layer or request/response handling.

There are several ways we can already achieve this in our code. For example, we can use interfaces to decouple the business logic from the UI or API layers as “services”, as we’ve probably all done for years; the only drawback of this approach is that it requires the interfaces to be passed around in our code or supplied via DI, but it’s still a perfectly good way to do things. Another approach, often used in UIs built with WPF, Xamarin Forms, MAUI and others, is an in-process message queue that sends messages around our application telling it to undertake some task, and this is essentially what MediatR gives us.

Let’s have a look at using MediatR. I’m going to create an ASP.NET web API (obviously you could use MediatR in other types of solutions)

  • Create an ASP.NET Core Web API. I’m using Minimal API, so feel free to check that or stick with controllers as you prefer.
  • Add the nuget package MediatR
  • To the Program.cs file add
    builder.Services.AddMediatR(cfg => 
      cfg.RegisterServicesFromAssembly(typeof(Program).Assembly));
    

At this point we have MediatR registering our handlers for us at startup. We can pass multiple assemblies using the RegisterServicesFromAssemblies method, so if our request/response code lives across several assemblies we can supply just those assemblies. Obviously this makes our life simpler, but at the cost of reflecting across our code at startup.
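
For example, here’s a sketch of registering handlers from more than one assembly (the second type name is purely hypothetical)

builder.Services.AddMediatR(cfg =>
  cfg.RegisterServicesFromAssemblies(
    typeof(Program).Assembly,
    typeof(SomeHandlerInAnotherAssembly).Assembly)); // hypothetical type from another project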

The ASP.NET Core Web API creates the WeatherForecast example, we’ll just use this for our sample code as well.

The first thing you’ll notice is that the route to the weatherforecast is tightly coupled to the sample code. Of course it’s an example, so this is fine, but we’re going to clean things up here and move the implementation into a file named GetWeatherForecastHandler, but before we do that…

Note: Of course we could just move the weather forecast code into a WeatherForecastService, create an IWeatherForecastService interface and there’s no reason not to do that; MediatR just offers an alternative way of doing things.

MediatR will try to find a matching handler for your request. In this example we have no request parameters, which begs the question as to how MediatR will match against our GetWeatherForecastHandler. It needs a unique request type to map to our handler, so in this case the simplest thing to do is create the request type yourself. Mine’s named GetWeatherForecast and looks like this

public record GetWeatherForecast : IRequest<WeatherForecast[]>
{
    public static GetWeatherForecast Default { get; } = new();
}

Note: I’ve created a static property so we’re not creating an instance for every call; however this is not required, and obviously when you are passing parameters you will be creating an instance of the type each time. This does concern me a little if we need high performance and are trying to write allocation-free code, but then we’d do lots of things differently, probably including not using MediatR.

Now we’ll create the GetWeatherForecastHandler file and the code looks like this

public class GetWeatherForecastHandler : IRequestHandler<GetWeatherForecast, WeatherForecast[]>
{
  private static readonly string[] Summaries = new[]
  {
    "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
  };

  public Task<WeatherForecast[]> Handle(GetWeatherForecast request, CancellationToken cancellationToken)
  {
    var forecast = Enumerable.Range(1, 5).Select(index =>
      new WeatherForecast
      {
        Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
        TemperatureC = Random.Shared.Next(-20, 55),
        Summary = Summaries[Random.Shared.Next(Summaries.Length)]
      })
    .ToArray();

    return Task.FromResult(forecast);
  }
}

At this point we’ve created a way for MediatR to find the required handler (i.e. using the GetWeatherForecast type) and we’ve created a handler to create the response. In this example we’re not doing any async work, so we just wrap the result in a Task.FromResult.

Next go back to Program.cs or, if you’ve used controllers, go to your controller. If using a controller you’ll need the constructor to take an IMediator mediator parameter and assign it to a readonly field in the usual way.

For our minimal API example, go back to the Program.cs file remove the summaries variable/code and then change the route code to look like this

app.MapGet("/weatherforecast",  (IMediator mediator) => 
  mediator.Send(GetWeatherForecast.Default))
.WithName("GetWeatherForecast")
.WithOpenApi();

We’re not really playing too nice in the code above, in that we’re not returning result codes, so let’s add some basic result handling

app.MapGet("/weatherforecast",  async (IMediator mediator) => 
  await mediator.Send(GetWeatherForecast.Default) is { } results
    ? Results.Ok(results) 
    : Results.NotFound())
  .WithName("GetWeatherForecast")
  .WithOpenApi();

Now for each new endpoint we would create a request type and a handler. In this case we send no parameters but, as you can no doubt see, for a request that takes (for example) a string for your location, we’d create a specific type wrapping that parameter and the handler would then be mapped to that request type.
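
As a sketch of that idea (the location-based request, handler and route below are hypothetical, not part of the template), it might look something like this

public record GetWeatherForecastForLocation(string Location) : IRequest<WeatherForecast[]>;

public class GetWeatherForecastForLocationHandler
  : IRequestHandler<GetWeatherForecastForLocation, WeatherForecast[]>
{
  public Task<WeatherForecast[]> Handle(GetWeatherForecastForLocation request, CancellationToken cancellationToken)
  {
    // request.Location is available here to look up or generate a forecast for that location
    // (elided; we return an empty array just to keep the sketch compiling)
    return Task.FromResult(Array.Empty<WeatherForecast>());
  }
}

// and in Program.cs, a route that sends the parameterised request
app.MapGet("/weatherforecast/{location}", (IMediator mediator, string location) =>
  mediator.Send(new GetWeatherForecastForLocation(location)))
.WithName("GetWeatherForecastForLocation")
.WithOpenApi();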

In our example we used the MediatR Send method. This sends a request to a single handler and expects a response of some type, but MediatR also has the ability to Publish to multiple handlers. These handlers are different: firstly they need to implement the INotificationHandler interface, and secondly no response is expected when using Publish. These sorts of handlers are more like event broadcasts, so you might use them to notify an email service or database code, which then sends out an email or updates a database.

Our WeatherForecast sample doesn’t give me any good ideas for using Publish in its current setup, so let’s just assume we have a way to set the current location. As I said, this example’s a little contrived as we’re going to essentially set the location for everyone connecting to this service, but you get the idea.

We’re going to add a SetLocation request type that looks like this

public record SetLocation(string Location) : INotification;

Notice that for Publish our type implements the INotification interface. Our handlers look like this (my file is named SetLocationHandler.cs but I’ll put both handlers in there just to be a little lazy)

public class UpdateHandler1 : INotificationHandler<SetLocation>
{
  public Task Handle(SetLocation notification, CancellationToken cancellationToken)
  {
    Console.WriteLine(nameof(UpdateHandler1));
    return Task.CompletedTask;
  }
}

public class UpdateHandler2 : INotificationHandler<SetLocation>
{
  public Task Handle(SetLocation notification, CancellationToken cancellationToken)
  {
    Console.WriteLine(nameof(UpdateHandler2));
    return Task.CompletedTask;
  }
}

As you can see, the handlers need to implement INotificationHandler with the correct request type. In this sample we’ll just write messages to console, but you might have a more interesting set of handlers in mind.

Finally let’s add the following to the Program.cs to publish a message

app.MapGet("/setlocation", (IMediator mediator, string location) =>
  mediator.Publish(new SetLocation(location)))
.WithName("SetLocation")
.WithOpenApi();

When you run up your server and use Swagger, or call the setlocation method via its URL, you’ll see that all the handlers for that notification get called.

Of course we can also Send and Publish messages/requests from within our handlers, so maybe we get the weather forecast data and then publish a message for some logging system to update the logs.
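
A sketch of that idea (injecting IPublisher into the handler and the ForecastRequested notification type are assumptions, not part of the original sample) might rework the earlier handler along these lines

public record ForecastRequested(int Count) : INotification; // hypothetical notification

public class GetWeatherForecastHandler : IRequestHandler<GetWeatherForecast, WeatherForecast[]>
{
  private readonly IPublisher _publisher;

  public GetWeatherForecastHandler(IPublisher publisher) => _publisher = publisher;

  public async Task<WeatherForecast[]> Handle(GetWeatherForecast request, CancellationToken cancellationToken)
  {
    var forecast = Array.Empty<WeatherForecast>(); // build the forecast as before (elided)

    // Publish a notification so other handlers (logging, auditing etc.) can react
    await _publisher.Publish(new ForecastRequested(forecast.Length), cancellationToken);

    return forecast;
  }
}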

MediatR also includes the ability to stream results from a request, where our request type implements the IStreamRequest interface and our handlers implement IStreamRequestHandler.

If we create a simple request type but this one implements IStreamRequest for example

public record GetWeatherStream : IStreamRequest<WeatherForecast>;

and now add a handler which implements IStreamRequestHandler, something like this (which delays just to give a feel of getting data from somewhere else)

using System.Runtime.CompilerServices;

public class GetWeatherStreamHandler : IStreamRequestHandler<GetWeatherStream, WeatherForecast>
{
  public async IAsyncEnumerable<WeatherForecast> Handle(GetWeatherStream request, 
    [EnumeratorCancellation] CancellationToken cancellationToken)
  {
    var index = 0;
    while (!cancellationToken.IsCancellationRequested)
    {
      await Task.Delay(500, cancellationToken);
      yield return new WeatherForecast
      {
        Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
        TemperatureC = Random.Shared.Next(-20, 55),
        Summary = Data.Summaries[Random.Shared.Next(Data.Summaries.Length)]
      };

      index++;
      if(index > 10)
        break;
    }
  }
}

Finally we can declare our streaming route using Minimal API very simply, for example

app.MapGet("/stream", (IMediator mediator) =>
  mediator.CreateStream(new GetWeatherStream()))
.WithName("Stream")
.WithOpenApi();

ASP.NET core and Ingress rules

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

You’ve implemented a service using ASP.NET Core and deployed it to Kubernetes and all worked great. You then deploy a front end that uses that service (as per the example in the Project Tye repo) and again all works well. Whilst the Ingress mapped the path / to your front end service, the CSS and JS libs all worked fine, but then you change your Ingress route to (for example)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: default
spec:
  rules:
    - http:
        paths:
          - path: /frontend
            pathType: Prefix
            backend:
              service: 
                name: frontend
                port: 
                  number: 80

In the above, the rule for the path /frontend will run the frontend service. All looks good, so you navigate to http://your-server-ip/frontend and… wait a moment. The front end and backend services are working, i.e. you see some HTML and you see results from the backend service, but Edge/Chrome/whatever reports 404s for bootstrap and your CSS.

The simplest solution, albeit with the downside that you are putting knowledge of the deployment route into your front end service, is to just add the following to Startup.cs

app.UsePathBase("/frontend");

Obviously if you’re using tye or your own environment configuration, you might prefer to get the “/frontend” string from the environment configuration instead of hard coding it.
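
For example, a minimal sketch assuming a hypothetical "PathBase" configuration key

var pathBase = Configuration["PathBase"]; // e.g. "/frontend", supplied by tye or environment configuration
if (!string.IsNullOrEmpty(pathBase))
{
    app.UsePathBase(pathBase);
}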

Project Tye

In the last few posts I’ve been doing a lot of work with ASP.NET Core services and clients within Kubernetes. Whilst you’ve seen it’s not too hard to turn services and clients into Docker containers/images, push them to the local registry and then deploy using Kubernetes scripts, after a while you’re likely to find this tedious and want to wrap everything into a shell/batch script – an alternative is to use Project Tye.

What is Project Tye?

I recommend checking out the Project Tye repository and documentation on GitHub.

The basics are that Project Tye can be used to take .NET projects, turn them into Docker images and generate the deployments to k8s with a single command. Project Tye also allows us to undeploy with a single command.

Installing Project Tye

I’m using a remote Ubuntu server to run my Kubernetes cluster, so we’ll need to ensure that the .NET 3.1 SDK is installed (hopefully Tye will work with 5.0 in the near future, but for the current release I needed .NET 3.1.x installed).

To check your current list of SDKs run

dotnet --list-sdks

Next up you need to run the dotnet tool to install Tye, using

dotnet tool install -g Microsoft.Tye --version "0.7.0-alpha.21279.2"

Obviously change the version to whatever the latest build is – that was the latest available as of 6th June 2021.

The tool will be deployed to

  • Linux – $HOME/.dotnet/tools
  • Windows – %USERPROFILE%\.dotnet\tools

Running Project Tye

It’s as simple as running the following command in your solution folder

tye deploy --interactive

This is the interactive version, and you’ll be prompted to supply the registry you wish to push your Docker images to. As we’re using localhost:32000, remember to set that as your registry. Better still, we can create a tye.yaml file with configuration for Project Tye within the solution folder; here’s an example

name: myapp
registry: localhost:32000
services:
- name: frontend
  project: frontend/razordemo.csproj
- name: backend
  project: backend/weatherservice.csproj

Now with this in place we can just run

tye deploy

If you want to create a default tye.yaml file then run

tye init

Project Tye will now build our Docker images, push them to localhost:32000 and then generate deployments, services etc. within Kubernetes based upon the configuration. Check out the JSON schema for the Tye configuration file tye-schema.json for all the current options.

Now you’ve deployed everything and it’s up and running, but Tye also includes environment configurations, for example

env:
  - name: DOTNET_LOGGING__CONSOLE__DISABLECOLORS
    value: 'true'
  - name: ASPNETCORE_URLS
    value: 'http://*'
  - name: PORT
    value: '80'
  - name: SERVICE__RAZORDEMO__PROTOCOL
    value: http
  - name: SERVICE__RAZORDEMO__PORT
    value: '80'
  - name: SERVICE__RAZORDEMO__HOST
    value: razordemo
  - name: SERVICE__WEATHERSERVICE__PROTOCOL
    value: http
  - name: SERVICE__WEATHERSERVICE__PORT
    value: '80'
  - name: SERVICE__WEATHERSERVICE__HOST
    value: weatherservice

Just add the following NuGet package to your project(s)

<PackageReference Include="Microsoft.Tye.Extensions.Configuration" Version="0.2.0-*" />

and then you can interact with the configuration using the TyeConfigurationExtensions classes from that package. For example using the following

client.BaseAddress = Configuration.GetServiceUri("weatherservice");
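
For example, a sketch of wiring this into a named HttpClient within Startup.ConfigureServices (assuming the weatherservice name from the tye.yaml above)

services.AddHttpClient("weatherservice", client =>
{
    client.BaseAddress = Configuration.GetServiceUri("weatherservice");
});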

Ingress

You can also include ingress configuration within your tye.yaml, for example

ingress: 
  - name: ingress  
    # bindings:
    #   - port: 8080
    rules:
      - path: /
        service: razordemo

However, as an Ingress might be shared across services/applications, it will not be removed by the undeploy command (so as not to affect other applications that may be using it). You can force it to be undeployed using

kubectl delete -f https://aka.ms/tye/ingress/deploy

See Ingress for more information on Tye and Ingress.

Dependencies

Along with our solution/projects we can also deploy dependencies as part of the deployment, for example if we need to also deploy a redis cache, dapr or other images. Just add the dependency to the tye.yaml like this

- name: redis
  image: redis
  bindings:
    - port: 6379
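
With a dependency like this in place, a sketch of consuming it from our code (assuming the default binding) is to read the connection string that Tye injects

// "redis" matches the dependency name in tye.yaml
var redisConnectionString = Configuration.GetConnectionString("redis");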

Deploying an ASP.NET core application into a Docker image within Kubernetes

In the previous post we looked at an “off the shelf” image of nginx, which we deployed to Kubernetes and were able to access externally using Ingress. This post follows on from that one, so do refer back to it if you have any issues with the following configurations etc.

Let’s look at the steps for deploying our own Docker image to k8s and better still let’s deploy a dotnet core webapp.

Note: Kubernetes is deprecating its support for Docker as a container runtime; however, this does not mean we cannot deploy Docker images, just that we need to use the Docker shim or generate containerd (or other container runtime) images.

The App

We’ll create a standard ASP.NET Core Razor application, which you can obviously change as you wish, but we’ll take the default implementation, turn it into a Docker image and then deploy it to k8s.

So to start with…

  • Create a .NET core Razor application (mine’s named razordemo), you can do this from Visual Studio or using
    dotnet new webapp -o razordemo --no-https -f net5.0
    
  • If you’re running this on a remote machine don’t forget to change launchSettings.json localhost to 0.0.0.0 if you need to.
  • Run dotnet build

It’d be good to see this is all working, so let’s run the demo using

dotnet run

Now use your browser to access http://your-server-ip:5000/ and you should see the Razor demo home page, or use curl to see if you get valid HTML returned, i.e.

curl http://your-server-ip:5000

Generating the Docker image

Note: If you changed launchSettings.json to use 0.0.0.0, reset this to localhost.

Here’s the Dockerfile for building an image, it’s basically going to publish a release build of our Razor application then set up the image to run the razordemo.dll via dotnet from a Docker instance.

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY razordemo.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "razordemo.dll"]

Now run docker build using the following

docker build . -t razordemo --no-cache

If you want to check the image works as expected then run the following

docker run -d -p 5000:80 razordemo 

Now check the image is running okay by using curl as we did previously. If all worked you should see the Razor demo home page again, but now we’re seeing it within the Docker instance.

Docker in the local registry

Next up, we want to deploy this docker image to k8s.

k8s will try to get an image from a remote registry and we don’t want to deploy this image outside of our network, so we need to rebuild the image slightly differently, using

docker build . -t localhost:32000/razordemo --no-cache

Reference: see Using the built-in registry for more information on the built-in registry.

Before going any further, I’m using Ubuntu and micro8s, so will need to enable the local registry using

microk8s enable registry

I can’t recall if this is required, but I also enabled k8s DNS using

microk8s.enable dns

Find the image ID for our generated image using

docker images

Now use the following commands, where {the image id} was the one found from the above command

docker tag {the image id} localhost:32000/razordemo
docker push localhost:32000/razordemo

The configuration

This is a configuration based upon my previous post (the file is named razordemo.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  selector:
    matchLabels:
      run: webapp
  replicas: 1
  template:
    metadata:
      labels:
        run: webapp
    spec:
      containers:
      - name: webapp
        image: localhost:32000/razordemo
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    run: webapp
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: webapp
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: razor-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp
            port: 
              number: 80

Now apply this configuration to k8s using (don’t forget to change the file name to whatever you named your file)

kubectl apply -f ./razordemo.yaml

Now we should be able to check the deployed image, either by using the k8s dashboard or by running

kubectl get ep webapp

Note the endpoint and curl to it; if all worked well you should see the generated ASP.NET home page HTML and, better still, be able to access http://your-server-ip from another machine and see the web page.