Checking who last logged in and when on Ubuntu Server

You might want to see who logged into your Ubuntu server last, or over a period of time. Or maybe you want more than just that and to see more of an audit trail…

last

We can use the command last to display a “listing of last logged in users”. For example, let’s just get the tail of the list using

last | tail

or of course we might simply look through all entries using

last | less

What if we know the user we’re interested in? Then we can simply use

last -x Bob

Replace Bob with the username you wish to look for. Note that the -x flag also includes system shutdown and runlevel change entries; a plain last Bob works if you only want that user’s logins.

There are options to list entries after a specific time or before one, and more – see

last --help

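For example, to restrict the listing to a time window – assuming the util-linux version of last, which supports the --since and --until options (older versions may only have the short -s and -t forms)

last --since yesterday --until now
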
What if we want even more information?

/var/log/auth.log

last works well if you’re interested in users logging into the server, but what if we just want to check who’s accessed a shared drive or the like? Then we can check /var/log/auth.log like this

sudo cat /var/log/auth.log
# or
sudo less /var/log/auth.log
# or
sudo tail /var/log/auth.log

As per standard piping of commands, I’ve included examples using less and tail, but basically we want to list the contents of the auth.log file. Of course, as it’s a file, we can grep it (or whatever) as required.

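For example, to pull out just the sshd entries (a minimal example – adjust the pattern as required)

sudo grep sshd /var/log/auth.log
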
This auth.log file lists sessions being opened and closed, cron jobs and lots more besides.

Ubuntu server auto upgrade

I have an Ubuntu server happily living its life as a NAS (and more) server. Every now and then (if I forget to keep track of updates) it runs out of space on /boot, which makes removing old releases or completing partial updates a bit of a problem. So how do we turn automatic upgrades on or off?

The configuration for this is stored in the file 20auto-upgrades. So we can edit the file using

sudo nano /etc/apt/apt.conf.d/20auto-upgrades

Within the file you’ll see something like this

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

Note: The “1” values in the above are not True/False but the minimum number of days between runs. So we could run weekly by changing the value from “1” to “7”. A value of “0” will disable the feature.

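For example, to check weekly instead

APT::Periodic::Update-Package-Lists "7";
APT::Periodic::Unattended-Upgrade "7";
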
You can create the file yourself or run the following command to create it (running the command against an existing file can turn the feature on or off, but that’s all)

sudo dpkg-reconfigure -plow unattended-upgrades

If you want to change the frequency of the checks for updates you’ll need to edit 20auto-upgrades manually (as already discussed).

References

AutomaticSecurityUpdates
UnattendedUpgrades

Fingerprint and biometrics authentication in Xamarin Forms

This is something I’ve wanted to try for a while, and there’s a NuGet package that will allow us to enable and use biometric authentication with very little effort – much of this post will end up covering the GitHub README at Biometric / Fingerprint plugin for Xamarin, so I strongly recommend checking that out.

Create a sample application

Let’s create a new Xamarin Forms application to test this out. So follow these steps to get up and running…

  • In Visual Studio create a new project – Mobile App (Xamarin.Forms)
  • At the solution level, right mouse click in Visual Studio 2019 and select Manage NuGet Packages for Solution
  • Browse for Plugin.Fingerprint by Sven-Michael Stübe
  • Click on the package then check each of your projects, shared and platform specific. We need to add the plugin to all projects, then click Install
  • In the Android MainActivity.cs file’s OnCreate method, after Xamarin.Essentials.Platform.Init(this, savedInstanceState); add
    CrossFingerprint.SetCurrentActivityResolver(
       () => Xamarin.Essentials.Platform.CurrentActivity);
    
  • In the Android manifest, add the required permission USE_FINGERPRINT
  • In the iOS project, open the Info.plist in code (F7) and add the following
    <key>NSFaceIDUsageDescription</key>
    <string>Use your face to authenticate</string>
    

    Of course the string can be whatever you want.

Now that we’ve got the project and configuration set up, you’ll want some popup, page or just a button on your MainPage.xaml to initiate the fingerprint/biometrics login. For now let’s just add a Button to MainPage.xaml and, for brevity, just a Clicked handler, so for example

<Button Clicked="Button_OnClicked" Text="Authenticate with Biometrics" />

and here’s the code within the code behind for Button_OnClicked

private async void Button_OnClicked(object sender, EventArgs e)
{
   if (await CrossFingerprint.Current.IsAvailableAsync(true))
   {
      var result = await CrossFingerprint.Current.AuthenticateAsync(
         new AuthenticationRequestConfiguration("Login", "Access your account"));
      if (result.Authenticated)
      {
         await DisplayAlert("Success", "Authenticated", "OK");
      }
      else
      {
         await DisplayAlert("Failure", "Not Authenticated", "OK");
      }
   }
   else
   {
      await DisplayAlert("Failure", "Biometrics not available", "OK");
   }
}

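Note: the snippet above assumes the relevant using directives are in place; for this plugin these should be (worth double-checking against the plugin’s README)

using Plugin.Fingerprint;
using Plugin.Fingerprint.Abstractions;
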
We begin by checking whether biometrics are available; passing in true allows fallback to PIN authentication. Assuming biometrics are available, we then display the authentication mechanism using AuthenticateAsync, passing in configuration that, in this case, supplies some text for the fingerprint popup. If we’re authenticated then we display an alert to show success, in this example, but of course you’ll handle success and failure as needed by your application.

Testing in the Android emulator

To test this application in the Android emulator

  • Go to the Settings within the Android OS and select Security
  • Under Device Security select Screen lock and add a pin
  • Under Device Security select Fingerprint and add a fingerprint. To actually add a fingerprint, click the … on the emulator and select Fingerprint, then click the Touch the Sensor button twice – you’ll see the fingerprint dialog go 50% of the way, then 100% on the second click. Finally click Done

Once we’ve set up the security on the emulator and supplied one or more fingerprints, run up your Xamarin Forms application and click the button you added. You’ll be presented with the AuthenticationRequestConfiguration you added; again using the … button on the emulator (if you closed the Extended controls dialog), select Fingerprint and click Touch the Sensor – this basically emulates a fingerprint touching the sensor.

To test for success, ensure the fingerprint selected is one you added, i.e. by default Finger 1; to test for failure, simply select one of the other, non-configured fingers and click Touch the Sensor.

Testing in the iOS simulator

The setup for testing using the iOS simulator is a little simpler than Android…

  • Open the simulator and (in the latest Xcode – I’m using 13, but basically 12.x and above) select Features | Touch ID or Face ID (whichever is available on your simulator) and check the Enrolled option to show a tick (untick to remove the feature).

Now when you click your authentication button in your Xamarin Forms application, you may be presented with the dialog asking to allow the permission to be used; once you’ve accepted this you won’t see it again. Next you’ll see a small grey square which denotes your Face ID authentication (or for Touch ID you’ll get the fingerprint dialog). From the simulator’s Features menu and the Face ID submenu, select Matching Face to simulate a successful authentication or Non-matching Face for a failure. For Touch ID simulators, select Matching Touch for a successful authentication or Non-matching Touch for a failure.

Code

Code for this post is available on GitHub.

References

Biometric / Fingerprint plugin for Xamarin
Enrolling a Fingerprint

Change the colour of the status bar on Android (in Xamarin Forms)

In your Android project’s values folder, the styles.xml file will contain something like

<style name="MainTheme" parent="MainTheme.Base">
   <!-- -->
</style>

and/or

<style name="MainTheme.Base" parent="Theme.AppCompat.Light.DarkActionBar">
   <!-- -->
</style>

Use the following element and attribute in MainTheme.Base if it exists, otherwise in MainTheme (whether one or both exist will depend on the template you used)

<!-- Top status bar area on Android -->
<item name="android:statusBarColor">#FF0C1436</item> 

You may also wish to change the colour of the bar background at the top of the NavigationPage (if you’re using a NavigationPage, such as with Prism) by adding the following to App.xaml

<Application.Resources>
  <ResourceDictionary>
    <Style TargetType="NavigationPage">
      <Setter Property="BarBackgroundColor" Value="Color.Red"/>
      <Setter Property="BarTextColor" Value="Color.White"/>
    </Style>
  </ResourceDictionary>
</Application.Resources>

In a non-Prism application you can change the Primary colour, for example again in App.xaml

<Application.Resources>
  <ResourceDictionary>
    <Color x:Key="Primary">#FF3399FF</Color>
  </ResourceDictionary>
</Application.Resources>

Adding certificates to the Java cacerts (or fixing PKIX path issue)

I’m back on some Java coding after a fair time away and was getting the old PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException error.

Basically Java is complaining that it doesn’t recognise the HTTPS SSL certificate of the Maven repository (in this case one hosted in Artifactory). Here are the steps to resolve this…

Note: Instructions are on Windows using Chrome, but should be similar in different browsers.

  • Open the HTTPS repository in Chrome (or preferred web browser)
  • Use the dev tools (ctrl+shift+i) and select the Security tab
  • Click on the View certificate button
  • Select the Details tab
  • Click the Copy to file… button
  • Click Next until you see the format selector; I used the DER format
  • Click Next etc. and save the exported cert to your hard drive

Let’s assume we saved the file as mycert.cer; now we need to import this into the cacerts keystore using the keytool…

  • Go to the location of the JDK/JRE you’re using, for example C:\Program Files\Java\jdk1.8.0_101\jre\lib\security
  • Open a command prompt and type
    keytool -import -alias mycert -keystore "C:\Program Files\Java\jdk1.8.0_101\jre\lib\security\cacerts" -file mycert.cer
    

    Replace the first occurrence of mycert with a unique name (alias) for your certificate, and then obviously mycert.cer is replaced with the name of the certificate file you saved.

  • You’ll be asked for a password; the default is changeit – obviously if this has been changed then use that
  • Type yes when prompted if you want to proceed

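If you want to verify the import, you can list the entry back out (assuming the same alias and keystore path as above)

keytool -list -keystore "C:\Program Files\Java\jdk1.8.0_101\jre\lib\security\cacerts" -alias mycert
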
That’s it – the certificate should now be available to Java.

React router dom direct URL or refresh results in 404

When using React we’re writing a SPA, and when using the React Router we actually need all pages/URLs to go through App.tsx and the React Router. If everything is not set up correctly, you’ll find that when you navigate to a page off of the root and refresh, or just try to navigate via a direct URL, you may end up with a 404 in production, even though everything worked in dev.

Note: Obviously we do not have an App.tsx once transpiled, but I’ll refer to that page as it makes it more obvious what’s going on.

To be more specific about the situation where I discovered these issues – I’ve created a subdomain (on a folder off of the root) for a React site I’m developing, which will be hosted from that subdomain.

As stated, all works great using serve or the start script. When deployed to production, the root page works fine and links from that page via React Router also work fine, but refreshing one of those links or trying to navigate directly to a link off of the root results in a 404. So what’s going on?

The issue is that the web server is not routing those relative URLs via the App.tsx router page, and so the React Router is not actually doing anything – it only works when we route things via the App.tsx code.

To solve this, within your React public folder, or the root of wherever you eventually deploy the React web site to, add a .htaccess file (if one is not already there and if your web server supports this file). Add the following to that file

<IfModule mod_rewrite.c>

  RewriteEngine On
  RewriteBase /
  RewriteRule ^index\.html$ - [L]
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteCond %{REQUEST_FILENAME} !-d
  RewriteCond %{REQUEST_FILENAME} !-l
  RewriteRule . /index.html [L]

</IfModule>

And that’s it, now the web server will route via our App.tsx (index.html).

See the following references, which supplied all this information – thanks to these posts I was able to get my React direct links working

404: React Page Not Found
Simple Steps on how to Deploy or Host your ReactJS App in cPanel
Routing single page application on Apache with .htaccess

Named arguments in C#

I’ve actually never had or felt a need to use named arguments in C#. Am I missing anything?

Well, first off, I have actually used named arguments in other languages, and my immediate feeling was that they make my code verbose for no real benefit. Certainly they can (sort of) document your code a little better, but with most/all development tools including IntelliSense or other means of telling the developer what parameters they’re editing/adding – in some cases even showing the name of the argument in the editor – the benefit of “documenting” seemed of little real use.

This all said, let’s look at what we can do with named arguments.

What are named arguments?

When we write methods/functions we pass arguments/parameters using “positional arguments”, which simply means the order of arguments must match the method signature’s argument order. For example, let’s look at a simple method to add a person to some application

void AddPerson(string name, string address, int age)
{
   // do something
}

So when we use this method with positional arguments we would write

AddPerson("Scooby Doo", "Mystery Machine", 11);

In C# we also get the ability to use named arguments instead (without any changes to the method signature) by including the argument name as part of the call, so for example

AddPerson(name: "Scooby Doo", address: "Mystery Machine", age: 11);

With tools like Visual Studio 2019 this doesn’t really add anything useful (if we’re mirroring the argument positions) because Visual Studio already tells us the name of each argument in the editor. Obviously outside of Visual Studio, for example in source control, maybe this is more useful.

Surely there’s more to it than that?

Positional arguments are just that: the calling code must supply each argument in the correct position. Whilst we can do the same with named arguments, we can also rearrange them and hence no longer need to call using the same positions; for example, let’s switch around the named arguments from earlier to give us this

AddPerson(name: "Scooby Doo", age: 11, address: "Mystery Machine");

The C# compiler will simply rearrange the arguments into their positions, producing the same IL as is generated for a caller using positional arguments. Here’s an example of such code viewed via dotPeek – it’s exactly the same code as for the positional arguments, as one would expect.

IL_0013: ldstr        "Scooby Doo"
IL_0018: ldstr        "Mystery Machine"
IL_001d: ldc.i4.s     11 // 0x0b
IL_001f: call         void NameParamsTests.Program::AddPerson(string, string, int32)
IL_0024: nop

One area where named arguments offer some extra goodness is when we’re using optional arguments, so let’s assume our AddPerson signature changes to

static void AddPerson(string name, string address = null, int age = Int32.MinValue)
{
   // do something
}

If we’re using positional arguments and we don’t have an address then we must still supply a value for the address, for example

AddPerson("Scooby Doo", null, 11);

But as we’ve seen, with named arguments the order is no longer a limiting factor, therefore we can use named arguments instead and not even bother with the address – the compiler will figure it out for us, hence we can write

AddPerson(name: "Scooby", age: 11);

Note: We can of course mix positional and named arguments in a method call if we wish/need to. Prior to C# 7.2 the named arguments had to come after all the positional ones; from C# 7.2 onwards a named argument can also appear before positional arguments, but only in its correct position, which somewhat limits the usefulness of naming it (see the example below).

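A minimal example of that non-trailing named argument form (valid from C# 7.2 onwards)

AddPerson(name: "Scooby Doo", "Mystery Machine", 11);
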
Named arguments – when we have lots of arguments

The simple AddPerson method probably isn’t a great example for using named arguments, so let’s instead look at a method that takes more arguments, with lots of them optional. Say we instead have a method which looks like this

void AddPerson(
   string firstName, string lastName, int age = Int32.MinValue,
   string addressLine1 = null, string addressLine2 = null, string addressLine3 = null,
   string city = null, string county = null, string postalCode = null)
{
   // do something
}

Now we can see that if we have partial details for the person we can call this method in a more succinct manner, for example

AddPerson(age: 11, firstName: "Scooby", lastName: "Doo", postalCode: "MM1");

// or with mixed positional and named arguments

AddPerson("Scooby", "Doo", 11, postalCode: "MM1");

As you’d imagine, the compiler simply handles the setting of the optional arguments as before, giving us IL such as

IL_0001: ldstr        "Scooby"
IL_0006: ldstr        "Doo"
IL_000b: ldc.i4.s     11 // 0x0b
IL_000d: ldnull
IL_000e: ldnull
IL_000f: ldnull
IL_0010: ldnull
IL_0011: ldnull
IL_0012: ldstr        "MM1"
IL_0017: call         void NameParamsTests.Program::AddPerson(string, string, int32, string, string, string, string, string, string)
IL_001c: nop

Once our methods start getting more arguments, and especially if lots are defaulted, named arguments start to make sense. Although, with a larger number of arguments, one might question whether the method itself is in need of refactoring. With our example here we could of course create separate objects for the different parts of the data, and with C#’s object initializer syntax we get a somewhat similar way to create “named” arguments, for example

public struct Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public int Age { get; set; }
   public string Line1 { get; set; }
   public string Line2 { get; set; }
   public string Line3 { get; set; }
   public string City { get; set; }
   public string County { get; set; }
   public string PostalCode { get; set; }
}

void AddPerson(Person person)
{
   // do something
}

Now using object initializer syntax we could call this method like this

AddPerson(new Person
   {
      FirstName = "Scooby",
      LastName = "Doo",
      Age = 11,
      PostalCode = "MM1"
   });

Project Tye

In the last few posts I’ve been doing a lot of stuff with ASP.NET core services and clients within Kubernetes, and whilst you’ve seen it’s not too hard to create a docker container/image out of services and clients, deploy to the local registry and then deploy using Kubernetes scripts, after a while you’re likely to find this tedious and want to wrap everything into a shell/batch script. An alternative is to use Project Tye.

What is Project Tye?

I recommend checking out the official Project Tye repository for the full details.

The basics are that Project Tye can take .NET projects, turn them into docker images and generate the deployments to k8s, all with a single command. Project Tye also allows us to undeploy with a single command.

Installing Project Tye

I’m using a remote Ubuntu server to run my Kubernetes cluster, so we’ll need to ensure that the .NET 3.1 SDK is installed (hopefully Tye will work with 5.0 in the near future, but for the current release I needed .NET 3.1.x installed).

To check your current list of SDKs run

dotnet --list-sdks

Next up you need to run the dotnet tool to install Tye, using

dotnet tool install -g Microsoft.Tye --version "0.7.0-alpha.21279.2"

Obviously change the version to whatever the latest build is – that was the latest available as of 6th June 2021.

The tool will be deployed to

  • Linux – $HOME/.dotnet/tools
  • Windows – %USERPROFILE%\.dotnet\tools

Running Project Tye

It’s as simple as running the following command in your solution folder

tye deploy --interactive

This is the interactive version, and you’ll be prompted to supply the registry you wish to push your docker images to – as we’re using localhost:32000, remember to set that as your registry. Better still, we can create a tye.yaml file within the solution folder with the configuration for Project Tye; here’s an example

name: myapp
registry: localhost:32000
services:
- name: frontend
  project: frontend/razordemo.csproj
- name: backend
  project: backend/weatherservice.csproj

Now with this in place we can just run

tye deploy

If you want to create a default tye.yaml file then run

tye init

Project Tye will now build our docker images, push them to localhost:32000 and then generate deployments, services etc. within Kubernetes based upon the configuration. Check out the JSON schema for the Tye configuration file, tye-schema.json, for all the current options.

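And when you’re finished, undeploying everything is a single command too

tye undeploy
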
Now you’ve deployed everything and it’s up and running. Tye also includes environment configuration, for example

env:
  - name: DOTNET_LOGGING__CONSOLE__DISABLECOLORS
    value: 'true'
  - name: ASPNETCORE_URLS
    value: 'http://*'
  - name: PORT
    value: '80'
  - name: SERVICE__RAZORDEMO__PROTOCOL
    value: http
  - name: SERVICE__RAZORDEMO__PORT
    value: '80'
  - name: SERVICE__RAZORDEMO__HOST
    value: razordemo
  - name: SERVICE__WEATHERSERVICE__PROTOCOL
    value: http
  - name: SERVICE__WEATHERSERVICE__PORT
    value: '80'
  - name: SERVICE__WEATHERSERVICE__HOST
    value: weatherservice

Just add the following NuGet package to your project(s)

<PackageReference Include="Microsoft.Tye.Extensions.Configuration" Version="0.2.0-*" />

and then you can interact with the configuration using the TyeConfigurationExtensions classes from that package. For example using the following

client.BaseAddress = Configuration.GetServiceUri("weatherservice");

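As a slightly fuller sketch (hypothetical – the "weatherservice" name is an assumption matching the tye.yaml above), you might wire this up via an HttpClient registration in Startup.ConfigureServices

services.AddHttpClient("weatherservice", (provider, client) =>
{
   // resolve the service's URI from the configuration Tye injects
   client.BaseAddress = Configuration.GetServiceUri("weatherservice");
});
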
Ingress

You can also include ingress configuration within your tye.yaml, for example

ingress: 
  - name: ingress  
    # bindings:
    #   - port: 8080
    rules:
      - path: /
        service: razordemo

However, as an Ingress might be shared across services/applications, it will not be removed by the undeploy command (so as not to affect other applications potentially sharing it); you can force it to be undeployed using

kubectl delete -f https://aka.ms/tye/ingress/deploy

See Ingress for more information on Tye and Ingress.

Dependencies

Along with our solution/projects we can also deploy dependencies as part of the deployment, for example if we need to also deploy a redis cache, dapr or other images. Just add the dependency to the tye.yaml like this

- name: redis
  image: redis
  bindings:
    - port: 6379

Communicating between services/applications in Kubernetes

If you have, say, a service and a client in a single Pod, then they share a virtual IP address and the services etc. within the Pod are accessible to one another via localhost. But you’re more likely to want to deploy services in their own Pods (unless they are tightly coupled) to allow scaling per service etc.

How do we communicate with a service in a different Pod to our application?

A common scenario here is that we have a client application, for example the razordemo application in the post Deploying an ASP.NET core application into a Docker image within Kubernetes, and it might be using the WeatherForecast service that is created using

dotnet new webapi -o weatherservice --no-https -f net5.0

We’re not going to go into the code of the actual services apart from showing the pieces that matter for communication, so let’s assume we’ve deployed weatherservice to k8s using a configuration such as

apiVersion: apps/v1
kind: Deployment
metadata:
  name: weatherservice
  namespace: default
spec:
  selector:
    matchLabels:
      run: weatherservice
  replicas: 1
  template:
    metadata:
      labels:
        run: weatherservice
    spec:
      containers:
      - name: weatherservice
        image: localhost:32000/weatherservice
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: weatherservice
  namespace: default
  labels:
    run: weatherservice
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: weatherservice

This service will exist in its own virtual network running on port 80; we may scale this service according to our needs without affecting other services or clients.

If we then deploy our razordemo as per Deploying an ASP.NET core application into a Docker image within Kubernetes – it will also exist in its own virtual network, also running on port 80.

To communicate from razordemo to the weatherservice we simply use the service name (if we’re on the same cluster) for example http://weatherservice.

Here’s a simple example of razordemo code for getting weatherservice data…

var httpClient = new HttpClient();
httpClient.BaseAddress = new Uri("http://weatherservice");

var response = await httpClient.GetAsync("/weatherforecast");
var results = JsonConvert.DeserializeObject<WeatherForecast[]>(await response.Content.ReadAsStringAsync());

The first two lines will probably be set in your ASP.NET core Startup.cs file, although better still is to store the URL in configuration via an environment variable within k8s, so the client is agnostic to the service name the weatherservice was deployed with.

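For example, a minimal sketch – WEATHERSERVICE_URL is a hypothetical environment variable name you’d set in the deployment configuration

// fall back to the service name if the environment variable isn't set
var baseAddress = Environment.GetEnvironmentVariable("WEATHERSERVICE_URL") ?? "http://weatherservice";
httpClient.BaseAddress = new Uri(baseAddress);
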
Deploying an ASP.NET core application into a Docker image within Kubernetes

In the previous post we looked at an “off the shelf” image of nginx, which we deployed to Kubernetes and were able to access externally using Ingress. This post follows on from that one, so do refer back to it if you have any issues with the following configurations etc.

Let’s look at the steps for deploying our own Docker image to k8s and better still let’s deploy a dotnet core webapp.

Note: Kubernetes is deprecating its support for Docker, however this does not mean we cannot deploy docker images, just that we need to use the Docker shim or generate containerd (or other container runtime) images.

The App

We’ll create a standard dotnet ASP.NET core Razor application – you can obviously do what you wish with it, but we’ll take the default implementation, turn it into a docker image and then deploy it to k8s.

So to start with…

  • Create a .NET core Razor application (mine’s named razordemo), you can do this from Visual Studio or using
    dotnet new webapp -o razordemo --no-https -f net5.0
    
  • If you’re running this on a remote machine don’t forget to change launchSettings.json localhost to 0.0.0.0 if you need to.
  • Run dotnet build

It’d be good to see this is all working, so let’s run the demo using

dotnet run

Now use your browser to access http://your-server-ip:5000/ and you should see the Razor demo home page, or use curl to see if you get valid HTML returned, i.e.

curl http://your-server-ip:5000

Generating the Docker image

Note: If you changed launchSettings.json to use 0.0.0.0, reset this to localhost.

Here’s the Dockerfile for building an image; it basically publishes a release build of our Razor application, then sets up the image to run razordemo.dll via dotnet from a Docker instance.

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY razordemo.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "razordemo.dll"]

Now run docker build using the following

docker build . -t razordemo --no-cache

If you want to check the image works as expected then run the following

docker run -d -p 5000:80 razordemo 

Now check the image is running okay by using curl as we did previously. If all worked you should see the Razor demo home page again, but now we’re serving it from within the docker instance.

Docker in the local registry

Next up, we want to deploy this docker image to k8s.

k8s will try to get an image from a remote registry, and we don’t want to deploy this image outside of our network, so we need to rebuild the image slightly differently using

docker build . -t localhost:32000/razordemo --no-cache

Reference: see Using the built-in registry for more information on the built-in registry.

Before going any further – I’m using Ubuntu and microk8s, so I will need to enable the local registry using

microk8s enable registry

I can’t recall if this is required, but I also enabled k8s DNS using

microk8s.enable dns

Find the image ID for our generated image using

docker images

Now use the following commands, where {the image id} is the one found from the above command

docker tag {the image id} localhost:32000/razordemo
docker push localhost:32000/razordemo

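If you want to confirm the push worked, you can query the registry’s HTTP API (this assumes the microk8s built-in registry exposes the standard Docker registry v2 API, which it does by default)

curl http://localhost:32000/v2/_catalog
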
The configuration

This is a configuration based upon my previous post (the file is named razordemo.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  selector:
    matchLabels:
      run: webapp
  replicas: 1
  template:
    metadata:
      labels:
        run: webapp
    spec:
      containers:
      - name: webapp
        image: localhost:32000/razordemo
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    run: webapp
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: webapp
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: razor-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp
            port: 
              number: 80

Now apply this configuration to k8s using the following (don’t forget to change the file name to whatever you named your file)

kubectl apply -f ./razordemo.yaml

Now we should be able to check the deployed image by either using the k8s dashboard or running

kubectl get ep webapp

Note the endpoint and curl to that endpoint; if all worked well you should be able to see the ASP.NET generated home page HTML, and better still, access http://your-server-ip from another machine and see the webpage.
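
For example (substituting the endpoint address the previous command reported)

curl http://<endpoint-ip>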