Category Archives: Kubernetes

ASP.NET Core and Ingress rules

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

You’ve implemented a service using ASP.NET Core and deployed it to Kubernetes, and all worked great. You then deployed a front end to use that service (as per the example in the Project Tye repo) and, again, all worked well. Whilst the Ingress mapped the path / to your front end service, the CSS and JS libs all worked fine, but then you change your Ingress route to (for example)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: default
spec:
  rules:
    - http:
        paths:
          - path: /frontend
            pathType: Prefix
            backend:
              service: 
                name: frontend
                port: 
                  number: 80

In the above, the rule for the path /frontend routes to the frontend service. All looks good, so you navigate to http://your-server-ip/frontend and… wait a moment. The front end and backend services are working, i.e. you see some HTML and you see results from the backend service, but Edge/Chrome/whatever reports a 404 for bootstrap and your CSS. This is because the page links to its static assets from the site root (e.g. /css/…, /js/…), and your Ingress no longer routes those root paths to the front end.

The simplest solution, albeit with the downside that you are putting knowledge of the deployment route into your front end service, is to add the following to Startup.cs

app.UsePathBase("/frontend");

Obviously, if you’re using Tye or your own environment configuration, you might prefer to get the “/frontend” string from the environment configuration instead of hard coding it.
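As a rough sketch of what that might look like (assuming the usual Configuration property injected into Startup, and a hypothetical PATH_BASE key supplied, for example, as an environment variable on the k8s Deployment):

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // "PATH_BASE" is a hypothetical key for this sketch; in k8s it could be
    // supplied as an environment variable on the Deployment.
    var pathBase = Configuration["PATH_BASE"];
    if (!string.IsNullOrEmpty(pathBase))
    {
        // UsePathBase should run early, before static files and routing,
        // so that requests to /frontend/... are rewritten correctly.
        app.UsePathBase(pathBase);
    }

    app.UseStaticFiles();
    app.UseRouting();
    // ... remainder of the template-generated pipeline
}

With something like this in place, redeploying under a different route only needs a configuration change rather than a rebuild.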

Project Tye

In the last few posts I’ve been doing a lot of stuff with ASP.NET Core services and clients within Kubernetes. Whilst you’ve seen it’s not too hard to create a Docker container/image out of services and clients, deploy it to the local registry and then deploy it using Kubernetes scripts, after a while you’re likely to find this tedious and want to wrap everything into a shell/batch script – an alternative is to use Project Tye.

What is Project Tye?

I recommend checking out the Project Tye repo and its documentation.

The basics are: Project Tye can take .NET projects, turn them into Docker images and generate the deployments to k8s, all with a single command. Project Tye also allows us to undeploy with a single command.

Installing Project Tye

I’m using a remote Ubuntu server to run my Kubernetes cluster, so we’ll need to ensure that the .NET 3.1 SDK is installed (hopefully Tye will work with 5.0 in the near future, but for the current release I needed .NET 3.1.x installed).

To check your current list of SDKs, run

dotnet --list-sdks

Next up, you need to run the dotnet tool command to install Tye, using

dotnet tool install -g Microsoft.Tye --version "0.7.0-alpha.21279.2"

Obviously change the version to whatever the latest build is – that was the latest available as of 6th June 2021.

The tool will be deployed to

  • Linux – $HOME/.dotnet/tools
  • Windows – %USERPROFILE%\.dotnet\tools

Running Project Tye

It’s as simple as running the following command in your solution folder

tye deploy --interactive

This is the interactive version, and you’ll be prompted to supply the registry you wish to push your Docker images to. As we’re using localhost:32000, remember to set that as your registry. Better still, we can create a tye.yaml file with the configuration for Project Tye within the solution folder; here’s an example

name: myapp
registry: localhost:32000
services:
- name: frontend
  project: frontend/razordemo.csproj
- name: backend
  project: backend/weatherservice.csproj

Now with this in place we can just run

tye deploy

If you want to create a default tye.yaml file then run

tye init

Project Tye will now build our Docker images, push them to localhost:32000 and then generate deployments, services etc. within Kubernetes based upon the configuration. Check out the JSON schema for the Tye configuration file, tye-schema.json, for all the current options.

Now you’ve deployed everything and it’s up and running, but Tye also injects environment configuration into the deployed services for service discovery, for example

env:
  - name: DOTNET_LOGGING__CONSOLE__DISABLECOLORS
    value: 'true'
  - name: ASPNETCORE_URLS
    value: 'http://*'
  - name: PORT
    value: '80'
  - name: SERVICE__RAZORDEMO__PROTOCOL
    value: http
  - name: SERVICE__RAZORDEMO__PORT
    value: '80'
  - name: SERVICE__RAZORDEMO__HOST
    value: razordemo
  - name: SERVICE__WEATHERSERVICE__PROTOCOL
    value: http
  - name: SERVICE__WEATHERSERVICE__PORT
    value: '80'
  - name: SERVICE__WEATHERSERVICE__HOST
    value: weatherservice

Just add the following NuGet package to your project(s)

<PackageReference Include="Microsoft.Tye.Extensions.Configuration" Version="0.2.0-*" />

and then you can interact with the configuration using the TyeConfigurationExtensions class from that package. For example, the following sets a client’s base address from Tye’s service discovery configuration

client.BaseAddress = Configuration.GetServiceUri("weatherservice");
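Putting that together, a rough sketch of wiring this into ConfigureServices might look like the following, where WeatherClient is a hypothetical typed client (not part of the package):

public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();

    // GetServiceUri (from Microsoft.Tye.Extensions.Configuration) resolves
    // the SERVICE__WEATHERSERVICE__* variables shown above into a Uri.
    // WeatherClient is a hypothetical typed client for this sketch.
    services.AddHttpClient<WeatherClient>(client =>
    {
        client.BaseAddress = Configuration.GetServiceUri("weatherservice");
    });
}

The nice thing is the same code works both under tye run locally and when deployed to the cluster, since Tye supplies matching environment variables in both cases.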

Ingress

You can also include ingress configuration within your tye.yaml, for example

ingress: 
  - name: ingress  
    # bindings:
    #   - port: 8080
    rules:
      - path: /
        service: razordemo

however, as an Ingress might be shared across services/applications, it will not be removed by the undeploy command (so as not to affect other applications). You can force it to be removed using

kubectl delete -f https://aka.ms/tye/ingress/deploy

See Ingress for more information on Tye and Ingress.

Dependencies

Along with our solution/projects we can also deploy dependencies as part of the deployment, for example if we need to also deploy a Redis cache, Dapr or other images. Just add the dependency to the tye.yaml like this

- name: redis
  image: redis
  bindings:
    - port: 6379
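As a rough sketch of consuming that Redis dependency from a service (assuming Tye’s convention of exposing the binding as a connection string named after the dependency, and using the StackExchange.Redis client):

using Microsoft.Extensions.Configuration;
using StackExchange.Redis;

public static class RedisSetup
{
    public static IConnectionMultiplexer Connect(IConfiguration configuration)
    {
        // "redis" matches the dependency name in tye.yaml; Tye exposes the
        // binding via the standard connection strings configuration section.
        var connectionString = configuration.GetConnectionString("redis");
        return ConnectionMultiplexer.Connect(connectionString);
    }
}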

Communicating between services/applications in Kubernetes

If you have, say, a service and a client in a single Pod, then they share a virtual IP address and the services etc. within the Pod are accessible via localhost, but you’re more likely to want to deploy services in their own Pods (unless they are tightly coupled) to allow scaling per service etc.

So how do we communicate with a service in a different Pod to our application?

A common scenario here is: we have a client application, for example the razordemo application in the post Deploying an ASP.NET core application into a Docker image within Kubernetes, and it might be using the WeatherForecast service that is created using

dotnet new webapi -o weatherservice --no-https -f net5.0

We’re not going to go into the code of the actual services apart from showing the pieces that matter for communication, so let’s assume we’ve deployed the weatherservice to k8s using a configuration such as

apiVersion: apps/v1
kind: Deployment
metadata:
  name: weatherservice
  namespace: default
spec:
  selector:
    matchLabels:
      run: weatherservice
  replicas: 1
  template:
    metadata:
      labels:
        run: weatherservice
    spec:
      containers:
      - name: weatherservice
        image: localhost:32000/weatherservice
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: weatherservice
  namespace: default
  labels:
    run: weatherservice
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: weatherservice

This service will exist in its own virtual network running on port 80; we may scale this service according to our needs and without affecting other services or clients.

If we then deploy our razordemo as per Deploying an ASP.NET core application into a Docker image within Kubernetes – it will also exist in its own virtual network, also running on port 80.

To communicate from razordemo to the weatherservice we simply use the service name (if we’re on the same cluster) for example http://weatherservice.

Here’s a simple example of razordemo code for getting weatherservice data…

using System;
using System.Net.Http;
using Newtonsoft.Json;

// "weatherservice" is resolved via Kubernetes DNS within the cluster
var httpClient = new HttpClient();
httpClient.BaseAddress = new Uri("http://weatherservice");

var response = await httpClient.GetAsync("/weatherforecast");
var results = JsonConvert.DeserializeObject<WeatherForecast[]>(await response.Content.ReadAsStringAsync());

The HttpClient setup and its BaseAddress will probably live in your ASP.NET Core Startup.cs file, although better still, we can store the URL in configuration via an environment variable within k8s, so the client is agnostic to the service name that we deployed the weatherservice with.
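A minimal sketch of that approach, where WEATHER_SERVICE_URL is a hypothetical variable name that would be set in the Deployment’s env section:

using System;
using System.Net.Http;

// WEATHER_SERVICE_URL is a hypothetical variable for this sketch; it would
// be set in the k8s Deployment's env section. Falling back to the service
// name keeps the in-cluster default behaviour.
var serviceUrl = Environment.GetEnvironmentVariable("WEATHER_SERVICE_URL")
                 ?? "http://weatherservice";

var httpClient = new HttpClient
{
    BaseAddress = new Uri(serviceUrl)
};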

Deploying an ASP.NET core application into a Docker image within Kubernetes

In the previous post we looked at an “off the shelf” image of nginx, which we deployed to Kubernetes and were able to access externally using Ingress. This post follows on from that one, so do refer back to it if you have any issues with the following configurations etc.

Let’s look at the steps for deploying our own Docker image to k8s and better still let’s deploy a dotnet core webapp.

Note: Kubernetes is deprecating its support for Docker, however this does not mean we cannot deploy Docker images, just that we need to use the Docker shim or generate containerd (or other container runtime) images.

The App

We’ll create a standard ASP.NET Core Razor application; you can obviously do what you wish with it, but we’ll take the default implementation, turn it into a Docker image and then deploy it to k8s.

So to start with…

  • Create a .NET core Razor application (mine’s named razordemo), you can do this from Visual Studio or using
    dotnet new webapp -o razordemo --no-https -f net5.0
    
  • If you’re running this on a remote machine, don’t forget to change localhost to 0.0.0.0 in launchSettings.json if you need to.
  • Run dotnet build

It’d be good to see that this is all working, so let’s run the demo using

dotnet run

Now use your browser to access http://your-server-ip:5000/ and you should see the Razor demo home page, or use curl to see if you get valid HTML returned, i.e.

curl http://your-server-ip:5000

Generating the Docker image

Note: If you changed launchSettings.json to use 0.0.0.0, reset this to localhost.

Here’s the Dockerfile for building an image; it’s basically going to publish a release build of our Razor application, then set up the image to run razordemo.dll via dotnet from a Docker instance.

# Build stage: restore and publish a release build of the app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY razordemo.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c release -o /app

# Runtime stage: copy the published output into the smaller ASP.NET image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "razordemo.dll"]

Now run docker build using the following

docker build . -t razordemo --no-cache

If you want to check the image works as expected, then run the following

docker run -d -p 5000:80 razordemo 

Now check the image is running okay by using curl as we did previously. If all worked, you should see the Razor demo home page again, but now we’re serving it from within the Docker instance.

Docker in the local registry

Next up, we want to deploy this docker image to k8s.

k8s will try to pull an image from a remote registry, and we don’t want to deploy this image outside of our network, so we need to rebuild the image slightly differently using

docker build . -t localhost:32000/razordemo --no-cache

Reference: see Using the built-in registry for more information on the built-in registry.

Before going any further: I’m using Ubuntu and microk8s, so I will need to enable the local registry using

microk8s enable registry

I can’t recall if this is required, but I also enabled k8s DNS using

microk8s.enable dns

Find the image ID for our generated image using

docker images

Now use the following commands, where {the image id} is the one found from the above command

docker tag {the image id} localhost:32000/razordemo
docker push localhost:32000/razordemo

The configuration

This is a configuration based upon my previous post (the file is named razordemo.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  selector:
    matchLabels:
      run: webapp
  replicas: 1
  template:
    metadata:
      labels:
        run: webapp
    spec:
      containers:
      - name: webapp
        image: localhost:32000/razordemo
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    run: webapp
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: webapp
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: razor-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp
            port: 
              number: 80

Now apply this configuration to k8s using (don’t forget to change the file name to whatever you named your file)

kubectl apply -f ./razordemo.yaml

Now we should be able to check the deployed image, either by using the k8s dashboard or by running

kubectl get ep webapp

Note the endpoint and curl to that endpoint; if all worked well you should be able to see the ASP.NET generated home page HTML, and better still, access http://your-server-ip from another machine and see the webpage.

Deploying and exposing an nginx server in Kubernetes

In this post we’re going to create the configuration for a deployment, service and Ingress controller to expose an nginx instance on our network. I’ve picked nginx simply to demonstrate the process of these steps and because, out of the box, we get a webpage to view to know whether everything works – feel free to change the image used to something else for your own uses.

Deployment

Below is the deployment configuration for k8s, creating a deployment named my-nginx. This will generate 2 replicas and expose the image on port 80 of the container.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80

Service

Now we’ll create the configuration for the my-nginx service

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx

Ingress

We’re going to use Ingress to expose our service to the network. Alternatively we could use a LoadBalancer, but that option requires an external load balancer and so is more likely to be used in the cloud; or we could use NodePort, which allows us to assign a port to a service so that any request to the port is forwarded to the node – the main problem with this is that the port may change. Instead, Ingress acts like a load balancer within our cluster and will allow us to configure things to route port 80 calls through to our my-nginx service as if it was running outside of k8s.

We’re going to need to enable ingress within k8s. As we’re using Ubuntu and microk8s, run the following

microk8s enable ingress

The following is the configuration for this ingress.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port: 
              number: 80

Running and testing

Now we need to apply the configuration to our Kubernetes instance, so I’ve saved all the previously defined configuration sections into a single file named my-nginx.yaml. Now just run the following to apply the configuration in k8s

kubectl apply -f my-nginx.yaml

If you like, you can check the endpoint(s) assigned within k8s using

kubectl get ep my-nginx

and then curl the one or more endpoints (in our example, 2 endpoints). Or we can jump straight to the interesting bit and access your k8s host’s IP: if all worked, you’ll be directed to one of the replicated nginx instances and should see the “Welcome to nginx!” page.

Installing Kubernetes on Ubuntu

In this post I’m just going to list the steps for installing Kubernetes (k8s) on Ubuntu server using the single node, lightweight MicroK8s.

  • If you don’t have it installed, install snap

    sudo apt install snapd
    

Now let’s go through the steps to get microk8s installed

  • Install microk8s

    sudo snap install microk8s --classic
    
  • You may wish to change permissions to save using sudo every time; if so, run the following and change {username} to your user name.

    sudo usermod -a -G microk8s {username}
    sudo chown -f -R {username} ~/.kube
    
  • Verify the installation
    sudo microk8s.status
    
  • To save us typing microk8s to run k8s commands, let’s alias it
    sudo snap alias microk8s.kubectl kubectl
    

Accessing the Dashboard

Whilst you can now run kubectl commands, the web-based dashboard is really useful – so let’s deploy it using

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml

By default the Dashboard can only be accessed from the local machine and from a security standpoint you may even prefer not to be running a dashboard, but if you decide you want it (for example for dev work) then…

As per Accessing Dashboard

kubectl -n kubernetes-dashboard edit service kubernetes-dashboard

Change type: ClusterIP to type: NodePort and save this file.

You’ll need to run the proxy

kubectl proxy&

Now to get the port for the dashboard, run the following

kubectl -n kubernetes-dashboard get service kubernetes-dashboard

Example output (taken from Accessing Dashboard)

NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.100.124.90   <nodes>       443:31707/TCP   21h

You can now access the dashboard remotely on port 31707 (see the PORT(S) listed above), for example https://{master-ip}:31707 where {master-ip} is the server’s IP address.

To get the token for the dashboard, run the following

kubectl describe secret -n kube-system | grep deployment -A 12

Then copy the whole token into the Dashboard’s Token edit box and log in.

To enable skipping the requirement for a token etc. on the dashboard (this should only be used on a development installation), run

kubectl edit deployment kubernetes-dashboard -n kubernetes-dashboard

then add the following line

- args:
   - --auto-generate-certificates
   - --namespace=kubernetes-dashboard
   - --enable-skip-login  # this argument allows us to skip login