Category Archives: Rust

TailwindCSS with Yew

We’ve looked at using Yew with Rust in my post WASM with Rust (and Yew); let’s take things a step further by adding TailwindCSS to our Yew application.

  • Install the tailwind-cli
  • In your root folder run
    npm init 
    

    just select the defaults to get things started

  • Now let’s install tailwindcss using
    npm install --save-dev tailwindcss
    
  • Ensure the tailwindcss CLI is installed to your package using
    npm install tailwindcss @tailwindcss/cli
    
  • You can actually remove pretty much everything from the package.json file, mine looks like this
    {
      "dependencies": {
        "@tailwindcss/cli": "^4.1.12"
      }
    }
    
  • Create a styles folder and place the tailwind.css file into it
  • Change the tailwind.config.js to look like this
    /** @type {import('tailwindcss').Config} */
    module.exports = {
        content: [
            "./index.html",
            "./src/**/*.rs"
        ],
        theme: {
            extend: {},
        },
        plugins: [],
    }
    

Using Tailwindcss

We’re going to want to use the tailwindcss CLI to generate our application’s .css file into the ./dist folder otherwise our application will not be able to use it, but we’ll leave that step until a little later. For now let’s start by manually running the CLI to generate our .css file, which we’ll store in our root application folder.

Run the following command from the terminal

npx @tailwindcss/cli -i ./styles/tailwind.css -o ./app.css

This generates an app.css file. It’s not a lot of use at this point, but you can take a look at what the tailwindcss CLI generated in this app.css file.

What we really want to do, as part of the build, is generate the file and then reference it from our application.

As we’re using trunk from our previous posts, we can use trunk hooks to generate the file. I’m running on Windows so will use PowerShell to run the command. Open the Trunk.toml file (I’ll include my whole file below) and add a [[hooks]] section, where we create a pre-build step that generates app.css in the application root just as we did manually

[serve]
address = "127.0.0.1"
port = 8080

[[hooks]]
stage = "pre_build"
command = "powershell"
command_arguments = ["-Command", "npx" , "@tailwindcss/cli -i ./styles/tailwind.css -o ./app.css"]

However, as mentioned already, we need this file in the ./dist folder. Simply copying the file is not enough, as we also need to link to it from our index.html file.

We do this by creating the link to our app.css but marking it so that trunk will generate the file in the ./dist folder for us. Here’s the index.html

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>My WASM App</title>
    <link data-trunk rel="sass" href="index.scss" />
    <link data-trunk rel="css" href="app.css" />
  </head>
  <body></body>
</html>

Note the use of data-trunk for both our app.css and the default-created index.scss.

Using the TailwindCSS in our code

We’ve now got the .css file generated as part of the trunk build and copied to ./dist, so now we can use the CSS classes in our code. Here’s an example of a simple layout

use yew::prelude::*;

#[function_component(Layout)]
pub fn layout() -> Html {
  html! {
    <div class="min-h-screen w-screen flex flex-col">
      <nav class="bg-red-800 text-white px-6 py-4 flex justify-between items-center">
        <div class="text-base font-semibold">{ "My Application"  }</div>
        <ul class="flex space-x-6">
          <li><a href="#" class="hover:text-gray-300 text-base">{"Home"}</a></li>
          <li><a href="#" class="hover:text-gray-300 text-base">{"About"}</a></li>
          <li><a href="#" class="hover:text-gray-300 text-base">{"Services"}</a></li>
          <li><a href="/counter" class="hover:text-gray-300 text-base">{"Counter"}</a></li>
        </ul>
     </nav>
     <main class="flex-grow bg-gray-100 p-6">
       <p class="text-gray-700">{"Content goes here"}</p>
     </main>
   </div>
  }
}

Here’s an example of the counter code

use yew::prelude::*;

#[function_component(Counter)]
pub fn counter() -> Html {
    let counter = use_state(|| 0);
    let on_add_click = {
        let c = counter.clone();
        move |_| { c.set(*c + 1); }
    };

    let on_subtract_click = {
        let c = counter.clone();
        move |_| { c.set(*c - 1); }
    };

    html! {
        <div>
            <button onclick={on_add_click}
                style="width: 100px;"
                class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded">
                    { "+1" }</button>
            <p style="text-align: center">{ *counter }</p>
            <button onclick={on_subtract_click}
                style="width: 100px;"
                class="bg-emerald-500 hover:bg-emerald-700 text-white font-bold py-2 px-4 rounded">
                    { "-1" }</button>
        </div>
    }
}

WASM with Rust (and Yew)

In my previous post WASM with Rust (and Leptos) we covered creating a Rust project which generates a binary for use within WASM, using Leptos, and using Trunk to build and run it.

There’s more than one framework for creating WASM/WebAssembly projects in Rust, let’s look at another one, this time Yew.

We’ll be using trunk (just as in the previous post) to serve the application, but I’ll repeat the install step here

cargo install trunk

I’m going to assume you’ve also added the target, but I’ll include it here for completeness

rustup target add wasm32-unknown-unknown

Getting started

We’re going to use a template to scaffold a basic Yew application, so create yourself a folder for your project then run

cargo generate --git https://github.com/yewstack/yew-trunk-minimal-template

For mine I stuck with the defaults after naming it wasm_app – so the stable Yew version and no logging.

Before we get into the code, let’s add a Trunk.toml (in the folder with the Cargo.toml) with this configuration

[serve]
address = "127.0.0.1"
port = 8081

Let’s see what Yew generated. From the app folder (mine was named wasm_app) run

trunk serve --open

Straight up, Yew gives us a colourful starting point.

In the code

Let’s go through the code, so we know what we need if we’re creating a project without the template, but also to see what’s been added.

If you check out the Cargo.toml it’s filled in a lot of package info for us, so you might wish to tweak things there, but we have a single dependency

[dependencies]
yew = { version="0.21", features=["csr"] }

The Yew template includes index.scss for our styles and Trunk automatically compiles/transpiles it to a .css file of the same name within the dist folder.

The index.html is lovely and simple; really the only addition over a bare-bones index.html is the SASS link, which tells Trunk to compile the styles using SASS

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Trunk Template</title>
    <link data-trunk rel="sass" href="index.scss" />
  </head>
  <body></body>
</html>

In the src folder we have two files, main.rs and app.rs, within main.rs we have

mod app;

use app::App;

fn main() {
    yew::Renderer::<App>::new().render();
}

Here we are basically telling Yew to render our App. Within the app.rs we have

use yew::prelude::*;

#[function_component(App)]
pub fn app() -> Html {
    html! {
        <main>
            <img class="logo" src="https://yew.rs/img/logo.svg" alt="Yew logo" />
            <h1>{ "Hello World!" }</h1>
            <span class="subtitle">{ "from Yew with " }<i class="heart" /></span>
        </main>
    }
}

Similar to Leptos, we have a macro for our HTML tags etc., but here it’s html! (not view!). Also the component is marked with the function_component attribute, but otherwise it’s very recognisable what’s happening here.

Let’s add some routing

Create yourself a new file named counter.rs; let’s implement the fairly standard counter component. I should say the Yew web site has an example of the counter on their Getting Started page, so we’ll just take that and make a few tweaks

use yew::prelude::*;

#[function_component(Counter)]
pub fn counter() -> Html {
    let counter = use_state(|| 0);
    let on_add_click = {
        let c = counter.clone();
        move |_| { c.set(*c + 1); }
    };

    let on_subtract_click = {
        let c = counter.clone();
        move |_| { c.set(*c - 1); }
    };

    html! {
        <div>
            <button onclick={on_add_click}>{ "+1" }</button>
            <p>{ *counter }</p>
            <button onclick={on_subtract_click}>{ "-1" }</button>
        </div>
    }
}

If you’ve used React, you’ll see this is very similar to the way we might write our React component.

Of course the syntax differs, but we have use_state and event handler functions etc. The main difference is the way we’re cloning the value – by convention all those c variables would be named counter as well, but I wanted to make it clear what the scope of the counter variable was.

On further reading, it appears the use_* functions are hooks; see Pre-defined Hooks.

When we clone the counter, we’re not cloning the value, we’re cloning the handle (of type UseStateHandle, which implements Clone). All clones point to the same reactive cell, so setting through any clone changes the value in that shared state.

Before trying this code out we need our router, so the Yew site says add the following dependency to the Cargo.toml file

yew-router = { git = "https://github.com/yewstack/yew.git" }

but I had version issues so instead used

yew-router = { version = "0.18.0" }

Now let’s change the app.rs file to the following

use yew::prelude::*;
use yew_router::prelude::*;
use crate::counter::Counter;

#[derive(Clone, Routable, PartialEq)]
enum Route {
    #[at("/")]
    Home,
    #[at("/counter")]
    Counter,
    #[not_found]
    #[at("/404")]
    NotFound,
}

fn switch(routes: Route) -> Html {
    match routes {
        Route::Home => html! { <h1>{ "Home" }</h1> },
        Route::Counter => { html! { <Counter /> }},
        Route::NotFound => html! { <h1>{ "404" }</h1> },
    }
}

#[function_component(App)]
pub fn app() -> Html {
    html! {
        <BrowserRouter>
            <Switch<Route> render={switch} />
        </BrowserRouter>
    }
}

There’s a fair bit to digest, but hopefully it’s fairly obvious what’s happening.

We create an enum of the routes with the at attribute mapping each variant to a URL path. Then we use a function (named switch in this case) which maps the enum to HTML. We’ve embedded HTML for the Home and NotFound routes, but Counter will render our Counter component as if it’s HTML.

The final change is the app function, where we use BrowserRouter and Switch along with our switch function to render the pages.

Code

Checkout the code on GitHub

WASM with Rust (and Leptos)

I’m going to be going through some of the steps from Leptos Getting Started. Hopefully we’ll be able to add something here.

Prerequisites

We’re going to use Trunk to run our application, so first off we need to make sure we’ve installed it

cargo install trunk

Trunk allows us to build our code, run a server up and run our WASM application, moreover it’s watching for changes and so will rebuild and redeploy things as you go. Web style development with a compiled language. We’ll cover more on trunk later.

Creating our project

Create yourself a folder for your application and run the terminal in that folder.

We need to create our project – although I use RustRover from JetBrains, let’s go “old school” and use cargo to create our project.

Run the following (obviously change wasm_app to something more meaningful for your app name)

cargo new wasm_app --bin

cd into the application (as above, mine’s named wasm_app) and add the Leptos dependency with client-side rendering enabled.

cargo add leptos --features=csr

We’ll need to add the WASM target

rustup target add wasm32-unknown-unknown

In the root folder (where your Cargo.toml file is) create an index.html with the following

<!DOCTYPE html>
<html>
  <head></head>
  <body></body>
</html>

Next, cd into the src folder and edit main.rs – it should look like the following

use leptos::prelude::*;

fn main() {
    leptos::mount::mount_to_body(|| view! { <p>"Hello, world!"</p> })
}

The mount_to_body, as the name suggests, essentially injects your WASM code into the <body></body> element.

Now, from the root folder (i.e. where index.html is) run

trunk serve --open

If the default port (8080) is already in use we can specify the port using

trunk serve --open --port 8081

If all went well you’ll see your default browser showing the web page; if not, open a browser window and navigate to http://localhost:8081/

To save having to set the port via the CLI, you can also create a trunk.toml file in the root folder with something like this in it

[serve]
address = "127.0.0.1"
port = 8081

Taking it a bit further

We’ve got ourselves a really simple WASM page.

Let’s move this a little further by creating a component for our application.

Create a file in src named app.rs and we’ll add the following

use leptos::prelude::*;

#[component]
pub fn App() -> impl IntoView {
    view! {
        <p>"Hello, world!"</p>
    }
}

and change the main.rs to this

mod app;

use leptos::mount::mount_to_body;
use crate::app::App;

fn main() {
    mount_to_body(App);
}

If you kept trunk running it will automatically rebuild the code and refresh the browser.

Components

The component (below) returns HTML via the view macro and, as you can see, the function returns an impl IntoView. We mark the function with the #[component] attribute

#[component]
pub fn App() -> impl IntoView {
    view! {
        <p>"Hello, world!"</p>
    }
}

Note: you might want to change your text to prove to yourself that the page did indeed get updated when you saved it – assuming trunk was running.

This is a pretty simple starting point, so let’s add some more bits to this…

Let’s change the App function to this

use leptos::prelude::*;
 
#[component]
pub fn App() -> impl IntoView {
    let (count, set_count) = signal(0);

    view! {

        <button on:click=move |_| set_count.set(count.get() + 1)>Up</button>
        <div>{count}</div>
        <button on:click=move |_| set_count.set(count.get() - 1)>Down</button>
    }
}

The signal (a reactive variable) may remind you of something like useState in React; we deconstruct the signal (which has a default of 0) into count (a getter) and set_count (a setter). To be honest, the get and set functions seem odd if you’re used to properties in languages such as C#, but that’s the way it is in Rust.

The count value is of type ReadSignal and set_count is of type WriteSignal.

We can also set the value of count using the following within the closure. Ultimately this should be a more performant way of doing things. The example above might be preferred for readability (although that’s debatable) – it does look more in line with the way we get values elsewhere. I’ll leave others to debate the pros and cons; for me the line below is efficient.

*set_count.write() += 1

Routing

Let’s rename the app.rs file to counter.rs (also renaming the function to Counter) and create a new app.rs file which will act as the router to our components. We’ll need to add this to the Cargo.toml dependencies

leptos_router = "0.8.5"

and in the app.rs paste the following code

use leptos::prelude::*;
use leptos_router::{
    components::{Route, Router, Routes},
    StaticSegment,
};
use crate::counter::Counter;
use crate::home::Home;

#[component]
pub fn App() -> impl IntoView {
    view! {
        <Router>
            <Routes fallback=|| "Page not found.">
                <Route path=StaticSegment("") view=Home />
                <Route path=StaticSegment("counter") view=Counter />
            </Routes>
        </Router>
    }
}

You’ll need to add the mod declarations to the main.rs file to include the counter and home modules.

I’ve also added a home.rs file with the following

use leptos::prelude::*;

#[component]
pub fn Home() -> impl IntoView {
    view! {
        <p>Welcome to your new app!</p>
    }
}

As you can see, the router routes / to our Home component and the /counter to our Counter component.

Meta data from code

Whilst we have an index.html which you can edit, we might want to supply some of the metadata etc. via the app’s code.

Add the following to Cargo.toml dependencies

leptos_meta = "0.8.5"

Now in main.rs add

use leptos::view;
use leptos_meta::*;

and now change the code to

fn main() {
    mount_to_body(|| {
        provide_meta_context();
        view! { 
            <Title text="Welcome to My App" />
            <Meta name="description" content="This is my app." />
            <App />
        }
    });
}

The provide_meta_context() function allows us to inject metadata such as <title>, <meta> and <script>.

Code

Code for this post is available on GitHub.

cargo-watch

Cargo watch allows us to run our Rust application and, when changes are detected, rebuild and run the application again (you know, standard watch functionality).

To install use

cargo install cargo-watch

Then to run, use

cargo-watch -x run

Async/await using tokio and Rust

Rust supports async/await in a similar way to C# although these are supplied via runtimes, for example Tokio, async-std and others.

In this post we’ll look at the tokio runtime option.

The first thing we need to do is add tokio to the Cargo.toml, for example

[dependencies]
tokio = { version = "1", features = ["full"] }

Now, let’s create a simple async function

async fn execute() {
   println!("Execution in async function");
}

Notice we do not return a Task as we would in C# – in fact we declare no return type at all here – but this is essentially syntactic sugar for

fn execute() -> impl Future<Output = ()> 

Hence, we can see async functions return a Future (similar to a Promise in JavaScript etc.).

The Future trait has a poll function which can be checked to see if the async function is ready to return a value or if it’s pending.

To await an async function we use the following syntax

execute().await;

The await will of course cause the current Future to yield back to the caller, and the code after the await will not execute until the Future completes/is ready.

If you come from C# this is much the same, i.e. running a continuation when completed etc.

To use async/await in main we need to make a couple of changes: first make main async, but this alone will not work without a runtime, hence main looks like this

#[tokio::main]
async fn main() {
  execute().await;
}

Futures are lazy, meaning the future will not execute until it is awaited.

As you can see, Futures do not inherently run on a thread; they are just polled. However, we can use tokio tasks (which look a lot like std lib threads) to execute the code on a thread

 
let handle = tokio::spawn(async move {
   execute().await;
});

handle.await.unwrap();

By default tokio executes tasks on a thread pool, but we can change that, as below

#[tokio::main(flavor = "current_thread")]

Which then runs all tasks cooperatively on a single thread instead of a thread pool.

Tokio is good for non-blocking IO, but it uses a single thread for its main event loop, so CPU-heavy work will basically slow down other tasks – hence we would need to spawn threads as already discussed.

Slight detour

As a slight detour from async/await – tokio can also create “green” threads (lightweight threads from the runtime – not OS threads), for example

use tokio::time;

async fn execute() {
   time::sleep(time::Duration::from_secs(1)).await;
}

fn main() {
  let runtime = tokio::runtime::Runtime::new().unwrap();
  
  let future = execute();

  runtime.block_on(future);
}

Rust, postfix “?”

Let’s assume we have a function such as the one below, where a line ends in a “?” – what’s this doing?

fn get_history() -> Result<Vec<Revision>, String> {
   let revisions: Vec<Revision> = get_revisions()?;
   return Ok(revisions)
}

We can see that the return is a Result – which is an enum that essentially looks like this

enum Result<T, E> {
    Ok(T),
    Err(E),
}

Hence our get_history function can return a Vec<Revision> which might be Ok (for success of course) or an Err (for an error).

Okay, so what’s that line doing, especially as we only appear to return an Ok?

This is essentially the same as the following

let revisions = match get_revisions() {
  Ok(val) => val,
  Err(e) => return Err(e)
};

As we can see, this is a nice bit of syntactic sugar to return an error from the function OR assign the Ok result to the revisions variable.

Kubernetes cronjobs

You know the scenario: you’re wanting to run jobs either at certain points in a day or throughout the day every N timespans (i.e. every 5 mins).

Kubernetes has you covered, there’s a specific “kind” of job for this, as you guessed from the title, the CronJob.

An example app

Let’s assume you’ve created yourself a job – I’m going to create a simple job that just outputs the date/time at the scheduled time. I’ve written this in Rust but to be honest it’s simple enough that it could be in any language. The application is just a standard console application named crj (for cronjob or cron rust job, I really didn’t think about it :)). Here’s the Cargo.toml

[package]
name = "crj"
version = "0.1.0"
edition = "2024"

[dependencies]
chrono = "0.4"

Here’s the code

use chrono::Local;

fn main() {
    let now = Local::now();
    println!("Current date and time: {}", now);
}

See I told you it was simple.

Docker

For completeness, here’s the Dockerfile and the steps to get things built, tagged and pushed

FROM rust:1.89.0-slim AS builder

WORKDIR /app
COPY . .

RUN cargo build --release

FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*

COPY --from=builder /app/target/release /usr/local/bin/crj

RUN chmod +x /usr/local/bin/crj

ENTRYPOINT ["/usr/local/bin/crj/crj"]

Next up we need to build the image using (remember to use the image you created as well as the correct name for your container registry)

docker build -t putridparrot/crj:1.0.0 .

then tag it using

docker tag putridparrot/crj:1.0.0 putridparrotreg/putridparrot/crj:1.0.0

Finally we’ll push it to our container registry using

docker push putridparrotreg/putridparrot/crj:1.0.0

Kubernetes CronJob

All pretty standard stuff and to be honest the next bit is simple enough. We need to create a Kubernetes yaml file (or helm charts). Here’s my cronjob.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scheduled-job
  namespace: dev
spec:
  schedule: "*/5 * * * *" # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: scheduled-job
              image:  putridparrotreg/putridparrot/crj:1.0.0
          restartPolicy: Never

My cronjob has the name scheduled-job (I know, not very imaginative). We apply this file to Kubernetes as usual i.e.

kubectl apply -f .\cronjob.yaml

Did it work?

We’ll of course want to take a look at what happened after this CronJob was set up in Kubernetes. We can simply use the following (or filter to a single namespace, such as dev in my case).

kubectl get cronjobs --all-namespaces -w

You’ll see something like this

NAMESPACE   NAME            SCHEDULE      TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
dev         scheduled-job   */5 * * * *   <none>     False     0        <none>          9s
dev         scheduled-job   */5 * * * *   <none>     False     1        0s              16s
dev         scheduled-job   */5 * * * *   <none>     False     0        13s             29s
dev         scheduled-job   */5 * * * *   <none>     False     1        0s              5m16s

In my case the job starts (ACTIVE) and then completes and shuts down. Then 5 minutes later it starts again as expected with this cron schedule.

On the pods side you can run

kubectl get pods -n dev -w

Now what you’ll see is something like this

NAME                           READY   STATUS              RESTARTS   AGE
scheduled-job-29257380-5w4rg   0/1     Completed           0          51s
scheduled-job-29257385-qgml2   0/1     Pending             0          0s
scheduled-job-29257385-qgml2   0/1     Pending             0          0s
scheduled-job-29257385-qgml2   0/1     ContainerCreating   0          0s
scheduled-job-29257385-qgml2   1/1     Running             0          2s
scheduled-job-29257385-qgml2   0/1     Completed           0          3s
scheduled-job-29257385-qgml2   0/1     Completed           0          5s
scheduled-job-29257385-qgml2   0/1     Completed           0          5s
scheduled-job-29257390-2x98r   0/1     Pending             0          0s
scheduled-job-29257390-2x98r   0/1     Pending             0          0s
scheduled-job-29257390-2x98r   0/1     ContainerCreating   0          0s
scheduled-job-29257390-2x98r   1/1     Running             0          2s

Notice that the pod is created and goes into a “Pending” state, then “ContainerCreating”, before “Running” and finally “Completed” – and the next run of the cronjob creates a pod with a new name. Therefore, if you’re trying to view the pod’s logs, i.e. kubectl logs scheduled-job-29257380-5w4rg -n dev, you’ll get something like the below, but you cannot -f (follow) the logs across runs, as each run creates a new pod.

Current date and time: 2025-08-17 15:00:09.294317303 +00:00

Closures in Rust

A “regular” closure in Rust uses the following syntax

let name = String::from("PutridParrot");
let hello = || println!("Hello {}", name);

In this simple example, the name is captured within the closure, which is the function

|| println!("Hello {}", name);

The name variable remains usable after the closure. However, there’s another type of closure, the moving closure, which uses the move keyword i.e.

let name = String::from("PutridParrot");
let hello = move || println!("Hello {}", name);

The difference here is that the name variable is no longer usable after the closure. Essentially the closure takes ownership of all captured variables.

The main uses of move closures are within threading (so the thread takes ownership of its data), async blocks (which often require owned values), and passing values into boxed trait objects.

A simple web API in various languages and deployable to Kubernetes (Rust)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to a Rust implementation.

Implementation

I’m using JetBrains RustRover for this project, so I created a project named echo_service.

Next, add the following to the dependencies of Cargo.toml

axum = "0.7"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }

and now the main.rs can be replaced with

use axum::{
    routing::get,
    extract::Query,
    http::StatusCode,
    response::IntoResponse,
    Router,
};
use tokio::net::TcpListener;
use axum::serve;
use std::net::SocketAddr;
use serde::Deserialize;

#[derive(Deserialize)]
struct EchoParams {
    text: Option<String>,
}

async fn echo(Query(params): Query<EchoParams>) -> String {
    format!("Rust Echo: {}", params.text.unwrap_or_default())
}

async fn livez() -> impl IntoResponse {
    (StatusCode::OK, "OK")
}

async fn readyz() -> impl IntoResponse {
    (StatusCode::OK, "Ready")
}

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/echo", get(echo))
        .route("/livez", get(livez))
        .route("/readyz", get(readyz));

    let addr = SocketAddr::from(([0, 0, 0, 0], 8080));
    println!("Running on http://{}", addr);

    let listener = TcpListener::bind(addr).await.unwrap();
    serve(listener, app).await.unwrap();

}

Dockerfile

Next up we need to create our Dockerfile

FROM rust:1.72-slim AS builder

WORKDIR /app
COPY . .

RUN cargo build --release

FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*

COPY --from=builder /app/target/release /usr/local/bin/echo_service

RUN chmod +x /usr/local/bin/echo_service

EXPOSE 8080

ENTRYPOINT ["/usr/local/bin/echo_service/echo_service"]

Note: On Linux, binding to port 80 might require elevated privileges, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration – deployment, service and ingress – in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

Cargo

As part of a little project I’m working on, I’m back playing within Rust.

Whilst we can use rustc to build our code we’re more likely to use the build and package management application cargo. Let’s take a look at the core features of using cargo.

What version are you running?

To find the version of cargo, simply type

cargo --version

Creating a new project

Cargo can be used to create a minimal project which will include the .toml configuration file, with the code written to a src folder. Cargo expects a specific layout of configuration and source, and writes build artifacts to the target folder.

To generate a minimal application run the following (replacing app_name with your application name)

cargo new app_name

We can also use cargo to create a minimal library using the --lib switch

cargo new lib_name --lib

Building our project

To build the project artifacts, run the following from the root of your application/library

cargo build

This command will create the binaries in the target folder. This is (by default) a debug build; to create a release build add the --release switch i.e.

cargo build --release

Running our project

In the scenarios where we’ve generated an executable, we can run the application by cd-ing into the target folder and running the .exe, or via cargo we use

cargo run

Check your build

In some cases, where you are not really interested in generating an executable, you can run the check command which will simply verify your code is valid – this will likely be faster than generating the executable and is useful where you just want to ensure your code will build

cargo check