Trying out SurrealDB with Rust

SurrealDB is a multi-model database, which essentially means it supports relational, document, graph, time-series, vector, full-text search and geospatial models (as described in the SurrealDB Overview).

SurrealDB allows queries through a SQL-like query language (SurrealQL) as well as GraphQL, HTTP and RPC.

There are SDKs for Rust (which I’m going to use here) along with JavaScript, Java, Go, Python, .NET and PHP.

Whilst you can install it on Windows, Linux and Mac, I prefer using Docker, so let’s run up an instance of SurrealDB

docker run --rm -p 8000:8000 surrealdb/surrealdb:latest start --log trace --user root --pass root memory

To persist data with a volume, either create yourself a folder (e.g. mkdir mydata) or use an existing path, then mount it into the container and point SurrealDB at a file within it

docker run --rm -p 8000:8000 -v "$(pwd)/mydata:/mydata" surrealdb/surrealdb:latest start --log trace --user root --pass root rocksdb:/mydata/mydatabase.db

If you’d like to run a web based UI for SurrealDB, you can run Surrealist

docker run -d -p 8080:8080 surrealdb/surrealist:latest

Then use this to connect to your running instance; the default user is admin and the default password is admin (obviously change these in real-world usage).

Once connected via Surrealist we can create a namespace and database. Here’s a simple example of such a query run via Surrealist

USE NS myns DB mydb;

Yes, we literally just use the namespace and database for the first time to create both. Now let’s add some data, creating a “table” using

CREATE person CONTENT {
  first_name: "Scooby",
  last_name: "Doo",
  age: 42,
  email: "scooby.doo@example.com"
};

We can query for the list of “person” rows using

SELECT * FROM person;

As you can see, it’s a very SQL-like syntax with some differences.

We didn’t create an id or suchlike field, but if you select the rows from the person table you’ll notice something like this

[
  {
    age: 42,
    email: 'scooby.doo@example.com',
    first_name: 'Scooby',
    id: person:77xrs2c05oe9bmtgjbhq,
    last_name: 'Doo'
  }
]
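The id is made up of the table name and a record key separated by a colon. Purely as an illustration (plain Rust, no SurrealDB crate involved), here’s how such an id string breaks down, using the generated id from the output above:

```rust
// Illustrative only: split a SurrealDB-style record id ("table:key")
// into its two parts using plain Rust.
fn split_record_id(id: &str) -> Option<(&str, &str)> {
    id.split_once(':')
}

fn main() {
    let (table, key) = split_record_id("person:77xrs2c05oe9bmtgjbhq").unwrap();
    assert_eq!(table, "person");
    assert_eq!(key, "77xrs2c05oe9bmtgjbhq");
}
```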

We could have supplied an id ourselves like this

CREATE person CONTENT {
  first_name: "Fred",
  last_name: "Jones",
  age: 19,
  id: person:fredjones,
  email: "fred.jones@example.com"
};

We can update a row using

UPDATE person:77xrs2c05oe9bmtgjbhq SET first_name = "Scrappy", age = 23;

There are obviously more commands/queries we could use, but let’s move on to using the DB from Rust.

We’ll start by adding a few dependencies to Cargo.toml

[dependencies]
tokio = { version = "1.47.1", features = ["full"] }
surrealdb = "2.3.8"
serde = { version = "1.0.219", features = ["derive"] }

Next update main.rs to look like this

use surrealdb::Surreal;
use surrealdb::engine::remote::ws::Ws;
use std::error::Error;
use surrealdb::opt::auth::Root;

use surrealdb::sql::Thing;
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Person {
    id: Thing,
    first_name: String,
    last_name: String,
    email: String,
    age: u32,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.signin(Root { username: "root", password: "root" }).await?;
    db.use_ns("myns").use_db("mydb").await?;

    let result: Vec<Person> = db.query("SELECT * FROM person").await?.take(0)?;

    println!("{:?}", result);
    Ok(())
}

We’re using the default username and password. Of course you should change the password for this user and create your own user, but for now, let’s just get things up and running.

Notice that we connect to SurrealDB via a WebSocket.

You may have also noticed that in our Person struct we have an id field of type Thing. This is essentially a record pointer, holding the table name and record id.

Logging with Rust

Rust supports a logging facade, which we can include using the following dependency in Cargo.toml

[dependencies]
log = "0.4.28"

Now in our main.rs we can use the various levels of logging like this

use log::{info, warn, error, trace, debug, LevelFilter};

fn main() {
    debug!("Debug log message");
    trace!("Trace log message");
    info!("Info log message");
    warn!("Warning log message");
    error!("Error log message");
}

If you run this, nothing will be output because we need to add a logging provider.

One simple provider is env_logger which will log to standard out. To include, add the following to the Cargo.toml dependencies

env_logger = "0.11.8"

We’ll need to add the use clause

use env_logger::{Builder, Env};

and then we need to initialise env_logger; we can use the following at the start of the main function

env_logger::init();

This will only output ERROR messages; we can change the log level using an environment variable like this

RUST_LOG=trace

Alternatively we can set the environment variable within code by replacing the environment variable and the env_logger::init(); line with

let env = Env::default().filter_or("RUST_LOG", "trace");
Builder::from_env(env).init();

or we can set in code instead using

Builder::new()
   .filter_level(LevelFilter::Trace)
   .init();
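To see how severity filtering works conceptually (this is just an illustration, not env_logger’s actual implementation): a message is emitted only when its level is at or below the configured maximum verbosity.

```rust
// Conceptual sketch of log-level filtering (not env_logger's real code).
// Variants are ordered from most to least severe, so deriving PartialOrd
// gives Error < Warn < Info < Debug < Trace.
#[derive(Debug, PartialEq, PartialOrd)]
enum Level { Error, Warn, Info, Debug, Trace }

// A message passes the filter when its severity is within the
// configured maximum verbosity.
fn enabled(configured: &Level, message: &Level) -> bool {
    message <= configured
}

fn main() {
    assert!(enabled(&Level::Trace, &Level::Debug));  // trace filter lets debug through
    assert!(!enabled(&Level::Error, &Level::Info));  // error filter drops info
}
```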

Rust and reqwest

I’ve covered a few topics around Rust lately on this blog, hopefully around technologies that are most likely to be used in many real world applications. This post is about one of the missing pieces – how do we call our web APIs/services etc.

In C# we have the HttpClient which is the usual type for such use cases. With Rust there are various options, but as the title suggests, we’re going to concentrate on reqwest.

All we really need to do is supply a couple of crates to Cargo.toml, and as you’ll have guessed, one is reqwest. The other is tokio because I’m wanting to use async/await. So create yourself a project then update Cargo.toml to add these dependencies

reqwest = "0.12.23"
tokio = { version = "1.47.1", features = ["rt", "rt-multi-thread", "macros"] }

Next up, open src/main.rs (or create one) and let’s add a simple GET call

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let result = reqwest::get("https://httpbin.org/get")
        .await?
        .text()
        .await?;

    println!("{}", result);
    Ok(())
}

This is a “shortcut” for making a GET request.

The following is the longer form and is what we’d use for other HTTP methods. Here I’m showing how we can create a RequestBuilder (the type returned from client.get), then send it and retrieve the response

let client = reqwest::Client::new();
let result = client.get("https://httpbin.org/get");
let result = result.send().await?.text().await?;

Other HTTP methods, such as POST, DELETE etc. can be created from the RequestBuilder, for example

let post = client.post("https://httpbin.org/post")
  .body("hello world")
  .header("Content-Type", "text/plain");

let result = post.send().await?.text().await?;

JSON instead of plain text

Often we’ll want to deserialize to types, i.e. via JSON, so update Cargo.toml to look like this

[dependencies]
reqwest = { version = "0.12.23", features = ["json"] }
tokio = { version = "1.47.1", features = ["rt", "rt-multi-thread", "macros"] }
serde = { version = "1.0.219", features = ["derive"] }

Now change main.rs to add this code to the start of the file

use serde::Deserialize;

#[derive(Deserialize)]
struct ApiResponse {
    message: String,
}

ApiResponse will represent our object and we use the following

let client = reqwest::Client::new();
let result = client.get("https://your_api");
let result = result
        .send()
        .await?
        .json::<ApiResponse>()
        .await?;

println!("{}", result.message);

Rust Rocket (and openapi/swagger)

Rocket is a crate that allows us to build web servers and web-based applications such as web APIs.

We’ll start by just creating a simple endpoint, and then we’ll look at adding Open API and swagger support.

Starting simple

Create yourself a Rust package, for example with cargo

cargo new myapi --bin

add the following dependency to your Cargo.toml

rocket = "0.5.1"

Now create a Rocket.toml so we can configure Rocket’s server; mine looks like this

[default]
port = 8080
address = "127.0.0.1"

We need a main.rs (so you can delete the lib.rs if you wish for now) and here’s a very basic starting point

#[macro_use] extern crate rocket;

#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

#[launch]
fn rocket() -> _ {
    rocket::build().mount("/", routes![index])
}

Now, run the application using

cargo run

As you can see this is a minimal API style, i.e. we create a function supplying it with an HTTP method and we add it to the routes list.

Adding the usual echo endpoint

Now let’s add my version of “Hello World” for APIs, a simple echo endpoint.

#[get("/echo?<text>")]
fn echo(text: &str) -> String {
    format!("Echo: {}", text)
}
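Rocket parses the ?text= query parameter for us before calling the handler. Purely to illustrate what that amounts to, here’s a hypothetical stdlib-only sketch (not Rocket’s implementation):

```rust
// Hypothetical sketch of what the echo route boils down to: pull a
// "text" parameter from a query string and format the reply.
fn echo_from_query(query: &str) -> String {
    let text = query
        .split('&')
        .find_map(|pair| pair.strip_prefix("text="))
        .unwrap_or("");
    format!("Echo: {}", text)
}

fn main() {
    assert_eq!(echo_from_query("text=hello"), "Echo: hello");
    assert_eq!(echo_from_query(""), "Echo: ");
}
```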

Now add this to the routes i.e.

#[launch]
fn rocket() -> _ {
    rocket::build().mount("/", routes![index, echo])
}

Add Open API and Swagger

Now we have a couple of simple endpoints, let’s add Open API and Swagger and change the echo endpoint to use Json. I’m purposefully going to keep the index as non-Open API just to demonstrate running both Open API and non-Open API endpoints.

We’re going to need a few additions to our Cargo.toml – now, unfortunately it’s easy to end up with multiple versions of these dependencies, so the ones shown here will work together without warnings/errors

[dependencies]
rocket = { version = "0.5.1", features = ["json"] }
openapi = "0.1.5"
serde = "1.0.219"
rocket_okapi = { version = "0.9.0", features = ["swagger"] }
schemars = "0.8.22"

Notice we’re adding features to the rocket crate and we’ve got some crates for swagger and open api. The schemars crate 0.8.22 was being used by other crates, hence I locked this down to the same version.

We’ll extend our echo endpoint to return Json, but before we do I’ll list the use clauses that appear after #[macro_use] extern crate rocket;

use rocket::serde::{Serialize, json::Json};
use rocket_okapi::{openapi, openapi_get_routes};
use rocket_okapi::swagger_ui::{make_swagger_ui, SwaggerUIConfig};
use schemars::JsonSchema;

We’ll create a response object for the echo endpoint, then update the endpoint both to return this type and to add the openapi attribute, which allows an Open API spec to be generated for it

#[derive(Serialize, JsonSchema)]
struct EchoResponse {
    message: String,
}

#[openapi]
#[get("/echo?<text>")]
fn echo(text: &str) -> Json<EchoResponse> {
    Json(EchoResponse {
        message: format!("Echo: {}", text),
    })
}

Next up we need to change the rocket function, so let’s just see the latest version

#[launch]
fn rocket() -> _ {
    rocket::build()
        .mount("/", routes![index])
        .mount("/", openapi_get_routes![echo])
        .mount(
            "/swagger",
            make_swagger_ui(&SwaggerUIConfig {
                url: "/openapi.json".to_owned(),
                ..Default::default()
            })
        )
}

Now, I purposefully left the index route without an open api attribute just to demonstrate that, if you have such endpoints, you still need to use the routes! macro. If you add index to openapi_get_routes! without the open api attribute you’ll get some slightly ambiguous errors, such as “a function with a similar name exists”.

Now run your application and go to http://localhost:8080/swagger/index.html to interact with your endpoints via the Swagger UI. You can also access the openapi.json file using http://localhost:8080/openapi.json.

Code

Available on GitHub.

Rust and gRPC (with Protocol Buffers)

Back in 2018 I published a couple of posts around Using Protocol Buffers and Using gRPC with Protocol Buffers.

For this post we’re going to look at using gRPC and Protocol Buffers from Rust.

Getting Started

Before we begin to do anything in Rust we’ll need protoc on our machine, so checkout https://github.com/protocolbuffers/protobuf/releases for a release.

Note: On Windows we can just use winget install protobuf then run protoc --version to check it was installed.

Also ensure protoc.exe is in your path or set-up via your development tools – in my case I’m using JetBrains RustRover and added the environment variable PROTOC to the project configuration with a value of C:\Users\{your-username}\AppData\Local\Microsoft\WinGet\Links\protoc.exe as I installed on Windows via winget.

Next, let’s create the bare bones project.

  • Create yourself a folder for your project then…
  • Run the following (replace rust_grpc with your project name)
    cargo new rust_grpc
    
  • cd into the folder just created
  • Update the Cargo.toml to include the following dependencies and build-dependencies
    [dependencies]
    tonic = "0.14.2"
    tokio = { version = "1", features = ["full"] }
    prost = "0.14.1"
    tonic-prost = "0.14.2"
    
    [build-dependencies]
    tonic-prost-build = "0.14.2"
    

Obviously change the dependency versions to suit.

The build-dependencies will generate the source code from our .proto file.

Creating the proto file(s)

Let’s create a simple proto file.

  • Create a folder, we’ll use a standard name, so ours is called proto off of the root folder
  • Create a file named hello.proto and copy the code below into it (this is a sort of “Hello World” of proto files)
    syntax = "proto3";
    
    package hello;
    
    service Greeter {
      rpc SayHello (HelloRequest) returns (HelloReply);
    }
    
    message HelloRequest {
      string name = 1;
    }
    
    message HelloReply {
      string message = 1;
    }
    
  • To generate the code from the .proto, create a build.rs file with the following code
    use tonic_prost_build::configure;
    
    fn main() -> Result<(), Box<dyn std::error::Error>> {
        configure()
            .out_dir("src/generated")
            .compile_protos(&["proto/hello.proto"], &["proto"])
            .unwrap();
        Ok(())
    }
    

I had problems getting build.rs to generate source for the proto file, so you might need to create the folder src/generated before building. Also, the proto folder lives off the project root, i.e. alongside the src folder as mentioned previously, so ensure that’s correct.

To generate the source files for the project we can run the build from a tool such as RustRover or use cargo build from your project folder.

I’m not going to include the whole file that’s generated but you should see bits like the following

#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)]
pub struct HelloRequest {
    #[prost(string, tag = "1")]
    pub name: ::prost::alloc::string::String,
}
#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)]
pub struct HelloReply {
    #[prost(string, tag = "1")]
    pub message: ::prost::alloc::string::String,
}
/// Generated client implementations.
pub mod greeter_client {
    #![allow(
        unused_variables,
        dead_code,
        missing_docs,
        clippy::wildcard_imports,
        clippy::let_unit_value,
    )]

As you can see we have representations of the request and reply from the .proto file.

I also added a mod.rs file to the src/generated folder which looks like this

pub mod hello;

This will make our generated source available to the main.rs file for importing.

This example exists in the tonic GitHub repo https://github.com/hyperium/tonic/tree/master/examples/src/helloworld – I hadn’t realised this when I started, but I would suggest you check out their examples.

I’m going to place everything in the main.rs file for simplicity, but of course the code should be split into client, server and main code when used in anything other than such a simple example. Let’s look at each section of code separately…

We have a GreeterServer generated from our proto code but we need to create the equivalent of an “endpoint” or “service”, so we’ll create the service with the following code

#[derive(Default)]
pub struct GreeterService {}

#[tonic::async_trait]
impl Greeter for GreeterService {
    async fn say_hello(
        &self,
        request: Request<HelloRequest>,
    ) -> Result<Response<HelloReply>, Status> {
        let name = request.into_inner().name;
        let reply = HelloReply {
            message: format!("Hello, {}!", name),
        };
        Ok(Response::new(reply))
    }
}

This essentially responds to a HelloRequest returning a HelloReply – as mentioned, think of this as your service endpoint.
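One way to keep such handlers testable is to separate the pure logic from the tonic plumbing; a small sketch of that idea (the greet function here is my own naming, not part of the generated code):

```rust
// The handler's logic extracted as a pure function; the tonic handler
// would simply wrap this result in a HelloReply and a Response.
fn greet(name: &str) -> String {
    format!("Hello, {}!", name)
}

fn main() {
    assert_eq!(greet("PutridParrot"), "Hello, PutridParrot!");
}
```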

We’re going to need to create a server, which will look like this

async fn grpc_server() -> Result<(), Box<dyn std::error::Error>> {
    let addr = "[::1]:50051".parse()?;
    let greeter = GreeterService::default();

    println!("Server listening on {}", addr);

    Server::builder()
        .add_service(GreeterServer::new(greeter))
        .serve(addr)
        .await?;

    Ok(())
}

Notice that we are indeed creating a server, listening on a port. We supply the service to the Server::builder via add_service and that’s pretty much it.

Next we’re going to need a client to send some requests, so here’s an example

async fn grpc_client() -> Result<(), Box<dyn std::error::Error>> {

    let mut client = GreeterClient::connect("http://[::1]:50051").await?;

    let request = Request::new(HelloRequest {
        name: "PutridParrot".into(),
    });
    let response = client.say_hello(request).await?;

    println!("Response is {:?}", response.into_inner().message);
    Ok(())
}

Of course the client connects to the server, creates a request and sends it to the server via the say_hello function. This is a call via the generated code, not to be confused with the GreeterService function of the same name; the call goes over the wire to the server and is handled by the GreeterService’s say_hello function.

We await the response and println! it.

Now let’s just create a simple main/entry point to run the server, then run the client and get a response (again this is kept simple for ease of using the one file, main.rs, and of course should be separated in real-world use).

Note: I’ll also include all the use code as well in this sample

mod generated;

use tokio::spawn;
use tokio::time::{sleep, Duration};
use tonic::{Request, Response, Status};
use tonic::transport::Server;
use crate::generated::hello::greeter_client::GreeterClient;
use crate::generated::hello::greeter_server::{Greeter, GreeterServer};
use crate::generated::hello::{HelloReply, HelloRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    spawn(async {
        grpc_server().await.unwrap();
    });

    sleep(Duration::from_millis(500)).await;

    grpc_client().await?;
    
    Ok(())
}

Use cargo run or run via RustRover or your preferred development tools and you should see

Server listening on [::1]:50051
Response is "Hello, PutridParrot!"

Code

Code is available on GitHub. Don’t forget to install protoc and ensure the path is set if you wish to run the code.

TailwindCSS with Yew

We’ve looked at using Yew with Rust in my post WASM with Rust (and Yew), let’s take things a step further by adding TailwindCSS with Yew.

  • Install the tailwind-cli
  • In your root folder run
    npm init 
    

    just select the defaults to get things started

  • Now let’s install tailwindcss using
    npm install --save-dev tailwindcss
    
  • Ensure tailwindcss cli is installed to your package using
    npm install tailwindcss @tailwindcss/cli
    
  • You can actually remove pretty much everything from the package.json file, mine looks like this
    {
      "dependencies": {
        "@tailwindcss/cli": "^4.1.12"
      }
    }
    
  • Create a styles folder and add a tailwind.css file to it
  • Change the tailwind.config.js to look like this
    /** @type {import('tailwindcss').Config} */
    module.exports = {
        content: [
            "./index.html",
            "./src/**/*.rs"
        ],
        theme: {
            extend: {},
        },
        plugins: [],
    }
    

Using Tailwindcss

We’re going to want to use the tailwindcss CLI to generate our application’s .css file into the ./dist folder, otherwise our application will not be able to use it, but we’ll leave that step until a little later. For now let’s start by manually running the CLI to generate our .css file, which we’ll store in our root application folder.

Run the following command from the terminal

npx @tailwindcss/cli -i ./styles/tailwind.css -o ./app.css

This generates an app.css file. It’s not a lot of use at this point, but you can take a look at what the tailwindcss CLI produces in this app.css file.

What we really want to do, as part of the build, is generate the file and then reference it from our application.

As we’re using trunk from our previous posts, we can use the trunk hooks to generate the file. I’m running on Windows so will use powershell to run the command. Open the Trunk.toml file (I’ll include my whole file below) and add the [[hooks]] section where we will create a pre-build step that generates the app.css in the application root as we did manually

[serve]
address = "127.0.0.1"
port = 8080

[[hooks]]
stage = "pre_build"
command = "powershell"
command_arguments = ["-Command", "npx" , "@tailwindcss/cli -i ./styles/tailwind.css -o ./app.css"]

However, as mentioned already we need this file in the ./dist folder. Copying the file is no use as we need to link to the file in our index.html file.

We do this by creating the link to our app.css but marking it so that trunk will generate the file in the ./dist folder for us. Here’s the index.html

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>My WASM App</title>
    <link data-trunk rel="sass" href="index.scss" />
    <link data-trunk rel="css" href="app.css" />
  </head>
  <body></body>
</html>

Note the use of data-trunk for both our app.css but also the default created index.scss.

Using the TailwindCSS in our code

We’ve now got the .css file generated as part of the trunk build and copied to ./dist, so we can use the CSS classes in our code. Here’s an example of a simple layout

use yew::prelude::*;

#[function_component(Layout)]
pub fn layout() -> Html {
  html! {
    <div class="min-h-screen w-screen flex flex-col">
      <nav class="bg-red-800 text-white px-6 py-4 flex justify-between items-center">
        <div class="text-base font-semibold">{ "My Application"  }</div>
        <ul class="flex space-x-6">
          <li><a href="#" class="hover:text-gray-300 text-base">{"Home"}</a></li>
          <li><a href="#" class="hover:text-gray-300 text-base">{"About"}</a></li>
          <li><a href="#" class="hover:text-gray-300 text-base">{"Services"}</a></li>
          <li><a href="/counter" class="hover:text-gray-300 text-base">{"Counter"}</a></li>
        </ul>
     </nav>
     <main class="flex-grow bg-gray-100 p-6">
       <p class="text-gray-700">{"Content goes here"}</p>
     </main>
   </div>
  }
}

Here’s an example of the counter code

use yew::prelude::*;

#[function_component(Counter)]
pub fn counter() -> Html {
    let counter = use_state(|| 0);
    let on_add_click = {
        let c = counter.clone();
        move |_| { c.set(*c + 1); }
    };

    let on_subtract_click = {
        let c = counter.clone();
        move |_| { c.set(*c - 1); }
    };

    html! {
        <div>
            <button onclick={on_add_click}
                style="width: 100px;"
                class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded">
                    { "+1" }</button>
            <p style="text-align: center">{ *counter }</p>
            <button onclick={on_subtract_click}
                style="width: 100px;"
                class="bg-emerald-500 hover:bg-emerald-700 text-white font-bold py-2 px-4 rounded">
                    { "-1" }</button>
        </div>
    }
}

A simple web API in various languages and deployable to Kubernetes (Java)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to a Java implementation.

Implementation

I’m going to be using JetBrains IntelliJ.

  • Create a new Java Project, selecting Maven as the build system
  • We’re going to use Spring Boot, so add the following to the pom.xml
    <dependencies>
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <version>3.5.5</version>
      </dependency>
    </dependencies>
    
  • We’re also going to want to use the Spring Boot Maven plugin to generate our JAR and Manifest
    <build>
      <plugins>
        <plugin>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-maven-plugin</artifactId>
          <version>3.5.5</version>
          <executions>
            <execution>
              <goals>
                <goal>repackage</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
    
  • Let’s delete the supplied Main.java file and replace it with one named EchoServiceApplication.java which looks like this
    package com.putridparrot;
    
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    
    import java.util.HashMap;
    import java.util.Map;
    
    @SpringBootApplication
    public class EchoServiceApplication {
        public static void main(String[] args) {
            SpringApplication app = new SpringApplication(EchoServiceApplication.class);
            Map<String, Object> props = new HashMap<>();
            props.put("server.port", System.getenv("PORT"));
            app.setDefaultProperties(props);
            app.run(args);
        }
    }
    

    We’re setting the PORT here from the environment variable as this will be supplied via the Dockerfile

  • Next add a new file named EchoController.java which will look like this
    package com.putridparrot;
    
    import org.springframework.web.bind.annotation.*;
    
    @RestController
    public class EchoController {
    
        @GetMapping("/echo")
        public String echo(@RequestParam(name = "text", defaultValue = "") String text) {
            return String.format("Java Echo: %s", text);
        }
    
        @GetMapping("/readyz")
        public String readyz() {
            return "OK";
        }
    
        @GetMapping("/livez")
        public String livez() {
            return "OK";
        }
    }
    

Dockerfile

Next up we need to create our Dockerfile

FROM maven:3.9.11-eclipse-temurin-21 AS builder

WORKDIR /app

COPY . .
RUN mvn clean package -DskipTests

FROM eclipse-temurin:21-jre AS runtime

WORKDIR /app

COPY --from=builder /app/target/echo_service-1.0-SNAPSHOT.jar ./echo-service.jar

ENV PORT=8080
EXPOSE 8080

ENTRYPOINT ["java", "-jar", "echo-service.jar"]

Note: In Linux port 80 might be locked down, hence we use port 8080 – the application reads the PORT environment variable, which we set in the Dockerfile.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

To test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file. We’ll store all the configuration – deployment, service and ingress – in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; this also assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

A simple web API in various languages and deployable to Kubernetes (Elixir)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to an Elixir implementation.

Implementation

I’m going to be using Visual Studio Code and dev containers, so I created a folder echo_service which has a folder named .devcontainer with the following devcontainer.json

{
    "image": "elixir",
    "forwardPorts": [3000]
}

Next I opened Visual Studio Code in the echo_service folder; it should detect the devcontainer and ask if you want to reopen in the devcontainer, which we do.

I’m going to use Phoenix Server (phx), so I open the terminal in Visual Studio Code and run the following

  • I needed to install the phx installer, using

    mix archive.install hex phx_new
    
  • Next I want to generate a minimal phx server, hence run the following

    mix phx.new echo_service --no-html --no-ecto --no-mailer --no-dashboard --no-assets --no-gettext
    

    When this prompt appears, type y

    Fetch and install dependencies? [Yn]
    
  • Now cd into the newly created echo_service folder
  • To check everything is working, run
    mix phx.server
    

Next we need to add a couple of controllers (well, we could just use one, but I’m going to create an Echo controller and a Health controller). So in lib/echo_service_web/controllers add the files echo_controller.ex and health_controller.ex

The echo_controller.ex looks like this

defmodule EchoServiceWeb.EchoController do
  use Phoenix.Controller, formats: [:html, :json]

  def index(conn, %{"text" => text}) do
    send_resp(conn, 200, "Elixir Echo: #{text}")
  end
end

The health_controller.ex should look like this

defmodule EchoServiceWeb.HealthController do
  use Phoenix.Controller, formats: [:html, :json]

  def livez(conn, _params) do
    send_resp(conn, 200, "Live")
  end

  def readyz(conn, _params) do
    send_resp(conn, 200, "Ready")
  end
end

In the parent folder (i.e. lib/echo_service_web) edit the router.ex so it looks like this

defmodule EchoServiceWeb.Router do
  use EchoServiceWeb, :router

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/", EchoServiceWeb do
    # pipe_through :api
    get "/echo", EchoController, :index
    get "/livez", HealthController, :livez
    get "/readyz", HealthController, :readyz
  end
end

Now we can run mix phx.server again (ctrl+c twice to shut any existing running instance).

Dockerfile

Next up we need to create our Dockerfile

FROM elixir:latest

RUN mkdir /app
COPY . /app
WORKDIR /app

RUN mix local.hex --force
RUN mix deps.get
RUN mix compile

ENV PORT=8080
EXPOSE 8080

CMD ["mix", "phx.server"]

Note: In Linux, ports below 1024 (such as port 80) may require elevated privileges, hence we use port 8080 – to override the default port in phx we also set the environment variable PORT.
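Since the Dockerfile copies the whole folder into the image, it’s worth adding a .dockerignore alongside it so build artefacts and fetched dependencies aren’t sent in the build context – a minimal suggestion (not something the steps above generate for you):

```
_build/
deps/
.git/
.elixir_ls/
```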

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

To test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so next we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

Running ollama locally

Installing ollama locally is easily done using docker, for example

docker run -d -v "c:\temp\ollama:/root/.ollama" -p 11434:11434 --name ollama ollama/ollama

Next we’ll want to pull in a model, for example using phi3

docker exec -it ollama ollama run phi3 

We have several phi3 models, phi3:mini, phi3:medium and phi3:medium-128k (requires Ollama 0.1.39+).

Other options include mistral, llama2 or openhermes – just replace phi3 with your preferred model.

On running the exec command we get a “prompt” and can start a “chat” with the model.

Use “/bye” to exit the prompt.
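Because the docker run command above maps port 11434, Ollama’s REST API is also reachable from code. Here’s a minimal sketch in Elixir using OTP’s built-in :httpc client, assuming Ollama is listening on localhost:11434 and the phi3 model has been pulled; to keep it dependency-free the JSON body is built by hand and the response is printed raw rather than parsed

```elixir
# Start the inets application so :httpc is available
:inets.start()

# Ollama's /api/generate endpoint; stream: false returns a single JSON reply
body = ~s({"model": "phi3", "prompt": "Why is the sky blue?", "stream": false})

{:ok, {{_, 200, _}, _headers, response}} =
  :httpc.request(
    :post,
    # :httpc expects the URL and content type as charlists
    {~c"http://localhost:11434/api/generate", [], ~c"application/json", body},
    [],
    []
  )

IO.puts(response)
```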

Using Valkey (a Redis compatible memory data store)

Valkey is an in-memory, high performance key/value store. It’s Redis compatible which means we can use the same protocols, clients etc.

We can run up an instance via docker using

docker run --name valkey-cache -d -p 6379:6379 valkey/valkey

The valkey client can be run from the instance using

docker exec -ti valkey-cache valkey-cli

As mentioned, we can use existing tools that work with Redis, so here’s a docker-compose.yaml to run up a Valkey instance along with redis-commander

services:
  valkey:
    image: valkey/valkey
    container_name: valkey
    ports:
      - "6379:6379"

  redis-commander:
    image: rediscommander/redis-commander:latest
    container_name: redis-commander
    ports:
      - "8080:8080"
    environment:
      REDIS_HOSTS: valkey:valkey:6379

Now we can use localhost:8080 and view/interact with our data store via a browser.

valkey-cli

From the valkey-cli we can run various commands to add a key/value, get it, delete it etc.

Command | Description
KEYS '*' | List all keys
SET mykey "Hello" | Add/set a key/value
GET mykey | Get the value for the given key
EXISTS mykey | Check if the key exists
DEL mykey | Delete the key/value
EXPIRE mykey 60 | Set an expiry on the key (60 seconds in this example)
TTL mykey | Check the time to live for a key
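Since Valkey speaks the Redis protocol, these same commands work unchanged from an Elixir Redis client such as Redix (an assumed dependency, e.g. {:redix, "~> 1.2"} in mix.exs). A sketch assuming Valkey is running on localhost:6379

```elixir
# Connect to the Valkey instance (Redis protocol)
{:ok, conn} = Redix.start_link(host: "localhost", port: 6379)

# SET and GET a key/value
{:ok, "OK"} = Redix.command(conn, ["SET", "mykey", "Hello"])
{:ok, "Hello"} = Redix.command(conn, ["GET", "mykey"])

# Expire the key after 60 seconds, then check its time to live
{:ok, 1} = Redix.command(conn, ["EXPIRE", "mykey", "60"])
{:ok, ttl} = Redix.command(conn, ["TTL", "mykey"])
IO.puts("mykey expires in #{ttl}s")

# Delete the key/value
{:ok, 1} = Redix.command(conn, ["DEL", "mykey"])
```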