Continuing this short series on writing a simple echo service web API, along with the Docker and Kubernetes requirements, we’re now going to turn our attention to a Rust implementation.
Implementation
I’m using JetBrains RustRover for this project, so I created a project named echo_service.
Next, add the following to the [dependencies] section of Cargo.toml
axum = "0.7"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
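For reference, the full Cargo.toml might look something like this (the package name and edition here are assumptions based on the project name):

```
[package]
name = "echo_service"
version = "0.1.0"
edition = "2021"

[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
```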
and now the main.rs can be replaced with
use axum::{
    extract::Query,
    http::StatusCode,
    response::IntoResponse,
    routing::get,
    serve, Router,
};
use serde::Deserialize;
use std::net::SocketAddr;
use tokio::net::TcpListener;

#[derive(Deserialize)]
struct EchoParams {
    text: Option<String>,
}

// Echoes back the "text" query parameter, e.g. /echo?text=hello
async fn echo(Query(params): Query<EchoParams>) -> String {
    format!("Rust Echo: {}", params.text.unwrap_or_default())
}

// Liveness probe endpoint for Kubernetes
async fn livez() -> impl IntoResponse {
    (StatusCode::OK, "OK")
}

// Readiness probe endpoint for Kubernetes
async fn readyz() -> impl IntoResponse {
    (StatusCode::OK, "Ready")
}

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/echo", get(echo))
        .route("/livez", get(livez))
        .route("/readyz", get(readyz));

    let addr = SocketAddr::from(([0, 0, 0, 0], 8080));
    println!("Running on http://{}", addr);

    let listener = TcpListener::bind(addr).await.unwrap();
    serve(listener, app).await.unwrap();
}
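Before containerizing anything, we can check the service works locally with cargo run and curl (assuming the default port 8080 as above):

```shell
# Start the service from the project root
cargo run

# In another terminal, exercise the endpoints
curl "http://localhost:8080/echo?text=hello"
# Rust Echo: hello
curl "http://localhost:8080/livez"
# OK
curl "http://localhost:8080/readyz"
# Ready
```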
Dockerfile
Next up, we need to create our Dockerfile
FROM rust:1.72-slim AS builder
WORKDIR /app
COPY . .
RUN cargo build --release
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/echo_service /usr/local/bin/echo_service
RUN chmod +x /usr/local/bin/echo_service
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/echo_service"]
Note: On Linux, binding to ports below 1024 (such as 80) requires elevated privileges, hence we use port 8080 by default.
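Because the Dockerfile COPYs the whole project directory into the build context, it’s also worth adding a .dockerignore alongside it so the local target directory (which can be very large) and VCS metadata aren’t sent to the Docker daemon:

```
target
.git
```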
To build this, run
docker build -t putridparrot.echo_service:v1 .
Don’t forget to change the name to your preferred name.
and to test this, run
docker run -p 8080:8080 putridparrot.echo_service:v1
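With the container running, the same endpoints should respond on the mapped host port; when finished, the container can be stopped by ID (the container ID placeholder below is, of course, whatever docker ps reports):

```shell
# The service inside the container listens on 8080, mapped to 8080 on the host
curl "http://localhost:8080/echo?text=docker"
# Rust Echo: docker

# Find and stop the container when finished
docker ps --filter ancestor=putridparrot.echo_service:v1
docker stop <container-id>
```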
Kubernetes
If all went well, we’ve now tested our application and seen it working from a Docker image, so next we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.
I’m also not going to use helm at this point as I just want a (relatively) simple YAML file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: putridparrotreg/putridparrot.echo_service:v1
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "100Mi"
              cpu: "100m"
            limits:
              memory: "200Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /livez
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  # Kubernetes resource names must be valid DNS-1123 labels,
  # so we use a hyphen rather than an underscore
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo-service
                port:
                  number: 80
Don’t forget to change the “host” and image to suit; also, this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.
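With the image pushed and the YAML saved, a typical sequence to deploy and verify might look like the following (the ingress IP placeholder is whatever your ingress controller exposes):

```shell
# Create the namespace if it doesn't already exist
kubectl create namespace dev

# Apply the deployment, service and ingress
kubectl apply -f deployment.yaml

# Check the pod is running and the probes are passing
kubectl -n dev get pods
kubectl -n dev describe deployment echo

# Test via the ingress, overriding the Host header if DNS isn't set up
curl -H "Host: mydomain.com" "http://<ingress-ip>/echo?text=k8s"
```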