Azure Static Web App Preview Environments

If you’re using something like GitHub Actions to deploy your static web app to Azure, you might not realise that you can have your PRs deployed to “Preview” environments.

Go to your static web app and select Settings | Environment, and your PRs will have a deployment listed in the preview environment section.

If the URL for your site is something like this

https://green-bat-01ba34c03-26.westeurope.3.azurestaticapps.net

The 26 is the PR number, so the pattern is

https://green-bat-01ba34c03-{PR Number}.westeurope.3.azurestaticapps.net

You can then simply open that URL in your browser and verify everything looks and works correctly before merging to main.

If you’ve set up your PR close workflow to correctly delete these preview environments, for example from GitHub Actions

- name: Close Pull Request
  id: closepullrequest
  uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    app_location: "./dist"
    action: "close"

then the preview environment will be deleted once your PR is merged.

However, as I found, if your PR close step is not working correctly, these preview environments accumulate until you reach the maximum number allowed. At that point you get an error trying to build the PR, and it cannot be deployed to a preview environment until you select and delete some of the stale ones.

Cancellation tokens in Rust

When using tokio::spawn we might wish to pass through a cancellation token to allow us to cancel a long-running task.

We can create a cancellation token like this

let token = CancellationToken::new();

From this we could take one or more child tokens like this

let child = token.child_token();

Using child tokens allows us to cancel all child tokens from the parent, or we can cancel each one individually.

Now we spawn our task; tokio::select! creates two concurrent branches and the first one to complete provides the result. We store the JoinHandle so we can force the application to wait for completion and get some meaningful output to the console.

let handle = tokio::spawn(async move {
  tokio::select! {
    _ = child.cancelled() => {
      println!("Task cancelled");
    }
    _ = tokio::time::sleep(Duration::from_secs(30)) => {
      println!("Task timed out");
    }
  }
});

Here’s the full code, starting with the Cargo.toml dependencies

[dependencies]
tokio-util = "0.7.17"
tokio = { version = "1.48.0", features = ["rt", "rt-multi-thread", "macros", "time"] }
anyhow = "1.0.100"

Now the main.rs code

use std::io;
use std::time::Duration;
use tokio_util::sync::CancellationToken;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let token = CancellationToken::new();
    let child = token.child_token();

    let handle = tokio::spawn(async move {
        tokio::select! {
            _ = child.cancelled() => {
                println!("Task cancelled");
            }
            _ = tokio::time::sleep(Duration::from_secs(30)) => {
                println!("Task timed out");
            }
        }
    });

    io::stdin().read_line(&mut String::new())?;
    token.cancel();

    handle.await.expect("Task panicked");
    println!("Task Completed");

    Ok(())
}

Git worktree

Git worktree allows us to maintain multiple working directories for a single repository; the benefit is that we can work on multiple branches at the same time.

Of course your first question might be: well, I can already check out a repo multiple times in different folders/directories, so what’s the point?

Let’s compare multiple checkouts against git worktree

  • Disk usage
    • Multiple checkouts: Each clone has a full .git directory, incl. all refs etc.
    • Git worktree: All worktrees share a single .git folder and are hence lightweight
  • Performance
    • Multiple checkouts: Cloning is slower, especially on large repos
    • Git worktree: As no clone takes place and the existing .git is reused, this is very fast
  • Consistency
    • Multiple checkouts: Clones can drift, i.e. different remotes, configs etc.
    • Git worktree: As worktrees share .git they share remotes, configs etc.
  • Branch Isolation
    • Multiple checkouts: With multiple checkouts, can become confusing
    • Git worktree: Only allows one instance of a branch to be checked out, avoiding divergence
  • Maintenance Overhead
    • Multiple checkouts: Multiple clones means multiple .git directories and folders to maintain
    • Git worktree: A central .git folder with lightweight trees
  • Use Cases
    • Multiple checkouts: Great if using multiple remotes and sandboxing
    • Git worktree: Great for parallel branch work

Usage

I find it easiest to have a folder named after your application, for example MyApp, then clone your repo into that folder and name the clone main (or whatever your root branch is named). The reason I find this best is that when we start using git worktree I want the branches to sit next to the main branch, as opposed to mingling with other cloned applications; it just makes it simpler to see pretty quickly what worktrees you have.

If we then assume I have something like c:\repos\myapp\main, cd into the main folder and run

git worktree list

This will show any worktrees you have and their file locations.

Let’s create a new worktree

git worktree add ../ka-hotfix -b hotfix

If you already have a branch named hotfix then you can omit the -b.

This will have created a new worktree on the branch hotfix in the folder ../ka-hotfix; at this point, if you run git worktree list again, you’ll see that the branch was created and the worktree path is a sibling of our main folder.

If you cd into your new branch’s folder you’ll notice that even if you had changes in main, the branch is clean (i.e. no changes). You therefore do not need to stash changes as you would when switching between branches in a single folder (i.e. not using worktrees).

From either main or your new branch worktree you’ll find git branch lists all the branches, because they’re using the same .git folder.

When we have a branch checked out in one worktree we cannot check it out again; the root .git shows it’s already checked out in a different worktree.

We can delete a worktree’s folder, but if we run git worktree list again it still shows in the list even though it’s been deleted; it should be displayed as prunable. We can now run

git worktree prune

to remove anything no longer in a worktree.

The thing to remember is that the branch still exists; it’s just the worktree that’s removed.

I’m not going to go into every worktree command, but the following is a list of some of the options/subcommands for you to investigate

git worktree add
git worktree list
git worktree move
git worktree remove
git worktree repair
git worktree lock
git worktree unlock

Different ways of working with the HttpClient

A few years back I wrote the post .NET HttpClient – the correct way to use it.

I wanted to extend this discussion to the other ways of using/instantiating your HttpClient.

We’ll look at this from the view of the way we’d usually configure things for ASP.NET but these are not limited to ASP.NET.

IHttpClientFactory

Instead of passing an HttpClient to a class (such as a controller) we might prefer to use the IHttpClientFactory. This allows us to inject the factory and create an instance of an HttpClient using the method CreateClient, for example in our Program.cs

builder.Services.AddHttpClient();

then in our code which uses the IHttpClientFactory

public class ExternalService(IHttpClientFactory httpClientFactory)
{
   public async Task LoadData()
   {
      var httpClient = httpClientFactory.CreateClient();
      // use the httpClient as usual
   }
}

This might not seem much of an advancement over passing around HttpClients.

Where this is really useful is in allowing us to configure our HttpClient, such as base address, timeouts etc. In this situation we can use “named” clients. We’d assign a name to the client such as

builder.Services.AddHttpClient("external", client => {
  client.BaseAddress = new Uri("https://some_url");
  client.Timeout = TimeSpan.FromMinutes(3);
});

Now in usage we’d write the following

var httpClient = httpClientFactory.CreateClient("external");

We can now configure multiple clients with different names for use in different scenarios. We can also add policy and message handlers, for example

builder.Services.AddHttpClient("external", client => {
  // configuration for the client
})
.AddHttpMessageHandler<SomeMessageHandler>()
.AddPolicyHandler(SomePolicy());

Strongly Typed Client

Typed or strongly typed clients are another way of using the HttpClient; weirdly, this looks much more like our old way of passing HttpClients around.

We create a class specific to the HttpClient usage, and have an HttpClient parameter on the constructor, for example

public class ExternalHttpClient : IExternalHttpClient
{
  private readonly HttpClient _httpClient;

  public ExternalHttpClient(HttpClient httpClient)
  {
    _httpClient = httpClient;
    _httpClient.BaseAddress = new Uri("https://some_url");
    _httpClient.Timeout = TimeSpan.FromMinutes(3);
  }

  public Task<SomeData> GetDataAsync()
  {
     // use _httpClient as usual
  }
}

We’d now need to add the client to the dependency injection in Program.cs using

builder.Services.AddHttpClient<IExternalHttpClient, ExternalHttpClient>();

Conclusions

The first question might be: why use strongly typed HttpClients over IHttpClientFactory? The most obvious response is that it gives a cleaner design, i.e. we don’t use “magic strings” and we know which client does what, as the class exposes the endpoint methods to the developer. Essentially it encapsulates our HttpClient usage for a specific endpoint. It also gives us a cleaner way of testing our code, allowing us to mock the interface only (not having to mock an IHttpClientFactory etc.).

However, the IHttpClientFactory way of working gives us a central place where we’d generally have all our clients declared and configured. Named clients allow us to switch between clients easily using the name, and it also gives great integration with things like Polly.

Calling Orleans from ASP.NET

In my last post Getting started with Orleans we covered a lot of ground on the basics of setting up and using Orleans. It’s quite likely you’ll be wanting to use ASP.NET as an entry point to your Orleans code, so let’s look at how we might set this up.

Create yourself an ASP.NET core project, I’m using controllers but minimal API is also fine (I just happened to have the option to use controllers selected).

After you’ve created your application, clear out the weather forecast code etc. if you created the default sample.

Add a folder for your grain(s) (mine’s named Grains, not very imaginative) and I’ve added the following files and code…

IDocumentGrain.cs

public interface IDocumentGrain : IGrainWithGuidKey
{
    Task<string> GetContent();
    Task UpdateContent(string content);
    Task<DocumentMetadata> GetMetadata();
    Task Delete();
}

DocumentGrain.cs

public class DocumentGrain([PersistentState("doc", "documentStore")] IPersistentState<DocumentState> state)
    : Grain, IDocumentGrain
{
    public Task<string> GetContent()
    {
        // State is hydrated automatically on activation
        return Task.FromResult(state.State.Content);
    }

    public async Task UpdateContent(string content)
    {
        state.State.Content = content;
        state.State.LastUpdated = DateTime.UtcNow;
        await state.WriteStateAsync(); // persist changes
    }

    public Task<DocumentMetadata> GetMetadata()
    {
        var metadata = new DocumentMetadata
        {
            Title = state.State.Title,
            LastUpdated = state.State.LastUpdated
        };
        return Task.FromResult(metadata);
    }

    public async Task Delete()
    {
        await state.ClearStateAsync(); // wipe persisted state
    }
}

DocumentMetadata.cs

[GenerateSerializer]
public class DocumentMetadata
{
    [Id(0)]
    public string Title { get; set; } = string.Empty;

    [Id(1)]
    public DateTime LastUpdated { get; set; }
}

DocumentState.cs

public class DocumentState
{
    public string Title { get; set; } = string.Empty;
    public string Content { get; set; } = string.Empty;
    public DateTime LastUpdated { get; set; }
}

Now we’ll add the DocumentController.cs in the Controllers folder

[ApiController]
[Route("api/[controller]")]
public class DocumentController(IClusterClient client) : ControllerBase
{
    [HttpGet("{id}")]
    public async Task<string> Get(Guid id)
    {
        var grain = client.GetGrain<IDocumentGrain>(id);
        return await grain.GetContent();
    }
}

Note: As we touched on in the previous post, we just use grains as if they already exist; the Orleans runtime will create and activate them if they do not exist, or return them if they already exist.

Finally in Program.cs add the following code after builder.Services.AddControllers();

builder.Host.UseOrleans(silo =>
{
    silo.UseLocalhostClustering();
    silo.AddMemoryGrainStorage("documentStore");
    silo.UseDashboard(options =>
    {
        options.HostSelf = true;
        options.Port = 7000;
    });
});

When we run this application we need to pass a GUID (as we’re using IGrainWithGuidKey), for example https://localhost:7288/api/document/B5D4A805-80C3-4239-967B-937A5A0E9250. This sends the request to the DocumentController Get endpoint, which gets a grain based upon the supplied id and calls the grain method GetContent, returning the current state’s Content property.

I’ve not added code to call the other methods on the grain, but examples are listed for how these might look in the code above.

Getting started with Orleans

Microsoft Orleans is a cross-platform framework for distributed applications. It’s based upon the Actor model, which represents lightweight, concurrent, immutable objects encapsulating state.

Basic Concepts

A Grain is a virtual actor and one of several Orleans primitives. A Grain is an entity which comprises

identity + behaviour + state

where an identity is a user-defined key.

Grains are automatically instantiated on demand by the Orleans runtime and have a managed lifecycle with the runtime activating/deactivating grains as well as placing/locating grains as required.

A Silo is a primitive which hosts one or more Grains. A group of silos runs as a cluster, coordinating and distributing work.

Orleans can handle persistence, timers and reminders along with flexible grain placement and versioning.

Use cases

Whilst grains can be stateless, the “sweet spot” for using Orleans is where you require distributed, durable and concurrent state management without locks etc. Long-running stateful processes such as event-driven workflows are very much in the Orleans world.

Where Orleans is not the best solution is for stateless, compute-heavy tasks; these are better suited to Azure Functions (for example). In such situations Orleans just adds complexity.

Lifecycle

Orleans automatically manages the lifecycle of grains. A grain may be in one of the following states: activating, active, deactivating and persisted. Persisted maybe wasn’t the state you first thought of when looking at the progression through the other states. Let’s look at the states in a little more depth (although they’re probably fairly self-explanatory)

  • Activation occurs when a request for a grain is received and the grain’s current state is not active. The grain will be initialized and, when active, can accept requests. An important point is that the grain will stay active based upon the fulfilment of requests.
  • When a grain is active in memory it will accept requests, but if it’s busy, messages will be stored in a queue until the grain is ready to receive them. Whilst we can call the grain concurrently, due to the Actor model design only one execution is permitted on the grain’s thread, ensuring thread safety.
  • Deactivation takes place once the grain stops receiving requests for a period of time. In this state it is not yet persisted; however, once we reach the persisted state it will be removed from memory.
  • Persisted is when a grain has been deactivated and its final state is stored in a database or other datastore.

The framework takes care of the life cycle and allows the developer to just concentrate on using grains.

The Silo lifecycle

The silo’s lifecycle goes something like the following

  • When created the silo initializes the runtime environment etc.
  • Runtime services are started and the silo initializes agents and networking
  • Runtime storage is initialized
  • Runtime services for grains are started, including grain type management, membership services and the grain directory
  • Application layer services started
  • The silo joins the cluster
  • Once active the silo is ready to accept workload

Concurrency

Grains are virtual actors and hence based upon the Actor model, which essentially has a single thread that accepts requests/messages. Hence when a grain is processing a request it’s in a busy state and, in the default turn-based concurrency, other requests are queued. It’s possible to override turn-based concurrency to handle multiple messages on the same thread, but this does come with potential risks around sharing a thread.

Orleans maintains a relatively small thread pool, sized by the number of CPU cores in the system, therefore we must still be careful around any potential blocking of threads. Grains can use the .NET thread pool but this should be fairly rare; async/await should be preferred.
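Turn-based concurrency can be sketched outside Orleans with a plain Rust channel and a single owning thread. This is only an analogy (the Msg enum, counter state and spawn_counter_grain name are made up for illustration, not Orleans APIs), but it shows why one-message-at-a-time processing needs no locks:

```rust
use std::sync::mpsc;
use std::thread;

// Messages our toy "grain" understands.
enum Msg {
    Add(i32),
    Get(mpsc::Sender<i32>), // reply channel for the caller
}

// Spawn a thread that owns the state and processes messages strictly in turn,
// mimicking turn-based concurrency: callers send messages, the channel queues
// them, and only one is handled at a time.
fn spawn_counter_grain() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut total = 0; // state touched by exactly one thread, so no locks
        for msg in rx {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Get(reply) => {
                    let _ = reply.send(total);
                }
            }
        }
    });
    tx
}

fn main() {
    let grain = spawn_counter_grain();
    grain.send(Msg::Add(2)).unwrap();
    grain.send(Msg::Add(3)).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    grain.send(Msg::Get(reply_tx)).unwrap();
    println!("total = {}", reply_rx.recv().unwrap()); // prints "total = 5"
}
```

Because the channel serialises every message, the two Add messages are guaranteed to complete before the Get is processed, which is the same thread-safety guarantee a grain gives you.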

Message flow

Requests for a grain are passed from client to silo and then the request is passed on to the grain. When the grain has completed its work, and if a response is required, this will pass back to the silo and on to the client.

Let’s write some code

Let’s get to writing the equivalent of “Hello World” by creating two console projects; mine are named Client and Silo. However I also want an interface and implementation of a HelloGrain, so you’ll need to create two libraries. I’ve named mine GrainInterfaces and Grains (not very imaginative, I admit).

In the Client project add the NuGet package Microsoft.Orleans.Client, so my packages are as follows in the .csproj

<PackageReference Include="Microsoft.Extensions.Hosting" Version="9.0.10" />
<PackageReference Include="Microsoft.Extensions.Logging.Console" Version="9.0.10" />
<PackageReference Include="Microsoft.Orleans.Client" Version="9.2.1" />

Now in the Silo project add Microsoft.Orleans.Server. I’m also wanting to host in Kubernetes, so I added Microsoft.Orleans.Hosting.Kubernetes and Orleans.Clustering.Kubernetes. Finally I want to use the Orleans dashboard, so add the OrleansDashboard package; hence my .csproj looks like this

<PackageReference Include="Microsoft.Extensions.Hosting" Version="9.0.10" />
<PackageReference Include="Microsoft.Extensions.Logging.Console" Version="9.0.10" />
<PackageReference Include="Microsoft.Orleans.Hosting.Kubernetes" Version="9.2.1" />
<PackageReference Include="Microsoft.Orleans.Server" Version="9.2.1" />
<PackageReference Include="Orleans.Clustering.Kubernetes" Version="8.2.0" />
<PackageReference Include="OrleansDashboard" Version="8.2.0" />

Notice I also have the Microsoft.Extensions packages to include logging and to create the host.

For the GrainInterfaces project add the package Microsoft.Orleans.Sdk so the GrainInterfaces .csproj has this

<PackageReference Include="Microsoft.Orleans.Sdk" Version="9.2.1" />

and finally add the same to the Grains project but also let’s add Microsoft.Extensions.Logging.Abstractions for us to do some logging, so the .csproj should look like this

<PackageReference Include="Microsoft.Extensions.Logging.Abstractions" Version="9.0.10" />
<PackageReference Include="Microsoft.Orleans.Sdk" Version="9.2.1" />

For the GrainInterfaces add an interface IHello.cs which looks like this

public interface IHello : IGrainWithIntegerKey
{
  ValueTask<string> SayHello(string greeting);
}

Add a project reference in the Client to this project.

Next up, the Grains project has a new class named HelloGrain.cs which looks like this

public class HelloGrain(ILogger<HelloGrain> logger) : Grain, IHello
{
    private readonly ILogger _logger = logger;

    ValueTask<string> IHello.SayHello(string greeting)
    {
        _logger.LogInformation("""
            SayHello message received: "{Greeting}"
            """,
            greeting);

        return ValueTask.FromResult($"""
            Client said: "{greeting}"
            """);
    }

    public override Task OnDeactivateAsync(DeactivationReason reason, CancellationToken cancellationToken)
    {
        if (reason.ReasonCode == DeactivationReasonCode.ShuttingDown)
        {
            MigrateOnIdle();
        }

        return base.OnDeactivateAsync(reason, cancellationToken);
    }
}

For the client code, edit the Program.cs within the Client project as follows

using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.DependencyInjection;
using GrainInterfaces;

var builder = Host.CreateDefaultBuilder(args)
    .UseOrleansClient(client =>
    {
        client.UseLocalhostClustering();
    })
    .ConfigureLogging(logging => logging.AddConsole())
    .UseConsoleLifetime();

using var host = builder.Build();
await host.StartAsync();

var client = host.Services.GetRequiredService<IClusterClient>();

var friend = client.GetGrain<IHello>(0);
string response = await friend.SayHello("Hi Orleans");

Console.WriteLine($"""
                   {response}

                   Press any key to exit...
                   """);

Console.ReadKey();

await host.StopAsync();

For the server code, edit the Program.cs within the Silo project as follows

using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

var builder = Host.CreateDefaultBuilder(args)
    .UseOrleans(silo =>
    {
        silo.UseLocalhostClustering()
            .ConfigureLogging(logging => logging.AddConsole());
        silo.UseDashboard(options => 
        {
            options.HostSelf = true;       // Enables embedded web server
            options.Port = 7000;           // Default port
        });
    })
    .UseConsoleLifetime();

using IHost host = builder.Build();

await host.RunAsync();

If we’re wanting to run these projects from a single solution then don’t forget to right-click the solution in Visual Studio, select Configure Startup Projects, then under Common Properties | Configure Startup Projects choose Multiple startup projects and set the Silo and Client project actions to Start.

Persistence

As mentioned previously, grains can have their state persisted by Orleans. If we edit the Silo project’s Program.cs we can add various types of persistence (table storage, SQL database etc.), but for testing we can also use in-memory storage.

We just add the following to the UseOrleans method

silo.AddMemoryGrainStorage("docStore");
// Or Azure Table storage
// silo.AddAzureTableGrainStorage("documentStore", options =>
// {
//     options.ConnectionString = "<your-connection-string>";
// });

Now in our HelloGrain we can add persistence as easily as the following: add the PersistentState attribute to a constructor parameter for the IPersistentState object to be injected

public class HelloGrain(
  [PersistentState("hello", "docStore")] IPersistentState<State> state, 
  ILogger<HelloGrain> logger) : Grain, IHello

The state name here is “hello” and the storage name (“docStore”) should match the storage we set up in the Silo project.

Now to save data on the state we just write the following in the SayHello method

state.State.Data = greeting;
await state.WriteStateAsync();

Here’s a very simple State object example

public class State
{
    public string? Data { get; set; }
}

Base64 encoding

Base64 encoding is used when embedding binary in text-based formats such as JSON, XML, YAML etc. If we need to include a binary type, such as an image or file, and must pass it via a text format, then we need to Base64 encode the data first.

Use cases

Within web applications this is often used to pass binary within a JSON request/response object, but can be also seen when embedding images directly into HTML, for example

<img src="data:image/png;base64,your-binary-encoded-data..." />

It’s also used for email attachments (MIME) as well as authentication tokens; JWT tokens often use Base64URL (a variant of Base64).

Other use cases include clipboard copy/pasting of blobs (images/files etc.) into a text-based clipboard format, as well as transporting data over text-only channels.

Where and why not to use Base64 encoding?

Base64 should NOT be used for streaming raw binary (application/octet-stream), large files, or in binary-safe protocols such as gRPC, WebSockets and HTTP when handling the aforementioned large binary data etc.

First off, these protocols already support raw binary data, so the effects of encoding are only on the negative side; if we encode to Base64 we will potentially see significant increases in the data size…

To Base64 encode using JavaScript in the browser we can use

// encode 
const text = "Hello, world!";
const encoded = btoa(text);

// decode
const decoded = atob(encoded); 

In C# we can use

// encode
byte[] bytes = Encoding.UTF8.GetBytes("Hello, world!");
string base64 = Convert.ToBase64String(bytes);

// decode
byte[] decodedBytes = Convert.FromBase64String(base64);
string decoded = Encoding.UTF8.GetString(decodedBytes);
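For the curious, here’s roughly what those library calls are doing under the hood: a minimal Base64 encoder sketch in Rust using only the standard library (the base64_encode name is just for illustration; in real code you’d use a library). Each 3-byte chunk is packed into a 24-bit group and emitted as four 6-bit symbols, with “=” padding for a short final chunk.

```rust
// RFC 4648 standard alphabet.
const ALPHABET: &[u8; 64] =
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

fn base64_encode(input: &[u8]) -> String {
    let mut out = String::with_capacity((input.len() + 2) / 3 * 4);
    for chunk in input.chunks(3) {
        // Pack up to 3 bytes into a 24-bit group (missing bytes are zero).
        let n = (chunk[0] as u32) << 16
            | (*chunk.get(1).unwrap_or(&0) as u32) << 8
            | *chunk.get(2).unwrap_or(&0) as u32;
        // Emit four 6-bit symbols; '=' pads a 1- or 2-byte final chunk.
        out.push(ALPHABET[(n >> 18) as usize & 63] as char);
        out.push(ALPHABET[(n >> 12) as usize & 63] as char);
        out.push(if chunk.len() > 1 { ALPHABET[(n >> 6) as usize & 63] as char } else { '=' });
        out.push(if chunk.len() > 2 { ALPHABET[(n & 63) as usize] as char } else { '=' });
    }
    out
}

fn main() {
    // Matches btoa("Hello, world!") in the browser.
    println!("{}", base64_encode(b"Hello, world!")); // SGVsbG8sIHdvcmxkIQ==
}
```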

Calculating the Base64 encoding effects

Base64 encoding turns every 3 bytes of binary data into 4 ASCII characters, so we essentially expand a binary data payload when using Base64 encoding. The exact encoded size (rounding the input up to whole 3-byte blocks) is

var base64Size = Math.Ceiling(binarySize / 3.0) * 4

Or we can approximate with

var base64Size = 1.33 * binarySize;

If the binary size is not divisible by 3, the final 4-character block is completed with 1 or 2 “=” padding characters (already counted in the exact formula above).

This means that for every 1MB (1048576 bytes) the Base64 size is about 1.4MB (1398104 chars) giving us a 33% overhead.

This, of course, is significant when streaming data, as it adds bandwidth and memory overhead along with increased CPU usage for encoding/decoding.
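The arithmetic above fits in a couple of lines of Rust (base64_encoded_len is a hypothetical helper name; the last check reproduces the 1MB figure from the text):

```rust
// Exact Base64 output length (including '=' padding) for a given binary size:
// round the input up to whole 3-byte blocks, four characters per block.
fn base64_encoded_len(binary_size: usize) -> usize {
    ((binary_size + 2) / 3) * 4
}

fn main() {
    assert_eq!(base64_encoded_len(3), 4);  // one full block
    assert_eq!(base64_encoded_len(4), 8);  // a partial block still costs 4 chars
    println!("1MB encodes to {} chars", base64_encoded_len(1_048_576));
}
```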

Increasing the body size of requests (with your ASP.NET core application within Kubernetes)

I came across an interesting issue whereby we wanted to allow larger files to be uploaded through our ASP.NET Core API, through to an Azure Function, all hosted within Kubernetes.

The first thing to note is that if you’re routing through something like Cloudflare, Akamai or Traffic Manager, changes there are outside the scope of this post.

Kubernetes Ingress

Let’s first look at Kubernetes; the ingress controller for your application may have something like this

className: nginx
annotations:
  nginx.ingress.kubernetes.io/proxy-buffer-size: "100m"
  nginx.ingress.kubernetes.io/proxy-body-size: "100m"
...

In the above we set the buffer and body size to 100MB. One thing to note is that when we had this closer to the actual file size we wanted to support, the request body seemed larger than expected, so you might need to tweak things a little.

Kestrel

The change in the Kubernetes ingress now allows requests of up to 100MB, but you may now find the request rejected by ASP.NET Core, or more specifically Kestrel.

Kestrel (at the time of writing) has a default MaxRequestBodySize of 30MB, so we need to add the following

builder.WebHost.ConfigureKestrel(serverOptions =>
{
  serverOptions.Limits.MaxRequestBodySize = 104857600; // 100 MB in bytes
});

Azure Functions

Next up, we’re using Azure Functions, where the default (when on the pro consumption plan) is 100MB. However, if you need to or want to change/fix this in place, you can edit the host.json file to include this

"http": {
  "maxRequestBodySize": 100
}

Obviously if you have code in place anywhere that also acts as a limit, you’ll need to amend that as well.

Anything else?

Depending on the size of files and the time it takes to process them, you might also need to review your timeouts on HttpClient or whatever mechanism you’re using.

Rust and Sqlite

Add the dependencies

[dependencies]
rusqlite = { version = "0.37.0", features = ["bundled"] }

The bundled feature will automatically compile and link an up-to-date SQLite; without this I got errors such as “LINK : fatal error LNK1181: cannot open input file ‘sqlite3.lib'”. Obviously if you have everything installed for SQLite, you might prefer the non-bundled dependency, so just replace the above with

[dependencies]
rusqlite = "0.37.0"

Create a DB

Now let’s create a database as a file and insert an initial row of data

use rusqlite::Connection;

fn main() {
    let connection = Connection::open("./data.db3").unwrap();
    connection.execute("CREATE TABLE app (id INTEGER PRIMARY KEY, name TEXT NOT NULL)", ()).unwrap();
    connection.execute("INSERT INTO app (id, name) VALUES (?, ?)", (1, "Hello")).unwrap();
}

We could also do this in memory using the following

let connection = Connection::open_in_memory().unwrap();

Reading from our DB

We’ll create a simple structure representing the DB created above

#[derive(Debug)]
struct App {
    id: i32,
    name: String,
}

Now to read into this we use the following

let mut statement = connection.prepare("SELECT * FROM app").unwrap();
let app_iter = statement.query_map([], |row| {
  Ok(App {
    id: row.get(0)?,
    name: row.get(1)?,
  })
}).unwrap();

for app in app_iter {
  println!("{:?}", app.unwrap());
}

You’ll also need the following use clause

use rusqlite::fallible_iterator::FallibleIterator;