You run your server application and the port is not available

I’ve hit this problem before (see my earlier post, An attempt was made to access a socket in a way forbidden by its access permissions). The port was available one day and seemingly locked the next…

Try the following to see if the port is within an excluded port range

netsh interface ipv4 show excludedportrange protocol=tcp

If you do find the port is within one of the ranges then I’ve found (at least for the port I’ve been using) that I can stop and restart the winnat service, i.e.

Note: you may need to run these as administrator.

net stop winnat

then

net start winnat

and the excluded port list reduces in size.

Microsoft’s Dependency Injection

Dependency injection has been a fairly standard part of development for a while. You’ve probably used Unity, Autofac, Ninject and others in the past.

Frameworks such as ASP.NET Core and MAUI use the Microsoft Dependency Injection package (Microsoft.Extensions.DependencyInjection), and we can use it with any other type of application.

For example, if we create ourselves a Console application and add the package Microsoft.Extensions.DependencyInjection, we can then use the following code

var serviceCollection = new ServiceCollection();

// add our services

var serviceProvider = serviceCollection.BuildServiceProvider();

and it’s as simple as that.

The Microsoft.Extensions.DependencyInjection package has the features we require for most dependency injection scenarios (note: it does not support property injection, for example). We can add services as…

  • Transient – an instance is created for every request, for example
    serviceCollection.AddTransient<IPipeline, Pipeline>();
    // or
    serviceCollection.AddTransient<Pipeline>();
    
  • Singleton – a single instance is created and reused on every request, for example
    serviceCollection.AddSingleton<IPipeline, Pipeline>();
    // or
    serviceCollection.AddSingleton<Pipeline>();
    
  • Scoped – when we create a scope we get the same instance within that scope. In ASP.NET Core a scope is created for each request, for example
    serviceCollection.AddScoped<IPipeline, Pipeline>();
    // or
    serviceCollection.AddScoped<Pipeline>();
    

For services registered as “scoped”, if no scope is created then they behave more or less like singletons, i.e. the scope is effectively the whole application. If we want to mimic ASP.NET Core (for example) we would create a scope per request, and we do this as follows

using var scope = serviceProvider.CreateScope();

var pipeline1 = scope.ServiceProvider.GetRequiredService<Pipeline>();
var pipeline2 = scope.ServiceProvider.GetRequiredService<Pipeline>();

In the above code the same instance of Pipeline is returned for each GetRequiredService call, but when the scope is disposed of and another scope is created, a new instance is returned for that new scope.

The service provider is used to create/return instances of our services. We can use GetRequiredService, which will throw an InvalidOperationException if the service is not registered, or we can use GetService, which will not throw an exception but will return either the instance or null.
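To make the difference concrete, here’s a minimal sketch (assuming the serviceCollection and serviceProvider from earlier, with IPipeline/Pipeline registered as a transient)

// GetRequiredService throws an InvalidOperationException if IPipeline was never registered
var required = serviceProvider.GetRequiredService<IPipeline>();

// GetService returns null instead of throwing, so we need a null check
var optional = serviceProvider.GetService<IPipeline>();
if (optional is null)
{
    Console.WriteLine("IPipeline has not been registered");
}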

Multiple services of the same type

If we register multiple implementations of our services like this

serviceCollection.AddTransient<IPipeline, Pipeline1>();
serviceCollection.AddTransient<IPipeline, Pipeline2>();

and we use the service provider’s GetRequiredService<IPipeline>, we will get a Pipeline2 – it will be the last registered type.

If we want all of the services registered for type IPipeline then we use GetServices<IPipeline> and we get back an IEnumerable of IPipeline, so if we have a service which takes all the IPipeline implementations, we’d declare it as follows

public class Context(IEnumerable<IPipeline> pipeline)
{
}

Finally we have the keyed option; this allows us to register multiple variations of an interface (for example) and give each a key/name, for example

serviceCollection.AddKeyedTransient<IPipeline, Pipeline1>("one");
serviceCollection.AddKeyedTransient<IPipeline, Pipeline2>("two");

Now these will not be returned when using GetServices<IPipeline>; instead it’s expected that we get the service by its key, i.e.

var pipeline = serviceProvider.GetKeyedService<IPipeline>("one");

When declaring the requirement in our dependent classes we would use the FromKeyedServicesAttribute like this

public class Context([FromKeyedServices("one")] IPipeline pipeline)
{
}

Publishing your Elixir package

There are a couple of ways to share your code as packages. The first is simply to create a GitHub repository and put your code there; the second is to add the package to the hex.pm packages website. To be honest we’re most likely going to create a GitHub repo anyway, so we’ll end up doing both…

GitHub packages

I’ve implemented a new library by creating it via mix new ex_unit_conversions (naming is always a problem; do also check hex.pm to ensure your chosen name is not already taken). Then I’ve added the code to the lib folder and tests to the test folder, see ex_unit_conversions.

We can create a dependency on this package without a version, but for this one I created a release named v0.1.0, so now I need to update the deps of any application/code that is going to download and use this package, i.e. in mix.exs I have this

defp deps do
  [
    {:ex_unit_conversions, git: "https://github.com/putridparrot/ex_unit_conversions.git", tag: "v0.1.0"}
  ]
end

As you can see, the tag in the dependency needs to match the release’s tag name in GitHub. In my case I included the v prefix (for the later version of the package I named the release in GitHub without the preceding v, i.e. 0.1.1).

Now when you run mix deps.get you’ll see a new deps folder created with the code from your repo downloaded to it.

Pretty simple. Now let’s look at making things work in hex.pm.

Updating mix.exs

Before releasing your package to hex.pm you may wish to update the mix.exs project section to ensure your app name is correct, update the version, add source_url, homepage_url etc. For example (I missed these for the GitHub-only release but added them for the release to hex.pm). If you miss anything you will be prompted for it before you can publish to hex.pm.

def project do
    [
      app: :ex_unit_conversions,
      version: "0.1.1",
      elixir: "~> 1.17",
      source_url: "https://github.com/putridparrot/ex_unit_conversions",
      homepage_url: "https://github.com/putridparrot/ex_unit_conversions",
      start_permanent: Mix.env() == :prod,
      deps: deps(),
      package: [
        links: %{"GitHub" => "https://github.com/putridparrot/ex_unit_conversions"},
        licenses: ["MIT"]
      ],
      description: "Unit conversion functions"
    ]
  end

You may also wish to add a dependency on ex_doc to your package, i.e.

defp deps do
  [
    # Other dependencies
    {:ex_doc, "~> 0.34.2", only: :dev, runtime: false}
  ]
end

Install your dependencies using mix deps.get and then you can generate your docs using mix docs; this will also allow hex.pm to generate docs for your package.

hex.pm

To create a package on hex.pm, first we need to create an account. See Publishing your package for the official guidance on this process, but I’ll recreate some of those steps here

  • If you’ve not already done so, create a user account via the hex.pm website, OR register a user using mix with
    mix hex.user register
    
  • Once you have your mix.exs up to date (hex.pm may prompt you for missing information during this next step), you can publish your package from your package folder (as created using mix new) via your shell/terminal using
    mix hex.publish
    

    You will be prompted for your username and password, and are required to have a local password (this will be set up during this process).

If you want to check the latest publish options etc. run mix help hex.publish.

That’s all there is to it.

References

Package configuration options
Publishing a package

Structs in Elixir

Defining structs is pretty simple in Elixir.

We create a struct within a module (like we do with functions), for example

defmodule Person do
  defstruct firstName: "", lastName: "", age: nil
end

As you can see, we’re explicitly setting the default values.

We’ve set age to nil. We can also declare values with a default of nil implicitly, but the fields that are implicitly set to nil must come at the start of the struct, and we enclose the struct definition in [ ], for example

defstruct [:age, firstName: "", lastName: ""]

We can also mark fields/keys as required using the @enforce_keys attribute, for example

@enforce_keys [:firstName, :lastName]
defstruct [:age, :firstName, :lastName]

Now if we try to create an empty instance (i.e. not setting firstName and lastName) we’ll get an error such as “(ArgumentError) the following keys must also be given when building struct Person: [:firstName, :lastName]”, but we’re jumping ahead of ourselves; let’s find out how we actually create an instance of our struct first.

Structs take the name of their module, hence this struct is Person, and we create an instance of the struct using the %Person{} syntax, as shown below

def create() do
  %Person{ firstName: "Scooby", lastName: "Doo", age: 30 }
end

Now if you’ve already encountered maps, you’ll see that the struct syntax is similar to the map syntax %{}, and that’s because structs are built on top of maps (although they do not have all the capabilities of maps).

We can create an instance of a struct with its default values by simply using the following (assuming we’re not using @enforce_keys here)

defaultPerson = %Person{}

and this will simply bind to an instance with firstName and lastName of “” and age of nil.

We use standard “dot” syntax to access fields, for example

iex(1)> scooby = Person.create()
%Person{firstName: "Scooby", lastName: "Doo", age: 30}
iex(2)> scooby.firstName
"Scooby"

We can create a new instance of a Person where we change some fields, using | (the update syntax), for example

iex(3)> scrappy = %{scooby | firstName: "Scrappy"}
%Person{firstName: "Scrappy", lastName: "Doo", age: 30}

We can also bind to a struct using pattern matching, i.e.

iex(4)> %Person{firstName: firstName} = scrappy
%Person{firstName: "Scrappy", lastName: "Doo", age: 30}
iex(5)> firstName
"Scrappy"

I said that structs were built on top of maps so let’s see if that’s true, try this

is_map(scooby)

and you’ll see true.

Structs can be made up of basic and more complicated types, i.e. structs can have fields which themselves are structs and so on.
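For example (a sketch, adding a hypothetical Address struct and an address field to our Person)

defmodule Address do
  defstruct street: "", town: ""
end

defmodule Person do
  # the nested struct is just another field with a default value
  defstruct firstName: "", lastName: "", age: nil, address: %Address{}
end

scooby = %Person{firstName: "Scooby", lastName: "Doo", age: 30,
  address: %Address{street: "123 Mystery Lane", town: "Coolsville"}}

scooby.address.town # "Coolsville"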

Elixir, use and using

I didn’t include much information in my post More modules in Elixir around the use macro, and that’s because it really requires a post of its own. So here we go…

If you’ve used Phoenix (partially covered in Elixir and Phoenix) you’ll have probably noticed that when we created the controller we gained access to functions such as json, and when we set up the router we had a long list of except: atoms. This is because using use brought functions into our modules from Phoenix automatically, i.e. new, edit, create etc.

The use macro is quite powerful, it essentially calls the __using__ macro within another module. The __using__ macro allows us to inject code from the other module.

Let’s see this in action…

We’ll start by creating a module which will include functionality that can be injected into another module

defmodule MyUse do
  defmacro __using__(_opts) do
    quote do
      def my_injected_fn() do
        "Hello World"
      end
    end
  end
end

In the above example, we create the __using__ macro containing my_injected_fn. The neat bit is that __using__ injects this code into any module which uses the MyUse module, for example

defmodule MyModule do
  use MyUse

  def test_use() do
    my_injected_fn()
  end
end

This injects everything within the quote block, but in the Phoenix example we want to inject only certain pieces from a module.

Before we move on, let’s quickly address a couple of things in the code above. The quote macro transforms the block of code into an AST (Abstract Syntax Tree); we can see an example of this by typing quote do: MyModule.test_use() into iex and it will display something like {{:., [], [{:__aliases__, [alias: false], [:MyModule]}, :test_use]}, [], []}. It’s probably quite obvious that defmacro defines a macro, and the __using__ callback macro is what allows us to extend other modules (as already mentioned).

Let’s change the MyUse module to allow us to inject specific functionality.

defmodule MyUse do
  defmacro __using__(which) when is_atom(which) do
    apply(__MODULE__, which, [])
  end

  def injector do
    quote do
      def my_injected_fn() do
        "Hello World"
      end
    end
  end
end

We’re now using the __using__ macro to select which bits of functionality we want to inject into another module. Admittedly in this example we just have a single piece of code to inject, but bear with me.

So now to use this in our other modules we write the following

defmodule MyModule do
  use MyUse, :injector

  def test_use() do
    my_injected_fn()
  end
end

This will inject the injector defined code.

Let’s be honest, this is not that useful with one function, so let’s extend MyUse with a couple of functions (in my case, just to demonstrate things, I’ve given them the same name, but in most modules you’ll probably not be doing this).

defmodule MyUse do
  defmacro __using__(which) when is_atom(which) do
    apply(__MODULE__, which, [])
  end

  def injector1 do
    quote do
      def my_injected_fn() do
        "Hello World 1"
      end
    end
  end

  def injector2 do
    quote do
      def my_injected_fn() do
        "Hello World 2"
      end
    end
  end
end

What we’re going to do is, in the calling module, select injector1 OR injector2 without changing the calling function name (as it’s unchanged in the MyUse module)

defmodule MyModule do
  use MyUse, :injector1

  def test_use() do
    my_injected_fn()
  end
end

This will display “Hello World 1” when evaluated. Switching to :injector2 will display “Hello World 2”.

As mentioned, it’s unlikely you’d normally do this; it’s much more likely you might include different functions, maybe along these lines

defmodule MyUse do
  defmacro __using__(which) when is_atom(which) do
    apply(__MODULE__, which, [])
  end

  def injector1 do
    quote do
      def hello1() do
        "Hello World 1"
      end
    end
  end

  def injector2 do
    quote do
      def hello2() do
        "Hello World 2"
      end
    end
  end
end

Now we can inject these via use, like this

defmodule MyModule do
  use MyUse, :injector1
  use MyUse, :injector2

  def test_use() do
    IO.puts hello1()
    IO.puts hello2()
  end
end

Note: I’m still quite new to Elixir; the usage of two use clauses seems a little odd and there may be a better way to define such things.

I mentioned you could use import in much the same way, except use also allows us to inject aliases, imports, other use’d modules and so on.
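For example, a __using__ macro can set up imports and aliases for the calling module as well as defining functions; a minimal sketch (module and function names here are just for illustration) might look like this

defmodule MyUse do
  defmacro __using__(_opts) do
    quote do
      # injected into whichever module calls `use MyUse`
      import String, only: [upcase: 1]
      alias Enum, as: E

      def shout(value), do: upcase(value)
    end
  end
end

defmodule MyModule do
  use MyUse

  def test_use() do
    # both the imported upcase/1 and the alias E are now available here
    E.map(["hello", "world"], &shout/1)
  end
end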

References

The ‘use’ Macro in Elixir.
Understanding Elixir’s Macros by Phoenix example
How to Use Macros in Elixir
Phoenix repo on GitHub

More modules in Elixir

In the post A little more Elixir, we’re talking modules and functions, we looked at a basic use of modules which allows us to declare functions (and macros, although we’ve not really touched these yet).

Modules can be nested, for example

defmodule Math do
  defmodule Fractions do
  # Nested module and functions 
  end

  # Top level module and functions
end

These don’t actually have a relationship with one another; instead Elixir essentially treats them as separate modules, like this

defmodule Math do
  # functions
end

defmodule Math.Fractions do
  # functions
end

Importing modules

The import keyword (as the name suggests) imports a module’s functions and/or macros into the scope of the module or function they’re imported into. The scope is limited to that defined by the start and end of that module or function. Importing allows us to call functions from the module without having to use its module name. For example, suppose we have a Math module with functions add and sub; we can import it into our function like this

def f do 
  import Math
  add(1, 7)
end

Notice how the call to add does not need the module name prefix.

We can limit what’s imported using additional syntax, where we have only: or except: to reduce the imports to a minimum; for example, we do not need sub in our import so we could write

def f do 
  import Math, only: [add: 2]
  add(1, 7)
end

In the above we import only the add function with arity of 2 (i.e. the 2 parameter function named add).

Aliasing modules

As the name suggests, alias allows us to create an alias for a module name. For example, let’s say we have Math.Fractions; whilst not a big deal to type, if we’re typing it for every function call it creates a lot of “clutter” in our code, so instead we can alias the name like this

defmodule TestMod do
  alias Math.Fractions, as: F
  def f do
    F.some_function()
  end 
end

Require a module

The require keyword ensures the macro definitions of a module are compiled into the scope using the require. To put this another way, require ensures that the required module is loaded before the module that’s calling into it, which makes any macros within it available to the calling module; they’re scoped to that calling module.

Note: require is not like an alias, so you still need to prefix any macro/function calls with the module name (unless you also alias it of course)

An example might be something like this

defmodule A do
  defmacro hello(arg) do
    quote do
      IO.puts "Hello #{unquote(arg)}"
    end
  end
end

defmodule B do
  def hello_world() do
    A.hello "World"
  end
end

In the above, if you compile this into iex using c("require_sample.ex") you’ll get warnings such as warning: you must require A before invoking the macro A.hello/1, and if you try to execute B.hello_world() you’ll get an error like this: ** (UndefinedFunctionError) function A.hello/1 is undefined or private. However, there is a macro with the same name and arity. Be sure to require A if you intend to invoke this macro.

So as you can see, we need to use require; simply add require A as below

defmodule A do
  defmacro hello(arg) do
    quote do
      IO.puts "Hello #{unquote(arg)}"
    end
  end
end

defmodule B do
  require A
  def hello_world() do
    A.hello "World"
  end
end

The use macro

The use macro allows us to “inject” code into the current module. It’s used as an extension point.

I’ll dedicate a post of its own to use as it’s interesting what you can do with it.

Module Attributes

Attributes in Elixir are prefixed with the @ symbol. These add metadata to our module. An attribute is declared as an @name value pair. Whilst the name can be pretty much anything (within the allowable syntax), for example I might have @my_ver 1, there are some reserved names

  • @moduledoc is used for module documentation
  • @doc is used for function or macro documentation
  • @spec is used to supply a typespec for the function which follows
  • @behaviour is used for OTP or user-defined behaviours (and yes, it’s the UK spelling)
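To give a flavour of the first three of these, here’s a minimal sketch (the Temperature module is just illustrative)

defmodule Temperature do
  @moduledoc """
  Functions for converting between temperature units.
  """

  @doc """
  Converts celsius to fahrenheit.
  """
  @spec to_fahrenheit(number()) :: float()
  def to_fahrenheit(celsius) do
    celsius * 9 / 5 + 32
  end
end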

Here’s an example of creating our own attribute, which we’re then able to use within our functions

defmodule Attributes do
  @some_name PutridParrot

  def attrib() do
    @some_name
  end
end

This will return the @some_name value.

You can set the attribute multiple times within the module scope. Elixir evaluates from top to bottom so functions after a change to the @some_name (above) will get the new value.
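A small sketch of that behaviour (the module and values are just illustrative)

defmodule Versioned do
  @my_ver 1
  def first(), do: @my_ver   # returns 1, the value at this point in the module

  @my_ver 2
  def second(), do: @my_ver  # returns 2, the redefined value
end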

Guard clauses in Elixir

Guard clauses in Elixir allow us to extend standard parameter pattern matching with evaluation against a predicate.

For example

defmodule Guard do
  def check(value) when value == nil do
    IO.puts "Value is nil"
  end

  def check(value) when is_integer(value) or is_float(value) do
    IO.puts "#{value} is number"
  end

  def check(value) when is_atom(value) do
    IO.puts "#{value} is atom"
  end

end

We can see the use of the when keyword; for multiple conditions we use the or keyword. The right side of the when clause is a predicate, hence it should return true or false.

In these examples, if we enter Guard.check(nil) the result is “Value is nil”; if you check 123 or 123.4 (for example) the second guarded function is called, etc.

Now one must remember that Elixir will try function clauses from top to bottom, hence if the is_atom check is moved to the top, nil will match as an atom (nil is an atom) and we’ll never reach the function returning “Value is nil”.

As we’ve seen, we supply a predicate to the function which means we can check the value for things like its type. We can also handle ranges; for example, maybe we want a division function that guards against a denominator of 0, then we might create two functions like this

def div(_a, b) when b == 0 do
  raise("Divide by Zero")
end

def div(a, b) do
  a / b
end

I’m not saying this is better than writing something like the code below, but it’s another way of doing things

def div(a, b) do
  if b == 0, do: raise("Divide by Zero")

  a / b
end

As we saw, in guards we use or, not ||.

  • The boolean operators allowed are or, and, not and !.
  • Comparison operators include ==, !=, ===, >, <, >= and <=.
  • Arithmetic operators +, -, *, / can be used.
  • Join operators <> and ++ can be used.
  • The in operator can be used.
  • Type check functions such as is_integer etc.

A more complete list, which also includes “guard-friendly functions” such as abs, can be viewed in the Guards documentation. A small sketch using a couple of these operators follows.
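Here the in operator and comparison operators are used in guards (the module and values are just illustrative)

defmodule RangeGuard do
  # `in` with a range, and comparison operators, are all allowed in guards
  def classify(value) when value in 0..9, do: "single digit"
  def classify(value) when is_integer(value) and value >= 10, do: "ten or more"
  def classify(_value), do: "something else"
end

RangeGuard.classify(5)    # "single digit"
RangeGuard.classify(42)   # "ten or more"
RangeGuard.classify(:dog) # "something else"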

Switching the Docker Desktop Daemon

Docker Desktop allows us to work with both Windows and Linux images, but sadly (at least at the moment) we may need to switch the daemon/engine it uses, either via the UI or with one of the following

DockerCli -SwitchDaemon
DockerCli -SwitchLinuxEngine
DockerCli -SwitchWindowsEngine

Elixir Atoms

One of the stranger bits of syntax I see in Elixir is for Atoms.

To quote the previous link “Atoms are constants whose values are their own name”.

What does this mean?

Well imagine you were writing code such as

configuration = "configuration"
ok = "ok"

Essentially the value is the same as the name, so instead of assigning a value of the same name we simply prefix the name with : to get the following

:configuration
:ok

In fact :true, :false and :nil are atoms, but we would tend to write them without the colon, i.e. true, false and nil.

The syntax for atoms is that they start with a letter or underscore, can contain alphanumeric characters, can also contain @, and can end with an alphanumeric character or either ? or !. If you wish to create an atom which violates this syntax you can simply enclose it in double quotes, i.e.

:"123atom"

and to prove it we can use

is_atom(:"123atom")

Atoms can also be declared starting with an uppercase alpha character such as

Atom
is_atom(Atom)

Atoms with the same content are always equivalent.

Modules are represented as atoms, for example :math is the Erlang math module and :"Elixir.String" is the same as String.
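We can see this for ourselves in iex (a quick sketch)

iex(1)> is_atom(String)
true
iex(2)> String == :"Elixir.String"
true
iex(3)> :math.pi()
3.141592653589793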

Elixir and Phoenix

With most languages I start to learn, I’ve found I learn the basics (enough to feel at relative ease with the language) but then want to see it in real-world scenarios. One of those is usually a web API or the like. So today I’m looking at using Elixir along with the Phoenix framework.

Note: I’m new to Elixir and Phoenix, this post is based upon my learnings, trying to get a basic web API/service working and there may be better ways to achieve this that I’m not aware of yet.

Phoenix is a way (as they say on their site) to “build rich, interactive web applications”. Actually I find it builds too much, as by default it will create a website, code for working with a DB (PostgreSQL by default) etc. In this post I want to create something more akin to a web API or microservice.

If you’re after the full default application, you’d simply run

mix phx.new my_api

In this post I want to create a simple API service, so instead we’ll pass some flags to phx.new to create a service named my_api and we’ll remove the website/HTML and ecto (the DB) side of things

mix phx.new my_api --no-html --no-ecto --no-mailer

If you run the command above you’ll get a new application generated. Just cd my_api to allow us to run the service etc.

If you’d like to see what the default generated application is then run the following

mix phx.server

By default this will start a server against localhost:4000. If you open the browser you’ll see a default dashboard/page which likely says there’s no route for GET / and then lists some available routes.

The /dev/dashboard route takes you to a nice LiveDashboard showing information about Elixir and Phoenix.

To shut down the Phoenix server, press CTRL+C twice within the terminal that you ran it from.

For my very simple web service, I do not even want the live dashboard. So if you already created the new app, delete the app folder and then run this more minimal version (unless you’d prefer to keep the live dashboard etc.)

mix phx.new my_api --no-html --no-ecto --no-mailer --no-dashboard --no-assets --no-gettext

This will then generate a fairly minimal server which is a good starting point for our service. You’ll notice, first off, that there are now no routes when you run this via mix phx.server.

Let’s add a controller; this will act as the controller for our web service. Within the /lib/my_api_web/controllers folder add a new file named math-controller.ex and paste the following code into it (obviously change the module name to suit your application name)

defmodule MyApiWeb.MathController do
  #use MyApiWeb, :controller
  use Phoenix.Controller, formats: [:html, :json]

  def index(conn, _params) do
   json(conn, "{name: Scooby}")
  end
end

We now need to hook up our controller to a route, so go to the router.ex file within the /lib/my_api_web/ folder and alter the scope section to look like this

scope "/", MyApiWeb do
  pipe_through :api

  resources "/api", MathController, except: [:new, :edit, :create, :delete, :update, :show]
end

If you run mix phx.server you should see a route to /api; browsing to http://localhost:4000/api will return “{name: Scooby}” as defined in the math-controller’s index function. This is not very math-like, so let’s create a couple of functions, one for adding numbers and one for subtracting.

Remove the resources line (or comment it out using #) in the scope, then add the following routes

get "/add", MathController, :add
get "/sub", MathController, :subtract

Go to math-controller.ex and add the following functions

def add(conn, %{"a" => a, "b" => b}) do
  text(conn, String.to_integer(a) + String.to_integer(b))
end

def subtract(conn, %{"a" => a, "b" => b}) do
  text(conn, String.to_integer(a) - String.to_integer(b))
end

Notice we’re destructuring (pattern matching) the params into values a and b – we convert those values to integers and use the text function to return raw text (previously we returned JSON, hence the json function). Now when you browse to the add method, for example http://localhost:4000/add?a=10&b=5, or the subtract method, for example http://localhost:4000/sub?a=10&b=5, you should see raw text returned with the answers to the math functions.
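If we’d rather return JSON than raw text, then (as a sketch) we could build a map and hand it to the json function instead; for example the add function could become

def add(conn, %{"a" => a, "b" => b}) do
  # responds with e.g. {"result":15}
  json(conn, %{result: String.to_integer(a) + String.to_integer(b)})
end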

What routes are we exposing

Another useful way of checking the available routes (without running the server) is as follows

mix phx.routes

Config

If you’ve looked around the generated code you’ll notice the config folder.

One thing you might like to do now is change localhost to 0.0.0.0, so edit dev.exs and replace

http: [ip: {127, 0, 0, 1}, port: 4000],

with

http: [ip: {0, 0, 0, 0}, port: 4000],

If you do NOT do this and you decide to deploy the dev release to Docker, you’ll find you cannot access your service from outside of the container (which of course is quite standard).

Releases

Generating a release will precompile any files that can be compiled and allows us to run the server without the source code. As you’d expect, you will need to tell the compiler what configuration to use; we do that by setting MIX_ENV like this

export MIX_ENV=prod

(If no MIX_ENV environment variable is set, it defaults to dev)

Then running

mix release

This will create and assemble your compiled files under _build/prod/rel/my_api/bin/my_api (obviously replacing the last part with your app name).

Note: Replace /prod/ with /dev/ etc. as per the environment you’ve compiled for.

To start your application, run

_build/prod/rel/my_api/bin/my_api start

However, by default this does not start the web server, so we also need to set the following environment variable

export PHX_SERVER=true

You can also run the following; it will generate a bin/server script which sets the PHX_SERVER environment variable for you

mix phx.gen.release

One last thing: you may find when you use the start command (against prod) that it fails saying you are missing the SECRET_KEY_BASE. We can generate one using

mix phx.gen.secret

Then simply

export SECRET_KEY_BASE=your-generated-key

This is used for signing cookies etc., and you can see where the exception comes from within the runtime.exs file. As it’s set as an environment variable, it’s best not to check it into source control.
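The generated runtime.exs contains something along these lines (yours may differ slightly), which is where that error originates

# raise if SECRET_KEY_BASE has not been supplied via the environment
secret_key_base =
  System.get_env("SECRET_KEY_BASE") ||
    raise """
    environment variable SECRET_KEY_BASE is missing.
    You can generate one by calling: mix phx.gen.secret
    """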

Dockerizing our service

Okay, it’s not Elixir specific, but I feel that the natural conclusion to our API/service development is to have it all running in a container. Let’s start by creating a container image based upon the build and using the phx.server call…

Create yourself a Dockerfile which looks like this

FROM elixir:latest

RUN mkdir /app
COPY . /app
WORKDIR /app

RUN mix local.hex --force
RUN mix deps.get
RUN mix compile

EXPOSE 4000

CMD ["mix", "phx.server"]

I’m assuming we’re going to stick with port 4000 in the above and in the commands below, so I’ll document this via the EXPOSE command.

Now to build and run our container let’s use the following

docker build -t pp/my-api:0.1.0 .
docker run --rm --name my-api -p 4000:4000 -d pp/my-api:0.1.0

Now you should be able to use http://localhost:4000 to access your shiny new Elixir/Phoenix API/service.

Note: Remember that if you cannot access the service from outside of the Docker container, ensure you’ve set the http ip in dev.exs to 0.0.0.0

If we want to instead containerize our release build then we could use the following

FROM elixir:latest

ENV PHX_SERVER=true

RUN mkdir /app
COPY /_build/dev/ /app
WORKDIR /app/rel/my_api/bin

EXPOSE 4000

CMD ["./my_api", "start"]

Again, using the previous build and run commands will start the server (if all went to plan).

Code

Code is available in my GitHub blog project repo.