Category Archives: Linux

Ubuntu server updating

Every month (at least) I check my Ubuntu servers for updates, and every month (at least) I forget how to clean up old installs etc.

The following is a list of commands that I use regularly. This post is not meant to be a comprehensive description of these Linux/Ubuntu commands, more a checklist for updating my servers.

df or df /boot

When I log into my servers and see updates available, the first thing I tend to do is run df and check the /boot usage/space available (obviously I could jump straight to the next command in my list, but I sort of like knowing what space I have available). I’ve tried carrying out an update in the past with next to no space available and sadly ended up spending way too long trying to clear up partially installed components and/or free disk space.

Alternatively, if you only want to view the /boot usage/space – which is really what we’re after here – just use df /boot.

sudo apt-get autoremove

apt-get is the Ubuntu package manager (well, it’s actually originally from Debian); we’ll need to run it as the superuser, hence sudo. The autoremove option removes packages that were installed to satisfy dependencies and are no longer needed – old kernels included.

Why is the above useful? Well, as we apply updates to Ubuntu, it’s not unusual to find previous kernel releases sat on our machine in the /boot area (I assume this is so we could roll back a release). Once you’re happy with your latest updates (or, as I do, prior to the next update), run sudo apt-get autoremove to clear out these unused packages and free up disk space.

sudo apt-get update

The update option will retrieve a new list of available packages. This doesn’t install new versions of any software but ensures all the package index files are up to date.

sudo apt-get upgrade

Now it’s time to upgrade, so we run sudo apt-get upgrade. This installs the newest versions of the packages already on the machine; it won’t remove anything (unlike purge and autoremove).

This command uses the package lists retrieved by apt-get update to know which newer versions of packages are available for the machine.

sudo apt-get dist-upgrade

If we also want dependencies handled intelligently (new packages installed and obsolete ones removed as required), dist-upgrade can be used instead of the straight upgrade.
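
As an aside, the update/upgrade pair chains nicely into a single line, so the upgrade only runs if the index refresh succeeds:

sudo apt-get update && sudo apt-get upgrade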

sudo reboot now

Time to restart the server. This will reboot the machine immediately (hence the now).

uname -r

I don’t tend to use this during my “update” process, but if I want to know what version of the Linux kernel I’m running, uname -r will tell me. This becomes important if (as has happened to me) I cannot autoremove old kernels. Obviously when targeting specific kernels you need to know which one is current – or, to put it another way, which one not to remove.

So for example, at the time of this post my current kernel is

4.4.0-72-generic

dpkg -l | grep linux-image

If you want to see what kernels exist on your machine, use dpkg -l | grep linux-image. This (in conjunction with uname -r) allows us to target a purge of each old kernel individually, using the following command.

sudo apt-get -y purge linux-image-?.?.?-??-generic

Obviously replace ?.?.?-?? with the version you wish to remove. This will then remove that specific package.
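
So, with 4.4.0-72-generic current (per uname -r above), removing a hypothetical older 4.4.0-71 kernel would look like the following – obviously never purge the version uname -r reported:

sudo apt-get -y purge linux-image-4.4.0-71-generic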

Redis service and client

Redis is an in-memory key/value store, often used as a cache. Luckily there’s a Docker image for it.

Let’s run an instance of Redis via Docker on an Ubuntu server

docker run --name myredis -d -p 6379:6379 redis

Oh, how I love Docker (and of course the community who create these images). This will run Redis detached (the -d option) and return immediately to your host’s command prompt (i.e. we do not go into the Redis instance).

To run the Redis client we’ll need to switch into the Docker container running Redis and then run the Redis command line interface, thus

docker exec -it myredis bash
redis-cli

We’ll use this CLI later to view data in the cache.
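
As a quick sanity check that the CLI is talking to the server, ping it and try a throwaway key (ping should return PONG):

ping
set test "hello"
get test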

C# client

There are several Redis client libraries available for .NET/C#; I’m going to go with ServiceStack.Redis, mainly because I’ve been using ServiceStack recently. So create a Console application, add the NuGet package for ServiceStack.Redis and now add the following code

using ServiceStack.Redis;

public class Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
}

class Program
{
   static void Main(string[] args)
   {
      var client = new RedisClient("redis://xxx.xxx.xxx.xxx:6379");
      // store the Person against the key "1234", then force a save to disk
      client.Add("1234", new Person {FirstName = "Scooby", LastName = "Doo"});
      client.Save();
   }
}

Obviously change xxx.xxx.xxx.xxx to your server ip address.

This code will simply write the Person object to the store against the key 1234. If you have the redis-cli running then you can type

get 1234

which should produce the following result

"{\"FirstName\":\"Scooby\",\"LastName\":\"Doo\"}"

Of course, we now need to use the ServiceStack.Redis client to read our data back. Just use this

var p = client.Get<Person>("1234");
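
To confirm the round trip worked, printing the returned object’s fields should show the values we stored earlier:

Console.WriteLine($"{p.FirstName} {p.LastName}"); // expect "Scooby Doo"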

Security

By default Redis has no security set up, hence we didn’t need to specify a user name and password. Obviously in a production environment we’d need to implement such security ourselves (or use that provided by a cloud provider such as Azure, if hosting Redis there).

For our instance we can secure Redis as a whole with a password, which clients then supply via the AUTH command. So from redis-cli run

CONFIG SET requirepass "password"
AUTH "password"

If you run AUTH "password" and get (error) ERR Client sent AUTH, but no password is set, you’ll need the CONFIG line first; otherwise the AUTH line alone should work fine. Our client application will need the following change to the URL

var client = new RedisClient("redis://password@xxx.xxx.xxx.xxx:6379");

To remove the password (if you need to) simply type the following from redis-cli

CONFIG SET requirepass ""

References

https://github.com/ServiceStack/ServiceStack.Redis

MongoDB revisited

As I’m busy setting up my Ubuntu server, I’m going to revisit a few topics that I’ve covered in the past, to see whether there are changes to working with various servers. Specifically I’ve gone Docker crazy and want to run these various servers in Docker.

First up, let’s see what we need to do to get a basic MongoDB installation up and running and the C# client to access it (it seems some things have changed since I last did this).

Getting the server up and running

First off we’re going to get the Ubuntu server set up with an instance of MongoDB. So let’s get the latest version of Mongo for Docker

docker pull mongo:latest

this will simply download the latest version of the MongoDB image but not run it. So our next step is to run a MongoDB Docker instance. By default the port MongoDB uses is 27017, but this isn’t available to the outside world, so we’re going to want to map it to a port accessible to our client machine(s). I’m going to use port 28000 (there’s no specific reason for this port choice). Run the following command from Ubuntu

docker run -p 28000:27017 --name my-mongo -d mongo

We’ve mapped MongoDB to the previously mentioned port and named the instance my-mongo. This will run MongoDB in the background. We can now look to write a simple C# client to access the instance.

Interactive Shell

Before we proceed to the client, we might wish to set up users etc. in MongoDB and hence run its shell. Now, running the following

docker exec -t my-mongo mongo

Didn’t quite work as expected: whilst I was placed inside the MongoDB shell, commands didn’t seem to run.

Note: This could be something I’m missing here, but when pressing enter, the shell seemed to think I was about to add another command. I suspect the culprit was the missing -i flag (which keeps STDIN open), i.e. docker exec -it my-mongo mongo.

To work with the shell I found it simpler to connect to the Docker instance using bash, i.e.

docker exec -it my-mongo bash

then run

mongo

to access the shell.

I’m not going to set up any users etc. at this point; we’ll just use the default setup.

Creating a simple client

Let’s fire up Visual Studio 2015 and create a Console application. Then, using NuGet, add the MongoDB.Driver package by MongoDB, Inc. Now add the following code to your Console application

using MongoDB.Bson;
using MongoDB.Driver;

public class Person
{
   public ObjectId Id { get; set; }
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public int Age { get; set; }
}

class Program
{
   static void Main(string[] args)
   {
      var client = new MongoClient("mongodb://xxx.xxx.xxx.xxx:28000");
      var r = client.GetDatabase("MyDatabase");
      var collection = r.GetCollection<Person>("People");
      // insert a single document into the People collection
      collection.InsertOne(new Person 
      { 
         FirstName = "Scooby", 
         LastName = "Doo", 
         Age = 27 
      });
   }
}
Obviously replace the xxx.xxx.xxx.xxx with the IP address of your server (in my case my Ubuntu server box); the port obviously matches the port we exposed via Docker. You don’t need to “create” the database explicitly via the shell or a command, you can just run this code and it’ll create MyDatabase, then the People collection, and then insert a record.

Did it work?

Hopefully your Console application just inserted a record; there should have been no timeout or other exception. Of course we can also use the Console application to read the data back, for example

var client = new MongoClient("mongodb://xxx.xxx.xxx.xxx:28000");
var r = client.GetDatabase("MyDatabase");
var collection = r.GetCollection<Person>("People");
foreach (var p in collection.FindSync(_ => true).ToList())
{
   Console.WriteLine($"{p.FirstName} {p.LastName}");                
}

I’m using the synchronous methods to find and create the list, solely because my Console application is obviously pretty simple, but the MongoDB driver library offers async versions of these methods as well, as sketched below.
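
Here’s a minimal sketch of the async equivalent, assuming the same client/collection setup as above (PrintPeopleAsync is just an illustrative name; you’ll also need using System, System.Threading.Tasks and MongoDB.Driver):

static async Task PrintPeopleAsync(IMongoCollection<Person> collection)
{
   // ToListAsync is the asynchronous counterpart of FindSync(...).ToList()
   var people = await collection.Find(_ => true).ToListAsync();
   foreach (var p in people)
   {
      Console.WriteLine($"{p.FirstName} {p.LastName}");
   }
}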

The above code will write out Scooby Doo as the only entry in our DB, so all worked fine. How about we do the same thing using the shell?

If we now switch back to the server and, if it’s not already running, run the MongoDB shell as previously outlined. From the shell run the following

use MyDatabase
db.People.find()

We should now see a single entry

{ 
  "_id" : ObjectId("581d9c5065151e354837b8a5"), 
  "FirstName" : "Scooby", 
  "LastName" : "Doo", 
  "Age" : 27 
}

Just remember, we didn’t set this instance of MongoDB up to use a Docker Volume and hence when you remove the Docker instance the data will disappear.

So let’s quickly revisit the code to run MongoDB within Docker and fix this. First off, exit back to the server’s prompt (i.e. out of the Mongo shell and out of the Docker bash instance).

Now stop my-mongo using

docker stop my-mongo

You can restart mongo at this point using

docker start my-mongo

and your data will still exist, but if you run the following after stopping the mongo instance

docker rm my-mongo

and execute Mongo again, the data will have gone. To fix this we add a volume option to the run command, so we execute the following

docker run -p 28000:27017 -v mongodb:/data/db --name my-mongo -d mongo

the inclusion of the -v maps MongoDB’s data directory (/data/db, which is where MongoDB stores its data by default) to a volume on the local machine named mongodb. By default this is created in /var/lib/docker/volumes, but of course you could supply a path to an alternative location.
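
To see where Docker actually put the volume, inspect it – the Mountpoint field shows the path on the host:

docker volume inspect mongodb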

Remember, at this point we’re still using the default security (i.e. none). I will probably create a post on setting up Mongo security in the near future.

Building a Linux based NAS

I’ve put together a Linux (Ubuntu) based NAS device. I’m going to list the steps I took to get it all up and running. As a disclaimer though I need to state I am not a Linux expert, so don’t take this as the perfect solution/setup.

Starting point

First off, I actually had a Windows Home Server NAS device, but the hard disk controller died. This means I have a bunch of NTFS formatted drives with lots of media files on and so I need the new NAS to be able to use these drives.

I bought myself a DELL PowerEdge T20 Tower Server which will act as my NAS, it’s a really well spec’d and priced piece of hardware, although a little larger than I would normally want for a NAS and without any easy to access hard drive bays – basically it’s a small tower cased computer, but for the price it’s superb.

Next up I’ve installed Ubuntu Server. So obviously this is so much more than just a NAS, but I’ll concentrate on creating that side of things in this post.

Mounting my drives

After all the hard drives were connected I needed to mount them. As stated, these are already formatted to NTFS.

The first thing we need to do is create some folders to act as the mount points using

sudo mkdir /mnt/foldername

Next up we need to actually mount the drives and assign them to the mount point, for this we’ll use

sudo mount -t ntfs /dev/sda2 /mnt/foldername

Obviously the /dev/sda2 needs to be set to your drive. The easiest way to check what devices/drives exist is to use

sudo fdisk -l

or if, like me, you’re looking for the NTFS drives you can use

sudo fdisk -l | grep NTFS

Mounting the drives like this is transient; the mount will be lost when the NAS reboots, so we need to set them up to “automount” at startup. To do this we need to edit the fstab file, i.e.

sudo nano /etc/fstab

Whilst we can use the /dev/sda2 way of mounting, it’s far better to use the device’s UUID as this will allow us to hot-swap the drives or the like. To find the UUID of your drives simply run

sudo blkid

Now in the fstab file we’ll add lines like this

UUID=123456789ABCDEF /mnt/foldername ntfs defaults 0 2
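
Before relying on a reboot, it’s worth checking the new entries parse correctly; mount -a attempts to mount everything listed in fstab, so mistakes show up immediately:

sudo mount -a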

Using Samba to access the drives

Whilst the previous steps have mounted the drives which are now accessible via the server, we want these drives accessible from Windows and the internal network. Samba allows us to expose the drives, or more specifically their folders to the LAN.

We need to edit the smb.conf file, so run

sudo nano /etc/samba/smb.conf

Here’s an example of an entry for one of our shared drives/folders. In this case assume “share-name” to be the name you want to see in your network browser. The folder-path should point to your mounted drive and/or any folders you want to expose

[share-name]
   path = /path/folder-path
   read only = yes
   browseable = yes

Here we’re stating the share can be browsed for (i.e. it’ll be visible in Windows Explorer when we access the server using \\mynas); I’ve also set the folder to be read only. We add an entry for each folder/drive we want to expose.

Users and Permissions

As each family member has their own user space on the NAS, I need to create users – this is simply a case of using

sudo useradd username

replacing username with the name of the user. We can list all users using

cut -d: -f1 /etc/passwd

Now the useradd command simply adds a user; it does not create a home folder or other private space for them. As my WHS was set up with folders for each user, I don’t actually need the Linux /home folders, but just for completeness: when we added the users, Ubuntu added information to the /etc/passwd file to show where each user’s home folder is, but it didn’t create the folders, so we’ll create the home directories manually. Create a home folder for each of your users using

sudo mkdir /home/username

Now let’s change the ownership of the folders to each user

sudo chown -R username /home/username

To prove everything is working as we want, we can run the command

ls -l

from the home folder and it’ll list the owner of each folder.

Obviously at this point we can go back to smb.conf and expose the users’ home folders to Windows and the LAN, for example as sketched below.
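
Samba has a special [homes] section for exactly this – a minimal sketch, where valid users = %S restricts each home share to its own user (each user then sees their own folder when they connect as themselves):

[homes]
   browseable = no
   read only = no
   valid users = %S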

Once we’ve created the Samba folder configurations for the users’ folders, we’ll probably want to look at setting permissions on them. We go back to editing smb.conf

[share-name]
   path = /path/folder-path
   read only = yes
   browseable = yes
   write list = bob
   create mask = 0755

We can set up a write list of all users who can write to the share (we can also use @group, replacing the word group with an actual group name, to grant a whole group access). Notice the write list will give write access regardless of the read only option.

We can also define a create mask for new files (using standard Linux octal permission flags – 0755, for example, gives the owner read/write/execute and everyone else read/execute).

How about having the NAS start-up and shutdown automatically

It’d be cool if we could conserve a little energy by turning the NAS off when it’s less likely to be used and back on when it’s most likely to be used.

I haven’t yet implemented this permanently, but I have tried the steps in the following document and they worked perfectly, so check out

https://www.mythtv.org/wiki/ACPI_Wakeup#Using_.2Fsys.2Fclass.2Frtc.2Frtc0.2Fwakealarm
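
As a taster, the approach in that document boils down to writing a wake time (in seconds since the epoch) to the RTC before shutting down – something like the following, where the 07:00 wake time is just an example:

echo 0 | sudo tee /sys/class/rtc/rtc0/wakealarm
date -d 'tomorrow 07:00' '+%s' | sudo tee /sys/class/rtc/rtc0/wakealarm
sudo shutdown -h now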

Changing the host name

A slight detour here, but when I installed Ubuntu I chose a host name, which on reflection I wanted to change to match the name the old WHS had as this allowed the family to use the new NAS as if it was the old one (i.e. not have to get them to change to using the new NAS name etc.).

We can find the host name using

hostname

and we can also use the same command to set a new hostname using

sudo hostname MyHostName

This is a temporary change, so to make it permanent (i.e. survive a reboot) we need to edit both of the following files, replacing the old name with the new one

sudo nano /etc/hostname
sudo nano /etc/hosts
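
For reference, assuming the new name is MyHostName as above, /etc/hostname should contain just the name itself, and /etc/hosts should map it to 127.0.1.1 (the Debian/Ubuntu convention):

127.0.1.1   MyHostName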

References

https://help.ubuntu.com/community/Samba/SambaServerGuide
http://askubuntu.com/questions/113733/how-do-i-correctly-mount-a-ntfs-partition-in-etc-fstab
http://askubuntu.com/questions/205841/how-do-i-mount-a-folder-from-another-partition

Setting up my Raspberry Pi

This is the third Raspberry Pi I’ve had to set up and, again, I’ve forgotten the steps, so here’s a post covering what I need.

Steps…

  1. Ensure the OS is up to date
  2. Setup a WiFi USB device (I’m using the EDIMAX USB Adapter)
  3. Setup a static IP address
  4. Install mono
  5. Install Git tools
  6. Setup VNC

Fire up the Pi and login…

Ensure the OS is up to date

sudo apt-get update

This will refresh the list of available packages; it doesn’t install anything by itself.

You can also run

sudo apt-get upgrade

to upgrade the installed packages (including the Linux kernel)

Setup a WiFi USB device (I’m using the EDIMAX USB Adapter)

The EDIMAX EW-7811Un is on the list of WiFi USB adapters that’s verified for use on a Raspberry Pi and it’s the one I’ve used previously so I know it works :)

Note: An excellent and fuller guide is available at Raspberry Pi – Installing the Edimax EW-7811Un USB WiFi Adapter (WiFiPi)

I’ll just reproduce the steps I used here

  1. So obviously we need to start by plugging the USB adapter into the Raspberry Pi
  2. Next run
    iwconfig
    

    This will tell us whether the adapter is ready to use, as it should automatically have been recognized. You should see wlan0 plus other information output. Check the link above for more tests you can do to ensure everything is detected and ready.

  3. Now we want to ensure that the adapter is configured, so type
    sudo nano /etc/network/interfaces
    

    or use your preferred text editor in place of nano.

    Make sure the file contains the following (and they’re uncommented)

    allow-hotplug wlan0
    iface wlan0 inet manual
    wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
    iface default inet dhcp
    
  4. Now we need to create/amend the configuration file for connecting to the network, so type
    sudo nano /etc/wpa_supplicant/wpa_supplicant.conf
    

    We need to add the various entries for connecting to the network here, for example

    network={
       ssid="The Router SSID"
       psk="The Router Password"
       proto=WPA
       key_mgmt=WPA-PSK
       pairwise=TKIP
       auth_alg=OPEN
    }
    

    Obviously replace the ssid and psk with your router’s ssid and the password you use to connect to it.

  5. Type the following, to restart the network interface with the new configuration data

    sudo ifup wlan0
    

Setup a static IP address

Open /etc/network/interfaces again, for example using

sudo nano /etc/network/interfaces

We now want to change the configuration from using a DHCP-supplied address to using a static IP address. So alter the line

iface default inet dhcp

from dhcp to static and add the following

address 192.168.1.100
gateway 192.168.1.1
netmask 255.255.255.0

Obviously replace the address, gateway and netmask with your specific network settings. You may also wish to alter the eth0 adapter to mirror the wlan0 adapter with the same address etc.

Install mono

Finally time to install something. This is the easy part. To install mono just enter the following

sudo apt-get install mono-complete
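
To confirm the install worked, check the version reported by the runtime:

mono --version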

Install Git tools

Time to install Git.

sudo apt-get install git

Setup VNC

Install VNCServer using

sudo apt-get install tightvncserver

Now run tightvncserver and note the desktop number it reports, for connection from the client machine.
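
From the client machine, connect using the server’s address plus that desktop number; assuming the static address set earlier and desktop :1, something like:

vncviewer 192.168.1.100:1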

Raspberry Pi A+

The Raspberry Pi A+ has a single USB port (excluding the USB power socket) and no Ethernet port, which is a bit of a pain if trying to setup WiFi and having problems getting the configuration right (especially as I don’t have a powered USB hub to help me out).

Anyway after a lot of hassle here’s the configuration for the Raspberry Pi A+ running Wheezy Raspbian 7. The USB WiFi device I’m using is a WiPi dongle.

Let’s begin with the /etc/network/interfaces file, mine looks like this

auto lo
iface lo inet loopback

auto wlan0
allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet static

address 192.168.1.2
gateway 192.168.1.1
netmask 255.255.255.0

Obviously the above is set up with a static address; if you want to use a dynamic address, change static to dhcp and remove the address, gateway and netmask lines (to tidy the config up).

Next we need to amend the /etc/wpa_supplicant/wpa_supplicant.conf file, mine looks like this

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
   ssid="The Router SSID"
   psk="The Router Password"
   proto=RSN
   key_mgmt=WPA-PSK
   pairwise=CCMP TKIP
   group=CCMP TKIP
}

The above configures the connection for WPA2 security (use proto=WPA for WPA1).