Maven file structure basics

This is an old blog post that sat in draft for years; it looks complete, so I thought I’d publish it. Hopefully it’s still up to date.

I’m working in Java again and using Maven a lot to get my projects up and running. Whilst I’ve covered some of this stuff in other posts, it’s tended to be part of working with some specific code, so this post is meant to be more about using Maven itself.

Maven convention file structure

By default the following file structure is expected

/src/main/java
/src/main/resources
/src/test/java

Optionally we might also have test resources

/src/test/resources

POM starting point

Maven (by default) expects a file named pom.xml to exist, which contains version information and may include plugins, code generation, dependencies etc.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.putridparrot.common</groupId>
    <artifactId>JsonPatch</artifactId>
    <version>1.0-SNAPSHOT</version>
</project>

Naming conventions (see references)

groupId – “identifies the project uniquely across all projects”, hence it is best represented by something like the package name.
artifactId – the name of the JAR (without the version).
version – standard numbers with dots, i.e. 1.0, 1.0.1 etc. This is a string, so it can also include words such as SNAPSHOT where we want to mark a development version, e.g. 1.0-SNAPSHOT.

Maven commands

Before we get into more POM capabilities, let’s look at the basic set of Maven commands we’ll use most.
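
For example (run from the folder containing the pom.xml)

mvn clean – removes the target folder, i.e. previous build output
mvn compile – compiles the main source code
mvn test – compiles and runs the unit tests
mvn package – packages the compiled code (for example into a JAR) within the target folder
mvn install – installs the built artifact into the local repository (~/.m2)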

Compiling to specific Java versions

<properties>
   <maven.compiler.source>1.8</maven.compiler.source>
   <maven.compiler.target>1.8</maven.compiler.target>
</properties>

OR

<build>
   <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.7.0</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
    </plugins>
</build>

References

Guide to naming conventions on groupId, artifactId and version
Setting the -source and -target of the Java Compiler

Parameterized unit testing in Java

Occasionally (maybe even often) you’ll need some way to run the same unit test code against multiple inputs.

For example, you might have some code that iterates over a string counting certain characters (let’s say it counts the letter Z), the unit test would be exactly the same for testing the following scenarios

  1. When no characters of the expected type exist
  2. When characters of the expected type exist
  3. When the string is empty
  4. When the string is null

The only difference would be the input to the unit test and the expectation for the assert. In such situations we would tend to use parameterized unit tests; in C#/NUnit this would be with the TestCase attribute. In Java with JUnit 5 (org.junit.jupiter.*) this would be with the @ParameterizedTest annotation.

We’re going to need to add the following dependency to our pom.xml (change the version to suit).

<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-params</artifactId>
  <version>5.8.1</version>
  <scope>test</scope>
</dependency>

We could write two unit tests for the above scenario: one for when no characters exist (i.e. the string is null or empty, where we expect a return of 0 from our system under test) and a second that checks a known return value. For the first case we can use something like the following (note that @ValueSource cannot hold a null literal, so the null and empty cases are supplied via @NullAndEmptySource)

@ParameterizedTest
@NullAndEmptySource
@ValueSource(strings = { "aaabbb" })
void noZExists_ShouldReturn_Zero(String input) {
    assertEquals(0, CharacterCounter.countZ(input));
}

Now we’d have another unit test for successful cases, for example

@ParameterizedTest
@ValueSource(strings = { "aaaaZZZ", "ZZZ", "ZZZaaa" })
void zExists_ShouldReturn_Three(String input) {
    assertEquals(3, CharacterCounter.countZ(input));
}

This would be far better if we simply wrote one unit test but could pass in both the input and the expected result, hence combining all the values into a single test. The option I found for this was the @CsvSource annotation, where we write the input, followed by a comma separator, followed by the expectation – of course we could supply more than two arguments per call of the unit test, but this is adequate for our needs. Note that @CsvSource uses single quotes to quote values, hence '' below for the empty string. Our test would now look more like this

@ParameterizedTest
@CsvSource({ "'',0", "aaabbb,0", "aaaaZZZ,3", "ZZZ,3", "ZZZaaa,3" })
void countZ_ShouldReturn_Expectation(String input, int expectation) {
    assertEquals(expectation, CharacterCounter.countZ(input));
}

Localization in SwiftUI

Let’s create a simple little SwiftUI application (mine’s a Mac app) to try out the SwiftUI localization options.

Once you’ve created an application, select the project in the Xcode project navigator. For example, my application’s name is LocalizationApp, so I select this item in the project navigator. From the resultant view select the project (not the target) and this will display a section labelled Localizations. This shows your Development Language, in my case English. Beneath this you’ll see a + button which we can use to add further languages.

Click on the + as many times as you like and add the languages you want to support. I’ve just added French (Fr) for my example.

Adding localizable strings

Add a new file to the project, select a Strings file and name it Localizable (hence it will be Localizable.strings). This file holds key/value pairs, where the key is used to look up the localised string and, as you probably guessed, the value is the actual localised string, for example

"hello-world" = "Hello World";

Note: if you forget the semi-colon, you’ll get an error such as “validation failed: Couldn’t parse property list because the input data was in an invalid format”.

Now if you’ve created the default Mac application using SwiftUI, go to the ContentView and replace the following

Text("Hello, world!")

with

Text("hello-world")

Wait a minute, we seem to have replaced one string with another string, so why isn’t the Text displaying “hello-world”?

The SwiftUI Text control (and other controls) supports LocalizedStringKey. This essentially means that the code above is an implicit version of this

Text(LocalizedStringKey("hello-world"))

So basically, we can think of this (at least in its implicit form) as the string first being looked up as a key within the .strings file and, if it exists, being replaced with the localised value. If the key does not exist in the .strings file then the string is used as it is.

We can also use string interpolation within the .strings file, so for example we might have

"my-name %@" = "My name is %@";

and we can use this in this way

Text("my-name \(name)")

The %@ is a string format specifier and in this instance means we can display a string, but there are other specifiers for int and other types, see String Format Specifiers.
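
For example, interpolating an Int typically maps to the %lld specifier (this is just a sketch – the "apple-count" key and the count variable here are made up for illustration)

"apple-count %lld" = "I have %lld apples";

and

Text("apple-count \(count)")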

What about variables and localization?

We’ve seen that Text and other controls allow an implicit use of LocalizedStringKey, but variable assignment has no such implicit capability. So, for example, if we declared a variable like this

let variable = "hello-world"

Now if we use the variable like this (below) you’ll simply see the string “hello-world” displayed, which is predictable based upon what we know

Text(variable)

Of course we can simply replace the variable initialization with

let variable = LocalizedStringKey("hello-world")

Adding other languages

Let’s now create a new language version of our strings by clicking on the Localizable.strings file; in the file inspector you’ll see a Localize button. As we’d already added a language via the project navigator (at the start of this post), you’ll now see both English and French listed. The Localizable.strings file now appears as a parent to two Localizable files, one named Localizable (English) and one named Localizable (French).

In the French file we’ve added

"hello-world" = "Bonjour le monde";
"my-name %@" = "Mon nom est %@";

Note: What you’ll actually find is that within the application directory there will be two folders, one named en.lproj and one named fr.lproj, each with its own Localizable.strings file.

Testing our localizations

Of course if we now run our application we’ll still see the default language, in my case English, as that’s the locale on my Mac. So how do we test our French translation?

We can actually view the translations side by side (as it were) by amending our code like this

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ContentView()
                .environment(\.locale, .init(identifier: "en"))
            ContentView()
                .environment(\.locale, .init(identifier: "fr"))
        }
    }
}

Or (much more likely) we can go to the application code and replace the WindowGroup with the following

var body: some Scene {
   WindowGroup {
      ContentView()
         .environment(\.locale, .init(identifier: "fr"))
   }
}

Actually, there’s also another way of changing the locale.

Instead of coding the change, select the application name in Xcode’s top bar and a drop down will show “Edit Scheme”. Select the “Run” option on the left and then the tab “Options”. Locate the “App Language” picker and select French (or whichever language you added as non-default). Now run the application again and you’ll see the application using the selected language localization file.

Exporting and Importing localizations

Whilst we can use google translate or other online translation services, we might prefer to export the localization files so we can send them to a professional translation service.

Select the application within the project navigator, then select Product | Export Localizations… this will create a folder, by default named “<Your application name> Localizations” (obviously where Your application name is replaced by your actual app name). This folder contains en.xcloc and fr.xcloc files in my case.

After your translation service/department/friends complete the translations, we can now select the root in the project navigator (i.e. our application) then select Product | Import Localizations… from here select the folder and files you want to import. Click the “Import” button.

What happens if I run this command in PowerShell

As PowerShell allows us to do things like stopping processes, moving files and more, we might prefer to check “what if I run this command” before actually executing it.

WhatIf

Let’s assume we want to stop several processes, but before executing the command we’d like to see some output showing what the command will do.

For example

Get-Process *host | Stop-Process -WhatIf

Instead of actually stopping the processes found via the wildcard *host this will output a list of “what if” statements, showing the operation which would take place against which target (process).
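
The output will look something like this (the process names and ids here are purely illustrative)

What if: Performing the operation "Stop-Process" on target "conhost (4316)".
What if: Performing the operation "Stop-Process" on target "svchost (1020)".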

So we might notice that our wildcard includes unintended processes and can thus save ourselves the headache of unwanted changes.

Adaptive triggers in a universal app.

Adaptive triggers are part of the Universal Windows Platform (UWP), introduced with Windows 10.

With a universal app you can write an application which targets multiple Windows platforms/devices, for example mobile, tablet and desktop.

When writing UI code for a variety of devices you’ll come across the problem where, for example, a button on a desktop needs to be placed elsewhere when used on a mobile phone, and so on.

Adaptive triggers allow us to customize layout based upon the dimensions of the device we’re running on. But wait! That statement is slightly misleading. The trigger actually has nothing to do with the device and everything to do with the window size of the application on the device. So in reality an AdaptiveTrigger is more like a responsive design feature: when the window is expanded one layout might be used, and when the window is reduced in size another layout might be used.

To define an AdaptiveTrigger we use the VisualStateManager and create StateTriggers, which contain our AdaptiveTrigger(s).

For example, let’s define VisualStates and AdaptiveTriggers for a desktop with a minimum window width of 1024 pixels and a phone with a minimum window width of 320 pixels, and we’ll have the button aligned differently for each.

Note: the x:Name is there just for descriptive purposes in this example.

<Grid>
   <!-- assumes a Button named "button" is defined within this Grid -->
   <VisualStateManager.VisualStateGroups>
      <VisualStateGroup>
         <VisualState x:Name="DeskTop">
            <VisualState.Setters>
               <Setter Target="button.(FrameworkElement.HorizontalAlignment)" Value="Left" />
            </VisualState.Setters>
            <VisualState.StateTriggers>
               <AdaptiveTrigger MinWindowWidth="1024" />
            </VisualState.StateTriggers>
         </VisualState>
         <VisualState x:Name="Phone">
            <VisualState.Setters>
               <Setter Target="button.(FrameworkElement.HorizontalAlignment)" Value="Right" />
            </VisualState.Setters>
            <VisualState.StateTriggers>
               <AdaptiveTrigger MinWindowWidth="320" />
            </VisualState.StateTriggers>
         </VisualState>
      </VisualStateGroup>
   </VisualStateManager.VisualStateGroups>
</Grid>

Setting up Ubuntu Server firewall

UFW (Uncomplicated Firewall) is used as the firewall on Linux, in my case on Ubuntu Server. There is a GUI front end available for UFW, but we’re going to use it on a headless server (hence no UI being used).

Status and enabling/disabling the firewall

Simply run the following to check whether your firewall is active or not

sudo ufw status

To enable the firewall simply use the following

sudo ufw enable

Use disable to disable the firewall (as you probably guessed).

Once enabled, run the status command again and you should see a list showing which ports we have defined rules for, along with whether connections to each port are allowed or rejected. For example

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
80                         ALLOW       Anywhere

Allow and reject access

We can allow access to a port, reject access to ports and reject outgoing traffic on ports. When we allow, reject incoming or reject outgoing access we’re creating firewall rules.

To allow access to SSH, for example we do the following

sudo ufw allow 22

This will allow tcp and udp access, but we can be more precise and just allow tcp by using

sudo ufw allow 22/tcp

As you can see from the previous output from the status option, we’ve enabled 22/tcp already.

To reject access to a port we use reject.

Note: If you’re accessing your server using SSH you probably don’t want to reject access to port 22, for obvious reasons, i.e. port 22 is used by SSH so this will block your own access.

sudo ufw reject 80

Application profiles

UFW includes application profiles which allow us to enable predefined lists of permissions

sudo ufw app list

The applications listed by this command can also be seen by listing /etc/ufw/applications.d. For example, on my system I have a file named openssh-server; if you open this with nano (or your preferred editor) you’ll see an INI file format, for example

[OpenSSH]
title=Secure shell server, an rshd replacement
description=OpenSSH is a free implementation of the Secure Shell protocol.
ports=22/tcp

We can also use

sudo ufw app info OpenSSH

Replace OpenSSH with the name of the application profile you want to view.

As you can see, application profiles are just INI files, so you can create your own file, place it into the aforementioned folder and make it available to UFW. Once you’ve created your file you’ll need to tell UFW to load the application definitions using

sudo ufw app update MyApp

Replace MyApp with your application name in the above.
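
For example, a minimal custom profile might look like this (the MyApp name and port are just placeholders for your own application)

[MyApp]
title=My application
description=My self hosted application
ports=8080/tcp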

Of course once we have these profiles we can allow, reject etc. using the application name, i.e.

sudo ufw allow OpenSSH

Logging

By default logging is disabled; we can turn it on using

sudo ufw logging on
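
We can also set the logging level (low, medium, high or full), for example

sudo ufw logging medium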

Linked lists, “do I need them?”

I noticed, a few months back, several posts on LinkedIn and Twitter where people were asking things like “what’s the point of a linked list?”, “do I need linked lists?” etc.

The answer to these sorts of questions is, as in many cases, “it depends”.

What is a linked list?

A singly linked list is a container structure where you usually maintain a HEAD link, and each link has a pointer/reference to the NEXT link in the list or to NULL. The NULL simply indicates there are no more links.

Let’s create a really simple example of this in C#. This will act like a Queue, hence we’ll put to the end of the list and get from the front of the list, i.e. FIFO (First In First Out). It should be pretty obvious how it could be written in most other languages

class Link<T>
{
   public Link(T data)
   {
      Data = data;
   }
  
   public T Data { get; }
   public Link<T> Next { get; set; }
}

class Queue<T>
{
   private Link<T> _head;
   private Link<T> _tail;
 
   public void Put(T data)
   {
      if(_head == null)
      {
         _head = _tail = new Link<T>(data);
      }
      else
      {
         _tail.Next = new Link<T>(data);
         _tail = _tail.Next;
      }
   }

   public T Get()
   {
      if(_head != null) 
      {
         var data = _head.Data;
         _head = _head.Next;
         return data;
      }
      return default(T);
   }
}

In the code above we’ve added a little optimization by including a TAIL link, which just allows us to add to the end of the linked list without having to iterate over the links each time, but otherwise it’s about as simple as it gets.
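
As a quick sketch of how this might be used (purely illustrative)

var queue = new Queue<int>();
queue.Put(1);
queue.Put(2);
queue.Put(3);

Console.WriteLine(queue.Get()); // 1
Console.WriteLine(queue.Get()); // 2
Console.WriteLine(queue.Get()); // 3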

What are the problems with Linked Lists?

So immediately from this fairly trivial example we can see a few issues with this type of container. The most obvious is the lack of direct indexing, i.e. we cannot just peek at or get the third item with O(1) time complexity. Instead, we must go to the HEAD and iterate to the third item via the NEXT links. Of course, an array has far better performance than a linked list in such situations.

With a linked list, we are (potentially) going to end up with quite a bit of memory fragmentation, as each link can be stored anywhere in memory. So that’s another downside of using a linked list.

I suppose one might find the implementation complex compared to just creating an array, although once implemented and fully tested that’s probably not a concern. What we might be more concerned with is that for each item we also have to store a link to the next item.

But let’s be honest, the main issue people have with a linked list is performance, especially compared to an array.

So, what’s the point of a linked list when we could just use an array?

Way back, when I learned to use linked lists, I was using C++ on 8088 processors and we had very little memory available. Now, whilst each link in a linked list does have the overhead of a NEXT link, a linked list can grow in a way that an array cannot.

Let’s assume we have an array of 1000 items and we want to add one more item. We need to create a new array buffer the size of the previous buffer + 1, copy all the items from the old buffer to the new one, add our new item to the new, extended buffer and then free the original array buffer.

This means we have two arrays being held in memory, one with 1000 items and another with 1001, whilst we do the copy. When memory is scarce or heavily fragmented there’s the potential of an out of memory error, as an array needs a contiguous (and hence indexable) block of memory. On top of that, creating a new array and copying every time we add an item will end up being slow.

The first thing the reader might say is: instead of creating a new array with 1 extra item, meaning a new allocation for every item added, let’s create an array with more memory than we currently need. We might use a delta (a value that says: when I create a new buffer it will always be the old buffer length + delta), where delta might be a value such as 100. Now we only need to allocate more memory every 100 additions. In other words, in our example, instead of creating a 1001-item buffer we create an 1100-item one and simply track the length as 1001 and the buffer size as 1100, so we do not need to allocate more memory until we reach 1100 items and want to add another, as shown in the sketch below.
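
A minimal sketch of this growth strategy in C# (assuming a fixed delta of 100 and ignoring concerns such as removal or bounds checking) might look like this

using System;

class GrowableArray<T>
{
   private const int Delta = 100;
   private T[] _buffer = new T[Delta];

   public int Length { get; private set; }

   public void Add(T item)
   {
      if (Length == _buffer.Length)
      {
         // allocate a larger buffer and copy the existing items across
         var newBuffer = new T[_buffer.Length + Delta];
         Array.Copy(_buffer, newBuffer, Length);
         _buffer = newBuffer;
      }
      _buffer[Length++] = item;
   }

   public T this[int index] => _buffer[index];
}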

This reduces the number of memory allocations, but we still have the issue of maintaining two buffers in memory whilst we copy from one to the other and, again, on systems with smaller amounts of memory (such as microcontrollers) there is still the potential of running out of memory due to this copy process and/or fragmentation.

Conclusion

So, on today’s modern desktops, laptops and even phones we probably do not care about the extra issues around creating and resizing arrays: we have plenty of memory (and some languages use garbage collection with heap compaction), so the O(1) indexing is far more important to us. But if you’re using a microprocessor/microcontroller where memory is at a premium, linked lists still make a lot of sense. Of course you will lose out on O(1) indexing, but everything is a trade-off when memory is at a premium.

Unit testing with Swift

Swift comes with its own unit testing framework, XCTest. Let’s take a look at what we need to do to enable unit testing and how we use this library.

Test Package Dependencies

If you created your package via the Swift package manager your project will be laid out with Sources and Tests folders.

Here’s an example of a library package, notice the .testTarget section

targets: [
   .target(
      name: "MyLib",
      dependencies: []),
   .testTarget(
      name: "MyLibTests",
      dependencies: ["MyLib"]),
]
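
For context, a minimal Package.swift containing this target layout might look something like the following (the MyLib names are just placeholders)

// swift-tools-version:5.5
import PackageDescription

let package = Package(
   name: "MyLib",
   products: [
      .library(name: "MyLib", targets: ["MyLib"]),
   ],
   targets: [
      .target(
         name: "MyLib",
         dependencies: []),
      .testTarget(
         name: "MyLibTests",
         dependencies: ["MyLib"]),
   ]
)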

Creating a test fixture

Next up we need to import the XCTest library and derive our test fixture/class from XCTestCase, for example

final class MyLibTests: XCTestCase {
   // tests go here
}

Creating a test

Tests are written as test methods within the test case subclass. Test methods have no parameters, no return value, and the name must begin with the lowercase prefix test, for example

final class MyLibTests: XCTestCase {
   func testIsValid() {
     // test setup and asserts
   }
}

Asserting our tests

The XCTest library prefixes its assertion functions with XCT. Hence we have, for example, XCTAssertEqual to assert that two values are equal

final class MyLibTests: XCTestCase {
   func testIsValid() {
      let a = 1
      let b = 1
      XCTAssertEqual(a, b)
   }
}
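
Running the tests

With the test target in place we can run the tests from the command line using

swift test

or from within Xcode via Product | Test.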

try…catch, but no finally in Swift

My Swift tip of the day…

Swift handles errors (what we would call exceptions in many other languages) using try and catch, but there’s no finally keyword. Instead, we can wrap our clean-up code in a closure and pass it to a defer statement. For example, we open some resource (a file or whatever – we’re assuming this works fine) and we create a defer (a bit like using a Disposable); then we useResource, which might throw, but defer will still call closeResource for us when the enclosing scope exits

let someResource = openResource()
defer { closeResource(someResource) }

do {
   try useResource(someResource)
} catch {
   throw MyError.resourceFailure()
}

Big O and time complexity

Introduction

The time complexity of a function (or algorithm) can be seen as the relative time the function takes to process data of different sizes.

Time complexity is sometimes referred to as simply “complexity”, or “algorithmic complexity” or “asymptotic complexity”. We might measure the time complexity of a function with respect to best case, the average case or more likely and more generally the worst case.

Note: We generally talk about the time complexity of a function in terms of its worst case; however it’s possible we might quote the best or average case, in which case we should explicitly state that the time complexity is best or average case, otherwise worst case is assumed.

As we’ll see, and you probably worked out from the title of this post, we have notation for this called Big O.

When I say “relative time” we’re not talking about the actual performance characteristics of a function, as these depend on the language used and the hardware running the function as well as the actual algorithm. We’re not really interested in the actual time to run an algorithm or function; we’re more interested in how the time taken changes relative to the growth of the data or input. In other words, whilst a function might take milliseconds for 100 items, what happens if we increase the data presented to the function to 10,000 items?

Disclaimer: This post is based upon a computer science view/usage of time complexity and Big O. It is not as rigorously written (or intended to be) as one would require for a mathematics-based analysis and, frankly, it’s primarily based upon what I recall from learning this stuff way back in the early nineties, along with some refresher reading.

A simple example

Let’s look at something equivalent to the “Hello World” app, but for time complexity…

We have a function which counts the number of elements in an array

func count(array: [Int]) -> Int {
  var c = 0
  for item in array {
    c += 1
  }
  return c
}

The language is immaterial, although as I’m messing with Swift at the moment, that’s what I’m presenting here, but it’s so basic, hopefully it’s pretty obvious how this works.

Now let’s pretend I ran this code against an array of 100 items and it ran in 2ms, then I ran the function again with 200 items and measured the time to be 4ms. This might indicate a linear time complexity – in other words, if we continually double the number of items we’d see a doubling of the time taken. Hence plotting time against the size or growth of the data creates a linear graph (i.e. y = mx + c). Of course we’d need to run a fair few more samples to prove this, but you hopefully get the idea.

But wait… I said at the start that we’re interested in relative time.

What we know about this simple function is that, as the array size grows, the time taken grows in proportion to that size, so now we need a way to represent this information.

Big O

Big O is the notation we use to represent this relative time, i.e. the time complexity. The notation starts with a capitalized (big) O (which stands for “order of”) followed by parentheses, and within the parentheses we have a constant or a formula, usually in terms of n, where n represents the size of the data.

In the count code (above) we’re having to iterate over the array to count the number of items. Therefore, to find the length of the array we must iterate over n items, hence this function has a Big O of O(n). You’ll notice though that there are several statements within this function, so let’s be a little more precise about how we calculate its Big O.

func count(array: [Int]) -> Int {
  var c = 0            // O(1)  
  for item in array {  // O(n)
    c += 1             // O(1) 
  }
  return c             // O(1)
}

The line where we assign the initial value to c has constant time complexity (denoted O(1)); in other words this line (and, for that matter, the incrementing of the c variable) takes the same time whatever the size of the input. However, the for loop must loop through each item in the array and hence has a time complexity of O(n), so we really end up with a total time complexity of

O(1) + O(n) + O(1) + O(1)

We can remove the constants as these are immaterial and unaffected by the growth of the array; hence, removing the O(1) terms, we’re left with O(n) time complexity.

There are a bunch of (if you like) standard time complexities you’ll see a lot; these are (in order of increasing dominance)

O(1) = constant time
O(log n) = logarithmic time
O(n) = linear time 
O(n log n) = linear log time
O(n^2) = quadratic time
O(2^n) = exponential time
O(n!) = factorial time 

We ignore lower order terms when the algorithm is dominated by higher order terms; in other words O(n) dominates O(log n), hence a function that has both terms would have an overall time complexity of O(n).

With regard to our sample code, most languages will in fact store the length of the array alongside the array data. Hence to access the length of an array we simply return this length value, which happens in constant time, O(1). Therefore, regardless of the size of the array, the time to get the length will never change in such implementations. If you’re used to C#, think of the difference between using a List’s Count property and LINQ’s Count() method: the first returns the stored count whereas the second may iterate the list, giving quite different time complexities.

Working out the Big O of a function

We had a quick look at how to work out the time complexity of the count function, whereby we look at each line and apply the Big O notation to it, then we look for the dominant term and essentially this becomes our time complexity for the function.

Let’s now look at another function

func display() {
   for x in 0..<100 {    // O(n^2)
     for y in 0..<100 {
        print(x * y)    // O(1)
     }
   }
}

This gives us O(n^2) + O(1) (treating the loop bound as our n); we can discard the constant and hence this function has a time complexity of O(n^2).

Time complexity as a range

It was mentioned earlier that we could calculate the time complexity for best, average or worst case. Whilst best and worst case give us a range to define the efficiency of our function and one might say that average gives us a useful value, we tend to concern ourselves most with the worst case. A simple example of this would be if we now change our count function to search an array for the first occurrence of a value, for example

func find(array: [Int], search: Int) -> Int {
  var pos = 0            
  for item in array {    
    if item == search {
      return pos
    }
    pos += 1
  }
  return -1
}

In the above code we can see that the only real difference from the previous count function is that we return from the O(n) loop when a match is found. If the array’s first item matches our search parameter then this function has a best case time complexity of O(1); on the other hand, if the match is the last item we have a time complexity of O(n). Whilst the best case gives us constant time, we always need to look at the worst case, so this function still has a worst case time complexity of O(n).

Why not just time our runs?

Earlier I mentioned some hypothetical timings for the count algorithm. Why don’t we just use those timings to represent our algorithm’s efficiency?

Well, as you probably ascertained, we’re looking to have a machine-independent measurement. Obviously if we’re comparing timings on a powerful development machine with lots of CPU power and memory, our timings will not be indicative of the same algorithm on a low powered machine. Hence, we’re aiming to get a machine-independent representation.

As per the start of this post: “The time complexity of a function (or algorithm) can be seen as the relative time the function takes to process data of different sizes.”

So, one must also remember that we’re measuring how an algorithm’s time taken changes as data of different sizes is applied to it. But there’s a catch to this. What if we have a function countA which does the count process as implemented above, and a function countB which does the same but adds a sleep or some other constant time delay? The Big O of the two will essentially be the same, but obviously the measured performance on the same machine will differ (potentially vastly).

For example, if our implementations looked like this

func countA(array: [Int]) -> Int {
  var c = 0            // O(1)  
  for item in array {  // O(n)
    c += 1             // O(1) 
  }
  return c             // O(1)
}

func countB(array: [Int]) -> Int {
  var c = 0              // O(1)
  for _ in 1...1000 {    // a constant 1000 iterations – O(1) relative to the array size
  }
  for item in array {    // O(n)
    c += 1               // O(1)
  }
  return c               // O(1)
}

So, in countB we essentially have a constant time loop, and it’s obvious from these two functions that countA will be more performant than countB, yet both have the same O(n) time complexity.

So how do we deal with this difference?

In response to that question, at this time I’m not wholly certain – I need to research this further. What’s obvious here is that the two algorithms are both O(n) with regard to changes in the amount of data, but that’s not a true comparison with one another, as countB will always be slower than countA.