Creating type-safe identifiers for your Codable models

Note:
After publishing this article, it was brought to my attention that the folks at @pointfreeco have a very similar solution for the problems I outline in this post. It's called tagged and implements the same features I cover here, along with several useful extensions. If you like this post and plan to use the concepts I describe, you should take a look at tagged.

It seems that a couple of topics come up regularly on the Swift forums. One of these topics is the newtype for Swift discussion. Last week, I saw a new newtype thread come up and realized that while I used to wonder why folks wanted that feature, and was even opposed to it, I'm now actually sort of in favor of it.

Newtype, in short, is a feature that doesn't just create a typealias, but actually clones or copies a type, creating a whole new type in the process. You can almost think of it as subclassing for structs, but it's not quite that. If you want to understand newtype in detail, I recommend that you take a look at the Swift forum topic; the use cases for a feature like newtype are explained decently over there.
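
To see why a plain typealias isn't enough, consider this small sketch (the aliases are made up for illustration). Both names still refer to the exact same String type, so the compiler can't tell them apart:

typealias ArtistID = String
typealias RecordID = String

let artistId: ArtistID = "artist-1"

// This compiles without complaint; to the compiler, both aliases are just String.
let recordId: RecordID = artistId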

In this post, my goal is not to talk about newtype and convince you why we need it. Instead, I want to write about one of the problems that newtype solves and a solution I came up with that you can take and use in your projects without needing a newtype feature. By the end of this post, you will understand exactly why you'd want type-safe identifiers for your models, what I mean when I say type-safe identifiers, and how you can implement them on your models, even if they conform to Codable, without having to make huge changes to your codebase.

Understanding why you'd want to have type-safe identifiers

When you're working with models, it's not uncommon for them to use Int or String as their unique identifiers.

For example, here are three models that I prepared for this post:

struct Artist: Decodable {
  let id: String
  let name: String
  let recordIds: [String]
}

struct Record: Decodable {
  let id: String
  let name: String
  let artistId: String
  let songIds: [String]
}

struct Song: Decodable {
  let id: String
  let name: String
  let recordId: String
  let artistId: String
}

These models represent a relationship between artists, records and songs. These models are nothing special. They use String for their unique identifier and they refer to each other by their identifiers.

When you write an API to find the objects represented by these models, that API would probably look a bit like this:

struct MusicApi {
  func findArtistById(_ id: String) -> Artist? {
    // lookup code
  }

  func findRecordById(_ id: String) -> Record? {
    // lookup code
  }

  func findSongById(_ id: String) -> Song? {
    // lookup code
  }
}

I'm sure you're still with me at this point. If you've ever written an API that looks items up by their identifier, this code should look extremely familiar. If a match is found, the method returns the found object; otherwise it returns nil.

What's interesting here is that it's easy to make mistakes. For example, consider the following code:

let api = MusicApi()
let song = Song(id: "song-1", name: "A song", recordId: "record-1", artistId: "artist-1")

api.findRecordById(song.artistId) // nil

Can you see what's wrong in this code?

We're trying to find a record using song.artistId and the compiler is fine with that. After all, artistId, recordId and even id are all properties on Song and they are all instances of String. In fact, there's nothing stopping us from using a bogus string as an input for findRecordById.

Preventing mistakes like the one I just showed is the entire purpose of having type-safe identifiers. If you can get the Swift compiler to tell you that you're using an artist identifier instead of a record identifier, or that you're using a plain string directly instead of an identifier that you created explicitly, your code will be much safer and more predictable in the long run, which is fantastic.

So how do we achieve this?

Implementing type-safe identifiers

Let's look at the refactored MusicApi struct before I show you how I implemented my type-safe identifiers. Once you've seen the refactored MusicApi, I think you'll have a much better understanding of what exactly I was trying to achieve:

struct MusicApi {
  func findArtistById(_ id: Artist.Identifier) -> Artist? {
    // lookup code
  }

  func findRecordById(_ id: Record.Identifier) -> Record? {
    // lookup code
  }

  func findSongById(_ id: Song.Identifier) -> Song? {
    // lookup code
  }
}

Instead of accepting a String for each of these methods, I expect a specific kind of object. Because of this, it should be impossible to write the following incorrect code:

let api = MusicApi()
let song = Song(id: "song-1", name: "A song", recordId: "record-1", artistId: "artist-1")

api.findRecordById(song.artistId) // error: cannot convert value of type 'Artist.Identifier' to expected argument type 'Record.Identifier'

The compiler simply won't allow me to do this because the type of artistId on Song is not a Record.Identifier. To show you how this Identifier type works, I want to show you an updated version of the Artist model since that's the simplest model to change. Here's what the old model looked like:

struct Artist: Decodable {
  let id: String
  let name: String
  let recordIds: [String]
}

Very straightforward, right? Now let's look at the updated Artist model and its nested Artist.Identifier:

struct Artist: Decodable {
  struct Identifier: Decodable {
    let wrappedValue: String

    init(_ wrappedValue: String) {
      self.wrappedValue = wrappedValue
    }

    init(from decoder: Decoder) throws {
      let container = try decoder.singleValueContainer()
      self.wrappedValue = try container.decode(String.self)
    }
  }

  let id: Artist.Identifier
  let name: String
  let recordIds: [Record.Identifier]
}

The Identifier struct is nested under Artist, which means its full type is Artist.Identifier. This type wraps a string, and it's Decodable. I have defined two initializers on Identifier. One takes a wrapped string directly, allowing you to create instances of Identifier with any string you choose, and the other is a custom init(from:) that implements Decodable manually for this object. Note that I use a single value container to extract the string I want to wrap in this Identifier. I won't go into all the fine details of custom JSON decoding right now, but consider the following JSON data:

{
  "id": "b0a16f6e-1bf9-4007-807d-d1a59b399a64",
  "name": "Johnny Cash",
  "recordIds": ["9431bcb0-f83b-4eeb-8932-dd105584ca29"]
}

The data represents an Artist object and it has an id property which is a String. This means that when a JSONDecoder tries to decode the Artist object's id property on the original model, it could simply decode the id into a Swift String. Because I changed the type of id from String to Identifier, we need to do a little bit of work to convert "b0a16f6e-1bf9-4007-807d-d1a59b399a64" (which is a String) to Identifier. The Decoder that is passed to the Identifier's custom init(from:) initializer only holds a single value which means we can extract this single value, decode it as a string and assign it to self.wrappedValue.

By implementing Identifier like this, we don't need to do any additional work: the JSON can remain as it was before, we don't need custom decoding logic on Artist itself, and we only need to extract a single value in Artist.Identifier's custom init(from:). Notice that I changed the array of recordIds from [String] to [Record.Identifier]. Let's look at the refactored implementations for Record and Song:

struct Record: Decodable {
  struct Identifier: Decodable {
    let wrappedValue: String

    init(_ wrappedValue: String) {
      self.wrappedValue = wrappedValue
    }

    init(from decoder: Decoder) throws {
      let container = try decoder.singleValueContainer()
      self.wrappedValue = try container.decode(String.self)
    }
  }

  let id: Record.Identifier
  let name: String
  let artistId: Artist.Identifier
  let songIds: [Song.Identifier]
}

struct Song: Decodable {
  struct Identifier: Decodable {
    let wrappedValue: String

    init(_ wrappedValue: String) {
      self.wrappedValue = wrappedValue
    }

    init(from decoder: Decoder) throws {
      let container = try decoder.singleValueContainer()
      self.wrappedValue = try container.decode(String.self)
    }
  }

  let id: Song.Identifier
  let name: String
  let recordId: Record.Identifier
  let artistId: Artist.Identifier
}

Both objects implement the same Identifier logic that was added to Artist, which means that we can update all String identifiers to their respective Identifier objects. The implementation for each Identifier is the same every time, which makes this code pretty repetitive. Let's refactor the models one last time to remove the duplicated Identifier logic:

struct Identifier<T, KeyType: Decodable>: Decodable {
  let wrappedValue: KeyType

  init(_ wrappedValue: KeyType) {
    self.wrappedValue = wrappedValue
  }

  init(from decoder: Decoder) throws {
    let container = try decoder.singleValueContainer()
    self.wrappedValue = try container.decode(KeyType.self)
  }
}

struct Artist: Decodable {
  typealias IdentifierType = Identifier<Artist, String>

  let id: IdentifierType
  let name: String
  let recordIds: [Record.IdentifierType]
}

struct Record: Decodable {
  typealias IdentifierType = Identifier<Record, String>

  let id: IdentifierType
  let name: String
  let artistId: Artist.IdentifierType
  let songIds: [Song.IdentifierType]
}

struct Song: Decodable {
  typealias IdentifierType = Identifier<Song, String>

  let id: IdentifierType
  let name: String
  let recordId: Record.IdentifierType
  let artistId: Artist.IdentifierType
}

Instead of giving each model its own Identifier, I have created an Identifier struct that is generic over T, which represents the type it belongs to, and KeyType, which is the type of value that the Identifier wraps. In my example I'm only wrapping String identifiers, but making the Identifier struct generic allows me to also use Int, UUID, or other types as identifiers.

Each model now defines a typealias to make working with its identifier a bit easier.
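
Because the key type is generic too, nothing about this approach is limited to String. As a quick, hypothetical example (the Playlist model below isn't part of the code we've been building), a model with numeric identifiers could look like this:

struct Playlist: Decodable {
  typealias IdentifierType = Identifier<Playlist, Int>

  let id: IdentifierType
  let name: String
  let songIds: [Song.IdentifierType]
}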

With this update in place, we should also update the MusicApi object one last time:

struct MusicApi {
  func findArtistById(_ id: Artist.IdentifierType) -> Artist? {
    // lookup logic
  }

  func findRecordById(_ id: Record.IdentifierType) -> Record? {
    // lookup logic
  }

  func findSongById(_ id: Song.IdentifierType) -> Song? {
    // lookup logic
  }
}

With this code in place, it's now impossible to accidentally use a bad identifier to search for an object. This means that when we want to look up an Artist, we need to obtain or create an instance of Artist.IdentifierType, which is a typealias for Identifier<Artist, String>. Pretty cool, right?
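
Depending on what your lookup code does, you'll probably also want to compare identifiers or use them as dictionary keys. A minimal sketch of what that could look like is shown below; as long as these extensions live in the same file as the Identifier declaration, Swift can synthesize the conformances because wrappedValue is the only stored property:

extension Identifier: Equatable where KeyType: Equatable {}
extension Identifier: Hashable where KeyType: Hashable {}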

And even though we added a whole new object to the models, they still decode the same JSON that they did when we started out with String identifiers.
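
If you want to verify this for yourself, a quick check could look something like the following. The JSON is the same Johnny Cash example from earlier in this post:

import Foundation

let json = """
{
  "id": "b0a16f6e-1bf9-4007-807d-d1a59b399a64",
  "name": "Johnny Cash",
  "recordIds": ["9431bcb0-f83b-4eeb-8932-dd105584ca29"]
}
""".data(using: .utf8)!

let artist = try! JSONDecoder().decode(Artist.self, from: json)
print(artist.id.wrappedValue) // b0a16f6e-1bf9-4007-807d-d1a59b399a64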

In summary

In this week's post, I offered you a glimpse into what happens when I notice something on the Swift forums that I want to learn more about. In this case, it was a discussion about a potential newtype declaration in Swift. I explored the problem that newtype could solve: the ability to create a copy of a certain type and make it clear that the copy should not be treated the same as the original type.

I demonstrated this by showing you how a String identifier can be error-prone if somebody accidentally uses the wrong kind of identifier to search for an object in a database. The Swift compiler can't tell you that you're using the wrong identifier because all String instances look the same to the compiler, even when you hide them behind a typealias.

To work around this problem and help the Swift compiler, I showed you how you can create a new struct that wraps an identifier and adds type safety to the identifiers of your models. I made this new Identifier object generic so it's very flexible, and because it implements custom decoding logic it can decode a JSON string on its own. This means that we can still decode the same JSON that the original String-based models could, with the added benefit of being type-safe.

Why your @Atomic property wrapper doesn’t work for collection types

A while ago I implemented my first property wrapper in a code base I work on. I implemented an @Atomic property wrapper to make access to certain properties thread-safe by synchronizing read and write access to these properties using a dispatch queue. There are a ton of examples on the web that explain these property wrappers, how they can be used and why they're awesome. To my surprise, I found out that most, if not all, of these property wrappers don't actually work for the types where it matters most: collection types.

Let's look at an example that I tweeted about earlier. Given this property wrapper:

@propertyWrapper
public struct Atomic<Value> {
  private let queue = DispatchQueue(label: "com.donnywals.\(UUID().uuidString)")
  private var value: Value

  public init(wrappedValue: Value) {
    self.value = wrappedValue
  }

  public var wrappedValue: Value {
    get {
      return queue.sync { value }
    }
    set {
      queue.sync { value = newValue }
    }
  }
}

What should the output of the following code be?

class MyObject {
  @Atomic var atomicDict = [String: Int]()
}

var object = MyObject()
let g = DispatchGroup()

for index in (0..<10) {
  g.enter()
  DispatchQueue.global().async {
    object.atomicDict["item-\(index)"] = index
    g.leave()
  }
}

g.notify(queue: .main, execute: {
  print(object.atomicDict)
})

The code loops over a range ten times and inserts a new key in my @Atomic dictionary for every loop. The output I'm hoping for here is the following:

["item-0": 0, "item-1": 1, "item-2": 2 ... "item-7": 7, "item-8": 8, "item-9": 9]

Instead, here's the output of the code I showed you:

["item-3": 3]

Surely this can't be right, I thought when I first encountered this. So I ran the program again. Here's the output of the second run:

["item-6": 6]

Wait. What?

I know. It's weird. But it actually makes sense.

Because Dictionary is a value type, every time we run object.atomicDict["item-\(index)"] = index we're given a copy of the underlying dictionary (that's how the property wrapper's get works), we modify this copy, and then we reassign this copy as the property wrapper's wrappedValue. And because the loop runs ten times and concurrently runs object.atomicDict["item-\(index)"] = index, we first get ten copies of the empty dictionary, since that's its initial state. Each copy is then modified by adding index to the dictionary for the "item-\(index)" key, which leaves us with ten dictionaries, each with a single item. Next, the property wrapper's set is called for each of those ten copies. Whichever copy is scheduled to be assigned last will be the dictionary's final value.

Don't believe me? Let's modify the property wrapper a bit to help us see:

@propertyWrapper
public struct Atomic<Value> {
  private let queue = DispatchQueue(label: "com.donnywals.\(UUID().uuidString)")
  private var value: Value

  public init(wrappedValue: Value) {
    self.value = wrappedValue
  }

  public var wrappedValue: Value {
    get {
      return queue.sync {
        print("executing get and returning \(value)")
        return value
      }
    }
    set {
      queue.sync {
        print("executing set and assigning \(newValue)")
        value = newValue
      }
    }
  }
}

I've added some print statements to help us see when each get and set closure is executed, and to see what we're returning and assigning.

Here's the output of the code I showed you at the beginning with the print statements in place:

executing get and returning [:]
executing get and returning [:]
executing get and returning [:]
executing get and returning [:]
executing get and returning [:]
executing get and returning [:]
executing get and returning [:]
executing get and returning [:]
executing get and returning [:]
executing get and returning [:]
executing set and assigning ["item-5": 5]
executing set and assigning ["item-7": 7]
executing set and assigning ["item-1": 1]
executing set and assigning ["item-0": 0]
executing set and assigning ["item-6": 6]
executing set and assigning ["item-3": 3]
executing set and assigning ["item-8": 8]
executing set and assigning ["item-4": 4]
executing set and assigning ["item-9": 9]
executing set and assigning ["item-2": 2]
executing get and returning ["item-2": 2]
["item-2": 2]

This output visualizes the exact process I just mentioned. Obviously, this is not what we wanted when we made the @Atomic property wrapper and applied it to the dictionary. The entire purpose of doing this is to allow multi-threaded code to safely read from and write to our dictionary. The problem I've shown here applies to all collection types in Swift that are passed by value.

So how can we fix the @Atomic property wrapper? I don't know. I have tried several solutions but nothing really fits. The only solution I have seen that works is to add a special closure to your property wrapper, like Vadim Bulavin shows in his post on @Atomic. While a closure like the one Vadim shows is effective and makes the property wrapper play nicely with collection types, it's not the kind of API I would like to have for my property wrapper. Ideally, you'd be able to use the dictionary subscripts just like you normally would, without thinking about it, instead of using special syntax that you have to remember.
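
For reference, here's a rough sketch of that idea. This is my own approximation rather than Vadim's exact API; the mutate(_:) method and the projectedValue are names I made up for this example. The key point is that the read-modify-write happens inside a single queue.sync call instead of a separate get and set:

import Foundation

@propertyWrapper
public final class Atomic<Value> {
  private let queue = DispatchQueue(label: "com.donnywals.\(UUID().uuidString)")
  private var value: Value

  public init(wrappedValue: Value) {
    self.value = wrappedValue
  }

  public var wrappedValue: Value {
    get { queue.sync { value } }
    set { queue.sync { value = newValue } }
  }

  // Expose the wrapper itself so callers can write object.$atomicDict.mutate { ... }
  public var projectedValue: Atomic<Value> { self }

  // Perform a read-modify-write as one synchronized operation.
  public func mutate(_ transform: (inout Value) -> Void) {
    queue.sync { transform(&value) }
  }
}

// Usage: mutate the dictionary in place instead of going through the subscript.
// object.$atomicDict.mutate { $0["item-\(index)"] = index }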

My current solution is to not use this property wrapper for collection types and to instead use some kind of wrapper type that is far more specific to your use case. Something like the following:

public class AtomicDict<Key: Hashable, Value>: CustomDebugStringConvertible {
  private var dictStorage = [Key: Value]()

  private let queue = DispatchQueue(label: "com.donnywals.\(UUID().uuidString)", qos: .utility, attributes: .concurrent,
                                    autoreleaseFrequency: .inherit, target: .global())

  public init() {}

  public subscript(key: Key) -> Value? {
    // Reads are performed synchronously on the concurrent queue.
    get { queue.sync { dictStorage[key] } }
    // Writes are dispatched asynchronously with a barrier, so they wait for in-flight
    // reads to finish and block new reads until the write has completed.
    set { queue.async(flags: .barrier) { [weak self] in self?.dictStorage[key] = newValue } }
  }

  public var debugDescription: String {
    return dictStorage.debugDescription
  }
}

If we update the code from the start of this post to use AtomicDict it would look like this:

class MyObject {
  var atomicDict = AtomicDict<String, Int>()
}

var object = MyObject()
let g = DispatchGroup()

for index in (0..<10) {
  g.enter()
  DispatchQueue.global().async {
    object.atomicDict["item-\(index)"] = index
    g.leave()
  }
}

g.notify(queue: .main, execute: {
  print(object.atomicDict)
})

This code produces the following output:

["item-2": 2, "item-7": 7, "item-4": 4, "item-0": 0, "item-6": 6, "item-9": 9, "item-8": 8, "item-5": 5, "item-3": 3, "item-1": 1]

The reason this AtomicDict works is that we don't send copies of the dictionary to users of AtomicDict like we did for the property wrapper. Instead, AtomicDict is a class that users modify. The class uses a dictionary to get and set values, but this dictionary is owned and modified by one instance of AtomicDict only. This eliminates the issue we had before since we're not passing empty copies of the initial dictionary around.

In Summary

Figuring out why the @Atomic property wrapper doesn't work for collection types was a fun exercise in learning more about concurrency and value types, and how they can produce weird but perfectly explainable results. I've not been successful in refactoring my own @Atomic property wrapper to work with all types just yet, but I hope that some day I will. If you have any ideas, please run them through the relatively simple test I presented in this post and let me know.

If you have any feedback or questions about this post, don't hesitate to reach out to me on Twitter.

Changing a publisher’s Failure type in Combine

One of Combine's features that is somewhat painful to work with is its error mechanism. In Combine, publishers have an Output type and a Failure type. The Output represents the values that a publisher can emit, and the Failure represents the errors that a publisher can emit. This is really convenient because you know exactly what to expect from a publisher you subscribe to. But what happens when you have a slightly more complicated setup? What happens if you want to transform a publisher's output into a new publisher, but the errors of the old and new publishers don't line up?
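
To make that a bit more tangible, here are two publishers and their associated types (the URL is just a placeholder for this example):

import Combine
import Foundation

// Output is (data: Data, response: URLResponse), Failure is URLError.
let dataTaskPublisher = URLSession.shared.dataTaskPublisher(for: URL(string: "https://example.com")!)

// Output is Int, Failure is Never; this publisher can never emit an error.
let justPublisher = Just(42)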

The other day I was asked a question about this. The person in question wanted to know how they could write an extension on Publisher that would transform URLRequest values into URLSession.DataTaskPublisher values so each emitted URLRequest would automatically become a network request. Here's what my initial experiment looked like (or rather, the code I would have liked to write):

extension Publisher where Output == URLRequest {
  func performRequest() -> AnyPublisher<(data: Data, response: URLResponse), Error> {
    return self
      .flatMap({ request in
        return URLSession.shared.dataTaskPublisher(for: request)
      })
      .eraseToAnyPublisher()
  }
}

Not bad, right? Unfortunately, this doesn't compile. Instead, the compiler produces the following error:

instance method 'flatMap(maxPublishers:_:)' requires the types 'Self.Failure' and 'URLSession.DataTaskPublisher.Failure' (aka 'URLError') be equivalent

In short, flatMap requires that the errors of the source publisher and the publisher I'm creating are the same. That's a bit of a problem because I don't know exactly what the source publisher's error is. I also don't know how, and if, I can map it to URLError, or whether I can map URLError to Self.Failure.

Luckily, we know that Publisher.Failure must conform to the Error protocol. This means that we can erase the error type completely, and transform it into a generic Error instead with Combine's mapError(_:) operator:

extension Publisher where Output == URLRequest {
  func performRequest() -> AnyPublisher<(data: Data, response: URLResponse), Error> {
    return self
      .mapError({ (error: Self.Failure) -> Error in
        return error
      })
      .flatMap({ request in
        return URLSession.shared.dataTaskPublisher(for: request)
          .mapError({ (error: URLError) -> Error in
            return error
          })
      })
      .eraseToAnyPublisher()
  }
}

Note that I apply mapError(_:) to self, which is the source publisher, and to the URLSession.DataTaskPublisher that's created in the flatMap. This way, both publishers emit a generic Error rather than their specialized error. The upside is that this code compiles. The downside is that when we subscribe to the publisher created in performRequest, we'll need to figure out which error may have occurred. An alternative to erasing the error completely could be to map any errors emitted by the source publisher to a URLError:

extension Publisher where Output == URLRequest {
  func performRequest() -> AnyPublisher<(data: Data, response: URLResponse), URLError> {
    return self
      .mapError({ (error: Self.Failure) -> URLError in
        return URLError(.badURL)
      })
      .flatMap({ request in
        return URLSession.shared.dataTaskPublisher(for: request)
      })
      .eraseToAnyPublisher()
  }
}

I like this solution a little bit better because we don't lose all error information. The downside here is that we don't know which error may have occurred upstream. Neither solution is ideal, but the point here is not for me to tell you which of these solutions is best for your app. The point is that you can see how to transform a publisher's failure using mapError(_:) to make it fit your needs.
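
To give you an idea of how you might subscribe to the URLError-based version of performRequest, a quick sketch could look like this (the Just publisher and the URL are stand-ins for whatever produces URLRequest values in your app):

import Combine
import Foundation

let request = URLRequest(url: URL(string: "https://example.com")!)

let cancellable = Just(request)
  .performRequest()
  .sink(receiveCompletion: { completion in
    if case .failure(let error) = completion {
      print("request failed with URLError: \(error)")
    }
  }, receiveValue: { data, response in
    print("received \(data.count) bytes")
  })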

Before I wrap up this Quick Tip, I want to show you an extension that you can use to transform the failure of any publisher into a generic Error:

extension Publisher {
  func genericError() -> AnyPublisher<Self.Output, Error> {
    return self
      .mapError({ (error: Self.Failure) -> Error in
        return error
      }).eraseToAnyPublisher()
  }
}

You could use this extension as follows:

extension Publisher where Output == URLRequest {
  func performRequest() -> AnyPublisher<(data: Data, response: URLResponse), Error> {
    return self
      .genericError()
      .flatMap({ request in
        return URLSession.shared.dataTaskPublisher(for: request)
          .genericError()
      })
      .eraseToAnyPublisher()
  }
}

It's not much, but it saves a couple of lines of code. Be careful when using this operator though. You lose all error details from upstream publishers in favor of slightly better composability. Personally, I think your code will be more robust when you transform errors into the error that's needed downstream, like I did in the second example. It makes sure that you explicitly handle any errors rather than ignoring them.

If you have any questions or feedback about this Quick Tip, make sure to reach out on Twitter.

An introduction to Big O in Swift

Big O notation. It's a topic that a lot of us have heard about, but most of us don't intuitively know or understand what it is. If you're reading this, you're probably a Swift developer. You might even be a pretty good developer already, or maybe you're just starting out and Big O was one of the first things you encountered while studying Swift.

Regardless of your current skill level, by the end of this post, you should be able to reason about algorithms using Big O notation. Or at least I want you to understand what Big O is, what it expresses, and how it does that.

Understanding what Big O is

Big O notation is used to describe the performance of a function or algorithm that is applied to a set of data where the size of that set might not be known. This is done through a notation that looks as follows: O(1).

In my example, I've used a performance that is arguably the best you can achieve. The performance of an algorithm that is O(1) is not tied to the size of the data set it's applied to. So it doesn't matter if you're working with a data set that has 10, 20 or 10,000 elements in it. The algorithm's performance should stay the same at all times. The following graph can be used to visualize what O(1) looks like:

A graph that shows O(1)

As you can see, the time needed to execute this algorithm is the same regardless of the size of the data set.

An example of an O(1) algorithm you might be familiar with is getting an element from an array using a subscript:

let array = [1, 2, 3]
array[0] // this is done in O(1)

This means that no matter how big your array is, reading a value at a certain position will always have the same performance implications.

Note that I'm not saying that "it's always fast" or "always performs well". An algorithm that is O(1) can be very slow or perform horribly. All O(1) says is that an algorithm's performance does not depend on the size of the data set it's applied to.

An algorithm that has O(1) as its complexity is considered constant. Its performance does not degrade as the data set it's applied to grows.

An example of a complexity that grows as a dataset grows is O(n). This notation communicates linear growth. The algorithm's execution time or performance degrades linearly with the size of the data set. The following graph demonstrates linear growth:

A graph that shows O(n)

An example of a linear growth algorithm in Swift is map. Because map has to loop over all items in your array, a map is considered an algorithm with O(n) complexity. A lot of Swift's built-in functional operators have similar performance. filter, compactMap, and even first(where:) all have O(n) complexity.

If you're familiar with first(where:) it might surprise you that it's also O(n). I just explained that O(n) means that you loop over, or visit, all items in the data set once. first(where:) doesn't (have to) do this. It can return as soon as an item is found that matches the predicate used as the argument for where:

let array = ["Hello", "world", "how", "are", "you"]
var numberOfWordsChecked = 0
let threeLetterWord = array.first(where: { word in
    numberOfWordsChecked += 1
    return word.count == 3
})

print(threeLetterWord) // how
print(numberOfWordsChecked) // 3

As this code shows, we only need to check three elements in the array to find a match. Based on the rough definition I gave you earlier, you might say that this algorithm clearly isn't O(n) because we didn't loop over all of the elements in the array like map did.

You're not wrong! But Big O notation does not care about your specific use case. If we were looking for the first occurrence of the word "Big O" in that array, the algorithm would have to loop over all elements in the array and still return nil because it couldn't find a match.

Big O notation is most commonly used to depict a "worst case" or "most common" scenario. In the case of first(where:) it makes sense to assume the worst-case scenario. first(where:) is not guaranteed to find a match, and if it does, it's equally likely that the match is at the beginning or end of the data set.

Earlier, I mentioned that reading data from an array is an O(1) operation because no matter how many items the array holds, the performance is always the same. The Swift documentation says the following about reading from and writing to an array:

Complexity: Reading an element from an array is O(1). Writing is O(1) unless the array's storage is shared with another array or uses a bridged NSArray instance as its storage, in which case writing is O(n), where n is the length of the array.

This is quite interesting because arrays do something special when you insert items into them. An array will usually reserve a certain amount of memory for itself. Once the array fills up, the reserved memory might not be large enough and the array will need to reserve some more memory for itself. This resizing of memory comes with a performance hit that isn't mentioned in the snippet above. I'm pretty sure the reason for this is that the Swift core team decided to use the most common performance for array writes here rather than the worst case. It's far more likely that your array doesn't resize when you insert a new item than that it does.
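
As a side note, if you know up front roughly how many elements you're going to add, you can sidestep most of this resizing with Array's reserveCapacity(_:). This is a small sketch of that idea rather than something the quoted documentation covers:

var numbers = [Int]()
numbers.reserveCapacity(1_000) // allocate enough storage up front

for value in 0..<1_000 {
    numbers.append(value) // no intermediate resizing is needed while appending
}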

Before I dive deeper into how you can determine the Big O of an algorithm, I want to show you two more examples. The first is quadratic performance, or O(n^2):

A Graph that shows O(n^2)

Quadratic performance is common in some simple sorting algorithms like bubble sort. A simple example of an algorithm that has quadratic performance looks like this:

let integers = (0..<5)
let squareCoords = integers.flatMap { i in 
    return integers.map { j in 
        return (i, j)
    }
}

print(squareCoords) // [(0,0), (0,1), (0,2) ... (4,2), (4,3), (4,4)]

Generating squareCoords requires me to loop over integers using flatMap. In that flatMap, I loop over integers again using a map. This means that the line return (i, j) is invoked 25 times, which is equal to 5^2. Or in other words, n^2. For every element we add to the input, the time it takes to generate squareCoords grows quadratically. Creating coordinates for a 6x6 square would take 36 loops, 7x7 would take 49 loops, 8x8 would take 64 loops, and so forth. I'm sure you can see why O(n^2) isn't the best performance to have.

The last common performance notation I want to show you is O(log n). As the name of this notation shows, we're dealing with a complexity that grows on a logarithmic scale. Let's look at a graph:

A Graph that shows O(log n)

An algorithm with O(log n) complexity will often perform worse than some other algorithms for a smaller data set. However, as the data set grows and n approaches an infinite number, the algorithm's performance will degrade less and less. An example of this is a binary search. Let's assume we have a sorted array and want to find an element in it. A binary search would be a fairly efficient way of doing this:

extension RandomAccessCollection where Element: Comparable, Index == Int {
    func binarySearch(for item: Element) -> Index? {
        guard self.count > 1 else {
            if let first = self.first, first == item {
                return self.startIndex
            }  else {
                return nil
            }
        }

        let middleIndex = (startIndex + endIndex) / 2
        let middleItem = self[middleIndex]

        if middleItem < item {
            return self[index(after: middleIndex)...].binarySearch(for: item)
        } else if middleItem > item {
            return self[..<middleIndex].binarySearch(for: item)
        } else {
            return middleIndex
        }
    }
}

let words = ["Hello", "world", "how", "are", "you"].sorted()
print(words.binarySearch(for: "world")) // Optional(3)

This implementation of a binary search assumes that the input is sorted in ascending order. In order to find the requested element, it finds the middle index of the data set and compares the element at that index to the requested element. If the requested element should come before the current middle element, the array is cut in half and the first half is used to perform the same task until the requested element is found. If the requested element should come after the middle element, the second half of the array is used to perform the same task.

A binary search is very efficient because the number of lookups grows much more slowly than the size of the data set. Consider the following:

For 1 item, we need at most 1 lookup
For 2 items, we need at most 2 lookups
For 10 items, we need at most 3 lookups
For 50 items, we need at most 6 lookups
For 100 items, we need at most 7 lookups
For 1000 items, we need at most 10 lookups

Notice how going from ten to fifty items makes the data set five times bigger but the lookups only double. And going from a hundred to a thousand elements grows the data set tenfold but the number of lookups only grows by three. That's not even fifty percent more lookups for ten times the items. This is a good example of how the performance degradation of an O(log n) algorithm gets less significant as the data set increases.

Let's overlay the graphs I've shown you so far so you can compare them.

A mix of all mentioned Big O graphs

Notice how each complexity has a different curve. This makes different algorithms a good fit for different purposes. There are many more common Big O complexity notations used in programming. Take a look at this Wikipedia page to get an idea of several common complexities and to learn more about the mathematical reasoning behind Big O.

Determining the Big O notation of your code

Now that you have an idea of what Big O is, what it depicts and roughly how it's determined, I want to take a moment and help you determine the Big O complexity of code in your projects.

With enough practice, determining the Big O for an algorithm will almost become an intuition. I'm always thoroughly impressed when folks have developed this sense because I'm not even close to being able to tell Big O without carefully examining and thinking about the code at hand.

A simple way to tell the performance of code could be to look at the number of for loops in a function:

func printAll<T>(from items: [T]) {
    for item in items {
        print(item)
    }
}

This code is O(n). There's a single for loop in there and the function loops over all items from its input without ever breaking out. It's pretty clear that the performance of this function degrades linearly.

Alternatively, you could consider the following as O(1):

func printFirst<T>(_ items: [T]) {
    print(items.first)
}

There are no loops and just a single print statement. This is pretty straightforward. No matter how many items are in the array, this code will always take the same amount of time to execute.

Here's a trickier example:

func doubleLoop<T>(over items: [T]) {
    for item in items {
        print("loop 1: \(item)")
    }

    for item in items {
        print("loop 2: \(item)")
    }
}

Ah! You might think. Two loops. So it's O(n^2) because in the example from the previous section the algorithm with two loops was O(n^2).

The difference is that the algorithm from that example had a nested loop that iterated over the same data as the outer loop. In this case, the loops sit alongside each other, which means that the execution time is proportional to twice the number of elements in the array, not to the number of elements squared. For that reason, this example can be considered O(2n). This complexity is often shortened to O(n) because the performance degrades linearly; it doesn't matter that we loop over the data set twice.

Let's take a look at an example of a loop that's shown in Cracking the Coding Interview that had me scratching my head for a while:

func printPairs(for integers: [Int]) {
    for (idx, i) in integers.enumerated() {
        for j in integers[idx...] {
            print((i, j))
        }
    }
}

The code above contains a nested loop, so it immediately looks like O(n^2). But look closely. We don't loop over the entire data set in the nested loop. Instead, we loop over a subset of elements. As the outer loop progresses, the work done in the inner loop diminishes. If I write down the printed lines for each iteration of the outer loop it'd look a bit like this if the input is [1, 2, 3]:

(1, 1) (1, 2) (1, 3)
(2, 2) (2, 3)
(3, 3)

If we'd add one more element to the input, we'd need four more loops:

(1, 1) (1, 2) (1, 3) (1, 4)
(2, 2) (2, 3) (2, 4)
(3, 3) (3, 4)
(4, 4)

Based on this, we can say that the outer loop executes n times. It's linear in the number of items in the array. The inner loop runs roughly half of n on average for each time the outer loop runs. The first time it runs n times, then n-1, then n-2, and so forth. So one might say that the runtime for this algorithm is O(n * n / 2), which is the same as O(n^2 / 2), and similar to how we simplified O(2n) to O(n), it's normal to simplify O(n^2 / 2) to O(n^2).

The reason you can simplify O(n^2 / 2) to O(n^2) is that Big O is used to describe a curve, not the exact performance. If you plot graphs for both formulas, you'll find that the curves look similar. Dividing by two simply doesn't impact the performance degradation of this algorithm in a significant way. For that reason, it's preferred to use the simpler Big O notation instead of the complex, detailed one because it communicates the complexity of the algorithm much more clearly.

While you may have landed on O(n^2) by seeing the two nested for loops immediately, it's important to understand the reasoning behind such a conclusion too because there’s a little bit more to it than just counting loops.

In summary

Big O is one of those things that you have to practice often to master. I have covered a handful of common Big O notations in this week's post, and you saw how those notations can be derived from looking at code and reasoning about it. Some developers have a sense of Big O that's almost like magic; they seem to just know all of the patterns and can uncover them in seconds. Others, myself included, need to spend more time analyzing and studying to fully understand the Big O complexity of a given algorithm.

If you want to brush up on your Big O skills, I can only recommend practice. Tons and tons of practice. And while it might be a bit much to buy a whole book for a small topic, I like the way Cracking the Coding Interview covers Big O. It has been helpful for me. There was a very good talk at WWDC 2018 about algorithms by Dave Abrahams too. You might want to check that out. It's really good.

If you've got any questions or feedback about this post, don't hesitate to reach out on Twitter.

Using Closures to initialize properties in Swift

There are several ways to initialize and configure properties in Swift. In this week's Quick Tip, I would like to briefly highlight the possibility of using closures to initialize complex properties in your structs and classes. You will learn how you can use this approach of initializing properties, and when it's useful. Let's dive in with an example right away:

struct PicturesApi {
  private let dataPublisher: URLSession.DataTaskPublisher = {
    let url = URL(string: "https://mywebsite.com/pictures")!
    return URLSession.shared.dataTaskPublisher(for: url)
  }()
}

In this example, I create a URLSession.DataTaskPublisher object using a closure that is executed immediately when PicturesApi is instantiated. Even though this way of initializing a property looks very similar to a computed property, it's really more of an inline function that runs once to give the property its initial value. Note some of the key differences between this closure-based style of initializing and using a computed property:

  • The closure is executed once when PicturesApi is initialized. A computed property is computed every time the property is accessed.
  • A computed property has to be var, the property in my example is let.
  • You don't put an = sign between the type of a computed property and the opening {. You do need this when initializing a property with a closure.
  • Note the () after the closing }. The () execute the closure immediately when PicturesApi is initialized. You don't use () for a computed property.
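
To make those differences a bit more concrete, here's the same publisher written both ways (a minimal sketch; the URL is the same placeholder as before):

import Combine
import Foundation

struct ComparisonExample {
  // Computed property: no = sign, must be var, and the body runs on every access.
  var computedPublisher: URLSession.DataTaskPublisher {
    let url = URL(string: "https://mywebsite.com/pictures")!
    return URLSession.shared.dataTaskPublisher(for: url)
  }

  // Closure-initialized stored property: note the = sign and the trailing ().
  // The closure runs exactly once, when ComparisonExample is initialized.
  let storedPublisher: URLSession.DataTaskPublisher = {
    let url = URL(string: "https://mywebsite.com/pictures")!
    return URLSession.shared.dataTaskPublisher(for: url)
  }()
}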

Using closures to initialize properties can be convenient for several reasons. One of those is shown in my earlier example. You cannot create an instance of URLSession.DataTaskPublisher without a URL. However, this URL is only needed by the data task publisher and nowhere else in my PicturesApi. I could define the URL as a private property on PicturesApi but that would somehow imply that the URL is relevant to PicturesApi while it's really not. It's only relevant to the data task that uses the URL. Using a closure based initialization strategy for my data task publisher allows me to put the URL close to the only point where I need it.

Tip:
Note that this approach to creating a data task is not something I would recommend for a complex or sophisticated networking layer. I wrote a post about architecting a networking layer a while ago, and in general I would recommend that you follow the approach described there if you want to integrate a proper networking layer in your app.

Another reason to use closure based initialization could be to encapsulate bits of configuration for views. Consider the following example:

class SomeViewController: UIViewController {
  let mainStack: UIStackView = {
    let stackView = UIStackView()
    stackView.axis = .vertical
    stackView.spacing = 16
    return stackView
  }()

  let titleLabel: UILabel = {
    let label = UILabel()
    label.textColor = .red
    return label
  }()
}

In this example, the views are configured using a closure instead of configuring them all in viewDidLoad or some other place. Doing this will make the rest of your code much cleaner because the configuration for your views is close to where the view is defined rather than somewhere else in (hopefully) the same file. If you prefer to put all of your views in a custom view that's loaded in loadView instead of creating them in the view controller like I have, this approach looks equally nice in my opinion.

Closure based initializers can also be lazy so they can use other properties on the object they are defined on. Consider the following code:

class SomeViewController: UIViewController {
  let titleLabel: UILabel = {
    let label = UILabel()
    label.textColor = .red
    return label
  }()

  let subtitleLabel: UILabel = {
    let label = UILabel()
    label.textColor = .orange
    return label
  }()

  lazy var headerStack: UIStackView = {
    let stack = UIStackView(arrangedSubviews: [self.titleLabel, self.subtitleLabel])
    stack.axis = .vertical
    stack.spacing = 4
    return stack
  }()
}

By making headerStack lazy and closure-based, it's possible to initialize it directly with its arranged subviews and configure it in one go. I really like this approach because it keeps everything close together in a readable way. If you don't make headerStack lazy, the compiler will complain: you can't use properties of self before self is fully initialized. And if headerStack is not lazy, it needs to be initialized before self is considered initialized. But if headerStack depends on properties of self for its initialization, you run into a chicken-and-egg problem. Making headerStack lazy solves this.

Closure-based initialization is a convenient and powerful concept in Swift that I like to use a lot in my projects. Keep in mind though that, like every language feature, it can be overused. When used carefully, closures can really help clean up your code and group logic together where possible. If you have any feedback or questions about this article, reach out on Twitter. I'd love to hear from you.

How to use SF Symbols in your apps

It's been a while since Apple announced SF Symbols at WWDC 2019, and I remember how excited everybody was about them. The prospect of an easy-to-integrate set of over 1,500 icons that you can display in nine weights sounds very appealing, and it has helped me prototype my ideas with good-looking icons much quicker than ever before.

I haven't heard or seen much content related to SF Symbols since they came out, and I realized I hadn't written about them at all, so I figured I'd give you some insight into SF Symbols and how you can integrate them in your app. By the end of this blog post, you will know where to look for symbols, how you can integrate them, and how you can configure them to fit your needs.

Browsing for symbols

The first step to using SF Symbols in your app is to figure out which symbols Apple provides, and which symbols you might need in your app. With over 1,500 symbols to choose from I’m pretty sure there will be one or more symbols that fit your needs.

To browse Apple's SF Symbols catalog, you can download the official SF Symbols macOS app from Apple's design resources. With this app, you can find all of Apple's symbols, easily view them in different weights, and see what they are called so you can use them in your app.

If you’d rather look for symbols in a web interface, you can use this website. Unfortunately, the website can’t show the actual symbols due to license restrictions. This means that you’ll have to look up the icons by name and use Apple’s SF Symbols app to see what they look like.

Once you’ve found a suitable symbol for your app, it’s time to use it. Let’s find out how exactly in the next section.

Using SF Symbols in your app

Using SF Symbols in your app is relatively straightforward, with one huge caveat. SF Symbols are only available on iOS 13 and later. This means that there is no way for you to use SF Symbols on iOS 12 and below. However, if your app supports iOS 13 and up (which in my opinion is entirely reasonable at this point), you can begin using SF Symbols immediately.
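
If your deployment target is below iOS 13 but you still want to show symbols on devices that do run iOS 13, you can fall back to a bundled asset with an availability check. This is a small sketch; the helper function and the fallback asset name are made up for this example:

import UIKit

func paintbrushImage() -> UIImage? {
  if #available(iOS 13.0, *) {
    return UIImage(systemName: "paintbrush.fill")
  } else {
    // Ship your own asset in the asset catalog for older iOS versions.
    return UIImage(named: "paintbrush-fallback")
  }
}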

Once you've found a symbol you like and know its name, you can use it in your app as follows:

UIImage(systemName: "<SYMBOL NAME>")

Let's say you want to use a nice paintbrush symbol for a tab bar item. You could use the following code:

let paintbrushSymbol = UIImage(systemName: "paintbrush.fill")
let tabBarItem = UITabBarItem(title: "paint", 
                              image: paintbrushSymbol, 
                              selectedImage: nil)

Instances of SF Symbols are created as UIImage instances using the systemName argument instead of the named argument you might normally use. Pretty straightforward, right?

Find a symbol, copy its name and pass it to UIImage(systemName: ""). Simple and effective.

Configuring a symbol to fit your needs

SF Symbols can be configured to have different weights and scales. To apply a weight or scale, you pass a UIImage.SymbolConfiguration when you create the UIImage for your SF Symbol. For example, you can change an SF Symbol's weight using the following code:

let configuration = UIImage.SymbolConfiguration(weight: .ultraLight)
let image = UIImage(systemName: "pencil", withConfiguration: configuration)

The above code creates an ultra light SF Symbol. You can use different weight settings from ultra light, all the way to black which is super bold. For a full overview of all available weights, refer to Apple's SF Symbols human interface guidelines.

In addition to changing a symbol's weight, you can also tweak its size by setting the symbol's scale. You can do this using the following code:

let configuration = UIImage.SymbolConfiguration(scale: .large)
let image = UIImage(systemName: "pencil", withConfiguration: configuration)

The code above applies a large scale to the symbol. You can choose between small, medium and large for your icon scale.

It's also possible to combine different configurations using the applying(_:) method on UIImage.SymbolConfiguration:

let lightConfiguration = UIImage.SymbolConfiguration(weight: .ultraLight)
let largeConfiguration = UIImage.SymbolConfiguration(scale: .large)

let combinedConfiguration = lightConfiguration.applying(largeConfiguration)

The above code creates a symbol configuration for an icon that is both ultra light and large.
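
You can then pass the combined configuration to a UIImage just like before:

let combinedImage = UIImage(systemName: "pencil", withConfiguration: combinedConfiguration)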

One last thing I'd like to show you is how you can change an SF Symbol's color. If you're using a symbol in a tab bar, it will automatically be blue, or adapt to your tab bar's tint color. However, the default color for an SF Symbol is black. To use a different color, you can use the withTintColor(_:) method that's defined on UIImage to create a new image with the desired tint color. For example:

let defaultImage = UIImage(systemName: "pencil")!
let whiteImage = defaultImage.withTintColor(.white)

The above code can be used to create a white pencil icon that you can use wherever needed in your app.

In summary

In this week's post, you learned how you can find, use, and configure Apple's SF Symbols in your apps. In my opinion, Apple did a great job implementing SF Symbols in a way that makes them extremely straightforward to use.

Unfortunately, this feature is iOS 13+ and the SF Symbols macOS app could be improved a little bit, but overall it's not too bad. I know that I'm using SF Symbols all the time in any experiments I do because they're available without any hassle.

If you have any questions, tips, tricks or feedback about this post don't hesitate to reach out on Twitter!

Find and copy Xcode device support files

Every once in a while I run into a situation where I update my iPhone to the latest iOS before I realize I'm still using an older version of Xcode for some projects. I usually realize this when Xcode tells me that it "Could not locate device support files". I'm sure many folks run into this problem.

Luckily, we can fix this by copying the device support files from the new Xcode over to the old Xcode, or by grabbing the device support files from an external source.

Copying device support files if you already have the latest Xcode installed

If you have the latest Xcode installed but need to use an older Xcode version to work on a specific project, you can safely copy the device support files from the new Xcode over to the old.

This can be done using the following command in your terminal:

cp -R /Applications/<new xcode>/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/<target ios> /Applications/<old xcode>/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/<target ios>

Make sure to replace <new xcode> with the path to your latest Xcode app, <old xcode> with your old Xcode app and <target ios> with the iOS version you wish to copy device support files for.

Tip:
I use xcversion to manage my Xcode installs. Read more in my post about having more than one Xcode version installed.

So for example, to copy device support files for iOS 13.4 from Xcode 11.4 to Xcode 11.3.1 you'd run the following command:

cp -R /Applications/Xcode-11.4.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/13.4 /Applications/Xcode-11.3.1.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/13.4

Let's look at the same command, formatted a little nicer:

cp -R \
  /Applications/Xcode-11.4.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/13.4 \
  /Applications/Xcode-11.3.1.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/13.4

Both of the above snippets copy the iOS 13.4 support files from Xcode 11.4 to Xcode 11.3.1. After doing this, restart Xcode (I always do, just to be sure) and you should be able to run your Xcode 11.3.1 project on devices running iOS 13.4.

As an alternative to copying the files, you can also link them using the ln -s command in your terminal:

ln -s \
  /Applications/Xcode-11.4.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/13.4 \
  /Applications/Xcode-11.3.1.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/13.4

This command creates a symbolic link to the device support files instead of copying them. Both commands achieve the same result and are equally effective.

Obtaining device support files if you don't have the latest Xcode installed

Not everybody who needs device support files will have the latest Xcode available. If this is the case, I recommend that you take a look at this repository. It collects device support files for all iOS versions, so you can clone that repository and copy, or link, device support files from it to the Xcode version you're using. You can use commands similar to those I showed in the previous section, except you'd replace the new Xcode path with the path to the appropriate device support files in the cloned repo.

Enforcing code consistency with SwiftLint

If you're ever amongst a group of developers and want to spark some intense discussion, all you need to do is call out that tabs are better than spaces. Or that indenting code with two spaces is much better than four. Or that the curly bracket after a function definition goes on the next line rather than on the same line as the method name.

A lot of us tend to get extremely passionate about our preferred coding styles, and we're not afraid to discuss them in depth. Which is fine, but this is not the kind of discussion you and your team should have for every PR that's made against your git repository. And you also don't want to explain and defend your coding style choices every time a new team member joins your team.

Luckily, developers don't just love arguing about their favorite code style. They also tend to get some joy out of building tools that solve tedious and repetitive problems. Enforcing a coding style is most certainly one of those tedious problems, and for every sufficiently tedious problem there is a tool to help you deal with it.

In this week's post, I would like to introduce you to a tool called SwiftLint. SwiftLint is used by developers all over the world to help them detect problems in how they style their code and to fix those problems. I will show you how you can add SwiftLint to your projects, configure it so it conforms to your wishes, and use it to automatically correct the problems it finds so you don't have to do so manually.

Adding SwiftLint to your project

Before you can use SwiftLint in your project, you need to install it. If you have Homebrew installed, you can install SwiftLint using the following command:

brew install swiftlint

Running this command will pull down and install the SwiftLint tool for you.

Once SwiftLint is installed, you can immediately begin using it by running the swiftlint command in your project folder from the Terminal.

Alternatively, you can add SwiftLint to your project using CocoaPods by adding the following line to your Podfile:

pod 'SwiftLint'

Using CocoaPods to install SwiftLint allows you to use different versions of SwiftLint for different projects, and you can pin specific releases instead of always using the latest release like Homebrew does.

After installing SwiftLint through CocoaPods, you can navigate to your project folder in the terminal and run the Pods/SwiftLint/swiftlint command to analyze your project with the SwiftLint version that was installed through CocoaPods.

Running SwiftLint with its default settings on a fresh project yields the following output:

❯ swiftlint
Linting Swift files at paths
Linting 'ViewController.swift' (1/3)
Linting 'AppDelegate.swift' (2/3)
Linting 'SceneDelegate.swift' (3/3)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/ViewController.swift:20:1: warning: Trailing Newline Violation: Files should have a single trailing newline. (trailing_newline)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/ViewController.swift:18:1: warning: Vertical Whitespace Violation: Limit vertical whitespace to a single empty line. Currently 2. (vertical_whitespace)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/SceneDelegate.swift:16:1: warning: Line Length Violation: Line should be 120 characters or less: currently 125 characters (line_length)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/SceneDelegate.swift:19:1: warning: Line Length Violation: Line should be 120 characters or less: currently 143 characters (line_length)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/SceneDelegate.swift:27:1: warning: Line Length Violation: Line should be 120 characters or less: currently 137 characters (line_length)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/SceneDelegate.swift:53:1: warning: Trailing Newline Violation: Files should have a single trailing newline. (trailing_newline)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/SceneDelegate.swift:20:15: warning: Unused Optional Binding Violation: Prefer `!= nil` over `let _ =` (unused_optional_binding)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/SceneDelegate.swift:15:1: warning: Vertical Whitespace Violation: Limit vertical whitespace to a single empty line. Currently 2. (vertical_whitespace)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/SceneDelegate.swift:51:1: warning: Vertical Whitespace Violation: Limit vertical whitespace to a single empty line. Currently 2. (vertical_whitespace)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/AppDelegate.swift:16:1: warning: Line Length Violation: Line should be 120 characters or less: currently 143 characters (line_length)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/AppDelegate.swift:23:1: warning: Line Length Violation: Line should be 120 characters or less: currently 177 characters (line_length)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/AppDelegate.swift:31:1: warning: Line Length Violation: Line should be 120 characters or less: currently 153 characters (line_length)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/AppDelegate.swift:37:1: warning: Trailing Newline Violation: Files should have a single trailing newline. (trailing_newline)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/AppDelegate.swift:15:1: warning: Vertical Whitespace Violation: Limit vertical whitespace to a single empty line. Currently 3. (vertical_whitespace)
/Users/dwals/Personal/SwiftLintDemo/SwiftLintDemo/AppDelegate.swift:35:1: warning: Vertical Whitespace Violation: Limit vertical whitespace to a single empty line. Currently 2. (vertical_whitespace)
Done linting! Found 15 violations, 0 serious in 3 files.

While it's nice that we can run SwiftLint on the command line, it would be much nicer if SwiftLint's output were shown directly in Xcode, and if SwiftLint ran automatically for every build. You can achieve this by adding a new Run Script Phase to your project's Build Phases tab:

Click the + icon and select New Run Script Phase:

Open the newly added Run Script step and add the following code to it:

if which swiftlint >/dev/null; then
  swiftlint
else
  echo "warning: SwiftLint not installed, download from https://github.com/realm/SwiftLint"
fi

Your step should look like this in Xcode:

If you're running SwiftLint with CocoaPods, your Run Script Phase should look as follows:

"${PODS_ROOT}/SwiftLint/swiftlint"

The earlier version of the Run Script Phase would execute the globally installed SwiftLint version rather than the one that's installed by CocoaPods.

After setting up your build phase, Xcode will run SwiftLint after every build and show you inline warnings and errors where appropriate. This is much more convenient than using the Terminal to run SwiftLint and figuring out where each warning belongs.

Setting up custom SwiftLint rules

SwiftLint can do some really good work to help you write better code, and it's quite smart about it too. For example, SwiftLint will urge you to use array.isEmpty over array.count == 0. It will also prefer that you use myVar != nil over let _ = myVar, and more. For a complete list of SwiftLint's rules, you can look at this page. There are a ton of rules, so I won't cover them all in this post; it's simply too much.
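
To make that a little more concrete, here's a small sketch of the kind of code SwiftLint flags and the alternatives it prefers. The names in this snippet are made up for illustration:

let favorites: [String] = []

// Flagged; SwiftLint prefers isEmpty over comparing count to zero
if favorites.count == 0 {
  print("no favorites yet")
}

// Preferred
if favorites.isEmpty {
  print("no favorites yet")
}

let selectedSong: String? = nil

// Flagged by the unused_optional_binding rule
if let _ = selectedSong {
  print("a song is selected")
}

// Preferred
if selectedSong != nil {
  print("a song is selected")
}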

Every rule in the rule directory comes with a comprehensive page that explains what the rule does, and whether it's enforced in the default configuration. One rule that I kind of dislike is the Line Length rule. I use a marker in Xcode that suggests a certain line length, but I don't want SwiftLint to enforce this rule. Based on the directory page for this rule, you can find out that it's enabled by default, that it warns you for a line length of 120 characters or longer, and that it throws an error at 200 characters or more. Additionally, this rule applies to URLs, comments, function declarations and any other code you write.

You can disable or customize this rule in a .swiftlint.yml file (note the . in front of the filename). Even though I dislike the line length rule, other members of my team might really like it. After thorough discussions, we might decide that this rule should not apply to URLs and comments, it should warn at 140 characters and it should throw an error at 240 characters.

To set this up, you need to add a .swiftlint.yml file to your project directory. Note that this file should be added alongside your Xcode project and your Xcode workspace. It should not be placed inside of your project files directory. SwiftLint expects your .swiftlint.yml file to exist in the same directory that you'll run SwiftLint from which, in this case, is the folder that contains your Xcode project.

To set up the line length rule to behave as I mentioned, add the following contents to .swiftlint.yml:

line_length:
  warning: 140
  error: 240
  ignores_comments: true
  ignores_urls: true

To configure a specific rule, you create a new yaml node with the rule name. Inside of that node you can add the configuration for that specific rule. It's also possible to provide lists of enabled and disabled rules in your .swiftlint.yml file:

disabled_rules:
  - unused_optional_binding

opt_in_rules:
  - empty_count

Each enabled or disabled rule should be listed on its own line, prefixed with a -, under its respective node. You can find the identifiers for all SwiftLint rules in the rules directory.

If you don't want to run SwiftLint on all of your source files, you can specify a list of excluded folders or files in .swiftlint.yml as follows:

excluded:
  - Pods
  - Carthage

You can use patterns like Sources/*/MyFiles.swift to match wildcards if needed.

For a complete list of possible yaml configuration keys, refer to the SwiftLint repository.

Sometimes you don't want to opt out of a rule completely, but you do want to make an exception. If this is the case, you can specify the exception for a complete file, for a single line, or for a block of code. A full overview with examples is available in the SwiftLint repository, but I'll include a brief overview here.

The following comment disables the line length rule from the moment it's found, until the end of the file or until the rule is explicitly enabled again. You can specify multiple rules separated by a space.

// swiftlint:disable line_length

If you want to (re-)enable a specific SwiftLint rule you can write the following comment in your code:

// swiftlint:enable line_length

You can also use comments to enable the next, previous or current violation of a SwiftLint rule. I took the following example from the SwiftLint repository:

// swiftlint:disable:next force_cast
let noWarning = NSNumber() as! Int
let hasWarning = NSNumber() as! Int
let noWarning2 = NSNumber() as! Int // swiftlint:disable:this force_cast
let noWarning3 = NSNumber() as! Int
// swiftlint:disable:previous force_cast

Determining which SwiftLint rules you should apply to your codebase is a highly personal decision that you should consider carefully with your team.

Because SwiftLint runs as part of your build, it will automatically run when your CI builds your project, so there's no additional work for you there. If you want to run SwiftLint only on CI, you can remove the Run Script Phase from your Xcode project and run SwiftLint on CI directly using the commands mentioned at the start of this section.

Using SwiftLint to fix your code automatically

Some of SwiftLint's rules support automated corrections. You can execute SwiftLint's automated corrections using the following command:

swiftlint autocorrect

If you're using SwiftLint through Cocoapods you'll want to use the following command instead:

Pods/SwiftLint/swiftlint autocorrect

This command will immediately make changes to your source files without asking for permission. I highly recommend committing any uncommitted changes in your project to git before running autocorrect. This will allow you to see exactly what SwiftLint changed, and you will be able to undo any undesired changes easily.

In Summary

This week, you learned about SwiftLint and how you can add it to your projects to automatically ensure that everybody on your team sticks to the same coding style. A tool like SwiftLint removes discussion and bias from your workflow because SwiftLint tells you when your code does not meet your team's standards. This means that you spend less time pointing out styling mistakes in your PRs, and you don't have to sit down with every new team member to explain (and defend) every stylistic choice you made in your codebase. Instead, everybody can look at the SwiftLint configuration file and understand what decisions the team has made.

I showed you how you can set up your .swiftlint.yml configuration file, and how you can apply specific exceptions in your code where needed. Keep in mind that you should not find yourself adding these kinds of exceptions to your code regularly. If this is the case, you should probably add a new rule to your SwiftLint configuration, or remove an existing rule from it.

Lastly, you learned about the autocorrect command that will automatically fix any SwiftLint warnings and errors where possible.

If you have any questions or feedback for me don't hesitate to send me a Tweet.

Calculating the difference in hours between two dates in Swift

Sometimes you need to calculate the difference between two dates in a specific format. For instance, you might need to know the difference between dates in hours. Or maybe you want to find out how many days there are between two dates. One approach for this would be to determine the number of seconds between two dates using timeIntervalSince:

let differenceInSeconds = lhs.timeIntervalSince(rhs)

You could use this difference in seconds to convert to hours, minutes or any other unit you might need. But we can do better in Swift using DateComponents. Given two dates, you can get the difference in hours using the following code:

let diffComponents = Calendar.current.dateComponents([.hour], from: startDate, to: endDate)
let hours = diffComponents.hour

The hour property on diffComponents will give you the number of full hours between two dates. This means that a difference of two and a half hours will be reported as two.

If you're looking for the difference between two dates in hours and minutes, you can use the following code:

let diffComponents = Calendar.current.dateComponents([.hour, .minute], from: lhs, to: rhs)
let hours = diffComponents.hour
let minutes = diffComponents.minute

If the dates are two and a half hours apart, this would give you 2 for the hour component, and 30 for the minute component.
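
Here's a minimal, self-contained sketch of that scenario; the dates are made up and exactly two and a half hours apart:

import Foundation

let startDate = Date()
let endDate = startDate.addingTimeInterval(2.5 * 60 * 60) // two and a half hours later

let diffComponents = Calendar.current.dateComponents([.hour, .minute], from: startDate, to: endDate)
print(diffComponents.hour ?? 0)   // 2
print(diffComponents.minute ?? 0) // 30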

This way of calculating a difference is pretty smart. If you want to know the difference in minutes and seconds, you could use the following code:

let diffComponents = Calendar.current.dateComponents([.minute, .second], from: lhs, to: rhs)
let minutes = diffComponents.minute
let seconds = diffComponents.second

Considering the same input where the dates are exactly two and a half hours apart, this will give you 150 for the minute component and 0 for the second component. Because you didn't ask for an hour component, the full difference is expressed in minutes, so you get 150 minutes instead of 30.

You can use any date component unit for this type of calculation. Some examples include years, days, nanoseconds and even eras.
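
For example, here's a quick sketch that uses the day component to count the number of full days between two made-up dates:

let calendar = Calendar.current
let newYearsDay = calendar.date(from: DateComponents(year: 2020, month: 1, day: 1))!
let firstOfMarch = calendar.date(from: DateComponents(year: 2020, month: 3, day: 1))!

let days = calendar.dateComponents([.day], from: newYearsDay, to: firstOfMarch).day
print(days ?? 0) // 60, because 2020 is a leap year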

Date components are a powerful way to work with dates, and I highly recommend using this approach instead of doing math with timeIntervalSince yourself, because Calendar takes calendrical details like daylight saving time and leap days into account for you.

If you have questions or feedback about this tip, feel free to shoot me a Tweet.

Adding your app’s content to Spotlight

On iOS, you can swipe down on the home screen to access the powerful Spotlight search feature. Users can type queries in Spotlight and it will search through several areas of the system for results. You may have noticed that Spotlight includes iMessage conversations, emails, websites, and more. As an app developer, you can add content from your app to the Spotlight search index so your users can find results that exist in your app through Spotlight.

An important aspect of the Spotlight index is that you can choose whether you want to index your app contents publicly, or privately. In this post, you will learn what that means and how it works.

All in all, this post covers the following topics:

  • Adding content to Spotlight
  • Adding Spotlight search as an entry point for your app

By the end of this post, you will know everything you need to know to add your app's content to the iOS Spotlight index to enhance your app's discoverability.

Adding content to Spotlight

There are several mechanisms that you can utilize to add content to Spotlight. I will cover two of them in this section:

  • NSUserActivity
  • CSSearchableItem

For both mechanisms, you can choose whether your content should be indexed publicly, or privately. When you index something privately, the indexed data does not leave the user's device and it's only added to that user's Spotlight index. When you choose to index an item publicly, a hash of the indexed item is sent to Apple's servers. When other users' devices start sending the same hash to Apple's servers, Apple will begin recognizing your indexed item as useful, or important. Note that having many users send the same item once doesn't mean much to Apple. They are looking for indexed items that are accessed regularly by each user.

I don't know the exact thresholds Apple maintains, but once a certain threshold is reached, Apple will add the publicly indexed item to Spotlight for users that have your app but may not have accessed the content that was indexed for other users. If your indexed items include a Universal Link URL, your indexed item can even appear in Safari's search results if the user doesn't have your app installed yet. This means that adding content to the Spotlight index, and doing so accurately and honestly, can really boost your app's discoverability because your app might appear in places on a user's device where it otherwise would not have.

Adding content to Spotlight using NSUserActivity

The NSUserActivity class is used for many activity related objects in iOS. It's used for Siri Shortcuts, to encapsulate deeplinks, to add content to Spotlight and more. If you're familiar with NSUserActivity, the following code should look very familiar to you:

let userActivity = NSUserActivity(activityType: "com.donnywals.example")
userActivity.title = "This is an example"
userActivity.persistentIdentifier = "example-identifier"
userActivity.isEligibleForSearch = true // add this item to the Spotlight index
userActivity.isEligibleForPublicIndexing = true // add this item to the public index
userActivity.becomeCurrent() // making this the current user activity will index it

As you can see, creating and indexing an NSUserActivity object is relatively straightforward. I've only set a handful of the user activity's properties here. If you want to index your app's content, there are several other fields you might want to populate. For example, contentAttributeSet, keywords and webpageURL. I strongly recommend that you look at these properties in the documentation for NSUserActivity and populate them if you can. You don't have to though. You can use the code I've shown you above and your indexed user activities should pop up in Spotlight pretty much immediately.
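
As a quick sketch, this is what a slightly richer version of the activity from the snippet above could look like. The keywords and URL here are made up for illustration:

import Foundation

let userActivity = NSUserActivity(activityType: "com.donnywals.example")
userActivity.title = "This is an example"
userActivity.persistentIdentifier = "example-identifier"
userActivity.keywords = ["example", "music"] // extra terms users can search for
userActivity.webpageURL = URL(string: "https://www.donnywals.com/example") // made-up URL, used for results outside of your app
userActivity.isEligibleForSearch = true
userActivity.becomeCurrent()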

User activities are ideally connected to the screens a user visits in your app. For instance, you can register them in viewDidAppear and set the created user activity to be the view controller's activity before calling becomeCurrent:

self.userActivity = userActivity
self.userActivity?.becomeCurrent()

You should do this every time your user visits the screen that the user activity belongs to. By doing this, the current user activity is always the true current user activity, and iOS will get a sense of the most important and often used screens in your app. This will impact the Spotlight search result ranking of the indexed user activity. Items that are used regularly rank higher than items that aren't used regularly.
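
As a rough sketch of what that could look like in practice, assuming a hypothetical ExampleViewController and the activity type from before:

import UIKit

class ExampleViewController: UIViewController {
  override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)

    let activity = NSUserActivity(activityType: "com.donnywals.example")
    activity.title = "This is an example"
    activity.persistentIdentifier = "example-identifier"
    activity.isEligibleForSearch = true

    // Assigning the activity to the view controller before making it current
    // tells iOS that this is the screen the user is currently looking at.
    userActivity = activity
    userActivity?.becomeCurrent()
  }
}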

Adding content to Spotlight using CSSearchableItem

A CSSearchableItem is typically not connected to a screen like user activities are. Ultimately a CSSearchableItem of course belongs to some kind of screen, but what I mean is that the moment of indexing a CSSearchableItem is not always connected to a user visiting a screen in your app. If your app has a large database of content, you can use CSSearchableItem instances to index your content in Spotlight immediately.

Attributes for a CSSearchableItem are defined in a CSSearchableItemAttributeSet. An attribute set can contain a ton of metadata about your content. You can add start dates, end dates, GPS coordinates, a thumbnail, a rating, and more. Depending on the fields you populate, iOS will render your indexed item differently. When you add content to Spotlight, make sure you provide as much metadata as possible. For a full overview of the properties that you can set, refer to Apple's documentation. You can assign an attribute set to the contentAttributeSet property on NSUserActivity to make it as rich as a CSSearchableItem is by default.

You can create an instance of CSSearchableItemAttributeSet as follows:

import CoreSpotlight // don't forget to import CoreSpotlight at the top of your file
import MobileCoreServices // needed for kUTTypeText

let attributes = CSSearchableItemAttributeSet(itemContentType: "com.donnywals.favoriteMovies")
attributes.title = indexedMovie.title
attributes.contentType = kUTTypeText as String
attributes.contentDescription = indexedMovie.description
attributes.identifier = indexedMovie.id
attributes.relatedUniqueIdentifier = indexedMovie.id

In this example, I'm using a made-up indexedMovie object to add to the Spotlight index. I haven't populated a lot of the fields that I could have populated because I wanted to keep this example brief. The most interesting bits here are the identifier and the relatedUniqueIdentifier. Because you can index items through both NSUserActivity and CSSearchableItem, you need a way to tell Spotlight when two items are really the same item. You can do this by setting the searchable attributes' relatedUniqueIdentifier to the same value you'd use for the user activity's persistentIdentifier property. When Spotlight discovers a searchable item whose attributes contain a relatedUniqueIdentifier that corresponds with a previously indexed user activity's persistentIdentifier, Spotlight will know not to re-index the item; instead, it will update the existing item.
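
And if you'd rather enrich a user activity than index a separate searchable item, you could attach the same attribute set to it. This one-liner assumes the userActivity and attributes constants from the earlier snippets:

userActivity.contentAttributeSet = attributes // the activity now carries the same rich metadata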

Important!:
When you add a new item to Spotlight, make sure to assign a value to contentType. In my tests, the index does not complain or throw errors when you index an item without a contentType, but the item will not show up in the Spotlight index. Adding a contentType fixes this.

Once you've prepared your searchable attributes, you need to create and index your searchable item. You can do this as follows:

let item = CSSearchableItem(uniqueIdentifier: "movie-\(indexedMovie.id)", domainIdentifier: "favoriteMovies", attributeSet: attributes)

The searchable item initializer takes three arguments. First, it needs a unique identifier. This identifier needs to be unique throughout your app, so it should be more specialized than just the identifier for the indexed item. Second, you can optionally pass a domain identifier. By using domains for the items you index, you can separate some of the indexed data, which allows you to clear certain groups of items from the index if needed. And lastly, the searchable attributes are passed to the searchable item. To index the item, you can use the following code:

CSSearchableIndex.default().indexSearchableItems([item], completionHandler: { error in
  if let error = error {
    // something went wrong while indexing
  }
})

Pretty straightforward, right? When adding items to the Spotlight index like this, make sure you add the item every time the user interacts with it. Similar to user activities, iOS will derive importance from the way a user interacts with your indexed item.
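
The domain identifier you passed to the searchable item also gives you a convenient way to remove whole groups of items later on. As a sketch, assuming the favoriteMovies domain from the example above (don't forget to import CoreSpotlight here too):

CSSearchableIndex.default().deleteSearchableItems(withDomainIdentifiers: ["favoriteMovies"]) { error in
  if let error = error {
    // the items could not be removed from the index
    print(error.localizedDescription)
  }
}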

Note that we can't choose to index searchable items publicly. Public indexing is reserved for user activities only.

When you ask Spotlight to index items for your app, the items should become available quickly after indexing them. Try swiping down on the home screen and typing the title of an item you've indexed. It should appear in Spotlight's search results and you can tap the result to go to your app. However, nothing really happens when your app opens. You still need a way to handle the Spotlight result so your user is taken to the screen in your app that displays the content they've tapped.

Showing the correct content when a user enters your app through Spotlight

Tip:
I'm going to assume you've read my post on handling deeplinks or that you know how to handle deeplinks in your app. A lot of the same principles apply here and I want to avoid explaining the same thing twice. What's most important to understand is which SceneDelegate and AppDelegate methods are called when a user enters your app via a deeplink, and how you can navigate to the correct screen.

In this section, I will only explain the Spotlight specific bits of opening a user activity or searchable item. The code needed to handle Spotlight search items is very similar to the code that handles deeplinks so your knowledge about handling deeplinks carries over to Spotlight nicely.

Your app can be opened to handle a user activity or a searchable item. How you handle them varies slightly. Let's look at user activity first because that's the simplest one to handle.

When your app is launched to handle any kind of user activity, the flow is the same. The activity is passed to scene(_:continue:) if your app is already running in the background, or through connectionOptions.userActivities in scene(_:willConnectTo:options:) if your app is launched to handle a user activity. If you're not using the SceneDelegate, your AppDelegate's application(_:continue:restorationHandler:) method is called, or the user activity is available through the userActivityDictionary key in the application's launch options.
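
To give you an idea of what the cold launch case could look like in a SceneDelegate, here's a minimal sketch; how you navigate from here depends entirely on your app:

func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
  // set up your window and initial UI first

  if let userActivity = connectionOptions.userActivities.first,
    userActivity.activityType == "com.donnywals.example",
    let screenIdentifier = userActivity.persistentIdentifier {

    // navigate to the screen identified by screenIdentifier
  }
}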

Once you've obtained a user activity, it's exposed to you in the exact same way as you created it. So for the user activity I showed you earlier, I could use the following code to handle the user activity in my scene(_:continue:) method:

func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {
  if userActivity.activityType == "com.donnywals.example",
    let screenIdentifier = userActivity.persistentIdentifier {

    // navigate to screen
  }
}

In my post on handling deeplinks I describe some techniques for navigating to the correct screen when handling a deeplink, and I also describe how you can handle a user activity from scene(_:willConnectTo:options:). I recommend reading that post if you're not sure how to tackle these steps because I want to avoid explaining the same principle in two posts.

When your app is opened to handle a Spotlight item, it will also be asked to handle a user activity. This user activity will look slightly different. The user activity's activityType will equal CSSearchableItemActionType. Furthermore, the user activity will not expose any of its searchable attributes. Instead, you can extract the item's unique identifier that you passed to the CSSearchableItem initializer. Based on this unique identifier you will need to find and initialize the content and screen a user wants to visit. You can use the following code to detect the searchable item and extract its unique identifier:

if userActivity.activityType == CSSearchableItemActionType,
  let itemIdentifier = userActivity.userInfo?[CSSearchableItemActivityIdentifier] as? String {

  // handle item with identifier
}

Again, the steps from here are similar to how you'd handle a deeplink with the main difference being how you find content. If you're using Core Data or Firebase, you will probably want to use the item identifier to query your database for the required item. If your item is hosted online, you will want to make an API call to fetch the item with the desired item identifier. Once the item is obtained you can show the appropriate screen in your app.

In Summary

In this week's post, you learned how you can index your app's content in iOS' Spotlight index. I showed you how you can use user activities and searchable items to add your app's content to Spotlight. Doing this will make your app show up in many more places in the system, and can help your user discover content in your app.

If you want to learn much more about Spotlight's index, I have a full chapter dedicated to it in my book Mastering iOS 12 Development. While this book isn't updated for iOS 13 and the Scene Delegate, I think it's still a good reference to help you make sense of Spotlight and what you can do with it.

If you have any questions or feedback about this post, don't hesitate to reach out to me on Twitter.