Measuring performance with os_signpost

One of the features that got screen time at WWDC 2018 but never really took off is the signposting API, also known as os_signpost. Built on top of Apple’s unified logging system, signposts are a fantastic way for you to gain insight into how your code behaves during certain operations.

In this post, I will show you how to add signpost logging to your app, and how you can analyze the signpost output using Instruments.

Adding signposts to your code

If you’re familiar with OSLog and already use it in your app, adding signpost logging should be fairly simple. If you have never used OSLog before, don’t worry; you can follow along with this post just fine.

Once you have determined which operation in your code you want to measure, you need to create a so-called log handle that serves as the entry point for your signpost logging. Recently, I wanted to measure the difference in execution speed between certain quality-of-service settings on an operation queue. I built a simple sample app and added my log handle to my view controller. In your code, you should add it near the operation you want to measure, for example by making the log handle a property on your networking object, view model, view controller, or anything else. What’s important is that it’s an instance property, since you want to make sure the log handle can be accessed from anywhere within the code you’re measuring.

You can create a log handle as follows:

import os.log

let logHandler = OSLog(subsystem: "com.dw.networking", category: "qos-measuring")

A log handle always belongs to a subsystem. You might consider your entire app to be a subsystem, or you might consider different components in your app to be their own subsystems. You should give your subsystem a name that’s unique to your app, and it’s good practice to use reverse-DNS notation for the naming. You also need to specify a category; in this case, I chose one that describes the thing I’m measuring with the signpost we’ll add next. Note that the preceding code imports the os.log framework. The signpost API is part of this framework, so we need to import it in order to use signposts.

In a very simple example, you might want to add signposts in a way similar to what the following code snippet shows:

func processItem(_ item: Item) {
  os_signpost(.begin, log: logHandler, name: "Processing", "begin processing for %{public}s", item.id)

  // do some work
  os_signpost(.event, log: pointsOfInterestHandler, name: "Processing", "reached halfway point for %{public}s", item.id)
  // do some more work

  os_signpost(.end, log: logHandler, name: "Processing", "finished processing for %{public}s", item.id)
}

Note that there are three different event types used in the preceding code:

  • .begin
  • .event
  • .end

The .begin event is used to mark the start of an operation, and the .end event is used to mark the end of an operation. In this example, the system will use the name as a way of identifying each operation to link up the operation start and end events. We can also add points of interest that occur during an event, for example when you reach the halfway point. You add points of interest using the .event event type.

In order to log .event events, you need a special log handler that specializes in points of interest. You create such a log handler as follows:

let pointsOfInterestHandler = OSLog(subsystem: "dw.qostest", category: .pointsOfInterest)

It works pretty much the same as the normal logger, except you use a predefined category.

Also note the final two arguments passed to os_signpost: "finished processing for %{public}s", item.id. The first of these two arguments is a format string. Depending on the number of placeholders in the format string, the first, second, third, etc. arguments after the format string will be used to fill the placeholders. You can specify placeholders as either {public} or {private}. Specifying neither will default to {private}. Values passed to public placeholders are visible in the console, even if your app is running without the Xcode debugger attached. So if you’re handling sensitive data, make sure to mark your placeholders as private.

The s after the placeholder’s access level specifier marks that the value that’s used to fill the placeholder will be a string. You could also use a number instead of a string if you replace s with a d. Apple recommends that you only use strings and numbers for your placeholders in order to keep your logs simple and lightweight.

This example is very simple: everything occurs in the same method, and we could use the name string to link our signpost’s begin and end. But what if we have multiple operations running that all use signposts? If they all have the same name, they will start interfering with each other. In that case, you can use an OSSignpostID. You can create OSSignpostID values in two ways:

let uniqueID = OSSignpostID(log: logHandler)
let idBasedOnObject = OSSignpostID(log: logHandler, object: anObject)

If you use the first method, you need to keep a reference to the identifier around so you can use it to correctly link .begin and .end events together. If your operation is strongly related to an instance of a class, for example, if each instance only runs one operation, or if you’re manipulating an object that’s a class in your operation, you can use the second method to obtain an OSSignpostID. When you create an identifier using an object, you always get the same OSSignpostID back as long as you’re using the same instance of the object. Note that the object must be a class. You can’t use value types for this.

You can use OSSignpostID in your code as follows:

class ImageManipulator {
  // other properties, like the logHandler and the item being processed
  // lazy, because self isn't available until the instance is fully initialized
  lazy var signpostID = OSSignpostID(log: self.logHandler, object: self)

  func start() {
    os_signpost(.begin, log: logHandler, name: "Processing", signpostID: signpostID, "begin processing for %{public}s", item.id)
    // do things
  }

  func end() {
    os_signpost(.end, log: logHandler, name: "Processing", signpostID: signpostID, "finished processing for %{public}s", item.id)
  }
}

Our signposts are now uniquely identified through the signpostID that’s generated based on the ImageManipulator instance itself. The identifier is declared as a lazy var because self isn’t available while stored properties are being initialized. Note that this object is now expected to work on only one image at a time. If you were to use this object for multiple operations in parallel, you’d need to either create a unique OSSignpostID for each operation or, for example, generate the identifier based on the image.

Reading signposts with Instruments

Once you’ve added signposts to your code, you can view them in Console.app, or you can analyze them with Instruments. To do this, run your app with Instruments like you normally would (cmd + i or Product -> Profile) and select a blank Instruments template:

New Instrument Window

In the blank Instruments window, click the + icon in the top right, find the os_signpost instrument, and double-click it to add it to your Instruments session. Also add the Points of Interest instrument from the same menu.

Add signpost Instrument

After doing that, hit record and use your app so a bunch of signposts are logged, and you have some data to look at:

Instruments overview

If you have the os_signpost track selected, Instruments will group measurements for each of its begin and end signposts based on your signpost message. So if you’re using the same message for operations, as we have in the earlier examples, performing the same operation over and over will cause Instruments to group those operations. And more importantly, Instruments will tell you the maximum duration, minimum duration, average duration and more for each operation. That way, you can easily measure the performance of the things your app does, without relying on print statements or date calculations that might negatively impact your code!

In summary

In this post, you’ve seen that Instruments and os_signpost are a powerful team that can help you gain insight into your code. You can use signposts for regular logging to Console.app, but they’re also very well suited to low-impact performance measurement if you combine them with Instruments. Both signposts and Instruments are tools you might not need or use all the time, but knowing they exist, what they do, and when to use them is essential to learning more about the code you write, and ultimately to becoming a better developer.

If you have feedback, questions or anything else regarding this post for me, please reach out on Twitter. I love hearing from you.


Using Xcode’s memory graph to find memory leaks

There are many reasons for code to function suboptimally. In a previous post, I showed you how to use the Time Profiler to measure the time spent in each method in your code, and how to analyze the results. While a lot of performance-related problems can be discovered, analyzed, and fixed using such tools, memory usage often needs to be debugged slightly differently, especially if it's related to memory leaks.

In today's post, I will show you how to use the Memory Graph tool in Xcode to analyze the objects that are kept in memory for your app, and how to use this tool to discover memory leaks. I will focus specifically on retain cycles today.

Activating the Memory Graph

When you run your app with Xcode, you can click the memory debugger icon that's located between your code and the console, or at the bottom of your Xcode window if you don't have the console open:

Memory debugger icon

When you click this icon, Xcode will take a snapshot of your app's memory graph and the relationships that every object has to other objects. Your app's execution will be paused and Xcode will show you all objects that are currently in memory. Note that this might take a little while, depending on how big your app is.

Example memory graph

In the sidebar on the left-hand side, Xcode shows a full list of all objects that it has discovered. When you select an object in the sidebar, the middle section will show your selected object, and the relationships it has to other objects. Sometimes it's a big graph, like in the screenshot. Other times it's a smaller graph with just a couple of objects.

If Xcode spots a relationship that it suspects to be a memory leak, or retain cycle, it will add a purple square with an exclamation mark next to the object in the sidebar. In the screenshot you just saw, it's quite obvious where the purple squares are. If they're more hidden, or you just want to filter for memory leaks, you can do so using the filter menu at the bottom of the sidebar, as shown in the following screenshot:

Filtered view

The screenshot above shows that instances of two different objects are kept in memory even though Xcode thinks they shouldn't be. When you click one of them, the problem becomes visible immediately.

Retain cycle image

The DataProvider and DetailPage in this example are pointing at each other. A classic example of a retain cycle. Let's see how this occurs and what you can do to fix it.

Understanding how retain cycles occur and how to fix them

In iOS, objects are removed from memory when no other objects keep a strong reference to them. Every instance of an object you create in your app has a retain count. Every time you store a strong reference to an object somewhere in your code, its retain count is incremented, because there is now one more reference pointing at that object's location in memory.

This principle of retain counts mostly applies to classes, because when you pass around an instance of a class in your code, you're really passing around a memory reference, which means that multiple objects can point to the same memory address. Value types, on the other hand, are copied when they're passed around, so there is never more than one reference pointing to the memory of a particular value.
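This difference is easy to see in a small, self-contained sketch (the Point type is made up for illustration):

```swift
struct Point {
  var x: Int
}

var a = Point(x: 1)
var b = a // value types are copied on assignment...
b.x = 99  // ...so mutating the copy...

print(a.x) // 1 -- ...leaves the original untouched; no memory is shared
```

If Point were a class, `b` would be a second reference to the same instance, and `a.x` would print 99.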

In order for an object to be removed from memory, its retain count must be zero; no objects should be referencing that address in memory. When two objects hold a reference to each other, which is often the case when you're working with delegates, it's possible that the retain count for either object never reaches zero, because they keep each other alive. Note that I mentioned a strong reference at the beginning of this section. I did that on purpose: if there is such a thing as a strong reference, surely there is also a weak reference, right? There is!

Weak references are references to instances of reference types that don't increase the retain count of the object they point to. The principle is exactly the same as using weak self in closures. By making the delegate property of an object weak, the delegate and its owner don't keep each other alive, and both objects can be deallocated. In the example we were looking at, this means that we need to change the following code:

class DataProvider {
  var delegate: DataDelegate?

  // rest of the code
}

Into the following:

class DataProvider {
  weak var delegate: DataDelegate?

  // rest of the code
}

For this to work, DataDelegate must be constrained to class types. You can do this by adding : AnyObject to your protocol declaration. For example:

protocol DataDelegate: AnyObject {
  // requirements
}

When you run the app again and use the memory graph to look for retain cycles, you'll notice that there are no more purple squares, and the memory graph looks exactly like you'd expect.
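You can also observe this behavior directly in code. Here's a small, self-contained sketch loosely based on the DataProvider/DetailPage example (the dataDidLoad requirement is made up for this example):

```swift
protocol DataDelegate: AnyObject {
  func dataDidLoad()
}

final class DataProvider {
  // weak breaks the cycle: the provider no longer keeps its delegate alive
  weak var delegate: DataDelegate?
}

final class DetailPage: DataDelegate {
  let provider = DataProvider()

  init() {
    provider.delegate = self
  }

  func dataDidLoad() { /* update the UI */ }
}

weak var leakCheck: DetailPage?
do {
  let page = DetailPage()
  leakCheck = page
}

// With a weak delegate, the page deallocates when its scope ends.
// With a strong delegate, leakCheck would still be non-nil here.
print(leakCheck == nil) // true
```

If you remove the `weak` keyword from the delegate property, the two objects keep each other alive and `leakCheck` never becomes nil.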

In Summary

In this article, I have shown you that you can use Xcode to visualize and explore the memory graph of your app. This helps you to find memory leaks and retain cycles. When clicking on an object that's in memory, you can explore its relationship with other objects, and ultimately you can track down retain cycles. You also learned what a retain cycle is, how they occur and how you can break them.

If you have questions, feedback or anything else for me, don't hesitate to reach out on Twitter.

Finding slow code with Instruments

Every once in a while we run into performance problems. One thing you can do when this happens is measure how long certain things in your code take. You can do this using signposts. However, there are times when we need deeper insights into our code. More specifically, sometimes you simply want to know exactly how long each function in your code takes to execute. You can gain these insights using the Time Profiler Instrument. In today's article, I will show you how you can use the Time Profiler to analyze your code, and how you can optimize its output so you can gain valuable insights.

Exploring the Time Profiler Instrument

If you want to analyze your app, you need to run it for profiling. You can do this by pressing cmd+i or by using the Product -> Profile menu item. When your app is done compiling, it will be installed on your device and Instruments will launch. In the window that appears when Instruments launches, pick the Time Profiler template:

Instruments template selection

When you select this template, Instruments will launch a new Instruments session with several tracks.

Empty Instruments window

The one you're most interested in is the Time Profiler track. When you select the Time Profiler track, the table under the Instruments timeline will show your app's objects and their methods, and how much time is spent in each method. To profile your app, unlock your device and hit the record button in the top left corner. Use your app like you normally would and make sure to spend some time with the feature you're most interested in. Instruments will begin filling up with measurements from your code, as shown in the following screenshot.

Instruments with measurements

The Time Profiler takes snapshots of the call stacks in your app every few milliseconds to build a picture of what is running, and when. Based on these samples, the Time Profiler estimates how much time is spent in each method. The flip side is that the Time Profiler is not suited for fine-grained, high-resolution profiling of your code. If that's what you need, you should use signposts instead.

Note
It's always best to run your app on a real device if you want to profile it with the Time Profiler. The Simulator has all the processing power of your development machine at its disposal, so measurements will be heavily skewed if you profile your app in the Simulator.

Once you feel like you've captured enough data to work with, you can begin analyzing your measurements.

Analyzing the Time Profiler's measurements

By default, Instruments shows its measurements from the inside out: the topmost item in the tree is your app, followed by several threads. Note how Instruments displays the number of seconds spent in each thread. This counter only increments while your app is actively processing data on the corresponding thread. Since you're probably not interested in working your way from the inside out, and also not in system libraries, it's a good idea to change the way Instruments visualizes its data. At the bottom of the window there's a button named Call Tree; if you click it, you can specify how Instruments should display its measurements. I always use the following settings:

Instruments settings

On the surface, not much will seem to have changed. Your code is still separated by thread, but if you expand the threads, your code is listed first because the call tree is now shown from the outside in rather than from the inside out. Every time you drill down one level deeper, Instruments shows what method called the method you're drilling into.

In the app I've been profiling here, I was looking for the reason it took so long to update my UI after an image finished downloading. I can see that a lot of time is spent in my performTask method, which is responsible for fetching and processing the image, and ultimately passing it to the UI. There's also some time spent in the UIImage initializer, which is called from the performTask method, as shown in the following screenshot:

Instruments readout

Based on these findings, you would conclude that something fishy might be happening in performTask, because we're spending most of our time there. If the UIImage initialization were slow, we would be spending way more time in that initializer. Since the code spends so much time in performTask, but not in the UIImage initializer, that's a good first place to look.

In this case, I made performTask slow on purpose: after loading an image, it writes the image to the phone's documents directory a couple of times, and also converts it to a UIImage not once, but five times before updating the UI. A potential fix would be to update the UI immediately, before persisting the image to the documents directory, and to remove the loop that's obviously not needed.

In summary

From personal experience, I can tell you that the Time Profiler Instrument is an extremely valuable tool in an iOS developer's toolbox. If your UI doesn't scroll as smoothly as you want, if your device runs hot every time you use your app, or if you see CPU and memory usage rise in Xcode all the time, the Time Profiler is extremely helpful for gaining an understanding of what your code is doing exactly. When you profile your code and know what's going on, you can start researching performance problems in your code with more confidence.

If you have any questions about the Time Profiler, have feedback or just want to reach out, you can find me on Twitter.

Effectively using static and class methods and properties

Swift allows us to use the static prefix on methods and properties to associate them with the type they’re declared on rather than with an instance. We can also use static properties to create singletons of our objects, which, as you have probably heard before, is a huge anti-pattern. So when should we use properties or methods that are defined on a type rather than on an instance? In this blog post, I’m going to go over several use cases of static properties and methods. Once we’ve covered the hardly controversial topics, I’m going to make a case for shared instances and singletons.


Using static properties for configuration

A very common use case for static properties is configuration. All over UIKit you can find properties whose sole purpose is to configure other objects. The main reason they’re defined as static properties is quite possibly that making them static provides a namespace of sorts. A common example I like to use when explaining static properties for configuration is using an object with static properties as a style guide. For example, it’s much nicer to define colors or font sizes in a single place than to have their values scattered throughout your code. When you want to change a certain color or font size across your app, you would otherwise have to go through your entire app and replace the appropriate values, and if that’s the approach you take, it’s only a matter of time before you forget one or more places. Now consider the following configuration object:

enum AppStyles {
  enum Colors {
    static let mainColor = UIColor(red: 1, green: 0.2, blue: 0.2, alpha: 1)
    static let darkAccent = UIColor(red: 0.2, green: 0.2, blue: 0.2, alpha: 1)
  }

  enum FontSizes {
    static let small: CGFloat = 12
    static let medium: CGFloat = 14
    static let large: CGFloat = 18
    static let xlarge: CGFloat = 21
  }
}

If you have to slightly tweak your main color, you only have to change a single place in your code. And by giving the color a descriptive name, your properties are not bound to the visual representation they end up having on screen.

Note that I’m using an enum with static properties here. The reason is that I don’t want it to be possible to create instances of my configuration object, and since caseless enums can’t be instantiated, they are well suited for this purpose. A struct with a private initializer would do the trick as well; I just happen to think enums are nicer for this job.

Static properties also make sense if you want to define a finite set of strings to use as names for notifications that you send through the Notification Center on iOS, or any other time you want some kind of global configuration that should be constant across your entire app without having to pass around a configuration object.
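For notification names specifically, this often takes the shape of an extension on Notification.Name with static properties (the names below are made up for this example):

```swift
import Foundation

extension Notification.Name {
  // One central, typo-proof place for every notification name in the app
  static let sessionExpired = Notification.Name("com.example.app.sessionExpired")
  static let itemsUpdated = Notification.Name("com.example.app.itemsUpdated")
}

// Call sites can now use the short dot syntax:
NotificationCenter.default.post(name: .sessionExpired, object: nil)
```

Because the raw strings live in one place, a typo in a notification name becomes a compile error at the call site instead of a silently missed notification.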

Using static properties for expensive objects

Another use of static properties is as a cache. Creating certain objects in your app might be quite expensive, even if you can create an instance once and reuse it throughout the lifetime of your app. A good example of an expensive object that you might want to keep around in a static property is a date formatter:

struct Blogpost {
  private static var dateFormatter = ISO8601DateFormatter()

  let publishDate: Date?
  // other properties ...

  init(_ publishDateString: String /* other properties */) {
    self.publishDate = Blogpost.dateFormatter.date(from: publishDateString)
  }
}

No matter how many times we create a Blogpost instance, the date formatter we use to convert the string will always be an ISO8601DateFormatter, so we can create a static property on Blogpost that holds the date formatter we need. This is useful because date formatters are expensive to create and can be reused without any consequences. If we associated the date formatter with an instance of Blogpost rather than with the type, a new date formatter would be created for every instance of Blogpost. That can lead to many identical date formatters being created, which is pretty wasteful.

So any time you have objects that are expensive to create, and that can be safely reused many times, it might be a good idea to define them statically so you only have to create an instance once for the type rather than creating a new expensive object for every instance that uses said expensive object.

Creating a factory with static methods

A common pattern in programming is the factory pattern. Factories are useful for creating complex objects through a simple mechanism while hiding certain details of the target object’s initializer. Let’s look at an example:

enum BlogpostFactory {
  static func create(withTitle title: String, body: String) -> Blogpost {
    let metadata = Metadata(/* metadata properties */)
    return Blogpost(title: title, body: body, createdAt: Date(), metadata: metadata)
  }
}

What’s nice is that we can use this BlogpostFactory to create new instances of Blogpost. Depending on your use case, you might not want to build factories this way, for example if there is some kind of state associated with your factory. In simple cases like this, however, it might make sense to have a simple static method create instances of an object on your behalf. Another example is using a static method on a type to create a sensible default starting point for a blog post or form:

extension Blogpost {
  static func sensibleDefault() -> Blogpost {
    return Blogpost(title: "Hello, world!",
                    body: "Hello, sample body",
                    createdAt: Date())
  }
}

You could use this sensibleDefault() static method to create a placeholder object whenever a user is about to create a new blog post.

Static methods are useful if you want to associate a certain method with a type rather than an instance. Nothing stops us from creating a free function called defaultBlogpost() that creates a blog post instance. However, it’s much nicer to associate the sensibleDefault() method directly with Blogpost.

Understanding how class methods differ from static methods

In the previous examples, I always used static methods. In Swift, it’s also possible to define class methods on classes. Class methods are also associated with the type rather than with instances; the main difference is that subclasses can override class methods:

class SomeClass {
  class func date(from string: String) -> Date? {
    return ISO8601DateFormatter().date(from: string)
  }
}

class SubClass: SomeClass {
  override class func date(from string: String) -> Date? {
    return DateFormatter().date(from: string)
  }
}

In the preceding example, SubClass overrides date(from:) from its superclass so it uses a different date formatter. This behavior is limited to class methods; you can’t override static methods like this.
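This dynamic dispatch also kicks in when the call goes through a metatype. Here's a minimal sketch with made-up types:

```swift
class Greeter {
  class func greeting() -> String { "Hello" }
}

class LoudGreeter: Greeter {
  override class func greeting() -> String { "HELLO!" }
}

// The static type of greeterType is Greeter.Type, but the override
// on LoudGreeter is still picked at runtime:
let greeterType: Greeter.Type = LoudGreeter.self
print(greeterType.greeting()) // HELLO!
```

Had `greeting()` been declared static, the override would not compile, and a call through the metatype would always resolve to the declaring type's implementation.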

Understanding when to create shared instances

So far, you have seen that you can use static properties for configuration or for expensive objects, and how you can use static methods to create simple factories. A more controversial topic is that of shared instances. We have probably all used at least a couple of the following examples in our code at some point:

  • URLSession.shared
  • UserDefaults.standard
  • NotificationCenter.default
  • DispatchQueue.main

These are all examples of shared instances.

Each of the above objects has a static property that holds a default, or shared instance of the type it’s defined on. If you think of these objects as singletons, you are mistaken. Let me explain why.

A singleton is an object that you can only ever have one instance of. It’s often defined as a static property in Swift, but in other languages you might simply use the type’s initializer and, instead of getting a new instance every time, you would get the same instance over and over.

The static properties on the types listed earlier are all shared instances rather than singletons because you are still free to create your own instances of every object. Nothing is preventing you from creating your own UserDefaults store, or your own URLSession object.
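A quick sketch makes that concrete; the suite name below is made up for this example:

```swift
import Foundation

// Shared instances aren't singletons: nothing stops you from creating
// your own UserDefaults store next to UserDefaults.standard.
let customDefaults = UserDefaults(suiteName: "com.example.custom-suite")!
customDefaults.set(true, forKey: "featureEnabled")

print(customDefaults.bool(forKey: "featureEnabled")) // true
```

A separate suite like this is handy when you want to isolate a feature's settings, or share defaults with an app extension, without touching the standard store.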

The shared instances that are offered on these objects are merely suggestions. They are fully configured, useful instances of these objects that you can use to quickly get up and running. In some cases, like DispatchQueue.main or NotificationCenter.default, the shared instances have specific purposes in your app, like how DispatchQueue.main is used for all UI manipulations, or how NotificationCenter.default is used for all notifications sent by UIKit and the system.

Whenever you use a shared instance, try to immediately add a built-in escape hatch for when you might decide that you want to use a different instance than the shared one. Let me show you an example of how you can do this:

struct NetworkingObject {
  let urlSession: URLSession

  init(urlSession: URLSession = URLSession.shared) {
    self.urlSession = urlSession
  }
}

The NetworkingObject uses a URLSession to make requests. Its initializer accepts a URLSession instance and has URLSession.shared as its default value. This means that in most cases you can create new instances of your networking object without passing the URL session explicitly. If you decide that you want to use a different URL session, for example in your unit tests, you can simply pass the session to the networking object’s initializer and it will use your custom URL session.

Shared instances are very useful for objects that you’ll likely only ever need a single instance of, that is preconfigured and easily accessible throughout your app. Their main benefit is that they still allow you to create your own instances, which means that you get the benefits of having a shared instance with shared state without losing the ability to create your own instances that have their own state if needed.

Knowing when to use a singleton

Singletons are universally known as an anti-pattern throughout the development community. I personally tend to prefer shared instances over singletons because with a shared instance you can still create your own instance of an object if needed while using the shared one when it makes sense to do so.

I do think, however, that there are responsible ways to use the singleton pattern in Swift. Given that a singleton’s only real requirement is that you can only ever have one instance of a singleton, you might write some code like this:

protocol Database {
  /* requirements */
}

struct AppDatabase: Database {
  static let singleton = AppDatabase()

  private init() {}
}

struct UserProvider {
  let database: Database
}

When used as described above, the singleton conforms to a protocol, in this case Database. The object that uses the database has a property that requires an object conforming to Database. If we don’t access the singleton’s static property whenever we need the database, but instead inject it into UserProvider and other users of the database, the singleton is used like any other dependency.
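Put together, injecting the singleton looks something like this; the read() requirement is made up to make the sketch self-contained:

```swift
protocol Database {
  func read() -> String
}

struct AppDatabase: Database {
  static let singleton = AppDatabase()
  private init() {}

  func read() -> String { "app data" }
}

struct UserProvider {
  // Depends on the protocol, not on AppDatabase directly
  let database: Database
}

// The singleton is injected like any other dependency, so a unit test
// could pass a stub Database here instead:
let provider = UserProvider(database: AppDatabase.singleton)
print(provider.database.read()) // app data
```

Because UserProvider only knows about the Database protocol, it never needs to know that the instance it received happens to be a singleton.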

So why make AppDatabase a singleton, you might ask? The reason is simple: if I have two instances of my database, it might be possible for two objects to write to the underlying storage at the same time if I don’t have a very good read/write mechanism in place. So to make sure that only one instance of AppDatabase can ever be created, you can implement it as a singleton.

The major drawback of this approach, for me, is that it might encourage people to use the singleton property directly even though they should be using dependency injection to inject the singleton instance and hide the fact that we’re using a singleton. That is what code reviews are for, though, and if everybody on your team agrees that it’s okay to use singletons like this, you can go ahead and do it.

Keep in mind that singletons are still an anti-pattern though, all I provided here is a use case where I think the downsides are limited and isolated.

In summary

In this post, I showed you how you can use static properties to drive configuration on a type-level rather than an instance level. I also showed you that static properties are great for storing objects that are reused often and are expensive to create. Next, you saw how static methods can be used to implement a factory of sorts and how class methods are different from static methods.

Then we took a turn into a more controversial realm by exploring shared instances and singletons. I argued that shared instances are often nicer than singletons because you can still create your own instances of objects that offer a shared instance if needed. I then showed you that a singleton might not be so bad if you make it implement a protocol and inject the singleton into the initializer of an object that depends on a protocol rather than the singleton’s explicit type.

If you have any feedback, suggestions or if you want to talk to me about singletons and shared instances, you can find me on Twitter.

The some keyword in Swift explained

If you're using SwiftUI to build your apps, you will have noticed that your view's body property is of type some View. The some keyword was introduced alongside SwiftUI and it’s part of a feature called opaque result types (SE-0244). In this post, we'll take a look at what the some keyword is exactly, which problems it solves, and when you might need it in your code.

We'll start by exploring what opaque result types are, and more specifically what problem they solve. Next, we’ll look at how opaque result types are used in SwiftUI and we’ll discover whether it’s a Swift feature that you’re likely to adopt in your code at some point.

Exploring opaque result types

To fully understand the problems solved by opaque result types, it’s good to have a solid understanding of generics. If you’re not familiar with generics at all, I recommend reading these two posts I wrote to get yourself up to speed:

If you’re not interested in learning loads about generics and just want to learn about opaque result types and what the some keyword is, that’s fine too. Just be aware that some of the content in this post could be confusing without understanding generics at all. If you have a vague idea of what they are, that's probably good enough to follow along.

In Swift, we can use protocols to define interfaces or contracts for our objects. When something conforms to a protocol, we know that it can do certain things, or has certain properties. This means that you can write code like this:

protocol ListItemDisplayable {
  var name: String { get }
}

struct Shoe: ListItemDisplayable {
  let name: String
}

var listItem: ListItemDisplayable = Shoe(name: "a shoe")

When using this listItem property, only the properties exposed by ListItemDisplayable are exposed to us. This is especially useful when you want to have an array of items that are ListItemDisplayable where the concrete types can be more than just Shoe:

struct Shoe: ListItemDisplayable {
  let name: String
}

struct Shorts: ListItemDisplayable {
  let name: String
}

var mixedList: [ListItemDisplayable] = [Shoe(name: "a shoe"),
                                        Shorts(name: "a pair of shorts")]

The compiler treats our Shoe and Shorts objects as ListItemDisplayable, so users of this list won't know whether they’re dealing with shoes, shorts, jeans or anything else. All they know is that whatever is in the array can be displayed in a list because it conforms to ListItemDisplayable.
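To see this in action, we can only use what ListItemDisplayable exposes when we loop over the list; this snippet repeats the types from above so it stands on its own:

```swift
protocol ListItemDisplayable {
  var name: String { get }
}

struct Shoe: ListItemDisplayable {
  let name: String
}

struct Shorts: ListItemDisplayable {
  let name: String
}

let mixedList: [ListItemDisplayable] = [Shoe(name: "a shoe"),
                                        Shorts(name: "a pair of shorts")]

// We can only access the ListItemDisplayable interface here; the
// concrete Shoe and Shorts types are hidden from us.
let names = mixedList.map { $0.name }
```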

Opaque result types for protocols with associated types

The flexibility shown in the previous section is really cool, but we can push our code further:

protocol ListDataSource {
  associatedtype ListItem: ListItemDisplayable

  var items: [ListItem] { get }
  var numberOfItems: Int { get }
  func itemAt(_ index: Int) -> ListItem
}

The above defines a ListDataSource that holds a list of items conforming to ListItemDisplayable. We can use objects that conform to this protocol as data source objects for table views or collection views, which is pretty neat.

We can define a view model generator object that will, depending on what kind of items we pass it, generate a ListDataSource:

struct ShoesDataSource: ListDataSource {
  let items: [Shoe]
  var numberOfItems: Int { items.count }

  func itemAt(_ index: Int) -> Shoe {
    return items[index]
  }
}

struct ViewModelGenerator {
  func listProvider(for items: [Shoe]) -> ListDataSource {
    return ShoesDataSource(items: items)
  }
}

However, this code doesn’t compile because ListDataSource is a protocol with associated type constraints. We could fix this by specifying ShoesDataSource as the return type instead of ListDataSource, but that would expose an implementation detail that we want to hide from users of the ViewModelGenerator. All callers of listProvider(for:) really need to know is that we’re going to return a ListDataSource from this method. We can rewrite the generator as follows to make our code compile:

struct ViewModelGenerator {
  func listProvider(for items: [Shoe]) -> some ListDataSource {
    return ShoesDataSource(items: items)
  }
}

By using the some keyword, the compiler can enforce a couple of things while hiding them from the caller of listProvider(for:):

  • We return something that conforms to ListDataSource.
  • The returned object’s associated type matches any requirements that are set by ListDataSource.
  • We always return the same type from listProvider(for:).

This last point is especially interesting. In Swift, we rely on the compiler to perform a lot of compile-time type checks that help us write safe and consistent code, and the compiler uses all of that type information to optimize our code so it runs as fast as possible. Protocols are often a problem for the compiler because they imply a dynamism that makes certain compile-time optimizations impossible, which means we take a (very small) performance hit at runtime while the runtime does the type checking needed to make sure that what’s happening is valid.

Because the Swift compiler can enforce the things listed above, it can make the same optimizations it makes when we use concrete types, yet we keep the ability to hide the concrete type from the caller of a function or property that returns an opaque type.

Opaque result types and Self requirements

Because the compiler can enforce type constraints at compile time, we can do other interesting things. For example, we can compare items that are returned as opaque types, while we cannot do the same with plain protocol return types. Let’s look at a simple example:

protocol ListItemDisplayable: Equatable {
  var name: String { get }
}

func createAnItem() -> ListItemDisplayable {
  return Shoe(name: "a comparable shoe: \(UUID().uuidString)")
}

The above doesn’t compile because Equatable has a Self requirement. It wants to compare two instances of Self where both instances are of the same type. This means that we can’t use ListItemDisplayable as a regular return type, because a protocol on its own has no type information. We need the some keyword here so the compiler will figure out and enforce a type for ListItemDisplayable when we call createAnItem():

func createAnItem() -> some ListItemDisplayable {
  return Shoe(name: "a comparable shoe: \(UUID().uuidString)")
}

The compiler can now determine that we’ll always return Shoe from this function, so it knows what Self is for the item returned by createAnItem(), and that item can therefore be considered Equatable. As a result, the following code can be used to create two items and compare them:

let left = createAnItem()
let right = createAnItem()

print(left == right)

What’s really cool here is that both left and right hide all of their type information. If you call createAnItem(), all you know is that you get a list item back. And that you can compare that list item to other list items returned by the same function.

Opaque return types as reverse generics

The Swift documentation on opaque result types sometimes refers to them as reverse generics which is a pretty good description. Before opaque result types, the only way to use protocols with associated types as a return type would have been to place the protocol on a generic constraint for that method. The downside here is that the caller of the method gets to decide the type that’s returned by a function rather than letting the function itself decide:

protocol ListDataSource {
  associatedtype ListItem: ListItemDisplayable

  var items: [ListItem] { get }
  var numberOfItems: Int { get }
  func itemAt(_ index: Int) -> ListItem

  init(items: [ListItem])
}

// the caller decides what T is
func createViewModel<T: ListDataSource>(for list: [T.ListItem]) -> T {
  return T.init(items: list)
}

// the caller knows they'll receive an object that conforms to ListDataSource, but it can't know exactly _which_ ListDataSource
// the method decides that it's a GenericViewModel instance. The caller can't influence that.
func createOpaqueViewModel<T: ListItemDisplayable>(for list: [T]) -> some ListDataSource {
  return GenericViewModel<T>(items: list)
}

let shoes: GenericViewModel<Shoe> = createViewModel(for: shoeList)
let opaqueShoes = createOpaqueViewModel(for: shoeList)

Both methods in the preceding code can return the exact same GenericViewModel. The main difference here is that in the first case, the caller decides that it wants to have a GenericViewModel<Shoe> for its list of shoes, and it will get a concrete type back of type GenericViewModel<Shoe>. In the example that uses some, the caller only knows that it will get some ListDataSource that holds its list of ListItemDisplayable items. This means that the implementation of createOpaqueViewModel can now decide what it wants to do. In this case, we chose to return a generic view model. We could also have chosen to return a different kind of view model instead, all that matters is that we always return the same type from within the function body and that the returned object conforms to ListDataSource.
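The GenericViewModel used above isn't shown in this post's snippets; a minimal implementation that satisfies ListDataSource could look like this sketch:

```swift
protocol ListItemDisplayable {
  var name: String { get }
}

struct Shoe: ListItemDisplayable {
  let name: String
}

protocol ListDataSource {
  associatedtype ListItem: ListItemDisplayable

  var items: [ListItem] { get }
  var numberOfItems: Int { get }
  func itemAt(_ index: Int) -> ListItem

  init(items: [ListItem])
}

// A data source that works for any ListItemDisplayable item; this is
// the kind of concrete type that can hide behind `some ListDataSource`.
struct GenericViewModel<Item: ListItemDisplayable>: ListDataSource {
  let items: [Item]
  var numberOfItems: Int { items.count }

  init(items: [Item]) {
    self.items = items
  }

  func itemAt(_ index: Int) -> Item {
    return items[index]
  }
}

func createOpaqueViewModel<T: ListItemDisplayable>(for list: [T]) -> some ListDataSource {
  return GenericViewModel<T>(items: list)
}

let opaqueShoes = createOpaqueViewModel(for: [Shoe(name: "a shoe")])
```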

Using opaque return types in your projects

While I was studying opaque return types and trying to come up with examples for this post, I noticed that it’s not really easy to come up with reasons to use opaque return types in common projects. In SwiftUI they serve a key role, which might make you believe that opaque return types are going to be commonplace in a lot of projects at some point.

Personally, I don’t think this will be the case. Opaque return types are a solution to a very specific problem in a domain that most of us don’t work on. If you’re building frameworks or highly reusable code that should work across many projects and codebases, opaque result types will interest you. You’ll likely want to write flexible code based on protocols with associated types where you, as the builder of the framework, have full control of the concrete types that are returned without exposing any generics to your callers.

Another consideration for opaque return types might be their runtime performance. As discussed earlier, protocols sometimes force the compiler to defer certain checks and lookups until runtime which comes with a performance penalty. Opaque return types can help the compiler make compile-time optimizations which is really cool, but I’m confident that it won’t matter much for most applications. Unless you’re writing code that really has to be optimized to its core, I don’t think the runtime performance penalty is significant enough to throw opaque result types at your codebase. Unless, of course, it makes a lot of sense to you. Or if you’re certain that in your case the performance benefits are worth it.

What I’m really trying to say here is that protocols as return types aren’t suddenly horrible for performance. In fact, they sometimes are the only way to achieve the level of flexibility you need. For example, if you need to return more than one concrete type from your function, depending on certain parameters. You can’t do that with opaque return types.

This brings me to quite possibly the least interesting yet easiest way to start using opaque return types in your code. If you have places in your code where you’ve specified a protocol as return type, but you know that you’re only returning one kind of concrete type from that function, it makes sense to use an opaque return type instead. This comes down to a difference between existential types and concrete types which you can learn about in my any vs some post. The main point from that post is that using a protocol is slightly slower than using a concrete type. So if you have a function that returns any MyProtocol, that's a bit slower than returning a type that conforms to MyProtocol directly. However, you might want to hide the actual type from a caller. In that case, returning some MyProtocol instead of any MyProtocol will get you a performance boost because some MyProtocol is resolved to a concrete type at compile time.
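As a sketch of that trade-off, both functions below return a Shoe, but only the some version lets the compiler resolve the concrete type at compile time; makeAnyItem and makeSomeItem are names I made up for this comparison:

```swift
protocol ListItemDisplayable {
  var name: String { get }
}

struct Shoe: ListItemDisplayable {
  let name: String
}

// Existential return type: the concrete type is erased, so callers
// work with a boxed `any ListItemDisplayable` at runtime.
func makeAnyItem() -> any ListItemDisplayable {
  return Shoe(name: "boxed shoe")
}

// Opaque return type: the concrete type is hidden from the caller but
// known to the compiler, so no existential boxing is needed.
func makeSomeItem() -> some ListItemDisplayable {
  return Shoe(name: "opaque shoe")
}

let anyItem = makeAnyItem()
let someItem = makeSomeItem()
```

Both functions read the same at the call site; the difference is purely in how much type information the compiler gets to keep.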

This brings us to a more interesting consideration for using some. You can use some in places where you've defined a single-use generic. For example, in the following situation you might be able to use some instead of a generic:

class MusicPlayer {
  func play<Playlist: Collection<Track>>(_ playlist: Playlist) { /* ... */ }
}

In this example, our play function has a generic argument Playlist that's constrained to a Collection that holds Track objects. We can write this constraint thanks to Swift 5.7's primary associated types; learn more about primary associated types in this post. If we only use the Playlist generic in a single place, like a function argument, we can use some instead of the generic from Swift 5.7 onward, because Swift 5.7 allows us to use some for function arguments, which is a huge improvement.

Rewriting the example above with some looks as follows:

class MusicPlayer {
  func play(_ playlist: some Collection<Track>) { /* ... */ }
}

Much better, right?
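Here's a self-contained sketch of that idea; the Track type and the queued property are assumptions I added so the example compiles and does something observable:

```swift
struct Track {
  let title: String
}

class MusicPlayer {
  private(set) var queued: [Track] = []

  // `some Collection<Track>` accepts any concrete collection of tracks.
  func play(_ playlist: some Collection<Track>) {
    queued.append(contentsOf: playlist)
  }
}

let player = MusicPlayer()
player.play([Track(title: "First")])            // an Array<Track>
player.play([Track(title: "Second")].prefix(1)) // an ArraySlice<Track>
```

Each call site passes a different concrete collection type, and the compiler resolves some Collection<Track> to that concrete type per call.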

Verify your existential usage for Swift 6 with Xcode 15.3

If you want to make sure that your app is ready for Swift 6.0 and uses any or some everywhere you're supposed to, pass the -enable-upcoming-feature ExistentialAny flag in your Swift build flags. To learn how, take a look at this post where I dig into experimental Swift versions and features. Note that the ExistentialAny build flag is available in the default Xcode 15.3 toolchain.

In summary

In this post you saw what problems opaque return types solve, and how they can be used by showing you several examples. You learned that opaque return types can act as a return type if you want to return an object that conforms to a protocol with associated type constraints. This works because the compiler performs several checks at compile time to figure out what the real types of a protocol’s associated types are. You also saw that opaque return types help resolve so-called Self requirements for similar reasons. Next, you saw how opaque result types act as reverse generics in certain cases, which allows the implementer of a method to determine a return type that conforms to a protocol rather than letting the caller of the method decide.

Next, I gave some insights into what opaque result types are likely going to look like in your apps. With Swift 5.7's ability to use some in more places than just return types, I think some will become a very useful tool that helps us use concrete types instead of existentials (protocols) in lots of places, which should make our code more performant and robust.

If you have any questions, feedback or if you have awesome applications of opaque return types that I haven’t covered in this post, I would love to hear from you on Twitter.

Generics in Swift explained

Whenever we write code, we want our code to be well-designed. We want it to be flexible, elegant and safe. We want to make sure that Swift’s type system and the compiler catch as many of our mistakes as possible. It’s especially interesting how Swift’s type system can help us avoid obvious errors. For example, Swift won’t allow you to assign an Int to a String property like this:

var anInt = 10
anInt = "Hello, World!"

The Swift compiler would show you an error that explains that you can’t assign a String to an Int and you’d understand this. If something is declared or inferred to be a certain type, it can’t hold any types other than the one it started out as.

It’s not always that simple though. Sometimes you need to write code where you really don’t know what type you’re working with. All you know is that you want to make sure that no matter what happens, that type cannot change. If this doesn’t sound familiar to you, or you’re wondering who in their right mind would ever want that, keep reading. This article is for you.

Reverse engineering Array

A great example of an object that needs the flexibility that I described earlier is an array. Considering that arrays in Swift are created to hold objects of a specific type, whether that's a concrete type or a protocol, arrays aren't that different from the mistake I showed you earlier. Let’s adapt the example to arrays so you can see what I mean:

var arrayOfInt = [1, 2, 3]
arrayOfInt = ["one", "two", "three"]

If you try to run this code you will see an error that explains you can’t assign an object of Array<String> to Array<Int>. And this is exactly the kind of magic that you need generics for.

Arrays are created in such a way that they work with any type you throw at them. The only condition is that the array is homogeneous; in other words, it can only contain objects of a single type.

So how is this defined in Swift? What does a generic object like Array look like? Instead of showing you the exact implementation from the Swift standard library, I will show you a simplified version of it:

public struct Array<Element> {
  // complicated code that we don’t care about right now
}

The interesting part here is between the angle brackets: <Element>. The type Element does not exist in the Swift standard library. It’s a made-up type that only exists in the context of arrays. It specifies a placeholder that’s used where the real, concrete type would normally be used.

Let’s build a little wrapper around Array that will help you make sense of this a little bit more.

struct WrappedArray<OurElement> {
  private var array = Array<OurElement>()

  mutating func append(_ item: OurElement) {
    array.append(item)
  }

  func get(atIndex index: Int) -> OurElement {
    return array[index]
  }
}

Notice how instead of Element, we use the name OurElement. This is just to prove that Element really doesn’t exist in Swift. In the body of this struct, we create an array. We do this by using its fully written type Array<OurElement>(). The same can be achieved using the following notation: [OurElement](). The outcome is the same.

Next, in the append and get methods we accept and return OurElement respectively. We don’t know what OurElement will be. All we know is that the items in our array, the items we append to it and the items we retrieve from it, will all have the same type.

To use your simple array wrapper you might write something like this:

var myWrappedArray = WrappedArray<String>()
myWrappedArray.append("Hello")
myWrappedArray.append("World")
let hello = myWrappedArray.get(atIndex: 0) // "Hello"
let world = myWrappedArray.get(atIndex: 1) // "World"

Neat stuff, right! Try adding an Int, or something else to myWrappedArray. Swift won’t let you, because you specified that OurElement can only ever be String for myWrappedArray by placing String between the angle brackets.

You can create wrapped arrays that hold other types by placing different types between the angle brackets. You can even use protocols instead of concrete types:

var codableArray = WrappedArray<Codable>()

The above would allow you to add all kinds of Codable objects to codableArray. Note that if you try to retrieve them from the array using get, you will get a Codable object back, not the conforming type you might expect:

var codableArray = WrappedArray<Codable>()

let somethingCodable = Person() // assuming Person conforms to Codable
codableArray.append(somethingCodable)
let item = codableArray.get(atIndex: 0) // item is Codable, not Person

The reason for this is that get returns OurElement and you specified OurElement to be Codable.

Similar to arrays, Swift has generic objects for Set (Set<Element>), Dictionary (Dictionary<Key, Value>) and many other objects. Keep in mind that whenever you see something between angle brackets, it’s a generic type, not a real type.
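You can see this by spelling those types out in full; the fully written forms and the shorthand forms below are the exact same types:

```swift
// Fully written generic types; Element, Key and Value are the
// placeholders from the standard library's definitions.
let explicitSet: Set<Int> = [1, 2, 3]
let explicitDictionary: Dictionary<String, Int> = ["one": 1, "two": 2]

// The dictionary shorthand is just different syntax for the same type.
let shorthandDictionary: [String: Int] = ["one": 1, "two": 2]
```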

Before we look at an example of generics that you might be able to use in your own code someday, I want to show you that functions can also specify their own generic parameters. A good example of this is the decode method on JSONDecoder:

func decode<T>(_ type: T.Type, from data: Data) throws -> T where T : Decodable {
  // decoding logic
}

If you’ve never used this method yourself, it would normally be called as follows:

let result = try? decoder.decode(SomeType.self, from: data) // result is SomeType

Let’s pick apart the method signature for decode a bit:

  • func decode<T>: the decode method specifies that it uses a generic object called T. Again, T is only a placeholder for whatever the real type will be, just like Element and OurElement were in earlier examples.
  • (_ type: T.Type, from data: Data): one of the arguments here is T.Type. This means that we must call this method and specify the type we want to decode data into. In the example, I used SomeType.self. When you call the method with SomeType.self without explicitly specifying T, Swift can infer that T will now be SomeType.
  • throws -> T: This bit marks decode as throwing and it specifies that decode will return T. In the example, T was inferred to be SomeType.
  • where T : Decodable: this last bit of decode's method signature applies a constraint to T. We can make T whatever we want, as long as that object conforms to Decodable. So in our example, we’re only allowed to use SomeType as the type of T if SomeType is decodable.

Take another look at the method signature of decode and let it all sink in for a moment. We’re going to build our own struct in a moment that will put everything together so if it doesn’t make sense yet, I hope it does after the next section.
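Here's the signature at work in a small end-to-end example; the SomeType struct and the JSON string are made up for illustration:

```swift
import Foundation

struct SomeType: Decodable {
  let name: String
}

let data = Data("""
{"name": "generics"}
""".utf8)

// Passing SomeType.self lets Swift infer that T is SomeType, and
// SomeType is a valid T because it conforms to Decodable.
let result = try? JSONDecoder().decode(SomeType.self, from: data)
```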

Applying generics in your code

You have seen how Array and JSONDecoder.decode use generics. Let’s build something relatively simple that applies your newfound logic using an example that I have run into many times over the years.

Imagine you’re building an app that shows items in table views. And because you like to abstract things and separate concerns, you have taken some of your UITableViewDataSource logic and you’ve split that into a view model and the data source logic itself. Yes, I said view model and no, we’re not going to talk about architecture. View models are just a nice way to practice building something with generics for now.

In your app you might have a couple of lists that expose their data in similar ways and heavily simplified, your code might look like this:

struct ProductsViewModel {
  private var items: [Product]
  var numberOfItems: Int { items.count }

  func item(at indexPath: IndexPath) -> Product {
    return items[indexPath.row]
  }
}

struct FavoritesViewModel {
  private var items: [Favorite]
  var numberOfItems: Int { items.count }

  func item(at indexPath: IndexPath) -> Favorite {
    return items[indexPath.row]
  }
}

This code is really repetitive, isn’t it? Both view models have similar property and method names, the only real difference is the type of the objects they operate on. Look back to our WrappedArray example. Can you figure out how to use generics to make these view models less repetitive?

If not, that’s okay. Here’s the solution:

struct ListViewModel<Item> {
  private var items: [Item]
  var numberOfItems: Int { items.count }

  func item(at indexPath: IndexPath) -> Item {
    return items[indexPath.row]
  }
}

Neat, right! And instead of the following code:

let viewModel = FavoritesViewModel()

You can now write:

let viewModel = ListViewModel<Favorite>()

The changes in your code are minimal, but you’ve removed code duplication which is great! Less code duplication means fewer surface areas for those nasty bugs to land on.

One downside of the approach is that you can now use any type of object as Item, not just Favorite and Product. Let’s fix this by introducing a simple protocol and constraining ListViewModel so it only accepts valid list items as Item:

protocol ListItem {}
extension Favorite: ListItem {}
extension Product: ListItem {}

struct ListViewModel<Item> where Item: ListItem {
  // implementation
}

Of course, you can decide to add certain requirements to your ListItem protocol but for our current purposes, an empty protocol and some extensions do the trick. Similar to how decode was constrained to only accept Decodable types for T, we have now constrained ListViewModel to only allow types that conform to ListItem as Item.

Note
Sometimes the where clause is moved into the angle brackets: struct ListViewModel<Item: ListItem>. The resulting code functions exactly the same, and there is no difference in how Swift compiles either notation.

In summary

In this blog post, you learned where the need for generics comes from by looking at type safety in Swift and how Array makes sure it only contains items of a single type. You created a wrapper around Array to experiment with generics and saw that generics are placeholders for types that are filled in at a later time. Next, you saw that functions can also have generic parameters and that they can be constrained to limit the types that can be used to fill in the generic.

To tie it all together you saw how you can use generics and generic constraints to clean up some duplicated view model code that you actually might have in your projects.

All in all, generics are not easy. It’s okay if you have to come back to this post every now and then to refresh your memory. Eventually, you’ll get the hang of it! If you have questions, remarks or just want to reach out to me, you can find me on Twitter.

Efficiently loading images in table views and collection views

When your app shows images from the network in a table view or collection view, you need to load the images asynchronously to make sure your list scrolls smoothly. More importantly, you’ll need to somehow connect the image that you’re loading to the correct cell in your list (instead of table view or collection view, I’m going to say list from now on). And if the cell goes out of view and is reused to display new data with a new image, you’ll want to cancel the in-progress image load to make sure new images are loaded immediately. And, to make sure we don’t go to the network more often than needed, we’ll need some way to cache images in memory, or on disk, so we can use the local version of an image if we’ve already fetched it in the past. Based on this, we can identify three core problems that we need to solve when loading images for our cells asynchronously:

  1. Setting the loaded image on the correct cell.
  2. Canceling an in-progress load when a cell is reused.
  3. Caching loaded images to avoid unneeded network calls.

In this post, I’m going to show you how to build a simple image loader class, and write a table view cell that will help us solve all these problems. I will also show you how you can use the same image loader to enhance UIImage with some fancy helpers.

The loader and techniques in this post focus on UIKit and do not use Swift's async/await. If you're interested in building an image loader that leverages async/await, make sure you check out this post alongside this one.

Building a simple image loader

When you make a GET request using URLSession, you typically do so through a data task. Normally, you don’t hold on to that data task because you don’t need it. But if you keep a reference to your data task around, you can cancel it at a later time. I’m going to use a dictionary of [UUID: URLSessionDataTask] in the image loader we’re building because that will allow me to keep track of running downloads and cancel them later. I’m also going to use a dictionary of [URL: UIImage] as a simple in-memory cache for loaded images. Based on this, we can begin writing the image loader:

class ImageLoader {
  private var loadedImages = [URL: UIImage]()
  private var runningRequests = [UUID: URLSessionDataTask]() 
}

We can also implement a loadImage(_:completion:) method. This method will accept a URL and a completion handler, and it’s going to return a UUID that’s used to uniquely identify each data task later on. The implementation looks as follows:

func loadImage(_ url: URL, _ completion: @escaping (Result<UIImage, Error>) -> Void) -> UUID? {

  // 1
  if let image = loadedImages[url] {
    completion(.success(image))
    return nil
  }

  // 2
  let uuid = UUID()

  let task = URLSession.shared.dataTask(with: url) { data, response, error in
    // 3
    defer { self.runningRequests.removeValue(forKey: uuid) }

    // 4
    if let data = data, let image = UIImage(data: data) {
      self.loadedImages[url] = image
      completion(.success(image))
      return
    }

    // 5
    guard let error = error else {
      // without an image or an error, we'll just ignore this for now
      // you could add your own special error cases for this scenario
      return
    }

    guard (error as NSError).code == NSURLErrorCancelled else {
      completion(.failure(error))
      return
    }

    // the request was cancelled, no need to call the callback
  }
  task.resume()

  // 6
  runningRequests[uuid] = task
  return uuid
}

Let’s go over the preceding code step by step, following the numbered comments.

  1. If the URL already exists as a key in our in-memory cache, we can immediately call the completion handler. Since there is no active task and nothing to cancel later, we can return nil instead of a UUID instance.
  2. We create a UUID instance that is used to identify the data task that we’re about to create.
  3. When the data task completes, it should be removed from the running requests dictionary. We use a defer statement here to remove the running task before we leave the scope of the data task’s completion handler.
  4. When the data task completes and we can extract an image from the result of the data task, it is cached in the in-memory cache and the completion handler is called with the loaded image. After this, we can return from the data task’s completion handler.
  5. If we receive an error, we check whether it’s due to the task being canceled. If the error is anything other than a cancellation, we forward it to the caller of loadImage(_:completion:). If the task was canceled, we don’t call the completion handler at all.
  6. The data task is stored in the running requests dictionary using the UUID that was created in step 2. This UUID is then returned to the caller.

Note that steps 3 through 5 all take place in the data task’s completion handler. This means that the order in which the listed steps execute isn’t linear. Steps 1 and 2 are executed first, then step 6, and steps 3 through 5 run once the data task completes.

Now that we have logic to load our images, let’s add some logic that allows us to cancel in-progress image downloads too:

func cancelLoad(_ uuid: UUID) {
  runningRequests[uuid]?.cancel()
  runningRequests.removeValue(forKey: uuid)
}

This method receives a UUID, uses it to find a running data task and cancels that task. It also removes the task from the running tasks dictionary, if it exists. Fairly straightforward, right?

Let’s see how you would use this loader in a table view’s tableView(_:cellForRowAt:) method:

// 1
let token = loader.loadImage(imageUrl) { result in
  do {
    // 2
    let image = try result.get()
    // 3
    DispatchQueue.main.async {
      cell.cellImageView.image = image
    }
  } catch {
    // 4
    print(error)
  }
}

// 5
cell.onReuse = {
  if let token = token {
    self.loader.cancelLoad(token)
  }
}

Let’s go through the preceding code step by step again:

  1. The image loader’s loadImage(_:completion:) method is called, and the UUID returned by the loader is stored in a constant.
  2. In the completion handler for loadImage(_:completion:), we extract the result from the completion’s result argument.
  3. If we successfully extracted an image, we dispatch to the main queue and set the fetched image on the cell’s cellImageView property. Not sure what dispatching to the main queue means? Read more in this post.
  4. If something went wrong, print the error. You might want to do something else here in your app.
  5. I’ll show you an example of my cell subclass shortly. The important bit is that we use the UUID that we received from loadImage(_:completion:) to cancel the loader’s load operation for that UUID.

Note that we do this in the cellForRowAt method. This means that every time we’re asked for a cell to show in our list, this method is called for that cell. So the load and cancel are pretty tightly coupled to the cell’s life cycle which is exactly what we want in this case. Let’s see what onReuse is in a sample cell:

class ImageCell: UITableViewCell {
  @IBOutlet var cellImageView: UIImageView!
  var onReuse: () -> Void = {}

  override func prepareForReuse() {
    super.prepareForReuse()
    onReuse()
    cellImageView.image = nil
  }
}

The onReuse property is a closure that we call when the cell’s prepareForReuse method is called. We also remove the current image from the cell in prepareForReuse so it doesn’t show an old image while loading a new one. Cells are reused quite often so doing the appropriate cleanup in prepareForReuse is crucial to prevent artifacts from old data on a cell from showing up when you don’t want to.

If you implement all of this in your app, you’ll have a decent strategy for loading images. You would probably want to add a listener for the memory warning notifications that iOS posts through NotificationCenter, and maybe you would want to cache images to disk as well as in memory, but I don’t think that fits within the scope of this article. Keep these two features in mind though if you want to implement your own image loader. Listening for memory warnings is especially important since your app might be killed by the OS if it consumes too much memory by storing images in the in-memory cache.
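As a rough sketch of what such a memory warning listener could look like (the cache property, its type and the class name are assumptions based on the loader described in this post):

```swift
import UIKit

class CachingImageLoader {
  // Assumed in-memory cache, mirroring the loader from this post.
  private var cachedImages = [URL: UIImage]()
  private var memoryWarningObserver: NSObjectProtocol?

  init() {
    // Clear the in-memory cache whenever iOS reports memory pressure.
    memoryWarningObserver = NotificationCenter.default.addObserver(
      forName: UIApplication.didReceiveMemoryWarningNotification,
      object: nil,
      queue: .main
    ) { [weak self] _ in
      self?.cachedImages.removeAll()
    }
  }
}
```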

Enhancing UIImageView to create a beautiful image loading API

Before we implement the fancy helpers, let’s refactor our cell and cellForRowAt method so they already contain the code we want to write. The prepareForReuse method is going to look as follows:

override func prepareForReuse() {
  super.prepareForReuse()
  cellImageView.image = nil
  cellImageView.cancelImageLoad()
}

This sets the current image to nil and tells the image view to stop loading the image it was loading. All of the image loading code in cellForRowAt should be replaced with the following:

cell.cellImageView.loadImage(at: imageUrl)

Yes, all of that code we had before is now a single line.

To make this new way of loading and canceling work, we’re going to implement a special image loader class called UIImageLoader. It will be a singleton object that manages loading for all UIImageView instances in your app which means that you end up using a single cache for your entire app. Normally you might not want that, but in this case, I think it makes sense. The following code outlines the skeleton of the UIImageLoader:

class UIImageLoader {
  static let loader = UIImageLoader()

  private let imageLoader = ImageLoader()
  private var uuidMap = [UIImageView: UUID]()

  private init() {}

  func load(_ url: URL, for imageView: UIImageView) {

  }

  func cancel(for imageView: UIImageView) {

  }
}

The loader itself is a static instance, and it uses the ImageLoader from the previous section to actually load the images and cache them. We also have a dictionary of [UIImageView: UUID] to keep track of currently active image loading tasks. We map these based on the UIImageView so we can connect individual task identifiers to UIImageView instances.

The implementation for the load(_:for:) method looks as follows:

func load(_ url: URL, for imageView: UIImageView) {
  // 1
  let token = imageLoader.loadImage(url) { result in
    // 2
    defer { self.uuidMap.removeValue(forKey: imageView) }
    do {
      // 3
      let image = try result.get()
      DispatchQueue.main.async {
        imageView.image = image
      }
    } catch {
      // handle the error
    }
  }

  // 4
  if let token = token {
    uuidMap[imageView] = token
  }
}

Step by step, this code does the following:

  1. We initiate the image load using the URL that was passed to load(_:for:).
  2. When the load is completed, we need to clean up the uuidMap by removing the UIImageView for which we’re loading the image from the dictionary.
  3. This is similar to what was done in cellForRowAt before. The image is extracted from the result and set on the image view itself.
  4. Lastly, if we received a token from the image loader, we keep it around in the [UIImageView: UUID] dictionary so we can reference it later if the load has to be canceled.

The cancel(for:) method has the following implementation:

func cancel(for imageView: UIImageView) {
  if let uuid = uuidMap[imageView] {
    imageLoader.cancelLoad(uuid)
    uuidMap.removeValue(forKey: imageView)
  }
}

If we have an active download for the passed image view, it’s canceled and removed from the uuidMap. Very similar to what you’ve seen before.

All we need to do now is add an extension to UIImageView that adds the loadImage(at:) and cancelImageLoad() methods you saw earlier:

extension UIImageView {
  func loadImage(at url: URL) {
    UIImageLoader.loader.load(url, for: self)
  }

  func cancelImageLoad() {
    UIImageLoader.loader.cancel(for: self)
  }
}

Both methods pass self to the image loader. Since the extension methods are added to instances of UIImageView, this helps the image loader connect the URL that it loads to the UIImageView instance in which we want to show the image, leaving us with a very simple and easy to use API! Cool stuff, right?

What’s even better is that this new strategy can also be used for images that are not in a table view cell or collection view cell. It can be used for literally any image view in your app!

In summary

Asynchronously loading data can be a tough problem on its own. When paired with the fleeting nature of table view (and collection view) cells, you run into a whole new range of issues. In this post, you saw how you can use URLSession and a very simple in-memory cache to implement a smart mechanism to start, finish and cancel image downloads.

After creating a simple mechanism, you saw how to create an extra loader object and some extensions for UIImageView to create a very straightforward and easy to use API to load images from URLs directly into your image views.

Keep in mind that the implementations I’ve shown you here aren’t production-ready. You’ll need to do some work in terms of memory management and possibly add a disk cache to make these objects ready for prime time.

If you have any questions about this topic, have feedback or anything else, don’t hesitate to shoot me a message on Twitter.

Appropriately using DispatchQueue.main

Lots of iOS developers eventually run into code that calls upon DispatchQueue.main. It's often clear that this is done to update the UI, but I've seen more than a handful of cases where developers use DispatchQueue.main as an attempt to get their code to work if the UI doesn't update as they expect, or if they run into crashes they don't understand. For that reason, I would like to dedicate this post to the question "When should I use DispatchQueue.main? And Why?".

Understanding what the main dispatch queue does

In iOS, we use dispatch queues to perform work in parallel. This means that you can have several dispatch queues running at the same time, and they can all be performing tasks simultaneously. In general, dispatch queues will only perform one task at a time in a first-in, first-out kind of fashion. They can, however, be configured to schedule work concurrently.
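To make this concrete, here is a minimal sketch of creating both kinds of queues (the labels are placeholder reverse-DNS names):

```swift
import Dispatch

// A serial queue runs one task at a time, in first-in, first-out order.
let serialQueue = DispatchQueue(label: "com.example.serial")

// A concurrent queue dequeues tasks in FIFO order, but can run
// several of them simultaneously.
let concurrentQueue = DispatchQueue(label: "com.example.concurrent",
                                    attributes: .concurrent)
```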

The main dispatch queue is a queue that runs one task at a time. It's also the queue that performs all layout updates. If somebody talks about the importance of not blocking the main thread, what they're really saying is that they don't want to keep the main dispatch queue busy for too long. If you keep the main queue busy too long, you will notice that your application's scroll performance becomes choppy, animations stutter and buttons become unresponsive.

The reason for this is that the main queue is responsible for everything UI related, but like we already established, the main queue is a serial queue. So if the main queue is busy doing something, it can't respond to user input or draw new frames of your animation.

A lot of code in iOS can take a while to run. Making a network request is a good example of code that is slow to execute. Once the network call is sent off to the server, the code has to wait for a response. While waiting, that queue isn't doing anything else. When the response comes back a couple of seconds later, the queue can process the results and move on to the next task. If you performed this work on the main queue, your app wouldn't be able to draw any UI or respond to user input for the entire duration of the network request.

We can summarise this whole section in just a short sentence. The main queue is responsible for drawing UI and responding to user input. In the next section, we'll take a closer look at when we should use DispatchQueue.main and what that does exactly.

Using DispatchQueue.main in practice

Sticking with the example of making a network call, you can assume that network calls are made on their own queue, away from the main queue. Before we continue, I want to refresh your memory and show you what the code for a network call looks like:

URLSession.shared.dataTask(with: someURL) { data, response, error in 

}

The data task is created with a completion closure which is called when the network call completes. Since the data that's returned by the server might still need to be processed, iOS calls your completion closure on a background queue. Usually, this is great. I can't think of a situation where you wouldn't want to do any processing at all on the data you receive from the server. Sometimes you might have to do more processing than other times, but no processing at all is very rare. Once you're done processing the data, however, you'll probably want to update your UI.

Since you know you're not on the main queue when handling a network response, you'll need to use DispatchQueue.main to make sure your UI updates on the main queue. The following code is an example of reloading a table view on the main queue.

DispatchQueue.main.async {
  self.tableView.reloadData()
}

This code looks simple enough, right? But what's really going on here?

DispatchQueue.main is an instance of DispatchQueue. All dispatch queues can schedule their work to be executed sync or async. Typically you will want to schedule work async, because scheduling your work synchronously would halt the execution of the current thread, wait for the target thread to execute the closure that you pass to sync, and then resume the current thread. Using async allows the current thread to resume while the target thread can schedule and perform your closure when needed. Let's look at an example:

func executesAsync() {
  var result = 0

  DispatchQueue.global().async {
    result = 10 * 10
  }

  print(result) // 0
}

func executesSync() {
  var result = 0

  DispatchQueue.global().sync {
    result = 10 * 10
  }

  print(result) // 100
}

Both of the preceding functions look very similar. The main difference is that executesAsync dispatches to a global queue asynchronously, causing result to be printed before it is updated. The executesSync function dispatches to a global queue synchronously, which pauses the execution of executesSync until the closure passed to sync finishes executing. This means that result has already been updated when print is called.

Think about the preceding example in the context of reloading a table view. If we used sync instead of async, the execution of the network call's completion closure would be paused while the main thread reloads the table view. Once the table view is reloaded, the execution of the closure resumes. By using async, the completion closure continues its execution and the table view reloads whenever the main queue has time to do so. This, hopefully, happens pretty much instantaneously, because if it takes a while that's probably a symptom of blocking the main thread.

Knowing when to use DispatchQueue.main

Now that you know what the main queue is, and you've seen an example of how and when DispatchQueue.main is used, how do you know when you should use DispatchQueue.main in your project?

The simplest answer is to always use it when you're updating UI in a delegate method or completion closure, because you don't control how or when that code is called. In addition, Xcode's Main Thread Checker will flag your app when it detects that you're doing UI work away from the main thread. While this is a very convenient feature that has helped me prevent bugs every now and then, it's not reliable 100% of the time and there are better ways to make sure your code runs on the main thread when it has to.

Remember to always dispatch your work to the main queue asynchronously using DispatchQueue.main.async to avoid blocking the current thread, and potentially even deadlocking your app, which can happen if you call DispatchQueue.main.sync from code that is already on the main queue. Dispatching to the main queue with async does not carry this same risk, even if you're already on the main queue.
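If you're not sure whether a piece of code might already be running on the main thread, one common pattern is a small helper like this sketch (the function name is made up for illustration):

```swift
import Foundation

func performOnMain(_ work: @escaping () -> Void) {
  if Thread.isMainThread {
    // Already on the main thread; run the work directly.
    // Calling DispatchQueue.main.sync here would deadlock.
    work()
  } else {
    DispatchQueue.main.async(execute: work)
  }
}
```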

Let's look at one last example. If you fetch a user's current push notification permissions or request their contacts, you know that operation runs asynchronously and it might take a while. If you want to update the UI in the completion closure that's used for these operations, it's best to explicitly make sure your UI updates are done on the main queue by wrapping your UI updates in a DispatchQueue.main.async block.
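As a sketch of that last example, UNUserNotificationCenter's getNotificationSettings(completionHandler:) calls its closure on a background queue, so any UI work inside it needs to hop to the main queue (the statusLabel parameter is a hypothetical UILabel):

```swift
import UIKit
import UserNotifications

func refreshNotificationStatus(on statusLabel: UILabel) {
  UNUserNotificationCenter.current().getNotificationSettings { settings in
    // This closure runs on a background queue; wrap UI work in
    // DispatchQueue.main.async before touching the label.
    DispatchQueue.main.async {
      statusLabel.text = settings.authorizationStatus == .authorized
        ? "Notifications enabled"
        : "Notifications disabled"
    }
  }
}
```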

In Summary

Writing applications that use multiple queues can be really complicated. As a general rule, keep in mind that the main queue is reserved for UI work. That doesn't mean that all non-UI work has to go off the main queue, but it does mean that all the UI work must be on the main queue, and all other work can be somewhere else if you want it to be, for example, if you know an operation might take a while to complete.

In other words, the short answer to the question from the beginning of this article is that you should use DispatchQueue.main to send UI related work from non-main queues to the main queue.

If you have questions about this article, feedback, suggestions or anything else, feel free to reach out to me on Twitter.

Changes to location access in iOS 13

If you're working on an app that requires access to a user's location, even when your user has sent your app to the background, you might have noticed that when you ask the user for the appropriate permission, iOS 13 shows a different permissions dialog than you might expect. In iOS 12 and below, when you ask for so-called always permissions, the user can choose to allow this, allow location access only in the foreground, or not allow location access at all. In iOS 13, the user can choose to allow access once, while in use, or not at all. The allow always option is missing from the permissions dialog completely.

In this post, you will learn what changes were made to location access in iOS 13, why there is no more allow always option and lastly you'll learn what allow once means for your app. We have a lot to cover so let's dive right in.

Asking for background location permissions in iOS 13

The basic rules and principles of asking for location permissions in applications haven't changed since iOS 12. So all of the code you wrote to ask a user for location permissions should still work the same in iOS 13 as it did in iOS 12. The major differences in location permissions are user-facing. In particular, Apple has doubled down on security and user-friendliness when it comes to background location access. A user's location is extremely privacy-sensitive data and as a developer, you should treat a user's location with extreme caution. For this reason, Apple decided that accessing a user's location in the background deserves a special, context-sensitive prompt rather than a prompt that is presented while your app is in the foreground.

If you want to access a user's location in the background, you can ask them for this permission using the following code:

let locationManager = CLLocationManager()

func askAlwaysPermission() {
  locationManager.requestAlwaysAuthorization()
}

This code will cause the location permission dialog to pop up if the current CLAuthorizationStatus is .notDetermined. The user can now choose to deny location access, allow once or to allow access while in use. If the user chooses access while in use, your location manager's delegate will be informed of the choice and the current authorization status will be .authorizedAlways.

But wait. The user chooses when in use! Why are we now able to always access the user's location?

Because Apple is making background location access more user-centered, your app is tricked into thinking it has access to the user's location in the background by giving it provisional authorization. You are expected to handle this scenario just like you normally would: set up your geofencing, start listening for location updates and more. When your app is eventually backgrounded and tries to use the user's location, iOS will wait for an appropriate moment to inform the user that you want to use their location in the background. The user can then choose to either allow this or to deny it.
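Handling this normally means observing authorization changes through your location manager's delegate. Here's a minimal sketch using the pre-iOS 14 delegate method (the class name is made up for illustration):

```swift
import CoreLocation

class LocationPermissionHandler: NSObject, CLLocationManagerDelegate {
  func locationManager(_ manager: CLLocationManager,
                       didChangeAuthorization status: CLAuthorizationStatus) {
    switch status {
    case .authorizedAlways:
      // On iOS 13 this can be a provisional grant; proceed as if you
      // have always authorization and let iOS prompt the user later.
      manager.startUpdatingLocation()
    case .authorizedWhenInUse:
      manager.startUpdatingLocation()
    default:
      break // Not authorized, or not determined yet.
    }
  }
}
```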

Note that your app is not aware of this interaction, and location events in the background are not delivered to your app until the user has explicitly granted you permission to access their location in the background. This new experience for the user means that you must ensure that your user understands why you need access to their location in the background. If you're building an app that gives a user suggestions for great coffee places near their current location, you probably don't need background location access. All you really need to know is where the user is while they're using your app so you can give them good suggestions that are nearby. In this example, it's very unlikely that the user will understand why the background permission dialog pops up, and they will deny access.

However, if you're a home automation app that uses geofences to execute a certain home automation when a user enters or leaves a certain area, it makes sense for you to need the always access permission so you can execute a specific automation even if the user isn't actively using your app.

Speaking of geofences and similar APIs that require always authorization in iOS 12, iOS 13 allows you to set up geofences, listen for significant location changes and use similar monitoring APIs even if your app only has when in use permission. This allows you to monitor geofences and more as long as your app is active. Usually, your app is active when it's foregrounded, but there are exceptions.

If you have an app where you don't really need always authorization but you do want to allow your user to send your app to the background while you access their location, for example, if you're building some kind of scavenger hunt app where a user must enter a geofence that you've set up, you can set the allowsBackgroundLocationUpdates property on your location manager to true and your app will show the blue activity indicator on the user's status bar. While this activity indicator is present, your app remains active and location-related events are delivered to your app.
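A sketch of that setup could look as follows. Note that this also assumes the location updates background mode is enabled in your app's capabilities and the usual usage description keys are in Info.plist; the function name is hypothetical:

```swift
import CoreLocation

let locationManager = CLLocationManager()

func startScavengerHuntTracking() {
  locationManager.requestWhenInUseAuthorization()
  // Keeps location events flowing after the app is backgrounded and
  // shows the blue activity indicator in the status bar.
  locationManager.allowsBackgroundLocationUpdates = true
  locationManager.startUpdatingLocation()
}
```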

All in all, you shouldn't have to change much, if anything at all, for the new location permission strategy in iOS 13. Your app's location permissions will change according to the user's preferences just like they always have. The biggest change is in the UI and the fact that the user will not be prompted for always authorization until you actually try to use their location in the background.

Let's look at another change in iOS 13 that enables users to allow your app to access the user's location once.

Understanding what it means when a user allows your app to access their location once

Along with provisional authorization for accessing the user's location in the background, users can now choose to allow your app to access their location once. If a user chooses this option when the location permissions dialog appears, your app will be given the .authorizedWhenInUse authorization status. This means that your app can't detect whether it will always be able to access the user's location when they're using your app, or whether it can access their location only for the current session. This, again, is a change that Apple made to improve the user's experience and privacy.

If a user chooses the allow once permission option, your app can access their current location until your app is moved to the background and becomes inactive. If your app becomes active again, you have to ask for location permission again.

It's recommended that you don't ask for permission as soon as the app launches. Instead, think about why you asked for location permission in the first place. The user was probably trying to do something with your app where it made sense to access their location. You should wait for the next moment where it makes sense to access the user's location rather than asking them for permission immediately when your app returns to the foreground.

If you think your app should have access to the user's location even if the app is backgrounded for a while, you can set the location manager's allowsBackgroundLocationUpdates to true and your app's session remains active until the user decides to stop it. This would be appropriate for an app that tracks hikes or runs where a user would expect you to continue actively tracking their location even while their phone is in their pocket.

Similar to provisional authorization, this change shouldn't impact your code too much. If you're already dealing with your user's location in a careful and considerate way, chances are that everything in your app will work perfectly fine with the new allow once permission.

In Summary

Apple's efforts to ensure that your user's data is safe and protected are always ongoing. This means that they sometimes make changes to how iOS works that impact our apps in surprising ways and I think that location access in iOS 13 is no exception to this. It can be very surprising when you're developing your app on iOS 13 and things don't work like you're used to.

In this blog post, you learned what changes Apple has made to location permissions, and how they impact your code. You learned that apps now get provisional access to a user's location in the background and that provisional access can be converted to permanent access if you attempt to use a user's location in the background and they allow it. You also learned that your app won't receive any background location events until the user has explicitly allowed this.

In addition to provisional location access, you learned about one-time location access. You saw that your app will think it has the while in use permission and that this permission will be gone the next time the user launches your app. This means that you should make sure to ask for location permissions when it makes sense rather than doing so immediately. Poorly timed location permission dialogs are far more likely to result in a negative response from a user than a well thought out experience.

If you have any questions or feedback about this post or any other post on my blog, make sure to reach out on Twitter.

Using launch arguments for easier Core Data and SwiftData debugging

If you use Core Data or SwiftData in your apps, you might be aware that the larger and more complicated your setup becomes, the harder it is to debug. It's at this point where you might start to get frustrated with your persistence framework and its black-box kind of implementation. You might think that you simply have to trust that Core Data (or SwiftData) will do the ideal thing for your app.

Furthermore, you might have a setup with multiple managed object contexts, each confined to its own thread. And when your app crashes sometimes, you think it's related to Core Data in one way or another, but you're not quite sure how to debug it. Especially because your app only crashes sometimes rather than all the time.

In this post, I want to show you some Core Data related Launch Arguments that will help you debug and validate your Core Data related code. Let's start by using Launch Arguments to see what Core Data does under the hood. Next, we'll see how to detect most of your threading problems in Core Data.

Knowing what's happening under the hood

Sometimes, you want to open the underlying SQLite file for your app to see whether your data is actually stored as expected, or maybe you want to inspect the database structure. To do this, you need to know where the underlying SQLite file is stored, which can be especially challenging when you're running on the simulator. Because let's be honest, we don't know what the UUID of our simulator is, and we most certainly don't want to have to figure out its location every time we need to find our SQLite file.

Luckily, you can use a Launch Argument to log some information to the console. To add a Launch Argument, use the top menu in Xcode to go to Product -> Scheme -> Edit Scheme... (Cmd + <). Select the Run configuration and go to the Arguments tab as shown in the following screenshot:

Example of the schema window

To get information logged to the console, click the + button under the Launch Arguments and add the following argument:

-com.apple.CoreData.SQLDebug 1

The result should look as follows:

Screenshot of Core Data debug flags in scheme editor

If you run your app after setting this Launch Argument, Core Data and SwiftData will start logging basic information like where the backing SQLite file is stored, and what queries are executed.

You can increase the log level all the way up to level 4; at that point the system will log pretty much everything you might want to know about what's going on under the hood and more. For example, this might help you notice that your app is performing lots of SQLite queries to fetch object relationships. Based on that discovery, you might decide that certain fetch requests should automatically fetch certain relationships by setting your fetch request's relationshipKeyPathsForPrefetching property.
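As a sketch of what that could look like (the Movie entity, its characters relationship, and the context constant are hypothetical):

```swift
import CoreData

let request: NSFetchRequest<Movie> = Movie.fetchRequest()
// Fetch the related characters in the same round trip to SQLite instead
// of firing a separate query whenever the relationship is first accessed.
request.relationshipKeyPathsForPrefetching = ["characters"]
let movies = try context.fetch(request)
```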

In case you're curious, the following list describes the different Core Data SQLDebug log levels:

  1. SQL statements and their execution time
  2. Values that are bound in the statement
  3. Fetched managed object IDs
  4. SQLite EXPLAIN statement

These four log levels give you a lot of information that you can use to improve your apps. Of course, the usefulness of certain log levels like level four depends entirely on your knowledge of SQLite. But even if you're not well versed in SQLite, I recommend taking a look at all of the log levels sometime; they can produce some interesting output.

Detecting threading problems in Core Data

One of the biggest frustrations you might have with Core Data is random crashes due to threading problems. You're supposed to use managed object contexts and managed objects only on the threads that they were created on, and violating this rule might crash your app. However, most of the time your app won't crash and everything will seem fine. But then every now and then a random crash pops up. You can tell that it's Core Data related, but you might not be sure where the error is coming from exactly.

Luckily, the Core Data team has thought of a way to help you get rid of these crashes. In Xcode, you can add the -com.apple.CoreData.ConcurrencyDebug 1 Launch Argument to run Core Data in an extra strict mode. Whenever Core Data encounters a threading violation, your app will immediately crash and Xcode will point out the exact line where the violation occurred.
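The usual fix for violations that this flag surfaces is to funnel all work through a context's perform(_:) or performAndWait(_:) methods so it runs on the queue the context belongs to. A sketch (the backgroundContext constant and the Movie entity are hypothetical):

```swift
import CoreData

// Wrong: touching a background context's objects from an arbitrary
// thread trips -com.apple.CoreData.ConcurrencyDebug immediately.
// Right: let the context schedule the work on its own queue.
backgroundContext.perform {
  let movie = Movie(context: backgroundContext)
  movie.title = "Example"
  try? backgroundContext.save()
}
```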

I recommend that everybody use this Launch Argument in development because it will help you catch threading problems early, and it forces you to fix them right away rather than getting some nasty surprises when your app is already published to the App Store.

In Summary

While you can’t eliminate all bugs and performance issues with debug flags, it does help to have some tools available that you can use to make your problems more visible. Whether it's making your app crash if you break Core Data's threading rules, or gaining insight into the different SQLite queries Core Data performs under the hood, it's always good to understand how your code behaves.

One of the nicer things about SwiftData being built on top of Core Data is that all of these debug arguments will help in SwiftData apps too. I add the SQLDebug argument to pretty much every app I work on because it makes keeping an eye on your app's efficiency so much easier.

I hope these debug flags will help save you loads of time, just like they do for me. If you have questions, feedback or simply want to reach out to me, don't hesitate to contact me on Twitter.