Understanding how and when SwiftUI decides to redraw views

There's a good chance that you're using SwiftUI and that you're not quite sure how and when SwiftUI determines which views should redraw. And arguably, that's a good thing. SwiftUI is clearly smart enough to make decent decisions without any negative consequences. In fact, you might even have set up your app in a way that coincidentally plays into SwiftUI's strength beautifully. There's an equal likelihood that your setup isn't as performant as you might think but you're just not seeing any issues yet.

Recently, I had to figure out how SwiftUI determines that it should redraw views in order to fix some performance issues. One issue was that for some reason SwiftUI decided that it needed to access the bodies of a lot of views that never changed, which led to some dropped frames while scrolling. Another issue that I investigated was one where scrolling performance suffered greatly when just one or two items in a list were updated.

The details and specifics of these issues aren't that interesting. What's more interesting, in my opinion, is what I learned about how and when SwiftUI decides to redraw views. Some of the things I noticed were quite surprising to me, while others felt very natural and confirmed some thoughts I've had regarding SwiftUI for a while.

Please keep in mind that I don't have insight into SwiftUI's internals, the information I've gathered in this post is based on observations and measurements and there are no guarantees that they'll remain accurate in the future. In general you shouldn't rely on undocumented internals, even if you have lots of proof to back up your reasoning. That said, the measurements in this post were done to solve real problems, and I think the conclusions that can be drawn from these measurements explain sensible best-practices without relying on the internals of SwiftUI too much.

With that out of the way, let's dive right in!

Understanding the example we'll work from

The most important thing to understand while we're exploring SwiftUI is the example that I'm using to work from. Luckily, this example is relatively simple. If you want to check out the source code that I've used to gather measurements during my exploration, you can grab it from GitHub.

The sample I've been working from is based on a list of items. There's functionality to set a list item to "active". Doing this will mark the currently active item (if one exists) as not active, and the next item in the list becomes active. I can either do this by hand, or I can do it on a timer. The models used to populate my cells also have a random UUID that's not shown in the cell. However, when changing the active cell there's an option in the app to update the random UUID on every model in my data source.

I'll show you the important parts of my model and data source code first. After that I'll show you the view code I'm working from, and then we can get busy with taking some measurements.

Understanding the sample's data model

My sample app uses an MVVM-like strategy where cells in my list receive a model object that they display. The list itself uses a view model that maintains some state surrounding which item is active, and whether a list of items is loaded already.

Let's look at the model that's shown in my cells first:

struct Item: Identifiable {
    var isActive: Bool
    let id: UUID
    var nonVisibleProperty: UUID

    init(id: UUID = UUID(), isActive: Bool = false, nonVisibleProperty: UUID = UUID()) {
        self.id = id
        self.isActive = isActive
        self.nonVisibleProperty = nonVisibleProperty
    }
}

It's pretty simple, and what's important for you to note is that my model is a struct. Because structs are value types, changing the nonVisibleProperty or isActive state on a copy of an item does not trigger a view redraw on its own. Instead, there's a view model that holds all of the items I want to show. The view model is an observable object, and whenever one of its items changes, it will update its @Published list of items.
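To make that value semantics point concrete, here's a small sketch (with a trimmed-down Item) showing why mutating a copy never reaches the array that the view model publishes:

```swift
import Foundation

// Trimmed-down version of the Item struct from this post
struct Item: Identifiable {
    let id = UUID()
    var isActive = false
}

var items = [Item()]

var copy = items[0]
copy.isActive = true

print(items[0].isActive) // false — the array still holds its own, unchanged copy
items[0] = copy          // writing the copy back is what actually changes the published array
print(items[0].isActive) // true
```

This is why the view model reassigns its whole list of items: that reassignment is the change SwiftUI observes.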

I won't put the full view model code in this post, you can view it right here on GitHub if you're interested to see the entire setup.

The list of items is defined as follows:

@Published var state: State = .loading

By using a State enum it's possible to easily show appropriate UI that corresponds to the state of the view model. For simplicity I only have two states in my State enum:

enum State {
    case loading
    case loaded([Item])
}

Probably the most interesting part of the view model I defined is how I'm toggling my model's isActive property. Here's what my implementation looks like for the method that activates the next item in my list:

func activateNextItem() {
    guard case .loaded(let items) = state else {
        return
    }

    var itemsCopy = items

    defer {
        if isMutatingHiddenProperty {
            itemsCopy = itemsCopy.map { item in
                var copy = item
                copy.nonVisibleProperty = UUID()
                return copy
            }
        }

        self.state = .loaded(itemsCopy)
    }

    guard let oldIndex = activeIndex, oldIndex + 1 < items.endIndex else {
        activeIndex = 0
        setActiveStateForItem(at: activeIndex!, to: true, in: &itemsCopy)
        return
    }

    activeIndex = oldIndex + 1

    setActiveStateForItem(at: oldIndex, to: false, in: &itemsCopy)
    setActiveStateForItem(at: activeIndex!, to: true, in: &itemsCopy)
}

I'm using a defer to assign a copy of my list of items to self.state regardless of whether my guard requirement is satisfied or not.

If this method looks suboptimal to you, that's ok. The point of this exercise was never to write optimal code. The point is to write code that allows us to observe and analyze SwiftUI's behavior when it comes to determining which views get redrawn, and when.

Before we start taking some measurements, I want to show you what my views look like.

Understanding the sample's views

The sample views are quite simple so I won't explain them in detail. My cell view looks as follows:

struct StateDrivenCell: View {
    let item: Item

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            HStack {
                VStack(alignment: .leading) {
                    Text("identifier:").bold()
                    Text(item.id.uuidString.split(separator: "-").first!)
                }
                Spacer()
            }

            HStack {
                VStack(alignment: .leading) {
                    Text("active state:").bold()
                    Text("is active: \(item.isActive ? "✅ yes" : "❌ no")")
                }
                Spacer()
            }

        }.padding()
    }
}

All this cell does is display its model. Nothing more, nothing less.

The list view looks as follows:

struct StateDrivenView: View {
    @StateObject var state = DataSource()

    var body: some View {
        NavigationView {
            ScrollView {
                if case .loaded(let items) = state.state {
                    LazyVStack {
                        ForEach(items) { item in
                            StateDrivenCell(item: item)
                        }
                    }
                } else {
                    ProgressView()
                }
            }
            .toolbar {
                // some buttons to activate next item, start timer, etc.
            }
            .navigationTitle(Text("State driven"))
        }
        .onAppear {
            state.loadItems()
        }
    }
}

Overall, this view shouldn't surprise you too much.

When looking at this, you might expect things to be suboptimal, and maybe you would set this example up in a different way. That's okay because again, the point of this code is not to be optimal. In fact, as our measurements will soon prove, we can write much better code with minimal changes. Instead, the point is to observe and analyze how SwiftUI determines what it should redraw.

To do that, we'll make extensive use of Instruments.

Using Instruments to understand SwiftUI's redraw behavior

When we run our app, everything looks fine at first glance. When we set the application up to automatically update the active item status every second, we don't see any issues. Even when we set the application up to automatically mutate our non-visible property everything seems completely fine.

At this point, it's a good idea to run the application with the SwiftUI Instruments template to see if everything looks exactly as we expect.

In particular, we're looking for body access where we don't expect it.

If everything works correctly, we only want the view bodies for cells that have different data to be accessed. More specifically, we ideally don't redraw any views that wouldn't end up looking any different if they were redrawn.

Whenever you build your app for profiling in Xcode, Instruments will automatically open. If you're running your own SwiftUI related profiling, you'll want to select the SwiftUI template from Instruments' templates.

Instruments' template selection screen

Once you've opened the SwiftUI template, you can run your application and perform the interactions that you want to profile. In my case, I set my sample app up to automatically update the active item every second, and every time this happens I change some non-visible properties to see if cells are redrawn even if their output looks the same.

When I run the app with this configuration, here's what a single timer tick looks like in Instruments when I focus on the View Body timeline:

A screenshot of Instruments that shows 6 cells get re-evaluated

In this image, you can see that the view body for StateDrivenCell was invoked six times. In other words, six cells got their bodies evaluated so they could be redrawn on the screen. This number is roughly equal to the number of cells on screen (my device fits five cells) so to some extent this makes sense.

On the other hand, we know that out of these six cells only two actually updated. One would have its isActive state flipped from true to false and the other would have its isActive state flipped from false to true. The other property that we updated is not shown and doesn't influence the cell's body in any way. If I run the same experiment except I don't update the non-visible property every time, the result is that only two cell bodies get re-evaluated.

Instruments screenshot that shows 2 cells get re-evaluated when we don't change a non-visible property

We can see that apparently SwiftUI is smart enough to somehow compare our models even though they're not Equatable. In an ideal world, we would write our app in a way that would ensure only the two cell bodies that show the models that changed in a meaningful way are evaluated.

Before we dig into that, take a good look at what's shown in Instruments. It shows that StateDrivenView also has its body evaluated.

The reason this happens is that the StateDrivenView holds a @StateObject as the source of truth for the entire list. Whenever we change one of the @StateObject's published properties, the StateDrivenView's body will be evaluated because its source of truth changed.

Note that body evaluation is not guaranteed to trigger an actual redraw on the screen. We're seeing Core Animation commits in the Instruments analysis so it's pretty safe to assume something got redrawn, but it's hard to determine what exactly. What's certain though is that if SwiftUI evaluates the body of a view, there's a good chance this leads to a redraw of the accessed view itself, or that one of its child views needs to be redrawn. It's also good to mention that a body evaluation does not immediately lead to a redraw. In other words, if a view's body is evaluated multiple times during a single render loop, the view is only redrawn once. As a mental model, you can think of SwiftUI collecting views that need to be redrawn during each render loop, and then redrawing everything that needs to be redrawn in one pass rather than committing a redraw for every change (which, as you can imagine, would be wildly inefficient). This model isn't 100% accurate, but in my opinion it's good enough for the context of this blog post.

Because we're using a LazyVStack in the view, not all cells are instantiated immediately which means that the StateDrivenView will initially only create about six cells. Each of these six cells gets created when the StateDrivenView's body is re-evaluated and all of their bodies get re-evaluated too.

You might think that this is just the way SwiftUI works, but we can actually observe some interesting behavior if we make some minor changes to our model. By making our model Equatable, we can give some hints to SwiftUI about whether or not the underlying data for our cell got changed. This will in turn influence whether the cell's body is evaluated or not.

This is also where things get a little... strange. For now, let's pretend everything is completely normal and add an Equatable conformance to our model to see what happens.

Here's what my conformance looks like:

struct Item: Identifiable, Equatable {
    var isActive: Bool
    let id: UUID
    var nonVisibleProperty: UUID

    init(id: UUID = UUID(), isActive: Bool = false, nonVisibleProperty: UUID = UUID()) {
        self.id = id
        self.isActive = isActive
        self.nonVisibleProperty = nonVisibleProperty
    }

    static func == (lhs: Item, rhs: Item) -> Bool {
        return lhs.id == rhs.id && lhs.isActive == rhs.isActive
    }
}

The parameters for my test are exactly the same. Every second, a new item is made active, the previously active item is made inactive, and the nonVisibleProperty for every item in my list is mutated.

My Equatable conformance ignores the nonVisibleProperty and only compares the id and the isActive property. Based on this, what I want to happen is that only the bodies of the cells whose item's isActive state changed are evaluated.

Unfortunately, my Instruments output at this point still looks the same.

A screenshot of Instruments that shows 6 cells get re-evaluated

While I was putting together the sample app for this post, this outcome had me stumped. I literally had a project open alongside this project where I could reliably fix this body evaluation by making my model Equatable. After spending a lot of time trying to figure out what was causing this, I added a random String to my model, making it look like this:

struct Item: Identifiable, Equatable {
    var isActive: Bool
    let id: UUID
    var nonVisibleProperty: UUID
    let someString: String

    init(id: UUID = UUID(), isActive: Bool = false, nonVisibleProperty: UUID = UUID()) {
        self.id = id
        self.isActive = isActive
        self.nonVisibleProperty = nonVisibleProperty
        self.someString = nonVisibleProperty.uuidString
    }

    static func == (lhs: Item, rhs: Item) -> Bool {
        return lhs.id == rhs.id && lhs.isActive == rhs.isActive
    }
}

After updating the app with this random String added to my model, I'm suddenly seeing the output I was looking for. The View body timeline now shows that only two StateDrivenCell bodies get evaluated every time my experiment runs.

A screenshot of Instruments that shows 2 cells get re-evaluated with our updates in place

It appears that SwiftUI determines whether a struct is a plain data type or a more complex one by running the built-in _isPOD function, which is used to determine whether a struct is a "plain old data" type. If it is, SwiftUI will use reflection to directly compare the fields on the struct. If we're not dealing with a plain old data type, the custom == function is used. Adding a String property to our struct changes it from a plain old data type into a complex type, which means SwiftUI will use our custom == implementation.
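You can observe this distinction yourself. _isPOD is an underscored standard library function (so not something to rely on in shipping code), but it's callable, and it treats the UUID-only version of our model differently from the one that contains a String:

```swift
import Foundation

// Mirrors the original Item: only trivially copyable fields
struct PlainItem {
    var isActive: Bool
    let id = UUID()
    var nonVisibleProperty = UUID()
}

// Mirrors the updated Item: String involves reference counting
struct ComplexItem {
    var isActive: Bool
    let id = UUID()
    var someString = ""
}

print(_isPOD(PlainItem.self))   // true — Bool and UUID are "plain old data"
print(_isPOD(ComplexItem.self)) // false — adding a String makes the struct non-POD
```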

To learn more about this, take a look at this post by the SwiftUI Lab.

After I realized that I can make my models conform to Equatable and that this influences whether my view's body is evaluated or not, I wondered what leads SwiftUI to compare my model struct in the first place. After all, my cell is defined as follows:

struct StateDrivenCell: View {
    let item: Item

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            // cell contents
        }.padding()
    }
}

The item property is not observed. It's a simple stored property on my view. And yet, according to Instruments, my view's body isn't evaluated when the item is considered unchanged, so it's not like SwiftUI is comparing the entire view. More interestingly, it was able to do some kind of comparison before I made my model Equatable.

The only conclusion that I can draw here is that SwiftUI will compare your models regardless of their Equatable conformance in order to determine whether a view needs to have its body re-evaluated. And in some cases, your Equatable conformance might be ignored.

At this point I was curious. Does SwiftUI evaluate everything on my struct that's not my body? Or does it evaluate stored properties only? To find out, I added the following computed property to my view:

var randomInt: Int { Int.random(in: 0..<Int.max) }

Every time this is accessed, it will return a new random value. If SwiftUI takes this property into account when it determines whether or not StateDrivenCell's body needs to be re-evaluated, this would negate my Equatable conformance.

After profiling this change with Instruments, I noticed that this did not impact my body access. The body for only two cells got evaluated every second.

Then I redefined randomInt as follows:

let randomInt = Int.random(in: 0..<Int.max)

Now, every time an instance of my struct is created, randomInt is assigned a random value that then remains constant for that instance. When I ran my app again, I noticed that I was right back where I started: six body evaluations every time my experiment runs.

A screenshot of Instruments that shows 6 cells get re-evaluated

This led me to conclude that SwiftUI will always attempt to compare all of a view's stored properties, regardless of whether they're Equatable. If one of those stored properties provides an Equatable conformance, that implementation is used if SwiftUI considers it relevant for your model. It's not quite clear when SwiftUI does or does not consider your model's Equatable implementation relevant.

An interesting side-note here is that it's also possible to make your view itself conform to Equatable and compare relevant model properties in there if the model itself isn't Equatable:

extension StateDrivenCell: Equatable {
    static func ==(lhs: StateDrivenCell, rhs: StateDrivenCell) -> Bool {
        return lhs.item.id == rhs.item.id && lhs.item.isActive == rhs.item.isActive
    }
}

What's interesting is that this conformance is pretty much ignored under the same circumstances as before. If Item does not have this extra string that I added, there are six cell bodies accessed every second. Adding the string back makes this work properly regardless of whether the Item itself is Equatable.

I told you things would get weird here, didn't I...

Overall, I feel like the model I had is probably way too simple, which might lead SwiftUI to get more eager with its body access. In the real world, the situation where an Equatable conformance on a model leads to SwiftUI no longer re-evaluating a cell's body seems far more likely than the situation where the conformance is ignored.

In fact, I have tinkered with this in a real app, a sample experiment, and a dedicated sample app for this post and only in the dedicated app did I see this problem.

Takeaways on SwiftUI redrawing based on Instruments analysis

What we've seen so far is that SwiftUI will evaluate a view's body if it thinks that this view's underlying data will change its visual representation (or that of one of the view's subviews). It will do so by comparing all stored properties before evaluating the body, regardless of whether these stored properties are Equatable.

If your stored properties are Equatable, SwiftUI might decide to rely on your Equatable conformance to determine whether or not your model changed. If SwiftUI determines that all stored properties are still equal, your view's body is not evaluated. If one of the properties changed, the body is evaluated and each of the views returned from your view's body is evaluated in the same way that I just described.

Conforming your view to Equatable works in the same way except you get to decide which properties participate in the comparison. This means that you could take computed properties into account, or you could ignore some of your view's stored properties.
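If you want to be explicit about opting into this, SwiftUI also ships an .equatable() modifier that wraps an Equatable view in an EquatableView. A sketch of what that could look like at the call site, reusing the StateDrivenCell and Item types from earlier in this post:

```swift
import SwiftUI

// Sketch: opting in to Equatable-based diffing with .equatable().
// StateDrivenCell (conforming to Equatable) and Item are the types from this post.
struct EquatableList: View {
    let items: [Item]

    var body: some View {
        LazyVStack {
            ForEach(items) { item in
                StateDrivenCell(item: item)
                    .equatable() // ask SwiftUI to consult == before re-evaluating body
            }
        }
    }
}
```

Based on my measurements, I wouldn't count on this forcing the comparison in the POD case either, but it documents the intent clearly.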

Note that this only applies to view updates that weren't triggered by a view's @ObservedObject, @StateObject, @State, @Binding, and similar properties. Changes in these properties will immediately cause your view's body to be evaluated.

Designing your app to play into SwiftUI's behavior

Now that we know about some of SwiftUI's behavior, we can think about how our app can play into this behavior. One thing I've purposefully ignored up until now is that the body for our StateDrivenView got evaluated every second.

The reason this happens is that we assign to the DataSource's state property every second and this property is marked with @Published.

Technically, our data source didn't really change. It's just one of the properties on one of the models that we're showing in the list that got changed. It'd be far nicer if we could scope our view updates entirely to the cells holding onto the changed models.

Not only would this get rid of the StateDrivenView's body being evaluated every second, it would allow us to get rid of the entire Equatable conformance that we added in the previous section.

To achieve this, we can keep the @Published property on DataSource. It doesn't need to be changed. What needs to be updated is the definition of Item, and the way we toggle the active item.

First, let's make Item a class and mark it as an ObservableObject. We'll also mark its isActive property as @Published:

class Item: Identifiable, ObservableObject {
    @Published var isActive: Bool
    let id: UUID
    var nonVisibleProperty: UUID

    init(id: UUID = UUID(), isActive: Bool = false, nonVisibleProperty: UUID = UUID()) {
        self.id = id
        self.isActive = isActive
        self.nonVisibleProperty = nonVisibleProperty
    }
}

Note that I got rid of someString since its only purpose was to make the Equatable workaround work.

The cell view needs to be updated to use Item as an observed object:

struct StateDrivenCell: View {
    @ObservedObject var item: Item

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            HStack {
                VStack(alignment: .leading) {
                    Text("identifier:").bold()
                    Text(item.id.uuidString.split(separator: "-").first!)
                }
                Spacer()
            }

            HStack {
                VStack(alignment: .leading) {
                    Text("active state:").bold()
                    Text("is active: \(item.isActive ? "✅ yes" : "❌ no")")
                }
                Spacer()
            }

        }.padding()
    }
}

Now that Item can be observed by our view, we need to change the implementation of activateNextItem() in the DataSource:

func activateNextItem() {
    guard case .loaded(let items) = state else {
        return
    }

    defer {
        if isMutatingHiddenProperty {
            for item in items {
                item.nonVisibleProperty = UUID()
            }
        }
    }

    guard let oldIndex = activeIndex, oldIndex + 1 < items.endIndex else {
        activeIndex = 0
        items[activeIndex!].isActive = true
        return
    }

    activeIndex = oldIndex + 1

    items[oldIndex].isActive = false
    items[activeIndex!].isActive = true
}

Instead of updating the state property on DataSource every time this method is called, I just mutate the items I want to mutate directly.

Running the sample app with Instruments again yields the following result:

A screenshot of Instruments that shows 2 cells get re-evaluated and the list itself is not evaluated

As you can see, only two cell bodies get evaluated now. That's the cell that's no longer active, and the newly activated cell. The StateDrivenView itself is no longer evaluated every second.

I'm sure you can imagine that this is the desired situation to be in. We don't want to re-evaluate and redraw our entire list when all we really want to do is re-evaluate one or two cells.

The lesson to draw from this optimization section is that you should always aim to make your data source scope as small as possible. Triggering view updates from high up in your view hierarchy to update something that's all the way at the bottom is not very efficient, because the bodies of all views in between will need to be evaluated and redrawn in the process.

Conclusions

In this post you learned a lot about how and when SwiftUI decides to redraw your views. You learned that if the model for a view contains properties that changed, SwiftUI will re-evaluate the view's body. This is true even if the changed properties aren't used in your view. More interestingly, you saw that SwiftUI can compare your models even if they're not Equatable.

Next, I showed you that adding an Equatable conformance to your model can influence how SwiftUI decides whether or not your view's body needs to be re-evaluated. There's one caveat though: your Equatable conformance may be ignored when your model object is a "plain old data" object, in which case SwiftUI appears to compare the stored fields directly.

After that, you saw that SwiftUI will automatically take all of a view's stored properties into account when it decides whether or not that view's body needs re-evaluation. Computed properties are ignored. You also saw that instead of conforming your model to Equatable, you can conform your views to Equatable, and as far as I can tell, the same caveat mentioned earlier applies.

Lastly, you saw that in order to keep tight control over your views and when they get redrawn, it's best to keep your data sources small and focused. Instead of having a global state that contains a lot of structs, it might be better to have your models be ObservableObjects that can be observed at a more granular level. This can, for example, prevent your list's body from being evaluated, and it works around the extra redraws that were covered in the first half of this post entirely.

I'd like to stress one last time that it's not guaranteed that SwiftUI will continue working the way it does; this post is an exercise in trying to unravel some of SwiftUI's mysteries, like how its diffing works. Investigating all of this was a lot of fun, and if you have any additions, corrections, or suggestions for this post I'd love to add them; please send them to me on Twitter.

Understanding Swift’s AsyncSequence

The biggest features in Swift 5.5 all revolve around its new and improved concurrency features. There are actors, async/await, and more. With these features, folks are wondering whether async/await will eventually replace Combine.

While I overall do not think that async/await can or will replace Combine on its own, Swift 5.5 comes with some concurrency features that provide very similar functionality to Combine.

If you're curious about my thoughts on Combine and async/await specifically, I still believe that what I wrote about this topic earlier is true. Async/await will be a great tool for work that has a clearly defined start and end with a single output, while Combine is more useful for observing state and responding to it.

In this post, I would like to take a look at a Swift Concurrency feature that provides a very Combine-like functionality because it allows us to asynchronously receive and use values. We'll take a look at how an async sequence is used, and when you might want to choose an async sequence over Combine (and vice versa).

Using an AsyncSequence

The best way to explain how Swift's AsyncSequence works is to show you how it can be used. Luckily, Apple has added a very useful extension to URL that allows us to asynchronously read lines from a URL. This can be incredibly useful when your server streams data as it becomes available instead of waiting for all data to be ready before it begins sending an HTTP body. Alternatively, your server's response format might allow you to begin parsing and decoding its body line by line. An example of this would be a server that returns a csv file where every line in the file represents a row in the returned dataset.

Iterating over an async sequence like the one provided by URL looks as follows:

let url = URL(string: "https://www.donnywals.com")!
for try await line in url.lines {
    print(line)
}

This code only works when you're in an asynchronous context; that's no different from any other asynchronous code. The main difference is in how the execution of the code above works.
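If you're starting from synchronous code, one way to create such a context is to wrap the loop in a Task (a sketch; because the loop can throw, this becomes a Task whose failure type is Error, and a thrown error simply ends the task):

```swift
import Foundation

let url = URL(string: "https://www.donnywals.com")!

// Task {} hops into an asynchronous context from synchronous code.
Task {
    for try await line in url.lines {
        print(line) // runs once per line the server delivers
    }
}
```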

A simple network call with async/await would look as follows:

let (data, response) = try await URLSession.shared.data(from: url)

The main difference here is that URLSession.shared.data(from:) only returns a single result. This means that the URL is loaded asynchronously, and you get the entire response and the HTTP body back at once.

When you iterate over an AsyncSequence like the one provided through URL's lines property, you are potentially awaiting many things. In this case, you await each line that's returned by the server.

In other words, the for loop executes each time a new line, or a new item becomes available.

The example of loading a URL line-by-line is not something you'll encounter often in your own code. However, it could be a useful tool if your server's responses are formatted in a way that would allow you to parse the response one line at a time.
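As a sketch of that scenario (the endpoint and the column layout here are made up), each line could be decoded into a row the moment it arrives. This would run inside an asynchronous context:

```swift
import Foundation

struct ScoreRow {
    let name: String
    let score: Int
}

// Hypothetical endpoint that streams csv rows like "donny,42"
let csvURL = URL(string: "https://example.com/scores.csv")!

for try await line in csvURL.lines {
    let columns = line.split(separator: ",")

    // Skip malformed lines rather than failing the whole stream
    guard columns.count == 2, let score = Int(columns[1]) else { continue }

    let row = ScoreRow(name: String(columns[0]), score: score)
    print(row)
}
```

The benefit is that you can start showing or processing rows before the server has finished sending the response.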

A cool feature of AsyncSequence is that it behaves a lot like a regular Sequence in terms of what you can do with it. For example, you can transform the items in an AsyncSequence using a map:

let url = URL(string: "https://www.donnywals.com")!
let sequence = url.lines.map { string in
    return string.count
}

for try await line in sequence {
    print(line)
}

Even though this example is very simple and naive, it shows how you can map over an AsyncSequence.

Note that AsyncSequence is not similar to TaskGroup. A TaskGroup runs multiple tasks that each produce a single result. An AsyncSequence on the other hand is more useful to wrap a single task that produces multiple results.

However, if you're familiar with TaskGroup, you'll know that you can obtain the results of the tasks in a group by looping over it. In an earlier post I wrote about TaskGroup, I showed the following example:

func fetchFavorites(user: User) async -> [Movie] {
    // fetch Ids for favorites from a remote source
    let ids = await getFavoriteIds(for: user)

    // load all favorites concurrently
    return await withTaskGroup(of: Movie.self) { group in
        var movies = [Movie]()
        movies.reserveCapacity(ids.count)

        // adding tasks to the group and fetching movies
        for id in ids {
            group.addTask {
                return await self.getMovie(withId: id)
            }
        }

        // grab movies as their tasks complete, and append them to the `movies` array
        for await movie in group {
            movies.append(movie)
        }

        return movies
    }
}

Note how the last couple of lines await each movie in the group. That's because TaskGroup itself conforms to AsyncSequence. This means that we can iterate over the group to obtain results from the group as they become available.

In my post on TaskGroup, I explain how a task that can throw an error can cause all tasks in the group to be cancelled if the error is thrown out of the task group. This means that you can still catch and handle errors inside of your task group to prevent your group from failing. When you're working with AsyncSequence, this is slightly different.

AsyncSequence and errors

Whenever an AsyncSequence throws an error, your for loop will stop iterating and you'll receive no further values. Wrapping the entire loop in a do {} catch {} block doesn't work; that would just prevent the enclosing task from rethrowing the error, but the loop still stops.

This is part of the contract of how AsyncSequence works. A sequence ends either when its iterator returns nil to signal the end of the sequence, or when it throws an error.

Note that a sequence producing optional values, like String?, can exist, and a nil value in such a sequence wouldn't end the stream, because the iterator would produce Optional.some(nil). The reason for this is that an item of type String? was found in the sequence (hence the Optional.some), and that item's value happened to be nil. It's only when the iterator doesn't find a value and returns nil (or Optional.none) that the stream actually ends.
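Here's a minimal sketch of that behavior using AsyncStream (the stream and its values are made up for illustration); the nil in the middle is just another element, and only finish() ends iteration:

```swift
// A made-up stream of String? values; a nil *element* is wrapped in
// Optional.some by the iterator, so it does not end the sequence.
let stream = AsyncStream<String?> { continuation in
    continuation.yield("first")
    continuation.yield(nil)   // Optional.some(nil): iteration continues
    continuation.yield("third")
    continuation.finish()     // only this ends the sequence
}

var received = [String?]()
for await value in stream {
    received.append(value)
}
// received holds three elements, including the nil in the middle
```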

In the beginning of this post I mentioned Combine, and how AsyncSequence provides some similar features to what we're used to in Combine. Let's take a closer look at similarities and differences between Combine's publishers and AsyncSequence.

AsyncSequence and Combine

The most obvious similarity between Combine and AsyncSequence is that both can produce values over time asynchronously. Furthermore, they both allow us to transform values using functions like map and flatMap. In other words, we can use functional programming to transform values. The similarities don't stop there: when we look at how thrown errors are handled, both Combine and AsyncSequence end the stream of values whenever an error is thrown.

To sum things up, these are the similarities between Combine and AsyncSequence:

  • Both allow us to asynchronously handle values that are produced over time.
  • Both allow us to manipulate the produced values with functions like map, flatMap, and more.
  • Both end their stream of values when an error occurs.

When you look at this list you might think that AsyncSequence clearly replaces Combine.

In reality, Combine allows us to easily do things that we can't do with AsyncSequence. For example, we can't debounce values with AsyncSequence. We also can't have one asynchronous iterator produce values for multiple for loops, because iterators are destructive: if you loop over the same iterator twice, you should expect the second loop to receive no values at all.

I'm sure there are ways to work around this but we don't have built-in support at this time.
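To illustrate the destructive nature of iterators, here's a small sketch using AsyncStream; the second loop over the same stream finishes immediately because the underlying values were already consumed:

```swift
// A simple stream of three numbers
let numbers = AsyncStream<Int> { continuation in
    for i in 1...3 {
        continuation.yield(i)
    }
    continuation.finish()
}

var firstPass = [Int]()
for await number in numbers {
    firstPass.append(number)
}

var secondPass = [Int]()
for await number in numbers {
    secondPass.append(number) // never runs; the stream was already consumed
}
```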

Furthermore, at this time we can't observe an object's state with an AsyncSequence, which, in my opinion, is where Combine's value is the biggest. Again, I'm sure you could code up something that leverages KVO to build state observation, but it's not built in at this time.

This is most obvious when looking at an ObservableObject that's used with SwiftUI:

class MyViewModel: ObservableObject {
  @Published var currentValue = 0
}

SwiftUI can observe this view model's objectWillChange publisher to be notified of changes to any of the ObservableObject's @Published properties. This is really powerful, and we currently can't do this with AsyncSequence. Furthermore, we can use Combine to take a publisher's output, transform it, and assign it to an @Published property with the assign(to:) operator. If you want to learn more about this, take a look at this post I wrote where I use the assign(to:) operator.
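As a quick illustration of assign(to:), here's a hypothetical view model (the names are made up for this example) that derives one @Published property from another; assign(to:) manages the subscription's lifetime internally, so no AnyCancellable is needed:

```swift
import Combine

class CounterViewModel: ObservableObject {
    @Published var count = 0
    @Published var label = ""

    init() {
        // Transform every count into a display string and
        // republish it through the label property
        $count
            .map { "Count is \($0)" }
            .assign(to: &$label)
    }
}

let viewModel = CounterViewModel()
viewModel.count = 42
// viewModel.label is now "Count is 42"
```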

Two other useful features we have in Combine are CurrentValueSubject and PassthroughSubject. While AsyncSequence itself isn't equivalent to Subject in Combine, we can achieve similar functionality with AsyncStream which I plan to write a post about soon.
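To give a rough idea of what such a workaround could look like, here's a sketch of a PassthroughSubject-like shape built on AsyncStream. The EventBroadcaster name is made up for this example, and unlike a real Combine subject it only supports a single consumer:

```swift
struct EventBroadcaster<Element> {
    let stream: AsyncStream<Element>
    private let continuation: AsyncStream<Element>.Continuation

    init() {
        var continuation: AsyncStream<Element>.Continuation!
        // The build closure runs immediately, so continuation is
        // guaranteed to be set before we read it below
        self.stream = AsyncStream { continuation = $0 }
        self.continuation = continuation
    }

    func send(_ value: Element) {
        continuation.yield(value)
    }

    func finish() {
        continuation.finish()
    }
}

let broadcaster = EventBroadcaster<Int>()
broadcaster.send(1)
broadcaster.send(2)
broadcaster.finish()

var values = [Int]()
for await value in broadcaster.stream {
    values.append(value)
}
// values is [1, 2]; the stream buffered the sends until we iterated
```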

The last thing I'd like to cover is the lifetime of an iterator versus that of a Combine subscription. When you subscribe to a Combine publisher you are given a cancellable that you must persist to tie the lifetime of your subscription to the owner of the cancellable. To learn more about cancellables in Combine, take a look at my post on AnyCancellable.

You can easily subscribe to a Combine publisher in a regular function in your code:

var cancellables = Set<AnyCancellable>()

func subscribeToPublishers() {
  viewModel.$numberOfLikes.sink { value in 
    // use new value
  }.store(in: &cancellables)

  viewModel.$currentUser.sink { value in 
    // use new value
  }.store(in: &cancellables)
}

The lifetime of these subscriptions is tied to the object that holds my set of cancellables. With AsyncSequence this lifecycle isn't as clear:

var entryTask: Task<Void, Never>?

deinit {
  entryTask?.cancel()
}

func subscribeToSequence() {
  entryTask = Task {
    for await entry in viewModel.fetchEntries {
      // use entry
    }
  }
}

We could do something like the above to cancel our sequence when the class that holds our task is deallocated, but this seems error prone, and I don't think it's as elegant as Combine's cancellable.

Summary

In this post, you learned about Swift's AsyncSequence and you've seen a little bit of how it can be used in your code. You learned about asynchronous for loops, and you saw that you can transform an AsyncSequence's output.

In my opinion, AsyncSequence is a very useful mechanism to obtain values over time from a process that has a beginning and end. For more open ended tasks like observing state on an object, I personally think that Combine is a better solution. At least for now it is, who knows what the future brings.

Using Swift’s async/await to build an image loader

Async/await will be the de facto way of doing asynchronous programming on iOS 15 and above. I've already written quite a bit about the new Swift Concurrency features, and there's still plenty to write about. In this post, I'm going to take a look at building an asynchronous image loader that has support for caching.

SwiftUI on iOS 15 already has a component that allows us to load images from the network but it doesn't support caching (other than what’s already offered by URLSession), and it only works with a URL rather than also accepting a URLRequest. The component will be fine for most of our use cases, but as an exercise, I'd like to explore what it takes to implement such a component ourselves. More specifically I’d like to explore what it’s like to build an image loader with Swift Concurrency.

We'll start by building the image loader object itself. After that, I'll show how you can build a simple SwiftUI view that uses the image loader to load images from the network (or a local cache if possible). We'll make it so that the loader works with both URL and URLRequest to allow for maximum configurability.

Note that the point of this post is not to show you a perfect image caching solution. The point is to demonstrate how you'd build an ImageLoader object that will check whether an image is available locally and only uses the network if the requested image isn't available locally.

Designing the image loader API

The public API for our image loader will be pretty simple. It'll be just two methods:

  1. public func fetch(_ url: URL) async throws -> UIImage
  2. public func fetch(_ urlRequest: URLRequest) async throws -> UIImage

The image loader will keep track of in-flight requests and already loaded images. It'll reuse the image or the task that's loading the image whenever possible. For this reason, we'll want to make the image loader an actor. If you're not familiar with actors, take a look at this post I published to brush up on Swift Concurrency's actors.

While the public API is relatively simple, tracking in-progress fetches and loading images from disk when possible will require a little bit more effort.

Defining the ImageLoader actor

We'll work our way towards a fully featured loader one step at a time. Let's start by defining the skeleton for the ImageLoader actor and take it from there.

actor ImageLoader {
    private var images: [URLRequest: LoaderStatus] = [:]

    public func fetch(_ url: URL) async throws -> UIImage {
        let request = URLRequest(url: url)
        return try await fetch(request)
    }

    public func fetch(_ urlRequest: URLRequest) async throws -> UIImage {
        // fetch image by URLRequest
    }

    private enum LoaderStatus {
        case inProgress(Task<UIImage, Error>)
        case fetched(UIImage)
    }
}

In this code snippet I actually did a little bit more than just define a skeleton. For example, I defined a private enum called LoaderStatus. This enum is used to keep track of which images we're loading from the network, and which images are available immediately from memory. I also went ahead and implemented the fetch(_:) method that takes a URL. To keep things simple, it just constructs a URLRequest with no additional configuration and calls the overload of fetch(_:) that takes a URLRequest.

Now that we have a skeleton ready to go, we can start implementing the fetch(_:) method. There are essentially three different scenarios that we can run into. Interestingly enough, these three scenarios are quite similar to what I wrote in an earlier Swift Concurrency related post that covered refreshing authentication tokens.

The scenarios can be roughly defined as follows:

  1. fetch(_:) has already been called for this URLRequest, so we can either return the loaded image or await the task that's loading it.
  2. We can load the image from disk and store it in memory.
  3. We need to load the image from the network and store it in memory and on disk.

I'll show you the implementation for fetch(_:) one step at a time. Note that the code won't compile until we've finished the implementation.

First, we'll want to check the images dictionary to see if we can reuse an existing task or grab the image directly from the dictionary:

public func fetch(_ urlRequest: URLRequest) async throws -> UIImage {
    if let status = images[urlRequest] {
        switch status {
        case .fetched(let image):
            return image
        case .inProgress(let task):
            return try await task.value
        }
    }

    // we'll need to implement a bit more before this code compiles
}

The code above shouldn't look too surprising. We can simply check the dictionary like we would normally. Since ImageLoader is an actor, it will ensure that accessing this dictionary is done in a thread safe way (don't forget to refer back to my post on actors if you're not familiar with them yet).

If we find an image, we return it. If we encounter an in-progress task, we await the task's value to obtain the requested image without creating a new (duplicate) task.
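Reusing the in-progress task works because a Task computes its value once; every await of value after that returns the already-computed result. Here's a tiny sketch of that behavior:

```swift
let task = Task<Int, Never> {
    print("running the task body") // this prints only once
    return 42
}

let first = await task.value
let second = await task.value // reuses the computed result; no second run

// first and second are both 42
```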

The next step is to check whether the image exists on disk, to avoid going to the network when we don't have to:

public func fetch(_ urlRequest: URLRequest) async throws -> UIImage {
    // ... code from the previous snippet

    if let image = try self.imageFromFileSystem(for: urlRequest) {
        images[urlRequest] = .fetched(image)
        return image
    }

    // we'll need to implement a bit more before this code compiles
}

This code calls out to a private method called imageFromFileSystem. I haven't shown you this method yet; I'll show you its implementation soon. First, I want to briefly cover what this code snippet does. It attempts to fetch the requested image from the filesystem. This is done synchronously, and when an image is found we store it in the images dictionary so that the next caller of fetch(_:) receives the image from memory rather than the filesystem.

And again, this is all done in a thread safe manner because our ImageLoader is an actor.

As promised, here's what imageFromFileSystem looks like. It's fairly straightforward:

private func imageFromFileSystem(for urlRequest: URLRequest) throws -> UIImage? {
    guard let url = fileName(for: urlRequest) else {
        assertionFailure("Unable to generate a local path for \(urlRequest)")
        return nil
    }

    // A missing file just means the image isn't cached yet, so we return
    // nil instead of throwing to let fetch(_:) fall through to the network
    guard let data = try? Data(contentsOf: url) else {
        return nil
    }

    return UIImage(data: data)
}

private func fileName(for urlRequest: URLRequest) -> URL? {
    // Percent-encode everything that isn't alphanumeric so the URL maps to
    // a single flat file name; a character set that allows "/" would
    // produce a name containing path separators
    guard let fileName = urlRequest.url?.absoluteString.addingPercentEncoding(withAllowedCharacters: .alphanumerics),
          let applicationSupport = FileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask).first else {
              return nil
          }

    return applicationSupport.appendingPathComponent(fileName)
}

The third and last situation we might encounter is one where the image needs to be retrieved from the network. Let's see what this looks like:

public func fetch(_ urlRequest: URLRequest) async throws -> UIImage {
    // ... code from the previous snippets

    let task: Task<UIImage, Error> = Task {
        let (imageData, _) = try await URLSession.shared.data(for: urlRequest)
        let image = UIImage(data: imageData)!
        try self.persistImage(image, for: urlRequest)
        return image
    }

    images[urlRequest] = .inProgress(task)

    let image = try await task.value

    images[urlRequest] = .fetched(image)

    return image
}

private func persistImage(_ image: UIImage, for urlRequest: URLRequest) throws {
    guard let url = fileName(for: urlRequest),
          let data = image.jpegData(compressionQuality: 0.8) else {
        assertionFailure("Unable to generate a local path for \(urlRequest)")
        return
    }

    // Application Support isn't guaranteed to exist yet, so create the
    // directory if needed before writing
    try FileManager.default.createDirectory(at: url.deletingLastPathComponent(),
                                            withIntermediateDirectories: true)
    try data.write(to: url)
}

This last addition to fetch(_:) creates a new Task instance to fetch image data from the network. When the data is successfully retrieved, it's converted to an instance of UIImage. This image is then persisted to disk using the persistImage(_:for:) method that I included in this snippet.

After creating the task, I update the images dictionary so it contains the newly created task. This will allow other callers of fetch(_:) to reuse this task. Next, I await the task's value and I update the images dictionary so it contains the fetched image. Lastly, I return the image.

You might be wondering why I need to add the in-progress task to the images dictionary before awaiting it.

The reason is that while fetch(_:) is suspended awaiting the networking task's value, other callers of fetch(_:) get time to run. This means that while we're awaiting the task's value, someone else might call fetch(_:) and read the images dictionary. If the in-progress task isn't in the dictionary at that time, we would kick off a second fetch. By updating the images dictionary first, we make sure that subsequent callers reuse the in-progress task.

At this point, our image loader is complete. Pretty sweet, right? I'm always delightfully surprised by how simple actors make complicated flows that would otherwise require careful synchronization to correctly handle concurrent access.

Here's what the final implementation for the fetch(_:) method looks like:

public func fetch(_ urlRequest: URLRequest) async throws -> UIImage {
    if let status = images[urlRequest] {
        switch status {
        case .fetched(let image):
            return image
        case .inProgress(let task):
            return try await task.value
        }
    }

    if let image = try self.imageFromFileSystem(for: urlRequest) {
        images[urlRequest] = .fetched(image)
        return image
    }

    let task: Task<UIImage, Error> = Task {
        let (imageData, _) = try await URLSession.shared.data(for: urlRequest)
        let image = UIImage(data: imageData)!
        try self.persistImage(image, for: urlRequest)
        return image
    }

    images[urlRequest] = .inProgress(task)

    let image = try await task.value

    images[urlRequest] = .fetched(image)

    return image
}

Next up, using it in a SwiftUI view to create our own version of AsyncImage.

Building our custom SwiftUI async image view

The custom SwiftUI view that we'll create in this section is mostly intended as a proof of concept. I've tested it in a few scenarios but not thoroughly enough to say with confidence that this would be a better async image than the built-in AsyncImage. However, I'm pretty sure that this is an implementation that should work fine in many situations.

To provide our custom image view with an instance of the ImageLoader, I'll use SwiftUI's environment. To do this, we'll need to add a custom value to the EnvironmentValues object:

struct ImageLoaderKey: EnvironmentKey {
    static let defaultValue = ImageLoader()
}

extension EnvironmentValues {
    var imageLoader: ImageLoader {
        get { self[ImageLoaderKey.self] }
        set { self[ImageLoaderKey.self] = newValue }
    }
}

This code adds an instance of ImageLoader to the SwiftUI environment, allowing us to easily access it from within our custom view.

Our SwiftUI view will be initialized with a URL or a URLRequest. To keep things simple, we'll always use a URLRequest internally.

Here's what the SwiftUI view's implementation looks like:

struct RemoteImage: View {
    private let source: URLRequest
    @State private var image: UIImage?

    @Environment(\.imageLoader) private var imageLoader

    init(source: URL) {
        self.init(source: URLRequest(url: source))
    }

    init(source: URLRequest) {
        self.source = source
    }

    var body: some View {
        Group {
            if let image = image {
                Image(uiImage: image)
            } else {
                Rectangle()
                    .background(Color.red)
            }
        }
        .task {
            await loadImage(at: source)
        }
    }

    func loadImage(at source: URLRequest) async {
        do {
            image = try await imageLoader.fetch(source)
        } catch {
            print(error)
        }
    }
}

When we instantiate the view, we provide it with a URL or a URLRequest. When the view is first rendered, image will be nil, so we just render a placeholder rectangle. I didn't give it any size; that's up to the user of RemoteImage.

The SwiftUI view has a task modifier applied. This modifier allows us to run asynchronous work when the view appears. In this case, we use the task to ask the image loader for an image. When the image is loaded, we update the @State property image, which triggers a redraw of the view.

This SwiftUI view is pretty simple and it doesn't handle things like animations or updating the image later. Some nice additions could be to add the ability to use a placeholder image, or to make the source property non-private and use an onChange modifier to kick off a new task using the Task initializer to load a new image.

I'll leave these features to be implemented by you. The point of this simple view was merely to show you how this custom image loader can be used in a SwiftUI context; not to show you how to build a fantastic fully-featured SwiftUI image view replacement.

In Summary

In this post we covered a lot of ground. I mean, a lot. You saw how you can build an ImageLoader that gracefully handles concurrent calls by making it an actor. You saw how we can keep track of both in progress fetches as well as already fetched images using a dictionary. I showed you a very simple implementation of a file system cache as well. This allows us to cache images in memory, and load from the filesystem if needed. Lastly, you saw how we can implement logic to load our image from the network if needed.

You learned that while an asynchronous function that's defined on an actor is suspended, the actor's state can be read and written by others. This means that we needed to assign our image loading task to our dictionary before awaiting the task's result, so that subsequent callers would reuse the in-progress task.

After that, I showed you how you can inject the custom image loader we've built into SwiftUI's environment, and how it can be used to build a very simple custom asynchronous image view.

All in all, you've learned a lot in this post. And the best part is, in my opinion, that while the underlying logic and thought process is quite complex, Swift Concurrency allows us to express this logic in a sensible and readable way which is really awesome.

What exactly is a Combine AnyCancellable?

If you've worked with Combine in your applications, you'll know what it means when I tell you that you should always retain your cancellables. Cancellables are an important part of working with Combine, similar to how disposables are an important part of working with RxSwift. Interestingly, Swift Concurrency's AsyncSequence operates without an equivalent of Combine's cancellables (which can result in leaks when iterating tasks aren't cancelled properly). That said, in this post we'll only focus on Combine.

For example, you might have built a publisher that wraps CLLocationManagerDelegate and exposes the user's current location with a currentLocation publisher that's a CurrentValueSubject<CLLocation, Never>. Subscribing to this publisher would look a bit like this:

class ViewModel {
    let locationProvider: LocationProvider
    var cancellables = Set<AnyCancellable>()

    init(locationProvider: LocationProvider) {
        self.locationProvider = locationProvider
        locationProvider.currentLocation.sink { newLocation in
            // use newLocation
        }.store(in: &cancellables)
    }
}

For something that's so key to working with Combine, it kind of seems like cancellables are just something we deal with without really questioning it. That's why in this post, I'd like to take a closer look at what a cancellable is, and more specifically, at what the enigmatic AnyCancellable that's returned by both sink and assign(to:on:) is exactly.

Understanding the purpose of cancellables in Combine

Cancellables fulfill an important role in Combine's subscription lifecycle. According to Apple, the Cancellable protocol is the following:

A protocol indicating that an activity or action supports cancellation.

Ok. That's not very useful. I mean, if supporting cancellation is all we want to do, why do we need to retain our cancellables?

If we look at the detailed description for Cancellable, you'll find that it says the following:

Calling cancel() frees up any allocated resources. It also stops side effects such as timers, network access, or disk I/O.

This still isn't great, but at least it's something. We know that an object that implements Cancellable has a cancel method that we can call to stop any in progress work. And more importantly, we know that we can expect any allocated resources to be freed up. That's really good to know.
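One quick way to see cancellation in action is with a PassthroughSubject; once cancel() is called on the AnyCancellable returned by sink, values sent through the subject no longer reach the closure:

```swift
import Combine

let subject = PassthroughSubject<Int, Never>()
var received = [Int]()

let cancellable = subject.sink { value in
    received.append(value)
}

subject.send(1)
subject.send(2)

cancellable.cancel() // tears the subscription down immediately

subject.send(3) // never delivered; no completion event is sent either
// received is [1, 2]
```

Note that the sink closure receives no signal that cancellation happened; the values simply stop arriving.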

What this doesn't really tell us is why we need to retain our cancellables in Combine. Based on the information that Apple provides there's nothing that even hints towards the need to retain cancellables.

Let's take a look at the documentation for AnyCancellable next. Maybe a Cancellable and AnyCancellable aren't quite the same even though we'd expect AnyCancellable to be nothing more than a type-erased Cancellable based on the way Apple chose to name it.

The short description explains the following:

A type-erasing cancellable object that executes a provided closure when canceled.

Ok. That's interesting. So rather than being "just" a type-erased object that conforms to Cancellable, an AnyCancellable can be initialized with a closure that actually does something when it's cancelled. When we subscribe to a publisher we don't create our own AnyCancellable though, so we'll need to dig a little deeper.

There's one sentence in the AnyCancellable documentation that tells us exactly why we need to retain cancellables. It's the very last sentence in the discussion, and it reads as follows:

An AnyCancellable instance automatically calls cancel() when deinitialized.

So what exactly does this tell us?

Whenever an AnyCancellable is deallocated, it will call cancel() on itself. This will run the provided closure that I mentioned earlier. It's safe to assume that this closure will ensure that any resources associated with our subscription are torn down. After all, that's what the cancel() method is supposed to do according to the Cancellable protocol.
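You can observe this behavior directly by creating an AnyCancellable yourself with a cleanup closure and letting it deallocate:

```swift
import Combine

var didRunCleanup = false

var token: AnyCancellable? = AnyCancellable {
    didRunCleanup = true
}

// The closure hasn't run yet; the cancellable is still alive
token = nil // deallocation calls cancel(), which runs the closure
// didRunCleanup is now true
```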

Based on this, we can deduce that the purpose of cancellables, or rather of AnyCancellable, in Combine is to tie the lifecycle of a Combine subscription to something other than the subscription completing.

When we retain a cancellable in an instance of a view model, view controller, or any other object, the lifecycle of that subscription becomes connected to that of the owner (the retaining object) itself. Whenever the owner of the cancellable is deallocated, the subscription is torn down and all resources are freed up immediately.

Note that this might not be quite intuitive when you think of that original description I quoted from the Cancellable documentation:

A protocol indicating that an activity or action supports cancellation.

Cancelling a subscription by calling cancel() on an AnyCancellable is not a graceful operation. This is already hinted at because the documentation for Cancellable mentions that "any allocated resources" will be freed up. You need to interpret this broadly.

You won't just cancel an in flight network call and be notified about it in a receiveCompletion closure. Instead, the entire subscription is torn down immediately. You will not be informed of this, and you will not be able to react to this in your receiveCompletion closure.

To sum up the purpose of cancellables in Combine, they are used to tie the lifecycle of a subscription to the object that retains the cancellable that we receive when we subscribe to a publisher.

This description might lead you to think that an AnyCancellable is a wrapper for a subscription. Unfortunately, that's not quite accurate. It's also not flat out wrong, but there's a bit of nuance here; Apple chose the name AnyCancellable instead of Subscription on purpose.

What's inside an AnyCancellable exactly?

If an AnyCancellable isn't a subscription, then what is it? What's inside of an AnyCancellable?

The answer is complicated...

When I first learned Combine I was lucky enough to run into an Apple employee at a conference. We got talking about Combine, and I explained that I was working on a Combine book. I started firing off a few questions to validate my understanding of Combine and I was very lucky to get an answer or two.

One of my questions was "So is an AnyCancellable a subscription then?" and the answer was short and simple "No. It's an AnyCancellable".

You might think that's unhelpful, and I would fully understand. However, the answer is fully correct as I learned in our conversation and it makes Apple's intent with AnyCancellable perfectly clear.

Combine intentionally does not specify what's inside of AnyCancellable because we simply don't need to know exactly what is wrapped and how. All we need to know is that an AnyCancellable conforms to the Cancellable protocol, and that when its cancel() method is called, all resources retained by whatever the AnyCancellable wraps are released.

In practice, we know that an AnyCancellable will most likely wrap an object that conforms to Subscription and possibly also one that conforms to Subscriber. One of the two might even have a reference to a Publisher object.

We know this because we know that these three objects are always involved when you subscribe to a publisher. I've outlined this in more detail in this post as well as my Combine book.

This is really a long-winded way of me trying to tell you that we don't know what's inside an AnyCancellable, and it doesn't matter. You just need to remember that when an AnyCancellable is deallocated it will run its cancellation closure which will tear down anything it retains. This includes tearing down your subscription to a publisher.

If you're interested in learning about Swift Concurrency's AsyncSequence, and how it compares to publishers in Combine, I highly recommend that you start by looking at this post.

In Summary

In this post you learned about a key aspect of Combine: the Cancellable. I explained what the Cancellable protocol is, and from there I moved on to explain what an AnyCancellable is.

You learned that subscribing to a publisher with sink or assign(to:on:) will return an AnyCancellable that will tear down your subscription whenever the AnyCancellable is deallocated. This makes sure that your subscription to a publisher is deallocated when the object that retains your AnyCancellable is deallocated. This prevents your subscriptions from being deallocated immediately when the scope where they're created exits.

Lastly, I explained that we don't know what exactly is inside of the AnyCancellable objects that we retain for our subscriptions. While we can be pretty certain that an AnyCancellable must somehow retain a subscription, we shouldn't refer to it as a wrapper for a subscription because that would be inaccurate.

Hopefully this post gave you some extra insights into something that everybody that works with Combine has to deal with even though there's not a ton of information out there on AnyCancellable specifically.

Building a token refresh flow with async/await and Swift Concurrency

One of my favorite concurrency problems to solve is building concurrency-proof token refresh flows. Refreshing authentication tokens is something that a lot of us deal with regularly, and doing it correctly can be a pretty challenging task. Especially when you want to make sure you only issue a single token refresh request even if multiple network calls encounter the need to refresh a token.

Furthermore, you want to make sure that you automatically retry a request that failed due to a token expiration after you've obtained a new (valid) authentication token.

I wrote about a flow that does this before, except that post covered token refreshes with Combine rather than async/await.

In this post, we'll build the exact same flow, except it'll use Swift Concurrency rather than Combine.

Understanding the flow

Before I dive into the implementation details, I want to outline the requirements of the token refresh flow that we'll build. The following chart outlines the flow of the network object that I want to build in this post:

A chart that describes the flow of making an authenticated network call

Whenever a network request is made, we ask an AuthManager object for a valid token. If a valid token was obtained, we can proceed with the network call. If no valid token was obtained we should present a login screen. When the request itself succeeds, we're all good and we'll return the result of the request. If the request fails due to a token error, we'll attempt to refresh the token. If the refresh succeeds, we'll retry the original request. If we couldn't refresh the token, an error is thrown. When the request is retried and it fails again we'll also throw an error even if the error is related to the token. Clearly something is wrong and it doesn't make sense to refresh and retry endlessly.

The AuthManager itself is pro-active about how it deals with tokens as shown in the following diagram:

A graph that depicts the flow of refreshing a token

When the AuthManager is asked for a valid token, we'll check if a token exists locally. If not, we'll throw an error. If it does exist, we check if the token is valid. If it isn't, a refresh is attempted so we can obtain a valid token. If this succeeds the valid token is returned. In cases where the token refresh fails we'll throw an error so the user can authenticate again.

This flow is complex enough as it is, but when we add the requirement that we should only have one request in progress at any given time, things can get a little hairy.

Luckily, Swift's concurrency features are incredibly helpful when building a flow like this.

We'll implement the AuthManager object first, and after that I'll show you how it can be used in the Network object.

Note that all of this is somewhat simplified from how you might structure things in the real world. For example, you should always store tokens in the keychain, and your objects are probably a lot more complex than the ones I'm working with in this post.

None of that changes the flow and principles of what I intend to describe, hence why I chose to go with a simplified representation because it allows you to focus on the relevant parts for this post.

Implementing the AuthManager

Because we want to make sure that our AuthManager handles concurrent calls to validToken() in such a way that we only have one refresh request in flight at any time, we should make it an actor. Actors ensure that their internal state is always accessed in a serial fashion rather than concurrently. This means that we can keep track of a currently in-flight token refresh call and check whether one exists safely as long as the manager is an actor.

If you want to learn more about Swift's actors and how they are used, I recommend you take a look at my post on actors before moving on with the implementation of AuthManager.
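To see why an actor is such a good fit here, consider a tiny standalone sketch. This Counter is hypothetical and not part of the post's code, but it demonstrates the guarantee we're relying on: even when its state is mutated from many concurrent tasks, the actor serializes all access, so nothing is lost or raced.

```swift
actor Counter {
    private var value = 0

    // Even when called from many concurrent tasks, the actor guarantees
    // that these mutations happen one at a time.
    func increment() -> Int {
        value += 1
        return value
    }

    var current: Int { value }
}
```

If you fire off a thousand concurrent increment() calls from a task group, current will reliably be 1,000 afterwards; with a plain class the same code would be a data race. This is exactly the property that lets AuthManager track its in-flight refresh task safely.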

Now that we know we're going to make AuthManager an actor, and we already know that it needs a validToken() and a refreshToken() method, we can implement a starting point for the manager as follows:

actor AuthManager {
    private var currentToken: Token?
    private var refreshTask: Task<Token, Error>?

    func validToken() async throws -> Token {        

    }

    func refreshToken() async throws -> Token {

    }
}

This skeleton shouldn't be too surprising. Note that I'm storing the token as an instance variable on AuthManager. Do not do this in your own implementation. You should store the token in the user's Keychain, and read it from there when needed. I'm only storing it as an instance variable for convenience, not because it's good practice (because it's not).

Before we move on, I want to show you the error I might throw from within the AuthManager:

enum AuthError: Error {
    case missingToken
    case invalidToken
}
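The snippets in this post also rely on a Token type with a validUntil date, an id, a value for the Authorization header, and an isValid check. A minimal sketch could look like this; the exact fields and the way value is derived are assumptions, since a real token would come from your server:

```swift
import Foundation

struct Token {
    let validUntil: Date
    let id: UUID

    // The raw string that ends up in the Authorization header.
    // Using the id as a stand-in here; a real token comes from your server.
    var value: String { id.uuidString }

    // A token counts as valid while the device clock is before validUntil.
    // You could subtract a threshold here to refresh slightly early.
    var isValid: Bool {
        Date() < validUntil
    }
}
```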

The validToken() implementation is probably the simplest implementation in this post, so let's look at that first:

func validToken() async throws -> Token {
    if let handle = refreshTask {
        return try await handle.value
    }

    guard let token = currentToken else {
        throw AuthError.missingToken
    }

    if token.isValid {
        return token
    }

    return try await refreshToken()
}

In this method, I cover four scenarios in the following order:

  1. If we're currently refreshing a token, await the value for our refresh task to make sure we return the refreshed token.
  2. We're not refreshing a token, and we don't have a persisted token. The user should log in. Note that you'd normally replace currentToken with reading the current token from the user's keychain.
  3. We found a token, and we can reasonably assume the token is valid because we haven't reached the expiration threshold yet.
  4. None of the above applies so we'll need to refresh the token.

I didn't define a network nor a keychain property in my skeleton because we won't be using them for the purposes of this post, but I can't stress enough that tokens should always be stored in the user's keychain and nowhere else.

Let's start building out the refreshToken() method next. We'll do this in two steps. First, we'll handle the case where refreshToken() is called concurrently multiple times:

func refreshToken() async throws -> Token {
    if let refreshTask = refreshTask {
        return try await refreshTask.value
    }

    // initiate a refresh...
}

Because AuthManager is an actor, this first step is relatively simple. Normally, you might need a serial queue or a lock to make sure concurrent calls to refreshToken() don't cause data races on refreshTask. Actors don't have this issue because they make sure that their state is always accessed in a safe, serial way.

We can return the result of our existing refresh task by awaiting and returning the task handle's value. We can await this value in multiple places which means that all concurrent calls to refreshToken() can (and will) await the same refresh task.
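The fact that a single task's value can be awaited from more than one place is easy to see in a tiny standalone sketch (hypothetical, and assuming a context that supports top-level await):

```swift
// The closure runs exactly once; every await on `value` sees its result.
let task = Task { () -> Int in
    // stand-in for expensive work, like a token refresh request
    40 + 2
}

let first = await task.value
let second = await task.value
// both first and second are 42, but the work only ran once
```

This is precisely what makes the deduplication work: every concurrent caller of refreshToken() awaits the value of the same stored task, and the underlying refresh request is performed a single time.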

The next step is to initiate a new token refresh and store the refresh task on AuthManager. We'll also return the result of our new refresh task in this step:

func refreshToken() async throws -> Token {
    if let refreshTask = refreshTask {
        return try await refreshTask.value
    }

    let task = Task { () throws -> Token in
        defer { refreshTask = nil }

        // Normally you'd make a network call here. Could look like this:
        // return await networking.refreshToken(withRefreshToken: token.refreshToken)

        // I'm just generating a dummy token
        let tokenExpiresAt = Date().addingTimeInterval(10)
        let newToken = Token(validUntil: tokenExpiresAt, id: UUID())
        currentToken = newToken

        return newToken
    }

    self.refreshTask = task

    return try await task.value
}

In this code, I create a new Task instance so that we can store it in our AuthManager. This task can throw if refreshing the token fails, and it will update the current token when the refresh succeeds. I'm using defer to make sure that I always set my refreshTask to nil before completing the task. Note that I don't need to await access to refreshTask because this newly created Task automatically inherits the actor context it was created in, so it runs on the AuthManager actor.

I assign the newly created task to my refreshTask property, and I await and return its value like I explained before showing you the code.

Even though our flow is relatively complex, it wasn't very complicated to implement this in a concurrency-proof way thanks to the way actors work in Swift.

If actors are still somewhat of a mystery to you after reading this, take a look at my post on actors to learn more.

As a next step, let's see how we can build the networking part of this flow by creating a Networking object that uses the AuthManager to obtain and refresh a valid access token and retry requests if needed.

Using the AuthManager in a Networking object

Now that we have a means of obtaining a valid token, we can use the AuthManager to add authorization to our network calls. Let's look at a skeleton of the Networking object so we have a nice starting point for the implementation:

class Networking {

    let authManager: AuthManager

    init(authManager: AuthManager) {
        self.authManager = authManager
    }

    func loadAuthorized<T: Decodable>(_ url: URL) async throws -> T {
        // we'll make the request here
    }

    private func authorizedRequest(from url: URL) async throws -> URLRequest {
        var urlRequest = URLRequest(url: url)
        let token = try await authManager.validToken()
        urlRequest.setValue("Bearer \(token.value)", forHTTPHeaderField: "Authorization")
        return urlRequest
    }
}

The code in this snippet is fairly straightforward. The Networking object depends on an AuthManager. I added a convenient function to create an authorized URLRequest from within the Networking class. We'll use this method in loadAuthorized to fetch data from an endpoint that requires authorization and we'll decode the fetched data into decodable model T. This method uses generics so we can use it to fetch decoded data from any URL that requires authorization.

If you're not familiar with generics, you can read more about them here and here.

Let's implement the happy path for our loadAuthorized method next:

func loadAuthorized<T: Decodable>(_ url: URL) async throws -> T {
    let request = try await authorizedRequest(from: url)
    let (data, _) = try await URLSession.shared.data(for: request)

    let decoder = JSONDecoder()
    let response = try decoder.decode(T.self, from: data)

    return response
}

This code should, again, be fairly straightforward. First, I create an authorized URLRequest for the URL we need to load by calling authorizedRequest(from:). As you saw earlier, this method asks the AuthManager for a valid token and configures an authorization header that contains the access token. We prefix the call to this method with try await because this operation can fail, and it could suspend us when a token refresh needs to be performed proactively.

If we can't authorize a request, this means that AuthManager's validToken method threw an error. This, in turn, means that we either don't have an access token at all, or we couldn't refresh our expired token. If this happens it makes sense for loadAuthorized to forward this error to its callers so they can present a login screen or handle the missing token in another appropriate way.

Next, I perform the URLRequest. A URLRequest can fail for various reasons, so this call needs to be prefixed with try await as well. Any network related errors that get thrown from this line are forwarded to our caller.

Once we've obtained Data from the URLRequest we decode it into the appropriate type T and we return this decoded data to the caller.

Before we move on, please take a moment to appreciate how much more straightforward this code looks with async/await when compared to a traditional callback based approach or even a reactive approach that you might implement with RxSwift or Combine.

As it stands, we've implemented about half of the request flow. I've made the implemented steps green in the image below:

A graph of the networking flow with the happy path that's currently implemented highlighted in green.

To implement the last couple of steps we need to make a small change to the signature of loadAuthorized so it can take an allowRetry argument that we'll use to limit our number of retries to a single retry. We'll also need to check whether the response we received from URLSession is an HTTP 401: Unauthorized response that would indicate we ran into an authorization error so we can explicitly refresh our token one time and retry the original request.

While this shouldn't be a common situation, it's entirely possible that we believe our persisted token is valid because the device clock hasn't reached the token's expiration date yet, while the token is, in fact, expired. One reason could be that all tokens in the back-end were manually invalidated for security reasons. It's also possible that your user's device clock was changed (either by the user or by traveling through timezones), which led to our calculations being incorrect.

In any event, we'll want to attempt a token refresh and retry the request once if this happens.

Here's what the updated loadAuthorized method looks like:

func loadAuthorized<T: Decodable>(_ url: URL, allowRetry: Bool = true) async throws -> T {
    let request = try await authorizedRequest(from: url)
    let (data, urlResponse) = try await URLSession.shared.data(for: request)

    // check the http status code and refresh + retry if we received 401 Unauthorized
    if let httpResponse = urlResponse as? HTTPURLResponse, httpResponse.statusCode == 401 {
        if allowRetry {
            _ = try await authManager.refreshToken()
            return try await loadAuthorized(url, allowRetry: false)
        }

        throw AuthError.invalidToken
    }

    let decoder = JSONDecoder()
    let response = try decoder.decode(T.self, from: data)

    return response
}

These couple of lines of code that I added implement the last part of our flow. If we couldn't make the request due to a token error we'll refresh the token explicitly and we retry the request once. If we're not allowed to retry the request I throw an invalidToken error to signal that we've attempted to make a request with a token that we believe is valid yet we received an HTTP 401: Unauthorized.

Of course, this is a somewhat simplified approach. You might want to take the HTTP body for any non-200 response and decode it into an Error object that you throw from your loadAuthorized method instead of doing what I did here. The core principle of implementing a mechanism that will proactively refresh your auth tokens and authorize your network requests shouldn't change no matter how you decide to deal with specific status codes.
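For example, one hedged way to surface server-side errors is to decode the body of any non-2xx response into a Decodable error type and throw that instead. The APIError shape and decodeOrThrow helper below are hypothetical; model them after whatever your backend actually returns:

```swift
import Foundation

// A hypothetical server error payload; adjust to your backend's format.
struct APIError: Decodable, Error {
    let code: Int
    let message: String
}

func decodeOrThrow<T: Decodable>(_ data: Data, statusCode: Int) throws -> T {
    let decoder = JSONDecoder()

    guard (200..<300).contains(statusCode) else {
        // Prefer the server's own error description when it decodes,
        // otherwise fall back to a generic error.
        if let apiError = try? decoder.decode(APIError.self, from: data) {
            throw apiError
        }
        throw URLError(.badServerResponse)
    }

    return try decoder.decode(T.self, from: data)
}
```

You could call this helper from loadAuthorized instead of decoding the happy path directly, which centralizes how non-200 responses are turned into thrown errors.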

All in all, Swift Concurrency's actors combined with async/await allowed us to build a complex asynchronous flow by writing code that reads like imperative code, while there's actually a ton of asynchronicity and even synchronization happening under the hood. Pretty cool, right?

In Summary

In this post, you saw how I implemented one of my favorite networking and concurrency related examples with async/await and actors. First, you learned what the flow we wanted to implement looks like. Next, I showed you how we can leverage Swift's actors to build a concurrency proof token provider that I called an AuthManager. No matter how many token related methods we call concurrently on this object, it will always make sure that we only have one refresh call in progress at any given time.

After that, you saw how you can leverage this AuthManager in a Networking object to authorize network calls and even explicitly refresh a token and retry the original request whenever we encounter an unexpected token related error.

Flows like these are a really nice way to experiment with, and learn about, Swift Concurrency features because they can be applied in the real world immediately, and they force you to mix and match different concurrency features so you'll immediately see how things fit together in the real world.

Using Swift Concurrency’s task group for tasks with varying output

Earlier, I published a post on Swift Concurrency's task groups. If you haven't read that post yet, and you're not familiar with task groups, I recommend that you read that post first because I won't be explaining task groups in this post. Instead, you will learn about a technique that you can use to work around a limitation of task groups.

Task groups can run a number of child tasks where every child task in the task group produces the same output. This is a hard requirement of the withTaskGroup function. This means that task groups are not the right tool for every job. Sometimes it makes more sense to use async let instead.

In the post where I introduced task groups, I used an example where I needed to fetch a Movie object based on an array of UUIDs. Now let's imagine that our requirements aren't as clear, and we write a function where we receive an array of Descriptor objects that informs us about the type of objects we need to load.

These objects could be either a Movie, or a TVShow. Here's what the Descriptor looks like:

enum MediaType {
    case movie, tvShow
}

struct Descriptor {
    let id: UUID
    let type: MediaType
}

The implementations of Movie and TVShow aren't really relevant in this context. All you need to know is that they can both be loaded from a remote source based on a UUID.
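If you want to follow along, minimal stand-ins for these types could look like this; the fields are assumptions, and only the id actually matters for the loading flow:

```swift
import Foundation

// Minimal stand-ins; real media objects would carry much more data.
struct Movie {
    let id: UUID
    let title: String
}

struct TVShow {
    let id: UUID
    let title: String
    let seasons: Int
}
```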

Now let's take a look at the skeleton function that we'll work with:

func fetchMedia(descriptors: [Descriptor]) async -> ???? {
    return await withTaskGroup(of: ????) { group in 
        for descriptor in descriptors {
            group.addTask {
                // do work and return something
            }
        }
    }
}

Notice that I used ???? instead of an actual type for the function's return type and for the type of the task group. We'll need to figure out what we want to return.

One approach would be to create a Media base class and have Movie and TVShow subclass it. That would work in this case, but it requires us to use classes where we might prefer structs, and it wouldn't work if the fetched objects weren't so similar.

Instead, we can define an enum and use that as our task output and return type instead. Let's call it a TaskResult:

enum TaskResult {
    case movie(Movie)
    case tvShow(TVShow)
}

Now we can switch on the Descriptor's type, fetch our object, and return a TaskResult where the fetched media is an associated type of our enum case:

func fetchMedia(descriptors: [Descriptor]) async -> [TaskResult] {
    return await withTaskGroup(of: TaskResult.self) { group in 
        for descriptor in descriptors {
            group.addTask {
                switch descriptor.type {
                    case .movie:
                        let movie = await self.fetchMovie(id: descriptor.id)
                        return TaskResult.movie(movie)
                    case .tvShow:
                        let tvShow = await self.fetchShow(id: descriptor.id)
                        return TaskResult.tvShow(tvShow)
                }
            }
        }

        var results = [TaskResult]()

        for await result in group {
            results.append(result)
        }

        return results
    }
}
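Once fetchMedia returns, a caller receives a mixed [TaskResult] and can switch over each case to recover the concrete types. Here's a small self-contained sketch of that; the Movie and TVShow stand-ins are assumptions:

```swift
import Foundation

// Minimal stand-ins for the media types used in this post.
struct Movie { let id: UUID }
struct TVShow { let id: UUID }

enum TaskResult {
    case movie(Movie)
    case tvShow(TVShow)
}

// Partition a mixed [TaskResult] back into strongly typed arrays.
func partition(_ results: [TaskResult]) -> (movies: [Movie], shows: [TVShow]) {
    var movies = [Movie]()
    var shows = [TVShow]()

    for result in results {
        switch result {
        case .movie(let movie):
            movies.append(movie)
        case .tvShow(let show):
            shows.append(show)
        }
    }

    return (movies, shows)
}
```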

The nice thing about this approach is that it's easy to scale it into as many types as you need without the need to subclass. That said, I wouldn't recommend this approach in all cases. For example, if you're building a flow similar to the one I show in my post on async let, task groups wouldn't make a lot of sense.

In Summary

Ideally, you only use task groups when all tasks in the group really produce the same output. However, I'm sure there are situations where you need to run an unknown number of tasks based on some input like an array where the tasks don't always produce the same output. In those cases it makes sense to apply the workaround that I've demonstrated in this post.

How to use async let in Swift?

In last week's post, I demonstrated how you can use a task group in Swift to concurrently run multiple tasks that produce the same output. This is useful when you're loading a bunch of images, or in any other case where you have a potentially undefined number of tasks to run, as long as you (somehow) make sure that every task in your group produces the same output.

Unfortunately, this isn't always a reasonable thing to do.

For example, you might already know that you only have a very limited, predetermined, number of tasks that you want to run. These tasks might not even produce the same output which could make matters more complicated.

In these scenarios, it makes sense to use Swift's async let syntax instead of a task group.

If you're not yet familiar with task groups, make sure you take a look at the following posts if you want to understand the complete picture of what we're covering in this post.

In this post, you will learn when it makes sense to use async let, how it's used, and you'll learn how async let fits into the bigger picture of an application that uses Swift Concurrency.

Knowing when to use async let

In my post on using a task group for multiple tasks with varying output, you saw how you could wrap the output from a task in a task group in a TaskResult enum that we defined ourselves. While this is convenient when we don't know how many tasks we might have to run exactly in a task group, you can imagine that this isn't always desirable.

For this exact reason, the Swift core team gave us a convenient tool to concurrently run a predetermined number of tasks and awaiting their results only when we actually need them.

This allows you to perform work as soon as possible, but not await it if you don't need it right away.

Let's look at an example.

Imagine that you're implementing a bootstrapping sequence for a movies app. When this sequence is kicked off, you want to do a bunch of stuff. For example:

  • Fetch movies from a server
  • Asynchronously fetch the current user
  • Load user's favorites
  • Load user's profile
  • Load user's movie tickets

Without async let, and without task groups, you might write something like this:

func bootstrapSequence() async throws {
    let movies = await loadMovies() // will cache movies as well
    if let user = await currentUser() {
        let favorites = try await updateFavorites(user: user)
        let profiles = await updateUserProfile(user: user)
        let tickets = await updateUserTickets(user: user)
    }

    // use properties or ignore their output as needed
}

This code will work fine, but there's a bit of an optimization problem here. The steps in our sequence are run serially rather than concurrently.

Notice that the movies and user tasks can run concurrently. They don't depend on each other in any way.

The other three tasks depend on both movies and user. Or rather, they depend on user but it would be nice if movies are loaded too.

I don't want to spend too much time on the details of what each of these functions do, but the rest of this post makes a lot more sense if I explain my intention behind them at least a little bit.

  • loadMovies() -> [Movie] loads a list of movies from a remote source and caches them locally.
  • currentUser() -> User? checks whether a user exists locally, or attempts to fetch the user from the server. The User object is a bare-bones container of user info.
  • updateFavorites(user:) -> [Movie] loads a list of movie ids that the user marked as favorite from the server, and associates them with Movie objects. If a Movie is not cached it will be fetched from a server.
  • updateUserProfile(user:) -> UserProfile fetches and caches the user's profile information from a server (this contains a lot more info than the object returned by currentUser()).
  • updateUserTickets(user:) -> [Ticket] updates the user's movie tickets in the local store. Tickets are associated with Movie objects from the local cache. If a specific movie doesn't exist locally it's fetched from the server.
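To make the rest of this section concrete, here are hypothetical signatures matching that list, with stubbed bodies instead of real networking and caching. Note that currentUser() returns an optional, which is why the sequence checks for a user before continuing:

```swift
import Foundation

// Hypothetical stand-in types; the real ones would carry much more data.
struct Movie { let id: UUID }
struct User { let id: UUID }
struct UserProfile { let userId: UUID }
struct Ticket { let id: UUID }

func loadMovies() async -> [Movie] {
    [] // would fetch movies from the network and cache them locally
}

func currentUser() async -> User? {
    User(id: UUID()) // would read from disk or fetch from the server
}

func updateFavorites(user: User) async throws -> [Movie] {
    [] // would load favorite ids and resolve them to Movie objects
}

func updateUserProfile(user: User) async -> UserProfile {
    UserProfile(userId: user.id) // would fetch and cache the full profile
}

func updateUserTickets(user: User) async -> [Ticket] {
    [] // would sync the user's tickets into the local store
}
```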

As you can see, each of these steps in the sequence does a bunch of stuff and we want to do as many of these things concurrently as possible.

This means that we can divide the sequence into two steps, or sections:

  • Load movies and current user object
  • Update favorites, profile, and tickets

With a task group this would be rather tedious because every task has a different result, and we'd need to split our group in two somehow, or we'd need to use two task groups. Not ideal, especially because we're not dealing with an indeterminate number of tasks.

Let's see how async let helps us solve this problem.

Using async let in your code

As I mentioned earlier, async let allows us to run tasks concurrently without suspending the calling context, so we only await their results when we actually need them.

The simplest usage of an async let is when you want to run a task as soon as a function starts, but you don't want to await it immediately. Let's take a look at an example of this before we go back to the more complex scenario I explained in the previous section.

Imagine that you're writing a function where you want to load some information from the network to update a local record, and while that happens you want to see if a local record exists so you know whether you'll need to create a new record. The network code will run asynchronously using async let so we can fetch the most up to date information from the server while checking our local store at the same time:

func fetchUserProfile(user: User) async -> UserProfile {
    // start fetching profile from server immediately
    async let remoteProfile = network.fetchUserProfile(user: user)

    // fetch (or create) and await local profile
    let localProfile = await localStore.fetchUserProfile(user: user)

    // update local profile with remote profile
    await localProfile.update(with: remoteProfile)
    await localStore.persist(localProfile)

    return localProfile
}

In this code, the network call is executed immediately and it will start running right away.

While this network call is executing, we'll attempt to load a user profile from the local cache which could take a little while too depending on what we're using to store the profile. The exact details of this aren't relevant for now.

Once we've obtained a local profile, we call await localProfile.update(with: remoteProfile). At this point, we want to wait for the profile that we loaded from the network and use it to update and persist the local version.

The network call might have already completed by the time we use await to wait for its result, but it could also still be in-flight. The nice part is that the network call runs concurrently with the rest of fetchUserProfile, and we don't suspend fetchUserProfile until we don't have any other choice. In other words, we were able to do two things concurrently in fetchUserProfile (perform the network call, and find the cached user profile) by using async let.

When you think about the flow I showed you in the previous section for example, we'll want to run loadMovies() and currentUser() concurrently, await their results, and then proceed with the next steps in our bootstrapping sequence.

Here's what this first part of the sequence would look like:

func bootstrapSequence() async throws {
    async let moviesTask = loadMovies()
    async let userTask = currentUser()

    let (_, currentUser) = await (moviesTask, userTask)

    // we'll implement the rest of the sequence soon
}

In this code, I create two tasks with the async let syntax. This essentially tells Swift to start running the function call that follows immediately, without awaiting its result. You can have multiple of these async let tasks running at the same time simply by defining them one after the other, like I did in the code snippet above.

Earlier, you saw that I needed to await remoteProfile to get the result of my async let remoteProfile. In this case, I want to await two tasks. I want to ignore the output of loadMovies() while assigning the output of currentUser() to a property that I can use later.

As you saw earlier, it's possible to use the output of an async let task inline by writing await before the expression that uses the task's output. For example, I could write the following to use the output of currentUser() without assigning the output to an intermediate property:

async let user = currentUser()
let tickets = await updateUserTickets(user: user)

The code above would await the value of user (which would be the output of currentUser()), and then run and await updateUserTickets(user:). This is very similar to how try works in Swift where you only need to write a single try to apply it to an entire expression even if it contains multiple throwing statements. For clarity, I'm going to keep using the approach you saw earlier where I explicitly awaited the result of a property called userTask.

Once the user and movies are loaded, we can concurrently run the second part of the sequence:

func bootstrapSequence() async throws {
    async let moviesTask = loadMovies()
    async let userTask = currentUser()

    let (_, currentUser) = await (moviesTask, userTask)

    guard let user = currentUser else {
        return
    }

    async let favoritesTask = updateFavorites(user: user)
    async let profilesTask = updateUserProfile(user: user)
    async let ticketsTask = updateUserTickets(user: user)

    let (favorites, profile, tickets) = try await (favoritesTask, profilesTask, ticketsTask)

    // use the loaded data as needed
}

Notice how this follows the exact same pattern that you saw before. I define some async let properties to create a bunch of tasks that run concurrently as soon as they are created, and I use await to wait for their results.

Note that this time around, I had to write try await. That's because updateFavorites can throw. I applied the try to the entire expression because I think it reads a bit nicer and it makes it easier to change other tasks to be throwing later. It would have been equally valid for me to write the following:

let (favorites, profile, tickets) = await (try favoritesTask, profilesTask, ticketsTask)

Which, in my opinion, just doesn't read as nicely as the version I showed you earlier.

In terms of usage, there really isn't much else to async let. You define tasks with async let, they begin doing their work as soon as they are created, and you must use await whenever you want to use an async let task's value.

I love how easy to use this API is, and how it allows us to build relatively complex sequences and flows without a ton of effort. As an exercise, take a look at this post I wrote on running some tasks concurrently and waiting for all tasks to be completed using DispatchGroup. You'll see that it's not nearly as nice and convenient as Swift Concurrency's async let.

While async let is easy to use, there's a lot going on behind the scenes to make it work. And there are some important rules you should understand. Let's explore those next.

Understanding how async let works

When you create an async let, you spawn a new Swift Concurrency task behind the scenes. This task will run as a child of the task that's currently running (every async scope in Swift Concurrency is part of a task). This new task will inherit things like task local values, and it will run on the same actor as the one you spawned the task from.

It's important to understand this because it will help you reason about what's happening behind the scenes when you use async let since it's subtly different from calling an async function and using await to wait for the function's result.

When you normally await an async function's output, this is all done as part of the same task. Since async let will run concurrently with the function that you used it in, it'll be run in a new task. This means that, similar to how you spawn tasks in a task group, you spawn a new task every time you write async let.

With this in mind, it's important to think about what might happen when you spawn a task with async let without ever awaiting its result. For example:

func bootstrapSequence() async throws {
    async let moviesTask = loadMovies()
    async let userTask = currentUser()

    let (_, currentUser) = await (moviesTask, userTask)

    guard let user = currentUser else {
        return
    }

    async let favoritesTask = updateFavorites(user: user)
    async let profilesTask = updateUserProfile(user: user)
    async let ticketsTask = updateUserTickets(user: user)

    // we don't await any of the async let's above
}

Since we don't await the results of our async let tasks, the bootstrapSequence function will exit after the last async let task is started. When this happens, our tasks will go out of scope, and they get marked as cancelled which means that we should stop performing any work as soon as we can to respect Swift Concurrency's cooperative cancellation paradigm.

In other words, you should not use an async let as a means to run code asynchronously after your function has exited its scope.

The last thing I want to cover in this post is the restriction of applying async only to let properties.

You can't write async var to have an asynchronous variable. The reason is that the created property is bound to a task, and its value doesn't become available until it's awaited and the task produces a result. If you were able to write async var, this feature would become significantly more complex because of how the binding from task to property works.

In Summary

In this post you learned a lot about Swift Concurrency's async let. You learned that async let is a feature that helps you run unrelated asynchronous function calls concurrently as their own tasks. You learned that async let solves a problem that's similar to the one solved by task groups, except it doesn't have the limitation of only being applicable to tasks that produce the same output.

I showed you how you can use async let to build a complex loading sequence that runs in two steps. The first step performs two concurrent tasks, and the second step runs three concurrent tasks that depend on the first two. You saw that this was fairly trivial to implement with async let, awaiting results where needed.

Lastly, you gained some deeper insights into how async let works. You learned that an async let creates a child task of your current task under the hood, and you learned that this task is cancelled whenever the function it's created in goes out of scope. To avoid this, you should always await the results of your async let tasks.

Overall I think async let is an incredibly useful feature for scenarios where you want to run several tasks concurrently before doing something else. A bootstrapping process like you saw in this post is a good example of that.

Swift Concurrency’s TaskGroup explained

With Apple's overhaul of how concurrency will work in Swift 5.5 and newer, we need to learn a lot of things from scratch. While we might have used DispatchQueue.async or other mechanisms to kick off multiple asynchronous tasks in the past, we shouldn't use these older concurrency tools in Swift's new concurrency model.

Luckily, Swift Concurrency already comes with many features, which means that a new paradigm exists for a lot of our old use cases.

In this post, you will learn what Swift Concurrency's task groups are, and how you can use them to concurrently perform a lot of work.

Which problem does TaskGroup solve?

Before I show you how you can use a task group, I'd like to explain when a task group is most likely the correct tool for your job. Or rather, I'd like to explain the problem that task groups were designed to solve.

Consider the following example.

Let's say that you fetched a list of ids from your server. These ids represent the ids of movies that your user has marked as a favorite. By returning ids instead of full-blown movie objects, your user can save a lot of data, assuming that clients can (and will) cache movie objects locally. This allows you to either look up a movie in your local cache, or to fetch the movie from the server if needed.

The code to fetch these movie ids might look a bit like this:

func getFavoriteIds(for user: User) async -> [UUID] {
    return await network.fetchUserFavorites(for: user)
}

func fetchFavorites(user: User) async -> [Movie] {
    // fetch Ids for favorites from a remote source
    let ids = await getFavoriteIds(for: user)

    // perform work to obtain `[Movie]`
}

So far so good. If you're somewhat familiar with Swift Concurrency's async/await concept this code shouldn't look too scary.

Now that we have an array of UUID, we need to somehow convert this array to Movie objects. In this case, I don't care about the order of the ids and the resulting movies matching. And I don't want to fetch movies one by one because that might take a while.

I'd like to fetch as many movies at the same time as I possibly can.

This sentence above is essentially the key to knowing when we should use a task group.

In this case, I want to run a variable number of tasks concurrently, and every task produces the same type of output. This use case is exactly what task groups are good at. They allow you to spawn as many tasks as you want, and all of these tasks will run concurrently. One constraint is that every task must produce the same output. In this case, that's not a problem. We want to convert from UUID to Movie every time, which means that our task will always produce the same output.

Let's take a look at an example.

Using a TaskGroup in your code

Task groups can either be throwing or non-throwing. This might sound obvious, but the (non-)throwing nature of your task group has to be defined when you create it. In this case, I'm going to use a non-throwing task group. Let's see how a task group can be created.

Defining a task group

A task group can be created as follows:

await withTaskGroup(of: Movie.self) { group in

}

The withTaskGroup function is a global function in Swift that takes two arguments. The first argument specifies the type of result that your tasks produce. If your tasks don't have any output, you would write Void.self here since that would be the return type for each individual task. In this case, it's Movie.self because all tasks will produce a Movie instance.

If the tasks in a task group can throw errors, you should use withThrowingTaskGroup instead of withTaskGroup.

The second argument is a closure in which we'll schedule and handle all of our tasks. This closure receives an instance of TaskGroup<Output> as its only argument. The Output generic will correspond with your task output. So in this case the actual type would be TaskGroup<Movie>.

The withTaskGroup function is marked async, which means that we need to await its result. In this case, we don't return anything from the closure that we pass to withTaskGroup. If we did return something, the returned value would become the result of the call to withTaskGroup, and we could assign it to a property or return it from a function.

In this case, we'll want to return something from fetchFavorites. Here's what that looks like:

func fetchFavorites(user: User) async -> [Movie] {
    // fetch Ids for favorites from a remote source
    let ids = await getFavoriteIds(for: user)

    // load all movies concurrently
    return await withTaskGroup(of: Movie.self) { group in
        var movies = [Movie]()

        // obtain movies

        return movies
    }
}

While this code compiles just fine, it's not very useful. Let's add some tasks to our task group so we can fetch movies.

Adding tasks to a TaskGroup

The TaskGroup object that is passed to our closure is used to schedule tasks in the group, and also to obtain the results of these tasks if needed. Let's see how we can add tasks to the group first, and after that I'll show you how you can obtain the results of your tasks by iterating over the group's results.

To load movies, we'll call the following async function from a new task. This function would be defined alongside fetchFavorites and getFavoriteIds:

func getMovie(withId id: UUID) async -> Movie {
    return await network.fetchMovie(withId: id)
}

To call this function from within a new task in the task group, we need to call addTask on the TaskGroup as follows:

func fetchFavorites(user: User) async -> [Movie] {
    // fetch Ids for favorites from a remote source
    let ids = await getFavoriteIds(for: user)

    // load all movies concurrently
    return await withTaskGroup(of: Movie.self) { group in
        var movies = [Movie]()

        // adding tasks to the group and fetching movies
        for id in ids {
            group.addTask {
                return await self.getMovie(withId: id)
            }
        }

        return movies
    }
}

I added a for loop to the task group closure to iterate over the ids that were fetched. For every fetched id I call group.addTask and pass it a closure that contains my task. This closure is async, which means that we can await the result of some function call. In this case I want to await and return the result of self.getMovie. Note that I don't need to capture self weakly in the closure I pass to addTask. The Swift compiler guarantees that the task I create can never outlive the scope it's defined in (more on that later), which means that no retain cycles are created here.

Every task that's added to the task group with group.addTask must return a Movie instance because that's the task output type that we passed to withTaskGroup. As soon as a task is added to the task group, it begins running concurrently with any other tasks that I may have already added to the group.

You might notice that while I add a bunch of tasks to the group, I never actually await or return the output of my tasks. To do this, we need to iterate asynchronously over the task group and obtain the results of its tasks. The TaskGroup object conforms to AsyncSequence which means that we can iterate over it using for await as follows:

func fetchFavorites(user: User) async -> [Movie] {
    // fetch Ids for favorites from a remote source
    let ids = await getFavoriteIds(for: user)

    // load all favorites concurrently
    return await withTaskGroup(of: Movie.self) { group in
        var movies = [Movie]()
        movies.reserveCapacity(ids.count)

        // adding tasks to the group and fetching movies
        for id in ids {
            group.addTask {
                return await self.getMovie(withId: id)
            }
        }

        // grab movies as their tasks complete, and append them to the `movies` array
        for await movie in group {
            movies.append(movie)
        }

        return movies
    }
}

By using for await movie in group, the task group provides us with movies as soon as they're fetched. Note that the results are gathered in completion order. In other words, whichever movie is fully fetched first is returned first, regardless of the order in which we added the tasks to the group. For very small or quick tasks the completion order may happen to match the insertion order, but this is never guaranteed. This is why I mentioned earlier that I don't care about ordering.
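Here's a small self-contained sketch of that behavior, using sleeps of different lengths to simulate tasks that take varying amounts of time:

```swift
func completionOrder() async -> [Int] {
    await withTaskGroup(of: Int.self) { group in
        // The longest task is added first...
        for milliseconds in [30, 20, 10] {
            group.addTask {
                try? await Task.sleep(nanoseconds: UInt64(milliseconds) * 1_000_000)
                return milliseconds
            }
        }

        // ...but results arrive in completion order, so the shortest
        // sleep will most likely come out of the group first.
        var order = [Int]()
        for await value in group {
            order.append(value)
        }
        return order
    }
}
```

Running this will most likely produce [10, 20, 30], the reverse of the insertion order, but as explained above, this ordering is never guaranteed.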

Whenever a task completes, the group provides us with the task output, and we can append this output to the movies array. Once all tasks are completed and we have appended all output to the movies array, we return this array from our task group closure.

This means that we can return the result of awaiting withTaskGroup from fetchFavorites since the output is an array of movies.

Note that we don't return from the closure that's provided to withTaskGroup until all tasks have completed due to the asynchronous for loop. This loop doesn't complete until all tasks in the group complete, and all output has been provided to us. Of course, we could exit our loop early with a break just like you can in a normal loop.
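As a sketch of that early exit: breaking out of the loop doesn't stop the remaining tasks by itself, because the group still implicitly awaits them before the closure can return. Calling cancelAll() after the break lets cooperative tasks stop early when you no longer need their results. (The firstThreeResults function below is illustrative, not from the original post.)

```swift
func firstThreeResults() async -> [Int] {
    await withTaskGroup(of: Int.self) { group in
        for number in 1...10 {
            group.addTask { number * 2 }
        }

        var results = [Int]()
        for await value in group {
            results.append(value)
            if results.count == 3 {
                // Stop collecting results early.
                break
            }
        }

        // The group still implicitly awaits the remaining tasks, but
        // cancelling them lets cooperative tasks finish sooner.
        group.cancelAll()
        return results
    }
}
```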

The example you've seen so far follows a pretty happy path. Let's consider two additional situations, in which we'll have to deal with errors thrown by the tasks that were added to the group:

  1. One of the tasks throws an error
  2. The task group is cancelled

TaskGroups and throwing tasks

I already mentioned that a task group for tasks that can throw should be created with withThrowingTaskGroup. We'd need to do this if the getMovie function you saw earlier could throw an error. If it could, it would look like this:

func getMovie(withId id: UUID) async throws -> Movie {
    return try await network.fetchMovie(withId: id)
}

The code to fetch a user's favorite movies would in turn be updated as follows:

func fetchFavorites(user: User) async throws -> [Movie] {
    // fetch ids for favorites from a remote source
    let ids = await getFavoriteIds(for: user)

    // load all favorites concurrently
    return try await withThrowingTaskGroup(of: Movie.self) { group in
        var movies = [Movie]()
        movies.reserveCapacity(ids.count)

        // adding tasks to the group and fetching movies
        for id in ids {
            group.addTask {
                return try await self.getMovie(withId: id)
            }
        }

        // grab movies as their tasks complete, and append them to the `movies` array
        for try await movie in group {
            movies.append(movie)
        }

        return movies
    }
}

The changes we needed to make to handle throwing tasks are relatively small. All I had to do was to add try where appropriate, and use withThrowingTaskGroup instead of withTaskGroup. However, there's a huge difference here in terms of what might happen.

In this example, I'm fetching movies by calling try await self.getMovie(withId: id). This means that the getMovie operation might throw an error. When it does, it's not a big deal per se. A task can fail without impacting any of the other tasks in the task group. This means that failing to load one of the movies does not necessarily impact the other tasks in my task group. However, because I iterate over the fetched movies using for try await movie in group, a single failure does impact the other tasks in my group.

As we iterate over the group's results, a failed task also counts as a result. However, when the group's next() function is called internally to obtain the next result, it will throw the error that was thrown by the failing task so we can inspect and handle it if needed. In a for loop, I can only write try await which means that when the group throws an error from its next() function, this error is thrown out from the withThrowingTaskGroup closure since we don't handle (or ignore) it.

When an error is thrown from the closure provided to withThrowingTaskGroup, the task group will fail with that error. Before this error is thrown, the task group will mark any unfinished tasks as cancelled to allow them to stop executing work as soon as possible in order to comply with Swift Concurrency's cooperative cancellation. Once all tasks have completed (either by finishing their work or throwing an error), the task group will throw its error and complete.

In the example we're working with here, we can prevent a single failure from cancelling all in-progress work. The solution is to make sure the closure I pass to addTask doesn't throw. I could handle the errors thrown by getMovie and return some kind of default movie, which probably isn't the best solution, or I could return nil. If returning nil is reasonable for your use case, you could write try? await self.getMovie(withId: id) to ignore the error and return nil instead of handling the error in a do {} catch {} block.
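Here's a self-contained sketch of the try? approach. The Movie struct, FetchError, and the hard-coded failing id are stand-ins for the article's actual models, used here so the example can run on its own:

```swift
struct Movie: Equatable {
    let id: Int
}

enum FetchError: Error {
    case notFound
}

func getMovie(withId id: Int) async throws -> Movie {
    try await Task.sleep(nanoseconds: 1_000_000)
    // Simulate a single failing fetch.
    guard id != 3 else { throw FetchError.notFound }
    return Movie(id: id)
}

func fetchMovies(ids: [Int]) async -> [Movie] {
    // The task output type is now optional, so a non-throwing group suffices.
    await withTaskGroup(of: Movie?.self) { group in
        for id in ids {
            group.addTask {
                // try? turns a per-task failure into nil instead of an
                // error that would cancel the rest of the group.
                try? await getMovie(withId: id)
            }
        }

        var movies = [Movie]()
        for await movie in group {
            if let movie = movie {
                movies.append(movie)
            }
        }
        return movies
    }
}
```

With this version, the failing fetch simply drops out of the results while all other tasks complete normally.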

Depending on how the tasks you add to your task group were written, cancelling one of your tasks might have a similar effect. In Swift Concurrency, it's perfectly acceptable for a task to throw an error when it's cancelled. This means that if your task throws a cancellation error and that error ends up being thrown out of your withThrowingTaskGroup closure, it propagates through your task group in exactly the same way as any other error.

The bottom line here is that individual tasks throwing errors do not impact the task group and its enclosing task per se. It's only when this error ends up being thrown from your withThrowingTaskGroup closure that all unfinished tasks get cancelled, and the original error is thrown from the task group's task once all child tasks have finished. All this talk about errors and completing the task group's task segues nicely into the last topic I want to cover; the lifecycle of your task group's tasks.

Understanding the lifecycle of tasks in a TaskGroup

When you add tasks in a task group, you enter into a very important (explicit) contract. Swift's concurrency mechanisms are structured (pun intended) around the concept of Structured Concurrency. Async lets as well as task group child tasks both adhere to this idea.

The core idea behind structured concurrency is that a task cannot outlive the scope of its parent task. Similarly, no TaskGroup child task may outlive the scope of the withTaskGroup closure. This is achieved by implicitly awaiting all tasks' completion before returning from the closure you pass to withTaskGroup.

When you know that tasks in a group cannot outlive the group they belong to, the error throwing / cancellation strategy I outlined above makes a lot of sense.

Once the task that manages the group throws an error, the scope of the task group has completed. If we still had running tasks at that point, they would outlive their group, which isn't allowed. For that reason, the task group first waits for all of its tasks to either complete or throw a cancellation error before throwing its own error and exiting its scope.

Thinking back to the code you've seen in this post, I awaited the results of all child tasks explicitly by iterating over the group. This means that by the time we hit return movies, all tasks are already done and no extra waiting is needed.

However, we don't have to await the output of our tasks in all cases. Let's say we have a bunch of tasks that don't return anything. We'd simply write the following:

print("Before task group")
await withTaskGroup(of: Void.self) { group in
    for item in list {
        group.addTask {
            await doSomething()
            print("Task completed")
        }
    }
    print("For loop completed")
}
print("After task group")

Like I explained earlier, the task group's child tasks are always implicitly awaited before exiting the closure in which they were created, in order to comply with the requirements of structured concurrency. This means that even if we don't await the result of our tasks, the tasks are guaranteed to be completed when we exit the withTaskGroup closure.

I've added some prints to the code snippet above to help you see this principle in action. When I run that code, the output looks a bit like this:

Before task group
For loop completed
Task completed
Task completed
Task completed
After task group

The reason for that is the implicit awaiting of tasks in a group I just mentioned. The task group is not allowed to complete before all of the tasks it manages have also completed.

In Summary

In this post you learned a lot. You learned that task groups are a tool to concurrently perform an arbitrary number of tasks that produce the same output. I showed you how you can write a basic task group to concurrently fetch an arbitrary number of movies based on their ids as an example. You learned that task groups will run as many tasks at once as possible, and that you can obtain the results of these tasks using an async for loop.

After that, I explained how errors and cancellation work within a task group. You learned that whenever a task throws an error you can either handle or ignore this error. You also saw that if you throw an error from your task group closure, this will cause all unfinished tasks in the group to be marked as cancelled, and you learned that the original error will be thrown from the task group once all tasks have completed.

Lastly, I explained how tasks within a task group cannot outlive the task group due to the guarantees made by Swift Concurrency, and that a task group will implicitly await all of its child tasks before completing to make sure none of its tasks are still running by the time the task group completes.

Huge thanks to Konrad for reviewing this post and providing some important corrections surrounding errors and cancellation.

Using UISheetPresentationController in SwiftUI 3

This post applies to the version of SwiftUI that shipped with iOS 15, also known as SwiftUI 3. To learn how you can present a bottom sheet on iOS 16 and newer, take a look at this post.

With iOS 15, Apple introduced the ability to easily implement a bottom sheet with UISheetPresentationController in UIKit. Unfortunately, Apple didn't extend this functionality to SwiftUI just yet (I'm hoping one of the iOS 15 betas adds this...) but luckily we can make use of UIHostingController and UIViewRepresentable to work around this limitation and use a bottom sheet in SwiftUI.

In this post, I will show you a very simple implementation that might not have everything you need. After I tweeted about this hacky little workaround, someone suggested this very nice GitHub repository from Adam Foot that works roughly the same but with a much nicer interface. This post's goal is not to show you the best possible implementation of this idea, the repository I linked does a good job of that. Instead, I'd like to explain the underlying ideas and principles that make this work.

The underlying idea

When I realized it wasn't possible to present a bottom sheet in SwiftUI with the new UISheetPresentationController I started wondering if there was some way around this. I know that there are some issues with presenting a CloudKit sharing controller from SwiftUI as well, and a popular workaround is to have a UIButton in your view that presents the sharing controller.

While not strictly needed to make the bottom sheet work (as shown by the repository linked in the intro), I figured I would follow a similar pattern. That way I would be able to create a UIViewController and present it on top of the view that the button is presented in. The nice thing about that over how Adam Foot implemented his bottom sheet is that we can use the button's window to present the popover. Doing this will ensure that our view is always presented in the correct window if your app supports multiple windows. The cost is that, unfortunately, our API will not feel very at home in SwiftUI.

I figured that's ok for this writeup. If you want to see an implementation with a nicer API, look at what Adam Foot did in his implementation. The purpose of this post is mostly to explain how and why this works rather than providing you with the absolute best drop-in version of a bottom sheet for SwiftUI.

Implementing the BottomSheetPresenter

As I mentioned, a useful method to present a UICloudSharingController in SwiftUI is to present a UIButton that will in turn present the sharing controller. The reason this is needed is because, for some reason, presenting the sharing controller directly does not work. I don't fully understand why, but that's way beyond the scope of this post (and maybe a good topic for another post once I figure it out).

We'll follow this pattern for the proof of concept we're building in this post because it'll allow me to present the bottom sheet on the current window rather than any window. The components involved will be a BottomSheetPresenter which is a UIViewRepresentable that shows my button, and a BottomSheetWrapperController that puts a SwiftUI view in a view controller that I'll present.

Let's implement the presenter first. I'll use the following skeleton:

struct BottomSheetPresenter<Content>: UIViewRepresentable where Content: View {
    let label: String
    let content: Content
    let detents: [UISheetPresentationController.Detent]

    init(_ label: String, detents: [UISheetPresentationController.Detent], @ViewBuilder content: () -> Content) {
        self.label = label
        self.content = content()
        self.detents = detents
    }

    func makeUIView(context: UIViewRepresentableContext<BottomSheetPresenter>) -> UIButton {
        let button = UIButton(type: .system)

        // configure button

        return button
    }

    func updateUIView(_ uiView: UIButton, context: Context) {
        // no updates
    }

    func makeCoordinator() -> Void {
        return ()
    }
}

The bottom sheet presenter initializer takes three arguments, a label for the button, the detents (steps) that we want to use in our UISheetPresentationController, and the content that should be shown in the presented view controller.

Note that I had to make my BottomSheetPresenter generic over Content so it can take a @ViewBuilder that generates a View for the presented view controller. We can't use View as the return type for the @ViewBuilder because View has a Self requirement which means it can only be used as a generic constraint.

Tip:
To learn more about generics, associated types, and generic constraints take a look at this post. For an introduction to generics you might want to read this post first.

The BottomSheetPresenter is a UIViewRepresentable struct which means that it can be used to present a UIKit view in a SwiftUI context.

The makeUIView method is used to create and configure our UIButton. We don't need any extra information so the makeCoordinator method returns Void, and the updateUIView method can remain empty because we're not going to update our view (we don't need to).

Let's fill in the makeUIView method:

func makeUIView(context: UIViewRepresentableContext<BottomSheetPresenter>) -> UIButton {
    let button = UIButton(type: .system)
    button.setTitle(label, for: .normal)
    button.addAction(UIAction { _ in
        let hostingController = UIHostingController(rootView: content)
        let viewController = BottomSheetWrapperController(detents: detents)

        viewController.addChild(hostingController)
        viewController.view.addSubview(hostingController.view)
        hostingController.view.pinToEdgesOf(viewController.view)
        hostingController.didMove(toParent: viewController)

        button.window?.rootViewController?.present(viewController, animated: true)
    }, for: .touchUpInside)

    return button
}

The implementation for makeUIView is pretty straightforward. We assign the button's title and add an action for touch up inside.

When the user taps this button, we create an instance of UIHostingController to present a SwiftUI view in a UIKit context, and we pass it the content that was created by the initializer's @ViewBuilder closure. After that, we create an instance of BottomSheetWrapperController. This view controller will receive the UIHostingController as its child view controller, and it's the view controller we'll present. We need this extra view controller so we can override its viewDidLoad and configure the detents for its presentationController (remember how we presented a bottom sheet in UIKit?).

The following lines of code add the hosting controller as a child of the wrapper controller, and I set up the constraints using a convenient method that I added as an extension to UIView. The pinToEdgesOf(_:) function I added in my UIView extension configures my view for autolayout and it pins all edges to the view that's passed as the argument.
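The post doesn't show the extension itself, but a minimal version of pinToEdgesOf(_:) would look something like this (the exact implementation in the original may differ):

```swift
import UIKit

extension UIView {
    /// Prepares the view for Auto Layout and pins all four of its
    /// edges to the given view's edges.
    func pinToEdgesOf(_ other: UIView) {
        translatesAutoresizingMaskIntoConstraints = false

        NSLayoutConstraint.activate([
            topAnchor.constraint(equalTo: other.topAnchor),
            leadingAnchor.constraint(equalTo: other.leadingAnchor),
            bottomAnchor.constraint(equalTo: other.bottomAnchor),
            trailingAnchor.constraint(equalTo: other.trailingAnchor)
        ])
    }
}
```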

Once all setup is done, I present my wrapper controller on the button's window. This will make sure that this implementation works well in applications that support multiple windows.

Lastly, I return the button so that it can be presented in my SwiftUI view.

Before we look at the SwiftUI view, let's look at the implementation for BottomSheetWrapperController.

Implementing the BottomSheetWrapperController

The implementation for the BottomSheetWrapperController class is pretty straightforward. It has a custom initializer so we can accept the array of detents from the BottomSheetPresenter, and in viewDidLoad we check if we're being presented by a UISheetPresentationController. If we are, we assign the detents and set the grabber to be visible.

Note that you might want to make the grabber's visibility configurable by making it an argument for the initializer and storing the preference as a property on the wrapper.

class BottomSheetWrapperController: UIViewController {
    let detents: [UISheetPresentationController.Detent]

    init(detents: [UISheetPresentationController.Detent]) {
        self.detents = detents
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("No Storyboards")
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        if let sheetController = self.presentationController as? UISheetPresentationController {
            sheetController.detents = detents
            sheetController.prefersGrabberVisible = true
        }
    }
}

I'm not going to go into the details of how this view controller works in this post. Please refer to the UIKit version of this post if you want to know more (it's very short).

Using the BottomSheetPresenter in SwiftUI

Now that we have everything set up, let's take a look at how the BottomSheetPresenter can be used in a SwiftUI view:

struct ContentView: View {
    var body: some View {
        BottomSheetPresenter("Tap me for a bottom sheet!!", detents: [.medium(), .large()]) {
            VStack {
                Text("This is a test")
                Text("Pretty cool, right")
            }
        }
    }
}

That doesn't look bad at all, right? We create an instance of BottomSheetPresenter, we assign it a label, pass the detents we want to use and we use regular SwiftUI syntax to build the contents of our bottom sheet.

I agree, it doesn't feel very at home and it would be nicer to configure the bottom sheet with a view modifier. This is exactly what Adam Foot implemented in his version of BottomSheet. The only downside to that version is that it grabs the first window it can find to present the sheet. This means that it wouldn't work well in an application with multiple windows. Other than that, I really like his custom SwiftUI modifier, and I would recommend you take a look at the implementation if you're curious.

You'll find that it's very similar to what you learned in this post, except it has a bunch more configuration that I didn't include during my exploration to see if I could get this bottom sheet to work.

Keep in mind, this post isn't intended to show you the ultimate way of achieving this. My goal is to help you see how I got to my version of using UISheetPresentationController in SwiftUI through experimentation, and applying what I know from presenting a UICloudSharingController in SwiftUI.

Presenting a bottom sheet in UIKit with UISheetPresentationController

We've seen bottom sheets in Apple's apps for a while now, and plenty of apps have followed this pattern. If you're not sure what I mean, it's the kind of sheet that takes up just a part of the screen and can be swiped upwards to take up the whole screen or downwards to be dismissed. Here's an example from Apple's Maps app:

To implement a sheet like this, we used to rely on third-party tools, or we needed to get creative and implement this pattern ourselves.

With iOS 15, Apple introduced UISheetPresentationController which allows us to implement bottom sheets with just a few lines of code.

When you present a view controller as shown below, you know that your presented view controller will be shown as a "card" on top of the presenting view controller:

present(targetViewController, animated: true)

By default, the user can already swipe the presented view controller down to dismiss it interactively.

A view controller that is presented with present(_:animated:) on iOS 15 will have a UISheetPresentationController set as its presentation controller automatically. This presentation controller is responsible for managing the transition between the presented and the presenting view controller, and it handles interaction like interactive dismissal.

It also handles the bottom sheet behavior that we want to implement. So that our view controller will first take up half the screen, and then can be expanded to full height and back down again.

To implement this, you use so-called detents. These detents are set on the UISheetPresentationController and determine how the view controller can be shown. We can choose between medium, large, or both. Using only a medium detent makes your presented view controller take up roughly half the height of the screen. A large detent is the default; it makes the presented view controller take up the full height of the screen. Using both allows the user to swipe between medium and large.

Here's how you set the UISheetPresentationController's detents:

override func viewDidLoad() {
    super.viewDidLoad()

    if let sheetController = self.presentationController as? UISheetPresentationController {
        sheetController.detents = [.medium(), .large()]
    }
}

You simply check if the view controller's presentation controller is an instance of UISheetPresentationController, and if it is you assign its detents.

Apple's Maps implementation shows a nice grabber that indicates to users that they can drag the bottom sheet up or down. This grabber isn't shown by default so you'll need to enable it manually if you want it to be shown:

sheetController.prefersGrabberVisible = true

This new feature is very nice, and I love how easy Apple has made it to implement a bottom sheet in iOS 15.

Unfortunately, we can't (easily) use this feature in SwiftUI. But if you're interested in a workaround... I have one for you right here.