What are Swift Concurrency’s task local values?

If you've been following along with Swift Concurrency in the past few weeks, you might have come across the term "task local values". Task local values are, like the name suggests, values that are scoped to a certain task. These values are only available within the context they're scoped to, and they are really only supposed to be used in a handful of use cases.

In this post, I will explain what task local values are, and more importantly, how and when they are useful. For a full rundown of task local values and their design I'd like to refer you to SE-0311: Task Local Values.

Understanding what task local values are

Task local values are a way to associate some state with a Swift Concurrency Task, or rather a specific context within a Task. We can create a scope for a task local value to live in, even if we're already in a task (or a child task). This doesn't quite explain what a task local value is, and to really understand this we need to zoom out a little bit. If we don't, this entire feature will be really hard to understand.

When you create a new Task in Swift Concurrency, either through Task.init (formerly async), or Task.detached (formerly detach), this task will have a priority property and an isCancelled property. We can read these values by obtaining and inspecting the current task:

withUnsafeCurrentTask { task in
    print(task?.isCancelled)
    print(task?.priority)
}

The withUnsafeCurrentTask function checks if the context we're currently in runs as part of a Task instance, and if it does, the "current" task (the task that we're part of) is provided to the closure. We can then read the isCancelled property to check if the current task is cancelled, allowing us to act accordingly.

You can imagine that writing this code everywhere would be tedious, so the Swift team provided a more convenient way to check if the current task is cancelled: Task.isCancelled. This static member on Task will obtain the current task for us, and it will return that task's cancellation status (or false if no current task exists). Here's what that static variable looks like:

extension Task where Success == Never, Failure == Never {
    static var isCancelled: Bool {
        return withUnsafeCurrentTask { task in
            return task?.isCancelled ?? false
        }
    }
}

This static isCancelled property is not quite the same as a task local value, but it's close enough to proceed with understanding what they are. Remember that Task.isCancelled is a regular static property that returns a different value depending on which task it's accessed from.

With task local values, we can achieve something similar, allowing us to associate metadata with a task. We can do this by annotating a static property with the @TaskLocal property wrapper. This property wrapper makes sure that the given static property's value is only assigned within the scope of a given task.

Let's see what this looks like:

enum Transaction {
    @TaskLocal static var id: UUID? = nil
}

This enum has a task local id that can be used to identify a transaction in our system. I'll explain what this can be used for later. I want to explain task locals a little bit more before I show you how to use them.

My task local value has a default value of nil. This default value is the value that I'll get when I try to read the transaction id from a task that does not explicitly have its Transaction.id set. Note that after I assign a default value to my id, I cannot change it by assigning to it directly:

Transaction.id = UUID() // Cannot assign to property: 'id' is a get-only property

To assign a task local value, we need to call a method on $id as follows:

await Transaction.$id.withValue(UUID()) {
    print(Transaction.id)
}

The withValue(_:operation:) method creates a scope in which Transaction.id has the provided value. This works very similarly to how Task.isCancelled is implemented: the value that's returned when accessing Transaction.id is determined by checking the context that we're currently in. If we're not in a context where the value was explicitly set, we receive the default value that we assigned in the declaration. In this case that would be nil.

The value that's assigned to Transaction.id when creating a scope is only valid during that scope.

You can temporarily override this value within the scope with a nested call to withValue(_:operation:):

Transaction.$id.withValue(UUID()) {
    print(Transaction.id) // original value

    Transaction.$id.withValue(UUID()) {
        print(Transaction.id) // new value
    }

    print(Transaction.id) // original value
}

Outside of the nested closure, Transaction.id returns its original value because the assigned value is scoped to the closure that you pass to withValue.

The way Swift Concurrency scopes this makes sure that you can't accidentally assign an expensive object to a task local value and forget to deallocate it when it's no longer needed. In other words, the scoping of withValue(_:operation:) makes sure that our task local value does not escape its scope.

If we start a new detached task from within a context created through withValue(_:operation:) this task will not inherit the task local values that were present in the context:

Transaction.$id.withValue(UUID()) {
    print(Transaction.id) // assigned value

    Task {
        print(Transaction.id) // assigned value
    }

    Task.detached {
        print(Transaction.id) // nil
    }
}

If you want task local values to be copied into a detached task you'll need to explicitly copy this value:

Transaction.$id.withValue(UUID()) {
    let transaction = Transaction.id
    Task.detached {
        print(transaction) // the task local UUID
    }
}

You can also use this copied value as a new task local for the detached task:

Transaction.$id.withValue(UUID()) {
    let transaction = Transaction.id
    Task.detached {
        Transaction.$id.withValue(transaction) {
            print(Transaction.id) // the task local UUID from the outer scope
        }
    }
}

Note that this allows multiple tasks to read a value concurrently. For this reason task local values have to be safe to use concurrently, which is enforced by the requirement that task local values are Sendable.
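As a quick sketch of what that requirement looks like in practice (the RequestContext type and Tracing enum below are hypothetical, not part of the examples above), a simple value type whose members are all Sendable satisfies it for free:

```swift
import Foundation

// Hypothetical example: a task local's value type must be Sendable.
// A value type made of Sendable members conforms without extra work.
struct RequestContext: Sendable {
    let id: UUID
    let startedAt: Date
}

enum Tracing {
    // The nil default is what reads outside any withValue scope return.
    @TaskLocal static var context: RequestContext? = nil
}
```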

Okay. I think at this point you know enough about task local values to have an idea of what they are and how they're used. In short, they provide a scope where a certain "global" value is available.

Let's see when these values can (and more importantly should) be used.

Understanding how task local values can be used

Having read the previous section, it might seem attractive to put shared state or information in a task local value. This would feel similar to SwiftUI's @Environment, which is intended for sharing state and dependencies in a view hierarchy.

Task local values are not intended to be used like this.

I feel like I should repeat this with different words.

Task local values should not be used to provide state that you depend on within the context of a task.

There are multiple reasons for this, and I reckon one of the most important ones is that it's extremely error prone to depend on state being set outside of a function. It's easy to forget to call withValue(_:operation:), and this could mean that you introduce unnoticed bugs in your application.

Another, possibly more important, reason not to rely on task local values for state you depend on is that looking up a task local value is more expensive than accessing a normal variable. The reason for this is that the runtime has to inspect the current task's context, potentially walking up through its parent contexts, before it can provide a value.

When you have an async function that depends on specific state to do its job, pass it to the function explicitly.
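To make that distinction concrete, here's a minimal sketch (the Profile type and greeting function are made up for illustration): required state travels as a parameter, so the dependency is visible in the signature instead of hiding in task context.

```swift
import Foundation

// Hypothetical sketch: `profile` is state the function depends on,
// so it's passed explicitly rather than smuggled in via a task local.
struct Profile {
    let name: String
}

func greeting(for profile: Profile) async -> String {
    // The dependency is visible in the signature; no hidden context needed.
    return "Hello, \(profile.name)!"
}
```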

So what are task local values for then?

Well, they are intended to associate specific metadata with a given task. This means that task local values will mostly be useful if you want to debug your code, or if you want to be able to group a bunch of asynchronously produced logs together through something like a transaction ID.

Imagine that you have some object that can fetch user data. This object depends on a data provider, and the data provider relies on an Authorizer and Networking object to make authorized network requests.

We might have many concurrent calls in progress, and when you attempt to debug something in this flow, your logs might look a little like this:

UserApi.fetchProfile() called
UserApi.fetchProfile() called
RemoteDataSource.loadProfile() called
RemoteDataSource.loadProfile() called
UserApi.fetchProfile() called
Authorizer.authorize(_ request: URLRequest) called
RemoteDataSource.loadProfile() called
Authorizer.authorize(_ request: URLRequest) called
Authorizer.accessToken() called
Authorizer.refreshToken(_ token: Token?) called
Authorizer.authorize(_ request: URLRequest) called
Authorizer.accessToken() called
Networking.load<T: Decodable>(_ request: URLRequest) called
Authorizer.accessToken() called
Networking.load<T: Decodable>(_ request: URLRequest) called
Networking.load<T: Decodable>(_ request: URLRequest) called

With this output it's impossible to see what the order of events is exactly. We don't know if the first loadProfile call lines up with the first load call, or whether it triggered the call to refreshToken.

Without task local values you might pass a UUID to every function, and pass the UUID down to the next functions so you can retrace your steps. With task local values, you can associate a transaction ID with your task using the Transaction.id from before so it propagates throughout your function calls automatically. Let's see what this looks like:

class UserApi {
    let dataSource: RemoteDataSource

    init(dataSource: RemoteDataSource) {
        self.dataSource = dataSource
    }

    func fetchProfile() async throws -> Profile {        
        return try await Transaction.$id.withValue(UUID()) {
            if let transactionID = Transaction.id {
                print("\(transactionID): UserApi.fetchProfile() called")
            }
            return try await dataSource.loadProfile()
        }
    }
}

To print useful information, we check if Transaction.id is set. In this case we've set it with withValue(_:operation:) on the line before but we still unwrap it properly. Next, I simply prefix my old print statement with the transaction ID.

In the loadProfile function, I can also access the transaction ID because it runs as part of the same task:

func loadProfile() async throws -> Profile {
    if let transactionID = Transaction.id {
        print("\(transactionID): RemoteDataSource.loadProfile() called")
    }

    let request = try await authorizer.authorize(URLRequest(url: endpoint))
    return try await network.load(request)
}
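Since every method repeats the same unwrapping dance, you could factor it into a small helper. This logLine function is my own hypothetical addition, not part of the original example; it builds the prefixed message and falls back to the bare message when no transaction ID is set:

```swift
import Foundation

// Repeating the Transaction declaration from earlier so this sketch
// is self-contained.
enum Transaction {
    @TaskLocal static var id: UUID? = nil
}

// Hypothetical helper: prefixes a message with the current transaction
// ID when one exists in the surrounding task context.
func logLine(_ message: String) -> String {
    if let transactionID = Transaction.id {
        return "\(transactionID): \(message)"
    }
    return message
}
```

Each method body then shrinks to something like print(logLine("Authorizer.accessToken() called")).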

This logic can be written in all of the subsequent function calls too. So we'd add this same code to the authorize, accessToken, refreshToken, and load methods. When we run the code with all of this in place, here's what the same output from earlier would look like:

3F0A1FD9-D55D-4015-A7D0-8B054A1CF7A9: UserApi.fetchProfile() called
98365B1C-4176-44DA-806A-2D2BCB787111: UserApi.fetchProfile() called
3F0A1FD9-D55D-4015-A7D0-8B054A1CF7A9: RemoteDataSource.loadProfile() called
98365B1C-4176-44DA-806A-2D2BCB787111: RemoteDataSource.loadProfile() called
F02A7024-0B84-454C-9E23-E3DA0F8E3558: UserApi.fetchProfile() called
3F0A1FD9-D55D-4015-A7D0-8B054A1CF7A9: Authorizer.authorize(_ request: URLRequest) called
F02A7024-0B84-454C-9E23-E3DA0F8E3558: RemoteDataSource.loadProfile() called
98365B1C-4176-44DA-806A-2D2BCB787111: Authorizer.authorize(_ request: URLRequest) called
3F0A1FD9-D55D-4015-A7D0-8B054A1CF7A9: Authorizer.accessToken() called
3F0A1FD9-D55D-4015-A7D0-8B054A1CF7A9: Authorizer.refreshToken(_ token: Token?) called
F02A7024-0B84-454C-9E23-E3DA0F8E3558: Authorizer.authorize(_ request: URLRequest) called
98365B1C-4176-44DA-806A-2D2BCB787111: Authorizer.accessToken() called
3F0A1FD9-D55D-4015-A7D0-8B054A1CF7A9: Networking.load<T: Decodable>(_ request: URLRequest) called
F02A7024-0B84-454C-9E23-E3DA0F8E3558: Authorizer.accessToken() called
98365B1C-4176-44DA-806A-2D2BCB787111: Networking.load<T: Decodable>(_ request: URLRequest) called
F02A7024-0B84-454C-9E23-E3DA0F8E3558: Networking.load<T: Decodable>(_ request: URLRequest) called

Now that every sequence of method calls is associated with a transaction id, the logs that are produced by this program are far more useful than they were before.

This is a really good use of task local values because we're not using them to pass around important state. Instead, we use this for logging and retracing our steps. The transaction ID really is metadata rather than state. This is exactly what the Swift team intended task local values for. They're a container for task metadata.

In Summary

While task local values will most likely not be a heavily used feature for most people, I'm sure some developers will make heavy use of them for debugging, logging, and other purposes. I personally find the transaction example very compelling because I've worked on a codebase not too long ago where we passed transaction IDs to every method that would make a network call, so we could collect comprehensive logs in case something went wrong. Manually passing a transaction ID around really feels like busywork, and being able to associate a transaction ID with an entire chain of method calls that occur within the scope of the operation passed to withValue(_:operation:) is a breath of fresh air.

If you ever find yourself needing to untangle a bunch of concurrently active tasks, task local values might just be the tool you need to help you out.

Got any questions or feedback? Feel free to shoot me a message on Twitter.

Preventing data races with Swift’s Actors

We all know that async / await was one of this year's big announcements at WWDC. It completely changes the way we interact with concurrent code. Instead of using completion handlers, we can await results in a non-blocking way. More importantly, with the new Swift Concurrency features, our Swift code is safer and more consistent than ever before.

For example, the Swift team built an all-new threading model that ensures your program doesn't spawn more threads than there are CPU cores, to avoid thread explosion. This is a huge difference from GCD, where kicking off work with async could spawn new threads, and the CPU had to give each of your threads some time to run, which caused significant overhead due to a lot of context switching.

While all this is interesting, and makes our concurrent code much better, this post is not about Swift concurrency as a whole. Instead, I want to focus on a smaller feature called Actors.

Understanding the problem that actors solve

An actor in Swift 5.5 is an object that isolates access to its mutable state. This means that anybody that wants to call a method on an actor where the method relies on mutable state, regardless of reading or writing, has to do so asynchronously.

But what does this mean? And why is this the case?

To answer that, let’s consider an example of code that you might write today.

class DateFormatterCache {
    static let shared = DateFormatterCache()

    private var formatters = [String: DateFormatter]()

    func formatter(with format: String) -> DateFormatter {
        if let formatter = formatters[format] {
            return formatter
        }

        let formatter = DateFormatter()
        formatter.locale = Locale(identifier: "en_US_POSIX")
        formatter.dateFormat = format
        formatters[format] = formatter
        return formatter
    }
}

This code is quite straightforward, and something that I have actually included in a project once. However, it got rejected in a PR. To find out why, let’s see how this code would be used.

Let’s emulate a situation where this code is used in a multithreaded environment.

Let’s add a few print statements first:

class DateFormatterCache {
    static let shared = DateFormatterCache()

    private var formatters = [String: DateFormatter]()

    func formatter(with format: String) -> DateFormatter {
        if let formatter = formatters[format] {
            print("returning cached formatter for \(format)")
            return formatter
        }

        print("creating new formatter for \(format)")
        let formatter = DateFormatter()
        formatter.locale = Locale(identifier: "en_US_POSIX")
        formatter.dateFormat = format
        formatters[format] = formatter
        return formatter
    }
}

This will tell us whether we’re reusing an existing formatter or creating a new one. These print statements will make it easier to follow what this code does exactly.

Here’s how I’ll emulate the multithreaded environment:

let formats = ["DD/MM/YYYY", "DD-mm-yyyy", "yyyy", "DD-MM", "DD-mm"]
DispatchQueue.concurrentPerform(iterations: 10) { iteration in
    let formatter = DateFormatterCache.shared.formatter(with: formats.randomElement()!)
}

I know these date formats might not be the best; the point isn't to show you clever date formats. Instead, I want to demonstrate a problem to you.

Running this code crashes most of the time for me. I get an EXC_BAD_ACCESS error on the formatters dictionary after a couple of iterations.

When looking at the console, the output looks a little like this:

creating new formatter for DD-mm-yyyy
creating new formatter for DD-mm
creating new formatter for DD-mm-yyyy
creating new formatter for yyyy
creating new formatter for DD-mm-yyyy
creating new formatter for DD-MM
creating new formatter for DD-MM
creating new formatter for DD-mm-yyyy
creating new formatter for DD-mm-yyyy
creating new formatter for DD/MM/YYYY

This makes it look like the cache is not doing anything. Clearly, we're creating a new formatter for every iteration.

Let’s run this code in a normal for loop to see if that’s any better.

for _ in 0..<10 {
    let formatter = DateFormatterCache.shared.formatter(with: formats.randomElement()!)
}

The first thing to note is that this code wouldn’t crash. There’s no bad access on formatters inside of the cache anymore.

Let’s look at the console:

creating new formatter for DD/MM/YYYY
creating new formatter for DD-mm-yyyy
returning cached formatter for DD/MM/YYYY
returning cached formatter for DD/MM/YYYY
creating new formatter for yyyy
returning cached formatter for DD/MM/YYYY
creating new formatter for DD-mm
returning cached formatter for yyyy
returning cached formatter for DD-mm
returning cached formatter for DD-mm

This looks much better. Apparently the caching logic should work. But not when we introduce concurrency…

The reason the formatter cache crashed in the concurrent example is a data race. Multiple threads attempt to read, and modify, the formatters dictionary. The program can’t handle these concurrent reads and writes which puts our program in an inconsistent state and eventually leads to a crash.

Another interesting aspect of this is the broken cache. This is of course related to the data race, but let's see what actually happens when the code runs.

I have explained issues with concurrency, mutable state, and dictionaries before in this post.

Because we’re running code concurrently, we call the formatter(with:) method ten times at roughly the same time. When this function starts, it reads the formatters dictionary, which will be empty, so no formatters are cached. And because we have ten concurrent reads, the dictionary will be empty for each of the ten calls.

Dictionaries in Swift are value types with copy-on-write behavior. This means that the dictionary's storage is not copied until you attempt to modify it. This is important to remember.
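Here's a tiny illustration of those value semantics, independent of the cache example:

```swift
// Dictionaries have value semantics: mutating a copy leaves the
// original untouched, because the storage is copied on write.
var original = ["a": 1]
var copy = original     // no copy happens yet (copy-on-write)
copy["b"] = 2           // storage is copied here, then mutated

// `original` still has one entry; `copy` has two.
```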

When each of the ten calls to formatter(with:) attempts to add its newly created formatter to the cache, the dictionary's storage is copied and the new formatter is added to that copy. In other words, each call adds one entry to the empty dictionary it read earlier and makes that copy the new value of formatters. This means that after each of these concurrent function calls we end up with a different dictionary that holds just one value.

Usually.

Because our concurrent code might also run slightly slower, we could sometimes have a dictionary with two, three, or more items. And this dictionary could be overwritten by a later iteration if our code happens to run that way.

There’s a ton of ambiguity here. We don’t control exactly how our formatter cache is accessed, by which thread, and how often. This means that my initial, simple implementation can never work reliably in a multithreaded environment.

Solving data races without Actors

We can fix this without Swift’s new concurrency features by synchronizing access to the formatters dictionary. Synchronizing means ensuring that we execute the formatter(with:) function serially even if it’s called in parallel. This ensures that the formatters dictionary is read and updated atomically: in one pass, without interruption. To gain a better understanding of what atomicity is, you can refer to this post I wrote earlier. By synchronizing the code we know that once formatter(with:) has done its work, we’re ready to handle another call to it. Basically, callers of formatter(with:) have to wait for their turn.

Synchronizing code like that can be done with a dispatch queue:

class DateFormatterCache {
    static let shared = DateFormatterCache()

    private var formatters = [String: DateFormatter]()
    private let queue = DispatchQueue(label: "com.dw.DateFormatterCache.\(UUID().uuidString)")

    func formatter(with format: String) -> DateFormatter {
        return queue.sync {
            if let formatter = formatters[format] {
                print("returning cached formatter for \(format)")
                return formatter
            }

            print("creating new formatter for \(format)")
            let formatter = DateFormatter()
            formatter.locale = Locale(identifier: "en_US_POSIX")
            formatter.dateFormat = format
            formatters[format] = formatter
            return formatter
        }
    }
}

By creating a private queue and calling sync on it inside formatter(with:), we make sure the queue only runs one of these closures at a time. Because everything happens synchronously, we can return the result of queue.sync directly from the function.

While this code runs, we block the calling thread. This means that nothing else can run on that thread until the sync closure has finished.
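As an aside, here's a sketch of a non-blocking alternative (the CachedValues type and its API are hypothetical): queue.async hands the result back through a completion handler instead of making the caller wait.

```swift
import Foundation

// Hypothetical non-blocking variant: the caller provides a completion
// handler and the serial queue delivers the result asynchronously.
final class CachedValues {
    private var values = [String: Int]()
    private let queue = DispatchQueue(label: "com.example.CachedValues")

    func value(for key: String,
               compute: @escaping () -> Int,
               completion: @escaping (Int) -> Void) {
        queue.async {
            if let cached = self.values[key] {
                completion(cached)
                return
            }
            let fresh = compute()
            self.values[key] = fresh
            completion(fresh)
        }
    }
}
```

The serial queue still serializes access to values; the difference is that the calling thread is free to move on immediately.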

When we run the concurrent example code again with this private queue in place:

let formats = ["DD/MM/YYYY", "DD-mm-yyyy", "yyyy", "DD-MM", "DD-mm"]
DispatchQueue.concurrentPerform(iterations: 10) { iteration in
    let formatter = DateFormatterCache.shared.formatter(with: formats.randomElement()!)
}

It doesn’t crash and produces the following output:

creating new formatter for DD/MM/YYYY
returning cached formatter for DD/MM/YYYY
creating new formatter for yyyy
creating new formatter for DD-mm
returning cached formatter for DD-mm
creating new formatter for DD-mm-yyyy
returning cached formatter for DD/MM/YYYY
returning cached formatter for yyyy
returning cached formatter for DD-mm
creating new formatter for DD-MM

Clearly, the code works well! Awesome.

But there are a few problems here:

  1. We block the thread. This means that GCD will spawn new threads to make sure the CPU stays busy with those threads instead of sitting completely idle. This means that we’ll potentially have tons of threads, which can be expensive if the CPU has to context switch between threads a lot.
  2. It’s not clear to the caller of formatter(with:) that it’s a blocking function. A caller of this function might have to wait for many other calls to this function to complete which might be unexpected.
  3. It’s easy to forget synchronization, especially if the formatters property should be readable from outside of the class. The compiler can’t help us so we have to rely on our own judgement and hope that any mistakes get caught in PR, just like my mistake was.

In Swift 5.5, we can leverage actors to achieve proper mutable state isolation with compiler support.

Solving data races with Actors

As I mentioned earlier, actors isolate access to their mutable state. This means that an object like the DateFormatterCache can be written as an actor instead of a class, and we’ll get synchronization for free:

actor DateFormatterCache {
    static let shared = DateFormatterCache()

    private var formatters = [String: DateFormatter]()

    func formatter(with format: String) -> DateFormatter {
        if let formatter = formatters[format] {
            print("returning cached formatter for \(format)")
            return formatter
        }

        print("creating new formatter for \(format)")
        let formatter = DateFormatter()
        formatter.locale = Locale(identifier: "en_US_POSIX")
        formatter.dateFormat = format
        formatters[format] = formatter
        return formatter
    }
}

Note how the object is completely unchanged from the initial version. All I did was change class to actor and I removed the queue that we added later. Also note that actors are reference types, just like classes are.

Now that DateFormatterCache is an actor, Swift will know that formatters is mutable and that any access to it will need to be synchronized. This also means that Swift knows that formatter(with:) might not return immediately, even though the function isn’t marked async. This is very similar to what we had earlier with the private queue.

If I were to make formatters an internal or public property instead of private, accessing formatters directly from the outside would also be synchronized, and therefore be done asynchronously from the caller’s point of view.
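To illustrate with a smaller, hypothetical actor: a stored property is actor-isolated, so reading it from outside the actor requires an await, while code inside the actor accesses it directly.

```swift
import Foundation

// Hypothetical sketch: `count` is actor-isolated. Inside the actor it's
// accessed directly; from outside, reading it requires an await.
actor Counter {
    var count = 0

    func increment() {
        count += 1 // synchronous inside the actor; already isolated
    }
}
```

From the outside you'd write let current = await counter.count.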

Within the actor, we know that we’re already synchronized. So I don’t have to wait for the value of formatters to be read; I can read it directly without doing any manual synchronization. I get all of this for free; there’s no work to be done by me to ensure correct synchronization.

Running the following test code from earlier produces an error though:

let formats = ["DD/MM/YYYY", "DD-mm-yyyy", "yyyy", "DD-MM", "DD-mm"]
DispatchQueue.concurrentPerform(iterations: 10) { iteration in
    let formatter = DateFormatterCache.shared.formatter(with: formats.randomElement()!)
}

Here’s the error:

Actor-isolated instance method ‘formatter(with:)’ can only be referenced from inside the actor

This error seems to suggest that we cannot access formatter(with:) at all. This isn’t entirely correct, but we’ll need to access it asynchronously rather than synchronously like we do now. The easiest way to do this is to either already be in an async context, or enter one:

let formats = ["DD/MM/YYYY", "DD-mm-yyyy", "yyyy", "DD-MM", "DD-mm"]
DispatchQueue.concurrentPerform(iterations: 10) { iteration in
    Task {
        let formatter = DateFormatterCache.shared.formatter(with: formats.randomElement()!)
    }
}

Doing this provides us with a more useful compiler error:

Expression is ‘async’ but is not marked with ‘await’

Remember how I explained that formatter(with:) might not return immediately because it will be synchronized by the actor just like how the queue.sync version in the class didn’t return immediately?

In the old version of this code, the blocking nature of formatter(with:) was hidden.

With an actor, the compiler will tell us that formatter(with:) might not return immediately, so it forces us to use an await so that our asynchronous work can be suspended until formatter(with:) is run.

Not only is this much nicer due to the more expressive nature of the code, it’s also much better because we’re not blocking our thread. Instead, we’re suspending our function so its execution context can be set aside while the existing thread does other work. We don't create a new thread like we did with GCD. Eventually the actor runs formatter(with:) and our execution context is picked back up where it left off.

Here's what the corrected code looks like:

let formats = ["DD/MM/YYYY", "DD-mm-yyyy", "yyyy", "DD-MM", "DD-mm"]
DispatchQueue.concurrentPerform(iterations: 10) { iteration in
    Task {
        let formatter = await DateFormatterCache.shared.formatter(with: formats.randomElement()!)
    }
}

What’s interesting is that because Swift’s new concurrency model does not spawn more threads than CPU cores, simply wrapping the class based version of the cache in a Task.init or Task.detached block would already mask our bug most of the time. The reason for this is that it’s very likely that all of the tasks you create run on the same thread. This means that they won’t actually run concurrently like the closures passed to DispatchQueue.concurrentPerform do.

You can try this out by making DateFormatterCache a class again and removing the await from the last code snippet. Keep the Task though, since that will leverage Swift's new concurrency features.

However, you should not assume that the bug would actually be fixed by using a class, not synchronizing, and using Task. There is no guarantee that your closures would run on the same thread. And more importantly, in the real world you might have many tasks spawned from many different threads. This would make data races far more likely than they are in my simple example.

Conclusion

In this post, I explained a little bit about what Swift's new actors are, and what their role is in this new async / await world that we can start exploring. You also learned when data races occur, and how you can solve them. First, you saw an approach without actors. After that, I showed you an approach that's much more expressive and without any of the hidden implications that the earlier version had.

Swift's actors are an extremely useful tool to ensure you don't run into data races, by isolating mutable state and synchronizing access. What's even better is that the Swift language and compiler enforce all of this, so potential errors are raised as compiler errors rather than bugs and runtime crashes. I’m extremely excited for concurrency in Swift 5.5, and can’t wait to explore this feature more over the coming weeks.

WWDC Notes: Swift concurrency: Behind the scenes

The sessions "Meet async/await", "Explore structured concurrency", and "Protect mutable state with Swift actors" should be watched first.

Threading model

Compares GCD to Swift concurrency. Swift concurrency is not built on top of GCD; it uses a whole new thread pool.

GCD is very eager to bring up threads whenever we kick off work on queues. When a queue blocks its thread, a new thread will be spawned to handle work.

This means that the system can overcommit with more threads than there are CPU cores. This is also called thread explosion, and it can lead to memory and performance issues.

There’s a lot of scheduling overhead with so many threads. There will likely be a ton of context switching, which in turn makes the CPU run less efficiently.

Swift concurrency was designed to be more efficient than GCD.

The goal is to have no more threads than CPU cores. Instead of blocked threads, there are continuations that can be suspended. Rather than having the CPU context switch between threads, a thread switches between continuations. That switch is as simple as a method call, so the overhead is much, much lower.

To make this happen, the language has to be able to guarantee that threads do not block through language features:

  • await and non-blocking of threads
  • Tracking of dependencies in Swift task model

Swift’s await does not block a thread like GCD’s sync does.

Every thread has a stack that keeps track of function calls, with one stack frame for each call. When a function returns, its stack frame is popped.

When an async function is called with await, it’s tracked as an async frame on the heap. The async frames keep track of state that’s needed when the awaited function returns. When another function is called, the topmost stack frame on the thread is replaced. Because async frames are stored on the heap, they can be put back on a thread and resumed; async frames will be put back on the stack as needed. Calling sync code in an async function will add frames to the thread’s stack.

The block of code that runs after the await is called a continuation. When execution should resume, the continuation is put back on a thread's stack.
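Continuations are also what you work with when bridging a callback-based API to async/await. A minimal sketch, assuming a hypothetical callback-based loadValue(completion:) (withCheckedContinuation is the real Swift 5.5 API):

```swift
import Foundation

// Hypothetical callback-based API, used for illustration only.
func loadValue(completion: @escaping (Int) -> Void) {
    DispatchQueue.global().async { completion(42) }
}

// Wrapping it makes the suspension point explicit: the async function
// suspends, and continuation.resume puts the work back on a thread.
func loadValue() async -> Int {
    await withCheckedContinuation { continuation in
        loadValue { value in
            continuation.resume(returning: value)
        }
    }
}
```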

Interesting stuff, try to find out more and properly understand this.

Async work is modeled with tasks. Tasks can have child tasks. Tasks can only await other tasks in Swift. Awaited tasks are either continuations or child tasks.

Threads can track these task dependencies and they’ll know how to suspend tasks and schedule other work until the task can be resumed.

A cooperative thread pool is the default executor for Swift. The number of threads is limited to the number of CPU cores. Threads always make forward progress, which avoids thread explosion and excessive task switching.

Where with GCD you needed to be mindful of the number of queues you use, Swift concurrency ensures that you don’t have to worry about this anymore.

Concurrency always comes with a cost. It takes additional memory allocation and logic in the Swift runtime. Concurrency should only be used when its benefit outweighs the cost of managing it.

For example, reading from user defaults is a super small task that should not be spawned into its own async task unless needed.

Measure performance to understand when you need concurrency.

await explicitly breaks atomicity because in the time between your await and the continuation running, things might change. You should never hold locks across an await. Thread-specific data is also not preserved across an await because you might resume on a different thread than the one you were suspended on.

The Swift language upholds the runtime contract that threads can always make forward progress. You have to make sure you don’t break this contract so the thread pool can do its best work.

  • Use primitives like await, actors, and task groups so the compiler can enforce the contract.
  • Locks can be used in sync code with caution. There’s no compiler support but does not break the contract as long as the thread is not fully blocked (for too long)
  • Semaphores and NSCondition are not safe to use. They hide dependency information from the runtime so it cannot schedule work correctly which might result in blocking.

Don't use unsafe primitives to wait across task boundaries, like using a semaphore in an async context. This is not safe.

The LIBDISPATCH_COOPERATIVE_POOL_STRICT=1 environment variable will run the app under a debug runtime that enforces forward progress for threads.

Synchronization with Actors

Actors synchronize access to their state through mutual exclusion.

When using DispatchQueue.sync, a current thread can be reused when there’s no contention. When there is, DispatchQueue.sync is blocking and new threads are spawned.

When you use DispatchQueue.async, you’re non-blocking under contention, but a new thread is always spawned.

Swift Actors always reuse threads and are non-blocking. If the thread is free, code is run. If not, the function is suspended and run later.

Serial queues can be replaced with actors to manage access.

When you switch between different actors, you are thread hopping. An actor can be suspended and threads can easily hop from a running actor to a currently suspended actor. The runtime can handle this by creating work items for the thread without spawning a new thread.

Actor work items can remain pending until an in progress work item is completed.

Actors are designed to allow the system to prioritize work.

Actor reentrancy means that an actor might have pending work when it schedules and executes new work items. This can happen if a task is awaiting something, and this other thing awaits something on the actor.

The main actor runs on the main thread. The main thread is separated from the rest of the threads in the cooperative pool.

When you hop on and off the main actor often, you force hopping to and from the main thread. This is expensive. If this happens, it's better to bundle work and run one bigger task to update UI from the main actor. For that reason, it's not advised to frequently jump between the main actor and other actors for small bits of work.

WWDC Notes: Bring Core Data concurrency to Swift and SwiftUI

Persistence everywhere

Core Data takes care of many complexities to persist data. It converts the in-memory object graph to persisted data and takes care of all kinds of complex tasks like memory management.

Core Data works on all platforms, and it’s great in Swift. Apple’s been working to make Core Data better with Swift over the years.

Core Data has always cared about running code concurrently.

Swift concurrency

Sample app

The sample app loads data in the background and persists it. Eventually it updates the view context.

Insertion is done with bgctx.performAndWait() and a batch insert.

performAndWait will block the calling thread to run the provided code on the context's queue. The perform function doesn't block and runs the code on the context's queue.

You can use await ctx.perform {} to suspend the current execution context, have Core Data run code on its queue, and hand back control. Its usage is the same but intent is clearer without blocking.

In iOS 15, perform is generic over the type it returns, and the supplied closure can throw. This is nice and in line with async / await.

try await ctx.perform {
  throw SomeError.case
}

We used to obtain results like this:

var count: Int!
ctx.perform {
  // configure a request with NSCountResultType
  count = try! request.execute().last!
}

With async / await we can do:

let count = try await ctx.perform {
  // configure a request with NSCountResultType
  return try request.execute().last!
}

This is much nicer to read and reason about.

Returning a managed object from a perform can be risky. When you return a managed object from perform, it's not safe to interact with the returned object.

Instead, you should return a managed object ID or dict representation.
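A minimal sketch of returning an object ID instead; Item is a hypothetical Core Data entity, and ctx / viewContext are assumed to be background and view contexts:

```swift
import CoreData

// Insert on the background context, but return only the object ID;
// unlike the managed object itself, the ID is safe to pass around.
// Item is a hypothetical entity used for illustration.
let objectID: NSManagedObjectID = try await ctx.perform {
    let item = Item(context: ctx)
    item.name = "Example"
    try ctx.save()
    return item.objectID
}

// Rematerialize the object on another context when needed:
let viewItem = viewContext.object(with: objectID)
```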

Perform is scheduled with .immediate by default. It behaves a lot like an async version of performAndWait.

If you use enqueued, the work is always added to the end of the queue of the context. Even if you’re already in the correct context.

Translation to async await:

  • performAndWait == await perform
  • perform == await perform(schedule: .enqueued)
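In code, the two scheduling options look roughly like this (using count(for:) as a stand-in for whatever work you run on the context):

```swift
// Runs immediately if we're already on the context's queue (.immediate is the default):
let result = try await ctx.perform {
    try ctx.count(for: request)
}

// Always added to the end of the context's queue, even if we're already on it:
let enqueuedResult = try await ctx.perform(schedule: .enqueued) {
    try ctx.count(for: request)
}
```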

NSPersistentContainer and NSPersistentStoreCoordinator can also perform work in their context and have received similar async features.

Existing debugging tools still work and should be used.

CloudKit sharing is new and Core Data spotlight integration is improved.

Persistent stores are where data is stored. Core Data supplies XML, Binary, In-Memory, and SQLite. On iOS 14 and below we used long names. Now they're Swifty: .xml, .sqlite, etc.

AttributeType is also improved.

SwiftUI

@FetchRequest now has lazy entity resolution and they pick up dynamic configuration. There’s also a new sectioned fetch request.

Lazy entity resolution means you don’t have to set up the Core Data stack when creating your fetch request. Instead, it needs to be set up when the request is performed.

Predicates and Sort Descriptors can now be updated through properties on the wrapped value of the fetch request. Updating the property automatically reloads.

There’s also a new SortDescriptor type that’s easier to use.

@SectionedFetchRequest takes a sectionIdentifier that’s used to determine which section items are in. This fetch request returns a 2D array that represents sections with items for a section as its nested result.

The sectionIdentifier key path can be dynamically adjusted as well.

When changing a sortDescriptor, you might have to change the sectionIdentifier too.

WWDC Notes: Discover concurrency in SwiftUI

When performing slow work, you might dispatch off of the main queue. Updating an observable object off of the main queue could result in the update colliding with a "tick" of the run loop. This means that SwiftUI receives an objectWillChange and attempts to redraw UI before the underlying value is updated.

This will lead to SwiftUI thinking that your model is in one state, but it’s in the next.

SwiftUI needs to have objectWillChange->stateChange->runloop tick in this exact order.

Running your update on the main actor (or main queue pre async/await) will ensure that the state change is completed before the runloop tick since the operation would be atomic.

You can use await to ensure this. Doing this is called yielding (to) the main actor.

When you’re on the main actor and you call a function with await, you yield the actor, allowing it to do other work. The work is then performed by a different actor. When this work completes, control is handed back to the main actor where it will update state:

class Photos: ObservableObject {
  @Published var items: [SpacePhoto] = []

  func update() async {
    let fetched = await fetch() // yields main actor
    items = fetched // done on the main actor
  }
}

There's currently no guarantee that items is always accessed on the main actor. To make this guarantee, the Photos class needs the @MainActor annotation.
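With that annotation applied, the Photos sketch from above becomes (fetch() is the same helper assumed in the earlier snippet):

```swift
import SwiftUI

// @MainActor guarantees items is only touched on the main actor.
@MainActor
class Photos: ObservableObject {
    @Published var items: [SpacePhoto] = []

    func update() async {
        let fetched = await fetch() // yields the main actor
        items = fetched             // back on the main actor
    }
}
```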

The task modifier on a SwiftUI view is used to run an async task on creation. It's called at the same point in the lifecycle as onAppear.

Since task is tied to the view’s lifecycle, you can await an async sequence’s elements in task and rest assured that everything is cancelled and cleaned up when the view’s lifecycle ends.
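A minimal sketch of tying work to a view's lifecycle with task (the view name is made up; Photos is the model from the snippet above):

```swift
import SwiftUI

struct PhotoListView: View {
    @StateObject private var model = Photos()

    var body: some View {
        List(model.items.indices, id: \.self) { index in
            Text("Photo \(index)")
        }
        // Runs when the view appears; the task is cancelled
        // automatically when the view's lifecycle ends.
        .task {
            await model.update()
        }
    }
}
```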

Button methods in SwiftUI are synchronous. To launch an async task from a button handler, use async {} (will be renamed to Task.init) and await your async work.

Button("Save") {
  async {
    isSaving = true
    await model.save()
    isSaving = false
  }
}

In this button, isSaving is mutated on the main actor. async (or Task.init) runs its task attached to the current actor. In a SwiftUI view, this would be the main actor. await will yield the main actor and run code on whatever actor model.save() runs on, until control is yielded back to the main actor.

The .refreshable modifier on SwiftUI takes an async closure. You can await an update operation in there. This modifier will, by default, use a pull to refresh control.

SwiftUI integrates nicely with async / await and asynchronous functions.

It's recommended to mark ObservableObject with @MainActor to ensure that their property access and mutations are done safely on the main actor.

WWDC Notes: Meet AsyncSequence

Map, filter, reduce, dropFirst all work in async sequences:

for try await someThing in async.dropFirst() {
}

For example.

AsyncSequence suspends on each element and receives values asynchronously from the iterator. AsyncSequences either complete with success or stop when an error is thrown.

Implementing an AsyncSequence follows all the rules that a normal sequence follows. Its next() returns nil when it’s completed for example.

An async iterator also consumes its underlying collection.
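A minimal custom AsyncSequence following those rules, as a sketch (returning nil from next() is what ends the iteration):

```swift
// Delivers the numbers 1...limit asynchronously.
struct Numbers: AsyncSequence {
    typealias Element = Int
    let limit: Int

    struct AsyncIterator: AsyncIteratorProtocol {
        var current = 0
        let limit: Int

        mutating func next() async -> Int? {
            guard current < limit else { return nil } // nil completes the sequence
            current += 1
            return current
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        AsyncIterator(limit: limit)
    }
}

// Usage:
// for await number in Numbers(limit: 3) { print(number) }
```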

Things like break and continue work in async sequences too.

You can cancel an iteration by holding on to its Task.Handle when you wrap it in async:

let handle = async {
  for await thing in list {
    // ...
  }
}

handle.cancel()

Reading files from a URL is commonly done async. You can use URL.lines for this. It works for network and local resources.

URLSession has a bytes(from:) function to enable roughly the same but with more control.
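Both can be consumed like any other async sequence; a sketch using URLSession's bytes(from:):

```swift
import Foundation

func printLines(from url: URL) async throws {
    // bytes(from:) starts the request and delivers the body incrementally.
    let (bytes, _) = try await URLSession.shared.bytes(from: url)

    // AsyncBytes offers a lines view, similar to URL.lines.
    for try await line in bytes.lines {
        print(line)
    }
}
```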

NotificationCenter notifications can be awaited too. You can even await one notification:

// listens for 1 notification only
let center = NotificationCenter.default
let notification = await center.notifications(named: .SomeNotification).first { notification in
  // the closure is a predicate; return true for the notification you want
  notification.userInfo != nil
}
print(notification)

Callbacks that are called multiple times, and some delegates are good candidates for async sequences.

Start / stop / handle pattern is a good candidate. Sounds similar to location managers.

let stream = AsyncStream(Output.self) { continuation in 
  let object = SomeObject()
  object.handler = { element in 
    continuation.yield(element)
  }

  continuation.onTermination = { @Sendable _ in 
    object.stop()
  }

  // starts producing values
  object.start()
}

AsyncStream is the easiest way to create your own asynchronous sequences. They also handle buffering (not sure what that means in this case).

There’s also a throwing version: AsyncThrowingStream.

Note: AsyncStream does not appear to be present in Beta 1

WWDC Notes: What’s new in SwiftUI

A good way to get started with SwiftUI is to use it for new features. SwiftUI can be mixed in with UIKit and AppKit code. It also allows you to expand into new platforms, like macOS, with little to no work.

Essentially, try to do new work with SwiftUI whenever you can.

Better lists

SwiftUI can load images async with the new AsyncImage. This takes a URL and shows a placeholder by default. You can pass a closure to configure the loaded image with modifiers, and to set a custom placeholder.

There's a new refreshable modifier. This modifier takes an async closure, so you can await things in there. Lists use this modifier to add pull to refresh.

You can run a task on appear with the task modifier. This will run when the view is first created, and cancels the task when the view disappears. You could use this to fetch initial data for the list without blocking your UI.

Videos:

  • Discover concurrency in SwiftUI
  • Swift concurrency: Update a sample app

Text can be made editable with a TextField. This takes a binding but we might not easily have a binding available because in a list we receive an element; not a bindable object.

This is covered in depth in Demystifying SwiftUI.

You can pass a binding to your collection to the List. This will make it so the item closure receives a binding too. You can still read data like you did normally but you also have a binding.

Looks like you should use $thing instead of thing in the list item closure.

Separator tints can be set with listRowSeparatorTint. Also works for section separators. listRowSeparator(.hidden) hides the separators.

SwiftUI now has a swipeActions modifier that allows you to add swipe actions to items in ForEach. You can pass an edge to swipeAction to control where they appear. Also works on macOS.

View styles now have an enum-like style. For example listStyle(.inset). We can now pass a value to inset to alternate row background colors.

Beyond lists

We can now create tables with the Table view. Tables use a TableColumn for columns. They take a title + data. They support single + multiple selection and sorting. You can sort through key paths. Worth exploring, looks fun.
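A rough sketch of a simple, non-sortable Table (Person is a made-up model; Table was shown for macOS):

```swift
import SwiftUI

struct Person: Identifiable {
    let id = UUID()
    let name: String
    let city: String
}

struct PeopleTable: View {
    let people: [Person]

    var body: some View {
        Table(people) {
            // Each TableColumn takes a title plus content for a row.
            TableColumn("Name") { person in
                Text(person.name)
            }
            TableColumn("City") { person in
                Text(person.city)
            }
        }
    }
}
```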

Fetch requests now provide a binding to their sort descriptors that allows you to sort fetched results. Must explore.

We can also use @SectionedFetchRequest to split @FetchRequest into sections.

Videos:

  • SwiftUI on the Mac: Fundamentals
  • SwiftUI on the Mac: Finishing touches
  • Bringing Core Data concurrency to Swift and SwiftUI

Lists in SwiftUI can now be made searchable through a modifier. It binds to a search query. There's a whole session on search in SwiftUI.

We can use onDrag to make a view draggable (existing modifier); we can now provide a preview: closure to supply a custom preview of the dragged data.

We can add the importsItemProviders modifier to make a view a drop target that accepts item providers. We can add ImportFromDevicesCommands() to an app's commands to allow import from external devices like a camera.

We can use exportsItemProviders modifier to export items so that apps like Shortcuts can use our data. Not sure how this works exactly though. Covered only very briefly.

Advanced graphics

Apple added lots of new SF Symbols and two new rendering modes. Hierarchical and palette. More information in the SF Symbols talk.

There are all kinds of new colors in SwiftUI this year.

SF Symbol variants are now automatically picked by the system. Modifiers like .fill are no longer needed. The correct variant is automatically chosen based on context and platform. This is covered more in the SwiftUI + SF Symbols talk.

A Canvas can be used to draw a large number of items that don't need to be managed individually. It's possible to add gestures to a Canvas and modify its contents. You can add accessibility information like accessibilityChildren to enhance the accessibility experience.

You can wrap a TimelineView to create a timeline loop to build a screensaver like experience for tvOS. Or you can even use it to update your watch app when it’s in the always on state. You could, for example, update your view every minute. More information in What’s new in WatchOS 8.

.privacySensitive is a useful modifier to hide views when they might be visible to others. For example when a device is locked.

Materials can be used to build blurry translucent backgrounds and more. Materials work with the safe area inset to build partial overlays for example. The Rich graphics session covers this in depth.

SwiftUI previews can now show landscape iOS devices. Accessibility modifiers are now shown in the inspector when using previews. You can also inspect the state of your accessibility with previews. Covered in depth in the accessibility session.

Text and keyboard (focus based)

SwiftUI Text now supports markdown to format text and even include links or code. This is built on top of Swift's AttributedString. More information on this in What's new in Foundation.

Xcode 13 can now extract localization catalogs at compile time. Check out Localize your SwiftUI app for more.

Dynamic type was already supported. Now we can set a dynamicTypeSize minimum and maximum size to ensure our layouts don’t break for too large or small texts.

We can enable text selection on macOS with .textSelection. All text within the view this is applied to will be selectable. Also works on iOS.

We can easily format dates with .formatted. We can even use this to format people names for example. We can bind text fields to formatted fields. PersonNameComponents is used in an example. Also covered in the Foundation talk.

We can use onSubmit to handle pressing of the return key. We can also apply this on an entire form. We can set the label of the return key through the submitLabel modifier.

We can add toolbars to a keyboard through the toolbar modifier by returning a ToolbarItemGroup with placement: .keyboard. Shown above keyboard on iOS/iPadOS and Touch Bar on Mac.

We use @FocusState to control focus state. The focused modifier takes a binding to a focus state and will change this state when something is focused. This allows you to update something else in your view.

@FocusState can represent a Hashable value and .focused($focusField, equals: .addAttendee) to for example bind to an enum value.

When you set the @FocusState property to a value, the view that has its focused(_:equals:) set to that value will be activated.
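Putting the focus pieces together in a sketch (field and view names are made up):

```swift
import SwiftUI

struct AttendeeForm: View {
    enum Field: Hashable {
        case name, addAttendee
    }

    @FocusState private var focusedField: Field?
    @State private var name = ""
    @State private var attendee = ""

    var body: some View {
        Form {
            TextField("Name", text: $name)
                .focused($focusedField, equals: .name)
            TextField("Add attendee", text: $attendee)
                .focused($focusedField, equals: .addAttendee)
            Button("Add another") {
                // Setting the state moves focus to the matching field.
                focusedField = .addAttendee
            }
        }
    }
}
```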

Keyboard can be dismissed by not having a selected focus state. More information in the SwiftUI + Focus session.

Buttons

Buttons are used all over the place in SwiftUI. You can use the .bordered button style to add a border around buttons. You can add tinting and apply the modifier to a container to style all buttons in it.

You can use controlSize and controlProminence to set a button’s styling too. A button that has an increased prominence gets high contrast color to stand out. accentColor is more dimmed. Button size .large makes a large full width button.

You can use a maxWidth to prevent a button from being too big. You can apply a keyboardShortcut to activate a button.

High prominence should not be applied to all buttons. It’s best for 1 button per screen. A lower prominence can be used to add a splash of color. You can mark buttons as destructive now so they are colored red. You can also ask confirmation for (destructive only?) buttons with the confirmationDialog.
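In code, the styling described above looks roughly like this; controlProminence is the API name as shown at WWDC and may differ in later SDKs, and save()/delete() are made-up handlers:

```swift
// A single prominent call-to-action button.
Button("Save") { save() }
    .buttonStyle(.bordered)
    .controlSize(.large)           // large, full-width style
    .controlProminence(.increased) // high-contrast accent color

// A destructive button is automatically tinted red.
Button("Delete", role: .destructive) { delete() }
```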

Menu on macOS can now have a primary action and a menu with options of secondary actions. This also works on iOS.

New on button is a toggle style to activate / deactivate buttons.

We can compose everything together in a control group for example.

WWDC Notes: Protect mutable state with Swift actors

Data races make concurrency hard. They occur when two threads access the same data and at least one of them is a write. It’s trivial to write a data race, but it’s really hard to debug.

Data races aren’t always clear, aren’t always reproducible, and might not always manifest in the same way.

Shared mutable state is needed for a data race to occur. Value types don’t suffer from data races due to the way they work; they’re copied.

When you pass an array around, copies are created. This is due to array’s value semantics.

Even an object that’s a value type can be captured in a racy way. When you mutate a value type in a concurrent code block, Swift will show a compiler error since you’re about to write a data race.

Shared mutable state requires synchronization. Existing methods are:

  • Atomics
  • Locks
  • Serial dispatch queues

All three require the developer to carefully use these tools.

Actors

Actors are introduced to eliminate this problem. Actors isolate their state from the rest of the program. This means that all access to state has to go through the actor. An actor ensures mutually-exclusive access to its state.

What’s nice is that you cannot forget to synchronize an actor.

Actors can do similar things to structs, enums, and classes. They are reference types and their unique characteristic is in how they synchronize and isolate data.

actor Counter {
  var value = 0
  func increment() -> Int {
    value = value + 1
    return value
  }
}

In this code, the actor will ensure that value is never read / mutated concurrently.

let counter = Counter()

asyncDetached {
  print(await counter.increment())
}

asyncDetached {
  print(await counter.increment())
}

This code does not cause a data race even though we have no idea how/when these detached tasks run. They might run after each other or at the same time. The actor will ensure we don’t have a data race on value.

Interacting with actors is done asynchronously. The actor might have your calling code wait for a while to free its resources and avoid data races.

Extensions on an actor can access an actor’s state because it’s considered internal to the actor. You can access an actor’s state synchronously from within the actor. In other words, within an actor, your code is uninterrupted and you don’t need to worry about suspensions.

Actor reentrancy

There is an important exception to this though. When a function on an actor runs, it's not interrupted unless you have an await within the function. If you await in an actor function, the function is suspended.

Consider this code:

actor ImageDownloader {
  private var cache: [URL: Image] = [:]

  func image(from url: URL) async throws -> Image? {
    if let cached = cache[url] {
      return cached
    }

    let image = try await download(from: url)

    cache[url] = image
    return image
  }
}

If you call this code, no data races for cache can occur. But if you call this method twice for the same url, here’s what happens:

  • CALL1 sees that no image is cached for url
  • CALL1 starts download,
  • CALL1 suspends
  • CALL2 sees that no image is cached for url
  • CALL2 starts download,
  • CALL2 suspends
  • CALL1’s call to download completes and CALL1 resumes
  • CALL1 caches image
  • CALL2’s call to download completes and CALL2 resumes
  • CALL2 overrides image with new version of image for same URL

While an actor’s function is suspended, the underlying data can be changed by another call to that function.

One workaround here would be to not overwrite the image when it’s re-downloaded:

cache[url] = cache[url, default: image]

Video mentions code associated with the video for a better solution; not found.

Ideally, all mutations in an actor are synchronous. You should expect that state is changed after you’ve used an await in your actor code. Any assumptions should be checked after an await.

Actor isolation

Protocol conformances must respect actor isolation.

For example

extension SomeActor: Hashable {
  func hash(into hasher: inout Hasher) {
    hasher.combine(someProperty)
  }
}

This will produce a compiler error. hash(into:) is an instance method on SomeActor, which means it must be called asynchronously since the actor might run the function call at a later time if it's already busy.

However, Hashable requires hash(into:) to be synchronous. This is a problem because for an actor to guarantee isolation, we can’t call it synchronously.

To work around this, we can use nonisolated:

extension SomeActor: Hashable {
  nonisolated func hash(into hasher: inout Hasher) {
    hasher.combine(someProperty)
  }
}

This will make it so that the function is not isolated anymore. As long as someProperty is immutable, this will work. We know that someProperty can’t be mutated so reading it without isolation shouldn’t be a problem. After all, a data race is only a data race if one of the accesses mutates state.

If someProperty is mutable though, we’d still get a compiler error because we don’t know who else might access someProperty (and potentially mutate it).

Closures can be isolated to the actor.

extension SomeActor {
  func doSomething() {
    self.someList.map { item in 
      return self.infoFor(item)
    }
  }
}

infoFor is defined on SomeActor but we don’t need to await self.infoFor because we know this closure never leaves the scope of the actor. Map’s closure isn’t escaping so we’re always actor isolated.

extension SomeActor {
  func runLater() {
    asyncDetached {
      await doSomething()
    }
  }
}

In this example, the closure isn’t run immediately within the scope of the actor since it’s detached. This means we must await doSomething() since we’re no longer actor isolated.

If an actor owns a reference type that might be exposed to the outside of the actor, we have a potential for a data race. This isn't a problem for value types. However, it is a problem for classes since the actor will hold a reference to an instance of a reference type. If this reference is (safely) passed outside of the scope of the actor, the non-actor scope might cause a data race on that instance.

Objects that can be safely shared concurrently are called Sendable types.

If you copy something from one place to the other and each place can modify it without issue, that object is sendable.

  • Value types are sendable because they’re copied
  • Actors are sendable because they isolate state
  • Immutable classes can be sendable. These are classes with only let properties
  • Internally synchronized classes can be sendable
  • Functions aren’t always sendable. They are if they are @Sendable

Sendable describes a common but not universal property of types.

Swift will enforce Sendable at compile time so you’ll know when you’re at risk of leaking non-Sendable information.

Sendable is a protocol that you can have types conform to. Swift will automatically check correctness for you from that point. Something is Sendable if all of its properties are Sendable, similar to how Codable conformance is synthesized.

Sendable objects with generics must have the generic be Sendable too. You can add conditional conformance based on a generic being Sendable, for example.
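A sketch of both ideas: a struct whose conformance checks out because all stored properties are Sendable, and a generic type that is only Sendable when its generic parameter is:

```swift
// All stored properties are Sendable, so the conformance is valid.
struct Point: Sendable {
    var x: Double
    var y: Double
}

// Pair is only Sendable when T is.
struct Pair<T> {
    var first: T
    var second: T
}

extension Pair: Sendable where T: Sendable {}
```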

Making objects Sendable if you need them in a concurrent situation is a great way to have Swift help you out when you might have a data race.

@Sendable function types conform to Sendable

Sendable closures cannot have mutable captures, must capture Sendable types only, and can’t be both sync and actor isolated.

asyncDetached takes a sendable closure.

var c = Counter() // here Counter is a class, not the actor from before

asyncDetached {
  c.increment()
}

This doesn't work because c is a mutable capture of a non-Sendable class.

Main actor

The main thread in an app is where all kinds of UI operations are performed. Whenever you interact with the UI you’re on the main thread.

Slow tasks should be performed away from the main thread so it doesn’t freeze.

Main thread is good for quick operations.

DispatchQueue.main.async is used to dispatch onto the main queue. This is very similar to running on an actor: if you're on the main thread you can access it safely, just like you can safely access an actor's state from within the actor. If you want to run something on the actor from the outside, you do so asynchronously with await, similar to running code on the main queue asynchronously with DispatchQueue.main.async.

The Main Actor in Swift is an actor that represents the main thread.

The main actor always uses the main dispatch queue as its underlying queue. It's interchangeable with DispatchQueue.main from the runtime's point of view.

If you mark something with @MainActor, it always runs on the main actor:

@MainActor func doSomething() {}

This code must be called async from outside of the main actor:

await doSomething()

Whole types can be placed on the main actor with @MainActor:

@MainActor class SomeClass

Opt out of running on the main actor in such a class with nonisolated.
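A sketch of opting out with nonisolated (the class and its members are made up):

```swift
import Foundation

@MainActor
class ViewModel {
    var items: [String] = [] // main-actor isolated mutable state

    let identifier = UUID()  // immutable, so safe to read from anywhere

    // Doesn't touch isolated mutable state, so it can be called
    // synchronously from off the main actor.
    nonisolated func log(_ message: String) {
        print("[\(identifier)] \(message)")
    }
}
```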

WWDC Notes: Explore structured concurrency in Swift

Structured programming uses a static scope. This makes it very easy to reason about code and its flow. Essentially making it trivial to understand what your code does by reading it from top to bottom.

Asynchronous and concurrent code do not follow this structured way of programming; it can’t be read from top to bottom.

Traditional asynchronous functions don't return values because the values aren't ready when the function returns. Instead, the function communicates results back through a closure at a later time.

It also means that we don’t use structured programming for error handling (no throws for example).

We need nesting if we want to use the produced results for another asynchronous operation.

An async function doesn’t take a completion handler and is instead marked with async, and it returns a value.

By using await to call async code we don’t have to nest in order to use results from an async function, and we can throw errors instead of passing them to a completion handler.

This brings us much closer to structured programming.

Tasks are a new feature in Swift. A task provides an execution context in which we can write asynchronous code. Each task runs concurrently with other tasks, and they will run in parallel when appropriate.

Due to deep integration the compiler can help prevent bugs.

Async let task

An async let task is the easiest kind of task.

When writing let thing = something(), something() is evaluated first and its result is assigned to let thing.

If something() is async, you need it to run first and then assign to let thing. You can do this by marking let thing as async:

async let thing = something()

When evaluating this, a child task is created. At the same time, let thing is assigned a placeholder. The parent task continues running until at some point we want to use thing. We need to mark this point with await:

async let thing = something()

// some stuff

makeUseOf(await thing)

At this point, the parent context will suspend and await the completion of the child task which will fulfill the placeholder. If the async function can throw, you must prefix with try await instead.

When calling multiple async functions in one scope, you can write this:

func performAsyncJob() async throws -> Output {
  let (data, _) = try await fetchData()
  let (meta, _) = try await meta()

  return Output(data, meta)
}

This will first run (and await the output of) fetchData, and meta is run after.

After meta is done, we return Output.

If the two await lines don’t depend on each other, we can run then concurrently by using async let:

func performAsyncJob() async throws -> Output {
  async let (data, _) = fetchData()
  async let (meta, _) = meta()

  return Output(try await data, try await meta)
}

This will not suspend the parent task until the await is encountered, and both tasks will be running concurrently.

A parent task can spawn one or more child tasks. A parent task can only complete its work if its child tasks have completed their work. If one of the child tasks throws an error, the parent task should immediately exit. If there are multiple child tasks running, the parent will mark any in-flight tasks as cancelled before exiting. Marking a task as cancelled does not stop the task; it’ll just tell the task that its output is no longer needed; the task must handle its cancellation. If a task has any child tasks when cancelled, its child tasks will be automatically marked as cancelled too.

A parent task will only finish when all of its child tasks have finished.

This guarantee of always finishing tasks (either successfully, through cancellation, or by throwing an error) is fundamental to concurrency in Swift.

Cancellation in Swift tasks is cooperative. The task is not stopped when cancelled. A task must check its own cancellation status at reasonable times, whether it’s actually async or not. This means you should design your tasks with cancellation in mind; especially if they are long-running. You should always aim to stop execution as soon as possible when a task is cancelled.

You can do this with try Task.checkCancellation(). This checks the cancellation status of the current task and throws a CancellationError if the task has been cancelled. If it’s more appropriate, you can also inspect Task.isCancelled. When your task is cancelled you can throw an error, which is what Task.checkCancellation() does, but you can also return an empty result, or a partial result. Make sure you document this explicitly so callers of your function know what to expect.
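As a minimal sketch of both approaches (processChunks, processChunksPartial, and the doubling "work" are hypothetical stand-ins for a long-running job):

```swift
// Cooperative cancellation, variant 1: throw on cancellation.
func processChunks(_ chunks: [Int]) async throws -> [Int] {
    var results = [Int]()
    for chunk in chunks {
        // Throws CancellationError if the surrounding task was cancelled.
        try Task.checkCancellation()
        results.append(chunk * 2)
    }
    return results
}

// Variant 2: stop early and return a documented partial result instead.
func processChunksPartial(_ chunks: [Int]) async -> [Int] {
    var results = [Int]()
    for chunk in chunks {
        guard !Task.isCancelled else { return results } // partial result
        results.append(chunk * 2)
    }
    return results
}
```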

Task groups

In the talk a structure like this is shown:

func fetchSeveralThings(for ids: [String]) async throws -> [String: Output] {
  var output = [String: Output]()
  for id in ids {
    output[id] = try await performAsyncJob()
  }
  return output
}

func performAsyncJob() async throws -> Output {
  async let (data, _) = fetchData()
  async let (meta, _) = meta()

  return Output(try await data, try await meta)
}

For every id, one task with two child tasks is spawned. The call to performAsyncJob creates the parent, and fetchData and meta create the child tasks. In the for loop we only have one active task at a time, since we await performAsyncJob on every iteration. This means that Swift can make certain guarantees about our concurrency: it knows exactly how many tasks are active at a time.

We can use a task group to have multiple calls to performAsyncJob active. Tasks that are created in a group cannot escape the scope of their group.

You create a task group through the withThrowingTaskGroup(of: Type.self) function. This function takes a closure that receives a group object. You add new tasks to the group by calling async on the group:

func fetchSeveralThings(for ids: [String]) async throws -> [String: Output] {
  var output = [String: Output]()
  try await withThrowingTaskGroup(of: Void.self) { group in 
    for id in ids {
      group.async {
        output[id] = try await performAsyncJob()
      }
    }
  }

  return output
}

Child tasks start immediately when they’re added to the group.

By the end of the closure, the group goes out of scope and all added child tasks are awaited.

This means that there’s one task in fetchSeveralThings. This task has a child task for each id in the list, and each child task has several tasks of its own.

The code above would produce a compiler error due to a data race on output: it’s being mutated by several concurrently running tasks. Data races are common in concurrent code. Dictionaries are not thread-safe; they must only be mutated from one place at a time.

Task creation takes a @Sendable closure and it cannot capture mutable variables. Sendable closures should only capture value types, actors, or classes that implement synchronization.

The Protect mutable state with Swift actors session provides more info.

To fix the problem, child tasks can return a value instead of mutating the dictionary directly:

func fetchSeveralThings(for ids: [String]) async throws -> [String: Output] {
  var output = [String: Output]()
  try await withThrowingTaskGroup(of: (String, Output).self) { group in 
    for id in ids {
      group.async {
        return (id, try await performAsyncJob())
      }
    }

    for try await (id, result) in group {
      output[id] = result
    }
  }

  return output
}

This for try await loop runs sequentially so the output dictionary is mutated one step at a time.

Async sequences are covered more in the Meet AsyncSequence session.

While task groups are a form of structured concurrency, the task tree rules work slightly differently.

When one of the child tasks in the group fails with an error (and it throws), this will cancel all other child tasks, just like you’d expect since it’s the same as with async let. The main difference is that when the group goes out of scope through a normal exit, its remaining tasks aren’t cancelled. You can call cancelAll on the group before exiting the closure that’s used to populate the group.
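A sketch of cancelAll: fetchValue and firstValue are hypothetical, and the sketch uses the shipped group.addTask spelling rather than the beta group.async shown in the snippets above. It takes the first finished result and then explicitly cancels the remaining children before the group exits:

```swift
// Hypothetical async job that cooperatively checks for cancellation.
func fetchValue(_ id: Int) async throws -> Int {
    try Task.checkCancellation()
    return id * 10
}

// Await the first result, then cancel the remaining child tasks
// before the group goes out of scope.
func firstValue(from ids: [Int]) async throws -> Int? {
    try await withThrowingTaskGroup(of: Int.self) { group in
        for id in ids {
            group.addTask { try await fetchValue(id) }
        }
        let first = try await group.next()
        group.cancelAll()
        return first
    }
}
```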

Unstructured tasks

Structured concurrency has a clear hierarchy and defined rules. Sometimes there are situations where you need unstructured concurrency; without a structured context.

For example, you might need to launch something async from a context that isn’t async yet; this means you don’t have a task yet. Other times, tasks might need to live beyond the confines of a single scope. This is common when implementing delegate methods with concurrency.

Async tasks

Imagine a collection view delegate method where you want to fetch something async for your cell.

This would not work, because the delegate method is not an async context:

// shortened
func cellForRowAt() {
  let ids = getIds(for: item) // item is passed to cellForRowAt
  let content = await getContent(for: ids)
  cell.content = content
}

So we need to launch an unstructured task:

// shortened
func willDisplayCellForItem() {
  let ids = getIds(for: item) // item is passed to willDisplayCellForItem
  async {
    let content = await getContent(for: ids)
    cell.content = content
  }
}

The async function runs its code asynchronously on the current actor. To make this the main actor, you can annotate the class with @MainActor:

@MainActor
class CollectionDelegate {
  // code
}

  • An unstructured task will inherit actor isolation and priority of the origin context
  • Lifetime is not confined to a scope
  • Can be launched anywhere
  • Must be manually cancelled or awaited

SIDENOTE: all async work is done in a task. Always.

Cancellations and errors do not automatically propagate when using an unstructured task.

We can put tasks in a dictionary to keep track of them:

@MainActor
class CollectionDelegate {
  var tasks = [IndexPath: Task.Handle<Void, Never>]()

  func willDisplayCellForItem() {
    let ids = getIds(for: item) // item is passed to willDisplayCellForItem
    tasks[item] = async {
      defer { tasks[item] = nil }

      let content = await getContent(for: ids)
      cell.content = content
    }
  }
}

Storing the task allows you to cancel it later. The stored task should be removed once it finishes; the defer takes care of this. That way we never cancel a task that has already completed.

Because we run on the main actor (async inherits the actor), we know that we don’t have a data race on tasks. Only one operation will mutate (or read) it at a time.

We can use tasks[item]?.cancel() in didEndDisplay to cancel the task manually.
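A framework-free sketch of this bookkeeping, with hypothetical names (ItemLoader, startLoading, stopLoading), using the shipped Task spelling instead of the beta Task.Handle/async; for brevity, tasks are only removed in stopLoading:

```swift
// Keeps one in-flight task per item so it can be cancelled later.
final class ItemLoader {
    private var tasks = [Int: Task<Void, Never>]()

    func isLoading(item: Int) -> Bool { tasks[item] != nil }

    func startLoading(item: Int) {
        tasks[item] = Task {
            // Placeholder for real work like `await getContent(for:)`.
            await Task.yield()
        }
    }

    // Mirrors didEndDisplay: cancel the in-flight task and drop it.
    func stopLoading(item: Int) {
        tasks[item]?.cancel()
        tasks[item] = nil
    }
}
```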

Detached tasks

Sometimes you don’t want to inherit any actor information and instead run a task completely on its own. This can be done with a detached task. Detached tasks work the same as async tasks, but they don’t run in the context they’re created in. You can pass a priority parameter.

Imagine caching the result of getContent in the code. This can be detached from the main actor.

@MainActor
class CollectionDelegate {
  var tasks = [IndexPath: Task.Handle<Void, Never>]()

  func willDisplayCellForItem() {
    let ids = getIds(for: item) // item is passed to willDisplayCellForItem
    tasks[item] = async {
      defer { tasks[item] = nil }

      let content = await getContent(for: ids)

      asyncDetached(priority: .background) {
        writeToCache(content)
      }

      cell.content = content
    }
  }
}

This detached task doesn’t run on the main actor, and it’s given a low priority since it’s not important to do this work as soon as possible.

In a detached task you can create a task group. This would allow you to run a bunch of async work concurrently, and to cancel this work easily by cancelling the detached task since the detached task would be a parent task for all child tasks.

asyncDetached(priority: .background) {
  withTaskGroup(of: Void.self) { group in 
    group.async { writeToCache(content) }
    group.async { ... }
    group.async { ... }
  }
}

Since the detached task is a background task, the priority also applies to all of its child tasks.

Tasks are just part of Swift concurrency. They integrate with the rest of this large feature.

Tasks integrate with the OS and have low overhead. The Swift concurrency: Behind the scenes session provides more info.

WWDC Notes: Meet async await in Swift

There are tons of async/await compatible functions built into the SDK, often with both an async version and a completion handler based version.

Sync code blocks threads, async code doesn’t

When writing async code with completion handlers you unblock threads, but it’s easy to forget to call your completion handler; for example, when a guard let fails and you return from the else clause without calling it. Swift can’t enforce calling the completion handler in the compiler, which can lead to subtle bugs.

You can’t throw errors to completion handlers; we usually use Result for this. That adds “ceremony” to our code, which isn’t ideal.

Futures can help clean up a callback based flow, and with them you can’t forget to call a completion handler, but they’re still not ideal.

Async functions fix this. An async function is suffixed with async:

func doSomething() async

The keyword appears before the return type and before throws:

func doSomething() async throws -> ReturnType

Calling an async function and retrieving its result uses await:

let result = await doSomething()

If the method throws, you use try await:

let result = try await doSomething()

On the line after your async call, the result of doSomething() is available. It was produced without blocking the current thread; execution of your code is paused and the thread is freed up where you write await.

Not having to nest completion closures, call completion handlers, and forward errors makes code much simpler. Errors can be thrown so you don’t need Result.
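To illustrate the difference, here is a hypothetical fetchScore in both styles (the name and value are made up):

```swift
import Dispatch

// Completion-handler style: the compiler cannot verify that every
// code path calls the completion handler, and errors need Result.
func fetchScore(completion: @escaping (Result<Int, Error>) -> Void) {
    DispatchQueue.global().async {
        completion(.success(42))
    }
}

// async/throws style: every path must return a value or throw an
// error, which the compiler enforces.
func fetchScore() async throws -> Int {
    return 42
}
```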

In addition to functions, properties and initializers can also be async.

var property: Type? {
  get async {
    return await self.computeProperty()
  }
}

If the getter can throw, add throws after the async:

var property: Type? {
  get async throws {
    return await self.computeProperty()
  }
}

We can also use asynchronous sequences. These sequences generate their values asynchronously. We use them like this:

for await value in list {
  let transformed = await transform(value)
  // use transformed
}
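A sketch using AsyncStream, one of the built-in AsyncSequence types, to produce a few values and consume them with for await:

```swift
// A simple asynchronous sequence that yields three numbers and finishes.
let numbers = AsyncStream<Int> { continuation in
    for n in 1...3 {
        continuation.yield(n)
    }
    continuation.finish()
}

// Iterate with for await; each element is awaited as it's produced.
var doubled = [Int]()
for await value in numbers {
    doubled.append(value * 2)
}
```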

There’s a specific session on “Meet AsyncSequence”. There’s also a specific session on Swift’s Structured Concurrency that explains running tasks in parallel for example.

Normal function calls start, do work, and return something. When you call a function, your “source” function gives up control of the thread to the called function, which in turn gives control back when it’s done. This chain always keeps control of the thread.

In an async function, the function can suspend and allow the system to decide whether it will give control back to the async function by resuming it, or it might let somebody else do work on the thread until it makes sense for the async function to get control back.

Marking something as async or with await doesn’t guarantee that your function will suspend, just that it might suspend. Or rather, it will suspend if needed.

The await keyword signals to the developer that the state of your app can change dramatically while a function is suspended and the thread that the function is on is free to do other work. Swift does not guarantee which thread it will resume your function on. This is an implementation detail that you shouldn’t care about.

Since an async function can suspend, you have to call it from an async context to account for this suspension. Await marks where execution might be suspended. While a function is suspended, other work can be done on the thread (not guaranteed).

XCTest has support for async out of the box. No more need for expectations.

You can test async code by marking the test as async and calling your async work like you would normally, asserting that no errors are thrown for example.

Calling asynchronous code from a SwiftUI view (or non-async context) is done by calling the async task function:

async {
  // call async code
}

This makes it easy to call out into async code from a non-async place.

To learn more:

  • Explore structured concurrency in Swift
  • Discover concurrency in SwiftUI

Getting started is easiest by starting small with some built-in Apple APIs that were converted to be async.

A common pattern to update is one where you have a completion handler. The Swift compiler automatically provides async versions of imported Objective-C code that takes a completion handler.

Some delegate methods take a completion handler that developers must call to communicate an async result. Similar to functions that developers call, these delegate methods are now also async which means we can return values from them or throw errors rather than having to call a completion handler.

There are several sessions about this. The Core Data async/await one is an example of this.

NSAsynchronousFetchRequest fetches objects asynchronously, making it a good candidate to integrate with async/await. This API works with a completion handler.

We can wrap custom async work through continuations. A continuation is used to suspend a function and resume it when appropriate. We do so through withCheckedThrowingContinuation, or withCheckedContinuation if you want a non-throwing version. You can await a call to these functions to provide a suspension point.

Then you can do whatever you need and call resume(throwing:) or resume(returning:) to resume the code after having done the work.
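A sketch that bridges a hypothetical completion handler based legacyFetch into an async function (legacyFetch, fetch, and MissingValueError are made-up names):

```swift
import Dispatch

struct MissingValueError: Error {}

// Hypothetical legacy API we want to wrap.
func legacyFetch(completion: @escaping (Int?, Error?) -> Void) {
    DispatchQueue.global().async {
        completion(10, nil)
    }
}

// Suspend with a checked continuation and resume it exactly once,
// covering every possible combination of value and error.
func fetch() async throws -> Int {
    try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Int, Error>) in
        legacyFetch { value, error in
            if let error = error {
                continuation.resume(throwing: error)
            } else if let value = value {
                continuation.resume(returning: value)
            } else {
                continuation.resume(throwing: MissingValueError())
            }
        }
    }
}
```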

You must always call resume exactly once. If there are code paths where you don’t call resume, Swift will warn you. Calling resume more than once is a programming error, and a checked continuation makes sure your code crashes if this happens.

You can store a checked continuation on a class. You can then resume (and nil out) a continuation when needed. This can be used for await a delegate’s response to an action.
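A sketch of this pattern with hypothetical names (Prompt, ask, finish): the async side suspends on a stored continuation, and the “delegate” side resumes it later.

```swift
// A delegate-style object that stores a continuation so an async
// caller can await the eventual answer.
final class Prompt {
    private var continuation: CheckedContinuation<Bool, Never>?

    // Suspends the caller until finish(answer:) is called.
    func ask() async -> Bool {
        await withCheckedContinuation { self.continuation = $0 }
    }

    // Called from the delegate side; resume exactly once, then nil out
    // so the continuation can never be resumed a second time.
    func finish(answer: Bool) {
        continuation?.resume(returning: answer)
        continuation = nil
    }
}
```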

Swift concurrency: Behind the scenes explains more about this suspend/resume cycle.