How to add a privacy manifest file to your app for required reason API usage?

Apple has recently introduced a new requirement: apps that use certain APIs on Apple's mobile platforms (iOS, iPadOS, tvOS, watchOS) must declare their intended use of those APIs. This requirement went into effect on May 1st, which means that any app updates or submissions that don't meet it will be rejected with a "Missing API Declaration" message, also referenced as ITMS-91053.

In this post, I'd like to show you how you can add a privacy manifest file to your app so that you can resolve rejections related to ITMS-91053.

We'll go over the following topics:

  • Which APIs require a declaration in a privacy manifest?
  • How do I add a privacy manifest to my project?
  • What about third party SDKs and app extensions?

Let's dive right in, shall we?

The easiest way for you to build your privacy manifest is through my Privacy Manifest Generator. Use it alongside this article to effortlessly get your manifest in order.

If you prefer to learn from videos, watch the video below:

Which APIs require a declaration in a privacy manifest?

Starting May 1st 2024, there's a large list of APIs that require a declaration in your privacy manifest file. For a full overview, you should take a look at Apple's documentation since the list is simply too long for me to repeat here.

Generally speaking though, if you use an API from one of the following categories you almost certainly will need to add a declaration to your privacy manifest:

  • File timestamp APIs
  • System boot time APIs
  • Disk space APIs
  • Active keyboard APIs
  • User Defaults APIs

Based on this list, I think it's highly likely that you'll be adding a privacy manifest to your app, even if you're building a small and simple app, because a lot of apps use UserDefaults to store some basic user information.
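To give a concrete sense of how low the bar is, even a tiny snippet like the following counts as required reason API usage (the key name here is just a hypothetical example):

```swift
import Foundation

// Even a simple convenience like persisting an onboarding flag
// means your app accesses the UserDefaults required reason API:
UserDefaults.standard.set(true, forKey: "hasSeenOnboarding")
let hasSeenOnboarding = UserDefaults.standard.bool(forKey: "hasSeenOnboarding")
```

If code like this appears anywhere in your app (or in an SDK you ship), the UserDefaults category needs a declaration.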

For the purposes of this post, I'll show you how you can add an appropriate declaration to your privacy manifest for UserDefaults. The steps are pretty much the same for every API, they just have different keys that you're supposed to use.

How do I add a privacy manifest file to my project?

You can add your privacy manifest just like you add any other file to your project. In Xcode, select file->new->file and look for the privacy manifest file type:

Searching for Privacy in the new file picker

With the privacy manifest file type selected, click Next to add your privacy manifest to your app target. Xcode doesn't select your target by default, so don't skip doing this yourself. You should keep the default name that Xcode chose for you: PrivacyInfo.

Adding the privacy manifest to your target

Now that you have your privacy manifest added to your app, it's time to add the correct contents to it.

The first step in the process of figuring out which keys you need to add is to go to Apple's requirements page to find the reference codes that best apply to your usage of required reason APIs:

A screenshot of Apple's documentation for UserDefaults required reasons.

In the case of UserDefaults, we're probably interested in one of two keys: CA92.1 or 1C8F.1, depending on whether you're building an app that uses an App Group. Make sure you read every description carefully to ensure you're not missing any small details or nuances in the descriptions; they can be pretty hard to read sometimes.

My app isn't part of an App Group so I'll need to declare CA92.1 in my privacy manifest file.

First, I'll need to add the Privacy Accessed API Types key to my privacy manifest file. Do this by clicking the little + icon next to the App Privacy Configuration key that's already in the privacy manifest. The type of Privacy Accessed API Types should be an array.

Next, add a new item to your Privacy Accessed API Types array. The type of this new item should be a dictionary. In the case of accessing UserDefaults, you'll first need to add a key of Privacy Accessed API Type with the value NSPrivacyAccessedAPICategoryUserDefaults to this dictionary. The second key you add to your dictionary is Privacy Accessed API Reasons, which is an array. To that array, you add the code for your access reason. In our case that's CA92.1.

It's pretty tough to correctly describe this plist so let's just go ahead and look at an example:

A screenshot of the privacy manifest file

Note that I only pasted CA92.1 into my privacy manifest as my value for the access reason and Xcode expanded that into the text you see in the screenshot. I personally find it easier to look at the raw XML for this file; if you right-click it and select Open As > Source Code, you can view the XML source:

Example of opening a privacy manifest as source code

Here's what the source code for my privacy manifest looks like:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <string>CA92.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>

Repeat the steps above for any other required reason APIs you access. If you forget any, Apple will email you with information on which keys you forgot in your rejection email which will help you figure out what to add if you're missing anything.

What about third party SDKs and app extensions?

Every binary in your app must declare its own privacy manifest. So if you have app extensions, they must declare required API usage separately; you can't put all declarations in a single file in your app target.

Your app extensions can use the same file, but it still needs to be added to their targets explicitly to be discovered correctly.

Third party SDKs must also include a privacy manifest explicitly so if you work on an SDK, now is the time to really make sure you have this done.

In Summary

Adding a privacy manifest file is a new requirement from Apple that, in my opinion, could have been handled better. Manually working on plist files is a tedious job and the keys and values aren't that easy to manage.

Hopefully this post will help you get your app covered for this new requirement so that you can avoid nasty surprises when you submit your app for review!

Expand your learning with my books

Practical Swift Concurrency (the video course) header image

Learn everything you need to know about Swift Concurrency and how you can use it in your projects with Practical Swift Concurrency the video course. It contains:

  • About ten hours worth of videos and exercises
  • Sample projects that use the code shown in the videos.
  • FREE access to the Practical Swift Concurrency book
  • Free updates for future iOS and Swift versions.

The course is available on Teachable for just $89

Enroll now

What is defer in Swift?

Sometimes we write code that needs to set some state or perform some work at the start of a function, and at the end of that same function we might have to reset that state or perform some cleanup, regardless of why we’re exiting the function.

For example, you might have a function that creates a new Core Data object and depending on whether you’re able to enrich the object with data from the network you want to exit the function early. Regardless of how and why you exit the function, you want to save your newly created object.

Writing our code without defer

Here’s what that code would look like without Swift’s defer statement:

func createMovie(
  named title: String,
  in context: NSManagedObjectContext
) async throws -> Movie {

  let movie = Movie(context: context)
  movie.title = title

  guard let data = try? await network.fetchExtraMovieData() else {
    try context.save()
    return movie
  }

  movie.rating = data.rating

  try context.save()
  return movie
}

Let me start by saying that there are other ways to write this code; I know. The point isn’t that we could refactor this code to have a single return statement. The point is that we have multiple exit points for our function, and we have to remember to call try context.save() on every path.

Cleaning up our code with defer

With Swift’s defer we can clean our code up by a lot. The code that we write in our defer block will be run whenever we’re about to leave our function. This means that we can put our try context.save() code in the defer block to make sure that we always save before we return, no matter why we return:

func createMovie(
  named title: String,
  in context: NSManagedObjectContext
) async -> Movie {

  let movie = Movie(context: context)
  movie.title = title

  defer {
    do {
      try context.save()
    } catch {
      context.rollback()
    }
  }

  guard let data = try? await network.fetchExtraMovieData() else {
    return movie
  }

  movie.rating = data.rating

  return movie
}

Notice that we changed more than just dropping a defer into our code. We had to handle errors too. That’s because a defer block isn’t allowed to throw errors. After all, we could be leaving a function because an error was thrown; in that case we can’t throw another error.

Where can we use a defer block?

Defer blocks can be used in functions, if statements, closures, for loops, and any other place where you have a “scope” of execution. Usually you can recognize these scopes by their { and } characters.

If you add a defer to an if statement, your defer will run before leaving the if block.
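Here’s a small sketch of that behavior; the log array is only there to make the ordering visible:

```swift
var log: [String] = []

func check(_ value: Int) {
  if value > 10 {
    defer { log.append("leaving if") }
    log.append("inside if")
    // the defer runs here, as we leave the if block
  }
  log.append("after if")
}

check(42)
// log is now ["inside if", "leaving if", "after if"]
```

Note that the deferred work runs before "after if" is appended, because we leave the if scope first.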

Defer and async / await

Defer blocks in Swift run synchronously. This means that even when you defer in an async function, you won’t be able to await anything in that defer. In other words, a defer can’t be used as an asynchronous scope. If you find yourself in need of running async work inside of a defer you’ll have to launch an unstructured Task for that work.

While that would allow you to run async work in your defer, I wouldn’t recommend doing that. Your defer will complete before your task completes (because the defer won’t wait for your Task to end) which could be unexpected.
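A sketch of what that looks like; the cleanup function and the semaphore are only here to make the ordering observable:

```swift
import Foundation

var didCleanUp = false
let semaphore = DispatchSemaphore(value: 0)

func cleanUp() async {
  didCleanUp = true
  semaphore.signal()
}

func doWork() {
  defer {
    // A defer can't await, so we hand the async work to an unstructured Task.
    // Note: doWork() returns before this Task finishes.
    Task { await cleanUp() }
  }
  // synchronous work happens here
}

doWork()
semaphore.wait() // without this, the program could exit before the Task runs
```

The semaphore is exactly the kind of extra coordination you’d need because the defer doesn’t wait for the Task, which is why I wouldn’t recommend this pattern.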

In Summary

Swift’s defer blocks are incredibly useful to wrap up work that needs to be done when you exit a function no matter why you might exit the function. Especially when there are multiple exit paths for your function.

Defer is also useful when you want to make sure that you keep your “start” and “finish” code for some work in a function close together. For example, if you want to log that a function has started and ended you could write this code on two consecutive lines with the “end” work wrapped in defer.
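A sketch of that logging pattern, using an array instead of a real logger to keep the example self-contained:

```swift
var events: [String] = []

func performWork() {
  events.append("start")
  defer { events.append("end") } // lives right next to its "start" counterpart

  // the actual work, with as many early returns as we like
  events.append("working")
}

performWork()
// events is now ["start", "working", "end"]
```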

In my experience this is not a language feature that you’ll use a lot. That said, it’s a very useful feature that’s worth knowing about.

Deciding between a computed property and a function in Swift

In Swift, we can use computed properties to derive a value from other values defined on the same object. Being able to do this is super convenient because it means that we don’t have to manually make sure that we update derived properties every time one of the “source” values changes. We’ll just recompute the property every time it’s accessed!

If you prefer to learn from video, here's the companion video for this blog post:

This is very similar to having a function that takes no arguments and returns a value:

struct User {
  let givenName: String
  let familyName: String

  // Should we use this?
  var fullName: String {
    return "\(givenName) \(familyName)"
  }

  // Or this?
  func fullName() -> String {
    return "\(givenName) \(familyName)"
  }
}

So how do we make a choice between a function with no arguments and a computed property?

I like to keep the following rules of thumb in mind:

  • Accessing a property should never have side effects; if accessing the property mutates any values on your object, you should use a function.
  • Accessing a property should (ideally) have O(1) complexity (learn more about Big-O and what O(1) means right here). For functions it's more expected that they might be O(n) or worse.
  • Your property’s computation should be “simple”. This is probably the most subjective of all but if you’re writing more than a handful of lines you should ask yourself whether a function would look better.
  • The property’s output should be deterministic. In other words, accessing the same property multiple times in a row should get me the same result every time. If not, use a function; it fits the non deterministic behavior better in my opinion.

When I apply these rules to the example above, I would pick a computed property. We can compute the name in constant time, the property's getter is simple (one line), and accessing it is completely free of side effects. A perfect candidate for a computed property.
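For contrast, here's a sketch (with hypothetical names) of cases where my rules of thumb point toward a function instead:

```swift
struct Statistics {
  var samples: [Double]

  // O(1), deterministic, no side effects: a computed property fits.
  var count: Int {
    samples.count
  }

  // O(n log n) work: a function signals that this access has a cost.
  func sortedSamples() -> [Double] {
    samples.sorted()
  }

  // Non-deterministic output: a function fits better than a property.
  func randomSample() -> Double? {
    samples.randomElement()
  }
}
```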

Of course, these are all just my opinions but I’ve found that most developers that I’ve worked with over the years either agree with these rules or have rules that are only slightly different from mine.

How do you decide between a function or a computed var? Let me know on Mastodon or Twitter!

if case let in Swift explained

In Swift, we can use the case keyword in multiple places. Most commonly, a case is used in switches, but since you’re here, you might have seen a case in combination with an if statement.

In this post, we’ll explore different places where we can use the case keyword to perform something called pattern matching in Swift.

Pattern matching is a powerful feature of Swift that allows us to perform highly elegant checks to see if a given type matches a certain value.

Understanding pattern matching

The syntax for if case let is somewhat complex. So let’s start with a quick code sample that demonstrates how you can write an if statement that attempts to match an enum case:

enum ShapeType {
  case rectangle, triangle, circle
}

let myShape = ShapeType.rectangle

if case .rectangle = myShape {
  print("myShape is a rectangle")
}

Now, let me start by saying we didn’t need to use the case syntax here. We could have just as well written the following:

if myShape == .rectangle {
  print("myShape is a rectangle")
}

However, I like the earlier example because it introduces the case syntax in a pretty clean way.

Now, before I dig in to show you the case let syntax, I’d like to take a look at the form of pattern matching in Swift that you’re most likely familiar with:

switch myShape {
case .rectangle:
  print("myShape is a rectangle")
case .triangle:
  break
case .circle:
  break
}

A switch in programming allows us to write a list of patterns that we want to compare a given value to. This is much more convenient than writing a bunch of if / else statements.

The case keyword in Swift does not perform any special magic. Instead, it invokes a special operator that compares our pattern (whatever we write after case) to a value (the value we’re switching over in a switch).

So… how does that help you understand if case let syntax?

Understanding if case let

Once you know that if case .rectangle = myShape invokes a comparison between .rectangle and myShape the following suddenly makes a little more sense:

enum LoadingState {
  case inProgress(Task<String, Never>)
  case loaded(String)
}

let state = LoadingState.loaded("Hello, world")

if case .loaded(let string) = state {
  print("Loaded string is \(string)")
}

// or

if case let .loaded(string) = state {
  print("Loaded string is \(string)")
}

In both comparisons, we compare our enum case of .loaded and we assign its associated value to a constant. I prefer case .loaded(let string) myself because it looks a little less strange than case let .loaded(string), but they’re functionally equivalent.

And in a switch, you’d use the same patterns to match against which always helps me to remember:

switch state {
case .inProgress(let task):
  break
case .loaded(let string):
  print("Loaded string is \(string)")
}

Again, the pattern here is that we compare our case to a value. In a switch this looks a lot more natural than it does in an if statement but they’re the same under the hood and they both use the underlying ~= comparator.

That said, writing if case .loaded(let string) = state is certainly more convenient than writing a full-blown switch when you’re only interested in a single case.
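As a small illustration of that ~= operator, ranges implement it too, which is why you can match ranges in a switch; the if case form and a direct ~= call perform the same check (the value here is just an example):

```swift
let score = 3

// `case 1...5` in a switch or `if case` uses the range's ~= operator…
if case 1...5 = score {
  print("score is between 1 and 5")
}

// …which we can also invoke directly:
let inRange = 1...5 ~= score
// inRange is true
```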

How Enterprise level CI/CD with AppCircle helps you scale

As teams grow and companies mature you’ll often find that it gets harder and harder to manage processes that seemed to be so simple before.

When I worked in startups, one of my favorite things was how quick the feedback cycle was on pretty much everything I did. When someone designed a new feature we could build it and ship it on TestFlight within a couple of hours. If the designer liked the way the implemented feature worked, they would sign off, and off to App Review we’d go.

Usually everybody in the company would be on the TestFlight version of an app and they’d install it whenever they wanted. It was all just a formality anyway because in a startup it’s important to keep shipping and improving. Feedback from other departments is great but at the end of the day you’re aiming to ship new features and improvements on a regular cycle.

In small teams you can manage these cycles quite easily. You probably don’t need any automation and you definitely don’t need advanced features and testing strategies that help you get multiple alpha and beta versions of your app into different teams’ hands.

In this post, I’d like to look past the startup phase and fast forward into the problems that arise once you reach a point where you could be considered an enterprise company. These are usually companies with large dev teams, multiple departments, and heightened security needs because of the amount of data and information they process.

There are three aspects of infrastructure in an enterprise environment that I’d like to highlight:

  • Shipping different builds in parallel
  • The importance of security and data ownership
  • Automating builds and app delivery

This is a sponsored post for AppCircle. Note that every sponsored post on this website is an honest review of a product and is always an accurate representation of my thoughts and opinions. Sponsored posts help keep the content on this site available for free.

Shipping different builds in parallel

As projects grow more complex it’s not uncommon to want to have multiple versions of your app installed on a testing device. For example, you might be working on a new feature that relies on your server’s staging environment while also applying some bug fixes on your app’s production build. And maybe alongside these two builds you also want to have the App Store version of your app installed.

It might sound like a lot, overkill even, but being able to use a couple of different bundle identifiers for your apps to install them alongside each other is incredibly useful even when you’re just a small team.

In a larger company you’ll have your QA department, managers, and other roles that have different reasons to install different versions of your app.

Having a platform that makes it easy to install different build versions of your app (alpha, staging, prod) and even different versions of those builds (differentiated by build numbers) will allow everybody to do their job well. This is particularly true for QA where they’ll want to install specific builds to test new features or bug fixes.

Platforms like AppCircle offer ways to allow teams to download and test specific builds as needed. I’ve found that AppCircle’s approach to this works as well as you’d expect and has the ability to create different groups of users and assign specific builds to them. This means that you can send QA very specific and testable builds of your app while your managers only have access to beta builds that are almost ready to go to production.

If you’re working within a large company that requires enterprise-level access control and data ownership, let’s take a look at how AppCircle solves this for their enterprise customers.

The importance of security and data ownership

The more people have access to your user’s data and your app’s experimental and in-development features, the more security risks you’re taking on. Limiting access to data and app builds is an essential feature. When you’re looking for a platform that runs your builds and hosts your test binaries it’s essential that you make sure that the platform’s security features align with your needs.

When you require enterprise features, AppCircle has got you. They have very granular access controls which I think is an essential feature.

Enterprise customers for AppCircle all have access to SSO, which in corporate environments I’ve always seen listed as a must-have. At the moment AppCircle offers LDAP as its SSO provider, but an Okta integration is in the works. And if your company uses a different SSO provider, I know that AppCircle is always open to supporting more SSO providers in its product.

SSO for enterprise is an absolute must have since a corporation wants to be able to shut down or lock accounts with a single step and not worry about which other accounts a user might have; they want to manage their users and the services they access in a single place. Less fragmentation in this sense means less risk of security breaches.

Most importantly, it might be absolutely crucial for you to be able to self-host services so that you can make sure that not just your accounts but also your data are completely protected using standards and tools that your company uses and requires.

Large players like GitHub and Atlassian offer this and so does AppCircle.

You can host AppCircle on servers you own while retaining access to first-class support that’s provided through a Slack channel that gives you access to experts directly. This is something that I haven’t encountered before and I think it’s really powerful that AppCircle does this to help keep their enterprise customers going.

Self-hosting’s biggest drawback is always that you’re taking on cost, effort, and risk to make sure your instances keep running. I was pretty impressed to learn that AppCircle goes to great lengths to help reduce each of these three drawbacks by providing the best support they possibly can.

Automating builds and app delivery

While it’s great that AppCircle provides all these enterprise features that I’ve mentioned above, their core business is to become your build and app delivery system. The features they provide for this are exactly what you’d hope for. You can connect AppCircle to your git repository, automatically trigger builds on push or PR creation, and you can run periodic builds to provide nightly alphas, for example.

The pipelines you build with AppCircle integrate all the way from your git repository to their enterprise app store (where employees can download your internal apps from), their beta testing platform, and even to App Store delivery. All in all they provide a good experience setting this up with reliable builds and they really go to great lengths to make sure that their CI is everything you expect from a good CI provider.

In Summary

As mentioned in the introduction, a company’s needs change as the company grows in terms of complexity. Once you hit a point where you can consider yourself an enterprise developer, it makes sense to start picking your service providers more carefully.

You’ll require fast and reliable support, advanced security measures, granular user and account management, sometimes you’ll even need to have the service running on servers that you own.

AppCircle can help you do all of this and it’s honestly an impressive product that’s growing and improving rapidly. The mix of app distribution, analytics, and CI that they offer is super useful and if I were to request more I would love to see crash reporting be a part of AppCircle too so that you can fully rely on an on-premises AppCircle instance that works for all your infrastructure needs without sending your data to a server you don’t own or control.

If you’d like to learn more about AppCircle and see whether it makes sense for you and your company to switch your infrastructure please let me know so I can get you connected to the right people for a demo and a chat.

What are lazy vars in Swift?

Sometimes when you’re programming you have properties that are pretty expensive to compute, so you want to make sure you don’t perform any work you don’t absolutely have to.

For example, you might have the following two criteria for your property:

  • The property should be computed once
  • The property should be computed only when I need it

If these two criteria sound like what you’re looking for, then lazy vars are for you.

A lazy variable is defined as follows:

class ExamResultsAnalyser {
  let allResults: [ExamResult]

  lazy var averageGrade: Float = {
    return allResults.reduce(0.0, { total, result in
      return total + result.grade
    }) / Float(allResults.count)
  }()

  init(allResults: [ExamResult]) {
    self.allResults = allResults
  }
}

Notice the syntax that's used to create our lazy var. The variable is defined as a var and not as a let because accessing the property mutates our object. Also notice that we're using a closure to initialize this property. This is not mandatory but it's by far the most common way I've initialized my lazy var properties so far. If you want to learn more about closures as an initialization mechanism, take a look at this post where I explore the topic in depth.

In this case, we’re trying to calculate an average grade based on some exam results. If we only need to do this for a handful of students this would be lightning fast but if we need to do this for a couple thousand students we’d want to postpone the calculation to the last possible second. And since an exam result is immutable, we don’t really want to recalculate the average every time we access the averageGrade property.

This is actually a key difference between computed properties and a lazy var. Both are used to compute something upon access, but a computed property performs its computation every time the property is accessed. A lazy var on the other hand only computes its value once; upon first access.
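A small sketch makes that difference visible; the counters are only there to observe how often each computation runs:

```swift
class Measurements {
  var computedRuns = 0
  var lazyRuns = 0

  // A computed property runs its body on every access.
  var computedValue: Int {
    computedRuns += 1
    return 42
  }

  // A lazy var runs its initializer once, on first access.
  lazy var lazyValue: Int = {
    self.lazyRuns += 1
    return 42
  }()
}

let m = Measurements()
_ = m.computedValue
_ = m.computedValue
_ = m.lazyValue
_ = m.lazyValue
// m.computedRuns is 2, m.lazyRuns is 1
```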

Note that accessing a lazy var counts as a mutating action on the enclosing object. So if you add a lazy var to a struct, the following code would not compile:

struct ExampleStruct {
  lazy var randomNumber = Int.random(in: 0..<100)
}

let myStruct = ExampleStruct()
myStruct.randomNumber

The compiler will show the following error:

Cannot use mutating getter on immutable value: 'myStruct' is a 'let' constant

And it will offer the following fix:

Change 'let' to 'var' to make it mutable

Because accessing the lazy var is a mutating operation, we must define our myStruct constant as a variable if we want to be able to access the randomNumber lazy var.
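With that fix applied, the example compiles, and we can also see that the lazy var is only computed once:

```swift
struct ExampleStruct {
  lazy var randomNumber = Int.random(in: 0..<100)
}

var myStruct = ExampleStruct()     // var instead of let
let first = myStruct.randomNumber  // computed on first access
let second = myStruct.randomNumber // returns the stored value
// first == second: the value was only computed once
```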

In Summary

All in all lazy var is an incredibly useful tool when you need to postpone initialization for a property to the last possible millisecond, and especially when it’s not guaranteed that you’ll need to access the property at all.

Note that a lazy var does not magically make the (expensive) computation that you’re doing faster. It simply allows you to not do any work until the work actually needs to be done. If you’re pretty sure that your lazy var will be accessed in a vast majority of cases it’s worth considering not making the property lazy at all; the work will need to be done at some point either way, and having less complexity is always a good thing in my book.

for vs forEach in Swift: The differences explained

Swift offers multiple ways to iterate over a collection of items. In this post we’ll compare a normal for loop to calling forEach on a collection.

Both for x in collection and collection.forEach { x in } allow you to iterate over elements in a collection called collection. But what are their differences? Does one outperform the other? Is one better than the other? We’ll find out in this post.

Using a regular for loop

I’ve written about for loops in Swift before so if you want an in-depth look, take a look at this post.

A regular for loop looks as follows:

for item in list {
  // use item
}

Unless we break out of our for loop with either a break or a return statement, the loop will iterate over all elements in our list without interruption. For loops in Swift allow us to use the continue keyword to cut a specific iteration short and move on to the next element.

Using forEach to iterate elements

If we use a forEach to iterate a collection of items, we can write code as follows:

list.forEach { item in
  // use item
}

While for loops are a language construct, forEach is a function defined on collections. This function takes a closure that will be called for every element in the collection.

If we want to abort our iteration, we can only return from our forEach closure, which is equivalent to using continue in a classic for loop. Returning from a forEach does not end the loop, it just aborts the current iteration.
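Here’s a sketch of that behavior:

```swift
let numbers = [1, 2, 3, 4, 5]
var evens: [Int] = []

numbers.forEach { number in
  guard number % 2 == 0 else {
    return // acts like `continue`: only this iteration ends
  }
  evens.append(number)
}
// evens is [2, 4]: the closure still ran for all five elements
```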

Making a decision

A forEach is mostly convenient when you’re chaining together functions like map, flatMap, filter, etc. and you want to run a closure for literally every element in your list.

In almost every other case I would recommend using a plain for loop over a forEach, both because you can break out of the loop if needed and because I prefer the readability of a for loop.

Performance-wise the two mechanisms are similar if you want to iterate over all elements. However, as soon as you want to break out of the loop early, the plain for loop and its break keyword beat the forEach.
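For example, finding the first match can stop as soon as it succeeds with a plain for loop:

```swift
let items = [3, 7, 12, 1, 9]
var firstLargeItem: Int?

for item in items {
  if item > 10 {
    firstLargeItem = item
    break // stops iterating; forEach has no equivalent of this
  }
}
// firstLargeItem is 12; the elements after it were never visited
```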

Dispatching to the Main thread with MainActor in Swift

Swift 5.5 introduced loads of new concurrency related features. One of these features is the MainActor annotation that we can apply to classes, functions, and properties.

In this post you’ll learn several techniques that you can use to dispatch your code to the main thread from within Swift Concurrency’s tasks or by applying the main actor annotation.

If you’d like to take a deep dive into learning how you can figure out whether your code runs on the main actor I highly recommend reading this post which explores Swift Concurrency’s isolation features.

Alternatively, if you’re interested in a deep dive into Swift Concurrency and actors I highly recommend that you check out my book on Swift Concurrency or that you check out my video course on Swift Concurrency. Both of these resources will give you deeper insights and background information on actors.

Dispatching to the main thread through the MainActor annotation

The quickest way to get a function to run on the main thread in Swift Concurrency is to apply the @MainActor annotation to it:

class HomePageViewModel: ObservableObject {
  @Published var homePageData: HomePageData?

  @MainActor
  func loadHomePage() async throws {
    self.homePageData = try await networking.fetchHomePage()
  }
}

The code above will run your loadHomePage function on the main thread. The cool thing about this is that the await in this function isn’t blocking the main thread. Instead, it allows our function to be suspended so that the main thread can do some other work while we wait for fetchHomePage() to come back with some data.

The effect of applying @MainActor to this function is that the assignment of self.homePageData happens on the main thread. That’s good, because it’s a @Published property, so we should always assign to it from the main thread to avoid main-thread-related warnings from SwiftUI at runtime.

If you don’t like the idea of having all of loadHomePage run on the main actor, you can also annotate the homePageData property instead:

class HomePageViewModel: ObservableObject {
  @MainActor @Published var homePageData: HomePageData?

  func loadHomePage() async throws {
    self.homePageData = try await networking.fetchHomePage()
  }
}

Unfortunately, this code leads to the following compiler error:

Main actor-isolated property 'homePageData' can not be mutated from a non-isolated context

This tells us that we’re trying to mutate a property, homePageData on the main actor while our loadHomePage method is not running on the main actor which is data safety problem in Swift Concurrency; we must mutate the homePageData property from a context that’s isolated to the main actor.

We can solve this issue in one of three ways:

  1. Apply an @MainActor annotation to both homePageData and loadHomePage
  2. Apply @MainActor to the entire HomePageViewModel to isolate both the homePageData property and the loadHomePage function to the main actor
  3. Use MainActor.run or an unstructured task that’s isolated to the main actor inside of loadHomePage.
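Option one from this list would look as follows; both the property and the method carry the annotation, so the mutation happens in a main actor-isolated context:

```swift
class HomePageViewModel: ObservableObject {
  @MainActor @Published var homePageData: HomePageData?

  @MainActor
  func loadHomePage() async throws {
    // Both the method and the property are isolated to the main actor,
    // so this assignment compiles without the error shown above.
    self.homePageData = try await networking.fetchHomePage()
  }
}
```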

The quickest fix is to annotate our entire class with @MainActor to run everything that our view model does on the main actor:

@MainActor
class HomePageViewModel: ObservableObject {
  @Published var homePageData: HomePageData?

  func loadHomePage() async throws {
    self.homePageData = try await networking.fetchHomePage()
  }
}

This is perfectly fine and will make sure that all of your view model work is performed on the main actor. This is actually really close to how your view model would work if you didn’t use Swift Concurrency since you normally call all view model methods and properties from within your view anyway.
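With the whole view model isolated to the main actor, calling it from SwiftUI is straightforward because `body` is main actor-isolated too. A hypothetical view using this view model might look like this (the view name and labels are mine):

```swift
import SwiftUI

struct HomePageView: View {
  @StateObject private var viewModel = HomePageViewModel()

  var body: some View {
    Text(viewModel.homePageData != nil ? "Loaded" : "Loading…")
      .task {
        // task(_:) runs on the main actor, matching the view model's isolation
        try? await viewModel.loadHomePage()
      }
  }
}
```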

Let’s see how we can leverage option three from the list above next.

Dispatching to the main thread with MainActor.run

If you don’t want to annotate your entire view model with the main actor, you can isolate chunks of your code to the main actor by calling the static run method on the MainActor object:

class HomePageViewModel: ObservableObject {
  @Published var homePageData: HomePageData?

  func loadHomePage() async throws {
    let data = try await networking.fetchHomePage()
    await MainActor.run {
      self.homePageData = data
    }
  }
}

Note that the closure that you pass to run is not marked as async. This means that any asynchronous work that you want to do needs to happen before your call to MainActor.run. All of the work that you put inside of the closure that you pass to MainActor.run is executed on the main thread which can be quite convenient if you don’t want to annotate your entire loadHomePage method with @MainActor.

The last method to dispatch to main that I’d like to show is through an unstructured task.

Isolating an unstructured task to the main actor

if you’re creating a new Task and you want to make sure that your task runs on the main actor, you can apply an @MainActor annotation to your task’s body as follows:

class HomePageViewModel: ObservableObject {
  @Published var homePageData: HomePageData?

  func loadHomePage() async throws {
    Task { @MainActor in
      self.homePageData = try await networking.fetchHomePage()
    }
  }
}

In this case, we'd have been better off annotating our loadHomePage method with @MainActor directly; we're creating an unstructured task that we don't need, just to isolate its body to the main actor.

However, if you’d have to write loadHomePage as a non-async method creating a new main-actor isolated task can be quite useful.

In Summary

In this post you’ve seen several ways to dispatch your code to the main actor using @MainActor and MainActor.run. The main actor is intended to replace your calls to DispatchQueue.main.async and with this post you have all the code examples you need to be able to do just that.

Note that some of the examples provided in this post produce warnings under strict concurrency checking. That’s because the HomePageViewModel I’m using in this post isn’t Sendable. Making it conform to Sendable would get rid of all warnings so it’s a good idea to brush up on your knowledge of Sendability if you’re keen on getting your codebase ready for Swift 6.

Enabling upcoming feature flags for Swift using Xcode

If you’re keen on reading about what’s new in Swift or learn about all the cool things that are coming up, you’re probably following several folks in the iOS community that keep track and tell you about all the new things. But what if you read about an upcoming Swift feature that you’d like to try out? Do you have to wait for it to become available in a new Xcode release?

Sometimes the answer is Yes, you’ll have to wait. But more often than not a Swift evolution proposal will have a header that looks a bit like this:

Screencap of Swift Evolution proposal SE-0430

Notice the Implementation on main and gated behind -enable-experimental-feature TransferringArgsAndResults. This tells us that if you were to build Swift directly from its main branch, you would be able to try out this new feature by setting a compiler flag.

Sometimes, you’ll find that the implementation is marked as available on a specific branch like release/5.10 or release/6.0. Without any information about gating the feature behind a flag. This means that the feature is available just by using Swift from the branch specified.

This is great, but… how do you actually use Swift from a specific branch? And where and how do we pass these compiler flags so we can try out experimental features in Xcode? In this post, I’ll answer those questions!

If you prefer learning from videos, I got you. The video below covers the exact same topic:

Installing an alternative Swift toolchain for Xcode

Xcode uses a Swift toolchain under the hood to compile your code. Essentially, this means that Xcode will run a whole bunch of shell commands to compile your code into an app that can run on your device or simulator. When you have the Xcode command line tools installed (which should have happened when you installed Xcode), you can open your terminal and type swift --version to see that there’s a command line interface that lets you use a Swift toolchain.

By default, this will be whichever toolchain shipped with Xcode. So if you have Xcode 15.3 installed running swift --version should yield something like the following output:

❯ swift --version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: arm64-apple-macosx14.0

We can obtain different versions of Swift quite easily from swift.org on their download page.

Here you’ll find different releases of Swift for different platforms. The topmost section will show you the latest release which is already bundled with Xcode. If we scroll down to snapshots however there are snapshots for Trunk Development (main) and upcoming Swift releases like Swift. 6.0 for example.

We can click the Universal download link to install the Swift toolchain that you’re interested in. For example, if you’re eager to try out a cutting edge feature like Swift 6’s isolation regions feature you can download the trunk development toolchain. Or if you’re interested in trying out a feature that has made its way into the Swift 6 release branch, you could download the Swift 6.0 Development toolchain.

Once you’ve downloaded your toolchain and you can install it through a convenient installer. This process is pretty self explanatory.

After installing the toolchain, you can activate this new Swift version in Xcode through the Xcode → Toolchains menu. In the screenshot below you can see that I’m using the Swift Development Snapshot 2024-04-13 (a) toolchain. This is the trunk development toolchain that you saw on swift.org.

Xcode menu expanded with toolchain menu item selected

Once you’ve selected this toolchain, Xcode will use that Swift version to compile your project. This means that if your project is compatible with that Swift version, you can already get a sense of what it will be like to compile your project with a Swift version that’s not available yet.

Note that this may not be entirely representative of what a new Swift version like Swift 6 will be like. After all, we’re using a snapshot built from Swift’s main branch rather than its release/6.0 branch which is what the Swift 6.0 development toolchain is based off of.

Sometimes I’ve found that Xcode doesn’t like swapping toolchains in a project that you’re actively working on and compiling all the time. You’ll see warnings that aren’t supposed to be there or you’ll be missing warnings that you expected to see. I’m pretty sure this is related to Xcode caching stuff in between builds and rebooting Xcode usually gets me back where I’d like to be.

Now that we can use a custom toolchain in Xcode, let’s see how we can opt-in to experimental features.

Trying out experimental Swift features in Xcode

To try out new Swift features, we sometimes need to enable them through a compiler flag. The evolution proposal that goes along with the feature you’d like to try will have an Implementation field in its header that explains which toolchain contains the feature, and whether the feature is gated behind a flag or not.

For example, you might want to try out SE-0414 Region based isolation to see whether it resolves some of your Swift Concurrency warnings.

We’ll use the following code (which is also used as an example in the Evolution proposal) as an example to see whether we’ve correctly opted in to the feature:

// Not Sendable
class Client {
  init(name: String, initialBalance: Double) {  }
}

actor ClientStore {
  var clients: [Client] = []

  static let shared = ClientStore()

  func addClient(_ c: Client) {
    clients.append(c)
  }
}

func openNewAccount(name: String, initialBalance: Double) async {
  let client = Client(name: name, initialBalance: initialBalance)
  await ClientStore.shared.addClient(client) // Warning! 'Client' is non-`Sendable`!
}

To get the warning that we’re expecting based on the code snippet, we need to enable strict concurrency checking. If you’re not sure how to do that, take a look at this post.

After enabling strict concurrency you’ll see the warning pop up as expected.

Now, make sure that you have your new toolchain selected and navigate to your project’s build settings. In the build settings search for Other Swift Flags and make sure you add entries to have your flags look as shown below:

Other Swift Flags with experimental feature set

Notice that I’ve placed -enable-experimental-feature and RegionBasedIsolation as separate lines; not doing this results in a compiler error because the argument won’t be passed correctly.

If you build your project after opting in to the experimental feature, you’ll be able to play around with region based isolation. Pretty cool, right?

You can enable multiple experimental features by passing the experimental feature flag multiple times, or by adding other arguments if that's what the Evolution proposal requires.
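If you manage your build settings through an .xcconfig file rather than the build settings editor, the same flags could be passed like this; RegionBasedIsolation is the feature from this post's example, so swap in whichever features you want to try:

```
OTHER_SWIFT_FLAGS = $(inherited) -enable-experimental-feature RegionBasedIsolation -enable-experimental-feature TransferringArgsAndResults
```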

In Summary

Experimenting with new and upcoming Swift features can be a lot of fun. You'll be able to get a sense of how new features will work, and whether you're able to use these new features in your project. Keep in mind that experimental toolchains shouldn't be used for your production work, so after using an experimental toolchain make sure you switch back to Xcode's default toolchain if you want to ensure that your main project builds correctly.

In this post you’ve also seen how you can play around with experimental Swift features which is something that I really enjoy doing. It gives me a sense of where Swift is going, and it allows me to explore new features early. Of course, this isn’t for everyone and since you’re dealing with a pre-release feature on a pre-release toolchain anything can go wrong.

Actor reentrancy in Swift explained

When you start learning about actors in Swift, you’ll find that explanations will always contain something along the lines of “Actors protect shared mutable state by making sure the actor only does one thing at a time”. As a single sentence summary of actors, this is great but it misses an important nuance. While it’s true that actors do only one thing at a time, they don’t always execute function calls atomically.

In this post, we’ll explore the following:

  • Exploring what actor reentrancy is
  • Understanding why async functions in actors can be problematic

Generally speaking, you’ll use actors for objects that must hold mutable state while also being safe to pass around in tasks. In other words, objects that hold mutable state, are passed by reference, and have a need to be Sendable are great candidates for being actors.

If you prefer to see the contents of this post in a video format, you can watch the video below:

Implementing a simple actor

A very simple example of an actor is an object that caches data. Here’s how that might look:

actor DataCache {
  var cache: [UUID: Data] = [:]
}

We can directly access the cache property on this actor without worrying about introducing data races. We know that the actor will make sure that we won’t run into data races when we get and set values in our cache from multiple tasks in parallel.
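Accessing that property from outside the actor requires an await, which is how the actor serializes access. A quick usage sketch:

```swift
import Foundation

let cache = DataCache()
let someKey = UUID()

Task {
  // Cross-actor reads are awaited; the actor serializes access for us.
  // Note that cross-actor assignments to the property aren't allowed,
  // which is one more reason to add dedicated read and write methods.
  let data = await cache.cache[someKey]
  print(data ?? "no data")
}
```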

If needed, we can make the cache private and write separate read and write methods for our cache:

actor DataCache {
  private var cache: [UUID: Data] = [:]

  func read(_ key: UUID) -> Data? {
    return cache[key]
  }

  func write(_ key: UUID, data: Data) {
    cache[key] = data
  }
}

Everything still works perfectly fine in the code above. We’ve managed to limit access to our caching dictionary and users of this actor can interact with the cache through a dedicated read and write method.

Now let’s make things a little more complicated.

Adding a remote cache feature to our actor

Let’s imagine that our cached values can either exist in the cache dictionary or remotely on a server. If we can’t find a specific key locally our plan is to send a request to a server to see if the server has data for the cache key that we’re looking for. When we get data back we cache it locally and if we don’t we return nil from our read function.

Let’s update the actor to have a read function that’s async and attempts to read data from a server:

actor DataCache {
  private var cache: [UUID: Data] = [:]

  func read(_ key: UUID) async -> Data? {
    print(" cache read called for \(key)")
    defer {
      print(" cache read finished for \(key)")
    }

    if let data = cache[key] {
      return data
    }

    do {
      print(" attempt to read remote cache for \(key)")
      let url = URL(string: "http://localhost:8080/\(key)")!
      let (data, response) = try await URLSession.shared.data(from: url)

      guard let httpResponse = response as? HTTPURLResponse,
              httpResponse.statusCode == 200 else {
        print(" remote cache MISS for \(key)")
        return nil
      }

      cache[key] = data
      print(" remote cache HIT for \(key)")
      return data
    } catch {
      print(" remote cache MISS for \(key)")
      return nil
    }
  }

  func write(_ key: UUID, data: Data) {
    cache[key] = data
  }
}

Our function is a lot longer now but it does exactly what we set out to do; check if data exists locally, attempt to read it from the server if needed and cache the result.

If you run and test this code it will most likely work exactly like you’ve intended, well done!

However, once you introduce concurrent calls to your read and write methods you’ll find that results can get a little strange…

For this post, I’m running a very simple webserver that I’ve pre-warmed with a couple of values. When I make a handful of concurrent requests to read a value that’s cached remotely but not locally, here’s what I see in the console:

 cache read called for DDFA2377-C10F-4324-BBA3-68126B49EB00
 attempt to read remote cache for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read called for DDFA2377-C10F-4324-BBA3-68126B49EB00
 attempt to read remote cache for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read called for DDFA2377-C10F-4324-BBA3-68126B49EB00
 attempt to read remote cache for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read called for DDFA2377-C10F-4324-BBA3-68126B49EB00
 attempt to read remote cache for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read called for DDFA2377-C10F-4324-BBA3-68126B49EB00
 attempt to read remote cache for DDFA2377-C10F-4324-BBA3-68126B49EB00
 remote cache HIT for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read finished for DDFA2377-C10F-4324-BBA3-68126B49EB00
 remote cache HIT for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read finished for DDFA2377-C10F-4324-BBA3-68126B49EB00
 remote cache HIT for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read finished for DDFA2377-C10F-4324-BBA3-68126B49EB00
 remote cache HIT for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read finished for DDFA2377-C10F-4324-BBA3-68126B49EB00
 remote cache HIT for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read finished for DDFA2377-C10F-4324-BBA3-68126B49EB00

As you can see, executing multiple read operations results in having lots of requests to the server, even if the data exists and you expected to have the data cached after your first call.

Our code is written in a way that ensures that we always write a new value to our local cache after we grab it from the remote so we really shouldn’t expect to be going to the server this often.

Furthermore, we’ve made our cache an actor so why is it running multiple calls to our read function concurrently? Aren’t actors supposed to only do one thing at a time?

The problem with awaiting inside of an actor

The code that we’re using to grab information from a remote data source actually forces us into a situation where actor reentrancy bites us.

Actors only do one thing at a time, that’s a fact and we can trust that actors protect our mutable state by never having concurrent read and write access happen on mutable state that it owns.

That said, actors do not like to sit around and do nothing. When we call a synchronous function on an actor that function will run start to end with no interruptions; the actor only does one thing at a time.

However, when we introduce an async function that has a suspension point the actor will not sit around and wait for the suspension point to resume. Instead, the actor will grab the next message in its “mailbox” and start making progress on that instead. When the thing we were awaiting returns, the actor will continue working on our original function.

Actors don’t like to sit around and do nothing when they have messages in their mailbox. They will pick up the next task to perform whenever an active task is suspended.

The fact that actors can do this is called actor reentrancy and it can cause interesting bugs and challenges for us.
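To see reentrancy in isolation, here's a small, contrived actor where a read-suspend-write pattern loses updates. It's not from the caching example, but it shows the exact mechanism:

```swift
actor Counter {
  var value = 0

  func incrementSlowly() async {
    let before = value
    // Suspension point: the actor is free to run other calls here
    try? await Task.sleep(nanoseconds: 10_000_000)
    // `value` may have changed while we were suspended,
    // but we overwrite it based on the stale `before` anyway
    value = before + 1
  }
}

let counter = Counter()
await withTaskGroup(of: Void.self) { group in
  for _ in 0..<10 { group.addTask { await counter.incrementSlowly() } }
}
// Likely prints a value smaller than 10 because increments were lost
print(await counter.value)
```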

Solving actor reentrancy can be a tricky problem. In our case, we can solve the reentrancy issue by creating and retaining tasks for each network call that we’re about to make. That way, reentrant calls to read can see that we already have an in progress task that we’re awaiting and those calls will also await the same task’s result. This ensures we only make a single network call. The code below shows the entire DataCache implementation. Notice how we’ve changed the cache dictionary so that it can either hold a fetch task or our Data object:

actor DataCache {
  enum LoadingTask {
    case inProgress(Task<Data?, Error>)
    case loaded(Data)
  }

  private var cache: [UUID: LoadingTask] = [:]
  private let remoteCache: RemoteCache

  init(remoteCache: RemoteCache) {
    self.remoteCache = remoteCache
  }

  func read(_ key: UUID) async -> Data? {
    print(" cache read called for \(key)")
    defer {
      print(" cache read finished for \(key)")
    }

    // we have the data, no need to go to the network
    if case let .loaded(data) = cache[key] {
      return data
    }

    // a previous call started loading the data
    if case let .inProgress(task) = cache[key] {
      return try? await task.value
    }

    // we don't have the data and we're not already loading it
    do {
      let task: Task<Data?, Error> = Task {
        guard let data = try await remoteCache.read(key) else {
          return nil
        }

        return data
      }

      cache[key] = .inProgress(task)
      if let data = try await task.value {
        cache[key] = .loaded(data)
        return data
      } else {
        cache[key] = nil
        return nil
      }
    } catch {
      return nil
    }
  }

  func write(_ key: UUID, data: Data) async {
    print(" cache write called for \(key)")
    defer {
      print(" cache write finished for \(key)")
    }

    do {
      try await remoteCache.write(key, data: data)
    } catch {
      // failed to store the data on the remote cache
    }
    cache[key] = .loaded(data)
  }
}
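The RemoteCache that this version depends on isn't shown; it wraps the same URLSession calls from the earlier read function. A sketch might look like this, where the localhost URL matches the earlier example and the shape of the write endpoint is an assumption:

```swift
import Foundation

struct RemoteCache {
  func read(_ key: UUID) async throws -> Data? {
    let url = URL(string: "http://localhost:8080/\(key)")!
    let (data, response) = try await URLSession.shared.data(from: url)

    guard let httpResponse = response as? HTTPURLResponse,
          httpResponse.statusCode == 200 else {
      return nil
    }

    return data
  }

  func write(_ key: UUID, data: Data) async throws {
    var request = URLRequest(url: URL(string: "http://localhost:8080/\(key)")!)
    request.httpMethod = "POST"
    request.httpBody = data
    _ = try await URLSession.shared.data(for: request)
  }
}
```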

I explain this approach more deeply in my post on building a token refresh flow with actors as well as my post on building a custom async image loader so I won’t go into too much detail here.

When we run the same test that we ran before, the result looks like this:

 cache read called for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read called for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read called for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read called for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read called for DDFA2377-C10F-4324-BBA3-68126B49EB00
 attempt to read remote cache for DDFA2377-C10F-4324-BBA3-68126B49EB00
 remote cache HIT for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read finished for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read finished for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read finished for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read finished for DDFA2377-C10F-4324-BBA3-68126B49EB00
 cache read finished for DDFA2377-C10F-4324-BBA3-68126B49EB00

We start multiple cache reads; this is actor reentrancy in action. But because we've retained the loading task so that it can be reused, we only make a single network call. Once that call completes, all of our reentrant cache reads receive the same output from the task we created in the first call.

The point is that we can rely on actors doing one thing at a time to update some mutable state before we hit our await. This state will then tell reentrant calls that we’re already working on a given task and that we don’t need to make another (in this case) network call.

Things become trickier when you try and make your actor into a serial queue that runs async tasks. In a future post I’d like to dig into why that’s so tricky and explore possible solutions.

In Summary

Actor reentrancy is a feature of actors that can lead to subtle bugs and unexpected results. Due to actor reentrancy we need to be very careful when we’re adding async methods to an actor, and we need to make sure that we think about what can and should happen when we have multiple, reentrant, calls to a specific function on an actor.

Sometimes this is completely fine, other times it’s wasteful but won’t cause problems. Other times, you’ll run into problems that arise due to certain state on your actor being changed while your function was suspended. Every time you await something inside of an actor it’s important that you ask yourself whether you’ve made any state related assumptions before your await that you need to reverify after your await.

Step one to avoiding reentrancy related issues is to understand what it is, and have a sense of how you can solve problems when they arise. Unfortunately there’s no single solution that fixes every reentrancy related issue. In this post you saw that holding on to a task that encapsulates work can prevent multiple network calls from being made.

Have you ever run into a reentrancy related problem yourself? And if so, did you manage to solve it? I’d love to hear from you on Twitter or Mastodon!