What is @concurrent in Swift 6.2?

Swift 6.2 is available and it comes with several improvements to Swift Concurrency. One of these features is the @concurrent declaration that we can apply to nonisolated functions. In this post, you will learn a bit more about what @concurrent is, why it was added to the language, and when you should be using @concurrent.

Before we dig into @concurrent itself, I’d like to provide a little bit of context by exploring another Swift 6.2 feature called nonisolated(nonsending) because without that, @concurrent wouldn’t exist at all.

And to make sense of nonisolated(nonsending) we’ll go back to nonisolated functions.

Exploring nonisolated functions

A nonisolated function is a function that’s not isolated to any specific actor. If you’re on Swift 6.1, or you’re using Swift 6.2 with default settings, that means that a nonisolated async function will always run on the global executor.

In more practical terms, a nonisolated async function runs its work on a background thread.

For example, the following function would run away from the main actor at all times:

nonisolated 
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

While it’s a convenient way to run code on the global executor, this behavior can be confusing. If we remove the async from that function, it will always run on the caller’s actor:

nonisolated 
func decode<T: Decodable>(_ data: Data) throws -> T {
  // ...
}

So if we call this version of decode(_:) from the main actor, it will run on the main actor.

Since that difference in behavior can be unexpected and confusing, the Swift team has added nonisolated(nonsending). So let’s see what that does next.

Exploring nonisolated(nonsending) functions

Any function that’s marked as nonisolated(nonsending) will always run on the caller’s executor. This unifies behavior for async and non-async functions and can be applied as follows:

nonisolated(nonsending) 
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

Whenever you mark a function like this, it no longer automatically offloads to the global executor. Instead, it will run on the caller’s actor.

This doesn’t just unify behavior for async and non-async functions, it also makes our code less concurrent and easier to reason about.

When we offload work to the global executor, this means that we’re essentially creating new isolation domains. The result of that is that any state that’s passed to or accessed inside of our function is potentially accessed concurrently if we have concurrent calls to that function.

This means that we must make the accessed or passed-in state Sendable, and that can become quite a burden over time. For that reason, making functions nonisolated(nonsending) makes a lot of sense. It runs the function on the caller’s actor (if any) so if we pass state from our call-site into a nonisolated(nonsending) function, that state doesn’t get passed into a new isolation context; we stay in the same context we started out from. This means less concurrency, and less complexity in our code.
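To illustrate the Sendable distinction with a small sketch (the Feed types here are hypothetical):

```swift
// A value type whose members are all Sendable can safely cross
// isolation domains, so it can be passed to offloaded functions.
struct Feed: Sendable {
  let title: String
}

// A class with mutable state is not Sendable by default; passing an
// instance into another isolation domain is flagged by the compiler
// in strict concurrency mode.
final class MutableFeed {
  var title: String = ""
}

let feed = Feed(title: "Workouts")
```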

The benefits of nonisolated(nonsending) can really add up, which is why you can make it the default for your nonisolated functions by opting in to Swift 6.2’s NonisolatedNonsendingByDefault feature flag.

When your code is nonisolated(nonsending) by default, every async function that’s either explicitly or implicitly nonisolated will be considered nonisolated(nonsending). This means that we need a new way to offload work to the global executor.

Enter @concurrent.

Offloading work with @concurrent in Swift 6.2

Now that you know a bit more about nonisolated and nonisolated(nonsending), we can finally understand @concurrent.

Using @concurrent makes the most sense when you’re also using the NonisolatedNonsendingByDefault feature flag. Without that feature flag, you can continue using nonisolated to achieve the same “offload to the global executor” behavior. That said, marking functions as @concurrent can future-proof your code and make your intent explicit.

With @concurrent we can ensure that a nonisolated function runs on the global executor:

@concurrent
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

Marking a function as @concurrent will automatically mark that function as nonisolated so you don’t have to write @concurrent nonisolated. We can apply @concurrent to any function that doesn’t have its isolation explicitly set. For example, you can apply @concurrent to a function that’s defined on a main actor isolated type:

@MainActor
class DataViewModel {
  @concurrent
  func decode<T: Decodable>(_ data: Data) async throws -> T {
    // ...
  }
}

Or even to a function that’s defined on an actor:

actor DataViewModel {
  @concurrent
  func decode<T: Decodable>(_ data: Data) async throws -> T {
    // ...
  }
}

You’re not allowed to apply @concurrent to functions that have their isolation defined explicitly. Both examples below are incorrect since the function would have conflicting isolation settings.

@concurrent @MainActor
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

@concurrent nonisolated(nonsending)
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

Knowing when to use @concurrent

Using @concurrent is an explicit declaration to offload work to a background thread. Note that doing so introduces a new isolation domain and will require any state involved to be Sendable. That’s not always an easy thing to pull off.

In most apps, you only want to introduce @concurrent when you have a real issue to solve where more concurrency helps you.

An example of a case where @concurrent should not be applied is the following:

class Networking {
  func loadData(from url: URL) async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
  }
}

The loadData function makes a network call and awaits its result. While the network call is in flight, loadData is suspended. This allows the calling actor to perform other work until loadData is resumed and data is available.

So when we call loadData from the main actor, the main actor would be free to handle user input while we wait for the network call to complete.
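As a small sketch of that suspension behavior, with Task.sleep standing in for the network call (the names here are hypothetical):

```swift
// Stand-in for a network call: while this function is suspended at the
// await, the calling actor is free to make progress on other work.
func loadData() async -> Int {
  try? await Task.sleep(nanoseconds: 10_000_000) // simulate network latency
  return 42
}
```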

Now let’s imagine that you’re fetching a large amount of data that you need to decode. You started off with the default behavior for everything:

class Networking {
  func getFeed() async throws -> Feed {
    let data = try await loadData(from: Feed.endpoint)
    let feed: Feed = try await decode(data)
    return feed
  }

  func loadData(from url: URL) async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
  }

  func decode<T: Decodable>(_ data: Data) async throws -> T {
    let decoder = JSONDecoder()
    return try decoder.decode(T.self, from: data)
  }
}

In this example, all of our functions would run on the caller’s actor, for example the main actor. When we find that decode takes a lot of time because we fetched a whole bunch of data, we can decide that our code would benefit from some concurrency in the decoding department.
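To make the decoding step concrete, here’s a self-contained sketch with a hypothetical Exercise model and inline JSON, leaving out the isolation annotations:

```swift
import Foundation

// Hypothetical model, assumed for illustration.
struct Exercise: Decodable {
  let name: String
  let reps: Int
}

// Same shape as the decode function above.
func decode<T: Decodable>(_ data: Data) throws -> T {
  let decoder = JSONDecoder()
  return try decoder.decode(T.self, from: data)
}

let json = #"{"name": "Squat", "reps": 12}"#.data(using: .utf8)!
let exercise: Exercise = try! decode(json)
```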

To do this, we can mark decode as @concurrent:

class Networking {
  // ...

  @concurrent
  func decode<T: Decodable>(_ data: Data) async throws -> T {
    let decoder = JSONDecoder()
    return try decoder.decode(T.self, from: data)
  }
}

All of our other code will continue behaving like it did before by running on the caller’s actor. Only decode will run on the global executor, ensuring we’re not blocking the main actor during our JSON decoding.

We made the smallest unit of work possible @concurrent to avoid introducing loads of concurrency where we don’t need it. Introducing concurrency with @concurrent is not a bad thing but we do want to limit the amount of concurrency in our app. That’s because concurrency comes with a pretty high complexity cost, and less complexity in our code typically means that we write code that’s less buggy, and easier to maintain in the long run.


Exploring tab bars on iOS 26 with Liquid Glass

When your app has a tab bar and you recompile it using Xcode 26, you will automatically see that your tab bar has a new look and feel based on Liquid Glass. In this blog post, we’ll explore the new tab bar, and which new capabilities we’ve gained with the Liquid Glass redesign.

By the end of this post you’ll have a much better sense of how Liquid Glass changes your app’s tab bar, and how you can configure the tab bar to really lean into iOS 26’s Liquid Glass design philosophy.

Tab Bar basics in iOS 26

If you’ve adopted iOS 18’s tab bar updates, you’re already in a really good spot for adopting the new features that we get with Liquid Glass. If you haven’t, here’s what a very simple tab bar looks like using TabView and Tab:

TabView {
  Tab("Workouts", systemImage: "dumbbell.fill") {
    WorkoutsView()
  }

  Tab("Exercises", systemImage: "figure.strengthtraining.traditional") {
    ExercisesView()
  }
}

When you compile your app with Xcode 26, and you run it on a device with iOS 18 installed, your tab bar would look a bit like this:

When running the exact same code on iOS 26, you’ll find that the tab bar gets a new Liquid Glass based design:

ios-26-plain.png

Liquid Glass encourages a more layered approach to designing your app, so having a large button above the tab bar obscuring content isn’t very iOS 26-like.

Here’s what the full screen that this tab bar is on looks like:

ios-26-plain-full.png

To make this app feel more at home on iOS 26, I think we should expand the list’s contents so that they end up underneath the tab bar with a blurred overlay, similar to what Apple does in their own apps:

ios-26-health.png

Notice that this app has a left-aligned tab bar and that there’s a search button at the bottom as well. Before we talk a bit about how to achieve that layout, I’d like to explore the setup where they have content that expands underneath the tab bar first. After that we’ll look at more advanced tab bar features like having a search button and more.

Understanding the tab bar’s blur effect

If you’ve spent time with the tab bar already, you’ll know that the blur effect that we see in the health app is actually the default effect for a tab bar that sits on top of a scrollable container.

The app we’re looking at in this post has a view layout that looks a bit like this:

VStack {
  ScrollView(.horizontal) { /* filter options */ }
  List { /* The exercises */ }
  Button { /* The purple button + action */ }
}

The resulting effect is that the tab bar doesn’t overlay a scrolling container, and we end up with a solid colored background.

If we remove the button for now, we actually get the blurred background behavior that we want:

ios26-blur.png

The next objective now is to add that “Add Exercise” button again in a way that blends nicely with Liquid Glass, so let’s explore some other cool tab view behaviors on iOS 26, and how we can enable those.

Minimizing a Liquid Glass tab view

Let’s start with a cool effect that we can apply to a tab bar to make it less prominent while the user scrolls.

ios-26-minimized.png

While this effect doesn’t bring our “Add Exercise” button back, it does opt in to a feature from iOS 26 that I like a lot. We can have our tab bar minimize when the user scrolls down (or up, depending on the behavior we pick) by applying a new view modifier to our TabView:

TabView {
  /* ... */
}.tabBarMinimizeBehavior(.onScrollDown)

When this view modifier is applied to your tab view, it will automatically minimize itself when the content that’s overlayed by the tab bar gets scrolled. So in our case, the tab bar minimizes when the list of exercises gets scrolled.

Note that the tab bar doesn’t minimize if we apply this view modifier with the old design. That’s because the tab bar didn’t overlay any scrolling content. This makes it even more clear that the old design really doesn’t fit well in a Liquid Glass world.

Let’s see how we can add our button on top of the Liquid Glass TabView in a way that fits nicely with the new design.

Adding a view above your tab bar on iOS 26

On iOS 26 we’ve gained the ability to add an accessory view to our tab bars. This view is placed above your tab bar on iOS, and when your tab bar minimizes, the accessory view is placed next to the minimized tab bar button:

ios-26-acc.png

Note that the button seems a little cut off in the minimized example. This seems to be a bug in the beta as far as I can tell right now; if later in the beta cycle it turns out that I’m doing something wrong here, I will update the article as needed.

To place an accessory view on a tab bar, you apply the tabViewBottomAccessory view modifier to your TabView:

TabView {
  /* ... */
}
.tabBarMinimizeBehavior(.onScrollDown)
.tabViewBottomAccessory {
  Button("Add exercise") {
    // Action to add an exercise
  }.purpleButton()
}

Note that the accessory will be visible for every tab in your app so our usage here might not be the best approach; but it works. It’s possible to check the active tab inside of your view modifier to return different buttons or views depending on the active tab:

.tabViewBottomAccessory {
  if activeTab == .workouts {
    Button("Start workout") {
      // Action to start a workout
    }.purpleButton()
  } else {
    Button("Add exercise") {
      // Action to add an exercise
    }.purpleButton()
  }
}
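The snippet above assumes an activeTab value that mirrors the TabView’s selection. A minimal sketch of how that state could be wired up, using the iOS 18 Tab(value:) initializers (the Tabs enum and view contents are hypothetical):

```swift
import SwiftUI

enum Tabs: Hashable {
  case workouts, exercises
}

struct RootView: View {
  // Selection state assumed by the accessory example above.
  @State private var activeTab: Tabs = .workouts

  var body: some View {
    TabView(selection: $activeTab) {
      Tab("Workouts", systemImage: "dumbbell.fill", value: Tabs.workouts) {
        Text("Workouts")
      }

      Tab("Exercises", systemImage: "figure.strengthtraining.traditional", value: Tabs.exercises) {
        Text("Exercises")
      }
    }
    .tabViewBottomAccessory {
      Button(activeTab == .workouts ? "Start workout" : "Add exercise") {
        // Hypothetical action
      }
    }
  }
}
```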

Again, this works, but I’m not sure this is the intended use case for a bottom accessory. Apple’s own usage seems limited to views that are relevant for every view in the app, like the Music app where they have player controls as the tab view’s accessory.

So, while this approach lets us add the “Add exercise” button again, it seems like this isn’t the way to go.

Adding a floating button to our view

In the health app example from before, there was a search button in the bottom right side of the screen. We can add a button of our own to that location by using a Tab in our TabView that has the .search role:

Tab("Add", systemImage: "plus", value: Tabs.exercises, role: .search) {
  /* Your view */
}

While this adds a button to the bottom right of our view, it’s far from a solution for replacing our view-specific “Add exercise” button. A Tab that has a search role is separated from your other tabs, but you’re expected to present a full screen view from this tab. So a search tab really only makes sense when your current tab bar contains a search page.

That said, I do think that a floating button is what we need in this Liquid Glass world so let’s add one to our exercises view.

It won’t use the TabView APIs, but I do think it’s important to cover this solution because, in my opinion, it works well.

Given that Liquid Glass enforces a more layered design, this pattern of having a large button at the bottom of our list just doesn’t work as well as it used to.

Instead, we can leverage a ZStack and add a button on top of it so we can have our scrolling content look the way that we like while also having an “Add Exercise” button:

ZStack(alignment: .bottomTrailing) {
  // view contents

  Button(action: {
    // ...
  }) {
    Label("Add Exercise", systemImage: "plus")
      .bold()
      .labelStyle(.iconOnly)
      .padding()
  }
  .glassEffect(.regular.interactive())
  .padding([.bottom, .trailing], 12)
}

The key to making our floating button look at home is applying the glassEffect view modifier. I won’t cover that modifier in depth but you can probably guess what it does; it makes our button have that Liquid Glass design that we’re looking for:

ios-26-float.png

I’m not 100% sold on this approach because I felt like there was something nice about having that large purple button in my old design. But, this is a new design era. And this feels like it’s something that would fit nicely in the iOS 26 design language.

In Summary

Knowing which options you have for customizing iOS 26’s TabView will greatly help with adopting Liquid Glass. Knowing how you can minimize your tab bar, or when to assign an accessory view can really help you build better experiences for your users. Adding a search tab with the search role will help SwiftUI position your search feature properly and consistently across platforms.

While Liquid Glass is a huge change in terms of design language, I like these new TabView APIs a lot and I’m excited to spend more time with them.

Opting your app out of the Liquid Glass redesign with Xcode 26

On iOS 26, iPadOS 26 and more, your apps will take on a whole new look based on Apple's Liquid Glass redesign. All you need to do to adopt this new style in your apps is recompile. Once recompiled, your app will have all-new UI components which means your app will look fresh and right at home in Apple's latest OS.

That said, there are many reasons why you might not want to adopt Liquid Glass just yet.

It's a big redesign and for lots of apps there will be work to do to properly adapt your designs to fit in with Liquid Glass.

For these apps, Apple allows developers to opt out of the redesign using a specific property list key that you can add to your app's Info.plist. When you add UIDesignRequiresCompatibility to your Info.plist and set it to YES, your app will run using the old OS design instead of the new Liquid Glass design.

According to Apple this flag should mainly be used for debugging and testing, but it can also be used to stay on the old design for a while longer. A word of warning though; Apple intends to remove this option in the next major Xcode release. This means that even though you can opt out in Xcode 26, Xcode 27 will probably make adopting Liquid Glass mandatory.

That said, for now you can keep the old look and feel for your app while you figure out how Liquid Glass impacts your design choices.

Setting default actor isolation in Xcode 26

With Swift 6.2, Apple has made several improvements to Swift Concurrency and its approachability. One of the biggest changes is that new Xcode projects will now, by default, apply an implicit main actor annotation to all your code. This essentially makes your apps single-threaded by default.

I really like this change because without this change it was far too easy to accidentally introduce loads of concurrency in your apps.

In this post I'd like to take a quick look at how you can control this setting as well as the setting for nonisolated(nonsending) from Xcode 26's build settings menu.

Setting your default actor isolation

Open your build settings and look for "Default Actor Isolation". You can use the search feature to make it easier to find the setting.

New projects will have this set to MainActor while existing projects will have this set to nonisolated. I highly recommend trying to set this to MainActor instead. You will need to refactor some of your code and apply explicit nonisolated declarations where you intended to use concurrency so you'll want to allocate some time for this.

MainActor and nonisolated are the only two valid values for this setting.
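As a sketch of what that refactor looks like (the type and functions here are hypothetical; the @MainActor annotation is written out explicitly, which is what the setting would otherwise infer):

```swift
// With Default Actor Isolation set to MainActor, this class would be
// implicitly @MainActor; the annotation is spelled out here so the
// sketch compiles the same way regardless of the build setting.
@MainActor
final class FeedViewModel {
  var items: [String] = []

  // Runs on the main actor, like the rest of the type.
  func append(_ item: String) {
    items.append(item)
  }

  // Explicitly opted out of main actor isolation, so it can be
  // called from anywhere.
  nonisolated func checksum(of data: [UInt8]) -> Int {
    data.reduce(0) { $0 &+ Int($1) }
  }
}
```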

Enabling nonisolated(nonsending)

Another feature that's introduced through Swift 6.2 is nonisolated(nonsending). This feature makes it so that your nonisolated async functions automatically inherit the calling actor's isolation instead of always running on the global executor without being isolated to any actor. To get the old behavior back you can annotate your functions with @concurrent. You can learn more about this in my post about Swift 6.2's changes.

You can turn on nonisolated(nonsending) in one of two ways. You can either enable the feature flag for this feature or you can turn on "Approachable Concurrency".

With Approachable Concurrency you will get nonisolated(nonsending) along with a couple of other changes that should make the compiler smarter and more sensible when it comes to how concurrent your code will really be.

If you're not sure which one you should use I recommend that you go for Approachable Concurrency.

Exploring concurrency changes in Swift 6.2

It's no secret that Swift concurrency can be pretty difficult to learn. There are a lot of concepts that are different from what you're used to when you were writing code in GCD. Apple recognized this in one of their vision documents and they set out to make changes to how concurrency works in Swift 6.2. They're not going to change the fundamentals of how things work. What they will mainly change is where code will run by default.

In this blog post, I would like to take a look at the two main features that will change how your Swift concurrency code works:

  1. The new nonisolated(nonsending) default feature flag
  2. Running code on the main actor by default with the defaultIsolation setting

By the end of this post you should have a pretty good sense of the impact that Swift 6.2 will have on your code, and how you should be moving forward until Swift 6.2 is officially available in a future Xcode release.

Understanding nonisolated(nonsending)

The nonisolated(nonsending) feature is introduced by SE-0461 and it’s a pretty big overhaul in terms of how your code will work moving forward. At the time of writing this, it’s gated behind an upcoming feature compiler flag called NonisolatedNonsendingByDefault. To enable this flag on your project, see this post on leveraging upcoming features in an SPM package, or if you’re looking to enable the feature in Xcode, take a look at enabling upcoming features in Xcode.

For this post, I’m using an SPM package so my Package.swift contains the following:

.executableTarget(
    name: "SwiftChanges",
    swiftSettings: [
        .enableExperimentalFeature("NonisolatedNonsendingByDefault")
    ]
)

I’m getting ahead of myself though; let’s talk about what nonisolated(nonsending) is, what problem it solves, and how it will change the way your code runs significantly.

Exploring the problem with nonisolated in Swift 6.1 and earlier

When you write async functions in Swift 6.1 and earlier, you might do so on a class or struct as follows:

class NetworkingClient {
  func loadUserPhotos() async throws -> [Photo] {
    // ...
  }
}

When loadUserPhotos is called, we know that it will not run on any actor. Or, in more practical terms, we know it’ll run away from the main thread. The reason for this is that loadUserPhotos is a nonisolated async function.

This means that when you have code as follows, the compiler will complain about sending a non-sendable instance of NetworkingClient across actor boundaries:

struct SomeView: View {
  let network = NetworkingClient()

  var body: some View {
    Text("Hello, world")
      .task { await getData() }
  }

  func getData() async {
    do {
      // sending 'self.network' risks causing data races
      let photos = try await network.loadUserPhotos()
    } catch {
      // ...
    }
  }
}

When you take a closer look at the error, the compiler will explain:

sending main actor-isolated 'self.network' to nonisolated instance method 'loadUserPhotos()' risks causing data races between nonisolated and main actor-isolated uses

This error is very similar to one that you’d get when sending a main actor isolated value into a sendable closure.

The problem with this code is that loadUserPhotos runs in its own isolation context. This means that it will run concurrently with whatever the main actor is doing.

Since our instance of NetworkingClient is created and owned by the main actor we can access and mutate our networking instance while loadUserPhotos is running in its own isolation context. Since that function has access to self, it means that we can have two isolation contexts access the same instance of NetworkingClient at the exact same time.

And as we know, multiple isolation contexts having access to the same object can lead to data races if the object isn’t sendable.
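In Swift 6.1, one way out is to make NetworkingClient itself Sendable by removing its mutable state; a minimal sketch with a placeholder Photo type and a stubbed network call:

```swift
struct Photo: Sendable {
  let id: Int
}

// With no mutable stored state, the class can be marked Sendable, which
// makes it safe to send into a nonisolated async function.
final class NetworkingClient: Sendable {
  func loadUserPhotos() async throws -> [Photo] {
    // Stubbed result instead of a real network call.
    [Photo(id: 1)]
  }
}
```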

The difference between an async function and a non-async function that are both nonisolated is that the non-async function will always run on the caller’s actor, while the nonisolated async function never will. Whether we call a nonisolated async function from the main actor or from somewhere that’s not on the main actor, it will not run on the caller’s actor.

The code below is commented to show this through some examples:

// this function will _always_ run on the caller's actor
nonisolated func nonIsolatedSync() {}

// this function is isolated to an actor so it always runs on that actor (main in this case)
@MainActor func isolatedSync() {}

// this function will _never_ run on any actor (it runs on a bg thread)
nonisolated func nonIsolatedAsync() async {}

// this function is isolated to an actor so it always runs on that actor (main in this case)
@MainActor func isolatedAsync() async {}

As you can see, there’s quite a difference in behavior between functions that are async and functions that are not, specifically for nonisolated async versus nonisolated non-async.

Swift 6.2 aims to fix this with a new default for nonisolated functions that's intended to make sure that async and non-async functions can behave in the exact same way.

How nonisolated(nonsending) works

The behavior in Swift 6.1 and earlier is inconsistent and confusing, so in Swift 6.2, async functions adopt a new default for nonisolated functions called nonisolated(nonsending). You don’t have to write this manually; it’s the default, so every nonisolated async function will be nonsending unless you specify otherwise.

When a function is nonisolated(nonsending) it means that the function won’t cross actor boundaries. Or, in a more practical sense, a nonisolated(nonsending) function will run on the caller’s actor.

So when we opt-in to this feature by enabling the NonisolatedNonsendingByDefault upcoming feature, the code we wrote earlier is completely fine.

The reason for that is that loadUserPhotos() would now be nonisolated(nonsending) by default, and it would run its function body on the main actor instead of running it on the cooperative thread pool.

Let’s take a look at some examples, shall we? We saw the following example earlier:

class NetworkingClient {
  func loadUserPhotos() async throws -> [Photo] {
    // ...
  }
}

In this case, loadUserPhotos is both nonisolated and async. This means that the function will receive a nonisolated(nonsending) treatment by default, and it runs on the caller’s actor (if any). In other words, if you call this function on the main actor it will run on the main actor. Call it from a place that’s not isolated to an actor; it will run away from the main thread.

Alternatively, we might have added a @MainActor declaration to NetworkingClient:

@MainActor
class NetworkingClient {
  func loadUserPhotos() async throws -> [Photo] {
    return [Photo()]
  }
}

This makes loadUserPhotos isolated to the main actor so it will always run on the main actor, no matter where it’s called from.

Then we might also have the main actor annotation along with nonisolated on loadUserPhotos:

@MainActor
class NetworkingClient {
  nonisolated func loadUserPhotos() async throws -> [Photo] {
    return [Photo()]
  }
}

In this case, the new default kicks in even though we didn’t write nonisolated(nonsending) ourselves. So, NetworkingClient is main actor isolated but loadUserPhotos is not. It will inherit the caller’s actor. So, once again if we call loadUserPhotos from the main actor, that’s where we’ll run. If we call it from some other place, it will run there.

So what if we want to make sure that our function never runs on the main actor? Because so far, we’ve only seen possibilities that would either isolate loadUserPhotos to the main actor, or options that would inherit the caller’s actor.

Running code away from any actors with @concurrent

Alongside nonisolated(nonsending), Swift 6.2 introduces the @concurrent attribute. This attribute allows you to write functions that behave in the same way that your code in Swift 6.1 would have behaved:

@MainActor
class NetworkingClient {
  @concurrent
  nonisolated func loadUserPhotos() async throws -> [Photo] {
    return [Photo()]
  }
}

By marking our function as @concurrent, we make sure that we always leave the caller’s actor and create our own isolation context.

The @concurrent attribute should only be applied to functions that are nonisolated. So for example, adding it to a method on an actor won’t work unless the method is nonisolated:

actor SomeGenerator {
  // not allowed
  @concurrent
  func randomID() async throws -> UUID {
    return UUID()
  }

  // allowed
  @concurrent
  nonisolated func randomID() async throws -> UUID {
    return UUID()
  }
}

Note that at the time of writing both cases are allowed, and the @concurrent function that’s not nonisolated acts like it’s not isolated at runtime. I expect that this is a bug in the Swift 6.2 toolchain and that this will change since the proposal is pretty clear about this.

How and when should you use NonisolatedNonsendingByDefault

In my opinion, opting in to this upcoming feature is a good idea. It does open you up to a new way of working where your nonisolated async functions inherit the caller’s actor instead of always running in their own isolation context, but it makes for fewer compiler errors in practice, and it actually helps you get rid of a whole bunch of main actor annotations based on what I’ve been able to try so far.

I’m a big fan of reducing the amount of concurrency in my apps and only introducing it when I want to explicitly do so. Adopting this feature helps a lot with that. Before you go and mark everything in your app as @concurrent just to be sure; ask yourself whether you really have to. There’s probably no need, and not running everything concurrently makes your code, and its execution a lot easier to reason about in the big picture.

That’s especially true when you also adopt Swift 6.2’s second major feature: defaultIsolation.

Exploring Swift 6.2’s defaultIsolation options

In Swift 6.1 your code only runs on the main actor when you tell it to. This could be due to a protocol being @MainActor annotated or you explicitly marking your views, view models, and other objects as @MainActor.

Marking something as @MainActor is a pretty common solution for fixing compiler errors and it’s more often than not the right thing to do.

Your code really doesn’t need to do everything asynchronously on a background thread.

Doing so is relatively expensive, often doesn’t improve performance, and it makes your code a lot harder to reason about. You wouldn’t have written DispatchQueue.global() everywhere before you adopted Swift Concurrency, right? So why do the equivalent now?

Anyway, in Swift 6.2 we can make running on the main actor the default on a package level. This is a feature introduced by SE-0466.

This means that you can have UI packages, app targets, model packages, and so on automatically run code on the main actor unless you explicitly opt out of running on main with @concurrent or through your own actors.

Enable this feature by setting defaultIsolation in your swiftSettings or by passing it as a compiler argument:

swiftSettings: [
    .defaultIsolation(MainActor.self),
    .enableExperimentalFeature("NonisolatedNonsendingByDefault")
]

You don’t have to use defaultIsolation alongside NonisolatedNonsendingByDefault, but I liked using both options in my experiments.

Currently you can either pass MainActor.self as your default isolation to run everything on main by default, or you can use nil to keep the existing behavior (or don’t pass the setting at all to keep the existing behavior).

Once you enable this feature, Swift will infer every object to have an @MainActor annotation unless you explicitly specify something else:

@Observable
class Person {
  var myValue: Int = 0
  let obj = TestClass()

  // This function will _always_ run on main 
  // if defaultIsolation is set to main actor
  func runMeSomewhere() async {
    MainActor.assertIsolated()
    // do some work, call async functions etc
  }
}

This code contains a nonisolated async function. This means that, by default, it would inherit the actor that we call runMeSomewhere from. If we call it from the main actor that’s where it runs. If we call it from another actor or from no actor, it runs away from the main actor.

This probably wasn’t intended at all.

Maybe we just wrote an async function so that we could call other functions that needed to be awaited. If runMeSomewhere doesn’t do any heavy processing, we probably want Person to be on the main actor. It’s an observable class so it probably drives our UI which means that pretty much all access to this object should be on the main actor anyway.

With defaultIsolation set to MainActor.self, our Person gets an implicit @MainActor annotation so our Person runs all its work on the main actor.

Let’s say we want to add a function to Person that’s not going to run on the main actor. We can use nonisolated just like we would otherwise:

// This function will run on the caller's actor
nonisolated func runMeSomewhere() async {
  MainActor.assertIsolated()
  // do some work, call async functions etc
}

And if we want to make sure we’re never on the main actor:

// This function will always run on the global executor,
// away from the main actor
@concurrent
nonisolated func runMeSomewhere() async {
  // MainActor.assertIsolated() would fail here
  // do some work, call async functions etc
}

We need to opt out of this main actor inference for every function or property that we want to make nonisolated; we can’t do this for the entire type.

Of course, your own actors will not suddenly start running on the main actor and types that you’ve annotated with your own global actors aren’t impacted by this change either.

Should you opt-in to defaultIsolation?

This is a tough question to answer. My initial thought is “yes”. For app targets, UI packages, and packages that mainly hold view models I definitely think that going main actor by default is the right choice.

You can still introduce concurrency where needed and it will be much more intentional than it would have been otherwise.

The fact that entire objects will be made main actor by default seems like something that might cause friction down the line, but I feel like adding dedicated async packages would be the way to go here.

The motivation for this option existing makes a lot of sense to me and I think I’ll want to try it out for a bit before making up my mind fully.

Enabling upcoming feature flags in an SPM package

As Swift evolves, a lot of new evolution proposals get merged into the language. Eventually these new language versions get shipped with Xcode, but sometimes you might want to try out Swift toolchains before they're available inside of Xcode.

For example, I'm currently experimenting with Swift 6.2's upcoming features to see how they will impact certain coding patterns once 6.2 becomes available for everybody.

This means that I'm trying out proposals like SE-0461 that can change where nonisolated async functions run. This specific proposal requires me to turn on an upcoming feature flag. To do this in SPM, we need to configure the Package.swift file as follows:

let package = Package(
    name: "SwiftChanges",
    platforms: [
        .macOS("15.0")
    ],
    targets: [
        // Targets are the basic building blocks of a package, defining a module or a test suite.
        // Targets can depend on other targets in this package and products from dependencies.
        .executableTarget(
            name: "SwiftChanges",
            swiftSettings: [
                .enableExperimentalFeature("NonisolatedNonsendingByDefault")
            ]
        ),
    ]
)

The section to pay attention to is the swiftSettings argument that I pass to my executableTarget:

swiftSettings: [
    .enableExperimentalFeature("NonisolatedNonsendingByDefault")
]

You can pass an array of features to swiftSettings to enable multiple feature flags.

Happy experimenting!

Should you use network connectivity checks in Swift?

A lot of modern apps have a networking component to them. This could be because your app relies on a server entirely for all data, or you’re just sending a couple of requests as a back up or to kick off some server side processing. When implementing networking, it’s not uncommon for developers to check the network’s availability before making a network request.

The reasoning behind such a check is that we can inform the user that their request will fail before we even attempt to make the request.

Sounds like good UX, right?

The question is whether it really is good UX. In this blog post I’d like to explore some of the pros and cons that a user might run into when you implement a network connectivity check with, for example, NWPathMonitor.

A user’s connection can change at any time

Nothing is as susceptible to change as a user’s network connection. One moment they might be on WiFi, the next they’re in an elevator with no connection, and just moments later they’ll be on a fast 5G connection only to switch to a much slower connection when their train enters a huge tunnel.

If you’re preventing a user from initiating a network call when they momentarily don’t have a connection, that might seem extremely weird to them. By the time your alert shows up to tell them there’s no connection, they might have already regained connectivity. And by the time the actual network call gets made, the elevator doors close and… the network call still fails because the user isn’t connected to the internet.

Due to changing conditions, it’s often recommended that apps attempt a network call regardless of the user’s connection status. After all, the status can change at any time. So while you might be able to successfully kick off a network call, there’s no guarantee you’ll be able to finish it.

A much better user experience is to just try the network call. If the call fails due to a lack of internet connection, URLSession will tell you about it, and you can inform the user accordingly.
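To sketch what that looks like in practice (the `loadData(from:)` helper is a name I made up, not an API from URLSession):

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking // URLSession lives here on non-Apple platforms
#endif

// A minimal sketch: just attempt the request, and only react to
// connectivity problems after they actually happen.
func loadData(from url: URL) async throws -> Data {
    do {
        let (data, _) = try await URLSession.shared.data(from: url)
        return data
    } catch let error as URLError where error.code == .notConnectedToInternet {
        // This is the moment to inform the user that they're offline
        print("You appear to be offline. Please try again.")
        throw error
    }
}
```

Other URLError codes (timeouts, DNS failures) can be handled the same way, or simply allowed to propagate.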

Speaking of URLSession… there are several ways in which URLSession will help us handle offline usage of our app.

You might have a cached response

If your app is used frequently, and it displays relatively static data, it’s likely that your server will include cache headers where appropriate. This will allow URLSession to locally cache responses for certain requests which means that you don’t have to go to the server for those specific requests.

This means that, when configured correctly, URLSession can serve certain requests without an internet connection.

Of course, that means that the user must have visited a specific URL before, and the server must include the appropriate cache headers in its response. But when that’s all set up correctly, URLSession will serve cached responses automatically without even letting you, the developer, know.

Your user might be offline and most of the app still works fine without any work from your end.
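If you want to lean into this, you can also give URLSession a larger cache to work with. The sizes below are arbitrary choices of mine; `.useProtocolCachePolicy` is URLSession's default policy and is what makes it respect the server's cache headers:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// A sketch of a session with a generously sized cache.
func makeCachingSession() -> URLSession {
    let config = URLSessionConfiguration.default
    config.requestCachePolicy = .useProtocolCachePolicy // the default
    config.urlCache = URLCache(
        memoryCapacity: 10 * 1024 * 1024,  // 10 MB in memory
        diskCapacity: 100 * 1024 * 1024,   // 100 MB on disk
        diskPath: nil
    )
    return URLSession(configuration: config)
}
```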

This will only work for requests where the user fetches data from the server, so actions like submitting a comment or making a purchase in your app won’t work. But that’s no reason to start putting checks in place before sending a POST request.

As I mentioned in the previous section, the connection status can change at any time, and if URLSession wasn’t able to make the request it will inform you about it.

For situations where your user tries to initiate a request when there’s no active connection (yet) URLSession has another trick up its sleeve; automatic retries.

URLSession can retry network calls automatically upon reconnecting

Sometimes your user will initiate actions that will remain relevant for a little while. Or, in other words, the user will do something (like sending an email) where it’s completely fine if URLSession can’t make the request now and instead makes the request as soon as the user is back online.

To enable this behavior you must set waitsForConnectivity on your URLSession’s configuration to true:

class APIClient {
  let session: URLSession

  init() {
    let config = URLSessionConfiguration.default
    config.waitsForConnectivity = true

    self.session = URLSession(configuration: config)
  }

  func loadInformation() async throws -> Information {
    let (data, response) = try await session.data(from: someURL)
    // ...
  }
}

In the code above, I’ve created my own URLSession instance that’s configured to wait for connectivity if we attempt to make a network call when there’s no network available. Whenever I make a request through this session while offline, the request will not fail immediately. Instead, it remains pending until a network connection is established.

By default, the wait time for connectivity is several days. You can change this to a more reasonable number like 60 seconds by setting timeoutIntervalForResource:

init() {
  let config = URLSessionConfiguration.default
  config.waitsForConnectivity = true
  config.timeoutIntervalForResource = 60

  self.session = URLSession(configuration: config)
}

That way a request will remain pending for 60 seconds before giving up and failing with a network error.

If you want to have some logic in your app to detect when URLSession is waiting for connectivity, you can implement a URLSessionTaskDelegate. The delegate’s urlSession(_:taskIsWaitingForConnectivity:) method will be called whenever a task is unable to make a request immediately.
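Here's a sketch of what such a delegate could look like; the class name and the print statement are placeholders for whatever your app would actually do:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// A sketch of detecting the "waiting for connectivity" state.
final class ConnectivityAwareDelegate: NSObject, URLSessionTaskDelegate {
    func urlSession(_ session: URLSession, taskIsWaitingForConnectivity task: URLSessionTask) {
        // Called when a task can't start because there's no connection
        print("Task \(task.taskIdentifier) is waiting for a network connection")
    }
}

let config = URLSessionConfiguration.default
config.waitsForConnectivity = true
let session = URLSession(
    configuration: config,
    delegate: ConnectivityAwareDelegate(),
    delegateQueue: nil
)
```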

Note that waiting for connectivity won’t retry the request if the connection drops in the middle of a data transfer. This option only applies to waiting for a connection to start the request.

In summary

Handling offline scenarios should be a primary concern for mobile developers. A user’s connection status can change quickly, and frequently. Some developers will “preflight” their requests and check whether a connection is available before attempting to make a request in order to save a user’s time and resources.

The major downside of doing this is that having a connection right before making a request doesn’t mean the connection is there when the request actually starts, and it doesn’t mean the connection will be there for the entire duration of the request.

The recommended approach is to just go ahead and make the request and to handle offline scenarios if / when a network call fails.

URLSession has built-in mechanisms like a cache and the ability to wait for connections to provide data (if possible) when the user is offline, and it also has the built-in ability to take a request, wait for a connection to be available, and then start the request automatically.

The system does a pretty good job of helping us support and handle offline scenarios in our apps, which means that checking for connections with utilities like NWPathMonitor usually ends up doing more harm than good.

Choosing between LazyVStack, List, and VStack in SwiftUI

SwiftUI offers several approaches to building lists of content. You can use a VStack if your list consists of a bunch of elements that should be placed on top of each other. Or you can use a LazyVStack if your list is really long. And in other cases, a List might make more sense.

In this post, I’d like to take a look at each of these components, outline their strengths and weaknesses and hopefully provide you with some insights about how you can decide between these three components that all place content on top of each other.

We’ll start off with a look at VStack. Then we’ll move on to LazyVStack and we’ll wrap things up with List.

Understanding when to use VStack

By far the simplest stack component that we have in SwiftUI is the VStack. It simply places elements on top of each other:

VStack {
  Text("One")
  Text("Two")
  Text("Three")
}

A VStack works really well when you only have a handful of items, and you want to place these items on top of each other. Even though you’ll typically use a VStack for a small number of items, there’s no reason you couldn’t do something like this:

ScrollView {
  VStack {
    ForEach(models) { model in 
      HStack {
        Text(model.title)
        Image(systemName: model.iconName)
      }
    }
  }
}

When there are only a few items in models, this will work fine. Whether or not it’s the correct choice… I’d say it’s not.

If your models list grows to maybe 1000 items, you’ll be putting an equal number of views in your VStack. It will require a lot of work from SwiftUI to draw all of these elements.

Eventually this is going to lead to performance issues because every single item in your models is added to the view hierarchy as a view.

Now let's say these views also contain images that must be loaded from the network. SwiftUI is then going to load these images and render them too:

ScrollView {
  VStack {
    ForEach(models) { model in 
      HStack {
        Text(model.title)
        RemoteImage(url: model.imageURL)
      }
    }
  }
}

The RemoteImage in this case would be a custom view that enables loading images from the network.

When everything is placed in a VStack like I did in this sample, your scrolling performance will be horrendous.

A VStack is great for building a vertically stacked view hierarchy. But once your hierarchy starts to look and feel more like a scrollable list… LazyVStack might be the better choice for you.

Understanding when to use a LazyVStack

The LazyVStack component is functionally mostly the same as a regular VStack. The key difference is that a LazyVStack doesn’t add every view to the view hierarchy immediately.

As your user scrolls down a long list of items, the LazyVStack will add more and more views to the hierarchy. This means that you’re not paying a huge cost up front, and in the case of our RemoteImage example from earlier, you’re not loading images that the user might never see.

Swapping a VStack out for a LazyVStack is pretty straightforward:

ScrollView {
  LazyVStack {
    ForEach(models) { model in 
      HStack {
        Text(model.title)
        RemoteImage(url: model.imageURL)
      }
    }
  }
}

Our drawing performance should be much better with the LazyVStack compared to the regular VStack approach.

In a LazyVStack, we’re free to use any type of view that we want, and we have full control over how the list ends up looking. We don’t gain any out-of-the-box functionality, which can be great if you require a higher level of customization for your list.

Next, let’s see how List is used to understand how this compares to LazyVStack.

Understanding when to use List

Where a LazyVStack provides us maximum control, a List provides us with useful features right out of the box. Depending on where your list is used (for example, in a sidebar versus full screen), List will look and behave slightly differently.

When you use views like NavigationLink inside of a list, you gain some small design tweaks to make it clear that this list item navigates to another view.

This is very useful for most cases, but you might not need any of this functionality.

List also comes with some built-in designs that allow you to easily create something that either looks like the Settings app, or something a bit more like a list of contacts. It’s easy to get started with List if you don’t require lots of customization.

Just like LazyVStack, a List will lazily evaluate its contents which means it’s a good fit for larger sets of data.

A super basic example of using List in the example that we saw earlier would look like this:

List(models) { model in 
  HStack {
    Text(model.title)
    RemoteImage(url: model.imageURL)
  }
}

We don’t have to use a ForEach but we could if we wanted to. This can be useful when you’re using Sections in your list for example:

List {
  Section("General") {
    ForEach(model.general) { item in 
      GeneralItem(item)
    }
  }

  Section("Notifications") {
    ForEach(model.notifications) { item in 
      NotificationItem(item)
    }
  }
}

When you’re using List to build something like a settings page, you can even skip using a ForEach altogether and hardcode your child views:

List {
  Section("General") {
    GeneralItem(model.colorScheme)
    GeneralItem(model.showUI)
  }

  Section("Notifications") {
    NotificationItem(model.newsletter)
    NotificationItem(model.socials)
    NotificationItem(model.iaps)
  }
}

The decision between a List and a LazyVStack for me usually comes down to whether or not I need or want List functionality. If I find that I want little to none of List's features, odds are that I’m going to reach for LazyVStack in a ScrollView instead.

In Summary

In this post, you learned about VStack, LazyVStack and List. I explained some of the key considerations and performance characteristics for these components, without digging too deeply into solving every use case and possibility. Especially with List there’s a lot you can do. The key point is that List is a component that doesn’t always fit what you need from it. In those cases, it’s useful that we have a LazyVStack.

You learned that both List and LazyVStack are optimized for displaying large amounts of views, and that LazyVStack comes with the biggest amount of flexibility if you’re willing to implement what you need yourself.

You also learned that VStack is really only useful for smaller amounts of views. I love using it for layout purposes, but once I start putting together a list of views I prefer a lazier approach. Especially when I’m dealing with an unknown number of items.

Differences between Thread.sleep and Task.sleep explained

In Swift, we have several ways to “suspend” execution of our code. While that’s almost always a bad practice, I’d like to explain why Task.sleep really isn’t as problematic as you might expect when you’re familiar with Thread.sleep.

When you look for examples of debouncing or implementing a task timeout, they will frequently use Task.sleep to suspend a task for a given amount of time.

The key difference is in how tasks and threads work in Swift.

In Swift concurrency, we often say that tasks replace threads. Or in other words, instead of worrying about threads, we worry about tasks.

While that’s not untrue, it’s also a little bit misleading. It sounds like tasks and threads are mostly analogous to each other, and that’s not the case.

A more accurate mental model is that without Swift concurrency you used Dispatch Queues to schedule work on threads. In Swift concurrency, you use tasks to schedule work on threads. In both cases, you don’t directly worry about thread management or creation.

Exploring Thread.sleep

When you suspend execution of a thread using Thread.sleep you prevent that thread from doing anything other than sleeping. It’s not working on dispatch queues, nor on tasks.

With GCD that’s bad but not hugely problematic because if there are no threads available to work on our queue, GCD will just spin up a new thread.

Swift Concurrency isn’t as eager to spin up threads; we only have a limited number of threads available.

This means that if you have 4 threads available to your program, Swift Concurrency can use those threads to run dozens of tasks efficiently. Sleeping one of these threads with Thread.sleep means that you now only have 3 threads available to run those same dozens of tasks.

If you hit a Thread.sleep in four tasks, that means you’re now sleeping every thread available to your program and your app will essentially stop performing any work at all until the threads resume.

What about Task.sleep?

Sleeping a task with Task.sleep is, in some ways, quite similar to Thread.sleep. You suspend execution of your task, preventing that task from making progress. The key difference is in how that suspension happens. Sleeping a thread just stops it from working, reducing the number of threads available. Sleeping a task suspends the task, which allows the thread that was running your task to start running another task.

You’re not starving the system of resources with Task.sleep, and you’re not preventing your code from making forward progress, which is absolutely essential when you’re using Swift Concurrency.

If you find yourself needing to suspend execution in your Swift Concurrency app, you should never use Thread.sleep; use Task.sleep instead. I don’t say never often, but this is one of those cases.

Also, when you find yourself adding a Task.sleep you should also make sure that you’re using it to solve a real problem and not just because “without sleeping for 0.01 seconds this didn’t work properly”. Those kinds of sleeps usually mask serialization and queueing issues that should be solved instead of hidden.
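For reference, the kind of legitimate debounce mentioned earlier might look like this sketch; `Debouncer` and its API are my own names, not a standard type:

```swift
import Foundation

// Debounce: only the last call within the delay window actually runs.
final class Debouncer {
    private var task: Task<Void, Never>?

    func callAsFunction(
        delay: Duration = .milliseconds(300),
        action: @escaping @Sendable () async -> Void
    ) {
        task?.cancel() // drop the previously scheduled action
        task = Task {
            do {
                try await Task.sleep(for: delay) // throws when cancelled
                await action()
            } catch {
                // cancelled because a newer call superseded this one
            }
        }
    }
}

let debounce = Debouncer()
debounce { print("This runs only if no newer call arrives within 300ms") }
```

Because Task.sleep suspends rather than blocks, dozens of pending debounces don't tie up any threads.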

Protecting mutable state with Mutex in Swift

Once you start using Swift Concurrency, actors will essentially become your standard choice for protecting mutable state. However, introducing actors also tends to introduce more concurrency than you intended which can lead to more complex code, and a much harder time transitioning to Swift 6 in the long run.

When you interact with state that’s protected by an actor, you have to do so asynchronously. The result is that you’re writing asynchronous code in places where you might never have intended to introduce concurrency at all.

One way to resolve that is to annotate your view model (for example) with the @MainActor annotation. This makes sure that all of its code runs on the main actor, which means that it's thread-safe by default, and it also makes sure that you can safely interact with your mutable state.

That said, this might not be what you're looking for. You might want to have code that doesn't run on the main actor, that's not isolated by global actors or any actor at all, but you just want to have an old-fashioned thread-safe property.

Historically, there are several ways in which we can synchronize access to properties. We used to use Dispatch Queues, for example, when GCD was the standard for concurrency on Apple Platforms.

Recently, the Swift team added something called a Mutex to Swift. With mutexes, we have an alternative to actors for protecting our mutable state. I say alternative, but it's not really true. Actors have a very specific role in that they protect our mutable state for a concurrent environment where we want code to be asynchronous. Mutexes, on the other hand, are really useful when we don't want our code to be asynchronous and when the operation we’re synchronizing is quick (like assigning to a property).

In this post, we’ll explore how to use Mutex, when it's useful, and how you choose between a Mutex or an actor.

Mutex usage explained

A Mutex is used to protect state from concurrent access. In most apps, there will be a handful of objects that might be accessed concurrently. For example, a token provider, an image cache, and other networking-adjacent objects are often accessed concurrently.

In this post, I’ll use a very simple Counter object to make sure we don’t get lost in complex details and specifics that don’t impact or change how we use a Mutex.

When you increment or decrement a counter, that’s a quick operation. And in a codebase where the counter is available in several tasks at the same time, we want these increment and decrement operations to be safe and free from data races.

Wrapping your counter in an actor makes sense in theory because we want the counter to be protected from concurrent access. However, when we do this, we make every interaction with our actor asynchronous.

To somewhat prevent this, we could constrain the counter to the main actor, but that means that we're always going to have to be on the main actor to interact with our counter. We might not always be on the same actor when we interact with our counter, so we would still have to await interactions in those situations, and that isn't ideal.

In order to create a synchronous API that is also thread-safe, we could fall back to GCD and have a serial DispatchQueue.
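That GCD fallback might look like this sketch (the names are my own); it's marked @unchecked Sendable because the serial queue, not the compiler, is what guarantees exclusive access:

```swift
import Foundation

// A serial queue serializes all access, giving us a synchronous,
// thread-safe API without actors.
final class QueueCounter: @unchecked Sendable {
    private let queue = DispatchQueue(label: "QueueCounter.serial")
    private var _count = 0

    var count: Int {
        queue.sync { _count }
    }

    func increment() {
        queue.sync { _count += 1 }
    }
}
```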

Alternatively, we can use a Mutex.

A Mutex is used to wrap a piece of state and it ensures that there's exclusive access to that state. A Mutex uses a lock under the hood and it comes with convenient methods to make sure that we acquire and release our lock quickly and correctly.

When we try to interact with the Mutex's state, we have to wait for the lock to become available. This is similar to how an actor works, with the key difference being that waiting for a Mutex is a blocking operation (which is why we should only use it for quick and efficient operations).

Here's what interacting with a Mutex looks like:

class Counter {
    private let mutex = Mutex(0)

    func increment() {
        mutex.withLock { count in
            count += 1
        }
    }

    func decrement() {
        mutex.withLock { count in
            count -= 1
        }
    }
}

Our increment and decrement functions both acquire the Mutex, and mutate the count that’s passed to withLock.

Our Mutex is defined by calling the Mutex initializer and passing it our initial state. In this case, we pass it 0 because that’s the starting value for our counter.

In this example, I’ve defined two functions that safely mutate the Mutex's state. Now let’s see how we can read the Mutex's value:

var count: Int {
    return mutex.withLock { count in
        return count
    }
}

Notice that reading the Mutex's value is also done withLock. The key difference from increment and decrement is that instead of mutating count, I just return it.

It is absolutely essential that we keep our operations inside of withLock short. We do not want to hold the lock for any longer than we absolutely have to, because any threads that are waiting for our lock are blocked while we hold it.

We can expand our example a little bit by adding a get and set to our count. This will allow users of our Counter to interact with count like it’s a normal property while we still have data-race protection under the hood:

var count: Int {
    get {
        return mutex.withLock { count in
            return count
        }
    }

    set {
        mutex.withLock { count in
            count = newValue
        }
    }
}

We can now use our Counter as follows:

let counter = Counter()

counter.count = 10
print(counter.count)

That’s quite convenient, right?

While we now have a type that is free of data races, using it across multiple isolation contexts becomes a bit of an issue when we opt in to Swift 6, since our Counter doesn’t conform to the Sendable protocol.

The nice thing about Mutex and sendability is that mutexes are defined as being Sendable in Swift itself. This means that we can update our Counter to be Sendable quite easily, and without needing to use @unchecked Sendable!

final class Counter: Sendable {
    private let mutex = Mutex(0)

    // ....
}

At this point, we have a pretty good setup; our Counter is Sendable, it’s free of data-races, and it has a fully synchronous API!
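To see that this holds up, here's the counter so far exercised from many threads at once. Note that Mutex comes from the Synchronization module, which requires Swift 6 and recent OS releases:

```swift
import Foundation
import Synchronization // provides Mutex

final class Counter: Sendable {
    private let mutex = Mutex(0)

    func increment() {
        mutex.withLock { count in
            count += 1
        }
    }

    var count: Int {
        mutex.withLock { count in
            count
        }
    }
}

// Hammer the counter from many threads; the result is exact because
// every mutation happens while the lock is held.
let counter = Counter()
DispatchQueue.concurrentPerform(iterations: 1_000) { _ in
    counter.increment()
}
print(counter.count) // 1000
```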

When we try to use our Counter to drive a SwiftUI view by making it @Observable, things get a little tricky:

struct ContentView: View {
    @State private var counter = Counter()

    var body: some View {
        VStack {
            Text("\(counter.count)")

            Button("Increment") {
                counter.increment()
            }

            Button("Decrement") {
                counter.decrement()
            }
        }
        .padding()
    }
}

@Observable
final class Counter: Sendable {
    private let mutex = Mutex(0)

    var count: Int {
        get {
            return mutex.withLock { count in
                return count
            }
        }

        set {
            mutex.withLock { count in
                count = newValue
            }
        }
    }
}

The code above will compile but the view won’t ever update. That’s because our computed property count is based on state that’s not explicitly changing. The Mutex will change the value it protects but that doesn’t change the Mutex itself.

In other words, we’re not mutating any data in a way that @Observable can “see”.

To make our computed property work with @Observable, we need to manually tell the framework when we're accessing or mutating state (in this case, the count keypath). Here's what that looks like:

var count: Int {
    get {
        self.access(keyPath: \.count)
        return mutex.withLock { count in
            return count
        }
    }

    set {
        self.withMutation(keyPath: \.count) {
            mutex.withLock { count in
                count = newValue
            }
        }
    }
}

By calling the access and withMutation methods that the @Observable macro adds to our Counter, we can tell the framework when we’re accessing and mutating state. This will tie into our Observable’s regular state tracking and it will allow our views to update when we change our count property.

Mutex or actor? How to decide?

Choosing between a mutex and an actor is not always trivial or obvious. Actors are really good in concurrent environments when you already have a whole bunch of asynchronous code. When you don't want to introduce async code, or when you're only protecting one or two properties, you're probably in the territory where a mutex makes more sense because the mutex will not force you to write asynchronous code anywhere.

I could pretend that this is a trivial decision and you should always use mutexes for simple operations like our counter and actors only make sense when you want to have a whole bunch of stuff working asynchronously, but the decision usually isn't that straightforward.

In terms of performance, actors and mutexes don't vary that much, so there's not a huge obvious performance benefit that should make you lean in one direction or the other.

In the end, your choice should be based on convenience, consistency, and intent. If you're finding yourself having to introduce a ton of async code just to use an actor, you're probably better off using a Mutex.

Actors should be considered an asynchronous tool that should only be used in places where you’re intentionally introducing and using concurrency. They’re also incredibly useful when you’re trying to wrap longer-running operations in a way that makes them thread-safe. Actors don’t block execution which means that you’re completely fine with having “slower” code on an actor.

When in doubt, I like to try both for a bit and then I stick with the option that’s most convenient to work with (and often that’s the Mutex...).

In Summary

In this post, you've learned about mutexes and how you can use them to protect mutable state. I showed you how they’re used, when they’re useful, and how a Mutex compares to an actor.

You also learned a little bit about how you can choose between an actor or a property that's protected by a mutex.

Making a choice between an actor or a Mutex is, in my opinion, not always easy but experimenting with both and seeing which version of your code comes out easier to work with is a good start when you’re trying to decide between a Mutex and an actor.