Using Observations to observe @Observable model properties

Starting with Xcode 26, there's a new way to observe properties of your @Observable models. In the past, we had to use the withObservationTracking function to access properties and receive changes with willSet semantics. In Xcode 26 and Swift 6.2, we have access to an entirely new approach that will make observing our models outside of SwiftUI much simpler.

In this post, we'll take a look at how we can use Observations to observe model properties. We'll also go over some of the possible pitfalls and caveats associated with Observations that you should be aware of.

Setting up an observation sequence

Swift's new Observations object allows us to build an AsyncSequence based on properties of an @Observable model.

Let's consider the following @Observable model:

@Observable 
class Counter {
  var count: Int = 0
}

Let's say we'd like to observe changes to the count property outside of a SwiftUI view. Maybe we're building something on the server or command line where SwiftUI isn't available. Or maybe you're observing this model to kick off some non-UI related process. It really doesn't matter that much. The point of this example is that we're having to observe our model outside of SwiftUI's automatic tracking of changes to our model.

To observe our Counter without the new Observations, you'd write something like the following:

class CounterObserver {
  let counter: Counter

  init(counter: Counter) {
    self.counter = counter
  }

  func observe() {
    withObservationTracking { 
      print("counter.count: \(counter.count)")
    } onChange: {
      self.observe()
    }
  }
}

This uses withObservationTracking which comes with its own caveats as well as a pretty clunky API.

When we refactor the above to work with the new Observations, we get something like this:

class CounterObserver {
  let counter: Counter

  init(counter: Counter) {
    self.counter = counter
  }

  func observe() {
    Task { [weak self] in
      let values = Observations { [weak self] in
        guard let self else { return 0 }
        return self.counter.count 
      }

      for await value in values {
        guard let self else { break }
        print("counter.count: \(value)")
      }
    }
  }
}

There are two key steps to observing changes with Observations:

  1. Setting up your async sequence of observed values
  2. Iterating over your observation sequence

Let's take a closer look at both steps to understand how they work.

Setting up an async sequence of observed values

The Observations object that we created in the example is an async sequence. This sequence will emit values whenever a change to our model's values is detected. Note that Observations will only inform us about changes that we're actually interested in. This means that the only properties that we're informed about are properties that we access in the closure that we pass to Observations.

This closure also returns a value. The returned value is the value that's emitted by the async sequence that we create.

In this case, we created our Observations as follows:

let values = Observations { [weak self] in
  guard let self else { return 0 }
  return self.counter.count 
}

This means that we observe and return whatever value our count is.

We could also change our code as follows:

let values = Observations { [weak self] in
  guard let self else { return "" }
  return "counter.count is \(self.counter.count)"
}

This code observes counter.count but our async sequence will provide us with strings instead of just the counter's value.

There are two things about this code that I'd like to focus on: memory management and the output of our observation sequence.

Let's look at the output first, and then we can talk about the memory management implications of using Observations.

Sequences created by Observations automatically observe every property that you access in your Observations closure. In this case we've only accessed a single property, so we're informed whenever count changes. If we accessed more properties, a change to any of them would cause us to receive a new value. Whatever we return from the Observations closure is what our async sequence will output. In this case that's a string, but it can be anything we want. The properties we access don't even have to be part of our return value; accessing a property is enough to have your closure called again, even when you don't use that property to compute your return value.
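For example, assuming our Counter also had a hypothetical step property, observing both values at once could look like this:

let values = Observations { [weak self] in
  guard let self else { return "" }
  // Both count and step are accessed here, so a change to either
  // property causes the sequence to emit a new string.
  // `step` is a hypothetical second property on Counter.
  return "count: \(self.counter.count), step: \(self.counter.step)"
}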

You have probably noticed that my Observations closure contains a [weak self]. Every time a change to our observed properties happens, the Observations closure gets called. That means that internally, Observations will have to somehow retain our closure. As a result of that, we can create a retain cycle by capturing self strongly inside of an Observations closure. To break that, we should use a weak capture.

This weak capture means that we have an optional self to deal with. In my case, I opted to return an empty string instead of nil. That's because I don't want to have to work with an optional value later on in my iteration, but if you're okay with that then there's nothing wrong with returning nil instead of a default value. Do note that returning a default value does not do any harm as long as you're setting up your iteration of the async sequence correctly.
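As a small sketch, the optional-returning variant could look like this:

// Returning an optional instead of a fallback value; the element type of
// the resulting sequence becomes Int? and you unwrap it while iterating.
let optionalValues = Observations { [weak self] in
  self?.counter.count
}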

Speaking of which, let's take a closer look at that.

Iterating over your observation sequence

Once you've set up your Observations, you have an async sequence that you can iterate over. This sequence will output the values that you return from your Observations closure. As soon as you start iterating, you will immediately receive the "current" value for your observation.

Iterating over your sequence is done with an async for loop which is why we're wrapping this all in a Task:

Task { [weak self] in
  let values = Observations { [weak self] in
    guard let self else { return 0 }
    return self.counter.count 
  }

  for await value in values {
    guard let self else { break }
    print("counter.count: \(value)")
  }
}

Wrapping our work in a Task means that our Task needs a [weak self] just like our Observations closure does. The reason is slightly different though. If you want to learn more about memory management in tasks that contain async for loops, I highly recommend you read my post on the topic.

When iterating over our Observations sequence we'll receive values in our loop after they've been assigned to our @Observable model. This means that Observations sequences have "did set semantics" while withObservationTracking would have given us "will set semantics".

Now that we know about the happy paths of Observations, let's talk about some caveats.

Caveats of Observations

When you observe values with Observations, the first and main caveat that I'd like to point out is that memory management is crucial to avoiding retain cycles. You've learned about this in the previous section, and getting it all right can be tricky, especially because how and when you unwrap self in your Task matters a great deal. Do it before the for loop and you've created a memory leak that lasts until the Observations sequence ends (which it won't).

A second caveat that I'd like to point out is that you can miss values from your Observations sequence if it produces values faster than you're consuming them.

So for example, if we introduce a sleep of three seconds in our loop we'll end up with missed values when we produce a new value every second:

for await value in values {
  guard let self else { break }
  print(value)
  try await Task.sleep(for: .seconds(3))
}

The result of sleeping in this loop while we produce more values is that we will miss values that were sent during the sleep. Every time we receive a new value, we receive the "current" value and we'll miss any values that were sent in between.

Usually this is fine, but if you want to process every value that got produced and processing might take some time, you'll want to make sure that you implement some buffering of your own. For example, if every produced value would result in a network call you'd want to make sure that you don't await the network call inside of your loop since there's a good chance that you'd miss values when you do that.
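One way to keep the loop itself fast is to hand each value off to its own unstructured task instead of awaiting the slow work inline. Here's a rough sketch; performNetworkCall(for:) is a hypothetical method:

for await value in values {
  guard let self else { break }
  // Kick the slow work off separately so the loop can move on to the
  // next value right away instead of awaiting a network call inline.
  Task {
    await self.performNetworkCall(for: value)
  }
}

Each inner task holds on to self for the duration of a single call, which is usually acceptable for short-lived work.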

Overall, I think Observations is a huge improvement over the tools we had before it came around. Improvements can be made in the buffering department, but I think for a lot of applications the current situation is good enough to give it a try.

How to unwrap [weak self] in Swift Concurrency Tasks?

If you use Swift regularly, [weak self] is probably almost muscle memory to you. I've written about using [weak self] before in the context of when you should generally capture self weakly in your closures to avoid retain cycles. The bottom line of that post is that closures that aren't @escaping will usually not need a [weak self] because the closures aren't retained beyond the scope of the function you're passing them to. In other words, closures that aren't @escaping don't usually cause memory leaks. I'm sure there are exceptions, but generally speaking I've found this rule of thumb to hold up.

This idea of not needing [weak self] for all closures is reinforced by the introduction of SE-0269 which allows us to leverage implicit self captures in situations where closures aren’t retained, making memory leaks unlikely.

Later, I also wrote about how Task instances that iterate async sequences are fairly likely to have memory leaks due to this implicit usage of self.

So how do we use [weak self] on Task? And if we shouldn't, how do we avoid memory leaks?

In this post, I aim to answer these questions.

The basics of using [weak self] in completion handlers

As Swift developers, our first instinct is to do a weak -> strong dance in pretty much every closure. For example:

loadData { [weak self] data in 
  guard let self else { return }

  // use data
}

This approach makes a lot of sense. We start the call to loadData, and once the data is loaded our closure is called. Because we don't need to run the closure if self has been deallocated during our loadData call, we use guard let self to make sure self is still there before we proceed.

This becomes increasingly important when we stack work:

loadData { [weak self] data in 
  guard let self else { return }

  processData(data) { [weak self] models in 
    // use models
  }
}

Notice that we use [weak self] in both closures. Once we grab self with guard let self our reference is strong again. This means that for the rest of our closure, self is held on to as a strong reference. Due to SE-0269 we can call processData without writing self.processData if we have a strong reference to self.

The closure we pass to processData also captures self weakly. That's because we don't want that closure to capture our strong reference. We need a new [weak self] to prevent the closure that we passed to processData from creating a (short-lived) memory leak.

When we take all this knowledge and we transfer it to Task, things get interesting...

Using [weak self] and unwrapping it immediately in a Task

Let's say that we want to write an equivalent of our loadData and processData chain, but they're now async functions that don't take a completion handler.

A common first approach would be to do the following:

Task { [weak self] in
  guard let self else { return }

  let data = await loadData()
  let models = await processData(data)
}

Unfortunately, this code does not solve the memory leak that we solved in our original example.

An unstructured Task you create will start running as soon as possible. This means that if we have a function like below, the task will run as soon as the function reaches the end of its body:

func loadModels() {
  // 1
  Task { [weak self] in
    // 3: _immediately_ after the function ends
    guard let self else { return }

    let data = await loadData()
    let models = await processData(data)
  }
  // 2
}

More complex call stacks might push the start of our task back by a bit, but generally speaking, the task will run pretty much immediately.

The problem with guard let self at the start of your Task

Because Task in Swift starts running as soon as possible, the chance of self getting deallocated in the time between creating and starting the task is very small. It's not impossible, but by the time your Task starts, it's likely self is still around no matter what.

After we make our reference to self strong, the Task holds on to self until the Task completes. In our case that means that we retain self until our call to processData completes. If we translate this back to our old code, here's what the equivalent would look like in callback based code:

loadData { data in 
  self.processData(data) { models in 
    // for example, self.useModels
  }
}

We don't have [weak self] anywhere. This means that self is retained until the closure we pass to processData has run.

The exact same thing is happening in our Task above.

Generally speaking, this isn't a problem. Your work will finish and self is released. Maybe it sticks around a bit longer than you'd like but it's not a big deal in the grand scheme of things.

But how would we prevent kicking off processData if self has been deallocated in this case?

Preventing a strong self inside of your Task

We could make sure that we never make our reference to self into a strong one. For example, by checking if self is still around through a nil check or by guarding the result of processData. I'm using both techniques in the snippet below, although the guard self != nil could be omitted in this case:

Task { [weak self] in
  let data = await loadData()
  guard self != nil else { return }

  guard let models = await self?.processData(data) else {
    return
  }

  // use models
}

The code isn't pretty, but it would achieve our goal.

Let's take a look at a slightly more complex issue that involves repeatedly fetching data in an unstructured Task.

Using [weak self] in a longer running Task

Our original example featured two async calls that, based on their names, probably wouldn't take all that long to complete. In other words, we were solving a memory leak that would typically solve itself within a matter of seconds and you could argue that's not actually a memory leak worth solving.

A more complex and interesting example could look as follows:

func loadAllPages() {
  // only fetch pages once
  guard fetchPagesTask == nil else { return }

  fetchPagesTask = Task { [weak self] in
    guard let self else { return }

    var hasMorePages = true
    while hasMorePages && !Task.isCancelled {
      let page = await fetchNextPage()
      hasMorePages = !page.isLastPage
    }

    // we're done, we could call loadAllPages again to restart the loading process
    fetchPagesTask = nil
  }
}

Let's remove some noise from this function so we can see the bits that are actually relevant to whether or not we have a memory leak. I wanted to show you the full example to help you understand the bigger picture of this code sample...

Task { [weak self] in
  guard let self else { return }

  var hasMorePages = true
  while hasMorePages {
    let page = await fetchNextPage()
    hasMorePages = !page.isLastPage
  }
}

There. That's much easier to look at, isn't it?

So in our Task we have a [weak self] capture, and we immediately unwrap it with a guard let self. You already know this won't do what we want it to. The Task will start running immediately, and self will be held on to strongly until our task ends. That said, we do want our Task to end if self is deallocated.

To achieve this, we can actually move our guard let self into the while loop:

Task { [weak self] in
  var hasMorePages = true

  while hasMorePages {
    guard let self else { break }
    let page = await fetchNextPage()
    hasMorePages = !page.isLastPage
  }
}

Now, every iteration of the while loop gets its own strong self that's released at the end of the iteration. The next one attempts to capture its own strong copy. If that fails because self is now gone, we break out of the loop.

We fixed our problem by capturing a strong reference to self only when we need it, and by making it as short-lived as possible.

In Summary

Most Task closures in Swift don't strictly need [weak self] because the Task generally only exists for a relatively short amount of time. If you find that you do want to make sure that the Task doesn't cause memory leaks, you should make sure that the first line in your Task isn't guard let self else { return }. If that's the first line in your Task, you're capturing a strong reference to self as soon as the Task starts running which usually is almost immediately.

Instead, unwrap self only when you need it and make sure you keep the unwrapped self around for as short a time as possible (for example in a loop's body). You could also use self? to avoid unwrapping altogether; that way you never grab a strong reference to self. Lastly, you could consider not capturing self at all. If you can, capture only the properties you need so that the task doesn't keep all of self alive when it only needs part of it.
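For example, a sketch of capturing just a dependency instead of self; networkClient is assumed to be a property on self:

Task { [networkClient] in
  // Only networkClient is captured; self can be deallocated freely
  // while this task runs because the task never references it.
  let data = await networkClient.loadData()
  print("loaded \(data.count) bytes")
}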

Should you opt in to Swift 6.2’s Main Actor isolation?

Swift 6.2 comes with some interesting Concurrency improvements. One of the most notable changes is that there's now a compiler flag that will, by default, isolate all your (implicitly nonisolated) code to the main actor. This is a huge change, and in this post we'll explore whether or not it's a good change. We'll do this by taking a look at some of the complexities that concurrency introduces naturally, and we'll assess whether moving code to the main actor is the (correct) solution to these problems.

By the end of this post, you should hopefully be able to decide for yourself whether or not main actor isolation makes sense. I encourage you to read through the entire post and to carefully think about your code and its needs before you jump to conclusions. In programming, the right answer to most problems depends on the exact problems at hand. This is no exception.

We'll start off by looking at the defaults for main actor isolation in Xcode 26 and Swift 6. Then we'll move on to determining whether we should keep these defaults or not.

Understanding how Main Actor isolation is applied by default in Xcode 26

When you create a new project in Xcode 26, that project will have two new features enabled:

  • Global actor isolation is set to MainActor.self
  • Approachable concurrency is enabled

If you want to learn more about approachable concurrency in Xcode 26, I recommend you read about it in my post on Approachable Concurrency.

The global actor isolation setting will automatically isolate all your code to either the Main Actor or no actor at all (nil and MainActor.self are the only two valid values).

This means that all code that you write in a project created with Xcode 26 will be isolated to the main actor (unless it's isolated to another actor or you mark the code as nonisolated):

// this class is @MainActor isolated by default
class MyClass {
  // this property is @MainActor isolated by default
  var counter = 0

  func performWork() async {
    // this function is @MainActor isolated by default
  }

  nonisolated func performOtherWork() async {
    // this function is nonisolated so it's not @MainActor isolated
  }
}

// this actor and its members won't be @MainActor isolated
actor Counter {
  var count = 0
}

The result of your code being main actor isolated by default is that your app will effectively be single threaded unless you explicitly introduce concurrency. Everything you do will start off on the main thread and stay there unless you decide you need to leave the Main Actor.

Understanding how Main Actor isolation is applied for new SPM Packages

For SPM packages, it's a slightly different story. A newly created SPM Package will not have its defaultIsolation flag set at all. This means that a new SPM Package will not isolate your code to the MainActor by default.

You can change this by passing defaultIsolation to your target's swiftSettings:

swiftSettings: [
    .defaultIsolation(MainActor.self)
]

Note that a newly created SPM Package also won't have Approachable Concurrency turned on. More importantly, it won't have NonIsolatedNonSendingByDefault turned on by default. This means that there's an interesting difference between code in your SPM Packages and your app target.

In your app target, everything will run on the Main Actor by default. Any functions that you've defined in your app target and are marked as nonisolated and async will run on the caller's actor by default. So if you're calling your nonisolated async functions from the main actor in your app target they will run on the Main Actor. Call them from elsewhere and they'll run there.

In your SPM Packages, the default is for your code to not run on the Main Actor, and for nonisolated async functions to run on a background thread no matter what.
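Here's a small sketch to illustrate the difference; Analytics is a hypothetical type:

class Analytics {
  nonisolated func log(_ event: String) async {
    // In an app target with approachable concurrency enabled, this runs on
    // the caller's actor: call it from the Main Actor and it stays there.
    // In a default SPM package (no NonisolatedNonsendingByDefault), the same
    // function always hops to the global executor, i.e. a background thread.
    print("logged: \(event)")
  }
}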

Confusing isn't it? I know...

The rationale for running code on the Main Actor by default

In a codebase that relies heavily on concurrency, you'll have to deal with a lot of concurrency-related complexity. More specifically, a codebase with a lot of concurrency will have a lot of data race potential. This means that Swift will flag a lot of potential issues (when you're using the Swift 6 language mode) even when you never really intended to introduce a ton of concurrency. Swift 6.2 is much better at recognizing code that's safe even though it's concurrent but as a general rule you want to manage the concurrency in your code carefully and avoid introducing concurrency by default.

Let's look at a code sample where we have a view that leverages a task view modifier to retrieve data:

struct MoviesList: View {
  @State var movieRepository = MovieRepository()
  @State var movies = [Movie]()

  var body: some View {
    Group {
      if movies.isEmpty == false {
        List(movies) { movie in
          Text(movie.id.uuidString)
        }
      } else {
        ProgressView()
      }
    }.task {
      do {
        // Sending 'self.movieRepository' risks causing data races
        movies = try await movieRepository.loadMovies()
      } catch {
        movies = []
      }
    }
  }
}

This code has an issue: sending self.movieRepository risks causing data races.

The reason we're seeing this error is that we're calling a nonisolated async method on an instance of MovieRepository that is isolated to the main actor. That's a problem because inside of loadMovies we have access to self from a background thread, since that's where loadMovies would run. At the same time, we have access to our instance from inside of our view, so we are indeed creating a possible data race.
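For context, before adding any isolation annotations the repository might look roughly like this; the implementation is an assumption based on the snippets that follow, and Movie is the model used by the view above:

class MovieRepository {
  func loadMovies() async throws -> [Movie] {
    let req = makeRequest()
    // Without main actor isolation, this nonisolated async function runs
    // off the main actor, while the MovieRepository instance itself is
    // held by a @MainActor-isolated SwiftUI view.
    return try await perform(req)
  }

  func makeRequest() -> URLRequest {
    URLRequest(url: URL(string: "https://example.com")!)
  }

  func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(T.self, from: data)
  }
}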

There are two ways to fix this:

  1. Make sure that loadMovies runs on the same actor as its callsite (this is what nonisolated(nonsending) would achieve)
  2. Make sure that loadMovies runs on the Main Actor

Option 2 makes a lot of sense because, as far as this example is concerned, we always call loadMovies from the Main Actor anyway.

Depending on the contents of loadMovies and the functions that it calls, we might simply be moving our compiler error from the view over to our repository: the newly @MainActor isolated loadMovies might be calling a non-Main Actor isolated function internally, on an object that isn't Sendable nor isolated to the Main Actor.

Eventually, we might end up with something that looks as follows:

class MovieRepository {
  @MainActor
  func loadMovies() async throws -> [Movie] {
    let req = makeRequest()
    let movies: [Movie] = try await perform(req)

    return movies
  }

  func makeRequest() -> URLRequest {
    let url = URL(string: "https://example.com")!
    return URLRequest(url: url)
  }

  @MainActor
  func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
    let (data, _) = try await URLSession.shared.data(for: request)
    // Sending 'self' risks causing data races
    return try await decode(data)
  }

  nonisolated func decode<T: Decodable>(_ data: Data) async throws -> T {
    return try JSONDecoder().decode(T.self, from: data)
  }
}

We've @MainActor isolated all async functions except for decode. At this point we can't call decode because we can't safely send self into the nonisolated async function decode.

In this specific case, the problem could be fixed by marking MovieRepository as Sendable. But let's assume that we have reasons that prevent us from doing so. Maybe the real object holds on to mutable state.

We could fix our problem by actually making all of MovieRepository isolated to the Main Actor. That way, we can safely pass self around even if it has mutable state. And we can still keep our decode function as nonisolated and async to prevent it from running on the Main Actor.

The problem with the above...

Finding the solution to the issues I describe above is pretty tedious, and it forces us to explicitly opt-out of concurrency for specific methods and eventually an entire class. This feels wrong. It feels like we're having to decrease the quality of our code just to make the compiler happy.

In reality, the default in Swift 6.1 and earlier was to introduce concurrency by default. Run as much as possible in parallel and things will be great.

This is almost never true. Concurrency is not the best default to have.

In code that you wrote pre-Swift Concurrency, most of your functions would just run wherever they were called from. In practice, this meant that a lot of your code would run on the main thread without you worrying about it. It simply was how things worked by default and if you needed concurrency you'd introduce it explicitly.

The new default in Xcode 26 returns this behavior both by running your code on the main actor by default and by having nonisolated async functions inherit the caller's actor by default.

This means that the example we had above becomes much simpler with the new defaults...

Understanding how default isolation simplifies our code

If we set our default isolation to the Main Actor and enable Approachable Concurrency, we can rewrite the code from earlier as follows:

class MovieRepository {
  func loadMovies() async throws -> [Movie] {
    let req = makeRequest()
    let movies: [Movie] = try await perform(req)

    return movies
  }

  func makeRequest() -> URLRequest {
    let url = URL(string: "https://example.com")!
    return URLRequest(url: url)
  }

  func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
    let (data, _) = try await URLSession.shared.data(for: request)
    return try await decode(data)
  }

  @concurrent func decode<T: Decodable>(_ data: Data) async throws -> T {
    return try JSONDecoder().decode(T.self, from: data)
  }
}

Our code is much simpler and safer, and we've inverted one key part of the code. Instead of introducing concurrency by default, I had to explicitly mark my decode function as @concurrent. By doing this, I ensure that decode is not Main Actor isolated and always runs on a background thread. Meanwhile, both my async and my plain functions in MovieRepository run on the Main Actor. This is perfectly fine because once I hit an await like I do in perform, the async function I'm in suspends so the Main Actor can do other work until the function I'm awaiting returns.

Performance impact of Main Actor by default

While running code concurrently can increase performance, concurrency doesn't always increase performance. Additionally, while blocking the main thread is bad, we shouldn't be afraid to run code on the main thread.

Whenever a program runs code on one thread, then hops to another, and then back again, there's a performance cost to be paid. It's a small cost usually, but it's a cost either way.

It's often cheaper for a quick operation that started on the Main Actor to stay there than it is to perform that operation on a background thread and hand the result back to the Main Actor. Being on the Main Actor by default means that it's much more explicit when you're leaving the Main Actor, which makes it easier for you to determine whether you're ready to pay the cost for thread hopping or not. I can't decide for you what the cutoff is for it to be worth paying that cost, I can only tell you that there is a cost. And for most apps the cost is probably small enough for it to never matter. By defaulting to the Main Actor you can avoid paying the cost accidentally, and I think that's a good thing.

So, should you set your default isolation to the Main Actor?

For your app targets it makes a ton of sense to run on the Main Actor by default. It allows you to write simpler code, and to introduce concurrency only when you need it. You can still mark objects as nonisolated when you find that they need to be used from multiple actors without awaiting each interaction with those objects (models are a good example of objects that you'll probably mark nonisolated). You can use @concurrent to ensure certain async functions don't run on the Main Actor, and you can use nonisolated on functions that should inherit the caller's actor. Finding the correct keyword can sometimes be a bit of a trial and error but I typically use either @concurrent or nothing (@MainActor by default). Needing nonisolated is more rare in my experience.
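For example, a rough sketch of what this can look like in an app target that has Main Actor isolation as its default (Workout and WorkoutStore are made-up types):

// A model that several actors need to touch without awaiting; marking the
// type nonisolated opts it out of the Main Actor default.
nonisolated struct Workout: Sendable {
  let name: String
}

class WorkoutStore {
  // Implicitly @MainActor because of the default isolation setting.
  var workouts: [Workout] = []

  // Explicitly runs off the Main Actor on the global executor.
  @concurrent func expensiveSort(_ input: [Workout]) async -> [Workout] {
    input.sorted { $0.name < $1.name }
  }
}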

For your SPM Packages the decision is less obvious. If you have a Networking package, you probably don't want it to use the main actor by default. Instead, you'll want to make everything in the Package Sendable, for example. Or maybe you want to design your Networking object as an actor. It's entirely up to you.

If you're building UI Packages, you probably do want to isolate those to the Main Actor by default since pretty much everything that you do in a UI Package should be used from the Main Actor anyway.

The answer isn't a simple "yes, you should", but I do think that when you're in doubt isolating to the Main Actor is a good default choice. When you find that some of your code needs to run on a background thread you can use @concurrent.

Practice makes perfect, and I hope that by understanding the "Main Actor by default" rationale you can make an educated decision on whether you need the flag for a specific app or Package.

What is Approachable Concurrency in Xcode 26?

Xcode 26 allows developers to opt-in to several of Swift 6.2’s features that will make concurrency more approachable to developers through a compiler setting called “Approachable Concurrency” or SWIFT_APPROACHABLE_CONCURRENCY. In this post, we’ll take a look at how to enable approachable concurrency, and which compiler settings are affected by it.

How to enable approachable concurrency in Xcode?

To enable approachable concurrency, you should go to your project’s build settings and perform a search for “approachable concurrency” or just the word “approachable”. This will filter all available settings and should show you the setting you’re interested in:

By default, this setting will be set to No which means that you’re not using Approachable Concurrency by default as of Xcode 26 Beta 2. This might change in a future release and this post will be updated if that happens.

The exact settings that you see enabled under Swift Compiler - Upcoming Features will be different depending on your Swift Language Version. If you’re using the Swift 6 Language Version, you will see everything except the following two settings set to Yes:

  • Infer isolated conformances
  • nonisolated(nonsending) By Default

If you’re using the Swift 5 Language Version like I am in my sample project, you will see everything set to Yes by default if you've created your project in Xcode 26. If you've created your project with Xcode 16 and are using the Swift 5 language mode, you'll find that Approachable Concurrency is not on by default.

To turn on approachable concurrency, set the value to Yes for your target:

This will automatically opt you in to all features shown above. Let’s take a look at all five settings to see what they do, and why they’re important to making concurrency more approachable.

Enabling approachable concurrency in a Swift Package

Packages are a little bit more complex than Xcode projects. By default, a newly created package will use the Swift 6.2 toolchain and the Swift 6 language mode. In practice, this will mean that most of approachable concurrency's features will be on by default. There are two features that you'll need to enable manually though:

swiftSettings: [
  .enableUpcomingFeature("NonisolatedNonsendingByDefault"),
  .enableUpcomingFeature("InferIsolatedConformances")
]

If you're using the Swift 5 language mode in your package, your swift settings should look a bit more like this:

swiftSettings: [
   .swiftLanguageMode(.v5),
   .enableUpcomingFeature("NonisolatedNonsendingByDefault"),
   .enableUpcomingFeature("InferIsolatedConformances"),
   .enableUpcomingFeature("InferSendableFromCaptures"),
   .enableUpcomingFeature("DisableOutwardActorInference"),
   .enableUpcomingFeature("GlobalActorIsolatedTypesUsability"),
]

Adding these settings to your package will get you an equivalent setup to that of Xcode when you enable approachable concurrency for your app target.

Which settings are part of approachable concurrency?

Approachable concurrency mostly means that Swift Concurrency will be more predictable in terms of compiler errors and warnings. In lots of cases Swift Concurrency had strange and hard to understand behaviors that resulted in compiler errors that weren’t strictly needed.

For example, if your code could have a data race the compiler would complain even when it could prove that no data race would occur when the code would be executed.

With approachable concurrency, we opt-in to a range of features that make this easier to reason about. Let’s take a closer look at these features starting with nonisolated(nonsending) by default.

Understanding nonisolated(nonsending) By Default

The compiler setting for nonisolated(nonsending) is probably the most important. With nonisolated(nonsending), your nonisolated async functions will run on the calling actor’s executor by default. It used to be the case that a nonisolated async function would always run on the global executor. Now that behavior changes and becomes consistent with nonisolated functions that are not async.

The @concurrent declaration is also part of this feature. You can study this declaration more in-depth in my post on @concurrent.
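Here's a small sketch of the difference; the type and function names are made up:

struct NumberCruncher {
  // With NonisolatedNonsendingByDefault, this nonisolated async function
  // runs on whatever actor called it: call it from the Main Actor and it
  // stays on the Main Actor instead of hopping to the global executor.
  nonisolated func sum(_ values: [Int]) async -> Int {
    values.reduce(0, +)
  }

  // @concurrent restores the old behavior: this function always runs on
  // the global executor, off the caller's actor.
  @concurrent func heavySum(_ values: [Int]) async -> Int {
    values.reduce(0, +)
  }
}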

Understanding Infer Sendable for Methods and Key Path Literals

This compiler flag introduces a less obvious, but still useful improvement to how Swift handles functions and key paths. It allows functions of types that are Sendable to automatically be considered Sendable themselves without forcing developers to jump through hoops.

Similarly, in some cases where you’d leverage KeyPath in Swift, the compiler would complain about key paths capturing non-Sendable state even when there’s no real potential for a data race in certain cases.
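On the key path side, a small sketch of what this enables could look like this, assuming a simple User type:

struct User: Sendable {
  var name: String
}

// With this feature enabled, the key path literal below is inferred to be
// Sendable because it doesn't capture any non-Sendable state, so it can be
// passed across concurrency domains without warnings.
let nameKeyPath: KeyPath<User, String> & Sendable = \User.name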

This feature is already part of Swift 6 and is enabled in Approachable Concurrency in the Swift 5 Language Version (which is the default).

I’ve found that this setting solves a real issue, but not one that I think a lot of developers will immediately benefit from.

Understanding Infer Isolated Conformances

In Swift 6, it’s possible to have protocol conformances that are isolated to a specific global actor. The Infer Isolated Conformances build setting will make it so that protocol conformances on a type that’s isolated to a global actor will automatically be isolated to the same global actor.

Consider the following code:

@MainActor
struct MyModel: Decodable {
}

I’ve explicitly constrained MyModel to the main actor. But without inferring isolated conformances, my conformance to Decodable is not on the main actor which can result in compiler errors.

That’s why with SE-0470, we can turn on a feature that will allow the compiler to automatically isolate our conformance to Decodable to the main actor if the conforming type is also isolated to the main actor.

Understanding global-actor-isolated types usability

This build setting is another one that’s always on when you’re using the Swift 6 Language mode. With this feature, the compiler will make it less likely that you need to mark a property as nonisolated(unsafe). This escape hatch exists for properties that can safely be transferred across concurrency domains even when they’re not sendable.

In some cases, the compiler can actually prove that even though a property isn’t sendable, it’s still safe to be passed from one isolation context to another. For example, if you have a type that is isolated to the main actor, its properties can be passed to other isolation contexts without problems. You don’t need to mark these as nonisolated(unsafe) because you can only interact with these properties from the main actor anyway.

This setting also includes other improvements to the compiler that will allow globally isolated types to use non-Sendable state due to the protection that’s imposed by the type being isolated to a global actor.

Again, this feature is always on when you’re using the Swift 6 Language Version, and I think it’s a type of problem that you might have run into in the past so it’s nice to see this solved through a build setting that makes the compiler smarter.

Understanding Disable outward actor isolation inference

This build setting applies to code that’s using property wrappers. This is another setting that’s always on in the Swift 6 language mode and it fixes a rather surprising behavior that some developers might remember from SwiftUI.

This setting is explained in depth in SE-0401 but the bottom line is this.

If you’re using a property wrapper that has an actor-isolated wrappedValue (like @StateObject which has a wrappedValue that’s isolated to the main actor) then the entire type that uses that property wrapper is also isolated to the same actor.

In other words, back when View wasn’t annotated with @MainActor in SwiftUI, using @StateObject in your View would make your View struct @MainActor isolated.
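For example (ProfileViewModel is a made-up ObservableObject here):

import SwiftUI

final class ProfileViewModel: ObservableObject {
  @Published var name = ""
}

struct ProfileView: View {
  // @StateObject's wrappedValue is isolated to the Main Actor. Before
  // SE-0401, using it here implicitly made all of ProfileView @MainActor
  // isolated; with outward inference disabled, it no longer does.
  @StateObject private var viewModel = ProfileViewModel()

  var body: some View {
    Text(viewModel.name)
  }
}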

This behavior was implicit and very confusing so I’m honestly quite glad that this feature is gone in the Swift 6 Language Version.

Deciding whether you should opt-in

Now that you know a little bit more about the features that are part of approachable concurrency, I hope that you can see that it makes a lot of sense to opt-in to approachable concurrency. Paired with your code running on the main actor by default for new projects created with Xcode 26, you’ll find that approachable concurrency really does deliver on its promise. It gets rid of certain obscure compiler errors that required weird fixes for non-existent problems.

Ternary operator in Swift explained

The ternary operator is one of those things that will exist in virtually any modern programming language. When writing code, a common goal is to make sure that your code is succinct and no more verbose than it needs to be. A ternary expression is a useful tool to achieve this.

What is a ternary?

Ternaries are essentially a quick way to write an if statement on a single line. For example, if you want to tint a SwiftUI button based on a specific condition, your code might look a bit as follows:

struct SampleView: View {
  @State var username = ""

  var body: some View {
    Button {} label: {
      Text("Submit")
    }.tint(username.isEmpty ? .gray : .red)
  }
}

The line where I tint the button contains a ternary and it looks like this: username.isEmpty ? .gray : .red. Generally speaking, a ternary always has the following shape <condition> ? <if true> : <else>. You must always provide all three of these "parts" when using a ternary. It's basically a shorthand way to write an if {} else {} statement.

When should you use ternaries?

Ternary expressions are incredibly useful when you're trying to assign a property based on a simple check. In this case, a simple check to see if a value is empty. When you start nesting ternaries, or you find that you're having to evaluate a complex or long expression, it's probably a good sign that you should not use a ternary.

It's pretty common to use ternaries in SwiftUI view modifiers because they make conditional application or styling fairly straightforward.

That said, a ternary isn't always easy to read so sometimes it makes sense to avoid them.

Replacing ternaries with if expressions

When you're using a ternary to assign a value to a property in Swift, you might want to consider using an if / else expression instead. For example:

let buttonColor: Color = if username.isEmpty { .gray } else { .red }

This syntax is more verbose but it's arguably easier to read. Especially when you make use of multiple lines:

let buttonColor: Color = if username.isEmpty { 
  .gray 
} else {
  .red
}

For now you're only allowed to have a single expression on each codepath, which makes them only marginally better than ternaries for readability. You also can't use if expressions everywhere, so sometimes a ternary is just more flexible.

I find that if expressions strike a balance between evaluating longer and more complex expressions in a readable way while also having some of the conveniences that a ternary has.

Supporting Universal Links on iOS

Allowing other apps and webpages to link into your app with deeplinks is a really good way to make your app more flexible, and to ensure that users of your app can more easily share content with others by sharing direct links to your content.

To support deeplinking on iOS, you have two options available:

  1. Support deeplinking through custom URL schemes like maxine://workout/dw-1238-321-jdjd
  2. Support deeplinking through Universal Links which would look like this https://donnywals.com/maxine-app/workout/dw-1238-321-jdjd

To add support for option one, all you need to do is register your custom URL scheme and implement onOpenURL to handle the incoming links. This approach is outlined in my post on handling deeplinks in a SwiftUI app, so I won’t be including detailed steps for that in this post.

This post will instead focus on showing you how you can set your app up for option 2: Universal Links.

We’ll look at the requirements for Universal Links, how you can enable this on the server, and lastly we’ll see how you can support Universal Links in your app.

The major benefit of Universal Links is that only the owner of a domain can establish a link between an app and a domain. In contrast, when you pick a custom URL scheme, other apps can try to claim the same scheme. The first app that claimed the scheme on a given user’s device will be used to handle URLs with that specific scheme.

With Universal Links, you have full control over which apps are allowed to claim a given domain or path. So in my case, I can make sure that only Maxine will be used to handle URLs that start with https://donnywals.com/maxine-app/.

Setting up your server for Universal Links

Every app that wants to support Universal Links must have a server counterpart. This means that you can only support Universal Links for domains you own.

When a user installs your app, iOS will check for any claims that the app makes about Universal Links. For example, if my app claims to support https://donnywals.com then iOS will perform a check to make sure this claim is correct.

To do that, iOS will request the apple-app-site-association file for my domain; Apple recommends hosting this file at https://donnywals.com/.well-known/apple-app-site-association. Every app that supports Universal Links must be backed by a valid JSON response from that endpoint.

In the JSON that’s returned by this endpoint, the server will specify which apps are allowed to handle Universal Links for this domain. It can also specify which paths or components should or should not be treated as Universal Links.

We’ll look at a couple of examples in this post but for a full overview of what you can and can’t do in your app site association file you can take a look at the applinks documentation on apple.com.

If I were to add support for Universal Links to my own domain, a simple app site association file that I could upload would look as follows:

{
  "applinks": {
    "details": [
      {
        "appIDs": ["4JMM8JMG3H.com.donnywals.ExerciseTracker"],
        "components": [
          "/": "/maxine/*"
        ]
      }
    ]
  }
}

This JSON specifies the appID that’s allowed to be used on this domain. I also specify a components array that defines patterns for which URLs should be redirected to my app. You can set up lots of different rules here as you can see on the page for components.

In this case, I specified that my app will handle any URL that starts with /maxine/. The * at the end means that we allow any sequence of characters to come after /maxine/.

Once you’ve made your apple-app-site-association file available on your site, you can go ahead and configure your app for Universal Links.

Setting up your app for Universal Links

In order to inform iOS about your intent to handle Universal Links, you need to add the Associated Domains capability to your project. Do this by selecting your app target, navigating to Signing & Capabilities, and adding Associated Domains.

After doing this, you need to register your domain using the applinks: prefix. For example, if I want to open links hosted on donnywals.com I need to write applinks:donnywals.com.

When my app is installed, Apple will check my domain’s apple-app-site-association file to verify that my app is allowed to handle links for donnywals.com. If everything checks out, opening links for donnywals.com/maxine/ would open Maxine since that’s the path that I configured in my JSON file.

Testing Universal Links

Universal Links are best tested by tapping on links on your device. I typically have a Notes file with links that I want to test. You can also use a tool like RocketSim if you’re looking for a quick way to test link handling on the simulator.

Note that sometimes Debug builds don’t immediately work with Universal Links. Especially when adding support after having installed the app previously. Reinstalling the app can sometimes solve this. Otherwise a reboot can work wonders too.

When everything works, your app’s onOpenURL view modifiers should be called and you’ll be passed the full URL that your app is asked to handle.
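A minimal sketch of what that handling could look like; ContentView stands in for your root view, and the route parsing is an assumption about how the URLs are structured:

import SwiftUI

@main
struct MaxineApp: App {
  var body: some Scene {
    WindowGroup {
      ContentView()
        .onOpenURL { url in
          // For a link like https://donnywals.com/maxine-app/workout/dw-1238-321-jdjd
          // the last path component identifies the workout to open.
          if url.pathComponents.contains("workout"),
             let workoutID = url.pathComponents.last {
            print("Should open workout \(workoutID)")
          }
        }
    }
  }
}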

To learn more about onOpenURL, refer to my post on handling deeplinks on iOS.

Universal Link best practices

When you add support for Universal Links you implement a reliable way for users to open certain links in your application. That said, users can choose not to follow the link into your app and stay in their browser instead.

When a user refuses to navigate to your app, you want to make sure that they can (at least) see some of the contents that they were supposed to see. Or, at the very least you want to make sure that a user understands that they opened a link that was supposed to take them to your app.

You can host HTML content on the routes that you’d normally redirect to your app. In some cases that means you can show the exact same content that the user would see in the app. In other cases, you might show a page that tells the user that they should either download your app or enable Universal Links for your app again in settings.

Grouping Liquid Glass components using glassEffectUnion on iOS 26

On iOS 26 we have lots of new ways to reimagine our UIs with Liquid Glass. This means that we can look at Apple’s built-in apps to find interesting uses of Liquid Glass, learn how Liquid Glass components can be built, and see what Apple considers to be good practice for Liquid Glass interfaces.

In this post, we’re going to replicate a control that’s part of the new Maps app.

It’s a vertical stack of two buttons in a single Liquid Glass container. Here’s what the component looks like in iOS 26:

And here’s the component that we’ll build in this post:

We’re going to be making use of buttons, button styles, a GlassEffectContainer, and the glassEffectUnion view modifier to achieve our effect.

Building the component’s buttons

We’ll start off with a GlassEffectContainer and a VStack that contains two buttons:

GlassEffectContainer {
    VStack {
        Button {

        } label: {
            Label("Locations", systemImage: "square.2.layers.3d.top.filled")
                .bold()
                .labelStyle(.iconOnly)
                .foregroundStyle(Color.black.secondary)
        }
        .buttonStyle(.glass)

        Button {

        } label: {
            Label("Navigation", systemImage: "location")
                .bold()
                .labelStyle(.iconOnly)
                .foregroundStyle(Color.purple)
        }
        .buttonStyle(.glass)
    }
}

This code will simply create two buttons on top of each other using a glass button style. The resulting UI looks like this:

That’s not great but it’s a start. We need to apply a different buttonStyle and tint our glass to have a white background. The code below shows how to do that. For brevity, I will only show a single button; the buttonStyle should be applied to both of our buttons though:

GlassEffectContainer {
    VStack {
        // ... 

        Button {

        } label: {
            Label("Navigation", systemImage: "location")
                .bold()
                .labelStyle(.iconOnly)
                .foregroundStyle(Color.purple)
        }
        .buttonStyle(.glassProminent)
    }.tint(.white.opacity(0.8))
}

With this code, both buttons have a prominent style which gives them a background color instead of being fully translucent like they are with the normal glass effect:

Now that we have our buttons set up, what we need to do is group them together into a single glass shape. To do this, we use the glassEffectUnion view modifier on both elements that we want to group.

Let’s go ahead and do that next.

Grouping elements using a glassEffectUnion

A glassEffectUnion can be used to have multiple buttons contribute to a single Liquid Glass shape. In our case, we want these two buttons to be treated as a single Liquid Glass shape so they end up looking similar to the Apple Maps components we’re trying to replicate.

First, we need to add a namespace to our container view:

@Namespace var unionNamespace

We’ll use this namespace as a way to connect our elements.

Next, we need to update our buttons:

GlassEffectContainer {
    VStack {
        Button {

        } label: {
            Label("Locations", systemImage: "square.2.layers.3d.top.filled")
                .bold()
                .labelStyle(.iconOnly)
                .foregroundStyle(Color.black.secondary)
        }
        .buttonStyle(.glassProminent)
        .glassEffectUnion(id: "mapOptions", namespace: unionNamespace)

        Button {

        } label: {
            Label("Navigation", systemImage: "location")
                .bold()
                .labelStyle(.iconOnly)
                .foregroundStyle(Color.purple)
        }
        .buttonStyle(.glassProminent)
        .glassEffectUnion(id: "mapOptions", namespace: unionNamespace)
    }.tint(Color.white.opacity(0.8))
}

By applying glassEffectUnion(id: "mapOptions", namespace: unionNamespace) to both views they become connected. There are a few conditions to make the grouping work though:

  • The elements must have the same id for them to be grouped
  • The glass effect that’s used must be the same for all elements in the union or they won’t be grouped
  • All components in the group must be tinted the same way or they won’t be grouped

Now that our elements are grouped, they’re almost exactly where we want them to be:

The buttons are a bit close to the top and bottom edges so we should apply some padding to our Label components. I like the spacing in the middle, so what I’ll do is pad the top of the first Label and the bottom of the second one:

GlassEffectContainer {
    VStack {
        Button {

        } label: {
            Label("Locations", systemImage: "square.2.layers.3d.top.filled")
                .bold()
                .labelStyle(.iconOnly)
                .padding(.top, 8)
                .foregroundStyle(Color.black.secondary)
        }
        .buttonStyle(.glassProminent)
        .glassEffectUnion(id: "mapOptions", namespace: unionNamespace)

        Button {

        } label: {
            Label("Navigation", systemImage: "location")
                .bold()
                .labelStyle(.iconOnly)
                .padding(.bottom, 8)
                .foregroundStyle(Color.purple)
        }
        .buttonStyle(.glassProminent)
        .glassEffectUnion(id: "mapOptions", namespace: unionNamespace)
    }.tint(Color.white.opacity(0.8))
}

This completes our effect:

In Summary

On iOS 26, we have endless new possibilities to build interesting UI components with Liquid Glass. In this post, we tried copying a UI element from Apple’s Maps application to see how we can build a single Liquid Glass element that groups two vertically stacked buttons together.

We used a glassEffectUnion to link together two UI Components and make them appear as a single Liquid Glass shape.

You learned that this view modifier will group any Liquid Glass components that share the same glass style into a single shape. This means these components will look and feel like a single unit.

Designing custom UI with Liquid Glass on iOS 26

Liquid Glass is iOS 26’s new design language. This means that a lot of apps will be adopting a new UI philosophy that might require some significant changes to how you’re designing your app’s UI.

If you’re not ready to adopt Liquid Glass just yet, Apple has provided you an escape hatch that should be usable until the next major iOS release.

I recently explored updating my workout app Maxine to work well with Liquid Glass tab bars which you can learn more about here.

In this post, I’d like to explore how we can build custom Liquid Glass components for our apps running on iOS 26 and its siblings. We’ll start off by exploring when Liquid Glass is appropriate and then move on to look at SwiftUI’s Liquid Glass related view modifiers.

By the end of this post, we’ll have built the UI that you can see in action below (video slowed down for dramatic effect):

If you prefer learning through video, you can take a look at this post on YouTube

When should you use Liquid Glass?

The idea of Liquid Glass is that it acts as a layer on top of your app’s UI. In practice this will usually mean that your main app content isn’t built using the glass style. Doing so would result in some pretty bad looking UI as you can see in this video:

In this video, I applied a glass effect to all of my list rows. The result is a super weird interface that overuses Liquid Glass.

Instead, Liquid Glass should be applied to elements that sit on top of your UI. Examples include toolbars, tab bars, floating action buttons and similar components.

An example of this can be seen right here in Maxine:

The default tab bar is a Liquid Glass component that overlays my list. The floating plus button also has a glass effect applied to it even though you can barely see it due to the light background.

The point is that Liquid Glass elements should always be designed as sitting “on top” of something. They don’t stack, they’re not part of your main UI, they’re always on their own layer when you’re designing.

Now, I’m not a designer. So if you can come up with a great way to use Liquid Glass that places an element in your main content, I’m not going to tell you that you can’t or shouldn’t; you probably know much better than I do. That said, Apple’s philosophy for Liquid Glass is a layered design, so for safety you should probably stick to that.

Applying a Liquid Glass effect to UI elements

Let’s build out a nice UI element that can really benefit from a Liquid Glass look and feel. It’s a UI element that existed in an app called Path which no longer exists, and the UI element hasn’t really been used much since. That said, I like the interaction and I think it’ll be fun to give it a glass overhaul.

Our starting point

You can see an example of the button and its UI right here:

It takes quite some code to achieve this effect, and most of it isn’t relevant to Liquid Glass. That’s why you can take a look at the final code right here on GitHub. There’s a branch for the starting point as well as the end result (main) so you can play around a bit if you’d like.

The view itself looks like this:

struct ContentView: View {
    @State private var isExpanded = false
    var body: some View {
        ZStack(alignment: .bottomTrailing) {
            Color
                .clear
                .overlay(
                    Image("bg_img")
                        .resizable()
                        .scaledToFill()
                        .edgesIgnoringSafeArea(.all)
                )

            button(type: .home)
            button(type: .write)
            button(type: .chat)
            button(type: .email)

            Button {
                withAnimation {
                    isExpanded.toggle()
                }
            } label: {
                Label("Home", systemImage: "list.bullet")
                    .labelStyle(.iconOnly)
                    .frame(width: 50, height: 50)
                    .background(Circle().fill(.purple))
                    .foregroundColor(.white)
            }.padding(32)
        }
    }

    private func button(type: ButtonType) -> some View {
        return Button {} label: {
            Label(type.label, systemImage: type.systemImage)
                .labelStyle(.iconOnly)
                .frame(width: 50, height: 50)
                .background(Circle().fill(.white))
        }
        .padding(32)
        .offset(type.offset(expanded: isExpanded))
        .animation(.spring(duration: type.duration, bounce: 0.2), value: isExpanded)
    }
}

This view on its own isn’t all that interesting; it contains a couple of buttons, and applying a Liquid Glass effect to them shouldn’t be too hard.

Applying a glass effect

To make buttons look like Liquid Glass, you apply the glassEffect view modifier to them:

Button {
    withAnimation {
        isExpanded.toggle()
    }
} label: {
    Label("Home", systemImage: "list.bullet")
        .labelStyle(.iconOnly)
        .frame(width: 50, height: 50)
        .background(Circle().fill(.purple))
        .foregroundColor(.white)
}
.glassEffect()
.padding(32)

After applying the glassEffect modifier to all buttons, the app looks like this when you run it:

We’re not seeing a glass effect at all!

That’s because we also set a background on our buttons, so let’s go ahead and remove the background to see what our view looks like:

Button {
    withAnimation {
        isExpanded.toggle()
    }
} label: {
    Label("Home", systemImage: "list.bullet")
        .labelStyle(.iconOnly)
        .frame(width: 50, height: 50)
        .foregroundColor(.white)
}
.glassEffect()
.padding(32)

If we run the app now, our UI looks like this:

Our icons are a bit hard to read and I’m honestly not exactly sure whether this is a beta bug or whether it’s supposed to be this way.

Note that Button also comes with a .glass button style that you can use. This effect is slightly different from what I’ve used here but I find that the button style doesn’t always allow for the kinds of customizations that I like.

You can apply the glass button style as follows:

Button {
    withAnimation {
        isExpanded.toggle()
    }
} label: {
    Label("Home", systemImage: "list.bullet")
        .labelStyle(.iconOnly)
        .frame(width: 50, height: 50)
        .foregroundColor(.white)
}
.buttonStyle(.glass)
.padding(32)

That said, there are two things I’d like to do at this point:

  1. Apply a background tint to the buttons
  2. Make the buttons appear interactive

Let's start with the background color.

Applying a background color to our glass effect

To style our buttons with a background color, we need to tint our glass. Here’s how we can do that:

Button {
    withAnimation {
        isExpanded.toggle()
    }
} label: {
    Label("Home", systemImage: "list.bullet")
        .labelStyle(.iconOnly)
        .frame(width: 50, height: 50)
        .foregroundColor(.white)
}
.glassEffect(.regular.tint(.purple))
.padding(32)

This already looks a lot better:

Notice that the buttons still have a circular shape even though we're not explicitly drawing a circle background. That’s the default style for components that you apply a glassEffect to. You’ll always get a shape that has rounded corners that fit nicely with the rest of your app’s UI and the context where the effect is applied.
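If you do want a different shape, glassEffect also accepts one through its in: parameter. Here’s a quick sketch of what that could look like; the rounded rectangle and its corner radius are just an example:

Button {
    withAnimation {
        isExpanded.toggle()
    }
} label: {
    Label("Home", systemImage: "list.bullet")
        .labelStyle(.iconOnly)
        .frame(width: 50, height: 50)
        .foregroundColor(.white)
}
// Pass a shape when the default glass shape isn't what you're after.
.glassEffect(.regular.tint(.purple), in: .rect(cornerRadius: 12))
.padding(32)

For the rest of this post, I’ll stick with the default shape.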

I do feel like my buttons are a bit too opaque, so let’s apply a bit of opacity to our tint color to get more of a see-through effect:

Button {
    withAnimation {
        isExpanded.toggle()
    }
} label: {
    Label("Home", systemImage: "list.bullet")
        .labelStyle(.iconOnly)
        .frame(width: 50, height: 50)
        .foregroundColor(.white)
}
.glassEffect(.regular.tint(.purple.opacity(0.8)))
.padding(32)

This is what our view looks like now:

When I tap the buttons now, not a lot happens as shown in the video above. We can do better by making our buttons respond to user interaction.

Making an interactive glass effect

To make our glass buttons respond to user input by growing a bit and applying a sort of shimmer effect, we apply the interactive modifier to the glass effect:

Button {
    withAnimation {
        isExpanded.toggle()
    }
} label: {
    Label("Home", systemImage: "list.bullet")
        .labelStyle(.iconOnly)
        .frame(width: 50, height: 50)
        .foregroundColor(.white)
}
.glassEffect(.regular.tint(.purple.opacity(0.8)).interactive())
.padding(32)

This is what our interactions look like now:

Our UI is coming together. With the glassEffect view modifier, the interactive modifier, and a tint, we managed to build a pretty compelling effect.

However, our UI isn’t quite liquid. You’re looking at distinct buttons performing an effect.

We can group our elements together to make it appear as though they’re all coming from the same drop of glass.

This sounds a bit weird so let’s just jump into an example right away.

Grouping Liquid Glass elements together

The first thing we should do, now that we have a group of elements that all use a Liquid Glass effect, is group them together in a container. This is a recommendation from Apple that helps the system render our effects efficiently. It also makes Liquid Glass elements that are close together blend into each other, which makes it look like they’re all merging and separating as they move around the screen.

GlassEffectContainer {
    button(type: .home)
    button(type: .write)
    button(type: .chat)
    button(type: .email)

    Button {
        withAnimation {
            isExpanded.toggle()
        }
    } label: {
        Label("Home", systemImage: "list.bullet")
            .labelStyle(.iconOnly)
            .frame(width: 50, height: 50)
            .foregroundColor(.white)
    }
    .glassEffect(.regular.tint(.purple.opacity(0.8)).interactive())
    .padding(32)
}

By placing our Liquid Glass UI elements in the same container, the elements will blend together when they’re close to each other in the UI. For example, when we place all buttons in an HStack with no spacing, they end up looking like this:
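For reference, that arrangement is nothing more than the same buttons placed in an HStack inside the container; roughly like this, ignoring the padding and offsets from the real code:

GlassEffectContainer {
    // With no spacing, the individual glass shapes blend into one another.
    HStack(spacing: 0) {
        button(type: .home)
        button(type: .write)
        button(type: .chat)
        button(type: .email)
    }
}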

Because all the elements are in the same GlassEffectContainer, we can now run our animation and have the buttons animate in a fluid manner:

I’ve slowed everything down a bit so you can enjoy the effect and see that the components all originate from a single button, making them look like a liquid.

The math to achieve all this is part of the ButtonType enum in the GitHub repository that you can check out if you want to see exactly how the end result was achieved.
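If you want to dig into how the merge effect is set up, the key ingredient (besides the container) is giving every glass element a stable identity with the glassEffectID modifier. Here’s a rough sketch of how that could look for our button(type:) helper; the namespace name and the use of type.label as an identifier are illustrative, and the repository has the full implementation:

@Namespace private var glassNamespace

private func button(type: ButtonType) -> some View {
    Button {} label: {
        Label(type.label, systemImage: type.systemImage)
            .labelStyle(.iconOnly)
            .frame(width: 50, height: 50)
            .foregroundColor(.white)
    }
    .glassEffect(.regular.tint(.purple.opacity(0.8)).interactive())
    // A stable identity in the container's namespace lets the system
    // morph glass elements into each other instead of fading them.
    .glassEffectID(type.label, in: glassNamespace)
    .padding(32)
    .offset(type.offset(expanded: isExpanded))
    .animation(.spring(duration: type.duration, bounce: 0.2), value: isExpanded)
}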

In Summary

Liquid Glass might not be your thing, and that’s perfectly fine. That said, it allows us to experiment with UI in fun ways that might surprise you.

In this post, you learned about the glassEffect modifier, the GlassEffectContainer, and the glassEffectID view modifier, and used them to build a fun menu component that can show and hide itself with a fluid animation.

If you want to see the end result or use this code, feel free to pull it from GitHub and modify it to suit your needs.

Solving actor-isolated protocol conformance related errors in Swift 6.2

Swift 6.2 comes with several quality of life improvements for concurrency. One of these features is the ability to have actor-isolated conformances to protocols. Another feature is that your code will now run on the main actor by default.

This does mean that sometimes, you’ll run into compiler errors. In this blog post, I’ll explore these errors, and how you can fix them when you do.

Before we do, let’s briefly talk about actor-isolated protocol conformance to understand what this feature is about.

Understanding actor-isolated protocol conformance

Protocols in Swift can require certain functions or properties to be nonisolated. For example, we can define a protocol that requires a nonisolated var name like this:

protocol MyProtocol {
  nonisolated var name: String { get }
}

class MyModelType: MyProtocol {
  var name: String

  init(name: String) {
    self.name = name
  }
}

At the moment, our code will not compile, and we see the following error:

Conformance of 'MyModelType' to protocol 'MyProtocol' crosses into main actor-isolated code and can cause data races

In other words, our MyModelType is isolated to the main actor while the protocol requires name to be nonisolated. Using MyProtocol and its name in a nonisolated way could then lead to data races, because our name isn’t actually nonisolated.

When you encounter an error like this you have two options:

  1. Embrace the nonisolated nature of name
  2. Isolate your conformance to the main actor

The first solution usually means that you don’t just make your property nonisolated, but you apply this to your entire type:

nonisolated class MyModelType: MyProtocol {
  // ...
}

This might work but you’re now breaking out of main actor isolation and potentially opening yourself up to new data races and compiler errors.

When your code runs on the main actor by default, going nonisolated is often not what you want; everything else is still on main so it makes sense for MyModelType to stay there too.

In this case, we can mark our MyProtocol conformance as @MainActor:

class MyModelType: @MainActor MyProtocol {
  // ...
}

By doing this, MyModelType conforms to MyProtocol, but only when we’re on the main actor. This also means the nonisolated requirement for name is no longer an issue, because we’re always going to be on the main actor when we use MyModelType as a MyProtocol.

This is incredibly useful in apps that are main actor by default because you don’t want your main actor types to have nonisolated properties or functions (usually). So conforming to protocols on the main actor makes a lot of sense in this case.
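To make that a bit more tangible, here’s a small illustration of where the @MainActor-constrained conformance can and cannot be used; the function names are made up for this example:

@MainActor
func printName(of value: some MyProtocol) {
  print(value.name)
}

@MainActor
func worksFine() {
  let model = MyModelType(name: "Main actor model")
  printName(of: model) // fine; the conformance is used on the main actor
}

nonisolated func doesNotCompile(model: MyModelType) {
  // Using the conformance away from the main actor is rejected with an error
  // along the lines of "Main actor-isolated conformance of 'MyModelType'
  // to 'MyProtocol' cannot be used in nonisolated context".
  // let erased: any MyProtocol = model
}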

Now let’s look at some errors related to this feature, shall we? I initially encountered an error around my SwiftData code, so let’s start there.

Fixing Main actor-isolated conformance to 'PersistentModel' cannot be used in actor-isolated context

Let’s dig right into an example of what can happen when you’re using SwiftData and a custom model actor. The following model and model actor produce a compiler error that reads “Main actor-isolated conformance of 'Exercise' to 'PersistentModel' cannot be used in actor-isolated context”:

@Model
class Exercise {
  var name: String
  var date: Date

  init(name: String, date: Date) {
    self.name = name
    self.date = date
  }
}

@ModelActor
actor BackgroundActor {
  func example() {
    // Call to main actor-isolated initializer 'init(name:date:)' in a synchronous actor-isolated context
    let exercise = Exercise(name: "Running", date: Date())
    // Main actor-isolated conformance of 'Exercise' to 'PersistentModel' cannot be used in actor-isolated context
    modelContext.insert(exercise)
  }
}

There’s actually a second error here too because we’re calling the initializer for exercise from our BackgroundActor and the init for our Exercise is isolated to the main actor by default.

Fixing our problem in this case means that we need to allow Exercise to be created and used from non-main actor contexts. To do this, we can mark the SwiftData model as nonisolated:

@Model
nonisolated class Exercise {
  var name: String
  var date: Date

  init(name: String, date: Date) {
    self.name = name
    self.date = date
  }
}

Doing this will make both the init and our conformance to PersistentModel nonisolated which means we’re free to use Exercise from non-main actor contexts.

Note that this does not mean that Exercise can safely be passed from one actor or isolation context to the other. It just means that we’re free to create and use Exercise instances away from the main actor.
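If you do need to hand an Exercise from the BackgroundActor to the main actor (or the other way around), the usual pattern is to pass its persistentModelID and re-resolve the model in the destination context. A rough sketch of that, with the function names made up for this example:

@ModelActor
actor BackgroundActor {
  func makeExercise() throws -> PersistentIdentifier {
    let exercise = Exercise(name: "Running", date: Date())
    modelContext.insert(exercise)
    try modelContext.save()
    // PersistentIdentifier is Sendable, so it can safely cross actors.
    return exercise.persistentModelID
  }
}

@MainActor
func showExercise(with id: PersistentIdentifier, in context: ModelContext) {
  // Re-resolve the model in the main actor's context before using it.
  if let exercise = context.model(for: id) as? Exercise {
    print(exercise.name)
  }
}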

Not every app will need this or encounter this, especially when you’re running code on the main actor by default. If you do encounter this problem for SwiftData models, you should probably isolate the problematic area to the main actor unless you specifically created a model actor to do work in the background.

Let’s take a look at a second error that, as far as I’ve seen, is pretty common right now in the Xcode 26 beta: using Codable objects with default actor isolation.

Fixing Conformance of protocol 'Encodable' crosses into main actor-isolated code and can cause data races

This error is quite interesting and I wonder whether it’s something Apple can and should fix during the beta cycle. That said, as of Beta 2 you might run into this error for models that conform to Codable. Let’s look at a simple model:

struct Sample: Codable {
  var name: String
}

This model has two compiler errors:

  1. Circular reference
  2. Conformance of 'Sample' to protocol 'Encodable' crosses into main actor-isolated code and can cause data races

I’m not exactly sure why we’re seeing the first error. I think this is a bug because it makes no sense to me at the moment.

The second error says that our Encodable conformance “crossed into main actor-isolated code”. If you dig a bit deeper, you’ll see the following error as an explanation for this: “Main actor-isolated instance method 'encode(to:)' cannot satisfy nonisolated requirement”.

In other words, our protocol conformance adds a main actor isolated implementation of encode(to:) while the protocol requires this method to be non-isolated.

The reason we’re seeing this error is not entirely clear to me but there seems to be a mismatch between our protocol conformance’s isolation and our Sample type.

We can do one of two things here; we can either make our model nonisolated or constrain our Codable conformance to the main actor.

nonisolated struct Sample: Codable {
  var name: String
}

// or
struct Sample: @MainActor Codable {
  var name: String
}

The former will make it so that everything on our Sample is nonisolated and can be used from any isolation context. The second option makes it so that our Sample conforms to Codable but only on the main actor:

func createSampleOnMain() {
  // this is fine
  let sample = Sample(name: "Sample Instance")
  let data = try? JSONEncoder().encode(sample)
  let decoded = try? JSONDecoder().decode(Sample.self, from: data ?? Data())
  print(decoded)
}

nonisolated func createSampleFromNonIsolated() {
  // this is not fine
  let sample = Sample(name: "Sample Instance")
  // Main actor-isolated conformance of 'Sample' to 'Encodable' cannot be used in nonisolated context
  let data = try? JSONEncoder().encode(sample)
  // Main actor-isolated conformance of 'Sample' to 'Decodable' cannot be used in nonisolated context
  let decoded = try? JSONDecoder().decode(Sample.self, from: data ?? Data())
  print(decoded)
}

So generally speaking, you don’t want your protocol conformance to be isolated to the main actor for your Codable models if you’re decoding them on a background thread. If your models are relatively small, it’s likely perfectly acceptable for you to be decoding and encoding on the main actor. These operations should be fast enough in most cases, and sticking with main actor code makes your program easier to reason about.

The best solution will depend on your app, your constraints, and your requirements. Always measure your assumptions when possible and stick with solutions that work for you; don’t introduce concurrency “just to be sure”. If you find that your app benefits from decoding data on a background thread, the solution for you is to mark your type as nonisolated; if you find no direct benefits from background decoding and encoding in your app you should constrain your conformance to @MainActor.

If you’ve implemented a custom encoding or decoding strategy, you might be running into a different error…

Conformance of 'CodingKeys' to protocol 'CodingKey' crosses into main actor-isolated code and can cause data races

Now, this one is a little trickier. When we have a custom encoder or decoder, we might also want to provide a CodingKeys enum:

struct Sample: @MainActor Decodable {
  var name: String

  // Conformance of 'Sample.CodingKeys' to protocol 'CodingKey' crosses into main actor-isolated code and can cause data races
  enum CodingKeys: CodingKey {
    case name
  }

  init(from decoder: any Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    self.name = try container.decode(String.self, forKey: .name)
  }
}

Unfortunately, this code produces an error. Our conformance to CodingKey crosses into main actor-isolated code and that might cause data races. Usually this would mean that we can constrain our conformance to the main actor and that would solve our issue:

// Main actor-isolated conformance of 'Sample.CodingKeys' to 'CustomDebugStringConvertible' cannot satisfy conformance requirement for a 'Sendable' type parameter 'Self'
enum CodingKeys: @MainActor CodingKey {
  case name
}

This unfortunately doesn’t work because CodingKey requires conformance to CustomDebugStringConvertible, which in turn requires a Sendable Self.

Marking our conformance as @MainActor should mean that both CodingKeys and CodingKey are Sendable, but because the CustomDebugStringConvertible requirement is defined on CodingKey, I think our @MainActor isolation doesn’t carry over.

This might also be a rough edge or bug in the beta; I’m not sure.

That said, we can fix this error by making our CodingKeys nonisolated:

struct Sample: @MainActor Decodable {
  var name: String

  nonisolated enum CodingKeys: CodingKey {
    case name
  }

  init(from decoder: any Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    self.name = try container.decode(String.self, forKey: .name)
  }
}

This code works perfectly fine both when Sample is nonisolated and when Decodable is isolated to the main actor.

Both this issue and the previous one feel like compiler bugs, so if they get resolved during Xcode 26’s beta cycle I will make sure to come back and update this article.

If you’ve encountered errors related to actor-isolated protocol conformance yourself, I’d love to hear about them. It’s an interesting feature and I’m trying to figure out how exactly it fits into the way I write code.

What is @concurrent in Swift 6.2?

Swift 6.2 is available and it comes with several improvements to Swift Concurrency. One of these features is the @concurrent declaration that we can apply to nonisolated functions. In this post, you will learn a bit more about what @concurrent is, why it was added to the language, and when you should be using @concurrent.

Before we dig into @concurrent itself, I’d like to provide a little bit of context by exploring another Swift 6.2 feature called nonisolated(nonsending) because without that, @concurrent wouldn’t exist at all.

And to make sense of nonisolated(nonsending) we’ll go back to nonisolated functions.

Exploring nonisolated functions

A nonisolated function is a function that’s not isolated to any specific actor. If you’re on Swift 6.1, or you’re using Swift 6.2 with its default settings, a nonisolated async function will always run on the global executor.

In more practical terms, a nonisolated function would run its work on a background thread.

For example, the following function will always run off the main actor:

nonisolated 
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

While it’s a convenient way to run code on the global executor, this behavior can be confusing. If we remove the async from that function, it will always run on the caller’s actor:

nonisolated 
func decode<T: Decodable>(_ data: Data) throws -> T {
  // ...
}

So if we call this version of decode(_:) from the main actor, it will run on the main actor.

Since that difference in behavior can be unexpected and confusing, the Swift team has added nonisolated(nonsending). So let’s see what that does next.

Exploring nonisolated(nonsending) functions

Any function that’s marked as nonisolated(nonsending) will always run on the caller’s executor. This unifies behavior for async and non-async functions and can be applied as follows:

nonisolated(nonsending) 
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

Whenever you mark a function like this, it no longer automatically offloads to the global executor. Instead, it will run on the caller’s actor.

This doesn’t just unify behavior for async and non-async functions, it also makes our code less concurrent and easier to reason about.

When we offload work to the global executor, this means that we’re essentially creating new isolation domains. The result of that is that any state that’s passed to or accessed inside of our function is potentially accessed concurrently if we have concurrent calls to that function.

This means that we must make the accessed or passed-in state Sendable, and that can become quite a burden over time. For that reason, making functions nonisolated(nonsending) makes a lot of sense. It runs the function on the caller’s actor (if any) so if we pass state from our call-site into a nonisolated(nonsending) function, that state doesn’t get passed into a new isolation context; we stay in the same context we started out from. This means less concurrency, and less complexity in our code.
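As a contrived sketch of where that Sendable requirement comes from, compare the two flavors below; the DraftPost type is made up for this example:

// Not Sendable: a mutable reference type.
final class DraftPost {
  var title = ""
}

// Without nonisolated(nonsending), this async function runs on the global
// executor, so passing `draft` in from the main actor crosses an isolation
// boundary and the compiler will complain unless DraftPost is Sendable.
nonisolated func publish(_ draft: DraftPost) async {
  // ...
}

// This version runs on the caller's actor, so `draft` never leaves the
// caller's isolation domain and there's no Sendable requirement.
nonisolated(nonsending) func publishOnCaller(_ draft: DraftPost) async {
  // ...
}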

The benefits of nonisolated(nonsending) can really add up, which is why you can make it the default for your nonisolated functions by opting in to Swift 6.2’s NonisolatedNonsendingByDefault upcoming feature flag.
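If you’re working in a Swift package, opting in looks roughly like the snippet below; in an Xcode project you’d enable the corresponding upcoming feature in your target’s build settings instead. The target name is made up, and the feature identifier is the SE-0461 spelling as far as I’m aware:

// In Package.swift
.target(
  name: "MyLibrary",
  swiftSettings: [
    // Treat nonisolated async functions as nonisolated(nonsending) by default.
    .enableUpcomingFeature("NonisolatedNonsendingByDefault")
  ]
)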

When your code is nonisolated(nonsending) by default, every async function that’s either explicitly or implicitly nonisolated will be considered nonisolated(nonsending). This means that we need a new way to offload work to the global executor.

Enter @concurrent.

Offloading work with @concurrent in Swift 6.2

Now that you know a bit more about nonisolated and nonisolated(nonsending), we can finally understand @concurrent.

Using @concurrent makes the most sense when you’re also using the NonisolatedNonsendingByDefault feature flag. Without that feature flag, you can continue using nonisolated to achieve the same “offload to the global executor” behavior. That said, marking functions as @concurrent can future-proof your code and make your intent explicit.

With @concurrent we can ensure that a nonisolated function runs on the global executor:

@concurrent
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

Marking a function as @concurrent will automatically mark that function as nonisolated so you don’t have to write @concurrent nonisolated. We can apply @concurrent to any function that doesn’t have its isolation explicitly set. For example, you can apply @concurrent to a function that’s defined on a main actor isolated type:

@MainActor
class DataViewModel {
  @concurrent
  func decode<T: Decodable>(_ data: Data) async throws -> T {
    // ...
  }
}

Or even to a function that’s defined on an actor:

actor DataViewModel {
  @concurrent
  func decode<T: Decodable>(_ data: Data) async throws -> T {
    // ...
  }
}

You’re not allowed to apply @concurrent to functions that have their isolation defined explicitly. Both examples below are incorrect since the function would have conflicting isolation settings.

@concurrent @MainActor
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

@concurrent nonisolated(nonsending)
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

Knowing when to use @concurrent

Using @concurrent is an explicit declaration to offload work to a background thread. Note that doing so introduces a new isolation domain and will require any state involved to be Sendable. That’s not always an easy thing to pull off.

In most apps, you only want to introduce @concurrent when you have a real issue to solve where more concurrency helps you.

An example of a case where @concurrent should not be applied is the following:

class Networking {
  func loadData(from url: URL) async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
  }
}

The loadData function makes a network call that it awaits with the await keyword. That means that while the network call is active, we suspend loadData. This allows the calling actor to perform other work until loadData is resumed and data is available.

So when we call loadData from the main actor, the main actor would be free to handle user input while we wait for the network call to complete.
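As a quick illustration, imagine calling loadData from a main actor-isolated view model; this view model is made up for the example:

@MainActor
final class FeedViewModel {
  let networking = Networking()
  var feed = Data()

  func refresh() async {
    // While loadData is suspended waiting on the network, the main actor
    // stays free to handle user input and keep the UI responsive.
    if let data = try? await networking.loadData(from: URL(string: "https://example.com/feed")!) {
      feed = data
    }
  }
}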

Now let’s imagine that you’re fetching a large amount of data that you need to decode. You started off using default code for everything:

class Networking {
  func getFeed() async throws -> Feed {
    let data = try await loadData(from: Feed.endpoint)
    let feed: Feed = try await decode(data)
    return feed
  }

  func loadData(from url: URL) async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
  }

  func decode<T: Decodable>(_ data: Data) async throws -> T {
    let decoder = JSONDecoder()
    return try decoder.decode(T.self, from: data)
  }
}

In this example, all of our functions would run on the caller’s actor. For example, the main actor. When we find that decode takes a lot of time because we fetched a whole bunch of data, we can decide that our code would benefit from some concurrency in the decoding department.

To do this, we can mark decode as @concurrent:

class Networking {
  // ...

  @concurrent
  func decode<T: Decodable>(_ data: Data) async throws -> T {
    let decoder = JSONDecoder()
    return try decoder.decode(T.self, from: data)
  }
}

All of our other code will continue behaving like it did before by running on the caller’s actor. Only decode will run on the global executor, ensuring we’re not blocking the main actor during our JSON decoding.

We made the smallest unit of work possible @concurrent to avoid introducing loads of concurrency where we don’t need it. Introducing concurrency with @concurrent is not a bad thing but we do want to limit the amount of concurrency in our app. That’s because concurrency comes with a pretty high complexity cost, and less complexity in our code typically means that we write code that’s less buggy, and easier to maintain in the long run.