Working with dates and Codable in Swift

When you’re decoding JSON, you’ll regularly run into situations where you have to decode dates. Most commonly you’ll be dealing with dates that conform to the ISO-8601 standard, but there’s also a good chance that you’ll have to deal with other date formats.

In this post, we’ll take a look at how you can leverage some of Swift’s built-in date formats for en- and decoding data, as well as how you can provide your own date format. We’ll look at some of the up- and downsides of how Swift decodes dates, and how we can work around some of the downsides.

This post is part of a series I have on Swift’s Codable, so I highly recommend that you take a look at my other posts on this topic too.

If you prefer to learn about dates and Codable in a video format, you can watch the video here:

Exploring the default JSON en- and decoding behavior

When we don’t do anything, a JSONDecoder (and JSONEncoder) will expect dates in a JSON file to be formatted as a double. This double should represent the number of seconds that have passed since January 1st 2001, which is a pretty non-standard reference date for a timestamp. The most common convention is to use the number of seconds that have passed since January 1st 1970.

However, this method of talking about dates isn’t very reliable when you take complexities like timezones into account.

Usually a system will interpret the reference date in its own timezone. That makes a given number of seconds since January 1st 2001 potentially ambiguous, because the timestamp itself doesn’t say in which timezone January 1st 2001 should be anchored. Different parts of the world have a different moment where January 1st 2001 starts, so it’s not a stable date to compare against unless everybody agrees on a timezone.

Of course, we have some best practices around this. For example, most servers use UTC as their timezone, which means that timestamps returned by these servers should always be interpreted in UTC regardless of the client’s timezone.

When we receive a JSON file like the one shown below, the default behavior for our JSONDecoder will be to just decode the provided timestamps using the device’s current timezone.

import Foundation

var jsonData = """
[
    {
        "title": "Grocery shopping",
        "date": 730976400.0
    },
    {
        "title": "Dentist appointment",
        "date": 731341800.0
    },
    {
        "title": "Finish project report",
        "date": 731721600.0
    },
    {
        "title": "Call plumber",
        "date": 732178800.0
    },
    {
        "title": "Book vacation",
        "date": 732412800.0
    }
]
""".data(using: .utf8)!

struct ToDoItem: Codable {
  let title: String
  let date: Date
}

do {
  let decoder = JSONDecoder()
  let todos = try decoder.decode([ToDoItem].self, from: jsonData)
  print(todos)
} catch {
  print(error)
}

This might be fine in some cases but more often than not you’ll want to use something that’s more standardized, and more explicit about which timezone the date is in.

Before we look at what I think is the most sensible solution, I want to show you how you can configure your JSONDecoder to use the more standard timestamp reference date of January 1st 1970.

Setting a date decoding strategy

If you want to change how a JSONEncoder or JSONDecoder deals with your dates, you should make sure that you set its date decoding strategy. You can do this by assigning an appropriate strategy to the object’s dateDecodingStrategy property (or dateEncodingStrategy for JSONEncoder). The default strategy is called deferredToDate and you’ve just seen how it works.

If we want to change the date decoding strategy so it decodes dates based on timestamps in seconds since January 1st 1970, we can do that as follows:

do {
  let decoder = JSONDecoder()
  decoder.dateDecodingStrategy = .secondsSince1970
  let todos = try decoder.decode([ToDoItem].self, from: jsonData)
  print(todos)
} catch {
  print(error)
}

Some servers work with timestamps in milliseconds since 1970. You can accommodate that by using the .millisecondsSince1970 configuration instead of .secondsSince1970 and the system will handle the rest.

While this allows you to use a standardized timestamp format, you’re still going to run into timezone related issues. To work around that, we need to take a look at dates that use the ISO-8601 standard.

Working with dates that conform to ISO-8601

There are countless ways to represent dates, and they all work as long as the systems that exchange these dates are consistent with each other. To make that consistency easier, a standard was created to represent dates as strings. This standard is called ISO-8601 and it describes several conventions for representing dates as strings.

We can represent anything from just a year or a full date to a date with a time that includes information about which timezone that date exists in.

For example, a date that represents 5pm on Feb 15th 2024 in The Netherlands (UTC+1 during February) would represent 9am on Feb 15th 2024 in New York (UTC-5 in February).

It can be important for a system to represent a date in a user’s local timezone (for example when you’re publishing a sports event schedule) so that the user doesn’t have to do the timezone math for themselves. For that reason, ISO-8601 tells us how we can represent Feb 15th 2024 at 5pm in a standardized way. For example, we could use the following string:

2024-02-15T17:00:00+01:00

This string contains information about the date, the time, and the timezone. This allows a client in New York to translate the provided time to a local time, which in this case means that the time would be shown to a user as 9am instead of 5pm.

We can tell our JSONEncoder or JSONDecoder to expect date strings that follow ISO-8601, and then decode our models using that format.

Let’s look at an example of how we can set this up:

var jsonData = """
[
    {
        "title": "Grocery shopping",
        "date": "2024-03-01T10:00:00+01:00"
    },
    {
        "title": "Dentist appointment",
        "date": "2024-03-05T14:30:00+01:00"
    },
    {
        "title": "Finish project report",
        "date": "2024-03-10T23:59:00+01:00"
    },
    {
        "title": "Call plumber",
        "date": "2024-03-15T08:00:00+01:00"
    },
    {
        "title": "Book vacation",
        "date": "2024-03-20T20:00:00+01:00"
    }
]
""".data(using: .utf8)!

struct ToDoItem: Codable {
  let title: String
  let date: Date
}

do {
  let decoder = JSONDecoder()
  decoder.dateDecodingStrategy = .iso8601
  let todos = try decoder.decode([ToDoItem].self, from: jsonData)
  print(todos)
} catch {
  print(error)
}

The JSON in the snippet above is slightly changed to make it use ISO-8601 date strings instead of timestamps.

The ToDoItem model is completely unchanged.

The decoder’s dateDecodingStrategy has been changed to .iso8601, which means we don’t have to worry about the exact date format that’s used in our JSON as long as it conforms to ISO-8601.

In some cases, you might have to take some more control over how your dates are decoded. You can do this by setting your dateDecodingStrategy to either .custom or .formatted.

Using a custom encoding and decoding strategy for dates

Sometimes, a server returns a date that technically conforms to the ISO-8601 standard, yet Swift doesn’t decode it correctly (dates that include fractional seconds are a common example). In this case, it might make sense to provide a custom date format for your encoder or decoder to use.

You can do this as follows:

do {
  let decoder = JSONDecoder()

  let formatter = DateFormatter()
  formatter.dateFormat = "yyyy-MM-dd" // this format expects plain date strings like "2024-03-01"
  formatter.locale = Locale(identifier: "en_US_POSIX")
  formatter.timeZone = TimeZone(secondsFromGMT: 0)

  decoder.dateDecodingStrategy = .formatted(formatter)

  let todos = try decoder.decode([ToDoItem].self, from: jsonData)
  print(todos)
} catch {
  print(error)
}

Alternatively, you might need to have some more complex logic than you can encapsulate in a date formatter. If that’s the case, you can provide a closure to the custom configuration for your date decoding strategy as follows:

decoder.dateDecodingStrategy = .custom({ decoder in
  let container = try decoder.singleValueContainer()
  let dateString = try container.decode(String.self)

  if let date = ISO8601DateFormatter().date(from: dateString) {
    return date
  } else {
    throw DecodingError.dataCorruptedError(in: container, debugDescription: "Cannot decode date string \(dateString)")
  }
})

This example creates its own ISO-8601 date formatter so it’s not the most useful example (you can just use .iso8601 instead) but it shows how you should go about decoding and creating a date using custom logic.
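
For example, if your server sends ISO-8601 dates that include fractional seconds (something the built-in .iso8601 strategy doesn’t handle), a custom strategy based on ISO8601DateFormatter could look like the sketch below. This assumes the jsonData from earlier contains date strings such as "2024-03-01T10:00:00.123+01:00":

do {
  let decoder = JSONDecoder()

  // configure a formatter that understands fractional seconds
  let formatter = ISO8601DateFormatter()
  formatter.formatOptions = [.withInternetDateTime, .withFractionalSeconds]

  decoder.dateDecodingStrategy = .custom({ decoder in
    let container = try decoder.singleValueContainer()
    let dateString = try container.decode(String.self)

    if let date = formatter.date(from: dateString) {
      return date
    } else {
      throw DecodingError.dataCorruptedError(in: container, debugDescription: "Cannot decode date string \(dateString)")
    }
  })

  let todos = try decoder.decode([ToDoItem].self, from: jsonData)
  print(todos)
} catch {
  print(error)
}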

In Summary

In this post, you saw several ways to work with dates and JSON.

You learned about the default approach to decoding dates from a JSON file which requires your dates to be represented as seconds from January 1st 2001. After that, you saw how you can configure your JSONEncoder or JSONDecoder to use the more standard January 1st 1970 reference date.

Next, we looked at how to use ISO-8601 date strings, which can include timezone information, and that greatly improves our situation.

Lastly, you learned how you can take more control over your JSON by using a custom date formatter, or even a closure that allows you to perform much more complex decoding (or encoding) logic by taking full control over the process.

I hope you enjoyed this post!

Designing APIs with typed throws in Swift

When Swift 2.0 added the throws keyword to the language, folks were somewhat divided on its usefulness. Some people preferred designing their APIs with an (at the time) unofficial implementation of the Result type because that worked with both regular and callback based functions.

However, the language feature got adopted and a new complaint came up regularly. The way throws in Swift was designed didn’t allow developers to specify the types of errors that a function could throw.

In every do {} catch {} block we write, we have to assume and account for any object that conforms to the Error protocol being thrown.

This post will take a closer look at how we can write catch blocks to handle specific errors, and how we can leverage the brand new typed throws that are introduced through the recently accepted SE-0413.

Let’s dig in!

If you prefer to watch this content as a video, the video is available on YouTube:

The situation today: catching specific errors in Swift

The following code shows a standard do { } catch { } block in Swift that you might already be familiar with:

do {
  try loadFeed()
} catch {
  print(error.localizedDescription)
}

Calling a method that can throw errors should always be done in a do { } catch { } block unless you call your method with a try? or a try! prefix which will cause you to ignore any errors that come up.

In order to handle the error in your catch block, you can cast the error that you’ve received to different types as follows:

do {
  try loadFeed()
} catch {
  switch error {
  case let authError as AuthError:
    print("auth error", authError)
    // present login screen
  case let networkError as NetworkError:
    print("network error", networkError)
    // present alert explaining what went wrong
  default:
    print("error", error)
    // present generic alert with a message
  }
}

By casting your error in the switch statement, you can have different code paths for different error types. This allows you to extract information from the error as needed. For example, an authentication error might have some specific cases that you’d want to inspect to correctly manage what went wrong.

Here’s what the case for AuthError might end up looking like:

case let authError as AuthError:
  print("auth error", authError)

  switch authError {
  case .missingToken:
      print("missing token")
      // present a login screen
  case .tokenExpired:
    print("token expired")
    // attempt a token refresh
  }

When your API can return many different kinds of errors you can end up with lots of different cases in your switch, and with several levels of nesting. This doesn’t look pretty and luckily we can work around this by defining catch blocks for specific error types.

For example, here’s what the same control flow as before looks like without the switch using typed catch blocks:

do {
  try loadFeed()
} 
catch let authError as AuthError {
  print("auth error", authError)

  switch authError {
  case .missingToken:
      print("missing token")
      // present a login screen
  case .tokenExpired:
    print("token expired")
    // attempt a token refresh
  }
} 
catch let networkError as NetworkError {
  print("network error", networkError)
  // present alert explaining what went wrong
} 
catch {
  print("error", error)
}

Notice how we have a dedicated catch for each error type. This makes our code a little bit easier to read because there’s a lot less nesting.

The main issues with our code at this point are:

  1. We don’t know which errors loadFeed can throw. If our API changes and we add more error types, or even if we remove error types, the compiler won’t be able to tell us. This means that we might have catch blocks for errors that will never get thrown, or that we miss catch blocks for certain error types, which means those errors get handled by the generic catch block.
  2. We always need a generic catch at the end, even if we know that we handle all error types that our function could possibly throw. It’s not a huge problem, but it feels a bit like having an exhaustive switch with a default case that only contains a break statement.

Luckily, Swift proposal SE-0413 will fix these two pain points by introducing typed throws.

Exploring typed throws

At the time of writing this post, SE-0413 has been accepted and is available using the upcoming feature flag FullTypedThrows. If you're interested in exploring upcoming Swift features, you can do so by installing an experimental toolchain. Learn how you can do that in this post.

At its core, typed throws in Swift will allow us to inform callers of throwing functions which errors they might receive as a result of calling a function. At this point it looks like we’ll be able to only throw a single type of error from our function.

For example, we could write the following:

func loadFeed() throws(FeedError) {
  // implementation
}

What we can’t do is the following:

func loadFeed() throws(AuthError, NetworkError) {
  // implementation
}

So even though our loadFeed function can throw a couple of errors, we’ll need to design our code in a way that allows loadFeed to throw a single, specific type instead of multiple. We could define our FeedError as follows to do this:

enum FeedError: Error {
  case authError(AuthError)
  case networkError(NetworkError)
  case other(any Error)
}

By adding the other case we can gain a lot of flexibility. However, that also comes with the downsides that were described in the previous section so a better design could be:

enum FeedError: Error {
  case authError(AuthError)
  case networkError(NetworkError)
}

This fully depends on your needs and expectations. Both approaches can work well and the resulting code that you write to handle your errors can be much nicer when you have a lot more control over the kinds of errors that you might be throwing.
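
To give a sense of what that mapping might look like in practice, here’s a hedged sketch of a loadFeed that funnels its underlying errors into the FeedError variant that includes the other case. The authenticate and fetchFeed helpers and the Feed type are hypothetical stand-ins for whatever your real implementation does:

func loadFeed() throws(FeedError) -> Feed {
  do {
    let token = try authenticate()     // throws AuthError
    return try fetchFeed(using: token) // throws NetworkError
  } catch let error as AuthError {
    throw FeedError.authError(error)
  } catch let error as NetworkError {
    throw FeedError.networkError(error)
  } catch {
    throw FeedError.other(error)
  }
}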

So when we call loadFeed now, we can write the following code:

do {
  try loadFeed()
} 
catch {
  switch error {
    case .authError(let authError):
      print("auth error", authError)
      // handle auth error
    case .networkError(let networkError):
      print("network error", networkError)
      // handle network error
  }
}

The error that’s passed to our catch is now a FeedError which means that we can switch over the error and compare its cases directly.

For this specific example, we still require nesting to inspect the specific errors that were thrown but I’m sure you can see how there are benefits to knowing which type of errors we could receive.

In the cases where you call multiple throwing methods, we’re back to the old fashioned any Error in our catch:

do {
  let feed = try loadFeed()
  try cacheFeed(feed)
} catch {
  // error is any Error here
}

If you’re not familiar with any in Swift, check out this post to learn more.

The reason we’re back to any Error here is that our two different methods might not throw the same error types which means that the compiler needs to drop down to any Error since we know that both methods will have to throw something that conforms to Error.
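
If you do want to hold on to the typed errors in a situation like this, one option is to give each call its own do block. A rough sketch, assuming cacheFeed declares a hypothetical throws(CacheError):

do {
  let feed = try loadFeed()

  do {
    try cacheFeed(feed)
  } catch {
    // error is CacheError here
  }
} catch {
  // error is FeedError here
}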

In Summary

Typed throws have been in high demand ever since Swift gained the throws keyword. Now that we’re finally about to get them, I think a lot of folks are quite happy.

Personally, I think typed throws are a nice feature but that we won’t see them used that much.

The fact that we can only throw a single type, combined with multiple try calls in a single do block erasing our error back to any Error, means that we’ll still be doing a bunch of switching and inspecting to see which error was thrown exactly, and how we should handle that thrown error.

I’m sure typed throws will evolve in the future but for now I don’t think I’ll be jumping on them straight away once they’re released.

How to determine where tasks and async functions run in Swift?

Swift’s current concurrency model leverages tasks to encapsulate the asynchronous work that you’d like to perform. I wrote about the different kinds of tasks we have in Swift in the past. You can take a look at that post here. In this post, I’d like to explore the rules that Swift applies when it determines where your tasks and functions run. More specifically, I’d like to explore how we can determine whether a task or function will run on the main actor or not.

We’ll start this post by very briefly looking at tasks and how we can determine where they run. I’ll dig right into the details so if you’re not entirely up to date on the basics of Swift’s unstructured and detached tasks, I highly recommend that you catch up here.

After that, we’ll look at asynchronous functions and how we can reason about where these functions run.

To follow along with this post, it’s recommended that you’re somewhat up to date on Swift’s actors and how they work. Take a look at my post on actors if you want to make sure you’ve got the most important concepts down.

If you prefer to consume the contents of this post as a video, you can watch the video below.

Reasoning about where a Swift Task will run

In Swift, we have two kinds of tasks:

  • Unstructured tasks
  • Detached tasks

Each task type has its own rules regarding where the task will run its body.

When you create a detached task, this task will always run its body using the global executor. In practical terms this means that a detached task will always run on a background thread. You can create a detached task as follows:

Task.detached {
  // this runs on the global executor
}

A detached task should hardly ever be used in practice, because it doesn’t participate in structured concurrency and there are usually other ways to perform work in the background that don’t involve starting a brand new task.

The other way to start a new task is by creating an unstructured task. This looks as follows:

Task {
  // this runs ... somewhere?
}

An unstructured task will inherit certain things from its context, like the current actor for example. It’s this current actor that determines where our unstructured task will run.

Sometimes it’s pretty obvious that we want a task to run on the main actor:

Task { @MainActor in 

}

While this task inherits an actor from the current context, we’re overriding this by annotating our task body with MainActor to make sure that our task’s body runs on the main actor.

Interesting sidenote: you can do the same with a detached task.
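
For completeness, that sidenote looks like this in code; a detached task whose body you explicitly isolate to the main actor (something you’ll rarely actually want):

Task.detached { @MainActor in
  // this body runs on the main actor even though the task is detached
}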

Additionally, we can create a new task that’s on the main actor like this:

@MainActor
struct MyView: View {
  // body etc...

  func startTask() {
    Task {
      // this task runs on the main actor
    }
  }
}

Our SwiftUI view in this example is annotated with @MainActor. This means that every function and property that’s defined on MyView will be executed on the main actor. Including our startTask function. The Task inherits the main actor from MyView so it’s running its body on the main actor.

If we make one small change to the view, everything changes:

struct MyView: View {
  // body etc...

  func startTask() {
    Task {
      // where does this task run?
    }
  }
}

Instead of knowing that startTask will run on the main actor, it's a bit trickier to reason about where our function will run exactly. Our view itself is not main actor bound, which means that its functions can be called on any actor or executor. When we call startTask, we'll find that the Task that's created in its function body will not be main actor isolated. Not even if you call this function from a place that is main actor isolated. This seems to be related to startTask being nonisolated by definition, which means that it's never bound to a specific actor and runs on the global executor, which results in unstructured Tasks being spawned on the global executor too.

At runtime, we can use MainActor.assertIsolated(_:) to perform a check and see whether we're on the main actor. If we're not, our app will crash, which is perfectly fine during development. Especially when we're using this function as a tool to learn more about our code. Here's how you can use this function:

struct MyView: View {
  // body etc...

  func startTask() {
    Task {
      MainActor.assertIsolated("Not isolated!!")
    }
  }
}

When I ran this example on my device, it crashed every time which shows that the runtime behavior is not something that's random. We can already know at compile time that our code will not run on the main actor because neither the function, the view, nor the task are @MainActor annotated.

As a rule of thumb you could say that a Task will always run in the background if you’re not attached to any actors. This is the case when you create a new Task from any object that’s not main actor annotated for example. When you create your task from a place that’s main actor annotated, you know your task will run on the main actor.

Unfortunately, this isn’t always straightforward to determine and Apple seems to want us to not worry too much about this. The key takeaway is that if you want something to run on the main actor, you have to annotate it with the @MainActor annotation. The underlying system will make sure there are no extraneous thread hops and that there's no performance cost to having these annotations in place.

Luckily, the way async functions work in Swift can give us some confidence in making sure that we don’t block the main actor by accident.

Reasoning about where an async function runs in Swift

Whenever you want to call an async function in Swift, you have to do this from a task and you have to do this from within an existing asynchronous context. If you’re not yet in an async function you’ll usually create this asynchronous context by making a new Task object.

From within that task you’ll call your async function and prefix the call with the await keyword. It’s a common misconception that when you await a function call, the task you’re awaiting from is blocked until the function you’re waiting for has completed. If this were true, you’d always want to make sure your tasks run away from the main actor so that you’re not blocking the main actor while you wait for something like a network call to complete.

Luckily, awaiting something does not block the current actor. Instead, the current function is suspended so that the actor you were on is free to perform other work. I gave a talk where I went into detail on this. You can watch the talk here:

Knowing all of this, let’s talk about how we can determine where an async function will run. Examine the following code:

struct MyView: View {
  // body etc...

  func performWork() async {
    // Can we determine where this function runs?
  }
}

The performWork function is marked async which means that we must call it from within an async context, and we have to await it.

A reasonable assumption would be to expect this function to run on the actor that we’ve called this function from.

For example, in the following situation you might expect performWork to run on the main actor:

struct MyView: View {
  var body: some View {
    Text("Sample...")
      .task {
        await performWork()
      }
  }

  func performWork() async {
    // Can we determine where this function runs?
  }
}

Interestingly enough, performWork will not run on the main actor in this case. The reason for that is that in Swift, functions don’t just run on whatever actor they were called from. Instead, they run on the global executor unless instructed otherwise.

In practical terms, this means that your asynchronous functions will need to be either directly or indirectly annotated with the main actor if you want them to run on the main actor. In every other situation your function will run on the global executor.

While this rule is straightforward enough, it can be tricky to determine exactly whether or not your function is implicitly annotated with @MainActor. This is usually the case when there’s inheritance involved.

A simpler example looks as follows:

@MainActor
struct MyView: View {
  var body: some View {
    Text("Sample...")
      .task {
        await performWork()
      }
  }

  func performWork() async {
    // This function will run on the main actor
  }
}

Because we’ve annotated our view with @MainActor, the asynchronous performWork function inherits the annotation and it will run on the main actor.

While the practice of reasoning about where an asynchronous function will run isn’t straightforward, I usually find this easier than reasoning about where my Task will run but it’s still not trivial.

The key is always to look at the function itself first. If there’s no @MainActor, you can look at the enclosing object’s definition. After that you can look at base classes and protocols to make sure there isn’t any main actor association there.

At runtime, you can use the MainActor.assertIsolated(_:) function to see if your async function runs on the main actor. If it does, you’ll know that there’s some main actor annotation that’s applied to your asynchronous function. If you’re not running on the main actor, you can safely say that there’s no main actor annotation applied to your function.
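
Applied to the earlier example, that runtime check could look like this minimal sketch; with @MainActor on the view the assertion passes, and without it the app crashes during development:

struct MyView: View {
  var body: some View {
    Text("Sample...")
      .task {
        await performWork()
      }
  }

  func performWork() async {
    // crashes during development if we're not on the main actor
    MainActor.assertIsolated("performWork is not running on the main actor")
  }
}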

In Summary

Swift Concurrency’s rules for determining where a task or function runs are relatively clear and specific. However, in practice things can get a little muddy for tasks because it’s not always trivial to reason about whether or not your task is created from a context that’s associated with the main actor. Note that running on the main thread is not the same as being associated with the main actor.

For async functions we can reason more locally, which results in an easier mental model, but it’s still not trivial.

We can use MainActor.assertIsolated(_:) to verify whether our code is running on the main actor, but once you fully understand and internalize the rules outlined in this post you shouldn't need this function to reason about where your code runs.

If you have any additions, questions, or comments on this article please don’t hesitate to reach out on X.

Getting started with @Observable in SwiftUI

With iOS 17, we’ve gained a new way to provide observable data to our SwiftUI views. Until iOS 17, we’d use either an ObservableObject with @StateObject, @ObservedObject, or @EnvironmentObject whenever we had a reference type that we wanted to observe in one of our SwiftUI views. For lots of apps this worked absolutely fine, but these objects have a dependency on the Combine framework (which in my opinion isn’t a big deal), and they made it really hard for developers to limit which properties a view would observe.

In iOS 17, we gained the new @Observable macro. I wrote about this macro before in this post where I talk about the @Observable macro as well as @Bindable which is a new property wrapper in iOS 17.

In this post, we’ll explore the new @Observable macro, we’ll explore how this macro can be used, and how it compares to the old way of doing things with ObservableObject.

Note that I won’t distinguish between @StateObject, @ObservedObject, and @EnvironmentObject unless needed. Otherwise, I will write ObservableObject to refer to the protocol instead.

If you prefer to consume content like this in a video format, you can watch the video for this post below:

Defining a simple @Observable model

The @Observable macro can only be applied to classes. Here’s what that looks like:

@Observable
class AppSettings {
  var hidesTitles = false
  var trackHistory = true
  var readingListEnabled = true
  var colorScheme = ColorScheme.system
}

This AppSettings class holds on to several properties that can be used to configure several settings on a fictional app. The @Observable macro inserts a bunch of code when we compile our app. For example, the macro makes our AppSettings object conform to the Observable protocol, and it implements several “bookkeeping” properties and functions that enable observing properties on our object.

The details of how this works, and which properties and functions are added, are not relevant for now. But if you’d like to see the inserted code, you can right click on the macro in Xcode and choose Expand Macro to see the generated code.

We don’t have to add anything other than what we have so far to define our model. Let’s take a look at how we can use an @Observable in our SwiftUI views.

Using @Observable in a SwiftUI view

When you’re working with an ObservableObject in SwiftUI, you have to explicitly opt-in to observing. With @Observable, this is no longer needed.

Typically, you’ll see an @Observable used in one of four ways in a view:

struct SampleView: View {
  // the view owns this instance
  @State var appSettings = AppSettings()

  // the view receives this instance
  let appSettings: AppSettings

  // the view receives this instance and wants to bind to properties
  @Bindable var appSettings: AppSettings

  // we're grabbing this AppSettings object from the Environment
  @Environment(AppSettings.self) var appSettings

  var body: some View {
    // ...
  }
}

Let’s take a closer look at each of these options to understand the implications and use cases for our views.

Initializing an @Observable as @State

The first way to set up an @Observable is initializing it as @State on a view. While this might look and feel logical to you, it’s actually quite interesting that we can (and should) use @State for our observables.

With ObservableObject, we need to use a specific property wrapper to tell the view “this object is a source of truth”. This allows SwiftUI to redraw your view when the object updates one of its @Published properties.

Note that the view won’t care which property changed. Any change to any @Published property will cause your view body to be re-evaluated (and redrawn) regardless of whether the object update results in a changed view.

On iOS 16 and before, you use @State for simple data types like Int or String, or for value types so that assigning a new value to your @State property causes your view to redraw.

When you apply @State to your creation of an @Observable, you do this due to a key characteristic that @State has. It’s not its ability to tell a view to redraw. It’s @State's ability to cache the instance it’s applied to across view redraws.

Consider the following example where we define a view that nests another view. The nested view uses an @Observable that’s not annotated with @State.

@Observable
class Counter {
  var currentValue: Int = 0
}

struct ContentView: View {
  @State var id = UUID()

  var body: some View {
    VStack {
      Button("Change id") {
        id = UUID()
      }
      Text("Current id: \(id)")

      ButtonView()
    }.padding()
  }
}

struct ButtonView: View {
  let counter = Counter()

  var body: some View {
    VStack {
      Text("Counter is tapped \(counter.currentValue) times")
      Button("Increase") {
        counter.currentValue += 1
      }
    }.padding()
  }
}

When you run this code, you’ll find that tapping the Increase button works without any issues. The counter goes up and the view updates.

However, when you tap on Change id the counter resets back to 0.

That’s because once the ContentView redraws, a new instance of ButtonView is created which will also create a new Counter.

If we update the definition of ButtonView as follows, the problem is fixed:

struct ButtonView: View {
  @State var counter = Counter()

  var body: some View {
    VStack {
      Text("Counter is tapped \(counter.currentValue) times")
      Button("Increase") {
        counter.currentValue += 1
      }
    }.padding()
  }
}

We’ve now wrapped counter in @State. Changing the id in this view’s parent now doesn’t reset the counter because @State caches the counter instance for the duration of this view’s lifecycle. Note that SwiftUI can make several instances of the same view struct even when the view has never actually gone off screen.

There are two points here that are interesting to note:

  1. We use @State to persist our @Observable instance through the view’s lifecycle
  2. We don’t need a property wrapper to make our view observe an @Observable

So when exactly do you use @State on an @Observable?

There’s a pretty clear answer to that. Only the view that creates the instance of your @Observable should apply @State. Every other view shouldn’t.

Defining an @Observable as a let property

In the previous section you’ve already seen an example of defining an @Observable as a let. We only made one mistake when doing so; we owned the instance so we should have used @State.

However, when we receive our @Observable from another view, we can safely use a let instead of @State:

struct ContentView: View {
  @State var id = UUID()
  @State var counter = Counter()

  var body: some View {
    VStack {
      Button("Change id") {
        id = UUID()
      }
      Text("Current id: \(id)")

      ButtonView(counter: counter)
    }.padding()
  }
}

struct ButtonView: View {
  let counter: Counter

  var body: some View {
    VStack {
      Text("Counter is tapped \(counter.currentValue) times")
      Button("Increase") {
        counter.currentValue += 1
      }
    }.padding()
  }
}

Notice how we’ve moved the creation of our Counter up to the ContentView. The ButtonView now receives the instance of Counter as an argument to its initializer. This means that we don’t own this instance, and we don’t need to apply any property wrappers. We can simply use a let, and SwiftUI will update our view when needed.

However, we’ll quickly run into a limitation with an @Observable that’s declared as a let; we can’t bind to it.

Using @Observable with @Bindable

I will keep this section short, because I have an in-depth post that covers using @Bindable on an @Observable.

Consider the following code that tries to bind a TextField to the query property on our @Observable model:

@Observable
class SearchModel {
  var query = ""
  // ...
}

struct SearchView: View {
  let model: SearchModel

  var body: some View {
    TextField("Search query", text: $model.query)
  }
}

The code above doesn’t compile with the following error:

Cannot find '$model' in scope

Because our SearchModel is a plain let, we can’t access the $ prefixed version of it that we’re familiar with from ObservableObject related property wrappers.

Since this view receives the SearchModel from another view, we can’t apply the @State property wrapper to our @Observable. If we did own the SearchModel instance by creating it, we’d annotate it with @State and this would enable us to bind to properties of the SearchModel.
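
In other words, if this view created the SearchModel itself, a sketch of that version could look like this (note that @State’s projected value lets us derive bindings to the model’s properties):

struct SearchView: View {
  // the view owns the model, so @State is appropriate here
  @State var model = SearchModel()

  var body: some View {
    TextField("Search query", text: $model.query)
  }
}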

If we want to be able to create bindings to @Observable models that we don’t own, we can apply the @Bindable property wrapper instead:

struct SearchView: View {
  @Bindable var model: SearchModel

  var body: some View {
    TextField("Search query", text: $model.query)
  }
}

With the @Bindable property wrapper, we’re able to obtain bindings to properties of the SearchModel. If you want to learn more about @Bindable, please refer to my post on this topic.

Using @Observable with @Environment

Similar to how we can add observable objects to the SwiftUI environment, we can also add our @Observable objects to the environment. To do this, we can’t use the environmentObject view modifier, nor do we use the @EnvironmentObject property wrapper.

Instead, we use the .environment view modifier, which has received some new capabilities in iOS 17 to be able to handle @Observable models.

The following code adds the SearchModel you saw earlier to the environment:

struct ContentView: View {
  @State var searchModel = SearchModel()

  var body: some View {
    NestedView()
      .environment(searchModel)
  }
}

Notice how we’re not passing an environment key along to the .environment view modifier. That’s because it works in a similar way to .environmentObject where we don’t need to pass a specific key. Instead, SwiftUI uses the object’s type to look it up, which means there’s only ever one instance of SearchModel in the environment and explicit environment keys aren’t needed.

To extract an @Observable from the environment, we write the following:

struct NestedView: View {
  @Environment(SearchModel.self) var searchModel
}

By writing our code like this, SwiftUI knows which type of object to look for in the environment and we’ll be handed our instance from there.

If SwiftUI can’t find an instance of SearchModel, our app will crash. This is the same behavior that you might be aware of for @EnvironmentObject.

Binding to an observable from the environment

Since you can't bind to an object in the environment, you need to obtain an @Bindable for the observable that you've read from the environment. Imagine that in the NestedView from before you wanted to pass a binding to the searchModel's query property to another view. You'd have to create your @Bindable inside of the view body like this:

struct NestedView: View {
  @Environment(SearchModel.self) var searchModel

  var body: some View {
    @Bindable var bindableSearchModel = searchModel

    OtherView(query: $bindableSearchModel.query)
  }
}

Benefits and downsides of @Observable

Overall, @Observable is an extremely useful macro that works amazingly well with your SwiftUI views.

Its key feature for me is how SwiftUI can subscribe to changes on only the properties of an @Observable that a view actually reads.

The Swift team has added a couple of special features to @Observable that are available to SwiftUI, which give SwiftUI a more powerful way to observe changes than the withObservationTracking function that you and I have access to. I’ll talk about that more in a bit.

What’s important to understand is that @Observable notifies users of an Observable object only when a property that was accessed within something called withObservationTracking changes.

The withObservationTracking function takes a closure and automatically tracks the observable properties that are accessed within that closure. This is super useful because it allows us to have much more granular view redraw behavior than before.
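
As a rough sketch of how you could use it outside of SwiftUI (reusing the SearchModel from earlier), you pass the tracking closure plus an onChange closure that fires right before the next change to one of the tracked properties:

let searchModel = SearchModel()

withObservationTracking {
  // accessing query here registers it with the observation system
  print("current query: \(searchModel.query)")
} onChange: {
  // called exactly once, right before the next change to query
  print("query is about to change")
}

searchModel.query = "swift" // triggers the onChange closure above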

However, this observation tracking mechanism isn’t perfect and it comes with downsides.

One of the key downsides for me is that @Observable does not make it easy to track individual properties on your models over time. Whenever you access properties inside of a withObservationTracking call, you are informed about the very next change only. Any changes after your initial callback will require a new call to withObservationTracking.

Also, this means that you can’t easily subscribe to a specific property like you can with @Published, then transform your received data with Combine operators like debounce, and then update another property with a result.

It’s not impossible with @Observable, but it won’t be trivial either. At this point it’s pretty clear that @Observable was designed to work well with SwiftUI and everything else is a bit of an afterthought.

In Summary

In this post, you’ve learned about the new @Observable macro that Apple ships alongside iOS 17. You’ve seen some examples of how this new macro can be used, and you’ve seen how it can help your app perform much better because your views no longer track literally every property on your model that you might ever be interested in.

We’ve also explored the downsides. You’ve learned about withObservationTracking, and the lack of a bunch of Combine-like features.

What do you think about @Observable? Did you jump in to use it straight away? Or are you still holding off? I’d love if you shared your thoughts on X or Threads.

Writing code that makes mistakes harder

As we work on projects, we usually add more code than we remove. At least that’s how things are at the beginning of our project. While our project grows, the needs of the codebase change, and we start refactoring things. One thing that’s often quite hard to get exactly right when coding is the kinds of abstractions and design patterns we actually need. In this post, I would like to explore a mechanism that I like to leverage to make sure my code is robust without actually worrying too much about abstractions and design patterns in the first place.

We’ll start off by sketching a few scenarios in which you might find yourself wondering what to do. Or even worse, scenarios where you start noticing that some things go wrong sometimes, on some screens. After that, we’ll look at how we can leverage Swift’s type system and access control to prevent ourselves from writing code that’s prone to containing mistakes.

If you prefer to consume the contents of this post as a video, you can watch the video below.

Common mistakes in codebases

When you look at codebases that have grown over time without applying the principles that I’d like to outline in this post, you’ll often see that the codebase contains code duplication, lots of if statements, some switch statements here and there, and a whole bunch of mutable values.

None of these are mistakes on their own, I would never, ever argue that the existence of an if statement, switch, or even code duplication is a mistake that should immediately be rectified.

What I am saying is that these are often symptoms of a codebase where it becomes easier and easier over time to make mistakes. There’s a big difference there. The code itself might not be the mistake; the code allows you as a developer to make mistakes more easily when it’s not structured and designed to prevent mistakes.

Let’s take a look at some examples of how mistakes can be made too easy through code.

Mistakes as a result of code duplication

For example, imagine having a SwiftUI view that looks as follows:

struct MyView: View {
  @ObservedObject var viewModel: MyViewModel

  var body: some View {
    Text("\(viewModel.user.givenName) \(viewModel.user.familyName) (\(viewModel.user.email))")
  }
}

On its own, this doesn’t look too bad. We just have a view, and a view model, and to present something to the user we grab a few view model properties and we format them nicely for our user.

Once the app that contains this view grows, we might need to grab the same data from a (different) view model, and format it identically to how it’s formatted in other views.

Initially some copying and pasting will do the trick, but at some point you’ll usually find that things get out of sync. One view presents data one way, and another view presents it in another way.

You could update this view and view model as follows to fix the potential for mistakes:

class MyViewModel: ObservableObject {
  // ...

  var formattedUsername: String {
    return "\(user.givenName) \(user.familyName) (\(user.email))"
  }
}

struct MyView: View {
  @ObservedObject var viewModel: MyViewModel

  var body: some View {
    Text(viewModel.formattedUsername)
  }
}

With this code in place, we can use this view model in multiple places and reuse the formatted name.

It would be even better if we moved the formatted name onto our User object:

extension User {
  // ...

  var formattedUsername: String {
    return "\(givenName) \(familyName) (\(email))"
  }
}

struct MyView: View {
  @ObservedObject var viewModel: MyViewModel

  var body: some View {
    Text(viewModel.user.formattedUsername)
  }
}

While this code allows us to easily get a formatted username wherever we have access to a user, we are violating a principle called the Law of Demeter. I have written about this before in a post where I talk about loose coupling, so I won’t go too in depth for now, but the key point to remember is that our view explicitly depends on MyViewModel, which is fine. However, by accessing user.formattedUsername on this view model, our view also has an implicit dependency on User. And not just that, it also depends on the view model having access to a user object.

I’d prefer to make one more change to this code and make it work as follows:

extension User {
  // ...

  var formattedUsername: String {
    return "\(givenName) \(familyName) (\(email))"
  }
}

class MyViewModel: ObservableObject {
  // ...

  var formattedUsername: String {
    return user.formattedUsername
  }
}

struct MyView: View {
  @ObservedObject var viewModel: MyViewModel

  var body: some View {
    Text(viewModel.formattedUsername)
  }
}

This might feel a little redundant at first but once you start paying attention to keeping your implicit dependencies in check and you try to only access properties on the object you depend on without chaining multiple accesses you’ll find that making changes to your code suddenly requires less work than it does when you have implicit dependencies all over the place.

Another form of code duplication can happen when you're styling UI elements. For example, you might have written some code that styles a button in a particular way.

If there’s more than one place that should present this button, we could copy and paste the styling and things would be fine for a while.

However, a few months later we might need to make the button labels bold instead of regular font weight and it will be way too easy to miss one or two buttons that we forgot about. We could do a full project search for Button but that would most likely yield way more results than just the buttons that we want to change. This makes it far too easy to overlook one or more buttons that we should be updating.

Duplicating code or logic once or twice usually isn’t a big deal. In fact, sometimes generalizing or placing the duplicated code somewhere is more tedious and complex than it’s worth. However, once you start to duplicate more and more, or when you’re duplicating things that are essential to keep in sync, you should consider making a small and lightweight abstraction or wrapper to prevent mistakes.

Preventing mistakes related to code duplication

Whenever you find yourself reaching for cmd+c on your keyboard, you should ask yourself whether you’re about to copy something that will need to be copied often. Since none of us have the ability to reliably predict the future, this will always be somewhat of a guess. As you gain more experience in the field you will develop a sense for when things are prone to duplication and a good candidate to abstract.

Especially when an abstraction can be added in a simple manner you shouldn’t have a very high tolerance for copying and pasting code.

Consider the view model example from earlier. We were able to resolve our problem by making sure that we thought about the right level of placing our user’s formatted name. Initially we put it on the view model, but then we changed this by giving the user itself a formatted name. Allowing any place that has access to our user object to grab a formatted name.

An added benefit here is we keep our view model as thin as possible, and we’ve made our user object more flexible.

In the case of a button that needs to appear in multiple places it makes sense to wrap the button in a custom view. It could also make sense to write a custom button style if that better fits your use case.
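
As a hedged sketch (the styling details and names here are made up for illustration), a custom button style that centralizes this kind of styling could look like this:

struct PrimaryButtonStyle: ButtonStyle {
  func makeBody(configuration: Configuration) -> some View {
    configuration.label
      .font(.body.bold())
      .padding()
      .background(Color.accentColor)
      .foregroundStyle(.white)
      .clipShape(Capsule())
      .opacity(configuration.isPressed ? 0.7 : 1)
  }
}

// usage: every button styled this way stays in sync automatically
Button("Save") {
  // perform action
}
.buttonStyle(PrimaryButtonStyle())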

Mistakes as a result of complex state

Managing state is hard. I don’t trust anybody that would argue otherwise.

It’s not uncommon for code to slowly but surely turn into a complex state machine that uses a handful of boolean values and some strings to determine what the app’s current state really is. Often the result is that when one boolean is true, a couple of others must be false because the program would be in a bad state otherwise.

My favorite example of a situation where we have multiple bits of state along with some rules about when this state is or isn’t valid is URLSession's callback for a data task:

URLSession.shared.dataTask(with: url) { data, response, error in
  guard error == nil else {
    // something went wrong, handle error
    return
  }

  guard let data, let response else {
    // something went VERY wrong
    // we have no error, no data, and no response
    return
  }

  // use data and response
}

If our request fails and comes back as an error, we know that the response and data arguments must be nil and vice-versa. This is a simple example but I’ve seen much worse in code I’ve worked on. And the problem was never introduced knowingly. It’s always the result of slowly but surely growing the app and changing the requirements.

When we design our code, we can fix these kinds of problems before they occur. When you notice that you can express an impossible state in your app due to a growth in variables that are intended to interact together, consider leveraging enums to represent the states your app can be in.

That way, you significantly lower your chances of writing incorrect state into your app, which your users will enjoy.
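
To make this concrete, the data task result from above could be modeled with a single enum so the impossible combination (no error, no data, and no response) simply can’t be expressed; the name and cases here are just illustrative:

enum RequestState {
  case inProgress
  case success(Data, URLResponse)
  case failure(Error)
}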

For example, Apple could have improved their URLSession example with the Result type for callbacks. Luckily, with async / await bad state can’t be represented anymore because a data call now returns a non-optional Data and URLResponse or throws an Error.

Mistakes as a result of not knowing the magical incantation

One last example that I’d like to highlight is when codebases require you to call a series of methods in a particular order to make sure that everything works correctly, and all bookkeeping is performed correctly.

This is usually the result of API design that’s somewhat lacking in its usability.

One example of this is the API for adding and removing child view controllers in UIKit.

When you add a child view controller you write code that looks a little like this:

addChild(childViewController)
// ... some setup code ...
childViewController.didMove(toParent: self)

That doesn’t seem too bad, right?

The syntax for removing a child view controller looks as follows:

childViewController.willMove(toParent: nil)
// ... some setup code ...
childViewController.removeFromParent()

The difference here is whether we call willMove or didMove on our childViewController. Not calling these methods correctly can result in too few or too many view controller lifecycle events being sent to your child view controller. Personally, I always forget whether I need to call didMove or willMove when I work with child view controllers because I do it too infrequently to remember.

To fix this, the API design could be improved to automatically call the correct method when you make a call to addChild or removeFromParent.
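
One way to do that (a sketch with hypothetical method names, not an API that UIKit provides) is to wrap the containment calls in a small extension so call sites can’t get the order wrong:

import UIKit

extension UIViewController {
  func embed(_ child: UIViewController, in container: UIView) {
    addChild(child)
    container.addSubview(child.view)
    child.view.frame = container.bounds
    child.didMove(toParent: self)
  }

  func removeEmbeddedChild(_ child: UIViewController) {
    child.willMove(toParent: nil)
    child.view.removeFromSuperview()
    child.removeFromParent()
  }
}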

In your own API design, you’ll want to look out for situations where your program only works correctly when you call the right methods in the right order. Especially when the method calls should always be grouped closely together.

That said, sometimes there is a good reason why an API was designed the way it was. I think this is the case for Apple’s view controller containment APIs for example. We’re supposed to set up the child view controller’s view between the calls we’re supposed to make. But still… the API could surely be reworked to make making mistakes harder.

Designing code that helps preventing mistakes

When you’re writing code you should always be on the lookout for anti-patterns like copy-pasting code a lot, having lots of complex state that allows for incorrect states to be represented, or when you’re writing code that has very specific requirements regarding how it’s used.

As time goes on and you gain more and more coding experience, you’ll find that it gets easier and easier to spot potential pitfalls, and you can start getting ahead of them by fixing problems before they exist.

Usually this means that you spend a lot of time thinking about how you want to call certain bits of code.

Whenever I’m working on a new feature, I tend to write my “call site” first. The call site is the part where I interact with the feature code that I’m about to write.

For example, if I’m building a SwiftUI view that’s supposed to render a list of items that are fetched from various sources I’ll probably write something like:

List(itemSource.allItems) { item in 
  // ...
}

Of course, that code might not work yet but I’ll know what to aim for. No matter how many data sources I end up with, I want my List to be easy to use.

This method of writing code by determining how I want to use it first can be applied to every layer of your codebase. Sometimes it will work really well, other times you’ll find that you need to deviate from your “ideal” call site but it helps focus on what matters; making sure the code is easy to use.

Whenever I’m designing APIs I think about this post from Dave DeLong.

In particular, this quote always stands out to me:

A great API is kind to all developers who work with it.

Every method you write and every class you design has an API. And it’s a good idea to make sure that this API is friendly to use. This includes making sure that it’s hard (or ideally, impossible) to misuse that API as well as having good error messages and failure modes.

Moving on from API design, if you’re modeling state that mostly revolves around one or more booleans, consider enums instead. Even if you’re modeling something like whether or not a view should animate, an enum can help you make your code more readable and maintainable in the long run.

More than anything, if you think that a certain bit of code feels “off”, “too complex” or “not quite right”, there’s a good chance your intuition is correct. Our code should be as straightforward to understand as possible. So whenever we feel like we’re doing the opposite, we should correct that.

That’s not to say that all complex code is bad. Or that all repetition is bad. Or even that every bit of complex state should become an enum. These are all just flags that should stand out to you as something that you should pay attention to. Any time you can change your code a bit in order to make it impossible to represent an impossible state, or if you can make some changes to your code that ensure you can’t pass bad arguments to a method, that’s a win.

In Summary

Writing good code can be really hard. In this post, I outlined a couple of examples of code that allows developers to make mistakes. There are many ways that code can open a developer up to mistakes, and these usually involve code that has evolved over time, which can mean that blind spots have crept into the codebase without the developer noticing.

Through experience, we can learn to identify our blind spots early and we can defensively write code that anticipates change in a way that ensures our code remains safe and easy to use.

Overall, state is the hardest thing to manage in my experience. Modeling state in a way that allows us to represent complex states in a safe manner is extremely useful. Next time you're considering writing an 'if' statement that compares two or more values to determine what should happen, consider writing an enum with a descriptive name and associated values instead.

What are some common coding mistakes that you have learned to identify along the way? I’d love if you told me all about them on X.

Connecting your git repository with a remote server

Having a local git repository is a smart thing to do. It’s even smarter to push your local git repositories up to a remote server so that you can collaborate with others, clone your repository on a separate machine, or have a backup of your code in case you’re replacing your current development machine with another. A possibly less obvious benefit of hosting your git repository somewhere is that lots of git servers provide useful features like Pull Requests for code reviews, issue tracking, and more.

In this post, you will learn how you can set up a new repository using GitHub, connect your local repository to it, push code, clone your repository on a new machine, and more. The goal of this post is to provide you with a good overview of the kinds of features and workflows that you unlock once you’ve got your git repository set up with a remote like GitHub.

I’m choosing to use GitHub in this post as my remote because it’s one of the most well known and widely used git platforms out there. It’s not the only one, and it certainly doesn’t mean that the others aren’t worth using. Platforms like GitLab and Microsoft Azure Repos work fine too.

Creating a remote git repository

If you don’t have a GitHub account yet, that’s the first thing you’ll want to do. You need to have an account in order to use GitHub.

Once you have your account set up, you can create a new repository by clicking on the “New” button on the page that’s presented as your main page. You can also click here to create a new repo.

Once you’re on the new repo page, you’ll see a form that looks as follows:

Form to create a new repository

As your repository name you should pick a short and simple name that reflects your project. Usually I pick the name of the app I’m working on and replace all space characters with dashes.

As a description for your repository you can write a short sentence about your project.

If you’re working on your project alone and you want to prevent anybody from discovering and cloning your project, make sure you set your project to Private. If you want to allow people to discover, browse, and clone your code you should keep your repository Public. This is especially useful if you intend to open source your project at some point.

You can choose to initialize your repository with a README file if you like. If you’re planning to connect an existing repository that you have locally to the project you’re setting up right now, don’t check this checkbox. You’ll end up overwriting the generated README when you push your project anyway so there’s no point in creating one now.

The same applies to the license and the .gitignore file.

For new repositories it makes sense to check all the checkboxes and choose the options that fit your needs. However, if you’re pushing an existing project you’ll most likely have taken care of these three files on your local machine already. And if you haven’t, you’ll overwrite the generated files when you push your local repository, deleting anything that GitHub generated on your behalf.

Click “Create repository” once you’ve set everything up to see your repository in GitHub’s web interface.

Once you’re on this page, you’ll see something like the following picture:

A screenshot of a newly created repository on github.com

Notice how there are several instructions that you can follow to either clone your project to your computer, or to connect an existing repository to this remote repository.

If you’ve made a completely new project that you don’t have a local repository for yet, you can either follow the instructions under the “create a new repository on the command line” header or you can directly clone your repository using the command below:

git clone git@github.com:<your repo>

You’ll want to replace <your repo> with your repository name. For the correct path to your repo, you can copy the git@github.com URL that’s shown under the “Quick Setup” header.

Once you’ve cloned your repository you can start adding code, making commits, branches, and more.

The process of preparing an existing repository to talk to your new remote is a little bit more involved. The key steps are the following three git commands. All three commands should be run from within the git repository that you want to push to your newly created remote.

git remote add origin <URL>
git branch -M main
git push -u origin main

The first command in this sequence adds a new remote destination to your git repository. We can name our remotes, and in this case the chosen name is origin. You can use a different name if you prefer, but origin is pretty much an industry standard, so I recommend sticking with that name for your remote.

The second command sets a branch called main to be the main branch for this repository. This means that if somebody (or you) clones your repository, the default branch they’ll check out is main. Again, you can change this to be any branch you’d like but main is an industry standard at this point so I recommend keeping main as your default branch.

Finally, a git push is executed. The command pushes the chosen branch (main in this case) to a remote repository. In this case we specify that we want to push our branch to the origin that we’ve set up before. The -u flag that’s passed makes sure that our local main branch is set up to track the remote branch origin/main. Doing this will allow git to check whether our remote repository contains commits or branches that we don’t have locally.
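
If you want to double-check that the tracking relationship was set up, one quick way to do so (this is just a sanity check, not a required step) is:

# lists your local branches along with the remote branch each one tracks
git branch -vv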

Let’s see how we can interact with our remote repository through pushing, pulling, and more.

Interacting with a remote repository

Once our local repository is set up to track a remote, we can start interacting with it. The most common interactions you’ll have with a remote repository are pushing and pulling.

We’ve already looked at pushing code in the previous section. When we execute a push command in a local git repository, all commits that belong to the branch we’re pushing are uploaded to the remote git server.

Usually, pushes are fairly trivial. You execute a push, and the code ends up on your remote server. However, sometimes you’ll try to push but the remote returns an error. For example, you might run into the following error:

error: failed to push some refs to '<YOUR REPO URL>'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

This error tells us what’s wrong and what we can do to resolve the issue. Git is usually quite good at this so it’s very important to carefully read errors that git presents to you. More often than not the error is pretty descriptive but the terminology might seem a bit foreign to you.

One unconventional tip that I’d like to give here is that you can ask ChatGPT to clarify the error that git gives you. This often works well because git is so widely used, which means that an AI like ChatGPT is very well trained to help you understand these problems.

For the error shown above, the usual solution is to run a git pull before pushing. When you run git pull, you pull down all the commits that the remote has for your branch. After running your pull, you can try pushing your branch again. Usually this will succeed unless a new error occurs (which I’d say is uncommon).
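
In practice, that usually boils down to running these two commands back to back:

git pull
git push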

Another command that you can use to pull down information about the remote repository is git fetch.

While git pull downloads new commits and applies them to your branch, merging in any commits that were on the remote but not on your local branch yet, a git fetch only downloads changes.

This means that the new commits and branches that existed on the remote will be downloaded into your local repository, but your branches are not updated (yet) to mirror the contents from the server.

Using git fetch is useful if you want to run git log after fetching to inspect what others have worked on without immediately updating your local branches. It’s also useful if you want to list all branches that currently exist both locally and remotely without updating your local branches just yet.

You can list all branches that exist locally and remotely using the git branch --all command. The list that’s printed by this command contains all branches in your repository, allowing you to see if there are any branches on the remote that you don’t have locally.

To switch to one of these branches, you can write git checkout <branch-name> and git will create a local branch that tracks its remote counterpart if you didn’t have a local copy yet. If you did use this branch at some point, git will switch to the existing branch instead.
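
Putting those commands together, a typical sequence might look like this (the branch name is hypothetical):

# download the latest commits and branches without changing your local branches
git fetch

# list every local and remote branch
git branch --all

# switch to a branch that so far only existed on the remote
git checkout feature/onboarding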

To update this existing version of the branch so it’s at the same commit as the remote you can use a regular git pull.

Once you’ve made a couple of commits and you’re ready to push your branch to the server you can go ahead and use git push -u origin yourbranch to push your new commits up to the remote, just like you’ve seen before.

At some point in time, you might want to delete stale branches that you no longer need. Doing this is a little bit tricky.

Locally, you can delete a branch using git branch -d branchname. This won’t delete your branch if no other branch contains the commits from the branch you’re about to delete. In other words, the -d option checks whether your branch is “unmerged” and warns you if it is.

If you want to delete your branch regardless of its merge status you write git branch -D branchname. This will skip the merge checks and delete your branch immediately.

When you want to delete your branch on your remote as well, you need to push your delete command. Here’s what that looks like:

git push origin --delete branchname

Usually the web interface for your remote repository will also allow you to delete your branches at the click of a button.

In Summary

In this post, we’ve explored establishing and managing a git repository, with a particular focus on using GitHub. We began by underscoring the importance of maintaining a local git repository and the added advantages of hosting it on a remote server like GitHub. Having a remote repository not only makes collaboration easier but also provides a backup of your work.

We looked at the steps needed to create a new remote repository on GitHub. You learned that there are several ways to connect a local repository with a remote, and you’ve learned how you can choose the option that best suits you.

Finally, we explored various interactions with a remote repository, including essential tasks like pushing and pulling code, and managing local and remote branches. We discussed how to address common errors in these processes, highlighting the instructive nature of Git's error messages. Commands such as git fetch, git branch, and git checkout were covered, providing insights into their roles in synchronizing and managing branches. The post wrapped up with guidance on deleting branches, detailing the differences between the git branch -d and git branch -D commands, and the process for removing a branch from the remote repository.

Understanding and resolving merge conflicts

Git is great, and when it works well it can be a breeze to work with. You push, pull, commit, branch, and merge, but then… you run into a merge conflict. In this post, we’ll explore merge conflicts. We’ll look at why they happen, and what we can do to avoid running into merge conflicts in the first place.

Let’s start by understanding why a merge conflict happens.

Understanding why a merge conflict happens

Git is usually pretty good at merging together branches or commits. So why does it get confused sometimes? And what does it mean when a merge conflict occurs?

Let me start by saying that a merge conflict is not your fault. There’s a good chance that you couldn’t have avoided it, and it’s most certainly not something you should feel bad about.

Merge conflicts happen all the time and they are always fixable.

The reason merge conflicts happen is that git sometimes gets conflicting information about changes. For example, maybe your coworker split a huge Swift file into two or more files, while you’ve made changes to parts of the code that have now been moved into an extension.

The following two code snippets illustrate the before situation, and two “after” situations.

// Before
struct MainView: View {
  var body: some View {
    VStack {
      Text("This is an example")

      Button("Counter ++") {
        // ...
      }
    }
    .padding(16)
  }
}

// After on branch main
struct MainView: View {
  var body: some View {
    VStack {
      Text("This is another example")
      Text("It has multiple lines!")

      Button("Counter ++") {
        // ...
      }
    }
    .padding(16)
  }
}

// After on feature branch
struct MainView: View {
  var body: some View {
    VStack {
      MyTextView()

      CounterButton()
    }
    .padding(16)
  }
}

When git tries to merge this, it gets confused.

Programmer A has deleted some lines, replacing them with new views while programmer B has made changes to those lines. Git needs some assistance to tell it what the appropriate way to merge this is. A merge conflict like this is nobody’s fault because it’s perfectly reasonable for one developer to be refactoring code and for another developer to be working on a part of that code.

Usually you’ll try to avoid two developers working on the same files in a short timespan, but at the same time git makes it possible to work on the same file on multiple branches, so it’s not common for developers to synchronize who works on which files and when. Doing so would be a huge waste of time, so instead we rely on git to get our merges right in most cases.

Whenever git sees two conflicting changes on the same part of a file, it asks a human for help. So let’s move on to seeing different approaches to resolving merge conflicts.
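
To make this a bit more tangible: for the example above, the conflicted file would look roughly like this when you open it after trying to merge the feature branch into main (the exact labels after the markers depend on your branch names and merge direction):

struct MainView: View {
  var body: some View {
    VStack {
<<<<<<< HEAD
      Text("This is another example")
      Text("It has multiple lines!")

      Button("Counter ++") {
        // ...
      }
=======
      MyTextView()

      CounterButton()
>>>>>>> feature
    }
    .padding(16)
  }
}

Everything between <<<<<<< and ======= is the version on your current branch, and everything between ======= and >>>>>>> is the incoming version. Resolving the conflict means replacing the whole marked region with the code you actually want and removing the markers.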

Resolving merge conflicts

There’s no silver bullet for resolving your merge conflicts. Typically you will choose one of three options when you’re resolving a conflict:

  • Resolve using the incoming change (theirs)
  • Resolve using the current change (mine)
  • Resolve manually

In my experience you’ll usually want to use a manual resolution when fixing merge conflicts. Before I explain how that works, let’s take a quick look at how resolving using “mine” and “theirs” works.

A merge conflict always happens when you try to apply changes from one commit onto another commit, or when you try to merge one branch into another branch.

Sometimes git can merge parts of a file while other parts of the file cause conflicts. For example, imagine that my commit changes line 2 of a specific file while the other commit removes that line. My commit also adds a few lines of code at the end of the file, and the other commit doesn’t.

Git would be smart enough to append the new lines to the file, but it can’t figure out what to do with line 2 of the file since both commits have made changes in a way that git can’t merge.

In this case, we can make a choice to either resolve the conflict for line 2 using my commit (make a change to line 2) or using the other commit (delete the line altogether).

Deciding what needs to be done can sometimes require some work and collaboration.

If your coworker deleted a specific line, it’s worth asking why they did that. Maybe line 2 declares a variable that’s no longer needed or used so your coworker figured they’d delete it. Maybe you didn’t check whether the variable was still needed but you applied a formatting change to get rid of a SwiftLint warning.

In a situation like this, it’s safe to resolve your conflict using “their” change. The line can be removed so you can tell git that the incoming change is what you want.
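
If you’re resolving from the command line rather than through a GUI, git can take one side of a conflicted file wholesale. This is a sketch; the file path is hypothetical:

# during a merge, keep the incoming ("their") version of the conflicted file
git checkout --theirs Sources/Example.swift

# or keep your own ("our") version instead:
# git checkout --ours Sources/Example.swift

# then mark the conflict as resolved
git add Sources/Example.swift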

In other situations things might not be as straightforward and you’ll need to do a manual merge.

For example, let’s say that you split a large file into multiple files while your coworker made some changes to one of the functions that you’ve now moved into a different file.

If this is the case, you can’t tell git to use one of the commits. Instead, you’ll need to manually copy your coworker’s changes into your new file so that everything still works as intended. A manual conflict resolution can sometimes be relatively simple and quick to apply. However, they can also be rather complex.

If you’re not 100% sure about the best way to resolve a conflict I highly recommend that you ask for a second pair of eyes to help you out. Preferably the eyes of the author of the conflicting commit because that will help make sure you don’t accidentally discard anything your coworker did.

During a merge conflict your project won’t build, which makes testing your conflict resolution almost impossible. Once you’ve resolved everything, make sure you compile and test your app before you commit the conflict resolution. If things don’t work and you have no idea what you’ve missed it can be useful to just rewind and try again by aborting your merge. You can do this using the following command:

git merge --abort

This will reset you back to where you were before you attempted to merge.

If you approach your merge conflicts with caution and you pay close attention to what you’re doing you’ll find that most merge conflicts can be resolved without too much trouble.

Merge conflicts can be especially tedious when you try to merge branches by rebasing. In the next section we’ll take a look at why that’s the case.

Resolving conflicts while rebasing

When you’re rebasing your branch on a new commit (or branch), you’re replaying every commit on your branch using a new commit as the starting point.

This can sometimes lead to interesting problems during a rebase where it feels like you’re resolving the same merge conflicts over and over again.

In reality, your conflicts can keep popping up because each commit will have its own incompatibilities with your new base commit.

For example, consider the following diagram as our git history:

Git history without rebase

You can see that our main branch has received some commits since we’ve created our feature branch. Since the main branch has changed, we want to rebase our feature branch on main so that we know that our feature branch is fully up to date.

Instead of using a regular merge (which would create a merge commit on feature) we choose to rebase feature on main to make our git history look as follows:

Git history with rebase

We run git rebase main from the command line and git tells us that there’s a conflict in a specific file.

Imagine that this file looked like this when we first created feature:

struct MainView: View {
  var body: some View {
    VStack {
      Text("This is an example")

      Button("Counter ++") {
        // ...
      }
    }
    .padding(16)
  }
}

Then, main received some new code to make the file look like this:

struct MainView: View {
  var body: some View {
    VStack {
      Text("This is another example")
      Text("It has multiple lines!")

      Button("Counter ++") {
        // ...
      }
    }
    .padding(16)
  }
}

But our feature branch has a version of this file that looks as follows:

struct MainView: View {
  var body: some View {
    VStack {
      MyTextView()

      CounterButton()
    }
    .padding(16)
  }
}

We didn’t get to this version of the file on feature in one step. We actually have several commits that made changes to this file so when we replay our commits from feature on the current version of main, each individual commit might have one or more conflicts with the “previous” commit.

Let’s take this step by step. The first commit that has a conflict looks like this on feature:

struct MainView: View {
  var body: some View {
    VStack {
      MyTextView()

      Button("Counter ++") {
        // ...
      }
    }
    .padding(16)
  }
}

struct MyTextView: View {
  var body: some View {
    Text("This is an example")
  }
}

I’m sure you can imagine why this is a conflict. The feature branch has moved Text to a new view while main has changed the text that’s passed to the Text view.

We can resolve this conflict by grabbing the updated text from main, adding it to the new MyTextView, and proceeding with our rebase.

Now, the next commit that changed this file will also have a conflict to resolve. This time, we need to tell git how to get from our previously fixed commit to this new one. The reason this is confusing git is that the commit we’re attempting to apply can no longer be applied in the same way that it was before; the base for every commit in feature has changed so each commit needs to be rewritten.

We need to resolve this conflict in our code editor, and then we can continue the rebase by running git add . followed by git rebase --continue. This will open your terminal’s text editor (often vim) allowing you to change your commit message if needed. When you’re happy with the commit message you can finish your commit by hitting esc and then writing :wq to write your changes to the commit message.

After that the rebase will continue and the conflict resolution process needs to be repeated for every commit with a conflict to make sure that each commit builds correctly on top of the commit that came before it.
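
In other words, the loop you go through for every conflicting commit during a rebase looks roughly like this:

# 1. fix the conflicted files in your editor
# 2. stage the resolution
git add .

# 3. continue replaying the remaining commits
git rebase --continue

# or bail out and start over:
# git rebase --abort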

When you’re dealing with a handful of commits this is fine. However, if you’re resolving conflicts for a dozen commits, this process can be frustrating. If that’s the case, you can either choose to do a merge instead (and resolve all conflicts at once) or to squash (parts of) your feature branch. Squashing commits using rebase is a topic that’s pretty advanced and could be explained in a blog post of its own. So for now, we’ll skip that.

When you’ve decided how you want to proceed, you can abandon your rebase by running git rebase --abort in your terminal to go back to the state your branch was in before you attempted to rebase. After that, you can decide to either do a git merge instead, or you can proceed with squashing commits to make your life a little bit easier.

Git rebase and your remote server

If you’ve resolved all your conflicts using rebasing, you have slightly altered all of the commits that were on your feature branch. If you’ve pushed this branch to a remote git server, git will tell you that your local repository has n commits that are not yet on the remote, and that the remote has a (usually) equal number of commits that you do not yet have.

If the remote has more commits than you do, that’s a problem. You should have pulled first before you did your rebase.

The reason you need to pull first in that scenario is that you need to rebase all commits on the branch before you push the rewritten commits. To do a push like that, we have to tell git that the commits we’re pushing are correct and that the commits it has on the remote should be ignored.

We do this by passing the --force flag to our git push command. So for example git push --force origin feature.

Note that you should always be super cautious when force pushing. You should only ever do this after a rebase, and if you’re absolutely sure that you’re not accidentally discarding commits from the remote by doing this.
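
One related flag worth knowing about (it’s not something this post relies on) is --force-with-lease, which refuses the push if the remote contains commits you haven’t fetched yet, making it a slightly safer option than a plain --force:

git push --force-with-lease origin feature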

Furthermore, if you’re working on a branch with multiple people, a force push can be rather frustrating for your coworkers and problematic for their local branches.

As a general rule, I try to only rebase and force push on branches that I’m working on by myself. As soon as a branch is being worked on by others, I switch to using git merge, or I only rebase after checking in with my coworkers to make sure that a force push will not cause problems for them.

When in doubt, always merge. It’s not the cleanest solution due to the merge commits it creates, but at least you know it won’t cause issues for your teammates.

In Summary

Merging branches is a regular part of your day to day work in git. Whether it’s because you’re trying to absorb changes someone made into a branch of your own, or because you want to get your own changes into your main branch, understanding different merging techniques is key.

Regardless of how you intend to merge branches, there’s a possibility to run into a merge conflict. In this post, you’ve learned why merge conflicts can happen, and how you can resolve them.

You’ve also learned why rebases can run into several merge conflicts and why you should always resolve these conflicts one by one. In short, it’s because git replays each commit in your branch on top of the “current” commit for the branch you’re rebasing on.

The key to resolving conflicts is always to keep your cool, take it easy, and work through the conflicts one by one. And when in doubt it’s always a good idea to ask a coworker to be your second pair of eyes.

You also learned about force pushing after rebasing and how that can be problematic if you’re working on your branch with multiple people.

Do you have any techniques you love to employ while resolving conflicts? Let me know on X or Threads!

Git basics for iOS developers

I’ll just say this right off the bat. There’s no such thing as git “for iOS Developers”. However, as iOS Developers we do make use of git. And that means that it makes a lot of sense to understand git, what it is, what it’s not, and most importantly how we can use it effectively and efficiently in our work.

In this post, I’d like to outline some of the key concepts, commands, and principles that you’ll need to know as an iOS Developer that works with git. By the end of this post you will have a pretty good understanding of git’s basics, and you’ll be ready to start digging into more advanced concepts.

Understanding what git is

Git is a so called version control system that was created in 2005 by Linus Torvalds, who’s also the creator of the Linux operating system. Its primary goal is to be a faster alternative to older version control systems like SVN and CVS. These older systems all relied on a single source of truth and made features like branching slow and hard to manage. And because everybody relied on a single source of truth, this meant that there was also a single point of failure. In practice this meant that if your server broke, the entire project was broken.

Git is a distributed system. This means that everybody that clones a project clones the entire git repository. Everybody has all code, all branches, all tags, etc. on their machine when they clone a repository.

The upside of this is that if anything goes wrong with any of the copies of the repository it’s always possible to replace that copy because there’s never a single point of failure.

However, in your day to day use it won’t matter much that git is faster and more reliable than what came before it. In your day to day work you’ll most likely be using git as a means to collaborate with your peers, and to make sure you always have a backup with proper history tracking for your project.

A common misconception amongst newer developers is that git is only relevant when a project needs to be shared amongst multiple developers. While it’s very useful for that, I can only recommend that you always use git to manage your personal projects too. Doing this will allow you to experiment with new features in separate branches, rewind your project to a previous point in time, and to tag releases so you always know which version of your code ended up shipping. If you’re not sure what a branch is, don’t worry. I’ll get to explaining that soon.

Using git is always recommended regardless of project size, team size, or project complexity.

In this post, I won’t explain how git works on the inside. My aim is to provide a much higher level overview for now, and to dig into internals in several follow up posts. Git is complicated enough as-is, so there’s really no need to make things more complicated than they need to be in an introductory post.

Now that you know that git is a version control system that allows you to keep track of your code, share it, create branches, tags, and more, let’s take a look at some of the terminology that’s used when working with git.

Key terminology

You have a vague sense about what git is so now I’d like to walk you through a bit of key terminology. This will help you understand explanations for concepts further in this series, and provide you with a first look at the most important git concepts.

Later in this post we’ll also look at some of git’s most important commands which will start putting things in context and give you some pointers to start using git if you aren’t already.

Repository

When you work with git, a project is typically called a repository. Your repository is usually your project folder that contains a .git folder which is created when you initialize your git repository. This folder contains all information about your project, your commits, history, branches, tags, and more. In the next section of this post we’ll go over how to create a new git repository.

Remote (Origin)

A git repository usually doesn’t exist only on your computer (even though it can!). Most repositories are hosted somewhere on a server so that you can easily access them from any computer and share the repository with your teammates. While git is decentralized and everybody that clones your repository has a full copy of it, you’ll often have a single origin that’s used as your source of truth that everybody in your team pushes code to and pulls updates from.

Most projects will use an existing platform like GitHub, GitLab, or Azure as their remote to push and pull code. A project can use multiple remotes if needed but usually your primary / main remote is called “origin”.

Branches

In git, you make use of branches to structure your work. Every project that you place under version control with git will have at least one branch, this branch is typically called main. Every time you make a new commit in your repository you’re essentially associating that commit with a branch. This allows you to create a new branch that’s based off of a given version of your code, work on it, make changes, and eventually switch back to another branch that doesn’t contain the same changes you just made.

In a way, you can think of a branch in git as a chain of commits.

This is incredibly useful when you’re working on new features for your app while you’re also maintaining a shipping version of your app. You can make as many branches as you’d like in git, and you can merge changes back into your main branch when you’re happy with the feature you’ve just built.
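
As a small preview of what working with branches looks like on the command line (the branch name is hypothetical and these commands are just a sketch):

# create a new branch based on your current commit and switch to it
git checkout -b new-design

# ...do some work and make commits, then switch back to main
git checkout main

# and merge the finished work back into main
git merge new-design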

Commits

Commits are what I would consider git’s core feature. Every time you make a new commit, you create a snapshot of the work you did in your project so far. After you’ve made a commit you can choose to continue working on your project, make progress towards new features, implement bug fixes, and more. As you make progress you’ll make more and more commits to snapshot your progress.

So why would you make commits?

Well, there are a few key reasons. One of them is that a commit allows you to see the changes that you’ve made from one step to the next. For example, when you’ve completed a big refactor you might not completely remember which files you’ve worked on and what you’ve changed. If you’ve made one or more commits during the refactoring process you can retrace every step that you took during your refactor.

Another reason to make a commit is so you can branch off of that commit to work on different features in isolation. You’ll most commonly do this in teams but I’ve done this in single-person projects too.

Git is all about commits, so if there’s one concept that you’ll want to focus on first when you’re new to git, it’s probably going to be commits.

Merging and rebasing

For now, I’m going to talk about merging and rebasing under a single header. They’re different concepts with very different implications and workflows, but they usually serve a similar purpose. Since we’re focusing on introducing topics, I think it’s fair to cover merge and rebase together.

When we have a series of commits on one branch, and we have another branch with some more commits, we’ll usually want to somehow bring the newer commits into our source branch. For example, if I have a main branch that I’ve been committing to, I might have created a feature-specific branch to work from, such as a branch where I’m working on a design overhaul for my app.

Once my design overhaul is complete I’ll want to update my main branch with the new design so that I can ship this update to my users. I can do this by rebasing or merging. The end result of either operation is that the commits that I made (or the final state of my feature branch) end up being applied to my main branch. Merge and rebase each do this in a slightly different way and I’ll cover each option in more depth in a follow up post.

Git’s most important commands

Alright, I know this is a long post (especially for a blog) but before we can wrap up this introduction to git, I think it’s time we go over a few of git’s key commands. These commands correspond to the key terminology that we just covered, so hopefully the commands along with their explanations help solidify what you’ve just learned.

Because the command line is a universally available interface for git I’ll go ahead and focus my examples only on running commands in the command line. If you prefer working with a more graphical interface feel free to use one that you like. Fork, Tower, and Xcode’s built-in git GUI all work perfectly fine and are all built on top of the commands outlined below.

Initializing a new repository

When you start a new project, you’ll want to create a git repository for your project sooner rather than later. Creating a repository can be done with a single command that creates a .git folder in your project root. As you’ve learned in the previous section, the .git folder is the heart and soul of your repository. It’s what transforms a plain folder on your file system into a repository.

To turn your project folder into a repository, navigate to your project folder (the root of your project, usually the same folder as where your .xcodeproj is located) and type the following command:

git init

This command will run quickly and it will initialize a new repository in the folder you ran the command from.

When creating a new project in Xcode you can check the “create git repository on my mac” checkbox to start your project off as a git repository. This will allow you to skip the git init step.

Creating a repository for your project does not put any files in your project under version control just yet. We can verify this by running the git status command. Doing this for a folder that I just created a new Xcode project in yields the following output:

❯ git status
On branch main

No commits yet

Untracked files:
  (use "git add <file>..." to include in what will be committed)
    GitSampleProject.xcodeproj/
    GitSampleProject/

nothing added to commit but untracked files present (use "git add" to track)

As you can see, there’s a list of files under the untracked files header.

This tells us that git can see that we have files in our project folder, but git isn’t actively tracking (or ignoring) these files. In this case, git is seeing our xcodeproj folder and the GitSampleProject folder that holds our Swift files. Git won’t pro-actively dig into these folders to list all files that it’s not tracking. Instead, it lists the folder which indicates that nothing in that folder is being tracked.

Let’s take a look at adding files to a git next.

Adding files to git

As you’ve seen, git doesn’t automatically track history for every file in our project folder. To make git track files we need to add them to git using the add command. When you add a file to git, git will allow you to commit versions of that file so that you can track history or go back to a specific version of that file if needed.

The quickest way to add files to git is to use the add command as follows:

git add .

While this approach is quick, it’s not great. In a standard Xcode project there are always some files that you don’t want to add to git. We can be more specific about what we need to be added to git by specifying the files and folders that we want to add:

# adding files
git add Sources/Sample.swift

# adding folders
git add Sources/

For a standard Xcode project we typically want to add everything in our project folder, with a couple of exceptions. Instead of manually typing and filtering the files and folders that we want to add to git every time we want to make a new commit, we can exclude files and folders from git using a file called .gitignore. You can add multiple ignore files to your repository but most commonly you’ll have one at the root of your project. You can create your .gitignore file on the command line by typing the following command:

❯ touch .gitignore
❯ open .gitignore

This will open your file in the TextEdit app. A typical iOS project will at least have the following files and folders added to this file:

.DS_Store
xcuserdata/

You can use pattern matching to exclude or include files and folders using wildcards if you’d like. For now, we’ll just use a pretty simple ignore file as an example.
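
Just to illustrate what that pattern syntax looks like, here are a few hypothetical entries; they’re examples of the syntax, not something your project necessarily needs:

# ignore every file with a .log extension, anywhere in the repository
*.log

# ignore an entire folder
build/

# ignore Xcode user state files wherever they appear
*.xcuserstate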

From now on, whenever git sees that you have files and folders in your project that match the patterns from your ignore file it won’t tell you that it’s not tracking those files because it will simply ignore them. This is incredibly useful for files that contain user specific data, or for content that’s generated at build time. For example, if you’re using a tool like Sourcery to generate code in your project every time it builds, you’ll usually exclude these files from git because they’re automatically recreated anyway.

Once you add files to git using git add, they are added to the staging area. This means that if you were to make a commit now, those files are included in your commit. Git doesn’t record a permanent snapshot of your files until you make a commit. And when you make a commit, only changes that are added to the staging area are included in the commit.

To make your initial commit you’ll usually set up your .gitignore file and then run git add . to add everything in your project to the staging area in one go.

To see the current status of files that have changes, files that aren’t being tracked, and files that are in the staging area and ready to be committed we can use git status again. If we run the command for our iOS project after adding some files and creating the .gitignore file we get the following output:

❯ git status
On branch main

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)
    new file:   .gitignore
    new file:   GitSampleProject.xcodeproj/project.pbxproj
    new file:   GitSampleProject.xcodeproj/project.xcworkspace/contents.xcworkspacedata
    new file:   GitSampleProject.xcodeproj/project.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist
    new file:   GitSampleProject/Assets.xcassets/AccentColor.colorset/Contents.json
    new file:   GitSampleProject/Assets.xcassets/AppIcon.appiconset/Contents.json
    new file:   GitSampleProject/Assets.xcassets/Contents.json
    new file:   GitSampleProject/ContentView.swift
    new file:   GitSampleProject/GitSampleProjectApp.swift
    new file:   GitSampleProject/Preview Content/Preview Assets.xcassets/Contents.json

This is exactly what we want. No more untracked files, git has found our ignore file, and we’re ready to tell git to record the first snapshot of our repository by making a commit.

Making your first commit

We can make a new commit by writing git commit -m "<A short description of changes>", where you’d replace the text between the < and > with a short message that describes what’s in the snapshot. For your initial commit you’ll often write initial commit. Future commits usually contain a very short sentence that describes what you’ve changed.

Writing a descriptive yet short commit message is an extremely good practice because once your project has been under development for a while you’ll be thanking yourself when your commit messages are more descriptive than just the words “did some work” or something similar.

Back to making our first commit. To make a new commit in my sample repository, I run the following command:

git commit -m "initial commit"

When I run this command, the following output is produced:

[main (root-commit) 5aa14e7] initial commit
 10 files changed, 443 insertions(+)
 create mode 100644 .gitignore
 create mode 100644 GitSampleProject.xcodeproj/project.pbxproj
 create mode 100644 GitSampleProject.xcodeproj/project.xcworkspace/contents.xcworkspacedata
 create mode 100644 GitSampleProject.xcodeproj/project.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist
 create mode 100644 GitSampleProject/Assets.xcassets/AccentColor.colorset/Contents.json
 create mode 100644 GitSampleProject/Assets.xcassets/AppIcon.appiconset/Contents.json
 create mode 100644 GitSampleProject/Assets.xcassets/Contents.json
 create mode 100644 GitSampleProject/ContentView.swift
 create mode 100644 GitSampleProject/GitSampleProjectApp.swift
 create mode 100644 GitSampleProject/Preview Content/Preview Assets.xcassets/Contents.json

This tells me that a new commit was created with a hash of 5aa14e7. This hash is the unique identifier for this commit. Git also tells me the number of files and changes in the commit, and then the files are listed. In this case, all my files are labeled with create mode. When I make changes to a file and I commit those changes that label will change accordingly.
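
If you want to look back at the snapshots you’ve made, git log lists your commits along with their hashes and messages. The --oneline flag gives you a compact overview:

git log --oneline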

Most git repositories are connected to a remote host like GitHub. In this post I won’t show you how to add a remote to a git repository. This post is already rather long as it is, so we’ll cover git and remote hosts in a separate post.

In Summary

In this post, you’ve learned a lot of basics around git. You now know that git is a so-called version control system. This means that git tracks history of our files, and allows us to work on multiple features and bug fixes at once using branches. You know that a git repository contains a .git folder that holds all information that git needs to operate.

I’ve explained git’s most important terms like commits, branches, merging, and more. We’ve looked at the key concepts here which means that for some of the terminology you’ve seen we could go way deeper and discover lots of interesting details. These are all topics for separate posts.

After introducing the most important terminology in git, we’ve looked at git’s most important commands. You’ve seen how to create a new git repository, how to add and ignore files, and how to make a commit.

Our next post in this series will focus on getting your repository connected to a remote like GitHub.

Making your SwiftData models Codable

In a previous post, I explained how you can make your NSManagedObject subclasses codable. This was a somewhat tedious process that involves a bunch of manual work. Specifically because the most convenient way I've found wasn't all that convenient. It's easy to forget to set your managed object context on your decoder's user info dictionary which would result in failed saves in Core Data.

With SwiftData it's so much easier to define model objects so it makes sense to take a look at making SwiftData models Codable to see if it's better than Core Data. Ultimately, SwiftData is a wrapper around Core Data which means that the @Model macro will at some point generate managed objects, an object model, and more. In this post, we'll see if the @Model macro will also make it easier to use Codable with model objects.

If you prefer learning by video, check out the video for this post on YouTube:

Tip: if you're not too familiar with Codable or custom encoding and decoding of models, check out my post series on the Codable protocol right here.

Defining a simple model

In this post I would like to start us off with a simple model that's small enough to not get confusing while still being representative of a model that you might define in the real world. In my Practical Core Data book I make a lot of use of a Movie object that I use to represent a model that I would load from The Movie Database. For convenience, let's just go ahead and use a simplified version of that:

@Model class Movie {
  let originalTitle: String
  let releaseDate: Date

  init(originalTitle: String, releaseDate: Date) {
    self.originalTitle = originalTitle
    self.releaseDate = releaseDate
  }
}

The model above is simple enough; it has only two properties, and to illustrate the basics of using Codable with SwiftData we really don't need anything more than that. So let's move on and add Codable to our model next.

Marking a SwiftData model as Codable

The easiest way to make any Swift class or struct Codable is to make sure all of the object's properties are Codable and to have the compiler generate any and all boilerplate for us. Since both String and Date are Codable and those are the two properties on our model, let's see what happens when we make our SwiftData model Codable:

// Type 'Movie' does not conform to protocol 'Decodable'
// Type 'Movie' does not conform to protocol 'Encodable'
@Model class Movie: Codable {
  let originalTitle: String
  let releaseDate: Date

  init(originalTitle: String, releaseDate: Date) {
    self.originalTitle = originalTitle
    self.releaseDate = releaseDate
  }
}

The compiler is telling us that our model isn't Codable. However, if we remove the @Model macro from our code we are certain that our model is Codable because our code does compile without the @Model macro.

So what's happening here?

A macro in Swift expands and enriches our code by generating boilerplate or other code for us. We can right-click on the @Model macro and choose Expand Macro to see what the @Model macro expands our code into. You don't have to fully understand the entire body of code below; the point is to show you that the @Model macro adds a lot of code, including properties that don't conform to Codable.

@Model class Movie: Codable {
  @_PersistedProperty
  let originalTitle: String
  @_PersistedProperty
  let releaseDate: Date

  init(originalTitle: String, releaseDate: Date) {
    self.originalTitle = originalTitle
    self.releaseDate = releaseDate
  }

  @Transient
  private var _$backingData: any SwiftData.BackingData<Movie> = Movie.createBackingData()

  public var persistentBackingData: any SwiftData.BackingData<Movie> {
    get {
      _$backingData
    }
    set {
      _$backingData = newValue
    }
  }

  static func schemaMetadata() -> [(String, AnyKeyPath, Any?, Any?)] {
    return [
      ("originalTitle", \Movie.originalTitle, nil, nil),
      ("releaseDate", \Movie.releaseDate, nil, nil)
    ]
  }

  required init(backingData: any SwiftData.BackingData<Movie>) {
    self.persistentBackingData = backingData
  }

  @Transient
  private let _$observationRegistrar = Observation.ObservationRegistrar()
}

extension Movie: SwiftData.PersistentModel {
}

extension Movie: Observation.Observable {
}

If we apply Codable to our SwiftData model, the protocol isn't applied to the small model we've defined. Instead, it's applied to the fully expanded macro. This means that we have several properties that don't conform to Codable which makes it impossible for the compiler to (at the time of writing this) correctly infer what it is that we want to do.

We can fix this by writing our own encoding and decoding logic for our model.

Writing your encoding and decoding logic

For a complete overview of writing custom encoding and decoding logic for your models, check out this post.

Let's start off by defining the CodingKeys enum that we'll use for both our encoding and decoding logic:

@Model class Movie: Codable {
  enum CodingKeys: CodingKey {
    case originalTitle, releaseDate
  }

  // ...
}

These coding keys directly follow the property names for our model. We have to define them because we're defining custom encoding and decoding logic.

The decoding init can look as follows:

required init(from decoder: Decoder) throws {
  let container = try decoder.container(keyedBy: CodingKeys.self)
  self.originalTitle = try container.decode(String.self, forKey: .originalTitle)
  self.releaseDate = try container.decode(Date.self, forKey: .releaseDate)
}

This initializer is pretty straightforward. We grab a container from the decoder, and then we ask the container to decode the properties we're interested in using our coding keys.

The encoding logic would look as follows:

func encode(to encoder: Encoder) throws {
  var container = encoder.container(keyedBy: CodingKeys.self)
  try container.encode(originalTitle, forKey: .originalTitle)
  try container.encode(releaseDate, forKey: .releaseDate)
}

With this initializer and encode(to:) function in place, our model is now fully Codable. Note that if you're only grabbing data from the network and wish to decode that data into SwiftData models, you can conform to Decodable instead of Codable in order to skip having to write the encode(to:) method.

Let's see how we can actually use our model next.

Decoding JSON into a SwiftData model

For the most part, decoding your JSON data into a SwiftData model will be relatively straightforward. The key thing to keep in mind is that you need to register all of your decoded objects in your model context after decoding them. Here's an example of how to do this:

let url = URL(string: "https://path.to.data")!
let (data, _) = try await URLSession.shared.data(from: url)

// this is the actual decoding
let movies = try! JSONDecoder().decode([Movie].self, from: data)

// don't forget to register the decoded objects
for movie in movies {
  context.insert(movie)
}
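
Note that context in the snippet above is assumed to be a ModelContext that you already have access to. SwiftData's main context autosaves by default, but if you're working with a context that doesn't, you can persist the inserted movies explicitly:

// persist the inserted movies if you're not relying on autosave
try context.save()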

Making our model Codable and working with it was straightforward enough. To wrap things up, I'd like to explore how this approach works with relationships.

Adding relationships to our model

First, let's update our model object to have a relationship:

@Model class Movie: Codable {
  enum CodingKeys: CodingKey {
    case originalTitle, releaseDate, cast
  }

  let originalTitle: String
  let releaseDate: Date

  @Relationship([], deleteRule: .cascade)
  var cast: [Actor]

  init(originalTitle: String, releaseDate: Date, cast: [Actor]) {
    self.originalTitle = originalTitle
    self.releaseDate = releaseDate
    self.cast = cast
  }

  required init(from decoder: Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    self.originalTitle = try container.decode(String.self, forKey: .originalTitle)
    self.releaseDate = try container.decode(Date.self, forKey: .releaseDate)
    self.cast = try container.decode([Actor].self, forKey: .cast)
  }

  func encode(to encoder: Encoder) throws {
    var container = encoder.container(keyedBy: CodingKeys.self)
    try container.encode(originalTitle, forKey: .originalTitle)
    try container.encode(releaseDate, forKey: .releaseDate)
    try container.encode(cast, forKey: .cast)
  }
}

The Movie object here has gained a new property cast which is annotated with SwiftData's @Relationship macro. Note that the decode and encode logic doesn't get fancier than it needs to be. We just decode and encode our cast property like we would any other property.

Let's look at the definition of our Actor model next:

@Model class Actor: Codable {
  enum CodingKeys: CodingKey {
    case name
  }

  let name: String

  @Relationship([], deleteRule: .nullify)
  let movies: [Movie]

  init(name: String, movies: [Movie]) {
    self.name = name
    self.movies = movies
  }

  required init(from decoder: Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    self.name = try container.decode(String.self, forKey: .name)
  }

  func encode(to encoder: Encoder) throws {
    var container = encoder.container(keyedBy: CodingKeys.self)
    try container.encode(name, forKey: .name)
  }
}

Our Actor defines a relationship back to our Movie model but we don't account for this in our encode and decode logic. The data we're loading from an external source would recurse infinitely from actor to movie and back if actors also held lists of their movies in the data we're decoding. Because the source data doesn't contain the inverse that we've defined on our model, we don't decode it. SwiftData will make sure that our movies property is populated because we've defined this property using @Relationship.

When decoding our full API response, we don't need to update the usage code from before. It looks like we don't have to explicitly insert our Actor instances into our model context due to SwiftData's handling of relationships which is quite nice.

With the code as it is in this post, we can encode and decode our SwiftData model objects. No magic needed!

In Summary

All in all I have to say that I'm a little sad that we didn't get Codable support for SwiftData objects for free. It's nice that it's easier to make SwiftData models Codable than it is to make an NSManagedObject conform to Codable but it's not too far off. We still have to make sure that we associate our decoded model with a context. It's just a little bit easier to do this in SwiftData than it is in Core Data.

If you have a different approach to make your SwiftData models Codable, or if you have questions about this post feel free to reach out!

SwiftUI’s Bindable property wrapper explained

With the introduction of Xcode 15 beta and its corresponding beta OSes (I would say iOS 17 beta, but of course we also get macOS, iPadOS, and other betas...) Apple has introduced new state management tools for SwiftUI. One of these new tools is the @Bindable property wrapper. In an earlier post I explained that @Binding and @Bindable do not solve the same problem, and that they will co-exist in your applications. In this post, I would like to clarify the purpose and the use cases for @Bindable a little bit better so that you can make better decisions when picking your SwiftUI state property wrappers.

If you prefer learning by video, the key lessons from this blog post are also covered in this video:

The key purpose of @Bindable is to allow developers to create bindings to properties that are part of a model that conforms to the Observable protocol. Typically you will create these models by annotating them with the @Observable macro:

@Observable
class SearchModel {
  var query: String = ""
  var results: [SearchResult] = []

  // ...
}

When you pass this model to a SwiftUI view, you might end up with something like this:

struct SearchView: View {
  let searchModel: SearchModel

  var body: some View {
    TextField("Search query", text: // ...??)
  }
}

Notice how the searchModel is defined as a plain let. We don't need to use @ObservedObject when a SwiftUI view receives an Observable model from one of its parent views. We also shouldn't be using @State because @State should only be used for model data that is owned by the view. Since we're passed our SearchModel by a parent view, that means we don't own the data source and we shouldn't use @State. Even without adding a property wrapper, the Observable model is able to tell the SwiftUI view when one of its properties has changed. How this works is a topic for a different post; your key takeaway for now is that you don't need to annotate your Observable with any property wrappers to have your view observe it.

Back to SearchView. In the SearchView body we create a TextField, and this TextField needs to have a binding to a string value. If we were working with an @ObservedObject, or if we owned the SearchModel and defined the property using @State, we would write $searchModel.query to obtain a binding.

When we attempt to do this for our current searchModel property now, we'd see the following error:

var body: some View {
  // Cannot find '$searchModel' in scope
  TextField("Search query", text: $searchModel.query)
}

Because we don't have a property wrapper to create a projected value for our search model, we can't use the $ prefix to create a binding.

To learn more about property wrappers and projected values, read this post.

In order to fix this, we need to annotate our searchModel with @Bindable:

struct SearchView: View {
  @Bindable var searchModel: SearchModel

  var body: some View {
    TextField("Search query", text: $searchModel.query)
  }
}

By applying the @Bindable property wrapper to the searchModel property, we gain access to the $searchModel property because the Bindable property wrapper can now provide a projected value in the form of a Binding.

Note that you only need the @Bindable property wrapper if:

  • You didn't create the model with @State (because you can create bindings to @State properties already)
  • You need to pass a binding to a property on your Observable model
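
To make the first point concrete, here's a minimal sketch of a view that owns its model with @State. The view name is hypothetical and SearchModel is the model from earlier in this post; no @Bindable is needed because @State already provides a projected value:

import SwiftUI

struct SearchContainerView: View {
  // The view creates and owns the model, so @State is appropriate here.
  @State private var searchModel = SearchModel()

  var body: some View {
    // @State's projected value gives us a Binding, so $searchModel.query
    // works without @Bindable.
    TextField("Search query", text: $searchModel.query)
  }
}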

Essentially, you will only need to use @Bindable if in your view you write $myModel.property and the compiler tells you it can't find $myModel. That's a good indicator that you're trying to create a binding to something that can't provide a binding out of the box, and that you'll want to use @Bindable to be able to create bindings to your model.

Hopefully this post helps clear up the purpose and usage of @Bindable a little bit!