Using custom publishers to drive SwiftUI views

In SwiftUI, views can be driven by an @Published property that's part of an ObservableObject. If you've used SwiftUI and @Published before, the following code should look somewhat familiar to you:

class DataSource: ObservableObject {
  @Published var names = [String]()
}

struct NamesList: View {
  @ObservedObject var dataSource: DataSource

  var body: some View {
    List(dataSource.names, id: \.self) { name in
      Text(name)
    }
  }
}

Whenever the DataSource object's names array changes, NamesList will be automatically redrawn. That's great.

Now imagine that our list of names is retrieved through the network somehow and we want to load the list of names in the onAppear for NamesList.

class DataSource: ObservableObject {
  @Published var names = [String]()

  let networkingObject = NetworkingObject()
  var cancellables = Set<AnyCancellable>()

  func loadNames() {
    networkingObject.loadNames()
      .receive(on: DispatchQueue.main)
      .sink(receiveValue: { [weak self] names in
        self?.names = names
      })
      .store(in: &cancellables)
  }
}

struct NamesList: View {
  @ObservedObject var dataSource: DataSource

  var body: some View {
    List(dataSource.names, id: \.self) { name in
      Text(name)
    }.onAppear(perform: {
      dataSource.loadNames()
    })
  }
}

This would work, and it's the way to go on iOS 13, but I've never liked having to subscribe to a publisher just so I could update an @Published property. Luckily, in iOS 14 we can refactor loadNames() and do much better with the new assign(to:) operator:

class DataSource: ObservableObject {
  @Published var names = [String]()

  let networkingObject = NetworkingObject()

  func loadNames() {
    networkingObject.loadNames()
      .receive(on: DispatchQueue.main)
      .assign(to: &$names)
  }
}

The assign(to:) operator allows you to assign the output from a publisher directly to an @Published property under one condition: the publisher that you apply assign(to:) to must have Never as its error type. Note that I had to add an & prefix to $names. The reason is that assign(to:) receives its target @Published property as an inout parameter, and inout parameters in Swift are always passed with an & prefix. To learn more about replacing errors so your publisher can have Never as its error type, refer to this blog post I wrote about catch and replaceError in Combine.
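
For example, if loadNames() could fail, a minimal sketch (assuming the networking publisher fails with URLError, which the code above doesn't specify) could replace errors with an empty array before assigning:

func loadNames() {
  networkingObject.loadNames()   // assumed to be AnyPublisher<[String], URLError> in this sketch
    .replaceError(with: [])      // the failure type becomes Never
    .receive(on: DispatchQueue.main)
    .assign(to: &$names)
}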

Pretty cool, right?

Ignore the first n elements from a publisher in Combine

If you have a Combine publisher and you want to ignore the first n elements that are published by that publisher, you can use the dropFirst(_:) operator. This operator will swallow any values emitted until the threshold you specify is reached. For example, dropFirst(1) will ignore the first emitted value from a publisher:

[1, 2, 3].publisher
  .dropFirst(1)
  .sink(receiveValue: { value in 
    print(value) // 2 and 3 are printed
  })

For more information about dropFirst and several variations of drop like drop(while:) and drop(untilOutputFrom:) you can refer to Apple's documentation.
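
For instance, a quick sketch of drop(while:), which ignores values only until its predicate returns false for the first time, could look like this:

[1, 2, 3, 4, 1].publisher
  .drop(while: { $0 < 3 })
  .sink(receiveValue: { value in
    print(value) // 3, 4 and 1 are printed; dropping stops as soon as 3 arrives
  })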

Recursively execute a paginated network call with Combine

Last week, my attention was caught by a question that Dennis Parussini asked on Twitter. Dennis wanted to recursively make calls to a paginated API to load all pages of data before rendering UI. Since I love Combine and interesting problems, I immediately started thinking about ways to achieve this using a nice, clean API. And then I realized that this is a non-trivial task that's worth exploring.

In this week's post, I would like to share my thought process and solution with you, hoping you'll learn something new about Combine in the process.

Understanding the problem and setting a goal

Whenever I get to work on a problem like this, I always start by writing the code I would like to write when using an API or abstraction that I've written. In this case, I would like to be able to write something like the following code to fetch all pages from the paginated endpoint:

networking.loadPages()
  .sink(receiveCompletion: { _ in
    // handle errors
  }, receiveValue: { items in
    print(items)
  })
  .store(in: &cancellables)

In this case, it didn't matter to me what networking is, or what kind of object owns it. In other words, I don't care about the architecture this would be used in. All I really care about is that I have an object that implements loadPages(). And that loadPages() will return a publisher that emits data for all pages at once. I don't want to receive all intermediate pages in my sink. The publisher completes immediately after delivering my complete data set.

The tricky bit here is that this means that in loadPages() we'll need to somehow create a publisher that collects the responses from several network calls, bundles them into one big result, and outputs them to the created publisher.

Since I didn't have access to an API that would give me paginated responses I decided that a very naive abstraction would be sufficient. The abstraction uses a Response object that looks as follows:

struct Response {
  var hasMorePages = true
  var items = [Item(), Item()]
}

struct Item {}

My loader should keep making more requests until it receives a Response that has its hasMorePages set to false. At that point, the chain is considered complete and the publisher created in loadPages() should emit all fetched values and complete.

The starting point for my experimentation would look like this:

class RecursiveLoader {
  var requestsMade = 0
  var cancellables = Set<AnyCancellable>()

  init() { }

  private func loadPage() -> AnyPublisher<Response, Never> {
    // this would be the individual network call
    Future { promise in
      DispatchQueue.global().asyncAfter(deadline: .now() + 0.1) {
        self.requestsMade += 1
        if self.requestsMade < 5 {
          return promise(.success(Response()))
        } else {
          return promise(.success(Response(hasMorePages: false)))
        }
      }
    }.eraseToAnyPublisher()
  }
}

This setup is fairly simple. I have a loadPage() function that will load an individual page depending on the number of requests I have already made. In the real implementation, this would be replaced by a network call but for my purposes, this would do. What matters is that I have a publisher that emits a Response object that I can use to determine whether I need to load another page or not.

So now that I knew what I wanted to write and had set up some scaffolding it was time to write the solution.

Finding an appropriate solution

Attempt one: a simplified version

My initial thought was to use Combine's reduce operator on some kind of publisher that would emit responses or arrays of items as they came in. When you apply reduce to a publisher in Combine you can accumulate all emitted values into one new value that's emitted when the upstream publisher completes. This sounds perfect for my purposes so I started my experimentation with that as a base thought. However, instead of making loadPage() return a publisher I wanted to simplify everything a little bit. To load all pages I would create an instance of RecursiveLoader, subscribe to a publisher that I'd define as a property on RecursiveLoader and tell it to begin loading:

let loader = RecursiveLoader()

loader.finishedPublisher
  .sink(receiveValue: { items in
    print("items \(items.count)")
  })
  .store(in: &cancellables)

loader.initiateLoadSequence()

While it's not at all what I wanted to write, this was a basic idea that I considered to be approachable enough to start with.

Whenever I'm solving complicated problems or experimenting with new ideas, I tend to get something working first before I go back to my initial design to see how I can adapt my prototype to match it. By doing this I make sure that I keep typing and trying things rather than fighting the system to immediately get the implementation I wanted.

With the simplified end goal in place, I started writing some code. First, I needed to define the finishedPublisher and a skeleton for initiateLoadSequence():

class RecursiveLoader {
  var requestsMade = 0

  private let loadedPagePublisher = PassthroughSubject<Response, Never>()
  let finishedPublisher: AnyPublisher<[Item], Never>

  var cancellables = Set<AnyCancellable>()

  init() {
    self.finishedPublisher = loadedPagePublisher
      .reduce([Item](), { allItems, response in
        return response.items + allItems
      })
      .eraseToAnyPublisher()
  }

  func initiateLoadSequence() {
    // do something
  }
}

I defined two publishers on RecursiveLoader instead of one. The private loadedPagePublisher is where I decided I would publish pages as they came in from the network. The finishedPublisher takes the loadedPagePublisher and applies the reduce operator. That way, once I complete the loadedPagePublisher, the finishedPublisher will emit an array of [Item]. Pretty cool, right?
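
If you haven't used reduce before, here's a quick standalone sketch of how it behaves: every upstream value is folded into a single accumulated value, and that value is emitted once the upstream completes.

let cancellable = [1, 2, 3].publisher
  .reduce(0, +) // 0 + 1 + 2 + 3
  .sink(receiveValue: { value in
    print(value) // 6, emitted only after the array publisher completes
  })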

At this point I came up with the following implementation for initiateLoadSequence():

func initiateLoadSequence() {
  loadPage()
    .sink(receiveValue: { response in
      self.loadedPagePublisher.send(response)

      if response.hasMorePages == false {
        self.loadedPagePublisher.send(completion: .finished)
      } else {
        self.initiateLoadSequence()
      }
    })
    .store(in: &cancellables)
}

In initiateLoadSequence() I call loadPage() and subscribe to the publisher returned by loadPage(). When I receive a response I forward that response to loadedPagePublisher and if we don't have any more pages to load, I complete the loadedPagePublisher so the finishedPublisher emits its array of Item objects. If we do have more pages to load, I call self.initiateLoadSequence() again to load the next page.

This example works, but I don't think it's great. An instance of RecursiveLoader can only load all pages once, and users of this object need to subscribe to finishedPublisher before calling initiateLoadSequence() to prevent dropping events, since the loadedPagePublisher will not emit any values if it doesn't have any subscribers. For loadedPagePublisher to have subscribers, users of RecursiveLoader must subscribe to finishedPublisher since that publisher is built upon loadedPagePublisher.
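
As a quick illustration of why that order matters, a PassthroughSubject simply drops values that are sent while it has no subscribers:

let subject = PassthroughSubject<Int, Never>()
subject.send(1) // dropped, there are no subscribers yet

let cancellable = subject.sink(receiveValue: { value in
  print(value)
})

subject.send(2) // printed, the subject now has a subscriber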

That said, this first attempt did show me that using reduce is a good idea, and I also like the idea of having a publisher that I publish fetched results onto so another publisher can reduce over it to collect all responses returned by the paginated API.

Attempt two: the solution I wanted to write

Since I had an okay first attempt that just had a couple of issues, I figured I wanted to push that idea forward and make it work as I had initially intended. To do this, I wanted to get rid of finishedPublisher and loadedPagePublisher because those made my RecursiveLoader into a non-reusable object that can only load all pages once. Instead, I figured that I could write a function loadPages() that would create a publisher in its own scope and then pass that publisher to a function that would load an individual page and send its result to that publisher.

Let me show you what I mean by showing you the end result of my second attempt at implementing this functionality:

class RecursiveLoader {
  var requestsMade = 0
  var cancellables = Set<AnyCancellable>()

  init() { }

  private func loadPage() -> AnyPublisher<Response, Never> {
    // unchanged from the original
  }

  private func performPageLoad(using publisher: PassthroughSubject<Response, Never>) {
    loadPage().sink(receiveValue: { [weak self] response in
      publisher.send(response)

      if response.hasMorePages {
        self?.performPageLoad(using: publisher)
      } else {
        self?.requestsMade = 0
        publisher.send(completion: .finished)
      }
    }).store(in: &cancellables)
  }

  func loadPages() -> AnyPublisher<[Item], Never> {
    let intermediatePublisher = PassthroughSubject<Response, Never>()

    return intermediatePublisher
      .reduce([Item](), { allItems, response in
        return response.items + allItems
      })
      .handleEvents(receiveSubscription: { [weak self] _ in
        self?.performPageLoad(using: intermediatePublisher)
      })
      .eraseToAnyPublisher()
  }
}

As you can see, I no longer have any publishers defined as properties of RecursiveLoader. Instead, loadPages() now returns an AnyPublisher<[Item], Never> that I can subscribe to directly, which is much cleaner. Inside loadPages() I create a publisher that the performPageLoad(using:) method uses to push new responses. The loadPages() method returns this intermediate publisher, but applies a reduce to it to collect all intermediate responses and create an array of items.

I also use the handleEvents() function to hook into receiveSubscription. This allows me to kick off the page loading as soon as the publisher returned by loadPages() is subscribed to. By doing this, users of loadPages() don't have to kick off any loading manually, and they can't forget to subscribe before starting the loading process like they could in my initial attempt.

The performPageLoad(using:) method takes a PassthroughSubject<Response, Never> as its argument. Inside of this method, I call loadPage() and subscribe to its result. I then send the received result using the received subject and complete it if there are no more pages to load. If there are more pages to load, I call performPageLoad(using:) again and pass the same subject along so that the next call will also publish its result on the same passthrough subject, allowing me to reduce it into my collection of items.

Using this approach looks exactly as I wanted:

let networking = RecursiveLoader()
networking.loadPages()
  .sink(receiveCompletion: { _ in
    // handle errors
  }, receiveValue: { items in
    print(items)
  })
  .store(in: &cancellables)

There are still some things I'm not entirely happy with in this implementation. For example, performPageLoad(using:) must emit its values asynchronously. For an implementation like this where you rely on the network that's not a problem. But if you modify my loadPage() method and remove the delay that I added before completing my Future, you'll find that a number of items are dropped because the PassthroughSubject didn't forward them into the reduce; the publisher created by loadPage() wasn't set up just yet. The reason for this is that receiveSubscription is called just before the subscription is completely set up and established.
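
To see this for yourself, you could make loadPage() emit synchronously; this hypothetical variant, which replaces the Future with a Just, reproduces the problem:

// Hypothetical synchronous variant of loadPage(), used only to demonstrate the issue
private func loadPage() -> AnyPublisher<Response, Never> {
  requestsMade += 1
  let response = Response(hasMorePages: requestsMade < 5)

  // Just emits immediately on subscription, before the PassthroughSubject created in
  // loadPages() has fully established its own subscription, so responses can be lost
  // before they ever reach the reduce.
  return Just(response).eraseToAnyPublisher()
}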

Additionally, I subscribe to the publisher created by loadPage() in performPageLoad(using:) which is also not ideal, but doesn't directly harm the implementation.

Luckily, we can do better.

Attempt three: community help

After publishing the initial version of this article, a reader reached out to me with a very clean, and in hindsight, obvious solution to this problem that fixes both issues I had with my own second attempt. This solution gets rid of the need to subscribe to the publisher created in loadPage() entirely and also ensures that no matter how loadPage() generates its result, all results are always collected and forwarded.

To make this solution work, the RecursiveLoader skeleton needs to be modified slightly compared to my earlier version:

struct Response {
  var hasMorePages = true
  var items = [Item(), Item()]
  var nextPageIndex = 0
}

class RecursiveLoader {
  init() { }

  private func loadPage(withIndex index: Int) -> AnyPublisher<Response, Never> {
    // this would be the individual network call
    Future { promise in
      DispatchQueue.global().asyncAfter(deadline: .now() + 0.1) {
        let nextIndex = index + 1
        if nextIndex < 5 {
          return promise(.success(Response(nextPageIndex: nextIndex)))
        } else {
          return promise(.success(Response(hasMorePages: false)))
        }
      }
    }.eraseToAnyPublisher()
  }

The loader no longer tracks the number of requests it has made. The loadPage() method is now loadPage(withIndex:). This index represents the page that should be loaded. In this case I want to load 5 pages and then complete the chain. The Response object now has a nextPageIndex that's used to represent the next index that should be loaded. So in this case I will start with an index of 0 and create new Response objects until I reach index 4 which is the fifth page because I started counting at 0.

The loadPages() method still does all of the work, but it's modified as follows:

func loadPages() -> AnyPublisher<[Item], Never> {
  let pageIndexPublisher = CurrentValueSubject<Int, Never>(0)

  return pageIndexPublisher
    .flatMap({ index in
      return self.loadPage(withIndex: index)
    })
    .handleEvents(receiveOutput: { (response: Response) in
      if response.hasMorePages {
        pageIndexPublisher.send(response.nextPageIndex)
      } else {
        pageIndexPublisher.send(completion: .finished)
      }
    })
    .reduce([Item](), { allItems, response in
      return response.items + allItems
    })
    .eraseToAnyPublisher()
}

Inside loadPages() a CurrentValueSubject is used to drive the loading of pages. Since we want to start loading pages when somebody subscribes to the publisher created by loadPages(), a CurrentValueSubject makes sense because it emits its current (initial) value once it receives a subscriber. The publisher returned by loadPages() applies a flatMap to pageIndexPublisher. Inside of the flatMap, the page index emitted by pageIndexPublisher is used to create a new loadPage publisher that will load the page at a certain index. After the flatMap, handleEvents(receiveOutput:) is used to determine whether the nextPageIndex should be sent through the pageIndexPublisher or if the pageIndexPublisher should be completed. When the nextPageIndex is emitted by the pageIndexPublisher, this triggers another call to loadPage(withIndex:) in the flatMap.

Since we still use a reduce after handleEvents(receiveOutput:), all results from the flatMap are still collected, and an array of Item objects is emitted when pageIndexPublisher completes.

I can imagine that this is slightly mind-bending, so let's go through it step by step.

When the publisher that's returned by loadPages() receives a subscriber, pageIndexPublisher immediately emits its initial value: 0. This value is transformed into a publisher inside flatMap by returning a publisher created by loadPage(withIndex:). The loadPage(withIndex:) method fakes a network request and produces a Response value.

This Response is passed to handleEvents(receiveOutput:), where it's inspected to see if there are more pages to be loaded. If more pages need to be loaded, pageIndexPublisher emits the index for the next page which will be forwarded into flatMap so it can be converted into a new network call. If there are no further pages available, the pageIndexPublisher sends a completion event.

After the Response is inspected by handleEvents(receiveOutput:), it is forwarded to the reduce where the Response object's item property is used to build an array of Item objects. The reduce will keep collecting items until the pageIndexPublisher sends its completion event.
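
For completeness, the call site is unchanged from my original goal. With the fake responses above, a quick usage sketch like this would print 10 items (5 pages of 2 items each):

var cancellables = Set<AnyCancellable>()

let loader = RecursiveLoader()
loader.loadPages()
  .sink(receiveValue: { items in
    print("loaded \(items.count) items") // loaded 10 items
  })
  .store(in: &cancellables)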

In Summary

This blog post was a fun one to write. Especially because I didn't know I was going to write it until I did. I hope I've been able to give you a glimpse into the thought process that I use when I design and implement solutions to complicated problems. By coming up with an ideal call site first, I usually already get a good sense of what my implementation should look like. And by throwing that ideal implementation aside for a moment and getting something that works first, I always get a good sense of what works and what doesn't without worrying too much about the result.

If you have any questions or feedback for me, don't be scared and send me a message on Twitter.

I did not mention who gave me the tip to use a CurrentValueSubject and a flatMap in my solution because they preferred to remain anonymous.

What’s the difference between Float and Double in Swift?

Double and Float are both used to represent decimal numbers, but they do so in slightly different ways.

If you initialize a decimal number in Swift using a literal as shown below, the Swift compiler will assume that you meant to create a Double:

let val = 3.123 // val is inferred to be Double

The reason for this is that Double is the more precise type when compared to Float. A Float uses 32 bits of memory, which gives it roughly 7 significant decimal digits of precision. A Double uses 64 bits, which gives it roughly 15 to 16 significant decimal digits. In practice, that means the following:

print(Double.pi) // 3.141592653589793
print(Float.pi) // 3.1415925

As you can see, Double can represent pi far more accurately than Float. We can make the difference more obvious if we multiply pi by 1000:

print(Double.pi * 1000) // 3141.592653589793
print(Float.pi * 1000) // 3141.5925

Both Double and Float sacrifice some of their decimal precision when you multiply pi by 1000. For Float this means that only four decimal places remain, while Double still has twelve.
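
If you do want a Float, you can make the type explicit; for example:

let explicitFloat: Float = 3.123 // annotated, so the literal becomes a Float
let converted = Float(val)       // converts the Double from before, losing some precision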

Swift Property Wrappers Explained

Property wrappers were introduced in Swift 5.1, and they play a huge role in SwiftUI and Combine, the two frameworks that shipped alongside Swift 5.1 in iOS 13. The community was quick to create some useful examples, and they were embraced by folks relatively quickly.

As a user of property wrappers, you don't need to be concerned about what they are exactly, or how they work. All that you need to know is how you can use them. However, if you're curious how property wrappers work on the inside, this is just the post for you.

This week I would like to take a deep dive into property wrappers and take you on a journey to see how they work exactly.

Property wrappers look a lot like macros, but they're not the same thing. If you'd like to learn more about their differences, take a look at my comparison post here.

Why do we need property wrappers?

The reason that the Swift team wanted to add property wrappers to the Swift language is to help facilitate common patterns that are applied to properties all the time. If you've ever marked a property as lazy, you've used such a pattern. The Swift compiler knows how to handle lazy, and all the code needed to expand your lazy keyword into code that actually makes your property lazy is hardcoded into the compiler.

Since there are many more of these patterns that can be applied to properties, hardcoding all of them wouldn't make sense. Especially since one of the goals of property wrappers is to allow developers to provide their own patterns in the form of property wrappers.

Let's back up and zoom in on the lazy keyword. I just mentioned that the compiler expands your code into other code that makes your property lazy. Let's take a look at what this would look like if we didn't have the lazy keyword, and we'd have to write a lazy property ourselves.

This example is taken directly from the Swift evolution proposal and slightly modified for readability:

struct MyObject {
  private var _myProperty: Int?

  var myProperty: Int {
    get {
      if let value = _myProperty { return value }
      let initialValue = 1738
      _myProperty = initialValue
      return initialValue
    }
    set {
      _myProperty = newValue
    }
  }
}

Notice that this code is far more verbose than just writing lazy var myProperty = 1738.

Capturing a large number of these patterns in the compiler isn't desirable, and it's also not very extensible. The Swift team wanted to allow themselves, and developers, to use keywords to define their own patterns similar to lazy, to help clean up their code and make it more expressive.

Note that property wrappers do not allow developers to do otherwise impossible things. They merely allow developers to express patterns and intent using a more expressive syntax. Let's move on and look at an example.

Picking apart a property wrapper

A property wrapper that I use a lot is the @Published property wrapper from Combine. This property wrapper converts the property that it's applied to into a publisher that will notify subscribers when you change that property's value. This property wrapper is used like this:

class MyObject {
  @Published var myProperty = 10
}

Fairly straightforward, right?

To access the created publisher I need to use a $ prefix on myProperty. So $myProperty. The myProperty property points to the underlying value which is an Int with a value of 10 by default in this case. There's also a second prefix that can be applied to a property wrapper which is _, so _myProperty. This is a private property so it can only be accessed from within MyObject in this case but it tells us a lot about how property wrappers work. In the case of the MyObject example above, _myProperty is a Published<Int>. $myProperty is a Published.Publisher and myProperty is an Int. So that single line of code results in three different kinds of properties we can access. Let's define a custom property wrapper and find out what each of these three properties is, and what it does.

@propertyWrapper
struct ExampleWrapper<Value> {
  var wrappedValue: Value
}

This property wrapper is very minimal, and not very useful at all. However, it's good enough for us to explore the anatomy of a property wrapper.

First, notice that the ExampleWrapper struct has an annotation on the line before its definition: @propertyWrapper. This annotation means that the struct that's defined after it is a property wrapper. Also note that the ExampleWrapper is generic over Value. This Value is the type of the wrappedValue property.

Property wrappers don't have to be generic. You can hardcode the type of wrappedValue if you want your property wrapper to only work with a specific type. Alternatively, you can constrain the type of Value if needed.
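
For example, both of these hypothetical wrappers are valid:

@propertyWrapper
struct IntWrapper {
  var wrappedValue: Int // hardcoded to Int, no generics involved
}

@propertyWrapper
struct NumericWrapper<Value: Numeric> {
  var wrappedValue: Value // works for any Numeric type
}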

The wrappedValue property is required for a property wrapper. All property wrappers must have a non-static property called wrappedValue.

Let's put this ExampleWrapper to work:

class MyObject {
  @ExampleWrapper var myProperty = 10

  func allVariations() {
    print(myProperty)
    //print($myProperty)
    print(_myProperty)
  }
}

let object = MyObject()
object.allVariations()

Notice that I have commented out $myProperty. I will explain why in a moment.

When you run this code, you would see the following printed in Xcode's console:

10
ExampleWrapper<Int>(wrappedValue: 10)

myProperty still prints as 10. Accessing the property that's marked with a property wrapper directly will print the wrappedValue property of the property wrapper. When you print _myProperty, you access the property wrapper object itself. Notice that _myProperty is a member of MyObject. You can type self._myProperty and Swift will know what to do, even though you never explicitly defined _myProperty yourself. I mentioned earlier that _myProperty is private so you can't access it from outside of MyObject but it's there.

The reason is that the Swift compiler will take that @ExampleWrapper var myProperty = 10 line and convert it into something else behind the scenes:

private var _myProperty: ExampleWrapper<Int> = ExampleWrapper<Int>(wrappedValue: 10)

var myProperty: Int {
  get { return _myProperty.wrappedValue }
  set { _myProperty.wrappedValue = newValue }
}

There are two things that we can learn from this example. First, you can see that property wrappers really aren't magic. They are actually relatively straightforward. This doesn't make them simple, or easy, but once you know that a single definition is expanded into two separate definitions, property wrappers suddenly become a lot easier to reason about.

The _myProperty isn't some kind of magic value. It's a real member of MyObject that's created by the compiler. And myProperty returns the value of wrappedValue because it's hardcoded that way. Not by us, but by the compiler.

The _myProperty property is called the synthesized storage property. It's where the property wrapper that provides the storage for the wrapped value lives.

So where's $myProperty?

Not all property wrappers come with a $ prefixed flavor. The $ prefixed version of a wrapped property is called the projected value. The projected value can be useful to provide a special or different interface for a specific property wrapper, like @Published does for example. To add a projected value to a property wrapper, you must implement a projectedValue property on the property wrapper definition.

In ExampleWrapper this would look as follows:

@propertyWrapper
struct ExampleWrapper<Value> {
  var wrappedValue: Value

  var projectedValue: Value {
    get { wrappedValue }
    set { wrappedValue = newValue }
  }
}

This example isn't useful at all; I will show you a more useful example in the next section. For now, I want to show you the anatomy of a property wrapper without any bells and whistles.

If you'd use this property wrapper like before, Swift will generate the following code for you:

private var _myProperty: ExampleWrapper<Int> = ExampleWrapper<Int>(wrappedValue: 10)

var myProperty: Int {
  get { return _myProperty.wrappedValue }
  set { _myProperty.wrappedValue = newValue }
}

var $myProperty: Int {
  get { return _myProperty.projectedValue }
  set { _myProperty.projectedValue = newValue }
}

An extra property is created that uses the private _myProperty's projectedValue for its get and set implementations.

Since _myProperty is private, your projected value might provide direct access to the property wrapper, which is one of the examples shown in the original property wrapper proposal. Alternatively, you could expose a completely different object as your property wrapper's projected value. It's up to you to make this choice. The @Published property wrapper uses its projectedValue to expose a publisher.
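
For example, a minimal sketch of a wrapper that projects itself, so callers can reach the wrapper's own API through the $ prefix, could look like this:

@propertyWrapper
struct SelfProjecting<Value> {
  var wrappedValue: Value

  // Projecting the wrapper itself exposes its full API through the $ prefix
  var projectedValue: SelfProjecting<Value> { self }
}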

Implementing a property wrapper

I have already shown you how to define a simple property wrapper, but let's be honest. That example was boring and kind of bad. In this section, we'll look at implementing a custom property wrapper that mimics the behavior of Combine's @Published property wrapper. If you want to learn about Combine I have several posts about it in the Combine section of this blog.

Let's define the basics first:

@propertyWrapper
struct DWPublished<Value> {
  var wrappedValue: Value
}

This defines a property wrapper that wraps any kind of value. That's good. The goal here is to implement a projected value that exposes some kind of publisher. I will use a CurrentValueSubject for this. Whenever wrappedValue gets a new value, the CurrentValueSubject should emit a new value to its subscribers. A basic implementation might look like this:

@propertyWrapper
class DWPublished<Value> {
  var wrappedValue: Value {
    get { subject.value }
    set { subject.value = newValue }
  }

  private let subject: CurrentValueSubject<Value, Never>

  var projectedValue: CurrentValueSubject<Value, Never> {
    get { subject }
  }

  init(wrappedValue: Value) {
    self.subject = CurrentValueSubject(wrappedValue)
  }
}

Warning:
This implementation is very basic and should not be used as a reference for how @Published is actually implemented. I'm sure there might be bugs with this code. My goal is to help you understand how property wrappers work. Not to show you a perfect custom @Published property wrapper.

This code is vastly different from what you've seen before. The wrappedValue property uses the private subject to implement its get and set. This means that the wrapped value is always in sync with the subject's current value.

The projectedValue only has its get specified. We don't want users of this property wrapper to assign anything to projectedValue; it's read-only.

When a property wrapper is initialized in its simplest form, it receives its wrapped value. The wrapped value passed to DWPublished is used to set up the subject with the value we're supposed to wrap as its initial value.

Using this property wrapper would look like this:

class MyObject {
  @DWPublished var myValue = 1
}

let obj = MyObject()
let cancellable = obj.$myValue.sink(receiveValue: { int in
  print("received int: \(int)")
})

obj.myValue = 2

The printed output for this example would be:

received int: 1
received int: 2

Pretty neat, right?

Since the property wrapper's projected value is a CurrentValueSubject, it has a value property that we can assign values to. If I'd do this, the property wrapper's wrappedValue is also updated because the CurrentValueSubject is used to drive the wrappedValue of my property wrapper.

obj.$myValue.value = 3
print(obj.myValue) // 3

This is something that's not possible with the @Published property wrapper because Apple exposes the projectedValue for @Published as a custom type called Published.Publisher instead of a CurrentValueSubject.

A more complicated property wrapper might take some kind of configuration, like a maximum or minimum value. Let's say I want to expand my @DWPublished property wrapper to limit its output by debouncing it. I would like to write the following code in MyObject to configure this:

class MyObject {
  @DWPublished(debounce: 0.3) var myValue = 1
}

This would debounce my published values by 300 milliseconds. We can update the initializer for DWPublished to accept this argument, and refactor the code a little bit:

@propertyWrapper
class DWPublished<Value> {
  var wrappedValue: Value {
    get { subject.value }
    set { subject.value = newValue }
  }

  private let subject: CurrentValueSubject<Value, Never>
  private let publisher: AnyPublisher<Value, Never>

  var projectedValue: AnyPublisher<Value, Never> {
    get { publisher }
  }

  init(wrappedValue: Value, debounce: DispatchQueue.SchedulerTimeType.Stride) {
    self.subject = CurrentValueSubject(wrappedValue)
    self.publisher = self.subject
      .debounce(for: debounce, scheduler: DispatchQueue.global())
      .eraseToAnyPublisher()
  }
}

The initializer for my property wrapper now accepts the debounce interval and uses this interval to create an all-new publisher that debounces my CurrentValueSubject. I erase this publisher to AnyPublisher so I have a nice type for my publisher instead of Publishers.Debounce<CurrentValueSubject<Value, Never>, S> where S : Scheduler which would be the type of my publisher if I didn't erase it.

My property wrapper's wrappedValue still shadows subject.value. The projectedValue now uses the debounced publisher for its get instead of the CurrentValueSubject.

Using this property wrapper now looks as follows:

class MyObject {
  @DWPublished(debounce: 0.3) var myValue = 1
}

var cancellables = Set<AnyCancellable>()

let obj = MyObject()
obj.$myValue
  .sink(receiveValue: { int in
    print("received int: \(int)")
  })
  .store(in: &cancellables)

obj.myValue = 2
obj.myValue = 3
obj.myValue = 4

If you would run this in a playground, only received int: 4 should be printed. That's the debouncer at work and it's exactly what I wanted to happen.

Note that because the property wrapper's projected value is now an AnyPublisher, it's no longer possible to assign new values using $myValue.value like we could before.

In summary

In this week's post, you saw how property wrappers work internally, and what happens when you use a property wrapper. I showed you that the Swift compiler generates code on your behalf and that a property wrapper is far from magic. Swift generates a _ prefixed private property that's an instance of your property wrapper, a $ prefixed property that shadows the private property's projectedValue property, and that the original property shadows the wrappedValue property of the _ prefixed private property.

Once you understand this, you can quickly see how property wrappers work and how they might be implemented. I demonstrated this by implementing my own version of Combine's @Published property wrapper. After that, I showed you how to create a property wrapper that can be configured with extra arguments by expanding your property wrapper's initializer.

I hope that you have a much clearer picture of how property wrappers work now and that using them feels less like magic. For questions or feedback, I would love to hear from you on Twitter.

Five tips to help you become a well-rounded developer

This week I wanted to write about something non-technical. And while the topic of this week's post isn't a technical one, I think it's an important topic for developers who want to expand their knowledge, and deepen their skills.

I have been a developer professionally for more than ten years at this point, and in those ten years there are some fundamental lessons I have learned that I believe have helped me get to where I am today.

In this week's post, I will share five tips with you that have made me into the developer I am today, and I strongly believe that these tips can help you become a more well-rounded developer.

1. Read books

Some of my fundamental knowledge and programming principles come from books like The Pragmatic Programmer and Design Patterns. Neither of these books is about Swift or iOS development. In fact, they predate iOS and Swift by many years. However, these books contain fundamental knowledge and experiences that apply to virtually every project you will work on.

The books I have mentioned specifically will not teach you how to do X or Y. Instead, they contain tons of lessons, examples, principles and information that can help you see programming in the bigger picture. These books can help you put what we're doing today in Swift in a much broader perspective.

And even if you pick up a book about Swift development, like the ones Paul Hudson writes, or one of the books from objc.io or Ray Wenderlich, you will grow as a developer. By reading a complete book from an author you don't just learn the skills they teach. A good book about programming will also give you a glimpse inside of the author's brain. You will learn how they think about programming, and why they teach the code the way they do. Daniel Steinberg is an author who, in my opinion, is exceptionally good at this.

Over the years I have read dozens of programming books and the one thing that they all had in common is that I learned something more than just the concepts I read about. I learned new ways of thinking. I learned new ways to approach problems. And, equally important, my knowledge on the topic I read about got a huge boost.

2. Don't rely on tutorials all the time

Tutorials are a fantastic way to explore something new. Or to learn if you're just getting started. But there comes a point where following tutorial after tutorial won't teach you a lot anymore.

Where books will often help you understand something in depth, or help you see the bigger picture, tutorials are often much shorter publications that simply show you how to get the job done.

Tutorials often don't go too deep into any trade-offs, the reasoning behind certain decisions or how to apply the skills learned in a different context. The tutorial starts with a cool example, the code needed to implement the example is shown, and in the end you have replicated the example 1:1.

This is great because it gives you a sense of accomplishment, but how often do you come out of a tutorial having actually learned something new? Of course, if you're just starting out you will gain tons from tutorials. But there comes a moment where tutorials don't introduce you to any new tools, or where they don't introduce you to new language constructs anymore.

If you feel that this is the case for you, it's time to stop relying on tutorials. It's time to move on and learn how to build those cool tutorial demos on your own. Just follow the occasional tutorial for fun, or to figure out how something specific was done, before you move on and explore the how and why on your own.

This segues nicely into my next tip:

3. Learn to navigate the documentation

Once you stop relying on tutorials all the time, you will need a new way to learn. Picking up a book every time you want to learn something isn't very practical, so luckily there is an alternative: documentation.

Granted, Apple's documentation for iOS hasn't been great the past couple of years. But regardless, learning how to navigate the documentation, or even learning how to browse the headers for Apple's frameworks can be a huge boost to discovering and learning new frameworks and technologies.

Once you understand how you can learn and read the documentation for the frameworks you use, you will feel far more confident about the products you're building. Instead of just replicating what you saw in a tutorial, you will be able to look up the frameworks and APIs used in a tutorial, and you will be able to reason about the choices that were made.

4. Participate in the community

There's a community around virtually every framework, programming language or tool. Participating in these communities can be scary, but from participating you can gain a ton.

Simply reading along on a community forum like the Swift forums, or being a member of a Slack group like the iOS Developers Slack, and reading some of the conversations that go on in there can be eye opening. You don't have to actively participate immediately, but just from soaking up the knowledge that flows within these communities you can grow so much.

When I first started reading along with the Swift mailing list, which later moved to the Swift forums, I didn't understand half of the conversations that went on. And to be quite frank, there are still a lot of comments and explanations I don't fully understand all of the time. Either way, seeing what goes on and learning from it is a growing experience for me.

Slowly but surely my confidence has built over the years to the point where I enjoy participating in communities all the time. It's why I write this blog, and it's why I'm a workspace admin for the iOS developers group. Helping people figure things out isn't just fun. It's also a great way for me to learn.

So by passively reading conversations I don't understand well enough to participate in, and by actively helping community members where I can, I have learned more than I could have in any other way. I strongly recommend finding the community for your favorite framework, programming language or tool and starting to participate. Maybe just read along at first, and at some point you will have the confidence to start contributing. I know you will.

5. Step outside of your comfort zone

The last tip on my list flows nicely from the previous tip. Participating in a community might mean you have to step outside of your comfort zone. But that's not why I put this tip on the list.

What I mean by stepping outside of your comfort zone is that I think it's very beneficial for you as a developer to try new things. If you're an iOS developer, try building something that runs on the server. Maybe use Swift, or maybe learn a different language to do it, like Python. If you're a JavaScript developer, try working on an app sometime.

By taking a look at new programming languages and platforms, you will learn more about the worlds you exist in as a developer. And more importantly, you will understand your (future) coworkers much better if you have an idea of what they work with.

If you've learned to step outside of your comfort zone, and learned to develop in more than a single language and/or platform, you will feel much more confident. And questions like "Will iOS development still be in demand in a couple of years" suddenly aren't as scary. You'll know how to adapt. You've learned to be flexible.

Bonus tip: Go beyond development

Once you're proficient at writing code and building apps, it's time to start focusing on everything that happens around development. Try to gain a deeper understanding of the company you work for; it will help you understand the work you do much better, which in turn means that you'll be able to make much better choices while coding. Maybe you can try mentoring or supporting junior developers around you or within your company. Helping others grow and learn is a fantastic way to solidify your own understanding of topics, as well as to think about the way you work in a different light.

Just to clarify, I'm not saying go into management. Not even close. Of course, management is a viable career path but if you want to grow beyond senior levels of development you'll need to do more than just work on tickets you're assigned.

Some of the best developers I've worked with hardly stood out due to their coding skills (even when those skills were amazing). What stood out about these developers is that they had tons of connections in the workplace. They knew loads of people, and they knew exactly who they needed to go to with which questions. And people also knew exactly what to come to these skilled developers for. Whenever tickets were shown during a backlog refinement session, they would be able to discuss these tickets in far greater detail than just the scope of the ticket. They knew what was going on in the company, where it was headed, and where it was coming from.

Knowing a lot about the context of your work, and having a solid network can really be a superpower as a developer.

In summary

Everybody's journey towards becoming a developer is different. For some it's a long and hard journey, while others are lucky enough to come from a place where learning how to program was made easy. But there's one thing we all have in common. We're all constantly learning, growing and evolving.

In this week's post I have shared five tips with you. These are the tips I consider important because following them has helped me grow, learn, and evolve into the developer I am today.

That said, I don't believe these five tips are the magical five tips that will help everybody. These are just my five tips, and I hope that they are useful to you. And if they're not, I hope you can find your own five tips. We're all different, and that means that different things work for everybody.

What’s the difference between catch and replaceError in Combine?

There are several ways to handle errors in Combine. Most commonly you will either use catch or replaceError if you want to implement a mechanism that allows you to recover from an error. For example, catch is useful if you want to retry a network operation with a delay.

The catch and replaceError operators look very similar at first glance. They are both executed when an error occurs in your pipeline, and they allow you to recover from an error. However, their purposes are very different.

When to use catch

The catch operator is used if you want to inspect the error that was emitted by an upstream publisher, and replace the upstream publisher with a new publisher. For example:

let practicalCombine = URL(string: "https://practicalcombine.com")!
let donnywals = URL(string: "https://donnywals.com")!

var cancellables = Set<AnyCancellable>()

URLSession.shared.dataTaskPublisher(for: practicalCombine)
  .catch({ urlError in
    return URLSession.shared.dataTaskPublisher(for: donnywals)
  })
  .sink(receiveCompletion: { completion in
    // handle completion
  }, receiveValue: { value in
    // handle response
  })
  .store(in: &cancellables)

In this example I replace any errors emitted by my initial data task publisher with a new data task publisher. Depending on your needs and the emitted error, you can return any kind of publisher you want from your catch. The only thing you need to keep in mind is that the publisher you create must have the same Output as the publisher that catch is applied to. So in this case it needs to be a publisher that matches a data task publisher's Output.

Note that you cannot throw errors in catch. You must always return a valid publisher. If you only want to create a new publisher for a specific error, and otherwise forward the thrown error, you can use tryCatch which allows you to throw errors.
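
As a rough sketch, reusing the URLs from the example above and assuming we only want to recover when the device appears to be offline, tryCatch could be used like this:

URLSession.shared.dataTaskPublisher(for: practicalCombine)
  .tryCatch({ urlError -> URLSession.DataTaskPublisher in
    // only recover when the device is offline, rethrow everything else
    guard urlError.code == .notConnectedToInternet else {
      throw urlError
    }

    return URLSession.shared.dataTaskPublisher(for: donnywals)
  })
  .sink(receiveCompletion: { completion in
    // handle completion and errors
  }, receiveValue: { value in
    // handle response
  })
  .store(in: &cancellables)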

When to use replaceError

The replaceError operator is slightly simpler than the catch operator. With replaceError you can provide a default value that's used to replace any thrown error from upstream publishers. Note that this operator changes your Failure type to Never because with this operator in place, it will become impossible for your pipeline to fail. This is different from catch because the publisher you create in the catch operator might still fail.

Let's look at an example of replaceError:

enum MyError: Error {
  case failed
}

var cancellables = Set<AnyCancellable>()
var subject = PassthroughSubject<Int, Error>()

subject
  .replaceError(with: 42)
  .sink(receiveCompletion: { completion in
    print(completion)
  }, receiveValue: { int in
    print(int)
  })
  .store(in: &cancellables)

subject.send(1)
subject.send(2)
subject.send(completion: .failure(MyError.failed))

If you execute this code in a Playground you'll find that the console will contain the following output:

1
2
42
finished

The first two values are sent explicitly by calling send. The third value is the result of replacing the error I sent as the last value. Note that the publisher completes successfully immediately after. The upstream publisher completes as soon as it emits an error; it's one of the rules of Combine that publishers can only complete once, and they do so either with an error or with a finished event.

Note that in catch, the publisher that emitted the error that triggered the catch completes when it emits an error. The publisher you return from catch does not have to complete immediately and it replaces the failed publisher completely. So your sink could receive several values after the source publisher failed because the replacement publisher is still active.

Retrying a network request with a delay in Combine

Combine comes with a handy retry operator that allows developers to retry an operation that failed. This is most typically used to retry a failed network request. As soon as the network request fails, the retry operator will resubscribe to the DataTaskPublisher, kicking off a new request hoping that the request will succeed this time. When you use retry, you can specify the number of times you want to retry the operation to avoid endlessly retrying a network request that will never succeed.

While this is great in some scenarios, there are also cases where this behavior is not what you want.

For example, if your network request failed due to being rate limited or the server being too busy, you should probably wait a little while before retrying your network call since retrying immediately is unlikely to succeed anyway.

In this week's post you will explore some options you have to implement this behavior using nothing but operators and publishers that are available in Combine out of the box.

Implementing a simple retry

Before I show you the simplest retry mechanism with a delay I could come up with, I want to show you what an immediate retry looks like, since I'll be using that as the starting point for this post:

var cancellables = Set<AnyCancellable>()

let url = URL(string: "https://practicalcombine.com")!
let dataTaskPublisher = URLSession.shared.dataTaskPublisher(for: url)

dataTaskPublisher
  .retry(3)
  .sink(receiveCompletion: { completion in
    // handle errors and completion
  }, receiveValue: { response in
    // handle response
  })
  .store(in: &cancellables)

This code will fire a network request, and if the request fails it will be retried three times. That means that at most we'd make this request 4 times in total (once for the initial request and then three more times for the retries).

Note that a 404, 501 or any other error status code does not count as a failed request in Combine. The request made it to the server and the server responded. A failed request typically means that the request wasn't executed because the device making the request is offline, the server failed to respond in a timely manner, or any other reason where we never received a response from the server.

For all of these cases it probably makes sense to retry the request immediately. But how should an HTTP status code of 429 (Too Many Requests / Rate Limit) or 503 (Server Busy) be handled? These will be seen as successful outcomes by Combine so we'll need to inspect the server's response, raise an error and retry the request with a couple of seconds delay since we don't want to make the server even busier than it already is (or continue hitting our rate limit).

The first step doesn't really have anything to do with our simple retry yet but it's an important prerequisite. We'll need to extract the HTTP status code from the response we received and see if we should retry the request. For simplicity I will only check for 429 and 503 codes. In your code you'll probably want to check which status codes can be returned by your API and adapt accordingly.

enum DataTaskError: Error {
  case invalidResponse, rateLimitted, serverBusy
}

let dataTaskPublisher = URLSession.shared.dataTaskPublisher(for: url)
  .tryMap({ response -> (data: Data, response: URLResponse) in
    // just so we can debug later in the post
    print("Received a response, checking status code")

    guard let httpResponse = response.response as? HTTPURLResponse else {
      throw DataTaskError.invalidResponse
    }

    if httpResponse.statusCode == 429 {
      throw DataTaskError.rateLimitted
    }

    if httpResponse.statusCode == 503 {
      throw DataTaskError.serverBusy
    }

    return response
  })

By applying a tryMap to the dataTaskPublisher we can get the response from the data task and check its HTTP status code. Depending on the status code I throw different errors. If the status code is not 429 or 503, it's up to the subscriber of dataTaskPublisher to handle any errors. Since this tryMap is fairly lengthy, I will omit the definition of the dataTaskPublisher and the DataTaskError enum in the rest of this post and instead just refer to dataTaskPublisher.

Now that we have a publisher that fails when we want it to fail we can implement a delayed retry mechanism.

Implementing a delayed retry

Since retry doesn't allow us to specify a delay, we'll need to come up with a clever solution. Luckily, I am not the first person to try to come up with something; Joseph Heck and Matt Neuburg both shared their approaches on Stack Overflow.

So why am I writing this if there's already something on Stackoverflow?

Well, neither of the solutions there is the solution. At least not in Xcode 11.5. Maybe they worked in an older Xcode version but I didn't check.

The general idea of their suggestions still stands though. Use a catch to capture any errors, and return the initial publisher with a delay from the catch. Then place a retry after the catch operator. That code would look a bit like this:

// This code is not the final solution
dataTaskPublisher
  .tryCatch({ error -> AnyPublisher<(data: Data, response: URLResponse), Error> in
    print("In the tryCatch")

    switch error {
    case DataTaskError.rateLimitted, DataTaskError.serverBusy:
      return dataTaskPublisher
        .delay(for: 3, scheduler: DispatchQueue.global())
        .eraseToAnyPublisher()
    default:
      throw error
    }
  })
  .retry(2)
  .sink(receiveCompletion: { completion in
    print(completion)
  }, receiveValue: { value in
    print(value)
  })
  .store(in: &cancellables)

The solution above uses the dataTaskPublisher from the earlier code snippet. I use a tryCatch to inspect any errors coming from the data task publisher. If the error matches one of the errors where I want to perform a delayed retry, I return the dataTaskPublisher with a delay applied to it. This will delay the delivery of values from the data task publisher that I return from tryCatch. I also erase the resulting publisher to AnyPublisher because it looks nicer.

Note that any errors emitted by dataTaskPublisher are now replaced by a new publisher that's based on dataTaskPublisher. These publishers are not the same publisher. The new publisher will begin running immediately and emit its output with a delay of 3 seconds.

This means that the publisher that has the delay applied will delay the delivery of both its success and failure values by three seconds.

When this second publisher emits an error, the retry will re-subscribe to the initial data task publisher immediately, kicking off a new network request. And this dance continues until retry was hit twice. With the code as-is, the output looks a bit like this:

Received a response, checking status code # 0 seconds in from the initial dataTaskPublisher
In the tryCatch # 0 seconds in from the initial dataTaskPublisher
Received a response, checking status code # 0 seconds in from the publisher returned by tryCatch
Received a response, checking status code # 3 seconds in from the initial dataTaskPublisher
In the tryCatch # 3 seconds in from the publisher returned by tryCatch
Received a response, checking status code # 3 seconds in from the publisher returned by tryCatch
Received a response, checking status code # 6 seconds in from the initial dataTaskPublisher
In the tryCatch # 6 seconds in from the publisher returned by tryCatch
Received a response, checking status code # 6 seconds in from the publisher returned by tryCatch

That's not exactly what we expected, right? This shows a total of six responses being received, double what we wanted. And more importantly, we make requests in pairs. The publisher that's created in the tryCatch executes immediately, but doesn't emit its values until 3 seconds later, which is why it takes three seconds for the initial dataTaskPublisher to fire again.
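
To see why, it helps to remember that delay(for:scheduler:) postpones the delivery of values and completion events; it does not postpone the moment the upstream publisher starts doing its work. Here's a minimal, self-contained sketch (using Deferred and Just purely for illustration, not code from this post) that demonstrates this:

import Combine
import Foundation

var cancellables = Set<AnyCancellable>()

Deferred { () -> Just<String> in
  // This runs as soon as the subscription is made
  print("Work started")
  return Just("done")
}
.delay(for: 3, scheduler: DispatchQueue.main)
.sink(receiveValue: { value in
  // The value arrives roughly 3 seconds after "Work started" is printed
  print("Received \(value)")
})
.store(in: &cancellables)

The publisher returned from the tryCatch behaves the same way: its network request starts immediately, and only the delivery of its output is postponed by three seconds.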

Let's see how we can fix this. First, I'll show you an interesting yet incorrect approach to implementing this.

An incorrect approach to a delayed retry

We can bring this down to a more sensible number of requests by applying the share() operator to the initial publisher. This makes it so that we execute the first data task only once:

dataTaskPublisher.share()
  .tryCatch({ error -> AnyPublisher<(data: Data, response: URLResponse), Error> in
    print("In the tryCatch")
    switch error {
    case DataTaskError.rateLimitted, DataTaskError.serverBusy:
      return dataTaskPublisher
        .delay(for: 3, scheduler: DispatchQueue.global())
        .eraseToAnyPublisher()
    default:
      throw error
    }
  })
  .retry(2)
  .sink(receiveCompletion: { completion in
    print(completion)
  }, receiveValue: { value in
    print(value)
  })
  .store(in: &cancellables)

By applying share() to the dataTaskPublisher a new publisher is created that will execute when it receives its initial subscriber and replays its results for any subsequent subscribers. In our case, this results in the following output:

Received a response, checking status code # 0 seconds in from the initial dataTaskPublisher
In the tryCatch # 0 seconds in from the initial dataTaskPublisher
Received a response, checking status code # 0 seconds in from the publisher returned by tryCatch
In the tryCatch # 3 seconds in from the publisher returned by tryCatch
Received a response, checking status code # 3 seconds in from the publisher returned by tryCatch
In the tryCatch # 6 seconds in from the publisher returned by tryCatch
Received a response, checking status code # 6 seconds in from the publisher returned by tryCatch

We're closer to the desired outcome but not quite there yet. Now that we use a shared publisher as the initial publisher, resubscribing to it no longer executes a new data task, and the tryMap that we defined on the dataTaskPublisher earlier is no longer called. The result of the tryMap is cached by the share() and this cached result is immediately emitted when retry resubscribes. This means that share() will re-emit whatever error we received the first time it made its request.

This behavior will make it look like we're correctly retrying our request but there's actually a problem. Or rather, there are a couple of problems with this approach.

The retry operator in Combine will catch any errors that occur upstream and resubscribe to the pipeline so far. This means that any error that occurs above the retry will make us resubscribe to dataTaskPublisher.share(). In other words, the tryCatch that we have after dataTaskPublisher.share() will always receive the same error. So if the initial request failed because we were rate limited, and our retried request fails because we couldn't make a request at all, the tryCatch will still think we ran into a rate limit error and retry the request, even though the logic in the tryCatch says we want to throw an error if we encountered something other than DataTaskError.rateLimitted or DataTaskError.serverBusy.

And on top of that, when we encounter something other than DataTaskError.rateLimitted or DataTaskError.serverBusy we still hit our retry with an error. This means that we'll resubscribe to dataTaskPublisher.share(), hit the tryCatch, throw an error, and retry again until we've retried the specified number of times (2 in this example).

We should fix this so that:

  1. We always receive the current / latest error in the tryCatch.
  2. We don't retry when we caught a non-retryable error.

This means that we should get rid of the share() and actually run the network request when the retry resubscribes to dataTaskPublisher, while making sure we don't reintroduce the extra requests that we got rid of in the previous section.

A correct way to retry a network request with a delay

The first thing we should do in order to fix our retry mechanism is redefine how the dataTaskPublisher property is created. The changes we need to make are fairly small, but they have a large impact on our final result. As I mentioned in the previous section, retry will resubscribe to the upstream publisher whenever it encounters an error. This means that a failing network call would trigger our retry even though we only want to retry when we encounter an error that we consider worth retrying the call for. In this post I assume that we should retry for the "rate limited" and "server busy" status codes. Any other failure should not be retried.

To achieve this, we need to make the retry operator think that our network call always succeeds unless we encounter one of our retryable errors. We can do this by converting the network call's output to a Swift Result that has the data task publisher's output as its success value and Error as its failure. If the network call comes back with a retryable error, we'll throw an error from tryMap to trigger the retry. Otherwise, we'll return a Result that holds either an error or our output. This makes it look like everything went well so the retry doesn't trigger, but we'll still be able to extract errors later if needed.

Let's take a look at what this means for how the dataTaskPublisher is defined:

// networkCall stands in for the data task publisher
// (URLSession.shared.dataTaskPublisher(for: url)) we started with earlier
let dataTaskPublisher = networkCall
  .tryMap({ dataTaskOutput -> Result<URLSession.DataTaskPublisher.Output, Error> in
    print("Received a response, checking status code")

    guard let response = dataTaskOutput.response as? HTTPURLResponse else {
      return .failure(DataTaskError.invalidResponse)
    }

    if response.statusCode == 429 {
      throw DataTaskError.rateLimitted
    }

    if response.statusCode == 503 {
      throw DataTaskError.serverBusy
    }

    return .success(dataTaskOutput)
  })

If we were to erase this pipeline to AnyPublisher, we'd have the following type for our publisher: AnyPublisher<Result<URLSession.DataTaskPublisher.Output, Error>, Error>. The Error in the Result is what we'll use to send non-retryable errors down the pipeline. The publisher's error is what we'll use for retryable errors.
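
That type is a bit of a mouthful, so the final snippet later in this post refers to it through a DataTaskResult typealias:

typealias DataTaskResult = Result<URLSession.DataTaskPublisher.Output, Error>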

For example, I don't want to retry my network request when I receive an invalid response, so I map the data task output to .failure(DataTaskError.invalidResponse). This means the request won't be retried, but we can still extract and use the invalid response error after the retry.

When the request succeeds and we're happy, I return .success(dataTaskOutput) so I can extract and use the data task output later.

If a retryable error occurred, I throw an error so we can catch it later and set up our delayed retry in a similar fashion to what you've seen in the previous section:

dataTaskPublisher
  .catch({ (error: Error) -> AnyPublisher<Result<URLSession.DataTaskPublisher.Output, Error>, Error> in
    print("In the catch")
    switch error {
    case DataTaskError.rateLimitted,
         DataTaskError.serverBusy:
      print("Received a retryable error")
      return Fail(error: error)
        .delay(for: 3, scheduler: DispatchQueue.main)
        .eraseToAnyPublisher()
    default:
      print("Received a non-retryable error")
      return Just(.failure(error))
        .setFailureType(to: Error.self)
        .eraseToAnyPublisher()
    }
  })
  .retry(2)

Instead of a tryCatch I use a plain catch in this example; since every path in the closure returns a publisher and we no longer throw, tryCatch isn't needed. We want to catch any errors that originated from making the network request (for example if the request couldn't be made at all) or from the tryMap (if we encountered a retryable error).

In the catch I check whether we encountered one of the retryable errors. If we did, I create a publisher that immediately fails with the received error. I delay the delivery of this error by three seconds and erase to AnyPublisher so I have a consistent return type for my catch. This code path triggers the retry after three seconds, and because we no longer use share(), the retry resubscribes to dataTaskPublisher and executes the network call again.

If we encounter a non-retryable error, I return a Just publisher that will immediately emit a single value. Similar to the tryMap, I wrap this error in a Swift Result to make the retry think everything is fine because we don't emit an error from the Just publisher.

At this point, our pipeline will only emit an error if we encounter a retryable error. Any other errors are wrapped in a Result and sent down the pipeline as Output.

After the retry, we'll want to transform our Result back into regular output and failures so that errors arrive in our sink's receiveCompletion and receiveValue only receives successful output.

Here's how we can achieve this:

dataTaskPublisher
  .catch({ (error: Error) -> AnyPublisher<DataTaskResult, Error> in
    print("In the catch")
    switch error {
    case DataTaskError.rateLimitted,
         DataTaskError.serverBusy:
      print("Received a retryable error")
      return Fail(error: error)
        .delay(for: 3, scheduler: DispatchQueue.main)
        .eraseToAnyPublisher()
    default:
      print("Received a non-retryable error")
      return Just(.failure(error))
        .setFailureType(to: Error.self)
        .eraseToAnyPublisher()
    }
  })
  .retry(2)
  .tryMap({ result in
    // Unwrap the Result: return the success value, or throw the stored error
    return try result.get()
  })
  .sink(receiveCompletion: { completion in
    print(completion)
  }, receiveValue: { value in
    print(value)
  })
  .store(in: &cancellables)

By placing a tryMap after the retry we can grab our Result<URLSession.DataTaskPublisher.Output, Error> value and call try result.get() to either return the success case of our result, or throw the error in our failure case.

By doing this, we'll receive errors in receiveCompletion and receiveValue only receives successful values. This means we won't have to deal with the Result in our receiveValue.
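
If you haven't used Result's get() method before: it returns the wrapped success value, or throws the wrapped error, which is what makes it a good fit inside a tryMap. A quick standalone illustration:

struct SomeError: Error {}

let good: Result<Int, Error> = .success(1)
let bad: Result<Int, Error> = .failure(SomeError())

do {
  print(try good.get()) // prints 1
  print(try bad.get())  // throws SomeError, so we end up in the catch below
} catch {
  print("Caught \(error)")
}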

The output for this example would look like this:

Received a response, checking status code # 0 seconds after the initial data task
In the catch # 0 seconds after the initial data task
Received a retryable error # 0 seconds after the initial data task
Received a response, checking status code # 3 seconds after the initial data task
In the catch # 3 seconds after the initial data task
Received a retryable error # 3 seconds after the initial data task
Received a response, checking status code # 6 seconds after the initial data task
In the catch # 6 seconds after the initial data task
Received a retryable error # 6 seconds after the initial data task
failure(__lldb_expr_5.DataTaskError.rateLimitted) # 9 seconds after the initial data task

By delaying the delivery of certain errors, we can manipulate the start of the retried request. One downside is that if every request fails, we'll also delay the delivery of the final failure by the specified interval.

One thing I really like about this approach is that you can use different intervals for different errors. You can even have the server tell you how long to wait before retrying a request if it includes this information in an HTTP header or in the response body. The delay could be determined in the tryMap, where you have access to the HTTP response, and associated with your custom error case as an associated value.
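
To sketch that idea: the snippet below is a hypothetical variation on the earlier tryMap and catch, not code that appears elsewhere in this post. It assumes the server sends a Retry-After header, stores the delay as an associated value on the rateLimitted case, and uses it to time the retry.

// Hypothetical variant; the retryAfter associated value and the
// Retry-After header handling are assumptions for illustration only.
enum DataTaskError: Error {
  case invalidResponse
  case rateLimitted(retryAfter: TimeInterval)
  case serverBusy
}

let dataTaskPublisher = networkCall
  .tryMap({ dataTaskOutput -> Result<URLSession.DataTaskPublisher.Output, Error> in
    guard let response = dataTaskOutput.response as? HTTPURLResponse else {
      return .failure(DataTaskError.invalidResponse)
    }

    if response.statusCode == 429 {
      // Prefer the server-provided delay, fall back to 3 seconds
      let retryAfter = response.value(forHTTPHeaderField: "Retry-After")
        .flatMap { TimeInterval($0) } ?? 3

      throw DataTaskError.rateLimitted(retryAfter: retryAfter)
    }

    if response.statusCode == 503 {
      throw DataTaskError.serverBusy
    }

    return .success(dataTaskOutput)
  })

dataTaskPublisher
  .catch({ (error: Error) -> AnyPublisher<Result<URLSession.DataTaskPublisher.Output, Error>, Error> in
    switch error {
    case DataTaskError.rateLimitted(let retryAfter):
      // Fail after the server-provided delay so the retry fires at the right time
      return Fail(error: error)
        .delay(for: .seconds(retryAfter), scheduler: DispatchQueue.main)
        .eraseToAnyPublisher()
    case DataTaskError.serverBusy:
      return Fail(error: error)
        .delay(for: 3, scheduler: DispatchQueue.main)
        .eraseToAnyPublisher()
    default:
      return Just(.failure(error))
        .setFailureType(to: Error.self)
        .eraseToAnyPublisher()
    }
  })
  .retry(2)

The rest of the pipeline (the tryMap that unwraps the Result, and the sink) stays exactly the same as before.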

In summary

What started as a simple question ("How do I implement a delayed retry in Combine?") turned out to be quite an adventure for me. Every time I thought I had found a solution, like the ones I linked to from Stack Overflow, there was always something about it I didn't like. As it turns out, there is no quick and easy way in Combine to implement a delayed retry that only applies to specific errors. I even had to update this post months after writing it because Alex Grebenyuk pointed out some interesting issues with the solution this post initially proposed.

In this post you saw the various solutions I've tried, and why they were not to my liking. In the last section I showed you a tailor-made solution that delays the delivery of specific errors rather than delaying everything a retried publisher emits. The earlier approaches delayed all of their output, including a successful response after a failed initial request, by the specified interval. The final solution doesn't have that drawback, which, in my opinion, makes it much nicer than delaying everything.

I have made the code used in this post available as a GitHub gist here. You can paste it in a Playground and it should work immediately. The code is slightly modified to prove that network calls get re-executed, and I have replaced the network call with a Future so you have full control over the fake network call. To learn more about Combine's Future, you might want to read this post.

If you have your own solution for this problem and think it's more elegant, shorter, or better, then please do reach out to me on Twitter so I can update this post. I secretly hope that this post will be obsolete by the time WWDC 2020 comes along, but who knows. For now, I think this is the best we have.

Reclaim disk space by deleting old iOS simulators and Device Support files

After using a MacBook that runs Xcode for a few years, it's likely that your disk is starting to fill up. A large part of this disk space can be occupied by Device Support files that Xcode keeps around for older iOS versions, or by iOS simulators that are no longer available on your machine.

To clean these files up you can do the following:

  • Go to your Terminal and type open ~/Library/Developer/Xcode/iOS\ DeviceSupport
  • Delete folders for iOS versions that you no longer need to support.
  • Do the same with open ~/Library/Developer/Xcode/watchOS\ DeviceSupport
  • Clean up unavailable simulators by typing xcrun simctl delete unavailable in your Terminal

When I ran these commands on a machine that got a clean install when macOS Catalina came out, I was able to free up 15GB of disk space. That's 15GB of space after about 8 months of use. Pretty good, right?

Xcode 12 automatically helps cleaning up Device Support files

A very cool feature of Xcode 12 is that Xcode will track the device / iOS version combinations that you use and update the mtime (modification time) for each item in the Device Support directory accordingly. Any Device Support files that haven't been used for 180 days or more automatically become eligible for deletion by the system.

This means that macOS can automatically clean up old Device Support files after they haven't been used for 180 days. The system has full control over when exactly this cleanup takes place and the exact cleanup times are dependent on variables like system activity, available disk space, whether you're connected to power and more.

I think it's really cool that Xcode and macOS can now actively help you reclaim disk space by removing unused Device Support files. Of course, if you need space now, you'll still need to go in and manually delete Device Support files that you no longer need, but this feature should certainly put a cap on the number of old Device Support files that are kept around.

(Thanks to Olivier Halligon for telling me about this feature. And also thanks to Russ Bishop for telling Olivier about this.)

Throttle network speeds for a specific host in Charles

Sometimes you'll want to test whether your app works properly under poor networking conditions. One way to test this is with Apple's Network Link Conditioner. Unfortunately, this slows internet speeds for your entire machine to a crawl, which can be counterproductive, especially if you want to throttle your app for a longer period of time.

If you have Charles installed to debug your app's network traffic, you can use it to throttle network speeds for the entire system, or for a selection of hosts, which is exactly what we're looking for.

To enable throttling in Charles you can either go to Proxy -> Start Throttling or press cmd + T. This will turn on global throttling by default.

You can configure how Charles throttles, and for which hosts through Proxy -> Throttle Settings or by pressing cmd + shift + T. Make sure to check the Only for selected hosts checkbox if you want to configure which hosts should be throttled.

Throttle Settings window

Click the Add button to add a new host that should be throttled.

Use the bottom section of the Throttle Settings to configure how the network connection should be throttled. This configuration is applied to all throttled hosts.