First off, a word about definitions. In bcachefs, tiering is caching by another name: storage devices can be assigned to different tiers, and we can use a faster tier to cache a slower tier.

In some other storage systems, tiering means a setup where data can be dynamically moved between different tiers, but it usually implies that migration happens slowly in the background (not as data is accessed), and that a given chunk of data lives in only one tier at a time - when data is promoted to your fast tier, it's moved there; you aren't adding a cached copy.

That's not what bcachefs does. Bcachefs does caching in roughly the same way as bcache: we can promote data on read (that is, write a cached copy to a fast device without moving the data we read from), and we can have both dirty data and clean cached data on the fast device. In the background, we scan for dirty data on the fast device, write it to the slow device, and mark the copy on the fast device clean. Bcachefs also uses the same underlying mechanism for managing cached data - pointers with generation numbers - so that we can efficiently invalidate cached data (and efficiently work with a cache device that is always almost full of clean cached data).
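To make the generation-number trick concrete, here's a minimal sketch - the names and structures are invented for illustration, not the actual bcachefs data structures:

```c
#include <stdint.h>

/* Hypothetical sketch: each bucket on the cache device carries a
 * generation number, and each pointer into a bucket records the
 * generation it was created against. */
struct bucket { uint8_t gen; };
struct ptr    { uint32_t bucket; uint8_t gen; };

/* A cached pointer is stale (invalid) if the bucket has since been
 * reused, i.e. its generation was incremented. */
static int ptr_stale(const struct bucket *buckets, struct ptr p)
{
	return buckets[p.bucket].gen != p.gen;
}

/* Invalidating every cached pointer into a bucket is O(1):
 * just bump the bucket's generation. */
static void invalidate_bucket(struct bucket *buckets, uint32_t b)
{
	buckets[b].gen++;
}
```

This is why a cache device that's perpetually almost full of clean data is cheap to manage: reclaiming a bucket doesn't require finding and deleting every pointer into it.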

What's different?

In old style bcache, the extents btree only indexes cached data - there's no index for the backing device, which is direct mapped. In bcachefs, the extents btree indexes all data in the filesystem: extents have pointers to both the fast device and the slow device. This means extents can have multiple pointers - if we have some data on our slow device that also has a clean cached copy on the fast device, we'll have an extent with two pointers: one dirty pointer to the slow device and one cached pointer to the fast device. In other words, extents can have multiple replicas (this is also part of our replication support), and individual replicas can be either dirty or cached.
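Schematically, an extent with multiple pointers might look something like this - again, invented field names, not the on-disk format:

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_PTRS 4

/* Illustrative sketch of an extent with multiple replicas. */
struct extent_ptr {
	uint8_t  dev;     /* device index (e.g. 0 = fast tier, 1 = slow tier) */
	uint64_t offset;  /* location on that device */
	bool     cached;  /* a clean cached copy, droppable at any time */
};

struct extent {
	uint64_t          inode, file_offset, size;
	unsigned          nr_ptrs;
	struct extent_ptr ptrs[MAX_PTRS];
};

/* The data is durable as long as at least one non-cached pointer exists;
 * cached pointers are purely an optimization. */
static bool extent_has_dirty_ptr(const struct extent *e)
{
	for (unsigned i = 0; i < e->nr_ptrs; i++)
		if (!e->ptrs[i].cached)
			return true;
	return false;
}
```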

We're also no longer limited to two tiers. Old style bcache supports exactly two: a cache device and a backing device. In bcachefs, we can have up to 16 tiers - a "tier" is just a way of grouping devices that have roughly the same performance. The code currently only handles two tiers, but only because we still have to think through how to manage more than two (say we have five tiers - when do we move data from tier 3 to tier 5? Which tiers should be used for promoting cached data, and when? That sort of thing).

One side effect of this approach is that cache coherency is a lot easier. In old style bcache, we have to guard against a race between foreground writes and background writeback. Suppose a foreground write in writethrough or bypass mode (i.e. not being written to the cache at all, just the backing device) happens to a location that has dirty data in the cache, and the background writeback thread is writing that dirty data to the backing device at the same time: we could end up overwriting the foreground write's new data with stale data from the cache, while leaving the cache marked as clean - that would be bad! To prevent this, we need a big lock (the writeback_lock) so that foreground writes can coordinate with the background writeback thread's queue; if they conflict with dirty data that's already being flushed, the foreground write is forced into writeback mode.

This issue fundamentally arises because, from the perspective of bcache, the backing device is updated in place: foreground writes and background writeback race to update the same position on disk. The issue doesn't exist in bcachefs, because all writes are copy on write. Take the same example, but in bcachefs: suppose the foreground write happens between when the background tiering thread scans for dirty data and when it updates the index to point at the new copy it just wrote (and mark the existing copy on the fast device clean). When doing the index update, the tiering thread will discover that the extent no longer matches the extent it read from, so it won't change the index to point at the (now stale) data it just wrote - it just skips it and keeps going. The copy it wrote ends up with nothing pointing to it, becomes orphaned, and its space is recycled.
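The tiering thread's index update is essentially a compare-and-swap against the extent it originally read. A minimal sketch of that check, with made-up names and a trivialized "index" (a single slot standing in for the btree):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Three uint64_t fields, so no padding: memcmp is a valid equality test. */
struct extent { uint64_t inode, offset, version; };

static bool extents_equal(const struct extent *a, const struct extent *b)
{
	return !memcmp(a, b, sizeof(*a));
}

/* Returns true if the index was updated to point at the copy we just
 * wrote; false if a foreground write got there first, in which case the
 * copy we wrote is simply left orphaned and its space reused later. */
static bool tiering_index_update(struct extent *index_slot,
				 const struct extent *snapshot,
				 const struct extent *new_extent)
{
	if (!extents_equal(index_slot, snapshot))
		return false;	/* raced with a foreground write: skip */
	*index_slot = *new_extent;
	return true;
}
```

No lock is needed across the whole copy: losing the race costs only the wasted write, never correctness.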

What does all this mean for the end user?

The infrastructure we have here for working with cached and dirty data and moving data around is very general - there's a huge array of possible ways we could use it and caching strategies we could implement. With this infrastructure we could implement all of bcache's cache modes - writethrough, writeback, writearound. We can have some writes sent to the fast tier and some to the slow tier - old style bcache tries to detect sequential IO and send it to the backing device, and only cache the small random IOs. It also watches the latency to the cache device, and tries to throttle back and bypass more and more data if the cache device is becoming congested and becoming a bottleneck.

For bcachefs, we're starting with a clean slate as far as all these knobs and parameters go. They all reside in the "backing device style bcache" part of the code - they're not core infrastructure - so bcachefs doesn't use them. And since we're no longer limited to two tiers, we'll want to rethink all these knobs and interfaces anyway.

So, one of the things on the todo list is to reimagine how this kind of configuration and policy should work. We want to come up with something more general, flexible, and powerful, but we also need to think about it from the end user's perspective and figure out what's actually going to be useful in the real world: we don't want to implement every conceivable way of managing tiered storage, as that would just be overwhelming for the end user. The real trick is to come up with abstractions that make managing all this relatively simple.

One thing we'll definitely do in the future is add xattrs so that policy can be set per file - "this file is always written to tier 2", or "this file is pinned to tier 0".

What does bcachefs's tiering do for now?

For now, on the policy side of things, I'm keeping it as simple as possible - we can add bells and whistles later, right now the focus is on correctness of the core infrastructure.

We currently only do writeback caching, and there's no bypass - foreground writes always go to tier 0. If tier 0 is full (and the thread that writes data from tier 0 to tier 1 isn't keeping up), foreground writes wait. We promote (add a cached copy) to tier 0 any time we do a foreground read and the data we're reading is on tier 1 (because if it had a copy on the fast tier, we would've read from that instead).
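The current policy is simple enough to sketch in a few lines - these are invented helper names, just restating the rules above as code:

```c
/* Sketch of the current, deliberately simple policy. */
enum tier { TIER_FAST = 0, TIER_SLOW = 1 };

/* No bypass, no writethrough: foreground writes always target tier 0
 * (and wait if tier 0 is full and the flusher isn't keeping up). */
static enum tier write_target(void)
{
	return TIER_FAST;
}

/* Promote on read: if we read from tier 1, there was no copy on tier 0
 * (else we would have read that one), so add a cached copy to tier 0. */
static int should_promote(enum tier read_from)
{
	return read_from == TIER_SLOW;
}
```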

BTW, there's some justification for this behavior, even when you have the option of doing something fancier (like old style bcache does): buffering data up in tier 0 allows it to be written more efficiently (more sequentially) to tier 1, so your system's throughput can be higher if the only thing writing to tier 1 is the background flusher. If you start sending foreground writes to tier 1 because the flusher isn't keeping up, you make the IO pattern to tier 1 less sequential, slowing the flusher down even more and reducing your total throughput.

The capacity of a filesystem is calculated from the capacity of the largest tier - if you have a tier 0 device and a tier 1 device, the capacity of your filesystem is the capacity of your tier 1 device; the tier 0 device doesn't contribute. This ensures there's always room on tier 1 to flush dirty data from tier 0.
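A quick worked example of that rule (the numbers and function are made up for illustration): with a 256 GB tier 0 SSD and a 4 TB tier 1 disk, the reported filesystem capacity is 4 TB.

```c
#include <stdint.h>

/* Only the largest tier's capacity counts toward the filesystem's
 * capacity, so the slow tier always has room to absorb dirty data
 * flushed down from the fast tier. */
static uint64_t fs_capacity(const uint64_t *tier_capacity_gb,
			    unsigned nr_tiers)
{
	uint64_t max = 0;

	for (unsigned i = 0; i < nr_tiers; i++)
		if (tier_capacity_gb[i] > max)
			max = tier_capacity_gb[i];
	return max;
}
```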

Metadata currently always lives on tier 0; it's never written to tier 1.