Because we can dial new instances up and down so quickly, we can size our GPU infrastructure for average use rather than for the full capacity of our maximum peak load. Thus far, we’ve migrated about one-third of our infrastructure into Google Cloud.

To efficiently search our massive catalog of music, we maintain multiple levels of GPU server clusters that we call "tiers." The first tier searches against a database of the most popular songs’ audio signatures, while subsequent tiers search longer samples against progressively more obscure music databases. In this way, Shazam identifies, say, "Thinking Out Loud" by Ed Sheeran in a single pass from a short sample, but might need several passes and a much longer sample to identify a 1950s recording of a Lithuanian polka group. (Being able to match really obscure music in addition to popular music is what makes using Shazam such a magical experience for our users.)
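The tiered lookup described above can be sketched roughly as follows. This is an illustrative sketch only: the tier names, minimum sample lengths, signatures, and the `search_tier` stand-in are all invented for this example and are not Shazam's actual code.

```python
from typing import Optional

# (tier name, minimum sample length in seconds) -- illustrative values only
TIERS = [
    ("popular", 3),
    ("back_catalog", 6),
    ("long_tail", 12),
]

def search_tier(tier: str, signature: str) -> Optional[str]:
    """Stand-in for a GPU-backed index lookup; returns a match or None."""
    catalog = {
        "popular": {"sig-ed-sheeran": "Thinking Out Loud"},
        "back_catalog": {"sig-rnb-hit": "Old R&B Hit"},
        "long_tail": {"sig-polka": "1950s Lithuanian Polka"},
    }
    return catalog[tier].get(signature)

def identify(signature: str, sample_seconds: int) -> Optional[str]:
    """Walk the tiers in order, only escalating when the sample is long enough."""
    for tier, min_seconds in TIERS:
        if sample_seconds < min_seconds:
            break  # need a longer recording before trying deeper tiers
        match = search_tier(tier, signature)
        if match:
            return match
    return None

print(identify("sig-ed-sheeran", 4))  # found in the first (popular) tier
print(identify("sig-polka", 15))      # falls through to the deepest tier
```

The key property is that a popular song resolves in one cheap pass, while an obscure one only succeeds if the recorded sample is long enough to justify searching the larger catalogs.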

Increasing the hit rate on the first tier of servers depends on keeping the index files up to date with the latest popular music. That’s hard to do given how quickly music falls in and out of favor. Some surges in traffic we can plan and pre-scale for, such as the Super Bowl, the Grammys, or even our first branded game show, "BEAT SHAZAM." Other surges we cannot predict — say, a local radio station in a large market reviving an old R&B hit, or a track that was never popular suddenly being featured in a TV advertisement. And that’s not counting new music, which we add to our catalog every day through submissions from labels as well as from in-house music experts who are constantly searching for new music.

Of course, running on bare metal servers, we also need to provision extra capacity for the inevitable failure scenarios we all experience when operating services at scale. One of the amazing benefits of running in Google Cloud is that we can now replace a failed node in just minutes with a brand new one “off the shelf,” instead of keeping a pool of nodes around just waiting for failures. With our previous managed service provider, we had to provision GPUs in groups of four cards per machine, with two dies per card. That meant we could lose up to eight shards of our database when a node failed. Now, in Google Cloud, we provision one VM per shard, which localizes the impact of a node failure to a single shard instead of eight.
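The difference in blast radius is easy to see with a toy shard-to-node mapping. The shard count, node names, and packing scheme below are assumptions chosen to mirror the four-cards-times-two-dies layout described above, not real topology data.

```python
def shards_lost(shard_to_node: dict, failed_node: str) -> list:
    """Return the shards that become unavailable when one node fails."""
    return sorted(s for s, node in shard_to_node.items() if node == failed_node)

NUM_SHARDS = 16

# Bare metal: eight shards packed onto each machine (4 cards x 2 dies)
bare_metal = {f"shard-{i}": f"machine-{i // 8}" for i in range(NUM_SHARDS)}

# Cloud: one VM per shard
cloud = {f"shard-{i}": f"vm-{i}" for i in range(NUM_SHARDS)}

print(len(shards_lost(bare_metal, "machine-0")))  # 8 shards offline
print(len(shards_lost(cloud, "vm-0")))            # 1 shard offline
```

With one VM per shard, a node failure takes out exactly the data it hosts and nothing more, which is what makes quick "off the shelf" replacement viable.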

An unexpected benefit of using Google Cloud GPUs has been an increase in how often we recalculate and update our audio signature database, which is quite computationally intensive. On dedicated infrastructure, we updated the index of popular songs daily. On Google Cloud, we can recompile the index and reimage the GPU instance in well under an hour, so the index files are always up to date.
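A rebuild-and-reimage cycle like the one described above might look roughly like this. Everything here is a simplified assumption: the catalog, the `rebuild_index` stand-in, and the instance swap are simulated for illustration and do not reflect Shazam's actual pipeline.

```python
def rebuild_index(songs: dict) -> dict:
    """Stand-in for recompiling audio signatures into a searchable index."""
    return {sig: title for title, sig in songs.items()}

def rolling_refresh(instances: list, index: dict) -> list:
    """Swap instances over to the new index one at a time (simulated)."""
    refreshed = []
    for name in instances:
        # In a real deployment this step would boot a fresh VM from an
        # image baked with `index`; here we only record the swap.
        refreshed.append({"instance": name, "index_size": len(index)})
    return refreshed

catalog = {"Thinking Out Loud": "sig-1", "Brand New Single": "sig-2"}
new_index = rebuild_index(catalog)
state = rolling_refresh(["gpu-vm-0", "gpu-vm-1"], new_index)
print(state)
```

Refreshing instances one at a time keeps the tier serving traffic throughout the update, which is what makes sub-hourly index refreshes practical.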

This flexibility allows us to begin considering dynamic cluster configurations. For instance, because of the way our algorithm works, it’s much easier for us to identify songs that were Shazamed in a car, which is a relatively quiet environment, than it is to identify songs Shazamed in a restaurant, where talking and clanging obscure the song’s audio signature. With the flexibility that cloud-based GPUs afford us, we have many more options for configuring our tiers to match the specific demands that our users throw at us at different times of day. For example, we may be able to reconfigure our clusters according to time of day: morning drive time versus happy hour at the bar.
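One simple way to express that kind of time-of-day reconfiguration is a profile schedule. The profile names and hour boundaries below are invented for this sketch and are not real configuration values.

```python
from datetime import time

# (start, end, profile name) -- illustrative schedule, not real config
PROFILES = [
    (time(6), time(10), "drive_time"),    # quieter car audio, cheap matching
    (time(17), time(22), "happy_hour"),   # noisy bars, heavier GPU tiers
]

def profile_for(now: time) -> str:
    """Pick the cluster profile that covers the given time of day."""
    for start, end, name in PROFILES:
        if start <= now < end:
            return name
    return "default"

print(profile_for(time(8)))   # drive_time
print(profile_for(time(19)))  # happy_hour
print(profile_for(time(2)))   # default
```

A scheduler could then resize or reshape the tiers whenever the active profile changes, which is only economical when instances can be spun up and down in minutes.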

It’s exciting to think about the possibilities that using GPUs in Google Cloud opens up, and we look forward to working with Google Cloud as it adds new GPU offerings to its lineup.

You can find out more details about our move to Google Cloud Platform here: https://blog.shazam.com/moving-gpus-to-google-cloud-36edb4983ce5