Have you ever tried playing a musical instrument?

It’s hard. You need to learn the theory, learn the instrument and practice.

Years ago we launched a startup to get rid of this problem. The startup was called Reedom, and its mission was to make music easy to play for everyone. The idea behind our product was simple: you hum a song, we play it on a musical instrument.

The startup failed.

Although we never managed to ship a product, the technology that powered Reedom was very interesting. That’s why we decided to build a bot that shows what the product was meant to do.

You can play with it. The bot creates songs from your audio clips. Here’s an example of an audio clip we sent to the bot.

And here’s the outcome:

This is a prototype, and sometimes it gets a little clunky. Still, you can use it to create pretty nice things.

Tum Tum Cha!

You can play two categories of instruments: Melodic Instruments and Percussions.

The two categories are implemented in very different ways. In this article we’re going to talk about how Percussions are implemented.

Before going on with the article, I suggest you select a percussion instrument and sing “Tum Tum Cha!” to our bot 🥁.

Here’s an example:

This is the outcome:

What about Deep Learning?

I’m pretty sure Deep Learning is going to disrupt Automatic Music Transcription soon (it’s probably happening while I’m writing). That said, when we started this project, it didn’t seem to us to be the right tool for the job. We had a small team, a low budget, and no large dataset to train our models on.

Reedom is implemented using old-school digital signal processing (DSP) and a bit of machine learning. DSP comprises a set of wonderful algorithms that are easy to understand and debug. That’s great when you need to quickly code, test, and change your product.

How it works

Reedom listens to an input audio track and writes down sheet music. This is called “Automatic Music Transcription”.

Here’s a more detailed outline of what we do:

load the input audio track

detect the beginning of the notes

for each note, detect the sound produced by the user

build up sheet music

synthesize it
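To make the pipeline concrete, here is a toy sketch of the onset-detection step on a synthetic clip. This is an illustration written for this article, not Reedom’s actual code: the function name, frame size, and energy thresholds are all invented, and a simple energy gate stands in for the real onset detectors described later.

```python
import numpy as np

def detect_onsets(signal, sr, frame=512):
    """Very rough energy-gate onset detector: report the start of every
    frame whose energy rises above a threshold after a quiet spell."""
    n_frames = len(signal) // frame
    onsets, active = [], False
    for i in range(n_frames):
        energy = np.sum(signal[i * frame:(i + 1) * frame] ** 2)
        if energy > 1.0 and not active:
            onsets.append(i * frame / sr)  # frame start, in seconds
            active = True
        elif energy < 0.1:
            active = False
    return onsets

# Synthetic "Tum Tum Cha": three noise bursts at 0.0 s, 0.5 s and 1.0 s
sr = 8000
rng = np.random.default_rng(0)
track = np.zeros(2 * sr)
for t in (0.0, 0.5, 1.0):
    start = int(t * sr)
    track[start:start + 400] = rng.standard_normal(400) * 0.5

onsets = detect_onsets(track, sr)
print(onsets)
```

The detected times land on frame boundaries, so they are accurate only to within one frame (64 ms here); real onset detectors refine this considerably.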

Since the user is trying to reproduce the sound of a drum, we do not care whether the result is out of tune. We use the spectrum of the audio track only to distinguish the different sounds the user is making. This is done using K-Means clustering. The bot is configured to recognize two sound types (K=2).

The sound type with the higher average frequencies is mapped to a higher-frequency sound (hat), and the sound type with the lower average frequencies is mapped to a lower-frequency sound (kick).
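The clustering step can be sketched in a few lines of numpy. Everything below is an illustration invented for this article (Reedom’s real features are richer than a single number per note): each note is reduced to its spectral centroid, a tiny 1-D K-means splits the notes into two clusters, and the cluster with the higher centroid is mapped to the hat.

```python
import numpy as np

def spectral_centroid(note, sr):
    """Average frequency of a note, weighted by spectral magnitude."""
    mags = np.abs(np.fft.rfft(note))
    freqs = np.fft.rfftfreq(len(note), d=1.0 / sr)
    return np.sum(freqs * mags) / (np.sum(mags) + 1e-9)

def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D Lloyd's algorithm."""
    centroids = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = values[labels == j].mean()
    return labels, centroids

# Synthetic notes: two low 80 Hz "kicks" and one noisy "hat"
sr = 8000
t = np.arange(1024) / sr
rng = np.random.default_rng(1)
notes = [np.sin(2 * np.pi * 80 * t), np.sin(2 * np.pi * 80 * t),
         rng.standard_normal(1024)]

cents = np.array([spectral_centroid(n, sr) for n in notes])
labels, centroids = kmeans_1d(cents)
hat_cluster = np.argmax(centroids)  # higher average frequency -> hat
mapping = ["hat" if l == hat_cluster else "kick" for l in labels]
print(mapping)
```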

Finally, the score is synthesized using SoundFonts, a sample-based synthesis technique used to play MIDI files. We didn’t put much effort into it, and it worked out really well. It’s fun; I suggest you give it a try if you have the chance.
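For context on what reaches the synthesizer: the score is a MIDI file, and writing one by hand is surprisingly little code. The sketch below is something I wrote for this article rather than anything from Reedom; it emits a format-0 Standard MIDI File with kick (note 36) and closed hat (note 42) hits on the General MIDI percussion channel.

```python
import struct

def vlq(n):
    """Encode an integer as a MIDI variable-length quantity."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(out))

def percussion_midi(hits, ticks_per_beat=480):
    """hits: list of (delta_ticks, note) pairs, e.g. 36=kick, 42=closed hat.
    Returns the bytes of a format-0 Standard MIDI File."""
    track = b""
    for delta, note in hits:
        track += vlq(delta) + bytes([0x99, note, 100])  # note on, channel 10
        track += vlq(60) + bytes([0x89, note, 0])       # short note off
    track += b"\x00\xff\x2f\x00"                        # end-of-track meta event
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, ticks_per_beat)
    return header + b"MTrk" + struct.pack(">I", len(track)) + track

# "Tum Tum Cha": kick, kick, hat, roughly one beat apart
data = percussion_midi([(0, 36), (420, 36), (420, 42)])
with open("tum_tum_cha.mid", "wb") as f:
    f.write(data)
```

Any SoundFont-capable player (FluidSynth, for instance) can then render the file with sampled drum sounds.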

Onset Detection

The most challenging and interesting task in the process is detecting the beginning of the notes. This is called “Onset Detection”. Reedom uses a combination of different onset detection algorithms. We’ll briefly go through one of the most important of them, just to give you an idea of what we’re talking about.

The algorithm is called HFC (High Frequency Content). HFC takes as input the spectrum of the signal and multiplies each spectral bin by its position. Its goal is to boost high frequencies, which is particularly effective for onset detection on percussion instruments. The peaks of the resulting curve are then used to identify the onsets.

It’s computed as follows:

HFC(n) = Σ_k k · |X_n(k)|²

where X_n(k) is the k-th spectral bin of frame n.

Here’s a practical example: HFC computed on the percussion track you heard at the beginning of the article. All the values are normalized to the interval [-1, 1] so that everything can be shown in the same chart.
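HFC itself is only a few lines of numpy. The sketch below is my own illustration, not Reedom’s implementation (frame size and test signal are made up): it weights each squared spectral magnitude by its bin index, and on a quiet hum with one percussive burst, the burst’s frame clearly dominates the curve.

```python
import numpy as np

def hfc(signal, frame=512):
    """High Frequency Content per frame: each squared spectral magnitude
    is weighted by its bin index, boosting the high end of the spectrum."""
    n_frames = len(signal) // frame
    values = np.empty(n_frames)
    for i in range(n_frames):
        mags = np.abs(np.fft.rfft(signal[i * frame:(i + 1) * frame]))
        bins = np.arange(len(mags))
        values[i] = np.sum(bins * mags ** 2)
    return values

# A quiet low-frequency hum with one percussive noise burst dropped in
sr = 8000
rng = np.random.default_rng(2)
track = 0.1 * np.sin(2 * np.pi * 100 * np.arange(sr) / sr)
track[4096:4296] += rng.standard_normal(200)

curve = hfc(track)
print(int(np.argmax(curve)))  # the frame containing the burst
```

Peak picking on this curve (with some smoothing and thresholding in practice) yields the onset candidates.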

Melodic instruments use a different approach for onset detection. The algorithm is more complex, and it takes into account where the note ends.

Conclusions

There’s still much work to do before we can make a real product out of this.

However, the result is an interesting sound processing showcase. It’s a good example of how DSP and simple machine learning techniques can be combined to obtain cool results.

This article was meant to give you a glimpse of the technologies we used to develop Reedom. We’re considering writing about this topic again; please let us know if you’re interested in a specific part of the project.

Hope you enjoyed playing with our bot 🥁.

Edit: following the feedback we received on HN, we added a new feature: you can now export MIDI files! Read more about it in this other blog post.