
This is a really fascinating possibility, but to be honest right now we are not even close to having consumer-based technology like this. The challenge comes down to a few things:

1. The signal-to-noise ratio of EEG is really, really bad.
2. It's unclear how to map features of brain activity onto linguistic features like phonemes, words, etc.
3. Time is a crucial aspect of both brain activity and speech, but we're not sure how to map the "timescale" of the brain onto the "timescale" of a person speaking. (E.g., think the sentence "all dogs go to heaven" and then say it out loud. It probably took a different amount of time in each case.)
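To make the SNR point concrete, here's a minimal numpy sketch (all the numbers are made up for illustration) of why EEG work leans so heavily on averaging many time-locked trials: the evoked response survives averaging, while independent noise shrinks in variance by roughly 1/N.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, for illustration only: a 1 s "evoked response"
# sampled at 100 Hz, buried in noise 10x larger than the signal.
fs = 100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 5 * t)   # the brain response we want
noise_sd = 10.0                      # the noise dwarfs the signal

def snr(x, target):
    # SNR of x relative to the known clean waveform
    resid = x - target
    return np.var(target) / np.var(resid)

# A single trial: signal + large noise
one_trial = signal + rng.normal(0, noise_sd, size=fs)

# Averaging N time-locked trials cuts noise variance by ~1/N,
# so SNR grows roughly linearly with the number of trials.
n_trials = 400
trials = signal + rng.normal(0, noise_sd, size=(n_trials, fs))
avg = trials.mean(axis=0)

print(f"single-trial SNR: {snr(one_trial, signal):.4f}")
print(f"{n_trials}-trial average SNR: {snr(avg, signal):.2f}")
```

This is also why single-trial, real-time decoding (what a consumer device would need) is so much harder than the averaged results you see in papers.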

The best example of the state of the art in EEG decoding is probably the P300 speller. Basically, this takes advantage of a signal that pops up in your EEG only when you see something you were already attending to. They give you a grid of letters, tell you to attend to one of them, and then start flashing letters in succession. When the letter you were attending to gets flashed, your brain produces a special signal that the P300 speller picks up on. However, this types at a rate of only a few characters per minute, so it is far from the lucid stream of words/thoughts that we'd like to have.
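The core logic of the paradigm is easy to caricature in code. This is a toy simulation (every parameter is invented; real systems involve far more preprocessing and classification machinery): average the post-flash epochs for each letter and pick the letter whose average shows the biggest bump around 300 ms.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy P300-speller logic (all numbers hypothetical): each candidate
# letter is flashed repeatedly; only flashes of the attended letter
# evoke a P300 bump ~300 ms after the flash.
fs = 100                       # samples per second
epoch = np.arange(60) / fs     # 600 ms epoch after each flash
p300 = np.exp(-((epoch - 0.3) ** 2) / (2 * 0.05 ** 2))  # bump at 300 ms

letters = list("ABCDEF")
attended = "D"
n_flashes = 30                 # flashes per letter
noise_sd = 3.0

def simulate_epochs(letter):
    """Noisy epochs for one letter; attended letter carries a P300."""
    base = p300 if letter == attended else 0.0
    return base + rng.normal(0, noise_sd, size=(n_flashes, epoch.size))

# Decode: average each letter's epochs and score the 250-350 ms window,
# where the P300 should appear. The largest score wins.
window = (epoch >= 0.25) & (epoch <= 0.35)
scores = {L: simulate_epochs(L).mean(axis=0)[window].mean() for L in letters}
decoded = max(scores, key=scores.get)
print("decoded letter:", decoded)
```

Note that even this toy version needs 30 flashes per letter to beat the noise, which is exactly why real P300 spellers are so slow.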

The work from Gallant's lab is really cool, but far from a feasible decoder of brain activity. If I remember correctly, their movie decoder was actually selecting from a set of movies they already had in a database: they'd take the top 10/20/whatever movies that the decoder selected and average them together. This is a really clever idea, but it isn't a full-fledged decoder that would let them reconstruct arbitrary thoughts.
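To illustrate the "select from a database and average" idea (a hypothetical sketch with sizes and names of my own invention, not their actual pipeline): rank stand-in clip features by correlation with the features predicted from brain activity, then average the top matches.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up sizes: 500 candidate clips, each summarized by 64 features.
n_clips, n_features = 500, 64
database = rng.normal(size=(n_clips, n_features))   # stand-in clip features
true_idx = 123
# "Predicted" features: the true clip's features plus decoding noise
predicted = database[true_idx] + rng.normal(0, 0.8, size=n_features)

def rank_by_correlation(pred, db):
    # Pearson correlation between the prediction and every clip
    pred_c = pred - pred.mean()
    db_c = db - db.mean(axis=1, keepdims=True)
    r = db_c @ pred_c / (np.linalg.norm(db_c, axis=1) * np.linalg.norm(pred_c))
    return np.argsort(r)[::-1]

# The "reconstruction" is just the average of the best-matching clips:
# it can only ever contain material that was already in the database.
top_k = rank_by_correlation(predicted, database)[:10]
reconstruction = database[top_k].mean(axis=0)
print("true clip in top 10:", true_idx in top_k)
```

The key limitation falls right out of the last comment: nothing outside the database can ever be reconstructed, which is why this is selection, not open-ended decoding.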

There is some work in electrocorticography (ECoG, a technique where the electrodes are placed directly on the surface of the brain instead of on the scalp) with interesting results, but it is still far, far, far from a consumer product. E.g., see this paper from Stephanie Martin / Brian Pasley for an attempt to decode acoustic features from imagined speech. The ECoG signal is much higher quality than EEG, and they definitely show an improved ability to decode, but it still leaves much to be desired. (Full disclosure: I'm a co-author on that paper.)
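The general flavor of this line of decoding work is a linear map from neural features to acoustic features. Here's a hedged toy version on synthetic data (the real pipeline, electrode counts, and feature extraction are nothing this simple): fit a closed-form ridge regression from "neural" features to "spectrogram" bins and evaluate on held-out time points.

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up sizes: 2000 time points, 32 electrodes, 16 spectrogram bins.
n_time, n_electrodes, n_bins = 2000, 32, 16
W_true = rng.normal(size=(n_electrodes, n_bins))
X = rng.normal(size=(n_time, n_electrodes))              # neural features
Y = X @ W_true + rng.normal(0, 1.0, size=(n_time, n_bins))  # acoustic features

X_tr, X_te = X[:1500], X[1500:]
Y_tr, Y_te = Y[:1500], Y[1500:]

def ridge_fit(X, Y, lam=1.0):
    # Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

W = ridge_fit(X_tr, Y_tr)
pred = X_te @ W

# Standard score in this literature: correlation between predicted and
# actual values, computed per frequency bin on held-out data.
r = [np.corrcoef(pred[:, j], Y_te[:, j])[0, 1] for j in range(n_bins)]
print(f"mean held-out decoding correlation: {np.mean(r):.2f}")
```

On synthetic data with a genuinely linear relationship the correlations come out high; with real imagined-speech data they are far lower, which is the gap between "statistically decodable" and "usable."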

So, tl;dr: these ideas are really fascinating, and hopefully they'll be a part of our society in the future. But before we get to that point, we have major problems to solve, such as finding a brain signal with a better SNR, building more clever language models, and figuring out how timescales are represented differently in the brain vs. during speech. Lots of progress yet to be made, but such is the nature of science :)

PS: I didn't mention any commercial brain-decoding systems because these are all almost laughably bad right now. I don't know of any researchers who really believe that a system like Emotiv actually records abstract states like "attention," "arousal," etc. Be wary of companies trying to make money by capitalizing on people's inherent enthusiasm for neuroscience. Sorry if that makes me sound like a crotchety scientist.