While AI is getting powerful very quickly, and more and more jobs are threatened with automation, we are not yet at the science-fiction point of communicating with robots as equals. Sure, some robots already understand the concepts of trust and regret, but machines can’t yet think, reason, or communicate at an advanced level.

Microsoft’s Tay Bot, which tried to learn humanity from reading Twitter and quickly became an angry racist, serves as a clear example of the limitations of today’s technology. Siri is another example. She can give you the weather forecast for your zip code, but she can’t describe her feelings on mass incarceration or even comb through the fine print of that contract you’ve been asked to sign.

Teaching machines to think and behave more like us is what Mo Musbah and the team at Maluuba are working on.

“Deep learning has been used to solve problems in speech recognition, machine translation, image processing. You see it in applications like self-driving cars, but it hasn’t been as utilized in the space of natural language understanding,” says Musbah, VP of product. “Fundamentally speaking, we’re trying to solve machine literacy, getting machines to the point where they can truly understand how to read, write, and speak like human beings.”

Maluuba’s current artificial intelligence is able to process words from a Wikipedia page, a George R.R. Martin novel, or a medical document and answer factual questions about the text (it can currently read in 10 languages). The AI could, for example, tell you what instrument Jerry Garcia played, or how Eddard Stark died, or about the side effects of a new treatment.
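The article doesn’t describe Maluuba’s architecture, which relies on trained deep-learning models far more capable than anything shown here. But the basic task, extractive question answering, can be illustrated with a deliberately simple, hypothetical sketch: split a passage into sentences, then return the sentence that shares the most words with the question. All names and the scoring heuristic below are illustrative, not Maluuba’s method.

```python
import re

# Common question/function words to ignore when scoring overlap (an
# arbitrary illustrative list, not a real stopword resource).
STOPWORDS = {"what", "who", "how", "did", "the", "a", "an", "is", "was", "of"}

def answer_sentence(context: str, question: str) -> str:
    """Toy extractive QA: return the sentence from `context` that has the
    largest word overlap with `question`. Real systems use trained neural
    models; this is only a baseline sketch of the task."""
    sentences = re.split(r"(?<=[.!?])\s+", context.strip())
    q_words = set(re.findall(r"\w+", question.lower())) - STOPWORDS

    def overlap(sentence: str) -> int:
        s_words = set(re.findall(r"\w+", sentence.lower()))
        return len(q_words & s_words)

    return max(sentences, key=overlap)

# Example usage, echoing the Jerry Garcia question from the text:
context = ("Jerry Garcia played the guitar. "
           "He was a founding member of the Grateful Dead.")
print(answer_sentence(context, "What instrument did Jerry Garcia play?"))
```

A word-overlap baseline like this answers only the simplest factual questions; the gap between it and questions about motivations or political machinations is exactly the research problem Maluuba describes below.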

Musbah says Maluuba (a nonsense word made up by the founders’ computer science professor) can pull this off without giving its AI any previous training in any given domain, scientific or otherwise, as is usually required with machine learning.

“Questions that have definite answers are what we’ve tackled to date,” says Maluuba research scientist Adam Trischler, who leads the machine comprehension team. “We are in the process, with building new algorithms and data sets, of having AI answer ambiguous questions. So, not just about who did what or what happened next, but synthesizing information and making deductions about people’s motivations, or political machinations.”