“Imagine Paranormal Activity as an augmented reality game”

With over 20 million downloads on iOS, Android and Windows Phone, Bryan Mitchell is a force in the mobile gaming industry. He has been developing mobile games for over half a decade and has been programming for more years than some people have been alive. I’m honored to have the chance to interview Bryan about what it’s like to develop an immersive ARG horror experience that looks incredibly terrifying!

The painting falling off the wall blows my mind every time I watch it!

That video gives me goosebumps! You mention in the video that “they said this couldn’t be done”. What are some examples of difficult problems you faced with this game?

Inertial navigation on mobile devices, and single-camera depth estimation. I had a subscription to DeepDyve, and read through many unsuccessful attempts at making them work on the iPhone. Computer vision, on the whole, is full of unsolved problems people have been working on for 20 years. Even the human eye and brain aren’t perfect; we are fooled by camouflage and many kinds of optical illusions.

That’s interesting, I hadn’t thought about the human eye not being perfect before. What are some other interesting things this project has made you think about?

Some incredibly interesting things, like AI-complete problems and computer vision.

Imagine someone blind from birth suddenly being able to see. They would essentially be like a baby seeing for the first time, and would have to learn what this new input meant. They would have to build all of the associations we have already built, like knowing that if a baseball is getting bigger, it’s about to hit them.

When it comes to an image on a computer screen, most humans can quickly and efficiently tell which pixels in the image are people and which are not, but we have to use green/blue/black screens to help computers tell background from foreground. Humans understand the context from the content; computers don’t. We have to teach the computers.
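For readers curious why a green screen makes the problem tractable: with a known, uniform background color, separating foreground from background reduces to a per-pixel distance test against that color. This is only a toy sketch of the idea (the `chroma_key` function, color values, and threshold are illustrative, not from Bryan’s game):

```python
def chroma_key(pixels, key_color=(0, 255, 0), threshold=100):
    """Return a mask: True where a pixel is foreground (far from the key color)."""
    def dist(a, b):
        # Euclidean distance in RGB space
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [dist(p, key_color) > threshold for p in pixels]

# A tiny 4-pixel "image": two greenish background pixels, a red shirt, a skin tone.
frame = [(0, 250, 5), (10, 240, 0), (200, 30, 40), (220, 180, 150)]
print(chroma_key(frame))  # → [False, False, True, True]
```

Without the known key color, the same separation requires the computer to understand what the pixels depict, which is exactly the hard part Bryan describes.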

Think about how you are able to tell that an object is a person rather than a wall, or even distinguish one person from another. Take two people standing next to each other against a wall: how are you able to tell them apart? There may be differences in height, gender, body type, clothing, etc. As a human, you understand that there are two different and unique people on the screen. A computer has to be told about all the little things that make an object count as human.

Solving these issues required some out-of-the-box thinking, and my understanding of film and lighting was invaluable. Almost all of computer vision is based on advanced mathematics and signal processing; I tried to approach it experimentally, in a less mathematical way.

Why was knowing about film and lighting useful in solving problems?

Knowing that light falls off at an inverse-square rate was useful, as was having an intuition about how light is reflected, which motivated exploring more visual, sensory approaches rather than signal-processing ones. The early days of computer vision were a lot like that.
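The inverse-square law Bryan mentions says that brightness from a point light source falls off with the square of distance, which also means a measured brightness can be inverted into a rough distance estimate. A minimal sketch of that intuition (the function names and values here are illustrative assumptions, not his implementation):

```python
import math

def intensity(power, distance):
    """Irradiance at `distance` from an ideal point source of given power."""
    return power / (4 * math.pi * distance ** 2)

def distance_from_intensity(power, observed):
    """Invert the inverse-square falloff to estimate distance."""
    return math.sqrt(power / (4 * math.pi * observed))

# Doubling the distance quarters the intensity.
print(intensity(100.0, 1.0) / intensity(100.0, 2.0))  # → 4.0
```

The inversion only works for an idealized point source with known power, but it shows how lighting intuition can stand in for heavier signal-processing machinery.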

What was the initial spark of motivation for your new game on Indiegogo, Night Terrors?