There tends to be an idea that creating “true” artificial intelligence is functionally distinct from creating real, practical AI that might actually control a robot that exists today. The former is about simulating consciousness, while the latter has to do with simple behavioral switches: when you hit a wall, stop going forward. Yet this dichotomy assumes that “true” consciousness is not just a very, very elaborate network of such simple programmatic instructions. If consciousness really is just an extremely well-programmed robot, then work on modern AI is very much a step toward true artificial consciousness. A new project from the people at OpenWorm illustrates this principle quite well: they’ve plugged their simulated worm brain into a robot body and gotten immediate, worm-like behavior as a result.

Notice that the worm robot is not particularly worm-like in appearance. Rather than being a squirming snake machine, which would have been quite difficult to get working, the OpenWorm team used a more conventional robot on wheels. Thus, what’s being simulated here is behavior, not locomotion — though OpenWorm has spent a significant amount of time modeling the worm’s muscles and natural method of movement.

OpenWorm uses software to try to accurately model every neural connection among the 300-odd neurons of the roundworm C. elegans; its goal is to model how information moves through the worm’s body and dictates behavior, even though the worm has nothing resembling a centralized brain. On that count, the Lego-built robot produced some very interesting results.
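To make the idea concrete, here is a minimal sketch, not OpenWorm’s actual code, of how a connectome can be stored as a weighted graph and activation propagated through it step by step. The neuron names and weights are invented for illustration; the real model involves hundreds of neurons and far richer dynamics.

```python
def propagate(connectome, activation, threshold=1.0):
    """One timestep: each neuron at or above threshold fires into its targets."""
    next_activation = {n: 0.0 for n in activation}
    for neuron, level in activation.items():
        if level >= threshold:
            for target, weight in connectome[neuron].items():
                next_activation[target] += weight
    return next_activation

# Toy three-neuron chain: sensory neuron -> interneuron -> motor neuron.
connectome = {
    "sensor": {"inter": 1.0},
    "inter": {"motor": 1.0},
    "motor": {},
}
activation = {"sensor": 1.0, "inter": 0.0, "motor": 0.0}
activation = propagate(connectome, activation)  # stimulus reaches the interneuron
activation = propagate(connectome, activation)  # then reaches the motor neuron
```

The key point is that behavior falls out of the wiring: nothing in `propagate` knows about walls or food, only about which neurons connect to which.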

The team’s robot features a food sensor and a proximity sensor in the front, each of which outputs the appropriate signals when stimulated; stimulating the proximity sensor tells the worm that it’s reached an obstacle, causing it to stop, while stimulating the food sensor causes it to move forward. What’s important to understand is that the researchers did not go in and program the robot to stop in response to an upcoming wall. Rather, they accurately modeled C. elegans, which has been programmed by evolution, and their modeling of that biological programming led to similar behavior from the robot. It is one of the first (if not the first) examples of emergent robot behavior arising from novel input and mapped neural connections, rather than conventional, results-focused computer programming.
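For contrast, a sketch of the conventional, results-focused approach the team deliberately avoided: a hand-written controller that hard-codes the outcomes directly. (Function and parameter names here are invented; in OpenWorm’s robot, the equivalent stop-at-wall behavior emerges from the simulated connectome rather than from rules like these.)

```python
def conventional_controller(food_sensed: bool, obstacle_sensed: bool) -> float:
    """Hand-written behavioral rules; returns a wheel speed."""
    if obstacle_sensed:  # the programmer decides: wall means stop
        return 0.0
    if food_sensed:      # the programmer decides: food means go forward
        return 1.0
    return 0.0
```

Both approaches produce a robot that stops at walls; the difference is that here the behavior is specified by the programmer, whereas in the OpenWorm robot it is a side effect of the worm’s mapped biology.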

Many are heralding this as the beginning of true artificial life, and while there’s a kernel of truth to that, it’s also a bit misleading. It’s true in the sense that this is the most direct attempt yet to put the causes and effects of real life into a mechanical form, but that’s not how life actually developed. Evolution didn’t find a nematode and plug its brain into the body of a nematode; it designed that brain and body from the ground up, doing precisely the sort of direct programming this project has abandoned. Simulating a physical brain and then hosting that simulation in a robot is perhaps the least efficient way of building a brain, which is why the goal of this project was to show how well the team has simulated the worm’s brain, not to build a robust working robot.

This is not the strategy we’ll use to build an eventual HAL, but it may be how we come to a nuanced enough understanding of artificial brains to actually design one. In a weird way, this is a more fundamental step forward for biology than for robotics or computer programming: it shows that we might be able to get reliable predictions of animal behavior from simulated life.