Vast image databases like ImageNet have been used to train software built from neuron-like nodes, known as neural networks. The concept of computing with neural networks stretches back more than three decades, but it has become a powerful tool only in recent years. “The available data and computational capability finally caught up to these ideas of the past,” said Trevor Darrell, a computer vision expert at the University of California, Berkeley.

If data is the fuel, then neural networks constitute the engine of a branch of machine learning called deep learning. It is the technology behind the swift progress not only in computer vision, but also in other forms of artificial intelligence like language translation and speech recognition. Technology companies are investing billions of dollars in artificial intelligence research to exploit the commercial potential of deep learning.

Just how far neural networks can advance computer vision is uncertain. They emulate the brain only in general terms — the software nodes receive digital input and send output to other nodes. Layers upon layers of these nodes make up so-called convolutional neural networks, which, with sufficient training data, have become better and better at identifying images.
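To make the layered-nodes idea concrete, here is a minimal sketch of such a convolutional network in PyTorch. The layer sizes, the ten-class output and the 32-by-32 input are illustrative assumptions, not details of any system mentioned in this article.

```python
# A minimal convolutional neural network sketch in PyTorch.
# Layer sizes and the class count are illustrative assumptions,
# not taken from any system described in the article.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Stacked convolutional layers: each node takes digital input
        # from the layer below and sends output to the layer above.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB image in
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A final linear layer maps the learned features to class scores.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# One forward pass on a batch of four 32x32 images.
model = TinyConvNet()
scores = model(torch.randn(4, 3, 32, 32))
print(scores.shape)  # torch.Size([4, 10])
```

With enough labeled training images, the weights in layers like these are adjusted until the network's class scores match the labels, which is what "better and better at identifying images" amounts to in practice.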

Fei-Fei Li, the director of Stanford’s computer vision lab, was a leader of the ImageNet project, and her research is at the forefront of data-driven advances in computer vision. But the current approach, she said, is limited. “It relies on training data,” Dr. Li said, “and so much of what we humans possess as knowledge and context is lacking in this deep learning technology.”

Facebook recently ran into this contextual gap. Its algorithm took down an image, posted by a Norwegian author, of a naked 9-year-old girl fleeing napalm bombs. The software saw a violation of the social network’s policy prohibiting child pornography, not an iconic photograph of the Vietnam War and human suffering. Facebook later restored the photo.

Or take a fluid scene like a dinner party. A person carrying a platter will serve food. A woman raising a fork will stab the lettuce on her plate and put it in her mouth. A water glass teetering on the edge of the table is about to fall, spilling its contents. Predicting what happens next and understanding the physics of everyday life are inherent in human visual intelligence, but beyond the reach of current deep learning technology.

At the major annual computer vision conference this summer, there was a flurry of research representing encouraging steps, but no breakthroughs. For example, Ali Farhadi, a computer scientist at the University of Washington and a researcher at the Allen Institute for Artificial Intelligence, showed off ImSitu.org, a database of images labeled in context, a task known as situation recognition. As he explains it, image recognition supplies the nouns of visual intelligence, while situation recognition supplies the verbs. Search “What do babies do?” and the site retrieves pictures of babies engaged in actions including “sucking,” “crawling,” “crying” and “giggling” — visual verbs.
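To illustrate the nouns-versus-verbs distinction, here is a toy sketch in Python of how situation-style annotations could be stored and queried. The records, field names and the `verbs_for_agent` helper are hypothetical simplifications for illustration, not the actual imSitu data format or API.

```python
# Toy sketch of situation-recognition annotations: each image is tagged
# with a verb plus semantic roles, so a query can retrieve visual verbs.
# The records and field names are hypothetical simplifications, not the
# actual imSitu data format.
from collections import defaultdict

annotations = [
    {"image": "img_001.jpg", "verb": "crawling", "roles": {"agent": "baby", "place": "floor"}},
    {"image": "img_002.jpg", "verb": "giggling", "roles": {"agent": "baby", "place": "crib"}},
    {"image": "img_003.jpg", "verb": "stirring", "roles": {"agent": "chef", "item": "soup"}},
]

def verbs_for_agent(agent: str) -> dict[str, list[str]]:
    """Answer 'What does <agent> do?' by grouping images under verbs."""
    result = defaultdict(list)
    for record in annotations:
        if record["roles"].get("agent") == agent:
            result[record["verb"]].append(record["image"])
    return dict(result)

print(verbs_for_agent("baby"))
# {'crawling': ['img_001.jpg'], 'giggling': ['img_002.jpg']}
```

The point of the structure is that a plain image classifier stops at the noun ("baby"), while verb-and-role annotations like these let the system answer what the baby is doing.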