
Challenging the Field

Researchers say new learning algorithm could more quickly give us robotic aides to care for elderly and kids
October 02, 2017

After one of the pioneers of artificial intelligence said a major piece of the technology needs to be scrapped and a new way to advance AI discovered, professors at WPI are intrigued and inspired.

“It’s exciting to challenge the field,” says Carolina Ruiz, associate professor in the Computer Science department. “We should challenge researchers to think outside the box and look at this a different way. People should see this as an opportunity to better understand intelligence.”


Photo: Carolina Ruiz

Geoffrey Hinton, a pioneer in the development of artificial intelligence and one of the field’s most famous researchers, said during an interview in Toronto last month that he is skeptical that back-propagation, a key algorithm used in much of today’s artificial intelligence software, mirrors the human brain closely enough to keep advancing the technology.

And Hinton wants the industry to start over and find something new.

“My view is throw it all away and start again,” he reportedly told news site Axios.

That is an important opinion for the AI community since Hinton, who today is an emeritus professor at the University of Toronto and an Engineering Fellow at Google, was one of the first researchers to demonstrate the back-propagation algorithm for training neural networks. That work spurred the growth of machine learning, an arm of AI, and of neural networks, systems loosely modeled on the workings of the human brain.

Deep learning and neural networks have been critical to the development of speech and image recognition, technologies used in law enforcement, to organize photos, to control our devices, and to secure our smartphones. Because of back-propagation, we have systems that scan space for potentially habitable planets and scan medical images for signs of cancer. We also have apps that recognize songs on the radio and tell us what we’re listening to.

“It’s the very core of the idea of an artificial neural network,” says David Cyganski, a professor and interim director of WPI’s Robotics Engineering program. “Without back-propagation there would hardly be the concept of an artificial neural network, which is a way we make something work like a brain. I would say the lion’s share of AI today uses back-propagation.”

So what is this algorithm that has become so integral to the way we build AI systems today?


Photo: David Cyganski

Back-propagation is an algorithm that enables our smart systems to learn.

An artificial neural network acts much like the biological neural network at work in the human brain, which has simple processing units called neurons. However, the power of the brain is not in a single neuron. It lies in the network of neurons and the connections among them.  

The back-propagation algorithm’s job is to tell the network how to adapt the strength of those artificial connections based on the training it is given.
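To make that concrete, here is a minimal sketch, in Python, of how back-propagation adjusts the weights of a single artificial neuron. It is a toy illustration of the idea under simplified assumptions (one neuron, one training example, a squared-error measure), not code from any of the systems or researchers described in this article.

```python
import numpy as np

# Toy illustration: back-propagation for a single artificial neuron.
# The weights stand in for the strengths of the artificial connections;
# the chain rule tells us which way to nudge each weight to reduce error.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
weights = rng.normal(size=3)       # connection strengths, randomly initialized
learning_rate = 0.1

x = np.array([0.5, -1.2, 0.8])     # one training input
target = 1.0                       # the answer we want the neuron to give

for step in range(100):
    prediction = sigmoid(weights @ x)    # forward pass: the neuron's guess
    error = prediction - target          # how wrong that guess is
    # Back-propagation: gradient of the squared error with respect to
    # each weight, computed via the chain rule through the sigmoid.
    gradient = error * prediction * (1 - prediction) * x
    weights -= learning_rate * gradient  # strengthen or weaken each connection

print(sigmoid(weights @ x))  # the prediction moves toward the target
```

A real network repeats this same adjustment across many layers and many training examples at once, which is what makes the technique powerful and also what makes it depend on large amounts of labeled data.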

When scientists use back-propagation, they teach the artificial system by giving it information. For example, to teach an AI network what a cat is, they might show it hundreds or even thousands of different images of cats, along with images of other animals that are not cats. Then every time the system incorrectly identifies a cat, back-propagation gives it the digital equivalent of a slap on the hand, working the error back through the system’s many layers and adjusting them so they respond correctly the next time the system sees something that may or may not be a cat.
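Put in terms of a training loop, that process looks roughly like the following sketch, written here with the PyTorch library purely for illustration; the layer sizes, the random stand-in “images,” and the cat / not-cat labels are invented placeholders, not the actual systems described in this article.

```python
import torch
from torch import nn

# Hypothetical cat / not-cat classifier with a few layers of connections.
model = nn.Sequential(
    nn.Linear(64 * 64, 128),   # 64x64 grayscale image flattened to a vector
    nn.ReLU(),
    nn.Linear(128, 1),         # single output: cat (1) or not a cat (0)
)
loss_fn = nn.BCEWithLogitsLoss()   # penalty for wrong answers
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Stand-ins for hundreds of labeled pictures: random tensors with labels
# of 1.0 for "cat" and 0.0 for "not a cat".
images = torch.randn(200, 64 * 64)
labels = torch.randint(0, 2, (200, 1)).float()

for epoch in range(5):
    predictions = model(images)          # the system's guesses
    loss = loss_fn(predictions, labels)  # how wrong the guesses are
    optimizer.zero_grad()
    loss.backward()    # back-propagation: push the error back through every layer
    optimizer.step()   # adjust the connection strengths in each layer
```

Each pass through the loop is the “slap on the hand” described above: the measured error is propagated backward through the layers, and every connection is adjusted so the next guess is a little less wrong.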

One problem with this approach is that the AI system doesn’t simply learn by being in the world. Someone has to feed it all of that information.

Another problem is that even after seeing hundreds of pictures of cats, the artificial system might be stumped when it sees a human wearing a pair of costume cat ears.

“The problem is we can’t ever show it all of its potential experiences,” says Cyganski. “You cannot depend on knowing what will happen with a neural network when it’s presented with something different from all of its training examples.”

Despite this issue, back-propagation has greatly advanced artificial intelligence, powering Google searches, medical diagnosis software, video games, and social networks.


Photo: Jacob Whitehill

However, its limitations bother some researchers, who want more efficient and more reliable ways for AI systems to learn.

“I don’t think Hinton is worried that existing AI is going to necessarily break down, but to take things to the next level there needs to be some kind of fundamental new approach,” says Jacob Whitehill, assistant professor of computer science. “It’s resulted in shockingly powerful behavior that an artificial network can do, even outperforming human capabilities in certain domains. Would it be nice if something more powerful came along? Of course, it would.”

A new, more powerful approach could lead to the faster development of robot assistants that do laundry, pick the kids up from school, care for our elderly parents, and do the shopping.

“A sci-fi idea of a robot that is much more human-like, that can joke with us and have involved conversations, that won’t just require more powerful computers. That will require a more fundamental shift away from back-propagation,” adds Whitehill, who teaches a class in deep learning. “It’s intriguing.”

Joseph Beck, associate professor of computer science, uses machine learning to help him figure out how to better teach students.

While back-propagation is behind what he calls some “amazing achievements” in artificial intelligence, Beck also says Hinton isn’t crazy to call for something new to replace the old algorithm.


Photo: Joseph Beck

“It’s a building block of so much of the modern world,” he adds. “What we have now is not so bad, but I hope there’s something better coming.”

Beck speculates that Hinton might have raised the topic of scrapping back-propagation because the AI pioneer is working on an alternative. Hinton declined to answer that question in response to two separate emails.

“If he comes up with something interesting, certainly the world would be interested,” Beck says. “He’s a huge name in machine learning. Everyone would pay attention.”

Ruiz, though, is finding inspiration in Hinton’s call for researchers to come up with something new.

“For him to say we should start all over was incredible,” says Ruiz, whose research focuses on machine learning and building algorithms for behavioral and clinical medicine. “In a way, I thought it was undermining all the work he has done and the achievements we have had with his ideas. But then I thought that it’s just too easy for the AI community to work on that one thing. He’s saying: Don’t follow that path. Think of new ideas. Think of what will be the next new thing.”

Back-propagation has been a critical tool in advancing AI, and researchers are unlikely to stop using it anytime soon, but scientists need to make sure they don’t get stuck using only one technique or approach to machine learning.

“It’s totally inspiring. He’s a pioneer,” says Ruiz. “For him to encourage other people to do what he did—to think about problems and find new ways to solve them—that inspires me. He could have just continued the success of his own work and not questioned it. He’s achieved great success, but we still can achieve greater.”

- By Sharon Gaudin