Will computers ever feel responsible?

“If a machine is to interact intelligently with people, it has to be endowed with an understanding of human life.” 

—Dreyfus and Dreyfus

Bold technology predictions pave the road to humility. Even titans like Albert Einstein own a billboard or two along that humbling freeway. In a classic example, John von Neumann, who pioneered modern computer architecture, wrote in 1949, “It would appear that we have reached the limits of what is possible to achieve with computer technology.” Among the myriad manifestations of computational limit-busting that have defied von Neumann’s prediction is the social psychologist Frank Rosenblatt’s 1958 model of a human brain’s neural network. He called his device, based on the IBM 704 mainframe computer, the “Perceptron” and trained it to recognize simple patterns. Perceptrons eventually led to deep learning and modern artificial intelligence.
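For readers curious about what Rosenblatt's device was actually doing, here is a minimal sketch in Python of the perceptron learning rule: nudge the weights only when a prediction is wrong. It illustrates the idea, not Rosenblatt's IBM 704 implementation; the function name, toy data, and learning rate are invented for the example.

    def train_perceptron(samples, labels, epochs=20, lr=0.1):
        """samples: list of feature vectors; labels: -1 or +1 for each sample."""
        n = len(samples[0])
        weights = [0.0] * n
        bias = 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                # Weighted sum of inputs, thresholded at zero.
                activation = sum(w * xi for w, xi in zip(weights, x)) + bias
                prediction = 1 if activation >= 0 else -1
                if prediction != y:  # update weights only on mistakes
                    weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                    bias += lr * y
        return weights, bias

    # Toy usage: learn a simple linearly separable pattern (logical OR).
    data = [[0, 0], [0, 1], [1, 0], [1, 1]]
    targets = [-1, 1, 1, 1]
    w, b = train_perceptron(data, targets)
    print(w, b)

For patterns that can be separated by a straight line, this rule is guaranteed to converge; its limits with harder patterns were exactly what later layered, "deep" networks overcame.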

In a similarly bold but flawed prediction, brothers Hubert and Stuart Dreyfus—professors at UC Berkeley with very different specialties, Hubert’s in philosophy and Stuart’s in engineering—wrote in a January 1986 story in Technology Review that “there is almost no likelihood that scientists can develop machines capable of making intelligent decisions.” The article drew from the Dreyfuses’ soon-to-be-published book, Mind Over Machine (Macmillan, February 1986), which described their five-stage model for human “know-how,” or skill acquisition. Hubert (who died in 2017) had long been a critic of AI, penning skeptical papers and books as far back as the 1960s. 

Stuart Dreyfus, who is still a professor at Berkeley, is impressed by the progress made in AI. “I guess I’m not surprised by reinforcement learning,” he says, adding that he remains skeptical and concerned about certain AI applications, especially large language models, or LLMs, like ChatGPT. “Machines don’t have bodies,” he notes. And he believes that being disembodied is limiting and creates risk: “It seems to me that in any area which involves life-and-death possibilities, AI is dangerous, because it doesn’t know what death means.”

According to the Dreyfus skill acquisition model, an intrinsic shift occurs as human know-how advances through five stages of development: novice, advanced beginner, competent, proficient, and expert. “A crucial difference between beginners and more competent performers is their level of involvement,” the researchers explained. “Novices and beginners feel little responsibility for what they do because they are only applying the learned rules.” If they fail, they blame the rules. Expert performers, however, feel responsibility for their decisions because as their know-how becomes deeply embedded in their brains, nervous systems, and muscles—an embodied skill—they learn to manipulate the rules to achieve their goals. They own the outcome.

That inextricable relationship between intelligent decision-making and responsibility is an essential ingredient for a well-functioning, civilized society, and some say it’s missing from today’s expert systems. Also missing is the ability to care, to share concerns, to make commitments, to have and read emotions—all the aspects of human intelligence that come from having a body and moving through the world.

As AI continues to infiltrate so many aspects of our lives, can we teach future generations of expert systems to feel responsible for their decisions? Is responsibility—or care or commitment or emotion—something that can be derived from statistical inferences or drawn from the problematic data used to train AI? Perhaps, but even then machine intelligence would not equate to human intelligence—it would still be something different, as the Dreyfus brothers also predicted nearly four decades ago. 

Bill Gourgey is a science writer based in Washington, DC.
