
A.I. doesn’t bleed

AI-generated image

by John Clinton, Ramsay

New knowledge merges with old; what you know shapes what you learn. Researchers have studied artificial neural networks and written computer algorithms to understand how artificial, computer-generated neurons would store a concept, and how they would revise that concept if it proved flawed.
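
To make this concrete, here is a minimal sketch in Python of how one artificial neuron might do this, assuming a classic perceptron-style update rule (an illustration of the general idea, not Hinton's actual method): the neuron's "concept" is just a list of weights, and when a prediction proves flawed, each weight is nudged toward the correct answer.

```python
# A minimal sketch, assuming a classic perceptron update rule; an
# illustration of the idea, not code from Hinton's research.

def predict(weights, bias, inputs):
    """The neuron's stored 'concept': a weighted vote over its inputs."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

def revise(weights, bias, inputs, target, lr=0.1):
    """If the prediction was flawed, nudge each weight toward the truth."""
    error = target - predict(weights, bias, inputs)
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + lr * error
    return new_weights, new_bias

# Example: learning the logical-AND concept from repeated experience.
weights, bias = [0.0, 0.0], 0.0
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(20):  # each pass refines the stored concept
    for inputs, target in examples:
        weights, bias = revise(weights, bias, inputs, target)

print([predict(weights, bias, x) for x, _ in examples])  # [0, 0, 0, 1]
```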

Geoffrey Hinton, a computer scientist often called ‘the godfather of AI,’ is a leading figure in neural networks, a field inspired by the way neurons are connected in the human brain. A New Yorker magazine piece describes his collaborative work in the 1970s and 1980s developing neural nets that couldn’t outperform a child at anything. The technology suddenly improved about a decade ago with a large increase in processing speed and in the availability of Internet data.

According to Google Scholar, Hinton is the second most cited researcher among psychologists and the most cited among computer and cognitive scientists.

Hinton has been worried about the potential of AI to do harm, and he often talks about the existential threat that AI might pose to the human species. He has used OpenAI’s ChatGPT, and it has made him uneasy. He once typed “Fox News is an oxy moron” and asked ChatGPT to explain the joke. (My first reading of this caused me to ponder: a moron full of oxygen??) The first reply told him that the phrase implied Fox News was fake news. When he called attention to the space between oxy and moron, it explained that Fox News was addictive, like the drug OxyContin. To him, this level of understanding represents a new era in AI.

In nature, wildlife learns how to be intelligent without access to concepts expressed in words; animals learn to be smart (that is, to survive) through experience. Learning, not knowledge, is the engine of intelligence. There is no use trying to explain complicated ideas that you do not understand: first you must understand how something works, or you will just produce nonsense.

Because neural networks are so strange, no one can predict how useful, or dangerous, AI will become. OpenAI’s GPT models involve billions of artificial neurons, ultimately just data, and they are profoundly different from biological brains. They are not alive. Our intuitions tell us that nothing resident in a browser tab could really be thinking the way we do.

Because a human brain can run on oatmeal, the world can support billions of brains, all different. Each brain can learn continuously, rather than being trained once, then pushed out into the world (ship it!).

If the brain dies, the knowledge dies. If a computer dies, the same knowledge can be loaded onto another computer. Ten thousand neural nets can learn ten thousand different things at the same time and share what they have learned. This immortality (knowledge perpetuity) and replicability suggest we should be concerned about digital intelligence taking over from biological intelligence.
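
One way such sharing could work, sketched below under assumptions of my own (the article names no mechanism), is weight averaging in the spirit of federated learning: identical copies of a network train on different data, then merge what they have learned by averaging their weights, and every copy can load the result.

```python
# A minimal sketch of how identical neural nets could pool what they
# learn, in the spirit of federated weight averaging. The mechanism is
# an assumption for illustration; the article names no specific method.

def average_weights(copies):
    """Merge many trained copies into one shared set of weights."""
    n = len(copies)
    return [sum(values) / n for values in zip(*copies)]

# Three copies of the same net, each having learned from different data.
copy_a = [0.9, 0.1, 0.0]
copy_b = [0.0, 0.8, 0.2]
copy_c = [0.1, 0.0, 0.7]

shared = average_weights([copy_a, copy_b, copy_c])
print(shared)  # [0.333..., 0.3, 0.3]: every copy can load this merged state
```

A biological brain has no equivalent operation; its "weights" die with it, which is the asymmetry described above.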

Current AI technology is talkative and intellectual. The problems of physical intuition will be the challenge of the next decade, says Yann LeCun, a French computer scientist. A teenager can learn to drive a car in twenty hours of practice with little supervision. AI systems don’t come close to this, with the exception of self-driving cars, and those are over-engineered, requiring the mapping of whole cities, hundreds of engineers, and hundreds of thousands of hours of training.

And then there is the problem of what some researchers call AI ‘hallucination’ and Hinton calls ‘confabulation.’ AI systems confabulate by making up plausible answers to questions that stump them. In the human mind, there is no sharp boundary between making something up and telling the truth; telling the truth is just making it up correctly. ChatGPT’s ability to make things up is a flaw, but also a sign of its humanlike intelligence.

When Hinton began his research, no one thought the technology would succeed. When it started to succeed, no one thought it would succeed so quickly. He expects AI will contribute to many fields, but he fears what will happen when powerful people abuse it. Think autonomous weapons. Hinton warns that even benign autonomous systems could wreak havoc. Yann LeCun has said there is an idea that if a system is intelligent, it is going to want to dominate. But the desire to dominate has nothing to do with intelligence; it has to do with testosterone.

Recently, Hinton declined to sign a widely circulated petition that called for at least a six-month pause in AI research.

“China is not going to stop development for six months,” he said.

“So what should we do?”

“I don’t know. If this were like climate change where we either must stop burning carbon or find a way to remove carbon from the atmosphere, we know what the solution looks like. Here, with AI, it’s not like that.”

Think of the AI story in two ways: one in which our intuitions about the specialness of the human mind are being dislodged by thinking machines, and another in which, having stolen fire, we risk being burned.

Are we fooling ourselves, being taken in by our own machines and the companies that hope to profit from them? By seeking to re-create the knowledge systems in our heads, have we seized the forbidden apple? Have we risked exile from our charmed world? But who would choose not to know how knowing works?
