
More on artificial intelligence: ChatGPT

Fungus photo generated by AI (DALL·E 2)

by Brent Eades

Regular readers will know that I’m interested in the various artificial intelligence (AI) tools proliferating online. They offer to do, with varying degrees of success, tasks that would have once been the province of humans – write articles, make art and photos, generate music or even narrate videos.

The latest to make a splash is the ChatGPT tool, which strives to have human-like conversations with users and can also – sometimes – display remarkable sophistication in explaining certain topics, especially those with a science angle. Headlines about its release have been either euphoric or doom-laden, predicting various wonderful, or awful, consequences of this technology.

I’ve been tinkering with ChatGPT and so far I’m not concerned that robot overlords will displace humans from the intellectual hierarchy any time soon. But it’s certainly a big step forward in machine-generated language and interaction.

Conversing with machines is nothing new, of course. Anyone who uses Siri on their iPhone or Alexa on an Amazon device does this regularly, asking them what the weather is or where to find a good deal on peanut butter.

What makes ChatGPT different is its ability to answer subtle or complex questions, in fluent English. It can also try to render those answers in a particular format – a limerick, for instance – or in the style of a particular writer.

Its results are hit-and-miss. It’s abysmal at limericks, for one thing (although better than it was a few weeks ago). Recently I asked it to write one about peanut butter, and this is the best I could get:

There once was a jar of peanut butter
Whose taste was quite simply a cutter
Spread on bread or a cracker
It didn’t matter, the snack was a whacker
Peanut butter, oh how we utter.

This is limerick-like, I suppose — it has the right number of lines and the AABBA rhyme pattern, but the rhythm is wrong and the words make little sense. The machine broadly understands the idea of a limerick but not what the words it chooses actually mean.

This is because the technology behind ChatGPT isn’t ‘intelligent’ in a human sense. It gobbles up billions of words from the Internet, analyzes them, and tries to predict the best answer based on that massive body of language. It doesn’t understand your questions or comments; it simply uses sophisticated computing to calculate a response that seems close.
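For readers who like to tinker, here is a minimal sketch of that “predict the next words” idea in Python. It uses GPT-2, a small, freely available predecessor of the model behind ChatGPT, loaded through the Hugging Face transformers library. ChatGPT itself isn’t publicly downloadable, so this illustrates the general technique rather than the actual tool:

```python
# A rough illustration of next-token prediction, using the small open-source
# GPT-2 model via Hugging Face's `transformers` library. This is not ChatGPT,
# just a far simpler model built on the same basic idea: given some text,
# repeatedly predict a plausible continuation.

from transformers import pipeline

# Download (on first run) and load GPT-2.
generator = pipeline("text-generation", model="gpt2")

prompt = "There once was a jar of peanut butter"
result = generator(prompt, max_new_tokens=30, do_sample=True)

# The model extends the prompt token by token, choosing each next word from
# statistical patterns learned in its training text. It has no idea what a
# limerick, or peanut butter, actually is.
print(result[0]["generated_text"])
```

Run it a few times and you’ll get a different continuation each run, some fluent, some nonsensical, which is exactly the hit-and-miss behaviour described above, only more pronounced in a model this small.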

Yet that can produce some surprising results. I asked it to write a kōan about fungi – a kōan being a teaching device in Zen Buddhism, usually a paradoxical statement of some kind, meant to deepen a student’s understanding. It gave me this:

“What is the true nature of fungi?” asked a curious student of the Zen master.

“The true nature of fungi is paradoxical,” the Zen master replied. “They are both the givers of life and the harbingers of death. They exist in the space between plant and animal and are essential to the balance of both. To understand fungi is to understand the interconnectedness of all things, and the impermanence of life itself.”

This is fascinating. ChatGPT found a ‘paradoxical’ aspect of fungi — that they both create new life and consume dead life — and recognized that they’re neither plants nor animals but are necessary to both. It’s also written in fluent and quite correct English.

As I say, ChatGPT is generally very good at science. A microbiologist gave it a 10-question college-level exam and the software “blew it away.” It did somewhat less well on a UK high-school history exam but still earned a solid pass. Educators are taking note, and numerous articles are predicting rampant cheating and the end of the essay. (OpenAI, the company behind ChatGPT, recently announced that it’s looking at ways of ‘watermarking’ the tool’s output so that it could be clearly identified as machine-generated.)

There are other risks to this technology, which will only grow more sophisticated and convincing as it matures. ChatGPT itself identified some of those risks when I asked it:

  • Bias: Language models can perpetuate and amplify biases present in the data they were trained on. This can lead to unfair or inaccurate predictions or recommendations.
  • Privacy: Language models can be used to generate highly realistic, yet false, text that can be used to impersonate individuals or organizations.
  • Misinformation: Language models can generate text that contains false or misleading information, which can be spread and believed as true.
  • Job displacement: Language models can automate tasks that were previously done by humans, which could lead to job displacement and economic disruption.
  • Safety: Language models can be used to generate text that could be harmful or dangerous, such as hate speech, fake news, and disinformation.

These all sound plausible. The impact on jobs is a particular wildcard, especially on creative ones like writing; AI tools are already being used to generate routine text like marketing copy and blog posts, or at least first drafts that human writers can then refine. I don’t think that most people who earn their living with words need to be looking for new careers yet, but who knows what this technology will be capable of in five or ten years?

“May you live in interesting times,” goes a supposed Chinese curse — meaning that life is usually safer and simpler when nothing in particular is new or different. Well, these are interesting times, technology-wise. It remains to be seen whether they’ll become a curse, a blessing, or something in between.
