Large language models aren’t people. Let’s stop testing them as if they were.
Synopsis
Multiple researchers claim that large language models can pass tests designed to identify certain cognitive abilities in humans. Such results are feeding hype that these machines will soon come for white-collar jobs. But there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren't convinced at all.
Will Douglas Heaven

When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI's large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you'd find in an IQ test. "I was really shocked by its ability to solve