Are Humans Still the Smartest Beings on the Planet?

These new AIs are smart. Or at least seem to be. After all, they are excelling at a wide range of tests typically used to gauge human knowledge and intelligence. Which leads me to ask, “Are humans still the smartest beings on the planet?”

Maybe Not

There are some reasonable and growing arguments that we’re no longer the most intelligent entities on the planet. Let’s go to the exams.

Even before OpenAI launched its GPT-4 model, which is considered far more capable than GPT-3, a study looked at the AI’s ability to match humans in three key areas: general knowledge, SAT exam scores, and IQ.

The outcome? ChatGPT scored in a higher percentile than the average human in all three areas.

AI expert and author Dr. Alan D. Thompson suggests that GPT-3 displays an IQ above 120. If ChatGPT were a human being, it would fall into the “gifted” category, according to Thompson.

And then there’s GPT-4. OpenAI has published extensive data about how it performs on a wide range of exams. For example, the firm claims that the AI passes a simulated bar exam (that is, the one that tests the knowledge and skills attorneys should have before becoming licensed to practice law) “with a score around the top 10% of test takers,” a dramatic improvement over its predecessor, which scored around the bottom 10%.

Maybe We Never Were

Of course, one might argue that we never really were the smartest beings on the planet. We don’t have a way to truly gauge the intelligence, for example, of the huge-brained whales, some of which live for up to 200 years.

I explored this in a previous post, so I won’t delve too deeply into the details. But the truth is that we can only guess at the intelligence of cetaceans such as the humpback, beluga and killer whales as well as various dolphins and porpoises.

Maybe the Question Makes No Sense

One of the interesting things about the large language model (LLM) AIs is that we’ve trained them on human language. Lots and lots of it. We are language-using animals par excellence. Now, we’ve harnessed machine learning to create tools that at least imitate what we do through the use of neural nets and statistics.

We don’t typically say that we are dumber than a calculator, even though calculators can handle mathematics much better than we typically can. Nor do we say we are “weaker” than a bulldozer. Perhaps we just shouldn’t apply the word intelligence to these particular models of AI. What they do and what we do may not be truly comparable.

Maybe So, For Now

I’m certainly no expert, but I have had considerable experience with ChatGPT and Bing Chat. I was an early adopter in both cases and have seen how humblingly smart and yet puzzlingly dense they can be.

For example, I’ve had to convince ChatGPT that the year 1958 came well after World War II, and I’ve seen Bing be stubbornly wrong about prime numbers and basic multiplication. In other cases, I’ve asked Bing for information on a topic from the last week, and it has given me articles several years old.

As for the AI art generators, they are also amazing yet often can’t seem to count limbs or digits or even draw human hands in a non-creepy way.

In other words, there are times when these systems simply lack what we might consider common sense or fundamental skills. We can’t yet trust them to get the details right in every instance.

At the same time, of course, the LLMs are able to write rather good prose on virtually any topic of your choice in seconds. Imagine knowing just about everything on the Internet and being able to deftly and almost instantly weave that information together in an essay, story or even poem. We don’t even have a word for that capability. Savant would not cover it.

Once these systems truly develop “common sense,” however we define that, there will be precious few tasks on which we can best them. Perhaps they are still a long way from that goal, but perhaps not.

Maybe We’re Just Being Extended

In the past, I’ve written about the “extended human” and Kevin Kelly’s idea of the technium, which he discusses in his book What Technology Wants. Many people would not call any one of these LLM AIs a “being” at all. Rather, they’d say they are still just tools made of silicon, fiber-optic cables and electronic blips of 0s and 1s, with no consciousness or even sentience at all. They are little more than mechanical parrots.

In this view, the LLMs are glorified search engines that assemble word patterns with no more thought than a series of ocean waves creating elegant, undulating patterns of sand on a beach. These machines depend on our words, ideas and works of art in order to “think” at all, so they are mere extensions of our own intellects: bulldozers of human symbols, so to speak.

Maybe It Doesn’t Matter

Maybe they will out-intellect us by wider and wider margins, but perhaps it doesn’t really matter if we are no longer the smartest entities on the planet.

For decades, some scholars have argued that we can’t compare our intellects to those of other beings: anthills and beehives, corvids and cephalopods, elephants and grizzly bears. Each animal’s intellect is uniquely good at the things that keep it alive.

Squirrels are geniuses at remembering where they’ve hidden their acorns and negotiating the complexities of forest canopies. We can’t do what they do, but does that make us their inferiors?

No, comparing the two is nonsense, this argument goes.

The AIs will never be better humans than humans because we are uniquely ourselves. Perhaps the era of AIs will give us both the humility and the wisdom to finally understand this.

Which is all well and good until, of course, the machines learn to dominate our world just as humanity has done in the recent past. If this happens, perhaps we will need to learn to live in their shadows, just as squirrels and crows and coyotes have lived in ours.

Feature image by CrisNYCa, April 17, 2018: Le Penseur (The Thinker) in the garden of the Musée Rodin, Paris.