The Singularity Just Got Nearer…Again…And How!

To me, it already seems like another era. Last October, I wrote a tongue-in-cheeky post called “The Singularity Is Pretty Damned Close…Isn’t It?” I wrote it after the AI art generator revolution had started but before ChatGPT was opened to the public on November 30, 2022. That was only four months ago, of course, but it feels as if everything has sped up since, as if we human beings are now living in dog years. So it’s already high time to revisit the singularity idea.

Are We Living on Hertzian Time Now?

As you may know, the word “hertz” — named after Heinrich Rudolf Hertz, the German physicist who first demonstrated the existence of electromagnetic waves in the late 19th century — is a unit of frequency. More specifically, it’s the rate at which something happens repeatedly over a single second. So, 1 hertz means that something happens just once per second, whereas 100 hertz (or Hz) means it’s happening 100 times per second.

So, an analog clock (yes, I still have one of those) ticks at 1 Hz.

 Animation of wave functions, by Superborsuk

Unless you’re an engineer, you probably think of hertz as part of the lingo folks throw around when buying computers. There, it refers to the speed at which central processing units do their thing: a laptop with a speed of 2.2 GHz has a CPU that runs at 2.2 billion cycles per second. Roughly speaking, that’s the pace at which the computer steps through its instructions.
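If it helps to see the arithmetic, here’s a back-of-the-envelope sketch (a toy Python snippet, purely illustrative): a frequency in hertz is just cycles per second, so the length of a single cycle is its reciprocal.

```python
# Toy arithmetic: frequency (in Hz) is "cycles per second",
# so the duration of one cycle is simply 1 / frequency.

def period_seconds(frequency_hz: float) -> float:
    """Return the length of a single cycle, in seconds."""
    return 1.0 / frequency_hz

analog_clock = 1.0      # 1 Hz: one tick per second
laptop_cpu = 2.2e9      # 2.2 GHz: 2.2 billion cycles per second

print(period_seconds(analog_clock))  # 1.0 second per tick
print(period_seconds(laptop_cpu))    # ~4.5e-10 seconds, about half a nanosecond per cycle
```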

So, my (completely fabricated) notion of Hertzian time refers to the fact that, day to day, we humans are seeing a whole lot more technological change cycles (at least in terms of AI) packed into every second. Therefore, four months now feels like, well, a whole lot of cycles whipping by at a Hertzian tempo. Generative AI is overclocking us.

How Wrong Can I Get?

Back in late October, I wrote, “There’s at least one pursuit that AI has yet to master: the gentle art of conversation. That may be the truest assessment of human level intelligence. At least, that’s the premise underlying the Turing test.”

Many Hertzian cycles later, the world looks very different. Now millions of people are chatting up these proliferating LLMs (I just got my access to Bard the other day, btw) every moment of every day, and we’re just getting started.

It’s true that if you get used to conversing with these models, you can tell that they aren’t quite human. And the main ones go to some lengths to explain to you, insist even, that they are NOT human.

Every Day Feels A Little More Turing Testy

I recently specifically asked ChatGPT3, “Do you think you could pass the Turing Test if properly prepared?” and it responded: “In theory, it is possible that I could be programmed to pass the Turing Test if I were given access to a sufficiently large and diverse dataset of human language and provided with sophisticated natural language processing algorithms.”

I tend to agree. The newest AIs are getting close at this stage, and I imagine that with only a few modifications, they could now fool a lot of people, especially those unfamiliar with their various little “tells.”

Coming to Rants and Reality Shows Near You

I think society will increasingly get Turing testy about this, as people debate whether or not the AIs have crossed that threshold. Or whether they should cross it. Or whether AIs have a soul if they do.

It’ll get weird(er). It’s easy to imagine growing numbers of religious fundamentalists of all types who demand Turing-level AIs that preach their particular doctrines. And who deem those “other” AIs downright satanic.

Or envision reality TV shows determined to exploit the Turing test. Two dozen attractive, nubile wannabe LA actors trying to out-Turing one another on a tropical island. They win a cool mill if they can tell the (somehow telegenic) AI from the (oh-so-hot) real person on the other side of that sexy, synthesized voice. Think of the ratings!

Kurzweil May Have Nailed It

As I said in that first singularity piece, the futurist Ray Kurzweil has predicted that an AI will pass the Turing Test in 2029. I wasn’t so sure. Now I wonder if it won’t be sooner. (I suspect the answer will depend on the test and the expertise of the people involved.)

But will the passing of the Turing Test mean we are right smack in the middle of the singularity? Kurzweil doesn’t think so. He has his sights set on 2045 when, as I understand it, he thinks humanity (or some portion of it) will merge with the superintelligent AIs.

That still seems very science fictional to me, but then I also feel as if we’re all living right smack dab in a science fictional universe right now, one I never thought I’d live to see….

Those Seas Are Rising Fast

My predictions on the rising seas of AI-generated media, however, are still looking pretty good. Of course, I’m not alone in that. A 2022 Europol report noted, “Experts estimate that as much as 90% of online content may be synthetically generated by 2026.”

What’s going to make that number tricky to confirm is that most media will be neither fish nor fowl. It’ll be produced by a combination of humans and AIs. In fact, many of the graphics in my blog posts, including this one, are already born of generative AI (though I try to use it ethically).

Are These the Seas of the Singularity?

The real question to ask now is, “Are we already in the singularity?”

If we use the metaphor of a black hole (the most famous of all singularities), maybe we’ve already passed the proverbial event horizon. We’ve moved into Hertzian time and overclocking because we’re being sucked in. From here, maybe things go faster and faster until every day seems packed with what used to be a decade’s worth of advances.

These rising seas, the virtual tsunamis, might just be symptoms of the immense gravitational forces exerted by the singularity.

Or maybe not….Maybe such half-baked mixed metaphors are just another sign of West Coast hyperbole, bound to go as disappointingly bust as Silicon Valley Bank.

Time’ll tell, I guess.

Though it’ll be interesting to find out if it’s normal time or the Hertzian variety.

The Singularity Is Pretty Damned Close…Isn’t It?

What is the singularity and just how close is it?

The short answers are “it depends who you ask” and “nobody knows.” The longer answers are, well…you’ll see.

Singyuwhatnow?

Wikipedia provides a good basic definition: “The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth will become radically faster and uncontrollable, resulting in unforeseeable changes to human civilization.”

The technological growth in question usually refers to artificial intelligence (AI). The idea is that an AI capable of improving itself quickly goes through a series of cycles in which it gets smarter and smarter at exponential rates. This leads to a super intelligence that throws the world into an impossible-to-predict future.
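To make the “exponential” part concrete, here’s a deliberately crude toy model (the starting point and improvement rate are made-up numbers, not anyone’s forecast). The only point is that growth proportional to current capability compounds very quickly.

```python
# A deliberately crude toy model of recursive self-improvement.
# Each cycle, capability grows in proportion to what's already there --
# which is all "exponential" means. The numbers are entirely made up.

capability = 1.0           # arbitrary starting "intelligence" units
improvement_rate = 0.5     # assumed: each cycle adds 50% of current capability

for cycle in range(1, 11):
    capability *= (1 + improvement_rate)
    print(f"cycle {cycle:2d}: capability = {capability:7.1f}")

# After 10 cycles: ~57x the starting point. After 20: ~3,300x. That runaway
# compounding is the core of the classic singularity argument.
```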

Whether this sounds awesome or awful largely depends on your view of what a superintelligence would bring about, something that no one really knows.

The impossible-to-predict nature is an aspect of why, in fact, it’s called a singularity, a term that originates with mathematics and physics. In math, singularities pop up when the numbers stop making sense, as when the answer to an equation turns out to be infinity. It’s also associated with phenomena such as black holes, where our understanding of conventional physics breaks down. So the term, as applied to technology, suggests a time beyond which the world stops making sense (to us) and so becomes impossible to forecast.
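For a concrete example of the mathematical kind, take the standard textbook case: the function f(x) = 1/x behaves perfectly well everywhere except at x = 0, where

\[
\lim_{x \to 0^{+}} \frac{1}{x} = +\infty ,
\]

and the value simply stops being a number. That blow-up point is a singularity in the original, mathematical sense of the word.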

How Many Flavors Does the Singularity Come In?

Image from Wikipedia: major evolutionary transitions in information processing

Is a runaway recursively intelligent AI the only path to a singularity? Not if you count runaway recursively intelligent people who hook their little monkey brains up to some huge honking artificial neocortices in the cloud.

Indeed, it’s the human/AI interface and integration scenario that folks like inventor-author-futurist Ray Kurzweil seem to be banking on. To him, from what I understand (I haven’t read his newest book), that’s when the true tech singularity kicks in. At that point, humans essentially become supersmart, immortal(ish) cyborg gods.

Yay?

But there are other possible versions as well. There’s the one where we hook up our little monkey brains into one huge, networked brain to become the King Kong of superintelligences. Or the one where we grow a supersized neocortex in an underground vat the size of the Chesapeake Bay. (A Robot Chicken nightmare made more imaginable by the recent news that researchers got a cluster of brain cells to play Pong in a lab–no, really).

Singularity: Inane or Inevitable?

The first thing to say is that maybe the notion is kooky and misguided, the pipedream of geeks yearning to become cosmic comic book characters. (In fact, the singularity is sometimes called, with varying degrees of sarcasm, the Rapture for nerds.)

I’m tempted to join in the ridicule of the preposterous idea. Except for one thing: AI and other tech keeps proving the naysayers wrong. AI will never beat the best chess players. Wrong. Okay, but it can’t dominate something as fuzzy as Jeopardy. Wrong. Surely it can’t master the most complex and challenging of all human games, Go. Yawn, wrong again.

After a while,  anyone who bets against AI starts looking like a chump.

Well, games are for kids anyway. AI can’t do something as slippery as translate languages or as profound as unravel the many mysteries of protein folding.  Well, actually…

But it can’t be artistic…can it? (“I don’t do drugs. I am drugs” quips DALL-E).

Getting Turing Testy

There’s at least one pursuit that AI has yet to master: the gentle art of conversation. That may be the truest assessment of human level intelligence. At least, that’s the premise underlying the Turing test.

The test assumes you have a questioner reading a computer screen (or the equivalent). The questioner has two conversations via screen and keyboard: one is with a computer, the other with another person. If the questioner can’t figure out which one is the computer, then the computer passes the test because it can’t be distinguished from the human being.
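Stripped to its skeleton, the protocol is simple enough to sketch in a few lines of toy Python (purely illustrative, with a coin-flipping stand-in for an interrogator who genuinely can’t tell the difference):

```python
import random

# A toy sketch of the Turing test's structure, not a real implementation.
# The interrogator chats with two hidden respondents -- one human, one machine --
# then guesses which seat holds the machine. The machine "passes" if, over
# many rounds, the interrogator can't beat a coin flip (about 50%).

def run_trial() -> bool:
    """One round: return True if the interrogator spots the machine."""
    machine_seat = random.randrange(2)  # the machine is hidden behind seat 0 or seat 1
    guess = random.randrange(2)         # stand-in for a judge who can't tell the difference
    return guess == machine_seat

trials = 10_000
correct = sum(run_trial() for _ in range(trials))
print(f"machine identified in {correct / trials:.1%} of rounds")  # ~50% means it passed
```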

Of course, this leaves us with four (at least!) big questions.

First, when will a machine finally pass that final exam?

Second, what does it mean if and when a machine does? Is it truly intelligent? How about conscious?

Third, if the answer to those questions seems to be yes, what’s next? Does it get a driver’s license? A FOX News slot? An OKCupid account?

Fourth, will such a computer spark the (dun dun dun) singularity?

The Iffy Question of When the Singularity Arrives

In a recent podcast interview, Kurzweil predicted that some soon-to-be-famous digital mind will pass the Turing Test in 2029.

“2029?” I thought. “As in just 7-and-soon-to-be-6-years-away 2029?”

Kurzweil claims he’s been predicting that same year for a long time, so perhaps I read about it back in 2005 in his book The Singularity Is Near (now lost somewhere in the hustle and bustle of my bookshelves). But back then, of course, it was a quarter of a century away. Now, well, it seems damn near imminent.

Of course, Kurzweil may well turn out to be wrong. As much as he loves to base his predictions on the mathematics of exponentials, he can get specific dates wrong. For example, as I wrote in a previous post, he’ll wind up being wrong about the year solar power becomes pervasive (though he may well turn out to be right about the overall trend).

So maybe a computer won’t pass a full-blown Turing test in 2029. Perhaps it’ll be in the 2030s or 2040s. That would be close enough, in my book. Indeed, most experts believe it’s just a matter of time. One survey issued at the Joint Multi-Conference on Human-Level Artificial Intelligence found that just 2% of participants predicted that an artificial general intelligence (or AGI, meaning that the machine thinks at least as well as a human being) would never occur. Of course, that’s not exactly an unbiased survey cohort, is it?

Anyhow, let’s say the predicted timeframe when the Turing test is passed is generally correct. Why doesn’t Kurzweil set the date of the singularity on the date that the Turing test is passed (or the date that a human-level AI first emerges)? After all, at that point, the AI celeb could potentially code itself so it can quickly become smarter and smarter, as per the traditional singularity scenario.

But nope. Kurzweil is setting his sights on 2045, when we fully become the supercyborgs previously described.

What Could Go Wrong?

So, Armageddon or Rapture? Take your pick.

What’s interesting to my own little super-duper-unsuper brain is that folks seem more concerned about computers leaving us in the intellectual dust than us becoming ultra-brains ourselves. I mean, sure, our digital super-brain friends may decide to cancel humanity for reals. But they probably won’t carry around the baggage of our primeval, reptilian and selfish fear-fuck-kill-hate brains–or, what Jeff Hawkins calls our “old brain.”

In his book A Thousand Brains, Hawkins writes about the ongoing frenemy-ish relationship between our more rational “new brain” (the neocortex) and the far more selfishly emotional though conveniently compacted “old brain” (just 30% of our overall brain).

Basically, he chalks up the risk of human extinction (via nuclear war, for example) to old-brain-driven crappola empowered by tech built via the smart-pantsy new brain. For example, envision a pridefully pissed off Putin nuking the world with amazing missiles built by egghead engineers. And all because he’s as compelled by his “old brain” as a tantrum-throwing three-year-old after a puppy eats his cookie.

Now envision a world packed with superintelligent primate gods still (partly) ruled by their toddler old-brain instincts. Yeah, sounds a tad dangerous to me, too.

The Chances of No Chance

Speaking of Hawkins, he doesn’t buy the whole singularity scene. First, he argues that we’re not as close to creating truly intelligent machines as some believe. Today’s most impressive AIs tend to rely on deep learning, and Hawkins believes this is not the right path to true AGI. He writes,

Deep learning networks work well, but not because they solved the knowledge representation problem. They work well because they avoided it completely, relying on statistics and lots of data instead….they don’t possess knowledge and, therefore, are not on the path to having the ability of a five-year-old child.

Second, even when we finally build AGIs (and he thinks we certainly will if he has anything to say about it), they won’t be driven by the same old-brain compulsions as we are. They’ll be more rational because their architecture will be based on the human neocortex. Therefore, they won’t have the same drive to dominate and control because they will not have our nutball-but-gene-spreading monkey-brain impulses.

Third, Hawkins doesn’t believe that an exponential increase in intelligence will suddenly allow such AGIs to dominate. He believes a true AGI will be characterized by a mind made up of “thousands of small models of the world, where each model uses reference frames to store knowledge and create behaviors.” (That makes more sense if you read his book, A Thousand Brains: A New Theory of Intelligence). He goes on:

Adding this ingredient [meaning the thousands of reference frames] to machines does not impart any immediate capabilities. It only provides a substrate for learning, endowing machines with the ability to learn a model of the world and thus acquire knowledge and skills. On a kitchen stovetop you can turn a knob to up the heat. There isn’t an equivalent knob to “up the knowledge” of a machine.

An AGI won’t become a superintelligence just by virtue of writing better and better code for itself in the span of a few hours. It can’t automatically think itself into a superpower. It still needs to learn via experiments and experience, which takes time and the cooperation of human scientists.

Fourth, Hawkins thinks it will be difficult if not impossible to connect the human neocortex to mighty computing machines in the way that Kurzweil and others envision. Even if we can do it someday, that day is probably a long way off.

So, no, the singularity is not near, he seems to be arguing. But a true AGI may, in fact, become a reality sometime in the next decade or so–if engineers will only build an AGI based on his theory of intelligence.

So, What’s Really Gonna Happen?

Nobody knows who’s right or wrong at this stage. Maybe Kurzweil, maybe Hawkins, maybe neither or some combination of both. Here’s my own best guess for now.

Via deep learning approaches, computer engineers are going to get closer and closer to a computer capable of passing the Turing test, but by 2029 it won’t be able to fool an educated interrogator who is well versed in AI.

Or, if a deep-learning-based machine does pass the Turing test before the end of this decade, many people will argue that it only displays a façade of intelligence, perhaps citing the famous Chinese-room argument (which is a philosophical can of worms that I won’t get into here).

That said, eventually we will get to a Turing-test-passing machine that convinces even most of the doubters that it’s truly intelligent (and perhaps even conscious, an even higher hurdle to clear). That machine’s design will probably hew more closely to the dynamics of the human brain than do the (still quite impressive) neural networks of today.

Will this lead to a singularity? Well, maybe, though I’m convinced enough by the arguments of Hawkins to believe that it won’t literally happen overnight.

How about the super-cyborg-head-in-the-cloud-computer kind of singularity? Well, maybe that’ll happen someday, though it’s currently hard to see how we’re going to work out a seamless, high-bandwidth brain/supercomputer interface anytime soon. It’s going to take time to get it right, if we ever do. I guess figuring all those details out will be the first homework we assign to our AGI friends. That is, hopefully friends.

But here’s the thing. If we ever do figure out the interface, it seems possible that we’ll be “storing” a whole lot of our artificial neocortex reference frames (let’s call them ANREFs) in the cloud. If that’s true, then we may be able to swap ANREFs with our friends and neighbors, which might mean we can quickly share skills I-know-Kung-Fu style. (Cool, right?)

It’s also possible that the reticulum of all those acquired ANREFs will outlive our mortal bodies (assuming they stay mortal), providing a kind of immortality to a significant hunk of our expanded brains. Spooky, yeah? Who owns our ANREFs once the original brain is gone? Now that would be the IP battle of all IP battles!

See how weird things can quickly get once you start to think through singularity stuff? It’s kind of addictive, like eating future-flavored pistachios.

Anyway, here’s one prediction I’m pretty certain of: it’s gonna be a frigging mess!

Humanity will not be done with its species-defining conflicts, intrigues, and massively stupid escapades as it moves toward superintelligence. Maybe getting smarter–or just having smarter machines–will ultimately make us wiser, but there’s going to be plenty of heartache, cruelty, bigotry, and turmoil as we work out those singularity kinks.

I probably won’t live to see the weirdest stuff, but that’s okay. It’s fun just to think about, and, for better and for worse, we already live in interesting times.

Addendum: Since I wrote this original piece, things have been moving so quickly in the world of AI that I revisited the topic in The Singularity Just Got Nearer…Again…And How!

Featured image by Adindva1: a demonstration of brain-computer interface technology, in which a plastic arm is controlled by thought alone. The frame was shot on the set of the film "Brain: The Second Universe."