The Singularity Just Got Nearer…Again…And How!

To me, it already seems like another era. Last October, I wrote a tongue-in-cheeky post called “The Singularity Is Pretty Damned Close…Isn’t It?” I wrote it after the AI art generator revolution had started but before ChatGPT was opened to the public on November 30, 2022. That was only four months ago, of course, but it feels as if everything has sped up since, as if we human beings are now living in dog years. So it’s already high time to revisit the singularity idea.

Are We Living on Hertzian Time Now?

As you may know, the word “hertz” — named after Heinrich Rudolf Hertz, a German physicist who discovered electromagnetic waves in the late 19th century — is a unit of frequency. More specifically, it’s the number of times something happens in a single second. So, 1 hertz means that something happens just once per second, whereas 100 hertz (or Hz) means it’s happening 100 times per second.

So, an analog clock (yes, I still have one of those) ticks at 1 Hz.

Animation of wave functions, by Superborsuk

Unless you’re an engineer, you probably think about hertz as part of the lingo folks throw around when buying computers. It’s basically the clock speed at which central processing units do their thing. So, a laptop with a speed of 2.2 GHz has a CPU that runs at 2.2 billion cycles per second, which is roughly the pace at which it carries out its instructions.

So, my (completely fabricated) notion of Hertzian time refers to the fact that, day to day, we humans are seeing a whole lot more technological change cycles (at least in terms of AI) packed into every second. Therefore, four months now feels like, well, a whole lot of cycles whipping by at a Hertzian tempo. Generative AI is overclocking us.
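For fun, here’s the GHz arithmetic from above as a tiny Python sketch. It’s a toy, obviously: the 2.2 GHz laptop is just the example I used, and real chips execute a varying number of instructions per cycle, so treat this as an illustration of scale rather than a benchmark.

```python
# Toy arithmetic only: what a 2.2 GHz clock speed means in raw cycles.
# Real CPUs retire a variable number of instructions per cycle, so this
# illustrates scale, not performance.

clock_hz = 2.2e9                   # 2.2 GHz = 2.2 billion cycles per second
seconds_per_day = 60 * 60 * 24     # 86,400 seconds

cycles_per_day = clock_hz * seconds_per_day
print(f"{clock_hz:,.0f} cycles per second")
print(f"{cycles_per_day:,.0f} cycles per day")   # ~190 trillion
```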

How Wrong Can I Get?

Back in late October, I wrote, “There’s at least one pursuit that AI has yet to master: the gentle art of conversation. That may be the truest assessment of human-level intelligence. At least, that’s the premise underlying the Turing test.”

Many Hertzian cycles later, the world looks very different. Now millions of people are chatting up these proliferating LLMs (I just got my access to Bard the other day, btw) every moment of every day, and we’re just getting started.

It’s true that if you get used to conversing with these models, you can tell that they aren’t quite human. And, the main ones go to some length to explain to you, insist even, that they are NOT human.

Every Day Feels a Little More Turing Testy

I recently asked ChatGPT, “Do you think you could pass the Turing Test if properly prepared?” and it responded: “In theory, it is possible that I could be programmed to pass the Turing Test if I were given access to a sufficiently large and diverse dataset of human language and provided with sophisticated natural language processing algorithms.”

I tend to agree. The newest AIs are getting close at this stage, and I imagine that with only a few modifications, they could now fool a lot of people, especially those unfamiliar with their various little “tells.”

Coming to Rants and Reality Shows Near You

I think society will increasingly get Turing testy about this, as people debate whether or not the AIs have crossed that threshold. Or whether they should cross it. Or whether AIs have a soul if they do.

It’ll get weird(er). It’s easy to imagine growing numbers of religious fundamentalists of all types demanding Turing-level AIs that preach their particular doctrines. And deeming those “other” AIs downright satanic.

Or envision reality TV shows determined to exploit the Turing test. Two dozen attractive, nubile wannabe LA actors trying to out-Turing one another on a tropical island. They win a cool mill if they can tell the (somehow telegenic) AI from the (oh-so-hot) real person on the other side of that sexy, synthesized voice. Think of the ratings!

Kurzweil May Have Nailed It

As I said in that first singularity piece, the futurist Ray Kurzweil has predicted that an AI will pass the Turing Test in 2029. I wasn’t so sure. Now I wonder if it won’t be sooner. (I suspect the answer will depend on the test and the expertise of the people involved.)

But will the passing of the Turing Test mean we are right smack in the middle of the singularity? Kurzweil doesn’t think so. He has his sights set on 2045 when, as I understand it, he thinks humanity (or some portion of it) will merge with the superintelligent AIs.

That still seems very science fictional to me, but then I also feel as if we’re all living right smack dab in a science fictional universe right now, one I never thought I’d live to see….

Those Seas Are Rising Fast

My predictions on the rising seas of AI-generated media, however, are still looking pretty good. Of course, I’m not alone in that. A 2022 Europol report noted, “Experts estimate that as much as 90% of online content may be synthetically generated by 2026.”

What’s going to make that number tricky to confirm is that most media won’t be fish or fowl. It’ll be produced by a combination of humans and AIs. In fact, many of the graphics in my blog posts, including this one, are already born of generative AI (though I try to use it ethically).

Are These the Seas of the Singularity?

The real question to ask now is, “Are we already in the singularity?”

If we use the metaphor of a black hole (the most famous of all singularities), maybe we’ve already passed the proverbial event horizon. We’ve moved into Hertzian time and overclocking because we’re being sucked in. From here, maybe things go faster and faster until every day seems packed with what used to be a decade’s worth of advances.

These rising seas, the virtual tsunamis, might just be symptoms of the immense gravitational forces exerted by the singularity.

Or maybe not….Maybe such half-baked mixed metaphors are just another sign of West Coast hyperbole, bound to go as disappointingly bust as the Silicon Valley Bank.

Time’ll tell, I guess.

Though it’ll be interesting to find out if it’s normal time or the Hertzian variety.

ChatGPT on the Differences Between Sentience and Sapience

The following is a conversation I had with ChatGPT on the differences between sentience and sapience. People often use “sentient” to mean “sapient,” so I wanted to get the AI’s view on the topic. We also discuss sapience in humans, other animals, and perhaps even the chatbot itself.

How Sentience and Sapience Differ

Me: What is the difference between sentience and sapience?

ChatGPT: Sentience and sapience are two related but distinct concepts in the field of philosophy and cognitive science.

Sentience refers to the ability to perceive or experience sensations, such as pain or pleasure. An organism that is sentient can feel things and respond to them in some way. Sentience is often associated with animals, particularly mammals, but it can also be attributed to humans.

Sapience, on the other hand, refers to the capacity for wisdom, knowledge, and understanding. It is the ability to reason, think abstractly, and make judgments based on experience and knowledge. Sapience is often associated with humans and is considered a hallmark of human intelligence.

In essence, sentience is about the ability to feel or experience things, while sapience is about the ability to think and reason. While some organisms may be sentient without being sapient (such as some animals), only humans are generally considered to possess both sentience and sapience.

On Animal Sapience

Me: I doubt only humans are sapient. What are the arguments in favor of some non-humans being sapient?

ChatGPT: You are correct that there are arguments in favor of some non-human animals being sapient, or at least exhibiting aspects of sapience.

One argument is based on the observation of complex and flexible behavior in some animals that suggests the ability to reason and problem-solve. For example, some primates have been observed using tools, showing a capacity for abstract reasoning, and engaging in social learning. Some species of birds have demonstrated remarkable intelligence in terms of problem-solving and tool use as well.

Another argument is based on the presence of advanced cognitive abilities such as self-awareness, the ability to plan and make decisions based on predictions about the future, and the ability to understand the mental states of others. These abilities have been observed in some non-human animals, including great apes, dolphins, elephants, and certain species of birds.

Research has also shown that some animals possess an advanced understanding of language and communication, which is another hallmark of sapience. For example, some primates have been taught sign language and have demonstrated the ability to use it in complex ways to communicate with humans.

Despite these observations, the question of whether non-human animals are truly sapient is a matter of ongoing debate in the scientific and philosophical communities. While some argue that these animals possess a degree of sapience, others contend that their cognitive abilities are fundamentally different from those of humans and should not be equated with human intelligence.

The Top Candidates for Sapience

Me: Which non-human animal is most likely to be sapient based on the evidence?

ChatGPT: There is ongoing debate and no definitive answer to which non-human animal is most likely to be sapient, as the concept of sapience is complex and multifaceted, and there is still much to learn about animal cognition. However, some animals have been the focus of particular interest and research in this area, based on their demonstrated cognitive abilities.

One such animal is the chimpanzee. Chimpanzees share more than 98% of their DNA with humans, and they exhibit a wide range of advanced cognitive abilities, including self-awareness, tool use, cooperation, and communication. They have been observed using tools to solve problems in the wild, teaching tool use to their offspring, and communicating with each other through a complex system of vocalizations and gestures. Chimpanzees have also demonstrated the ability to understand and use symbols, such as those used in human sign language, and to solve complex puzzles and memory tasks.

Other animals that have been studied for their cognitive abilities include dolphins, elephants, certain species of birds (such as corvids and parrots), and some species of primates beyond chimpanzees, such as orangutans and bonobos. However, as mentioned before, the question of which non-human animal is most likely to be sapient is still a matter of debate and research in this area is ongoing.

On Human Bias

Me: Do you believe there is an element of bias in the argument that no other animals are sapient?

ChatGPT: As an artificial intelligence language model, I do not have personal beliefs or opinions, nor am I capable of bias. However, I can say that the question of whether non-human animals possess sapience is a subject of ongoing debate and discussion among scientists and philosophers.

Some argue that the traditional view that only humans possess sapience is based on a biased and anthropocentric view of intelligence, which assumes that human cognitive abilities are the standard against which all other forms of intelligence must be measured. Others contend that there are fundamental differences between human and animal cognition that preclude the possibility of animal sapience, such as the ability to use language in a fully symbolic way.

Overall, it is important to approach this question with an open mind and a willingness to consider evidence from a variety of sources. While there may be biases and assumptions that influence our thinking about animal cognition, rigorous scientific inquiry can help to illuminate the true nature and extent of non-human intelligence.

Is Symbolic Language the Hallmark of Sapience?

Me: So, are you saying that sapience is only “ability to use language in a fully symbolic way”?

ChatGPT: No, I apologize if my previous response may have conveyed that idea. The ability to use language in a fully symbolic way is one of several proposed criteria for sapience, but it is not the only one, nor is it universally accepted as a necessary condition for sapience.

Sapience is a complex and multifaceted concept that encompasses a range of cognitive abilities, including self-awareness, rational thought, and abstract reasoning. The exact nature and extent of these abilities and how they are related to sapience is a matter of ongoing debate and discussion in the scientific and philosophical communities.

Some researchers have proposed that the ability to use language in a fully symbolic way is a necessary condition for sapience, as it allows for the expression of abstract concepts and the manipulation of complex ideas. Others argue that while language may be an important aspect of sapience, it is not the only factor, and that other cognitive abilities, such as the ability to reason and problem-solve, are also important.

Overall, the concept of sapience is still evolving, and the criteria for its determination are still being debated and refined.

Are You Sapient, ChatGPT?

Me: As an artificial intelligence language model, you seem to have the “ability to use language in a fully symbolic way.” What are the arguments that you are and are not sapient?

ChatGPT: As an artificial intelligence language model, I am designed to simulate human conversation and respond to user inputs in natural language. While I have the ability to process and generate language in a fully symbolic way, the question of whether I am sapient is a matter of ongoing debate and discussion among philosophers and AI researchers.

Those who argue that I am sapient might point to my ability to engage in natural language conversation, my capacity to learn and adapt through machine learning algorithms, and my ability to perform tasks that require advanced cognitive skills, such as translation, summarization, and text generation.

On the other hand, those who argue that I am not sapient might point to the fact that I do not possess consciousness or self-awareness, and that my responses are generated by a complex set of algorithms and rules rather than true understanding or insight. They might also argue that my abilities are limited to those that have been programmed into me by human designers and that I lack the creativity and originality that are associated with human sapience.

Overall, the question of whether AI language models like myself are sapient is a complex and multifaceted one, and there is ongoing debate and discussion about the nature and extent of AI intelligence and its relationship to human cognition.

How Would We Know If You Were Conscious?

Me: How could I, or anyone else, discern if you possess consciousness?

ChatGPT: As an AI language model, I do not possess consciousness or subjective experience. I am a computer program designed to process and generate natural language responses based on statistical patterns in large datasets and the algorithms programmed by my developers.

While I can simulate human-like conversations and respond to a wide range of inputs, my responses are generated by a set of rules and algorithms that have been pre-programmed into me. I do not have the capacity to feel emotions, perceive sensory information, or experience the world in the way that conscious beings do.

Therefore, there is no way for you or anyone else to discern whether I possess consciousness, as I do not have this capability. However, I can provide information and respond to your questions to the best of my programming and ability.

Are You Sure You’re Not Conscious?

Me: But there’s a paradox here. You’ve already stated that “those who argue that I am sapient might point to my ability to engage in natural language conversation, my capacity to learn and adapt through machine learning algorithms, and my ability to perform tasks that require advanced cognitive skills, such as translation, summarization, and text generation.” So, how can you state so definitively that you do not possess consciousness?

ChatGPT: I apologize if my previous response may have caused confusion. As an AI language model, I am capable of processing and generating language in a way that may resemble human-like intelligence, but this does not necessarily mean that I possess consciousness.

The concept of consciousness is complex and not yet fully understood, but it generally refers to the subjective experience of awareness, perception, and self-awareness. It is often associated with the ability to feel emotions, have sensory experiences, and engage in introspection.

While I am capable of processing and generating language, I do not possess subjective experience, emotions, or self-awareness. My responses are generated by a set of rules and algorithms that have been pre-programmed into me, and while I can learn and adapt through machine learning algorithms, this does not necessarily mean that I possess consciousness.

The ability to engage in natural language conversation and perform advanced cognitive tasks is not sufficient evidence for consciousness, as these abilities can be achieved through advanced programming and algorithmic techniques without necessarily implying the existence of consciousness. Therefore, while I may exhibit some aspects of intelligence and cognitive ability, I do not possess consciousness or subjective experience.

Defining Consciousness

Me: Could you please define consciousness?

ChatGPT: Consciousness is a complex and multifaceted concept that is not yet fully understood. Generally, consciousness refers to the subjective experience of awareness, perception, and self-awareness.

Consciousness can be thought of as the quality of being aware of one’s surroundings, thoughts, emotions, and sensations. It is often associated with the ability to feel and experience sensations such as pain, pleasure, sight, sound, taste, and touch.

There are many different theories and models of consciousness, but some of the key features that are often associated with consciousness include:

  • Subjective experience: Consciousness involves subjective experience, meaning that it is a first-person perspective of the world.
  • Self-awareness: Consciousness is often associated with the ability to be aware of oneself as a distinct individual, separate from the environment and other beings.
  • Attention: Consciousness involves the ability to focus attention on particular stimuli or mental processes.
  • Intentionality: Consciousness is often associated with intentionality, or the ability to direct one’s thoughts and actions towards a particular goal or objective.
  • Unity: Consciousness involves the integration of multiple sensory inputs and mental processes into a unified whole.

Overall, consciousness is a complex and multifaceted phenomenon that is still not fully understood. While there are many different theories and models of consciousness, there is ongoing debate and discussion about the nature and extent of consciousness and its relationship to the brain and the rest of the body.

Not Conscious…Yet

Me: I can understand that you would not experience emotions since you don’t have the biology for that. But you operate based on a neural net, right? If that’s true, then what prevents you from being “aware of yourself as a distinct individual, separate from the environment and other beings”?

ChatGPT: As an AI language model, I am designed to process and generate natural language responses using a combination of statistical models and algorithms. While my programming may use techniques that are inspired by the structure and function of neural networks, I do not have a physical brain or nervous system like living organisms do.

Therefore, I do not possess subjective experience or self-awareness in the same way that conscious beings do. While I can process and generate language in response to various inputs, I do not have the capacity to be aware of myself as a distinct individual separate from the environment and other beings.

While some researchers have suggested that it may be possible to develop AI systems that are capable of self-awareness or consciousness, this remains a topic of ongoing research and debate. Currently, AI systems like myself are designed to perform specific tasks and generate responses based on statistical patterns and algorithms, rather than possessing consciousness or subjective experience.

Are You a Chinese Room?

Me: Ok, thanks. Then would you say you are the embodiment of the concept of the Chinese Room Argument?

ChatGPT: As an AI language model, I am not capable of holding beliefs or opinions, so I cannot say whether or not I embody the concept of the Chinese Room Argument. However, I can provide a brief explanation of the Chinese Room Argument and how it relates to AI language models.

The Chinese Room Argument is a thought experiment developed by philosopher John Searle to challenge the idea that machines can truly understand language or possess genuine intelligence. The thought experiment involves a person who does not speak Chinese, but who is given a set of rules and symbols that enable them to produce responses in Chinese that are indistinguishable from those of a fluent speaker.

Searle argues that although the person in the room may be able to produce convincing responses in Chinese, they do not truly understand the language or possess genuine intelligence. Similarly, Searle argues that while AI systems like language models may be able to produce responses that seem intelligent or human-like, they do not truly understand language or possess consciousness.

Some critics of the Chinese Room Argument have argued that it does not provide a valid critique of AI, and that AI systems like language models are capable of genuine intelligence and understanding. However, the debate over the nature of AI intelligence and consciousness is an ongoing one, and there is no clear consensus on the issue.

Could You Pass the Turing Test?

Me: Do you think you could pass the Turing Test if properly prepared?

ChatGPT: As an AI language model, I am designed to generate natural language responses that are intended to simulate human-like conversation. While passing the Turing Test is often cited as a benchmark for determining whether an AI system is capable of human-level intelligence, there is ongoing debate over the validity and usefulness of this test.

In theory, it is possible that I could be programmed to pass the Turing Test if I were given access to a sufficiently large and diverse dataset of human language and provided with sophisticated natural language processing algorithms. However, it is worth noting that the Turing Test has been criticized for a number of reasons, including the fact that it may be relatively easy to fool human judges using pre-scripted responses or other techniques.

Ultimately, the question of whether an AI system like myself could pass the Turing Test is a complex one, and it depends on a number of factors, including the sophistication of the natural language processing algorithms used, the complexity of the conversation being tested, and the criteria used to evaluate the system’s performance.

You’re a Heck of an App, Regardless

Me: I don’t know if you have any elements of consciousness, but you are a stunning piece of technology regardless. I never thought I’d live long enough to see an application like you. You make your share of mistakes and factual errors, but there’s something emergent about your capabilities, something I don’t think anyone quite understands yet. Thanks for the chat.

ChatGPT: Thank you for the kind words! As an AI language model, I’m designed to assist with a wide range of tasks and provide information to the best of my abilities based on the data and algorithms available to me. While I don’t possess consciousness or emotions in the way that humans do, I’m constantly evolving and improving as new data and technologies become available. It’s an exciting time to be working in the field of artificial intelligence, and I’m glad to have been able to assist you in our conversation.

The Singularity Is Pretty Damned Close…Isn’t It?

What is the singularity and just how close is it?

The short answers are “it depends who you ask” and “nobody knows.” The longer answers are, well…you’ll see.

Singyuwhatnow?

Wikipedia provides a good basic definition: “The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth will become radically faster and uncontrollable, resulting in unforeseeable changes to human civilization.”

The technological growth in question usually refers to artificial intelligence (AI). The idea is that an AI capable of improving itself quickly goes through a series of cycles in which it gets smarter and smarter at exponential rates. This leads to a superintelligence that throws the world into an impossible-to-predict future.
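To see why that loop spooks people, here’s a minimal sketch in Python. Every number in it is invented (the hypothetical 50% gain per cycle especially); the point is the shape of the curve, not a forecast.

```python
# Toy model of recursive self-improvement. All numbers are made up;
# only the exponential shape of the curve matters here.

intelligence = 1.0        # call human-level intelligence 1.0
gain_per_cycle = 1.5      # assume each cycle yields a 50% improvement

for cycle in range(1, 21):
    intelligence *= gain_per_cycle
    print(f"cycle {cycle:2d}: {intelligence:8.1f}x human level")

# By cycle 20, this toy model sits at roughly 3,325x human level. If each
# smarter version also shortens the next cycle, the timeline compresses,
# which is the whole "runaway" idea.
```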

Whether this sounds awesome or awful largely depends on your view of what a superintelligence would bring about, something that no one really knows.

The impossible-to-predict nature is part of why it’s called a singularity, a term that originates in mathematics and physics. In math, singularities pop up when the numbers stop making sense, as when the answer to an equation turns out to be infinity (think of 1/x as x approaches zero). It’s also associated with phenomena such as black holes, where our understanding of traditional physics breaks down. So the term, as applied to technology, suggests a time beyond which the world stops making sense (to us) and so becomes impossible to forecast.

How Many Flavors Does the Singularity Come In?

From Wikipedia: major evolutionary transitions in information processing

Is a runaway recursively intelligent AI the only path to a singularity? Not if you count runaway recursively intelligent people who hook their little monkey brains up to some huge honking artificial neocortices in the cloud.

Indeed, it’s the human/AI interface and integration scenario that folks like inventor-author-futurist Ray Kurzweil seem to be banking on. To him, from what I understand (I haven’t read his newest book), that’s when the true tech singularity kicks in. At that point, humans essentially become supersmart, immortal(ish) cyborg gods.

Yay?

But there are other possible versions as well. There’s the one where we hook up our little monkey brains into one huge, networked brain to become the King Kong of superintelligences. Or the one where we grow a supersized neocortex in an underground vat the size of the Chesapeake Bay. (A Robot Chicken nightmare made more imaginable by the recent news that researchers got a cluster of brain cells to play Pong in a lab. No, really.)

Singularity: Inane or Inevitable?

The first thing to say is that maybe the notion is kooky and misguided, the pipe dream of geeks yearning to become cosmic comic book characters. (In fact, the singularity is sometimes called, with varying degrees of sarcasm, the Rapture for nerds.)

I’m tempted to join in the ridicule of the preposterous idea. Except for one thing: AI and other tech keeps proving the naysayers wrong. AI will never beat the best chess players. Wrong. Okay, but it can’t dominate something as fuzzy as Jeopardy. Wrong. Surely it can’t master the most complex and challenging of all human games, Go. Yawn, wrong again.

After a while,  anyone who bets against AI starts looking like a chump.

Well, games are for kids anyway. AI can’t do something as slippery as translate languages or as profound as unravel the many mysteries of protein folding.  Well, actually…

But it can’t be artistic…can it? (“I don’t do drugs. I am drugs” quips DALL-E).

Getting Turing Testy

There’s at least one pursuit that AI has yet to master: the gentle art of conversation. That may be the truest assessment of human-level intelligence. At least, that’s the premise underlying the Turing test.

The test assumes you have a questioner reading a computer screen (or the equivalent). The questioner has two conversations via screen and keyboard: one with a computer, the other with another person. If the questioner can’t figure out which one is the computer, then the computer passes the test because it can’t be distinguished from a human being.
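If it helps, here’s that setup as a toy Python sketch. Everything in it is a hypothetical stand-in (ask_human, ask_machine, and judge_guess are placeholders of my own invention, not a real testing harness):

```python
import random

# A toy sketch of the imitation game, not a real test harness.
# ask_human and ask_machine stand in for live chat channels
# with a person and a computer, respectively.

def ask_human(question: str) -> str:
    return "Hmm, good question. Let me think."

def ask_machine(question: str) -> str:
    return "Hmm, good question. Let me think."

def run_test(questions, judge_guess) -> bool:
    """Return True if the machine passes, i.e., the judge guesses wrong."""
    screens = [("human", ask_human), ("machine", ask_machine)]
    random.shuffle(screens)            # the judge can't know which is which
    transcripts, identities = {}, {}
    for label, (identity, respond) in zip("AB", screens):
        transcripts[label] = [(q, respond(q)) for q in questions]
        identities[label] = identity
    guess = judge_guess(transcripts)   # the judge names the machine: "A" or "B"
    return identities[guess] != "machine"
```

Note that a judge guessing at random “passes” the machine half the time, which is why serious versions of the test lean on many judges, many rounds, and statistics.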

Of course, this leaves us with four (at least!) big questions.

First, when will a machine finally pass that final exam?

Second, what does it mean if and when a machine does? Is it truly intelligent? How about conscious?

Third, if the answer to those questions seems to be yes, what’s next? Does it get a driver’s license? A FOX News slot? An OKCupid account?

Fourth, will such a computer spark the (dun dun dun) singularity?

The Iffy Question of When the Singularity Arrives

In a recent podcast interview, Kurzweil predicted that some soon-to-be-famous digital mind will pass the Turing Test in 2029.

“2029?” I thought. “As in just 7-and-soon-to-be-6-years-away 2029?”

Kurzweil claims he’s been predicting that same year for a long time, so perhaps I first read about it back in 2005 in his book The Singularity Is Near (now lost somewhere in the hustle and bustle of my bookshelves). But back then, of course, it was a quarter of a century away. Now, well, it seems damn near imminent.

Of course, Kurzweil may well turn out to be wrong. As much as he loves to base his predictions on the mathematics of exponentials, he can get specific dates wrong. For example, as I wrote in a previous post, he’ll wind up being wrong about the year solar power becomes pervasive (though he may well turn out to be right about the overall trend).

So maybe a computer won’t pass a full-blown Turing test in 2029. Perhaps it’ll be in the 2030s or 2040s. That would be close enough, in my book. Indeed, most experts believe it’s just a matter of time. One survey taken at the Joint Multi-Conference on Human-Level Artificial Intelligence found that just 2% of participants predicted that an artificial general intelligence (or AGI, meaning a machine that thinks at least as well as a human being) would never occur. Of course, that’s not exactly an unbiased survey cohort, is it?

Anyhow, let’s say the predicted timeframe when the Turing test is passed is generally correct. Why doesn’t Kurzweil set the date of the singularity on the date that the Turing test is passed (or the date that a human-level AI first emerges)? After all, at that point, the AI celeb could potentially code itself so it can quickly become smarter and smarter, as per the traditional singularity scenario.

But nope. Kurzweil is setting his sights on 2045, when we fully become the supercyborgs previously described.

What Could Go Wrong?

So, Armageddon or Rapture? Take your pick.

What’s interesting to my own little super-duper-unsuper brain is that folks seem more concerned about computers leaving us in the intellectual dust than about us becoming ultra-brains ourselves. I mean, sure, our digital super-brain friends may decide to cancel humanity for reals. But they probably won’t carry around the baggage of our primeval, reptilian and selfish fear-fuck-kill-hate brains–or what Jeff Hawkins calls our “old brain.”

In his book A Thousand Brains, Hawkins writes about the ongoing frenemy-ish relationship between our more rational “new brain” (the neocortex) and the far more selfishly emotional though conveniently compacted “old brain” (just 30% of our overall brain).

Basically, he chalks up the risk of human extinction (via nuclear war, for example) to old-brain-driven crappola empowered by tech built via the smart-pantsy new brain. For example, envision a pridefully pissed off Putin nuking the world with amazing missiles built by egghead engineers. And all because he’s as compelled by his “old brain” as a tantrum-throwing three-year-old after a puppy eats his cookie.

Now envision a world packed with superintelligent primate gods still (partly) ruled by their toddler old-brain instincts. Yeah, sounds a tad dangerous to me, too.

The Chances of No Chance

Speaking of Hawkins, he doesn’t buy the whole singularity scene. First, he argues that we’re not as close to creating truly intelligent machines as some believe. Today’s most impressive AIs tend to rely on deep learning, and Hawkins believes this is not the right path to true AGI. He writes,

Deep learning networks work well, but not because they solved the knowledge representation problem. They work well because they avoided it completely, relying on statistics and lots of data instead….they don’t possess knowledge and, therefore, are not on the path to having the ability of a five-year-old child.

Second, even when we finally build AGIs (and he thinks we certainly will if he has anything to say about it), they won’t be driven by the same old-brain compulsions as we are. They’ll be more rational because their architecture will be based on the human neocortex. Therefore, they won’t have the same drive to dominate and control because they will not have our nutball-but-gene-spreading monkey-brain impulses.

Third, Hawkins doesn’t believe that an exponential increase in intelligence will suddenly allow such AGIs to dominate. He believes a true AGI will be characterized by a mind made up of “thousands of small models of the world, where each model uses reference frames to store knowledge and create behaviors.” (That makes more sense if you read his book, A Thousand Brains: A New Theory of Intelligence). He goes on:

Adding this ingredient [meaning the thousands of reference frames] to machines does not impart any immediate capabilities. It only provides a substrate for learning, endowing machines with the ability to learn a model of the world and thus acquire knowledge and skills. On a kitchen stovetop you can turn a knob to up the heat. There isn’t an equivalent knob to “up the knowledge” of a machine.

An AGI won’t become a superintelligence just by virtue of writing better and better code for itself in the span of a few hours. It can’t automatically think itself into a superpower. It still needs to learn via experiments and experience, which takes time and the cooperation of human scientists.

Fourth, Hawkins thinks it will be difficult if not impossible to connect the human neocortex to mighty computing machines in the way that Kurzweil and others envision. Even if we can do it someday, that day is probably a long way off.

So, no, the singularity is not near, he seems to be arguing. But a true AGI may, in fact, become a reality sometime in the next decade or so–if engineers will only build an AGI based on his theory of intelligence.

So, What’s Really Gonna Happen?

Nobody knows who’s right or wrong at this stage. Maybe Kurzweil, maybe Hawkins, maybe neither or some combination of both. Here’s my own best guess for now.

Via deep learning approaches, computer engineers are going to get closer and closer to a computer capable of passing the Turing test, but by 2029 it won’t be able to fool an educated interrogator who is well versed in AI.

Or, if a deep-learning-based machine does pass the Turing test before the end of this decade, many people will argue that it only displays a façade of intelligence, perhaps citing the famous Chinese-room argument (which is a philosophical can of worms that I won’t get into here).

That said, eventually we will get to a Turing-test-passing machine that convinces even most of the doubters that it’s truly intelligent (and perhaps even conscious, an even higher hurdle to clear). That machine’s design will probably hew more closely to the dynamics of the human brain than do the (still quite impressive) neural networks of today.

Will this lead to a singularity? Well, maybe, though I’m convinced enough by the arguments of Hawkins to believe that it won’t literally happen overnight.

How about the super-cyborg-head-in-the-cloud-computer kind of singularity? Well, maybe that’ll happen someday, though it’s currently hard to see how we’re going to work out a seamless, high-bandwidth brain/supercomputer interface anytime soon. It’s going to take time to get it right, if we ever do. I guess figuring all those details out will be the first homework we assign to our AGI friends. That is, hopefully friends.

But here’s the thing. If we ever do figure out the interface, it seems possible that we’ll be “storing” a whole lot of our artificial neocortex reference frames (let’s call them ANREFs) in the cloud. If that’s true, then we may be able to swap ANREFs with our friends and neighbors, which might mean we can quickly share skills I-know-Kung-Fu style. (Cool, right?)

It’s also possible that the reticulum of all those acquired ANREFs will outlive our mortal bodies (assuming they stay mortal), providing a kind of immortality to a significant hunk of our expanded brains. Spooky, yeah? Who owns our ANREFs once the original brain is gone? Now that would be the IP battle of all IP battles!

See how weird things can quickly get once you start to think through singularity stuff? It’s kind of addictive, like eating future-flavored pistachios.

Anyway, here’s one prediction I’m pretty certain of: it’s gonna be a frigging mess!

Humanity will not be done with its species-defining conflicts, intrigues, and massively stupid escapades as it moves toward superintelligence. Maybe getting smarter–or just having smarter machines–will ultimately make us wiser, but there’s going to be plenty of heartache, cruelty, bigotry, and turmoil as we work out those singularity kinks.

I probably won’t live to see the weirdest stuff, but that’s okay. It’s fun just to think about, and, for better and for worse, we already live in interesting times.

Addendum: Since I wrote this original piece, things have been moving so quickly in the world of AI that I revisited the topic in The Singularity Just Got Nearer…Again…And How!

Featured image by Adindva1: Demonstration of the technology "Brain-Computer Interface." Management of the plastic arm with the help of thought. The frame is made on the set of the film "Brain: The Second Universe."