The Singularity Is Pretty Damned Close…Isn’t It?

What is the singularity and just how close is it?

The short answers are “it depends who you ask” and “nobody knows.” The longer answers are, well…you’ll see.

Singyuwhatnow?

Wikipedia provides a good basic definition: “The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth will become radically faster and uncontrollable, resulting in unforeseeable changes to human civilization.”

The technological growth in question usually refers to artificial intelligence (AI). The idea is that an AI capable of improving itself quickly goes through a series of cycles in which it gets smarter and smarter at exponential rates. This leads to a superintelligence that throws the world into an impossible-to-predict future.
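
As a toy illustration (and only that, not a claim about how real AI systems behave), the runaway dynamic can be sketched as a feedback loop in which each cycle's gain depends on the capability already accumulated, so growth compounds faster than a plain exponential:

```python
# Toy model of recursive self-improvement (purely illustrative).
# "capability" is an abstract score; each improvement cycle multiplies it
# by a factor that itself grows with current capability, so the jumps
# between cycles keep getting bigger.

def run_cycles(capability: float = 1.0, cycles: int = 10) -> list[float]:
    history = [capability]
    for _ in range(cycles):
        improvement = 1.0 + 0.1 * capability  # a smarter AI improves itself faster
        capability *= improvement
        history.append(capability)
    return history

trajectory = run_cycles()
# The gap between successive values widens every cycle, which is the
# hallmark of the runaway scenario the singularity argument depends on.
```

Run it and the numbers crawl at first, then lurch upward, which is exactly why the scenario is hard to reason about: almost all of the change is packed into the last few cycles.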

Whether this sounds awesome or awful largely depends on your view of what a superintelligence would bring about, something that no one really knows.

The impossible-to-predict nature is an aspect of why, in fact, it’s called a singularity, a term that originates with mathematics and physics. In math, singularities pop up when the numbers stop making sense, as when the answer to an equation turns out to be infinity. It’s also associated with phenomena such as black holes, where our understanding of traditional physics breaks down. So the term, as applied to technology, suggests a time beyond which the world stops making sense (to us) and so becomes impossible to forecast.

How Many Flavors Does It Come In?

From Wikipedia: major evolutionary transitions in information processing

Is a runaway recursively intelligent AI the only path to a singularity? Not if you count runaway recursively intelligent people who hook their little monkey brains up to some huge honking artificial neocortices in the cloud.

Indeed, it’s the human/AI interface and integration scenario that folks like inventor-author-futurist Ray Kurzweil seem to be banking on. To him, from what I understand (I haven’t read his newest book), that’s when the true tech singularity kicks in. At that point, humans essentially become supersmart, immortal(ish) cyborg gods.

Yay?

But there are other possible versions as well. There’s the one where we hook up our little monkey brains into one huge, networked brain to become the King Kong of superintelligences. Or the one where we grow a supersized neocortex in an underground vat the size of the Chesapeake Bay. (A Robot Chicken nightmare made more imaginable by the recent news that researchers got a cluster of brain cells to play Pong in a lab. No, really.)

Inane or Inevitable?

The first thing to say is that maybe the notion is kooky and misguided, the pipedream of geeks yearning to become cosmic comic book characters. (In fact, the singularity is sometimes called, with varying degrees of sarcasm, the Rapture for nerds.)

I’m tempted to join in the ridicule of the preposterous idea. Except for one thing: AI and other tech keep proving the naysayers wrong. AI will never beat the best chess players. Wrong. Okay, but it can’t dominate something as fuzzy as Jeopardy. Wrong. Surely it can’t master the most complex and challenging of all human games, Go. Yawn, wrong again.

After a while, anyone who bets against AI starts looking like a chump.

Well, games are for kids anyway. AI can’t do something as slippery as translate languages or as profound as unravel the many mysteries of protein folding. Well, actually…

But it can’t be artistic…can it? (“I don’t do drugs. I am drugs” quips DALL-E).

Getting Turing Testy

There’s at least one pursuit that AI has yet to master: the gentle art of conversation. That may be the truest assessment of human-level intelligence. At least, that’s the premise underlying the Turing test.

The test assumes you have a questioner sitting at a computer screen (or the equivalent). The questioner has two conversations via screen and keyboard. One of those conversations is with a computer, the other with another person. If the questioner can’t figure out which conversation partner is the computer, then the computer passes the test because it can’t be distinguished from a human being.
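
In statistical terms, "can't figure out which" means the questioner's guesses are no better than a coin flip over many trials. Here's a minimal sketch of that pass criterion; the `interrogate` callable is a hypothetical stand-in for a real questioner conducting the two conversations, not an actual chatbot evaluation:

```python
import random

def run_turing_trials(interrogate, trials: int = 1000) -> float:
    """Return the fraction of trials in which the questioner correctly
    identifies the machine.

    `interrogate(a, b)` must return "a" or "b": its guess for which
    channel hides the computer. The assignment is shuffled each trial.
    """
    correct = 0
    for _ in range(trials):
        machine_is_a = random.random() < 0.5  # randomly seat the machine
        guess = interrogate("a", "b")
        if (guess == "a") == machine_is_a:
            correct += 1
    return correct / trials

# A questioner who truly can't tell the difference is reduced to guessing,
# so accuracy hovers around 50%: the machine "passes."
accuracy = run_turing_trials(lambda a, b: random.choice([a, b]))
```

The interesting (and contested) part of the real test is everything this sketch abstracts away: how long the conversations run, how skilled the questioner is, and what accuracy threshold counts as "indistinguishable."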

Of course, this leaves us with four (at least!) big questions.

First, when will a machine finally pass that final exam?

Second, what does it mean if and when a machine does? Is it truly intelligent? How about conscious?

Third, if the answer to those questions seems to be yes, what’s next? Does it get a driver’s license? A FOX News slot? An OKCupid account?

Fourth, will such a computer spark the (dun dun dun) singularity?

The Iffy Question of When

In a recent podcast interview, Kurzweil predicted that some soon-to-be-famous digital mind will pass the Turing Test in 2029.

“2029?” I thought. “As in just 7-and-soon-to-be-6-years-away 2029?”

Kurzweil claims he’s been predicting that same year for a long time, so perhaps I read about it back in 2005 in his book The Singularity Is Near (now lost somewhere in the hustle and bustle of my bookshelves). But back then, of course, it was nearly a quarter of a century away. Now, well, it seems damn near imminent.

Of course, Kurzweil may well turn out to be wrong. As much as he loves to base his predictions on the mathematics of exponentials, he can get specific dates wrong. For example, as I wrote in a previous post, he’ll wind up being wrong about the year solar power becomes pervasive (though he may well turn out to be right about the overall trend).

So maybe a computer won’t pass a full-blown Turing test in 2029. Perhaps it’ll be in the 2030s or 2040s. That would be close enough, in my book. Indeed, most experts believe it’s just a matter of time. One survey issued at the Joint Multi-Conference on Human-Level Artificial Intelligence found that just 2% of participants predicted that an artificial general intelligence (or AGI, meaning that the machine thinks at least as well as a human being) would never occur. Of course, that’s not exactly an unbiased survey cohort, is it?

Anyhow, let’s say the predicted timeframe when the Turing test is passed is generally correct. Why doesn’t Kurzweil set the date of the singularity on the date that the Turing test is passed (or the date that a human-level AI first emerges)? After all, at that point, the AI celeb could potentially code itself so it can quickly become smarter and smarter, as per the traditional singularity scenario.

But nope. Kurzweil is setting his sights on 2045, when we fully become the supercyborgs previously described.

What Could Go Wrong?

So, Armageddon or Rapture? Take your pick.

What’s interesting to my own little super-duper-unsuper brain is that folks seem more concerned about computers leaving us in the intellectual dust than us becoming ultra-brains ourselves. I mean, sure, our digital super-brain friends may decide to cancel humanity for reals. But they probably won’t carry around the baggage of our primeval, reptilian and selfish fear-fuck-kill-hate brains–or, what Jeff Hawkins calls our “old brain.”

In his book A Thousand Brains, Hawkins writes about the ongoing frenemy-ish relationship between our more rational “new brain” (the neocortex) and the far more selfishly emotional though conveniently compacted “old brain” (just 30% of our overall brain).

Basically, he chalks up the risk of human extinction (via nuclear war, for example) to old-brain-driven crappola empowered by tech built via the smart-pantsy new brain. For example, envision a pridefully pissed off Putin nuking the world with amazing missiles built by egghead engineers. And all because he’s as compelled by his “old brain” as a tantrum-throwing three-year-old after a puppy eats his cookie.

Now envision a world packed with superintelligent primate gods still (partly) ruled by their toddler old-brain instincts. Yeah, sounds a tad dangerous to me, too.

The Chances of No Chance

Speaking of Hawkins, he doesn’t buy the whole singularity scene. First, he argues that we’re not as close to creating truly intelligent machines as some believe. Today’s most impressive AIs tend to rely on deep learning, and Hawkins believes this is not the right path to true AGI. He writes,

Deep learning networks work well, but not because they solved the knowledge representation problem. They work well because they avoided it completely, relying on statistics and lots of data instead….they don’t possess knowledge and, therefore, are not on the path to having the ability of a five-year-old child.

Second, even when we finally build AGIs (and he thinks we certainly will if he has anything to say about it), they won’t be driven by the same old-brain compulsions as we are. They’ll be more rational because their architecture will be based on the human neocortex. Therefore, they won’t have the same drive to dominate and control because they will not have our nutball-but-gene-spreading monkey-brain impulses.

Third, Hawkins doesn’t believe that an exponential increase in intelligence will suddenly allow such AGIs to dominate. He believes a true AGI will be characterized by a mind made up of “thousands of small models of the world, where each model uses reference frames to store knowledge and create behaviors.” (That makes more sense if you read his book, A Thousand Brains: A New Theory of Intelligence). He goes on:

Adding this ingredient [meaning the thousands of reference frames] to machines does not impart any immediate capabilities. It only provides a substrate for learning, endowing machines with the ability to learn a model of the world and thus acquire knowledge and skills. On a kitchen stovetop you can turn a knob to up the heat. There isn’t an equivalent knob to “up the knowledge” of a machine.

An AGI won’t become a superintelligence just by virtue of writing better and better code for itself in the span of a few hours. It can’t automatically think itself into a superpower. It still needs to learn via experiments and experience, which takes time and the cooperation of human scientists.

Fourth, Hawkins thinks it will be difficult if not impossible to connect the human neocortex to mighty computing machines in the way that Kurzweil and others envision. Even if we can do it someday, that day is probably a long way off.

So, no, the singularity is not near, he seems to be arguing. But a true AGI may, in fact, become a reality sometime in the next decade or so–if engineers will only build an AGI based on his theory of intelligence.

So, What’s Really Gonna Happen?

Nobody knows who’s right or wrong at this stage. Maybe Kurzweil, maybe Hawkins, maybe neither, or some combination of both. Here’s my own best guess for now.

Via deep learning approaches, computer engineers are going to get closer and closer to a computer capable of passing the Turing test, but by 2029 it won’t be able to fool an educated interrogator who is well versed in AI.

Or, if a deep-learning-based machine does pass the Turing test before the end of this decade, many people will argue that it only displays a façade of intelligence, perhaps citing the famous Chinese-room argument (which is a philosophical can of worms that I won’t get into here).

That said, eventually we will get to a Turing-test-passing machine that convinces even most of the doubters that it’s truly intelligent (and perhaps even conscious, an even higher hurdle to clear). That machine’s design will probably hew more closely to the dynamics of the human brain than do the (still quite impressive) neural networks of today.

Will this lead to a singularity? Well, maybe, though I’m convinced enough by the arguments of Hawkins to believe that it won’t literally happen overnight.

How about the super-cyborg-head-in-the-cloud-computer kind of singularity? Well, maybe that’ll happen someday, though it’s currently hard to see how we’re going to work out a seamless, high-bandwidth brain/supercomputer interface anytime soon. It’s going to take time to get it right, if we ever do. I guess figuring all those details out will be the first homework we assign to our AGI friends. That is, hopefully friends.

But here’s the thing. If we ever do figure out the interface, it seems possible that we’ll be “storing” a whole lot of our artificial neocortex reference frames (let’s call them ANREFs) in the cloud. If that’s true, then we may be able to swap ANREFs with our friends and neighbors, which might mean we can quickly share skills I-know-Kung-Fu style. (Cool, right?)

It’s also possible that the reticulum of all those acquired ANREFs will outlive our mortal bodies (assuming they stay mortal), providing a kind of immortality to a significant hunk of our expanded brains. Spooky, yeah? Who owns our ANREFs once the original brain is gone? Now that would be the IP battle of all IP battles!

See how weird things can quickly get once you start to think through singularity stuff? It’s kind of addictive, like eating future-flavored pistachios.

Anyway, here’s one prediction I’m pretty certain of: it’s gonna be a frigging mess!

Humanity will not be done with its species-defining conflicts, intrigues, and massively stupid escapades as it moves toward superintelligence. Maybe getting smarter–or just having smarter machines–will ultimately make us wiser, but there’s going to be plenty of heartache, cruelty, bigotry, and turmoil as we work out those singularity kinks.

I probably won’t live to see the weirdest stuff, but that’s okay. It’s fun just to think about, and, for better and for worse, we already live in interesting times.

Featured image by Adindva1: Demonstration of the technology "Brain-Computer Interface." Management of the plastic arm with the help of thought. The frame is made on the set of the film "Brain: The Second Universe."

A Mayor, a CEO and a Doctor Walk into a Bar

So, a mayor, a CEO and a doctor walk into a bar.

The CEO orders up a bottle of the most expensive wine, saying he’s celebrating the 10th anniversary of his company’s founding and the 5th straight year of double-digit growth.

“You know, most companies don’t even live for 10 years,” he says, rolling up the sleeves on his Gucci shirt. “Like they say on Vulcan, live long and prosper, baby!”

“I’ll drink to that,” says his doctor friend. “One of my patients hit the 100-year mark today, no doubt due to some brilliant physicking.”

“Nice job, but those aren’t really milestones for mayors,” says the mayor. “This city’s just three hundred years old, hardly out of diapers. I’ll buy the wine when it hits its adolescence in another 700 years or so.”

The CEO and the doctor laugh but then go quiet.

“You know, Mayor,” says the CEO finally, “you make me feel like a mayfly. You sure know how to suck the joy out of being a corporate animal.”

“Sorry,” says the mayor. “No offense intended. Businesses serve a purpose. They provide some services for a while and then die out. At least the vast majority of them do.”

“Like blood cells in a body,” says the doctor.

“Jeez,” says the CEO. “Now you’ve got me drinking for other reasons.”

“If you ever want to learn about the art of managing for the long haul, let me know,” quips the mayor. “We’ve got some internships opening up.”

“Shut up, Mayor,” says the CEO.

“Garçon, a round of seltzer water over here,” calls the doctor. “We’ve got to settle our friend’s case of sour grapes.”

Featured image from A.Savin (WikiCommons) - Own work View from Lycabettus in Athens (Attica, Greece)

Scenarios for a Pesky and Unpredictable Energy Future

Writing scenarios about energy is about as hip as writing science fiction about time travel or Moon colonies. Energy scenarios are, after all, the original business scenarios. They are the vanilla of ice creams, the beige of home decorating, the Honda Accord of automobiles.

Scenarios actually began, for the most part, in the energy industry because, in a crazy and shifting world, that industry has always needed to take a long-term view and make long-term investments. That’s why so many people in that industry give off a vibe that is weirdly geeky as well as stodgy and superior. It must be the cross-breeding of engineers and geologists, gutsy wildcatters, fat-cat corporate diplomats, egg-headed forecasters and corrupt marketers.

Perhaps I picked up the energy bug from the study of scenarios in general and some exposure to energy companies. At any rate, I have been thinking and reading about such scenarios for years, and today they are inextricably linked to climate change. So, here are some of my recent thoughts all nicely wrapped up in four scenarios.

Year 2032 Climate Change Scenarios

Scenario One: So Far, So Good

Assumptions: lots of green energy as well as geoengineering

By 2032, intelligent geoengineering is no longer controversial. In truth, rightly or wrongly, it has gotten some of the credit for keeping the world cooler than some had predicted it would be.

Some of the credit has gone to the United Nations, which formed the first coalitions of countries that negotiated a sulfate aerosol program that started off very modestly and then grew moderately more ambitious as various nations became comfortable with the technologies.

“We had to find a global approach to geoengineering,” said the Secretary-General of the United Nations. “Unilateral approaches could have caused international conflicts and dangerous unilateral actions.”

Another form of geoengineering, direct carbon capture, has been driven not by the UN but by a large group of nonprofits allied with private investors. After a number of major technological breakthroughs, experts now project that within 20 years, they will be able to absorb over 30% of the carbon that has been dumped into the atmosphere over the last 100 years.

Then there are the global reforesting efforts, which are sometimes viewed as a “natural” type of geoengineering. There are multiple companies, some of them non-profits, using large squads of drones to conduct fast and effective reforesting.

The trend emerged in 2022 when AirSeed Technologies started using artificial intelligence to find areas in need of trees and fired seed pods from the sky with drones. Even at the time, the drones were reportedly able to plant over 40,000 seed pods per day, far faster and cheaper than via other methods of reforesting. Today, the number has moved past a half million per day.

An ambitious project has emerged as the reforestation companies have started to run out of promising acreage to plant. The new plan is to partner with new desalination enterprises in northern Africa in order to reforest swaths of the Sahara. This is an extension of a project begun in 2007, when the African Union decided to build a “Great Green Wall” in hopes of restoring 100 million hectares of land between Senegal in the west and Djibouti in the east. The idea was to create a 15-kilometer-wide and 8,000-kilometer-long mosaic of trees, vegetation, grasslands and plants.

In every case of geoengineering, critics have emerged to warn of dire consequences. The sulfate aerosol program, they warn, may yet have an unpredictable impact since no one can accurately model climate patterns. The carbon capture programs are still unproven, and the reforestation efforts could do more harm than good.

One climatologist states, “If reforestation results in more vast, uncontrolled forest fires, as seems likely, then the process will only serve to add carbon to the air as opposed to removing it, making global warming worse.”

So far, however, these controlled geoengineering initiatives, along with the fast spread of green energy sources, seem to be working on some level.

“So far, so good,” says the UN Secretary-General. “Yes, some of these initiatives may not pan out. Yes, there are some potential dangers, but it’s best if we engage in these programs in a controlled, internationally coordinated way when possible.”

To which one critic has said, “Oh, sure, making huge mistakes via unwieldy global bureaucracies is always better. Sure it is.”

Scenario Two: Exponential Green

Assumptions: lots of green energy and little geoengineering

By the year 2020, solar photovoltaics were down to just 5.7 cents per kWh and were seen as less costly than fossil fuels. And, by 2025, the energy storage problem was well on its way to being solved via a combination of new types of batteries, efficiently converted hydrogen, and fuel cells. Investing in other fuel sources started to look like a bad investment, which meant the lower costs came even more quickly thanks to new investments.

But today, in the year 2032, it’s not all about the photovoltaics. Wind energy has also become very inexpensive, and smaller, more modular nuclear plants have made nuclear energy more price competitive. In addition, several small and still experimental nuclear fusion plants have come online.

Most new homes in the U.S. are sold with solar panels and a collection of fuel cells for storing any energy that doesn’t go directly to the electric grid. In addition, most windows are installed with clear carbon nanotube films that can reflect and collect solar energy, depending on the needs of the home. There’s also a big business in retrofitting older homes.

This means that a growing number of energy consumers have become energy producers or energy neutral, a situation that has continued to annoy energy utilities, especially after several decades of slowing energy usage among homeowners in the U.S.

There have also been advances in wireless energy delivery. The most prevalent technologies are based on lasers and magnetically coupled resonance, allowing a wide range of wireless devices to run in households without the need for wires and plugs. But the largest benefits stem from applications that allow neighborhood homes to share solar energy via ad hoc, computer-controlled and wireless grids.

Renewable energy is now estimated to make up 65% of all energy generated in the world. “We expect the U.S. to hit 95% renewable energy by 2040,” said one utilities CEO. “It represents an amazing achievement. While humanity hasn’t exactly ‘solved’ its energy problems, it feels like we’re on the road to a sustainable future. As an industry, we’re now looking at other markets where we can be equally successful, especially the transfer of high-bandwidth information via utility infrastructures.”

The world hasn’t solved global warming but most experts are optimistic that humanity will be able to cope without the necessity of risky geoengineering projects.

Scenario Three: Desperate Times

Assumptions: little green energy and lots of geoengineering

Global warming has hit humanity harder than most of the experts predicted. Back in 2022, Nature reported, “The negative impacts of climate change are mounting much faster than scientists predicted less than a decade ago.” It drew this conclusion from Climate Change 2022: Impacts, Adaptation and Vulnerability, a dire but well-documented report from the United Nations climate panel.

What occurred in India and Pakistan shortly thereafter only underscored the point. In May 2022, nearly an eighth of the people on the planet found themselves struggling to endure a relentless heat wave. India had just gone through the hottest April in 122 years, which followed the hottest March on record. Pakistan didn’t get off much easier, encountering its hottest April in 61 years. In Jacobabad, Pakistan, temperatures rose above 120 degrees Fahrenheit.

During the heat wave, there was so much demand on the electrical grid that there were power outages for two-thirds of Indian households. Meanwhile in Pakistan, outages were cutting off power when people needed cooling the most, and many families lost running water without electricity.

This was just the beginning. By the mid-2020s, India and Pakistan were regularly besieged by murderous heat waves and droughts. That’s when the two nations, which had long been enemies, joined forces to implement the most ambitious and controversial geoengineering project in human history.

Starting in 2026, they began using high-altitude jets to spread sulfate aerosols into the stratosphere with the goal of reflecting away sunlight. Of course, this resulted in a planetary effect that was greeted by outrage in some nations, gratitude in others. Russia almost immediately engaged in nuclear saber rattling, with its president warning, “This is an attack on Russia itself, threatening to make our winters longer, our growing seasons shorter and our storms more destructive. We will not stand idly by as rogue nations assault our food supplies and starve our citizens.”

Meanwhile, India and Pakistan as well as many other nations argued that climate change was the result of trends brought about by Western nations that had no right to inflict existential harm on their countries.

In the U.S., many took the side of India and Pakistan. One Kansas farmer stated, “We’re just glad somebody’s trying something. The droughts have been brutal the last few years, and the cost of irrigation is through the roof. It’s not just us farmers, either. It drives up the cost of food for everyone. Throwing some dust high up in the sky to cool things off a little seems like the commonsense thing to do to me.”

Not everyone agreed. Some climatologists warned that India and Pakistan were not being patient enough and might well overshoot the mark, wreaking even greater havoc on the global environment. “This could end in the kind of wild swings in global temperatures that do far more harm than good,” one warned.

Scenario Four: Hot, Hotter, Hottest

Assumptions: Little green energy and little geoengineering

In the year 2032, green energy has hit the flatter part of the S-curve in a major way. The energy storage technologies never quite worked out, so countries have stuck with tried-and-true natural gas even while slowly building nuclear plants hindered by cost overruns. Engineers have done a pretty good job of making automobiles more fuel efficient, not just through better batteries but through more efficient engineering of all the other components, especially the not-yet-dead combustion engine. Most cars, after all, still run at least partly on petroleum.

Fully 55% of all energy production is still based on fossil fuels (only about 5 percentage points of improvement from 2021). But with China and India and growing parts of the African continent still ramping up their economies and energy usage, there’s even more trepidation about global warming. The scientific news has been dismal in the shadow of massive and deadly heatwaves, droughts, forest and bush fires, super storms and ever more cases of daytime flooding in coastal cities. Many have given up hope, saying we’re already past a point of no return for high rates of global warming.

This problem has set the stage for a carbon tax that is expected to be implemented by all G25 countries in 2033 (though some U.S. politicians are still promising to withdraw from the pact if elected). The funds will be mostly allocated to three areas: 1) increasing the reliability of renewable technologies to the point where natural-gas-using peaker plants are no longer needed, 2) greater energy conservation regulations in all forms of engineering, and 3) smaller, cheaper and safer nuclear plants.

“Look,” says one energy guru, “we’ve made progress over the last 20 years in terms of bringing down the costs of renewables, but they haven’t grown at the exponential rate some predicted. Still, global warming has finally gotten bad enough – and the technology good enough – for us to make a global push. We predict that if the political coalition holds, then by 2050 we can get things down to just 35% fossil fuels and the rest nuclear and renewables. Is it as good as we hoped? No. Are we going to suffer from even worse global warming? Yes. But half a loaf is better than none.”

Given the slow pace of progress, more and more nations are developing geoengineering strategies, but little has been implemented. Large-scale geoengineering initiatives remain controversial and are still being debated in the United Nations and elsewhere.

Concluding Comments

Which of these scenarios is most likely? I don’t know. The one I’d like to see most is “Exponential Green,” but it’s hard to say how quickly green energy will grow and, even assuming exponential growth for now, when the trend will slow down and hit the flat part of the S-curve.

We may need to add geoengineering to the mix in order to avoid disaster, but geoengineering comes with its own risks. Things can and often do go wrong. Engineering solutions can result in unforeseen problems. If we do need to engage in geoengineering at a large scale, I hope it’ll look more like “So Far, So Good” rather than “Desperate Times.”

The best we can do, I think, is help bring the most positive of the scenarios to fruition. Even if they don’t work out, we will have spent more days in hope than despair. There’s something to be said for active optimism.

Featured image from 林 慕尧 / Chris Lim from East Coast (东海岸), Singapore (新加坡) - Windmills in China?{D70 series}