A Neural Network Writes About Networks, and Other Marvelously Meta Matters

For over a month now, I’ve been writing about the rising seas of AI-generated media, the profusion of AI-generated art, and the ethical quandaries involved. So, it was only a matter of time before I stumbled onto the topic of AI prose generators like ChatGPT.

Except, I didn’t stumble.

I was pushed.

Pushed by several folks familiar with my interest in AIs and then by the sudden onslaught of headlines heralding the overnight sensation known as OpenAI’s superchatbot. Didn’t I warn you about the coming deluge? The water is rising fast now.

Anyway, I spent a little time today playing with the world’s latest chatbot darling. Here’s some of what I found.

Explain Yourself!

Not feeling very original, I started by asking the chatbot about itself, and here is its self-portrait:

ChatGPT is a natural language processing (NLP) tool that uses a pre-trained language model to generate responses to user input in a conversational manner. The tool takes in user input, processes it using the language model, and generates a response based on the context and keywords in the input. The generated response is then displayed to the user, allowing for a back-and-forth conversation.
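That self-description boils down to a simple loop: take input, condition on the conversation so far, generate a reply, display it, repeat. A minimal sketch of that loop, with `generate_response` as a hypothetical stand-in for the actual language model (here it just echoes, so the sketch stays runnable):

```python
# Sketch of the input -> model -> response loop described above.
# `generate_response` is a hypothetical stand-in for the real model,
# which would condition on the context and keywords in the input.
def generate_response(user_input: str, history: list) -> str:
    # A real system would call the language model here; we just echo
    # so the sketch stays self-contained.
    return f"(model reply to: {user_input!r})"

def chat(turns: list) -> list:
    history, replies = [], []
    for user_input in turns:
        reply = generate_response(user_input, history)
        history.extend([user_input, reply])  # the back-and-forth context
        replies.append(reply)
    return replies

print(chat(["Hello", "Is ChatGPT a neural network?"]))
```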

Well, that’s pretty prosaic but also nicely succinct. In fact, it’s written in better English than you’ll see in much of today’s blogosphere, both macro and micro.

Alrighty then. So, I wanted to go a bit deeper and asked it a simple question: “Is ChatGPT based on a neural network?” Here’s what it had to say:

Yes, ChatGPT is a variant of the GPT-3 language model, which is based on a neural network architecture. GPT-3 uses a type of network known as a transformer, which is specifically designed for natural language processing tasks. The model is trained using a combination of supervised learning, in which it is provided with large amounts of labeled data to learn from, and unsupervised learning, in which it is allowed to discover patterns in the data on its own. This allows it to generate human-like text that is often difficult to distinguish from text written by a person.

The Network Writing About Networks

But what, you may wonder, is a neural network? Glad you asked. ChatGPT has the answer to that one as well:

Neural networks are a type of machine learning algorithm that is designed to recognize patterns in data. They are called “neural networks” because they are inspired by the way the human brain works, with individual neurons working together to process information. In a neural network, there are many interconnected processing nodes, each of which performs a simple calculation on the data it receives. These calculations are designed to be similar to the way that neurons in the brain process information. The network as a whole is able to learn and adapt based on the data it receives, allowing it to improve its performance over time. This makes neural networks a powerful tool for tasks like image and speech recognition, natural language processing, and more.
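That description (many interconnected nodes, each performing a simple calculation) can be made concrete in a few lines. Here is a toy two-layer forward pass in NumPy, with random, untrained weights, purely to illustrate the structure:

```python
import numpy as np

# Each "node" computes a weighted sum of its inputs and passes the
# result through an activation function, loosely mimicking a neuron.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # weights: 3 inputs -> 4 hidden nodes
W2 = rng.normal(size=(4, 1))  # weights: 4 hidden nodes -> 1 output

def forward(x):
    hidden = sigmoid(x @ W1)     # hidden layer: weighted sums + activation
    return sigmoid(hidden @ W2)  # output layer

out = forward(np.array([0.5, -0.2, 0.1]))
print(out)  # a single value between 0 and 1
```

Training would adjust `W1` and `W2` from data; that is the "learn and adapt" part of ChatGPT's answer.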

The Difference Between Systems and Networks

Then I asked ChatGPT about the difference between systems and networks. I was genuinely curious about how it would parse those differences since I’ve found the distinction challenging myself. I think the question somehow inspired it because it really laid things out clearly and at considerable length:

In a general sense, networks and systems are both collections of interconnected components that work together to achieve a common goal. However, there are some key differences between the two.

Networks are typically characterized by their interconnectedness and the flow of information or resources between nodes in the network. Networks can be thought of as a web of connections that facilitate the transfer of information or resources. Examples of networks in the natural world include neural networks in the brain, social networks, and ecological networks.

On the other hand, systems are characterized by their organization and the relationship between components that work together to achieve a common goal. Systems can be thought of as a set of components that are organized and interact with each other to perform a specific function or process. Examples of systems in the natural world include the human body, ecosystems, and weather patterns.

In summary, the key differences between networks and systems are the focus on interconnectedness in networks and the focus on organization in systems. Networks are characterized by the flow of information or resources between nodes, while systems are characterized by the interaction and organization of components to achieve a specific goal.

Is the Universe a Network?

Honestly, I doubt I would have been able to describe those distinctions that well in the space of four short paragraphs. Well done, superchatbot!

So, I took our discussion to the ultimate level, asking about whether the universe itself is a network. Again, the answer seemed inspired.

The universe can be seen as a network because it is comprised of various interconnected systems and structures. For example, the solar system is a network of planets, moons, and other celestial bodies that are held together by gravity. Similarly, the galaxy is a network of stars, gas, and dust that are held together by the force of gravity.

Additionally, the universe is a network of interconnected physical forces and laws, such as gravity, electromagnetism, and the strong and weak nuclear forces. These forces and laws work together to govern the behavior of matter and energy in the universe.

Furthermore, the universe is a network of living organisms and ecosystems that are interconnected through the cycles of matter and energy. For example, plants and animals rely on each other for food and oxygen, and their waste products are used by other organisms in the ecosystem.

Overall, the universe can be seen as a vast, interconnected network of systems, structures, forces, laws, and living organisms that work together to create and sustain the complex and dynamic nature of the universe.

Well, okay, so ChatGPT might get a little metaphysical here. For example, is there really such a thing as a network of laws? Well, maybe. All in all, though, superchatbot whipped up a Carl-Sagan-esque answer about a topic as large as the universe in just a few seconds.

Poetic Justice

Like many other people, I was blown away by OpenAI’s superchatbot. I mean, yeah, it did some weird things. For example, it told me a pretty off-color joke about Joe Biden and a downright surreal one about Donald Trump. The bot may not be quite ready for its own comedy special yet.

And, it sometimes contradicted itself in weird ways, one minute claiming “I am in a good headspace and am able to appreciate the present moment” and the next “I do not have the ability to experience emotions.”

But still, it was able to do many other fascinating things, from writing love poetry to God in the manner of John Donne to providing a nice little book review of Moby-Dick.

Honestly, circa 2022-23, it appears we writers may soon be redundant, or at least practicing our craft with much greater humility. And it’s not just us, either. I also played a bit with OpenAI’s computer programming tool. Just by describing what I wanted the program to do, I got the codebot to write up several bits of Python code, from a simple dictionary to a function that identifies whether or not a number is prime.
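For the curious, that prime-checking function came out along these lines (my reconstruction from memory, not the bot’s verbatim output): trial division by odd numbers up to the square root.

```python
def is_prime(n: int) -> bool:
    """Return True if n is prime, using trial division."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2  # 2 is the only even prime
    i = 3
    while i * i <= n:  # only divisors up to sqrt(n) matter
        if n % i == 0:
            return False
        i += 2  # skip even candidates
    return True

print([x for x in range(20) if is_prime(x)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```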

So, the good(ish) news is that we writers and artists will not be lonely in the unemployment line. Developers will be right there alongside us. Poetic justice, I suppose. In fact, I asked ChatGPT to write a poem on the topic, so I’m going to give it the last rather chillingly optimistic word:

In a world of endless possibility

Where machines can do the work of many

The jobs that once belonged to us

Are now at risk of obsolescence

Gone are the days of endless code

When writing meant pouring out your soul

Now AI can do it faster and better

Leaving writers out in the cold

And as for artists, once so revered

Their skills no longer needed

As AI can create with ease

Leaving them to wonder and grieve

But fear not, dear human friends

For though our jobs may disappear

We will find new ways to thrive

In a world transformed by AI.

Identify Your Leaders Drawn Leonardo-style

When I was a kid, we had this huge book of prints by Leonardo da Vinci. I loved it. Still do. So, just for fun, I used Stable Diffusion AI to get 30 images of 20th and 21st century political and business leaders as they might have been drawn by da Vinci. Check them out and see if you can identify these leaders.

The answers are at the end.

Top to bottom:

  1. Bill Clinton
  2. Bill Gates
  3. Boris Johnson
  4. Donald Trump
  5. Indira Gandhi
  6. Joe Biden
  7. Mahatma Gandhi
  8. George W. Bush
  9. Kamala Harris
  10. Hillary Clinton
  11. Jimmy Carter
  12. Justin Trudeau
  13. Emmanuel Macron
  14. Mao Zedong
  15. Narendra Modi
  16. Margaret Thatcher
  17. Angela Merkel
  18. Nelson Mandela
  19. Benjamin Netanyahu
  20. Barack Obama
  21. Oprah Winfrey
  22. Vladimir Putin
  23. Xi Jinping
  24. Elon Musk
  25. Mikhail Gorbachev
  26. Ronald Reagan
  27. Alexandria Ocasio-Cortez
  28. Donald Trump (again)
  29. John F. Kennedy
  30. Nikita Khrushchev

The Murky Ethics of AI-generated Images

The other day, I was playing with Stable Diffusion, one of the new generative AI products out there, and I found myself in an ethical quandary. Or maybe quandaries.

More specifically, I was playing with putting famous haiku poems into the “Generate Image” box and seeing what kinds of images the Stable Diffusion generator would concoct.

It was pretty uninspiring stuff until I started adding the names of specific illustrators in front of the haiku. Things got more interesting artistically but, from my perspective, murkier ethically.

The Old Pond Meets the New AIs

The first famous haiku I used was “The Old Pond” by Matsuo Bashō. Here’s how it goes in the translation I found:

An old silent pond

A frog jumps into the pond—

Splash! Silence again.

At first, I got a bunch of photo-like but highly weird and often grotesque images of frogs. You’ve got to play with Stable Diffusion a while to see what I mean, but here are a few examples:

Okay, so far, so bad. A failed experiment. But that’s when I had the bright idea of adding certain illustrators’ names to the search so the generator would be able to focus on specific portions of the reticulum to find higher quality images. For reasons that will become apparent, I’m not going to mention their names. But here are some of the images I found interesting:

Better, right? I mean, each one appeals to different tastes, but they aren’t demented and inappropriate. There was considerable trial and error, and I was a bit proud of what I eventually kept as the better ones.

“Lighting One Candle” Meets the AI Prometheus

The next haiku I decided to use was “Lighting One Candle” by Yosa Buson. Here’s how that one goes:

The light of a candle

Is transferred to another candle—

Spring twilight

This time I got some fairly shmaltzy images that you might find in the more pious sections of the local greeting card aisle. That’s not a dig at religion, by the way, but that aesthetic has never appealed to me. It seems too trite and predictable for something as grand as God. Anyway, the two images of candles below are examples of what I mean:

I like the two trees, though. I think it’s an inspired interpretation of the poem, one that I didn’t expect. It raised my opinion of what’s currently possible for these AIs. It’d make for a fine greeting card in the right section of the store.

But, still not finding much worth preserving, I went back to putting illustrators’ names in with the haiku. I thought the following images were worth keeping.

In each of these cases, I used an illustrator’s name. Some of these illustrators are deceased but some are still creating art. And this is where the ethical concerns arise.

Where Are the New Legal Lines in Generative AI?

I don’t think the legalities relating to generative AI have been completely worked out yet. Still, it does appear that artists are going to have a tough time battling huge tech firms with deep pockets, even in nations like Japan with strong copyright laws. Here’s one quote from the article “AI-generated Art Sparks Furious Backlash from Japan’s Anime Community”:

[W]ith art generated by AI, legal issues only arise if the output is exactly the same, or very close to, the images on which the model is trained. “If the images generated are identical … then publishing [those images] may infringe on copyright,” Taichi Kakinuma, an AI-focused partner at the law firm Storia and a member of the economy ministry’s committee on contract guidelines for AI and data, told Rest of World….But successful legal cases against AI firms are unlikely, said Kazuyasu Shiraishi, a partner at the Tokyo-headquartered law firm TMI Associates, to Rest of World. In 2018, the National Diet, Japan’s legislative body, amended the national copyright law to allow machine-learning models to scrape copyrighted data from the internet without permission, which offers up a liability shield for services like NovelAI.

How About Generative AI’s Ethical Lines?

Even if the AI generators have relatively solid legal lines defining how they can work, the ethical lines are harder to draw. With the images I generated, I didn’t pay too much attention to whether the illustrators were living or dead. I was, after all, just “playing around.”

But once I had the images, I came to think that asking the generative AI to ape someone’s artistic style is pretty sleazy if that artist is still alive and earning their livelihood through their art. That’s why I don’t want to mention any names in this post. It might encourage others to add the names of those artists into image generators. (Of course, if you’re truly knowledgeable about illustrators, you’ll figure it out anyway, but in that case, you don’t need any help from a knucklehead like me.)

It’s one thing to ask an AI to use a Picasso-esque style for an image. Picasso died back in 1973. His family may get annoyed, but I very much doubt that any of his works will become less valuable due to some (still) crummy imitations.

But it’s a different story with living artists. If a publisher wants the style of a certain artist for a book cover, for example, then the publisher should damn well hire the artist, not ask a free AI to crank out a cheap and inferior imitation. Even if the copyright system ultimately can’t protect those artists legally, we can at least apply social pressure to the AI generator companies as customers.

I think AI generator firms should have policies that allow artists to opt out of having their works used to “train” the algorithms. That is, they can request to be put on the equivalent of a “don’t imitate” list. I don’t even know if that’s doable in the long run, but it might be one step in a more ethical direction.

The Soft Colonialism of Probability and Prediction?

In the article “AI Art Is Soft Propaganda for the Global North,” Marco Donnarumma takes aim at the ethics of generative AI on two primary fronts.

First is the exploitation of cultural capital. These models exploit enormous datasets of images scraped from the web without authors’ consent, and many of those images are original artworks by both dead and living artists….The second concern is the propagation of the idea that creativity can be isolated from embodiment, relations, and socio-cultural contexts so as to be statistically modeled. In fact, far from being “creative,” AI-generated images are probabilistic approximations of features of existing artworks….AI art is, in my view, soft propaganda for the ideology of prediction.

To an extent, his first concern about cultural capital is related to my previous discussion about artists’ legal and moral rights, a topic that will remain salient as these technologies evolve.

His second concern is more abstract and, I think, debatable. Probabilistic and predictive algorithms may have begun in the “Global North,” but probability is leveraged in software wherever it is developed these days. It’s like calling semiconductors part of the “West” even as a nation like Taiwan innovates the tech and dominates the space.

Some of his argument rests on the idea that generative AI is not “creative,” but that term depends entirely on how we define it. Wikipedia, for example, states, “Creativity is a phenomenon whereby something new and valuable is formed.”

Are the images created by these technologies new and valuable? Well, let’s start by asking whether they represent something new. By one definition, they absolutely do, which is why they are not infringing on copyright. On the other hand, for now they are unlikely to create truly new artistic expressions in the larger sense, as the Impressionists did in the 19th century.

As for “valuable,” well, take a look at the millions if not billions of dollars investors are throwing their way. (But, sure, there are other ways to define value as well.)

My Own Rules for Now

As I use and write about these technologies, I’ll continue to leverage the names of deceased artists. But for now I’ll refrain from using images based on the styles of those still living. Maybe that’s too simplistic and binary. Or maybe it’s just stupid of me not to take advantage of current artistic styles and innovations. After all, artists borrow approaches from one another all the time. That’s how art advances.

I don’t know how it’s all going to work out, but it’s certainly going to require more thought from all of us. There will never be a single viewpoint, but in time let’s hope we form some semblance of consensus about what are principled and unprincipled usages of these technologies.

Featured image is from Stable Diffusion. I think I used a phrase like "medieval saint looking at a cellphone." Presto.    

The Rising Seas of AI-Generated Media

We are about to be awash in AI-generated media, and our society may have a tough time surviving it.

Generated by Stable Diffusion. The prompt was “Dali tsunami”

Our feet are already wet, of course. The bots inhabit Twitter like so many virtual lice. And chatbots are helpfully annoying visitors on corporate websites the world over. Meanwhile, algorithms have been honing their scribbler skills on the virtual Grub Street of the Internet for a while now.

But soon, and by soon I mean within months, we will be hip deep in AI-generated content and wondering how high the tide is going to get.

My guess is high, baby. Very high indeed.

What Are We Really Talking Here?

Techopedia defines generative AI as a “broad label that’s used to describe any type of artificial intelligence that uses unsupervised learning algorithms to create new digital images, video, audio, text or code.”

Generated by Stable Diffusion. Prompt was “network”

I think that label will ultimately prove too restrictive, but let’s start there. So far, most of the hype is indeed around media, especially image creation and automated writing, with music and video not being far behind.

But we’ll get to that.

For now it’s enough to say that generative AI works by learning from, and being “inspired by,” the dynamic global reticulum that is the Internet.

But generative AI also applies to things like computer code. And, by and by, it’ll start generating atoms in addition to bits and bytes. For example, why couldn’t generative AI be applied to 3D printing? Why not car and clothing design? Why not, even, the creation of new biological systems?

The Money Generator

First, let’s follow the money. So how much dough is going into generative AI these days?

Answer: how much you got, angels and VCs?

Generated by Stable Diffusion. Prompt “printing press printing money”

For example, a start-up called Stability AI, which created the increasingly popular Stable Diffusion image-generating algorithm, was recently injected with a whopping $101 million round of investment capital. The company is now valued at a billion bucks.

Meanwhile other image generators such as DALL-E 2 and Midjourney have already acquired millions of users.

But investors are not just hot for image generators. Jasper, a generative writing company that’s just a year old (and one that plagues me with ads on Facebook), recently raised $125 million in venture capital and has a $1.5 billion valuation.

Investing in these technologies is so hot that a Gen AI Market Map from Sequoia recently went viral. The wealth wave rises and everyone wants to catch it.

Running the Gamut

Although image and prose (usually with an eye toward marketing) are the hot tickets in generative AI for now, they are just the proverbial tip of the iceberg. Indeed, it appears that Stability AI, for one, has much grander plans beyond images.

Generated by Stable Diffusion. Prompt was “color gamut”

The New York Times reports that the company’s soon-to-be massive investments in AI hardware will “allow the company to expand beyond A.I.-generated images into video, audio and other formats, as well as make it easy for users around the world to operate their own, localized versions of its algorithms.”

Think about that a second. Video. So people will be able to ask generative AI to quickly create a video of anything they can imagine.

Fake Film Flim-Flams

Who knows where this leads? I suppose soon we’ll be seeing “secret” tapes of the Kennedy assassination, purported “spy video” of the Trump/Putin bromance, and conspiracy-supporting flicks “starring” a computer-generated Joe Biden.

Generated by Stable Diffusion. Prompt was “human shakes hands with extraterrestrial”

We can only imagine the kind of crap that will turn up on YouTube and social media. Seems likely that one of the things that generative AI will generate is a whole new slew of conspiracists who come to the party armed with the latest videos of Biden handing over Hunter’s laptop to the pedophiliac aliens who wiped Hillary’s emails to ensure that Obama’s birth place couldn’t be traced back to the socialist Venusians who are behind the great global warming scam.

Even leaving political insanity aside, however, what happens to the film and television industries? How long until supercomputers are cranking out new Netflix series at the rate of one per minute?

Maybe movies get personalized. For example, you tell some generative AI to create a brand new Die Hard movie in which a virtual you plays the Bruce Willis role and, presto, out pops your afternoon’s entertainment. Yippee ki yay, motherfucker!

Play that Fakey Music

Then there are the sound tracks to go with those AI-gen movies. The Recording Industry Association of America (RIAA) is already gearing up for these battles. Here’s a snippet of what it submitted to the Office of the U.S. Trade Representative.

Generated by Stable Diffusion. Prompt was “music”

There are online services that, purportedly using artificial intelligence (AI), extract, or rather, copy, the vocals, instrumentals, or some portion of the instrumentals (a music stem) from a sound recording, and/or generate, master or remix a recording to be very similar to or almost as good as reference tracks by selected, well known sound recording artists.

To the extent these services, or their partners, are training their AI models using our members’ music, that use is unauthorized and infringes our members’ rights by making unauthorized copies of our members’ works. In any event, the files these services disseminate are either unauthorized copies or unauthorized derivative works of our members’ music.

That’s an interesting argument that will probably be tried by all creative industries. That is, just training your AI based on Internet copies of musical works violates copyright even if you have no intention of directly using that work in a commercial project. I imagine the same argument could be applied to any copyrighted work.

Of course, there are plenty of uncopyrighted works AI can be trained on, but keeping copyrighted stuff from being used for machine learning programs could put a sizeable dent in the quality of generative AI products.

So, it won’t only be media that’s generated. Imagine the blizzard of lawsuits until it’s all worked out.

Stay tuned.

Revenge of the Code

AI can code these days. Often impressively so. I suppose it’d be ironic if a lot of software developers were put out of work by intelligent software, but that’s the direction we seem headed.

Consider the performance of DeepMind’s AlphaCode, an AI designed to solve challenging coding problems. The team that designed it had it compete with human coders to solve 10 challenges on Codeforces, a platform hosting coding contests.

Generated by Stable Diffusion. The prompt was “Vinge singularity”

Prof. John Naughton writing in The Guardian describes the contest and summarizes, “The impressive thing about the design of the Codeforces competitions is that it’s not possible to solve problems through shortcuts, such as duplicating solutions seen before or trying out every potentially related algorithm. To do well, you have to be creative.”

On its first try, AlphaCode did pretty well. The folks at DeepMind write, “Overall, AlphaCode placed at approximately the level of the median competitor. Although far from winning competitions, this result represents a substantial leap in AI problem-solving capabilities and we hope that our results will inspire the competitive programming community.”

To me, a very amateurish duffer in Python, this is both impressive and alarming. An AI that can reason out natural language instructions and then code creatively to solve problems? It’s kind of like a Turing test for programming, one that AlphaCode might well be on target to dominate in future iterations.

Naughton tries to reassure his readers, writing that “engineering is about building systems, not just about solving discrete puzzles,” but color me stunned.

With this, we seem to be one step closer to Vernor Vinge’s notion of the technological singularity, in case you needed another thing to keep you up at night.

Up and Atoms

Movies? Music? Code?

What’s next for generative AI once it finds its virtual footing?

Generated by Stable Diffusion. Prompt was “atoms”

Well, atoms are the natural next step.

Ask yourself: if generative AI can easily produce virtual images, why not sculptures via 3D printers? Indeed, why not innovative practical designs?

This is not a new idea. There is already something called generative design. Sculpteo.com describes, “Instead of starting to work on a design from scratch, with a generative design process, you tell the program what you need to accomplish, you set your design goals and mention all the parameters you can. No geometry is needed to start a project. The software will then deliver you hundreds or thousands of design options, the AI can also make an in-depth analysis of the design and establish which one is the most efficient one! This method is perfect to explore design possibilities to get an optimal part.”

Yup, perfect.

How About Bio?

Generated by Stable Diffusion. Prompt was “bioprinter”

Not long ago, I wrote a tongue-in-cheekish post about the singularity. An acquaintance of mine expressed alarm about the idea. When I asked what scared her most, she said, “If AI can alter DNA, I’d say the planet is doomed.”

That particular scenario had never occurred to me, but it’s easy enough to see her point. DNA is biological code. Why not create a generative AI that can design new life forms almost as easily as new images?

Generated by Stable Diffusion. Prompt was “live cells”

In fact, why stop at design? Why not 3D print the new critters? Again, this is a concept that already exists. As the article “3D Bioprinting with Live Cells” describes it, “Live cell printing, or 3D bioprinting, is an emerging technology that poses a revolutionary development for tissue engineering and regeneration. This bioprinting method involves the creation of a spatial arrangement of living cells and biologics into a functionalized tissue.”

The good news? Probably some fascinating new science, designer replacement organs on demand, and all the strange new machine-generated meat you can eat!

The bad news? Shudder. Let’s not go there today.

Mickey Mouse and the Age of Innovative AI

Although we’re calling this generative AI, the better term might be innovative AI. We are essentially contracting AI writers, artists and coders to do our bidding. Sure, they’re imitating, mixing and matching human-made media, but they are nonetheless “the talent” and will only get better at their jobs. We, on the other hand, are promoted to the positions of supercilious art directors, movie producers and, inevitably (yuck) critics.

Generated by Stable Diffusion. Prompt was “Tim Burton 3 people caught in whirlpool”

If the singularity ever actually happens, this emerging age of innovative AI will be seen as a critical milestone. It feels like a still rough draft of magic, and it may yet all turn out wonderfully.

But I find it hard not to foresee a Sorcerer’s Apprentice scenario. Remember in Fantasia, when Mickey Mouse harnesses the power of generative sorcery and winds up all wet and sucked down a whirlpool?

Unlike Mickey, we’ll have no sorcerer to save our sorry asses if we screw up the wizardry. This means that, in sum, we need to use these powerful technologies wisely. I hope we’re up to it. Forgive me if, given our recent experiences with everything from social media madness to games of nuclear chicken, I remain a bit skeptical on that front.

Feature image generated by Stable Diffusion. The prompt terms used were "Hokusai tsunami beach people," with Hokusai arguably being the greatest artist of tsunamis in human history. In other words, the AI imitated Hokusai's style and came up with this original piece.

The Singularity Is Pretty Damned Close…Isn’t It?

What is the singularity and just how close is it?

The short answers are “it depends who you ask” and “nobody knows.” The longer answers are, well…you’ll see.

Singyuwhatnow?

Wikipedia provides a good basic definition: “The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth will become radically faster and uncontrollable, resulting in unforeseeable changes to human civilization.”

The technological growth in question usually refers to artificial intelligence (AI). The idea is that an AI capable of improving itself quickly goes through a series of cycles in which it gets smarter and smarter at exponential rates. This leads to a super intelligence that throws the world into an impossible-to-predict future.

Whether this sounds awesome or awful largely depends on your view of what a superintelligence would bring about, something that no one really knows.

The impossible-to-predict nature is an aspect of why, in fact, it’s called a singularity, a term that originates with mathematics and physics. In math, singularities pop up when the numbers stop making sense, as when the answer to an equation turns out to be infinity. It’s also associated with phenomena such as black holes where our understandings of traditional physics break down. So the term, as applied to technology, suggests a time beyond which the world stops making sense (to us) and so becomes impossible to forecast.

How Many Flavors Does It Come In?

From Wikipedia: major evolutionary transitions in information processing

Is a runaway recursively intelligent AI the only path to a singularity? Not if you count runaway recursively intelligent people who hook their little monkey brains up to some huge honking artificial neocortices in the cloud.

Indeed, it’s the human/AI interface and integration scenario that folks like inventor-author-futurist Ray Kurzweil seem to be banking on. To him, from what I understand (I haven’t read his newest book), that’s when the true tech singularity kicks in. At that point, humans essentially become supersmart, immortal(ish) cyborg gods.

Yay?

But there are other possible versions as well. There’s the one where we hook our little monkey brains up into one huge, networked brain to become the King Kong of superintelligences. Or the one where we grow a supersized neocortex in an underground vat the size of the Chesapeake Bay. (A Robot Chicken nightmare made more imaginable by the recent news that researchers got a cluster of brain cells to play Pong in a lab. No, really.)

Inane or Inevitable?

The first thing to say is that maybe the notion is kooky and misguided, the pipedream of geeks yearning to become cosmic comic book characters. (In fact, the singularity is sometimes called, with varying degrees of sarcasm, the Rapture for nerds.)

I’m tempted to join in the ridicule of the preposterous idea. Except for one thing: AI and other tech keeps proving the naysayers wrong. AI will never beat the best chess players. Wrong. Okay, but it can’t dominate something as fuzzy as Jeopardy. Wrong. Surely it can’t master the most complex and challenging of all human games, Go. Yawn, wrong again.

After a while, anyone who bets against AI starts looking like a chump.

Well, games are for kids anyway. AI can’t do something as slippery as translate languages or as profound as unravel the many mysteries of protein folding.  Well, actually…

But it can’t be artistic…can it? (“I don’t do drugs. I am drugs” quips DALL-E).

Getting Turing Testy

There’s at least one pursuit that AI has yet to master: the gentle art of conversation. That may be the truest test of human-level intelligence. At least, that’s the premise underlying the Turing test.

The test assumes you have a questioner reading a computer screen (or the equivalent). The questioner holds two conversations via screen and keyboard: one with a computer, the other with another person. If the questioner can’t figure out which conversation partner is the computer, then the computer passes the test because it can’t be distinguished from a human being.

Of course, this leaves us with four (at least!) big questions.

First, when will a machine finally pass that final exam?

Second, what does it mean if and when a machine does? Is it truly intelligent? How about conscious?

Third, if the answer to those questions seems to be yes, what’s next? Does it get a driver’s license? A FOX News slot? An OKCupid account?

Fourth, will such a computer spark the (dun dun dun) singularity?

The Iffy Question of When

In a recent podcast interview, Kurzweil predicted that some soon-to-be-famous digital mind will pass the Turing test in 2029.

“2029?” I thought. “As in just 7-and-soon-to-be-6-years-away 2029?”

Kurzweil claims he’s been predicting that same year for a long time, so perhaps I first read about it back in 2005 in his book The Singularity Is Near (lost somewhere in the hustle and bustle of my bookshelves). But back then, of course, it was nearly a quarter of a century away. Now, well, it seems damn near imminent.

Of course, Kurzweil may well turn out to be wrong. As much as he loves to base his predictions on the mathematics of exponentials, he can get specific dates wrong. For example, as I wrote in a previous post, he’ll wind up being wrong about the year solar power becomes pervasive (though he may well turn out to be right about the overall trend).

So maybe a computer won’t pass a full-blown Turing test in 2029. Perhaps it’ll be in the 2030s or 2040s. That would be close enough, in my book. Indeed, most experts believe it’s just a matter of time. One survey conducted at the Joint Multi-Conference on Human-Level Artificial Intelligence found that just 2% of participants predicted that an artificial general intelligence (or AGI, meaning a machine that thinks at least as well as a human being) would never occur. Of course, that’s not exactly an unbiased survey cohort, is it?

Anyhow, let’s say the predicted timeframe when the Turing test is passed is generally correct. Why doesn’t Kurzweil set the date of the singularity on the date that the Turing test is passed (or the date that a human-level AI first emerges)? After all, at that point, the AI celeb could potentially code itself so it can quickly become smarter and smarter, as per the traditional singularity scenario.

But nope. Kurzweil is setting his sights on 2045, when we fully become the supercyborgs previously described.

What Could Go Wrong?

So, Armageddon or Rapture? Take your pick.

What’s interesting to my own little super-duper-unsuper brain is that folks seem more concerned about computers leaving us in the intellectual dust than about us becoming ultra-brains ourselves. I mean, sure, our digital super-brain friends may decide to cancel humanity for reals. But they probably won’t carry around the baggage of our primeval, reptilian, selfish fear-fuck-kill-hate brains, or what Jeff Hawkins calls our “old brain.”

In his book A Thousand Brains, Hawkins writes about the ongoing frenemy-ish relationship between our more rational “new brain” (the neocortex) and the far more selfishly emotional though conveniently compact “old brain” (just 30% of our overall brain).

Basically, he chalks up the risk of human extinction (via nuclear war, for example) to old-brain-driven crappola empowered by tech built via the smart-pantsy new brain. For example, envision a pridefully pissed off Putin nuking the world with amazing missiles built by egghead engineers. And all because he’s as compelled by his “old brain” as a tantrum-throwing three-year-old after a puppy eats his cookie.

Now envision a world packed with superintelligent primate gods still (partly) ruled by their toddler old-brain instincts. Yeah, sounds a tad dangerous to me, too.

The Chances of No Chance

Speaking of Hawkins, he doesn’t buy the whole singularity scene. First, he argues that we’re not as close to creating truly intelligent machines as some believe. Today’s most impressive AIs tend to rely on deep learning, and Hawkins believes this is not the right path to true AGI. He writes,

Deep learning networks work well, but not because they solved the knowledge representation problem. They work well because they avoided it completely, relying on statistics and lots of data instead….they don’t possess knowledge and, therefore, are not on the path to having the ability of a five-year-old child.

Second, even when we finally build AGIs (and he thinks we certainly will if he has anything to say about it), they won’t be driven by the same old-brain compulsions as we are. They’ll be more rational because their architecture will be based on the human neocortex. Therefore, they won’t have the same drive to dominate and control because they will not have our nutball-but-gene-spreading monkey-brain impulses.

Third, Hawkins doesn’t believe that an exponential increase in intelligence will suddenly allow such AGIs to dominate. He believes a true AGI will be characterized by a mind made up of “thousands of small models of the world, where each model uses reference frames to store knowledge and create behaviors.” (That makes more sense if you read his book, A Thousand Brains: A New Theory of Intelligence). He goes on:

Adding this ingredient [meaning the thousands of reference frames] to machines does not impart any immediate capabilities. It only provides a substrate for learning, endowing machines with the ability to learn a model of the world and thus acquire knowledge and skills. On a kitchen stovetop you can turn a knob to up the heat. There isn’t an equivalent knob to “up the knowledge” of a machine.

An AGI won’t become a superintelligence just by virtue of writing better and better code for itself in the span of a few hours. It can’t automatically think itself into a superpower. It still needs to learn via experiments and experience, which takes time and the cooperation of human scientists.

Fourth, Hawkins thinks it will be difficult if not impossible to connect the human neocortex to mighty computing machines in the way that Kurzweil and others envision. Even if we can do it someday, that day is probably a long way off.

So, no, the singularity is not near, he seems to be arguing. But a true AGI may, in fact, become a reality sometime in the next decade or so–if engineers will only build an AGI based on his theory of intelligence.

So, What’s Really Gonna Happen?

Nobody knows who’s right or wrong at this stage. Maybe Kurzweil, maybe Hawkins, maybe neither or some combination of both. Here’s my own best guess for now.

Via deep learning approaches, computer engineers are going to get closer and closer to a computer capable of passing the Turing test, but by 2029 it won’t be able to fool an educated interrogator who is well versed in AI.

Or, if a deep-learning-based machine does pass the Turing test before the end of this decade, many people will argue that it only displays a façade of intelligence, perhaps citing the famous Chinese-room argument (which is a philosophical can of worms that I won’t get into here).

That said, eventually we will get to a Turing-test-passing machine that convinces even most of the doubters that it’s truly intelligent (and perhaps even conscious, an even higher hurdle to clear). That machine’s design will probably hew more closely to the dynamics of the human brain than do the (still quite impressive) neural networks of today.

Will this lead to a singularity? Well, maybe, though I’m convinced enough by the arguments of Hawkins to believe that it won’t literally happen overnight.

How about the super-cyborg-head-in-the-cloud-computer kind of singularity? Well, maybe that’ll happen someday, though it’s currently hard to see how we’re going to work out a seamless, high-bandwidth brain/supercomputer interface anytime soon. It’s going to take time to get it right, if we ever do. I guess figuring all those details out will be the first homework we assign to our AGI friends. That is, hopefully friends.

But here’s the thing. If we ever do figure out the interface, it seems possible that we’ll be “storing” a whole lot of our artificial neocortex reference frames (let’s call them ANREFs) in the cloud. If that’s true, then we may be able to swap ANREFs with our friends and neighbors, which might mean we can quickly share skills I-know-Kung-Fu style. (Cool, right?)

It’s also possible that the reticulum of all those acquired ANREFs will outlive our mortal bodies (assuming they stay mortal), providing a kind of immortality to a significant hunk of our expanded brains. Spooky, yeah? Who owns our ANREFs once the original brain is gone? Now that would be the IP battle of all IP battles!

See how weird things can quickly get once you start to think through singularity stuff? It’s kind of addictive, like eating future-flavored pistachios.

Anyway, here’s one prediction I’m pretty certain of: it’s gonna be a frigging mess!

Humanity will not be done with its species-defining conflicts, intrigues, and massively stupid escapades as it moves toward superintelligence. Maybe getting smarter–or just having smarter machines–will ultimately make us wiser, but there’s going to be plenty of heartache, cruelty, bigotry, and turmoil as we work out those singularity kinks.

I probably won’t live to see the weirdest stuff, but that’s okay. It’s fun just to think about, and, for better and for worse, we already live in interesting times.

Featured image by Adindva1: Demonstration of the technology "Brain-Computer Interface." Management of the plastic arm with the help of thought. The frame is made on the set of the film "Brain: The Second Universe."

On Why Gen Z May Have a Great Future After All

Poor Gen Z. In the United States, the oldest members of this generation (born between 1997 and 2012, give or take a few years) have only recently entered adulthood, and it’s been a pretty rough ride so far.

Let’s say the eldest are 25. That means that since their 18th birthdays, they’ve seen the divisive Trump years, the turmoil of the pandemic, an attempted insurrection, a sudden surge in inflation, record global warming, a couple of recessions (yeah, I’m calling this one even if the NBER isn’t), an invasion in Europe, the greatest political polarization since the U.S. Civil War, and the descent of the U.S. political system into the category of flawed democracy.

It’s little wonder that they’re turning out to be pessimistic about the future. The world in general, and the U.S. in particular, has looked like a real shit show in recent years, and the immediate future isn’t looking all that bright, either.

But will Gen Z really be the generation that comes to adulthood just in time to see economies collapse, the world burn up, nation states fall apart, and Orwellian authoritarian states become the norm?

Sure, it could happen, especially if they (and the rest of us) don’t fight against those dystopian futures. But here’s the thing. If you squint a bit, you can detect signs that the Gen Zers might have a pretty great future after all.

Here are just a few of the trends we can point to:

  • The emergence of a go-go green world: Few have commented on the trend so far, but renewable energy is growing at exponential rates. If we confine ourselves to solar and wind globally, their combined output is doubling every 3.75 years. Even if we round this up to 4, these two energy sources alone will provide more power in 2034 than was generated globally from all sources in 2021. The future will be renewable, and soon. That’s not a bad way to spend your early adulthood.
  • The rise of the smart (and hopefully super helpful) machines: Artificial intelligence is advancing at remarkable rates, which may have massive implications for productivity, innovation and more. Recently, DeepMind announced it had successfully used AI to predict the 3D structures of nearly every catalogued protein known to science: over 200 million proteins found in plants, bacteria, animals, and humans! Sure, it was hard to hear that astonishing news amid all the hubbub about the end of the Choco Taco, but history will judge this a major historic event (DeepMind, not the Taco). If AI can so quickly be productive in this one extremely challenging area of science, then imagine the impact it can have on worker productivity in general. As productivity rates rise over the next decade or more, so will income per capita (in theory). Of course, those gains need to be properly redistributed throughout the workforce, but that’s a different challenge. Yes, powerful AI could potentially have a number of truly terrible repercussions as well, but let’s focus on the bright side here.
  • The dazzling advances in molecular biology. The protein-folding achievement just noted is one part of a much larger set of advances in molecular biology. CRISPR, for example, is an astonishing technology. The rapid creation of the Covid-19 vaccines was just one of the modern miracles brought to you by the field. These advances will continue and, in fact, speed up thanks to the aforementioned machine learning techniques. If we can avoid the specter of bioterrorism, they might well mean that Gen Z will be the healthiest and longest-living generation in history. Death? Hah. That was so 2020s!
  • The renaissance in reformed political systems. Yes, the U.S. as well as various other nations are in danger of turning away from democracy and toward totalitarianism. Based on the popularity of scary-ass demagogues like my governor Ron DeSantis, we might well see the Orban-ization of America in the near future. However, at the same time, there are various grassroots movements (e.g., RepresentUS) that are seeking to reform the more corrupt and dysfunctional aspects of government. Perhaps if the U.S. can build up its immunity to demagoguery and neo-fascism quickly enough, there could be a flowering of pro-democracy movements here and abroad. This could eventually lead not just to more democracy globally but to more functional forms of democracy than have ever existed.
  • The rise in environmental protections and the strengthening of Earth’s ecosystems. Humanity has done an enormous amount of harm to the global ecosystem, but, along with the advancement of renewables, there will also be more programs such as 30×30, a worldwide initiative for governments to designate 30% of Earth’s land and ocean area as protected areas by 2030. Now it even looks as if the U.S. might be able to pass the Inflation Reduction Act of 2022, which would put about $385 billion into combating climate change and bolstering U.S. energy production through changes that would encourage cuts in carbon emissions. So, Gen Z may be the first generation to spend its early adulthood in a global culture that finally takes serious steps to heal much of the environmental damage humanity has already wrought.
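For what it’s worth, the doubling math in that first bullet roughly checks out. Here’s a back-of-the-envelope sketch: the 3.75-year doubling period is the post’s figure, while the ~10% share of global electricity that solar and wind supplied in 2021 is my own rough outside assumption.

```python
# Back-of-the-envelope check of the solar + wind doubling claim.
# Assumptions: 3.75-year doubling (from the post) and a ~10% share
# of global electricity generation in 2021 (my rough estimate).
doubling_period = 3.75                    # years per doubling
years = 2034 - 2021                       # 13 years out
growth = 2 ** (years / doubling_period)   # roughly 3.5 doublings, ~11x
share_2021 = 0.10                         # assumed 2021 share of global generation
share_2034 = share_2021 * growth          # implied 2034 output vs. 2021 total

print(f"Growth factor by 2034: {growth:.1f}x")
print(f"Implied 2034 output vs. all 2021 generation: {share_2034:.0%}")
```

An ~11x factor applied to a ~10% share lands just above 100% of 2021’s total generation, which is the bullet’s claim (ignoring, of course, that demand will grow too).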

Sure, there are lots of things that could go disastrously wrong. Some of them surely will. But there are also a lot of things that could go very right. Since the Gen Zers can’t tell for sure, they can join one of the many movements to make things better.

At the very least, they’ll be able to enjoy the camaraderie of people trying to improve things. And maybe, just maybe, they’ll be able to help create a way better world than the one they’ve inherited so far.

Featured image: By Dian Dong, Toronto climate change activist Alienor Rougeot calling upon the public, with the youth, to take action in one of Fridays for Future's earlier climate strikes, 15 March 2019

Thinking About Thinking

What is thinking?

There has been a tsunami of articles related to cognition. How does your pet think? How should we build thinking machines (or should we at all)? How can you think more effectively? How can intelligence itself be boosted? Etc.

This got me thinking about thinking, so I became involved in several social media discussions on how we should view the thinking process. Below is a short definition I’ve arrived at, one that potentially includes cognition among many animals as well as, perhaps, computing devices today and/or in the future:

Thinking is the process of assimilating sensory information, integrating it into existing internal models of reality (or creating new models derived from old ones), generating inferences about the past, future and present based on those models, and using those inferences as more input that can be assimilated into internal models via continuing feedback loops.
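As a purely illustrative caricature (not a claim about how brains actually work), that assimilate–infer–feedback cycle can be sketched in a few lines, with the internal “model of reality” deliberately shrunk down to a single running estimate:

```python
def thinking_loop(observations, learning_rate=0.3):
    """Toy sketch of the definition above: infer (predict) the next
    sensory input from an internal model, then feed the inference's
    error back in to revise the model. The single-number 'model' is
    a deliberate oversimplification for illustration only."""
    model = observations[0]             # seed the internal model of "reality"
    inferences = []
    for obs in observations[1:]:
        inference = model               # infer the next input from the model
        inferences.append(inference)
        error = obs - inference         # compare inference with the new input
        model += learning_rate * error  # feedback loop: assimilate the error
    return inferences

# With a steady signal, the inferences gradually converge on the input.
print(thinking_loop([0, 10, 10, 10, 10]))
```

Even this cartoon shows the shape of the definition: the model is never handed the truth directly; it only ever updates itself through the gap between what it inferred and what it then sensed.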

This is succinct but I’m sure it oversimplifies things. For example, infants are likely born with a certain amount of “hard-wiring” that allows them to interpret the world in basic ways even before they’ve developed many internal models about how the world works. Still, I’d argue that this definition gets at what we mean by thinking, whether it relates to bugs, birds, elephants or hominids.

What’s the point? Well, cognition is quickly becoming the name of the game in modern society in nearly any discipline you can name: learning, artificial intelligence, information science, bioethics, research, analytics, innovation, marketing, justice, genetics, etc.

A lot of what we will be doing in the future is trying to answer hard questions about thinking:

  • What (and how) do other people (e.g., customers, employees, citizens, etc.) think?
  • How can we make learning more efficient and effective?
  • How can we make machines that are better at solving problems?
  • How can we understand what is in the minds of criminals so that we can reduce crime and make better decisions in our justice systems?
  • How should we view and treat other thinking animals on the planet?
  • How do we know (or decide) when machines are thinking, and to what degree is thinking different from consciousness?

To have better discussions around these and similar questions, we’ll need to develop better and more understandable cross-disciplinary definitions of terms such as thinking, consciousness (which seems to be a kind of attention to thinking), and comprehension. A lot of progress comes from our growing ability to create thinking machines, but we also seem to be getting considerably better at understanding human cognition as well. The next couple of decades or so should be interesting.

(Note: I wrote a version of this post nearly a decade ago.)

Image author: Solipsist, from Wikimedia Commons.
Featured image source: Robert Fludd. From https://commons.wikimedia.org/wiki/File:Robert_Fludd,Tomus_secundus…,_1619-1621_Wellcome_L0028467.jpg