Yes, I’m a bit obsessed. This is another of my generative AI posts in which I’m experimenting with Stable Diffusion’s ability to imitate the styles of illustrators whose works reside on the Internet (among other places).
All the following illustrators are deceased, which is one of my criteria for using their names this way, as I’ve explained elsewhere.
I carried out a simple experiment. I entered the prompts “illustrator XXX draws men” or “illustrator XXX draws women,” substituting real names for XXX. The images below are some of what I consider to be the better outcomes.
It’s true some images I got (and mostly rejected) are a bit weird. Stable Diffusion is still lousy with hands, legs and arms. You’ll sometimes get images of people with three legs or arms growing out of weird places, and hands often have the wrong number of digits, not to mention very long, creepy and crooked fingers.
It’s interesting that the incredible computing power behind these images still can’t count to two and three. I guess that like many of their genius human counterparts, AI artists are crap at the STEM subjects.
Anyway, have a look. You’ll see that each time a different artist’s name is referenced, the AI tends to produce different types of images.
Note that I’ve cut and pasted these short artists’ bios from longer Wikipedia entries. For the full list of illustrators, you can go to this page. The Wikipedia entry includes, of course, living as well as deceased illustrators.
I must reiterate that these illustrations were not drawn by these artists. Instead, I just used their names to help “conjure” the images. They do not reflect the quality of their original art.
Salomon van Abbé – etcher and illustrator of books and magazines
Salomon van Abbé (born Amsterdam, 31 July 1883, died London, 28 February 1955), also known as Jack van Abbé or Jack Abbey, was an artist, etcher and illustrator of books and magazines.
Edwin Austin Abbey – American artist, illustrator, and painter
Edwin Austin Abbey RA (April 1, 1852 – August 1, 1911) was an American muralist, illustrator, and painter. He flourished at the beginning of what is now referred to as the “golden age” of illustration, and is best known for his drawings and paintings of Shakespearean and Victorian subjects, as well as for his painting of Edward VII’s coronation.
Elenore Abbott – American book illustrator, scenic designer, and artist
Elenore Plaisted Abbott (1875–1935) was an American book illustrator, scenic designer, and painter. She illustrated early 20th-century editions of Grimm’s Fairy Tales, Robinson Crusoe, and Kidnapped.
Dan Adkins – American illustrator of comic books and science-fiction magazines
Danny L. Adkins (March 15, 1937 – May 3, 2013) was an American illustrator who worked mainly for comic books and science-fiction magazines.
Alex Akerbladh – Swedish-born UK comics artist
Alexander (Alex) Akerbladh was a Swedish-born comics artist who drew for the Amalgamated Press in the UK from the 1900s to the 1950s. He painted interiors and figures in oils and watercolours. A freelancer, he worked from home, and his pages are said to have arrived at the AP “in grubby condition, with no trouble taken to erase pencil marks or spilled ink”.
Constantin Alajalov – American painter and illustrator
Constantin Alajálov (also Aladjalov) (18 November 1900 — 23 October 1987) was an Armenian-American painter and illustrator.
Maria Pascual Alberich – Spanish book illustrator
Maria Pasqual i Alberich (Barcelona, 1 July 1933 – 13 December 2011) was a prolific and popular Spanish illustrator.
Annette Allcock – English children’s book illustrator
Annette Allcock, née Rookledge (28 November 1923 – 2 May 2001), was a British artist and illustrator.
The Alpha Test
You may have noticed that all of these illustrators have last names that begin with the letter A. That’s because I only drew from the A section of Wikipedia’s list of illustrators, and I excluded living artists.
I mention this to show that these images are just the tip of the proverbial iceberg. I suspect that eventually there will be sites, perhaps attached to the AI generators themselves, that use the names of artists as if they were colors in a palette or font styles.
That’s a very odd thought.
Or perhaps they’ll create algorithms that simply bucket various artists into “schools” and provide examples of those artistic styles. Instead of using Georges Seurat as a “style,” perhaps they will just have a “Pointillist” style that incorporates Seurat into it.
Anyway, we will collectively figure it out. Somebody will probably, perhaps inevitably, make money off the idea. And on we’ll go, with past humanity being used as design tools for future AI.
Here I lie on the sofa, engaged in one of the guiltier Internet pleasures for hopeless nerds: phone blogging.
My head on one throw pillow, my feet on another, I lie here tapping away with my right thumb while petting the cat with my left hand as he lies on the rug below, purring away like a small, warm, happy engine. He too is pro-phone-blogging.
It’s wonderfully relaxing, almost the opposite of an activity that should, by rights, be similar: social media posting.
Sure, I know, blogging is technically social media. There are the occasional comments and likes, which are appreciated, and I’ve even taken to using my Reader to peruse and appreciate other blogs.
But for me, blogging is meditative and creative. It helps me think through issues. It is something to do in the evenings instead of channel surfing or doom scrolling. It’s like taking my brain for a walk.
Blogging via a phone app (in my case, WordPress) takes the pressure off. It’s a much different setup than the standing desk I use for work all day.
The irony is that I’ve never cared for texting given my large hands and wide, knuckly fingers. I’m a reasonably good touch typist on a keyboard but a pathetically poor smartphone plunker.
Yet, my crappy thumb-typing is an advantage when it comes to phone blogging. It requires a plodding, leisurely pace that complements my slowed brain after a full day’s work.
It’s true that I still don’t do most graphics this way (though I’m working on it using the Canva app). I tend to use a laptop for final touches, but the lion’s share of writing is done like this.
So, even as the more technically ambitious ride off into the metaverse, outfitted like virtual knights in their latest VR gear, ready for new quests, I literally poke along. A virtual donkey, happily behind the times, nosing along the winding, reticular paths of my own idle thoughts.
So, that’s how I do it, Renard, in answer to your query on the topic. It’s not fancy and no doubt looks to my wife like sheer, irredeemable sloth.
But, the dinner dishes done, to me it’s a nice hour or so off the beaten paths of the increasingly frantic, irate and irrational Internet. This is the back way, the scenic route, the road less traveled. It is the Tao of the Slow Stroll, and I can recommend it.
It’s hard for someone who watched the gorgeous clusterfucks of Web 1.0 and 2.0 to get starry-eyed about Web3 (or Web 3.0, or whatever we’re calling it this week).
But a daily dose of cynicism is among the sundry bitter pills that older generations take with their morning coffee. You know, to stay regular.
So, I wanted to use this post as a reason to give the W3 champions the benefit of the doubt and educate myself better about the latest “new and improved” world wide reticulum.
Blockchains: Ledgers for Liberté!
Many of the folks espousing blockchain and cryptocurrency are enthusiastic to the point of mania, seeing the tech as pivotal to forging a brave new Web3 world. Most other people are, however, blockchain agnostic or just plain apathetic. It seems like too much trouble to figure out how the damned thing works. (Then throw NFTs into the mix and you have a whole new level of bafflement.)
So let’s indulge in some obligatory but necessarily incomplete descriptions before we continue.
WTF Is a Blockchain?
A blockchain is a glorified ledger. It records debits, credits, and closing balances. The magic word is “transactions.”
If you’re old enough to remember balancing a checkbook, then it’s a lot like that, except it’s digital. And somehow going to save the world.
So it’s a spreadsheet? Kind of. Maybe database is more accurate. The data are stored in virtual “blocks” that are virtually “chained” together. Thus, of course, the name.
Bored yet? Hang on. That chain thing? In theory, you can’t break or modify it. So, the database can’t be changed. Fraud is, therefore, tough, and you don’t need some trusted third party to vouch that everything is on the up and up. No traditional contracts and middlemen. In that sense, it’s decentralized. It’s all about the network, baby.
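To make the “chained blocks” idea concrete, here’s a toy sketch in Python. This is my own illustration, not how Bitcoin or any real chain is implemented; the point is just that each block stores a hash of its own contents plus the previous block’s hash, so rewriting an old entry invalidates everything after it.

```python
import hashlib
import json

def block_hash(body):
    # Deterministically hash a block's contents
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev, "transactions": transactions}
    chain.append({**body, "hash": block_hash(body)})

def is_valid(chain):
    for i, block in enumerate(chain):
        body = {"prev_hash": block["prev_hash"], "transactions": block["transactions"]}
        if block["hash"] != block_hash(body):
            return False  # a block's contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the previous block is broken
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
valid_before = is_valid(chain)   # the untouched chain checks out

chain[0]["transactions"][0]["amount"] = 500  # try to rewrite history
valid_after = is_valid(chain)    # tampering is detected
```

Real blockchains layer proof-of-work or other consensus rules on top so that no single party can simply recompute all the hashes, but the tamper-evidence shown here is the core trick.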
One common trope is that it’s tech forged by libertarian nerds who hate big government, big business and bureaucracies in all their nefarious forms. Therefore, we wind up with an amalgamation of something that pushes all their hot buttons: software plus finance plus ciphers plus decentralization plus implicit political ideology.
So, no, not a sexy look.
But make no mistake. Blockchain is not just for geeks. Not anymore. In fact, whole industries have bought into it. For example, energy companies use it to build peer-to-peer energy trading platforms so that homeowners with solar panels can sell their excess solar energy to neighbors.
Therefore, blockchain becomes solar chic.
Cryptocurrency Runs on Blockchains
Blockchains and cryptocurrencies aren’t synonymous, but they often go hand in hand. Cryptocurrencies are digital money that’s kept secure via cryptography so there’s no counterfeiting them. Most of these currencies are housed on decentralized systems where financial records are maintained and transactions are verified via blockchains.
Got it? Blockchains are the motors that make cryptocurrencies run.
A Very Short History of Cryptocurrencies
The first and best known cryptocurrency, Bitcoin (or BTC), is part of a longish history with its own mythology. The second most common and well known cryptocurrency is ether, or ETH, which is based on Ethereum technology. But these are just the big guns. Other currencies have been popping up like Mario mushrooms after a virtual rainfall. In fact, there are now more than 12,000 cryptocurrencies.
1991: Stuart Haber and W. Scott Stornetta introduce blockchain technology to time-stamp digital documents, making them tamper-proof
2000: Stefan Konst publishes his theory of cryptographic secured chains
2004: Hal Finney introduces a digital cash system that keeps the ownership of tokens registered on a “trusted” server
2008: Mystery person Satoshi Nakamoto publishes the famed bitcoin white paper, describing a “distributed blockchain” that provides a peer-to-peer network for time stamping
2009: Nakamoto releases the Bitcoin software and mines the first block, putting the white paper into practice
2014: Various industries start developing blockchain technologies that don’t include cryptocurrencies
2015: Ethereum Frontier Network is launched, and along come smart contracts and dApps (for decentralized applications)
2016: In separate incidents, attackers exploit a bug in the Ethereum DAO code and hack the Bitfinex bitcoin exchange
2019: Amazon announces its Managed Blockchain service on AWS
2021: A study by Cambridge University determines that bitcoin uses more electricity than Argentina or the Netherlands. El Salvador becomes the first country to make bitcoin legal tender, requiring all businesses to accept the cryptocurrency.
2022: The University of Cambridge estimates that the two largest proof-of-work blockchains, bitcoin and ether, together use twice as much electricity in one year as the whole of Sweden. The Central African Republic is the second nation to make bitcoin legal tender.
Raise a Glass to the WWW 3.0
Okay, with all the crypto and blockchain out of the way, let’s get back to Web3.
(Oh, wait, I forgot NFTs, or non-fungible tokens, which are like one-of-a-kind digital objects that can be worth big money as collectables. These seem insane to me, which probably means they’ll play some pivotal economic role in the future.)
These are the chief technologies and implied principles of Web3. As with the previous two iterations of the Web, the advocates for Web3 argue that they just want to make the world a better place (even if they happen to make a killing along the way).
The main argument against the status quo is that our current systems are too centralized and corporatized. Financial institutions want to control money, governments want to control legal frameworks, and the biggest tech companies want to control data. Daniel Saito sums it up well here:
The problem with this system is that it leads to inequality and injustice. The rich get richer while the poor get poorer. The powerful get more power while the powerless are left behind. The web 3.0 economy, on the other hand, is based on a decentralized system. This means that there is no central authority or institution that has control over the system. Instead, it is a network of computers that are all connected to each other.
This makes me smile and sigh. Meet the new techno-idealist, same as the old techno-idealist.
Taking a More Skeptical Approach
Does anyone really believe that the venture capitalists are funding this stuff for the good of humanity? Do we really expect, sticking with an example that Saito uses in his article, that the NIMBYs are going away and making room for high-speed rail just because someone’s throwing bitcoins at the project?
At the same time, hope springs eternal. I truly want to think that these technologies will make things better in some ways. Maybe we can avoid a certain amount of corruption, fraud, and concentration of power through blockchains. I want to believe.
A … potential cause for concern is the shift away from centralized exchanges, which are required to conduct identity checks for customers, to decentralized exchanges like dYdX and Uniswap, which is estimated to be the largest such exchange. Decentralized exchanges rely on peer-to-peer systems to operate. This means that several computers serve as nodes in a larger network, in contrast to centralized exchanges that are operated by a single entity. Decentralized exchanges make it easier for traders to anonymously buy and sell coins; most such exchanges do not currently comply with “know your customer” laws, which means that it can be cumbersome for government officials to identify the parties involved in cryptocurrency transactions. Because these exchanges are not run by a single entity, they can be exceedingly difficult to police and lack the sanctions-enforcement mechanism of more centralized exchanges.
Look, people are people. The worst ones want to accrue and maintain power at the expense of others. To the extent that Web3 makes this less likely, good.
To the degree it reduces accountability, however, we could wind up with greater concentrations of power. Power that can’t be changed–even theoretically–at the voting booth. Careful what you wish for.
Stay Hungry and Hopeful…But Also Skeptical
I like webs and networks (and wouldn’t have a blog called The Reticulum otherwise). I think networks are fundamental to the universe whereas hierarchies are only emergent.
So, to the degree we can move in the direction of efficient and effective networks, I’m all in. But don’t ask me to believe that Web3 is going to solve the world’s ills via the mechanics of blockchain and crypto. It won’t. The best we can hope for is movement in the direction of a fairer, more just and saner world free of power-hoarding, dangerous-tech-wielding dictator types. (We’re looking at you, Vladimir.)
Free markets absolutely have their place. So do collectives. Ultimately what we want are socioeconomic and technical systems that allow us to find the right balance, one that keeps the network from stumbling into disastrous chaos on one hand or frozen intractability on the other hand. Both spell doom.
When I was a kid, we had this huge book of prints by Leonardo da Vinci. I loved it. Still do. So, just for fun, I used Stable Diffusion AI to get 30 images of 20th and 21st century political and business leaders as they might have been drawn by da Vinci. Check them out and see if you can identify these leaders.
More specifically, I was playing with putting famous haiku poems into the “Generate Image” box and seeing what kinds of images the Stable Diffusion generator would concoct.
It was pretty uninspiring stuff until I started adding the names of specific illustrators in front of the haiku. Things got more interesting artistically but, from my perspective, murkier ethically.
The Old Pond Meets the New AIs
The first famous haiku I used was “The Old Pond” by Matsuo Bashō. Here’s how it goes in the translation I found:
An old silent pond
A frog jumps into the pond—
Splash! Silence again.
At first, I got a bunch of photo-like but highly weird and often grotesque images of frogs. You’ve got to play with Stable Diffusion a while to see what I mean, but here are a few examples:
Okay, so far, so bad. A failed experiment. But that’s when I had the bright idea of adding certain illustrators’ names to the search so the generator would be able to focus on specific portions of the reticulum to find higher quality images. For reasons that will become apparent, I’m not going to mention their names. But here are some of the images I found interesting:
Better, right? I mean, each one appeals to different tastes, but they aren’t demented and inappropriate. There was considerable trial and error, and I was a bit proud of what I eventually kept as the better ones.
“Lighting One Candle” Meets the AI Prometheus
The next haiku I decided to use was “Lighting One Candle” by Yosa Buson. Here’s how that one goes:
The light of a candle
Is transferred to another candle—
Spring twilight.
This time I got some fairly schmaltzy images that you might find in the more pious sections of the local greeting card aisle. That’s not a dig at religion, by the way, but that aesthetic has never appealed to me. It seems too trite and predictable for something as grand as God. Anyway, the two images of candles below are examples of what I mean:
I like the two trees, though. I think it’s an inspired interpretation of the poem, one that I didn’t expect. It raised my opinion of what’s currently possible for these AIs. It’d make for a fine greeting card in the right section of the store.
But, still not finding much worth preserving, I went back to putting illustrators’ names in with the haiku. I thought the following images were worth keeping.
In each of these cases, I used an illustrator’s name. Some of these illustrators are deceased but some are still creating art. And this is where the ethical concerns arise.
Where Are the New Legal Lines in Generative AI?
I don’t think the legalities relating to generative AI have been completely worked out yet. Still, it does appear that artists are going to have a tough time battling huge tech firms with deep pockets, even in nations like Japan with strong copyright laws. Here’s one quote from the article “AI-generated Art Sparks Furious Backlash from Japan’s Anime Community”:
[W]ith art generated by AI, legal issues only arise if the output is exactly the same, or very close to, the images on which the model is trained. “If the images generated are identical … then publishing [those images] may infringe on copyright,” Taichi Kakinuma, an AI-focused partner at the law firm Storia and a member of the economy ministry’s committee on contract guidelines for AI and data, told Rest of World….But successful legal cases against AI firms are unlikely, said Kazuyasu Shiraishi, a partner at the Tokyo-headquartered law firm TMI Associates, to Rest of World. In 2018, the National Diet, Japan’s legislative body, amended the national copyright law to allow machine-learning models to scrape copyrighted data from the internet without permission, which offers up a liability shield for services like NovelAI.
How About Generative AI’s Ethical Lines?
Even if the AI generators have relatively solid legal lines defining how they can work, the ethical lines are harder to draw. With the images I generated, I didn’t pay too much attention to whether the illustrators were living or dead. I was, after all, just “playing around.”
But once I had the images, I came to think that asking the generative AI to ape someone’s artistic style is pretty sleazy if that artist is still alive and earning their livelihood through their art. That’s why I don’t want to mention any names in this post. It might encourage others to add the names of those artists into image generators. (Of course, if you’re truly knowledgeable about illustrators, you’ll figure it out anyway, but in that case, you don’t need any help from a knucklehead like me.)
It’s one thing to ask an AI to use a Picasso-esque style for an image. Picasso died back in 1973. His family may get annoyed, but I very much doubt that any of his works will become less valuable due to some (still) crummy imitations.
But it’s a different story with living artists. If a publisher wants the style of a certain artist for a book cover, for example, then the publisher should damn well hire the artist, not ask a free AI to crank out a cheap and inferior imitation. Even if the copyright system ultimately can’t protect those artists legally, we can at least apply social pressure to the AI generator companies as customers.
I think AI generator firms should have policies that allow artists to opt out of having their works used to “train” the algorithms. That is, they can request to be put on the equivalent of a “don’t imitate” list. I don’t even know if that’s doable in the long run, but it might be one step in a more ethical direction.
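For what it’s worth, the mechanics of such a list could start out dead simple. Here’s a hypothetical sketch in Python; the artist names and the screening function are invented for illustration, and no generator I know of actually works this way:

```python
# Hypothetical "don't imitate" list, checked before a prompt is sent
# to an image generator. Names here are made up for illustration.
OPT_OUT_LIST = {"jane example", "arthur sample"}

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt is allowed, False if it names an opted-out artist."""
    lowered = prompt.lower()
    return not any(name in lowered for name in OPT_OUT_LIST)

print(screen_prompt("Jane Example draws a frog by an old pond"))  # False: blocked
print(screen_prompt("a frog jumps into an old silent pond"))      # True: allowed
```

Of course, a real system would have to handle misspellings, workarounds, and style descriptions that never mention a name, which is exactly why I’m not sure it’s doable in the long run.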
The Soft Colonialism of Probability and Prediction?
First is the exploitation of cultural capital. These models exploit enormous datasets of images scraped from the web without authors’ consent, and many of those images are original artworks by both dead and living artists….The second concern is the propagation of the idea that creativity can be isolated from embodiment, relations, and socio-cultural contexts so as to be statistically modeled. In fact, far from being “creative,” AI-generated images are probabilistic approximations of features of existing artworks….AI art is, in my view, soft propaganda for the ideology of prediction.
To an extent, his first concern about cultural capital is related to my previous discussion about artists’ legal and moral rights, a topic that will remain salient as these technologies evolve.
His second concern is more abstract and, I think, debatable. Probabilistic and predictive algorithms may have begun in the “Global North,” but probability is leveraged in software wherever it is developed these days. It’s like calling semiconductors part of the “West” even as a nation like Taiwan innovates the tech and dominates the space.
Some of his argument rests on the idea that generative AI is not “creative,” but that term depends entirely on how we define it. Wikipedia, for example, states, “Creativity is a phenomenon whereby something new and valuable is formed.”
Are the images created by these technologies new and valuable? Well, let’s start by asking whether they represent something new. By one definition, they absolutely do, which is why they are not infringing on copyright. On the other hand, for now they are unlikely to create truly new artistic expressions in the larger sense, as the Impressionists did in the 19th century.
As for “valuable,” well, take a look at the millions if not billions of dollars investors are throwing their way. (But, sure, there are other ways to define value as well.)
My Own Rules for Now
As I use and write about these technologies, I’ll continue to leverage the names of deceased artists. But for now I’ll refrain from using images based on the styles of those still living. Maybe that’s too simplistic and binary. Or maybe it’s just stupid of me not to take advantage of current artistic styles and innovations. After all, artists borrow approaches from one another all the time. That’s how art advances.
I don’t know how it’s all going to work out, but it’s certainly going to require more thought from all of us. There will never be a single viewpoint, but in time let’s hope we form some semblance of consensus about what are principled and unprincipled usages of these technologies.
Featured image is from Stable Diffusion. I think I used a phrase like "medieval saint looking at a cellphone." Presto.
You’ve heard of quantum entanglement, if only in the context of Star Trekian jargon-laden sci-fi expositions like those spouted by Lieutenant Commander Data.
Unlike tribbles and Klingons, however, quantum entanglement is unnervingly real. That is, you have two or more particles that are tangled up in such a way that, even when they’re separated by a long distance, the quantum state of one of them is somehow affected by or reflected in the states of the others.
Weird and kind of spooky, right? Which is why Einstein dubbed it, with a degree of mockery since he wasn’t quite buying the reality of it, “spooky action at a distance.” Since then, of course, entanglement has been tested many times. At this point, it’s no longer a theory but a practical fact, spooky or no.
Entangling the Big, Hairy Stuff
But what scientists have not done as often is apply quantum entanglement to big stuff: you know, like your hair.
Some recent experiments have worked with two aluminum “drums” that are huge by comparison to subatomic particles: about a fifth the width of a human hair. That is, 20 micrometers wide by 14 micrometers long and 100 nanometers thick, weighing in at a whopping 70 picograms (okay, so small, but still macroscopic).
ScienceAlert describes the process as follows: “Researchers vibrated the tiny drum membranes using microwave photons and kept them in a synchronized state in terms of their position and velocities. To prevent outside interference, a common problem with quantum states, the drums were cooled, entangled, and measured in separate stages while inside a cryogenically chilled enclosure. The states of the drums are then encoded in a reflected microwave field that works in a similar way to radar.”
“To verify that entanglement is present, we do a statistical test called an ‘entanglement witness,’” NIST theorist Scott Glancy said. “We observe correlations between the drums’ positions and momentums, and if those correlations are stronger than can be produced by classical physics, we know the drums must have been entangled.”
John Teufel, a physicist at NIST and a co-author of one of the papers on this topic, said, “These two drums don’t talk to each other at all, mechanically. The microwaves serve as the intermediary that lets them talk to each other. And the hard part is to make sure they talk to each other strongly without anybody else in the universe getting information about them.”
So, Take That, Heisenberg!
This is clearly cool on multiple levels. First, of course, quantum entanglement at the macroscopic level! What? Is that a thing?
Why, yes. Yes it is. In fact, this isn’t the first time it’s happened.
But, second, this time around physicists have (sort of) gotten around the impossibility of measuring both position and momentum when investigating quantum states.
Glancy states, “The radar signals measure position and momentum simultaneously, but the Heisenberg uncertainty principle says that this can’t be done with perfect accuracy. Therefore, we pay a cost of extra randomness in our measurements. We manage that uncertainty by collecting a large data set and correcting for the uncertainty during our statistical analysis.”
Tap into Your Quantum Network
The concept of a quantum network is, of course, like catnip to people interested in reticula. In the future such networks could “facilitate the transmission of information in the form of quantum bits, also called qubits, between physically separated quantum processors.”
For now, these networks are mostly fiction, but they potentially have a lot of communication and computation applications. One is that they become the backbone of unhackable computer networks. In fact, Mara Johnson-Groh writes that it’s already the case that “basic quantum communications called quantum key distributions are helping secure transmissions made over short distances.”
Johnson-Groh predicts that “quantum networks will be important in scientific sensing first” and highlights the idea of optical telescopes from all over the world connected via a quantum network. The goal would be dramatically improving resolution, resulting in “ground-breaking discoveries about the habitability of nearby planets, dark matter and the expansion of the universe.”
The Entangled Reticulum
There’s something poetic about using a quantum network in order to more clearly see the Reticulum constellation (aka, the net) among other things.
But the poetry runs deeper than that. Quantum entanglement is said to occur naturally. Assuming this to be true, I can imagine countless entangled particles streaming off in opposite directions through the universe, encompassing distances that would put the length of a single galaxy to shame.
This would mean large portions of our universe are entangled. The spin of one photon–zinging at lightspeed well beyond the ken of our greatest telescopes–could be entangled with a local photon that happens to meet your retina on a starry night. Thus, the universe is an infinitely complex reticulum of star stuff lighting our consciousness with instantaneous connections from unimaginable distances.
Featured image: Star-forming region called NGC 3324 in the Carina Nebula. Captured in infrared light by NASA’s new James Webb Space Telescope, this image reveals for the first time previously invisible areas of star birth.
To a large extent, you are the culmination of activity in your neocortex. That’s the part of your brain that drives sensory perception, logic, spatial reasoning, and language, among other things. Without it, you’re pretty much an inarticulate lizard person (which I’m afraid is my disposition all too often in the mornings as I read recent newspaper headlines).
Anyway, neuroscientist Jeff Hawkins conceives the neocortex as a matrix of thousands of smaller brains. Amid this reticulum, each minibrain (my word, not his) stores many different models of the world. Somewhere in there there’s a mental model for your car, your house, your pets, your significant other, whatever politician you love to hate, that sweaty dude who walks that barky dog in the neighborhood every morning, and, well, everything else in your personal universe.
The minibrains are cortical columns, each quite intelligent on its own. Hawkins writes,
A cortical column occupies about one square millimeter. It extends through the entire 2.5 mm thickness, giving it a volume of 2.5 cubic millimeters. By this definition, there are roughly 150,000 cortical columns stacked side by side in a human neocortex. You can imagine a cortical column like a little piece of thin spaghetti. A human neocortex is like 150,000 short pieces of spaghetti stacked vertically next to each other.
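The quoted figures are easy to run through as a sanity check: 150,000 columns of one square millimeter each works out to roughly 1,500 square centimeters of cortical sheet and 375 cubic centimeters of tissue. The only things added below are the unit conversions.

```python
columns = 150_000
area_per_column_mm2 = 1.0   # "about one square millimeter" each
thickness_mm = 2.5          # full depth of the neocortex

total_area_cm2 = columns * area_per_column_mm2 / 100                    # 100 mm^2 per cm^2
total_volume_cm3 = columns * area_per_column_mm2 * thickness_mm / 1000  # 1,000 mm^3 per cm^3

print(total_area_cm2)    # 1500.0
print(total_volume_cm3)  # 375.0
```

A sheet of spaghetti-sized columns adding up to a third of a liter of brain: the numbers hang together.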
Have Spaghetti, Will Reference
Okay, so you are largely the sum total of lots of cortical columns. But what does a cortical column actually do?
One of its primary purposes is to store and activate reference frames: oodles and oodles of reference frames.
A reference frame is where we access the information about what an object (or even an abstract concept) is and where it’s located in the world. For example, you have a reference frame for a coffee cup in various cortical columns. You know such a cup when you see it, and feel it, and sip from it. You also know where it is and how it moves. When you turn the cup upside down (hopefully sans coffee), the reference frame in your head also moves.
Reference frames have essential virtues such as:
allowing the brain to learn the structure and components of an object
allowing the brain to mentally manipulate the object as a whole (which is why you can envision an upside down coffee cup)
allowing your brain to plan and create movements, even conceptual ones
Thanks to reference frames, just one cortical column can “learn the three-dimensional shape of objects by sensing and moving and sensing and moving.” As you walk through a strange house, for example, you are mentally building a model of the house using reference frames. This includes your judgments about it. (“Hate that mushy chair in the living room, love that painting in the study, what were they thinking with that creepy bureau in the bedroom!?”)
I Think, Therefore I Predict
You’re a futurist. We all are. Because we’re subconsciously predicting stuff every moment of our conscious day.
Let’s say, for example, that you pick up your cup of coffee without even thinking about it. Your brain predicts the feel of the familiar, smooth, warm ceramic. That’s what you get most mornings. If instead your brain gets something different, it registers surprise and draws your attention to the cup.
Maybe it’s a minor surprise, like a small crack in the cup. Maybe it’s a bigger one, as when one of your fingers unexpectedly brushes a cockroach that then quickly crawls up your arm. Argh!
Either way, you didn’t get what you subconsciously predicted based on your reference frame. These tiny predictions happen all the time. Your whole life is spent predicting what comes next, even if you’re not fully aware of it. If something happens that doesn’t match your mental model, your brain gets busy trying to figure out what went wrong with your expectation/prediction and what to do next.
(“Roach! Need to swat it! Where did I put that crappy news magazine? Come on, cortical-column-based reference frames, help me find it! Fast!”)
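This predict-and-compare loop is simple enough to caricature in a few lines. Here's a toy Python sketch (my illustration, not Hawkins' actual model) in which a "reference frame" just stores the expected sensation at each location on an object:

```python
# Toy sketch of prediction and surprise (an illustration, not Hawkins'
# actual model): a "reference frame" stores the expected sensation at
# each location on an object; a mismatch flags surprise and grabs
# your attention.
coffee_cup_frame = {
    "handle": "smooth ceramic",
    "rim": "smooth ceramic",
    "side": "smooth ceramic",
}

def touch(frame, location, sensation):
    predicted = frame.get(location)
    if sensation == predicted:
        return "no surprise"  # attention stays elsewhere
    return f"surprise at {location}: expected {predicted!r}, got {sensation!r}"

print(touch(coffee_cup_frame, "side", "smooth ceramic"))  # no surprise
print(touch(coffee_cup_frame, "side", "cockroach legs"))  # surprise at side: ...
```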
You Are Your Reticulum
In short, most of your brain (the neocortex is about 70% of its total volume) is a highly complex reticulum made up of cortical columns, which themselves are made up of dense networks of neurons that are in a constant state of anticipation, even when you’re feeling pretty relaxed.
Your consciousness doesn’t exist in any one place. Your singular identity is, rather, a clever pastiche fabricated by that squishy matrix in your noggin.
So, why does it feel as if you’re you, the real mental “decider” (as George W. Bush’s neocortex once put it)? Hawkins thinks that all your various cortical columns are essentially “voting” about what you should perceive and how you should act. When you can’t make up your mind, it’s because the vote is too close to call.
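Hawkins doesn't publish pseudocode for this, but the voting idea is easy to caricature in Python (a toy of my own invention, not his model):

```python
# Toy sketch of cortical columns "voting" on a perception: perception
# is whatever wins a clear majority; otherwise you can't make up
# your mind.
from collections import Counter

def perceive(column_votes, margin=0.6):
    winner, count = Counter(column_votes).most_common(1)[0]
    if count / len(column_votes) >= margin:
        return winner
    return "undecided"  # the vote is too close to call

print(perceive(["coffee cup"] * 8 + ["soup mug"] * 2))  # coffee cup
print(perceive(["coffee cup"] * 5 + ["soup mug"] * 5))  # undecided
```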
So, you’re not just a matrix. You’re a democracy! Which is great. Even if our increasingly shaky U.S. government descends into tyranny, at least our brains will keep voting.
We are about to be awash in AI-generated media, and our society may have a tough time surviving it.
Our feet are already wet, of course. The bots inhabit Twitter like so many virtual lice. And chatbots are helpfully annoying visitors on corporate websites the world over. Meanwhile, algorithms have been honing their scribbler skills on the virtual Grub Street of the Internet for a while now.
But soon, and by soon I mean within months, we will be hip deep in AI-generated content and wondering how high the tide is going to get.
My guess is high, baby. Very high indeed.
What Are We Really Talking Here?
Techopedia defines generative AI as a “broad label that’s used to describe any type of artificial intelligence that uses unsupervised learning algorithms to create new digital images, video, audio, text or code.”
I think that label will ultimately prove too restrictive, but let’s start there. So far, most of the hype is indeed around media, especially image creation and automated writing, with music and video not far behind.
But we’ll get to that.
For now it’s enough to say that generative AI works by learning from, and being “inspired by,” the dynamic global reticulum that is the Internet.
But generative AI also applies to things like computer code. And, by and by, it’ll start generating atoms in addition to bits and bytes. For example, why couldn’t generative AI be applied to 3D printing? Why not car and clothing design? Why not, even, the creation of new biological systems?
The Money Generator
First, let’s follow the money. How much dough is going into generative AI these days?
Answer: how much you got, angels and VCs?
For example, a start-up called Stability AI, which created the increasingly popular Stable Diffusion image-generating algorithm, was recently injected with a whopping $101 million round of investment capital. The company is now valued at a billion bucks.
Meanwhile other image generators such as DALL-E 2 and Midjourney have already acquired millions of users.
But investors are not just hot for image generators. Jasper, a generative writing company that’s just a year old (and one that plagues me with ads on Facebook) recently raised $125 million in venture capital and has a $1.5 billion valuation.
Although image and prose (usually with an eye toward marketing) are the hot tickets in generative AI for now, they are just the proverbial tip of the iceberg. Indeed, it appears that Stability AI, for one, has much grander plans beyond images.
The New York Times reports that the company’s soon-to-be massive investments in AI hardware will “allow the company to expand beyond A.I.-generated images into video, audio and other formats, as well as make it easy for users around the world to operate their own, localized versions of its algorithms.”
Think about that a second. Video. So people will be able to ask generative AI to quickly create a video of anything they can imagine.
Fake Film Flim-Flams
Who knows where this leads? I suppose soon we’ll be seeing “secret” tapes of the Kennedy assassination, purported “spy video” of the Trump/Putin bromance, and conspiracy-supporting flicks “starring” a computer-generated Joe Biden.
We can only imagine the kind of crap that will turn up on YouTube and social media. Seems likely that one of the things that generative AI will generate is a whole new slew of conspiracists who come to the party armed with the latest videos of Biden handing over Hunter’s laptop to the pedophiliac aliens who wiped Hillary’s emails to ensure that Obama’s birthplace couldn’t be traced back to the socialist Venusians who are behind the great global warming scam.
Even leaving political insanity aside, however, what happens to the film and television industries? How long until supercomputers are cranking out new Netflix series at the rate of one per minute?
Maybe movies get personalized. For example, you tell some generative AI to create a brand new Die Hard movie in which a virtual you plays the Bruce Willis role and, presto, out pops your afternoon’s entertainment. Yippee ki yay, motherfucker!
Play that Fakey Music
Then there are the soundtracks to go with those AI-gen movies. The Recording Industry Association of America (RIAA) is already gearing up for these battles. Here’s a snippet of what it submitted to the Office of the U.S. Trade Representative:
There are online services that, purportedly using artificial intelligence (AI), extract, or rather, copy, the vocals, instrumentals, or some portion of the instrumentals (a music stem) from a sound recording, and/or generate, master or remix a recording to be very similar to or almost as good as reference tracks by selected, well known sound recording artists.
To the extent these services, or their partners, are training their AI models using our members’ music, that use is unauthorized and infringes our members’ rights by making unauthorized copies of our members’ works. In any event, the files these services disseminate are either unauthorized copies or unauthorized derivative works of our members’ music.
That’s an interesting argument that will probably be tried by all creative industries. That is, just training your AI based on Internet copies of musical works violates copyright even if you have no intention of directly using that work in a commercial project. I imagine the same argument could be applied to any copyrighted work.
Of course, there are plenty of uncopyrighted works AI can be trained on, but keeping copyrighted stuff from being used for machine learning programs could put a sizeable dent in the quality of generative AI products.
So media won’t be the only thing that gets generated. Imagine the blizzard of lawsuits before it’s all worked out.
Revenge of the Code
AI can code these days. Often impressively so. I suppose it’d be ironic if a lot of software developers were put out of work by intelligent software, but that’s the direction we seem headed.
Consider the performance of DeepMind’s AlphaCode, an AI designed to solve challenging coding problems. The team that designed it had it compete with human coders to solve 10 challenges on Codeforces, a platform hosting coding contests.
Prof. John Naughton, writing in The Guardian, describes the contest and summarizes, “The impressive thing about the design of the Codeforces competitions is that it’s not possible to solve problems through shortcuts, such as duplicating solutions seen before or trying out every potentially related algorithm. To do well, you have to be creative.”
On its first try, AlphaCode did pretty well. The folks at DeepMind write, “Overall, AlphaCode placed at approximately the level of the median competitor. Although far from winning competitions, this result represents a substantial leap in AI problem-solving capabilities and we hope that our results will inspire the competitive programming community.”
To me, a very amateurish duffer in Python, this is both impressive and alarming. An AI that can reason out natural language instructions and then code creatively to solve problems? It’s kind of like a Turing test for programming, one that AlphaCode might well be on target to dominate in future iterations.
Naughton tries to reassure his readers, writing that “engineering is about building systems, not just about solving discrete puzzles,” but color me stunned.
What’s next for generative AI once it finds its virtual footing?
Well, atoms are the natural next step.
Ask yourself: if generative AI can easily produce virtual images, why not sculptures via 3D printers? Indeed, why not innovative practical designs?
This is not a new idea. There is already something called generative design. Sculpteo.com describes it: “Instead of starting to work on a design from scratch, with a generative design process, you tell the program what you need to accomplish, you set your design goals and mention all the parameters you can. No geometry is needed to start a project. The software will then deliver you hundreds or thousands of design options, the AI can also make an in-depth analysis of the design and establish which one is the most efficient one! This method is perfect to explore design possibilities to get an optimal part.”
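At heart, that description maps onto a very old-fashioned loop: generate lots of candidate designs, score each against the stated goals, keep the best. Here's a toy Python sketch of that loop; the part and its "physics" are entirely made up for illustration:

```python
# Toy generative-design loop (the bracket and its "physics" are
# invented): generate many random candidates, score each against
# the design goals, keep the most efficient one.
import random

def random_bracket():
    # a hypothetical part defined by two parameters
    return {"thickness_mm": random.uniform(1.0, 10.0),
            "ribs": random.randint(0, 8)}

def weight_if_strong_enough(design, required_strength=20.0):
    strength = design["thickness_mm"] * 3 + design["ribs"] * 2
    weight = design["thickness_mm"] * 5 + design["ribs"] * 1.5
    if strength < required_strength:
        return float("inf")  # fails the design goal entirely
    return weight            # among survivors, lighter is better

random.seed(0)
candidates = [random_bracket() for _ in range(1000)]
best = min(candidates, key=weight_if_strong_enough)
print(best)
```

Real generative-design software adds serious physics simulation and smarter search, but the shape of the process is the same: goals and parameters in, a ranked pile of designs out.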
How About Bio?
Not long ago, I wrote a tongue-in-cheekish post about the singularity. An acquaintance of mine expressed alarm about the idea. When I asked what scared her most, she said, “If AI can alter DNA, I’d say the planet is doomed.”
That particular scenario had never occurred to me, but it’s easy enough to see her point. DNA is biological code. Why not create a generative AI that can design new life forms almost as easily as new images?
In fact, why stop at design? Why not 3D print the new critters? Again, this is a concept that already exists. As the article “3D Bioprinting with Live Cells” describes it, “Live cell printing, or 3D bioprinting, is an emerging technology that poses a revolutionary development for tissue engineering and regeneration. This bioprinting method involves the creation of a spatial arrangement of living cells and biologics into a functionalized tissue.”
The good news? Probably some fascinating new science, designer replacement organs on demand, and all the strange new machine-generated meat you can eat!
The bad news? Shudder. Let’s not go there today.
Mickey Mouse and the Age of Innovative AI
Although we’re calling this generative AI, the better term might be innovative AI. We are essentially contracting AI writers, artists and coders to do our bidding. Sure, they’re imitating, mixing and matching human-made media, but they are nonetheless “the talent” and will only get better at their jobs. We, on the other hand, are promoted to the positions of supercilious art directors, movie producers and, inevitably (yuck) critics.
If the singularity ever actually happens, this emerging age of innovative AI will be seen as a critical milestone. It feels like a still rough draft of magic, and it may yet all turn out wonderfully.
But I find it hard not to foresee a Sorcerer’s Apprentice scenario. Remember in Fantasia, when Mickey Mouse harnesses the power of generative sorcery and winds up all wet and sucked down a whirlpool?
Unlike Mickey, we’ll have no sorcerer to save our sorry asses if we screw up the wizardry. This means that, in sum, we need to use these powerful technologies wisely. I hope we’re up to it. Forgive me if, given our recent experiences with everything from social media madness to games of nuclear chicken, I remain a bit skeptical on that front.
Featured image generated by Stable Diffusion. The prompt terms used were "Hokusai tsunami beach people," with Hokusai arguably being the greatest artist of tsunamis in human history. In other words, the AI imitated Hokusai's style and came up with this original piece.
Occasionally, though not often enough, I connect with an author: the way they write, think, imagine. Their prose style.
Over the last year or so, one of the writers I’ve connected with is British science fiction author Adrian Tchaikovsky. He’s prolific, brilliant and always entertaining, and his education in zoology often shines through in fascinating ways.
In one of his books, Bear Head, he was riffing on our current unsavory age of political demagoguery.
I should say this isn’t his usual style. I got the feeling that he was venting about recent political trends in Great Britain, the U.S. and elsewhere.
As I write this on a Saturday morning, the news is filled with the election of Giorgia Meloni in Italy. That made me think of Tchaikovsky’s riffs, which I had highlighted in my Kindle. Here are some categories I’ve applied to them.
On hate, fear and politics
[H]ate was not just a fire to destroy, not just an excuse to panhandle donations. Hate was an attractive force.
[T]he generation that held those chains are yesterday’s men, trying to hold on to power by whipping up fear of the other, just like always.
On authority, virtue and the metagame
[T]here’s a metagame…[Y]our worker who ‘kisses ass’ is seen as management material not because they give their all to the company, but because they spend that effort they would otherwise give to the company on looking like they give it all to the company. They spend it on all the little social games instead, and because effort spent on the metagame is focused entirely about the appearance of virtue, it overshadows those who are actually performing the primary task, it overshadows actual virtue.
[T]he people who end up in authority are generally not those focused on whatever the purpose of the community is, but are focused on achieving positions of authority.
[T]hat meant the people who achieved status and power were by definition the least qualified to have it
[M]etagamers could hack organisational structures and procedures to promote themselves without needing to be good at the primary task of the organisation
On leader parasites
He was an ingeniously evolved parasite, the scion of a strain honed over generations to fool wider humanity into following his orders and tending to his needs.
[My] mind kept coming back to that insect in the ant’s nest that convinces the ants it’s more ant than they are, so that they serve up their own larvae for its delectation.
There was nothing to engage behind those eyes, barely anything more than a voracious id, a sense that was all me me me….a pattern of behavior that could be as mindless as some insect’s mimicry of an ant, that let it into the nest to eat the young.
[T]here’s a predatory bug that releases the pheromones of its prey more strongly than ever the females do, so that the witless wooers come from miles around to be devoured. …And yet there’s nothing true within it, nothing at all.
A parasite that prospers because it presents an exaggerated performance of its host species’ salient characteristics. Not just passing for human, but passing for superhuman: putting out all the tells so that you think they’re super-confident, super-dynamic, super-inspiring, exactly the man to follow to the end of the earth. Far more so than anyone who actually has reason to be confident, or to be worth following….more human than human, a colossus, possessing all the virtues the viewer might want to see.
[He] wasn’t about being loyal to underlings, he was about taking their loyalty and wringing every last drop of use from it before discarding them.
There’s clearly more to be said about the role that parasites play in ecosystems–and that demagogues play within the complex reticula of political economies and human limbic systems–but we’ll leave it here for now.
Featured image by José Clemente Orozco: The Demagogue