The Murky Ethics of AI-generated Images

The other day, I was playing with Stable Diffusion and found myself thinking hard about the ethics of AI-generated images. Indeed, I found myself in an ethical quandary. Or maybe quandaries.

More specifically, I was playing with putting famous haiku poems into the “Generate Image” box and seeing what kinds of images the Stable Diffusion generator would concoct.

It was pretty uninspiring stuff until I started adding the names of specific illustrators in front of the haiku. Things got more interesting artistically but, from my perspective, murkier ethically. And it made me wonder whether society has yet formulated a way to approach the ethics of AI-generated images.

The Old Pond Meets the New AIs

The first famous haiku I used was “The Old Pond” by Matsuo Bashō. Here’s how it goes in the translation I found:

An old silent pond

A frog jumps into the pond—

Splash! Silence again.

At first, I got a bunch of photo-like but highly weird and often grotesque images of frogs. You’ve got to play with Stable Diffusion a while to see what I mean, but here are a few examples:

Okay, so far, so bad. A failed experiment. But that’s when I had the bright idea of adding certain illustrators’ names to the search so the generator would be able to focus on specific portions of the reticulum to find higher quality images. For reasons that will become apparent, I’m not going to mention their names. But here are some of the images I found interesting:

Better, right? I mean, each one appeals to different tastes, but they aren’t demented and inappropriate. There was considerable trial and error, and I was a bit proud of what I eventually kept as the better ones.

“Lighting One Candle” Meets the AI Prometheus

The next haiku I decided to use was “Lighting One Candle” by Yosa Buson. Here’s how that one goes:

The light of a candle

Is transferred to another candle—

Spring twilight

This time I got some fairly schmaltzy images that you might find in the more pious sections of the local greeting card aisle. That’s not a dig at religion, by the way, but that aesthetic has never appealed to me. It seems too trite and predictable for something as grand as God. Anyway, the two images of candles below are examples of what I mean:

I like the two trees, though. I think it’s an inspired interpretation of the poem, one that I didn’t expect. It raised my opinion of what’s currently possible for these AIs. It’d make for a fine greeting card in the right section of the store.

But, still not finding much worth preserving, I went back to putting illustrators’ names in with the haiku. I thought the following images were worth keeping.

In each of these cases, I used an illustrator’s name. Some of these illustrators are deceased but some are still creating art. And this is where the ethical concerns arise.

Where Are the New Legal Lines in Generative AI?

I don’t think the legalities relating to generative AI have been completely worked out yet. Still, it does appear that artists are going to have a tough time battling huge tech firms with deep pockets, even in nations like Japan with strong copyright laws. Here’s one quote from the article “AI-generated Art Sparks Furious Backlash from Japan’s Anime Community”:

[W]ith art generated by AI, legal issues only arise if the output is exactly the same, or very close to, the images on which the model is trained. “If the images generated are identical … then publishing [those images] may infringe on copyright,” Taichi Kakinuma, an AI-focused partner at the law firm Storia and a member of the economy ministry’s committee on contract guidelines for AI and data, told Rest of World….But successful legal cases against AI firms are unlikely, said Kazuyasu Shiraishi, a partner at the Tokyo-headquartered law firm TMI Associates, to Rest of World. In 2018, the National Diet, Japan’s legislative body, amended the national copyright law to allow machine-learning models to scrape copyrighted data from the internet without permission, which offers up a liability shield for services like NovelAI.

How About Generative AI’s Ethical Lines?

Even if the AI generators have relatively solid legal lines defining how they can work, the ethical lines are harder to draw. With the images I generated, I didn’t pay too much attention to whether the illustrators were living or dead. I was, after all, just “playing around.”

But once I had the images, I came to think that asking the generative AI to ape someone’s artistic style is pretty sleazy if that artist is still alive and earning their livelihood through their art. That’s why I don’t want to mention any names in this post. It might encourage others to add the names of those artists into image generators. (Of course, if you’re truly knowledgeable about illustrators, you’ll figure it out anyway, but in that case, you don’t need any help from a knucklehead like me.)

It’s one thing to ask an AI to use a Picasso-esque style for an image. Picasso died back in 1973. His family may get annoyed, but I very much doubt that any of his works will become less valuable due to some (still) crummy imitations.

But it’s a different story with living artists. If a publisher wants the style of a certain artist for a book cover, for example, then the publisher should damn well hire the artist, not ask a free AI to crank out a cheap and inferior imitation. Even if the copyright system ultimately can’t protect those artists legally, we can at least apply social pressure to the AI generator companies as customers.

I think AI generator firms should have policies that allow artists to opt out of having their works used to “train” the algorithms. That is, they can request to be put on the equivalent of a “don’t imitate” list. I don’t even know if that’s doable in the long run, but it might be one step in the direction of establishing proper ethics of AI-generated images.

The Soft Colonialism of Probability and Prediction?

In the article “AI Art Is Soft Propaganda for the Global North,” Marco Donnarumma takes aim at the ethics of generative AI on two primary fronts.

First is the exploitation of cultural capital. These models exploit enormous datasets of images scraped from the web without authors’ consent, and many of those images are original artworks by both dead and living artists….The second concern is the propagation of the idea that creativity can be isolated from embodiment, relations, and socio-cultural contexts so as to be statistically modeled. In fact, far from being “creative,” AI-generated images are probabilistic approximations of features of existing artworks….AI art is, in my view, soft propaganda for the ideology of prediction.

To an extent, his first concern about cultural capital is related to my previous discussion about artists’ legal and moral rights, a topic that will remain salient as these technologies evolve.

His second concern is more abstract and, I think, debatable. Probabilistic and predictive algorithms may have begun in the “Global North,” but probability is leveraged in software wherever it is developed these days. It’s like calling semiconductors part of the “West” even as a nation like Taiwan innovates the tech and dominates the space.

Some of his argument rests on the idea that generative AI is not “creative,” but that term depends entirely on how we define it. Wikipedia, for example, states, “Creativity is a phenomenon whereby something new and valuable is formed.”

Are the images created by these technologies new and valuable? Well, let’s start by asking whether they represent something new. By one definition, they absolutely do, which is why they are not infringing on copyright. On the other hand, for now they are unlikely to create truly new artistic expressions in the larger sense, as the Impressionists did in the 19th century.

As for “valuable,” well, take a look at the millions if not billions of dollars investors are throwing their way. (But, sure, there are other ways to define value as well.)

My Own Rules for Now

As I use and write about these technologies, I’ll continue to leverage the names of deceased artists. But for now I’ll refrain from using images based on the styles of those still living. Maybe that’s too simplistic and binary. Or maybe it’s just stupid of me not to take advantage of current artistic styles and innovations. After all, artists borrow approaches from one another all the time. That’s how art advances.

I don’t know how it’s all going to work out, but it’s certainly going to require more thought from all of us. There will never be a single viewpoint, but in time let’s hope we form some semblance of consensus about what are principled and unprincipled usages of these technologies.

Featured image is from Stable Diffusion. I think I used a phrase like "medieval saint looking at a cellphone." Presto.    

The Rising Seas of AI-Generated Media

We are about to be awash in AI-generated media, and our society may have a tough time surviving it.

Generated by Stable Diffusion. The prompt was “Dali tsunami”

Our feet are already wet, of course. The bots inhabit Twitter like so many virtual lice. And chatbots are helpfully annoying visitors on corporate websites the world over. Meanwhile, algorithms have been honing their scribbler skills on the virtual Grub Street of the Internet for a while now.

But soon, and by soon I mean within months, we will be hip deep in AI-generated content and wondering how high the tide is going to get.

My guess is high, baby. Very high indeed.

What Are We Really Talking Here?

Techopedia defines generative AI as a “broad label that’s used to describe any type of artificial intelligence that uses unsupervised learning algorithms to create new digital images, video, audio, text or code.” In short, it’s all about AI-generated media.

Generated by Stable Diffusion. Prompt was “network”

I think that label will ultimately prove too restrictive, but let’s start there. So far, most of the hype is indeed around media, especially image creation and automated writing, with music and video not far behind.

But we’ll get to that.

For now it’s enough to say that generative AI works by learning from, and being “inspired by,” the dynamic global reticulum that is the Internet.

But generative AI also applies to things like computer code. And, by and by, it’ll start generating atoms in addition to bits and bytes. For example, why couldn’t generative AI be applied to 3D printing? Why not car and clothing design? Why not, even, the creation of new biological systems?

The Money Generator

First, let’s follow the money. So how much dough is going into generative AI these days?

Answer: how much you got, angels and VCs?

Generated by Stable Diffusion. Prompt “printing press printing money”

For example, a start-up called Stability AI, which created the increasingly popular Stable Diffusion image-generating algorithm, was recently injected with a whopping $101 million round of investment capital. The company is now valued at a billion bucks.

Meanwhile other image generators such as DALL-E 2 and Midjourney have already acquired millions of users.

But investors are not just hot for image generators. Jasper, a generative writing company that’s just a year old (and one that plagues me with ads on Facebook), recently raised $125 million in venture capital and has a $1.5 billion valuation.

Investing in these technologies is so hot that a Gen AI Market Map from Sequoia recently went viral. The wealth wave rises and everyone wants to catch it.

Running the Gamut

Although image and prose (usually with an eye toward marketing) are the hot tickets in generative AI for now, they are just the proverbial tip of the iceberg. Indeed, it appears that Stability AI, for one, has much grander plans beyond images.

Generated by Stable Diffusion. Prompt was “color gamut”

The New York Times reports that the company’s soon-to-be massive investments in AI hardware will “allow the company to expand beyond A.I.-generated images into video, audio and other formats, as well as make it easy for users around the world to operate their own, localized versions of its algorithms.”

Think about that a second. Video. So people will be able to ask generative AI to quickly create a video of anything they can imagine.

Fake Film Flim-Flams

Who knows where this leads? I suppose soon we’ll be seeing “secret” tapes of the Kennedy assassination, purported “spy video” of the Trump/Putin bromance, and conspiracy-supporting flicks “starring” a computer-generated Joe Biden.

Generated by Stable Diffusion. Prompt was “human shakes hands with extraterrestrial”

We can only imagine the kind of crap that will turn up on YouTube and social media. Seems likely that one of the things that generative AI will generate is a whole new slew of conspiracists who come to the party armed with the latest videos of Biden handing over Hunter’s laptop to the pedophiliac aliens who wiped Hillary’s emails to ensure that Obama’s birthplace couldn’t be traced back to the socialist Venusians who are behind the great global warming scam.

Even leaving political insanity aside, however, what happens to the film and television industries? How long until supercomputers are cranking out new Netflix series at the rate of one per minute?

Maybe movies get personalized. For example, you tell some generative AI to create a brand new Die Hard movie in which a virtual you plays the Bruce Willis role and, presto, out pops your afternoon’s entertainment. Yippee ki yay, motherfucker!

So, AI-generated media on steroids. On an exponential growth curve!

Play that Fakey Music

Then there are the sound tracks to go with those AI-gen movies. The Recording Industry Association of America (RIAA) is already gearing up for these battles. Here’s a snippet of what it submitted to the Office of the U.S. Trade Representative.

Generated by Stable Diffusion. Prompt was “music”

There are online services that, purportedly using artificial intelligence (AI), extract, or rather, copy, the vocals, instrumentals, or some portion of the instrumentals (a music stem) from a sound recording, and/or generate, master or remix a recording to be very similar to or almost as good as reference tracks by selected, well known sound recording artists.

To the extent these services, or their partners, are training their AI models using our members’ music, that use is unauthorized and infringes our members’ rights by making unauthorized copies of our members’ works. In any event, the files these services disseminate are either unauthorized copies or unauthorized derivative works of our members’ music.

That’s an interesting argument, and one that will probably be tried by all creative industries: that is, merely training your AI on Internet copies of musical works violates copyright, even if you have no intention of directly using those works in a commercial project. I imagine the same argument could be applied to any copyrighted work. Who knows what this will mean for “synthetic media,” as some are calling it.

Of course, there are plenty of uncopyrighted works AI can be trained on, but keeping copyrighted stuff from being used for machine learning programs could put a sizeable dent in the quality of generative AI products.

So, it won’t only be media that’s generated. Imagine the blizzard of lawsuits until it’s all worked out.

Stay tuned.

Revenge of the Code

AI can code these days. Often impressively so. I suppose it’d be ironic if a lot of software developers were put out of work by intelligent software, but that’s the direction we seem headed.

Consider the performance of DeepMind’s AlphaCode, an AI designed to solve challenging coding problems. The team that designed it had it compete with human coders to solve 10 challenges on Codeforces, a platform hosting coding contests.

Generated by Stable Diffusion. The prompt was “Vinge singularity”

Prof. John Naughton, writing in The Guardian, describes the contest and summarizes, “The impressive thing about the design of the Codeforces competitions is that it’s not possible to solve problems through shortcuts, such as duplicating solutions seen before or trying out every potentially related algorithm. To do well, you have to be creative.”

On its first try, AlphaCode did pretty well. The folks at DeepMind write, “Overall, AlphaCode placed at approximately the level of the median competitor. Although far from winning competitions, this result represents a substantial leap in AI problem-solving capabilities and we hope that our results will inspire the competitive programming community.”

To me, a very amateurish duffer in Python, this is both impressive and alarming. An AI that can reason out natural language instructions and then code creatively to solve problems? It’s kind of like a Turing test for programming, one that AlphaCode might well be on target to dominate in future iterations.

Naughton tries to reassure his readers, writing that “engineering is about building systems, not just about solving discrete puzzles,” but color me stunned.

With this, we seem to be one step closer to Vernor Vinge’s notion of the technological singularity, in case you needed another thing to keep you up at night.

Up and Atoms

Movies? Music? Code?

What’s next for generative AI once it finds its virtual footing?

Generated by Stable Diffusion. Prompt was “atoms”

Well, atoms are the natural next step.

Ask yourself: if generative AI can easily produce virtual images, why not sculptures via 3D printers? Indeed, why not innovative practical designs?

This is not a new idea. There is already something called generative design. Sculpteo.com describes it this way: “Instead of starting to work on a design from scratch, with a generative design process, you tell the program what you need to accomplish, you set your design goals and mention all the parameters you can. No geometry is needed to start a project. The software will then deliver you hundreds or thousands of design options, the AI can also make an in-depth analysis of the design and establish which one is the most efficient one! This method is perfect to explore design possibilities to get an optimal part.”

Yup, perfect.
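That generate-and-score loop is simple enough to caricature in a few lines of Python. To be clear, this is my own toy sketch, not anyone’s actual software: the “design” parameters and the scoring rule are invented purely for illustration.

import random

def generate_candidates(n):
    # Each made-up "design" is just a pair of parameters:
    # strut thickness (in cm) and strut count.
    return [(random.uniform(1.0, 10.0), random.randint(2, 12)) for _ in range(n)]

def score(design):
    thickness, struts = design
    # Invented goal: reward strength (struts times thickness)
    # while penalizing material use (thickness squared).
    return struts * thickness - thickness ** 2

candidates = generate_candidates(1000)
best = max(candidates, key=score)
print("Most efficient of 1,000 options:", best)

Real generative design tools work with geometry, physics simulations and manufacturing constraints, but the skeleton is the same: propose lots of options, evaluate them against the stated goals, keep the winners.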

How About Bio?

Generated by Stable Diffusion. Prompt was “bioprinter”

Not long ago, I wrote a tongue-in-cheekish post about the singularity. An acquaintance of mine expressed alarm about the idea. When I asked what scared her most, she said, “If AI can alter DNA, I’d say the planet is doomed.”

That particular scenario had never occurred to me, but it’s easy enough to see her point. DNA is biological code. Why not create a generative AI that can design new life forms almost as easily as new images?

Generated by Stable Diffusion. Prompt was “live cells”

In fact, why stop at design? Why not 3D print the new critters? Again, this is a concept that already exists. As the article “3D Bioprinting with Live Cells” describes it, “Live cell printing, or 3D bioprinting, is an emerging technology that poses a revolutionary development for tissue engineering and regeneration. This bioprinting method involves the creation of a spatial arrangement of living cells and biologics into a functionalized tissue.”

The good news? Probably some fascinating new science, designer replacement organs on demand, and all the strange new machine-generated meat you can eat!

The bad news? Shudder. Let’s not go there today.

Mickey Mouse and the Age of Innovative AI

Although we’re calling this generative AI, the better term might be innovative AI. We are essentially contracting AI writers, artists and coders to do our bidding. Sure, they’re imitating, mixing and matching human-made media, but they are nonetheless “the talent” and will only get better at their jobs. We, on the other hand, are promoted to the positions of supercilious art directors, movie producers and, inevitably (yuck) critics.

Generated by Stable Diffusion. Prompt was “Tim Burton 3 people caught in whirlpool”

If the singularity ever actually happens, this emerging age of innovative AI will be seen as a critical milestone. It feels like a still rough draft of magic, and it may yet all turn out wonderfully.

But I find it hard not to foresee a Sorcerer’s Apprentice scenario. Remember in Fantasia, when Mickey Mouse harnesses the power of generative sorcery and winds up all wet and sucked down a whirlpool?

Unlike Mickey, we’ll have no sorcerer to save our sorry asses if we screw up the wizardry. This means that, in sum, we need to use these powerful technologies wisely. I hope we’re up to it. Forgive me if, given our recent experiences with everything from social media madness to games of nuclear chicken, I remain a bit skeptical on that front.

Feature image generated by Stable Diffusion. The prompt terms used were "Hokusai tsunami beach people," with Hokusai arguably being the greatest artist of tsunamis in human history. In other words, the AI imitated Hokusai's style and came up with this original piece.

The Singularity Is Pretty Damned Close…Isn’t It?

What is the singularity and just how close is it?

The short answers are “it depends who you ask” and “nobody knows.” The longer answers are, well…you’ll see.

Singyuwhatnow?

Wikipedia provides a good basic definition: “The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth will become radically faster and uncontrollable, resulting in unforeseeable changes to human civilization.”

The technological growth in question usually refers to artificial intelligence (AI). The idea is that an AI capable of improving itself quickly goes through a series of cycles in which it gets smarter and smarter at exponential rates. This leads to a super intelligence that throws the world into an impossible-to-predict future.

Whether this sounds awesome or awful largely depends on your view of what a superintelligence would bring about, something that no one really knows.

The impossible-to-predict nature is an aspect of why, in fact, it’s called a singularity, a term that originates with mathematics and physics. In math, singularities pop up when the numbers stop making sense, as when the answer to an equation turns out to be infinity. It’s also associated with phenomena such as black holes, where our traditional understanding of physics breaks down. So the term, as applied to technology, suggests a time beyond which the world stops making sense (to us) and so becomes impossible to forecast.
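If you want to see a garden-variety mathematical singularity in action, a couple of lines of Python will do it. Watch what happens to 1/x as x creeps toward zero:

for x in (0.1, 0.01, 0.001, 0.0001):
    print(x, 1 / x)  # the result grows without bound as x shrinks

# At x = 0 itself, the math simply stops making sense:
# evaluating 1 / 0 raises a ZeroDivisionError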

How Many Flavors Does the Singularity Come In?

From Wikipedia: major evolutionary transitions in information processing

Is a runaway recursively intelligent AI the only path to a singularity? Not if you count runaway recursively intelligent people who hook their little monkey brains up to some huge honking artificial neocortices in the cloud.

Indeed, it’s the human/AI interface and integration scenario that folks like inventor-author-futurist Ray Kurzweil seem to be banking on. To him, from what I understand (I haven’t read his newest book), that’s when the true tech singularity kicks in. At that point, humans essentially become supersmart, immortal(ish) cyborg gods.

Yay?

But there are other possible versions as well. There’s the one where we hook up our little monkey brains into one huge, networked brain to become the King Kong of super intelligences. Or the one where we grow a supersized neocortex in an underground vat the size of the Chesapeake Bay. (A Robot Chicken nightmare made more imaginable by the recent news that researchers got a cluster of brain cells to play Pong in a lab–no, really).

Singularity: Inane or Inevitable?

The first thing to say is that maybe the notion is kooky and misguided, the pipe dream of geeks yearning to become cosmic comic book characters. (In fact, the singularity is sometimes called, with varying degrees of sarcasm, the Rapture for nerds.)

I’m tempted to join in the ridicule of the preposterous idea. Except for one thing: AI and other tech keep proving the naysayers wrong. AI will never beat the best chess players. Wrong. Okay, but it can’t dominate something as fuzzy as Jeopardy. Wrong. Surely it can’t master the most complex and challenging of all human games, Go. Yawn, wrong again.

After a while,  anyone who bets against AI starts looking like a chump.

Well, games are for kids anyway. AI can’t do something as slippery as translate languages or as profound as unravel the many mysteries of protein folding.  Well, actually…

But it can’t be artistic…can it? (“I don’t do drugs. I am drugs” quips DALL-E).

Getting Turing Testy

There’s at least one pursuit that AI has yet to master: the gentle art of conversation. That may be the truest assessment of human level intelligence. At least, that’s the premise underlying the Turing test.

The test assumes you have a questioner reading a computer screen (or the equivalent). The questioner has two conversations via screen and keyboard. One of those conversations is with a computer, the other with another person. If the questioner can’t figure out which one is the computer, then the computer passes the test because it can’t be distinguished from the human being.
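For the programming-inclined, here’s a minimal sketch of that setup in Python. It’s my own toy illustration: human_reply and machine_reply are hypothetical stand-ins for the two hidden respondents.

import random

def turing_test(questions, human_reply, machine_reply):
    # Hide the two respondents behind anonymous channels A and B.
    labels = ["A", "B"]
    random.shuffle(labels)
    channels = {labels[0]: human_reply, labels[1]: machine_reply}
    for question in questions:
        for label in sorted(channels):
            print(f"{label}: {channels[label](question)}")
    guess = input("Which channel is the machine, A or B? ").strip().upper()
    # The machine passes if the questioner guesses wrong.
    return channels.get(guess) is not machine_reply

Run enough rounds, and if the questioner’s hit rate is no better than a coin flip, the machine has passed.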

Of course, this leaves us with four (at least!) big questions.

First, when will a machine finally pass that final exam?

Second, what does it mean if and when a machine does? Is it truly intelligent? How about conscious?

Third, if the answer to those questions seems to be yes, what’s next? Does it get a driver’s license? A FOX News slot? An OKCupid account?

Fourth, will such a computer spark the (dun dun dun) singularity?

The Iffy Question of When the Singularity Arrives

In a recent podcast interview, Kurzweil predicted that some soon-to-be-famous digital mind will pass the Turing Test in 2029.

“2029?” I thought. “As in just 7-and-soon-to-be-6-years-away 2029?”

Kurzweil claims he’s been predicting that same year for a long time, so perhaps I read about it back in 2005, when his book The Singularity Is Near came out (my copy is now lost somewhere in the hustle and bustle of my bookshelves). But back then, of course, 2029 was nearly a quarter of a century away. Now, well, it seems damn near imminent.

Of course, Kurzweil may well turn out to be wrong. As much as he loves to base his predictions on the mathematics of exponentials, he can get specific dates wrong. For example, as I wrote in a previous post, he’ll wind up being wrong about the year solar power becomes pervasive (though he may well turn out to be right about the overall trend).

So maybe a computer won’t pass a full-blown Turing test in 2029. Perhaps it’ll be in the 2030s or 2040s. That would be close enough, in my book. Indeed, most experts believe it’s just a matter of time. One survey conducted at the Joint Multi-Conference on Human-Level Artificial Intelligence found that just 2% of participants predicted that an artificial general intelligence (or AGI, meaning a machine that thinks at least as well as a human being) would never occur. Of course, that’s not exactly an unbiased survey cohort, is it?

Anyhow, let’s say the predicted timeframe when the Turing test is passed is generally correct. Why doesn’t Kurzweil set the date of the singularity on the date that the Turing test is passed (or the date that a human-level AI first emerges)? After all, at that point, the AI celeb could potentially code itself so it can quickly become smarter and smarter, as per the traditional singularity scenario.

But nope. Kurzweil is setting his sights on 2045, when we fully become the supercyborgs previously described.

What Could Go Wrong?

So, Armageddon or Rapture? Take your pick.

What’s interesting to my own little super-duper-unsuper brain is that folks seem more concerned about computers leaving us in the intellectual dust than us becoming ultra-brains ourselves. I mean, sure, our digital super-brain friends may decide to cancel humanity for reals. But they probably won’t carry around the baggage of our primeval, reptilian and selfish fear-fuck-kill-hate brains–or, what Jeff Hawkins calls our “old brain.”

In his book A Thousand Brains, Hawkins writes about the ongoing frenemy-ish relationship between our more rational “new brain” (the neocortex) and the far more selfishly emotional though conveniently compacted “old brain” (just 30% of our overall brain).

Basically, he chalks up the risk of human extinction (via nuclear war, for example) to old-brain-driven crappola empowered by tech built via the smart-pantsy new brain. For example, envision a pridefully pissed off Putin nuking the world with amazing missiles built by egghead engineers. And all because he’s as compelled by his “old brain” as a tantrum-throwing three-year-old after a puppy eats his cookie.

Now envision a world packed with superintelligent primate gods still (partly) ruled by their toddler old-brain instincts. Yeah, sounds a tad dangerous to me, too.

The Chances of No Chance

Speaking of Hawkins, he doesn’t buy the whole singularity scene. First, he argues that we’re not as close to creating truly intelligent machines as some believe. Today’s most impressive AIs tend to rely on deep learning, and Hawkins believes this is not the right path to true AGI. He writes,

Deep learning networks work well, but not because they solved the knowledge representation problem. They work well because they avoided it completely, relying on statistics and lots of data instead….they don’t possess knowledge and, therefore, are not on the path to having the ability of a five-year-old child.

Second, even when we finally build AGIs (and he thinks we certainly will if he has anything to say about it), they won’t be driven by the same old-brain compulsions as we are. They’ll be more rational because their architecture will be based on the human neocortex. Therefore, they won’t have the same drive to dominate and control because they will not have our nutball-but-gene-spreading monkey-brain impulses.

Third, Hawkins doesn’t believe that an exponential increase in intelligence will suddenly allow such AGIs to dominate. He believes a true AGI will be characterized by a mind made up of “thousands of small models of the world, where each model uses reference frames to store knowledge and create behaviors.” (That makes more sense if you read his book, A Thousand Brains: A New Theory of Intelligence). He goes on:

Adding this ingredient [meaning the thousands of reference frames] to machines does not impart any immediate capabilities. It only provides a substrate for learning, endowing machines with the ability to learn a model of the world and thus acquire knowledge and skills. On a kitchen stovetop you can turn a knob to up the heat. There isn’t an equivalent knob to “up the knowledge” of a machine.

An AGI won’t become a superintelligence just by virtue of writing better and better code for itself in the span of a few hours. It can’t automatically think itself into a superpower. It still needs to learn via experiments and experience, which takes time and the cooperation of human scientists.

Fourth, Hawkins thinks it will be difficult if not impossible to connect the human neocortex to mighty computing machines in the way that Kurzweil and others envision. Even if we can do it someday, that day is probably a long way off.

So, no, the singularity is not near, he seems to be arguing. But a true AGI may, in fact, become a reality sometime in the next decade or so–if engineers will only build an AGI based on his theory of intelligence.

So, What’s Really Gonna Happen?

Nobody knows who’s right or wrong at this stage. Maybe Kurzweil, maybe Hawkins, maybe neither, or some combination of both. Here’s my own best guess for now.

Via deep learning approaches, computer engineers are going to get closer and closer to a computer capable of passing the Turing test, but by 2029 it won’t be able to fool an educated interrogator who is well versed in AI.

Or, if a deep-learning-based machine does pass the Turing test before the end of this decade, many people will argue that it only displays a façade of intelligence, perhaps citing the famous Chinese-room argument (which is a philosophical can of worms that I won’t get into here).

That said, eventually we will get to a Turing-test-passing machine that convinces even most of the doubters that it’s truly intelligent (and perhaps even conscious, an even higher hurdle to clear). That machine’s design will probably hew more closely to the dynamics of the human brain than do the (still quite impressive) neural networks of today.

Will this lead to a singularity? Well, maybe, though I’m convinced enough by the arguments of Hawkins to believe that it won’t literally happen overnight.

How about the super-cyborg-head-in-the-cloud-computer kind of singularity? Well, maybe that’ll happen someday, though it’s currently hard to see how we’re going to work out a seamless, high-bandwidth brain/supercomputer interface anytime soon. It’s going to take time to get it right, if we ever do. I guess figuring all those details out will be the first homework we assign to our AGI friends. That is, hopefully friends.

But here’s the thing. If we ever do figure out the interface, it seems possible that we’ll be “storing” a whole lot of our artificial neocortex reference frames (let’s call them ANREFs) in the cloud. If that’s true, then we may be able to swap ANREFs with our friends and neighbors, which might mean we can quickly share skills I-know-Kung-Fu style. (Cool, right?)

It’s also possible that the reticulum of all those acquired ANREFs will outlive our mortal bodies (assuming they stay mortal), providing a kind of immortality to a significant hunk of our expanded brains. Spooky, yeah? Who owns our ANREFs once the original brain is gone? Now that would be the IP battle of all IP battles!

See how weird things can quickly get once you start to think through singularity stuff? It’s kind of addictive, like eating future-flavored pistachios.

Anyway, here’s one prediction I’m pretty certain of: it’s gonna be a frigging mess!

Humanity will not be done with its species-defining conflicts, intrigues, and massively stupid escapades as it moves toward superintelligence. Maybe getting smarter–or just having smarter machines–will ultimately make us wiser, but there’s going to be plenty of heartache, cruelty, bigotry, and turmoil as we work out those singularity kinks.

I probably won’t live to see the weirdest stuff, but that’s okay. It’s fun just to think about, and, for better and for worse, we already live in interesting times.

Addendum: Since I wrote this original piece, things have been moving so quickly in the world of AI that I revisited the topic in The Singularity Just Got Nearer…Again…And How!

Featured image by Adindva1: Demonstration of the technology "Brain-Computer Interface." Management of the plastic arm with the help of thought. The frame is made on the set of the film "Brain: The Second Universe."

Poetry, Programming and People Management

The human brain does ambiguity well. Most of us are strangely drawn to multiple meanings, surrealities and pattern recognition. We thrive on metaphors and similes, rejoice in symbols, dance to nonsense syllables and ad hoc syncopations. And paradoxes? We both hate and love them — paradoxically, of course.

No More Artful Ambiguities

This may be one of the reasons so many people become frustrated and even fearful when confronted by math and logic. Those disciplines feel so cold and hard-edged with their unitary meanings and wearisome concatenations of implacable reasoning.

It’s the same with computer coding. If you take an Introduction to Computer Science course, the professors often go out of their way to compare natural languages (a phrase which itself is an oxymoron) with computer languages.

Yao graph with number of rays k = 8; from Wikimedia, by Rocchini

The gist is that while both types of language share common and, indeed, essential properties such as syntax and semantics, they differ widely in that natural language can often be understood even when the speaker or writer fails to follow basic spelling or grammatical rules. In contrast, a computer program (much like a mathematical equation) will typically fail to work if even a single character is left out or misplaced. An absent bracket can be a fatal bug, a backwards greater-than symbol can cause an infinite loop, and a poorly assigned variable can inadvertently turn 100 dollars into a dime.
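A couple of tiny, runnable Python examples (mine, not any professor’s) make the point:

countdown = 10
while countdown > 0:
    countdown -= 1  # flip '-=' to '+=' and this loop never ends

balance = 100.00  # one hundred dollars
balance = 0.10    # one careless reassignment later, the account holds a dime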

A computer has no use for the artful ambiguities and multiple meanings of poetry. If you give the machine a couple of lines of verse such as “anyone lived in a pretty how town (with up so floating many bells down)”, it will — unless you carefully guide the words into the code as a string — give you an error message. (I know a lot of people who might respond the same way, of course.) Yet, without the precisely imprecise wordplay of e e cummings, those lines of poetry would not be poetry at all.
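You can watch the machine’s literal-mindedness in action. Quoted as a string, the verse is perfectly acceptable data; typed as bare code, it’s a fatal error:

verse = "anyone lived in a pretty how town (with up so floating many bells down)"
print(verse)  # fine: the words were carefully guided into the code as a string

# But hand Python the naked line of verse...
#     anyone lived in a pretty how town
# ...and the interpreter replies: SyntaxError: invalid syntax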

How Does People Management Fit?

So, what does any of this have to do with people management?

Just this: people management is sometimes poetry, sometimes programming, and it helps to know which is which. Before the rise of civilizations and cities, when virtually all people were hunting and gathering in smallish bands and clans, people management (in whatever form it existed then) was all poetry.

Sure, there were unwritten rules, harsh taboos, constant rumors and deadly serious superstitions. And a leader, to the degree there were leaders as we understand them today, could leverage those cultural components to influence his or her clansmen. But this was mostly a matter of nuance, persuasion, the formation of alliances, the wielding of knowledge and lore (when, that is, it wasn’t a matter of force and coercion). In the largest sense, it was art and song.

Today, good managers must still be attuned to the poetry of human attitudes and actions, able to sort through the ambiguities of rumor mills and hurt feelings and arrogant posturings. But now managers must also cope with or even rely on laws, regulations and rules.

Hard Coding Humanity

Is there a “zero tolerance” clause in the company policy somewhere? Then even a terrific employee who gets caught using illegal drugs may need to go.  Are there complex legal regulations barring a worker from having financial holdings in a certain client company? Well, then, the employee must divest or hit the door. There are countless other examples of rules that are as hard-and-fast as rule-of-law societies can make them. Although these human rules will never be quite as rigorous as the requirements of programming languages, they are a kind of human programming; there are true and false statements,  barriers that can’t be broken, classifications that should never be breached.

This is why we have legal departments. It is also why uncertain managers call in the hired gun of the HR professional to take care of dismissals and drug tests and background checks.

We simultaneously hate  this programming of human behavior and depend on it. We can, for example, rely on the kind of code that states:

def provide_paycheck_and_benefits():
    print("Issue paycheck; continue health insurance.")

def begin_performance_review():
    print("Open performance review proceedings.")

def manage(worker_performance):
    # Company policy, expressed as code: reward adequate
    # performance, escalate poor performance.
    if worker_performance >= 3:
        provide_paycheck_and_benefits()
    elif worker_performance <= 2:
        begin_performance_review()

Okay, the coding in companies is much more complex than that. Still, the point is that we rely on it because it’s clean, logical and, best of all, spares us from having to make hard and potentially dangerous decisions on our own. In such settings, we are no longer “poets of people management,” the kind of managers who might have led a clan through a vast and dangerous prehistoric wilderness in millennia gone by.

People in Both Programs and Poetry

This dependence on programming is a shame in many ways, one that harried managers should ponder from time to time. I know we can’t utterly avoid modern programming — at least, not unless we retreat into the wilderness, as metaphorically isolated as Thoreau in his cabin by Walden Pond. Nor should we. The rule of law is essential to our modern societies, and formal policies are often forged to protect employees from arbitrary or biased decisions. Still, we might strive to be better poets, respecting employees as the people they are rather than viewing them as components of a well-programmed machine.

Walden Pond; from Wikimedia, by QuarterCircleS
Featured image: The Parnassus (1511) by Raphael: famous poets recite alongside the nine Muses atop Mount Parnassus.

Do You Treat Employees Like Fixed-Program Computers?

When All Programs Were Fixed

Computers didn’t always work the way they do today. The first ones were what we now call “fixed-program computers,” meaning that, without some serious and complex adjustments, they could do only one type of computation.

Sometimes that type of computer was superbly useful, such as when breaking Nazi codes during World War II (see the bombe below). Still, they weren’t much more programmable than a calculator, which is a kind of modern-day fixed-program computer.

Along Came John and Alan

The brilliant mathematician John von Neumann and colleagues had a different vision of what a computer should be. To be specific, they had Alan Turing’s vision of a “universal computing machine,” a theoretical machine that the genius Turing dreamt up in 1936. Without going into specifics, let’s just say that the von Neumann model used an architecture that has been very influential up to the present day.

One of the biggest advantages associated with Turing/von Neumann computers is that multiple programs can be stored in them, allowing them to do many different things depending on which programs are running.

Von Neumann architecture: Wikimedia
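The difference is loosely analogous to hard-wiring one function versus keeping a table of interchangeable programs in memory. Here’s a toy sketch in Python, an analogy of mine rather than a claim about real hardware:

# A fixed-program machine, in spirit: it can only ever do one thing.
def fixed_adder(a, b):
    return a + b

# A stored-program machine, in spirit: the "programs" live in memory,
# and the same machinery runs whichever one you load.
programs = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

def run(name, a, b):
    return programs[name](a, b)

print(run("add", 2, 3))       # 5
print(run("multiply", 2, 3))  # 6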

Today’s employers clearly see the advantage of stored-program computers. Yet I’d argue that many treat their employees and applicants more like the fixed-program computers of yesteryear. That is, firms make a lot of hiring decisions based more on what people know when they walk in the door than on their ability to acquire new learning. These days, experts are well paid largely because of the “fixed” knowledge and capabilities they have. Most bright people just out of college, however, don’t have the same fixed knowledge and so are viewed as less valuable assets.

The Programmable Person

Employers aren’t entirely in the wrong here. It’s a lot easier to load a new software package into a modern computer than it is to train an employee who lacks proper skill sets.  It takes money and time for workers to develop expertise, resources that employers don’t want to “waste” in training.

But there’s also an irony here: human beings are the fastest learning animals (or machines, for that matter) in the history of, well, the universe, as far as we know. People are born to learn (we aren’t designated as sapiens sapiens for nothing), and we tend to pick things up quickly.

The Half-Life of Knowledge

What’s more, there’s a half-life to existing knowledge and techniques in most professions. An experienced doctor may misdiagnose a patient simply because his or her knowledge about certain symptoms or treatments is out of date. The same concept applies to all kinds of employees but especially to professionals such as engineers, scientists, lawyers, and doctors. In other words, it applies to a lot of the people who earn the largest salaries in the corporate world.

Samuel Arbesman, author of The Half-Life of Facts: Why Everything We Know Has an Expiration Date, stated in a TEDx video, “Overall, we know how knowledge grows, and just as we know how knowledge grows, so too do we know how knowledge becomes overturned.” Yet, in our recruitment and training policies, firms often act as if we don’t know this.

The only antidote to the shortening half-life of skills is more learning, whether it’s formal, informal or (preferably) both. And the only antidote to a lack of experience is giving people experience, or at least a good facsimile of experience, as in simulation-based learning.
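To put rough numbers on the idea, here’s a back-of-the-envelope sketch in Python. The ten-year half-life is a figure I invented purely for illustration, not one from Arbesman:

def still_current(years, half_life=10):
    # Standard exponential decay: the fraction of today's knowledge
    # that remains current after a given number of years.
    return 0.5 ** (years / half_life)

for years in (5, 10, 20):
    print(f"After {years} years: {still_current(years):.0%} still current")

# Prints roughly 71%, 50% and 25%: even a generous half-life
# hollows out an untended skill set within a career's span.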

The problem of treating people like fixed-program computers is part of a larger skills-shortage mythology. In his book Why Good People Can’t Get Jobs, Prof. Peter Cappelli pointed to three driving factors behind the skills myth. A Washington Post article sums up:

Cappelli points to many [employers’] unwillingness to pay market wages, their dependence on tightly calibrated software programs that screen out qualified candidates, and their ignorance about the lost opportunities when jobs remain unfilled…”Organizations typically have very good data on the costs of their operations—they can tell you to the penny how much each employee costs them,” Cappelli writes, “but most have little if any idea of the [economic or financial] value each employee contributes to the organization.” If more employers could see the opportunity cost of not having, say, a qualified engineer in place on an oil rig, or a mobile-device programmer ready to implement a new business idea, they’d be more likely to fill that open job with a less-than-perfect candidate and offer them on-the-job training.

Losing the Fixed-Program Mindset

The fixed-program mentality should increasingly become a relic of the past. Today, we know more than ever about how to provide good training to people, and we have a growing range of new technologies and paradigms, such as game-based learning, extended enterprise elearning systems, mobile learning and “massive open online courses” (aka MOOCs).

A squad of soldiers learn communication and decision-making skills during virtual missions: Wikimedia

With such technologies, it’s become possible for employers to train future applicants even before they apply for a position. For example, a company that needs more employees trained in a specific set of programming languages could work with a provider to build online courses that teach those languages. Or they could potentially provide such training themselves via extended enterprise learning management systems.

The point is that there are more learning options today than ever before. We live in a new age during which smart corporations will be able to adopt a learning paradigm closer to that of stored-program computers, a paradigm they’ve trusted in their technologies for over half a century.

Featured image: A rebuild of a British Bombe located at Bletchley Park museum. Transferred from en.wikipedia to Commons by Maksim. Wikimedia Commons.

An (“Evolving”) [List] of Python Resources

This list of Python resources for beginning coders is in (mostly) alphabetical order. I haven’t tried to provide different headers for videos versus MOOCs versus books, etc. I figure you can always search the page if you’re looking for something in particular. Where I can, though, I’ve given you the sometimes dubious benefit of my first-hand knowledge. In other cases, I’ve gone by what the website says or let you know what I’ve heard from others.

If you know of other sources that you think could be on this list, please shoot me a comment. Also, clue me in if any of the links don’t work properly.

16 Resources to Learn Python Programming: A shortish list of some of the best resources for learning Python. Many of these resources also appear in my list below, but there are a few here that I’ve not yet checked out.

80+ Best Free Python Tutorials, eBooks & PDF To Learn Programming Online: A nice collection of resources. I especially like its list of cheat sheets, which is something few other resource guides provide.

After Hours Programming Python 3 Tutorial: An online tutorial with which I’ve not had much experience. It does have a code simulator, but it doesn’t seem to require you to code something correctly to move on with the tutorial. That can be a good thing when you’re sick of being tested, and it can be a bad thing when you need to be really challenged.

The Best Way to Learn Python: A handy, dandy list of some great Python resources.

Bootcamps: Bootcamps are places (physical or virtual) you go to learn specific programming skills in a matter of a few weeks. I’ve never attended one, but there are a number of websites devoted to helping you distinguish one from another. They include SwitchUp, Techendo, and others. Bootcamps can be quite pricey, so it pays to be cautious and selective.

Byte of Python: An introductory text for beginners. For the most part, I think it’s clearly written. The author, Swaroop C.H., wrote it for Python 2, then updated it to Python 3, and then revised it back to Python 2 for reasons he explains in his book. But I’m glad I still have his PDF version on Python 3 on my mobile.

Check iO: I have a crush on this gamified tutorial (or, maybe it’s more of a game that teaches). Here’s the hitch: you need to solve the problems before you can see how other people have solved them. This drives me mad, though usually in a good way. I don’t have the chops to get to the end anytime soon, but it’s a terrific vehicle for taking my own lame solutions and then comparing them against some other tightly written solutions by programmers who are much better than I. This is usually humiliating, but also in a good way. And it’s a great way to learn how to write code that is more Pythonic.

CodeBuddies.org: This is a group of people who meet on Google Hangouts at scheduled times to talk about code (usually as it relates to specific books or projects) while sharing their screens. It’s intended to help participants stay motivated and learn faster. I’ve only been to a few hangouts, but it seems worthwhile.

Codecademy: It has a very good, interactive online Python tutorial as well as a community to help support it. I recommend it.

Codementor: For a fee, this service “connects you with experienced mentors for instant help via screen sharing, video, and text chat.” I’ve not yet used it, but I’ve been tempted a few times when banging my head on an especially recalcitrant problem.

Computer Game Development Tutorial: This is a series of videos on how to develop games in Python.

Computer Science Circles: A nice little interactive online tutorial sort of along the lines of the interactive version of How to Think Like a Computer Scientist, which I reference below.

Dive into Python 3: Classic book on Python that can be found online.

Drag and Drop Programming: A growing number of sites allow beginning programmers to build code by dragging and dropping “blocks” (or other visual widgets) rather than manually writing text-based code. These do not necessarily use the Python language, but they are a place where beginners — including children — can go to get a feel for how to code. Among them are MIT Scratch, Code.org, and Google Blockly. There’s a blurred boundary between these types of sites and sites that teach via gamification.

The Django Book: If you hang around the Python community for any length of time, Django will come up. It’s a Web framework — meaning that you can use it to write Web apps — written in Python. Last time I looked, this particular book came with a warning about being out of date, though the site indicated it was in the process of being updated. I’ve read that the official Django tutorial is good and that Tango With Django is another useful resource.

Exercism.io: Here’s how Wired described it: “Exercism is updated every day with programming exercises in a variety of different languages. First, you download these exercises using a special software client, and once you’ve completed one, you upload it back to the site, where other coders from around the world will give you feedback.” Exercism may be a sophisticated, crowd-sourced learning experience, but, at least for now, it requires you to use GitHub and command lines. In other words, it’s somewhat complicated to get off the ground with it. Still, if you’re beyond the early beginner stages, it may be a natural next step. Newcoder.io/ seems to be a similar site.

Instant Hacking: A super, duper abbreviated tutorial designed to teach Python on the fly.

Intro to Computer Science: I started and took a large segment of this Udacity MOOC when it was still relatively new. I enjoyed it. As far as I can tell, the courseware is still free, but there is a paid version that includes extras such as project feedback, personal guidance, personalized pacing support, and a verified certificate.

Game-Based Learning: I’ve already mentioned Checkio, which is geared more toward adults, but there are other games as well that are even more “game-like” and sometimes geared toward younger audiences, including CodeCombat, Codingame, and Code.org. PythonChallenge is closer to Checkio but, instead of starting with pretty clear instructions about what goal you need to achieve, you have to interpret clues as you go along. I should note that some games (such as CodeCombat) are free to start but charge you something, such as a monthly subscription fee, once you’ve ascended to certain levels.

Google (and not just the search engine): It’s no secret that the famous search engine is often the coder’s best friend. You put a question into the magic rectangle and it serves up lots of possible answers, usually good ones. And then there’s Google’s Python Class, which has both text and video. It’s fun largely because it is delivered to Google employees in what I assume is a Googleplex classroom.

Hands-on Python Tutorial: This is actually a full university course taught by Dr. Andrew N. Harrington. I like it very much, having stumbled onto it via iTunes.

The Hitchhiker’s Guide to Python!: Bills itself as an “opinionated guide [that] exists to provide both novice and expert Python developers a best-practice handbook to the installation, configuration, and usage of Python on a daily basis.” Most of what I’ve read is not for rank beginners, but there seems to be a lot of canny advice. It also contains a good list of other Python resources.

How to Build a Python Bot That Can Play Web Games: This is based mostly on text and screenshots, and it entails building a Computer Vision-based game bot in Python.

How to Think Like a Computer Scientist: Various versions of this book exist, but my favorite is the interactive version to which I’ve linked here.  In my experience, it is a fine blend of beginner book and online tutorial. I hope more computer “books” will follow this approach in the future.

Introduction to Python’s Flask Framework: Like Django, Flask is a Web framework for Python, but it is often billed as smaller and easier to learn. Therefore, it may be an appropriate starting place for beginning programmers who want to use a Web framework.

Invent with Python: I’m a big fan of the book Invent Your Own Computer Games, which is geared toward kids but which is terrific for beginner programmers. There’s a free online version. It takes you through the process of coding specific games, and the author, Al Sweigart, not only provides all the code but shows you how and why it works. There’s nothing else quite like it, in my experience. Sweigart has also authored Making Games with Python & Pygame and Hacking Secret Ciphers with Python, also available for free.

Invent with Python Bookshelf: This is a very nicely laid out list of books, many of which can be gotten for free. Al Sweigart, the owner of the site, not only includes his own books but many others as well.

Learnpython: This is an interactive Python tutorial that has a set of tutorials that teach the basics as well as more advanced lessons. I’ve used it and liked it. It’s straightforward, fast and without many bells or whistles.

Learn Python The Hard Way: Based on my experience in online communities, a lot of people use and swear by this. I’ve gone through parts of it. Some people say they’ve done it in a weekend, but I know I couldn’t complete it that quickly. There’s a free book online and also a relatively inexpensive (last time I checked) course that includes videos, among other things.

Introduction to Computer Science and Programming Using Python: An introduction to computer science as a tool to solve real-world analytical problems using Python 3.5.

Nettuts+’s Python from Scratch: This is a combination of text and video that demonstrates the “ins and outs of Python development,” starting from the most basic levels possible.

Non-Programmer’s Tutorial for Python 3: Also not interactive but, as with the 2.6 version, a nice set of Wikibooks-based lessons on learning the basics.

One Day of IDLE Toying:  A succinct introduction to IDLE, which stands for Integrated DeveLopment Environment. It’s the “integrated development environment” (that is, the doodad into which you write and run your programs) that’s bundled with Python, so you have it when you download the program.

Online Python Tutor: Free educational tool that allows a teacher or student to “write a Python program in the Web browser and visualize what the computer is doing step-by-step as it executes the program.”

Programiz: I stumbled on this site while looking for information on keywords in Python. Not only does it have an excellent explanation of keywords, complete with sample code, but the other parts of the online tutorial also look very clean and helpful. I’m looking forward to getting to know this site better.

Primers on Python: There are surprisingly few good, short, introductory Python primers online. As I’ve been learning, I’ve created the oddball Quick and Quick for Python, and I recommend First Steps With Python as a less quirky alternative. One (perhaps overly) succinct work is Patrice Koehl’s Python Primer. There’s also Crash into Python, although I think that’s geared toward people who know how to code but are new to Python.  A Beginner’s Python Tutorial seems like a pretty nice tutorial for complete beginners, and I think After Hours Programming can also be a useful primer.

Programming for Everybody (Python): A University of Michigan MOOC from Coursera. You’ll need to register and login to see it. I’ve not taken this course. Last time I looked, there were a number of other Coursera offerings as well, such as An Introduction to Interactive Programming in Python and Algorithmic Thinking from Rice University.

Pygame: A  set of modules designed for writing games in Python.

Python 3 Programming Tutorial: This is a series of videos on Python programming on YouTube. Generally speaking, YouTube is an amazing source of knowledge about programming and software usage in general. I’m pretty sure I could spend weeks there just watching hundreds of Python-related videos.

Python Books: From the official Python website, this is a list of books for both beginners and advanced practitioners. From what I can tell, it’s regularly updated (which is not always the case for other book lists). Here’s a much shorter list from a different source.

Python Course: By U.S. standards, this isn’t a course but, rather, an online tutorial that is almost all text and graphics. It has tutorials for both Python 2 and Python 3, and these tend to have pretty good explanations: or, at least, better than a lot of the official Python documentation, in my view.

Python for Beginners: From the official Python website, it has recommendations for installing, learning and otherwise investigating Python.

Python for Non-Programmers: From the official Python website, it has links to video tutorials, online courses, websites, books, and resources for younger students.

Python for You and Me: This simple but effective online book is written for programmers new to Python.

Python Turtle: I haven’t downloaded this but have used a version in a tutorial. It was fun. In essence, you write code to move your animated turtle in various ways. It “strives to provide the lowest-threshold way to learn (or teach) software development in the Python programming language.”

Python Docs: These are from the official Python.org website. My experience is that these contain a ton of great information but are, at times, difficult to parse. I sometimes need to go to other tutorials that are easier to understand, but I often start here.

Python Weekly is “a free weekly newsletter featuring the best hand curated news, articles, new releases, tools and libraries, events etc. related to Python.” I receive it and enjoy it, but I find that it’s geared to more seasoned Python coders rather than to beginners.

Pythonic Perambulations is not a blog for beginners but it’s well-written and fun to read (even when I can’t quite grasp the details). Think of it as aspirational. When you start to really grok this blog, you’re past the beginner phase.

StackOverflow: If you do a Google search to find out how you do something in Python, you’ll likely be directed to this website, which is where both beginners and experts go to ask questions and have those questions answered by various Python programmers. It’s invaluable. Because this has been going on a while, your question has usually already been asked and answered here, so do a search before asking anything.

Steven Thurlow’s Python Tutorial: I stumbled onto this tutorial while looking for decent explanations of modules and classes. I believe his are the clearest I’ve seen anywhere.

Stupid Python Ideas: I’m only beginning to be able to parse a blog like this one, which goes into detail on more sophisticated Python coding concepts and practices. This blog strikes me as one of the clearer ones. It has helped me, for example, understand how the function called grouper works. I just couldn’t understand the official documentation on it.

Ten Python Blogs Worth Following: I’m not really up on which Python blogs to follow, but here are some recommendations by the author of Bite Sized Python Tips.

Tutorials Point: When I’m searching Google to find out how to do something in Python, I often wind up here, especially if I can’t understand the official Python documentation (a not uncommon occurrence for me). The explanations here tend to be written in clear English and the examples are usually helpful. You can also move through the tutorial in a systematic way if you like.

Twitter accounts: There are a few that seem particularly worth following to me: @ThePSF, @planetpython, and @gvanrossum. I’m always interested in following other accounts if you have any recommendations.

Featured image: Poema visual transitable en tres temps (Joan Brossa), a visual poem traversable in three stages. Second stage: a path, with pauses and intonations. Jardins de Marià Cañardo, Velòdrom d'Horta (Barcelona). Author: Dvdgmz