Mighty Ron, Strong Ron

Mighty Ron, Strong Ron

Ron likes to brand things,
to hear the sizzle of his power
singeing then searing
the hides of others,
scenting the smoke
sweetly suffused with the suffering
of those daring to defy him.

He especially loves leaving his black
mark on wobbly-kneed youngsters
before they learn to resist
and are lost. Before
they seek solace and strength
in books or bodies, ideas or selves
that disturb Ron. Disgust him, really.

Ron brands them for their own good
(and for his, of course),
a great circle of lookers-on cheering
as Ron lifts his bony knee,
releasing the calves that dart away,
in pain, afraid, into the corral
of Ron’s staunch ranch,
his control unquestioned,
a strong and mighty man;
“Just see how strong,” they whisper.

Ron brands the old ones as well,
brands them with an ancient acid,
two parts fear, three parts rage,
five parts blinding bigotry.
Ancient, yes, but still so vividly effective;
They all receive, in fact, their branding
like a benediction.
“This will keep you safe,” he says,
“And free.” Though maimed, they cheer,
happy to now be captives
in Ron’s mighty corral.
“Ron loves liberty!” they sing,
and Ron winks, thrilled by their bleating,
despising their stink.

In the dead of night, though, mighty Ron
is fearful and frail within,
dreaming of a brimming poisonous pus
that threatens to pop off his head
as his face turns red, witnessing
a nightmare rush of crazed calves grown
and vicious, nipping, half-blind sheep
busting down Ron’s mighty corral,
turning the ranch to splinters,
masses yearning to be free,
a disastrous stampede.

Ron reels and shouts and brandishes
his once red iron, now black as death.
Stubbornly, stupidly trampled,
he explodes like a lanced boil,
spewing a noxious white goo
that cascades like slick sleet
over Ron’s once staunch ranch,
to forge an infected wasteland,
a lasting legacy of mighty Ron, strong Ron.

Featured image: Colorado. Branding calves, a photochrom print by the Detroit Photographic Co.

Who Cashed Our Productivity Paychecks?

Does higher labor productivity raise people’s living standards? The conventional wisdom is still “What, are you kidding? Of course it does!” But the evidence for that is pretty sketchy and has been for a while now. So, let’s do a little myth-busting as we explore the so-called productivity-pay gap.

Investopedia nicely sums up the standard line on productivity: “The level of productivity is the most fundamental and important factor determining the standard of living. Raising it allows people to get what they want faster or get more in the same amount of time. Supply rises with productivity, which decreases real prices and increases real wages.”

You can find the same basic claim all over the place, from The Library of Economics and Liberty to McKinsey to Forbes.

Just one little problem, of course. The data indicates it’s not true, at least not in the ways it has usually been explained.

We’re A Lot More Productive, But Not Much Richer

In the U.S., productivity has been going up for many years. In fact, it rose a little faster between 2019 and 2022 than it did the previous 12 years. Have a look at this data from the Bureau of Labor Statistics (BLS):

Productivity Change in the Nonfarm Business Sector, 1947-2022

Productivity hasn’t grown as quickly over the last 15 years as it had over the previous 17. But from 2019 to 2022, it was still growing at about the same rate as it did from 1973 to 1990. Overall, despite an occasional dip here and there, there’s been steady growth.

Sure, there’s plenty of room for economists to complain, but consider the fact that labor productivity more than doubled between 1979 and 2022!

So, if it’s true that “productivity is the most fundamental and important factor determining the standard of living,” then surely our standard of living also doubled in that same time period, right?

The Productivity-Pay Gap

Well, no, not by a long shot. But the answer requires more nuance than that. After all, there’s no clear definition of “standard of living,” and productivity itself comes in various flavors. Let’s stick with labor productivity, which compares growth in output to the growth in hours worked, and let’s use inflation-adjusted compensation as a more measurable version of standard of living.

Here’s what we get, according to the Economic Policy Institute:

The idea, of course, is that productivity and compensation rose pretty much in parallel up until the early 1980s and then split off from one another. In fact, productivity rose 3.7 times faster!

Which suggests that something’s wrong with the whole conceit, and with the fact that so many trusted sources keep claiming the two rise in virtual tandem despite solid evidence to the contrary.
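If you’re wondering where a figure like “3.7 times faster” comes from, the arithmetic is simple. Here’s a minimal Python sketch; the growth numbers are stand-ins of my own choosing, not the EPI’s exact series, but they reproduce the same ratio:

    # Illustrative arithmetic behind a productivity-pay ratio.
    # These growth figures are stand-ins, not the EPI's exact series.
    productivity_growth = 0.648   # cumulative productivity growth since 1979 (64.8%)
    compensation_growth = 0.173   # cumulative growth in typical workers' pay (17.3%)

    ratio = productivity_growth / compensation_growth
    print(f"Productivity grew {ratio:.1f}x faster than compensation")  # ~3.7x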

How Do We Explain What Happened?

So, how can we explain the productivity-pay gap? There are various theories, but here are three that, while not necessarily contradictory, stress different facets of the gap.

Theory 1: Policymakers Tore Out the Coupling

The EPI itself, which has a somewhat left-leaning orientation, explains it like this: “Starting in the late 1970s policymakers began dismantling all the policy bulwarks helping to ensure that typical workers’ wages grew with productivity. Excess unemployment was tolerated to keep any chance of inflation in check. Raises in the federal minimum wage became smaller and rarer. Labor law failed to keep pace with growing employer hostility toward unions. Tax rates on top incomes were lowered. And anti-worker deregulatory pushes—from the deregulation of the trucking and airline industries to the retreat of anti-trust policy to the dismantling of financial regulations and more—succeeded again and again.”

In other words, the government allowed the system to get misaligned. Let’s use the metaphor of a coupling. In machinery, a coupling is a device for joining two rotating shafts at their ends so as to transmit torque from one to the other. The goal, of course, is to transmit power fairly evenly. In the coupling of productivity and compensation, however, things fell badly out of whack. One shaft kept spinning like a champ while the other started moving in slow-mo. If the economy were a machine, we’d send it to the shop.

Theory 2: We’re Not Measuring It Right

Another theory is that the productivity-pay gap is real but maybe not quite as large as the consumer-price-indexed compensation rates suggest. The BLS provides the following chart.

In this graph, the bottom dotted line is compensation adjusted using the consumer price index, but the light blue line above that is compensation that’s adjusted using something called the output price index, which is arguably more accurate. The authors of the article “Understanding the labor productivity and compensation gap” explain:

Workers are compensated based on the value of goods and services produced, not on what they consume. Using an output price deflator, a measure of changes in prices for producers, instead of the CPI is an alternative that better aligns what is produced to the compensation that workers receive. Each industry has its own unique output deflator that matches the goods and services that are produced in that industry.

By using these “deflators” for a variety of industries, they find that the size of the productivity-compensation gap “decreased in 87% of industries that previously showed productivity rising faster than compensation.”

To be clear, the gap isn’t going away if you use this technique, but it does shrink in most industries.
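To make the deflator point concrete, here’s a minimal sketch of the same nominal pay deflated two ways. All the numbers are hypothetical; the only point is that the choice of price index changes the measured “real” growth:

    # The same nominal pay, deflated two ways (all numbers hypothetical).
    nominal_start, nominal_end = 20.00, 40.00  # hourly pay: base year vs. later year

    cpi_growth = 1.90              # consumer prices nearly doubled
    output_deflator_growth = 1.60  # prices of what workers produce rose less

    real_growth_cpi = (nominal_end / cpi_growth) / nominal_start - 1
    real_growth_out = (nominal_end / output_deflator_growth) / nominal_start - 1

    print(f"CPI-deflated real pay growth:    {real_growth_cpi:.1%}")  # 5.3%
    print(f"Output-deflated real pay growth: {real_growth_out:.1%}")  # 25.0%

Same paycheck, two very different “real” stories, which is why the deflator debate matters.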

Theory 3: The Rich Got Most of the Pay Raise

The third and, to me, most convincing theory is that average folk had their productivity lunch eaten by their better-off brethren.

This is clear when you look at the work by economists such as Erik Brynjolfsson and Andrew McAfee of MIT. In their book Race Against the Machine, they comment on a graph that shows the amazing and growing disparity between real median household income and real GDP per capita (which is one measure of productivity). Below is a more up-to-date version of the one they point to in their book:

They call it “striking” and then make this observation:

There have been trillions of dollars of wealth created in recent decades, but most of it went to a relatively small share of the population. In fact, economist Ed Wolff found that over 100% of all the wealth increase in America between 1983 and 2009 accrued to the top 20% of households.  The other four-fifths of the population saw a net decrease in wealth over nearly 30 years.

Ouch. So, yes, the productivity paychecks are real. And they do raise the standard of living — but not for everybody. Or even most people.

Were Gains by the Rich Earned or Stolen?

Of course, this raises another question: “Did those folks at the top earn that paycheck, or steal it?”

If that’s incendiary phrasing, don’t blame me. Blame the purveyors of conventional wisdom mentioned above. The implication has always been that we all benefit from productivity increases, but, in practice, as Brynjolfsson and McAfee say, “There is no economic law that says that everyone, or even most people, automatically benefit from technological progress.”

Maybe that makes sense? Let’s say a bunch of tycoon types invest in robotics to boost the productivity of the average worker on the line of some manufacturing plant. After the inevitable layoffs of many workers, do the rest of those surviving employees divvy up the compensation of the people who were laid off minus the cost of the machines? 

Probably not. Instead, the benefits accrue to the investors and the senior managers (especially CEOs) who made the decision to invest in the robots. That is, the rich get considerably richer while the surviving workers only get a modest increase. And the folks who were laid off? How much of a cut do you think they’re getting?

Yeah, bupkis. Or, in many cases, they actually lose economic ground.

Multiply this dynamic many times over the course of decades, and median incomes stay flat while GDP per person (which is an average rather than a median) goes up.
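The average-versus-median point is easy to demonstrate. In this toy Python example (the incomes are invented), the top earners’ gains drag the mean way up while the median doesn’t budge:

    import statistics

    # Toy income distribution in thousands of dollars (invented numbers).
    incomes_then = [30, 35, 40, 45, 50, 60, 70, 80, 90, 100]
    # Decades later: the top two incomes soar; everyone else's stay flat.
    incomes_now  = [30, 35, 40, 45, 50, 60, 70, 80, 250, 600]

    for label, data in (("then", incomes_then), ("now", incomes_now)):
        print(label, "mean:", statistics.mean(data), "median:", statistics.median(data))
    # The mean jumps from 60 to 126; the median stays put at 55.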

So, to answer our question, “They earned it, kind of, sort of, in a way, if you squint hard enough and quash any human instinct for justice and fairness.”

But at least we now have a clue about where benefits of the productivity increases go. That is the beginning of wisdom — and a fine antidote to fiscal fairy tales.

Productivity Chickens Coming Home to Roost

Recently, there has been a decline in U.S. productivity. In fact, some analysts claim that the U.S. has now seen five consecutive quarters of year-over-year declines.

The big question is why. There’s lots of finger-pointing. Some high-profile CEOs blame lazy work-at-home employees for the decline. Others argue, to the contrary, that it is the return-to-work policies that are most strongly linked to productivity declines.

There are plenty of other suspects as well. For example, many people switched jobs during the “great resignation” and so stepped into roles where they had to learn the ropes before becoming fully productive again. Or there’s the rapid return of many employees to the workforce, a dynamic often associated with temporary reductions in productivity.

There’s also the possibility that higher inflation — combined with pay increases that are insufficient to keep up with it — is simply demoralizing workers. Why should they work harder for smaller paychecks?

And, of course, there’s the idea that younger generations just aren’t as eager as their older baby boomer counterparts to keep their proverbial noses to the grindstone. It’s less that they’re “lazy” and more that they just aren’t as willing to put up with bossism and toxic workplaces.

CEOs Venting Their Spleen

Meanwhile, CEOs have been venting their spleen about declining productivity, so much so that it feels as if there’s a new “leaked video of a CEO having a meltdown each week,” writes AJ Hess in Fast Company.

On one hand, I get their frustration. Their jobs are, of course, to boost the performance levels of their organizations.

On the other hand, what makes these meltdowns both funny and sad is the extraordinary pay gap between typical employees and their bosses. For example, recent figures indicate that S&P 500 CEOs averaged $18.3 million in compensation in 2021. That’s a whopping 324 times the median worker’s pay!

How did their pay get so exorbitant? Well, one answer is, of course, productivity. That is, they (and other upper-class Americans) have enjoyed the fruits of the productivity bumps of workers whose wages have largely stagnated over the last 40 years.

Which makes you wonder: If the typical worker had been receiving their full share of the benefits of productivity increases since the early 1980s, would we be in a position where “quiet quitting” was even a thing?

Maybe not. What we could be seeing is the productivity chickens come home to roost. If the rich get most of the monetary benefits of productivity increases, then let them do most of the work.

Or, at the very least, they — in partnership with the government — should stop whining and figure out a way to make productivity increases benefit everyone in their organizations, not just the investors and executives at the top.

Networks of Birdsong

Birdsong is networking, the sending and receiving of signals across broad expanses. In the mornings, especially right now, the choir gets so loud that I am, as they say, up with the birds. And, although not an active or important part, I too am within those networks of birdsong. That is, I listen, though I’m mostly ignorant of what it all means.

Each Bird Is a Neuron

Think of the birds themselves as neurons. Their bodies are the soma that provide energy to drive activities. Their voices are axons, sending messages to various other birds at once, and their ears (though not readily visible) are dendrites, receiving those signals.

In the mornings, I hear a complex reticulum of sound: some of them are songs, some calls, some alarms.

Different sounds and songs have different and perhaps multiple meanings:

  • mating songs used to attract mates
  • territorial songs to ward off competitors
  • alarm calls to ward off predators
  • contact calls to coordinate movements
  • begging calls to solicit food from parents
  • social songs to strengthen bonds between groups
  • imitation songs to mimic others
  • whisper songs used for quiet communication
  • flight songs used to communicate on the go

In the mornings, I expect, we’re hearing all of these and more.

Why in the Morning?

In the morning, there tends to be less background noise, allowing them to communicate better. Also, the air is cooler and, therefore, denser. This means their songs will travel further at that time of day.

Perhaps their symphonies of sound are also like morning meetings at work, a way for everyone to plan and prepare for the coming day.

Imagine a Giant Bird Brain

We often think of networks in visible terms. We picture the brain and we envision complex interweavings of gray matter. We picture transportation networks and we see roads and railroad tracks and airline flight paths. We picture communication networks and imagine telephone poles and fiber optic cables and cell towers and millions of computers, televisions and more.

It requires a bit more imagination to visualize birdsong this way. But conceive of each bird sound as a differently colored fiber optic cable that extends to every other bird in the vicinity. These are the axons sending messages in multiple directions at once.

Now imagine that a bird (call her Alice) is just inside the hearing range of another bird (call her Shiho) who is calling or singing. If Alice responds to Shiho in some way, that message does not just go back to Shiho but to other birds who are considerably outside of the call range of Shiho.

Now add a third bird (call him Jake) who hears Alice and responds to her call, even though her original call was intended for Shiho. Now multiply this thousands or millions of times, and envision the complexity and sheer scale of that network.
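For the programming-minded, that relay effect is just breadth-first propagation through a “can hear” graph. Here’s a toy Python sketch using the three birds above; the graph itself is invented:

    from collections import deque

    # Toy relay model: an edge means "can hear." Jake is outside Shiho's
    # range, but her signal still reaches him via Alice. (Invented graph.)
    can_hear = {
        "Shiho": ["Alice"],
        "Alice": ["Shiho", "Jake"],
        "Jake":  ["Alice"],
    }

    def reachable(start):
        """All birds a signal can reach if each hearer responds in turn."""
        seen, queue = {start}, deque([start])
        while queue:
            for neighbor in can_hear[queue.popleft()]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return seen

    print(sorted(reachable("Shiho")))  # ['Alice', 'Jake', 'Shiho']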

The World Thinking Its Thoughts

Occasionally I’ll read an article discussing the rise of the human infosphere wrapping the entire planet in wire and wireless networks, one that’s becoming the “nervous system” of the world. That may be valid as far as it goes, but we should remember that vast information networks existed long before human beings did, and they continue today.

Human beings are still only in the early stages of being able to grasp the information in these natural networks. Indeed, it’s likely that we civilized 21st-century folks have actually lost much of our ability to tap into those networks. Many of our pre-agriculture predecessors were likely better at this, able to interpret what different sounds might mean for them.

For example, they might have gotten a heads up that a certain known and dangerous predator was in the area, or they might have been able to net certain birds that had communicated the location of a feeding ground.

What’s Next?

But the one advantage we do have is our latest technologies. For example, there is the splendid Merlin app out of the Cornell Lab of Ornithology, which identifies birds by their songs as well as by photos. Using these types of tools, we can more easily learn the various sounds of birds and even play certain vocalizations back to them to see if and how they respond.

There are other technologies that may help as well, especially in the area of machine learning. Indeed, Karen Bakker, a professor at the University of British Columbia and a fellow at the Harvard Radcliffe Institute for Advanced Study, is quoted as saying,

There are long-standing Indigenous traditions of deep listening that are deeply attuned to nonhuman sounds. So if we combine digital listening—which is opening up vast new worlds of nonhuman sound and decoding that sound with artificial intelligence—with deep listening, I believe that we are on the brink of two important discoveries. The first is language in nonhumans. The second is: I believe we’re at the brink of interspecies communication.

That’s an amazing statement that I hope to examine more closely in a future post.

Why ChatGPT Is NOT Just a Fancy Autocomplete

I suspect one of the biggest myths of our time is that ChatGPT and its fellow large language models are just fancy autocomplete programs. This widespread impression could be blinding us to the true capabilities and power of these AIs, both now and in the future.

The Predict-Next-Word Method

As most people know by now, these generative pre-trained transformer (or GPT) large language models are built on the idea of predicting the next word in a sequence of words. That sounds simple, right?

So simple, in fact, that it’s led many people to conclude that these programs are not truly intelligent, much less sentient or conscious. All that might, in fact, be true. Still, we should stop assuming they’re as simple as all that. We need to look beyond the “predict next word” methodology and consider the deep complexity of the resulting neural networks.
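To be clear about what “predict the next word” means at its crudest, here’s a toy Python model built on nothing but bigram counts. Real GPT models are deep neural networks, not lookup tables; this illustrates only the objective, not the architecture:

    from collections import Counter, defaultdict

    # Toy next-word predictor: count which word follows which in a tiny corpus.
    corpus = "the cat sat on the mat the cat ate the food".split()

    next_counts = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        next_counts[word][nxt] += 1

    def predict_next(word):
        counts = next_counts[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # 'cat', the word that follows 'the' most often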

Human Intelligence Was Built on a Simple Binary

Before getting into the details of the scaling hypothesis, which potentially sheds light on the “predict next word” issue, let’s discuss the origin of our own intelligence.

Human intelligence, such as it is, is based on one of the simplest binaries possible: reproduce or not. Our ancestors, the first living cells on the planet, did not need to be intelligent to survive. They just needed to figure out a way to reproduce before perishing. Even today, there are many organisms that are descendants of those first cells and probably no more intelligent than they were at the time.

Binary operations as black box, by Talonnn

Then there’s us. Our intelligence was not inevitable. In fact, it is just one of an almost infinite number of paths to reproductive success.

So, when we say that the new AIs are only “fancy autocorrects,” consider that we are only fancy reproduction machines. You could even argue that the need to predict the next word in a sentence is a more complicated and difficult feat than the ones that sparked our own evolution.

So, perhaps we should stop denigrating the “predict next word” challenge. That challenge is just the evolutionary mechanism of these AIs. The ones that do that prediction best (that is, today’s GPT models) have survived into current versions, being trained, tweaked and calibrated by AI researchers to improve their success rates. The rest have been left behind. That may not, despite our helping hand, be all that different from our own path.

Prediction Machines

We don’t know how intelligent these new AIs are. They sometimes seem bafflingly bright, other times dumb and delusional. In that way, I suppose, they are a lot like people.

Of course, a lot of people will claim they know and promptly jump into rancorous debates on the subject (see Twitter or the comment sections in major newspapers). But even the builders of ChatGPT don’t seem sure. In fact, Ilya Sutskever, chief scientist of the OpenAI research group, tweeted at one point that “it may be that today’s large neural networks are slightly conscious.”

Slightly conscious? The fact we aren’t sure is the part that frightens some people (and by some people, I mean me). We are dealing with difficult cognitive and philosophical questions that, far from being relegated to the halls of academia, suddenly have very real implications and consequences.

What we do know is that the AIs are good at prediction. Indeed, this is at the heart of what they do. We also know that some thinkers believe that prediction is at the heart of our own cognition.

Remember Jeff Hawkins? He wrote, “The brain creates a predictive model. This just means that the brain continuously predicts what its inputs will be. Prediction isn’t something that the brain does every now and then; it is an intrinsic property that never stops, and it serves an essential role in learning. When the brain’s predictions are verified, that means the brain’s model of the world is accurate. A mis-prediction causes you to attend to the error and update the model.”

Does that sound familiar? If prediction is what we do and what the GPTs do, perhaps a little humility is in order.
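Hawkins’s predict-compare-update loop is easy to sketch in a few lines of Python. The learning rule below (a simple error-driven running estimate) is my own stand-in for illustration, not his actual cortical model:

    # A minimal predict-compare-update loop in the spirit of Hawkins's
    # description. The update rule is a stand-in, not his actual model.
    def learn(stream, learning_rate=0.3):
        estimate = 0.0
        for observation in stream:
            prediction = estimate              # predict the next input
            error = observation - prediction   # mis-prediction: attend to this
            estimate += learning_rate * error  # update the model of the world
        return estimate

    print(learn([10, 10, 10, 10, 10]))  # the estimate climbs toward 10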

The Scaling Hypothesis

Now let’s go to a blog post by Gwern Branwen. Before I get into that, though, I’ll stipulate what’s no doubt obvious to any experts who might read this: that is, this isn’t my world, not by a long shot. I stumbled onto Branwen’s blog only because Eliezer Yudkowsky mentioned him by name in his interview with AI professor and podcaster Lex Fridman. I can’t vouch for the accuracy of the ideas in Branwen’s post, written in what strikes me as an idiosyncratic way.

If you want to truly judge and understand Branwen’s piece “The Scaling Hypothesis,” it’s best to read it yourself. But I’m going to provide a few highlights of what seem (to me) to be two key ideas.

In essence, his post is about the possibility that one can achieve an artificial general intelligence, or AGI, simply by scaling up a GPT model, even if the training objective of that model remains pretty simple, as in “predict the next word.”

Branwen writes, “The blessings of scale … support a radical theory: an old AI paradigm held by a few pioneers in connectionism (early artificial neural network research) and by more recent deep learning researchers, the scaling hypothesis. The scaling hypothesis regards the blessings of scale as the secret of AGI: intelligence is ‘just’ simple neural units & learning algorithms applied to diverse experiences at a (currently) unreachable scale. As increasing computational resources permit running such algorithms at the necessary scale, the neural networks will get ever more intelligent.”

Bing’s Take

I had a discussion with the Bing chatbot (based on ChatGPT-4) to help determine whether I was understanding this right. Here’s a piece of the conversation.

Me: So, is the idea that one can achieve an artificial general intelligence by scaling up the system, even if the training objective is fairly simple, such as “predict the next word”?

Bing: Yes, that is one interpretation of the scaling hypothesis. The idea is that by training ever larger networks on simple tasks such as language modeling (predicting the next word), the network will eventually learn to perform more sophisticated behaviors and generalize to a wide range of tasks. Would you like more information on this topic?

Me: Tell me more about the Scaling Hypothesis.

Bing: The Scaling Hypothesis is an idea put forward by Gwern Branwen that suggests that once we find a scalable architecture for artificial intelligence, we can simply train ever larger networks and ever more sophisticated behavior will emerge naturally as the easiest way to optimize for all the tasks and data.

According to this hypothesis, as we increase the size of the network and the amount of data and computation used to train it, the performance of the network will improve in a predictable and smooth manner. This means that even if we start with a simple task such as language modeling (predicting the next word), as we scale up the network it will eventually learn to perform more sophisticated behaviors and generalize to a wide range of tasks.

This hypothesis has been supported by recent developments in artificial intelligence, such as the success of large language models like GPT-3. These models have demonstrated an ability to perform a wide range of tasks despite being trained on a simple objective (predicting the next word).

How Far Can It Scale?

This suggests that the only limitations to an ever more intelligent super-mind are compute power and parameters. Now, that may be wrong. Perhaps we’ll someday hit an S-curve somewhere along the way. But, maybe not….
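The smooth, predictable improvement Bing mentions is usually modeled as a power law. Here’s an illustrative Python sketch; the constants are loosely in the style of published scaling-law papers, not authoritative values:

    # Illustrative power-law scaling: loss falls smoothly as parameter count
    # N grows. The constants here are for illustration only.
    def loss(n_params, n_c=8.8e13, alpha=0.076):
        return (n_c / n_params) ** alpha

    for n in (1.75e11, 1e12, 1e14):  # GPT-3-sized, bigger, bigger still
        print(f"N = {n:.2e} -> loss ~ {loss(n):.2f}")
    # The curve declines smoothly with scale, with no obvious wall in sight,
    # which is exactly the scaling hypothesis's bet.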

Branwen writes, “Depend⁣ing on what in⁣vest⁣ments are made into scal⁣ing [deep learning], and how fast com⁣pute grows, the 2020s should be quite in⁣ter⁣est⁣ing—sig⁣moid or sin⁣gu⁣lar⁣ity?”

Um, okay.

Do you think this is scary?

Wait, it gets worse — or better, depending on your point of view.

The Emergence of Agency

Let’s talk about “agency” in the context of AI. It refers to a program’s capacity to make choices and act on its own initiative. In other words, it is the ability to exert control or influence over a situation.

Looking at the GPT models, Branwen writes, “GPT-3 clearly shows agency because it learns generative models of many agents, real or fictional, which can be used to ‘roleplay’—plan and take action which will steer environments into small goal regions of state-space; and this is not merely hypothetical, or confined to text transcripts of actions & results in its internal simulated environments but given effectors, like in the case of SayCan, a language model will in fact do such things in the real world.”

Okay, that’s a bit hard to parse but let me give it a go. He’s saying that ChatGPT-3, as we’ve come to know it, demonstrates the ability to make “choices” (or something like them) and act on those choices. For example, when we ask it to take on the persona of a real or fictional character, it will make choices in the way it subsequently handles language.

Moreover, if you were to hook it up to a robot through a control method such as SayCan — which can generate natural language actions for a robot based on a user’s request — then it could take action in the real world. In other words, the robot could make something like choices and act accordingly.
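As I understand it (and this is my hedged reading, not the actual SayCan code or API), the trick is to multiply two scores: how useful the language model thinks a skill sounds for the request, and how feasible the robot’s value function says that skill is right now. A toy Python sketch with invented skills and numbers:

    # Toy SayCan-style decision loop (invented skills and scores; not the
    # real SayCan codebase). Request: "clean up the spilled coffee."
    skills = ["pick up the sponge", "go to the counter", "wipe the spill"]

    # Stand-in: how useful the LLM thinks each skill sounds for the request.
    usefulness = {"pick up the sponge": 0.5, "go to the counter": 0.2, "wipe the spill": 0.9}
    # Stand-in feasibility: the robot isn't holding a sponge yet.
    feasibility = {"pick up the sponge": 0.9, "go to the counter": 0.8, "wipe the spill": 0.1}

    best = max(skills, key=lambda s: usefulness[s] * feasibility[s])
    print(best)  # 'pick up the sponge': both useful AND currently doable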

The Robot Acts on Its Own

I’m not sure about the accuracy of this interpretation of GPT’s agency, but I think that’s approximately the idea. Via a GPT model, agency is emergent. You don’t build it in. It’s an “ordinary continuum of capability.” Branwen concludes that “a very wide range of problems, at scale, may surprisingly induce emergent agency.”

In short, agency happens. It’s hard to remove from the AI. He claims, “The broader and more powerful a system is, the more the next feature or next piece of data may push it over the edge, and it becomes harder to engineer a system without that aspect.”

I don’t want to say that a GPT-enabled robot has “free will,” whatever that actually means. But it might naturally have its own sense of agency.

When AIs Break Bad, Who Is Responsible?

This is not, of course, the first time the topic of AI agency has arisen. Various papers have raised the question of whether AI systems can make decisions on their own. One author argues that we need to think about what humans want an AI to do (that is, their human goals), when we try to figure out who is responsible for any mistakes an AI makes.

But others are starting to think about AIs as having moral agency aside from humans. In fact, a 2017 European Parliament report floated the idea of granting special legal status to robots that can learn, adapt, and act for themselves. “This legal personhood would be similar to that already assigned to corporations around the world,” reports Business Insider, “and would make robots, rather than people, liable for their self-determined actions, including for any harm they might cause.”

Thinking Uncomfortable Thoughts

How “smart” would a machine need to get before it has not just agency but moral responsibility for that agency?

I’ve no idea. We should note that Branwen’s blog post discusses what the public refers to as ChatGPT-3. OpenAI has now moved past that. In fact, his post seems to have anticipated the latest scaling up. By some estimates, ChatGPT-4 includes one trillion parameters, compared with just 175 billion in ChatGPT-3. Other estimates are that it includes up to 100 trillion parameters.

What are parameters? I don’t have a deep understanding myself, but they essentially determine the level of complexity of these systems. Our World in Data defines parameters as “variables in an AI system whose values are adjusted during training to establish how input data gets transformed into the desired output; for example, the connection weights in an artificial neural network.”
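For a feel of how parameter counts pile up, here’s a back-of-the-envelope Python sketch for a toy fully connected network. Real GPT models arrange their parameters differently (attention layers, embeddings), but the counting principle is the same:

    # Parameter counting for a toy fully connected network:
    # each layer has (inputs x outputs) weights plus one bias per output.
    layer_sizes = [1000, 4000, 4000, 1000]  # invented layer widths

    total = sum(n_in * n_out + n_out
                for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
    print(f"{total:,} parameters")  # 24,009,000 for this toy stack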

The more complex the network, the smarter the system. This sounds a lot like how the human brain works, though I’m sure many experts would claim that’s a faulty and oversimplified analogy. Maybe so, but the size and sophistication of the AI reticulum does seem to matter an awful lot.

Therefore, for now, it makes a lot less sense to talk about these systems as fancy autocompletes and a lot more sense to talk about them as increasingly enormous networks (that happen to think at lightning speed). This may give us a much better idea of their intelligence or, if you prefer, their ability to mimic intelligence. Understanding the difference, if there is one, is among the most critical challenges of our day.

Postscript

If you’re seeking a more technical and detailed look into how ChatGPT works, I recommend Stephen Wolfram’s article “What Is ChatGPT Doing … and Why Does It Work?” It’s quite long but a compelling read if you want to grasp the mechanics of ChatGPT. He concludes, “What ChatGPT does in generating text is very impressive—and the results are usually very much like what we humans would produce. So does this mean ChatGPT is working like a brain? Its underlying artificial-neural-net structure was ultimately modeled on an idealization of the brain. And it seems quite likely that when we humans generate language many aspects of what’s going on are quite similar….[On the other hand], unlike even in typical algorithmic computation, ChatGPT doesn’t internally ‘have loops’ or ‘recompute on data.’ And that inevitably limits its computational capability—even with respect to current computers, but definitely with respect to the brain. It’s not clear how to ‘fix that’ and still maintain the ability to train the system with reasonable efficiency. But to do so will presumably allow a future ChatGPT to do even more ‘brain-like things.'”

The Doom Scrawl: April 1, 2023

Like a lot of other people these days, I sometimes get caught up in doom scrolling. This week I decided to summarize a few of the doomiest stories and make some probably-not-serious-enough comments on them, partly to entertain and partly to relieve some of my own anxiety.

As other bloggers know, sometimes writing about things at least gives us the illusion that we’re doing something about them, allowing us a greater sense of control. And, if we’re applying satire, maybe making fun of things allows us to take them a little less seriously. Gallows humor, and all that.

Anyway, here are a few items from this week’s doom scrolling.

Death by Nuclear Knuckleheads
  • Russia has updated its foreign policy doctrine to reflect its increasingly confrontational relationship with the West and its perception of “existential threats” to its security and development from “unfriendly states.” The document names the United States as the main threat to international stability and driver of an “anti-Russian line,” but also seeks “peaceful coexistence” and a “balance of interests” with Washington.
  • The Russian Federation’s recent announcement of plans to station non-strategic nuclear weapons in Belarus represents the first “nuclear sharing” agreement made since the Treaty on the Non-Proliferation of Nuclear Weapons entered into force in 1970, the senior United Nations disarmament official told the Security Council today, emphasizing that — against the backdrop of the Ukraine conflict — the risk such arms will be used is higher today than at any time since the end of the cold war.
My Gratuitous Commentary

Mr. Putin is deep in a hole of his own digging and refuses to hand up his shovel. Russia is not under existential threat from anyone but Mr. Putin, but he alone may be enough to destroy it, and possibly the world, if we keep stupidly doing this dance toward nuclear apocalypse. We should have internationally banned, or at least vastly reduced the number of, those hellish devices decades ago. Let’s somehow make peace with this raging, narcissistic nutjob and move on with global nuclear nonproliferation. Soothe Putin’s aching, rotten ego. Pay him off. Whatever. Humanity has been walking this existential tightrope way too long and needs to get off of it. Man up, Biden, and get it done.

Extinction by Our Latest Favorite Smart Tech
  • A smart dude from an AI thinktank says we’re all definitely going to die, wiped out by the most popular technology since…well…ever. Eliezer Yudkowsky, who leads research at the Machine Intelligence Research Institute, writes, “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers….Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms.”
More Gratuitous Commentary

First of all, respect for the line “giant inscrutable arrays of fractional numbers.” Well done. Second of all, Yudkowsky might be right. He’s certainly certain. If he’s right to be so certain, then our days are almost certainly numbered because the AI cat has already clawed its way out of the bag. A bunch of smart kids from Stanford just created a ChatGPT knockoff for a total of $600, which is about 1/6th of a month’s rent out in that part of the world.

Meanwhile, Meta’s AI, which is called LLaMA (Large Language Model Meta AI), was leaked and then shared on 4chan, where someone uploaded it via torrent. Just great. There was already one AI trained on 4chan, and it became a toxic hate-speech machine. That’s what we need: a super intelligence trained on the darkest dregs of humanity. At the very least, we’ll see some very nasty generative AI media coming down the pike.

So maybe we’re all doomed already. Regardless, the Biden Administration needs to weigh in soon because, well, Congress is busy with TikTok (no, seriously). Those old white guys haven’t a clue (and I’m quickly getting to be one, so I know). I figure our only hope for government action is some younger, brighter folks squirreled away somewhere in the West Wing who can convince old Joe that this AI thing is worth, you know, thinking about. But maybe they’re already deep into it. Maybe they’ve even got Yudkowsky on speed dial. Now if they can only reach him in whatever newly refurbished-with-Faraday-caging 1950s-vintage bomb shelter he’s holed up in.

The Singularity Just Got Nearer…Again…And How!

To me, it already seems like another era. Last October, I wrote a tongue-in-cheeky post called “The Singularity Is Pretty Damned Close…Isn’t It?” I wrote it after the AI art generator revolution had started but before ChatGPT was opened to the public on November 30, 2022. That was only four months ago, of course, but it feels as if everything has sped up since, as if we human beings are now living in dog years. So it’s already high time to revisit the singularity idea.

Are We Living on Hertzian Time Now?

As you may know, the word “hertz” — named after Heinrich Rudolf Hertz, a German physicist who discovered electromagnetic waves in the late 19th century — is a unit of frequency. More specifically, it’s the rate at which something happens repeatedly over a single second. So, 1 hertz means that something happens just once per second, whereas 100 hertz (or Hz) means it’s happening 100 times per second.

So, an analog clock (yes, I still have one of those) ticks at 1 Hz.

 Animation of wave functions, by Superborsuk

Unless you’re an engineer, you probably think about hertz as part of the lingo folks throw around when buying computers. It’s basically the speed at which central processing units do their thing. So, a laptop with a speed of 2.2 GHz has a CPU that processes at 2.2 billion cycles per second. Basically, that’s the speed at which computers carry out their instructions.

So, my (completely fabricated) notion of Hertzian time refers to the fact that, day to day, we humans are seeing a whole lot more technological change cycles (at least in terms of AI) packed into every second. Therefore, four months now feels like, well, a whole lot of cycles whipping by at a Hertzian tempo. Generative AI is overclocking us.
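If you want the overclocking joke quantified, the arithmetic is easy (and, yes, entirely back-of-the-envelope):

    # Cycles a 2.2 GHz CPU completes in roughly four months.
    ghz = 2.2
    seconds = 4 * 30 * 24 * 60 * 60   # ~four months of seconds
    print(f"{ghz * 1e9 * seconds:.1e} cycles")  # ~2.3e16 ticks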

How Wrong Can I Get?

Back in late October, I wrote, “There’s at least one pursuit that AI has yet to master: the gentle art of conversation. That may be the truest assessment of human level intelligence. At least, that’s the premise underlying the Turing test.”

Many Hertzian cycles later, the world looks very different. Now millions of people are chatting up these proliferating LLMs (I just got my access to Bard the other day, btw) every moment of every day, and we’re just getting started.

It’s true that if you get used to conversing with these models, you can tell that they aren’t quite human. And, the main ones go to some length to explain to you, insist even, that they are NOT human.

Every Day Feels A Little More Turing Testy

I recently asked ChatGPT-3 specifically, “Do you think you could pass the Turing Test if properly prepared?” and it responded: “In theory, it is possible that I could be programmed to pass the Turing Test if I were given access to a sufficiently large and diverse dataset of human language and provided with sophisticated natural language processing algorithms.”

I tend to agree. The newest AIs are getting close at this stage, and I imagine that with only a few modifications, they could now fool a lot of people, especially those unfamiliar with their various little “tells.”

Coming to Rants and Reality Shows Near You

I think society will increasingly get Turing testy about this, as people debate whether or not the AIs have crossed that threshold. Or whether they should cross it. Or whether AIs have a soul if they do.

It’ll get weird(er). It’s easy to imagine growing numbers of religious fundamentalists of all types who demand Turing-level AIs that preach their particular doctrines. And who deem those “other” AIs as downright satanic.

Or envision reality TV shows determined to exploit the Turing tests. Two dozen attractive, nubile wannabe LA actors who are trying to out-Turing one another on a tropical island. They win a cool mill if they can tell the (somehow telegenic) AI from the (oh-so-hot) real person on the other side of that sexy, synthesized voice. Think of the ratings!

Kurzweil May Have Nailed It

As I said in that first singularity piece, the futurist Ray Kurzweil has predicted that an AI will pass the Turing Test in 2029. I wasn’t so sure. Now I wonder if it won’t be sooner. (I suspect the answer will depend on the test and the expertise of the people involved.)

But will the passing of the Turing Test mean we are right smack in the middle of the singularity? Kurzweil doesn’t think so. He has his sights set on 2045 when, as I understand it, he thinks humanity (or some portion of it) will merge with the superintelligent AIs.

That still seems very science fictional to me, but then I also feel as if we’re all living right smack dab in a science fictional universe right now, one I never thought I’d live to see….

Those Seas Are Rising Fast

My predictions on the rising seas of AI generated media, however, are still looking pretty good. Of course, I’m not alone in that. A 2022 Europol report noted, “Experts estimate that as much as 90% of online content may be synthetically generated by 2026.”

What’s going to make that number tricky to confirm is that most media will be neither fish nor fowl. It’ll be produced by a combination of humans and AIs. In fact, many of the graphics in my blog posts, including this one, are already born of generative AI (though I try to use it ethically).

Are These the Seas of the Singularity?

The real question to ask now is, “Are we already in the singularity?”

If we use the metaphor of a black hole (the most famous of all singularities), maybe we’ve already passed the proverbial event horizon. We’ve moved into Hertzian time and overclocking because we’re being sucked in. From here, maybe things go faster and faster until every day seems packed with what used to be a decade’s worth of advances.

These rising seas, the virtual tsunamis, might just be symptoms of the immense gravitational forces exerted by the singularity.

Or maybe not…. Maybe such half-baked mixed metaphors are just another sign of West Coast hyperbole, bound to go as disappointingly bust as Silicon Valley Bank.

Time’ll tell, I guess.

Though it’ll be interesting to find out if it’s normal time or the Hertzian variety.

Checking Out the Deets of Financial Wellness Plans

(Part 2 of 3)

What Is Financial Wellness?

Okay, that first post took us on a bit of a journey, but we’re now ready to tackle financial wellness.

In the research in which I was involved, we specified:

Employees’ financial wellness means that they are able to satisfy ongoing financial needs, feel secure about their financial future, and are free to make choices that allow them to enjoy their lives. It also implies that employees have the competencies and knowledge needed to navigate the complexities of financial decisions.

So, you’re financially well (or, at least, weller) if you’re not stressed out about money all the time and are able to afford the basics (for example, going on vacation once in a while, living free of debilitating debt, saving for retirement, etc.).

Of course, your average Joe might say, “Hey, if you care about my financial wellness, then just pay me more!”

To which employers might say, “Let’s table that discussion for a minute and talk about financial planning and earned wage access and …”

It’s very tempting to call bullshit on this HR-speak and just keep chanting “Show us the money!” But, well, capitalism. Employers have got to make a buck to stay in business, and labor tends to be expensive. Therefore, employers want to pay you enough to get your services but not so much that you’re cutting too far into their profit margins.

It’s a dance, a negotiation, a horse trade in which you’re often the horse.

Feeling Strapped

If you’re not bringing that much to the trade, or at least fail to convince your employer that you are, then your pay is not going to be all that high. In that case, you may be living paycheck to paycheck. You may not be able to pay off your loans. Your monthly credit card debt may be so high that you’re barely able to pay the minimum balance.

Which only gets you deeper into the financial quagmire over time.
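A quick illustration of that quagmire, with hypothetical numbers: if you pay only a typical minimum on a high-interest card, the balance barely moves.

    # Minimum-payment math (hypothetical numbers).
    balance = 5000.00        # starting credit card balance
    apr = 0.24               # 24% annual interest rate
    minimum_rate = 0.02      # minimum payment: 2% of the balance

    for month in range(12):
        balance *= 1 + apr / 12            # interest accrues monthly
        balance -= balance * minimum_rate  # pay only the minimum
    print(f"Balance after a year: ${balance:,.2f}")  # barely below $5,000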

You could really use a hand.

So, you go to your boss and ask if maybe they can give you a raise. Your boss shakes their head sadly, saying times are tough, business is flat, and a wage freeze is on.

After that talk, you’re feeling depressed, but then your boss shoots you an email that tells you about the company’s financial wellness program. Shit. You sigh. More confusing HR stuff. You try to avoid HR whenever you get the chance. You feel those folks tend to only show up when bad stuff is cooking.

Giving It a Try

But, hey, you’ve got to try something, so you call a number and they give you the rundown on what’s included in the benefits. Here are some of the possibilities:

  • Financial education
  • Retail discounts programs
  • Easy access to savings planning
  • One-on-one financial coaching/counseling
  • Health savings accounts
  • Retirement plans
  • Flexible spending accounts
  • Life and disability insurance
  • Help with medical bills
  • Lifestyle benefits
  • Low cost loans
  • Help with physical therapy/rehabilitation
  • Earned wage access
  • Payday advance
  • Personalized financial wellness plans
  • Student loan repayment assistance
  • Debt-relief benefits

The Pros and Cons of Financial Wellness Programs

The Pros

The pros are straightforward if the programs actually do what their providers say they will, which is:

  • Bump up employee productivity since employees are less worried about their finances
  • Retain and engage employees who are grateful for the help
  • Reduce financial stress, which can impact some of the other wellness issues we discussed in the prior post
  • Give employees better financial habits; that is, they become better at budgeting, saving, investing and handling financial emergencies.

The Cons

For employers, some perceived cons might be:

  • the cost of offering and implementing a financial wellness program
  • limited employee participation in the program, in which case the cost of the program may not be worth the money and effort put into it

For employees, some perceived cons might be:

  • privacy concerns related to sharing their personal financial information with their employer or a third-party provider
  • a lack of savings because earned wage access makes it too easy for them to tap into their daily pay
  • frustration if they receive advice or services that don’t really help them in the long run

The Limitations

I think financial wellness programs are generally a good thing, though their efficacy no doubt varies from offering to offering and company to company.

The bigger question: Are these programs bandaids being applied to proverbial bullet wounds?

There’s no easy answer to that question. It just depends.

Let’s say you take advantage of a program that gives you 1) same-day access to your pay and 2) financial education. The same-day pay has proven useful to you a couple of times, like when the car broke down and the dog got sick.

Meanwhile, the financial planning education has taught you about the dangers of credit card debt and high car payments, but the advice has been hard for you and your family to follow given your means and expenses.

Ultimately, you appreciate the advice and it helps at the margins, but you’re still going to need a raise or to find a different job that pays more. You sigh. Easier said than done. The rat race is getting to you.

In my next post, part 3 of 3, I discuss some ideas for programs that might do more to increase the financial wellness of Americans.

On Helping Financially Unwell Workers in America

(Part 1 of 3)

Wellness Becomes a Thing

The HR Research Institute recently finished a major study on employee financial wellness.

It’s an interesting topic during these weird financial times, so I thought I’d provide a short history of employee wellness programs in general, put financial wellness in its current context, then talk about other ways we can improve the financial well-being of more people.

If you’re not familiar with it, financial wellness is one of the newer forms of worker benefit offerings. The basic idea is that if an employee is financially unwell, then they are a lot less likely to be productive and engaged at work.

This is both an intriguing and a “no-shit-Sherlock” kind of assertion. It’s interesting (to people like me) because it’s a case in which the concept of wellness is being expanded into yet another area of employees’ lives.

So, before we get into financial wellness (aka, well-being), let me lay out a brief history of employee wellness.

A Brief History of the Long Struggle to Keep Workers Fit Enough to Survive the Treadmill

The Stupidly High Costs of U.S. Healthcare

When I first started researching HR issues, wellness initiatives were fairly new (yeah, I’m getting up there), and they typically focused on physical wellness.

Back then, the idea was to distinguish between healthcare benefits, which were the status quo, and wellness benefits, which were a way to reduce healthcare costs. After all, the U.S. is—and even then was—virtually alone among so-called industrialized nations in terms of not providing universal healthcare to all its citizens. My father was a doctor and not exactly a liberal, but when I was growing up, even he thought our system was stupid and needlessly cruel.

As a result of the sheer weirdness of our U.S. health insurance system, which largely rests on the shoulders of employers, organizations have a vested interest in keeping healthcare costs low so that they are not paying as much to provide health insurance to workers.

The fact is, however, that the U.S. has the most expensive healthcare system in the world on a per capita basis and gets pretty crappy results for all the money it spends. That’s not the fault of employers, though. It’s the fault of the frankly stupid way our system is set up.

In my opinion, the vast majority of voters should be mightily pissed off about this, voting out every self-serving, money-grubbing politician who is not in favor of fixing our expensive, busted system, but the majority of voters still aren’t there…yet. So it goes.

Making the Best of a Bad Hand

A lot of Americans like our employer-sponsored system of health care, but most employers surveyed by the Kaiser Family Foundation think our current system is so broken, expensive and unsustainable that in the near future the government is going to need to step in to fix things, or at least help the current creaky system limp along.

For now, though, employers are about as stuck with the current system as the rest of us.

Although employers get tax breaks and can shift a lot of the costs to their employees, they still spend an absurd amount of money to provide that health insurance (assuming they do, of course).

So, employers have long since decided to encourage their employees to raise their level of wellness so that they, the employers, can reduce the costs of providing health insurance.

Back in the Day

When wellness benefits first emerged (the term ‘well-being’ came later), they were mostly focused on getting employees into better shape physically. This basically meant getting them into better aerobic shape (heart disease being a killer, both physically and financially) and getting them to stop poisoning themselves with legal substances (such as tobacco and alcohol) as well as the illegal ones.

So, employers started paying for, or at least contributing to, things like gym memberships and smoking cessation programs. And, they started offering financial incentives, such as lower health insurance costs, to employees if they cut down on the cigs and the booze.

It all seemed pretty obvious at the time.

Did It Work?

Maybe, sometimes. Studies differ, and success probably depends on the execution of those programs. Today Americans tend to smoke less than they did 30 years ago, but that probably has less to do with wellness benefits and more to do with government packaging warnings and restrictions on advertisements.

It hasn’t been all good news. During the same interval, Americans’ consumption of alcohol has risen and fallen, whereas obesity has generally ballooned. Of course, the global obesity epidemic (pandemic?) is driven by a lot of different factors that wellness programs have not been able to prevent.

Then Came the Mental Side of Things

Once you start pulling at the wellness thread, however, you start to see how entangled it is with other kinds of wellness, especially mental. After all, people smoke and drink not just because it feels good (getting high has deep biological roots not only in humans but in other animals as well) but because they’re stressed out and suffering emotional/mental tribulations.

To get people to cut down on their smoking, drinking, and/or eating, sometimes you need to address their mental issues. And so, employers started putting greater emphasis on employee assistance plans (EAPs), addiction treatment, and eventually things like mindfulness.

And Don’t Forget the Work Environment

We shouldn’t forget about environmental wellness, either. When I first started out, ergonomics were all the rage. Personal computers had entered the workplace and suddenly people were sitting in front of screens more while clicking their keyboards and mice. This led to backaches, carpal tunnel syndrome, eye-strain issues and more. There was this big push for ergonomically designed office furniture.

There was also the crappy air in office buildings where all the windows were sealed and god-knows-what chemicals were oozing out of printers, ceiling tiles, acrylic floors, etc. That’s when “sick building syndrome” became a thing.

Of course, the concern with work environments started much earlier, when plant workers were suffering from sundry issues related to repetitive stress injuries, machine accidents, chemical exposures and more. A lot of employers didn’t want to think about these things, but the combination of broken employees and union representation tended to concentrate their focus.

Social Well-being Joins the Fitness Team

If you keep pulling at that wellness thread, it’s not long until you realize there are social elements at play as well. Some of this stems from the research into employee engagement, especially by Gallup. Their studies suggested that the social environment at work also affects employees.

Of course, in retrospect, this is another one of those “no duh” moments in HR. Obviously, if you have friends at work and good relations with your boss, you’re more likely to be engaged and productive.

To sum up, your physical wellness is influenced by your work environment and your mental health, which is in turn influenced by your social well-being at work and outside of it.

Okay, so that’s my very brief history of wellness programs. In my next post, part 2 of 3, we’ll focus on another step in wellness evolution: today’s financial wellness initiatives.

Are Humans Still the Smartest Beings on the Planet?

These new AIs are smart. Or at least seem to be. After all, they are excelling at a wide range of tests typically used to gauge human knowledge and intelligence. Which leads me to ask, “Are humans still the smartest beings on the planet?”

Maybe Not

There are some reasonable and growing arguments that we’re no longer the most intelligent entities on the planet. Let’s go to the exams.

Even before OpenAI launched GPT-4, which is considered considerably more capable than GPT-3, a study looked at ChatGPT’s ability to match humans in three key areas: general knowledge, SAT exam scores, and IQ.

The outcome? ChatGPT wound up in a higher percentile than humans in all three areas.

AI expert and author Dr. Alan D. Thompson suggests that GPT-3 displays an IQ above 120. If ChatGPT were a human being, it would fall into the “gifted” category, according to Thompson.

And then there’s GPT-4. OpenAI has published extensive data about how it performs on a wide range of exams. For example, the firm claims that the AI passes a simulated bar exam (that is, the one that tests the knowledge and skills attorneys should have before becoming licensed to practice law) “with a score around the top 10% of test takers,” a marked improvement over GPT-3.5, which scored around the bottom 10%.

Maybe We Never Were

Of course, one might argue that we never really were the smartest beings on the planet. We don’t have a way to truly gauge the intelligence, for example, of the huge-brained whales, some of which live for up to 200 years.

I explored this in a previous post, so I won’t delve too deeply into the details. But the truth is that we can only guess at the intelligence of cetaceans such as the humpback, beluga and killer whales as well as various dolphins and porpoises.

Maybe the Question Makes No Sense

One of the interesting things about the large language model (LLM) AIs is that we’ve trained them on human language. Lots and lots of it. We are language-using animals par excellence, and now we’ve harnessed machine learning to create tools that at least imitate what we do through the use of neural nets and statistics.
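To make the “statistics” part a bit more concrete, here’s a minimal sketch in Python, using a toy two-word context and invented probabilities (a real model learns distributions over tens of thousands of tokens with billions of neural-net parameters). It illustrates the core move an LLM makes: given the words so far, sample the next word from a probability distribution learned from human text.

    import random

    # Toy "language model": for each two-word context, a probability
    # distribution over possible next words. These numbers are invented
    # for illustration; a real LLM computes them with a neural net
    # trained on mountains of human text.
    next_word_probs = {
        ("the", "cat"): {"sat": 0.5, "ran": 0.3, "meowed": 0.2},
        ("cat", "sat"): {"on": 0.7, "down": 0.2, "quietly": 0.1},
    }

    def generate(context, steps):
        words = list(context)
        for _ in range(steps):
            dist = next_word_probs.get(tuple(words[-2:]))
            if dist is None:
                break  # a context the toy model never "learned"
            choices, weights = zip(*dist.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate(("the", "cat"), 2))  # e.g., "the cat sat on"

Everything interesting about a real LLM lives in how those probabilities get computed; the generation loop itself really is about this simple.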

We don’t typically say that we are dumber than a calculator, even though calculators can handle mathematics much better than we typically can. Nor do we say we are “weaker” than a bulldozer. Perhaps we just shouldn’t apply the word intelligence to these reticular models of AI. What they do and what we do may not be truly comparable.

Maybe So, For Now

I’m certainly no expert, but I’ve had considerable experience with ChatGPT and Bing chat. I was an early adopter in both cases and have seen how humblingly smart and yet puzzlingly dense they can be.

For example, I’ve had to convince ChatGPT that the year 1958 came well after World War II, and I’ve seen Bing be stubbornly wrong about prime numbers and basic multiplication. In other cases, I’ve asked Bing for information on a topic from the last week, and it’s given me articles several years old.
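The prime-number stumbles are especially striking because that’s exactly the sort of task plain, deterministic code never gets wrong. A few lines of Python (a simple trial-division sketch, nothing fancy) will never misjudge a prime:

    # Trial-division primality check: the sort of "fundamental skill"
    # ordinary code gets right every time, even when a chatbot
    # confidently gets it wrong.
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    print(is_prime(91))  # False: 91 = 7 * 13, a classic "looks prime" trap
    print(is_prime(97))  # True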

As for the AI art generators, they are also amazing yet often can’t seem to count limbs or digits or even draw human hands in a non-creepy way.

In other words, there are times when these systems simply lack what we might consider common sense or fundamental skills. We can’t yet trust them to get the details right in every instance.

At the same time, of course, the LLMs are able to write rather good prose on virtually any topic of your choice in seconds. Imagine knowing just about everything on the Internet and being able to deftly and almost instantly weave that information together in an essay, story or even poem. We don’t even have a word for that capability. Savant would not cover it.

Once these systems truly develop “common sense,” however we define that, there will be precious few tasks on which we can best them. Perhaps they are still a long way from that goal, but perhaps not.

Maybe We’re Just Being Extended

In the past, I’ve written about the “extended human” and Kevin Kelly’s idea of the technium, which he discusses in his book What Technology Wants. Many people would not call any one of these LLM AIs a “being” at all. Rather, they’d say they are still just tools made up of silicon, fiber-optic cables, and electronic blips of 0s and 1s, with no consciousness or even sentience at all. They are little more than mechanical parrots.

In this view, the LLMs are glorified search engines that put together word patterns with no more thought than a series of ocean waves creating elegant, undulating patterns of sand on the beach. These machines depend on our words, ideas, and works of art in order to “think” at all, so they are mere extensions of our own intellects: bulldozers of human symbols, so to speak.

Maybe It Doesn’t Matter

Maybe they will out-intellect us by wider and wider margins, but perhaps it doesn’t really matter if we are no longer the smartest entities on the planet.

For decades, some scholars have argued that we can’t compare our intellects to those of other beings: anthills and beehives, corvids and cephalopods, elephants and grizzly bears. Each animal’s intellect is uniquely good at the things that keep it alive.

Squirrels are geniuses at remembering where they’ve hidden their acorns and at negotiating the complexities of forest canopies. We can’t do what they do, but does that make us their inferiors?

No, comparing the two is nonsense, this argument goes.

The AIs will never be better humans than humans because we are uniquely ourselves. Perhaps the era of AIs will give us both the humility and the wisdom to finally understand this.

Which is all well and good until, of course, the machines learn to dominate our world just as humanity has done in the recent past. If this happens, perhaps we will need to learn to live in their shadows, just as squirrels and crows and coyotes have lived in ours.

Featured image from CrisNYCa, April 17, 2018. Le Penseur (The Thinker) in the garden of the Musée Rodin, Paris.

God as Dread Pirate Roberts

The classic romantic comedy The Princess Bride has one of my favorite existential lines.

It occurs when our hero Westley is recounting his life as an abductee of the Dread Pirate Roberts, explaining how Roberts made him a valet.

“You can try it for tonight. I’ll most
likely kill you in the morning.”
Three years he said that. “Good
night, Westley. Good work. Sleep
well. I’ll most likely kill you
in the morning.”

The Dread Pirate Roberts is an intriguing stand-in for the universe or, if you prefer, God.

The universe is a harsh, dangerous and crazy mysterious place. Bad shit happens to everyone at times. Some deserve it. Many do not. And life is always uncertain until it ends in the certainty of death.

You might well die today, or tomorrow, or the day after that.

But if you’re canny and lucky, the universe might let you live for another day.

So, watch for snakes in tall grass. Look both ways before you cross the road. Eat right and exercise. Generally speaking, avoid stupid mistakes that could turn deadly.

Of course, even if you do all that, the universe will get you in the end. It’s designed that way.

Maybe there is, after all, an afterlife. Or maybe there will come a day when we live in immortal bodies powered by bioceramic minds (or whatever). Perhaps at that point the Dread Pirate Roberts will not be quite so dreaded … though I doubt it.

Until then, however, offer up your good works as God’s valet. Do what you can. And sleep well, friends.

Featured image from RootOfAllLight, Creative Commons Attribution-Share Alike 4.0