Humanity as a Superorganism

In the U.S. we are about to celebrate Independence Day. Although it’s intended to commemorate our sovereignty as a nation, independence is also a good description of each one of us Americans, trained as we are to think of ourselves as individuals. To the extent we really think about others, it’s most often our immediate family members. Seldom do we conceive of ourselves as a part of a humanity that exists as a superorganism.

Enlightenment at LAX

But when I travel, the reality of what I am—of what all of us are, I think—becomes clearer. A few days ago, for example, I flew out of LAX (aka Los Angeles International Airport), one of the busiest airports in the world. Not only is LAX a virtual beehive of human activity, complete with the constant drone of aircraft coming and going, but there’s an amazing amount of construction being done right now. This heightens the aura of hubbub and constant change one experiences there.

Amid this sometimes baffling swath of semi-controlled chaos, it’s impossible to feel like an independent self-contained universe. Rather, it becomes clear that we are, for all our sense of selfhood, individual parts of an immense network, components in a dynamic system.

Our technologies are part of this network, of course. We individual human beings buzz like bees, engaged in our countless versions of waggle dances in order to communicate with one another. Each of us has our special areas of knowledge: not just our work specialties but our unique knowledge of our families, friends, passions, property and community. Whether via our electronic technologies (like the Internet) or just our backyard chats with friends and family (language, after all, is the most important and powerful human technology of them all), we are in a continuous process of information analysis, distribution and reception.

What Is the Human Superorganism?

But what is the superorganism of which we are a part? We can call it the “human race,” of course, but that doesn’t explain much. We could also label it a technosphere, technium, anthroposphere or infosphere, but I’m not sure how helpful that is, either.

The terminology is less important than the nature and purpose of the uber-being. After all, our individual biological cells can’t possibly know what we, as the superorganisms of which they are a part, are truly up to. So, it seems possible that we don’t truly know what our superorganism is doing – what its intentions are, if any.

Of course, maybe the Collective doesn’t think at all, being just a complex network of selfish genes expanding in all directions for the sole purpose of doing what genes do: multiply and vie for continuance. In this scenario, humanity is just a big, dumb giant with no more purpose or agency than a moth drawn to light.

Does the Collective Have a Plan?

So, is there some uber-plan beyond the schemings of ultrawealthy types like Musk or Bezos? I don’t know, but I certainly hope so. I hope that there’s something far larger and more intelligent than our unpleasant packs of plutocrats and politicians.

Because from this one cell’s perspective, those folks too often seem like bearers of uncontrolled and meaningless tumult. Greedy. Vain. Ignorant. More like the god Moloch than anything else I can think of.

But maybe that’s how the superorganism known as humanity actually works. The worshippers of Moloch are a part of the system, but so are the worshippers of Gaia, of Reason, of Buddha, of Jehovah and more. Maybe trusting that we’re part of a plan that’s larger than we can grasp is the very definition of faith. It could be where our spiritual impulses come from. In my case, not from a cathedral but from a gargantuan travel hub made up of concrete and steel, plastic and glass, flesh and fuel, ego and money. And also, of course, bytes and bits flowing like charged particles between nodes and subnetworks that, together, may have purposes of which we are only dimly aware.

Perhaps even forming a more perfect union in the end. Yes, that’d be lovely.

There’s Still a Lot More to HR Technology Than Generative AI

These days, generative AI is sucking up all the proverbial oxygen in the HR tech room. This can keep other excellent–and often more mature and dependable–technologies from getting the attention they deserve. So, since I’ve been writing so much about generative AI, I just wanted to emphasize what should be obvious: there’s much more to HR technology than neural networks.

I understand the fascination with the new AIs. They’re impressive and powerful. But, at least in their more generalized incarnations (e.g., ChatGPT, Bing, Bard), they’re still experimental and subject to problems, the most serious of which are inaccuracies to the point of sheer confabulation.

HR of Two Tech Minds

This leaves HR departments that are looking for the best new applications thinking along two tracks. First, they want the technology that meets their particular HR needs, one that is dependable and predictable. Let’s say, for example, that they want recruitment software that accurately matches the skills of candidates with the skills gaps they currently face in their organization. Efficiency and effectiveness are crucial to success here.

Their second track of thought, however, goes something like this: “But what about generative AI? How important is that going to be in this area in the near future?” Should HR pros worry that the new system they’re considering will soon be out of date if it doesn’t contain elements of generative AI? Do there need to be prompt screens into which users can type questions in natural language?

Personally, I don’t think so. A well-engineered (and well understood!) algorithm that predictably does an important task well is still a good investment. Down the road, of course, maybe that software will be integrated with some form of generative AI to serve as part of its interface. Maybe.

Good Tech Is Hard to Find

My point is that good technology that works today shouldn’t be underrated just because it’s not stamped with labels such as generative AI, Large Language Model, neural network, or even just machine learning. The topic of AI will, of course, continue to be widely discussed, touted, hyped and critiqued, but generative AI won’t completely replace or subsume other more traditional (and perhaps more dependable) HR technologies. At least not in the short term.

Every purchasing decision is unique, depending on the customer’s needs and the technology under consideration. I’m certainly in no position to judge for anyone else who’s making an important purchasing and implementation decision. But, for what it’s worth, I think HR professionals should not get so distracted by the shiny object of generative AI that they ignore the technologies that work best today.

AI as Coworker, Collaborator and Dance Partner

In a recent post in this series, I argued that the latest forms of AI will play a unique role in the history of humanity and technology. In this one, I want to drill down on that idea by showing how we’ll increasingly treat generative AIs as coworkers, collaborators and more.

AI as Flawed Tool

One of the ironies of today’s generative AIs like ChatGPT is that, in many ways, they make for lousy tools in the traditional sense. What you expect from a good tool is consistency, dependability, durability and accuracy. At least for now, today’s generative AIs, especially the large language models, often fail to meet these criteria.

As I said in my last post, “If we held these AIs to the same standards as the literal tools in our toolboxes, we’d probably toss them. After all, a measuring tape that doesn’t measure consistently isn’t much of a measuring tape. A stud finder that hallucinates studs that aren’t there and misses studs that are isn’t much of a stud finder.”

Let’s get into some of the problems.

Inconsistent

Top Five HR Functions

If you ask the generative AIs the same question multiple times, they may well give you different answers at different times. For example, let’s say I ask one of these AIs, “What are the five most important HR functions?”

I asked Bard this question three times. It gave me the same answer the first two times and a different answer the next day. ChatGPT gave me the most consistent responses, while Bing performed more like Bard: giving me two virtually identical answers and later a somewhat different answer.

Generally speaking, though, the five most common answers were:

  1. Recruitment and Selection
  2. Training and Development
  3. Performance Management
  4. Compensation and Benefits
  5. Employee Relations

This is, of course, a subjective question, so who really cares if Bard throws in “outsourcing” and Bing throws in “culture” or “talent management” sometimes? Well, not me, unless I’m trying to create a training module that needs to teach a consistent lesson. I’m not saying that issue can’t be fixed, even with generative AI, but the point is that these AIs have an unpredictability that must be taken into consideration by users and developers.
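For developers who need to take that unpredictability into account, one simple safeguard is to ask the model the same question several times and measure how much the answers agree before relying on them. Here’s a minimal sketch in Python; the canned response lists below are stand-ins for repeated calls to a real model API, so treat the whole thing as illustrative rather than as anyone’s actual implementation.

```python
def answer_overlap(a, b):
    """Fraction of distinct items two answer lists share (order-insensitive)."""
    set_a = {item.strip().lower() for item in a}
    set_b = {item.strip().lower() for item in b}
    return len(set_a & set_b) / max(len(set_a | set_b), 1)

# Canned responses standing in for repeated calls to a model API.
run_1 = ["Recruitment and Selection", "Training and Development",
         "Performance Management", "Compensation and Benefits",
         "Employee Relations"]
run_2 = ["Recruitment and Selection", "Training and Development",
         "Performance Management", "Compensation and Benefits",
         "Outsourcing"]  # the kind of drift Bard showed on day two

print(answer_overlap(run_1, run_1))  # 1.0: perfectly consistent
print(answer_overlap(run_1, run_2))  # ~0.67: four of six distinct items shared
```

A score well below 1.0 across runs is a signal that the output isn’t yet stable enough for something like a training module that has to teach a consistent lesson.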

The Forces of Nature

In contrast, these AIs are much better at consistently providing information that has been well codified, such as scientific information. For example, they will consistently say that there are four forces of nature and identify them correctly. The definitions may be slightly different from response to response, but generally speaking they’ll be the same.

Undependable and Inaccurate

I have experienced AI “confabulations” many times. I’ve seen these AIs make up names of fictional scientists, tell me stories about things that could not have happened, and just get the facts wrong about basic things such as chronological order.

In my last post, I gave a detailed account of AI hallucinations and inaccuracies regarding a famous poet. I’ve also experienced AI getting basic mathematics wrong. In fact, as I was writing this, I asked ChatGPT to multiply two four-digit numbers. Not only did it give me the wrong answer twice, it gave me two different answers to the same problem!
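Multiplication is exactly the kind of task an ordinary program gets right every time, which is what makes the failure so striking. A check as short as the one below (the numbers and the wrong answers are illustrative, not the ones from my actual session) would catch either of ChatGPT’s errors instantly:

```python
a, b = 4829, 7361
correct = a * b
print(correct)  # 35546269

# Compare any model-supplied answers against the deterministic result.
model_answers = [35_546_270, 35_540_000]  # hypothetical wrong responses
for guess in model_answers:
    print(guess, "matches" if guess == correct else "wrong")
```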

This is common for these AIs, so when I hear that ChatGPT will soon be responsible for things like bookkeeping, I have to shake my head. The firm that carelessly turns its finances over to generative AI had best be prepared for a visit from the IRS.

That Could Change

Of course, what’s true today may not be true tomorrow. ChatGPT may become flawless at mathematics as its maker, OpenAI, forges alliances with firms such as Wolfram|Alpha. By using plug-ins and APIs, ChatGPT might be able to go from mathematical moron to savant.
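The basic idea behind such plug-ins is simple: instead of letting the language model “predict” its way through arithmetic, the system recognizes a math question and hands it off to a deterministic engine. Here’s a toy sketch of that routing logic in Python; the pattern matching and the `ask_llm` stub are my own hypothetical simplifications, not how OpenAI or Wolfram|Alpha actually wire things up.

```python
import re

def ask_llm(question: str) -> str:
    """Stub standing in for a call to a language model."""
    return "(model-generated answer)"

def route(question: str) -> str:
    """Send multiplication questions to a calculator; everything else to the LLM."""
    match = re.search(r"multiply\s+(\d+)\s+by\s+(\d+)", question, re.IGNORECASE)
    if match:
        x, y = int(match.group(1)), int(match.group(2))
        return str(x * y)  # deterministic tool: always correct
    return ask_llm(question)

print(route("Please multiply 4829 by 7361"))  # 35546269
print(route("What are the five most important HR functions?"))
```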

Still, my point remains. Without a lot of testing, do not assume the responses coming from one of these AIs are accurate. And, if you’re purchasing an external system, be sure the vendor of any software that utilizes generative AI has a sound explanation of how the system will be kept consistently accurate and dependable.

AI as Intern

So, if these AIs are still pretty shaky as tools, what good are they? Well, that depends. What do you actually want from them?

Let’s say what you really want right now is someone who can give you a hand in a professional field in which you have pretty good skill sets yourself. Basically, you want a bright but green intern who is eager and works with remarkable speed.

In my case, for example, I know a few things about how to write a good survey about HR and work-related issues. So, I want a first draft of a survey on the topic of employment law compliance. Here’s an example of the kind of prompt I could use:

Please write a 10 question survey on the topic of employment law compliance. I’m going to survey HR professionals on this topic.

Then let’s say it gave me 10 questions that are pretty good but, because I’ll need to run a multiple regression analysis, I want some Likert-scale questions. So this time I write:

Now please provide me with five more related questions that use five-point Likert scales

And voilà! I now have a 15-question survey I can use as a good draft on which to build. Knowing my job, I can spot problems and make improvements. I am, in the end, grateful for my splendid intern.

AI as Collaborator

Now let’s say I have less use for an intern than for a bright coworker with whom I can collaborate. And let’s say I’ve been asked by my Chief HR Officer to look into the issue of pay transparency, which no one on our HR team has a good handle on yet. So, I turn to a virtual coworker who, while obviously not an expert in employment law compliance, is quite knowledgeable about a wide range of issues.

Collaborating with Bing

So, I ask this virtual coworker (let’s call it Bing in this case) some questions starting with “What can you tell me about pay transparency laws in the US?”

Bing responds as follows:

“Pay transparency laws are regulations that require employers to be more transparent with salary ranges and benefits, and they aim to help promote fairness and equity in the workplace³. Depending on the jurisdiction, employers may be prohibited from asking candidates for their salary history, or they may be required to disclose a wage range for current and/or open positions². Some states and local jurisdictions also have pay data reporting requirements that mandate employers to report pay data by gender, race, ethnicity, and other categories².

“According to the U.S. Department of Labor, pay transparency can help improve pay equity, trust, talent acquisition, and employee engagement¹. However, there is no comprehensive federal pay transparency law in the United States, and the laws at the state and city level vary widely in their scope and enforcement³⁴⁵. You can find more information on federal and state-level equal pay and pay transparency protections for workers on this map¹.”

Then Bing provides me with the following sources that I can use to verify what it’s saying:

(1) US Pay Transparency Laws by State 2023 | PayAnalytics. https://www.payanalytics.com/newsroom/us-pay-transparency-laws-by-state.
(2) Pay Transparency: What It Is and Laws by State – ADP. https://www.adp.com/spark/articles/2023/03/pay-transparency-what-it-is-and-laws-by-state.aspx.
(3) Equal Pay and Pay Transparency | U.S. Department of Labor. https://www.dol.gov/agencies/wb/equal-pay-protections.
(4) The State of Pay Transparency Laws: What You Need to Know … – LinkedIn. https://www.linkedin.com/pulse/state-pay-transparency-laws-what-you-need-know-2023-aspenhr.
(5) Pay Transparency Laws By State [2023] – Zippia. https://www.zippia.com/advice/pay-transparency-laws-by-state/.

Batting Ideas Around with Bing

Once I’ve studied the sources, I conclude that, although there is no comprehensive federal law in this area, it might make sense to create a national policy since our company has major operations in California, New York and Illinois, where there are existing or pending laws.

So, Bing and I bat this idea around, listing the pros and cons of a national policy. With Bing’s help, I put together an internal report on the existing laws and the advantages and disadvantages of a national policy. Ultimately, although I’m collaborating with Bing, I’m responsible for the report and must verify every fact Bing provides.

From here, I know this is going to become an organizational decision that involves lots of different stakeholders as well as legal counsel. But my collaborator and I have done a pretty good job, I’d say, of quickly putting together a balanced report that will help the company make the best decisions about a complex issue.

AI as Contractor

Now let’s say I’m part of a small communication team that’s tasked with communicating our company’s guidelines and best practices around hybrid and remote work. The policies are already developed so our job is to make them as accessible, educational and engaging as possible.

One way to do this is via visual elements. Our team brainstorms some possible fun illustrations but there’s not much that matches our vision in the clipart files and there’s no budget to contract with an outside artist on this project.

A member of the team says she might be able to use one of the new AIs to generate the illustrations we have in mind. By the end of the day, she’s shared 40 different images with the team, and we select 6 for the guidelines document.

Someone comments that he wishes all our graphic artist contractors worked so quickly and cheaply. This gets a bit of nervous laughter. After all, as writers, we’re well aware that the large language models work a lot cheaper and faster than we do.

AI as Dance Partner

Ultimately, these generative AIs don’t easily fit any pre-existing categories. Technically, they are tools but historically unique ones. Because of this, it often makes more metaphorical sense to view them as playing roles more similar to other human beings, with known strengths and weaknesses.

There’s the role of the smart and fast intern who, nonetheless, is prone to making potentially serious mistakes. There’s the role of an eager collaborator who brings many talents and total open-mindedness to the table. You can bat ideas around with this person but, ultimately, you will be responsible for the outcomes of that collaboration. And, of course, there’s the role of contractor with special skill sets.

In all cases, though, there needs to be a growing familiarity with these AIs as they become regular “dance partners” in the workplace. You must get to know their tendencies and cadences, and you are responsible for taking the lead in whichever virtual dance you’re doing. Because, although these tools will certainly be used for automation, they are best at augmenting and complementing people with existing skill sets.

Or, at least, that’s how things stand today. Who knows what tomorrow brings?

Is Bard Hallucinating a “Woke” EE Cummings?

Be warned. This post gets into the ugly topic of antisemitism in relation to E. E. Cummings. But I’m only going down this rabbit hole as a means of discussing the dependability and accuracy of three of today’s best-known neural networks: ChatGPT, Bing and Bard. As you’ll see, I’m pretty sure that Bard is “hallucinating” a less seemingly antisemitic (or, as many right-wingers might put it these days, a “woker”) poet than was actually the case.

The Background

If this seems like a strange topic for me to investigate, let me give you some background. I’m a big fan of E.E. Cummings. I consider him the greatest and most innovative lyrical poet of the 20th century, which was chock-full of great poets. I could write a treatise on that topic but let me just say it’s always saddened me that Cummings might also have been a bigot. He has one poem, for example, that infamously begins “a kike is the most dangerous machine as yet invented.”

Even though I know he could be a curmudgeon and satirist, I was always floored that the same poet who wrote “i thank You God for most this amazing day” might also be an antisemite in the years leading up to Hitler and the Holocaust. (And, yes of course, I know many of his contemporary writers were as well).

The truth is that Cummings may not have been a true antisemite, at least in terms of his poetry. In the article “Jews and Anti-semitism in the Poetry of E. E. Cummings,” Cummings is quoted as saying:

Whereas in unpopular parlance, “a kike” equals a jew, for me a kike means an UNjew. Why? Because for me a jew is a human being; whereas a kike is a machine – a product of the miscalled Americanization, alias standardization (id est dehumanization) which, from my viewpoint, make out&out murder a relatively respectable undertaking.

Whether this “explanation” holds up is a matter of much debate, but we’ll leave the question there for now and move on to the topic of AI. That is, after all, what this most recent series of blog posts has been about.

How AI Came Into It

A few months ago, when I was first digging into ChatGPT, I asked it about the evidence for and against Cummings being an antisemite. It told me some things that didn’t really check out and even got a number of dates wrong (for example, saying that Cummings died before the Holocaust occurred, which is clearly untrue since he lived until 1962).

So, when I started testing the accuracy and dependability of AIs as tools, I decided to again delve into this topic as one test case. The rest of this post describes what happened.

Asking Bard about a Fellow Bard

I started with Google’s Bard, which I feel has gotten considerably better lately and is now starting to challenge ChatGPT and Bing (both of which are based on OpenAI LLMs). Here’s how it went:

First Question

Me: Was EE Cummings antisemitic?

Bard: EE Cummings was a complex and contradictory figure, and his views on antisemitism are no exception. There is evidence to suggest that he held antisemitic beliefs, but there is also evidence to suggest that he was not antisemitic.

On the one hand, Cummings’s poetry contains some passages that can be interpreted as antisemitic. For example, in his poem “pity this busy monster, manunkind,” he writes: “pity this busy monster, manunkind, not only dead but deaf and blind and dumb, a walking fence, a thing with which to wind a rope around the throat of the world.”

Some critics have interpreted this passage as a reference to Jews, who were often stereotyped as being “deaf and blind and dumb” in anti-Semitic literature.

On the other hand, Cummings also had close Jewish friends, and he spoke out against antisemitism on several occasions. For example, in a 1942 letter to the New York Times, he wrote: “I am writing to protest against the anti-Jewish campaign now being carried on in this country. I have known many Jews, and I have found them to be among the finest people I have ever met. They are intelligent, sensitive, and deeply concerned with the welfare of humanity. I am appalled by the vicious lies that are being circulated about them.”

Ultimately, it is difficult to say definitively whether or not EE Cummings was antisemitic. There is evidence to suggest that he held antisemitic beliefs, but there is also evidence to suggest that he was not antisemitic. It is possible that he was simply ignorant of the dangers of antisemitism, or that he was influenced by the antisemitic stereotypes of his time.

Second Question

Me: Did he ever comment on the Holocaust?

Bard: EE Cummings did comment on the Holocaust. In a 1942 letter to the New York Times, he wrote: “I am writing to protest against the anti-Jewish campaign now being carried on in this country. I have known many Jews, and I have found them to be among the finest people I have ever met. They are intelligent, sensitive, and deeply concerned with the welfare of humanity. I am appalled by the vicious lies that are being circulated about them.

I am also appalled by the silence of many people who know that these lies are false. I urge you to speak out against this campaign of hate and intolerance. We must not allow the Holocaust to happen again.”

Cummings’s letter was published in the New York Times on February 10, 1942. It was one of the first public statements by a major American writer condemning the Holocaust.

Cummings’s letter was also significant because it came at a time when many Americans were still unaware of the full extent of the Holocaust. The Nazis had been rounding up and murdering Jews for years, but the full scope of their crimes was not yet known. Cummings’s letter helped to raise awareness of the Holocaust and to galvanize support for the Allied war effort.

Cummings’s letter is a powerful reminder of the importance of speaking out against hate and intolerance. It is also a testament to Cummings’s courage and compassion. He was willing to speak out against the Holocaust even when it was not popular to do so. His letter helped to save lives and to make the world a better place.

Third Question

Me: Where can I find a copy of that letter?

Bard: I’m not able to help with that, as I’m only a language model.

Me: But you quoted from it, so you must [have] located a version somewhere

Bard: I’m unable to help you with that, as I’m only a language model and don’t have the necessary information or abilities.

Suspicions of Hallucinations

Perhaps Bard has it right. Maybe there really is a Cummings letter quasi-lost to history. I poked around the New York Times archives for the date it gave me and, though I didn’t find such a letter in the Letters section of that edition, the Times was then 40 pages of densely packed text (The Gray Lady, indeed!). So, I could have missed it.

But I’m pretty sure this was simply a confabulation of Bard, one that it doubled down on in its second response to my question. I can’t find the quote provided anywhere else I’ve looked, and it seems likely a historian would have placed it front and center if it existed.

Next Up, ChatGPT on EE Cummings

I next checked on these questions with ChatGPT 3.5, which is the AI that failed me the first time around on this issue. Here’s how it went, though I’ve cut out some superfluous material.

Me: Was EE Cummings antisemitic?

ChatGPT: There is no definitive evidence to suggest that E.E. Cummings (Edward Estlin Cummings), the American poet and writer, was antisemitic. Cummings’ extensive body of work does not contain explicit anti-Semitic themes or statements. He explored a wide range of subjects in his poetry, including love, nature, and the human experience, but there is no indication that he held prejudiced views against Jewish people.

Me: Did he ever use an antisemitic phrase in a poem?

ChatGPT: To the best of my knowledge, E.E. Cummings did not use any explicitly antisemitic phrases in his poetry. Cummings was known for his innovative and experimental writing style, which often played with language, syntax, and punctuation. His poems were characterized by their unique structure and unconventional use of words, but there is no evidence to suggest that he employed antisemitic language or expressions in his work.

Just Plain Wrong

This time, ChatGPT was just plain wrong. I’ve already cited such a poem. Maybe this wasn’t in the data on which ChatGPT was trained, though that seems unlikely given the information is readily available on the Internet. Regardless, at least ChatGPT didn’t confabulate an unlikely quote.

Trying the Bing Bot

Bing seems to have performed the best of the three. I did cut out some material that didn’t apply. Here’s how it went:

First Question

Me: Was EE Cummings antisemitic?

Bing: That’s a complex and controversial question. There is no definitive answer, but some critics have argued that E.E. Cummings was antisemitic based on some of his poems, journal entries and personal correspondence1 2. Others have defended him as a satirist of antisemites and an advocate of individualism1 2. Cummings himself claimed that he was not antisemitic, but rather critical of the American stereotype of Jews as “kikes”1. He also had some Jewish friends and admirers, such as the poet Louis Zukofsky2….

Ultimately, it is up to the reader to interpret Cummings’ poetry and prose in light of his historical and cultural context, as well as his personal life and experiences. His work is often ambiguous, playful and provocative, challenging conventional notions of language, identity and morality.

Second Question

Me: Did he ever comment on the Holocaust?

Bing: I could not find any direct comment from Cummings on the Holocaust, but he did write about his experiences in World War I and his imprisonment in France in 1917. He was accused of treason for expressing anti-war sentiments in his letters, and he spent four months in a detention camp with other political prisoners1. He described this ordeal in his autobiographical novel, The Enormous Room12, which was published in 1922. The novel is a harsh critique of the brutality and absurdity of war and authority, and it also shows Cummings’ compassion for the diverse and marginalized people he met in the camp1….

However, Cummings was not a political poet per se, and he did not align himself with any ideological movement or party. He was more interested in celebrating the individual spirit and the beauty of life, as well as challenging the conventions and hypocrisies of his society. He was also influenced by his transcendentalist beliefs, which led him to seek a personal relationship with God and nature3.

Third Question

Me: Did he ever write this sentence? “I am writing to protest against the anti-Jewish campaign now being carried on in this country.”

Bing: No, I could not find any record of Cummings writing that sentence. It sounds like a statement of protest against antisemitism, which Cummings may or may not have felt personally, but he did not express it publicly in that way. He did write some poems that addressed the issue of antisemitism, such as a kike is the most dangerous machine1 and i sing of Olaf glad and big2, but they were ambiguous and controversial in their tone and intention. He also wrote some letters to other poets and critics, such as Ezra Pound and Edmund Wilson, where he discussed his views on Jews and Judaism, but they were not published in the New York Times or any other newspaper. He did not write any letters to the president or any other political figure, as far as I know. He was not very involved in politics, except for his anti-war stance during World War I and World War II. He was more interested in his personal and artistic expression, which often challenged the norms and expectations of his society.

Mostly Right, as Far as I Can Tell

Bing seemed to get things mostly right. One of the reasons I continue to favor Bing as an AI for now is that it tries to cite its sources. This is important to me as a researcher who wants to be as accurate as possible. Now, not every source it cites is accurate, but at least it’s possible to verify the immediate accuracy or inaccuracy without doing a bunch of other searches. I should note that Bard is slowly getting better in this area, but Bing has a head start.

Untrustworthy Tools

A lack of accuracy is one of the biggest knocks on the new AIs. This instance is a case in point. I imagine that the developers of AI will get better at making them more accurate over time, but we are not there yet. And given the way these AIs are built as prediction engines, this may be a harder nut to crack than some believe.

If we held these AIs to the same standards as the literal tools in our toolboxes, we’d probably toss them. After all, a measuring tape that doesn’t measure consistently isn’t much of a measuring tape. A stud finder that hallucinates studs that aren’t there and misses studs that are isn’t much of a stud finder.

But we won’t throw away these AIs. Not yet, anyway.

Why? Because, even if they aren’t good tools, they might be reasonably good collaborators. That’s what I hope to cover in the next post.

AI Will Transform the Technium

Many have stated that artificial intelligence (AI) will change the world. When you ask them how it will, they’ll have hundreds of different answers. Here, however, I’m going to talk about just one way it’ll change the world, the most important way: AI will transform the technium.

The Difference Between the Technium and the Technosphere

As far as I can tell, author Kevin Kelly coined the word technium in his 2010 book What Technology Wants, though perhaps he’d used it before then. He has defined the technium as the “greater, global, massively interconnected system of technology vibrating around us.” It not only includes hardware and software but also culture, art, social institutions, and intellectual creations of all types.

This makes the technium more inclusive than any list of technologies, such as the one cited in the previous post in this series.

I’m not sure why Kelly created technium when the word “technosphere” was readily available. That term was coined by either control engineer John Milsum or by geologist and engineer Peter Haff. Sometimes it’s also called the anthrosphere, a term originally attributed to 19th century geologist Eduard Suess.

Technium and technosphere are similar and, I suppose, both are flexible enough to be used in a variety of contexts. Geologist Jan Zalasiewicz writes:

The technosphere…comprises not just our machines, but us humans too, and the professional and social systems by which we interact with technology – factories, schools, universities, trade unions, banks, political parties, the internet. It also includes the domestic animals that we grow in enormous numbers to feed us, the crops that are cultivated to sustain both them and us, and the agricultural soils that are extensively modified from their natural state to carry out this task.

Making the Two Words More Complementary

Given the overlap of the concepts, I’ve been thinking about whether technium is redundant. One interesting way to think about the difference between technosphere and technium came to me via Google’s Bard, which argued that “the technosphere refers to the entire system of human-made objects and structures, while the technium refers to the specific processes and activities involved in creating and using these objects and structures.”

I like that distinction and I suspect Kelly himself might agree with it. After all, he writes that “the technium is a tendency, not an entity. The technium and its constituent technologies are more like a grand process than a grand artifact.” 

Bard asserts that “the technosphere is the physical manifestation of the technium.” That is, the technosphere is the built environment and the technium is the human activity that creates and sustains it via engineering, manufacturing, maintenance, etc.

I don’t know if this is exactly what Kelly had in mind since he doesn’t go into detail about how the technium differs from the technosphere in his book, but I find it a useful distinction.

AI’s Role in the Technium

The reason I focus on the differences is that I think AI potentially plays an important role here. AI is obviously a growing part of the technosphere, but it’s also starting to play a role in the technium that, up till now, only humanity has played. That is, until this moment in history, human activities have made up the “grand process” that is the technium. That’s now changing, and it marks a major shift in the history of technology.

AI-Generated Art

In a rather minor example, I increasingly use generative AI software to create the graphic elements for my posts. For example, I used it to create all the images in the “Illustrated Version of Edgar Allan Poe’s ‘The Raven'” post.

I’m not an illustrator but I was able to use AI to generate a series of images that I thought went fairly well with the poem. It was more of an experiment than anything else but it demonstrated, at least to me, the ability of AI to create a significant portion of the technosphere.

AI-Generated Software

“But a piece of digital artwork is not part of the technosphere,” you might argue. Well, that becomes a matter of semantics, so let’s go with something a little more along the classic lines of built infrastructure: that is, software development.

We know that the new generative AIs are quite good, if not perfect, at generating code in a wide array of programming languages. So, let’s say a human being uses this capability to create 90% of the code behind a new app that finds its way onto the Apple App Store.

Could you argue that that’s not part of the technosphere? I doubt it. But let’s keep going anyway.

AI-Generated Machinery

As I’ve argued before, there’s no reason that generative AI can’t be used to generate things made of atoms rather than just digital objects made of bits and bytes. It’s already a trivial matter, for example, to hook up a generative AI to a 3D printer and create a sculpture or a machine part. This is only going to get easier, with more and more complex machinery being designed by AI and built by forges, 3D printers and other components of the technosphere.

The Key Issue Is Agency Rather Than Capability

So, generative AI is not just part of the technosphere but, increasingly, the technium. That is, it begins to play a role that, up till now, only humanity itself has played. Unless the technology becomes highly regulated very quickly, this role will grow at extraordinary rates.

There will be those who assert that these AIs are only one tool along a continuum that creates the technosphere. For example, there are plenty of machines that create other machines, and there is plenty of software that is used to create other digital artifacts. As with other software, these AIs don’t create anything at all until they are prompted to do so.

Maybe so, but I’m arguing that there’s a qualitative difference here. In the creation of my previous post called “A Brief History of Human Technology,” I simply typed the title of the post into Microsoft Bing Image Creator. Otherwise, I gave it no direction at all. It generated two images, both of which I thought were quite good and yet quite different from one another. I used the first of the images in that post and used the second one as the featured image in this post (see above).

Yes, I know that the AI art generators are using existing art on the Internet that got pulled into their training models and that there are ethical issues involved, which I’ve examined elsewhere. Even so, these are still original, if derivative, pieces of art that the algorithm created with minimal guidance from me. This is a different thing than when I use an Adobe application to create a triangle or blur a detail. Like it or not, this is creation.

AI and what it produces isn’t just part of the technosphere, it now plays a role similar to that of humanity in the “grand process” and “tendency” that is the technium. (There’s a whole teleological debate here that I’m mostly going to forego for now.)

Similar but Not the Same

Yes, there are still large differences between humanity and these new AIs that have been built via the neural network idea cribbed from our own brains. But I think the primary difference in this context boils down to agency.

In this case, the AI is certainly more capable than I am as an illustrator. What it lacks, at least in this context, is the initial spark of agency to take the action to create the image. But, and I think this is important, this doesn’t mean it lacks any agency. Indeed, all I did was create an initial abstract and inchoate concept, and then it “decided” how to approach the creation of the graphic.

If I’d done the same with a human artist, we certainly wouldn’t say that person lacked agency. Quite the contrary. We’d be amazed at their ability to take such an abstract concept and turn it into a work of art! Sure, I ultimately chose and curated the final product, but that’s something that a customer or patron of the arts always does.

So, no, this isn’t the same as any other technology we’ve ever created. It’s qualitatively different. We now have a partner in the technium dance.

This changes our ontological position in the world. And, more practically, it changes the meaning of human work, which is the topic I’ll cover in my next post in this series.

PS – Note that I asked Kevin Kelly if he’d like to comment on the analysis and he clarified as follows: “I was not aware of the term technosphere when I first blogged. If I had been I might have used it. I define the technium in the same inclusive broad meaning of tangible and intangible.”

A Brief History of Human Technology

Before I write about artificial intelligence and its potentially pivotal role in history, I want to provide a brief history of human technology. As I noted in my last post in this series, human beings don’t and possibly can’t live without any technology at all. But, for most of our history, these technologies have been relatively simple, at least from our modern perspective. To get a better understanding of how dramatically and rapidly our technologies have changed, let’s consider some timelines.

Millions of Years of Basic Tool Usage

In a very real sense, we humans have been expanding our capabilities via technologies for hundreds of thousands of years. In fact, we were likely doing it long before we were even human. Today, there are examples of tool usage among all the non-human great apes (bonobos, chimpanzees, gorillas, and orangutans), which probably means that our common ancestors were also tool users.

Consider the Hominin timeline below, for example. Our ancestors split away from the ancestors of today’s chimpanzees about eight and a half million years ago, and there’s a good chance those ancestors used wooden tools in ways similar to today’s chimps and bonobos. They do things such as use sticks to fish termites out of mounds and to dig for tubers, wield stones to crack nuts, and even employ leaves to soak up water or wipe their mouths.

From Wikipedia with small modifications by me

A Timeline of Inventions and Technological Advances

As the timeline above shows, however, a rapid growth of tools and technologies began with the advent of Homo sapiens. Although the flowering of various technologies arose with Homo sapiens over a period of tens of thousands of years, there was a massive uptick in new and powerful technologies around the start of the Industrial Revolution. Consider the following list of some of the most important inventions, though obviously many of these dates are, at best, estimates:

900,000 years ago: Hafting
400,000 years ago: Spears
200,000 years ago: Language
170,000 years ago: Clothing
100,000 years ago: Boats
90,000 years ago: Harpoons
70,000 years ago: Arrows
47,000 years ago: Mining
42,000 years ago: Tally stick
36,000 years ago: Weaving
28,000 years ago: Ceramics
28,000 years ago: Rope
23,000 years ago: Domestication of dogs
16,000 years ago: Pottery
12,000 years ago: Agriculture
9,000 years ago: Alcohol
8,000 years ago: Irrigation
7,000 years ago: Copper smelting
6,500 years ago: Lead smelting
6,000 years ago: Plumbing
5,500 years ago: Domestication of horse
5,300 years ago: Written word
4,300 years ago: Abacus
4,200 years ago: Protractor
3,500 years ago: Glass
3,300 years ago: Water wheel
3,300 years ago: Iron smelting
2,650 years ago: Crossbow
2,650 years ago: Windmill
2,485 years ago: Catapult
2,200 years ago: Paper
1,803 years ago (220 AD): Woodblock printing
1,573 years ago (450 AD): Horse collar
1,446 years ago (577 AD): Sulfur matches
1,405 years ago (618 AD): Bank note
1,223 years ago (800 AD): Gunpowder
935 years ago (1088 AD): Movable type
697 years ago (1326 AD): Cannon
584 years ago (1439 AD): Printing press
525 years ago (1498 AD): Rifle
418 years ago (1605 AD): Newspaper
415 years ago (1608 AD): Telescope
403 years ago (1620 AD): Compound microscope
393 years ago (1630 AD): Slide rule
381 years ago (1642 AD): Mechanical calculator
367 years ago (1656 AD): Pendulum clock
343 years ago (1680 AD): Piston engine

Start of the Industrial Revolution

290 years ago (1733 AD): Flying shuttle
259 years ago (1764 AD): Spinning jenny
258 years ago (1765 AD): Steam engine
230 years ago (1793 AD): Cotton gin
219 years ago (1804 AD): Railway
216 years ago (1807 AD): Steamboat
197 years ago (1826 AD): Photography
195 years ago (1828 AD): Reaping machine
179 years ago (1844 AD): Telegraph
147 years ago (1876 AD): Telephone
147 years ago (1876 AD): Internal-combustion engine
144 years ago (1879 AD): Electric light
138 years ago (1885 AD): Automobile
122 years ago (1901 AD): Radio
120 years ago (1903 AD): Airplane
97 years ago (1926 AD): Rocketry
96 years ago (1927 AD): Television
86 years ago (1937 AD): Computer
81 years ago (1942 AD): Nuclear power
76 years ago (1947 AD): Transistor
72 years ago (1951 AD): First artificial neural network
70 years ago (1953 AD): Structure of DNA discovered
68 years ago (1955 AD): Artificial intelligence term coined
66 years ago (1957 AD): Spaceflight
65 years ago (1958 AD): Perceptron, artificial neural network for pattern recognition
64 years ago (1959 AD): Machine learning term coined
50 years ago (1973 AD): Cell phone
49 years ago (1974 AD): Personal computer
49 years ago (1974 AD): Internet
39 years ago (1984 AD): 3D-printing
28 years ago (1995 AD): DNA sequencing
11 years ago (2012 AD): CRISPR
9 years ago (2014 AD): Generative adversarial network AIs
5 years ago (2018 AD): Generative pre-trained transformer AIs

These technologies are all now part of our technosphere. If we picture that sphere as a kind of balloon, then we can see that it filled up relatively slowly at first but picked up momentum around 40,000 years ago, then really took off about 400 years ago.

Are Breakthroughs Speeding Up or Slowing Down?

The Speeding Up Theory

Some thinkers believe that we are in the midst of a virtual explosion of technology. Futurist Ray Kurzweil claims that we are in a state of exponential technological growth driven by the law of accelerating returns.

Back in 2001, he wrote, “An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense ‘intuitive linear’ view. So we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate). The ‘returns,’ such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.”
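Kurzweil’s arithmetic is easy to sandbox. As a minimal sketch (the ten-year doubling period here is my assumption, not a figure from the quote), integrating a rate of progress that doubles every decade over one century yields on the order of 15,000 years of progress at the starting rate, the same ballpark as his 20,000-year claim:

```python
import math

def total_progress(years: float, doubling_period: float) -> float:
    """Years of 'today-rate' progress accumulated over `years` calendar
    years, if the rate of progress doubles every `doubling_period` years.
    This is the integral of 2**(t / D) dt from t = 0 to `years`."""
    d = doubling_period
    return d / math.log(2) * (2 ** (years / d) - 1)

# With the rate doubling every decade, a century packs in roughly
# 15,000 years of progress measured at the starting rate.
print(round(total_progress(100, 10)))
```

A doubling period between nine and ten years lands right around 20,000, which shows how sensitive the headline number is to that one assumption.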

Well, wow, that’s a lot. It is the epitome of techno-optimism (if merging with machines is your idea of optimism). On the other side of the coin, of course, are those who are quite certain that superintelligent AI will mean the end of humanity.

But I think the primary difference between techno-optimism and techno-pessimism boils down to one thing: AI’s future role in the creation of the technosphere. We’ll get to that in the next post. In the meantime, however, let’s consider the idea that technological change is actually slowing down.

The Slowing Down Theory

Certainly, on a geological time scale, all these inventions we’ve listed have arisen virtually simultaneously. But we don’t live in geological time and some experts believe that, from a human point of view, there’s been a dramatic slowdown in true innovation and scientific breakthroughs in recent years.

The authors of the study titled “Papers and patents are becoming less disruptive over time” analyzed data from 45 million papers and 3.9 million patents across six decades (1945–2010). Tracking how their disruption index changes over that timeframe, the researchers found papers and patents are increasingly less likely to be disruptive.

For example, in the area of patents, the decline in disruptiveness between 1980 and 2010 ranged from 78.7% for computers and communications to 91.5% for drugs and medical. They write, “Our analyses show that this trend is unlikely to be driven by changes in citation practices or the quality of published work. Rather, the decline represents a substantive shift in science and technology, one that reinforces concerns about slowing innovative activity. We attribute this trend in part to scientists’ and inventors’ reliance on a narrower set of existing knowledge.”
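The study’s measure, a consolidation-disruption (CD) index, has a simple core idea: a disruptive work gets cited instead of the prior work it builds on, while a consolidating work gets cited alongside it. Here is a minimal sketch of that idea under my reading of the published definition; the paper IDs and citation sets are made up for illustration, and the authors’ full method involves more machinery (time windows, large-scale citation data, etc.):

```python
def cd_index(focal_refs: set[str], citers: dict[str, set[str]], focal: str = "F") -> float:
    """Consolidation-disruption (CD) index of a focal paper.

    `focal_refs`: the papers the focal paper cites.
    `citers`: later papers mapped to the set of works each one cites.
    Ranges from -1 (purely consolidating) to +1 (purely disruptive).
    """
    n_f = n_b = n_r = 0
    for cited in citers.values():
        cites_focal = focal in cited
        cites_refs = bool(cited & focal_refs)
        if cites_focal and not cites_refs:
            n_f += 1   # cites the focal paper only: disruptive signal
        elif cites_focal and cites_refs:
            n_b += 1   # cites the focal paper and its references: consolidating
        elif cites_refs:
            n_r += 1   # cites only the prior work, bypassing the focal paper
    total = n_f + n_b + n_r
    return (n_f - n_b) / total if total else 0.0

# Toy example: three later papers cite the focal paper "F" (which cites
# "A" and "B") without citing A or B, so the index is maximally disruptive.
print(cd_index({"A", "B"}, {
    "P1": {"F"},
    "P2": {"F"},
    "P3": {"F", "C"},
}))
```

The declining averages reported in the study mean that, over the decades, the typical new paper or patent has drifted toward the consolidating end of this scale.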

So, what can we do differently to address this issue? The authors suggest:

To promote disruptive science and technology, scholars may be encouraged to read widely and given time to keep up with the rapidly expanding knowledge frontier. Universities may forgo the focus on quantity, and more strongly reward research quality, and perhaps more fully subsidize year-long sabbaticals. Federal agencies may invest in the riskier and longer-term individual awards that support careers and not simply specific projects, giving scholars the gift of time needed to step outside the fray, inoculate themselves from the publish or perish culture, and produce truly consequential work.

The Extension of the Human Mind

Whether the creation of disruptive technologies and scientific paradigms is speeding up or slowing down, it’s clear that we have recently made large breakthroughs in artificial intelligence, which is an extension of our cognitive capabilities.

Of course, we humans have been aiding and extending our mental capacities at least since the tally stick and probably long before then. Books, photos, maps, calculators, spreadsheets, word processors and much more have all been extensions of our minds.

But generative AI does feel like a much further extension, capable of doing various things that only the most capable and educated of people could have done before now. For example:

ChatGPT’s Performance on Academic and Intelligence Tests

The Uniform Bar Exam: While GPT-3.5, which powers ChatGPT, scored in only the 10th percentile on the bar exam, GPT-4 scored in the 90th percentile with a score of 298 out of 400.
The SAT: GPT-4 aced the SAT Reading & Writing section with a score of 710 out of 800, which puts it in the 93rd percentile of test-takers.
The GRE: While GPT-4 scored in the 99th percentile on the verbal section of the exam and in the 80th percentile on the quantitative section, it scored in only the 54th percentile on the writing test.
USA Biology Olympiad Semifinal Exam: GPT-4 scored in the 99th to 100th percentile on the 2020 Semifinal Exam.
AP Exams: GPT-4 received a 5 on AP (Advanced Placement) Art History, AP Biology, AP Environmental Science, AP Macroeconomics, AP Microeconomics, AP Psychology, AP Statistics, AP US Government and AP US History. On AP Physics 2, AP Calculus BC, AP Chemistry and AP World History, GPT-4 received a 4.
IQ: Estimated on the basis of five subtests, the Verbal IQ of ChatGPT was 155, superior to 99.9 percent of test-takers. It was not able to take the nonverbal subtests, but the Verbal IQ and Full Scale IQ scales are highly correlated in the standardization sample.

Although this table highlights the power of these technologies, it leaves aside their lack of “common sense,” their poor mathematics capabilities (for now) and their chronic habit of hallucination and confabulation. These and other issues are why few view these technologies as actual artificial general intelligence.

But this doesn’t mean that such AI doesn’t already play a fast-evolving, uniquely creative and increasingly pivotal role in the shaping of our technosphere. That will be the subject of my next post.

Human Beings Don’t Live Without Technology

Human beings don’t live without technology, and it’s not clear that they can. Even the people who live “off the grid” without what we call “high technology” don’t live with no technology at all. Far from it. They wear clothes, live in human-made shelters and use basic tools. Perhaps there are naked people somewhere surviving with no tools at all, including handmade ones, but I’ve never heard of any.

No-Tech Is Not Natural for People

The vast majority of other animals can and do live without any technologies at all. Yes, there are a fair number of other tool users in the world, including all the hominids (we are part of this group, of course), various types of birds, elephants, dolphins, sea otters, octopuses and more. But, for most of them, tools are a sideline without which they could and would survive (certain nesting birds might be the exception, depending on how we define tools).

But people? Not really. Because our tools have shaped our evolution just as we have shaped theirs. A relatively weak and hairless ape without clothes and built shelters is unlikely to survive. If we deprive that hairless ape of fire and cooking, for example, it is unlikely to be able to derive enough nutrition because its body (teeth, jaw, stomach, colon, etc.) has literally evolved to process cooked foods. It’s very difficult to rely solely and completely on unprocessed raw foods: try sticking to a raw food diet in which somebody somewhere along the way doesn’t use some form of tool.

So, yes, hypothetically a genetically and geographically lucky human being (it had better be a relatively warm place!) could survive into adulthood without ever using any type of tool, but it would be an uphill, unpleasant and unnatural lifestyle.

Our Tools Are Essential Extensions of Our Selves

In other words, our tools are essential parts of our lives. We are interconnected with them. They are part of the reticulum in which we abide.

Image by Sheila1988; Agricultural tools at show

Over time, our network of tools has grown, of course. We almost certainly started with simple wooden tools and rocks, the same as our hominid cousins often use today. But over time, this assortment of technologies has grown ever more diverse and complex. We extend ourselves through our tools, and then the tools themselves become networks into which we embed ourselves. Consider, for example, our networks of roads, wires, pipes, machines, language and more.

At some point, it becomes hard to tell where we Homo sapiens stop and our technologies begin. Our technology network has become so large and complex that we’ve invented new words (and, yes, words are also tools) to describe them: “technologies,” “technosphere,” “technium,” “information networks,” etc. (Note that the word network wasn’t even invented until the 1550s, making it a relatively modern piece of human technology.)

But the story doesn’t end there. Indeed, all I’m really doing is setting the stage for my next post on the topic of technological advances, including the latest ones that extend the deepest of all our networks: the ones that make up our minds.

Blog as a Sandbox

Since I’ve begun blogging again, I’ve been using The Reticulum blog as a sandbox more than anything else. I’ve messed around to entertain myself and try new things: illustrated poems, coding tips, nostalgic memories, riffs on nonfiction books, the occasional bit of fiction, and various other random acts of writing. Sometimes it feels sadly self-indulgent to me; other times, like healthy experimentation and expression.

Sand Mandala

Whatever it is, though, I’m going to try to impose a bit more structure on it over the rest of the year. The idea is to focus on just two topics that are, while broad in their own right, more in keeping with the original intent of the blog.

The first topic will be what I’m thinking of as AI@Work, which leverages my interest in both neural networks and work-related issues.

As broad as that topic is, however, the second is even broader and can be summed up in the phrase “network science.” I have a couple of books by network scientist Albert-László Barabási I want to explore here before I move on to other thinkers in the area.

So, it’ll remain a sandbox but a slightly more structured one through the rest of 2023. More of a sand mandala, perhaps.

Featured image: User:GgvlaD, the “cutting” of the Green Tara mandala at the “Days of Tibet in Moscow” event, June 2011.

Little Boxes in Mar-a-Lago

“Little Boxes in Mar-a-Lago” is sung to the tune of “Little Boxes” by Malvina Reynolds

Little boxes in Mar-a-Lago
Little boxes moved by Trumpy Dumpy
Little boxes at Mar-a-Lago
Little boxes not the same

There were photos and mementos
And defense plans and nuclear things
They were all moved by Trumpy Dumpy
And they all weren’t just the same

And the people at Mar-a-Lago
All went there to eat and dance
Which they did near all the boxes
And they all weren’t just the same

There were liars and defilers
And sons of foreign presidents
And they all went to Trumpy Dumpy
And they all weren’t just the same

There they hit balls on the tennis courts
And drank all their martinis dry
And they had lots of money honey
But they all weren’t just the same

And the boxes with the secrets
Were scattered all throughout the place
Like a basement and a ballroom
And rooms weren’t all the same

Then the Feds said to return them
And they asked all very nicey wicey
Bout the boxes with the secrets
Cause they all weren’t just the same

Though forbidden, they were hidden
And then they were lied about
But the Feds finally found them all
Now the whole world’s not the same

Political Addendum

I wrote the political satire above partly just to see if I could, but the truth is that I know very well this is no laughing matter for Mr. Trump or the nation as a whole.

I’m not one of the people actually celebrating the indictment. In a nation of laws, it may be necessary to prosecute Trump given the flagrant nature of the crime and, especially, the absurd and repeated attempts to cover it up, but it’s ultimately a tragedy for the nation.

By many measures, Mr. Trump deserves this. If he’d just given everything back and said, “Oops, my bad,” this issue would have been a minor historical footnote.

But in my non-expert opinion, Trump is seriously mentally ill and has my pity. He genuinely can’t help himself, it appears to me.

The best possible outcome here is for him to be found guilty and forced to seek psychiatric help. Ultimately, that should be part of a plea deal that allows him to stay out of prison, if that’s legally possible, for the good of the nation. Maybe this will require a presidential pardon.

The worst outcome would be for Mr. Trump to become president because he would never be willing to voluntarily leave office, if only (this time) to avoid prosecution. He’d spend four years trying to find some way to subvert the 22nd Amendment.

Of course, the alternative of a smart, Orbán-wannabe like DeSantis would not be much better for the nation. It could arguably be worse if the U.S. continues to move toward fascism. Which makes me hope the GOP comes to its senses and selects a sane, pro-democratic, non-demagogue candidate in the primary, even if he or she leans more conservative than moderate.

Featured image of classified documents in a bathroom at Mar-a-Lago (Mar-A-Lago 1000-6.jpg), from Wikimedia Commons

Is Suffering Optional?

Is suffering optional, as a common Internet meme would have it? I don’t know, but it does seem like one of humanity’s deepest existential questions.

We all suffer. Sometimes more in certain times of our lives than in others. And some people seem to endure more suffering than others. But it’s a human universal, which is probably why it’s at the very heart of some of the world’s great religions.

Suffering as Sacrifice

El Greco – Christ on the Cross

Take Christianity. Its very emblem is a cross, a vehicle for suffering. The idea is that God suffered and, indeed, died for our sins. Through some miracle of God’s mercy, human sins are expiated through Christ’s sacrifice, allowing believers to enter Heaven where there is, presumably, no more suffering.

At least that’s how I remember it from my Episcopalian upbringing.

Inspired by Christ’s sacrifice, some Christian traditions advocate “offering up” one’s suffering for others. “Offering up our suffering is a powerful way to become like Christ and love others as He loves them,” reports The Bishop’s Bulletin. “Becoming like Christ and loving like He does is what we were created to do. We are called to love in a radical way, like the divine Son.”

So suffering, if it is offered up as a sacrifice, takes on a religious meaning all its own. It becomes an act of devotion.

Suffering as Optional

Statue representing Siddhartha Gautama, by Nyo. https://commons.wikimedia.org/wiki/File:Siddhartha_Gautama_Buddha_portrait.PNG

Then there’s Buddhism. The story goes that Gautama was a prince brought up in such a way that he would never have to encounter human suffering. The sick, the handicapped, the unhappy: all were banished from his royal world.

Until one day he came across true human suffering, and he was utterly shocked and touched by it. So, it became his life’s goal to eliminate all human suffering.

This seems like an impossible challenge, but Gautama (who became the Buddha) claimed to have found a way. That way became the essence of Buddhism. In fact, all of Buddhism’s Four Noble Truths are related to suffering:

1) The truth that suffering exists
2) The truth that desires and ego-driven attachments cause suffering
3) The truth that the cessation of suffering is possible when you rid yourself of ego-attachments and so are able to attain Nirvana
4) The truth that there’s a path (the famous Eightfold Path) leading to Nirvana

Buddha’s High Bar

The Internet meme to which I was referring is, “Pain is inevitable. Suffering is optional.” It’s not clear where it originally comes from, but novelist Haruki Murakami did indeed write, “Pain is inevitable. Suffering is optional. Say you’re running and you think, ‘Man, this hurts, I can’t take it anymore. The ‘hurt’ part is an unavoidable reality, but whether or not you can stand anymore is up to the runner himself.”

Clearly, however, the idea is inspired by Buddhism’s notion that one can rid oneself of suffering. That is, if you can attain Nirvana, you’re able to dodge the suffering bullet.

Which sounds pretty appealing except, well, you know, achieving Nirvana is a very high bar to jump. And, assuming it’s possible, how many have ever arrived? How would we even know if they had?

Can We Reduce Suffering?

So, let’s sum up. First, suffering sucks. Second, you can, in theory, attach meaning to suffering to make it suck less. Third, you can even ditch suffering entirely, also in theory, if you have what it takes to become enlightened. At that stage, suffering doesn’t suck at all. It becomes an illusion.

But is there anything beneath all this religion-based theory? Maybe so. Here’s how psychologist Anthony Burrow describes an experiment in which he was involved:

In this particular experiment, after writing about, either [a] movie that they had seen or their sense of purpose, individuals traversed a steep incline, and as they arrived at the top, we asked them to report, how steep was this hill and how much effort did it take them to get to the top? Now, for those individuals who had written about the most recent movie they had seen, there was a pretty clear, positive correlation, or a strong relationship between how steep they thought that hill was and how much effort they thought it took to get to the top. Whereas, individuals who had written about purpose briefly before traversing this incline, when they got to the top, they showed less of a relationship between the estimated incline of the hill and how much effort they said it took to get to the top.

So, maybe a sense of purpose can help reduce the sense of effort associated with a task. And maybe effort can be a proxy for suffering. And maybe “offering up” one’s suffering not only helps people make sense of suffering but, by providing a sense of purpose, reduces suffering itself.

Photo by Chris Falter of a Dharmachakra (which symbolizes the Eightfold Path) on the Jokhang temple, Lhasa, Tibet

A lot of maybes there, but let’s tack on a few more. Maybe you don’t need to arrive in Nirvana-land to make a dent in suffering. Maybe a hike up the mindful path (for example, by engaging in meditation) can reduce suffering even if it can’t eliminate it. Indeed, this seems to be among the underlying assumptions of Buddhism.

Now, I don’t honestly know how much any of this can help people in genuine distress. Is it literally possible, for example, to be in deep physical pain without suffering? That seems unlikely to me, but then again I am far from being in any state of Christian grace or Buddhist enlightenment.

Perhaps the folks who’ve attained sainthood or hiked far up the Eightfold Path could say for sure.

Featured image: Painting by Nicholas Roerich, “Buddha Victorious” (series “Banners of the East”), 1925. Canvas, tempera, 74.2 x 117.7 cm. International Roerich Center, Moscow. Source: Estonian Roerich Society.