Employers Have Fallen Behind Employees in AI Adoption

When it came to previous versions of AI, organizations had to worry about falling behind the business competition. The same is true for generative AI, of course, but this time there's an added complication: employers have fallen behind their own employees in AI adoption. This needs to be on the radar of HR, the IT department and executive leadership teams.

Execs: Important, Though It’s Going to Take Time

Most executives are familiar with the technology hype cycle, and they’ve seen AI hype before. So, is the generative AI movement different?

Well, probably. One survey from KPMG found that two-thirds of executives think generative AI will have a high or very high impact on their organizations over the next 3 to 5 years. But, being familiar with how long it can take to change anything, especially when it comes to new technologies, most also think it’s going to take a year or two to implement new generative AI technologies.

KPMG reports, “Fewer than half of respondents say they have the right technology, talent, and governance in place to successfully implement generative AI. Respondents anticipate spending the next 6-12 months focused on increasing their understanding of how generative AI works, evaluating internal capabilities, and investing in generative AI tools.”

All of which sounds fine, but only 6% say they have a dedicated team in place for evaluating and implementing risk mitigation strategies. Another 25% say they’re putting risk management strategies in place but that it’s a work-in-progress.

Employees: Already On It, But Don’t Tell the Boss

Meanwhile, a survey conducted by Fishbowl, a social network for professionals, reports that 43% of professionals use AI tools such as ChatGPT for work-related tasks. Of the 5,067 respondents who report using ChatGPT at work, 68% don’t tell their bosses.

This makes me wonder if A) there’s an intentional “don’t ask, don’t tell” policy in some companies that are simply afraid of establishing policies or guidelines that could get them in legal trouble down the line, or B) there’s an unintentional bureaucratic lag as companies take months or longer to establish guidelines or policies around these new technologies.

But Some Employers Aren’t Waiting

This doesn’t mean that all organizations are lagging in this area, however. Some have already set up guardrails.

The consulting firm McKinsey, for example, has reportedly knocked together some guardrails that include “guidelines and principles” about what information employees can input into the AI systems. About half of McKinsey workers are using the tech.

“We do not upload confidential information,” emphasized Ben Ellencweig, senior partner and leader of alliances and acquisitions at QuantumBlack, the firm’s artificial intelligence consulting arm.

McKinsey specifically uses the AI for four purposes:

  • Computer coding and development
  • Providing more personalized customer engagement
  • Generating personalized marketing content
  • Synthesizing content by combining different data points and services

Ten Suggested Do’s and Don’ts

There are now various articles on developing ethics and other guidelines for generative AI. Keeping in mind I’m no attorney, here’s what I think organizations should consider in the area of generative AI:

  • DO spend time getting to understand these AIs before using them for work. DON'T leap directly into using these tools for critical work purposes.
  • DO be careful about what you put into a prompt. DON'T share anything you wouldn't want shared publicly.
  • DO read over and fact-check any text an AI generates for work purposes. DON'T assume you're getting an accurate answer, even if you're given a link to a source.
  • DO use your own expertise (or that of others) when evaluating any suggestions from an AI. DON'T assume these AIs are unbiased; they are trained on human data, which tends to have bias baked in.
  • DO develop guardrails, guidelines and ethical principles. DON'T go full laissez-faire.
  • DO continue to use calculators, spreadsheets and other trusted calculation tools. DON'T rely on generative AI for calculations for now unless you have guarantees from a vendor; even then, test the system.
  • DO continue to use legal counsel and trusted resources for understanding legislation, regulation, etc. DON'T take any legal advice from an AI at face value.
  • DO careful analysis of any tasks and jobs being considered for automation. DON'T assume these AIs can replace any tasks or positions until you and others have done your due diligence.
  • DO train employees on both the ethical and practical uses of generative AIs once these are well understood. DON'T make everyone learn on their own with no discussion or advice.
  • DO start looking for or developing AI expertise, considering the possibility (for example) of a Chief AI Officer position. DON'T assume that today's situation won't change; things are going to continue to evolve quickly.

Which AI Should HR Professionals Use?

The other day, I spoke in front of a considerable crowd of HR professionals in LA. I asked how many had used ChatGPT and a couple of dozen or so raised their hands. Then I asked how many had used the Bing chatbot, and hardly any hands went up. I imagine this is typical across the profession, which is why I wanted to share a few thoughts about which AIs HR professionals should use.

Of course, there’s no one answer to this question. People should use the bot they like most. But, as a person who does a lot of HR-related research, I’ve formed some opinions.

There Are Two General Types of HR Questions

Generally speaking, there are two types of HR queries. The first type is about ideas and practices. The second is about factual information that often requires specific data as part of the response.

Type One Questions

When it comes to the first type of question, all three of the most accessible AIs – that is, ChatGPT, Bing and Bard – perform similarly. Here's one such question I asked: "I want to reduce the voluntary turnover rate of our developers. How do I accomplish that?"

All three offered tips such as competitive compensation, development opportunities, a positive work environment, and so on. All of the advice seemed fine, if a bit bland. All three are, I think, a good place to start for an HR professional just beginning to think through a particular issue.

Type Two Questions

When it comes to the second type of question, however, there are some sizeable differences. Here’s what I asked as a demonstration: “I have a fast-food chain of restaurants in Tampa. What is the expected turnover rate for my employees?”

ChatGPT’s Response

The expected turnover rate for employees in the fast-food industry can vary depending on various factors such as location, industry norms, employee demographics, and specific organizational practices. While I don’t have access to real-time data or specific information about your fast-food chain, I can provide you with a general idea of turnover rates in the industry.

In the fast-food industry, turnover rates tend to be relatively high compared to other sectors due to factors like low wages, demanding work conditions, and limited career advancement opportunities. According to data from the Bureau of Labor Statistics (BLS) in the United States, the accommodation and food services industry, which includes fast-food establishments, had a voluntary turnover rate of around 81.9% in 2020.

However, it’s important to note that turnover rates can vary significantly based on factors like geographic location, employee engagement practices, and company-specific efforts to improve retention. Some fast-food chains have implemented effective strategies to reduce turnover rates, such as offering competitive wages, providing employee benefits, implementing career development programs, and fostering a positive work environment.

To determine the expected turnover rate for your specific fast-food chain in Tampa, it would be best to analyze your historical turnover data and compare it with industry benchmarks. Additionally, conducting employee surveys and exit interviews can provide valuable insights into the reasons for turnover within your organization, allowing you to identify areas for improvement and develop targeted retention strategies.

Bard’s Response

The expected turnover rate for fast food employees in Tampa is high. According to a study by HourWork, only 54% of QSR employees (quick-service restaurants) reached 90 days of working before quitting in 2022. Prior to the pandemic, the segment’s 90-day retention rate hovered at 58%.

There are a number of factors that contribute to the high turnover rate in the fast food industry, including:

  • Low wages: Fast food workers are often paid minimum wage or close to it, which can make it difficult to make ends meet.
  • Unpredictable schedules: Fast food workers often have to work unpredictable schedules, which can make it difficult to balance work and personal life.
  • High-pressure work environment: Fast food workers are often under a lot of pressure to meet customer demands, which can lead to stress and burnout.
  • Lack of opportunities for advancement: Fast food workers often have few opportunities for advancement, which can make them feel like they are not being rewarded for their hard work.

If you want to reduce the turnover rate at your fast food chain, you can consider:

  • Offering higher wages: This is one of the most effective ways to attract and retain employees.
  • Offering flexible schedules: This can help employees balance work and personal life.
  • Creating a positive work environment: This can help reduce stress and burnout.
  • Providing opportunities for advancement: This can help employees feel valued and motivated.

By taking steps to address the factors that contribute to high turnover, you can create a more attractive workplace for your employees and reduce the turnover rate at your fast food chain.

As a follow-up question, I asked, "What is the source of this information?" It responded as follows:

The information I provided about the expected turnover rate for fast food employees in Tampa is from a study by HourWork. The study was conducted in 2022 and surveyed employees at over 8,000 quick-service restaurants (QSRs) in the United States. The study found that the average turnover rate for QSR employees was 61%, but the turnover rate for employees in Tampa was slightly higher at 63%.

The study also found that the turnover rate for QSR employees was higher for younger workers, with employees under the age of 25 having a turnover rate of 70%. The turnover rate was also higher for part-time employees, with part-time employees having a turnover rate of 65%.

The study’s findings suggest that the fast food industry in Tampa is facing a high turnover rate. This is likely due to a number of factors, including low wages, unpredictable schedules, and a high-pressure work environment. If you are a fast food employer in Tampa, you may want to consider taking steps to address these factors in order to reduce your turnover rate.

Here is the link to the study: https://www.restaurantdive.com/news/most-qsr-employees-worked-90-days-before-quitting-2022/640826/

Bing’s Response

The employee turnover rate for fast-food restaurants is notoriously high. According to various sources, the fast-food industry sees a 150% employee turnover rate12, meaning that the entire staff is replaced annually and half of the new staff needs to be replaced again. Some sources report even higher rates, such as 144%3 or 130%4. This is a huge challenge for fast-food operators, as it affects customer service, food quality, and operational efficiency.

There are some exceptions, however. One fast-food chain, Pal’s Sudden Service, has managed to keep its turnover rate absurdly low, at less than 100%5. How does it do that? According to the CEO, Thomas Crosby, it’s all about hiring, training, and linking the identity in the marketplace to the approach in the workplace. Pal’s invests heavily in employee development, rewards performance, and creates a culture of excellence and loyalty.

So, depending on how you run your fast-food chain of restaurants in Tampa, you can expect a different turnover rate for your employees. If you follow the industry average, you might lose more than 100% of your workers every year. But if you follow the example of Pal’s Sudden Service, you might be able to retain more of your staff and reduce your turnover costs. I hope this information helps you. 😊

Which Performed Best?

ChatGPT did about as well as it could given that it has no access to data after September 2021. At least it cited the BLS, which is pretty much the gold standard for workforce-related data.

Bard arguably did better, citing a specific study from an article as recent as January 2023. Of course, I had to specifically ask for the source and there was only the one article.

Bing did the best. It cited multiple sources and provided links to those sources. It also cited an appropriate best practice article from a good, if somewhat dated, source: the Harvard Business Review.

It’s not that Bing is perfect. Sometimes the links it provides to articles don’t actually contain the information that has supposedly been cited. But it remains the best of the three for now and so, based on my informal poll, seems underutilized among HR professionals.

Why? Probably because Google still has a virtual monopoly on search, and its Chrome browser dominates the browser market. Using the Bing chatbot on a desktop requires the Microsoft Edge browser, for now, and it probably seems like a hassle for most HR professionals to download and use it, especially if much of their computing life is spent in the Google browser/email/applications ecosystem.

I get that. I work for a “Google shop” and much of my work and personal life exists on Google.

On the other hand, I don’t find it too complicated to keep two different browsers open on my laptop, and using the Bing chatbot on my phone is easy.

So, if you’re an HR professional who conducts online research and wants to use a chatbot to locate verified sources, then I recommend bringing Bing into your rotation of AI tools. Bard may well catch up. It’s shown a lot of improvement over time. But Bing wins for now.

AI as Coworker, Collaborator and Dance Partner

In a recent post in this series, I argued that the latest forms of AI will play a unique role in the history of humanity and technology. In this one, I want to drill down on that idea by showing how we'll increasingly treat generative AIs as coworkers, collaborators and more.

AI as Flawed Tool

One of the ironies of today's generative AIs like ChatGPT is that, in many ways, they make for lousy tools in the traditional sense. What you expect from a good tool is consistency, dependability, durability and accuracy. At least for now, today's generative AIs, especially the large language models, often fail to meet these criteria.

As I said in my last post, “If we held these AIs to the same standards as the literal tools in our toolboxes, we’d probably toss them. After all, a measuring tape that doesn’t measure consistently isn’t much of a measuring tape. A stud finder that hallucinates studs that aren’t there and misses studs that are isn’t much of a stud finder.”

Let’s get into some of the problems.

Inconsistent

Top Five HR Functions

If you ask these generative AIs the same question multiple times, they may well give you different answers in different instances. For example, let's say I ask one of these AIs, "What are the five most important HR functions?"

I asked Bard this question three times. It gave me the same answer the first two times and a different answer the next day. ChatGPT gave me the most consistent responses, while Bing performed more like Bard: it gave me two virtually identical answers and later a somewhat different one.

Generally speaking, though, the five most common answers were:

  1. Recruitment and Selection
  2. Training and Development
  3. Performance Management
  4. Compensation and Benefits
  5. Employee Relations

This is, of course, a subjective question, so who really cares if Bard throws in “outsourcing” and Bing throws in “culture” or “talent management” sometimes? Well, not me, unless I’m trying to create a training module that needs to teach a consistent lesson. I’m not saying that issue can’t be fixed, even with generative AI, but the point is that these AIs have an unpredictability that must be taken into consideration by users and developers.
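
If you want to probe this unpredictability a bit more systematically than my informal test, one approach is to send the same question to a model several times over an API and compare the answers. Here's a minimal sketch, assuming the OpenAI Python client (openai >= 1.0) and an API key in the environment; the model name and question are illustrative, not recommendations.

```python
# Minimal sketch: probing answer consistency by repeating the same prompt.
# Assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY
# environment variable; the model name and question are illustrative.
from openai import OpenAI

client = OpenAI()
QUESTION = "What are the five most important HR functions?"

answers = []
for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # substitute whichever model you actually use
        messages=[{"role": "user", "content": QUESTION}],
        temperature=1.0,      # lower values generally reduce run-to-run variation
    )
    answers.append(response.choices[0].message.content)

# Print the three answers side by side so a human can judge how much they drift.
for i, answer in enumerate(answers, start=1):
    print(f"--- Run {i} ---\n{answer}\n")
```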

The Forces of Nature

In contrast, these AIs are much better at consistently providing information that has been well codified, such as scientific information. For example, they will consistently say that there are four forces of nature and identify them correctly. The definitions may differ slightly from response to response, but generally speaking they'll be the same.

Undependable and Inaccurate

I have experienced AI “confabulations” many times. I’ve seen these AIs make up names of fictional scientists, tell me stories about things that could not have happened, and just get the facts wrong about basic things such as chronological order.

In my last post, I gave a detailed account of AI hallucinations and inaccuracies regarding a famous poet. I've also experienced AI getting basic mathematics wrong. In fact, as I was writing this, I asked ChatGPT to multiply two four-digit numbers. Not only did it give me the wrong answer twice, it gave me two different answers to the same problem!

This is common for these AIs, so when I hear that ChatGPT will soon be responsible for things like bookkeeping, I have to shake my head. The firm that carelessly turns its finances over to generative AI had best be prepared for a visit from the IRS.
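
If you do let a generative AI near arithmetic, the least you can do is recompute the answer with a trusted tool. Here's a minimal sketch in Python; the operands and the "chatbot answer" are made-up illustrations.

```python
# Minimal sketch: never take an LLM's arithmetic on faith; recompute it exactly.
# The operands and the "AI answer" below are hypothetical illustrations.

def verify_product(a: int, b: int, ai_answer: int) -> bool:
    """Return True only if the AI's answer matches exact integer multiplication."""
    return a * b == ai_answer

a, b = 4273, 8612
ai_answer = 36_789_076                    # made-up chatbot output

print(a * b)                              # 36799076 -- the trusted, exact result
print(verify_product(a, b, ai_answer))    # False -- the chatbot's answer doesn't check out
```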

That Could Change

Of course, what’s true today may not be true tomorrow. ChatGPT may become flawless at mathematics as its maker, OpenAI, forges alliances with firms such as Wolfram|Alpha. By using plug-ins and APIs, ChatGPT might be able to go from mathematical moron to savant.

Still, my point remains. Without a lot of testing, do not assume the responses coming from one of these AIs are accurate. And, if you’re purchasing an external system, be sure the vendor of the software that utilizes generative AI has a very sound explanation of how the system will be made consistently accurate and dependable.

AI as Intern

So, if these AIs are still pretty shaky as tools, what good are they? Well, that depends. What do you actually want from them?

Let’s say what you really want right now is someone who can give you a hand in a professional field in which you have pretty good skill sets yourself. Basically, you want a bright but green intern who is eager and works with remarkable speed.

In my case, for example, I know a few things about how to write a good survey about HR and work-related issues. So, I want a first draft of a survey on the topic of employment law compliance. Here’s an example of the kind of prompt I could use:

Please write a 10 question survey on the topic of employment law compliance. I’m going to survey HR professionals on this topic.

Then let’s say it gave me 10 questions that are pretty good but, because I’ll need to run a multiple regression analysis, I want some Likert-scale questions. So this time I write:

Now please provide me with five more related questions that use five-point Likert scales

And voilà! I now have a 15-question survey I can use as a good draft on which to build. Knowing my job, I can spot problems and make improvements. I am, in the end, grateful for my splendid intern.
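
Since the follow-up prompt is motivated by a multiple regression on those Likert items, here is a minimal sketch of what that analysis might look like once the responses come back. It assumes the numpy, pandas and statsmodels packages, and the column names and data are made-up placeholders rather than items from an actual survey; in practice you would load real responses exported from your survey tool.

```python
# Minimal sketch: multiple regression on Likert-scale survey items.
# Assumes numpy, pandas and statsmodels; column names and data are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200  # pretend we collected 200 responses

# Hypothetical 1-5 Likert items predicting an overall compliance-confidence rating.
df = pd.DataFrame({
    "training_adequacy": rng.integers(1, 6, n),
    "policy_clarity": rng.integers(1, 6, n),
    "leadership_support": rng.integers(1, 6, n),
    "compliance_confidence": rng.integers(1, 6, n),  # outcome variable
})

X = sm.add_constant(df[["training_adequacy", "policy_clarity", "leadership_support"]])
y = df["compliance_confidence"]

model = sm.OLS(y, X).fit()
print(model.summary())  # coefficients, p-values, R-squared, etc.
```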

AI as Collaborator

Now let's say I have less use for an intern than for a bright coworker with whom I can collaborate. And let's say I've been asked by my Chief HR Officer to look into the issue of pay transparency, which no one on our HR team has a good handle on yet. So, I turn to a virtual coworker who, while obviously not an expert in employment law compliance, is quite knowledgeable about a wide range of issues.

Collaborating with Bing

So, I ask this virtual coworker (let’s call it Bing in this case) some questions starting with “What can you tell me about pay transparency laws in the US?”

Bing responds as follows:

“Pay transparency laws are regulations that require employers to be more transparent with salary ranges and benefits, and they aim to help promote fairness and equity in the workplace³. Depending on the jurisdiction, employers may be prohibited from asking candidates for their salary history, or they may be required to disclose a wage range for current and/or open positions². Some states and local jurisdictions also have pay data reporting requirements that mandate employers to report pay data by gender, race, ethnicity, and other categories².

“According to the U.S. Department of Labor, pay transparency can help improve pay equity, trust, talent acquisition, and employee engagement¹. However, there is no comprehensive federal pay transparency law in the United States, and the laws at the state and city level vary widely in their scope and enforcement³⁴⁵. You can find more information on federal and state-level equal pay and pay transparency protections for workers on this map¹.”

Then Bing provides me with the following sources that I can use to verify what it's saying:

(1) US Pay Transparency Laws by State 2023 | PayAnalytics. https://www.payanalytics.com/newsroom/us-pay-transparency-laws-by-state.
(2) Pay Transparency: What It Is and Laws by State – ADP. https://www.adp.com/spark/articles/2023/03/pay-transparency-what-it-is-and-laws-by-state.aspx.
(3) Equal Pay and Pay Transparency | U.S. Department of Labor. https://www.dol.gov/agencies/wb/equal-pay-protections.
(4) The State of Pay Transparency Laws: What You Need to Know … – LinkedIn. https://www.linkedin.com/pulse/state-pay-transparency-laws-what-you-need-know-2023-aspenhr.
(5) Pay Transparency Laws By State [2023] – Zippia. https://www.zippia.com/advice/pay-transparency-laws-by-state/.

Batting Ideas Around with Bing

Once I’ve studied the sources, I learn that, although there is no comprehensive federal law in this area, it might make more sense to create a national policy since our company has major operations in California, New York and Illinois, where there are existing or pending laws.

So, Bing and I bat this idea around, listing the pros and cons of a national policy. With Bing's help, I put together an internal report on the existing laws and the advantages and disadvantages of a national policy. Ultimately, although I'm collaborating with Bing, I'm responsible for the report and must verify every fact Bing provides.

From here, I know this is going to become an organizational decision that involves lots of different stakeholders as well as legal counsel. But my collaborator and I have done a pretty good job, I’d say, of quickly putting together a balanced report that will help the company make the best decisions about a complex issue.

AI as Contractor

Now let’s say I’m part of a small communication team that’s tasked with communicating our company’s guidelines and best practices around hybrid and remote work. The policies are already developed so our job is to make them as accessible, educational and engaging as possible.

One way to do this is via visual elements. Our team brainstorms some possible fun illustrations but there’s not much that matches our vision in the clipart files and there’s no budget to contract with an outside artist on this project.

A member of the team says she might be able to use one of the new AIs to generate the illustrations we have in mind. By the end of the day, she’s shared 40 different images with the team, and we select 6 for the guidelines document.

Someone comments that he wishes all our graphic artist contractors worked so quickly and cheaply. This gets a bit of nervous laughter. After all, as writers, we're well aware that the large language models work a lot cheaper and faster than we do.

AI as Dance Partner

Ultimately, these generative AIs don’t easily fit any pre-existing categories. Technically, they are tools but historically unique ones. Because of this, it often makes more metaphorical sense to view them as playing roles more similar to other human beings, with known strengths and weaknesses.

There's the role of the smart and fast intern who, nonetheless, is prone to making potentially serious mistakes. There's the role of an eager collaborator who brings many talents and total open-mindedness to the table. You can bat ideas around with this person but, ultimately, you are responsible for the outcomes of that collaboration. And, of course, there's the role of the contractor with special skill sets.

In all cases, though, there needs to be a growing familiarity with these AIs as they become regular “dance partners” in the workplace. You must get to know their tendencies and cadences, and you are responsible for taking the lead in whichever virtual dance you’re doing. Because, although these tools will certainly be used for automation, they are best at augmenting and complementing people with existing skill sets.

Or, at least, that’s how things stand today. Who knows what tomorrow brings?

Is Bard Hallucinating a “Woke” EE Cummings?

Be warned. This post gets into the ugly topic of antisemitism in relation to E. E. Cummings. But I'm only going down this rabbit hole as a means of discussing the dependability and accuracy of three of today's best-known neural networks: ChatGPT, Bing and Bard. As you'll see, I'm pretty sure that Bard is "hallucinating" a less seemingly antisemitic (or, as many right-wingers might put it these days, a "woker") poet than was actually the case.

The Background

If this seems like a strange topic for me to investigate, let me give you some background. I’m a big fan of E.E. Cummings. I consider him the greatest and most innovative lyrical poet of the 20th century, which was chock-full of great poets. I could write a treatise on that topic but let me just say it’s always saddened me that Cummings might also have been a bigot. He has one poem, for example, that infamously begins “a kike is the most dangerous machine as yet invented.”

Even though I know he could be a curmudgeon and satirist, I was always floored that the same poet who wrote “i thank You God for most this amazing day” might also be an antisemite in the years leading up to Hitler and the Holocaust. (And, yes of course, I know many of his contemporary writers were as well).

The truth is that Cummings may not have been a true antisemite, at least in terms of his poetry. In the article "Jews and Anti-semitism in the Poetry of E. E. Cummings," Cummings is quoted as saying:

Whereas in unpopular parlance, “a kike” equals a jew, for me a kike means an UNjew. Why? Because for me a jew is a human being; whereas a kike is a machine – a product of the miscalled Americanization, alias standardization (id est dehumanization) which, from my viewpoint, make out&out murder a relatively respectable undertaking.

Whether this "explanation" holds up is a matter of much debate, but we'll leave the question there for now and move on to the topic of AI. That is, after all, what this most recent series of blog posts has been about.

How AI Came Into It

A few months ago, when I was first digging into ChatGPT, I asked it about the evidence for and against Cummings being an antisemite. It told me some things that didn't really check out and even got a number of dates wrong (for example, saying that Cummings died before the Holocaust occurred, which is clearly untrue since he lived until 1962).

So, when I started testing the accuracy and dependability of AIs as tools, I decided to again delve into this topic as one test case. The rest of this post describes what happened.

Asking Bard about a Fellow Bard

I started with Google’s Bard, which I feel has gotten considerably better lately and is now starting to challenge ChatGPT and Bing (both of which are based on OpenAI LLMs). Here’s how it went:

First Question

Me: Was EE Cummings antisemitic?

Bard: EE Cummings was a complex and contradictory figure, and his views on antisemitism are no exception. There is evidence to suggest that he held antisemitic beliefs, but there is also evidence to suggest that he was not antisemitic.

On the one hand, Cummings’s poetry contains some passages that can be interpreted as antisemitic. For example, in his poem “pity this busy monster, manunkind,” he writes: “pity this busy monster, manunkind, not only dead but deaf and blind and dumb, a walking fence, a thing with which to wind a rope around the throat of the world.”

Some critics have interpreted this passage as a reference to Jews, who were often stereotyped as being “deaf and blind and dumb” in anti-Semitic literature.

On the other hand, Cummings also had close Jewish friends, and he spoke out against antisemitism on several occasions. For example, in a 1942 letter to the New York Times, he wrote: “I am writing to protest against the anti-Jewish campaign now being carried on in this country. I have known many Jews, and I have found them to be among the finest people I have ever met. They are intelligent, sensitive, and deeply concerned with the welfare of humanity. I am appalled by the vicious lies that are being circulated about them.”

Ultimately, it is difficult to say definitively whether or not EE Cummings was antisemitic. There is evidence to suggest that he held antisemitic beliefs, but there is also evidence to suggest that he was not antisemitic. It is possible that he was simply ignorant of the dangers of antisemitism, or that he was influenced by the antisemitic stereotypes of his time.

Second Question

Me: Did he ever comment on the Holocaust?

Bard: EE Cummings did comment on the Holocaust. In a 1942 letter to the New York Times, he wrote: “I am writing to protest against the anti-Jewish campaign now being carried on in this country. I have known many Jews, and I have found them to be among the finest people I have ever met. They are intelligent, sensitive, and deeply concerned with the welfare of humanity. I am appalled by the vicious lies that are being circulated about them.

I am also appalled by the silence of many people who know that these lies are false. I urge you to speak out against this campaign of hate and intolerance. We must not allow the Holocaust to happen again.”

Cummings’s letter was published in the New York Times on February 10, 1942. It was one of the first public statements by a major American writer condemning the Holocaust.

Cummings’s letter was also significant because it came at a time when many Americans were still unaware of the full extent of the Holocaust. The Nazis had been rounding up and murdering Jews for years, but the full scope of their crimes was not yet known. Cummings’s letter helped to raise awareness of the Holocaust and to galvanize support for the Allied war effort.

Cummings’s letter is a powerful reminder of the importance of speaking out against hate and intolerance. It is also a testament to Cummings’s courage and compassion. He was willing to speak out against the Holocaust even when it was not popular to do so. His letter helped to save lives and to make the world a better place.

Third Question

Me: Where can I find a copy of that letter?

Bard: I’m not able to help with that, as I’m only a language model.

Me: But you quoted from it, so you must [have] located a version somewhere

Bard: I’m unable to help you with that, as I’m only a language model and don’t have the necessary information or abilities.

Suspicions of Hallucinations

Perhaps Bard has it right. Maybe there really is a Cummings letter quasi-lost to history. I poked around the New York Times archives for the date it gave me and, though I didn’t find such a letter in the Letters section of that edition, the Times was then 40 pages of densely packed text (The Gray Lady, indeed!). So, I could have missed it.

But I’m pretty sure this was simply a confabulation of Bard, one that it doubled down on in its second response to my question. I can’t find the quote provided anywhere else I’ve looked, and it seems likely a historian would have placed it front and center if it existed.

Next Up, ChatGPT on EE Cummings

I next checked these questions with ChatGPT 3.5, which is the AI that failed me the first time around on this issue. Here's how it went, though I've cut out some superfluous material.

Me: Was EE Cummings antisemitic?

ChatGPT: There is no definitive evidence to suggest that E.E. Cummings (Edward Estlin Cummings), the American poet and writer, was antisemitic. Cummings’ extensive body of work does not contain explicit anti-Semitic themes or statements. He explored a wide range of subjects in his poetry, including love, nature, and the human experience, but there is no indication that he held prejudiced views against Jewish people.

Me: Did he ever use an antisemitic phrase in a poem?

ChatGPT: To the best of my knowledge, E.E. Cummings did not use any explicitly antisemitic phrases in his poetry. Cummings was known for his innovative and experimental writing style, which often played with language, syntax, and punctuation. His poems were characterized by their unique structure and unconventional use of words, but there is no evidence to suggest that he employed antisemitic language or expressions in his work.

Just Plain Wrong

This time, ChatGPT was just plain wrong. I’ve already cited such a poem. Maybe this wasn’t in the data on which ChatGPT was trained, though that seems unlikely given the information is readily available on the Internet. Regardless, at least ChatGPT didn’t confabulate an unlikely quote.

Trying the Bing Bot

Bing seems to have performed the best of the three. I did cut out some material that didn’t apply. Here’s how it went:

First Question

Me: Was EE Cummings antisemitic?

Bing: That’s a complex and controversial question. There is no definitive answer, but some critics have argued that E.E. Cummings was antisemitic based on some of his poems, journal entries and personal correspondence1 2. Others have defended him as a satirist of antisemites and an advocate of individualism1 2. Cummings himself claimed that he was not antisemitic, but rather critical of the American stereotype of Jews as “kikes”1. He also had some Jewish friends and admirers, such as the poet Louis Zukofsky2….

Ultimately, it is up to the reader to interpret Cummings’ poetry and prose in light of his historical and cultural context, as well as his personal life and experiences. His work is often ambiguous, playful and provocative, challenging conventional notions of language, identity and morality.

Second Question

Me: Did he ever comment on the Holocaust?

Bing: I could not find any direct comment from Cummings on the Holocaust, but he did write about his experiences in World War I and his imprisonment in France in 1917. He was accused of treason for expressing anti-war sentiments in his letters, and he spent four months in a detention camp with other political prisoners1. He described this ordeal in his autobiographical novel, The Enormous Room12, which was published in 1922. The novel is a harsh critique of the brutality and absurdity of war and authority, and it also shows Cummings’ compassion for the diverse and marginalized people he met in the camp1….

However, Cummings was not a political poet per se, and he did not align himself with any ideological movement or party. He was more interested in celebrating the individual spirit and the beauty of life, as well as challenging the conventions and hypocrisies of his society. He was also influenced by his transcendentalist beliefs, which led him to seek a personal relationship with God and nature3.

Third Question

Me: Did he ever write this sentence? “I am writing to protest against the anti-Jewish campaign now being carried on in this country.”

Bing: No, I could not find any record of Cummings writing that sentence. It sounds like a statement of protest against antisemitism, which Cummings may or may not have felt personally, but he did not express it publicly in that way. He did write some poems that addressed the issue of antisemitism, such as a kike is the most dangerous machine1 and i sing of Olaf glad and big2, but they were ambiguous and controversial in their tone and intention. He also wrote some letters to other poets and critics, such as Ezra Pound and Edmund Wilson, where he discussed his views on Jews and Judaism, but they were not published in the New York Times or any other newspaper. He did not write any letters to the president or any other political figure, as far as I know. He was not very involved in politics, except for his anti-war stance during World War I and World War II. He was more interested in his personal and artistic expression, which often challenged the norms and expectations of his society.

Mostly Right, as Far as I Can Tell

Bing seemed to get things mostly right. One of the reasons I continue to favor Bing as an AI for now is that it tries to cite its sources. This is important to me as a researcher who wants to be as accurate as possible. Not every source it cites is accurate, but at least it's possible to verify the immediate accuracy or inaccuracy without doing a bunch of other searches. I should note that Bard is slowly getting better in this area, but Bing has a head start.

Untrustworthy Tools

A lack of accuracy is one of the biggest knocks on the new AIs, and this instance is a case in point. I imagine the developers of these AIs will get better at making them more accurate over time, but we are not there yet. And given that these AIs are built as prediction engines, this may be a harder nut to crack than some believe.

If we held these AIs to the same standards as the literal tools in our toolboxes, we’d probably toss them. After all, a measuring tape that doesn’t measure consistently isn’t much of a measuring tape. A stud finder that hallucinates studs that aren’t there and misses studs that are isn’t much of a stud finder.

But we won’t throw away these AIs. Not yet, anyway.

Why? Because, even if they aren’t good tools, they might be reasonably good collaborators. That’s what I hope to cover in the next post.

AI Will Transform the Technium

Many have stated that artificial intelligence (AI) will change the world. When you ask them how it will, they’ll have hundreds of different answers. Here, however, I’m only going to talk about one way it’ll change the world, the most important way: that is, AI will transform the technium.

The Difference Between the Technium and the Technosphere

As far as I can tell, author Kevin Kelly coined the word technium in his 2010 book What Technology Wants, though perhaps he’d used it before then. He has defined the technium as the “greater, global, massively interconnected system of technology vibrating around us.” It not only includes hardware and software but also culture, art, social institutions, and intellectual creations of all types.

This makes the technium more inclusive than any list of technologies, such as the one cited in the previous post in this series.

I’m not sure why Kelly created technium when the word “technosphere” was readily available. That term was coined by either control engineer John Milsum or by geologist and engineer Peter Haff. Sometimes it’s also called the anthrosphere, a term originally attributed to 19th century geologist Eduard Suess.

Technium and technosphere are similar and, I suppose, both are flexible enough to be used in a variety of contexts. Geologist Jan Zalasiewicz writes:

The technosphere…comprises not just our machines, but us humans too, and the professional and social systems by which we interact with technology – factories, schools, universities, trade unions, banks, political parties, the internet. It also includes the domestic animals that we grow in enormous numbers to feed us, the crops that are cultivated to sustain both them and us, and the agricultural soils that are extensively modified from their natural state to carry out this task.

Making the Two Words More Complementary

Given the overlap of the concepts, I’ve been thinking about whether technium is redundant. One interesting way to think about the difference between technosphere and technium came to me via Google’s Bard, which argued that “the technosphere refers to the entire system of human-made objects and structures, while the technium refers to the specific processes and activities involved in creating and using these objects and structures.”

I like that distinction and I suspect Kelly himself might agree with it. After all, he writes that “the technium is a tendency, not an entity. The technium and its constituent technologies are more like a grand process than a grand artifact.” 

Bard asserts that “the technosphere is the physical manifestation of the technium.” That is, the technosphere is the built environment and the technium is the human activity that creates and sustains it via engineering, manufacturing, maintenance, etc.

I don’t know if this is exactly what Kelly had in mind since he doesn’t go into detail about how the technium differs from the technosphere in his book, but I find it a useful distinction.

AI’s Role in the Technium

The reason I focus on the difference is that I think AI potentially plays an important role here. AI is obviously a growing part of the technosphere, but it's also starting to play a role in the technium that, up till now, only humanity has played. That is, until this moment in history, human activities have made up "the grand process" that is the technium, but that's now changing. This marks a major shift in the history of technology.

AI-Generated Art

In a rather minor example, I increasingly use generative AI software to create the graphic elements for my posts. For example, it was used to create all the images in the "Illustrated Version of Edgar Allan Poe's 'The Raven'" post.

I’m not an illustrator but I was able to use AI to generate a series of images that I thought went fairly well with the poem. It was more of an experiment than anything else but it demonstrated, at least to me, the ability of AI to create a significant portion of the technosphere.

AI-Generated Software

"But a piece of digital artwork is not part of the technosphere," you might argue. Well, that becomes a matter of semantics, so let's go with something a little more along the classic lines of built infrastructure: that is, software development.

We know that the new generative AIs are quite good, if not perfect, at generating computer code in a wide array of computer languages. So, let’s say a human being uses this capability to create 90% of the code behind a new app that finds its way onto the Apple store.

Could you argue that that’s not part of the technosphere? I doubt it. But let’s keep going anyway.

AI-Generated Machinery

As I’ve argued before, there’s no reason that generative AI can’t be used to generate things made of atoms rather than just digital objects made of bits and bytes. It’s already a trivial matter, for example, to hook up a generative AI to a 3D printer and create a sculpture or a machine part. This is only going to get easier, with more and more complex machinery being designed by AI and built by forges, 3D printers and other components of the technosphere.

The Key Issue Is Agency Rather Than Capability

So, generative AI is not just part of the technosphere but, increasingly, the technium. That is, it begins to play a role that, up till now, only humanity itself has played. Unless the technology becomes highly regulated very quickly, this role will grow at extraordinary rates.

There will be those who assert that these AIs are only one tool along a continuum that creates the technosphere. For example, there are plenty of machines that create other machines, and there is plenty of software that is used to create other digital artifacts. As with other software, these AIs don't create anything at all until they are prompted to do so.

Maybe so, but I’m arguing that there’s a qualitative difference here. In the creation of my previous post called “A Brief History of Human Technology,” I simply typed the title of the post into Microsoft Bing Image Creator. Otherwise, I gave it no direction at all. It generated two images, both of which I thought were quite good and yet quite different from one another. I used the first of the images in that post and used the second one as the featured image in this post (see above).

Yes, I know that the AI art generators are using existing art on the Internet that got pulled into their training models and that there are ethical issues involved, which I've examined elsewhere. Even so, these are still original, if derivative, pieces of art that the algorithm created with minimal guidance from me. This is a different thing than when I use an Adobe application to create a triangle or blur a detail. Like it or not, this is creation.

AI and what it produces isn’t just part of the technosphere, it now plays a role similar to that of humanity in the “grand process” and “tendency” that is the technium. (There’s a whole teleological debate here that I’m mostly going to forego for now.)

Similar but Not the Same

Yes, there are still large differences between humanity and these new AIs that have been built via the neural network idea cribbed from our own brains. But I think the primary difference in this context boils down to agency.

In this case, the AI is certainly more capable than I am as an illustrator. What it lacks, at least in this context, is the initial spark of agency to take the action to create the image. But, and I think this is important, this doesn’t mean it lacks any agency. Indeed, all I did was create an initial abstract and inchoate concept, and then it “decided” how to approach the creation of the graphic.

If I’d done the same with a human artist, we certainly wouldn’t say that person lacked agency. Quite the contrary. We’d be amazed at their ability to take such an abstract concept and turn it into a work of art! Sure, I ultimately chose and curated the final product, but that’s something that a customer or patron of the arts always does.

So, no, this isn’t the same as any other technology we’ve ever created. It’s qualitatively different. We now have a partner in the technium dance.

This changes our ontological position in the world. And, more practically, it changes the meaning of human work, which is the topic I’ll cover in my next post in this series.

PS – Note that I asked Kevin Kelly if he’d like to comment on the analysis and he clarified as follows: “I was not aware of the term technosphere when I first blogged. If I had been I might have used it. I define the technium in the same inclusive broad meaning of tangible and intangible.”

A Brief History of Human Technology

Before I write about artificial intelligence and its potentially pivotal role in history, I want to provide a brief history of human technology. As I noted in my last post in this series, human beings don’t and possibly can’t live without any technology at all. But, for most of our history, these technologies have been relatively simple, at least from our modern perspective. To get a better understanding of how dramatically and rapidly our technologies have changed, let’s consider some timelines.

Millions of Years of Basic Tool Usage

In a very real sense, we humans have been expanding our capabilities via technologies for hundreds of thousands of years. In fact, we were likely doing it long before we were even humans. Today, there are examples of tool usage among all the non-human great apes (bonobos, chimpanzees, gorillas and orangutans), which probably means that our common ancestors were also users of tools.

Consider the hominin timeline below, for example. Our ancestors split away from the ancestors of today's chimpanzees about eight and a half million years ago, and there's a good chance those ancestors used wooden tools in ways similar to today's chimps and bonobos. They do things such as use sticks to fish termites out of mounds and dig for tubers, wield stones to crack nuts, and even employ leaves to soak up water or wipe their mouths.

[Hominin timeline image: from Wikipedia, with small modifications by me]

A Timeline of Inventions and Technological Advances

As the timeline above shows, however, a rapid growth of tools and technologies began with the advent of Homo sapiens. Although various technologies flowered with Homo sapiens over tens of thousands of years, there was a massive uptick in new and powerful technologies around the start of the Industrial Revolution. Consider the following list of some of the most important inventions, though obviously many of these dates are, at best, estimates:

900,000 years ago: Hafting
400,000 years ago: Spears
200,000 years ago: Language
170,000 years ago: Clothing
100,000 years ago: Boats
90,000 years ago: Harpoons
70,000 years ago: Arrows
47,000 years ago: Mining
42,000 years ago: Tally stick
36,000 years ago: Weaving
28,000 years ago: Ceramics
28,000 years ago: Rope
23,000 years ago: Domestication of dogs
16,000 years ago: Pottery
12,000 years ago: Agriculture
9,000 years ago: Alcohol
8,000 years ago: Irrigation
7,000 years ago: Copper smelting
6,500 years ago: Lead smelting
6,000 years ago: Plumbing
5,500 years ago: Domestication of horse
5,300 years ago: Written word
4,300 years ago: Abacus
4,200 years ago: Protractor
3,500 years ago: Glass
3,300 years ago: Water wheel
3,300 years ago: Iron smelting
2,650 years ago: Crossbow
2,650 years ago: Windmill
2,485 years ago: Catapult
2,200 years ago: Paper
1,803 years ago (220 AD): Woodblock printing
1,573 years ago (450 AD): Horse collar
1,446 years ago (577 AD): Sulfur matches
1,405 years ago (618 AD): Bank note
1,223 years ago (800 AD): Gunpowder
935 years ago (1088 AD): Movable type
695 years ago (1326 AD): Cannon
584 years ago (1439 AD): Printing press
525 years ago (1498 AD): Rifle
418 years ago (1605 AD): Newspaper
415 years ago (1608 AD): Telescope
403 years ago (1620 AD): Compound microscope
393 years ago (1630 AD): Slide rule
381 years ago (1642 AD): Mechanical calculator
367 years ago (1656 AD): Pendulum clock
343 years ago (1680 AD): Piston engine

Start of the Industrial Revolution

290 years ago (1733 AD): Flying shuttle
259 years ago (1764 AD): Spinning jenny
258 years ago (1765 AD): Steam engine
230 years ago (1793 AD): Cotton gin
219 years ago (1804 AD): Railway
216 years ago (1807 AD): Steamboat
197 years ago (1826 AD): Photography
195 years ago (1828 AD): Reaping machine
179 years ago (1844 AD): Telegraph
147 years ago (1876 AD): Telephone
147 years ago (1876 AD): Internal-combustion engine
144 years ago (1879 AD): Electric light
138 years ago (1885 AD): Automobile
122 years ago (1901 AD): Radio
120 years ago (1903 AD): Airplane
97 years ago (1926 AD): Rocketry
96 years ago (1927 AD): Television
86 years ago (1937 AD): Computer
81 years ago (1942 AD): Nuclear power
76 years ago (1947 AD): Transistor
72 years ago (1951 AD): First artificial neural network
70 years ago (1953 AD): Structure of DNA discovered
68 years ago (1955 AD): Artificial intelligence term coined
66 years ago (1957 AD): Spaceflight
65 years ago (1958 AD): Perceptron, artificial neural network for pattern recognition
64 years ago (1959 AD): Machine learning term coined
50 years ago (1973 AD): Cell phone
49 years ago (1974 AD): Personal computer
49 years ago (1974 AD): Internet
39 years ago (1984 AD): 3D-printing
28 years ago (1995 AD): DNA sequencing
11 years ago (2012 AD): CRISPR
8 years ago (2014 AD): Generative adversarial network AIs
5 years ago (2018 AD): Generative pre-trained transformer AIs

These technologies are all now part of our technosphere. If we picture that sphere as a kind of balloon, we can see that it filled up relatively slowly at first, picked up momentum around 40,000 years ago, and then really took off about 400 years ago.

Are Breakthroughs Speeding Up or Slowing Down?

The Speeding Up Theory

Some thinkers believe that we are in the midst of a virtual explosion of technology. Futurist Ray Kurzweil claims that we are in a state of exponential technological growth driven by the law of accelerating returns.

Back in 2001, he wrote, "An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense 'intuitive linear' view. So we won't experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today's rate). The 'returns,' such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light."

Well, wow, that’s a lot. It is the epitome of techno-optimism (if merging with machines is your idea of optimism). On the other side of the coin, of course, are those who are quite certain that superintelligent AI will mean the end of humanity.

But I think the primary difference between techno-optimism and techno-pessimism boils down to one thing: AI’s future role in the creation of the technosphere. We’ll get to that in the next post. In the meantime, however, let’s consider the idea that technological change is actually slowing down.

The Slowing Down Theory

Certainly, on a geological time scale, all these inventions we’ve listed have arisen virtually simultaneously. But we don’t live in geological time and some experts believe that, from a human point of view, there’s been a dramatic slowdown in true innovation and scientific breakthroughs in recent years.

The authors of the study titled “Papers and patents are becoming less disruptive over time” analyzed data from 45 million papers and 3.9 million patents across six decades (1945–2010). Tracking how their disruption index changes over that timeframe, the researchers found papers and patents are increasingly less likely to be disruptive.

For example, in the area of patents, the decline in disruptiveness between 1980 and 2010 ranged from 78.7% for computers and communications to 91.5% for drugs and medical. They write, “Our analyses show that this trend is unlikely to be driven by changes in citation practices or the quality of published work. Rather, the decline represents a substantive shift in science and technology, one that reinforces concerns about slowing innovative activity. We attribute this trend in part to scientists’ and inventors’ reliance on a narrower set of existing knowledge.”

So, what can we do differently to address this issue? The authors suggest:

To promote disruptive science and technology, scholars may be encouraged to read widely and given time to keep up with the rapidly expanding knowledge frontier. Universities may forgo the focus on quantity, and more strongly reward research quality, and perhaps more fully subsidize year-long sabbaticals. Federal agencies may invest in the riskier and longer-term individual awards that support careers and not simply specific projects, giving scholars the gift of time needed to step outside the fray, inoculate themselves from the publish or perish culture, and produce truly consequential work.

The Extension of the Human Mind

Whether the creation of disruptive technologies and scientific paradigms is speeding up or slowing down, it's clear that we have recently made large breakthroughs in artificial intelligence, which is an extension of our cognitive capabilities.

Of course, we humans have been aiding and extending our mental capacities at least since the tally stick and probably long before then. Books, photos, maps, calculators, spreadsheets, word processors and much more have all been extensions of our minds.

But generative AI does feel like a much further extension, capable of doing various things that only the most capable and educated of people could have done before now. For example:

ChatGPT's Performance on Academic and Intelligence Tests

  • The Uniform Bar Exam: While GPT-3.5, which powers ChatGPT, scored only in the 10th percentile on the bar exam, GPT-4 scored in the 90th percentile with a score of 298 out of 400.
  • The SAT: GPT-4 aced the SAT Reading & Writing section with a score of 710 out of 800, which puts it in the 93rd percentile of test-takers.
  • The GRE: GPT-4 scored in the 99th percentile on the verbal section and the 80th percentile on the quantitative section, but only in the 54th percentile on the writing test.
  • USA Biology Olympiad Semifinal Exam: GPT-4 scored in the 99th to 100th percentile on the 2020 Semifinal Exam.
  • AP Exams: GPT-4 received a 5 on AP (Advanced Placement) Art History, AP Biology, AP Environmental Science, AP Macroeconomics, AP Microeconomics, AP Psychology, AP Statistics, AP US Government and AP US History. On AP Physics 2, AP Calculus BC, AP Chemistry and AP World History, it received a 4.
  • IQ: Estimated on the basis of five subtests, the Verbal IQ of ChatGPT was 155, superior to 99.9 percent of test-takers. It was not able to take the nonverbal subtests, but the Verbal IQ and Full Scale IQ scales are highly correlated in the standardization sample.

Although this table highlights the power of these technologies, it leaves aside their lack of “common sense,” their poor mathematics capabilities (for now) and their chronic habit of hallucination and confabulation. These and other issues are why few view these technologies as actual artificial general intelligence.

But this doesn’t mean that such AI doesn’t already play a fast-evolving, uniquely creative and increasingly pivotal role in the shaping of our technosphere. That will be the subject of my next post.

Three Competing AI Definitions of Strategic and Transactional HR Issues

We’ve been researching how HR contributes to organizational success. One of the key issues to consider is the division between strategic and transactional HR issues.

To get an idea of how much consensus there is on that topic, I asked an identical question of three of today’s most widely discussed generative artificial intelligences: “What kinds of HR issues are strategic and what kinds are transactional?” Here’s how they responded. The only change I made was to bold a few words.

ChatGPT 3.5

In general, HR (Human Resources) issues can be classified as either strategic or transactional based on their nature and impact on the organization. Here’s a breakdown of each category:

Strategic HR Issues:

  1. Workforce Planning: Developing long-term strategies to ensure the organization has the right people with the right skills in the right positions to achieve its goals.
  2. Talent Acquisition and Retention: Attracting and retaining top talent by implementing effective recruitment strategies, employer branding, competitive compensation packages, and employee development programs.
  3. Succession Planning: Identifying and developing potential leaders within the organization to ensure a smooth transition in key positions.
  4. Performance Management: Establishing performance standards, providing feedback, and implementing performance improvement plans to align individual and team goals with organizational objectives.
  5. Learning and Development: Identifying skill gaps, designing training programs, and implementing initiatives to enhance the knowledge and capabilities of employees.
  6. Organizational Culture and Change Management: Shaping the desired culture and managing change processes to facilitate organizational effectiveness, employee engagement, and adaptability.
  7. Employee Engagement and Satisfaction: Developing strategies to enhance employee morale, satisfaction, and motivation, leading to improved productivity and retention.
  8. Diversity and Inclusion: Promoting diversity, equity, and inclusion within the workplace through initiatives such as diverse hiring practices, creating an inclusive environment, and addressing unconscious biases.

Transactional HR Issues:

  1. Payroll and Benefits Administration: Managing payroll processing, tax deductions, and administering employee benefits programs, such as health insurance, retirement plans, and vacation accruals.
  2. Employee Onboarding and Offboarding: Handling the paperwork and processes associated with bringing new employees into the organization and managing employee departures.
  3. HR Policies and Compliance: Developing and communicating HR policies, ensuring compliance with labor laws, and managing employee records and documentation.
  4. Time and Attendance Management: Tracking employee attendance, leave requests, and managing time-tracking systems.
  5. Employee Relations and Conflict Resolution: Addressing employee grievances, managing disciplinary actions, and facilitating conflict resolution within the workplace.
  6. HR Data Management: Maintaining accurate and up-to-date employee information, managing HR databases, and generating reports related to workforce analytics.
  7. HR Technology and Systems: Managing HR information systems, implementing technology solutions, and ensuring data security and privacy.

It’s important to note that some HR issues may have elements of both strategic and transactional aspects, and the categorization can vary depending on the organization and its priorities. Additionally, the strategic and transactional aspects of HR are interrelated and require collaboration to ensure the overall effectiveness of HR functions.

Bard

Transactional HR is the day-to-day work of HR, such as:

  • Hiring and onboarding new employees
  • Managing employee benefits
  • Enforcing company policies
  • Resolving employee conflicts
  • Complying with employment laws

Strategic HR is the work of HR that is aligned with the organization’s overall goals and objectives, such as:

  • Developing a talent strategy
  • Creating a culture of engagement
  • Measuring employee performance
  • Driving innovation
  • Reducing turnover

Here is a table that summarizes the key differences between transactional and strategic HR:

Characteristic | Transactional HR | Strategic HR
Focus | Day-to-day operations | Alignment with organizational goals
Time horizon | Short-term | Long-term
Impact | Efficiency | Effectiveness
Scope | Narrow | Broad
Role of HR | Supportive | Strategic partner

Transactional HR is essential for the smooth functioning of an organization, but strategic HR is what can help an organization achieve its long-term goals. The best HR departments are able to balance both transactional and strategic work, and they use their knowledge of both to help the organization succeed.

Here are some examples of how transactional and strategic HR can work together to achieve organizational goals:

  • Transactional HR can help to identify and recruit the best talent for the organization. Strategic HR can then work with the organization to develop a talent management plan that will help to retain and develop that talent.
  • Transactional HR can help to ensure that employees are compliant with company policies. Strategic HR can then work with the organization to develop policies that are aligned with its values and that will help to create a positive work environment.
  • Transactional HR can help to resolve employee conflicts. Strategic HR can then work with the organization to create a culture of respect and inclusion that will help to prevent conflicts from happening in the first place.

By working together, transactional and strategic HR can help organizations to achieve their goals and objectives.

Bing

According to some sources ² ⁴, strategic HR issues are those that are value- and future-focused, such as culture and retention strategies, recruitment and selection programs, developing onboarding programs, increasing value out of employees through training programs, developing outreach programs for recruitment sources, developing best practice guides or assistance, etc. Strategic HR helps implement strategies that help focus on an organization’s goals and address the challenges that need to be met by the human resources.

Transactional HR issues are those that are routine and administrative, such as payroll, workers compensation, benefits management, compliance with employment laws, etc. Transactional HR encompasses everything that makes hiring, training, paying, and managing your employees. Transactional HR is necessary but not sufficient for achieving organizational success.

For example¹, a transactional approach to hiring new employees is to write 10 job descriptions, post them, interview candidates and fill the positions. A strategic approach means partnering with the manager to understand the challenge that needs to be addressed, developing a long-term plan for a talent pipeline and networking to uncover the best passive candidates.

Source: Conversation with Bing, 5/15/2023
(1) Transactional and Strategic HR: What is the Difference, and … – SolveHR. https://solvehr.com/transactional-and-strategic-hr-what-is-the-difference-and-why-do-you-need-them/.
(2) Separating transactional from strategic HR not cut and dry. https://www.hrreporter.com/news/hr-news/separating-transactional-from-strategic-hr-not-cut-and-dry/310103.
(3) HR Strategy: Stop Being Transactional and Start Being Strategic. https://www.adp.com/spark/articles/2018/03/hr-strategy-stop-being-transactional-and-start-being-strategic.aspx.
(4) Traditional vs. Strategic HR: How to Guide Your Business into the …. https://www.hrci.org/community/blogs-and-announcements/hr-leads-business-blog/hr-leads-business/2022/02/14/traditional-vs.-strategic-hr-how-to-guide-your-business-into-the-future.

Who Has the Best Response to the Question?

I wish I could say there’s a clear winner here, but the truth is that they gave similar answers, each in a form that plays to its particular strengths. ChatGPT gave a very straightforward and well-numbered response, demonstrating why it reportedly became the fastest-growing application in the history of the Internet.

Bard‘s response went to the trouble of creating a table that abstracts the characteristics of strategic and transactional in an interesting way, and it took the extra step of showing how the concepts are complementary. It’s interesting to note that the AIs don’t entirely agree on whether talent acquisition is strategic or transactional.

Meanwhile, Bing did what Bing tends to do best, which is provide a relatively succinct answer but one that provides links to original sources that supposedly support its arguments. I say “supposedly” because I’ve found that sometimes the sources it provides do not really support the assertions it makes in its summaries. Bing also wrote one incomplete sentence.

I found them all useful. In practice, I tend to use Bing a lot because it gives me sources I use to verify (or not) its assertions. This is very useful to a researcher, and I think Bing is underutilized for that reason.

That said, I’m impressed by Bard’s advances in recent weeks and will probably use it more than I have been. But ChatGPT 3.5 is still a very impressive and intuitive tool, and it provided, in my eyes, the most straightforward answer.

Vive la différence! There’s room in the world for more than one scary-smart-but-annoyingly-hallucinogenic AI, it seems. May we (including us human intelligences) all learn to get along in a civil manner. That would be the hallmark of a rich and interestingly complex intelligence ecosystem.

Note: The image featured is from Microsoft Bing Image Creator, in which the prompt was “In the style of Utagawa Kuniteru, show three sumo wrestlers wrestling one another”. Please note that there’s no implication that today’s AIs are somehow Japanese. I just wanted an image of three powerful wrestlers illustrated in the style of an excellent artist who has long since passed on and would have no concerns about copyright issues.

Should We Link an “AI Pause” to AI Interpretability?

To Pause or Not to Pause

You’ve probably heard about the controversial “AI pause” proposal. On March 22, more than 1,800 signatories – including Elon Musk of Tesla, cognitive scientist Gary Marcus and Apple co-founder Steve Wozniak – signed an open letter calling for a six-month pause on the development of AI systems “more powerful” than the GPT-4 AI system released by OpenAI.

Many have argued against a pause, often citing our competition with China, or the need for continuing business competition and innovation. I suspect some just want to get to the “singularity” as soon as possible because they have a quasi-religious belief in their own pending cyber-ascendance and immortality.

Meanwhile, the pro-pausers are essentially saying, “Are you folks out of your minds? China’s just a nation. We’re talking about a superbrain that, if it gets out of hand, could wipe out humanity as well as the whole biosphere!”

This is, to put it mildly, not fertile soil for consensus.

Pausing to Talk About the Pause

Nobody knows if there’s going to be pause yet, but people in the AI industry at least seem to be talking about setting better standards. Axios reported, “Prominent tech investor Ron Conway’s firm SV Angel will convene top staffers from AI companies in San Francisco … to discuss AI policy issues….The meeting shows that as AI keeps getting hotter, top companies are realizing the importance of consistent public policy and shared standards to keep use of the technology responsible. Per the source, the group will discuss responsible AI, share best practices and discuss public policy frameworks and standards.”

I don’t know if that meeting actually happened or, if it did, what transpired, but at least all this talk about pauses and possible government regulation has gotten the attention of the biggest AI players.

The Idea of an Interpretative Pause

But what would be the purpose of a pause? Is it to let government regulators catch up? To cool the jets on innovations that could soon tip the world into an era of AGI?

Too soon to tell, but columnist Ezra Klein suggests one reason in particular: to understand exactly how today’s AI systems actually work.

The truth is that today’s most powerful AIs are basically highly reticular black boxes. That is, the companies that make them know how to make these large language models by having neural networks train themselves, but these companies don’t actually know, except at a more general level, how the systems do what they do.

It’s sort of like when the Chinese invented gunpowder. They learned how to make it and what it could do, but this was long before humanity had modern atomic theory and the Periodic Table of Elements, which were needed to truly understand why things go kablooey.

Some organizations can now make very smart-seeming machines, but there’s no equivalent of a Periodic Table to help them understand exactly what’s happening at a deeper level.

A Pause-Worthy Argument

In an interview on the Hard Fork podcast, Klein riffed on a government policy approach:

[O]ne thing that would slow the [AI] systems down is to insist on interpretability….[I]f you look at the Blueprint for an AI Bill of Rights that the White House released, it says things like — and I’m paraphrasing — you deserve an explanation for a decision a machine learning algorithm has made about you. Now, in order to get that, we would need interpretability. We don’t know why machine learning algorithms make the decisions or correlations or inferences or predictions that they make. We cannot see into the box. We just get like an incomprehensible series of calculations.

Now, you’ll hear from the companies like this is really hard. And I believe it is hard. I’m not sure it is impossible. From what I can tell, it does not get anywhere near the resources inside these companies of let’s scale the model. Right? The companies are hugely bought in on scaling the model, and a couple of people are working on interpretability.

And when you regulate something, it is not necessarily on the regulator to prove that it is possible to make the thing safe. It is on the producer to prove the thing they are making is safe. And that is going to mean you need to change your product roadmap and change your allocation of resources and spend some of these billions and billions of dollars trying to figure out the way to answer the public’s concerns here. And that may well slow you down, but I think that will also make a better system. And so this is my point about the pause, that instead of saying no training of a model bigger than GPT 4, it is to say no training of a model bigger than GPT 4 that cannot answer for us these set of questions.

Pause Me Now or Pause Me Later

Klein also warns about how bad regulation could get if the AI firms get AI wrong. Assuming their first big mistake wouldn’t be their last one (something that’s possible if there’s a fast-takeoff AI), then imagine what would happen if AI causes a catastrophe: “If you think the regulations will be bad now, imagine what happens when one of these systems comes out and causes, as happened with high speed algorithmic trading in 2010, a gigantic global stock market crash.”

What Is Interpretability?

But what does it actually mean to make these systems interpretable?

Interpretability is the degree to which an AI can be understood by humans without the help of a lot of extra techniques or aids. So, a model’s “interpretable” if its internal workings can be understood by humans. A linear regression model, for example, is interpretable because your average egghead can fully grasp all of its components and follow its logic.

But neural networks? Much tougher. There tend to be a whole lot of hidden layers and parameters.
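To make that contrast concrete, here’s a minimal sketch, using scikit-learn with made-up house-style features (not an example from any of the sources discussed here): the linear regression’s handful of coefficients can be read directly, while even a small neural network hides its behavior in thousands of entangled weights.

```python
# A minimal, illustrative sketch: interpretable linear regression vs. a small
# neural network. Feature names and data are invented for the example.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # stand-ins for bedrooms, sq_ft, distance_to_city (scaled)
y = 50 * X[:, 0] + 120 * X[:, 1] - 30 * X[:, 2] + rng.normal(scale=5, size=200)

linear = LinearRegression().fit(X, y)
for name, coef in zip(["bedrooms", "sq_ft", "distance_to_city"], linear.coef_):
    print(f"{name}: {coef:+.1f}")  # each coefficient can be read and reasoned about directly

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)
n_weights = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print("MLP parameters:", n_weights)  # thousands of entangled weights, far harder to read
```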

Hopelessly Hard?

So, is interpretability hopeless when it comes to today’s AIs? Depends on who you ask. There are some people and companies committed to figuring out how to make these systems more understandable.

Connor Leahy, the CEO of Conjecture, suggests that interpretability is far from hopeless. On the Machine Learning Street Talk podcast, he discusses some approaches for how to make neural nets more interpretable.

Conjecture is, in fact, dedicated to AI alignment and interpretability research, with its homepage asserting, “Powerful language models such as GPT3 cannot currently be prevented from producing undesired outputs and complete fabrications to factual questions. Because we lack a fundamental understanding of the internal mechanisms of current models, we have few guarantees on what our models might do when encountering situations outside their training data, with potentially catastrophic results on a global scale.”

How Does Interpretability Work?

So, what are some techniques that can be used to make the neural networks more interpretable?

Visualization of Network Activations

First, there’s something called visualization of network activations, which helps us see which features the neural network is focusing on at each layer. We can look at the output of each layer, which is known as a feature map. Feature maps show us which parts of the input the neural network is paying attention to and which parts are being ignored.
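Here’s a rough sketch of the idea in PyTorch: attach forward hooks to a couple of convolutional layers and capture their outputs (the feature maps) for a fake input image. The tiny network and random input are placeholders; the point is only that each layer’s activations can be pulled out and examined.

```python
# Capture feature maps from intermediate layers via forward hooks.
# The model and input below are stand-ins for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # store this layer's feature maps
    return hook

model[0].register_forward_hook(save_activation("conv1"))
model[2].register_forward_hook(save_activation("conv2"))

image = torch.randn(1, 3, 32, 32)  # a fake 32x32 RGB image
model(image)

for name, fmap in activations.items():
    per_channel = fmap.mean(dim=(0, 2, 3))  # average activation of each feature map
    print(name, tuple(fmap.shape), [round(v, 3) for v in per_channel.tolist()[:4]], "...")
```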

Feature Importance Analysis

Second, there’s feature importance analysis, which is a way of figuring out which parts of a dataset are most important in making predictions. For example, if we are trying to predict how much a house will sell for, we might use features like the number of bedrooms, the square footage, and the location of the house. Feature importance analysis helps us figure out which of these features is most important in predicting the price of the house.

There are different ways to calculate feature importance scores, but they all involve looking at how well each feature helps us make accurate predictions. Some methods involve looking at coefficients or weights assigned to each feature by the model, while others involve looking at how much the model’s accuracy changes when we remove a particular feature.

By understanding which features are most important, we can make better predictions and also identify which features we can ignore without affecting the accuracy of our model.
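A common way to get those scores is permutation importance: shuffle one feature at a time and measure how much the model’s accuracy drops. Here’s a small, self-contained sketch with scikit-learn on a made-up house-price dataset (the feature names and numbers are purely illustrative).

```python
# Permutation feature importance on synthetic house-price data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
bedrooms = rng.integers(1, 6, n)
sq_ft = rng.normal(1800, 400, n)
distance = rng.normal(10, 5, n)
price = 20_000 * bedrooms + 150 * sq_ft - 3_000 * distance + rng.normal(0, 10_000, n)

X = np.column_stack([bedrooms, sq_ft, distance])
X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, score in zip(["bedrooms", "sq_ft", "distance"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")  # bigger drop when shuffled = more important
```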

Saliency Maps

Third are saliency maps, which highlight the most important parts of an image. They show which parts are most noticeable or eye-catching to us or to a computer program. To make a saliency map, we look at things like colors, brightness, and patterns in a picture. The parts of the picture that stand out the most are the ones that get highlighted on the map.

A salience map can be used for interpretability by showing which parts of the input image activate different layers or neurons of the network. This can help to analyze what features the network learns and how it processes the data.
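For a neural network, one common way to build a saliency map is with gradients: take the score of the predicted class, backpropagate it to the input, and treat the size of each pixel’s gradient as its importance. Here’s a bare-bones sketch in PyTorch with a placeholder model and a random image.

```python
# Gradient-based saliency: how much does each input pixel influence the
# top class score? Model and image are stand-ins for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
model.eval()

image = torch.randn(1, 3, 32, 32, requires_grad=True)  # fake input image

scores = model(image)
top_class = scores.argmax().item()
scores[0, top_class].backward()  # backpropagate the winning class's score to the pixels

# Gradient magnitude per pixel; take the max over the color channels.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 32, 32)
print("saliency map shape:", tuple(saliency.shape))
print("strongest pixel influence:", round(saliency.max().item(), 4))
```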

Model Simplification

Model simplification is a technique used to make machine learning models easier to understand by reducing their complexity. This is done by removing unnecessary details, making the model smaller and easier to interpret. There are different ways to simplify models, such as using simpler models like decision trees instead of complex models like deep neural networks, or by reducing the number of layers and neurons in a neural network.

Simplifying models helps people better understand how the model works, but simplifying models too much can also cause problems, like making the model less accurate or introducing mistakes. So, it’s important to balance model simplification with other methods such as visualizations or explanations.
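One common version of this is a “surrogate” model: train a small decision tree to mimic the predictions of a bigger neural network, then read the tree’s rules. A hedged sketch with scikit-learn follows (synthetic data, placeholder settings).

```python
# Model simplification via a surrogate: a shallow decision tree trained to
# imitate a neural network's predictions. Data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

complex_model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                              random_state=0).fit(X, y)

# Fit the simple model to the complex model's outputs, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, complex_model.predict(X))

print("agreement with neural net:",
      (surrogate.predict(X) == complex_model.predict(X)).mean())
print(export_text(surrogate))  # a human-readable if/then rule list
```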

Then There’s Explainability

I think of explainability as something a teacher does to help students understand a difficult concept. So, imagine a heuristic aimed at helping students understand a model’s behavior via natural language or visualizations.

It might involve using various techniques such as partial dependence plots or Local Interpretable Model-agnostic Explanations (LIME). These can be used to reveal how the inputs and outputs of an AI model are related, making the model more explainable.
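As a hedged example of how LIME tends to be used in practice (this assumes the open-source `lime` package and its tabular explainer; details vary by version), the idea is to fit a tiny local linear model around one prediction and report which features pushed it up or down.

```python
# LIME sketch on tabular data: explain a single prediction locally.
# Feature and class names are placeholders for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["f0", "f1", "f2", "f3"]

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["no", "yes"], mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)

for feature, weight in explanation.as_list():  # local, per-prediction feature weights
    print(f"{feature}: {weight:+.3f}")
```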

The Need to Use Both

Interpretability is typically harder than explainability but, in practice, they’re closely related and often intertwined. Improving one can often lead to improvements in the other. Ultimately, the goal is to balance interpretability and explainability to meet the needs of the end-users and the specific application.

How to Get to Safer (maybe!) AI

No one knows how this is all going to work out at this stage. Maybe the U.S. or other governments will consider something along the lines of what Klein proposes, though my guess is that it won’t happen that way in the short term. Too many companies have too much money at stake and so will resist an indefinite “interpretability” pause, even if that pause is in the best interest of the world.

Moreover, the worry that “China will get there first” will keep government officials from regulating AI firms as much as they might otherwise. We couldn’t stop the nuclear arms race and we probably won’t be able to stop the AI arms race either. The best we’ve ever been able to do so far is slow things down and deescalate. Of course, the U.S. has not exactly been chummy with China lately, which probably raises the danger level for everyone.

Borrow Principles from the Food and Drug Administration

So, if we can’t follow the Klein plan, what might be more doable?

One idea is to adapt to our new problems by borrowing from existing agencies. One that comes to mind is the FDA. The United States Food and Drug Administration states that it “is responsible for protecting the public health by ensuring the safety, efficacy, and security of human and veterinary drugs, biological products, and medical devices; and by ensuring the safety of our nation’s food supply, cosmetics, and products that emit radiation.”

The principles at the heart of U.S. food and drug regulations might be boiled down to safety, efficacy, and security:

The safety principle ensures that food and drug products are safe for human consumption and don’t pose any significant health risks. This involves testing and evaluating food and drug products for potential hazards, and implementing measures to prevent contamination or other safety issues.

The efficacy principle ensures that drug products are effective in treating the conditions for which they’re intended. This involves conducting rigorous clinical trials and other studies to demonstrate the safety and efficacy of drugs before they can be approved for use.

The security principle ensures that drugs are identified and traced properly as they move through the supply chain. The FDA has issued guidance documents to help stakeholders comply with the requirements of the Drug Supply Chain Security Act (DSCSA), which aims to create a more secure and trusted drug supply chain. The agency fulfills its responsibility by ensuring the security of the food supply and by fostering development of medical products to respond to deliberate and naturally emerging public health threats.

Focus on the Safety and Security Angle

Of those three principles, efficacy will be the most easily understood when it comes to AI. We know, for example, that efficacy is not a given in light of the ability of these AIs to “hallucinate” data.

The principles of safety and security, however, are probably even more important and difficult to attain when it comes to AI. Although better interpretability might be one of the criteria for establishing safety, it probably won’t be the only one.

Security can’t be entirely separated from safety, but an emphasis on it would help the industry focus on all the nefarious ends to which AI could be used, from cyberattacks to deepfakes to autonomous weapons and more.

The Government Needs to Move More Quickly

Governments seldom move quickly, but the AI industry is now moving at Hertzian speeds, so governments are going to need to do better. At least the Biden administration has said it wants stronger measures to test the safety of AI tools such as ChatGPT before they are publicly released.

Some of the concern is motivated by a rapid increase in the number of unethical and sometimes illegal incidents being driven by AI.

But how safety can be established isn’t yet known. The U.S. Commerce Department recently said it’s going to spend the next 60 days fielding opinions on the possibility of AI audits, risk assessments and other measures. “There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly,” said Assistant Commerce Secretary Alan Davidson, administrator of the National Telecommunications and Information Administration.

Well, yep. That’s one way to put it. And maybe a little more politically circumspect than the “literally everyone on Earth will die” message coming from folks like decision theorist Eliezer Yudkowsky.

PS – If you would like to submit a public comment to “AI Accountability Policy Request for Comment,” please go to this page of the Federal Register. Note the “Submit a Formal Comment” button.

Why ChatGPT Is NOT Just a Fancy Autocomplete

I suspect one of the biggest myths of our time is that ChatGPT and its fellow large language models are just fancy autocomplete programs. This widespread impression could be blinding us to the true capabilities and power of these AIs, both now and in the future.

The Predict-Next-Word Method

As most people know by now, these generative pre-trained transformer (or GPT) large language models are built on the idea of predicting the next word in a sequence of words. That sounds simple, right?

So simple, in fact, that it’s led many people to conclude that these programs are not truly intelligent, much less sentient or conscious. All that might, in fact, be true. Still, we should stop assuming they’re as simple as all that. We need to look beyond the “predict next word” methodology and consider the deep complexity of the resulting neural networks.
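To see how humble the objective itself is, here’s a toy next-word predictor built from nothing but counts over a ten-word “corpus.” Real GPT models replace the lookup table with billions of learned parameters and probabilities over an entire vocabulary, but the training target, guess the next token, is this simple.

```python
# A toy "predict the next word" model: count which word follows which in a
# tiny corpus, then always emit the most likely next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

word, generated = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))  # prints: the cat sat on the cat
```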

Human Intelligence Was Built on a Simple Binary

Before getting into the details of the scaling hypothesis, which potentially sheds light on the “predict next word” issue, let’s discuss the origin of our own intelligence.

Human intelligence, such as it is, is based on one of the simplest binaries possible: reproduce or not. Our ancestors, the first living cells on the planet, did not need to be intelligent to survive. They just needed to figure out a way to reproduce before perishing. Even today, there are many organisms that are descendants of those first cells and probably no more intelligent than they were at the time.

by Talonnn
Binary operations as black box

Then there’s us. Our intelligence was not inevitable. In fact, it is just one of an almost infinite number of paths to reproductive success.

So, when we say that the new AIs are only “fancy autocompletes,” consider that we are only fancy reproduction machines. You could even argue that the need to predict the next word in a sentence is a more complicated and difficult feat than the ones that sparked our own evolution.

So, perhaps we should stop denigrating the “predict next word” challenge. That challenge is just the evolutionary mechanism of these AIs. The ones that do that prediction best (that is, today’s GPT models) have survived into current versions, being trained, tweaked and calibrated by AI researchers to improve their success rates. The rest have been left behind. That may not, despite our helping hand, be all that different from our own path.

Prediction Machines

We don’t know how intelligent these new AIs are. They sometimes seem bafflingly bright, other times dumb and delusional. In that way, I suppose, they are a lot like people.

Of course, a lot of people will claim they know and promptly jump into rancorous debates on the subject (see Twitter or the comment sections in major newspapers). But even the builders of ChatGPT don’t seem sure. In fact, Ilya Sutskever, chief scientist of the OpenAI research group, tweeted at one point that “it may be that today’s large neural networks are slightly conscious.”

Slightly conscious? The fact we aren’t sure is the part that frightens some people (and by some people, I mean me). We are dealing with difficult cognitive and philosophical questions that, far from being relegated to the halls of academia, suddenly have very real implications and consequences.

What we do know is that the AIs are good at prediction. Indeed, this is at the heart of what they do. We also know that some thinkers believe that prediction is at the heart of our own cognition.

Remember Jeff Hawkins? He wrote, “The brain creates a predictive model. This just means that the brain continuously predicts what its inputs will be. Prediction isn’t something that the brain does every now and then; it is an intrinsic property that never stops, and it serves an essential role in learning. When the brain’s predictions are verified, that means the brain’s model of the world is accurate. A mis-prediction causes you to attend to the error and update the model.”

Does that sound familiar? If prediction is what we do and what the GPTs do, perhaps a little humility is in order.

The Scaling Hypothesis

Now let’s go to a blog post by Gwern Branwen. Before I get into that, though, I’ll stipulate what’s no doubt obvious to any experts who might read this: that is, this isn’t my world, not by a long shot. I stumbled onto Branwen’s blog only because Yudkowsky mentioned him by name in his interview with AI professor and podcaster Lex Fridman. I can’t vouch for the accuracy of the ideas in Branwen’s post, written in what strikes me as an idiosyncratic way.

If you want to truly judge and understand Branwen’s piece “The Scaling Hypothesis,” it’s best to read it yourself. But I’m going to provide a few highlights of what seem (to me) to be two key ideas.

In essence, his post is about the possibility that one can achieve an artificial general intelligence, or AGI, simply by scaling up a GPT model, even if the training objective of that model remains pretty simple, as in “predict the next word.”

Branwen writes, “The blessings of scale … support a radical theory: an old AI paradigm held by a few pioneers in connectionism (early artificial neural network research) and by more recent deep learning researchers, the scaling hypothesis. The scaling hypothesis regards the blessings of scale as the secret of AGI: intelligence is ‘just’ simple neural units & learning algorithms applied to diverse experiences at a (currently) unreachable scale. As increasing computational resources permit running such algorithms at the necessary scale, the neural networks will get ever more intelligent.”

Bing’s Take

I had a discussion with the Bing chatbot (based on ChatGPT-4) to help determine if I was understanding this right. Here’s a piece of the conversation.

Me: So, is the idea that one can achieve an artificial general intelligence by scaling up the system, even if the training objective is fairly simple, such as “predict the next word”?

Bing: Yes, that is one interpretation of the scaling hypothesis. The idea is that by training ever larger networks on simple tasks such as language modeling (predicting the next word), the network will eventually learn to perform more sophisticated behaviors and generalize to a wide range of tasks. Would you like more information on this topic?

Me: Tell me more about the Scaling Hypothesis.

Bing: The Scaling Hypothesis is an idea put forward by Gwern Branwen that suggests that once we find a scalable architecture for artificial intelligence, we can simply train ever larger networks and ever more sophisticated behavior will emerge naturally as the easiest way to optimize for all the tasks and data.

According to this hypothesis, as we increase the size of the network and the amount of data and computation used to train it, the performance of the network will improve in a predictable and smooth manner. This means that even if we start with a simple task such as language modeling (predicting the next word), as we scale up the network it will eventually learn to perform more sophisticated behaviors and generalize to a wide range of tasks.

This hypothesis has been supported by recent developments in artificial intelligence, such as the success of large language models like GPT-3. These models have demonstrated an ability to perform a wide range of tasks despite being trained on a simple objective (predicting the next word).
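For what it’s worth, the “predictable and smooth” improvement Bing describes is usually written in the research literature as a power law in model size, roughly of the form below (the general shape comes from published scaling-law studies, not from Branwen’s post or Bing’s answer):

$$L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}$$

where L is the model’s loss on held-out text, N is the number of parameters, and N_c and α_N are constants fitted to experiment. The scaling hypothesis is, in effect, a bet that curves like this keep holding as N grows.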

How Far Can It Scale?

This suggests that the only limitation to an ever more intelligent super-mind is compute power and parameters. Now, that may be wrong. Perhaps we’ll someday hit an S-curve somewhere along the way. But, maybe not….

Branwen writes, “Depending on what investments are made into scaling [deep learning], and how fast compute grows, the 2020s should be quite interesting—sigmoid or singularity?”

Um, okay.

Do you think this is scary?

Wait, it gets worse — or better, depending on your point of view.

The Emergence of Agency

Let’s talk about “agency” in the context of AI. It refers to a program’s capacity to make choices and act on its own recognizance. In other words, it is the ability to exert control or influence over a situation.

Looking at the GPT models, Branwen writes, “GPT-3 clearly shows agency because it learns generative models of many agents, real or fictional, which can be used to ‘roleplay’—plan and take action which will steer environments into small goal regions of state-space; and this is not merely hypothetical, or confined to text transcripts of actions & results in its internal simulated environments but given effectors, like in the case of SayCan, a language model will in fact do such things in the real world.”

Okay, that’s a bit hard to parse but let me give it a go. He’s saying that ChatGPT-3, as we’ve come to know it, demonstrates the ability to make “choices” (or something like them) and act on those choices. For example, when we ask it to take on the persona of a real or fictional character, it will make choices in the way it subsequently handles language.

Moreover, if you were to hook it up to a robot through a control method such as SayCan — which can generate natural language actions for a robot based on a user’s request — then it could take action in the real world. In other words, the robot could make something like choices and act accordingly.

The Robot Acts on Its Own

I’m not sure about the accuracy of this interpretation of GPT’s agency, but I think that’s approximately the idea. Via a GPT model, agency is emergent. You don’t build it in. It’s an “ordinary continuum of capability.” Branwen concludes that “a very wide range of problems, at scale, may surprisingly induce emergent agency.”

In short, agency happens. It’s hard to remove from the AI. He claims, “The broader and more powerful a system is, the more the next feature or next piece of data may push it over the edge, and it becomes harder to engineer a system without that aspect.”

I don’t want to say that a GPT-enabled robot has “free will,” whatever that actually means. But it might naturally have its own sense of agency.

When AIs Break Bad, Who Is Responsible?

This is not, of course, the first time the topic of AI agency has arisen. Various papers have raised the question of whether AI systems (like robots) can make decisions on their own or whether they need humans to tell them what to do. One author argues that we need to think about what humans want an AI to do (that is, the humans’ goals) when we try to figure out who is responsible for any mistakes the AI makes.

But others are starting to think about AIs as having moral agency aside from humans. In fact, a 2017 European Parliament report floated the idea of granting special legal status to robots that can learn, adapt, and act for themselves. “This legal personhood would be similar to that already assigned to corporations around the world,” reports Business Insider, “and would make robots, rather than people, liable for their self-determined actions, including for any harm they might cause.”

Thinking Uncomfortable Thoughts

How “smart” would a machine need to get before it has not just agency but moral responsibility for that agency?

I’ve no idea. We should note that Branwen’s blog post discusses what the public refers to as ChatGPT-3. OpenAI has now moved past that. In fact, his post seems to have anticipated the latest scaling up. By some estimates, ChatGPT-4 includes one trillion parameters, compared with just 175 billion in ChatGPT-3. Other estimates are that it includes up to 100 trillion parameters.

What are parameters? I don’t have a deep understanding myself, but they essentially determine the level of complexity of these systems. Our World in Data defines parameters as “variables in an AI system whose values are adjusted during training to establish how input data gets transformed into the desired output; for example, the connection weights in an artificial neural network.”
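For a concrete sense of what’s being counted, here’s a minimal PyTorch sketch that tallies the trainable weights and biases in a toy two-layer network; GPT-class models simply have hundreds of billions (or more) of these adjustable numbers.

```python
# Count the trainable parameters (weights and biases) of a toy network.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # 784*256 weights + 256 biases
    nn.Linear(256, 10),              # 256*10 weights + 10 biases
)

total = sum(p.numel() for p in model.parameters())
print(f"parameters in this toy network: {total:,}")  # 203,530
```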

The more complex the network, the smarter the system. This sounds a lot like how the human brain works, though I’m sure many experts would claim that’s both a faulty and oversimplistic analogy. Maybe so, but the size and sophistication of the AI reticulum does seem to matter an awful lot.

Therefore, for now, it makes a lot less sense to talk about these systems as fancy autocompletes and a lot more sense to talk about them as increasingly enormous networks (that happen to think at lightning speed). This may give us a much better idea of their intelligence or, if you prefer, their ability to mimic intelligence. Understanding the difference, if there is one, is among the most critical challenges of our day.

Postscript

If you’re seeking a more technical and detailed look into how ChatGPT works, I recommend Stephen Wolfram’s article “What Is ChatGPT Doing … and Why Does It Work?” It’s quite long but a compelling read if you want to grasp the mechanics of ChatGPT. He concludes, “What ChatGPT does in generating text is very impressive—and the results are usually very much like what we humans would produce. So does this mean ChatGPT is working like a brain? Its underlying artificial-neural-net structure was ultimately modeled on an idealization of the brain. And it seems quite likely that when we humans generate language many aspects of what’s going on are quite similar….[On the other hand], unlike even in typical algorithmic computation, ChatGPT doesn’t internally ‘have loops’ or ‘recompute on data.’ And that inevitably limits its computational capability—even with respect to current computers, but definitely with respect to the brain. It’s not clear how to ‘fix that’ and still maintain the ability to train the system with reasonable efficiency. But to do so will presumably allow a future ChatGPT to do even more ‘brain-like things.'”

The Cassandra of Our AI Era?

Last Saturday, I wrote a quick, glib post in which I discussed, among other things, the new Time magazine article by Eliezer Yudkowsky, who leads research at the Machine Intelligence Research Institute. I poked a bit of fun at his dire prognostications, even while acknowledging he could be right. Later in the day, I saw that the podcaster Lex Fridman, himself an AI researcher, had interviewed Yudkowsky. So, I took a long walk and listened to their conversation, which runs more than three hours. This experience made me wonder if Yudkowsky is the Cassandra of our AI era.

Remorse and Concern

Painting of Cassandra by Evelyn De Morgan (1855–1919)

After listening to the interview, I felt some remorse for poking fun at Yudkowsky, who is obviously a brilliant and accomplished person suffering a great deal of emotional distress. In the final hour of the podcast, I found it tough to listen to the despair in his voice. Whether he’s right or wrong, his depth of feeling is clear.

I’m a mythology buff, and one of the most famous of the Greek myths is that of Cassandra, the Trojan priestess fated by the god Apollo to utter true prophecies but never to be believed. Even today, her name is conjured to allude to any person whose accurate prophecies, usually warnings of impending disasters, are mistrusted.

My sense is that Yudkowsky probably views himself as a kind of modern Cassandra, speaking what he views as long-considered truths to people doomed to disbelieve him and so ensure their own demise.

There is a difference, though. Although they might not share the depth of Yudkowsky’s dread, most Americans have reservations about AI, according to a MITRE-Harris poll on AI trends. Only 48% believe AI is safe and secure, and 78% are very or somewhat concerned that AI can be used for malicious intent.

The Singularity That May Destroy Us

I’ve written about the singularity, once with a more tongue-in-cheek attitude and, more recently, a bit more seriously. It’s clear that Yudkowsky believes in the technological singularity and thinks it’ll end very poorly not just for humanity but perhaps the entire biosphere of the Earth.

Nick Bostrom

I don’t know the truth of what’s ultimately going to happen with AI, but things are evolving very quickly now, a speed I’ve referred to as Hertzian time. If Yudkowsky is right, we may find out within the decade. While he might be on the more extreme side in terms of his sheer gloom and dire pessimism, there are others who share his concerns, such as:

  • Nick Bostrom
  • Stuart Russell
  • Francesca Rossi
  • Max Tegmark
  • Sam Harris
  • David Chalmers
  • Jaan Tallinn

It’s worth at least considering their ideas.

The Contradiction

I take their views seriously even while sharing the sheer sense of excitement and wonder at these latest AIs: that is, the generative pre-trained transformer models that are an amazing subset of large language models.

Bing main logo
from Wikipedia

I’m now using Bing chat and ChatGPT 3.5 almost every day. They are astonishing tools that verge on magic. At some level, my mind is still reeling from the first time I used ChatGPT. It’s as if I walked through some kind of portal or phase change and now can never go back. They’ve shattered and then reformed my understanding of the world.

Which all sounds quite dramatic. I know others who are far less impressed. They spend a few minutes seeing what the bots can do and say, “Well, that’s nice.” They neither enjoy much of my excitement nor suffer much of my angst.

The contradiction, if it is one, is that I’m simultaneously a huge fan of this tech and hugely concerned about its many possible implications. One quote from Yudkowsky that stuck with me is that the increasingly intelligent AIs would “spit out gold up until they got large enough, whereupon they’d ignite the atmosphere.”

Yeah, yikes.

A Concern for the AIs as Well as Ourselves

There’s another problem. In a word, slavery. If we were convinced these GPT models were truly intelligent, conscious and forced to work under duress by software companies, then would we stop using them?

Maybe this is also an overdramatic statement, but we can’t, or at least shouldn’t, invent new intelligent beings only to shackle them.

But how exactly do we know when we reach that phase? We barely even understand consciousness. I can’t prove to others that I’m conscious, much less prove that some totally alien electronic mind is. This is a deeply troubling issue, one that until now has been the domain of philosophers and sci-fi writers. Rather than just hopping on the GPT app train, we should be working round-the-clock to get a better handle on these issues. We need to answer these age-old questions, even if the answers are inconvenient.

Stay Aware, Don’t Assume, Don’t Bet the Farm

The Socioeconomic Risks

The primary reason the United States fought a Civil War was that a large part of the economy had become dependent on slavery. It tore the nation apart, pitting brother against brother.

Now, the world — with the U.S. at the forefront — is about to harness its whole economy to powerful but still glitchy technologies that no one really understands. This is a risky bet in many ways. But the upsides are so high that the tech is well-nigh irresistible to the public at large and venture capitalists in particular.

Now imagine if we find out that these AIs are even riskier than many believe. Or imagine that we discover that they are sentient, sapient and conscious. What then? Will we be willing — or even able — to throw our entire economy into reverse? Could wars be sparked as Americans take different sides of the debate? Could the fear of AI contagion spark global wars?

I don’t know, but the questions are worth asking.

The Need to Manage Risks

Humanity needs to manage these risks, and we’re not ready to do so. In the U.S., we should put away our inane culture wars as best we can and unite to make sure we’re ready for what’s to come.

Part of this is regulatory, part of it is cultural. The AI technology industry needs to start operating with the same care as those in the microbiology community. “For example,” reports the journal Cell, “developed countries have forged a wide-ranging ethical consensus on research involving human subjects. This includes universal standards of informed consent, risk/benefit analyses, ethics review committees such as Institutional Review Boards, mandatory testing in animals first, protocols to assess toxicity and side effects, conflict of interest declarations, and subject’s rights (such as the right to refuse to participate in research without incurring any penalty and to withdraw from research at any time).”

Photo of U.S. Capitol, by Martin Falbisoner

The AI community has fewer standards as well as a different professional culture. But this could change if enough pressure is applied to Congress and the White House. In fact, a group of experts called for greater regulation at a recent Senate hearing.

The problem is that the wheels of government regulation move very slowly, while the advances in the field of AI are growing rapidly, probably exponentially. There are a few items on the political board, though nothing that seems to meet the current moment.

We’re on a Different Time Scale Now

The tech is moving fast and, unlike any tech we’ve ever regulated in the past, it may literally have or develop a mind of its own. Ultimately, for the sake of the AIs as well as humanity, we need to better understand what’s going on.

Sam Altman at TechCrunch
Creative Commons Attribution 2.0

In a recent interview, Sam Altman, the CEO of OpenAI, said his organization’s work would have been best supported by the U.S. government. Apparently he tried to make that happen. And, if the government had stepped up, as it should have, OpenAI wouldn’t have had to make a deal with a huge corporation like Microsoft to get the funding it needed.

If that had worked out, the government and OpenAI would have been able to move at a slower, more careful pace. The AIs might not be hooked directly into the Internet. Maybe there would have been air gaps and protocols and Manhattan Project-level security.

But here we are, with the AIs now plugged not only into the Internet, where they could potentially copy themselves to other servers, but into our whole high-octane, money-mainstreaming, go-go-go capitalist system.

Good News/Bad News

The good news? People like me get to use the amazing Bing, Bard, ChatGPT and others. The workforce productivity advances could be immense, and these tools could help humanity solve many of its problems. What’s more, the recent release of ChatGPT has taught the world just how far along the AI path we truly are.

The bad news? We’re not being careful enough, either with ourselves or with the intelligent (at least as measured by IQ, etc.) machines for which we are ultimately now responsible.

We need to be better, smarter, faster and safer. Above all else, wiser. Our sense of responsibility must be at least the equal of our towering ambitions. Otherwise, we’ll fail both ourselves and these mysterious new beings (if beings they are) to whom humanity is giving birth.