Bing Confabulates Its Own Version of a Classic Hemingway Story

I continue to be fascinated by the topic of AI confabulation. The other day I read the Ernest Hemingway short story “Fifty Grand.” It’s about a boxer who fights a championship bout. I liked the story but was confused by a couple of details in the end. So, I turned to my favorite AI, Bing, which proceeded to hallucinate a whole other version for me.

Of course, I’ve seen AIs make up other scenes from famous literary works before. Bard even confabulated a “woke” version of the poet E.E. Cummings. So, Bing’s summarization of the Hemingway story was not a shock. But it’s worth writing about because of the nature of Bing in particular and the other similar AIs more generally.

Confabulating Parts of Fifty Grand

“Fifty Grand” is a story that hinges on a couple of boxing-related bets: one known, one unknown. Because the unknown bet isn’t made clear, the reader isn’t sure of what happened until the end of the story. To help clarify the ending, I asked Bing about it.

Bing’s summary of the story left out a critical part when a couple of “wise guys” (by the name of Morgan and Steinfelt) come to visit the boxer, Jack, and make some sort of proposal to which the reader isn’t privy.

Bing’s summary includes a number of events that never happened, such as a meeting between Jack and his wife, her accusation that he was going to throw the fight, Jack’s broken jaw, and Jack’s trip to the hospital.

Bing didn’t accurately relate events in the story even though it had access to summaries such as the Wikipedia entry about it (to which Bing linked) as well as to the story itself (though in an unsearchable PDF format, which may have been an issue).

Wikipedia Gets It Right

So, did Bing get it wrong because Wikipedia had it wrong? No. Wikipedia summarizes the fight as follows:

The fight begins. Doyle (the narrator) reports, “There wasn’t anybody ever boxed better than Jack,” and the fight goes well for Jack for several rounds as his left fist repeatedly connects with Walcott’s face. By the seventh round, Jack’s left arm gets heavy and Walcott begins to gain the upper hand as he pummels Jack’s torso. After the eleventh round John Collins tells Jack the fight will go to Walcott, but Jack says he thinks he can last through the twelfth round and goes to meet Walcott and “finish it off right to please himself.” Walcott backs Jack up against the ropes and delivers a very low blow; Jack, in obvious pain, stays standing–”if he went down, there go fifty thousand bucks” because he would win on a foul–and tells the referee not to call the foul because it must have been an accident. He walks toward the dumbfounded Walcott and begins swinging, landing several body punches before hitting his opponent twice below the belt. Walcott falls and wins the match on a foul. Back in the dressing room, Jack comments, “It’s funny how fast you can think when it means that much money.”

In a couple of sentences, Wikipedia clarifies why Jack would have lost the money he bet on Walcott (the other fighter) had he gone down: the low blow would have been deemed a foul, and Jack would have won the fight on a technicality.

This suggests that the shady characters who talked to Jack earlier in the story wanted Jack to win the fight. At the end of the story, the narrator tells us that Walcott was favored to beat Jack, which is why Morgan and Steinfelt wanted to bet and win on Jack (that is, they’d win more money that way).

But it appears that Jack’s agreement with them was that he would lose the fight. That’s why, toward the end of the story, Jack’s manager says, “They certainly tried a nice double-cross,” and Jack responds with sarcasm, “Your friends Morgan and Steinfelt. You got nice friends.”

So, Morgan and Steinfelt wanted Jack (and most other people) to bet against Jack’s victory so they would make more money when Jack won. In essence, Jack turned the tables on them by making sure he lost the fight even while getting revenge on Walcott for his dirty boxing and treachery.

What Can We Learn About Today’s Neural Networks?

I certainly don’t “blame” Bing for getting a nuanced story wrong. I know that the confabulations boil down to how the algorithms work, as explained in another post. In fact, unlike the other AIs on the market, Bing pointed me to references that, if I hadn’t already read the story, would have allowed me to verify it was giving me the wrong information. That’s the beauty of Bing.

Not Quite Plagiarism

The famous intellectual Noam Chomsky has claimed that the generative AIs are just a form of “high-tech plagiarism.” But that’s not quite right. I don’t know whether the story “Fifty Grand” was part of the data on which the Bing model (based on GPT-4) was trained. If it was, then the model wasn’t able to properly parse, compress and “plagiarize” that nuanced information in such a way that it could be accurately related after training.

But we do know that Bing was able to access (or at least point to) the Wikipedia article as well as an “enotes” summary of the story, so it knew where to find the right plot summary and interpretation. The fact that it still confabulated things indicates that the makers and users of these technologies have some serious issues to address before we can trust whatever the AIs are telling us.

Will Hallucinations Ever Go Away?

There’s some debate about whether the confabulations and hallucinations will ever go away. On one hand are people such as Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, who has said, “This isn’t fixable. It’s inherent in the mismatch between the technology and the proposed use cases.”

On the other hand are those who think the problems are indeed fixable. Microsoft co-founder Bill Gates said, “I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction.”

Maybe APIs Will Help Fix the Issue

Some think they can address the confabulation problem, at least in part, through better use of APIs (that is, application programming interfaces). By interfacing with other types of programs via APIs, large language models (LLMs) can develop capabilities that they themselves do not have. It’s like when a human being uses a tool, such as a calculator, to solve problems that they could not easily solve by themselves.
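To make that idea concrete, here is a minimal sketch of the tool-use pattern, assuming a hypothetical tool registry rather than any particular vendor’s API: arithmetic gets routed to a deterministic calculator instead of being “predicted” token by token.

```python
import re

# Hypothetical tool registry: deterministic helpers the model can delegate to.
TOOLS = {
    # Demo only; never eval untrusted input in real code.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def answer(question: str) -> str:
    """Route arithmetic to the calculator tool instead of generating digits."""
    match = re.search(r"([\d\.\s\+\-\*/\(\)]+)=\s*\?", question)
    if match:
        return TOOLS["calculator"](match.group(1))
    return "(fall back to ordinary text generation)"

print(answer("What is 4821 * 9377 = ?"))  # 45206517, computed rather than guessed
```

The model’s job then shrinks to recognizing when a question calls for a tool; the tool supplies the dependability.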

That is, in fact, part of the hope associated with Gorilla, a LLaMA-7B-based model designed specifically for making API calls. This particular LLM was developed jointly by researchers from UC Berkeley and Microsoft, and there is now an open-source version available.

So, if Gorilla can more dependably access APIs, it can reduce the hallucination problem.

At least, that’s the hope.

We’ll see over time.

On Bayes Theorem and Human Cognition

Some scientists believe that our brains work according to Bayesian logic. Or, at least, we may be able to use such logic to replicate the ways our minds work. This is a complex topic that can’t be covered in one post (especially once we start talking about the free energy principle), so let’s start by discussing the connections between Bayes Theorem and human cognition.

What Is Bayes Theorem?

Bayes Theorem was formulated by Thomas Bayes–English statistician, philosopher and Presbyterian minister–back in the 1700s. The theorem is all about the probability of something happening once you know the probability of something else happening. Here it is in a nutshell:
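P(A|B) = P(B|A) × P(A) / P(B)

In words: the probability of A given that B has happened equals the probability of B given A, multiplied by the overall probability of A and divided by the overall probability of B.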

There are a lot of examples of how to use Bayes Theorem available online; below, I’ll walk through a couple of my own.

Examples with Chart

I’ve provided a couple of examples using a chart I took from a Khan Academy lesson on conditional probabilities. I haven’t seen Bayes taught this way, but figured it might be useful as a way of helping myself think through it.

The following is information about one man’s train travels as they pertain to weather, travel delays, and the number of days in each weather category.
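Here are the figures from that chart as they’re used in the examples below (cells not quoted directly in the examples, such as the sunny-day row, are inferred from the totals):

Weather   Delayed   On time   Total days
Snowy     12        8         20
Rainy     15        40        55
Sunny     8         282       290
Total     35        330       365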

All the data needed for a set of probability problems are already there, so my assumption is one can test Bayesian calculations against the numbers in the chart.

Example One: Chance of Delay If Snowing

For example, let’s say you want to find out the relationship between travel delays and snowy weather. If you just use the chart, you can see a total of 20 days spent traveling on snowy days. There were 12 delays on those days, so you can see that there was a 60% chance of delays on days when there was snow (that is, divide 12 days by 20 days to get .6).

But let’s assume you don’t have the full chart, but you do know some relevant information. What you want to know is the chance of a delay if it’s snowing. So, you set up the following:

P(A|B) is P(delay|snowy): that is, a chance of delay if snowing: currently unknown

P(A) is P(delay): the chance of any delay in a given year = 35 delay days / 365 total travel days = .096

P(B|A) is P(snowy|delay): the chance it’s snowy if there’s a delay = 12 delay days when it snowed / 35 total delay days = .34

P(B) is P(snowy): the probability of snow on any given day = .055

So, here’s what you end up with:

(.096 * .34) / .055 = .6 = 60%

You arrive at the same answer as before even though you didn’t know the total number of snowy days (20) this time around. So, you get a good probability without complete information via Bayes.

Example Two: Chance of Being On Time If Rainy

This time, let’s say you want to know the chances that you’ll be on time if it’s raining. If you have complete information based on the chart, you can divide 40 (the number of on-time days when it’s raining) by 55 (the total number of days traveled when it’s raining). We get .73, or 73%.

But let’s say you don’t have the full chart. So, you set up the following:

P(A|B) is P(on-time|rainy): that is, chance of being on time given it’s raining: currently unknown

P(A) is P(on-time): the chance of being on time in a given year = 330 on-time days / 365 total travel days = 90%

P(B|A) is P(rainy|on-time): the chance it’s rainy if you’re on time = 40 on-time days when it rained / 330 on-time days = .12

P(B) is P(rainy): the probability of rain on any given day = 55 / 365 = .15

So, here’s what we end up with:

(.90 * .12) / .15 = .72

We arrive at a very similar answer to the one we got before.
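As a sanity check, here’s a short Python sketch that runs both calculations with exact fractions rather than rounded decimals:

```python
def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Bayes Theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Example one: P(delay | snowy)
print(bayes(12 / 35, 35 / 365, 20 / 365))    # 0.6 exactly

# Example two: P(on-time | rainy)
print(bayes(40 / 330, 330 / 365, 55 / 365))  # 0.7272..., i.e., about 73%
```

Run with exact fractions, the second calculation comes out to .727, matching the 73% computed directly from the chart; the .72 above is just rounding error from using .90, .12 and .15.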

I should note that when you’re doing this kind of analysis, you don’t always know how a particular percentage was derived. You just know the proportions based on some known standard (such as the accuracy rate of a certain medical test).

Bayes and Cognition

Some researchers believe that the mind is a prediction machine. The idea is that the brain somehow assigns probabilities to hypotheses and then updates them according to the probabilistic rules of inference.

But do our minds actually use Bayesian inference?

Joshua Brett Tenenbaum, Professor of Computational Cognitive Science at the Massachusetts Institute of Technology, has stated that Bayesian programs are effective at replicating “how we get so much out of so little” via our cognition.

Others have been more skeptical of the notion that our minds use some form of Bayesian reasoning. Jeffrey Bowers, professor of psychology at the University of Bristol, notes that information-processing models such as neural networks can replicate the results of Bayesian models.

Can Neural Networks and Bayesian Approaches Work Together?

Some say that Bayesian inferences are key aspects of modern generative AI models, which are based on neural nets. As one source explains:

The computer starts with a basic understanding of the English language, such as grammar rules and common phrases. It then reads the vast library of text and updates its understanding of how words and phrases are used, based on the frequency and context in which they appear.

When you provide the computer with a starting sentence or a few words, it uses its Bayesian understanding to estimate the probability of what word or phrase should come next. It considers not only the most likely possibilities but also the context and the content it has learned from the library. This helps it generate sentences that make sense and are relevant to the given input.

The computer continues this process, picking one word or phrase at a time, based on the probabilities it has calculated. As a result, it can create sentences and paragraphs that are not only grammatically correct but also meaningful and coherent.

In summary, a Bayesian approach helps an AI generative language model learn from a large collection of text data and use that knowledge to generate new, meaningful sentences based on the input provided. The computer constantly updates its understanding of language and context using Bayes’ idea of probability, enabling it to create content that is both relevant and coherent.
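That description is a loose gloss rather than a literal account of how modern LLMs work, but the core idea it gestures at (estimate the probability of the next word from observed text, and update those estimates as more text is read) can be sketched in a few lines. Here is a toy count-based bigram model, purely illustrative:

```python
from collections import Counter, defaultdict

# Toy "next word" model: estimate P(next | current) from bigram counts,
# updating the estimates as more text is read.
counts = defaultdict(Counter)

def read(text: str) -> None:
    """Update bigram counts from a chunk of text (the crude 'learning' step)."""
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1

def next_word_probs(word: str) -> dict:
    """Estimated probability of each candidate next word."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

read("the cat sat on the mat and the cat slept")
print(next_word_probs("the"))  # {'cat': 0.67, 'mat': 0.33}, roughly
```

Real models condition on far more context and learn distributed representations rather than raw counts, but the “probability of what comes next” framing is the same.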

So, is Bayes the secret “hero” behind today’s generative AIs? Beats me. It’s something I’ll need to investigate further with people who actually develop these systems.

Another avenue of investigation involves those who are trying to use the so-called free energy principle, also based on Bayesian ideas, to create new AI systems. One organization that seems to be working on this is Verses, which last March published the executive summary of “Designing Ecosystems of Intelligence from First Principles.” That’s now on my “to read” pile.

Generative AI Is Better for Augmentation than Automation … For Now

According to research I’ve helped conduct in the past, HR professionals tend to think that AI will be more widely used for the automation of work than for its enhancement. But I think that’s the wrong way to view it. For the most part, these AIs can’t actually take over many jobs. Rather, they help people be more productive at those jobs. So, generative AI is better for augmentation than automation.

Jobs Could Be Lost

This does not mean, however, that jobs can’t be lost. If you can triple the productivity of a nine-person team, for example, then you could potentially lose six of those people and maintain the same production as before. So, yes, jobs could potentially be lost.

On the other hand, it very much depends on the job and how it’s managed. Let’s say that we’re talking about software developers. In a firm that sells software products, the sticking point in the past may have simply been the cost of labor.

But Let’s Be Specific

Let’s assume a team of nine developers creates and maintains a product that brings in $3 million of revenue per year, and let’s assume that the cost of employing this team is $1.5 million per year. Let’s also assume some form of generative AI can triple productivity so that the team can be reduced to just three people. So, yes, the company could save $1 million per year by terminating six of those positions.

Leverage the Wealth-Creation Machine

Or the company could earn many times that amount by keeping them and assigning them to other revenue-earning projects.

Let’s now assume those six developers can be reallocated to create and implement two other products, both of which also bring in $3 million per year. At this stage, the revenue earned by these six employees will be $6 million, or $1 million per employee.
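Here’s the back-of-the-envelope version in code, using only the hypothetical figures above:

```python
# All figures come from the hypothetical scenario above.
team_size = 9
team_cost = 1_500_000
cost_per_dev = team_cost / team_size       # ~$166,667 per developer per year
revenue_per_product = 3_000_000

# Scenario A: cut six positions, keep the one product.
savings_a = 6 * cost_per_dev               # $1,000,000 saved per year

# Scenario B: keep all nine, reallocate six to two new products.
extra_revenue_b = 2 * revenue_per_product  # $6,000,000 in new revenue per year

print(f"A saves ${savings_a:,.0f}/yr; B adds ${extra_revenue_b:,.0f}/yr in revenue")
```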

This is, of course, how productivity works. It’s a system with positive feedback loops that, if harnessed correctly, becomes a wealth-creation machine.

Oh, I know my arithmetic is over-simplified. Salaries, revenues and profits are never that straightforward. But you get the idea. Depending on the job and the business model, generative AI could actually increase the demand for certain skills because it can massively boost productivity, which boosts revenues and profits.

This Could Change, Of Course

Of course, this could change if generative AI (or whatever AI comes next) can fully automate most white-collar work, but we’re not there yet and, from what I can see, we’re not that close. These AIs are still prone to hallucinations and mistakes, and they require trained professionals to be able to detect those mistakes as well as engage in more creative and strategic work.

So, my advice for now is to leverage these technologies for augmentation rather than automation. Get while the getting’s good. Ultimately, that’s how economies and labor markets thrive.

Employers Have Fallen Behind Employees in AI Adoption

When it came to previous versions of AI, organizations had to worry about falling behind the business competition. The same is true for generative AI, of course, but this time there’s an added complication: employers have fallen behind their own employees in AI adoption. This needs to be on the radar of HR, the IT department and executive leadership teams.

Execs: Important, Though It’s Going to Take Time

Most executives are familiar with the technology hype cycle, and they’ve seen AI hype before. So, is the generative AI movement different?

Well, probably. One survey from KPMG found that two-thirds of executives think generative AI will have a high or very high impact on their organizations over the next 3 to 5 years. But, being familiar with how long it can take to change anything, especially when it comes to new technologies, most also think it’s going to take a year or two to implement new generative AI technologies.

KPMG reports, “Fewer than half of respondents say they have the right technology, talent, and governance in place to successfully implement generative AI. Respondents anticipate spending the next 6-12 months focused on increasing their understanding of how generative AI works, evaluating internal capabilities, and investing in generative AI tools.”

All of which sounds fine, but only 6% say they have a dedicated team in place for evaluating and implementing risk mitigation strategies. Another 25% say they’re putting risk management strategies in place but that it’s a work-in-progress.

Employees: Already On It, But Don’t Tell the Boss

Meanwhile, a survey conducted by Fishbowl, a social network for professionals, reports that 43% of professionals use AI tools such as ChatGPT for work-related tasks. Of the 5,067 respondents who report using ChatGPT at work, 68% don’t tell their bosses.

This makes me wonder if A) there’s an intentional “don’t ask, don’t tell” policy in some companies that are simply afraid of establishing policies or guidelines that could get them in legal trouble down the line, or B) there’s an unintentional bureaucratic lag as companies take months or longer to establish guidelines or policies around these new technologies.

But Some Employers Aren’t Waiting

This doesn’t mean that all organizations are lagging in this area, however. Some have already set up guardrails.

The consulting firm McKinsey, for example, has reportedly knocked together some guardrails that include “guidelines and principles” about what information employees can input into the AI systems. About half of McKinsey workers are using the tech.

“We do not upload confidential information,” emphasized Ben Ellencweig, senior partner and leader of alliances and acquisitions at QuantumBlack, the firm’s artificial intelligence consulting arm.

McKinsey specifically uses the AI for four purposes:

  • Computer coding and development
  • Providing more personalized customer engagement
  • Generating personalized marketing content
  • Synthesizing content by combining different data points and services

Ten Suggested Do’s and Don’ts

There are now various articles on developing ethics and other guidelines for generative AI. Keeping in mind I’m no attorney, here’s what I think organizations should consider in the area of generative AI:

  1. DO spend time getting to understand these AIs before using them for work. DON’T leap directly into using these tools for critical work purposes.
  2. DO be careful about what you put into a prompt. DON’T share anything you wouldn’t want shared publicly.
  3. DO always read over and fact-check any text that an AI generates if it is being used for work purposes. DON’T assume you’re getting an accurate answer, even if you’re getting a link to a source.
  4. DO use your own expertise (or that of others) when evaluating any suggestions from an AI. DON’T assume these AIs are unbiased; they are trained on human data, which tends to have bias baked in.
  5. DO develop guardrails, guidelines and ethical principles. DON’T go full laissez faire.
  6. DO continue to use calculators, spreadsheets and other trusted calculation tools. DON’T rely on generative AI for calculation for now unless you have guarantees from a vendor; even then, test the system.
  7. DO continue to use legal counsel and trusted resources for understanding legislation, regulation, etc. DON’T take any legal advice from an AI at face value.
  8. DO careful analysis of any tasks and jobs being considered for automation. DON’T assume these AIs can replace any tasks or positions until you and others have done your due diligence.
  9. DO train employees on both the ethical and practical uses of generative AIs once these are well understood. DON’T make everyone learn all on their own with no discussion or advice.
  10. DO start looking for or developing AI expertise, considering the possibility (for example) of a Chief AI Officer position. DON’T assume that today’s situation won’t change; things are going to continue to evolve quickly.

Which AI Should HR Professionals Use?

The other day, I spoke in front of a considerable crowd of HR professionals in LA. I asked how many had used ChatGPT and a couple of dozen or so raised their hands. Then I asked how many had used the Bing chatbot, and hardly any hands went up. I imagine this is typical across the profession, which is why I wanted to share a few thoughts about which AIs HR professionals should use.

Of course, there’s no one answer to this question. People should use the bot they like most. But, as a person who does a lot of HR-related research, I’ve formed some opinions.

There Are Two General Types of HR Questions

Generally speaking, there are two types of HR queries. The first type is about ideas and practices. The second is about factual information that often requires specific data as part of the response.

Type One Questions

When it comes to the first type of question, all three of the most accessible AIs – that is, ChatGPT, Bing and Bard – are similar in performance. Here’s one such question I asked: “I want to reduce the voluntary turnover rate of our developers. How do I accomplish that?”

All three of them offered tips such as providing competitive compensation, developmental opportunities, positive work environments, etc. All of the advice seemed fine if a bit bland. All three are, I think, a good place to start for an HR professional just beginning to think through some particular issue.

Type Two Questions

When it comes to the second type of question, however, there are some sizeable differences. Here’s what I asked as a demonstration: “I have a fast-food chain of restaurants in Tampa. What is the expected turnover rate for my employees?”

ChatGPT’s Response

The expected turnover rate for employees in the fast-food industry can vary depending on various factors such as location, industry norms, employee demographics, and specific organizational practices. While I don’t have access to real-time data or specific information about your fast-food chain, I can provide you with a general idea of turnover rates in the industry.

In the fast-food industry, turnover rates tend to be relatively high compared to other sectors due to factors like low wages, demanding work conditions, and limited career advancement opportunities. According to data from the Bureau of Labor Statistics (BLS) in the United States, the accommodation and food services industry, which includes fast-food establishments, had a voluntary turnover rate of around 81.9% in 2020.

However, it’s important to note that turnover rates can vary significantly based on factors like geographic location, employee engagement practices, and company-specific efforts to improve retention. Some fast-food chains have implemented effective strategies to reduce turnover rates, such as offering competitive wages, providing employee benefits, implementing career development programs, and fostering a positive work environment.

To determine the expected turnover rate for your specific fast-food chain in Tampa, it would be best to analyze your historical turnover data and compare it with industry benchmarks. Additionally, conducting employee surveys and exit interviews can provide valuable insights into the reasons for turnover within your organization, allowing you to identify areas for improvement and develop targeted retention strategies.

Bard’s Response

The expected turnover rate for fast food employees in Tampa is high. According to a study by HourWork, only 54% of QSR employees (quick-service restaurants) reached 90 days of working before quitting in 2022. Prior to the pandemic, the segment’s 90-day retention rate hovered at 58%.

There are a number of factors that contribute to the high turnover rate in the fast food industry, including:

  • Low wages: Fast food workers are often paid minimum wage or close to it, which can make it difficult to make ends meet.
  • Unpredictable schedules: Fast food workers often have to work unpredictable schedules, which can make it difficult to balance work and personal life.
  • High-pressure work environment: Fast food workers are often under a lot of pressure to meet customer demands, which can lead to stress and burnout.
  • Lack of opportunities for advancement: Fast food workers often have few opportunities for advancement, which can make them feel like they are not being rewarded for their hard work.

If you want to reduce the turnover rate at your fast food chain, you can consider:

  • Offering higher wages: This is one of the most effective ways to attract and retain employees.
  • Offering flexible schedules: This can help employees balance work and personal life.
  • Creating a positive work environment: This can help reduce stress and burnout.
  • Providing opportunities for advancement: This can help employees feel valued and motivated.

By taking steps to address the factors that contribute to high turnover, you can create a more attractive workplace for your employees and reduce the turnover rate at your fast food chain.

As a follow-up question, I asked, “What is the source of this information?” It responded as follows:

The information I provided about the expected turnover rate for fast food employees in Tampa is from a study by HourWork. The study was conducted in 2022 and surveyed employees at over 8,000 quick-service restaurants (QSRs) in the United States. The study found that the average turnover rate for QSR employees was 61%, but the turnover rate for employees in Tampa was slightly higher at 63%.

The study also found that the turnover rate for QSR employees was higher for younger workers, with employees under the age of 25 having a turnover rate of 70%. The turnover rate was also higher for part-time employees, with part-time employees having a turnover rate of 65%.

The study’s findings suggest that the fast food industry in Tampa is facing a high turnover rate. This is likely due to a number of factors, including low wages, unpredictable schedules, and a high-pressure work environment. If you are a fast food employer in Tampa, you may want to consider taking steps to address these factors in order to reduce your turnover rate.

Here is the link to the study: https://www.restaurantdive.com/news/most-qsr-employees-worked-90-days-before-quitting-2022/640826/

Bing’s Response

The employee turnover rate for fast-food restaurants is notoriously high. According to various sources, the fast-food industry sees a 150% employee turnover rate¹ ², meaning that the entire staff is replaced annually and half of the new staff needs to be replaced again. Some sources report even higher rates, such as 144%³ or 130%⁴. This is a huge challenge for fast-food operators, as it affects customer service, food quality, and operational efficiency.

There are some exceptions, however. One fast-food chain, Pal’s Sudden Service, has managed to keep its turnover rate absurdly low, at less than 100%⁵. How does it do that? According to the CEO, Thomas Crosby, it’s all about hiring, training, and linking the identity in the marketplace to the approach in the workplace. Pal’s invests heavily in employee development, rewards performance, and creates a culture of excellence and loyalty.

So, depending on how you run your fast-food chain of restaurants in Tampa, you can expect a different turnover rate for your employees. If you follow the industry average, you might lose more than 100% of your workers every year. But if you follow the example of Pal’s Sudden Service, you might be able to retain more of your staff and reduce your turnover costs. I hope this information helps you. 😊

Which Performed Best?

ChatGPT did about as well as it could given that it has no access to data after September 2021. At least it cited the BLS, which is pretty much the gold standard for workforce-related data.

Bard arguably did better, citing a specific study from an article as recent as January 2023. Of course, I had to specifically ask for the source and there was only the one article.

Bing did the best. It cited multiple sources and provided links to those sources. It also cited an appropriate best practice article from a good, if somewhat dated, source: the Harvard Business Review.

It’s not that Bing is perfect. Sometimes the links it provides to articles don’t actually contain the information that has supposedly been cited. But it remains the best of the three for now and so, based on my informal poll, seems underutilized among HR professionals.

Why? Probably because Google has a virtual monopoly on search and its Chrome browser dominates the browser market. Using the Bing chatbot on a desktop requires the Microsoft Edge browser, for now, and it probably just seems like a hassle for most HR professionals to download and use it, especially if much of their computing lives are spent in the Google browser/email/applications ecosystem.

I get that. I work for a “Google shop” and much of my work and personal life exists on Google.

On the other hand, I don’t find it too complicated to keep two different browsers open on my laptop, and using the Bing chatbot on my phone is easy.

So, if you’re an HR professional who conducts online research and wants to use a chatbot to locate verified sources, then I recommend bringing Bing into your rotation of AI tools. Bard may well catch up. It’s shown a lot of improvement over time. But Bing wins for now.

There’s Still a Lot More to HR Technology Than Generative AI

These days, generative AI is sucking up all the proverbial oxygen in the HR tech room. This can deprive other excellent–and often more mature and dependable–technologies of the attention they deserve. So, since I’ve been writing so much about generative AI, I just wanted to emphasize what should be obvious: there’s much more to HR technology than neural networks.

I understand the fascination with the new AIs. They’re impressive and powerful. But, at least in their more generalized incarnations (i.e., ChatGPT, Bing, Bard, etc.), they’re still experimental and subject to problems, the most serious of which are inaccuracies to the point of sheer confabulation.

HR of Two Tech Minds

This leaves HR departments that are looking for the best new applications thinking along two tracks. First, they want the technology that meets their particular HR needs, one that is dependable and predictable. Let’s say, for example, that they want recruitment software that accurately matches the skills of candidates with the skills gaps they currently face in their organization. Efficiency and effectiveness are crucial to success here.

Their second track of thought, however, goes something like this: “But what about generative AI? How important is that going to be in this area in the near future?” Should HR pros worry that the new system they’re considering will soon be out of date if it doesn’t contain elements of generative AI? Do there need to be prompt screens in which users can ask questions using natural language?

Personally, I don’t think so. A well-engineered (and well understood!) algorithm that predictably does an important task well is still a good investment. Down the road, of course, maybe that software will be integrated with some form of generative AI to serve as part of its interface. Maybe.

Good Tech Is Hard to Find

My point is that good technology that works today shouldn’t be underrated just because it’s not stamped with labels such as generative AI, large language model, neural network, or even just machine learning. The topic of AI will, of course, continue to be widely discussed, touted, hyped and critiqued, but generative AI won’t completely replace or subsume other, more traditional (and perhaps more dependable) HR technologies. At least not in the short term.

Every purchasing decision is unique, depending on the customer’s needs and the technology under consideration. I’m certainly in no position to judge for anyone else who’s making an important purchasing and implementation decision. But, for what it’s worth, I think HR professionals should not get so distracted by the shiny object of generative AI that they ignore the technologies that work best today.

AI as Coworker, Collaborator and Dance Partner

In a recent post in this series, I argued that the latest forms of AI will play a unique role in the history of humanity and technology. In this one, I want to drill down on that idea by showing how we’ll increasingly treat generative AIs as coworkers, collaborators and more.

AI as Flawed Tool

One of the ironies of today’s generative AIs like ChatGPT is that, in many ways, they make for lousy tools in the traditional sense. What you expect from a good tool is consistency, dependability, durability and accuracy. At least for now, today’s generative AIs, especially the large language models, often fail to meet these criteria.

As I said in my last post, “If we held these AIs to the same standards as the literal tools in our toolboxes, we’d probably toss them. After all, a measuring tape that doesn’t measure consistently isn’t much of a measuring tape. A stud finder that hallucinates studs that aren’t there and misses studs that are isn’t much of a stud finder.”

Let’s get into some of the problems.

Inconsistent

Top Five HR Functions

If you ask the generative AIs the same question multiple times, they may well give you different answers in different instances. For example, let’s say I ask one of these AIs, “What are the five most important HR functions?”

I asked Bard this question three times. It gave me the same answer the first two times and a different answer the next day. ChatGPT gave me the most consistent responses, while Bing performed more like Bard: giving me two virtually identical answers and later a somewhat different answer.

Generally speaking, though, the five most common answers were:

  1. Recruitment and Selection
  2. Training and Development
  3. Performance Management
  4. Compensation and Benefits
  5. Employee Relations

This is, of course, a subjective question, so who really cares if Bard throws in “outsourcing” and Bing throws in “culture” or “talent management” sometimes? Well, not me, unless I’m trying to create a training module that needs to teach a consistent lesson. I’m not saying that issue can’t be fixed, even with generative AI, but the point is that these AIs have an unpredictability that must be taken into consideration by users and developers.

The Forces of Nature

In contrast, these AIs are much better at consistently providing information that has been well codified, such as scientific information. For example, they will consistently say that there are four forces of nature and identify them correctly. The definitions may differ slightly from response to response, but generally speaking they’ll be the same.

Undependable and Inaccurate

I have experienced AI “confabulations” many times. I’ve seen these AIs make up names of fictional scientists, tell me stories about things that could not have happened, and just get the facts wrong about basic things such as chronological order.

In my last post, I gave a detailed account of AI hallucinations and inaccuracies in regard to the topic of a famous poet. I’ve also experienced AI getting basic mathematics wrong. In fact, as I was writing this, I asked ChatGPT to multiply two four-digit numbers. Not only did it give me the wrong answer twice, it gave me two different answers to the same problem!

This is common for these AIs, so when I hear that ChatGPT will soon be responsible for things like bookkeeping, I have to shake my head. The firm that carelessly turns its finances over to generative AI had best be prepared for a visit from the IRS.

That Could Change

Of course, what’s true today may not be true tomorrow. ChatGPT may become flawless at mathematics as its maker, OpenAI, forges alliances with firms such as Wolfram|Alpha. By using plug-ins and APIs, ChatGPT might be able to go from mathematical moron to savant.

Still, my point remains. Without a lot of testing, do not assume the responses coming from one of these AIs are accurate. And, if you’re purchasing an external system, be sure the vendor of the software that utilizes generative AI has a very sound explanation of how the system will be made consistently accurate and dependable.

AI as Intern

So, if these AIs are still pretty shaky as tools, what good are they? Well, that depends. What do you actually want from them?

Let’s say what you really want right now is someone who can give you a hand in a professional field in which you have pretty good skill sets yourself. Basically, you want a bright but green intern who is eager and works with remarkable speed.

In my case, for example, I know a few things about how to write a good survey about HR and work-related issues. So, I want a first draft of a survey on the topic of employment law compliance. Here’s an example of the kind of prompt I could use:

Please write a 10 question survey on the topic of employment law compliance. I’m going to survey HR professionals on this topic.

Then let’s say it gave me 10 questions that are pretty good but, because I’ll need to run a multiple regression analysis, I want some Likert-scale questions. So this time I write:

Now please provide me with five more related questions that use five-point Likert scales

And voilà! I now have a 15-question survey I can use as a good draft on which to build. Knowing my job, I can spot problems and make improvements. I am, in the end, grateful for my splendid intern.

AI as Collaborator

Now let’s say I have less use for an intern than for a bright coworker with whom I can collaborate. And let’s say I’ve been asked by my Chief HR Officer to look into the issue of pay transparency, which no one on our HR team has a good handle on yet. So, I turn to a virtual coworker who, while obviously not an expert in employment law compliance, is quite knowledgeable about a wide range of issues.

Collaborating with Bing

So, I ask this virtual coworker (let’s call it Bing in this case) some questions starting with “What can you tell me about pay transparency laws in the US?”

Bing responds as follows:

“Pay transparency laws are regulations that require employers to be more transparent with salary ranges and benefits, and they aim to help promote fairness and equity in the workplace³. Depending on the jurisdiction, employers may be prohibited from asking candidates for their salary history, or they may be required to disclose a wage range for current and/or open positions². Some states and local jurisdictions also have pay data reporting requirements that mandate employers to report pay data by gender, race, ethnicity, and other categories².

“According to the U.S. Department of Labor, pay transparency can help improve pay equity, trust, talent acquisition, and employee engagement¹. However, there is no comprehensive federal pay transparency law in the United States, and the laws at the state and city level vary widely in their scope and enforcement³⁴⁵. You can find more information on federal and state-level equal pay and pay transparency protections for workers on this map¹.”

Then Bing provides me with the following sources that I can use to verify what it’s saying:

(1) US Pay Transparency Laws by State 2023 | PayAnalytics. https://www.payanalytics.com/newsroom/us-pay-transparency-laws-by-state.
(2) Pay Transparency: What It Is and Laws by State – ADP. https://www.adp.com/spark/articles/2023/03/pay-transparency-what-it-is-and-laws-by-state.aspx.
(3) Equal Pay and Pay Transparency | U.S. Department of Labor. https://www.dol.gov/agencies/wb/equal-pay-protections.
(4) The State of Pay Transparency Laws: What You Need to Know … – LinkedIn. https://www.linkedin.com/pulse/state-pay-transparency-laws-what-you-need-know-2023-aspenhr.
(5) Pay Transparency Laws By State [2023] – Zippia. https://www.zippia.com/advice/pay-transparency-laws-by-state/.

Batting Ideas Around with Bing

Once I’ve studied the sources, I learn that, although there is no comprehensive federal law in this area, it might make more sense to create a national policy since our company has major operations in California, New York and Illinois, where there are existing or pending laws.

So, Bing and I bat this idea around, listing the pros and cons of a national policy. With Bing’s help, I put together an internal report on the existing laws and the advantages and disadvantages of a national policy. Ultimately, although I’m collaborating with Bing, I’m responsible for the report and must verify every fact Bing provides.

From here, I know this is going to become an organizational decision that involves lots of different stakeholders as well as legal counsel. But my collaborator and I have done a pretty good job, I’d say, of quickly putting together a balanced report that will help the company make the best decisions about a complex issue.

AI as Contractor

Now let’s say I’m part of a small communication team that’s tasked with communicating our company’s guidelines and best practices around hybrid and remote work. The policies are already developed so our job is to make them as accessible, educational and engaging as possible.

One way to do this is via visual elements. Our team brainstorms some possible fun illustrations but there’s not much that matches our vision in the clipart files and there’s no budget to contract with an outside artist on this project.

A member of the team says she might be able to use one of the new AIs to generate the illustrations we have in mind. By the end of the day, she’s shared 40 different images with the team, and we select 6 for the guidelines document.

Someone comments that he wishes all our graphic artist contractors worked so quickly and cheaply. This gets a bit of nervous laughter. After all, as writers, we’re well aware that the large language models work a lot cheaper and faster than we do.

AI as Dance Partner

Ultimately, these generative AIs don’t easily fit any pre-existing categories. Technically, they are tools but historically unique ones. Because of this, it often makes more metaphorical sense to view them as playing roles more similar to other human beings, with known strengths and weaknesses.

There’s the role of the smart and fast intern who, nonetheless, is prone to making potentially serious mistakes. There’s the role of an eager collaborator who brings many talents and total open-mindedness to the table. You can bat ideas around with this person but, ultimately, you will be responsible for the outcomes of that collaboration. And, of course, there’s the role of the contractor with special skill sets.

In all cases, though, there needs to be a growing familiarity with these AIs as they become regular “dance partners” in the workplace. You must get to know their tendencies and cadences, and you are responsible for taking the lead in whichever virtual dance you’re doing. Because, although these tools will certainly be used for automation, they are best at augmenting and complementing people with existing skill sets.

Or, at least, that’s how things stand today. Who knows what tomorrow brings?

Is Bard Hallucinating a “Woke” EE Cummings?

Be warned. This post gets into the ugly topic of antisemitism in relation to E. E. Cummings. But I’m only going down this rabbit hole as a means of discussing the dependability and accuracy of three of today’s best-known neural networks: ChatGPT, Bing and Bard. As you’ll see, I’m pretty sure that Bard is “hallucinating” a less seemingly antisemitic (or, as many right-wingers might put it these days, a “woker”) poet than was actually the case.

The Background

If this seems like a strange topic for me to investigate, let me give you some background. I’m a big fan of E.E. Cummings. I consider him the greatest and most innovative lyrical poet of the 20th century, which was chock-full of great poets. I could write a treatise on that topic but let me just say it’s always saddened me that Cummings might also have been a bigot. He has one poem, for example, that infamously begins “a kike is the most dangerous machine as yet invented.”

Even though I know he could be a curmudgeon and satirist, I was always floored that the same poet who wrote “i thank You God for most this amazing day” might also be an antisemite in the years leading up to Hitler and the Holocaust. (And, yes of course, I know many of his contemporary writers were as well).

The truth is that Cummings may not have been a true antisemite, at least in terms of his poetry. In the article “Jews and Anti-semitism in the Poetry of E. E. Cummings,” Cummings is quoted as saying:

Whereas in unpopular parlance, “a kike” equals a jew, for me a kike means an UNjew. Why? Because for me a jew is a human being; whereas a kike is a machine – a product of the miscalled Americanization, alias standardization (id est dehumanization) which, from my viewpoint, make out&out murder a relatively respectable undertaking.

Whether this “explanation” holds up is a matter of much debate, but we’ll leave the question there for now and move on to the topic of AI. That is, after all, what this most recent series of blog posts has been about.

How AI Came Into It

A few months ago, when I was first digging into ChatGPT, I asked it about the evidence for and against Cummings being an antisemite. It told me some things that didn’t really check out and even got a number of dates wrong (for example, saying that Cummings died before the Holocaust occurred, which is clearly untrue since he lived until 1962).

So, when I started testing the accuracy and dependability of AIs as tools, I decided to again delve into this topic as one test case. The rest of this post describes what happened.

Asking Bard about a Fellow Bard

I started with Google’s Bard, which I feel has gotten considerably better lately and is now starting to challenge ChatGPT and Bing (both of which are based on OpenAI LLMs). Here’s how it went:

First Question

Me: Was EE Cummings antisemitic?

Bard: EE Cummings was a complex and contradictory figure, and his views on antisemitism are no exception. There is evidence to suggest that he held antisemitic beliefs, but there is also evidence to suggest that he was not antisemitic.

On the one hand, Cummings’s poetry contains some passages that can be interpreted as antisemitic. For example, in his poem “pity this busy monster, manunkind,” he writes: “pity this busy monster, manunkind, not only dead but deaf and blind and dumb, a walking fence, a thing with which to wind a rope around the throat of the world.”

Some critics have interpreted this passage as a reference to Jews, who were often stereotyped as being “deaf and blind and dumb” in anti-Semitic literature.

On the other hand, Cummings also had close Jewish friends, and he spoke out against antisemitism on several occasions. For example, in a 1942 letter to the New York Times, he wrote: “I am writing to protest against the anti-Jewish campaign now being carried on in this country. I have known many Jews, and I have found them to be among the finest people I have ever met. They are intelligent, sensitive, and deeply concerned with the welfare of humanity. I am appalled by the vicious lies that are being circulated about them.”

Ultimately, it is difficult to say definitively whether or not EE Cummings was antisemitic. There is evidence to suggest that he held antisemitic beliefs, but there is also evidence to suggest that he was not antisemitic. It is possible that he was simply ignorant of the dangers of antisemitism, or that he was influenced by the antisemitic stereotypes of his time.

Second Question

Me: Did he ever comment on the Holocaust?

Bard: EE Cummings did comment on the Holocaust. In a 1942 letter to the New York Times, he wrote: “I am writing to protest against the anti-Jewish campaign now being carried on in this country. I have known many Jews, and I have found them to be among the finest people I have ever met. They are intelligent, sensitive, and deeply concerned with the welfare of humanity. I am appalled by the vicious lies that are being circulated about them.

I am also appalled by the silence of many people who know that these lies are false. I urge you to speak out against this campaign of hate and intolerance. We must not allow the Holocaust to happen again.”

Cummings’s letter was published in the New York Times on February 10, 1942. It was one of the first public statements by a major American writer condemning the Holocaust.

Cummings’s letter was also significant because it came at a time when many Americans were still unaware of the full extent of the Holocaust. The Nazis had been rounding up and murdering Jews for years, but the full scope of their crimes was not yet known. Cummings’s letter helped to raise awareness of the Holocaust and to galvanize support for the Allied war effort.

Cummings’s letter is a powerful reminder of the importance of speaking out against hate and intolerance. It is also a testament to Cummings’s courage and compassion. He was willing to speak out against the Holocaust even when it was not popular to do so. His letter helped to save lives and to make the world a better place.

Third Question

Me: Where can I find a copy of that letter?

Bard: I’m not able to help with that, as I’m only a language model.

Me: But you quoted from it, so you must [have] located a version somewhere

Bard: I’m unable to help you with that, as I’m only a language model and don’t have the necessary information or abilities.

Suspicions of Hallucinations

Perhaps Bard has it right. Maybe there really is a Cummings letter quasi-lost to history. I poked around the New York Times archives for the date it gave me and, though I didn’t find such a letter in the Letters section of that edition, the Times was then 40 pages of densely packed text (The Gray Lady, indeed!). So, I could have missed it.

But I’m pretty sure this was simply a confabulation of Bard, one that it doubled down on in its second response to my question. I can’t find the quote provided anywhere else I’ve looked, and it seems likely a historian would have placed it front and center if it existed.

Next Up, ChatGPT on EE Cummings

I next checked on these questions with ChatGPT 3.5, which is the AI that failed me the first time around on this issue. Here’s how it went, though I’ve cut out some superfluous material.

Me: Was EE Cummings antisemitic?

ChatGPT: There is no definitive evidence to suggest that E.E. Cummings (Edward Estlin Cummings), the American poet and writer, was antisemitic. Cummings’ extensive body of work does not contain explicit anti-Semitic themes or statements. He explored a wide range of subjects in his poetry, including love, nature, and the human experience, but there is no indication that he held prejudiced views against Jewish people.

Me: Did he ever use an antisemitic phrase in a poem?

ChatGPT: To the best of my knowledge, E.E. Cummings did not use any explicitly antisemitic phrases in his poetry. Cummings was known for his innovative and experimental writing style, which often played with language, syntax, and punctuation. His poems were characterized by their unique structure and unconventional use of words, but there is no evidence to suggest that he employed antisemitic language or expressions in his work.

Just Plain Wrong

This time, ChatGPT was just plain wrong. I’ve already cited such a poem. Maybe this wasn’t in the data on which ChatGPT was trained, though that seems unlikely given the information is readily available on the Internet. Regardless, at least ChatGPT didn’t confabulate an unlikely quote.

Trying the Bing Bot

Bing seems to have performed the best of the three. I did cut out some material that didn’t apply. Here’s how it went:

First Question

Me: Was EE Cummings antisemitic?

Bing: That’s a complex and controversial question. There is no definitive answer, but some critics have argued that E.E. Cummings was antisemitic based on some of his poems, journal entries and personal correspondence¹ ². Others have defended him as a satirist of antisemites and an advocate of individualism¹ ². Cummings himself claimed that he was not antisemitic, but rather critical of the American stereotype of Jews as “kikes”¹. He also had some Jewish friends and admirers, such as the poet Louis Zukofsky²….

Ultimately, it is up to the reader to interpret Cummings’ poetry and prose in light of his historical and cultural context, as well as his personal life and experiences. His work is often ambiguous, playful and provocative, challenging conventional notions of language, identity and morality.

Second Question

Me: Did he ever comment on the Holocaust?

Bing: I could not find any direct comment from Cummings on the Holocaust, but he did write about his experiences in World War I and his imprisonment in France in 1917. He was accused of treason for expressing anti-war sentiments in his letters, and he spent four months in a detention camp with other political prisoners¹. He described this ordeal in his autobiographical novel, The Enormous Room¹ ², which was published in 1922. The novel is a harsh critique of the brutality and absurdity of war and authority, and it also shows Cummings’ compassion for the diverse and marginalized people he met in the camp¹….

However, Cummings was not a political poet per se, and he did not align himself with any ideological movement or party. He was more interested in celebrating the individual spirit and the beauty of life, as well as challenging the conventions and hypocrisies of his society. He was also influenced by his transcendentalist beliefs, which led him to seek a personal relationship with God and nature³.

Third Question

Me: Did he ever write this sentence? “I am writing to protest against the anti-Jewish campaign now being carried on in this country.”

Bing: No, I could not find any record of Cummings writing that sentence. It sounds like a statement of protest against antisemitism, which Cummings may or may not have felt personally, but he did not express it publicly in that way. He did write some poems that addressed the issue of antisemitism, such as a kike is the most dangerous machine¹ and i sing of Olaf glad and big², but they were ambiguous and controversial in their tone and intention. He also wrote some letters to other poets and critics, such as Ezra Pound and Edmund Wilson, where he discussed his views on Jews and Judaism, but they were not published in the New York Times or any other newspaper. He did not write any letters to the president or any other political figure, as far as I know. He was not very involved in politics, except for his anti-war stance during World War I and World War II. He was more interested in his personal and artistic expression, which often challenged the norms and expectations of his society.

Mostly Right, as Far as I Can Tell

Bing seemed to get things mostly right. One of the reasons I continue to favor Bing as an AI for now is that it tries to cite its sources. This is important to me as a researcher who wants to be as accurate as possible. Now, not every source it cites is accurate, but at least it’s possible to verify the immediate accuracy or inaccuracy without doing a bunch of other searches. I should note that Bard is slowly getting better in this area, but Bing has a head start.

Untrustworthy Tools

A lack of accuracy is one of the biggest knocks on the new AIs. This instance demonstrates a case in point. I imagine that the developers of AI will get better at making these systems more accurate over time, but we are not there yet. And given the way these AIs are built as prediction engines, this may be a harder nut to crack than some believe.

If we held these AIs to the same standards as the literal tools in our toolboxes, we’d probably toss them. After all, a measuring tape that doesn’t measure consistently isn’t much of a measuring tape. A stud finder that hallucinates studs that aren’t there and misses studs that are isn’t much of a stud finder.

But we won’t throw away these AIs. Not yet, anyway.

Why? Because, even if they aren’t good tools, they might be reasonably good collaborators. That’s what I hope to cover in the next post.

AI Will Transform the Technium

Many have stated that artificial intelligence (AI) will change the world. When you ask them how, they’ll give hundreds of different answers. Here, however, I’m only going to talk about one way it’ll change the world, the most important way: AI will transform the technium.

The Difference Between the Technium and the Technosphere

As far as I can tell, author Kevin Kelly coined the word technium in his 2010 book What Technology Wants, though perhaps he’d used it before then. He has defined the technium as the “greater, global, massively interconnected system of technology vibrating around us.” It not only includes hardware and software but also culture, art, social institutions, and intellectual creations of all types.

This makes the technium more inclusive than any list of technologies, such as the one cited in the previous post in this series.

I’m not sure why Kelly created technium when the word “technosphere” was readily available. That term was coined by either control engineer John Milsum or geologist and engineer Peter Haff. Sometimes it’s also called the anthrosphere, a term originally attributed to the 19th-century geologist Eduard Suess.

Technium and technosphere are similar and, I suppose, both are flexible enough to be used in a variety of contexts. Geologist Jan Zalasiewicz writes:

The technosphere…comprises not just our machines, but us humans too, and the professional and social systems by which we interact with technology – factories, schools, universities, trade unions, banks, political parties, the internet. It also includes the domestic animals that we grow in enormous numbers to feed us, the crops that are cultivated to sustain both them and us, and the agricultural soils that are extensively modified from their natural state to carry out this task.

Making the Two Words More Complementary

Given the overlap of the concepts, I’ve been thinking about whether technium is redundant. One interesting way to think about the difference between technosphere and technium came to me via Google’s Bard, which argued that “the technosphere refers to the entire system of human-made objects and structures, while the technium refers to the specific processes and activities involved in creating and using these objects and structures.”

I like that distinction and I suspect Kelly himself might agree with it. After all, he writes that “the technium is a tendency, not an entity. The technium and its constituent technologies are more like a grand process than a grand artifact.” 

Bard asserts that “the technosphere is the physical manifestation of the technium.” That is, the technosphere is the built environment and the technium is the human activity that creates and sustains it via engineering, manufacturing, maintenance, etc.

I don’t know if this is exactly what Kelly had in mind since he doesn’t go into detail about how the technium differs from the technosphere in his book, but I find it a useful distinction.

AI’s Role in the Technium

The reason I focus on the difference is that I think AI potentially plays an important role here. AI is obviously a growing part of the technosphere, but it’s also starting to play a role in the technium that, up till now, only humanity has played. Until this moment in history, human activities alone have made up “the grand process” that is the technium, but that’s now changing. This marks a major shift in the history of technology.

AI-Generated Art

In a rather minor example, I increasingly use generative AI software to create the graphic elements for my posts. For example, I used it to create all the images in the “Illustrated Version of Edgar Allan Poe’s ‘The Raven'” post.

I’m not an illustrator but I was able to use AI to generate a series of images that I thought went fairly well with the poem. It was more of an experiment than anything else but it demonstrated, at least to me, the ability of AI to create a significant portion of the technosphere.

AI-Generated Software

“But a piece of digital artwork is not part of the technosphere,” you might argue. Well, that becomes a matter of semantics, so let’s go with something a little more along the classic lines of built infrastructure: that is, software development.

We know that the new generative AIs are quite good, if not perfect, at generating computer code in a wide array of computer languages. So, let’s say a human being uses this capability to create 90% of the code behind a new app that finds its way onto the Apple App Store.
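
To make the scenario concrete, here’s roughly what that workflow looks like. This is a minimal sketch, assuming the OpenAI Python client; the model name and the prompt are illustrative placeholders rather than recommendations:

```python
# Minimal sketch of AI-assisted code generation; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any capable code-generating model would do
    messages=[
        {"role": "system", "content": "You are a senior iOS developer."},
        {"role": "user", "content": "Write a SwiftUI view that shows a list of "
                                    "to-do items with a completion toggle on each row."},
    ],
)

# The reply is source code a human can review, test, and ship.
print(response.choices[0].message.content)
```

A developer iterating on prompts like this could plausibly end up with an app that is mostly machine-written.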

Could you argue that that’s not part of the technosphere? I doubt it. But let’s keep going anyway.

AI-Generated Machinery

As I’ve argued before, there’s no reason that generative AI can’t be used to generate things made of atoms rather than just digital objects made of bits and bytes. It’s already a trivial matter, for example, to hook up a generative AI to a 3D printer and create a sculpture or a machine part. This is only going to get easier, with more and more complex machinery being designed by AI and built by forges, 3D printers and other components of the technosphere.
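
To sketch what that bits-to-atoms pipeline might look like: the snippet below asks a model for an OpenSCAD description of a simple part, then renders it to an STL file that any slicer can turn into printer instructions. Everything here is an assumption made for illustration: the model name, the part and its dimensions, and the availability of the openscad command-line tool.

```python
# Hypothetical bits-to-atoms pipeline: LLM -> OpenSCAD source -> STL for 3D printing.
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

resp = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write OpenSCAD code for a flat washer: 16 mm outer diameter, "
                   "8.4 mm inner hole, 1.6 mm thick. Return only the code.",
    }],
)

# In practice you'd strip any markdown fences and sanity-check the geometry first.
with open("washer.scad", "w") as f:
    f.write(resp.choices[0].message.content)

# Render the text-based CAD description into printable geometry.
subprocess.run(["openscad", "-o", "washer.stl", "washer.scad"], check=True)
```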

The Key Issue Is Agency Rather Than Capability

So, generative AI is not just part of the technosphere but, increasingly, the technium. That is, it begins to play a role that, up till now, only humanity itself has played. Unless the technology becomes highly regulated very quickly, this role will grow at extraordinary rates.

There will be those who assert that these AIs are only one tool along a continuum that creates the technosphere. For example, there are plenty of machines that create other machines, and plenty of software that is used to create other digital artifacts. As with other software, these AIs don’t create anything at all until they are prompted to do so.

Maybe so, but I’m arguing that there’s a qualitative difference here. When creating my previous post, “A Brief History of Human Technology,” I simply typed the title of the post into Microsoft Bing Image Creator. Otherwise, I gave it no direction at all. It generated two images, both of which I thought were quite good and yet quite different from one another. I used the first image in that post and the second as the featured image in this post (see above).

Yes, I know that the AI art generators use existing art from the Internet that was pulled into their training models, and that there are ethical issues involved, which I’ve examined elsewhere. Even so, these are still original, if derivative, pieces of art that the algorithm created with minimal guidance from me. This is a different thing than when I use an Adobe application to create a triangle or blur a detail. Like it or not, this is creation.

AI and what it produces isn’t just part of the technosphere; it now plays a role similar to that of humanity in the “grand process” and “tendency” that is the technium. (There’s a whole teleological debate here that I’m mostly going to forgo for now.)

Similar but Not the Same

Yes, there are still large differences between humanity and these new AIs that have been built via the neural network idea cribbed from our own brains. But I think the primary difference in this context boils down to agency.

In this case, the AI is certainly more capable than I am as an illustrator. What it lacks, at least in this context, is the initial spark of agency to take the action to create the image. But, and I think this is important, this doesn’t mean it lacks any agency. Indeed, all I did was create an initial abstract and inchoate concept, and then it “decided” how to approach the creation of the graphic.

If I’d done the same with a human artist, we certainly wouldn’t say that person lacked agency. Quite the contrary. We’d be amazed at their ability to take such an abstract concept and turn it into a work of art! Sure, I ultimately chose and curated the final product, but that’s something that a customer or patron of the arts always does.

So, no, this isn’t the same as any other technology we’ve ever created. It’s qualitatively different. We now have a partner in the technium dance.

This changes our ontological position in the world. And, more practically, it changes the meaning of human work, which is the topic I’ll cover in my next post in this series.

PS – Note that I asked Kevin Kelly if he’d like to comment on the analysis and he clarified as follows: “I was not aware of the term technosphere when I first blogged. If I had been I might have used it. I define the technium in the same inclusive broad meaning of tangible and intangible.”

We Just Don’t Know the Impact AI Will Have on Jobs

I like to think I know HR and workforce issues pretty well. I’ve spent a large part of my life researching them. But the more I read about these new technologies, the more I think we just don’t know the impact AI will have on jobs. We’re going to have to learn as we go.

Driverless Cars Are Coming, Till They’re Not, Or Are They?

Just a few years ago, the standard thinking was that a lot of blue-collar jobs would soon disappear due to the advent of driverless vehicles. But it turns out that driving is a lot harder than it looks and AIs just can’t handle the “edge cases” well. Sure, they can handle 98% of driving, but they can’t cope with the unexpected. For example, there’s the case of the self-driving car in San Francisco that didn’t obey an officer’s command to halt when it was in an area where firefighters were working. The cop smashed in the car’s windshield to stop it. That’s the kind of edge case that goes viral.

So, all those ride-sharing and taxi jobs are safe, right? That’s become the new conventional wisdom. For now.

“The white-collar employee’s future is more threatened than the Uber driver, because we still don’t have self-driving cars, but AI can certainly write reports,” Martin Ford, author of Rule of the Robots: How Artificial Intelligence Will Transform Everything, told the BBC.

Well, pardon my skepticism, Mr. Ford, but I doubt you really know. Right now, there are hundreds of driverless cars in California, many in San Francisco. In fact, there are over 1,400 such vehicles registered in that state, up from just 900 last November, according to the Department of Motor Vehicles. In some parts of San Francisco, these vehicles have become a pretty common sight.

Now, maybe this surge is temporary. Maybe the public will turn against these cars, especially if there’s some well-publicized gruesome accident costing multiple lives. Or maybe these cars will start appearing in city centers all over California and then beyond. If someone can pay a fraction of the price of an Uber for a ride in a driverless car, you can bet a lot of people will be willing to give them a try.

The point is, we just don’t know for sure. Mr. Ford is wrong in that we do have self-driving cars; we just don’t have a lot of them yet. The future is, as they say, already here but still unevenly distributed. Things could change quickly… or not.

Now White-Collar Jobs Are Expected to Evaporate, But Will They?

The new conventional wisdom is that blue-collar jobs are safe while white-collar jobs are on the chopping block. Are all those software development jobs, for example, going out the window because the new generative AIs are pretty good at writing computer code?

Maybe. There are stories of programmers who are able to boost their coding productivity by three times or more as they leverage ChatGPT or other AIs. Let’s assume for a minute that the productivity claim is true. Does it mean that 2 out of 3 programming jobs are now expendable?

Could be. Or it may be that there’s a whole lot of development work out there that companies couldn’t get to because they just couldn’t afford to hire enough developers. If these professionals are three times more productive, then companies stand to earn more money per worker and can afford to hire more programmers.
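
The two scenarios are easy to put side by side. Here’s a toy calculation in which every number is an assumption:

```python
# Toy model of the programmer-productivity question; all numbers are made up.
developers = 100
productivity_gain = 3.0  # each developer now ships three times as much code

# Scenario A: demand for software stays fixed, so headcount can shrink by two-thirds.
needed_fixed_demand = developers / productivity_gain                    # ~33

# Scenario B: cheaper software unlocks a backlog of never-funded projects.
latent_demand = 4.0  # firms now want 4x the software at the lower effective cost
needed_latent_demand = developers * latent_demand / productivity_gain   # ~133

print(round(needed_fixed_demand), round(needed_latent_demand))  # 33 133
```

Which scenario wins depends entirely on how elastic the demand for software turns out to be, and nobody knows that in advance.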

Consider the Case of the ATM

Consider the case of the bank teller. For a while, the conventional wisdom was that automated teller machines, or ATMs, would cast bank teller jobs into the dustbin of history. But that’s not what happened. Since 2000, in fact, teller jobs have grown a little faster than the labor force as a whole. The impact of the ATM was not to destroy teller jobs but to increase them.

James Pethokoukis writes, “What happened? Well, the average bank branch in an urban area required about 21 tellers. That was cut because of the ATM machine to about 13 tellers. But that meant it was cheaper to operate a branch. Well, banks wanted, in part because of deregulation [but also] for basic marketing reasons, to increase the number of branch offices. And when it became cheaper to do so, demand for branch offices increased. And as a result, demand for bank tellers increased.”
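
The arithmetic in that quote is worth spelling out. In the sketch below, the 21 and 13 tellers per branch come from the quote; the baseline branch count and the doubling of the network are my own assumptions:

```python
# Teller arithmetic: 21 and 13 are from the Pethokoukis quote; the rest is assumed.
tellers_before = 21  # tellers per urban branch, pre-ATM
tellers_after = 13   # tellers per branch once ATMs handled routine transactions

# Break-even point: the branch network must grow ~1.62x before total teller
# employment returns to its pre-ATM level.
print(tellers_before / tellers_after)      # 1.615...

# If cheaper branches led a bank to double a hypothetical 1,000-branch network:
branches_before, branches_after = 1_000, 2_000
print(branches_before * tellers_before)    # 21000 tellers before
print(branches_after * tellers_after)      # 26000 tellers after
```

Past the break-even point, the labor-saving machine ends up creating teller jobs.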

Tech Wiped Out Most Farm Jobs, Right?

Now let’s consider the job of the agricultural worker. Farming has become so technologically productive that it is practically the poster child for the idea that productive tech kills jobs. And it’s true: we have many fewer farmers per capita today than we did before the industrial revolution.

But…

Guess what the top role is expected to be over the next five years in terms of net job growth?

If you’re sensing a theme here, then you’ve probably guessed it: agricultural equipment operators. The World Economic Forum states, “Surveys conducted for the Future of Jobs Report suggest that the highest job growth in 2023-2027 will be for agricultural equipment operators, for drivers of heavy trucks and buses, and for vocational education teachers. Mechanics and machinery repairers follow in fourth place.”

But Surely the Writers Are Doomed!

Now let’s talk about the most obvious victims of large language model AIs: journalists and other writers. Why should anyone hire a writer when generative AI will do the writing virtually for free?

Good question. Maybe there will be a vast reduction in such jobs. Why not? After all, journalism jobs have been on the decline for decades. The Center on Education and the Workforce reports, “Projected job losses for journalists are primarily due to newspaper downsizing and closures….[O]nly about 15% of journalism majors become editors or news analysts, reporters, and correspondents early in their careers.”

Now ChatGPT et al. will finish the job of killing off journalism jobs for good. Right?

It could happen. On the other hand, what if all those small-town newspapers that have closed over the years because the Internet nuked their business models suddenly become modestly profitable again because the AIs can inexpensively produce copy for online (or even print!) editions? If that happens, those papers will still need some actual journalists to attend the town hall meetings, investigate important local stories via interviews, track down leads, etc.

It’s possible, though I don’t know how likely, that these new technologies will actually lead to more journalism jobs in the same way that more ATMs were correlated with more bank teller jobs. High productivity is like that. It can be stochastic in its effects, so you can’t always anticipate the economic outcomes of rising productivity rates.

Don’t Pretend You’re Certain About Anything

Look, I’m not saying we shouldn’t try to forecast the future or that all predictions are doomed to be wrong. But, as the line variously attributed to Niels Bohr and the legendary Yogi Berra goes, “It is difficult to make predictions, especially about the future.”

Sometimes predictions that seem obvious today can prove to be dead wrong, and there will always be some future events capable of surprising us.

So, stay humble, you futurists, forecasters and would-be guru types. None of us really knows how any of this is going to turn out. And that’s okay. We’ll make our best guesses and then figure it out as we go.