AI as Coworker, Collaborator and Dance Partner

In a recent post in this series, I argued that the latest forms of AI will play a unique role in the history of humanity and technology. In this one, I want to drill down on that idea by showing how we’ll increasingly treat generative AIs as coworkers, collaborators and more.

AI as Flawed Tool

One of the ironies of today’s generative AIs like ChatGPT is that, in many ways, they make for lousy tools in the traditional sense. What you expect from a good tool is consistency, dependability, durability and accuracy. At least for now, today’s generative AIs, especially the large language models, often fail to meet these criteria.

As I said in my last post, “If we held these AIs to the same standards as the literal tools in our toolboxes, we’d probably toss them. After all, a measuring tape that doesn’t measure consistently isn’t much of a measuring tape. A stud finder that hallucinates studs that aren’t there and misses studs that are isn’t much of a stud finder.”

Let’s get into some of the problems.

Inconsistent

Top Five HR Functions

If you ask the generative AIs the same question multiple times, they may well give you different answers in different instances. For example, let’s say I ask one of these AIs, “What are the five most important HR functions?”

I asked Bard this question three times. It gave me the same answer the first two times and a different answer the next day. ChatGPT gave me the most consistent responses, while Bing performed more like Bard, giving me two virtually identical answers and, later, a somewhat different one.

Generally speaking, though, the five most common answers were:

  1. Recruitment and Selection
  2. Training and Development
  3. Performance Management
  4. Compensation and Benefits
  5. Employee Relations

This is, of course, a subjective question, so who really cares if Bard throws in “outsourcing” and Bing throws in “culture” or “talent management” sometimes? Well, not me, unless I’m trying to create a training module that needs to teach a consistent lesson. I’m not saying that issue can’t be fixed, even with generative AI, but the point is that these AIs have an unpredictability that must be taken into consideration by users and developers.
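If you’d rather measure this unpredictability than just eyeball it, the experiment is easy to script. Below is a minimal sketch, assuming the OpenAI Python client; the model name, prompt wording and three-run design are my own illustrative choices, not a prescription.

```python
# A rough consistency probe: ask the same question several times and
# compare the answers. Assumes the OpenAI Python client is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
PROMPT = ("What are the five most important HR functions? "
          "Answer as a plain list of five items.")

def ask_once() -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model works
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # lower values reduce, but don't eliminate, variation
    )
    return response.choices[0].message.content

# Collect several runs and eyeball (or diff) the results.
answers = [ask_once() for _ in range(3)]
for i, answer in enumerate(answers, start=1):
    print(f"--- Run {i} ---\n{answer}\n")
```

Even with the temperature turned down, runs can differ, which is exactly the property a training-module developer would need to design around.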

The Forces of Nature

In contrast, these AIs are much better at consistently providing information that has been well codified, such as scientific information. For example, they will consistently say that there are four forces of nature and identify them correctly. The definitions may be slightly different from response to response, but generally speaking they’ll be the same.

Undependable and Inaccurate

I have experienced AI “confabulations” many times. I’ve seen these AIs make up names of fictional scientists, tell me stories about things that could not have happened, and just get the facts wrong about basic things such as chronological order.

In my last post, I gave a detailed account of AI hallucinations and inaccuracies in regard to the topic of a famous poet. I’ve also experienced AI getting basic mathematics wrong. In fact, as I was writing this, I asked ChatGPT to multiply two four-digit numbers. Not only did it give me the wrong answer twice, it gave me two different answers to the same problem!

This is common for these AIs, so when I hear that ChatGPT will soon be responsible for things like bookkeeping, I have to shake my head. The firm that carelessly turns its finances over to generative AI had best be prepared for a visit from the IRS.

That Could Change

Of course, what’s true today may not be true tomorrow. ChatGPT may become flawless at mathematics as its maker, OpenAI, forges alliances with the likes of Wolfram|Alpha. By using plug-ins and APIs, ChatGPT might be able to go from mathematical moron to savant.
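To make that concrete, here is one hedged sketch of the general plug-in pattern: intercept arithmetic and hand it to ordinary code rather than letting the model predict digits. The `ask_llm` function and the regex router below are hypothetical stand-ins of mine, not OpenAI’s or Wolfram’s actual interfaces.

```python
# Sketch of the "delegate math to a real calculator" pattern that
# plug-ins and function calling enable. ask_llm() is a hypothetical
# stand-in for whatever chat API you use.
import re

def exact_multiply(a: int, b: int) -> int:
    # Python integers are arbitrary precision, so this is always exact,
    # unlike a language model predicting digits token by token.
    return a * b

def answer(question: str, ask_llm) -> str:
    # Crude router: if the question looks like "1234 * 5678",
    # compute it deterministically; otherwise fall back to the model.
    match = re.search(r"(\d+)\s*[x*×]\s*(\d+)", question)
    if match:
        a, b = int(match.group(1)), int(match.group(2))
        return str(exact_multiply(a, b))
    return ask_llm(question)

# Example: answer("What is 1234 * 5678?", ask_llm=lambda q: "...")
# returns "7006652" without ever consulting the model.
```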

Still, my point remains. Without a lot of testing, do not assume the responses coming from one of these AIs are accurate. And, if you’re purchasing an external system built on generative AI, be sure the vendor has a very sound explanation of how the system will be kept consistently accurate and dependable.

AI as Intern

So, if these AIs are still pretty shaky as tools, what good are they? Well, that depends. What do you actually want from them?

Let’s say what you really want right now is someone who can give you a hand in a professional field in which you have pretty good skill sets yourself. Basically, you want a bright but green intern who is eager and works with remarkable speed.

In my case, for example, I know a few things about how to write a good survey about HR and work-related issues. So, I want a first draft of a survey on the topic of employment law compliance. Here’s an example of the kind of prompt I could use:

Please write a 10 question survey on the topic of employment law compliance. I’m going to survey HR professionals on this topic.

Then let’s say it gave me 10 questions that are pretty good but, because I’ll need to run a multiple regression analysis, I want some Likert-scale questions. So this time I write:

Now please provide me with five more related questions that use five-point Likert scales.

And voilà! I now have a 15-question survey I can use as a good draft on which to build. Knowing my job, I can spot problems and make improvements. I am, in the end, grateful for my splendid intern.
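If drafting surveys became a recurring chore, I could script this same two-turn exchange. Here is a minimal sketch, again assuming the OpenAI Python client (the model name is an illustrative choice); the key detail is that the second request carries the first draft as context.

```python
# Scripting the intern: a two-turn exchange where the follow-up
# request includes the first draft as context. Assumes the OpenAI
# Python client and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
messages = [{
    "role": "user",
    "content": ("Please write a 10 question survey on the topic of "
                "employment law compliance. I'm going to survey HR "
                "professionals on this topic."),
}]

draft = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=messages,
).choices[0].message.content

# Carry the draft forward so the follow-up prompt has context.
messages.append({"role": "assistant", "content": draft})
messages.append({
    "role": "user",
    "content": ("Now please provide me with five more related questions "
                "that use five-point Likert scales."),
})

followup = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
).choices[0].message.content

print(draft, "\n\n", followup)
```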

AI as Collaborator

Now let’s say I have less use for an intern than for a bright coworker with whom I can collaborate. And let’s say I’ve been asked by my Chief HR Officer to look into the issue of pay transparency, which no one on our HR team has a good handle on yet. So, I turn to a virtual coworker who, while obviously not an expert in employment law, is quite knowledgeable about a wide range of issues.

Collaborating with Bing

So, I ask this virtual coworker (let’s call it Bing in this case) some questions starting with “What can you tell me about pay transparency laws in the US?”

Bing responds as follows:

“Pay transparency laws are regulations that require employers to be more transparent with salary ranges and benefits, and they aim to help promote fairness and equity in the workplace³. Depending on the jurisdiction, employers may be prohibited from asking candidates for their salary history, or they may be required to disclose a wage range for current and/or open positions². Some states and local jurisdictions also have pay data reporting requirements that mandate employers to report pay data by gender, race, ethnicity, and other categories².

“According to the U.S. Department of Labor, pay transparency can help improve pay equity, trust, talent acquisition, and employee engagement¹. However, there is no comprehensive federal pay transparency law in the United States, and the laws at the state and city level vary widely in their scope and enforcement³⁴⁵. You can find more information on federal and state-level equal pay and pay transparency protections for workers on this map¹.”

Then Bing provides me with the following sources that I can use to verify what it’s saying:

(1) US Pay Transparency Laws by State 2023 | PayAnalytics. https://www.payanalytics.com/newsroom/us-pay-transparency-laws-by-state.
(2) Pay Transparency: What It Is and Laws by State – ADP. https://www.adp.com/spark/articles/2023/03/pay-transparency-what-it-is-and-laws-by-state.aspx.
(3) Equal Pay and Pay Transparency | U.S. Department of Labor. https://www.dol.gov/agencies/wb/equal-pay-protections.
(4) The State of Pay Transparency Laws: What You Need to Know … – LinkedIn. https://www.linkedin.com/pulse/state-pay-transparency-laws-what-you-need-know-2023-aspenhr.
(5) Pay Transparency Laws By State [2023] – Zippia. https://www.zippia.com/advice/pay-transparency-laws-by-state/.

Batting Ideas Around with Bing

Once I’ve studied the sources, I learn that, although there is no comprehensive federal law in this area, it might make sense for our company to create a single national policy, since we have major operations in California, New York and Illinois, where there are existing or pending laws.

So, Bing and I bat this idea around, listing the pros and cons of a national policy. With Bing’s help, I put together an internal report on the existing laws and the advantages and disadvantages of such a policy. Ultimately, although I’m collaborating with Bing, I’m responsible for the report and must verify every fact Bing provides.

From here, I know this is going to become an organizational decision that involves lots of different stakeholders as well as legal counsel. But my collaborator and I have done a pretty good job, I’d say, of quickly putting together a balanced report that will help the company make the best decisions about a complex issue.

AI as Contractor

Now let’s say I’m part of a small communication team that’s tasked with communicating our company’s guidelines and best practices around hybrid and remote work. The policies are already developed so our job is to make them as accessible, educational and engaging as possible.

One way to do this is via visual elements. Our team brainstorms some possible fun illustrations but there’s not much that matches our vision in the clipart files and there’s no budget to contract with an outside artist on this project.

A member of the team says she might be able to use one of the new AIs to generate the illustrations we have in mind. By the end of the day, she’s shared 40 different images with the team, and we select 6 for the guidelines document.

Someone comments that he wishes all our graphic artist contractors worked so quickly and cheaply. This gets a bit of nervous laughter. After all, as writers, we’re well aware that the large language models work a lot cheaper and faster than we do.

AI as Dance Partner

Ultimately, these generative AIs don’t easily fit any pre-existing categories. Technically, they are tools but historically unique ones. Because of this, it often makes more metaphorical sense to view them as playing roles more similar to other human beings, with known strengths and weaknesses.

There’s the role of the smart, fast intern who is nonetheless prone to making potentially serious mistakes. There’s the role of the eager collaborator who brings many talents and total open-mindedness to the table. You can bat ideas around with this person but, ultimately, you will be responsible for the outcomes of that collaboration. And, of course, there’s the role of the contractor with special skill sets.

In all cases, though, there needs to be a growing familiarity with these AIs as they become regular “dance partners” in the workplace. You must get to know their tendencies and cadences, and you are responsible for taking the lead in whichever virtual dance you’re doing. Because, although these tools will certainly be used for automation, they are best at augmenting and complementing people with existing skill sets.

Or, at least, that’s how things stand today. Who knows what tomorrow brings?

A Brief History of Human Technology

Before I write about artificial intelligence and its potentially pivotal role in history, I want to provide a brief history of human technology. As I noted in my last post in this series, human beings don’t and possibly can’t live without any technology at all. But, for most of our history, these technologies have been relatively simple, at least from our modern perspective. To get a better understanding of how dramatically and rapidly our technologies have changed, let’s consider some timelines.

Millions of Years of Basic Tool Usage

In a very real sense, we humans have been expanding our capabilities via technologies for hundreds of thousands of years. In fact, we were likely doing it long before we were even human. Today, there are examples of tool usage among all the non-human great apes (bonobos, chimpanzees, gorillas and orangutans), which probably means that our common ancestors were also users of tools.

Consider the Hominin timeline below, for example. Our ancestors split away from the ancestors of today’s chimpanzees about eight and a half million years ago, and there’s a good chance those ancestors used wooden tools in ways similar to today’s chimps and bonobos. They do things such as use sticks to fish termites out of mounds and to dig for tubers, wield stones to crack nuts, and even employ leaves to soak up water or wipe their mouths.

From Wikipedia with small modifications by me

A Timeline of Inventions and Technological Advances

As the timeline above shows, however, a rapid growth of tools and technologies began with the advent of Homo sapiens. Although that flowering unfolded over a period of tens of thousands of years, there was a massive uptick in new and powerful technologies around the start of the Industrial Revolution. Consider the following list of some of the most important inventions, though obviously many of these dates are, at best, estimates:

900,000 years ago: Hafting
400,000 years ago: Spears
200,000 years ago: Language
170,000 years ago: Clothing
100,000 years ago: Boats
90,000 years ago: Harpoons
70,000 years ago: Arrows
47,000 years ago: Mining
42,000 years ago: Tally stick
36,000 years ago: Weaving
28,000 years ago: Ceramics
28,000 years ago: Rope
23,000 years ago: Domestication of dogs
16,000 years ago: Pottery
12,000 years ago: Agriculture
9,000 years ago: Alcohol
8,000 years ago: Irrigation
7,000 years ago: Copper smelting
6,500 years ago: Lead smelting
6,000 years ago: Plumbing
5,500 years ago: Domestication of horse
5,300 years ago: Written word
4,300 years ago: Abacus
4,200 years ago: Protractor
3,500 years ago: Glass
3,300 years ago: Water wheel
3,300 years ago: Iron smelting
2,650 years ago: Crossbow
2,650 years ago: Windmill
2,485 years ago: Catapult
2,200 years ago: Paper
1,803 years ago (220 AD): Woodblock printing
1,573 years ago (450 AD): Horse collar
1,446 years ago (577 AD): Sulfur matches
1,405 years ago (618 AD): Bank note
1,223 years ago (800 AD): Gunpowder
935 years ago (1088 AD): Movable type
697 years ago (1326 AD): Cannon
584 years ago (1439 AD): Printing press
525 years ago (1498 AD): Rifle
418 years ago (1605 AD): Newspaper
415 years ago (1608 AD): Telescope
403 years ago (1620 AD): Compound microscope
393 years ago (1630 AD): Slide rule
381 years ago (1642 AD): Mechanical calculator
367 years ago (1656 AD): Pendulum clock
343 years ago (1680 AD): Piston engine

Start of the Industrial Revolution

290 years ago (1733 AD): Flying shuttle
259 years ago (1764 AD): Spinning jenny
258 years ago (1765 AD): Steam engine
230 years ago (1793 AD): Cotton gin
219 years ago (1804 AD): Railway
216 years ago (1807 AD): Steamboat
197 years ago (1826 AD): Photography
195 years ago (1828 AD): Reaping machine
179 years ago (1844 AD): Telegraph
147 years ago (1876 AD): Telephone
147 years ago (1876 AD): Internal-combustion engine
144 years ago (1879 AD): Electric light
138 years ago (1885 AD): Automobile
122 years ago (1901 AD): Radio
120 years ago (1903 AD): Airplane
97 years ago (1926 AD): Rocketry
96 years ago (1927 AD): Television
86 years ago (1937 AD): Computer
81 years ago (1942 AD): Nuclear power
76 years ago (1947 AD): Transistor
72 years ago (1951 AD): First artificial neural network
70 years ago (1953 AD): Structure of DNA discovered
68 years ago (1955 AD): Artificial intelligence term coined
66 years ago (1957 AD): Spaceflight
65 years ago (1958 AD): Perceptron, artificial neural network for pattern recognition
64 years ago (1959 AD): Machine learning term coined
50 years ago (1973 AD): Cell phone
49 years ago (1974 AD): Personal computer
49 years ago (1974 AD): Internet
39 years ago (1984 AD): 3D-printing
28 years ago (1995 AD): DNA sequencing
11 years ago (2012 AD): CRISPR
8 years ago (2014 AD): Generative adversarial network AIs
5 years ago (2018 AD): Generative pre-trained transformer AIs

These technologies are all now part of our technosphere. If we picture that sphere as a kind of balloon, then we can see that it filled up relatively slowly at first but picked up momentum around 40,000 years ago, then really took off about 400 years ago.

Are Breakthroughs Speeding Up or Slowing Down?

The Speeding Up Theory

Some thinkers believe that we are in the midst of a virtual explosion of technology. Futurist Ray Kurzweil claims that we are in a state of exponential technological growth driven by the law of accelerating returns.

Back in 2001, he wrote, “An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense ‘intuitive linear’ view. So we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate). The ‘returns,’ such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.”
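For what it’s worth, that 20,000-years figure drops out of a simple assumption (my paraphrase of Kurzweil, not his exact model): if the rate of progress roughly doubles every decade, a quick back-of-the-envelope sum recovers it.

```python
# Back-of-the-envelope check of the "20,000 years of progress" claim,
# assuming the rate of progress doubles every decade relative to the
# year-2000 rate. This is my reading of Kurzweil, not his exact model.
progress = sum(10 * 2**decade for decade in range(1, 11))
print(progress)  # 20460 "year-2000-equivalent" years over ten decades
```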

Well, wow, that’s a lot. It is the epitome of techno-optimism (if merging with machines is your idea of optimism). On the other side of the coin, of course, are those who are quite certain that superintelligent AI will mean the end of humanity.

But I think the primary difference between techno-optimism and techno-pessimism boils down to one thing: AI’s future role in the creation of the technosphere. We’ll get to that in the next post. In the meantime, however, let’s consider the idea that technological change is actually slowing down.

The Slowing Down Theory

Certainly, on a geological time scale, all these inventions we’ve listed have arisen virtually simultaneously. But we don’t live in geological time and some experts believe that, from a human point of view, there’s been a dramatic slowdown in true innovation and scientific breakthroughs in recent years.

The authors of the study titled “Papers and patents are becoming less disruptive over time” analyzed data from 45 million papers and 3.9 million patents across six decades (1945–2010). Tracking how their disruption index changes over that timeframe, the researchers found papers and patents are increasingly less likely to be disruptive.

For example, in the area of patents, the decline in disruptiveness between 1980 and 2010 ranged from 78.7% for computers and communications to 91.5% for drugs and medical. They write, “Our analyses show that this trend is unlikely to be driven by changes in citation practices or the quality of published work. Rather, the decline represents a substantive shift in science and technology, one that reinforces concerns about slowing innovative activity. We attribute this trend in part to scientists’ and inventors’ reliance on a narrower set of existing knowledge.”
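For the technically curious, the paper’s measure (often called the CD index) is essentially careful citation bookkeeping. Here is a hedged sketch of my understanding of the idea, not the authors’ actual code: later papers that cite the focal work while ignoring its references count as evidence of disruption, while papers that cite both count as evidence of consolidation.

```python
# A sketch of a CD-style disruption index, as I understand the measure
# used in "Papers and patents are becoming less disruptive over time".
# This illustrates the idea; it is not the authors' implementation.

def cd_index(citers_of_focal: set, citers_of_predecessors: set) -> float:
    """citers_of_focal: papers citing the focal work.
    citers_of_predecessors: papers citing the focal work's references.
    Returns a score in [-1, 1]: +1 fully disruptive, -1 fully consolidating."""
    all_citers = citers_of_focal | citers_of_predecessors
    if not all_citers:
        return 0.0
    score = 0
    for paper in all_citers:
        f = paper in citers_of_focal          # cites the focal work
        b = paper in citers_of_predecessors   # cites its predecessors
        if f and not b:
            score += 1   # ignores the lineage: disruptive
        elif f and b:
            score -= 1   # cites both: consolidating
        # predecessors-only citers add 0 but still count in the denominator
    return score / len(all_citers)

# Example: three citers ignore the predecessors, one cites both,
# one cites only the predecessors.
print(cd_index({"p1", "p2", "p3", "p4"}, {"p4", "p5"}))  # 0.4
```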

So, what can we do differently to address this issue? The authors suggest:

To promote disruptive science and technology, scholars may be encouraged to read widely and given time to keep up with the rapidly expanding knowledge frontier. Universities may forgo the focus on quantity, and more strongly reward research quality, and perhaps more fully subsidize year-long sabbaticals. Federal agencies may invest in the riskier and longer-term individual awards that support careers and not simply specific projects, giving scholars the gift of time needed to step outside the fray, inoculate themselves from the publish or perish culture, and produce truly consequential work.

The Extension of the Human Mind

Whether the creation of disruptive technologies and scientific paradigms is speeding up or slowing down, it’s clear that we have recently made large breakthroughs in artificial intelligence, which is an extension of our cognitive capabilities.

Of course, we humans have been aiding and extending our mental capacities at least since the tally stick and probably long before then. Books, photos, maps, calculators, spreadsheets, word processors and much more have all been extensions of our minds.

But generative AI does feel like a much further extension, capable of doing various things that only the most capable and educated of people could have done before now. For example:

ChatGPT’s Performance on Academic and Intelligence Tests

The Uniform Bar Exam: While GPT-3.5, which powers ChatGPT, scored only in the 10th percentile on the bar exam, GPT-4 scored in the 90th percentile with a score of 298 out of 400.

The SAT: GPT-4 aced the SAT Reading & Writing section with a score of 710 out of 800, which puts it in the 93rd percentile of test-takers.

The GRE: While GPT-4 scored in the 99th percentile on the verbal section of the exam and in the 80th percentile on the quantitative section, it scored only in the 54th percentile on the writing test.

USA Biology Olympiad Semifinal Exam: GPT-4 scored in the 99th to 100th percentile on the 2020 Semifinal Exam.

AP Exams: GPT-4 received a 5 on AP (Advanced Placement) Art History, AP Biology, AP Environmental Science, AP Macroeconomics, AP Microeconomics, AP Psychology, AP Statistics, AP US Government and AP US History. On AP Physics 2, AP Calculus BC, AP Chemistry and AP World History, GPT-4 received a 4.

IQ: Estimated on the basis of five subtests, the Verbal IQ of ChatGPT was 155, superior to that of 99.9 percent of test-takers. It was not able to take the nonverbal subtests, but the Verbal IQ and Full Scale IQ scales are highly correlated in the standardization sample.

Although this table highlights the power of these technologies, it leaves aside their lack of “common sense,” their poor mathematics capabilities (for now) and their chronic habit of hallucination and confabulation. These and other issues are why few view these technologies as actual artificial general intelligence.

But this doesn’t mean that such AI doesn’t already play a fast-evolving, uniquely creative and increasingly pivotal role in the shaping of our technosphere. That will be the subject of my next post.

The Extended Human

A nest or hive can best be considered a body built rather than grown. A shelter is animal technology, the animal extended. The extended human is the technium.

Kevin Kelly

I like the phrase “extended human” because these days so much of our lives is spent doing just that: extending. We extend toward one another via our increasingly pervasive networking technologies, of course, but also via our words, our art, our organizations and our sometimes frighteningly fervent tribes of like-minded people.

Without these extensions, there can be no reticula – or, at least, none that includes humanity. It’s as if we are all connected neurons, the tentacled creatures of our own dreams and nightmares.

Kevin Kelly, the author of What Technology Wants, uses the phrase extended human to mean the same thing as the technium, which he defines as the “greater, global, massively interconnected system of technology vibrating around us.” But I see the extended human as beginning not with our technologies but with the reticula within: our woven, language-loving, community-seeking minds. A human who is armed only with ideas and imagination still has an amazing ability to extend herself into the universe.

Connection Matrix of the Human Brain

Technological Kudzu

Still, it’s true that the technium vastly enhances our natural tendency toward extension. In fact, as Kelly points out (and all anthropologists know), our inclination toward tool usage predates our emergence as a species. Our evolutionary predecessors such as Homo erectus were tool users, suggesting this propensity is somehow encoded in, or at least made more likely by, our DNA.

These days, our extensions are growing like so much technological kudzu. Think about the growth of Zoom and other video conferencing applications. These technologies have become among the latest technological imperatives, along with basics such as electricity, plumbing and phones/cell phones.

But there’s something missing in all this. Extensions are powerful alright, but what, exactly, are we extending? That is, what is at the core of the extended human? It isn’t a technological issue but, rather, a philosophical, psychological, existential or even spiritual one.

How Far Is Too Far?

This is where things get not only tricky but downright divisive. The Buddhist may argue that “nothing” is at the core, that most of what we want to extend is sheer ego and delusion. The Christian may argue that immortal souls are at the human core, souls which have the propensity for good or evil in the eyes of God. The Transhumanist may argue that the human body and brain are the core, both of which can be enhanced and extended in potentially unlimited ways.

Few would argue against the idea that humans should be an extended species. Even the lowest-tech Luddites rely on tools and technologies. What we will spend the next several decades arguing about are two related issues:

1) What is at the core of humanity? What should we value and preserve? What can we afford to leave behind in the name of progress and freedom?

2) How far should we extend ourselves? Should we set collective limits for fear that we’ll lose our essential humanity or cause our own extinction? If so, how can we reasonably set limits without magnifying the risks of tyranny or stagnation?

All sorts of other subjects will be incorporated into these two basic issues. For example, collective limits on technological advances become more likely if associated dangers – higher rates of unemployment, increased risks of terrorism, environmental crises, etc. –  loom larger over time. Although we will frame these issues in various ways, they will increasingly be at the center of our collective anxiety for years to come. It’s the price of being the most extended species in the reticulum.

Featured image by Sheila1988; Agricultural tools at show