Why Write in the Age of Generative AI?

Why should people keep writing in the age of generative AI? It’s a question with which I’ve struggled lately. After all, these new AIs, specifically the large language models (LLMs), can produce utilitarian essays in mere moments. What’s the point of engaging in any kind of writing, especially blogging, in such an age?

My struggle is not just with this moment but with what’s coming in the near future. Although there’s no way to tell exactly how much of the prose on the Web has already been written or co-written using AI, it must by now be a sizeable and, of course, growing portion. Microsoft Bing Chat, or Copilot as it is now called, estimates the number at 14%, though I’m skeptical it’s that high.

But if it’s not there yet, it will be soon… and continue to rise after that.

So, why bother?

Reason One: Money

How Many Writers Are There in the US?

No one pays me to write this blog, of course, but I do get paid for writing in other formats and contexts. Although I’m not alone in that, being a paid writer was relatively unusual even pre-AI. The Bureau of Labor Statistics reports that there were only 58,500 news analysts, reporters, and journalists in the U.S. as of 2022. Although there are other types of professional writers as well, even adding these people into the pool only gets us up to about 173,000, based on one estimate.

This is just a tiny fraction of the U.S. workforce. The number of full-time employees in the U.S. in 2022 was 132,000,000, and that doesn’t include all the people who work part-time or on a contingent basis. Let’s bump that number up to 150,000,000, which is still probably a conservative estimate of all U.S. workers. That means writers make up about .12% of U.S. workers, or roughly a tenth of one percent.

The Impact of AI on Writers

Generative AI has already had a sizeable impact on freelance writing and editing jobs, according to the Financial Times. Since ChatGPT first launched, both the number of such jobs and the earnings associated with them have declined, as we can see in the chart below.

So, the number of already rare writing jobs has been declining, at least at the freelance level on one platform. Therefore, one of the primary reasons for writing, namely to pay the bills, may well be on the decline, a trend likely to continue over the next several years.

I should note, however, that these kinds of trends aren’t always linear. After the invention of automatic teller machines, or ATMs, bank teller jobs actually increased. So, we’ll need to wait and see what happens to writing jobs over the long-term. For now, however, the outlook is rather bleak.

Reason Two: Thought Leadership

A second reason to write is thought leadership. That is, people who wish to become better known in their fields often write in order to showcase their expertise. This can lead to better career growth opportunities, speaking or consulting engagements, and greater professional networking.

Of course, these people may increasingly leverage LLMs to write their pieces. After all, using AI may help them generate more and better prose, giving them greater exposure in their professional communities. I think that thought leaders will continue to write but that many will be happy to exploit the advantages of AI tools in order to make that work easier.

Reason Three: To Share One’s Passion and Build Community

Another reason to write is simply to share one’s passion about a topic. For now, at least, there are no passionate AIs. The best AIs can do is imitate human emotion. But the person who is passionate about anything from global warming to horseback riding can share that passion through writing.

In so doing, they can attract like-minded people and so help form communities that focus on important social issues and avocations. One doesn’t need to write long-form articles to do this. Often, people write on social media forums such as Reddit, LinkedIn or Threads in order to share their passion and exchange ideas about any given topic.

Reason Four: For the Love of Writing

Some people, of course, simply love to write. Writing itself is their passion or, at least, a worthy avocation. These people may always wish to write no matter how much AI prose there is on the Internet.

But some of them may lose their enthusiasm for the written word, especially if they garner ever fewer readers in a world awash in AI prose and AI-based search engines that do not guide any Internet traffic their way.

Reason Five: To Learn

The last reason I’ll put forward is to learn. Author William Knowlton Zinsser states, “Writing organizes and clarifies our thoughts. Writing is how we think our way into a subject and make it our own. Writing enables us to find out what we know—and what we don’t know—about whatever we’re trying to learn.”

This is among my favorite reasons to write: to organize and formulate my own thoughts on any given topic. That’s among the reasons I tend to write about articles and books, especially of the nonfiction variety, when I’m trying to learn something. If I can explain a topic to another person, then there’s a better chance I have some grasp of it myself.

Even so, one doesn’t need to write in order to learn. There are plenty of knowledgeable people who never deign to put their proverbial pens to paper. So, “writing in order to learn” will always appeal to some people more than others.

The Future of Human Labor

More Employees Affected

Writers aren’t the only professionals being affected by generative AI, of course. If anything, many types of graphic artists have been hit even harder. And various other types of artists, from musicians to videographers, will see their professions impacted by generative AI over the next year or two.

Maybe it’s just our time. Lord knows that everyone from factory workers to farmers has been heavily impacted by automation for the last 200 years or more.

And, as AI becomes better and more accurate over the next several years, writers and artists will hardly be the only workers affected. There are already long (though to my mind dubious) lists of professions that will be impacted.

Coping with Ennui

Aside from the economic effect (which may prove dramatic), there will also be the psychological impact. I think I’ve suffered a certain amount of ennui in relation to LLMs, making me wonder why I should spend my precious free time writing for a handful of people online when AIs can often do it nearly as well and far more quickly.

For now, my answer is “all of the above.” In my work hours, I write to make a living. In my off hours, I write to learn and for love, because it’s a passion and it potentially makes me a little better known among a small group of people who care about the same issues I do.

So, for now, I continue. I enjoy the moment and hope a few others do as well.

Bing Confabulates Its Own Version of a Classic Hemingway Story

I continue to be fascinated by the topic of AI confabulation. The other day I read the Ernest Hemingway short story “Fifty Grand.” It’s about a boxer who fights a championship bout. I liked the story but was confused by a couple of details in the end. So, I turned to my favorite AI, Bing, which proceeded to hallucinate a whole other version for me.

Of course, I’ve seen AIs make up other scenes from famous literary works before. Bard even confabulated a “woke” version of the poet E.E. Cummings. So, Bing’s summarization of the Hemingway story was not a shock. But it’s worth writing about because of the nature of Bing in particular and the other similar AIs more generally.

Confabulating Parts of Fifty Grand

“Fifty Grand” is a story that hinges on a couple of boxing-related bets: one known, one unknown. Because the unknown bet isn’t made clear, the reader isn’t sure of what happened until the end of the story. To help clarify the ending, I asked Bing about it.

Bing’s summary of the story left out a critical part when a couple of “wise guys” (by the name of Morgan and Steinfelt) come to visit the boxer, Jack, and make some sort of proposal to which the reader isn’t privy.

Bing’s summary also includes a number of events that never happened, such as a meeting between Jack and his wife, her accusation that he was going to throw the fight, Jack’s broken jaw, and Jack’s trip to the hospital.

Bing didn’t accurately relate events in the story even though it had access to summaries such as the Wikipedia entry about it (to which Bing linked) as well as to the story itself (though in an unsearchable PDF format, which may have been an issue).

Wikipedia Gets It Right

So, did Bing get it wrong because Wikipedia had it wrong? No. Wikipedia summarizes the fight as follows:

The fight begins. Doyle (the narrator) reports, “There wasn’t anybody ever boxed better than Jack,” and the fight goes well for Jack for several rounds as his left fist repeatedly connects with Walcott’s face. By the seventh round, Jack’s left arm gets heavy and Walcott begins to gain the upper hand as he pummels Jack’s torso. After the eleventh round John Collins tells Jack the fight will go to Walcott, but Jack says he thinks he can last through the twelfth round and goes to meet Walcott and “finish it off right to please himself.” Walcott backs Jack up against the ropes and delivers a very low blow; Jack, in obvious pain, stays standing–”if he went down, there go fifty thousand bucks” because he would win on a foul–and tells the referee not to call the foul because it must have been an accident. He walks toward the dumbfounded Walcott and begins swinging, landing several body punches before hitting his opponent twice below the belt. Walcott falls and wins the match on a foul. Back in the dressing room, Jack comments, “It’s funny how fast you can think when it means that much money.”

In a couple of sentences, Wikipedia clarifies why Jack would have lost the money he bet on Walcott (the other fighter): because Walcott’s blow would have been deemed a foul and Jack would have won the fight on a technicality.

This suggests that the shady characters who talked to Jack earlier in the story wanted Jack to win the fight. At the end of the story, the narrator tells us that Walcott was favored to beat Jack, which is why Morgan and Steinfelt wanted to bet and win on Jack (that is, they’d win more money that way).

But it appears that Jack’s agreement with them is that he would lose the fight. That’s why toward the end of the story, Jack’s manager says, “They certainly tried a nice double-cross” and Jack responds with sarcasm, “Your friends Morgan and Steinfelt. You got nice friends.”

So, Morgan and Steinfelt wanted Jack (and most other people) to bet against Jack’s victory so they would make more money when Jack won. In essence, Jack turned the tables on them by making sure he lost the fight even while getting revenge on Walcott for his dirty boxing and treachery.

What Can We Learn About Today’s Neural Networks?

I certainly don’t “blame” Bing for getting a nuanced story wrong. I know that the confabulations boil down to how the algorithms work, as explained in another post. In fact, unlike the other AIs on the market, Bing pointed me to references that, if I hadn’t already read the story, would have allowed me to verify it was giving me the wrong information. That’s the beauty of Bing.

Not Quite Plagiarism

The famous intellectual Noam Chomsky has claimed that generative AIs are just a form of “high-tech plagiarism.” But that’s not quite right. I don’t know if the story “Fifty Grand” was part of the data on which the Bing model (based on GPT-4) was trained. If so, then it wasn’t able to properly parse, compress and “plagiarize” that nuanced information in such a way that it could be accurately related after model training.

But we do know that Bing was able to access (or at least point to) the Wikipedia article as well as an “enotes” summary of the story, so it knew where to find the right plot summary and interpretation. The fact that it still confabulated things indicates that the makers and users of these technologies have some serious issues to address before we can trust whatever the AIs are telling us.

Will Hallucinations Ever Go Away?

There’s some debate about whether the confabulations and hallucinations will ever go away. On one hand are people such as Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, who has said, “This isn’t fixable. It’s inherent in the mismatch between the technology and the proposed use cases.”

On the other hand are those who think the problems are indeed fixable. Microsoft co-founder Bill Gates said, “I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction.”

Maybe APIs Will Help Fix the Issue

Some think they can address the confabulation problem, at least in part, through better use of APIs (that is, application programming interfaces). By interfacing with other types of programs via APIs, large language models (LLMs) can develop capabilities that they themselves do not have. It’s like when a human being uses a tool, such as a calculator, to solve problems that they would not easily be able to solve on their own.
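To make the calculator analogy concrete, here’s a minimal Python sketch of what routing a model’s output to an external tool might look like. The TOOL:ARGUMENT format, tool names, and “model output” below are invented for illustration; they are not Gorilla’s (or any real system’s) actual interface.

```python
# Toy sketch of the tool-use idea: instead of asking a language model to do
# arithmetic itself (where it may confabulate), route the request to a
# deterministic tool via a simple dispatch table.

def calculator(expression: str) -> str:
    # A real system would use a safe expression parser; eval with empty
    # builtins is enough for a toy demo.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def dispatch(model_output: str) -> str:
    """Interpret a model 'API call' of the form TOOL:ARGUMENT."""
    tool_name, _, argument = model_output.partition(":")
    if tool_name in TOOLS:
        return TOOLS[tool_name](argument)
    return model_output  # no tool requested; return the text as-is

# Suppose the model, instead of guessing at the answer, emits a structured call:
print(dispatch("calculator:123456789 * 987654321"))
```

The point is that the model only has to learn to *format the call*; the arithmetic itself is done by a tool that can’t hallucinate.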

That is, in fact, part of the hope associated with Gorilla, a LLaMA-7B model designed specifically for API calls. This particular LLM is a joint project developed by researchers from UC Berkeley and Microsoft, and there is now an open-source version available.

So, if Gorilla can more dependably access APIs, it can reduce the hallucination problem.

At least, that’s the hope.

We’ll see over time.

U.S. Productivity Shot Up in the 2nd Quarter of 2023

Not long ago, I discussed how 2022 saw the largest drop in annual U.S. productivity in half a century, and said I hoped there’d be better news soon. Indeed, there has been, as U.S. productivity shot up in the 2nd quarter of 2023 by a whopping 3.7%! (see chart)

We don’t want to make too big a deal about this since it’s just one quarter’s worth of data. It may just be a blip, but it’s the biggest positive blip since 2020. We’ll see how the rest of the year goes.

What’s Going to Happen in 2023?

Here’s my guesstimation of what’s going to happen for the rest of the year: we’ll see productivity growth for the next two quarters and we’ll wind up in positive territory for the year.

Goodbye Great Resignation, Hello Job Skills

Why am I optimistic? Several reasons. First, the era of high voluntary turnover (aka the Great Resignation) is over, which means employees are really getting to know how to do those new jobs so many of them took recently. Productivity goes up as people gain more knowledge about how to do their work efficiently.

Figuring Out Remote, Hybrid and RTO

Second, organizations are working out the whole remote and hybrid work thing. As I’ve said before, I’ve been dubious that remote work alone was responsible for the downturn in productivity. In fact, you could make an equally strong case that it was return-to-office policies that were hammering productivity.

But now many organizations are figuring out what does and doesn’t boost productivity. The firms that know how to manage remote workers well will leave things alone. After all, these employees know how to stay productive at home, and their managers know how to manage these relationships well.

But organizations that have seen problems will bring more people back into the office, at least for a few days a week. Moreover, they and their managers will get a better handle on which employees do and don’t work from home well.

It’s About the Worker, Stupid

I can’t stress this last point enough because no one mentions it these days. A lot of this doesn’t come down to remote or on-premise work per se. It comes down to individual employees. There are those who work from home well and those who do not. Over time, organizations and employees themselves discover which is which, and they adjust accordingly.

Then There’s Generative AI

Generative AI doesn’t yet make everyone more productive. It’s still highly unpredictable, and it confabulates a lot. But, as with remote work, over time employees and managers will learn what it does and doesn’t do well. And, as more developers get experience with open-source AIs like LLaMA, they’ll learn how to productize AI applications more successfully.

This will result in a productivity boost over time. In some fields, it’ll be a huge one. That won’t just (or even primarily) be due to automation. At least for the next few years, augmentation rather than automation will be key.

A Little and Then Maybe a Lot

As with most new technologies, it’ll take a while before the AI-productivity payoff really kicks in. Once it does, however, we could see massive increases in employee productivity. We don’t know how massive, and we also don’t clearly understand the longer term risks of AI. So, any detailed forecasts are a fool’s game.

Still, systems have a tendency to adjust and stabilize, and today’s workplace system will figure out how to better incorporate the vagaries of remote, hybrid, on-premise, and AI-augmented work as organizations push toward higher productivity. If that happens, the real question will be how equitably those productivity returns are distributed throughout the workforce.

The Human Network

Humans were the network long before software and hardware ever existed. In the human network, each person is a node, of course, and each connection with other people is a link. The links are not just what make us a network, they are what make us human.

You might think of those links as threads. Of course, sometimes they are literally like threads, the wires and the cables that make up our astonishing, often befuddling communication networks.


Other times, the threads are invisible (to us) radio waves and microwaves and what-not. But the most important invisible threads are those forged by love between family members and friends and lovers. Love is invisible but indisputably real.


This multitude of threads makes up a much larger, ever-changing tapestry. A tapestry reflecting who we are, collectively, globally. A tapestry woven into the world’s far more expansive ecosystem, one we should be bent on not just preserving but sustaining and growing.

Amid all this, keep in mind you are not alone. We are not alone. Far from it.

On Bayes Theorem and Human Cognition

Some scientists believe that our brains work according to Bayesian logic. Or, at least, we may be able to use such logic to replicate the ways our minds work. This is a complex topic that can’t be covered in one post (especially once we start talking about the free energy principle), so let’s start by discussing the connections between Bayes’ Theorem and human cognition.

What Is Bayes Theorem?

Bayes’ Theorem was formulated by Thomas Bayes, an English statistician, philosopher and Presbyterian minister, back in the 1700s. The theorem is all about the probability of something happening once you know the probability of something else happening. Here it is in a nutshell:
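In its standard form, for two events A and B (with P(B) greater than zero):

P(A|B) = [ P(B|A) × P(A) ] / P(B)

In words: the probability of A given B equals the probability of B given A, times the prior probability of A, divided by the probability of B.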

There are a lot of examples of how to use Bayes Theorem. Here are a few sources:

Examples with Chart

I’ve provided a couple of examples using a chart I took from a Khan Academy lesson on conditional probabilities. I haven’t seen Bayes taught this way, but figured it might be a useful way of helping myself think through it.

The following is information about one man’s train travels as they pertain to weather, travel delays, and the number of days in each weather category.

All the data needed for a set of probability problems are already there, so my assumption is that one can test Bayesian calculations against the numbers in the chart.

Example One: Chance of Delay If Snowing

For example, let’s say you want to find out the relationship between travel delays and snowy weather. If you just use the chart, you can see a total of 20 days spent traveling on snowy days. There were 12 delays on those days, so you can see that there was a 60% chance of delays on days when there was snow (that is, divide 12 days by 20 days to get .6).

But let’s assume you don’t have the full chart, but you do know some relevant information. What you want to know is the chance of a delay if it’s snowing. So, you set up the following:

P(A|B) is P(delay|snowy): that is, a chance of delay if snowing: currently unknown

P(A) is P(delay): the chance of any delay in a given year = 35 delay days / 365 total travel days = .096

P(B|A) is P(snowy|delay): the chance it’s snowy if there’s a delay = 12 delay days when it snowed / 35 total delay days = .34

P(B) is P(snowy): the probability of snow on any given day = .055

So, here’s what you end up with:

(.096 * .34) / .055 ≈ .6 = 60%

You arrive at the same answer as before even though you didn’t know the total number of snowy days (20) this time around. So, you get a good probability without complete information via Bayes.
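Here’s a quick way to check this arithmetic in Python. Using exact fractions from the chart, rather than the rounded decimals .096, .34 and .055 (which multiply out to about .59), recovers 60% exactly:

```python
from fractions import Fraction

# Bayes' Theorem: P(delay|snowy) = P(snowy|delay) * P(delay) / P(snowy)
# Exact fractions from the chart rather than rounded decimals.
p_delay = Fraction(35, 365)             # 35 delay days out of 365 travel days
p_snowy_given_delay = Fraction(12, 35)  # 12 of the 35 delays happened in snow
p_snowy = Fraction(20, 365)             # 20 snowy days out of 365

p_delay_given_snowy = p_snowy_given_delay * p_delay / p_snowy
print(float(p_delay_given_snowy))  # 0.6, i.e. a 60% chance of delay when snowing
```

Notice that the 365s and the 35 cancel, leaving 12/20: exactly the same division you’d do with the full chart in hand.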

Example Two: Chance of Being On Time If Rainy

This time, let’s say you want to know the chance that you’ll be on time if it’s raining. If you have complete information from the chart, you can divide 40 (the number of on-time days when it’s raining) by 55 (the total number of days traveled when it’s raining) to get .73, or 73%.

But let’s say you don’t have the full chart. So, you set up the following:

P(A|B) is P(on-time|rainy): that is, chance of being on time given it’s raining: currently unknown

P(A) is P(on-time): the chance of being on time in a given year = 330 on-time days / 365 total travel days = .90

P(B|A) is P(rainy|on-time): the chance it’s rainy if you’re on time = 40 on-time days when it rained / 330 on-time days = .12

P(B) is P(rainy): the probability of rain on any given day = 55 / 365 = .15

So, here’s what we end up with:

(.90 * .12) / .15 = .72

We arrive at nearly the same answer as before; the slight difference (.72 vs. .73) comes from rounding the inputs.
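The same check works here. With exact fractions, the answer is 40/55, or about .727, which sits between the two rounded figures above:

```python
from fractions import Fraction

# Bayes' Theorem: P(on-time|rainy) = P(rainy|on-time) * P(on-time) / P(rainy)
p_on_time = Fraction(330, 365)            # 330 on-time days out of 365
p_rainy_given_on_time = Fraction(40, 330) # 40 of the 330 on-time days were rainy
p_rainy = Fraction(55, 365)               # 55 rainy days out of 365

p_on_time_given_rainy = p_rainy_given_on_time * p_on_time / p_rainy
print(float(p_on_time_given_rainy))  # about 0.727, matching 40/55 from the chart
```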

I should note that when you’re doing this kind of analysis, you don’t always know how a particular percentage was derived. You just know the proportions based on some known standard (such as the accuracy rate of a certain medical test).

Bayes and Cognition

Some researchers believe that the mind is a prediction machine. The idea is that the brain somehow assigns probabilities to hypotheses and then updates them according to the probabilistic rules of inference.

But do our minds actually use Bayesian inference?

Joshua Brett Tenenbaum, Professor of Computational Cognitive Science at the Massachusetts Institute of Technology, has stated that Bayesian programs are effective at replicating “how we get so much out of so little” via our cognition.

Others have been more skeptical of the notion that our minds use some form of Bayesian reasoning. Jeffrey Bowers, a professor of psychology at the University of Bristol, notes that information-processing models such as neural networks can replicate the results of Bayesian models.

Can Neural Networks and Bayesian Approaches Work Together?

Some say that Bayesian inferences are key aspects of modern generative AI models, which are based on neural nets. As one source explains:

The computer starts with a basic understanding of the English language, such as grammar rules and common phrases. It then reads the vast library of text and updates its understanding of how words and phrases are used, based on the frequency and context in which they appear.

When you provide the computer with a starting sentence or a few words, it uses its Bayesian understanding to estimate the probability of what word or phrase should come next. It considers not only the most likely possibilities but also the context and the content it has learned from the library. This helps it generate sentences that make sense and are relevant to the given input.

The computer continues this process, picking one word or phrase at a time, based on the probabilities it has calculated. As a result, it can create sentences and paragraphs that are not only grammatically correct but also meaningful and coherent.

In summary, a Bayesian approach helps an AI generative language model learn from a large collection of text data and use that knowledge to generate new, meaningful sentences based on the input provided. The computer constantly updates its understanding of language and context using Bayes’ idea of probability, enabling it to create content that is both relevant and coherent.
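The quoted description can be sketched as a toy model: estimate next-word probabilities from observed frequencies, then pick the most likely continuation. To be clear, real LLMs are neural networks, not explicit count tables like this one; the sketch, with its made-up corpus, only shows the probabilistic flavor of next-word prediction.

```python
from collections import Counter, defaultdict

# A tiny corpus stands in for the "vast library of text."
corpus = "the dog chased the cat and the cat chased the mouse".split()

# Count how often each word follows each preceding word (a bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(prev: str) -> dict:
    """Conditional probabilities P(next word | previous word)."""
    total = sum(counts[prev].values())
    return {word: c / total for word, c in counts[prev].items()}

probs = next_word_probs("the")
print(probs)                      # {'dog': 0.25, 'cat': 0.5, 'mouse': 0.25}
print(max(probs, key=probs.get))  # 'cat' is the most likely continuation
```

Generating text then amounts to repeating this step, one word at a time, exactly as the passage describes.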

So, is Bayes the secret “hero” behind today’s generative AIs? Beats me. It’s something I’ll need to investigate further with people who actually develop these systems.

Another avenue of investigation involves those who are trying to use the so-called free energy principle, also based on Bayesian ideas, to create new AI systems. One organization that seems to be working on this is Verses, which last March published an executive summary of “Designing Ecosystems of Intelligence from First Principles.” That’s now on my “to read” pile.

A Review of the Threads Network So Far

I’ve been spending time on Meta’s new social media platform since it launched. This is just a quick review of the Threads network so far, keeping in mind that any assessment is, at best, preliminary.

The Twitter Exodus

Although I’ve used Twitter for years, I’ve never been an avid user. I officially abandoned (though didn’t pull down) my Twitter account because Elon Musk, the firm’s CEO, ultimately said and did too many sleazy and stupid things to countenance its continued usage. After a while, I just started feeling tainted by the place.

That said, I can’t say I’ll never go back if it changes and, unlike so many others who left, I cast no judgements on anyone who has stayed. After all, some people’s livelihoods and social standing are deeply woven into that network. I’m just glad not to be among them.

Now There’s Threads

Threads is, of course, the Twitter clone that Meta’s CEO Mark Zuckerberg decided to create in order to take advantage of Twitter’s well-publicized financial failings and its CEO’s seemingly endless addiction to destroying his own personal brand. It’s a shame that Musk, who successfully runs Tesla and SpaceX, couldn’t get out of his own way with Twitter, but that’s been covered virtually everywhere else on the Internet so there’s no point in getting into it here.

What’s my opinion of Threads? Well, it’s okay. It has a certain energy and simplicity I like, and I suspect there will be features added that will make it better over the next several months. But it can also be deeply frustrating.

The Good

The unTwitter

There are two very good things about Threads. First, it isn’t Twitter, the single characteristic that is most attractive to users. Second, Meta made it almost embarrassingly easy to sign up if you already have an Instagram account. Yes, you have to download an app from the Google store, but that’s a snap for most of us. From there, it’s just a matter of hitting a button if you already have the Instagram app on your phone.

Simplicity

Threads also has the virtue of being simple to use, especially if you’ve used Twitter before. At the bottom of the screen, there are only five icons:

  • Home
  • Search
  • Post (aka, New Thread)
  • Activity (the heart)
  • Your profile

Below each post are just four icons:

  • Like (another heart)
  • Reply (a speech balloon)
  • Repost (circular arrows)
  • Pointer (mostly for sharing the post)

Easy enough for anyone with a modicum of social media experience.

The Bad

The Flood

So, what’s bad about Threads? Remember when I said it was simple? Well, it’s arguably too simple in that it lacks some of the more popular features of Twitter, such as Trending, DMs and Lists. A lot of people rely on these to help tame the torrent of information that washes over them as they scroll posts from hundreds or thousands of people.

The Monotony

Because of the flood of posts of all sorts, many of which you may have no interest in, you wind up with a lot of crap in your feed. I think this is probably the main reason people have left the service for now.

Taming the Feed

However, it is possible to “tame the algorithm” that drops posts into your feed. You just need to use the Mute, Hide and Unfollow buttons.

If you Mute the post of someone you follow on Threads, you won’t see their threads or replies in your feed, but they won’t know you muted them and you’ll still be following their profile.

If you Hide a post, however, then all you’re doing is hiding that one post. You’ll see other posts by that person as long as you don’t Mute them.

Unfollow, of course, means you don’t get that person’s posts anymore unless the algorithm decides otherwise based on whatever arcane logic is programmed into it.

So, What’s Next for Threads?

My guess is that Threads will launch new features such as a chronological feed restricted to people you actually follow, the ability to send direct messages, a better search function (that may or may not require hashtags), a “trending” list, etc.

In the shorter term, here’s what seems to be on tap.

Will Threads Survive?

Some are already asking if Threads will survive long term. After all, there was a dramatic decline in usage after an initial spike. The following graph is from SimilarWeb:

What’s seldom reported is that a short-term spike and then decline was almost certain to happen because of the way the app was launched. People were curious, came and poked around a bit, then decided whether or not they wanted to devote real time and energy to it. A lot didn’t.

But now that they’re signed up, many will return occasionally to see how things are progressing. Threads will be smart if it launches the upcoming new features in a strategic but well publicized way. I think that over the next year, those usage lines will start to climb, especially as we move further into the next U.S. election cycle.

Meta has plenty of time and money. Threads won’t go anywhere over the short term, and I doubt it’ll go the way of Google+ over the long term. Social networking is what Meta does, and it has a lot of experience, skills, money and existing networks from which to draw.

Will Threads Be Diverse and Zesty But Still Civil Enough?

The Danger of Echo Chambers

The biggest threat I see is the problem with many social networks these days: the reality-warping echo-chamber effect. For now, at least, the “libs” own Threads. I don’t think I’ve seen a single pro-Trump or even pro-GOP thread.

I’m sure part of that is that the algorithm is feeding me more left-wing stuff because it discerns I’m not a fan of neo-fascist types like DeSantis. But a left-wing feed is not ultimately what I want. Rather, I want a politically balanced but thoughtful and evidence-based point of view in my feed, and over time Meta will make a bundle if it can write and publicize a good algorithm that provides this balance. Along the way, that balance would also promote the social good.

Consider Partnerships with Content Providers

One interesting thing that Elon Musk has done recently is start to pay some of Twitter’s chief content providers. Meta should watch how this turns out and, if there are any virtues to it, it should consider following suit.

The danger is that you wind up paying people who are just good at lighting people’s hair on fire with limbic-system-hijacking, made-for-outrage posts. That can be “fun” in the short-term but it causes all kinds of social turmoil, increases the work of content moderators, and chases away advertisers.

So, my recommendation for Meta (not that anyone there cares a fig for my opinions) is that it focus on creating constructive but civil and energetic discourse. Aside from writing a community-enhancing rather than click-inducing algorithm, perhaps one way to do that is through partnerships with Medium, Substack, WordPress and others, places rich with thought leaders who have built-in audiences. It’s possible that ActivityPub, described below, will make such partnerships easier.

The Potential Beauty of Federation

On ActivityPub and Threads

What’s ActivityPub?

There’s something called ActivityPub, a new standard for social networking that is reportedly more open and user-centric. Here’s how The Verge describes it:

It’s a technology through which social networks can be made interoperable, connecting everything to a single social graph and content-sharing system. It’s an old standard based on even older ideas about a fundamentally different structure for social networking, one that’s much more like email or old-school web chat than any of the platforms we use now. It’s governed by open protocols, not closed platforms. It aims to give control back to users and to make sure that the social web is bigger than any single company. 
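To give a concrete flavor of what “open protocols” means in practice, here is a minimal sketch of the kind of JSON document ActivityPub servers exchange: a “Follow” activity, the message one server sends another when a user follows an account elsewhere in the fediverse. The actor and object URLs below are hypothetical, not real accounts.

```python
import json

# A minimal ActivityStreams "Follow" activity, the sort of JSON object
# one ActivityPub server delivers to another server's inbox.
# Both URLs are hypothetical examples.
follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "actor": "https://threads.example/users/alice",    # the follower
    "object": "https://mastodon.example/users/bob",    # the account followed
}

serialized = json.dumps(follow_activity, indent=2)
print(serialized)
```

Because every compliant server speaks this same vocabulary, a Threads user and a Mastodon user can follow each other without sharing a platform, which is the interoperability The Verge is describing.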

What’s It Have to Do with Threads?

Adam Mosseri, the head of Instagram who is overseeing the Threads project, explained on the Hard Fork podcast:

[Threads] is built on the ActivityPub protocol, which is a technology that’s behind all of the Mastodon servers and apps. What that means is there are a bunch of different apps, or social networks, that can all integrate. And so you’ll be able to actually follow people who don’t even use Threads, but use these other apps, from Threads. And you’ll be able to actually follow people and their content from threads without even using that app and using other apps like Mastodon.

And I do think that decentralization — but more specifically or more broadly, more open systems are where the industry is getting pulled and is going to grow over time. And for us, a new app offers us an opportunity to meaningfully participate in that space in the way it would be very difficult for us to support an incredibly large app like Instagram. And so to lean into where the industry is going, to learn, it’s been very humbling speaking to a bunch of people in the community who look at us, unsurprisingly, with a lot of skepticism. But I do think it’s going to be fundamentally good. And I do think it’s going to translate into not philosophical, but meaningful things for creators over the long run. Like, you should be able to take your audience, if you build up an audience on Threads — and if you decide to leave Threads, take your audience with you. And theoretically, over time, we should be able to support use cases like that that really empower creators and, I think, lean into what creators are going to demand and expect over time.

Power to the People…Maybe

So, it’s possible that, despite considerable skepticism from parts of the so-called fediverse, Threads could help make these open protocols mainstream, making social media platforms potentially a lot more diverse.

In the end, we’ll see. The fediverse may be the future, a place where many of the original utopian ideals of the Internet are finally achieved. If not, well, it wouldn’t be the first time social media let us down. But if it is, then the Web might be a much better place over the next 20 years than it has been over the last 20.

Dum spiro spero

The Historic Decline in U.S. Productivity

2022 Was a Very Unproductive Year

Productivity is, or at least should be, the most important factor in American financial well-being. So, it’s a big deal when we suffer dwindling labor productivity. Last year, we saw the second largest annual drop in U.S. labor productivity history. I don’t think the media put a lot of effort into reporting it, but productivity shrank by 1.6%, the largest decline since 1974, when there was a similar plummet of 1.7%.

Is annual productivity going to snap back this year? Maybe. After all, it did back in 1975. But the first quarter of 2023 was not at all heartening, with quarterly productivity shrinking by 2.1%! So, let’s hope for good news when data from the second quarter is published on August 3rd.

What Happens If the Bad News Continues?

If that second quarter news is also bad, we can expect to see a lot of hand-wringing in the U.S., especially on the part of economists and business leaders. The debates about return-to-work and quiet quitting will grow more vociferous, and economists will warn that inflation is going to reemerge if things don’t change. After all, prices go up if it costs more to produce things. In the good times, productivity is what helps keep higher prices at bay.

That’s one reason I think a lot about the subject of productivity. It’s not just another economic metric. It’s a grand indicator of whether or not our whole socioeconomic system is working, both in the physical and the financial sense.

But How About that AI Boost?

Of course, many are now predicting that the new generative AIs will soon result in massive increases in productivity. But that’s not a given. For one thing, it often takes workplaces a long time to figure out how to adequately harness new technologies. This happened with everything from electricity to personal computers.

Maybe it’ll be different this time around. People like Ray Kurzweil argue that AI will speed up the whole process of change. It’s all a matter of exponential rates of increasing returns.

Others are more dubious. Ezra Klein, for example, points out that the Internet should have resulted in a much larger boost in productivity than it did. But what wasn’t accounted for is that the Internet came with a very large dose of diversion. Suddenly people’s computers became distraction machines, and productivity was diluted as a result.

Klein thinks that this could happen with AI. For example, we may end up in deep conversations with our AI companions even as we fall behind on our work. Or, artificial intelligence will become such a major factor in everything from diverting movies to video games to virtual worlds that we will become more distracted than at any other time in history.

Time will, of course, tell. Personally, I make no predictions, but I can imagine several different scenarios. Maybe those will be a subject for a future post.

Generative AI Is Better for Augmentation than Automation … For Now

According to research I’ve helped conduct in the past, HR professionals tend to think that AI will be more widely used for the automation rather than the enhancement of work. But, I think that’s the wrong way to view it. For the most part, these AIs can’t actually take over many jobs. Rather, they help people be more productive at those jobs. So, generative AI is better for augmentation than automation.

Jobs Could Be Lost

This does not mean, however, that jobs can’t be lost. If you can triple the productivity of a nine-person team, for example, then you could potentially lose six of those people and maintain the same production as before. So, yes, jobs could potentially be lost.

On the other hand, it very much depends on the job and how it’s managed. Let’s say that we’re talking about software developers. In a firm that sells software products, the sticking point in the past may have simply been the cost of labor.

But Let’s Be Specific

Let’s assume a team of nine developers creates and maintains a product that brings in $3 million of revenue per year, and let’s assume that the cost of employing this team is $1.5 million per year. Let’s also assume some form of generative AI can triple productivity so that the team can be reduced to just three people. So, yes, the company could save $1 million per year by terminating six of those positions.

Leverage the Wealth-Creation Machine

Or the company could earn many times that amount by keeping them and assigning them to other revenue-earning projects.

Let’s now assume those six developers can be reallocated to create and implement two other products, both of which also can bring in $3 million per year. At this stage, the revenue earned by these six employees will be $6 million, or $1 million per employee.
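The over-simplified arithmetic above can be sketched out explicitly. All of the figures are the hypothetical ones from the example, not real data:

```python
# Hypothetical scenario from the text: a nine-person dev team, one product
# earning $3M/year, total labor cost of $1.5M/year, and a generative AI
# that triples per-developer productivity.
team_size = 9
revenue_per_product = 3_000_000
team_cost = 1_500_000
productivity_multiplier = 3

cost_per_dev = team_cost / team_size                 # cost of one developer
devs_needed = team_size // productivity_multiplier   # 3 devs now suffice
surplus_devs = team_size - devs_needed               # 6 developers freed up

# Option A: terminate the surplus positions and pocket the savings.
annual_savings = surplus_devs * cost_per_dev         # $1,000,000/year

# Option B: reallocate the surplus to two new $3M/year products.
new_products = 2
new_revenue = new_products * revenue_per_product     # $6,000,000/year
revenue_per_reallocated_dev = new_revenue / surplus_devs

print(f"Option A saves ${annual_savings:,.0f}/year")
print(f"Option B earns ${new_revenue:,.0f}/year "
      f"(${revenue_per_reallocated_dev:,.0f} per reallocated developer)")
```

Under these assumptions, the reallocation option earns six times what the termination option saves, which is the core of the augmentation argument.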

This is, of course, how productivity works. It’s a system with positive feedback loops that, if harnessed correctly, becomes a wealth-creation machine.

Oh, I know my arithmetic is over-simplified. Salaries, revenues and profits are never that straightforward. But you get the idea. Depending on the job and the business model, generative AI could actually increase the demand for certain skills because it can massively boost productivity, which boosts revenues and profits.

This Could Change, Of Course

Of course, this could change if generative AI (or whatever AI comes next) can fully automate most white-collar work, but we’re not there yet and, from what I can see, we’re not that close. These AIs are still prone to hallucinations and mistakes, and they require trained professionals to be able to detect those mistakes as well as engage in more creative and strategic work.

So, my advice for now is to leverage these technologies for augmentation rather than automation. Get while the getting’s good. Ultimately, that’s how economies and labor markets thrive.

The Ever Hotter System of Which We Are a Part

In theory, we all know that humanity is part of the Earth’s ecosystem. When we impact the system, we impact ourselves. But knowing is one thing, feeling it is another. Sure, we know the global system of which we are part is ever hotter. But lately a lot of Americans, including myself, have gotten a real feel for it.

Over the last several days, the earth has suffered the hottest days in recorded history. On July 3rd, we set a record of 17.01°C, or 62.62°F. That was calculated by taking into account the average temperatures of the land, the oceans, the poles, and the night and day cycles.

But the record didn’t last. On July 4th, there was a new record, 17.18°C this time.

And then on July 6th, yet another record: this time 17.23°C.

What makes this all more remarkable, and more alarming, is that Antarctica is in the heart of its winter season. It should be helping to keep things cooler. Well, to be fair, it probably is. But it’s not enough to overcome the stress that we’re putting on the system.

My Very Hot Home

Coincidentally, our central air conditioning pooped out on July 1st. It was the start of a weekend and, when we called the air conditioner repair people, they gave us a number that was only for “emergencies.” What is an emergency, however? They didn’t define the term.

I thought maybe emergencies were for when there’s a bedridden elderly person in an AC-less house. So, we didn’t call. Saturday night was a bit rough sleeping in a 90°F bedroom, but we managed. On Sunday, I toughed it out at home while C went to work. It reached 91°F in the house with a “feels like” index of 100+ outside. Hot enough that it felt as if the air were closing in, as if I could somehow see the heat itself in a darkened room. And not a “dry heat,” of course. We live in a rainforest (without much rain, lately). Such is Florida.

Sunday night was tougher than Saturday night. The heat was more pervasive. All the objects in the house were hot as well. There was no more residual coolness in the furniture. The bed itself was hot. Thermodynamics, baby.

We called the air conditioning folks on Monday. The woman on the phone half scolded and half laughed. “When your AC goes off on a hot Florida summer day, it’s an emergency no matter who you are,” she said. We didn’t argue. A guy came, replaced a capacitor, and had the AC fixed in about 10 minutes.

Just in time. It was Monday, the hottest day in recorded history (at the time).

To Concentrate the Mind

We are in a system that is getting hotter by the year and, lately, by the day. The most recent record won’t hold, not unless there’s a nuclear war or supervolcano explosion or some other disaster that would be worse than the global warming itself.

It takes a lot for us humans to give up our self-centered foolishness, to stop our inane but often deadly chimp-like bickering among ourselves. It takes a lot to pull us together into a single human tribe. A deadly pandemic certainly couldn’t do it. Indeed, in the U.S., it only intensified our hominid nescience.

But if we could bring all of humanity together into an AC-less Florida amid high humidity and feels-like temperatures of 107°F and keep everyone here until we collectively figured out how to properly address global warming, maybe we’d finally get ‘er done. No more excuses or half measures or procrastination.

Maybe we would finally become avid and careful systems thinkers. Our minds would be concentrated as our bodies sweltered. We would realize that there’s no easy answer to solving the issue of global warming. It’s a system, after all. But we’d soon come up with compromises on a solution that would require sacrifice from everyone, a solution that would please no one but would stand the best chance of getting something real done.

At least, that’s the pipedream. The fevered dream of a hot man lying on a hot sofa under a blurred fan blowing hot air. A man who knows with a palpable certainty that it could be even worse. No, that it will be even worse. And that it is already worse for millions if not billions of people living with far fewer cooling resources than we have.

The AC is back on for now and for us.

But the memory of just a couple of days without AC will live on a while. It’s just a prelude. And a reminder that when you punch the planet, the planet punches back.

Employers Have Fallen Behind Employees in AI Adoption

When it came to previous versions of AI, organizations had to worry about falling behind the business competition. The same is true for generative AI, of course, but this time there’s an added complication. Employers have fallen behind employees in AI adoption as well. This needs to be on the radar of HR, the IT department and executive leadership teams.

Execs: Important, Though It’s Going to Take Time

Most executives are familiar with the technology hype cycle, and they’ve seen AI hype before. So, is the generative AI movement different?

Well, probably. One survey from KPMG found that two-thirds of executives think generative AI will have a high or very high impact on their organizations over the next 3 to 5 years. But, being familiar with how long it can take to change anything, especially when it comes to new technologies, most also think it’s going to take a year or two to implement new generative AI technologies.

KPMG reports, “Fewer than half of respondents say they have the right technology, talent, and governance in place to successfully implement generative AI. Respondents anticipate spending the next 6-12 months focused on increasing their understanding of how generative AI works, evaluating internal capabilities, and investing in generative AI tools.”

All of which sounds fine, but only 6% say they have a dedicated team in place for evaluating and implementing risk mitigation strategies. Another 25% say they’re putting risk management strategies in place but that it’s a work-in-progress.

Employees: Already On It, But Don’t Tell the Boss

Meanwhile, a survey conducted by Fishbowl, a social network for professionals, reports that 43% of professionals use AI tools such as ChatGPT for work-related tasks. Of the 5,067 respondents who report using ChatGPT at work, 68% don’t tell their bosses.

This makes me wonder if A) there’s an intentional “don’t ask, don’t tell” policy in some companies that are simply afraid of establishing policies or guidelines that could get them in legal trouble down the line, or B) there’s an unintentional bureaucratic lag as companies take months or longer to establish guidelines or policies around these new technologies.

But Some Employers Aren’t Waiting

This doesn’t mean that all organizations are lagging in this area, however. Some have already set up guardrails.

The consulting firm McKinsey, for example, has reportedly knocked together some guardrails that include “guidelines and principles” about what information employees can input into the AI systems. About half of McKinsey workers are using the tech.

“We do not upload confidential information,” emphasized Ben Ellencweig, senior partner and leader of alliances and acquisitions at QuantumBlack, the firm’s artificial intelligence consulting arm.

McKinsey specifically uses the AI for four purposes:

  • Computer coding and development
  • Providing more personalized customer engagement
  • Generating personalized marketing content
  • Synthesizing content by combining different data points and services

Ten Suggested Do’s and Don’ts

There are now various articles on developing ethics and other guidelines for generative AI. Keeping in mind I’m no attorney, here’s what I think organizations should consider in the area of generative AI:

  • DO spend time getting to understand these AIs before using them for work. DON’T leap directly into using these tools for critical work purposes.
  • DO be careful about what you put into a prompt. DON’T share anything you wouldn’t want shared publicly.
  • DO always read over and fact-check any text that an AI generates if it is being used for work purposes. DON’T assume you’re getting an accurate answer, even if you’re getting a link to a source.
  • DO use your own expertise (or that of others) when evaluating any suggestions from an AI. DON’T assume these AIs are unbiased. They are trained on human data, which tends to have bias baked in.
  • DO develop guardrails, guidelines and ethical principles. DON’T go full laissez faire.
  • DO continue to use calculators, spreadsheets and other trusted calculation tools. DON’T rely on generative AI for calculations for now unless you have guarantees from a vendor; even then, test the system.
  • DO continue to use legal counsel and trusted resources for understanding legislation, regulation, etc. DON’T take any legal advice from an AI at face value.
  • DO careful analysis of any tasks and jobs being considered for automation. DON’T assume these AIs can replace any tasks or positions until you and others have done your due diligence.
  • DO train employees on both the ethical and practical uses of generative AIs once these are well understood. DON’T make everyone learn all on their own with no discussion or advice.
  • DO start looking for or developing AI expertise, considering the possibility (for example) of a Chief AI Officer position. DON’T assume that today’s situation won’t change; things are going to continue to evolve quickly.