Productivity is, or at least should be, the most important factor in America’s financial well-being. So it’s a big deal when labor productivity dwindles. Last year, we saw the second-largest annual drop on record in U.S. labor productivity. The media didn’t put much effort into reporting it, but productivity shrank by 1.6%, the largest decline since 1974, when there was a similar plummet of 1.7%.
Is annual productivity going to snap back this year? Maybe. After all, it did back in 1975. But the first quarter of 2023 was not at all heartening, with quarterly productivity shrinking by 2.1%! So, let’s hope for good news when data from the second quarter is published on August 3rd.
What Happens If the Bad News Continues?
If that second quarter news is also bad, we can expect to see a lot of hand-wringing in the U.S., especially on the part of economists and business leaders. The debates about return-to-work and quiet quitting will grow more vociferous, and economists will warn that inflation is going to reemerge if things don’t change. After all, prices go up if it costs more to produce things. In the good times, productivity is what helps keep higher prices at bay.
That’s one reason I think a lot about the subject of productivity. It’s not just another economic metric. It’s a grand indicator of whether or not our whole socioeconomic system is working, both in the physical and the financial sense.
But How About that AI Boost?
Of course, many are now predicting that the new generative AIs will soon result in massive increases in productivity. But that’s not a given. For one thing, it often takes workplaces a long time to figure out how to adequately harness new technologies. This happened with everything from electricity to personal computers.
Maybe it’ll be different this time around. People like Ray Kurzweil argue that AI will speed up the whole process of change. It’s all a matter of exponential rates of increasing returns.
Others are more dubious. Ezra Klein, for example, points out that the Internet should have resulted in a much larger boost in productivity than it did. But what wasn’t accounted for is that the Internet came with a very large dose of diversion. Suddenly people’s computers became distraction machines, and productivity was diluted as a result.
Klein thinks that this could happen with AI. For example, we may end up in deep conversations with our AI companions even as we fall behind on our work. Or, artificial intelligence will become such a major factor in everything from diverting movies to video games to virtual worlds that we will become more distracted than at any other time in history.
Time will, of course, tell. Personally, I make no predictions, but I can imagine several different scenarios. Maybe those will be a subject for a future post.
According to research I’ve helped conduct in the past, HR professionals tend to think that AI will be used more for the automation of work than for its enhancement. But I think that’s the wrong way to view it. For the most part, these AIs can’t actually take over many jobs. Rather, they help people be more productive at those jobs. So, generative AI is better for augmentation than automation.
Jobs Could Be Lost
This does not mean, however, that jobs can’t be lost. If you can triple the productivity of a nine-person team, for example, then you could potentially lose six of those people and maintain the same production as before. So, yes, jobs could potentially be lost.
On the other hand, it very much depends on the job and how it’s managed. Let’s say that we’re talking about software developers. In a firm that sells software products, the sticking point in the past may have simply been the cost of labor.
But Let’s Be Specific
Let’s assume a team of nine developers creates and maintains a product that brings in $3 million in revenue per year, and let’s assume that the cost of employing this team is $1.5 million per year. Let’s also assume some form of generative AI can triple productivity so that the team can be reduced to just three people. So, yes, the company could save $1 million per year by terminating six of those positions.
Leverage the Wealth-Creation Machine
Or the company could earn many times that amount by keeping them and assigning them to other revenue-earning projects.
Let’s now assume those six developers can be reallocated to create and implement two other products, both of which also bring in $3 million per year. At this stage, the revenue earned by these six employees will be $6 million, or $1 million per employee.
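The back-of-the-envelope arithmetic above can be checked in a few lines of Python. All figures here are the illustrative assumptions from the scenario, not real company data:

```python
# Illustrative figures from the scenario above -- not real company data.
team_size = 9
team_cost = 1_500_000            # annual cost of the nine-person team
revenue_per_product = 3_000_000  # annual revenue per product

# Option 1: triple productivity and cut the team to three people.
positions_cut = 6
savings = team_cost * positions_cut // team_size      # $1,000,000

# Option 2: keep all nine and redeploy six to two new products.
added_revenue = 2 * revenue_per_product               # $6,000,000
per_redeployed_dev = added_revenue // positions_cut   # $1,000,000 each

print(f"Savings from cutting six roles: ${savings:,}")
print(f"Revenue from redeploying them:  ${added_revenue:,}")
```

Either path "pays" $1 million per year per affected developer, but only the second one compounds.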
This is, of course, how productivity works. It’s a system with positive feedback loops that, if harnessed correctly, becomes a wealth-creation machine.
Oh, I know my arithmetic is over-simplified. Salaries, revenues and profits are never that straightforward. But you get the idea. Depending on the job and the business model, generative AI could actually increase the demand for certain skills because it can massively boost productivity, which boosts revenues and profits.
This Could Change, Of Course
Of course, this could change if generative AI (or whatever AI comes next) can fully automate most white-collar work, but we’re not there yet and, from what I can see, we’re not that close. These AIs are still prone to hallucinations and mistakes, and they require trained professionals to be able to detect those mistakes as well as engage in more creative and strategic work.
So, my advice for now is to leverage these technologies for augmentation rather than automation. Get while the getting’s good. Ultimately, that’s how economies and labor markets thrive.
When it came to previous versions of AI, organizations had to worry about falling behind the business competition. The same is true for generative AI, of course, but this time there’s an added complication: employers have fallen behind employees in AI adoption as well. This needs to be on the radar of HR, the IT department and executive leadership teams.
Execs: Important, Though It’s Going to Take Time
Most executives are familiar with the technology hype cycle, and they’ve seen AI hype before. So, is the generative AI movement different?
Well, probably. One survey from KPMG found that two-thirds of executives think generative AI will have a high or very high impact on their organizations over the next 3 to 5 years. But, being familiar with how long it can take to change anything, especially when it comes to new technologies, most also think it’s going to take a year or two to implement new generative AI technologies.
KPMG reports, “Fewer than half of respondents say they have the right technology, talent, and governance in place to successfully implement generative AI. Respondents anticipate spending the next 6-12 months focused on increasing their understanding of how generative AI works, evaluating internal capabilities, and investing in generative AI tools.”
All of which sounds fine, but only 6% say they have a dedicated team in place for evaluating and implementing risk mitigation strategies. Another 25% say they’re putting risk management strategies in place but that it’s a work-in-progress.
Employees: Already On It, But Don’t Tell the Boss
Meanwhile, a survey conducted by Fishbowl, a social network for professionals, reports that 43% of professionals use AI tools such as ChatGPT for work-related tasks. Of the 5,067 respondents who report using ChatGPT at work, 68% don’t tell their bosses.
This makes me wonder if A) there’s an intentional “don’t ask, don’t tell” policy in some companies that are simply afraid of establishing policies or guidelines that could get them in legal trouble down the line, or B) there’s an unintentional bureaucratic lag as companies take months or longer to establish guidelines or policies around these new technologies.
But Some Employers Aren’t Waiting
This doesn’t mean that all organizations are lagging in this area, however. Some have already set up guardrails.
The consulting firm McKinsey, for example, has reportedly knocked together some guardrails that include “guidelines and principles” about what information employees can input into the AI systems. About half of McKinsey workers are using the tech.
“We do not upload confidential information,” emphasized Ben Ellencweig, senior partner and leader of alliances and acquisitions at QuantumBlack, the firm’s artificial intelligence consulting arm.
McKinsey specifically uses the AI for four purposes:
Computer coding and development
Providing more personalized customer engagement
Generating personalized marketing content
Synthesizing content by combining different data points and services
Ten Suggested Do’s and Don’ts
There are now various articles on developing ethics and other guidelines for generative AI. Keeping in mind I’m no attorney, here’s what I think organizations should consider in the area of generative AI:
DO spend time getting to understand these AIs before using them for work
DON’T leap directly into using these tools for critical work purposes
DO be careful about what you put into a prompt
DON’T share anything you wouldn’t want shared publicly
DO always read over and fact-check any text that an AI generates if it is being used for work purposes
DON’T assume you’re getting an accurate answer, even if you’re getting a link to a source
DO use your own expertise (or that of others) when evaluating any suggestions from an AI
DON’T assume these AIs are unbiased. They are trained on human data, which tends to have bias baked in.
DO develop guardrails, guidelines and ethical principles
DON’T go full laissez faire
DO continue to use calculators, spreadsheets and other trusted calculation tools
DON’T rely on generative AI for calculations for now unless you have guarantees from a vendor; even then, test the system
DO continue to use legal counsel and trusted resources for understanding legislation, regulation, etc.
DON’T take any legal advice from an AI at face value
DO careful analysis of any tasks and jobs being considered for automation
DON’T assume these AIs can replace any tasks or positions until you and others have done your due diligence
DO train employees on both the ethical and practical uses of generative AIs once these are well understood
DON’T make everyone learn all on their own with no discussion or advice
DO start looking for or developing AI expertise, considering the possibility (for example) of a Chief AI Officer position
DON’T assume that today’s situation won’t change; things are going to continue to evolve quickly
These days, generative AI is sucking up all the proverbial oxygen in the HR tech room. This can deprive other types of excellent (and often more mature and dependable) technologies of the attention they deserve. So, since I’ve been writing so much about generative AI, I just wanted to emphasize what should be obvious: there’s much more to HR technology than neural networks.
I understand the fascination with the new AIs. They’re impressive and powerful. But, at least in their more generalized incarnations (i.e., ChatGPT, Bing, Bard, etc.), they’re still experimental and subject to problems, the most serious of which are inaccuracies to the point of sheer confabulation.
HR of Two Tech Minds
This leaves HR departments that are looking for the best new applications thinking along two tracks. First, they want the technology that meets their particular HR needs, one that is dependable and predictable. Let’s say, for example, that they want recruitment software that accurately matches the skills of candidates with the skills gaps they currently face in their organization. Efficiency and effectiveness are crucial to success here.
Their second track of thought, however, goes something like this: “But what about generative AI? How important is that going to be in this area in the near future?” Should HR pros worry whether the new system they’re considering will be out-of-date soon if it doesn’t contain elements of generative AI? Do there need to be prompt screens into which users can ask questions using natural language?
Personally, I don’t think so. A well-engineered (and well understood!) algorithm that predictably does an important task well is still a good investment. Down the road, of course, maybe that software will be integrated with some form of generative AI to serve as part of its interface. Maybe.
Good Tech Is Hard to Find
My point is that good technology that works today shouldn’t be underrated just because it’s not stamped with labels such as generative AI, Large Language Model, neural network, or even just machine learning. The topic of AI will, of course, continue to be widely discussed, touted, hyped and critiqued, but generative AI won’t completely replace or subsume other more traditional (and perhaps more dependable) HR technologies. At least not in the short term.
Every purchasing decision is unique, depending on the customer’s needs and the technology under consideration. I’m certainly in no position to judge for anyone else who’s making an important purchasing and implementation decision. But, for what it’s worth, I think HR professionals should not get so distracted by the shiny object of generative AI that they ignore the technologies that work best today.
In a recent post in this series, I argued that the latest forms of AI will play a unique role in the history of humanity and technology. In this one, I want to drill down on that idea by showing how we’ll increasingly treat generative AIs as coworkers, collaborators and more.
AI as Flawed Tool
One of the ironies of today’s generative AIs like ChatGPT is that, in many ways, they make for lousy tools in the traditional sense. What you expect from a good tool is consistency, dependability, durability and accuracy. At least for now, today’s generative AIs, especially the large language models, often fail to meet these criteria.
As I said in my last post, “If we held these AIs to the same standards as the literal tools in our toolboxes, we’d probably toss them. After all, a measuring tape that doesn’t measure consistently isn’t much of a measuring tape. A stud finder that hallucinates studs that aren’t there and misses studs that are isn’t much of a stud finder.”
Let’s get into some of the problems.
Top Five HR Functions
If you ask the generative AIs the same question multiple times, they may well give you different answers in different instances. For example, let’s say I ask one of these AIs, “What are the five most important HR functions?”
I asked Bard this question three times. It gave me the same answer the first two times and a different answer the next day. ChatGPT gave me the most consistent responses, while Bing performed more like Bard, giving me two virtually identical answers and later a somewhat different one.
Generally speaking, though, the most common answers included:
Recruitment and Selection
Training and Development
Compensation and Benefits
This is, of course, a subjective question, so who really cares if Bard throws in “outsourcing” and Bing throws in “culture” or “talent management” sometimes? Well, not me, unless I’m trying to create a training module that needs to teach a consistent lesson. I’m not saying that issue can’t be fixed, even with generative AI, but the point is that these AIs have an unpredictability that must be taken into consideration by users and developers.
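A simple way to see this unpredictability for yourself is to ask the same question several times and tally the distinct answers. Here’s a minimal sketch in Python; `ask_model` is just a placeholder for whatever chat API you happen to be using, and the canned replies below stand in for a real model so the sketch is runnable:

```python
from collections import Counter

def consistency_report(ask_model, question, trials=3):
    """Ask the same question `trials` times and tally distinct answers."""
    answers = [ask_model(question).strip().lower() for _ in range(trials)]
    return Counter(answers)

# Stub in place of a real chat API, mimicking the Bard behavior described
# above: two identical answers, then a different one.
canned = iter([
    "Recruitment, Training, Compensation, Benefits, Performance",
    "Recruitment, Training, Compensation, Benefits, Performance",
    "Recruitment, Outsourcing, Culture, Compensation, Performance",
])
report = consistency_report(lambda q: next(canned),
                            "What are the five most important HR functions?")
print(report.most_common())
```

If the top answer doesn’t dominate the tally, you know the model isn’t yet a fit for work that needs one consistent lesson.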
The Forces of Nature
In contrast, these AIs are much better at consistently providing information that has been well codified, such as scientific information. For example, they will consistently say that there are four forces of nature and identify them correctly. The definitions may be slightly different from response to response, but generally speaking they’ll be the same.
Undependable and Inaccurate
I have experienced AI “confabulations” many times. I’ve seen these AIs make up names of fictional scientists, tell me stories about things that could not have happened, and just get the facts wrong about basic things such as chronological order.
In my last post, I gave a detailed account of AI hallucinations and inaccuracies in regard to the topic of a famous poet. I’ve also experienced AI getting basic mathematics wrong. In fact, as I was writing this, I asked ChatGPT to multiply two four-digit numbers. Not only did it give me the wrong answer twice, it gave me two different answers to the same problem!
This is common for these AIs, so when I hear that ChatGPT will soon be responsible for things like bookkeeping, I have to shake my head. The firm that carelessly turns its finances over to generative AI had best be prepared for a visit from the IRS.
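That kind of failure is easy to avoid, because exact arithmetic is exactly what ordinary software is for. The numbers below are arbitrary stand-ins for the four-digit multiplication I described:

```python
# A four-digit multiplication of the sort that tripped up ChatGPT.
# (Arbitrary example numbers; any calculator or spreadsheet does the same.)
a, b = 4271, 6389
product = a * b
print(f"{a} x {b} = {product:,}")  # deterministic: identical every run
```

One line, no hallucinations, and the same answer every single time.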
That Could Change
Of course, what’s true today may not be true tomorrow. ChatGPT may become flawless at mathematics as its maker, OpenAI, forges alliances with firms such as Wolfram|Alpha. By using plug-ins and APIs, ChatGPT might be able to go from mathematical moron to savant.
Still, my point remains. Without a lot of testing, do not assume the responses coming from one of these AIs are accurate. And, if you’re purchasing an external system, be sure the vendor of the software that utilizes generative AI has a very sound explanation of how the system will be made consistently accurate and dependable.
AI as Intern
So, if these AIs are still pretty shaky as tools, what good are they? Well, that depends. What do you actually want from them?
Let’s say what you really want right now is someone who can give you a hand in a professional field in which you have pretty good skill sets yourself. Basically, you want a bright but green intern who is eager and works with remarkable speed.
In my case, for example, I know a few things about how to write a good survey about HR and work-related issues. So, I want a first draft of a survey on the topic of employment law compliance. Here’s an example of the kind of prompt I could use:
Please write a 10 question survey on the topic of employment law compliance. I’m going to survey HR professionals on this topic.
Then let’s say it gave me 10 questions that are pretty good but, because I’ll need to run a multiple regression analysis, I want some Likert-scale questions. So this time I write:
Now please provide me with five more related questions that use five-point Likert scales
And voilà! I now have a 15-question survey I can use as a good draft on which to build. Knowing my job, I can spot problems and make improvements. I am, in the end, grateful for my splendid intern.
AI as Collaborator
Now let’s say I have less use for an intern than for a bright coworker with whom I can collaborate. And let’s say I’ve been asked by my Chief HR Officer to look into the issue of pay transparency, which no one on our HR team has a good handle on yet. So, I turn to a virtual coworker who, while obviously not an expert in employment law compliance, is quite knowledgeable about a wide range of issues.
Collaborating with Bing
So, I ask this virtual coworker (let’s call it Bing in this case) some questions starting with “What can you tell me about pay transparency laws in the US?”
Bing responds as follows:
“Pay transparency laws are regulations that require employers to be more transparent with salary ranges and benefits, and they aim to help promote fairness and equity in the workplace³. Depending on the jurisdiction, employers may be prohibited from asking candidates for their salary history, or they may be required to disclose a wage range for current and/or open positions². Some states and local jurisdictions also have pay data reporting requirements that mandate employers to report pay data by gender, race, ethnicity, and other categories².
“According to the U.S. Department of Labor, pay transparency can help improve pay equity, trust, talent acquisition, and employee engagement¹. However, there is no comprehensive federal pay transparency law in the United States, and the laws at the state and city level vary widely in their scope and enforcement³⁴⁵. You can find more information on federal and state-level equal pay and pay transparency protections for workers on this map¹.”
Then Bing provides me with the following sources that I can use to verify what it’s saying:
Once I’ve studied the sources, I learn that, although there is no comprehensive federal law in this area, it might make more sense to create a national policy since our company has major operations in California, New York and Illinois, where there are existing or pending laws.
So, Bing and I bat this idea around, listing the pros and cons of a national policy. With Bing’s help, I put together an internal report on the existing laws and the advantages and disadvantages of a national policy. Ultimately, although I’m collaborating with Bing, I’m responsible for the report and must verify every fact Bing provides.
From here, I know this is going to become an organizational decision that involves lots of different stakeholders as well as legal counsel. But my collaborator and I have done a pretty good job, I’d say, of quickly putting together a balanced report that will help the company make the best decisions about a complex issue.
AI as Contractor
Now let’s say I’m part of a small communication team that’s tasked with communicating our company’s guidelines and best practices around hybrid and remote work. The policies are already developed so our job is to make them as accessible, educational and engaging as possible.
One way to do this is via visual elements. Our team brainstorms some possible fun illustrations but there’s not much that matches our vision in the clipart files and there’s no budget to contract with an outside artist on this project.
A member of the team says she might be able to use one of the new AIs to generate the illustrations we have in mind. By the end of the day, she’s shared 40 different images with the team, and we select 6 for the guidelines document.
Someone comments that he wishes all our graphic artist contractors worked so quickly and cheaply. This gets a bit of nervous laughter. After all, as writers, we’re well aware that the large language models work a lot cheaper and faster than we do.
AI as Dance Partner
Ultimately, these generative AIs don’t easily fit any pre-existing categories. Technically, they are tools but historically unique ones. Because of this, it often makes more metaphorical sense to view them as playing roles more similar to other human beings, with known strengths and weaknesses.
There’s the role of the smart and fast intern who, nonetheless, is prone to making potentially serious mistakes. There’s the role of an eager collaborator who brings many talents and total open-mindedness to the table. You can bat ideas around with this person but, ultimately, you will be responsible for the outcomes of that collaboration. And, of course, there’s the role of the contractor with special skill sets.
In all cases, though, there needs to be a growing familiarity with these AIs as they become regular “dance partners” in the workplace. You must get to know their tendencies and cadences, and you are responsible for taking the lead in whichever virtual dance you’re doing. Because, although these tools will certainly be used for automation, they are best at augmenting and complementing people with existing skill sets.
Or, at least, that’s how things stand today. Who knows what tomorrow brings?
Many have stated that artificial intelligence (AI) will change the world. When you ask them how it will, they’ll have hundreds of different answers. Here, however, I’m only going to talk about one way it’ll change the world, the most important way: that is, AI will transform the technium.
The Difference Between the Technium and the Technosphere
As far as I can tell, author Kevin Kelly coined the word technium in his 2010 book What Technology Wants, though perhaps he’d used it before then. He has defined the technium as the “greater, global, massively interconnected system of technology vibrating around us.” It not only includes hardware and software but also culture, art, social institutions, and intellectual creations of all types.
This makes the technium more inclusive than any list of technologies, such as the one cited in the previous post in this series.
I’m not sure why Kelly created technium when the word “technosphere” was readily available. That term was coined by either control engineer John Milsum or by geologist and engineer Peter Haff. Sometimes it’s also called the anthrosphere, a term originally attributed to 19th century geologist Eduard Suess.
Technium and technosphere are similar and, I suppose, both are flexible enough to be used in a variety of contexts. Geologist Jan Zalasiewicz writes:
The technosphere…comprises not just our machines, but us humans too, and the professional and social systems by which we interact with technology – factories, schools, universities, trade unions, banks, political parties, the internet. It also includes the domestic animals that we grow in enormous numbers to feed us, the crops that are cultivated to sustain both them and us, and the agricultural soils that are extensively modified from their natural state to carry out this task.
Making the Two Words More Complementary
Given the overlap of the concepts, I’ve been thinking about whether technium is redundant. One interesting way to think about the difference between technosphere and technium came to me via Google’s Bard, which argued that “the technosphere refers to the entire system of human-made objects and structures, while the technium refers to the specific processes and activities involved in creating and using these objects and structures.”
I like that distinction and I suspect Kelly himself might agree with it. After all, he writes that “the technium is a tendency, not an entity. The technium and its constituent technologies are more like a grand process than a grand artifact.”
Bard asserts that “the technosphere is the physical manifestation of the technium.” That is, the technosphere is the built environment and the technium is the human activity that creates and sustains it via engineering, manufacturing, maintenance, etc.
I don’t know if this is exactly what Kelly had in mind since he doesn’t go into detail about how the technium differs from the technosphere in his book, but I find it a useful distinction.
AI’s Role in the Technium
The reason I focus on the differences is that I think AI plays a potentially important role here. AI is obviously a growing part of the technosphere, but it’s also starting to play a role in the technium that, up till now, only humanity has played. That is, until this moment in history, human activities have made up “the grand process” that is the technium. That’s now changing, and it marks a major shift in the history of technology.
I’m not an illustrator but I was able to use AI to generate a series of images that I thought went fairly well with the poem. It was more of an experiment than anything else but it demonstrated, at least to me, the ability of AI to create a significant portion of the technosphere.
“But a piece of digital artwork is not part of the technosphere,” you might argue. Well, that becomes a matter of semantics, so let’s go with something a little more along the classic lines of built infrastructure: that is, software development.
We know that the new generative AIs are quite good, if not perfect, at generating computer code in a wide array of computer languages. So, let’s say a human being uses this capability to create 90% of the code behind a new app that finds its way onto the Apple store.
Could you argue that that’s not part of the technosphere? I doubt it. But let’s keep going anyway.
As I’ve argued before, there’s no reason that generative AI can’t be used to generate things made of atoms rather than just digital objects made of bits and bytes. It’s already a trivial matter, for example, to hook up a generative AI to a 3D printer and create a sculpture or a machine part. This is only going to get easier, with more and more complex machinery being designed by AI and built by forges, 3D printers and other components of the technosphere.
The Key Issue Is Agency Rather Than Capability
So, generative AI is not just part of the technosphere but, increasingly, the technium. That is, it begins to play a role that, up till now, only humanity itself has played. Unless the technology becomes highly regulated very quickly, this role will grow at extraordinary rates.
There will be those who assert that these AIs are only one tool along a continuum that creates the technosphere. For example, there are plenty of machines that create other machines, and there is plenty of software that is used to create other digital artifacts. As with other software, these AIs don’t create anything at all until they are prompted to do so.
Maybe so, but I’m arguing that there’s a qualitative difference here. In the creation of my previous post called “A Brief History of Human Technology,” I simply typed the title of the post into Microsoft Bing Image Creator. Otherwise, I gave it no direction at all. It generated two images, both of which I thought were quite good and yet quite different from one another. I used the first of the images in that post and used the second one as the featured image in this post (see above).
Yes, I know that the AI art generators are using existing art on the Internet that got pulled into their training models and that there are ethical issues involved, which I’ve examined elsewhere. Even so, these are still original, if derivative, pieces of art that the algorithm created with minimal guidance from me. This is a different thing than when I use an Adobe application to create a triangle or blur a detail. Like it or not, this is creation.
AI and what it produces isn’t just part of the technosphere, it now plays a role similar to that of humanity in the “grand process” and “tendency” that is the technium. (There’s a whole teleological debate here that I’m mostly going to forego for now.)
Similar but Not the Same
Yes, there are still large differences between humanity and these new AIs that have been built via the neural network idea cribbed from our own brains. But I think the primary difference in this context boils down to agency.
In this case, the AI is certainly more capable than I am as an illustrator. What it lacks, at least in this context, is the initial spark of agency to take the action to create the image. But, and I think this is important, this doesn’t mean it lacks any agency. Indeed, all I did was create an initial abstract and inchoate concept, and then it “decided” how to approach the creation of the graphic.
If I’d done the same with a human artist, we certainly wouldn’t say that person lacked agency. Quite the contrary. We’d be amazed at their ability to take such an abstract concept and turn it into a work of art! Sure, I ultimately chose and curated the final product, but that’s something that a customer or patron of the arts always does.
So, no, this isn’t the same as any other technology we’ve ever created. It’s qualitatively different. We now have a partner in the technium dance.
This changes our ontological position in the world. And, more practically, it changes the meaning of human work, which is the topic I’ll cover in my next post in this series.
PS – Note that I asked Kevin Kelly if he’d like to comment on the analysis and he clarified as follows: “I was not aware of the term technosphere when I first blogged. If I had been I might have used it. I define the technium in the same inclusive broad meaning of tangible and intangible.”
We’ve been researching how HR contributes to organizational success. One of the key issues to consider is the division between strategic and transactional HR issues.
To get an idea of how much consensus there is on that topic, I asked an identical question of three of today’s most widely discussed generative artificial intelligences: “What kinds of HR issues are strategic and what kinds are transactional?” Here’s how they responded. The only change I made was to bold a few words.
In general, HR (Human Resources) issues can be classified as either strategic or transactional based on their nature and impact on the organization. Here’s a breakdown of each category:
Strategic HR Issues:
Workforce Planning: Developing long-term strategies to ensure the organization has the right people with the right skills in the right positions to achieve its goals.
Talent Acquisition and Retention: Attracting and retaining top talent by implementing effective recruitment strategies, employer branding, competitive compensation packages, and employee development programs.
Succession Planning: Identifying and developing potential leaders within the organization to ensure a smooth transition in key positions.
Performance Management: Establishing performance standards, providing feedback, and implementing performance improvement plans to align individual and team goals with organizational objectives.
Learning and Development: Identifying skill gaps, designing training programs, and implementing initiatives to enhance the knowledge and capabilities of employees.
Organizational Culture and Change Management: Shaping the desired culture and managing change processes to facilitate organizational effectiveness, employee engagement, and adaptability.
Employee Engagement and Satisfaction: Developing strategies to enhance employee morale, satisfaction, and motivation, leading to improved productivity and retention.
Diversity and Inclusion: Promoting diversity, equity, and inclusion within the workplace through initiatives such as diverse hiring practices, creating an inclusive environment, and addressing unconscious biases.
Transactional HR Issues:
Payroll and Benefits Administration: Managing payroll processing, tax deductions, and administering employee benefits programs, such as health insurance, retirement plans, and vacation accruals.
Employee Onboarding and Offboarding: Handling the paperwork and processes associated with bringing new employees into the organization and managing employee departures.
HR Policies and Compliance: Developing and communicating HR policies, ensuring compliance with labor laws, and managing employee records and documentation.
Time and Attendance Management: Tracking employee attendance, leave requests, and managing time-tracking systems.
Employee Relations and Conflict Resolution: Addressing employee grievances, managing disciplinary actions, and facilitating conflict resolution within the workplace.
HR Data Management: Maintaining accurate and up-to-date employee information, managing HR databases, and generating reports related to workforce analytics.
HR Technology and Systems: Managing HR information systems, implementing technology solutions, and ensuring data security and privacy.
It’s important to note that some HR issues may have elements of both strategic and transactional aspects, and the categorization can vary depending on the organization and its priorities. Additionally, the strategic and transactional aspects of HR are interrelated and require collaboration to ensure the overall effectiveness of HR functions.
Transactional HR is the day-to-day work of HR, such as:
Hiring and onboarding new employees
Managing employee benefits
Enforcing company policies
Resolving employee conflicts
Complying with employment laws
Strategic HR is the work of HR that is aligned with the organization’s overall goals and objectives, such as:
Developing a talent strategy
Creating a culture of engagement
Measuring employee performance
Here is a table that summarizes the key differences between transactional and strategic HR, comparing the two on dimensions including:
Alignment with organizational goals
Role of HR
Transactional HR is essential for the smooth functioning of an organization, but strategic HR is what can help an organization achieve its long-term goals. The best HR departments are able to balance both transactional and strategic work, and they use their knowledge of both to help the organization succeed.
Here are some examples of how transactional and strategic HR can work together to achieve organizational goals:
Transactional HR can help to identify and recruit the best talent for the organization. Strategic HR can then work with the organization to develop a talent management plan that will help to retain and develop that talent.
Transactional HR can help to ensure that employees are compliant with company policies. Strategic HR can then work with the organization to develop policies that are aligned with its values and that will help to create a positive work environment.
Transactional HR can help to resolve employee conflicts. Strategic HR can then work with the organization to create a culture of respect and inclusion that will help to prevent conflicts from happening in the first place.
By working together, transactional and strategic HR can help organizations to achieve their goals and objectives.
According to some sources ² ⁴, strategic HR issues are those that are value- and future-focused, such as culture and retention strategies, recruitment and selection programs, developing onboarding programs, increasing value out of employees through training programs, developing outreach programs for recruitment sources, developing best practice guides or assistance, etc. Strategic HR helps implement strategies that help focus on an organization’s goals and address the challenges that need to be met by the human resources.
Transactional HR issues are those that are routine and administrative, such as payroll, workers compensation, benefits management, compliance with employment laws, etc. Transactional HR encompasses everything that makes hiring, training, paying, and managing your employees. Transactional HR is necessary but not sufficient for achieving organizational success.
For example¹, a transactional approach to hiring new employees is to write 10 job descriptions, post them, interview candidates and fill the positions. A strategic approach means partnering with the manager to understand the challenge that needs to be addressed, developing a long-term plan for a talent pipeline and networking to uncover the best passive candidates.
I wish I could say there’s a clear winner here, but the truth is that they gave similar answers, each in a form that plays to its own strengths. ChatGPT gave a very straightforward and well-numbered response, demonstrating why it became the fastest-adopted software tool in the history of the Internet.
Bard‘s response went to the trouble of creating a table that abstracts the characteristics of strategic and transactional in an interesting way, and it took the extra step of showing how the concepts are complementary. It’s interesting to note that the AIs don’t entirely agree on whether talent acquisition is strategic or transactional.
Meanwhile, Bing did what Bing tends to do best, which is provide a relatively succinct answer but one that provides links to original sources that supposedly support its arguments. I say “supposedly” because I’ve found that sometimes the sources it provides do not really support the assertions it makes in its summaries. Bing also wrote one incomplete sentence.
I found them all useful. In practice, I tend to use Bing a lot because it gives me sources I use to verify (or not) its assertions. This is very useful to a researcher, and I think Bing is underutilized for that reason.
That said, I’m impressed by Bard’s advances in recent weeks and will probably use it more than I have been. But ChatGPT3.5 is still a very impressive and intuitive tool, and it provided, in my eyes, the most straightforward answer.
Vive la différence! There’s room in the world for more than one scary-smart-but-annoyingly-hallucinogenic AI, it seems. May we (including us human intelligences) all learn to get along in a civil manner. That would be the hallmark of a rich and interestingly complex intelligence ecosystem.
Note: The image featured is from Microsoft Bing Image Creator, in which the prompt was “In the style of Utagawa Kuniteru, show three sumo wrestlers wrestling one another”. Please note that there’s no implication that today’s AIs are somehow Japanese. I just wanted an image of three powerful wrestlers illustrated in the style of an excellent artist who has long since passed on and would have no concerns about copyright issues.
The other day, I was playing with Stable Diffusion and found myself thinking hard about the ethics of AI-generated images. Indeed, I found myself in an ethical quandary. Or maybe quandaries.
More specifically, I was playing with putting famous haiku poems into the “Generate Image” box and seeing what kinds of images the Stable Diffusion generator would concoct.
It was pretty uninspiring stuff until I started adding the names of specific illustrators in front of the haiku. Things got more interesting artistically but, from my perspective, murkier ethically. And it made me wonder whether society has yet formulated a way to approach the ethics of AI-generated images.
The Old Pond Meets the New AIs
The first famous haiku I used was “The Old Pond” by Matsuo Bashō. Here’s how it goes in the translation I found:
An old silent pond
A frog jumps into the pond—
Splash! Silence again.
At first, I got a bunch of photo-like but highly weird and often grotesque images of frogs. You’ve got to play with Stable Diffusion a while to see what I mean, but here are a few examples:
Okay, so far, so bad. A failed experiment. But that’s when I had the bright idea of adding certain illustrators’ names to the search so the generator would be able to focus on specific portions of the reticulum to find higher quality images. For reasons that will become apparent, I’m not going to mention their names. But here are some of the images I found interesting:
Better, right? I mean, each one appeals to different tastes, but they aren’t demented and inappropriate. There was considerable trial and error, and I was a bit proud of what I eventually kept as the better ones.
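For the curious, the prompting pattern I settled on is simple enough to sketch in a few lines. This is just an illustration of the idea, not any generator’s actual API; `style_prompt` is a hypothetical helper that builds the text you’d paste into the “Generate Image” box.

```python
def style_prompt(artist: str, haiku: str) -> str:
    """Build a text-to-image prompt that steers the generator toward
    an illustrator's style by prefixing the artist's name to the haiku."""
    # Collapse the haiku's line breaks into a single prompt string.
    text = " ".join(line.strip() for line in haiku.splitlines() if line.strip())
    return f"{artist}, {text}" if artist else text

basho = """An old silent pond
A frog jumps into the pond—
Splash! Silence again."""

# Hokusai died in 1849, so naming him follows my "deceased artists only" rule below.
prompt = style_prompt("Hokusai", basho)
```

The artist’s name up front is doing most of the work: it nudges the model toward a coherent region of its training distribution, which is exactly why the results improved so dramatically over the bare haiku.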
“Lighting One Candle” Meets the AI Prometheus
The next haiku I decided to use was “Lighting One Candle” by Yosa Buson. Here’s how that one goes:
The light of a candle
Is transferred to another candle—
Spring twilight
This time I got some fairly shmaltzy images that you might find in the more pious sections of the local greeting card aisle. That’s not a dig at religion, by the way, but that aesthetic has never appealed to me. It seems too trite and predictable for something as grand as God. Anyway, the two images of candles below are examples of what I mean:
I like the two trees, though. I think it’s an inspired interpretation of the poem, one that I didn’t expect. It raised my opinion of what’s currently possible for these AIs. It’d make for a fine greeting card in the right section of the store.
But, still not finding much worth preserving, I went back to putting illustrators’ names in with the haiku. I thought the following images were worth keeping.
In each of these cases, I used an illustrator’s name. Some of these illustrators are deceased but some are still creating art. And this is where the ethical concerns arise.
Where Are the New Legal Lines in Generative AI?
I don’t think the legalities relating to generative AI have been completely worked out yet. Still, it does appear that artists are going to have a tough time battling huge tech firms with deep pockets, even in nations like Japan with strong copyright laws. Here’s one quote from the article “AI-generated Art Sparks Furious Backlash from Japan’s Anime Community”:
[W]ith art generated by AI, legal issues only arise if the output is exactly the same, or very close to, the images on which the model is trained. “If the images generated are identical … then publishing [those images] may infringe on copyright,” Taichi Kakinuma, an AI-focused partner at the law firm Storia and a member of the economy ministry’s committee on contract guidelines for AI and data, told Rest of World….But successful legal cases against AI firms are unlikely, said Kazuyasu Shiraishi, a partner at the Tokyo-headquartered law firm TMI Associates, to Rest of World. In 2018, the National Diet, Japan’s legislative body, amended the national copyright law to allow machine-learning models to scrape copyrighted data from the internet without permission, which offers up a liability shield for services like NovelAI.
How About Generative AI’s Ethical Lines?
Even if the AI generators have relatively solid legal lines defining how they can work, the ethical lines are harder to draw. With the images I generated, I didn’t pay too much attention to whether the illustrators were living or dead. I was, after all, just “playing around.”
But once I had the images, I came to think that asking the generative AI to ape someone’s artistic style is pretty sleazy if that artist is still alive and earning their livelihood through their art. That’s why I don’t want to mention any names in this post. It might encourage others to add the names of those artists into image generators. (Of course, if you’re truly knowledgeable about illustrators, you’ll figure it out anyway, but in that case, you don’t need any help from a knucklehead like me.)
It’s one thing to ask an AI to use a Picasso-esque style for an image. Picasso died back in 1973. His family may get annoyed, but I very much doubt that any of his works will become less valuable due to some (still) crummy imitations.
But it’s a different story with living artists. If a publisher wants the style of a certain artist for a book cover, for example, then the publisher should damn well hire the artist, not ask a free AI to crank out a cheap and inferior imitation. Even if the copyright system ultimately can’t protect those artists legally, we can at least apply social pressure to the AI generator companies as customers.
I think AI generator firms should have policies that allow artists to opt out of having their works used to “train” the algorithms. That is, they can request to be put on the equivalent of a “don’t imitate” list. I don’t even know if that’s doable in the long run, but it might be one step in the direction of establishing proper ethics of AI-generated images.
The Soft Colonialism of Probability and Prediction?
First is the exploitation of cultural capital. These models exploit enormous datasets of images scraped from the web without authors’ consent, and many of those images are original artworks by both dead and living artists….The second concern is the propagation of the idea that creativity can be isolated from embodiment, relations, and socio-cultural contexts so as to be statistically modeled. In fact, far from being “creative,” AI-generated images are probabilistic approximations of features of existing artworks….AI art is, in my view, soft propaganda for the ideology of prediction.
To an extent, his first concern about cultural capital is related to my previous discussion about artists’ legal and moral rights, a topic that will remain salient as these technologies evolve.
His second concern is more abstract and, I think, debatable. Probabilistic and predictive algorithms may have begun in the “Global North,” but probability is leveraged in software wherever it is developed these days. It’s like calling semiconductors part of the “West” even as a nation like Taiwan innovates the tech and dominates the space.
Some of his argument rests on the idea that generative AI is not “creative,” but that term depends entirely on how we define it. Wikipedia, for example, states, “Creativity is a phenomenon whereby something new and valuable is formed.”
Are the images created by these technologies new and valuable? Well, let’s start by asking whether they represent something new. By one definition, they absolutely do, which is why they are not infringing on copyright. On the other hand, for now they are unlikely to create truly new artistic expressions in the larger sense, as the Impressionists did in the 19th century.
As for “valuable,” well, take a look at the millions if not billions of dollars investors are throwing their way. (But, sure, there are other ways to define value as well.)
My Own Rules for Now
As I use and write about these technologies, I’ll continue to leverage the names of deceased artists. But for now I’ll refrain from using images based on the styles of those still living. Maybe that’s too simplistic and binary. Or maybe it’s just stupid of me not to take advantage of current artistic styles and innovations. After all, artists borrow approaches from one another all the time. That’s how art advances.
I don’t know how it’s all going to work out, but it’s certainly going to require more thought from all of us. There will never be a single viewpoint, but in time let’s hope we form some semblance of consensus about what are principled and unprincipled usages of these technologies.
Featured image is from Stable Diffusion. I think I used a phrase like "medieval saint looking at a cellphone." Presto.
We are about to be awash in AI-generated media, and our society may have a tough time surviving it.
Our feet are already wet, of course. The bots inhabit Twitter like so many virtual lice. And chatbots are helpfully annoying visitors on corporate websites the world over. Meanwhile, algorithms have been honing their scribbler skills on the virtual Grub Street of the Internet for a while now.
But soon, and by soon I mean within months, we will be hip deep in AI-generated content and wondering how high the tide is going to get.
My guess is high, baby. Very high indeed.
What Are We Really Talking Here?
Techopedia defines generative AI as a “broad label that’s used to describe any type of artificial intelligence that uses unsupervised learning algorithms to create new digital images, video, audio, text or code.” In short, it’s all about AI-generated media.
I think that label will ultimately prove too restrictive, but let’s start there. So far, most of the hype is indeed around media, especially image creation and automated writing, with music and video not being far behind.
But we’ll get to that.
For now it’s enough to say that generative AI works by learning from, and being “inspired by,” the dynamic global reticulum that is the Internet.
But generative AI also applies to things like computer code. And, by and by, it’ll start generating atoms in addition to bits and bytes. For example, why couldn’t generative AI be applied to 3D printing? Why not car and clothing design? Why not, even, the creation of new biological systems?
The Money Generator
First, let’s follow the money. So how much dough is going into generative AI these days?
Answer: how much you got, angels and VCs?
For example, a start-up called Stability AI, which created the increasingly popular Stable Diffusion image-generating algorithm, was recently injected with a whopping $101 million round of investment capital. The company is now valued at a billion bucks.
Meanwhile other image generators such as DALL-E 2 and Midjourney have already acquired millions of users.
But investors are not just hot for image generators. Jasper, a generative writing company that’s just a year old (and one that plagues me with ads on Facebook) recently raised $125 million in venture capital and has a $1.5 billion valuation.
Although image and prose (usually with an eye toward marketing) are the hot tickets in generative AI for now, they are just the proverbial tip of the iceberg. Indeed, it appears that Stability AI, for one, has much grander plans beyond images.
The New York Times reports that the company’s soon-to-be massive investments in AI hardware will “allow the company to expand beyond A.I.-generated images into video, audio and other formats, as well as make it easy for users around the world to operate their own, localized versions of its algorithms.”
Think about that a second. Video. So people will be able to ask generative AI to quickly create a video of anything they can imagine.
Fake Film Flim-Flams
Who knows where this leads? I suppose soon we’ll be seeing “secret” tapes of the Kennedy assassination, purported “spy video” of the Trump/Putin bromance, and conspiracy-supporting flicks “starring” a computer-generated Joe Biden.
We can only imagine the kind of crap that will turn up on YouTube and social media. Seems likely that one of the things that generative AI will generate is a whole new slew of conspiracists who come to the party armed with the latest videos of Biden handing over Hunter’s laptop to the pedophiliac aliens who wiped Hilary’s emails to ensure that Obama’s birth place couldn’t be traced back to the socialist Venusians who are behind the great global warming scam.
Even leaving political insanity aside, however, what happens to the film and television industries? How long until supercomputers are cranking out new Netflix series at the rate of one per minute?
Maybe movies get personalized. For example, you tell some generative AI to create a brand new Die Hard movie in which a virtual you plays the Bruce Willis role and, presto, out pops your afternoon’s entertainment. Yippee ki yay, motherfucker!
So, AI-generated media on steroids. On an exponential growth curve!
Play that Fakey Music
Then there are the sound tracks to go with those AI-gen movies. The Recording Industry Association of America (RIAA) is already gearing up for these battles. Here’s a snippet of what it submitted to the Office of the U.S. Trade Representative.
There are online services that, purportedly using artificial intelligence (AI), extract, or rather, copy, the vocals, instrumentals, or some portion of the instrumentals (a music stem) from a sound recording, and/or generate, master or remix a recording to be very similar to or almost as good as reference tracks by selected, well known sound recording artists.
To the extent these services, or their partners, are training their AI models using our members’ music, that use is unauthorized and infringes our members’ rights by making unauthorized copies of our members’ works. In any event, the files these services disseminate are either unauthorized copies or unauthorized derivative works of our members’ music.
That’s an interesting argument that will probably be tried by all creative industries. That is, just training your AI on Internet copies of musical works violates copyright even if you have no intention of directly using that work in a commercial project. I imagine the same argument could be applied to any copyrighted work. Who knows what this will mean for “synthetic media,” as some are calling it.
Of course, there are plenty of uncopyrighted works AI can be trained on, but keeping copyrighted stuff from being used for machine learning programs could put a sizeable dent in the quality of generative AI products.
So, it won’t only be media that’s generated. Imagine the blizzard of lawsuits until it’s all worked out.
Revenge of the Code
AI can code these days. Often impressively so. I suppose it’d be ironic if a lot of software developers were put out of work by intelligent software, but that’s the direction we seem headed.
Consider the performance of DeepMind’s AlphaCode, an AI designed to solve challenging coding problems. The team that designed it had it compete with human coders to solve 10 challenges on Codeforces, a platform hosting coding contests.
Prof. John Naughton writing in The Guardian describes the contest and summarizes, “The impressive thing about the design of the Codeforces competitions is that it’s not possible to solve problems through shortcuts, such as duplicating solutions seen before or trying out every potentially related algorithm. To do well, you have to be creative.”
On its first try, AlphaCode did pretty well. The folks at DeepMind write, “Overall, AlphaCode placed at approximately the level of the median competitor. Although far from winning competitions, this result represents a substantial leap in AI problem-solving capabilities and we hope that our results will inspire the competitive programming community.”
To me, a very amateurish duffer in Python, this is both impressive and alarming. An AI that can reason out natural language instructions and then code creatively to solve problems? It’s kind of like a Turing test for programming, one that AlphaCode might well be on target to dominate in future iterations.
Naughton tries to reassure his readers, writing that “engineering is about building systems, not just about solving discrete puzzles,” but color me stunned.
What’s next for generative AI once it finds its virtual footing?
Well, atoms are the natural next step.
Ask yourself: if generative AI can easily produce virtual images, why not sculptures via 3D printers? Indeed, why not innovative practical designs?
This is not a new idea. There is already something called generative design. Sculpteo.com describes, “Instead of starting to work on a design from scratch, with a generative design process, you tell the program what you need to accomplish, you set your design goals and mention all the parameters you can. No geometry is needed to start a project. The software will then deliver you hundreds or thousands of design options, the AI can also make an in-depth analysis of the design and establish which one is the most efficient one! This method is perfect to explore design possibilities to get an optimal part.”
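The loop Sculpteo describes—state your goals and parameters, let the software propose many candidates, then select the most efficient—can be caricatured in a few lines. What follows is a toy of my own devising, not any real CAD tool’s API: it randomly samples beam cross-sections within stated bounds and keeps the one with the best stiffness-per-material score.

```python
import random

def sample_designs(n, rng):
    """Propose n candidate beam cross-sections (width, height in cm)
    within the designer's stated parameter bounds."""
    return [(rng.uniform(1.0, 10.0), rng.uniform(1.0, 10.0)) for _ in range(n)]

def efficiency(design):
    """Toy objective: bending stiffness (proportional to w * h**3)
    per unit of material used (w * h). Higher is better."""
    w, h = design
    return (w * h**3) / (w * h)

def generative_design(n=1000, seed=42):
    """Generate many options and return the most efficient one,
    mimicking the explore-then-select workflow described above."""
    rng = random.Random(seed)
    return max(sample_designs(n, rng), key=efficiency)

best_width, best_height = generative_design()
```

Real generative design replaces the random sampler with smarter search and the toy objective with physics simulation and manufacturing constraints, but the shape of the loop—generate, evaluate, select—is the same.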
How About Bio?
Not long ago, I wrote a tongue-in-cheekish post about the singularity. An acquaintance of mine expressed alarm about the idea. When I asked what scared her most, she said, “If AI can alter DNA, I’d say the planet is doomed.”
That particular scenario had never occurred to me, but it’s easy enough to see her point. DNA is biological code. Why not create a generative AI that can design new life forms almost as easily as new images?
In fact, why stop at design? Why not 3D print the new critters? Again, this is a concept that already exists. As the article “3D Bioprinting with Live Cells” describes it, “Live cell printing, or 3D bioprinting, is an emerging technology that poses a revolutionary development for tissue engineering and regeneration. This bioprinting method involves the creation of a spatial arrangement of living cells and biologics into a functionalized tissue.”
The good news? Probably some fascinating new science, designer replacement organs on demand, and all the strange new machine-generated meat you can eat!
The bad news? Shudder. Let’s not go there today.
Mickey Mouse and the Age of Innovative AI
Although we’re calling this generative AI, the better term might be innovative AI. We are essentially contracting AI writers, artists and coders to do our bidding. Sure, they’re imitating, mixing and matching human-made media, but they are nonetheless “the talent” and will only get better at their jobs. We, on the other hand, are promoted to the positions of supercilious art directors, movie producers and, inevitably (yuck) critics.
If the singularity ever actually happens, this emerging age of innovative AI will be seen as a critical milestone. It feels like a still rough draft of magic, and it may yet all turn out wonderfully.
But I find it hard not to foresee a Sorcerer’s Apprentice scenario. Remember in Fantasia, when Mickey Mouse harnesses the power of generative sorcery and winds up all wet and sucked down a whirlpool?
Unlike Mickey, we’ll have no sorcerer to save our sorry asses if we screw up the wizardry. This means that, in sum, we need to use these powerful technologies wisely. I hope we’re up to it. Forgive me if, given our recent experiences with everything from social media madness to games of nuclear chicken, I remain a bit skeptical on that front.
Feature image generated by Stable Diffusion. The prompt terms used were "Hokusai tsunami beach people," with Hokusai arguably being the greatest artist of tsunamis in human history. In other words, the AI imitated Hokusai's style and came up with this original piece.