Networks of Birdsong

Birdsong is networking, the sending and receiving of signals across broad expanses. In the mornings, especially right now, the choir gets so loud that I am, as they say, up with the birds. And, although not an active or important part, I too am within those networks of birdsong. That is, I listen, though I remain mostly ignorant of their meanings.

Each Bird Is a Neuron

Think of the birds themselves as neurons. Their bodies are the soma that provide energy to drive activities. Their voices are axons, sending messages to various other birds at once, and their ears (though not readily visible) are dendrites, receiving those signals.

In the mornings, I hear a complex reticulum of sound: some of them are songs, some calls, some alarms.

Different sounds and songs have different and perhaps multiple meanings:

  • mating songs used to attract mates
  • territorial songs to ward off competitors
  • alarm calls to ward off predators
  • contact calls to coordinate movements
  • begging calls to solicit food from parents
  • social songs to strengthen bonds between groups
  • imitation songs to mimic others
  • whisper songs used for quiet communication
  • flight songs used to communicate on the go

In the mornings, I expect, we’re hearing all of these and more.

Why in the Morning?

In the morning, there tends to be less background noise, allowing birds to communicate more clearly. Also, the air is cooler and, therefore, denser, which means their songs travel farther at that time of day.

Perhaps their symphonies of sound are also like morning meetings at work, a way for everyone to plan and prepare for the coming day.

Imagine a Giant Bird Brain

We often think of networks in visible terms. We picture the brain and we envision complex interweavings of gray matter. We picture transportation networks and we see roads and railroad tracks and airline flight paths. We picture communication networks and imagine telephone poles and fiber optic cables and cell towers and millions of computers, televisions and more.

It requires a bit more imagination to visualize birdsong this way. But conceive of each bird sound as a differently colored fiber optic cable that extends to every other bird in the vicinity. These are the axons sending messages in multiple directions at once.

Now imagine that a bird (call her Alice) is just inside the hearing range of another bird (call her Shiho) who is calling or singing. If Alice responds to Shiho in some way, that message does not just go back to Shiho but to other birds who are considerably outside of the call range of Shiho.

Now there’s a third bird (call him Jake) who hears Alice and responds to her call, even though the original was intended for Shiho. Now multiply this thousands or millions of times, and envision the complexity and sheer scale of that network.

The World Thinking Its Thoughts

Occasionally I’ll read an article discussing the rise of the human infosphere wrapping the entire planet in wired and wireless networks, one that’s becoming the “nervous system” of the world. That may be valid as far as it goes, but we should remember that vast information networks existed long before human beings did, and they continue today.

Human beings are still only in the early stages of being able to grasp the information in these natural networks. Indeed, it’s likely that we civilized 21st-century folks have actually lost much of our ability to tap into those networks. Many of our pre-agricultural predecessors were likely better at this, able to interpret what different sounds might mean for them.

For example, they might have gotten a heads-up that a certain known and dangerous predator was in the area, or they might have been able to net certain birds whose calls gave away a feeding ground.

What’s Next?

But one advantage we do have is our latest technologies. For example, there is the splendid Merlin app out of the Cornell Lab of Ornithology, which identifies birds by their songs as well as by photos. Using these types of tools, we can more easily learn the various sounds of birds and even play certain vocalizations back to them to see if and how they respond.

There are other technologies that may help as well, especially in the area of machine learning. Indeed, Karen Bakker, a professor at the University of British Columbia and a fellow at the Harvard Radcliffe Institute for Advanced Study, is quoted as saying,

There are long-standing Indigenous traditions of deep listening that are deeply attuned to nonhuman sounds. So if we combine digital listening—which is opening up vast new worlds of nonhuman sound and decoding that sound with artificial intelligence—with deep listening, I believe that we are on the brink of two important discoveries. The first is language in nonhumans. The second is: I believe we’re at the brink of interspecies communication.

That’s an amazing statement that I hope to examine more closely in a future post.

Suspension: Flash Fiction

Suspension

My t-shirts are suspended off plastic hangers in my closet. Straight and still as soldiers at attention. They obey a force I can name but not understand.

As I sit on the side of my bed, I’m fixated by this mundane miracle. I think of Sir Isaac beneath the proverbial apple tree. A fleshy fruit hangs by a stem one moment, unleashes its potential the next, striking a great mind burgeoning with equations and alchemy. A big bang of calculated trajectories. Arcing, plunging objects in motion: cannonballs, gravity bombs, the long curve of missiles.

Suspended in space.

I flash on Cherie. One Halloween afternoon, Mom ironed my new store-bought Spiderman costume, then placed it with its plastic mask on a wire hanger. She hooked it on a doorway frame in our kitchen. Hovering there like a scarecrow, the suit and mask alarmed Cherie, who barked and barked her shrill poodle bark. “Christ on the cross,” said my exasperated mother. “Christ almighty.”

I rise off my creaking bed and, naked though I am, hobble through the too-still house to the garage. A rope dangles off a rafter, one end tied to a thick pipe. With difficulty, I untie that end, pull the rest down, and unknot the remaining loop. Then I coil it all and place it neatly on a pegboard hook, where it hangs straight and still.


For a little more of my fiction, please go to Fiction/Poetry 

Christina Rossetti’s “Who Has Seen the Wind?”

Who has seen the wind?
Neither I nor you.
But when the leaves hang trembling,
The wind is passing through.

Who has seen the wind?
Neither you nor I.
But when the trees bow down their heads,
The wind is passing by.


Christina Rossetti’s “Who Has Seen the Wind?” is part of a project in which I’m posting poetry that’s in the public domain along with illustrations that are also sometimes from the public domain and other times from one of the new AI art generators. Essentially, I paste in the poem (or parts of the poem), see what the generator comes up with, and pick the images that seem best to me. These three particular images are from Bing and Stable Diffusion.

This seemingly simple poem is accessible to children as well as adults, but it uses the wind in a way that can be interpreted symbolically as well as literally. On one hand, it speaks to a natural force that can be inhumanly powerful while always invisible. This seems profound and mysterious to adults as well as children. In a sense, it represents so many other physical forces that we experience but can’t see, from gravity to most of the electromagnetic spectrum to the atomic forces that humanity has only recently discovered (and, of course, weaponized).

But the wind also represents other mysteries, from the divine to the ineffable. God is the most obvious and greatest of these symbolic forces, gentle and merciful at times, wrathful at others. The wind also serves as a symbol of other essences such as human love, so tender in serene days, so deeply passionate in more tempestuous weathers.

Cortada’s Key Deer

Cortada’s Key Deer

The deer’s pose is a trope so old
it could be painted in ochre and hematite
on the dark, irregular curves of Paleolithic skulls,
like the caves of Lascaux and Cosquer.

Its narrow head high, perked ears alert,
almond eyes forward, scanning for stalking
shadows or low rustles, poised to bound if
catching sight of reflected glass or the gleam of gunmetal.

There’s no telling the deer is tiny or tame,
evolved to fit snugly in the tight pockets of Florida’s keys,
packed in with cotton mice, ring-necked snakes,
leathery Conchs and tourists red as steamed lobsters,
their days spent skittering among the corals done.

But it’s clear this deer is something new,
abstracted into polygons, disfigured yet distilled
amid a wood of dark, circling selves,
an underbelly charred as trees after a forest burn.

Surviving or transcended, it’s infused with the fractured light
of a fragile forest clearing, coat sienna as sunset,
though with a fractured face, as if the fauvist hunter Braque
were targeting kills with kaleidoscopes.

Then the moment holds, the deer captured in amber,
facets of itself suspended and reflected in infinity,
there and not, a talisman hung on a leather cord
around the neck of a naked, horned hunting god
roaming sorrel plains, chanting ancient
songs of human extinction.


Author’s note: “Cortada’s Key Deer” is an ekphrastic poem based on a work of art by Xavier Cortada called “(Florida is…) Key Deer.” Click the link to see the original art.

Is Going Back to the Office the True Cause of the Decline in Worker Productivity?

It runs against the conventional wisdom, but the Bureau of Labor Statistics data suggests that going back to the office is the true cause of the decline in worker productivity we’ve seen recently.

I was writing an article on long-term trends in U.S. productivity when I noticed that if you look at quarterly labor productivity data from the last few years, you see pretty solid productivity growth from 2020 through 2021 but then a hard dip in 2022.

I figured I couldn’t be the first person to draw the obvious conclusion that the return to office is pretty well correlated with a decline in worker productivity, and I was right.

Correlation Isn’t Causation But….

It turns out Gleb Tsipursky, Ph.D., wrote an article about this for Fortune magazine back in February. He even put together a handy-dandy graph based on Bureau of Labor Statistics data.

As Tsipursky neatly sums it up: “U.S. productivity jumped in the second quarter of 2020 as offices closed, and stayed at a heightened level through 2021. Then, when companies started mandating a return to the office in early 2022, productivity dropped sharply in Q1 and Q2 of that year. Productivity recovered slightly in Q3 and Q4 as the productivity loss associated with the return to office mandate was absorbed by companies–but it never got back to the period when remote-capable employees worked from home.”

Maybe There Are Other Reasons

Of course, correlation isn’t causation, and there may be other factors involved. For one thing, the pandemic meant that there were suddenly more people dropping (or being dropped) out of the workforce. In fact, the mini-recession we saw at the start of 2020 could help explain higher productivity numbers.

That’s because as employers shed more jobs, existing employees are forced to take on more work from their former colleagues. Also, new processes may be put in place to keep production relatively stable. Some have called this “cleansing out unproductive inputs,” which certainly sounds harsh but may have some element of truth, at least in the short run.

As the economy recovered, more people were hired back, which might help explain the decline in productivity figures.

Then There’s the Inflation Angle

Inflation demoralizes employees unless employers are matching inflation increases with increases in compensation. A survey from the HR Research Institute recently asked HR professionals, “What do you believe are your employees’ top five sources of financial stress?” and the number one answer was “inflation issues,” cited by 62% of participants.

It makes sense that if employees feel they are doing the same work for less and less money each month, then they grow less satisfied, engaged and productive. And, although this problem is cumulative over time as employees lose their purchasing power, the highest spikes in inflation occurred in early 2022.

At the same time as this was happening, employees were spending more money on gas as they started to commute back to their workplaces.

Still, Somebody Needs to Tell the CEOs

Whether the increase in labor productivity was caused by an increase in remote work, a sudden spate of downsizings, and/or other factors, the bottom line is that business leaders should be careful not to assume that bringing people back into offices will automatically make them more productive. In fact, if the loss of productivity is being caused or influenced by higher inflation rates unmatched by higher compensation rates, then return to office mandates may make things worse rather than better.

Nonetheless, a lot of CEOs seem to think that a return to office program is the way to go. Make It reports, “While half of employers say flexible work arrangements have worked well for their companies, 33% who planned to adopt a permanent virtual or hybrid model have changed their minds from a year ago, according to a January 2023 report from Monster.”

Best Not to Mention the Dog

How CEOs communicate about their desire to get more employees back in the workplace can be a tricky proposition and can result in public relations nightmares if not done well. A case in point is James Clarke, CEO of Clearlink, who was reportedly “slammed on social media after he praised an employee for selling his family’s dog to be able to return to the company’s office.”

I know nothing about Clarke. Maybe he’s otherwise a terrific business leader. But bosses may want to rethink any allusions to dogs when it comes to return-to-work policies. Most Americans probably like their dogs way more than they like their fellow human beings, especially if those human beings are well-off CEOs forcing people to go back into the office on the perhaps faulty premise that it’ll boost productivity.

But Maybe It’s Not Even About Productivity

Of course, it could be that a lot of CEOs don’t really think it’s about productivity. Maybe it’s more about their own values and attitudes toward work and workers. Insider magazine quotes Joan Williams, the director of the Center for WorkLife Law at the University of California College of the Law: “These are men with very traditional views, who see the home as their wife’s domain and work as men’s domain. These are people like Elon Musk, for whom everything is a masculinity contest, and the workplace is the key arena. They have no desire to continue to work from home. This is not about workplace productivity. It’s about masculinity.”

So, some leaders prefer masculinity over employee performance? Is that the true cause of the decline in worker productivity?

Maybe.

The truth is, it’s complicated. I’m sure there are some female bosses who’d also like to see employees back in the workplace. And research from the HR Research Institute, where I work, shows that even a lot of HR professionals believe that their corporate cultures have suffered due to a massive move to remote work.

I imagine that a lot of this comes down to the specifics at any organization. Every company of significant size has its own complex ecosystem of culture, policies, work processes and management quality. Business leaders need to make the best decisions they can given all these variables.

But they should keep in mind that a lot of employees might actually be more rather than less productive at home. If that’s true, the guy bosses should put their masculinity aside for the good of the organization. Save it for the handball court, the golf course, the corporate suite at the stadium, or wherever they can let their testosterone (and views on dog ownership) flow unimpeded.

Featured image is from Awiseman, posted on Wikipedia at https://commons.wikimedia.org/wiki/File:Goofing_off_in_the_office.jpg

William Blake’s “The Tyger”

The Tyger

Tyger Tyger, burning bright,
In the forests of the night;
What immortal hand or eye,
Could frame thy fearful symmetry?

In what distant deeps or skies.
Burnt the fire of thine eyes?
On what wings dare he aspire?
What the hand, dare seize the fire?

And what shoulder, & what art,
Could twist the sinews of thy heart?
And when thy heart began to beat,
What dread hand? & what dread feet?

What the hammer? what the chain,
In what furnace was thy brain?
What the anvil? what dread grasp,
Dare its deadly terrors clasp!

When the stars threw down their spears
And water’d heaven with their tears:
Did he smile his work to see?
Did he who made the Lamb make thee?

Tyger Tyger burning bright,
In the forests of the night:
What immortal hand or eye,
Dare frame thy fearful symmetry?


William Blake’s “The Tyger” is part of a project in which I’m posting poetry that’s in the public domain along with illustrations that are also sometimes from the public domain and other times from one of the new AI art generators. Essentially, I paste in parts of the poem, see what the generator comes up with, and pick the images that seem best to me. These three particular images are from Blake himself, Bing and Stable Diffusion. I wrestled with whether or not to use Blake’s own illustration of the eponymous tiger but reluctantly decided against it. In my view, his image just doesn’t carry the same weight as the poem itself in our modern era.

Why is this poem still so popular after more than 200 years? I’m sure there are various good answers. For me, it’s the fact that, in the guise of a nursery-style rhyme, Blake gets at one of the great paradoxes of our existence. On one hand, a huge portion of humankind believes in a merciful, compassionate God who sacrificed his own Son so that humanity could be spiritually saved and literally made immortal. On the other hand, this same God, an infinitely omnipotent Being, has created a world in which brutality rather than compassion rules. In which Nature, red in tooth and claw (to quote a different poet), requires its denizens to regularly murder one another in order to survive.

Blake was, of course, writing over half a century before Darwin published his “On the Origin of Species,” which attempted to make better sense of bloody-minded Nature. And, once he did, there was a great and continuing debate about whether the idea of evolution negated the notion of a Creator, merciful or otherwise.

This tension between gorgeous but amoral and deadly Nature and humanity’s allegedly more moral universe reverberates through our lives each day. As a species, we continue to be a deeply confused moral mess while Nature remains, in its own way, pure and beautiful as the ever rarer tigers still slip through the forests of Asia, their eyes shining in the moonlight as they, burning with desire in the forests of the night, hunt prey that must die so they can live.

“A Jelly-Fish” by Marianne Moore

A Jelly-Fish

Visible, invisible,
A fluctuating charm,
An amber-colored amethyst
Inhabits it; your arm
Approaches, and
It opens and
It closes;
You have meant
To catch it,
And it shrivels;

You abandon
Your intent—
It opens, and it
Closes and you
Reach for it—
The blue
Surrounding it
Grows cloudy, and
It floats away
From you.

“A Jelly-Fish” by Marianne Moore is part of a project in which I’m posting poetry that’s in the public domain along with illustrations that are also sometimes from the public domain and other times from one of the new AI art generators. Essentially, I paste in the poem, see what the generator comes up with, and pick the images that seem best to me. These particular images are from Bing. I’ve reluctantly broken Moore’s poem into two apparent stanzas but only because I couldn’t (yet) figure out how to lay it out otherwise. The fault is mine. The original poem is single, unbroken and beautiful. All art works the way of the jelly-fish, I suppose, but Moore caught more than her share of elusive, beautiful ineffabilities.

The Doomsday Glacier and the Sunshine State

I’ve always figured I’d be long gone before my lovely Tampa Bay area is sunk by global warming. But then, I also figured I’d be barely a memory before humanity developed anything along the lines of the ChatGPT AI. Surprise! That made me wonder: what if I’m wrong about the time scales for the sinking of the state due to global warming? For example, is Florida prepared for flooding from the Doomsday Glacier melt?

Oh, sure, maybe we’ll get “lucky” and the AIs will sink humanity well before global warming does, but this is Florida. Dark weird shit happens. Heck, Carl Hiaasen has gotten rich off all the dark weirdness of the Sunshine State. And, it seems to get darker and weirder by the year. Just look at our governor!

But that’s a different disaster. Let’s stick with floodwaters for now.

The Not-So-Slow Slimming of the Gulf Coast


“How did you go bankrupt?” Bill asked.

“Two ways,” Mike said. “Gradually and then suddenly.”

Ernest Hemingway’s The Sun Also Rises

Between all the unhinged ravings of anti-democracy demagogues, the wailing of AI apocalypse criers, and the deafening tick-tock of the nuclear Doomsday Clock, it’s hard to hear anything else these days. Nonetheless, I did take note when I recently saw that sea levels are rising faster on the Florida Gulf Coast than in much of the rest of the world.

How much faster? Well, pretty darned fast by geological standards. That is, about half an inch per year since 2010, three times the global average.

But hey, we live on human time, not geological time! Even the good ole Don Cesar hotel down on the beach reportedly stands at about a 7-foot elevation. At half an inch per year, that buys it another 160 years or so above water (unless some big old Cat 5 sweeps it off the beach, but that’s a slightly different scenario).

And I, my friends, live way higher up than that (if double-digit feet can be considered high, which is obviously belly-busting laughable to your average denizen of Denver). So, unless our humble home is handed down to our descendants for a millennium or more, no problemo!

That sounds pretty comforting, except for a little thing called the Doomsday Glacier.

How Long Will Thwaites Wait?

No, it’s not really called the Doomsday Glacier except by the same crybaby mainstream media scaremonger types who tried to tell us that invading Afghanistan might be a sketchy notion, or that those adjustable-rate mortgages could result in a bit of an economic downturn, or that a sitting reality-TV president with a phony hairline might foment a bit of a ruckus at the Capitol just so he wouldn’t have to store scads of his “personal” top-secret documents in the basement of some slowly seeping and subsiding Palm Beach resort.

Pansies.

No, its real name is Thwaites Glacier. And, just because it’s the size (ironically?) of Florida, why worry?

Well, here’s the thing.

It turns out that, although the pace of melting underneath a lot of the Thwaites ice shelf is slower than was previously feared, there are these deep cracks and “staircase” formations that turn out to be melting a lot more quickly.

Bottom line: nobody’s quite sure when the ice shelf could shatter. Maybe five years. Maybe 500.

What Happens If Thwaites Thaws?

But, worst case scenario, what happens if the whole bloody thing falls into the ocean? Well, nothing good for Florida: “The complete collapse of the Thwaites itself could lead to sea level rise of more than two feet, which would be enough to devastate coastal communities around the world. But the Thwaites is also acting like a natural dam to the surrounding ice in West Antarctica, and scientists have estimated global sea level could ultimately rise around 10 feet if the Thwaites collapsed.”

Going up two feet in a very short period of time could do a ton of damage in Florida. But 10 feet? Yowzer! Here’s what 10 feet of flooding would do in my neck of the woods. All the reddish brownish areas on the following map are underwater.

St. Pete Beach is gone except maybe as a massive artificial reef leaking toxins for decades. South St. Pete turns into a couple of shrinking islands. And a whole lot of Tampa, including Davis Island and much of Ybor City, goes bye-bye.

Relax. What Are the Chances?

Look, I really don’t expect to see anything like this in my lifetime. Still, if somebody comes to you with a great opportunity to invest in condos along St. Pete Beach, Tampa’s waterfront or, even worse, Miami, you might want to go check out the Interactive Map at Climate Central first. Then see what’s happening with the Thwaites Glacier Collaboration, which is investigating “one of the most unstable glaciers in Antarctica.” Just saying: a little due diligence couldn’t hurt. It might even help keep those real estate investments from tanking to the point where they’re way, way underwater.

Featured image: Thwaites Glacier, West Antarctica. Source: NASA/JPL, http://www.jpl.nasa.gov/news/news.php?release=2014-148. Author: NASA.

Why Does Intelligence Require Networks?

Reticula, or networks, are not necessarily intelligent, of course. In fact, most aren’t. But all forms of intelligence require networks, as far as I can tell.

Why?

There are lots of articles and books about how networks are the basis for natural and machine intelligence. But for the purpose of this post, I want to avoid those and see if I can get down to first principles based on what I already know.

Define Intelligence

To me, intelligence is the ability to understand one’s environment well enough to achieve one’s goals, either through influencing that environment or knowing when not to.

Networks Are Efficient Forms of Complexity

Let’s specify that intelligence requires complexity. I have some thoughts about why that is, but let’s just assume it for now.

If you need complexity for intelligence, then networks are a good way to go. After all, networks are groups of nodes and links, and each node has two or more links, sometimes a lot more. In this post, I’m referring to such larger networks.

In such networks, there are many possible routes to take to get from one part of the network to another. All these different routes will tend to have different lengths.

Why is that important? First, each route has its own distinct character, in terms of both the time a signal takes to traverse it and the space it covers.

Second, depending on the size and variability of the network, the patterns of that route may be as unique as a fingerprint or snowflake.

Third, this complexity is efficiently created using a relatively small amount of matter. That is, it just requires stringy links rather than large blocks of matter into which intricate pathways are carved. This efficiency is useful for animals that need to carry their intelligence around in a relatively small package like the brain.
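To make that concrete, here’s a minimal sketch in Python (the five-node graph and its links are hypothetical stand-ins): even a tiny tangle of stringy links yields many distinct routes between two nodes, each with its own length.

    # A tiny hypothetical network: each node links to several others.
    graph = {
        "a": ["b", "c", "d"],
        "b": ["a", "c", "e"],
        "c": ["a", "b", "d", "e"],
        "d": ["a", "c", "e"],
        "e": ["b", "c", "d"],
    }

    def simple_paths(graph, start, goal, path=None):
        """Yield every route from start to goal that never revisits a node."""
        path = (path or []) + [start]
        if start == goal:
            yield path
            return
        for neighbor in graph[start]:
            if neighbor not in path:
                yield from simple_paths(graph, neighbor, goal, path)

    routes = list(simple_paths(graph, "a", "e"))
    print(len(routes), "distinct routes from a to e")  # nine, for this little graph
    for route in routes:
        print(len(route) - 1, "hops:", " -> ".join(route))

Nine routes out of just five nodes; scale that combinatorial growth up to billions of neurons, and the fingerprint-like uniqueness of pathways starts to seem plausible.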

Something Must Move Within the Network

Intelligence does not only require a complex physical or virtual network of some sort, it also requires something that moves within that network. In the case of biological and machine intelligence, what moves is electricity.

I don’t know if electricity is the only possible means or agency of movement. For example, maybe one could build an intelligence made up of a complex array of tubes and nodes powered by moving water. Maybe any sufficiently complex adaptive system has the potential for intelligence if it has enough energy (of some sort) to carry out its functionality.

But electricity does seem like a very useful medium. It works at the atomic level, and it depends on the movement of electrons. Electrons are a component of all atoms, and therefore they are a natural means of transformation at the atomic scale.

In the end, though, this may be less a matter of energy than of information.

Information Is Key to Reticular Intelligence

One way or another, information is exchanged between neurons in the brain. They seem to be much more complex than simple logic gates, but the idea is similar. Some unique pathway is followed as an electrical pulse flashes through a specific part of the network. Maybe it forms some kind of value or string of chemical interactions.

Assuming they exist, I don’t know how such values would be determined, though we can imagine a lot of possible variables such as length of the pathways, strength of the pulse, shape of the neurons, etc. Regardless, I can envision that the values or interactions would be based on the unique nature of each circuit in the network.

These somehow allow us to experience “reality.” We don’t know if this reality has any objective nature. But, somehow the perception of this reality allows us to continue to operate, so these interpretations are useful to our continued existence.

Maybe what we experience is more like a computer’s GUI. We aren’t sure what is happening in the background of the computer (that is, in objective reality, assuming there is one), but we know what we see on the screen of our minds. And, although that screen may bear little resemblance to true reality, it does help us interface with it in useful ways.

Our Experience of a Red Tomato

I don’t know if my experience of the color red is the same as your experience. But however my mind interprets it, the color serves as a useful signal in nature (and in human society).

So let’s imagine that I am looking at tomatoes on a vine. I pick the red ones because I deem them ripe. Ripe ones taste best and may have the highest nutritional value. The signal of red comes from a particular pattern in the network of my brain. Other parts of the network give me things like shape and texture. All these things are stored in different parts of the network.

When I see the requisite colors and shapes on a vine, all of these network patterns light up at once, giving me a specific value that is interpreted by my brain as a ripe tomato.

Influencing My Environment

When my neural network discerns what it interprets as a ripe tomato, other parts of the network are brought into the picture. They tell me to reach out to the tomato and pick it. If it is small enough, maybe I pop it in my mouth. If it is larger, maybe I put it into a bag and bring it into the house.

These actions demonstrate some form of intelligence on my part. That is, I am able to influence my environment in order to meet my goals of pleasure and the alleviation of hunger (which helps survival).

The Variability of the Network

I think the complexity of the network is necessary because of the complexity of the environment around me. My particular path to survival is a higher (or, at least different) intelligence than that of many other beings on the planet.

That doesn’t mean that another animal could not survive with a much more limited neural network. There are many animals that make do with much less complex ones and, I assume, less intelligence. But they have found a niche in which a more limited intelligence serves them very well in the survival game.

Plants do not seem to require a neural network at all, though it is clear they still have things like memory and perhaps agency. The network of their bodies contains some type of intelligence, even if it is what we would consider a low level.

But if your main survival tactic is being more intelligent than the other beings on the planet, then a substantial neural net is required. The neural net somehow reflects a larger number of ways to influence and interpret the environment. The more complex the network, the better it is at establishing a wide range of values that can be interpreted to enhance survival.

Summing Up

There’s so much I don’t know. I need to read more of the literature on neural nets. But even there I know I’ll bump up against a great many unknowns, such as how our experience of reality–our qualia, if you like–emerges from the bumps, grinds and transformations of the “dumb” matter in our brains.

Still, this exercise has helped me refine my intuition on why intelligence is linked to networks, though there’s still a lot that I can’t explain short of referencing the magic and miraculous.

Should We Link an “AI Pause” to AI Interpretability?

To Pause or Not to Pause

You’ve probably heard about the controversial “AI pause” proposal. On March 22, more than 1,800 signatories – including Elon Musk of Tesla, cognitive scientist Gary Marcus and Apple co-founder Steve Wozniak – signed an open letter calling for a six-month pause on the development of AI systems “more powerful” than the GPT-4 AI system released by OpenAI.

Many have argued against a pause, often citing our competition with China, or the need for continuing business competition and innovation. I suspect some just want to get to the “singularity” as soon as possible because they have a quasi-religious belief in their own pending cyber-ascendance and immortality.

Meanwhile, the pro-pausers are essentially saying, “Are you folks out of your minds? China’s just a nation. We’re talking about a superbrain that, if it gets out of hand, could wipe out humanity as well as the whole biosphere!”

This is, to put it mildly, not fertile soil for consensus.

Pausing to Talk About the Pause

Nobody knows if there’s going to be pause yet, but people in the AI industry at least seem to be talking about setting better standards. Axios reported, “Prominent tech investor Ron Conway’s firm SV Angel will convene top staffers from AI companies in San Francisco … to discuss AI policy issues….The meeting shows that as AI keeps getting hotter, top companies are realizing the importance of consistent public policy and shared standards to keep use of the technology responsible. Per the source, the group will discuss responsible AI, share best practices and discuss public policy frameworks and standards.”

I don’t know if that meeting actually happened or, if it did, what transpired, but at least all this talk about pauses and possible government regulation has gotten the attention of the biggest AI players.

The Idea of an Interpretative Pause

But what would be the purpose of a pause? Is it to let government regulators catch up? To cool the jets on innovations that could soon tip the world into an era of AGI?

Too soon to tell, but columnist Ezra Klein suggests a pause with a specific purpose: to understand exactly how today’s AI systems actually work.

The truth is that today’s most powerful AIs are basically highly reticular black boxes. That is, the companies that make them know how to build these large language models by having neural networks train themselves, but the companies don’t actually know, except at a general level, how the systems do what they do.

It’s sort of like when the Chinese invented gunpowder. They learned how to make it and what it could do, but this was long before humanity had modern atomic theory and the Periodic Table of Elements, which were needed to truly understand why things go kablooey.

Some organizations can now make very smart-seeming machines, but there’s no equivalent of a Periodic Table to help them understand exactly what’s happening at a deeper level.

A Pause-Worthy Argument

In an interview on the Hard Fork podcast, Klein riffed on a government policy approach:

[O]ne thing that would slow the [AI] systems down is to insist on interpretability….[I]f you look at the Blueprint for an AI Bill of Rights that the White House released, it says things like — and I’m paraphrasing — you deserve an explanation for a decision a machine learning algorithm has made about you. Now, in order to get that, we would need interpretability. We don’t know why machine learning algorithms make the decisions or correlations or inferences or predictions that they make. We cannot see into the box. We just get like an incomprehensible series of calculations.

Now, you’ll hear from the companies like this is really hard. And I believe it is hard. I’m not sure it is impossible. From what I can tell, it does not get anywhere near the resources inside these companies of let’s scale the model. Right? The companies are hugely bought in on scaling the model, and a couple of people are working on interpretability.

And when you regulate something, it is not necessarily on the regulator to prove that it is possible to make the thing safe. It is on the producer to prove the thing they are making is safe. And that is going to mean you need to change your product roadmap and change your allocation of resources and spend some of these billions and billions of dollars trying to figure out the way to answer the public’s concerns here. And that may well slow you down, but I think that will also make a better system. And so this is my point about the pause, that instead of saying no training of a model bigger than GPT 4, it is to say no training of a model bigger than GPT 4 that cannot answer for us these set of questions.

Pause Me Now or Pause Me Later

Klein also warns about how bad regulation could get if the AI firms get AI wrong. Assuming their first big mistake wouldn’t be their last one (something that’s possible if there’s a fast-takeoff AI), then imagine what would happen if AI causes a catastrophe: “If you think the regulations will be bad now, imagine what happens when one of these systems comes out and causes, as happened with high speed algorithmic trading in 2010, a gigantic global stock market crash.”

What Is Interpretability?

But what does it actually mean to make these systems interpretable?

Interpretability is the degree to which an AI can be understood by humans without the help of a lot of extra techniques or aids. So, a model’s “interpretable” if its internal workings can be understood by humans. A linear regression model, for example, is interpretable because your average egghead can fully grasp all its components and follow its logic.

But neural networks? Much tougher. There tend to be a whole lot of hidden layers and parameters.
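For contrast, here’s a minimal sketch of the egghead-graspable case, using scikit-learn and synthetic data (the housing-style feature names are made up for illustration): a linear model’s whole decision process is a handful of readable coefficients.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))  # stand-ins for bedrooms, sqft, age
    y = 3.0 * X[:, 0] + 5.0 * X[:, 1] - 1.0 * X[:, 2] + rng.normal(scale=0.1, size=200)

    model = LinearRegression().fit(X, y)
    for name, coef in zip(["bedrooms", "sqft", "age"], model.coef_):
        print(f"{name}: {coef:+.2f}")  # the model's full "reasoning," in plain sight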

Hopelessly Hard?

So, is interpretability hopeless when it comes to today’s AIs? Depends on who you ask. There are some people and companies committed to figuring out how to make these systems more understandable.

Connor Leahy, the CEO of Conjecture, suggests that interpretability is far from hopeless. On the Machine Learning Street Talk podcast, he discusses some approaches for how to make neural nets more interpretable.

Conjecture is, in fact, dedicated to AI alignment and interpretability research, with its homepage asserting, “Powerful language models such as GPT3 cannot currently be prevented from producing undesired outputs and complete fabrications to factual questions. Because we lack a fundamental understanding of the internal mechanisms of current models, we have few guarantees on what our models might do when encountering situations outside their training data, with potentially catastrophic results on a global scale.”

How Does Interpretability Work?

So, what are some techniques that can be used to make the neural networks more interpretable?

Visualization of Network Activations

First, there’s something called visualization of network activations, which helps us see which features the neural network is focusing on at each layer. We can look at the output of each layer, which is known as a feature map. Feature maps show us which parts of the input the neural network is paying attention to and which parts are being ignored.
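Here’s a minimal sketch of one way to capture those feature maps, using PyTorch forward hooks (the two-layer CNN and the random “image” are stand-ins, not a real trained model):

    import torch
    import torch.nn as nn

    # A stand-in model: two convolutional layers with ReLUs.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    )

    feature_maps = {}

    def save_activation(name):
        def hook(module, inputs, output):
            feature_maps[name] = output.detach()  # capture this layer's output
        return hook

    for i, layer in enumerate(model):
        layer.register_forward_hook(save_activation(f"layer_{i}"))

    image = torch.randn(1, 3, 32, 32)  # a stand-in input image
    model(image)

    for name, fmap in feature_maps.items():
        print(name, tuple(fmap.shape))  # each map can be plotted channel by channel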

Feature Importance Analysis

Second, there’s feature importance analysis, which is a way of figuring out which parts of a dataset are most important in making predictions. For example, if we are trying to predict how much a house will sell for, we might use features like the number of bedrooms, the square footage, and the location of the house. Feature importance analysis helps us figure out which of these features is most important in predicting the price of the house.

There are different ways to calculate feature importance scores, but they all involve looking at how well each feature helps us make accurate predictions. Some methods involve looking at coefficients or weights assigned to each feature by the model, while others involve looking at how much the model’s accuracy changes when we remove a particular feature.

By understanding which features are most important, we can make better predictions and also identify which features we can ignore without affecting the accuracy of our model.
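As one concrete recipe, here’s a minimal sketch of permutation importance with scikit-learn, which scores each feature by how much shuffling it hurts the model’s accuracy (the housing-style features and data here are synthetic):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))  # stand-ins for bedrooms, sqft, location score
    y = 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

    model = RandomForestRegressor(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for name, score in zip(["bedrooms", "sqft", "location"], result.importances_mean):
        print(f"{name}: {score:.3f}")  # bigger score = the model leans on it harder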

Saliency Maps

Third are saliency maps, which highlight the most important parts of an image. They show which parts are most noticeable or eye-catching to us or to a computer program. To make a saliency map, we look at things like colors, brightness, and patterns in a picture. The parts of the picture that stand out the most are the ones that get highlighted on the map.

A saliency map can be used for interpretability by showing which parts of the input image activate different layers or neurons of the network. This can help to analyze what features the network learns and how it processes the data.
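One common recipe (among several) is gradient-based saliency: take the gradient of the winning class score with respect to the input pixels, and the largest gradients mark the pixels the model is most sensitive to. A minimal PyTorch sketch, with a stand-in model and image:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
    model.eval()

    image = torch.randn(1, 3, 32, 32, requires_grad=True)  # stand-in input image
    scores = model(image)
    scores[0, scores.argmax()].backward()  # backprop from the top class score

    # Per-pixel sensitivity: max absolute gradient across the color channels.
    saliency = image.grad.abs().max(dim=1).values
    print(saliency.shape)  # torch.Size([1, 32, 32]); plot as a heatmap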

Model Simplification

Model simplification is a technique used to make machine learning models easier to understand by reducing their complexity. This is done by removing unnecessary details, making the model smaller and easier to interpret. There are different ways to simplify models, such as using simpler models like decision trees instead of complex models like deep neural networks, or by reducing the number of layers and neurons in a neural network.

Simplifying models helps people better understand how the model works, but simplifying models too much can also cause problems, like making the model less accurate or introducing mistakes. So, it’s important to balance model simplification with other methods such as visualizations or explanations.
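One way to do this is a surrogate model: train a small decision tree to mimic a complex model’s predictions, then read the tree. A minimal sketch with scikit-learn and synthetic data:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))
    y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)  # synthetic labels

    complex_model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # The surrogate learns to imitate the complex model, not the raw labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, complex_model.predict(X))

    print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
    print("fidelity:", surrogate.score(X, complex_model.predict(X)))

The “fidelity” score is the thing to watch: push the tree too small and it stops faithfully describing the model it’s meant to explain, which is the accuracy trade-off mentioned above.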

Then There’s Explainability

I think of explainability as something a teacher does to help students understand a difficult concept. So, imagine a heuristic aimed at helping students understand a model’s behavior via natural language or visualizations.

It might involve using various techniques such as partial dependence plots or Local Interpretable Model-agnostic Explanations (LIME). These can be used to reveal how the inputs and outputs of an AI model are related, making the model more explainable.
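To give the flavor without leaning on any particular library, here’s a minimal hand-rolled partial dependence sketch (scikit-learn and the LIME package offer richer versions of the same idea; the data and model here are synthetic):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

    model = RandomForestRegressor(random_state=0).fit(X, y)

    # Sweep feature 0 over a grid, hold everything else at observed values,
    # and average the predictions: that curve is the partial dependence.
    grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 10)
    for value in grid:
        X_mod = X.copy()
        X_mod[:, 0] = value
        print(f"feature_0={value:+.2f} -> avg prediction {model.predict(X_mod).mean():+.3f}")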

The Need to Use Both

Interpretability is typically harder than explainability but, in practice, they’re closely related and often intertwined. Improving one can often lead to improvements in the other. Ultimately, the goal is to balance interpretability and explainability to meet the needs of the end-users and the specific application.

How to Get to Safer (maybe!) AI

No one knows how this is all going to work out at this stage. Maybe the U.S. or other governments will consider something along the lines of what Klein proposes, though my guess is that it won’t happen that way in the shorter term. Too many companies have too much money at stake and so will resist an indefinite “interpretability” pause, even if that pause is in the best interest of the world.

Moreover, the worry that “China will get there first” will keep government officials from regulating AI firms as much as they might otherwise. We couldn’t stop the nuclear arms race and we probably won’t be able to stop the AI arms race either. The best we’ve ever been able to do so far is slow things down and deescalate. Of course, the U.S. has not exactly been chummy with China lately, which probably raises the danger level for everyone.

Borrow Principles from the Food and Drug Administration

So, if we can’t follow the Klein plan, what might be more doable?

One idea is to adapt to our new problems by borrowing from existing agencies. One that comes to mind is the FDA. The United States Food and Drug Administration states that it “is responsible for protecting the public health by ensuring the safety, efficacy, and security of human and veterinary drugs, biological products, and medical devices; and by ensuring the safety of our nation’s food supply, cosmetics, and products that emit radiation.”

The principles at the heart of U.S. food and drug regulations might be boiled down to safety, efficacy, and security:

The safety principle ensures that food and drug products are safe for human consumption and don’t pose any significant health risks. This involves testing and evaluating food and drug products for potential hazards, and implementing measures to prevent contamination or other safety issues.

The efficacy principle ensures that drug products are effective in treating the conditions for which they’re intended. This involves conducting rigorous clinical trials and other studies to demonstrate the safety and efficacy of drugs before they can be approved for use.

The security principle ensures that drugs are identified and traced properly as they move through the supply chain. The FDA has issued guidance documents to help stakeholders comply with the requirements of the Drug Supply Chain Security Act (DSCSA), which aims to create a more secure and trusted drug supply chain. The agency fulfills its responsibility by ensuring the security of the food supply and by fostering development of medical products to respond to deliberate and naturally emerging public health threats.

Focus on the Safety and Security Angle

Of those three principles, efficacy will be the most easily understood when it comes to AI. We know, for example, that efficacy is not a given in light of the ability of these AIs to “hallucinate” data.

The principles of safety and security, however, are probably even more important and difficult to attain when it comes to AI. Although better interpretability might be one of the criteria to establishing safety, it probably won’t be the only one.

Security can’t be entirely separated from safety, but an emphasis on it would help the industry focus on all the nefarious ends to which AI could be used, from cyberattacks to deepfakes to autonomous weapons and more.

The Government Needs to Move More Quickly

Governments seldom move quickly, but the AI industry is now moving at Hertzian speeds so governments are going to need to do better. At least the Biden administration has said it wants stronger measures to test the safety of AI tools such as ChatGPT before they are publicly released.

Some of the concern is motivated by a rapid increase in the number of unethical and sometimes illegal incidents being driven by AI.

But how safety can be established isn’t yet known. The U.S. Commerce Department recently said it’s going to spend the next 60 days fielding opinions on the possibility of AI audits, risk assessments and other measures. “There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly,” said Assistant Commerce Secretary Alan Davidson, administrator of the National Telecommunications and Information Administration.

Well, yep. That’s one way to put it. And maybe a little more politically circumspect than the “literally everyone on Earth will die” message coming from folks like decision theorist Eliezer Yudkowsky.

PS – If you would like to submit a public comment to “AI Accountability Policy Request for Comment,” please go to this page of the Federal Register. Note the “Submit a Formal Comment” button.