Why Does Intelligence Require Networks?

Reticula, or networks, are not necessarily intelligent, of course. In fact, most aren’t. But all forms of intelligence require networks, as far as I can tell.

Why?

There are lots of articles and books about how networks are the basis for natural and machine intelligence. But for the purpose of this post, I want to avoid those and see if I can get down to first principles based on what I already know.

Define Intelligence

To me, intelligence is the ability to understand one’s environment well enough to achieve one’s goals, either through influencing that environment or knowing when not to.

Networks Are Efficient Forms of Complexity

Let’s specify that intelligence requires complexity. I have some thoughts about why that is, but let’s just assume it for now.

If you need complexity for intelligence, then networks are a good way to get it. After all, networks are groups of nodes and links. In this post, I’m referring to larger networks, in which each node has two or more links…sometimes a lot more.

In such networks, there are many possible routes to take to get from one part of the network to another. All these different routes will tend to have different lengths.

Why is that important? First, each route has its own distinct character in both time and space: how long a signal takes to traverse it and the physical path it follows.

Second, depending on the size and variability of the network, the patterns of that route may be as unique as a fingerprint or snowflake.

Third, this complexity is efficiently created using a relatively small amount of matter. That is, it just requires stringy links rather than large blocks of matter into which are carved intricate pathways. This efficiency is useful for animals designed to carry around intelligence in a relatively small package like the brain.
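To make the route idea concrete, here’s a minimal sketch in Python (the nodes and links are invented, not a model of any real brain): even a five-node toy network offers several routes between two nodes, each with its own length.

```python
# A tiny undirected network: each node maps to its linked neighbors.
# The nodes and links are invented; the point is only that links are sparse and "stringy."
links = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "E"],
    "D": ["B", "E"],
    "E": ["C", "D"],
}

def simple_paths(graph, start, goal, path=None):
    """Enumerate every route from start to goal that never revisits a node."""
    path = (path or []) + [start]
    if start == goal:
        yield path
        return
    for neighbor in graph[start]:
        if neighbor not in path:
            yield from simple_paths(graph, neighbor, goal, path)

# Even five nodes yield several routes of different lengths between A and E.
for route in simple_paths(links, "A", "E"):
    print(len(route) - 1, "links:", " -> ".join(route))
```

Scale that up to billions of nodes and the number of distinct routes becomes astronomically large, which is the kind of cheap combinatorial richness I have in mind.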

Something Must Move Within the Network

Intelligence requires not only a complex physical or virtual network of some sort but also something that moves within that network. In the case of biological and machine intelligence, what moves is electricity (in the brain, electrochemical pulses).

I don’t know if electricity is the only possible means or agency of movement. For example, maybe one could build an intelligence made up of a complex array of tubes and nodes powered by moving water. Maybe any sufficiently complex adaptive system has the potential for intelligence if it has enough energy (of some sort) to carry out its functionality.

But electricity does seem like a very useful medium. It works at the atomic level, and it depends on the movement of electrons. Electrons are a component of all atoms, and therefore they are a natural means of transformation at the atomic scale.

In the end, though, this may be less a matter of energy than of information.

Information Is Key to Reticular Intelligence

One way or another, information is exchanged between neurons in the brain. Neurons seem to be much more complex than simple logic gates, but the idea is similar. Some unique pathway is followed as an electrical pulse flashes through a specific part of the network. Maybe it forms some kind of value or string of chemical interactions.

Assuming they exist, I don’t know how such values would be determined, though we can imagine a lot of possible variables such as length of the pathways, strength of the pulse, shape of the neurons, etc. Regardless, I can envision that the values or interactions would be based on the unique nature of each circuit in the network.

These values or interactions somehow allow us to experience “reality.” We don’t know if this reality has any objective nature. But, somehow, the perception of this reality allows us to continue to operate, so these interpretations are useful to our continued existence.

Maybe what we experience is more like a GUI on a computer. We aren’t sure what is happening in the background of the computer (that is, in objective reality, assuming there is one), but we know what we see on the screen of our minds. And, although that screen may bear little resemblance to true reality, it does help us interface with it in useful ways.

Our Experience of a Red Tomato

I don’t know if my experience of the color red is the same as your experience. But however my mind interprets it, the color serves as a useful signal in nature (and in human society).

So let’s imagine that I am looking at tomatoes on a vine. I pick the red ones because I deem them ripe. Ripe ones taste best and may have the highest nutritional value. The signal of red comes from a particular pattern in the network of my brain. Other parts of the network give me things like shape and texture. All these things are stored in different parts of the network.

When I see the requisite colors and shapes on a vine, all of these network patterns light up at once, giving me a specific value that is interpreted by my brain as a ripe tomato.

Influencing My Environment

When my neural network discerns what it interprets as a ripe tomato, other parts of the network are brought into the picture. They tell me to reach out to the tomato and pick it. If it is small enough, maybe I pop it in my mouth. If it is larger, maybe I put it into a bag and bring it into the house.

These actions demonstrate some form of intelligence on my part. That is, I am able to influence my environment in order to meet my goals of pleasure and alleviating hunger (which helps survival).

The Variability of the Network

I think the complexity of the network is necessary because of the complexity of the environment around me. My particular path to survival relies on a higher (or, at least, different) intelligence than that of many other beings on the planet.

That doesn’t mean that another animal could not survive with a much more limited neural network. There are many animals that make do with much less complex ones and, I assume, less intelligence. But they have found a niche in which a more limited intelligence serves them very well in the survival game.

Plants do not seem to require a neural network at all, though it is clear they still have things like memory and perhaps agency. The network of their bodies contains some type of intelligence, even if it is what we would consider a low level.

But if your main survival tactic is being more intelligent than the other beings on the planet, then a substantial neural net is required. The neural net somehow reflects a larger number of ways to influence and interpret the environment. The more complex the network, the better it is at establishing a wide range of values that can be interpreted to enhance survival.

Summing Up

There’s so much I don’t know. I need to read more of the literature on neural nets. But even there I know I’ll bump up against a great many unknowns, such as how our experience of reality–our qualia, if you like–emerges from the bumps, grinds and transformations of the “dumb” matter in our brains.

Still, this exercise has helped me refine my intuition on why intelligence is linked to networks, though there’s still a lot that I can’t explain short of referencing the magic and miraculous.

Your Mind Is a Matrix

What Are You?

To a large extent, you are the culmination of activity in your neocortex. That’s the part of your brain that drives sensory perception, logic, spatial reasoning, and language, among other things. Without it, you’re pretty much an inarticulate lizard person (which I’m afraid is my disposition all too often in the mornings as I read recent newspaper headlines). Your neocortex is a complex, highly networked place. In short, your mind is a matrix.

Or, at least, that’s how neuroscientist Jeff Hawkins conceives of the neocortex: as a matrix of thousands of smaller brains. Amid this reticulum, each minibrain (my word, not his) stores many different models of the world. Somewhere in there, there’s a mental model for your car, your house, your pets, your significant other, whatever politician you love to hate, that sweaty dude who walks that barky dog in the neighborhood every morning, and, well, everything else in your personal universe.

The minibrains are cortical columns, each quite intelligent on its own. Hawkins writes,

A cortical column occupies about one square millimeter. It extends through the entire 2.5 mm thickness, giving it a volume of 2.5 cubic millimeters. By this definition, there are roughly 150,000 cortical columns stacked side by side in a human neocortex. You can imagine a cortical column like a little piece of thin spaghetti. A human neocortex is like 150,000 short pieces of spaghetti stacked vertically next to each other.
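Taking the quote’s numbers at face value, the arithmetic it implies (just a restatement, not an extra claim about the brain) works out to:

$$150{,}000 \times 1\ \text{mm}^2 \approx 1{,}500\ \text{cm}^2 \ \text{of cortical sheet}, \qquad 150{,}000 \times 2.5\ \text{mm}^3 \approx 375\ \text{cm}^3 \ \text{of volume}.$$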

Have Spaghetti, Will Reference

Okay, so you are largely the sum total of lots of cortical columns. But what does a cortical column actually do?

One of its primary purposes is to store and activate reference frames: oodles and oodles of reference frames.

A reference frame is where we access the information about what an object (or even an abstract concept) is and where it’s located in the world. For example, you have a reference frame for a coffee cup in various cortical columns. You know such a cup when you see it, and feel it, and sip from it. You also know where it is and how it moves. When you turn the cup upside down (hopefully sans coffee), the reference frame in your head also moves.

Reference frames have essential virtues such as:

  • allowing the brain to learn the structure and components of an object
  • allowing the brain to mentally manipulate the object as a whole (which is why you can envision an upside down coffee cup)
  • allowing your brain to plan and create movements, even conceptual ones

Thanks to reference frames, just one cortical column can “learn the three-dimensional shape of objects by sensing and moving and sensing and moving.” As you walk through a strange house, for example, you are mentally building a model of the house using reference frames. This includes your judgments about it. (“Hate that mushy chair in the living room, love that painting in the study, what were they thinking with that creepy bureau in the bedroom!?”)
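Here’s a toy rendering of that idea in Python. It is nothing like Hawkins’s actual model, just an illustration of a reference frame as features stored at locations relative to the object itself, which can then be transformed as a whole (the “cup” coordinates are made up):

```python
import numpy as np

class ReferenceFrame:
    """A toy reference frame: features stored at positions relative to the object's own origin."""

    def __init__(self, features):
        # features: dict mapping a feature name to its (x, y, z) position in object coordinates
        self.features = {name: np.array(pos, dtype=float) for name, pos in features.items()}

    def transformed(self, rotation, translation):
        """Return feature locations after moving the whole object (rotate, then shift)."""
        return {name: rotation @ pos + translation for name, pos in self.features.items()}

# A crude "coffee cup": rim above the base, handle off to one side (units are arbitrary).
cup = ReferenceFrame({"base": (0, 0, 0), "rim": (0, 0, 10), "handle": (5, 0, 5)})

# Flip the cup upside down: rotate 180 degrees about the x-axis, then set it back on the table.
flip = np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]])
print(cup.transformed(flip, translation=np.array([0, 0, 10])))
# The rim is now at the bottom and the base at the top: the whole model moved together.
```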

I Think, Therefore I Predict

You’re a futurist. We all are. Because we’re subconsciously predicting stuff every moment of our conscious day.

Let’s say, for example, that you pick up your cup of coffee without even thinking about it. Your brain predicts the feel of the familiar, smooth, warm ceramic. That’s what you get most mornings. If instead your brain gets something different, it registers surprise and draws your attention to the cup.

Maybe it’s a minor surprise, like a small crack in the cup. Maybe it’s a bigger one, as when one of your fingers unexpectedly brushes a cockroach that then quickly crawls up your arm. Argh!

Either way, you didn’t get what you subconsciously predicted based on your reference frame. These tiny predictions happen all the time. Your whole life is spent predicting what comes next, even if you’re not fully aware of it. If something happens that doesn’t match your mental model, your brain gets busy trying to figure out what went wrong with your expectation/prediction and what to do next.
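Here’s a toy sketch of that predict-compare-attend loop (the “cup model” and its numbers are invented for illustration, not anything from Hawkins):

```python
# A toy version of the predict-compare-attend loop described above.
cup_model = {"temperature_c": 60.0, "texture": "smooth"}   # the stored expectation

def grasp(sensed_temperature, sensed_texture, tolerance=10.0):
    surprises = []
    if abs(sensed_temperature - cup_model["temperature_c"]) > tolerance:
        surprises.append("temperature not as predicted")
    if sensed_texture != cup_model["texture"]:
        surprises.append("texture not as predicted")
    if surprises:
        return "Attention! " + "; ".join(surprises)
    return "Prediction confirmed; you never consciously notice the cup."

print(grasp(58.0, "smooth"))                  # the usual morning: no surprise
print(grasp(59.0, "cracked"))                 # a small crack draws your attention
print(grasp(59.0, "something is crawling"))   # ...and then there's the cockroach
```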

(“Roach! Need to swat it! Where did I put that crappy news magazine? Come on, cortical-column-based reference frames, help me find it! Fast!”)

You Are Your Reticulum

In short, most of your brain (the neocortex is about 70% of its total volume) is a highly complex reticulum made up of cortical columns, which themselves are made up of dense networks of neurons that are in a constant state of anticipation, even when you’re feeling pretty relaxed.

Your consciousness doesn’t exist in any one place. Your singular identity is, rather, a clever pastiche fabricated by that squishy matrix in your noggin.

So, why does it feel as if you’re you, the real mental “decider” (as George W. Bush’s neocortex once put it)? Hawkins thinks that all your various cortical columns are essentially “voting” about what you should perceive and how you should act. When you can’t make up your mind, it’s because the vote is too close to call.
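Here’s a minimal sketch of that voting idea, assuming a simple majority rule with a winning margin, which is my own simplification rather than Hawkins’s actual mechanism:

```python
from collections import Counter

def consensus(column_votes, margin=2):
    """Toy 'voting' among cortical columns: a clear winner, or a too-close-to-call tie."""
    tally = Counter(column_votes)
    (top, top_count), *rest = tally.most_common()
    runner_up = rest[0][1] if rest else 0
    if top_count - runner_up >= margin:
        return f"Perceive: {top}"
    return "Can't make up your mind (the vote is too close to call)"

print(consensus(["coffee cup"] * 7 + ["soup mug"] * 2))   # decisive
print(consensus(["coffee cup"] * 5 + ["soup mug"] * 4))   # indecisive
```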

So, you’re not just a matrix. You’re a democracy! Which is great. Even if our increasingly shaky U.S. government descends into tyranny, at least our brains will keep voting.

Viva la reticular révolution!

Featured image from Henry Gray (1918), Anatomy of the Human Body (Gray’s Anatomy), Plate 754, via Bartleby.com.

The Singularity Is Pretty Damned Close…Isn’t It?

What is the singularity and just how close is it?

The short answers are “it depends who you ask” and “nobody knows.” The longer answers are, well…you’ll see.

Singyuwhatnow?

Wikipedia provides a good basic definition: “The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth will become radically faster and uncontrollable, resulting in unforeseeable changes to human civilization.”

The technological growth in question usually refers to artificial intelligence (AI). The idea is that an AI capable of improving itself quickly goes through a series of cycles in which it gets smarter and smarter at exponential rates. This leads to a superintelligence that throws the world into an impossible-to-predict future.
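In the simplest cartoon version (my gloss, not a formal model), each self-improvement cycle multiplies capability by some factor r greater than 1:

$$I_n = r \cdot I_{n-1} = r^{\,n} I_0,$$

so after n cycles capability has grown exponentially, and if each cycle also takes less time than the last, the curve steepens further still.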

Whether this sounds awesome or awful largely depends on your view of what a superintelligence would bring about, something that no one really knows.

The impossible-to-predict nature is an aspect of why, in fact, it’s called a singularity, a term that originates in mathematics and physics. In math, singularities pop up when the numbers stop making sense, as when the answer to an equation turns out to be infinity. The term is also associated with phenomena such as black holes, where our understanding of traditional physics breaks down. So the term, as applied to technology, suggests a time beyond which the world stops making sense (to us) and so becomes impossible to forecast.
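The textbook example is a function that blows up at a single point:

$$f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = \infty,$$

so x = 0 is a singularity: the value there is undefined, and nearby values grow without bound.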

How Many Flavors Does the Singularity Come In?

From Wikipedia: major evolutionary transitions in information processing

Is a runaway recursively intelligent AI the only path to a singularity? Not if you count runaway recursively intelligent people who hook their little monkey brains up to some huge honking artificial neocortices in the cloud.

Indeed, it’s the human/AI interface and integration scenario that folks like inventor-author-futurist Ray Kurzweil seem to be banking on. To him, from what I understand (I haven’t read his newest book), that’s when the true tech singularity kicks in. At that point, humans essentially become supersmart, immortal(ish) cyborg gods.

Yay?

But there are other possible versions as well. There’s the one where we hook up our little monkey brains into one huge, networked brain to become the King Kong of superintelligences. Or the one where we grow a supersized neocortex in an underground vat the size of the Chesapeake Bay. (A Robot Chicken nightmare made more imaginable by the recent news that we just got a cluster of brain cells to play Pong in a lab–no, really.)

Singularity: Inane or Inevitable?

The first thing to say is that maybe the notion is kooky and misguided, the pipedream of geeks yearning to become cosmic comic book characters. (In fact, the singularity is sometimes called, with varying degrees of sarcasm, the Rapture for nerds.)

I’m tempted to join in the ridicule of the preposterous idea. Except for one thing: AI and other tech keeps proving the naysayers wrong. AI will never beat the best chess players. Wrong. Okay, but it can’t dominate something as fuzzy as Jeopardy. Wrong. Surely it can’t master the most complex and challenging of all human games, Go. Yawn, wrong again.

After a while,  anyone who bets against AI starts looking like a chump.

Well, games are for kids anyway. AI can’t do something as slippery as translate languages or as profound as unravel the many mysteries of protein folding.  Well, actually…

But it can’t be artistic…can it? (“I don’t do drugs. I am drugs” quips DALL-E).

Getting Turing Testy

There’s at least one pursuit that AI has yet to master: the gentle art of conversation. That may be the truest assessment of human-level intelligence. At least, that’s the premise underlying the Turing test.

The test assumes you have a questioner reading a computer screen (or the equivalent). The questioner has two conversations via screen and keyboard: one with a computer, the other with another person. If the questioner can’t figure out which conversation partner is the computer, then the computer passes the test because it can’t be distinguished from a human being.

Of course, this leaves us with four (at least!) big questions.

First, when will a machine finally pass that final exam?

Second, what does it mean if and when a machine does? Is it truly intelligent? How about conscious?

Third, if the answer to those questions seems to be yes, what’s next? Does it get a driver’s license? A FOX News slot? An OKCupid account?

Fourth, will such a computer spark the (dun dun dun) singularity?

The Iffy Question of When the Singularity Arrives

In a recent podcast interview, Kurzweil predicted that some soon-to-be-famous digital mind will pass the Turing Test in 2029.

“2029?” I thought. “As in just 7-and-soon-to-be-6-years-away 2029?”

Kurzweil claims he’s been predicting that same year for a long time, so perhaps I read about it back in 2005 when his book The Singularity Is Near came out (my copy is now lost somewhere in the hustle and bustle of my bookshelves). But back then, of course, it was a quarter of a century away. Now, well, it seems damn near imminent.

Of course, Kurzweil may well turn out to be wrong. As much as he loves to base his predictions on the mathematics of exponentials, he can get specific dates wrong. For example, as I wrote in a previous post, he’ll wind up being wrong about the year solar power becomes pervasive (though he may well turn out to be right about the overall trend).

So maybe a computer won’t pass a full-blown Turing test in 2029. Perhaps it’ll be in the 2030s or 2040s. That would be close enough, in my book. Indeed, most experts believe it’s just a matter of time. One survey issued at the Joint Multi-Conference on Human-Level Artificial Intelligence found that just 2% of participants predicted that an artificial general intelligence (or AGI, meaning that the machine thinks at least as well as a human being) would never occur. Of course, that’s not exactly an unbiased survey cohort, is it?

Anyhow, let’s say the predicted timeframe when the Turing test is passed is generally correct. Why doesn’t Kurzweil set the date of the singularity on the date that the Turing test is passed (or the date that a human-level AI first emerges)? After all, at that point, the AI celeb could potentially code itself so it can quickly become smarter and smarter, as per the traditional singularity scenario.

But nope. Kurzweil is setting his sights on 2045, when we fully become the supercyborgs previously described.

What Could Go Wrong?

So, Armageddon or Rapture? Take your pick.

What’s interesting to my own little super-duper-unsuper brain is that folks seem more concerned about computers leaving us in the intellectual dust than us becoming ultra-brains ourselves. I mean, sure, our digital super-brain friends may decide to cancel humanity for reals. But they probably won’t carry around the baggage of our primeval, reptilian and selfish fear-fuck-kill-hate brains–or, what Jeff Hawkins calls our “old brain.”

In his book A Thousand Brains, Hawkins writes about the ongoing frenemy-ish relationship between our more rational “new brain” (the neocortex) and the far more selfishly emotional though conveniently compacted “old brain” (just 30% of our overall brain).

Basically, he chalks up the risk of human extinction (via nuclear war, for example) to old-brain-driven crappola empowered by tech built via the smart-pantsy new brain. For example, envision a pridefully pissed off Putin nuking the world with amazing missiles built by egghead engineers. And all because he’s as compelled by his “old brain” as a tantrum-throwing three-year-old after a puppy eats his cookie.

Now envision a world packed with superintelligent primate gods still (partly) ruled by their toddler old-brain instincts. Yeah, sounds a tad dangerous to me, too.

The Chances of No Chance

Speaking of Hawkins, he doesn’t buy the whole singularity scene. First, he argues that we’re not as close to creating truly intelligent machines as some believe. Today’s most impressive AIs tend to rely on deep learning, and Hawkins believes this is not the right path to true AGI. He writes,

Deep learning networks work well, but not because they solved the knowledge representation problem. They work well because they avoided it completely, relying on statistics and lots of data instead….they don’t possess knowledge and, therefore, are not on the path to having the ability of a five-year-old child.

Second, even when we finally build AGIs (and he thinks we certainly will if he has anything to say about it), they won’t be driven by the same old-brain compulsions as we are. They’ll be more rational because their architecture will be based on the human neocortex. Therefore, they won’t have the same drive to dominate and control because they will not have our nutball-but-gene-spreading monkey-brain impulses.

Third, Hawkins doesn’t believe that an exponential increase in intelligence will suddenly allow such AGIs to dominate. He believes a true AGI will be characterized by a mind made up of “thousands of small models of the world, where each model uses reference frames to store knowledge and create behaviors.” (That makes more sense if you read his book, A Thousand Brains: A New Theory of Intelligence). He goes on:

Adding this ingredient [meaning the thousands of reference frames] to machines does not impart any immediate capabilities. It only provides a substrate for learning, endowing machines with the ability to learn a model of the world and thus acquire knowledge and skills. On a kitchen stovetop you can turn a knob to up the heat. There isn’t an equivalent knob to “up the knowledge” of a machine.

An AGI won’t become a superintelligence just by virtue of writing better and better code for itself in the span of a few hours. It can’t automatically think itself into a superpower. It still needs to learn via experiments and experience, which takes time and the cooperation of human scientists.

Fourth, Hawkins thinks it will be difficult if not impossible to connect the human neocortex to mighty computing machines in the way that Kurzweil and others envision. Even if we can do it someday, that day is probably a long way off.

So, no, the singularity is not near, he seems to be arguing. But a true AGI may, in fact, become a reality sometime in the next decade or so–if engineers will only build an AGI based on his theory of intelligence.

So, What’s Really Gonna Happen?

Nobody knows who’s right or wrong at this stage. Maybe Kurzweil, maybe Hawkins, maybe neither or some combination of both. Here’s my own best guess for now.

Via deep learning approaches, computer engineers are going to get closer and closer to a computer capable of passing the Turing test, but by 2029 it won’t be able to fool an educated interrogator who is well versed in AI.

Or, if a deep-learning-based machine does pass the Turing test before the end of this decade, many people will argue that it only displays a façade of intelligence, perhaps citing the famous Chinese-room argument (which is a philosophical can of worms that I won’t get into here).

That said, eventually we will get to a Turing-test-passing machine that convinces even most of the doubters that it’s truly intelligent (and perhaps even conscious, an even higher hurdle to clear). That machine’s design will probably hew more closely to the dynamics of the human brain than do the (still quite impressive) neural networks of today.

Will this lead to a singularity? Well, maybe, though I’m convinced enough by the arguments of Hawkins to believe that it won’t literally happen overnight.

How about the super-cyborg-head-in-the-cloud-computer kind of singularity? Well, maybe that’ll happen someday, though it’s currently hard to see how we’re going to work out a seamless, high-bandwidth brain/supercomputer interface anytime soon. It’s going to take time to get it right, if we ever do. I guess figuring all those details out will be the first homework we assign to our AGI friends. That is, hopefully friends.

But here’s the thing. If we ever do figure out the interface, it seems possible that we’ll be “storing” a whole lot of our artificial neocortex reference frames (let’s call them ANREFs) in the cloud. If that’s true, then we may be able to swap ANREFs with our friends and neighbors, which might mean we can quickly share skills I-know-Kung-Fu style. (Cool, right?)

It’s also possible that the reticulum of all those acquired ANREFs will outlive our mortal bodies (assuming they stay mortal), providing a kind of immortality to a significant hunk of our expanded brains. Spooky, yeah? Who owns our ANREFs once the original brain is gone? Now that would be the IP battle of all IP battles!

See how weird things can quickly get once you start to think through singularity stuff? It’s kind of addictive, like eating future-flavored pistachios.

Anyway, here’s one prediction I’m pretty certain of: it’s gonna be a frigging mess!

Humanity will not be done with its species-defining conflicts, intrigues, and massively stupid escapades as it moves toward superintelligence. Maybe getting smarter–or just having smarter machines–will ultimately make us wiser, but there’s going to be plenty of heartache, cruelty, bigotry, and turmoil as we work out those singularity kinks.

I probably won’t live to see the weirdest stuff, but that’s okay. It’s fun just to think about, and, for better and for worse, we already live in interesting times.

Addendum: Since I wrote this original piece, things have been moving so quickly in the world of AI that I revisited the topic in The Singularity Just Got Nearer…Again…And How!

Featured image by Adindva1: Demonstration of the technology "Brain-Computer Interface." Management of the plastic arm with the help of thought. The frame is made on the set of the film "Brain: The Second Universe."

Dog Is Doog: The [Possible] Upsides to the Downsides of My Dyslexia

One of the reasons I’m interested in cognitive science and different ways of perceiving the world is my dyslexia. So I was intrigued to read that there are potential upsides to the condition.

I was diagnosed as dyslexic well before most people had heard of the condition. I was lucky. My father was a doctor and my mother a psychology graduate back in the days when fewer women got college degrees.

I was a poor student in the first and second grades, having a hard time reading and writing. Of course, being an August baby probably didn’t help. Kids born in late summer tend to start school younger than their classmates, which means they are both cognitively and physically behind most other kids at a time when even a few months of extra development can mean a lot. Such kids tend to get worse grades and wind up with less confidence in their ability to learn.

Iced Tea and Phonics

But I also had signs of learning disabilities. For example, although “mirror writing” isn’t a definite sign of dyslexia at young ages, it can be one symptom. And it was certainly one of my specialties. I wouldn’t just get certain letters backwards, such as d’s and b’s; I’d write whole words and phrases backwards, including, of course, my name.

I’m sure there were many other signs as well, enough to convince my mother to seek specialist help since most teachers had never heard of the condition. In fact, my mother tried to educate my second-grade teacher on the topic, though Mrs. Decker was at first skeptical such a condition existed.

The long and short of it, though, was that I was taken to a special teacher in Buffalo, NY. I knew her only as Mrs. Clark, though I want to say her name was Mary Clark (I hope I’m not conflating her name with that of the novelist Mary Higgins Clark).

She was kind and charming, as I recall. And very professional-looking, her hair pinned up in a blondish, maybe grayish bun. But what I remember best is the iced tea she served, along with cookies. The glassware was crinkly and dark green and my hands often wet with condensation.

That Ole Phonics Magic

Once her tests confirmed I was a bona fide dyslexic, she set me to doing booklet after booklet of reading and writing exercises. Much of what she taught me, I believe, was phonics. I remember sounding out word after word for her. Keep in mind that was before the “Hooked on Phonics” craze began and “phonics versus whole language” battles were so savagely waged.

She must have used other strategies as well. I seem to remember doing a lot of pencil work, so I assume she was conditioning my muscle memories as well as honing my perceptions. I have very fond memories of these lessons, which I’m sure is a testament to her patience, care and personality as well as her pedagogy. Mrs. Clark transformed my life.

Disability, Capability or Maybe a Bit of Both

After my sessions with Mrs. Clark, I went from a very poor student to quite a good one, at least within the not-so-rarified confines of public elementary school. At 8 years old, I started reading books of all kinds, though especially fiction, and have never really stopped since then.

I’m still a dyslexic, of course. Can I blame it for my lousy sense of direction? My absent-mindedness? Or an initial mental sluggishness when picking up brand-new skills?

Maybe. That’d certainly be convenient.

But maybe it’s more than just a handy excuse. Maybe it’s a backwards superpower. Or, at least, a cognitive distinction that has upsides as well as downsides.

A recent study by Cambridge University researchers Dr. Helen Taylor and Dr. Martin Vestergaard indicates that dyslexic brains play a useful role in human evolution because they are, well, different. Indeed, some heavy hitters have reportedly played for the dyslexic team, including Leonardo da Vinci, Albert Einstein, Pablo Picasso, Stephen Hawking, Sir Richard Branson and Steve Jobs.

(To be honest, I’m always a bit skeptical of such lists, especially when they apply to people living as far back as the Renaissance, but that’s the historical scuttlebutt).

Born Explorers

Dr. Taylor, who studies cognition and evolution, states, “In many other fields of research it is understood that adaptive systems – be they organizations, the brain or a beehive – need to achieve a balance between the extent to which they explore and exploit in order to adapt and survive.”

So, basically, as I understand it, the theory is that dyslexics have a tough time “acquiring automaticity.” That is, when compared to non-dyslexics, they are not-so-hot at procedural learning. This can make it harder for some of them to learn, among other things, how to read and write.

The good news about such learning difficulties is that dyslexics become more conscious (or, in my case, maybe just self-conscious) of whatever processes they’re trying to master. This turns out to be a pain in the butt in the short term but a potential advantage in the longer term. Taylor states, “The upside is that a skill or process can still be improved and exploration can continue.”

This helps dyslexics excel as explorers and creative types, even if they pay a societal price. Taylor notes, “It is important to emphasize people with dyslexia do still face a lot of difficulties, but the difficulties exist because of the environment and an emphasis on rote learning and reading and writing. [Instead, we could] nurture ‘explorative learning’ – learning through discovery, invention, creativity, etc. which would work more to their strengths.”

Nice to Finally Know

Over the years, I’ve learned to take any research findings with a grain of salt. I read one study on how coffee causes stress and high cholesterol. The next one indicates it’s good for your liver and heart. The next one…well, we’ll see.

The bottom line is that Taylor and Vestergaard will not have the final say on the pluses and minuses of the dyslexic brain.  

Nonetheless, it’s nice for dyslexics to finally hear that their learning disabilities are also learning capabilities. And, it’s fun to envision us as bunches of unconventional but adaptive clusters of neurons buzzing usefully about in the vibrantly bizarre hive mind known as humanity.

Featured image from Totesquatre: “la dislexia” (dyslexia). Wikimedia Commons.

What You See Ain’t What You Get

Donald Hoffman’s book The Case Against Reality is nicely summed up by the subtitle: Why evolution hid the truth from our eyes.

That is, he argues that the world we see (and hear, smell, touch and taste) every day is a kind of illusion. And not just a minor illusion but a major, systemic one. The kind of illusion you get in The Matrix, except far, far stranger.

In the movie The Matrix, after all, the Machines have built a virtual reality in which to house humanity’s minds while their bodies are used as batteries (yeah, that always sounded lame to me, too). But the virtual reality is basically identical to the reality of humanity’s past.

The world that we experience is much weirder, according to Hoffman, a cognitive psychologist, because it is not a simulation of reality but, rather, a façade that bears little if any resemblance to the underlying reality of our universe.

Fitness Stomps Truth into Extinction

Here’s Hoffman’s argument in a nutshell: we have all been shaped by evolution to reproduce rather than see the truth of things.

That is, we were designed by nature to only sense what we need to sense in order to survive long enough to produce offspring. If nature needs to lie to us to get the job done, that’s just fine by nature.

“Our minds evolved by natural selection to solve problems that were life-and-death matters to our ancestors, not to commune with correctness,” sums up the cognitive scientist Steven Pinker.

Hoffman takes that basic observation further by producing the Fitness-Beats-Truth Theorem, or FBT. Consider the following fable to understand his theorem better.

Frankie and Terry Wash Up on Evolution Island

Once upon a time, there were a couple of geckos happily sunning themselves on a log on a tropical beach. Suddenly, a huge rogue wave enveloped the beach and floated the poor geckos out to sea. Things looked grim for our heroes, whom we’ll call Fit Frankie and Truthful Terry.

But then the log floated up on what seemed to be an island, an exceedingly weird island where nothing was familiar. In fact, the whole place looked gray and black. I don’t mean that the trees, bugs, bushes and rocks were gray and black. I mean that there were no recognizable land formations at all. Just gray blobs and black blobs.

This freaked out both of our gecko sojourners but, hey, they were alive even if they had slipped into some bizarre gray-and-black pocket universe that we’ll call Evolution Island.

The Elixir of Life

Aside from the colors, though, there was another strange thing about the island: it turns out there was something essential to life hidden amid the gray and black regions. Think of this essence as the “elixir of life.” If the geckos got enough of it, they could live on. If they didn’t get enough–or in fact got too much–they could die. (Note: this is kind of like oxygen is to us: not enough, we suffocate; too much, we get oxygen poisoning and kick the bucket.)

Sounds stressful, right? How were they supposed to know where the elixir was? And how could they know when they were getting too much or not enough of it?

Although they were both in a tough situation, it turns out that one of our gecko heroes had what sounds like an advantage. Truthful Terry saw things as they really were: that is, Terry saw gray where there was less elixir and black where there was more. Lucky Terry.

Fit Frankie, however, didn’t see Evolution Island as it really was. Instead, Frankie’s eyes somehow saw the black and gray shades differently. Frankie literally “saw” fitness (which Hoffman refers to as “fitness points”).

Here’s how it worked: in the places where Frankie could get just the right amount of elixir (that is, not too much, not too little), Frankie saw black. In the other places, Frankie saw gray.

Just to be clear here, what Frankie saw was an illusion. Terry saw the true world, while Frankie saw a kind of fiction.

But here’s the thing: it was a very useful fiction.

Frankie’s Descendants Take Over Evolution Island

In short order, Fit Frankie realized she felt a lot better hanging out in the black zones and so consistently gravitated toward those parts of Evolution Island. Truthful Terry, on the other hand, saw the true colors of the island but had a harder time thriving, constantly trying to find the right balance between the gray and black zones so as to get just the right amounts of elixir. Terry saw truth while Frankie saw “fitness points.”

Over time, Fit Frankie thrived and had many children (being one of those parthenogenetic species of geckos). Truthful Terry, however, figured out how to survive but just barely, being sometimes sick from too much or too little elixir. Although she could see the world as it really was, she had few offspring, and the few that she had just couldn’t compete with the many offspring of Fit Frankie.

So, it turned out that seeing only the truth was actually a curse for Terry, whereas seeing a useful lie was a blessing for Frankie. Frankie’s “fitness vision” beat out Terry’s “truth vision,” and today Frankie’s offspring thrive on Evolution Island whereas Terry’s offspring went extinct many generations ago.

On Winning the Evolutionary Game

This fable, as you might have guessed, was inspired by an evolutionary game created by Hoffman and his colleagues. Game theory, of course, is a branch of applied mathematics that provides tools for analyzing situations in which players make decisions that are interdependent. The goal of game theory is to better understand the outcomes of different interactions among the “players.” (Frankie and Terry were the players in the example above.)

Evolutionary game theory is one application of game theory that is used to model evolving populations in biology. Wikipedia reports, “It defines a framework of contests, strategies, and analytics into which Darwinian competition can be modelled. It originated in 1973 with John Maynard Smith and George R. Price’s formalization of contests, analyzed as strategies, and the mathematical criteria that can be used to predict the results of competing strategies.”

Hoffman and his colleagues created multiple evolutionary games in order to test their Fitness-Beats-Truth Theorem. They also created a separate but complementary mathematical model. Both initiatives produced the same result: life favors fitness over truth. That is, creatures that view the world in terms of “fitness points” inevitably trounce creatures that see the world as it truly is. Useful lies beat plain truth every time.
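To get a feel for how such a game might run, here’s a stripped-down sketch in the spirit of Hoffman’s simulations, built on assumptions of my own: perception is limited to two “colors,” the truth-seeing strategy’s colors track the raw quantity of a resource, and the fitness-seeing strategy’s colors track a payoff that peaks at a middle dose. It is not Hoffman’s code or his exact setup.

```python
import math
import random

random.seed(0)

def payoff(x):
    """Fitness from consuming resource level x: best near 0.5, poor when too little or too much."""
    return math.exp(-((x - 0.5) ** 2) / (2 * 0.15 ** 2))

# Two perceptual strategies, each limited to two "colors" (as in the gecko fable):
def truth_color(x):
    # Sees the true quantity: "black" means more resource, "gray" means less.
    return "black" if x > 0.5 else "gray"

def fitness_color(x):
    # Sees only utility: "black" means roughly the right dose, "gray" means not.
    return "black" if payoff(x) > 0.6 else "gray"

def choose(color_of, territories):
    """Prefer a black-looking territory; otherwise pick at random."""
    blacks = [t for t in territories if color_of(t) == "black"]
    return random.choice(blacks or territories)

def average_payoff(color_of, rounds=10_000):
    total = 0.0
    for _ in range(rounds):
        territories = [random.random() for _ in range(2)]
        total += payoff(choose(color_of, territories))
    return total / rounds

truth_score = average_payoff(truth_color)
fitness_score = average_payoff(fitness_color)
print(f"Truthful Terry's average payoff: {truth_score:.3f}")
print(f"Fit Frankie's average payoff:    {fitness_score:.3f}")

# Crude replicator update: population shares grow in proportion to payoff compounded over
# generations, so even a modest per-round edge lets the fitness-seers crowd out the truth-seers.
generations = 50
w_truth, w_fitness = truth_score ** generations, fitness_score ** generations
print(f"Share of the population seeing 'fitness' after {generations} generations: "
      f"{w_fitness / (w_truth + w_fitness):.3f}")
```

On a typical run, the fitness-seeing strategy earns a noticeably higher average payoff, and once that edge is compounded over generations, the truth-seeing lineage all but vanishes, which is the Fitness-Beats-Truth result in miniature.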

The moral of the story?

The world we see around us is not based on a true representation of reality but is, rather, a useful lie that allowed our ancestors to stay alive long enough to reproduce. In short, you and I, dear readers, are the offspring who have inherited our own version of Evolution Island. Viva la illusion!

Nature Is a Big Fat Liar, Just Like Your Smartphone

Your computers lie to you. Intentionally. For good reason.

Let’s say you want to write a document on a computer. You almost certainly start with some sort of icon. On my laptop, I have that Microsoft Word icon with the big blue W in it. I click it to open the application. On my smartphone, I tend to use Google Docs instead, clicking on a little white circle with a blue rectangle on it.

In either case, I “click on” the icon and it opens up a bigger rectangular document into which I can type words.

It’s useful, right? Absolutely. But it’s also a kind of fictional overlay. Hoffman explains:

The blue icon does not deliberately misrepresent the true nature of the file. Representing that nature is not its aim. Its job, instead, is to hide that nature–to spare you tiresome details on transistors, voltages, magnetic fields, logic gates, binary codes, and gigabytes of software. If you had to inspect that complexity, and forge your email out of bits and bytes, you might opt instead for snail mail.
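The desktop metaphor is itself a software idea: expose only what’s useful and hide the machinery. Here’s a minimal sketch of that principle in Python (my own illustration of the analogy, not anything from Hoffman):

```python
class Document:
    """The 'icon' layer: a small, friendly interface over hidden machinery."""

    def __init__(self, raw_bytes: bytes):
        self._raw = raw_bytes          # the hidden layer: bytes, encodings, storage details

    def read(self) -> str:
        return self._raw.decode("utf-8")

    def append(self, text: str) -> None:
        self._raw += text.encode("utf-8")

doc = Document(b"Dear reader,")
doc.append(" the interface hides the bytes from you.")
print(doc.read())   # you deal in words; the bytes, voltages, and logic gates stay out of sight
```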

Evolution does the same thing for us, posits Hoffman. It provides us with a kind of overlay that helps us survive and reproduce. He calls it the interface theory of perception, or ITP.

Interface Theory of Perception

For example, only in recent history have humans discovered the whole spectrum of electromagnetic radiation, aka light. It turns out we can see only a tiny portion of that spectrum–about 0.0035%–the range we now call visible light. We evolved to be able to see only the range that best helped us survive, and even then we didn’t see electromagnetic radiation itself.

Instead, we see colors, which probably don’t even exist outside of our human perceptions. Colors are part of nature’s hack for helping us survive. We can see when fruit is ripe, for example, via this color hack. So this interface element was laid over our perceptions to allow us to gain the calories and nutrients we need to thrive and procreate.

Hoffman sums up as follows:

The [Fitness-Beats-Truth] Theorem tells us that winning genes do not code for perceiving truth. [Interface theory of perception] tells us that they code instead for an interface that hides the truth about objective reality and provides us with icons–physical objects with colors, textures, shapes, motions, and smells–that allow us to manipulate that unseen reality in just the ways we need to survive and reproduce. Physical objects in spacetime are simply our icons in our desktop.

How Deep Does the Interface Go?

Nobody knows how deep the interface goes. Hoffman concludes that we don’t need to see much if any of the truth underlying nature in order to thrive. In fact, it’s better if we don’t.

If there is an objective reality, and if my senses were shaped by natural selection, then the FBT Theorem says the chance that my perceptions are veridical–that they preserve some structure of objective reality–is less than my chance to win the lottery. This chance goes to zero as the world and my perceptions grow more complex–even if my perceptual systems are highly plastic and can change quickly as needed.

So, basically he’s saying we truth-seekers are generally screwed. Luckily for us, however, there is a caveat. That is, logic and mathematics are somehow part of our underlying reality. The universe, whatever it truly is, may throw up a phony interface all around us, but logic and mathematics do provide us clues about the underlying truth.

Why? Because even if we see that the apple is red, suggesting to us we should eat it, we still need to be able to know that taking two bites out of it is better than just taking one (for example). We don’t need to be great at mathematics, but we need enough basic logic to survive.

At least that’s his story.

Spacetime Is a Bunch of Hooey

Even so, Hoffman’s view is so radical that he thinks even spacetime is an illusory part of our interface. He loves to say (I’ve listened to a number of podcasts in which he’s interviewed) as well as write that “spacetime is doomed.” What he means is that our conceptual frameworks of space and time as we’ve known them since the age of Einstein have no underlying reality but are just the “interface” we use to negotiate reality.

To support his contention, Hoffman quotes theoretical physicist Nima Arkani-Hamed: “Almost all of us believe that spacetime doesn’t exist, that spacetime is doomed, and has to be replaced by some more primitive building blocks.”

Just to be clear, Hoffman does not deny that there is some “objective” reality that underlies our own, only that we do not have access to it. All we can do is use our reasoning, such as it is, to postulate about what that reality actually looks like.

That Way Lies Madness

When I listen to Hoffman on podcasts, his interviewers almost always express their concern that Hoffman’s argument potentially sends us all down the path toward nihilism. That is, if everything we experience is just an illusory interface, then how can we ever learn the truth of existence? And, if we can’t, then aren’t our lives meaningless?

Hoffman rejects that interpretation, saying that we can use our tools of scientific inquiry, mathematics and reasoning to learn more about “objective” reality, whatever that is. But I’m left wondering whether he’s adopting this stance because he fully believes it or because all his colleagues and the public at large would otherwise reject his theories out of hand.

Is Reason Reasonable?

After all, if spacetime is doomed, then why aren’t logic and reason? Logic is predicated on cause and effect, which are themselves contingent on the flow of time.

I imagine that, in the privacy of his own thoughts, Hoffman worries about this as well. On the other hand, so what if it’s true? Human beings as a whole are unlikely to ever embrace nihilism. We love our patterns and the sense that we know things. And, even where evidence is lacking, we’ve shown ourselves perfectly willing to embrace faith: that is, belief without underlying proof.

In the end, I worry less about people losing hope in their ability to understand some underlying reality than in their ability to embrace a faith that they then wish to impose on everyone around them. The latter leads to tyranny, intolerant ideology, zealotry and theocracy. For now, at least, those versions of faith seem by far the greater danger to us all.

Featured image is a Necker cube, which creates a kind of optical illusion in which the sides flip when you stare at it a while. From BenFrantzDale - Own work. https://en.wikipedia.org/wiki/Necker_cube

Do You Treat Employees Like Fixed-Program Computers?

When All Programs Were Fixed

Computers didn’t always work the way they do today. The first ones were what we now call “fixed-program computers,” which means that, without some serious and complex adjustments, they could do only one type of computation.

Sometimes that type of computer was superbly useful, such as when breaking Nazi codes during World War II (see the bombe below). Still, they weren’t much more programmable than a calculator, which is a kind of modern-day fixed-program computer.

Along Came John and Alan

The brilliant mathematician John von Neumann and his colleagues had a different vision of what a computer should be. To be specific, they had Alan Turing’s vision of a “universal computing machine,” a theoretical machine that the genius Turing dreamt up in 1936. Without going into specifics, let’s just say that the von Neumann model used an architecture that has been very influential up to the present day.

One of the biggest advantages associated with Turing/von Neumann computers is that multiple programs can be stored in them, allowing them to do many different things depending on which programs are running.

Von Neumann architecture: Wikimedia
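To make the contrast concrete, here’s a toy sketch in Python (illustrative only, not a faithful emulator of either kind of machine): a “fixed-program” device that can do exactly one computation, next to a tiny stored-program machine that executes whatever instruction list you load into its memory.

```python
def fixed_adder(a, b):
    """A 'fixed-program' device: the one computation it performs is baked into the hardware."""
    return a + b

def stored_program_machine(program, x):
    """A tiny stored-program machine: the instructions live in memory and can be swapped out."""
    value = x
    for op, operand in program:           # the fetch-and-execute loop
        if op == "add":
            value += operand
        elif op == "mul":
            value *= operand
        elif op == "neg":
            value = -value
        else:
            raise ValueError(f"unknown instruction: {op}")
    return value

print(fixed_adder(2, 3))                                  # 5, and that is all it will ever do

double_then_increment = [("mul", 2), ("add", 1)]
negate = [("neg", None)]
print(stored_program_machine(double_then_increment, 3))   # 7: same hardware, one program
print(stored_program_machine(negate, 3))                  # -3: same hardware, another program
```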

Today’s employers clearly see the advantage of stored-program computers. Yet I’d argue that many treat their employees and applicants more like the fixed-program computers of yesteryear. That is, firms make a lot of hiring decisions based more on what people know when they walk in the door than on their ability to acquire new learning. These days, experts are well paid largely because of the “fixed” knowledge and capabilities they have. Most bright people just out of college, however, don’t have the same fixed knowledge and so are viewed as less valuable assets.

The Programmable Person

Employers aren’t entirely in the wrong here. It’s a lot easier to load a new software package into a modern computer than it is to train an employee who lacks proper skill sets.  It takes money and time for workers to develop expertise, resources that employers don’t want to “waste” in training.

But there’s also an irony here: human beings are the fastest learning animals (or machines, for that matter) in the history of, well, the universe, as far as we know. People are born to learn (we aren’t designated as sapiens sapiens for nothing), and we tend to pick things up quickly.

The Half-Life of Knowledge

What’s more, there’s a half-life to existing knowledge and techniques in most professions. An experienced doctor may misdiagnose a patient simply because his or her knowledge about certain symptoms or treatments is out of date. The same concept applies to all kinds of employees but especially to professionals such as engineers, scientists, lawyers, and doctors. In other words, it applies to a lot of the people who earn the largest salaries in the corporate world.
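If you take the half-life metaphor literally (it borrows from radioactive decay, and the half-life T below is an assumed number, not a measured one), the fraction of a profession’s knowledge still valid after t years is

$$\left(\tfrac{1}{2}\right)^{t/T}.$$

With T = 10 years, for instance, only about a quarter of what someone learned 20 years ago would still hold up.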

Samuel Arbesman, author of The Half-Life of Facts: Why Everything We Know Has an Expiration Date, stated in a TEDx video, “Overall, we know how knowledge grows, and just as we know how knowledge grows, so too do we know how knowledge becomes overturned.” Yet, in their recruitment and training policies, firms often act as if we don’t know this.

The only antidote to the shortening half-life of skills is more learning, whether it’s formal, informal or (preferably) both. And the only antidote to a lack of experience is giving people experience, or at least a good facsimile of experience, as in simulation-based learning.

The problem of treating people like fixed-program computers is part of a larger skills-shortage mythology. In his book Why Good People Can’t Get Jobs, Prof. Peter Cappelli pointed to three driving factors behind the skills myth. A Washington Post article sums up:

Cappelli points to many [employers’] unwillingness to pay market wages, their dependence on tightly calibrated software programs that screen out qualified candidates, and their ignorance about the lost opportunities when jobs remain unfilled… “Organizations typically have very good data on the costs of their operations—they can tell you to the penny how much each employee costs them,” Cappelli writes, “but most have little if any idea of the [economic or financial] value each employee contributes to the organization.” If more employers could see the opportunity cost of not having, say, a qualified engineer in place on an oil rig, or a mobile-device programmer ready to implement a new business idea, they’d be more likely to fill that open job with a less-than-perfect candidate and offer them on-the-job training.

Losing the Fixed-Program Mindset

The fixed-program mentality should increasingly become a relic of the past. Today, we know more than ever about how to provide good training to people, and we have a growing range of new technologies and paradigms, such as game-based learning, extended enterprise elearning systems, mobile learning and massive open online courses (aka MOOCs).

A squad of soldiers learn communication and decision-making skills during virtual missions: Wikimedia

With such technologies, it’s become possible for employers to train future applicants even before they apply for a position. For example, a company that needs more employees trained in a specific set of programming languages could work with a provider to build online courses that teach those languages. Or they could potentially provide such training themselves via extended enterprise learning management systems.

The point is that there are more learning options today than ever before. We live in a new age during which smart corporations will be able to adopt a learning paradigm closer to that of stored-program computers, a paradigm they have trusted in their technologies for over half a century.

Featured image: A rebuild of a British Bombe located at Bletchley Park museum. Transferred from en.wikipedia to Commons by Maksim. Wikimedia Commons.

Talking Drums and the Depths of Human Ignorance

When I first heard of the talking drums, it made me think of how little we know about our fellow human beings and about the other citizens of the planet as well.

It’s a small but genuine annoyance. I’ll be listening to some “expert,” often a professor, being interviewed for a radio show or podcast. If the idea of cognition comes up, they’ll state as a fact that humans are far more intelligent than any other animal on the planet. And, almost inevitably, one piece of evidence they’ll point to is communication: the assumed inability of other animals to communicate with as much sophistication as we do.

Now, they might be right about these things, though obviously we’d need to define intelligence and communication to even establish a working hypothesis. What irritates me, though, is the certainty with which they make their claims. In truth, we just don’t know how we stack up in the animal kingdom because we still live in such a deep state of ignorance about our fellow creatures.

The Talking Drums

When I hear such claims, I think about the talking drums. For hundreds of years, certain African cultures were able to communicate effectively across vast distances. They did this right beneath the noses and within the hearing of ignorant, superior-feeling Europeans.

In his book The Information, James Gleick lays out the story of the talking drums in Chapter One. Via drums, certain African peoples were able to quickly communicate detailed and nuanced messages over long distances well before Europeans acquired comparable technologies. At least as far back as the 1700s, these African peoples were able to relay messages from village to village, messages that “could rumble a hundred miles or more in a matter of an hour…. Here was a messaging system that outpaced the best couriers, the fastest horses on good roads with way stations and relays.”

It was only in the 20th century that the missionary Roger T. Clarke recognized that “the signals represent the tones of the syllables of conventional phrases of a traditional and highly poetic character.” Because many African languages are tonal in the same way Chinese is, pitch is crucial in determining the meaning of a particular word. What the drums allowed these peoples to do was communicate complex messages using tones rather than vowels or consonants.

Using low tones, the drummer communicates through phrases and pauses. Extra phrases are added to each short “word” beaten on the drums. These extra phrases would be redundant in speech, but they provide context for the core drum signal.
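Here’s a toy illustration of that disambiguation trick. The tone patterns below are invented (they are not real Kele or any other drum language), though the glosses loosely echo examples Gleick quotes:

```python
# Invented example: several spoken words share the same bare tone pattern (H = high, L = low),
# so a drummed word alone is ambiguous. Adding the tones of a stock descriptive phrase,
# redundant in speech, narrows it down. None of these patterns come from a real drum language.
vocabulary = {
    "moon":   {"word": "LH", "phrase": "LH LLHL"},   # "the moon looks down at the earth"
    "fowl":   {"word": "LH", "phrase": "LH HHLL"},   # "the fowl, the little one that says kiokio"
    "manioc": {"word": "HL", "phrase": "HL LHLH"},
}

def candidates(drummed):
    """Which vocabulary entries match the drummed tone sequence?"""
    return [w for w, p in vocabulary.items() if drummed in (p["word"], p["phrase"])]

print(candidates("LH"))        # ['moon', 'fowl']  (the bare word is ambiguous)
print(candidates("LH LLHL"))   # ['moon']          (the added phrase resolves it)
```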

Enormous Chasms

The technology and innovativeness of the talking drums are amazing, of course, but what’s especially startling is the centuries-long depth of European ignorance about the technology. Even once some Europeans admitted that actual information was being communicated across vast distances, they could not fathom how.

Why? Sure, racism no doubt played a part. But the larger truth is that they simply didn’t have enough information and wisdom to figure it out. That is despite the fact that we are talking about members of the same species and, indeed, a species with very little genetic diversity.

Here’s how the Smithsonian Institution reports on this lack of diversity:

[C]ompared with many other mammalian species, humans are genetically far less diverse – a counterintuitive finding, given our large population and worldwide distribution. For example, the subspecies of the chimpanzee that lives just in central Africa, Pan troglodytes troglodytes, has higher levels of diversity than do humans globally, and the genetic differentiation between the western (P. t. verus) and central (P. t. troglodytes) subspecies of chimpanzees is much greater than that between human populations.

On average, any two members of our species differ at about 1 in 1,000 DNA base pairs (0.1%). This suggests that we’re a relatively new species and that at one time our entire population was very small, at around 10,000 or so breeding individuals.

For Europeans to remain so ignorant about a technology created by other members of their own barely diversified species tells us how truly awful we are at understanding the communication capabilities of others. Now add in the exponentially higher levels of genetic diversity between species. For example, the last known common ancestor of whales and humans existed about 97 million years ago. How about the last common ancestor of birds and humans? About 300 million years ago.

These timescales represent enormous genetic chasms that we are not remotely capable of bridging at the moment. We are still in the dark ages of understanding animal cognition and communication. So far, our most successful way of communicating with other animals is by teaching them our languages. So now we have chimpanzees using sign language and parrots imitating our speech patterns.  African Grey parrots, for example, can learn up to 1,000 words that they can use in context.

Yet, when these species do not use human language as well as humans, we consider them inferior.

If We’re So Bloody Bright…

But if we as a species are so intelligent, why aren’t we using their means of communication? I’m not suggesting that other animals use words, symbols and grammar the way humans do. But communicate they do. I live in Florida, which is basically a suburbanized rainforest, and have become familiar with the calls of various birds, tropical and otherwise. One of the more common local denizens is the fish crow. I hear crows that are perched blocks away from one another do calls and responses. The calls vary considerably even to my ignorant, human ears, and there are probably countless nuances I’m missing.

Are they speaking a “language”? I don’t know, but it seems highly unlikely they’re expending all the vocal and cognitive energy for no reason. Their vocalizations mean something, even if we can’t grasp what.

Inevitably, humans think all animal communication is about food, sex and territory. But that’s just a guess on our part. We assume that their vocalizations are otherwise meaningless just as many Europeans assumed the talking drums were mostly meaningless noise. In short, we’re human-centric bigots.

Consider the songs of the humpback whales. These are extremely complex vocalizations that can be registered over vast distances. Indeed, scientists estimate that whales’ low frequency sounds can travel up to 10,000 miles! Yet, we’re only guessing about why males engage in such “songs.” For all we know, they’re passing along arcane mathematical conceits that would put our human Fields Medal winners to shame.

On Human Ignorance

The point is that we continue to live in a state of deep ignorance when it comes to our fellow creatures. That’s okay as long as we remain humble, but humility is not what people do best. We assume we are far more intelligent and/or far better communicators than are other species.

Yet, consider the counterevidence. Just look at the various environmental, political and even nuclear crises in which we conflict-loving primates are so dangerously enmeshed. It hardly seems like intelligence. Maybe the whales and parrots are really discussing what incapable morons humans are compared to themselves. With that, mind you, it would be hard to argue.

Featured image from Mplanetech. 11 January 2017

Thinking About Thinking

What is thinking?

There has been a tsunami of articles related to cognition. How does your pet think? How do we (or should we) build thinking machines? How can you think more effectively? How can intelligence itself be boosted? Etc.

This got me thinking about thinking, so I became involved in several social media discussions on how we should view the thinking process. Below is a short definition I’ve arrived at, one that potentially includes cognition among many animals as well as, perhaps, computing devices today and/or in the future:

Thinking is the process of assimilating sensory information, integrating it into existing internal models of reality (or creating new models derived from old ones),  generating inferences about the past, future and present based on those models, and using those inferences as more input that can be assimilated into internal models via continuing feedback loops.

This is succinct but I’m sure it oversimplifies things. For example, infants are likely born with a certain amount of “hard-wiring” that allows them to interpret the world in basic ways even before they’ve developed many internal models about how the world works.  Still, I’d argue that this definition gets at what we mean by thinking, whether it relates to bugs, birds, elephants or hominids.
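As a sanity check, here’s a bare-bones rendering of that definition as a loop, where the internal “model” is nothing but a running average standing in for something far richer:

```python
# Sense, integrate into a model, infer what comes next, and feed the result back in.
def think(sensory_stream, learning_rate=0.3):
    model = None                          # internal model of "how the world usually is"
    for observation in sensory_stream:
        if model is None:
            model = float(observation)    # crude stand-in for innate hard-wiring / first impressions
        prediction = model                # inference about the next moment, based on the model
        surprise = observation - prediction
        model += learning_rate * surprise     # assimilate the new information (the feedback loop)
        yield prediction, observation, surprise

for prediction, observation, surprise in think([10, 10, 11, 30, 29, 30]):
    print(f"predicted {prediction:5.1f}, saw {observation:3d}, surprise {surprise:+6.1f}")
```

The point isn’t the arithmetic; it’s that the definition describes a loop that even a few lines of code can mimic in outline, while leaving all the interesting questions about models and inference untouched.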

What’s the point? Well, cognition is quickly becoming the name of the game in modern society in nearly any discipline you can name: learning, artificial intelligence, information science, bioethics, research, analytics, innovation, marketing, justice, genetics, etc.

A lot of what we will be doing in the future is trying to answer hard questions about thinking:

  • What (and how) do other people (e.g., customers, employees, citizens, etc.) think?
  • How can we make learning more efficient and effective?
  • How can we make machines that are better at solving problems?
  • How can we understand what is in the minds of criminals so that we can reduce crime and make better decisions in our justice systems?
  • How should we view and treat other thinking animals on the planet?
  • How do we know (or decide) when machines are thinking, and to what degree is thinking different from consciousness?

To have better discussions around these and similar questions, we’ll need to develop better and more understandable cross-disciplinary definitions of terms such as thinking, consciousness (which seems to be a kind of attention to thinking), and comprehension. A lot of progress comes from our growing ability to create thinking machines, but we also seem to be getting considerably better at understanding human cognition. The next couple of decades or so should be interesting.

(Note: I wrote a version of this post nearly a decade ago.)

Image author: Solipsist. From Wikimedia Commons.
Featured image source: Robert Fludd. From https://commons.wikimedia.org/wiki/File:Robert_Fludd,Tomus_secundus…,_1619-1621_Wellcome_L0028467.jpg