You’re More Than Just a Number: Now, You’re a Vector

Unless you’ve been hiding in a bunker for the last few years (and who could blame you if you were?), you know that data science, big data, and machine learning are all the rage. And you know that the NSA has gotten scary good at surveilling the world via its data-parsing mojo.

These trends have overturned — or at least added a whole new wrinkle to — the concern so prevalent when I was a kid: that individuals in modern societies were becoming faceless numbers in an uncaring machine. Faceless number? These days, a lot of people aspire to that. They leverage the likes of the search engine DuckDuckGo in the hope of reverting to being just a blip of anonymous bits lost amid mighty currents of data.


Well, unless you’re willing to live off the grid — or get almost obsessively serious about using encryption tools such as PGP — you’ll just have to dream your grand dreams of obscurity. Even if we somehow rein in the U.S. government, businesses will be doing their own “surveilling” for the foreseeable future.

But look on the bright side. From one perspective, there’s been progress. You’re not just a number these days, you’re a whole vector, or maybe even a matrix — with possible aspirations of becoming a data frame.

No, this isn’t an allusion to the disease vectors that have become such a hot topic during the pandemic. The statheads among you may recognize those classifications as belonging to the statistical programming language R, which vies with Python for the title of best data science language.

In R’s parlance, a vector is “a single entity consisting of a collection of things.” I love the sheer all-encompassing vagueness of that definition. After all, it could apply to me or you, our dogs or cats, or even our smartphones.

But, in R, a vector tends to be a grouping of numbers or other characters that can, if needed, be acted on en masse by the program. It’s a mighty handy tool. With just a couple of keystrokes, you can take one enormous string of numbers and work on them all simultaneously (for example, by multiplying them all by another string of numbers, plugging them all into the same formula or turning them into a table). It’s just easier to breathe life into the data this way. It’s what Mickey Mouse would have brandished if he were a statistician’s rather than a sorcerer’s apprentice in Fantasia.
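
Here’s a minimal sketch in R of that en-masse handiness. The numbers are invented, but the vectorized operations are the real thing:

```r
# Two vectors of equal length: monthly sales counts and unit prices
sales  <- c(120, 95, 143, 88, 210)
prices <- c(2.50, 3.00, 2.75, 4.10, 1.99)

# Multiply them all simultaneously, element by element
revenue <- sales * prices

# Plug every element into the same formula at once
discounted <- revenue * 0.9 - 5

# Or turn the lot into a table
data.frame(sales, prices, revenue, discounted)
```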

Now imagine yourself as a vector or, at least, as being represented by a vector. Your age, height, weight, cholesterol numbers and recent blood work all become a vector, one your doctor can peruse and analyze with interest. Meanwhile, your purchasing habits, credit rating, income estimates, level of education and other factors are another vector that retail and financial organizations want to tap into. To those vectors could be added many more until they become one super-sized vector with your name on it.

Now, glom your vectors together with millions of other people’s vectors, and you’ve got one huge, honking, semi-cohesive collection of potentially valuable information. With it, you and others can, like Pinky and the Brain, take over the world! Or at least sell a lot more toothpaste and trucks.
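
In R terms, that glomming might look something like this (all names and values invented for illustration):

```r
# A hypothetical "you," reduced to a named vector
me <- c(age = 48, height_cm = 178, weight_kg = 82,
        cholesterol = 195, credit_score = 710)

# Stack a few such vectors row by row and you have a matrix...
others <- rbind(me,
                jane = c(34, 165, 61, 170, 780),
                raj  = c(61, 182, 95, 240, 655))

# ...with aspirations of becoming a data frame
people <- as.data.frame(others)
colMeans(people)  # and now the marketers get to work
```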

The bottom line is that we have three basic choices in this emerging Age of Vectors:

Ignore It: Most folks will opt for this one, being too busy or bored for the whole “big data” hoopla. Yes, they know folks are collecting tons of data about them, but who cares? As long as it doesn’t mess up their lives in some way (as in identity theft), this is just a trend they can dismiss, worrying about it on a case-by-case basis when it directly affects their lives.

Fight the Power: If you don’t want to be vectorized — or if you at least want to limit the degree to which you are — you can try every trick in the book to keep yourself off the radar of the many would-be private and public data-hunters who want to dig through your data-spoor in their quest to track your habits (either as an individual or as part of a larger herd).

Use the Vector, Luke: Some will gladly try to harness the power of the vector, both professionally and personally. They’ll try to squeeze every ounce of utility out of recommendation engines, work assiduously to enhance their social media rankings, and leverage every data collection/presentation service out there to boost their credit ratings, get offered better jobs, or win hearts (or other stuff) on dating sites. They will certainly wield vectors at work for the purpose of predictive analytics. They may even turn the vector scalpel inward with the goal of “hacking themselves” into better people, like the Quantified Selfers who want to gain “self knowledge through numbers.”

That’s not to say that we can’t pick and choose some aspects of each of these three basic strategies. For instance, I’m just not cut out for the quantified-self game, being just too data-apathetic (let’s say a 7 on a scale of 10) to quantify my life. But, when it comes to analyzing other stuff, from labor data to survey findings to insects in my backyard, I’m all in, willing and ready to use the Force of the Vector. Now, I just have to figure out where I misplaced my statistical light saber…

Featured image from IkamusumeFan - Plot SVG using text editor.

Talking Drums and the Depths of Human Ignorance

It’s a small but genuine annoyance. I’ll be listening to some “expert,” often a professor, being interviewed for a radio show or podcast. If the idea of cognition comes up, they’ll state as a fact that humans are far more intelligent than any other animal on the planet. And, almost inevitably, one piece of evidence they’ll point to is communication: the assumed inability of other animals to communicate with as much sophistication as we do.

Now, they might be right about these things, though obviously we’d need to define intelligence and communication to even establish a working hypothesis. What irritates me, though, is the certainty with which they make their claims. In truth, we just don’t know how we stack up in the animal kingdom because we still live in such a deep state of ignorance about our fellow creatures.

The Talking Drums

When I hear such claims, I sometimes think about the talking drums. For hundreds of years, certain African cultures were able to communicate effectively across vast distances. They did this right beneath the noses and within the hearing of ignorant, superior-feeling Europeans.

In his book The Information, James Gleick lays out the story of the talking drums in Chapter One. Via drums, certain African peoples were able to quickly communicate detailed and nuanced messages over long distances well before Europeans acquired comparable technologies. At least as far back as the 1700s, these African peoples were able to relay messages from village to village, messages that “could rumble a hundred miles or more in a matter of an hour…. Here was a messaging system that outpaced the best couriers, the fastest horses on good roads with way stations and relays.”

It was only in the 20th century that the missionary Roger T. Clarke recognized that “the signals represent the tones of the syllables of conventional phrases of a traditional and highly poetic character.” Because many African languages are tonal in the same way Chinese is, pitch is crucial in determining the meaning of a particular word. What the drums allowed these peoples to do was communicate complex messages using tones rather than vowels or consonants.

Using the drums’ tones and pauses, the drummer mimics the tonal patterns of speech. But many spoken words share the same tone pattern, so extra phrases are added to each short “word” beaten on the drums. These extra phrases would be redundant in speech, but they provide the context that disambiguates the core drum signal.
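
Here’s a toy sketch in R of the scheme (the vocabulary and tone strings are invented; only the disambiguation idea follows Gleick’s account):

```r
# Many words collapse onto the same bare tone pattern (H = high, L = low)
tones <- c(moon = "LL", fowl = "LL", river = "HL")

# So each drummed "word" is a longer stereotyped phrase that disambiguates it,
# e.g. "the moon looks down at the earth" vs. "the fowl, the little one..."
drum_phrase <- c(moon = "LL HHLL", fowl = "LL LLHH", river = "HL LHLH")

tones["moon"] == tones["fowl"]              # TRUE: the bare tones collide
drum_phrase["moon"] == drum_phrase["fowl"]  # FALSE: the added phrase resolves it
```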

Enormous Chasms

The technology and innovativeness of the talking drums are amazing, of course, but what’s especially startling is the centuries-long depth of European ignorance about the technology. Even once some Europeans admitted that actual information was being communicated across vast distances, they could not fathom how.

Why? Sure, racism no doubt played a part. But the larger truth is that they simply didn’t have enough information and wisdom to figure it out. That is despite the fact that we are talking about members of the same species and, indeed, a species with very little genetic diversity.

Here’s how the Smithsonian Institution reports on this lack of diversity:

[C]ompared with many other mammalian species, humans are genetically far less diverse – a counterintuitive finding, given our large population and worldwide distribution. For example, the subspecies of the chimpanzee that lives just in central Africa, Pan troglodytes troglodytes, has higher levels of diversity than do humans globally, and the genetic differentiation between the western (P. t. verus) and central (P. t. troglodytes) subspecies of chimpanzees is much greater than that between human populations.

On average, any two members of our species differ at about 1 in 1,000 DNA base pairs (0.1%). This suggests that we’re a relatively new species and that at one time our entire population was very small, at around 10,000 or so breeding individuals.

For Europeans to remain so ignorant about a technology created by other members of their own barely diversified species tells us how truly awful we are at understanding the communication capabilities of others. Now add in the exponentially higher levels of genetic divergence between species. For example, the last known common ancestor of whales and humans existed about 97 million years ago. The last common ancestor of birds and humans? About 300 million years ago.

These timescales represent enormous genetic chasms that we are not remotely capable of bridging at the moment. We are still in the dark ages of understanding animal cognition and communication. So far, our most successful way of communicating with other animals is by teaching them our languages. So now we have chimpanzees using sign language and parrots imitating our speech patterns. African Grey parrots, for example, can learn up to 1,000 words that they can use in context.

Yet, when these species do not use human language as well as humans, we consider them inferior.

If We’re So Bloody Bright…

But if we as a species are so intelligent, why aren’t we using their means of communication? I’m not suggesting that other animals use words, symbols and grammar the way humans do. But communicate they do. I live in Florida, which is basically a suburbanized rainforest, and have become familiar with the calls of various birds, tropical and otherwise. One of the more common local denizens is the fish crow. I hear crows that are perched blocks away from one another do calls and responses. The calls vary considerably even to my ignorant, human ears, and there are probably countless nuances I’m missing.

Are they speaking a “language”? I don’t know, but it seems highly unlikely they’re expending all the vocal and cognitive energy for no reason. Their vocalizations mean something, even if we can’t grasp what.

Inevitably, humans think all animal communication is about food, sex and territory. But that’s just a guess on our part. We assume that their vocalizations are otherwise meaningless just as many Europeans assumed the talking drums were mostly meaningless noise. In short, we’re human-centric bigots.

Consider the songs of the humpback whales. These are extremely complex vocalizations that can be registered over vast distances. Indeed, scientists estimate that whales’ low frequency sounds can travel up to 10,000 miles! Yet, we’re only guessing about why males engage in such “songs.” For all we know, they’re passing along arcane mathematical conceits that would put our human Fields Medal winners to shame.

On Human Ignorance

The point is that we continue to live in a state of deep ignorance when it comes to our fellow creatures. That’s okay as long as we remain humble, but humility is not what people do best. We assume we are far more intelligent and/or far better communicators than are other species.

Yet, consider the counterevidence. Just look at the various environmental, political and even nuclear crises in which we conflict-loving primates are so dangerously enmeshed. It hardly seems like intelligence. Maybe the whales and parrots are really discussing what incapable morons humans are compared to them. With that, mind you, it would be hard to argue.

Featured image from Mplanetech, 11 January 2017.

The Universe of Seurat and Rovelli

I was once chastised by a security guard at the Art Institute of Chicago for getting too close to A Sunday Afternoon on the Island of La Grande Jatte, the greatest work by the greatest of the pointillist painters, Georges Seurat. I remember blushing with embarrassment as other patrons flicked their attention to me to take in the barbarian careless enough to endanger one of the world’s most beautiful and important works of art.

I also felt an initial rush of outrage that anyone would think I would harm such a treasure. But then I realized that I was indeed too close, that my foot was over the line marking the designated safe distance from the masterpiece, that I was indeed the Philistine they took me for. But I was a curious Philistine, looking closely to tease out how Seurat was able to pull off his technique.

Pointillism, Atomism and Digitization

The art movement known as pointillism1 is the technique of applying small strokes or dots of paint so that, from a distance, they visually blend together. Largely invented by Seurat, the technique, I think, visually demonstrates atomism, which Rovelli associates with certain Greek philosophers but which was probably first described by the Vedic sage Aruni back in the 8th century BCE. Aruni proposed the idea that there were “particles too small to be seen [that] mass together into the substances and objects of experience.”

Seurat aesthetically anticipated not only the atomic and quantum theories but the digital age in which we find ourselves living today, an age in which so many people spend the majority of their waking hours looking at screens of pixels.2 We are entranced by pointillism all day long.

In a sense, the idea of a pixelated universe is the topic of both Seurat’s work and Rovelli’s as laid out in Reality Is Not What It Seems: The Journey to Quantum Gravity. If you read the last post (or, better yet, the book itself), you should have at least a general notion of quantum gravity.

But what prospects does the theory have? How might it be supported by scientific evidence, and where might it lead us? Let’s discuss.

Vive la Révolution

Quantum physics was a revolution in physics, but what if Rovelli is right and all of spacetime is quantum? Well, then, the revolution is just beginning. Who really knows what knowledge it could bring us? Might quantum gravity help us better understand how to harness gravity itself? What new technologies could be created with it? Rovelli doesn’t discuss possible applications, but I can’t think of any major physics discoveries that didn’t also bring earth-shaking new technologies.

Testing Quantum Gravity

So, how can the theory be tested? One idea is to look for evidence of a “Big Bounce” as opposed to a “Big Bang” in the origins of the universe. According to Einstein’s view of the universe, all of spacetime could be squashed ad infinitum, ultimately leading to the Big Bang. But, that’s not what quantum gravity would predict. Rovelli notes that “if we take quantum mechanics into account, the universe cannot be indefinitely squashed.” And if that’s true, then we wouldn’t get a Big Bang but, rather, a gigantic rebound that he refers to as the Big Bounce.

So, how does one test that? Well, one can look at the statistical distribution of the fluctuations of cosmic radiation. That should provide evidence of the Big Bounce. In addition, according to Rovelli, “cosmic gravitational background radiation must also exist–older than the electromagnetic one, because gravitational waves are disturbed less by matter than electromagnetic ones and were able to travel undisturbed even when the universe was too dense to let electromagnetic waves pass.”

There’s also quantum gravity’s prediction that black holes are not ultimately stable, because the matter inside them cannot be squeezed into a single point of infinite density. Rather, at some point, the black hole explodes (like a miniature Big Bounce). If we can locate some exploding black holes in the universe, then we have more evidence for quantum gravity.

So, basically, if we find that super dense stuff is bouncing and rebounding in the universe, the quantum gravity folks might be right. If not, well, at least we’ll have evidence the theory is wrong and we can consider the other theories that have been, and surely will be, conjured up by the endlessly creative theoretical physicists.

If the quantum gravity champions do turn out to be right, then one side effect will be that infinity goes away. Or, at least, physicists will be a lot less likely to get infinity as the answer when they run certain calculations based on general relativity. The universe itself becomes “a wide sea, but a finite one.”

Bit by Bit, Information Becomes Reality

But don’t assume that, just because the universe might be finite, it stops being weird. In fact, it may start seeming weirder than ever if humanity succeeds in merging quantum mechanics not only with the theory of relativity but also with information theory.

First conceived by engineer and mathematician Claude Shannon in the mid-20th century, information theory assumes that information “is the measure of the number of possible alternatives for something.”

It was Shannon who popularized the word “bit” to mean a unit of information. He used it in his seminal 1948 paper “A Mathematical Theory of Communication,” and he attributed the coinage to a Bell Labs memo written by John W. Tukey, who had used bit as a contraction of “binary digit.”

Rovelli explains, “When I know at roulette that a red number has come up rather than a black, I have one ‘bit’ of information; when I know that a red even number has won, I have two bits of information…”
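
Rovelli’s roulette example follows directly from Shannon’s measure: information in bits is the base-2 logarithm of how many alternatives get eliminated. A quick sketch in R (idealized so that each piece of news exactly halves the possibilities):

```r
# Bits gained = log2(alternatives before / alternatives after)
bits <- function(before, after) log2(before / after)

# 36 non-zero roulette numbers, half of them red
bits(36, 18)  # learning "red" leaves 18 alternatives: 1 bit
bits(36, 9)   # learning "red and even" leaves 9: 2 bits
```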

I won’t belabor this because information theory gets pretty complicated and Rovelli doesn’t go too deeply into it. To get a better but non-technical understanding, I recommend reading The Information: A History, a Theory, a Flood by James Gleick. I read it several years ago and hope to give it a second read over the next several months.

Anyway, it was John Wheeler, the father of quantum gravity, who was “the first to realize that the notion of information was fundamental to the understanding of quantum reality.” He coined the phrase “it from bit,” meaning that the universe is ultimately made up of information.

Rovelli writes:

Information…is ubiquitous throughout the universe. I believe that in order to understand reality, we have to keep in mind that reality is this network of relations, of reciprocal information, that weaves the world. We slice up the reality surrounding us into “objects.” But reality is not made up of discrete objects. It is a variable flux.

Although Rovelli has one more chapter on the scientific method, I think this is the better place to wrap up a post on a blog called The Reticulum. Let’s sum up: Reality is a network of relations among bits of information in a variable flux.3

I don’t know if that’s a true description of our underlying reality. But it does feel familiar: flux and foam, bits and bytes, indeterminacy and statistical spins. Even if quantum gravity doesn’t work out as epistemology, it still captures much of the essence of our baffling, vertiginous and often wondrous modern lives.

1 As visionary as the technique was, the term "pointillism" was actually coined by art critics in the late 1880s to ridicule Seurat and the other members of the movement. But the artists, as they so often do vis-a-vis critics, got the last laugh. Today, the term describes one of the defining techniques of neo-impressionism.

2 The word pixel, by the way, is a portmanteau of "picture element," the smallest addressable and controllable element of a picture represented on a digital screen.

3 Which makes me think, of course, of Doc Brown's famous "flux capacitor."
Featured image: A Sunday Afternoon on the Island of La Grande Jatte by Georges Seurat: https://commons.wikimedia.org/wiki/File:A_Sunday_on_La_Grande_Jatte,_Georges_Seurat,_1884.jpg