God as Dread Pirate Roberts

The classic romantic comedy The Princess Bride has one of my favorite existential lines.

It occurs when our hero Westley is recounting his life as an abductee of the Dread Pirate Roberts, explaining how Roberts made him a valet.

“You can try it for tonight. I’ll most likely kill you in the morning.” Three years he said that. “Good night, Westley. Good work. Sleep well. I’ll most likely kill you in the morning.”

The Dread Pirate Roberts is an intriguing stand-in for the universe or, if you prefer, God.

The universe is a harsh, dangerous and crazy mysterious place. Bad shit happens to everyone at times. Some deserve it. Many do not. And life is always uncertain until it ends in the certainty of death.

You might well die today, or tomorrow, or the day after that.

But if you’re canny and lucky, the universe might let you live for another day.

So, watch for snakes in tall grass. Look both ways before you cross the road. Eat right and exercise. Generally speaking, avoid stupid mistakes that could turn deadly.

Of course, even if you do all that, the universe will get you in the end. It’s designed that way.

Maybe there is, after all, an afterlife. Or maybe there will come a day when we live in immortal bodies powered by bioceramic minds (or whatever). Perhaps at that point the Dread Pirate Roberts will not be quite so dreaded … though I doubt it.

Until then, however, offer up your good works as God’s valet. Do what you can. And sleep well, friends.

Featured image from RootOfAllLight, Creative Commons Attribution-Share Alike 4.0

The Singularity Is Pretty Damned Close…Isn’t It?

What is the singularity and just how close is it?

The short answers are “it depends who you ask” and “nobody knows.” The longer answers are, well…you’ll see.

Singyuwhatnow?

Wikipedia provides a good basic definition: “The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth will become radically faster and uncontrollable, resulting in unforeseeable changes to human civilization.”

The technological growth in question usually refers to artificial intelligence (AI). The idea is that an AI capable of improving itself goes through a series of cycles in which it quickly gets smarter and smarter at an exponential rate. This leads to a superintelligence that throws the world into an impossible-to-predict future.

Whether this sounds awesome or awful largely depends on your view of what a superintelligence would bring about, something that no one really knows.

The impossible-to-predict nature is part of why, in fact, it’s called a singularity, a term that originates in mathematics and physics. In math, singularities pop up when the numbers stop making sense, as when the value of an expression shoots off to infinity (think of 1/x as x approaches zero). The term is also associated with phenomena such as black holes, where our traditional understanding of physics breaks down. So the term, as applied to technology, suggests a time beyond which the world stops making sense (to us) and so becomes impossible to forecast.

How Many Flavors Does the Singularity Come In?

From Wikipedia: major evolutionary transitions in information processing

Is a runaway recursively intelligent AI the only path to a singularity? Not if you count runaway recursively intelligent people who hook their little monkey brains up to some huge honking artificial neocortices in the cloud.

Indeed, it’s the human/AI interface and integration scenario that folks like inventor-author-futurist Ray Kurzweil seem to be banking on. To him, from what I understand (I haven’t read his newest book), that’s when the true tech singularity kicks in. At that point, humans essentially become supersmart, immortal(ish) cyborg gods.

Yay?

But there are other possible versions as well. There’s the one where we hook up our little monkey brains into one huge, networked brain to become the King Kong of superintelligences. Or the one where we grow a supersized neocortex in an underground vat the size of the Chesapeake Bay. (A Robot Chicken nightmare made more imaginable by the recent news that researchers got a cluster of brain cells to play Pong in a lab–no, really.)

Singularity: Inane or Inevitable?

The first thing to say is that maybe the notion is kooky and misguided, the pipedream of geeks yearning to become cosmic comic book characters. (In fact, the singularity is sometimes called, with varying degrees of sarcasm, the Rapture for nerds.)

I’m tempted to join in the ridicule of the preposterous idea. Except for one thing: AI and other tech keeps proving the naysayers wrong. AI will never beat the best chess players. Wrong. Okay, but it can’t dominate something as fuzzy as Jeopardy. Wrong. Surely it can’t master the most complex and challenging of all human games, Go. Yawn, wrong again.

After a while, anyone who bets against AI starts looking like a chump.

Well, games are for kids anyway. AI can’t do something as slippery as translate languages or as profound as unravel the many mysteries of protein folding.  Well, actually…

But it can’t be artistic…can it? (“I don’t do drugs. I am drugs” quips DALL-E).

Getting Turing Testy

There’s at least one pursuit that AI has yet to master: the gentle art of conversation. That may be the truest assessment of human-level intelligence. At least, that’s the premise underlying the Turing test.

The test assumes you have a questioner reading a computer screen (or the equivalent). The questioner carries on two conversations via screen and keyboard: one with a computer, the other with another person. If the questioner can’t figure out which conversation partner is the computer, then the computer passes the test because it can’t be distinguished from a human being.

Of course, this leaves us with four (at least!) big questions.

First, when will a machine finally pass that final exam?

Second, what does it mean if and when a machine does? Is it truly intelligent? How about conscious?

Third, if the answer to those questions seems to be yes, what’s next? Does it get a driver’s license? A FOX News slot? An OKCupid account?

Fourth, will such a computer spark the (dun dun dun) singularity?

The Iffy Question of When the Singularity Arrives

In a recent podcast interview, Kurzweil predicted that some soon-to-be-famous digital mind will pass the Turing test in 2029.

“2029?” I thought. “As in just 7-and-soon-to-be-6-years-away 2029?”

Kurzweil claims he’s been predicting that same year for a long time, so perhaps I read about it back in 2005 when his book The Singularity Is Near came out (my copy now lost somewhere in the hustle and bustle of my bookshelves). But back then, of course, it was nearly a quarter of a century away. Now, well, it seems damn near imminent.

Of course, Kurzweil may well turn out to be wrong. As much as he loves to base his predictions on the mathematics of exponentials, he can get specific dates wrong. For example, as I wrote in a previous post, he’ll wind up being wrong about the year solar power becomes pervasive (though he may well turn out to be right about the overall trend).

So maybe a computer won’t pass a full-blown Turing test in 2029. Perhaps it’ll be in the 2030s or 2040s. That would be close enough, in my book. Indeed, most experts believe it’s just a matter of time. One survey conducted at the Joint Multi-Conference on Human-Level Artificial Intelligence found that just 2% of participants predicted that an artificial general intelligence (or AGI, meaning that the machine thinks at least as well as a human being) would never occur. Of course, that’s not exactly an unbiased survey cohort, is it?

Anyhow, let’s say the predicted timeframe when the Turing test is passed is generally correct. Why doesn’t Kurzweil set the date of the singularity on the date that the Turing test is passed (or the date that a human-level AI first emerges)? After all, at that point, the AI celeb could potentially code itself so it can quickly become smarter and smarter, as per the traditional singularity scenario.

But nope. Kurzweil is setting his sights on 2045, when we fully become the supercyborgs previously described.

What Could Go Wrong?

So, Armageddon or Rapture? Take your pick.

What’s interesting to my own little super-duper-unsuper brain is that folks seem more concerned about computers leaving us in the intellectual dust than us becoming ultra-brains ourselves. I mean, sure, our digital super-brain friends may decide to cancel humanity for reals. But they probably won’t carry around the baggage of our primeval, reptilian and selfish fear-fuck-kill-hate brains–or, what Jeff Hawkins calls our “old brain.”

In his book A Thousand Brains, Hawkins writes about the ongoing frenemy-ish relationship between our more rational “new brain” (the neocortex) and the far more selfishly emotional though conveniently compacted “old brain” (just 30% of our overall brain).

Basically, he chalks up the risk of human extinction (via nuclear war, for example) to old-brain-driven crappola empowered by tech built via the smart-pantsy new brain. For example, envision a pridefully pissed off Putin nuking the world with amazing missiles built by egghead engineers. And all because he’s as compelled by his “old brain” as a tantrum-throwing three-year-old after a puppy eats his cookie.

Now envision a world packed with superintelligent primate gods still (partly) ruled by their toddler old-brain instincts. Yeah, sounds a tad dangerous to me, too.

The Chances of No Chance

Speaking of Hawkins, he doesn’t buy the whole singularity scene. First, he argues that we’re not as close to creating truly intelligent machines as some believe. Today’s most impressive AIs tend to rely on deep learning, and Hawkins believes this is not the right path to true AGI. He writes,

Deep learning networks work well, but not because they solved the knowledge representation problem. They work well because they avoided it completely, relying on statistics and lots of data instead….they don’t possess knowledge and, therefore, are not on the path to having the ability of a five-year-old child.

Second, even when we finally build AGIs (and he thinks we certainly will if he has anything to say about it), they won’t be driven by the same old-brain compulsions as we are. They’ll be more rational because their architecture will be based on the human neocortex. Therefore, they won’t have the same drive to dominate and control because they will not have our nutball-but-gene-spreading monkey-brain impulses.

Third, Hawkins doesn’t believe that an exponential increase in intelligence will suddenly allow such AGIs to dominate. He believes a true AGI will be characterized by a mind made up of “thousands of small models of the world, where each model uses reference frames to store knowledge and create behaviors.” (That makes more sense if you read his book, A Thousand Brains: A New Theory of Intelligence). He goes on:

Adding this ingredient [meaning the thousands of reference frames] to machines does not impart any immediate capabilities. It only provides a substrate for learning, endowing machines with the ability to learn a model of the world and thus acquire knowledge and skills. On a kitchen stovetop you can turn a knob to up the heat. There isn’t an equivalent knob to “up the knowledge” of a machine.

An AGI won’t become a superintelligence just by virtue of writing better and better code for itself in the span of a few hours. It can’t automatically think itself into a superpower. It still needs to learn via experiments and experience, which takes time and the cooperation of human scientists.

Fourth, Hawkins thinks it will be difficult if not impossible to connect the human neocortex to mighty computing machines in the way that Kurzweil and others envision. Even if we can do it someday, that day is probably a long way off.

So, no, the singularity is not near, he seems to be arguing. But a true AGI may, in fact, become a reality sometime in the next decade or so–if engineers will only build an AGI based on his theory of intelligence.

So, What’s Really Gonna Happen?

Nobody knows who’s right or wrong at this stage. Maybe Kurzweil, maybe Hawkins, maybe neither, or some combination of both. Here’s my own best guess for now.

Via deep learning approaches, computer engineers are going to get closer and closer to a computer capable of passing the Turing test, but by 2029 it won’t be able to fool an educated interrogator who is well versed in AI.

Or, if a deep-learning-based machine does pass the Turing test before the end of this decade, many people will argue that it only displays a façade of intelligence, perhaps citing the famous Chinese-room argument (which is a philosophical can of worms that I won’t get into here).

That said, eventually we will get to a Turing-test-passing machine that convinces even most of the doubters that it’s truly intelligent (and perhaps even conscious, an even higher hurdle to clear). That machine’s design will probably hew more closely to the dynamics of the human brain than do the (still quite impressive) neural networks of today.

Will this lead to a singularity? Well, maybe, though I’m convinced enough by the arguments of Hawkins to believe that it won’t literally happen overnight.

How about the super-cyborg-head-in-the-cloud-computer kind of singularity? Well, maybe that’ll happen someday, though it’s currently hard to see how we’re going to work out a seamless, high-bandwidth brain/supercomputer interface anytime soon. It’s going to take time to get it right, if we ever do. I guess figuring all those details out will be the first homework we assign to our AGI friends. That is, hopefully friends.

But here’s the thing. If we ever do figure out the interface, it seems possible that we’ll be “storing” a whole lot of our artificial neocortex reference frames (let’s call them ANREFs) in the cloud. If that’s true, then we may be able to swap ANREFs with our friends and neighbors, which might mean we can quickly share skills I-know-Kung-Fu style. (Cool, right?)

It’s also possible that the reticulum of all those acquired ANREFs will outlive our mortal bodies (assuming they stay mortal), providing a kind of immortality to a significant hunk of our expanded brains. Spooky, yeah? Who owns our ANREFs once the original brain is gone? Now that would be the IP battle of all IP battles!

See how weird things can quickly get once you start to think through singularity stuff? It’s kind of addictive, like eating future-flavored pistachios.

Anyway, here’s one prediction I’m pretty certain of: it’s gonna be a frigging mess!

Humanity will not be done with its species-defining conflicts, intrigues, and massively stupid escapades as it moves toward superintelligence. Maybe getting smarter–or just having smarter machines–will ultimately make us wiser, but there’s going to be plenty of heartache, cruelty, bigotry, and turmoil as we work out those singularity kinks.

I probably won’t live to see the weirdest stuff, but that’s okay. It’s fun just to think about, and, for better and for worse, we already live in interesting times.

Addendum: Since I wrote this original piece, things have been moving so quickly in the world of AI that I revisited the topic in The Singularity Just Got Nearer…Again…And How!

Featured image by Adindva1: Demonstration of the technology "Brain-Computer Interface." Management of the plastic arm with the help of thought. The frame is made on the set of the film "Brain: The Second Universe."

The Extended Human

A nest or hive can best be considered a body built rather than grown. A shelter is animal technology, the animal extended. The extended human is the technium.

Kevin Kelly

I like the phrase “extended human” because these days so much of our lives is spent doing just that: extending. We extend toward one another via our increasingly pervasive networking technologies, of course, but also via our words, our art, our organizations and our sometimes frighteningly fervent tribes of like-minded people.

Without these extensions, there can be no reticula – or, at least, none that includes humanity. It’s as if we are all connected neurons, the tentacled creatures of our own dreams and nightmares.

Kevin Kelly, the author of What Technology Wants, uses the phrase extended human to mean the same thing as the technium, which he defines as the “greater, global, massively interconnected system of technology vibrating around us.” But I see the extended human as beginning not with our technologies but with the reticula within: our woven, language-loving, community-seeking minds. A human who is armed only with ideas and imagination still has an amazing ability to extend herself into the universe.

Connection Matrix of the Human Brain

Technological Kudzu

Still, it’s true that the technium vastly enhances our natural tendency toward extension. In fact, as Kelly points out (and all anthropologists know), our inclination toward tool usage predates our emergence as a species. Our evolutionary predecessors such as Homo erectus were tool users, suggesting this propensity is somehow encoded in, or at least made more likely by, our DNA.

These days, our extensions are growing like so much technological kudzu. Think about the growth of Zoom and other video conferencing applications. These technologies have become among the latest technological imperatives, along with basics such as electricity, plumbing and phones/cell phones.

But there’s something missing in all this. Extensions are powerful alright, but what, exactly, are we extending? That is, what is at the core of the extended human? It isn’t a technological issue but, rather, a philosophical, psychological, existential or even spiritual one.

How Far Is Too Far?

This is where things not only get tricky but downright divisive.  The Buddhist may argue that “nothing” is at the core, that most of what we want to extend is sheer ego and delusion. The Christian may argue that immortal souls are at the human core, souls which have the propensity for good or evil in the eyes of God. The Transhumanist may argue that the human body and brain are the core, both of which can be enhanced and extended in potentially unlimited ways.

Few would argue against the idea that humans should be an extended species. Even the lowest-tech Luddites rely on tools and technologies. What we will spend the next several decades arguing about are two related issues:

1) What is at the core of humanity? What should we value and preserve? What can we afford to leave behind in the name of progress and freedom?

2) How far should we extend ourselves? Should we set collective limits for fear that we’ll lose our essential humanity or cause our own extinction? If so, how can we reasonably set limits without magnifying the risks of tyranny or stagnation?

All sorts of other subjects will be incorporated into these two basic issues. For example, collective limits on technological advances become more likely if associated dangers – higher rates of unemployment, increased risks of terrorism, environmental crises, etc. –  loom larger over time. Although we will frame these issues in various ways, they will increasingly be at the center of our collective anxiety for years to come. It’s the price of being the most extended species in the reticulum.

Featured image by Sheila1988; Agricultural tools at show

The Tao of Python

I’ve been thinking about the Tao of Python. It’s a tricky thing to pin down. The Chinese concept of Tao is, in itself, famously hard to grasp and, therefore, to translate. It’s sometimes glossed as “the way” or “the path.” But what I’m aiming at here is to explain (at least to myself) the spirit of this programming language.

Why? Because I suspect it’s easier and more motivating for the beginner to learn a new computer language if they have a feel for the essential aspects of it, aside from the strictly technical details. These days, amid the plethora of language choices, adopting a new argot is like learning a new culture. If you divorce the technical details from the culture, you likely suck a lot of the enjoyment out of coding.

This will feel like pure sentiment to some people, who view any code as little more than a tool. Does one wax poetic over one’s hammer, drill or soldering iron? Probably not. But computer languages feel different, and it seems to me that Python brings out more emotion in people than do most other languages.

The Yin and Yang of Python

I’ve found that, like the rest of us, Python has a lighter side and a darker side. When you go to python.org in order to download it, poke around for a bit to get a feel for what else is there. It’ll give you an idea of what I deem the yin and yang of the Python world.

Yin and yang symbol by Klem

The yin represents the soul of Python: that is, the ideals, the culture, the community and overall vibe. It’s the kind of stuff that helps motivate you when learning a language, but it’s usually given short shrift, if mentioned at all, in books or tutorials.

The yang, on the other hand, is the body of Python: that is, the coding details, the documentation, the references, the tutorials, bylaws and sundry other particulars. It’s the kind of stuff that’s usually the subject matter of books and other resources.

So, why don’t more books and blogs focus on the yin? Two reasons, I think. First, a lot of committed coders have already bought into the yin. They’ve often been coding since they were wee lads and lassies. The yin is the air they’ve been breathing, so they don’t need any discussions or celebrations of it. Second, coders tend to be practical folk. They want just the facts, ma’am, because they want to get up-to-speed on a new language as soon as they can, often in order to get a specific project done.

But the true beginners? We want facts, sure, but we also want to believe we’re doing something fun and worthwhile in a world rife with countless other ways to spend our time. We need, in short, to be converted to the Church of Coding.

The Yin

The Community

At the soul of Python is its community. You can get a feel for it at python.org, where there is a large section called (what else?) “Community.” It provides practical information on conferences, workshops, awards, user groups, etc. But I think that the Python community goes beyond the stuff appearing on that page.

From a wider perspective, the community is made up of folks who go to the trouble of helping each other solve problems in many different forums. Stackoverflow.com, for example, is “a question and answer site for professional and enthusiast programmers”; I’ve found that it’s a great place to gain practical insights into Python (among other things).

There are also face-to-face meetings, though they can be harder to find. Many cities now have makerspaces/hackerspaces where people congregate to make stuff, teach one another to code, and otherwise fly their geek flags. There are also meetups and user groups in which people come together just to discuss programming in Python. And there are a number of Python-focused conferences in a given year.

The Open-Source Vibe

There are bound to be communities built up around just about any computer language, but Python has a certain idealistic vibe about it due to its origins and licensing. After all, Python is free and open-source, otherwise known as FOSS. You can download it onto your computer for free, but “free” has another meaning when it comes to Python. That is, you’re free to copy and re-use it.

As for the term “open source,” it means that people can have access to the source code – that is, the computer instructions written in a human-readable computer language. There are other criteria associated with open source as well. In fact, the Open Source Initiative includes ten different criteria, among them that an open-source license “must not discriminate against any person or group of persons” and that it must not “restrict anyone from making use of the program in a specific field of endeavor.”

There’s a sort of practical idealism here that, in other situations, might be sniffed at as some sort of Commie-Socialist-New-Age craziness. But toward the end of the 20th and beginning of the 21st centuries, open-source software changed the landscape of the software industry, “turning it from a mostly capitalist economy into a mixed one,” according to The Economist.

Open source has become a critical part of today’s businesses. Indeed, we now have articles from consulting groups explaining to businesses “Why You Need an Open Source Software Strategy.”

Bottom line: the Python vibe is unique but is influenced by the larger open-source ambiance that has radically altered the software landscape. You get the sense this stuff is going to have a much more lasting legacy than the communes of the 1960s.

The Way of the Python Programmer

There’s an aesthetic associated with Python programming. I think it applies outside the Python community as well but is especially associated with it. It’s nicely summed up in the following guiding principles developed by Tim Peters:

The Zen of Python

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren’t special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
There should be one– and preferably only one –obvious way to do it.
Although that way may not be obvious at first unless you’re Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it’s a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea — let’s do more of those!

There’s a combination of humor and wisdom here that seems especially “Pythonic.” Just type “import this” at the Python prompt (in IDLE, for example) and you’ll find the poem printed directly to your console!
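
In fact, the whole demonstration fits on one line (the comment is mine):

import this  # running this prints "The Zen of Python, by Tim Peters" followed by the poem above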

Mischievous, fun, practical and beautiful: it’s all part of the Python aesthetic.

Another place to pick up on it is the geek-beloved webcomic xkcd. Here’s one of the more famous Python-focused comics:

But there are also some other great ones related to Python such as New Pet and Electric Skateboard. It’s whimsical geek humor at its best.

The Yang

Despite the fun, whimsy and camaraderie associated with the yin of Python, Python isn’t some New-Agey all-inclusive drum circle where even the rhythmically challenged are welcome. At its core is a serious computer language used to build software on which businesses and sometimes even lives depend.

If you’re a beginning coder, think of yourself as a lion cub. You play around a lot, mixing it up with the other cubs, stalking insects, watching your elders bring home the big prey. Most of your playing around, however, serves a serious purpose: getting skilled and strong enough to move beyond wrestling with kittens and eating bugs. You want to eventually engage in the serious art of hunting and killing gazelles (or whatever) and battling dangerous threats when needed. In short, all that playing around has a serious purpose.

In the human world of coding, there’s no reason you can’t stay a cub forever, as long as programming isn’t required to make your living. You can fool around with Python tutorials and games to your heart’s content. But there are plenty of folks who use Python and other languages to not only earn their bread but to tackle some of the most important problems in the world. Python is a tool for creating solutions to those problems, solutions that programmers refer to as algorithms.

The terrific book (which has an interactive edition) How to Think Like a Computer Scientist defines an algorithm as “a step by step list of instructions that if followed exactly will solve the problem under consideration.” So, you’re taking those instructions and putting them into a language (in our case, Python) that the computer can understand. Once you’ve done that, you have essentially automated the execution of your solution to a given problem.
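
To make that concrete, here’s a minimal sketch of one classic algorithm written out in Python: Euclid’s method for finding the greatest common divisor of two numbers. (This is my own toy example, not one from the book.)

def gcd(a, b):
    """Return the greatest common divisor of a and b using Euclid's algorithm."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, remainder) until the remainder hits zero
    return a

print(gcd(48, 36))  # prints 12

Follow those steps exactly and the problem gets solved, every single time. That’s all an algorithm is.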

Bottom line: this process requires precision, which is one reason coding has so much bloody documentation. If you don’t sweat the details, your code just won’t work properly. For people who are used to the creative ambiguity of human languages, this can seem persnickety and anal and just no frigging fun.

Poetry, for example, thrives on artful ambiguity and multiple meaning. Coding doesn’t. Computers are literal creatures. They need things properly and exactly spelled out. It’s their way or the highway. This deep need for structure, logic and rigor can be thought of as the yang of Python.

Speaking in Tongues While Handling Snakes

Python demands that you obey its internal logic. To do this, you need a crystal clear understanding of how it works. The good news is that the language itself is pretty darn lucid, perhaps more so than any other computer language. The bad news is that good coders can be rotten writers. That is, they can fail miserably when they write for other people, especially beginning coders.

I think a lot of the documentation on the otherwise excellent python.org website is a case in point. Reading it is sometimes like parsing an awkwardly translated instruction manual while trying to put together a new grill. Or maybe like striving to grok Stéphane Mallarmé’s symbolist poetry:

Nothing, this foam, virgin verse
Depicting the chalice alone:
Far off a band of Sirens drown
Many of them head first.

You want to yell, “Speak the English!”

This is clear even to some Python experts. Giving a short talk at PyCon, the largest annual Python-related conference, Python instructor Simeon Franklin asked other Python professionals to take a close look at information on the python.org site in regard to specific features such as “docstrings.” He says that, if those professionals didn’t already know what these features were, they’d have a tough time gleaning it from the text.  He urges them to look at such information with a “beginner’s mind.”

But that can be hard to do for many experts. As a result, it’s as if somebody got filled with some holy spirit of coding and started going all glossolalia on us, speaking in Python tongues. If I were a tad more paranoid, I’d think the documentation were intentionally designed to obfuscate the esoterica of the erudite, to bewilder and baffle the benighted seekers after coding clarity and light. Here’s what python.org tells us in a section called “What is a Docstring”:

A docstring is a string literal that occurs as the first statement in a module, function, class, or method definition. Such a docstring becomes the __doc__ special attribute of that object. All modules should normally have docstrings, and all functions and classes exported by a module should also have docstrings. Public methods (including the __init__ constructor) should also have docstrings. A package may be documented in the module docstring of the __init__.py file in the package directory.

Um, okay. Sure thing. Uh huh.

Look, as a beginner, I’m not going to have much of a clue about what distinguishes a module from a function from a class. And alluding to the __init__ constructor just seems cruel, as if someone were giving you their phone number in Morse code.

Now here’s how Swaroop C H puts it in his book A Byte of Python:

Python has a nifty feature called documentation strings, usually referred to by its shorter name docstrings. Docstrings are an important tool that you should make use of since it helps to document the program better and makes it easier to understand. Amazingly, we can even get the docstring back from, say a function, when the program is actually running!

See, was that so hard? He then goes on to show examples of what docstrings look like in some easy code.

The bottom line is pretty simple: programmers use docstrings to tell one another what various blocks of code are supposed to do. Yes, there are other relevant details, such as where to use docstrings and how to format them, but those nuances are less important to the beginner than answering the basic question, “What in the hell is it?”
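
To see one in action, here’s a toy function of my own invention; the triple-quoted string right under the def line is the docstring:

def greet(name):
    """Return a friendly greeting for the given name."""
    return "Hello, " + name + "!"

print(greet("Guido"))  # Hello, Guido!
print(greet.__doc__)   # reads the docstring back while the program runs

That last line is the trick Swaroop mentions: the docstring stays attached to the function, so you can get it back at runtime.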

All the documentation on python.org is supposed to bring light to users with questions. Sometimes it does, being very rich in detail. But to the beginner, it often feels more like dense, dim thickets through which we have to machete our way. Ah well. In the case of particularly dense thickets, there are usually other sources of information in books, tutorials or on the Web. You’ve just got to figure it out. After all, yang happens. It’s all part of the path.

PS – The Python community has, to its credit, become more accommodating to beginners over the years. For example, here are Python Frequently Asked Questions geared toward beginners.

Featured image by Rolf Dietrich Brecher from Germany. See https://commons.wikimedia.org/wiki/File:Yin_and_yang_(36365569814).jpg

Minding the Universe

I read an article about a study that finds similarities between the human brain and networks of galaxies in the universe. Maybe we should be minding the universe a bit more carefully if it truly is, as the playwright once said, the mind of God.

Camerae Ready

Surprising? I don’t know. It seems as if the universe uses a lot of the same basic structures over and over at different scales, and these structures often have mathematical counterparts. One of the more famous examples is the Fibonacci sequence, in which each number is the sum of the two preceding ones, starting from 0 and 1. That is, 0 + 1 = 1, 1 + 1 = 2, 1 + 2 = 3, 2 + 3 = 5, 3 + 5 = 8, 5 + 8 = 13, 8 + 13 = 21, etc. So, the actual sequence looks like 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and on and on.
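
For the coders in the audience, here’s a quick sketch in Python (my own, purely for illustration). Notice how the ratio of neighboring terms sneaks up on the golden ratio of about 1.618, which is part of what ties the sequence to spirals:

def fibonacci(n):
    """Return the first n Fibonacci numbers, starting from 0 and 1."""
    sequence = [0, 1]
    while len(sequence) < n:
        sequence.append(sequence[-1] + sequence[-2])  # each term is the sum of the two before it
    return sequence[:n]

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
terms = fibonacci(20)
print(terms[-1] / terms[-2])  # about 1.618, the golden ratio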

What’s interesting is that this pattern shows up with some frequency in nature. Perhaps the most famous example is the spiraling nautilus shell, which is composed of chambered sections called camerae. Each chamber is roughly equal in size to the two camerae before it combined, creating a logarithmic spiral.

But there are other examples: tree branches, flower petals, the seeds in sunflowers. The pattern may extend to much larger phenomena as well, such as hurricanes and spiral galaxies.

So, perhaps it should not be surprising to find that brains and the universe are largely defined by their networks (that is, neurons and galaxies) made up of nodes connected by filaments. In short, both are kinds of reticula. To read the actual report, go here.

Assuming the authors have a legitimate point, what do we make of the similarities between the universe and the human brain? Are we supposed to consider the idea that the universe is itself a thinking organism of some sort, that we exist in the mind of God?

Panpsychism

That’s too great a logical leap for me to make, but maybe it does lend support to the pseudo-scientific notion that the universe is conscious. In his book Galileo’s Error: Foundations for a New Science of Consciousness, philosopher Philip Goff considers the idea that consciousness is not something special that the brain does but is instead a quality inherent to all matter, a theory known as “panpsychism.” To read an interview in which he discusses the notion, go here.

Goff isn’t alone in wondering about the consciousness of the universe. Astrophysicist Ethan Siegel has discussed it in Forbes, and NBC News highlights other thinkers in its article “Is the Universe Conscious?”

I don’t know what to think about all this. It feels a bit like the Gaia hypothesis (which is the idea that the interconnected biological systems of the Earth act as one, enormous organism), except extended to “infinity and beyond” (in the immortal words of Buzz Lightyear).

Our Town

Back in my college days, I was in a staging of the play Our Town, in which I played the character George. I don’t remember many of George’s lines, but I do remember a scene in which he speaks with his sister Rebecca at the end of Act One:

REBECCA: I never told you about that letter Jane Crofut got from her minister when she was sick. He wrote Jane a letter and on the envelope the address was like this: It said: Jane Crofut; The Crofut Farm; Grover’s Corners; Sutton County; New Hampshire; United States of America.

GEORGE: What’s funny about that?

REBECCA: But listen, it’s not finished: the United States of America; Continent of North America; Western Hemisphere; the Earth; the Solar System; the Universe; the Mind of God–that’s what it said on the envelope.

GEORGE: What do you know!

REBECCA: And the postman brought it just the same.

GEORGE: What do you know!

I doubt Thornton Wilder was the first writer or mystic to envision the universe as the mind of God. But I do wonder what he’d think about the fact that here in the third decade of the 21st century, it has become an idea taken seriously by the likes of philosophers, physicists, and science journalists. What do you know!

Featured image from https://en.wikipedia.org/wiki/File:NautilusCutawayLogarithmicSpiral.jpg. Nautilus shell cut in half. Photo taken by Chris 73 | Talk 12:40, 5 May 2004 (UTC)