Some scientists believe that our brains work according to Bayesian logic. Or, at least, we may be able to use such logic to replicate the ways our minds work. This is a complex topic that can’t be covered in one post (especially once we start talking about the free energy principle), so let’s start by discussing the connections between Bayes’ Theorem and human cognition.
What Is Bayes’ Theorem?
Bayes’ Theorem was formulated by Thomas Bayes (English statistician, philosopher, and Presbyterian minister) back in the 1700s. The theorem is all about the probability of one event happening given that another has happened, calculated from probabilities you already know. Here it is in a nutshell:

P(A|B) = [P(B|A) × P(A)] / P(B)

In words: the probability of A given B equals the probability of B given A, times the probability of A, divided by the probability of B.

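If it helps to see the theorem as code, here’s a minimal Python sketch (the function name and the test numbers are just mine, made up for illustration):

```python
def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Return P(A|B) via Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return (p_b_given_a * p_a) / p_b

# Quick sanity check with made-up numbers:
# if P(B|A) = 0.5, P(A) = 0.2, and P(B) = 0.25, then P(A|B) should be 0.4.
print(bayes(0.5, 0.2, 0.25))  # 0.4
```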
There are plenty of worked examples of how to use Bayes’ Theorem out there if you want more practice.
Examples with Chart
I’ve provided a couple of examples using a chart I took from a Khan Academy lesson on conditional probabilities. I haven’t seen Bayes taught this way, but figured it might be useful as a way of helping myself think through it.
The following is information about one man’s train travels as they pertain to weather, travel delays, and the number of travel days in each weather category. The chart boils down to these counts:

| Weather | On time | Delayed | Total days |
| --- | --- | --- | --- |
| Snowy | 8 | 12 | 20 |
| Rainy | 40 | 15 | 55 |
| All other days | 282 | 8 | 290 |
| Total | 330 | 35 | 365 |

All the data needed for a set of probability problems is already there, so my assumption is that one can check Bayesian calculations against the numbers in the chart.
Example One: Chance of Delay If Snowing
For example, let’s say you want to find out the relationship between travel delays and snowy weather. If you just use the chart, you can see that there were 20 snowy travel days in total. There were delays on 12 of those days, so there was a 60% chance of a delay when it snowed (that is, divide 12 days by 20 days to get .6).
But let’s assume you don’t have the full chart; you do, however, know some relevant pieces of information. What you want to know is the chance of a delay if it’s snowing. So, you set up the following:
P(A|B) is P(delay|snowy): that is, the chance of a delay given that it’s snowing: currently unknown
P(A) is P(delay): the chance of any delay in a given year = 35 delay days / 365 total travel days = .096
P(B|A) is P(snowy|delay): the chance it’s snowy if there’s a delay = 12 delay days when it snowed / 35 total delay days = .34
P(B) is P(snowy): the probability of snow on any given day = .055
So, here’s what you end up with:
(.096 * .34) / .055 ≈ .59, or roughly 60%
You arrive at essentially the same answer as before, even though you didn’t know the total number of snowy days (20) this time around; the small gap from the exact 60% comes from rounding the intermediate values. So, Bayes gives you a good probability even without complete information.
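If you’d rather see that as code, here’s a minimal Python sketch of the same calculation. The counts and probabilities come straight from the chart and the steps above; only the variable names are mine.

```python
# Direct calculation from the full chart: 12 delayed days out of 20 snowy days.
print(12 / 20)  # 0.6

# Bayes' Theorem using only the pieces of information listed above.
p_delay = 35 / 365             # P(A): a delay on any travel day, about .096
p_snowy_given_delay = 12 / 35  # P(B|A): snow given a delay, about .34
p_snowy = 0.055                # P(B): snow on any given day, as given above

p_delay_given_snowy = (p_snowy_given_delay * p_delay) / p_snowy
print(round(p_delay_given_snowy, 3))  # 0.598, i.e. roughly 60%
```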
Example Two: Chance of Being On Time If Rainy
This time, let’s say you want to know the chance that you’ll be on time if it’s raining. If you have complete information from the chart, you can divide 40 (the number of on-time days when it’s raining) by 55 (the total number of days traveled when it’s raining). That gives .73, or 73%.
But let’s say you don’t have the full chart. So, you set up the following:
P(A|B) is P(on-time|rainy): that is, chance of being on time given it’s raining: currently unknown
P(A) is P(on-time): the chance of being on time in a given year = 330 on-time days / 365 total travel days = .90
P(B|A) is P(rainy|on-time): the chance it’s rainy if you’re on time = 40 on-time days when it rained / 330 on-time days = .12
P(B) is P(rainy): the probability of rain on any given day = 55 / 365 = .15
So, here’s what we end up with:
(.90 * .12) / .15 = .72
We arrive at essentially the same answer as before; the small difference (.72 versus .73) comes from rounding the intermediate values.
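The same kind of sketch works for example two, again using only numbers already quoted above:

```python
# Direct calculation from the full chart: 40 on-time days out of 55 rainy days.
print(round(40 / 55, 3))  # 0.727, or about 73%

# Bayes' Theorem using only the pieces of information listed above.
p_on_time = 330 / 365             # P(A): on time on any travel day, about .90
p_rainy_given_on_time = 40 / 330  # P(B|A): rain given an on-time day, about .12
p_rainy = 55 / 365                # P(B): rain on any given day, about .15

p_on_time_given_rainy = (p_rainy_given_on_time * p_on_time) / p_rainy
print(round(p_on_time_given_rainy, 3))  # 0.727 again
```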
I should note that when you’re doing this kind of analysis, you don’t always know how a particular percentage was derived. You just know the proportions based on some known standard (such as the accuracy rate of a certain medical test).
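To make that concrete, here’s a quick sketch with invented numbers for a hypothetical medical test. None of these figures come from the chart or from any real test; the point is just that knowing the proportions is enough for Bayes’ Theorem to work.

```python
# Hypothetical proportions, purely for illustration:
p_disease = 0.01            # base rate: 1% of people have the condition
p_pos_given_disease = 0.90  # sensitivity: the test catches 90% of true cases
p_pos_given_healthy = 0.09  # false-positive rate: 9% of healthy people test positive

# P(positive) via the law of total probability.
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' Theorem: P(disease | positive test).
p_disease_given_pos = (p_pos_given_disease * p_disease) / p_pos
print(round(p_disease_given_pos, 3))  # 0.092, i.e. under 10% despite a seemingly accurate test
```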
Bayes and Cognition
Some researchers believe that the mind is a prediction machine. The idea is that the brain somehow assigns probabilities to hypotheses and then updates them according to the probabilistic rules of inference.
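As a toy illustration of that idea (and only an illustration; the hypotheses, observations, and likelihoods below are all invented), here’s what repeatedly updating beliefs with Bayes’ rule looks like in code:

```python
# Two competing hypotheses about the world, with a prior belief in each.
prior = {"rain": 0.5, "dry": 0.5}

# How likely each observation is under each hypothesis (invented numbers).
likelihood = {
    "see an umbrella":  {"rain": 0.8, "dry": 0.2},
    "hear no dripping": {"rain": 0.3, "dry": 0.7},
}

def update(belief, observation):
    """Return the posterior belief after one observation, via Bayes' rule."""
    unnormalized = {h: likelihood[observation][h] * p for h, p in belief.items()}
    total = sum(unnormalized.values())  # P(observation), used for normalization
    return {h: p / total for h, p in unnormalized.items()}

belief = prior
for obs in ["see an umbrella", "hear no dripping"]:
    belief = update(belief, obs)
    print(obs, {h: round(p, 2) for h, p in belief.items()})
# see an umbrella {'rain': 0.8, 'dry': 0.2}
# hear no dripping {'rain': 0.63, 'dry': 0.37}
```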
But do our minds actually use Bayesian inference?
Joshua Brett Tenenbaum, Professor of Computational Cognitive Science at the Massachusetts Institute of Technology, has stated that Bayesian programs are effective at replicating “how we get so much out of so little” in our cognition.
Others have been more skeptical of the notion that our minds use some form of Bayesian reasoning. Jeffrey Bowers, Professor of Psychology at the University of Bristol, notes that information-processing models such as neural networks can replicate the results of Bayesian models.
Can Neural Networks and Bayesian Approaches Work Together?
Some say that Bayesian inference is a key aspect of modern generative AI models, which are based on neural nets. As one source explains:
The computer starts with a basic understanding of the English language, such as grammar rules and common phrases. It then reads the vast library of text and updates its understanding of how words and phrases are used, based on the frequency and context in which they appear.
When you provide the computer with a starting sentence or a few words, it uses its Bayesian understanding to estimate the probability of what word or phrase should come next. It considers not only the most likely possibilities but also the context and the content it has learned from the library. This helps it generate sentences that make sense and are relevant to the given input.
The computer continues this process, picking one word or phrase at a time, based on the probabilities it has calculated. As a result, it can create sentences and paragraphs that are not only grammatically correct but also meaningful and coherent.
In summary, a Bayesian approach helps an AI generative language model learn from a large collection of text data and use that knowledge to generate new, meaningful sentences based on the input provided. The computer constantly updates its understanding of language and context using Bayes’ idea of probability, enabling it to create content that is both relevant and coherent.
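To be clear, real generative models are neural networks rather than explicit Bayesian calculators, so the passage above is best read as an analogy. Still, the conditional-probability intuition it describes can be shown with a deliberately tiny sketch: count which words follow which in a small “library” of text, then estimate the probability of the next word given the current one.

```python
from collections import Counter, defaultdict

# A deliberately tiny "library of text," invented for illustration.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Estimate P(next word | current word) from the counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
print(next_word_probabilities("cat"))  # {'sat': 0.5, 'ate': 0.5}
```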
So, is Bayes the secret “hero” behind today’s generative AIs? Beats me. It’s something I’ll need to investigate further with people who actually develop these systems.
Another avenue of investigation is the work of those who are trying to use the so-called free energy principle, which is also based on Bayesian ideas, to create new AI systems. One organization that seems to be working on this is Verses, which last March published the executive summary of “Designing Ecosystems of Intelligence from First Principles.” That’s now on my “to read” pile.