📈🧐 Bayesian Thinking - Part 2

I often find myself in a peculiar mental exercise whenever I hold a strong and stubborn belief. It goes something like this: I pause and ask myself, "How likely is it that the opposite of my belief could also be true?"
For instance, when I hear that Google is launching yet another new software product, my initial reaction might be somewhat skeptical. My skepticism often hinges on the existence of a website like Killed by Google, which documents the demise of various Google products. Based on this, I might expect the new product to follow a similar fate.
Moreover, I might have prior evidence suggesting that the odds of a software product launch failing outweigh the chances of success. However, if I cling to this belief without exploring the product's offerings at all (because I assume it will fail and be discontinued by Google anyway), I run the risk of missing out on something valuable.
So, I take a moment to challenge myself. I think, "Wait a second, Google is practically the default search engine, email provider, video streaming service, and web browser for many people. They excel in numerous other domains, too, with a wide range of highly successful products. Perhaps I should reconsider my negative viewpoint."
This self-reflection hits home because I once nearly dismissed Google Meet as a competitor to Skype or Zoom. My previous experiences with Google Hangouts had left me somewhat wary (to put it mildly). Nevertheless, I decided to give Google Meet a chance, and to my surprise, I found it to be quite convenient in many respects. This experience prompted a change in my perspective.
The lesson here? Sometimes, it's worth challenging our steadfast beliefs and embracing a more open-minded approach. I've no idea how many opportunities I've missed out on because of my bullheaded beliefs.
So personally,
- I've stopped holding binary opinions on things. It's greyscale thinking now: every judgement is a probability, even if it's a very high one.
- I think about the chances of an opposing view being true and then examine the evidence for it. This thought experiment then alters my prior slightly.
Naive Bayes classifiers and other ML/AI applications
You can look at this process of updating our initial beliefs based on new evidence as fundamental to how we think as humans. And when you model this behavior in AI algorithms, they perform pretty well. If you've taken a look at any image recognition software, like those used in self-driving cars, or at spam email filters, you've already encountered a concept that relies heavily on this Bayesian way of thinking: Naive Bayes classifiers.
What is a Naive Bayes Classifier? In essence, a Naive Bayes classifier is a machine learning algorithm that makes decisions based on the probability of certain events occurring. It's "naive" because it assumes that the presence of a particular feature in a dataset is unrelated to the presence of any other feature, which is often an oversimplification. This simplification allows the algorithm to perform efficiently and effectively in many cases, despite the "naive" assumption.
Think of it this way: when you receive an email, a spam filter using Naive Bayes looks at the words in the email and calculates the probability that it's spam or not based on the presence of specific words or phrases. It doesn't consider the order of words or their context; it's purely a probability game. If the probability of an email being spam is higher than a certain threshold, it's flagged as spam.
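That "purely a probability game" can be sketched in a few lines. This is a toy illustration with made-up word likelihoods and priors (no real spam corpus involved); the "naive" step is multiplying per-word likelihoods as if every word occurred independently:

```python
import math

# Made-up likelihoods for illustration: P(word | spam) and P(word | ham)
P_WORD_GIVEN_SPAM = {"free": 0.30, "winner": 0.20, "meeting": 0.01}
P_WORD_GIVEN_HAM = {"free": 0.02, "winner": 0.001, "meeting": 0.10}
P_SPAM, P_HAM = 0.4, 0.6  # assumed priors

def spam_score(words):
    # Work in log space to avoid numerical underflow on long emails.
    log_spam = math.log(P_SPAM)
    log_ham = math.log(P_HAM)
    for w in words:
        if w in P_WORD_GIVEN_SPAM:
            # The "naive" independence assumption: just add up log-likelihoods.
            log_spam += math.log(P_WORD_GIVEN_SPAM[w])
            log_ham += math.log(P_WORD_GIVEN_HAM[w])
    # Convert the log-odds back into a posterior probability of spam.
    odds = math.exp(log_spam - log_ham)
    return odds / (1 + odds)

print(spam_score(["free", "winner"]))  # high score -> flag as spam
print(spam_score(["meeting"]))         # low score -> keep it
```

If the score clears some threshold (say 0.9), the email gets flagged; real filters learn these word probabilities from labeled examples instead of hard-coding them.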
The Magic of Bayes' Rule At the heart of Naive Bayes classifiers lies Bayes' Rule. This rule, named after the Reverend Thomas Bayes, is a fundamental concept in probability theory. It allows us to update our beliefs based on new evidence, just like our thought experiment about Google's software products.
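Stated plainly, the rule itself is just one line:

P(belief | evidence) = P(evidence | belief) × P(belief) / P(evidence)

In words: your updated belief (the posterior) is your old belief (the prior) weighted by how well the new evidence fits that belief, normalized by how likely the evidence is overall.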
In the case of spam filters, Bayes' Rule helps us calculate the probability that an email is spam given the words it contains. It's an elegant way of modeling how humans adjust their beliefs based on incoming information.
Applications Beyond Spam Filters While Naive Bayes classifiers are most famously used in spam filters, their applications extend far beyond that. They're used in natural language processing, sentiment analysis, medical diagnosis, and more. Essentially, anywhere you need to make decisions based on probabilities and evidence, Bayes' Rule and its compatriots come into play.
For instance, in medical diagnosis, a Naive Bayes classifier can help doctors determine the likelihood of a patient having a certain condition based on symptoms and test results. It's a powerful tool for augmenting human decision-making, especially when dealing with large datasets and complex relationships.
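Here's a toy calculation of that diagnostic update, with hypothetical numbers (a disease with 1% prevalence, a test with 90% sensitivity and a 5% false-positive rate) chosen purely for illustration:

```python
# Hypothetical numbers for illustration only
p_disease = 0.01              # prior: P(disease)
p_pos_given_disease = 0.90    # likelihood: P(positive | disease)
p_pos_given_healthy = 0.05    # false-positive rate: P(positive | healthy)

# Total probability of seeing a positive test at all
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' Rule: update the prior given a positive result
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))
```

Even with a positive result, the posterior lands around 15%, because the disease was rare to begin with. That base-rate effect is exactly the kind of belief update Bayes' Rule forces you to do honestly.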
Embracing Uncertainty and Open-mindedness In a way, using Bayesian models like Naive Bayes in AI reflects the open-mindedness we should cultivate in our own thinking. These algorithms don't make binary decisions but assign probabilities. They're inherently uncertain, just as we should be when confronted with our own beliefs.
As we keep pushing the boundaries of AI and machine learning, keep in mind that their awesomeness really comes down to being cool with a bit of uncertainty and not sticking too firmly to our "it's gotta be this way" mindset. Just like I've had to rethink my take on Google's stuff, AI systems get smarter by rolling with the punches, taking in fresh info, and becoming super versatile in all sorts of cool applications.
In conclusion, the Bayesian way of thinking, whether in our personal beliefs or in the algorithms we create, reminds us that open-mindedness and adaptability are key to making informed decisions in our ever-changing world.
A note to myself everyday:
Stay curious, and don't be afraid to question your own beliefs
from time to time. You never know what valuable
opportunities you might discover in the process.

Ockham's Razor mini-rant
Ok, mini knowledge dump - Ockham's Razor is usually misconstrued as "The simplest explanation is usually the best one" when it's actually this - "When choosing between competing hypotheses, select the one with the fewest assumptions". Very important distinction!
It's less to do with how simple the explanation sounds.
It's more to do with how many assumptions need to hold for that explanation to be true.
In more intuitive terms,
Suppose your 2-year-old nephew comes up to you and gives you a detailed excuse about how the broken headphones weren't his fault:
The wind blew open the (bolted down) window
AND
this nudged your headphones that were quietly chilling AWAY from the window on the table to fall down
AND THEN
the volume knob came apart from the 2-foot fall.
You gotta ask yourself: is that what really happened?
Or did my nephew, who has a destructive history with electronics, decide to mess around with my headphones and, in doing so, tug on the volume knob till it was no longer part of its former body?
