Ever wondered if your brain is fooling you into seeing the world differently? Dr. Daniel Yon, a cognitive neuroscientist, reveals how your brain doesn’t just observe reality—it actively creates it.
Explore how your mind constructs its own version of reality through predictive models, and why you might be "hallucinating" your experiences without even knowing it. Dr. Yon dives into how we can reshape our mental models to thrive in a world full of uncertainty.
Don’t miss out on this eye-opening conversation that will completely shift your perspective on perception, belief, and personal growth.
Tune in to unlock the secrets your brain is hiding from you!
About the guest
Dr. Daniel Yon is a cognitive neuroscientist and the author of A Trick of the Mind, a groundbreaking book that explores how our brains actively invent the reality we experience. His work sheds new light on how we perceive the world, challenging conventional views of perception. Dr. Yon is also the director of The Uncertainty Lab, where his team investigates how the brain handles uncertainty and adapts to an unpredictable world.
People and other mentions
Dr. Emilie Caspar - Just Following Orders: Atrocities and the Brain Science of Obedience
Stanley Milgram - Obedience to Authority: An Experimental View
Daniel Kahneman - Thinking, Fast and Slow
Douglas Prasher - Green Fluorescent Protein Research
Chris Frith - Making up the Mind
Jim Rohn - Personal Development Philosophy
Lewis Dartnell - Origins
Cecilia Heyes - Cognitive Gadgets: The Cultural Evolution of Thinking
Daniel Z. Lieberman - The Molecule of More
Adam Grant - Think Again
Lisa Genova - Still Alice
Timestamps
00:00:00 - Introduction & The Brain as a Hallucination Machine
00:05:56 - The Predictive Brain: How We Construct Our Reality
00:18:00 - The Science of Control & Responsibility
00:28:25 - Confidence, Identity, and Self-Perception
00:42:15 - Understanding Cognitive Dissonance
00:55:30 - The Volatility Circuit: Triggering Personal Change
01:15:00 - Depression, Identity, and Mental Models
01:25:00 - Finding Balance: Stability vs. Change
01:31:00 - Closing Thoughts & Resources
Episode Transcript
Daniel Yon: [00:00:00] When we typically in everyday language talk about hallucination, we have this sense that you're kind of in a world of your own. You are inventing a reality that might not be like the one that everyone else experiences. But what the latest psychology and neuroscience tells us is that's not a sort of bug.
It's a kind of feature of how everybody's brain works, and all of us are actively constructing the world we live in all at once.
Cody: Mind Hack is a podcast about the psychology behind performance, behavior change, and self-optimization. Each episode explores how to think clearly, work smarter, and live intentionally through insightful conversations with leading entrepreneurs, scientists, and experts in human behavior.
Hello and welcome to The Mind Hack podcast, the show where we explore performance, mindset, and personal growth. I'm your host, Cody McLain, and today's guest is Dr. Daniel Yon, a neuroscientist and [00:01:00] associate professor in cognitive neuroscience at Birkbeck, University of London. Daniel directs a research team called The Uncertainty Lab, where he studies how our brains build models of the world to navigate everything from everyday perception to complex social interactions.
I invited Daniel on because his work reveals something fascinating about our reality. Our brains don't just passively observe the world. They actively invent it. In his new book, A Trick of the Mind, Daniel makes the compelling case that our brains are like scientists, constantly generating and testing theories about our surroundings.
His approach offers a fresh way to understand why we see what we see, believe what we believe, and even why we might fall for misinformation. In this conversation, we'll dive into how our brain's predictive [00:02:00] models can either make us feel powerless or put us in full control of our reality. We'll also unpack the idea of paradigm shifts, exploring how we can intentionally update our mental models to adapt to a changing, and often unpredictable, world.
So let's get into it. Please welcome Dr. Daniel Yon. Daniel, welcome to the show. So if I told someone that their brain is hallucinating its way through life, they might think I'm joking, but that's more or less what your book is saying: that our experience of reality isn't really a mirror. It's more of a theory, a guess, if you might say so.
In some ways I think that changes everything about how we think about truth, trust, or even ourselves. So what was the moment when you first saw the brain, [00:03:00] as you say, as a skull-bound scientist, inventing our reality on the fly?
Daniel Yon: Well, I think it's an interesting way of thinking, precisely for the reason you described.
I think the moment when, scientifically, I started to see the brain in this completely flipped perspective was when I started to rethink what it really means to hallucinate at all. When we typically talk about hallucination in everyday language, we have this sense that you're kind of in a world of your own, inventing a reality that might not be like the one that everyone else experiences.
But what the latest psychology and neuroscience tells us is that's not a sort of bug. It's a kind of feature of how everybody's brain works. And all of us are actively constructing the world we live in all at once.
Cody: Hmm. And so it kind of reminds me of the free energy principle, about how our brains work like a prediction machine that reduces surprise.
And I swear I've heard of this concept [00:04:00] before, that our brains hallucinate our reality. Maybe it was Thinking, Fast and Slow, with its System 1 and System 2 thinking, since our System 1 has to kind of bridge the gaps, so to speak. Is there any correlation? Where did you come up with this premise, this core idea?
Daniel Yon: I see what you mean, and I think you're right. There are lots of ideas here that will be familiar, like the thinking fast and slow idea that people like Daniel Kahneman popularized. I think there are lots of deep truths to that way of thinking, but there's also something it obscures a bit. What you usually take away from the thinking fast and slow framing is that you see your mind as being either this slow, rational thinker, or a bundle of instincts and [00:05:00] biases that might lead you astray.
The reason I think it's a better starting premise to think of your brain as like a scientist, really intended as both a kind of insult and a compliment, is that it makes you realize you don't need to posit lots of different kinds of processes in your mind.
It could be that your mind and brain are always in the game of doing the same thing: trying to make a theory that makes the world make sense. And through that fundamentally very rational process, you can nonetheless get caught with paradigms and theories that don't actually fit the way the world works.
And I think that's a good model of how science really works, right? Because, as any scientist will tell you, and I say this as a scientist myself, science might be the best way we've come up with to understand the world we live in, but it's by no means infallible.
The history of science is the history of people who tried really, really hard to make sense of the world around them, [00:06:00] and nonetheless ended up with theories that didn't quite fit the way things work. You could have astronomers taking very careful measurements for centuries and still end up thinking that the sun revolves around the Earth. It can be a wonderful way of making sense of the world.
And even by trying to do your best to make sense of it, you can still nonetheless end up with a somewhat blinkered picture.
Cody: Hmm. And so I'm curious how much of a role prediction plays in our normal perception. We might go through our day, we make the coffee, and we're all familiar with this idea that once you do something enough, or learn something, you're no longer having to consciously think about it. It just kind of comes, and I think that's perhaps from that background, automatic system of thinking.

So it seems like a lot of what our brain does is fill in the gaps, like a phone keyboard suggestion, so that we don't always notice the raw data. I'm [00:07:00] curious how that plays into our everyday lives. Does that make it difficult for us to notice new information unless we're specifically seeking it?
Daniel Yon: I think that's exactly right. In some ways your brain is a kind of amazing labor-saving device, and it hides all the hard work it's doing from you. So in that example you give, of walking down in the morning, getting the coffee cup out, making the coffee, it feels pretty effortless to be doing the perception, but your brain is hiding from you all the problems it's solving to get you to that effortless, fluid motion.
In particular, you said there that we're not in touch with the kind of raw data, and we wouldn't want to be. The raw data is just so ambiguous and so uncertain that it would make no sense if you tried to make sense of the world without these kinds of predictions. Even something simple, like when you get the cup out of a cupboard and put it on the kitchen counter, you have to work out as you go to pick it up:
How big is the object, right? How wide should you open your hand to pick it up? In order to do [00:08:00] that, you have to somehow create some estimate of how big it is, but it's actually impossible to do that from pure vision alone. If you think about what's happening with that object in front of you, the light's bouncing off it, it's hitting your eye, and you're getting a sort of two-dimensional shadow on your eye of the coffee cup you're trying to grasp.
And that shadow carries lots of uncertainty about what the object's really like. It could be a really big cup that's really far away, or a really small cup that's really close. And if your brain were to try on every possible one of these infinitely many different interpretations, you'd never be able to make your coffee in the morning.
It'd be constantly trying to navigate all of this ambiguity. But as you say, you don't have that experience, right? You have this sense that your brain's got a model of how the world around you works, and as a consequence, you don't need to fight between all these competing alternatives. You can just select the most likely hypothesis.
You can come up with the theory that's most likely to be true. That does mean, though, that you're going to [00:09:00] be seeing the world through the filter and the prism of those expectations. So by solving one problem you're presented with another: you end up blinkering yourself, such that you see the world through the prism your brain has usefully created for you, but it will shut out some unexpected alternatives.
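This "select the most likely hypothesis" idea can be made concrete with a small toy sketch. This is my own illustration, not from the episode or the book, and every number in it is invented: the same retinal angle is consistent with a big far-away cup or a small nearby cup, but a prior over typical cup sizes collapses the ambiguity onto one interpretation.

```python
import numpy as np

# Toy Bayesian sketch of the size/distance ambiguity described above.
# All numbers are invented for illustration.

sizes = np.linspace(1.0, 40.0, 400)      # candidate cup heights, cm
dists = np.linspace(10.0, 400.0, 400)    # candidate distances, cm
obs_angle = 10.0 / 60.0                  # observed visual angle, radians

S, D = np.meshgrid(sizes, dists, indexing="ij")
implied_angle = S / D                    # angle each (size, distance) pair implies

# Likelihood: the observed angle is the implied angle plus sensory noise.
likelihood = np.exp(-0.5 * ((implied_angle - obs_angle) / 0.02) ** 2)

# Prior: coffee cups are usually around 10 cm tall; distance is left open.
prior_size = np.exp(-0.5 * ((sizes - 10.0) / 2.0) ** 2)

# Posterior over size: weight by the prior and sum out the unknown distance.
posterior_size = (likelihood * prior_size[:, None]).sum(axis=1)
best = sizes[np.argmax(posterior_size)]
print(f"most likely cup height: about {best:.0f} cm")
```

With a flat prior, every point along the ambiguity line would score roughly equally; it is the prior that lets one hypothesis win, which is exactly the trade-off described here: fast, stable perception at the cost of seeing through that prism.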
Cody: Yeah, and it seems that can kind of lock us into a certain paradigm, a certain way of seeing. So it seems our brains are pattern machines, right? And so, on the flip side of that, why do we sometimes experience things like a phantom phone vibration, or think that somebody called our name when they really didn't?
Daniel Yon: Yeah, so those kinds of everyday hallucinations, as you might call them, are situations where you have a perceptual experience without a corresponding event in the real world. You can think that those happen because your brain is trying to make sense of both the noise in the world and the noise inside itself.
So it can be that sometimes the signals your [00:10:00] brain gets literally come from the environment, you know, vibrations in the air, light waves in your eyes. But it's also the case that your brain's got its own kind of internal noise, and it's trying to make sense of what's happening there too. And so it's easy for your internal signals to get mistaken for events out there in the world.
And in that way, your brain doesn't come up with the interpretation "there's some random noise in my head." It comes up with some plausible, meaningful, theory-driven explanation for what's going on. And that could be something like: I have this little fluctuation of noise in my auditory cortex because a nearby phone's ringing, or someone's calling my name.
It's that same sort of theorizing instinct, somewhat turned inwards, trying to make sense of the signals inside yourself. I mean, one of the funny things that taking this kind of brain's-eye view makes you realize is that your brain is never properly in touch with the outside world directly.
It's always working through the veil of these signals. And so when your brain's trying to [00:11:00] perceive, it's trying to make sense of its own activity. And that activity is always going to be a mixture of what the world provides and what you, "you" in quotation marks meaning you as the brain, inject into it.
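One way to picture internal noise being mistaken for the world is a toy signal-detection simulation. Again this is my own sketch, not from the episode, and the thresholds and noise level are invented: the brain monitors noisy internal signals and applies a decision criterion, and expecting a signal corresponds to lowering that criterion.

```python
import numpy as np

# Toy signal-detection sketch of the phantom-vibration idea.
# Thresholds and noise level are invented for illustration.

rng = np.random.default_rng(0)
n = 100_000
noise_only = rng.normal(0.0, 1.0, n)   # trials where no real vibration occurs

cautious = 2.5    # high criterion: only strong evidence counts as "a buzz"
expectant = 1.0   # expecting a call effectively lowers the criterion

# False alarms: pure internal noise read as a real event in the world.
fa_cautious = (noise_only > cautious).mean()
fa_expectant = (noise_only > expectant).mean()
print(f"false alarms, cautious:  {fa_cautious:.3f}")
print(f"false alarms, expectant: {fa_expectant:.3f}")
```

Lowering the criterion also catches more real signals, so on this picture the phantom buzz is the price of a system tuned not to miss the things it cares about.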
Cody: Hmm. You know, I find it interesting, there are those studies showing that a baby is basically on a psychedelic trip 24/7. They're hallucinating everything that they're seeing. But simultaneously, even as adults, we're basically just doing that to a lesser degree.

It's just that as adults, we've built up such a large swath of patterns that we're able to recognize them without having to consciously think about everything we're doing. Just like when you're learning something, you have to think about it, and once you've learned it, you don't have to think about it as often.

So in some ways we're still hallucinating even as adults.
Daniel Yon: Yeah. The way you put it there is as if you are [00:12:00] learning, and then once you've learned, you don't necessarily need to learn again. I mean, maybe one of the sad facts of a lot of our lives is that things often don't change that much, and they don't change very quickly. As a consequence, with the models you've built up to make sense of the world around you, what was true yesterday is going to be true today. And so your brain can get away with just continuing to exist through the patterns it's absorbed and the paradigms it's built, without really processing every new bit of information that comes in.

And that can be useful or it can be harmful in different situations.
Cody: Yeah. And sometimes when we're learning something, we keep getting it wrong, because sometimes our brains predict wrong. I'm wondering, is there a way to catch ourselves making that wrong prediction in real time?

Like, I know that surprise breaks autopilot and novelty can restore our attention. Are there any ways we can try to correct ourselves, [00:13:00] to do error correction in real time?
Daniel Yon: Yeah, it's funny that you seem to suggest you'd want to avoid the errors, but one of the things this way of thinking about your mind suggests is that the only time you ever get to learn is when the error happens. The only way you know your prediction wasn't right is when it faces reality and you get that mismatch, that prediction error signal. And so in some sense, you don't want to ignore those errors. You want to be in touch with them, because they're the things that give you the chance to update and change your mind.
I suppose the real challenge there is: what do you do to make sure that the errors are processed rather than explained away? There's that classic quote attributed to Einstein, that madness is doing the same thing over and over again and expecting different results.
You can think that if you keep making the same prediction over and over again and it continues to be wrong, [00:14:00] there's something wrong with the way your mind's working. If you can make sure that you take the right error signals and use them to update your beliefs in the right ways, you can end up with theories that fit the environment better than they did before.
And it looks like this happens both consciously and unconsciously. There are lots of nice experiments showing that you have different networks in the brain primed to catch mistakes you've made, even if you're not consciously aware of having made them. It seems that so much of your brain's architecture is dedicated to detecting even these fine-tuned differences between prediction and reality.
It uses them to control your ongoing behavior: to slow you down when you make a mistake, or to avoid repeating something that made you make an error in the past, all the way up to these slower, longer-term timescales of updating and changing your mind for the next exchange you have with your environment.
Cody: So in some ways, when we make an error, it can cause us to slow down and consciously think about it, [00:15:00] to correct the error so that hopefully we're not repeating that same error multiple times. But then, simultaneously, sometimes we can remember certain events wrong, or we might misremember what somebody said. Why do these kinds of errors happen?
Daniel Yon: Yeah, I mean, I think one of the big sources of those kinds of errors is this constructive theorizing process that the brain's engaged in. So take something like trying to remember what somebody said. What the psychology and the neuroscience shows us is that there's something fundamentally quite impolite about the way the listening brain works, in that we never really take in everything the other person is saying.
One way you can see this is if you measure people having conversations: the gaps between the turn-taking are so short, fractions of seconds, that the only way it's possible for people to talk so quickly is if they basically plan what [00:16:00] they're going to say before the other person stops talking.
And if a conversation unfolds so quickly, that means your brain doesn't have time to listen to everything the person says, process it, and then consider your reply. What you're really doing is projecting your own hypotheses and theories about what they mean and where the conversation is going.
And you can see this happening in brain imaging experiments. If you play people natural speech and give them sentences that look like they're going down a predictable path, you say something like, "the baby was asleep in its...", then seconds before you hear the word "crib," you can see in the brainwaves that your brain is opening that particular semantic file, with that expected meaning. And in that sense it's a kind of labor-saving trick, like we discussed before. It lets the conversation flow smoothly because you don't have to wait for the punchline before you can get to the meaning, but it also means that you kind of invent your own interpretation.
That's a sort of [00:17:00] synthesis of what they were really saying and what you thought they were saying. I mean, one way you can get in touch with this in your everyday life is through the phenomenon of misheard song lyrics. People think they hear Bob Dylan sing "the ants are my friends," or Jimi Hendrix sing "excuse me while I kiss this guy," instead of "the answer, my friend" and "kiss the sky." These kinds of mishearings show you that you're not really listening to what was said. You're mixing what was said with your own interpretation of what's plausible. And often what's plausible isn't going to be what's right.
It's probably easier to kiss a guy than it is to kiss the sky, so that's a sensible theory for your brain to come up with, but it means you don't hear what was actually said. So these aren't so much errors of prediction as errors of interpretation: because your predictions are so strong, they kind of wipe out what's actually there.
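The "opening the semantic file" idea can be caricatured in a few lines of code. This is my own toy illustration, and the probability table is hand-invented, nothing here comes from the episode: the listener's model ranks continuations before the word arrives, and an implausible input can get pulled toward the expected one.

```python
# Toy sketch of prediction-driven listening. The probabilities below are
# hand-invented continuations of "the baby was asleep in its ___".

continuations = {"crib": 0.70, "stroller": 0.20, "cot": 0.09, "kayak": 0.01}

def predicted_word(table):
    # The "semantic file" opened in advance: the highest-probability guess.
    return max(table, key=table.get)

def interpret(heard, table, strength=0.2):
    # If the heard word is much less plausible than the prediction,
    # the prediction wins, a crude stand-in for misheard lyrics.
    guess = predicted_word(table)
    if table.get(heard, 0.0) < strength * table[guess]:
        return guess
    return heard

print(interpret("kayak", continuations))     # overwritten by the expectation
print(interpret("stroller", continuations))  # plausible enough to survive
```

The `strength` knob plays the role of how strongly the prior is weighted against the input: turn it up and more of what you "hear" is really what you expected to hear.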
Cody: What you said about [00:18:00] people having to listen while only half paying attention, I resonate with that, and I'm sure a lot of people do. And there's this movement, a lot of people in the self-help space, who really advocate for not coming up with what you're going to say next, and instead being completely present with what the person is saying. And I know that's really relevant in terms of social dynamics, whether it's CBT or the bunch of psychological concepts around how to talk with your partner or a significant other, because we often just know what we want to say, and we talk on top of what they said while the entire time we didn't really listen to it.

In some ways I could see that our brain is naturally a pattern-making machine, so it's naturally trying to make sense of the little that was gathered. And so perhaps it's kind of like The Selfish Gene, [00:19:00] the book by Richard Dawkins: perhaps when we say that we should be less selfish, we're actually going against our human nature. It's human nature to be selfish, and it's human nature to think of what you're going to say while the other person's still talking.
Daniel Yon: Mm-hmm. Yeah, no, I think that's right. I liked it when you said that you have these really small bits of data and you try to extrapolate. There's this sense that what your brain loathes more than anything else is a gap. It loathes this kind of uncertainty and absence, and in lieu of any information, it's going to put in its own projections.
And so you might not immediately think of it as what you usually mean by selfishness, but it does mean there is something inherently self-centered about the way you perceive. The things you described, the slower, more deliberate turn-taking, that would be helpful.
But no matter how slowly you talk, and for how long I listen, I'm still going to be [00:20:00] understanding it from my perspective. And I think the only way you can somewhat correct for that is not just by listening more carefully, but by trying to broaden what your perspective is.
If you spend time talking to more people, and actually listening, I suppose the upshot for your models is that they'll start to change, or start to reflect, a wider set of possible people and possible perspectives. I mean, it's somewhat controversial which direction the effect goes in.
But there's some data suggesting that people who read fiction end up demonstrating higher degrees of empathy when they interact with people. And you can think this happens because they get practice exploring different situations from perspectives that are not their own,
because they're kind of forced to by the novelist. In that way, you get used to thinking in ways that aren't quite your way of thinking, and that becomes another thread in the theory you're using to [00:21:00] make sense of the world. You think: a plausible thing that I could think is what I usually think, but here's another way someone else might perceive that same situation.
And that's the kind of thing that I think gives you the humility that might counteract that kind of intrinsic selfishness you're worried about.
Cody: Hmm. Yep, I think there's something to that, the fact that you can empathize with the character you're reading, and that empathy can then translate into other people and situations you're able to connect to throughout your life.

I also think that memoirs and biographies can have that effect too. But going from perception in general, I'm curious about how perception relates to control, because in your book you explore how soldiers perceive control differently depending on their role.

So what do these experiments show us about how the brain processes responsibility?
Daniel Yon: Yeah, so I guess the starting point for this bit of the [00:22:00] book is this idea that if you start to think of your brain as being like a scientist building its theories, you can think: okay, when scientists build theories, they're not just building theories for the sake of it. They're building them because they imply some kind of causal chain.
You know, our theories of how viruses work allow us to make things like vaccines, and our theories of how the weather works allow us to make predictions and take appropriate actions. And so you can think that your brain is doing the same thing itself.
It's trying to experiment with the world around it to work out what are the things it can and can't control. And the problem is that, just like in the case of more passive perception, in this more active situation it's really difficult to work out if you're influencing something or not.
All you can really do as an individual agent is run your experiment: you take an action and you try to observe the results. But often there are going to be multiple things affecting the world around you at the same time, and so your [00:23:00] action is just one possible cause in the mix of various things that could be determining the shape of the world around you.
So this becomes really important when we start to think about how we know when we are or are not responsible for something. And perhaps the highest-stakes example you might think about is the situation where you might do harm to another person. This idea has a really long history in psychology, through the work of people like Stanley Milgram, who tried to investigate how far people would go in following an order to harm another person.
And what these results seemed to show was that people were willing to inflict what they believed to be quite serious punishments on people, serious physical harm, just because they were ordered to by a figure of authority. [00:24:00] Now, most of us in our everyday lives don't find ourselves in situations where people are ordering us to harm other people.
But that is something that happens to you if you're in the armed forces. And so in some really interesting experiments, the neuroscientist Emilie Caspar explored a kind of modern-day analog of the classic Milgram idea, by investigating what happens in people's brains when they're harming someone else, either because they've chosen to or because they've been told to.
So in her standard paradigm, before taking it to military personnel, she begins by studying everyday civilians. I think most of these studies are done on students at the local university. Her version is a kind of modern-day interpretation of the Milgram experiment, in which people give each other unpleasant, harmful, but not lethal shocks to earn a small amount of money.
So in the experiment, you're given a fixed amount of money. [00:25:00] You're there with a partner in the lab, and you have this button you can push, and when you push the button, it gives them a shock. Every time you choose to give them a shock, you get to keep a bit of that money, or you can not shock them, but lose a bit of the money you've been given.
And throughout the study, you can either have situations where it's up to you whether you push the button or not, and people, because they want the money, will sometimes shock the person. Or you can see what happens when people are coerced by the experimenter: they're told, you haven't got a choice this time, you are going to push the button.
While this is happening, Caspar records people's brainwaves. She uses a technique called electroencephalography to record the neural activity that unfolds while people are doing this. And one thing she can look at is the reaction your brain has to its own outcomes.
So how does it process the consequences of the action you've [00:26:00] performed? What you usually find in these sorts of sensory experiments is that when you perform an action, you can detect these quite early deflections, these brainwaves that show how deeply your brain was processing the outcome.
One that she focuses on a lot in these experiments is called the N1. It kind of shows you how much your brain was paying attention to the outcome it produced. And what she finds is that when people are ordered to harm someone else, even though it's always them that does the pushing, it's always their finger that causes the outcome.
They get a reduction in these neural signals, as if, when they've been ordered to do it, their brain has attended away from, or somewhat detached from, the outcome that's been produced, as if it's not really happening as much as when they perform the action themselves. Now, this is something that doesn't unfold in the same way for every single person, and I think one really nice place you can see this is what happens if you run exactly the same experiments, but with different [00:27:00]
So in one particular study, researchers went to a military academy in Belgium and recruited different kinds of soldiers who were preparing for different kinds of roles. They had the private cadets, who were going to fill some of the lower-ranking parts of the military machine.
They were going to be foot soldiers, primarily taking orders from more senior officers. And they compared those soldiers to a group of officer cadets: the group training to occupy a more senior leadership role within the army, who, as part of their education, are taught to take responsibility for their own actions, but also for the actions they execute through other people.
When you do the same experiment with them, when you place them in this situation where either a soldier is commanding them to push the button to shock someone else, or they're choosing to do [00:28:00] it of their own free will, what you can see is that the private soldiers look like the civilians: they show this dissipation of the neural signals when they're ordered to do something.
That doesn't happen with the officer cadets. For the soldiers training to be officers, their brains seem to treat the outcomes equivalently whether they're deciding to act or following an order. And the reason I think that's really interesting is that it suggests there isn't just some hardwired feature of our psychology
that means that when we follow an order, the feeling of responsibility goes away. It's more that how that feeling is constructed, how the brain creates that feeling of whether it expects us to have control or not, depends strongly on the background beliefs your brain already holds. So if you are expecting that you are going to be responsible, expecting that you have control, your brain literally processes the outcomes in a different way, and you have a [00:29:00] correspondingly different sense of whether you're responsible or not.
And while most of us are not going to join the military, I think there's a lesson there about how you can cultivate the right kinds of feelings of responsibility about your surroundings. It shows us that by absorbing the right messages about what you are and aren't responsible for, what you can and can't control, those messages become the theory your brain uses to interpret your interactions with your world.
And that can make you feel the appropriate level of control over the things you really can control. Whereas if you have not absorbed the right patterns, the right messages, you can find yourself in situations where you really are in control, but you don't have that reflected back to you subjectively. You feel powerless, as if life is happening to you, rather than you being the force that makes the difference.
Cody: Well, yeah, you touched on a lot of important aspects in there. [00:30:00] I want us to go back to Milgram, at the very beginning of what you said, and just preface and add that this happened in the 1960s, after this huge wave of thinking: we could never do that; how could those people in Germany do such horrible things?
And it really showed that there was a human psychological component, and that when you're following an order, your moral compass kind of gets outsourced. The moral part of your brain quiets down when somebody else is making the decision for you. And I think there's something deeply human to be said there: we all want control, but we simultaneously fear the weight of it.
And that brings us to modern-day society. I wish that politics and government would catch up to the scientists, because I think the most evident way this is playing out is in [00:31:00] policing, at least here in America, because there really is a lack of accountability.
You cannot sue a police officer if they do you wrong here in the US. And so there is this feeling that the police can just do whatever they want, that they're being propped up by a system behind them that is enabling this behavior. Whereas what you're saying, in the case of the Belgian soldiers, is that if they're following an order but still hold some weight of responsibility in that decision, they take a step back and think a little more before acting on every order, because they're not completely outsourcing the moral component of their action.
Daniel Yon: Hmm, yeah. I hadn't ever thought about it that way around before, because I suppose this way of thinking could explain differences, right? If you have two people performing exactly the same action, but they've got different beliefs and [00:32:00] theories about how responsible they should be and how in control they really are, then they're going to have different subjective feelings.
So one person will feel responsible and one person won't. I don't think that's necessarily because one person is slowing down and thinking; I think it's maybe a deeper, more systemic thing. If you take the example of the Belgian soldiers, it's a deep part of that cultural institution that they are both explicitly and implicitly taught to have certain expectations for themselves.
So I would expect that if I just walked into the Belgian military academy, had a conversation with one of these soldiers, and said, you know what, there's really no such thing as free will, you can't actually control the world around you,
I don't think that would undo the way their socio-cultural experiences have programmed them. And I think the same would be true if a police officer is taught not to have that kind of personal decision-making [00:33:00] capacity, but to just follow commands from superiors. It's not something you can change with a brief conversation either.
It's about designing the culture of your organization to foster the kind of responsibility you want. Because in some cases, as you say, we will want that control, we want that accountability. But in many cases, I can also imagine that if you make people feel responsible for things they can't actually control, that is, in some ways, just another kind of sin.
If it's bad to be out of touch with what you can really control, it's also bad to have a sense that we have more influence than we do. We want a kind of Goldilocks zone, where we can work out where we are having an influence and where we are not. So you don't just want people to take accountability
that they don't really have, if that makes sense.
Cody: Hmm. And so, in relation to control, how is that tied into feelings of guilt or pride? Because I think guilt requires [00:34:00] ownership and pride needs agency, so they're both linked to choice by perception.
Daniel Yon: Yeah, I think that's right. In some ways, pride and guilt, or maybe praise and blame, are two sides of the same agency coin, in that you can't really take credit for something that you weren't in control of, and you can't really be held responsible for a bad outcome that you weren't controlling either.
And there's been a lot of ink spilled among scientists about how free free will really is, and how much control we really have over our agency. But one perspective on this that I found really interesting was popularized by people like Chris Frith, who argues that even if free will didn't exist, human societies would have to invent it.
We'd have to believe in the idea of agency and responsibility for the smooth running of our [00:35:00] societies. We can only have a sensible society because we can apportion some credit and some praise for good outcomes, and we can assign blame for outcomes we don't desire. And in that sense, the question of what we choose to praise and what we choose to blame becomes really important, because, going back to the story I'm trying to offer, those regimes of praise and punishment are going to be the things that create that sense of responsibility.
And this is not necessarily something scientists have come up with recently. Greek philosophers like Epicurus had the idea that you don't come into the world with a clear sense of yourself as an agent that accrues credit and guilt; rather, you are taught to take that kind of responsibility as a child.
You are told by those around you what's within your sphere of influence and what's not. As an everyday example, you can tell that kids have got it when they start to [00:36:00] say, as an excuse for bad behavior, "I didn't mean to." If a big brother pushes his little brother, who starts crying, and says, "I didn't mean to hurt him,"
that's only a plausible excuse if you have quite a rich mental model, because you've started to realize that you don't get praise or punishment just for what you did; you get it because of what you intended and whether you had agency over the outcome. And that, I think, is not trivial, because there are other ways you could set up the world.
You could set up a society that wasn't about people's intentions, that was just about what happened. You might say, if you do things and good things happen, we'll praise you no matter what your intent was, and if bad things happen, we'll punish you regardless of your intentions.
But the way that we, at least in our cultures, think about responsibility really comes down to what we think people are experiencing. And we don't all experience the same thing. I think it's really interesting when you compare: at least [00:37:00] in the English legal system, there's this idea that your actions are
judged to be either reasonable or unreasonable based on what some idealized average person would have done in that situation. You imagine there's an idealized Joe Bloggs character who would have the right set of feelings of responsibility, or loss of agency, in certain contexts.
But if it's true that each of us can have different experiences of control, according to the different regimes of experience we've had in the past, the different expectations set for us by our personal interactions and our cultural environment, we might end up with people who commit the same crimes but have genuinely different subjective experiences while they do it.
Cody: Hmm, yeah. In some ways that's the difference between first-degree and second-degree murder: we're charging you based on whether you intended to kill that person or not. [00:38:00] And what you said about getting in trouble and saying you didn't mean to do it reminds me of the split-brain experiments, right?
They showed that when we perform an action, we often don't know why we're doing it. Instead, it's actually our brain that hallucinates the reason after we perform the action. I find that really fascinating, because we often think we're explaining why we did something, when we're actually coming up with that explanation on the spot.
Can you go into that a little bit more?
Daniel Yon: Yeah. Sometimes that's called confabulation, and I think you can see it as a similar sort of thing to what we were saying earlier about how you can misunderstand what other people are saying.
In those situations where I say something to you, your brain comes up with a theory about what I meant. In a funny way, your brain is in the same position with itself. You do things, but the reasons you are doing [00:39:00] them are somewhat hidden, in the sense that if I try to explain your behavior, I don't see all of your motives and intentions or your thoughts and feelings.
I see what you do, and from that I work backwards to figure out what you might have been thinking and intending. And in some ways, different bits of your brain are isolated from each other. There isn't one bit of the brain that sits there and sees everything all at once; each part has only partial access to the different systems within your head.
So in the same way that I watch your behavior to work out what you're doing, I also watch my own behavior to work out what I'm doing. In that sense, I use my own behavior to work out what I was intending to do before I started doing it. Which is a bit tricky when you think about it, but there's some evidence suggesting, for instance, that in lots of decision-making scenarios, people are more confident in their decisions after having made the choice than before.
So if I ask you something like, which of these two options would you [00:40:00] like to buy? And then afterwards I say, are you confident in your choice? I could flip that, right? I could say, are you confident you know which of these two things you want? Okay, pick. And though those are fundamentally exactly the same decisions, if you put the confidence judgment after the choice, people tend to feel more confident than they did before, which on the face of it can seem irrational.
But it makes perfect sense if your brain is doing this thing where it observes you to work out what you were thinking and what you wanted, in the sense that the fact that you chose something is evidence to your brain that it was the right thing to choose. Because otherwise, why would you have chosen it?
That can seem circular, but it's the same sort of theorizing I'd do if I saw you doing something. If I saw you in the supermarket looking at two different jars of peanut butter, trying to work out which one to choose, and then you pick one,
I can use the fact that you picked one as a sign that it's the better option, because I can think, he's the [00:41:00] kind of person that probably makes the right kinds of choices. And in a sense, I can apply that same logic to myself. I can think, well, I picked that, and I'm the kind of person that tends to pick good things, so it's sensible for me to take my own behavior as a sign of what I ought to have done anyway.
Cody: Hmm. And that reminded me of, I'm forgetting his name, I think it's Dan Heath or something similar, who's written a bunch of books here in the US. I recall from one of his books an experiment he did at his university where he split a group of students: one group had 30 days during which they could come back and choose a different piece of art.
They could return it. The other students were told that once you choose this, you can't come back and choose a different one. And the ones who didn't have the option to return and pick a different piece were more satisfied with the outcome than the ones who had the option to return it.
And I think [00:42:00] that aligns with our current culture, where you have more options than you've ever had before, options of clothing, options of products, really long return policies. So we end up more dissatisfied with our choices in today's society, because there's so much optionality.
Um, yeah.
Daniel Yon: I think that's right. And I think there's some neuroscience that plays into that too. There are experiments you can do where you get people to make decisions about continuously evolving evidence. Often these are quite artificial types of puzzles,
like looking at a cloud of dots that's moving left or right, where you have to work out which direction it's moving in. But you can't look at this cloud forever; at some point you have to stop accumulating evidence, stop sampling data, and just commit to a choice. And if you scan someone's brain while they're doing that, recording the neural activity, what you can see is that at the [00:43:00] moment of choice, once you've committed, you begin to tune out the evidence that was inconsistent with your committed option.
On the one hand, you could think of that as an irrational way of, just as you say, keeping yourself satisfied with what you've chosen. But it could also be a powerful way, like you say, of stopping that constant chasing, because if we remained sensitive to the unfolding evidence, any committed choice we'd made would be insecure.
It would always be possible that we lurch one way, then find ourselves nudged in another direction, and we would prevaricate between different possible ways of behaving, to the extent that we might never do anything; we'd just be constantly trying to choose. And I think, the way you're describing it, if you end up in a world that's full of choice, you have this paradox of choice, right?
You feel like you can do anything, like you don't ever have to commit. But as a consequence, you never do commit, and you never get the [00:44:00] benefits that commitment brings, of taking a position and moving forwards rather than continuing to re-litigate a choice you haven't yet actually made.
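[Editor's aside: the accumulate-then-commit process Daniel describes, integrating noisy evidence until a bound is reached and then down-weighting evidence that disagrees with the committed choice, can be sketched as a toy simulation. Every parameter name and value below is illustrative, not taken from any study he cites.]

```python
import random

def decide(drift=0.1, noise=1.0, threshold=5.0, gate=0.2, steps=200, seed=0):
    """Toy accumulate-to-bound decision: integrate noisy evidence until a
    threshold is crossed, then attenuate evidence inconsistent with the
    committed choice (the 'tuning out' described above)."""
    rng = random.Random(seed)
    total, choice = 0.0, None
    for _ in range(steps):
        sample = drift + rng.gauss(0, noise)     # noisy evidence; >0 favors "right"
        if choice is not None and (sample > 0) != (choice > 0):
            sample *= gate                       # post-commitment: discount disagreement
        total += sample
        if choice is None and abs(total) >= threshold:
            choice = 1 if total > 0 else -1      # commit once the bound is reached
    return choice, total
```

With gating, samples that contradict the committed option barely move the accumulator, so reversals become rare: a crude picture of why commitment brings stability at the cost of sensitivity to later evidence.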
Cody: And you touched on an aspect of cognitive dissonance there, and I think that relates to our current political system, at least here in America, and in certain respects around the world: all the MAGA supporters and the Trump supporters. There's been a lot of recent polling looking at this, and there's been some decline among those who support Trump.
But there are also a lot of people whose interviews you see, and it seems like they're just picking and choosing what they believe. It's like they're distorting their own reality, or choosing not to see certain facts. And I think that goes along with our identity, right?
If you voted for Trump, then saying you no longer agree with Trump goes against who you are at the core of your being. Can you explain that dissonance? [00:45:00]
Daniel Yon: Yeah, I think so. I think you're absolutely right that it will sometimes be the case that a particular dissonance really is linked to identity, and to the sense that if you change your mind on this, what else do you have to change your mind about?
And maybe, how fragile is this picture of yourself and what you believe that you've built up? But one of the things I talk about a bit in the book is that you can have this kind of dissonance without it having to be emotional. It can just be a side effect of the way your brain works, because of the way the confidence you have in your decisions
can inoculate you against changing your mind. One nice way of illustrating this is through experiments showing that some people find it hard to change their minds even on issues that don't have the emotional salience of politics. [00:46:00] In some experiments led by Max Rollwage, the researchers were interested in the connection between different degrees of political extremism and how flexible your decision-making is outside of politics.
In these experiments, what they did was measure people's political radicalism as, basically, how far they were from the center. So it's not that you're radical only if you're extremely left-wing or extremely right-wing; it's whether you depart from a middle-ground consensus in either direction.
What they can do then is break people up into groups of radicals and groups of moderates, and see how they behave in some of those decision tasks I told you about earlier. You give people really inert, artificial decisions: they look at two boxes, one has more dots, and they have to say which box has the most.
They get to look again [00:47:00] at the evidence and see if they change their minds, see if they change their confidence. And what the researchers find is that even in these really boring decisions, which are completely meaningless and which people do hundreds of times over, the people with more extreme political attitudes find it harder to adjust their confidence in the face of new evidence.
So if I show you evidence that you were wrong, you picked the box on the left but really it's the box on the right, you don't have the same ability to say, oh, actually, I'm not sure, maybe I should change my mind. And this seems to be fundamentally a bit different from what you're describing there, about dissonance being about your identity, right?
Because I don't think it's really important to your personal identity whether there were more dots in the left box than in the right box. But nonetheless, you still have this stickiness in your perceptions and your decisions.
In some sense, that stickiness is rational if you are confident, right? [00:48:00] If you're confident that you're right, you want to leave the noise at the door and protect this model, this theory that you've built up. Because if you're confident it's right, it really is just noise trying to steer you in the wrong direction.
It's only when you can cultivate a degree of humility about what you believe, a degree of uncertainty, that it becomes possible for new evidence to change your mind. And in that sense, you can become stubborn not because you're trying to protect yourself emotionally, but because you're trying to be rational.
You're trying to work out what you should believe in the way that scientists might.
Cody: Hmm. And so it seems relevant to the Big Five personality traits, right? I would guess that those who are willing to stop, pause, and reflect on the evidence properly,
even if it goes against their internal beliefs, that's related to openness and conscientiousness, right? And there's a [00:49:00] distinction that Kahneman references, the experiencing self versus the remembering self. So it's almost like our brains are running a PR campaign for who they think we are,
and buying whoever has the best pitch. And I think that's even self-reinforcing within social media, because our feed is more or less a curated version of our own identity reflected back to us. So do you have any particular advice, any ideas, for somebody who wants to be able to pause, reflect, and actually look at reality properly? What are some ways they can modify their mindset or personality to be more open to that?
Daniel Yon: Yeah, I think you're right that it could be connected to personality, but I also think it's not necessarily something that is fixed in an individual and can't change. To take something like social media, I think you're [00:50:00] right that you can end up with a self-fulfilling loop where you pick voices you agree with, and as a consequence you're only provided with evidence that confirms your beliefs.
And as a consequence, the theories your brain holds just get stronger and stronger. You might think that one of the things you want to cultivate, in the diet you consume, is a mixture. I'm not sure I could give you a peer-reviewed study to back this up, but something I've thought is important in my own personal life is reading and listening to different sorts of political voices, voices I'm sure I disagree with.
I'm doing that partly to avoid ending up in an echo chamber, and partly to keep a limberness in the theories my brain is building about what's possible, what directions you can go off in. And in that sense, I think trying to avoid too much algorithmic curation of anything is probably going to help you [00:51:00] sample a wider range of possibilities, which will inflect your models in a wider range of ways, making it possible to entertain the right kinds of predictions for different kinds of situations.
But perhaps we'll get onto it in a bit: one of the things you can think about, if you're worried about getting stuck in one theory, one paradigm, is using this way of thinking about your brain to cultivate the ability to undergo your own personal paradigm shift.
So I could say a bit more about that, if you like.
Cody: Well, yeah. How can we cultivate a paradigm... actually, no, I want to ask about something else first. We build our theory of self, and it's shaped by our memory, our behavior, our environment. And I'm wondering, under what circumstances does this model go [00:52:00] astray?
I can think of depression, or borderline personality disorder, or schizophrenia. Can you explain what happens when our theory of self, when the model, goes wrong?
Daniel Yon: Yeah, sure. I suppose your sense of self is, in a sense, just another one of these hypotheses that your brain comes up with to make sense of something from inadequate and incomplete information.
One of the major problems every human being has to solve is working out: what are our talents and capabilities? What are the things we can do, and what are the things we can't? The way your brain performs that tricky task of introspection is that it tries to model and listen to the noise and performance inside its own head, and that's extremely tricky.
It's really challenging for the brain to monitor itself, because, as we alluded to before, your brain [00:53:00] hasn't got a perfect bird's-eye view of how everything works. It only samples itself through glimpses and shadows. So to make sense of your mind and who you are, you need a theory, a story you tell yourself about what you are and what you're able to do.
And in that respect, there's a really deep sense in which our past experiences loom large in creating our sense of who we are in the present. If we've had good fortune in the past, we have a track record of success that allows us to form beliefs that make us optimistic about ourselves in the present and the future.
But if we've undergone a series of misfortunes, we can find ourselves forming more negative predictions about our abilities, negative predictions about what we're going to be able to achieve. That leads to a pessimism about what our future is going to be like, which can sap [00:54:00] that drive to try, that fundamental motivation to give things an attempt.
If you don't think you have any chance of succeeding, it's not rational to try. You can see this happening in the brain, but there are some nice examples which I think really crystallize how your experiences might form expectations that change the direction you go in.
One example I talk about in the book is a scientist called Douglas Prasher. Prasher played an instrumental role in isolating something called green fluorescent protein, a glow-in-the-dark protein that you can implant into other animals and then use to image cells in incredible detail.
And in 2008, the Nobel committee announced that the Nobel Prize in Chemistry was going to be awarded [00:55:00] for the green fluorescent protein discovery. But the prize wasn't given to Prasher; it went to other scientists who had worked on the idea and followed it up. Prasher himself had left science by the time the call from Stockholm came.
He was working as a courtesy car driver in Alabama and had left science altogether, because he'd experienced a string of misfortunes. He'd found it difficult to secure a permanent scientific post; he'd had difficulty getting the funding he needed to pursue this research. As a consequence, he'd just dropped out.
What this story shows is that you can have a genuinely Nobel Prize-winning idea, but a series of obstacles put in your way can make you doubt your own capabilities, giving you the kind of theory of yourself that means [00:56:00] you think, I'm not that good at this.
I'm not likely to succeed; if people won't employ me, if they won't fund me, what are the chances my ideas are actually any good? That can create a theory that leads you to see yourself in a fundamentally distorted way, one that fails to see your own talent as the rest of the world might see it.
This isn't just happening for sort of scientists like, like Prussia. There's some nice kind of empirical evidence of this happening, um, in some research on what happens in the scientific world to researchers who get early success or early, uh, misfortune in the kind of funding cycle. So one kind of nice study of this was done looking at researchers who, uh, either were just above the fundable line for their first grant or just below.
These are people who'd just got their PhD. They were coming up with their first project. They made an application to the research council, and the way the council works is that it ranks every application and [00:57:00] just picks a cutoff line. So if they say, "We're going to fund 20 of these grants," you can be pretty sure that the person ranked number 20 is pretty much as good a scientist as the person ranked number 21.
But where they draw the line means one of them gets funded and one doesn't. So it really is just a twist of fate which one gets funded. But you can see that the next few years of these people's careers are rather different: the people who just managed to succeed end up accruing much more research funding over the subsequent years than the people who just miss out.
And you could think this makes sense if it's to do with how that early experience creates a prediction about what you're going to be able to achieve next. If you're successful first time, you think, "I am talented, I'm good at this, I should try again." And if you're batted away at the first hurdle, you think, "Well, I'm not as good as I thought I was. Maybe it's not worth trying. If I try again, I'm just going to fail." What was interesting is you could see in these studies that the [00:58:00] gap between the early winners and the early losers did begin to close among the losers who kept trying. The people who missed out first time round but kept applying had a much smaller difference in outcomes in that early bit of their scientific career than the people who just stopped applying.
So it seems like these beliefs about whether it's worth trying, whether you're the kind of person who's going to succeed, have this instrumental role in controlling what you end up trying to do and then what you end up achieving. Now, to get to your actual question: how does that relate to things like mental illness? One way that different sorts of scientists have begun to think about conditions like depression is that you can understand something like depression as what happens when your brain is really stuck with one of these particularly pessimistic theories about yourself, when you can't quite shake the expectation that whatever you do is going to fail and therefore it's not worth trying.[00:59:00]
And it could explain the way in which early misfortunes or various kinds of adverse experiences can tip you into a certain way of thinking that's then very difficult to break out of. I think one reason it's difficult to break out of is this connection between striving and expecting success: if you really have a pessimistic view of your abilities, one consequence is you never try. And by not trying, you never get any evidence that you can succeed. And by not succeeding, you don't ever get the kind of prediction error that would challenge that old view of yourself and help you update that belief about what you can do.
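The loop Daniel describes (act, observe, update; or don't act and never update) can be sketched as a toy prediction-error model. This is purely illustrative; the learning rate, threshold, and numbers are invented, not taken from any study he cites:

```python
def update(belief, outcome, lr=0.2):
    """Rescorla-Wagner-style update: move the belief toward the observed outcome."""
    return belief + lr * (outcome - belief)  # (outcome - belief) is the prediction error

def simulate(initial_belief, true_ability, n_rounds=50, try_threshold=0.3):
    """An agent only attempts tasks when its belief in its ability clears a threshold."""
    belief = initial_belief
    for _ in range(n_rounds):
        if belief < try_threshold:   # too pessimistic: don't even try
            continue                 # no attempt, no outcome, no prediction error
        outcome = true_ability       # attempting reveals (noise-free) evidence
        belief = update(belief, outcome)
    return belief

# A mildly pessimistic agent who keeps trying converges toward the truth.
print(round(simulate(initial_belief=0.4, true_ability=0.8), 2))  # prints 0.8
# An agent below the "worth trying" threshold never gathers evidence, so it never updates.
print(simulate(initial_belief=0.2, true_ability=0.8))            # prints 0.2
```

The second agent's belief is frozen not because the world disagrees with it, but because it never generates the prediction error that could correct it, which is exactly the trap being described.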
Cody: Yeah. You touched on so much there; there are so many little angles I want to go down.
What you said last reminds me of a narcissist. A narcissist is somebody who has [01:00:00] a supreme level of self-confidence, and obviously they put themselves first. But it's a strange thing, because you find more narcissists in positions of power, more narcissistic CEOs and politicians, than in any other field. And I think in that arena you have to have an almost delusional sense of confidence in yourself, which is really hard to come by. So in some ways narcissism is a distorted perception of your identity and of how you perceive yourself. Also, there's what you said about this Nobel Prize-winning scientist who decided to quit.
I mean, that's so relevant for really everybody. It reminds me of this one YouTuber who became famous many years ago. He would eat a bunch of food and get all these likes and comments, and he grew his audience just based on eating copious amounts of food. Then he tried to lose weight. He tried to change his channel [01:01:00] to talk about other things, but if it wasn't a video of him eating a disgusting amount of food, he couldn't get any traction. So that became his identity, because the only success he seemed to have in life was getting these huge numbers of views on videos of him eating crazy amounts of food, and it turned out later in life he ended up dying from that. And so you can end up holding onto this identity of who you believe you are to such a high degree that you're completely lost in this distorted reality, where you either keep going down that path toward being an ultimate narcissist or potentially killing yourself believing that you're going to achieve this thing.
But most people's mental models seem stuck in the opposite pattern: if you don't achieve a realistic amount of success, you quit whatever it is you're doing, because in some ways you have to have a certain level of confidence in order to even make the attempt.[01:02:00] So it seems our brain tries to defend that self-image up to a degree, right? And it seems we might sit a little below what would be considered a healthy level of self-confidence. If you don't believe in yourself, you aren't willing to try anything; but if you believe in yourself too much, that's also bad. And we have an epidemic of imposter syndrome, right? It seems like if I say you're a great speaker or a great author, most often you might not believe that about yourself. So I'm also curious: why is it so hard to believe compliments, or even to feel successful, when other people tell us that we are successful?
Daniel Yon: Hmm. Yeah, I think there's lots to unpack there. In what you're saying about narcissists, you're really getting at the heart of this idea that both underconfidence and [01:03:00] overconfidence can be a bad thing. In everyday life, when we talk about confidence, we typically assume it's always a virtue, and we talk about self-confidence as something you build and grow. But in the psychology of confidence, people talk more about this idea of what you might call metacognitive sensitivity.
So this is the idea that it's not about confidence being necessarily high or necessarily low; it's about confidence that correlates and covaries with the truth. If I'm an excellent author and I also have great confidence in that, I'm metacognitively sensitive. If I'm terrible and I think I'm terrible, I also have good calibration to the truth. But it's a problem if I'm overconfident and bad, or underconfident and good. So it's this gap between the ground truth of how objectively talented you are at something and how you subjectively [01:04:00] feel. I think the reason it can be difficult to shift those theories in either direction is the same reason we were talking about earlier in this conversation.
It's this fundamental way that your brain fills in the gaps through its existing theories. In a sense, the confidence you feel is a bit like an introspective hallucination: it's what your brain's theory is telling you to feel, not necessarily what the world is presenting to you right now. And that's why, if you start out believing that you're very talented, if you start out overconfident, that belief will keep presenting itself back to you even in the face of failure. But if you start out feeling underconfident ("I don't feel that I'm an excellent author"), then no matter how many compliments you give me, they're not going to overcome that predictive filter my brain has set up until I'm in a state of mind where those filters might change. So, to say "stuck" is potentially too strong, but you do have these filters you're using [01:05:00] both to experience the world and to experience yourself, and they don't change unless you're put into a state where you're receptive to new information and your mind is in a more malleable state that allows new theories and new paradigms to take hold.
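Daniel's notion of metacognitive sensitivity (confidence that covaries with the truth, rather than confidence that is simply high or low) can be made concrete with a toy calculation. The trial data below are invented for illustration; real studies use more careful measures, but the core point survives: what matters is the trial-by-trial correlation between confidence and accuracy, not the average level of confidence.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: 1 = correct, 0 = wrong, paired with the confidence (0-1)
# a hypothetical observer reports on each trial.
accuracy   = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
calibrated = [0.9, 0.8, 0.2, 0.7, 0.3, 0.1, 0.8, 0.9, 0.2, 0.7]
blanket    = [0.9, 0.8, 0.9, 0.9, 0.8, 0.9, 0.9, 0.8, 0.9, 0.9]

# High sensitivity: confidence rises and falls with actually being right.
print(round(pearson(accuracy, calibrated), 2))
# Low sensitivity: uniformly high confidence, nearly unrelated to accuracy.
print(round(pearson(accuracy, blanket), 2))
```

The "blanket" observer feels more confident on average than the calibrated one, yet its confidence carries almost no information about when it is actually right, which is exactly the overconfident-and-bad failure mode described above.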
Cody: Hmm. And so how can we start to change the way we see ourselves? Because I know the saying: action precedes belief, and habits create evidence. Are there any studied methods for changing our perception of ourselves, to perhaps have more confidence?
Daniel Yon: Well, you almost said it there yourself. It will be true that if you surround yourself with people who provide you with positive messages about your competence, what the science would suggest is that each of those experiences of [01:06:00] high confidence will slowly update the belief you hold, making it possible to change your mind. So you could achieve that either by surrounding yourself with situations where you'll get feedback that you're capable, or by simply continuing to do things you're successful at. If you really focus on doing things you do well, you'll keep getting signals that you're succeeding, and that should feed into the sense that you're a capable and effective person.
But one thing we've been working on more recently in my lab is the idea that we can shape the subjective feelings of confidence we have as individuals through social interactions with others. One thing we've been finding in some of our experiments is that when people [01:07:00] work together, there's a sort of imitation of the subjective uncertainty that people feel. If we pair somebody who starts out feeling pretty confident with someone who feels pretty underconfident, you get this attraction toward each other, whereby the confidence of the person who started out overconfident begins to go down and the confidence of the person who was underconfident goes up, so that they end up talking the same uncertainty language. Now, this is something we've only deployed in the lab so far, but I think it's interesting because it can start to explain where certain cultures of confidence might come from. You spoke about narcissistic CEOs. One kind of cultural difference that I think is very relevant to the modern world is probably the difference in confidence norms between politicians and scientists.
I think politicians have a norm where it's [01:08:00] okay to be very confident in your positions and your convictions, even if you don't necessarily have huge amounts of evidence to back them up; that's just what the system incentivizes. Whereas scientists are trained to be particularly taciturn and reserved, to hedge all of their claims and always suggest there's nuance and uncertainty. And that means that when these groups get together, when you have a serious policy issue to solve, like a global pandemic, you can have people who are trying to communicate with each other, but they're communicating with different levels of confidence and different levels of expertise. There's a mismatch there in how you'll make your decisions. And you can think that one way to address that imbalance ("solve" is maybe too strong) is by mixing the kinds of people who are making decisions together, to create this confidence-imitation effect I was describing before. [01:09:00] If people are going to gravitate toward the confidence of the people they're working with, you might think that spending time with the people whose confidence you would like to have is actually a serious way of changing those subjective models, because you're using the confidence they express as a model for the confidence you ought to feel.
And so in that respect, that old adage, that you are the people you surround yourself with, turns out to be scientifically real, at least when it comes to confidence. We're finding it seems to be true that you inherit the certainty or the uncertainty of those you surround yourself with.
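The confidence-imitation effect described here can be sketched as a simple convergence process. This is my illustrative sketch, not the lab's actual model; `pull` is an invented parameter for how strongly each partner is drawn toward the confidence the other expresses.

```python
def converge(conf_a, conf_b, pull=0.3, n_rounds=10):
    """Each round, both partners shift their confidence partway
    toward the level the other is currently expressing."""
    for _ in range(n_rounds):
        # Tuple assignment: both updates use the values from the start of the round.
        conf_a, conf_b = (conf_a + pull * (conf_b - conf_a),
                          conf_b + pull * (conf_a - conf_b))
    return conf_a, conf_b

# An overconfident (0.9) and an underconfident (0.2) partner meet in the middle.
a, b = converge(0.9, 0.2)
print(round(a, 2), round(b, 2))  # both end up near 0.55
```

The symmetric pull means the pair converges on their shared average: the overconfident partner comes down, the underconfident one comes up, mirroring the "attraction" finding. Making the pull asymmetric would instead model a mentor dragging a mentee's confidence toward their own.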
Cody: Yeah, maybe we should have a buddy system, so that every politician has to pair up with a scientist. Or maybe there should be a requirement for every scientist to take TRT, testosterone replacement therapy, so they're a bit more aggressive and more confident in their ability.
I know that in the scientific world, just like in [01:10:00] every arena, there's always competition. And there have been tons of cases in science where somebody ends up having a genuine and true theory, but they're mocked initially. It can be hard to stick with your gut and maintain that confidence when your colleagues are telling you that you're crazy or that this is never going to work. It's unfortunate that we see that in all aspects of life, in all kinds of social tribes and connections. So you have to find a balance between sticking with what you believe to be true and holding up against what other people might say. This is all kind of abstract in a weird way, but as you said, you are the average of the five people you spend the most time with, as Jim Rohn put it. And it seems like what you're saying as well is that by having a mentor, somebody who is perhaps more confident, they can help raise our own internal self-belief toward being that person.
And I can also say, from [01:11:00] my own experience, that it was only once I started to accomplish my own feats of achievement that I started to have the confidence to believe I could do more and bigger things. It's unfortunate that today we're in a society that cultivates a sort of fake confidence built on fake success, where you give out a participation prize just for competing in something. That doesn't really create real confidence in one's own ability or in what you believe you're capable of. And so we end up with a society of people who never try anything, because they don't believe they can do something. I know that's something you've studied, so I'm wondering: is there any particular way to flip that switch? Or have we already covered this as much as we can?
Daniel Yon: How can you flip the switch? I think you're right that something like mentorship makes a difference. But I think [01:12:00] if the theories you use to make sense of yourself, and to build that sense of confidence, come from your own experiences, then the only way you're going to be able to rectify them is by having experiences that give you the possibility of changing your mind. And I think that's one of the things a third-person perspective from a mentor is extremely helpful for, because they can encourage you to take risks, or to pursue projects that you might internally not have the confidence for. And with that external encouragement to try, you may find that you fail, but you may find that you succeed. And if you find that you succeed, that's going to create the internal prediction error that challenges that old view, right? You were expecting to fail, you didn't, and the better update to give your belief is that you're better than you thought.
And so, in a sense, I think [01:13:00] one of the more general lessons this way of thinking about confidence presents to us is the idea that you shouldn't trust your own sense of subjective confidence too much. You should have a kind of humility about it. If you feel confident or you feel uncertain, you should be willing to step back and think: it's possible that's an accurate feeling, or it's possible it's just a feeling that's been invented for me by my brain. And so I might need to seek advice about what I should be pursuing, and I should take that advice seriously. There have been many situations where I've personally not had the confidence to pursue something, but on the advice of more experienced peers and mentors who said, "Go ahead and try," I've surprised myself by being able to do it. And I think that speaks to the value of having a wider network of people who are willing to offer a perspective on your ability that might not be the same as your own, in both positive and negative directions.
Cody: Now, you've studied a concept [01:14:00] you call volatility circuits, and I'm wondering about that. I know that uncertainty can trigger a sense of hyper-learning. Having a crisis can cause us to go out and pursue whatever we need to do to fix the crisis. When you break up in a relationship, it tends to force you to think inward: you start going to the gym, you start trying to be a better person, changing jobs, changing locations. It changes our identity in some way. And a lot of these things happen to us. If somebody close to you dies, you start thinking, "Wow, I need to live my life." Or if you get cancer and you overcome it, you have a new perception of what's possible and what you want to get out of life. I also think one thing Heman said is that one of the best ways to motivate yourself is to have somebody who doesn't believe in you. It creates this internal fire of [01:15:00] wanting to prove them wrong, and that is, unfortunately, one of the strongest forms of motivation, from what he said. So I'm wondering: are there any other ways you can artificially activate this circuit, this readiness to change?
Daniel Yon: Yeah. Most of what we've been discussing so far has been about where the brain's paradigms come from and how getting stuck with the wrong paradigm can have certain consequences for how we experience the world or how we experience ourselves. But this takes us back to the premise of the book, right? If your brain is like a scientist, that brings a new problem into focus: as a scientist, how do you know when the theories you've been working with, the lens you see the world through, should change? And the solution your brain seems to have come up with is, as you say, tracking volatility, tracking change [01:16:00] in the world. The basic premise here is that if you find the world around you is changing, then the things you thought in the past, the models you developed before, might no longer be a good fit
for the world you're going to find yourself in next, because if the world changes, the past is no longer a good predictor of what the future will hold. Now, there are circuits in your brain which track how stable your environment seems, such that when it's more stable, you stick with your existing theories. But if you find that things around you are beginning to shift, you then need your mind to change to fit the new reality you're about to enter. That means you can have this relatively global effect on lots of the theories and models in your mind when you encounter these sudden and unexpected outcomes.
So, as you were saying, in your everyday life these might include things like [01:17:00] sudden events in your family life, maybe a sudden, unexpected bereavement, or maybe an unexpected breakup in a relationship. These can be events that didn't feature in your model of how the world works, and when these large prediction errors happen, they have a quite global impact on how your brain is working. If your girlfriend breaks up with you, you don't just change your beliefs about what you're like as a romantic partner. It can have a more global effect on your general beliefs: if I thought I could rely on this part of my life and I can't, what else might I need to rethink? And there's an underlying neural circuit to this global sense of malleability and change, particularly through neurotransmitter systems like the noradrenaline system, which is also sometimes called norepinephrine. What's special about this system is that this chemical system [01:18:00] has very wide-reaching tendrils throughout the entire cortex of the brain, making it possible to have a very widespread impact on how flexible different thoughts are, in lots of different aspects of your mind simultaneously.
That means that when you find yourself in the face of a surprising, unexpected outcome, it makes both those beliefs more flexible and the wider background beliefs you hold more malleable too. Now, one way you can engage this system is by taking drugs to alter your noradrenaline levels. In the book I talk a little bit about drugs that already exist which could have this effect. I liken it to the blue pill and the red pill of The Matrix. Obviously "red pill" has taken on a different meaning nowadays, but in the [01:19:00] Matrix sense, the idea is that you could take a pill that would either allow you to see reality as it really is, turning down your theories and models and facing the incoming data of reality without those preconceptions, or you could take a pill that fixes those beliefs, making you see the world through the filter you already have and reject the world as it really is. And in the film, of course, Neo takes the red pill, because it wouldn't be a very interesting film if he just decided to stick with what he knew. But it looks like there already exist drugs that have this sort of effect, in a moderate sense, on the way we rely on our existing models.
And they do it by altering the concentration of this noradrenaline chemical in your head. Some drugs, like propranolol, seem to have the effect of antagonizing this noradrenaline system. And because the noradrenaline system is signaling volatility and change, if you antagonize the system, it means [01:20:00] you become more rigidly, stubbornly stuck with the theories you've already built. It makes it harder to change your mind, harder to update. But other drugs can have the opposite effect. Drugs like methylphenidate, which is also traded under the name Ritalin, can make it easier for your brain to tune into the changes in the world around you, making it easier to discard old hypotheses and come up with new ones.
So you could think (and I'm not necessarily advising it) that it might be possible, in the not-too-distant future, to use our knowledge of these volatility circuits to create drugs that can make you more open-minded or more closed-minded. But even without messing with your own neurochemistry, it should be possible to engage that same kind of shift if you actively try to cultivate and court that sense of uncertainty and change in your life. If this theory about how your brain works is right, it should be possible to change lots of different aspects of how [01:21:00] you think and feel just by experiencing change in one part of your life.
So at the very beginning of this very long answer, you gave the suggestion that something like traveling or moving somewhere new could have a very important impact on your sense of self. And I think it can have that effect precisely because of this volatility-tracking system. By going somewhere new, you experience lots of surprising things you didn't anticipate; you fill your brain with a new set of prediction errors. Once you've got those prediction errors, the effect is global: it makes you think, maybe there are other things that I thought would always be a certain way, and they don't need to be that way. Maybe they could also be different. And so, by experiencing change and something unexpected in one domain of your life, it becomes possible to put your brain into a state where some of the other things that might be a bit calcified and a bit stubborn become more flexible and more readily updated too.
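The volatility-tracking idea maps onto a common trick in computational models of learning: scale the learning rate by an estimate of recent surprise, so that a run of large prediction errors makes beliefs update faster. The sketch below is my illustration with invented constants, not code from the Uncertainty Lab:

```python
def volatile_update(belief, outcome, volatility, base_lr=0.1):
    """Update a belief, with the learning rate boosted by estimated volatility."""
    lr = min(1.0, base_lr * (1.0 + volatility))  # more volatility -> faster updating
    return belief + lr * (outcome - belief)

def track_volatility(volatility, prediction_error, decay=0.9):
    """Running estimate of volatility from the size of recent surprises."""
    return decay * volatility + (1 - decay) * abs(prediction_error)

belief, volatility = 0.5, 0.0
for outcome in [0.5, 0.5, 0.5, 1.0, 1.0, 1.0]:  # stable world, then a sudden shift
    error = outcome - belief
    volatility = track_volatility(volatility, error)
    belief = volatile_update(belief, outcome, volatility)
    print(f"outcome={outcome} volatility={volatility:.3f} belief={belief:.3f}")
```

While outcomes match expectations, volatility stays at zero and the belief barely moves; once the surprises start, the volatility estimate climbs and each subsequent update gets larger. In a fully "global" version of this idea, one shared volatility estimate would scale the learning rate for many unrelated beliefs at once, which is the sense in which a single surprising breakup can loosen beliefs far beyond romance.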
Cody: It's [01:22:00] interesting what you said about Ritalin, because it seems to be associated with tightening control, with strengthening focus. And it seems a lot of the dopaminergic drugs will strengthen your present focus, but to my knowledge that's tied to restricting the flow of information, tied to your core, existing beliefs, not necessarily to changing them. Did you have a particular theory going into studying Ritalin? And why does my theory of how Ritalin and ADHD drugs work differ from what your research has shown?
Daniel Yon: Well, I think there might be a way those two things could actually be compatible, right? Because in the way I'm trying to set it up, changing your mind is always a duet between the beliefs your brain has already built up, these theories and models, and the information the world is presenting at any given moment. You can think of business-as-usual, what [01:23:00] you might call "normal science" periods, where you're not changing your mind: you're holding onto the model and using it as a very firm filter. That's effectively like saying you're not really listening to the signals that are coming in. You're listening to your existing model, and when new evidence comes in, it just bounces off and doesn't really change your mind. If I then shift the balance to make it more possible to change these theories, that's a bit like turning the volume up on signals from the world. And that process of turning up the volume and really listening to the incoming signals fits with the sort of hyper-focus you're describing, right? What you're doing there is not seeing things through the filter but actually tracking, with focus, what the world is telling you right now, and that's what puts you in a position to change your mind. So I don't think they're incompatible at all. If you take this way [01:24:00] of theorizing seriously, it means the only way you ever change your mind is by really and intently listening to what the world is telling you, trying as far as possible not to filter it through your preconceptions, and allowing the world to change your mind rather than just projecting your mind onto the world in front of you.
Cody: Hmm. Do you think there's an ideal balance between the desire to evolve and staying grounded? Is there a good center?
Daniel Yon: Yeah. Well, I think there probably is an ideal balance, but it's not some platonic fixed point; it depends on the world you're living in and what's happening in your life. The key insight of this volatility-tracking idea is that you want your mind to be stable when the world is stable, and you want it to change when it needs to [01:25:00] change. In that sense, being stubborn when the world is changing is bad for you, because you start to bring old expectations and old ideas onto new situations. But changing your mind too quickly can also be very, very risky. I talk in the book about some examples of how this might have played out over the course of the pandemic, because you can think of the COVID-19 pandemic as an example of one of these extremely unexpected, model-breaking events: people did not anticipate that there would be this pandemic, or the effects it would have on the way we lived our lives.
And there were some experiments being run, coincidentally, just before lockdowns kicked in, where people were measuring, using quite simplistic, artificial psychology tasks, how people would change their minds in very simple gamified scenarios. In some of these tasks, it would be something like [01:26:00] playing a card game where you're trying to turn over the cards that have the most money hidden underneath them. Your job as the learner is to work out which deck is the good deck, but the deck can switch. So you have to work out: when should you stick with what you thought you knew? When should you change your mind? It's a very simple game, but it's supposed to give you a measure of belief updating in miniature. To begin with, the researchers working on this were just trying to connect these kinds of behaviors to different states, like paranoid, delusional thinking. But by a sort of happy (or unhappy) coincidence, they were running this study while lockdowns began to kick in across the US. And what they found was that as lockdowns kicked in, state by state, people would, quite understandably, show increases in their paranoia as the new mandates to wear masks arrived or the state troopers showed up enforcing the new rules. But [01:27:00] what was more intriguing was that this sense of global uncertainty was even affecting how they played the card game.
Even though the game didn't change, once lockdowns started, people would begin to behave as if they couldn't trust the cards anymore: they would think that the good deck was switching more often, and that the cards were not as trustworthy as they had been before.
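The card-game task Daniel describes can be sketched as a tiny reversal-learning simulation. This is an illustrative sketch only, not the actual study's code: it assumes a simple prediction-error (Rescorla-Wagner-style) update, with the learning rate standing in for how volatile the learner believes the world to be.

```python
import random

def simulate_learner(n_trials=200, switch_every=50, learning_rate=0.2, seed=0):
    """Two decks; one is 'good' (pays out 80% of the time) and the good
    deck silently switches every `switch_every` trials. The learner keeps
    a running value estimate per deck and picks the higher one. A higher
    learning rate plays the role of a stronger volatility belief: faster
    updating, quicker recovery after a switch, but more chasing of noise.
    Returns the fraction of trials on which the learner chose the good deck."""
    rng = random.Random(seed)
    values = [0.5, 0.5]          # value estimate for each deck
    good = 0                     # which deck currently pays out
    correct = 0
    for t in range(n_trials):
        if t > 0 and t % switch_every == 0:
            good = 1 - good      # the good deck switches
        choice = 0 if values[0] >= values[1] else 1
        p_reward = 0.8 if choice == good else 0.2
        reward = 1.0 if rng.random() < p_reward else 0.0
        # prediction-error update: nudge the chosen deck's value estimate
        values[choice] += learning_rate * (reward - values[choice])
        correct += (choice == good)
    return correct / n_trials

# A sluggish learner (very low rate) keeps trusting the old deck after a
# switch; an over-eager one (very high rate) chases the noisy payouts.
for lr in (0.02, 0.2, 0.9):
    print(lr, simulate_learner(learning_rate=lr))
```

The point of the sketch is the trade-off the conversation keeps returning to: being too stubborn and being too changeable both cost you, and the right updating speed depends on how often the world actually switches.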
That global change in uncertainty that people experienced during the pandemic had consequences for the other beliefs they held, which is consistent with what I was describing: one kind of uncertainty can bleed into the volatility of other beliefs you hold. So, for instance, the people with the highest levels of volatility belief in the card game, the ones who thought the decks were switching more often, also seemed to be more prone to developing conspiracist beliefs about vaccines.
You know, that the vaccines were a front for mass sterilization, or that they had [01:28:00] microchips in them. But also even broader conspiracy theories about much wider aspects of the cultural and political sphere. It looked like people had been tipped into a state where, because they'd experienced this uncertainty, their minds became changeable.
But then they became changeable in a dangerous way: it became possible for them to think and believe things they wouldn't have thought thinkable before. In that sense, we might want to have open minds, but open minds can be a dangerous thing if we're too open and become impressionable to whatever signals come our way.
We need, as you say, to have the right balance between sticking with what we know for as long as it's reliable, and not tuning out the world when it tells us that we're wrong. And I think cultivating an accurate sense of how volatile [01:29:00] the world is matters: not panicking that everything is changing and everything's going to be different, but also not ignoring change. Having a balanced view of how stable the world around us is, is important, because getting that tracking right is what makes it possible for you to have theories that fit the world you live in now, not the world you might have lived in in the past.
Mm.
Cody: And to wrap up, I guess, if somebody listening today takes away one or more ideas that could help them live a better, more self-aware life, what would you want that to be?
Daniel Yon: I think there's probably a two-pronged message. The first is that if your brain is like a scientist, there's a sense in which we are prisoners of our past experiences: our pasts build up theories that control how we perceive the world, other people, and ourselves.
And that means you should [01:30:00] have a degree of humility about how you perceive and believe, because you could be perceiving the world perfectly accurately and making perfectly sensible decisions, or you could be seeing things through a filter that's slightly tinted and tilted towards experiences you've had, but which might not reflect the world as it is now.
But the second prong is that just because the past casts this long shadow on our minds, it doesn't mean we're forever stuck. Once you start to appreciate that you can change the way you think by changing how you experience the world, by cultivating and courting uncertainty in your life, you can create the flexibility your brain needs to keep your mind open and to listen out for the change that's waiting to update the theories you hold about yourself, the world, and other people.
Cody: Hmm, great. And lastly, where can people go to learn more about your work at the [01:31:00] Uncertainty Lab, and to find your book?
Daniel Yon: Uh, well, the book is available in, in Nor Good bookshops.
It's available in June in the UK and in September in North America. And you can learn more about the lab by following us on our lab website, or by finding me on all the reputable social media sites.
Cody: Great. I think that wraps up our conversation with Dr. Daniel Yon. A huge thank you to Daniel for helping us really understand how our brains construct our reality, and what this means for personal growth and human consciousness. Be sure to check out his book, A Trick of the Mind, and the important work he's doing at The Uncertainty Lab.
And thanks for tuning in to The Mind Hack podcast. I'm Cody McLain. Remember, your reality might just be your brain's best theory, so make it a good one. See you in the next [01:32:00] episode.


