Nudging Your Worldview

When was the last time you experienced or read something that changed how you view the world? Changed it so much that you took note of it?

How we see and experience the world is a fluid thing, an amalgamation of all the experiences, information, reactions, and relationships we accumulate. While the foundations are laid early in life, all of us continue to tweak our view of the world, consciously or unconsciously, throughout our lives.

Every now and then something changes your views so clearly, it’s hard not to be conscious of the shift.

I want to talk about one such recent occurrence.

As I’ve previously touched on, it is imperative to understand and control the things that end up constructing who you are.

What Shifts Your Worldview

I find there are several activities that help develop your worldview in a broader, more understanding and nuanced direction. One is travelling and living in different countries. As Alexander von Humboldt said:

“The most dangerous worldview is the worldview of those who have not viewed the world.”

It’s easy to agree with that, just from observing the views of people who have not had a proper, broad exposure to different cultures or a diverse set of people. The unknown can be scary.

Another excellent method of broadening – and extending – one’s horizons is reading. While I find most reading interesting and gain at least some useful knowledge from almost every book I read, the first book I read this year was one of those relatively rare instances that caused some conscious shifts in thinking.

The book in question was “How Emotions Are Made: The Secret Life of the Brain” by Lisa Feldman Barrett.

A book about emotions may sound boring, or like stuff we already know.

However, as it turns out, much of the stuff we “know” about emotions is, well, wrong.

Much of the research ostensibly proving what we knew is wrong, too.

It’s not hard to see this if one looks deep enough into said research, except once something becomes widely “known” and accepted as truth, questioning it becomes comparable to heresy – even in science, and even if you have solid evidence.

Lisa Feldman Barrett is just such a heretic, and she does an extraordinary job of methodically tearing down the established understanding of emotions, backing it up with so much neurological and other research that it leaves little room to hold on to the old model, even if you don’t fully buy into the theory she proposes.

So… What’s New?

There’s obviously a lot more depth and breadth in the book than I can discuss here, but consider just the following:

There is no physical signature or universal “look” for any emotion, nor are there “fundamental” universal emotions present in everyone or every culture; emotional constructs are highly dependent on culture and even language, among other things.

It’s worth saying that another way: you cannot reliably recognize emotions from any physical measurements, visual or otherwise. It’s all highly variable and context-dependent. Furthermore, emotions are not genetically endowed, nor are they generated by some evolutionary construct of a “limbic brain” or other dedicated brain regions.

This alone throws into question an incredible array of not just research but also widely used practices and products already in the market.

In this light, solutions like MIT’s “Emotion Recognition” using Wi-Fi signals suddenly seem to be, to put it kindly, of limited use. As do the supposed “emotion interpretation” capabilities of SoftBank’s Pepper robot. Or Affectiva’s emotion recognition software. And so on.

Barrett argues for a theory of constructed emotion that explains the experience and perception of emotion and resolves the “emotion paradox”:

The emotion paradox is as follows. People have vivid and intense experiences of emotion in day-to-day life: they report seeing emotions like “anger”, “sadness”, and “happiness” in others, and they report experiencing “anger”, “sadness” and so on themselves. Nevertheless, psychophysiological and neuroscientific evidence has failed to yield consistent support for the existence of such discrete categories of experience. Instead, the empirical evidence suggests that what exists in the brain and body is affect, and emotions are constructed by multiple brain networks working in tandem.

I guess it’s needless to say that I highly recommend the book for full context and insight into this. Acknowledging that few will actually read it, though, here’s a simplified version of the theory of constructed emotion:

“In every waking moment, your brain uses past experience, organized as concepts, to guide your actions and give your sensations meaning. When the concepts involved are emotion concepts, your brain constructs instances of emotion.

Instances of emotion are constructed throughout the entire brain by multiple brain networks in collaboration. Ingredients going into this construction include interoception, concepts, and social reality. Interoceptive predictions provide information about the state of the body and ultimately produce basic, affective feelings of pleasure, displeasure, arousal, and calmness. Concepts are embodied knowledge (from your culture), including emotion concepts. Social reality provides the collective agreement and language that make the perception of emotion possible among people who share a culture.”

The implications of this go beyond humans.

Although Barrett doesn’t really touch on the topic of Artificial Intelligence, I find the constructive theory also has profound implications for developing artificial general intelligence, especially human-like AGI.

When so much of being human has to do with interoception (essentially, us having a body and the brain being highly connected to the state of the body), and being molded by a large number of other people in a social environment in unique cultural environments over a long time period, it’s questionable whether any of the current pathways towards creating an AGI are viable.

All of this also makes one appreciate the importance of culture and shared social reality – and worry about the fact that shared social reality is struggling amidst algorithms designed to provide everyone with their own unique version of reality.

Here be dragons.

Posted in Culture, Psychology

Embracing the Unknown

Think about how much people – humanity, collectively – know today.

The base of knowledge, and what we can do with it, is impressive.

Yet we act as if it’s not; many great achievements of humanity go unnoticed or unappreciated. As Samuel Arbesman has noted:

Changes are happening so rapidly that we forget to marvel at how impressive our understanding of the universe – and our ability to harness it – has become. We forget how recently we gained the ability to render three-dimensional worlds on our screens, communicate instantly across the planet, or even summon decades-old television programmes with the click of a mouse. The knowledge that has made these changes possible too often fails to inspire wonder.

Our knowledge is also impressive from a perspective of complexity; many systems we use and rely on every single day are so complex that neither you nor I usually fully understand how they work.

You’re Not Alone

In fact, they’re so complex that nobody fully understands how they work.

To paraphrase Quinn Norton, there is no one person on Earth who really knows all of what even your smartphone is doing, let alone more complicated systems.

Not understanding something can be frustrating, but not being capable of understanding something is even worse.

Where that threshold lies – that thin red line between “I don’t understand this, but I can learn” and “I don’t understand this, and I can never learn” – varies.

But it exists for all of us. Individually, and, as it turns out, collectively.

And it’s okay. It’s okay to not know something, or even most things.

What is not okay is the celebration of ignorance, which can result from the frustration of hitting that line of being unable to comprehend something.

From anti-vaxxers to deniers of all flavours, we all see the results of this trend in the world today, and it’s not pretty. As Tom Nichols warns in The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters:

Populism actually reinforces elitism, because the celebration of ignorance cannot launch communications satellites or provide for effective medications, which are daunting tasks even the dimmest citizens now demand and take for granted. Faced with a public that has no idea how most things work, experts likewise disengage, choosing to speak mostly to each other rather than to laypeople.

Unable to comprehend all of the complexity around them, [people] choose instead to comprehend almost none of it and then sullenly blame experts, politicians, and bureaucrats for seizing control of their lives.

Even among experts, “I don’t know” remains one of the hardest things to say in many situations, so sometimes – even often – people pretend ignorance is knowledge. Too often it works, as a confident but potentially wildly inaccurate opinion easily overrides cautious but healthy doubt.

Get Used to Not Knowing

As uncomfortable as admitting not understanding something is, it’s an increasingly important skill. As our world grows increasingly complex, any one person understands an ever-diminishing part of the whole.

We urgently and collectively need to come to grips with that fact of life. To survive and thrive in a world that relies on specialized knowledge, we need increased levels of trust – not the breakdown of trust we witness today.

Don’t be put off by learning how much you don’t know. The darkness was always there, surrounding you; you just had no idea how vast it was until you began probing it.

Easier said than done, that.

Still, the realization that nobody else knows everything either should provide some solace. Another potentially comforting fact is that while our current knowledge is impressive, it’s equally impressive to realize how much we don’t know.

The More You Know, The More You Realize How Much You Don’t Know

An excellent book, We Have No Idea: A Guide to the Unknown Universe, sheds some light on the many, many things humans just have no idea about – starting from the fun fact that we don’t even know what the majority of the universe is made of. (Giving it a name like “dark energy” does not mean we know what it is.)

Another great book, The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us, takes the concept of not knowing even further, and shows there is much that cannot be known and how reason, albeit powerful, is nevertheless a limited tool.

As a final bonus – or adding insult to injury, depending on your point of view – comes Chuck Klosterman’s book But What If We’re Wrong?: Thinking About the Present As If It Were the Past.

He points out that there’s a (very) good chance we’re wrong about a lot of the stuff we “know” today. We’re reminded of this occasionally, too, when new discoveries break limits once thought to be ‘fundamental’. Quite probably we’re not as wrong as we were a couple of hundred years ago, but to think we’re somehow right about everything now would be delusionally arrogant.

Dealing With It

How does one deal with all the complexity that cannot be understood?

Returning to Samuel Arbesman, he offers some advice in his book Overcomplicated: Technology at the Limits of Comprehension:

We must work to maintain two opposite states: mystery without wonder and wonder without mystery. The first requires that we strive to eliminate our ignorance, rather than simply reveling in it. And the second means that once we understand something, we do not take it for granted.

We will always be left with some mystery, but that’s okay. As long as we neither fear nor revel in it, we can take the proper perspective: humility, even with a touch of awe.

It’s not that we should stop learning just because we can’t know everything – that would be celebrating ignorance.

It’s also not that we should actively distrust others’ knowledge because they could be wrong – but neither should we blindly believe everything.

It’s not that we should think of technology as magic – but that shouldn’t prevent us from being awed by it.

Instead, strive to be curious but humble;

Critical, but open-minded;

Proud of how far we’ve come, but still in awe of far we have yet to go.

Posted in General

What if the techno-utopians are wrong? In search of alternative narratives.

What do you think when you think about the future?

Do you think about the future?

Most people don’t think about it all that often, but the topic is hard to escape amidst today’s future-focused rhetoric. It’s all about innovating for the future, disrupting the future, creating the future.

The prevailing ethos in the technology world is one of techno-utopia. It comes in several flavours; at one end there are people like Ray Kurzweil talking up the singularity as practically certain and imminent, and money pouring into life-extension startups in search of immortality – topics that would have been at the extreme fringe only a couple of decades ago, but are now practically mainstream.

More mainstream still are the poster-children of disruption, like Uber or Tesla. Even for the old-school technology companies like Cisco, the goal is to change the future and own it.

If you don’t, you’ve got nobody but yourself to blame when the future steamrolls over you.

Or so goes the dogma.

I’ve always been uncomfortable with the unfettered – one could also say unhinged – technological optimism exhibited by this now-dominant way of thinking. This is why I’m reluctant to call myself a futurist, even if the work I do strongly aligns with that; I have found the vast majority of futurists to be uncritical techno-utopians, approaching all the world’s problems – even the intractable ones – with a quasi-religious faith in technological solutions.

As a result, I’ve been in search of two things: either proof that the techno-utopians are right, or alternative narratives.

Proof, but of what?

In search of proof that the techno-utopians are right, one finds no shortage of lofty promises, confident statements, and books extolling how technology X is going to change everything – take Jeremy Rifkin’s The Zero Marginal Cost Society as just one example out of many. It is easy to mistake these for proof.

Except when you dig deeper – often just scratching the surface will suffice – what emerges is a very different picture. When you look at the actual results of technological solutionism in a very real, data-driven sense, as Kentaro Toyama did in Geek Heresy: Rescuing Social Change from the Cult of Technology, the data shows something else. It shows that technology is not in and of itself a solution to pretty much anything – instead, it’s an amplifier. And on average, humanity is amplifying many of the wrong things with it.

The average is important; for some things the average is the only thing that matters. Take climate change for example; for all the talk of emissions reductions, and for all the renewable generation installed, none of it matters until and unless we see a fundamental change in the trajectory of this chart – global CO2 concentration:

To say that the data shows no improvement would be putting it overly optimistically; the annual mean growth rate has been rising steadily for decades, from less than 1 ppm/yr in the 1960s to over 2.3 ppm/yr in the 2010s so far. It’s not just getting worse – it’s getting worse faster.

Alternative narratives

The world infused with techno-optimism is arguably a bubble. After all, I – and with significant likelihood you – lead a relatively privileged life in a relatively privileged country. But it’s a dangerous extrapolation to think that just because we’re fine, it’ll continue to be so for us, and that everyone else – and our civilisation – will be fine, too.

In light of data showing otherwise, we need alternative narratives to the techno-utopian visions of the future.

Douglas Rushkoff admirably discusses the overarching obsession with growth and all its implications in his book Throwing Rocks at the Google Bus, but growth is just one narrow aspect of the prevailing worldview. Another appealing and useful narrative, along with practical techniques for real sustainability, can be found in permaculture – a sadly topical concept, considering one of its originators, Bill Mollison, passed away recently.

One broader-reaching and more fundamental alternative narrative has been constructed by preppers – the survivalist movement. These are people who have come to the conclusion that civilisation itself is about to fall apart, potentially dramatically (for any variety of reasons). The preppers prepare for – and one cannot escape the feeling that on some level they wish for – an apocalypse of sorts.

Their approach may seem the polar opposite of the futurists’. Yet, as Hal Niedzviecki puts it, “at their core, both the technologists and the preppers have secular belief systems rooted in a sense of superiority over others.”

Niedzviecki’s book Trees on Mars is wonderful, though challenging and perhaps unsettling. All of which are good reasons for reading it. Among other things, it introduced me to yet another narrative, a kind of a middle ground-approach in the form of the Dark Mountain Project.

The Dark Mountain Project has outlined its thought framework in what it calls the eight principles of ‘uncivilisation’. I quote them in full below. Even if – or especially if – you’re in one of the extreme camps, techno-utopian or doomer, this alternative narrative is worth considering:


1. We live in a time of social, economic and ecological unravelling. All around us are signs that our whole way of living is already passing into history. We will face this reality honestly and learn how to live with it.

2. We reject the faith which holds that the converging crises of our times can be reduced to a set of ‘problems’ in need of technological or political ‘solutions’.

3. We believe that the roots of these crises lie in the stories we have been telling ourselves. We intend to challenge the stories which underpin our civilisation: the myth of progress, the myth of human centrality, and the myth of our separation from ‘nature’. These myths are more dangerous for the fact that we have forgotten they are myths.

4. We will reassert the role of storytelling as more than mere entertainment. It is through stories that we weave reality.

5. Humans are not the point and purpose of the planet. Our art will begin with the attempt to step outside the human bubble. By careful attention, we will reengage with the non-human world.

6. We will celebrate writing and art which is grounded in a sense of place and of time. Our literature has been dominated for too long by those who inhabit the cosmopolitan citadels.

7. We will not lose ourselves in the elaboration of theories or ideologies. Our words will be elemental. We write with dirt under our fingernails.

8. The end of the world as we know it is not the end of the world full stop. Together, we will find the hope beyond hope, the paths which lead to the unknown world ahead of us.

It would be wrong to say I subscribe fully to the Dark Mountain manifesto; at the same time, it would be wrong to say I don’t find it appealing to an extent. There is a refreshing sense of intellectual honesty both in rejecting the notion of easy answers when there are none, and in dismissing unfounded faith in ‘progress’ where data shows otherwise.

Where does this lead us to?

The good thing about narratives is that you’re not restricted to finding one; you can help create one – and not just for yourself, but for others.

There is also a realisation I would like more people to come to: rejecting utopian promises of technology does not make one anti-technology. Somewhat ironically, pragmatism – which will inevitably lead to opting out of some of the hype – is rooted in the very approach that is supposedly the driving force of technological progress: being data-driven.

It is a topic I’ve touched on before, but this time it’s broader – it’s time to acknowledge the data and evidence of where the world is at and where it’s going.

Even if we don’t like what the data shows.

As put by the Dark Mountain Manifesto, “We will face [this] reality honestly and learn how to live with it.”

Posted in Culture, Psychology, Technology

Data-Driven Bad & why we need to raise our game

The term ‘data-driven’ is today used almost exclusively in a positive tone, perhaps because it implies logic, or decisions somehow based on impartial facts (nb. data != facts, nor is data impartial). But there’s a substantially more sombre side to all of it – one of data-driven bad.

You may not think of it, but you are a victim of data-driven bad fairly constantly. That completely irrelevant ‘targeted’ ad on Facebook or wherever? Data-driven bad. Amazon or your store loyalty program sending naïve suggestions or pushing products you’d never dream of buying? Data-driven bad.

But it gets much worse.

From policy decisions to business strategies, being ‘data-driven’ can go wrong in many ways.

The most obvious situation is when someone – the government comes to mind pretty often – claims a decision is data-driven when in fact it is not: the data is either non-existent or made up. Why do they bother? Because it works; as Joe Romm notes in his great book Language Intelligence, “many studies find that repeated exposure to a statement increases its acceptance as true.”

In cases where some actual data is present, one of the most common pitfalls is selection bias, i.e. paying attention to just the data or the results that best fit your preconceptions. As I wrote earlier, this tendency to ignore undesirable data can lead entire organisations to act irrationally.

Good data, bad data – but can you even tell the difference?

Data in and of itself is often not particularly useful. It needs to be analysed one way or another to make use of it effectively, or to uncover insights. The results of your data-driven endeavour depend on many things, but let’s look at the two obvious ones: quality of the data and quality of the analysis.


Seems straightforward enough, right? Just do good analysis on good data and it’s all good, right?

Well, kind of.

The problem is that most data is not of particularly high quality. And even when you think it is, it may not be.

This is well illustrated by the recent discovery of a 15-year-old bug in software used to analyse functional MRI (fMRI) images. The cool brain-activity scans in those “how the brain works” articles? That’s fMRI.

And the bug? It caused false positive rates as high as 70%.

How bad can it be, you may ask? Surely it’s just a matter of minor re-calibration of some results.

Unfortunately no. As The Register put it, “that’s not a gentle nudge that some results might be overstated: it’s more like making a bonfire of thousands of scientific papers.” The validity of some 40,000 fMRI studies spanning more than a decade is now in question.
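The actual bug involved flawed statistical assumptions in cluster-wise inference, which is beyond the scope of this post. But the general mechanism behind inflated false positive rates – running many tests without correcting for multiple comparisons – is easy to see for yourself. Here’s a toy simulation in Python (a hedged sketch of the multiple-comparisons problem in general, not a model of the fMRI software itself; the numbers are illustrative):

```python
import random

random.seed(42)

ALPHA = 0.05        # per-test significance threshold
N_TESTS = 100       # independent tests per "study" (real fMRI involves far more voxels)
N_STUDIES = 10_000  # simulated studies, all with NO true effect anywhere

# Under the null hypothesis, each test comes up "significant" purely by
# chance with probability ALPHA. Count how many studies report at least
# one (false) positive finding.
studies_with_false_positive = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(N_STUDIES)
)

fwer = studies_with_false_positive / N_STUDIES
# Analytically, the family-wise error rate is 1 - (1 - 0.05)**100 ≈ 0.994:
# nearly every study "finds" something, despite there being nothing to find.
print(f"Family-wise false positive rate: {fwer:.3f}")
```

In other words, with a hundred uncorrected tests, a pure-noise study is almost guaranteed to produce a “significant” result – which is why getting the correction statistics right matters so much.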


When much of a field that prided itself on being data-driven, and on using state-of-the-art equipment to acquire said data, is now under the shadow of data-driven bad, just how confident are you in your organisation’s capability to ensure the quality of its data?

Actually, before you answer that, you should keep in mind that everything is broken – bugs, mistakes and errors leading to unexpected behaviour are everywhere.

Good analysis, bad analysis

Even when you do have good data, it’s not enough. Just like the standard financial results disclaimer of “past performance is no guarantee of future results” falls on deaf ears, so does the #1 principle of statistics, “correlation does not imply causation”.

Both so obviously true, and yet usually ignored – because the alternative is hard. When there is a compelling story to be extracted out of good data and a clear correlation, why bother with the analysis bit?

Not to worry, Big Data is here to make it worse….wait, what?

Close to a decade ago, Wired’s editor-in-chief Chris Anderson embraced this line of thinking, stating “with enough data, the numbers speak for themselves”.

He neglected to mention that if we let the numbers speak for themselves, even good data can lie through its non-existent teeth.

One major problem is that Big Data makes the discovery of spurious correlations so much easier. As noted by one paper on the topic, “too much information tends to behave like very little information”.
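To make that concrete, here’s a small illustrative sketch (plain Python; the scenario and names are my own invention): generate one random-walk “metric”, then scan a thousand equally random candidate “drivers” for the one that correlates best with it. With enough candidates, an impressive-looking correlation is almost guaranteed – even though every series is pure noise:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)

def random_walk(n=50):
    """A trending-looking but entirely random time series."""
    out, total = [], 0.0
    for _ in range(n):
        total += random.gauss(0, 1)
        out.append(total)
    return out

target = random_walk()  # the "metric" we want to explain

# Scan 1,000 unrelated random series for the best-looking "driver".
best = max(abs(pearson(target, random_walk())) for _ in range(1000))
print(f"Best spurious correlation out of 1,000 candidates: {best:.2f}")
```

The best match will typically show a correlation strong enough to headline a business case – despite carrying zero causal information. The more variables a dataset holds, the cheaper such “discoveries” become.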

Hand on your hearts now, data scientists – how often do you really dig into the data to verify that your correlation is, in fact, causation?

I’ll venture a guess; exceedingly rarely.

Demand more of ourselves

I’m not against data; quite the opposite. I’m not even against Big Data. But I am vehemently against using it just because, or using it to somehow replace insight or theory.

Luckily there is, if not a solution, at least a way to improve the situation.

As Nate Silver puts it in The Signal and the Noise:

Data-driven predictions can succeed – and they can fail. It is when we deny our role in the process that the odds of failure rise. Before we demand more of our data, we need to demand more of ourselves.

In reality, the world often does the opposite – we use data to make life easier for us, to demand less of ourselves. We let the data speak for itself, while doing anything from fabricating the data in the first place to neglecting to check whether what it says makes any sense whatsoever.

But when we lose understanding and forget the theory, we find it becomes very easy to mistake those correlations for causation and trust the ‘data’ – no matter how misguided it may be.

It’s time to demand more of ourselves; only then can we demand more of the data.

Posted in ICT-stuff