Embracing the Unknown

Think about how much people – humanity, collectively – know today.

The base of knowledge, and what we can do with it, is impressive.

Yet we act as if it’s not; many great achievements of humanity go unnoticed or unappreciated. As Samuel Arbesman has noted:

Changes are happening so rapidly that we forget to marvel at how impressive our understanding of the universe – and our ability to harness it – has become. We forget how recently we gained the ability to render three-dimensional worlds on our screens, communicate instantly across the planet, or even summon decades-old television programmes with the click of a mouse. The knowledge that has made these changes possible too often fails to inspire wonder.

Our knowledge is also impressive from the perspective of complexity; many of the systems we use and rely on every single day are so complex that you and I don’t usually fully understand how they work.

You’re Not Alone

In fact, they’re so complex that nobody fully understands how they work.

To paraphrase Quinn Norton, there is no one person on Earth who really knows all of what even your smartphone is doing, let alone more complicated systems.

Not understanding something can be frustrating, but not being capable of understanding something is even worse.

Where that threshold lies – that thin red line between “I don’t understand this, but I can learn” and “I don’t understand this, and I can never learn” – varies.

But it exists for all of us. Individually, and, as it turns out, collectively.

And it’s okay. It’s okay to not know something, or even most things.

What is not okay is the celebration of ignorance, which can result from the frustration of hitting that line of being unable to comprehend something.

From anti-vaxxers to deniers of all flavours, we all see the results of this trend in the world today, and it’s not pretty. As Tom Nichols warns in The Death of Expertise: The Campaign Against Established Knowledge and Why it Matters:

Populism actually reinforces elitism, because the celebration of ignorance cannot launch communications satellites or provide for effective medications, which are daunting tasks even the dimmest citizens now demand and take for granted. Faced with a public that has no idea how most things work, experts likewise disengage, choosing to speak mostly to each other rather than to laypeople.

Unable to comprehend all of the complexity around them, [people] choose instead to comprehend almost none of it and then sullenly blame experts, politicians, and bureaucrats for seizing control of their lives.

Even among experts, “I don’t know” remains one of the hardest things to say in many situations, so sometimes – even often – people pretend ignorance is knowledge. Too often it works, as a confident but potentially wildly inaccurate opinion easily overrides cautious but healthy doubt.

Get Used to Not Knowing

As uncomfortable as admitting not understanding something is, it’s an increasingly important skill. As our world grows increasingly complex, any one person understands an ever-diminishing part of the whole.

We urgently and collectively need to come to grips with that fact of life. To survive and thrive in a world that relies on specialized knowledge, we need increased levels of trust – not the breakdown of trust we witness today.

Don’t be put off by learning how much you don’t know. The darkness was always there, surrounding you; you just had no idea how vast it was until you began probing it.

Easier said than done, that.

Still, the realization that nobody else knows everything either should provide some solace. Another potentially comforting fact is that while our current knowledge is impressive, it’s equally impressive to realize how much we don’t know.

The More You Know, The More You Realize How Much You Don’t Know

An excellent book, We Have No Idea: A Guide to the Unknown Universe, sheds some light on the many, many things humans just have no idea about – starting from the fun fact that we don’t even know what the majority of the universe is made of. (Giving it a name like “dark energy” does not mean we know what it is.)

Another great book, The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us, takes the concept of not knowing even further, and shows there is much that cannot be known and how reason, albeit powerful, is nevertheless a limited tool.

As a final bonus – or adding insult to injury, depending on your point of view – comes Chuck Klosterman’s book But What If We’re Wrong?: Thinking About the Present As If It Were the Past.

He points out that there’s a (very) good chance we’re wrong about a lot of the stuff we “know” today. We’re reminded of this occasionally, too, when new discoveries break limits once thought to be ‘fundamental’. Quite probably we’re not as wrong as we were a couple of hundred years ago, but to think we’re somehow right about everything now would be delusionally arrogant.

Dealing With It

How does one deal with all the complexity that cannot be understood?

Returning to Samuel Arbesman, he offers some advice in his book Overcomplicated: Technology at the Limits of Comprehension:

We must work to maintain two opposite states: mystery without wonder and wonder without mystery. The first requires that we strive to eliminate our ignorance, rather than simply reveling in it. And the second means that once we understand something, we do not take it for granted.

We will always be left with some mystery, but that’s okay. As long as we neither fear nor revel in it, we can take the proper perspective: humility, even with a touch of awe.

It’s not that we should stop learning just because we can’t know everything – that would be celebrating ignorance.

It’s also not that we should actively distrust others’ knowledge because they could be wrong – but neither should we blindly believe everything.

It’s not that we should think of technology as magic – but that shouldn’t prevent us from being awed by it.

Instead, strive to be curious but humble;

Critical, but open-minded;

Proud of how far we’ve come, but still in awe of how far we have yet to go.


What if the techno-utopians are wrong? In search of alternative narratives.

What do you think when you think about the future?

Do you think about the future?

Most people don’t all that often, but the topic is hard to escape amidst today’s future-focused rhetoric. It’s all about innovating for the future, disrupting the future, creating the future.

The prevailing ethos in the technology world is one of techno-utopia. It comes in several flavours: at one end there are people like Ray Kurzweil talking up the singularity as practically certain and imminent, and money pouring into life-extension startups in search of immortality – topics that would have been at the extreme fringe only a couple of decades ago, but are now practically mainstream.

More mainstream still are the poster-children of disruption, like Uber or Tesla. Even for the old-school technology companies like Cisco, the goal is to change the future and own it.

If you don’t, you’ve got nobody but yourself to blame when the future steamrolls over you.

Or so goes the dogma.

I’ve always been uncomfortable with the unfettered – one could also say unhinged – technological optimism exhibited by this now-dominant way of thinking. This is why I’m reluctant to call myself a futurist, even if the work I do strongly aligns with that; I have found the vast majority of futurists to be uncritical techno-utopians, approaching all the world’s problems – even the intractable ones – with a quasi-religious faith in technological solutions.

As a result I’ve been in search of two things: either proof that the techno-utopians are right, or alternative narratives.

Proof, but of what?

In search of proof that the techno-utopians are right, one finds no shortage of lofty promises and confident statements plus books extolling how technology X is going to change everything – take Jeremy Rifkin’s Zero Marginal Cost Society as just one example out of many. It is easy to mistake these for proof.

Except when you dig deeper – often just scratching the surface will suffice – what emerges is a very different picture. For when you look at the actual results of technological solutionism, in a very real data-driven sense, as Kentaro Toyama did in Geek Heresy: Rescuing Social Change from the Cult of Technology, the data shows something else. It shows that technology is not in and of itself a solution to pretty much anything – instead, it’s an amplifier. And on average, humanity is amplifying many of the wrong things with it.

The average is important; for some things the average is the only thing that matters. Take climate change for example; for all the talk of emissions reductions, and for all the renewable generation installed, none of it matters until and unless we see a fundamental change in the trajectory of this chart – global CO2 concentration:

To say that the data shows no improvement would be putting it overly optimistically; the annual mean growth rate has been rising steadily for decades, from less than 1 ppm/yr in the 1960s to over 2.3 ppm/yr in the 2010s so far. It’s not just getting worse – it’s getting worse faster.
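For the numerically inclined, here’s a minimal sketch of that growth-rate arithmetic. The concentration values below are approximate annual means I’ve rounded for illustration; consult the official NOAA Mauna Loa record for the real series.

```python
# Rough check of the growth-rate arithmetic. The values below are approximate
# annual means, quoted for illustration only -- use the official NOAA
# Mauna Loa record for the real series.
annual_ppm = {
    1960: 316.9, 1970: 325.7, 1980: 338.8,
    1990: 354.4, 2000: 369.5, 2010: 389.9, 2016: 404.2,
}

years = sorted(annual_ppm)
for start, end in zip(years, years[1:]):
    rate = (annual_ppm[end] - annual_ppm[start]) / (end - start)
    print(f"{start}-{end}: average growth {rate:.2f} ppm/yr")
# With these figures: ~0.9 ppm/yr across the 1960s, ~2.4 ppm/yr for 2010-2016.
```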

Alternative narratives

The world infused with techno-optimism is arguably a bubble. After all, I – and with significant likelihood you – lead a relatively privileged life in a relatively privileged country. But it’s a dangerous extrapolation to think that just because we’re fine now, we’ll continue to be fine, and that everyone else – and our civilisation – will be fine, too.

In light of data showing otherwise, we need alternative narratives to the techno-utopian visions of the future.

Douglas Rushkoff discusses the overarching obsession with growth and all its implications admirably in his book Throwing Rocks at the Google Bus, but growth is just one narrow aspect of the prevailing worldview. Another appealing and useful narrative, along with practical techniques for real sustainability, can be found in Permaculture – a sadly topical concept considering that one of its originators, Bill Mollison, passed away recently.

One broader-reaching and more fundamental alternative narrative has been constructed by preppers – the survivalism movement. These are people who have come to the conclusion that civilisation itself is about to fall apart, potentially dramatically (for any of a variety of reasons). The preppers prepare for – and one cannot escape the feeling that they on some level wish for – an apocalypse of sorts.

Their approach may seem the polar opposite of the futurists’. Yet, as Hal Niedzviecki puts it, “at their core, both the technologists and the preppers have secular belief systems rooted in a sense of superiority over others.”

Niedzviecki’s book Trees on Mars is wonderful, though challenging and perhaps unsettling – all of which are good reasons for reading it. Among other things, it introduced me to yet another narrative, a kind of middle-ground approach, in the form of the Dark Mountain Project.

The Dark Mountain Project has outlined its thought framework in what it calls the eight principles of ‘uncivilisation’. I quote them in full below. Even if – or especially if – you’re in one of the extreme camps, techno-utopian or doomer, this alternative narrative is worth considering:

THE EIGHT PRINCIPLES OF UNCIVILISATION

1. We live in a time of social, economic and ecological unravelling. All around us are signs that our whole way of living is already passing into history. We will face this reality honestly and learn how to live with it.

2. We reject the faith which holds that the converging crises of our times can be reduced to a set of ‘problems’ in need of technological or political ‘solutions’.

3. We believe that the roots of these crises lie in the stories we have been telling ourselves. We intend to challenge the stories which underpin our civilisation: the myth of progress, the myth of human centrality, and the myth of our separation from ‘nature’. These myths are more dangerous for the fact that we have forgotten they are myths.

4. We will reassert the role of storytelling as more than mere entertainment. It is through stories that we weave reality.

5. Humans are not the point and purpose of the planet. Our art will begin with the attempt to step outside the human bubble. By careful attention, we will reengage with the non-human world.

6. We will celebrate writing and art which is grounded in a sense of place and of time. Our literature has been dominated for too long by those who inhabit the cosmopolitan citadels.

7. We will not lose ourselves in the elaboration of theories or ideologies. Our words will be elemental. We write with dirt under our fingernails.

8. The end of the world as we know it is not the end of the world full stop. Together, we will find the hope beyond hope, the paths which lead to the unknown world ahead of us.

It would be wrong to say I subscribe fully to the Dark Mountain manifesto; at the same time, it would be wrong to say I don’t find it appealing to an extent. There is a refreshing sense of intellectual honesty both in rejecting the notion of easy answers when there are none, and in dismissing unfounded faith in ‘progress’ where data shows otherwise.

Where does this lead us to?

The good thing about narratives is that you aren’t restricted to finding one; you can help create one – and not just for yourself, but for others.

There is also a realisation that I would like more people to come to: rejecting the utopian promises of technology does not make one anti-technology. Somewhat ironically, pragmatism – which will inevitably lead to opting out of some of the hype – is rooted in an approach that is supposedly the driving force of technological progress: being data-driven.

It is a topic I’ve touched on before, but this time it’s broader – it’s time to acknowledge the data and evidence of where the world is at and where it’s going.

Even if we don’t like what the data shows.

As put by the Dark Mountain Manifesto, “We will face [this] reality honestly and learn how to live with it.”


Data-Driven Bad & why we need to raise our game

The term ‘data-driven’ is today used almost exclusively in a positive tone, perhaps because it implies logic, or decisions that are somehow based on impartial facts (nb. data != facts, nor is data impartial). But there’s a substantially more sombre reality to all of it – one of data-driven bad.

You may not think about it, but you are a victim of data-driven bad fairly constantly; that completely irrelevant ‘targeted’ ad on Facebook or wherever? Data-driven bad. Amazon or your store loyalty program either sending naïve suggestions or pushing products you’d never dream of buying? Data-driven bad.

But it gets much worse.

From policy decisions to business strategies, being ‘data-driven’ can go wrong in many ways.

The most obvious situation is when someone – the government comes to mind pretty often – claims some decision is data-driven when in fact it is not. The data is either non-existent or made up. Why do they bother? Because it works; as noted by Joe Romm in his great book Language Intelligence, “many studies find that repeated exposure to a statement increases its acceptance as true.”

In cases where some actual data is present, one of the most common pitfalls is selection bias, i.e. paying attention to just the data or the results that best fit your preconceptions. As I wrote earlier, this tendency to ignore undesirable data can result in entire organisations acting irrationally.

Good data, bad data – but can you even tell the difference?

Data in and of itself is often not particularly useful. It needs to be analysed one way or another to make use of it effectively, or to uncover insights. The results of your data-driven endeavour depend on many things, but let’s look at the two obvious ones: quality of the data and quality of the analysis.


Seems straightforward enough, right? Just do good analysis on good data and it’s all good, right?

Well, kind of.

The problem is, most data is not of particularly high quality. And even when you think it is, it may not be.

This is well illustrated by a recent discovery of a 15-year-old bug in software used to analyse functional MRI (fMRI) images; the cool brain activity scans in those “how the brain works” articles? That’s fMRI.

And the bug? It caused false positive rates as high as 70%.

How bad can it be, you may ask? Surely it’s just a matter of minor re-calibration of some results.

Unfortunately no. As The Register put it, “that’s not a gentle nudge that some results might be overstated: it’s more like making a bonfire of thousands of scientific papers.” The validity of some 40,000 fMRI studies spanning more than a decade is now in question.

Whoops.
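To get a feel for why a broken correction procedure matters so much, here’s a toy simulation of my own; it is not a reproduction of the fMRI bug (which involved flawed cluster-wise correction assumptions), just an illustration of how quickly false positives pile up when many tests run on pure noise without any correction at all.

```python
# Toy illustration only -- not the actual fMRI bug. Run many uncorrected
# significance tests on pure noise and count how often at least one of them
# comes out "significant".
import random
import statistics

random.seed(42)

def any_false_positive(n_tests=100, n_samples=20, threshold=2.09):
    """Run n_tests one-sample t-tests on pure noise; return True if any looks
    'significant' (|t| > ~2.09 is roughly p < 0.05 for df = 19)."""
    for _ in range(n_tests):
        sample = [random.gauss(0, 1) for _ in range(n_samples)]
        t = statistics.mean(sample) / (statistics.stdev(sample) / n_samples ** 0.5)
        if abs(t) > threshold:
            return True
    return False

trials = 1000
rate = sum(any_false_positive() for _ in range(trials)) / trials
print(f"Experiments with at least one false positive: {rate:.0%}")
# With 100 uncorrected tests per experiment, this lands well above 90%.
```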

When much of a field that prided itself on being data-driven and using state-of-the-art equipment to acquire said data is now under the shadow of data-driven bad, just how confident are you in your organisation’s capability to ensure the quality of its data?

Actually, before you answer that, you should keep in mind that everything is broken – bugs, mistakes and errors leading to unexpected behaviour are everywhere.

Good analysis, bad analysis

Even when you do have good data, it’s not enough. Just like the standard financial results disclaimer of “past performance is no guarantee of future results” falls on deaf ears, so does the #1 principle of statistics, “correlation does not imply causation”.

Both so obviously true, and yet usually ignored – because the alternative is hard. When there is a compelling story to be extracted out of good data and a clear correlation, why bother with the analysis bit?

Not to worry, Big Data is here to make it worse… wait, what?

Close to a decade ago, Wired’s editor-in-chief Chris Anderson embraced this line of thinking, stating “with enough data, the numbers speak for themselves”.

He neglected to mention that if we let the numbers speak for themselves, even good data can lie through its non-existent teeth.

One major problem is that Big Data makes the discovery of spurious correlations so much easier. As noted by one paper on the topic, “too much information tends to behave like very little information”.
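A quick sketch of that effect (my own toy example, not from the paper): generate thousands of pure-noise “features”, screen them against an equally random target, and the best correlation will usually look impressively strong.

```python
# Sketch: spurious correlations appear by chance once you screen enough variables.
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)

n_points = 30         # observations
n_features = 10_000   # candidate "big data" variables to screen

target = [random.gauss(0, 1) for _ in range(n_points)]

best_r = max(
    abs(statistics.correlation([random.gauss(0, 1) for _ in range(n_points)], target))
    for _ in range(n_features)
)
print(f"Strongest |r| among {n_features} pure-noise features: {best_r:.2f}")
# Typically around 0.6 or higher -- an apparently "strong" relationship with
# zero causal content behind it.
```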

Hand on your hearts now, data scientists – how often do you really dig into the data to verify that your correlation is, in fact, causation?

I’ll venture a guess; exceedingly rarely.

Demand more of ourselves

I’m not against data, quite the opposite. I’m not even against Big Data. But I am vehemently against using it just because, or to somehow replace insight or theory.

Luckily there is, if not a solution, at least a way to improve the situation.

As Nate Silver puts it in The Signal and the Noise:

Data-driven predictions can succeed – and they can fail. It is when we deny our role in the process that the odds of failure rise. Before we demand more of our data, we need to demand more of ourselves.

In reality, the world often does the opposite – we use data to make life easier for us, to demand less of ourselves. We let the data speak for itself, while doing anything from fabricating the data in the first place to neglecting to check whether what it says makes any sense whatsoever.

But when we lose understanding and forget the theory, we find it becomes very easy to mistake those correlations for causation and trust the ‘data’ – no matter how misguided it may be.

It’s time to demand more of ourselves; only then can we demand more of the data.


Reality Check on Autonomous Cars

Not a week goes by without someone extolling the imminent virtues of autonomous vehicles. It’s gotten to the point where some see them solving pretty much everything. Which, obviously, they’re not going to do – so let’s take a quick look at what impacts we can really expect, and when.

[Image: a train]

What, by the way, is a train image doing on a post about autonomous cars? It’s because trains already provide the user experience that people are most likely to rave about when they get their autonomous car. We’ll return to this later.

Some, like RAND, provide good, balanced reports on autonomous cars. Others, like Accenture, focus on the eccentric (“Autonomous vehicles can expand consumers’ access to banking, using the car to pay for fuel and tolls” – wot?! :)), while yet others go for the utopia model. This Investopedia article is a good example: autonomous vehicles will supposedly “dramatically reduce the number of cars” and make for “greener urban areas”, among other impacts.

Let me begin by saying I love the concept of autonomous cars, and I believe they will be a net benefit to society and will change a number of things – some dramatically, some less so.

Will they reduce vehicle ownership? Maybe, but it’ll be a reduction, not elimination, and the shift is likely to be generational in timeframe. While fewer young people now own a car, those predicting significant shifts in attitudes to ownership ought to keep in mind humans aren’t exactly logical creatures, and ownership of “stuff” is rarely driven by rational reasons alone. I can also see the reverse trend being possible, such as affluent people buying cars for their kids – kids too young to drive themselves – to get around with.

Will they reduce accidents? Very likely yes, and this is their single biggest human benefit. I fully expect that in another 50 years, driving yourself on normal roads will be either illegal or ridiculously expensive to insure.

Will they reduce emissions? No. Of course, their driving style might be more efficient, but that alone is not a huge impact – especially when you factor in the increased miles driven (see below). Even if we assume they’ll all be electric, that won’t automatically have a positive impact – because depending on the way your electricity is produced, EVs can pollute more than traditional combustion engine-equipped cars. This is the case here in Victoria for example. Thanks, coal.
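The back-of-the-envelope logic looks roughly like this; every figure in the sketch (EV consumption, grid intensity, fuel economy, fuel emissions factor) is an illustrative assumption of mine, not a measured value for Victoria or anywhere else.

```python
# Back-of-the-envelope comparison of per-km CO2 for an EV versus a petrol car.
# All figures are illustrative assumptions, not measured values.
def ev_gco2_per_km(grid_gco2_per_kwh, kwh_per_100km=18):
    """Grid emissions attributable to driving an EV."""
    return grid_gco2_per_kwh * kwh_per_100km / 100

def petrol_gco2_per_km(litres_per_100km=7, gco2_per_litre=2310):
    """Tailpipe emissions of a petrol car (~2.3 kg CO2 per litre burned)."""
    return litres_per_100km * gco2_per_litre / 100

for label, grid in [("low-carbon grid", 100), ("coal-heavy grid", 1100)]:
    print(f"EV on {label} ({grid} gCO2/kWh): {ev_gco2_per_km(grid):.0f} gCO2/km")
print(f"Petrol car at 7 L/100km: {petrol_gco2_per_km():.0f} gCO2/km")
# With these assumptions the EV on a coal-heavy grid (~198 g/km) comes out
# worse than an efficient petrol car (~162 g/km).
```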

Will they reduce miles driven? Almost certainly not. This is even contradictory to the other often expected impact of reducing vehicle ownership.

How so? Barring substantial changes in human behaviour (e.g. dramatically reduced mobility), the only way to reduce the number of vehicles is to use them more efficiently – share them. Given their (then autonomous) movement from one passenger in need of them to the next, they will actually increase miles driven – and as such, also increase emissions, and congestion.

Even if we assume a more conservative “no sharing” deployment of autonomous cars, they’ll be driven more and will worsen congestion. Imagine: you drive – well, are driven – to work. Do you tell the car then to park in the busy CBD area at a cost of, say, $50 per day – or tell it to go find a cheaper spot further away? As long as it’s there to pick you up when you get off work, chances are you’ll send it somewhere cheaper. Or when you need to stop somewhere for 10 minutes and there are no parking spots? Just tell the car to drive around until it’s time to pick you up.

Not to mention that when driving becomes more convenient, people are likely to do it more. For example: why fly from Melbourne to Sydney, when you can just head out in the evening in your comfortable, self-driving vehicle that provides a lie-flat bed for you to sleep in? On the less extreme end of the scale, they will likely make longer commutes more feasible – again leading to more miles driven.

Will they eliminate traffic jams and ease congestion? No, as above. But they will make traffic jams more bearable so I guess that’s something.

What about the user experience; what will that be like? Calling for your car like KITT was summoned in Knight Rider will be neat alright, but the most meaningful impact is elsewhere: we all know driving in traffic jams, in technical terms, sucks. It’s stressful, and for some, road rage ensues. Having an autonomous car will improve that experience significantly – it’ll reduce stress as the “driver” can concentrate on other things: relaxing, working, reading, even sleeping.

Which brings us back to the trains. I suspect something along the lines of “my commutes are SO much better with my self-driving car” will be the most common thing the owners of autonomous vehicles will be raving about.

I find that rather ironic, because that “hey, I can read a book or sleep during my commute!” experience is exactly what users of public transport, where available and done right, have enjoyed for decades. Everything old will be new again 🙂

Over time, Level 4 autonomous vehicles will allow a complete re-design of the car interior, which has already led to some exciting concepts. But again: if you make car travel something really enjoyable, it’s likely to lead to more of it being done.

Timing

But most of all, none of this will happen overnight. The current commercially sold state-of-the-art vehicles are Teslas, which are “only” Level 3 autonomous vehicles. Level 3, while providing added convenience and safety, doesn’t yet allow any of the significant societal changes to take place – cars aren’t truly autonomous until they reach Level 4, which is still some way off.

A number of manufacturers have stated they will have autonomous models on the market by 2020. Chances are the majority will be Level 3 autonomous. Making an extremely optimistic assumption, let’s say that Level 4 autonomous capabilities – for all roads and all conditions – are perfected by 2025 (which I think is overly optimistic), and that they will be in every single vehicle sold in 2030.

There is little reason to believe that vehicle replacement cycles have the potential to dramatically quicken – and there would be production bottlenecks to overcome, too. That means that by 2040, “only” approximately 50% of the vehicles will be autonomous, and we’d approach 90% penetration only by 2050. Some benefits of autonomous cars, such as increased road capacity (through smaller distances between cars and potentially higher speeds), won’t reach their full potential until penetration is dominant.

If 2050 seems like “too long”, consider that the average age of vehicles is about 10 years, give or take a couple of years (11.5 years in the USA). And there’s a long tail – did you know there are 14 million vehicles over 25 years old on the road in the USA, and a total of 58 million cars over 16 years old? (58 million is over 20% of the light vehicle “install base” of 258M in the US.)
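For the curious, here’s a minimal cohort sketch of that fleet-turnover arithmetic. The survival curve is my own illustrative assumption, tuned so that the average fleet age lands near the ~10-year figure mentioned above.

```python
# Sketch of fleet turnover. Assumption: vehicles all survive their first
# 10 years, then retire linearly until age 30 (average fleet age ~10.5 years).
def survival(age):
    if age < 10:
        return 1.0
    if age < 30:
        return (30 - age) / 20
    return 0.0

cohorts = [survival(a) for a in range(30)]  # steady-state fleet, by vehicle age
fleet_size = sum(cohorts)

def autonomous_share(years_since_all_sales_are_level4):
    """Fleet share that is autonomous, assuming constant annual sales and that
    every vehicle sold after the cut-over year is Level 4."""
    return sum(cohorts[:years_since_all_sales_are_level4]) / fleet_size

for year in (2040, 2050):
    print(f"{year}: ~{autonomous_share(year - 2030):.0%} of the fleet is autonomous")
# With these assumptions: roughly 49% by 2040 and 87% by 2050 -- the same
# ballpark as the 50% / 90% figures above.
```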

2050 doesn’t sound like an overnight revolution to utopia, now does it? And remember, this is under extremely optimistic assumptions.

In other words, remember Amara’s Law:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

While realism doesn’t sell papers, urban planners, take note: autonomous cars will not solve your infrastructure woes. Nor will Uber and taxi drivers all become unemployed overnight – although eventually they will, so I wouldn’t recommend driving as a long-term career goal for kids.

The bottom line? As fast-moving and exciting as the autonomous car space is, it won’t happen overnight – and the impacts are not as clear-cut positive as many would have you believe.
