Why my new home won’t be “smart”

(And with this cross-post from LinkedIn, the blog returns from its break. I tried LinkedIn as a publishing platform for a while, but with more than half of my posts there either not being visible or having comments fail to show up, I’m giving up on it and returning here.)

I’ve always considered myself a technologist and a bit of a geek. Given that, I assumed that if I ever built a house, I’d have smart this and smart that, everything remotely controllable and all the rest of it. Now that I actually have the opportunity to do just that, I am surprising myself with my decisions on smart home gear: I am getting essentially none of it.

Why? Two main reasons:

1) Lack of net benefits.

2) Security and privacy issues.

Let’s tackle security first. The more I have dug into security over the years, particularly in the “Internet of Things” domain, the less convinced I am that anyone, anywhere – exaggerating only slightly – knows what they are doing. It’s gotten to the point where I predict the whole IoT/IoE vision will end in a variety of security-related tears unless the priorities change quite fundamentally – and it’s even worse in the consumer space, where short product life cycles and a generally blasé attitude to security aren’t exactly helping.

What it comes down to is that I don’t trust the providers to keep things secure – because, to a large degree, they cannot. For a great introduction to why it’s next to impossible, see Everything is Broken by Quinn Norton.

Loosely related to security is privacy. What I trust even less is that the providers of all that smart home gear would keep my data private and not abuse it. So I won’t even give them the data to begin with.

The other big problem is the lack of net benefits. It’s not that all smart home gear is useless, but much of it imposes extra work on the user that I don’t want to deal with. In other words, I don’t see significant net benefits from the smart home stuff that’s available: the costs, not just monetary but in terms of time, outweigh the benefits. One of the main issues here is that much of it is designed to give the user more control over something. Very few products are designed to work in the background, or do so well enough for me to trust them to do their thing the way I want it done. They’re just not smart enough yet.

Take the state-of-the-art smart thermostat, Nest. It’s not designed for you to constantly play with; it’s designed to learn your habits, adapt itself and work in the background. Sounds good, right? It does, but I don’t trust it enough to try it. Why not? First, most of the time I won’t need any heating or cooling in the house. Sometimes I want to cool the house when the temperature hits +30C (e.g. when coming in from a run), but at other times I’m happy to let it hit +35C. In winter I may want to turn the heating on at +15C, but when the daytime high will be over +30C, I’d rather have a cold night and morning. I obviously haven’t lived with it, but would Nest be smart enough to do all that and more? I doubt it.
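To make concrete the kind of context-dependent logic I’d want a thermostat to figure out by itself, here’s a rough sketch in Python. The thresholds and inputs (a season label, a forecast high, a just_back_from_run flag) are made up purely for illustration – nothing like this is exposed by Nest or any other product I’ve looked at.

```python
# A toy, rule-based version of my own heating/cooling preferences.
# All thresholds and inputs are hypothetical examples, not a real device's API.

def hvac_action(indoor_c, season, forecast_high_c=None, just_back_from_run=False):
    """Return 'cool', 'heat' or 'off' for a single moment in time."""
    if season == "summer":
        # Cool at +30C only in special situations; otherwise tolerate up to +35C.
        if just_back_from_run and indoor_c >= 30:
            return "cool"
        return "cool" if indoor_c >= 35 else "off"
    if season == "winter":
        # If a +30C day is coming, skip heating and keep the night and morning cold.
        if forecast_high_c is not None and forecast_high_c > 30:
            return "off"
        return "heat" if indoor_c <= 15 else "off"
    return "off"

print(hvac_action(31, "summer", just_back_from_run=True))  # -> cool
print(hvac_action(33, "summer"))                           # -> off
print(hvac_action(14, "winter", forecast_high_c=32))       # -> off
print(hvac_action(14, "winter", forecast_high_c=18))       # -> heat
```

Even this toy needs inputs no thermostat actually has – which is rather the point.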

There is a host of other issues as well – such as the poor availability of some products in Australia (compared with the USA in particular), and a lack of faith in continued support in what is a fast-developing field full of small players. Much smart home gear is integrated into the building to a greater or lesser extent, and I don’t want to have to replace it every two years when someone goes out of business. A bit of a chicken-and-egg problem, I know.

So my new home will be a Luddite. It’ll be environmentally sustainable, comfortable all year, energy-efficient, practical and above all liveable.

But it will not be “smart”.

I’ve done a fair bit of research into what’s available and reached the above conclusions, but I’m always happy to be convinced otherwise – and to hear suggestions if you think something is a “must have”.


A break

After some 15 years of active blogging – about 10 of which have taken place on this site and domain – I have decided to take a bit of a break from blogging. You’ll see from the date of the previous post that the break already started a while back.

This is not because I lack ideas on what to write about – quite the contrary. What I am lacking is time; or rather, I am choosing to temporarily prioritize the time I have away from this blog. This is the result of a number of factors, ranging from an ongoing professional detour into the world of corporate sustainability to a personal one: decisively setting down roots in Melbourne in the form of building a house. I reckon those alone will keep me relatively busy for some time, and I hope to share results from the latter project later on (maybe even revive this as a construction blog for 2014).

Other factors are involved, too – like a dilemma many bloggers are familiar with: after a break that stretches on for a while (in this case due to spending a good chunk of the European summer in Europe), the perceived pressure to write something really good next time increases, which inevitably takes longer, which in turn increases the imagined pressure, which … you get the drift. Other forms of communication – alternative channels for professional writing, Twitter for casual commentary – also tend to encroach on blogging. I will not, however, go as far as some commentators and declare blogging dead. I don’t believe that is the case, or will be for a very long time to come.

But clearly since I don’t have anything better to say, I should wrap up. So, see you later. I’ll leave you with some food for thought from Immoderate Greatness:

The real concern for a civilization dependent on fossil fuels is not really the moment in time when the maximum rate of petroleum extraction is reached, after which production enters terminal decline, but rather the inexorable trend toward lower net energy and higher costs, both monetary and environmental.
[…]
It is vital to understand that technology is not a source of energy. […] Technology and good management can forestall the day of ecological reckoning, but not indefinitely.
[…]
Finally, however, resources are either effectively exhausted or no longer repay the effort needed to exploit them. As massive demand collides with dwindling supply, ecological credit that has fueled expansion and created a large population accustomed to living high off the hog is choked off. The civilization begins to implode, in either a slow and measured decline or a more rapid and chaotic collapse.


Could Moore’s Law help bring back online privacy and kill the Cloud?

For a long time, everyone has “known” that communications on the Internet are watched by agencies, authorities and, in general, people we may not want watching them; and that’s in addition to all the data the advertising machinery collects about us. What Mr. Snowden bravely revealed about the NSA’s activities just lent that knowledge additional confirmation, and we should rightly be outraged about it. As The Economist points out, the outrage shouldn’t necessarily be directed at the surveillance itself, but at least at the lack of transparency in implementing it – “Spying in a democracy depends for its legitimacy on informed consent, not blind trust.” (More great points here).

The Internet has made it vastly easier to carry out such surveillance, and people taking up cloud services en masse has made it an order of magnitude easier still: when heaps of data are conveniently available in centralized locations, of course they will be used. It would be supremely naive to think Google, Apple and the rest would somehow put their business on the line just to ensure 100% privacy for their customers (because that’s what it would take – it would take breaking the law to refuse to hand anything over to the government).

But could Moore’s Law help reverse our reliance on cloud services? Could it help end the centralized-cloud phenomenon altogether? Now, Moore’s Law isn’t, of course, a proper “law” at all [0], and there are valid reasons to believe it will relatively soon (within 5-10 years) hit a brick wall known as the laws of physics, which are much more real laws. But what if it doesn’t stop quite that soon? What if it continues just long enough – 15-20 years – to transform your everyday mobile device into a supercomputer or a semi-intelligent agent?

Think about services like Siri or Google Search. If we use them, both know quite a bit about what we do and think. What if, instead of sending the queries to a server somewhere, all the processing – including answering the questions – were done locally, on your smartphone? That’s exactly what your supercomputer-in-a-pocket could do.

It’s not as far-fetched as you might think; your smartphone today is the equivalent of what would have been called a supercomputer 15-20 years ago [1]. Fast-forward another 20 years and, given a similar pace of development (a big if, but many would argue it’s feasible or even likely), your mobile would be the equivalent of a supercomputer today.
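Here’s the back-of-envelope arithmetic behind that, assuming – and it is only an assumption – that performance keeps doubling every 1.5 to 2 years, starting from the Tegra 4 figure in note [1]:

```python
# Rough extrapolation under an assumed doubling period; not a prediction.

def extrapolate(gflops_now, years, doubling_period_years):
    """Project raw performance forward assuming steady exponential growth."""
    return gflops_now * 2 ** (years / doubling_period_years)

TEGRA_4_GFLOPS = 96  # 2013 mobile chip, see note [1]

for years in (15, 20):
    for period in (1.5, 2.0):
        gflops = extrapolate(TEGRA_4_GFLOPS, years, period)
        print(f"{years} years, {period}-year doubling: ~{gflops:,.0f} gigaflops")

# 15 years, 1.5-year doubling: ~98,304 gigaflops
# 15 years, 2.0-year doubling: ~17,378 gigaflops
# 20 years, 1.5-year doubling: ~990,842 gigaflops  (about a petaflop)
# 20 years, 2.0-year doubling: ~98,304 gigaflops
```

At a two-year doubling you “only” get around 100 teraflops in 20 years; at an 18-month doubling you get roughly a petaflop, which is genuine supercomputer territory even by today’s standards. Either way, the scale of the shift is the point.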

And what if that did happen? It would mean that, with the possible exception of video, we could basically carry a copy of all the world’s knowledge in our pockets. All speech recognition and synthesis could be done with perfect accuracy on-device, as could searching for answers to almost all of your questions (the non-news-related ones, anyway).

No need to send anything anywhere. In other words, all that would be private. No ad agencies or governments snooping in on your queries.

Even if you did want something centralized – say, to enable smooth access to services across different terminals – you could have a small box at home running your securely encrypted personal cloud services, connected to a 100Mbps link. I say 100Mbps because it can be argued a single person will never need more bandwidth than that [2].

Personal in-pocket and at-home supercomputers would all but obviate the need for massive centralized cloud infrastructure for everyday consumer services. The home cloud could also act as an anonymizing, intelligent search proxy for querying real-time data from future Googles. Data centres would likely still be there for even more processing- or storage-intensive tasks, but the majority of our online lives could be owned, operated and controlled by us. Maybe the data centres would house the AIs – or maybe we’d just have one of those running on our personal supercomputer(s) as well.
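As a sketch of what that anonymizing proxy might look like in its simplest form, here’s a minimal Python example: a little server on the home box that takes your query and forwards only the query text upstream – no cookies, no account, no identifying headers. The upstream URL is a placeholder, and real code would obviously need TLS, error handling and caching at the very least.

```python
# A bare-bones anonymizing search proxy for the hypothetical home cloud box.
# The upstream endpoint is a placeholder, not a real service.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, quote, urlparse
from urllib.request import Request, urlopen

UPSTREAM = "https://search.example.com/?q="  # hypothetical future search service

class AnonymizingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Extract the query text and forward nothing else about the user.
        query = parse_qs(urlparse(self.path).query).get("q", [""])[0]
        request = Request(UPSTREAM + quote(query),
                          headers={"User-Agent": "home-proxy"})
        body = urlopen(request).read()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen only on the home network side.
    HTTPServer(("127.0.0.1", 8080), AnonymizingProxy).serve_forever()
```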

I can see a lot of potential for the current shift towards centralized cloud services to reverse, with functionality once again moving back to the edges of the network. And it could be a boon for online privacy – privacy that now appears to be increasingly rare.

Notes:

[0] Strictly speaking, Moore’s Law is not the right term for the technological developments I am describing, but it’s commonly (mis)used in this context, so we’ll just run with it.

[1] Cray-2 had a performance of 1.9 gigaflops and was the fastest supercomputer until 1990. A Tegra 4 mobile chip, released this year, has a performance of 96 gigaflops.

[2] Yes yes, saying “never” is dangerous and one should never (ha!) do that. But the 100Mbps argument is a compelling one: in 2006 a Cisco study analysed the input bandwidth of the human brain and came to a figure of around 70Mbps. In other words, all input to the human brain – visual, auditory, touch, smell and so on – amounts to under 100Mbps of data at any given point in time. That, in turn, means that with proper encoding and appropriate interface technology, it should be possible to implement a virtual reality that is indistinguishable from reality with under 100Mbps of bandwidth. I for one don’t know what we would consistently and constantly need more bandwidth than that for. (Faster bursts for quick downloads, sure, but not at a constant level.)
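For a sense of scale, here’s what those figures mean as sustained throughput (simple arithmetic only; the 70Mbps number is just the Cisco estimate quoted above):

```python
# Convert the bandwidth figures into more tangible units.

SECONDS_PER_DAY = 86_400

def mbps_to_mb_per_s(mbps):
    return mbps / 8  # megabits per second -> megabytes per second

for mbps in (70, 100):
    mb_per_s = mbps_to_mb_per_s(mbps)
    gb_per_day = mb_per_s * SECONDS_PER_DAY / 1000
    print(f"{mbps} Mbps ≈ {mb_per_s:.1f} MB/s ≈ {gb_per_day:,.0f} GB/day")

# 70 Mbps ≈ 8.8 MB/s ≈ 756 GB/day
# 100 Mbps ≈ 12.5 MB/s ≈ 1,080 GB/day
```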
