For a long time, everyone has “known” that communications on the Internet are being watched by agencies, authorities and, generally, people we may not want watching; and that’s in addition to all the data the advertising machinery gathers about us. What Mr. Snowden bravely revealed about the NSA’s activities merely lent that knowledge additional confirmation, and we should rightly be outraged about it. As The Economist points out, the outrage shouldn’t necessarily be directed at the surveillance itself, but at least at the lack of transparency in implementing it – “Spying in a democracy depends for its legitimacy on informed consent, not blind trust.” (More great points here.)
The Internet has made it vastly easier to carry out such surveillance, and people taking up cloud services en masse has made it an order of magnitude easier still; when heaps of data are conveniently available from centralized locations, of course they will be used. It would be supremely naive to think Google, Apple and the rest would somehow put their business on the line just to ensure 100% privacy for their customers (and that’s what it would take – refusing to hand anything over to the government would mean breaking the law).
But could Moore’s Law help reverse our reliance on cloud services? Could it help end the centralized-cloud phenomenon altogether? Now, Moore’s Law isn’t, of course, a proper “law” at all, and there are valid reasons to believe it will relatively soon (within 5-10 years) hit a brick wall known as the laws of physics, which are much more real laws. But what if it doesn’t stop quite that soon? What if it continues just long enough – 15-20 years – to transform your everyday mobile device into a supercomputer or a semi-intelligent agent?
Think about services like Siri or Google Search. If we use them, both know quite a bit about what we do and think. What if, instead of sending the queries to a server somewhere, all processing – including answering the questions – could be done locally, on your smartphone? That’s exactly what a supercomputer-in-a-pocket could do.
It’s not as far-fetched as you might think; your smartphone today is equivalent to what would have been called a supercomputer 15-20 years ago. Fast-forward another 20 years and, given a similar pace of development (a big if, but many would argue it’s feasible or even likely), your mobile would be the equivalent of a supercomputer today.
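The back-of-the-envelope arithmetic behind that claim can be sketched in a few lines. This is only a sketch, and it assumes the classic doubling of performance roughly every two years – the optimistic reading of the trend; the starting figure of 100 gigaflops is a round number in the ballpark of today’s mobile chips:

```python
# Extrapolate compute performance under an assumed Moore's-Law-style doubling.
def extrapolate(gflops_now: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected performance after `years`, doubling every `doubling_period` years."""
    return gflops_now * 2 ** (years / doubling_period)

# A ~100-gigaflop phone chip today, projected 20 years out (10 doublings):
future_gflops = extrapolate(100, 20)
print(f"{future_gflops / 1e3:.0f} teraflops")  # prints "102 teraflops"
```

A thousandfold increase in two decades – which is roughly the gap between a current phone and a current supercomputer node. Whether the doubling period actually holds is, of course, exactly the “big if” above.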
And what if that did happen? It would mean that, with the possible exception of video, we could basically carry a copy of all the world’s knowledge in our pockets. All speech recognition and synthesis could be done with perfect accuracy on-device, as would searching for answers to almost all your questions (non-news-related anyway).
No need to send anything anywhere. In other words, all of that would be private. No ad agencies or governments snooping on your queries.
Even if you did want to have something centralized – say, to enable smooth access to services across different terminals – you could have a small box running your securely encrypted personal cloud services from your own home, connected to a 100Mbps link. I say 100Mbps because it can be argued a single person will never need more bandwidth than that.
Personal in-pocket and at-home supercomputers would all but obviate the need for massive centralized cloud infrastructure for everyday consumer services. The home cloud could also act as an anonymizing, intelligent search proxy for querying real-time data from future Googles. Data centres would of course likely still exist for even more processing- and storage-intensive tasks, but the majority of our online lives could be owned, operated and controlled by us. Maybe the data centres would house the AIs – or maybe we’d just have one of those running on our personal supercomputer(s) as well.
I can see a lot of potential in the current shift to centralized cloud services eventually reversing, moving back towards the edges of the network. And it could be a boon for online privacy – privacy that now appears to be increasingly rare.
Strictly speaking, Moore’s Law is not the right term for the technological developments I am describing, but it’s commonly misused in the same context, so we’ll just run with it.
The Cray-2 had a performance of 1.9 gigaflops and was the fastest supercomputer in the world until 1990. The Tegra 4 mobile chip, released this year, has a performance of 96 gigaflops.
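For the curious, those two figures imply an effective doubling period you can check in a couple of lines. A sketch, assuming the Cray-2’s 1985 debut and the Tegra 4’s 2013 release as the endpoints:

```python
import math

# Figures from the footnote: Cray-2 at 1.9 GFLOPS, Tegra 4 at 96 GFLOPS.
cray2_gflops, tegra4_gflops = 1.9, 96.0
years_between = 2013 - 1985  # assumed release years: Cray-2 (1985), Tegra 4 (2013)

speedup = tegra4_gflops / cray2_gflops      # ~50x
doublings = math.log2(speedup)              # ~5.7 doublings
print(f"speedup ≈ {speedup:.0f}x over {years_between} years "
      f"({years_between / doublings:.1f} years per doubling)")
```

Roughly five years per doubling – slower than the canonical two-year figure, though comparing a liquid-cooled supercomputer to a passively cooled phone chip understates how far the mobile side has come.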
Yes, yes – saying “never” is dangerous, and one should never (ha!) do that. But the 100Mbps argument is a compelling one: in 2006 a Cisco study analysed the input bandwidth of the human brain and arrived at a figure of around 70Mbps. In other words, all input to the human brain – visual, audio, touch, smell and so on – amounts to under 100Mbps of data at any given point in time. That, in turn, means that with proper encoding and appropriate interface technology, it should be possible to implement a virtual reality that is indistinguishable from reality with under 100Mbps of bandwidth. I, for one, don’t know what we would constantly and consistently need more bandwidth than that for. (Faster bursts for quick downloads, sure, but not at a sustained level.)