Shortened URLs, produced by services like bit.ly and goo.gl, can be brute-forced. And searching random shortened URLs yields all sorts of secret documents. Plus, many of them can be edited, and can be infected with malware.
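The reason brute-forcing works is simple arithmetic: a typical shortener token is only five to seven alphanumeric characters, so the entire keyspace is small enough to enumerate. A minimal sketch of that arithmetic (assuming a case-sensitive alphanumeric token, which is what bit.ly-style services use):

```python
import string

# A typical shortener token draws from upper- and lowercase letters
# plus digits: 62 possible symbols per position.
alphabet = string.ascii_letters + string.digits

def keyspace(length: int) -> int:
    """Number of possible tokens of a given length."""
    return len(alphabet) ** length

print(keyspace(6))  # 56,800,235,584 possible 6-character tokens
```

Tens of billions of candidates sounds like a lot, but it is well within reach of a distributed scanner making millions of requests per day, and every live URL discovered is a potential secret document.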
Entries Tagged "cloud computing"
Cyberthreats are changing. We’re worried about hackers crashing airplanes by hacking into computer networks. We’re worried about hackers remotely disabling cars. We’re worried about manipulated counts from electronic voting booths, remote murder through hacked medical devices and someone hacking an Internet thermostat to turn off the heat and freeze the pipes.
The traditional academic way of thinking about information security is as a triad: confidentiality, integrity, and availability. For years, the security industry has been trying to prevent data theft. Stolen data is used for identity theft and other frauds. It can be embarrassing, as in the Ashley Madison breach. It can be damaging, as in the Sony data theft. It can even be a national security threat, as in the case of the Office of Personnel Management data breach. These are all breaches of privacy and confidentiality.
As bad as these threats are, they seem abstract. It’s been hard to craft public policy around them. But this is all changing. Threats to integrity and availability are much more visceral and much more devastating. And they will spur legislative action in a way that privacy risks never have.
Take one example: driverless cars and smart roads.
We’re heading toward a world where driverless cars will automatically communicate with each other and the roads, automatically taking us where we need to go safely and efficiently. The confidentiality threats are real: Someone who can eavesdrop on those communications can learn where the cars are going and maybe who is inside them. But the integrity threats are much worse.
Someone who can feed the cars false information can potentially cause them to crash into each other or nearby walls. Someone could also disable your car so it can’t start. Or worse, disable the entire system so that no one’s car can start.
This new rise in integrity and availability threats is a result of the Internet of Things. The objects we own and interact with will all become computerized and connected to the Internet. But it’s actually more complicated than that.
What I’m calling the “World-Sized Web” is a combination of these Internet-enabled things, cloud computing, mobile computing and the pervasiveness that comes from these systems always being on. Together, these mean that computers and networks will be much more embedded in our daily lives. Yes, there will be more need for confidentiality, but there is a newfound need to ensure that these systems can’t be subverted to do real damage.
It’s one thing if your smart door lock can be eavesdropped to know who is home. It’s another thing entirely if it can be hacked to prevent you from opening your door or allow a burglar to open the door.
In separate testimonies before different House and Senate committees last year, both the Director of National Intelligence James Clapper and NSA Director Mike Rogers warned of these threats. They both consider them far larger and more important than the confidentiality threat and believe that we are vulnerable to attack.
And once the attacks start doing real damage — once someone dies from a hacked car or medical device, or an entire city’s 911 services go down for a day — there will be a real outcry to do something.
Congress will be forced to act. They might authorize more surveillance. They might authorize more government involvement in private-sector cybersecurity. They might try to ban certain technologies or certain uses. The results won’t be well-thought-out, and they probably won’t mitigate the actual risks. If we’re lucky, they won’t cause even more problems.
I worry that we’re rushing headlong into the World-Sized Web, and not paying enough attention to the new threats that it brings with it. Again and again, we’ve tried to retrofit security in after the fact.
It would be nice if we could do it right from the beginning this time. That’s going to take foresight and planning. The Obama administration just proposed spending $4 billion to advance the engineering of driverless cars.
How about focusing some of that money on the integrity and availability threats from that and similar technologies?
This essay previously appeared on CNN.com.
Jonathan Zittrain proposes a very interesting hypothetical:
Suppose a laptop were found at the apartment of one of the perpetrators of last year’s Paris attacks. It’s searched by the authorities pursuant to a warrant, and they find a file on the laptop that’s a set of instructions for carrying out the attacks.
The discovery would surely help in the prosecution of the laptop’s owner, tying him to the crime. But a junior prosecutor has a further idea. The private document was likely shared among other conspirators, some of whom are still on the run or unknown entirely. Surely Google has the ability to run a search of all Gmail inboxes, outboxes, and message drafts folders, plus Google Drive cloud storage, to see if any of its 900 million users are currently in possession of that exact document. If Google could be persuaded or ordered to run the search, it could generate a list of only those Google accounts possessing the precise file and all other Google users would remain undisturbed, except for the briefest of computerized “touches” on their accounts to see if the file reposed there.
He then goes through the reasons why Google should run the search, and then reasons why Google shouldn’t — and finally says what he would do.
I think it’s important to think through hypotheticals like this before they happen. We’re better able to reason about them now, when they are just hypothetical.
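Mechanically, the exact-match search in Zittrain’s hypothetical would not require reading anyone’s mail: it could be done by comparing cryptographic hashes of stored files against the hash of the target document. A minimal sketch, using entirely hypothetical account data:

```python
import hashlib

def sha256(data: bytes) -> str:
    """Fingerprint a file's contents; identical files hash identically."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical store: account -> list of stored file contents.
accounts = {
    "alice@example.com": [b"grocery list", b"attack instructions"],
    "bob@example.com":   [b"holiday photos"],
}

target = sha256(b"attack instructions")

# The "briefest of computerized touches": hash each stored file and
# report only the accounts holding an exact copy of the target.
matches = [acct for acct, files in accounts.items()
           if any(sha256(f) == target for f in files)]
print(matches)  # ['alice@example.com']
```

Note that an exact-hash match is brittle by design: change one byte of the document and the fingerprint no longer matches, which is part of why the search touches no one else.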
Australia is going to be the first country to have virtual passports. Presumably, the passport data will be in the cloud somewhere, and you’ll access it with an app or a URL or maybe just the passport number.
On the one hand, all a passport needs to be is a pointer into a government database with all the relevant information and biometrics. On the other hand, not all countries have access into all databases. When I enter the US with my US passport, I’m sure no one really needs the paper document — it’s all on the officers’ computers. But when I enter a random country, they don’t have access to the US government database; they need the physical object.
Australia is trialing this with New Zealand. Presumably both countries will have access into each other’s databases.
Cloud computing is the future of computing. Specialization and outsourcing make society more efficient and scalable, and computing isn’t any different.
But why aren’t we there yet? Why don’t we, in Simon Crosby’s words, “get on with it”? I have discussed some reasons: loss of control, new and unquantifiable security risks, and — above all — a lack of trust. It is not enough to simply discount them, as the number of companies not embracing the cloud shows. It is more useful to consider what we need to do to bridge the trust gap.
A variety of mechanisms can create trust. When I outsourced my food preparation to a restaurant last night, it never occurred to me to worry about food safety. That blind trust is largely created by government regulation. It ensures that our food is safe to eat, just as it ensures our paint will not kill us and our planes are safe to fly. It is all well and good for Mr. Crosby to write that cloud companies “will invest heavily to ensure that they can satisfy complex…regulations,” but this presupposes that we have comprehensive regulations. Right now, it is largely a free-for-all out there, and it can be impossible to see how security in the cloud works. When robust consumer-safety regulations underpin outsourcing, people can trust the systems.
This is true for any kind of outsourcing. Attorneys, tax preparers and doctors are licensed and highly regulated, by both governments and professional organizations. We trust our doctors to cut open our bodies because we know they are not just making it up. We need a similar professionalism in cloud computing.
Reputation is another big part of trust. We rely on both word-of-mouth and professional reviews to decide on a particular car or restaurant. But none of that works without considerable transparency. Security is an example. Mr. Crosby writes: “Cloud providers design security into their systems and dedicate enormous resources to protect their customers.” Maybe some do; many certainly do not. Without more transparency, as a cloud customer you cannot tell the difference. Try asking either Amazon Web Services or Salesforce.com to see the details of their security arrangements, or even to indemnify you for data breaches on their networks. It is even worse for free consumer cloud services like Gmail and iCloud.
We need to trust cloud computing’s performance, reliability and security. We need open standards, rules about being able to remove our data from cloud services, and the assurance that we can switch cloud services if we want to.
We also need to trust who has access to our data, and under what circumstances. One commenter wrote: “After Snowden, the idea of doing your computing in the cloud is preposterous.” He isn’t making a technical argument: a typical corporate data center isn’t any better defended than a cloud-computing one. He is making a legal argument. Under American law — and similar laws in other countries — the government can force your cloud provider to give up your data without your knowledge and consent. If your data is in your own data center, you at least get to see a copy of the court order.
Corporate surveillance matters, too. Many cloud companies mine and sell your data or use it to manipulate you into buying things. Blocking broad surveillance by both governments and corporations is critical to trusting the cloud, as is eliminating secret laws and orders regarding data access.
In the future, we will do all our computing in the cloud: both commodity computing and computing that requires personalized expertise. But this future will only come to pass when we manage to create trust in the cloud.
This essay previously appeared on the Economist website, as part of a debate on cloud computing. It’s the third of three essays. Here are Parts 1 and 2. Visit the site for the other side of the debate and other commentary.
Let me start by describing two approaches to the cloud.
Most of the students I meet at Harvard University live their lives in the cloud. Their e-mail, documents, contacts, calendars, photos and everything else are stored on servers belonging to large internet companies in America and elsewhere. They use cloud services for everything. They converse and share on Facebook and Instagram and Twitter. They seamlessly switch among their laptops, tablets and phones. It wouldn’t be a stretch to say that they don’t really care where their computers end and the internet begins, and they are used to having immediate access to all of their data on the closest screen available.
In contrast, I personally use the cloud as little as possible. My e-mail is on my own computer — I am one of the last Eudora users — and not at a web service like Gmail or Hotmail. I don’t store my contacts or calendar in the cloud. I don’t use cloud backup. I don’t have personal accounts on social networking sites like Facebook or Twitter. (This makes me a freak, but highly productive.) And I don’t use many software and hardware products that I would otherwise really like, because they force you to keep your data in the cloud: Trello, Evernote, Fitbit.
Why don’t I embrace the cloud in the same way my younger colleagues do? There are three reasons, and they parallel the trade-offs that corporations facing the same decisions are going to make.
The first is control. I want to be in control of my data, and I don’t want to give it up. I have the ability to keep control by running my own services my way. Most of those students lack the technical expertise, and have no choice. They also want services that are only available on the cloud, and have no choice. I have deliberately made my life harder, simply to keep that control. Similarly, companies are going to decide whether or not they want to — or even can — keep control of their data.
The second is security. I talked about this at length in my opening statement. Suffice it to say that I am extremely paranoid about cloud security, and think I can do better. Lots of those students don’t care very much. Again, companies are going to have to make the same decision about who is going to do a better job, and depending on their own internal resources, they might make a different decision.
The third is the big one: trust. I simply don’t trust large corporations with my data. I know that, at least in America, they can sell my data at will and disclose it to whomever they want. It can be made public inadvertently by their lax security. My government can get access to it without a warrant. Again, lots of those students don’t care. And again, companies are going to have to make the same decisions.
Like any outsourcing relationship, cloud services are based on trust. If anything, that is what you should take away from this exchange. Try to do business only with trustworthy providers, and put contracts in place to ensure their trustworthiness. Push for government regulations that establish a baseline of trustworthiness for cases where you don’t have that negotiation power. Fight laws that give governments secret access to your data in the cloud. Cloud computing is the future of computing; we need to ensure that it is secure and reliable.
Despite my personal choices, my belief is that, in most cases, the benefits of cloud computing outweigh the risks. My company, Resilient Systems, uses cloud services both to run the business and to host our own products that we sell to other companies. For us it makes the most sense. But we spend a lot of effort ensuring that we use only trustworthy cloud providers, and that we are a trustworthy cloud provider to our own customers.
This essay previously appeared on the Economist website, as part of a debate on cloud computing. It’s the second of three essays. Here are Parts 1 and 3. Visit the site for the other side of the debate and other commentary.
Yes. No. Yes. Maybe. Yes. Okay, it’s complicated.
The economics of cloud computing are compelling. For companies, the lower operating costs, the lack of capital expenditure, the ability to quickly scale and the ability to outsource maintenance are just some of the benefits. Computing is infrastructure, like cleaning, payroll, tax preparation and legal services. All of these are outsourced. And computing is becoming a utility, like power and water. Everyone does their power generation and water distribution “in the cloud.” Why should IT be any different?
Two reasons. The first is that IT is complicated: it is more like payroll services than like power generation. What this means is that you have to choose your cloud providers wisely, and make sure you have good contracts in place with them. You want to own your data, and be able to download that data at any time. You want assurances that your data will not disappear if the cloud provider goes out of business or discontinues your service. You want reliability and availability assurances, tech support assurances, whatever you need.
The downside is that you will have limited customization options. Cloud computing is cheaper because of economies of scale, and — like any outsourced task — you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want. Fewer options at a much cheaper price: it’s a feature, not a bug.
The second reason that cloud computing is different is security. This is not an idle concern. IT security is difficult under the best of circumstances, and security risks are one of the major reasons it has taken so long for companies to embrace the cloud. And here it really gets complicated.
On the pro-cloud side, cloud providers have the potential to be far more secure than the corporations whose data they are holding. It is the same economies of scale at work. For most companies, the cloud provider is likely to have better security than they do — by a lot. All but the largest companies benefit from the concentration of security expertise at the cloud provider.
On the anti-cloud side, the cloud provider might not meet your legal needs. You might have regulatory requirements that the cloud provider cannot meet. Your data might be stored in a country with laws you do not like — or cannot legally use. Many foreign companies are thinking twice about putting their data inside America, because of laws allowing the government to get at that data in secret. Other countries around the world have even more draconian government-access rules.
Also on the anti-cloud side, a large cloud provider is a juicier target. Whether or not this matters depends on your threat profile. Criminals already steal far more credit card numbers than they can monetize; they are more likely to go after the smaller, less-defended networks. But a national intelligence agency will prefer the one-stop shop a cloud provider affords. That is why the NSA broke into Google’s data centers.
Finally, the loss of control is a security risk. Moving your data into the cloud means that someone else is controlling that data. This is fine if they do a good job, but terrible if they do not. And for free cloud services, that loss of control can be critical. The cloud provider can delete your data on a whim, if it believes you have violated some term of service that you never even knew existed. And you have no recourse.
As a business, you need to weigh the benefits against the risks. And that will depend on things like the type of cloud service you’re considering, the type of data that’s involved, how critical the service is, how easily you could do it in house, the size of your company and the regulatory environment, and so on.
This essay previously appeared on the Economist website, as part of a debate on cloud computing. It’s the first of three essays. Here are Parts 2 and 3. Visit the site for the other side of the debate and other commentary.
This is good:
Just as “data” is being sold as “intelligence”, a lot of security technologies are being sold as “security solutions” rather than what they for the most part are: narrowly focused appliances that, at best, can be part of your broader security effort.
Unfortunately, too many of these appliances do not easily integrate with other appliances, with the rest of your security portfolio, or with your policies and procedures. Instead, they are created to work and be operated as completely stand-alone devices. This really is not what we need. To quote Alex Stamos, we need platforms. Reusable platforms that easily integrate with whatever else we decide to put into our security effort.
The latest version of Apple’s OS automatically syncs your files to iCloud Drive, even files you choose to store locally. Apple encrypts your data, both in transit and in iCloud, with a key it knows. Apple, of course, complies with all government requests: FBI warrants, subpoenas, and National Security Letters — as well as NSA PRISM and whatever-else-they-have demands.
EDITED TO ADD (10/28): See comments. This seems to be way overstated. I will look at this again when I have time, probably tomorrow.
EDITED TO ADD (10/28): This is a more nuanced discussion of this issue. At this point, it seems clear that there is a lot less here than described in the blog post below.
EDITED TO ADD (10/29): There is something here. It only affects unsaved documents, and not all applications. But the OS’s main text editor is one of them. Yes, this feature has been in the OS for a while, but that’s not a defense. It’s both dangerous and poorly documented.
The NSA is building a private cloud with its own security features:
As a result, the agency can now track every instance of every individual accessing what is in some cases a single word or name in a file. This includes when it arrived, who can access it, who did access it, downloaded it, copied it, printed it, forwarded it, modified it, or deleted it.
“All of this I can do in the cloud but–in many cases–it cannot be done in the legacy systems, many of which were created before such advanced data provenance technology existed.” Had this ability all been available at the time, it is unlikely that U.S. soldier Bradley Manning would have succeeded in obtaining classified documents in 2010.
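The data-provenance capability described above amounts to an append-only audit log: every touch on every object is recorded with who, what, and when. A minimal sketch of that idea, with a hypothetical record schema:

```python
import datetime

audit_log = []  # append-only record of every touch on every object

def record_access(user: str, obj: str, action: str) -> None:
    """Log who did what to which object, and when (hypothetical schema)."""
    audit_log.append({
        "user": user,
        "object": obj,
        "action": action,  # e.g. "read", "download", "copy", "delete"
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

record_access("analyst1", "report-1138.doc", "read")
record_access("analyst2", "report-1138.doc", "download")

# Reconstruct the access trail for a single document.
trail = [(e["user"], e["action"]) for e in audit_log
         if e["object"] == "report-1138.doc"]
print(trail)  # [('analyst1', 'read'), ('analyst2', 'download')]
```

The point of the NSA quote is that retrofitting this onto legacy systems is hard; in a cloud built from scratch, every access path can be forced through a logging layer like this one.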