Entries Tagged "security policies"


Security Risks of Unpatched Android Software

A lot has been written about the security vulnerabilities resulting from outdated and unpatched Android software. The basic problem is that while Google regularly updates the Android software, phone manufacturers don’t regularly push updates out to Android users.

New research tries to quantify the risk:

We are presenting a paper at SPSM next week that shows that, on average over the last four years, 87% of Android devices are vulnerable to attack by malicious apps. This is because manufacturers have not provided regular security updates. Some manufacturers are much better than others however, and our study shows that devices built by LG and Motorola, as well as those devices shipped under the Google Nexus brand are much better than most. Users, corporate buyers and regulators can find further details on manufacturer performance at AndroidVulnerabilities.org.

Posted on October 21, 2015 at 6:22 AM

Bringing Frozen Liquids through Airport Security

Gizmodo reports that UK airport security confiscates frozen liquids:

“He told me that it wasn’t allowed so I asked under what grounds, given it is not a liquid. When he said I couldn’t take it I asked if he knew that for sure or just assumed. He grabbed his supervisor and the supervisor told me that ‘the government does not classify that as a solid’. I decided to leave it at that point. I expect they’re probably wrong to take it from me. They’d probably not seen it before, didn’t know the rules, and being a bit of an eccentric request, decided to act on the side of caution. They didn’t spend the time to look it up.”

As it happens, I had a comparable experience last week: I tried to bring a small cooler containing, among other things, a bag of ice through security. I expected to have to dump the ice at the security checkpoint and refill it inside the airport, but the TSA official looked at it and let it through. Turns out that frozen liquids are fine. I confirmed this with TSA officials at two other airports this week.

One of the TSA officials even told me that what he was officially told is that liquid explosives don’t freeze.

So there you go. The US policy is more sensible. And anyone landing in the UK from the US will have to go through security before any onward flight, so there’s no chance of flouting the UK rules that way.

And while we’re on the general subject, I am continually amazed by how lax the liquid rules are here in the US. Yesterday I went through airport security at SFO with an opened 5-ounce bottle of hot sauce in my carry-on. The screener flagged it; it was obvious on the x-ray. Another screener searched my bag, found it and looked at it, and then let me keep it.

And, in general, I never bother taking my liquids out of my suitcase anymore. I don’t have to when I am in the PreCheck lane, but no one seems to care in the regular lane either. It is different in the UK.

EDITED TO ADD (10/13): According to a 2009 TSA blog post, frozen ice (not semi-melted) is allowed.

Hannibal Buress routine about the TSA liquids rules.

Posted on September 22, 2015 at 1:22 PM

Malcolm Gladwell on Competing Security Models

In this essay/review of a book on UK intelligence officer and Soviet spy Kim Philby, Malcolm Gladwell makes this interesting observation:

Here we have two very different security models. The Philby-era model erred on the side of trust. I was asked about him, and I said I knew his people. The “cost” of the high-trust model was Burgess, Maclean, and Philby. To put it another way, the Philbyian secret service was prone to false-negative errors. Its mistake was to label as loyal people who were actually traitors.

The Wright model erred on the side of suspicion. The manufacture of raincoats is a well-known cover for Soviet intelligence operations. But that model also has a cost. If you start a security system with the aim of catching the likes of Burgess, Maclean, and Philby, you have a tendency to make false-positive errors: you label as suspicious people and events that are actually perfectly normal.

Posted on July 21, 2015 at 6:51 AM

Everyone Wants You To Have Security, But Not from Them

In December, Google’s Executive Chairman Eric Schmidt was interviewed at the CATO Institute Surveillance Conference. One of the things he said, after talking about some of the security measures his company has put in place post-Snowden, was: “If you have important information, the safest place to keep it is in Google. And I can assure you that the safest place to not keep it is anywhere else.”

That surprised me, because Google collects all of your information to show you more targeted advertising. Surveillance is the business model of the Internet, and Google is one of the most successful companies at that. To claim that Google protects your privacy better than anyone else is to profoundly misunderstand why Google stores your data for free in the first place.

I was reminded of this last week when I appeared on Glenn Beck’s show along with cryptography pioneer Whitfield Diffie. Diffie said:

You can’t have privacy without security, and I think we have glaring failures in computer security in problems that we’ve been working on for 40 years. You really should not live in fear of opening an attachment to a message. It ought to be confined; your computer ought to be able to handle it. And the fact that we have persisted for decades without solving these problems is partly because they’re very difficult, but partly because there are lots of people who want you to be secure against everyone but them. And that includes all of the major computer manufacturers who, roughly speaking, want to manage your computer for you. The trouble is, I’m not sure of any practical alternative.

That neatly explains Google. Eric Schmidt does want your data to be secure. He wants Google to be the safest place for your data, as long as you don’t mind the fact that Google has access to your data. Facebook wants the same thing: to protect your data from everyone except Facebook. Hardware companies are no different. Last week, we learned that Lenovo computers shipped with a piece of adware called Superfish that broke users’ security to spy on them for advertising purposes.

Governments are no different. The FBI wants people to have strong encryption, but it wants backdoor access so it can get at your data. UK Prime Minister David Cameron wants you to have good security, just as long as it’s not so strong as to keep the UK government out. And, of course, the NSA spends a lot of money ensuring that there’s no security it can’t break.

Corporations want access to your data for profit; governments want it for security purposes, be they benevolent or malevolent. But Diffie makes an even stronger point: we give lots of companies access to our data because it makes our lives easier.

I wrote about this in my latest book, Data and Goliath:

Convenience is the other reason we willingly give highly personal data to corporate interests, and put up with becoming objects of their surveillance. As I keep saying, surveillance-based services are useful and valuable. We like it when we can access our address book, calendar, photographs, documents, and everything else on any device we happen to be near. We like services like Siri and Google Now, which work best when they know tons about you. Social networking apps make it easier to hang out with our friends. Cell phone apps like Google Maps, Yelp, Weather, and Uber work better and faster when they know our location. Letting apps like Pocket or Instapaper know what we’re reading feels like a small price to pay for getting everything we want to read in one convenient place. We even like it when ads are targeted to exactly what we’re interested in. The benefits of surveillance in these and other applications are real, and significant.

Like Diffie, I’m not sure there is any practical alternative. The reason the Internet is a worldwide mass-market phenomenon is that all the technological details are hidden from view. Someone else is taking care of it. We want strong security, but we also want companies to have access to our computers, smart devices, and data. We want someone else to manage our computers and smart phones, organize our e-mail and photos, and help us move data between our various devices.

Those “someones” will necessarily be able to violate our privacy, either by deliberately peeking at our data or by having such lax security that they’re vulnerable to national intelligence agencies, cybercriminals, or both. Last week, we learned that the NSA broke into the Dutch company Gemalto and stole the encryption keys for billions (yes, billions) of cell phones worldwide. That was possible because we consumers don’t want to do the work of securely generating those keys and setting up our own security when we get our phones; we want it done automatically by the phone manufacturers. We want our data to be secure, but we want someone to be able to recover it all when we forget our password.

We’ll never solve these security problems as long as we’re our own worst enemy. That’s why I believe that any long-term security solution will not only be technological, but political as well. We need laws that protect our privacy from those who obey the law, and that punish those who break it. We need laws that require those entrusted with our data to protect our data. Yes, we need better security technologies, but we also need laws mandating the use of those technologies.

This essay previously appeared on Forbes.com.

EDITED TO ADD: French translation.

Posted on February 26, 2015 at 6:47 AM

Texas School Overreaction

Seems that a Texas school has suspended a 9-year-old for threatening another student with a replica One Ring. (Yes, that One Ring.)

I’ve written about this sort of thing before:

These so-called zero-tolerance policies are actually zero-discretion policies. They’re policies that must be followed, no situational discretion allowed. We encounter them whenever we go through airport security: no liquids, gels or aerosols. Some workplaces have them for sexual harassment incidents; in some sports a banned substance found in a urine sample means suspension, even if it’s for a real medical condition. Judges have zero discretion when faced with mandatory sentencing laws: three strikes for drug offenses and you go to jail, mandatory sentencing for statutory rape (underage sex), etc. A national restaurant chain won’t serve hamburgers rare, even if you offer to sign a waiver. Whenever you hear “that’s the rule, and I can’t do anything about it”—and they’re not lying to get rid of you—you’re butting against a zero discretion policy.

These policies enrage us because they are blind to circumstance. Editorial after editorial denounced the suspensions of elementary school children for offenses that anyone with any common sense would agree were accidental and harmless. The Internet is filled with essays demonstrating how the TSA’s rules are nonsensical and sometimes don’t even improve security. I’ve written some of them. What we want is for those involved in the situations to have discretion.

However, problems with discretion were the reason behind these mandatory policies in the first place. Discretion is often applied inconsistently. One school principal might deal with knives in the classroom one way, and another principal another way. Your drug sentence could depend considerably on how sympathetic your judge is, or on whether she’s having a bad day.

My guess is that the school administration ended up trapped by its own policies, probably even believing that they were correctly being applied. You can hear that in this hearsay quote reported by the boy’s father:

Steward said the principal said threats to another child’s safety would not be tolerated – whether magical or not.

Slashdot thread. Reddit thread.

Posted on February 2, 2015 at 12:37 PM

Not Enough CISOs to Go Around

This article reports that the demand for Chief Information Security Officers far exceeds supply:

Sony and every other company that realizes the need for a strong, senior-level security officer are scrambling to find talent, said Kris Lovejoy, general manager of IBM’s security service and former IBM chief security officer.

CISOs are “almost impossible to find these days,” she said. “It’s a bit like musical chairs; there’s a finite number of CISOs and they tend to go from job to job in similar industries.”

I’m not surprised, really. This is a tough job: never enough budget, and you’re the one blamed when the inevitable attacks occur. And it’s a tough skill set: enough technical ability to understand cybersecurity, and sufficient management skill to navigate senior management. I would never want a job like that in a million years.

Here’s a tip: if you want to make your CISO happy, here’s her holiday wish list.

“My first wish is for companies to thoroughly test software releases before release to customers….”

Can we get that gift wrapped?

Posted on December 11, 2014 at 6:31 AM

NSA Classification ECI = Exceptionally Controlled Information

ECI is a classification above Top Secret. It’s for things that are so sensitive they’re basically not written down, like the names of companies whose cryptography has been deliberately weakened by the NSA, or the names of agents who have infiltrated foreign IT companies.

As part of its story on the NSA using agents to infiltrate foreign companies and networks, the Intercept published a list of ECI compartments. It’s just a list of code names and three-letter abbreviations, along with the group inside the NSA that is responsible for them. The descriptions of what they all mean would never be in a computer file, so the list is only of value to those of us who like code names.

This designation is why there have been no documents in the Snowden archive listing specific company names. They’re all referred to by these ECI code names.

EDITED TO ADD (11/10): Another compilation of NSA’s organizational structure.

Posted on October 16, 2014 at 6:22 AM

The Concerted Effort to Remove Data Collection Restrictions

Since the beginning, data privacy regulation has focused on collection, storage, and use. You can see it in the OECD Privacy Framework from 1980 (see also this proposed update).

Recently, there has been a concerted effort to focus all potential regulation on data use, completely ignoring data collection. Microsoft’s Craig Mundie argues this. So does the PCAST report. And the World Economic Forum. This is a lobbying effort by US business. My guess is that the companies are much more worried about collection restrictions than use restrictions. They believe that they can slowly change use restrictions once they have the data, but that it’s harder to change collection restrictions and get the data in the first place.

We need to regulate collection as well as use. In a new essay, Chris Hoofnagle explains why.

Posted on September 12, 2014 at 6:41 AM

Hackers Steal Personal Information of US Security-Clearance Holders

The article says they were Chinese but offers no evidence:

The intrusion at the Office of Personnel Management was particularly disturbing because it oversees a system called e-QIP, in which federal employees applying for security clearances enter their most personal information, including financial data. Federal employees who have had security clearances for some time are often required to update their personal information through the website.

This is a big deal. If I were a government trying to figure out who to target for blackmail, bribery, and other coercive tactics, this would be a nice database to have.

Posted on July 17, 2014 at 6:09 AM
