Why It's Important to Publish the NSA Programs

The Guardian recently reported on how the NSA targets Tor users, along with details of how it uses centrally placed servers on the Internet to attack individual computers. This builds on a Brazilian news story from mid-September that, in part, shows that the NSA is impersonating Google servers to users; a German story on how the NSA is hacking into smartphones; and a Guardian story from early September on how the NSA is deliberately weakening common security algorithms, protocols, and products.

The common thread among these stories is that the NSA is subverting the Internet and turning it into a massive surveillance tool. The NSA’s actions are making us all less safe, because its eavesdropping mission is degrading its ability to protect the US.

Among IT security professionals, it has long been understood that the public disclosure of vulnerabilities is the only consistent way to improve security. That’s why researchers publish information about vulnerabilities in computer software and operating systems, cryptographic algorithms, and consumer products like implantable medical devices, cars, and CCTV cameras.

It wasn’t always like this. In the early years of computing, it was common for security researchers to quietly alert the product vendors about vulnerabilities, so they could fix them without the “bad guys” learning about them. The problem was that the vendors wouldn’t bother fixing them, or took years before getting around to it. Without public pressure, there was no rush.

This all changed when researchers started publishing. Now vendors are under intense public pressure to patch vulnerabilities as quickly as possible. The majority of security improvements in the hardware and software we all use today are a result of this process. This is why Microsoft’s Patch Tuesday process fixes so many vulnerabilities every month. This is why Apple’s iPhone is designed so securely. This is why so many products push out security updates so often. And this is why mass-market cryptography has continually improved. Without public disclosure, you’d be much less secure against cybercriminals, hacktivists, and state-sponsored cyberattackers.

The NSA’s actions turn that process on its head, which is why the security community is so incensed. The NSA not only develops and purchases vulnerabilities, but deliberately creates them through secret vendor agreements. These actions go against everything we know about improving security on the Internet.

It’s folly to believe that any NSA hacking technique will remain secret for very long. Yes, the NSA has a bigger research effort than any other institution, but there’s a lot of research being done—by other governments in secret, and in academic and hacker communities in the open. These same attacks are being used by other governments. And technology is fundamentally democratizing: today’s NSA secret techniques are tomorrow’s PhD theses and the following day’s cybercrime attack tools.

It’s equal folly to believe that the NSA’s secretly installed backdoors will remain secret. Given how inept the NSA was at protecting its own secrets, it’s extremely unlikely that Edward Snowden was the first sysadmin contractor to walk out the door with a boatload of them. And the previous leakers could have easily been working for a foreign government. But it wouldn’t take a rogue NSA employee; researchers or hackers could discover any of these backdoors on their own.

This isn’t hypothetical. We already know of government-mandated backdoors being used by criminals in Greece, Italy, and elsewhere. We know China is actively engaging in cyber-espionage worldwide. A recent Economist article called it “akin to a government secretly commanding lockmakers to make their products easier to pick—and to do so amid an epidemic of burglary.”

The NSA has two conflicting missions. Its eavesdropping mission has been getting all the headlines, but it also has a mission to protect US military and critical infrastructure communications from foreign attack. Historically, these two missions have not come into conflict. During the cold war, for example, we would defend our systems and attack Soviet systems.

But with the rise of mass-market computing and the Internet, the two missions have become interwoven. It becomes increasingly difficult to attack their systems and defend our systems, because everything is using the same systems: Microsoft Windows, Cisco routers, HTML, TCP/IP, iPhones, Intel chips, and so on. Finding a vulnerability—or creating one—and keeping it secret to attack the bad guys necessarily leaves the good guys more vulnerable.

Far better would be for the NSA to take those vulnerabilities back to the vendors to patch. Yes, it would make it harder to eavesdrop on the bad guys, but it would make everyone on the Internet safer. If we believe in protecting our critical infrastructure from foreign attack, if we believe in protecting Internet users from repressive regimes worldwide, and if we believe in defending businesses and ourselves from cybercrime, then doing otherwise is lunacy.

It is important that we make the NSA’s actions public in sufficient detail for the vulnerabilities to be fixed. It’s the only way to force change and improve security.

This essay previously appeared in the Guardian.

Posted on October 8, 2013 at 6:44 AM • 42 Comments

Comments

Mike B October 8, 2013 7:07 AM

The best way to protect critical infrastructure is to not hook it into public networks or, even better, not use programmable logic. Fixing technical vulnerabilities is nothing more than security theatre because there will always be new ones. Procedural security is currently the only way to do things right.

Wm October 8, 2013 7:16 AM

There is one thing that all people need to accept and understand: the NSA, like all government entities, is not concerned about the privacy of businesses and the law-abiding public, and like the criminals it purports to pursue, it will continue to spy on everyone, even breaking laws in the process. Its purpose is to build dossiers on everyone so that, like Nazi Germany and Stalinist Russia, it can more easily build a case against its perceived enemies.

“If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him.” – Cardinal Richelieu

KG October 8, 2013 7:19 AM

It is fair to argue that developing tools to go after users of the Tor network is part of NSA’s (and other law enforcement agencies’) job. Figuring out how to break a system is not equivalent to putting trapdoors in systems or otherwise sabotaging them. Note also that even if developing the capability is ok, using it wholesale is obviously not ok.

Jim K October 8, 2013 7:34 AM

I seem to remember that the only way Battlestar Galactica preserved its critical infrastructure was by going analog.

NobodySpecial October 8, 2013 7:53 AM

@KG it’s fair to argue that fighting an invader is the army’s job. That doesn’t mean it would be sensible for the army to rig demolition charges on every public building, bridge and tunnel now and leave the detonator wires hanging out in public – that’s what the NSA is doing with mandated backdoors.

Bruce Schneier October 8, 2013 8:32 AM

One of the things I need to stress more is that this is not only about the NSA. We have a window into how the NSA spies on the Internet, but these are the tools and techniques used by government intelligence agencies everywhere. We need to secure the Internet against the tools, regardless of who is using them. That makes us all safer.

Ken October 8, 2013 9:51 AM

What’s amazing to me is that so many people are putting blind faith in service providers to take care of their interests on-line.

In short order we’ll look back on the way things are today much like the older among us remember how it was unnecessary to lock the doors to their cars & homes, when the only “security device” of any real interest was a fire alarm….

As with security tools in other arenas, what’s happening is that enterprising people are developing unique encryption and other unique means of protecting their interests via multi-layered protections (in the house analogy: fire alarms, locks, burglar alarms, etc.) … akin to drug traffickers developing tunnels to avoid monitored roads.

cowbert October 8, 2013 9:54 AM

Mike B: that is hard to do, even with airgapping, because the systems themselves are still networked on the other side of the airgap. Even if you did not “network” the systems together, they can still be hacked over sneakernet; that is how Stuxnet worked. Spreading through a network was just the icing on the cake: it relied principally on hijacking the solution to how do you get work done(tm) between systems without any transport media. Unless you want to go all Butlerian Jihad, there is no real way to escape the problem at any transport level – you must rely on audits of the codebase.

Mark Johnson October 8, 2013 9:55 AM

There’s a dichotomy not well understood by the public. You’re often calling for publication and open acknowledgement, also expounding on the value of open designs. So, why then, for example, is the recent Adobe compromise so damaging? Why doesn’t this suddenly make us all safer? And no matter what your explanation is, I guarantee there will be something grossly inconsistent about it. There’s a far bigger issue at play here and no one even sees it.

Ben October 8, 2013 10:13 AM

@Ken, doubly amazing since service providers are not currently allowed to take care of your privacy interests against the government.

The pen-register thing is only lawful because corporations don’t have full 4th amendment protections (according to the latest from judges).

They get 1st amendment protections because the supreme court recognised that the ability to incorporate is important to allow your voice to be heard. (Think Citizens United, Planned Parenthood, and political parties). Stripping corporations of free speech rights would allow congress to very effectively silence people they didn’t want heard. Yes, you could still get on a bus and stand outside in the rain, but it would mean an end to putting money into a bucket and having someone else do it for you. This is why corporations need to be considered “persons” for 1st amendment purposes.

But our papers are in the cloud. We need to be able to contractually bind cloud providers to protect our privacy.

Until now there was no convincing argument as to why corporations deserved or needed 4th amendment rights.

This is the reason. Corporations need 4th amendment rights against unreasonable searches so they can resist this kind of intrusion on our behalf.

2^911 October 8, 2013 10:53 AM

We have to accept that the Internet wasn’t originally designed to transfer data securely, that people use smartphones and Internet services for exhibition today, and that the NSA has orders to spy as well as it can. Sure, the NSA does a little more than it is allowed to, but hey, they do their best to be the best-informed agency in the world.

So what do we learn from this?
Shall we attack the NSA infrastructure (if possible)?
Would it be helpful to demand them to follow the law?
Do we need new ways of encryption or authentication?
Do we have to invent security systems so complex that only a handful of people understand how they work (security by obscurity)?
Do we have to burn any technology that could be used to spy us?

Think back to when the first widely used operating systems came up … Microsoft delivered Outlook with all features turned on and no security built in, only because it wasn’t nice for customers to be stopped while doing something insecure.

Today the iSomething and other smartphones also don’t want to stress the user with a decision about privacy or security … this w(c)ould influence the purchase decision. Cars are delivered with built-in GSM cards for support, and the number of cars with apps will grow faster than any security concern at the manufacturer … some of the mistakes of the first operating systems will be repeated, but with one fundamental difference: today the security experts are connected around the world and have instant access to any new information revealed to exploit the new products born of a too-fast, profit-driven environment.

So what. Put the head in the sand and cry?

I think we first have to accept that the technological world has changed into a very complex environment, with vulnerabilities inherent in that complexity. Services are interconnected and exchange much more information than needed, and too few administrators apply the principle of least privilege when granting rights to people and services.

The second thing we have to accept is that most people (I’d say 99%) use this technology with no security awareness and no imagination about how the information they give away of their own free will can be abused. Most people think that usernames and passwords are the only way to access a system and that this information is stored securely on the provider’s side.

I think the only way to get back a bit more privacy is to respect the privacy of other people (e.g. don’t post pictures, information, mails, … of other people) and to use your own private information with much more care.

Everybody must understand that privacy and security awareness don’t fall like manna from the skies. We have to speak with normal users about basic mistakes first, using revealed information to show that this isn’t born in the paranoid brain of the “IT guy” but is the real world. Use any information about what Facebook stores, and show in easy ways how website trackers work and how soft and hard information is analyzed to create an individual profile.

We have to come up with easy hints and some simple tricks, maybe some new tools, to help them change their behavior.

I don’t want to fight against the giants (I learned from Don Quixote) or try to turn back time. People who are enlightened by real arguments are more willing to change their behavior. Start with your family, friends, and colleagues, and then speak to people in leading positions (e.g. about the disadvantages of BYOD; shielding internal services against external access; using graphical firewalls; the security designs of their products; avoiding web trackers and foreign pictures/code in your websites and software, …).

Recently I told a system administrator about a design flaw in an internal server farm … only after I showed that it really exists and how it works did he accept that there is a big problem and a need to do something … sad, but true. Maybe you have your own stories of this kind.

We can’t solve these problems overnight, but I sleep much better since I started explaining modern features from my perspective and in a less technical way, to reach more people.

Tim L October 8, 2013 11:46 AM

@Mark Johnson “There’s a far bigger issue at play here and no one even sees it.”

The NSA has built a massive control system, and so the logic of control theory applies.

To the extent that the NSA acts on the information that it gathers, then they randomize the residuals in the metadata field that constitutes the world in which the rest of the people live.

The net consequence is that two classes of people emerge: those who have the information necessary to plan their lives and can achieve their goals, and those who cannot as a matter of principle, because they must live an event-driven life, improvising as best they can amid the chaos generated in the wake of NSA-guided actions.

The destruction of privacy and wholesale theft of information is a theft of opportunity.

Curious October 8, 2013 1:17 PM

@TimL

I think that with this notion of “theft of opportunity” one ought to take a more sceptical approach, precisely by acknowledging that a notion of “fighting for initiative” (my notion here) is probably a bad idea for all users of the Internet; it’s basically reminiscent of tactical war activity, I would say, as an armchair gamer general.

If I were to make a specific point of all this, I think being content with the prospect of “war” (please indulge me) on the Internet is a bad STRATEGY: there is no noteworthy goal (unlike the USA explicitly having wanted to bomb Syria), and a multitude of adversaries rely on a “divide and conquer” strategy, boxing people in and picking off the weak.

Speaking now of the word ‘goal’, it occurred to me that anyone using the word ‘aim’ in a “political” sense probably cannot be trusted, because it is too vague, potentially having no real meaning other than as a rhetorical ploy. 🙂 I believe some intelligence official was paraphrased using that word recently in an article. ‘Goal’ is a nicer word, though, imo, as it carries a fairly concrete meaning and a clear metaphor as generally used.

Nick P October 8, 2013 1:19 PM

@ Bruce Schneier

Very well said! I’d say this is our model description of the problem. For security, I’d of course revise it to add that we mandate our highest robustness tech on most critical infrastructure, subsidizing private efforts if need be. They say it must be protected from TLA’s. I say to them “Put your tech and money where your mouth is.”

Ulysses Underscore October 8, 2013 1:50 PM

@Bruce Schneier: Excellent point about NSA’s two conflicting missions.

I guess that will help keep the military budget up high:

“We need more funds to weaken the IT infrastructure so we can break into machines when we have to!”

but yet

“We need more funds to protect the domestic IT infrastructure (cough that we have weakened cough).”

On the other hand, although most of the NSA hacking techniques will likely not remain secret for very long, some of them will likely stay secret for a (long) while.

I mean, what year are these Snowden documents from again? If they are a few years old, did he not have access to anything newer?

I would think NSA produces new slides and documents yearly, like any bureaucratic organization.

Ulysses Underscore October 8, 2013 3:52 PM

Looks like the NSA does not have a lot of respect for corporate security:

“Former NSA technology boss Prescott Winter has a word for the kind of security he sees even at large, technologically sophisticated companies: Appalling. Companies large enough to afford good security remain vulnerable to hackers, malware and criminals because they tend to throw technological solutions at potential areas of risk rather than focusing on specific and immediate threats, Winter said during his keynote speech Oct. 1 at the Splunk Worldwide User’s Conference in Las Vegas. ‘As we look at the situation in the security arena we see an awful lot of big companies – Fortune 100-level companies – with, to be perfectly candid, appalling security.’”

No wonder, though. Thank the NSA for weakening security and then complaining that it’s so weak.

Scott October 8, 2013 4:44 PM

@Ulysses Underscore

It’s worse than that; I used to work for a Fortune 100 company, with lots of data that most people expect to be kept private, that had a password policy with a MAX length of 8 characters.

Security costs money, and big companies aren’t going to spend money on things they won’t notice (at least, not until it bites them).

Clive Robinson October 8, 2013 5:49 PM

@ Scott,

Speaking of,

    Security costs money, and big companies aren’t going to spend money on things they won’t notice (at least, not until it bites them)

Did you see the Brian Krebs article on what he calls SSNDOB,

http://krebsonsecurity.com/2013/09/data-broker-giants-hacked-by-id-theft-service/

Especially the paragraph that talks about the crackers’ malware,

    An initial analysis of the malicious bot program installed on the hacked servers reveals that it was carefully engineered to avoid detection by antivirus tools. A review of the bot malware in early September using Virustotal.com – which scrutinizes submitted files for signs of malicious behavior by scanning them with antivirus software from nearly four dozen security firms simultaneously — gave it a clean bill of health: none of the 46 top anti-malware tools on the market today detected it as malicious (as of publication, the malware is currently detected by 6 out of 46 anti-malware tools at Virustotal).

That is definitely a black eye for the AV concept yet again. Basically, AV might find “run of the mill” “fire and forget” malware, but when it comes to targeted or directed attacks made by halfway competent attackers, AV is not worth the CPU cycles it consumes. Which does not bode well if you have the likes of “state level” attackers “wanting an in” on your systems…

Such failings in quite major PII-collecting organisations should be ringing a few bells in walnut corridor and down among the denizens of legal compliance audit.
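The VirusTotal check Krebs describes can be reproduced by hashing a suspect binary and asking the service for its detection ratio. A minimal sketch follows; the v2 endpoint URL and the JSON field names (`response_code`, `positives`, `total`) are assumptions based on VirusTotal’s public API of the time and should be verified against their documentation, and you need your own API key:

```python
import hashlib
import json
import urllib.parse
import urllib.request

# Assumed VirusTotal v2 file-report endpoint; check current API docs.
VT_REPORT_URL = "https://www.virustotal.com/vtapi/v2/file/report"

def sha256_of_file(path):
    """Hash the suspect binary; the hash serves as the lookup key."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def lookup(api_key, file_hash):
    """Query the report endpoint and return the parsed JSON dict (network call)."""
    params = urllib.parse.urlencode({"apikey": api_key, "resource": file_hash})
    with urllib.request.urlopen(VT_REPORT_URL + "?" + params) as resp:
        return json.loads(resp.read().decode("utf-8"))

def summarize_report(report):
    """Reduce a report dict to a (positives, total) detection ratio,
    or None if the hash is unknown to the service."""
    if report.get("response_code") != 1:
        return None
    return report["positives"], report["total"]
```

For the sample Krebs analyzed, `summarize_report` would have returned `(0, 46)` in early September and `(6, 46)` at publication, which is exactly the gap Clive is pointing at.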

Clive Robinson October 8, 2013 6:03 PM

A thought for every one…

Using the power of hindsight think back to when Bill Gates in effect threw the toys out of the pram over the then conventional wisdom –that security had negative ROI and reduced profitability– by rather suddenly introducing new development methods to improve security in MS products…

Back then, pundits and talking heads gave rather facile arguments for the reason.

Now have a re-think in the light of the Ed Snowden revelations…

Anon10 October 8, 2013 7:18 PM

I think Bruce is being somewhat disingenuous, at least on the cybercrime issue. He’s argued in the past that the threat of cybercrime is vastly exaggerated, such as here: http://www.digitalbond.com/blog/2012/06/26/are-we-spending-enough-or-too-much-on-security/. If he’s right about the losses due to cybercrime and about his claims regarding the government making the Internet less secure, then the cybercriminals must not be smart enough to take advantage of the inserted vulnerabilities on any economically important scale. Further, the USG spends about 80 billion on intelligence. Even if you assume the US had an extra 100 million in cybercrime (a totally made-up number), that’s just a small cost of doing business compared to the total intel budget.

Anon10 October 8, 2013 7:32 PM

@Bruce

    It’s folly to believe that any NSA hacking technique will remain secret for very long.

Does it really matter if exploits become public knowledge after more than a few years? If an exploit developed today targets a weakness in version one of a product, in five years won’t most people be using version four?

Dirk Praet October 8, 2013 7:34 PM

It’s not just about security; it’s about fundamental democratic principles too. Indiscriminate mass surveillance is a trademark of an authoritarian regime, not of an open and free democratic society.

Just as the NSDAP (Nationalsozialistische Deutsche Arbeiterpartei) was neither national nor socialist, and definitely had nothing to do with workers, neither is the NSA national, about security, or an agency. GSO, or Global Surveillance Organisation, would be a much more appropriate acronym.

Brad October 8, 2013 7:47 PM

We need to start adding noise to the stream. One of the things that makes it fundamentally easy for the NSA (and companies like Google and Facebook) to monitor communications is that the traffic we put on the internet is exceedingly high-fidelity: we don’t push a lot of bits that don’t represent our data, thoughts, opinions, etc.

We need to lower the fidelity of our traffic to make us harder to target. I’ve begun work on a Firefox add-on to place static into the trail users leave behind them: search terms one would never normally use, reading articles on topics the user has no interest in, etc. Anyone watching will have a hard time knowing what you’re really doing online.

See more here: http://blog.koehn.com/2013/10/turning-up-noise.html
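The chaff idea Brad describes can be sketched in a few lines: pick plausible-but-irrelevant queries at random and issue them on a jittered schedule, so a watcher can’t tell signal from noise. This toy illustration is not Brad’s add-on; the decoy vocabulary and timing parameters are invented, and a real tool would issue the queries through the browser against real search engines:

```python
import random

# Tiny invented vocabulary of decoy topics; a real tool would draw from
# trending searches or a large corpus so the chaff blends into ordinary traffic.
DECOY_TERMS = [
    "orchid care", "carburetor rebuild", "sourdough starter",
    "roman aqueducts", "knitting patterns", "tide tables",
]

def decoy_queries(n, rng=None):
    """Build n random decoy search queries, each combining two topics."""
    rng = rng or random.Random()
    return [" ".join(rng.sample(DECOY_TERMS, 2)) for _ in range(n)]

def schedule(n, mean_gap_s=300.0, rng=None):
    """Jittered delays (in seconds) between decoy requests. Exponential gaps
    roughly mimic bursty human browsing rather than a machine-regular beacon,
    which would itself be a giveaway."""
    rng = rng or random.Random()
    return [rng.expovariate(1.0 / mean_gap_s) for _ in range(n)]
```

One design note: the timing matters as much as the content, because a perfectly periodic stream of fake queries is trivially filtered out.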

Daniel October 8, 2013 7:54 PM

Bruce wrote, “That makes us all safer.”

I agree, yet it is important to understand that this value, “all safer,” is not a value everyone shares. People are comfortable with the idea that everyone is safer so long as “everyone” is limited to “everyone like me.” Yet as soon as the topic turns to people who are not popular, such as pedophiles or terrorists or some other group that is perceived negatively, it quickly becomes clear that the idea that such groups might be part of “us,” let alone “safer,” is appalling.

There are plenty of people…plenty of voters…plenty of donors to political campaigns…who would gladly make themselves less safe if it meant a greater probability of catching those they perceive to be the bad guys. The common and overwhelming response to topics such as the Silk Road take-down is not “OMG maybe Tor is broken” (which would at least evidence a concern about security even if untrue) but a simple “good riddance.”

Under the American system of jurisprudence there is no right to commit a crime. When I was younger, I found the fact that police and those in authority regularly commit crimes, tell lies, etc. to be ironic or even hypocritical. As I have gotten older, I have come to appreciate that many people see no contradiction at all. If one has to be a crook to catch a crook, well, at least they are “our” crooks.

So I agree that much of what you propose Bruce would make us all safer. I simply do not think that most people desire for us all to be safer.

cointelpro October 9, 2013 3:49 AM

@kashmarek: “I wonder who gets the blame for this?”

You would have better linked to the original article titled “Meltdowns Hobble NSA Data Center”: http://online.wsj.com/article/SB10001424052702304441404579119490744478398.html

@Anon10: your post is an excellent example of disinformation Rule #17 of http://cryptome.org/2012/07/gent-forum-spies.htm because:

Bruce wrote: “It’s folly to believe that any NSA hacking technique will remain secret for very long.”
You read: “It’s folly to believe that any NSA hacking technique won’t get public after very long.”

Aspie October 9, 2013 4:21 AM

Side Topic: Anyone else wondering (or got a theory as to) why the NSA’s Utah Datacentre is having problems?

Is there something super-exotic about the MIL-spec electrons they’re using? Superconductor snags perhaps?

Mike the goat October 9, 2013 4:40 AM

Aspie: yeah, I bet it is all the heat being generated by those racks full of ASICs used for cracking weak keys. 🙂

Dirk Praet October 9, 2013 6:24 AM

@ cointelpro, @ Moderator

@Anon10: your post is an excellent example of disinformation Rule #17 of http://cryptome.org/2012/07/gent-forum-spies.htm because …

Not just that. The style and the specific use of the word “disingenuous” is also consistent with that of @Anon in a comment made in the September 28th thread about Senator Feinstein Admits the NSA Taps the Internet Backbone. @Anon has been called out for trolling on previous occasions, and was also suspected by yourself of being the same person behind the persona @ Rolf Weber in the September 27th thread Another Schneier Interview.

To the best of my knowledge, using different aliases and sockpuppeting is a violation of forum etiquette.

Aspie October 9, 2013 6:41 AM

@Mike the goat

I can’t find the link now but cryptome had some good pictures of the side of the installation with some large tanks that may hold cryo coolant.

The way they’re going the NSA might just as well launch a fake BOINC project (call it “genomic substring” perhaps) and farm out the cracking to the millions of networked computers that they haven’t already quietly compromised by other means.

Mike the goat October 9, 2013 7:12 AM

Aspie: it is indeed possible that many of these botnets have either been created by the NSA or seized by them (i.e. by taking their C&C domain or by forcing the IRC server operator to let them impersonate the controlling nick). I am not for a minute seriously considering this (I am not completely tinfoil-hat crazy), but hypothetically the additional resources of, say, 200k domestic computers would be a great boon for them. Sure, their performance would suck compared with dedicated equipment (I am guessing many would have a GPU suitable for hash cracking, and those that don’t need only fall back gracefully to much slower CPU cracking when idle), but they aren’t paying for the power or the cooling.

Dirk: I wonder why someone would bother trolling Bruce’s blog? If they bothered to read schneierfacts they’d know that Bruce has powers that many of us can’t even elucidate.

cointelpro October 9, 2013 7:45 AM

@Mike the goat: “I wonder why someone would bother trolling Bruce’s blog?”

I think it is damage control paid for by the NSA, aimed at the non-techie journalists reading this blog.

Posting paid comments in favor of the NSA without disclosing that they are paid: is this illegal?

If “NSA” is replaced by a product or service, it would be in some countries (France, …).

Me October 9, 2013 1:22 PM

I use Perspectives, and I noticed that Google does not play well with it (it sends different signatures to all the notaries). I couldn’t help but wonder whether each notary was being attacked, or whether Google sends out different signatures to everyone to mask any attacks that might occur.

Anon10 October 9, 2013 4:11 PM

@Dirk

I’ve posted on different entries before under different names, but never that of “Rolf Weber”. Rolf Weber is an entirely different person.

martinr October 9, 2013 8:14 PM

While it is probably true that the NSA doesn’t know whether, and how many, sysadmins prior to Edward Snowden walked out the door with shitloads of secret documents, they probably do keep statistics on the results/outcomes/effectiveness of their operations — and would notice if those figures changed significantly and unexpectedly for a small number of countries or groups of targets.

The more parallel operations they have, the more reliable are the statistical results.

It would not surprise me if the “noise” in the statistical results has significantly increased since Snowden’s leak, and it might have become much harder for them to distinguish new small leaks from what they might think of as “collateral damage” of Snowden’s revelations.

Well, it was their decision to get themselves so deep into the shit--not just knee-deep, but the whole body all the way to the lower lip--only now realizing they hadn’t thought of the waves…

Mike the goat October 10, 2013 12:45 AM

Me: Google’s answer, I believe, is that they run a distributed server farm and each server has a different SSL cert. The other excuse I have heard is that they rotate certs daily to give some kind of PFS. To #1 I say bunk: many distributed sites manage to use the same certificate across all their servers. You just have to be careful with management, and I am sure a company like Google can figure that out! The second excuse is total bunk too: rotating certs like they do does not improve security, as there is only a finite number of certificates in use; it’s just stupid. And besides, we have a PFS option in current SSL implementations. Just use it.

Yeah, it is suspicious, it’s wildly against industry best practice, and I vehemently dislike not being able to validate who the hell I am talking to. Sure, you can trust the CA, but I note that Google has certs signed by at least two (Equifax and GeoTrust, if I recall correctly).
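One way to notice this behaviour yourself is to pin the certificate fingerprints you have seen before and compare them on each connection, which is roughly what Perspectives’ notaries do in aggregate. A minimal sketch, not a hardened client: the pinned fingerprints are whatever you recorded on earlier connections, and `fetch_der_cert` makes a live network call using Python’s standard `ssl` helpers:

```python
import hashlib
import ssl

def fingerprint(der_cert_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate, as colon-separated hex."""
    digest = hashlib.sha256(der_cert_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def check_pin(der_cert_bytes, pinned_fingerprints):
    """True if the presented certificate matches one we pinned earlier."""
    return fingerprint(der_cert_bytes) in pinned_fingerprints

def fetch_der_cert(host, port=443):
    """Grab the server's leaf certificate as DER bytes (network call)."""
    pem = ssl.get_server_certificate((host, port))
    return ssl.PEM_cert_to_DER_cert(pem)
```

A site legitimately rotating many certs (as Google is described doing above) will trip this check constantly, which is precisely why commenters found the behaviour hard to distinguish from an attack.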

Lawrence D’Oliveiro October 13, 2013 1:29 AM

“This is why Apple’s iPhone is designed so securely”

What a (sad) laugh to hear Bruce Schneier, of all people, say this. The only security in the iPhone is from Apple’s vetting of apps before they go into its app store; once an app is on the phone, it can do whatever it likes. Contrast this with Android, where every app has to declare, in its manifest, all the permissions it needs for the facilities it wants access to; if it tries to access something without asking for permission, it is denied.

On the iPhone, it seems, any app can crash the entire system: http://www.theregister.co.uk/2013/10/11/iphone_5s_blue_screen_of_death/
