Schneier on Security
A blog covering security and security technology.
January 28, 2011
Whitelisting vs. Blacklisting
The whitelist/blacklist debate is far older than computers, and it's instructive to recall what works where. Physical security works generally on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it's easier -- although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn't -- but because it's a security system that can be implemented automatically, without people.
To find blacklists in the real world, you have to start looking at environments where almost everyone is allowed. Casinos are a good example: everyone can come in and gamble except those few specifically listed in the casino's black book or the more general Griffin book. Some retail stores have the same model -- a Google search on "banned from Wal-Mart" results in 1.5 million hits, including Megan Fox -- although you have to wonder about enforcement. Does Wal-Mart have the same sort of security manpower as casinos?
National borders certainly have that kind of manpower, and Marcus is correct to point to passport control as a system with both a whitelist and a blacklist. There are people who are allowed in with minimal fuss, people who are summarily arrested with as minimal a fuss as possible, and people in the middle who receive some amount of fussing. Airport security works the same way: the no-fly list is a blacklist, and people with redress numbers are on the whitelist.
Computer networks share characteristics with your office and Wal-Mart: sometimes you only want a few people to have access, and sometimes you want almost everybody to have access. And you see whitelists and blacklists at work in computer networks. Access control is whitelisting: if you know the password, or have the token or biometric, you get access. Antivirus is blacklisting: everything coming into your computer from the Internet is assumed to be safe unless it appears on a list of bad stuff. On computers, unlike the real world, it takes no extra manpower to implement a blacklist -- the software can do it largely for free.
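The access-control and antivirus examples above boil down to opposite defaults, which a minimal sketch makes concrete (all names and list contents here are hypothetical, purely for illustration):

```python
# Whitelist: deny by default, allow only what is listed.
# Blacklist: allow by default, deny only what is listed.

ALLOWED_USERS = {"alice", "bob"}             # whitelist (access control)
KNOWN_BAD_FILES = {"evil.exe", "worm.vbs"}   # blacklist (antivirus)

def door_lock(user: str) -> bool:
    """Whitelist model: only listed users get in; unknown means denied."""
    return user in ALLOWED_USERS

def antivirus(filename: str) -> bool:
    """Blacklist model: everything runs unless it is known bad."""
    return filename not in KNOWN_BAD_FILES

print(door_lock("alice"))      # True: on the list
print(door_lock("mallory"))    # False: unknown, so denied
print(antivirus("notes.txt"))  # True: unknown, so allowed
print(antivirus("evil.exe"))   # False: on the list
```

The whole debate is visible in those two `return` statements: the membership test is identical, only the default for the unknown case differs.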
Traditionally, execution control has been based on a blacklist. Computers are so complicated and applications so varied that it just doesn't make sense to limit users to a specific set of applications. The exception is constrained environments, such as computers in hotel lobbies and airline club lounges. On those, you're often limited to an Internet browser and a few common business applications.
Lately, we're seeing more whitelisting on closed computing platforms. The iPhone works on a whitelist: if you want a program to run on the phone, you need to get it approved by Apple and put in the iPhone store. Your Wii game machine works the same way. This is done primarily because the manufacturers want to control the economic environment, but it's being sold partly as a security measure. But in this case, more security equals less liberty; do you really want your computing options limited by Apple, Microsoft, Google, Facebook, or whoever controls the particular system you're using?
Turns out that many people do. Apple's control over its apps hasn't seemed to hurt iPhone sales, and Facebook's control over its apps hasn't seemed to affect Facebook's user numbers. And honestly, quite a few of us would have had an easier time over the Christmas holidays if we could have implemented a whitelist on the computers of our less-technical relatives.
For these two reasons, I think the whitelist model will continue to make inroads into our general purpose computers. And those of us who want control over our own environments will fight back -- perhaps with a whitelist we maintain personally, but more probably with a blacklist.
This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. You can read Marcus's half there as well.
Posted on January 28, 2011 at 5:02 AM
The fact that iPhones sell doesn't indicate that people *want* their options to be limited; it just means they don't care. So long as the control mechanisms are implemented reliably enough to not inconvenience end-users, it doesn't matter how much is wrong with them from a developer's standpoint.
(There's no comment feature on the Information Security page!)
The Apple whitelisting for iPhone is already making inroads into the desktop world. The new Mac App Store has the same model as the iOS stores: to get your app in there, it has to be signed by Apple. The next step is for the user to be able to say "I only trust apps signed by Apple." (This will be more difficult because the App Store business model (1/3 to Apple) will keep most prominent apps, like Office and Photoshop, out. So maybe the user can maintain a whitelist which includes anything signed by a specified list of vendors, or something like that.)
With regard to PC execution, I agree that blacklisting _used_ to make sense when there were far more legitimate applications being released than malware. Today, however, that model has flipped. How often do you install a new legitimate application? Most AV vendors create a new signature every 2-4 seconds. So unless your application deployment rate is faster than that, whitelisting makes more sense.
The electric power industry in the United States is very interested in using whitelist technology on their control systems.
We typically have a small suite of control applications we run, plus a few additional to help manage and maintain the system. The majority of the time, we have little or no need to run apps outside of that list, and the users that do typically possess more skills in computer security and administration.
The problem WE have is that the whitelisting technology is often flawed, or the control applications are not written in a manner that makes whitelisting friendly (i.e. DLLs being dynamically loaded/unloaded, small apps being called from other apps, custom configurations, and very very very old code).
There needs to be a concerted effort on the part of our control system vendors to adopt whitelisting as a core principle if it's going to have traction here.
Chris: "How often do you install a new legitimate application? "
Very often. I use a lot of Open Source apps, and what's more, they may only stay on my PC for a few days while I use them for a specific purpose. I also write and install my own apps for specific, niche tasks, and quite often other people ask for them when they see what they do.
The trouble with whitelists is that you hand over control of what you can do to a gatekeeper (list-keeper? what's the correct term?) whose interests may not coincide with your own and who may become a monopolistic, market-damaging agent. If companies *can* corner a market, they will: that's what is in the interest of a company. Whitelisting gives the gatekeeper an anti-competitive tool.
@Rob All the more reason to go FOSS ...
I own an Android phone precisely because of the control that Apple exert over iPhone applications. Yet every application I install comes from the Google-run marketplace. This seems like a contradiction, but it isn't. There are countless stories of terrible behaviour from Apple in what they allow through the store. But by contrast, the fact that I have the option to install from elsewhere keeps Google honest. In other words, it's precisely because I have this option that I don't need to use it.
"This is done primarily because the manufacturers want to control the economic environment"
As experienced by George Hotz, the guy who released the Playstation 3 decryption keys so other people could play unsigned games on it. He has just received a temporary restraining order prohibiting him from "offering to the public, creating, posting online, marketing, advertising, promoting, installing, distributing, providing, or otherwise trafficking in any software or methods for circumventing the PS3's protection methods". ( http://www.tekgoblin.com/2011/01/27/... ) On a related sidenote, Sony has updated its PS3 firmware to version 3.56 to stop jailbreaking.
This evolution towards growing vendor control over computing environments is one of the main reasons I went FOSS years ago and why I don't use Apple products for anything else than educational purposes.
@Rob Modern Whitelisting products do not require that you trust only what the Whitelisting vendor deems good. They allow you to create and maintain your own whitelist. You simply have to approve what is installed/executed on your system, if you have been deemed a "Trusted User", as most developers in Enterprise environments would be, and certainly you would deem yourself such on your personal system. The idea is to provide data about the application you are loading to help you make a good decision, not to keep you from running what you want to if you have a business need (or personal desire) to do so.
In an enterprise environment, application whitelisting would be the better option, since we would know which applications are installed.
The use of Apple as an example of a security whitelist is problematic.
Apple's choice of allowed applications appears primarily to be based on those that don't conflict with Apples revenue streams actual or envisaged (judged by those developers that have their apps rejected).
Apple does not appear to do anything other than minimal checking on the security features of the third-party apps it does offer.
Thus at best the Apple market is "reputation" based, not "security" based. So if you don't get caught with a backdoor, you can continue to supply apps with backdoors and Apple will stick them in the store.
Also, on the issue of the lack of "whitelists" for user applications on desktop systems: this is not due to the large number of applications but to the usual "one size fits all" mentality of the ITSec Dept.
The problem becomes easily managed if the "one user" view is replaced with the "one role" view.
If jobs are broken up into roles, then it quickly becomes clear what limited subset of desktop apps each role requires, and also the core set the majority of users require.
Oh, and from an ITSec perspective, roles should not aggregate access to apps. That is, if you have two roles, you should not be able to access apps from both roles at the same time unless they are common to both roles. Likewise, information from one role should not be available in the other role unless specifically mandated as such.
Although it is not immediately obvious how you do this on the desktop using standard tools, a little lateral thought solves the problem (think of a user as a group of roles).
"A Google search on 'banned from Wal-Mart' results in 1.5 million hits, including Megan Fox -- although you have to wonder about enforcement. Does Wal-Mart have the same sort of security manpower as casinos?"
I wonder how much of the blacklisting has to do with who not to accept checks or certain forms of payment from. Or how much of it is that, if there is another incident such as theft or disruption, they can produce the list and tell law enforcement "they have been informed they are not welcome here" to aid in prosecution.
Just some thoughts.
The whitelisting spree is actually a wider problem of today's Western world. People have effectively been educated to want to be controlled by some authority. I think it's very concerning.
You seem to be conflating two different forms of whitelists: first, only the whitelist is allowed (e.g. iPhone); second, the whitelist overrides the blacklist but is otherwise meaningless (e.g. Security Theater redress numbers).
Apple selling app security as a feature makes total sense to me. We're being bombarded with articles warning about mobile devices being the new platform of breach. Are you suggesting that Apple's approach isn't part of the solution?
Security is more than just stopping malware. Using a channelized Apple-like solution introduces a new risk where the corporation can restrict (accidentally or purposefully) access to your own data.
@Casey Which is why I am leery of the whole "cloud computing" thing. Why put oneself even more at the mercy of a corporation whose only concern is its own pocketbook?
And once again, people get AV wrong.
AV is not blacklisting, at least not exclusively. Just because you've never heard of any AV technology other than a scanner doesn't mean that's all there is to AV.
The first anti-virus program was not a scanner. The use of whitelisting in AV has a long history.
"I agree that blacklisting _used_ to make sense when there were far more legitimate applications being released than malware. Today however, that model has switched. How often do you install a new legitimate application?"
You're mixing your criteria. First you talk about how often legitimate applications are released, but then argue about how often an individual installs them.
Data from whitelist vendors shows that both the number and frequency of creation of legitimate binaries outstrip malware binaries by orders of magnitude.
If you instead want to talk about how often you install legit software, your argument would be more valid if you compared it to how often you attempt to install malware, not how often malware or signatures are created.
Speaking of vendor lock-in, here's yet another example of why it will never work:
Police states incite revolution.
Free states incite innovation.
I'm sure the megacorps will catch on eventually (or maybe they already have? As in, perhaps snuffing out innovation outside of their R&D is precisely what's on the agenda. Let's face it: if piracy/'misuse' were honestly gouging these companies, they wouldn't have billions left over to file thousands of frivolous lawsuits along the way. Something to think about).
Facebook and iPhone use aren't necessarily an endorsement of the control policies. There's a question of how much choice consumers think they have. If your friends or relatives are centering their online presence on Facebook, you don't have the option of selecting a less restrictive provider to keep up with them on. If you want something with the features of an iPhone, then, in the English-speaking world at least, your choice until recently was mainly iPhones. The fact that Android has overtaken iOS so quickly* may be a sign that many people do not prefer the whitelist model.
"Canalys' most up-to-date numbers, for Q3 2010, put Android's global share at 25.1 per cent [...] Apple came third with a share of 17.4 per cent [...]"
Which is precisely the point Petrea: If there was no lock in, you would not have to use Facebook to keep up with your friends on Facebook (Or some other Big Vendor app ...)
OR, alternatively, they are simply diving into new technologies and trying them out.
Officially, I'm an OSS advocate all the way, but I'll be the first to point out that Android's open wild west apps platform is (to date) a muddy pool of terribly written, battery killing, privacy circumventing crApps.
When you have to install an additional application that kills unwanted background processes that you don't need and didn't ask for, and that you cannot uninstall or turn off permanently, I take that as a sign that something isn't going so well, or is far too infantile in its development stages to be a great solution for any end-user, not to mention your average Joe Dumbsh@t.
Is the Redress list really a whitelist, or does it just put you back in the pool?
If the people with Redress Numbers are really whitelisted, doesn't that indicate a hole in the system that can be exploited?
Calling iOS (as well as all other smartphone platforms and gaming platforms, for that matter) a whitelist that people choose is inaccurate; people's selection of smartphones is limited, and all the platforms use whitelists.
It's not a standard that people prefer, it's a standard that vendors do.
I readily accept Apple's screening of apps that run on my iPhone because I want the thing to work all the time. I need to be able to rely on it working and it always does, flawlessly. When designing a phone, Apple recognised that this was a make or break issue for most people - that they could not take the risk of allowing your phone to be compromised.
@Max S and some others: Not all the platforms use whitelisting to the same degree as Apple. On Android, for example, apps have to be code-signed, but the rules are such that you have to use a self-signed certificate, and you don't have to distribute through the Android store. So essentially anyone can make and distribute an app, even a malicious one, with anonymity. Because it's signed, Google could blacklist it, but that's a pretty weak and after-the-fact protection. I wonder why they bother with code signing at all.
At least in the US, Apple is the only outside company that can and will stand up to the cellular phone companies. Instead of choosing whether you'll be restricted by AT&T, Sprint, T-Mobile, or Verizon, you can choose Apple. If none of those five are what you want, well, that's life in the US.
The Android success is partly because it's also a nice platform, it's available on the cell phone company of your choice, and some of them sell pretty cheap. Apple has never gone for the low-end market. I really don't think it has anything to do with mandatory whitelists.
Agreed, for the most part. I believe most of Apple's business is due largely to customer/brand loyalty, and certainly not because of whitelists or consumer protections (especially in terms of security, which hints at plenty of larger issues behind the topic of illicit markets, but I digress).
Like Americans, Apple customers are (for the most part) proud of their brand, but are also some of the most critical of it. I think this also speaks a lot to Apple's success: they know who their customers are, and don't consistently (to be fair) turn a blind eye to their feedback.
The debate is interesting, but I think we can have it both ways. The Android model is an example of a hybrid. The Android market takes almost any kind of app, only blacklisting those that turn out to be malicious. Android phones use whitelists that won't even allow an update unless the user agrees to the permissions the app needs.
Desktops can benefit from this same type of setup using capabilities. With capability-based security, each resource has an associated capability that any given app must possess in order to use that resource. A capability system with controlled propagation can give out or revoke unforgeable capabilities. Each app can, upon installation, list the privileges it needs. If the user accepts, the system grants the app those capabilities. CapDesk, Polaris, EROS, and some others take this approach. It makes malware and circumvention much more difficult.
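The capability description above can be sketched roughly like this (a toy illustration, not how CapDesk, Polaris, or EROS actually implement it; all names are made up). The key properties are that capabilities are unforgeable tokens and that the grantor can revoke them:

```python
import secrets

class Capability:
    """An unforgeable token granting use of one resource."""
    def __init__(self, resource: str):
        self.resource = resource
        self._token = secrets.token_hex(16)  # unguessable identifier
        self.revoked = False

class ResourceManager:
    """Grants, revokes, and checks capabilities."""
    def __init__(self):
        self._granted = {}  # token -> Capability

    def grant(self, resource: str) -> Capability:
        cap = Capability(resource)
        self._granted[cap._token] = cap
        return cap

    def revoke(self, cap: Capability) -> None:
        cap.revoked = True

    def use(self, cap: Capability) -> str:
        # A forged Capability has a token the manager never issued.
        known = self._granted.get(cap._token)
        if known is cap and not cap.revoked:
            return f"accessed {cap.resource}"
        raise PermissionError("no valid capability")

mgr = ResourceManager()
cam = mgr.grant("camera")      # user approves camera access at install
print(mgr.use(cam))            # prints "accessed camera"
mgr.revoke(cam)
# mgr.use(cam) would now raise PermissionError
```

A rogue app can't manufacture access: constructing its own `Capability` object yields a token the manager never issued, so `use` refuses it.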
@ everyone who thinks Android is open
Both the latest SDKs and the marketplace are reserved for privileged developers and vendors. And app availability is not just controlled by Google, but also by the vendors (check out T-Mobile and tethering apps) and, outside the US, by governments.
There are no open phone platforms; saying that whitelists are preferred by users is just wrong (sorry Bruce, you know I love you, it's just a bad example for the argument--stick with video games, even though that one is much less cut-and-dried).
Wasn't the PS3 originally touted as featuring Linux as an installable option? Now, without 'breaking' anything, it's not an option, is it? Does this not break some law, IANAL but I wonder...
And exactly how many people at Sony BMG went to prison over the Sony BMG rootkit? I'd wager none. Is this not a crime? Or is it okay when perpetrated by corporations for their own schemes?
@ Larry Seltzer,
"I wonder why they bother with code signing at all[?]"
Contrary to "conventional wisdom" code signing does not offer any kind of security or any kind of assurance about code quality.
It cannot; all it says is "at some unknown point in time, some entity ran a hash of a package and then used a PK signing key they had access to on the hash".
It's not even a good audit marker, and as for traceability, all it tells you is that an entity (not even a human) had access to the signing key at some point in the past; they might have stolen it, etc.
Why people don't grasp this point is not immediately clear to me, but hey, "the world ain't perfect and that's for sure".
I was kind of hoping Stuxnet would be a wake-up call on this, but nah, it's a case of "yawn yeh, yawn so what, yawn roll over and back to sleep".
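The point above can be illustrated with a toy sketch (HMAC stands in for a real public-key signature, and the key and package contents are made up): a signature binds a key to a hash of some bytes, and asserts nothing else -- not who ran the signing step, not when, and not whether the code is any good.

```python
import hashlib
import hmac

# Whoever holds this key can sign anything -- including stolen-key scenarios.
SIGNING_KEY = b"whoever-holds-this-can-sign"

def sign(package: bytes) -> bytes:
    """Hash the package and 'sign' the hash (HMAC as a PK-signature stand-in)."""
    digest = hashlib.sha256(package).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def verify(package: bytes, signature: bytes) -> bool:
    """Checks only that the key holder signed these exact bytes."""
    return hmac.compare_digest(sign(package), signature)

malware = b"format_your_disk()"
sig = sign(malware)           # the key signs malware just as happily
print(verify(malware, sig))   # prints True: "signed" does not mean "safe"
```

Verification succeeds for any bytes the key holder chose to sign, which is exactly why a valid signature says nothing about code quality or intent.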
@ Nick P,
"A capability system with controlled propagation can give out or revoke unforgeable capabilities"
In essence it is a good idea, however it fails to the "who watches the watchers" problem.
If the OS is not installed by you, or the loading process is somehow locked up (via code signing), then you as the "watcher" over apps can only have the choices the "watcher of watchers" OS / BIOS lets you have.
That is the issue with all trusted platforms they are double edged swords and as with any agnostic system there are plenty of religious arguments put up on both sides (and the bleeding edges as well ;)
The argument boils down to who gets the primary rights to what you have "purchased", not "leased", "hired" or borrowed.
The usual definition of "purchased" is to transfer the rights of ownership as well as the rights of possession. The other models only transfer in differing amounts the right of possession for prescribed use by the owner.
However the software industry following the lead of the book sellers and recorded performance industry came up with the idea of "licenced" to differentiate between the physical media you had purchased and the "intellectual property" impressed or encoded upon it.
It is to put it mildly a logical and legal minefield to sell a physical item for "purchase" with intellectual property to be "licenced" upon it.
Microsoft's way around it is to not actually sell you anything but to licence everything, and thus have a clause about returning the media to receive a nominal refund.
The legal issues are especially fraught where the technology allows uncontrolled re-recording (cassette tapes, photographs and other such technologies), or even allows the user to temporarily store information over which they or others might hold IP (i.e., who owns your game scores...).
In most places the legal brethren try to neatly sidestep the issues in one way or another rather than confront them head on.
This might be because the last time the IP fraternity went head on, the IP people got a bloody nose from the judiciary over audio recording tapes.
From what I can tell, Sony appear once again to be trying to mount some flank attack on the issue via a maze of twisty little passages that they are forcing the defendant and judiciary down.
The old-fashioned way to deal with this sort of nonsense was to hit back with a counter-suit that drags them out onto a different battleground, one not of their choosing, where they know they are most likely to lose.
Sadly Sony have deep pockets so it would be difficult to do.
@I'm Looking Through You: Yes, Linux support was one of the PS3's selling points, until Sony unilaterally removed it with an update, even though it had been an advertising point in the past. This is actually what drove the hackers to begin researching PS3 security, and it led to the massive security fallout for which they are now being criminalized.
I think that as control starts hurting users' confidence through surprises such as suddenly removing apps or features, the whole gatekeeper model will get less and less approval from customers. All it would take for a user to figure the issue out is for Apple to remove an app the user likes; that will teach him the problem with letting Apple or any company babysit him.
"In an enterprise environment, application whitelisting would be the better option, since we would know which applications are installed."
The problem with the "discussion" of white/black lists is that most people don't know what they're talking about.
How about a white list for home systems (for people who choose it)? Anything NOT on the white list is only allowed to run in a sandbox UNLESS specifically overridden by the admin (the computer owner in most circumstances).
Now, it is still possible for a vendor to get their software on the white list, even though it is a trojan. But the white list SHOULD be structured in such a way that removing said trojan should be easy because all of the files associated with it should be identified PRIOR to it going on the white list.
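The proposal above might be sketched like this (all application names and policy details are hypothetical): the whitelist grants full execution, the admin can override for specific apps, and everything else falls through to the sandbox.

```python
# Hypothetical home-system policy: default to the sandbox,
# with whitelist entries and explicit admin overrides running fully.

WHITELIST = {"editor.exe", "browser.exe"}   # vetted, removal-friendly entries
ADMIN_OVERRIDES = {"niche-tool.exe"}        # owner explicitly trusts these

def execution_mode(app: str) -> str:
    """Decide how an app may run under the proposed policy."""
    if app in WHITELIST or app in ADMIN_OVERRIDES:
        return "full"
    return "sandbox"  # default for anything unknown

print(execution_mode("editor.exe"))      # full: whitelisted
print(execution_mode("niche-tool.exe"))  # full: admin override
print(execution_mode("mystery.exe"))     # sandbox: unknown
```

Note the contrast with a plain blacklist: the unknown case degrades to a sandbox rather than to full execution, so a missed entry is contained instead of catastrophic.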
@ Brandioch Conner
Right now the only thing close to that are virtual appliances. Virtualization software providers are slowly ironing out the bugs. The Turaya Security Platform, Nizza Architecture from TU Dresden and QubesOS from Invisible Things are the only way to do this with any security assurance. QubesOS is probably the best option for out of the box functionality and ease of use.
A whitelist would work if the user has control over what gets added to the whitelist, like the SSL chain-of-trust model: a user may choose to accept "self-signed" certificates at his own risk, and trusted companies can delegate their trust.
Unlike Apple, which doesn't delegate its power, an infrastructure where you can add "root certificates" and delegate trust would certainly be the next step.
Blacklisting has a problem: the number of elements in that group can "grow" indiscriminately (and most of the time we can't identify *all* the elements), so it is easier to describe the group as "not whitelisted".
"Right now the only thing close to that are virtual appliances."
Actually, Microsoft's Software Restriction Policies (SRP) are close to that. Microsoft's AppLocker is an option if you're running Win7.
There are various 3rd party apps that do a better job.
So no, no virtualization needed.
The next frontier for white listing will be organizations controlling which applications are allowed on their networks and who can use them.
This has only become feasible in the last couple of years with Next Generation Firewalls.
The organization will have achieved success when its last firewall policy rule is, "If application is unknown, then block."
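That last-rule policy amounts to default deny: rules are evaluated in order, and anything the firewall could not identify falls through to a block. A rough sketch (rule names are illustrative, not from any real firewall product):

```python
# Ordered application-aware firewall policy; the implicit final rule
# blocks any application the firewall failed to identify.

RULES = [
    ("dns",        "allow"),
    ("https",      "allow"),
    ("bittorrent", "block"),
]

def decide(application: str) -> str:
    """Return the action for an identified application name."""
    for app, action in RULES:
        if application == app:
            return action
    return "block"  # "If application is unknown, then block."

print(decide("https"))            # allow: matched an explicit rule
print(decide("unknown-p2p-app"))  # block: fell through to the final rule
```

This is the whitelist default applied at the network layer: the explicit rules are the list, and unknown traffic gets the restrictive outcome.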
@ Brandioch Conner
Those are a nice risk management tool, but not secure by most standards. The issue is the Trusted Computing Base (TCB), or the totality of software that a given app must depend on to ensure proper functioning or security. If any portion of the TCB is compromised, then the app that trusts it may be compromised. Daniel J. Bernstein, author of the highly robust qmail MTA, pointed out in his "lessons learned" paper that reducing the size of the code, and especially the trusted code (TCB), is the best way to reduce defects.
So, let's look at the Windows TCB. We have the kernel, tons of trusted middleware/apps, many trusted files/libraries, etc. Windows has many MB of code in kernel mode, where an exploit is a game ender. All the trusted code probably amounts to tens of MB. Although Win7 has far fewer defects, the Windows platform has a long history of security vulnerabilities owning the whole system, and of weaknesses that are there by design, like shatter attacks and LAN Manager on by default. Legacy is a large part of their problem. Do you really trust a TCB with this track record of design and implementation defects?
The right way to do things, as the options I pointed out employ, is to minimize the TCB of the system in general and of applications specifically. QubesOS, based on Xen, is the weakest approach: the Xen kernel is about 50KB and very mature, then a small Linux distro is in the TCB for drivers, etc., then there's the hypercall interface. This is still over an order of magnitude decrease in attack surface.
As the Tanenbaum vs. Torvalds debate showed, microkernels are the best option for reducing TCB size and ensuring proper information flow. Most high-assurance OSes employ a microkernel, capabilities, decomposition, and careful implementation. The Micro-SINA VPN is an example of doing it right. SINA's security-critical code is usually hosted on Linux, with a TCB of over 500Kloc of complex code. Micro-SINA split it into four segments: Nizza trusted components & kernel; IPsec running directly on trusted components; an inside-facing, deprivileged Linux for transport; and the same thing facing outside. Capabilities ensure all info must flow through the IPsec component. Its TCB is 50Kloc of straightforward and robust code. That's a 10x lower attack surface.
Solutions exist to virtualize Windows on top of highly robust microkernels, drivers and middleware. These solutions all allow one to implement critical apps to run directly on the microkernel, carefully isolated from the rest of the system. Why trust the Windows TCB when we have aerospace-grade alternatives without any known vulnerabilities, excellent design, and a TCB that's 10-20 times smaller?
"The issue is the Trusted Computing Base (TCB)..."
In my first post on this thread, I remarked upon how most people had problems understanding them.
I posted about white lists and you're posting about TCB.
They are not the same.
Are you implying that the underlying code base required for an application would NOT be on the white list? The concept of Nick's post is that the surface area available to attack is the amount of trusted code on the system. Whether that base of code is defined in a white list or as part of a Trusted Computing platform is irrelevant.
That's exactly the point. It might not specifically be on a whitelist, but it's implied to be regardless. How else will the system work if Windows, .NET and all the utilities aren't whitelisted too?
@ Brandioch Conner
Let me illustrate with an example. The App policies say I can only use these apps and with these permissions. The policy manager runs on Windows. Windows has a flaw that allows privilege escalation and gives me a shell running System access. I turn off policy manager or just add my app to the policy. Whitelist fail.
Maybe your PC has FireWire, which uses DMA to bypass the OS. Unless you're running Intel's IOMMU virtualization, there are no restrictions. I plug into your FireWire port, directly access your RAM with freely available software, and create an app running with the highest privileges. I then use this rootkit to do whatever the hell I want, regardless of your whitelist.
So, could you please explain again how the effectiveness of your whitelisting scheme is unrelated to the trustworthiness of the software and hardware it depends on? Whitelisting will only work with a trustworthy, minimal TCB (hardware, kernel, and trusted services) without high risk of compromise. As it stands, a Windows TCB is no threat to a professional attacker. Even lay people routinely defeat Data Loss Prevention measures due to flaws in the Windows TCB. There's tutorials on the Internet...
"I turn off policy manager or just add my app to the policy. Whitelist fail."
Why do you say that is a failure?
As I had previously stated:
"How about a white list for home systems (for people who choose it)? Anything NOT on the white list is only allowed to run in a sand box UNLESS specifically overridden by the admin (computer owner in most circumstances)."
Again, most people do not understand white lists. They get them confused with other concepts. Such as TCB.
"So, could you please explain again how the effectiveness of your whitelisting scheme is unrelated to the trustworthiness of the software and hardware it depends on?"
Perhaps you should read the material that Bruce linked to. It is a discussion of white lists without TCB. Here's a hint: Bruce talks about the iPhone.
Apple vs. Android doesn't limit liberty, because Apple's is an opt-in white list system. I understand that Apple operates a white list and am free to choose whether or not to use an Apple product; I can change my mind at any time. I think this is a great example of free markets and of freedom in general: people are able to choose the type of environment they want to do business and operate in.
@ Brandioch Conner
I was wondering whether I should reply to this. I usually don't reply to trolls, but you might just be a human resources layman who got a few cheap online certifications thanks to Exam Cram books. I mean, it happens. So, one more post addressing your statements.
"most people don't understand whitelists"
Wikipedia has a nice broad definition: "A whitelist or approved list is a list or register of entities that, for one reason or another, are being provided a particular privilege, service, mobility, access or recognition."
In other words, access is denied by default unless the app is whitelisted; only then does it get privileges like writing files, reading confidential data, etc. Every whitelisting system has a piece of software that, in some way, determines whether an app is whitelisted. The whitelisting scheme fails if that enforcement mechanism fails, or if a flaw in some other privileged component gives a rogue app full privileges.
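The default-deny logic being described can be reduced to a few lines. This is only a toy sketch (the list, names, and hash values are all illustrative, not any real product's mechanism), but it makes the dependency visible: the check itself is just ordinary software running somewhere on the system.

```python
import hashlib

# Hypothetical approved list: SHA-256 hashes of known-good binaries.
# The hash below is sha256(b"test"), standing in for a real app.
APPROVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_whitelisted(executable_bytes: bytes) -> bool:
    """Default deny: allow execution only if the binary's hash is listed."""
    digest = hashlib.sha256(executable_bytes).hexdigest()
    return digest in APPROVED_HASHES
```

Note that the whole scheme rides on this function and the privileges protecting it: an attacker who can edit `APPROVED_HASHES`, or bypass the call entirely via a flaw in a more privileged component, defeats the whitelist without ever touching the list's contents.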
"here's a hint: Bruce talks about iPhone"
Good example. The iPhone uses the App Store as the source and digital signatures as the verification. The security of the whitelisting scheme depends on the software that checks those signatures and on any part of iOS that can execute arbitrary code. Whitelisting on the iPhone is a tremendous fail as far as security is concerned. This is proven with one phrase: jailbreak. Jailbreaking is exploiting the device to run unauthorized (read: not whitelisted) software without Apple's permission. There are many ways to do it, and in every case some part of the TCB (hardware, iOS, privileged iOS software) is used to defeat the security of the whitelist.
In other words, you proved my point. You said TCBs don't matter -- read: the software an app depends on to maintain its own security "doesn't matter." You then gave an example of a whitelisting scheme that's been defeated by perhaps millions of users. The Obfuscated C Contest suggests the App Store end could probably be beaten as well, if an app uses an esoteric algorithm that Apple hasn't banned. The status quo still stands: whitelisting is only as strong as the software and people it depends on to work. If any of them fail, so does the whitelisting system.
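The signature-check dependency described above can be sketched as follows. This is a toy model, not Apple's actual mechanism: real iOS code signing uses asymmetric certificates and kernel enforcement, while this uses an HMAC purely as a stand-in, and every name here is illustrative. The point it demonstrates is that the whitelist lives or dies with the loader that enforces it.

```python
import hashlib
import hmac

# Stand-in for the platform vendor's signing key (illustrative only).
VENDOR_KEY = b"vendor-secret"

def sign(binary: bytes) -> bytes:
    """Vendor-side: produce a signature over an approved binary."""
    return hmac.new(VENDOR_KEY, binary, hashlib.sha256).digest()

def loader(binary: bytes, signature: bytes, verify: bool = True) -> str:
    """OS-side loader: run only properly signed binaries.
    Disabling `verify` models a jailbreak -- the list is untouched,
    but the enforcement point in the TCB has been subverted."""
    if verify and not hmac.compare_digest(sign(binary), signature):
        return "refused"
    return "running"
```

With verification on, an unsigned binary is refused; with the loader subverted, the very same binary runs. The whitelist's contents never changed -- only the trustworthiness of the software enforcing it.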
"I usually don'[t reply to trolls, but you might just be a human resources layman who got a few cheap online certifications thanks to Exam Cram books."
How very nice of you to resort to personal attacks.
"Whitelisting on the iPhone is a tremendous fail as far as security is concerned."
And yet Bruce used the iPhone as an example of a product with a white list.
Did Bruce say that iPhones had anything to do with TCB? No, he did not.
I understood what Bruce was saying. Marcus appears to have understood what Bruce was saying. And Bruce seems to have understood what Bruce was saying.
As I posted earlier:
"Again, most people do not understand white lists. They get them confused with other concepts. Such as TCB."
For the average user, the closed system has been shown to provide more liberty than most open systems.
Windows lets you install anything, but users are rightfully afraid to do so.
Apple has always been about control, in part because Steve Jobs is a control freak.
I was an early adopter of Apple computers in the 1980s, and owned a number of them, but abandoned them for good in the mid-1990s because of their restricted hardware designs and their indifference to developing a robust OS while Windows NT existed.
Is there a sub-type of Whitelist, a "negligent Whitelist" or "false-positive Whitelist"?
I ask this because while Facebook is a model of a whitelist, it's not really a trustworthy one. Many Facebook apps do a lot of data mining on users' profiles. When confronted about this, Facebook claimed it screens each app to make sure it doesn't violate Facebook's Terms of Service, but avoided the real question.
However, because it's on Facebook and Facebook claims they screen apps (a whitelist), many users assume they are safe when in reality they're not.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.