Entries Tagged "Apple"


Details of Apple's Fingerprint Recognition

This is interesting:

Touch ID takes an 88×88, 500ppi scan of your finger and temporarily sends that data to a secure cache located near the RAM. After the data is vectorized and forwarded to the secure enclave (located on the top left of the A7, near the M7 processor), the raw scan is immediately discarded. The fingerprint scanner uses subdermal ridge flows (the inner layer of skin) to prevent loss of accuracy if you have micro cuts or debris on your finger.

With iOS 7.1.1, Apple now takes multiple scans of each position you place your finger in at setup, instead of a single one, and uses algorithms to predict potential errors that could arise in the future. Touch ID was supposed to gradually improve accuracy with every scan, but the problem was that if you didn’t scan well at setup, it would ruin your experience until you enrolled your finger again. iOS 7.1.1 not only removes that problem and increases accuracy, but also greatly reduces the calculations your iPhone 5S has to make while unlocking the device, which means you should get a much faster unlock time.

Posted on April 29, 2014 at 6:47 AM

Was the iOS SSL Flaw Deliberate?

Last October, I speculated on the best ways to go about designing and implementing a software backdoor. I suggested three characteristics of a good backdoor: low chance of discovery, high deniability if discovered, and minimal conspiracy to implement.

The critical iOS vulnerability that Apple patched last week is an excellent example. Look at the code. What caused the vulnerability is a single line of code: a second “goto fail;” statement. Since that second statement isn’t guarded by the conditional above it, it always executes, skipping the remaining verification steps and causing the procedure to report success.
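For readers who don’t want to dig through the patch, here is a minimal, self-contained C program that reproduces the structure of the bug. This is not Apple’s actual code—the real function is SSLVerifySignedServerKeyExchange() in SecureTransport’s sslKeyExchange.c—and the stub functions below are hypothetical stand-ins:

```c
#include <stdio.h>

/* Stand-ins for the real SecureTransport calls; both names are
 * hypothetical. The hash update succeeds; the signature check fails. */
static int hash_update(void)      { return 0;  }
static int verify_signature(void) { return -1; }

static int verify_server_key_exchange(void)
{
    int err;

    if ((err = hash_update()) != 0)
        goto fail;
        goto fail;  /* the duplicated line: unconditional, always taken */

    /* Everything below is now dead code -- including the only call
     * that actually checks the signature. */
    if ((err = verify_signature()) != 0)
        goto fail;

fail:
    /* err still holds 0 from the last successful call above, so the
     * function reports success without ever verifying the signature. */
    return err;
}

int main(void)
{
    printf("result: %d (0 means \"verified\")\n", verify_server_key_exchange());
    return 0;
}
```

The program prints 0: the function claims the signature verified even though verify_signature() was never called.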

The flaw is subtle, and hard to spot while scanning the code. It’s easy to imagine how this could have happened by error. And it would have been trivially easy for one person to add the vulnerability.

Was this done on purpose? I have no idea. But if I wanted to do something like this on purpose, this is exactly how I would do it.

EDITED TO ADD (2/27): If the Apple auditing system is any good, they would be able to trace this errant goto line not just to the source-code check-in details, but to the specific login that made the change. And they would quickly know whether this was just an error, or a deliberate change by a bad actor. Does anyone know what’s going on inside Apple?

EDITED TO ADD (2/27): Steve Bellovin has a pair of posts where he concludes that if this bug is enemy action, it’s fairly clumsy and unlikely to be the work of professionals.

Posted on February 27, 2014 at 6:03 AM

DROPOUTJEEP: NSA Exploit of the Day

Today’s item from the NSA’s Tailored Access Operations (TAO) group implant catalog:

DROPOUTJEEP

(TS//SI//REL) DROPOUTJEEP is a STRAITBIZARRE-based software implant for the Apple iPhone operating system that uses the CHIMNEYPOOL framework. DROPOUTJEEP is compliant with the FREEFLOW project and is therefore supported in the TURBULENCE architecture.

(TS//SI//REL) DROPOUTJEEP is a software implant for the Apple iPhone that utilizes modular mission applications to provide specific SIGINT functionality. This functionality includes the ability to remotely push/pull files from the device, SMS retrieval, contact list retrieval, voicemail, geolocation, hot mic, camera capture, cell tower location, etc. Command, control, and data exfiltration can occur over SMS messaging or a GPRS data connection. All communications with the implant will be covert and encrypted.

(TS//SI//REL) The initial release of DROPOUTJEEP will focus on installing the implant via close access methods. A remote installation capability will be pursued for a future release.

Unit Cost: $0

Status: (U) In development

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on February 12, 2014 at 2:06 PM

A Fraying of the Public/Private Surveillance Partnership

The public/private surveillance partnership between the NSA and corporate data collectors is starting to fray. The reason is sunlight. The publicity resulting from the Snowden documents has made companies think twice before allowing the NSA access to their users’ and customers’ data.

Pre-Snowden, there was no downside to cooperating with the NSA. If the NSA asked you for copies of all your Internet traffic, or to put backdoors into your security software, you could assume that your cooperation would forever remain secret. To be fair, not every corporation cooperated willingly. Some fought in court. But it seems that a lot of them, telcos and backbone providers especially, were happy to give the NSA unfettered access to everything. Post-Snowden, this is changing. Now that many companies’ cooperation has become public, they’re facing a PR backlash from customers and users who are upset that their data is flowing to the NSA. And this is costing those companies business.

How much is unclear. In July, right after the PRISM revelations, the Cloud Security Alliance reported that US cloud companies could lose $35 billion over the next three years, mostly due to losses of foreign sales. Surely that number has increased as outrage over NSA spying continues to build in Europe and elsewhere. There is no similar report for software sales, although I have attended private meetings where several large US software companies complained about the loss of foreign sales. On the hardware side, IBM is losing business in China. The US telecom companies are also suffering: AT&T is losing business worldwide.

This is the new reality. The rules of secrecy are different, and companies have to assume that their responses to NSA data demands will become public. This means there is now a significant cost to cooperating, and a corresponding benefit to fighting.

Over the past few months, more companies have woken up to the fact that the NSA is basically treating them as adversaries, and are responding as such. In mid-October, it became public that the NSA was collecting e-mail address books and buddy lists from Internet users logging into different service providers. Yahoo, which didn’t encrypt those user connections by default, allowed the NSA to collect much more of its data than Google, which did. That same day, Yahoo announced that it would implement SSL encryption by default for all of its users. Two weeks later, when it became public that the NSA was collecting data on Google users by eavesdropping on the company’s trunk connections between its data centers, Google announced that it would encrypt those connections.

We recently learned that Yahoo fought a government order to turn over data. Lavabit fought its order as well. Apple is now tweaking the government. And we think better of those companies because of it.

Now Lavabit, which closed down its e-mail service rather than comply with the NSA’s request for the master keys that would compromise all of its customers, has teamed with Silent Circle to develop a secure e-mail standard that is resistant to these kinds of tactics.

The Snowden documents made it clear how much the NSA relies on corporations to eavesdrop on the Internet. The NSA didn’t build a massive Internet eavesdropping system from scratch. It noticed that the corporate world was already eavesdropping on every Internet user—surveillance is the business model of the Internet, after all—and simply got copies for itself.

Now, that secret ecosystem is breaking down. Supreme Court Justice Louis Brandeis wrote about transparency, saying “Sunlight is said to be the best of disinfectants.” In this case, it seems to be working.

These developments will only help security. Remember that while Edward Snowden has given us a window into the NSA’s activities, these sorts of tactics are probably also used by other intelligence services around the world. And today’s secret NSA programs become tomorrow’s PhD theses, and the next day’s criminal hacker tools. It’s impossible to build an Internet where the good guys can eavesdrop, and the bad guys cannot. We have a choice between an Internet that is vulnerable to all attackers, or an Internet that is safe from all attackers. And a safe and secure Internet is in everyone’s best interests, including the US’s.

This essay previously appeared on TheAtlantic.com.

Posted on November 14, 2013 at 6:21 AM

iPhone Sensor Surveillance

The new iPhone has a motion sensor chip, and that opens up new opportunities for surveillance:

The M7 coprocessor introduces functionality that some may instinctively identify as “creepy.” Even Apple’s own description hints at eerie omniscience: “M7 knows when you’re walking, running, or even driving…” While it’s quietly implemented within iOS, it’s not hidden from third-party apps (which require an opt-in through a pop-up notification, and are managed through the phone’s Privacy settings). But as we know, most users blindly accept these permissions.

It all comes down to a question of agency in tracking our physical bodies.

The fact that my Fitbit tracks activity without matching it up with all my other data sources, like GPS location or my calendar, is comforting. These data silos can sometimes be frustrating when I want to query across my QS datasets, but the built-in divisions between data about my body—and data about the rest of my digital life—leave room for my intentional inquiry and interpretation.

Posted on October 16, 2013 at 7:33 AM

Apple's iPhone Fingerprint Reader Successfully Hacked

Nice hack from the Chaos Computer Club:

The method follows the steps outlined in this how-to, with materials that can be found in almost every household: First, the fingerprint of the enrolled user is photographed at 2400 dpi. The resulting image is then cleaned up, inverted, and laser-printed at 1200 dpi onto a transparent sheet with a thick toner setting. Finally, pink latex milk or white wood glue is smeared into the pattern created by the toner on the transparent sheet. After it cures, the thin latex sheet is lifted off, breathed on to make it a tiny bit moist, and then placed onto the sensor to unlock the phone. This process has been used, with minor refinements and variations, against the vast majority of fingerprint sensors on the market.

I’m not surprised. In my essay on Apple’s technology, I wrote: “I’m sure that someone with a good enough copy of your fingerprint and some rudimentary materials engineering capability—or maybe just a good enough printer—can authenticate his way into your iPhone.”

I don’t agree with CCC’s conclusion, though:

“We hope that this finally puts to rest the illusions people have about fingerprint biometrics. It is plain stupid to use something that you can’t change and that you leave everywhere every day as a security token,” said Frank Rieger, spokesperson of the CCC. “The public should no longer be fooled by the biometrics industry with false security claims. Biometrics is fundamentally a technology designed for oppression and control, not for securing everyday device access.”

Apple is trying to balance security with convenience. This is a cell phone, not an ICBM launcher or even a bank-account withdrawal device. Apple is offering an option to replace a four-digit PIN—something that a lot of iPhone users don’t even bother with—with a fingerprint. Despite its drawbacks, I think it’s a good trade-off for a lot of people.

EDITED TO ADD (10/13): The print for the CCC hack was lifted from the iPhone.

Posted on September 24, 2013 at 9:20 AM

iPhone Fingerprint Authentication

When Apple bought AuthenTec for its biometrics technology—reported as one of its most expensive purchases—there was a lot of speculation about how the company would incorporate biometrics in its product line. Many speculate that the new Apple iPhone to be announced tomorrow will come with a fingerprint authentication system, and there are several ways it could work, such as swiping your finger over a slit-sized reader to have the phone recognize you.

Apple would be smart to add biometric technology to the iPhone. Fingerprint authentication is a good balance between convenience and security for a mobile device.

Biometric systems are seductive, but the reality isn’t that simple. They have complicated security properties. For example, they are not keys. Your fingerprint isn’t a secret; you leave it everywhere you touch.

And fingerprint readers have a long history of vulnerabilities as well. Some are better than others. The simplest ones just check the ridges of a finger; some of those can be fooled with a good photocopy. Others check for pores as well. The better ones verify pulse, or finger temperature. Fooling them with rubber fingers is harder, but often possible. A Japanese researcher had good luck doing this over a decade ago with the gelatin mixture that’s used to make Gummi bears.
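To make that layering concrete, here is a hedged sketch in C—hypothetical checks and names, not any vendor’s actual pipeline—of how a reader might stack these tests:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, simplified model of the layered checks described
 * above; real sensors and matchers are far more involved. */
struct finger_sample {
    bool ridges_match;   /* basic ridge-pattern comparison      */
    bool pores_match;    /* finer pore-level detail             */
    bool pulse_present;  /* liveness: a pulse was detected      */
    bool temp_plausible; /* liveness: skin-temperature range    */
};

/* level 1: ridges only; level 2: adds pores; level 3: adds liveness */
bool reader_accepts(const struct finger_sample *s, int level)
{
    if (!s->ridges_match)
        return false;               /* the simplest readers stop here */
    if (level >= 2 && !s->pores_match)
        return false;
    if (level >= 3 && !(s->pulse_present && s->temp_plausible))
        return false;               /* defeats a photocopy; a rubber
                                       finger can still sometimes pass */
    return true;
}

int main(void)
{
    struct finger_sample photocopy = { .ridges_match = true };
    printf("level 1 accepts photocopy: %d\n", reader_accepts(&photocopy, 1)); /* 1 */
    printf("level 3 accepts photocopy: %d\n", reader_accepts(&photocopy, 3)); /* 0 */
    return 0;
}
```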

The best system I’ve ever seen was at the entry gates of a secure government facility. Maybe you could have fooled it with a fake finger, but a Marine guard with a big gun was making sure you didn’t get the opportunity to try. Disney World uses a similar system at its park gates—but without the Marine guards.

A biometric system that authenticates you and you alone is easier to design than a biometric system that is supposed to identify unknown people. That is, the question “Is this the finger belonging to the owner of this iPhone?” is a much easier question for the system to answer than “Whose finger is this?”

There are two ways an authentication system can fail. It can mistakenly allow an unauthorized person access, or it can mistakenly deny access to an authorized person. In any consumer system, the second failure is far worse than the first. Yes, it can be problematic if an iPhone fingerprint system occasionally allows someone else access to your phone. But it’s much worse if you can’t reliably access your own phone—you’d junk the system after a week.
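A toy calculation makes the trade-off visible. The score distributions below are invented purely for illustration; the point is only that moving the match threshold trades one failure mode against the other:

```c
#include <stdio.h>

/* Invented score distributions: the fraction of attempts whose match
 * score exceeds threshold t, for the enrolled owner and for impostors. */
static double owner_pass_rate(double t)    { return 1.00 - 0.20 * t; }
static double impostor_pass_rate(double t) { return 0.10 * (1.0 - t); }

int main(void)
{
    for (int i = 0; i < 4; i++) {
        double t = 0.3 + 0.2 * i;
        double false_reject = 1.0 - owner_pass_rate(t); /* owner locked out */
        double false_accept = impostor_pass_rate(t);    /* stranger let in  */
        printf("threshold %.1f: false-accept %.3f, false-reject %.3f\n",
               t, false_accept, false_reject);
    }
    return 0;
}
```

Raising the threshold lowers the false-accept rate but raises the false-reject rate; a consumer device will pick a threshold well toward the forgiving end.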

If it’s true that Apple’s new iPhone will have biometric security, the designers have presumably erred on the side of ensuring that the user can always get in. Failures will be more common in cold weather, when your fingers are shriveled from the shower, and so on. But there will certainly still be the traditional PIN system to fall back on.

So…can biometric authentication be hacked?

Almost certainly. I’m sure that someone with a good enough copy of your fingerprint and some rudimentary materials engineering capability—or maybe just a good enough printer—can authenticate his way into your iPhone. But, honestly, if some bad guy has your iPhone and your fingerprint, you’ve probably got bigger problems to worry about.

The final problem with biometric systems is the database. If the system is centralized, there will be a large database of biometric information that’s vulnerable to hacking. A system by Apple will almost certainly be local—you authenticate yourself to the phone, not to any network—so there’s no requirement for a centralized fingerprint database.
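In code, the local design is simply that enrollment and matching both happen on the device, and the template is never transmitted. A minimal sketch (all names hypothetical, with byte equality standing in for real feature matching):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define TEMPLATE_LEN 512

/* The enrolled template lives only in device storage; there is no
 * network call anywhere in this design, hence no central database
 * to steal. */
static unsigned char enrolled_template[TEMPLATE_LEN];

void enroll(const unsigned char scan[TEMPLATE_LEN])
{
    memcpy(enrolled_template, scan, TEMPLATE_LEN);
}

bool authenticate(const unsigned char scan[TEMPLATE_LEN])
{
    /* A real matcher compares extracted features with a tolerance;
     * exact byte comparison is a placeholder for that step. */
    return memcmp(enrolled_template, scan, TEMPLATE_LEN) == 0;
}

int main(void)
{
    unsigned char scan[TEMPLATE_LEN] = {1, 2, 3};
    enroll(scan);
    printf("match: %d\n", authenticate(scan)); /* prints 1 */
    return 0;
}
```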

Apple’s move is likely to bring fingerprint readers into the mainstream. But all applications are not equal. It’s fine if your fingers unlock your phone. It’s a different matter entirely if your fingerprint is used to authenticate your iCloud account. The centralized database required for that application would create an enormous security risk.

This essay previously appeared on Wired.com.

EDITED TO ADD: The new iPhone does have a fingerprint reader.

Posted on September 11, 2013 at 6:43 AM

Restoring Trust in Government and the Internet

In July 2012, responding to allegations that the video-chat service Skype—owned by Microsoft—was changing its protocols to make it possible for the government to eavesdrop on users, Corporate Vice President Mark Gillett took to the company’s blog to deny it.

Turns out that wasn’t quite true.

Or at least he—or the company’s lawyers—carefully crafted a statement that could be defended as true while completely deceiving the reader. You see, Skype wasn’t changing its protocols to make it possible for the government to eavesdrop on users, because the government was already able to eavesdrop on users.

At a Senate hearing in March, Director of National Intelligence James Clapper assured the committee that his agency didn’t collect data on hundreds of millions of Americans. He was lying, too. He later defended his lie by inventing a new definition of the word “collect,” an excuse that didn’t even pass the laugh test.

As Edward Snowden’s documents reveal more about the NSA’s activities, it’s becoming clear that we can’t trust anything anyone official says about these programs.

Google and Facebook insist that the NSA has no “direct access” to their servers. Of course not; the smart way for the NSA to get all the data is through sniffers.

Apple says it’s never heard of PRISM. Of course not; that’s the internal name of the NSA database. Companies are publishing reports purporting to show how few requests for customer-data access they’ve received, a meaningless number when a single Verizon request can cover all of their customers. The Guardian reported that Microsoft secretly worked with the NSA to subvert the security of Outlook, something it carefully denies. Even President Obama’s justifications and denials are phrased with the intent that the listener will take his words very literally and not wonder what they really mean.

NSA Director Gen. Keith Alexander has claimed that the NSA’s massive surveillance and data mining programs have helped stop more than 50 terrorist plots, 10 inside the U.S. Do you believe him? I think it depends on your definition of “helped.” We’re not told whether these programs were instrumental in foiling the plots or whether they just happened to be of minor help because the data was there. It also depends on your definition of “terrorist plots.” An examination of plots that the FBI claims to have foiled since 9/11 reveals that would-be terrorists have commonly been delusional, and most have been egged on by FBI undercover agents or informants.

Left alone, few were likely to have accomplished much of anything.

Both government agencies and corporations have cloaked themselves in so much secrecy that it’s impossible to verify anything they say; revelation after revelation demonstrates that they’ve been lying to us regularly and tell the truth only when there’s no alternative.

There’s much more to come. Right now, the press has published only a tiny percentage of the documents Snowden took with him. And Snowden’s files are only a tiny percentage of the number of secrets our government is keeping, awaiting the next whistle-blower.

Ronald Reagan once said “trust but verify.” That works only if we can verify. In a world where everyone lies to us all the time, we have no choice but to trust blindly, and we have no reason to believe that anyone is worthy of blind trust. It’s no wonder that most people are ignoring the story; it’s just too much cognitive dissonance to try to cope with it.

This sort of thing can destroy our country. Trust is essential in our society. And if we can’t trust either our government or the corporations that have intimate access into so much of our lives, society suffers. Study after study demonstrates the value of living in a high-trust society and the costs of living in a low-trust one.

Rebuilding trust is not easy, as anyone who has betrayed or been betrayed by a friend or lover knows, but the path involves transparency, oversight and accountability. Transparency first involves coming clean. Not a little bit at a time, not only when you have to, but complete disclosure about everything. Then it involves continuing disclosure. No more secret rulings by secret courts about secret laws. No more secret programs whose costs and benefits remain hidden.

Oversight involves meaningful constraints on the NSA, the FBI and others. This will be a combination of things: a court system that acts as a third-party advocate for the rule of law rather than a rubber-stamp organization, a legislature that understands what these organizations are doing and regularly debates requests for increased power, and vibrant public-sector watchdog groups that analyze and debate the government’s actions.

Accountability means that those who break the law, lie to Congress or deceive the American people are held accountable. The NSA has gone rogue, and while it’s probably not possible to prosecute people for what they did under the enormous veil of secrecy it currently enjoys, we need to make it clear that this behavior will not be tolerated in the future. Accountability also means voting, which means voters need to know what our leaders are doing in our name.

This is the only way we can restore trust. A market economy doesn’t work unless consumers can make intelligent buying decisions based on accurate product information. That’s why we have agencies like the FDA, truth-in-packaging laws and prohibitions against false advertising.

In the same way, democracy can’t work unless voters know what the government is doing in their name. That’s why we have open-government laws. Secret courts making secret rulings on secret laws, and companies flagrantly lying to consumers about the insecurity of their products and services, undermine the very foundations of our society.

Since the Snowden documents became public, I have been receiving e-mails from people seeking advice on whom to trust. As a security and privacy expert, I’m expected to know which companies protect their users’ privacy and which encryption programs the NSA can’t break. The truth is, I have no idea. No one outside the classified government world does. I tell people that they have no choice but to decide whom they trust and to then trust them as a matter of faith. It’s a lousy answer, but until our government starts down the path of regaining our trust, it’s the only thing we can do.

This essay originally appeared on CNN.com.

EDITED TO ADD (8/7): Two more links describing how the US government lies about NSA surveillance.

Posted on August 7, 2013 at 6:29 AM

How Apple Continues to Make Security Invisible

Interesting article:

Apple is famously focused on design and human experience as their top guiding principles. When it comes to security, that focus created a conundrum. Security is all about placing obstacles in the way of attackers, but (despite the claims of security vendors) those same obstacles can get in the way of users, too.

[…]

For many years, Apple tended to choose good user experience at the expense of leaving users vulnerable to security risks. That strategy worked for a long time, in part because Apple’s comparatively low market share made its products less attractive targets. But as Apple products began to gain in popularity, many of us in the security business wondered how Apple would adjust its security strategies to its new position in the spotlight.

As it turns out, the company not only handled that change smoothly, it has embraced it. Despite a rocky start, Apple now applies its impressive design sensibilities to security, playing the game its own way and in the process changing our expectations for security and technology.

EDITED TO ADD (7/11): iOS security white paper.

Posted on July 5, 2013 at 1:33 PM

