Entries Tagged "authorization"


Forging Australian Driver’s Licenses

The New South Wales digital driver’s license has multiple implementation flaws that allow for easy forgeries.

This file is encrypted using AES-256-CBC encryption combined with Base64 encoding.

A 4-digit application PIN (which gets set during the initial onboarding when a user first instals the application) is the encryption password used to protect or encrypt the licence data.

The problem here is that an attacker who has access to the encrypted licence data (whether that be through accessing a phone backup, direct access to the device or remote compromise) could easily brute-force this 4-digit PIN by using a script that would try all 10,000 combinations….

[…]

The second design flaw that is favourable for attackers is that the Digital Driver Licence data is never validated against the back-end authority which is the Service NSW API/database.

This means that the application has no native method to validate the Digital Driver Licence data that exists on the phone and thus cannot perform further actions such as warn users when this data has been modified.

As the Digital Licence is stored on the client’s device, validation should take place to ensure the local copy of the data actually matches the Digital Driver’s Licence data that was originally downloaded from the Service NSW API.

As this verification does not take place, an attacker is able to display the edited data on the Service NSW application without any preventative factors.

There’s a lot more in the blog post.
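To make the first flaw concrete, here is a minimal sketch of the 10,000-combination search described above. The key derivation (PBKDF2-SHA256), the IV-prefixed blob layout, and the JSON plaintext check are illustrative assumptions, not details of the actual Service NSW app:

```python
# Illustrative only: brute-forcing a 4-digit PIN protecting a
# Base64-encoded, AES-256-CBC-encrypted blob. The KDF, blob layout,
# and plaintext format are assumptions, not the real app's design.
import base64
import hashlib

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def brute_force_pin(blob_b64: str, salt: bytes) -> str | None:
    data = base64.b64decode(blob_b64)
    iv, ciphertext = data[:16], data[16:]            # assumed layout: IV || ciphertext
    for pin in (f"{i:04d}" for i in range(10_000)):  # 0000 through 9999
        # Assumed KDF: PBKDF2-HMAC-SHA256 stretching the PIN to a 32-byte AES-256 key.
        key = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 10_000, dklen=32)
        decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
        plaintext = decryptor.update(ciphertext) + decryptor.finalize()
        # Heuristic success check: valid PKCS#7 padding and JSON-looking plaintext.
        pad = plaintext[-1]
        if 1 <= pad <= 16 and plaintext.endswith(bytes([pad]) * pad):
            if plaintext[:-pad].lstrip().startswith(b"{"):
                return pin
    return None
```

Even with a deliberately slow key-derivation function, 10,000 candidates is nothing; a four-digit PIN simply doesn't have enough entropy to protect data at rest.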
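The missing check behind the second flaw is conceptually simple: compare what's stored on the phone against what the back end originally issued. Something along these lines, where the endpoint and response format are hypothetical:

```python
# Illustrative only: verify the locally stored licence blob against the
# issuing back end. The URL and response shape here are hypothetical.
import hashlib

import requests


def licence_matches_server(local_blob: bytes, licence_id: str) -> bool:
    local_digest = hashlib.sha256(local_blob).hexdigest()
    # Hypothetical endpoint returning the digest of the licence as originally issued.
    resp = requests.get(f"https://licence-api.example/licences/{licence_id}/digest", timeout=10)
    resp.raise_for_status()
    return resp.json()["sha256"] == local_digest
```

A server-side digital signature over the licence data would be stronger still, since it can also be verified offline.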

Posted on May 23, 2022 at 6:09 AM

Debit Card Override Hack

Clever:

Parrish allegedly visited Apple Stores and tried to buy products with four different debit cards, which were all closed by his respective financial institutions. When his debit card was inevitably declined by the Apple Store, he would protest and offer to call his bank—except, he wasn’t really calling his bank.

So, the complaint says, he would offer the Apple Store employees a fake authorization code with a certain number of digits, which is normally provided by credit card issuers to create a record of the credit or debit override.

Now that this trick is public, how long before stores stop accepting these authorization codes altogether? I’ll bet that fixing the infrastructure will be expensive.

Posted on July 31, 2014 at 6:55 AM

Human-Machine Trust Failures

I jacked a visitor’s badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they’re enabled when you check in at building security. You’re supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.

I kept the badge. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

The person after me had problems, though. Some part of the system knew something was wrong, and wouldn’t let her out. Eventually, the guard had to manually override something.

My point in telling this story is not to demonstrate how I beat the EEOB’s security—I’m sure the badge was quickly deactivated and showed up in some missing-badge log next to my name—but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn’t adequately explain it to the guards. The guards knew it but didn’t know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.

In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state—how the system can fail and what those failures look like. The machine must be able to communicate its state and have the capacity to alert the humans when an expected state transition doesn’t happen as expected. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time—that requires understanding on both parts. Each time things go wrong, and the machine portion doesn’t communicate well, the human portion trusts it a little less.

This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system—especially the machine part—better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We’ve all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: “The computer is always right.”

Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they’re easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they’re bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That’s why “if you see something, say something” fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.

The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will be necessarily more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That’s the best way to give humans the trust and understanding they need in the machine part of any security system.

This essay previously appeared in IEEE Security & Privacy.

Posted on September 5, 2013 at 8:32 AM

Real-World Access Control

Access control is difficult in an organizational setting. On one hand, every employee needs enough access to do his job. On the other hand, every time you give an employee more access, there’s more risk: he could abuse that access, or lose information he has access to, or be socially engineered into giving that access to a malfeasant. So a smart, risk-conscious organization will give each employee the exact level of access he needs to do his job, and no more.

Over the years, there’s been a lot of work put into role-based access control (RBAC). But despite the large number of academic papers and high-profile security products, most organizations don’t implement it—at all—with the predictable security problems as a result.

Regularly we read stories of employees abusing their database access-control privileges for personal reasons: medical records, tax records, passport records, police records. NSA eavesdroppers spy on their wives and girlfriends. Departing employees take corporate secrets.

A spectacular access control failure occurred in the UK in 2007. An employee of Her Majesty’s Revenue & Customs had to send a couple of thousand sample records from a database on all children in the country to the National Audit Office. But it was easier for him to copy the entire database of 25 million people onto a couple of discs and put them in the mail than it was to select out just the records needed. Unfortunately, the discs got lost in the mail, and the story was a huge embarrassment for the government.

Eric Johnson at Dartmouth’s Tuck School of Business has been studying the problem, and his results won’t startle anyone who has thought about it at all. RBAC is very hard to implement correctly. Organizations generally don’t even know who has what role. The employee doesn’t know, the boss doesn’t know—and these days the employee might have more than one boss—and senior management certainly doesn’t know. There’s a reason RBAC came out of the military; in that world, command structures are simple and well-defined.

Even worse, employees’ roles change all the time—Johnson chronicled one business group of 3,000 people that made 1,000 role changes in just three months—and it’s often not obvious what information an employee needs until he actually needs it. And information simply isn’t that granular. Just as it’s much easier to give someone access to an entire file cabinet than to only the particular files he needs, it’s much easier to give someone access to an entire database than only the particular records he needs.

This means that organizations either over-entitle or under-entitle employees. But since getting the job done is more important than anything else, organizations tend to over-entitle. Johnson estimates that 50 percent to 90 percent of employees are over-entitled in large organizations. In the uncommon instance where an employee needs access to something he normally doesn’t have, there’s generally some process for him to get it. And access is almost never revoked once it’s been granted. In large formal organizations, Johnson was able to predict how long an employee had worked there based on how much access he had.

Clearly, organizations can do better. Johnson’s current work involves building access-control systems with easy self-escalation, audit to make sure that power isn’t abused, violation penalties (Intel, for example, issues “speeding tickets” to violators), and compliance rewards. His goal is to implement incentives and controls that manage access without making people too risk-averse.
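As a rough illustration of the self-escalation-plus-audit idea (everything below is hypothetical, not Johnson's actual system): access is granted immediately when an employee asks for it, but every grant is temporary and every request is logged for later review.

```python
# Toy sketch of self-escalation with an audit trail; the names and
# data model are hypothetical, not taken from Johnson's work.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class EntitlementStore:
    granted: dict[str, set[str]]                                      # user -> resources
    escalations: dict[tuple[str, str], datetime] = field(default_factory=dict)
    audit_log: list[tuple[datetime, str, str, str]] = field(default_factory=list)

    def check(self, user: str, resource: str) -> bool:
        if resource in self.granted.get(user, set()):
            return True
        expiry = self.escalations.get((user, resource))
        return expiry is not None and expiry > datetime.now()

    def self_escalate(self, user: str, resource: str, reason: str) -> None:
        # Grant access right away (getting the job done comes first),
        # but make it temporary and leave a record for auditors.
        self.escalations[(user, resource)] = datetime.now() + timedelta(hours=8)
        self.audit_log.append((datetime.now(), user, resource, reason))
```

The audit log, not the access check, is where abuse gets caught; the “speeding tickets” come afterward.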

In the end, a perfect access control system just isn’t possible; organizations are simply too chaotic for it to work. And any good system will allow a certain number of access control violations, if they’re made in good faith by people just trying to do their jobs. The “speeding ticket” analogy is better than it looks: we post limits of 55 miles per hour, but generally don’t start ticketing people unless they’re going over 70.

This essay previously appeared in Information Security, as part of a point/counterpoint with Marcus Ranum. You can read Marcus’s response here—after you answer some nosy questions to get a free account.

Posted on September 3, 2009 at 12:54 PM

Second SHB Workshop Liveblogging (8)

The penultimate session of the conference was “Privacy,” moderated by Tyler Moore.

Alessandro Acquisti, Carnegie Mellon University (suggested reading: What Can Behavioral Economics Teach Us About Privacy?; Privacy in Electronic Commerce and the Economics of Immediate Gratification), presented research on how people value their privacy. He started by listing a variety of cognitive biases that affect privacy decisions: illusion of control, overconfidence, optimism bias, endowment effect, and so on. He discussed two experiments. The first demonstrated a “herding effect”: if a subject believes that others reveal sensitive behavior, the subject is more likely to also reveal sensitive behavior. The second examined the “frog effect”: do privacy intrusions alert or desensitize people to revealing personal information? What he found is that people tend to set their privacy level at the beginning of a survey, and don’t respond well to being asked easy questions at first and then sensitive questions at the end. In the discussion, Joe Bonneau asked him about the notion that people’s privacy protections tend to ratchet up over time; he didn’t have conclusive evidence, but gave several possible explanations for the phenomenon.

Adam Joinson, University of Bath (suggested reading: Privacy, Trust and Self-Disclosure Online; Privacy concerns and privacy actions), also studies how people value their privacy. He talked about expressive privacy—privacy that allows people to express themselves and form interpersonal relationships. His research showed that differences between how people use Facebook in different countries depend on how much people trust Facebook as a company, rather than how much people trust other Facebook users. Another study looked at posts from Secret Tweet and Twitter. He found 16 markers that allowed him to automatically determine which tweets contain sensitive personal information and which do not, with high probability. Then he tried to determine if people with large Twitter followings post fewer secrets than people who are only twittering to a few people. He found absolutely no difference.

Peter Neumann, SRI (suggested reading: Holistic systems; Risks; Identity and Trust in Context), talked about lack of medical privacy (too many people have access to your data), about voting (the privacy problem makes the voting problem a lot harder, and the end-to-end voting security/privacy problem is much harder than just securing voting machines), and privacy in China (the government is requiring all computers sold in China to be sold with software allowing them to eavesdrop on the users). Any would-be solution needs to reflect the ubiquity of the threat. When we design systems, we need to anticipate what the privacy problems will be. Privacy problems are everywhere you look, and ordinary people have no idea of the depth of the problem.

Eric Johnson, Dartmouth College (suggested reading: Access Flexibility with Escalation and Audit; Security through Information Risk Management), studies the information access problem from a business perspective. He’s been doing field studies in companies like retail banks and investment banks, and found that role-based access control fails because companies can’t determine who has what role. Even worse, roles change quickly, especially in large complex organizations. For example, one business group of 3000 people experiences 1000 role changes within three months. The result is that organizations do access control badly, either over-entitling or under-entitling people. But since getting the job done is the most important thing, organizations tend to over-entitle: give people more access than they need. His current work is to find the right set of incentives and controls to set access more properly. The challenge is to do this without making people risk averse. In the discussion, he agreed that a perfect access control system is not possible, and that organizations should probably allow a certain amount of access control violations—similar to the idea of posting a 55 mph speed limit but not ticketing people unless they go over 70 mph.

Christine Jolls, Yale Law School (suggested reading: Rationality and Consent in Privacy Law, Employee Privacy), made the point that people regularly share their most private information with their intimates—so privacy is not about secrecy, it’s more about control. There are moments when people make pretty big privacy decisions. For example, they grant employers the rights to monitor their e-mail, or test their urine without notice. In general, courts hold that blanket signing away of privacy rights—”you can test my urine on any day in the future”—is not valid, but immediate signing away of privacy rights—”you can test my urine today”—is. Jolls believes that this is reasonable for several reasons, such as optimism bias and an overfocus on the present at the expense of the future. Without realizing it, the courts have implemented the system that behavioral economics would find optimal. During the discussion, she talked about how coercion figures into this; the U.S. legal system tends not to be concerned with it.

Andrew Adams, University of Reading (suggested reading: Regulating CCTV), also looks at attitudes of privacy on social networking services. His results are preliminary, and based on interviews with university students in Canada, Japan, and the UK, and are very concordant with what danah boyd and Joe Bonneau said earlier. From the UK: People join social networking sites to increase their level of interaction with people they already know in real life. Revealing personal information is okay, but revealing too much is bad. Even more interestingly, it’s not okay to reveal more about others than they reveal themselves. From Japan: People are more open to making friends online. There’s more anonymity. It’s not okay to reveal information about others, but “the fault of this lies as much with the person whose data was revealed in not choosing friends wisely.” This victim responsibility is a common theme with other privacy and security elements in Japan. Data from Canada is still being compiled.

Great phrase: the “laundry belt”—close enough for students to go home on weekends with their laundry, but far enough away so they don’t feel as if their parents are looking over their shoulder—typically two hours by public transportation (in the UK).

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 3:01 PM

How to Steal the Empire State Building

A reporter managed to file legal papers, transferring ownership of the Empire State Building to himself. Yes, it’s a stunt:

The office of the city register, upon receipt of the phony documents prepared by the newspaper, transferred ownership of the 102-story building from Empire State Land Associates to Nelots Properties, LLC. Nelots is “stolen” spelled backward.

To further enhance the absurdity of the heist, included on the bogus paperwork were original “King Kong” star Fay Wray as witness and Willie Sutton, the notorious bank robber, as the notary.

Still, this sort of thing has been used to commit fraud in the past, and will continue to be a source of fraud in the future. The problem is that there isn’t enough integrity checking to ensure that the person who is “selling” the real estate is actually the person who owns it.

Posted on December 15, 2008 at 12:23 PM

Licensing Boaters

The U.S. Coast Guard is talking about licensing boaters. It’s being talked about as an antiterrorism measure, in typical incoherent ways:

The United States already has endured terrorism using small civilian craft, albeit overseas: In 2000, suicide bombers in the port of Aden, Yemen, used an inflatable boat to blow themselves up next to the U.S. Navy destroyer USS Cole, killing 17 sailors and wounding 39 others.

Terrorism experts point to other ways small boats potentially could assist in attacks: for example, a speedboat could deposit saboteurs at the outlet pipes of a nuclear power plant, or hijackers aboard a cruise ship. In a nightmare scenario, suicide bombers in a crowded harbor could use small watercraft to detonate a tanker carrying ultra-volatile liquefied natural gas, causing a powerful explosion that could kill thousands.

And how exactly is licensing watercraft supposed to help?

There are lots of good reasons to license boats and boaters, just as there are to license cars and drivers. But counterterrorism is not one of them.

Posted on January 4, 2007 at 2:35 PM

Stealing Credit Card Information off Phone Lines

Here’s a sophisticated credit card fraud ring that intercepted credit card authorization calls in Phuket, Thailand.

The fraudsters loaded this data onto MP3 players, which they sent to accomplices in neighbouring Malaysia. Cloned credit cards were manufactured in Malaysia and sent back to Thailand, where they were used to fraudulently purchase goods and services.

It’s 2006 and those merchant terminals still don’t encrypt their communications?

Posted on August 15, 2006 at 6:19 AM

Updating the Traditional Security Model

On the Firewall Wizards mailing list last year, Dave Piscitello made a fascinating observation. Commenting on the traditional four-step security model:

Authentication (who are you)
Authorization (what are you allowed to do)
Availability (is the data accessible)
Authenticity (is the data intact)

Piscitello said:

This model is no longer sufficient because it does not include asserting the trustworthiness of the endpoint device from which a (remote) user will authenticate and subsequently access data. Network admission and endpoint control are needed to determine that the device is free of malware (esp. key loggers) before you even accept a keystroke from a user. So let’s prepend “admissibility” to your list, and come up with a 5-legged stool, or call it the Pentagon of Trust.

He’s 100% right.
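As a rough sketch, the five checks form an ordered gate, with admissibility evaluated before anything else, just as Piscitello argues. The check functions below are placeholders, not a real API:

```python
# Illustrative only: the five-check model as an ordered gate.
from typing import Callable

Check = Callable[[], bool]


def allow_access(admissibility: Check, authentication: Check,
                 authorization: Check, availability: Check,
                 authenticity: Check) -> bool:
    # Evaluate in order; a compromised endpoint fails before a single
    # keystroke from the user is accepted.
    return all(check() for check in (admissibility, authentication,
                                     authorization, availability,
                                     authenticity))


# Example wiring with stub checks (all hypothetical):
allowed = allow_access(
    admissibility=lambda: True,    # is the endpoint free of malware?
    authentication=lambda: True,   # who are you?
    authorization=lambda: True,    # what are you allowed to do?
    availability=lambda: True,     # is the data accessible?
    authenticity=lambda: True,     # is the data intact?
)
```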

Posted on August 1, 2006 at 2:03 PM
