Entries Tagged "social engineering"
Reputation is a social mechanism by which we come to trust one another, in all aspects of our society. I see it as a security mechanism. The promise and threat of a change in reputation entices us all to be trustworthy, which in turn enables others to trust us. In a very real sense, reputation enables friendships, commerce, and everything else we do in society. It’s old, older than our species, and we are finely tuned to both perceive and remember reputation information, and broadcast it to others.
The nature of how we manage reputation has changed in the past couple of decades, and Gloria Origgi alludes to the change in her remarks. Reputation now involves technology. Feedback and review systems, whether they be eBay rankings, Amazon reviews, or Uber ratings, are reputational systems. So is Google PageRank. Our reputations are, at least in part, based on what we say on social networking sites like Facebook and Twitter. Basically, what were wholly social systems have become socio-technical systems.
This change is important, for both the good and the bad of what it allows.
An example might make this clearer. In a small town, everyone knows each other, and lenders can make decisions about whom to loan money to, based on reputation (like in the movie It’s a Wonderful Life). The system isn’t perfect; it is prone to “old-boy network” preferences and discrimination against outsiders. The real problem, though, is that the system doesn’t scale. To enable lending on a larger scale, we replaced personal reputation with a technological system: credit reports and scores. They work well, and allow us to borrow money from strangers halfway across the country—and lending has exploded in our society, in part because of it. But the new system can be attacked technologically. Someone could hack the credit bureau’s database and enhance her reputation by boosting her credit score. Or she could steal someone else’s reputation. All sorts of attacks that just weren’t possible against a wholly personal reputation system become possible against a technological one.
We like socio-technical systems of reputation because they empower us in so many ways. People can achieve a level of fame and notoriety much more easily on the Internet. Totally new ways of making a living—think of Uber and Airbnb, or popular bloggers and YouTubers—become possible. But the downsides are considerable. The hacker tactic of social engineering involves fooling someone by hijacking the reputation of someone else. Most social media companies make their money leeching off our activities on their sites. And because we trust the reputational information from these socio-technical systems, anyone who can figure out how to game those systems can artificially boost their reputation. Amazon, eBay, Yelp, and others have been trying to deal with fake reviews for years. And you can buy Twitter followers and Facebook likes cheap.
Reputation has always been gamed. It’s been an eternal arms race between those trying to artificially enhance their reputation and those trying to detect those enhancements. In that respect, nothing is new here. But technology changes the mechanisms of both enhancement and enhancement detection. There’s power to be had on either side of that arms race, and it’ll be interesting to watch each side jockeying for the upper hand.
If the director of the CIA can’t keep his e-mail secure, what hope do the rest of us have—for our e-mail or any of our digital information?
None, and that’s why the companies that we entrust with our digital lives need to be required to secure them for us, and held accountable when they fail. It’s not just a personal or business issue; it’s a matter of public safety.
The details of the story are worth repeating. Someone, reportedly a teenager, hacked into CIA Director John O. Brennan’s AOL account. He says he did so by calling Verizon and posing as a Verizon technician to obtain personal information about Brennan’s account, including his bank card number and his AOL e-mail address. Then he called AOL and pretended to be Brennan. Armed with the information he got from Verizon, he convinced AOL customer service to reset the account’s password.
The CIA director did nothing wrong. He didn’t choose a lousy password. He didn’t leave a copy of it lying around. He didn’t even send it in e-mail to the wrong person. The security failure, according to this account, was entirely with Verizon and AOL. Yet still Brennan’s e-mail was leaked to the press and posted on WikiLeaks.
This kind of attack is not new. In 2012, the Gmail and Twitter accounts of Wired writer Mat Honan were taken over by a hacker who first persuaded Amazon to give him Honan’s credit card details, then used that information to hack into his Apple ID account, and finally used that information to get into his Gmail account.
For most of us, our primary e-mail account is the “master key” to every one of our other accounts. If we click on a site’s “forgot your password?” link, that site will helpfully e-mail us a special URL that allows us to reset our password. That’s how Honan’s hacker got into his Twitter account, and presumably Brennan’s hacker could have done the same thing to any of Brennan’s accounts.
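The mechanics behind that “forgot your password?” link are worth seeing. Here is a minimal sketch of one common design, not any particular site’s implementation: the e-mailed URL carries a token that binds the account to an expiry time with an HMAC, so only the server (holding a hypothetical signing key) can mint or verify it.

```python
import hashlib
import hmac
import time

SIGNING_KEY = b"server-side-secret"  # hypothetical key, never leaves the server

def make_reset_token(email, ttl=3600, now=None):
    """Mint a time-limited, tamper-evident password-reset token."""
    expires = int((now if now is not None else time.time()) + ttl)
    payload = f"{email}|{expires}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_reset_token(token, now=None):
    """Return the account e-mail if the token is authentic and fresh, else None."""
    try:
        email, expires, sig = token.rsplit("|", 2)
    except ValueError:
        return None
    expected = hmac.new(SIGNING_KEY, f"{email}|{expires}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    if (now if now is not None else time.time()) > int(expires):
        return None  # link has expired
    return email
```

Note what the token proves: possession of the mailbox, nothing more. Anyone who takes over the e-mail account receives these links for every site you use, which is exactly why the inbox is the master key.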
Internet e-mail providers are trying to beef up their authentication systems. Yahoo recently announced it would do away with passwords, instead sending a one-time authentication code to the user’s smartphone. Google has long had an optional two-step authentication system that involves sending a one-time code to the user via phone call or SMS.
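App-generated one-time codes of this kind are usually derived with the TOTP algorithm (RFC 6238): both sides hold a shared secret, and the code is an HMAC of the current 30-second time step. (Codes delivered by SMS or voice call may instead be random values the server stores temporarily.) A minimal sketch using only Python’s standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count how many full intervals have elapsed since the Unix epoch.
    counter = int((now if now is not None else time.time()) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

The server computes the same value and compares; because the code depends on the current time step, a stolen code is useless within a minute. That protection assumes the code reaches only you, which is precisely what the attacks described next undermine.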
You might think cell phone authentication would thwart these attacks. Even if a hacker persuaded your e-mail provider to change your password, he wouldn’t have your phone and couldn’t obtain the one-time code. But there’s a way to beat this, too. Indie developer Grant Blakeman’s Gmail account was hacked last year, even though he had that extra-secure two-step system turned on. The hackers persuaded his cell phone company to forward his calls to another number, one controlled by the hackers, so they were able to get the necessary one-time code. And from Google, they were able to reset his Instagram password.
Brennan was lucky. He didn’t have anything classified on his AOL account. There were no personal scandals exposed in his e-mail. Yes, his 47-page top-secret clearance form was sensitive, but not embarrassing. Honan was less lucky, and lost irreplaceable photographs of his daughter.
Neither of them should have been put through this. None of us should have to worry about this.
The problem is a system that makes this possible, and companies that don’t care because they don’t suffer the losses. It’s a classic market failure, and government intervention is how we have to fix the problem.
It’s only when the costs of insecurity exceed the costs of doing it right that companies will invest properly in our security. Companies need to be responsible for the personal information they store about us. They need to secure it better, and they need to suffer penalties if they improperly release it. This means regulatory security standards.
The government should not mandate how a company secures our data; that will move the responsibility to the government and stifle innovation. Instead, government should establish minimum standards for results, and let the market figure out how to do it most effectively. It should also allow individuals whose information has been exposed to sue for damages. This is a model that has worked in all other aspects of public safety, and it needs to be applied here as well.
We have a role to play in this, too. One of the reasons security measures are so easy to bypass is that we as consumers demand they be easy to use, and easy for us to bypass if we lose or forget our passwords. We need to recognize that good security will be less convenient. Again, regulations mandating this will make it more common, and eventually more acceptable.
Information security is complicated, and hard to get right. I’m an expert in the field, and it’s hard for me. It’s hard for the director of the CIA. And it’s hard for you. Security settings on websites are complicated and confusing. Security products are no different. As long as it’s solely the user’s responsibility to get right, and solely the user’s loss if it goes wrong, we’re never going to solve it.
It doesn’t have to be this way. We should demand better and more usable security from the companies we do business with and whose services we use online. But because we don’t have any real visibility into those companies’ security, we should demand our government start regulating the security of these companies as a matter of public safety.
This essay previously appeared on CNN.com.
This is interesting:
First, it integrates with corporate directories such as Active Directory and social media sites like LinkedIn to map the connections between employees, as well as important outside contacts. Bell calls this the “real org chart.” Hackers can use such information to choose people they ought to impersonate while trying to scam employees.
From there, AVA users can craft custom phishing campaigns, both in email and Twitter, to see how employees respond. Finally, and most importantly, it helps organizations track the results of these campaigns. You could use AVA to evaluate the effectiveness of two different security training programs, see which employees need more training, or find places where additional security is needed.
Of course, the problem is that both good guys and bad guys can use this tool. Which makes it like pretty much every other vulnerability scanner.
We identified three types of scams happening on Jiayuan. The first one involves advertising of escort services or illicit goods, and is very similar to traditional spam. The other two are far more interesting and specific to the online dating landscape. One type of scammers are what we call swindlers. For this scheme, the scammer starts a long-distance relationship with an emotionally vulnerable victim, and eventually asks her for money, for example to purchase the flight ticket to visit her. Needless to say, after the money has been transferred the scammer disappears. Another interesting type of scams that we identified are what we call dates for profit. In this scheme, attractive young ladies are hired by the owners of fancy restaurants. The scam then consists in having the ladies contact people on the dating site, taking them on a date at the restaurant, having the victim pay for the meal, and never arranging a second date. This scam is particularly interesting, because there are good chances that the victim will never realize that he’s been scammed—in fact, he probably had a good time.
Citizen Lab has a new report on a probable ISIS-launched cyberattack:
This report describes a malware attack with circumstantial links to the Islamic State in Iraq and Syria. In the interest of highlighting a developing threat, this post analyzes the attack and provides a list of Indicators of Compromise.
A Syrian citizen media group critical of Islamic State of Iraq and Syria (ISIS) was recently targeted in a customized digital attack designed to unmask their location. The Syrian group, Raqqah is being Slaughtered Silently (RSS), focuses its advocacy on documenting human rights abuses by ISIS elements occupying the city of Ar-Raqah. In response, ISIS forces in the city have reportedly targeted the group with house raids, kidnappings, and an alleged assassination. The group also faces online threats from ISIS and its supporters, including taunts that ISIS is spying on the group.
Though we are unable to conclusively attribute the attack to ISIS or its supporters, a link to ISIS is plausible. The malware used in the attack differs substantially from campaigns linked to the Syrian regime, and the attack is focused against a group that is an active target of ISIS forces.
“The next time you call for assistance because the Internet service in your home is not working, the ‘technician’ who comes to your door may actually be an undercover government agent. He will have secretly disconnected the service, knowing that you will naturally call for help and—when he shows up at your door, impersonating a technician—let him in. He will walk through each room of your house, claiming to diagnose the problem. Actually, he will be videotaping everything (and everyone) inside. He will have no reason to suspect you have broken the law, much less probable cause to obtain a search warrant. But that makes no difference, because by letting him in, you will have ‘consented’ to an intrusive search of your home.”
This chilling scenario is the first paragraph of a motion to suppress evidence gathered by the police in exactly this manner, from a hotel room. Unbelievably, this isn’t a story from some totalitarian government on the other side of an ocean. This happened in the United States, and by the FBI. Eventually—I’m sure there will be appeals—higher U.S. courts will decide whether this sort of practice is legal. If it is, the country will slide even further into a society where the police have even more unchecked power than they already possess.
The facts are these. In June, two wealthy Macau residents stayed at Caesar’s Palace in Las Vegas. The hotel suspected that they were running an illegal gambling operation out of their room. It enlisted the police and the FBI, but could not provide enough evidence for them to get a warrant. So instead the agents repeatedly cut the guests’ Internet connection. When the guests complained to the hotel, FBI agents wearing hidden cameras and recorders pretended to be Internet repair technicians and convinced the guests to let them in. They filmed and recorded everything under the pretense of fixing the Internet, and then used the information collected from that to get an actual search warrant. To make matters even worse, they lied to the judge about how they got their evidence.
The FBI claims that their actions are no different from any conventional sting operation. For example, an undercover policeman can legitimately look around and report on what he sees when he is invited into a suspect’s home under the pretext of trying to buy drugs. But there are two very important differences: one of consent, and the other of trust. The former is easier to see in this specific instance, but the latter is much more important for society.
You can’t give consent to something you don’t know and understand. The FBI agents did not enter the hotel room under the pretext of making an illegal bet. They entered under a false pretext, and relied on that pretext for consent to their true mission. That makes things different. The occupants of the hotel room didn’t realize who they were giving access to, and they didn’t know their intentions. The FBI knew this would be a problem. According to the New York Times, “a federal prosecutor had initially warned the agents not to use trickery because of the ‘consent issue.’ In fact, a previous ruse by agents had failed when a person in one of the rooms refused to let them in.” Claiming that a person who grants an Internet technician access is consenting to a police search makes no sense, and is no different from one of those “click-through” Internet license agreements that you didn’t read, saying one thing while meaning another. It’s not consent in any meaningful sense of the term.
Far more important is the matter of trust. Trust is central to how a society functions. No one, not even the most hardened survivalists who live in backwoods log cabins, can do everything by themselves. Humans need help from each other, and most of us need a lot of help from each other. And that requires trust. Many Americans’ homes, for example, are filled with systems that require outside technical expertise when they break: phone, cable, Internet, power, heat, water. Citizens need to trust each other enough to give them access to their hotel rooms, their homes, their cars, their person. Americans simply can’t live any other way.
It cannot be that every time we allow one of those technicians into our homes, we are consenting to a police search. Again from the motion to suppress: “Our lives cannot be private—and our personal relationships intimate—if each physical connection that links our homes to the outside world doubles as a ready-made excuse for the government to conduct a secret, suspicionless, warrantless search.” The resultant breakdown in trust would be catastrophic. People would not be able to get the assistance they need. Legitimate servicemen would find it much harder to do their job. Everyone would suffer.
It all comes back to the warrant. Through warrants, Americans legitimately grant the police an incredible level of access into our personal lives. This is a reasonable choice because the police need this access in order to solve crimes. But to protect ordinary citizens, the law requires the police to go before a neutral third party and convince them that they have a legitimate reason to demand that access. That neutral third party, a judge, then issues the warrant when he or she is convinced. This check on the police’s power is for Americans’ security, and is an important part of the Constitution.
In recent years, the FBI has been pushing the boundaries of its warrantless investigative powers in disturbing and dangerous ways. It collects phone-call records of millions of innocent people. It uses hacking tools against unknown individuals without warrants. It impersonates legitimate news sites. If the lower court sanctions this particular FBI subterfuge, the matter needs to be taken up—and reversed—by the Supreme Court.
This essay previously appeared in The Atlantic.
EDITED TO ADD (4/24/2015): A federal court has ruled that the FBI cannot do this.
This is a creepy story. The FBI wanted access to a hotel guest’s room without a warrant. So agents broke his Internet connection, and then posed as Internet technicians to get inside.
From the motion to suppress:
The next time you call for assistance because the internet service in your home is not working, the “technician” who comes to your door may actually be an undercover government agent. He will have secretly disconnected the service, knowing that you will naturally call for help and—when he shows up at your door, impersonating a technician—let him in. He will walk through each room of your house, claiming to diagnose the problem. Actually, he will be videotaping everything (and everyone) inside. He will have no reason to suspect you have broken the law, much less probable cause to obtain a search warrant. But that makes no difference, because by letting him in, you will have “consented” to an intrusive search of your home.
Basically, the agents snooped around the hotel room, and gathered evidence that they submitted to a magistrate to get a warrant. Of course, they never told the judge that they had engineered the whole outage and planted the fake technicians.
More coverage of the case here.
This feels like an important case to me. We constantly allow repair technicians into our homes to fix this or that technological thingy. If we can’t be sure they are not government agents in disguise, then we’ve lost quite a lot of our freedom and liberty.