I don't know. FireEye likes to attribute all sorts of things to Russia, but the evidence here looks pretty good.
Blog: October 2018 Archives
Earlier this week, the New York Times reported that the Russians and the Chinese were eavesdropping on President Donald Trump's personal cell phone and using the information gleaned to better influence his behavior. This should surprise no one. Security experts have been talking about the potential security vulnerabilities in Trump's cell phone use since he became president. And President Barack Obama bristled at -- but acquiesced to -- the security rules prohibiting him from using a "regular" cell phone throughout his presidency.
Three broader questions obviously emerge from the story. Who else is listening in on Trump's cell phone calls? What about the cell phones of other world leaders and senior government officials? And -- most personal of all -- what about my cell phone calls?
There are two basic places to eavesdrop on pretty much any communications system: at the end points and during transmission. This means that a cell phone attacker can either compromise one of the two phones or eavesdrop on the cellular network. Both approaches have their benefits and drawbacks. The NSA seems to prefer bulk eavesdropping on the planet's major communications links and then picking out individuals of interest. In 2016, WikiLeaks published a series of classified documents listing "target selectors": phone numbers the NSA searches for and records. These included senior government officials of Germany -- among them Chancellor Angela Merkel -- France, Japan, and other countries.
Other countries don't have the same worldwide reach that the NSA has, and must use other methods to intercept cell phone calls. We don't know details of which countries do what, but we know a lot about the vulnerabilities. Insecurities in the phone network itself are so easily exploited that 60 Minutes eavesdropped on a US congressman's phone live on camera in 2016. Back in 2005, unknown attackers targeted the cell phones of many Greek politicians by hacking the country's phone network and turning on an already-installed eavesdropping capability. The NSA even implanted eavesdropping capabilities in networking equipment destined for the Syrian Telephone Company.
Alternatively, an attacker could intercept the radio signals between a cell phone and a tower. Encryption ranges from very weak to possibly strong, depending on which flavor the system uses. Don't think the attacker has to put his eavesdropping antenna on the White House lawn; the Russian Embassy is close enough.
The other way to eavesdrop on a cell phone is by hacking the phone itself. This is the technique favored by countries with less sophisticated intelligence capabilities. In 2017, the public-interest forensics group Citizen Lab uncovered an extensive eavesdropping campaign against Mexican lawyers, journalists, and opposition politicians -- presumably run by the government. Just last month, the same group found eavesdropping capabilities in products from the Israeli cyberweapons manufacturer NSO Group operating in Algeria, Bangladesh, Greece, India, Kazakhstan, Latvia, South Africa -- 45 countries in all.
These attacks generally involve downloading malware onto a smartphone that then records calls, text messages, and other user activities, and forwards them to some central controller. Here, it matters which phone is being targeted. iPhones are harder to hack, which is reflected in the prices companies pay for new exploit capabilities. In 2016, the vulnerability broker Zerodium offered $1.5 million for an unknown iOS exploit and only $200,000 for a similar Android exploit. Earlier this year, a new Dubai start-up announced even higher prices. These vulnerabilities are resold to governments and cyberweapons manufacturers.
Some of the price difference is due to the ways the two operating systems are designed and used. Apple has much more control over the software on an iPhone than Google does on an Android phone. Also, Android phones are generally designed, built, and sold by third parties, which means they are much less likely to get timely security updates. This is changing. Google now has its own phone -- Pixel -- that gets security updates quickly and regularly, and Google is now trying to pressure Android-phone manufacturers to update their phones more regularly. (President Trump reportedly uses an iPhone.)
Another way to hack a cell phone is to install a backdoor during the design process. This is a real fear; earlier this year, US intelligence officials warned that phones made by the Chinese companies ZTE and Huawei might be compromised by that government, and the Pentagon ordered stores on military bases to stop selling them. This is why China's suggestion that Trump use a Huawei phone if he wanted security was an amusing bit of trolling.
Given the wealth of insecurities and the array of eavesdropping techniques, it's safe to say that lots of countries are spying on the phones of both foreign officials and their own citizens. Many of these techniques are within the capabilities of criminal groups, terrorist organizations, and hackers. If I were guessing, I'd say that the major international powers like China and Russia are using the more passive interception techniques to spy on Trump, and that the smaller countries are too scared of getting caught to try to plant malware on his phone.
It's safe to say that President Trump is not the only one being targeted; so are members of Congress, judges, and other senior officials -- especially because no one is trying to tell any of them to stop using their cell phones (although cell phones still are not allowed on either the House or the Senate floor).
As for the rest of us, it depends on how interesting we are. It's easy to imagine a criminal group eavesdropping on a CEO's phone to gain an advantage in the stock market, or a country doing the same thing for an advantage in a trade negotiation. We've seen governments use these tools against dissidents, reporters, and other political enemies. The Chinese and Russian governments are already targeting the US power grid; it makes sense for them to target the phones of those in charge of that grid.
Unfortunately, there's not much you can do to improve the security of your cell phone. Unlike computer networks, for which you can buy antivirus software, network firewalls, and the like, your phone is largely controlled by others. You're at the mercy of the company that makes your phone, the company that provides your cellular service, and the communications protocols developed when none of this was a problem. If one of those companies doesn't want to bother with security, you're vulnerable.
This is why the current debate about phone privacy, with the FBI on one side wanting the ability to eavesdrop on communications and unlock devices, and users on the other side wanting secure devices, is so important. Yes, there are security benefits to the FBI being able to use this information to help solve crimes, but there are far greater benefits to the phones and networks being so secure that all the potential eavesdroppers -- including the FBI -- can't access them. We can give law enforcement other forensics tools, but we must keep foreign governments, criminal groups, terrorists, and everyone else out of everyone's phones. The president may be taking heat for his love of his insecure phone, but each of us is using just as insecure a phone. And for a surprising number of us, making those phones more private is a matter of national security.
This essay previously appeared in the Atlantic.
I've blogged twice about the Bloomberg story that China bugged Supermicro networking equipment destined for the US. We still don't know if the story is true, although I am increasingly skeptical because no corroborating evidence has emerged.
We don't know anything more, but this is the most comprehensive rebuttal of the story I have read.
This seems bad:
The F25 software was found to contain a capture replay vulnerability -- basically an attacker would be able to eavesdrop on radio transmissions between the crane and the controller, and then send their own spoofed commands over the air to seize control of the crane.
"These devices use fixed codes that are reproducible by sniffing and re-transmission," US-CERT explained.
"This can lead to unauthorized replay of a command, spoofing of an arbitrary message, or keeping the controlled load in a permanent 'stop' state."
Here's the CERT advisory.
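The difference between a fixed code and a replay-resistant rolling code can be sketched in a few lines. This is a toy model, not any real crane protocol; the key, frame format, and function names are all illustrative assumptions. The point is that once a MAC covers a monotonically increasing counter, a sniffed frame stops working the second time.

```python
# Toy model (not any real crane protocol) of why fixed command codes are
# replayable and counter-based rolling codes are not. All names are hypothetical.
import hmac, hashlib

SECRET = b"shared-controller-key"  # provisioned in both controller and crane


def rolling_frame(counter: int, command: str) -> bytes:
    """Frame whose MAC covers a monotonically increasing counter."""
    msg = f"{counter}:{command}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16].encode()
    return msg + b"|" + tag


class Receiver:
    def __init__(self):
        self.last_counter = -1

    def accept(self, frame: bytes) -> bool:
        msg, _, tag = frame.rpartition(b"|")
        expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16].encode()
        if not hmac.compare_digest(tag, expected):
            return False                      # forged or corrupted frame
        counter = int(msg.split(b":")[0])
        if counter <= self.last_counter:
            return False                      # replayed frame: counter not fresh
        self.last_counter = counter
        return True


rx = Receiver()
frame = rolling_frame(1, "LIFT")
assert rx.accept(frame) is True    # legitimate command accepted
assert rx.accept(frame) is False   # identical sniffed frame rejected on replay
```

A fixed-code scheme is this sketch with the counter deleted: every transmission is byte-for-byte identical, so a recording replays forever, which is exactly what US-CERT describes.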
Two New Yorkers have been charged with importing squid from Peru and then reselling it as octopus.
Yet another problem that a blockchain-enabled supply-chain system won't solve.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Read my blog posting guidelines here.
This story nicely illustrates the arms race between technologies to create fake videos and technologies to detect fake videos:
These fakes, while convincing if you watch a few seconds on a phone screen, aren't perfect (yet). They contain tells, like creepily ever-open eyes, from flaws in their creation process. In looking into DeepFake's guts, Lyu realized that the images that the program learned from didn't include many with closed eyes (after all, you wouldn't keep a selfie where you were blinking, would you?). "This becomes a bias," he says. The neural network doesn't get blinking. Programs also might miss other "physiological signals intrinsic to human beings," says Lyu's paper on the phenomenon, such as breathing at a normal rate, or having a pulse. (Autonomic signs of constant existential distress are not listed.) While this research focused specifically on videos created with this particular software, it is a truth universally acknowledged that even a large set of snapshots might not adequately capture the physical human experience, and so any software trained on those images may be found lacking.
Lyu's blinking revelation revealed a lot of fakes. But a few weeks after his team put a draft of their paper online, they got anonymous emails with links to deeply faked YouTube videos whose stars opened and closed their eyes more normally. The fake content creators had evolved.
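The blink cue Lyu's team exploited is often measured with the eye aspect ratio (EAR), a standard metric computed from six landmark points around each eye. The sketch below assumes landmark extraction is done elsewhere (e.g. with a face-landmark library); only the ratio and a simple blink counter are shown, and the threshold is an illustrative assumption.

```python
# Sketch of blink detection via the eye aspect ratio (EAR), the kind of
# "physiological signal" a deepfake detector can look for. Landmark
# extraction is assumed; the 0.2 threshold is an illustrative default.
from math import dist


def eye_aspect_ratio(p):
    """p: six (x, y) landmarks around one eye, ordered around the contour.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops toward 0 on closure."""
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))


def count_blinks(ear_series, closed_thresh=0.2):
    """Count open-to-closed transitions across a series of video frames."""
    blinks, was_closed = 0, False
    for ear in ear_series:
        closed = ear < closed_thresh
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    return blinks


# A subject whose EAR never dips over thousands of frames is a red flag.
assert count_blinks([0.3, 0.31, 0.15, 0.12, 0.29, 0.3]) == 1
assert count_blinks([0.3] * 100) == 0
```

As the anonymous emails showed, this particular tell is easy to train away once it is published, which is the arms-race dynamic the story describes.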
I don't know who will win this arms race, if there ever will be a winner. But the problem with fake videos goes deeper: they affect people even if they are later told that they are fake, and there always will be people that will believe they are real, despite any evidence to the contrary.
BuzzFeed is reporting on a scheme where fraudsters buy legitimate Android apps, track users' behavior in order to mimic it in a way that evades bot detectors, and then use bots to perpetrate an ad-fraud scheme.
After being provided with a list of the apps and websites connected to the scheme, Google investigated and found that dozens of the apps used its mobile advertising network. Its independent analysis confirmed the presence of a botnet driving traffic to websites and apps in the scheme. Google has removed more than 30 apps from the Play store, and terminated multiple publisher accounts with its ad networks. Google said that prior to being contacted by BuzzFeed News it had previously removed 10 apps in the scheme and blocked many of the websites. It continues to investigate, and published a blog post to detail its findings.
The company estimates this operation stole close to $10 million from advertisers who used Google's ad network to place ads in the affected websites and apps. It said the vast majority of ads being placed in these apps and websites came via other major ad networks.
Lots of details in both the BuzzFeed and the Google links.
The Internet advertising industry is rife with fraud, at all levels. This is just one scheme among many.
This is a long -- and somewhat technical -- paper by Chris C. Demchak and Yuval Shavitt about China's repeated hacking of the Internet Border Gateway Protocol (BGP): "China's Maxim Leave No Access Point Unexploited: The Hidden Story of China Telecom's BGP Hijacking."
BGP hacking is how large intelligence agencies manipulate Internet routing to make certain traffic easier to intercept. The NSA calls it "network shaping" or "traffic shaping." Here's a document from the Snowden archives outlining how the technique works with Yemen.
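The mechanics rest on longest-prefix matching: routers forward toward the most specific announced prefix, so a hijacker who announces a more specific route than the legitimate one pulls the traffic. The toy route table below (documentation addresses, no real routing stack) illustrates the selection rule.

```python
# Toy illustration of why a BGP prefix hijack works: route selection
# prefers the longest matching prefix, so a more-specific announcement
# wins everywhere it propagates. Addresses are RFC 5737 documentation space.
from ipaddress import ip_address, ip_network


def best_route(dest: str, announcements):
    """Pick the origin of the longest announced prefix covering dest.
    (Assumes at least one announcement matches; a real router would
    fall through to a default route.)"""
    addr = ip_address(dest)
    matches = [(net, origin) for net, origin in announcements
               if addr in ip_network(net)]
    return max(matches, key=lambda m: ip_network(m[0]).prefixlen)[1]


table = [("203.0.113.0/24", "legitimate ISP")]
assert best_route("203.0.113.50", table) == "legitimate ISP"

# The hijacker announces two /25s covering the same space; being more
# specific, they beat the legitimate /24 in route selection.
table += [("203.0.113.0/25", "hijacker"), ("203.0.113.128/25", "hijacker")]
assert best_route("203.0.113.50", table) == "hijacker"
```

"Traffic shaping" in the NSA's sense is a gentler version of the same idea: nudge routing so that traffic of interest transits links you can tap.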
EDITED TO ADD (10/27): Boing Boing post.
IoT devices are surveillance devices, and manufacturers generally use them to collect data on their customers. Surveillance is still the business model of the Internet, and this data is used against the customers' interests: either by the device manufacturer or by some third party the manufacturer sells the data to. Of course, this data can be used by the police as well; the purpose depends on the country.
None of this is new, and much of it was discussed in my book Data and Goliath. It is common for Internet companies to publish "transparency reports" that give at least general information about how police are using that data. IoT companies don't publish those reports.
TechCrunch asked a bunch of companies about this, and basically found that no one is talking.
Boing Boing post.
This is crazy (and dangerous). West Virginia is allowing people to vote via a smart-phone app. Even crazier, the app uses blockchain -- presumably because they have no idea what the security issues with voting actually are.
Ross Anderson has some new work:
As mobile phone masts went up across the world's jungles, savannas and mountains, so did poaching. Wildlife crime syndicates can not only coordinate better but can mine growing public data sets, often of geotagged images. Privacy matters for tigers, for snow leopards, for elephants and rhinos and even for tortoises and sharks. Animal data protection laws, where they exist at all, are oblivious to these new threats, and no-one seems to have started to think seriously about information security.
If you're an American of European descent, there's a 60% chance you can be uniquely identified by public information in DNA databases. This is not information that you have made public; this is information your relatives have made public.
"Identity inference of genomic data using long-range familial searches."
Abstract: Consumer genomics databases have reached the scale of millions of individuals. Recently, law enforcement authorities have exploited some of these databases to identify suspects via distant familial relatives. Using genomic data of 1.28 million individuals tested with consumer genomics, we investigated the power of this technique. We project that about 60% of the searches for individuals of European-descent will result in a third cousin or closer match, which can allow their identification using demographic identifiers. Moreover, the technique could implicate nearly any US-individual of European-descent in the near future. We demonstrate that the technique can also identify research participants of a public sequencing project. Based on these results, we propose a potential mitigation strategy and policy implications to human subject research.
A good news article.
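The intuition behind the result can be captured with a deliberately crude back-of-envelope model (not the paper's model, which weights detection power by relationship degree): the more relatives you have at third-cousin distance, the less database coverage is needed before one of them is probably in it. All parameter values below are illustrative assumptions.

```python
# Crude back-of-envelope (NOT the paper's model): probability that a
# database holds at least one detectable third-cousin-or-closer relative
# of a random person. All parameters are illustrative assumptions.
def match_probability(coverage: float, relatives: int, detection: float = 1.0) -> float:
    """coverage: fraction of the target population in the database;
    relatives: third-cousin-or-closer relatives per person;
    detection: chance a present relative is actually detectable."""
    p_each = coverage * detection          # chance any one relative is found
    return 1 - (1 - p_each) ** relatives   # chance at least one is found

# People have hundreds of third cousins, so even sub-percent coverage
# drives the match probability up quickly.
assert match_probability(0.0, 850) == 0.0
assert match_probability(0.01, 850) > match_probability(0.001, 850)
```

The takeaway matches the abstract: because the "relatives" exponent is large, modest consumer-database growth is enough to implicate nearly everyone in the population, whether or not they ever took a test themselves.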
This is a current list of where and when I am scheduled to speak:
- I'm speaking at Data in Smarter Cities in New York City on October 23, 2018.
- I'm speaking at the Cyber Security Summit in Minneapolis, Minnesota on October 24, 2018.
- I'm speaking at ISF's 29th Annual World Congress in Las Vegas, Nevada on October 30, 2018.
- I'm speaking at Kiwicon in Wellington, New Zealand on November 16, 2018.
- I'm speaking at The Digital Society Conference 2018: Empowering Ecosystems on December 11, 2018.
- I'm speaking at the Hyperledger Forum in Basel, Switzerland on December 13, 2018.
The list is maintained on this page.
The UK's Marine Conservation Society is urging people to eat less squid.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Read my blog posting guidelines here.
It's no secret that computers are insecure. Stories like the recent Facebook hack, the Equifax hack and the hacking of government agencies are remarkable for how unremarkable they really are. They might make headlines for a few days, but they're just the newsworthy tip of a very large iceberg.
The risks are about to get worse, because computers are being embedded into physical devices and will affect lives, not just our data. Security is not a problem the market will solve. The government needs to step in and regulate this increasingly dangerous space.
The primary reason computers are insecure is that most buyers aren't willing to pay -- in money, features, or time to market -- for security to be built into the products and services they want. As a result, we are stuck with hackable internet protocols, computers that are riddled with vulnerabilities and networks that are easily penetrated.
We have accepted this tenuous situation because, for a very long time, computer security has mostly been about data. Banking data stored by financial institutions might be important, but nobody dies when it's stolen. Facebook account data might be important, but again, nobody dies when it's stolen. Regardless of how bad these hacks are, it has historically been cheaper to accept the results than to fix the problems. But the nature of how we use computers is changing, and that comes with greater security risks.
Many of today's new computers are not just screens that we stare at, but objects in our world with which we interact. A refrigerator is now a computer that keeps things cold; a car is now a computer with four wheels and an engine. These computers sense us and our environment, and they affect us and our environment. They talk to each other over networks, they are autonomous, and they have physical agency. They drive our cars, pilot our planes, and run our power plants. They control traffic, administer drugs into our bodies, and dispatch emergency services. These connected computers and the network that connects them -- collectively known as "the internet of things" -- affect the world in a direct physical manner.
We've already seen hacks against robot vacuum cleaners, ransomware that shut down hospitals and denied care to patients, and malware that shut down cars and power plants. These attacks will become more common, and more catastrophic. Computers fail differently than most other machines: It's not just that they can be attacked remotely -- they can be attacked all at once. It's impossible to take an old refrigerator and infect it with a virus or recruit it into a denial-of-service botnet, and a car without an internet connection simply can't be hacked remotely. But that computer with four wheels and an engine? It -- along with all other cars of the same make and model -- can be made to run off the road, all at the same time.
As the threats increase, our longstanding assumptions about security no longer work. The practice of patching a security vulnerability is a good example of this. Traditionally, we respond to the never-ending stream of computer vulnerabilities by regularly patching our systems, applying updates that fix the insecurities. This fails in low-cost devices, whose manufacturers don't have security teams to write the patches: if you want to update your DVR or webcam for security reasons, you have to throw your old one away and buy a new one. Patching also fails in more expensive devices, and can be quite dangerous. Do we want to allow vulnerable automobiles on the streets and highways during the weeks before a new security patch is written, tested, and distributed?
Another failing assumption is the security of our supply chains. We've started to see political battles about government-placed vulnerabilities in computers and software from Russia and China. But supply chain security is about more than where the suspect company is located: we need to be concerned about where the chips are made, where the software is written, who the programmers are, and everything else.
Last week, Bloomberg reported that China inserted eavesdropping chips into hardware made for American companies like Amazon and Apple. The tech companies all denied the accuracy of this report, which precisely illustrates the problem. Everyone involved in the production of a computer must be trusted, because any one of them can subvert the security. As everything becomes a computer and those computers become embedded in national-security applications, supply-chain corruption will be impossible to ignore.
These are problems that the market will not fix. Buyers can't differentiate between secure and insecure products, so sellers prefer to spend their money on features that buyers can see. The complexity of the internet and of our supply chains make it difficult to trace a particular vulnerability to a corresponding harm. The courts have traditionally not held software manufacturers liable for vulnerabilities. And, for most companies, it has generally been good business to skimp on security, rather than sell a product that costs more, does less, and is on the market a year later.
The solution is complicated, and it's one I devoted my latest book to answering. There are technological challenges, but they're not insurmountable -- the policy issues are far more difficult. We must engage with the future of internet security as a policy issue. Doing so requires a multifaceted approach, one that requires government involvement at every step.
First, we need standards to ensure that unsafe products don't harm others. We need to accept that the internet is global and regulations are local, and design accordingly. These standards will include some prescriptive rules for minimal acceptable security. California just enacted an Internet of Things security law that prohibits default passwords. This is just one of many security holes that need to be closed, but it's a good start.
We also need our standards to be flexible and easy to adapt to the needs of various companies, organizations, and industries. The National Institute of Standards and Technology's Cybersecurity Framework is an excellent example of this, because its recommendations can be tailored to suit the individual needs and risks of organizations. The Cybersecurity Framework -- which contains guidance on how to identify, prevent, respond to, and recover from security risks -- is voluntary at this point, which means nobody follows it. Making it mandatory for critical industries would be a great first step. An appropriate next step would be to implement more specific standards for industries like automobiles, medical devices, consumer goods, and critical infrastructure.
Second, we need regulatory agencies to penalize companies with bad security, and a robust liability regime. The Federal Trade Commission is starting to do this, but it can do much more. It needs to make the cost of insecurity greater than the cost of security, which means that fines have to be substantial. The European Union is leading the way in this regard: they've passed a comprehensive privacy law, and are now turning to security and safety. The United States can and should do the same.
We need to ensure that companies are held accountable for their products and services, and that those affected by insecurity can recover damages. Traditionally, United States courts have declined to enforce liabilities for software vulnerabilities, and those affected by data breaches have been unable to prove specific harm. Here, we need statutory damages -- harms spelled out in the law that don't require any further proof.
Finally, we need to make it an overarching policy that security takes precedence over everything else. The internet is used globally, by everyone, and any improvements we make to security will necessarily help those we might prefer remain insecure: criminals, terrorists, rival governments. Here, we have no choice. The security we gain from making our computers less vulnerable far outweighs any security we might gain from leaving insecurities that we can exploit.
Regulation is inevitable. Our choice is no longer between government regulation and no government regulation, but between smart government regulation and ill-advised government regulation. Government regulation is not something to fear. Regulation doesn't stifle innovation, and I suspect that well-written regulation will spur innovation by creating a market for security technologies.
No industry has significantly improved the security or safety of its products without the government stepping in to help. Cars, airplanes, pharmaceuticals, consumer goods, food, medical devices, workplaces, restaurants, and, most recently, financial products -- all needed government regulation in order to become safe and secure.
Getting internet safety and security right will depend on people: people who are willing to take the time and expense to do the right things; people who are determined to put the best possible law and policy into place. The internet is constantly growing and evolving; we still have time for our security to adapt, but we need to act quickly, before the next disaster strikes. It's time for the government to jump in and help. Not tomorrow, not next week, not next year, not when the next big technology company or government agency is hacked, but now.
Bloomberg has another story about hardware surveillance implants in equipment made in China. This implant is different from the one Bloomberg reported on last week. That story has been denied by pretty much everyone else, but Bloomberg is sticking by its story and its sources. (I linked to other commentary and analysis here.)
Again, I have no idea what's true. The story is plausible. The denials are about what you'd expect. My lone hesitation to believing this is not seeing a photo of the hardware implant. If these things were in servers all over the US, you'd think someone would have come up with a photograph by now.
The US Government Accountability Office just published a new report: "Weapons Systems Cyber Security: DOD Just Beginning to Grapple with Scale of Vulnerabilities" (summary here). The upshot won't be a surprise to any of my regular readers: they're vulnerable.
From the summary:
Automation and connectivity are fundamental enablers of DOD's modern military capabilities. However, they make weapon systems more vulnerable to cyber attacks. Although GAO and others have warned of cyber risks for decades, until recently, DOD did not prioritize weapon systems cybersecurity. Finally, DOD is still determining how best to address weapon systems cybersecurity.
In operational testing, DOD routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic. Using relatively simple tools and techniques, testers were able to take control of systems and largely operate undetected, due in part to basic issues such as poor password management and unencrypted communications. In addition, vulnerabilities that DOD is aware of likely represent a fraction of total vulnerabilities due to testing limitations. For example, not all programs have been tested and tests do not reflect the full range of threats.
It is definitely easier, and cheaper, to ignore the problem or pretend it isn't a big deal. But that's probably a mistake in the long run.
I believe that, somewhere, there is a highly qualified security person who has had enough of corporate life and wants instead to make a difference in the world. If that's you, please consider applying.
Last month, the White House released the "National Cyber Strategy of the United States of America." I generally don't have much to say about these sorts of documents. They're filled with broad generalities. Who can argue with:
Defend the homeland by protecting networks, systems, functions, and data;
Promote American prosperity by nurturing a secure, thriving digital economy and fostering strong domestic innovation;
Preserve peace and security by strengthening the ability of the United States in concert with allies and partners to deter and, if necessary, punish those who use cyber tools for malicious purposes; and
Expand American influence abroad to extend the key tenets of an open, interoperable, reliable, and secure Internet.
The devil is in the details, of course. And the strategy includes no details.
In a New York Times op-ed, Josephine Wolff argues that this new strategy, together with the more-detailed Department of Defense cyber strategy and the classified National Security Presidential Memorandum 13, represent a dangerous shift of US cybersecurity posture from defensive to offensive:
...the National Cyber Strategy represents an abrupt and reckless shift in how the United States government engages with adversaries online. Instead of continuing to focus on strengthening defensive technologies and minimizing the impact of security breaches, the Trump administration plans to ramp up offensive cyberoperations. The new goal: deter adversaries through pre-emptive cyberattacks and make other nations fear our retaliatory powers.
The Trump administration's shift to an offensive approach is designed to escalate cyber conflicts, and that escalation could be dangerous. Not only will it detract resources and attention from the more pressing issues of defense and risk management, but it will also encourage the government to act recklessly in directing cyberattacks at targets before they can be certain of who those targets are and what they are doing.
There is no evidence that pre-emptive cyberattacks will serve as effective deterrents to our adversaries in cyberspace. In fact, every time a country has initiated an unprompted cyberattack, it has invariably led to more conflict and has encouraged retaliatory breaches rather than deterring them. Nearly every major publicly known online intrusion that Russia or North Korea has perpetrated against the United States has had significant and unpleasant consequences.
Wolff is right; this is reckless. In Click Here to Kill Everybody, I argue for a "defense dominant" strategy: that while offense is essential for defense, when the two are in conflict, it should take a back seat to defense. It's more complicated than that, of course, and I devote a whole chapter to its implications. But as computers and the Internet become more critical to our lives and society, keeping them secure becomes more important than using them to attack others.
This is an amazing short video of a squid -- I don't know the species -- changing its color instantly.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Read my blog posting guidelines here.
Interesting research paper: "Fear the Reaper: Characterization and Fast Detection of Card Skimmers":
Abstract: Payment card fraud results in billions of dollars in losses annually. Adversaries increasingly acquire card data using skimmers, which are attached to legitimate payment devices including point of sale terminals, gas pumps, and ATMs. Detecting such devices can be difficult, and while many experts offer advice in doing so, there exists no large-scale characterization of skimmer technology to support such defenses. In this paper, we perform the first such study based on skimmers recovered by the NYPD's Financial Crimes Task Force over a 16 month period. After systematizing these devices, we develop the Skim Reaper, a detector which takes advantage of the physical properties and constraints necessary for many skimmers to steal card data. Our analysis shows the Skim Reaper effectively detects 100% of devices supplied by the NYPD. In so doing, we provide the first robust and portable mechanism for detecting card skimmers.
Boing Boing post.
Noted conspiracy theorist John McAfee tweeted:
The "Presidential alerts": they are capable of accessing the E911 chip in your phones -- giving them full access to your location, microphone, camera and every function of your phone. This not a rant, this is from me, still one of the leading cybersecurity experts. Wake up people!
This is, of course, ridiculous. I don't even know what an "E911 chip" is. And -- honestly -- if the NSA wanted in your phone, they would be a lot more subtle than this.
RT has picked up the story, though.
(If they just called it a "FEMA Alert," there would be a lot less stress about the whole thing.)
Bloomberg is reporting about a Chinese espionage operation involving inserting a tiny chip into computer products made in China.
I've written about (alternate link) this threat more generally. Supply-chain security is an insurmountably hard problem. Our IT industry is inexorably international, and anyone involved in the process can subvert the security of the end product. No one wants to even think about a US-only anything; prices would multiply many times over.
We cannot trust anyone, yet we have no choice but to trust everyone. No one is ready for the costs that solving this would entail.
EDITED TO ADD: Apple, Amazon, and others are denying that this attack is real. Stay tuned for more information.
EDITED TO ADD (10/6): TheGrugq comments. Bottom line is that we still don't know. I think that precisely exemplifies the greater problem.
EDITED TO ADD (10/7): Both the US Department of Homeland Security and the UK National Cyber Security Centre claim to believe the tech companies. Bloomberg is standing by its story. Nicholas Weaver writes that the story is plausible.
The EU's GDPR regulation requires companies to report a breach within 72 hours. Alex Stamos, former Facebook CISO now at Stanford University, points out how this can be a problem:
Interesting impact of the GDPR 72-hour deadline: companies announcing breaches before investigations are complete.
1) Announce & cop to max possible impacted users.
2) Everybody is confused on actual impact, lots of rumors.
3) A month later truth is included in official filing.
Last week's Facebook hack is his example.
The Twitter conversation continues as various people try to figure out if the European law allows a delay in order to work with law enforcement to catch the hackers, or if a company can report the breach privately with some assurance that it won't accidentally leak to the public.
The other interesting impact is the foreclosing of any possible coordination with law enforcement. I once ran response for a breach of a financial institution, which wasn't disclosed for months as the company was working with the USSS to lure the attackers into a trap. It worked.
The assumption that anything you share with an EU DPA stays confidential in the current media environment has been disproven by my personal experience.
This is a perennial problem: we can get information quickly, or we can get accurate information. It's hard to get both at the same time.
EDITED TO ADD (10/27): Stamos was correct. Later reporting clarified the breach:
Facebook said Friday that an attack on its computer systems that was announced two weeks ago had affected 30 million users, about 20 million fewer than it estimated earlier.
But the personal information that was exposed was far more intimate than originally thought, adding to Facebook's challenges as it investigates what was probably the most substantial breach of its network in the company's 14-year history.
Interesting article on terahertz millimeter-wave scanners and their uses to detect terrorist bombers.
The heart of the device is a block of electronics about the size of a 1990s tower personal computer. It comes housed in a musician's black case, akin to the one Spinal Tap might use on tour. At the front: a large, square white plate, the terahertz camera and, just above it, an ordinary closed-circuit television (CCTV) camera. Mounted on a shelf inside the case is a laptop that displays the CCTV image and the blobby terahertz image side by side.
An operator compares the two images as people flow past, looking for unexplained dark areas that could represent firearms or suicide vests. Most images that might be mistaken for a weapon -- backpacks or a big patch of sweat on the back of a person's shirt -- are easily evaluated by observing the terahertz image alongside an unaltered video picture of the passenger.
It is up to the operator -- in LA's case, presumably a transport police officer -- to query people when dark areas on the terahertz image suggest concealed large weapons or suicide vests. The device cannot see inside bodies, backpacks or shoes. "If you look at previous incidents on public transit systems, this technology would have detected those," Sotero says, noting LA Metro worked "closely" with the TSA for over a year to test this and other technologies. "It definitely has the backing of TSA."
How the technology works in practice depends heavily on the operator's training. According to Evans, "A lot of tradecraft goes into understanding where the threat item is likely to be on the body." He sees the crucial role played by the operator as giving back control to security guards and allowing them to use their common sense.
I am quoted in the article as being skeptical of the technology, particularly how it's deployed.
Brian Krebs is reporting on some new and sophisticated phishing scams over the telephone.
I second his advice: "never give out any information about yourself in response to an unsolicited phone call." Always call them back, and not using the number offered to you by the caller. Always.
EDITED TO ADD: In 2009, I wrote:
When I was growing up, children were commonly taught: "don't talk to strangers." Strangers might be bad, we were told, so it's prudent to steer clear of them.
And yet most people are honest, kind, and generous, especially when someone asks them for help. If a small child is in trouble, the smartest thing he can do is find a nice-looking stranger and talk to him.
These two pieces of advice may seem to contradict each other, but they don't. The difference is that in the second instance, the child is choosing which stranger to talk to. Given that the overwhelming majority of people will help, the child is likely to get help if he chooses a random stranger. But if a stranger comes up to a child and talks to him or her, it's not a random choice. It's more likely, although still unlikely, that the stranger is up to no good.
That advice is generalizable to this instance as well. The problem isn't that someone claiming to be from your bank is asking for personal information. The problem is that they contacted you first.
Where else does this advice hold true?
From Kashmir Hill:
Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn't hand over at all, but that was collected from other people's contact books, a hidden layer of details Facebook has about you that I've come to call "shadow contact information." I managed to place an ad in front of Alan Mislove by targeting his shadow profile. This means that the junk email address that you hand over for discounts or for shady online shopping is likely associated with your account and being used to target you with ads.
Here's the research paper. Hill again:
They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user's account, that phone number became targetable by an advertiser within a couple of weeks. So users who want their accounts to be more secure are forced to make a privacy trade-off and allow advertisers to more easily find them on the social network.
Earlier this month, I wrote about a statement by the Five Eyes countries about encryption and back doors. (Short summary: they like them.) One of the weird things about the statement is that it was clearly written from a law-enforcement perspective, though we normally think of the Five Eyes as a consortium of intelligence agencies.
Susan Landau examines the details of the statement, explains what's going on, and why the statement is a lot less than what it might seem.