Entries Tagged "infrastructure"
Last week I signed on to two joint letters about the security of the 2020 election. The first was as one of 59 election security experts, basically saying that while the election seems to have been both secure and accurate (voter suppression notwithstanding), we still need to work to secure our election systems:
We are aware of alarming assertions being made that the 2020 election was “rigged” by exploiting technical vulnerabilities. However, in every case of which we are aware, these claims either have been unsubstantiated or are technically incoherent. To our collective knowledge, no credible evidence has been put forth that supports a conclusion that the 2020 election outcome in any state has been altered through technical compromise.
That said, it is imperative that the US continue working to bolster the security of elections against sophisticated adversaries. At a minimum, all states should employ election security practices and mechanisms recommended by experts to increase assurance in election outcomes, such as post-election risk-limiting audits.
The New York Times wrote about the letter.
The second was a more general call for election security measures in the US:
Obviously elections themselves are partisan. But the machinery of them should not be. And the transparent assessment of potential problems or the assessment of allegations of security failure — even when they could affect the outcome of an election — must be free of partisan pressures. Bottom line: election security officials and computer security experts must be able to do their jobs without fear of retribution for finding and publicly stating the truth about the security and integrity of the election.
These letters pile on to the November 12 statement from the Cybersecurity and Infrastructure Security Agency (CISA) and the other agencies of the Election Infrastructure Government Coordinating Council (GCC) Executive Committee. While I’m not sure how they have enough comparative data to claim that “the November 3rd election was the most secure in American history,” they are certainly credible in saying that “there is no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised.”
We have a long way to go to secure our election systems from hacking. Details of what to do are known. Getting rid of touch-screen voting machines is important, but baseless claims of fraud don’t help.
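Risk-limiting audits, mentioned in the experts’ letter above, are simple enough to sketch. The following is a stripped-down, two-candidate version of the BRAVO ballot-polling method; the function name and the example numbers are mine, and real audits handle multiple contests, publicly verifiable random seeds, and more careful stopping rules:

```python
import random

def bravo_audit(ballots, reported_share, risk_limit=0.05, seed=1):
    """Simplified two-candidate BRAVO ballot-polling audit (illustrative).

    Sample ballots in random order. Each ballot for the reported winner
    multiplies the test statistic by reported_share / 0.5; each ballot
    for the loser multiplies it by (1 - reported_share) / 0.5. The audit
    confirms the reported outcome once the statistic reaches 1 / risk_limit;
    otherwise it escalates to a full hand count.
    """
    rng = random.Random(seed)
    t = 1.0
    order = rng.sample(range(len(ballots)), len(ballots))
    for n, i in enumerate(order, start=1):
        if ballots[i] == "winner":
            t *= reported_share / 0.5
        else:
            t *= (1 - reported_share) / 0.5
        if t >= 1 / risk_limit:
            return ("confirmed", n)  # outcome confirmed after n ballots
    return ("full recount", len(ballots))

# Reported 55% for the winner; here the paper trail matches the report,
# so the audit should confirm after examining only a small sample.
ballots = ["winner"] * 5500 + ["loser"] * 4500
result, sampled = bravo_audit(ballots, reported_share=0.55)
```

The point of the method is that when the reported margin is real, a small random sample of paper ballots suffices; when it isn’t, the audit escalates toward a full hand count.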
Interesting paper: “How weaponizing disinformation can bring down a city’s power grid”:
Abstract: Social media has made it possible to manipulate the masses via disinformation and fake news at an unprecedented scale. This is particularly alarming from a security perspective, as humans have proven to be one of the weakest links when protecting critical infrastructure in general, and the power grid in particular. Here, we consider an attack in which an adversary attempts to manipulate the behavior of energy consumers by sending fake discount notifications encouraging them to shift their consumption into the peak-demand period. Using Greater London as a case study, we show that such disinformation can indeed lead to unwitting consumers synchronizing their energy-usage patterns, and result in blackouts on a city-scale if the grid is heavily loaded. We then conduct surveys to assess the propensity of people to follow-through on such notifications and forward them to their friends. This allows us to model how the disinformation may propagate through social networks, potentially amplifying the attack impact. These findings demonstrate that in an era when disinformation can be weaponized, system vulnerabilities arise not only from the hardware and software of critical infrastructure, but also from the behavior of the consumers.
I’m not sure the attack is practical, but it’s an interesting idea.
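The mechanism is easy to illustrate with a toy model. All of the numbers below are invented for illustration, not taken from the paper: the attack works by moving a fraction of consumers’ flexible load into the peak hour, and the question is whether the shifted peak exceeds grid capacity.

```python
# Toy model of the attack: a fake "discount" notification persuades a
# fraction of consumers to move flexible load (EV charging, laundry,
# heating) into the peak hour. All figures are illustrative assumptions.

def peak_load(n_consumers, base_kw, flexible_kw, follow_rate):
    """Peak-hour demand when `follow_rate` of consumers shift their
    flexible load into the peak hour."""
    followers = int(n_consumers * follow_rate)
    return n_consumers * base_kw + followers * flexible_kw

capacity_kw = 2_600_000  # assumed headroom of the local grid at peak

# Normally only a small fraction shifts load into the peak by chance;
# under the attack, half of the recipients follow the fake offer.
normal = peak_load(1_000_000, base_kw=2.0, flexible_kw=1.5, follow_rate=0.05)
attacked = peak_load(1_000_000, base_kw=2.0, flexible_kw=1.5, follow_rate=0.50)

overloaded = attacked > capacity_kw  # True: the synchronized shift trips the grid
```

The model makes the paper’s point visible: the grid is provisioned for statistically smoothed demand, so an attacker who can synchronize even a modest fraction of consumers changes the load profile in a way the hardware was never sized for.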
SANS has made freely available its “Work-from-Home Awareness Kit.”
When I think about how COVID-19’s security measures are affecting organizational networks, I see several interrelated problems:
One, employees are working from their home networks and sometimes from their home computers. These systems are more likely to be out of date, unpatched, and unprotected. They are more vulnerable to attack simply because they are less secure.
Two, sensitive organizational data will likely migrate outside of the network. Employees working from home are going to save data on their own computers, where they aren’t protected by the organization’s security systems. This makes the data more likely to be hacked and stolen.
Three, employees are more likely to access their organizational networks insecurely. If the organization is lucky, they will have already set up a VPN for remote access. If not, they’re either trying to get one quickly or not bothering at all. Handing people VPN software to install and use with zero training is a recipe for security mistakes, but not using a VPN is even worse.
Four, employees are being asked to use new and unfamiliar tools like Zoom to replace face-to-face meetings. Again, these hastily set-up systems are likely to be insecure.
Five, the general chaos of “doing things differently” is an opening for attack. Tricks like business email compromise, where an employee gets a fake email from a senior executive asking him to transfer money to some account, will be more successful when the employee can’t walk down the hall to confirm the email’s validity — and when everyone is distracted and so many other things are being done differently.
Worrying about network security seems almost quaint in the face of the massive health risks from COVID-19, but attacks on infrastructure can have effects far greater than the infrastructure itself. Stay safe, everyone, and help keep your networks safe as well.
At the CyberwarCon conference in Arlington, Virginia, on Thursday, Microsoft security researcher Ned Moran plans to present new findings from the company’s threat intelligence group that show a shift in the activity of the Iranian hacker group APT33, also known by the names Holmium, Refined Kitten, or Elfin. Microsoft has watched the group carry out so-called password-spraying attacks over the past year that try just a few common passwords across user accounts at tens of thousands of organizations. That’s generally considered a crude and indiscriminate form of hacking. But over the last two months, Microsoft says APT33 has significantly narrowed its password spraying to around 2,000 organizations per month, while increasing the number of accounts targeted at each of those organizations almost tenfold on average.
The hackers’ motivation — and which industrial control systems they’ve actually breached — remains unclear. Moran speculates that the group is seeking to gain a foothold to carry out cyberattacks with physically disruptive effects. “They’re going after these producers and manufacturers of control systems, but I don’t think they’re the end targets,” says Moran. “They’re trying to find the downstream customer, to find out how they work and who uses them. They’re looking to inflict some pain on someone’s critical infrastructure that makes use of these control systems.”
It’s unclear whether the attackers are causing any actual damage, or just gaining access for some future use.
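The spraying pattern Microsoft describes, a few common passwords tried across many accounts, is also what defenders look for in authentication logs: unlike brute force, it touches each account only once or twice, so per-account lockout counters never trip. A minimal sketch of a log-based detector follows; the thresholds, field layout, and IP addresses are hypothetical.

```python
from collections import defaultdict

def flag_spraying(failures, min_accounts=50, max_tries_per_account=3):
    """Flag source IPs whose failed logins span many accounts with only
    a few attempts each -- the signature of password spraying.

    `failures` is an iterable of (source_ip, account) tuples from
    failed-authentication logs.
    """
    by_source = defaultdict(lambda: defaultdict(int))
    for src, account in failures:
        by_source[src][account] += 1
    flagged = []
    for src, accounts in by_source.items():
        if (len(accounts) >= min_accounts
                and max(accounts.values()) <= max_tries_per_account):
            flagged.append(src)
    return flagged

# One sprayer hitting 200 accounts twice each, plus a legitimate user
# who mistyped their own password five times (high tries, one account).
failures = [("203.0.113.9", f"user{i}") for i in range(200) for _ in range(2)]
failures += [("198.51.100.7", "alice")] * 5
```

Here only the first source is flagged; the fat-fingered user fails the many-accounts test, which is exactly why spraying evades per-account defenses and has to be detected across accounts instead.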
The Carnegie Endowment for International Peace published a comprehensive report on ICT (information and communication technologies) supply-chain security and integrity. It’s a good read, but nothing that those who are following this issue don’t already know.
The trade war with China has reached a new industry: subway cars. Congress is considering legislation that would prevent the world’s largest train maker, the Chinese-owned CRRC Corporation, from competing on new contracts in the United States.
Part of the reasoning behind this legislation is economic, and stems from worries about Chinese industries undercutting the competition and dominating key global industries. But another part involves fears about national security. News articles talk about “spy trains,” and the possibility that the train cars might surreptitiously monitor their passengers’ faces, movements, conversations or phone calls.
This is a complicated topic. There is definitely a national security risk in buying computer infrastructure from a country you don’t trust. That’s why there is so much worry about Chinese-made equipment for the new 5G wireless networks.
It’s also why the United States has blocked the cybersecurity company Kaspersky from selling its Russian-made antivirus products to US government agencies. Meanwhile, the chairman of China’s technology giant Huawei has pointed to NSA spying disclosed by Edward Snowden as a reason to mistrust US technology companies.
The reason these threats are so real is that it’s not difficult to hide surveillance or control infrastructure in computer components, and if they’re not turned on, they’re very difficult to find.
Like every other piece of modern machinery, modern train cars are filled with computers, and while it’s certainly possible to produce a subway car with enough surveillance apparatus to turn it into a “spy train,” in practice it doesn’t make much sense. The risk of discovery is too great, and the payoff would be too low. Like the United States, China is more likely to try to get data from the US communications infrastructure, or from the large Internet companies that already collect data on our every move as part of their business model.
While it’s unlikely that China would bother spying on commuters using subway cars, it would be much less surprising if a tech company offered free Internet on subways in exchange for surveillance and data collection. Or if the NSA used those corporate systems for their own surveillance purposes (just as the agency has spied on in-flight cell phone calls, according to an investigation by the Intercept and Le Monde, citing documents provided by Edward Snowden). That’s an easier, and more fruitful, attack path.
We have credible reports that the Chinese hacked Gmail around 2010, and there are ongoing concerns about both censorship and surveillance by the Chinese social-networking company TikTok. (TikTok’s parent company has told the Washington Post that the app doesn’t send American users’ info back to Beijing, and that the Chinese government does not influence the app’s use in the United States.)
Even so, these examples illustrate an important point: there’s no escaping the technology of inevitable surveillance. You have little choice but to rely on the companies that build your computers and write your software, whether in your smartphones, your 5G wireless infrastructure, or your subway cars. And those systems are so complicated that they can be secretly programmed to operate against your interests.
Last year, Le Monde reported that the Chinese government bugged the computer network of the headquarters of the African Union in Addis Ababa. China had built and outfitted the organization’s new headquarters as a foreign aid gift, reportedly secretly configuring the network to send copies of confidential data to Shanghai every night between 2012 and 2017. China denied having done so, of course.
If there’s any lesson from all of this, it’s that everybody spies using the Internet. The United States does it. Our allies do it. Our enemies do it. Many countries do it to each other, with their success largely dependent on how sophisticated their tech industries are.
China dominates the subway car manufacturing industry because of its low prices — the same reason it dominates the 5G hardware industry. Whether these low prices are because the companies are more efficient than their competitors or because they’re being unfairly subsidized by the Chinese government is a matter to be determined at trade negotiations.
Finally, Americans must understand that higher prices are an inevitable result of banning cheaper tech products from China.
We might willingly pay the higher prices because we want domestic control of our telecommunications infrastructure. We might willingly pay more because of some protectionist belief that global trade is somehow bad. But we need to make these decisions to protect ourselves deliberately and rationally, recognizing both the risks and the costs. And while I’m worried about our 5G infrastructure built using Chinese hardware, I’m not worried about our subway cars.
This essay originally appeared on CNN.com.
EDITED TO ADD: I had a lot of trouble with CNN’s legal department with this essay. They were very reluctant to call out the US and its allies for similar behavior, and spent a lot more time adding caveats to statements that I didn’t think needed them. They wouldn’t let me link to this Intercept article talking about US, French, and German infiltration of supply chains, or even the NSA document from the Snowden archives that proved the statements.
Maciej Cegłowski has a really good essay explaining how to think about privacy today:
For the purposes of this essay, I’ll call it “ambient privacy” — the understanding that there is value in having our everyday interactions with one another remain outside the reach of monitoring, and that the small details of our daily lives should pass by unremembered. What we do at home, work, church, school, or in our leisure time does not belong in a permanent record. Not every conversation needs to be a deposition.
Until recently, ambient privacy was a simple fact of life. Recording something for posterity required making special arrangements, and most of our shared experience of the past was filtered through the attenuating haze of human memory. Even police states like East Germany, where one in seven citizens was an informer, were not able to keep tabs on their entire population. Today computers have given us that power. Authoritarian states like China and Saudi Arabia are using this newfound capacity as a tool of social control. Here in the United States, we’re using it to show ads. But the infrastructure of total surveillance is everywhere the same, and everywhere being deployed at scale.
Ambient privacy is not a property of people, or of their data, but of the world around us. Just like you can’t drop out of the oil economy by refusing to drive a car, you can’t opt out of the surveillance economy by forswearing technology (and for many people, that choice is not an option). While there may be worthy reasons to take your life off the grid, the infrastructure will go up around you whether you use it or not.
Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.
That is not consent.
Ambient privacy is particularly hard to protect where it extends into social and public spaces outside the reach of privacy law. If I’m subjected to facial recognition at the airport, or tagged on social media at a little league game, or my public library installs an always-on Alexa microphone, no one is violating my legal rights. But a portion of my life has been brought under the magnifying glass of software. Even if the data harvested from me is anonymized in strict conformity with the most fashionable data protection laws, I’ve lost something by the fact of being monitored.
He’s not the first person to talk about privacy as a societal property, or to use pollution metaphors. But his framing is really cogent, and “ambient privacy” is a good new phrasing.
The International Committee of the Red Cross has just published a report: “The Potential Human Cost of Cyber-Operations.” It’s the result of an “ICRC Expert Meeting” from last year, but was published this week.
Yesterday, I visited the NSA. It was Cyber Command’s birthday, but that’s not why I was there. I visited as part of the Berklett Cybersecurity Project, run out of the Berkman Klein Center and funded by the Hewlett Foundation. (BERKman hewLETT — get it? We have a web page, but it’s badly out of date.)
It was a full day of meetings, all unclassified but under the Chatham House Rule. Gen. Nakasone welcomed us and took questions at the start. Various senior officials spoke with us on a variety of topics, but mostly focused on three areas:
- Russian influence operations, both what the NSA and US Cyber Command did during the 2018 election and what they can do in the future;
- China and the threats to critical infrastructure from untrusted computer hardware, both the 5G network and more broadly;
- Machine learning, both how to ensure a ML system is compliant with all laws, and how ML can help with other compliance tasks.
It was all interesting. Those first two topics are ones that I am thinking and writing about, and it was good to hear their perspective. I find that I am much more closely aligned with the NSA about cybersecurity than I am about privacy, which made the meeting much less fraught than it would have been if we were discussing Section 702 of the FISA Amendments Act, Section 215 of the USA Freedom Act (up for renewal next year), or any Fourth Amendment violations. I don’t think we’re past those issues by any means, but they make up less of what I am working on.