Entries Tagged "botnets"


A Security Vulnerability in the KmsdBot Botnet

Security researchers found a software bug in the KmsdBot cryptomining botnet:

With no error-checking built in, sending KmsdBot a malformed command—like its controllers did one day while Akamai was watching—created a panic crash with an “index out of range” error. Because there’s no persistence, the bot stays down, and malicious agents would need to reinfect a machine and rebuild the bot’s functions. It is, as Akamai notes, “a nice story” and “a strong example of the fickle nature of technology.”
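
A minimal sketch of that failure mode, assuming a hypothetical command syntax: KmsdBot is reportedly written in Go, but the same class of bug is easy to show in Python, where indexing a too-short argument list raises a very similar “list index out of range” error. This is an illustration of the bug, not the bot’s actual code.

```python
# Illustration only -- not KmsdBot's code. A parser that indexes its argument
# list without a length check dies on a malformed command; with no persistence
# and no recovery logic, the bot then stays dead until it is reinstalled.

def handle_command_unsafe(line: str) -> None:
    parts = line.split()
    # Assumes every command arrives as "<verb> <target> <port> <duration>".
    verb, target, port, duration = parts[0], parts[1], parts[2], parts[3]
    print(f"would run {verb} against {target}:{port} for {duration}s")

def handle_command_safe(line: str) -> None:
    parts = line.split()
    if len(parts) != 4:                      # the missing error check
        print(f"ignoring malformed command: {line!r}")
        return
    verb, target, port, duration = parts
    print(f"would run {verb} against {target}:{port} for {duration}s")

if __name__ == "__main__":
    handle_command_safe("attack example.com 443 30")   # hypothetical syntax
    handle_command_safe("attack example.com")          # ignored gracefully
    handle_command_unsafe("attack example.com")        # IndexError -> crash
```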

Posted on December 15, 2022 at 7:10 AM

New Sophisticated Malware

Mandiant is reporting on a new botnet.

The group, which security firm Mandiant is calling UNC3524, has spent the past 18 months burrowing into victims’ networks with unusual stealth. In cases where the group is ejected, it wastes no time reinfecting the victim environment and picking up where things left off. There are many keys to its stealth, including:

  • The use of a unique backdoor Mandiant calls Quietexit, which runs on load balancers, wireless access point controllers, and other types of IoT devices that don’t support antivirus or endpoint detection. This makes detection through traditional means difficult.
  • Customized versions of the backdoor that use file names and creation dates that are similar to legitimate files used on a specific infected device.
  • A live-off-the-land approach that favors common Windows programming interfaces and tools over custom code with the goal of leaving as light a footprint as possible.
  • An unusual way a second-stage backdoor connects to attacker-controlled infrastructure by, in essence, acting as a TLS-encrypted server that proxies data through the SOCKS protocol.

[…]
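
The last bullet describes a familiar construction: a listener that speaks TLS on the outside and a bare-bones SOCKS service on the inside, so that operator traffic can be relayed through the compromised device. Below is a minimal sketch of that general idea, with a placeholder port and certificate files; it handles only the SOCKS5 CONNECT command and is not QUIETEXIT or any other UNC3524 tooling.

```python
# Minimal sketch of a TLS-wrapped SOCKS5 CONNECT relay -- an illustration of
# the technique described in the last bullet above, not UNC3524's QUIETEXIT.
# "cert.pem"/"key.pem" and port 8443 are placeholders.
import socket
import ssl
import struct
import threading

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client):
    try:
        client.recv(1)                         # SOCKS version byte (0x05)
        nmethods = client.recv(1)[0]
        client.recv(nmethods)                  # discard offered auth methods
        client.sendall(b"\x05\x00")            # reply: SOCKS5, no authentication
        _ver, cmd, _rsv, atyp = client.recv(4)
        if cmd != 1:                           # only CONNECT is supported here
            client.close()
            return
        if atyp == 1:                          # IPv4 address
            host = socket.inet_ntoa(client.recv(4))
        elif atyp == 3:                        # domain name
            host = client.recv(client.recv(1)[0]).decode()
        else:
            client.close()
            return
        port = struct.unpack(">H", client.recv(2))[0]
        upstream = socket.create_connection((host, port), timeout=10)
        client.sendall(b"\x05\x00\x00\x01" + b"\x00" * 6)   # success, 0.0.0.0:0
    except (OSError, ValueError, IndexError, struct.error):
        client.close()
        return
    # Relay bytes in both directions between the TLS client and the target.
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

def serve(port=8443):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("cert.pem", "key.pem")   # placeholder certificate
    with socket.create_server(("0.0.0.0", port)) as listener:
        while True:
            raw, _addr = listener.accept()
            try:
                conn = ctx.wrap_socket(raw, server_side=True)
            except (ssl.SSLError, OSError):
                raw.close()
                continue
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()
```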

Unpacking this threat group is difficult. From outward appearances, their focus on corporate transactions suggests a financial interest. But UNC3524’s high-caliber tradecraft, proficiency with sophisticated IoT botnets, and ability to remain undetected for so long suggests something more.

From Mandiant:

Throughout their operations, the threat actor demonstrated sophisticated operational security that we see only a small number of threat actors demonstrate. The threat actor evaded detection by operating from devices in the victim environment’s blind spots, including servers running uncommon versions of Linux and network appliances running opaque OSes. These devices and appliances were running versions of operating systems that were unsupported by agent-based security tools, and often had an expected level of network traffic that allowed the attackers to blend in. The threat actor’s use of the QUIETEXIT tunneler allowed them to largely live off the land, without the need to bring in additional tools, further reducing the opportunity for detection. This allowed UNC3524 to remain undetected in victim environments for, in some cases, upwards of 18 months.

Posted on May 4, 2022 at 6:15 AM

US Disrupts Russian Botnet

The Justice Department announced the disruption of a Russian GRU-controlled botnet:

The Justice Department today announced a court-authorized operation, conducted in March 2022, to disrupt a two-tiered global botnet of thousands of infected network hardware devices under the control of a threat actor known to security researchers as Sandworm, which the U.S. government has previously attributed to the Main Intelligence Directorate of the General Staff of the Armed Forces of the Russian Federation (the GRU). The operation copied and removed malware from vulnerable internet-connected firewall devices that Sandworm used for command and control (C2) of the underlying botnet. Although the operation did not involve access to the Sandworm malware on the thousands of underlying victim devices worldwide, referred to as “bots,” the disabling of the C2 mechanism severed those bots from the Sandworm C2 devices’ control.

The botnet “targets network devices manufactured by WatchGuard Technologies Inc. (WatchGuard) and ASUSTek Computer Inc. (ASUS).” And note that only the command-and-control mechanism was disrupted. Those devices are still vulnerable.

The Justice Department made a point of noting that it did this before the botnet was used for anything offensive.

Four more news articles. Slashdot post.

EDITED TO ADD (4/13): WatchGuard knew and fixed it nearly a year ago, but tried to keep it hidden. The patches were reverse-engineered.

Posted on April 7, 2022 at 9:31 AM

Illegal Content and the Blockchain

Security researchers have recently discovered a botnet with a novel defense against takedowns. Normally, authorities can disable a botnet by taking over its command-and-control server. With nowhere to go for instructions, the botnet is rendered useless. But over the years, botnet designers have come up with ways to make this counterattack harder. Now the content-delivery network Akamai has reported on a new method: a botnet that uses the Bitcoin blockchain ledger. Since the blockchain is globally accessible and hard to take down, the botnet’s operators appear to be safe.

It’s best to avoid explaining the mathematics of Bitcoin’s blockchain, but to understand the colossal implications here, you need to understand one concept. Blockchains are a type of “distributed ledger”: a record of all transactions since the beginning, and everyone using the blockchain needs to have access to—and reference—a copy of it. What if someone puts illegal material in the blockchain? Either everyone has a copy of it, or the blockchain’s security fails.

To be fair, not absolutely everyone who uses a blockchain holds a copy of the entire ledger. Many who buy cryptocurrencies like Bitcoin and Ethereum don’t bother using the ledger to verify their purchase. Many don’t actually hold the currency outright, and instead trust an exchange to do the transactions and hold the coins. But people need to continually verify the blockchain’s history on the ledger for the system to be secure. If they stopped, then it would be trivial to forge coins. That’s how the system works.
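
That continual verification is mechanical: any full participant can recompute a block’s hash and check it against the network’s difficulty target, which is what makes forged history expensive. Here is a minimal sketch of that single check, assuming you supply a real 80-byte header; it ignores everything else a real verifier does, such as the hash linkage between blocks, the Merkle root, and the transactions themselves.

```python
# Minimal sketch of one piece of ledger verification: does a Bitcoin-style
# 80-byte block header satisfy its proof-of-work target? A real verifier also
# checks the previous-block linkage, the Merkle root, and every transaction.
import hashlib

def block_hash(header: bytes) -> bytes:
    """Bitcoin hashes the 80-byte block header with double SHA-256."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def meets_target(header: bytes, target: int) -> bool:
    # The digest is read as a little-endian integer and must not exceed
    # the target derived from the header's difficulty ("bits") field.
    return int.from_bytes(block_hash(header), "little") <= target

# Usage sketch (you would paste real header hex here):
#   header = bytes.fromhex("<160 hex characters of a real block header>")
#   target = 0xFFFF * 2 ** 208    # the well-known difficulty-1 maximum target
#   print(meets_target(header, target))
```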

Some years ago, people started noticing all sorts of things embedded in the Bitcoin blockchain. There are digital images, including one of Nelson Mandela. There’s the Bitcoin logo, and the original paper describing Bitcoin by its alleged founder, the pseudonymous Satoshi Nakamoto. There are advertisements, and several prayers. There’s even illegal pornography and leaked classified documents. All of these were put in by anonymous Bitcoin users. But none of this, so far, appears to seriously threaten those in power in governments and corporations. Once someone adds something to the Bitcoin ledger, it becomes sacrosanct. Removing something requires a fork of the blockchain, in which Bitcoin fragments into multiple parallel cryptocurrencies (and associated blockchains). Forks happen, rarely, but never yet because of legal coercion. And repeated forking would destroy Bitcoin’s stature as a stable(ish) currency.
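
How does arbitrary data get into the ledger in the first place? There are several tricks, such as text encoded into fake addresses or stuffed into coinbase scripts, and the most straightforward is an OP_RETURN output, which lets a transaction carry a small, provably unspendable payload. The sketch below builds and reads back only the output script; it does not construct or broadcast a transaction, and it is not a claim about how the botnet mentioned earlier coordinates.

```python
# Minimal sketch of one way arbitrary bytes end up in the Bitcoin ledger:
# an OP_RETURN output script. This shows only the script encoding, not
# transaction construction, signing, or broadcast.
OP_RETURN = 0x6A

def op_return_script(payload: bytes) -> bytes:
    # A single length byte covers pushes up to 75 bytes; standard relay
    # policy caps OP_RETURN data at roughly 80 bytes anyway.
    if len(payload) > 75:
        raise ValueError("keep the payload small for this sketch")
    return bytes([OP_RETURN, len(payload)]) + payload

def extract_payload(script: bytes) -> bytes:
    if len(script) < 2 or script[0] != OP_RETURN:
        raise ValueError("not an OP_RETURN script")
    return script[2:2 + script[1]]

if __name__ == "__main__":
    script = op_return_script(b"any bytes at all, forever")
    print(script.hex())                  # what ends up in the transaction
    print(extract_payload(script))       # what anyone can read back out
```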

The botnet’s designers are using this idea to create an unblockable means of coordination, but the implications are much greater. Imagine someone using this idea to evade government censorship. Most Bitcoin mining happens in China. What if someone added a bunch of Chinese-censored Falun Gong texts to the blockchain?

What if someone added a type of political speech that Singapore routinely censors? Or cartoons that Disney holds the copyright to?

In Bitcoin’s and most other public blockchains there are no central, trusted authorities. Anyone in the world can perform transactions or become a miner. Everyone is equal to the extent that they have the hardware and electricity to perform cryptographic computations.

This openness is also a vulnerability, one that opens the door to asymmetric threats and small-time malicious actors. Anyone can put information in the one and only Bitcoin blockchain. Again, that’s how the system works.

Over the last three decades, the world has witnessed the power of open networks: blockchains, social media, the very web itself. What makes them so powerful is that their value is related not just to the number of users, but the number of potential links between users. This is Metcalfe’s law—value in a network is quadratic, not linear, in the number of users—and every open network since has followed its prophecy.

As Bitcoin has grown, its monetary value has skyrocketed, even if its uses remain unclear. With no barrier to entry, the blockchain space has been a Wild West of innovation and lawlessness. But today, many prominent advocates suggest Bitcoin should become a global, universal currency. In this context, asymmetric threats like embedded illegal data become a major challenge.

The philosophy behind Bitcoin traces to the earliest days of the open internet. Articulated in John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace, it was and is the ethos of tech startups: Code is more trustworthy than institutions. Information is meant to be free, and nobody has the right—and should not have the ability—to control it.

But information must reside somewhere. Code is written by and for people, stored on computers located within countries, and embedded within the institutions and societies we have created. To trust information is to trust its chain of custody and the social context it comes from. Neither code nor information is value-neutral, nor ever free of human context.

Today, Barlow’s vision is a mere shadow; every society controls the information its people can access. Some of this control is through overt censorship, as China controls information about Taiwan, Tiananmen Square, and the Uyghurs. Some of this is through civil laws designed by the powerful for their benefit, as with Disney and US copyright law, or UK libel law.

Bitcoin and blockchains like it are on a collision course with these laws. What happens when the interests of the powerful, with the law on their side, are pitted against an open blockchain? Let’s imagine how our various scenarios might play out.

China first: In response to Falun Gong texts in the blockchain, the People’s Republic decrees that any miners processing blocks with banned content will be taken offline—their IPs will be blacklisted. This causes a hard fork of the blockchain at the point just before the banned content. China might do this under the guise of a “patriotic” messaging campaign, publicly stating that it’s merely maintaining financial sovereignty from Western banks. Then it uses paid influencers and moderators on social media to pump the China Bitcoin fork, through both partisan comments and transactions. Two distinct forks would soon emerge, one behind China’s Great Firewall and one outside. Other countries with similar governmental and media ecosystems—Russia, Singapore, Myanmar—might consider following suit, creating multiple national Bitcoin forks. These would operate independently, under mandates to censor unacceptable transactions from then on.

Disney’s approach would play out differently. Imagine the company announces it will sue any ISP that hosts copyrighted content, starting with networks hosting the biggest miners. (Disney has sued to enforce its intellectual property rights in China before.) After some legal pressure, the networks cut the miners off. The miners reestablish themselves on another network, but Disney keeps the pressure on. Eventually miners get pushed further and further off of mainstream network providers, and resort to tunneling their traffic through an anonymity service like Tor. That causes a major slowdown in the already slow (because of the mathematics) Bitcoin network. Disney might issue takedown requests for Tor exit nodes, causing the network to slow to a crawl. It could persist like this for a long time without a fork. Or the slowdown could cause people to jump ship, either by forking Bitcoin or switching to another cryptocurrency without the copyrighted content.

And then there’s illegal pornographic content and leaked classified data. These have been on the Bitcoin blockchain for over five years, and nothing has been done about it. Just like the botnet example, it may be that these do not threaten existing power structures enough to warrant takedowns. This could easily change if Bitcoin becomes a popular way to share child sexual abuse material. Simply having these illegal images on your hard drive is a felony, which could have significant repercussions for anyone involved in Bitcoin.

Whichever scenario plays out, this may be the Achilles heel of Bitcoin as a global currency.

If an open network such as a blockchain were threatened by a powerful organization—China’s censors, Disney’s lawyers, or the FBI trying to take down a more dangerous botnet—it could fragment into multiple networks. That’s not just a nuisance, but an existential risk to Bitcoin.

Suppose Bitcoin were fragmented into 10 smaller blockchains, perhaps by geography: one in China, another in the US, and so on. These fragments might retain their original users, and by ordinary logic, nothing would have changed. But Metcalfe’s law implies that the overall value of these blockchain fragments combined would be a mere tenth of the original. That is because the value of an open network relates to how many others you can communicate with—and, in a blockchain, transact with. Since the security of bitcoin currency is achieved through expensive computations, fragmented blockchains are also easier to attack in a conventional manner—through a 51 percent attack—by an organized attacker. This is especially the case if the smaller blockchains all use the same hash function, as they would here.
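
The arithmetic behind both claims is worth making explicit: if network value grows as the square of the user count, splitting n users into k equal fragments leaves k·(n/k)² = n²/k of the original value, a tenth of it for k = 10, and if hash power splits the same way, a 51 percent attack on any one fragment needs a tenth of the resources. A quick check with arbitrary numbers:

```python
# Back-of-the-envelope check of the fragmentation argument above.
# The absolute numbers are arbitrary; only the ratios matter.
n_users = 1_000_000            # users of the original, unified network
k = 10                         # number of equal fragments

value_whole = n_users ** 2                     # Metcalfe: value ~ n^2
value_fragments = k * (n_users // k) ** 2      # k networks of n/k users each
print(value_fragments / value_whole)           # 0.1 -- a tenth of the value

hashrate = 100.0                               # arbitrary units of hash power
attack_whole = 0.51 * hashrate                 # power to attack the whole chain
attack_fragment = 0.51 * (hashrate / k)        # power to attack one fragment
print(attack_fragment / attack_whole)          # 0.1 -- ten times cheaper
```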

Traditional currencies are generally not vulnerable to these sorts of asymmetric threats. There are no viable small-scale attacks against the US dollar, or almost any other fiat currency. The institutions and beliefs that give money its value are deep-seated, despite instances of currency hyperinflation.

The only notable attacks against fiat currencies are in the form of counterfeiting. Even in the past, when counterfeit bills were common, attacks could be thwarted. Counterfeiters require specialized equipment and are vulnerable to law enforcement discovery and arrest. Furthermore, most money today—even if it’s nominally in a fiat currency—doesn’t exist in paper form.

Bitcoin attracted a following for its openness and immunity from government control. Its goal is to create a world that replaces cultural power with cryptographic power: verification in code, not trust in people. But there is no such world. And today, that feature is a vulnerability. We really don’t know what will happen when the human systems of trust come into conflict with the trustless verification that makes blockchain currencies unique. Just last week we saw this exact attack on smaller blockchains—not Bitcoin yet. We are watching a public socio-technical experiment in the making, and we will witness its success or failure in the not-too-distant future.

This essay was written with Barath Raghavan, and previously appeared on Wired.com.

EDITED TO ADD (4/14): A research paper on erasing data from the Bitcoin blockchain.

Posted on March 17, 2021 at 6:10 AM

Police Have Disrupted the Emotet Botnet

A coordinated effort has captured the command-and-control servers of the Emotet botnet:

Emotet establishes a backdoor onto Windows computer systems via automated phishing emails that distribute Word documents compromised with malware. Subjects of emails and documents in Emotet campaigns are regularly altered to provide the best chance of luring victims into opening emails and installing malware—regular themes include invoices, shipping notices and information about COVID-19.

Those behind Emotet lease their army of infected machines out to other cyber criminals as a gateway for additional malware attacks, including remote access tools (RATs) and ransomware.

[…]

A week of action by law enforcement agencies around the world gained control of Emotet’s infrastructure of hundreds of servers around the world and disrupted it from the inside.

Machines infected by Emotet are now directed to infrastructure controlled by law enforcement, meaning cyber criminals can no longer exploit the compromised machines, and the malware can no longer spread to new targets, something which will cause significant disruption to cyber-criminal operations.

[…]

The Emotet takedown is the result of over two years of coordinated work by law enforcement operations around the world, including the Dutch National Police, Germany’s Federal Crime Police, France’s National Police, the Lithuanian Criminal Police Bureau, the Royal Canadian Mounted Police, the US Federal Bureau of Investigation, the UK’s National Crime Agency, and the National Police of Ukraine.

EDITED TO ADD (2/11): Follow-on article.

Posted on January 28, 2021 at 6:02 AM

US Cyber Command and Microsoft Are Both Disrupting TrickBot

Earlier this month, we learned that someone is disrupting the TrickBot botnet.

Over the past 10 days, someone has been launching a series of coordinated attacks designed to disrupt Trickbot, an enormous collection of more than two million malware-infected Windows PCs that are constantly being harvested for financial data and are often used as the entry point for deploying ransomware within compromised organizations.

On Sept. 22, someone pushed out a new configuration file to Windows computers currently infected with Trickbot. The crooks running the Trickbot botnet typically use these config files to pass new instructions to their fleet of infected PCs, such as the Internet address where hacked systems should download new updates to the malware.

But the new configuration file pushed on Sept. 22 told all systems infected with Trickbot that their new malware control server had the address 127.0.0.1, which is a “localhost” address that is not reachable over the public Internet, according to an analysis by cyber intelligence firm Intel 471.
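
Why pointing bots at 127.0.0.1 works as a disruption: loopback traffic never leaves the infected machine, so every attempt to fetch new instructions talks only to the bot’s own host. A minimal sketch of the idea, using a made-up configuration format (a list of “host:port” strings), which is not Trickbot’s actual format:

```python
# Minimal sketch of why a C2 address of 127.0.0.1 neuters a bot: loopback
# traffic never leaves the infected machine, so "calling home" goes nowhere.
# The config format below (a list of "host:port" strings) is hypothetical,
# not Trickbot's, and 203.0.113.5 is a documentation address standing in
# for a real controller.
import ipaddress

def live_controllers(entries):
    live = []
    for entry in entries:
        host, _, _port = entry.rpartition(":")
        try:
            addr = ipaddress.ip_address(host)
        except ValueError:
            live.append(entry)        # a hostname: can't judge without DNS
            continue
        if addr.is_loopback:
            print(f"{entry}: loopback -- the bot would only talk to itself")
        else:
            live.append(entry)
    return live

print(live_controllers(["203.0.113.5:443", "127.0.0.1:443"]))
# -> ['203.0.113.5:443']; the sinkholed loopback entry is discarded
```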

A few days ago, the Washington Post reported that it’s the work of US Cyber Command:

U.S. Cyber Command’s campaign against the Trickbot botnet, an army of at least 1 million hijacked computers run by Russian-speaking criminals, is not expected to permanently dismantle the network, said four U.S. officials, who spoke on the condition of anonymity because of the matter’s sensitivity. But it is one way to distract them at least for a while as they seek to restore operations.

The network is controlled by “Russian-speaking criminals,” and the fear is that it will be used to disrupt the US election next month.

The effort is part of what Gen. Paul Nakasone, the head of Cyber Command, calls “persistent engagement,” or the imposition of cumulative costs on an adversary by keeping them constantly engaged. And that is a key feature of CyberCom’s activities to help protect the election against foreign threats, officials said.

Here’s General Nakasone talking about persistent engagement.

Microsoft is also disrupting Trickbot:

We disrupted Trickbot through a court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world. We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.

[…]

We took today’s action after the United States District Court for the Eastern District of Virginia granted our request for a court order to halt Trickbot’s operations.

During the investigation that underpinned our case, we were able to identify operational details including the infrastructure Trickbot used to communicate with and control victim computers, the way infected computers talk with each other, and Trickbot’s mechanisms to evade detection and attempts to disrupt its operation. As we observed the infected computers connect to and receive instructions from command and control servers, we were able to identify the precise IP addresses of those servers. With this evidence, the court granted approval for Microsoft and our partners to disable the IP addresses, render the content stored on the command and control servers inaccessible, suspend all services to the botnet operators, and block any effort by the Trickbot operators to purchase or lease additional servers.

To execute this action, Microsoft formed an international group of industry and telecommunications providers. Our Digital Crimes Unit (DCU) led investigation efforts including detection, analysis, telemetry, and reverse engineering, with additional data and insights to strengthen our legal case from a global network of partners including FS-ISAC, ESET, Lumen’s Black Lotus Labs, NTT and Symantec, a division of Broadcom, in addition to our Microsoft Defender team. Further action to remediate victims will be supported by internet service providers (ISPs) and computer emergency readiness teams (CERTs) around the world.

This action also represents a new legal approach that our DCU is using for the first time. Our case includes copyright claims against Trickbot’s malicious use of our software code. This approach is an important development in our efforts to stop the spread of malware, allowing us to take civil action to protect customers in the large number of countries around the world that have these laws in place.

Brian Krebs comments:

In legal filings, Microsoft argued that Trickbot irreparably harms the company “by damaging its reputation, brands, and customer goodwill. Defendants physically alter and corrupt Microsoft products such as the Microsoft Windows products. Once infected, altered and controlled by Trickbot, the Windows operating system ceases to operate normally and becomes tools for Defendants to conduct their theft.”

This is a novel use of copyright law.

Posted on October 15, 2020 at 6:01 AM

Half a Million IoT Device Passwords Published

It’s a list of easy-to-guess passwords for IoT devices on the Internet as recently as last October and November. Useful for anyone putting together a bot network:

A hacker has published this week a massive list of Telnet credentials for more than 515,000 servers, home routers, and IoT (Internet of Things) “smart” devices.

The list, which was published on a popular hacking forum, includes each device’s IP address, along with a username and password for the Telnet service, a remote access protocol that can be used to control devices over the internet.

According to experts to whom ZDNet spoke this week, and a statement from the leaker himself, the list was compiled by scanning the entire internet for devices that were exposing their Telnet port. The hacker then tried using (1) factory-set default usernames and passwords, or (2) custom, but easy-to-guess password combinations.
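
The scan described above needs nothing more sophisticated than a TCP connection to port 23 followed by login attempts from a list of default credentials. The sketch below is a defensive counterpart you can run against devices you own; it only checks whether a Telnet service is reachable at all, and the address is a placeholder for something on your own network.

```python
# Minimal defensive sketch: check whether a device you own still exposes a
# Telnet service at all. (The attacker's next step -- trying default or
# easy-to-guess logins -- is deliberately omitted.) The address below is a
# placeholder for a device on your own network.
import socket

def telnet_exposed(host: str, port: int = 23, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True          # something answered on the Telnet port
    except OSError:
        return False

if __name__ == "__main__":
    device = "192.168.1.1"       # placeholder: your own router or camera
    if telnet_exposed(device):
        print(f"{device} is accepting Telnet connections; disable the service "
              "or change any default credentials")
    else:
        print(f"{device} does not appear to expose Telnet")
```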

Posted on January 22, 2020 at 6:09 AM

Artificial Personas and Public Discourse

Presidential campaign season is officially, officially, upon us now, which means it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial intelligence-driven text generation and social media chatbots. These computer-generated “people” will drown out actual human discussions on the Internet.

Text-generation software is already good enough to fool most people most of the time. It’s writing news stories, particularly in sports and finance. It’s talking with customers on merchant websites. It’s writing convincing op-eds on topics in the news (though there are limitations). And it’s being used to bulk up “pink-slime journalism”—websites meant to appear like legitimate local news outlets but that publish propaganda instead.

There’s a record of algorithmic content pretending to be from individuals, as well. In 2017, the Federal Communications Commission had an online public-commenting period for its plans to repeal net neutrality. A staggering 22 million comments were received. Many of them—maybe half—were fake, using stolen identities. These comments were also crude; 1.3 million were generated from the same template, with some words altered to make them appear unique. They didn’t stand up to even cursory scrutiny.
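
The “same template, with some words altered” technique is trivially cheap, and almost as trivial to catch once the text is normalized, which is why those comments failed even cursory scrutiny. A small sketch with an invented template and word lists (the detection step cheats by knowing them, which a real analyst would have to reconstruct):

```python
# Illustration of template-based comment generation and why it is easy to
# spot: swapping synonyms into one skeleton leaves an obvious shared pattern.
# The template and word lists here are invented for the example.
import itertools
import re
from collections import Counter

template = "I {verb} the plan to {action} net neutrality because it {effect} consumers."
choices = {
    "verb": ["oppose", "reject", "object to"],
    "action": ["repeal", "roll back", "undo"],
    "effect": ["harms", "hurts", "damages"],
}

# Generate every variant the way a low-effort campaign might.
variants = [
    template.format(verb=v, action=a, effect=e)
    for v, a, e in itertools.product(*choices.values())
]

# Crude detection (cheating, because we know the synonym lists): collapse each
# synonym back to a slot marker and count how many distinct skeletons remain.
def skeleton(text: str) -> str:
    for slot, words in choices.items():
        pattern = "|".join(re.escape(w) for w in sorted(words, key=len, reverse=True))
        text = re.sub(pattern, f"<{slot}>", text)
    return text

print(len(variants), "comments collapse to",
      len(Counter(skeleton(v) for v in variants)), "skeleton(s)")
```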

These efforts will only get more sophisticated. In a recent experiment, Harvard senior Max Weiss used a text-generation program to create 1,000 comments in response to a government call on a Medicaid issue. These comments were all unique, and sounded like real people advocating for a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. This being research, Weiss subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. The next group to try this won’t be so honorable.

Chatbots have been skewing social-media discussions for years. About a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote. An Oxford Internet Institute report from last year found evidence of bots being used to spread propaganda in 50 countries. These tended to be simple programs mindlessly repeating slogans: a quarter million pro-Saudi “We all have trust in Mohammed bin Salman” tweets following the 2018 murder of Jamal Khashoggi, for example. Detecting many bots with a few followers each is harder than detecting a few bots with lots of followers. And measuring the effectiveness of these bots is difficult. The best analyses indicate that they did not affect the 2016 US presidential election. More likely, they distort people’s sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.

Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos—sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people, based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won’t be so easily identified. They’ll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.

Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.

Soon, AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media. They will be able to comment on social-media posts, news sites, and elsewhere, creating persistent personas that seem real even to someone scrutinizing them. They will be able to pose as individuals on social media and send personalized texts. They will be replicated in the millions and engage on the issues around the clock, sending billions of messages, long and short. Putting all this together, they’ll be able to drown out any actual debate on the Internet. Not just on social media, but everywhere there’s commentary.

Maybe these persona bots will be controlled by foreign actors. Maybe it’ll be domestic political groups. Maybe it’ll be the candidates themselves. Most likely, it’ll be everybody. The most important lesson from the 2016 election about misinformation isn’t that misinformation occurred; it is how cheap and easy misinforming people was. Future technological improvements will make it all even more affordable.

Our future will consist of boisterous political debate, mostly bots arguing with other bots. This is not what we think of when we laud the marketplace of ideas, or any democratic political process. Democracy requires two things to function properly: information and agency. Artificial personas can starve people of both.

Solutions are hard to imagine. We can regulate the use of bots—a proposed California law would require bots to identify themselves—but that is effective only against legitimate influence campaigns, such as advertising. Surreptitious influence operations will be much harder to detect. The most obvious defense is to develop and standardize better authentication methods. If social networks verify that an actual person is behind each account, then they can better weed out fake personas. But fake accounts are already regularly created for real people without their knowledge or consent, and anonymous speech is essential for robust political debate, especially when speakers are from disadvantaged or marginalized communities. We don’t have an authentication system that both protects privacy and scales to the billions of users.

We can hope that our ability to identify artificial personas keeps up with our ability to disguise them. If the arms race between deep fakes and deep-fake detectors is any guide, that’ll be hard as well. The technologies of obfuscation always seem one step ahead of the technologies of detection. And artificial personas will be designed to act exactly like real people.

In the end, any solutions have to be nontechnical. We have to recognize the limitations of online political conversation, and again prioritize face-to-face interactions. These are harder to automate, and we know the people we’re talking with are actual people. This would be a cultural shift away from the internet and text, stepping back from social media and comment threads. Today that seems like a completely unrealistic solution.

Misinformation efforts are now common around the globe, conducted in more than 70 countries. This is the normal way to push propaganda in countries with authoritarian leanings, and it’s becoming the way to run a political campaign, for either a candidate or an issue.

Artificial personas are the future of propaganda. And while they may not be effective in tilting debate to one side or another, they easily drown out debate entirely. We don’t know the effect of that noise on democracy, only that it’ll be pernicious, and that it’s inevitable.

This essay previously appeared in TheAtlantic.com.

EDITED TO ADD: Jamie Susskind wrote a similar essay.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

EDITED TO ADD (6/4): This essay has been translated into Portuguese.

Posted on January 13, 2020 at 8:21 AM

Japanese Government Will Hack Citizens' IoT Devices

The Japanese government is going to run penetration tests against all the IoT devices in their country, in an effort to (1) figure out what’s insecure, and (2) help consumers secure them:

The survey is scheduled to kick off next month, when authorities plan to test the password security of over 200 million IoT devices, beginning with routers and web cameras. Devices in people’s homes and on enterprise networks will be tested alike.

[…]

The Japanese government’s decision to log into users’ IoT devices has sparked outrage in Japan. Many have argued that this is an unnecessary step, as the same results could be achieved by just sending a security alert to all users, as there’s no guarantee that the users found to be using default or easy-to-guess passwords would change their passwords after being notified in private.

However, the government’s plan has its technical merits. Many of today’s IoT and router botnets are being built by hackers who take over devices with default or easy-to-guess passwords.

Hackers can also build botnets with the help of exploits and vulnerabilities in router firmware, but the easiest way to assemble a botnet is by collecting the devices that users have failed to secure with custom passwords.

Securing these devices is often a pain: some expose Telnet or SSH ports online without the users’ knowledge, and very few users know how to change the passwords for those services. Other devices come with secret backdoor accounts that in some cases can’t be removed without a firmware update.
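
At the simplest level, the default-password problem can be self-audited without touching the network at all: compare a device’s credentials against a short list of the usual factory defaults. A minimal sketch, using a tiny illustrative subset of widely published defaults rather than whatever list the survey will actually use:

```python
# Minimal self-audit sketch for the problem described above: is a device's
# login one of the usual factory defaults? The list here is a tiny,
# illustrative subset of widely published IoT defaults, not the survey's
# actual test set.
COMMON_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("admin", "1234"),
    ("root", "root"),
    ("root", "12345"),
    ("user", "user"),
    ("support", "support"),
}

def looks_like_factory_default(username: str, password: str) -> bool:
    return (username.lower(), password.lower()) in COMMON_DEFAULTS or password == ""

if __name__ == "__main__":
    print(looks_like_factory_default("admin", "admin"))        # True
    print(looks_like_factory_default("admin", "7x!pQ-2024"))   # False
```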

I am interested in the results of this survey. Japan isn’t very different from other industrialized nations in this regard, so their findings will be general. I am less optimistic about the country’s ability to secure all of this stuff—especially before the 2020 Summer Olympics.

Posted on January 28, 2019 at 1:40 PM

