Entries Tagged "military"
The US Air Force is focusing on cyber deception next year:
Background: Deception is a deliberate act to conceal activity on our networks, create uncertainty and confusion against the adversary’s efforts to establish situational awareness and to influence and misdirect adversary perceptions and decision processes. Military deception is defined as “those actions executed to deliberately mislead adversary decision makers as to friendly military capabilities, intentions, and operations, thereby causing the adversary to take specific actions (or inactions) that will contribute to the accomplishment of the friendly mission.” Military forces have historically used techniques such as camouflage, feints, chaff, jammers, fake equipment, false messages or traffic to alter an enemy’s perception of reality. Modern day military planners need a capability that goes beyond the current state-of-the-art in cyber deception to provide a system or systems that can be employed by a commander when needed to enable deception to be inserted into defensive cyber operations.
Relevance and realism are the grand technical challenges to cyber deception. The application of the proposed technology must be relevant to operational and support systems within the DoD. The DoD operates within a highly standardized environment. Any technology that significantly disrupts or increases the cost to the standard of practice will not be adopted. If the technology is adopted, the defense system must appear legitimate to the adversary trying to exploit it.
Objective: To provide cyber-deception capabilities that could be employed by commanders to provide false information, confuse, delay, or otherwise impede cyber attackers to the benefit of friendly forces. Deception mechanisms must be incorporated in such a way that they are transparent to authorized users, and must introduce minimal functional and performance impacts, in order to disrupt our adversaries and not ourselves. As such, proposed techniques must consider how challenges relating to transparency and impact will be addressed. The security of such mechanisms is also paramount, so that their power is not co-opted by attackers against us for their own purposes. These techniques are intended to be employed for defensive purposes only on networks and systems controlled by the DoD.
Advanced techniques are needed with a focus on introducing varying deception dynamics in network protocols and services which can severely impede, confound, and degrade an attacker’s methods of exploitation and attack, thereby increasing the costs and limiting the benefits gained from the attack. The emphasis is on techniques that delay the attacker in the reconnaissance through weaponization stages of an attack and also aid defenses by forcing an attacker to move and act in a more observable manner. Techniques across the host and network layers or a hybrid thereof are of interest in order to provide AF cyber operations with effective, flexible, and rapid deployment options.
More discussion here.
The Maryland Air National Guard needs a new facility for its cyberwar operations:
The purpose of this facility is to house a Network Warfare Group and ISR Squadron. The Cyber mission includes a set of capabilities and expertise to enable the cyber operational need for an always-on, net-speed awareness and integrated operational response with global reach. It enables operators to drive upstream in pursuit of cyber adversaries, and is informed 24/7 by intelligence and all-source information.
Is this something we want the Maryland Air National Guard to get involved in?
Glenn Greenwald is back reporting about the NSA, now with Pierre Omidyar’s news organization FirstLook and its introductory publication, The Intercept. Writing with national security reporter Jeremy Scahill, his first article covers how the NSA helps target individuals for assassination by drone.
Leaving aside the extensive political implications of the story, the article and the NSA source documents reveal additional information about how the agency’s programs work. From this and other articles, we can now piece together how the NSA tracks individuals in the real world through their actions in cyberspace.
The NSA's techniques for locating someone based on their electronic activities are conceptually straightforward, although they require an enormous capability to monitor data networks. One set of techniques involves the cell phone network; the other, the Internet.
Tracking Locations With Cell Towers
Every cell-phone network knows the approximate location of all phones capable of receiving calls. This is necessary to make the system work; if the system doesn’t know what cell you’re in, it isn’t able to route calls to your phone. We already know that the NSA conducts physical surveillance on a massive scale using this technique.
By triangulating location information from different cell phone towers, cell phone providers can geolocate phones more accurately. This is often done to direct emergency services to a particular person, such as someone who has made a 911 call. The NSA can get this data either by network eavesdropping with the cooperation of the carrier, or by intercepting communications between the cell phones and the towers. A previously released Top Secret NSA document says this: "GSM Cell Towers can be used as a physical-geolocation point in relation to a GSM handset of interest."
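The math behind this kind of tower-based geolocation is standard trilateration. As a toy illustration — with made-up tower positions and exact distances, nothing like the noisy estimates a real carrier works with — here is how three tower-to-handset distance estimates pin down a 2D position:

```python
import math

def trilaterate(towers, distances):
    """Estimate a 2D position from three known tower locations and
    estimated handset distances (e.g. from signal timing or strength).
    Subtracting the circle equations pairwise yields two linear
    equations in (x, y), which we solve directly."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    r1, r2, r3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical towers (positions in km); the phone is actually at (2, 1).
towers = [(0, 0), (5, 0), (0, 5)]
phone = (2.0, 1.0)
dists = [math.dist(phone, t) for t in towers]
print(trilaterate(towers, dists))  # ≈ (2.0, 1.0)
```

In practice the distances are noisy, so providers fuse many measurements rather than intersecting three ideal circles, but the geometry is the same.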
This technique becomes even more powerful if you can employ a drone. Greenwald and Scahill write:
The agency also equips drones and other aircraft with devices known as "virtual base-tower transceivers"—creating, in effect, a fake cell phone tower that can force a targeted person’s device to lock onto the NSA’s receiver without their knowledge.
The drone can do this multiple times as it flies around the area, measuring the signal strength—and inferring distance—each time. Again from the Intercept article:
The NSA geolocation system used by JSOC is known by the code name GILGAMESH. Under the program, a specially constructed device is attached to the drone. As the drone circles, the device locates the SIM card or handset that the military believes is used by the target.
The Top Secret source document associated with the Intercept story says:
As part of the GILGAMESH (PREDATOR-based active geolocation) effort, this team used some advanced mathematics to develop a new geolocation algorithm intended for operational use on unmanned aerial vehicle (UAV) flights.
This is at least part of that advanced mathematics.
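We don't know the actual algorithm, but the general approach described — circling while measuring signal strength and inferring distance at each pass — is a classic least-squares multilateration problem. Here is a rough sketch under invented conditions (a circular flight path, exact distances, simple gradient descent), not a reconstruction of the classified system:

```python
import math

def locate(measurements, iters=1000, lr=0.05):
    """Least-squares position estimate from (x, y, distance) tuples
    collected at several points along a flight path. Minimizes the
    sum of (computed range - measured distance)^2 by gradient descent,
    starting from the centroid of the measurement points."""
    x = sum(m[0] for m in measurements) / len(measurements)
    y = sum(m[1] for m in measurements) / len(measurements)
    for _ in range(iters):
        gx = gy = 0.0
        for mx, my, d in measurements:
            r = math.hypot(x - mx, y - my) or 1e-9
            err = r - d
            gx += err * (x - mx) / r
            gy += err * (y - my) / r
        x -= lr * gx
        y -= lr * gy
    return x, y

# Simulated flight: 8 measurement points on a 10 km circle around the
# area; the target handset is actually at (3, 4). Exact distances here;
# a real system would infer them noisily from signal strength.
target = (3.0, 4.0)
points = [(10 * math.cos(k * math.pi / 4), 10 * math.sin(k * math.pi / 4))
          for k in range(8)]
measurements = [(px, py, math.dist((px, py), target)) for px, py in points]
print(locate(measurements))  # converges close to (3, 4)
```

The more passes the drone makes, the more overdetermined the system becomes and the better the estimate averages out measurement noise.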
None of this works if the target turns his phone off or swaps SIM cards frequently with his colleagues, which Greenwald and Scahill write is routine. It won't work in much of Yemen, which isn't on any cell phone network. Because of this, the NSA also tracks people based on their actions on the Internet.
Finding You From Your Web Connection
A surprisingly large number of Internet applications leak location data. Applications on your smart phone can transmit location data from your GPS receiver over the Internet. We already know that the NSA collects this data to determine location. Also, many applications transmit the IP address of the network the computer is connected to. If the NSA has a database of IP addresses and locations, it can use that to locate users.
According to a previously released Top Secret NSA document, that program is code named HAPPYFOOT: "The HAPPYFOOT analytic aggregated leaked location-based service / location-aware application data to infer IP geo-locations."
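The document only names the analytic, but the described aggregation is easy to picture: collect many leaked (IP address, location) pairs from location-aware apps, then reduce them to an IP-to-location table. A minimal sketch, with hypothetical records using documentation-reserved IP ranges:

```python
from collections import defaultdict

# Hypothetical leaked location-service records: (ip, lat, lon).
# A HAPPYFOOT-style analytic would aggregate huge numbers of these.
leaks = [
    ("203.0.113.7", 48.858, 2.294),
    ("203.0.113.7", 48.861, 2.297),
    ("198.51.100.4", 51.501, -0.142),
]

def build_geo_table(records):
    """Average all leaked coordinates seen for each IP address,
    producing an IP -> (lat, lon) lookup table."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for ip, lat, lon in records:
        s = sums[ip]
        s[0] += lat
        s[1] += lon
        s[2] += 1
    return {ip: (s[0] / s[2], s[1] / s[2]) for ip, s in sums.items()}

table = build_geo_table(leaks)
print(table["203.0.113.7"])  # ≈ (48.8595, 2.2955)
```

Once the table exists, any observed connection from a known IP address yields a location estimate without the target's device leaking anything further.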
Another way to get this data is to collect it from the geographical area you’re interested in. Greenwald and Scahill talk about exactly this:
In addition to the GILGAMESH system used by JSOC, the CIA uses a similar NSA platform known as SHENANIGANS. The operation—previously undisclosed—utilizes a pod on aircraft that vacuums up massive amounts of data from any wireless routers, computers, smart phones or other electronic devices that are within range.
And again from an NSA document associated with the FirstLook story: “Our mission (VICTORYDANCE) mapped the Wi-Fi fingerprint of nearly every major town in Yemen.” In the hacker world, this is known as war-driving, and has even been demonstrated from drones.
Another story from the Snowden documents describes a research effort to locate individuals based on the location of wifi networks they log into.
This is how the NSA can find someone, even when their cell phone is turned off and their SIM card is removed. If they’re at an Internet café, and they log into an account that identifies them, the NSA can locate them—because the NSA already knows where that wifi network is.
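The lookup side of such a Wi-Fi fingerprint map is trivial once the survey exists. A toy sketch — the BSSIDs and coordinates below are invented — of estimating a client's position from the surveyed locations of the access points it can see:

```python
# Hypothetical war-driving survey: BSSID -> surveyed (lat, lon).
wifi_map = {
    "aa:bb:cc:00:00:01": (15.3694, 44.1910),
    "aa:bb:cc:00:00:02": (15.3700, 44.1925),
    "aa:bb:cc:00:00:03": (13.5795, 44.0209),
}

def locate_client(observed_bssids, ap_map):
    """Estimate a client's position as the centroid of the surveyed
    positions of the access points it reports seeing. Returns None
    if no observed BSSID is in the survey."""
    hits = [ap_map[b] for b in observed_bssids if b in ap_map]
    if not hits:
        return None
    lat = sum(p[0] for p in hits) / len(hits)
    lon = sum(p[1] for p in hits) / len(hits)
    return lat, lon

print(locate_client(["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"], wifi_map))
```

This is the same technique commercial location services use with their own war-driving databases; the difference is who built the map and of where.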
This also explains the drone assassination of Hassan Ghul, also reported in the Washington Post last October. In the story, Ghul was at an Internet café when he read an email from his wife. Although the article doesn't describe how that email was intercepted by the NSA, the NSA was able to use it to determine his location.
There’s almost certainly more. NSA surveillance is robust, and they almost certainly have several different ways of identifying individuals on cell phone and Internet connections. For example, they can hack individual smart phones and force them to divulge location information.
As fascinating as the technology is, the critical policy question—and the one discussed extensively in the FirstLook article—is how reliable all this information is. While much of the NSA’s capabilities to locate someone in the real world by their network activity piggy-backs on corporate surveillance capabilities, there’s a critical difference: False positives are much more expensive. If Google or Facebook get a physical location wrong, they show someone an ad for a restaurant they’re nowhere near. If the NSA gets a physical location wrong, they call a drone strike on innocent people.
As we move to a world where all of us are tracked 24/7, these are the sorts of trade-offs we need to keep in mind.
This essay previously appeared on TheAtlantic.com.
Edited to add: this essay has been translated into French.
Rise of the Warrior Cop: The Militarization of America’s Police Forces, by Radley Balko, PublicAffairs, 2013, 400 pages.
War as a rhetorical concept is firmly embedded in American culture. Over the past several decades, federal and local law enforcement has been enlisted in a war on crime, a war on drugs and a war on terror. These wars are more than just metaphors designed to rally public support and secure budget appropriations. They change the way we think about what the police do. Wars mean shooting first and asking questions later. Wars require military tactics and weaponry. Wars mean civilian casualties.
Over the decades, the war metaphor has resulted in drastic changes in the way the police operate. At both federal and state levels, the formerly hard line between police and military has blurred. Police are increasingly using military weaponry, employing military tactics and framing their mission using military terminology. Right now, there is a Third Amendment case — that’s the one about quartering soldiers in private homes without consent — making its way through the courts. It involves someone who refused to allow the police to occupy his home in order to gain a “tactical advantage” against the house next-door. The police returned later, broke down his door, forced him to the floor and then arrested him for obstructing an officer. They also shot his dog with pepperball rounds. It’s hard to argue with the premise of this case; police officers are acting so much like soldiers that it can be hard to tell the difference.
In Rise of the Warrior Cop, Radley Balko chronicles the steady militarization of the police in the U.S. A detailed history of a dangerous trend, Mr. Balko’s book tracks police militarization over the past 50 years, a period that not coincidentally corresponds with the rise of SWAT teams. First established in response to the armed riots of the late 1960s, they were originally exclusive to big cities and deployed only against heavily armed and dangerous criminals. Today SWAT teams are nothing special. They’ve multiplied like mushrooms. Every city has a SWAT team; 80% of towns between 25,000 and 50,000 people do as well. These teams are busy; in 2005 there were between 50,000 and 60,000 SWAT raids in the U.S. The tactics are pretty much what you would expect — breaking down doors, rushing in with military weaponry, tear gas — but the targets aren’t. SWAT teams are routinely deployed against illegal poker games, businesses suspected of employing illegal immigrants and barbershops with unlicensed hair stylists.
In Prince George’s County, MD, alone, SWAT teams were deployed about once a day in 2009, overwhelmingly to serve search or arrest warrants, and half of those warrants were for “misdemeanors and nonserious felonies.” Much of Mr. Balko’s data is approximate, because police departments don’t publish data, and they uniformly oppose any attempts at transparency or oversight. But he has good Maryland data from 2009 on, because after the mayor of Berwyn Heights was mistakenly attacked and terrorized in his home by a SWAT team in 2008, the state passed a law requiring police to report quarterly on their use of SWAT teams: how many times, for what purposes and whether any shots were fired during the raids.
Besides documenting policy decisions at the federal and state levels, the author examines the influence of military contractors who have looked to expand into new markets. And he tells some pretty horrific stories of SWAT raids gone wrong. A lot of dogs get shot in the book. Most interesting are the changing attitudes of police. As the stories progress from the 1960s to the 2000s, we see police shift from being uncomfortable with military weapons and tactics — and deploying them only as the very last resort in the most extreme circumstances — to accepting and even embracing their routine use.
This development coincides with the rhetorical use of the word “war.” To the police, civilians are citizens to protect. To the military, we are a population to be subdued. Wars can temporarily override the Constitution. When the Justice Department walks into Congress with requests for money and new laws to fight a war, it is going to get a different response than if it came in with a story about fighting crime. Maybe the most chilling quotation in the book is from William French Smith, President Reagan’s first attorney general: “The Justice Department is not a domestic agency. It is the internal arm of national defense.” Today we see that attitude in the war on terror. Because it’s a war, we can arrest and imprison Americans indefinitely without charges. We can eavesdrop on the communications of all Americans without probable cause. We can assassinate American citizens without due process. We can have secret courts issuing secret rulings about secret laws. The militarization of the police is just one aspect of an increasing militarization of government.
Mr. Balko saves his prescriptions for reform until the last chapter. Two of his fixes, transparency and accountability, are good remedies for all governmental overreach. Specific to police departments, he also recommends halting mission creep, changing police culture and embracing community policing. These are far easier said than done. His final fix is ending the war on drugs, the source of much police violence. To this I would add ending the war on terror, another rhetorical war that costs us hundreds of billions of dollars, gives law enforcement powers directly prohibited by the Constitution and leaves us no safer.
This essay originally appeared in the Wall Street Journal.
This is an extraordinary (and gut-wrenching) first-person account of what it’s like to staff an Israeli security checkpoint. It shows how power corrupts: how it’s impossible to make humane decisions in such a circumstance.
To make a long story short, McPhee describes two things: how Switzerland requires military service from every able-bodied male Swiss citizen — a model later emulated and expanded by Israel — and how the Swiss military has, in effect, wired the entire country to blow in the event of foreign invasion. To keep enemy armies out, bridges will be dynamited and, whenever possible, deliberately collapsed onto other roads and bridges below; hills have been weaponized to be activated as valley-sweeping artificial landslides; mountain tunnels will be sealed from within to act as nuclear-proof air raid shelters; and much more.
To interrupt the utility of bridges, tunnels, highways, railroads, Switzerland has established three thousand points of demolition. That is the number officially printed. It has been suggested to me that to approximate a true figure a reader ought to multiply by two. Where a highway bridge crosses a railroad, a segment of the bridge is programmed to drop on the railroad. Primacord fuses are built into the bridge. Hidden artillery is in place on either side, set to prevent the enemy from clearing or repairing the damage.
Near the German border of Switzerland, every railroad and highway tunnel has been prepared to pinch shut explosively. Nearby mountains have been made so porous that whole divisions can fit inside them. There are weapons and soldiers under barns. There are cannons inside pretty houses. Where Swiss highways happen to run on narrow ground between the edges of lakes and to the bottoms of cliffs, man-made rockslides are ready to slide.
McPhee points to small moments of “fake stonework, concealing the artillery behind it,” that dot Switzerland’s Alpine geology, little doors that will pop open to reveal internal cannons and blast the country’s roads to smithereens. Later, passing under a mountain bridge, McPhee notices “small steel doors in one pier” hinting that the bridge “was ready to blow. It had been superseded, however, by an even higher bridge, which leaped through the sky above — a part of the new road to Simplon. In an extreme emergency, the midspan of the new bridge would no doubt drop on the old one.”
The book is on my Kindle.
Mikko Hypponen of F-Secure attempts to explain why anti-virus companies didn’t catch Stuxnet, DuQu, and Flame:
When we went digging through our archive for related samples of malware, we were surprised to find that we already had samples of Flame, dating back to 2010 and 2011, that we were unaware we possessed. They had come through automated reporting mechanisms, but had never been flagged by the system as something we should examine closely. Researchers at other antivirus firms have found evidence that they received samples of the malware even earlier than this, indicating that the malware was older than 2010.
What this means is that all of us had missed detecting this malware for two years, or more. That’s a spectacular failure for our company, and for the antivirus industry in general.
It wasn’t the first time this has happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild, and was only discovered after an antivirus firm in Belarus was called in to look at machines in Iran that were having problems. When researchers dug back through their archives for anything similar to Stuxnet, they found that a zero-day exploit that was used in Stuxnet had been used before with another piece of malware, but had never been noticed at the time. A related malware called DuQu also went undetected by antivirus firms for over a year.
Stuxnet, Duqu and Flame are not normal, everyday malware, of course. All three of them were most likely developed by a Western intelligence agency as part of covert operations that weren’t meant to be discovered.
His conclusion is that the attackers — in this case, military intelligence agencies — are simply better than commercial-grade anti-virus programs.
The truth is, consumer-grade antivirus products can’t protect against targeted malware created by well-resourced nation-states with bulging budgets. They can protect you against run-of-the-mill malware: banking trojans, keystroke loggers and e-mail worms. But targeted attacks like these go to great lengths to avoid antivirus products on purpose. And the zero-day exploits used in these attacks are unknown to antivirus companies by definition. As far as we can tell, before releasing their malicious codes to attack victims, the attackers tested them against all of the relevant antivirus products on the market to make sure that the malware wouldn’t be detected. They have unlimited time to perfect their attacks. It’s not a fair war between the attackers and the defenders when the attackers have access to our weapons.
We really should have been able to do better. But we didn’t. We were out of our league, in our own game.
I don’t buy this. It isn’t just the military that tests its malware against commercial defense products; criminals do it, too. Virus and worm writers do it. Spam writers do it. This is the never-ending arms race between attacker and defender, and it’s been going on for decades. Probably the people who wrote Flame had a larger budget than a large-scale criminal organization, but their evasive techniques weren’t magically better. Note that F-Secure and others had samples of Flame; they just didn’t do anything about them.
I think the difference has more to do with the ways in which these military malware programs spread. That is, slowly and stealthily. It was never a priority to understand — and then write signatures to detect — the Flame samples because they were never considered a problem. Maybe they were classified as a one-off. Or as an anomaly. I don’t know, but it seems clear that conventional non-military malware writers who want to evade detection should adopt the propagation techniques of Flame, Stuxnet, and DuQu.
EDITED TO ADD (6/23): F-Secure responded. Unfortunately, it’s not a very substantive response. It’s a pity; I think there’s an interesting discussion to be had about why the anti-virus companies all missed Flame for so long.
We all knew this was possible, but researchers have found the exploit in the wild:
Claims were made by the intelligence agencies around the world, from MI5, NSA and IARPA, that silicon chips could be infected. We developed breakthrough silicon chip scanning technology to investigate these claims. We chose an American military chip that is highly secure with sophisticated encryption standard, manufactured in China. Our aim was to perform advanced code breaking and to see if there were any unexpected features on the chip. We scanned the silicon chip in an affordable time and found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key, which we were able to extract. If you use this key you can disable the chip or reprogram it at will, even if locked by the user with their own key. This particular chip is prevalent in many systems from weapons, nuclear power plants to public transport. In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems. The scale and range of possible attacks has huge implications for National Security and public infrastructure.
Here’s the draft paper:
Abstract. This paper is a short summary of the first real world detection of a backdoor in a military grade FPGA. Using an innovative patented technique we were able to detect and analyse in the first documented case of its kind, a backdoor inserted into the Actel/Microsemi ProASIC3 chips. The backdoor was found to exist on the silicon itself, it was not present in any firmware loaded onto the chip. Using Pipeline Emission Analysis (PEA), a technique pioneered by our sponsor, we were able to extract the secret key to activate the backdoor. This way an attacker can disable all the security on the chip, reprogram crypto and access keys, modify low-level silicon features, access unencrypted configuration bitstream or permanently damage the device. Clearly this means the device is wide open to intellectual property theft, fraud, re-programming as well as reverse engineering of the design which allows the introduction of a new backdoor or Trojan. Most concerning, it is not possible to patch the backdoor in chips already deployed, meaning those using this family of chips have to accept the fact it can be easily compromised or it will have to be physically replaced after a redesign of the silicon itself.
One researcher maintains that this is not malicious:
Backdoors are a common problem in software. About 20% of home routers have a backdoor in them, and 50% of industrial control computers have a backdoor. The cause of these backdoors isn’t malicious, but a byproduct of software complexity. Systems need to be debugged before being shipped to customers. Therefore, the software contains debuggers. Often, programmers forget to disable the debugger backdoors before shipping. This problem is notoriously bad for all embedded operating systems (VxWorks, QNX, WinCE, etc.).
It could just be part of the original JTAG building-block. Actel didn’t design their own, but instead purchased the JTAG design and placed it on their chips. They are not aware of precisely all the functionality in that JTAG block, or how it might interact with the rest of the system.
But I’m betting that Microsemi/Actel know about the functionality, but thought of it as a debug feature, rather than a backdoor.
It’s remotely possible that the Chinese manufacturer added the functionality, but highly improbable. It’s prohibitively difficult to change a chip design to add functionality of this complexity. On the other hand, it’s easy for a manufacturer to flip bits. Consider that the functionality is part of the design, but that Actel intended to disable it by flipping a bit turning it off. A manufacturer could easily flip a bit and turn it back on again. In other words, it’s extraordinarily difficult to add complex new functionality, but they may get lucky and be able to make small tweaks to accomplish their goals.
EDITED TO ADD (6/10): A response from the chip manufacturer.
The researchers’ assertion is that with the discovery of a security key, a hacker can gain access to a privileged internal test facility typically reserved for initial factory testing and failure analysis. Microsemi verifies that the internal test facility is disabled for all shipped devices. The internal test mode can only be entered in a customer-programmed device when the customer supplies their passcode, thus preventing unauthorized access by Microsemi or anyone else. In addition, Microsemi’s customers who are concerned about the possibility of a hacker using DPA have the ability to program their FPGAs with its highest level of security settings. This security setting will disable the use of any type of passcode to gain access to all device configuration, including the internal test facility.
A response from the researchers.
In order to gain access to the backdoor and other features a special key is required. This key has very robust DPA protection, in fact, one of the best silicon-level protections we have ever encountered. With our breakthrough PEA technique we extracted the key in one day and we found that the key is the same in all ProASIC3, Igloo, Fusion and SmartFusion FPGAs. Customers have an option to program their chosen passcode to increase the security; however, Actel/Microsemi does not tell its customers that a special fuse must be programmed in order to get the backdoor protected with both the passcode and backdoor keys. At the same time, the passcode key can be extracted with our PEA technique which is public and covered in our patent so everyone can independently verify our claims. That means that given physical access to the device an attacker can extract all the embedded IP within hours.
There is an option for the highest level of security settings – Permanent Lock. However, if the AES reprogramming option is left it still exposes the device to IP stealing. If not, the Permanent Lock itself is vulnerable to fault attacks and can be disabled opening up the path to the backdoor access as before, but without the need for any passcode.