Entries Tagged "Microsoft"

Did Kaspersky Fake Malware?

Two former Kaspersky employees have accused the company of faking malware to harm rival antivirus products. They would falsely classify legitimate files as malicious, tricking other antivirus companies that blindly copied Kaspersky’s data into deleting them from their customers’ computers.

In one technique, Kaspersky’s engineers would take an important piece of software commonly found in PCs and inject bad code into it so that the file looked like it was infected, the ex-employees said. They would send the doctored file anonymously to VirusTotal.

Then, when competitors ran this doctored file through their virus detection engines, the file would be flagged as potentially malicious. If the doctored file looked close enough to the original, Kaspersky could fool rival companies into thinking the clean file was problematic as well.
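To make the mechanics concrete, here is a minimal sketch in Python of how this kind of poisoning works against a naive similarity-based detector. Everything in it is hypothetical: the chunk-matching detector, the 0.9 threshold, and the file contents are illustrative stand-ins, not anyone’s actual engine or samples.

```python
# Hypothetical sketch: poisoning a naive similarity-based detector.
# A "doctored" file is a clean file with a bad payload appended. A detector
# that blacklists the doctored sample and flags anything sufficiently similar
# will then also flag the untouched clean file.

CHUNK = 64  # bytes per similarity chunk (arbitrary for this sketch)

def chunks(data: bytes) -> set:
    """Split data into fixed-size chunks used as crude similarity features."""
    return {data[i:i + CHUNK] for i in range(0, len(data), CHUNK)}

def similarity(a: bytes, b: bytes) -> float:
    """Fraction of a's chunks that also appear in b."""
    ca, cb = chunks(a), chunks(b)
    return len(ca & cb) / max(len(ca), 1)

clean_file = b"LEGITIMATE PRINTER DRIVER CODE " * 200  # stand-in for a real file
payload = b"INJECTED BAD CODE "                        # stand-in for the added junk
doctored = clean_file + payload                        # the sample sent to VirusTotal

# A rival engine blindly adds the doctored sample to its blacklist...
blacklist = [doctored]

# ...and the untouched clean file now matches a "known bad" sample closely
# enough to be quarantined as well: a manufactured false positive.
for bad_sample in blacklist:
    if similarity(clean_file, bad_sample) > 0.9:
        print("false positive: clean file quarantined")
```

The rival engine never sees the injection happen; it only sees a “malicious” sample that overlaps almost entirely with a clean file, which is exactly the confusion Batchelder describes below.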

[…]

The former Kaspersky employees said Microsoft was one of the rivals that were targeted because many smaller security companies followed the Redmond, Washington-based company’s lead in detecting malicious files. They declined to give a detailed account of any specific attack.

Microsoft’s antimalware research director, Dennis Batchelder, told Reuters in April that he recalled a time in March 2013 when many customers called to complain that a printer code had been deemed dangerous by its antivirus program and placed in “quarantine.”

Batchelder said it took him roughly six hours to figure out that the printer code looked a lot like another piece of code that Microsoft had previously ruled malicious. Someone had taken a legitimate file and jammed a wad of bad code into it, he said. Because the normal printer code looked so much like the altered code, the antivirus program quarantined that as well.

Over the next few months, Batchelder’s team found hundreds, and eventually thousands, of good files that had been altered to look bad.

Kaspersky denies it.

EDITED TO ADD (8/19): Here’s an October 2013 presentation by Microsoft on the attacks.

EDITED TO ADD (9/11): A dissenting opinion.

Posted on August 18, 2015 at 2:35 PM

The Continuing Public/Private Surveillance Partnership

If you’ve been reading the news recently, you might think that corporate America is doing its best to thwart NSA surveillance.

Google just announced that it is encrypting Gmail when you access it from your computer or phone, and between data centers. Last week, Mark Zuckerberg personally called President Obama to complain about the NSA using Facebook as a means to hack computers, and Facebook’s Chief Security Officer explained to reporters that the attack technique has not worked since last summer. Yahoo, Google, Microsoft, and others are now regularly publishing “transparency reports,” listing approximately how many government data requests the companies have received and complied with.

On the government side, last week the NSA’s General Counsel Rajesh De seemed to have thrown those companies under a bus by stating that — despite their denials — they knew all about the NSA’s collection of data under both the PRISM program and some unnamed “upstream” collections on the communications links.

Yes, it may seem like the public/private surveillance partnership has frayed — but, unfortunately, it is alive and well. The main interests of massive Internet companies and government agencies still largely align: to keep us all under constant surveillance. When they bicker, it’s mostly role-playing designed to keep us blasé about what’s really going on.

The U.S. intelligence community is still playing word games with us. The NSA collects our data based on four different legal authorities: the Foreign Intelligence Surveillance Act (FISA) of 1978, Executive Order 12333 of 1981, as amended in 2004 and 2008, Section 215 of the Patriot Act of 2001, and Section 702 of the FISA Amendments Act (FAA) of 2008. Be careful when someone from the intelligence community uses the caveat “not under this program” or “not under this authority”; almost certainly it means that whatever it is they’re denying is done under some other program or authority. So when De said that companies knew about NSA collection under Section 702, it doesn’t mean they knew about the other collection programs.

The big Internet companies know of PRISM — although not under that code name — because that’s how the program works; the NSA serves them with FISA orders. Those same companies did not know about any of the other surveillance against their users conducted under the far more permissive EO 12333. Google and Yahoo did not know about MUSCULAR, the NSA’s secret program to eavesdrop on the trunk connections between their data centers. Facebook did not know about QUANTUMHAND, the NSA’s secret program to attack Facebook users. And none of the target companies knew that the NSA was harvesting their users’ address books and buddy lists.

These companies are certainly pissed that the publicity surrounding the NSA’s actions is undermining their users’ trust in their services, and they’re losing money because of it. Cisco, IBM, cloud service providers, and others have announced that they’re losing billions, mostly in foreign sales.

These companies are doing their best to convince users that their data is secure. But they’re relying on their users not understanding what real security looks like. IBM’s letter to its clients last week is an excellent example. The letter lists five “simple facts” that it hopes will mollify its customers, but the items are so qualified with caveats that they do the exact opposite to anyone who understands the full extent of NSA surveillance. And IBM’s spending $1.2B on data centers outside the U.S. will only reassure customers who don’t realize that National Security Letters require a company to turn over data, regardless of where in the world it is stored.

Google’s recent actions, and similar actions of many Internet companies, will definitely improve their users’ security against surreptitious government collection programs — both the NSA’s and other governments’ — but these assurances deliberately ignore the massive security vulnerability built into their services by design. Google, and by extension, the U.S. government, still has access to your communications on Google’s servers.

Google could change that. It could encrypt your e-mail so only you could decrypt and read it. It could provide for secure voice and video so no one outside the conversations could eavesdrop.

It doesn’t. And neither does Microsoft, Facebook, Yahoo, Apple, or any of the others.
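For contrast, this is roughly what “encrypt your e-mail so only you could decrypt and read it” looks like in code: a minimal end-to-end sketch using the PyNaCl library. It illustrates the general idea only; it is not a description of any product these companies offer, and the keys and message are made up.

```python
# Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
# The provider only ever handles ciphertext; the private key stays on the
# recipient's device, so the provider cannot decrypt the message on demand.
from nacl.public import PrivateKey, SealedBox

# Generated and kept on the recipient's device, never uploaded.
recipient_key = PrivateKey.generate()

# The sender needs only the recipient's public key to encrypt.
ciphertext = SealedBox(recipient_key.public_key).encrypt(b"meet me at noon")

# This opaque blob is all the provider ever stores or relays.
stored_on_server = ciphertext

# Only the holder of the private key can recover the plaintext.
assert SealedBox(recipient_key).decrypt(stored_on_server) == b"meet me at noon"
```

In this model the provider can be subpoenaed, hacked, or compelled to hand over everything it stores and still reveal nothing, because it never holds the keys.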

Why not? Partly because they want to keep the ability to eavesdrop on your conversations. Surveillance is still the business model of the Internet, and every one of those companies wants access to your communications and your metadata. Your private thoughts and conversations are the product they sell to their customers. We have also learned that they read your e-mail for their own internal investigations.

But even if this were not true, even if — for example — Google were willing to forgo data mining your e-mail and video conversations in exchange for the marketing advantage it would give it over Microsoft, it still won’t offer you real security. It can’t.

The biggest Internet companies don’t offer real security because the U.S. government won’t permit it.

This isn’t paranoia. We know that the U.S. government ordered the secure e-mail provider Lavabit to turn over its master keys and compromise every one of its users. We know that the U.S. government convinced Microsoft — either through bribery, coercion, threat, or legal compulsion — to make changes in how Skype operates, to make eavesdropping easier.

We don’t know what sort of pressure the U.S. government has put on Google and the others. We don’t know what secret agreements those companies have reached with the NSA. We do know the NSA’s BULLRUN program to subvert Internet cryptography was successful against many common protocols. Did the NSA demand Google’s keys, as it did with Lavabit? Did its Tailored Access Operations group break into Google’s servers and steal the keys?

We just don’t know.

The best we have are caveat-laden pseudo-assurances. At SXSW earlier this month, Google’s Eric Schmidt tried to reassure the audience by saying that he was “pretty sure that information within Google is now safe from any government’s prying eyes.” A more accurate statement might be, “Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about. And, of course, we still have complete access to it all, and can sell it at will to whomever we want.” That’s a lousy marketing pitch, but as long as the NSA is allowed to operate using secret court orders based on secret interpretations of secret law, it’ll never be any different.

Google, Facebook, Microsoft, and the others are already on the record as supporting legislative changes that would rein in NSA surveillance. It would be better if they openly acknowledged their users’ insecurity and increased their pressure on the government to change, rather than trying to fool their users and customers.

This essay previously appeared on TheAtlantic.com.

Posted on March 31, 2014 at 9:18 AM

Microsoft Retiring SHA-1 in 2016

I think this is a good move on Microsoft’s part:

Microsoft is recommending that customers and CAs stop using SHA-1 for cryptographic applications, including use in SSL/TLS and code signing. Microsoft Security Advisory 2880823 has been released along with the policy announcement that Microsoft will stop recognizing the validity of SHA-1 based certificates after 2016.

More news.

SHA-1 isn’t broken yet in a practical sense, but the algorithm is barely hanging on and attacks will only get worse. Migrating away from SHA-1 is the smart thing to do.
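If you want to check whether certificates you operate are still signed with SHA-1 ahead of a cutoff like this, a few lines will do it. Here is a minimal sketch using Python’s third-party cryptography package; the certificate path is a placeholder.

```python
# Sketch: flag certificates whose signatures still use SHA-1.
# Assumes the third-party "cryptography" package; "server.pem" is a placeholder.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

def uses_sha1(pem_path: str) -> bool:
    """Return True if the certificate's signature hash algorithm is SHA-1."""
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    return isinstance(cert.signature_hash_algorithm, hashes.SHA1)

if uses_sha1("server.pem"):
    print("SHA-1 signed certificate: reissue it with SHA-256 before the deadline.")
```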

Posted on November 13, 2013 at 2:17 PM

Restoring Trust in Government and the Internet

In July 2012, responding to allegations that the video-chat service Skype — owned by Microsoft — was changing its protocols to make it possible for the government to eavesdrop on users, Corporate Vice President Mark Gillett took to the company’s blog to deny it.

Turns out that wasn’t quite true.

Or at least he — or the company’s lawyers — carefully crafted a statement that could be defended as true while completely deceiving the reader. You see, Skype wasn’t changing its protocols to make it possible for the government to eavesdrop on users, because the government was already able to eavesdrop on users.

At a Senate hearing in March, Director of National Intelligence James Clapper assured the committee that his agency didn’t collect data on hundreds of millions of Americans. He was lying, too. He later defended his lie by inventing a new definition of the word “collect,” an excuse that didn’t even pass the laugh test.

As Edward Snowden’s documents reveal more about the NSA’s activities, it’s becoming clear that we can’t trust anything anyone official says about these programs.

Google and Facebook insist that the NSA has no “direct access” to their servers. Of course not; the smart way for the NSA to get all the data is through sniffers.

Apple says it’s never heard of PRISM. Of course not; that’s the internal name of the NSA database. Companies are publishing reports purporting to show how few requests for customer-data access they’ve received, a meaningless number when a single Verizon request can cover all of their customers. The Guardian reported that Microsoft secretly worked with the NSA to subvert the security of Outlook, something it carefully denies. Even President Obama’s justifications and denials are phrased with the intent that the listener will take his words very literally and not wonder what they really mean.

NSA Director Gen. Keith Alexander has claimed that the NSA’s massive surveillance and data mining programs have helped stop more than 50 terrorist plots, 10 inside the U.S. Do you believe him? I think it depends on your definition of “helped.” We’re not told whether these programs were instrumental in foiling the plots or whether they just happened to be of minor help because the data was there. It also depends on your definition of “terrorist plots.” An examination of plots that the FBI claims to have foiled since 9/11 reveals that would-be terrorists have commonly been delusional, and most have been egged on by FBI undercover agents or informants.

Left alone, few were likely to have accomplished much of anything.

Both government agencies and corporations have cloaked themselves in so much secrecy that it’s impossible to verify anything they say; revelation after revelation demonstrates that they’ve been lying to us regularly and tell the truth only when there’s no alternative.

There’s much more to come. Right now, the press has published only a tiny percentage of the documents Snowden took with him. And Snowden’s files are only a tiny percentage of the number of secrets our government is keeping, awaiting the next whistle-blower.

Ronald Reagan once said “trust but verify.” That works only if we can verify. In a world where everyone lies to us all the time, we have no choice but to trust blindly, and we have no reason to believe that anyone is worthy of blind trust. It’s no wonder that most people are ignoring the story; it’s just too much cognitive dissonance to try to cope with it.

This sort of thing can destroy our country. Trust is essential in our society. And if we can’t trust either our government or the corporations that have intimate access into so much of our lives, society suffers. Study after study demonstrates the value of living in a high-trust society and the costs of living in a low-trust one.

Rebuilding trust is not easy, as anyone who has betrayed or been betrayed by a friend or lover knows, but the path involves transparency, oversight and accountability. Transparency first involves coming clean. Not a little bit at a time, not only when you have to, but complete disclosure about everything. Then it involves continuing disclosure. No more secret rulings by secret courts about secret laws. No more secret programs whose costs and benefits remain hidden.

Oversight involves meaningful constraints on the NSA, the FBI and others. This will be a combination of things: a court system that acts as a third-party advocate for the rule of law rather than a rubber-stamp organization, a legislature that understands what these organizations are doing and regularly debates requests for increased power, and vibrant public-sector watchdog groups that analyze and debate the government’s actions.

Accountability means that those who break the law, lie to Congress or deceive the American people are held accountable. The NSA has gone rogue, and while it’s probably not possible to prosecute people for what they did under the enormous veil of secrecy it currently enjoys, we need to make it clear that this behavior will not be tolerated in the future. Accountability also means voting, which means voters need to know what our leaders are doing in our name.

This is the only way we can restore trust. A market economy doesn’t work unless consumers can make intelligent buying decisions based on accurate product information. That’s why we have agencies like the FDA, truth-in-packaging laws and prohibitions against false advertising.

In the same way, democracy can’t work unless voters know what the government is doing in their name. That’s why we have open-government laws. Secret courts making secret rulings on secret laws, and companies flagrantly lying to consumers about the insecurity of their products and services, undermine the very foundations of our society.

Since the Snowden documents became public, I have been receiving e-mails from people seeking advice on whom to trust. As a security and privacy expert, I’m expected to know which companies protect their users’ privacy and which encryption programs the NSA can’t break. The truth is, I have no idea. No one outside the classified government world does. I tell people that they have no choice but to decide whom they trust and to then trust them as a matter of faith. It’s a lousy answer, but until our government starts down the path of regaining our trust, it’s the only thing we can do.

This essay originally appeared on CNN.com.

EDITED TO ADD (8/7): Two more links describing how the US government lies about NSA surveillance.

Posted on August 7, 2013 at 6:29 AM

Security Analysis of Children

This is a really good paper describing the unique threat model of children in the home, and the sorts of security philosophies that are effective in dealing with them. Stuart Schechter, “The User IS the Enemy, and (S)he Keeps Reaching for that Bright Shiny Power Button!” Definitely worth reading.

Abstract: Children represent a unique challenge to the security and privacy considerations of the home and technology deployed within it. While these challenges posed by children have long been researched, there is a gaping chasm between the traditional approaches technologists apply to problems of security and privacy and the approaches used by those who deal with this adversary on a regular basis. Indeed, addressing adversarial threats from children via traditional approaches to computer and information security would be a recipe for disaster: it is rarely appropriate to remove a child’s access to the home or its essential systems; children require flexibility; children are often threats to themselves; and children may use the home as a theater of conflict with each other. Further, the goals of security and privacy must be adjusted to account for the needs of childhood development. A home with perfect security — one that prevented all inappropriate behavior or at least ensured that it was recorded so that the adversary could be held accountable — could severely stunt children’s moral and personal growth. We discuss the challenges posed by children and childhood on technologies for the home, the philosophical gap between parenting and security technologists, and design approaches that technology designers could borrow when building systems to be deployed within homes containing this special class of user/adversary.

Posted on July 2, 2013 at 12:08 PM
