Full Extent of the Attack that Compromised RSA in March

Brian Krebs has done the analysis; it’s something like 760 companies that were compromised.

Among the more interesting names on the list are Abbott Labs, the Alabama Supercomputer Network, Charles Schwabb & Co., Cisco Systems, eBay, the European Space Agency, Facebook, Freddie Mac, Google, the General Services Administration, the Inter-American Development Bank, IBM, Intel Corp., the Internal Revenue Service (IRS), the Massachusetts Institute of Technology, Motorola Inc., Northrop Grumman, Novell, Perot Systems, PriceWaterhouseCoopers LLP, Research in Motion (RIM) Ltd., Seagate Technology, Thomson Financial, Unisys Corp., USAA, Verisign, VMWare, Wachovia Corp., and Wells Fargo & Co.

News article.

Posted on October 28, 2011 at 3:21 PM • 14 Comments


Sam October 28, 2011 3:41 PM

Good thing we have Bruce to reassure us that the Chinese government is not systemically penetrating every US company and government agency of any regard…

OATH Commenter October 28, 2011 4:05 PM

Verisign is an interesting one on that list, considering that they backed and supported OATH, an open competitor to RSA tokens.

Clive Robinson October 28, 2011 4:08 PM

One of the problems Brian Krebs has with this is not revealing “sources and methods”, which has left him open to some quite nasty comments on his blog.

Unfortunately, as some of the more able commenters have pointed out, the data could have been gathered in a number of ways (including DNS logs) and has been lumped together under AS numbers, so it is by no means certain that all the sites have actually been successfully attacked to the point of critical/sensitive data exfiltration.

Further, some of the names on the list are either ISPs or known to have open access points for the general public. So treat the findings with care.
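Clive’s point about lumping data together under AS numbers can be sketched concretely. The following is a minimal, hypothetical illustration (invented prefixes, ASNs, and indicator domains; real analyses would use BGP routing dumps) of the kind of aggregation the source data appears to reflect: mapping resolver-log source IPs onto AS numbers and counting queries for known C&C domains per AS. Note how one curious analyst inside an AS is indistinguishable from a real infection.

```python
import ipaddress

# Hypothetical prefix-to-ASN table; a real analysis would derive this
# from BGP routing-table dumps.
PREFIX_TO_ASN = {
    ipaddress.ip_network("192.0.2.0/24"): 64500,
    ipaddress.ip_network("198.51.100.0/24"): 64501,
}

CNC_DOMAINS = {"evil-c2.example.net"}  # hypothetical indicator list

def asn_for(ip):
    """Map a source IP to its AS number, or None if unknown."""
    addr = ipaddress.ip_address(ip)
    for net, asn in PREFIX_TO_ASN.items():
        if addr in net:
            return asn
    return None

def hits_by_asn(log):
    """log: iterable of (source_ip, queried_domain) pairs.
    Returns per-ASN counts of queries for indicator domains."""
    counts = {}
    for src, domain in log:
        if domain in CNC_DOMAINS:
            asn = asn_for(src)
            if asn is not None:
                counts[asn] = counts.get(asn, 0) + 1
    return counts

log = [
    ("192.0.2.10", "evil-c2.example.net"),   # could be malware...
    ("192.0.2.77", "evil-c2.example.net"),   # ...or a curious analyst
    ("198.51.100.5", "www.example.com"),     # unrelated traffic
]
print(hits_by_asn(log))  # {64500: 2}
```

The sketch makes the methodological weakness visible: the output says only that two queries came from somewhere inside AS 64500, not who made them or why.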

David October 28, 2011 4:33 PM

@Clive “So treat the findings with care.” – that’s pretty much what Brian says in the article and in response to some of the comments.

However to me the most telling thing is that none of those organisations seems to have complained about being on the list erroneously.

Someoneunrelated October 28, 2011 4:53 PM

This thing started a shit fight where I work. But really, when you boil it down, he’s saying that someone from the listed AS did a DNS query against a C&C server.

In my case, the organization is a service provider, and the idea that someone somewhere may have been compromised, or done a DNS lookup, is ridiculously needle-in-a-haystackish. But after the senior monkeys panicked, we’ve now determined that no customer who uses a proxy went there.

The process of chasing down this panic has taught me a few things. The article basically says “were also compromised” when it should be more explicit that it’s spotting a prerequisite for being compromised, and that there are other reasons why an AS might be on the list.

For me, the AS represents a collection of government departments, all of whom will be running a security team, any of whom could have read about the hack and done a couple of DNS lookups to check on the IPs of the C&C out of pure sticky-beak curiosity.

And for @David: “However to me the most telling thing is that none of those organisations seems to have complained about being on the list erroneously.”
Have you ever looked through terabytes of proxy logs looking for a series of URLs? It can take a while to be sure.
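The log-triage chore described here can be sketched in a few lines. This is a rough illustration (the field layout and indicator values are invented, loosely Squid-style: timestamp, client, method, URL) of streaming a large proxy log and flagging lines that match a set of indicators:

```python
# Hypothetical indicators of compromise (IoCs): a C&C hostname and IP.
INDICATORS = ("evil-c2.example.net", "203.0.113.9")

def flag_lines(lines):
    """Yield (line_number, line) for every log line containing an IoC.
    Streaming generator, so terabyte-scale files are read lazily."""
    for lineno, line in enumerate(lines, 1):
        if any(ioc in line for ioc in INDICATORS):
            yield lineno, line.rstrip("\n")

sample = [
    "1319800000.123 10.0.0.5 GET http://www.example.com/\n",
    "1319800001.456 10.0.0.9 GET http://evil-c2.example.net/beacon\n",
]
print(list(flag_lines(sample)))
```

In practice the same loop would be fed from an open file object rather than an in-memory list; the slow part is not the code but being sure the indicator list itself is complete, which is Someoneunrelated’s complaint.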

Although I can say we have been considering nasty letters. But not being in the country the blog is hosted in makes the whole thing a bit tricky.

Andrew October 28, 2011 5:25 PM

The sky isn’t falling, even with this impressive list of compromises.

These were mostly people poking their heads through the door and checking out content, rather than any big data breaches.

Tom October 28, 2011 7:22 PM

What makes everyone so sure these companies were compromised? Where’s the data from Krebs? What IPs were involved? What is the secret data source? What other possible explanations are there for “phoning home to some of the same control infrastructure”? This is such vague and ethically irresponsible journalism that I’m surprised it’s coming from Krebs.

Tech worker October 28, 2011 7:35 PM

Good thing my small startup company isn’t on the list. Except… we’ve outsourced our email and calendar and wiki and other IT stuff to Google Apps. So since Google is on the list, we are too.

Clive Robinson October 29, 2011 3:15 AM

@ David,

“However to me the most telling thing is that none of those organisations seems to have complained about being on the list erroneously”

“Damned if you do and damned if you don’t” is something I was taught about leadership, along with “praise in public, punish in private”.

Thus I would not expect a public response via Brian’s blog, but a quiet word via email or phone, politely requesting further proof, etc.

The problem Brian has with his post is “shoot the messenger” syndrome, and there are only two ways to avoid that: “never say anything” or “burn the source” with full disclosure. Either way, in this particular case, is actually being “part of the problem, not the solution”.

Look at it this way: most infosec people who have spent any time in the trenches will know that security is an illusion more than an actuality. And the more secure a system is, generally the less useful it is and the more expensive it is. Those around them, and those who write their pay cheques, want working solutions to everyday problems without cost or hassle.

To a certain degree, being an infosec bod is a no-win situation, and this is not going to change because 99% of the tools out there are either “reactive not predictive” or “restrictive not permissive”.

As a result there are two truisms in the industry, the first being the idea that a secure computer is,

“One you never own, never use, never turn on, in the middle of a large block of solid concrete you have absolutely, provably dropped down the Mariana Trench, and that’s only secure for maybe a year or two”.

The second is related to the myth of infallibility and is brutally stated as,

“Sack the security person who has not found malware on your systems”

The reality of life is that there is always going to be a “zero day” and “workers need to be productive”.

The question then is what you do about it, to which the answer is that dread non-answer of “manage the risk”.

If you look back on this blog to the time the actual RSA breach became known, I put up a hypothetical scenario of what might have happened with regard to the loss of the authentication seeds.

In essence, the business driver of the help-desk function meant the seeds had to be readily available to support customers. The result was underestimating the risk, or failing to manage it, for one of many reasons that might also be the same business drivers…

The result was the loss of the seed database, loss of company reputation, loss of a number of customers, some of whom were “high value low cost”, and considerable cost clearing up the mess.

So on the face of it, it can be observed that somebody did not put enough resources into protecting the seed database…

But did they?

You have to ask that question, and to answer it you have to accept that these tokens are not exactly low-cost, high-profit items to produce and support. In a competitive market, the costs of security are critical in determining whether it is actually worth producing such items.

You then need to ask what resources an organisation or individual is prepared to expend to overcome the security system that is in place.

In the case of state actors it is “whatever it costs to achieve the objective”, and in the case of some obsessive individuals it is “whatever time they can devote to achieving the objective”. In either case they are going to reach a point where the attacker’s effective cost in time or resources equals or exceeds the annual cost of the security measures that would be in place…

Thus, as the defender, you have to look not just at the cost of the security but at its qualitative effects.

And this is where it usually head butts “business drivers” and where it all goes horribly wrong.

For high security you go for segregation, with strict point-of-access control not just on the individuals but also on the data. And it is on this second issue that it usually goes wrong.

Segregation is usually relatively easy to do if you can “air gap”, but to be usable in any environment data still has to be used, and in this modern world that means transferring data to other systems and people. This data transfer can be controlled in various ways, such as no use of storage media for data transfer, data diodes, and data-rate limiting. All of these are technically difficult to implement and difficult to use, and thus have serious cost impacts that eat either directly or indirectly into the income from producing such items.

But even if you do get the security sufficient to prevent data exfiltration over the wire, as we know from Stuxnet and the “code signing key”, a sufficiently resourced attacker will find some other way to get at the data. And as history tells us, it might be by placing an agent in as an employee, or a “black bag job”, or worse, some kind of direct action against individuals working for the company, ranging from bribery through to kidnap, torture, and murder.

The underlying problem is the asymmetry between the value of the token to the producer and the value of the information a third party uses the token to protect. For a defence contractor this could be well over 10,000,000:1.

For instance, the US DoD has just recently set test dates for a “flying Humvee” that is expected to have a unit price below $55 million, and the DoD has announced that two organisations have tendered. What is the value of the IP required to make that happen: a billion? 10 billion? And what would another company be prepared to pay to get it, let alone a hostile state?

Tomtom October 29, 2011 4:49 AM

@Tom: the list does not say that any of the companies were compromised. It just says that traffic was seen from those networks that is thought to be connected to the RSA attackers.

Trichinosis USA October 29, 2011 9:21 AM

I have to agree with Tom. No source, no details, lots of hifalutin’ companies and organizations on a long list that “may” have been hacked. This is too vague to really be useful, except for this one takeaway: someone evidently wants the IT community to be afraid of RSA attacks from hackers from China. Why, and who benefits? The thing that makes the least sense is the need to protect the source of the information in this case. Why is there a need for that?

tinfoilhat October 29, 2011 12:23 PM

@Trichinosis: “[…] someone evidently wants the IT community to be afraid of RSA attacks from hackers from China. Why, and who benefits?”

If we weren’t afraid of hackers from China, we wouldn’t be willing to fund the creation of bold new bureaucracies to “protect” us (cf. TSA, DHS, …).

david misell October 29, 2011 4:54 PM

Maybe Brian http://krebsonsecurity.com/2011/10/who-else-was-hit-by-the-rsa-attackers/#more-11975
and the rest of us interested parties should follow the IETF incident-handling recommendations being developed by the MILE working group, based on RFC6045-bis:

Mailing Lists:
General Discussion: mile@ietf.org
To Subscribe: http://www.ietf.org/mailman/listinfo/mile
Archive: http://www.ietf.org/mail-archive/web/mile


The Managed Incident Lightweight Exchange (MILE) working group will
develop standards and extensions for the purpose of improving incident
information sharing and handling capabilities based on the work
developed in the IETF Extended INCident Handling (INCH) working group.
The Incident Object Description Exchange Format (IODEF) in RFC5070 and
Real-time Inter-network Defense (RID) in RFC6045 were developed in the
INCH working group by international Computer Security Incident Response
Teams (CSIRTs) and industry to meet the needs of a global community
interested in sharing, handling, and exchanging incident information.
The extensions and guidance created by the MILE working group assist
with the daily operations of CSIRTs at the organization, service-provider,
law-enforcement, and country level. The application of
IODEF and RID to interdomain incident information cooperative exchange
and sharing has recently expanded and the need for extensions has become
more important. Efforts continue to deploy IODEF and RID, as well as to
extend them to support specific use cases covering reporting and
mitigation of current threats such as anti-phishing extensions.

An incident could be a benign configuration issue, IT incident, an
infraction to a service level agreement (SLA), a system compromise,
socially engineered phishing attack, or a denial-of-service (DoS)
attack, etc. When an incident is detected, the response may include
simply filing a report, notification to the source of the incident, a
request to a third party for resolution/mitigation, or a request to
locate the source. IODEF defines a data representation that provides a
standard format for sharing information commonly exchanged about
computer security incidents. RID enables the secure exchange of
incident related information in an IODEF format providing options for
security, privacy, and policy setting.

MILE leverages collaboration and sharing experiences with the work
developed in the INCH working group which includes the data model
detailed in the IODEF, existing extensions to the IODEF for
Anti-phishing (RFC5901), and RID (RFC6045, RFC6046) for the secure
exchange of information. MILE will also leverage the experience gained
in using IODEF and RID in operational contexts. Related work, drafted
outside of INCH will also be reviewed and includes RFC5941, Sharing
Transaction Fraud Data.

The MILE working group provides coordination for these various extension
efforts to improve the capabilities for exchanging incident information.
MILE has several objectives, with the first being a description of a
subset of IODEF focused on ease of deployment and applicability to
current information security data sharing use cases. MILE also
describes a generalization of RID for secure exchange of other
security-relevant XML formats. MILE produces additional guidance needed
for the successful exchange of incident information for new use cases
according to policy, security, and privacy requirements. Finally, MILE
produces a document template with guidance for defining IODEF extensions
to be followed when producing extensions to IODEF as appropriate, for:

  • labeling incident reports with data protection, data retention, and
    other policies, regulations, and laws restricting the handling of
    those reports
  • referencing structured security information from within incident
    reports
  • reporting forensic data generated during an incident investigation
    (computer or accounting)

This list is for discussions, collaboration, and development of a document describing a subset of the Incident Object Description Exchange Format (IODEF) aimed at exchanging incident information.
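To make the IODEF description above concrete, here is a minimal, illustrative incident report in the RFC 5070 format, built with Python’s standard library. The element names and namespace follow the RFC; the incident ID, CSIRT name, and timestamp are invented placeholders.

```python
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:iodef-1.0"
ET.register_namespace("", NS)  # emit the IODEF namespace as the default

# Build the skeleton of an IODEF document: a single Incident with the
# two elements every report carries, an IncidentID and a ReportTime.
doc = ET.Element(f"{{{NS}}}IODEF-Document", {"version": "1.00"})
inc = ET.SubElement(doc, f"{{{NS}}}Incident", {"purpose": "reporting"})
iid = ET.SubElement(inc, f"{{{NS}}}IncidentID", {"name": "csirt.example.com"})
iid.text = "189493"
rt = ET.SubElement(inc, f"{{{NS}}}ReportTime")
rt.text = "2011-10-28T15:21:00-04:00"

xml_bytes = ET.tostring(doc, encoding="utf-8")
print(xml_bytes.decode())
```

A real report would add classification, contact, and system-impact elements, but even this skeleton shows the point of the format: a machine-parseable, vendor-neutral way to say “CSIRT X observed incident Y at time Z”, which is exactly what the Krebs list lacks.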

David Schwartz November 1, 2011 4:18 PM

“However to me the most telling thing is that none of those organisations seems to have complained about being on the list erroneously.”

How could they? Without knowing the listing criteria, how can anyone establish whether they’re listed erroneously or not?

Suppose your organization was on the list. Your boss says to you, “Find out if we’re on there erroneously, and if so, complain.” What do you check?
