SVR Attacks on Microsoft 365

FireEye is reporting the currently known tactics that the SVR used to compromise Microsoft 365 cloud data as part of its SolarWinds operation:

Mandiant has observed UNC2452 and other threat actors moving laterally to the Microsoft 365 cloud using a combination of four primary techniques:

  • Steal the Active Directory Federation Services (AD FS) token-signing certificate and use it to forge tokens for arbitrary users (sometimes described as Golden SAML). This would allow the attacker to authenticate into a federated resource provider (such as Microsoft 365) as any user, without the need for that user’s password or their corresponding multi-factor authentication (MFA) mechanism.
  • Modify or add trusted domains in Azure AD to add a new federated Identity Provider (IdP) that the attacker controls. This would allow the attacker to forge tokens for arbitrary users and has been described as an Azure AD backdoor.
  • Compromise the credentials of on-premises user accounts that are synchronized to Microsoft 365 that have high privileged directory roles, such as Global Administrator or Application Administrator.
  • Backdoor an existing Microsoft 365 application by adding a new application or service principal credential in order to use the legitimate permissions assigned to the application, such as the ability to read email, send email as an arbitrary user, access user calendars, etc.

Lots of details here, including information on remediation and hardening.
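
As a rough illustration of what hunting for two of these techniques can look like (a rogue federated domain and a recently added application credential), here is a minimal Python sketch against the Microsoft Graph API. It is only a starting point, not the tooling from the report; the `GRAPH_TOKEN` environment variable and the 30-day window are placeholders, and it assumes an OAuth token with suitable directory and application read permissions is already in hand.

```python
# Sketch: look for a rogue federated domain (trusted external IdP) and for
# applications that recently gained password/key credentials, via Microsoft Graph.
# GRAPH_TOKEN is a placeholder for an already-acquired OAuth access token.
import os
from datetime import datetime, timedelta, timezone

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer " + os.environ["GRAPH_TOKEN"]}


def federated_domains():
    """Return domains whose authentication type is 'Federated'."""
    r = requests.get(GRAPH + "/domains", headers=HEADERS)
    r.raise_for_status()
    return [d["id"] for d in r.json()["value"]
            if d.get("authenticationType") == "Federated"]


def recent_app_credentials(days=30):
    """Flag applications that gained a password or key credential in the last `days` days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    url = GRAPH + "/applications?$select=displayName,passwordCredentials,keyCredentials"
    r = requests.get(url, headers=HEADERS)
    r.raise_for_status()
    flagged = []
    for app in r.json()["value"]:
        for cred in app.get("passwordCredentials", []) + app.get("keyCredentials", []):
            start = cred.get("startDateTime", "")
            if not start:
                continue
            added = datetime.strptime(start[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
            if added > cutoff:
                flagged.append((app["displayName"], cred.get("keyId"), start))
    return flagged


if __name__ == "__main__":
    print("Federated domains:", federated_domains())
    print("Recently added app credentials:", recent_app_credentials())
```

A real investigation would also page through results, check service principals as well as application objects, and correlate anything suspicious against the audit logs that the report describes.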

The more we learn about this operation, the more sophisticated it becomes.

In related news, Malwarebytes was also targeted.

Posted on January 21, 2021 at 6:31 AM

Comments

Clive Robinson January 21, 2021 8:54 AM

@ ALL,

There is a saying about not “putting all your eggs in one basket”.

The implication being that if the basket breaks, gets dropped, etc., then you lose the lot.

Now bearing in mind that is advice to just one entity, what do you say to the very many people putting all of their eggs in one very large basket?

Because when that big basket breaks or is dropped, everybody’s eggs get lost.

That’s what happens with federated systems that give individuals cloud services when one of them loses its authentication secret.

You would think that this is obvious, yet people do it all the time; education establishments and even some employers force individuals into this mode of operation.

We know, certainly in the case of Google, that everything that goes their way they regard as their property to do with as they like, thus there is no way the information can be secure.

Thus why do we do it?

In the UK for instance, the House of Commons decided to put all the Members of Parliament on a cloud-based office system that was not actually based in the UK…

Not exactly a bright thing to do, as was confirmed in a Commons Select Committee inquiry into GCHQ spying on MPs’ electronic correspondence.

JR January 21, 2021 1:16 PM

I wonder why Microsoft is not recommending using their Insider Risk product to solve this? Couldn’t Insider Risk be configured to identify and report on these anomalies?

Unfortunately, IAM and systems administration are often outsourced at arm’s length, especially now with the virus. Add to that, many institutions lack systems, user, or data inventories, negating the ability to ascertain whether anything has been compromised.

2FA on BYOD may be at issue here too. The same BYOD that has FB and TikTok. But apps scrape passwords. According to Google, 65% use the same password for work and personal. So federated SSO is a problem. MSFT has also been pushing customers to integrate their VPN with Azure AD MFA Network Policy Server. That never seemed smart to me. Having numerous security gates during WFH is safer, especially since many never put thought into controls or network segmentation. Security is an afterthought. Everyone just focuses on SIEM. I think Bruce said that casting too wide a net increases vulnerabilities when too much data is hoovered. We cannot protect data until we have data privacy laws. Employees are the conduit. And we’ve all been pwned.

Full disclosure. I absolutely love Microsoft. But they need to refocus. Microsoft wants everyone always connected to the cloud. But that’s not safe right now. If a road is unsafe, it doesn’t matter what vehicle you are driving in, the road is still dangerous. We need to fix the road.

Yesterday I tried to access a report on one of the Government agencies that was attacked. A Microsoft dialog warned me that the report contained malware. If that was precautionary, then why not take down the reporting feature? Or could it be that malware was embedded and this hack is more than just about reading emails? Incidentally, the report I tried to view concerned regulated private sector cybersecurity examinations that this agency conducts. I hope that this agency has stopped conducting examinations if it hasn’t already.

Finally, in addition to the SEC breach disclosure, the Federal Government should require that all of the vendors active in the breached entities be identified. Presently there’s no way for anyone to know if there’s a vendor that is repeatedly involved in breaches. How can any enterprise perform third-party risk assessment without this knowledge? I have noticed patterns involving the same characters. Most attacks have some element of insider involvement, and it is nearly never phishing. Phishing is just the regulatory and insurance get-out-of-jail-free excuse. If phishing were really a problem, then just turn off employee external email access. Especially with AUPs that forbid personal use, and with WFH, no one would miss it. Maybe only 3% of a company’s workforce has a business need for external communication. That entitlement could be handled by exception. This may also solve the current 365 incursion and any DLP associated with it.

Foreigner January 21, 2021 4:55 PM

@JR:

This blog is widely read around the world. Some of your abbreviations are not commonly understood.

IAM – ?
2FA – two factor authentication
BYOD – bring your own device
FB – Facebook
SSO – single sign on
MSFT – Microsoft
VPN – virtual private network
AD – active directory
MFA – ? multi-factor authentication (NOT Master of Fine Arts)
WFH – work from home
SIEM – ?
SEC – Securities and Exchange Commission
AUP – Acceptable Use Policy
DLP – ?

JR January 21, 2021 6:46 PM

@Foreigner

Apologies. You did a great job.

IAM – Identity Access Management – access control
2FA – two factor authentication
MFA – multi factor authentication
SIEM – Security Information Event Management – tools that monitor networks
DLP – data loss prevention. Usually achieved by controls and monitoring

Here’s another one:

CMMC — Capability Maturity Model Certification

Also known as NIST 800-171, it is similar to ISO 27002. In order for an IT vendor or service provider to sell to the US Government, this certification is now required. CMMC certifications will be required for cybersecurity professionals too. If this certification program had existed a year ago, this FireEye and SolarWinds fiasco would not have happened. This program will create an AVL (approved vendor list) for the government and the regulated private sector.

GDPR Articles 42 and 43 call for a cybersecurity certification program too. CMMC was designed by NIST (National Institute of Standards and Technology) and the US military. This will make the EU and UK ICO happy too. Safe Harbor may then return.

Clive Robinson January 21, 2021 7:56 PM

@ JR,

This will make the EU and UK ICO happy too. Safe Harbor may then return.

I’d rather Safe Harbor did not return; it was a load of nonsense in the first place, with US companies not even paying lip service to it. Instead I’d much rather the US sorted out a worthwhile set of data protection laws.

With regard to Europe, it rather depends on what happens with Germany at the end of summer this year,

https://en.m.wikipedia.org/wiki/2021_German_federal_election

France may well use a weak German result as a way to grab power in the EU.

One thing that is inevitable is that US law enforcement and legislators want the Silicon Valley corps with set-ups in Éire not to have strong data protection to hide behind when NSLs and the like are presented at the US corporate headquarters. If France gains power in Europe, then that is much more likely to happen.

As for the UK… Well, due to Brexit the UK will probably “bend over and touch its toes” on demand, no questions asked. The UK PM was after all born in the US, and he was overly pally with the Trump Administration, so could be in an awkward position of his own making (nothing new there, he’s more gaffe-prone and incompetent than you could believably make up).

Which is why in the UK we have a joke about Boris’s brain not being present in Boris’s body, thus he had to use the one in Dominic Cummings.

https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Ftse4.mm.bing.net%2Fth%3Fid%3DOIP.5hNp9Xln0bmk4d_w52N62QHaD4%26pid%3DApi&f=1

So heck, next year I expect GMO turkey with double helpings of WMD chlorine will be required on every family’s Xmas table, or £800 fines will be issued.

JR January 21, 2021 8:31 PM

The UK wrote the original data privacy laws, and one can only hope. US law is based on UK law, so we often need the UK to show us the way. One thing I’ve learned: if we put our hope in one person to solve everything, we will always be disappointed. It is up to us to ‘collectively’ make changes.

I learned most of what I know about Cybersecurity from Germans. They really get it. The French CNIL doesn’t seem too invested. They gave Google permission to not abide by GDPR.

This addiction to data mining is destroying the world. I guarantee this SolarWinds hack was really about data theft. Data mining is responsible for wealth disparity. It solely serves to exclude people and destroy companies. This is government and corporate espionage run riot. All they do is spy on each other. Consumers are just the conduit. But at the end of the day it is benefiting no one. Our economies are destroyed by it. Another acronym: “GIGO”, garbage in, garbage out. I love that the UK ICO released a report a few months ago finding that Cambridge Analytica had nothing to do with Brexit, the guy with green hair wasn’t a scientist, he worked at H&M, and best of all, Facebook’s data is garbage. It isn’t possible to produce anything scientific with it.

I should not have said Safe Harbor. Instead I should have said “BCR” (binding corporate rules). That’s how multinational companies transfer data to other countries to avoid regulations. The UK ICO already said they are changing the rules on BCRs and SCCs (standard contractual clauses).

Ismar January 22, 2021 12:47 AM

After reading only the first couple of paragraphs of FireEye’s report, it became clear that this is yet another case of us people not being able to make fully secure software systems once their complexity reaches a certain level.
So instead of trying to constantly patch security holes in these complex systems, we should learn to live and work with less complex ones, even if that means losing some of our productivity – a word that is often misunderstood and misused in modern systems.

Clive Robinson January 22, 2021 3:09 AM

@ Ismar,

even if that means losing some of our productivity

The dread “Productivity mantra”

It’s a buzz word that has no real meaning, but does one heck of a lot of harm.

The supposed theory behind it is that humans are lazy, in that they might work hard, but they don’t work smart, therefore they are working below capacity.

Which, when processed by some people’s thinking, basically means you need to “crack the whip”. Which in turn means installing lots of surveillance and sacking people that do not meet some metric they are set, no matter how meaningless the metric. The result is that whilst there might be a few slackers in the organisation, repeated rounds of sackings actually kill the organisation, for two basic reasons:

1, The staff you really need will be smart enough to go get another job and leave as best they can on their own terms.

2, If you trim your workforce so it’s 100% productive over the year by some measure, it almost certainly will be too small a workforce, so when something goes wrong, or even right, like a new business opportunity, you lack the capacity to deal with it[1].

The point is, while each cut looks good on the balance sheet, it’s like losing weight by lopping limbs off: on paper it works, but in reality…[2]

One of the reasons for “middle management” is not, as most mistakenly believe, just to be a conduit for management orders being handed down and shop-floor reports sent up. They have two real purposes:

1, Interpretation: to convert boss speak and actions into worker speak and actions, and vice versa. Based on experience of what to put in, what to take out, and how to assign work where other experience is, as well as bringing on others so they gain more experience.

2, In times of stress, to short-circuit management and actually get down on the shop floor as experienced labour, or even just another pair of hands at the pump.

In the 1980s downsizing was all the rage; it was the productivity mantra of its day. The result: middle management were an “easy win” in the good times, but the moment the wind changed there was no one to trim the sails and keep things on an even keel… Many businesses thus either capsized or got becalmed; either way they lost the race and evolution took its toll.

I could go on, but this idea of computers and productivity comes from physical labour. Any job has two parts: physical skill and mental skill. You can get more done if you use force multipliers. An 18th-century navvy could shift ten tonnes of soil a day when digging canals, which is about three cubic metres. However, when steam shovels came along they could shift around twenty times what a navvy did. So the force multiplier increased the physical ability, but quickly forgotten was the new mental skill required to operate the steam shovel. Now we have vast machines run by just one operator that dig a hundred times or more what a navvy did in a day in just one use of the shovel. What most do not realise is that the skill level goes up not just in controlling the machine, but in not making mistakes. A navvy could only do a small amount of damage in a given time period, therefore a supervisor had time to fix things before they went too badly wrong. Now the operator is the supervisor, and they can do a thousand times the damage in the same time frame, thus they have to be mentally very much more on their game than the 18th-century supervisors. Which means their mental skill set has to be orders of magnitude better and faster when things do not go according to plan. That is, they have to have a “hinky” ability to almost see into the future, from “tells” most of us do not sense that are very early warning signs, that individually mean nearly nothing but in certain combinations mean trouble. The operators do not process these consciously but subconsciously, as the conscious mind does not have the ability.

Thus the modern idea that computers are “force multipliers” for much less physically oriented skills does not really hold. There is only so far you can push mental skills unless you take away complexity. But to stop stress you have to make the complexity nearly invisible to the user. In the process of hiding complexity you hide the “tells” or warning signs…

So if you can stay below what the computer has set as warning levels, you are invisible to the operator…

Thus the more complexity you remove the less secure the systems managed become.

The big mantra currently is to throw AI at the issue, but as most here have come to realise, AI does not really learn; it builds rules based on training data, often from people who have developed “hinky” through years of experience. Those rules, however, are really the same as “settings on warning levels”: they are the product of somebody’s learning, and they in no way confer the ability to “learn”.

As I tell students and beginners when talking about “testing techniques”, “running experiments”, or “prototyping”: you learn nothing from success other than “it works”; you don’t know “why”. It’s when things don’t work, and what you do to make them work, that you learn the “why”. So even when a prototype works you should change it so it does not work and learn its characteristics, that is its “sensitivities” and “warning signs”.

[1] For years the UK NHS has had contract consultants come in to tell them how to be more productive. It always ends up in confusion and lower front-line staff numbers. It always was bad news come winter flu season, as staff were overworked and almost certainly patients were harmed. I’ll let you imagine what’s going on with COVID; let’s just say after setting up “Nightingale Centers” that had beds, equipment, space… What they did not have was staff, for a whole variety of reasons. Including “double counting”: the Army has a medical corps, which due to Army cutbacks is mainly filled with “Reservists” who are actually really full-time NHS front-line staff, so sometimes 1+1=1 really is true.

[2] Crazy as it sounds, you can actually model this sort of behaviour on physical models of the Universe and gravitation. The only time such a system is stable is when neither production nor demand changes. However, if either goes up or down, then instability results. It gets worse when you make allowances for time, distance and inertia, which also have their equivalents in the production model.

Clive Robinson January 22, 2021 8:17 AM

@ JR,

The French CNIL doesn’t seem too invested. They gave Google permission to not abide by GDPR.

Hence my foreboding if the French make a power push whilst Germany is sorting out its potential political problems.

But there is more to it than just that: back last century, a head of the French external security services (their version of MI6/CIA etc.) was interviewed by, if I remember correctly, CNN. He freely admitted that the French carried out economic espionage as it was more cost-effective than the waste in R&D[1].

With regards,

I learned most of what I know about Cybersecurity from Germans. They really get it.

I’ve had a few run-ins with their internal security services… Basically, if you design POTS devices like cordless phones, they used to just wave it through if you used a Siemens chip, but they used to give you hell if you tried any other line interface circuit. Most in the telco industry thought it was just “back-door protectionism”… However, for various reasons having to do with work on standards committees, I was more suspicious. And yes, the Siemens chip had a fault in its “recommended circuit”; to cut a long story short, it was possible to access the microphone even though the phone was on hook. Basically a low-value capacitance was across the hook switch, so RF in the upper LF / lower MF bands would end up being modulated by the microphone… So it turned the phone into a bugging device if you put an “RF noise bridge” in the line but used an RF carrier rather than a noise diode. My boss was none too pleased when I showed him, but apart from allowing me to put some extra holes in the PCB there was little he or I could do about it (some years later he set up a company that made secure cellular phones using a secure smart card to do the crypto work).

Which brings me onto your observation,

The UK wrote the original data privacy laws and one can only hope.

Yes, that was in, shall we say, “more liberal times”, and there is no way such a piece of legislation would even get a “white paper” drafted, let alone get put through the legislative process. Another piece of UK legislation was the Regulation of Investigatory Powers Act, or RIPA… which has become the basis of other nations’ anti-crypto, anti-security, anti-privacy legislation. It is full of “assumed guilty” clauses where, as the defendant, you have to prove you are innocent… UK judges hate it, and it’s also a reason I designed a system to subvert its assumption of “you must know the KeyMat so hand it over, or go to jail”. The legislation’s drafters assumed that the “I don’t know it” defence would not be used because you “cannot prove a negative”; well, there is a way, involving the mathematics of, amongst other things, a circle[2].

The problems we have these days are,

1, Protection against 2nd party betrayal.
2, Deniable crypto.
3, Deniable authentication.

I have solutions for both deniable crypto and deniable authentication, which take you a fair stretch of the way to making betrayal by a 2nd party pointless, but I’m not yet all the way there.

Why chase this down?

Well, the simple fact is we have known with “proof positive” for over a century now that you cannot stop people using either codes or ciphers, no matter what back doors you put in the communications device end point. Rather than admit they have lost, and that you cannot legislate against either the laws of physics or the laws of mathematics, advisors to the legislators tie themselves in knots trying to stop people…

I see it as a “hobby”, much like my grandfather did: to spoil the work of such appalling and socially irresponsible people, for no better reason than that they deserve to be made to look like what they are, which is ineffectual.

Yeah, I know, not the best of motives, but there is the “social good” to consider: if I get it all sorted out then we get our privacy back by default. Because if it’s secure against SigInt agencies then it’s secure against the Silicon Valley and Washington State corps as well, and destroys their “user as a product” model.

But I guess the really important reason currently is it gives me something to do during COVID lockdown, other than go crazy watching thirty-year-old paint dry 😉

[1] As you might know in engineering parlance “The leading edge is the bleeding edge” due to the high cost of finding the right way to do something. And… As the cartoon said “It’s not only the cheese the second mouse gets”, which is very much the French political outlook currently. That is they are more than happy to let others do the work then “scr3w over” the rest of Europe if it even makes them look superior… An issue I kept bumping into with ESA some years ago.

[2] You might remember from school geometry lessons that you need a minimum of three points on the circumference to identify the position and diameter of a circle. Well, it’s true of spheres and other curved objects in higher dimensions. It forms the basis for one type of “shared secret” system. The advantage is you can supply any three points that are not all on the same line and you will get a circle, but you will not know if it’s the right circle or not without testing, and if one or more of the secret holders gives a false share you cannot show it… You need to do a few other things as well, but that’s the basis of it.
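
A minimal Python sketch of the geometric point (an illustration only, not the actual shared-secret construction described above): any three non-collinear “shares” determine exactly one circle, and the reconstruction itself cannot tell you whether a given share was honest.

```python
# Any three non-collinear "shares" (points) reconstruct exactly one circle,
# but the reconstruction cannot tell you whether it is the *right* circle.
import math

def circle_through(a, b, c):
    """Return (centre, radius) of the unique circle through three non-collinear points."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear: no unique circle")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = math.hypot(ax - ux, ay - uy)
    return (ux, uy), r

# Three honest shares on the circle x^2 + y^2 = 25 ...
print(circle_through((5, 0), (0, 5), (-5, 0)))    # ((0.0, 0.0), 5.0)
# ... and one false share: still yields a perfectly plausible-looking circle.
print(circle_through((5, 0), (0, 5), (-4, -4)))
```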

JR January 22, 2021 11:33 AM

@all

Perhaps we need to look at cybersecurity through a different lens now?

We keep focusing on methods to secure access to data and network monitoring. But the bad guys will always figure out how to overcome those obstacles.

If you look at the reason why cloud exists, it is clear where we need to go next. Cloud was created for ‘capacity on demand’ and because data centers were running out of space and power every few years.

Like their predecessors EMC and IBM, AWS, Microsoft and Google are now solely focused on growing data. This is their revenue stream. They put no thought into the ramifications of doing so. But the more data you have, the less ability there is to protect it. These vendors all monetize data too. Plus, not all data should be retained, and other data has strict retention periods. This is easily managed with structured data, but cloud claims that you do not have to structure data and even so you will be able to manage it. This is a falsehood.

Tech vendors solely want to grow data, and that is not in their customers’ or society’s best interest. How to solve?

Perhaps Sarbanes-Oxley (and its EMEA & APAC counterparts) should be updated for cybersecurity risk, and companies should be measured by how much data they have and its year-over-year growth. They should also be measured by how much data they expired. Instead, cybersecurity auditors and examiners focus on the most mundane aspects of cybersecurity and whether institutions have GRC tools.

So my idea is this. If you are driving in the safest car, but on an unsafe road – you are still unsafe. We need a new road for REGULATED data. All data is not created equal, yet it all travels together on the open internet and this is the problem. Even the US Military has their own wireless spectrum. They recognize that transport is the weak link.

If we had a SECONDARY internet just for regulated data that was secure, imagine the advancements that could be made in finance and healthcare. Instead, this data is stolen or sold and some tech vendors purposely move it offshore through or to territories where encryption is not allowed – and there is no possible way to protect it when that happens.

If we had a Government protected Internet for regulated data:

  1. We could also mandate that data traveling over it use Semantic Interoperability like SWIFT and this could provide the “data portability” that is required by GDPR and CCPA.
  2. This type of walled internet could then put an end to money laundering and fraud too.
  3. We could find a technological way to improve upon ACH (PCI) to stop theft such as being experienced by Unemployment across the USA.
  4. It could also be used to potentially vote online.
  5. Expiring data would be automated.
  6. We wouldn’t have to worry about ransomware afflicting our hospitals, critical infrastructure or pharmaceutical companies anymore either.
  7. Plus since this internet would be owned by government any incursion into it would be considered an act of war. It would end all of the BS we put up with now that is really negating our ability to advance. Technology is now at a standstill. Other than Tesla I am hard-pressed to identify any major bleeding edge tech over the past ten years. Everything is just incremental improvements on that which already existed.

And we cannot move to the dream of a 5G IoT always connected state unless we secure transport. ICS may be at issue for SW and FE. It would be insane to make it any less secure than it already is right now.

The open internet can remain the wild west for general use and sharing unregulated data. Social media can remain on the open internet.

Don’t Russia and China do this, by the way? Their internet and banking systems are ring-fenced. Alipay wouldn’t succeed outside of China due to this. I wonder how often they are hacked? Probably never.

lurker January 22, 2021 12:32 PM

@JR

If we had a Government protected Internet for regulated data: …

First “we” have to elect a government who believe that this is their duty, in the interest of their constituents. That’s a big ball of dung to roll uphill…

JR January 22, 2021 4:16 PM

@Lurker

I think they realize it now. The US Gov produces most of the regulated data, so it would primarily be for them. Otherwise neither the US Gov nor the regulated private sector will be allowed to use the Cloud. A few regulators were already suggesting that for the critical infrastructure sector. And of course this is going to be the result of this hack. The CEO of VMW jumped to Intel. That says everything about our future.

The regulated private sector is not only critical infrastructure; it also includes any vendors that do business with the US Government. Conceivably, it includes any institution that has a contract with the Federal Government and shares data with them, although this FISMA law (NIST 800-53) has been ignored up until now.

Cybersecurity in the USA has become about non-technical people in Government and the private sector who faked compliance. Hence it needs to be centralized. Banks, utilities and the tech sector need to focus on their core businesses. We’d probably have a new power grid if utilities had money for R&D. However, Jamie Dimon said that 25% of his budget goes to cybersecurity. Startups cannot do that. This is creating inequity in the USA/EU and thwarting entrepreneurship.

I get the ramifications but none of us have any data privacy now and I don’t think anyone can fake compliance once the DoD’s CMMC program is underway.

In September the US GAO released a report recommending that DHS take over Treasury’s cybersecurity exams. But SW changes everything and DHS already supervises cybersecurity in 11 of the 16 critical infrastructure sectors. Banks are global and this is a National Security issue. It makes most sense with DoD. Also the DOJ is under DHS, so they cannot have access to our data. They are supposed to be nice and ask for it first.

But I disagree that NIST 800-171 is sufficient. I’m a Ron Ross fangirl – hardcore 800-53/CSF all the way.

Bob Gort January 24, 2021 4:38 PM

OK, here’s an update candidate to JR’s previous glossary, from their most recent posting:

“ICS may be at issue for SW and FE.”

ICS = integrated circuits?
SW = software?
FE = iron (ferrous)? short for hardware?

So it becomes “integrated circuits may be at issue for software and iron.”

Frank Wilhoit January 24, 2021 7:24 PM

@ Ismar,

It is not primarily a question of “…once their complexity reaches a certain level.”

Complexity can be managed. Software engineers are not explicitly taught how to do this. It is the primary purpose of many programming languages, but not of most development toolchains. But managing complexity is every developer’s unconditional responsibility, and it is always possible, if varyingly tedious.

Most of the categories of intractable chronic cybersecurity problems result from the fact that the universally used computing platforms — Windows, Macintosh, Intel — evolved incrementally from consumer-grade toys.

Erdem Memisyazici January 24, 2021 11:18 PM

If your certificate is stolen there is little else you can do to secure anything. Administrators understand that if their IdP is compromised, all accounts are also compromised. This is also why your second factor is configured in the IdP as well. I’m not sure why something like Golden SAML is considered a SAML issue; it’s not, in my opinion. The same goes for JWTs, where if you have the key, you can forge requests. The whole point is that the attacker doesn’t have the key.
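
To make that point concrete, here is a tiny Python sketch using the PyJWT library (an illustration only; the key, issuer, and claims below are made-up placeholders, and the Golden SAML case is the same idea with the AD FS token-signing certificate and SAML assertions instead of JWTs): the verifier checks only the signature, so whoever holds the signing key can mint a token for any identity, MFA or not.

```python
# Whoever holds the signing key can mint tokens the verifier will accept.
import datetime
import jwt  # pip install PyJWT

SIGNING_KEY = "stolen-or-legitimate-secret"     # placeholder value

def mint_token(subject: str) -> str:
    """Anyone with SIGNING_KEY can issue a token for an arbitrary user."""
    claims = {
        "sub": subject,
        "iss": "https://idp.example.com",       # hypothetical issuer
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify(token: str) -> dict:
    """The relying party checks only that the signature matches the key."""
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"],
                      issuer="https://idp.example.com")

forged = mint_token("global.admin@victim.example")
print(verify(forged)["sub"])    # accepted as global.admin@victim.example
```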

The idea there is that an IdP is solely dedicated to not getting hacked, because smaller organizations can’t afford tanks, for example, but a major company can.

I remember we had a local office that claimed their security was excellent because their machines weren’t online. Then you ask, what about physical security? Why can’t I just back a truck in through the front window, grab the computers and drop them off somewhere? So if you are running an IdP you need armed guards and the ability to handle DDoS attacks, as well as to perform regular pen-testing. That’s pretty much the entire issue with security.

It’s what tied the security world to the military world, and manufacturing business as well.

This reminds me of certificate authority attacks as well where Google’s certificate is safe but some other certificate authority gets hacked and people don’t care so long as they see a valid certificate.

Clive Robinson January 25, 2021 3:51 AM

@ Erdem Memisyazici,

If your certificate is stolen there is little else you can do to secure anything.

Yes, it’s the root of trust (RoT) for your part of a federated system, and its loss affects not just your system but all the systems in the federated system. So it can be a very big deal (it’s why I do not like such federated systems, because they are unnecessarily fragile because of this issue).

However,

[W]hat about physical security? Why can’t I just back a truck in through the front window, grab the computers and drop them off somewhere?

Whilst this is a very real issue with general purpose computers and servers, it can be mitigated. Which avoids the need for some of,

[Y]ou need armed guards, ability to handle DDoS attacks, as well as perform regular pen-testing.

What you need is specialised hardware that contains both parts of the master PKcert inside tamper-proof hardware. Under certain protocols it signs a derivative certificate for the everyday servers, which only store a working certificate pair in the best security the general-purpose computer has (a security enclave these days). At boot-up the server starts a new enclave instance, which generates a new PKcert pair and requests it be signed by the specialised hardware. As part of this it uses the support in the CPU to uniquely identify itself, as effectively a shared secret. The specialised hardware revokes the previous working certificate.

Whilst such systems are not sufficiently secure to do away with robust physical measures like a physical cage, hard/strong room, or vault, the specialised hardware should keep the master PKcert pair secure for quite some time even after theft by Level III attackers. It also makes generating working PKcerts difficult, thus buying system owners time to respond and at the very least revoke the master PKcert pair before harm is done (making the theft pointless).
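
A very rough sketch of the issuance flow described above, using plain Ed25519 signatures from the Python `cryptography` package in place of real HSM or enclave APIs (all names here are made up, and this is an illustration rather than the actual design): the master key only ever signs short-lived working keys, and issuing a new one revokes the old.

```python
# Rough sketch: a master signing key (held in dedicated, tamper-resistant
# hardware in the real design; just an in-memory object here) certifies a
# fresh per-boot "working" key and revokes the previous one.
from dataclasses import dataclass, field
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

def pub_bytes(key) -> bytes:
    return key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

@dataclass
class MasterSigner:
    """Stands in for the specialised hardware holding the master key pair."""
    _key: ed25519.Ed25519PrivateKey = field(
        default_factory=ed25519.Ed25519PrivateKey.generate)
    revoked: set = field(default_factory=set)
    current: bytes = b""

    def issue_working_key(self, new_pub: bytes) -> bytes:
        """Sign a newly generated working public key and revoke the previous one."""
        if self.current:
            self.revoked.add(self.current)
        self.current = new_pub
        return self._key.sign(new_pub)

    def is_valid(self, pub: bytes, signature: bytes) -> bool:
        if pub in self.revoked:
            return False
        try:
            self._key.public_key().verify(signature, pub)
            return True
        except InvalidSignature:
            return False

# At each boot the server generates a fresh working key and has it certified.
master = MasterSigner()
boot1 = ed25519.Ed25519PrivateKey.generate()
sig1 = master.issue_working_key(pub_bytes(boot1))

boot2 = ed25519.Ed25519PrivateKey.generate()          # next boot / rotation
sig2 = master.issue_working_key(pub_bytes(boot2))

print(master.is_valid(pub_bytes(boot1), sig1))        # False: revoked
print(master.is_valid(pub_bytes(boot2), sig2))        # True: current working key
```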

At the end of the day, “security” cannot be 100%, and in “physical security” it’s almost always seen not as a “method of prevention” but as a “method of delay”, so that other resources can be brought into play.

Maxie January 25, 2021 11:59 AM

This is pure undiluted pentagon propaganda. The “russians” are being accused again of “attacking” the poor innocent Americans? Please.

Clive Robinson January 25, 2021 8:14 PM

@ Maxie,

This is pure undiluted pentagon propaganda.

Whilst I agree it’s propaganda, due to the fact the call was made long before any real evidence of how the attack happened was found, I don’t actually think it’s the “pentagon” behind it, and I’m also doubtful it was the intelligence community either. I suspect it was, and still is, pure political propaganda from the then-outgoing US administration.

By the way, I’m not saying it is not Russia, or China, France, Iran, Israel, North Korea, etc., or even the US itself[1]. What I am saying is we did not have the evidence to say anything at all when the “tis Russia wot dunit” call went out.

We may never get sufficient evidence to say “It was xxx”; in fact we probably won’t. However, we may get “indicative HumInt”[2] by which some may make an attribution. However, attribution made on “network SigInt” is generally a bad idea to make a public call on[2], because as the US demonstrated when they blamed North Korea for attacks on the Olympics, they made the wrong attribution by a long shot. Back then it was Russia, getting back for having its athletes banned from participating in the Olympics by the IOC under political pressure. From what has been said by some, the Russians were not actually running a deliberate “false flag operation”; they were just being sufficiently covert so they could get maximum time to get damage in, knowing that both NSA and CIA technical operatives were the opponents. From that point of view they were fairly successful.

[1] The US in particular, and more generally the Five Eyes, are enamoured of the “bug-door” or similar, on the much-mistaken NOBUS idea. One of their most subtle ones, the supposedly cryptographically secure Dual Elliptic Curve Deterministic Random Bit Generator (Dual-EC-DRBG), which the NSA forced into a NIST standard, got found fairly quickly and “outed”. Thus I suspect that this was a deliberate “bug-door” by a law enforcement or signals intelligence agency that “got found, used and abused” by others. Thus there are probably two players to be identified: those who designed/pushed the “bug-door” and those who used and abused it. Whilst we might identify the latter, we are only likely to find out the former when a whistleblower’s shrill is heard.

[2] The Dutch previously got “HumInt” via “SigInt” when they turned on the web cameras in laptops and saw faces and fingers actively running and correlating with the attacks. This method got “burned” when, for political reasons, US politicians “flapped their gums”, revealing publicly yet again another country’s “methods and sources”. The reason that HumInt over SigInt worked was a lack of foresight, overconfidence, doing things on the cheap, and probably a lack of ability by the attackers as well[3]. The important point being that SigInt on wide area networks is easily subject to all kinds of “smoke and mirrors”, and you cannot tell when a computer has been told to lie to you. Thus “network SigInt” fails various safeguards such as “two independent sources”, “verifiable source points”, etc. Boots-on-the-ground HumInt, and HumInt in general, are orders of magnitude better than “network SigInt”. Thus, as a rule of thumb, “network SigInt” is unreliable and at best only indicative against a significantly inferior opponent. In other words, something any sane person would not make public accusations on at a national/diplomatic level, because all it does is leave you open to ridicule for no real gain.

[3] The reason it worked was because the attackers were “all offence with no defence”. If they had taken even a few basic precautions, using IDS or various types of “gapping” or just “hot glue / sticky tape”, then what the Dutch did would either have been caught very early on by an IDS, or blocked entirely by a gapping system, or just nullified by something over the lens like sticky tape. One type of easy gapping is to use a modified-style VNC system across a non-network-protocol serial line. But a sensible person would, as well as any other defensive method, disconnect the cameras and microphones in the actual devices the attackers worked from. Using a screwdriver to open the case and a soldering iron to remove a wire from each microphone and camera, or just simply pulling out a plug, is not exactly “rocket science”.

Maxie January 29, 2021 12:08 PM

Clive said :
“We may never get sufficient evidence to say “It was xxx” in fact we probably won’t”
“The important point being SigInt on Wide Area Networks is easily subject to all kinds of “smoke and mirrors””

Exactly. It should be obvious, especially on a site like this, that an endless stream of fake “digital evidence” can be created by pushing a button. Of course even actual physical evidence isn’t trustworthy when it comes from governments, but “evidence” coming from digital computers to which the public has no access is a total and complete joke.

SpaceLifeForm February 2, 2021 6:09 PM

@ Clive, ALL

Fake Firewalls

So, if Russia wants to go total global firewall, what is the problem?

Can we get Iran and North Korea to do the same?

Can we convince China to tighten their firewall?

Will they only allow their attacks thru?

Or, should the US just drop all of their IP traffic?

Who is trying to BS whom?
