Russia’s SolarWinds Attack and Software Security

The information that is emerging about Russia’s extensive cyberintelligence operation against the United States and other countries should be increasingly alarming to the public. The magnitude of the hacking, now believed to have affected more than 250 federal agencies and businesses—primarily through a malicious update of the SolarWinds network management software—may have slipped under most people’s radar during the holiday season, but its implications are stunning.

According to a Washington Post report, this is a massive intelligence coup by Russia’s foreign intelligence service (SVR). And a massive security failure on the part of the United States is also to blame. Our insecure Internet infrastructure has become a critical national security risk—one that we need to take seriously and spend money to reduce.

President-elect Joe Biden’s initial response spoke of retaliation, but there really isn’t much the United States can do beyond what it already does. Cyberespionage is business as usual among countries and governments, and the United States is aggressively offensive in this regard. We benefit from the lack of norms in this area and are unlikely to push back too hard because we don’t want to limit our own offensive actions.

Biden took a more realistic tone last week when he spoke of the need to improve US defenses. The initial focus will likely be on how to clean the hackers out of our networks, why the National Security Agency and US Cyber Command failed to detect this intrusion and whether the 2-year-old Cybersecurity and Infrastructure Security Agency has the resources necessary to defend the United States against attacks of this caliber. These are important discussions to have, but we also need to address the economic incentives that led to SolarWinds being breached and how that insecure software ended up in so many critical US government networks.

Software has become incredibly complicated. Most of us don’t know all of the software running on our laptops or what it’s doing. We don’t know where it’s connecting to on the Internet—not even which countries it’s connecting to—or what data it’s sending. We typically don’t know what third-party libraries are in the software we install. We don’t know what software any of our cloud services are running. And we’re rarely alone in our ignorance: finding all of this out is incredibly difficult.

This is even more true for software that runs our large government networks, or even the Internet backbone. Government software comes from large companies, small suppliers, open source projects and everything in between. Obscure software packages can have hidden vulnerabilities that affect the security of these networks, and sometimes the entire Internet. Russia’s SVR leveraged one of those vulnerabilities when it gained access to SolarWinds’ update server, tricking thousands of customers into downloading a malicious software update that gave the Russians access to those networks.

The fundamental problem is one of economic incentives. The market rewards quick development of products. It rewards new features. It rewards spying on customers and users: collecting and selling individual data. The market does not reward security, safety or transparency. It doesn’t reward reliability past a bare minimum, and it doesn’t reward resilience at all.

This is what happened at SolarWinds. A New York Times report noted that the company ignored basic security practices. Because it was cheaper, it moved software development to Eastern Europe, where Russia has more influence and could potentially subvert programmers.

Short-term profit was seemingly prioritized over product security.

Companies have the right to make decisions like this. The real question is why the US government bought such shoddy software for its critical networks. This is a problem that Biden can fix, and he needs to do so immediately.

The United States needs to improve government software procurement. Software is now critical to national security. Any system for acquiring software needs to evaluate the security of the software and the security practices of the company, in detail, to ensure they are sufficient for the security needs of the network the software is being installed in. Procurement contracts need to include security controls for the software development process. They need security attestations on the part of the vendors, with substantial penalties for misrepresentation or failure to comply. And the government needs to develop detailed best practices that both agencies and companies can follow.

Some of the groundwork for an approach like this has already been laid by the federal government, which has sponsored the development of a “Software Bill of Materials” that would set out a process for software makers to identify the components used to assemble their software.
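The idea behind a Software Bill of Materials is easiest to see in miniature. The sketch below is a toy example loosely modeled on the CycloneDX format—the product and component names are hypothetical, and a real SBOM would be generated by build tooling rather than written by hand—but it shows the question an SBOM lets a buyer answer: "does my software contain the affected component?"

```python
import json

# A toy Software Bill of Materials, loosely modeled on CycloneDX.
# Component names and versions here are hypothetical placeholders.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "openssl", "version": "1.1.1i"},
        {"type": "library", "name": "zlib", "version": "1.2.11"},
    ],
}

def contains(sbom, name, version):
    """The question an SBOM exists to answer:
    is this exact component/version in the product?"""
    return any(
        c["name"] == name and c["version"] == version
        for c in sbom["components"]
    )

print(json.dumps(sbom, indent=2))
print(contains(sbom, "openssl", "1.1.1i"))  # True
```

When a vulnerability in a specific library version is announced, an operator with SBOMs on file can query every product in the inventory this way instead of waiting on each vendor's disclosure.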

This scrutiny can’t end with purchase. These security requirements need to be monitored throughout the software’s life cycle, along with what software is being used in government networks.

None of this is cheap, and we should be prepared to pay substantially more for secure software. But there’s a benefit to these practices. If the government evaluations are public, along with the list of companies that meet them, all network buyers can benefit from them. The US government acting purely in the realm of procurement can improve the security of nongovernmental networks worldwide.

This is important, but it isn’t enough. We need to set minimum safety and security standards for all software: from the code in that Internet of Things appliance you just bought to the code running our critical national infrastructure. It’s all one network, and a vulnerability in your refrigerator’s software can be used to attack the national power grid.

The IoT Cybersecurity Improvement Act, signed into law last month, is a start in this direction.

The Biden administration should prioritize minimum security standards for all software sold in the United States, not just to the government but to everyone. Long gone are the days when we can let the software industry decide how much emphasis to place on security. Software security is now a matter of personal safety: whether it’s ensuring your car isn’t hacked over the Internet or that the national power grid isn’t hacked by the Russians.

This regulation is the only way to force companies to provide safety and security features for customers—just as legislation was necessary to mandate food safety measures and require auto manufacturers to install life-saving features such as seat belts and air bags. Smart regulations that incentivize innovation create a market for security features. And they improve security for everyone.

It’s true that creating software in this sort of regulatory environment is more expensive. But if we truly value our personal and national security, we need to be prepared to pay for it.

The truth is that we’re already paying for it. Today, software companies increase their profits by secretly pushing risk onto their customers. We pay the cost of insecure personal computers, just as the government is now paying the cost to clean up after the SolarWinds hack. Fixing this requires both transparency and regulation. And while the industry will resist both, they are essential for national security in our increasingly computer-dependent world.

This essay previously appeared on

Posted on January 8, 2021 at 6:27 AM • 30 Comments


name.withheld.for.obvious.reasons January 8, 2021 7:02 AM

Who didn’t see this coming, even if it is a reverse psych-op?

Clive Robinson January 8, 2021 7:13 AM

@ Bruce, ALL,

According to a Washington Post report, this is a massive intelligence coup by Russia’s foreign intelligence service (SVR). And a massive security failure on the part of the United States is also to blame.

It’s what you would expect when by far the majority of effort is spent on “Attack” not “Defence”

@ name.withheld…

Who didn’t see this coming…

Only those too young, ill-educated in life as well as academically, or who do not study even very recent history.

And to be honest, the wailing and gnashing of teeth over the fact that others are doing to the US what the US has been doing to them tells me more about the problems in the US than it does anything about the attackers.

@ ALL,

As I noted the other day, US politicos have claimed “It’s Russia wot dunit” without any evidence to back it up. Because subsequent evidence has made liars of them…

Petre Peter January 8, 2021 7:48 AM

Neither candidate talked about privacy in their campaign. We have to bring the issues of hacking and privacy to our leaders: “they won’t matter to them if they don’t matter to us.”

Trey Henefield January 8, 2021 7:53 AM

They have already started the process of addressing this, through enforcement of the new Cybersecurity Maturity Model Certification (CMMC) requirements.

But as with anything in the government, the rollout and enforcement has been too slow to be effective in addressing these issues we have already been impacted by.

But I think that if we take this process more seriously and enforce its requirements in procurement, it will certainly help ensure government equipment and software are purchased from organizations that meet these assurance requirements in their development environments.

Internet Individual January 8, 2021 8:31 AM

In my humble opinion, a better idea (at least regarding intelligence agency software) would be to not use corporations or the private sector at all. Instead, we should be forging stronger ties with our close allies. We should get all the top talent from like-minded countries such as the UK, Canada, Australia, and Israel, and have a unified software team that would make Apple or Microsoft blush. One reason I think this is the better way to go is that at the end of the day, corporations are always going to be cutting corners for profit; that is their nature. I think trusting corporations run by international shareholders with the Western world’s well-being is a mistake. It’s easier to sneak spies into a corporation than into an intelligence agency. Spies will be targeting software devs they know work for DoD. Corporations can’t protect and monitor them all; it would cost them too much. That’s all it would take for a foreign country to blackmail, coerce, or bribe employees to either slip in bad code, or analyze the code for ways to see how it works and sneak in.

Boris January 8, 2021 9:01 AM

The fundamental problem is one of economic incentives. The market rewards quick development of products.

So the problem is in the western kapitalistic system.

This can be fixed by Russia.

Jim K January 8, 2021 9:39 AM

To draw a parallel with safety culture, you might use any old bit of rope to do a lift – it would probably work – but the professionals use purpose made slings or chains, rated for the job. And there are still accidents.
The rope from Home Depot/B&Q/Bunnings might be cheaper, but the coroner isn’t going to think much of your reasoning or risk assessment.
Can we look to, say, civil aerospace as a culture to follow? After all, the 737 Max issues are scandalous precisely because of the industry’s history of reducing accidents through open investigations and broad collaboration.

Internet Individual January 8, 2021 9:58 AM

More to add to my last comment:

The 5 eye countries are sharing and using most of the same software anyways I would imagine. Everyone was put at risk from profit seeking corporations. Western countries all have the same needs that this would address. Systems for Power Grids, Elections, Government, automated vehicle infrastructure, etc. Those systems can’t be open source for obvious reasons, but this might be the next best option. The more eyes that can audit code the better. Create new processes and best practices for writing security minded code. Using new and diverse methods from the leading minds from all involved countries.

Some ideas I had that may or may not be used: Adopt a method in which every coder audits two modules for every one that they write. Each coder should be expected eventually to be an expert code auditor. A measure twice cut once approach.

Once the software is compiled and deployed, end-user employees won’t have access to the code to see how it works. It will be fully supported by the same dev teams. Write the software with the idea that the hardware it’s operating on is fully compromised. Possibly use a streaming system, in which nothing gets installed or written on work terminals. Write operations are sent through a separate, heavily vetted system/network to audit any input data for malicious code or unauthorized uses. This way nothing unauthorized can be installed or run on the endpoint, and the user can’t do anything they don’t have permissions for. Have strict requirements for hardware from vendors building terminals that can’t physically allow alternative instructions to be run outside the designated firmware. No USB drives, WiFi, etc.

The datacenters that not only hold the data but stream the user’s OS environment will likely need to be run from government-owned and secured datacenters so they are safe. The idea being that the terminals can withstand the abuse, but the servers will be the most vulnerable. Build a network with security as a primary feature. AI will monitor every action from every user all the time and alert a security person when an action is detected that is outside a user’s pre-established baseline routine. AI is good at recognizing patterns. Each user will have a profile with their own behavior traits: everything from the way they type and move the mouse, to the programs they run and at what times, etc. The AI can automatically disconnect the user from the terminal until a security person can assess the situation and verify with the user, and possibly their superior, any use outside of the pre-established baselines.

Anyways, those are just some random thoughts I had. The idea being to take away the uncertainty, risks, and dependency on a random vendor and their priorities or intentions. Full control over the systems from top to bottom, while taking away the one major threat or vulnerability, which is the user. Obviously this system shouldn’t be directly connected to the current Internet as we know it. Internet use will need to be separated in some ingenious way. Maybe a read network and a write network, physically separate, with actions processed only if certain conditions are met.

Dan Geer January 8, 2021 9:59 AM

Lots of wishful thinking, per all the “should” and “need.” Retired tech C-suite denizen trying his hand at public policy? Begging the elites to change their wicked ways? Dream on good sir.

It’s also surprising to see Bruce use all of the same superlatives that he chided the cyberwar crowd for using years ago. I guess wild exaggeration is OK if it supports your narrative.

Jackie Childs January 8, 2021 10:19 AM

@Dan Geer

Have to agree, American intelligence will use this incident to grab more power. Steal our freedoms in the name of security. The road to hell is paved with good intentions.

False flag ops are everywhere. This could very well have been domestic actors looking to raise more cash. Security incidents are wonderful funding drives. Just like War,

Steven Keil January 8, 2021 10:48 AM

Both sides socialize the costs and retain the profits. See any recent sports stadium as an example. This is just a similar situation. Reasonable regulation has a place in a free market since we can’t count on people putting others interests and well being first.

mark January 8, 2021 11:31 AM

“Why the NSA failed to detect”?

I’ll tell you why: I know someone who shall remain nameless that I have 100% confidence and trust in, who has worked for the NSA since ’12. He is stationed now in the US, and has been for several years. He’s been moved up to the Dark Side (management). For almost a year, he tells me that his group is down to… him, by himself. People have left or resigned, and NO ONE has been hired.

But, this is the “smaller government”, right?

David Leppik January 8, 2021 11:42 AM

These sorts of breaches are hard to defend against precisely because they are rare. Meanwhile, other pressures, such as training employees and fending off the competition, are day-to-day issues that are far easier to grapple with.

The NY Times wrote a remarkably vague article the other day claiming that JetBrains’s TeamCity continuous integration software was used in the attack, making pointed references to the fact that JetBrains has developers in Russia.

JetBrains responded that (a) they don’t know anything about the allegations, and (b) it could have been a configuration issue. Later they said they are proactively reaching out to try to fix these problems.

JetBrains has been selling Java development tools since the early 2000s, and has become the biggest commercial IDE developer in the world by getting developers to convince their bosses that their products are worth hundreds of dollars more than the free, open-source competition. That is to say, they are really good at what they do. But like any software company, security is only part of what they do, and not necessarily the part that pays the bills unless there’s a major hack.

They also have a development team based in Russia that’s developing Kotlin, a language that competes with Java. As a multi-language developer, I can tell you it’s one of the best.

The tool in question here is Continuous Integration (CI) software. It’s the state-of-the-art way of developing web and back-end software. The idea is that rather than having scheduled software releases, you have a release workflow that’s been turned into a CI pipeline. Developers write some code, it gets reviewed and tested, and if the change passes it gets deployed immediately. It’s not a completely automated process, but it removes bureaucratic delays.
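The gate at the heart of such a pipeline can be sketched in a few lines. This is a toy illustration of the idea, not how TeamCity actually works; the test and deploy commands below are placeholders standing in for a real test suite and deployment script.

```python
import subprocess

# A toy continuous-integration gate in the spirit described above:
# run the test suite, and deploy only if it passes. A real CI server
# (TeamCity, for example) chains many such steps, with build history
# and audit logs around each one.
def ci_gate(test_cmd, deploy_cmd):
    if subprocess.run(test_cmd).returncode != 0:
        print("tests failed; deployment blocked")
        return False
    subprocess.run(deploy_cmd, check=True)
    print("tests passed; deployed")
    return True

# Placeholder commands standing in for e.g. "pytest" and "./deploy.sh".
ci_gate(["true"], ["echo", "deploying"])
```

The SolarWinds lesson is that this gate is also an attack surface: whoever controls the machine running it controls what gets shipped, which is why build servers deserve the same protection as production.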

The NY Times article insinuates that we can’t trust Russian code inside American corporations. But the world isn’t so simple. JetBrains employs many of the best developers in the world, and many of them are in Russia. I’d rather have them writing a commercially-funded open-source programming language than have them hired by the Kremlin. Also, JetBrains tools make me much more productive.

There are risks up and down the toolchain. Years ago Ken Thompson showed, in “Reflections on Trusting Trust,” that the provider of a bootstrapping C compiler binary could hide a backdoor in Unix even if everyone can see the source code. JetBrains writes its own software in Java/Kotlin. JetBrains provides a handy decompiler you might not want to trust to audit their code. But your Java Virtual Machine (JVM) wasn’t written by JetBrains (though by default they provide the binary), so if you really don’t trust them you can in theory audit and use their software. But there are so many other software providers that may be just as easy for Russia to infiltrate that you can’t just go by nationality.

Long story short: as long as you have anyone involved in the development and deployment workflow who gets sloppy, there will be paths to exploitation. Software is big and complicated, and companies favor complex solutions which are easier to deploy and train people to use.

Put another way, there is so much low-hanging fruit when it comes to enterprise development that it hardly matters even if one of JetBrains’s Russian employees were working on an exploit.

Clive Robinson January 8, 2021 12:02 PM

@ Jackie Childs,

False flag ops are everywhere.

OK step one is an open mind on what’s being presented.

Step two is come up with a hypothesis,

This could very well have been domestic actors looking to raise more cash.

But don’t jump ahead. The first part of that statement is logically supportable, in that they are known to have the tools to carry out false flag operations; they did lose them, after all.

But the second part is too open: it ranges from a truth to a wild assumption. Yes, success in what they do can get them extra funding; that is true of most entities within a larger organisation. However, some will take it as an implication of an old “CIA-style fund raiser” under Dulles in the 1960s, or something such as Oliver North got up to in the 1980s. Whilst it has happened, as the old saying goes, “Extraordinary claims require extraordinary proof,” and that is not as easy to get as we might like. Therefore indicative evidence is required to at least find the start of the trail.

And so on to the old “Defense Funding Dilemma”,

Security incidents are wonderful funding drives. Just like War,

As the old saying has it, “You never know if you are spending too much, but you do find out quite quickly if you are not spending enough, because you get attacked.”

The prime example of this was the Falkland Islands back in the 1980s. The UK went through several rounds of defence spending cuts under Margaret Thatcher. Then, with the islands in the south Atlantic left more or less undefended, the Argentinian government, which had significant issues at home, decided a nice patriotic war would make it popular again. It ended in a war that Argentina lost. Whilst the UK won, the cost was high, not just in spending but in lives lost. Margaret Thatcher went from being very low in the opinion polls to a runaway success at election; hence Tony Blair signed up to the disasters that Iraq became.

Thus the only people that win at war are those who make the weapons and bandages, and the likes of politicians.

Clive Robinson January 8, 2021 12:14 PM

@ Internet Individual,

We should get all the top talent from all like minded countries such as UK, Canada, Australia, Israel, and having a unified software team that would make Apple or Microsoft blush.

More like roll around on the floor laughing.

You do not get “top talent” when you only offer Government salaries and benefits. Especially with the fear that not only will it all be taken away from you, but they will bankrupt you if they find it convenient to do so.

Some of us made the GS mistake and woke in sufficient time to get the heck out of there and actually get on with things, cutting my own path.

Those that stay in GS to collect pensions are nothing close to “top talent”. The smarter sociopathic ones see it as a stepping stone to the revolving door to a lucrative position with a defence contractor. As a rule, sociopaths do not make “top talent” in engineering or software.

Years ago the NSA used to attract some top talent, simply because they were the only game in town. That is just not true these days, and not just the money but the respect of others is in the private sector.

MikeA January 8, 2021 12:29 PM

One can hope that any standards and processes will be better done than the whole ISO-900x fiasco, where a perfectly acceptable result was to approve shooting oneself in the foot (or a concrete life-vest) as long as the process was documented and the document followed. Plus of course the expenditures for all the consultants who weighed the documentation and ticked the box.

One can also hope for a baby unicorn come Christmas morning, but one might be disappointed.

parabarbarian January 8, 2021 12:32 PM

@ Clive Robinson

“As I noted the other day US politicos have claimed “It’s Russia wot dunit” without any evidence to back it up. Because subsequent evidenve[sic] has made liars of them…”

My money is on the Chinese Communists.

Winter January 8, 2021 1:35 PM

“My money is on the Chinese Communists”

Are there still communists in China? Where?

I haven’t seen any when I was there. Next time I will ask around where they can be spotted.

JonKnowsNothing January 8, 2021 1:44 PM


I see your Russian, spot your Chinese Communists, and raise you Three Punic Wars…

ht tps://
(url fractured to prevent autorun)

Denton Scratch January 9, 2021 8:01 AM

@Bruce Your emphasis on “regulation” is a bit bewildering.

The software business has historically been built by mavericks and independents, people who revel in the lack of internet regulation. Free (both beer and liberty) software was not developed by people who study government regulations.

Hurrah for your encouragement of the US government to set standards for government procurement; that would indeed trickle down to the rest of the market. But simply declaring that such regulations apply to everybody is daft – what are they going to do? Send the feds around to arrest some kid in a bedroom? Declaring instead that federal agencies are forbidden to use his software is much more likely to have an effect.

gregdn January 9, 2021 1:58 PM

Why is that every time a government agency fails it asks for more money? It’s as predictable as the sun rising in the East.
Try cutting their budgets a bit when they fail. That might actually get results.

Clive Robinson January 9, 2021 4:09 PM

@ Denton Scratch,

The software business has historically been built by mavericks and independents, people who revel in the lack of internet regulation.

So was the Victorian “artisanal” boiler making, but due to body parts flying through the sky, regulation was a necessity to turn “creative idiots”, via nascent science, into “engineers”.

Whilst not as graphically visible, software kills significant numbers of people in various ways.

It’s why in various engineering disciplines software development is strictly controlled.

However, as the 737 MAX software failings that crashed two aircraft and killed a couple of hundred people show, “regulation by management sign-off” is a failure.

Yes, the cost of regulation will be high. Yes, products will be slower to market. Yes, software people will have to go through engineer-like training. Yes, FOSS will suffer, and should do. But worst of all, it will create a “closed shop”, much as the legal, accountancy, medical, and some engineering professions require.

But the underlying question that you have to answer is,

“Why do you place your right to do as you feel above your responsibility to society?”

As a qualified engineer in several disciplines, I know that when I sign off on a project I take responsibility, not just morally but legally in some jurisdictions. Can you say the same?

I suspect not.

But there is a second issue: there are two types of regulation. Those which are “do as I say” prescriptive, which are failures from the get-go, and those that say “stay within the bounds set by this framework and its derivative standards”; the latter not only work, they stay current and are usually the best that is reasonable engineering-wise.

Which brings up a third, quite fundamental issue. Standards are based on suitable measurands; most engineering disciplines have them, and sound mathematical models in which to use them.

Software development has no suitable measurands or realistic mathematical models…

Something we very urgently need to address as I’ve been saying for certainly more than a decade now.

Then and only then can we move from constructive engineering in the face of probabilistic events, to security engineering in the face of hostile deterministic attack.

The important thing to note about the above paragraph is that random probabilities are always expressed as being “less than one” and are frequently multiplied. Thus three independent measures in series with a 10% risk each give an overall risk of 0.1% (0.1 × 0.1 × 0.1 = 0.001). Getting to “1 − 0.99999”, whilst relatively hard and expensive, is nothing in comparison to assessing risk from a deterministic attack source. This requires radically different thinking and processes, and at best only a tiny fraction of programmers are currently capable of doing what is required, and they almost always have a solid education and practice in engineering.
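The series combination above can be checked numerically. This is a minimal sketch that assumes the measures fail independently of one another, which is exactly the assumption a deterministic attacker breaks:

```python
# Independent defences in series: a random failure gets through only if
# every measure fails. Three measures that each fail 10% of the time
# give a combined risk of 0.1 * 0.1 * 0.1 = 0.001, i.e. 0.1%.
def combined_risk(per_measure_risks):
    risk = 1.0
    for p in per_measure_risks:
        risk *= p
    return risk

# A deterministic attacker invalidates the independence assumption:
# one crafted exploit can bypass all three measures at once, so the
# multiplication no longer applies.
print(round(combined_risk([0.1, 0.1, 0.1]), 6))  # 0.001
```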

This is not something you can really argue against, because of “think of the children” who died on those aircraft.

Sorry to be brutal about it, but the software industry, especially management, really, really needs to wake up to the realities of life.

Oh, and I am reasonably certain it can be shown that Microsoft, amongst others, has killed people as a direct consequence of procedures and inactions that are more than negligent. If somebody senior there wants to argue about it, they can come along and try to weasel out… But whilst they might claim their licence protects them, they know, or should do, that they ship product that is not of merchantable quality, which manufacturers of engineered and physically manufactured products have to comply with.

Thus why should software be exempt from moral or legal responsibility, when all other physical goods, even “art”, are held to account?

As for your “send the Feds” comment, had you not noticed they already do, for “harmful or hateful” coding? Even if the kid themselves does not use it for harmful actions, just making it available is a crime, and has been since the 1980s…

Clive Robinson January 11, 2021 12:24 PM

@ Ollie Jones,

Our prospects and customers routinely hammer us with infosec questionnaires.

Out of curiosity have any asked about the little conundrum playing out currently?

As you are probably aware, a number of outfits that consider themselves to be “news” or “social media/networking” organisations are being evicted from the services and systems they rented.

Obviously this is perfectly legal and well within the system owners’ “terms and conditions of service provision”. The renters broke the rules and got legally evicted.

But it goes further: the Internet being a “network of networks”, where the networks are not publicly but privately owned and carry the same rights of ownership. Thus traffic could traverse numerous privately owned networks between source and destination.

CloudFlare has in the past disconnected service providers (systems/networks) from its networks for sending traffic that CloudFlare indicated was against its contractual agreements and T&Cs. So the “somewhere in the network” disconnect principle has been clearly established.


What protections do you have in your service agreements to ensure connectivity between your SaaS and the clients location against such disconnection?

I suspect it’s a question some entities are going to start not just thinking about but asking in the very near future and even more so in the longer term.

It’s something I’ve been seriously pondering since the UN ITU meeting in Doha back in 2014, where it became obvious that nation states wanted not just autonomy over their national networks but also the power to do such disconnecting against other nation states for whatever reason they decided.

The answer that I had was not a good one. In essence it’s multiple fully redundant and fully independent networks. However, an examination of the physical networks that make up the Internet shows “choke points”, many of which are in “Five-Eyes” territory and control (due to the history of the British Empire pre-20th century).

Priyanka agarwal January 12, 2021 3:28 AM

At this time, being individually responsible for your acts on the Internet is the best option. I have used some free security solutions for MFA for 5 years now, and I feel my data is much more secure. I would definitely suggest everyone look for their own security solution.
P.S. I use a solution from rcdevs

SpaceLifeForm January 14, 2021 6:39 PM

@ Some russian, Clive

Interesting dots. Compare notes.

h t tps:// chnology/2018/01/dutch-intelligence-hacked-video-cameras-in-office-of-russians-who-hacked-dnc/

SpaceLifeForm January 14, 2021 11:43 PM

@ Clive

“somewhere in the network”

Economic espionage and extortion.

What else are ads and phishing for?

Clive Robinson January 15, 2021 9:53 AM

@ Detroit Linguist,

When the original “It’s Russia wot dunnit” claims were made by a US political post holder, the evidence was certainly not all acquired.

So at that time the claim was certainly false and made for political reasons not evidence based ones.

I’m not saying Russia did not do it, I’m simply stating the fact that there is no publicly available evidence to say it was Russia.

Further, the likes of FireEye are usually quite quick to follow the US Government’s attribution statements.

The fact that the only private organisation prepared to blame the Russians is itself Russian (Kaspersky) is somewhat novel, to put it mildly.

My view is we’ve one heck of a lot more evidence to track down, and even then I suspect the evidence will be little more than very, very vague assumptions that would be very easy to fake.

Remember, US political appointees make unfounded statements all the time. But the big one was supposedly based on NSA and CIA high-grade intelligence, where they said the attacks on the Olympics were North Korea… We now know it was more likely Russia getting even after the IOC and others banned their athletes from participating…

With regards this particular series of events, I have a sneaky suspicion that part of the problem was from a US agency putting back doors in. Thus what has happened is someone else discovered the US “bug-door” and used it…

It’s kind of what you would expect to happen with “bug-doors” and even NOBUS back doors…

