Full Disclosure and the Boston Farecard Hack

In eerily similar cases in the Netherlands and the United States, courts have recently grappled with the computer-security norm of “full disclosure,” asking whether researchers should be permitted to disclose details of a fare-card vulnerability that allows people to ride the subway for free.

At issue in the Dutch case was NXP’s Mifare Classic chip, used in the London Tube’s “Oyster card” and in the Dutch transit card; a similar fare card used on the Boston “T” was at the center of the U.S. case. The Dutch court got it right, and the American court, in Boston, got it wrong from the start—despite facing an open-and-shut case of First Amendment prior restraint.

The U.S. court has since seen the error of its ways—but the damage is done. The MIT security researchers who were prepared to discuss their Boston findings at the DefCon security conference were prevented from giving their talk.

The ethics of full disclosure are intimately familiar to those of us in the computer-security field. Before full disclosure became the norm, researchers would quietly disclose vulnerabilities to the vendors—who would routinely ignore them. Sometimes vendors would even threaten researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers started disclosing the existence of a vulnerability but not the details. Vendors responded by denying the security holes’ existence, or calling them just theoretical. It wasn’t until full disclosure became the norm that vendors began consistently fixing vulnerabilities quickly. Now that vendors routinely patch vulnerabilities, researchers generally give them advance notice to allow them to patch their systems before the vulnerability is published. But even with this “responsible disclosure” protocol, it’s the threat of disclosure that motivates them to patch their systems. Full disclosure is the mechanism by which computer security improves.

Outside of computer security, secrecy is much more the norm. Some security communities, like locksmiths, behave much like medieval guilds, divulging the secrets of their profession only to those within it. These communities hate open research, and have responded with surprising vitriol to researchers who have found serious vulnerabilities in bicycle locks, combination safes, master-key systems and many other security devices.

Researchers have received a similar reaction from other communities more used to secrecy than openness. Researchers—sometimes young students—who discovered and published flaws in copyright-protection schemes, voting-machine security and now wireless access cards have all suffered recriminations and sometimes lawsuits for not keeping the vulnerabilities secret. When Christopher Soghoian created a website allowing people to print fake airline boarding passes, he got several unpleasant visits from the FBI.

This preference for secrecy comes from confusing a vulnerability with information about that vulnerability. Using secrecy as a security measure is fundamentally fragile. It assumes that the bad guys don’t do their own security research. It assumes that no one else will find the same vulnerability. It assumes that information won’t leak out even if the research results are suppressed. These assumptions are all incorrect.

The problem isn’t the researchers; it’s the products themselves. Companies will only design security as good as what their customers know to ask for. Full disclosure helps customers evaluate the security of the products they buy, and educates them in how to ask for better security. The Dutch court got it exactly right when it wrote: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

In a world of forced secrecy, vendors make inflated claims about their products, vulnerabilities don’t get fixed, and customers are no wiser. Security research is stifled, and security technology doesn’t improve. The only beneficiaries are the bad guys.

If you’ll forgive the analogy, the ethics of full disclosure parallel the ethics of not paying kidnapping ransoms. We all know why we don’t pay kidnappers: It encourages more kidnappings. Yet in every kidnapping case, there’s someone—a spouse, a parent, an employer—with a good reason why, in this one case, we should make an exception.

The reason we want researchers to publish vulnerabilities is because that’s how security improves. But in every case there’s someone—the Massachusetts Bay Transit Authority, the locksmiths, an election machine manufacturer—who argues that, in this one case, we should make an exception.

We shouldn’t. The benefits of responsibly publishing attacks greatly outweigh the potential harm. Disclosure encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers. It’s how we learn about security, and how we improve future security.

This essay previously appeared on Wired.com.

EDITED TO ADD (8/26): Matt Blaze has a good essay on the topic.

EDITED TO ADD (9/12): A good legal analysis.

Posted on August 26, 2008 at 6:04 AM • 39 Comments

Comments

D0R August 26, 2008 6:52 AM

The problem is that the security of a product cannot be immediately evaluated (and appreciated) by customers. It’s a “hidden” value. Therefore companies choose not to spend too much money on it.

igloo August 26, 2008 7:07 AM

@Roy – and, unfortunately, the Australian way and almost every other ‘Western’ system. We pay lip service to the principle, and even have laws to protect whistleblowers, but that doesn’t stop the ‘system’ from shafting these public-minded citizens!!!

R. Scott Buchanan August 26, 2008 7:11 AM

The judge in question, Douglas Woodlock, has a history of showing contempt for the First Amendment, so the prior restraint issue didn’t surprise a lot of locals. And honestly, given the current legal and political climate in the US, it would have been silly to expect a district court judge to do anything but side with Grabauskas’s ignorant goons anyway.

Gerrie August 26, 2008 7:12 AM

I am interested in the timelines here…

In the Dutch case the researchers told NXP about the issues they discovered in early March, planning to present the work in September. NXP then tried to prevent them from doing so after having known the details for a few months already – the judge rightly threw it out.

In this case (unless the original reporting is a bit misleading) the vendor heard about the talk, from a third party, approximately 10 days before the presentation. In which case a short-term restriction is probably sensible (to give the vendor some time to at least try something if they wanted), with long-term gagging being rejected – which is pretty much what happened.

Clearly the authors had everything ready at least a month earlier to submit their talk for review. Why did they not notify the relevant parties then?

Mike B August 26, 2008 7:48 AM

I always feel that half the problems seen with vulnerability “disclosure” stem from the security researcher’s desire for fame and notoriety. Without the need to stand behind a podium and get the gold star, the information could be disclosed anonymously, first to the vendor, then to the public, and the researcher would be spared all of the attendant legal trouble. There are plenty of ways in this day and age to launder information and leak news to the public in an untraceable fashion, but the researcher’s quest for fame and credit puts them in the path of the inevitable blowback.

Surely, you say, without the ability to garner some reward from their efforts, where is the motivation for researchers to continue their research? Well, for one, there is a large group of people who simply enjoy the thrill of discovery, and I am sure that the majority of security researchers are happy in their work. Second, perhaps they can take a lesson from national intelligence services (or perhaps Larry David) and realize that one can be anonymous and still tell people. If worked properly, people can guess, assume or “know” you were the discoverer without ever being able to prove it in a court of law. One way could involve the presentation of “follow-up research” shortly after the vulnerability is leaked and made public by the “anonymous researcher”.

Also, it amazes me how naive some security researchers are. It’s like they never expect to be sued and thus have few countermeasures prepared to protect themselves. First and foremost of these: do not pre-announce your intention to present your findings at a large conference. Right or wrong, vendors will panic and sue, and you will be in for a world of hassle. Black Hat et al. need to list talks as “TBA Hacking Talk” and then surprise people on the day. Anything else is just poor OPSEC. Another countermeasure is pre-disclosure to third-party foreign nationals outside the jurisdiction of domestic courts who have instructions to release the information on their own.

Yes it would be nice to live in a world where researchers aren’t hassled, but it would also be nice to live in a world without door locks. Just because the law is on your side doesn’t mean you can ignore precautions.

Rich August 26, 2008 8:10 AM

Consider an alternative. You discover a major vulnerability. If you disclose it to the company or government agency, you will be prosecuted. What are your alternatives? We know that an active black market exists for vulnerabilities. You could make a LOT of money selling your vulnerability. The incentives push in that direction.

Is that good public policy?

Paeniteo August 26, 2008 8:52 AM

@Mike B: “First and foremost of these is do not pre-announce your intention to present your findings at a large conference. ”

Not that easy if said conference reviews your article prior to accepting it and also announces things like schedules, programmes, abstracts and the like in advance.

Seth August 26, 2008 8:57 AM

I agree with Mike B.: one solution for prior restraint is the doomsday scenario. Provide the full details (if feasible, with cookbook instructions for cracking the system) to several foreigners, with instructions to release them unless the talk is given at the conference, and to ignore any later instructions to the contrary.

Since that communication takes place prior to any court order, the court can’t prevent it (or claw it back, that’s why foreigners are specified). Likewise, while the court could order me to instruct them not to release, they can’t be forced to follow those instructions.

(I remember how badly the NSA lost when it tried to prevent one of the initial “Zero Knowledge” theory papers from being given a number of years ago; the resulting talk was quite amusing.)
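Something like this toy sketch would do it, assuming a hypothetical deadline, cancel URL and file name; it is only an illustration of the idea, not anything anyone here actually built:

```python
#!/usr/bin/env python3
"""Toy dead-man's-switch release: hold the encrypted disclosure and
publish it only if the deadline passes without the pre-agreed
"talk was given" signal. URL, deadline and file name are hypothetical."""

import datetime
import pathlib
import sys
import urllib.request

DEADLINE = datetime.datetime(2008, 8, 10, 18, 0)       # hypothetical end of the talk slot
CANCEL_URL = "https://example.org/talk-was-given"       # hypothetical "talk happened" signal
DISCLOSURE = pathlib.Path("farecard-details.txt.gpg")   # encrypted details held in advance


def talk_was_given() -> bool:
    """Return True if the cancel signal is reachable (i.e. the talk took place)."""
    try:
        with urllib.request.urlopen(CANCEL_URL, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False


def main() -> None:
    if datetime.datetime.now() < DEADLINE:
        sys.exit("Deadline not yet reached; nothing to do.")
    if talk_was_given():
        sys.exit("Talk was given; the disclosure stays private.")
    # Otherwise release; here we only print where the file would go.
    print(f"Releasing {DISCLOSURE} to the pre-agreed public mailing list.")


if __name__ == "__main__":
    main()
```

Each holder runs their own copy, so no single court order can stop all of them.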

Josh O August 26, 2008 9:07 AM

Very succinct. We already knew this, but I’m going to bookmark to send to those “less in the know” when they don’t understand this issue.

@Rich: Excellent point. There are many things like this. I don’t want to delve into politics, but overzealously prosecuting soldiers for their treatment of POWs will result in fewer enemies being “caught” alive. It becomes easier to just shoot them. It truly is astounding how many policies have the negative effect of increasing the problem (like the red light cameras from yesterday’s post). It doesn’t matter if it works, as long as it looks like they’re doing something, or ulterior motives get satisfied (increased revenue).

greg August 26, 2008 9:09 AM

@Mike B

It’s not about fame and fortune (trust me on the fortune bit). It’s about how science is done. You present, publish and otherwise communicate your results to others in your field. It’s your job as a researcher.

When you don’t, then you are not doing your job, and even universities won’t pay you forever for doing nothing (unless you are the accountant ;)).

Eam August 26, 2008 9:14 AM

@Mike B:
You can’t blame the security researchers for wanting credit for their work, but you do bring up an interesting idea:

Say a week before the Black Hat schedule was published, these rascally goofballs anonymously released the details of the vulnerability. Everyone would know they were the researchers (since they had a presentation ready so quickly after the disclosure), but it would be a pretty tenuous link in court.

Alan August 26, 2008 9:39 AM

The MIT students should have ignored the court order and presented at DefCon. Our freedoms can only be taken away when we allow them to be taken away.

SteveJ August 26, 2008 9:40 AM

An important part of defending public disclosure is to ensure that any vulnerability which attracts a gagging order is publicised as widely as possible, in as much detail as possible, and spun as hard as possible to blame the flaw on whoever launched the gagging attempt.

If the effect of a gagging order is to increase the extent to which the flaw and its details are published, and increase the negative publicity for the organisation concerned, then they will be less keen to seek them. They might still do so sometimes, because dealing with the court case is an additional burden on the individual researcher (and perhaps university), with a chilling effect on future security research. But that requires a gagger taking the long view: the immediate benefits are negated.

Furthermore, courts are sometimes reluctant to apply gags in cases where the information is already in the wild, since the gag would not have the practical effect sought, of preserving secrecy.

So, if something is banned in some jurisdiction, then (subject to your own legal position) read it, discuss it, and distribute it. If banning speech doesn’t work, then speech won’t be banned so often…

miw August 26, 2008 9:45 AM

Breaking the security of a commercial product should no longer be regarded as a scientific endeavour. Such publications extremely rarely contribute to the scientific understanding of security measures. The majority of cases are an easy hack on a high-profile system. This is not a good way to use research funding.

Phil August 26, 2008 10:31 AM

@miw: “The majority of cases are an easy hack on a high profile system. This is not a good way to use research funding.”

I disagree. If high profile systems are vulnerable to so many easy hacks, why shouldn’t this be brought to attention? If people don’t know about the problem, or the extent of it, it won’t get fixed.

grge August 26, 2008 11:02 AM

It simply isn’t true that the MBTA only found out about the exploits 10 days before the conference.

The students informed the MBTA about the exploit well ahead of the conference. The MBTA didn’t act on it – well, except for trying to squelch the release of the info.

Ironically, they not only didn’t succeed in silencing the researchers, they themselves released more details through court briefs than the presentation contains.

Dumb. Dumber. Dumbest.

2Simple August 26, 2008 12:18 PM

Why isn’t the simple truth made open: insecure systems are another form of access capitalism, selling access and manipulation for some price or favors. Problems create uncertain risk and allow for cheaper negotiation through intimidation. The powers that be do not want accountability, except improper accountability.

The result is prohibition-style outlawry and law-breaking everywhere; it’s fun, it’s cool, and it’s very ugly for the country. Oh well, it creates more incentive for a country to have more programmers, the new army of future wars.

Full Disclosure is said to be dead. It would be good to read some comments on what it has turned into.
Perhaps Negotiation Disclosure? Uncertain Disclosure? Limited Disclosure?

moo August 26, 2008 12:21 PM

I like the “don’t pay kidnappers” thing.

By the way, the same idea applies to terrorist events too—the way to defeat the terrorists is to simply refuse to be terrorized, refuse to give up our freedoms and our privacy in the name of fighting terrorism, refuse to change the way we live our lives. It may suck if terrorists succeed in bombing a school bus or whatever, but we must refuse to change our decision-making or our beliefs in response to incidents like that. It is the very height of cowardice to give in to such pressure. To allow terrorists to set the boundaries of discourse merely by smashing some airplanes into a building is extremely weak.

The most annoying thing is that on 9/11, 2001 it was immediately obvious to me that America was going to react, and react in a big way. In that way, they gave the terrorists a satisfaction which they wholly do not deserve. In the end, I think the reaction has been kind of pathetic and mostly counterproductive, with the media and the corporates and the politicians all ruthlessly exploiting 9/11 and inducing unwarranted fear in the American population, to further their own agendas. Compare it with the reaction of the UK after the London subway bombings—the Americans were ripe to be victims of a “shock and awe” campaign, while years of violence between the IRA and the British had prepared the UK to react pragmatically and with courage and stoicism.

Like kidnappers, there’s only one proper response to terrorism: hunt them down and kill them. Any other kind of reaction ultimately does more harm than good. Aside from terrorism, we should be trying to improve our foreign policy and trying to treat more fairly with the people of other nations rather than selfishly exploiting them… but terrorism is basically crime, and should be investigated and avenged using the criminal justice system—not by trying to turn a whole country into cowering “victims”.

Alex August 26, 2008 1:52 PM

The kidnap analogy is correct in theory, but in practice, Bruce, I wonder whether you would refuse to pay a ransom for a loved one.
Sure, in the long term it is the only sensible thing to do. But the benefits of not paying are shared by many, while the costs of not paying are carried by the few who, by not paying, directly put the life of a loved one at stake.

Davi Ottenheimer August 26, 2008 2:15 PM

@ R. Scott Buchanan

Exactly what I found as well. A former colleague of George W. Bush and a Reagan appointee, Judge Woodlock has a history of ruling against First Amendment rights:

http://davi.poetry.org/blog/?p=1825

Here is a fine example of his reasoning:

“Woodlock said he had initially assumed that activists were exaggerating when they likened the protest zone near Canal Street to an internment camp. But he said that after touring the area for 90 minutes Wednesday, he concluded that comparison was ‘an understatement.’
[…]
‘One cannot conceive of other elements [that could be] put in place to create a space that’s more of an affront to the idea of free expression than the designated demonstration zone,’ Woodlock said.

Nonetheless, Woodlock said that unruly demonstrators at other political events have made the precautions necessary to foil protesters who might hurl objects at delegates arriving on buses”

Bruce, nice essay, but you barely touch on the root issue.

A judge in America made a horrible error of judgment (pro-corporation, anti-individual liberty) under the mistaken guise of security.

This is not an exception, it is the rule of the neo-con.

mcb August 26, 2008 2:52 PM

I’m not a lawyer, and don’t play one on TV, but it seems to me some really serious wiseguys might have replaced their planned DEFCON presentation with a detailed examination of the affidavit submitted by the MBTA, a public document which included a “confidential vulnerability assessment report” detailing the very flaws the MBTA sought to conceal.

LC August 26, 2008 4:27 PM

Bruce got it right: the damage has already been done. The students’ First Amendment rights were effectively trampled on, and they were not able to give their talk. Today it’s a week’s delay; tomorrow the delay will be a year. In the future, it will be indefinite.

“The U.S. court has since seen the error of its ways — but the damage is done. The MIT security researchers who were prepared to discuss their Boston findings at the DefCon security conference were prevented from giving their talk.”

rob August 26, 2008 7:14 PM

One missing aspect from this discussion is that of security requirements analysis: As an engineer, it is important to define a credible threat against which you will be secure (anything else is throwing money away and not engineering). The essay implies that all commercial products should start from an assumption that their threat is highly educated research students with high level funding for equipment. This doesn’t match my expectations for bad guys for most products. Perfection is the enemy of the good.

altjira August 27, 2008 3:36 AM

@rob

While your argument holds true for deciding whether to design and build a structure to resist a 100-year event vs. a 100,000-year event using extreme value analysis, I don’t think it reflects the realities that Bruce is talking about. Perhaps building something reasonably well and then responding quickly to discovered flaws is the optimal solution. You’re right that you can’t aim to build something invulnerable, but the cutting-edge researchers are not the threat – it’s the script kiddies who are the threat. Pure research promotes our understanding of what can go wrong – just as Tacoma Narrows taught us about aeroelastic flutter. But researchers don’t cause massive losses. Ignoring the results of research causes massive losses.

It is perfectly acceptable from an engineering standpoint to put in required maintenance; or, since unquantifiable threats cannot be engineered against, to design a maintenance system to respond to the unforeseeable. We already find it necessary to maintain systems like that – they’re called fire fighters. Maybe one day the people that patch faulty computer systems will earn the same respect.

miw August 27, 2008 4:28 AM

@phil: Finding a bug in existing software is in general not a subject for publication in a scientific journal or conference proceedings. Similarly, finding a bug in a security product should in general not be regarded as serious security research. A high-profile system may have modest security requirements and hence a perfectly reasonable design. Breaking that security layer therefore only provides a small contribution to the advancement of scientific knowledge.

MathFox August 27, 2008 6:16 AM

@Miw, you seem to ignore the size of the break… The students broke both the magstripe and the chipcard system used in Boston public transport. They could create farecards and add credit to them. It certainly is relevant work if you consider that security evaluation of RFID systems is a young area of research and that reports on only a few class breaks have been published.

I do think that consumers and professional buyers of systems should get proper and reliable information about the security of the systems they buy. The maker hardly ever says anything other than “perfectly secure”, even for something that is “easily broken” in practice. Should we “trust the vendors” when millions a year are at stake?

Peter Galbavy August 27, 2008 6:21 AM

@miw: I agree that finding a bug is not by itself a suitable subject for publication; however, detailing how you found a bug, a new class of exploit or another failure, and bringing it to the attention of your peers, is a valid subject, which is what this is about. People don’t give presentations on finding yet another buffer overflow in sendmail/bind/et al., but they should and do give presentations on how they build frameworks for diagnosing such bugs and also for preventing them in the future.

Clive Robinson August 27, 2008 8:15 AM

@ miw,

“Finding a bug in existing software is in general not a subject for… …hence only provides a small contribution to the advancement the scientific knowledge.”

Although I agree with your viewpoint for scientific journals and the like, I would not agree with it in other areas, and I suspect that at the end of the day this is one of the real issues.

For instance, the U.S., like many other parts of the world, has laws about “fitness for purpose”, not just for tangible objects (goods) but for intangibles (services, contracts, etc.) as well, all of which appear to be based around either equitability or loss.

To most it appears that only software companies, via EULAs, have avoided responsibility for “fitness for purpose” in their products.

In reality the situation is different: all businesses and organisations, where possible, seek protection via some kind of liability limitation (agreement/licence) or risk externalisation (insurance).

However, a significant factor is usually overlooked, and that is “time’s arrow”, which traces a “golden thread” through risk and liability in society, and thus in legislation.

Software and the products based around it are less than half a generation old, and their potential utility-to-cost ratio effectively doubles every year. This gives rise to products with expected lifetimes of just a few months, comparable to the development times involved.

This, by any human standard, is a pace beyond comprehension, and as a result risk, liability and legislation are significantly behind the reality of the industry.

One aspect of this is that there is no formally recognised method by which product failings, and the best practices that arise from mature reflection on them, can become recognised before they are obsolete.

Therefore either the industry needs new methods that keep pace with its rate of change, or it needs to slow its rate of progress to match the methods that currently exist.

Of the two, only the former appears to be the likely way forward for industry, even though it is considerably harder to do. The latter would have to be addressed outside the industry, by the likes of government (legislation), and in reality would only serve to protect vested interests and therefore hog-tie the industry.

However, “new methods” would require buy-in by the industry as a whole and, importantly, that all forms of research be acceptable and have appropriate forums for discussing current issues. Also, appropriate methods of measurement (metrics) would need to be found so that risk can be quantified and dealt with in a more traditional way (like insurance, etc.).

Unfortunately I cannot see this happening without external influence by those with sufficient power to ensure it happens, which is the rub…

My best guess so far is compulsory “product liability” and “user liability” insurance, as is seen for the manufacturers and users of cars, etc.

Pat Cahalan August 27, 2008 1:01 PM

What we need, actually, is a timelocked publication method.

“My paper on your security vulnerabilities is in timelock, and will be released in 2 months. There’s nothing I can do about it now. You’ve got two months to fix the problem.”
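A real time lock would need timed-release cryptography or a trusted escrow service, but a weak approximation is to publish a hash commitment of the paper immediately and release the plaintext only when the embargo ends. A minimal sketch, with a hypothetical file name:

```python
#!/usr/bin/env python3
"""Commit-now, reveal-later: a weak approximation of "timelocked publication".
Publishing the digest today proves the findings existed on that date; the
plaintext is released when the embargo ends. (It does not *force* release;
that would need timed-release crypto or a trusted third party.)"""

import hashlib
import json
from datetime import datetime, timedelta


def commit(paper_path: str, embargo_days: int = 60) -> dict:
    """Hash the paper and produce the public commitment record."""
    with open(paper_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    now = datetime.utcnow()
    return {
        "sha256": digest,
        "committed": now.isoformat(timespec="seconds"),
        "release_after": (now + timedelta(days=embargo_days)).isoformat(timespec="seconds"),
    }


def verify(paper_path: str, record: dict) -> bool:
    """After release, anyone can check the paper matches the earlier commitment."""
    with open(paper_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == record["sha256"]


if __name__ == "__main__":
    record = commit("farecard-vulnerabilities.pdf")   # hypothetical paper
    print(json.dumps(record, indent=2))               # publish this record immediately
```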

David August 27, 2008 1:10 PM

What’s sad is the huge hoopla granted to those who discover “vulnerabilities” versus discovering “exploits.”

Windows can be smashed, making most door locks useless. But nobody demands that window makers produce smash-proof glass, or that all buildings stop using breakable glass.

Doors to banks allow robbers to enter with guns and steal money, yet the solution isn’t to prevent it.

While each vulnerability can be judged on its own merits, and on the actual crimes committed that exploit it, very often disclosures are more alarmist than helpful.

Most exploits are simple social engineering tricks, or physical break-ins.

It does seem that informing the potential victim and the company that offers the vulnerable item is praiseworthy. Just telling everyone seems alarmist, publicity-seeking and encourages people to attempt break-ins they’d otherwise not do.

David August 27, 2008 1:19 PM

Wouldn’t the transit “companies” demand more security in the fare cards themselves if they felt the pinch economically? Have they suffered losses that demand changes yet? Maybe the cost of repair is higher than the losses from exploits? Time will tell….

Alex August 27, 2008 1:45 PM

@Dave: right. The same goes for credit-card fraud. Credit-card companies just bear the costs as additional security measures will cost them more. It’s always economics.

moo August 28, 2008 3:47 PM

@David: “It does seem that informing the potential victim and the company that offers the vulnerable item is praiseworthy. Just telling everyone seems alarmist, publicity-seeking and encourages people to attempt break-ins they’d otherwise not do.”

You are advocating security by obscurity. The reason it doesn’t work was explained in the article — if they don’t go public, there is no pressure on the companies to actually fix the problems.

If it was only the company with the insecure service or product that bore the costs of that security, there would be no problem (the market would sort it out). The problem is that security issues are often externalities, where the cost of dealing with them is dumped on someone else. For example, when governments, corporations and banks lose people’s private identifying information or financial details, and fraud is later committed using that leaked info, it is the affected individuals who have to deal with the mess.

Even though the fraud was made possible by the negligence of the government or corporation, and even though it is enabled by the banks and financial institutions who accept that information and give out loans or whatever to the fraudsters, it is the individual whose identity was used in the fraud who ends up having to deal with it. Maybe the defrauded bank eats some of the costs, but the individual can definitely be affected to a large degree too, having their credit history trashed, having to dispute with collection agencies or banks, having to file police reports, monitor or freeze their credit report, etc.

None of that would be necessary (or at least, a lot less necessary) if companies and governments were held properly accountable for the security of our information when they collect it and store it and share it around. Instead, the costs of their behaviour are externalized onto us, their customers.

JBD September 3, 2008 1:07 AM

I don’t get the whole “free speech” defense of what amounts to breaking and entering. Nor do I believe that somehow free speech is in any way enhanced by these bright fellas nitpicking at marginal security risks. And the notion that free speech is being in some way curtailed by the judge is curious to me. It looks more to me like he’s curtailing commercial action (providing deliverables that transfer value), actions that plainly infringe on straightforward property rights. And by any measure, speech is more free, and worth less, now than it has ever been. Keeping these guys from their Vegas end zone dance didn’t threaten anybody’s free speech rights.

Free speech rights ensure that citizens can criticize the government without persecution. It is not criticism of the government to break into a turnstile and steal money – whether by hammer or by ‘warcart’ – it is stealing. Teaching other people how to do it is aiding and abetting burglary.

I’m a computer security moron, so I won’t attempt to address those areas, or the ethics of the vendors and how they sell their products, topics which are best left to our host and you regulars.

But my son has studied it some, and he tries to open my mind in an exchange we had about this whole MBTA deal, and it is recorded for posterity. My original “lock ’em up and throw away the Class 1, Gen 2” rant starts here:
http://jbd.mee.nu/children_cry_free_speech_when_caught_stealing
and he responds (including steering me to our host), in 3 parts:
http://jbd.mee.nu/peer_review_of_the_mitmbta_hack
I’ll go away now, and leave you to your cipherin’.

Bill McGonigle September 8, 2008 2:35 PM

If we assume these guys are doing something valuable, that the ‘system’ is biased against them, and that they can be bullied by well-funded adversaries, the only prudent approach would be to set up a legal defense fund to protect said individuals. The EFF might get involved if it’s a high-enough-profile case, but that’s hardly reassuring. To counter the chilling effects one needs to remove the potential downside cost of publishing.
