Dangers of Reporting a Computer Vulnerability

This essay makes the case that there is no way to safely report a computer vulnerability.

The first reason is that whenever you do something “unnecessary,” such as reporting a vulnerability, police wonder why, and how you found out. Police also wonder: if you found one vulnerability, could you have found more and not reported them? Who did you disclose that information to? Did you get into the web site, and do anything there that you shouldn’t have? It’s normal for the police to think that way. They have to. Unfortunately, it makes it very unattractive to report any problems.

A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or is of any consequence, and request a proof. This got Eric McCarty in trouble—the proof is automatically a proof that you breached the law, and can be used to prosecute you! Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit, and fixed it in record time. We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing. I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student not being as cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it…). Because the vulnerability was fixed in record time, it also protected us from being accused of the subsequent break-in, which happened after the vulnerability was fixed, and therefore had to use some other means. If there had been an overlap in time, we could have become suspects.

Interesting essay, and interesting comments. And here’s an article on the essay.

Remember, full disclosure is the best tool we have to improve security. It’s an old argument, and I wrote about it way back in 2001. If people can’t report security vulnerabilities, then vendors won’t fix them.

EDITED TO ADD (5/26): Robert Lemos on “Ethics and the Eric McCarty Case.”

Posted on May 26, 2006 at 7:35 AM • 41 Comments


MSB May 26, 2006 8:42 AM

“A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or is of any consequence, and request a proof. This got Eric McCarty in trouble”

I think what a security researcher should do in such a situation is to provide the system owner with an analysis of the vulnerabilities, as well as a procedure for confirming the suspected problem without actually trying to breach the system involved. The actual execution of the procedure should be left to the system owner.

David May 26, 2006 8:46 AM

Full disclosure also has its perils, similar to the ones mentioned in the article. It is common to have the vulnerability denied as being real (or as extensive as discussed). Vulnerabilities have also been misrepresented (Microsoft has been caught doing this), for example described as a DoS when it actually was an exploit. When the exploit was written, the authors were then accused of helping the black-hats and of being a disservice to the community.

Sometimes, there is no other way to force companies to resolve their issue than to “break” their software or “break” their web site, both of which can now be construed as illegal.

My biggest concern is that even full disclosure will be outlawed (see the DMCA).

bob May 26, 2006 8:54 AM

The same concept is involved if you look in the seat pocket on an airliner and discover a gun. Your best bet is to not tell anybody and hope nobody discovers it (while you are onboard). If you do tell, you will be cuffed and stuffed and spend years in jail, lose your job, etc., and that’s if the other passengers don’t kill you first.

bob May 26, 2006 8:59 AM

Same goes for if you see an obviously lost toddler crying for its mommy. It’s heart-rending, but don’t go anywhere near it; you run a serious risk of being branded as a pedophile. Remember, no good deed goes unpunished.

MSB May 26, 2006 9:02 AM


“if you look in the airline seat pocket on an airliner and discover a gun. Your best bet is to not tell anybody and hope nobody discovers it (while you are onboard). If you do tell, you will be cuffed and stuffed…”

If you don’t report it but someone else does after you leave the plane, wouldn’t you still be under suspicion for bringing the gun onboard and leaving it there? The police would want a good explanation from you why you never noticed the gun during the entire flight.

Anonymous May 26, 2006 9:24 AM

“If you don’t report it but someone else does after you leave the plane, wouldn’t you still be under suspicion for bringing the gun onboard and leaving it there? The police would want a good explanation from you why you never noticed the gun during the entire flight.”

This is a lose-lose situation. Probably the best you can hope for is plausible deniability, don’t go anywhere near the pocket (not everyone goes into those pockets).

Same with suspected ‘anthrax’ or suspected bomb. Even if there is no weapon, you can be suspected of a threat (remember Richard Jewell?). Best thing is to keep one’s eyes closed, and be sure not to know anything.

Igor May 26, 2006 9:25 AM

“If you don’t report it but someone else does after you leave the plane, wouldn’t you still be under suspicion for bringing the gun onboard and leaving it there? The police would want a good explanation from you why you never noticed the gun during the entire flight.”

This holds especially true if the person who discovered the weapon happens to be blind, because it means they would have to touch it to determine what the item was. That would have the result of fingerprints being left on the weapon, which would lead to questioning as stated above. Quite a hassle, to say the least.

jayh May 26, 2006 9:30 AM

Reporting a vulnerability in a website (without their asking you to test it) is like telling the bank that their vault isn’t locking securely– the immediate question is how you know. Best to stay out of that hornet’s nest.

On the other hand, testing a software package that you have, to make sure it is secure, is more readily defensible.

AG May 26, 2006 9:36 AM

I agree that there is no safe way to report a vulnerability.
The only safe way is to make it completely public and let other people have it and run with it.
After they have broken in, then… maybe… the company might fix the issue.

Victor Bogado May 26, 2006 9:38 AM


Blind people with guns are certainly the most dangerous, they are not famous for their accuracy. 😀

But seriously, I would leave the seat, touching the gun as little as possible, quietly talk to a flight attendant, and not sit in the same seat until the situation was cleared up. I could be at risk, but I would guess that a gun on a plane is not there by mistake, and the crew should be warned.

Historian May 26, 2006 9:43 AM

If one takes the time to read history rather than mess with computers, one often finds the phenomenon described in the article: ignoring problems because you will be blamed for them. It is characteristic of the fall of empires and dynasties. Sic transit gloria mundi.

Victor Bogado May 26, 2006 9:46 AM


Is this true? You cannot help a crying lost child without fearing being tagged as a pedophile? Would you think that if someone else got near the child and tried to talk to him or her?

I live in Brazil, and I don’t think pedophilia would even be suggested in a situation like that here. But each society has a different way of looking at similar situations.

J.D. Abolins May 26, 2006 10:05 AM

Somewhat related to the risks of reporting vulnerabilities is the risk of certain wording of hypothetical exploits of a vulnerability.

Sometimes, people will use the first person perspective and say things such as, “If I were to exploit this vulnerability, I’d…” Unfortunately, not everybody catches the “If” or, just as bad, assumes that the speaker is barely masking a criminal mindset. Some listeners might think, “Why does thinking like a criminal hacker or a terrorist come so easily to that person?” (I won’t get into a tangent about the value of examining matters from different perspectives, including those of the attackers.)

Worse yet, the speaker gets quoted with a few qualifying words and the context missing, so that the statement appears to be a confession of a real act. Quite a risk in an age of soundbites.

BLP May 26, 2006 10:27 AM

I think bob may have had some bad experiences, but perhaps I’m still not jaded enough about the world.

I agree with Victor’s assessment, you’ll probably be questioned, but will likely not be suspected if you discreetly inform the crew of the weapon.

Then again…

This does, however, bring up the question of software licensing. If the software is licensed to me, but not sold (as many companies try to do with the EULA), then any security “checking” could still be hacking.

antibozo May 26, 2006 10:40 AM

The problem I usually have dealing directly with developers is establishing secure communications for disclosure. If the developers are able to handle this, I will send them a report, but regardless, I always report to CERT/CC and let them handle the remediation schedule and public disclosure as they see fit. Developers take it a lot more seriously when they hear from CERT/CC, and CERT/CC has a much more global perspective on vulnerability priorities because they get advance reports from many other researchers.

I also don’t spend time looking for vulnerabilities on other people’s web sites. I already have plenty to do just auditing the software people want to run on systems I’m responsible for, and it still benefits the general community when I report vulnerabilities in software that is also in use at other organizations. In other words, clean up your own house first.

Kevin Davidson May 26, 2006 11:07 AM

I remember reporting a vulnerability in a university computer system when I was a student way back in 1974. It was a very messy experience.

I taught my children, “data center folks have no sense of humor.” No one is happy to be told they screwed up.

bob May 26, 2006 11:41 AM

@MSB, Igor: If you do tell a crewmember, as soon as they hear “gun” their mind is immediately going to make the intuitive leap to “I am hijacking you.” 9 out of 10 of them won’t hear anything you say after that point. AND the crown jewel in this event is that while the entire crew is focused on dealing with YOU, the people who actually planted the thing will have a first-class (no pun intended; well, maybe a little) distraction to allow them to perpetrate whatever it is they had in mind.

roy May 26, 2006 11:56 AM

The example of finding a gun on the airplane illustrates something further. The whistleblower will get suspected of a crime and probably treated as a criminal, possibly to the point of imprisonment and a police record to follow him his whole life, or being summarily executed by air marshals, while the security idiots who failed to stop the gun from being brought aboard will never be punished.

Software whistleblowers don’t have it quite so bad, as there are no Internet marshals, yet.

Swiss Connection May 26, 2006 12:13 PM

Scan the vulnerability report from paper and send it to Bruce’s blog anonymously (all of it, say, from an internet cafe).

Then the cat is out of the bag and no one can touch you.

Anonymous May 26, 2006 12:25 PM


The procedure for confirming a vulnerability isn’t necessarily clear cut. You may have to poke around, and possess the security background to interpret what you’re seeing. Of course, that’s the sort of thing that gets folks in trouble.

A hypothetical example: someone decides to sign up for a website with her full name, “Helen O’Hara”. The result is a mess of SQL error messages, because of the apostrophe in the name.

She immediately thinks “SQL injection”, and contacts the site admins. They respond that it couldn’t possibly be a vulnerability, it’s just a cosmetic bug, and suggest it’s her own fault for having a reserved character in her name, please sign in as “Helen OHara” from now on.

She has not produced a demonstrable SQL injection, and the procedure for doing so will involve much trial and error and a certain amount of skill. She can’t give the admins concise instructions on how to do this, as they obviously don’t even know what a SQL injection is, much less how to discover one.
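The apostrophe symptom described in this hypothetical is the classic sign of string-built SQL. Here is a minimal, made-up sketch (SQLite, with an invented `users` table; none of this is from the original comment) of both the bug and the standard fix with bound parameters:

```python
import sqlite3

# Hypothetical sketch of the "Helen O'Hara" failure mode: when queries are
# built by string concatenation, an apostrophe ends the string literal early,
# and a crafted name can rewrite the query itself.
def find_user_unsafe(conn, name):
    # Vulnerable: user input is spliced directly into the SQL text.
    query = "SELECT id FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(conn, name):
    # Safe: a bound parameter is passed as data, never parsed as SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Alice')")

# The plain apostrophe breaks the concatenated query outright (the
# "cosmetic bug" the admins saw)...
try:
    find_user_unsafe(conn, "Helen O'Hara")
except sqlite3.OperationalError as e:
    print("SQL error:", e)

# ...and a crafted input quietly turns it into a different query.
print(find_user_unsafe(conn, "' OR '1'='1"))  # returns every row

# The parameterized version handles the same name without incident.
print(find_user_safe(conn, "Helen O'Hara"))   # no error, no match: []
```

Note the asymmetry the comment describes: the first error message falls into your lap, but producing the second, demonstrable injection takes deliberate probing, which is exactly the kind of “proof” that creates legal exposure.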

Joe Buck May 26, 2006 12:37 PM

Victor: no, it isn’t true. I’ve helped crying lost children many times: as a father of a child myself, I’m often in crowded places with many children, and kids get separated from their parents. Of course you can help kids.

Sometimes a toddler will grab onto the wrong parent; one pair of blue jeans-clad legs looks much like another when you’re only a meter tall. If the parent freaks out, it’s because of the lost child, not because another parent helped return the child to the proper parent.

Pat Cahalan May 26, 2006 12:40 PM

Thankfully, I don’t do security research so I don’t run up against these ethics vs. personal danger questions too often.


People need to be willing to accept a certain amount of risk to self if society is going to function. Helping a lost crying child is mandatory if you want to be able to look yourself in the face in the morning. You’re also equipped to deal with the situation -> anyone can help a lost child find his/her parents. Refusing to get involved because you’re afraid of being tagged as a pederast is simply negligent (I personally would call it criminally negligent).

Jumping into a physical confrontation between a pimp and one of his “employees” involves risking your life -> it may be a better idea to call 911 (you’re unlikely to be trained properly to deal with the situation).

If you’re a security researcher and you suspect that something is wrong with someone’s application/website/whatever, don’t you have an obligation to investigate? This is what you DO… you’ve presumably trained years of your life to perform this function.

I wonder how someone can call him/herself a security researcher if they walk away from a vulnerability? If you’re really that worried about being unjustly accused of a crime… aren’t you in the wrong business?

(agreed, legal protections for security researchers should be established, but as antibozo and swiss connection pointed out you can foist off the actual reporting/disclosure to CERT or guys like Bruce, anonymously…)

kvenlander May 26, 2006 12:44 PM

I’m surprised by the gun in the plane comments. Are they based on some incident I don’t know about? Whatever happened to those people who found “suspicious” items like pieces of paper with the letters b o m in a plane?

If the calculus is “I might lose my job and never be able to fly” versus “this plane could get hijacked and everybody might die”, are you guys seriously saying you’d be quiet?

How about going to a flight attendant and not using any serious trigger words. “I think there is something suspicious in the seat pocket, please take a look. I will sit here while you do.”

MSB May 26, 2006 1:15 PM


“The procedure for confirming a vulnerability isn’t necessarily clear cut. You may have to poke around, and possess the security background to interpret what you’re seeing. Of course, that’s the sort of thing that gets folks in trouble.”

I agree that being cautious (as in leaving all the confirmation of vulnerability to the system owner) may reduce one’s ability to discover/demonstrate security weaknesses, but a lesson that one should learn from Pascal Meunier’s blog entry is that when one contemplates reporting a vulnerability, one should be careful to stay on the good side of the law. To the extent that there are good practices that help one do that, they should be adopted and promoted.

Sometimes vulnerabilities can be demonstrated on a replica, non-production system, if the configuration of the system in question is known (without breaking any rules).

Jeremiah Grossman May 26, 2006 1:22 PM

The issue is NOT disclosure, it’s discovery.

“Web application security” vulnerabilities are a completely different issue because they exist on someone else’s server. The infosec community hasn’t dealt with the legal issues of “discovering” vulnerabilities, only with “disclosing” them. Researchers have played the role of good samaritan by finding vulnerabilities in software that’s important. So far, the software has run on our PCs. However, we’re moving into a world where the important software is custom web applications, not installable elsewhere. The same people who provided that layer of security checking can no longer do so in a safe, legal fashion. Those who say “do not test a system without written consent” offer good but short-sighted advice. Organizations providing the web-based services are not going to be handing out “hack me if you can” authorization letters.

Rob Carlson May 26, 2006 1:28 PM

There are dozens of ways to communicate over the Internet and stay anonymous. There are equally dozens of ways to do intrusion testing on a web application without disclosing your identity. Tor to GMail comes quickly to mind as one of the better ones.

The reason why black hat blackmailers get caught is that it’s more difficult to create an anonymous path of money exchange to get their payout than it is to have a sustained communication about a vulnerability with the developers (and, optionally, a Wired News reporter in the CC).

Now I understand that the point of this article is that people who point out vulnerabilities shouldn’t NEED to stay anonymous, but they could if they wanted to without a lot of effort.

Phillip May 27, 2006 3:55 AM

For the first time I post to this blog using my real name – and what does that say about my paranoia?

I don’t know Eric McCarty, but the link leads through to a report (which I hadn’t seen before) on the UK’s Dan Cuthbert case.

Some aspects of this case might illustrate attitudes of authorities.

To quote from the Dan Cuthbert article http://www.securityfocus.com/news/11341
“He also lectured at Westminster and Royal Holloway universities – ironically he taught some members of the Computer Crime Unit.”

I don’t know about Royal Holloway, but he did not teach on my degree (I am the Course Leader) at the University of Westminster. Dan was a (top-class) student who took his Master’s Degree in Information Technology Security at the University. He did meet several members of the Metropolitan (London, UK) Police Computer Crime Unit (CCU) who are or were also taking this degree, some of whom remain attached to the course.

After graduation, we have a tradition of inviting the students with the best dissertations to talk to the current student body on their work and expertise (or indeed on life, the universe and everything). Dan was one of these Guest Experts, giving an excellent lecture on (surprise) penetration testing! I would expect the audience to have contained CCU staff. On his degree, Dan was probably taught by CCU staff!

I make the joke to students that “At present they say that they have the Master’s from the University of Westminster and everybody goes ‘wow!’. One day, however, they will meet someone (on the jury?) who knows the value of the degree!” (It’s not funny, but I am the Lecturer and they are the students, so they laugh anyway!)

This happened to Dan. I suspect that he was investigated by people who were not amused. I don’t know whether the judge (no jury) understood a word of the technical defence (you might think not, but I couldn’t possibly comment) but convicted anyway because he did not like confusion. UK and English law is so convoluted that I have no idea about the validity of the conviction. It would be wrong to prosecute or otherwise on “impressions” rather than evidence, but I suspect this may happen.

This does not solve the problem of reporting flaws (which also happened to another student of ours, though the bank involved restricted itself to rude, frightening and threatening letters). Like Eric, I would now think twice about how I advise students.

“Anonymity” on the Web is a joke (provided the searcher has cash and the ability to apply pressure). Is assuming honesty until found otherwise so other-worldly? Could we not call it “innocent until proved guilty” – and delete all the records of arrest, interrogation, DNA, fingerprints if not proved guilty? It’s those that are the kicker.


Dr J P Evans
Centre for Research into Information Assurance
School of Computer Science
University of Westminster
Northwick Park
United Kingdom
Telephone: +44 (0)20-7911 5000 ext 4240; Facsimile: +44 (0)20-7911 5906
email: evansjp@westminster.ac.uk
Web pages:
The School of Computer Science http://www.westminster.ac.uk/CS


Anonymous May 27, 2006 7:07 AM

America is now a nation of, by, and for lawyers and bureaucrats; productive people are no more than go-fers.

ian_c May 27, 2006 10:28 PM

I hope I don’t get myself in hot water for this…

I was reading a post in alt.privacy yesterday. The poster claimed that by searching for the keyword “transcript” using limewire (or other similar P2P applications) you could find various documents containing sensitive data, such as social security numbers, etc.

My first impression was that the poster had to be exaggerating; he could not be serious about this. So, I decided to fire up limewire and have a look for myself.

What I found just about gave me a heart attack. In one case a person had hundreds of documents in their shared folder; a quick perusal of 2-3 revealed: a driver’s license (w/photo), employment-related info, and a transcript application to an educational institution, including maiden/married name, SSN, current address, former address, telephone number, etc.

This was an identity thief’s gold mine! In all good conscience, I couldn’t help but inform the person that these documents were available for download by all and sundry, so I phoned them. (I used SkypeOut, so the phone number wouldn’t show up.)

Some people might question my ethics in downloading/perusing a few of the documents, but how else was I to determine the severity of the situation? What I saw frankly alarmed me no end.

Bottom line, I guess what I’m worried about is that, by trying to do the right thing and inform them of their vulnerability, I may have put myself at risk of being accused of ‘hacking’ into their computer.

How would you handle the situation? Comments, please.

Jungsonn May 27, 2006 10:38 PM

Well, some years ago I found a flaw in some Cisco system software. I was able to retrieve the local passwd file and tried to decrypt it with a dictionary attack. It worked, and I got root access to Cisco’s server. I did nothing with it, and I reported the flaw to Cisco, and you know how they responded? “Ah, ok… so what’s the deal?” I mean, jeez… I got ROOT! Still no answer back… This lasted a couple of months; then I received an email: “Sorry, we do not understand what you mean.” For C’s sake… come on… I was trying to notify the Cisco admins that they had a flaw in the system, but they wouldn’t listen.

Ok… I thought. Never mind.

So what I did was hack into their system and report the bug. But still they did not take care of it.

Jonathan May 28, 2006 2:50 PM

See the UK Employment Appeal Tribunal decision in Bolton School v Evans, http://tinyurl.com/jtw7v – employee who broke into his school’s computer system to demonstrate breaches of the Data Protection Act was disciplined and resigned. The EAT held, in effect, that he should simply have told his employers of his concerns without demonstrating them to be well-founded; “The protection is for the whistleblower who reasonably believes, to put it colloquially if inaccurately, that something is wrong, not the investigator who seeks either to establish that it is wrong or to show that his concerns are reasonable”.

Stefan Wagner May 28, 2006 7:27 PM

It might depend on local laws and judges opinion.

In Germany, gaining such data isn’t considered criminal if you don’t bypass security mechanisms.
What Jungsonn did (a dictionary attack) is considered bypassing security mechanisms, even if the tools are public domain.

It’s similar to the hardware world:
breaking even a weak lock is against the law.

Joe in Australia May 28, 2006 8:45 PM

Pretend that these aren’t computers. Pretend that they’re office buildings, with physical entrances and physical locks. Someone who understands locks (I don’t) might look at the door and realise that it’s insecure, and that the office building is therefore at risk. None the less, I don’t think anyone would expect that he should tell anyone – or even expect there to be a convenient way to report it. In fact, most people would think that a professed lock-expert like this is a bit strange, and the managers who can actually get something done about the locks probably don’t want to be bothered about (what they perceive to be) trivialities.

Suppose our lock expert thought that he’d prove how unsafe the office building was. He might go there when the building is locked, unlock the door and take something confidential. You know, just to show that the danger is real. Does anyone think that this would earn him the gratitude of the company? I don’t: I think it would earn him an arrest and at least a few nights in jail.

The only safe thing you can do when you see ineffective security is to avoid it. Close your account, shop somewhere else. By definition, a firm with weak security has weak-minded security managers. The problem is a fundamental one that cannot be addressed by solving an individual problem, and all you can do by reporting the fact is make yourself look like a threat.

Ralph May 28, 2006 8:55 PM

This is a case touching on issues such as freedom of speech, the consumer’s right to know, the open society, and the nature of liability.

It is ironic that a University is the party, in effect, attacking the rights of the individual.

What a long way we have come.

wkwillis May 29, 2006 3:42 AM

Let’s say the bus company has locks on its lockers that are unsafe. All the locks open if you insert any key.
If you put your key into any lock other than the one on your locker to see if it opens, are you trespassing?
If you photograph anything (make a copy of) something in the locker, are you stealing?
If you publish that the lockers in the bus company building are insecure, are you committing a crime?

Richard Braakman May 29, 2006 6:39 AM

@Joe: The real problem with web sites is that you can’t always avoid them. They may host sensitive data about you. They may host control systems that can make your life difficult. Government sites are often in this category, as are hospital sites. University students, specifically, often have to deal with university sites that contain much information that will affect their future.

For some reason — probably related to the lack of market pressures — exactly these sites tend to have weak security.

If you spot a security problem in such a web site, what can you do?

Anonymous May 29, 2006 8:03 AM

If your neighbour rings your bell to tell you
that you forgot your keys in the lock,
do you call the cops?

Jungsonn May 29, 2006 11:45 AM

Well, I should not have called it hacking; that’s the wrong phrase. What I was able to do was obtain the passwd file, which was not shadowed. I then ran a dictionary attack on my local desktop and got the user/pw combo (mainly for knowledge, with no bad intentions), and with that info I “could” have obtained root. But instead of breaching their system I reported the flaw in the software they use (phpcgi), which they are (still) running.

But I am aware that it is very dangerous to do such things. And in my ignorance I reported the flaw.

Once I also experienced an attempt on my own server, and the fellow reported the flaw. I was happy, because I had been unaware of it.
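Jungsonn’s scenario (a readable, unshadowed passwd file attacked offline with a wordlist) can be sketched roughly as follows. This is an illustrative toy with made-up entries and a stand-in hash, not his actual tooling; real crypt(3) hashes use different algorithms, though salted in the same spirit:

```python
import hashlib

# Why an unshadowed passwd file matters: once the hashes can be read,
# candidate passwords can be tested offline at leisure.  Salted SHA-256
# here merely stands in for the real crypt(3) scheme.
def hash_password(password, salt):
    return hashlib.sha256((salt + password).encode()).hexdigest()

def dictionary_attack(passwd_entries, wordlist):
    """Try every wordlist candidate against each (user, salt, hash) entry."""
    cracked = {}
    for user, salt, stored in passwd_entries:
        for candidate in wordlist:
            if hash_password(candidate, salt) == stored:
                cracked[user] = candidate
                break
    return cracked

# A made-up "unshadowed" entry: the hash sits right next to the username.
entries = [("root", "ab", hash_password("letmein", "ab"))]
print(dictionary_attack(entries, ["password", "123456", "letmein"]))
# → {'root': 'letmein'}
```

Shadowing moves the hashes into a root-only file precisely so that this offline loop never gets its input.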

D.W. June 1, 2006 12:15 PM

Well, from a “white hat” admin’s point of view, if someone discovered a weak point in your server, by whatever means (even illegal ones), would you rather want him/her to report it to you or stay silent? I know what I would want – I would be majorly thankful for any report I get first.

Since hardware and locks were mentioned, here is a little story that happened to me:

I came out of a shop and went to my car (I thought), but it was not mine, only the same type and color. I put my key in the lock and tried to turn it; it didn’t open. I thought I was having a clumsy moment. Then I noticed that the door was not properly closed. These types of doors had a security flaw: when they were locked but not properly closed, so that you could push them into the fully closed position, the lock would spring open. So, still thinking it was my car, I gave the door a push and opened it. Call me part-time blind, but I sat in the driver’s seat and still didn’t notice… Then I reached for the steering wheel, touched fur, and my eyes finally flew open: my car had no fur on the steering wheel.
I jumped out, looked around to see if anybody had seen me, closed the door (firmly!), looked for my car, and found it two spaces further on.

While driving away, I wondered: should I have left the door halfway closed, as I found it? If yes, the real owner of that car could open the door without a key. (If he left the key inside the car, he would be in trouble now.) On the other hand, everybody else could give the door a push, open it and take out everything before the real owner came back.

The thought of what I would have done if the real owner had come back and found me sitting in his driver’s seat occurred to me several kilometers away…

Sorry for the distraction. Have fun

Anonymous June 1, 2006 12:23 PM

Here’s another one to think about, perhaps:

You may know that these Kensington locks for laptops are available with keys or as number locks with 3 rings. When I ordered some by accident, I immediately returned them, because I know: if every trial takes you 2 seconds, you can open a 3-ring lock in about 33 minutes if you are extremely unlucky, just by counting numbers from 000 to 999.
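The estimate above checks out with a couple of lines of arithmetic (shown in Python purely as a sanity check):

```python
# Worst case for exhausting a 3-ring combination lock: 10**3 settings
# at roughly 2 seconds per trial.
combinations = 10 ** 3          # 000 through 999
seconds_per_trial = 2
worst_case_seconds = combinations * seconds_per_trial
print(worst_case_seconds / 60)  # → 33.3... minutes worst case; half that on average
```

On average an attacker finds the right setting halfway through, so about 17 minutes is the more realistic figure.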

Now I wonder about German law: if I buy a lock, point out that it’s IMHO not secure enough, and prove it by trial and error, am I still violating a law?

Have fun, anyway

Fred June 2, 2006 4:35 PM

@Pat Cahalan
“Refusing to get involved because you’re afraid of being tagged as a pederast is simply negligent (I personally would call it criminally negligent).”

Perhaps so. There was a case in Scotland in the last few months where a bricklayer observed a two-year-old child wandering the streets of a small town. He did not approach the child, fearing he would be labelled a pervert; that’s what he testified at the inquest into the child’s death. (She drowned in a pond.)
Full story can be found at:

The Scotsman, UK, 22 March 2006
