Entries Tagged "Schneier's Law"

The Doghouse: Crown Sterling

A decade ago, the Doghouse was a regular feature in both my email newsletter Crypto-Gram and my blog. In it, I would call out particularly egregious—and amusing—examples of cryptographic “snake oil.”

I dropped it both because it stopped being fun and because almost everyone converged on standard cryptographic libraries, which meant standard non-snake-oil cryptography. But every so often, a new company comes along that is so ridiculous, so nonsensical, so bizarre, that there is nothing to do but call it out.

Crown Sterling is complete and utter snake oil. The company sells “TIME AI,” “the world’s first dynamic ‘non-factor’ based quantum AI encryption software,” “utilizing multi-dimensional encryption technology, including time, music’s infinite variability, artificial intelligence, and most notably mathematical constancies to generate entangled key pairs.” Those sentence fragments tick three of my snake-oil warning signs—from 1999!—right there: pseudo-math gobbledygook (warning sign #1), new mathematics (warning sign #2), and extreme cluelessness (warning sign #4).

More: “In March of 2019, Grant identified the first Infinite Prime Number prediction pattern, where the discovery was published on Cornell University’s www.arXiv.org titled: ‘Accurate and Infinite Prime Number Prediction from Novel Quasi-Prime Analytical Methodology.’ The paper was co-authored by Physicist and Number Theorist Talal Ghannam PhD. The discovery challenges today’s current encryption framework by enabling the accurate prediction of prime numbers.” Note the attempt to leverage Cornell’s reputation, even though the preprint server is not peer-reviewed and allows anyone to upload anything. (That should be another warning sign: undeserved appeals to authority.) PhD student Mark Carney took the time to refute it. Most of it is wrong, and what’s right isn’t new.

I first encountered the company earlier this year. In January, Tom Yemington from the company emailed me, asking to talk. “The founder and CEO, Robert Grant is a successful healthcare CEO and amateur mathematician that has discovered a method for cracking asymmetric encryption methods that are based on the difficulty of finding the prime factors of a large quasi-prime numbers. Thankfully the newly discovered math also provides us with much a stronger approach to encryption based on entangled-pairs of keys.” Sounds like complete snake-oil, right? I responded as I usually do when companies contact me, which is to tell them that I’m too busy.

In April, a colleague at IBM suggested I talk with the company. I poked around at the website, and sent back: “That screams ‘snake oil.’ Bet you a gazillion dollars they have absolutely nothing of value—and that none of their tech people have any cryptography expertise.” But I thought this might be an amusing conversation to have. I wrote back to Yemington. I never heard back—LinkedIn suggests he left in April—and forgot about the company completely until it surfaced at Black Hat this year.

Robert Grant, president of Crown Sterling, gave a sponsored talk: “The 2019 Discovery of Quasi-Prime Numbers: What Does This Mean For Encryption?” I didn’t see it, but it was widely criticized and heckled. Black Hat was so embarrassed that it removed the presentation from the conference website. (Parts of it remain on the Internet. Here’s a short video from the company, if you want to laugh along with everyone else at terms like “infinite wave conjugations” and “quantum AI encryption.” Or you can read the company’s press release about what happened at Black Hat, or Grant’s Twitter feed.)

Grant has no cryptographic credentials. His bio—on the website of something called the “Resonance Science Foundation”—is all over the place: “He holds several patents in the fields of photonics, electromagnetism, genetic combinatorics, DNA and phenotypic expression, and cybernetic implant technologies. Mr. Grant published and confirmed the existence of quasi-prime numbers (a new classification of prime numbers) and their infinite pattern inherent to icositetragonal geometry.”

Grant’s bio on the Crown Sterling website contains this sentence, absolutely beautiful in its nonsensical use of mathematical terms: “He has multiple publications in unified mathematics and physics related to his discoveries of quasi-prime numbers (a new classification for prime numbers), the world’s first predictive algorithm determining infinite prime numbers, and a unification wave-based theory connecting and correlating fundamental mathematical constants such as Pi, Euler, Alpha, Gamma and Phi.” (Quasi-primes are real, and they’re not new. They’re numbers with only large prime factors, like RSA moduli.)
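
To make that parenthetical concrete, here is a quick sketch, assuming Python with SymPy (the bit sizes are arbitrary), of a quasi-prime in the established sense:

```python
# A quasi-prime in the established sense: a number whose prime factors
# are all large. An RSA modulus is the canonical example.
from sympy import randprime

p = randprime(2**511, 2**512)  # a random 512-bit prime
q = randprime(2**511, 2**512)  # another, independent one
n = p * q                      # a ~1024-bit quasi-prime: no small factors

# Trial division by small primes finds nothing; RSA's security rests on
# the difficulty of recovering p and q from n.
assert all(n % small != 0 for small in [2, 3, 5, 7, 11, 13, 17, 19])
```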

Near as I can tell, Grant’s coauthor is the mathematician of the company: “Talal Ghannam—a physicist who has self-published a book called The Mystery of Numbers: Revealed through their Digital Root as well as a comic book called The Chronicles of Maroof the Knight: The Byzantine.” Nothing about cryptography.

There seems to be another technical person. Ars Technica writes: “Alan Green (who, according to the Resonance Foundation website, is a research team member and adjunct faculty for the Resonance Academy) is a consultant to the Crown Sterling team, according to a company spokesperson. Until earlier this month, Green—a musician who was ‘musical director for Davy Jones of The Monkees’—was listed on the Crown Sterling website as Director of Cryptography. Green has written books and a musical about hidden codes in the sonnets of William Shakespeare.”

None of these people have demonstrated any cryptographic credentials. No papers, no research, no nothing. (And, no, self-publishing doesn’t count.)

After the Black Hat talk, Grant—and maybe some of those others—sat down with Ars Technica and spun more snake oil. They claimed that the patterns they found in prime numbers allow them to break RSA. They’re not publishing their results “because Crown Sterling’s team felt it would be irresponsible to disclose discoveries that would break encryption.” (Snake-oil warning sign #7: unsubstantiated claims.) They also claim to have “some very, very strong advisors to the company” who are “experts in the field of cryptography, truly experts.” The only one they name is Larry Ponemon, who is a privacy researcher and not a cryptographer at all.

Enough of this. All of us can create ciphers that we cannot break ourselves, which means that amateur cryptographers regularly produce amateur cryptography. These guys are amateurs. Their math is amateurish. Their claims are nonsensical. Run away. Run far, far away.

But be careful how loudly you laugh when you do. Not only is the company ridiculous, it’s litigious as well. It has sued ten unnamed “John Doe” defendants for booing the Black Hat talk. (It also sued Black Hat, which may have more merit. The company paid $115K to have its talk presented amongst actual peer-reviewed talks. For Black Hat to remove its nonsense may very well be a breach of contract.)

Maybe Crown Sterling can file a meritless lawsuit against me instead for this post. I’m sure it would think it’d result in all sorts of positive press coverage. (Although any press is good press, so maybe it’s right.) But if I can prevent others from getting taken in by this stuff, it would be a good thing.

Posted on September 5, 2019 at 5:58 AM

Teaching a Neural Network to Encrypt

Researchers have trained a neural network to encrypt its communications.

In their experiment, computers were able to make their own form of encryption using machine learning, without being taught specific cryptographic algorithms. The encryption was very basic, especially compared to our current human-designed systems. Even so, it is still an interesting step for neural nets, which the authors state “are generally not meant to be great at cryptography.”
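
For the curious, here is a minimal sketch of the adversarial setup the paper describes: Alice encrypts with a shared key, Bob decrypts with the key, and Eve tries to decrypt without it. This assumes PyTorch and toy dense networks; the architecture, sizes, and loss weighting are my own stand-ins, not the authors’ code (they used convolutional nets in TensorFlow):

```python
import torch
import torch.nn as nn

N = 16  # bits in plaintext, key, and ciphertext, encoded as values in [-1, 1]

def net(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = net(2 * N, N)  # (plaintext, key) -> ciphertext
bob   = net(2 * N, N)  # (ciphertext, key) -> plaintext guess
eve   = net(N, N)      # ciphertext only  -> plaintext guess

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)

def batch(size=256):
    # Random plaintexts and keys as +/-1 bit vectors.
    return (torch.randint(0, 2, (size, N)).float() * 2 - 1,
            torch.randint(0, 2, (size, N)).float() * 2 - 1)

for step in range(5000):
    # Train Alice and Bob: Bob should recover the plaintext, Eve should not.
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1))
    bob_err = (bob(torch.cat([c, k], dim=1)) - p).abs().mean()
    eve_err = (eve(c) - p).abs().mean()
    # (1 - eve_err)^2 pushes Eve toward random guessing, which on +/-1
    # bits corresponds to a mean absolute error of 1.
    loss_ab = bob_err + (1 - eve_err) ** 2
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()

    # Train Eve on fresh data against the current (frozen) Alice.
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1)).detach()
    loss_e = (eve(c) - p).abs().mean()
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()
```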

This story is more about AI and neural networks than it is about cryptography. The algorithm isn’t any good, but it is a perfect example of what I’ve heard called “Schneier’s Law”: Anyone can design a cipher that they themselves cannot break.

Research paper. Note that the researchers work at Google.

Posted on November 3, 2016 at 6:05 AM

Amateurs Produce Amateur Cryptography

Anyone can design a cipher that he himself cannot break. This is why you should uniformly distrust amateur cryptography, and why you should only use published algorithms that have withstood broad cryptanalysis. All cryptographers know this, but non-cryptographers do not. And this is why we repeatedly see bad amateur cryptography in fielded systems.

The latest is the cryptography in the Open Smart Grid Protocol, which is so bad as to be laughable. From the paper:

Dumb Crypto in Smart Grids: Practical Cryptanalysis of the Open Smart Grid Protocol

Philipp Jovanovic and Samuel Neves

Abstract: This paper analyses the cryptography used in the Open Smart Grid Protocol (OSGP). The authenticated encryption (AE) scheme deployed by OSGP is a non-standard composition of RC4 and a home-brewed MAC, the “OMA digest.”

We present several practical key-recovery attacks against the OMA digest. The first and basic variant can achieve this with a mere 13 queries to an OMA digest oracle and negligible time complexity. A more sophisticated version breaks the OMA digest with only 4 queries and a time complexity of about 2^25 simple operations. A different approach only requires one arbitrary valid plaintext-tag pair, and recovers the key in an average of 144 message verification queries, or one ciphertext-tag pair and 168 ciphertext verification queries.

Since the encryption key is derived from the key used by the OMA digest, our attacks break both confidentiality and authenticity of OSGP.
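
To see why a linear, home-brewed digest is fatal, here is a deliberately weak toy MAC in Python. It is not the OMA digest itself (the real attacks are in the paper), but it shows how linear structure hands the key to anyone with a single known message/tag pair:

```python
import secrets

def toy_mac(key: bytes, msg: bytes) -> bytes:
    # The tag starts as the key and absorbs message bytes by XOR. Every
    # step is linear over GF(2), which is the fatal design flaw.
    tag = bytearray(key)
    for i, b in enumerate(msg):
        tag[i % len(tag)] ^= b
    return bytes(tag)

key = secrets.token_bytes(16)
msg = b"meter reading: 42 kWh, tariff 3"
tag = toy_mac(key, msg)

# Key recovery from one known message/tag pair: XOR the message back
# out. The real OMA digest attacks need a few more queries, but the
# underlying weakness is the same kind of linear structure.
recovered = bytearray(tag)
for i, b in enumerate(msg):
    recovered[i % len(recovered)] ^= b
assert bytes(recovered) == key
```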

My still-relevant 1998 essay: “Memo to the Amateur Cipher Designer.” And my 1999 essay on cryptographic snake oil.

ThreatPost article. BoingBoing post.

Note: That first sentence has been called “Schneier’s Law,” although the sentiment is much older.

Posted on May 12, 2015 at 5:41 AM

The Security of al Qaeda Encryption Software

The web intelligence firm Recorded Future has posted two stories about how al Qaeda is using new encryption software in response to the Snowden disclosures. NPR picked up the story a week later.

Former NSA General Counsel Stewart Baker uses this as evidence that Snowden has harmed America. Glenn Greenwald calls this “CIA talking points” and shows that al Qaeda was using encryption well before Snowden. Both quote me heavily, Baker casting me as somehow disingenuous on this topic.

Baker is conflating my stating of two cryptography truisms. The first is that cryptography is hard, and you’re much better off using well-tested public algorithms than trying to roll your own. The second is that cryptographic implementation is hard, and you’re much better off using well-tested open-source encryption software than you are trying to roll your own. Admittedly, they’re very similar, and sometimes I’m not as precise as I should be when talking to reporters.

This is what I wrote in May:

I think this will help US intelligence efforts. Cryptography is hard, and the odds that a home-brew encryption product is better than a well-studied open-source tool are slight. Last fall, Matt Blaze said to me that he thought that the Snowden documents will usher in a new dark age of cryptography, as people abandon good algorithms and software for snake oil of their own devising. My guess is that this is an example of that.

Note the phrase “good algorithms and software.” My intention was to invoke both truisms in the same sentence. That paragraph is true if al Qaeda is rolling their own encryption algorithms, as Recorded Future reported in May. And it remains true if al Qaeda is using algorithms like my own Twofish and rolling their own software, as Recorded Future reported earlier this month. Everything we know about how the NSA breaks cryptography is that they attack the implementations far more successfully than the algorithms.
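
As a sketch of what an implementation attack looks like, assuming Python and the pyca/cryptography package: AES-GCM is a fine algorithm, but an implementation that reuses a nonce gives up its plaintexts without anyone touching AES at all:

```python
# GCM encrypts with a keystream, so two ciphertexts produced under the
# same key and nonce XOR to the XOR of their plaintexts. No attack on
# AES itself is required; the implementation mistake does all the work.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)
nonce = b"\x00" * 12  # the bug: a fixed nonce instead of a fresh one

pt1 = b"attack at dawn!!"
pt2 = b"retreat at dusk!"
ct1 = aesgcm.encrypt(nonce, pt1, None)  # returns ciphertext || 16-byte tag
ct2 = aesgcm.encrypt(nonce, pt2, None)

# The eavesdropper XORs the two ciphertext bodies...
xor_of_cts = bytes(a ^ b for a, b in zip(ct1[:len(pt1)], ct2[:len(pt2)]))
# ...and gets the XOR of the plaintexts; knowing either reveals the other.
assert xor_of_cts == bytes(a ^ b for a, b in zip(pt1, pt2))
leaked = bytes(a ^ b for a, b in zip(xor_of_cts, pt1))
assert leaked == pt2
```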

My guess is that in this case they don’t even bother with the encryption software; they just attack the users’ computers. There’s nothing that screams “hack me” more than using specially designed al Qaeda encryption software. There’s probably a QUANTUMINSERT attack and FOXACID exploit already set on automatic fire.

I don’t want to get into an argument about whether al Qaeda is altering its security in response to the Snowden documents. Its members would be idiots if they did not, but it’s also clear that they were designing their own cryptographic software long before Snowden. My guess is that the smart ones are using public tools like OTR and PGP and the paranoid dumb ones are using their own stuff, and that the split was the same both pre- and post-Snowden.

Posted on August 19, 2014 at 6:11 AM

So You Want to Be a Security Expert

I regularly receive e-mail from people who want advice on how to learn more about computer security, either as a course of study in college or as an IT person considering it as a career choice.

First, know that there are many subspecialties in computer security. You can be an expert in keeping systems from being hacked, or in creating unhackable software. You can be an expert in finding security problems in software, or in networks. You can be an expert in viruses, or policies, or cryptography. There are many, many opportunities for many different skill sets. You don’t have to be a coder to be a security expert.

In general, though, I have three pieces of advice to anyone who wants to learn computer security.

  • Study. Studying can take many forms. It can be classwork, either at universities or at training conferences like SANS and Offensive Security. (These are good self-starter resources.) It can be reading; there are a lot of excellent books—and blogs—out there that teach different aspects of computer security. Don’t limit yourself to computer science, either. You can learn a lot by studying other areas of security, and soft sciences like economics, psychology, and sociology.
  • Do. Computer security is fundamentally a practitioner’s art, and that requires practice. This means using what you’ve learned to configure security systems, design new security systems, and—yes—break existing security systems. This is why many courses have strong hands-on components; you won’t learn much without it.
  • Show. It doesn’t matter what you know or what you can do if you can’t demonstrate it to someone who might want to hire you. This doesn’t just mean sounding good in an interview. It means sounding good on mailing lists and in blog comments. You can show your expertise by making podcasts and writing your own blog. You can teach seminars at your local user group meetings. You can write papers for conferences, or books.

I am a fan of security certifications, which can often demonstrate all of these things to a potential employer quickly and easily.

I’ve really said nothing here that isn’t also true for a gazillion other areas of study, but security also requires a particular mindset—one I consider essential for success in this field. I’m not sure it can be taught, but it certainly can be encouraged. “This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.” This is especially true if you want to design security systems and not just implement them. Remember Schneier’s Law: “Any person can invent a security system so clever that she or he can’t think of how to break it.” The only way your designs are going to be trusted is if you’ve made a name for yourself breaking other people’s designs.

One final word about cryptography. Modern cryptography is particularly hard to learn. In addition to everything above, it requires graduate-level knowledge in mathematics. And, as in computer security in general, your prowess is demonstrated by what you can break. The field has progressed a lot since I wrote this guide and self-study cryptanalysis course a dozen years ago, but they’re not bad places to start.

This essay originally appeared on “Krebs on Security,” as the second in a series of answers to that question. This is the first. There will be more.

Posted on July 5, 2012 at 6:17 AM

"Schneier's Law"

Back in 1998, I wrote:

Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.

In 2004, Cory Doctorow called this Schneier’s law:

…what I think of as Schneier’s Law: “any person can invent a security system so clever that she or he can’t think of how to break it.”

The general idea is older than my writing. Wikipedia points out that in The Codebreakers, David Kahn writes:

Few false ideas have more firmly gripped the minds of so many intelligent men than the one that, if they just tried, they could invent a cipher that no one could break.

The idea is even older. Back in 1864, Charles Babbage wrote:

One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.

My phrasing is different, though. Here’s my original quote in context:

Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break. It’s not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis. And the only way to prove that is to subject the algorithm to years of analysis by the best cryptographers around.

And here’s me in 2006:

Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it “Schneier’s Law”: When someone hands you a security system and says, “I believe this is secure,” the first thing you have to ask is, “Who the hell are you?” Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.

And that’s the point I want to make. It’s not that people believe they can create an unbreakable cipher; it’s that people create a cipher that they themselves can’t break, and then use that as evidence they’ve created an unbreakable cipher.
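
A toy demonstration of that point in Python: a repeating-key XOR “cipher” produces output that looks random to its designer, yet anyone who knows the standard attacks recovers the key from a few bytes of predictable plaintext:

```python
import secrets

def homebrew_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR each plaintext byte with the key, repeating the key as needed.
    # XOR is an involution, so the same function also decrypts.
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

key = secrets.token_bytes(8)
message = b"Meet me at the usual place at midnight."
ciphertext = homebrew_encrypt(key, message)

# The designer can't break it. An attacker with a few bytes of known
# plaintext (say, a predictable message header) recovers the key at once:
known = b"Meet me "  # 8 known bytes suffice for an 8-byte key
recovered = bytes(c ^ p for c, p in zip(ciphertext, known))
assert recovered == key
assert homebrew_encrypt(recovered, ciphertext) == message
```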

EDITED TO ADD (4/16): This is an example of the Dunning-Kruger effect, named after the authors of this paper: “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.”

Abstract: People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.

EDITED TO ADD (4/18): If I have any contribution to this, it’s to generalize it to security systems and not just to cryptographic algorithms. Because anyone can design a security system that he cannot break, evaluating the security credentials of the designer is an essential aspect of evaluating the system’s security.

Posted on April 15, 2011 at 1:45 PM

Security Certifications

I’ve long been hostile to certifications—I’ve met too many bad security professionals with certifications and know many excellent security professionals without certifications. But I’ve come to believe that, while certifications aren’t perfect, they’re a decent way for a security professional to learn some of the things he’s going to need to know, and for a potential employer to assess whether a job candidate has the security expertise he’s going to need.

What’s changed? Both the job requirements and the certification programs.

Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it “Schneier’s Law”: When someone hands you a security system and says, “I believe this is secure,” the first thing you have to ask is, “Who the hell are you?” Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.

That kind of expertise can’t be found in a certification. It’s a combination of an innate feel for security, extensive knowledge of the academic security literature, extensive experience in existing security systems, and practice. When I’ve hired people to design and evaluate security systems, I’ve paid no attention to certifications. They are meaningless; I need a different set of skills and abilities.

But most organizations don’t need to hire that kind of person. Network security has become standardized; organizations need a practitioner, not a researcher. This is good because there is so much demand for these practitioners that there aren’t enough researchers to go around. Certification programs are good at churning out practitioners.

And over the years, certification programs have gotten better. They really do teach knowledge that security practitioners need. I might not want a graduate designing a security protocol or evaluating a cryptosystem, but certs are fine for any of the handful of network security jobs a large organization needs.

At my company, we encourage our security analysts to take certification courses. We find that it’s the most cost-effective way to give them the skills they need to do ever-more-complex jobs.

Of course, none of this is perfect. I still meet bad security practitioners with certifications, and I still know excellent security professionals without any.

In the end, certifications are like profiling. They work, but they’re sloppy. Just because someone has a particular certification doesn’t mean that he has the security expertise you’re looking for (in other words, there are false positives). And just because someone doesn’t have a security certification doesn’t mean that he doesn’t have the required security expertise (false negatives). But we use them for the same reason we profile: We don’t have the time, patience, or ability to test for what we’re looking for explicitly.

Profiling based on security certifications is the easiest way for an organization to make a good hiring decision, and the easiest way for an organization to train its existing employees. And honestly, that’s usually good enough.

This essay originally appeared as a point-counterpoint with Marcus Ranum in the July 2006 issue of Information Security Magazine. (You have to fill out an annoying survey to read Marcus’s counterpoint, but 1) you can lie, and 2) it’s worth it.)

EDITED TO ADD (7/21): A Guide to Information Security Certifications.

EDITED TO ADD (9/11): Here’s Marcus’s column.

Posted on July 20, 2006 at 7:20 AM
