This is interesting research: “On the Security of the PKCS#1 v1.5 Signature Scheme”:
Abstract: The RSA PKCS#1 v1.5 signature algorithm is the most widely used digital signature scheme in practice. Its two main strengths are its extreme simplicity, which makes it very easy to implement, and that verification of signatures is significantly faster than for DSA or ECDSA. Despite the huge practical importance of RSA PKCS#1 v1.5 signatures, providing formal evidence for their security based on plausible cryptographic hardness assumptions has turned out to be very difficult. Therefore the most recent version of PKCS#1 (RFC 8017) even recommends as a replacement the more complex and less efficient scheme RSA-PSS, as it is provably secure and therefore considered more robust. The main obstacle is that RSA PKCS#1 v1.5 signatures use a deterministic padding scheme, which makes standard proof techniques inapplicable.
We introduce a new technique that enables the first security proof for RSA-PKCS#1 v1.5 signatures. We prove full existential unforgeability against adaptive chosen-message attacks (EUF-CMA) under the standard RSA assumption. Furthermore, we give a tight proof under the Phi-Hiding assumption. These proofs are in the random oracle model and the parameters deviate slightly from the standard use, because we require a larger output length of the hash function. However, we also show how RSA-PKCS#1 v1.5 signatures can be instantiated in practice such that our security proofs apply.
In order to draw a more complete picture of the precise security of RSA PKCS#1 v1.5 signatures, we also give security proofs in the standard model, but with respect to weaker attacker models (key-only attacks) and based on known complexity assumptions. The main conclusion of our work is that from a provable security perspective RSA PKCS#1 v1.5 can be safely used, if the output length of the hash function is chosen appropriately.
I don’t think the protocol is “provably secure,” meaning that it cannot have any vulnerabilities. What this paper demonstrates is that there are no vulnerabilities under the model of the proof. And, more importantly, that PKCS #1 v1.5 is as secure as any of its successors, like RSA-PSS and RSA Full-Domain Hash.
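The deterministic padding at the heart of the proof difficulty is easy to see in code. Here is a minimal sketch of the EMSA-PKCS1-v1_5 encoding from RFC 8017 with SHA-256 (just the padding step, not the full signature, which would apply the RSA private-key operation to the encoded block):

```python
import hashlib

# DER-encoded DigestInfo prefix for SHA-256, as listed in RFC 8017
SHA256_DIGESTINFO_PREFIX = bytes.fromhex(
    "3031300d060960864801650304020105000420"
)

def emsa_pkcs1_v15_encode(message: bytes, em_len: int) -> bytes:
    """EMSA-PKCS1-v1_5: 0x00 || 0x01 || 0xFF..0xFF || 0x00 || DigestInfo."""
    t = SHA256_DIGESTINFO_PREFIX + hashlib.sha256(message).digest()
    if em_len < len(t) + 11:
        raise ValueError("intended encoded message length too short")
    ps = b"\xff" * (em_len - len(t) - 3)  # at least 8 bytes of 0xFF padding
    return b"\x00\x01" + ps + b"\x00" + t

# The encoding is deterministic: the same message always yields the same
# block, which is exactly what defeats the usual randomized-padding proofs.
em1 = emsa_pkcs1_v15_encode(b"hello", 256)  # 256 bytes = 2048-bit modulus
em2 = emsa_pkcs1_v15_encode(b"hello", 256)
assert em1 == em2
```

Compare RSA-PSS, where a fresh random salt is hashed into every signature; that randomness is what makes the standard proof techniques go through.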
Posted on September 25, 2018 at 6:50 AM
This is interesting:
Creating these defenses is the goal of NIST’s lightweight cryptography initiative, which aims to develop cryptographic algorithm standards that can work within the confines of a simple electronic device. Many of the sensors, actuators and other micromachines that will function as eyes, ears and hands in IoT networks will work on scant electrical power and use circuitry far more limited than the chips found in even the simplest cell phone. Similar small electronics exist in the keyless entry fobs to newer-model cars and the Radio Frequency Identification (RFID) tags used to locate boxes in vast warehouses.
All of these gadgets are inexpensive to make and will fit nearly anywhere, but common encryption methods may demand more electronic resources than they possess.
The NSA’s SIMON and SPECK would certainly qualify.
Posted on May 2, 2018 at 6:40 AM
The ISO has rejected two symmetric encryption algorithms: SIMON and SPECK. These algorithms were both designed by the NSA and made public in 2013. They are optimized for small and low-cost processors like IoT devices.
The risk of using NSA-designed ciphers, of course, is that they include NSA-designed backdoors. Personally, I doubt that they’re backdoored. And I always like seeing NSA-designed cryptography (particularly its key schedules). It’s like examining alien technology.
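That alien-technology quality is visible in how small the design is. As an illustration (my own sketch from the published design, not a vetted implementation), here is the smallest family member, Speck32/64, whose round function is nothing but 16-bit add, rotate, and xor, and whose key schedule reuses the round function with the round index as the key:

```python
# Sketch of Speck32/64: 32-bit blocks, 64-bit keys, 22 ARX rounds.
MASK = 0xFFFF                # 16-bit words
ROUNDS = 22
ALPHA, BETA = 7, 2           # rotation amounts (the larger variants use 8 and 3)

def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK
def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK

def key_schedule(key_words):
    """Expand key words (l2, l1, l0, k0), most-significant first,
    into 22 round keys by iterating the round function itself."""
    l = [key_words[2], key_words[1], key_words[0]]
    k = [key_words[3]]
    for i in range(ROUNDS - 1):
        l.append(((k[i] + ror(l[i], ALPHA)) & MASK) ^ i)
        k.append(rol(k[i], BETA) ^ l[i + 3])
    return k

def encrypt(block, round_keys):
    x, y = block
    for k in round_keys:
        x = ((ror(x, ALPHA) + y) & MASK) ^ k   # add-rotate-xor
        y = rol(y, BETA) ^ x
    return x, y

def decrypt(block, round_keys):
    x, y = block
    for k in reversed(round_keys):             # exact inverse of each round
        y = ror(y ^ x, BETA)
        x = rol(((x ^ k) - y) & MASK, ALPHA)
    return x, y
```

The entire cipher fits in a few dozen lines with no S-boxes or lookup tables, which is why it suits the constrained devices the lightweight-cryptography effort targets. Test vectors for checking an implementation are in the NSA design paper.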
EDITED TO ADD (5/14): Why the algorithms were rejected.
Posted on April 25, 2018 at 6:54 AM
NIST has organized a competition for public-key algorithms secure against a quantum computer. It recently published all of its Round 1 submissions. (Details of the NIST efforts are here. A timeline for the new algorithms is here.)
Posted on December 27, 2017 at 6:28 AM
Estonia recently suffered a major flaw in the security of their national ID card. This article discusses the fix and the lessons learned from the incident:
In the future, the infrastructure’s dependency on any single digital identity platform must be decreased, and the use of several alternatives must be encouraged and promoted. In addition, the update and replacement capacity, both remote and physical, should be increased. We also recommend that the government procure from the eID providers the readiness to act fast in force majeure situations. While deciding on the new eID platforms, the need to replace cryptographic primitives must be taken into account, particularly the possibility of the need to replace algorithms with those that are not even in existence yet.
Posted on December 18, 2017 at 6:08 AM
NIST is accepting proposals for public-key algorithms immune to quantum computing techniques. Details here. Deadline is the end of November 2017.
I applaud NIST for taking the lead on this, and for taking it now when there is no emergency and we have time to do this right.
Posted on December 23, 2016 at 6:39 AM
The NSA has been abandoning secret and proprietary cryptographic algorithms in favor of commercial public algorithms, generally known as “Suite B.” In 2010, an NSA employee filed some sort of whistleblower complaint, alleging that this move is both insecure and wasteful. The US DoD Inspector General investigated and wrote a report in 2011.
The report — slightly redacted and declassified — found that there was no wrongdoing. But the report is an interesting window into the NSA’s system of algorithm selection and testing (pages 5 and 6), as well as how they investigate whistleblower complaints.
Posted on November 9, 2016 at 12:00 PM
I’ve been saying for years that forcing people to change their passwords regularly is bad security advice, that it encourages poor passwords. Lorrie Cranor, now the FTC’s chief technologist, agrees:
By studying the data, the researchers identified common techniques account holders used when they were required to change passwords. A password like “tarheels#1”, for instance (excluding the quotation marks) frequently became “tArheels#1” after the first change, “taRheels#1” on the second change and so on. Or it might be changed to “tarheels#11” on the first change and “tarheels#111” on the second. Another common technique was to substitute a digit to make it “tarheels#2”, “tarheels#3”, and so on.
“The UNC researchers said if people have to change their passwords every 90 days, they tend to use a pattern and they do what we call a transformation,” Cranor explained. “They take their old passwords, they change it in some small way, and they come up with a new password.”
The researchers used the transformations they uncovered to develop algorithms that were able to predict changes with great accuracy. Then they simulated real-world cracking to see how well they performed. In online attacks, in which attackers try to make as many guesses as possible before the targeted network locks them out, the algorithm cracked 17 percent of the accounts in fewer than five attempts. In offline attacks performed on the recovered hashes using superfast computers, 41 percent of the changed passwords were cracked within three seconds.
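The transformation patterns quoted above are simple enough to mechanize. A toy sketch (my own illustration of the idea, not the UNC researchers’ actual algorithm) shows why a cracker that knows the old password has so few candidates to try:

```python
import re

def candidate_transforms(old: str) -> list[str]:
    """Guess a user's next password from their previous one, using the
    common transformations described above (a sketch, not the real model)."""
    guesses = []
    # Toggle the case of each letter in turn: tarheels#1 -> tArheels#1, ...
    for i, ch in enumerate(old):
        if ch.isalpha():
            guesses.append(old[:i] + ch.swapcase() + old[i + 1:])
    # Increment or repeat a trailing digit: tarheels#1 -> tarheels#2, tarheels#11
    m = re.search(r"(\d+)$", old)
    if m:
        digits = m.group(1)
        guesses.append(old[:m.start()] + str(int(digits) + 1))
        guesses.append(old + digits[-1])
    return guesses

cands = candidate_transforms("tarheels#1")
assert "tArheels#1" in cands and "tarheels#2" in cands
```

Even this crude version yields only a dozen or so candidates for a typical password, well inside the five-guess budget of an online attack.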
That data refers to this study.
My advice for choosing a secure password is here.
Posted on August 5, 2016 at 7:53 AM
Really good investigative reporting on the automatic algorithms used to predict recidivism rates.
Posted on June 8, 2016 at 1:14 PM
This is a good summary article on the fallibility of DNA evidence. Most interesting to me are the parts on the proprietary algorithms used in DNA matching:
William Thompson points out that Perlin has declined to make public the algorithm that drives the program. “You do have a black-box situation happening here,” Thompson told me. “The data go in, and out comes the solution, and we’re not fully informed of what happened in between.”
Last year, at a murder trial in Pennsylvania where TrueAllele evidence had been introduced, defense attorneys demanded that Perlin turn over the source code for his software, noting that “without it, [the defendant] will be unable to determine if TrueAllele does what Dr. Perlin claims it does.” The judge denied the request.
When I interviewed Perlin at Cybergenetics headquarters, I raised the matter of transparency. He was visibly annoyed. He noted that he’d published detailed papers on the theory behind TrueAllele, and filed patent applications, too: “We have disclosed not the trade secrets of the source code or the engineering details, but the basic math.”
It’s the same problem as any biometric: we need to know the rates of both false positives and false negatives. And if these algorithms are being used to determine guilt, we have a right to examine them.
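To make the two rates concrete, here is the standard arithmetic with hypothetical validation numbers (the figures are invented for illustration; no published TrueAllele rates are implied):

```python
def error_rates(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """False positive rate: fraction of true non-matches reported as matches.
       False negative rate: fraction of true matches the system misses."""
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    return fpr, fnr

# Hypothetical validation run: 1000 known non-matching samples, 1000 matching.
fpr, fnr = error_rates(tp=990, fp=20, tn=980, fn=10)
print(fpr, fnr)  # 0.02, 0.01
```

The point is that neither rate can be computed without ground-truth testing on known samples, which is precisely the kind of validation a black-box system makes impossible to audit.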
EDITED TO ADD (6/13): Three more articles.
Posted on May 31, 2016 at 1:04 PM