## Cryptanalysis of SHA-1

On Tuesday, I blogged about a new cryptanalytic result — the first attack faster than brute-force against SHA-1. I wrote about SHA, and the need to replace it, last September. Aside from the details of the new attack, everything I said then still stands. I’ll quote from that article, adding new material where appropriate.

One-way hash functions are a cryptographic construct used in many applications. They are used in conjunction with public-key algorithms for both encryption and digital signatures. They are used in integrity checking. They are used in authentication. They have all sorts of applications in a great many different protocols. Much more than encryption algorithms, one-way hash functions are the workhorses of modern cryptography.

In 1990, Ron Rivest invented the hash function MD4. In 1992, he improved on MD4 and developed another hash function: MD5. In 1993, the National Security Agency published a hash function very similar to MD5, called SHA (Secure Hash Algorithm). Then, in 1995, citing a newly discovered weakness that it refused to elaborate on, the NSA made a change to SHA. The new algorithm was called SHA-1. Today, the most popular hash function is SHA-1, with MD5 still being used in older applications.

One-way hash functions are supposed to have two properties. One, they’re one way. This means that it is easy to take a message and compute the hash value, but it’s impossible to take a hash value and recreate the original message. (By “impossible” I mean “can’t be done in any reasonable amount of time.”) Two, they’re collision free. This means that it is impossible to find two messages that hash to the same hash value. The cryptographic reasoning behind these two properties is subtle, and I invite curious readers to learn more in my book Applied Cryptography.
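Both properties can be seen with any modern hash library; here is a minimal sketch using Python's standard hashlib (the messages are invented for the example):

```python
import hashlib

# One way: hashing a message is cheap and always yields a fixed-size value.
digest = hashlib.sha1(b"Attack at dawn").hexdigest()
print(digest)  # 40 hex digits = 160 bits, regardless of message length

# Even a near-identical message produces an unrelated digest, which is
# part of what makes finding two messages with the same digest (a
# collision) so hard.
digest2 = hashlib.sha1(b"Attack at dusk").hexdigest()
print(digest2)
assert digest != digest2
```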

Breaking a hash function means showing that either — or both — of those properties are not true.

Earlier this week, three Chinese cryptographers showed that SHA-1 is not collision-free. That is, they developed an algorithm for finding collisions faster than brute force.

SHA-1 produces a 160-bit hash. That is, every message hashes down to a 160-bit number. Given that there are an infinite number of messages that hash to each possible value, there are an infinite number of possible collisions. But because the number of possible hashes is so large, the odds of finding one by chance are negligibly small (one in 2^80, to be exact). If you hashed 2^80 random messages, you’d find one pair that hashed to the same value. That’s the “brute force” way of finding collisions, and it depends solely on the length of the hash value. “Breaking” the hash function means being able to find collisions faster than that. And that’s what the Chinese did.
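The birthday arithmetic can be checked empirically on a deliberately weakened hash. This sketch (the 24-bit truncation and the message format are choices made for the demo, nothing to do with the new attack) finds a collision after roughly 2^12 hashes rather than 2^24:

```python
import hashlib

def weak_hash(msg: bytes) -> bytes:
    # SHA-1 truncated to 24 bits -- weakened on purpose so the
    # birthday search finishes instantly.
    return hashlib.sha1(msg).digest()[:3]

seen = {}          # digest -> first message that produced it
n = 0
while True:
    msg = b"message %d" % n
    h = weak_hash(msg)
    if h in seen:  # two distinct messages, same 24-bit digest
        break
    seen[h] = msg
    n += 1

print("collision between", seen[h], "and", msg, "after", n + 1, "hashes")
# For a 24-bit hash, expect on the order of 2**12 (~4,000) trials,
# not 2**24 -- the square-root saving the birthday bound describes.
```

The same square-root effect is why a 160-bit hash gives only 2^80 collision resistance.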

They can find collisions in SHA-1 in 2^69 calculations, about 2,000 times faster than brute force. Right now, that is just on the far edge of feasibility with current technology. Two comparable massive computations illustrate that point.

In 1999, a group of cryptographers built a DES cracker. It was able to perform 2^56 DES operations in 56 hours. The machine cost $250K to build, although duplicates could be made in the $50K-$75K range. Extrapolating that machine using Moore’s Law, a similar machine built today could perform 2^60 calculations in 56 hours, and 2^69 calculations in three and a quarter years. Or, a machine that cost $25M-$38M could do 2^69 calculations in the same 56 hours.

On the software side, the main comparable is a 2^64 keysearch done by distributed.net that finished in 2002. One article put it this way: “Over the course of the competition, some 331,252 users participated by allowing their unused processor cycles to be used for key discovery. After 1,757 days (4.81 years), a participant in Japan discovered the winning key.” Moore’s Law means that today the calculation would have taken one quarter the time — or have required one quarter the number of computers — so today a 2^69 computation would take eight times as long, or require eight times the computers.

The magnitude of these results depends on who you are. If you’re a cryptographer, this is a huge deal. While not revolutionary, these results are substantial advances in the field. The techniques described by the researchers are likely to have other applications, and we’ll be better able to design secure systems as a result. This is how the science of cryptography advances: we learn how to design new algorithms by breaking other algorithms.

Additionally, algorithms from the NSA are considered a sort of alien technology: they come from a superior race with no explanations. Any successful cryptanalysis against an NSA algorithm is an interesting data point in the eternal question of how good they really are in there.

For the average Internet user, this news is not a cause for panic. No one is going to be breaking digital signatures or reading encrypted messages anytime soon.
The electronic world is no less secure after these announcements than it was before. But there’s an old saying inside the NSA: “Attacks always get better; they never get worse.” Just as this week’s attack builds on other papers describing attacks against simplified versions of SHA-1, SHA-0, MD4, and MD5, other researchers will build on this result. The attack against SHA-1 will continue to improve, as others read about it and develop faster tricks, optimizations, etc. And Moore’s Law will continue to march forward, making even the existing attack faster and more affordable.

Jon Callas, PGP’s CTO, put it best: “It’s time to walk, but not run, to the fire exits. You don’t see smoke, but the fire alarms have gone off.”

That’s basically what I said last August. It’s time for us all to migrate away from SHA-1.

Luckily, there are alternatives. The National Institute of Standards and Technology already has standards for longer — and harder to break — hash functions: SHA-224, SHA-256, SHA-384, and SHA-512. They’re already government standards, and can already be used. This is a good stopgap, but I’d like to see more.

I’d like to see NIST orchestrate a worldwide competition for a new hash function, like they did for the new encryption algorithm, AES, to replace DES. NIST should issue a call for algorithms, and conduct a series of analysis rounds, where the community analyzes the various proposals with the intent of establishing a new standard.

Most of the hash functions we have, and all the ones in widespread use, are based on the general principles of MD4. Clearly we’ve learned a lot about hash functions in the past decade, and I think we can start applying that knowledge to create something even more secure.

Hash functions are the least-well-understood cryptographic primitive, and hashing techniques are much less developed than encryption techniques. Regularly there are surprising cryptographic results in hashing.
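For most code, the stopgap migration is mechanical. A sketch with Python's standard hashlib, where the longer NIST functions are called exactly like SHA-1 and differ only in digest size:

```python
import hashlib

data = b"some document to protect"

# SHA-1: 160-bit digest, now due for retirement.
print(hashlib.sha1(data).hexdigest())    # 40 hex digits

# The longer NIST standards are drop-in replacements.
print(hashlib.sha256(data).hexdigest())  # 64 hex digits
print(hashlib.sha512(data).hexdigest())  # 128 hex digits
```

The harder part of a migration is usually the protocol side: stored digests, signatures, and wire formats that assume a 160-bit value.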
I have a paper, written with John Kelsey, that describes an algorithm to find second preimages with SHA-1 — a technique that generalizes to almost all other hash functions — in 2^106 calculations: much less than the 2^160 calculations for brute force. This attack is completely theoretical and not even remotely practical, but it demonstrates that we still have a lot to learn about hashing.

It is clear from rereading what I wrote last September that I expected this to happen, but not nearly this quickly and not nearly this impressively. The Chinese cryptographers deserve a lot of credit for their work, and we need to get to work replacing SHA.

### Comments

Ricardo Barreira February 19, 2005 8:49 AM

Why is this paper taking so long to be published? I can understand the motives (they probably want to get it reviewed before publishing), but would it really hurt to publish a draft version of it? I only ask because I haven’t seen ANY “official” comment on when/where it will be published, and why it hasn’t been yet…

I don’t know why the paper is taking so long to be published. Lots of cryptographers have seen it in private — and they’re the people who can understand it — so I know it’s not secrecy. I speculate there’s an internal review process at their university that they have to go through before they can publish the paper, but I don’t know for sure. I’ll post the URL as soon as I have one.

Morgan Pugh February 19, 2005 9:30 AM

This is very interesting; however, I think reports on some of the popular tech news sites are sensationalising this. We knew that SHA-1 would not last forever, and this should be seen as a good reason to put some time and money into identifying an alternative. Jon Callas put it perfectly in that we should start to walk but not run towards the fire exit. And as you said, Bruce, the team who did it deserve a lot of credit. It is an excellent achievement.

PS.
I’m currently reading Beyond Fear, great book 🙂

Jon Solworth February 19, 2005 9:30 AM

It’s not uncommon for “big” results to be circulated in this way. Such results are associated with long-standing problems and contain subtle issues. It takes time to verify the results — perhaps more than for typical conference processes — and no one wants to make a mistake on verifying it, so it’s the right thing to do. Consider it at this point like a medical study which finds a statistically significant result. Such studies will be repeated and often corrected.

Your two properties February 19, 2005 11:14 AM

Your two properties are clearly wrong. If I do a one-way hash of the complete works of Shakespeare (oft cited in such frivolous experiments), the resulting hash is a fixed size, right? Or do you mean the hash function that will not compress the hash result? If the hash isn’t exactly the same size as the input, then you can guarantee that there will be collisions (even if it is larger than the input). “Two, they’re collision free. This means that it is impossible to find two messages that hash to the same hash value.” Unless your rather weak version of impossible (meaning entirely easy and possible, if you have enough time) is what you are stating. JUST STATE THE PLAIN ENGLISH FACTS MAN.

“JUST STATE THE PLAIN ENGLISH FACTS MAN.”

I did. But one more time, just for you. There are an infinite number of collisions, as you state. Finding one will take 2**80 work, assuming the hash function is secure. The odds of one occurring naturally are so small as to be negligible. The research result is an algorithm to find collisions in 2**69 work. Hence, the hash function is broken. (I am continually amazed by the number of people who can’t understand this.)

Hmmm — I have a hard time understanding how these “collisions” are a problem. Sure you can have two (and lots more) documents with the same one-way hash. But context makes a whole lot of difference.
If I use SHA-1 (or -224, -256, -384 or -512) to get a one-way hash of a legal document, what are the chances that you can derive another legal document about the same subject, on the same details, concerning the same people, that anybody is going to believe is a forgery of the original document? Chances are that the document you find with the same one-way hash is going to be pure gibberish when viewed as a legal document about the problem at hand. No, what the forger wants is to fake your legal document with a different date, or different names, or change paragraph 1 to read differently and leave the rest of the document unchanged. Out of all of those infinite document collisions, you may be able to find one that accomplishes that, but I seriously doubt you are going to be able to accomplish that in the lifetime of anybody who would be concerned. Am I really missing something here?????

There’s something I have wondered (don’t know if this was covered in another thread). You have some plaintext P and two different hash functions H1() and H2(). Say that H1 and H2 have known collision weaknesses such that it is “easy” to find H1(P’) = H1(P) or H2(P”) = H2(P). What’s the probability of finding a value P”’ where H1(P”’) = H1(P) and H2(P”’) = H2(P)? That is, find a P”’ that causes a collision for P under both H1 and H2? If brute force on H1 required 2^80 attempts and on H2 it was 2^80, would the double collision brute force take 2^160? Or is it more or less complicated than that?

Would the use of two (or n) hashing algorithms in your application be too expensive? Does this just defer the problem down the hardware scale? Granted, I’m thinking here of the most naive application of hashes, such as the storing of pre-calculated hash values of passwords for later comparison.

To the person complaining about the description of hashing functions’ properties: there are an infinite number of messages that hash down to a finite hash size.
So, yes, there are an infinite number of hash collisions. For a cryptographically strong hashing function, however, it should be computationally infeasible (what Bruce means by “impossible”) to construct two messages that hash to the same value.

What I’m curious about is how this affects existing messages hashed by SHA-1. It seems this attack is against the hash itself, such that the attacker chooses both texts to hash. If I have a fixed text that I’ve hashed (say as part of an integrity check), does this attack make it easier for the attacker to generate a new text that collides with my text? It doesn’t sound like it.

I talk about how collisions affect security in “Applied Cryptography.” Here’s one scenario: Make one message that says “Buy 1000 shares” with a varying number of nulls at the end. Make another message that says “Sell 1000 shares” with a varying number of nulls at the end. Hash the zillions of messages. Look for a collision. Get the victim to sign one message, and now you have the other message signed. There are lots of attacks like this against various protocols that use hash functions. Really, collisions are a bad thing in hash functions. All crypto texts explain why.

Bruce, I guess I’ll need to read some more to catch the subtleties. I know for my particular SHA-1 application, the hashed text is a fixed, known-ahead-of-time size and the message itself must have certain other properties to be considered valid. This severely constrains an attacker looking for collisions. The “varying trailing nulls” attack won’t work for those messages. I guess that’s the crux of the matter: if the hashed text is highly flexible and has no other form of integrity check beyond the SHA-1 function, then this attack is quite bad. Other layers of integrity checking do mitigate the damage somewhat.
(Though it is a case of “walk, don’t run to the exits.”)

This comment is from Niels Ferguson, to someone who says “it’s only a collision attack”: “A good hash function should behave like a random permutation. I cannot afford to do a security analysis of every place I use a hash function to find out whether a collision attack is dangerous. Hash functions are so useful they appear in dozens of locations through a design. It’s just easier, and safer, to use a good hash function than trying to live with a partially broken one.” He makes a good point.

Okay — but wouldn’t it be a trivial problem to ensure that the message DOESN’T have ANY nulls at the end? Alternatively, I could hash the message and include the message size in the hash. Your xxxxx nulls at the end are no longer a problem. They change the messages themselves and are detectable — different messages that any competent s/w developer could test for. Again, what am I missing here???

Okay — I think I understand part of what you’re saying. If I originate the message and have total control over the content, then I could jigger the content in such a manner that I could dupe somebody that has no idea what is happening into “signing” two documents when they thought they were “signing” only one. But again, doesn’t that come down to the “human” element in any security scheme? By duping the human element, any security scheme can be defeated. You have to know the people you are dealing with.

“Okay — but wouldn’t it be a trivial problem to ensure that the message DOESN’T have ANY nulls at the end? Alternatively, I could hash the message and include the message size in the hash. Your xxxxx nulls at the end are no longer a problem. They change the messages themselves and are detectable — different messages that any competent s/w developer could test for.”

“Again, what am I missing here???”

Hash functions have certain security properties.
Of course, it’s often possible to use a hash function that doesn’t have those security properties. For example, “any competent s/w developer could test for” the particular attack I described above. But how many thousands of possible attacks are out there? Can you think of and test for them all? And what about all those less-than-competent s/w developers, the ones that seem to code every encryption product I end up analyzing? SHA is broken. You can either work around the break, or replace it. Replacing SHA is by far the better option.

I was wondering, when the reports said “broken,” just how broken it was. It certainly isn’t run-for-the-hills time, but it is a good sign to move on.

Timo Tervo February 19, 2005 1:57 PM

Replacing SHA-1 right now might prove problematic, at least. Suppose that one is building a PKI with the CA’s certificate using some alternative, such as SHA-256 or SHA-512, for the hash function. Wouldn’t this require everybody who is verifying a cert chain leading up to this CA to support the replacement hash function as well? With a general-purpose PKI, this would require changes to web browsers and servers (doing client auth), VPN gateways & clients, S/MIME clients and, well, everything — and not just the implementations, if the corresponding standards were not crafted with ‘swappable’ cipher suites or hash functions in mind.

If I understand this right, this risk is a bit more severe with a PKI since the typical lifetime of a CA is 5-10 years, giving time to Moore’s law. A collision attack against a PKI using SHA or MD5 could involve the attacker crafting two CSRs: one completely legal, and the other with subject CN=*, Basic constraints: cA=TRUE. The attacker would end up with a regular certificate that the CA signed and another one with a valid signature, but with an arbitrary name and “useful properties”. Of course the attack would be more complex than this (if even possible), but this is still worrisome stuff from an implementor’s perspective as well.
Or am I completely lost here with my thinking?

Bruce, you are referencing SHA-512, etc., approved by NIST. What about the Whirlpool algorithm, recommended after more recent research by NESSIE for the European Commission? Is there any paper that compares the SHA-xxx algorithms and Whirlpool?

“I was wondering, when the reports said ‘broken,’ just ‘how’ broken it was. It certainly isn’t run-for-the-hills time, but it is a good sign to move on.”

It’s 2**69 broken.

“Bruce, you are referencing SHA-512, etc., approved by NIST. What about the Whirlpool algorithm, recommended after more recent research by NESSIE for the European Commission? Is there any paper that compares the SHA-xxx algorithms and Whirlpool?”

SHA-256 and SHA-512 are the easy solutions, because they’re NIST standards like SHA-1 is. Whirlpool is certainly a good choice. And while I have not seen any direct comparisons of post-SHA hash functions, my guess is that we’ll be seeing a bunch of them soon.

Hmm, any place I can read what the Whirlpool algorithm is and how it’s implemented? I would like to implement it. I Googled Whirlpool NESSIE and found the documentation/specifications. Thanks.

The link to my URL links to a tarball that has a working implementation of Whirlpool in C (the public domain reference implementation; any and all changes I have made are also public domain). Actually, it has implementations of both versions of Whirlpool. It also has implementations of HAVAL (broken), MD5 (broken), SHA0 (broken), SHA1 (broken), RIPEMD160 (not broken…yet), TIGER (not broken), and AESHASH with variants (not broken). • Sam

Dave Howe February 19, 2005 3:31 PM

Hmm. I am not a hardware techie, so bear with me here, but does Moore’s law apply to FPGA machines? I know CPUs and their associated chipsets have been pushing that boundary for some time now, but I don’t know of any comparable “arms race” that is forcing FPGAs to evolve as fast. How MUCH faster is a 2005 FPGA than a 1998 one?
The second question (which can only be answered after we see the paper, of course) is: how much practical value is this result? From my memory of the previous breaks, they allowed an attacker to construct two messages which had the same hash, but the attacker had no control over what changes had to be made to the plaintext of the second message to make it hash out the same. Is an attacker going to be able to make useful changes (e.g., the value of a transaction, or the payment date, or whatever you are using a digital signature for) and create a plausible and usable second document from a first they may or may not get to specify?

David Howe: Moore’s “Law” (it’s less a law than it is a rough guide) states that the number of transistors on integrated circuits will double every 18 months. This does not necessarily correlate to speed. Compliance with the Law also varies from architecture to architecture and from year to year.

You could use Crypto++, a C++ library that supports HAVAL, MD2, MD4, MD5, PanamaHash, RIPEMD160, RIPEMD320, RIPEMD128, RIPEMD256, SHA, SHA256, SHA384, SHA512, Tiger, and Whirlpool hashing functions, along with a LOT more related things. It’s linked to in my name.

Seth Ross February 19, 2005 6:00 PM

The “varying trailing nulls” attack might be practical for a malicious software developer. Mallory iterates on two versions of voting software — one counts accurately, one has a back door — and finds a collision. The accurate version is certified. The back-door version is loaded into voting machines and shows the same hash as the certified one. It seems like high-value systems — where the value of an attack exceeds the computational cost of 2**69 — might be vulnerable.

[Dave], FPGA performance follows “Moore’s Observation” (it is by no means a law in any conventional sense) the same as any other type of integrated circuit. In fact, FPGA vendors are very aggressive in their push to use the forefront of technology to stay ahead of the game.
They are among the first IC vendors to come out with devices in new technology nodes. However, as [Mat] remarked, it is no longer true that speed is a given when going down in geometry sizes, as it was in the past.

David Smith February 19, 2005 9:25 PM

Bruce, it looks like the 2**69 number is for the existence problem: find two (any two) messages M0 and M1 so that SHA1(M0) = SHA1(M1). What’s the number for the constructive problem: given a known message M0, find another message M1 so that SHA1(M0) = SHA1(M1)? Thanks, -David

One should notice that there are hidden considerations in every discussion. Once you become aware of them, your attitude usually changes and becomes more positively balanced. Please find the hidden consideration in your point of view. The common one is a lack of understanding of the drivers that make people think differently than us.

Cypherpunk February 20, 2005 12:25 PM

It’s too bad that a prominent commentator scared everyone unnecessarily with his inflammatory claim that “SHA-1 HAS BEEN BROKEN”, without emphasizing the difficulty of applying the attack. Now we have to go back and try to calm everyone down.

“It looks like the 2**69 number is for the existence problem: find two (any two) messages M0 and M1 so that SHA1(M0) = SHA1(M1). What’s the number for the constructive problem: given a known message M0, find another message M1 so that SHA1(M0) = SHA1(M1)?”

The first problem is finding collisions, and that’s what the attack does: 2**69 work. The second problem is reversing the one-wayness, and that is not what the attack is about. SHA-1 brute force says 2**160, and the best attack is the Kelsey-Schneier attack (mentioned in the main post) at 2**106.

“It’s too bad that a prominent commentator scared everyone unnecessarily with his inflammatory claim that ‘SHA-1 HAS BEEN BROKEN’, without emphasizing the difficulty of applying the attack. Now we have to go back and try to calm everyone down.”

That kind of thing happens.
I never talk about cryptographic anything being “broken” without stating the complexity of the attack, but not everyone is as diligent as I am. The real problem is that there is a mathematical meaning to the term “broken,” and cryptographers sometimes forget that they’re talking to the general public and not members of their own tribe.

Bruce Schneier, could you possibly produce a shortened edition of Applied Cryptography in paperback that costs only about $30? $60 for a paperback is pretty steep.

Despite your relative calm about the rate at which SHA-1 will be broken in practice, I’m concerned about the legal validity of digital signatures. These are generally regarded as binding now, and many contracts last a long time. In many cases it would be worth breaking the digital signature on a contract in order to falsify the contract terms and reseal it, even five or ten years into the contract term. Do I have a valid concern here?

I think one of the causes of the misunderstanding is that Bruce lists two security properties in his post, whereas there are really at least six one could consider:

a) Given H and a set A of “well-formed” messages, find M in A such that SHA1(M)=H.
b) Given M1 and a set A of “well-formed” messages, find M2 in A such that SHA1(M2)=SHA1(M1).
c) Given a set A of “well-formed” messages, find M1 and M2 in A such that SHA1(M1)=SHA1(M2).
d) Given H, find any M such that SHA1(M)=H.
e) Given M1, find any M2 such that SHA1(M2)=SHA1(M1).
f) Given nothing, find any M1 and M2 such that SHA1(M1)=SHA1(M2).

What the Chinese have demonstrated, according to Bruce, is that (f) is not as hard as it was previously believed. So what? say some readers; (f) is only of theoretical interest, and a real attacker would have to solve one of the presumably harder (a), (b), or (c). One thing these people miss is that it is not sound engineering to consider only (a), (b), (c), because the set A is different for every application and for every mode of attack.
Some A’s may have some internal structure that makes an attack easier, so if (a), (b) and (c) were the only problems we could consider, we’d have to analyse the security of our hash function from first principles for each individual application of it. That would pretty much invalidate every advantage of having a common hash function as an off-the-shelf component, except for the very minor matter of reusing source code.

Another, more subtle point is that (f) is the easiest of the six problems — any feasible solution to one of the others would immediately produce a feasible answer to (f). And much of the confidence that (a)-(e) are hard is based on the evidence that even (f) was so hard that nobody could find a better method than brute-forcing it. This premise has fallen away now, which forces Bruce and his community to reconsider which reasons they have to think that the harder problems are sufficiently hard.

It is conceivable that we are actually still safe, and that there exists a way of thinking about SHA-1 that allows an argument that the underlying ideas behind the Chinese break are actually only good for (f) and cannot possibly scale to the more immediately interesting (d) and (e). But the cryptographic community seems to believe that it would be easier to reestablish trust by finding another hash function where (f) is still hard. And they are really the only ones who can have an informed belief about which option would have the best chance of success.

As for cryptographers defining “hard enough” as “so hard that no algorithm is appreciably cheaper than a brute-force search”, it’s really a tribute to their ingenuity that they’ve been able to define anything that even looked as if it would meet this standard, and I think the rest of us should approve of them trying to reach it for as long as they have any hope left.
(Though something with a proven lower bound, even one significantly below brute-force level, would be better if anybody could figure out how to do that.)

[Disclaimer: I have officially no idea what I’m talking about, but it sounds cool.] Why is the solution not to use two very different cryptographic hashes? It should be exponentially more difficult to find matching hashes from two separate algorithms, but only a constant work addition on the confirmation side. Heck, you could use rather insecure hashes and still have a near-unsolvable problem. Being little more than a dabbler in math, maybe ye learned people can enlighten me.

(Disclaimer — also not a cryptographer.) Henning — I don’t see much practical difference between (d) and (e) in your list. From my understanding of the previous reports, the break wasn’t in the (f) slot but in the (e) slot. I could be wrong.

Glenn Willen February 20, 2005 7:23 PM

Regarding the comment that there are six security properties of hash functions instead of two: keep in mind the idea, espoused in /Practical Cryptography/, that our security primitives be black boxes, whose weaknesses we don’t need to worry about when constructing systems around them. From that perspective, it is unacceptable to have to worry about:

1) Whether our restrictions on message format reduce the message-space enough to eliminate attacks. That will likely require a statistical calculation particular to the strength of the attack and the space of legal messages, a calculation we should not have to make to guarantee security. This eliminates the distinction between {abc} and {def}.

2) Whether the message being hashed is a secret or not. This assumption blurs the line between authentication (for which we should be able to use a hash function) and encryption (which should be completely orthogonal). That eliminates the distinction between {ad} and {be}.

Keeping that in mind, we are left with two interesting properties: essentially E and F.
All the others are redundant. But the extra complexity isn’t that interesting anyway: your line of reasoning is still fine. The fact that F is no longer “hard,” for a cryptographer’s definition of hard, means it’s time to start looking for a new hash function, because 1) we’re no longer sure that E is hard, having been shown that F is not, and 2) we don’t want to have to distinguish between “hash functions that are universal” and “hash functions that are only good for E but not F” for reasons of engineering.

Paul: “Henning — I don’t see much practical difference between (d) and (e) in your list.”

(d) is the problem you have to solve if you know the hash of a password and want to get in. It is intrinsically harder than (e). For example, consider a hash function that strips off and forgets the first byte of its input and then computes a real, cryptographically strong, hash of the rest. I can do (e) for this function with no effort at all, but (d) remains difficult.

“From my understanding of the previous reports, the break wasn’t in the (f) slot but in the (e) slot. I could be wrong.”

If the break had been for (e), the brute-force comparison would have been 2^160, not 2^80. The birthday paradox technique that allows you to halve the bit count only works when you get to select both of M1 and M2 yourself, i.e. problem (f).

Glenn: I think we are in complete agreement and actually trying to make the same points. I listed the six different problems not because I think each is an interesting property to analyse for, but because some posters in the thread (or whatever it is called in blogspeak) seemed to be confused by the differences between them. In particular, I wanted to state the questions in order to get a language in which I could try to explain why cryptographers care about the seemingly insignificant (f).
In fact, as long as (f) is technologically infeasible we don’t need to worry about (e) either, because any feasible solution to (e) can trivially be turned into one for (f). Research that specifically aims at attacking (e) — such as the 2^106 attack mentioned in Bruce’s original item — is useful because it is one possible strategy for breaking (e), but also because (e) is the second line of defense. Even if a feasible attack on (f) is demonstrated, if we still have some faith in (e) we’ll have a little breathing space that we can use to switch to another hash function, because most applications of cryptographic hashes do not depend that much on (f). [I acknowledge that we’d still have to worry about a gajillion instances of (c) then, so it won’t be a picnic.] On the other hand, if (e) gets cracked we’ll basically have no guarantees left about anything.

Andrew Wade February 21, 2005 1:33 AM

“If you hashed 2^80 random messages, you’d find one pair that hashed to the same value.”

Ah, but finding that pair would require storing 2^80 hashes in “memory”, no? The nature of keysearches is that they are CPU-bound: memory latency isn’t a problem. I’d be somewhat surprised if that were the case with this new attack. So it may not be “on the far edge of feasibility”. Or it may: I’m not in a position to know.

Markus Hanauska February 21, 2005 5:04 AM

Hey Bruce, why don’t you design a hash function? Honestly, I prefer Twofish over AES; it simply looks better (from the naive point of view of a programmer like me). So how about creating a hash function? Do you think it would be possible to base a hash function on Twofish, similar to how Whirlpool is based on a function very similar to AES? Any idea how good such a function would be? I personally don’t trust SHA-256 or higher either. I never liked SHA in the first place.
Quoting Bruce… “Here’s one scenario: [snip] Get victim to sign one message, and now you have the other message signed.”

Whilst I agree that SHA-1 needs to be replaced, the short-term work-around would be to only ever sign things you write yourself. Never sign anything from someone else, even if it looks right. (Unless I’ve overlooked something. Me no cryptographer.) In case you missed it, that’s a SHORT-term work-around. And it’s only a work-around, not a fix.

(Side note: I clicked preview, but all the line-ends had been removed. If this appears as one big paragraph, something’s up.)

Clive Robinson February 21, 2005 7:01 AM

Just one thought: IF these three bods have broken SHA-1 (and I see no reason why they have not), what other use could their technique be put to? As Bruce pointed out, some hash functions have a lot of similarities when compared. Therefore, before I start walking, I would like to know that the direction I am heading is away from the fire, not towards it…

Mathias February 21, 2005 7:16 AM

I am not a cryptographer either, but I have two questions regarding the use of hash functions. One of them has already been asked here: “How hard would it really be to find, given a message M1, another message M2 that has the same hash as M1 under two different hash algorithms? MD5 has been ‘broken’, so has SHA-1, but is it feasibly possible to find M1 and M2 so that MD5(M1)=MD5(M2) and SHA-1(M1)=SHA-1(M2)?”

The second question is: how hard would it be, given M1, to find M2 so that SHA-1(M1+MD5(M1))=SHA-1(M2+MD5(M2))? This second composition yields a 160-bit hash and seems not to be vulnerable to either the MD5 or the SHA-1 break. Once again I would love to be enlightened on these two options; not being a cryptographer very often makes me not see the obvious flaws.

Andrew Wade February 21, 2005 7:59 AM

“Ah, but finding that pair would require storing 2^80 hashes in ‘memory’, no?” Er, make that messages, not hashes.
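For concreteness, Mathias’s two combined constructions can be written down with Python’s hashlib; this is just a sketch of the compositions he asks about, and the helper function names are my own:

```python
import hashlib

def concat_hash(m: bytes) -> bytes:
    # First question: require M1 and M2 to collide under BOTH MD5 and
    # SHA-1 at once (equivalently, collide under the 288-bit concatenation).
    return hashlib.md5(m).digest() + hashlib.sha1(m).digest()

def nested_hash(m: bytes) -> bytes:
    # Second question: SHA-1(M || MD5(M)), still a 160-bit output.
    return hashlib.sha1(m + hashlib.md5(m).digest()).digest()

msg = b"I accept your offer to buy 1000 widgets"
print(concat_hash(msg).hex())  # 36 bytes: 16 of MD5 plus 20 of SHA-1
print(nested_hash(msg).hex())  # 20 bytes
```

Whether either composition is actually stronger than its parts is exactly the open question posed above; this code only pins down what is being computed.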
Tobias: “In many cases it would be worth breaking the digital signature on a contract in order to falsify the contract terms and reseal it, even five or ten years into the contract term. Do I have a valid concern here?”

Yes — but. Such a forgery leaves evidence; namely, the original contract with an identical hash. If you had the original contract you could produce it in court. What happens then is a question for lawyers. But there’s also a flip side: if signatures are too easily forgeable, they may cease to mean very much, and other parties may have trouble enforcing contracts you have digitally signed.

flavio February 21, 2005 10:24 AM

I’d like to reply to Terry, who said that you just need to ensure there aren’t any nulls at the end of the message. Now think of a legal document: they’re usually quite long and full of big words, not the trivial example Bruce presented. Any other useful document is usually quite long and nontrivial, by the way. Once you have a feasible, better-than-brute-force attack on the hash, you will probably be able to find the hash you’re looking for by changing a few letters here and there, and it will look like an innocent typo. Nobody is going to get suspicious over a typo in a real-life document — usually there are some already. If that’s a concern, you could even spell-check the doctored text and use the corrections as your starting point. 🙂 (I am not a cryptanalyst. Bruce, if I said something stupid, feel free to point it out.)

flavio February 21, 2005 10:34 AM

“Why is the solution not to use two very different cryptographic hashes?” Correct me if I’m wrong, but wouldn’t that be equivalent to using a stronger hash with roughly as many bits as the other two combined? And you have the added advantage that you can choose a not-already-broken hash. In other words, how does the cost of computing a hash vary with the hash length?

Since 15 Nov 2001 there has been a $1000 DES cracker that cracks DES in 25 hours on average:
http://www.cl.cam.ac.uk/~rnc1/descrack/index.html

Today it should be much faster, I’d guess ten times: the FPGAs are bigger (more crackers in parallel) and faster (higher clock speeds).

flavio: I have seen others refer to that, but still, you have to find a match between the hash functions. It’s not enough to match either one; you must match BOTH.

Ralph Loader February 21, 2005 2:42 PM

So how well does the new attack parallelize and how much memory does it require?

If it doesn’t parallelize at all, then it might actually be harder to carry out than a brute force attack.

OTOH if it needs little memory and scales perfectly with parallelization and pipelining, then it may be even better compared to brute force than the number of operations suggests.

Ralph Loader February 21, 2005 3:43 PM

Brute forcing can be done without a huge amount of memory if you’re clever. Let S^n(x) be SHA-1 iterated n times starting with x; so S(x) is SHA-1 of x, S^2(x) = S(S(x)), etc.

Then fix x and compute S^n(x) and S^{2n}(x) for successive values of n. You expect to find a pair S^n(x) = S^{2n}(x) in around 2^80 operations. (More generally, 2^(k/2) operations for a k-bit hash.)

That algorithm doesn’t parallelize at all well, but a variant does: on each processor, choose x at random [different for each processor] and then compute

min( S^n(x), S^{n+1}(x) … S^{2n}(x) )

where n is chosen so that (n times the number of processors) is about 2^80.

[Actually it’s better to keep the smallest few hash values, not just the smallest one.]

Collect all the minimums together and see if any are equal.

Then again you expect to find a collision in about 2^80 operations.

Gopi Flaherty February 21, 2005 4:56 PM

Earlier there were some discussions of adding null characters to files and other various tricks. I’ve got two reasons it’s easier than most people seem to think.

Firstly, you could substitute a different format of file – maybe I could use a WordPerfect file instead of a text file. People send all sorts of file formats, why does the replacement have to be the same?

Secondly – an idea inspired by the first – many complicated file formats are capable of nearly arbitrary manipulation while retaining their original contents using traditional viewers.

What I mean is – Microsoft Word documents contain undo metadata, and all sorts of other metadata which is not obviously viewable, and which does not need to match in any way at all the original document. Many versions of Word even used sparse files – not writing at all in certain sections. You could hack a file and put whatever you wanted into the sparse sections.

Another example is something like a video file. QuickTime, say, lets you have multiple streams of many different formats. You could have a video, audio, and “other” stream, the “other” stream being meaningless data for hash function manipulation.

Final example is TIFF. TIFF lets you encode many, many different types of data. Microsoft TabletPCs, I’ve been told, can use TIFFs to transfer an image of your handwriting – with an extra layer containing the vector stroke data.

How many file formats simply ignore garbage appended to the end? Xmodem in some (all?) variants, I believe, would “round” transferred files to the block size you used back in the BBS modem days.

While it would be hard to use a broken hash to exploit short pure-ASCII messages, I think it’s very clear that more complicated file formats are wide open, and in many cases nearly trivial to modify without making it apparent.

Even if you, as a developer, protect the file format in your application, in the real business world chances are somebody could encode the same document in an easily modifiable format without arousing suspicion.

Filias Cupio February 21, 2005 5:35 PM

A minor error in the article: “If you hashed 2^80 random messages, you’d find one pair that hashed to the same value.”

No, if you had one message you wished to match, you’d find (on average) one match for it in 2^80 random messages.

This is the “Birthday problem”: “How many people do you need so that there is a 50% chance that a pair of them share the same birthday?” The answer is 23. It is a different question from “How many people do you need so that there is at least a 50% chance that one of them shares my birthday”, for which the answer is 253. (365 times the mean a Poisson distribution needs for P(0) < 0.5, i.e. about 365 × ln 2.)

http://mathworld.wolfram.com/BirthdayProblem.html
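The two versions of the question are easy to check numerically. A quick sketch in Python (the function names are mine):

```python
from math import prod

def p_shared_pair(n: int, days: int = 365) -> float:
    # Probability that at least one PAIR among n people shares a birthday.
    p_all_distinct = prod((days - k) / days for k in range(n))
    return 1 - p_all_distinct

def p_match_mine(n: int, days: int = 365) -> float:
    # Probability that at least one of n people shares MY fixed birthday.
    return 1 - (1 - 1 / days) ** n

print(p_shared_pair(23))   # ~0.507: 23 people suffice for the pair version
print(p_match_mine(253))   # ~0.500: about 253 for the fixed-target version
```

The same square-root effect is why a k-bit hash gives only 2^(k/2) collision resistance: with 2^160 possible “birthdays”, a pair is expected among roughly 2^80 random messages, while matching one fixed hash takes about 2^160.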

Filias Cupio February 21, 2005 5:47 PM

Sorry, I hadn’t read all the comments (I did skim them before posting, but missed the relevant bits.) It looks like this has already been accounted for in the difference between 2^160 and 2^80. (I.e. there are 2^160 possible ‘birthdays’, so we need 2^80 people to find a single pair that share a birthday.)

“A minor error in the article: ‘If you hashed 2^80 random messages, you’d find one pair that hashed to the same value.’

“No, if you had one message you wished to match, you’d find (on average) one match for it in 2^80 random messages.”

No, I’m right. The second problem requires 2^160 random messages. The first problem requires 2^80 messages. And yes, it is an example of the Birthday problem.

Andrew Wade February 21, 2005 10:56 PM

That’s clever all right; thanks for the explanations. And I can see how to extend it along the lines of Mr. Schneier’s example:
SHA-1′(x) = SHA-1(“message-id: ” + x + “\n\nBuy ” + x[0] + “000 shares.”)

Since the method you described produces a sequence of collisions, one could pick the desired collision from the sequence so long as the difference between the two messages is quite small.

I guess we’ll have to wait to see what sorts of attacks the new technique allows. But I take Bruce Schneier’s point: even if the current attack isn’t exploitable, it is a starting point for developing better attacks.

John Gregory February 22, 2005 5:58 AM

I will accept the engineering advice and will end up using whatever turns out to be outside the exits that cryptographers are now walking towards…

But it still seems to me – as a lawyer not a cryptographer or engineer – to be a slow walk. By applying 2**69 computing power (possibly years of work or many parallel computers), one can get one random message that will match the one you start with. But how much computing power and time is needed to get a useful message that matches the one you start with?

Bruce mentions the “zillions” of messages (he’s surely allowed to be imprecise when he chooses to be!) to sort through to find a “sell 1000 shares” message to substitute for a “buy 1000 shares” message. But if I have a digitally signed contract, how long will it take someone to come up with an apparently identically signed contract that has only a meaningful change and otherwise appears to be credible (even allowing for a few typos and invisible metadata)?

Surely a VERY long time.

It will be a while as well before there are tools available to apply to existing digitally signed documents to create useful (to the bad sort of people) variants that will appear legitimate.

And the signature and the encryption and the hashing all produce SOME evidence, and not usually the best.

I recognize that there are lots of other uses of hashes, as Bruce says in his original article, and they need to work reliably for many systems to work. That’s why I expect the engineers and cryptographers to change hashes, but also why I would not advise my clients to panic, or even to settle …

Steven February 22, 2005 10:47 AM

A few people seem to be bothered by the fact that the SHA-1 paper isn’t in print yet. Academic papers take a long time to get published. As an example, the submission deadline for Crypto 2005 was February 14 and the conference is in August.

Andrew Wade February 22, 2005 2:03 PM

I wrote:

Since the method you described produces a sequence of collisions,
Well, I’m embarrassed; that’s just not true. Still, the difference between ‘1’ and ‘5’ in ASCII is just one bit, so if you’re a bit lucky, one collision is all you need.
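The one-bit claim is easy to verify: ‘1’ is 0x31 and ‘5’ is 0x35, which differ only in bit 2.

```python
# XOR the two ASCII codes and count the differing bits.
diff = ord('1') ^ ord('5')
print(f"{diff:#04x}", bin(diff).count("1"))  # prints: 0x04 1
```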

John Gregory wrote:

But if I have a digitally signed contract, how long will it take someone to come up with an apparently identically signed contract that has only a meaningful change and otherwise appears to be credible (even allowing for a few typos and invisible metadata)?
That’s not what this current attack does; rather, it is a way of finding two messages with the same hash (if one is signed, the signature is valid for the other as well). For the brute-force attack, producing two contracts where one has a ‘5’ in place of a ‘1’ in the other (and invisible metadata changed) is easy enough. For the new attack, well, we just don’t know.

Bo Overgaauw February 22, 2005 9:34 PM

I’d like to respond to John Gregory’s and Andrew Wade’s messages by drawing an analogy between the need for competent software engineers, as mentioned by “Terry” and Bruce Schneier, and the need for competent lawyers applying cryptographic procedures: simply applying existing techniques without thinking thoroughly about the consequences can get you in deep trouble, especially if one element you were relying on turns out to be broken, which is now the case with SHA-1. I’m convinced that the risks of using SHA-1-based digital signatures for contracts, now that SHA-1 has been broken, can be reduced to an acceptable level (risk = 0 does not exist) by applying additional, thoroughly thought-out measures.

Bruce Schneier pointed out in several of his comment messages that SHA-1 being broken is bad, and he’s in a much better position than I am to say so. That being said, one should not forget that in many fields hashing is not the only element that is of importance, so the specialists may often be able to reduce the now-increased risk again by applying other measures. However, cryptology cannot be expected to take all fields of application of a hash algorithm into account, so it needs one that’s not broken.

John Gregory February 22, 2005 10:25 PM

Finding two messages with the same hash in 2**69 computations is impressive to the cryptographer, but not to the person interested in a transaction.

If I have a short message (and a lot of contracts are not short) saying something like “I accept your offer to buy 1000 widgets for $100 each for delivery on March 1 in Chicago” – then there is no point in doing 2**69 computations to get a message that has the same hash but which reads “Home, home on the range, where the deer and the antelope play”, or (more likely) “dowinr02asvi0nvewt0svj0wr2309fj jdjr032493jr0d9j()D9JR)#(jFDj”. It would be almost inconceivable to find a message that shared a hash with another message that was identical to it except for a significant, credible variation.

And how long does the bad guy/good cryptographer have to come up with the variation? In my little example, if it’s not by March 1, forget it. Presumably once the hash is broken, it will be further broken, i.e. the computations required to find a matching hash will fall from 2**69 to something less over time – but it’s still going to be a challenge to find the significant, credible variation.

Besides, there is usually lots of evidence about what a transaction is or was, besides the document or the signature. (I have long thought that high-value e-commerce between strangers was unlikely without a trusted intermediary – not a CA but a financial institution.) So a court would usually be able to figure out which variant of a document was credible, just as it does now for paper documents that vary when they are not supposed to. So it’s going to be a very slow walk to the exits for the lawyers. We’ll adopt what the engineers and cryptographers are selling long before we need it, because they need it more and will change the market accordingly.

Hi: Has anyone tested the test vectors provided by Xiaoyun Wang and others in their short paper on collision search attacks on SHA-1?

Clive Robinson February 23, 2005 6:07 AM

As John Gregory pointed out, for a lot of things there is a timeliness required for the search.
There are three points to consider. The first is that some contracts (e.g. your home mortgage) have a very long time to run, and often the important thing in long contracts is the termination clauses (think leases, and in the UK the PFI contracts). The second is that any new attack usually leads to insight into other attacks (the history of FEAL, for instance); as noted above, the NSA have a saying about attacks getting better, not worse. The third, and perhaps the most important, is perception: currently it is very difficult to explain digital signatures and crypto attacks to involved parties such as engineers; think what problems our legislators are going to have. Any successful attack on any crypto primitive is going to erode confidence and delay the eventual acceptance of digital contracts etc.

I Googled Whirlpool NESSIE and found the documentation/specifications. Thanks.

Is this the paper?! Still 3 pages?!
http://theory.csail.mit.edu/~yiqun/shanote.pdf

So if this IS THE summary, then I assume page 3 is the result of the attack? It seems to me that nowhere in the paper is it shown and proved that the full 80-step SHA-1 was cracked; the page-3 summary shows a 58-step SHA-1 attack. Can you even call that SHA-1? Maybe I’m stupid, but page 1 seems to have lots of vague and misleading clauses to me. “Our analysis shows that collisions of SHA1 can be found with complexity less than 2^69 hash operations. This is the first attack on the full 80-step SHA1 with complexity less than the 2^80 theoretical bound.” This makes you think: oh, a collision in SHA-1 (the real 80-step one) was found.
And in the next sentence it says “Based on our *ESTIMATION?!*, we expect that real collisions of SHA1 reduced to 70-steps can be found using today’s supercomputers.” And only in the final paragraph does it mention “We also implemented the attack on SHA1 reduced to 58 steps and found real collisions with less than 2^33 hash operations.” So it seems to me the 2^69 number was an ***estimated*** result based on the 58-step attack?? Was the whole hell-raising alarm about the “broken SHA-1” based on this (3-page) paper, or is there something else?

Read Cryptanalysis of SHA-1 by Bruce Schneier.

Jarrod February 24, 2005 10:35 AM

Yen,

That is the summary. The final paper is not yet (to my knowledge) openly public information, and is significantly longer.

Ian Mason February 24, 2005 7:53 PM

This is a comment to John Gregory. John, it’s not necessary to find a meaningful message that collides with a similar meaningful message to do damage. In many transactions it’s sufficient to have a random message and a meaningful message collide. Say I email you: ‘John, I’ve some business I’d like to discuss with you, but I’d like to be sure that it’s you I’m dealing with. Can you please digitally sign this random string and send it back to me: “90823jkasdc9ujasdcv zai”. The reason I’m asking you to sign something random is so that someone can’t grab a previously signed “Yes, it’s me” message out of their mail from you and forward it to me.’ On the surface that’s a reasonable request to identify yourself with a protocol that avoids replay attacks. You then do as I ask, unknowing that I have prepared a message: "I agree to pay Ian 1,000,000 Guineas Sterling for a short and painful lesson in cryptographic protocols." which has a hash collision with the random string I sent you. I substitute that string for the random one in your signed message and then instruct my own lawyer to recover the 1,000,000 guineas from you.
I chose the message, then I found a random collision for it and presented it so that a random string looked reasonable to you.

I’m not a lawyer, but I don’t think that random-message trick could fool even me. First of all, if I’d even consider signing something like that, it’d be for the entire message string (from “John…” to “…zai”); what is the chance the signatures would then match? There are enough of these kinds of cheap pranks around; I think I’m educated enough, thank you. Even if somebody were dumb enough to sign something like that, I would think it’d be for a trusted friend. And if, just by chance, the poor guy actually kept a copy of the thing he signed and produced the evidence in court, what’s going to happen to you? Thirdly, the collision found so far is only at a 58th-step intermediate hash value; the SHA-1 algorithm wasn’t even finished yet. There is really no proof of the 2^69 number at this time, or that a collision can really be found in 2^69. And even if that number actually works, how long is it gonna take you to find the random string? It’s easier to put invisible ink on paper.

Ian Mason February 25, 2005 4:09 AM

It’s an exemplar to demonstrate how a random string can be useful, as opposed to a pair of chosen plaintexts. Substitute ‘program’ for ‘John’ and ‘nonce’ for the random string and this hits a number of common cryptographic protocols. That’s why collisions matter, and that’s why it matters that they can be found in less than brute-force order. If the probability of a failure, and the associated risk assessment, is based on the brute-force order, then reducing the attack difficulty by a factor of 2^(80-69) = 2^11, roughly three orders of magnitude, may be significant. If your original risk assessment said “We’ll take a 1 in one billion risk” (very acceptable for most commercial ventures) you’ve just been holed below the waterline.

Praveen,

Seems like everybody is waiting for somebody else to do the work.. well, I guess curiosity kills the cat.. Here is what I found with the messages.
The 58th-step hashes do match for the two messages, but the number I got is different from the ‘h1’ in the paper. I don’t know what kind of SHA-1 was implemented for the paper, but I’m quite sure my SHA-1 module is fine. The NIST test case of ASCII “abc” was put through the module first and produced correct results. See the key numbers below (per-step chaining values a b c d e; full per-step traces for M0, M0’, and “abc” omitted).

M0, step 58:    3023c438 c69503f9 0758a040 8154f359 04fbd5e4
M0, final:      b4ea80a8 322b716b 7fc2a70f a04d089d b6aaba82

M0’, step 58:   3023c438 c69503f9 0758a040 8154f359 04fbd5e4
M0’, final:     1d887dc1 39926cc0 f6ce5d8a 0702c773 feecfa97

NIST ASCII “abc” test case, final: a9993e36 4706816a ba3e2571 7850c26c 9cd0d89d

oops.. mis-stepped just a little bit.. here is take two: the same chaining values, with the step numbering shifted down by one.
a6faa511 2286742a 2375535f a6741981 41 abee3f71 a5b11b6d 69bea944 2286742a 2375535f 42 f37f0648 abee3f71 696c46db 69bea944 2286742a 43 98bcd7e3 f37f0648 6afb8fdc 696c46db 69bea944 44 336146e9 98bcd7e3 3cdfc192 6afb8fdc 696c46db 45 bbb96c36 336146e9 e62f35f8 3cdfc192 6afb8fdc 46 63fee1b4 bbb96c36 4cd851ba e62f35f8 3cdfc192 47 8c97f438 63fee1b4 aeee5b0d 4cd851ba e62f35f8 48 8fe05a9d 8c97f438 18ffb86d aeee5b0d 4cd851ba 49 1ee99f6f 8fe05a9d 2325fd0e 18ffb86d aeee5b0d 50 dc3c9013 1ee99f6f 63f816a7 2325fd0e 18ffb86d 51 8bd58e52 dc3c9013 c7ba67db 63f816a7 2325fd0e 52 9f10dbc8 8bd58e52 f70f2404 c7ba67db 63f816a7 53 13ef5790 9f10dbc8 a2f56394 f70f2404 c7ba67db 54 0553cd66 13ef5790 27c436f2 a2f56394 f70f2404 55 1d628100 0553cd66 04fbd5e4 27c436f2 a2f56394 56 c69503f9 1d628100 8154f359 04fbd5e4 27c436f2 57 3023c438 c69503f9 0758a040 8154f359 04fbd5e4 58 f76cb30f 3023c438 71a540fe 0758a040 8154f359 59 c16ec0d8 f76cb30f 0c08f10e 71a540fe 0758a040 60 8b3ad4e3 c16ec0d8 fddb2cc3 0c08f10e 71a540fe 61 ad71f311 8b3ad4e3 305bb036 fddb2cc3 0c08f10e 62 6b3a54d9 ad71f311 e2ceb538 305bb036 fddb2cc3 63 f712c417 6b3a54d9 6b5c7cc4 e2ceb538 305bb036 64 ecdb21d2 f712c417 5ace9536 6b5c7cc4 e2ceb538 65 8d795749 ecdb21d2 fdc4b105 5ace9536 6b5c7cc4 66 4f79f3b1 8d795749 bb36c874 fdc4b105 5ace9536 67 c07ce060 4f79f3b1 635e55d2 bb36c874 fdc4b105 68 ca0f4e43 c07ce060 53de7cec 635e55d2 bb36c874 69 a8a8fbb5 ca0f4e43 301f3818 53de7cec 635e55d2 70 b59bf4b5 a8a8fbb5 f283d390 301f3818 53de7cec 71 26e22807 b59bf4b5 6a2a3eed f283d390 301f3818 72 d5cfa569 26e22807 6d66fd2d 6a2a3eed f283d390 73 a481a976 d5cfa569 c9b88a01 6d66fd2d 6a2a3eed 74 7c364bfe a481a976 7573e95a c9b88a01 6d66fd2d 75 cb5f624b 7c364bfe a9206a5d 7573e95a c9b88a01 76 406ad09e cb5f624b 9f0d92ff a9206a5d 7573e95a 77 9c1f2847 406ad09e f2d7d892 9f0d92ff a9206a5d 78 425dc5e2 9c1f2847 901ab427 f2d7d892 9f0d92ff 79 4da55da7 425dc5e2 e707ca11 901ab427 f2d7d892 b4ea80a8 322b716b 7fc2a70f a04d089d b6aaba82 M0′ 0 d2dff369 67452301 7bf36ae2 98badcfe 
10325476 1 83c4b1b4 d2dff369 59d148c0 7bf36ae2 98badcfe 2 21c2b311 83c4b1b4 74b7fcda 59d148c0 7bf36ae2 3 d526b01a 21c2b311 20f12c6d 74b7fcda 59d148c0 4 b456c646 d526b01a 4870acc4 20f12c6d 74b7fcda 5 c7d3a2df b456c646 b549ac06 4870acc4 20f12c6d 6 fecf520b c7d3a2df ad15b191 b549ac06 4870acc4 7 99ff1586 fecf520b f1f4e8b7 ad15b191 b549ac06 8 94c3ffac 99ff1586 ffb3d482 f1f4e8b7 ad15b191 9 7dfbe037 94c3ffac a67fc561 ffb3d482 f1f4e8b7 10 416f7fc2 7dfbe037 2530ffeb a67fc561 ffb3d482 11 726e4ae7 416f7fc2 df7ef80d 2530ffeb a67fc561 12 c1a839e0 726e4ae7 905bdff0 df7ef80d 2530ffeb 13 e54d04c6 c1a839e0 dc9b92b9 905bdff0 df7ef80d 14 153ba8f5 e54d04c6 306a0e78 dc9b92b9 905bdff0 15 c0d64826 153ba8f5 b9534131 306a0e78 dc9b92b9 16 147d405a c0d64826 454eea3d b9534131 306a0e78 17 e461e222 147d405a b0359209 454eea3d b9534131 18 ce1df559 e461e222 851f5016 b0359209 454eea3d 19 5e5b3fb3 ce1df559 b9187888 851f5016 b0359209 20 4b677210 5e5b3fb3 73877d56 b9187888 851f5016 21 4bf71505 4b677210 d796cfec 73877d56 b9187888 22 49e94c71 4bf71505 12d9dc84 d796cfec 73877d56 23 dcf8b25b 49e94c71 52fdc541 12d9dc84 d796cfec 24 93d5228f dcf8b25b 527a531c 52fdc541 12d9dc84 25 6125fb29 93d5228f f73e2c96 527a531c 52fdc541 26 8f908be0 6125fb29 e4f548a3 f73e2c96 527a531c 27 4a77363c 8f908be0 58497eca e4f548a3 f73e2c96 28 ff239bb2 4a77363c 23e422f8 58497eca e4f548a3 29 ef95685b ff239bb2 129dcd8f 23e422f8 58497eca 30 54ae55d3 ef95685b bfc8e6ec 129dcd8f 23e422f8 31 a621f9ba 54ae55d3 fbe55a16 bfc8e6ec 129dcd8f 32 2f1abebd a621f9ba d52b9574 fbe55a16 bfc8e6ec 33 80e6475c 2f1abebd a9887e6e d52b9574 fbe55a16 34 d0a4b9ef 80e6475c 4bc6afaf a9887e6e d52b9574 35 534c5fb3 d0a4b9ef 203991d7 4bc6afaf a9887e6e 36 99d06604 534c5fb3 f4292e7b 203991d7 4bc6afaf 37 8dd54d7c 99d06604 d4d317ec f4292e7b 203991d7 38 8a19d0aa 8dd54d7c 26741981 d4d317ec f4292e7b 39 a6faa511 8a19d0aa 2375535f 26741981 d4d317ec 40 a5b11b6f a6faa511 a286742a 2375535f 26741981 41 abee3f71 a5b11b6f 69bea944 a286742a 2375535f 42 f37f064a abee3f71 e96c46db 
69bea944 a286742a 43 98bcd7e3 f37f064a 6afb8fdc e96c46db 69bea944 44 336146e9 98bcd7e3 bcdfc192 6afb8fdc e96c46db 45 bbb96c36 336146e9 e62f35f8 bcdfc192 6afb8fdc 46 63fee1b4 bbb96c36 4cd851ba e62f35f8 bcdfc192 47 8c97f438 63fee1b4 aeee5b0d 4cd851ba e62f35f8 48 8fe05a9d 8c97f438 18ffb86d aeee5b0d 4cd851ba 49 1ee99f6f 8fe05a9d 2325fd0e 18ffb86d aeee5b0d 50 dc3c9013 1ee99f6f 63f816a7 2325fd0e 18ffb86d 51 8bd58e52 dc3c9013 c7ba67db 63f816a7 2325fd0e 52 9f10dbc8 8bd58e52 f70f2404 c7ba67db 63f816a7 53 13ef5790 9f10dbc8 a2f56394 f70f2404 c7ba67db 54 0553cd66 13ef5790 27c436f2 a2f56394 f70f2404 55 1d628100 0553cd66 04fbd5e4 27c436f2 a2f56394 56 c69503f9 1d628100 8154f359 04fbd5e4 27c436f2 57 3023c438 c69503f9 0758a040 8154f359 04fbd5e4 58 f76cb313 3023c438 71a540fe 0758a040 8154f359 59 c16ec1d8 f76cb313 0c08f10e 71a540fe 0758a040 60 8b3af4c3 c16ec1d8 fddb2cc4 0c08f10e 71a540fe 61 ad75ee15 8b3af4c3 305bb076 fddb2cc4 0c08f10e 62 6bb9d4b5 ad75ee15 e2cebd30 305bb076 fddb2cc4 63 06feacd5 6bb9d4b5 6b5d7b85 e2cebd30 305bb076 64 e9d9ae9d 06feacd5 5aee752d 6b5d7b85 e2cebd30 65 9e186b3b e9d9ae9d 41bfab35 5aee752d 6b5d7b85 66 0a14db72 9e186b3b 7a766ba7 41bfab35 5aee752d 67 ee4036c4 0a14db72 e7861ace 7a766ba7 41bfab35 68 c74854f8 ee4036c4 828536dc e7861ace 7a766ba7 69 a94fc71d c74854f8 3b900db1 828536dc e7861ace 70 232c8bd9 a94fc71d 31d2153e 3b900db1 828536dc 71 40759dc0 232c8bd9 6a53f1c7 31d2153e 3b900db1 72 5f2a875e 40759dc0 48cb22f6 6a53f1c7 31d2153e 73 506a906f 5f2a875e 101d6770 48cb22f6 6a53f1c7 74 9067de61 506a906f 97caa1d7 101d6770 48cb22f6 75 ec68629c 9067de61 d41aa41b 97caa1d7 101d6770 76 db41cbf7 ec68629c 6419f798 d41aa41b 97caa1d7 77 784e0231 db41cbf7 3b1a18a7 6419f798 d41aa41b 78 49c4c137 784e0231 f6d072fd 3b1a18a7 6419f798 79 b6435ac0 49c4c137 5e13808c f6d072fd 3b1a18a7 1d887dc1 39926cc0 f6ce5d8a 0702c773 feecfa97 NIST ASCII “abc” Test Case 0 0116fc33 67452301 7bf36ae2 98badcfe 10325476 1 8990536d 0116fc33 59d148c0 7bf36ae2 98badcfe 2 a1390f08 8990536d c045bf0c 59d148c0 
7bf36ae2 3 cdd8e11b a1390f08 626414db c045bf0c 59d148c0 4 cfd499de cdd8e11b 284e43c2 626414db c045bf0c 5 3fc7ca40 cfd499de f3763846 284e43c2 626414db 6 993e30c1 3fc7ca40 b3f52677 f3763846 284e43c2 7 9e8c07d4 993e30c1 0ff1f290 b3f52677 f3763846 8 4b6ae328 9e8c07d4 664f8c30 0ff1f290 b3f52677 9 8351f929 4b6ae328 27a301f5 664f8c30 0ff1f290 10 fbda9e89 8351f929 12dab8ca 27a301f5 664f8c30 11 63188fe4 fbda9e89 60d47e4a 12dab8ca 27a301f5 12 4607b664 63188fe4 7ef6a7a2 60d47e4a 12dab8ca 13 9128f695 4607b664 18c623f9 7ef6a7a2 60d47e4a 14 196bee77 9128f695 1181ed99 18c623f9 7ef6a7a2 15 20bdd62f 196bee77 644a3da5 1181ed99 18c623f9 16 4e925823 20bdd62f c65afb9d 644a3da5 1181ed99 17 82aa6728 4e925823 c82f758b c65afb9d 644a3da5 18 dc64901d 82aa6728 d3a49608 c82f758b c65afb9d 19 fd9e1d7d dc64901d 20aa99ca d3a49608 c82f758b 20 1a37b0ca fd9e1d7d 77192407 20aa99ca d3a49608 21 33a23bfc 1a37b0ca 7f67875f 77192407 20aa99ca 22 21283486 33a23bfc 868dec32 7f67875f 77192407 23 d541f12d 21283486 0ce88eff 868dec32 7f67875f 24 c7567dc6 d541f12d 884a0d21 0ce88eff 868dec32 25 48413ba4 c7567dc6 75507c4b 884a0d21 0ce88eff 26 be35fbd5 48413ba4 b1d59f71 75507c4b 884a0d21 27 4aa84d97 be35fbd5 12104ee9 b1d59f71 75507c4b 28 8370b52e 4aa84d97 6f8d7ef5 12104ee9 b1d59f71 29 c5fbaf5d 8370b52e d2aa1365 6f8d7ef5 12104ee9 30 1267b407 c5fbaf5d a0dc2d4b d2aa1365 6f8d7ef5 31 3b845d33 1267b407 717eebd7 a0dc2d4b d2aa1365 32 046faa0a 3b845d33 c499ed01 717eebd7 a0dc2d4b 33 2c0ebc11 046faa0a cee1174c c499ed01 717eebd7 34 21796ad4 2c0ebc11 811bea82 cee1174c c499ed01 35 dcbbb0cb 21796ad4 4b03af04 811bea82 cee1174c 36 0f511fd8 dcbbb0cb 085e5ab5 4b03af04 811bea82 37 dc63973f 0f511fd8 f72eec32 085e5ab5 4b03af04 38 4c986405 dc63973f 03d447f6 f72eec32 085e5ab5 39 32de1cba 4c986405 f718e5cf 03d447f6 f72eec32 40 fc87dedf 32de1cba 53261901 f718e5cf 03d447f6 41 970a0d5c fc87dedf 8cb7872e 53261901 f718e5cf 42 7f193dc5 970a0d5c ff21f7b7 8cb7872e 53261901 43 ee1b1aaf 7f193dc5 25c28357 ff21f7b7 8cb7872e 44 40f28e09 ee1b1aaf 5fc64f71 
25c28357 ff21f7b7 45 1c51e1f2 40f28e09 fb86c6ab 5fc64f71 25c28357 46 a01b846c 1c51e1f2 503ca382 fb86c6ab 5fc64f71 47 bead02ca a01b846c 8714787c 503ca382 fb86c6ab 48 baf39337 bead02ca 2806e11b 8714787c 503ca382 49 120731c5 baf39337 afab40b2 2806e11b 8714787c 50 641db2ce 120731c5 eebce4cd afab40b2 2806e11b 51 3847ad66 641db2ce 4481cc71 eebce4cd afab40b2 52 e490436d 3847ad66 99076cb3 4481cc71 eebce4cd 53 27e9f1d8 e490436d 8e11eb59 99076cb3 4481cc71 54 7b71f76d 27e9f1d8 792410db 8e11eb59 99076cb3 55 5e6456af 7b71f76d 09fa7c76 792410db 8e11eb59 56 c846093f 5e6456af 5edc7ddb 09fa7c76 792410db 57 d262ff50 c846093f d79915ab 5edc7ddb 09fa7c76 58 09d785fd d262ff50 f211824f d79915ab 5edc7ddb 59 3f52de5a 09d785fd 3498bfd4 f211824f d79915ab 60 d756c147 3f52de5a 4275e17f 3498bfd4 f211824f 61 548c9cb2 d756c147 8fd4b796 4275e17f 3498bfd4 62 b66c020b 548c9cb2 f5d5b051 8fd4b796 4275e17f 63 6b61c9e1 b66c020b 9523272c f5d5b051 8fd4b796 64 19dfa7ac 6b61c9e1 ed9b0082 9523272c f5d5b051 65 101655f9 19dfa7ac 5ad87278 ed9b0082 9523272c 66 0c3df2b4 101655f9 0677e9eb 5ad87278 ed9b0082 67 78dd4d2b 0c3df2b4 4405957e 0677e9eb 5ad87278 68 497093c0 78dd4d2b 030f7cad 4405957e 0677e9eb 69 3f2588c2 497093c0 de37534a 030f7cad 4405957e 70 c199f8c7 3f2588c2 125c24f0 de37534a 030f7cad 71 39859de7 c199f8c7 8fc96230 125c24f0 de37534a 72 edb42de4 39859de7 f0667e31 8fc96230 125c24f0 73 11793f6f edb42de4 ce616779 f0667e31 8fc96230 74 5ee76897 11793f6f 3b6d0b79 ce616779 f0667e31 75 63f7dab7 5ee76897 c45e4fdb 3b6d0b79 ce616779 76 a079b7d9 63f7dab7 d7b9da25 c45e4fdb 3b6d0b79 77 860d21cc a079b7d9 d8fdf6ad d7b9da25 c45e4fdb 78 5738d5e1 860d21cc 681e6df6 d8fdf6ad d7b9da25 79 42541b35 5738d5e1 21834873 681e6df6 d8fdf6ad a9993e36 4706816a ba3e2571 7850c26c 9cd0d89d Michael Silk March 1, 2005 11:35 PM Yen, did you note the part of the paper that suggests: “[…] padding rules were not applied to the messages.” I am wondering how this can be applied to highly structured messages such as validated XML. 
Example: assuming you manipulate an HTML document and add an appendix after the closing html tag, no browser will worry about displaying the gibberish; it will only display the manipulated page. The security mechanisms will tell the user everything is OK, since they consider the whole document. Now, if you take a validated XML document and an XML Signature (according to the W3C specs), as far as I know the signature ignores extra whitespace, and any gibberish that does not obey the XML structure will render the document invalid. Does this justify the conclusion that highly structured content such as validated XML is more robust against this kind of collision attack?

What about composing a message M1 and a large random string C1 and transmitting the following data:

M1, C1, SHA1(M1), SHA1(C1), SHA1(M1+C1)

Try to crack this by making a forged “valid” message like:

M2, C2, SHA1(M2), SHA1(C2), SHA1(M2+C2)

This is the same as solving problem (e) for M and problem (f) for C, while at the same time making the SHA1 of the concatenation M+C in the forged situation equal to the SHA1 of the original M+C. That last SHA1 means the solutions common to the first, second, and third problems form a very meager collection of possibilities, although a brute-force attack might still find one. If such a combination of criteria makes it difficult to find a solution, the combination can be expanded with more random data strings, more SHA1s, and more SHA1-ed concatenations. I’m not a cryptographer, but does this make forging attempts much more difficult?

Mann, what are you doing?

@Wadim You don’t understand the idea of my proposal. I’ll explain: this is a situation of coupled equations.
The important thing is not that plain SHA1s are transmitted, but that an RSA-encrypted version (using the private key) of the SHA1s is transmitted: digital signatures. Therefore the SHA1 values cannot simply be replaced by other values. The original message is:

M1, RSAENC(SHA1(M1))
C1, RSAENC(SHA1(C1))
RSAENC(SHA1(M1+C1))

To produce a forged message with the same digital signature:

M2, RSAENC(SHA1(M1))
C2, RSAENC(SHA1(C1))
RSAENC(SHA1(M1+C1))

To achieve this, the following system of coupled equations needs to be solved:

1) SHA1(M1) = SHA1(M2)
2) SHA1(C1) = SHA1(C2)
3) SHA1(M1+C1) = SHA1(M2+C2)

If equation 1 can be solved, the same effort or more is needed to solve equation 2. The difficulty is in the third equation, which couples equations 1 and 2. Even if finding collisions for equations 1 and 2 is easy, it is still difficult to solve equation 3. Assume the number of possible collisions equals 2^80 for a 160-bit SHA1. To solve equation 3, all possible solutions of equation 1 (2^80) and equation 2 (also 2^80) need to be combined and tried out. That is the same number of possibilities (2^80 * 2^80 = 2^160) as a brute-force attack would need. To make it even more difficult, additional random info can be added:

M1, RSAENC(SHA1(M1))
C1, RSAENC(SHA1(C1))
D1, RSAENC(SHA1(D1))
RSAENC(SHA1(M1+C1))
RSAENC(SHA1(M1+D1))
RSAENC(SHA1(C1+D1))

To make a forged message, the following equations need to be solved:

1) SHA1(M1) = SHA1(M2)
2) SHA1(C1) = SHA1(C2)
3) SHA1(D1) = SHA1(D2)
4) SHA1(M1+C1) = SHA1(M2+C2)
5) SHA1(M1+D1) = SHA1(M2+D2)
6) SHA1(C1+D1) = SHA1(C2+D2)

The number of possibilities that need to be tried to solve these equations is 2^80 * 2^80 * 2^80 = 2^240.

Wadim, please supply a proof or counterexample that the above reasoning is incorrect.

Addition to previous post: if advanced algorithms are developed that break SHA1 in 2^50 calculations, these algorithms will not give all 2^80 solutions but only one or a few.
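As a concrete illustration of the coupled-commitment idea proposed above (a sketch only; the message and random string are made-up example values, and real use would sign each digest with RSA as described):

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    """Hex digest of SHA-1 over the given bytes."""
    return hashlib.sha1(data).hexdigest()

# Hypothetical message and random coupling string for the example.
m1 = b"Pay Alice $100"
c1 = b"\x13\x37" * 16  # stands in for a large random string

# The three coupled commitments from the proposal:
# SHA1(M1), SHA1(C1), and SHA1(M1+C1).
h_m = sha1_hex(m1)
h_c = sha1_hex(c1)
h_mc = sha1_hex(m1 + c1)

# A forger must find M2, C2 satisfying all three equations at once:
#   SHA1(M2) == SHA1(M1), SHA1(C2) == SHA1(C1),
#   and SHA1(M2+C2) == SHA1(M1+C1).
# The third hash is what couples the first two problems together.
print(h_m, h_c, h_mc)
```

The verifier recomputes the same three digests over the received M and C and compares them against the signed values.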
If such an algorithm is applied to equations 1 and 2 of the simple case described above, this takes 2^51 calculations. The combination of solutions used to solve equation 3 then has a chance of 1 / 2^80 of being correct. Even if a very advanced algorithm (fed with a parameter to obtain a different solution) required only one calculation per solution of equation 1, all 2^80 * 2^80 possibilities would still need to be tried to find the correct solution for equation 3.

I Googled Whirlpool NESSIE and found the documentation/specifications.

EeCee November 9, 2007 8:14 AM

It’s now almost three years later. Has anything happened on the topic of replacing SHA-1? I have been reviewing a design document written in 2004 with references to the use of SHA-1, and I was wondering if there is an alternative to using it. Or are SHA-224, SHA-256, etc. the only real alternatives?

Armando Leotta November 22, 2007 9:35 AM

Something happened to my previous post, so I re-apply a digest 🙂 Anyway, NIST is currently running a public competition. Take a look at NIST’s website.

A. Sure.. that’s not true, people.

hashim kareem February 22, 2008 5:41 AM

The SHA algorithm and any variant are breakable in real time using the Hashim algorithm, in which the attack lies in key extension of the rounds, not in the exact key.

Well, it’s shocking to learn this, since I moved from MD5 to SHA-1. Now you tell us that SHA-1 is not secure anymore. So what hash algorithm should we use?

@Amri Shodig: There are several widely published and reviewed cryptographic hash functions with no known effective attacks. Of these, the oldest and most thoroughly studied is probably RIPEMD-160, which was published 13 years ago and has been adopted as a standard by several international organisations. Another good example in this class is Whirlpool, which has a much larger output than RIPEMD-160 and has also been studied for a fair while now, although not as long.
There are many more which are currently unbroken, but about which there may be some concern regarding their long-term survival. This includes Tiger and the whole SHA family later than SHA-1. I would be happy to use any of these at present but would be wary of embedding them into a long-lived system that was difficult to upgrade.

Jim Russell January 16, 2012 12:48 AM

Assume the user enters DENVER BRONCOS, which I hash once. Then I hash the hash, and repeat that 9 more times. Am I correct that it will take 9 times 2^80 to find a comparable string that yields the same hash? If not, why not? If I am correct, then all a programmer has to do is repeat the hashing multiple times to gain the security desired, so what’s the big problem? For use with passwords, where the user enters a plaintext pass, I store the hash of the input, do the same hashing when the user signs in, and compare it against the hashed value in the database (a common approach). Since the program does the hashing, it is no burden to the user, other than a little more time to wait for confirmation.

If the above is true, what is the big concern about the supposed weakness of SHA-1? The solution appears trivial to implement to whatever level of security one desires, i.e., just do it multiple times. What am I missing?

Jim Russell January 16, 2012 1:06 AM

One problem that seems to be overlooked is the question: HOW DO YOU KNOW YOU’VE DECODED THE STRING? Apparently the unstated assumption is that everyone is using English. The second unstated assumption is that the first time you get clear text that makes sense, you’ve successfully decoded the string. Say I get a decode of NICE DAY when I was trying to decode the input string of DENVER BRONCOS. How do I know that NICE DAY isn’t the clear-text input? Are we getting so technically focused that common sense has been lost? Decrypting anything is a real beast of a problem.
Even if I had a bank of supercomputers with huge amounts of CPU power, actually decrypting anything encrypted with anything that does not simply substitute the same letter for the same input is not a trivial pursuit. Just running a decrypting program through a big bank of supercomputers, or their equivalent in parallel desktop computers, would cost about $10,000 an hour, and it could take days, months, or years to get something you think is clear text.

By that time, the information is probably stale, i.e., no longer useful. Battlefield conditions, for example, aren’t much use two weeks later, to anyone. How about your competitor’s sales reports from 5 years ago? Is that worth a million dollars to get?

So let’s put a dose of common sense into all of this discussion. Or perhaps the information should be entitled: FOR CRYPTO GEEKS ONLY!
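The repeated hashing Jim Russell asks about (hash the input, then re-hash the digest, ten hashes in all) can be sketched as follows; the function name and round count are illustrative only:

```python
import hashlib

def iterated_sha1(message: bytes, rounds: int = 10) -> str:
    """Hash the message once, then re-hash the digest rounds-1 more times."""
    digest = hashlib.sha1(message).digest()
    for _ in range(rounds - 1):
        digest = hashlib.sha1(digest).digest()
    return digest.hex()

stored = iterated_sha1(b"DENVER BRONCOS")   # saved at registration

# At sign-in, repeat the same procedure on the entered password and compare.
attempt = iterated_sha1(b"DENVER BRONCOS")
print(attempt == stored)
```

Note, though, that a collision in the first application (two inputs x and y with SHA1(x) == SHA1(y)) survives every subsequent round unchanged, so iteration slows down each individual guess but does not multiply the collision-finding cost by the number of rounds.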

Jim Russell January 16, 2012 1:14 AM

Since the length of the password is a major contributor to the time required to reverse-engineer a hash or an encryption, why shouldn’t I programmatically increase the size of the user-input password to, say, 256 characters, padding the end from a fixed string (so I can decrypt it)? Wouldn’t that make the translation much more time-consuming than just, say, DENVER BRONCOS, which is only 14 characters long?

Again an easy programming fix.
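The padding idea described above can be sketched as follows (the pad constant and target length are made-up example values, not anything from the original comment):

```python
import hashlib

# Hypothetical fixed pad string -- any agreed-upon public constant.
PAD = b"0123456789abcdef" * 16  # 256 bytes of filler

def hash_padded(password: bytes, length: int = 256) -> str:
    """Extend the password to a fixed length with PAD, then hash it."""
    padded = (password + PAD)[:length]
    return hashlib.sha1(padded).hexdigest()

print(hash_padded(b"DENVER BRONCOS"))
```

Because the pad is fixed and public, it adds no secret material: an attacker guessing passwords simply applies the same padding before hashing, so each guess still costs a single hash computation.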

venkat August 8, 2013 1:13 AM

Dear sir,

please prove that the probability of the meet-in-the-middle attack is $p \sim 1 - \frac{1}{e^{q_1 q_2 / 2^{160}}}$?
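The probability venkat asks about follows from a standard birthday-style argument: with $q_1$ queries on one side and $q_2$ on the other of a 160-bit space, each of the $q_1 q_2$ cross pairs matches independently with probability $2^{-160}$ (under the usual uniformity assumption), so

```latex
% Probability that at least one of the q1*q2 pairs collides,
% modelling the outputs as independent uniform 160-bit values:
\begin{align*}
p &= 1 - \left(1 - 2^{-160}\right)^{q_1 q_2} \\
  &\approx 1 - e^{-q_1 q_2 / 2^{160}}
  \qquad \text{using } (1-x)^n \approx e^{-nx} \text{ for small } x.
\end{align*}
```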

Saint Crusty November 13, 2013 11:37 AM

So I dare, since this is an old post and hey, what better than to be proven wrong or foolish. Here’s some guesswork.

Would detecting collisions benefit from using, alongside a specific fixed-length plaintext (which never changes), “random” input such as the digits of Pi truncated to the same length, or controlled pseudo-random sequences, as plaintext?

My guess is this may “reveal” the plausibility of collisions by comparing sufficient sets of data and filtering out any “patterns” which could not be explained except by the algorithm(s) themselves.

Having been in the field without reading any of your books, I didn’t feel so good after noticing that.

Wiki confirms that your evangelism message was well heard, check this:

• In 2005, cryptanalysts found attacks on SHA-1 suggesting that the algorithm might not be secure enough for ongoing use. NIST required many applications in federal agencies to move to SHA-2 after 2010 because of the weakness. Although no successful attacks have yet been reported on SHA-2, it is algorithmically similar to SHA-1. In 2012, following a long-running competition, NIST selected an additional algorithm, Keccak, for standardization under SHA-3.
• In November 2013 Microsoft announced their deprecation policy on SHA-1 according to which Windows will stop accepting SHA-1 certificates in SSL by 2017.
• In September 2014 Google announced their deprecation policy on SHA-1, according to which Chrome will stop accepting SHA-1 certificates in SSL in a phased way by 2017. Mozilla is also planning to stop accepting SHA-1-based SSL certificates by 2017.

Schneier, you’re awesome, and I will be learning much from you when my Amazon order of Applied Cryptography: Protocols, Algorithms and Source Code in C arrives in 6 weeks. I don’t even know why it takes that long to ship it to Beijing.




It is not only this language; even if other libraries are difficult to use here, the hash function is still widely supported elsewhere, such as in C++ with Crypto++. I think I can do it.


As for cryptographers defining “hard enough” as “so hard that no algorithm is appreciably cheaper than a brute-force search”, it’s really a tribute to their ingenuity that they’ve been able to define anything that even looked as if it would meet this standard, and I think the rest of us should approve of them trying to reach it for as long as they have any hope left. (Though something with a proven lower bound, even one significantly below brute-force level, would be better if anybody could figure out how to do that).





