I found an interesting posting on a blog. Do you have a comment on this?

But it is not a mathematical proof,

because it relies on things we observe in nature.

So there will be no $1M.


IIRC integer factorization is known to NOT be NP-complete; it's just not known if it's in P. It may be in NP - P - NP-complete (that is, not in P and not NP-complete, but still in NP).

Let AES-K be Rijndael with a K-bit key and 128-bit block. Let AES-K-ECB and AES-K-CTR be Electronic Codebook mode and Counter mode, respectively.

For a chosen-plaintext attack on AES-K-ECB, the problem is clearly in P. In fact, the attacker can succeed in time and space linear in the time it takes to do ordinary encryption/decryption with the key.

That's because AES-K-ECB is a simple substitution cipher on an alphabet of 2^128 "letters". So it can be broken with 2^128 chosen plaintexts. And 2^128 is merely O(1), because it doesn't depend on K. The "key" you find is a table with 2^128 entries, which is merely O(1) space. So AES-K-ECB is in P.
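The codebook attack described above can be sketched with a toy 8-bit "block cipher" (a fixed random permutation standing in for AES under an unknown key), so the 2^128 queries shrink to 256 and the attack actually runs. Everything here is illustrative, not real AES:

```python
# Toy illustration of the codebook attack on ECB mode: with one chosen
# plaintext per block value, the attacker recovers a complete
# decryption table without ever learning the key. An 8-bit block
# (256 values) stands in for the 128-bit block (2^128 values).
import random

random.seed(1)
BLOCK_VALUES = 256  # 2^8 instead of 2^128, so the attack is visible

# Secret permutation standing in for AES-K-ECB under an unknown key.
_perm = list(range(BLOCK_VALUES))
random.shuffle(_perm)

def encrypt_block(p: int) -> int:
    """Encryption oracle the chosen-plaintext attacker may query."""
    return _perm[p]

# The attack: query every plaintext once, building the codebook.
codebook = {encrypt_block(p): p for p in range(BLOCK_VALUES)}

def decrypt_block(c: int) -> int:
    """Decryption using only the recovered table -- no key needed."""
    return codebook[c]

# Any ciphertext now decrypts correctly.
msg = [17, 42, 99]
ct = [encrypt_block(p) for p in msg]
assert [decrypt_block(c) for c in ct] == msg
```

The recovered "key" is the table itself, which is exactly the point of the argument: its size depends on the block size, not on K.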

If you prefer counter mode, AES-K-CTR uses a counter the size of the block, so the counter wraps around every 2^128 blocks; with 2^128 chosen plaintext blocks you recover the entire keystream and completely break it. So again, AES is in P.

This is a great illustration of how the P ?= NP question truly isn't relevant for cryptography. Cryptographers want to know whether AES-256-CTR is secure. Many people bet it is. But AES-K-CTR can be broken in polynomial time, and so is in P. And AES-256-CTR isn't in any complexity class, because classes like P and NP need a "size" that goes to infinity.

So from a complexity theory viewpoint, AES is no different from the decoder ring you get in a cereal box.

That's not true. Even if P=NP, there may still be no algorithm for breaking AES-256 other than brute force.

If P=NP, then there's a polytime algorithm for breaking AES-K, for keys of size K, that's polynomial in the limit as K goes to infinity. But it's possible that this magic algorithm works by reducing the AES-K problem to a small set of AES-256 problems, breaking each of those subproblems by brute force, then combining their answers to get the answer to the original AES-K problem. Such an algorithm tells us nothing new about AES-256.

So even if P=NP, it's still possible that AES-256 has no breaks other than brute force. It's even possible that AES-trillion has no breaks other than brute force. All P=NP would tell us is that we'll have something other than brute force for key lengths above SOME unknown threshold.

[p.s. Yes, I know AES-K technically doesn't exist for large K; obviously I mean Rijndael-K for any K for which AES-K isn't defined]

We trust AES-256 because of two things. First, we know of no cracks (this is admittedly speculative), and therefore the only way anybody's come up with to break it is brute force. Second, it physically cannot be brute-forced with the resources of only one galaxy. Even if some quantum trickery halves the effective key length, it's still impossible to brute-force a 128-bit key with the resources of only one solar system.

If P=NP, we know there's a way to crack it without the need for brute force. The standard way (based on the proof) may be infeasible in the same way as brute force, but now we know that we don't need brute force. It's like a crack reducing complexity to something like 2^192; it reduces our confidence in the cipher.

http://rjlipton.wordpress.com/2010/08/12/fatal-flaws-in-deolalikars-proof/#comments

Another way to look at P?=NP and cryptography:

If P != NP, cheap cryptography can always outrun resourceful brute-force decryption: the cost of breaking rises exponentially while the cost of encrypting/decrypting rises only polynomially.

If P=NP, a resourceful opponent can always outrun a poor encrypter

E.g., often adding 1 bit increases en/decryption costs from n^K to (n+1)^K while it doubles the cost of the attack. If n > K, you quickly gain an advantage over the attacker.

Absolute costs do matter, but it is the message sender that selects the method. She can always look for a cryptographic system that has K in her advantage.
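The asymmetry in the argument above can be made concrete with a quick back-of-the-envelope calculation. The cost model (n^K steps to encrypt, 2^n to brute-force, with an assumed degree K = 3) is the one the comment posits, not a measurement of any real cipher:

```python
# Rough numeric illustration: each extra key bit barely raises the
# defender's polynomial cost but exactly doubles the attacker's
# exponential cost. K = 3 is an assumed polynomial degree.
K = 3  # assumed degree of the encryption-cost polynomial

for n in (128, 129, 256):
    defender = n ** K
    print(f"n={n}: encrypt ~ {defender:,} steps, brute force ~ 2^{n} steps")

# One extra bit: defender cost grows by (129/128)^3, about 2.4%,
# while attacker cost grows by a factor of exactly 2.
assert 129 ** K / 128 ** K < 1.03
assert (2 ** 129) // (2 ** 128) == 2
```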

Cryptography is based on encryption being easy, decryption with the key being easy, and decryption without the key being hard. Here 'easy' means that it can be done without too much difficulty on actual machines we have developed and 'hard' means that it cannot be done at all, for practical purposes, without performing more calculations than we expect anyone to be able to perform.

The P ?= NP problem is about whether problems are 'easy' or 'hard'. Problems that are 'easy' may actually be impossible to do for practical purposes, requiring more calculations than can be done if every particle in the universe were a computer doing a trillion calculations per second for the suspected age of the universe. Problems that are 'hard' may actually be trivial to do, requiring only a few calculations.

An algorithm that breaks all known cryptographic algorithms in polynomial time (thus 'easy') would have no effect on the security of known cryptographic algorithms if the number of calculations it required to break actual key sizes in use was, say, 10^100.

Nope, that's not proven either. We do know that factoring is in NP ∩ co-NP, so it'd be a big surprise if it's NP-complete, since that'd show that NP = co-NP, but we just don't know either way.
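The NP ∩ co-NP claim can be sketched concretely for the decision version of factoring, "does N have a nontrivial factor smaller than k?". A YES certificate is such a factor; a NO certificate is N's complete prime factorization, since primality itself is checkable in polynomial time (AKS). The trial-division primality test below is a toy stand-in for a real polynomial-time test:

```python
# Certificates for both answers to "does N have a factor below k?",
# each verifiable quickly -- the shape of an NP cap co-NP problem.

def is_prime(p: int) -> bool:
    """Toy stand-in for a polynomial-time primality test (e.g. AKS)."""
    if p < 2:
        return False
    return all(p % d for d in range(2, int(p ** 0.5) + 1))

def verify_yes(N: int, k: int, factor: int) -> bool:
    """YES certificate: a nontrivial factor of N below k."""
    return 1 < factor < k and N % factor == 0

def verify_no(N: int, k: int, primes: list) -> bool:
    """NO certificate: N's full prime factorization, all primes >= k."""
    product = 1
    for p in primes:
        if not is_prime(p) or p < k:
            return False
        product *= p
    return product == N

# 91 = 7 * 13: it has a factor below 10 ...
assert verify_yes(91, 10, 7)
# ... but no factor below 7, certified by its complete factorization.
assert verify_no(91, 7, [7, 13])
```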

`If P=NP, you could still have a cipher that decrypts in linear time with the key and n^1000 time without the key. So it's breakable in polynomial time, yet cryptographically secure.

If P != NP, then you could still have an NP-complete cipher that's breakable in n^(1+n/1000) time. That's cryptographically insecure, even though it takes more than polynomial time to break. [...] Plus, these classes deal with worst-case times, while crypto deals with average times. So the P ?= NP question truly is irrelevant to cryptography.'

Complexity theory and cryptography are related, and P ?= NP is relevant but a long way from being decisive. As I mentioned earlier, P /= NP is /necessary/ but not /sufficient/ for computationally-secure cryptography in the sense usually used in current cryptographic research. For that, we need (at least) one-way functions, where /random/ instances are hard, not just worst-case instances -- and they must be hard in the sense that randomized polynomial-time machines have only negligible probability of guessing a right answer. So, in particular, we need that BPP /= NP, but even that isn't sufficient. (Here, `negligible' is a technical term, and means `less than any polynomial function, for sufficiently large problems'.)

I don't think your objection is well-considered. The nice thing about thinking about cryptography in terms of polynomial-time machines and negligible functions is that the classes are closed under (randomized) polynomial-time reductions, i.e., reductions in BPP.

It's been shown (see Oded Goldreich's `Foundations of Cryptography' for the distressingly turgid details) that you can build a pseudorandom permutation (crypto-research jargon for `block cipher') out of a one-way function with the following security property: if any algorithm can distinguish the pseudorandom permutation from a really random permutation, with non-negligible `advantage', then you can, using only polynomially more effort, invert the one-way function, again with non-negligible advantage.

These kinds of results pervade modern cryptography. Some of them, like the one I've just described, actually have rather inefficient reductions -- the reductions take quite a lot of effort (the polynomials have high degree) and the probability drops off quite sharply. It's all held together by the magic of the asymptotics: decide how much adversarial disadvantage you want, and choose a security parameter that's big enough. The asymptotics guarantee that it exists. And that's why the polynomial/not-polynomial distinction is important. If your one-way function (say) started with an O(n^1000) adversarial disadvantage, an inefficient reduction from your final protocol might fritter it all away, and you end up with no security at all at the end. But if it has an n^(1 + O(n)) reduction, then you'll /always/ be able to choose n big enough to put any amount of clear air you like between you and the adversary. Of course, you may not like the size of the system you end up with. But at least it's secure.

Currently, though, this is all built on sand. One-way functions might not exist at all. Separating P and NP is a really good first step, though.

`Plus, these classes deal with asymptotic key sizes and block sizes (the limit as the size goes to infinity), while crypto deals with specific, small sizes.'

Yes, this is my objection to all of this. The asymptotic approach I've described above is great for theoretical results: you can build block ciphers, symmetric encryption schemes, message authentication codes, and much cleverer things, all out of one-way functions and sellotape. But actually the things we start from in symmetric cryptography are block ciphers like AES or stream ciphers like Salsa20, and they have an inconvenient property: they're /fixed/. AES has a 128-bit block size and a 256-bit key, whether you like it or not. There just isn't a scaled-up version with a 3-million-bit key. So, for practical purposes, asymptotic security results are close to worthless.

Thanks to work started by Bellare, Kilian and Rogaway, we have a rich body of cryptography literature providing concrete reductions which work for fixed-sized things that we've actually got, like AES and SHA-256. This vein of research is also built on sand. And whether P = NP really isn't relevant here, because it's an asymptotic statement. What we really want to know is: can we build a function which takes at most n steps to compute but /provably/ takes at least N >> n steps to invert with probability better than (some tiny) epsilon? Can we do something similar for block ciphers, or stream ciphers?

On the other hand, currently, we seem to suck hopelessly at determining lower bounds on computational complexity. Life is easier if we try to think about asymptotic complexity classes, because the asymptotics hide an enormous number of details about performance models, but we suck at separating those too. Dealing with individual, specific, concrete problems requires more realistic (and detailed) computational performance models, and that's going to be really hard.

Showing that P /= NP is a small step. But it's a step in the /right/ direction, so we should be grateful for that when it happens.

(Sorry this was a bit long.)

If P=NP, you could still have a cipher that decrypts in linear time with the key and n^1000 time without the key. So it's breakable in polynomial time, yet cryptographically secure.

If P != NP, then you could still have an NP-complete cipher that's breakable in n^(1+n/1000) time. That's cryptographically insecure, even though it takes more than polynomial time to break.

Plus, these classes deal with asymptotic key sizes and block sizes (the limit as the size goes to infinity), while crypto deals with specific, small sizes.

Plus, these classes deal with worst-case times, while crypto deals with average times.

Plus, the common crypto algorithms aren't even NP-complete, so proving NP is harder than P still doesn't tell whether they are in P or not.

So the P ?= NP question truly is irrelevant to cryptography.

Did you mean: "If factoring is in NPC, and if P!=NP, then factoring is secure forever against Turing machines." (?)

Two things:

1. RSA was never proved to be equivalent to factoring; it's considered to be easier (factoring solves RSA trivially, but not vice versa).

2. The consensus is that factoring is not NP-complete, because primality is proven to be in P, and these problems are closely related. To be even more precise, almost all problems used in cryptography (RSA, DLOG, etc.) are thought not to be NP-complete. ZK is a clear exception.
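Point 1 above, that factoring solves RSA trivially, can be sketched with textbook RSA and toy numbers. No padding, purely to show the direction of the reduction:

```python
# Anyone who can factor the RSA modulus recovers the private key
# immediately: knowing p and q gives phi(n), and inverting e mod
# phi(n) yields the private exponent d. Toy textbook-RSA numbers.
from math import gcd

p, q = 61, 53          # the secret factors an attacker would recover
n = p * q              # public modulus, 3233
e = 17                 # public exponent
phi = (p - 1) * (q - 1)
assert gcd(e, phi) == 1
d = pow(e, -1, phi)    # private exponent, computable once p, q known

m = 42
c = pow(m, e, n)           # encrypt with the public key
assert pow(c, d, n) == m   # "attacker" decrypts using the derived d
```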

Yeah, I know, but I was trying to avoid some levels of formalism in favor of making the crux of the question easier to understand for all. When I said that we're not sure it's in NP, I meant of course strictly in NP and not in P. That's why I had my "shut up nerds" clause in there ;)

As for a proof of P = NP needing to be constructive, I think you might be right. Of course the most direct proof of any sort would be, like, if you could find some solution in (deterministic) polynomial time for an NP-complete problem; then of course you could use that to solve SAT, and all the rest follow really easily. That said, even if they had a solution for SAT in deterministic polynomial time tomorrow, you'd still probably have a bit of lead time before AES was done (hopefully because it would be some polynomial-time algorithm that is still of a very large order of complexity).

What isn't known is:

* If factoring is in P. This is also somewhat obvious, though, since if we knew that it wasn't in P, then that would constitute a proof that P != NP, and it wouldn't be an open problem anymore ;)

* If factoring is NP-complete. Which means that even if P != NP, RSA still might not be safe.

Also, I believe it has been shown that any proof that P = NP must be constructive - meaning that if it is ever proven, the proof will necessarily show a way to create a polynomial time solution to any NP-complete problem.

Can I get away with "fortuitous accident"?

If not I'll go for "the right not to" make my ears go red ;)

I may have been wrong above when defending my definition of NP-Hard, apparently there's huge confusion in the definition of the term. The definition used by my supervisors, textbooks and classes restricted it to decision problems, and defined NP-complete as the intersection of NP and NP-hard.

Apparently other books and researchers tend to use the term more loosely, including optimization problems and the like. The Wikipedia discussion page on NP-Hard is a turf war, with the current status putting my definition as an "alternative". Anyone else run into this inconsistency?

http://rjlipton.wordpress.com/2010/08/09/issues-in-the-proof-that-p%E2%89%A0np/

]]>"If Vinay Deolalikar is awarded the $1,000,000 Clay Millennium Prize for his proof of P≠NP, then I, Scott Aaronson, will personally supplement his prize by the amount of $200,000."

Translation: "He might have it, but I really doubt it."

All of P is certainly contained in NP; so, no, just because we can find a polynomial-time algorithm for some NP problem doesn't help with any others. But there are some problems for which this would be helpful. They're called NP-complete. An NP-complete problem has the property that you can take an instance of /any/ other problem in NP, and convert it into an instance of the NP-complete problem, in polynomial time, so that the answer to the NP-complete instance tells you the answer to your original problem instance.

The obvious example is SAT (`Boolean satisfiability'): here's a circuit with AND, OR and NOT gates, and a bunch of inputs and one output: is there a way of assigning TRUE and FALSE values to the inputs that makes the output be TRUE? This is NP-complete because you can take any other NP problem instance and encode it as a boolean circuit, with the inputs representing the witness: `is there a witness that this problem instance has a YES answer'?
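A tiny concrete SAT instance shows the shape of an NP problem: checking a candidate assignment against the formula is cheap, while the only obvious way to *find* one tries all 2^n assignments. The formula below is made up for illustration:

```python
# Verifying a SAT witness is polynomial in the formula size; the
# obvious search for a witness is exponential in the variable count.
from itertools import product

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3), in CNF;
# each literal is a pair (variable_index, negated?).
clauses = [[(0, False), (1, True)],
           [(1, False), (2, False)],
           [(0, True), (2, True)]]

def verify(assignment):
    """Polynomial-time check: does the assignment satisfy every clause?"""
    return all(any(assignment[v] != neg for v, neg in clause)
               for clause in clauses)

def brute_force(n_vars):
    """Exponential search over all 2^n assignments."""
    for bits in product([False, True], repeat=n_vars):
        if verify(bits):
            return bits
    return None

witness = brute_force(3)
assert witness is not None and verify(witness)
```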

`As kangaroo mentioned, there are a few out there that would love to be solved in other than the brute force "proof" that "4 colors suffice".'

Yeah, life sucks sometimes. But, on the other hand, we'll have made some really important progress on something at which we've been very bad in the past, namely, determining lower bounds on algorithmic complexity. And, at least we'll know that (say) trying to figure out ways to n-colour graphs in polynomial time is just a waste of effort. It'll be something that only cranks waste their time at, like circle-squaring, angle-trisecting, perfect compression, perpetual motion machines, and so on.

Not quite. A P solution to any NP-*complete* problem would mean there is a solution to all NP problems. But not every NP problem is NP-complete (unless P = NP).

P is actually a subset of NP, so *every* problem with a P solution is an NP problem. But there are also some NP problems that (we suspect) aren't in P.

There are also problems harder than NP. Graph coloring is NP-complete, but I'm not sure whether the proof that all planar graphs are four-colorable is itself NP or not.

]]>"Was that just a typo, or was it a freudian slip?"

It was Clive, so my money is on "typo". ;-)

Is it still true that a P solution to any NP problem would mean there is a solution to *all* NP problems? As kangaroo mentioned, there are a few out there that would love to be solved in other than the brute force "proof" that "4 colors suffice".

And for those that don't understand -- proving that a P solution exists doesn't give you that solution, just the knowledge that one is possible. However, many problems map fairly closely onto one another and once one is solved, you do get a hint to the others.

However, upon further reflection, I think I see a way to extract the key: you guess it one bit at a time.

If P=NP, then you can ask "Is there any 128-bit AES key that decrypts this to an English-looking plaintext?", but you can also ask "Is there any 128-bit AES key whose first bit is zero that decrypts this to an English-looking plaintext?" Once you've confirmed that the first bit is (or is not) a zero, you guess the second bit, and so on.

That multiplies the time required by the number of bits in the key, but that's still polynomial.
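The bit-by-bit extraction described above is a classic search-to-decision reduction, and it can be sketched with a toy 16-bit key and a hypothetical oracle standing in for the assumed polynomial-time P=NP decision procedure:

```python
# Sketch of recovering a key one bit at a time from a decision oracle.
# The oracle here simply consults the secret; under P=NP it would run
# the assumed polynomial-time "does a consistent key exist?" test.

SECRET_KEY = 0b1011001110001111  # 16-bit stand-in for an AES key
KEY_BITS = 16

def oracle(prefix_bits):
    """Decision oracle: does some valid key start with these bits?"""
    for i, b in enumerate(prefix_bits):
        if (SECRET_KEY >> (KEY_BITS - 1 - i)) & 1 != b:
            return False
    return True

def extract_key():
    prefix = []
    for _ in range(KEY_BITS):
        # Try extending the prefix with 0; if no valid key does so,
        # the next bit must be 1.
        prefix.append(0 if oracle(prefix + [0]) else 1)
    return prefix

bits = extract_key()
recovered = int("".join(map(str, bits)), 2)
assert recovered == SECRET_KEY  # one oracle call per bit: polynomial
```

As the comment notes, this multiplies the oracle's running time by the key length, which keeps the whole attack polynomial.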

David's formulation of the question is in terms of a quasi-known-plaintext attack. I think the issue isn't retrieving the plaintext, it's retrieving the key after the problem has been decided. You ask the black box: "Does a key exist that turns this ciphertext into this plaintext using this cipher?". It answers, in polynomial time: "Yes!". The flaw is assuming that the machine will need to produce the key to answer the question.

"Deciphering a given ciphertext with a known key is an efficient operation, definitely in P. Therefore, deciphering a given ciphertext, with initially unknown key, to get plaintext with certain parameters, is in NP. Therefore, decryption of any reasonable cipher is in NP, given that we know or can surmise something of the plaintext. Therefore, if P=NP, we can decrypt any message, as long as we have some idea of the plaintext (such as that it's in English, say), in polynomial time."

P and NP are defined for decision problems. If P=NP, it's clear that we can (efficiently) prove that a valid decryption *exists* for a given ciphertext, but I'm not clear how you've reached the conclusion that we can also figure out what that plaintext actually *is*.

Care to elaborate?

But, if true, this means that many other NP problems, which we wish to attack tractably, are hopeless as well.

If true, this is not unalloyed good news -- in fact, it means that many problems are hopeless. Cryptography isn't the greatest intellectual and political problem we have -- this is like being excited about Gödel. Sure, there's some good that comes out of it -- but we also have to face a proven intractability of the universe.

That kinda sucks.


http://michaelnielsen.org/polymath1/index.php?title=Deolalikar%27s_P!%3DNP_paper

"Often this is because of hurd thinking..."

Was that just a typo, or was it a freudian slip? This paper comes from an HP researcher, and HP's CEO Mark Hurd was just ousted, apparently because of some impropriety with the use of company funds to pay for his mistress or something? But anyway the story has been in the media for a few days from various angles and the "hurd thinking" seems to be all too common...

This page collects a list of many attempts to settle the question one way or the other:

http://www.win.tue.nl/~gwoegi/P-versus-NP.htm

It also has a direct PDF download link for Vinay Deolalikar's latest effort.

I am not qualified to judge the proof and may not even be able to make sense out of it. My gut feeling is that it will contain flaws. It's pretty hard to write a 66-page mathematical proof without making any mistakes. It seems likely that he will have made at least one very subtle mistake somewhere--the kind of mistake that is so subtle that it is only going to be discovered by the scrutiny of hundreds of the world's best mathematicians. But that's the whole point of releasing it publicly for peer review. Even if it turns out to have flaw(s), this attempt is very probably going to advance the state of the art in this branch of mathematics, and (like with Fermat's Last Theorem) it's possible that any flaws the proof might contain can be repaired in a way that will still lead to a solid and generally accepted proof. If so, it will be a very impressive accomplishment and a useful theoretical result.


@David -

To nitpick the nitpick: an NP-complete problem is one that is both in NP and NP-hard. (There are NP-hard problems not yet known to be in NP, such as the complement of the Hamiltonian cycle problem, which is in co-NP.)

@Foo: NP means Nondeterministic Polynomial time, which means solvable in polynomial time by an infinitely parallel computer (i.e., a computer that can be in arbitrarily many states at once, hence "nondeterministic"). A "non-polynomial" problem would be one that cannot be solved in polynomial time, which is not what NP means.

@Secure: We can prove things about NP-complete problems without exponentially increasing proofs, so I don't understand where you're coming from. It is of course possible that neither P=NP nor P!=NP can be proved consistently, and that's true for any proposition that we've neither proved nor disproved, but there's no good reason to think so. BTW, NP-complete problems are solvable by brute force, so even if the proof were NP-complete (and I have no idea what that could possibly mean) this wouldn't apply.

Impact on cryptography: none, really. The interesting question for cryptography is whether one-way functions and (even more interestingly) trapdoor one-way functions exist.

If P = NP then neither exists, and computationally-secure cryptography[1] as a field of study vanishes immediately. We're left with one-time pads, Carter--Wegman authentication, and some multiparty computation stuff.

But even if P /= NP, it's still the case that one-way functions might not exist. And we're still left without computationally-secure cryptography. Finally, one-way functions might exist, but trapdoor one-way functions might not -- in which case we end up with symmetric cryptography and (very cumbersome) digital signatures but not key agreement or public-key encryption.

[1] In the complexity-theoretic sense. There might still be some mileage in doing computationally secure cryptography with only a polynomial adversarial disadvantage, but even that's risky without a lower bound on the polynomial degrees we're dealing with. And, of course, the constant factors involved are important for any specific, concrete case.

There seems to be a lot of confusion over terminology in the comments here. A quick summary:

P, NP and NP-hard only make sense in terms of decision problems, so recall that all problems discussed in that context must have a possible yes/no formulation.

P is the full set of decision problems whose worst case can be decided in polynomial time (O(n^k)), where k is some constant and n is the length of the input in bits.

NP is the full set of decision problems that can be VERIFIED in polynomial time: given an input, an a-priori yes/no answer, and a "proof" whose bit length is polynomial in the size of the input, the answer can be shown to be correct in polynomial time. It's trivial to show that P is a subset of NP.

NP-Hard problems are a set of problems with the following characteristic: If they have a polynomial-time solution then P=NP.

This proof, if it's correct, shows that NP-hard problems don't have a polynomial-time solution.

It's worth noting, as pointed out above, that this result wouldn't prove that factoring large numbers, the discrete logarithm problem or the graph isomorphism problem are intractable, as these problems haven't been proven to be NP-hard. Proving something is in NP doesn't put any lower bound on the complexity, as P is a subset of NP.

P is polynomial. NP is... not polynomial (though it might have another name).


Let me know if there are any other markets that you guys might want to see.

Jason

Founder of Smarkets

What if P=NP, that is, every problem whose solution can be checked easily can be solved easily?

However, finding the solutions is beyond P and NP. That is, every "NP complete" problem has a solution in P, but proving any solution is just as hard as finding it.

Also, P = NP is problematic only if one is wearing myopic crypto glasses.

]]>"There can't be a final proof for P!=NP, because the proof itself is in NP"

Ok I get the reasoning but how do you prove it ;)

!Proof = NoPrize,

There can't be a final proof for P!=NP, because the proof itself is in NP. You have to effectively prove that all possible and thinkable solutions are in NP and none is in P. This makes the problem somewhat self-referential, thus according to Gödel's incompleteness theorem, there can't be a consistent proof.

Yeah, I've always felt public key crypto just didn't have enough assurance. I think Universities getting crypto funding should be spending more effort developing new public crypto methods, like the more recent lattice-based methods that show immunity to quantum attacks. It seems like the public key crypto area is based on a small number of failure points as far as the math goes. If someone gets lucky here, all of this fails. If someone gets lucky there, all of that fails. We have plenty of options for symmetric and hash functions. We need to fund the creation of more asymmetric crypto schemes that work.
