I would very much appreciate it if you could give an example, or several, of your statement: "but the latter tend to come up with much more practical systems".

]]>As one who knows the field in question fairly well, definitely better than any of the other posters here, I cannot but laugh at the whole charade. Basically, Foundations of Cryptography is meant to bring order to the field and create the theoretical background lacking in a discipline that was for centuries based on non-scientific intuition. It is ridiculous that Koblitz, as a mathematician, is trying to undermine these efforts in favor of intuition, which, especially in a field as hard as cryptography, will get you wrong almost every time.

As far as rigor goes, Bruce, by definition the theory of cryptography is as rigorous as any field in mathematics, and all one needs to do in order to see that is simply read Oded's book. Practical systems in Applied Cryptography may rely on not-so-clear assumptions like the hardness of factoring, but that is the whole point of the systems' practicality. Even though factoring is getting easier, it is still hard enough for any practical application.

It seems that Koblitz intentionally seized on the weak hardness claims on which some applied-cryptography protocols are based to create a demagogic rant against the theoretical basis of the field, without giving any reasonable proof.

In fact, you don't need no fancy stinkin' algorithms for the encryption part: One-Time Pads with XOR are the ideal solution "given good key management".
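For what it's worth, the XOR one-time pad fits in a few lines of Python. This is just an illustrative sketch (names are mine): all of the security lives in that "given good key management" clause — the key must be truly random, as long as the message, and never reused.

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # The key must be truly random, at least as long as the message,
    # and never reused -- that is the "good key management" caveat.
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

message = b"attack at dawn"
key = secrets.token_bytes(len(message))
ciphertext = otp_encrypt(message, key)
recovered = otp_decrypt(ciphertext, key)
```

Reusing `key` for a second message is exactly the classic two-time-pad break, which is why the key-management caveat does all the work here.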

]]>Actually, I have seen a real use for N-space topology, and it's at least somewhat relevant to this site: optimal signal selection.

If signals are selected from an orthogonal multidimensional phase space (e.g., there are three overlapping frequencies: 'A' is 0, 1, sqrt(2); 'B' is sqrt(2), 0, 1; etc.), noise tends to distort a signal by pushing it some distance away from its original point. The result is that the effective space taken up by each signal is a sphere with as many dimensions as you have independent variables.

Now, if you want to minimize power usage, you want to minimize the average distance from the origin to the centers of all these spheres. The result then becomes an N-dimensional sphere packing problem.
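A toy sketch of that trade-off (the constellation points below are made up for illustration): for a fixed minimum distance between signal points (the noise margin), you want the points packed as close to the origin as possible (the power cost).

```python
import itertools
import math

def mean_power(points):
    # Average squared distance from the origin: a proxy for
    # average transmit power over a uniform signal alphabet.
    return sum(sum(c * c for c in p) for p in points) / len(points)

def min_distance(points):
    # Smallest pairwise distance: the noise margin of the alphabet.
    return min(math.dist(a, b) for a, b in itertools.combinations(points, 2))

# Two hypothetical 2-D alphabets with four signals each.
square = [(1, 1), (1, -1), (-1, 1), (-1, -1)]  # 4-QAM-like
line = [(-3, 0), (-1, 0), (1, 0), (3, 0)]      # 4-PAM-like

# Both have minimum distance 2, but the square packs its spheres
# closer to the origin and so needs less average power.
```

Generalizing this to many dimensions and many signals is exactly the sphere-packing problem the comment describes.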

]]>> key management.

(laugh)... surf the "snake oil" posts on Bruce's blog... theorists may say, "this is provably secure given good key management" (which may be true but somewhat naive), but there are just as many horrid examples of practitioners making really bad unstated assumptions about key management. The phrase "you theorists" belies any objectivity here.

I don't claim to be an expert in cryptographic theory (you'll note I disclaimed that above), but I'll reasonably claim that I know the difference between different schools of thought. If you regard Koblitz's paper as a criticism of the gray area in cryptography where the math meets the science, and of how the cryptographic community (as a whole) operates in that gray area, well, I think there's blame to throw on both sides of the fence there.

]]>I recommend that anyone who's bickering (above) take a few courses in breaking crypto systems. You'll find out that while the math is usually pretty, coding them usually isn't. And let's not forget the assumptions that you theorists make about key management.

Before you ask: no, I don't consider myself a member of either group. I never made it past tech calc and don't feel insulted by any of the previous statements about which group is better/smarter/has a longer proof.

As far as I know, only one person on this page has an accepted crypto-system in operation. The rest of you need to get over yourselves. You'll need to work together because (obviously) Bruce is trying to thin out the crowd by inciting a riot.

Nice troll you've set here, Bruce!

]]>>Both are based on certain axioms, and result in a statement about

>the problem in question.

Here's an attempt at a more precise correction of this assertion:

Theorems in pure mathematics and theoretical cryptography are held to the same standards of proof. In most cases, those standards are met. Failures occur occasionally in both fields, but somewhat more frequently in theoretical cryptography. There are several possible reasons (the inherent complexity of trying to analyze arbitrary adversaries, time pressure due to hard conference submission deadlines, eagerness to provide theoretical justification for cryptosystems that may be practically important). In applied cryptography, proofs are much less important.

]]>There's an enormous difference between the Riemann hypothesis, P vs. NP, and factoring. The Riemann hypothesis is almost certainly true and we know a lot about why (there is tremendous numerical and theoretical evidence for it, we can prove extremely closely related statements such as the Weil conjectures, we can outline how a proof of RH might go, etc.). P is very likely not equal to NP but we have no idea whatsoever why (we have a lot of information about what won't work, such as diagonalization or so-called natural proofs, but absolutely no idea what might). Factoring is a completely open problem; factoring algorithms have improved tremendously since the 1960s and there is no theoretical barrier in sight. Nobody has any good conceptual reason to think factoring is hard, beyond wishful thinking (namely that its hardness would lead to great cryptography).

So I agree that some CS conjectures are quite safe, such as P vs. NP. However, cryptography is full of papers in which someone makes up a new problem, tries and fails to solve it, and conjectures that it is hard. Sometimes they may be right; sometimes they are definitely wrong. Even the difficulty of factoring is far from well established, and only a handful of cryptographically relevant problems are as well established as that. (By contrast, NP-completeness does not seem to be useful for designing cryptosystems.)

Incidentally, the biggest problem is in saying "but I have some idea that the people who do know about factoring, similarly, have pretty good reasons to feel authoritative about their own Big Assumptions". For some problems, such as P vs. NP, that is true, but factoring isn't one of them.

]]>I'm an information scientist and a mathematician, and occasionally I dabble in cryptography (unapplied), philosophy, history, and a few other fields :) Axiomatic systems, logical systems, and the theory of science are all pretty high on my "interests" lists.

If you're not a *cryptographer*, and you're not a *mathematician*, then what is your justification for this statement:

"In cryptography, a proof is just as rigorous as a mathematical proof. Both are based on certain axioms, and result in a statement about the problem in question."

That would be like me saying "Chemistry is based on axioms." I don't think a chemist would even use the term "axioms" in the description of their field. There are certain principles that chemists would say qualify as axioms under part of the third dictionary definition "an established rule or principle", but they certainly would balk at saying that any base principle in chemistry is a "self-evident truth"... scientists *hate* self-evidence.

Of course "the" dictionary definition doesn't apply... the dictionary has multiple definitions. If you're a mathematician, you have one (1) definition for an axiom, not three.

> lends to the suggestion that different usages of the word should be

> qualified to be more precise.

Absolutely, but this is only a statement of contextual precision, not a statement of empirical "worth".

Different usages of the word *are* more precise in context. In mathematics (an axiomatic system), dictionary definition #2 is the correct definition - a statement accepted as true as the basis for argument or inference. But self-evidence has nothing to do with theoretical mathematics.

Saying that a proposition is an axiom in the context of an axiomatic system is a very precise statement. Saying a proposition is an axiom in the context of a scientific system is also a very precise statement. However, these two statements are NOT the same. That doesn't make one "better" in any sense, just different.

There's nothing *intellectually superior* about operating inside an axiomatic system as compared to operating inside a scientific one. But *they are not the same*.

Saying "A and B are different" can be factually true, *without implying a value judgment* that A is better than B.

]]>I'm not a cryptographer, and I don't know if you are mathematician, but it looks like we have reduced the original controversy of cryptographical proofs vs. mathematical proofs to incompatible interpretations of "axioms", and now we're having the same argument all over again.

Saying that the dictionary definition doesn't apply lends to the suggestion that different usages of the word should be qualified to be more precise.

> Mathematicians' theorems have conditions like "this is true if the Riemann Hypothesis

> is true" and computer scientists' theorems have conditions like "this is true if

> factoring is hard". He seems to think that one of those is obviously more solid

> than the other.

That's sort of the wrong way to look at it.

The Riemann Hypothesis can be: (a) true; (b) not true; or (c) unprovable given the axioms we currently have, in which case (d) we can either consider it: (d') axiomatically true; or (d'') axiomatically untrue, either stance will lead to some interesting mathematics.

In any event, if it *could* be proven that the Riemann Hypothesis was *unprovable*, you'd then have two separate branches of number theory, one where Riemann was axiomatically true and one where it wasn't, similar to how we now have Euclidean geometry and non-Euclidean geometry.

"Factoring is hard" is not an axiomatic statement. What does "hard" mean? It can only be taken in some context. "Factoring numbers bigger than N (for some known N) is not possible," would be an axiomatic statement; it's either true, or it's not true.

It's not really a question of whether one statement is more or less "solid" than the other; they are fundamentally different classes of statement.
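As an aside on what "hard" means here: factoring is always *possible* in principle; the hardness claim is about running time. A naive trial-division sketch in Python makes that distinction concrete:

```python
def trial_factor(n: int) -> list[int]:
    # Trial division always terminates with the full factorization,
    # but takes time roughly proportional to sqrt(n) in the worst
    # case -- "hardness" is a statement about running time, not
    # about possibility.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors
```

For a 2048-bit RSA modulus, the same loop would run for astronomically many steps, which is the entire (conjectured) security margin.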

]]>This is the fundamental misunderstanding between a large body of the general public and the scientific community.

Lots of people think "[Certain principle] is just a theory, you can't *prove* it!" because they are thinking of "proof" in an axiomatic sense, that ends in a QED.

That is, however, NOT the way that scientific knowledge is built. Axiomatic systems are formulated with base principles *stated at outset* upon which you build a logically consistent system. Science involves attempting to *find* the base principles, given observable data and logical conjectures.

]]>This is really a philosophy-of-science question. A dictionary definition of "axiom" doesn't actually apply here; what we're discussing, fundamentally, is the actual methodology used to come to a conclusion: axiomatic systems compared to the scientific method.

> In physics, many observations can be explained perfectly well using

> the false axiom that atoms are indivisible ...

In an axiomatic-systems view, there is no such thing as a false axiom; every axiom is a base principle. For an axiomatic system to be usable, the foundational axioms must be consistent (they can't contradict each other).

Physics is a science; it is *not* an axiomatic system. Physicists may have principles that they regard as "axioms", but they are using the term in a manner that is not equivalent to what it means in the context of mathematics.

I think the difficulty here is that you're conflating a "mathematical axiom" (which has a *particular definition in the context of mathematics, which is an axiomatic system*) and the more general term.

]]>In mathematics, axioms are just assumed, so the notion of rigor doesn't really apply. Axioms are judged instead by what sorts of interesting things can be done with them. But let me second Patrick's differentiation of mathematics from science. For CS, it really does matter whether your axioms are sensible. For security, even axioms that seem sensible are later found not to be.

I read the above-referenced sites when this controversy first erupted. I got the distinct impression that Koblitz, at least, has a commercial interest in the MQV protocol, and that this rant is at least partially inspired by the prospect of ongoing NSA funding. I can't fault anyone for making money from crypto, or from marketing aggressively using whatever fora are available to him. In fact, by producing an actual system (I'm assuming they have done so), the MQV people have advanced the science, because a production system that exists is one that may be broken. So in some sense the mathematician in this case is the more practical academic.

]]>Merriam-Webster (www.m-w.com) defines "axiom" as: (1) a maxim widely accepted on its intrinsic merit, (2) a statement accepted as true as the basis for argument or inference, (3) an established rule or principle or a self-evident truth.

I was using interpretation (2) for my argument. Not even the other two meanings come close to the strictness of "always" having to be true.

In physics, many observations can be explained perfectly well using the false axiom that atoms are indivisible ...

> Cryptographers know that such a statement implies the assumption that the axioms

> hold true and the assumption that current hardware is not fast enough for a

> practical brute force attack.

See, in axiomatic theory, axioms are *always* true, period. Cryptographic "axioms" aren't axioms, they are conditional boundary statements.

> In cryptography, a proof is just as rigorous as a mathematical proof.

Within the context of the field (adding in the word "correct" before proof), I'll agree with this statement.

However, cryptography is a science; it is not axiomatic theory. This doesn't make it *better* or *worse*, it makes it a different logical system.

]]>Mathematical proofs, from a pure theory standpoint, are axiomatic. You have a number of definitions, and a number of axioms, from which you attempt to derive a conclusion. Paradigm shifts in theoretical mathematics occur only when someone takes an existing axiom and changes it to create essentially a new field of mathematics, or comes up with a set of axioms on their own.

The funny thing about axiomatic theory fields is that as long as you can create a set of axioms from which you can derive some interesting result (logically speaking), you've done some interesting work. The system can be completely untied to reality, that doesn't matter. N-space topology doesn't have any current applications, really :)

This is opposed to science in general, where you are not applying axiomatic proof, but proof through observable verification.

Theoretical math requires that an entire axiomatic system exist that supports your theorem for the theorem to be considered true. Science instead requires (a) that a body of evidence exist that explains a phenomenon in accordance with your theory, (b) that there exist no evidence directly challenging your theory that cannot be explained, and (c) that your theory provide something useful by way of prediction or understanding.

Scientists take calculus, find it to be a useful tool, and use it. Purely theoretical mathematicians, on the other hand, were uneasy with the use of calculus until the concepts of infinitesimals and epsilon-delta proofs were developed and accepted.

Obviously, this means that a theoretical mathematician will have a different concept of what the word "proof" means than a scientist will have. Most mathematicians aren't really theoreticians (in my own undergraduate experience there were maybe four of us who were really interested in theory, the other 24 math majors in my class year were analysts or statisticians or some other non-theory heavy area of study).

Now, I may argue that set theory or group theory or topology or number theory is more "pure" than differential equations or statistics, but it's not really. It certainly means that I have a different concept of proof than a statistician does, and we can pooh-pooh each other's approaches in the way that academics have pooh-poohed each other's approaches for centuries.

Koblitz is, essentially, saying that the field of cryptography is for the most part not an axiomatic field. In this instance, he's totally correct; number theory is, but cryptography really isn't. Cryptographers can get all up in arms and say that he's attacking the very nature of their research, but he's not, really. This might be colored by the fact that Koblitz has a personal prejudice against non-axiomatic study (I don't know if he does or not; he could be a snarky bastard at mathematics and cryptography conferences or he could be the nicest guy on the planet for all I know), but he's essentially correct here.

]]>Basically, he claimed that mathematicians had the right "mindset" and "mental tools" to easily take to programming, while the few Computer Science majors available back then basically "knew too much" that wasn't necessarily relevant to the emerging microcomputer environment. It took less time to train the math majors than it took to untrain the CS majors.

I'm by no means certain that he was correct, especially since our main products were the two big CS staples: compilers and operating systems. However, I'm not certain he was far off the mark, either. I have hired, managed, and worked with a large number of software developers over the years. By and large, those with a bachelor's degree (or less, with significant development experience) worked out better than those with advanced degrees, in terms of being effective and productive in a team-oriented development environment. Those with advanced degrees tended to assume they "knew the solution" if they actually did, and thus didn't check for updated approaches and research. The developers who had no more than a bachelor's tended to be much more willing to actively research issues whether they "knew" them or not (in semi-academic periodicals like Dr. Dobb's Journal or various ACM publications, and in recently published technical books).

However, this most certainly does not reflect a "random sample" of the available population. Those with advanced degrees were usually "strongly sponsored" or imposed upon us, based on some HR/Managerial feeling that we "needed" the person with the advanced degree. The folks with less education were selected by acclamation after extensive interviews with most of the senior development staff.

As microcomputers have exploded throughout the world, and evolved ever-closer to many of the hardware techniques developed first for mainframes, and later for minicomputers, the hardware-architecture discrepancies with what CS majors learn in school have become less significant. Also, there are *lots* more CS-majors available.

In relation to Bruce's Mathematics -vs- Computer Science point, I think a big part of the problem is that "Computer Science" is now far too encompassing a field for the term to be used effectively in the way Bruce tries to apply it. For example, there are major differences in the skill-sets and spheres of knowledge between Computer Scientists who specialize in Hardware -vs- those interested in Software, and both groups are distinct from (though intersecting with) those who focus on Algorithms. This last group is arguably the most "Mathematician-Like", and most-likely to be directly involved in cryptography/cryptanalysis.

However, I think the biggest split in the "Computer Science" crowd is the difference between the "Computer Scientist" (essentially theoretical/academic) and the "Computer Engineer" (fundamentally practical/applied). There is a lot of overlap, but I submit that the "rigorousness index" goes something like this:

Theoretical Mathematics

Applied Mathematics

Computer Science

Computer Engineering

Software Development (with Methodology)

Hacking (old MIT meaning)

Ad-Hoc Programming (no Methodology)

]]>

"In cryptography, a proof is just as rigorous as a mathematical proof. Both are based on certain axioms, [...]

Mathematicians, somewhat understandably, balk at the notion that something is "provably secure", when (a) it comes with so many caveats, and (b) when the axioms come with less rigor than they expect of axioms in their field."

@FP, it seems like you contradict yourself with that initial statement and your closing point (b). If the axioms are less rigorous, how can proofs based on them be equally rigorous?

]]>Cryptography seeks to apply mathematical principles to adversarial situations, while pure mathematics seeks to uncover unifying and universal truths.

Those are philosophically different mindsets, and they lead to greatly different attitudes towards things like peer review and collaborative work. This is vividly illustrated by the story of Goldreich's response, which turned a disagreement over a scientific paper into a religious conflict. It also shows in self-serving selectivity in marketing, for example claims based on asymptotic reasoning.

btw, the anecdote about the NSA distinguishing between American and non-American mathematics seems to imply the goal of limiting access to scientific truth, or at least the desire to do so. King Canute springs to mind!

]]>"In theory, there is no difference between theory and practice. In practice, there is."

The "Quotations Page", at

http://www.quotationspage.com/quote/879.html

attributes this quote to Chuck Reid, instead.

However, there is a site that provides a skeletal historical breakdown of the history and evolution of this quote:

http://c2.com/cgi/wiki?DifferenceBetweenTheoryAndPractice

For example, if I am designing something that needs a one way hash, I'll pull one off the shelf and integrate it, so long as it has the properties I need. I don't need to understand why or how it has those properties. If I publish a paper describing my invention, I obviously won't have the understanding a mathematician would have. There is nothing wrong with that.
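Concretely, that off-the-shelf usage can be as simple as calling SHA-256 from Python's standard library (SHA-256 here is just one plausible choice of shelf item): you rely on its documented properties without re-deriving them.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Pull a hash "off the shelf": we rely on SHA-256's documented
    # properties (deterministic, fixed-length output, collision
    # resistance) without needing to understand how the compression
    # function achieves them.
    return hashlib.sha256(data).hexdigest()

digest = fingerprint(b"hello")
```

The integrator's obligation is only to check that the documented properties match the ones the design needs, which is a much weaker requirement than understanding the underlying mathematics.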

]]>What puzzles me more is that Koblitz and others completely miss the point that in some cases "provable security" provides much more detailed proofs (of conditional results, but solid proofs anyway) than the proofs one might find in mathematical publications. And this is particularly true for the new wave of techniques relying on well-defined frameworks for proving cryptographic "protocols" (in Koblitz's sense) secure, such as Paulson's inductive approach, ProVerif, CryptoVerif, CertiCrypt, etc. It's true that when we prove a cryptographic system correct we do so for a particular attacker model and relying on conjectures, but we're entirely honest about that, and I'm convinced that this is already a big improvement over the informal intuitive arguments Koblitz is so keen on.

Isn't quantum-state cryptography already "proved" secure? Of course, the "proof" is not ultimately mathematical; it relies on the truth of the Heisenberg Uncertainty Principle, but that principle has garnered a pretty impressive collection of empirical support, as well as self-consistent meshing with an enormous body of workable (it works in the real world) theory.

]]>The most eloquent expression of this was often used by a former colleague of mine.

http://highlydistractible.blogspot.com/2007/05/what-is-difference-between-theory-and.html

I'd love to know who originally said it.

]]>But it is not a "mistake" or "lack of rigor" on the part of the researcher for a cryptographic construction to be impractical in practice!

]]>I'm not convinced. Maybe to someone with a math background it's obvious why the Riemann Hypothesis *must* be true... except "obvious" means "it is easy to think of a proof," and ain't nobody done that yet. I don't feel all that convinced of the Riemann Hypothesis, myself. A Big Assumption in my own field of computer science, like say P!=NP, feels much more solid to me than something about the zeros of some weird function on the complex numbers - because even if I don't have the proof in hand, I have a solid understanding of *why* people think P!=NP is true, and just how big a surprise it would be for it not to be true. I don't know enough about number theory to have a good intuition on factoring... but I have some idea that the people who do know about factoring, similarly, have pretty good reasons to feel authoritative about their own Big Assumptions.

Maybe instead of "mathematicians make justifiable assumptions and computer scientists make unjustifiable ones" it'd be more accurate to say "the assumptions of one's own field seem more justifiable than the assumptions of other fields."

]]>Certainly one difference is that cryptography is not absolute but has a certain statistical component to it: it doesn't say "this algorithm is unbreakable" but rather something like "the algorithm can be broken in, on average, this number of steps."

Certainly, calling some algorithm "provably secure" is a fallacy. Cryptographers know that such a statement implies the assumption that the axioms hold true and the assumption that current hardware is not fast enough for a practical brute-force attack.
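The "average number of steps" framing can be made concrete with a deliberately tiny toy cipher (single-byte XOR, purely illustrative): averaged over all keys, exhaustive search tries half the keyspace.

```python
# A toy 8-bit "cipher" (single-byte XOR), just to make the
# counting concrete -- nothing here is a real cryptosystem.
def toy_encrypt(plaintext: int, key: int) -> int:
    return plaintext ^ key

def brute_force(plaintext: int, ciphertext: int) -> tuple[int, int]:
    # Returns (recovered_key, number_of_keys_tried).
    for tried, key in enumerate(range(256), start=1):
        if toy_encrypt(plaintext, key) == ciphertext:
            return key, tried
    raise ValueError("no key found")

# Averaged over all 256 keys, the attacker tries
# (1 + 2 + ... + 256) / 256 = 128.5 keys -- i.e., 2**(n-1) for
# an n-bit key, which is the usual "average steps" figure.
steps = [brute_force(42, toy_encrypt(42, k))[1] for k in range(256)]
average = sum(steps) / len(steps)
```

The same arithmetic, with 2**128 in place of 256, is what the "not fast enough for a practical brute-force attack" assumption rests on.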

Mathematicians, somewhat understandably, balk at the notion that something is "provably secure", when (a) it comes with so many caveats, and (b) when the axioms come with less rigor than they expect of axioms in their field.

I have certainly seen this "mistake" rear up in other contexts, and I know at least one entrepreneur whose "secret sauce" boils down to being aware that, after dimensional reduction, a particular real-world instance of a particular NP-complete problem only ever has a very tractable input size. Not being an actual cryptographer, I'm not sure how often this mistake occurs in the literature, but I am a little surprised it happens at all in such a high-rigor context.
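A minimal sketch of that observation (the values and target below are made up): subset-sum is NP-complete, but with the input size pinned at a handful of items, exhaustive search is instantaneous.

```python
from itertools import combinations

def subset_sum(values, target):
    # Subset-sum is NP-complete, but exhaustive search over n items
    # costs at most 2**n subsets -- here n = 6, so at most 64
    # checks, which any machine does instantly. Asymptotic hardness
    # says nothing about small, fixed instances.
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

found = subset_sum([3, 34, 4, 12, 5, 2], 9)
```

This is exactly why "it reduces to an NP-complete problem" is not, by itself, a security argument: the instances actually produced must be large and hard on average, not merely members of a hard problem class.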

]]>Most cryptosystems rely on assumptions that have not been proven, and a breakthrough in fields other than mathematics (e.g., physics or statistics) may prove them wrong. So we can be more rigorous in our estimates of the security of certain algorithms by using formal mathematical proofs, but we won't get anything better than provably secure systems in the end.

]]>On top of this, many people do not distinguish between the cryptography where a construction necessarily comes with a proof (much of public-key and theoretical crypto, for instance) and the cryptography where it doesn't (generally the design of efficient block ciphers). So discussions about this topic can get quite confusing, depending on what one considers "cryptography."

Anyway, contrary to what you might guess, it seems like Koblitz is suggesting to do away with mathematical rigor in the proof-oriented cryptography, since he is not satisfied with how it is currently being applied. He suggests giving more informal, intuitive "arguments" that will be more accessible to outsiders.

To me, Koblitz's main complaints sound more like marketing problems than problems with the direction of cryptography. To be sure, it's not hard to misinterpret, misrepresent, or be confused by the interpretation of a "provable security" guarantee, when applying theory to practice. But it's also not hard for an expert to correctly interpret things either. Just because outsiders have trouble interpreting our results doesn't mean we should abandon trying to prove things.

]]>These relationships are the same in the mathematics field as they are in the computer science field or the chemistry field. The sad truth is that computer science is thousands of years younger than math or chemistry, and frankly our researchers just haven't had time to produce as much knowledge. For every algorithm we can prove correct there are thousands of algorithms about which we don't have the tools to prove or disprove the correctness. It's like math before Euler, and comparison to modern mathematics is never going to look rosy for the computer science side. We need to get over it and get back to research.

]]>

Ahh, the sweet smell of bigotry in the evening.

At a rather large number of universities, CS students take the same classes maths students do (if a smaller number of them). It's rather difficult to be less rigorous teaching a CS student than a maths one when they're sitting side by side.

]]>Leo: nice anecdote. I could tell similar stories, unfortunately (going in the opposite direction as well, people thinking easy things are impossible because they don't have a basic understanding of computational complexity). I think it's true that many computer programmers rarely require the mathematical knowledge one learns in school ... but when, a few times a year, they need that knowledge, they (we) truly do need it.

]]>

http://in-theory.blogspot.com/2007/08/swift-boating-of-modern-cryptography.html

]]>

Once upon a time, someone at a company I worked for decided to save the company money by implementing a scheduling program so that a human wouldn't have to do the work. I mentioned that combinatorial-explosion/NP-completeness thing that we all should have learned in our theoretical CS class. I was basically told to go away, 'cause everyone knows that theory stuff doesn't really mean anything.

The program ran for three days before they killed it. The resources it was supposed to schedule sat idle during that time. The human who would have done the work in a few hours at the beginning of those three days ended up having to change his plans for that night and spend a few hours doing the scheduling in crisis mode. The people who worked on this later received an award for trying so hard.

It's one thing when an engineer uses theoretically impure infinitesimal arguments to justify his calculus equations. Calculus was done that way for nearly two hundred years before the modern mathematical rigor that eliminated the need for infinitesimals was developed (which, ironically enough, led to a rigorous theory of infinitesimals that can be used as a basis for calculus). Failing to understand the mathematics in computer science, on the other hand, actually leads to software that doesn't work or, in the case of cryptography, provably secure systems that might not really be secure.

]]>Maybe I'm the best of both worlds! Nah, I only care about the theoretical side of math. :)

]]>The problem Koblitz argues against is that the latter want to pretend that they, too, have the rigor and provability that the former have, without being able (due to business constraints) to live up to that.

Roughly - this is more like what ends up in the "Doghouse" here - people who want to talk the talk but don't walk the walk.

The reason for all this is the business constraints. As the author points out, CS people perceive time "like a hummingbird" instead of like the mathematician's elephant. Therefore, CS people don't take the time to really understand the topic, or even to read previous research. This, I might add, is something that you, Bruce, have repeatedly pointed out when you caution people not to design their own ciphers.

See:

for a related rant.

Also: It's not an attack on CS or cryptographers.

]]>