double-precision variables in voting computer software are part of an evil plot to make undetectable changes in election results. One would hope they’d have some experienced programmers on staff,

Experience isn’t the problem; it’s an acute sense of inexperience that is missing.

General contractors are the problem. I’m doing myself a disservice here, but it’s true. What does some self-taught .NET programmer who went to school for 2 years and got an MBA and a cert in VBA know about the darker side of mathematics?

Have they even seen the Underhanded C Contest entries?

Plain sight can go a very very long way.
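A toy sketch of how such a thing can hide in plain sight (a hypothetical example, not from any real voting system): accumulating fractional vote weights in binary doubles quietly drifts away from exact arithmetic, and nothing about the code looks wrong at a glance.

```python
# Hypothetical toy example (not from any real voting system): summing
# fractional vote weights in binary doubles accumulates rounding error,
# while exact rational arithmetic does not.
from fractions import Fraction

ballots = 10_000
float_total = sum([0.1] * ballots)                        # double accumulation
exact_total = sum(Fraction(1, 10) for _ in range(ballots))

print(float_total == 1000.0)   # False: the doubles have drifted
print(exact_total == 1000)     # True: exact arithmetic matches intent
```

Whether such drift is an accident or a deliberate choice is exactly what is undetectable from a casual read of the source.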

*Why don’t you just play old pal games with…*

You haven’t changed! It’s easier to say “I was wrong” and move on than to make issues personal and subjective.

/ignore

*a) You were wrong but persistent (to avoid saying annoying).*

Yeah, that’s what happened. It’s not like the whole thing is above on this very page or anything. (Prime numbers have to be greater than 1, the divisors have to be positive, what @Curious wanted to know: you’ve been shown wrong over and over.)

*b) While you tried to hunt me down using whatever “interpretation” was necessary and bluntly ignoring the context, I tried to contribute, to help someone understand; based on what I could see I did this under the assumption that simplicity was more important than math perfection. It seems my perception was confirmed and about right.*

You seem to have missed that this interpretation (that @Curious wanted to know how hard prime factorization is) has since been confirmed by the person who’d know best: @Curious.

Earlier you claimed that the definition excluded 1 from being a prime and mocked the idea that the divisors under consideration have to be positive. Now you claim you assumed that simplicity was more important than math perfection.

See also:

*If one has to choose between someone who wants to help, even if not perfectly (but simple to understand) and someone who abuses questions for his private war, then I’m very cool about the choice most would pick.*

So even you agree that there were blemishes after all. But when that’s pointed out to you, your reaction is one of denial, ridicule, theories of persecution, *ad hominem*, and whatever else distracts from the facts. Just in this comment you talk about “hunt me down,” “private war,” “personally attack,” “didn’t succeed in muting me.”

*Finally, one should ask the question whether it’s any good for the community if some lurk waiting to quite personally attack others who merely try to contribute. My guess is that quite many will think twice and hesitate to contribute, knowing that someone lurks to attack them.*

Seeing a comment that says one of your comments may be less than 100% factually correct is quite the devastating experience. </sarcasm>

I have not attacked your person. I have not ascribed motives to you. I have made no comment on your level of expertise or your qualifications. I have made no comment on how smart or dumb you are. All that is your MO.

And that’s fine. I don’t mind (others might). But if your incessant accusations of non-existent personal attacks on you by me **or others** don’t stop now, I will ask the moderator to make it stop.

a) You were wrong but persistent (to avoid saying annoying).

b) While you tried to hunt me down using whatever “interpretation” was necessary and bluntly ignoring the context, I tried to contribute, to help someone understand; based on what I could see I did this under the assumption that simplicity was more important than math perfection. It seems my perception was confirmed and about right.

If one has to choose between someone who wants to help, even if not perfectly (but simple to understand) and someone who abuses questions for his private war, then I’m very cool about the choice most would pick.

Finally, one should ask the question whether it’s any good for the community if some lurk waiting to quite personally attack others who merely try to contribute. My guess is that quite many will think twice and hesitate to contribute, knowing that someone lurks to attack them.

As you can see, I have again explained something to someone. So you didn’t succeed in muting me. Why don’t you just play old pal games with Wael or someone?

*You are arbitrarily interpreting his question so as to fit your intentions.*

That must be why @Curious said the following immediately after your comment: “Ugh, I guess I should have used the phrase ‘prime factorization’ instead. 😐”

And thank you for the profound insight into my being. I had not realized what my intentions really are. Now I know.

I suspect the question you are really trying to ask is about factoring the product PQ of two large primes P and Q, as used by the RSA public key algorithm.

The answer, as @ab praeceptis is at pains to point out, is a hard one. Part of the problem is that even quite large primes are not really that scarce, thus the number of PQ products is actually quite high (there are two formulae you can combine to give an approximate number).

Thus the odds of two people sharing just one prime, “if and only if” (iff) the primes are properly randomly selected, should be vanishingly small. The problem, as various scans of the Internet have shown, is that in practice they are not. That is, it’s been found that between ten and thirty percent of PQ pairs on the likes of “Internet Appliances” have a common prime… Which is a bit of a problem, as there is a very fast algorithm to find that there are common primes without the pain of factoring any of the PQ pairs…
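The fast algorithm alluded to is essentially the greatest common divisor: if two moduli share a prime, `gcd()` exposes it immediately, no factoring required. A toy sketch with small stand-in primes (real moduli are 1024+ bits, but gcd stays cheap at any size):

```python
# If two RSA moduli share a prime, math.gcd recovers it instantly,
# with no factoring. Toy-sized primes for illustration only.
from math import gcd

p, q1, q2 = 1000003, 1000033, 1000037   # small stand-ins for large primes
n1 = p * q1                              # modulus from device 1
n2 = p * q2                              # modulus from device 2 (p reused!)

shared = gcd(n1, n2)
print(shared == p)                       # True: the common prime falls out
print(n1 // shared, n2 // shared)        # cofactors follow by plain division
```

The Internet-wide scans mentioned above do this pairwise (in batched form) across millions of collected public keys.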

Thus the likes of the NSA can find which PQ pairs are going to produce the best set to factor. Having done one pair, the remaining PQ pairs are very rapidly factored, giving a list of prime candidates to find other primes with. It is in effect a cascade process where each step produces fewer primes.

The reason this and other shortcuts work is that the random selection process is anything but random in many systems, especially the likes of embedded computer systems.

In effect their random number generation “lacks entropy” at startup, and there is not sufficient time between first power up and the selection of prime candidates for the PQ pair for entropy to accumulate. The NSA, amongst many other signals intelligence agencies, will know this and will have characterized the random numbers used in such products to significantly reduce the number of candidates for factoring guesses.
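A toy model of why low boot-time entropy yields shared or predictable primes: if the effective seed space at first power-up is tiny, independent devices walk the same “random” path. Python’s `random.Random` is used here deliberately as a stand-in for a weak, non-cryptographic generator; the primality check is naive trial division to keep the sketch self-contained.

```python
# Toy model of low boot-time entropy: devices seeding a PRNG from a
# tiny seed space derive identical "random" prime candidates.
# random.Random is deliberately a stand-in for a WEAK generator.
import random

def is_prime(n):
    # naive trial division; fine for the small toy sizes used here
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def weak_prime(seed, bits=20):
    rng = random.Random(seed)            # seed drawn from a tiny space
    while True:
        candidate = rng.getrandbits(bits) | 1
        if is_prime(candidate):
            return candidate

# Two "devices" booting with the same low-entropy seed:
print(weak_prime(42) == weak_prime(42))  # True: identical prime candidates
```

An agency that has characterized the product only needs to enumerate the small seed space to reproduce every prime the device could ever pick.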

There is also something called cryptovirology, investigated by Adam Young and his thesis supervisor Moti Yung.

Amongst other things they developed something they called kleptography, which works on another trick that, whilst simple, takes more typing than you looking the details up ( http://www.cryptovirology.com/cryptovfiles/research.html )

In essence, a program that makes the PQ pairs for RSA certificates can be backdoored so as to hide information that can be used to vastly reduce the factorisation workload.

So whilst factoring a properly generated PQ pair is significantly hard and resource expensive, there are quite a number of tricks that could easily be hidden in closed source software. Which kind of makes the factoring point moot.

Every number is divisible. Some special ones (primes) are divisible by only themselves and 1. Most numbers, however, are divisible by more numbers; one could say that they are “composed numbers” in that they are the result of one or more multiplications (of primes, but don’t care for the moment).

So, 12, for example, seen from this perspective is “composed”, namely by 6 and 2 (6 * 2 = 12) but also by 3 and by 4 (3 * 4 = 12). But, and that’s important, both 4 and 6 are “composed” numbers, too, namely 6 = 3 * 2 and 4 = 2 * 2. Factorization is about the “final” components of which a number is composed, which aren’t divisible any more (other than by 1 or themselves). For 12 that’s 2 and 3 (12 = 2 * 2 * 3). Those “final components” which aren’t divisible anymore (other than by themselves and 1) are called prime factors.
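That description maps directly onto the classic trial-division sketch: keep dividing out the smallest divisor until only the “final components” remain.

```python
# Keep dividing out the smallest divisor; whatever divides at that
# point is necessarily prime, and any leftover > 1 is prime too.
def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(12))    # [2, 2, 3]
```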

While this process of finding the primes of which a number is composed is simple with small numbers, it quickly becomes extremely expensive with larger numbers, because first one must find (often intermediate) components at all (like 6, an “intermediate” component) and then the components of which those are composed (e.g. 2 and 3).

This is a more or less exponential process, which means that it’s close to NP (given that the verification is a simple process), or in other, more mundane words: “it’s extremely complicated and processor expensive enough not to be feasible in reasonable time.”

Moreover there are (for any large number) many potential candidate components. This is easy to see when I ask you for the prime factors of 718829471638409. Given a sufficiently large number, there is a very high number of candidates (even naive trial division has to consider candidates up to roughly the square root of the number), and it’s also not reasonably (or at all) feasible to keep lookup tables.

It’s however neither a quasi-random process nor a process of strictly linear high complexity, because there are some smart approaches that often help to simplify the problem. To offer a very trivial example: if a number ends in 0 or in 5, then it’s obviously divisible by 5. Even simpler, any even number > 2 is divisible by 2, etc.

But even with those simplifications (some of which are not at all simple at first sight) one usually ends up with massive trial and error. A typical process would be to try for divisibility by, say, the first 100 primes.

To make it “worse”, there is a very strong tendency for the number of components to increase along with the size of the number; i.e. a number with 5000 digits is highly likely to have more factors than a number with, say, 500 digits.

Understanding this, it should become evident that factorization of very large numbers is a problem that is very expensive to solve, as there are only so many simplification “tricks”. The decisive point is that, to this day, there is no cheap way to *find* the factors of a large number; there is no “formula” that yields them.
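One caveat worth flagging here: merely *testing* whether a number is prime is far cheaper than finding its factors; probabilistic tests such as Miller–Rabin run quickly even on huge numbers, though they reveal no factors of a composite. A minimal sketch, using a fixed witness set known to give a deterministic answer for all n below roughly 3.3 × 10^24:

```python
# Primality *testing* (not factoring) via Miller-Rabin. The fixed
# witness set below is deterministic for all n < ~3.3e24; a "False"
# answer proves compositeness without exposing any factor.
def is_probable_prime(n):
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:                 # write n - 1 as d * 2**r, d odd
        d //= 2
        r += 1
    for a in witnesses:
        x = pow(a, d, n)              # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a is a witness: n is composite
    return True

print(is_probable_prime(2**61 - 1))   # True: a Mersenne prime, tested fast
```

This is exactly how RSA key generators screen their candidate primes; the test’s speed is what makes generating keys cheap while breaking them stays expensive.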

Important sidenote: that in itself would be quite worthless for crypto if the problem of factorization didn’t meet a second requirement, namely the one I mentioned in the context of “NP”. This is that one direction, the direction the opponent has to take, is extremely expensive, while the other direction, often called verification, is very simple. Factoring a 5000 digit number, for instance, is very expensive; verifying whether the solution (the factors found) is correct is quite simple and cheap. That is a pattern you will very often find in crypto.
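The asymmetry in code: checking a proposed factorization is a single multiplication, while finding it (naive trial division here, with toy-sized primes) is a long search.

```python
# Verification is one multiplication; the search is ~100,000 trial
# divisions even at these toy sizes. The gap widens brutally with size.
def verify(n, p, q):
    return p > 1 and q > 1 and p * q == n

def factor_by_search(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

n = 104723 * 104729                 # product of two ~5-digit primes
p, q = factor_by_search(n)          # the expensive direction
print(verify(n, p, q))              # True, checked with a single multiply
```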

Moreover (but that is to be taken with a caveat, because the statement is closely linked to the status quo of common processors and might change), multiplication (which, along with division, is needed for testing candidate factors) is one of the more expensive operations on common processors.

Regarding your last sentence: no, quite the contrary. It is *advantageous* for crypto that the relative density of primes gets lower as the numbers get larger (because a low prime density in a given number range means a low hit rate for an attacker guessing at candidates).
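The density remark can be checked directly: by the prime number theorem the count of primes up to N grows like N/ln(N), i.e. the relative density thins out roughly like 1/ln(N). A quick sieve sketch:

```python
# Counting primes with a sieve of Eratosthenes; the counts track the
# prime number theorem's N/ln(N), so density thins as ~1/ln(N).
from math import log

def count_primes(limit):
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"                       # 0 and 1 are not prime
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return sum(sieve)

for n in (10**3, 10**4, 10**5):
    print(n, count_primes(n), round(n / log(n)))   # actual count vs estimate
```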

*Disclaimer for super-smart, wikipedia-savvy, picky people looking for something, anything to “prove” that I say wrong things: the above is not meant for math in university. It’s meant to be simple enough to help less experienced people get a rough understanding of the problem domain.*