… why addition and multiplication alone make any scheme fully homomorphic…

The simple answer is that the only numbering system a CPU understands is integers within a fixed field, and the only operators it needs to perform the basic maths operations are:

The bit-wise / logical operators XOR, AND and OR, and the word-wise / mathematical operators ADD and LSL, along with branching instructions.

To do SUB you use “two’s complement”, where you use XOR and ADD to complement the number (inverting all the bits and adding one converts an integer from positive to negative).
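
That construction can be sketched in a few lines of Python (an 8-bit word width chosen purely for illustration):

```python
# Sketch: subtraction built from XOR (bit inversion) and ADD, using
# two's complement on a fixed word width (8 bits here for illustration).
WIDTH = 8
MASK = (1 << WIDTH) - 1  # 0xFF, an all-ones word

def negate(x):
    # Invert all bits (XOR with all-ones) and add one.
    return ((x ^ MASK) + 1) & MASK

def sub(a, b):
    # a - b becomes a + (-b), so only XOR and ADD are needed.
    return (a + negate(b)) & MASK

print(sub(7, 3))   # 4
print(sub(3, 7))   # 252, i.e. -4 read as an unsigned 8-bit word
```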

If you examine the truth table of the AND gate you will find it is the same as a one-bit multiplier. Using this along with LSL will enable you to do the equivalent of “school multiply” to any size you require.
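
A minimal shift-and-add sketch of that idea (again at a toy 8-bit multiplier width):

```python
# Sketch: "school multiply" from AND (as a one-bit multiplier) and LSL.
WIDTH = 8

def mul(a, b):
    result = 0
    for i in range(WIDTH):
        bit = (b >> i) & 1          # select bit i of the multiplier
        # AND acts as the one-bit multiplier: a & -1 == a, a & 0 == 0.
        result += (a & -bit) << i   # LSL positions each partial product
    return result

print(mul(13, 11))  # 143
```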

If you use an array of logic gates to do multiplies (such as the Wallace tree) you end up with a result that is twice the width of the input words. You can output all of this, or only part of it. For ordinary multiplication you assume that the radix point is to the right of the least significant bit of the lowermost word. However, if you normalise numbers to the range 0-1, then the radix point falls to the left of the most significant bit of the uppermost word. This latter method is used for floating point numbers, with the actual result exponent calculated separately (by addition).
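
The two readings of the double-width result can be illustrated with a toy 8-bit example:

```python
# Sketch: an 8x8 multiply yields a 16-bit result; where you place the
# radix point decides which half you read out.
WIDTH = 8

a, b = 0xC0, 0x40            # as integers: 192 and 64
full = a * b                 # 0x3000, twice the input width

# Integer view: radix point to the right of the least significant bit,
# so ordinary (truncating) multiplication keeps the low word.
low_word = full & 0xFF
# Fractional view: inputs normalised to [0, 1), radix point to the left
# of the most significant bit, so keep the high word.
# 0xC0/256 = 0.75, 0x40/256 = 0.25, product 0.1875 = 0x30/256.
high_word = full >> WIDTH

print(hex(low_word), hex(high_word))  # 0x0 0x30
```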

Division within a field can be done by multiplication; you can look up various ways to do this, and one such method can be found in Knuth.
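
One standard way (not necessarily the one in Knuth) is to multiply by the modular inverse, which Fermat's little theorem gives directly when the field size is prime:

```python
# Sketch: in the field of integers mod a prime p, division is
# multiplication by an inverse; Fermat gives b^(p-2) = b^-1 (mod p).
p = 2**31 - 1  # a Mersenne prime

def div_mod_p(a, b):
    inv_b = pow(b, p - 2, p)   # multiplicative inverse of b
    return (a * inv_b) % p

print(div_mod_p(10, 2))  # 5
assert (div_mod_p(7, 3) * 3) % p == 7   # "7 / 3" times 3 recovers 7
```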

I hope that gives you sufficient info to either answer your question, or to give you a starting point to search for answers.

]]>Thanks!

]]>So where does the strange idea that it takes longer come from?

More problematic, it seems to me, is that any algebraist can decipher most kinds of homomorphism easily. If division is implemented in the arithmetic, one just has to try x/x, in the encrypted domain, for any encrypted x, to get the encryption of 1. Then one gets the encryption of 2 by doing the encrypted arithmetic for 1+1. Etc. So one constructs the code book in 2^32 steps.
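
A toy illustration of that codebook attack (every name here is hypothetical; the "encryption" is just a secret permutation standing in for a deterministic scheme):

```python
# Toy codebook attack. enc() is a secret the attacker cannot call;
# enc_add/enc_div stand for the homomorphic operations they CAN call.
import random

N = 16                                   # tiny plaintext space for the demo
perm = list(range(N)); random.shuffle(perm)
dec = {c: p for p, c in enumerate(perm)}

def enc(x): return perm[x]               # secret
def enc_add(cx, cy): return perm[(dec[cx] + dec[cy]) % N]
def enc_div(cx, cy):                     # exact division only; x/x suffices
    x, y = dec[cx], dec[cy]
    return perm[x // y] if y != 0 and x % y == 0 else None

# Attack: any ciphertext of a nonzero value gives enc(1) = c/c; repeated
# homomorphic addition then enumerates the whole codebook.
observed = enc(11)                       # any observed nonzero ciphertext
one = enc_div(observed, observed)
codebook, cur = {}, one
for plain in range(1, N):
    codebook[cur] = plain
    cur = enc_add(cur, one)
assert all(perm[p] == c for c, p in codebook.items())
```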

Yes, I can defeat that, but you’d have to ask me!

If division were implemented as a software routine instead, then one would likely have good evidence for certain constants (0, 1, 2, ..) being embedded encrypted in the routine. And one could probably recognise the algorithm anyway and figure out which constants were in use where. Please correct me if you know better! All I can think of that *might* defeat that is implementing Fourier’s algorithm for modular division, but then one might as well do it in hardware.

And if one merely has “<” implemented encrypted, or any other ordering, then one gets 0 as the unique number that is greater than one fewer numbers than it is less than. Etc.
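
A toy sketch of that ordering attack (all names hypothetical): with only an encrypted comparison oracle, the ciphertext of 0 is identifiable as the one that compares below every other ciphertext.

```python
# Toy ordering attack: the only oracle available is an encrypted "<".
import random

N = 16
perm = list(range(N)); random.shuffle(perm)
dec = {c: p for p, c in enumerate(perm)}

def enc_lt(cx, cy): return dec[cx] < dec[cy]   # attacker-accessible oracle

cts = list(perm)                               # all observed ciphertexts
# enc(0) is less than all N-1 other ciphertexts, so it wins this count.
zero_ct = max(cts, key=lambda c: sum(enc_lt(c, o) for o in cts))
print(dec[zero_ct])  # 0
```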

So how would encrypted arithmetic defeat any attacker with less than an ounce of brains? Further defense is required (which I can think of, but I have never seen mentioned).

But all this about “takes longer” is just nonsense. Do people somehow imagine (falsely) that encrypted arithmetic is just done by sticking hardware codecs on the inputs and outputs of an ALU? That WOULD make it take longer, but it’s a daft way to go. And I’m not even sure it would make things take longer overall, given that a processor is pipelined .. it merely needs to have a throughput of 1 arithmetic operation per cycle, never mind how many cycles the operation actually takes .. 100 would be a bit much, but 10 would be fine. So long as one can do 1/10th of an operation per cycle as a single stage, then a pipeline 10 stages long for the operation would average one operation per cycle.
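
The pipelining arithmetic can be checked on the back of an envelope (stage count purely illustrative):

```python
# Sketch of the pipelining argument: a 10-stage unit has 10-cycle latency
# per operation, but with one independent operation entering per cycle,
# N operations finish in N + 9 cycles, so throughput tends to 1 op/cycle.
STAGES = 10

def cycles_for(n_ops, stages=STAGES):
    return n_ops + (stages - 1)      # pipeline-fill latency is paid once

for n in (1, 10, 1000):
    total = cycles_for(n)
    print(f"{n} ops -> {total} cycles ({n / total:.3f} ops/cycle)")
```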

]]>The last page of the article provided some example use cases.

]]>@Natanael :

Yeah, you can do everything that is necessary for statistical computations (mostly logarithms etc.) by using these approaches:

http://dx.doi.org/10.1109/WIFS.2010.5711458

and

http://www.springerlink.com/content/nr5433524x23/#section=1029574&page=1&locus=0

(Disclaimer: this is our work on secure two-party computations).

]]>Even better, user passwords are used as keys in the SQL proxy, so if the server is seized the only people compromised are those who are logged in (and have their keys in the proxy).

It’s still basically a research prototype, but eventually they’re going to release a full beta version.

]]>All accounts remain anonymous and encrypted. The customer can still transact, adding and subtracting funds. Totals hash maintains data integrity and verifies completeness/accuracy. Cash/Asset pools back up the encrypted accounts and tie out at the summary level, but reveal nothing about the transactions.

The banker knows only that it has the money it should at an aggregate level; the customer is the only one who can decrypt the account to transaction-level detail or make any transactions.

Transactions cannot be tied to an individual customer.

Thoroughly illegal in the US for that very reason: KYC

]]>I’m reminded of that old CS joke about the algorithm that correctly predicts the weather worldwide for the next 24 hours, but takes 3 days to run.

The application that comes to mind for me is smaller computations, instead of data-crunching, like third-party decentralized authentication (e.g. OpenID). If you could get OpenID to verify your identity to a third party for login purposes without even having to give them your personal details (only an encrypted version of them), and log in by providing an encrypted version of your key, it’d open some interesting doors.

]]>