Published in News

IBM’s homomorphic encryption library goes like the clappers

09 March 2018

75 times faster

IBM has rewritten its C++ homomorphic encryption library, which it says now runs up to 75 times faster.

For those who read too fast and thought we were talking about something else, homomorphic encryption is a technique used to operate on encrypted data without decrypting it. This means that it can make sensitive operations super secure.

This has become important as companies encrypt cloud-hosted databases, and work on them without converting records back to plain text.

IBM has been thinking about homomorphic encryption since 2016 and released its HElib C++ library three years ago. The downside was the huge performance hit that came with using it: early homomorphic operations ran "100 trillion times" slower than the equivalent plaintext operations. IBM later accelerated the library by a factor of two million, running on a 16-core server.

Released on GitHub, the latest version gets its performance kick from a "re-implementation of homomorphic linear transformations", making it between 15 and 75 times faster.

According to IBM's Shai Halevi, the bulk of the time was spent moving data among the slots in the encrypted vector. This is done using a mathematical operation that maps an object to itself (an automorphism), and the computational cost comes from the number of automorphisms that have to be applied.

“The main cost of applying such an automorphism to a ciphertext is actually that of “key switching”: after applying the automorphism to each ring element in the ciphertext (which is actually a very cheap operation), we end up with an encryption relative to the “wrong” secret key; by using data in the public key specific to this particular automorphism — a so-called “key switching matrix” — we can convert the ciphertext back to one that is an encryption relative to the “right” secret key”, the paper said.

Reducing the number of automorphisms involved refactoring many of the necessary computations, and some of the calculations were shifted out of the library's main loop.

The researchers say that for common operations they were able to cut the size of the key-switching matrices by 33 to 50 per cent.

The GitHub page warns that in its present state the library is mostly meant for researchers working on HE and its uses. That is, it provides low-level routines (set, add, multiply, shift and so on) with as much access to optimisations as possible; the plan is to develop higher-level routines later.

Last modified on 09 March 2018