LWE: distinguish ct-pt mul and ct-scalar mul #1604

Open
ZenithalHourlyRate opened this issue Mar 22, 2025 · 2 comments
Labels
dialect: lwe Issues about the LWE dialect

Comments

@ZenithalHourlyRate
Collaborator

In the LWE dialect we already have ct-ct multiplication, while ct-pt multiplication is modeled in the bgv/ckks dialects. I propose we also support ct-scalar mul (some literature calls it ct-constant mul).

All the multiplication variants

For a ct of the form (-as + m + te, a), where a boldface letter denotes a ring element (or vector), there are the following ways to multiply it (a worked example follows the list):

  • Tensor product: tensor product with another ciphertext ct'
  • Multiply by a ring element in Zt[X]/(X^N + 1), say m': the result becomes (-asm' + mm' + tem', am')
  • Multiply by a scalar in Zt, say k: the result becomes (-kas + km + tke, ak)
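To make the last two cases concrete, here is the multiply-by-2 example written out in the notation above (a sketch only; the tensor-product case is omitted because it changes the ciphertext degree):

```latex
% Multiply ct = (-as + m + te, a) by the constant 2, BGV-style notation.
\begin{aligned}
\text{ct-pt, with } m' = \mathrm{Encode}([2, 2, \dots]) :\quad
  & (-a s m' + m m' + t e m',\; a m') \\
\text{ct-scalar, with } k = 2 \in \mathbb{Z}_t :\quad
  & (-2 a s + 2 m + 2 t e,\; 2 a)
\end{aligned}
```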

Different lowerings

For arith.mul %ct, %c2 where %c2 = arith.constant 2, there are the following ways to lower it correctly:

  • encrypt 2 to get a ciphertext ct' and do ct-ct multiplication
  • splat 2 into a vector [2, 2, ...], properly encode it (full CRT encoding or inverse canonical encoding) into m', then do ct-pt multiplication
  • directly do ct-scalar multiplication

Different noise behavior

One benefit of modelling ct-scalar mul is that it has much smaller noise growth. For the multiply-by-2 example, the noise terms are tem' and tke respectively; the latter is clearly much friendlier.

Note that the overflow of [mm']_t into noise also counts.
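A minimal numeric sketch of the gap between the two noise terms, using plain numpy with illustrative toy parameters (the values of N, t, and the coefficient ranges are assumptions, not tied to any real scheme):

```python
# Toy comparison of the ct-pt noise term t*e*m' vs the ct-scalar term t*k*e.
# All parameters are illustrative; this is not any real scheme's arithmetic.
import numpy as np

N, t = 4096, 65537                       # toy ring degree and plaintext modulus
rng = np.random.default_rng(0)

def negacyclic_mul(a, b):
    """Multiply two length-N coefficient vectors modulo X^N + 1."""
    full = np.convolve(a, b)             # length 2N - 1
    res = full[:N].copy()
    res[: N - 1] -= full[N:]             # X^N = -1 wrap-around
    return res

e = rng.integers(-8, 9, N)               # small noise polynomial
m_prime = rng.integers(0, t, N)          # an encoded plaintext (ct-pt path)
k = 2                                    # the scalar constant (ct-scalar path)

noise_ct_pt = t * negacyclic_mul(e, m_prime)   # term t*e*m'
noise_ct_scalar = t * k * e                    # term t*k*e

print(np.abs(noise_ct_pt).max())         # worst case ~ t * N * ||e|| * ||m'||
print(np.abs(noise_ct_scalar).max())     # only t * |k| * ||e||
```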

Cases where ct-scalar mul is wanted

The output of a polynomial approximation contains a lot of ct-scalar multiplications; if we just use the ct-pt version instead, the noise growth would be quite unexpected.

Specifically, the sine-function approximation in CKKS bootstrapping needs ct-scalar multiplication, and if we do not control the noise/precision carefully, the result could be unexpected.

For scale management, the adjust scale operation is also a ct-scalar multiplication.
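As a hedged sketch of that last point (standard CKKS scale arithmetic, not anything HEIR-specific): multiplying by an integer constant multiplies the ciphertext scale by that constant, so a scalar multiply followed by a rescale lets one land on a target scale.

```latex
% Sketch: adjust-scale via a scalar multiply followed by a rescale (CKKS).
\mathrm{ct} \text{ encrypting } \Delta \cdot m
  \;\xrightarrow{\;\times\, c\;}\;
  (c\Delta) \cdot m
  \;\xrightarrow{\;\mathrm{rescale}(q_\ell)\;}\;
  \frac{c\Delta}{q_\ell} \cdot m,
\qquad
c \approx \frac{\Delta' \, q_\ell}{\Delta}
\;\Rightarrow\; \text{resulting scale} \approx \Delta'.
```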

Backend support for this distinction

For OpenFHE, there are three sets of APIs:

EvalMult(ct, ct)
EvalMult(ct, pt)
EvalMult(ct, NativeInteger constant), or a double constant for CKKS

Lattigo also supports three sets of APIs, accepting an rlwe ciphertext, a vector of inputs, or a scalar.

@ZenithalHourlyRate ZenithalHourlyRate added the dialect: lwe Issues about the LWE dialect label Mar 22, 2025
@AlexanderViand-Intel
Collaborator

Yep, I think adding this across the whole stack (scheme dialects, lwe dialect, backend* dialects) would be a great idea! In addition to the noise benefits (and saving the encoding computation), it also helps alleviate memory pressure, as an int/double takes up basically no memory when compared to a full ptxt/poly.

*@backends: I don't know about the other RLWE HW backends, but HERACLES does expose "add_immediate"/"mul_immediate" ops.

@ZenithalHourlyRate
Collaborator Author

ZenithalHourlyRate commented Apr 2, 2025

One thing I found when experimenting with encoding implementations is that when the input vector is fully packed with the same value (e.g., all 1s), or when sparse packing is supported (like in OpenFHE), the encoded result is c + 0 * X^i (only a constant term), because the DFT cancels out all the sums of roots of unity (for both CRTPacking and InverseCanonicalPacking). Intuitively, if your value in the time domain is constant, then in the frequency domain you should observe only one spike.

So the conclusion is that ct-pt mul with a pt whose slots all hold the same value should have the same effect as ct-scalar mul, and we could give this a lower priority. But the memory-saving point still stands; and if we do not have full packing, the distinction remains, because zero padding does affect the encoded result.
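A quick numpy check of that observation, using a plain inverse DFT as a stand-in for the actual CRT / inverse canonical encodings (slot count and values are arbitrary):

```python
# Encoding a fully packed constant vector collapses to a constant term, while a
# zero-padded (sparsely packed) vector spreads over many coefficients.
# Plain ifft is used here as a stand-in for the real encoding maps.
import numpy as np

n = 8
full = np.ones(n)                                   # every slot holds 1
padded = np.concatenate([np.ones(3), np.zeros(5)])  # splat then zero-pad

print(np.round(np.fft.ifft(full), 6))    # [1, 0, 0, ...]: only the constant term
print(np.round(np.fft.ifft(padded), 6))  # spread across many coefficients
```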
