This is the same idea that Craig Gentry gave in his keynote at FHE.org 2025: a ciphertext-ciphertext matmul reduces to 4 plaintext matmuls of the same size, combined with a handful of key-switching operations whose cost doesn't exceed the matmul cost (toy sketch of the algebra after the abstract below).

https://eprint.iacr.org/2025/448
Jai Hyun Park
CryptoLab Inc., Lyon, France
Abstract. Matrix multiplication of two encrypted matrices (CC-MM) is a key challenge for privacy-preserving machine learning applications. As modern machine learning models focus on scalability, fast CC-MM on large datasets is increasingly in demand.

In this work, we present a CC-MM algorithm for large matrices. The algorithm consists of plaintext matrix multiplications (PP-MM) and ciphertext matrix transpose algorithms (C-MT). We propose a fast C-MT algorithm, which is computationally inexpensive compared to PP-MM. By leveraging high-performance BLAS libraries to optimize PP-MM, we implement large-scale CC-MM with substantial performance improvements. Furthermore, we propose lightweight algorithms, significantly reducing the key size from 1960 MB to 1.57 MB for CC-MM with comparable efficiency.

In a single-thread implementation, the C-MT algorithm takes 0.76 seconds to transpose a 2048 × 2048 encrypted matrix. The CC-MM algorithm requires 85.2 seconds to multiply two 4096 × 4096 encrypted matrices. For large matrices, our algorithm outperforms the state-of-the-art CC-MM method from Jiang-Kim-Lauter-Song [CCS'18] by a factor of over 800.
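To make the reduction concrete, here's a minimal toy sketch of the algebra (mine, not the paper's: it uses a scalar secret and integer matrices mod q, and ignores noise, CKKS encoding, rescaling, and the actual key-switching machinery). Writing an encrypted matrix as a pair (B, A) with B + A·s ≡ M (mod q), the product of two such pairs expands into exactly four plaintext matmuls of the component matrices; the degree-2 term is what key switching folds back down to degree 1 in the real scheme.

```python
import numpy as np

# Toy illustration of why CC-MM reduces to 4 plaintext matmuls (PP-MM).
# NOT the paper's encoding: real ciphertexts hold ring polynomials and
# need noise/rescaling management. Here a "ciphertext matrix" is a pair
# (B, A) of integer matrices mod q with a scalar secret s, satisfying
# B + A*s == M (mod q), so the algebra is visible without any noise.

q = 65537              # toy modulus (assumption, not from the paper)
n = 4                  # toy matrix dimension
rng = np.random.default_rng(0)
s = int(rng.integers(q))   # toy scalar "secret key"

def encrypt(M):
    """Return a pair (B, A) with B + A*s == M (mod q)."""
    A = rng.integers(q, size=M.shape, dtype=np.int64)
    B = (M - A * s) % q
    return B, A

def cc_mm(ct1, ct2):
    """Multiply two encrypted matrices via 4 plaintext matmuls.

    (B1 + A1 s)(B2 + A2 s) = B1 B2 + (B1 A2 + A1 B2) s + (A1 A2) s^2,
    giving a degree-2 ciphertext (C0, C1, C2). In a real scheme the C2
    term is folded back to degree 1 by key switching, whose cost the
    paper argues does not exceed the matmul cost.
    """
    B1, A1 = ct1
    B2, A2 = ct2
    C0 = (B1 @ B2) % q            # PP-MM 1
    C1 = (B1 @ A2 + A1 @ B2) % q  # PP-MMs 2 and 3
    C2 = (A1 @ A2) % q            # PP-MM 4
    return C0, C1, C2

def decrypt2(ct):
    """Decrypt a degree-2 ciphertext: C0 + C1*s + C2*s^2 (mod q)."""
    C0, C1, C2 = ct
    return (C0 + C1 * s + C2 * s * s) % q

M1 = rng.integers(100, size=(n, n), dtype=np.int64)
M2 = rng.integers(100, size=(n, n), dtype=np.int64)
prod = decrypt2(cc_mm(encrypt(M1), encrypt(M2)))
assert np.array_equal(prod, (M1 @ M2) % q)
```

The four `@` products are where essentially all the work is, which is presumably why delegating them to an optimized BLAS (as the paper does for PP-MM) pays off so much.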
Ha, yes. I saw the other paper go up on IACR but didn't fully understand what it was doing. Then I forgot about it, and when I saw Craig's talk and the discussions around it, I got excited.