FHE-DiSNN: A New Era for Privacy-Preserving Neural Networks

This article outlines Privasea’s cutting-edge approach to designing secure spiking neural networks, executing fully homomorphic encrypted evaluations, managing error control, and optimizing overall performance. By deeply integrating Spiking Neural Networks (SNNs) with Fully Homomorphic Encryption (FHE), Privasea’s FHE-DiSNN framework introduces a new paradigm for Confidential AI.


As the intersection of "AI + Privacy" becomes a global focal point, how to efficiently run AI models over encrypted data has become crucial for success. As an innovation engine in privacy AI, Privasea not only holds international patents in homomorphic clustering and encrypted retrieval but has also made significant breakthroughs in encrypted spiking neural networks (FHE-DiSNN).

Why Combine Spiking Neural Networks with Fully Homomorphic Encryption?

Mainstream privacy-preserving AI often depends on polynomial approximations of activation functions like ReLU or Sigmoid, or uses secret sharing to assist FHE. These introduce errors or require heavy interaction. Spiking Neural Networks (SNNs), inspired by biological neurons, use binary pulses ({0,1}) as signals. This structure maps naturally onto TFHE’s binary logic gates, making them ideal for encrypted inference.

Key benefits of spiking neural networks:

- Binary spike-based activation simplifies FHE operations.

- Sparse activation (neurons fire only when their potential crosses a threshold) reduces the amount of homomorphic evaluation required.

- Discretization error is controlled and mathematically bounded.
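To make the spike-based activation concrete, here is a minimal plaintext sketch of the Integrate-and-Fire dynamics the article refers to. Function and parameter names (and the reset-to-zero behavior) are illustrative assumptions, not Privasea's actual implementation:

```python
def if_neuron(inputs, weights, threshold=1.0):
    """Minimal Integrate-and-Fire (IF) neuron sketch: accumulate
    weighted binary inputs into a membrane potential and emit a
    binary spike (0/1) whenever the threshold is crossed.
    Reset-to-zero after firing is an assumption for illustration."""
    potential = 0.0
    spikes = []
    for x in inputs:  # one binary input vector per time step
        potential += sum(w * xi for w, xi in zip(weights, x))
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # hard reset after a spike
        else:
            spikes.append(0)
    return spikes
```

Because the output at every step is strictly 0 or 1, it maps directly onto TFHE's binary plaintext space, which is the compatibility the article highlights.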

From SNN to FHE-DiSNN: Framework Design

Privasea introduces a three-step pipeline:

  1. SNN Modeling: Based on the Integrate-and-Fire (IF) model, simulating biological neurons.
  2. DiSNN Discretization: Maps inputs, weights, and potentials to integer domains with provable error bounds.
  3. FHE-DiSNN Execution: Leverages programmable bootstrapping in TFHE to implement the encrypted weighted sum, spike activation (FHE-Fire), and reset (FHE-Reset).
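The discretization and execution steps above can be sketched in plaintext. The scale factor, function names, and reset-to-zero semantics are illustrative assumptions; under encryption, `fhe_fire` and `fhe_reset` would be realized as lookup tables evaluated via TFHE's programmable bootstrapping rather than as Python branches:

```python
SCALE = 64  # assumed discretization resolution, for illustration only

def discretize(x):
    """Map a real-valued input, weight, or potential onto the
    integer domain that DiSNN operates in."""
    return round(x * SCALE)

def fhe_fire(potential_int, threshold_int):
    """Plaintext analogue of FHE-Fire: emit a binary spike when the
    integer membrane potential reaches the integer threshold."""
    return 1 if potential_int >= threshold_int else 0

def fhe_reset(potential_int, threshold_int):
    """Plaintext analogue of FHE-Reset: clear the potential after a
    spike, leave it unchanged otherwise (reset-to-zero is assumed)."""
    spike = fhe_fire(potential_int, threshold_int)
    return (1 - spike) * potential_int
```

Since both functions depend on a single integer input crossing a fixed threshold, each can be folded into one bootstrapping lookup, which is why the binary spike formulation is a natural fit for TFHE.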

From Theory to Practice: Error Controllability Analysis and Performance Verification of DiSNN

In neural network inference based on third-generation fully homomorphic encryption (FHEW/TFHE), discrete error control and noise management are pivotal to system success. Privasea's FHE-DiSNN research provides systematic error analysis with theoretical guarantees:

Error Bound Theorem: For IF/LIF models, the discretization error depends on the weight distribution and the spike count. When weights are distributed on [-1/2, 1/2], the expected discretization error is approximately λ/4, where λ denotes the Poisson spike intensity. With appropriate parameter configuration, the model's accuracy loss can therefore be kept within provable bounds.
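One simple reading of the λ/4 figure: a weight w drawn uniformly from [-1/2, 1/2] rounds to the nearest integer, 0, so each of the N ~ Poisson(λ) spikes passing through that synapse carries error w, and E|Nw| = E[N]·E|w| = λ · 1/4. The Monte Carlo sketch below checks that figure under this assumed model (which is our illustrative reading, not Privasea's actual derivation):

```python
import math
import random

def expected_discretization_error(lam, trials=200_000, seed=0):
    """Monte Carlo estimate of E|N*w| where w ~ U[-1/2, 1/2] is the
    rounding error of a weight discretized to 0, and N ~ Poisson(lam)
    is the number of spikes through that synapse. The analytic value
    under this model is lam/4."""
    rng = random.Random(seed)
    L = math.exp(-lam)  # for Knuth's Poisson sampler
    total = 0.0
    for _ in range(trials):
        w = rng.uniform(-0.5, 0.5)
        # sample N ~ Poisson(lam) via Knuth's method (fine for small lam)
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            k += 1
        total += abs(k * w)
    return total / trials
```

For λ = 2 the estimate converges to 0.5, matching λ/4 under this model.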

On the MNIST dataset, Privasea validated its framework's effectiveness:

- FHE-DiSNN loses only 0.6% accuracy compared to plaintext SNN.

- It outperforms traditional FHE-DNNs, which lose 0.99%.

- Shows strong resistance to noise and discretization errors.

We emphasize that SNNs ("third-generation neural networks") remain in early development, and FHE-DiSNN represents the field's first implemented attempt. Though its current computational efficiency still falls short of practical standards, its FHEW/TFHE compatibility and error controllability give it strong potential in future privacy-computing architectures. FHE-DiSNN is poised to become a critical piece of the privacy AI ecosystem. Where biological neuronal mechanisms converge with cryptographic precision, Privasea AI may help move privacy computing from "feasible" to "high-performance."

