no attribute 'HeaanContext' #11

Open
gillesoldano opened this issue Dec 13, 2022 · 4 comments
Labels
question Further information is requested


@gillesoldano

Hi

I'm following the XGBoost notebook and trying to reproduce the same results locally. The notebook runs without any issue in the Docker image, but on my machine I get this error:
AttributeError: module 'pyhelayers' has no attribute 'HeaanContext'
This happens when running the line:
he_run_req.set_he_context_options([pyhelayers.HeaanContext()])

I'm using the latest pyhelayers version, 1.5.1.0, and I pulled the latest pylab Docker image.
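
For anyone hitting the same error, a quick way to confirm which build is installed and whether it exposes the HEaaN binding, using only standard Python introspection (nothing pyhelayers-specific is assumed here):

import importlib.metadata
import pyhelayers

# which pyhelayers build is actually being imported
print(importlib.metadata.version("pyhelayers"))

# True only if this build ships the HEaaN backend binding
print(hasattr(pyhelayers, "HeaanContext"))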

@dubek (Contributor) commented Dec 13, 2022

Thank you @stealthBanana for trying HElayers.

Indeed, the pyhelayers build included in the helayers-pylab Docker image does include the HEaaN backend, but the pyhelayers released on PyPI (which you install with pip) doesn't include it yet. We're working on a release of the Python package that will include it.
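
Until then, one possible workaround with the pip package is to pick a backend that the PyPI build does ship. The sketch below assumes the SEAL-backed DefaultContext is available in that build (worth verifying with hasattr first):

import pyhelayers

he_run_req = pyhelayers.HeRunRequirements()
# stand-in backend until the PyPI build adds HEaaN
he_run_req.set_he_context_options([pyhelayers.DefaultContext()])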

@dubek added the question (Further information is requested) label on Dec 13, 2022
@dubek (Contributor) commented Mar 1, 2023

Hi @stealthBanana, pyhelayers 1.5.2.0 was released yesterday. It should have HEaaN support built in.
Can you try it? (pip install --upgrade pyhelayers)
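
If the upgrade succeeded, the exact line from the original report should now run without the AttributeError:

import pyhelayers

he_run_req = pyhelayers.HeRunRequirements()
he_run_req.set_he_context_options([pyhelayers.HeaanContext()])  # previously raised AttributeError
print("HEaaN context accepted")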

@gillesoldano (Author)

Hi, I'm trying it right now. However, I encountered another problem while following the deep neural network example. Since I don't have access to the resnet18 model of the Docker container, I tried to export a torchvision resnet18 myself as a .onnx file; when calling nnp.import_from_files I get the following error:

RuntimeError: Neural network architecture initialization from ONNX failed: Neural network initialization from ONNX encountered an operator type that is currently not supported: Relu

How could I retrieve that specific resnet implementation? Is it pretrained on ImageNet? And could I apply transfer learning to it and fine-tune it?

@dubek (Contributor) commented Mar 7, 2023

FHE models don't support operations like ReLU, which cannot be directly implemented under FHE: homomorphic schemes natively evaluate only polynomial operations (additions and multiplications), so non-polynomial functions must be replaced or approximated.

The FHE-friendly resnet18 model that our notebook uses is generated by the script below; note that it replaces every ReLU with torch.square and uses average pooling instead of max pooling. It has random weights, since it's only meant to demonstrate the timing of running the model on encrypted inputs.

#!/usr/bin/env python3
import torch
import torch.nn as nn
import os
import h5py


# define an FHE-friendly ResNet-18: same layer layout as torchvision's resnet18,
# but with ReLU replaced by squaring and max-pooling by average pooling
# (both polynomial, hence expressible under FHE)
class MyNeuralNet(nn.Module):
    def __init__(self):
        super(MyNeuralNet, self).__init__()

        self.conv_0 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7, stride=2, padding=3)
        self.bn_1 = nn.BatchNorm2d(num_features=64)
        self.pool_3 = nn.AvgPool2d(kernel_size=3, stride=2, padding=1)

        self.conv_4 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.bn_5 = nn.BatchNorm2d(num_features=64)
        self.conv_7 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.bn_8 = nn.BatchNorm2d(num_features=64)

        self.conv_11 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.bn_12 = nn.BatchNorm2d(num_features=64)
        self.conv_14 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.bn_15 = nn.BatchNorm2d(num_features=64)

        self.conv_18 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=2, padding=1)
        self.bn_19 = nn.BatchNorm2d(num_features=128)
        self.conv_21 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1)
        self.bn_22 = nn.BatchNorm2d(num_features=128)
        self.conv_23 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=1, stride=2, padding=0)
        self.bn_24 = nn.BatchNorm2d(num_features=128)

        self.conv_27 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1)
        self.bn_28 = nn.BatchNorm2d(num_features=128)
        self.conv_30 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1)
        self.bn_31 = nn.BatchNorm2d(num_features=128)

        self.conv_34 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=1)
        self.bn_35 = nn.BatchNorm2d(num_features=256)
        self.conv_37 = nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.bn_38 = nn.BatchNorm2d(num_features=256)
        self.conv_39 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=1, stride=2, padding=0)
        self.bn_40 = nn.BatchNorm2d(num_features=256)

        self.conv_43 = nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.bn_44 = nn.BatchNorm2d(num_features=256)
        self.conv_46 = nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.bn_47 = nn.BatchNorm2d(num_features=256)

        self.conv_50 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=2, padding=1)
        self.bn_51 = nn.BatchNorm2d(num_features=512)
        self.conv_53 = nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1)
        self.bn_54 = nn.BatchNorm2d(num_features=512)
        self.conv_55 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=1, stride=2, padding=0)
        self.bn_56 = nn.BatchNorm2d(num_features=512)

        self.conv_59 = nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1)
        self.bn_60 = nn.BatchNorm2d(num_features=512)
        self.conv_62 = nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1)
        self.bn_63 = nn.BatchNorm2d(num_features=512)

        self.pool_66 = nn.AvgPool2d(kernel_size=7, stride=1, padding=0)

        self.fc_68 = nn.Linear(512, 1000)

    def forward(self, x):

        x = self.conv_0(x)
        x = self.bn_1(x)
        x = torch.square(x)
        x = self.pool_3(x)

        b1 = self.conv_4(x)
        b1 = self.bn_5(b1)
        b1 = torch.square(b1)
        b1 = self.conv_7(b1)
        b1 = self.bn_8(b1)

        x = torch.add(x, b1)
        x = torch.square(x)

        b1 = self.conv_11(x)
        b1 = self.bn_12(b1)
        b1 = torch.square(b1)
        b1 = self.conv_14(b1)
        b1 = self.bn_15(b1)

        x = torch.add(x, b1)
        x = torch.square(x)

        b1 = self.conv_18(x)
        b1 = self.bn_19(b1)
        b1 = torch.square(b1)
        b1 = self.conv_21(b1)
        b1 = self.bn_22(b1)

        b2 = self.conv_23(x)
        b2 = self.bn_24(b2)

        x = torch.add(b1, b2)
        x = torch.square(x)

        b1 = self.conv_27(x)
        b1 = self.bn_28(b1)
        b1 = torch.square(b1)
        b1 = self.conv_30(b1)
        b1 = self.bn_31(b1)

        x = torch.add(x, b1)
        x = torch.square(x)

        b1 = self.conv_34(x)
        b1 = self.bn_35(b1)
        b1 = torch.square(b1)
        b1 = self.conv_37(b1)
        b1 = self.bn_38(b1)

        b2 = self.conv_39(x)
        b2 = self.bn_40(b2)

        x = torch.add(b1, b2)
        x = torch.square(x)

        b1 = self.conv_43(x)
        b1 = self.bn_44(b1)
        b1 = torch.square(b1)
        b1 = self.conv_46(b1)
        b1 = self.bn_47(b1)

        x = torch.add(x, b1)
        x = torch.square(x)

        b1 = self.conv_50(x)
        b1 = self.bn_51(b1)
        b1 = torch.square(b1)
        b1 = self.conv_53(b1)
        b1 = self.bn_54(b1)

        b2 = self.conv_55(x)
        b2 = self.bn_56(b2)

        x = torch.add(b1, b2)
        x = torch.square(x)

        b1 = self.conv_59(x)
        b1 = self.bn_60(b1)
        b1 = torch.square(b1)
        b1 = self.conv_62(b1)
        b1 = self.bn_63(b1)

        x = torch.add(x, b1)
        x = torch.square(x)

        x = self.pool_66(x)
        x = torch.flatten(x, start_dim=1)
        x = self.fc_68(x)

        return x

# define a dummy input for ONNX tracing
input = torch.randn(1, 3, 224, 224) # N, C, H, W

# instantiate the model and switch it to eval mode (uses BatchNorm running stats)
my_nn = MyNeuralNet()
my_nn.eval()

# export to ONNX; export_params=False omits the weights (they are random anyway)
torch.onnx.export(my_nn, input, "model.onnx", training=torch.onnx.TrainingMode.PRESERVE, export_params=False)
print("Saved model")
