Commit ba4c5980 authored by Alessandro Carra's avatar Alessandro Carra

Case4 first push.

To be improved once I am able to clone the repository instead of using the WebIDE in the browser.
parent a4245e3a
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2024 STMicroelectronics
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# MPAI-NNW v1.2 implementation
## Case 4
This code refers to the NN-based image classification methods specifically designed and deployed for low-power, low-resource devices, as described in [TBC] https://mpai.community/. It aims to show that the Neural Network Watermarking (NNW) approach under standardization by the Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) community is a solution for protecting intellectual property such as tiny Neural Networks (NNs) deployable on resource-constrained devices. The standard is named IEEE 3304-2023.
We start from the MLCommons Tiny benchmark model for image classification (ResNet8) (https://mlcommons.org/) and watermark it with a state-of-the-art method. We then explore its robustness and efficiency through a series of tests, including attack simulations such as quantization, pruning, and Gaussian-noise attacks, performed following the MPAI NNW standardized procedure.
By subjecting the mentioned neural networks to these tests, we aim to learn more about the trade-offs between parameters, accuracy, and computational cost, ultimately facilitating the deployment of robust, efficient, and secure machine learning solutions on Micro Controller Units (MCUs) for various edge computing applications, protected by an embedded watermark.
For the MCU deployability analysis, the ST Edge AI Unified Core Technology has been used.
All the code is written in Python and delivered as a standalone Jupyter notebook.
**Folders**
- **pics** stores all the pics
- **model** stores all the Neural Network (NN) models
## Installation
The code was designed and tested on WSL Ubuntu 20.04 using venv and Python 3.9.13.
An environment with all the necessary libraries can be created with venv by installing the required packages:
```bash
/path/to/python3.9 -m venv /path/to/your/env
source /path/to/your/env/bin/activate
pip install -r requirements.txt
```
ST Edge AI Core must be installed to run the inferences on the MCU; the instructions can be found here: https://stm32ai.st.com/
## Run
After activating the environment, a user who wants to replicate the procedure can easily walk through the notebook:
```bash
source /path/to/your/env/bin/activate
```
In the last cell, two exemplary dicts with the options to run inferences and to embed the watermark are presented.
The user can simply run all the cells to obtain the results.
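For orientation, the watermarking options dict used in the notebook's embedding cell has the shape sketched below. The key names mirror those that appear in the notebook; the values here are illustrative placeholders, not the ones you must use:

```python
# Illustrative sketch of the watermarking options dict from the notebook.
# All values below are placeholders; only the key names come from the code.
watermarking_dict = {
    "folder": "path/to/trigger_pics/",  # placeholder trigger-image folder
    "batch_size": 128,
    "transforms": None,    # the notebook passes an inference transform here
    "types": 1,
    "num_class": 10,       # 10 classes, matching CIFAR-10
    "power": 10,
}
print(sorted(watermarking_dict))
```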
# License
Copyright 2024 STMicroelectronics
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
%% Cell type:markdown id: tags:
# Deployability Tests of MPAI Watermarked MLCommons-Tiny Networks with the ST Unified AI Core Technology
<center><img width=1000 src="pics/pics_for_notebook/prj_workflow.png"></center>
This project focuses on NN-based image classification methods specifically designed and deployed for low-power, low-resource devices. It aims to show that the Neural Network Watermarking (NNW) approach under standardization by the Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) community is a solution for protecting intellectual property such as tiny Neural Networks (NNs) deployable on resource-constrained devices. The standard is named IEEE 3304-2023.
We start from the MLCommons Tiny benchmark model for image classification (ResNet8) (https://mlcommons.org/) and watermark it with a state-of-the-art method. We then explore its robustness and efficiency through a series of tests, including attack simulations such as quantization, pruning, and Gaussian-noise attacks, performed following the MPAI NNW standardized procedure.
By subjecting the mentioned neural networks to these tests, we aim to learn more about the trade-offs between parameters, accuracy, and computational cost, ultimately facilitating the deployment of robust, efficient, and secure machine learning solutions on Micro Controller Units (MCUs) for various edge computing applications, protected by an embedded watermark.
For the MCU deployability analysis, the ST Edge AI Unified Core Technology has been used.
%% Cell type:markdown id: tags:
**License of the Jupyter Notebook**
Copyright 2024 STMicroelectronics
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
%% Cell type:code id: tags:
``` python
# Import all the libraries and modules used throughout the notebook.
import os
import sys
import subprocess
from datetime import datetime
import cv2
from PIL import Image
import numpy as np
from IPython import display
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import onnx
import onnxruntime as rt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision as tv
import torchvision.transforms as transforms
# Fixed seed for reproducibility
torch.manual_seed(0)
np.random.seed(0)
```
%% Cell type:code id: tags:
``` python
print("opencv version: {}".format(cv2.__version__))
print("pytorch version: {}".format(torch.__version__))
print("onnx version: {}".format(onnx.__version__))
print("onnx_runtime version: {}".format(rt.__version__))
print("numpy version: {}".format(np.__version__))
print("python version: {}".format(sys.version))
```
%% Cell type:markdown id: tags:
## Model Topology Development
<p>
<center><img src="pics/pics_for_notebook/ResNet8.png"></center>
</p>
ResNet8 is a variant of the ResNet (Residual Network) architecture, which is a widely used deep neural network architecture known for its effectiveness in image classification tasks. ResNet8 is specifically designed to be lightweight and suitable for deployment on resource-constrained devices like MCUs. It is part of the MLCommons Tiny Benchmark suite, which provides standardized benchmarks for evaluating the performance of machine learning models on edge devices.
ResNet8 consists of a relatively shallow network with a total of 8 layers, including convolutional layers, batch normalization layers, activation functions, and a final fully connected layer.
The number of parameters in ResNet8 is significantly lower than in deeper ResNet variants. This reduction in parameters helps to reduce memory footprint and computational overhead, making it suitable for deployment on MCUs with limited resources. The exact number of parameters depends on factors such as the size of the input images and the number of output classes in the classification task.
ResNet8 follows a convolutional neural network (CNN) architecture, where layers are organized in a sequential manner. Each convolutional layer is followed by batch normalization and a non-linear activation function, typically ReLU (Rectified Linear Unit). The final layer consists of a fully connected layer followed by softmax activation for classification.
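As a back-of-the-envelope check on the parameter count, a k x k convolution contributes k * k * in_channels * out_channels weights plus one bias per output channel. A minimal sketch, using the same layer sizes as the stem convolution defined further down:

``` python
def conv2d_params(in_ch, out_ch, k, bias=True):
    """Number of trainable parameters in a single Conv2d layer."""
    return k * k * in_ch * out_ch + (out_ch if bias else 0)

# Stem convolution of ResNet8: 3 -> 16 channels, 3x3 kernel, with bias
print(conv2d_params(3, 16, 3))  # 3*3*3*16 + 16 = 448
```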
%% Cell type:markdown id: tags:
### PyTorch Neural Network Model Definition
%% Cell type:code id: tags:
``` python
# Define a PyTorch model from MLCommons tiny benchmark for image classification (ResNet-8)
class ResNetBlock(nn.Module):
def __init__(
self,
in_channels: int,
out_channels: int,
stride: int = 1,
):
super().__init__()
self.block = nn.Sequential(
nn.Conv2d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=3,
padding=1,
bias=True,
stride=stride,
),
nn.BatchNorm2d(num_features=out_channels),
nn.ReLU(inplace=True),
nn.Conv2d(
in_channels=out_channels,
out_channels=out_channels,
kernel_size=3,
padding=1,
bias=True,
),
nn.BatchNorm2d(num_features=out_channels),
)
if in_channels == out_channels:
self.residual = nn.Identity()
else:
self.residual = nn.Conv2d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=1,
stride=stride,
)
def forward(self, inputs):
x = self.block(inputs)
y = self.residual(inputs)
return F.relu(x + y)
class Resnet8v1EEMBC(nn.Module):
def __init__(self):
super().__init__()
self.stem = nn.Sequential(
nn.Conv2d(
in_channels=3, out_channels=16, kernel_size=3, padding=1, bias=True
),
nn.BatchNorm2d(num_features=16),
nn.ReLU(inplace=True),
)
self.first_stack = ResNetBlock(in_channels=16, out_channels=16, stride=1)
self.second_stack = ResNetBlock(in_channels=16, out_channels=32, stride=2)
self.third_stack = ResNetBlock(in_channels=32, out_channels=64, stride=2)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(in_features=64, out_features=10)
def forward(self, inputs):
x = self.stem(inputs)
x = self.first_stack(x)
x = self.second_stack(x)
x = self.third_stack(x)
x = self.avgpool(x)
x = torch.flatten(x, 1)
x = self.fc(x)
return x
```
%% Cell type:markdown id: tags:
## Loading data from CIFAR-10 and trigger dataset and pre-processing
Below are the definitions used to load the labelled data from the CIFAR-10 dataset and from the incoherently labelled trigger dataset.
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
Images for the trigger dataset shall not be chosen from the training dataset; here, 100 abstract images were selected and random target classes were assigned to them.
<center><img width=300 src="pics/pics_for_notebook/cifar10.png" style="margin-right: 25px;"> <img width=300 src="pics/full_size_pics/010.jpg" style="margin-left: 25px;"></center>
<center> Exemplary images respectively from CIFAR-10 and from trigger dataset.</center>
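The pre-processing applied in the code below follows the torchvision `Normalize` convention, output[channel] = (input[channel] - mean[channel]) / std[channel], after scaling pixel values to [0, 1]. A minimal numpy sketch of that step:

``` python
import numpy as np

# Per-channel statistics, as used in the notebook's transforms
mean = np.array([0.4914, 0.4822, 0.4465]).reshape(3, 1, 1)
std = np.array([0.2023, 0.1994, 0.2010]).reshape(3, 1, 1)

# Dummy 3x32x32 uint8 image (channels first), all pixels at 255
img = np.full((3, 32, 32), 255, dtype=np.uint8)

# Scale to [0, 1], then normalize each channel independently
normalized = (img.astype(np.float32) / 255.0 - mean) / std
print(normalized[0, 0, 0])  # equals (1.0 - 0.4914) / 0.2023
```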
%% Cell type:code id: tags:
``` python
def mpai_nnw_dataloader(trainset,testset,batch_size=100):
trainloader = torch.utils.data.DataLoader(
trainset,
batch_size=batch_size,
shuffle=True,
num_workers=2)
testloader = torch.utils.data.DataLoader(
testset,
batch_size=batch_size,
shuffle=False,
num_workers=2)
return trainloader,testloader
def CIFAR10_dataset():
transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
transform_test = transforms.Compose([
transforms.CenterCrop((32, 32)),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
# datasets
trainset = tv.datasets.CIFAR10(
root='./data/',
train=True,
download=True,
transform=transform_train)
testset = tv.datasets.CIFAR10(
'./data/',
train=False,
download=True,
transform=transform_test)
return trainset, testset, transform_test
def load_and_preprocess_data():
# Load, crop and save modified image
script_abs_path = os.getcwd() #script_abs_path = os.path.dirname(os.path.abspath(__file__))
image_path = os.path.join(script_abs_path, "pics")
full_size_image_folder_name = "full_size_pics"
croppped_image_folder_name = "cropped_pics"
full_size_image_folder_path = os.path.join(image_path, full_size_image_folder_name)
croppped_image_folder_path = os.path.join(image_path, croppped_image_folder_name)
# search for full_size_image_folder and images inside it
print("Searching for {} folder".format(full_size_image_folder_path))
if os.path.isdir(full_size_image_folder_path):
file_list = os.listdir(full_size_image_folder_path)
if not file_list:
raise AssertionError("Empty folder!")
print("List of files: {}".format(file_list))
else:
raise AssertionError("Folder not found!")
# create the croppped_image_folder if it doesn't exist
print("Searching for {} folder".format(croppped_image_folder_path))
if not os.path.isdir(croppped_image_folder_path):
os.mkdir(croppped_image_folder_path)
print("Folder created!")
img_num = 0
# iterate on files in folder
all_pics_file_npy_name = "all_pics_cropped.npy"
all_pics_file_npy_path = os.path.join(croppped_image_folder_path, all_pics_file_npy_name)
all_pics_npy = None
for idx, file_name in enumerate(file_list):
print("Pre-processing file \"{}\"".format(file_name))
full_size_file_path = os.path.join(full_size_image_folder_path, file_name)
cropped_file_name = file_name.split(".", 2)[0] + "_cropped." + file_name.split(".", 2)[1]
cropped_file_path = os.path.join(croppped_image_folder_path, cropped_file_name)
cropped_file_npy_name = file_name.split(".", 2)[0] + "_cropped." + "npy"
cropped_file_npy_path = os.path.join(croppped_image_folder_path, cropped_file_npy_name)
# avoid open of file not supported or not images
if not full_size_file_path.endswith('.jpg'):
print("\tFile not supported!")
continue
# load image from the folder
img = cv2.imread(full_size_file_path)
if img is None:
raise AssertionError("File not read correctly!")
# cv2.imshow("Full size image", img) # commented since python script run on terminal
# change color map from BGR to RGB
#print("DEBUGP img: {}".format(img.flatten()[0:5]))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
#print("DEBUGP img: {}".format(img.flatten()[0:5]))
# extract coordinates
center_x = img.shape[1]/2
center_y = img.shape[0]/2
crop_x = 32
crop_y = 32
x_start = center_x - crop_x/2
y_start = center_y - crop_y/2
# apply crop to cv2/numpy object
crop_img = img[int(y_start):int(y_start+crop_y), int(x_start):int(x_start+crop_x)]
# convert into float32
crop_img_npy = crop_img.astype(np.float32)
tr_axis = (2, 0, 1)
crop_img_npy = np.transpose(crop_img_npy, tr_axis)
# normalize per channel as pytorch normalize doc.
# output[channel] = (input[channel] - mean[channel]) / std[channel]
mean = (0.4914, 0.4822, 0.4465) # from Carl prep implementation
std = (0.2023, 0.1994, 0.2010) # from Carl prep implementation
crop_img_npy = crop_img_npy / 255.0
for ch in range(0,crop_img_npy.shape[0]):
#print("Normalizing channel {}".format(ch))
crop_img_npy[ch] = (crop_img_npy[ch] - mean[ch]) / std[ch]
# save the cropped image in the new folder (converted back to BGR for cv2)
crop_img = cv2.cvtColor(crop_img, cv2.COLOR_RGB2BGR)
cv2.imwrite(cropped_file_path, crop_img)
np.save(cropped_file_npy_path, crop_img_npy)
if all_pics_npy is None:
all_pics_npy = crop_img_npy.reshape((1,) + crop_img_npy.shape)
else:
all_pics_npy = np.concatenate((all_pics_npy, crop_img_npy.reshape((1,) + crop_img_npy.shape)))
# #print("\t\tNPY saved in \"{}\"".format(cropped_file_npy_path))
# #print("\t\tNPY Shape: {}".format(crop_img_npy.shape))
# #print("\t\tNPY dtype: {}".format(crop_img_npy.dtype))
# #print("\t\tNPY Min: {} -- Max: {}".format(crop_img_npy.min(), crop_img_npy.max()))
img_num += 1
np.save(all_pics_file_npy_path, all_pics_npy)
print("Pre-processed {} of {} files in the folder".format(img_num, idx + 1))
```
%% Cell type:markdown id: tags:
## Training and watermark embedding
<center><img width=1000 src="pics/pics_for_notebook/embed_graph.png"></center>
The preferred method involves several key steps: backdooring, the use of strong backdoors and commitment schemes, and finally the watermarking procedure itself.
The combination of the original dataset and an exemplary trigger dataset from the state-of-the-art method is used to train the NN model and force its behaviour.
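The black-box idea behind this trigger-set (backdoor) watermark can be sketched independently of the notebook's classes: detection measures how often the model returns the secret labels assigned to the trigger images. This is a toy illustration with random stand-ins for the model outputs; the actual MPAI NNW detector and its decision thresholds differ:

``` python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 100 trigger "images" with secret random labels (10 classes)
secret_labels = rng.integers(0, 10, size=100)

def detection_rate(predicted, secret):
    """Fraction of trigger inputs classified with their secret label."""
    return float(np.mean(predicted == secret))

# A watermarked model should reproduce the secret labels almost perfectly...
print(detection_rate(secret_labels, secret_labels))  # 1.0
# ...while an unrelated model agrees only at chance level (~1/10)
random_preds = rng.integers(0, 10, size=100)
print(detection_rate(random_preds, secret_labels))
```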
%% Cell type:code id: tags:
``` python
def pytorch_train_embed():
save_file = 'embedResNet_Adii'
batch_size = 128
model = Resnet8v1EEMBC()
model.to(device)
# mpai_nnw_dataloader(trainset,testset,batch_size=100)
print(model)
power = 10
# watermarking section (change here to test another method) #######################################
tools = ADI.Adi_tools()
# folder = 'code_from_MPAI/referencesoftwarev11_main/Attacks/data/trigger_pics/'
folder = 'code_from_MPAI/STposter/adi/' # trigger_pics/'
watermarking_dict = {'folder': folder,
'batch_size': batch_size,
'transforms': inference_transform,
'types':1,
'num_class':10, # 10 classes referring to CIFAR10
'power': 10,
}
# watermarking section (END change here to test another method) ###################################
watermarking_dict = tools.init(model, watermarking_dict)
trainloader, testloader = mpai_nnw_dataloader(trainset, testset, batch_size)
# Imperceptibility.Embeds(watermarking_dict["types"], model, watermarking_dict, run_arg['epochs'], tools, trainloader, batch_size)
# code from MPAI NNW script
criterion = nn.CrossEntropyLoss()
learning_rate, momentum, weight_decay = 0.01, .9, 5e-4
optimizer = optim.SGD([
{'params': model.parameters()}
], lr=learning_rate, momentum=momentum, weight_decay=weight_decay)
model.train()
epoch = 0
print("Launching injection.....")
while epoch < num_epochs:
print('doing epoch', str(epoch + 1), ".....")
loss, loss_nn, loss_w = tools.Embedder_one_step(model, trainloader, optimizer, criterion, watermarking_dict)
loss = (loss * batch_size / len(trainloader.dataset))
loss_nn = (loss_nn * batch_size / len(trainloader.dataset))
loss_w = (loss_w * batch_size / len(trainloader.dataset))
print(' loss : %.5f - loss_wm: %.5f, loss_nn: %.5f ' % (loss, loss_w, loss_nn))
epoch += 1
print("############ Watermark inserted ##########")
print()
print("Launching Test function...")
```
%% Cell type:code id: tags:
``` python
class ImageFolderCustomClass(torch.utils.data.Dataset):
"""A generic data loader where the images are arranged in this way: ::
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png
root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png
Args:
root (string): Root directory path.
transform (callable, optional): A function/transform that takes in an PIL image
and returns a transformed version. E.g, ``transforms.RandomCrop``
target_transform (callable, optional): A function/transform that takes in the
target and transforms it.
loader (callable, optional): A function to load an image given its path.
Attributes:
classes (list): List of the class names.
class_to_idx (dict): Dict with items (class_name, class_index).
imgs (list): List of (image path, class_index) tuples
"""
def make_dataset(self,dir, class_to_idx):
def is_image_file(filename):
"""Checks if a file is an image.
Args:
filename (string): path to a file
Returns:
bool: True if the filename ends with a known image extension
"""
filename_lower = filename.lower()
return any(filename_lower.endswith(ext) for ext in self.IMG_EXTENSIONS)
images = []
dir = os.path.expanduser(dir)
for target in sorted(os.listdir(dir)):
d = os.path.join(dir, target)
if not os.path.isdir(d):
continue
for root, _, fnames in sorted(os.walk(d)):
for fname in sorted(fnames):
if is_image_file(fname):
path = os.path.join(root, fname)
item = (path, class_to_idx[target])
images.append(item)
return images
def accimage_loader(self,path):
import accimage
try:
return accimage.Image(path)
except IOError:
# Potentially a decoding problem, fall back to PIL.Image
return self.pil_loader(path)
def pil_loader(self,path):
# open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGB')
def default_loader(self,path):
from torchvision import get_image_backend
if get_image_backend() == 'accimage':
return self.accimage_loader(path)
else:
return self.pil_loader(path)
def find_classes(self,dir):
classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))]
classes.sort()
class_to_idx = {classes[i]: i for i in range(len(classes))}
return classes, class_to_idx
def __init__(self, root, transform=None, target_transform=None,
custom_class_to_idx=None) :
self.IMG_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm']
if custom_class_to_idx is None:
classes, class_to_idx = self.find_classes(root)
else:
class_to_idx = custom_class_to_idx
classes = list(class_to_idx.keys())
imgs = self.make_dataset(root, class_to_idx)
if len(imgs) == 0:
raise(RuntimeError("Found 0 images in subfolders of: " + root + "\n"
"Supported image extensions are: " + ",".join(self.IMG_EXTENSIONS)))
self.root = root
self.imgs = imgs
self.classes = classes
self.class_to_idx = class_to_idx
self.transform = transform
self.target_transform = target_transform
def __getitem__(self, index):
"""
Args:
index (int): Index
Returns:
tuple: (image, target) where target is class_index of the target class.
"""
path, target = self.imgs[index]
img = self.default_loader(path)
if self.transform is not None:
img = self.transform(img)
if self.target_transform is not None:
target = self.target_transform(target)
return img, target
def __len__(self):
return len(self.imgs)
def __repr__(self):
fmt_str = 'Dataset ' + self.__class__.__name__ + '\n'
fmt_str += ' Number of datapoints: {}\n'.format(self.__len__())
fmt_str += ' Root Location: {}\n'.format(self.root)
tmp = ' Transforms (if any): '
fmt_str += '{0}{1}\n'.format(tmp,
self.transform.__repr__().replace('\n', '\n' + ' ' * len(tmp)))
tmp = ' Target Transforms (if any): '
fmt_str += '{0}{1}'.format(tmp,
self.target_transform.__repr__().replace('\n', '\n' + ' ' * len(tmp)))
return fmt_str
class Adi_tools():
    def __init__(self) -> None:
        super(Adi_tools, self).__init__()
    def list_image(self, main_dir):
        """Return all non-hidden files in the directory."""
res = []
for f in os.listdir(main_dir):
if not f.startswith('.'):
res.append(f)
return res
def add_images(self, dataset, image, label):
"""add an image with its label to the dataset
:param dataset: aimed dataset to be modified
:param image: image to be added
:param label: label of this image
:return: 0
"""
        (taille, height, width, channel) = np.shape(dataset.data)  # taille = current number of images
dataset.data = np.append(dataset.data, image)
dataset.targets.append(label)
dataset.data = np.reshape(dataset.data, (taille + 1, height, width, channel))
return 0
def get_image(self, name):
"""
:param name: file (including the path) of an image
:return: a numpy of this image"""
image = Image.open(name)
return np.array(image)
    def Embedder_one_step(self, net, trainloader, optimizer, criterion, watermarking_dict):
        '''
        Run one training epoch while mixing watermark (trigger) batches into each training batch.
        :param net: network to train
        :param trainloader: DataLoader over the task dataset
        :param optimizer: optimizer updating the network's parameters
        :param criterion: loss function
        :param watermarking_dict: dictionary with all watermarking elements
        :return: the different losses (global loss, task loss, watermark loss)
        '''
running_loss = 0
wmloader=watermarking_dict['wmloader']
wminputs, wmtargets = [], []
if wmloader:
for wm_idx, (wminput, wmtarget) in enumerate(wmloader):
wminput, wmtarget = wminput.to(device), wmtarget.to(device)
wminputs.append(wminput)
wmtargets.append(wmtarget)
# the wm_idx to start from
wm_idx = np.random.randint(len(wminputs))
for i, data in enumerate(trainloader, 0):
# split data into the image and its label
inputs, labels = data
inputs = inputs.to(device)
labels = labels.to(device)
if wmloader:
inputs = torch.cat([inputs, wminputs[(wm_idx + i) % len(wminputs)]], dim=0)
labels = torch.cat([labels, wmtargets[(wm_idx + i) % len(wminputs)]], dim=0)
# initialise the optimiser
optimizer.zero_grad()
# forward
outputs = net(inputs)
# backward
loss = criterion(outputs, labels)
loss.backward()
# update the optimizer
optimizer.step()
# loss
running_loss += loss.item()
        # the task and watermark samples share a single loss here, so the global
        # and task losses coincide and the separate watermark loss is reported as 0
        return running_loss, running_loss, 0
    def Detector(self, net, watermarking_dict):
        """
        :param net: network to check for the watermark
        :param watermarking_dict: dictionary with all watermarking elements (including 'wmloader')
        :return: the trigger-set accuracy as 'hits/total', and the number of mismatches
                 with respect to the original watermark labels
        """
# watermarking_dict = np.load(file_watermark, allow_pickle='TRUE').item() #retrieve the dictionary
wmloader= watermarking_dict['wmloader']
net.eval()
res = 0
total = 0
for i, data in enumerate(wmloader):
inputs, labels = data
inputs = inputs.to(device)
labels = labels.to(device)
outputs = net(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
res += predicted.eq(labels.data).cpu().sum()
return '%i/%i' %(int(res),total), total-res
    def init(self, net, watermarking_dict, save=None):
        '''
        :param net: network
        :param watermarking_dict: dictionary with all watermarking elements
        :param save: file name used to save the watermark
        :return: watermarking_dict with a new entry 'wmloader': a DataLoader over the trigger set
        '''
        folder = watermarking_dict["folder"]
        for elmnt in os.listdir(folder):
            if ".txt" in elmnt:
                labels_path = elmnt
wmset = ImageFolderCustomClass(
folder,
watermarking_dict["transforms"])
img_nlbl = []
wm_targets = np.loadtxt(os.path.join(folder, labels_path))
for idx, (path, target) in enumerate(wmset.imgs):
img_nlbl.append((path, int(wm_targets[idx])))
wmset.imgs = img_nlbl
wmloader = torch.utils.data.DataLoader(
wmset, batch_size=watermarking_dict["batch_size"], shuffle=True,
num_workers=4, pin_memory=True)
watermarking_dict['wmloader']=wmloader
return watermarking_dict
```
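%% Cell type:markdown id: tags:
The batch interleaving inside `Embedder_one_step` (each training batch is concatenated with one trigger batch, cycling through the trigger set from a random offset) can be sketched in isolation. This is a minimal illustration with dummy numpy arrays, not the actual training loop:
%% Cell type:code id: tags:
``` python
import numpy as np

# 4 dummy trigger (watermark) batches of 2 samples and 6 task batches of 8 samples
wm_batches = [np.full((2, 3), v, dtype=float) for v in range(4)]
train_batches = [np.zeros((8, 3)) for _ in range(6)]

wm_idx = 1  # stand-in for np.random.randint(len(wm_batches))
mixed = []
for i, batch in enumerate(train_batches):
    # cycle through the trigger set, starting from the random offset
    wm = wm_batches[(wm_idx + i) % len(wm_batches)]
    mixed.append(np.concatenate([batch, wm], axis=0))

print([m.shape for m in mixed])  # every mixed batch carries 8 task + 2 trigger samples
```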
%% Cell type:markdown id: tags:
## Evaluation of the pre-trained and the watermarked NN models
This script is part of a testing framework that automates directory setup and model validation. It is designed to work with the ST Edge AI Unified Core technology, which validates models for deployment on STM32 microcontroller hardware. The script handles file and directory operations, constructs the validation commands, executes them, and captures the output in text files for later review.
<center><img width="600" src="pics/pics_for_notebook/ev_graph.png"></center>
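%% Cell type:markdown id: tags:
The "capture the output in text files" step described above can be sketched with `subprocess.run` and `capture_output=True`. The command below is a placeholder, not the actual `stm32ai.exe` invocation:
%% Cell type:code id: tags:
``` python
import subprocess
import sys

# Placeholder command; the real framework would build the stm32ai validate command
cmd = [sys.executable, "--version"]
result = subprocess.run(cmd, capture_output=True, text=True)

# Persist stdout (or stderr, for tools that log there) for later review
with open("validation_report.txt", "w") as f:
    f.write(result.stdout or result.stderr)

print(result.returncode)  # 0 on success
```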
%% Cell type:code id: tags:
``` python
def mpai_nnw_quality_measurement(confusion_matrix):
line_sum=torch.sum(confusion_matrix,dim=1)
column_sum = torch.sum(confusion_matrix, dim=0)
total_sum=torch.sum(confusion_matrix)
Precision=torch.diag(confusion_matrix)/line_sum
Recall=torch.diag(confusion_matrix)/column_sum
Pfa=(line_sum - torch.diag(confusion_matrix))/total_sum
Pmd=(column_sum - torch.diag(confusion_matrix))/total_sum
return torch.mean(Pfa),torch.mean(Precision),torch.mean(Recall),torch.mean(Pmd)
def mpai_nnw_test(net,testloader):
    # full evaluation over the test set
correct = 0
total = 0
confusion_matrix=torch.zeros(10,10)
    # torch.no_grad() disables gradient tracking: no training happens here
with torch.no_grad():
for data in testloader:
inputs, labels = data
inputs = inputs.to(device)
labels = labels.to(device)
outputs = net(inputs)
            if len(outputs) == 2:
                outputs, _ = outputs
_, predicted = torch.max(outputs, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
for i in range(len(labels)):
confusion_matrix[predicted[i],labels[i]]=confusion_matrix[predicted[i],labels[i]]+1
return 100 - (100 * float(correct) / total), confusion_matrix
def pytorch_evaluation():
    save_file = 'embedResNet_Adii'  # define the name before using it below
    np.save(save_file + '_watermarking_dict.npy', watermarking_dict)
    torch.save({
        'model_state_dict': model.state_dict(),
    }, save_file + '_weights')
val_score, cm = mpai_nnw_test(model, testloader)
Pfa, Pre, Rec, Pmd = mpai_nnw_quality_measurement(cm)
print('Validation error : %.2f' % val_score)
print('Probability of false alarm:', Pfa)
print('Precision:', Pre)
print('Recall:', Rec)
    print('Probability of missed detection:', Pmd)
def NN_model_evaluation_ST_Edge_AI():
#print("Pre-processing file \"{}\"".format(file_name))
file_name = run_arg['model_path']
f_name, f_extension = os.path.splitext(file_name)
print("Validate model: {}".format(file_name))
# personalize the following path with your configuration and environment
HOME = ''
xcubeai_exe_path = f"{HOME}/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/8.1.0/Utilities/windows/stm32ai.exe"
# validation input from trigger dataset
img_trigger_data_path = './pics/001_cropped.npy'
label_trigger_data_path = './pics/001_label.npy'
cmd = [f"{xcubeai_exe_path}", "validate", "-m", f"{file_name}", "--classifier", "--target", "stm32", "--mode", "stm32", "--batches", "1", "-vi", f"{img_trigger_data_path}", "-vo", f"{label_trigger_data_path}", "--no-exec-model"]
print("Running command: {}".format(' '.join(cmd)))
    result = subprocess.run(' '.join(cmd), shell=True, text=True)  # shell=True is used when calling the Windows X-CUBE-AI executable from WSL
print("Result: {}".format(result))
```
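%% Cell type:markdown id: tags:
The metrics computed by `mpai_nnw_quality_measurement` can be checked on a small hand-made confusion matrix. The cell below mirrors the same formulas in numpy (rows = predicted class, columns = true class, matching how the matrix is filled in `mpai_nnw_test`):
%% Cell type:code id: tags:
``` python
import numpy as np

def quality_measurement_np(cm):
    # numpy mirror of mpai_nnw_quality_measurement
    line_sum = cm.sum(axis=1)      # totals per predicted class (rows)
    column_sum = cm.sum(axis=0)    # totals per true class (columns)
    total = cm.sum()
    precision = np.diag(cm) / line_sum
    recall = np.diag(cm) / column_sum
    pfa = (line_sum - np.diag(cm)) / total    # false alarms per class
    pmd = (column_sum - np.diag(cm)) / total  # missed detections per class
    return pfa.mean(), precision.mean(), recall.mean(), pmd.mean()

# symmetric 2-class example: 8 correct and 2 wrong per class
cm = np.array([[8., 2.],
               [2., 8.]])
pfa, pre, rec, pmd = quality_measurement_np(cm)
print(pfa, pre, rec, pmd)  # 0.1 0.8 0.8 0.1
```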
%% Cell type:markdown id: tags:
# Notebook execution
%% Cell type:code id: tags:
``` python
# Specifying the setup for the next run
list_run_arg = [
dict(
        AIFramework = 'ONNX', # 'Keras', 'PyTorch' or 'ONNX'
action = 'validate', # 'load_pre-trained', 'train', 'embed_w'
model_path = 'models/models_to_test/ResNet.onnx', # e.g. for keras: '../models/models_keras/ResNet-8_1epoch.h5' or for onnx: 'models/models_to_test/ResNet.onnx'
target = 'mcu'
),
    dict(
        AIFramework = 'PyTorch', # 'Keras', 'PyTorch' or 'ONNX'
        action = 'embed_w', # 'load_pre-trained', 'train', 'embed_w'
        model_path = 'models/ResNet_w_0.onnx', # e.g. for keras: '../models/models_keras/ResNet-8_1epoch.h5' or for onnx: 'models/models_to_test/ResNet.onnx'
        epochs = 1, # required by the 'embed_w' action; adjust to the desired number of embedding epochs
    ),
]
```
%% Cell type:code id: tags:
``` python
def check_key(key):
    # run_arg is the global currently being processed by the loop below
    if key not in run_arg.keys():
        raise AssertionError("Key '{}' not in run_arg dict, please insert it".format(key))
for run_arg in list_run_arg:
check_key('AIFramework')
if run_arg['AIFramework'] == 'PyTorch':
print("Using PyTorch framework")
check_key('action')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Available device: {}".format(device))
trainset, testset, inference_transform = CIFAR10_dataset()
if run_arg['action'] == 'embed_w':
check_key('epochs')
num_epochs = run_arg['epochs']
pytorch_train_embed()
pytorch_evaluation()
elif run_arg['AIFramework'] == 'ONNX':
check_key('action')
if run_arg['action'] == 'load_pre-trained':
print("Loading ONNX model from: \'{}\'".format(run_arg['model_path']))
check_key('model_path')
sess = rt.InferenceSession(run_arg['model_path'])
elif run_arg['action'] == 'validate':
check_key('model_path')
print("Validating ONNX model from: \'{}\'".format(run_arg['model_path']))
check_key('target')
if run_arg['target'] == 'mcu':
NN_model_evaluation_ST_Edge_AI()
else:
raise AssertionError("AIFramework not supported")
```
# Install the requirements with "pip install -r requirements.txt" in your virtual environment
# Python 3.9.13 was used