Commit 2376d2a2 authored by Carl De Sousa Trias

Adding README

Adding AIW multimodal question answering watermarking model 
Update of the controller.py
parent e6c23142
# MPAI-NNW v1.1 implementation
This code is the implementation of MPAI-NNW under MPAI-AIF, as described in https://mpai.community/wp-content/uploads/2023/10/Reference-Software-Neural-Network-Watermarking-V1.pdf.
All the code is written in Python (APIs and AIMs).
**Implemented APIs**
1. MPAI_AIFS_GetAndParseArchive, unzips the archive and parses the JSON and AIMs.
2. MPAI_AIFM_AIM_{Start,Pause,Resume,Stop,GetStatus}, to operate on an AIM/AIW.
3. MPAI_AIFM_Port_Input_{Write,Read,Reset}, to operate on the ports of the AIMs (see the sketch below).
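A minimal sketch of how these APIs chain together (the call names and argument order follow their use in _controller.py_; the AIM name and port names below are placeholders):
```python
# Hedged sketch: "MyAIM", "input_0" and "output_0" are placeholder names.
json_dict = MPAI_AIFS_GetAndParseArchive("AIW.zip")           # unzip + parse the archive
MPAI_AIFM_Port_Input_Write("MyAIM", "input_0", "some value")  # write to an input port
MPAI_AIFM_AIM_Start("MyAIM")                                  # run the AIM
result = MPAI_AIFM_Port_Output_Read("MyAIM", "output_0")      # read the produced output
```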
**Controller/User Agent**
1. The Controller is built on the socket library (it waits for requests from _input.py_).
2. The User Agent can trigger and run commands by sending inputs; a sketch of this exchange follows the list.
3. _config.py_ shares some variables among the different files.
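As a rough sketch, the User Agent side amounts to a short socket client (assuming _input.py_ mirrors the server setup in _controller.py_, which listens on the local hostname; the port must match the one opened by the Controller, 12468 in the updated _controller.py_):
```python
import socket

# Hedged sketch of the User Agent side; host/port must match controller.py.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((socket.gethostname(), 12468))  # port opened by controller.py
s.send("run all".encode())                # any command from the lists below
s.close()
```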
**Folders**
- **all_AIW** stores all the different AIWs that are implemented
  - NNW_NNW-QAM, NNW_NNW-QAM-Checker for the Multimodal Question Answering watermarking use case
  - NNWImp, NNWRob for the controller_NNW
- **resources** stores external elements for some use cases (uncorrelated images for ADI, context/question of the MQA, ...)
- **Attacks** contains all the specified attacks of MPAI-NNW under the PyTorch framework.
**Specificity to MPAI-NNW**
1. _utils.py_ contains functions linked to the dataset/dataloader under the PyTorch framework.
2. _UCHIDA.py_ / _ADI.py_ correspond to the Neural Network Watermarking technologies under evaluation.
3. AIW.zip is composed of the corresponding .json and the AIMs as Python files; a sketch of a possible layout follows the list.
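As an illustration, a matching archive could be assembled as follows (a hedged sketch: the .json file name is hypothetical; the AIW/AIMs_files.py entry matches the `import AIW.AIMs_files` performed by _controller.py_):
```python
import zipfile

# Hypothetical layout; only AIW/AIMs_files.py is implied by controller.py's import.
with zipfile.ZipFile("AIW.zip", "w") as zf:
    zf.write("AIW_MQA-NNW.json", arcname="AIW/AIW_MQA-NNW.json")  # assumed .json name
    zf.write("AIMs_files.py", arcname="AIW/AIMs_files.py")        # AIM classes used by the controller
```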
## Installation
Code was designed and tested on an Ubuntu 20.04 operating system using anaconda 23.7.2 and Python 3.9.
An environment with all the necessary libraries can be created using:
```bash
conda create --name <env> --file requirements.txt
```
## Run
**Initialisation**
First, the Controller should be initialized (the option `-W ignore` can be added to the python command to suppress warning messages during execution):
```bash
conda activate <env>
python controller.py
Controller Initialized
```
To send commands to the Controller as a User Agent, open a second terminal and run:
```bash
conda activate <env>
python input.py
input: <your command>
```
**Emulation of MPAI Store**
Expose a folder of the computer as a website using the command:
```bash
python3 -m http.server
```
Then the following command simulates downloading the AIW from a website:
```bash
conda activate <env>
python input.py
input: wget http://0.0.0.0:8000/[yourpath]/AIW.zip
```
### **List of commands for the controller**
Open a window for selecting the folder AIW.zip:
```bash
(env) python input.py
input: getparse
```
Run all; tkinter windows will ask for the different files depending on the parsed .zip ([AIW_MQA-NNW/AIW_MQA-NNW-Checker]):
```bash
(env) python input.py
input: run all
```
**List of commands for controller_NNW** (old)
For [AIWImp/AIWRob].
Open a window for selecting the folder AIW.zip:
```bash
(env) python input.py
input: getparse
```
Set the Computational Cost flag ON:
```bash
(env) python input.py
input: ComputationalCost True
```
Run the Robustness AIW with Modification **1** and Parameters **{"P":0.5}**:
```bash
(env) python input.py
input: run robustness 1 {"P":0.5}
```
Run the Imperceptibility AIW with **vgg16** as the watermarked AIM, trained on the **CIFAR10** dataset:
```bash
(env) python input.py
input: run imperceptibility vgg16 cifar10
```
### Some warnings
1. The AIW should be named AIW.zip and contain the .json and the needed AIMs.
2. The code does not tolerate misspelled commands.
# Licence
Licence information is detailed on the MPAI website.
import collections

import evaluate
import numpy as np
import torch
import wavmark
from datasets import load_dataset
from evaluate import load
from scipy.io.wavfile import write
from tqdm.auto import tqdm
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
from transformers.models.whisper.english_normalizer import BasicTextNormalizer
from transformers.pipelines.pt_utils import KeyDataset
from utils import *  # expected to provide `device`, among others
from wavmark.utils import file_reader
class NNWProof():
    # Input port
    Answer = None
    ##
    # Output port
    Proof = False

    def funcNNWProof(self, input):
        '''
        Verify the watermark embedded in the inferred audio answer
        '''
        payload = [0,1,1,1,1,0,0,0,0,1,1,0,1,0,1,1]
        model = wavmark.load_model().to(device)
        signal = file_reader.read_as_single_channel(input, aim_sr=16000)
        # decode the watermark
        payload_decoded, _ = wavmark.decode_watermark(model, signal, show_progress=True)
        if payload_decoded is None:
            return False
        BER = (np.array(payload) != payload_decoded).mean() * 100
        return BER == 0

    def run(self):
        self.Proof = self.funcNNWProof(self.Answer)
{
"$schema": "",
"$id": "",
"title": "WaterChecker",
"Identifier": {
"ImplementerID": "/* String assigned by IIDRA */",
"Specification": {
"Standard": "MPAI-NNW",
"AIW": "NNW-WaterChecker",
"AIM": "NNW-WaterChecker",
"Version": "1"
}
},
"APIProfile": "basic",
"Description": "This AIF check the answer produce by an NN",
"Types": [
{
"Name":"answer_t",
"Type":"uint8[]"
},
{
"Name":"proof_t",
"Type":"boolean"
}
],
"Ports": [
{
"Name":"Answer",
"Direction":"InputOutput",
"RecordType":"answer_t"
},
{
"Name":"Proof",
"Direction":"OutputInput",
"RecordType":"proof_t"
}
],
"SubAIMs": [
{
"Name": "NNWProof",
"Identifier": {
"ImplementerID": "/* String assigned by IIDRA */",
"Specification": {
"Standard": "MPAI-NNW",
"AIW": "NNW-QAUsage",
"AIM": "NNWProof",
"Version": "1"
}
}
}
],
"Topology": [
{
"Output":{
"AIMName":"",
"PortName":"Answer"
},
"Input":{
"AIMName":"NNWProof",
"PortName":"Answer"
}
},
{
"Output":{
"AIMName":"NNWProof",
"PortName":"Proof"
},
"Input":{
"AIMName":"",
"PortName":"Proof"
}
}
]
}
import collections

import evaluate
import numpy as np
import soundfile as sf
import torch
import wavmark
from datasets import load_dataset
from evaluate import load
from lavis.models import load_model_and_preprocess
from PIL import Image
from playsound import playsound
from scipy.io.wavfile import write
from tqdm.auto import tqdm
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
from transformers.models.whisper.english_normalizer import BasicTextNormalizer
from transformers.pipelines.pt_utils import KeyDataset
from utils import *  # expected to provide `device`, among others
class QuestionAnswering():
    # Input ports
    QuestionText = None
    RawImage = None
    ##
    # Output port
    AnswerText = None

    def funcQuestionAnswering(self, raw_image_path, question):
        '''
        Apply an NN to answer the question
        '''
        raw_image = Image.open(raw_image_path).convert("RGB")
        pipe = pipeline("visual-question-answering", model="Salesforce/blip-vqa-base")
        output = pipe(raw_image, question, top_k=1)[0]
        return output['answer']

    def run(self):
        self.AnswerText = self.funcQuestionAnswering(self.RawImage, self.QuestionText)
class SpeechRecognition():
    # Input port
    QuestionAudio = None
    ##
    # Output port
    QuestionText = None

    def funcSpeechRecognition(self, input):
        '''
        Transcribe the spoken question to text
        '''
        if self.QuestionText is None:
            # play the question audio once before transcribing it
            playsound(input)
        speech_reco = pipeline(
            "automatic-speech-recognition", model="openai/whisper-base", device=device
        )
        res = speech_reco(input)
        return res["text"]

    def run(self):
        self.QuestionText = self.funcSpeechRecognition(self.QuestionAudio)
class SpeechSynthesis():
    # Input port
    AnswerText = None
    ##
    # Output port
    AnswerAudio = None

    def funcSpeechSynthesis(self, input):
        synthesiser = pipeline("text-to-speech", "microsoft/speecht5_tts")
        embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
        speaker_embedding = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)
        # You can replace this embedding with your own as well.
        speech = synthesiser("The answer to your question is: " + input,
                             forward_params={"speaker_embeddings": speaker_embedding})
        # embed the watermark into the synthesised audio
        payload = [0,1,1,1,1,0,0,0,0,1,1,0,1,0,1,1]
        model = wavmark.load_model().to(device)
        signal, sample_rate = speech["audio"], speech["sampling_rate"]
        watermarked_signal, _ = wavmark.encode_watermark(model, signal, payload, show_progress=True)
        # you can save it as a new wav:
        path_output = "AudioAnswer.wav"
        sf.write(path_output, watermarked_signal, samplerate=16000)
        playsound(path_output)
        return path_output

    def run(self):
        self.AnswerAudio = self.funcSpeechSynthesis(self.AnswerText)
{
"$schema": "",
"$id": "",
"title": "QASRUsage",
"Identifier": {
"ImplementerID": "/* String assigned by IIDRA */",
"Specification": {
"Standard": "MPAI-NNW",
"AIW": "NNW-QASRUsage",
"AIM": "NNW-QASRUsage",
"Version": "1"
}
},
"APIProfile": "basic",
"Description": "This AIF is an example of an integrated NNW use case",
"Types": [
{
"Name":"audio_t",
"Type":"uint8[]"
},
{
"Name":"question_t",
"Type":"uint8[]"
},
{
"Name":"image_t",
"Type":"uint8[]"
},
{
"Name":"answer_t",
"Type":"uint8[]"
},
{
"Name":"answer_audio_t",
"Type":"uint8[]"
}
],
"Ports": [
{
"Name":"QuestionAudio",
"Direction":"InputOutput",
"RecordType":"audio_t"
},
{
"Name":"QuestionText",
"Direction":"InputOutput",
"RecordType":"question_t"
},
{
"Name":"RawImage",
"Direction":"InputOutput",
"RecordType":"image_t"
},
{
"Name":"AnswerText",
"Direction":"InputOutput",
"RecordType":"answer_t"
},
{
"Name":"AnswerAudio",
"Direction":"InputOutput",
"RecordType":"answer_audio_t"
}
],
"SubAIMs": [
{
"Name": "QuestionAnswering",
"Identifier": {
"ImplementerID": "/* String assigned by IIDRA */",
"Specification": {
"Standard": "MPAI-NNW",
"AIW": "NNW-QAUsage",
"AIM": "QuestionAnswering",
"Version": "1"
}
}
},
{
"Name": "SpeechRecognition",
"Identifier": {
"ImplementerID": "/* String assigned by IIDRA */",
"Specification": {
"Standard": "MPAI-NNW",
"AIW": "NNW-QAUsage",
"AIM": "SpeechRecognition",
"Version": "1"
}
}
},
{
"Name": "SpeechSynthesis",
"Identifier": {
"ImplementerID": "/* String assigned by IIDRA */",
"Specification": {
"Standard": "MPAI-NNW",
"AIW": "NNW-QAUsage",
"AIM": "SpeechSynthesis",
"Version": "1"
}
}
}
],
"Topology": [
{
"Output":{
"AIMName":"",
"PortName":"QuestionAudio"
},
"Input":{
"AIMName":"SpeechRecognition",
"PortName":"QuestionAudio"
}
},
{
"Output":{
"AIMName":"SpeechRecognition",
"PortName":"QuestionText"
},
"Input":{
"AIMName":"QuestionAnswering",
"PortName":"QuestionText"
}
},
{
"Output":{
"AIMName":"",
"PortName":"RawImage"
},
"Input":{
"AIMName":"QuestionAnswering",
"PortName":"RawImage"
}
},
{
"Output":{
"AIMName":"QuestionAnswering",
"PortName":"AnswerText"
},
"Input":{
"AIMName":"SpeechSynthesis",
"PortName":"AnswerText"
}
},
{
"Output":{
"AIMName":"SpeechSynthesis",
"PortName":"AnswerAudio"
},
"Input":{
"AIMName":"",
"PortName":"AnswerAudio"
}
},
{
"Output":{
"AIMName":"SpeechRecognition",
"PortName":"QuestionText"
},
"Input":{
"AIMName":"",
"PortName":"QuestionText"
}
}
]
}
# controller.py https://www.bogotobogo.com/python/python_network_programming_server_client.php
import socket
import time
from Attacks import *
from UCHIDA import Uchi_tools
from ADI import Adi_tools
from multiprocessing import Process
from APIs import *
import psutil
import os
import ast
import tkinter as tk
from tkinter import filedialog
from tkinter import filedialog, simpledialog
import wget
import config
from PIL import Image
# create a socket object
serversocket = socket.socket(
@@ -22,7 +20,7 @@ serversocket = socket.socket(
# get local machine name
host = socket.gethostname()
port = 12345
port = 12468
# bind to the port
serversocket.bind((host, port))
@@ -58,9 +56,10 @@ while True:
elif "getparse" in message[0].lower():
#print(type(message[1]),message[1]) #always str
root = tk.Tk()
root.withdraw()
filename = filedialog.askopenfilename(title='Select the parameter file', filetypes=(("Text files",
filename = filedialog.askopenfilename(title='Select the zip (json and AIMs)', filetypes=(("Text files",
"*.zip"),
("all files",
"*.*")))
@@ -77,6 +76,7 @@ while True:
# print(AIMs.keys())
# print(config.Topology)
print(".json parsed")
elif 'write' in message[0].lower():
## message[1] AIM_name, message[2] port_name, message[3] what to write
MPAI_AIFM_Port_Input_Write(message[1],message[2],message[3])
@@ -88,103 +88,28 @@ while True:
## message[1] AIM_name, message[2] port_name
MPAI_AIFM_Port_Reset(message[1],message[2])
elif 'computationalcost' in message[0].lower():
CompCostFlag=ast.literal_eval(str(message[1]))
elif 'run' in message[0].lower():
### here you could wait for a next message instead
# print("waiting for the type of run")
# clientsocket, addr = serversocket.accept()
# data = clientsocket.recv(1024)
# message = data.decode()
# message = message.split()
if "robustness" in message[1].lower():
# run robustness 1 {"P":.1}
param = ast.literal_eval(str(message[3]))
M_ID = int(message[2])
root = tk.Tk()
root.withdraw()
reload = filedialog.askopenfilename(title='Select the parameter file') # show an "Open" dialog box and return the path to the selected file
watermarked_parameters = torch.load(reload, map_location=torch.device('cpu'))
config.message["WatermarkedParameter"] = watermarked_parameters
tools = Uchi_tools()
root = tk.Tk()
root.withdraw()
reload_npy = filedialog.askopenfilename(title='Select the watermarking_dict')
watermarking_dict = np.load(reload_npy, allow_pickle=True).item()
MPAI_AIFM_Port_Input_Write("WatermarkDecoder", "tools", tools)
MPAI_AIFM_Port_Input_Write("WatermarkDecoder", "watermarking_dict", watermarking_dict)
MPAI_AIFM_Port_Input_Write("Comparator", "Payload", watermarking_dict["watermark"])
### Automatized
for elements in config.Topology:
# print(elements)
if elements["Output"]["AIMName"]=="":
MPAI_AIFM_Port_Input_Write(elements["Input"]["AIMName"], elements["Input"]["PortName"],
config.message[elements["Output"]["PortName"]])
else:
if CompCostFlag:
time1 = time.time()
MPAI_AIFM_AIM_Start(elements["Output"]["AIMName"])
if CompCostFlag:
time2 = time.time()
if elements["Input"]["AIMName"]=="":
print("Output of",elements["Output"]["AIMName"],"port",elements["Output"]["PortName"] )
print(MPAI_AIFM_Port_Output_Read(elements["Output"]["AIMName"],elements["Input"]["PortName"]))
else:
MPAI_AIFM_Port_Input_Write(elements["Input"]["AIMName"], elements["Input"]["PortName"],
MPAI_AIFM_Port_Output_Read(elements["Output"]["AIMName"],elements["Output"]["PortName"]))
MPAI_AIFM_AIM_Start("Comparator")
print('BER : %s' % (MPAI_AIFM_Port_Output_Read("Comparator","output_0")))
if CompCostFlag:
print("time of execution: %.5f sec" %(time2-time1))
elif "imperceptibility" in message[1].lower():
## run imperceptibility vgg16 cifar10
if "vgg16" in message[2].lower():
model = tv.models.vgg16()
model.classifier = nn.Linear(25088, 10)
else:
print(message[2],"not found - default loading vgg16")
model = tv.models.vgg16()
model.classifier = nn.Linear(25088, 10)
if "cifar10" in message[3].lower():
trainset,testset,tfm=CIFAR10_dataset()
for elements in config.Topology:
# print(elements)
if elements["Output"]["AIMName"]=="": ## no outputs means it's an input
### TBD better: if conditions link to the port reading
root = tk.Tk()
root.withdraw()
path = filedialog.askopenfilename(title='Select '+str(elements["Input"]["PortName"]))
MPAI_AIFM_Port_Input_Write(elements["Input"]["AIMName"], elements["Input"]["PortName"],
path)
else:
print(message[3],"not found - default loading CIFAR10")
trainset,testset,tfm=CIFAR10_dataset()
MPAI_AIFM_Port_Input_Write("WatermarkEmbedder", "AIM", model)
if CompCostFlag:
time1=time.time()
MPAI_AIFM_AIM_Start("WatermarkEmbedder")
if CompCostFlag:
time2=time.time()
MPAI_AIFM_Port_Input_Write("AIM", "model", model)
MPAI_AIFM_Port_Input_Write("AIM", "parameters", MPAI_AIFM_Port_Output_Read("WatermarkEmbedder","output_0"))
MPAI_AIFM_Port_Input_Write("AIM", "testingDataset", testset)
MPAI_AIFM_AIM_Start("AIM")
print('AIM_watermarked_result : %s' % (MPAI_AIFM_Port_Output_Read("AIM","output_0")))
if CompCostFlag:
print("time of execution: %.5f sec" %(time2-time1))
MPAI_AIFM_Port_Input_Write("AIMtrainer", "AIM", model)
MPAI_AIFM_AIM_Start("AIMtrainer")
MPAI_AIFM_Port_Input_Write("AIM", "model", model)
MPAI_AIFM_Port_Input_Write("AIM", "parameters", MPAI_AIFM_Port_Output_Read("AIMtrainer","output_0"))
MPAI_AIFM_Port_Input_Write("AIM", "testingDataset", testset)
MPAI_AIFM_AIM_Start("AIM")
print('AIM_unwatermarked_result : %s' % (MPAI_AIFM_Port_Output_Read("AIM", "output_0")))
MPAI_AIFM_AIM_Start(elements["Output"]["AIMName"])
if elements["Input"]["AIMName"]=="": ## no inputs means it's an output
print("Output of",elements["Output"]["AIMName"],"- port",elements["Output"]["PortName"] )
print()
print(MPAI_AIFM_Port_Output_Read(elements["Output"]["AIMName"],elements["Input"]["PortName"]))
print()
else:
MPAI_AIFM_Port_Input_Write(elements["Input"]["AIMName"], elements["Input"]["PortName"],
MPAI_AIFM_Port_Output_Read(elements["Output"]["AIMName"],elements["Output"]["PortName"]))
else:
print(message[1].lower(),"not implemented")
elif 'status' in message[0].lower():
print(config.dict_process)
# controller.py https://www.bogotobogo.com/python/python_network_programming_server_client.php
import socket
import time
from Attacks import *
from UCHIDA import Uchi_tools
from ADI import Adi_tools
from multiprocessing import Process
from APIs import *
import psutil
import os
import ast
import tkinter as tk
from tkinter import filedialog
import wget
import config
# create a socket object
serversocket = socket.socket(
socket.AF_INET, socket.SOCK_STREAM)
# get local machine name
host = socket.gethostname()
port = 12345
# bind to the port
serversocket.bind((host, port))
# queue up to 5 requests
serversocket.listen(5)
print("Controller Initialized")
CompCostFlag=False
while True:
# establish a connection
clientsocket, addr = serversocket.accept()
# print("Got a connection from %s" % str(addr))
# currentTime = time.ctime(time.time()) + "\r\n"
data=clientsocket.recv(1024)
message=data.decode()
message = message.split()
if not data: break
if "help" in message[0].lower():
### to be updated
print(" ----------------------------------------")
print(" this program is the implementation of NNW in the AIF")
print(" you can run AIM/AIW by sending 'run XX' ")
print(" you can pause AIM/AIW by sending 'stop XX' ")
print(" you can resume AIM/AIW by sending 'resume XX' ")
print(" you can obtain the status of AIM/AIW by sending 'status XX' ")
print(" you can end the program by typing 'exit'")
print(" ----------------------------------------")
elif "wget" in message[0].lower():
test=wget.download(message[1])
print(type(test))
elif "getparse" in message[0].lower():
#print(type(message[1]),message[1]) #always str
root = tk.Tk()
root.withdraw()
filename = filedialog.askopenfilename(title='Select the parameter file', filetypes=(("Text files",
"*.zip"),
("all files",
"*.*")))
json_dict = MPAI_AIFS_GetAndParseArchive(filename)
time.sleep(.5)
import AIW.AIMs_files as AIMs_file
config.AIM_dict = json_dict['SubAIMs']
config.Topology = json_dict['Topology'] ### topology
for i in range(len(json_dict['SubAIMs'])):
config.AIMs[config.AIM_dict[i]["Name"]] = getattr(AIMs_file, config.AIM_dict[i]["Name"])()
### AIMs file should be in the .zip
# print(AIMs.keys())
# print(config.Topology)
print(".json parsed")
elif 'write' in message[0].lower():
## message[1] AIM_name, message[2] port_name, message[3] what to write
MPAI_AIFM_Port_Input_Write(message[1],message[2],message[3])
elif "read" in message[0].lower():
## message[1] AIM_name, message[2] port_name
result=MPAI_AIFM_Port_Output_Read(message[1],message[2])
print(message[2], "of", message[1], ":", result, type(result))
elif "reset" in message[0].lower():
## message[1] AIM_name, message[2] port_name
MPAI_AIFM_Port_Reset(message[1],message[2])
elif 'computationalcost' in message[0].lower():
CompCostFlag=ast.literal_eval(str(message[1]))
elif 'run' in message[0].lower():
### here you could wait for a next message instead
# print("waiting for the type of run")
# clientsocket, addr = serversocket.accept()
# data = clientsocket.recv(1024)
# message = data.decode()
# message = message.split()
if "robustness" in message[1].lower():
# run robustness 1 {"P":.1}
param = ast.literal_eval(str(message[3]))
M_ID = int(message[2])
root = tk.Tk()
root.withdraw()
reload = filedialog.askopenfilename(title='Select the parameter file') # show an "Open" dialog box and return the path to the selected file
watermarked_parameters = torch.load(reload, map_location=torch.device('cpu'))
config.message["WatermarkedParameter"] = watermarked_parameters
tools = Uchi_tools()
root = tk.Tk()
root.withdraw()
reload_npy = filedialog.askopenfilename(title='Select the watermarking_dict')
watermarking_dict = np.load(reload_npy, allow_pickle=True).item()
MPAI_AIFM_Port_Input_Write("WatermarkDecoder", "tools", tools)
MPAI_AIFM_Port_Input_Write("WatermarkDecoder", "watermarking_dict", watermarking_dict)
MPAI_AIFM_Port_Input_Write("Comparator", "Payload", watermarking_dict["watermark"])
### Automatized
for elements in config.Topology:
# print(elements)
if elements["Output"]["AIMName"]=="":
MPAI_AIFM_Port_Input_Write(elements["Input"]["AIMName"], elements["Input"]["PortName"],
config.message[elements["Output"]["PortName"]])
else:
if CompCostFlag:
time1 = time.time()
MPAI_AIFM_AIM_Start(elements["Output"]["AIMName"])
if CompCostFlag:
time2 = time.time()
if elements["Input"]["AIMName"]=="":
print("Output of",elements["Output"]["AIMName"],"port",elements["Output"]["PortName"] )
print(MPAI_AIFM_Port_Output_Read(elements["Output"]["AIMName"],elements["Input"]["PortName"]))
else:
MPAI_AIFM_Port_Input_Write(elements["Input"]["AIMName"], elements["Input"]["PortName"],
MPAI_AIFM_Port_Output_Read(elements["Output"]["AIMName"],elements["Output"]["PortName"]))
MPAI_AIFM_AIM_Start("Comparator")
print('BER : %s' % (MPAI_AIFM_Port_Output_Read("Comparator","output_0")))
if CompCostFlag:
print("time of execution: %.5f sec" %(time2-time1))
elif "imperceptibility" in message[1].lower():
## run imperceptibility vgg16 cifar10
if "vgg16" in message[2].lower():
model = tv.models.vgg16()
model.classifier = nn.Linear(25088, 10)
else:
print(message[2],"not found - default loading vgg16")
model = tv.models.vgg16()
model.classifier = nn.Linear(25088, 10)
if "cifar10" in message[3].lower():
trainset,testset,tfm=CIFAR10_dataset()
else:
print(message[3],"not found - default loading CIFAR10")
trainset,testset,tfm=CIFAR10_dataset()
MPAI_AIFM_Port_Input_Write("WatermarkEmbedder", "AIM", model)
if CompCostFlag:
time1=time.time()
MPAI_AIFM_AIM_Start("WatermarkEmbedder")
if CompCostFlag:
time2=time.time()
MPAI_AIFM_Port_Input_Write("AIM", "model", model)
MPAI_AIFM_Port_Input_Write("AIM", "parameters", MPAI_AIFM_Port_Output_Read("WatermarkEmbedder","output_0"))
MPAI_AIFM_Port_Input_Write("AIM", "testingDataset", testset)
MPAI_AIFM_AIM_Start("AIM")
print('AIM_watermarked_result : %s' % (MPAI_AIFM_Port_Output_Read("AIM","output_0")))
if CompCostFlag:
print("time of execution: %.5f sec" %(time2-time1))
MPAI_AIFM_Port_Input_Write("AIMtrainer", "AIM", model)
MPAI_AIFM_AIM_Start("AIMtrainer")
MPAI_AIFM_Port_Input_Write("AIM", "model", model)
MPAI_AIFM_Port_Input_Write("AIM", "parameters", MPAI_AIFM_Port_Output_Read("AIMtrainer","output_0"))
MPAI_AIFM_Port_Input_Write("AIM", "testingDataset", testset)
MPAI_AIFM_AIM_Start("AIM")
print('AIM_unwatermarked_result : %s' % (MPAI_AIFM_Port_Output_Read("AIM", "output_0")))
else:
print(message[1].lower(),"not implemented")
elif 'status' in message[0].lower():
print(config.dict_process)
MPAI_AIFM_AIM_GetStatus(message[1])
elif 'pause' in message[0].lower():
MPAI_AIFM_AIM_Pause(message[1])
elif 'resume' in message[0].lower():
MPAI_AIFM_AIM_Resume(message[1])
elif 'stop' in message[0].lower():
if message[1].lower() in config.dict_process:
config.dict_process[message[1]].terminate()
print( message[1], "stopped")
else:
print(message[1], "isn't running")
elif "exit" in message[0].lower():
print("ending session...")
break
else:
print("input not implemented")
clientsocket.close()
print("session ended")
### TO DO https://docs.python.org/3/library/multiprocessing.html
import torch
import os
import wavmark
from wavmark.utils import file_reader
from PyQt5 import QtCore, QtGui, QtWidgets
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
global file_path_g
def funcNNWProof( input):
'''
Verify the inference
'''
payload = [0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1]
model = wavmark.load_model().to(device)
signal = file_reader.read_as_single_channel(input, aim_sr=16000)
# 5.decode watermark
payload_decoded, _ = wavmark.decode_watermark(model, signal, show_progress=True)
if isinstance(payload_decoded, type(None)): return False
BER = (payload != payload_decoded).mean() * 100
return BER == 0
class DragDropMainWindow(QtWidgets.QMainWindow):
fileDropped = QtCore.pyqtSignal(str)
def __init__(self, parent=None):
super(DragDropMainWindow, self).__init__(parent)
self.setAcceptDrops(True)
def dragEnterEvent(self, event):
if event.mimeData().hasUrls():
event.accept()
else:
event.ignore()
def dropEvent(self, event):
for url in event.mimeData().urls():
file_path = str(url.toLocalFile())
self.fileDropped.emit(file_path)
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(800, 600)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.pushButton = QtWidgets.QPushButton(self.centralwidget)
self.pushButton.setGeometry(QtCore.QRect(300, 470, 221, 61))
self.pushButton.setObjectName("pushButton")
self.pushButton_2 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_2.setGeometry(QtCore.QRect(590, 60, 191, 81))
self.pushButton_2.setObjectName("pushButton_2")
self.label = QtWidgets.QLabel(self.centralwidget)
self.label.setGeometry(QtCore.QRect(70, 110, 721, 391))
self.label.setObjectName("label")
MainWindow.setCentralWidget(self.centralwidget)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
# Connect button clicks to functions
self.pushButton.clicked.connect(self.run_UseCase)
self.pushButton_2.clicked.connect(self.watermark_proof)
self.file_path_g=None
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.pushButton.setText(_translate("MainWindow", "Running the UseCase"))
self.pushButton_2.setText(_translate("MainWindow", "Watermarking proof"))
self.label.setText(_translate("MainWindow", "<html><head/><body><p><img src=\"MPAI_NNW-MQA.png\"/></p></body></html>"))
def run_UseCase(self):
# Function to execute when the "Running the UseCase" button is clicked
print("Openning new Window")
os.system("gnome-terminal & disown")
def watermark_proof(self):
# Function to execute when the "WaterMarking proof" button is clicked
if self.file_path_g is None:
print("Please, first drag an audio file")
else:
print("Processing...")
answer=funcNNWProof(self.file_path_g)
if answer:
print("This audio is watermarked")
else:
print("This audio is not watermarked")
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = DragDropMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
def file_dropped(file_path):
ui.file_path_g=file_path
MainWindow.fileDropped.connect(file_dropped)
MainWindow.show()
sys.exit(app.exec_())