# Getting started

The *Video Analyser* is written in C++20. It relies on OpenCV to process Irregularity Images and on the Boost C++ Libraries to create the command-line interface and generate UUIDs. JSON files are read with [nlohmann/json](https://github.com/nlohmann/json).

To clone the repository, run:

```
git clone https://gitlab.dei.unipd.it/mpai/video-analyzer.git
```

Since the documentation uses a git repository as a submodule, you should clone with the `--recursive` option:

```
git clone --recursive https://gitlab.dei.unipd.it/mpai/video-analyzer.git
```

If you have already cloned the repository without the `--recursive` option, you can fetch the submodule with:

```
git submodule update --init --recursive
```

[TOC]

## Installation

The [Boost C++ Libraries](https://www.boost.org) are required for creating the command-line interface (with [Boost.Program_options](https://www.boost.org/doc/libs/1_81_0/doc/html/program_options.html)) and generating UUIDs (with [Boost.Uuid](https://www.boost.org/doc/libs/1_81_0/libs/uuid/doc/uuid.html)). You can install them following the [official instructions](https://www.boost.org/doc/libs/1_81_0/more/getting_started/unix-variants.html) (Boost version 1.81.0). The Boost `program_options` library must be built separately, following [these additional instructions](https://www.boost.org/doc/libs/1_81_0/more/getting_started/unix-variants.html#easy-build-and-install).

[OpenCV](https://docs.opencv.org/4.x/index.html) is required for processing Irregularity Images. You can install it following the [official instructions](https://docs.opencv.org/3.4/d0/db2/tutorial_macos_install.html).
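Before installing the libraries, it can help to verify that the core build tools are available. The following is a minimal pre-flight sketch; the tool names assume a typical Unix-like environment and may need adjusting on your system.

```shell
# Pre-flight check: verify the basic build tools are on the PATH.
missing=""
for tool in git cmake g++ make; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
    echo "all build tools found"
else
    echo "missing tools:$missing"
fi
```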
To install OpenCV and the Boost C++ Libraries on Ubuntu, run:

```
sudo apt update && sudo apt install libboost-program-options-dev git build-essential cmake g++ wget unzip python3 python3-pip libgtk-3-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libjpeg-dev libpng-dev python3-dev libavdevice-dev libdc1394-dev libgstreamer-opencv1.0-0 libavutil-dev ffmpeg
```

To compile OpenCV from source with all the optional libraries, run:

```
mkdir opencv_source && cd ./opencv_source && wget -O opencv.zip https://github.com/opencv/opencv/archive/4.5.4.zip && wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.5.4.zip && unzip opencv.zip && unzip opencv_contrib.zip && mkdir -p build && cd ./build && cmake -D OPENCV_GENERATE_PKGCONFIG=YES -D WITH_FFMPEG=ON -D WITH_V4L=ON -D WITH_PNG=ON -D WITH_GSTREAMER=ON -D BUILD_opencv_video=ON -D BUILD_opencv_videoio=ON -D OPENCV_ENABLE_NONFREE=ON -D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib-4.5.4/modules ../opencv-4.5.4 && make -j4 && make install
```

Finally, [nlohmann/json](https://github.com/nlohmann/json) is required for reading the configuration file. Installation instructions can be found under the "Integration" section of its README.

The root folder contains a `CMakeLists.txt` file with the CMake configuration. It specifies:

- the minimum required version of CMake;
- the project name;
- the C++ standard version;
- the source files;
- the include directories;
- the libraries to link.

Once the libraries are installed, you can build the *Video Analyser* by moving to the `build` directory and invoking CMake:

```
cd /path/to/video/analyser/build
cmake ..
make
```

or just run `make build` from the root folder.

### Docker

A Dockerfile is provided to build a Docker image with the *Video Analyser*. To build the image, run the following command from the root folder:

```
docker build -t mpai-video-analyzer .
```

To run the container:

```
docker run -it --rm -v /path/to/video/analyser:/app -v /path/to/your/data:/data mpai-video-analyzer /bin/bash
```

where `/path/to/video/analyser` is the path to the *Video Analyser* folder. This mounts the *Video Analyser* folder in the container and starts a bash shell, in which you can build the *Video Analyser* as described in the previous section. The advantage of using Docker is that you do not have to install the dependencies on your machine: you can build the *Video Analyser* entirely inside a container.

## Usage

Once the program is built, you should customise the configuration file `config/config.json`. There are four required parameters of interest:

1. `WorkingPath` that specifies the working path where all input files are stored and where all output files will be saved;
2. `FilesName` that specifies the name of the preservation files to be considered;
3. `Brands` that specifies whether the tape presents brands on its surface;
4. `Speed` that specifies the speed at which the tape was read.

There are also other required parameters which deeply influence the behaviour of the *Video Analyser* and, therefore, ***should not be modified unless you know exactly what you are doing***. They are:

1. `TapeThresholdPercentual` that specifies the minimum percentage of different pixels for considering the current frame under the tape ROI as a potential Irregularity;
2. `CapstanThresholdPercentual` that specifies the minimum percentage of different pixels for considering the current frame under the capstan ROI as a potential Irregularity;
3. `AngleThresh` that specifies the angle votes threshold for the detection of the reading head;
4. `ScaleThresh` that specifies the scale votes threshold for the detection of the reading head;
5. `PosThresh` that specifies the position votes threshold for the detection of the reading head;
6. 
`MinDistCapstan` that specifies the minimum distance between the centers of the detected objects for the detection of the capstan;
7. `AngleThreshCapstan` that specifies the angle votes threshold for the detection of the capstan;
8. `ScaleThreshCapstan` that specifies the scale votes threshold for the detection of the capstan;
9. `PosThreshCapstan` that specifies the position votes threshold for the detection of the capstan.

To execute the program without issues, the inner structure of the `WorkingPath` directory shall be as follows:

```
.
├── PreservationAudioFile
│   ├── File1.wav
│   ├── File2.wav
│   └── ...
├── PreservationAudioVisualFile
│   ├── File1.mp4
│   ├── File2.mp4
│   └── ...
└── temp
    ├── File1
    │   ├── AudioAnalyser_IrregularityFileOutput1.json
    │   ├── AudioAnalyser_IrregularityFileOutput2.json
    │   ├── AudioBlocks
    │   │   ├── AudioBlock1.wav
    │   │   ├── AudioBlock2.wav
    │   │   └── ...
    │   ├── EditingList.json
    │   ├── IrregularityImages
    │   │   ├── IrregularityImage1.jpg
    │   │   ├── IrregularityImage2.jpg
    │   │   └── ...
    │   ├── RestoredAudioFiles
    │   │   ├── RestoredAudioFile1.wav
    │   │   ├── RestoredAudioFile2.wav
    │   │   └── ...
    │   ├── TapeIrregularityClassifier_IrregularityFileOutput1.json
    │   ├── TapeIrregularityClassifier_IrregularityFileOutput2.json
    │   ├── VideoAnalyser_IrregularityFileOutput1.json
    │   └── VideoAnalyser_IrregularityFileOutput2.json
    └── File2
        ├── AudioAnalyser_IrregularityFileOutput1.json
        └── ...
```

The `PreservationAudioFile` and `PreservationAudioVisualFile` directories contain the input of the ARP Workflow. The `temp` directory is used to store all files exchanged between the AIMs within the Workflow. Please note that:

* Corresponding input files shall have the same name;
* The names of the Irregularity Files given above are ***mandatory***.

With this structure, the `FilesName` parameter could be equal to `File1` or `File2`.
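Putting the parameters above together, a `config/config.json` for this layout might look like the sketch below. The keys are the ones documented above, but the values (and their types) are purely illustrative placeholders, not the shipped defaults:

```
{
    "WorkingPath": "/data",
    "FilesName": "File1",
    "Brands": false,
    "Speed": 7.5,
    "TapeThresholdPercentual": 0.05,
    "CapstanThresholdPercentual": 0.05,
    "AngleThresh": 90,
    "ScaleThresh": 90,
    "PosThresh": 90,
    "MinDistCapstan": 50,
    "AngleThreshCapstan": 90,
    "ScaleThreshCapstan": 90,
    "PosThreshCapstan": 90
}
```

Check the `config/config.json` shipped with the repository for the actual default values before editing anything beyond the four required parameters of interest.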
You can now launch the *Video Analyser* by moving to the `/bin` directory and running:

```
cd /path/to/video/analyser/bin
./video_analyser
```

Useful log information will be displayed during execution.

To enable integration in more complex workflows, the *Video Analyser* can also be launched with command-line arguments:

```
./video_analyser [-h] -w WORKING_PATH -f FILES_NAME -b BRANDS -s SPEED
```

If you use the `-h` flag:

```
./video_analyser -h
```

all instructions will be displayed.
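With the command-line arguments above, the *Video Analyser* can be driven from a small wrapper script, e.g. to process several preservation files in sequence. The sketch below is a dry run that only prints the commands it would execute; the working path, brands, and speed values are illustrative assumptions, not defaults:

```shell
# Dry-run sketch of a batch driver: builds one command line per
# preservation file name. All values below are illustrative.
WORKING_PATH=/data
BRANDS=true
SPEED=7.5
for name in File1 File2; do
    cmd="./video_analyser -w $WORKING_PATH -f $name -b $BRANDS -s $SPEED"
    echo "$cmd"   # replace 'echo' with an actual invocation to run it
done
```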