Implements the Technical Specification of [MPAI CAE-ARP](https://mpai.community/), producing:
* 2 Irregularity Files;
* Irregularity Images.
## Getting started
The *Video Analyser* is written in C++23. It relies on OpenCV to process Irregularity Images, on the Boost C++ Libraries to build the command line interface and generate UUIDs, and on [nlohmann/json](https://github.com/nlohmann/json) to read the configuration file.
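If you cloned the repository without the `--recursive` option, you can fetch the submodules afterwards with:
```
git submodule update --init --recursive
```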
## Installation
[Boost C++ Libraries](https://www.boost.org) are required for creating the command line interface (with [Boost.Program_options](https://www.boost.org/doc/libs/1_81_0/doc/html/program_options.html)) and generating UUIDs (with [Boost.Uuid](https://www.boost.org/doc/libs/1_81_0/libs/uuid/doc/uuid.html)). You can install them following the [official instructions](https://www.boost.org/doc/libs/1_81_0/more/getting_started/unix-variants.html) (Boost version 1.81.0).
The Boost `program_options` library must be built separately, following [these additional instructions](https://www.boost.org/doc/libs/1_81_0/more/getting_started/unix-variants.html#easy-build-and-install).
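Following those instructions, the separate build of `program_options` typically boils down to the following commands, run from the Boost source root (the prefix path is illustrative):
```
./bootstrap.sh --prefix=/usr/local --with-libraries=program_options
sudo ./b2 install
```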
[OpenCV](https://docs.opencv.org/4.x/index.html) is required for processing Irregularity Images. You can install it following the [official instructions](https://docs.opencv.org/3.4/d0/db2/tutorial_macos_install.html).
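On Ubuntu, prebuilt packages are an alternative to building from source (package names assume a recent Ubuntu release; note that this installs a prebuilt `program_options` as well):
```
sudo apt-get install libboost-all-dev libopencv-dev nlohmann-json3-dev
```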
Finally, [nlohmann/json](https://github.com/nlohmann/json) is required for reading the configuration file. Installation instructions are under the "Integration" section of its README.
In the root folder there is a CMakeLists.txt file that specifies the configuration for CMake: the minimum required version of CMake, the project name, the C++ standard version, the source files, the include directories and the libraries to link. Once the libraries are installed, you can build the *Video Analyser* by moving to the `/build` directory and invoking `cmake ..` followed by `make`, or by simply running `make build` from the root folder.
## Usage
Once the program is built, you should customise the configuration file `config.json`.
There are four required parameters of interest:
1. `WorkingPath`, the working path where all input files are stored and where all output files will be saved;
2. `FilesName`, the name of the preservation files to be considered;
3. `Brands`, whether the tape presents brands on its surface;
4. `Speed`, the speed at which the tape was read.
There are also other required parameters which deeply influence the behaviour of the *Video Analyser* and, therefore, ***should not be modified unless you know exactly what you are doing***. They are:
1. `TapeThresholdPercentual`, the minimum percentage of differing pixels for considering the current frame under the tape ROI a potential Irregularity;
2. `CapstanThresholdPercentual`, the minimum percentage of differing pixels for considering the current frame under the capstan ROI a potential Irregularity;
3. `MinDist`, the minimum distance between the centers of the detected objects for the detection of the reading head;
4. `AngleThresh`, the angle votes threshold for the detection of the reading head;
5. `ScaleThresh`, the scale votes threshold for the detection of the reading head;
6. `PosThresh`, the position votes threshold for the detection of the reading head;
7. `MinDistCapstan`, the minimum distance between the centers of the detected objects for the detection of the capstan;
8. `AngleThreshCapstan`, the angle votes threshold for the detection of the capstan;
9. `ScaleThreshCapstan`, the scale votes threshold for the detection of the capstan;
10. `PosThreshCapstan`, the position votes threshold for the detection of the capstan.
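Putting the parameters together, a `config.json` might look like this (all values are purely illustrative, not recommended defaults):
```json
{
  "WorkingPath": "/path/to/working/dir",
  "FilesName": "File1",
  "Brands": false,
  "Speed": 7.5,
  "TapeThresholdPercentual": 1.0,
  "CapstanThresholdPercentual": 1.0,
  "MinDist": 100,
  "AngleThresh": 100,
  "ScaleThresh": 100,
  "PosThresh": 100,
  "MinDistCapstan": 100,
  "AngleThreshCapstan": 100,
  "ScaleThreshCapstan": 100,
  "PosThreshCapstan": 100
}
```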
To execute the script without issues, the inner structure of the `WorkingPath` directory shall be like:
Add the Preservation Files to the `data` directory following this structure:
The `PreservationAudioFile` and `PreservationAudioVisualFile` directories contain the input of the ARP Workflow, while the `AccessCopyFiles` and `PreservationMasterFiles` directories contain its output. The `temp` directory is used to store all files exchanged between the AIMs within the Workflow.
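The exact directory trees are not reproduced in this excerpt; from the description above, the layout of `WorkingPath` presumably resembles the following (file names, extensions and the placement of the output directories are illustrative):
```
WorkingPath/
├── data/
│   ├── PreservationAudioFile/
│   │   ├── File1.wav
│   │   └── File2.wav
│   └── PreservationAudioVisualFile/
│       ├── File1.mov
│       └── File2.mov
├── AccessCopyFiles/
├── PreservationMasterFiles/
└── temp/
```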
Please note that:
* Corresponding input files shall present the same name;
* The name of Irregularity Files given above is ***mandatory***.
With this structure, the `FilesName` parameter could be equal to `File1` or `File2`.
You can now launch the *Video Analyser* by moving to the `/bin` directory from the command line:
```
cd /path/to/video/analyser/bin
./video_analyser
```
or just run `make run` from the root directory.
Useful log information will be displayed during execution.
To enable integration in more complex workflows, it is also possible to launch the *Video Analyser* with command line arguments:
```
./video_analyser -h
```
displays all the available instructions.
## Documentation
Along with the source code, the documentation of the *Video Analyser* is provided in the `docs` folder. The documentation is generated with [Doxygen](https://www.doxygen.nl/index.html) and can be accessed by opening the `index.html` file in the `docs/html` folder with a browser.
To generate the documentation, run the following command from the root folder:
```
make docs
```
Note that Doxygen must be installed on your machine.
## Support
If you require additional information or encounter any problems, you can contact us at:
...
...
This project takes advantage of the following libraries:
* [Boost C++ Libraries](https://www.boost.org);
* [OpenCV](https://docs.opencv.org/4.x/index.html);
* [nlohmann/json](https://github.com/nlohmann/json).
Developed with IDE [Visual Studio Code](https://code.visualstudio.com).
## License
This project is licensed under the [GNU GPL v3.0](https://www.gnu.org/licenses/gpl-3.0.html).
# TODO
This section refers to the code delivered by February 2023.
- To be able to work with the "old" neural network (by Ilenya), the output images should correspond to the old "whole tape" format: from the frame judged as interesting, an area was extracted with the height of the tape (roughly the height of the current rectangle) and the full width of the original frame (720 px). This area then has to be resized to 224x224, as in the past. If the new neural network is used instead, no changes are needed;
- A resize function for the entire video should be implemented in case it does not conform to the PAL standard (which is currently taken for granted);
- Progressive videos, which do not require deinterlacing, should be handled (several steps in the code assume interlaced input).
## Installation
[Boost C++ Libraries](https://www.boost.org) are required for creating the command line interface (with [Boost.Program_options](https://www.boost.org/doc/libs/1_81_0/doc/html/program_options.html)) and generating UUIDs (with [Uuid](https://www.boost.org/doc/libs/1_81_0/libs/uuid/doc/uuid.html)).
You can install them following [official instructions](https://www.boost.org/doc/libs/1_81_0/more/getting_started/unix-variants.html)(Boost version 1.81.0).
Boost `program_options` library shall be separately built following [these additional instructions](https://www.boost.org/doc/libs/1_81_0/more/getting_started/unix-variants.html#easy-build-and-install).
[OpenCV](https://docs.opencv.org/4.x/index.html) is required for elaborating Irregularity Images. You can install it following [official instructions](https://docs.opencv.org/3.4/d0/db2/tutorial_macos_install.html).
To install OpenCV and Boost C++ Libraries on Ubuntu, run the following command:
Finally, [nlohmann/json](https://github.com/nlohmann/json) is required for reading the configuration file.
Installation instructions are under the "Integration" section.