Motion Magnification Videos sources and results

Citation Author(s):
Gabriel Gama
Tháis B. Baker
Submitted by:
Gabriel Gama
Last updated:
Thu, 05/11/2023 - 15:49
DOI:
10.21227/t7td-d603

Abstract 

This dataset investigates the suitability of different filters for Learning-Based Motion Magnification (LBMM) and examines the impact of filter parameters on the output. The study finds that the Butterworth filter produces satisfactory results, while the analysis of IIR filters was hampered by computational and memory limitations. Additionally, the efficacy of IIR filters for image processing and the reliability of FIR filters are called into question. The study observes that the filter order used in a Butterworth filter has a considerable effect on the output, with order 3 exhibiting no adverse effects. Furthermore, the study highlights the importance of adhering to the Nyquist theorem when recording physical phenomena for image processing, to ensure accurate and high-quality results. The dataset concludes that filter selection and parameter adjustment should be thoughtfully considered to achieve better results in this use case.

Instructions: 

Unzip the files and follow the instructions at the following link: https://github.com/12dmodel/deep_motion_mag/blob/master/README.md

 

This repository was modified from the repository stored at [Original repository]. It was written in TensorFlow 1.x, whose packages are no longer easily found on the web, so some header and code modifications had to be made. Other files have been added to facilitate various runs.

 

The original data file is at this link: [Data]

 

Step-by-step installation

You can also follow the steps on this [site].

It is important to install the video card driver, CUDA 11.2, and cuDNN. Follow these steps:

Install Python.

Install WSL 2 (https://learn.microsoft.com/en-us/windows/wsl/install).

At the Windows command prompt, type:

wsl --install

Install the NVIDIA driver (https://www.nvidia.com.br/Download/index.aspx?lang=br).

Install CUDA 11.2.

Download cuDNN 8.2 and copy its contents into ..\NVIDIA GPU Computing Toolkit\CUDA\v11.2
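As a quick check that the cuDNN files landed in the right place, a minimal sketch (the directory layout is an assumption based on the step above; cuDNN ships its headers under include/):

```python
# Minimal sketch: verify a cuDNN header was copied into the CUDA tree.
# The include/ layout is an assumption based on the copy step above.
from pathlib import Path

def cudnn_installed(cuda_root):
    """Return True if a cuDNN header (cudnn*.h) exists under include/."""
    return any(Path(cuda_root).glob("include/cudnn*.h"))
```

For example, calling it on the ..\NVIDIA GPU Computing Toolkit\CUDA\v11.2 folder should return True once the copy succeeded.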

Update Ubuntu on WSL 2 and install pip:

sudo apt-get update

sudo apt-get upgrade

sudo apt install python3-pip

Install TensorFlow:

pip install tensorflow-cpu==2.10

pip install tensorflow-directml-plugin

Download and install VS Code.

Install ffmpeg:

sudo apt install ffmpeg

Package installation

Install the necessary packages:

sudo apt-get install python-dev # required to install setproctitle

pip install -r requirements.txt

Running the code

The software developed by the MIT team has two modes of execution; the steps are similar for both:

1. Put the video inside the data/vids folder.

2. Create a folder with the same name as the video.

3. Split the video into frames: open the WSL terminal and run the following command:

ffmpeg -i <videoname> -f image2 <name of folder created in step 2>/%06d.png
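The folder naming and ffmpeg call above can be sketched as a small helper (hypothetical, not part of the repository; it only builds the command string so the paths can be checked before running it):

```python
# Hypothetical helper: build the ffmpeg command that splits a video into
# numbered PNG frames, using a folder named after the video (step 2 above).
from pathlib import Path

def ffmpeg_split_command(video_path):
    video = Path(video_path)
    frame_dir = video.parent / video.stem  # folder with the same name as the video
    return f"ffmpeg -i {video} -f image2 {frame_dir}/%06d.png"
```

For instance, ffmpeg_split_command("data/vids/baby.mp4") returns the command that writes frames to data/vids/baby/000001.png, 000002.png, and so on.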

To run the magnification software:

# Static mode, using first frame as reference.

sh run_on_test_videos.sh o3f_hmhm2_bg_qnoise_mix4_nl_n_t_ds3 baby 10

# Dynamic mode, magnify difference between consecutive frames.

sh run_on_test_videos.sh o3f_hmhm2_bg_qnoise_mix4_nl_n_t_ds3 baby 10 yes

# Using temporal filter (same as Wadhwa et al.)

sh run_temporal_on_test_videos.sh o3f_hmhm2_bg_qnoise_mix4_nl_n_t_ds3 baby 20 0.04 0.4 30 2 differenceOfIIR
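Assuming the temporal-filter arguments above are the low cutoff (0.04 Hz), high cutoff (0.4 Hz), and frame rate (30 fps) — an assumption based on the abstract's discussion of the Nyquist theorem — a minimal sketch of the sanity check a passband should satisfy:

```python
# Minimal sketch: check a temporal passband against the Nyquist limit and
# normalize it the way bandpass filter design routines usually expect.
# The meaning of the 0.04 / 0.4 / 30 arguments above is an assumption.
def normalized_band(fl_hz, fh_hz, fs_hz):
    """Return (low, high) cutoffs normalized to the Nyquist frequency fs/2."""
    nyquist = fs_hz / 2.0
    if not 0.0 < fl_hz < fh_hz < nyquist:
        raise ValueError("band must satisfy 0 < fl < fh < fs/2 (Nyquist theorem)")
    return fl_hz / nyquist, fh_hz / nyquist
```

With the values above, normalized_band(0.04, 0.4, 30) gives roughly (0.0027, 0.027); a camera recording at 30 fps can only capture motion below 15 Hz, which is why the recording frame rate must respect the Nyquist theorem.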

Funding Agency: 
ANEEL/CTG Brasil/Nvidia