VisioDECT Dataset: An Aerial Dataset for Scenario-Based Multi-Drone Detection and Identification

Citation Author(s):
Simeon Okechukwu Ajakwe, Vivian Ukamaka Ihekoronye, Golam Mohtasin, Rubina Akter, Ali Aouto, Dong Seong Kim, and Jae Min Lee
Department of IT Convergence Engineering, Kumoh National Institute of Technology, South Korea
Submitted by:
Simeon Ajakwe
Last updated:
Tue, 12/20/2022 - 01:24
DOI:
10.21227/n27q-7e06

Abstract 

The deployment of unmanned aerial vehicles (UAVs) for logistics and other civil purposes is consistently disrupting airspace security. At the same time, there is a scarcity of robust datasets for developing real-time systems that can counter the illicit use of UAVs for criminal or terrorist activities. VisioDECT is a robust vision-based drone dataset for classifying, detecting, and countering unauthorized drone deployment using visual and electro-optical infrared detection technologies.

 

Dataset Methodology

The dataset consists of 20,924 sample images and corresponding annotations covering 6 drone models across 3 scenarios (cloudy, sunny, and evening), captured at different altitudes and distances (30 m-100 m). Annotations are provided in 3 file formats (txt, xml, csv). The data were generated at 12 different locations over a period of 1 year and 8 months by a team of domain experts.

The materials used for data capture include drone models (Anafi-Extended, DJI FPV, DJI Phantom, EFT-E410S, Mavic2-Air, and Mavic2-Enterprise), drone controllers, mobile phones running a controller application, high-definition digital cameras, and tripod stands. Each drone model was flown at different altitudes and distances at different times of the day, week, and month, and the video sequence of each scenario was recorded. Using standard software tools, each video sequence was converted to JPEG image frames of 852 x 480 pixels and stored in repositories representing each model class and scenario sub-class; a minimal sketch of this conversion step is shown below. To minimize error, trained professionals cleaned each repository by manually removing image frames in which no drone appears. Data annotation was carried out by trained experts on each scenario sub-class in 3 file formats (txt, xml, and csv) by manually drawing a bounding box around the drone in each image to generate the corresponding label file. To ensure a consistent naming convention and minimize error, the label files of each scenario sub-class are named to match their image files and stored in repositories accordingly.
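The description above does not name the conversion software, so the following is only a minimal sketch of the video-to-frame step, assuming OpenCV in Python; the input video path and output folder are hypothetical.

import os
import cv2

VIDEO_PATH = "recordings/mavic2-air_cloudy.mp4"  # hypothetical input video
OUT_DIR = "Mavic2-Air/images/cloudy"             # hypothetical output folder
FRAME_SIZE = (852, 480)                          # width x height, per the dataset description

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of the video sequence
    frame = cv2.resize(frame, FRAME_SIZE)
    cv2.imwrite(os.path.join(OUT_DIR, "frame_%05d.jpg" % index), frame)
    index += 1
cap.release()

Frames without a visible drone would then be removed manually, as described above.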

 

Dataset File Structure

The VisioDECT dataset is arranged in 6 folders (one per drone model), each containing 2 sub-folders (an images folder and a labels folder). Each images folder holds 3 scenario folders (cloudy, sunny, and evening) containing the image files in .JPG format. Each labels folder contains the annotations for the 3 scenarios (in .txt, .csv, and .xml formats) corresponding to the 3 scenario image folders. This layout makes it easy to run classification, detection, and other image-processing experiments on the dataset using custom models or different state-of-the-art artificial intelligence (AI) models; an illustrative layout and loading sketch follow below.
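Based on the structure described above, the on-disk layout presumably resembles the following (folder names are illustrative):

VisioDECT/
  Mavic2-Air/
    images/
      cloudy/  sunny/  evening/    (frame image files, .JPG)
    labels/
      cloudy/  sunny/  evening/    (matching .txt/.csv/.xml annotations)
  ... (5 more model folders)

The sketch below pairs images with their txt labels for one model and scenario. It assumes per-image txt files in YOLO style (class x_center y_center width height, normalized); consult the documentation file to confirm the exact annotation layout.

from pathlib import Path

ROOT = Path("VisioDECT")                  # hypothetical dataset root
model, scenario = "Mavic2-Air", "cloudy"  # one of 6 models, 3 scenarios

image_dir = ROOT / model / "images" / scenario
label_dir = ROOT / model / "labels" / scenario

samples = []
for image_path in sorted(image_dir.glob("*.JPG")):
    label_path = label_dir / (image_path.stem + ".txt")
    boxes = []
    if label_path.exists():
        for line in label_path.read_text().splitlines():
            cls, cx, cy, w, h = line.split()  # assumed YOLO-style fields
            boxes.append((int(cls), float(cx), float(cy), float(w), float(h)))
    samples.append((image_path, boxes))

print(model, scenario, len(samples), "images")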

 

Author Citations:

Users of the dataset may wish to cite these articles for reference.

S. O. Ajakwe, V. U. Ihekoronye, D.-S. Kim and J. M. Lee, "DRONET: Multi-Tasking Framework for Real-Time Industrial Facility Aerial Surveillance and Safety," Drones, vol. 6, no. 2, p. 46, 2022, doi: 10.3390/drones6020046.

 

S. O. Ajakwe, V. U. Ihekoronye, D.-S. Kim and J.-M. Lee, "Tractable Minacious Drones Aerial Recognition and Safe-Channel Neutralization Scheme for Mission Critical Operations," 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), 2022, pp. 1-8, doi: 10.1109/ETFA52439.2022.9921494.

 

S. O. Ajakwe, V. U. Ihekoronye, R. Akter, D.-S. Kim and J. M. Lee, "Adaptive Drone Identification and Neutralization Scheme for Real-Time Military Tactical Operations," 2022 International Conference on Information Networking (ICOIN), 2022, pp. 380-384, doi: 10.1109/ICOIN53446.2022.9687268.

 

V. U. Ihekoronye, S. O. Ajakwe, D.-S. Kim and J. M. Lee, "Aerial Supervision of Drones and Other Flying Objects Using Convolutional Neural Networks," 2022 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), 2022, pp. 69-74, doi: 10.1109/ICAIIC54071.2022.9722702.


Instructions: 

The documentation file contains a full description of the dataset: its collection, preprocessing, and file structure for model training, testing, and validation.

Contact the Networked System Laboratory (http://nsl.kumoh.ac.kr/) for further information on the dataset.

Funding Agency: 
This work was supported by the Priority Research Centers Program through the NRF, funded by MEST (2018R1A6A1A03024003), and by the Grand Information Technology Research Center support program (IITP-2021-2020-0-01612), supervised by the IITP and funded by MSIT, Korea.
Grant Number: 
2018R1A6A1A03024003 and IITP-2021-2020-0-01612