3D point cloud and RGBD of pedestrians in robot crowd navigation: detection and tracking

Abstract 

This dataset, crowdbot, presents outdoor pedestrian tracking from onboard sensors of a personal mobility robot navigating in crowds. The robot, Qolo, a personal mobility vehicle for people with lower-body impairments, was equipped with a reactive navigation controller operating in shared-control or autonomous mode while navigating three different streets of Lausanne, Switzerland, during farmers' market days and the Christmas market. Full dataset here: DOI: 10.21227/ak77-d722

The dataset includes point clouds from frontal and rear 3D LiDARs (Velodyne VLP-16) at 20 Hz and a front-facing RGBD camera (Intel RealSense D435). The data comprise over 250k frames of recordings in crowds, from light densities of 0.1 ppsm (people per square metre) to dense crowds of over 1.0 ppsm. We provide the robot state (pose and velocity) and contact sensing from a force/torque sensor (Botasys Rokubi 2.0).
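As a quick sanity check, the stated frame count and the 20 Hz LiDAR rate are consistent with the "over 200 minutes" of recordings claimed below (assuming frames are counted at the LiDAR scan rate):

```python
# Back-of-the-envelope check: 250k frames at the 20 Hz LiDAR rate
# should correspond to the stated "over 200 minutes" of recordings.
frames = 250_000
rate_hz = 20                      # Velodyne VLP-16 scan rate on Qolo
minutes = frames / rate_hz / 60
print(f"{minutes:.0f} minutes")   # ~208 minutes
```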

We provide metadata of people detection and tracking from onboard real-time sensing (DR-SPAAM detector), people-class-labelled 3D point clouds (AB3DMOT tracker), estimated crowd density, proximity to the robot, and path-efficiency metrics of the robot controller (time to goal, path length, and virtual collisions).
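The density and proximity metrics can be reproduced from tracked pedestrian positions in the robot frame. A minimal sketch, assuming 2D positions in metres and a fixed evaluation radius around the robot (the exact radius and conventions used in crowdbot-evaluation-tools may differ):

```python
import numpy as np

def crowd_metrics(positions, radius=2.5):
    """Estimate local crowd density (ppsm) and minimum proximity.

    positions : (N, 2) array of pedestrian x, y coordinates in the
                robot frame, in metres.
    radius    : evaluation radius around the robot (assumed value;
                the repository defines its own).
    """
    positions = np.asarray(positions, dtype=float)
    dists = np.linalg.norm(positions, axis=1)   # distance of each person
    area = np.pi * radius ** 2                  # evaluation area in m^2
    density = (dists <= radius).sum() / area    # people per square metre
    proximity = dists.min() if len(dists) else np.inf
    return density, proximity

# Example: 4 tracked pedestrians, 3 of them within 2.5 m of the robot
peds = [(1.0, 0.5), (2.0, -1.0), (0.5, 2.0), (4.0, 4.0)]
density, prox = crowd_metrics(peds)
```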

Each recording in the dataset includes approximately 120 s of data in rosbag format for Qolo's sensors, as well as data in npy format for easy reading and access. All code for metadata processing and extraction from the raw files is provided open access: epfl-lasa/crowdbot-evaluation-tools
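The npy files can be read directly with NumPy. The snippet below uses a stand-in array, and the column layout (timestamp, track id, x, y, z) is an assumption for illustration; the actual file names and layouts are documented in the crowdbot-evaluation-tools repository:

```python
import numpy as np

# Stand-in for one recording's track file. The assumed column layout
# here is (timestamp, track_id, x, y, z); see the repository docs
# for the real per-recording file names and formats.
demo = np.array([
    [0.00, 1, 1.0, 0.5, 0.0],
    [0.05, 2, 2.0, -1.0, 0.0],
    [0.10, 1, 1.1, 0.6, 0.0],
])
np.save("demo_tracks.npy", demo)

tracks = np.load("demo_tracks.npy")   # read a recording back
traj_1 = tracks[tracks[:, 1] == 1]    # trajectory of track id 1
print(traj_1.shape)                   # (2, 5)
```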

  • 250k frames of data, over 200 minutes of recordings
  • Qolo robot state: localization, pose, velocity, controller state
  • 2× 3D point clouds (Velodyne VLP-16)
  • 1× RGBD colour and depth camera (Intel RealSense D435)
  • 3× people detectors (DR-SPAAM, YOLO)
  • 1× tracker
  • Contact forces (Botasys Rokubi 2.0)

Note: the current data do not contain the RGBD images, as they are being processed for face blurring. Nonetheless, YOLO people-detection outputs are included. The RGBD images with bounding boxes will be uploaded as soon as the process is completed.

Cite as: Paez-Granados D., Hen Y., Gonon D., Huber L., & Billard A., "3D point cloud and RGBD of pedestrians in robot crowd navigation: detection and tracking," IEEE DataPort, Dec. 2021, doi: 10.21227/ak77-d722.

Instructions: 
Download the data files and place them under data/ (either the uncompressed files or symbolic links):
cd path/to/crowdbot_tools/data
# (recommended) create symbolic links to the rosbag folders instead of copying data:
ln -s /data_qolo/lausanne_2021/24_04_2021/shared_control 0424_shared_control
ln -s /data_qolo/lausanne_2021/24_04_2021/RDS/detector 0424_rds_detector

The file structure used, along with examples of how to access the data, is documented in the open repository: https://github.com/epfl-lasa/crowdbot-evaluation-tools
Funding Agency: 
EU-H2020
Grant Number: 
CROWDBOT - 779942
