Most existing mobile robotic datasets primarily capture static scenes, limiting their utility for evaluating robotic performance in dynamic environments. To address this, we present a mobile-robot-oriented large-scale indoor dataset, THUD++ (TsingHua University Dynamic), for dynamic scene understanding. The current dataset includes 13 large-scale dynamic scenarios, combining real-world and synthetic data collected with a real robot platform and a physical simulation platform, respectively. The RGB-D dataset comprises over 90K image frames, 20M 2D/3D bounding boxes of static and dynamic objects, camera poses, and IMU measurements. The trajectory dataset covers over 6,000 pedestrian trajectories in indoor scenes. Additionally, the dataset is augmented with a Unity3D-based simulation platform, allowing researchers to create custom scenes and test algorithms in a controlled environment. We evaluate state-of-the-art methods on THUD++ across mainstream indoor scene understanding tasks, including 3D object detection, semantic segmentation, relocalization, pedestrian trajectory prediction, and navigation. Our experiments highlight the challenges mobile robots encounter in indoor environments, especially when navigating complex, crowded, and dynamic scenes. By sharing this dataset, we aim to accelerate the development and testing of mobile robot algorithms, contributing to real-world robotic applications.
Overview of the THUD++ robotic dataset. First column: real and synthetic data acquisition platforms; second column: real and synthetic scenarios; third column: dataset components and annotations; fourth column: supported applications.
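For a quick sense of how the RGB-D frames and their annotations might be consumed, below is a minimal loading sketch. The directory layout, file names, and JSON keys (e.g., `thud_pp/scene_01`, `annotations.json`, `bbox3d`) are illustrative assumptions, not the dataset's actual format; please consult the released data for the real structure.

```python
# Minimal sketch of iterating THUD++-style RGB-D frames with annotations.
# NOTE: the directory layout and JSON schema here are assumptions for
# illustration only; the released dataset defines the actual format.
import json
from pathlib import Path

import cv2  # pip install opencv-python
import numpy as np


def load_frames(scene_dir: str):
    """Yield (rgb, depth, annotation) triples for one scene."""
    scene = Path(scene_dir)
    # Hypothetical per-scene annotation file: one record per frame,
    # holding 2D/3D boxes, camera pose, and IMU readings.
    records = json.loads((scene / "annotations.json").read_text())
    for rec in records:
        rgb = cv2.imread(str(scene / "rgb" / rec["frame"]))  # HxWx3 uint8
        depth = cv2.imread(str(scene / "depth" / rec["frame"]),
                           cv2.IMREAD_UNCHANGED)              # HxW uint16, e.g. mm
        yield rgb, depth, rec


if __name__ == "__main__":
    for rgb, depth, rec in load_frames("thud_pp/scene_01"):
        # rec["bbox3d"]: list of [x, y, z, w, h, l, yaw, class] (assumed layout)
        boxes = np.asarray(rec.get("bbox3d", []), dtype=np.float32)
        print(rec["frame"], "3D boxes:", len(boxes))
        break  # inspect only the first frame
```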
We thank the following institutions for their support and assistance.
@misc{li2024thudlargescaledynamicindoor,
  title={THUD++: Large-Scale Dynamic Indoor Scene Dataset and Benchmark for Mobile Robots},
  author={Zeshun Li and Fuhao Li and Wanting Zhang and Zijie Zheng and Xueping Liu and Yongjin Liu and Long Zeng},
  year={2024},
  eprint={2412.08096},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2412.08096}
}
@inproceedings{2024ICRA,
  title={Mobile Robot Oriented Large-Scale Indoor Dataset for Dynamic Scene Understanding},
  author={Yi-Fan Tang and Cong Tai and Fang-Xin Chen and Wan-Ting Zhang and Tao Zhang and Yong-Jin Liu and Long Zeng},
  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={613--620},
  year={2024},
  organization={IEEE}
}