Synergetic Reconstruction from 2D Pose and 3D Motion for Wide-Space Multi-Person Video Motion Capture in the Wild

Takuya Ohashi1,2 Yosuke Ikegami2 Yoshihiko Nakamura2
1NTT DOCOMO 2The University of Tokyo


Although many studies have addressed markerless motion capture, it has yet to be applied to real sports or concerts. In this paper, we propose a markerless motion capture method that achieves spatiotemporal accuracy and smoothness from multiple cameras, even in wide spaces with multiple people. The key idea is to predict each person's 3D pose and use it to determine a sufficiently small bounding box in each multi-camera image. This prediction, combined with spatiotemporal filtering based on the human skeletal structure, eases 3D reconstruction of the person and improves accuracy. The accurate 3D reconstruction is in turn used to predict the bounding box in each camera image for the next frame. This feedback from 3D motion to 2D pose provides a synergetic effect on the overall performance of video motion capture. We demonstrate the method on various datasets and on a real sports field. The experimental results show a mean per joint position error of 31.6 mm and a percentage of correct parts of 99.3% with five people moving dynamically, while satisfying range-of-motion constraints.
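The 3D-to-2D feedback step described above can be sketched as follows: the predicted 3D joint positions are projected into each camera, and the resulting 2D points define a padded bounding box for the next frame's 2D pose estimation. This is a minimal illustrative sketch, not the paper's implementation; the function names, the pinhole camera model, and the `margin` parameter are assumptions for illustration.

```python
# Illustrative sketch of predicting a per-camera bounding box from a 3D pose.
# Camera model and padding heuristic are assumptions, not from the paper.
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 world points with intrinsics K and extrinsics (R, t) to Nx2 pixels."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                     # apply intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

def predict_bbox(joints_3d, K, R, t, margin=0.2):
    """Bounding box (x0, y0, x1, y1) around the projected joints, padded by `margin`."""
    uv = project(joints_3d, K, R, t)
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    pad_x, pad_y = margin * (x1 - x0), margin * (y1 - y0)
    return (x0 - pad_x, y0 - pad_y, x1 + pad_x, y1 + pad_y)
```

In a tracking loop, `predict_bbox` would be called once per camera per frame, and the cropped region fed to the 2D pose estimator, keeping the crop small even when many people share a wide space.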




YNL-MP dataset, Version 1.0 [20 Feb, 2020]
Copyright 2020 Nakamura and Yamamoto Lab, The University of Tokyo
This dataset is shared for research purposes only. The dataset, or any modified version of it, may not be redistributed without permission from the dataset organizers.

Download YNL-MP dataset (16.3GB)

We have renamed the dataset from "Studio" to "YNL-MP". In addition, the accuracy figures reported in our work have been corrected, and we will revise the arXiv paper accordingly. Please see the PDF attached to the dataset for details.


Related publications


@article{Ohashi2020Synergetic,
  author  = {Takuya Ohashi and Yosuke Ikegami and Yoshihiko Nakamura},
  title   = {{Synergetic Reconstruction from 2D Pose and 3D Motion for Wide-Space Multi-Person Video Motion Capture in the Wild}},
  journal = {arXiv preprint arXiv:2001.05613},
  year    = {2020}
}