
Releases: LinghaoChan/UniMoCap

Release the v0.1 version of UniMoCap

16 Oct 09:36

UniMoCap is a community implementation for unifying text-motion mocap datasets. In this repository, we unify the AMASS-based text-motion datasets (HumanML3D, BABEL, and KIT-ML). We support processing the AMASS data into both of the following formats (a minimal loading sketch follows the list):

  • body-only H3D-format (263-dim, 22 joints)
  • whole-body SMPL-X-format (322-dim SMPL-X parameters)
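
To make the two output formats concrete, here is a minimal sketch for inspecting one processed clip of each type. The file paths and names are assumptions for illustration only, not paths produced by this release.

```python
import numpy as np

# Body-only H3D-format clip: expected shape (num_frames, 263), i.e. the
# HumanML3D-style redundant feature representation of the body joints.
h3d_motion = np.load("h3d_format/000001.npy")        # hypothetical path
assert h3d_motion.ndim == 2 and h3d_motion.shape[1] == 263

# Whole-body SMPL-X-format clip: expected shape (num_frames, 322), i.e.
# per-frame SMPL-X parameters (global orientation, body/hand/jaw poses,
# facial expression, shape, and translation).
smplx_motion = np.load("smplx_format/000001.npy")    # hypothetical path
assert smplx_motion.ndim == 2 and smplx_motion.shape[1] == 322

print(f"H3D clip:    {h3d_motion.shape[0]} frames x {h3d_motion.shape[1]} dims")
print(f"SMPL-X clip: {smplx_motion.shape[0]} frames x {smplx_motion.shape[1]} dims")
```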

We believe this repository will be useful for training models on larger text-motion mocap data. We will support more text-motion mocap datasets in the near future.

We make the data processing as simple as possible. For those who are not familiar with the datasets, we will provide a video tutorial in the coming weeks showing how to do it. This is a community implementation to support text-motion datasets. For the Chinese community, we provide a Chinese document (中文文档).

