Exploring Vision Transformers for 3D Human Motion-Language Models with Motion Patches

CVPR 2024

LY Corporation

Motion representation with motion patches. We convert motion sequences into motion patches and use them to train a ViT, which can be initialized with pre-trained image weights.

Abstract

To build a cross-modal latent space between 3D human motion and language, acquiring large-scale and high-quality human motion data is crucial. However, unlike the abundance of image data, the scarcity of motion data has limited the performance of existing motion-language models. To address this, we introduce "motion patches", a new representation of motion sequences, and propose using Vision Transformers (ViT) as motion encoders via transfer learning, aiming to extract useful knowledge from the image domain and apply it to the motion domain. These motion patches, created by dividing and sorting the skeleton joints of motion sequences based on body parts, are robust to varying skeleton structures and can be treated as color image patches by the ViT. We find that transfer learning with ViT weights pre-trained on 2D image data can boost the performance of motion analysis, presenting a promising direction for addressing the issue of limited motion data. Our extensive experiments show that the proposed motion patches, used jointly with ViT, achieve state-of-the-art performance on text-to-motion retrieval benchmarks, as well as on other novel, challenging tasks such as cross-skeleton recognition, zero-shot motion classification, and human interaction recognition, which are currently hindered by the lack of data.
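To make the transfer-learning idea concrete, below is a minimal sketch of initializing a ViT motion encoder from image-pretrained weights. The use of the timm library and the vit_base_patch16_224 variant are illustrative assumptions; the paper only states that the ViT encoder can be initialized with weights pre-trained on 2D image data.

# Minimal sketch: load an image-pretrained ViT and use it as a motion encoder.
# Assumes the timm library; the exact ViT variant is a placeholder choice.
import timm
import torch

# Load a ViT backbone pre-trained on ImageNet; num_classes=0 drops the
# classification head so the model outputs a feature embedding.
motion_encoder = timm.create_model(
    "vit_base_patch16_224", pretrained=True, num_classes=0
)

# A batch of motion-patch "images": 3 channels (x, y, z treated as RGB), 224x224.
dummy_motion_patches = torch.randn(2, 3, 224, 224)

with torch.no_grad():
    motion_embeddings = motion_encoder(dummy_motion_patches)  # shape (2, 768)
print(motion_embeddings.shape)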

Method

Framework

Our model consists of a motion encoder and a text encoder. We transform the raw motion sequences into motion patches, which serve as input to the ViT-based motion encoder. To train the model with contrastive learning, we compute the similarity matrix between all text-motion pairs within a batch. For clarity, the figure shows an example batch of three samples.
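The batch-wise contrastive objective can be sketched as a CLIP-style symmetric cross-entropy over the similarity matrix. The encoder architectures and the temperature value below are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch of contrastive training over a batch of text-motion pairs.
import torch
import torch.nn.functional as F

def contrastive_loss(motion_emb, text_emb, temperature=0.07):
    # L2-normalize so the dot product equals cosine similarity.
    motion_emb = F.normalize(motion_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Similarity matrix between all text-motion pairs in the batch.
    logits = motion_emb @ text_emb.t() / temperature  # (B, B)

    # Matching pairs lie on the diagonal of the similarity matrix.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_m2t = F.cross_entropy(logits, targets)        # motion -> text
    loss_t2m = F.cross_entropy(logits.t(), targets)    # text -> motion
    return (loss_m2t + loss_t2m) / 2

# Example batch of three samples, as in the framework figure.
motion_emb = torch.randn(3, 512)
text_emb = torch.randn(3, 512)
print(contrastive_loss(motion_emb, text_emb))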

Motion Patches

Process of building the motion patches for each motion sequence. Given a skeleton, we mark different body parts in different colors and illustrate how the motion patch for the right leg is constructed; the same process is applied to the other body parts.
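A minimal sketch of this construction for one body part is given below. The right-leg joint indices, the patch size, and the use of nearest-neighbor resizing are illustrative assumptions; the paper's exact joint ordering and patch dimensions may differ.

# Minimal sketch: build a motion patch for a single body part.
import numpy as np
import torch
import torch.nn.functional as F

def build_body_part_patch(motion, joint_indices, patch_size=16):
    """motion: (T, J, 3) array of xyz joint coordinates over T frames."""
    # Select and order the joints belonging to this body part.
    part = motion[:, joint_indices, :]                 # (T, K, 3)
    # Arrange as a 3-channel "image": channels = xyz, rows = joints, cols = time.
    patch = torch.from_numpy(part).permute(2, 1, 0).unsqueeze(0).float()  # (1, 3, K, T)
    # Resize to a fixed patch size so it matches the ViT patch grid.
    patch = F.interpolate(patch, size=(patch_size, patch_size), mode="nearest")
    return patch.squeeze(0)                            # (3, patch_size, patch_size)

# Hypothetical right-leg joint chain (hip -> knee -> ankle -> foot) on a
# 22-joint skeleton; actual indices depend on the dataset's skeleton layout.
right_leg = [2, 5, 8, 11]
motion = np.random.randn(60, 22, 3)                    # 60 frames, 22 joints
print(build_body_part_patch(motion, right_leg).shape)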

Visualization

Visualization of motion patches obtained by regarding the joint coordinates as RGB pixels. We show the rendered motions and their text labels on the left and the corresponding motion patches on the right. Different motions are clearly reflected in distinct motion patches.
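The visualization amounts to mapping coordinate values into the displayable pixel range. The sketch below uses per-patch min-max normalization to [0, 255], which is an illustrative assumption about the normalization scheme.

# Minimal sketch: interpret a motion patch's xyz coordinates as RGB pixels.
import numpy as np

def patch_to_rgb(patch):
    """patch: (3, H, W) array of xyz coordinates; returns a uint8 RGB image."""
    lo, hi = patch.min(), patch.max()
    normalized = (patch - lo) / (hi - lo + 1e-8)       # scale to [0, 1]
    rgb = (normalized * 255).astype(np.uint8)          # xyz -> RGB channels
    return np.transpose(rgb, (1, 2, 0))                # (H, W, 3) for display

rgb_image = patch_to_rgb(np.random.randn(3, 16, 16))
print(rgb_image.shape, rgb_image.dtype)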

Experiments

Text-to-Motion Retrieval

Comparisons of text-to-motion retrieval between TMR and the proposed method. For each text query, we show the retrieved motions ranked by text-motion similarity, together with their ground-truth text labels. Note that these descriptions are not used during retrieval. All motions in the gallery are from the test set and were unseen during training.
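Retrieval in the learned cross-modal space reduces to ranking the gallery by cosine similarity to the query embedding; the sketch below is a generic implementation under that assumption, and the same routine with the two modalities swapped performs motion-to-text retrieval.

# Minimal sketch: rank gallery motions by similarity to a text query.
import torch
import torch.nn.functional as F

def retrieve(query_emb, gallery_emb, top_k=5):
    """Return indices of the top_k gallery items most similar to the query."""
    query_emb = F.normalize(query_emb, dim=-1)
    gallery_emb = F.normalize(gallery_emb, dim=-1)
    similarity = gallery_emb @ query_emb               # (N,) cosine similarities
    return torch.topk(similarity, k=top_k).indices

# Hypothetical embeddings: one text query against a gallery of 100 motions.
text_query = torch.randn(512)
motion_gallery = torch.randn(100, 512)
print(retrieve(text_query, motion_gallery))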

Motion-to-Text Retrieval

Comparisons of motion-to-text retrieval between TMR and the proposed method. For each query motion, we show the retrieved descriptions ranked by motion-text similarity, together with the accompanying ground-truth text labels. All motions in the gallery are from the test set and were unseen during training. For all samples, our proposed method retrieves reasonable descriptions.

Rendered Video

Since some motions are difficult to represent in a single image, we provide a video to showcase our results.

BibTeX

@inproceedings{yu2024exploring,
    title={Exploring Vision Transformers for 3D Human Motion-Language Models with Motion Patches},
    author={Yu, Qing and Tanaka, Mikihiro and Fujiwara, Kent},
    booktitle={CVPR},
    year={2024}
}