Human detectors are surprisingly powerful reward models

1University of Texas at Austin, 2Meta

Abstract

Video generation models have recently achieved impressive visual fidelity and temporal coherence. Yet they continue to struggle with complex, non-rigid motions, especially when synthesizing humans performing dynamic actions such as sports or dance. Generated videos often exhibit missing or extra limbs, distorted poses, or physically implausible actions. In this work, we propose a remarkably simple reward model, HuDA, to quantify and improve human motion in generated videos. HuDA combines human detection confidence, capturing appearance quality, with a temporal prompt alignment score, capturing motion realism. We show that this simple reward function, which leverages off-the-shelf models without any additional training, outperforms specialized models finetuned on manually annotated data. Using HuDA for Group Relative Policy Optimization (GRPO) post-training of video models, we significantly improve video generation quality, especially for complex human motions, outperforming state-of-the-art models such as Wan 2.1. Finally, we demonstrate that HuDA improves generation quality beyond just humans; for instance, it significantly improves the generation of animal videos and human-object interactions.
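To make the idea concrete, the sketch below shows how such a reward could be assembled from off-the-shelf components. The specific choices here, a COCO-trained Faster R-CNN from torchvision as the person detector, CLIP for frame-prompt similarity, per-frame averaging, and the alpha-weighted combination in huda_style_reward, are illustrative assumptions rather than the exact HuDA formulation.

import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)
from torchvision.transforms.functional import to_tensor
from transformers import CLIPModel, CLIPProcessor

PERSON_LABEL = 1  # "person" class id for torchvision's COCO-trained detectors

detector = fasterrcnn_resnet50_fpn(
    weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT
).eval()
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


@torch.no_grad()
def appearance_term(frames):
    """Mean confidence of the highest-scoring 'person' detection in each frame.

    frames: list of PIL.Image frames sampled from a generated video.
    """
    preds = detector([to_tensor(f) for f in frames])
    scores = []
    for p in preds:
        person = p["scores"][p["labels"] == PERSON_LABEL]
        scores.append(person.max().item() if person.numel() else 0.0)
    return sum(scores) / len(scores)


@torch.no_grad()
def alignment_term(frames, prompt):
    """Mean CLIP image-text similarity between the prompt and each sampled frame."""
    inputs = clip_proc(text=[prompt], images=frames, return_tensors="pt", padding=True)
    out = clip(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).mean().item()


def huda_style_reward(frames, prompt, alpha=0.5):
    # Illustrative convex combination; the actual HuDA weighting may differ.
    return alpha * appearance_term(frames) + (1 - alpha) * alignment_term(frames, prompt)

A reward of this form could then be plugged into a GRPO-style post-training loop, scoring each video sampled for a prompt and normalizing the scores within the group into advantages.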

Video


More Qualitative Results

See more video generations using our reward model here.


BibTeX

@article{ashutosh2026humandetectors,
  title        = {Human detectors are surprisingly powerful reward models},
  author       = {Ashutosh, Kumar and Wang, XuDong and Yin, Xi and Grauman, Kristen and Polyak, Adam and Misra, Ishan and Girdhar, Rohit},
  journal      = {arXiv preprint arXiv:2601.14037},
  year         = {2026}
}