This project integrates the work of several projects, such as VideoPose3D, video-to-pose3D, video2bvh, AlphaPose, Higher-HRNet-Human-Pose-Estimation, and openpose; thanks to all of the projects mentioned above.

The pipeline works in three stages:

1. Extract 2D joint keypoints from the video using AlphaPose, HRNet, and so on.
2. Lift the 2D keypoints to 3D joint positions using VideoPose3D.
3. Convert the 3D joint positions to a BVH motion file.

For setup, you can refer to the project dependencies of video-to-pose3D. These are the dependencies of the video-to-pose3D project, modified by me to fix some bugs:

- PyTorch > 1.1.0 (I use PyTorch 1.1.0 with GPU)
- ffmpeg (note: you must copy `ffmpeg.exe` into the directory of your Python installation)
- Download `duc_se.pth` from (Google Drive | Baidu pan)
- Download `yolov3-spp.weights` from (Google Drive | Baidu pan)
- HR-Net (bad 3D joint performance in my testing environment): download `pose_hrnet*` from Google Drive and place it in `joints_detectors/hrnet/models/pytorch/pose_coco/`; place the YOLO weights in `joints_detectors/hrnet/lib/detector/yolo`
- Download `pretrained_h36m_detectron_coco.bin` from here
- LightTrack (bad 2D tracking performance in my testing environment)

See the original README, and perform the same getting-started steps.
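The final stage, converting 3D joint positions to a BVH motion file, can be sketched roughly as below. This is a hypothetical simplification, not the actual video2bvh code: it emits only the `MOTION` block with raw per-joint position channels, whereas a real BVH file also needs a `HIERARCHY` section defining the skeleton and rotation channels.

```python
def joints_to_bvh_motion(frames_of_joints, fps=30):
    """Format 3D joints as the MOTION block of a BVH file.

    frames_of_joints: list of frames, each a list of (x, y, z) joint tuples,
    e.g. the per-frame output of a 3D pose lifter such as VideoPose3D.
    Hypothetical sketch: real BVH uses rotation channels, not raw positions.
    """
    lines = [
        "MOTION",
        f"Frames: {len(frames_of_joints)}",
        f"Frame Time: {1.0 / fps:.6f}",  # seconds per frame
    ]
    for frame in frames_of_joints:
        # One line per frame: all channel values separated by spaces.
        lines.append(" ".join(f"{c:.4f}" for joint in frame for c in joint))
    return "\n".join(lines)

# Dummy data standing in for VideoPose3D output: 2 frames, 17 zeroed joints.
dummy = [[(0.0, 0.0, 0.0)] * 17] * 2
print(joints_to_bvh_motion(dummy).splitlines()[:3])
```

A real converter would additionally solve for joint rotations from the positions and prepend the skeleton hierarchy, but the frame-block layout is the same.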