EasyMocap Install and Usage

EasyMocap's keypoint detection module can use either HRNet or OpenPose.

After every code change, run python setup.py develop --uninstall and then python setup.py develop again.

Install OpenPose (fought it with everything I had and still could not win)

git clone https://github.com/CMU-Perceptual-Computing-Lab/openpose.git --depth 1
cd openpose
git submodule update --init --recursive --remote
sudo apt install libopencv-dev
sudo apt install protobuf-compiler libgoogle-glog-dev
sudo apt install libboost-all-dev libhdf5-dev libatlas-base-dev
mkdir build
cd build
cmake .. -DBUILD_PYTHON=true # one step here downloads model weights and is very slow
make -j8
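
Once the build finishes, the Python bindings can be smoke-tested with a minimal sketch like the one below. The pyopenpose module path and the emplaceAndPop signature vary between OpenPose versions, and the sample image path is only an example, so treat this as a template rather than the canonical usage:

import sys
import cv2

sys.path.append('build/python/openpose')  # wherever pyopenpose ended up after the build
import pyopenpose as op

# Start OpenPose with the default BODY_25 model
opWrapper = op.WrapperPython()
opWrapper.configure({'model_folder': 'models/'})
opWrapper.start()

# Push one image through and inspect the detected keypoints
datum = op.Datum()
datum.cvInputData = cv2.imread('examples/media/COCO_val2014_000000000192.jpg')
opWrapper.emplaceAndPop(op.VectorDatum([datum]))  # older builds: opWrapper.emplaceAndPop([datum])
print(datum.poseKeypoints.shape)  # (num_people, 25, 3): x, y, confidence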
# Working around the very slow model download during the OpenPose CMake step
# -- Downloading face model...
# -- Model already exists.
# -- Downloading hand model...
# -- NOTE: This process might take several minutes depending on your internet connection.
https://blog.csdn.net/weixin_40245131/article/details/106988775
# Download the Baidu Cloud archive shared in the CSDN post above and put its hand pth file into OpenPose's model directory
# Fixing the pile of Protobuf version errors during the OpenPose make step: the protoc compiler installed on the system must match the version of the installed headers.
# protoc and libprotobuf are both components of Protocol Buffers (protobuf), used for data serialization/deserialization and widely used by deep-learning frameworks (e.g. Caffe, TensorFlow) for model definitions and data storage.

# Check the protoc version
protoc --version
# Check the libprotobuf version (Ubuntu/Debian)
apt list --installed | grep libprotobuf
# libprotobuf-dev/jammy-updates,jammy-security,now 3.12.4-1ubuntu7.22.04.2 amd64 [installed, automatic]

# The versions did not match, so a matching 3.12 build has to be installed.
# Editing the Makefile did not work: https://luckynote.blog.csdn.net/article/details/86509308?spm=1001.2101.3001.6650.2&utm_medium=distribute.pc_relevant.none-task-blog-2%7Edefault%7EBlogCommendFromBaidu%7ERate-2-86509308-blog-85006994.235%5Ev43%5Econtrol&depth_1-utm_source=distribute.pc_relevant.none-task-blog-2%7Edefault%7EBlogCommendFromBaidu%7ERate-2-86509308-blog-85006994.235%5Ev43%5Econtrol&utm_relevant_index=2

# Download protobuf 3.12.4 (matching the libprotobuf23 version above); I keep the archive under DriverCollection
sudo apt remove libprotobuf-dev protobuf-compiler
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.12.4/protobuf-cpp-3.12.4.tar.gz
tar -xzf protobuf-cpp-3.12.4.tar.gz
cd protobuf-3.12.4
# Build and install
./configure --prefix=/usr/local/protobuf-3.12.4
make -j$(nproc)
sudo make install
# Update the environment variables (single quotes so $PATH expands at shell startup, not at echo time)
echo 'export PATH=/usr/local/protobuf-3.12.4/bin:$PATH' >> ~/.zshrc
echo 'export LD_LIBRARY_PATH=/usr/local/protobuf-3.12.4/lib:$LD_LIBRARY_PATH' >> ~/.zshrc
source ~/.zshrc
# Verify the installation
protoc --version # should print "libprotoc 3.12.4"
# Make sure libprotobuf-dev is installed
# libprotobuf23 alone is not enough; Caffe also needs the development headers:
sudo apt install libprotobuf-dev=3.12.4-1ubuntu7.22.04.2
# Rebuild OpenPose/Caffe

Install HRNet

HRNet is already included in the EasyMocap code.

Download yolov4.weights and place it at data/models/yolov4.weights. HRNet is a single-person keypoint regression algorithm, so YOLO is needed first to detect and crop out each person.

mkdir -p data/models
wget -c https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
mv yolov4.weights data/models

Download the pretrained HRNet weights pose_hrnet_w48_384x288.pth from OneDrive (https://1drv.ms/f/s!AhIXJn_J-blW231MH2krnmLq5kkQ); once the link opens, only the w48 384x288 model under coco is needed. Place the files as follows:

data
└── models
    ├── smpl_mean_params.npz
    ├── spin_checkpoint.pt
    ├── pose_hrnet_w48_384x288.pth
    └── yolov4.weights
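
Before wiring these into EasyMocap, it is worth checking that the downloaded files actually load; a small sketch using the paths above (the ~250 MB size is just the rough expected figure for yolov4.weights):

import os
import torch

# The HRNet checkpoint is a plain state dict of parameter tensors
ckpt = torch.load('data/models/pose_hrnet_w48_384x288.pth', map_location='cpu')
print(len(ckpt), 'tensors')

# yolov4.weights is a raw Darknet binary, so just check it is non-trivial in size
print(os.path.getsize('data/models/yolov4.weights') / 1e6, 'MB')  # roughly 250 MB expected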

The SMPL family of models

SMPL (2015)

  • The most basic body model: torso + limbs only
  • Parameterized by:
    • pose parameters (72-dim: 24 joints × one 3D rotation each)
    • shape parameters (a 10-dim PCA model) controlling height, build, and other body-shape traits
  • Outputs:
    • 6890 mesh vertices
    • 24 3D joints
  • No fingers, no expressions, no face
  • Commonly used for: body pose estimation, SMPLify, etc.

SMPL+H (2017)

  • Adds models of the two hands (MANO) on top of SMPL
  • H = Hand, i.e. SMPL + Hand
  • Supports in total:
    • body: 24 joints
    • hands: 15 DoF each
  • Hand poses are likewise modeled with axis-angle (or rotation matrices)
  • Expressions, eyes, and mouth are still unsupported
  • Commonly used for: body + hand pose estimation (e.g. gesture recognition)

SMPL-X (2019)

SMPL-X = SMPL eXpressive

  • The most complete model, covering:
    • body + fingers + face
  • Parameter structure:
    • shape: 10-dim
    • body_pose: 63-dim
    • left_hand_pose: 45-dim
    • right_hand_pose: 45-dim
    • jaw_pose: 3-dim
    • expression: 10-dim
    • global_orient: 3-dim
  • The vertex count grows to 10,475
  • Supports expression control (expression parameters + jaw rotation)
  • Commonly used for: high-fidelity human modeling, interaction, expression estimation, animation, etc.
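
To make those dimensions concrete, here is a minimal sketch using the smplx pip package, assuming the model files are laid out under data/smplx as shown further below (ext='pkl' because the files there are .pkl):

import torch
import smplx

model = smplx.create('data/smplx', model_type='smplx', gender='neutral',
                     ext='pkl', use_pca=False)  # hands as full 45-dim axis-angle
output = model(
    betas=torch.zeros(1, 10),                   # shape
    body_pose=torch.zeros(1, 63),               # 21 body joints x 3
    left_hand_pose=torch.zeros(1, 45),
    right_hand_pose=torch.zeros(1, 45),
    jaw_pose=torch.zeros(1, 3),
    expression=torch.zeros(1, 10),
    global_orient=torch.zeros(1, 3))
print(output.vertices.shape)                    # torch.Size([1, 10475, 3])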

To download the SMPL models (the male and female models, version 1.0.0 with 10 shape PCs, and the gender-neutral model), visit the project websites and register to access the download section.

To download the SMPL+H model, visit its project website and register to access the download section.

To download the SMPL-X model, visit its project website and register to access the download section.

Place them under data/bodymodels/ as follows:

data
└── smplx
    ├── J_regressor_body25.npy
    ├── J_regressor_body25_smplh.txt
    ├── J_regressor_body25_smplx.txt
    ├── J_regressor_mano_LEFT.txt
    ├── J_regressor_mano_RIGHT.txt
    ├── smpl
    │   ├── SMPL_FEMALE.pkl
    │   ├── SMPL_MALE.pkl
    │   └── SMPL_NEUTRAL.pkl
    ├── smplh
    │   ├── MANO_LEFT.pkl
    │   ├── MANO_RIGHT.pkl
    │   ├── SMPLH_FEMALE.pkl
    │   └── SMPLH_MALE.pkl
    └── smplx
        ├── SMPLX_FEMALE.pkl
        ├── SMPLX_MALE.pkl
        └── SMPLX_NEUTRAL.pkl

To check the model, please install and use our 3D visualization tool:

python3 apps/vis3d/vis_smpl.py --cfg config/model/smpl.yml

For lifting single-view 2D detections to 3D, SPIN is the recommended algorithm here.

This part is used in 1v1p*.py. If you only use multi-view datasets, you can skip this step.

Download the pretrained SPIN model here and place it at data/models/spin_checkpoints.pt.

Fetch the extra data here and put smpl_mean_params.npz at data/models/smpl_mean_params.npz.

Install the main repository

python>=3.6

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt

ipdb
joblib
tqdm
opencv-python
yacs
tabulate
termcolor
git+https://github.com/mattloper/chumpy.git
func_timeout
ultralytics
gdown
setuptools==59.5.0
pyglet==1.2.4 # the default version is too new; use 1.2.4 on Python 3.5+
# mediapipe # only supports Python 3.9-3.10, annoying
# tensorboard==2.8.0
# pytorch-lightning==1.5.0
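
With the requirements installed, a quick sanity check (my own habit, not an EasyMocap step) confirms that the CUDA build of torch is the one actually active:

import torch, torchvision
print(torch.__version__, torchvision.__version__)  # expect 1.12.1+cu113 / 0.13.1+cu113
print(torch.cuda.is_available())                   # should be True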

Visualization libraries

Install Pyrender

pip install pyrender

Install Open3D

Open3D is used to perform realtime visualization and SMPL visualization with GUI. No need for this if you just run the fitting code.

python3 -m pip install -U pip # run this if pip can not find this version
python3 -m pip install open3d==0.14.1

Install PyTorch3D

https://github.com/facebookresearch/pytorch3d/blob/364a7dcaf4b285cc93c70e0b5d9eb9bbf42389a5/INSTALL.md
# CUB is required if CUDA < 11.5
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
tar xzf 1.10.0.tar.gz
export CUB_HOME=$PWD/cub-1.10.0

# There is no prebuilt wheel for python3.7 + torch 1.12.1 + cuda 11.3, so build from source
conda install ninja
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d
MAX_JOBS=2 pip install -e .
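
After the source build, a small smoke test (my own sketch, nothing EasyMocap-specific) confirms the compiled ops import cleanly:

import torch
from pytorch3d.structures import Meshes

# Build a tiny quad mesh and query it
verts = torch.rand(1, 4, 3)
faces = torch.tensor([[[0, 1, 2], [0, 2, 3]]])
mesh = Meshes(verts=verts, faces=faces)
print(mesh.verts_packed().shape)  # torch.Size([4, 3])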

Usage

Preparing the data

SMPL 24-joint model

Openpose25

{0, "Nose"},
{1, "Neck"},
{2, "RShoulder"},
{3, "RElbow"},
{4, "RWrist"},
{5, "LShoulder"},
{6, "LElbow"},
{7, "LWrist"},
{8, "MidHip"},
{9, "RHip"},
{10, "RKnee"},
{11, "RAnkle"},
{12, "LHip"},
{13, "LKnee"},
{14, "LAnkle"},
{15, "REye"},
{16, "LEye"},
{17, "REar"},
{18, "LEar"},
{19, "LBigToe"},
{20, "LSmallToe"},
{21, "LHeel"},
{22, "RBigToe"},
{23, "RSmallToe"},
{24, "RHeel"}
After modification:
{0, "Nose"},
{1, "Neck"},
{2, "RShoulder"},
{3, "RElbow"},
{4, "RWrist"},
{5, "LShoulder"},
{6, "LElbow"},
{7, "LWrist"},
{8, "MidHip"},
{9, "RHip"},
{10, "RKnee"},
{11, "RAnkle"},
{12, "LHip"},
{13, "LKnee"},
{14, "LAnkle"},
{15, "REye"},
{16, "LEye"},
{17, "REar"},
{18, "LEar"},
{19, "LBigToe"},
{20, "Loin"},
{21, "LHand"},
{22, "RBigToe"},
{23, "PlaceH"},
{24, "RHand"}

Openpose17

COCO17

'joint_names': [
    'MidHip',       # 0
    'LUpLeg',       # 1
    'RUpLeg',       # 2
    'spine',        # 3
    'LLeg',         # 4
    'RLeg',         # 5
    'spine1',       # 6
    'LFoot',        # 7
    'RFoot',        # 8
    'spine2',       # 9
    'LToeBase',     # 10
    'RToeBase',     # 11
    'neck',         # 12
    'LShoulder',    # 13
    'RShoulder',    # 14
    'head',         # 15
    'LArm',         # 16
    'RArm',         # 17
    'LForeArm',     # 18
    'RForeArm',     # 19
    'LHand',        # 20
    'RHand',        # 21
    'LHandIndex1',  # 22
    'RHandIndex1',  # 23
]
# Mapping AlphaPose's full 22 points onto OpenPose
# Define the mapping
POSE_KEYPOINTS_MAPPING = {
    0: 0,    # nose           -> pose_keypoints_2d[0]
    1: 16,   # left eye       -> pose_keypoints_2d[16]
    2: 15,   # right eye      -> pose_keypoints_2d[15]
    3: 18,   # left ear       -> pose_keypoints_2d[18]
    4: 17,   # right ear      -> pose_keypoints_2d[17]
    5: 5,    # left shoulder  -> pose_keypoints_2d[5]
    6: 2,    # right shoulder -> pose_keypoints_2d[2]
    7: 6,    # left elbow     -> pose_keypoints_2d[6]
    8: 3,    # right elbow    -> pose_keypoints_2d[3]
    9: 7,    # left wrist     -> pose_keypoints_2d[7]
    10: 4,   # right wrist    -> pose_keypoints_2d[4]
    11: 12,  # left hip       -> pose_keypoints_2d[12]
    12: 9,   # right hip      -> pose_keypoints_2d[9]
    13: 13,  # left knee      -> pose_keypoints_2d[13]
    14: 10,  # right knee     -> pose_keypoints_2d[10]
    15: 14,  # left ankle     -> pose_keypoints_2d[14]
    16: 11,  # right ankle    -> pose_keypoints_2d[11]
    17: 19,  # left toe       -> pose_keypoints_2d[19]
    18: 22,  # right toe      -> pose_keypoints_2d[22]

    19: 21,  # back of left hand  -> pose_keypoints_2d[21], OpenPose's left-heel slot
    20: 24,  # back of right hand -> pose_keypoints_2d[24], OpenPose's right-heel slot
    21: 20,  # waist              -> pose_keypoints_2d[20], OpenPose's left-small-toe slot
}
# Data layout before preprocessing
data
├── openpose
│   ├── 1
│   │   ├── 000000_keypoints.json
│   │   ├── 000001_keypoints.json   # one result file per frame
│   ├── 2
│   ├── 3
│   └── 4
├── videos
│   ├── 1
│   ├── 2
│   ├── 3
│   └── 4
├── extri.yml
└── intri.yml
# Preprocess the data: re-process the OpenPose results into the 75-point format and store them under annots
# The script finds the video files (.mp4), then calls extract_video() to extract image frames between the user-given start/end frames at the given step, storing them in the images subdirectory.
# Required argument:
#   path: the data root. It must contain the videos directory (video files) plus the subdirectories for the generated images and annotation files.
# Mode selection:
#   --mode: pose-extraction mode, one of openpose (default) and yolo-hrnet.
# Image and annotation settings:
#   --ext: image file extension, default jpg.
#   --annot: name of the subfolder for the generated annotation files, default annots.
# Frame control:
#   --start: first frame to extract, default 0.
#   --end: last frame to extract (exclusive), default 10000.
#   --step: frame interval, i.e. extract one frame every N frames.
# Pose-extraction details:
#   --highres: high-resolution scale for the OpenPose network input, default 1.
#   --handface: when enabled, also extract hand and face keypoints.
#   --render: if given, write OpenPose's rendered images to the corresponding folder.
#   --no2d: if given, only extract image frames without 2D keypoint detection; if false, OpenPose or HRNet is used for keypoint extraction.
#   --openpose: path to the OpenPose installation, default /media/qing/Project/openpose.
# yolo-hrnet mode settings:
#   --low: use a lower person-detection threshold (the config_low settings), for small or hard-to-detect targets.
#   --gtbbox: use ground-truth bounding boxes and let HRNet estimate the pose.
# Debugging:
#   --debug: enable debug mode; may print extra debug information.
#   --path_origin: original path, default the current working directory.

python apps/demo/mv1p.py /home/outbreak/HPE/GUIplatform/reWriteTools0610/multiView/multireconstruction --out /home/outbreak/HPE/GUIplatform/reWriteTools0610/multiView/multireconstruction/output --vis_det --vis_repro --undis --vis_smpl --sub_vis 5 9 13 --body body25 --model smpl

python3 scripts/preprocess/extract_video.py ${data} --handface
# Data layout after preprocessing
data
├── openpose
│   ├── 1
│   │   ├── 000000_keypoints.json
│   │   ├── 000001_keypoints.json   # one result file per frame
│   ├── 2
│   ├── 3
│   └── 4
├── videos
│   ├── 1
│   ├── 2
│   ├── 3
│   └── 4
├── annots
│   ├── 1
│   │   ├── 000000.json
│   │   ├── 000001.json   # one annotation file per frame
│   ├── 2
│   ├── 3
│   └── 4
├── images
│   ├── 1
│   │   ├── 000000.jpg
│   │   ├── 000001.jpg    # one image per frame
│   ├── 2
│   ├── 3
│   └── 4
├── extri.yml
└── intri.yml

Reconstruction

# --path
#   Root directory of the input data, normally containing the images or videos plus the annotations.
# --sub_vis
#   List of multi-view camera names or IDs, specifying which cameras participate.
# --out
#   Output directory; the generated 3D skeletons, SMPL parameters, reprojection images, etc. are stored here.
# --annot
#   Name of the annotation subdirectory, normally holding the 2D keypoint detections.
# --model
#   Which SMPL model type to use, e.g. SMPL or SMPL+H; the choice can affect reconstruction accuracy.
# --gender
#   Gender of the subject (e.g. "male" or "female"), used to load the matching SMPL parameters.
# --body
#   Skeleton type / body structure, which can affect keypoint ordering and model matching.
# --skel
#   If given, force recomputation of the 3D skeleton (triangulating the keypoints into 3D); otherwise existing skeleton data can be read directly.
# --start / --end
#   First and last frame to process, to limit the range instead of running the whole sequence.
# --thres2d
#   Confidence threshold for 2D keypoints; points below it are ignored, for more stable 3D reconstruction.
# --MAX_REPRO_ERROR
#   Maximum allowed reprojection error. If the reprojection error exceeds it, the corresponding 2D keypoints are zeroed and triangulation is redone.
# --smooth3d
#   Smoothing coefficient for the reconstructed 3D skeleton; larger values smooth more.
# --vis_det
#   Visualize the 2D detections (keypoints and bounding boxes) to check detection quality.
# --vis_repro
#   Visualize the reprojection: the reconstructed 3D keypoints projected back into each camera view, to judge reconstruction quality.
# --vis_smpl
#   Visualize the SMPL reconstruction by overlaying the SMPL vertices/limbs on the original images.
# --write_smpl_full
#   If given, save the full pose parameters (e.g. global pose transforms) in addition to the basic SMPL parameters.
# --write_vertices
#   If given, write the SMPL vertices to file for later processing or 3D rendering.
# --undis
#   Undistort the input images, improving reconstruction accuracy.
# --save_origin
#   Whether to save the original skeleton data, for before/after comparison.



python3 apps/demo/mv1p.py ${data} --out ${data}/output/smpl \
    --vis_det \
    --vis_repro \
    --undis \
    --sub_vis 1 7 13 19 \
    --vis_smpl

Write a Python file that takes a JSON file like alphapose-result2_merged.json as input, creates a folder with the same name next to the JSON, and from the multi-frame information in the file writes one skeleton file per frame in OpenPose's 000000_keypoints.json format. Concretely: create one JSON file per frame of the AlphaPose file, mapping its 22 skeleton keypoints onto OpenPose's pose_keypoints_2d, hand_left_keypoints_2d and hand_right_keypoints_2d. Within each frame:

AlphaPose point 0 (nose) maps to pose_keypoints_2d point 0
AlphaPose point 1 (left eye) maps to pose_keypoints_2d point 16
AlphaPose point 2 (right eye) maps to pose_keypoints_2d point 15
AlphaPose point 3 (left ear) maps to pose_keypoints_2d point 18
AlphaPose point 4 (right ear) maps to pose_keypoints_2d point 17
AlphaPose point 5 (left shoulder) maps to pose_keypoints_2d point 5
AlphaPose point 6 (right shoulder) maps to pose_keypoints_2d point 2
AlphaPose point 7 (left elbow) maps to pose_keypoints_2d point 6
AlphaPose point 8 (right elbow) maps to pose_keypoints_2d point 3
AlphaPose point 9 (left wrist) maps to pose_keypoints_2d point 7
AlphaPose point 10 (right wrist) maps to pose_keypoints_2d point 4
AlphaPose point 11 (left hip) maps to pose_keypoints_2d point 12
AlphaPose point 12 (right hip) maps to pose_keypoints_2d point 9
AlphaPose point 13 (left knee) maps to pose_keypoints_2d point 13
AlphaPose point 14 (right knee) maps to pose_keypoints_2d point 10
AlphaPose point 15 (left ankle) maps to pose_keypoints_2d point 14
AlphaPose point 16 (right ankle) maps to pose_keypoints_2d point 11
AlphaPose point 17 (left toe) maps to pose_keypoints_2d point 20
AlphaPose point 18 (right toe) maps to pose_keypoints_2d point 23

For pose_keypoints_2d point 1 (the neck/mid-chest position), take the midpoint of AlphaPose point 5 (left shoulder) and point 6 (right shoulder).

For pose_keypoints_2d point 8 (the mid-hip/abdomen position), take the midpoint of AlphaPose point 11 (left hip) and point 12 (right hip).

Then:
AlphaPose point 9 (left wrist) maps to hand_left_keypoints_2d point 0
AlphaPose point 10 (right wrist) maps to hand_right_keypoints_2d point 0
AlphaPose point 19 (back of left hand) maps to hand_left_keypoints_2d point 9
AlphaPose point 20 (back of right hand) maps to hand_right_keypoints_2d point 9
AlphaPose point 21 (waist) is currently not placed anywhere in the OpenPose output.

All remaining unmapped points are 0 0 0. (A sketch of such a converter follows below.)
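
A minimal sketch of that converter. The layout of the merged AlphaPose JSON (a list of per-frame entries, each with a flat keypoints field of 22 × (x, y, conf)) is an assumption about your export and may need adapting:

import json
import os
import sys

# AlphaPose index -> OpenPose body25 index, per the spec above
BODY_MAP = {0: 0, 1: 16, 2: 15, 3: 18, 4: 17, 5: 5, 6: 2, 7: 6, 8: 3,
            9: 7, 10: 4, 11: 12, 12: 9, 13: 13, 14: 10, 15: 14, 16: 11,
            17: 20, 18: 23}
HAND_L = {9: 0, 19: 9}   # left wrist -> slot 0, back of left hand -> slot 9
HAND_R = {10: 0, 20: 9}  # right wrist -> slot 0, back of right hand -> slot 9

def convert(json_path):
    outdir = os.path.splitext(json_path)[0]
    os.makedirs(outdir, exist_ok=True)
    with open(json_path) as f:
        frames = json.load(f)
    for i, frame in enumerate(frames):
        k = frame['keypoints']  # assumed: flat list of 22 * (x, y, conf)
        ap = [k[3 * j:3 * j + 3] for j in range(22)]
        body = [[0.0, 0.0, 0.0] for _ in range(25)]
        for src, dst in BODY_MAP.items():
            body[dst] = ap[src]
        # point 1 (neck): shoulder midpoint; point 8 (mid-hip): hip midpoint
        body[1] = [(ap[5][0] + ap[6][0]) / 2, (ap[5][1] + ap[6][1]) / 2,
                   min(ap[5][2], ap[6][2])]
        body[8] = [(ap[11][0] + ap[12][0]) / 2, (ap[11][1] + ap[12][1]) / 2,
                   min(ap[11][2], ap[12][2])]
        lhand = [[0.0, 0.0, 0.0] for _ in range(21)]
        rhand = [[0.0, 0.0, 0.0] for _ in range(21)]
        for src, dst in HAND_L.items():
            lhand[dst] = ap[src]
        for src, dst in HAND_R.items():
            rhand[dst] = ap[src]
        # AlphaPose point 21 (waist) is deliberately dropped, as noted above
        out = {'version': 1.3, 'people': [{
            'pose_keypoints_2d': sum(body, []),
            'hand_left_keypoints_2d': sum(lhand, []),
            'hand_right_keypoints_2d': sum(rhand, []),
        }]}
        with open(os.path.join(outdir, '%06d_keypoints.json' % i), 'w') as f:
            json.dump(out, f)

if __name__ == '__main__':
    convert(sys.argv[1])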

Write a Python tool that takes such a JSON file plus the IDs of the cameras to export (multiple IDs may be given; with no ID, all cameras are exported). Export the camera parameters as extri.yml and intri.yml in OpenCV calibration format: write the selected cameras' rotation and translation from the Transform block into extri.yml, assemble the entries of the Intrinsic block into OpenCV's intrinsic-matrix layout by field name and write them into intri.yml, and name the two outputs <json name>_extri and <json name>_intri.

Write a Python tool that takes such a JSON file plus the IDs of the cameras to export (multiple IDs may be given; with no ID, all cameras are exported). Export the camera parameters as extri.yml only, in OpenCV calibration format, writing the selected cameras' rotation and translation from the Transform block, with the output named <json name>_extri. A sketch covering both variants follows.
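
The field names inside the JSON (id, Transform with R/T, Intrinsic with fx/fy/cx/cy) are assumptions about your file; the Rot_/R_/T_/K_/dist_ key naming follows the convention used by EasyMocap's yml files:

import json
import sys

import cv2
import numpy as np

def export(json_path, cam_ids=None):
    with open(json_path) as f:
        cams = json.load(f)  # assumed: a list of camera dicts
    if cam_ids:
        cams = [c for c in cams if str(c['id']) in cam_ids]
    stem = json_path.rsplit('.', 1)[0]
    fe = cv2.FileStorage(stem + '_extri.yml', cv2.FILE_STORAGE_WRITE)
    fi = cv2.FileStorage(stem + '_intri.yml', cv2.FILE_STORAGE_WRITE)
    for c in cams:
        cid = str(c['id'])
        R = np.asarray(c['Transform']['R'], dtype=np.float64).reshape(3, 3)
        T = np.asarray(c['Transform']['T'], dtype=np.float64).reshape(3, 1)
        rvec, _ = cv2.Rodrigues(R)      # store both the rvec and the matrix
        fe.write('R_' + cid, rvec)
        fe.write('Rot_' + cid, R)
        fe.write('T_' + cid, T)
        intr = c['Intrinsic']
        K = np.array([[intr['fx'], 0.0, intr['cx']],
                      [0.0, intr['fy'], intr['cy']],
                      [0.0, 0.0, 1.0]])
        fi.write('K_' + cid, K)
        fi.write('dist_' + cid, np.zeros((5, 1)))  # no distortion given in the JSON
    fe.release()
    fi.release()

if __name__ == '__main__':
    export(sys.argv[1], cam_ids=sys.argv[2:] or None)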

Collected errors

Several frames together missing the same few joints does not raise an error.

A single frame missing a few joints does not raise an error.

When annots does not follow the expected format

Traceback (most recent call last):
  File "scripts/preprocess/extract_video.py", line 285, in <module>
    join(args.path, 'openpose_render', sub), args)
  File "scripts/preprocess/extract_video.py", line 65, in extract_2d
    os.chdir(openpose)
FileNotFoundError: [Errno 2] No such file or directory: '/media/qing/Project/openpose'

Change extract_2d to this:

def extract_2d(openpose, image, keypoints, render, args):
    skip = False
    if os.path.exists(keypoints):
        print('>> exists {}'.format(keypoints))
        # check the number of images and keypoints
        print('image', image)
        print('keypoints', keypoints)
        print('len(image)', len(os.listdir(image)))
        print('len(keypoints)', len(os.listdir(keypoints)))
        skip = True
        # if len(os.listdir(image)) == len(os.listdir(keypoints)):
        #     skip = True
    if not skip:
        # ... (the rest of the original function body stays unchanged)

If annots is missing one frame's JSON file (here annots/2 is missing the 20th frame's json):

 python apps/demo/mv1p.py ../zju-ls-feng_TestData --out  ../zju-ls-feng_TestData/outputTest2loss20annots --vis_det --vis_repro --undis --sub_vis 2 5 9 --vis_smpl --start 20 --end 25

Demo code for multiple views and one person:

- Input : ../zju-ls-feng_TestData => 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23
- Output: ../zju-ls-feng_TestData/outputTest2loss20annots
- Body : smpl=>neutral, body25

This line is my own modification; without it, every image folder would be searched and loaded: args.sub ['2', '5', '9']
triangulation: 0%| | 0/5 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "apps/demo/mv1p.py", line 118, in <module>
    mv1pmf_skel(dataset, check_repro=True, args=args)
  File "apps/demo/mv1p.py", line 35, in mv1pmf_skel
    images, annots = dataset[nf]
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/dataset/mv1pmf.py", line 72, in __getitem__
    images, annots_all = super().__getitem__(index)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/dataset/base.py", line 484, in __getitem__
    assert self.imagelist[cam][index].split('.')[0] == self.annotlist[cam][index].split('.')[0]
AssertionError

If annots has one extra frame JSON (here annots/2 has an extra 20_copy.json):

 python apps/demo/mv1p.py ../zju-ls-feng_TestData --out  ../zju-ls-feng_TestData/outputTest2loss20annots --vis_det --vis_repro --undis --sub_vis 2 5 9 --vis_smpl --start 20 --end 25

Demo code for multiple views and one person:

- Input : ../zju-ls-feng_TestData => 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23
- Output: ../zju-ls-feng_TestData/outputTest2loss20annots
- Body : smpl=>neutral, body25

This line is my own modification; without it, every image folder would be searched and loaded: args.sub ['2', '5', '9']
triangulation: 0%| | 0/5 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "apps/demo/mv1p.py", line 118, in <module>
    mv1pmf_skel(dataset, check_repro=True, args=args)
  File "apps/demo/mv1p.py", line 35, in mv1pmf_skel
    images, annots = dataset[nf]
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/dataset/mv1pmf.py", line 72, in __getitem__
    images, annots_all = super().__getitem__(index)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/dataset/base.py", line 484, in __getitem__
    assert self.imagelist[cam][index].split('.')[0] == self.annotlist[cam][index].split('.')[0]
AssertionError

If the extra or missing frames lie outside the start/end range, no error occurs.
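
Since both failures above come from the same image/annotation mismatch assertion, a tiny pre-flight checker (my own sketch, assuming the images/ and annots/ layout shown earlier) can report mismatches per camera before running mv1p.py:

import os
import sys

root = sys.argv[1]  # dataset root containing images/ and annots/
for cam in sorted(os.listdir(os.path.join(root, 'images'))):
    imgs = {f.split('.')[0] for f in os.listdir(os.path.join(root, 'images', cam))}
    anns = {f.split('.')[0] for f in os.listdir(os.path.join(root, 'annots', cam))}
    if imgs != anns:
        print(cam, 'missing annots:', sorted(imgs - anns),
              'extra annots:', sorted(anns - imgs))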

If start/end covers too few frames (at least three are needed):

python apps/demo/mv1p.py ../zju-ls-feng_TestData --out  ../zju-ls-feng_TestData/outputTest2loss20annots --vis_det --vis_repro --undis --sub_vis 2 5 9 --vis_smpl --start 20 --end 22

Demo code for multiple views and one person:

- Input : ../zju-ls-feng_TestData => 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23
- Output: ../zju-ls-feng_TestData/outputTest2loss20annots
- Body : smpl=>neutral, body25

This line is my own modification; without it, every image folder would be searched and loaded: args.sub ['2', '5', '9']
triangulation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 21.25it/s]
dump: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 7536.93it/s]
loading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2523.65it/s]
/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/lbfgs.py:264: UserWarning: This overload of add_ is deprecated:
	add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
	add_(Tensor other, *, Number alpha) (Triggered internally at ../torch/csrc/utils/python_arg_parser.cpp:1174.)
  p.data.add_(step_size, update[offset:offset + numel].view_as(p.data))
-> [Optimize global RT ]: 3.9ms
Traceback (most recent call last):
  File "apps/demo/mv1p.py", line 119, in <module>
    mv1pmf_smpl(dataset, args)
  File "apps/demo/mv1p.py", line 71, in mv1pmf_smpl
    weight_shape=weight_shape, weight_pose=weight_pose)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pipeline/basic.py", line 77, in smpl_from_keypoints3d2d
    params = multi_stage_optimize(body_model, params, kp3ds, kp2ds, bboxes, Pall, weight_pose, cfg)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pipeline/basic.py", line 18, in multi_stage_optimize
    params = optimizePose3D(body_model, params, kp3ds, weight=weight, cfg=cfg)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 301, in optimizePose3D
    params = _optimizeSMPL(body_model, params, prepare_funcs, postprocess_funcs, loss_funcs, weight_loss=weight, cfg=cfg)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 246, in _optimizeSMPL
    final_loss = fitting.run_fitting(optimizer, closure, opt_params)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize.py", line 38, in run_fitting
    loss = optimizer.step(closure)
  File "/home/outbreak/anaconda3/envs/easymocap/lib/python3.7/site-packages/torch/optim/optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/lbfgs.py", line 422, in step
    obj_func, x_init, t, d, loss, flat_grad, gtd)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/lbfgs.py", line 177, in _strong_wolfe
    t = bracket[low_pos]
IndexError: list index out of range

An error I never figured out:

python apps/demo/mv1p.py ../dataOpenpose202403270042 --out  ../dataOpenpose202403270042/output --vis_det --vis_repro --undis --sub_vis 5 13  --vis_smpl --start 0 --end 5

Demo code for multiple views and one person:

- Input : ../dataOpenpose202403270042 => 2, 5, 9, 13
- Output: ../dataOpenpose202403270042/output
- Body : smpl=>neutral, body25

This line is my own modification; without it, every image folder would be searched and loaded: args.sub ['5', '13']
loading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 39125.97it/s]
-> [Optimize global RT ]: 8.7ms
Traceback (most recent call last):
  File "apps/demo/mv1p.py", line 119, in <module>
    mv1pmf_smpl(dataset, args)
  File "apps/demo/mv1p.py", line 71, in mv1pmf_smpl
    weight_shape=weight_shape, weight_pose=weight_pose)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pipeline/basic.py", line 77, in smpl_from_keypoints3d2d
    params = multi_stage_optimize(body_model, params, kp3ds, kp2ds, bboxes, Pall, weight_pose, cfg)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pipeline/basic.py", line 18, in multi_stage_optimize
    params = optimizePose3D(body_model, params, kp3ds, weight=weight, cfg=cfg)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 301, in optimizePose3D
    params = _optimizeSMPL(body_model, params, prepare_funcs, postprocess_funcs, loss_funcs, weight_loss=weight, cfg=cfg)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 246, in _optimizeSMPL
    final_loss = fitting.run_fitting(optimizer, closure, opt_params)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize.py", line 38, in run_fitting
    loss = optimizer.step(closure)
  File "/home/outbreak/anaconda3/envs/easymocap/lib/python3.7/site-packages/torch/optim/optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/lbfgs.py", line 307, in step
    orig_loss = closure()
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 227, in closure
    new_params = func(new_params)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 121, in interp_func
    params[key][nf] = interp(params[key][left], params[key][right], 1-weight, key=key)
IndexError: index 5 is out of bounds for dimension 0 with size 5

Modifying the example's intri.yml also triggers an error:

Traceback (most recent call last):
  File "apps/demo/mv1p.py", line 119, in <module>
    mv1pmf_smpl(dataset, args)
  File "apps/demo/mv1p.py", line 71, in mv1pmf_smpl
    weight_shape=weight_shape, weight_pose=weight_pose)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pipeline/basic.py", line 77, in smpl_from_keypoints3d2d
    params = multi_stage_optimize(body_model, params, kp3ds, kp2ds, bboxes, Pall, weight_pose, cfg)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pipeline/basic.py", line 18, in multi_stage_optimize
    params = optimizePose3D(body_model, params, kp3ds, weight=weight, cfg=cfg)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 301, in optimizePose3D
    params = _optimizeSMPL(body_model, params, prepare_funcs, postprocess_funcs, loss_funcs, weight_loss=weight, cfg=cfg)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 246, in _optimizeSMPL
    final_loss = fitting.run_fitting(optimizer, closure, opt_params)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize.py", line 38, in run_fitting
    loss = optimizer.step(closure)
  File "/home/outbreak/anaconda3/envs/easymocap/lib/python3.7/site-packages/torch/optim/optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/lbfgs.py", line 307, in step
    orig_loss = closure()
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 227, in closure
    new_params = func(new_params)
  File "/home/outbreak/HPE/easyMocap/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 121, in interp_func
    params[key][nf] = interp(params[key][left], params[key][right], 1-weight, key=key)
IndexError: index 3 is out of bounds for dimension 0 with size 3

Also note: if the output directory already contains a keypoint3d file, the run will load that 3D file first, so if the previous 3D result was wrong, the same error recurs.
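
So after a failed run it is safer to clear the cached triangulation before retrying; a one-off sketch (the output/keypoints3d path is an assumption based on the --out layout, adjust to yours):

import shutil

# Force mv1p.py to re-triangulate instead of loading the stale 3D results
shutil.rmtree('output/keypoints3d', ignore_errors=True)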

Visualizing an SMPL model

python apps/vis3d/vis_smpl.py --param_path /home/outbreak/HPE/easyMocap/dataOpenpose202403270042/outputFrom3d/smpl/000020.json

Testing my interpolation

Start frame


End frame


Interpolated frame 10/10