Skeletal Skinning Algorithm

SMPL refers to the 2015 paper from the Max Planck Institute, "SMPL: A Skinned Multi-Person Linear Model".

The human body can be understood as a base template plus deformations applied on top of it. Running PCA on those deformations yields low-dimensional parameters that describe body shape, the shape parameters (shape). Meanwhile, a kinematic tree represents the body's pose: each joint's rotation relative to its parent joint, expressed as a 3D axis-angle vector. The local rotation vectors of all joints together form the pose parameters (pose) of the SMPL model.
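As a minimal sketch of the two parameter sets (tensor shapes only, purely illustrative; the variable names follow the code later in these notes):

import torch

betas = torch.zeros(1, 10)          # shape parameters: 10 PCA coefficients
global_orient = torch.zeros(1, 3)   # axis-angle rotation of the root joint
body_pose = torch.zeros(1, 23, 3)   # one axis-angle rotation per body joint relative to its parent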

[Figure: SMPL model overview; panels (b)-(d) are referenced by the rendering steps below]

basicModel

The official basicModel_m_lbs_10_207_0_v1.0.0.pkl contains ['v_template', 'weights', 'posedirs', 'pose', 'trans', 'shapedirs', 'betas', 'J'] (a short loading sketch follows this list):

v_template: the SMPL base template, a T-pose mesh; the generated mesh is obtained by deforming this T-pose.

weights: 6890 × 24 blend-weight matrix, i.e. how strongly each joint influences each vertex (which joints affect a given vertex and with what weights). There are 6890 vertices, and each vertex is influenced by the 24 joints.

posedirs: 6890 × 3 × 207, where 23 × 9 = 207; the matrix of all 207 pose blend shapes (vertex displacements induced by pose).

pose: the default pose parameters, one 3D axis-angle rotation per joint (24 × 3: root orientation plus 23 body joints).

trans: the global translation of the model in space (applied to the vertices and joints in Step 5 below).

shapedirs: 6890 × 3 × 10 shape-displacement matrix from PCA (vertex displacements induced by body shape).

betas: 1 × 10 shape parameters, the coefficients of the shape PCA.

J: the regressor used to obtain the 3D coordinates of every joint from the generated mesh.

I used to assume that the 3D joints came first and the mesh was then "fleshed out" around them, but it is actually the other way round: the mesh is generated first from the per-joint rotation parameters, and J is then used to obtain the 3D joint coordinates.
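A minimal sketch for inspecting the file contents (an assumption on my part, not from the original notes: the raw pkl stores some arrays as chumpy objects, so chumpy may need to be installed for unpickling to succeed):

import pickle

with open('basicModel_m_lbs_10_207_0_v1.0.0.pkl', 'rb') as f:
    data = pickle.load(f, encoding='latin1')

# Print every key together with the shape of the stored array
for key, value in data.items():
    print(key, getattr(value, 'shape', type(value)))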

Rendering Algorithm

full_pose = torch.cat([global_orient, body_pose], dim=1)
# How to obtain the SMPL mesh and joints from the SMPL parameters
vertices, joints = lbs(betas,
                       full_pose,
                       self.v_template,
                       self.shapedirs,
                       self.posedirs,
                       self.J_regressor,
                       self.parents,
                       self.lbs_weights,
                       pose2rot=pose2rot,
                       dtype=self.dtype)

betas: 1 × 10 shape parameters, the coefficients of the shape PCA

global_orient: 1 × 3, the rotation vector of the root joint

body_pose: 23 × 3, the rotation vectors of the body joints

shapedirs: 6890 × 3 × 10 shape-displacement matrix from PCA (vertex displacements induced by body shape)

posedirs: 6890 × 3 × 207, where 23 × 9 = 207; the matrix of all 207 pose blend shapes (vertex displacements induced by pose)

J_regressor: 24 × 6890, a regression matrix learned from examples of different people in different poses; it regresses the joint locations from the mesh vertices

parents: 24, the parent index of every joint in the kinematic tree; the root joint has no parent

lbs_weights: 6890 × 24 blend-weight matrix, i.e. how strongly each joint influences each vertex (which joints affect a given vertex and with what weights). There are 6890 vertices, and each vertex is influenced by the 24 joints.
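For context, this is roughly how the whole call is wrapped when going through the smplx package; a sketch, not from the original notes, assuming the SMPL .pkl files live under a local models/ folder laid out the way smplx expects:

import torch
import smplx

# 'models' is a placeholder path to the folder holding the SMPL model files
model = smplx.create('models', model_type='smpl', gender='male')

betas = torch.zeros(1, 10)           # shape parameters
global_orient = torch.zeros(1, 3)    # root rotation vector
body_pose = torch.zeros(1, 23 * 3)   # 23 joint rotation vectors, flattened

output = model(betas=betas, global_orient=global_orient, body_pose=body_pose)
print(output.vertices.shape)         # (1, 6890, 3)
print(output.joints.shape)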

Rendering Steps

Step 1: displacement caused by body shape (Fig. b)

v_shaped = v_template + blend_shapes(betas, shapedirs)
J = vertices2joints(J_regressor, v_shaped)  # locations of the joints of the shaped, rest-pose mesh
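blend_shapes is just a weighted sum of the 10 shape basis directions; a loop-form sketch of the same einsum (the function name is illustrative, not the library's):

import torch

def blend_shapes_loop(betas, shapedirs):
    # betas: (B, 10), shapedirs: (6890, 3, 10) -> per-vertex offsets: (B, 6890, 3)
    return sum(betas[:, l, None, None] * shapedirs[None, :, :, l]
               for l in range(betas.shape[1]))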

Step 2: displacement caused by the pose (the effect is slight and can be dropped when speed matters) (Fig. c)

rot_mats = batch_rodrigues(full_pose.view(-1, 3)).view(batch_size, -1, 3, 3)   # rotation vectors -> rotation matrices
pose_feature = (rot_mats[:, 1:, :, :] - torch.eye(3)).view(batch_size, -1)     # 23 joints x 9 entries = 207 features
pose_offsets = torch.matmul(pose_feature, posedirs).view(batch_size, -1, 3)
v_posed = pose_offsets + v_shaped
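batch_rodrigues converts each 3D axis-angle vector into a 3 × 3 rotation matrix; a minimal sketch of Rodrigues' formula (the function name is illustrative, this is not the library implementation):

import torch

def axis_angle_to_rotmat(rot_vecs, eps=1e-8):
    # rot_vecs: (N, 3) axis-angle vectors -> (N, 3, 3) rotation matrices
    angle = torch.norm(rot_vecs + eps, dim=1, keepdim=True)   # (N, 1) rotation angles
    axis = rot_vecs / angle                                   # (N, 3) unit rotation axes
    cos = torch.cos(angle)[:, None]                           # (N, 1, 1)
    sin = torch.sin(angle)[:, None]
    rx, ry, rz = axis[:, 0], axis[:, 1], axis[:, 2]
    zeros = torch.zeros_like(rx)
    # K is the skew-symmetric cross-product matrix of the axis
    K = torch.stack([zeros, -rz, ry,
                     rz, zeros, -rx,
                     -ry, rx, zeros], dim=1).view(-1, 3, 3)
    ident = torch.eye(3, dtype=rot_vecs.dtype)[None]
    # Rodrigues: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return ident + sin * K + (1 - cos) * torch.bmm(K, K)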

Step 3: transform the joints along the kinematic tree (Fig. d)

J_transformed, A = batch_rigid_transform(rot_mats, J, parents, dtype=dtype)

J_transformed: the joint locations after applying the pose rotations, 24 × 3

A: the rigid transformation matrix of every joint used for skinning (the world transform accumulated along the kinematic chain, expressed relative to the rest pose), 24 × 4 × 4
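batch_rigid_transform composes the per-joint local rotations into world transforms by walking the kinematic tree; a simplified, non-batched sketch that follows the structure of the smplx implementation (the function name is illustrative):

import torch

def rigid_transform(rot_mats, joints, parents):
    # rot_mats: (24, 3, 3) local joint rotations, joints: (24, 3) rest-pose joint positions
    # parents: (24,) parent index of each joint; the root comes first
    rel_joints = joints.clone()
    rel_joints[1:] -= joints[parents[1:]]              # each joint's position relative to its parent

    # 4x4 local transform of every joint: [R | t; 0 0 0 | 1]
    local = torch.zeros(len(parents), 4, 4)
    local[:, :3, :3] = rot_mats
    local[:, :3, 3] = rel_joints
    local[:, 3, 3] = 1.0

    # Compose transforms down the kinematic chain: child = parent @ local
    world = [local[0]]
    for j in range(1, len(parents)):
        world.append(world[int(parents[j])] @ local[j])
    world = torch.stack(world)

    posed_joints = world[:, :3, 3]                      # joint locations after posing (J_transformed)
    # For skinning, the library additionally subtracts the rotated rest-pose joint position
    # from each translation, which yields the matrices A used in the next step.
    return posed_joints, world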

Step 4: linear blend skinning

T = lbs_weights * A: 6890 × 4 × 4, one blended transform per vertex (the weighted sum of the 24 joint transforms)
v_homo = T * v_posed (in homogeneous coordinates), which gives the final 6890 × 3 mesh vertices
J_transformed: the final joint locations
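In other words, each vertex gets its own 4 × 4 transform, the blend-weight-weighted sum of the 24 joint transforms, and is then moved by it in homogeneous coordinates; a non-batched sketch (the function name is illustrative, not the library code):

import torch

def skin_vertices(v_posed, lbs_weights, A):
    # v_posed: (6890, 3) shaped and pose-corrected rest vertices
    # lbs_weights: (6890, 24) blend weights, A: (24, 4, 4) joint transforms
    T = torch.einsum('vj,jab->vab', lbs_weights, A)            # (6890, 4, 4) per-vertex transform
    ones = torch.ones(v_posed.shape[0], 1)
    v_homo = torch.cat([v_posed, ones], dim=1)                 # homogeneous coordinates
    return torch.einsum('vab,vb->va', T, v_homo)[:, :3]        # final mesh vertices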

Step 5: apply the global translation, if there is one

if apply_trans:
	joints += transl.unsqueeze(dim=1)
	vertices += transl.unsqueeze(dim=1)

Rendering Code in Detail

  1. torch version

  2. smplx code version

  3. easymocap method

import torch

# batch_rodrigues and batch_rigid_transform are helpers from smplx.lbs
# (sketched informally in the steps above)


def blend_shapes(betas, shape_disps):
    # (B, num_betas) x (V, 3, num_betas) -> (B, V, 3) per-vertex shape offsets
    blend_shape = torch.einsum('bl,mkl->bmk', [betas, shape_disps])
    return blend_shape


def vertices2joints(J_regressor, vertices):
    # (J, V) x (B, V, 3) -> (B, J, 3) joint locations regressed from the vertices
    return torch.einsum('bik,ji->bjk', [vertices, J_regressor])


def lbs(betas, pose, v_template, shapedirs, posedirs, J_regressor, parents,
        lbs_weights, pose2rot=True, dtype=torch.float32):
    ''' Performs Linear Blend Skinning with the given shape and pose parameters

    Parameters
    ----------
    betas : torch.tensor BxNB
        The tensor of shape parameters
    pose : torch.tensor Bx(J + 1) * 3
        The pose parameters in axis-angle format
    v_template : torch.tensor BxVx3
        The template mesh that will be deformed
    shapedirs : torch.tensor 1xNB
        The tensor of PCA shape displacements
    posedirs : torch.tensor Px(V * 3)
        The pose PCA coefficients
    J_regressor : torch.tensor JxV
        The regressor array that is used to calculate the joints from
        the position of the vertices
    parents : torch.tensor J
        The array that describes the kinematic tree for the model
    lbs_weights : torch.tensor N x V x (J + 1)
        The linear blend skinning weights that represent how much the
        rotation matrix of each part affects each vertex
    pose2rot : bool, optional
        Flag on whether to convert the input pose tensor to rotation
        matrices. The default value is True. If False, then the pose tensor
        should already contain rotation matrices and have a size of
        Bx(J + 1)x9
    dtype : torch.dtype, optional

    Returns
    -------
    verts: torch.tensor BxVx3
        The vertices of the mesh after applying the shape and pose
        displacements.
    joints: torch.tensor BxJx3
        The joints of the model
    '''
    # The batch dimension allows several meshes to be generated at once
    batch_size = max(betas.shape[0], pose.shape[0])
    device = betas.device

    # Shape offset
    v_shaped = v_template + blend_shapes(betas, shapedirs)
    # To skip the shape offset, use instead:
    # v_shaped = v_template.unsqueeze(0).expand(batch_size, -1, -1)

    # Rest-pose joint locations regressed from the shaped mesh
    J = vertices2joints(J_regressor, v_shaped)

    # Pose offset
    rot_mats = batch_rodrigues(pose.view(-1, 3), dtype=dtype).view([batch_size, -1, 3, 3])
    ident = torch.eye(3, dtype=dtype, device=device)
    pose_feature = (rot_mats[:, 1:, :, :] - ident).view([batch_size, -1])
    pose_offsets = torch.matmul(pose_feature, posedirs).view(batch_size, -1, 3)
    v_posed = pose_offsets + v_shaped
    # To skip the pose offset, use instead:
    # v_posed = v_shaped

    # Rigid transformation of every joint along the kinematic tree
    J_transformed, A = batch_rigid_transform(rot_mats, J, parents, dtype=dtype)

    # 5. Do skinning:
    # W is N x V x (J + 1)
    W = lbs_weights.unsqueeze(dim=0).expand([batch_size, -1, -1])
    # (N x V x (J + 1)) x (N x (J + 1) x 16)
    num_joints = J_regressor.shape[0]
    T = torch.matmul(W, A.view(batch_size, num_joints, 16)).view(batch_size, -1, 4, 4)
    homogen_coord = torch.ones([batch_size, v_posed.shape[1], 1], dtype=dtype, device=device)
    v_posed_homo = torch.cat([v_posed, homogen_coord], dim=2)
    v_homo = torch.matmul(T, torch.unsqueeze(v_posed_homo, dim=-1))
    verts = v_homo[:, :, :3, 0]

    return verts, J_transformed