
How to adjust the model's pose correctly, and how to apply customized texture to the SMPL model? #5

snowmint opened this issue Nov 7, 2023 · 1 comment

snowmint commented Nov 7, 2023

I am currently working on applying my generated 3D human joint key points to the SMPL model.

I am confused about how to:
1. adjust the model's pose
2. apply a customized texture to the SMPL model.

The images below depict my generated key points. Using a modified demo.py, I am trying to turn 102 dimensions (34 joint key points) into 72 dimensions (24 joint key points). My goal is to generate an SMPL model that matches the desired pose shown in Image 1.

I am now able to transform the body key points' coordinates and apply them to the SMPL model, with the result shown in Image 4. But as you can see, its pose looks very different from the one in Image 1.

Image 1: Generated body key points

Image 2: The difference between the SMPL coordinate directions and my original ones.

The handwritten joint numbers in Image 3 are the original key point numbers.

Image 3: The SMPL body joints with the corresponding original key points.

Since the (x, y, z) coordinate system of my original key points differs from the SMPL coordinate system, I first have to transform the body joint coordinates into SMPL coordinates. Below is the code snippet I wrote for the transformation:

import numpy as np

def change_motion_format(motion_frame):
    # Map the 102-value (34-joint) array onto the 24 SMPL joints.
    # Most entries follow the axis swap (x, y, z) -> (-x, z, y); some
    # SMPL joints are taken as the midpoint of two original key points.
    new_24_joint = np.zeros((24, 3))

    # Head, neck, and spine (SMPL joints 15, 12, 3, 9, 6, 0)
    new_24_joint[15] = np.array([-(motion_frame[0] + motion_frame[3]) / 2, -(motion_frame[2] + motion_frame[5]) / 2, -(motion_frame[1] + motion_frame[4]) / 2])
    new_24_joint[12] = np.array([-motion_frame[18], motion_frame[20], motion_frame[19]])
    new_24_joint[3] = np.array([-(motion_frame[54] + motion_frame[57]) / 2, (motion_frame[56] + motion_frame[59]) / 2, (motion_frame[55] + motion_frame[58]) / 2])
    new_24_joint[9] = np.array([-motion_frame[21], motion_frame[23], motion_frame[22]])
    # Note: this averages two scalars and broadcasts the single result
    # to all three components of joint 6.
    new_24_joint[6] = np.average([motion_frame[3], motion_frame[9]], axis=0)
    check = np.absolute(new_24_joint[6] - new_24_joint[3])
    new_24_joint[0] = new_24_joint[3] - check

    # Right arm chain: collar, shoulder, elbow, wrist, hand
    new_24_joint[14] = np.array([(motion_frame[24] + motion_frame[15]) / 2, (motion_frame[26] + motion_frame[17]) / 2, (motion_frame[25] + motion_frame[16]) / 2])
    new_24_joint[17] = np.array([-motion_frame[24], motion_frame[26], motion_frame[25]])
    new_24_joint[19] = np.array([-motion_frame[27], motion_frame[29], motion_frame[28]])
    new_24_joint[21] = np.array([-(motion_frame[30] + motion_frame[33]) / 2, (motion_frame[32] + motion_frame[35]) / 2, (motion_frame[31] + motion_frame[34]) / 2])
    new_24_joint[23] = np.array([-motion_frame[36], motion_frame[38], motion_frame[37]])

    # Left arm chain: collar, shoulder, elbow, wrist, hand
    new_24_joint[13] = np.array([(motion_frame[39] + motion_frame[15]) / 2, -(motion_frame[41] + motion_frame[17]) / 2, (motion_frame[40] + motion_frame[16]) / 2])
    new_24_joint[16] = np.array([-motion_frame[39], -motion_frame[41], motion_frame[40]])
    new_24_joint[18] = np.array([-motion_frame[42], -motion_frame[44], motion_frame[43]])
    new_24_joint[20] = np.array([-(motion_frame[45] + motion_frame[48]) / 2, -(motion_frame[47] + motion_frame[50]) / 2, (motion_frame[46] + motion_frame[49]) / 2])
    new_24_joint[22] = np.array([-motion_frame[51], -motion_frame[53], motion_frame[52]])

    # Right leg chain: hip, knee, ankle, foot
    new_24_joint[2] = np.array([(-motion_frame[54] + motion_frame[66]) / 4, (motion_frame[56] + motion_frame[68]) / 4, (motion_frame[55] + motion_frame[67]) / 4])
    new_24_joint[5] = np.array([-motion_frame[66], motion_frame[68], motion_frame[67]])
    new_24_joint[8] = np.array([-motion_frame[69], motion_frame[71], motion_frame[70]])
    new_24_joint[11] = np.array([-motion_frame[72], motion_frame[74], motion_frame[73]])
    # Key point 26 (indices 75, 76, 77) is unused.

    # Left leg chain: hip, knee, ankle, foot
    new_24_joint[1] = np.array([-(motion_frame[57] + motion_frame[78]) / 4, (motion_frame[59] + motion_frame[80]) / 4, (motion_frame[58] + motion_frame[79]) / 4])
    new_24_joint[4] = np.array([-motion_frame[78], motion_frame[80], motion_frame[79]])
    new_24_joint[7] = np.array([-motion_frame[81], motion_frame[83], motion_frame[82]])
    new_24_joint[10] = np.array([-motion_frame[84], motion_frame[86], motion_frame[85]])

    # Re-center so that joint 7 becomes the origin. Copy it first:
    # subtracting new_24_joint[7] inside a per-joint loop zeroes
    # joint 7 partway through, so joints 8-23 would never be shifted.
    origin = new_24_joint[7].copy()
    new_24_joint -= origin

    return new_24_joint.flatten()
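For reference, a minimal usage sketch (assuming motion_frame is the 102-value tensor below, converted to a NumPy array):

pose_72 = change_motion_format(np.asarray(motion_frame))
print(pose_72.shape)  # (72,) -- 24 SMPL joints x 3 coordinates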

The mapping between each key point's serial number and its indices in the 102-dimensional array is shown in the table below:


| key point (original 34-joint number) | x | y | z |
| --- | --- | --- | --- |
| 1 | 0 | 1 | 2 |
| 2 | 3 | 4 | 5 |
| 3 | 6 | 7 | 8 |
| … | … | … | … |
| 34 | 99 | 100 | 101 |
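In other words, 1-based key point k occupies indices 3(k-1) through 3(k-1)+2. A small helper (my own sketch, not part of the original code) makes that indexing explicit:

def joint_xyz(motion_frame, k):
    # Return the (x, y, z) of 1-based key point k from the
    # 102-value flattened array.
    i = 3 * (k - 1)
    return motion_frame[i], motion_frame[i + 1], motion_frame[i + 2]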

Image 4 shows the pose after the transformation to SMPL coordinates.

Image 4: The result of the transformation to SMPL coordinates (front view and top view).

Here are the key points' coordinate values for reproducing this problem:

The original 34 joint key points (102 values):

tensor([ 6.8808e-02,  1.3655e-01,  9.9387e-01,  5.5250e-03,  9.6803e-02,
         1.0144e+00,  1.1453e-01,  6.7033e-02,  9.9475e-01,  3.5132e-02,
         2.9885e-02,  1.0087e+00,  6.6972e-02, -1.2279e-02,  8.9883e-01,
         7.5143e-02, -4.6622e-02,  7.3468e-01,  5.9664e-02,  7.5100e-02,
         8.5105e-01,  4.9040e-02,  1.2250e-01,  7.1263e-01,  1.6473e-01,
         4.9161e-02,  8.8110e-01,  2.3987e-01,  1.5138e-01,  7.5461e-01,
         1.0578e-01,  1.9731e-01,  7.3396e-01,  1.2650e-01,  2.3269e-01,
         7.2024e-01,  7.3923e-02,  2.1867e-01,  7.3476e-01, -5.9331e-02,
         2.0182e-02,  8.7149e-01, -1.0383e-01,  4.6070e-02,  6.8880e-01,
        -1.6089e-01,  1.5032e-01,  7.6275e-01, -1.2791e-01,  1.7847e-01,
         7.5362e-01, -1.8034e-01,  1.8982e-01,  7.7512e-01,  1.5030e-01,
         1.1482e-01,  5.7148e-01, -2.8307e-02,  8.1120e-02,  5.6636e-01,
         1.2305e-01, -2.7885e-02,  5.9340e-01,  5.0053e-02, -4.2568e-02,
         5.9018e-01,  1.8195e-01,  7.5649e-02,  2.7299e-01,  1.5969e-01,
        -1.1603e-02, -7.8823e-03,  1.8093e-01,  1.1385e-01, -1.1609e-02,
         1.8845e-01,  2.3969e-02,  2.9635e-03, -2.8350e-02,  4.0743e-02,
         2.7095e-01,  3.9562e-02, -1.9456e-02, -8.8135e-03, -2.8389e-02,
         9.0526e-02, -1.3981e-02,  4.4523e-04,  3.0309e-04,  9.0401e-05,
        -2.3316e-01,  1.7788e-01,  8.1466e-01,  1.0221e-02,  1.0220e-01,
         8.8138e-01, -1.1811e-01,  1.2274e-01,  1.0000e+00,  7.3125e-02,
         2.3952e-01,  6.4932e-01])

The pose tensor after transformation to SMPL coordinates (72 values):

tensor([[-0.1028,  0.0291,  0.0398,  0.0537,  0.2181,  0.0499,  0.0475,  0.2199,
          0.0671, -0.0214,  0.5777,  0.1174,  0.0679,  0.2798,  0.0602, -0.1424,
          0.2818,  0.0951,  0.0599,  0.0291,  0.0398,  0.0000,  0.0000,  0.0000,
         -0.1597, -0.0079, -0.0116, -0.0490,  0.7126,  0.1225,  0.0284, -0.0140,
          0.0905, -0.1809, -0.0116,  0.1139, -0.0597,  0.8511,  0.0751,  0.0079,
         -0.8031, -0.0132,  0.1199,  0.8079,  0.0013, -0.0372, -1.0041, -0.1167,
          0.0593, -0.8715,  0.0202, -0.1647,  0.8811,  0.0492,  0.1038, -0.6888,
          0.0461, -0.2399,  0.7546,  0.1514,  0.1444, -0.7582,  0.1644, -0.1161,
          0.7271,  0.2150,  0.1803, -0.7751,  0.1898, -0.0739,  0.7348,  0.2187]])
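For debugging, a quick way to compare the transformed joints with Image 1 is to scatter-plot them (a minimal sketch; pose stands for the 72-value tensor above):

import numpy as np
import matplotlib.pyplot as plt

# Reshape the flattened 72-value pose back into 24 (x, y, z) joints.
joints = np.asarray(pose).reshape(24, 3)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(joints[:, 0], joints[:, 1], joints[:, 2])
for i, (x, y, z) in enumerate(joints):
    ax.text(x, y, z, str(i))  # label each SMPL joint index
plt.show()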

Thank you for taking the time to read my question.
I appreciate your assistance and welcome any advice you may have. If there is an alternative approach to achieve this, I would be glad to hear your suggestions.

Dou-Yiming (Owner) commented

Hi! For adjusting the pose, I would recommend choosing only some of the key points from your data format and linking each of them directly to an SMPL key point, instead of taking the mean value of multiple key points. A sketch of that idea follows below.
For the custom texture, I think you can follow the original SMPL texturing procedure once you have the parameters.
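A minimal sketch of that one-to-one linking, assuming the same (x, y, z) -> (-x, z, y) axis swap used in the question; the smpl_to_orig dictionary is a hypothetical placeholder whose entries must be filled in from the correspondences in Image 3:

import numpy as np

def remap_keypoints(motion_frame, smpl_to_orig):
    # Assign each SMPL joint exactly one original key point (no averaging).
    # smpl_to_orig maps an SMPL joint index (0-23) to a 0-based original
    # key point index (0-33); fill it in from Image 3.
    new_24_joint = np.zeros((24, 3))
    for smpl_idx, orig_idx in smpl_to_orig.items():
        x, y, z = motion_frame[3 * orig_idx : 3 * orig_idx + 3]
        new_24_joint[smpl_idx] = [-x, z, y]
    return new_24_joint.flatten()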
