
Head pose estimation #36

Open
hemanthkorrapati opened this issue Apr 11, 2016 · 8 comments
@hemanthkorrapati

Hi,
I see that the head pose estimate is encoded in clm_model.params_global and clm_model.params_local, which are updated in CLMTracker::PDM::CalcParams and CLMTracker::CLM::NU_RLMS. Is there a way to retrieve the head pose estimate directly from ground-truth landmark labels? For example, like Algorithm 1 in your thesis, which takes as input the parameters learned offline on the training data together with the landmarks, and outputs the pose information p.

Thank you

@TadasBaltrusaitis
Owner

Hi,

You can do that using the PDM::CalcParams(Vec6d& out_params_global, const Mat_<double>& out_params_local, const Mat_<double>& landmark_locations, const Vec3d rotation) function. Pass it the ground-truth locations in the landmark_locations matrix (and an optional initial rotation estimate), and it will return out_params_global, which contains the scaling, translation, and rotation parameters. Combined with the camera parameters these give a head pose estimate (or they can be used directly, but that will not account for the perspective transform).

Thanks,
Tadas

@hemanthkorrapati
Author

Hi,
Thanks for the quick answer. I will try this out.
Just to confirm, your approach does not account for the perspective transform when visualizing the tracking results, right?
In other words, your landmark detection and tracking process does not encode or otherwise calculate the perspective information. Am I correct?

Thank you

@TadasBaltrusaitis
Owner

The landmark detection does not account for the perspective transform explicitly, and neither does CalcParams. However, Vec6d pose_estimate_to_draw = CLMTracker::GetCorrectedPoseWorld(clm_model, fx, fy, cx, cy); does, since it uses the camera focal lengths and optical centre.

-Tadas

@hemanthkorrapati
Author

Ah, I get the complete picture now 👍 Thanks so much :)

@hemanthkorrapati
Author

Hi Tadas,

I tried to use the CalcParams() function in a standalone fashion, but I am not getting the correct head pose. I am sure I am doing something wrong. Could you take a look at the minimal code below and give your opinion?

Thanks.

#include "CLM_core.h"

#include <fstream>
#include <sstream>
#include <chrono>

#include <boost/accumulators/accumulators.hpp>


using namespace std;
using namespace cv;

int main (int argc, char **argv)
{

    // Assuming 640x490 images
    float fx = 500, fy = 500, cx = 320, cy = 245;

    // Image sequence read from a file
    std::ifstream imageFile("Path_Images.txt");
    if(!imageFile.good())
    {
        std::cout << "Image list not found." << std::endl;
        return 0;
    }

    CLMTracker::PDM pdm;
    pdm.Read("./bin/model/pdms/In-the-wild_aligned_PDM_68.txt");

    string line;
    cv::Vec6d pose_params(0, 0, 0, 0, 0, 0);
    while(getline(imageFile, line))
    {
        Mat im = cv::imread(line, CV_LOAD_IMAGE_COLOR);
        line.replace(line.size()-3, 3, "pts");

        cv::Mat_<double> landmarks(136, 1);
        std::ifstream landmarkFile(line.c_str());
        if(!landmarkFile.good())
        {
            std::cout << "Landmark file: " << line << " not readable" << std::endl;
            return 0;
        }
        std::cout << "Processing Image: " << line << std::endl;
        // Skip the three header lines of the .pts file
        // ("version: 1", "n_points: 68", "{")
        string temp;
        std::getline(landmarkFile, temp);
        std::getline(landmarkFile, temp);
        std::getline(landmarkFile, temp);
        for(int i=0; i< 68; i++)
        {
            landmarkFile >> landmarks.at<double>(i, 0);
            landmarkFile >> landmarks.at<double>(i+68, 0);
        }

        cv::Mat_<double> local_params(34, 1);
        local_params.setTo(0);
        cv::Vec3d rotation;
        rotation[0] = pose_params[3]; rotation[1] = pose_params[4]; rotation[2] = pose_params[5];
        auto t1 = std::chrono::high_resolution_clock::now();
        pdm.CalcParams(pose_params, local_params, landmarks, rotation);
        auto t2 = std::chrono::high_resolution_clock::now();
        auto timeTaken = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1);
        std::cout << pose_params << " (" << timeTaken.count() << " us)" << std::endl;

        int thickness = 2;
        CLMTracker::DrawBox(im, pose_params, Scalar(0, 0, 255), thickness, fx, fy, cx, cy);
        imshow("tracking_result", im);
        waitKey(1);
    }

    return 0;
}

@hemanthkorrapati
Author

Never mind. Figured it out and works like a charm :)
Thanks

@vijaychav

Hi Hemanth, I know that it's been a while, but could you point out what the error was? I was able to extract the 68 feature points but need some help getting the pose parameters from those points.

Thanks!

@misakibiu

Hi Tadas,
I am confused about the world coordinate frame: where is its origin? And how is the head coordinate space defined? I could not find any information about it. Thanks! ^O^

Jane
