About the loftr result #40

Open · sanersbug opened this issue Mar 20, 2023 · 22 comments
Labels: help wanted (Extra attention is needed) · question (Further information is requested)

@sanersbug commented Mar 20, 2023

I have tested LoFTR, but I found a problem: when I use two images of the same size, the result is normal, but when I use a big picture and a small picture, it detects nothing!
[image]

[image]
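
A possible workaround (just a sketch, I have not verified it against this repo's preprocessing): resize both images to one common size before matching. LoFTR's coarse matching runs on a 1/8-resolution feature grid, so the common size should keep both dimensions divisible by 8, and the matched points can be rescaled afterwards:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img_big = cv::imread("big.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img_small = cv::imread("small.png", cv::IMREAD_GRAYSCALE);

    // any common size with width/height divisible by 8
    cv::Size common(640, 480);
    cv::resize(img_big, img_big, common);
    cv::resize(img_small, img_small, common);
    // ... run LoFTR on the two equally sized images, then map the matched
    // keypoints back with the inverse scale factors of each resize
    return 0;
}
```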

@xmba15 (Owner) commented Mar 20, 2023

@sanersbug Can you add your pics here?

@sanersbug (Author) commented Mar 20, 2023

> @sanersbug Can you add your pics here?

Yes, they look like this:
[image]

[image]

@sanersbug (Author) commented Mar 20, 2023

Another thing: I find that LISRD can adapt to this scene easily.

[image]

But that is the Python version; I haven't compared it with the C++ LoFTR because I am not good with onnxruntime. LISRD is much faster than LoFTR, so LISRD seems more meaningful for practical applications.
Also, could you add Alipay? I could send a donation through Alipay easily; I think you work on these matching methods professionally.

@xmba15 (Owner) commented Mar 20, 2023

I have checked both the C++ code and the original Python code. For the indoor_new_ds weights, the number of matched points is indeed zero.

[image]

However, with the outdoor weights, LoFTR performed well, as you can see in the following pic (from the Python code).

[image]

Sadly, I suspect there is still some bug that makes the conversion of the old outdoor weights unsuccessful. Hence we cannot use them with my C++ program.

@xmba15 (Owner) commented Mar 20, 2023

LISRD is actually a descriptor model, so it works somewhat like SuperPoint (without SuperGlue), though the inference part of LISRD is more complicated than SuperPoint's.

However, as many have asked for LISRD code, I may see what I can do later.
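
To make "without SuperGlue" concrete: the descriptors are matched with a plain mutual nearest-neighbor search instead of a learned matcher. A tiny self-contained sketch of that idea (random matrices stand in for real LISRD descriptors here):

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // stand-in descriptors: two sets of 500 128-d vectors (not real LISRD outputs)
    cv::Mat desc1(500, 128, CV_32F), desc2(500, 128, CV_32F);
    cv::randn(desc1, cv::Scalar::all(0), cv::Scalar::all(1));
    cv::randn(desc2, cv::Scalar::all(0), cv::Scalar::all(1));

    // crossCheck=true keeps only mutual nearest neighbors, roughly the same
    // filtering the LISRD demo performs with argmax in both directions
    cv::BFMatcher matcher(cv::NORM_L2, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);
    std::cout << "mutual NN matches: " << matches.size() << std::endl;
    return 0;
}
```

LISRD's extra twist is that it keeps several descriptor variants per point and weights their similarities with meta descriptors before this nearest-neighbor step.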

@ChmarsLuo commented Mar 20, 2023

It seems that LISRD works better, and the author of LISRD says that LISRD is faster. @xmba15

@ChmarsLuo

LoFTR and SuperGlue are scale-insensitive

@sanersbug (Author)

> LISRD is actually a descriptor model, so it works somewhat like SuperPoint (without SuperGlue), though the inference part of LISRD is more complicated than SuperPoint's.
>
> However, as many have asked for LISRD code, I may see what I can do later.

That's it. I have a LISRD ONNX model, but when I tried to write a C++ program for LISRD, it failed.
You can try the LISRD method with the following model:
https://drive.google.com/drive/folders/1bW14nNAzjIl2i3wm8OKUxYhngoib6UC3?usp=sharing
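
For reference, the bare onnxruntime C++ skeleton I started from (just the session setup; the input/output tensor handling is the part I could not get right):

```cpp
#include <onnxruntime_cxx_api.h>
#include <iostream>

int main()
{
    // minimal session setup for the shared model
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "lisrd");
    Ort::SessionOptions options;
    options.SetIntraOpNumThreads(1);
    Ort::Session session(env, "lisrd_vidit_danamic_axis.onnx", options);

    std::cout << "inputs: " << session.GetInputCount()
              << ", outputs: " << session.GetOutputCount() << std::endl;
    return 0;
}
```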

@WithAIGC commented Mar 21, 2023

SuperGlue and LoFTR are applied to 3D comparison in point-cloud and image-reconstruction pipelines: even with a large amount of occlusion, matching and pose estimation are performed on local and global features, so as to obtain better 3D point clouds and scene-reconstruction results.

In general, these two algorithms are advanced image-registration and 3D-reconstruction techniques that can achieve brilliant results in a variety of scenarios.

LISRD works better in the field of object tracking. The LISRD algorithm is a fast online multi-target tracking algorithm with high accuracy and real-time performance. It uses a coupled single-model technique to estimate and update all target states and the corresponding appearance models during each frame of tracking, thereby avoiding the model matching and selection problem of traditional multi-model tracking algorithms and improving tracking accuracy.

In addition, the LISRD algorithm introduces a novel sparse-representation-based spatio-temporal constraint for filtering and smoothing the target trajectory, which reduces unnecessary false alarms and jumps while maintaining tracking accuracy.

In general, the LISRD algorithm not only achieves good results in the field of target tracking, but also has broad application prospects in other fields (such as video processing and human-computer interaction). @sanersbug @xmba15

@sanersbug (Author)

@xmba15 Have you tried to write the LISRD code? In the post-processing, I can't understand the extract_descriptors function, especially the use of torch.nn.functional.grid_sample in `grid_sample(descriptors[k], grid_points), dim=1)`.

@yeluo80808

@sanersbug @xmba15 This is the LISRD port I wrote on top of @xmba15's library. The results are a little worse than the Python version; @xmba15 can build on it.
```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/features2d.hpp>

// standard headers needed below (the original include lines lost their targets in formatting)
#include <algorithm>
#include <fstream>
#include <iostream>
#include <numeric>
#include <vector>

#include <torch/torch.h>
#include <torch/script.h>
#include "OrtSessionHandler.h"  // transposeNDWrapper/softmax presumably come from this repo's examples

using namespace std;
using namespace Ort;

std::vector<cv::KeyPoint> getKeyPointFromSift(const cv::Mat& img, int nfeatures, float contrastThreshold)
{
    std::vector<cv::KeyPoint> keypoints;
    // cv::Ptr<cv::SIFT> siftDetector = cv::SIFT::create(nfeatures);
    cv::Ptr<cv::Feature2D> siftDetector = cv::SIFT::create(nfeatures);
    siftDetector->detect(img, keypoints);
    // swap x/y to the (row, col) convention the python code uses downstream
    std::vector<cv::KeyPoint> result;
    for (const auto& k : keypoints)
    {
        cv::KeyPoint kp(k.pt.y, k.pt.x, k.size);
        kp.response = k.response;  // the original passed response as the size argument
        result.push_back(kp);
    }
    return result;
}

std::vector<cv::KeyPoint> getKeyPoints(const std::vector<Ort::OrtSessionHandler::DataOutputType>& inferenceOutput,
                                       int borderRemove = 4, float confidenceThresh = 0.015)
{
    // output shape without the batch dimension
    std::vector<int> detectorShape(inferenceOutput[0].second.begin() + 1, inferenceOutput[0].second.end());

    cv::Mat detectorMat(detectorShape.size(), detectorShape.data(), CV_32F,
                        inferenceOutput[0].first);  // 65 x H/8 x W/8

    cv::Mat buffer;
    transposeNDWrapper(detectorMat, {1, 2, 0}, buffer);
    buffer.copyTo(detectorMat);  // H/8 x W/8 x 65

    for (int i = 0; i < detectorShape[1]; ++i)
    {
        for (int j = 0; j < detectorShape[2]; ++j)
        {
            softmax(detectorMat.ptr<float>(i, j), detectorShape[0]);
        }
    }
    // same as python code dense[:-1, :, :]
    detectorMat =
        detectorMat({cv::Range::all(), cv::Range::all(), cv::Range(0, detectorShape[0] - 1)}).clone();  // H/8 x W/8 x 64
    detectorMat = detectorMat.reshape(1, {detectorShape[1], detectorShape[2], 8, 8});  // H/8 x W/8 x 8 x 8
    transposeNDWrapper(detectorMat, {0, 2, 1, 3}, buffer);
    buffer.copyTo(detectorMat);  // H/8 x 8 x W/8 x 8
    detectorMat = detectorMat.reshape(1, {detectorShape[1] * 8, detectorShape[2] * 8});  // H x W

    std::vector<cv::KeyPoint> keyPoints;
    for (int i = borderRemove; i < detectorMat.rows - borderRemove; ++i)
    {
        auto rowPtr = detectorMat.ptr<float>(i);
        for (int j = borderRemove; j < detectorMat.cols - borderRemove; ++j)
        {
            if (rowPtr[j] > confidenceThresh)
            {
                cv::KeyPoint keyPoint;
                keyPoint.pt.x = j;
                keyPoint.pt.y = i;
                keyPoint.response = rowPtr[j];
                keyPoints.emplace_back(keyPoint);
            }
        }
    }
    return keyPoints;
    // std::vector<cv::KeyPoint> result;
    // for (const auto& k : keyPoints)
    // {
    //     result.push_back(cv::KeyPoint(k.pt.y, k.pt.x, k.response));
    // }
    // return result;
}

std::vector<int> nmsFast(const std::vector<cv::KeyPoint>& keyPoints, int height, int width, int distThresh = 4)
{
    static const int TO_PROCESS = 1;
    static const int EMPTY_OR_SUPPRESSED = 0;
    static const int KEPT = -1;

    std::vector<int> sortedIndices(keyPoints.size());
    std::iota(sortedIndices.begin(), sortedIndices.end(), 0);
    // sort in descending order based on confidence
    std::stable_sort(sortedIndices.begin(), sortedIndices.end(),
                     [&keyPoints](int lidx, int ridx) { return keyPoints[lidx].response > keyPoints[ridx].response; });

    cv::Mat grid = cv::Mat(height, width, CV_8S, cv::Scalar(TO_PROCESS));
    std::vector<int> keepIndices;

    for (int idx : sortedIndices)
    {
        int x = keyPoints[idx].pt.x;
        int y = keyPoints[idx].pt.y;

        if (grid.at<schar>(y, x) == TO_PROCESS)
        {
            for (int i = y - distThresh; i < y + distThresh; ++i)
            {
                if (i < 0 || i >= height)
                {
                    continue;
                }

                for (int j = x - distThresh; j < x + distThresh; ++j)
                {
                    if (j < 0 || j >= width)
                    {
                        continue;
                    }
                    // the grid is CV_8S, so it must be accessed as schar (the original used at<int> here)
                    grid.at<schar>(i, j) = EMPTY_OR_SUPPRESSED;
                }
            }
            grid.at<schar>(y, x) = KEPT;
            keepIndices.emplace_back(idx);
        }
    }

    return keepIndices;
}

std::vector<cv::KeyPoint> nmsFast_good(std::vector<cv::KeyPoint>& in_corners, int H, int W, int dist_thresh = 4)
{
    // Create a grid sized HxW. Assign each corner location a 1, rest are zeros.
    cv::Mat grid = cv::Mat::zeros(H, W, CV_32S);  // CV_32S == int
    std::vector<int> inds(H * W);
    // Sort by confidence.
    std::sort(in_corners.begin(), in_corners.end(),
              [](const cv::KeyPoint& a, const cv::KeyPoint& b) { return a.response > b.response; });
    // Rounded corners.
    std::vector<cv::Point> rcorners;
    for (const auto& corner : in_corners)
    {
        rcorners.emplace_back(cvRound(corner.pt.x), cvRound(corner.pt.y));
    }
    // Check for edge case of 0 or 1 corners.
    if (rcorners.empty())
    {
        return {};
    }
    if (rcorners.size() == 1)
    {
        return {in_corners[0]};
    }
    // Initialize the grid.
    for (std::size_t i = 0; i < rcorners.size(); i++)
    {
        grid.at<int>(rcorners[i]) = 1;
        inds[rcorners[i].y * W + rcorners[i].x] = i;
    }
    // Pad the border of the grid, so that we can NMS points near the border.
    int pad = dist_thresh;
    cv::copyMakeBorder(grid, grid, pad, pad, pad, pad, cv::BORDER_CONSTANT, 0);
    // Iterate through points, highest to lowest conf, suppress neighborhood.
    int count = 0;
    for (std::size_t i = 0; i < rcorners.size(); i++)
    {
        // Account for top and left padding.
        cv::Point pt(rcorners[i].x + pad, rcorners[i].y + pad);
        if (grid.at<int>(pt) == 1)
        {
            // Not yet suppressed: keep this corner. rcorners is already in unpadded
            // coordinates, so no pad offset is subtracted (the original subtracted pad
            // here, shifting every kept keypoint).
            cv::KeyPoint new_kp = in_corners[inds[rcorners[i].y * W + rcorners[i].x]];
            new_kp.pt.x = rcorners[i].x;
            new_kp.pt.y = rcorners[i].y;
            in_corners[count++] = new_kp;
            // Suppress neighbors.
            for (int dx = -dist_thresh; dx <= dist_thresh; dx++)
            {
                for (int dy = -dist_thresh; dy <= dist_thresh; dy++)
                {
                    grid.at<int>(pt.y + dy, pt.x + dx) = 0;
                }
            }
        }
    }
    in_corners.resize(count);
    // Swap x/y so that downstream code gets (row, col) keypoints, as in the python version.
    std::vector<cv::KeyPoint> result;
    for (const auto& k : in_corners)
    {
        cv::KeyPoint kp(k.pt.y, k.pt.x, k.size);
        kp.response = k.response;
        result.push_back(kp);
    }
    return result;
}

torch::Tensor keyPointsToGrid(const std::vector<cv::KeyPoint>& in_keypoints, const cv::Size& img_size)
{
    //-----------------------------------------------------//
    // [k.pt[1], k.pt[0]], k.response in python
    //-----------------------------------------------------//
    // keypoints arrive here in (row, col) order, matching the python convention
    std::vector<float> keypoints_data;
    for (const auto& keypoint : in_keypoints)
    {
        keypoints_data.push_back(keypoint.pt.x);
        keypoints_data.push_back(keypoint.pt.y);
    }
    int n_points = in_keypoints.size();
    // clone() is required: from_blob does not own keypoints_data, which dies with this scope
    torch::Tensor keypoints_tensor =
        torch::from_blob(keypoints_data.data(), {n_points, 2}, torch::kFloat32).clone();
    std::ofstream file("keypoints_tensor.txt");
    file << keypoints_tensor;
    file.close();
    std::cout << "keypoints_tensor: " << keypoints_tensor.sizes() << std::endl;
    // normalize pixel coordinates to [-1, 1] as grid_sample expects
    torch::Tensor img_size_tensor = torch::tensor({img_size.height, img_size.width}, torch::kFloat32);
    torch::Tensor points_tensor = keypoints_tensor * 2.0 / img_size_tensor - 1.0;
    // swap (row, col) -> (x, y), the coordinate order grid_sample expects
    torch::Tensor index = torch::tensor({1, 0}, torch::dtype(torch::kLong));
    points_tensor = points_tensor.index_select(1, index);
    torch::Tensor grid_keypoints_tensor = points_tensor.view({-1, n_points, 1, 2});
    return grid_keypoints_tensor;
}

std::pair<torch::Tensor, torch::Tensor>
extractDescriptors(const std::vector<Ort::OrtSessionHandler::DataOutputType>& lisrd_outputs,
                   const std::vector<cv::KeyPoint>& in_keypoints, const cv::Size& img_size)
{
    torch::Tensor grid_points = keyPointsToGrid(in_keypoints, img_size);
    std::vector<torch::Tensor> descs_vector, meta_descs_vector;
    for (std::size_t i = 0; i < lisrd_outputs.size(); i++)
    {
        const auto& output = lisrd_outputs.at(i);
        std::vector<int> descShape(output.second.begin(), output.second.end());  // e.g. 1 x 128 x H/8 x W/8
        cv::Mat desc(descShape.size(), descShape.data(), CV_32F, output.first);
        torch::Tensor tensor_desc =
            torch::from_blob(desc.data, {desc.size[0], desc.size[1], desc.size[2], desc.size[3]}, torch::kFloat32);
        // sample the dense descriptor map at the keypoint locations
        torch::nn::functional::GridSampleFuncOptions sample_options;
        sample_options.align_corners(true);
        torch::Tensor sample_desc = torch::nn::functional::grid_sample(tensor_desc, grid_points, sample_options);
        // L2-normalize along the channel dimension
        torch::nn::functional::NormalizeFuncOptions normal_options;
        normal_options.p(2);
        normal_options.dim(1);
        torch::Tensor normal_desc = torch::nn::functional::normalize(sample_desc, normal_options);
        torch::Tensor trans_desc = torch::transpose(torch::squeeze(normal_desc), 0, 1);
        // the first four outputs are the descriptors, the rest are the meta descriptors
        if (i < 4)
        {
            descs_vector.push_back(trans_desc);
        }
        else
        {
            meta_descs_vector.push_back(trans_desc);
        }
    }
    torch::Tensor descs = torch::stack(descs_vector, 1);
    torch::Tensor meta_descs = torch::stack(meta_descs_vector, 1);
    return std::make_pair(descs, meta_descs);
}

torch::Tensor lisrdMatcher(torch::Tensor desc1, torch::Tensor desc2, torch::Tensor meta_desc1, torch::Tensor meta_desc2)
{
    // weight each descriptor variant's similarity by the meta-descriptor similarity
    torch::Tensor desc_weights = torch::einsum("nid,mid->nim", {meta_desc1, meta_desc2});
    meta_desc1.reset();
    meta_desc2.reset();
    desc_weights = torch::softmax(desc_weights, 1);
    torch::Tensor desc_sims = torch::einsum("nid,mid->nim", {desc1, desc2}) * desc_weights;
    desc1.reset();
    desc2.reset();
    desc_weights.reset();
    desc_sims = torch::sum(desc_sims, 1);
    // mutual nearest neighbor check: keep i only if nn21[nn12[i]] == i
    torch::Tensor nn12 = torch::argmax(desc_sims, 1);
    torch::Tensor nn21 = torch::argmax(desc_sims, 0);
    torch::Tensor ids1 = torch::arange(desc_sims.size(0), torch::dtype(torch::kLong));
    desc_sims.reset();
    torch::Tensor mask = (ids1 == nn21.index_select(0, nn12));
    torch::Tensor mask_idx = mask.nonzero().squeeze(1);
    torch::Tensor t1 = torch::index_select(ids1, 0, mask_idx);
    torch::Tensor t2 = torch::index_select(nn12, 0, mask_idx);
    torch::Tensor matches = torch::stack({t1, t2}, 1);
    return matches;
}

std::pair<std::vector<cv::KeyPoint>, std::vector<cv::KeyPoint>>
filterOutliersRansac(const std::vector<cv::KeyPoint>& kp1, const std::vector<cv::KeyPoint>& kp2)
{
    std::vector<cv::Point2f> kp1_pts, kp2_pts;
    for (const auto& kp : kp1)
        kp1_pts.emplace_back(kp.pt);
    for (const auto& kp : kp2)
        kp2_pts.emplace_back(kp.pt);
    // the inlier mask written by findHomography is a vector of uchar flags
    std::vector<uchar> inliers(kp1_pts.size());
    cv::findHomography(kp1_pts, kp2_pts, cv::RANSAC, 3, inliers);
    std::vector<cv::KeyPoint> filtered_kp1, filtered_kp2;
    for (std::size_t i = 0; i < inliers.size(); i++)
    {
        if (inliers[i])
        {
            filtered_kp1.push_back(kp1[i]);
            filtered_kp2.push_back(kp2[i]);
        }
    }
    return std::make_pair(filtered_kp1, filtered_kp2);
}

void plot_keypoints(cv::Mat& img, const std::vector<cv::KeyPoint>& kpts, const std::vector<cv::Scalar>& colors, float ps)
{
    for (std::size_t i = 0; i < kpts.size(); i++)
    {
        // keypoints are stored (row, col), so swap to (x, y) for drawing
        const cv::Point2f& pt = kpts[i].pt;
        cv::circle(img, cv::Point2f(pt.y, pt.x), ps, colors[i], -1);
    }
}

int main()
{
    cv::String img_path1 = "./examples/data/small.png";
    cv::String img_path2 = "./examples/data/big.png";
    std::string superpoint_model_path = "./model/super_point_danamic_axis.onnx";
    std::string lisrd_model_path = "./model/lisrd_vidit_danamic_axis.onnx";
    cv::Mat bgr1 = cv::imread(img_path1, cv::IMREAD_COLOR);
    cv::Mat bgr2 = cv::imread(img_path2, cv::IMREAD_COLOR);
    cv::Mat gray1 = cv::imread(img_path1, cv::IMREAD_GRAYSCALE);
    cv::Mat gray2 = cv::imread(img_path2, cv::IMREAD_GRAYSCALE);
    cv::Mat rgb1, rgb2, resized_img1, resized_img2;
    cv::cvtColor(bgr1, rgb1, cv::COLOR_BGR2RGB);
    cv::cvtColor(bgr2, rgb2, cv::COLOR_BGR2RGB);  // the original used COLOR_RGB2BGR; both constants do the same swap
    // cv::resize(rgb1, resized_img1, cv::Size(1280, 720), 0, 0, cv::INTER_CUBIC);
    // cv::resize(rgb2, resized_img2, cv::Size(1280, 720), 0, 0, cv::INTER_CUBIC);

    //-----------------------------------------------------//
    // keypoints from sift
    //-----------------------------------------------------//
    // std::vector<cv::KeyPoint> keypoint1 = getKeyPointFromSift(rgb1, 1500, 0.04);
    // std::vector<cv::KeyPoint> keypoint2 = getKeyPointFromSift(rgb2, 1500, 0.04);

    //-----------------------------------------------------//
    // keypoints from superpoint inference
    //-----------------------------------------------------//
    Ort::SuperPoint superpoint1(superpoint_model_path, 0,
                                std::vector<std::vector<int64_t>>{
                                    {1, gray1.channels(), gray1.size().height, gray1.size().width}});
    std::vector<float> superpoint_input1(gray1.channels() * gray1.size().width * gray1.size().height);
    superpoint1.Preprocess(superpoint_input1.data(), gray1.data, gray1.size().height, gray1.size().width,
                           gray1.channels());
    std::vector<Ort::OrtSessionHandler::DataOutputType> superpoint_output1 = superpoint1({superpoint_input1.data()});
    std::vector<cv::KeyPoint> keypoint1 = getKeyPoints(superpoint_output1);
    keypoint1 = nmsFast_good(keypoint1, gray1.size().height, gray1.size().width);
    // std::vector<int> keepIndices1 = nmsFast(keypoint1, gray1.size().height, gray1.size().width);
    // std::vector<cv::KeyPoint> keepKeyPoints1;
    // keepKeyPoints1.reserve(keepIndices1.size());
    // std::transform(keepIndices1.begin(), keepIndices1.end(), std::back_inserter(keepKeyPoints1),
    //                [&keypoint1](int idx) { return keypoint1[idx]; });
    // keypoint1 = std::move(keepKeyPoints1);

    Ort::SuperPoint superpoint2(superpoint_model_path, 0,
                                std::vector<std::vector<int64_t>>{
                                    {1, gray2.channels(), gray2.size().height, gray2.size().width}});
    std::vector<float> superpoint_input2(gray2.channels() * gray2.size().width * gray2.size().height);
    superpoint2.Preprocess(superpoint_input2.data(), gray2.data, gray2.size().height, gray2.size().width,
                           gray2.channels());
    std::vector<Ort::OrtSessionHandler::DataOutputType> superpoint_output2 = superpoint2({superpoint_input2.data()});
    std::vector<cv::KeyPoint> keypoint2 = getKeyPoints(superpoint_output2);
    keypoint2 = nmsFast_good(keypoint2, gray2.size().height, gray2.size().width);

    // std::ofstream file1("kp_sp_cpp1.txt");
    // for (const auto& keypoint : keypoint1)
    // {
    //     file1 << keypoint.pt.x << " " << keypoint.pt.y << std::endl;
    // }
    // file1.close();
    // std::ofstream file2("kp_sp_cpp2.txt");
    // for (const auto& keypoint : keypoint2)
    // {
    //     file2 << keypoint.pt.x << " " << keypoint.pt.y << std::endl;
    // }
    // file2.close();

    float ps = 2;
    std::vector<cv::Scalar> colors1(keypoint1.size(), CV_RGB(255, 0, 0));
    plot_keypoints(bgr1, keypoint1, colors1, ps);
    std::vector<cv::Scalar> colors2(keypoint2.size(), CV_RGB(255, 0, 0));
    plot_keypoints(bgr2, keypoint2, colors2, ps);

    //-----------------------------------------------------//
    // descriptors from lisrd inference
    //-----------------------------------------------------//
    Ort::Lisrd lisrd1(lisrd_model_path, 0,
                      std::vector<std::vector<int64_t>>{{1, rgb1.channels(), rgb1.size().height, rgb1.size().width}});
    std::vector<float> lisrd_input1(rgb1.channels() * rgb1.size().width * rgb1.size().height);
    lisrd1.Preprocess(lisrd_input1.data(), rgb1.data, rgb1.size().height, rgb1.size().width, rgb1.channels());
    std::vector<Ort::OrtSessionHandler::DataOutputType> lisrd_output1 = lisrd1({lisrd_input1.data()});

    // note: the original passed rgb1's shape here, which breaks when the two images differ in size
    Ort::Lisrd lisrd2(lisrd_model_path, 0,
                      std::vector<std::vector<int64_t>>{{1, rgb2.channels(), rgb2.size().height, rgb2.size().width}});
    std::vector<float> lisrd_input2(rgb2.channels() * rgb2.size().width * rgb2.size().height);
    lisrd2.Preprocess(lisrd_input2.data(), rgb2.data, rgb2.size().height, rgb2.size().width, rgb2.channels());
    std::vector<Ort::OrtSessionHandler::DataOutputType> lisrd_output2 = lisrd2({lisrd_input2.data()});

    std::pair<torch::Tensor, torch::Tensor> result1 = extractDescriptors(lisrd_output1, keypoint1, rgb1.size());
    std::pair<torch::Tensor, torch::Tensor> result2 = extractDescriptors(lisrd_output2, keypoint2, rgb2.size());

    // dumps for comparison with the python implementation
    std::ofstream file1("desc1.txt");
    file1 << result1.first;
    file1.close();
    std::ofstream file2("meta_desc1.txt");
    file2 << result1.second;
    file2.close();
    std::ofstream file3("desc2.txt");
    file3 << result2.first;
    file3.close();
    std::ofstream file4("meta_desc2.txt");
    file4 << result2.second;
    file4.close();

    torch::Tensor matches = lisrdMatcher(result1.first, result2.first, result1.second, result2.second);
    std::cout << "matches shape: " << matches.sizes() << std::endl;
    std::ofstream file("matches.txt");
    file << matches;
    file.close();

    //--------------------------------------------------------------//
    // torch::Tensor --> cv::Mat
    //--------------------------------------------------------------//
    cv::Mat matches_mat(matches.size(0), matches.size(1), CV_32SC1);
    matches = matches.to(at::kInt);
    auto matches_accessor = matches.accessor<int32_t, 2>();
    for (int i = 0; i < matches.size(0); i++)
    {
        for (int j = 0; j < matches.size(1); j++)
        {
            matches_mat.at<int>(i, j) = matches_accessor[i][j];
        }
    }
    std::cout << "cv matches size: " << matches_mat.size() << std::endl;

    //-------------------------------------------------------------//
    // kp1[matches[:, 0]][:, [1, 0]], kp2[matches[:, 1]][:, [1, 0]]
    //-------------------------------------------------------------//
    std::vector<cv::KeyPoint> matched_kp1, matched_kp2;
    for (int i = 0; i < matches_mat.rows; i++)
    {
        int idx1 = matches_mat.at<int>(i, 0);
        int idx2 = matches_mat.at<int>(i, 1);
        cv::KeyPoint kp1_temp = keypoint1[idx1];
        cv::KeyPoint kp2_temp = keypoint2[idx2];
        // swap (row, col) back to (x, y)
        std::swap(kp1_temp.pt.x, kp1_temp.pt.y);
        std::swap(kp2_temp.pt.x, kp2_temp.pt.y);
        matched_kp1.push_back(kp1_temp);
        matched_kp2.push_back(kp2_temp);
    }

    //-------------------------------------------------------------//
    // python: filter_outliers_ransac
    //-------------------------------------------------------------//
    std::pair<std::vector<cv::KeyPoint>, std::vector<cv::KeyPoint>> filterKeyPoints =
        filterOutliersRansac(matched_kp1, matched_kp2);
    std::cout << "filterKeyPoints first: " << filterKeyPoints.first.size()
              << " filterKeyPoints second: " << filterKeyPoints.second.size() << std::endl;

    std::vector<cv::DMatch> matches_info;
    for (int i = 0; i < static_cast<int>(filterKeyPoints.first.size()); i++)
    {
        matches_info.push_back(cv::DMatch(i, i, 0));
    }

    cv::Mat matchesImage;
    // cv::resize(bgr1, bgr1, cv::Size(640, 480));
    // cv::resize(bgr2, bgr2, cv::Size(640, 480));
    cv::drawMatches(bgr1, filterKeyPoints.first, bgr2, filterKeyPoints.second, matches_info, matchesImage,
                    cv::Scalar::all(-1), cv::Scalar::all(-1), std::vector<char>(),
                    cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    cv::imwrite("Lisrd_good_matches.jpg", matchesImage);
    cv::imshow("Lisrd_good_matches", matchesImage);
    cv::waitKey();

    return EXIT_SUCCESS;
}
```

@Creed1874

@xmba15 @sanersbug @WithAIGC @ChmarsLuo @yeluo80808 Has anyone gotten a good result with LISRD? It's too hard to make it work in C++.

@xmba15 (Owner) commented Mar 22, 2023

This is only a hobby repository, so I can only work on it in my free time. It might take some time until I can get to it.

@sanersbug
You can take a look at the following function to understand `grid_sample(descriptors[k], grid_points), dim=1)`:

https://github.com/xmba15/onnx_runtime_cpp/blob/master/examples/Utility.hpp#L197
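
Roughly, that call treats the dense descriptor map as an image and bilinearly samples it at the keypoint locations, which must be given as (x, y) pairs normalized to [-1, 1]; the `dim=1` part is an L2 normalization over channels. A standalone libtorch sketch with made-up shapes (not code from this repo):

```cpp
#include <torch/torch.h>
#include <iostream>

int main()
{
    namespace F = torch::nn::functional;
    // made-up dense descriptor map: batch 1, 128 channels, on a 90 x 160 grid
    torch::Tensor desc_map = torch::randn({1, 128, 90, 160});
    // two keypoints as normalized (x, y) in [-1, 1], shaped [1, N, 1, 2]
    torch::Tensor grid = torch::tensor({-0.5f, 0.25f, 0.1f, -0.8f}).view({1, 2, 1, 2});
    // bilinear sampling of the map at those points -> [1, 128, 2, 1]
    torch::Tensor sampled = F::grid_sample(desc_map, grid, F::GridSampleFuncOptions().align_corners(true));
    // L2-normalize over the channel dimension (the dim=1 in the python snippet)
    torch::Tensor descs = F::normalize(sampled, F::NormalizeFuncOptions().p(2).dim(1)).squeeze();
    std::cout << descs.sizes() << std::endl;  // [128, 2]
    return 0;
}
```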

xmba15 added the help wanted and question labels Mar 22, 2023
xmba15 self-assigned this Mar 22, 2023

@xmba15 (Owner) commented Mar 22, 2023

@yeluo80808 Thank you for the code. I will use it as a reference when I explore LISRD.

@sanersbug (Author)

> This is only a hobby repository, so I can only work on it in my free time. It might take some time until I can get to it.
>
> @sanersbug You can take a look at the following function to understand `grid_sample(descriptors[k], grid_points), dim=1)`:
>
> https://github.com/xmba15/onnx_runtime_cpp/blob/master/examples/Utility.hpp#L197

OK, thanks. I didn't notice it before.

@Frown000

It turns out that besides me, there are so many people using LISRD. Looking forward to it.

@sanersbug (Author)

@yeluo80808 I have tried the code; there is indeed a big problem with the result. Have you run into that? How can it be solved? @xmba15

@Frown000 commented Apr 3, 2023

Has LISRD produced any results? Do you guys have any good news to share? @sanersbug @xmba15 @WithAIGC @yeluo80808

@sanersbug (Author)

@Frown000 Sorry, I haven't solved that problem. It's too hard for me, so I gave up; I hope @xmba15 can solve it.

@sanersbug (Author)

@xmba15 Have you tried LISRD? No one has made it work for such a long time...

@Frown000

Is there any progress?

@xmba15 (Owner) commented Apr 12, 2023

Sorry, I am quite busy now, so I currently have no specific plan to support this in the short term. If you want to accelerate development, please consider creating an at least runnable PR in this repository. I will take a look and improve on it.

xmba15 mentioned this issue Apr 17, 2023