
Application (redis client) crashes with SIGSEGV in redisBufferRead at hiredis.c:941 while making a GET call to the Redis server #581

Open
Sukesh0513 opened this issue Jul 17, 2024 · 4 comments


@Sukesh0513

PROBLEM

Our application is written in C++, so we use redis++ (1.3.7) and hiredis (1.0.2) for our Redis client implementation. The application makes a GET call to the Redis server; after that call, a chain of functions in redis++ and hiredis is invoked one by one, and the application finally crashes with SIGSEGV in redisBufferRead at hiredis.c:941. This is what I could see from backtracing the core file.

Note: our application issues several GET calls until it receives the desired value from the redis-server. If the value has not yet been configured in the Redis server, GET returns a null value and we retry until we get a non-null value. In the initial stage the Redis server is not yet populated with this value, so our application keeps making GET calls. The crash does not happen on every GET call; it only happens once, after multiple retries.

Here is the backtrace from the core file:

Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007f6dc0b49a21 in redisBufferRead (c=0x1720680) at hiredis.c:941
941 hiredis.c: No such file or directory.

#0 0x00007f6dc0b49a21 in redisBufferRead (c=0x1720680) at hiredis.c:941
#1 0x00007f6dc0b49d78 in redisGetReply (c=c@entry=0x1720680, reply=reply@entry=0x7f6da0989618) at hiredis.c:1032
#2 0x00007f6dc0b84b66 in redis::Connection::recv (this=this@entry=0x7f6da0989688, handle_error_reply=handle_error_reply@entry=true)
at /tmp/tmp.PiajinzKq4/redis-plus-plus-1.3.7/src/redis++/connection.cpp:212
#3 0x00007f6dc0b9e913 in redis::Redis::_command<void ()(redis::Connection&, std::basic_string_view<char, std::char_traits > const&), std::basic_string_view<char, std::char_traits > const&> (this=, cmd=0x7f6dc0b9cb40 <redis::cmd::get(redis::Connection&, std::basic_string_view<char, std::char_traits > const&)>, connection=...)
at /tmp/tmp.PiajinzKq4/redis-plus-plus-1.3.7/src/redis++/redis.hpp:1316
#4 redis::Redis::command<void ()(redis::Connection&, std::basic_string_view<char, std::char_traits > const&), std::basic_string_view<char, std::char_traits > const&> (
this=, cmd=cmd@entry=0x7f6dc0b9cb40 <redis::cmd::get(redis::Connection&, std::basic_string_view<char, std::char_traits > const&)>)
at /tmp/tmp.PiajinzKq4/redis-plus-plus-1.3.7/src/redis++/redis.hpp:47
#5 0x00007f6dc0b9733c in redis::Redis::get[abi:cxx11](std::basic_string_view<char, std::char_traits > const&) (this=, key=...)
at /tmp/tmp.PiajinzKq4/redis-plus-plus-1.3.7/src/redis++/redis.cpp:314

Initially I thought that the redisContext used in redisBufferRead at hiredis.c:941 was a null pointer or had not been initialized, but when I checked the backtrace and printed it, I found that it is not null and is initialized. Looking at the backtrace, I cannot find any reason for redisBufferRead to raise a SIGSEGV. Any idea of why this could happen would be very helpful.

The only cause I can think of is that the ctx initialized at line 236 in github.com__sewenew__redis-plus-plus/src/sw/redis++/connection.cpp is already freed (a dangling pointer) by the time it is dereferenced at line 241. Has this been seen before, or is there some other reason for this to happen? A hypothetical sketch of that failure mode is below.

ENVIRONMENT
OS: SUSE Linux
Version: 15-SP5
hiredis: 1.0.2
redis-plus-plus: 1.3.7

@sewenew
Owner

sewenew commented Jul 17, 2024

Both hiredis 1.0.2 and redis-plus-plus 1.3.7 are very old versions, and the latest versions might already have fixed the bug that causes your problem. I'd suggest you upgrade both to the latest version and try again. Also, please give me a minimal code snippet that can reproduce your problem.

Also, please ensure that only ONE version of hiredis is installed. Otherwise, you might run into some weird problems.

Regards

@Sukesh0513
Author

Hi again,

Thanks for the suggestion. We have upgraded to redis++ (1.3.11) and hiredis (1.2.0), but even with the new versions we still see this problem.

@sewenew
Owner

sewenew commented Jul 19, 2024

I cannot reproduce your problem by sending the GET command to Redis in a for loop. Can you give me a minimal code snippet that reproduces your problem, so that I can do some debugging?

@Sukesh0513
Author

Sukesh0513 commented Jul 19, 2024

Hi,

Thanks for looking into it. We came across this problem while running in a k8s cluster (the Redis server is one pod and our application is another pod that acts as a client to the Redis pod). These are private Docker images, so I am afraid I cannot share them with you. But we just run a GET command in a loop towards the Redis server, as you tested.

I will try to build a small k8s setup from open-source Docker images and see if I can reproduce the issue; maybe then you can debug it.
