
IModel.BasicAck semantic not respected #180

Closed
manuelspezzani opened this issue Nov 10, 2023 · 4 comments

manuelspezzani commented Nov 10, 2023
I've found a couple of issues with the way BasicAck is implemented in AddUp.FakeRabbitMQ.
Since they are closely related I'm creating a single issue, but if you prefer I can split it in two.

  1. Messages seem to be acked in the order they were received (FIFO), regardless of the provided DeliveryTag
  2. The BasicAck "multiple" parameter seems to be ignored

I've created a couple of tests that reproduce both issues.
As far as I can tell from the source code, both issues are caused by the fact that FakeModel.BasicAck is implemented as a ConcurrentQueue.TryDequeue, which always removes the head of the queue regardless of the delivery tag.
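For contrast, a delivery-tag-aware implementation could keep unacked deliveries in a map keyed by delivery tag instead of a FIFO queue. The following is only an illustrative sketch, not actual AddUp.FakeRabbitMQ code; the `PendingDeliveries` type and its members are hypothetical names:

```csharp
using System.Collections.Concurrent;
using System.Linq;

// Hypothetical sketch: unacked deliveries keyed by delivery tag, so that
// acking tag N removes exactly that message, and acking with
// multiple = true removes every message whose tag is <= N.
public sealed class PendingDeliveries
{
    private readonly ConcurrentDictionary<ulong, byte[]> unacked = new();

    public void Deliver(ulong deliveryTag, byte[] body) =>
        unacked[deliveryTag] = body;

    public void Ack(ulong deliveryTag, bool multiple)
    {
        if (!multiple)
        {
            // Single ack: remove only the message with this exact tag.
            unacked.TryRemove(deliveryTag, out _);
            return;
        }

        // "multiple" ack: remove everything up to and including deliveryTag.
        foreach (var tag in unacked.Keys.Where(t => t <= deliveryTag).ToList())
            unacked.TryRemove(tag, out _);
    }

    public int Count => unacked.Count;
}
```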

public class ReproTests
{
    private const string Exchange = "exchange";
    private const string Queue = "queue";
    private const string RoutingKey = "routing-key";
    private RabbitServer m_rabbitServer;
    private IConnection m_connection;
    private IModel m_model;

    [SetUp]
    public void Setup()
    {
        m_rabbitServer = new RabbitServer();
        m_connection = new FakeConnectionFactory(m_rabbitServer).CreateConnection();
        m_model = m_connection.CreateModel();

        m_model.ExchangeDeclare(Exchange, ExchangeType.Topic);
        m_model.QueueDeclare(Queue);
        m_model.QueueBind(Queue, Exchange, "#");
    }

    [Test]
    public void Ack_DeliveryTagsAreRespected()
    {
        using var allMessagesDelivered = new ManualResetEventSlim();
        var deliveryTags = new List<ulong>();

        // setup a basic consumer that stores delivery tags to the deliveryTags list
        var consumer = new EventingBasicConsumer(m_model);
        consumer.Received += (_, args) =>
        {
            deliveryTags.Add(args.DeliveryTag);
            if (deliveryTags.Count == 2)
                allMessagesDelivered.Set();
        };
        m_model.BasicConsume(Queue, false, consumer);


        // publish two messages
        m_model.BasicPublish(Exchange, RoutingKey, true, null, Encoding.UTF8.GetBytes("first"));
        m_model.BasicPublish(Exchange, RoutingKey, true, null, Encoding.UTF8.GetBytes("second"));

        // wait for both messages to be delivered
        allMessagesDelivered.Wait();

        // ack the **second** message, but not the first one
        m_model.BasicAck(deliveryTags[1], false); 

        // asserts that only one message is still queued --> passes, as expected
        Assert.AreEqual(1, m_rabbitServer.Queues[Queue].Messages.Count); 

        // asserts that the remaining message in queue is the first one, since we acked the second --> fails
        m_rabbitServer.Queues[Queue].Messages.TryPeek(out var pendingMessage);
        Assert.AreEqual("first", Encoding.UTF8.GetString(pendingMessage.Body));
    }

    [Test]
    public void Ack_Multiple()
    {
        using var allMessagesDelivered = new ManualResetEventSlim();
        var deliveryTags = new List<ulong>();

        // setup a basic consumer that stores delivery tags to the deliveryTags list
        var consumer = new EventingBasicConsumer(m_model);
        consumer.Received += (_, args) =>
        {
            deliveryTags.Add(args.DeliveryTag);
            if (deliveryTags.Count == 2)
                allMessagesDelivered.Set();
        };
        m_model.BasicConsume(Queue, false, consumer);


        // publish two messages
        m_model.BasicPublish(Exchange, RoutingKey, true, null, Encoding.UTF8.GetBytes("first"));
        m_model.BasicPublish(Exchange, RoutingKey, true, null, Encoding.UTF8.GetBytes("second"));

        // wait for both messages to be delivered
        allMessagesDelivered.Wait();

        // ack both messages at once
        m_model.BasicAck(deliveryTags[1], true); 

        // asserts queue is empty --> fails
        Assert.AreEqual(0, m_rabbitServer.Queues[Queue].Messages.Count); 
    }
}
odalet commented Nov 10, 2023

Thanks for this issue. I'll try to have a look at it as soon as I have some time. And... by the way, if you feel up to the task, do not hesitate to propose a PR!

odalet added a commit that referenced this issue Nov 12, 2023
odalet added a commit that referenced this issue Nov 12, 2023
odalet added a commit that referenced this issue Nov 12, 2023

odalet commented Nov 12, 2023

@manuelspezzani I've just published a beta version of the package that attempts to fix your issue (and a few more, all related to non-autoack scenarios).

I managed to make your tests pass, but it wouldn't hurt if you could confirm that this new version is OK for you as well.

I'll also test it against my own codebase before publishing a final v2.8.0 version to make sure I didn't introduce regressions elsewhere.

The beta package is accessible here: https://www.nuget.org/packages/AddUp.FakeRabbitMQ/2.8.0-beta.1

manuelspezzani (Author) commented

@odalet Wow, that was quick, thanks!

The new version seems fine to me.
I only had to refactor some tests since the RabbitQueue.Messages property is gone, but that's definitely not a big deal: RabbitQueue.MessageCount and RabbitQueue.TryGet were more than enough.
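For reference, the refactored assertions from the repro tests would look roughly like this. The exact `TryGet` signature is an assumption here (a conventional `Try...`/out-parameter shape); check the actual API before relying on it:

```csharp
var queue = m_rabbitServer.Queues[Queue];

// Instead of queue.Messages.Count:
Assert.AreEqual(1, queue.MessageCount);

// Instead of queue.Messages.TryPeek(out var pendingMessage).
// Assumed out-parameter signature; also note TryGet may dequeue
// rather than peek, which is fine for a final assertion:
Assert.IsTrue(queue.TryGet(out var pendingMessage));
Assert.AreEqual("first", Encoding.UTF8.GetString(pendingMessage.Body));
```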


odalet commented Nov 13, 2023

Hi, indeed I've hidden a few implementation details but still left some utility methods for unit testing.

That said, I'm not satisfied with how things work at the moment. For example, nacking a message and asking for it to be requeued does not work, and in the current state of the code I don't know how to handle this. I think the whole thing should be rewritten with fewer shortcuts and an architecture closer to what a real RabbitMQ client and server do; that's a huge task though!

Anyway, because it's good enough for your scenario, I'll soon publish a final v2.8.0 version.

odalet closed this as completed Nov 13, 2023
Labels: none
No branches or pull requests
2 participants