Understanding enet_host_service #248

Open
cuppajoeman opened this issue Apr 30, 2024 · 6 comments
@cuppajoeman
Contributor

Hey there, I've been trying to get the hang of ENet, mainly trying to understand enet_host_service. I've read this about it:

ENet uses a polled event model to notify the programmer of significant events. ENet hosts are polled for events with enet_host_service(), where an optional timeout value in milliseconds may be specified to control how long ENet will poll; if a timeout of 0 is specified, enet_host_service() will return immediately if there are no events to dispatch. enet_host_service() will return 1 if an event was dispatched within the specified timeout.

This is also mentioned:

Beware that most processing of the network with the ENet stack is done inside enet_host_service(). Both hosts that make up the sides of a connection must regularly call this function to ensure packets are actually sent and received. A common symptom of not actively calling enet_host_service() on both ends is that one side receives events while the other does not. The best way to schedule this activity to ensure adequate service is, for example, to call enet_host_service() with a 0 timeout (meaning non-blocking) at the beginning of every frame in a game loop.
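
If I understand that correctly, the per-frame pattern would look roughly like this (just my sketch; host, gameRunning, update() and render() are placeholders for my own code):

ENetEvent event;
while (gameRunning) {
    // timeout of 0: returns immediately when there is nothing to dispatch
    while (enet_host_service(host, &event, 0) > 0) {
        switch (event.type) {
            case ENET_EVENT_TYPE_RECEIVE:
                // ... use event.packet->data ...
                enet_packet_destroy(event.packet);
                break;
            default:
                break;
        }
    }
    update(); // game logic
    render(); // the frame rate itself limits how often we poll
}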

For context I'm working on a multiplayer game with a physics engine following a client-server architecture. The client sends out keyboard and mouse updates at a fixed rate. The server has a thread for the network and a thread for the physics loop.

Before I jump into coding the server, I wanted to study some existing code that was said to be ENet server code:

server.cpp

#include <enet/enet.h>
#include <iostream>
#include <stdexcept>
#include <cstring>

int main() {
    if (enet_initialize() != 0) {
        std::cerr << "An error occurred while initializing ENet." << std::endl;
        return EXIT_FAILURE;
    }

    atexit(enet_deinitialize);

    ENetAddress address;
    ENetHost* server;

    // Bind the server to all interfaces (ENET_HOST_ANY) on port 12345
    address.host = ENET_HOST_ANY;
    address.port = 12345;

    // Create the ENet server with a maximum of 32 clients
    server = enet_host_create(&address, 32, 2, 0, 0);
    if (server == nullptr) {
        std::cerr << "An error occurred while trying to create an ENet server." << std::endl;
        return EXIT_FAILURE;
    }

    std::cout << "Server initialized. Waiting for clients..." << std::endl;

    // Event loop
    while (true) {
        ENetEvent event;

        // Check for events with a 100ms timeout
        while (enet_host_service(server, &event, 100) > 0) {
            switch (event.type) {
                case ENetEventType::ENET_EVENT_TYPE_CONNECT:
                    std::cout << "A new client connected from "
                              << (event.peer->address.host & 0xFF) << "."
                              << ((event.peer->address.host >> 8) & 0xFF) << "."
                              << ((event.peer->address.host >> 16) & 0xFF) << "."
                              << ((event.peer->address.host >> 24) & 0xFF)
                              << ":" << event.peer->address.port << std::endl;

                    // Assign a unique user data for the peer (optional)
                    event.peer->data = new std::string("Client data");
                    break;

                case ENetEventType::ENET_EVENT_TYPE_RECEIVE:
                    std::cout << "Received a packet from client: "
                              << std::string((char*)event.packet->data, event.packet->dataLength) << std::endl;

                    // Echo the packet back to every connected peer. Once it has been
                    // queued with enet_host_broadcast, ENet takes ownership and will
                    // destroy the packet after it is sent, so we must not call
                    // enet_packet_destroy on it here.
                    enet_host_broadcast(server, 0, event.packet);
                    break;

                case ENetEventType::ENET_EVENT_TYPE_DISCONNECT:
                    std::cout << "Client disconnected: " << *(std::string*)event.peer->data << std::endl;

                    // Cleanup peer data
                    delete (std::string*)event.peer->data;
                    event.peer->data = nullptr;
                    break;

                default:
                    break;
            }
        }
    }

    // Clean up
    enet_host_destroy(server);

    return EXIT_SUCCESS;
}

Questions

  1. Should the rate at which enet_host_service is called on the server be the same rate at which it is called on the client side? What about if there are n <= 16 clients? Do I have to take this into consideration at all?
  2. From what I understand, the call to enet_host_service is encased in an outer while loop which runs constantly; this is so that if nothing happens within 100ms the program continues polling instead of exiting. Is this what most people do (have while (enet_host_service(...)) encased in some other while loop)?
  3. A) It's mentioned that you can put a call to enet_host_service with a timeout of 0 in each iteration of your game loop. Is this because a call with a non-zero timeout in each iteration would slow the game loop down, since it is a blocking call? B) If we use C++ and the thread library to separate the network loop into its own thread, is it then preferred to call enet_host_service with a non-zero timeout? C) Are there any other considerations that need to be made if we separate the enet code into its own thread?
  4. A) Looking at the source code of enet_host_service, it seems that it tries to send outgoing packets and receive incoming packets within a do-while loop (which I believe runs until the waiting time is up). If that's the case, what is the point of the timeout? With a timeout of 10ms, since the call is encased in a while (true), enet_host_service would simply be called 10 times more often, still leading to the same total time spent waiting for events as a 100ms timeout would. B) Also along the same vein, wouldn't calling enet_host_service with 0ms of wait-time (non-blocking) as fast as possible also give the same results (when it's in the while (true) loop)?
  5. A) Why does calling enet_host_service with a greater timeout (which means enet_host_service is called less often than before) cause less adequate performance [based on the remark in the docs which says 'enet_host_service should be called fairly regularly for adequate performance']? B) What is the definition of 'fairly regularly'?

PS: if I get some good answers on how this works, I'll make some pull requests to update the docs with more info about the relevant topics.

@bjorn

bjorn commented Apr 30, 2024

4. Also along the same vein, wouldn't calling enet_host_service with 0ms of wait-time (non-blocking) as fast as possible also give the same results (when it's in the while (true) loop)?

There are a lot of questions, but I'll only chime in on this one. If you call enet_host_service with a 0ms timeout, then it will return immediately if there is nothing to process. If you stick that in a while (true) loop, you'll have 100% CPU usage while doing absolutely nothing, which is not desirable.

Hence, using a timeout of 0ms is only recommended when you have something else limiting the frequency at which this function is called (for example calling it only once per frame in a game loop, as suggested by the docs).

@mman

mman commented Apr 30, 2024

All good and valid questions. I will let Lee chime in; meanwhile I will try to answer as best as I can. The way enet_host_service works is rooted in the history of BSD sockets, so let's start with how the sockets work:

In the old BSD sockets world, when you want to send data, you push it out using send and your code typically continues as if nothing happened; the kernel will make sure the data gets sent eventually. But send may also block if there is no buffer space available in the kernel, for example if you try to send a very large amount of data. If send blocks, your game freezes.

When you want to receive data you ask the kernel, and then your game blocks until there is data.

Assuming your game uses just one thread, which was common in the past, you can see that it may block and freeze at random times. Thus poll and select were invented, allowing programmers to first ask whether it is safe to send or receive without actually blocking, so you can do other useful stuff like painting the mouse moving over the screen :)

Typically select takes a timeout, waits, and reports whether there is something waiting to be received or whether there is space so that you can send. If you give it 0ms it returns immediately; if you give it 10ms it will block for at most 10ms waiting for receive or send to become ready. This timeout is basically what you give to enet_host_service.

Ideally you call enet_host_service when you know that there is data waiting in the kernel to be received. And also at times when you know that you need to send data out.

How do you know the first? You use select.

How do you know the second? Well, when you call enet_host_connect or enet_peer_send you know there is data that needs to be shuffled out to the network. Reliable data may also need to be retransmitted automatically and periodically, and enet periodically pings all connected peers so that it can disconnect them if they do not respond. None of this happens if you do not call enet_host_service. So you need to make sure that you call enet_host_service, say, at least once a second to send out pings, retransmits, etc.
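
As a rough sketch (assuming host is your ENetHost*; enet_socket_wait is enet's own thin wrapper around select/poll), the "is there data waiting?" check could look like this:

enet_uint32 condition = ENET_SOCKET_WAIT_RECEIVE;

// Wait up to 100ms for the host's socket to become readable.
if (enet_socket_wait(host->socket, &condition, 100) == 0 &&
    (condition & ENET_SOCKET_WAIT_RECEIVE)) {
    // Something arrived, so a non-blocking service call will have work to do.
    ENetEvent event;
    while (enet_host_service(host, &event, 0) > 0) {
        // handle the event (and enet_packet_destroy received packets)
    }
}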

Now to your questions:

  5. A) Why does calling enet_host_service with a greater timeout (which means enet_host_service is called less often than before) cause less adequate performance [based on the remark in the docs which says 'enet_host_service should be called fairly regularly for adequate performance']? B) What is the definition of 'fairly regularly'?

Specifying a timeout to enet_host_service basically freezes your thread and waits up to that long for something to happen. So a bigger timeout is not automatically better.

  1. Should the rate at which enet_host_service is called on the server be the same rate at which it is called on the client side? What about if there are n <= 16 clients? Do I have to take this into consideration at all?

Does not matter at all. You need to call enet_host_service when you know that there is data waiting, or when you know that you want to talk to somebody.

  2. From what I understand, the call to enet_host_service is encased in an outer while loop which runs constantly; this is so that if nothing happens within 100ms the program continues polling instead of exiting. Is this what most people do (have while (enet_host_service(...)) encased in some other while loop)?

There will always be that loop where you call outer enet_host_service and then inner enet_host_check_events.
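
Roughly, that pattern looks like this (handle() is just a placeholder for your own event handling, and timeout is whatever budget you give it):

ENetEvent event;

// The outer call actually touches the socket: it sends queued packets,
// receives, handles timeouts/retransmits and may dispatch one event.
if (enet_host_service(host, &event, timeout) > 0)
    handle(event);

// The inner calls only drain events that are already queued on the host,
// without touching the socket again.
while (enet_host_check_events(host, &event) > 0)
    handle(event);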

  3. A) It's mentioned that you can put a call to enet_host_service with a timeout of 0 in each iteration of your game loop. Is this because a call with a non-zero timeout in each iteration would slow the game loop down, since it is a blocking call? B) If we use C++ and the thread library to separate the network loop into its own thread, is it then preferred to call enet_host_service with a non-zero timeout? C) Are there any other considerations that need to be made if we separate the enet code into its own thread?

Sooner or later you will end up with multiple threads. Since enet is single threaded and does not use any locking mechanism anywhere, you will have to make sure that a single thread invokes all enet calls. Calling with a 0 timeout and just looping will consume 100% CPU, so that is not wise.

It's better to call enet_host_service with a 0 timeout, but only when you know there is data available to be received, and then also periodically (say every 100ms) to make sure you send your own outgoing data frequently enough. Depending on your game, that may need to happen at 120fps or once per second.

Ideally you spawn another thread where you can do select with a larger timeout, say half a second, and just wait; when there is data, you signal the enet thread to perform the service.
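
Sketched out very roughly (the names are made up; the watcher only reads the raw socket, and every enet host/peer call still happens on the single enet thread):

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <enet/enet.h>

std::mutex              wake_mutex;
std::condition_variable wake_cv;
std::atomic<bool>       running{true};

// Watcher thread: waits on the socket and signals, nothing more.
void socket_watcher(ENetHost* host) {
    while (running) {
        enet_uint32 condition = ENET_SOCKET_WAIT_RECEIVE;
        if (enet_socket_wait(host->socket, &condition, 500) == 0 &&
            (condition & ENET_SOCKET_WAIT_RECEIVE))
            wake_cv.notify_one(); // data arrived, wake the enet thread
    }
}

// The single enet thread: all enet calls happen here.
void enet_thread(ENetHost* host) {
    while (running) {
        {
            std::unique_lock<std::mutex> lock(wake_mutex);
            // Wake when signalled, or every 100ms anyway so pings, retransmits
            // and our own outgoing packets keep flowing even when it is quiet.
            wake_cv.wait_for(lock, std::chrono::milliseconds(100));
        }
        ENetEvent event;
        while (enet_host_service(host, &event, 0) > 0) {
            if (event.type == ENET_EVENT_TYPE_RECEIVE)
                enet_packet_destroy(event.packet); // handle, then destroy
        }
    }
}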

  4. A) Looking at the source code of enet_host_service, it seems that it tries to send outgoing packets and receive incoming packets within a do-while loop (which I believe runs until the waiting time is up). If that's the case, what is the point of the timeout? With a timeout of 10ms, since the call is encased in a while (true), enet_host_service would simply be called 10 times more often, still leading to the same total time spent waiting for events as a 100ms timeout would. B) Also along the same vein, wouldn't calling enet_host_service with 0ms of wait-time (non-blocking) as fast as possible also give the same results (when it's in the while (true) loop)?

The code tries to do its best to work in a single-threaded environment where you have several milliseconds per frame and need to receive, send, handle retransmits, ACKs, etc., and the logic wants to make sure that you have predictable deadlines.

So in a 60fps environment, where you have 16ms per frame, you give enet 5ms and it will try to make sure that you still have 10ms for other computations.

But from a performance perspective it's better to move enet off to another thread and not block the rendering thread at all, which can be quite challenging.

@cuppajoeman
Contributor Author

cuppajoeman commented Apr 30, 2024

I appreciate everyone's input on this, it's definitely helping me get it. I thought I'd ask a few more questions that relate to more concrete code examples.

Suppose we have a client which sends out packets at a rate of 20 times per second (20Hz, i.e. one packet every 0.05 seconds = 50ms).

Then we have the following loop on our server.

while (true) {
    while (enet_host_service(..., x) > 0) {
        ...
    }
}

where x is some constant representing a number of milliseconds for the timeout. Assuming x >= 50, the value of x doesn't determine the rate of this inner loop at all, right? My reasoning is that since the client is sending packets at 20Hz, and enet_host_service returns as soon as it has an event to dispatch, this server loop is also running at 20Hz regardless of the value of x?

If the above paragraph holds true, then what is the point of setting the variable x to a specific value? Is its purpose really only to define when we should time out and run some custom behavior?

int iterations_without_data_within_x_milliseconds = 0;
while (true) {
    while (enet_host_service(..., x) > 0) {
        iterations_without_data_within_x_milliseconds = 0;
        ...
    }
    iterations_without_data_within_x_milliseconds += 1;
    if (iterations_without_data_within_x_milliseconds >= 5) {
        ... do something specific because there is no network activity for a while ...
    } 
}

Also based on @bjorn's comment it seems like a loop of the form

while (true) {
    while (enet_host_service(..., x) > 0) {
    }
}

has the property that as x decreases in value, the CPU usage increases. So is it beneficial to do some experimental tests to see how fast you end up handling network events, and then set x to be no higher than your experimental value to avoid extra cycles?

@bjorn

bjorn commented May 1, 2024

so is it beneficial to do some experimental tests to see how fast you end up handling network events, and then set x to be no higher than your experimental value to avoid extra cycles?

While experimentation can't hurt, I think it just means you should keep x large enough so as not to cause undesired CPU activity in the idle case, but low enough that whatever else you need to be doing doesn't stall for too long. At the very least, you'll probably want to exit your program at some point, and for a clean exit you'll want that loop to check the exit condition from time to time, regardless of whether it is running in the main thread or a separate one.
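
Something along these lines (just a sketch; running stands for whatever flag your program flips when it wants to shut down):

#include <atomic>
#include <enet/enet.h>

std::atomic<bool> running{true}; // set to false when the program should exit

void network_loop(ENetHost* server) {
    ENetEvent event;
    while (running) {
        // 50ms is small enough that we notice `running` going false promptly,
        // yet large enough that an idle server uses close to no CPU.
        while (enet_host_service(server, &event, 50) > 0) {
            // handle the event
        }
    }
}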

@cuppajoeman
Contributor Author

cuppajoeman commented May 4, 2024

I have a question about a client-server setup that relates to enet_host_service. Suppose we have the following server-side files:

a.cpp

void handle_incoming_data() {
    ...

    // Event loop
    while (true) {
        ENetEvent event;

        // Check for events with a 100ms timeout
        while (enet_host_service(server, &event, 100) > 0) { // LINE X
            switch (event.type) {
                case ENetEventType::ENET_EVENT_TYPE_CONNECT:
                 ...

                case ENetEventType::ENET_EVENT_TYPE_RECEIVE:
                 ...

                case ENetEventType::ENET_EVENT_TYPE_DISCONNECT:
                 ...

                default:
                    break;
            }
        }
    }
...
}

b.cpp

void start_outgoing_data_loop() {
    while (true) {
        std::this_thread::sleep_until(...); // the loop runs at some fixed rate
        unsigned int binary_input_snapshot = this->input_snapshot_to_binary();
        printf("%d\n", binary_input_snapshot);
        ENetPacket *packet =
            enet_packet_create(&binary_input_snapshot, sizeof(binary_input_snapshot), ENET_PACKET_FLAG_RELIABLE);

        enet_peer_send(server_connection, 0, packet);

        ENetEvent event;
        enet_host_service(client, &event, 0); // LINE Y
    }
}

Now suppose that these two loops are run in their own separate threads (thread A runs the loop in a.cpp and thread B runs the loop in b.cpp).

The reason the loop in b.cpp is run in its own thread is so that it can be run at whatever rate we want (e.g. we can tweak the send rate of the server depending on CPU usage and not have it affect anything else).

Suppose that in thread B the line enet_peer_send(server_connection, 0, packet) is run, but before the next line is executed, LINE X from thread A runs. Does this mean that the packet will be sent by the call to enet_host_service(..., 100) from thread A rather than by enet_host_service(..., 0) in thread B? Is this a problem?

Also, with this setup the two threads may call enet_host_service "simultaneously"; would this cause any issues with enet?

In general, is this an OK approach to managing an "incoming" and an "outgoing" thread with enet? If not, can anyone share how they do this? Thanks!

@bjorn

bjorn commented May 5, 2024

Also, with this setup the two threads may call enet_host_service "simultaneously"; would this cause any issues with enet?

Yes, this causes race conditions. See #102 (comment) and the first question in the FAQ.
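
One common way to avoid them, sketched very roughly here (the names are made up): keep every enet call on a single network thread and have the other threads only hand it data to send, for example through a locked queue:

#include <cstdint>
#include <mutex>
#include <vector>
#include <enet/enet.h>

std::mutex                        outgoing_mutex;
std::vector<std::vector<uint8_t>> outgoing; // filled by the game/physics threads

void queue_payload(std::vector<uint8_t> bytes) { // safe to call from any thread
    std::lock_guard<std::mutex> lock(outgoing_mutex);
    outgoing.push_back(std::move(bytes));
}

// The only thread that ever touches enet objects.
void network_thread(ENetHost* host, ENetPeer* peer) {
    ENetEvent event;
    while (true) {
        { // hand queued payloads over to ENet
            std::lock_guard<std::mutex> lock(outgoing_mutex);
            for (auto& bytes : outgoing) {
                ENetPacket* packet = enet_packet_create(bytes.data(), bytes.size(),
                                                        ENET_PACKET_FLAG_RELIABLE);
                enet_peer_send(peer, 0, packet);
            }
            outgoing.clear();
        }
        // A single service call both flushes the sends above and receives events.
        while (enet_host_service(host, &event, 10) > 0) {
            if (event.type == ENET_EVENT_TYPE_RECEIVE)
                enet_packet_destroy(event.packet); // handle, then destroy
        }
    }
}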
