For serving HTTP requests, the broker uses Echo, a high-performance, extensible, minimalist Go web framework. By default, it listens on port 8082.
You can find more information about this framework in the Echo documentation.
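As an illustration only (this is a minimal sketch, not the actual GopherMQ code), registering a broker route with Echo and serving it on port 8082 looks roughly like this:

```go
package main

import (
	"net/http"

	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	// Hypothetical handler; the real broker registers its own handlers here.
	e.POST("/api/publish", func(c echo.Context) error {
		return c.JSON(http.StatusOK, map[string]string{"status": "ok"})
	})

	// Serve on the broker's default port.
	e.Logger.Fatal(e.Start(":8082"))
}
```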
/api/publish
Request:   { "subject": "string", "data": "string" }
Responses: { "status": "ok" } or { "message": "error message" }
/api/publish/async
Request:   { "subject": "string", "data": "string" }
Response:  { "status": "ok" }
/api/subscribe
Request:   { "subject": "string" }
Responses: { "id": "string", "subject": "string" } or { "message": "error message" }
/api/fetch
Request:   { "subject": "string", "id": "string" }
Responses: { "subject": "string", "id": "string", "data": [] } or { "message": "error message" }
git clone git@github.com:ali-a-a/gophermq.git
cd ./gophermq
make run-broker
Now the server is up and running!
Or Using Docker
git clone git@github.com:ali-a-a/gophermq.git
cd ./gophermq
docker build -t gophermq .
docker run -p 8082:8082 gophermq broker
The Prometheus server listens on port 9001. Its handler is registered under the /metrics pattern.
So far, the broker exposes only two metrics: request rate and request latency.
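As a rough sketch of what such instrumentation can look like (the metric names below are illustrative, not the broker's real ones), the official Prometheus Go client can expose a request counter and a latency histogram under /metrics on port 9001 like this:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	// Request rate can be derived from a plain counter of handled requests.
	requestCount = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "requests_total", Help: "Handled requests."},
		[]string{"endpoint"},
	)
	// Request latency is typically tracked as a histogram of durations.
	requestLatency = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{Name: "request_duration_seconds", Help: "Request latency."},
		[]string{"endpoint"},
	)
)

func main() {
	prometheus.MustRegister(requestCount, requestLatency)

	// Expose the metrics handler under the /metrics pattern on port 9001.
	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9001", mux))
}
```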
I used k6 for load testing Gophermq.
Machine:
RAM: 8GB
CPU Cores: 8
CPU Type: M1 chip
To run the load test on your machine:
k6 run loadtest/loadtest.js
Messages can be published via the publish endpoint. Internally, the broker saves the new message in an in-memory queue. Note that at least one subscriber on the subject must exist before publishing a new message; otherwise, the publisher gets ErrSubscriberNotFound.
For asynchronous publishing, the broker has a publish/async endpoint. This endpoint submits the new message to the worker pool and then responds to the client without waiting for the message to be processed.
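Conceptually, a worker pool for async publishing looks roughly like the sketch below (a simplified illustration, not GopherMQ's actual implementation): messages are queued on a channel and background workers drain it, so the caller returns immediately.

```go
package main

import (
	"fmt"
	"sync"
)

// message mirrors the publish payload: a subject and its data.
type message struct {
	subject string
	data    string
}

// pool is a toy worker pool: jobs are queued on a channel and a fixed
// number of goroutines process them in the background.
type pool struct {
	jobs chan message
	wg   sync.WaitGroup
}

func newPool(workers int) *pool {
	p := &pool{jobs: make(chan message, 64)}
	p.wg.Add(workers)
	for i := 0; i < workers; i++ {
		go func() {
			defer p.wg.Done()
			for msg := range p.jobs {
				// Here the real broker would append msg to its in-memory queue.
				fmt.Println("stored", msg.subject, msg.data)
			}
		}()
	}
	return p
}

// publishAsync submits the message and returns without waiting for it to be
// processed, which is what lets the endpoint respond right away.
func (p *pool) publishAsync(msg message) {
	p.jobs <- msg
}

func main() {
	p := newPool(4)
	p.publishAsync(message{subject: "events", data: "hello"})
	close(p.jobs)
	p.wg.Wait()
}
```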
For subscribing to subjects, the server has a subscribe endpoint. On success, it returns the id of the subscriber. This id is then used by the fetch endpoint.
To fetch messages, use the fetch endpoint. Calling it consumes all pending messages for the given subscriber.
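Putting the endpoints together, and under the same assumptions as before (local broker on port 8082, JSON over POST, string message data), the usual round trip is: subscribe first, then publish, then fetch with the returned id.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// call POSTs a JSON body to the local broker and decodes the JSON response.
func call(path string, in, out any) error {
	body, _ := json.Marshal(in)
	resp, err := http.Post("http://localhost:8082"+path,
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return json.NewDecoder(resp.Body).Decode(out)
}

func main() {
	// 1. Subscribe first so the subject has at least one subscriber.
	var sub struct {
		ID      string `json:"id"`
		Subject string `json:"subject"`
	}
	if err := call("/api/subscribe", map[string]string{"subject": "events"}, &sub); err != nil {
		panic(err)
	}

	// 2. Publish; without step 1 this would fail with ErrSubscriberNotFound.
	var pub struct {
		Status string `json:"status"`
	}
	call("/api/publish", map[string]string{"subject": "events", "data": "hello"}, &pub)

	// 3. Fetch drains all pending messages for this subscriber id.
	var fetched struct {
		Data []string `json:"data"` // element type assumed to be string
	}
	call("/api/fetch", map[string]string{"subject": "events", "id": sub.ID}, &fetched)
	fmt.Println(fetched.Data)
}
```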
The broker has a MaxPending option for handling overflow. MaxPending represents the maximum number of messages that can be stored in the broker. If a new publish would cause an overflow, the server returns a broker overflow error.
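As a rough illustration of the idea (the types and field names below are assumptions, not GopherMQ's actual code), the check amounts to rejecting a publish once the number of stored messages has reached MaxPending:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrOverflow stands in for the broker's "broker overflow" error.
var ErrOverflow = errors.New("broker overflow")

// broker is a toy in-memory broker with a MaxPending cap.
type broker struct {
	maxPending int
	pending    map[string][]string // subject -> queued messages
	count      int
}

func (b *broker) publish(subject, data string) error {
	if b.count >= b.maxPending {
		return ErrOverflow
	}
	b.pending[subject] = append(b.pending[subject], data)
	b.count++
	return nil
}

func main() {
	b := &broker{maxPending: 2, pending: map[string][]string{}}
	fmt.Println(b.publish("events", "m1")) // <nil>
	fmt.Println(b.publish("events", "m2")) // <nil>
	fmt.Println(b.publish("events", "m3")) // broker overflow
}
```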
Message queues enable asynchronous communication, which means that the endpoints producing and consuming messages interact with the queue, not with shared memory. Producers can add requests to the queue without waiting for them to be processed. Consumers process messages only when they are available.