
WorkUp: A Scalable Distributed Microservices Application

Built with Apache Cassandra, PostgreSQL, Redis, MongoDB, RabbitMQ, and Docker.

Welcome to WorkUp, a scalable distributed microservices application designed to replicate the core functionalities of Upwork. This project leverages a suite of modern technologies to ensure blazingly fast performance, reliability, and scalability.

Table of Contents

  1. Project Overview
  2. Architecture
  3. Components
  4. Testing
  5. Contributing
  6. License

Project Overview

WorkUp is a microservices-based application that allows freelancers and clients to connect, collaborate, and complete projects, similar to Upwork. The system is designed with scalability and resilience in mind, utilizing various technologies to handle high traffic and large data volumes efficiently.

Architecture

(Architecture diagram: workup_arch)

Components

Swarm on the machines

(images of the Swarm on the machines)

Microservices

  • All microservices are implemented using Java Spring Boot.
  • They consume messages from the RabbitMQ message broker and respond through RabbitMQ as well.
  • Responses are cached in Redis, so a request that is sent more than once is served directly from the cache instead of being recomputed (see the sketch after this list).
  • All microservices are stateless; authentication and authorization are handled by the web server.
  • In some cases a microservice has to communicate with another one to complete a piece of functionality, which is also done through RabbitMQ. Every microservice can be scaled up or down independently of the others to match the amount of incoming traffic.
  • Every microservice has its own database, shared by all instances of that microservice but not accessible to any other microservice.
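The sketch below illustrates this pattern, assuming Spring AMQP and Spring Data Redis with caching enabled; the queue name, message types, and service class are illustrative placeholders rather than the project's actual code.

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical worker for one microservice: it consumes requests from the
// microservice's worker queue and returns a reply that Spring AMQP sends back
// over RabbitMQ (request/reply pattern).
@Service
public class JobsWorker {

    private final JobsService jobsService;

    public JobsWorker(JobsService jobsService) {
        this.jobsService = jobsService;
    }

    // All instances of the microservice listen to the same queue; RabbitMQ
    // delivers each message to exactly one of them.
    @RabbitListener(queues = "jobs-worker-queue") // queue name is an assumption
    public JobResponse handle(GetJobRequest request) {
        return jobsService.getJob(request.jobId());
    }
}

@Service
class JobsService {

    // Responses are cached in Redis, so an identical request skips recomputation.
    @Cacheable(cacheNames = "jobs", key = "#jobId")
    public JobResponse getJob(String jobId) {
        // ... fetch from this microservice's own database ...
        return new JobResponse(jobId, "Sample Job");
    }
}

// Illustrative message types; the real project defines its own.
record GetJobRequest(String jobId) {}
record JobResponse(String id, String title) {}
```
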
Users Microservice

This microservice handles user-related operations such as creating, updating, fetching, and deleting profiles, as well as register and login. On register or login, a JWT is created and returned in the response; the client uses it to authorize future communication with the system. The database used for this microservice is MongoDB, as it scales horizontally, is highly available, and provides higher read and write performance than relational databases. MongoDB 4.0 and later supports ACID transactions to an extent that was sufficient for the users microservice's use case.
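A minimal sketch of issuing such a token, assuming the jjwt library; the claim layout, key handling, and expiry are illustrative and not the project's actual configuration.

```java
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;

import javax.crypto.SecretKey;
import java.util.Date;

// Hypothetical token issuer used after a successful register or login.
public class JwtIssuer {

    // In a real deployment the key comes from shared configuration so that the
    // web server can verify tokens issued here; generating it per instance is
    // only for illustration.
    private final SecretKey key = Keys.secretKeyFor(SignatureAlgorithm.HS256);

    public String issueToken(String userId) {
        return Jwts.builder()
                .setSubject(userId) // the user ID travels in the subject claim
                .setIssuedAt(new Date())
                .setExpiration(new Date(System.currentTimeMillis() + 24L * 60 * 60 * 1000))
                .signWith(key)
                .compact();
    }
}
```
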
Payments Microservice

This microservice handles payment-related requests. Both freelancers and clients have wallets that they can add money to or withdraw money from, and both have a history of transactions. Freelancers can issue payment requests that are then paid by clients, with the money deposited into the freelancer's wallet. When a payment is completed, the contracts service is notified so that the contract associated with the payment is marked as done. The database used for this service is PostgreSQL, as payments require many ACID transactions in addition to strict consistency, which relational databases provide.
Jobs Microservice

This microservice handles job and proposal requests. Clients can create jobs and view and accept proposals for their jobs. Freelancers can browse jobs and search for them using keywords. They can submit a proposal to any job, specifying the milestones that will deliver the job and any extra attachments. When a client accepts a proposal, the contracts service is notified so it can initiate a contract for that job. The database used for this microservice is Cassandra, as it scales horizontally with ease and is highly available and fault-tolerant thanks to its peer-to-peer decentralized architecture. A relational database is not needed for jobs: few joins are performed, and neither strong consistency nor a strict schema is required. High availability, however, is critical, since users spend most of their time browsing jobs, which puts heavy traffic on the database.
Contracts Microservice

This microservice is responsible for contract-related logic. It handles the creation and termination of contracts. Freelancers can update their progress on each milestone of a contract, while clients can add an evaluation to every milestone. Cassandra is used as the database for this microservice for the same reasons as the jobs microservice: scalability, high availability, and fault tolerance.

Message Queues

For this project, RabbitMQ is used for asynchronous communication between the different components. A worker queue is assigned to every microservice; all instances of that microservice listen to the queue, but each message is consumed by only one of them, which sends the response back if needed. In addition, a fanout exchange is created so that the controller can broadcast configuration messages that are consumed by all running instances of a given microservice.
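A minimal declaration of that topology, assuming Spring AMQP; the queue and exchange names are illustrative and not necessarily the ones used in the project.

```java
import org.springframework.amqp.core.AnonymousQueue;
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.FanoutExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MessagingTopology {

    // Shared worker queue: every instance of the microservice listens here, and
    // RabbitMQ hands each message to exactly one instance.
    @Bean
    public Queue jobsWorkerQueue() {
        return new Queue("jobs-worker-queue", true); // durable; name is an assumption
    }

    // Fanout exchange used by the controller to broadcast configuration messages.
    @Bean
    public FanoutExchange jobsConfigExchange() {
        return new FanoutExchange("jobs-config-exchange");
    }

    // Each instance binds its own broker-named, auto-delete queue to the fanout
    // exchange so that every running instance receives a copy of each broadcast.
    @Bean
    public AnonymousQueue instanceConfigQueue() {
        return new AnonymousQueue();
    }

    @Bean
    public Binding configBinding(AnonymousQueue instanceConfigQueue, FanoutExchange jobsConfigExchange) {
        return BindingBuilder.bind(instanceConfigQueue).to(jobsConfigExchange);
    }
}
```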

Web Server

The web server acts as the API gateway for the backend of the system. It exposes a REST endpoint for every command in the four microservices. The web server takes incoming HTTP requests and converts them into message objects that can be sent to and consumed by the designated microservice. Once execution finishes, the web server receives the response as a message object, converts it into an HTTP response, and sends it back to the client. The web server also handles authentication: it checks the JWT sent with the request to determine whether the user is currently logged in, then extracts the user ID from the JWT and attaches it to the message that is sent to the worker queues.
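A minimal sketch of that flow, assuming Spring Web, Spring AMQP, and the jjwt library; the endpoint, queue name, and message type are illustrative placeholders rather than the project's actual definitions.

```java
import io.jsonwebtoken.Jwts;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

import javax.crypto.SecretKey;

// Hypothetical gateway controller: HTTP in, RabbitMQ request/reply out.
@RestController
public class JobsGatewayController {

    private final RabbitTemplate rabbitTemplate;
    private final SecretKey jwtKey; // shared signing key, assumed to come from configuration

    public JobsGatewayController(RabbitTemplate rabbitTemplate, SecretKey jwtKey) {
        this.rabbitTemplate = rabbitTemplate;
        this.jwtKey = jwtKey;
    }

    @GetMapping("/api/v1/jobs/{jobId}")
    public ResponseEntity<Object> getJob(@PathVariable String jobId,
                                         @RequestHeader("Authorization") String authHeader) {
        // Validate the JWT and extract the user ID from its subject claim.
        String token = authHeader.replaceFirst("^Bearer ", "");
        String userId = Jwts.parserBuilder()
                .setSigningKey(jwtKey)
                .build()
                .parseClaimsJws(token)
                .getBody()
                .getSubject();

        // Convert the HTTP request into a message object, attach the user ID,
        // and wait for the microservice's reply over RabbitMQ.
        GetJobMessage message = new GetJobMessage(userId, jobId);
        Object reply = rabbitTemplate.convertSendAndReceive("jobs-worker-queue", message);
        return reply == null ? ResponseEntity.notFound().build() : ResponseEntity.ok(reply);
    }
}

// Illustrative message type.
record GetJobMessage(String userId, String jobId) {}
```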

API documentation

Introduction

Welcome to the Workup API documentation. This API provides endpoints for managing jobs, proposals, contracts, payments, and user authentication.

Authentication

All endpoints require authentication using a bearer token. Include the token in the Authorization header of your requests.

Authorization: Bearer <your_access_token>
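For illustration, a client built with java.net.http.HttpClient might attach the token like this; the host, port, job ID, and token below are placeholders, not values defined by the API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiClientExample {
    public static void main(String[] args) throws Exception {
        String accessToken = "<your_access_token>"; // placeholder token
        HttpClient client = HttpClient.newHttpClient();

        // Every request carries the bearer token in the Authorization header.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/v1/jobs/7ddda13b-8221-4766-983d-9068a6592eba")) // placeholder host
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```
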
Get Job

Description: Retrieves details of a specific job.
  • URL: /api/v1/jobs/{job_id}
  • Method: GET

Request Parameters

  • job_id (path parameter): The ID of the job to retrieve.
Example Request
GET /api/v1/jobs/7ddda13b-8221-4766-983d-9068a6592eba
Authorization: Bearer <your_access_token>

Response

  • 200 OK: Returns the details of the requested job.
{
  "id": "7ddda13b-8221-4766-983d-9068a6592eba",
  "title": "Sample Job",
  "description": "This is a sample job description.",
  ...
}
  • 404 Not Found: If the job with the specified ID does not exist.
Get Proposals

Description: Retrieves all proposals for a specific job.

  • URL: /api/v1/jobs/{job_id}/proposals
  • Method: GET

Request Parameters

  • job_id (path parameter): The ID of the job to retrieve proposals for.
GET /api/v1/jobs/7ddda13b-8221-4766-983d-9068a6592eba/proposals
Authorization: Bearer <your_access_token>

Response

  • 200 OK: Returns a list of proposals for the specified job.
[
  {
    "id": "73fb1269-6e05-4756-93cc-947e10dac15e",
    "job_id": "7ddda13b-8221-4766-983d-9068a6592eba",
    "cover_letter": "Lorem ipsum dolor sit amet...",
    ...
  },
  ...
]
  • 404 Not Found: If the job with the specified ID does not exist.
Get Contract

Description: Retrieves details of a specific contract.

  • URL: /api/v1/contracts/{contract_id}
  • Method: GET

Request Parameters

  • contract_id (path parameter): The ID of the contract to retrieve.
GET /api/v1/contracts/702a6e9a-343b-4b98-a86b-0565ee6d8ea5
Authorization: Bearer <your_access_token>

Response

  • 200 OK: Returns the details of the requested contract.
{
  "id": "702a6e9a-343b-4b98-a86b-0565ee6d8ea5",
  "client_id": "2d816b8f-592c-48c3-b66f-d7a1a4fd0c3a",
  ...
}
  • 404 Not Found: If the contract with the specified ID does not exist.
Create Proposal

Description: Creates a new proposal for a specific job.

  • URL: /api/v1/jobs/{job_id}/proposals
  • Method: POST

Request Parameters

  • job_id (path parameter): The ID of the job to create a proposal for.

Request Body

{
  "coverLetter": "I am interested in this job...",
  "jobDuration": "LESS_THAN_A_MONTH",
  "milestones": [
    {
      "description": "First milestone",
      "amount": 500,
      "dueDate": "2024-06-01"
    },
    ...
  ]
}

Response

  • 201 Created: Returns the newly created proposal.
{
  "id": "73fb1269-6e05-4756-93cc-947e10dac15e",
  "job_id": "7ddda13b-8221-4766-983d-9068a6592eba",
  ...
}
  • 404 Not Found: If the job with the specified ID does not exist.
Get Proposal

Description: Retrieves details of a specific proposal.

  • URL: /api/v1/proposals/{proposal_id}
  • Method: GET

Request Parameters

  • proposal_id (path parameter): The ID of the proposal to retrieve.
GET /api/v1/proposals/73fb1269-6e05-4756-93cc-947e10dac15e
Authorization: Bearer <your_access_token>

Response

  • 200 OK: Returns the details of the requested proposal.
{
  "id": "73fb1269-6e05-4756-93cc-947e10dac15e",
  "job_id": "7ddda13b-8221-4766-983d-9068a6592eba",
  ...
}
  • 404 Not Found: If the proposal with the specified ID does not exist.
Update Proposal

Description: Updates an existing proposal.

  • URL: /api/v1/proposals/{proposal_id}
  • Method: PUT

Request Parameters

  • proposal_id (path parameter): The ID of the proposal to update.

Request Body

{
  "coverLetter": "Updated cover letter...",
  "jobDuration": "ONE_TO_THREE_MONTHS",
  "milestones": [
    {
      "description": "Updated milestone",
      "amount": 600,
      "dueDate": "2024-06-15"
    },
    ...
  ]
}

Response

  • 200 OK: Returns the updated proposal.
{
  "id": "73fb1269-6e05-4756-93cc-947e10dac15e",
  "job_id": "7ddda13b-8221-4766-983d-9068a6592eba",
  ...
}
  • 404 Not Found: If the proposal with the specified ID does not exist.

Controller

The controller provides a CLI that broadcasts configuration messages to all running instances of a given microservice. These messages reconfigure the services at runtime without redeploying any instance, achieving zero-downtime updates. The messages sent by the controller are listed below, followed by a sketch of how an instance might handle one of them:

  • setMaxThreads: Sets the maximum size of the thread pool used by a microservice.
  • setMaxDbConnections: Sets the maximum number of database connections in the connection pool for a microservice. This is used only for the payments microservice, since it is the only one using PostgreSQL, which supports connection pooling.
  • freeze: Makes the microservice stop consuming new messages, finish executing the messages it has already received, and then release all its resources.
  • start: Restarts a frozen microservice.
  • deleteCommand: Deletes a command from a microservice at runtime, so that it no longer accepts requests of that type, without having to redeploy any instance.
  • updateCommand: Updates the logic performed by a command at runtime through bytecode manipulation, allowing updates with zero downtime.
  • addCommand: Re-adds a command that was previously deleted from the microservice, so that it accepts requests of that type again.
  • setLoggingLevel: Sets the level of logs that should be recorded (ERROR, INFO, etc.).
  • spawnMachine: Replicates the whole backend on a new machine (the web server, the four microservices, the message queue, etc.).
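A rough sketch of how an instance might handle such a broadcast, assuming Spring AMQP and a per-instance fanout-bound queue as sketched in the Message Queues section; the message shape, bean names, and handling shown here are illustrative, not the project's actual code.

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

import java.util.concurrent.ThreadPoolExecutor;

// Hypothetical handler for controller broadcasts received via the fanout exchange.
@Component
public class ConfigMessageHandler {

    private final ThreadPoolExecutor workerPool; // the instance's worker pool, assumed to be exposed as a bean

    public ConfigMessageHandler(ThreadPoolExecutor workerPool) {
        this.workerPool = workerPool;
    }

    // "#{instanceConfigQueue.name}" resolves the broker-generated name of this
    // instance's fanout-bound queue (see the messaging topology sketch).
    @RabbitListener(queues = "#{instanceConfigQueue.name}")
    public void handle(ConfigMessage message) {
        switch (message.type()) {
            case "setMaxThreads" -> {
                // Apply the new thread pool size without restarting the instance.
                int max = Integer.parseInt(message.value());
                workerPool.setCorePoolSize(Math.min(max, workerPool.getCorePoolSize()));
                workerPool.setMaximumPoolSize(max);
            }
            case "setLoggingLevel" ->
                // Placeholder: a real handler would adjust the logger level via the logging framework.
                System.out.println("set logging level to " + message.value());
            default ->
                System.out.println("unhandled config message: " + message.type());
        }
    }
}

// Illustrative message type.
record ConfigMessage(String type, String value) {}
```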

Media Server

A media server that serves static resources.

ENV

  • UPLOAD_USER
  • UPLOAD_PASSWORD
  • DISCOGS_API_KEY

Static Endpoints

For all static resources, this server attempts to return the relevant resource; if the resource does not exist, it returns a default 'placeholder' resource instead. This prevents clients from having nothing to display at all; clients can use the media server's 'describe' endpoint to learn which resources are available.
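The fallback idea, sketched in Java purely for illustration; the actual media server may be implemented differently, and the paths and placeholder naming here are assumptions.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical resolver: serve the requested file, or fall back to a placeholder.
public class StaticResourceResolver {

    private static final Path ROOT = Path.of("/static"); // assumed static root

    public static Path resolve(String group, String filename) {
        Path requested = ROOT.resolve(group).resolve(filename);
        if (Files.exists(requested)) {
            return requested;
        }
        // Placeholder naming is an assumption; the real server defines its own default.
        return ROOT.resolve(group).resolve("placeholder" + extensionOf(filename));
    }

    private static String extensionOf(String filename) {
        int dot = filename.lastIndexOf('.');
        return dot >= 0 ? filename.substring(dot) : "";
    }
}
```
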

Get resource

GET /static/icons/:icon.png

Returns an icon based on the filename.

:icon can be any icon filename.

Example: return an icon with filename 'accept.png'.

curl --request GET \
  --url http://path_to_server/static/icons/accept.png

GET /static/resume/:resume.pdf

Returns a resume based on the filename.

:resume can be any resume filename.

Example: return a resume with filename 'resume.pdf'.

curl --request GET \
  --url http://path_to_server/static/resume/resume.pdf

Available groups

Describe

GET /describe

Returns a JSON representation of the media groups.

Example: return JSON containing all groups present.

curl --request GET \
  --url http://localhost:8910/describe
{
  "success": true,
  "path": "/static/",
  "groups": ["icons", "resume"]
}

GET /describe/:group

Returns a JSON representation of all the files currently present for a given group.

:group can be any valid group.

Example: return JSON containing all the media resources for the exec resource group.

curl --request GET \
  --url http://localhost:8910/describe/exec
{
  "success": true,
  "path": "/static/exec/",
  "mimeType": "image/jpeg",
  "files": []
}

Upload

Upload and convert media to any of the given static resource groups.

All upload routes are protected by basic HTTP auth. The credentials are defined by ENV variables UPLOAD_USER and UPLOAD_PASSWORD.

POST /upload/:group

POST a resource to a given group, assigning that resource a given filename.

:group can be any valid group.

curl --location 'http://path-to-server/upload/resume/' \
--header 'Authorization: Basic dXNlcjpwYXNz' \
--form 'resource=@"/C:/Users/ibrah/Downloads/Ibrahim_Abou_Elenein_Resume.pdf"' \
--form 'filename="aboueleyes-resume-2"'
{
  "success": true,
  "path": "/static/resume/aboueleyes-resume-2.pdf"
}

A resource at http://path_to_server/static/resume/aboueleyes-resume-2.pdf will now be available.

Deployment

DigitalOcean

We set up a tenant on DigitalOcean where we can create a new instance (Droplet) from the Controller, as explained above, feeding it startup scripts so that it joins the Docker Swarm formed by the other machines.

We can monitor the performance of every running container on each instance using Portainer, where we can manually run more containers of the same service or configure a service to auto-scale when its load exceeds a threshold.

We tested the app in its hosted state and also performed load testing on it.

Auto-scaling

This script uses Prometheus paired with cAdvisor metrics to determine CPU usage. A manager node then decides whether a service should be autoscaled, and another manager node performs the scaling.

Currently, the project only uses CPU to autoscale. If CPU usage reaches 85%, the service scales up; if it drops to 25%, it scales down. A sketch of this decision loop follows the configuration table below.

  1. If you want a service to autoscale, it needs the deploy label swarm.autoscaler=true.
deploy:
  labels:
    - "swarm.autoscaler=true"

This is best paired with resource constraints (reservations and limits), which also go under the deploy key.

deploy:
  resources:
    reservations:
      cpus: '0.25'
      memory: 512M
    limits:
      cpus: '0.50'
Configuration

| Setting | Value | Description |
| --- | --- | --- |
| swarm.autoscaler | true | Required. Enables autoscaling for a service. Any value other than true will not enable it. |
| swarm.autoscaler.minimum | Integer | Optional. The minimum number of replicas for the service. The autoscaler will not scale down below this number. |
| swarm.autoscaler.maximum | Integer | Optional. The maximum number of replicas for the service. The autoscaler will not scale up past this number. |
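A rough sketch of the decision loop described above, assuming a Prometheus endpoint and the docker service scale CLI on a manager node; the PromQL query, service name, and addresses are placeholders, and the real autoscaler script may differ.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Hypothetical autoscaling loop: read CPU usage from Prometheus (fed by cAdvisor)
// and scale a Swarm service up or down around the 85% / 25% thresholds.
public class CpuAutoscaler {

    private static final String PROMETHEUS = "http://localhost:9090"; // placeholder address
    private static final String SERVICE = "workup_jobs";              // placeholder service name

    public static void main(String[] args) throws Exception {
        double cpuPercent = queryCpuPercent(SERVICE);
        int replicas = currentReplicas();

        if (cpuPercent >= 85 && replicas < 10) {
            scaleTo(replicas + 1); // scale up under heavy load
        } else if (cpuPercent <= 25 && replicas > 1) {
            scaleTo(replicas - 1); // scale down when mostly idle
        }
    }

    static double queryCpuPercent(String service) throws Exception {
        // Placeholder PromQL; the real query depends on the cAdvisor labels in use.
        String promql = "avg(rate(container_cpu_usage_seconds_total{container_label_com_docker_swarm_service_name=\""
                + service + "\"}[1m])) * 100";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(PROMETHEUS + "/api/v1/query?query="
                        + URLEncoder.encode(promql, StandardCharsets.UTF_8)))
                .GET()
                .build();
        String body = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
        // A real implementation would parse the JSON result here; this sketch omits it.
        System.out.println(body);
        return 0.0;
    }

    static int currentReplicas() { return 1; } // placeholder; the script tracks the real count

    static void scaleTo(int replicas) throws Exception {
        // Delegate the actual scaling to the Docker CLI on a manager node.
        new ProcessBuilder("docker", "service", "scale", SERVICE + "=" + replicas)
                .inheritIO()
                .start()
                .waitFor();
    }
}
```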

Load Balancing

We use the round-robin load balancing that Docker provides in the Swarm. It simply sends the first request to the first instance of the app (not to a machine), the next request to the next instance, and so on until all instances have received a request; then it starts again from the beginning.
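Purely for illustration, round-robin selection amounts to cycling an index over the available instances; in WorkUp this is handled by Docker Swarm's routing mesh rather than application code.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin selection over a list of instances.
public class RoundRobin<T> {

    private final List<T> instances;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobin(List<T> instances) {
        this.instances = List.copyOf(instances);
    }

    public T nextInstance() {
        // Cycle 0, 1, ..., n-1, 0, 1, ... across successive requests.
        int index = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}
```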

Testing

Functional Testing

To test the app's functionality, we wrote functional tests for each service, covering its behavior in isolation as well as its integration with the other services; for example, see the tests implemented for the Payments service.

Load Testing

JMeter

We used JMeter to load test the app, configuring it to simulate thousands of user requests, with the number growing gradually over a span of 10 seconds. Here are a few examples of endpoint performance.

Login Performance

(results chart: login command)

Create Job

(results chart: create job)

Get Jobs

(results chart: get jobs)

Get Contracts

(results chart: get contracts)

Locust


We also used Python Locust for load testing; check the test file here. Here are the results of this load test:

(Locust request statistics and charts)

License

This project is licensed under the GNU General Public License v3.0. See the LICENSE file for more details.