🚧 🚧 🚧 🚧 🚧 🚧 🚧 🚧 🚧 🚧 🚧 🚧 🚧
Conduit is a self-hosted backend that aims to cut down development time by providing ready-made modules offering common functionality out of the box, while allowing maximum flexibility for adding custom functionality.
While Conduit's official microservices are built in TypeScript, module intercommunication is accomplished through gRPC, allowing third-party clients and servers to be written in any programming language with a gRPC implementation.
This project intends to implement a Conduit SDK for Rust.
At the moment, it's just a work-in-progress with no official support!
The following section assumes your environment contains:
- an installation of the Rust development toolchain
- protoc, the Protocol Buffers compiler (see the quick check below)
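To quickly verify that both prerequisites are available on your PATH:

```sh
rustc --version
cargo --version
protoc --version
```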
Additionally, you should have access to a local/remote instance of Conduit.
If you already have Conduit's CLI installed on your system, you may bring up a local dev environment via:
```sh
# Option A: Basic Deployment
conduit deploy setup

# Option B: Parameterized Deployment
conduit deploy setup --config
```
For additional information on Conduit environment bootstrapping options, check out the installation section of the official project docs.
The following example demonstrates distributed service discovery synchronization.
```sh
CONDUIT_SERVER_URL="0.0.0.0:55152" cargo run --example monitor_state
```
The example code establishes a connection to Conduit and calls `Conduit::monitor_state()`, retrieving an initialized `Arc<RwLock<ServiceDiscoveryState>>`.
Every 500 ms, the local state is logged, while a background task keeps it in sync with the remote service discovery state.
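For illustration, a minimal sketch of such a monitoring loop could look like the following. Only `Conduit::monitor_state()` and the returned `Arc<RwLock<ServiceDiscoveryState>>` come from the description above; the crate path, the `Conduit::new()` constructor, and the tokio/chrono dependencies are assumptions made for the sake of the sketch.

```rust
// Minimal sketch, not the actual example source.
use std::time::Duration;

use chrono::Utc; // assumed dependency, used only for the timestamp
use conduit_module::Conduit; // hypothetical crate path for the SDK

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to Conduit Core using the address from CONDUIT_SERVER_URL.
    let server_url = std::env::var("CONDUIT_SERVER_URL")?;
    println!("Waiting for Conduit...");
    let conduit = Conduit::new(&server_url).await?; // hypothetical constructor
    println!("Conduit connection established!");

    // monitor_state() returns an initialized Arc<RwLock<ServiceDiscoveryState>>;
    // a background task keeps it in sync with Core's service discovery.
    let state = conduit.monitor_state().await?;

    loop {
        // Log the locally cached service discovery state every 500 ms.
        println!("Service Discovery State @ {}", Utc::now());
        println!("{state:#?}");
        tokio::time::sleep(Duration::from_millis(500)).await;
    }
}
```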
Output:

```
Waiting for Conduit...
Conduit connection established!
Service Discovery State @ 2024-09-03 18:45:51.225161057 UTC
RwLock {
data: ServiceDiscoveryState {
module_states: {
Database: { grpc_address: grpc://0.0.0.0:55160, is_serving: true },
Router: { grpc_address: grpc://0.0.0.0:55161, is_serving: true },
PushNotifications: { grpc_address: grpc://0.0.0.0:55166, is_serving: false },
Forms: { grpc_address: grpc://0.0.0.0:55165, is_serving: false },
Authentication: { grpc_address: grpc://0.0.0.0:55162, is_serving: true },
Storage: { grpc_address: grpc://0.0.0.0:55168, is_serving: false },
Chat: { grpc_address: grpc://0.0.0.0:55163, is_serving: true },
Email: { grpc_address: grpc://0.0.0.0:55164, is_serving: false },
},
last_updated: 2024-09-03T18:45:46.375426812Z
}
,
}
Service Discovery State @ 2024-09-03 18:45:51.727156553 UTC
RwLock {
data: ServiceDiscoveryState {
module_states: {
Authentication: { grpc_address: grpc://0.0.0.0:55162, is_serving: true },
Email: { grpc_address: grpc://0.0.0.0:55164, is_serving: false },
Storage: { grpc_address: grpc://0.0.0.0:55168, is_serving: false },
Router: { grpc_address: grpc://0.0.0.0:55161, is_serving: true },
Forms: { grpc_address: grpc://0.0.0.0:55165, is_serving: false },
Database: { grpc_address: grpc://0.0.0.0:55160, is_serving: true },
Chat: { grpc_address: grpc://0.0.0.0:55163, is_serving: true },
PushNotifications: { grpc_address: grpc://0.0.0.0:55166, is_serving: false },
},
last_updated: 2024-09-03T18:45:51.376619449Z
}
,
}
```
If you're running the above example while connecting to a containerized Conduit environment, such as the ones provided by Conduit's CLI, you might have noticed your modules' gRPC addresses looking an awful lot like hostnames (e.g. `grpc://conduit-database:55160`).
Your guess would be correct. We provide the SDK with Conduit Core's hostname and port combo as exposed on the host machine, but in reality our microservices are still communicating inside a container network, so they internally refer to each other via their corresponding hostnames.
Alternatively, your services might be running in the cloud, communicating via internal IP/port pairs that may or may not be exposed to the public internet as-is.
However, if your own code isn't executing inside that same environment, and you haven't explicitly configured system-level mappings for the module addresses advertised by Conduit's service discovery, your gRPC clients are unlikely to reach any services behind these unexposed aliases.
Hence, this SDK provides a mechanism for remapping any module addresses reported by Core with explicitly defined static overrides.
You may specify your mappings by setting the `MODULE_URL_REMAP` environment variable to a stringified JSON object. Use module names (as represented by service discovery) for the keys and the publicly exposed gRPC addresses (e.g. `0.0.0.0:55160`) for the values.
```sh
MODULE_URL_REMAP='{"database":"0.0.0.0:55160","router":"0.0.0.0:55161","authentication":"0.0.0.0:55162","chat":"0.0.0.0:55163","email":"0.0.0.0:55164","forms":"0.0.0.0:55165","pushNotifications":"0.0.0.0:55166","sms":"0.0.0.0:55167","storage":"0.0.0.0:55168","authorization":"0.0.0.0:55169"}'
```
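As a rough illustration of the idea (not the SDK's actual implementation), the override map is effectively a JSON lookup table consulted before dialing a module. The `remap_address` helper below is hypothetical and relies on the `serde_json` crate:

```rust
// Illustrative sketch only: parse a MODULE_URL_REMAP-style JSON object into a
// lookup table and apply it to an address reported by service discovery.
use std::collections::HashMap;

fn remap_address(module: &str, discovered: &str) -> String {
    let overrides: HashMap<String, String> = std::env::var("MODULE_URL_REMAP")
        .ok()
        .and_then(|raw| serde_json::from_str(&raw).ok())
        .unwrap_or_default();

    // Fall back to the address advertised by Core when no override exists.
    overrides
        .get(module)
        .cloned()
        .unwrap_or_else(|| discovered.to_string())
}

fn main() {
    // e.g. "database" advertised as grpc://conduit-database:55160 inside the
    // container network, remapped to the host-exposed 0.0.0.0:55160.
    let addr = remap_address("database", "grpc://conduit-database:55160");
    println!("database -> {addr}");
}
```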
- gRPC Client Wrappers for Official Conduit Services
- Creating Custom Module Services
- Registering Admin/Client Routes
- gRPC Health Checking Protocol
- Bus (Redis)
- Metrics (Prometheus)
- Logs (Loki)
Check out our docs here: Documentation
Wanna see what comes next: Roadmap
Help us make Conduit great: Contribute
Learn more: Website