Releases: aleph-im/pyaleph
v0.5.1-rc3
This release candidate introduces new features related to balance checks, aggregate metadata and file uploads. It also fixes multiple minor issues.
What's Changed
- Fix: refs were not filtered properly in message websocket by @odesenfans in #458
- Internal: reenable AVAX signature unit tests by @odesenfans in #461
- Feature: Control of balance for instances by @1yam in #462
- Internal: add test for POST /messages with sync by @odesenfans in #464
- Refactor: get_total_cost_for_address + fix View by @1yam in #466
- do not use localhost by @MHHukiewitz in #468
- Allow multiple message types by @MHHukiewitz in #444
- Feature: upgrade balance endpoint by @1yam in #471
- Fix: cost_view by @1yam in #472
- Fix: return 422 on POST /messages if body is not JSON by @odesenfans in #475
- Fix: no infinite loop on tx from unauthorized emitter by @odesenfans in #480
- Fix: 400 error on indexer queries by @odesenfans in #481
- corrected multiaddress generation instructions by @gdelfino in #479
- Fix: reprocess failed instance messages in migration script by @odesenfans in #460
- Internal: store tx_hash in rejected messages table by @odesenfans in #459
- Feature: Balance Check persistent VM by @1yam in #469
- Feature: authenticated file upload by @1yam in #463
- Feature: modification and creation date in the aggregate messages by @1yam in #473
- Fix: missing parameter for broadcast_and_process by @odesenfans in #483
New Contributors
- @1yam made their first contribution in #462
- @MHHukiewitz made their first contribution in #468
- @gdelfino made their first contribution in #479
Full Changelog: v0.5.1-rc2...v0.5.1-rc3
v0.5.1-rc2
Minor fixes.
Full Changelog: v0.5.1-rc1...v0.5.1-rc2
v0.5.1-rc1
This release candidate introduces support for instances and brings multiple fixes to the message websocket implementation.
What's Changed
- Fix: do not send confirmations to message websocket by @odesenfans in #433
- Fix: use epoch format for all messages on websocket by @odesenfans in #438
- Chore: bump P2P service to 0.1.3 by @odesenfans in #440
- Internal: use a separate MQ channel for websockets by @odesenfans in #441
- Fix: reopen the API MQ channels if they are closed by @odesenfans in #442
- Feature: process instance messages by @odesenfans in #443
- Fix: remove cloud-init support and add authorized keys by @odesenfans in #449
- Fix: 500 error when listing instance messages by @odesenfans in #448
- Fix: no DB calls in message websocket by @odesenfans in #447
- Fix: prevent cancellation of the message websocket by @odesenfans in #450
- Fix: no exception log on Solana signature error by @odesenfans in #451
- Fix: 500 error when submitting instance message by @odesenfans in #452
Full Changelog: v0.5.0...v0.5.1-rc1
v0.5.0
This release introduces multiple major changes to the way nodes operate.
TL;DR
- A new database: we switch from MongoDB to PostgreSQL.
- A new implementation of the processing pipeline: the message pipeline is now split in two parts to fix race conditions and optimize the overall throughput.
- Separate API processes: the REST API is now running in a separate Docker container and spawns several worker processes for improved response times.
- Materialized aggregates: aggregates are now faster to query through the API.
- New endpoints: we make it easier to post new Aleph messages and determine if your messages were processed or rejected.
- Major dependency updates: CCNs now run on Python 3.11.
Switch to PostgreSQL
One of the main features of this release is the switch from MongoDB to PostgreSQL. This switch is motivated by the development of new features for which we feel a relational database is more appropriate.
Each type of message is now associated with one or more DB tables that store the actual objects mentioned in Aleph messages. API endpoints and internal operations can now directly access these object tables instead of having to search through messages.
Additionally, we now use a DB migration system that guarantees the consistency of the data across updates.
As we dropped MongoDB, files are now stored on the local file system in a dedicated volume.
New message pipeline
Fetcher and processor
The new message pipeline addresses two issues: determinism and observability. We now use two separate processes:
- The fetcher performs network accesses for messages that require additional downloads. It ensures that all the data required to process a message is available on the node before any further processing, and uses asyncio tasks to fetch data for multiple messages in parallel.
- The message processor checks the integrity of messages and their permissions. It processes messages atomically, guaranteeing the absence of race conditions.
This new architecture allows messages to be processed as soon as they are fetched. As most messages are immediately ready for processing, this maximizes the throughput of the message pipeline.
Errors and error codes
The error checking mechanism of the message pipeline was completely rewritten. Each error is now specified as its own exception type and is made visible to the user as an error code. By using the new GET /api/v0/messages/{item_hash} endpoint, users can now determine if and why their message was rejected by a node.
Additionally, we now use exponential retry times to reduce the total amount of retries and the CPU/network load that comes with them. Messages are now retried up to 10 times within a span of around 20 minutes.
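As a sketch of how a client might use this, the snippet below wraps the new status endpoint and decides when to stop polling. The endpoint path and status values come from these notes; the node URL, item hash, and helper names are placeholders for illustration.

```python
import json
import urllib.request

# Statuses after which a message no longer changes state, per the
# status values described in these notes.
TERMINAL_STATUSES = {"processed", "rejected", "forgotten"}

def is_terminal(status: str) -> bool:
    """Return True once the pipeline has reached a final verdict."""
    return status in TERMINAL_STATUSES

def fetch_message_status(node_url: str, item_hash: str) -> dict:
    """Query GET /api/v0/messages/{item_hash} on a node (network call)."""
    url = f"{node_url}/api/v0/messages/{item_hash}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Example (requires network access to a running node):
# info = fetch_message_status("https://official.aleph.cloud", "<item-hash>")
# print(info["status"], is_terminal(info["status"]))
```

With the exponential retry schedule described above, a client polling this endpoint can safely back off between requests: a message that is still pending may take up to ~20 minutes to reach a terminal status.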
Materialized aggregates
Aggregates are now re-calculated as soon as a new aggregate message is processed. This improves the performance when querying large aggregates.
API updates
New endpoints
- POST /api/v0/messages: allows users to post a new message and then track its progress in the processing pipeline. This endpoint supports a synchronous mode where the response is only sent once the node processes the message or a timeout occurs.
- GET /api/v0/messages/{item_hash}: allows users to track the status of individual messages. The status field indicates whether a message is processed, rejected, pending or forgotten.
- GET /api/v0/addresses/{address}/balance: returns the Aleph balance of a wallet address.
- GET /api/v0/addresses/{address}/files: returns the list of files stored by the user, along with the total number of files they store on Aleph and the total space used.
- GET /api/v1/posts.json: a new implementation of the /posts/ endpoint. It removes message-specific fields and focuses on the post content and metadata. /api/v0/posts.json is now deprecated.
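As a rough sketch of targeting the synchronous mode of POST /api/v0/messages, a client could build the request body as below. The {"message": ..., "sync": ...} body shape and field names are assumptions for illustration only; check the node's API for the exact schema.

```python
import json

def build_broadcast_body(message: dict, sync: bool = True) -> bytes:
    """Serialize a broadcast request body.

    Assumed shape: {"message": <aleph message>, "sync": <bool>}.
    With sync=True the node is expected to answer only once the message
    is processed or a timeout occurs, as described above.
    """
    return json.dumps({"message": message, "sync": sync}).encode("utf-8")

# Build a body for a hypothetical message and decode it back for inspection.
body = build_broadcast_body({"type": "POST", "item_hash": "<item-hash>"})
decoded = json.loads(body)
```

In synchronous mode, an HTTP client should set its request timeout above the node's processing timeout, otherwise the client gives up before the node answers.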
New features
- The messages websocket now allows history = 0. It was reimplemented to use a RabbitMQ queue to read new messages directly from the message pipeline.
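As an example of subscribing without history replay, one might build the websocket URL as below. The /api/ws0/messages path is an assumption based on the node's websocket API; verify it against your node before relying on it.

```python
from urllib.parse import urlencode

def messages_ws_url(node_url: str, history: int = 0) -> str:
    """Build a messages websocket URL with the given history depth.

    history=0 asks the node to skip replaying past messages and only
    push new ones, as enabled by this release.
    """
    # Swap the HTTP scheme for the matching websocket scheme.
    base = node_url.replace("https://", "wss://", 1).replace("http://", "ws://", 1)
    return f"{base}/api/ws0/messages?{urlencode({'history': history})}"

url = messages_ws_url("https://official.aleph.cloud")
```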
Breaking changes
- GET /api/v0/messages:
  - The endpoint only returns processed messages. Forgotten messages are now ignored.
  - The size, content_type and engine_info fields added by the node on STORE messages are not returned anymore. If you need this information, use the new GET /api/v0/addresses/{address}/files endpoint.
- GET /api/v0/posts: a lot of fields were dropped as they were redundant.
- GET /api/v0/addresses/stats.json: removed the address field. It was redundant with the key of the dictionary.
- Message specification:
  - The content field of aggregate messages is now required to be a dictionary.
  - The ref field of program volumes is now required to be a message hash.
  - Dropped support for the NaN float value and the \u0000 character in aggregates and posts.
  - The ref field of STORE messages can be any user-defined string or an item hash/CID. If the user specifies an item hash/CID, a valid STORE message with the same item hash must exist and belong to the same user. Otherwise, the message will be rejected by the dependency resolution system.
Upgrade guide
Prerequisites
Make sure that your node is up to date with the latest release. Specifically, you must ensure that your private key is in the format introduced in the 0.4.x releases. You can find the full upgrade guide here. You can also skip this update and convert your private key file to the right format using openssl in your keys directory:
cd ./keys
openssl pkcs8 -topk8 -inform PEM -outform DER -in node-secret.key -out node-secret.pkcs8.der -nocrypt
Stop the node
This release requires a full re-sync of your node. While you wait for your node to resynchronize, use any of our official nodes to access data: official.aleph.cloud.
The full resync is the simplest option and will work for all node operators who do not require their node to be up at the time.
The following instructions assume that you use one of our official Docker Compose files.
First, switch off your node:
docker-compose down
Now, retire your old Docker Compose file and download the new one.
mv docker-compose.yml docker-compose-old.yml
wget "https://raw.githubusercontent.com/aleph-im/pyaleph/v0.5.0/deployment/samples/docker-compose/docker-compose.yml"
The new Docker Compose file comes with a default password for PostgreSQL. Generate a new password and specify it in your docker-compose.yml and config.yml files.
Update docker-compose.yml:
services:
postgres:
env:
POSTGRES_PASSWORD: "<new-password>"
Add to config.yml:
postgres:
password: "<new-password>"
Do not forget to keep other passwords from the previous Docker Compose file, like the one you generated for RabbitMQ.
You can now restart your node:
docker-compose up -d
The sync process takes around a full day.
If you are using a custom Docker Compose file, beware that there are multiple new services:
- The API lives in its own container (pyaleph-api), using the same Docker image as pyaleph.
- The node now uses a Redis cache.
- MongoDB is replaced by PostgreSQL.
Check the official Docker Compose file to see how the services are configured.
Cleanup
Once you are confident that you will not need to roll back the release, you can delete the MongoDB volume:
docker volume rm <docker-compose-directory>_pyaleph-mongodb
Known issues
"can't start new thread" when running DB migrations
This issue occurs because your version of Docker is outdated. Upgrade your installation and restart your node.
More information: https://stackoverflow.com/questions/70087344/python-in-docker-runtimeerror-cant-start-new-thread
Full Changelog: v0.4.7...v0.5.0
v0.5.0-rc6
What's Changed
- Fix: increase shared memory size for PostgreSQL container by @odesenfans in #428
- Chore: update RabbitMQ to 3.11.15 by @odesenfans in #430
- Fix: avoid channel closed exception in message websocket by @odesenfans in #429
Full Changelog: v0.5.0-rc5...v0.5.0-rc6
v0.5.0-rc5
Fixes for the message websocket.
What's Changed
- Fix: always acknowledge MQ message in message websocket by @odesenfans in #425
- Fix: websocket issues by @odesenfans in #426
Full Changelog: v0.5.0-rc4...v0.5.0-rc5
v0.5.0-rc4
Fixes for issues found while testing v0.5.0-rc3.
- Fixed querying posts by refs and tags
- Fixed support for large number of websockets.
What's Changed
- Fix: filter amended posts by ref on /api/v0/posts.json by @odesenfans in #417
- Fix: filtering by tags by @odesenfans in #416
- Fix: increase POST /messages sync timeout by @odesenfans in #418
- Fix: restrict /pubsub/pub topic to the message topic by @odesenfans in #419
- Fix: support very large number of open websockets by @odesenfans in #422
Full Changelog: v0.5.0-rc3...v0.5.0-rc4
v0.5.0-rc3
Final release candidate for v0.5.0 (hopefully). Minor fixes and dependency updates.
Note for node operators: there is no need to install this version at the moment; we are running some tests and will release v0.5.0 shortly.
What's Changed
- Doc: update private network setup guide by @odesenfans in #357
- Fix: storage API bugs by @odesenfans in #407
- Fix: make Ethereum sync compatible with web3 6.0 by @odesenfans in #408
- Fix: ignore trusted messages for on-chain sync by @odesenfans in #410
- Chore: update mypy to 1.2.0 by @odesenfans in #411
- Chore: update to web3 6.2.0 by @odesenfans in #409
Full Changelog: v0.5.0-rc2...v0.5.0-rc3
v0.5.0-rc2
This release introduces multiple changes to the way nodes operate.
TL;DR
- A new database: we switch from MongoDB to PostgreSQL.
- A new implementation of the processing pipeline: the message pipeline is now split in two parts to fix race conditions and optimize the overall throughput.
- Separate API processes: the REST API is now running in a separate Docker container and spawns several worker processes for improved response times.
- Materialized aggregates: aggregates are now faster to query through the API.
- New endpoints: we make it easier to post new Aleph messages and determine if your messages were processed or rejected.
- Major dependency updates: CCNs now run on Python 3.11.
Switch to PostgreSQL
One of the main features of this release is the switch from MongoDB to PostgreSQL. This switch is motivated by the development of new features for which we feel a relational database is more appropriate.
Each type of message is now associated with one or more DB tables that store the actual objects mentioned in Aleph messages. API endpoints and internal operations can now directly access these object tables instead of having to search through messages.
Additionally, we now use a DB migration system that guarantees the consistency of the data across updates.
As we dropped MongoDB, files are now stored on the local file system in a dedicated volume.
New message pipeline
Fetcher and processor
The new message pipeline addresses two issues: determinism and observability. We now use two separate processes:
- The fetcher performs network accesses for messages that require additional downloads. It ensures that all the data required to process a message is available on the node before any further processing, and uses asyncio tasks to fetch data for multiple messages in parallel.
- The message processor checks the integrity of messages and their permissions. It processes messages atomically, guaranteeing the absence of race conditions.
This new architecture allows messages to be processed as soon as they are fetched. As most messages are immediately ready for processing, this maximizes the throughput of the message pipeline.
Errors and error codes
The error checking mechanism of the message pipeline was completely rewritten. Each error is now specified as its own exception type and is made visible to the user as an error code. By using the new GET /api/v0/messages/{item_hash} endpoint, users can now determine if and why their message was rejected by a node.
Additionally, we now use exponential retry times to reduce the total amount of retries and the CPU/network load that comes with them. Messages are now retried up to 10 times within a span of around 20 minutes.
Materialized aggregates
Aggregates are now re-calculated as soon as a new aggregate message is processed. This improves the performance when querying large aggregates.
API updates
New endpoints
- POST /api/v0/messages: allows users to post a new message and then track its progress in the processing pipeline. This endpoint supports a synchronous mode where the response is only sent once the node processes the message or a timeout occurs.
- GET /api/v0/messages/{item_hash}: allows users to track the status of individual messages. The status field indicates whether a message is processed, rejected, pending or forgotten.
- GET /api/v0/addresses/{address}/balance: returns the Aleph balance of a wallet address.
- GET /api/v0/addresses/{address}/files: returns the list of files stored by the user, along with the total number of files they store on Aleph and the total space used.
- GET /api/v1/posts.json: a new implementation of the /posts/ endpoint. It removes message-specific fields and focuses on the post content and metadata. /api/v0/posts.json is now deprecated.
New features
- The messages websocket now allows history = 0. It was reimplemented to use a RabbitMQ queue to read new messages directly from the message pipeline.
Breaking changes
- GET /api/v0/messages:
  - The endpoint only returns processed messages. Forgotten messages are now ignored.
  - The size, content_type and engine_info fields added by the node on STORE messages are not returned anymore. If you need this information, use the new GET /api/v0/addresses/{address}/files endpoint.
- GET /api/v0/posts: a lot of fields were dropped as they were redundant.
- GET /api/v0/addresses/stats.json: removed the address field. It was redundant with the key of the dictionary.
- Message specification:
  - The content field of aggregate messages is now required to be a dictionary.
  - The ref field of program volumes is now required to be a message hash.
  - Dropped support for the NaN float value and the \u0000 character in aggregates and posts.
Upgrade guide
Prerequisites
Make sure that your node is up to date with the latest release. Specifically, you must ensure that your private key is in the format introduced in the 0.4.x releases. You can find the full upgrade guide here. You can also skip this update and convert your private key file to the right format using openssl in your keys directory:
cd ./keys
openssl pkcs8 -topk8 -inform PEM -outform DER -in node-secret.key -out node-secret.pkcs8.der -nocrypt
Stop the node
This release requires a full re-sync of your node. While you wait for your node to resynchronize, use any of our official nodes to access data: official.aleph.cloud.
The full resync is the simplest option and will work for all node operators who do not require their node to be up at the time.
The following instructions assume that you use one of our official Docker Compose files.
First, switch off your node:
docker-compose down
Now, retire your old Docker Compose file and download the new one.
mv docker-compose.yml docker-compose-old.yml
wget "https://raw.githubusercontent.com/aleph-im/pyaleph/v0.5.0-rc2/deployment/samples/docker-compose/docker-compose.yml"
The new Docker Compose file comes with a default password for PostgreSQL. Generate a new password and specify it in your docker-compose.yml and config.yml files.
Update docker-compose.yml:
services:
postgres:
env:
POSTGRES_PASSWORD: "<new-password>"
Add to config.yml:
postgres:
host: "postgres"
password: "<new-password>"
redis:
host: "redis"
Do not forget to keep other passwords from the previous Docker Compose file, like the one you generated for RabbitMQ.
You can now restart your node:
docker-compose up -d
The sync process takes around a full day.
If you are using a custom Docker Compose file, beware that there are multiple new services:
- The API lives in its own container (pyaleph-api), using the same Docker image as pyaleph.
- The node now uses a Redis cache.
- MongoDB is replaced by PostgreSQL.
Check the official Docker Compose file to see how the services are configured.
Cleanup
Once you are confident that you will not need to roll back the release, you can delete the MongoDB volume:
docker volume rm <docker-compose-directory>_pyaleph-mongodb
Full Changelog: v0.4.7...v0.5.0-rc2
Changes from v0.5.0-rc1: v0.5.0-rc1...v0.5.0-rc2
v0.5.0-rc1
This release introduces multiple changes to the way nodes operate.
TL;DR
- A new database: we switch from MongoDB to PostgreSQL.
- A new implementation of the processing pipeline: the message pipeline is now split in two parts to fix race conditions and optimize the overall throughput.
- Materialized aggregates: aggregates are now faster to query through the API.
- New endpoints: we make it easier to post new Aleph messages and determine if your messages were processed or rejected.
- Major dependency updates: CCNs now run on Python 3.11.
Switch to PostgreSQL
One of the main features of this release is the switch from MongoDB to PostgreSQL. This switch is motivated by the development of new features for which we feel a relational database is more appropriate.
Each type of message is now associated with one or more DB tables that store the actual objects mentioned in Aleph messages. API endpoints and internal operations can now directly access these object tables instead of having to search through messages.
Additionally, we now use a DB migration system that guarantees the consistency of the data across updates.
As we dropped MongoDB, files are now stored on the local file system in a dedicated volume.
New message pipeline
Fetcher and processor
The new message pipeline addresses two issues: determinism and observability. We now use two separate processes:
- The fetcher performs network accesses for messages that require additional downloads. It ensures that all the data required to process a message is available on the node before any further processing, and uses asyncio tasks to fetch data for multiple messages in parallel.
- The message processor checks the integrity of messages and their permissions. It processes messages atomically, guaranteeing the absence of race conditions.
This new architecture allows messages to be processed as soon as they are fetched. As most messages are immediately ready for processing, this maximizes the throughput of the message pipeline.
Errors and error codes
The error checking mechanism of the message pipeline was completely rewritten. Each error is now specified as its own exception type and is made visible to the user as an error code. By using the new GET /api/v0/messages/{item_hash} endpoint, users can now determine if and why their message was rejected by a node.
Additionally, we now use exponential retry times to reduce the total amount of retries and the CPU/network load that comes with them. Messages are now retried up to 10 times within a span of around 20 minutes.
Materialized aggregates
Aggregates are now re-calculated as soon as a new aggregate message is processed. This improves the performance when querying large aggregates.
API updates
New endpoints
- POST /api/v0/messages: allows users to post a new message and then track its progress in the processing pipeline. This endpoint supports a synchronous mode where the response is only sent once the node processes the message or a timeout occurs.
- GET /api/v0/messages/{item_hash}: allows users to track the status of individual messages. The status field indicates whether a message is processed, rejected, pending or forgotten.
- GET /api/v0/addresses/{address}/balance: returns the Aleph balance of a wallet address.
- GET /api/v0/addresses/{address}/files: returns the list of files stored by the user, along with the total number of files they store on Aleph and the total space used.
- GET /api/v1/posts.json: a new implementation of the /posts/ endpoint. It removes message-specific fields and focuses on the post content and metadata. /api/v0/posts.json is now deprecated.
New features
- The messages websocket now allows history = 0. It was reimplemented to use a RabbitMQ queue to read new messages directly from the message pipeline.
Breaking changes
- GET /api/v0/messages:
  - The endpoint only returns processed messages. Forgotten messages are now ignored.
  - The size, content_type and engine_info fields added by the node on STORE messages are not returned anymore. If you need this information, use the new GET /api/v0/addresses/{address}/files endpoint.
- GET /api/v0/posts: a lot of fields were dropped as they were redundant.
- GET /api/v0/addresses/stats.json: removed the address field. It was redundant with the key of the dictionary.
- Message specification:
  - The content field of aggregate messages is now required to be a dictionary.
  - The ref field of program volumes is now required to be a message hash.
  - Dropped support for the NaN float value and the \u0000 character in aggregates and posts.
Upgrade guide
Prerequisites
Make sure that your node is up to date with the latest release. Specifically, you must ensure that your private key is in the format introduced in the 0.4.x releases. You can find the full upgrade guide here. You can also skip this update and convert your private key file to the right format using openssl in your keys directory:
cd ./keys
openssl pkcs8 -topk8 -inform PEM -outform DER -in node-secret.key -out node-secret.pkcs8.der -nocrypt
Stop the node
This release requires a full re-sync of your node. While you wait for your node to resynchronize, use any of our official nodes to access data: official.aleph.cloud.
The full resync is the simplest option and will work for all node operators who do not require their node to be up at the time.
The following instructions assume that you use one of our official Docker Compose files.
First, switch off your node:
docker-compose down
Now, retire your old Docker Compose file and download the new one.
mv docker-compose.yml docker-compose-old.yml
wget "https://raw.githubusercontent.com/aleph-im/pyaleph/v0.5.0-rc1/deployment/samples/docker-compose/docker-compose.yml"
The new Docker Compose file comes with a default password for PostgreSQL. Generate a new password and specify it in your docker-compose.yml and config.yml files.
Update docker-compose.yml:
services:
postgres:
env:
POSTGRES_PASSWORD: "<new-password>"
Add to config.yml:
postgres:
host: "postgres"
password: "<new-password>"
Do not forget to keep other passwords from the previous Docker Compose file, like the one you generated for RabbitMQ.
You can now restart your node:
docker-compose up -d
The sync process takes around a full day.
Cleanup
Once you are confident that you will not need to roll back the release, you can delete the MongoDB volume:
docker volume rm <docker-compose-directory>_pyaleph-mongodb
Full Changelog: v0.4.7...v0.5.0-rc1