Releases: aleph-im/pyaleph

v0.5.1-rc3

09 Oct 12:31
24be12d
Pre-release

This release candidate introduces new features related to balance checks, aggregate metadata and file uploads. It also fixes multiple minor issues.

What's Changed

  • Fix: refs were not filtered properly in message websocket by @odesenfans in #458
  • Internal: reenable AVAX signature unit tests by @odesenfans in #461
  • Feature: Control of balance for instances by @1yam in #462
  • Internal: add test for POST /messages with sync by @odesenfans in #464
  • Refactor: get_total_cost_for_address + fix View by @1yam in #466
  • do not use localhost by @MHHukiewitz in #468
  • Allow multiple message types by @MHHukiewitz in #444
  • Feature: upgrade balance endpoint by @1yam in #471
  • Fix: cost_view by @1yam in #472
  • Fix: return 422 on POST /messages if body is not JSON by @odesenfans in #475
  • Fix: no infinite loop on tx from unauthorized emitter by @odesenfans in #480
  • Fix: 400 error on indexer queries by @odesenfans in #481
  • corrected multiaddress generation instructions by @gdelfino in #479
  • Fix: reprocess failed instance messages in migration script by @odesenfans in #460
  • Internal: store tx_hash in rejected messages table by @odesenfans in #459
  • Feature: Balance Check persistent VM by @1yam in #469
  • Feature: authenticated file upload by @1yam in #463
  • Feature: modification and creation date in the aggregate messages by @1yam in #473
  • Fix: missing parameter for broadcast_and_process by @odesenfans in #483
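The upgraded balance endpoint mentioned above (#471) can be exercised roughly as follows. This is a sketch: the host is just an example node, and the response shape shown is an assumption for illustration, not the exact schema.

```python
# Sketch of querying GET /api/v0/addresses/{address}/balance.
# The "balance" response field is assumed for illustration.
import json

API_HOST = "https://api2.aleph.im"  # any CCN; hostname is an example

def balance_url(address: str) -> str:
    """Build the balance endpoint URL for a wallet address."""
    return f"{API_HOST}/api/v0/addresses/{address}/balance"

def parse_balance(payload: str) -> float:
    """Extract the balance field from a (hypothetical) JSON response."""
    return float(json.loads(payload)["balance"])

# Example payload a node might return:
sample = '{"address": "0xabc", "balance": 1250.5}'
print(balance_url("0xabc"))
print(parse_balance(sample))
```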

Full Changelog: v0.5.1-rc2...v0.5.1-rc3

v0.5.1-rc2

19 Jul 08:40
93485b8
Pre-release

Minor fixes.

Full Changelog: v0.5.1-rc1...v0.5.1-rc2

v0.5.1-rc1

04 Jul 15:40
7749d8f
Pre-release

This release candidate introduces support for instances and brings multiple fixes to the message websocket implementation.

Full Changelog: v0.5.0...v0.5.1-rc1

v0.5.0

04 May 06:39
2adae46

This release introduces multiple major changes to the way nodes operate.

TL;DR

  • A new database: we switch from MongoDB to PostgreSQL.
  • A new implementation of the processing pipeline: the message pipeline is now split in two parts to fix race conditions and optimize the overall throughput.
  • Separate API processes: the REST API is now running in a separate Docker container and spawns several worker processes for improved response times.
  • Materialized aggregates: aggregates are now faster to query through the API.
  • New endpoints: we make it easier to post new Aleph messages and determine if your messages were processed or rejected.
  • Major dependency updates: CCNs now run on Python 3.11.

Switch to PostgreSQL

One of the main features of this release is the switch from MongoDB to PostgreSQL. This switch is motivated by the development of new features for which we feel a relational database is more appropriate.

Each type of message is now associated with one or more DB tables that store the actual objects mentioned in Aleph messages. API endpoints and internal operations can now directly access these object tables instead of having to search through messages.

Additionally, we now use a DB migration system that guarantees the consistency of the data across updates.

As we dropped MongoDB, files are now stored on the local file system in a dedicated volume.

New message pipeline

Fetcher and processor

The new message pipeline addresses two issues: determinism and observability. We now use two separate processes:

  • the fetcher performs network accesses for messages that require additional downloads. It ensures that all the data required to process a message is available on the node before any further processing. It uses asyncio tasks to fetch data for multiple messages in parallel.
  • the message processor checks the integrity of messages and permissions. It processes messages atomically, guaranteeing the absence of race conditions.

This new architecture allows messages to be processed as soon as they are fetched. As most messages are immediately ready for processing, this maximizes the throughput of the message pipeline.
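The fetcher/processor split described above can be sketched with a small asyncio pipeline: a fetcher downloading message data in parallel and handing each message to a processor that handles one message at a time. All names and the in-memory queue are illustrative, not pyaleph's actual internals.

```python
# Illustrative two-stage pipeline: concurrent fetch, serialized processing.
import asyncio

async def fetch(msg: dict) -> dict:
    # Stand-in for network access (e.g. downloading referenced content).
    await asyncio.sleep(0)
    return {**msg, "fetched": True}

async def fetcher(pending: list[dict], queue: asyncio.Queue) -> None:
    # Fetch all messages in parallel; enqueue each as soon as it is ready.
    tasks = [asyncio.create_task(fetch(m)) for m in pending]
    for task in asyncio.as_completed(tasks):
        await queue.put(await task)
    await queue.put(None)  # sentinel: no more messages

async def processor(queue: asyncio.Queue, processed: list[dict]) -> None:
    # Process messages one by one, atomically, in arrival order.
    while (msg := await queue.get()) is not None:
        processed.append(msg)

async def main() -> list[dict]:
    queue: asyncio.Queue = asyncio.Queue()
    processed: list[dict] = []
    pending = [{"item_hash": f"h{i}"} for i in range(3)]
    await asyncio.gather(fetcher(pending, queue), processor(queue, processed))
    return processed

results = asyncio.run(main())
print(len(results))  # 3
```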

Errors and error codes

The error checking mechanism of the message pipeline was completely rewritten. Each error is now specified as its own exception type and is made visible to the user as an error code. By using the new GET /api/v0/messages/{item_hash} endpoint, users can now determine if and why their message was rejected by a node.

Additionally, we now use exponential retry times to reduce the total amount of retries and the CPU/network load that comes with them. Messages are now retried up to 10 times within a span of around 20 minutes.
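One plausible schedule matching the description above is a doubling delay over 10 attempts. The base delay here is an assumption; pyaleph's exact parameters may differ.

```python
# Exponential retry schedule: 10 attempts, delay doubling each time.
# BASE_DELAY_SECONDS is an assumed parameter, chosen so the total span
# lands near the ~20 minutes mentioned in the release notes.
BASE_DELAY_SECONDS = 1.0
MAX_RETRIES = 10

delays = [BASE_DELAY_SECONDS * 2**i for i in range(MAX_RETRIES)]
total_minutes = sum(delays) / 60
print(delays[:3])  # [1.0, 2.0, 4.0]
print(total_minutes)
```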

Materialized aggregates

Aggregates are now re-calculated as soon as a new aggregate message is processed. This improves the performance when querying large aggregates.

API updates

New endpoints

  • POST /api/v0/messages: allows users to post a new message and then track the progress of the message in the processing pipeline. This endpoint supports a synchronous mode where the response is only sent once the node processes the message or a timeout occurs.
  • GET /api/v0/messages/{item_hash}: allows users to track the status of individual messages. The status field indicates whether a message is processed, rejected, pending or forgotten.
  • GET /api/v0/addresses/{address}/balance: returns the balance in Aleph of a wallet address.
  • GET /api/v0/addresses/{address}/files: returns the list of files stored by the user, along with the total number of files they store on Aleph and the total space used.
  • GET /api/v1/posts.json: a new implementation of the /posts/ endpoint that removes message-specific fields and focuses on the post content and metadata. /api/v0/posts.json is now deprecated.
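The synchronous publishing mode of POST /api/v0/messages can be sketched as below. The exact body schema (a `message` object plus a `sync` flag) is an assumption here; check the node's API documentation for the authoritative format.

```python
# Sketch of a (hypothetical) publish request body with synchronous processing.
import json

def build_publish_request(message: dict, sync: bool = True) -> str:
    """Serialize a publish request; with sync=True the node only responds
    once the message is processed or a timeout occurs."""
    return json.dumps({"message": message, "sync": sync})

body = build_publish_request({"type": "POST", "chain": "ETH", "sender": "0xabc"})
print(body)
```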

New features

  • The messages websocket now allows history = 0. It was reimplemented to use a RabbitMQ queue to read new messages directly from the message pipeline.
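Subscribing with history = 0 means receiving live messages only. The `/api/ws0/messages` path is taken from pyaleph's API; treat the exact parameter handling as an assumption and verify it against your node.

```python
# Sketch: building a messages-websocket URL with history=0 (live only).
from urllib.parse import urlencode

def messages_ws_url(host: str, history: int = 0, **filters: str) -> str:
    """Build the websocket subscription URL; extra keyword arguments are
    passed through as (assumed) filter parameters."""
    params = {"history": history, **filters}
    return f"wss://{host}/api/ws0/messages?" + urlencode(params)

print(messages_ws_url("api2.aleph.im", history=0, msgType="POST"))
```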

Breaking changes

  • GET /api/v0/messages:
    • The endpoint only returns processed messages. Forgotten messages are now ignored.
    • The size, content_type and engine_info fields added by the node on STORE messages are not returned anymore. If you need this information, use the new GET /api/v0/addresses/{address}/files endpoint.
  • GET /api/v0/posts: several redundant fields were dropped.
  • GET /api/v0/addresses/stats.json: removed the address field. It was redundant with the key of the dictionary.
  • Message specification:
    • The content field of aggregate messages is now required to be a dictionary.
    • The ref field of program volumes is now required to be a message hash.
    • Dropped support for the NaN float value and the \u0000 character in aggregates and posts.
    • The ref field of STORE messages can be any user-defined string or an item hash/CID. If the user specifies an item hash/CID, a valid STORE message with the same item hash must exist and belong to the same user. Otherwise the message will be rejected by the dependency resolution system.
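The NaN restriction above matches strict JSON: NaN is not valid JSON, and Python's json module only emits it as a non-standard extension. With `allow_nan=False`, serialization fails much as a node would reject the message.

```python
# NaN is not representable in strict JSON; allow_nan=False enforces this.
import json

try:
    json.dumps({"value": float("nan")}, allow_nan=False)
    rejected = False
except ValueError:
    rejected = True
print(rejected)  # True
```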

Upgrade guide

Prerequisites

Make sure that your node is up-to-date with the latest release. Specifically, you must ensure that your private key is in the format introduced in the 0.4.x releases. You can find the full upgrade guide here. You can also skip this update and convert your private key file to the right format using openssl in your keys directory:

cd ./keys
openssl pkcs8 -topk8 -inform PEM -outform DER -in node-secret.key -out node-secret.pkcs8.der -nocrypt

Stop the node

This release requires a full re-sync of your node. While you wait for your node to resynchronize, use any of our official nodes to access data: official.aleph.cloud.

The full resync is the simplest option and will work for all node operators who do not require their node to be up at the time.

The following instructions assume that you use one of our official Docker Compose files.
First, switch off your node:

docker-compose down

Now, retire your old Docker Compose file and download the new one.

mv docker-compose.yml docker-compose-old.yml
wget "https://raw.githubusercontent.com/aleph-im/pyaleph/v0.5.0/deployment/samples/docker-compose/docker-compose.yml"

The new Docker Compose file comes with a default password for PostgreSQL. Generate a new password and specify it in your docker-compose.yml and config.yml files:

Update docker-compose.yml:

services:
  postgres:
    environment:
      POSTGRES_PASSWORD: "<new-password>"

Add to config.yml:

postgres:
  password: "<new-password>"

Do not forget to keep other passwords from the previous Docker Compose file, like the one you generated for RabbitMQ.

You can now restart your node:

docker-compose up -d

The sync process takes around a full day.

If you are using a custom Docker Compose file, beware that there are multiple new services:

  • The API lives in its own container (pyaleph-api), using the same Docker image as pyaleph.
  • The node now uses a Redis cache.
  • MongoDB is replaced by PostgreSQL.

Check the official Docker Compose file to see how the services are configured.

Cleanup

Once you are confident that you will not need to roll back the release, you can delete the MongoDB volume:

docker volume rm <docker-compose-directory>_pyaleph-mongodb

Known issues

can't start new thread when running DB migrations

This issue occurs because your version of Docker is outdated. Upgrade your install and restart your node.
More information: https://stackoverflow.com/questions/70087344/python-in-docker-runtimeerror-cant-start-new-thread

Full Changelog: v0.4.7...v0.5.0

v0.5.0-rc6

03 May 15:33
92fa6b5
Pre-release

Full Changelog: v0.5.0-rc5...v0.5.0-rc6

v0.5.0-rc5

01 May 22:04
e35c6df
Pre-release

Fixes for the message websocket.

Full Changelog: v0.5.0-rc4...v0.5.0-rc5

v0.5.0-rc4

28 Apr 10:37
e398ff9
Pre-release

Fixes for issues found while testing v0.5.0-rc3.

  • Fixed querying posts by refs and tags.
  • Fixed support for a large number of websocket connections.

Full Changelog: v0.5.0-rc3...v0.5.0-rc4

v0.5.0-rc3

26 Apr 14:17
3b2d11f
Compare
Choose a tag to compare
v0.5.0-rc3 Pre-release
Pre-release

Final release candidate for v0.5.0 (hopefully). Minor fixes and dependency updates.

Note for node operators: there is no need to install this version at the moment; we are running some tests and will release v0.5.0 shortly.

Full Changelog: v0.5.0-rc2...v0.5.0-rc3

v0.5.0-rc2

13 Apr 16:50
Pre-release

This release introduces multiple changes to the way nodes operate.

TL;DR

  • A new database: we switch from MongoDB to PostgreSQL.
  • A new implementation of the processing pipeline: the message pipeline is now split in two parts to fix race conditions and optimize the overall throughput.
  • Separate API processes: the REST API is now running in a separate Docker container and spawns several worker processes for improved response times.
  • Materialized aggregates: aggregates are now faster to query through the API.
  • New endpoints: we make it easier to post new Aleph messages and determine if your messages were processed or rejected.
  • Major dependency updates: CCNs now run on Python 3.11.

Switch to PostgreSQL

One of the main features of this release is the switch from MongoDB to PostgreSQL. This switch is motivated by the development of new features for which we feel a relational database is more appropriate.

Each type of message is now associated with one or more DB tables that store the actual objects mentioned in Aleph messages. API endpoints and internal operations can now directly access these object tables instead of having to search through messages.

Additionally, we now use a DB migration system that guarantees the consistency of the data across updates.

As we dropped MongoDB, files are now stored on the local file system in a dedicated volume.

New message pipeline

Fetcher and processor

The new message pipeline addresses two issues: determinism and observability. We now use two separate processes:

  • the fetcher performs network accesses for messages that require additional downloads. It ensures that all the data required to process a message is available on the node before any further processing. It uses asyncio tasks to fetch data for multiple messages in parallel.
  • the message processor checks the integrity of messages and permissions. It processes messages atomically, guaranteeing the absence of race conditions.

This new architecture allows messages to be processed as soon as they are fetched. As most messages are immediately ready for processing, this maximizes the throughput of the message pipeline.

Errors and error codes

The error checking mechanism of the message pipeline was completely rewritten. Each error is now specified as its own exception type and is made visible to the user as an error code. By using the new GET /api/v0/messages/{item_hash} endpoint, users can now determine if and why their message was rejected by a node.

Additionally, we now use exponential retry times to reduce the total amount of retries and the CPU/network load that comes with them. Messages are now retried up to 10 times within a span of around 20 minutes.

Materialized aggregates

Aggregates are now re-calculated as soon as a new aggregate message is processed. This improves the performance when querying large aggregates.

API updates

New endpoints

  • POST /api/v0/messages: allows users to post a new message and then track the progress of the message in the processing pipeline. This endpoint supports a synchronous mode where the response is only sent once the node processes the message or a timeout occurs.
  • GET /api/v0/messages/{item_hash}: allows users to track the status of individual messages. The status field indicates whether a message is processed, rejected, pending or forgotten.
  • GET /api/v0/addresses/{address}/balance: returns the balance in Aleph of a wallet address.
  • GET /api/v0/addresses/{address}/files: returns the list of files stored by the user, along with the total number of files they store on Aleph and the total space used.
  • GET /api/v1/posts.json: a new implementation of the /posts/ endpoint that removes message-specific fields and focuses on the post content and metadata. /api/v0/posts.json is now deprecated.

New features

  • The messages websocket now allows history = 0. It was reimplemented to use a RabbitMQ queue to read new messages directly from the message pipeline.

Breaking changes

  • GET /api/v0/messages:
    • The endpoint only returns processed messages. Forgotten messages are now ignored.
    • The size, content_type and engine_info fields added by the node on STORE messages are not returned anymore. If you need this information, use the new GET /api/v0/addresses/{address}/files endpoint.
  • GET /api/v0/posts: several redundant fields were dropped.
  • GET /api/v0/addresses/stats.json: removed the address field. It was redundant with the key of the dictionary.
  • Message specification:
    • The content field of aggregate messages is now required to be a dictionary.
    • The ref field of program volumes is now required to be a message hash.
    • Dropped support for the NaN float value and the \u0000 character in aggregates and posts.

Upgrade guide

Prerequisites

Make sure that your node is up-to-date with the latest release. Specifically, you must ensure that your private key is in the format introduced in the 0.4.x releases. You can find the full upgrade guide here. You can also skip this update and convert your private key file to the right format using openssl in your keys directory:

cd ./keys
openssl pkcs8 -topk8 -inform PEM -outform DER -in node-secret.key -out node-secret.pkcs8.der -nocrypt

Stop the node

This release requires a full re-sync of your node. While you wait for your node to resynchronize, use any of our official nodes to access data: official.aleph.cloud.

The full resync is the simplest option and will work for all node operators who do not require their node to be up at the time.

The following instructions assume that you use one of our official Docker Compose files.
First, switch off your node:

docker-compose down

Now, retire your old Docker Compose file and download the new one.

mv docker-compose.yml docker-compose-old.yml
wget "https://raw.githubusercontent.com/aleph-im/pyaleph/v0.5.0-rc2/deployment/samples/docker-compose/docker-compose.yml"

The new Docker Compose file comes with a default password for PostgreSQL. Generate a new password and specify it in your docker-compose.yml and config.yml files:

Update docker-compose.yml:

services:
  postgres:
    environment:
      POSTGRES_PASSWORD: "<new-password>"

Add to config.yml:

postgres:
  host: "postgres"
  password: "<new-password>"

redis:
  host: "redis"

Do not forget to keep other passwords from the previous Docker Compose file, like the one you generated for RabbitMQ.

You can now restart your node:

docker-compose up -d

The sync process takes around a full day.

If you are using a custom Docker Compose file, beware that there are multiple new services:

  • The API lives in its own container (pyaleph-api), using the same Docker image as pyaleph.
  • The node now uses a Redis cache.
  • MongoDB is replaced by PostgreSQL.

Check the official Docker Compose file to see how the services are configured.

Cleanup

Once you are confident that you will not need to roll back the release, you can delete the MongoDB volume:

docker volume rm <docker-compose-directory>_pyaleph-mongodb

Full Changelog: v0.4.7...v0.5.0-rc2
Changes from v0.5.0-rc1: v0.5.0-rc1...v0.5.0-rc2

v0.5.0-rc1

28 Mar 13:16
Pre-release

This release introduces multiple changes to the way nodes operate.

TL;DR

  • A new database: we switch from MongoDB to PostgreSQL.
  • A new implementation of the processing pipeline: the message pipeline is now split in two parts to fix race conditions and optimize the overall throughput.
  • Materialized aggregates: aggregates are now faster to query through the API.
  • New endpoints: we make it easier to post new Aleph messages and determine if your messages were processed or rejected.
  • Major dependency updates: CCNs now run on Python 3.11.

Switch to PostgreSQL

One of the main features of this release is the switch from MongoDB to PostgreSQL. This switch is motivated by the development of new features for which we feel a relational database is more appropriate.

Each type of message is now associated with one or more DB tables that store the actual objects mentioned in Aleph messages. API endpoints and internal operations can now directly access these object tables instead of having to search through messages.

Additionally, we now use a DB migration system that guarantees the consistency of the data across updates.

As we dropped MongoDB, files are now stored on the local file system in a dedicated volume.

New message pipeline

Fetcher and processor

The new message pipeline addresses two issues: determinism and observability. We now use two separate processes:

  • the fetcher performs network accesses for messages that require additional downloads. It ensures that all the data required to process a message is available on the node before any further processing. It uses asyncio tasks to fetch data for multiple messages in parallel.
  • the message processor checks the integrity of messages and permissions. It processes messages atomically, guaranteeing the absence of race conditions.

This new architecture allows messages to be processed as soon as they are fetched. As most messages are immediately ready for processing, this maximizes the throughput of the message pipeline.

Errors and error codes

The error checking mechanism of the message pipeline was completely rewritten. Each error is now specified as its own exception type and is made visible to the user as an error code. By using the new GET /api/v0/messages/{item_hash} endpoint, users can now determine if and why their message was rejected by a node.

Additionally, we now use exponential retry times to reduce the total amount of retries and the CPU/network load that comes with them. Messages are now retried up to 10 times within a span of around 20 minutes.

Materialized aggregates

Aggregates are now re-calculated as soon as a new aggregate message is processed. This improves the performance when querying large aggregates.

API updates

New endpoints

  • POST /api/v0/messages: allows users to post a new message and then track the progress of the message in the processing pipeline. This endpoint supports a synchronous mode where the response is only sent once the node processes the message or a timeout occurs.
  • GET /api/v0/messages/{item_hash}: allows users to track the status of individual messages. The status field indicates whether a message is processed, rejected, pending or forgotten.
  • GET /api/v0/addresses/{address}/balance: returns the balance in Aleph of a wallet address.
  • GET /api/v0/addresses/{address}/files: returns the list of files stored by the user, along with the total number of files they store on Aleph and the total space used.
  • GET /api/v1/posts.json: a new implementation of the /posts/ endpoint that removes message-specific fields and focuses on the post content and metadata. /api/v0/posts.json is now deprecated.

New features

  • The messages websocket now allows history = 0. It was reimplemented to use a RabbitMQ queue to read new messages directly from the message pipeline.

Breaking changes

  • GET /api/v0/messages:
    • The endpoint only returns processed messages. Forgotten messages are now ignored.
    • The size, content_type and engine_info fields added by the node on STORE messages are not returned anymore. If you need this information, use the new GET /api/v0/addresses/{address}/files endpoint.
  • GET /api/v0/posts: several redundant fields were dropped.
  • GET /api/v0/addresses/stats.json: removed the address field. It was redundant with the key of the dictionary.
  • Message specification:
    • The content field of aggregate messages is now required to be a dictionary.
    • The ref field of program volumes is now required to be a message hash.
    • Dropped support for the NaN float value and the \u0000 character in aggregates and posts.

Upgrade guide

Prerequisites

Make sure that your node is up-to-date with the latest release. Specifically, you must ensure that your private key is in the format introduced in the 0.4.x releases. You can find the full upgrade guide here. You can also skip this update and convert your private key file to the right format using openssl in your keys directory:

cd ./keys
openssl pkcs8 -topk8 -inform PEM -outform DER -in node-secret.key -out node-secret.pkcs8.der -nocrypt

Stop the node

This release requires a full re-sync of your node. While you wait for your node to resynchronize, use any of our official nodes to access data: official.aleph.cloud.

The full resync is the simplest option and will work for all node operators who do not require their node to be up at the time.

The following instructions assume that you use one of our official Docker Compose files.
First, switch off your node:

docker-compose down

Now, retire your old Docker Compose file and download the new one.

mv docker-compose.yml docker-compose-old.yml
wget "https://raw.githubusercontent.com/aleph-im/pyaleph/v0.5.0-rc1/deployment/samples/docker-compose/docker-compose.yml"

The new Docker Compose file comes with a default password for PostgreSQL. Generate a new password and specify it in your docker-compose.yml and config.yml files:

Update docker-compose.yml:

services:
  postgres:
    environment:
      POSTGRES_PASSWORD: "<new-password>"

Add to config.yml:

postgres:
  host: "postgres"
  password: "<new-password>"

Do not forget to keep other passwords from the previous Docker Compose file, like the one you generated for RabbitMQ.

You can now restart your node:

docker-compose up -d

The sync process takes around a full day.

Cleanup

Once you are confident that you will not need to roll back the release, you can delete the MongoDB volume:

docker volume rm <docker-compose-directory>_pyaleph-mongodb

Full Changelog: v0.4.7...v0.5.0-rc1