
Asynchronous tx confirmations #1278

Merged: 32 commits from adi/nowait_localized into master on Aug 23, 2021

Conversation

@adizere (Member) commented Aug 12, 2021

Closes #1124
Closes #1265

Description


For contributor use:

  • Added a changelog entry, using unclog.
  • If applicable: Unit tests written, added test to CI.
  • Linked to GitHub issue with discussion and accepted design OR link to spec that describes this work.
  • Updated relevant documentation (docs/) and code comments.
  • Re-reviewed Files changed in the GitHub PR explorer.

@adizere adizere changed the title from Adi/nowait localized to Hermes asynchronous TX confirmations Aug 12, 2021
@romac romac changed the title from Hermes asynchronous TX confirmations to Asynchronous tx confirmations Aug 12, 2021
@adizere adizere mentioned this pull request Aug 16, 2021
@ancazamfir (Collaborator) left a comment

Wow, a lot of changes here. I am still not sure this is the best approach. The original issue is only fixed for packet workers; there might still be other workers, using the same runtimes, that can block the packet workers with single transactions.
The code is overly complex, touching a lot of files, and I'm not sure that is necessary. My thought for an initial solution, which I shared with @adizere, was to move the loop that confirms tx hashes from cosmos.rs into chain/handle/prod.rs. All workers would make use of it.
It is true that a worker would need to wait for a set of messages to be included in a block before being able to submit new messages, but I don't think this would be an issue.
I am also concerned about ordered channels, where we may end up with packets not being sent in sequence; the simpler solution would avoid that.
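A rough sketch of what that simpler approach could look like, under the assumption of a hypothetical ChainHandle trait (none of these names are the actual hermes API):

use std::{thread, time::Duration};

struct TxHash(String);

// Hypothetical sketch: the chain handle itself polls for tx inclusion,
// so every worker gets confirmations "for free". All names here are
// illustrative, not the actual hermes API.
trait ChainHandle {
    fn broadcast(&self, msgs: Vec<Vec<u8>>) -> Vec<TxHash>;
    fn is_committed(&self, hash: &TxHash) -> bool;

    // Blocking send-and-confirm, shared by all workers.
    fn send_msgs_and_confirm(&self, msgs: Vec<Vec<u8>>) -> Vec<TxHash> {
        let hashes = self.broadcast(msgs);
        // The calling worker blocks here until every tx is in a block,
        // which the argument above considers acceptable.
        while !hashes.iter().all(|h| self.is_committed(h)) {
            thread::sleep(Duration::from_millis(500));
        }
        hashes
    }
}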

Then, regardless of the solution, we need a setup to reproduce #1265: document the testing steps in the PR, and then show that this branch fixes the issue.

And one last thought is about debuggability: let's make sure we can make sense of the logs.

-fn filter_events(&self, events: &[IbcEvent]) -> Vec<IbcEvent> {
-    let mut result = vec![];
+fn filter_relaying_events(&self, events: Vec<IbcEvent>) -> Vec<IbcEvent> {
Collaborator
Curious why the name change here?

Contributor
Not sure about that. I can revert if necessary.

Review threads on relayer/src/link/unconfirmed.rs and relayer/src/chain.rs (outdated, resolved).
@romac romac mentioned this pull request Aug 18, 2021
@soareschen (Contributor) commented:

> The code is overly complex, touching a lot of files, and I'm not sure that is necessary. My thought for an initial solution, which I shared with @adizere, was to move the loop that confirms tx hashes from cosmos.rs into chain/handle/prod.rs. All workers would make use of it.

I agree that there is a lot of unnecessary complexity in the code, but I think the complexity ultimately arises from us choosing the wrong concurrency abstraction, which makes it challenging to scale the relayer. If we were to use async tasks here, the whole process could be much simpler, with one async task spawned for each message sent or packet relayed.

There is little reason why we need a single-threaded worker for each channel, as compared to many async tasks running concurrently, each focused on a single operation. Right now this PR seems to be patchwork in an attempt to manually multiplex multiple tasks on a single-threaded worker. So it is quite awkward that we need a queue that is processed slowly, one item at a time, in a loop doing multiple unrelated things.
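For illustration only, the task-per-packet model might look roughly like this with tokio; Packet, relay_packet, and Error are hypothetical stand-ins, not hermes types:

use tokio::task::JoinSet;

// Hypothetical stand-ins so the sketch is self-contained.
struct Packet;
#[derive(Debug)]
struct Error;

async fn relay_packet(_packet: Packet) -> Result<(), Error> {
    // Placeholder: build, submit, and confirm the tx for one packet.
    Ok(())
}

// One async task per packet, instead of one single-threaded worker
// multiplexing a queue: a slow confirmation no longer blocks unrelated
// packets.
async fn relay_all(packets: Vec<Packet>) -> Result<(), Error> {
    let mut tasks = JoinSet::new();
    for packet in packets {
        tasks.spawn(relay_packet(packet));
    }
    while let Some(joined) = tasks.join_next().await {
        joined.expect("relay task panicked")?;
    }
    Ok(())
}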

The other reason why I think the original code in this PR used multiple indirections is the use of &mut self pointers. Mutable pointers and shared data structures are the bottleneck in the current code base that prevents the code from executing concurrently. I have refactored the code in RelayPath and PendingTx to avoid using &mut self, with some help from RefCell. This allows the retry operation to be slightly more ergonomic and to be handled entirely inside process_pending, without running into overlapping &mut conflicts.
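Reduced to a sketch (illustrative types, not the actual RelayPath/PendingTx code), the interior-mutability pattern looks like this:

use std::cell::RefCell;
use std::collections::VecDeque;

struct PendingTx {
    tx_hash: String,
    retries: u32,
}

// A &self method can still mutate the pending queue through a RefCell,
// so retry logic needs no overlapping &mut self borrows.
struct RelayPath {
    pending: RefCell<VecDeque<PendingTx>>,
}

impl RelayPath {
    fn process_pending(&self) {
        // Take one pending tx out; this borrow ends before any
        // re-enqueue, so there is no overlapping mutable borrow at runtime.
        let next = self.pending.borrow_mut().pop_front();
        if let Some(mut tx) = next {
            if !self.is_confirmed(&tx.tx_hash) {
                // Not yet committed: bump the retry count and re-queue.
                tx.retries += 1;
                self.pending.borrow_mut().push_back(tx);
            }
        }
    }

    fn is_confirmed(&self, _hash: &str) -> bool {
        // Placeholder for the actual chain query.
        false
    }
}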

Since this PR is high priority, I suggest that we stick with the current design and think about a better design at a later time. Ideally, in the weekly meeting I would like to resurface the discussion on migrating to an async task runtime to improve the scalability of the relayer.

I will finish testing this PR by tomorrow so that we can release on time by Friday.

@romac (Member) commented Aug 19, 2021

> I agree that there is a lot of unnecessary complexity in the code, but I think the complexity ultimately arises from us choosing the wrong concurrency abstraction, which makes it challenging to scale the relayer. If we were to use async tasks here, the whole process could be much simpler, with one async task spawned for each message sent or packet relayed.

I wouldn't say that the concurrency model we chose back then was wrong per se, but rather that we've outgrown it. This has little to do with async/await, as we could potentially spawn thread-based "async" tasks for the same purpose. But I agree that async/await would provide a better and more natural way of dealing with async tasks in the process of scaling the relayer.

Happy to discuss this at the next IBC meeting, as you suggest :)

@soareschen (Contributor) commented:

> Then, regardless of the solution, we need a setup to reproduce #1265: document the testing steps in the PR, and then show that this branch fixes the issue.

@ancazamfir I am not too sure how exactly to reproduce #1265, but here is what I tried:

I assume that the key issue to observe is the long time gap between the logs, which indicates that the relayer paused for a long time waiting for confirmation before continuing. The reproduction below shows that the relayer continues producing logs and performing operations while waiting for the unconfirmed messages.

To create a bottleneck in the test chain environment, I had to edit the one-chain script to increase the commit time to 10 seconds so that transactions do not get finalized too quickly.

It is actually pretty tricky to generate more than one unconfirmed message in a test scenario. I had to submit multiple raw ibc-transfers simultaneously, several times, to saturate the queue with more than 2 unconfirmed/pending transactions. This is done by creating multiple wallets and running the following commands a few times:

$ hermes tx raw ft-transfer ibc-1 ibc-0 transfer channel-0 9999 -o 1000 -n 1 -k user2 &
$ hermes tx raw ft-transfer ibc-1 ibc-0 transfer channel-0 9999 -o 1000 -n 1 -k user3 &
$ hermes tx raw ft-transfer ibc-1 ibc-0 transfer channel-0 9999 -o 1000 -n 1 -k user4 &
$ hermes tx raw ft-transfer ibc-1 ibc-0 transfer channel-0 9999 -o 1000 -n 1 -k user5 &

After the pending transaction queue is filled, we should be able to see something like the following in the log:

Aug 19 15:37:03.004 TRACE [ibc-1:transfer/channel-0 -> ibc-0] total pending transactions left: 4
Aug 19 15:37:03.057 TRACE [ibc-0:transfer/channel-0 -> ibc-1] trying to confirm TxHashes: count=1; 8BD52C21E8FEB4E1B5FCB072A512B2D3B083FEDC5A14FF22356551F447576F98 
Aug 19 15:37:03.060 TRACE [ibc-0:transfer/channel-0 -> ibc-1] transaction is not yet committed: TxHashes: count=1; 8BD52C21E8FEB4E1B5FCB072A512B2D3B083FEDC5A14FF22356551F447576F98 
Aug 19 15:37:03.061 TRACE [ibc-0:transfer/channel-0 -> ibc-1] total pending transactions left: 2
Aug 19 15:37:04.012 TRACE [ibc-1:transfer/channel-0 -> ibc-0] trying to confirm TxHashes: count=1; 306061F68483D904B11D3CF6280582B33742F1895DB4CE7F6E6669C2E9DDB019 
Aug 19 15:37:04.016 TRACE [ibc-1:transfer/channel-0 -> ibc-0] transaction is not yet committed: TxHashes: count=1; 306061F68483D904B11D3CF6280582B33742F1895DB4CE7F6E6669C2E9DDB019 
...
Aug 19 15:37:04.818  INFO [ibc-0:transfer/channel-0 -> ibc-1] generate messages from batch with 4 events
Aug 19 15:37:04.823 DEBUG [ibc-0:transfer/channel-0 -> ibc-1] ibc-0 => SendPacketEv(SendPacket - h:0-289, seq:107, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:04.837 TRACE [ibc-0:transfer/channel-0 -> ibc-1] built recv_packet msg seq:107, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)), proofs at height 0-290
Aug 19 15:37:04.837 DEBUG [ibc-0:transfer/channel-0 -> ibc-1] ibc-1 <= /ibc.core.channel.v1.MsgRecvPacket from SendPacketEv(SendPacket - h:0-289, seq:107, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:04.837 DEBUG [ibc-0:transfer/channel-0 -> ibc-1] ibc-0 => SendPacketEv(SendPacket - h:0-289, seq:108, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:04.851 TRACE [ibc-0:transfer/channel-0 -> ibc-1] built recv_packet msg seq:108, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)), proofs at height 0-290
Aug 19 15:37:04.852 DEBUG [ibc-0:transfer/channel-0 -> ibc-1] ibc-1 <= /ibc.core.channel.v1.MsgRecvPacket from SendPacketEv(SendPacket - h:0-289, seq:108, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:04.852 DEBUG [ibc-0:transfer/channel-0 -> ibc-1] ibc-0 => SendPacketEv(SendPacket - h:0-289, seq:109, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:04.863 TRACE [ibc-0:transfer/channel-0 -> ibc-1] built recv_packet msg seq:109, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)), proofs at height 0-290
Aug 19 15:37:04.864 DEBUG [ibc-0:transfer/channel-0 -> ibc-1] ibc-1 <= /ibc.core.channel.v1.MsgRecvPacket from SendPacketEv(SendPacket - h:0-289, seq:109, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:04.864 DEBUG [ibc-0:transfer/channel-0 -> ibc-1] ibc-0 => SendPacketEv(SendPacket - h:0-289, seq:110, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:04.876 TRACE [ibc-0:transfer/channel-0 -> ibc-1] built recv_packet msg seq:110, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)), proofs at height 0-290
Aug 19 15:37:04.876 DEBUG [ibc-0:transfer/channel-0 -> ibc-1] ibc-1 <= /ibc.core.channel.v1.MsgRecvPacket from SendPacketEv(SendPacket - h:0-289, seq:110, path:channel-0/transfer->channel-0/transfer, toh:1-1286, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:04.876  INFO [ibc-0:transfer/channel-0 -> ibc-1] scheduling op. data with 4 msg(s) for Destination (height 0-290)
Aug 19 15:37:04.970  INFO [ibc-0:transfer/channel-0 -> ibc-1] relay op. data of 4 msgs(s) to Destination (height 0-290), delayed by: 93.435414ms [try 1/5]
Aug 19 15:37:04.970  INFO [ibc-0:transfer/channel-0 -> ibc-1] prepending Destination client update @ height 0-290
Aug 19 15:37:04.972  INFO [ibc-1:transfer/channel-0 -> ibc-0] generate messages from batch with 4 events
Aug 19 15:37:04.976 DEBUG [ibc-1:transfer/channel-0 -> ibc-0] ibc-1 => WriteAcknowledgementEv(WriteAcknowledgement - h:1-287, seq:99, path:channel-0/transfer->channel-0/transfer, toh:1-1284, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:04.982 TRACE light client verification trusted=0-289 target=0-290
Aug 19 15:37:04.998 TRACE adjusting headers with 0 supporting headers trusted=0-289 target=290
Aug 19 15:37:04.998 TRACE fetching header height=0-290
Aug 19 15:37:05.009 TRACE [ibc-1:transfer/channel-0 -> ibc-0] built acknowledgment msg seq:99, path:channel-0/transfer->channel-0/transfer, toh:1-1284, tos:Timestamp(NoTimestamp)), proofs at height 1-288
Aug 19 15:37:05.009 DEBUG [ibc-1:transfer/channel-0 -> ibc-0] ibc-0 <= /ibc.core.channel.v1.MsgAcknowledgement from WriteAcknowledgementEv(WriteAcknowledgement - h:1-287, seq:99, path:channel-0/transfer->channel-0/transfer, toh:1-1284, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:05.009 DEBUG [ibc-1:transfer/channel-0 -> ibc-0] ibc-1 => WriteAcknowledgementEv(WriteAcknowledgement - h:1-287, seq:100, path:channel-0/transfer->channel-0/transfer, toh:1-1284, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:05.013 DEBUG [ibc-0 -> ibc-1:07-tendermint-0] MsgUpdateAnyClient from trusted height 0-289 to target height 0-290
Aug 19 15:37:05.014  INFO [ibc-0:transfer/channel-0 -> ibc-1] assembled batch of 5 message(s)
Aug 19 15:37:05.015 DEBUG [ibc-1] send_tx: sending 5 messages using nonce 40
Aug 19 15:37:05.030 TRACE [ibc-1] send_tx: based on the estimated gas, adjusting fee from Fee { amount: [Coin { denom: "stake", amount: "3000" }], gas_limit: 3000000, payer: "", granter: "" } to Fee { amount: [Coin { denom: "stake", amount: "353" }], gas_limit: 352935, payer: "", granter: "" }
Aug 19 15:37:05.038 DEBUG [ibc-1] send_tx: broadcast_tx_sync: Response { code: Ok, data: Data([]), log: Log("[]"), hash: transaction::Hash(86C29F2DAE587EC4752988976FCA726C6EF2CA7E254A8648057ECD332253E489) }
Aug 19 15:37:05.038  INFO [Async~>ibc-1] response(s): 1; Ok:86C29F2DAE587EC4752988976FCA726C6EF2CA7E254A8648057ECD332253E489

Aug 19 15:37:05.038  INFO [ibc-0:transfer/channel-0 -> ibc-1] success
Aug 19 15:37:05.038 TRACE [ibc-1:transfer/channel-0 -> ibc-0] trying to confirm TxHashes: count=1; 86C29F2DAE587EC4752988976FCA726C6EF2CA7E254A8648057ECD332253E489 
Aug 19 15:37:05.041 TRACE [ibc-1:transfer/channel-0 -> ibc-0] transaction is not yet committed: TxHashes: count=1; 86C29F2DAE587EC4752988976FCA726C6EF2CA7E254A8648057ECD332253E489 
Aug 19 15:37:05.041 TRACE [ibc-1:transfer/channel-0 -> ibc-0] total pending transactions left: 5
Aug 19 15:37:05.042 TRACE [ibc-1:transfer/channel-0 -> ibc-0] built acknowledgment msg seq:100, path:channel-0/transfer->channel-0/transfer, toh:1-1284, tos:Timestamp(NoTimestamp)), proofs at height 1-288
Aug 19 15:37:05.042 DEBUG [ibc-1:transfer/channel-0 -> ibc-0] ibc-0 <= /ibc.core.channel.v1.MsgAcknowledgement from WriteAcknowledgementEv(WriteAcknowledgement - h:1-287, seq:100, path:channel-0/transfer->channel-0/transfer, toh:1-1284, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:05.042 DEBUG [ibc-1:transfer/channel-0 -> ibc-0] ibc-1 => WriteAcknowledgementEv(WriteAcknowledgement - h:1-287, seq:101, path:channel-0/transfer->channel-0/transfer, toh:1-1284, tos:Timestamp(NoTimestamp)))
Aug 19 15:37:05.045 TRACE [ibc-1:transfer/channel-0 -> ibc-0] trying to confirm TxHashes: count=1; 86C29F2DAE587EC4752988976FCA726C6EF2CA7E254A8648057ECD332253E489 
Aug 19 15:37:05.046 TRACE [ibc-1:transfer/channel-0 -> ibc-0] transaction is not yet committed: TxHashes: count=1; 86C29F2DAE587EC4752988976FCA726C6EF2CA7E254A8648057ECD332253E489 
Aug 19 15:37:05.046 TRACE [ibc-1:transfer/channel-0 -> ibc-0] total pending transactions left: 5
...

I think the key observation is in the timestamps and the total pending transactions count: multiple packets are being relayed while there are still pending transactions waiting to be committed to the chain.

@romac it would be great if you could help confirm that what I am testing makes sense.

@romac (Member) commented Aug 19, 2021

I think you might be able to create a bottleneck more easily by setting max_msg_num = 1 in the chains' configs and then issuing a single hermes tx raw ft-transfer ibc-1 ibc-0 transfer channel-0 9999 -o 1000 -k user2 -n MSG_NUM, where MSG_NUM is the number of packets you want to send and thus, together with that setting, also the number of txs that will be submitted.

@soareschen (Contributor) commented:

> I think you might be able to create a bottleneck more easily by setting max_msg_num = 1 in the chains' configs

Good to know! Unfortunately it does not work in this case, as the pending queue is batched by OperationalData instead of by transaction. So what happens instead is that we have one OperationalData in the queue with many transaction hashes inside.
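Schematically (illustrative names, not the actual hermes types), each pending-queue entry carries a whole batch:

// Illustrative names, not the actual hermes types: the pending queue is
// keyed by OperationalData, so with max_msg_num = 1 a single queue entry
// simply accumulates many tx hashes.
struct TxHash(String);
struct OperationalData; // a batch of events/messages scheduled together

struct PendingOperationalData {
    op_data: OperationalData,
    tx_hashes: Vec<TxHash>, // one hash per submitted tx, still one entry
}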

@ancazamfir (Collaborator) commented Aug 23, 2021

Regarding the testing, one thought:

  • create 3 chains and use bigger commit times in one-chain, like you did
  • create two channels, one between ibc-0 and ibc-1 and the other between ibc-1 and ibc-2
  • send hundreds of ft-transfers from ibc-0 to ibc-1, followed by one transfer from ibc-2 to ibc-1

With many tries you should catch a scenario where the ibc-1 runtime is busy sending and confirming block inclusion for the hundreds of messages on the ibc-0 --> ibc-1 channel before it gets to handle the request for the single message from the ibc-2 --> ibc-1 worker.
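The congestion mechanism here is a shared, sequential runtime: while it synchronously confirms one worker's batch, another worker's request just waits. A schematic sketch (illustrative, not the actual hermes runtime):

use std::sync::mpsc;

// Illustrative only: a chain runtime serving requests one at a time.
// While it blocks confirming the hundreds of ibc-0 -> ibc-1 messages,
// the single ibc-2 -> ibc-1 request sits unserved in the channel.
enum Request {
    SendAndConfirm { msgs: Vec<Vec<u8>> },
}

fn run(receiver: mpsc::Receiver<Request>) {
    for request in receiver {
        match request {
            Request::SendAndConfirm { msgs } => {
                // Synchronous submit + wait-for-inclusion would go here,
                // blocking every other queued request meanwhile.
                let _ = msgs;
            }
        }
    }
}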

@ancazamfir (Collaborator) commented:

> The code is overly complex, touching a lot of files, and I'm not sure that is necessary. My thought for an initial solution, which I shared with @adizere, was to move the loop that confirms tx hashes from cosmos.rs into chain/handle/prod.rs. All workers would make use of it.

> I agree that there is a lot of unnecessary complexity in the code, but I think the complexity ultimately arises from us choosing the wrong concurrency abstraction, which makes it challenging to scale the relayer. ...

To clarify: I think that, within the current architecture, we could have designed this to be much simpler, and all the workers would have benefited from that. As it is, the packet worker has a lot of changes (it could have been zero) and all the other workers still use the old sync approach.

@ancazamfir ancazamfir self-requested a review August 23, 2021 14:20
@ancazamfir (Collaborator) left a comment

I think we can merge this as is and think of improvements later.

@soareschen (Contributor) commented:

Here is how I verified that the PR clears the congestion:

  • Create two chains ibc-0 and ibc-1 using ./scripts/dev-env.
  • Update the timeout_commit config in ibc-1 to 20s.
    • The timeout commit in ibc-0 remains 1s, so that we can submit multiple transactions to ibc-0 and have them queued to be relayed to ibc-1.
  • Run hermes start in one terminal.
  • In another terminal, run hermes tx raw ft-transfer ibc-1 ibc-0 transfer channel-0 1 -o 1 -n 1 -k user2 3 times consecutively.

We should be able to find something like the following in the logs:

Aug 23 15:53:09.543 DEBUG send_messages_and_wait_check_tx with 2 messages
...
Aug 23 15:53:11.657 DEBUG send_messages_and_wait_check_tx with 2 messages
...
Aug 23 15:53:13.676 DEBUG send_messages_and_wait_check_tx with 2 messages

Note that each call to send_messages_and_wait_check_tx takes about 2 seconds, which roughly follows the commit time of ibc-0.

In the master branch version, add the logging to send_msgs and repeat the experiment above:

    fn send_msgs(&mut self, proto_msgs: Vec<Any>) -> Result<Vec<IbcEvent>, Error> {
        crate::time!("send_msgs");
        debug!("send_msgs with {} messages", proto_msgs.len());
        ...
    }

We should see the logs showing something like:

Aug 23 16:00:22.078 DEBUG send_msgs with 2 messages
...
Aug 23 16:00:42.166 DEBUG send_msgs with 2 messages
...
Aug 23 16:01:02.143 DEBUG send_msgs with 2 messages

Notice that in the original version it takes roughly 20 seconds between each call to send_msgs, so it took 60 seconds to relay all 3 packet transfers to ibc-1. In the improved version in this PR, it instead takes roughly 20 seconds to submit and confirm all 3 packet transfers, as they are all done in parallel.
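The difference boils down to where confirmation happens. A hedged sketch of the two paths (only the two method names come from this PR; the bodies and helper stubs are illustrative):

struct Any;
struct TxHash(String);
struct IbcEvent;

fn broadcast(_msgs: Vec<Any>) -> Vec<TxHash> { Vec::new() }           // stub
fn wait_until_committed(_hashes: &[TxHash]) {}                        // stub
fn collect_events(_hashes: &[TxHash]) -> Vec<IbcEvent> { Vec::new() } // stub

// Old path: block until the txs are committed (~20s with
// timeout_commit = 20s), so 3 sequential transfers take ~60s.
fn send_messages_and_wait_commit(msgs: Vec<Any>) -> Vec<IbcEvent> {
    let hashes = broadcast(msgs);
    wait_until_committed(&hashes); // blocks the worker for a full commit
    collect_events(&hashes)
}

// New path: return right after CheckTx (~2s) and let the worker's
// pending-tx loop confirm the hashes later, so the transfers overlap
// and all 3 complete in ~20s.
fn send_messages_and_wait_check_tx(msgs: Vec<Any>) -> Vec<TxHash> {
    broadcast(msgs)
}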

@soareschen soareschen merged commit febeae5 into master Aug 23, 2021
@soareschen soareschen deleted the adi/nowait_localized branch August 23, 2021 14:41
romac added a commit that referenced this pull request Aug 23, 2021
* Bump all 0.6.2 crates to 0.7.0

* Bump `ibc-proto` to v0.10.0

* Bump version in guide

* Add `ibc-relayer-rest` crate to README table

* fixup version

* Update Cargo.lock

* Update dependencies

* Update supported Cosmos SDK version range, cf. #1266

* Improve structure of JSON sent back by the REST server

* Remove prefix from info message in start command

* Document the REST server API

* Include error source in `ConnectionError::Relayer`

* Add .changelog entry for #1278

* Fix ibc-relayer-rest integration tests

* Release changelog for 0.7.0

* Reformat the changelog
hu55a1n1 pushed a commit to hu55a1n1/hermes that referenced this pull request Sep 13, 2022
* Added non-blocking interface to submit msgs (stub)

* Basic impl for submit_msgs

* Minor improvements.

Cherry-picked from

commit b63335b
Author: Adi Seredinschi <adi@informal.systems>
Date:   Sat Jul 17 10:36:38 2021 +0200

    From IbcEvent to IbcEventWithHash

* Avoid cloning in assemble_msgs

* relay_from_operational_data is now generic

* TxHash wrapper

* unconfirmed module and mediator stub

* Implemented unconfirmed::Mediator corner-cases

* Moved from TxHash to TxHashes for better output

* More comments & ideas

* Added minimum backoff

* Fixed ordering bug

* Undo var renaming for easier review

* Fix type errors

* WIP refactoring

* Refactor mediator code

* Add some comments

* Refactor relay_path methods to not require &mut self

* Use CPS to retry submitting unconfirmed transactions

* Fix clippy

* Check that channel has valid channel ID in RelayPath::new()

There is no more &mut self reference in RelayPath, so there is
no way self.channel will be updated to contain channel ID later on

* Display more information in logs

* Rename send_msgs and submit_msgs with send_messages_and_wait_commit/check_tx

* Remove min backoff parameter

* Fix E2E test

* Handle error response in pending tx separately

* Log RelaySummary in PacketWorker

* Revert change to backoff duration in PacketWorker

* Minor cleanup

* Add logging message for when send_messages_* methods are called

Co-authored-by: Soares Chen <soares.chen@gmail.com>
Co-authored-by: Soares Chen <soares.chen@maybevoid.com>
hu55a1n1 pushed a commit to hu55a1n1/hermes that referenced this pull request Sep 13, 2022 (the same 0.7.0 release commit listed above, with issue references rewritten as informalsystems#1266 and informalsystems#1278).
Issues closed by this pull request:

  • Packet takes 24sec to get relayed
  • Hermes should retry on TxNoConfirmation error