Commit

Merge branch 'main' into ship_transaction_extension_test

ndcgundlach authored Jul 19, 2022
2 parents e21dadf + 919aa87 commit 83205a8
Showing 47 changed files with 1,491 additions and 663 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/jiraIssueCreator.yml
@@ -29,7 +29,7 @@ jobs:
env:
JIRA_PROJECT_KEY: ${{ secrets.JIRA_PROJECT_KEY }}
JIRA_ISSUE_SUMMARY: ${{ github.event.issue.title }}
JIRA_ISSUE_DESCRIPTION: "${{ github.event.issue.url }}"
JIRA_ISSUE_DESCRIPTION: "${{ github.event.issue.url }}\\n\\n${{ github.event.issue.html_url }}"
JIRA_ISSUE_TYPE_NAME: "Story"
- name: Check issue json
run: |
59 changes: 59 additions & 0 deletions docs/00_install/01_build-from-source/00_build-unsupported-os.md
@@ -0,0 +1,59 @@
---
content_title: Build EOSIO from Source on Other Unix-based OS
---

**Please keep in mind that the instructions provided here for building from source on other, unsupported operating systems are experimental, provided AS-IS on a best-effort basis, and may not be fully featured.**

**A Warning On Parallel Compilation Jobs (`-j` flag)**: When building C/C++ software, the build is often performed in parallel via a command such as `make -j $(nproc)`, which uses the number of CPU cores as the number of compilation jobs to run simultaneously. However, be aware that some compilation units (.cpp files) in mandel are extremely complex and can consume nearly 4GB of memory to compile. You may need to reduce the level of parallelization depending on the amount of memory on your build host; for example, instead of `make -j $(nproc)`, run `make -j2`. Failures due to memory exhaustion typically, but not always, manifest as compiler crashes.

Generally we recommend performing what we refer to as a "pinned build", which ensures the compiler and boost version remain the same between builds of different mandel versions (mandel requires these versions to remain the same; otherwise its state needs to be repopulated from a portable snapshot).

<details>
<summary>FreeBSD 13.1 Build Instructions</summary>

Install required dependencies:
```
pkg update && pkg install \
git \
cmake \
curl \
boost-all \
python3 \
openssl \
llvm11 \
pkgconf
```
and perform the build (note that FreeBSD 13.1 ships llvm13 by default, so you should point cmake at the clang11 compilers explicitly):
```
git submodule update --init --recursive
mkdir build
cd build
cmake -DCMAKE_CXX_COMPILER=clang++11 -DCMAKE_C_COMPILER=clang11 -DCMAKE_BUILD_TYPE=Release ..
make -j $(nproc) package
```
</details>

### Running Tests

When building from source it is recommended to run at least what we refer to as the "parallelizable tests". The WASM spec tests are not included in the "parallelizable tests" by default; they add additional coverage and can also be run in parallel.

```
cd build
# "parallelizable tests": the minimum test set that should be run
ctest -j $(nproc) -LE _tests
# Also consider running the WASM spec tests for more coverage
ctest -j $(nproc) -L wasm_spec_tests
```

Some other tests are available and recommended, but be aware that they can be sensitive to other software running on the same host and may **SIGKILL** other nodeos instances running on that host.
```
cd build
# These tests can't run in parallel but are recommended.
ctest -L "nonparallelizable_tests"
# These tests can't run in parallel. They also take a long time to run.
ctest -L "long_running_tests"
```
4 changes: 3 additions & 1 deletion docs/00_install/index.md
@@ -15,7 +15,9 @@ EOSIO currently supports the following operating systems:
3. Ubuntu 22.04

[[info | Note]]
| It may be possible to build and install EOSIO on other Unix-based operating systems. This is not officially supported, though.
| It may be possible to build and install EOSIO on other Unix-based operating systems. We have gathered helpful information on the following page, but please keep in mind that it is experimental and not officially supported.

* [Build EOSIO on Other Unix-based Systems](01_build-from-source/00_build-unsupported-os.md)

## Docker Utilities for Node Execution (D.U.N.E.)

1 change: 1 addition & 0 deletions libraries/CMakeLists.txt
@@ -14,6 +14,7 @@ add_subdirectory( wasm-jit )
remove_definitions( -w )

add_subdirectory( chainbase )
set(APPBASE_ENABLE_AUTO_VERSION OFF CACHE BOOL "enable automatic discovery of version via 'git describe'")
add_subdirectory( appbase )
add_subdirectory( chain )
add_subdirectory( testing )
2 changes: 1 addition & 1 deletion libraries/appbase
15 changes: 15 additions & 0 deletions libraries/chain/abi_serializer.cpp
@@ -211,6 +211,19 @@ namespace eosio { namespace chain {
return ends_with(type, "[]");
}

bool abi_serializer::is_szarray(const string_view& type)const {
auto pos1 = type.find_last_of('[');
auto pos2 = type.find_last_of(']');
if(pos1 == string_view::npos || pos2 == string_view::npos) return false;
auto pos = pos1 + 1;
if(pos == pos2) return false;
while(pos < pos2) {
if( ! (type[pos] >= '0' && type[pos] <= '9') ) return false;
++pos;
}
return true;
}

bool abi_serializer::is_optional(const string_view& type)const {
return ends_with(type, "?");
}
@@ -227,6 +240,8 @@ namespace eosio { namespace chain {
std::string_view abi_serializer::fundamental_type(const std::string_view& type)const {
if( is_array(type) ) {
return type.substr(0, type.size()-2);
} else if (is_szarray (type) ){
return type.substr(0, type.find_last_of('['));
} else if ( is_optional(type) ) {
return type.substr(0, type.size()-1);
} else {
23 changes: 13 additions & 10 deletions libraries/chain/block_log.cpp
@@ -1,6 +1,5 @@
#include <eosio/chain/block_log.hpp>
#include <eosio/chain/exceptions.hpp>
#include <fstream>
#include <fc/bitutil.hpp>
#include <fc/io/cfile.hpp>
#include <fc/io/raw.hpp>
@@ -50,7 +49,7 @@ namespace eosio { namespace chain {
uint32_t index_first_block_num = 0; //the first number in index & the log had it not been pruned
std::optional<block_log_prune_config> prune_config;

block_log_impl(std::optional<block_log_prune_config> prune_conf) :
explicit block_log_impl(std::optional<block_log_prune_config> prune_conf) :
prune_config(prune_conf) {
if(prune_config) {
EOS_ASSERT(prune_config->prune_blocks, block_log_exception, "block log prune configuration requires at least one block");
@@ -65,6 +64,7 @@ namespace eosio { namespace chain {
reopen();
}
}

void reopen();

//close() is called all over the place. Let's make this an explict call to ensure it only is called when
@@ -105,7 +105,7 @@ namespace eosio { namespace chain {

void flush();

void append(const signed_block_ptr& b);
void append(const signed_block_ptr& b, const block_id_type& id, const std::vector<char>& packed_block);

void prune();

@@ -355,11 +355,15 @@ namespace eosio { namespace chain {
}
}

void block_log::append(const signed_block_ptr& b) {
my->append(b);
void block_log::append(const signed_block_ptr& b, const block_id_type& id) {
my->append(b, id, fc::raw::pack(*b));
}

void block_log::append(const signed_block_ptr& b, const block_id_type& id, const std::vector<char>& packed_block) {
my->append(b, id, packed_block);
}

void detail::block_log_impl::append(const signed_block_ptr& b) {
void detail::block_log_impl::append(const signed_block_ptr& b, const block_id_type& id, const std::vector<char>& packed_block) {
try {
EOS_ASSERT( genesis_written_to_block_log, block_log_append_fail, "Cannot append to block log until the genesis is first written" );

@@ -377,13 +381,12 @@ namespace eosio { namespace chain {
"Append to index file occuring at wrong position.",
("position", (uint64_t) index_file.tellp())
("expected", (b->block_num() - index_first_block_num) * sizeof(uint64_t)));
auto data = fc::raw::pack(*b);
block_file.write(data.data(), data.size());
block_file.write(packed_block.data(), packed_block.size());
block_file.write((char*)&pos, sizeof(pos));
const uint64_t end = block_file.tellp();
index_file.write((char*)&pos, sizeof(pos));
head = b;
head_id = b->calculate_id();
head_id = id;

if(prune_config) {
if((pos&prune_config->prune_threshold) != (end&prune_config->prune_threshold))
@@ -566,7 +569,7 @@ namespace eosio { namespace chain {
block_file.write((char*)&totem, sizeof(totem));

if (first_block) {
append(first_block);
append(first_block, first_block->calculate_id(), fc::raw::pack(*first_block));
} else {
head.reset();
head_id = {};
71 changes: 50 additions & 21 deletions libraries/chain/controller.cpp
@@ -115,17 +115,17 @@ struct building_block {
const vector<digest_type>& new_protocol_feature_activations )
:_pending_block_header_state( prev.next( when, num_prev_blocks_to_confirm ) )
,_new_protocol_feature_activations( new_protocol_feature_activations )
,_trx_mroot_or_receipt_digests( digests_t{} )
{}

pending_block_header_state _pending_block_header_state;
std::optional<producer_authority_schedule> _new_pending_producer_schedule;
vector<digest_type> _new_protocol_feature_activations;
size_t _num_new_protocol_features_that_have_activated = 0;
deque<transaction_metadata_ptr> _pending_trx_metas;
deque<transaction_receipt> _pending_trx_receipts;
deque<digest_type> _pending_trx_receipt_digests;
deque<digest_type> _action_receipt_digests;
std::optional<checksum256_type> _transaction_mroot;
deque<transaction_receipt> _pending_trx_receipts; // boost deque in 1.71 with 1024 elements performs better
std::variant<checksum256_type, digests_t> _trx_mroot_or_receipt_digests;
digests_t _action_receipt_digests;
};

struct assembled_block {
@@ -403,9 +403,16 @@ struct controller_impl {
if( fork_head->dpos_irreversible_blocknum <= lib_num )
return;

const auto branch = fork_db.fetch_branch( fork_head->id, fork_head->dpos_irreversible_blocknum );
auto branch = fork_db.fetch_branch( fork_head->id, fork_head->dpos_irreversible_blocknum );
try {

std::vector<std::future<std::vector<char>>> v;
v.reserve( branch.size() );
for( auto bitr = branch.rbegin(); bitr != branch.rend(); ++bitr ) {
v.emplace_back( async_thread_pool( thread_pool.get_executor(), [b=(*bitr)->block]() { return fc::raw::pack(*b); } ) );
}
auto it = v.begin();

for( auto bitr = branch.rbegin(); bitr != branch.rend(); ++bitr ) {
if( read_mode == db_read_mode::IRREVERSIBLE ) {
controller::block_report br;
@@ -418,7 +425,8 @@

// blog.append could fail due to failures like running out of space.
// Do it before commit so that in case it throws, DB can be rolled back.
blog.append( (*bitr)->block );
blog.append( (*bitr)->block, (*bitr)->id, it->get() );
++it;

db.commit( (*bitr)->block_num );
root_id = (*bitr)->id;
@@ -433,8 +441,12 @@
//db.commit( fork_head->dpos_irreversible_blocknum ); // redundant

if( root_id != fork_db.root()->id ) {
branch.emplace_back(fork_db.root());
fork_db.advance_root( root_id );
}

// delete branch in thread pool
boost::asio::post( thread_pool.get_executor(), [branch{std::move(branch)}]() {} );
}

/**
@@ -1077,7 +1089,8 @@ struct controller_impl {
auto& bb = std::get<building_block>(pending->_block_stage);
auto orig_trx_receipts_size = bb._pending_trx_receipts.size();
auto orig_trx_metas_size = bb._pending_trx_metas.size();
auto orig_trx_receipt_digests_size = bb._pending_trx_receipt_digests.size();
auto orig_trx_receipt_digests_size = std::holds_alternative<digests_t>(bb._trx_mroot_or_receipt_digests) ?
std::get<digests_t>(bb._trx_mroot_or_receipt_digests).size() : 0;
auto orig_action_receipt_digests_size = bb._action_receipt_digests.size();
std::function<void()> callback = [this,
orig_trx_receipts_size,
@@ -1088,7 +1101,8 @@
auto& bb = std::get<building_block>(pending->_block_stage);
bb._pending_trx_receipts.resize(orig_trx_receipts_size);
bb._pending_trx_metas.resize(orig_trx_metas_size);
bb._pending_trx_receipt_digests.resize(orig_trx_receipt_digests_size);
if( std::holds_alternative<digests_t>(bb._trx_mroot_or_receipt_digests) )
std::get<digests_t>(bb._trx_mroot_or_receipt_digests).resize(orig_trx_receipt_digests_size);
bb._action_receipt_digests.resize(orig_action_receipt_digests_size);
};

@@ -1440,7 +1454,9 @@ struct controller_impl {
r.cpu_usage_us = cpu_usage_us;
r.net_usage_words = net_usage_words;
r.status = status;
std::get<building_block>(pending->_block_stage)._pending_trx_receipt_digests.emplace_back( r.digest() );
auto& bb = std::get<building_block>(pending->_block_stage);
if( std::holds_alternative<digests_t>(bb._trx_mroot_or_receipt_digests) )
std::get<digests_t>(bb._trx_mroot_or_receipt_digests).emplace_back( r.digest() );
return r;
}

@@ -1764,6 +1780,21 @@ struct controller_impl {

auto& pbhs = pending->get_pending_block_header_state();

auto& bb = std::get<building_block>(pending->_block_stage);

auto action_merkle_fut = async_thread_pool( thread_pool.get_executor(),
[ids{std::move( bb._action_receipt_digests )}]() mutable {
return merkle( std::move( ids ) );
} );
const bool calc_trx_merkle = !std::holds_alternative<checksum256_type>(bb._trx_mroot_or_receipt_digests);
std::future<checksum256_type> trx_merkle_fut;
if( calc_trx_merkle ) {
trx_merkle_fut = async_thread_pool( thread_pool.get_executor(),
[ids{std::move( std::get<digests_t>(bb._trx_mroot_or_receipt_digests) )}]() mutable {
return merkle( std::move( ids ) );
} );
}

// Update resource limits:
resource_limits.process_account_limit_updates();
const auto& chain_config = self.get_global_properties().configuration;
Expand All @@ -1774,12 +1805,10 @@ struct controller_impl {
);
resource_limits.process_block_usage(pbhs.block_num);

auto& bb = std::get<building_block>(pending->_block_stage);

// Create (unsigned) block:
auto block_ptr = std::make_shared<signed_block>( pbhs.make_block_header(
merkle( std::move( std::get<building_block>(pending->_block_stage)._pending_trx_receipt_digests ) ),
merkle( std::move( std::get<building_block>(pending->_block_stage)._action_receipt_digests ) ),
calc_trx_merkle ? trx_merkle_fut.get() : std::get<checksum256_type>(bb._trx_mroot_or_receipt_digests),
action_merkle_fut.get(),
bb._new_pending_producer_schedule,
std::move( bb._new_protocol_feature_activations ),
protocol_features.get_protocol_feature_set()
@@ -1950,6 +1979,9 @@ struct controller_impl {
auto producer_block_id = bsp->id;
start_block( b->timestamp, b->confirmed, new_protocol_feature_activations, s, producer_block_id, fc::time_point::maximum() );

// validated in create_block_state_future()
std::get<building_block>(pending->_block_stage)._trx_mroot_or_receipt_digests = b->transaction_mroot;

const bool existing_trxs_metas = !bsp->trxs_metas().empty();
const bool pub_keys_recovered = bsp->is_pub_keys_recovered();
const bool skip_auth_checks = self.skip_auth_check();
@@ -2027,9 +2059,6 @@
}
}

// validated in create_block_state_future()
std::get<building_block>(pending->_block_stage)._transaction_mroot = b->transaction_mroot;

finalize_block();

auto& ab = std::get<assembled_block>(pending->_block_stage);
@@ -3476,11 +3505,11 @@ std::optional<chain_id_type> controller::extract_chain_id_from_db( const path& s

if( db.revision() < 1 ) return {};

return db.get<global_property_object>().chain_id;
} catch( const bad_database_version_exception& ) {
throw;
} catch( ... ) {
}
auto * gpo = db.find<global_property_object>();
if (gpo==nullptr) return {};

return gpo->chain_id;
} catch (std::system_error &) {} // do not propagate db_error_code::not_found" for absent db, so it will be created

return {};
}
17 changes: 17 additions & 0 deletions libraries/chain/deep_mind.cpp
@@ -225,6 +225,23 @@ namespace eosio::chain {
("trx", fc::to_hex(gto.packed_trx.data(), gto.packed_trx.size()))
);
}
void deep_mind_handler::on_create_deferred(operation_qualifier qual, const generated_transaction_object& gto, const packed_transaction& packed_trx)
{
auto packed_signed_trx = fc::raw::pack(packed_trx.get_signed_transaction());

fc_dlog(_logger, "DTRX_OP ${qual}CREATE ${action_id} ${sender} ${sender_id} ${payer} ${published} ${delay} ${expiration} ${trx_id} ${trx}",
("qual", prefix(qual))
("action_id", _action_id)
("sender", gto.sender)
("sender_id", gto.sender_id)
("payer", gto.payer)
("published", gto.published)
("delay", gto.delay_until)
("expiration", gto.expiration)
("trx_id", gto.trx_id)
("trx", fc::to_hex(packed_signed_trx.data(), packed_signed_trx.size()))
);
}
void deep_mind_handler::on_fail_deferred()
{
fc_dlog(_logger, "DTRX_OP FAILED ${action_id}",
1 change: 1 addition & 0 deletions libraries/chain/include/eosio/chain/abi_serializer.hpp
@@ -45,6 +45,7 @@ struct abi_serializer {
/// @return string_view of `t` or internal string type
std::string_view resolve_type(const std::string_view& t)const;
bool is_array(const std::string_view& type)const;
bool is_szarray(const std::string_view& type)const;
bool is_optional(const std::string_view& type)const;
bool is_type( const std::string_view& type, const yield_function_t& yield )const;
[[deprecated("use the overload with yield_function_t[=create_yield_function(max_serialization_time)]")]]
