
feature: Cache disk files for intermediate results of aggregate operations to avoid OOM #949

Closed
adofsauron opened this issue Nov 17, 2022 · 65 comments · Fixed by #1355
Labels: A-feature (feature with good idea), prio: high (High priority)

@adofsauron (Collaborator)

adofsauron commented Nov 17, 2022

Abstract:

TIANMU Engine - Requirements analysis for caching intermediate results of aggregate operations in disk files to avoid OOM

Related ISSUE: #21

Note of context:

The intermediate results of an aggregate operation are currently cached in an in-memory hash table. Once the data volume exceeds available RAM, an OOM occurs.

If the data volume exceeds RAM, spill the intermediate results to disk files to avoid OOM.

Functional Requirements:

1. The hash results of the aggregation operation are cached in disk files and still participate correctly in the operation when the intermediate results exceed RAM

Performance requirements:

1. The compression ratio of the cache file size to the original data content, i.e. the ratio of disk space used to memory used; this depends on the compression algorithm

2. Disk cache file write and read speed on standard disk hardware; this affects the speed of aggregation operations and the rules for writing and reading the disk cache files

3. The overall impact on the performance of aggregation operations

Development cycle:

TODO:

@adofsauron (Collaborator Author)

Aggregate cache to disk - Development process

1. Funny layer

   Who is involved:

   1. Idiots who can't read code

   2. Assorted power players with managerial titles

   Strategy:

   1. Shut them down directly

   2. They are not qualified to participate in any part of the development process

2. Requirement layer

   Who is involved:

   1. Product side

   2. Test side

   Strategy:

   1. Rigorous requirements analysis to define the boundaries

   2. Leave clear written evidence of every requirement

   3. In case of a transgression, kill it immediately

   4. Leave clear evidence to inform everyone

   5. Remember the past as a guide for the future

3. Working layer

   Who is involved:

   1. People who can read the code

   2. People who can go beyond the existing code

   Strategy:

   1. Demonstrate the rationality of every design detail

   2. Cross validation

4. Development plan

   1. Requirement confirmation and read-back: one to two days

   2. Solution pre-study: one to two days

   3. Detailed design: two to three days

   4. Code development: three to five days

   5. Unit tests / MTR: two to three days

@adofsauron (Collaborator Author)

Aggregate cache to disk - Requirements analysis

1. Functional requirements

   1. If the aggregate cache exceeds RAM, use disk files

      1. Write to disk only when RAM is exceeded

      2. Limit the size of the cache files

   2. The aggregation result must be the same before and after the change

      1. Only the consistency of query results before and after the modification is guaranteed

      2. Consistency of query results with InnoDB is not guaranteed

   3. A parameter can disable the feature dynamically

      1. Aggregation operations already in progress are not affected

      2. Users decide whether to enable this feature

   4. After the query completes, the disk cache files are automatically cleared

      1. Consider doing this asynchronously

      2. Disk cache clearing must not affect running aggregate queries

2. Performance requirements

   1. Data is written to disk only after RAM usage reaches a threshold

      1. For example, 80%

      2. Reserve memory so other modules can keep working

   2. Write-speed threshold for the disk cache files holding intermediate results

      1. Compression ratio of the cache file to the original data

      2. Compression speed of the compression algorithm

      3. Whether disk I/O is written synchronously or asynchronously

      4. Whether block caching should flush disk I/O in batches to improve performance

   3. Read-speed threshold for the cache files

      1. Decompression speed of the compression algorithm

      2. What to do if memory is insufficient while reading a cached file

         1. Whether to use an eviction/replacement mechanism

         2. If there is no replacement: terminate the aggregate query, give a clear error message, and clear the disk cache

   4. Impact of cache file reading on the aggregation scan

      1. The aggregation scan traverses sequentially and appends intermediate results

      2. Writing to the cache file involves disk I/O

      3. Decide whether to create a resident memory pool for cached files

         1. Evict blocks with an LRU policy

         2. Local data caching reduces repeated I/O

   5. Impact on aggregate query performance

      1. All intermediate results in memory

      2. Some intermediate results cached on disk (e.g. 30%, 50%, 80%)

      3. All intermediate results cached on disk

3. Stability requirements

   1. mysqld crashes while data is being written to the disk cache

      1. How to clean up residual files after mysqld restarts

   2. Disk I/O errors occur while data is being written to the disk cache

      1. The write fails

      2. How to verify the correctness of cached files (CRC or other checksum algorithms)

   3. Reading data fails

      1. Terminate the aggregate query, display an error message, and clear the disk cache

4. Impact on other modules

   1. Consistency of the aggregation computation module interfaces

   2. Interface compatibility of the read/write module for the aggregated cache results

   3. Pluggable and replaceable policy interfaces

   4. Learn from the filesort module

@adofsauron (Collaborator Author)

Abstract:

This is the high-level design for caching the intermediate results of aggregate operations in disk files to avoid OOM. It is the starting point for the next, detailed design and serves only to communicate the design ideas to other developers.

Requirement analysis: 2022-11-17, MySQL column storage engine - Requirements analysis for caching intermediate results of aggregate operations in disk files to avoid OOM (Zunwu World's blog, CSDN)

RAM cache analysis of the current aggregated intermediate results:

Static structure:

image

Dynamic structure:

image

@adofsauron (Collaborator Author)

Summary design for adding disk cache:

Design Idea:

Maintain compatibility with the upper-layer module interfaces of the existing aggregation operations

Avoid excessive disk I/O while still avoiding OOM

Design Strategy:

Add a DiskCache to replace BlockedRowMemStorage, but keep the interface consistent

DiskCache uses an LRU internally to cache disk blocks

When the LRU reaches its upper limit and evicts a block, the block must be flushed to disk and the RAM it occupied released (a minimal sketch follows)
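A minimal sketch of this strategy, assuming a block-oriented interface similar to BlockedRowMemStorage: resident blocks are tracked in an LRU list, and when the resident limit is reached the least recently used block is flushed to the spill file and its RAM released. Apart from the DiskCache / BlockedRowMemStorage names taken from the design above, everything here (BlockId, GetBlock, the spill-file layout) is an illustrative assumption, not the actual StoneDB code.

// Sketch: LRU-managed block cache that spills evicted blocks to a disk file.
#include <cstdint>
#include <cstdio>
#include <list>
#include <unordered_map>
#include <unordered_set>
#include <vector>

using BlockId = uint64_t;

class DiskCache {
 public:
  DiskCache(size_t max_resident_blocks, size_t block_bytes, FILE *spill_file)
      : max_resident_(max_resident_blocks), block_bytes_(block_bytes), file_(spill_file) {}

  // Returns a writable pointer to the block, loading it from disk if it was spilled.
  char *GetBlock(BlockId id) {
    auto it = index_.find(id);
    if (it != index_.end()) {                 // hit: move to LRU front
      lru_.splice(lru_.begin(), lru_, it->second.lru_pos);
      return it->second.data.data();
    }
    EvictIfFull();                            // make room before loading/creating
    Entry e;
    e.data.resize(block_bytes_);
    if (on_disk_.count(id))                   // previously evicted: read it back
      LoadFromDisk(id, e.data.data());
    lru_.push_front(id);
    e.lru_pos = lru_.begin();
    return index_.emplace(id, std::move(e)).first->second.data.data();
  }

 private:
  struct Entry {
    std::vector<char> data;
    std::list<BlockId>::iterator lru_pos;
  };

  void EvictIfFull() {
    if (index_.size() < max_resident_) return;
    BlockId victim = lru_.back();             // least recently used block
    lru_.pop_back();
    FlushToDisk(victim, index_[victim].data.data());  // spill before freeing RAM
    index_.erase(victim);
  }

  void FlushToDisk(BlockId id, const char *buf) {
    std::fseek(file_, static_cast<long>(id * block_bytes_), SEEK_SET);
    std::fwrite(buf, 1, block_bytes_, file_);
    on_disk_.insert(id);
  }

  void LoadFromDisk(BlockId id, char *buf) {
    std::fseek(file_, static_cast<long>(id * block_bytes_), SEEK_SET);
    std::fread(buf, 1, block_bytes_, file_);
  }

  size_t max_resident_;
  size_t block_bytes_;
  FILE *file_;
  std::list<BlockId> lru_;                    // most recently used at the front
  std::unordered_map<BlockId, Entry> index_;  // blocks currently resident in RAM
  std::unordered_set<BlockId> on_disk_;       // blocks that have been spilled
};

The upper layer keeps calling a GetBlock-style accessor exactly as it would on the in-memory storage; only the eviction path touches the disk.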

@adofsauron (Collaborator Author)

Architecture Design:

Static structure:

image

Dynamic structure:

image

@adofsauron adofsauron added the prio: high High priority label Dec 6, 2022
@adofsauron adofsauron self-assigned this Dec 6, 2022
@adofsauron (Collaborator Author)

Refer to the Linux native AIO that MySQL (InnoDB) uses for asynchronous I/O:

https://dev.mysql.com/doc/refman/5.7/en/innodb-linux-native-aio.html
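For reference, a minimal sketch of the Linux native AIO (libaio) pattern that InnoDB's native-AIO support is built on. The file name, buffer size, and single-request flow are illustrative assumptions, and error handling is kept to a minimum; compile with -laio.

// Submit one asynchronous write with libaio and wait for its completion.
#include <fcntl.h>
#include <libaio.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
  io_context_t ctx{};
  if (io_setup(8, &ctx) < 0) return 1;            // kernel AIO context, queue depth 8

  int fd = open("/tmp/aio_demo.bin", O_WRONLY | O_CREAT | O_DIRECT, 0644);
  if (fd < 0) return 1;

  void *buf = nullptr;
  posix_memalign(&buf, 4096, 4096);               // O_DIRECT needs an aligned buffer
  std::memset(buf, 'x', 4096);

  struct iocb cb;
  struct iocb *cbs[1] = {&cb};
  io_prep_pwrite(&cb, fd, buf, 4096, 0);          // async write of 4 KiB at offset 0
  if (io_submit(ctx, 1, cbs) != 1) return 1;      // returns immediately

  struct io_event ev;
  io_getevents(ctx, 1, 1, &ev, nullptr);          // block until the write completes
  std::printf("write completed, res=%ld\n", static_cast<long>(ev.res));

  free(buf);
  close(fd);
  io_destroy(ctx);
  return 0;
}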

@adofsauron (Collaborator Author)

The data flow that interacts with the file system

image

@adofsauron (Collaborator Author)

More flexible memory control and disk swapping mechanisms are needed to make more efficient use of memory

@adofsauron (Collaborator Author)

adofsauron commented Feb 8, 2023

Buffer settings for MySQL 5.7.36

mysql> show variables like  '%buffer%';
+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| bulk_insert_buffer_size             | 8388608        |
| innodb_buffer_pool_chunk_size       | 134217728      |
| innodb_buffer_pool_dump_at_shutdown | ON             |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_dump_pct         | 40             |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_instances        | 1              |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | ON             |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 536870912      |
| innodb_change_buffer_max_size       | 25             |
| innodb_change_buffering             | all            |
| innodb_log_buffer_size              | 1048576        |
| innodb_sort_buffer_size             | 1048576        |
| join_buffer_size                    | 262144         |
| key_buffer_size                     | 536870912      |
| myisam_sort_buffer_size             | 8388608        |
| net_buffer_length                   | 16384          |
| preload_buffer_size                 | 32768          |
| read_buffer_size                    | 4194304        |
| read_rnd_buffer_size                | 16777216       |
| sort_buffer_size                    | 4194304        |
| sql_buffer_result                   | OFF            |
| tianmu_insert_buffer_size           | 512            |
| tianmu_insert_max_buffered          | 65536          |
| tianmu_sync_buffers                 | 0              |
+-------------------------------------+----------------+
27 rows in set (0.00 sec)


mysql> show global status like '%innodb_buffer_pool%';
+---------------------------------------+--------------------------------------------------+
| Variable_name                         | Value                                            |
+---------------------------------------+--------------------------------------------------+
| Innodb_buffer_pool_dump_status        | Dumping of buffer pool not started               |
| Innodb_buffer_pool_load_status        | Buffer pool(s) load completed at 230208  7:42:46 |
| Innodb_buffer_pool_resize_status      |                                                  |
| Innodb_buffer_pool_pages_data         | 252                                              |
| Innodb_buffer_pool_bytes_data         | 4128768                                          |
| Innodb_buffer_pool_pages_dirty        | 0                                                |
| Innodb_buffer_pool_bytes_dirty        | 0                                                |
| Innodb_buffer_pool_pages_flushed      | 36                                               |
| Innodb_buffer_pool_pages_free         | 32512                                            |
| Innodb_buffer_pool_pages_misc         | 0                                                |
| Innodb_buffer_pool_pages_total        | 32764                                            |
| Innodb_buffer_pool_read_ahead_rnd     | 0                                                |
| Innodb_buffer_pool_read_ahead         | 0                                                |
| Innodb_buffer_pool_read_ahead_evicted | 0                                                |
| Innodb_buffer_pool_read_requests      | 1055                                             |
| Innodb_buffer_pool_reads              | 219                                              |
| Innodb_buffer_pool_wait_free          | 0                                                |
| Innodb_buffer_pool_write_requests     | 325                                              |
+---------------------------------------+--------------------------------------------------+
18 rows in set (0.00 sec)


@adofsauron (Collaborator Author)

adofsauron commented Feb 8, 2023

Buffer Pool

When reading data, InnoDB first checks whether the data page exists in the cache. If it does not, it reads the page from disk and then caches it in the innodb_buffer_pool. Similarly, inserts, updates, and deletes are first applied to the cached pages and then flushed to disk at a certain frequency; this mechanism is called a checkpoint.

 


The last 3/8 of the LRU list is used to store cold data.

The midpoint of the LRU list is the boundary where the tail of the hot (young) sublist meets the head of the cold (old) sublist.

Cold data that is accessed is moved from the cold sublist to the hot sublist.

If data in the hot sublist is not accessed for a long time, it gradually moves toward the cold sublist.

If cold data is not accessed for a long time and the LRU list is full, the cold data at the tail of the LRU list is evicted.

Pre-read (read-ahead) pages are only inserted into the cold part of the LRU list and are not moved to the hot sublist until they are actually accessed (see the sketch after the figures below).

 

image

image

image
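A minimal sketch of the midpoint-insertion LRU described above, assuming a fixed-capacity page list split roughly 5/8 young and 3/8 old. The names (PageId, Touch, Rebalance) and the exact promotion rules are illustrative, not InnoDB's actual data structures.

// Midpoint-insertion LRU: new pages enter at the head of the "old" (cold)
// sublist; a page is promoted to the "young" (hot) head only when accessed again.
#include <cstdint>
#include <list>
#include <unordered_map>

using PageId = uint64_t;

class MidpointLru {
 public:
  explicit MidpointLru(size_t capacity)
      : capacity_(capacity), old_target_(capacity * 3 / 8) {}

  // Access a page: insert it at the old-sublist head if absent,
  // promote it to the young head if it is already resident.
  void Touch(PageId id) {
    auto it = pos_.find(id);
    if (it != pos_.end()) {
      young_.splice(young_.begin(), ItsList(it->second.in_old), it->second.it);
      it->second.in_old = false;
      Rebalance();
      return;
    }
    if (young_.size() + old_.size() >= capacity_) Evict();
    old_.push_front(id);                      // midpoint insertion
    pos_[id] = {old_.begin(), true};
    Rebalance();
  }

 private:
  struct Pos { std::list<PageId>::iterator it; bool in_old; };

  std::list<PageId> &ItsList(bool in_old) { return in_old ? old_ : young_; }

  void Evict() {                              // evict from the cold tail
    std::list<PageId> &src = old_.empty() ? young_ : old_;
    pos_.erase(src.back());
    src.pop_back();
  }

  void Rebalance() {                          // keep the old sublist at ~3/8 of total
    while (old_.size() < old_target_ && !young_.empty()) {
      PageId id = young_.back();
      young_.pop_back();
      old_.push_front(id);
      pos_[id] = {old_.begin(), true};
    }
  }

  size_t capacity_, old_target_;
  std::list<PageId> young_, old_;             // young head is the hottest page
  std::unordered_map<PageId, Pos> pos_;
};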

@adofsauron (Collaborator Author)

Multiple Buffer Pool Instances


For systems with buffer pools in the multi-gigabyte range, dividing the buffer pool into separate instances can improve concurrency, by reducing contention as different threads read and write to cached pages. This feature is typically intended for systems with a [buffer pool](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_buffer_pool) size in the multi-gigabyte range. Multiple buffer pool instances are configured using the [innodb_buffer_pool_instances](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_buffer_pool_instances) configuration option, and you might also adjust the [innodb_buffer_pool_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size) value.

When the InnoDB buffer pool is large, many data requests can be satisfied by retrieving from memory. You might encounter bottlenecks from multiple threads trying to access the buffer pool at once. You can enable multiple buffer pools to minimize this contention. Each page that is stored in or read from the buffer pool is assigned to one of the buffer pools randomly, using a hashing function. Each buffer pool manages its own free lists, flush lists, LRUs, and all other data structures connected to a buffer pool. Prior to MySQL 8.0, each buffer pool was protected by its own buffer pool mutex. In MySQL 8.0 and later, the buffer pool mutex was replaced by several list and hash protecting mutexes, to reduce contention.

To enable multiple buffer pool instances, set the innodb_buffer_pool_instances configuration option to a value greater than 1 (the default) up to 64 (the maximum). This option takes effect only when you set innodb_buffer_pool_size to a size of 1GB or more. The total size you specify is divided among all the buffer pools. For best efficiency, specify a combination of [innodb_buffer_pool_instances](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_buffer_pool_instances) and [innodb_buffer_pool_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size) so that each buffer pool instance is at least 1GB.
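As a small illustration of the "assigned to one of the buffer pools randomly, using a hashing function" point above, a page-to-instance mapping could look like the following; the hash and names are assumptions for demonstration, not MySQL's actual implementation.

// Map a (space_id, page_no) pair to one of N buffer pool instances.
#include <cstdint>
#include <cstdio>

uint32_t BufferPoolInstance(uint32_t space_id, uint32_t page_no, uint32_t n_instances) {
  uint64_t key = (static_cast<uint64_t>(space_id) << 32) | page_no;
  key ^= key >> 33;                    // cheap bit mixing
  key *= 0xff51afd7ed558ccdULL;
  key ^= key >> 33;
  return static_cast<uint32_t>(key % n_instances);
}

int main() {
  // Each page always lands in the same instance, spreading mutex contention.
  for (uint32_t page = 0; page < 8; ++page)
    std::printf("page %u -> instance %u\n", page, BufferPoolInstance(1, page, 4));
}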

@adofsauron (Collaborator Author)

Design objectives:

No matter how large the data volume is, no OOM occurs

The upper limit of the disk usage is not considered

There is no noticeable performance degradation until memory is exhausted

@adofsauron (Collaborator Author)

Static memory control:

Simple control strategy

Unable to use memory efficiently

@adofsauron (Collaborator Author)

Dynamic memory control:

Granular control at the level of the minimum allocation unit, the block

Meta-information management for aggregation blocks

Identifies whether a block is in memory or on disk

Holds the block's location information on disk

All meta information must stay in memory

A unified read/write interface for aggregation blocks, whether they are in memory or on disk

A unified upper-layer interface facilitates unified control

Multithreaded concurrent access control

Concurrent access by multiple query threads

A single query thread with concurrent access by multiple aggregation threads (a minimal sketch follows)
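A minimal sketch of what the per-block meta information and the unified memory-or-disk accessor could look like. All names here (BlockMeta, AggBlockStore, the spill-file layout) are illustrative assumptions, not the actual TianMu code; a single mutex stands in for whatever concurrency control the real design would need.

// Per-block metadata kept fully in memory, plus a unified read path that hides
// whether a block currently lives in RAM or in the spill file.
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <unordered_map>
#include <vector>

struct BlockMeta {
  bool in_memory = true;     // where the block currently resides
  uint64_t disk_offset = 0;  // valid only when in_memory == false
  uint32_t size_bytes = 0;
};

class AggBlockStore {
 public:
  explicit AggBlockStore(FILE *spill_file) : file_(spill_file) {}

  void AddBlock(uint64_t block_id, std::vector<char> data) {
    std::lock_guard<std::mutex> lock(mu_);
    meta_[block_id] = {true, 0, static_cast<uint32_t>(data.size())};
    resident_[block_id] = std::move(data);
  }

  // Unified access path: the caller never cares whether the block was spilled.
  std::vector<char> ReadBlock(uint64_t block_id) {
    std::lock_guard<std::mutex> lock(mu_);   // guards concurrent aggregation threads
    const BlockMeta &m = meta_.at(block_id);
    if (m.in_memory) return resident_.at(block_id);
    std::vector<char> buf(m.size_bytes);
    std::fseek(file_, static_cast<long>(m.disk_offset), SEEK_SET);
    std::fread(buf.data(), 1, m.size_bytes, file_);
    return buf;
  }

  // Spill a resident block to disk and update its meta information.
  void Spill(uint64_t block_id) {
    std::lock_guard<std::mutex> lock(mu_);
    BlockMeta &m = meta_.at(block_id);
    if (!m.in_memory) return;
    std::vector<char> &buf = resident_.at(block_id);
    std::fseek(file_, 0, SEEK_END);
    m.disk_offset = static_cast<uint64_t>(std::ftell(file_));
    std::fwrite(buf.data(), 1, buf.size(), file_);
    m.size_bytes = static_cast<uint32_t>(buf.size());
    m.in_memory = false;
    resident_.erase(block_id);               // release the RAM
  }

 private:
  FILE *file_;
  std::mutex mu_;
  std::unordered_map<uint64_t, BlockMeta> meta_;          // always resident
  std::unordered_map<uint64_t, std::vector<char>> resident_;
};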

@adofsauron (Collaborator Author)

In the aggregator's pre-aggregation step, internal data is stored in a hash table whose key is the "grouping key" value (for example, with GROUP BY b in a SQL statement, the keys of the hash table are all the distinct values of b in the table). The hash table is dynamic: as the number of keys grows, ClickHouse switches it to a two-level hash table to improve performance. In addition, ClickHouse provides many specializations to optimize for specific key types.

For a single-level hash table, the block the aggregator converts to is a single_level block; for a two-level hash table, it is a two_level block. A two_level block carries a block_num, which can be thought of as the key at the first level of the two-level hash table. Using two_level blocks has two benefits:

Blocks with the same block_num from multiple pre-aggregating nodes can be grouped together, so that different groups can be merged in parallel.

If the nodes producing two_level blocks are required to emit them in increasing block_num order, memory usage can be reduced: the data that needs to be merged must be in the same group, so when a new block_num appears, all previous merge operations are known to be complete.

In fact, the branch above that writes data to disk files does just that. The GroupingAggregatedTransform node converts single_level blocks into two_level blocks and groups them by block_num; they are then merged by MergingAggregatedBucketTransform, and since there can be multiple MergingAggregatedBucketTransform instances, the merging phase is also parallel. Finally the SortingAggregatedTransform node sorts by block_num. (A small sketch of the bucketing idea follows.)
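A minimal sketch of the two-level bucketing idea, assuming a fixed first-level bucket count (256, as ClickHouse uses). The key type, hash, and merge function are illustrative, not ClickHouse's actual Aggregator code.

// Rows are bucketed by hash(key) % NUM_BUCKETS (the bucket index plays the role
// of block_num), so buckets with the same index from different producers have
// disjoint key sets and can be merged independently and in parallel.
#include <cstdint>
#include <cstdio>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

constexpr size_t NUM_BUCKETS = 256;

using Bucket = std::unordered_map<std::string, int64_t>;  // key -> partial sum
using TwoLevel = std::vector<Bucket>;                     // indexed by block_num

size_t BucketOf(const std::string &key) {
  return std::hash<std::string>{}(key) % NUM_BUCKETS;
}

void Insert(TwoLevel &agg, const std::string &key, int64_t value) {
  agg[BucketOf(key)][key] += value;
}

// Merge one bucket across several producers; different block_nums could be
// handed to different threads because their key sets never overlap.
Bucket MergeBucket(const std::vector<TwoLevel> &producers, size_t block_num) {
  Bucket out;
  for (const TwoLevel &p : producers)
    for (const auto &[key, sum] : p[block_num]) out[key] += sum;
  return out;
}

int main() {
  std::vector<TwoLevel> producers(2, TwoLevel(NUM_BUCKETS));
  Insert(producers[0], "a", 1);
  Insert(producers[1], "a", 2);
  Insert(producers[1], "b", 5);

  for (size_t b = 0; b < NUM_BUCKETS; ++b)
    for (const auto &[key, sum] : MergeBucket(producers, b))
      std::printf("%s -> %lld\n", key.c_str(), static_cast<long long>(sum));
}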

@adofsauron (Collaborator Author)

GROUP BY in External Memory
You can enable dumping temporary data to the disk to restrict memory usage during GROUP BY. The max_bytes_before_external_group_by setting determines the threshold RAM consumption for dumping GROUP BY temporary data to the file system. If set to 0 (the default), it is disabled.

When using max_bytes_before_external_group_by, we recommend that you set max_memory_usage about twice as high. This is necessary because there are two stages to aggregation: reading the data and forming intermediate data (1) and merging the intermediate data (2). Dumping data to the file system can only occur during stage 1. If the temporary data wasn’t dumped, then stage 2 might require up to the same amount of memory as in stage 1.

For example, if max_memory_usage was set to 10000000000 and you want to use external aggregation, it makes sense to set max_bytes_before_external_group_by to 10000000000, and max_memory_usage to 20000000000. When external aggregation is triggered (if there was at least one dump of temporary data), maximum consumption of RAM is only slightly more than max_bytes_before_external_group_by.

With distributed query processing, external aggregation is performed on remote servers. In order for the requester server to use only a small amount of RAM, set distributed_aggregation_memory_efficient to 1.

When merging data flushed to the disk, as well as when merging results from remote servers when the distributed_aggregation_memory_efficient setting is enabled, consumes up to 1/256 * the_number_of_threads from the total amount of RAM.

When external aggregation is enabled, if there was less than max_bytes_before_external_group_by of data (i.e. data was not flushed), the query runs just as fast as without external aggregation. If any temporary data was flushed, the run time will be several times longer (approximately three times).

If you have an ORDER BY with a LIMIT after GROUP BY, then the amount of used RAM depends on the amount of data in LIMIT, not in the whole table. But if the ORDER BY does not have LIMIT, do not forget to enable external sorting (max_bytes_before_external_sort).

@adofsauron (Collaborator Author)

template <typename Method>
void Aggregator::writeToTemporaryFileImpl(
    AggregatedDataVariants & data_variants,
    Method & method,
    TemporaryFileStream & out) const
{
    size_t max_temporary_block_size_rows = 0;
    size_t max_temporary_block_size_bytes = 0;

    auto update_max_sizes = [&](const Block & block)
    {
        size_t block_size_rows = block.rows();
        size_t block_size_bytes = block.bytes();

        if (block_size_rows > max_temporary_block_size_rows)
            max_temporary_block_size_rows = block_size_rows;
        if (block_size_bytes > max_temporary_block_size_bytes)
            max_temporary_block_size_bytes = block_size_bytes;
    };

    for (UInt32 bucket = 0; bucket < Method::Data::NUM_BUCKETS; ++bucket)
    {
        Block block = convertOneBucketToBlock(data_variants, method, data_variants.aggregates_pool, false, bucket);
        out.write(block);
        update_max_sizes(block);
    }

    if (params.overflow_row)
    {
        Block block = prepareBlockAndFillWithoutKey(data_variants, false, true);
        out.write(block);
        update_max_sizes(block);
    }

    /// Pass ownership of the aggregate functions states:
    /// `data_variants` will not destroy them in the destructor, they are now owned by ColumnAggregateFunction objects.
    data_variants.aggregator = nullptr;

    LOG_DEBUG(log, "Max size of temporary block: {} rows, {}.", max_temporary_block_size_rows, ReadableSize(max_temporary_block_size_bytes));
}

@adofsauron (Collaborator Author)

adofsauron commented Feb 10, 2023

Writing data to a temporary file

/// This class helps with the handling of temporary files or directories.
/// A unique name for the temporary file or directory is automatically chosen based on a specified prefix.
/// Create a directory in the constructor.
/// The destructor always removes the temporary file or directory with all contained files.
class TemporaryFileOnDisk
{
public:
    explicit TemporaryFileOnDisk(const DiskPtr & disk_);
    explicit TemporaryFileOnDisk(const DiskPtr & disk_, CurrentMetrics::Value metric_scope);
    explicit TemporaryFileOnDisk(const DiskPtr & disk_, const String & prefix);

    ~TemporaryFileOnDisk();

    DiskPtr getDisk() const { return disk; }
    String getPath() const;

private:
    DiskPtr disk;

    /// Relative path in disk to the temporary file or directory
    String relative_path;

    CurrentMetrics::Increment metric_increment;

    /// Specified if we know what for file is used (sort/aggregate/join).
    std::optional<CurrentMetrics::Increment> sub_metric_increment = {};
};
/*
 * Data can be written into this stream and then read.
 * After finish writing, call `finishWriting` and then `read` to read the data.
 * Account amount of data written to disk in parent scope.
 */
class TemporaryFileStream : boost::noncopyable
{
public:
    struct Stat
    {
        /// Statistics for file
        /// Non-atomic because we don't allow to `read` or `write` into single file from multiple threads
        size_t compressed_size = 0;
        size_t uncompressed_size = 0;
        size_t num_rows = 0;
    };

    TemporaryFileStream(TemporaryFileOnDiskHolder file_, const Block & header_, TemporaryDataOnDisk * parent_);
    TemporaryFileStream(FileSegmentsHolder && segments_, const Block & header_, TemporaryDataOnDisk * parent_);

    size_t write(const Block & block);
    void flush();

    Stat finishWriting();
    bool isWriteFinished() const;

    Block read();

    String getPath() const;

    Block getHeader() const { return header; }

    /// Read finished and file released
    bool isEof() const;

    ~TemporaryFileStream();

private:
    void updateAllocAndCheck();

    /// Release everything, close reader and writer, delete file
    void release();

    TemporaryDataOnDisk * parent;

    Block header;

    /// Data can be stored in file directly or in the cache
    TemporaryFileOnDiskHolder file;
    FileSegmentsHolder segment_holder;

    Stat stat;

    struct OutputWriter;
    std::unique_ptr<OutputWriter> out_writer;

    struct InputReader;
    std::unique_ptr<InputReader> in_reader;
};

@adofsauron (Collaborator Author)

Data segmentation for parallel aggregation

                    auto many_data = std::make_shared<ManyAggregatedData>(streams);
                    for (size_t j = 0; j < streams; ++j)
                    {
                        auto aggregation_for_set = std::make_shared<AggregatingTransform>(input_header, transform_params_for_set, many_data, j, merge_threads, temporary_data_merge_threads);
                        // For each input stream we have `grouping_sets_size` copies, so port index
                        // for transform #j should skip ports of first (j-1) streams.
                        connect(*ports[i + grouping_sets_size * j], aggregation_for_set->getInputs().front());
                        ports[i + grouping_sets_size * j] = &aggregation_for_set->getOutputs().front();
                        processors.push_back(aggregation_for_set);
                    }

@adofsauron (Collaborator Author)


2023.02.10 20:37:31.132939 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.133521 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.145087 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.146234 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.212701 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.212986 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.235648 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.236181 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.255100 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.255397 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.257787 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.257909 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.262983 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.263184 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.427304 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> MemoryTracker: Current memory usage (for query): 3.00 GiB.
2023.02.10 20:37:32.005273 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.48 GiB, peak 4.48 GiB, free memory in arenas 78.00 MiB, will set to 4.35 GiB (RSS), difference: -136.76 MiB
2023.02.10 20:37:32.775464 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.779167 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.810370 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.829024 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.840999 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.872675 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.904684 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:33.002875 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.32 GiB, peak 4.48 GiB, free memory in arenas 132.90 MiB, will set to 4.56 GiB (RSS), difference: 245.42 MiB
2023.02.10 20:37:34.001386 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.25 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.002735 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613aaaaaa
2023.02.10 20:37:34.003989 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.70 GiB, peak 4.73 GiB, free memory in arenas 140.14 MiB, will set to 4.71 GiB (RSS), difference: 5.61 MiB
2023.02.10 20:37:34.060384 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.26 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.062218 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613baaaaa
2023.02.10 20:37:34.065246 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.26 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.066696 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613caaaaa
2023.02.10 20:37:34.105046 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.27 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.107155 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613daaaaa
2023.02.10 20:37:34.137376 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.24 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.138497 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613eaaaaa
2023.02.10 20:37:34.150694 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.23 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.152425 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613faaaaa
2023.02.10 20:37:34.171662 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.23 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.172282 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613gaaaaa
2023.02.10 20:37:35.003550 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.46 GiB, peak 4.73 GiB, free memory in arenas 141.16 MiB, will set to 4.43 GiB (RSS), difference: -34.19 MiB
2023.02.10 20:37:35.623713 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 964 rows, 49.89 KiB.
2023.02.10 20:37:35.635407 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.634 sec., 225234 rows, 11.42 MiB uncompressed, 5.33 MiB compressed, 53.186 uncompressed bytes per row, 24.800 compressed bytes per row, compression rate: 2.145 (137808.606 rows/sec., 6.99 MiB/sec. uncompressed, 3.26 MiB/sec. compressed)
2023.02.10 20:37:35.672945 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 947 rows, 49.01 KiB.
2023.02.10 20:37:35.676381 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 952 rows, 49.27 KiB.
2023.02.10 20:37:35.686735 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.622 sec., 225577 rows, 11.44 MiB uncompressed, 5.40 MiB compressed, 53.186 uncompressed bytes per row, 25.098 compressed bytes per row, compression rate: 2.119 (139053.417 rows/sec., 7.05 MiB/sec. uncompressed, 3.33 MiB/sec. compressed)
2023.02.10 20:37:35.690445 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 966 rows, 50.00 KiB.
2023.02.10 20:37:35.695309 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.635 sec., 225149 rows, 11.42 MiB uncompressed, 5.42 MiB compressed, 53.186 uncompressed bytes per row, 25.249 compressed bytes per row, compression rate: 2.106 (137689.501 rows/sec., 6.98 MiB/sec. uncompressed, 3.32 MiB/sec. compressed)
2023.02.10 20:37:35.697783 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 955 rows, 49.43 KiB.
2023.02.10 20:37:35.700710 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.564 sec., 224876 rows, 11.41 MiB uncompressed, 5.40 MiB compressed, 53.187 uncompressed bytes per row, 25.160 compressed bytes per row, compression rate: 2.114 (143824.545 rows/sec., 7.30 MiB/sec. uncompressed, 3.45 MiB/sec. compressed)
2023.02.10 20:37:35.710832 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.606 sec., 225478 rows, 11.44 MiB uncompressed, 5.39 MiB compressed, 53.186 uncompressed bytes per row, 25.065 compressed bytes per row, compression rate: 2.122 (140396.883 rows/sec., 7.12 MiB/sec. uncompressed, 3.36 MiB/sec. compressed)
2023.02.10 20:37:35.716700 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 957 rows, 49.53 KiB.
2023.02.10 20:37:35.726570 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.576 sec., 224881 rows, 11.41 MiB uncompressed, 5.39 MiB compressed, 53.187 uncompressed bytes per row, 25.144 compressed bytes per row, compression rate: 2.115 (142677.853 rows/sec., 7.24 MiB/sec. uncompressed, 3.42 MiB/sec. compressed)
2023.02.10 20:37:35.737304 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 961 rows, 49.74 KiB.
2023.02.10 20:37:35.746642 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.575 sec., 226129 rows, 11.47 MiB uncompressed, 5.38 MiB compressed, 53.186 uncompressed bytes per row, 24.964 compressed bytes per row, compression rate: 2.131 (143555.250 rows/sec., 7.28 MiB/sec. uncompressed, 3.42 MiB/sec. compressed)
2023.02.10 20:37:36.002963 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 3.75 GiB, peak 4.73 GiB, free memory in arenas 127.23 MiB, will set to 3.91 GiB (RSS), difference: 157.30 MiB
2023.02.10 20:37:37.003561 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.07 GiB, peak 4.73 GiB, free memory in arenas 127.21 MiB, will set to 4.03 GiB (RSS), difference: -44.86 MiB
2023.02.10 20:37:38.004000 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.24 GiB, peak 4.73 GiB, free memory in arenas 127.21 MiB, will set to 4.22 GiB (RSS), difference: -25.07 MiB
2023.02.10 20:37:39.006610 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.50 GiB, peak 4.73 GiB, free memory in arenas 100.41 MiB, will set to 4.45 GiB (RSS), difference: -51.03 MiB
2023.02.10 20:37:39.163363 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.163826 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613haaaaa
2023.02.10 20:37:39.166541 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.168046 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613iaaaaa
2023.02.10 20:37:39.185181 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.186032 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613jaaaaa
2023.02.10 20:37:39.199760 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.202190 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613kaaaaa
2023.02.10 20:37:39.218797 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.220288 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613laaaaa
2023.02.10 20:37:39.234810 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.13 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.236573 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613maaaaa
2023.02.10 20:37:39.244683 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.13 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.245990 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613naaaaa
2023.02.10 20:37:40.002968 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.37 GiB, peak 4.73 GiB, free memory in arenas 136.52 MiB, will set to 4.38 GiB (RSS), difference: 9.56 MiB
2023.02.10 20:37:40.670828 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 782 rows, 40.47 KiB.
2023.02.10 20:37:40.677275 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.511 sec., 179166 rows, 9.10 MiB uncompressed, 4.27 MiB compressed, 53.234 uncompressed bytes per row, 24.987 compressed bytes per row, compression rate: 2.130 (118588.282 rows/sec., 6.02 MiB/sec. uncompressed, 2.83 MiB/sec. compressed)
2023.02.10 20:37:40.689326 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 778 rows, 40.27 KiB.
2023.02.10 20:37:40.698772 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.536 sec., 181235 rows, 9.20 MiB uncompressed, 4.32 MiB compressed, 53.232 uncompressed bytes per row, 24.969 compressed bytes per row, compression rate: 2.132 (118008.022 rows/sec., 5.99 MiB/sec. uncompressed, 2.81 MiB/sec. compressed)
2023.02.10 20:37:40.747567 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 783 rows, 40.53 KiB.
2023.02.10 20:37:40.755982 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 778 rows, 40.27 KiB.
2023.02.10 20:37:40.756288 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.571 sec., 180632 rows, 9.17 MiB uncompressed, 4.30 MiB compressed, 53.232 uncompressed bytes per row, 24.965 compressed bytes per row, compression rate: 2.132 (114952.107 rows/sec., 5.84 MiB/sec. uncompressed, 2.74 MiB/sec. compressed)
2023.02.10 20:37:40.762947 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.544 sec., 180871 rows, 9.18 MiB uncompressed, 4.31 MiB compressed, 53.232 uncompressed bytes per row, 24.962 compressed bytes per row, compression rate: 2.133 (117117.113 rows/sec., 5.95 MiB/sec. uncompressed, 2.79 MiB/sec. compressed)
2023.02.10 20:37:40.783162 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 779 rows, 40.32 KiB.
2023.02.10 20:37:40.791678 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.557 sec., 181056 rows, 9.19 MiB uncompressed, 4.31 MiB compressed, 53.232 uncompressed bytes per row, 24.990 compressed bytes per row, compression rate: 2.130 (116272.947 rows/sec., 5.90 MiB/sec. uncompressed, 2.77 MiB/sec. compressed)
2023.02.10 20:37:40.794420 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 784 rows, 40.58 KiB.
2023.02.10 20:37:40.801395 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.602 sec., 180245 rows, 9.15 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.975 compressed bytes per row, compression rate: 2.131 (112525.423 rows/sec., 5.71 MiB/sec. uncompressed, 2.68 MiB/sec. compressed)
2023.02.10 20:37:40.808499 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 792 rows, 40.99 KiB.
2023.02.10 20:37:40.814891 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.570 sec., 179589 rows, 9.12 MiB uncompressed, 4.28 MiB compressed, 53.234 uncompressed bytes per row, 25.002 compressed bytes per row, compression rate: 2.129 (114361.607 rows/sec., 5.81 MiB/sec. uncompressed, 2.73 MiB/sec. compressed)
2023.02.10 20:37:41.004178 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 3.80 GiB, peak 4.73 GiB, free memory in arenas 124.26 MiB, will set to 3.90 GiB (RSS), difference: 108.72 MiB
2023.02.10 20:37:42.003161 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.07 GiB, peak 4.73 GiB, free memory in arenas 124.24 MiB, will set to 4.01 GiB (RSS), difference: -54.51 MiB
2023.02.10 20:37:43.003969 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.22 GiB, peak 4.73 GiB, free memory in arenas 118.21 MiB, will set to 4.19 GiB (RSS), difference: -30.12 MiB
2023.02.10 20:37:44.004587 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.47 GiB, peak 4.73 GiB, free memory in arenas 93.00 MiB, will set to 4.42 GiB (RSS), difference: -47.73 MiB
2023.02.10 20:37:44.232463 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.236322 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613oaaaaa
2023.02.10 20:37:44.286675 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.287196 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613paaaaa
2023.02.10 20:37:44.315295 [ 1797 ] {} <Debug> DNSResolver: Updating DNS cache
2023.02.10 20:37:44.315497 [ 1797 ] {} <Debug> DNSResolver: Updated DNS cache
2023.02.10 20:37:44.340855 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.341405 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613qaaaaa
2023.02.10 20:37:44.395594 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.13 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.396113 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613raaaaa
2023.02.10 20:37:44.404822 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.13 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.407145 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613saaaaa
2023.02.10 20:37:44.416001 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.418103 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613taaaaa
2023.02.10 20:37:44.471882 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.472391 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613uaaaaa
2023.02.10 20:37:45.004051 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.37 GiB, peak 4.73 GiB, free memory in arenas 136.04 MiB, will set to 4.39 GiB (RSS), difference: 23.79 MiB
2023.02.10 20:37:45.648999 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 776 rows, 40.16 KiB.
2023.02.10 20:37:45.656111 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.424 sec., 180015 rows, 9.14 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.984 compressed bytes per row, compression rate: 2.131 (126429.931 rows/sec., 6.42 MiB/sec. uncompressed, 3.01 MiB/sec. compressed)
2023.02.10 20:37:45.787908 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 768 rows, 39.75 KiB.
2023.02.10 20:37:45.795987 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.509 sec., 180286 rows, 9.15 MiB uncompressed, 4.35 MiB compressed, 53.233 uncompressed bytes per row, 25.299 compressed bytes per row, compression rate: 2.104 (119435.798 rows/sec., 6.06 MiB/sec. uncompressed, 2.88 MiB/sec. compressed)
2023.02.10 20:37:45.858919 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 785 rows, 40.63 KiB.
2023.02.10 20:37:45.865490 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.525 sec., 179875 rows, 9.13 MiB uncompressed, 4.28 MiB compressed, 53.233 uncompressed bytes per row, 24.976 compressed bytes per row, compression rate: 2.131 (117960.245 rows/sec., 5.99 MiB/sec. uncompressed, 2.81 MiB/sec. compressed)
2023.02.10 20:37:45.906580 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 775 rows, 40.11 KiB.
2023.02.10 20:37:45.912077 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.517 sec., 179376 rows, 9.11 MiB uncompressed, 4.27 MiB compressed, 53.234 uncompressed bytes per row, 24.966 compressed bytes per row, compression rate: 2.132 (118268.088 rows/sec., 6.00 MiB/sec. uncompressed, 2.82 MiB/sec. compressed)
2023.02.10 20:37:45.970291 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 767 rows, 39.70 KiB.
2023.02.10 20:37:45.976009 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 800 rows, 41.41 KiB.
2023.02.10 20:37:45.976910 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.572 sec., 180338 rows, 9.16 MiB uncompressed, 4.33 MiB compressed, 53.233 uncompressed bytes per row, 25.153 compressed bytes per row, compression rate: 2.116 (114696.222 rows/sec., 5.82 MiB/sec. uncompressed, 2.75 MiB/sec. compressed)
2023.02.10 20:37:45.984986 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.569 sec., 180442 rows, 9.16 MiB uncompressed, 4.30 MiB compressed, 53.233 uncompressed bytes per row, 25.008 compressed bytes per row, compression rate: 2.129 (114977.647 rows/sec., 5.84 MiB/sec. uncompressed, 2.74 MiB/sec. compressed)
2023.02.10 20:37:46.003618 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 3.87 GiB, peak 4.73 GiB, free memory in arenas 125.11 MiB, will set to 3.95 GiB (RSS), difference: 88.39 MiB
2023.02.10 20:37:46.004664 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 785 rows, 40.63 KiB.
2023.02.10 20:37:46.016651 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.545 sec., 180275 rows, 9.15 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.966 compressed bytes per row, compression rate: 2.132 (116686.123 rows/sec., 5.92 MiB/sec. uncompressed, 2.78 MiB/sec. compressed)
2023.02.10 20:37:47.003911 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.00 GiB, peak 4.73 GiB, free memory in arenas 123.44 MiB, will set to 3.96 GiB (RSS), difference: -35.08 MiB
2023.02.10 20:37:48.003410 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.10 GiB, peak 4.73 GiB, free memory in arenas 117.40 MiB, will set to 4.09 GiB (RSS), difference: -19.71 MiB
2023.02.10 20:37:48.996628 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 2.99 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:48.999196 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613vaaaaa
2023.02.10 20:37:49.005089 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.36 GiB, peak 4.73 GiB, free memory in arenas 99.82 MiB, will set to 4.34 GiB (RSS), difference: -19.97 MiB
2023.02.10 20:37:49.573831 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.575041 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613waaaaa
2023.02.10 20:37:49.810581 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.811255 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613xaaaaa
2023.02.10 20:37:49.853701 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.855350 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613yaaaaa
2023.02.10 20:37:49.882184 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.883280 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.11 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.884476 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613zaaaaa
2023.02.10 20:37:49.885926 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613abaaaa
2023.02.10 20:37:49.919542 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.11 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.920430 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613bbaaaa
2023.02.10 20:37:50.004006 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.45 GiB, peak 4.73 GiB, free memory in arenas 136.23 MiB, will set to 4.44 GiB (RSS), difference: -9.73 MiB
2023.02.10 20:37:50.573485 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 783 rows, 40.53 KiB.
2023.02.10 20:37:50.580516 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.584 sec., 180225 rows, 9.15 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.974 compressed bytes per row, compression rate: 2.132 (113754.239 rows/sec., 5.77 MiB/sec. uncompressed, 2.71 MiB/sec. compressed)
2023.02.10 20:37:51.003710 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.24 GiB, peak 4.73 GiB, free memory in arenas 134.98 MiB, will set to 4.26 GiB (RSS), difference: 15.66 MiB
2023.02.10 20:37:51.055345 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 783 rows, 40.53 KiB.
2023.02.10 20:37:51.065132 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.491 sec., 180996 rows, 9.19 MiB uncompressed, 4.31 MiB compressed, 53.232 uncompressed bytes per row, 24.969 compressed bytes per row, compression rate: 2.132 (121353.009 rows/sec., 6.16 MiB/sec. uncompressed, 2.89 MiB/sec. compressed)
2023.02.10 20:37:51.288408 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 784 rows, 40.58 KiB.
2023.02.10 20:37:51.295473 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.485 sec., 179344 rows, 9.10 MiB uncompressed, 4.33 MiB compressed, 53.234 uncompressed bytes per row, 25.313 compressed bytes per row, compression rate: 2.103 (120765.782 rows/sec., 6.13 MiB/sec. uncompressed, 2.92 MiB/sec. compressed)
2023.02.10 20:37:51.377213 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 784 rows, 40.58 KiB.
2023.02.10 20:37:51.383646 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 774 rows, 40.06 KiB.
2023.02.10 20:37:51.386705 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.505 sec., 180690 rows, 9.17 MiB uncompressed, 4.30 MiB compressed, 53.232 uncompressed bytes per row, 24.958 compressed bytes per row, compression rate: 2.133 (120077.044 rows/sec., 6.10 MiB/sec. uncompressed, 2.86 MiB/sec. compressed)
2023.02.10 20:37:51.391441 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.538 sec., 179228 rows, 9.10 MiB uncompressed, 4.30 MiB compressed, 53.234 uncompressed bytes per row, 25.162 compressed bytes per row, compression rate: 2.116 (116531.085 rows/sec., 5.92 MiB/sec. uncompressed, 2.80 MiB/sec. compressed)
2023.02.10 20:37:51.401773 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 782 rows, 40.47 KiB.
2023.02.10 20:37:51.408625 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.489 sec., 180680 rows, 9.17 MiB uncompressed, 4.31 MiB compressed, 53.232 uncompressed bytes per row, 24.995 compressed bytes per row, compression rate: 2.130 (121311.116 rows/sec., 6.16 MiB/sec. uncompressed, 2.89 MiB/sec. compressed)
2023.02.10 20:37:51.416294 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 764 rows, 39.54 KiB.
2023.02.10 20:37:51.422094 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.539 sec., 179923 rows, 9.13 MiB uncompressed, 4.32 MiB compressed, 53.233 uncompressed bytes per row, 25.181 compressed bytes per row, compression rate: 2.114 (116899.583 rows/sec., 5.93 MiB/sec. uncompressed, 2.81 MiB/sec. compressed)
2023.02.10 20:37:52.003859 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 3.86 GiB, peak 4.73 GiB, free memory in arenas 116.44 MiB, will set to 3.93 GiB (RSS), difference: 67.98 MiB
2023.02.10 20:37:53.002999 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.11 GiB, peak 4.73 GiB, free memory in arenas 110.43 MiB, will set to 4.06 GiB (RSS), difference: -48.56 MiB
2023.02.10 20:37:54.003883 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.28 GiB, peak 4.73 GiB, free memory in arenas 16.57 MiB, will set to 4.27 GiB (RSS), difference: -17.29 MiB
2023.02.10 20:37:54.044228 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 2.93 GiB on local disk `_tmp_default`, having unreserved 39.13 GiB.
2023.02.10 20:37:54.045063 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613cbaaaa
2023.02.10 20:37:54.757869 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.02 GiB on local disk `_tmp_default`, having unreserved 39.13 GiB.
2023.02.10 20:37:54.759505 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613dbaaaa
2023.02.10 20:37:55.004887 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.42 GiB, peak 4.73 GiB, free memory in arenas 24.93 MiB, will set to 4.39 GiB (RSS), difference: -29.13 MiB
2023.02.10 20:37:55.162643 [ 1615 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Information> TCPHandler: Query was cancelled.
2023.02.10 20:37:55.322442 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.10 GiB on local disk `_tmp_default`, having unreserved 39.13 GiB.
2023.02.10 20:37:55.323246 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613ebaaaa
2023.02.10 20:37:55.692368 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.09 GiB on local disk `_tmp_default`, having unreserved 39.12 GiB.
2023.02.10 20:37:55.692916 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613fbaaaa
2023.02.10 20:37:55.696002 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.09 GiB on local disk `_tmp_default`, having unreserved 39.12 GiB.
2023.02.10 20:37:55.697863 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613gbaaaa
2023.02.10 20:37:55.704975 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.09 GiB on local disk `_tmp_default`, having unreserved 39.12 GiB.
2023.02.10 20:37:55.705756 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613hbaaaa
2023.02.10 20:37:55.709936 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.09 GiB on local disk `_tmp_default`, having unreserved 39.12 GiB.
2023.02.10 20:37:55.711498 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613ibaaaa
2023.02.10 20:37:55.825763 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 794 rows, 41.10 KiB.
2023.02.10 20:37:55.831723 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.788 sec., 180747 rows, 9.18 MiB uncompressed, 4.31 MiB compressed, 53.232 uncompressed bytes per row, 24.992 compressed bytes per row, compression rate: 2.130 (101099.002 rows/sec., 5.13 MiB/sec. uncompressed, 2.41 MiB/sec. compressed)
2023.02.10 20:37:56.003683 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.33 GiB, peak 4.73 GiB, free memory in arenas 53.52 MiB, will set to 4.33 GiB (RSS), difference: 2.70 MiB
2023.02.10 20:37:56.286569 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 796 rows, 41.20 KiB.
2023.02.10 20:37:56.292846 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.535 sec., 179932 rows, 9.13 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.983 compressed bytes per row, compression rate: 2.131 (117210.093 rows/sec., 5.95 MiB/sec. uncompressed, 2.79 MiB/sec. compressed)
2023.02.10 20:37:56.560494 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 771 rows, 39.91 KiB.
2023.02.10 20:37:56.565874 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.244 sec., 179893 rows, 9.13 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.979 compressed bytes per row, compression rate: 2.131 (144631.558 rows/sec., 7.34 MiB/sec. uncompressed, 3.45 MiB/sec. compressed)
2023.02.10 20:37:56.806713 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 775 rows, 40.11 KiB.
2023.02.10 20:37:56.820729 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 795 rows, 41.15 KiB.
2023.02.10 20:37:56.827589 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.136 sec., 179598 rows, 9.12 MiB uncompressed, 4.28 MiB compressed, 53.234 uncompressed bytes per row, 24.996 compressed bytes per row, compression rate: 2.130 (158159.948 rows/sec., 8.03 MiB/sec. uncompressed, 3.77 MiB/sec. compressed)
2023.02.10 20:37:56.837133 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 776 rows, 40.16 KiB.
2023.02.10 20:37:56.840970 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.131 sec., 180683 rows, 9.17 MiB uncompressed, 4.30 MiB compressed, 53.232 uncompressed bytes per row, 24.971 compressed bytes per row, compression rate: 2.132 (159740.939 rows/sec., 8.11 MiB/sec. uncompressed, 3.80 MiB/sec. compressed)
2023.02.10 20:37:56.841142 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 804 rows, 41.61 KiB.
2023.02.10 20:37:56.842784 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.147 sec., 180033 rows, 9.14 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.980 compressed bytes per row, compression rate: 2.131 (156947.772 rows/sec., 7.97 MiB/sec. uncompressed, 3.74 MiB/sec. compressed)
2023.02.10 20:37:56.846135 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.141 sec., 180762 rows, 9.18 MiB uncompressed, 4.30 MiB compressed, 53.232 uncompressed bytes per row, 24.960 compressed bytes per row, compression rate: 2.133 (158368.440 rows/sec., 8.04 MiB/sec. uncompressed, 3.77 MiB/sec. compressed)
2023.02.10 20:37:56.966232 [ 1615 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Error> executeQuery: Code: 210. DB::NetException: I/O error: Broken pipe, while writing to socket ([::1]:42420). (NETWORK_ERROR) (version 23.2.1.1) (from [::1]:42420) (in query: select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity) from customer, orders, lineitem where c_custkey = o_custkey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice order by o_totalprice desc, o_orderdate limit 10;), Stack trace (when copying this message, always include the lines below):
 

@adofsauron
Collaborator Author

Query pipeline (EXPLAIN PIPELINE output):


│ ExpressionTransform                                                │
│   (Limit)                                                          │
│   Limit                                                            │
│     (Sorting)                                                      │
│     MergingSortedTransform 16 → 1                                  │
│       MergeSortingTransform × 16                                   │
│         LimitsCheckingTransform × 16                               │
│           PartialSortingTransform × 16                             │
│             (Expression)                                           │
│             ExpressionTransform × 16                               │
│               (Aggregating)                                        │
│               Resize 8 → 16                                        │
│                 AggregatingTransform × 8                           │
│                   StrictResize 8 → 8                               │
│                     (Expression)                                   │
│                     ExpressionTransform × 8                        │
│                       (Filter)                                     │
│                       FilterTransform × 8                          │
│                         (Join)                                     │
│                         JoiningTransform × 8 2 → 1                 │
│                           (Filter)                                 │
│                           FilterTransform × 8                      │
│                             (Filter)                               │
│                             FilterTransform × 8                    │
│                               (Join)                               │
│                               JoiningTransform × 8 2 → 1           │
│                                 (Expression)                       │
│                                 ExpressionTransform × 8            │
│                                   (ReadFromMergeTree)              │
│                                   MergeTreeThread × 8 0 → 1        │
│                                 (Expression)                       │
│                                 Resize 1 → 8                       │
│                                   FillingRightJoinSide             │
│                                     Resize 16 → 1                  │
│                                       ExpressionTransform × 16     │
│                                         (ReadFromMergeTree)        │
│                                         MergeTreeThread × 16 0 → 1 │
│                           (Expression)                             │
│                           Resize 1 → 8                             │
│                             FillingRightJoinSide                   │
│                               Resize 16 → 1                        │
│                                 ExpressionTransform × 16           │
│                                   (ReadFromMergeTree)              │
│                                   MergeTreeThread × 16 0 → 1       │


@adofsauron
Collaborator Author

Parsed query (AST):


│ SelectWithUnionQuery (children 1)           │
│  ExpressionList (children 1)                │
│   SelectQuery (children 6)                  │
│    ExpressionList (children 6)              │
│     Identifier c_name                       │
│     Identifier c_custkey                    │
│     Identifier o_orderkey                   │
│     Identifier o_orderdate                  │
│     Identifier o_totalprice                 │
│     Function sum (children 1)               │
│      ExpressionList (children 1)            │
│       Identifier l_quantity                 │
│    TablesInSelectQuery (children 3)         │
│     TablesInSelectQueryElement (children 1) │
│      TableExpression (children 1)           │
│       TableIdentifier customer              │
│     TablesInSelectQueryElement (children 2) │
│      TableExpression (children 1)           │
│       TableIdentifier orders                │
│      TableJoin                              │
│     TablesInSelectQueryElement (children 2) │
│      TableExpression (children 1)           │
│       TableIdentifier lineitem              │
│      TableJoin                              │
│    Function and (children 1)                │
│     ExpressionList (children 2)             │
│      Function equals (children 1)           │
│       ExpressionList (children 2)           │
│        Identifier c_custkey                 │
│        Identifier o_custkey                 │
│      Function equals (children 1)           │
│       ExpressionList (children 2)           │
│        Identifier o_orderkey                │
│        Identifier l_orderkey                │
│    ExpressionList (children 5)              │
│     Identifier c_name                       │
│     Identifier c_custkey                    │
│     Identifier o_orderkey                   │
│     Identifier o_orderdate                  │
│     Identifier o_totalprice                 │
│    ExpressionList (children 2)              │
│     OrderByElement (children 1)             │
│      Identifier o_totalprice                │
│     OrderByElement (children 1)             │
│      Identifier o_orderdate                 │
│    Literal UInt64_10                        │


@adofsauron
Collaborator Author

HashJoin: (0x7f2af22cb798) Keys: [(c_custkey) = (o_custkey)]

@adofsauron
Collaborator Author

DiskLocal: Reserved 3.26 GiB on local disk _tmp_default, having unreserved 39.26 GiB.

@adofsauron
Collaborator Author

0 ./24773aaaaaa
0 ./24773baaaaa
0 ./25102aaaaaa
4412 ./25102abaaaa
5532 ./25102baaaaa
4392 ./25102bbaaaa
5456 ./25102caaaaa
4412 ./25102cbaaaa
5516 ./25102daaaaa
4404 ./25102dbaaaa
5524 ./25102eaaaaa
4392 ./25102ebaaaa
5552 ./25102faaaaa
5520 ./25102gaaaaa
4400 ./25102haaaaa
4372 ./25102iaaaaa
4388 ./25102jaaaaa
4412 ./25102kaaaaa
4420 ./25102laaaaa
4404 ./25102maaaaa
4420 ./25102naaaaa
4376 ./25102oaaaaa
4456 ./25102paaaaa
4396 ./25102qaaaaa
4388 ./25102raaaaa
4408 ./25102saaaaa
4432 ./25102taaaaa
4396 ./25102uaaaaa
4428 ./25102vaaaaa
4396 ./25102waaaaa
4408 ./25102xaaaaa
4416 ./25102yaaaaa
4436 ./25102zaaaaa
5524 ./25369aaaaaa
4416 ./25369abaaaa
560 ./25369acaaaa
5528 ./25369baaaaa
4404 ./25369bbaaaa
5516 ./25369caaaaa
4408 ./25369cbaaaa
5552 ./25369daaaaa
4384 ./25369dbaaaa
5532 ./25369eaaaaa
4392 ./25369ebaaaa
5456 ./25369faaaaa
4392 ./25369fbaaaa
5520 ./25369gaaaaa
4412 ./25369gbaaaa
4388 ./25369haaaaa
4392 ./25369hbaaaa
4412 ./25369iaaaaa
4408 ./25369ibaaaa
4420 ./25369jaaaaa
3316 ./25369jbaaaa
4404 ./25369kaaaaa
3304 ./25369kbaaaa
4400 ./25369laaaaa
3312 ./25369lbaaaa
4372 ./25369maaaaa
3304 ./25369mbaaaa
4420 ./25369naaaaa
3300 ./25369nbaaaa
4376 ./25369oaaaaa
4396 ./25369obaaaa
4408 ./25369paaaaa
4404 ./25369pbaaaa
4432 ./25369qaaaaa
656 ./25369qbaaaa
4388 ./25369raaaaa
4456 ./25369saaaaa
4396 ./25369taaaaa
4396 ./25369uaaaaa
4408 ./25369vaaaaa
4412 ./25369waaaaa
4428 ./25369xaaaaa
4460 ./25369xbaaaa
4436 ./25369yaaaaa
4396 ./25369ybaaaa
4396 ./25369zaaaaa
4392 ./25369zbaaaa

@adofsauron
Collaborator Author

#0 DB::MergingAggregatedBucketTransform::transform (this=0x7ff2bb134718, chunk=...) at ../src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp:318
#1 0x0000000026b6e4a2 in DB::ISimpleTransform::transform (this=0x7ff2bb134718, input_chunk=..., output_chunk=...) at ../src/Processors/ISimpleTransform.h:32
#2 0x000000002d037c01 in DB::ISimpleTransform::work (this=0x7ff2bb134718) at ../src/Processors/ISimpleTransform.cpp:89
#3 0x000000002d07ca03 in DB::executeJob (node=0x7ff250b11800, read_progress_callback=0x7ff24d0388a0) at ../src/Processors/Executors/ExecutionThreadContext.cpp:47
#4 0x000000002d07c719 in DB::ExecutionThreadContext::executeTask (this=0x7ff2bb0dece0) at ../src/Processors/Executors/ExecutionThreadContext.cpp:92
#5 0x000000002d058861 in DB::PipelineExecutor::executeStepImpl (this=0x7ff24cf4fc18, thread_num=10, yield_flag=0x0) at ../src/Processors/Executors/PipelineExecutor.cpp:229
#6 0x000000002d058b97 in DB::PipelineExecutor::executeSingleThread (this=0x7ff24cf4fc18, thread_num=10) at ../src/Processors/Executors/PipelineExecutor.cpp:195
#7 0x000000002d05a416 in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const (this=0x7ff260757088) at ../src/Processors/Executors/PipelineExecutor.cpp:320
#8 0x000000002d05a375 in std::__1::__invoke[abi:v15000]DB::PipelineExecutor::spawnThreads()::$_0& (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#9 0x000000002d05a321 in std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) (__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1789
#10 0x000000002d05a232 in std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) (
__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1798
#11 0x000000002d05a11a in ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImplDB::PipelineExecutor::spawnThreads()::$_0(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}::operator()() (this=0x7ff2bb0c3640) at ../src/Common/ThreadPool.h:210
#12 0x000000002d05a055 in std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImplDB::PipelineExecutor::spawnThreads()::$_0(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#13 0x000000002d05a01d in std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImplDB::PipelineExecutor::spawnThreads()::$_0(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&>(ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImplDB::PipelineExecutor::spawnThreads()::$_0(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&) (__args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:479
#14 0x000000002d059ff5 in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImplDB::PipelineExecutor::spawnThreads()::$_0(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()>::operator()abi:v15000 (this=0x7ff2bb0c3640) at ../contrib/llvm-project/libcxx/include/__functional/function.h:235
#15 0x000000002d059fc0 in std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImplDB::PipelineExecutor::spawnThreads()::$_0(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*) (__buf=0x7ff260757348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:716
#16 0x000000001a3c40a6 in std::__1::__function::__policy_func<void ()>::operator()abi:v15000 const (this=0x7ff260757348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
#17 0x000000001a3bf9d5 in std::__1::function<void ()>::operator()() const (this=0x7ff260757348) at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
#18 0x000000001a4ccb6e in ThreadPoolImplstd::__1::thread::worker (this=0x7ff38534b280, thread_it=...) at ../src/Common/ThreadPool.cpp:315
#19 0x000000001a4d43e4 in ThreadPoolImplstd::__1::thread::scheduleImpl(std::__1::function<void ()>, long, std::__1::optional, bool)::{lambda()#2}::operator()() const (this=0x7ff2b7f96ea8) at ../src/Common/ThreadPool.cpp:145
#20 0x000000001a4d4375 in std::__1::__invoke[abi:v15000]<ThreadPoolImplstd::__1::thread::scheduleImpl(std::__1::function<void ()>, long, std::__1::optional, bool)::{lambda()#2}> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#21 0x000000001a4d42a5 in std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_deletestd::__1::__thread_struct >, ThreadPoolImplstd::__1::thread::scheduleImpl(std::__1::function<void ()>, long, std::__1::optional, bool)::{lambda()#2}>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_deletestd::__1::__thread_struct >, ThreadPoolImplstd::__1::thread::scheduleImpl(std::__1::function<void ()>, long, std::__1::optional, bool)::{lambda()#2}>&, std::__1::__tuple_indices<>) (__t=...) at ../contrib/llvm-project/libcxx/include/thread:284
#22 0x000000001a4d3c02 in std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_deletestd::__1::__thread_struct >, ThreadPoolImplstd::__1::thread::scheduleImpl(std::__1::function<void ()>, long, std::__1::optional, bool)::{lambda()#2}> >(void*) (__vp=0x7ff2b7f96ea0)
at ../contrib/llvm-project/libcxx/include/thread:295
#23 0x00007ff3862b1802 in start_thread () from /lib64/libc.so.6
#24 0x00007ff386251450 in clone3 () from /lib64/libc.so.6

@adofsauron
Collaborator Author

static void executeJob(ExecutingGraph::Node * node, ReadProgressCallback * read_progress_callback)
{
    try
    {
        node->processor->work();

        /// Update read progress only for source nodes.
        bool is_source = node->back_edges.empty();

        if (is_source && read_progress_callback)
        {
            if (auto read_progress = node->processor->getReadProgress())
            {
                if (read_progress->counters.total_rows_approx)
                    read_progress_callback->addTotalRowsApprox(read_progress->counters.total_rows_approx);

                if (!read_progress_callback->onProgress(read_progress->counters.read_rows, read_progress->counters.read_bytes, read_progress->limits))
                    node->processor->cancel();
            }
        }
    }
    catch (Exception & exception)
    {
        if (checkCanAddAdditionalInfoToException(exception))
            exception.addMessage("While executing " + node->processor->getName());
        throw;
    }
}

@adofsauron
Collaborator Author

Flush data in the RAM to disk also. It's easier than merging on-disk and RAM data.
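
A rough control-flow sketch of the spill behaviour described here (hypothetical names such as HashAggState and writePartToDisk, not the real ClickHouse/Tianmu API): once the in-memory hash state crosses a byte threshold it is written out as a temporary part and reset, and whatever is still in RAM at the end is flushed as well, so the final pass only has to merge files.

// external_aggregation_sketch.cpp -- control-flow illustration only (hypothetical API).
#include <cstddef>
#include <string>
#include <vector>

struct HashAggState {
    size_t bytesUsed() const { return bytes; }
    void   consume(/* a block of input rows */) { bytes += 4096; }  // stand-in for real work
    void   reset() { bytes = 0; }
    size_t bytes = 0;
};

std::string writePartToDisk(const HashAggState &, int part_no)
{
    return "./tmp/agg_part_" + std::to_string(part_no);  // pretend the state was serialized here
}

std::vector<std::string> aggregateWithSpill(size_t num_blocks, size_t max_bytes_before_spill)
{
    HashAggState state;
    std::vector<std::string> spilled_parts;

    for (size_t b = 0; b < num_blocks; ++b)
    {
        state.consume();
        if (state.bytesUsed() > max_bytes_before_spill)       // RAM limit exceeded
        {
            spilled_parts.push_back(writePartToDisk(state, (int)spilled_parts.size()));
            state.reset();                                     // free the in-memory table
        }
    }

    // Flush what is left in RAM too: merging file-only parts is simpler than
    // merging a mix of on-disk and in-memory data.
    if (state.bytesUsed() > 0)
        spilled_parts.push_back(writePartToDisk(state, (int)spilled_parts.size()));

    return spilled_parts;                                      // merged bucket-by-bucket later
}

int main()
{
    auto parts = aggregateWithSpill(/*num_blocks=*/100, /*max_bytes_before_spill=*/64 * 1024);
    // 'parts' now lists every temporary file a later pass would merge.
    (void)parts;
}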

@adofsauron
Collaborator Author

Pre-aggregates data from ports, holding in RAM only one or more (up to merging_threads) blocks from each source.
This saves RAM in case of using two-level aggregation, where in each source there will be up to 256 blocks with parts of the result.

Aggregate functions in blocks should not be finalized so that their states can be combined.

Used to solve two tasks:

  1. External aggregation with data flush to disk.
    Partially aggregated data (previously divided into 256 buckets) is flushed to some number of files on the disk.
    We need to read them and merge them by buckets - keeping only a few buckets from each file in RAM simultaneously.

  2. Merge aggregation results for distributed query processing.
    Partially aggregated data arrives from different servers; it may or may not have been split into 256 buckets,
    and these buckets are passed to us over the network from each server in sequence, one by one.
    You should also read and merge by the buckets.

The essence of the work:

There are a number of sources. They give out blocks with partially aggregated data.
Each source can return one of the following block sequences:

  1. "unsplitted" block with bucket_num = -1;
  2. "split" (two_level) blocks with bucket_num from 0 to 255;
    In both cases, there may also be a block of "overflows" with bucket_num = -1 and is_overflows = true;

We start from the convention that split blocks are always passed in the order of bucket_num.
That is, if a < b, then the bucket_num = a block goes before bucket_num = b.
This is needed for a memory-efficient merge

  • so that you do not need to read all the blocks in advance, but can proceed upward through the bucket_num values.

In this case, not all bucket_num from the range of 0..255 can be present.
The overflow block can appear in any position relative to the other blocks (but there can be only one of them).

It is necessary to combine these sequences of blocks and return the result as a sequence with the same properties.
That is, at the output, if there are "split" blocks in the sequence, then they should go in the order of bucket_num.

The merge can be performed using several (merging_threads) threads.
For this, receiving of a set of blocks for the next bucket_num should be done sequentially,
and then, when we have several received sets, they can be merged in parallel.

When you receive next blocks from different sources,
data from sources can also be read in several threads (reading_threads)
for optimal performance in the presence of a fast network or disks (from where these blocks are read).
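
A minimal sketch of the bucket-ordered merge described above, under the assumption that each source already emits its split blocks in ascending bucket_num order (all types and names here are simplified stand-ins, not the actual MergingAggregatedMemoryEfficientTransform):

// merge_by_bucket_sketch.cpp -- illustrative only; hypothetical types, not ClickHouse's.
#include <algorithm>
#include <climits>
#include <cstddef>
#include <vector>

struct PartialBlock { int bucket_num = -1; long partial_sum = 0; };

// A "source" is just a vector of blocks already sorted by bucket_num.
using Source = std::vector<PartialBlock>;

// Merge split blocks bucket by bucket, so at any moment only the blocks of the
// bucket currently being merged are held (plus one look-ahead position per source).
std::vector<PartialBlock> mergeByBucket(std::vector<Source> const & sources)
{
    std::vector<size_t> pos(sources.size(), 0);   // read cursor per source
    std::vector<PartialBlock> merged;

    while (true)
    {
        int current = INT_MAX;                    // smallest bucket_num still pending
        for (size_t i = 0; i < sources.size(); ++i)
            if (pos[i] < sources[i].size())
                current = std::min(current, sources[i][pos[i]].bucket_num);
        if (current == INT_MAX)
            break;                                // every source is exhausted

        PartialBlock out{current, 0};
        for (size_t i = 0; i < sources.size(); ++i)
            if (pos[i] < sources[i].size() && sources[i][pos[i]].bucket_num == current)
                out.partial_sum += sources[i][pos[i]++].partial_sum;  // combine unfinalized states

        merged.push_back(out);                    // buckets come out in ascending order
    }
    return merged;
}

int main()
{
    std::vector<Source> sources = {
        {{0, 10}, {3, 5}},                        // source A: buckets 0 and 3
        {{0, 1},  {2, 7}, {3, 2}},                // source B: buckets 0, 2 and 3
    };
    auto merged = mergeByBucket(sources);         // -> buckets 0, 2, 3 with combined sums
    (void)merged;
}

Because only the blocks belonging to the current bucket_num are touched at any moment, at most a handful of buckets live in RAM regardless of how many sources or spilled files there are.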

@adofsauron
Collaborator Author

auto & output = outputs.front();

auto info = std::make_shared<ChunksToMerge>();
info->bucket_num = bucket;
info->is_overflows = is_overflows;
info->chunks = std::make_unique<Chunks>(std::move(chunks));

Chunk chunk;
chunk.setChunkInfo(std::move(info));
output.push(std::move(chunk));

@adofsauron
Collaborator Author

        auto block = tmp_stream->read();
        if (!block)
        {
            tmp_stream = nullptr;
            return {};
        }
        return convertToChunk(block);

@adofsauron
Collaborator Author

class AggregatedChunkInfo : public ChunkInfo
{
public:
    bool is_overflows = false;
    Int32 bucket_num = -1;
    UInt64 chunk_num = 0; // chunk number in order of generation, used during memory bound merging to restore chunks order
};

@adofsauron
Collaborator Author

Working with states of aggregate functions in the pool is arranged in the following (inconvenient) way:

  • when aggregating, states are created in the pool using IAggregateFunction::create (inside - placement new of arbitrary structure);
  • they must then be destroyed using IAggregateFunction::destroy (inside - calling the destructor of arbitrary structure);
  • if aggregation is complete, then, in the Aggregator::convertToBlocks function, pointers to the states of aggregate functions
    are written to ColumnAggregateFunction; ColumnAggregateFunction "acquires ownership" of them, that is - calls destroy in its destructor.
  • if during the aggregation, before call to Aggregator::convertToBlocks, an exception was thrown,
    then the states of aggregate functions must still be destroyed,
    otherwise, for complex states (eg, AggregateFunctionUniq), there will be memory leaks;
  • in this case, to destroy states, the destructor calls Aggregator::destroyAggregateStates method,
    but only if the variable aggregator (see below) is not nullptr;
  • that is, until you transfer ownership of the aggregate function states in the ColumnAggregateFunction, set the variable aggregator,
    so that when an exception occurs, the states are correctly destroyed.

PS. This can be corrected by making a pool that knows about which states of aggregate functions and in which order are put in it, and knows how to destroy them.
But this can hardly be done simply because it is planned to put variable-length strings into the same pool.
In this case, the pool will not be able to know with what offsets objects are stored.
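
A simplified illustration of the ownership protocol described above (toy Arena, UniqState and ColumnAggregateFunction types, not the real ones): states are placement-constructed in the pool, the aggregator destroys them if an exception escapes, and once they are handed to the column the column's destructor becomes responsible for calling their destructors.

// state_ownership_sketch.cpp -- simplified illustration of the pattern, not ClickHouse code.
#include <cstddef>
#include <new>
#include <vector>

struct UniqState { /* imagine a complex state owning extra resources */ };

struct Arena {
    std::vector<char *> chunks;
    char * alloc(size_t n) { char * p = new char[n]; chunks.push_back(p); return p; }
    ~Arena() { for (char * p : chunks) delete[] p; }    // frees the raw memory only
};

struct ColumnAggregateFunction {
    std::vector<UniqState *> states;                    // "acquires ownership" of the states
    ~ColumnAggregateFunction() { for (auto * s : states) s->~UniqState(); }
};

void aggregate(Arena & arena, ColumnAggregateFunction & column)
{
    std::vector<UniqState *> created;                   // the aggregator still owns these
    try
    {
        for (int i = 0; i < 3; ++i)                     // IAggregateFunction::create analogue
            created.push_back(new (arena.alloc(sizeof(UniqState))) UniqState());

        // ... aggregation work that may throw ...

        column.states = created;                        // transfer ownership to the column
        created.clear();
    }
    catch (...)
    {
        for (auto * s : created)                        // destroyAggregateStates analogue
            s->~UniqState();
        throw;
    }
}

int main()
{
    Arena arena;
    ColumnAggregateFunction column;   // destroyed before the arena, so states die before their memory
    aggregate(arena, column);
}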

@adofsauron
Collaborator Author

// Disable consecutive key optimization for Uint8/16, because they use a FixedHashMap
// and the lookup there is almost free, so we don't need to cache the last lookup result
std::unique_ptr<AggregationMethodOneNumber<UInt8, AggregatedDataWithUInt8Key, false>>           key8;
std::unique_ptr<AggregationMethodOneNumber<UInt16, AggregatedDataWithUInt16Key, false>>         key16;

std::unique_ptr<AggregationMethodOneNumber<UInt32, AggregatedDataWithUInt64Key>>         key32;
std::unique_ptr<AggregationMethodOneNumber<UInt64, AggregatedDataWithUInt64Key>>         key64;
std::unique_ptr<AggregationMethodStringNoCache<AggregatedDataWithShortStringKey>>               key_string;
std::unique_ptr<AggregationMethodFixedStringNoCache<AggregatedDataWithShortStringKey>>          key_fixed_string;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithUInt16Key, false, false, false>>  keys16;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithUInt32Key>>                   keys32;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithUInt64Key>>                   keys64;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys128>>                   keys128;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys256>>                   keys256;
std::unique_ptr<AggregationMethodSerialized<AggregatedDataWithStringKey>>                serialized;

std::unique_ptr<AggregationMethodOneNumber<UInt32, AggregatedDataWithUInt64KeyTwoLevel>> key32_two_level;
std::unique_ptr<AggregationMethodOneNumber<UInt64, AggregatedDataWithUInt64KeyTwoLevel>> key64_two_level;
std::unique_ptr<AggregationMethodStringNoCache<AggregatedDataWithShortStringKeyTwoLevel>>       key_string_two_level;
std::unique_ptr<AggregationMethodFixedStringNoCache<AggregatedDataWithShortStringKeyTwoLevel>>  key_fixed_string_two_level;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithUInt32KeyTwoLevel>>           keys32_two_level;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithUInt64KeyTwoLevel>>           keys64_two_level;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys128TwoLevel>>           keys128_two_level;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys256TwoLevel>>           keys256_two_level;
std::unique_ptr<AggregationMethodSerialized<AggregatedDataWithStringKeyTwoLevel>>        serialized_two_level;

std::unique_ptr<AggregationMethodOneNumber<UInt64, AggregatedDataWithUInt64KeyHash64>>   key64_hash64;
std::unique_ptr<AggregationMethodString<AggregatedDataWithStringKeyHash64>>              key_string_hash64;
std::unique_ptr<AggregationMethodFixedString<AggregatedDataWithStringKeyHash64>>         key_fixed_string_hash64;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys128Hash64>>             keys128_hash64;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys256Hash64>>             keys256_hash64;
std::unique_ptr<AggregationMethodSerialized<AggregatedDataWithStringKeyHash64>>          serialized_hash64;

/// Support for nullable keys.
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys128, true>>             nullable_keys128;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys256, true>>             nullable_keys256;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys128TwoLevel, true>>     nullable_keys128_two_level;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys256TwoLevel, true>>     nullable_keys256_two_level;

/// Support for low cardinality.
std::unique_ptr<AggregationMethodSingleLowCardinalityColumn<AggregationMethodOneNumber<UInt8, AggregatedDataWithNullableUInt8Key, false>>> low_cardinality_key8;
std::unique_ptr<AggregationMethodSingleLowCardinalityColumn<AggregationMethodOneNumber<UInt16, AggregatedDataWithNullableUInt16Key, false>>> low_cardinality_key16;
std::unique_ptr<AggregationMethodSingleLowCardinalityColumn<AggregationMethodOneNumber<UInt32, AggregatedDataWithNullableUInt64Key>>> low_cardinality_key32;
std::unique_ptr<AggregationMethodSingleLowCardinalityColumn<AggregationMethodOneNumber<UInt64, AggregatedDataWithNullableUInt64Key>>> low_cardinality_key64;
std::unique_ptr<AggregationMethodSingleLowCardinalityColumn<AggregationMethodString<AggregatedDataWithNullableStringKey>>> low_cardinality_key_string;
std::unique_ptr<AggregationMethodSingleLowCardinalityColumn<AggregationMethodFixedString<AggregatedDataWithNullableStringKey>>> low_cardinality_key_fixed_string;

std::unique_ptr<AggregationMethodSingleLowCardinalityColumn<AggregationMethodOneNumber<UInt32, AggregatedDataWithNullableUInt64KeyTwoLevel>>> low_cardinality_key32_two_level;
std::unique_ptr<AggregationMethodSingleLowCardinalityColumn<AggregationMethodOneNumber<UInt64, AggregatedDataWithNullableUInt64KeyTwoLevel>>> low_cardinality_key64_two_level;
std::unique_ptr<AggregationMethodSingleLowCardinalityColumn<AggregationMethodString<AggregatedDataWithNullableStringKeyTwoLevel>>> low_cardinality_key_string_two_level;
std::unique_ptr<AggregationMethodSingleLowCardinalityColumn<AggregationMethodFixedString<AggregatedDataWithNullableStringKeyTwoLevel>>> low_cardinality_key_fixed_string_two_level;

std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys128, false, true>>      low_cardinality_keys128;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys256, false, true>>      low_cardinality_keys256;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys128TwoLevel, false, true>> low_cardinality_keys128_two_level;
std::unique_ptr<AggregationMethodKeysFixed<AggregatedDataWithKeys256TwoLevel, false, true>> low_cardinality_keys256_two_level;

@adofsauron
Collaborator Author

What are the static and dynamic flows of the aggregation's overall architecture?

From the module-layer analysis, it can be divided into several modules.

What are the core responsibilities of the different modules, and where are the boundaries of interaction between them?

How do classes interact within a single module, and what are the relationships between classes? Generalization? Composition? Aggregation? CRTP (the Curiously Recurring Template Pattern)?

@adofsauron
Collaborator Author

#0 0x000000002d48c6ea in DB::AggregatingTransform::AggregatingTransform (this=0x7f332b252c18, header=..., params_=..., many_data_=..., current_variant=0, max_threads_=16,
temporary_data_merge_threads_=16) at ../src/Processors/Transforms/AggregatingTransform.cpp:397
#1 0x000000002ca624f7 in std::__1::construct_at[abi:v15000]<DB::AggregatingTransform, DB::Block const&, std::__1::shared_ptrDB::AggregatingTransformParams&, std::__1::shared_ptrDB::ManyAggregatedData&, unsigned long, unsigned long&, unsigned long&, DB::AggregatingTransform*>(DB::AggregatingTransform*, DB::Block const&, std::__1::shared_ptrDB::AggregatingTransformParams&, std::__1::shared_ptrDB::ManyAggregatedData&, unsigned long&&, unsigned long&, unsigned long&) (__location=0x7f332b252c18, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16,
__args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16) at ../contrib/llvm-project/libcxx/include/__memory/construct_at.h:35
#2 0x000000002ca621bb in std::__1::allocator_traits<std::__1::allocatorDB::AggregatingTransform >::construct[abi:v15000]<DB::AggregatingTransform, DB::Block const&, std::__1::shared_ptrDB::AggregatingTransformParams&, std::__1::shared_ptrDB::ManyAggregatedData&, unsigned long, unsigned long&, unsigned long&, void, void>(std::__1::allocatorDB::AggregatingTransform&, DB::AggregatingTransform*, DB::Block const&, std::__1::shared_ptrDB::AggregatingTransformParams&, std::__1::shared_ptrDB::ManyAggregatedData&, unsigned long&&, unsigned long&, unsigned long&) (__p=0x7f332b252c18, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16,
__args=@0x7f332b193228: 16) at ../contrib/llvm-project/libcxx/include/__memory/allocator_traits.h:298
#3 0x000000002ca61e19 in std::__1::__shared_ptr_emplace<DB::AggregatingTransform, std::__1::allocatorDB::AggregatingTransform >::__shared_ptr_emplace[abi:v15000]<DB::Block const&, std::__1::shared_ptrDB::AggregatingTransformParams&, std::__1::shared_ptrDB::ManyAggregatedData&, unsigned long, unsigned long&, unsigned long&>(DB::Block const&, std::__1::shared_ptrDB::AggregatingTransformParams&, std::__1::shared_ptrDB::ManyAggregatedData&, unsigned long&&, unsigned long&, unsigned long&) (this=0x7f332b252c00, __a=..., __args=@0x7f332b193228: 16,
__args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16)
at ../contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:292
#4 0x000000002ca61b88 in std::__1::allocate_shared[abi:v15000]<DB::AggregatingTransform, std::__1::allocatorDB::AggregatingTransform, DB::Block const&, std::__1::shared_ptrDB::AggregatingTransformParams&, std::__1::shared_ptrDB::ManyAggregatedData&, unsigned long, unsigned long&, unsigned long&, void>(std::__1::allocatorDB::AggregatingTransform const&, DB::Block const&, std::__1::shared_ptrDB::AggregatingTransformParams&, std::__1::shared_ptrDB::ManyAggregatedData&, unsigned long&&, unsigned long&, unsigned long&) (__a=...,
__args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16)
at ../contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:953
#5 0x000000002ca61a80 in std::__1::make_shared[abi:v15000]<DB::AggregatingTransform, DB::Block const&, std::__1::shared_ptrDB::AggregatingTransformParams&, std::__1::shared_ptrDB::ManyAggregatedData&, unsigned long, unsigned long&, unsigned long&, void>(DB::Block const&, std::__1::shared_ptrDB::AggregatingTransformParams&, std::__1::shared_ptrDB::ManyAggregatedData&, unsigned long&&, unsigned long&, unsigned long&) (__args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16,
__args=@0x7f332b193228: 16, __args=@0x7f332b193228: 16) at ../contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:962
#6 0x000000002d6f3c52 in DB::AggregatingStep::transformPipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&)::$_6::operator()(DB::Block const&) const (
this=0x7f332b226be0, header=...) at ../src/Processors/QueryPlan/AggregatingStep.cpp:421
#7 0x000000002d6f3bbd in std::__1::__invoke[abi:v15000]<DB::AggregatingStep::transformPipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&)::$_6&, DB::Block const&>
(__f=..., __args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#8 0x000000002d6f3b52 in std::__1::__invoke_void_return_wrapper<std::__1::shared_ptrDB::IProcessor, false>::__call<DB::AggregatingStep::transformPipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&)::$_6&, DB::Block const&>(DB::AggregatingStep::transformPipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&)::$_6&, DB::Block const&) (__args=..., __args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:470
#9 0x000000002d6f3aed in std::__1::__function::__default_alloc_func<DB::AggregatingStep::transformPipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&)::$_6, std::__1::shared_ptrDB::IProcessor (DB::Block const&)>::operator()[abi:v15000](DB::Block const&) (this=0x7f332b226be0, __arg=...)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:235
#10 0x000000002d6f3a98 in std::__1::__function::__policy_invoker<std::__1::shared_ptrDB::IProcessor (DB::Block const&)>::__call_impl<std::__1::__function::__default_alloc_func<DB::AggregatingStep::transformPipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&)::$_6, std::__1::shared_ptrDB::IProcessor (DB::Block const&)> >(std::__1::__function::__policy_storage const*, DB::Block const&) (__buf=0x7f33a43e9108, __args=...) at ../contrib/llvm-project/libcxx/include/__functional/function.h:716
#11 0x000000002a1f9a9b in std::__1::__function::__policy_func<std::__1::shared_ptrDB::IProcessor (DB::Block const&)>::operator()[abi:v15000](DB::Block const&) const (
this=0x7f33a43e9108, __args=...) at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
#12 0x000000002a1f9a2d in std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&)>::operator()(DB::Block const&) const (this=0x7f33a43e9108, __arg=...)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
#13 0x000000002a1e3443 in DB::Pipe::addSimpleTransform(std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&)> const&)::$_2::operator()(DB::Block const&, DB::Pipe::StreamType) const (this=0x7f33a43e8e80, stream_header=...) at ../src/QueryPipeline/Pipe.cpp:668
#14 0x000000002a1e33e7 in std::__1::__invoke[abi:v15000]<DB::Pipe::addSimpleTransform(std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&)> const&)::$_2&, DB::Block const&, DB::Pipe::StreamType> (__f=..., __args=@0x7f33a43e8b1c: DB::Pipe::StreamType::Main, __args=@0x7f33a43e8b1c: DB::Pipe::StreamType::Main)
at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#15 0x000000002a1e33a7 in std::__1::__invoke_void_return_wrapper<std::__1::shared_ptrDB::IProcessor, false>::__call<DB::Pipe::addSimpleTransform(std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&)> const&)::$_2&, DB::Block const&, DB::Pipe::StreamType>(DB::Pipe::addSimpleTransform(std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&)> const&)::$_2&, DB::Block const&, DB::Pipe::StreamType&&) (__args=@0x7f33a43e8b1c: DB::Pipe::StreamType::Main, __args=@0x7f33a43e8b1c: DB::Pipe::StreamType::Main,
__args=@0x7f33a43e8b1c: DB::Pipe::StreamType::Main) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:470
#16 0x000000002a1e3342 in std::__1::__function::__default_alloc_func<DB::Pipe::addSimpleTransform(std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&)> const&)::$_2, std::__1::shared_ptrDB::IProcessor (DB::Block const&, DB::Pipe::StreamType)>::operator()[abi:v15000](DB::Block const&, DB::Pipe::StreamType&&) (this=0x7f33a43e8e80,
__arg=@0x7f33a43e8b1c: DB::Pipe::StreamType::Main, __arg=@0x7f33a43e8b1c: DB::Pipe::StreamType::Main) at ../contrib/llvm-project/libcxx/include/__functional/function.h:235
#17 0x000000002a1e32e9 in std::__1::__function::__policy_invoker<std::__1::shared_ptrDB::IProcessor (DB::Block const&, DB::Pipe::StreamType)>::__call_impl<std::__1::__function::__default_alloc_func<DB::Pipe::addSimpleTransform(std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&)> const&)::$_2, std::__1::shared_ptrDB::IProcessor (DB::Block const&, DB::Pipe::StreamType)> >(std::__1::__function::__policy_storage const*, DB::Block const&, DB::Pipe::StreamType) (__buf=0x7f33a43e8e80, __args=DB::Pipe::StreamType::Main,
__args=DB::Pipe::StreamType::Main) at ../contrib/llvm-project/libcxx/include/__functional/function.h:716
#18 0x000000002a1f58f2 in std::__1::__function::__policy_func<std::__1::shared_ptrDB::IProcessor (DB::Block const&, DB::Pipe::StreamType)>::operator()[abi:v15000](DB::Block const&, DB::Pipe::StreamType&&) const (this=0x7f33a43e8e80, __args=@0x7f33a43e8bbc: DB::Pipe::StreamType::Main, __args=@0x7f33a43e8bbc: DB::Pipe::StreamType::Main)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
#19 0x000000002a1f5841 in std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&, DB::Pipe::StreamType)>::operator()(DB::Block const&, DB::Pipe::StreamType) const (
this=0x7f33a43e8e80, __arg=DB::Pipe::StreamType::Main, __arg=DB::Pipe::StreamType::Main) at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
#20 0x000000002a1e0b0e in DB::Pipe::addSimpleTransform(std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&, DB::Pipe::StreamType)> const&)::$_1::operator()(DB::OutputPort*&, DB::Pipe::StreamType) const (this=0x7f33a43e8dc0, port=@0x7f332b1af240: 0x7f332b236b10, stream_type=DB::Pipe::StreamType::Main) at ../src/QueryPipeline/Pipe.cpp:618
#21 0x000000002a1e09df in DB::Pipe::addSimpleTransform(std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&, DB::Pipe::StreamType)> const&) (this=0x7f332b192ca0,
getter=...) at ../src/QueryPipeline/Pipe.cpp:658
#22 0x000000002a1e107e in DB::Pipe::addSimpleTransform(std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&)> const&) (this=0x7f332b192ca0, getter=...)
at ../src/QueryPipeline/Pipe.cpp:668
#23 0x000000002a212b0e in DB::QueryPipelineBuilder::addSimpleTransform(std::__1::function<std::__1::shared_ptrDB::IProcessor (DB::Block const&)> const&) (this=0x7f332b192c40,
getter=...) at ../src/QueryPipeline/QueryPipelineBuilder.cpp:128
#24 0x000000002d6ef62b in DB::AggregatingStep::transformPipeline (this=0x7f332b193000, pipeline=..., settings=...) at ../src/Processors/QueryPlan/AggregatingStep.cpp:419
#25 0x000000002d75f3c7 in DB::ITransformingStep::updatePipeline (this=0x7f332b193000, pipelines=..., settings=...) at ../src/Processors/QueryPlan/ITransformingStep.cpp:48
#26 0x000000002d786160 in DB::QueryPlan::buildQueryPipeline (this=0x7f33a43e9a80, optimization_settings=..., build_pipeline_settings=...) at ../src/Processors/QueryPlan/QueryPlan.cpp:187
#27 0x000000002b7124d1 in DB::InterpreterSelectWithUnionQuery::execute (this=0x7f339cd04820) at ../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:384
#28 0x000000002bbf41b5 in DB::executeQueryImpl (
begin=0x7f332b178000 "select\nc_name,\nc_custkey,\no_orderkey,\no_orderdate,\no_totalprice,\nsum(l_quantity)\nfrom\ncustomer,\norders,\nlineitem\nwhere\nc_custkey = o_custkey\nand o_orderkey = l_orderkey\ngroup by\nc_name,\nc_custkey,\no_o"..., end=0x7f332b17811c "", context=..., internal=false, stage=DB::QueryProcessingStage::Complete, istr=0x0)
at ../src/Interpreters/executeQuery.cpp:715
#29 0x000000002bbf0124 in DB::executeQuery (query=..., context=..., internal=false, stage=DB::QueryProcessingStage::Complete) at ../src/Interpreters/executeQuery.cpp:1180
#30 0x000000002cfd209b in DB::TCPHandler::runImpl (this=0x7f339ccf5c00) at ../src/Server/TCPHandler.cpp:389
#31 0x000000002cfe27e5 in DB::TCPHandler::run (this=0x7f339ccf5c00) at ../src/Server/TCPHandler.cpp:1963
#32 0x0000000032066419 in Poco::Net::TCPServerConnection::start (this=0x7f339ccf5c00) at ../base/poco/Net/src/TCPServerConnection.cpp:43
#33 0x0000000032066c7b in Poco::Net::TCPServerDispatcher::run (this=0x7f3463f9d600) at ../base/poco/Net/src/TCPServerDispatcher.cpp:115
#34 0x00000000322cdf94 in Poco::PooledThread::run (this=0x7f34640e3e80) at ../base/poco/Foundation/src/ThreadPool.cpp:199
#35 0x00000000322ca8fa in Poco::(anonymous namespace)::RunnableHolder::run (this=0x7f3464001850) at ../base/poco/Foundation/src/Thread.cpp:55
#36 0x00000000322c969c in Poco::ThreadImpl::runnableEntry (pThread=0x7f34640e3eb8) at ../base/poco/Foundation/src/Thread_POSIX.cpp:345
#37 0x00007f3464f8a802 in start_thread () from /lib64/libc.so.6
#38 0x00007f3464f2a450 in clone3 () from /lib64/libc.so.6

@adofsauron
Collaborator Author

Why write aggregate data to disk files, and why is it merged back through a separate pipeline?

@adofsauron
Collaborator Author

Aggregation is performed with temporary data stored on disk, and those on-disk parts then need to be merged in a memory-efficient way.

@adofsauron
Collaborator Author

The new size fits into the last MemoryChunk, so just alloc the
additional size. We can alloc without alignment here, because it
only applies to the start of the range, and we don't change it.

@adofsauron
Collaborator Author

Begin or expand a contiguous range of memory.
'range_start' is the start of the range. If nullptr, a new range is allocated.
If there is no space in the current MemoryChunk to expand the range,
the entire range is copied to a new, bigger MemoryChunk, and the value
of 'range_start' is updated.
If the optional 'start_alignment' is specified, the start of the range is
kept aligned to this value.

NOTE: This method is usable only for the last allocation made on this
Arena. For earlier allocations, see the 'realloc' method.

@adofsauron
Collaborator Author

Memory pool to append something. For example, short strings.
Usage scenario:

  • put lot of strings inside pool, keep their addresses;
  • addresses remain valid during lifetime of pool;
  • at destruction of pool, all memory is freed;
  • memory is allocated and freed by large MemoryChunks;
  • freeing parts of data is not possible (but look at ArenaWithFreeLists if you need);
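
A toy pool following this usage scenario (illustrative only; the real Arena allocates in large MemoryChunks and also offers alignment and range expansion): strings are copied in, the returned addresses stay valid for the pool's lifetime, individual frees are impossible, and everything is released together in the destructor.

// toy_arena_strings.cpp -- illustrative append-only pool, not the real Arena.
#include <cstring>
#include <string_view>
#include <vector>

class ToyArena {
public:
    // Copy a short string into the pool and return its stable address.
    const char * insert(std::string_view s)
    {
        char * buf = new char[s.size() + 1];
        std::memcpy(buf, s.data(), s.size());
        buf[s.size()] = '\0';
        chunks_.push_back(buf);
        return buf;                              // remains valid until the pool is destroyed
    }
    ~ToyArena() { for (char * p : chunks_) delete[] p; }  // free everything at once

private:
    std::vector<char *> chunks_;                 // the real Arena uses large MemoryChunks instead
};

int main()
{
    ToyArena arena;
    std::vector<const char *> keys;
    keys.push_back(arena.insert("group_key_1"));
    keys.push_back(arena.insert("group_key_2"));
    // Freeing individual entries is not possible; the whole pool is released together.
}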

@adofsauron
Collaborator Author

    /// If it is claimed that the zero key can not be inserted into the table.
    if constexpr (!Cell::need_zero_value_storage)
        return false;

    if (unlikely(Cell::isZero(x, *this)))
    {
        it = this->zeroValue();

        if (!this->hasZero())
        {
            ++m_size;
            this->setHasZero();
            this->zeroValue()->setHash(hash_value);
            inserted = true;
        }
        else
            inserted = false;

        return true;
    }

    return false;

@adofsauron
Collaborator Author

During processing of row #i we will prefetch HashTable cell for row #(i + prefetch_look_ahead).
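
A generic sketch of that look-ahead prefetch (hypothetical open-addressing table, arbitrary prefetch_look_ahead value, and the GCC/Clang __builtin_prefetch intrinsic): while row i is processed, the cell that row i + prefetch_look_ahead will touch is pulled into cache ahead of time to hide memory latency.

// prefetch_lookahead_sketch.cpp -- generic illustration of software prefetch during hash probing.
#include <cstddef>
#include <cstdint>
#include <vector>

static constexpr size_t prefetch_look_ahead = 16;    // tuning constant (assumption)

// Hypothetical open-addressing table: the bucket index is hash & mask.
void probeAll(const std::vector<uint64_t> & hashes,
              std::vector<uint64_t> & buckets, size_t mask)
{
    const size_t n = hashes.size();
    for (size_t i = 0; i < n; ++i)
    {
        if (i + prefetch_look_ahead < n)
        {
            size_t future = hashes[i + prefetch_look_ahead] & mask;
            __builtin_prefetch(&buckets[future]);     // GCC/Clang builtin
        }
        size_t cell = hashes[i] & mask;
        ++buckets[cell];                              // the actual "work" on row i's cell
    }
}

int main()
{
    const size_t mask = (1u << 16) - 1;
    std::vector<uint64_t> buckets(mask + 1, 0);
    std::vector<uint64_t> hashes(100000);
    for (size_t i = 0; i < hashes.size(); ++i)
        hashes[i] = i * 2654435761u;                  // toy hash values
    probeAll(hashes, buckets, mask);
}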

@adofsauron
Collaborator Author

        for (size_t i = row_begin; i < row_end; ++i)
            if (places[i])
                static_cast<const Derived *>(this)->add(places[i] + place_offset, columns, i, arena);
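
The static_cast<const Derived *>(this) in that loop is the CRTP static dispatch asked about earlier. A minimal self-contained sketch of the pattern (toy names, not the real IAggregateFunctionHelper):

// crtp_dispatch_sketch.cpp -- minimal CRTP static dispatch, mirroring the loop above.
#include <cstddef>

template <typename Derived>
struct IAggregateFunctionHelperLike
{
    // Calls the derived class's add() without a virtual call per row.
    void addRow(char * place, size_t row) const
    {
        static_cast<const Derived *>(this)->add(place, row);
    }
};

struct SumLike : IAggregateFunctionHelperLike<SumLike>
{
    void add(char * place, size_t /*row*/) const
    {
        ++(*reinterpret_cast<long *>(place));    // toy "state update"
    }
};

int main()
{
    long state = 0;
    SumLike f;
    for (size_t i = 0; i < 10; ++i)
        f.addRow(reinterpret_cast<char *>(&state), i);   // resolves to SumLike::add at compile time
}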

@adofsauron
Collaborator Author

Crash in the huge-block (mmap, BLOCK_HUGE) allocation path:


#0  0x0000000002dcc26e in Tianmu::mm::TCMHeap::alloc (this=0x8b3ace0, size=1408) at /root/work/stonedb-dev-20230213/storage/tianmu/mm/tcm_heap_policy.cpp:76
#1  0x0000000002db8ccc in Tianmu::mm::MemoryHandling::alloc (this=0x825bf00, size=1408, type=Tianmu::mm::BLOCK_TYPE::BLOCK_HUGE, owner=0x7f4a7ca5b030, 
    nothrow=false) at /root/work/stonedb-dev-20230213/storage/tianmu/mm/memory_handling_policy.cpp:206
#2  0x0000000002dcd992 in Tianmu::mm::TraceableObject::alloc (this=0x7f4a7ca5b030, size=1408, type=Tianmu::mm::BLOCK_TYPE::BLOCK_HUGE, nothrow=false)
    at /root/work/stonedb-dev-20230213/storage/tianmu/mm/traceable_object.cpp:55
#3  0x000000000312db54 in Tianmu::core::MemBlockManager::GetBlock (this=0x7f4a7ca5b030)
    at /root/work/stonedb-dev-20230213/storage/tianmu/core/blocked_mem_table.cpp:45
#4  0x000000000312e3e1 in Tianmu::core::BlockedRowMemStorage::AddEmptyRow (this=0x7f4a7ca6ce98)
    at /root/work/stonedb-dev-20230213/storage/tianmu/core/blocked_mem_table.cpp:160
#5  0x00000000031162aa in Tianmu::core::ValueMatching_HashTable::FindCurrentRow (this=0x7f4a7ca6cdf0, input_buffer=0x7f4a7cb73a60 "", row=@0x7f52037ecbf8: 0, 
    add_if_new=true) at /root/work/stonedb-dev-20230213/storage/tianmu/core/value_matching_hashtable.cpp:202
#6  0x0000000002fd947a in Tianmu::core::GroupTable::FindCurrentRow (this=0x7f52497be2a8, row=@0x7f52037ecbf8: 0)
    at /root/work/stonedb-dev-20230213/storage/tianmu/core/group_table.cpp:412
#7  0x0000000002f84b34 in Tianmu::core::GroupByWrapper::FindCurrentRow (this=0x7f52497be1e0, row=@0x7f52037ecbf8: 0)
    at /root/work/stonedb-dev-20230213/storage/tianmu/core/groupby_wrapper.h:109
#8  0x0000000002f811f9 in Tianmu::core::AggregationAlgorithm::AggregatePackrow (this=0x7f52497be570, gbw=..., mit=0x7f52037ecd20, cur_tuple=0, 
    mem_used=0x7f52497bdc70) at /root/work/stonedb-dev-20230213/storage/tianmu/core/aggregation_algorithm.cpp:618
#9  0x0000000002f8238e in Tianmu::core::AggregationWorkerEnt::TaskAggrePacks (this=0x7f52497bdc80, taskIterator=0x7f4a7ca96b80, dims=0x7f52497bd800, 
    mit=0x7f52497bdcd0, task=0x7f4a7ca968f0, gbw=0x7f52497be1e0, ci=0x7f4a7ca39720, mem_used=0x7f52497bdc70)
    at /root/work/stonedb-dev-20230213/storage/tianmu/core/aggregation_algorithm.cpp:934
#10 0x0000000002f92efa in _ZSt13__invoke_implIvRMN6Tianmu4core20AggregationWorkerEntEFvPNS1_10MIIteratorEPNS1_15DimensionVectorES4_PNS1_5CTaskEPNS1_14GroupByWrapperEPNS1_11TransactionEPmERPS2_IRS4_RS6_SJ_RS8_RSA_RSC_RSD_EET_St21__invoke_memfun_derefOT0_OT1_DpOT2_ (__f=
    @0x7f4a7ca68e48: (void (Tianmu::core::AggregationWorkerEnt::*)(Tianmu::core::AggregationWorkerEnt * const, Tianmu::core::MIIterator *, Tianmu::core::DimensionVector *, Tianmu::core::MIIterator *, Tianmu::core::CTask *, Tianmu::core::GroupByWrapper *, Tianmu::core::Transaction *, unsigned long *)) 0x2f8220e <Tianmu::core::AggregationWorkerEnt::TaskAggrePacks(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>, __t=@0x7f4a7ca68e90: 0x7f52497bdc80) at /usr/include/c++/8/bits/invoke.h:73
#11 0x0000000002f92a55 in std::__invoke<void (Tianmu::core::AggregationWorkerEnt::*&)(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*), Tianmu::core::AggregationWorkerEnt*&, Tianmu::core::MIIterator*&, Tianmu::core::DimensionVector*&, Tianmu::core::MIIterator*&, Tianmu::core::CTask*&, Tianmu::core::GroupByWrapper*&, Tianmu::core::Transaction*&, unsigned long*&> (__fn=
    @0x7f4a7ca68e48: (void (Tianmu::core::AggregationWorkerEnt::*)(Tianmu::core::AggregationWorkerEnt * const, Tianmu::core::MIIterator *, Tianmu::core::DimensionVector *, Tianmu::core::MIIterator *, Tianmu::core::CTask *, Tianmu::core::GroupByWrapper *, Tianmu::core::Transaction *, unsigned long *)) 0x2f8220e <Tianmu::core::AggregationWorkerEnt::TaskAggrePacks(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>) at /usr/include/c++/8/bits/invoke.h:95
#12 0x0000000002f9244e in std::_Bind<void (Tianmu::core::AggregationWorkerEnt::*(Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*))(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>::__call<void, , 0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul, 7ul>(std::tuple<>&&, std::_Index_tuple<0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul, 7ul>) (this=0x7f4a7ca68e48, 
    __args=empty std::tuple) at /usr/include/c++/8/functional:400
#13 0x0000000002f9179a in std::_Bind<void (Tianmu::core::AggregationWorkerEnt::*(Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*))(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>::operator()<, void>() (this=0x7f4a7ca68e48) at /usr/include/c++/8/functional:484
#14 0x0000000002f90f4d in std::__invoke_impl<void, std::_Bind<void (Tianmu::core::AggregationWorkerEnt::*(Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIItera
tor*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*))(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>&>(std::__invoke_other, std::_Bind<void (Tianmu::core::AggregationWorkerEnt::*(Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*))(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>&) (__f=...) at /usr/include/c++/8/bits/invoke.h:60
#15 0x0000000002f90848 in std::__invoke<std::_Bind<void (Tianmu::core::AggregationWorkerEnt::*(Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*))(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>&>(std::_Bind<void (Tianmu::core::AggregationWorkerEnt::*(Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*))(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>&) (__fn=...)
    at /usr/include/c++/8/bits/invoke.h:95
#16 0x0000000002f901c7 in std::__future_base::_Task_state<std::_Bind<void (Tianmu::core::AggregationWorkerEnt::*(Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*))(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>, std::allocator<int>, void ()>::_M_run()::{lambda()#1}::operator()() const (this=0x7f4a7ca68e20) at /usr/include/c++/8/future:1421
#17 0x0000000002f91841 in std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<void>, std::__future_base::_Result_base::_Deleter>, std::__future_base::_Task_state<std::_Bind<void (Tianmu::core::AggregationWorkerEnt::*(Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*))(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>, std::allocator<int>, void ()>::_M_run()::{lambda()#1}, void>::operator()() const (this=0x7f52037ed590) at /usr/include/c++/8/future:1362


#18 0x0000000002f90fcc in std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<void>, std::__future_base::_Result_base::_Deleter>, std::__future_base::_Task_state<std::_Bind<void (Tianmu::core::AggregationWorkerEnt::*(Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*))(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>, std::allocator<int>, void ()>::_M_run()::{lambda()#1}, void> >::_M_invoke(std::_Any_data const&) (__functor=...) at /usr/include/c++/8/bits/std_function.h:283
#19 0x0000000002c4f6bf in std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>::operator()() const (
    this=0x7f52037ed590) at /usr/include/c++/8/bits/std_function.h:687
#20 0x0000000002c4888b in std::__future_base::_State_baseV2::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*) (this=0x7f4a7ca68e20, __f=0x7f52037ed590, __did_set=0x7f52037ed4f7) at /usr/include/c++/8/future:561
#21 0x0000000002c5eed1 in std::__invoke_impl<void, void (std::__future_base::_State_baseV2::*)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2<std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*>*> (__f=
    @0x7f52037ed510: (void (std::__future_base::_State_baseV2::*)(std::__future_base::_State_baseV2 * const, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter>()> *, bool *)) 0x2c48864 <std::__future_base::_State_baseV2::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*)>, __t=@0x7f52037ed508: 0x7f4a7ca68e20) at /usr/include/c++/8/bits/invoke.h:73
#22 0x0000000002c5710f in std::__invoke<void (std::__future_base::_State_baseV2::*)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*>(void (std::__future_base::_State_baseV2::*&&)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*&&, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*&&, bool*&&) (__fn=
    @0x7f52037ed510: (void (std::__future_base::_State_baseV2::*)(std::__future_base::_State_baseV2 * const, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter>()> *, bool *)) 0x2c48864 <std::__future_base::_State_baseV2::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*)>) at /usr/include/c++/8/bits/invoke.h:95
#23 0x0000000002c4f2bc in std::call_once<void (std::__future_base::_State_baseV2::*)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*>(std::once_flag&, void (std::__future_base::_State_baseV2::*&&)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*&&, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*&&, bool*&&)::{lambda()#1}::operator()() const (this=0x7f52037ed480) at /usr/include/c++/8/mutex:672
#24 0x0000000002c4f327 in std::call_once<void (std::__future_base::_State_baseV2::*)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*>(std::once_flag&, void (std::__future_base::_State_baseV2::*&&)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*&&, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*&&, bool*&&)::{lambda()#2}::operator()() const (this=0x0) at /usr/include/c++/8/mutex:677
#25 0x0000000002c4f338 in std::call_once<void (std::__future_base::_State_baseV2::*)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*>(std::once_flag&, void (std::__future_base::_State_baseV2::*&&)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*&&, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*&&, bool*&&)::{lambda()#2}::_FUN() () at /usr/include/c++/8/mutex:677
#26 0x00007f5299101e67 in __pthread_once_slow () from /lib64/libpthread.so.0
#27 0x0000000002c42faf in __gthread_once (__once=0x7f4a7ca68e38, __func=0x7f52993d3b90 <__once_proxy>)
    at /usr/include/c++/8/x86_64-redhat-linux/bits/gthr-default.h:699
#28 0x0000000002c4f3e2 in std::call_once<void (std::__future_base::_State_baseV2::*)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*>(std::once_flag&, void (std::__future_base::_State_baseV2::*&&)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*), std::__future_base::_State_baseV2*&&, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*&&, bool*&&) (__once=..., __f=
    @0x7f52037ed510: (void (std::__future_base::_State_baseV2::*)(std::__future_base::_State_baseV2 * const, std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter>()> *, bool *)) 0x2c48864 <std::__future_base::_State_baseV2::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*)>) at /usr/include/c++/8/mutex:684
#29 0x0000000002c484d3 in std::__future_base::_State_baseV2::_M_set_result(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>, bool) (this=0x7f4a7ca68e20, __res=..., __ignore_failure=false) at /usr/include/c++/8/future:401
#30 0x0000000002f90226 in std::__future_base::_Task_state<std::_Bind<void (Tianmu::core::AggregationWorkerEnt::*(Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*))(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*)>, std::allocator<int>, void ()>::_M_run() (this=0x7f4a7ca68e20) at /usr/include/c++/8/future:1423
#31 0x0000000002c5c48d in std::packaged_task<void ()>::operator()() (this=0x7f4a7ca98220) at /usr/include/c++/8/future:1556
#32 0x0000000002f8630a in Tianmu::utils::thread_pool::add_task<void (Tianmu::core::AggregationWorkerEnt::*)(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*), Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*&, Tianmu::core::Transaction*&, unsigned long*&>(void (Tianmu::core::AggregationWorkerEnt::*&&)(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*), Tianmu::core::AggregationWorkerEnt*&&, Tianmu::core::MIIterator*&&, Tianmu::core::DimensionVector*&&, Tianmu::core::MIIterator*&&, Tianmu::core::CTask*&&, Tianmu::core::GroupByWrapper*&, Tianmu::core::Transaction*&, unsigned long*&)::{lambda()#1}::operator()() const (this=0x7f4a7ca98260) at /root/work/stonedb-dev-20230213/storage/tianmu/util/thread_pool.h:94
#33 0x0000000002f8bf69 in std::_Function_handler<void (), Tianmu::utils::thread_pool::add_task<void (Tianmu::core::AggregationWorkerEnt::*)(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*), Tianmu::core::AggregationWorkerEnt*, Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*&, Tianmu::core::Transaction*&, unsigned long*&>(void (Tianmu::core::AggregationWorkerEnt::*&&)(Tianmu::core::MIIterator*, Tianmu::core::DimensionVector*, Tianmu::core::MIIterator*, Tianmu::core::CTask*, Tianmu::core::GroupByWrapper*, Tianmu::core::Transaction*, unsigned long*), Tianmu::core::AggregationWorkerEnt*&&, Tianmu::core::MIIterator*&&, Tianmu::core::DimensionVector*&&, Tianmu::core::MIIterator*&&, Tianmu::core::CTask*&&, Tianmu::core::GroupByWrapper*&, Tianmu::core::Transaction*&, unsigned long*&)::{lambda()#1}>::_M_invoke(std::_Any_data const&) (__functor=...) at /usr/include/c++/8/bits/std_function.h:297
#34 0x0000000002c4e1a4 in std::function<void ()>::operator()() const (this=0x7f52037ed6a0) at /usr/include/c++/8/bits/std_function.h:687
#35 0x0000000002c48d35 in Tianmu::utils::thread_pool::thread_pool(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long)::{lambda()#1}::operator()() const (__closure=0x827e7b8) at /root/work/stonedb-dev-20230213/storage/tianmu/util/thread_pool.h:61
#36 0x0000000002c6db9c in std::__invoke_impl<void, Tianmu::utils::thread_pool::thread_pool(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long)::{lambda()#1}>(std::__invoke_other, Tianmu::utils::thread_pool::thread_pool(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long)::{lambda()#1}&&) (__f=...) at /usr/include/c++/8/bits/invoke.h:60
#37 0x0000000002c66f47 in std::__invoke<Tianmu::utils::thread_pool::thread_pool(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long)::{lambda()#1}>(std::__invoke_result&&, (Tianmu::utils::thread_pool::thread_pool(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long)::{lambda()#1}&&)...) (__fn=...) at /usr/include/c++/8/bits/invoke.h:95
#38 0x0000000002c7c4a0 in std::thread::_Invoker<std::tuple<Tianmu::utils::thread_pool::thread_pool(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long)::{lambda()#1}> >::_M_invoke<0ul>(std::_Index_tuple<0ul>) (this=0x827e7b8) at /usr/include/c++/8/thread:244
#39 0x0000000002c7bdf5 in std::thread::_Invoker<std::tuple<Tianmu::utils::thread_pool::thread_pool(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long)::{lambda()#1}> >::operator()() (this=0x827e7b8) at /usr/include/c++/8/thread:253
#40 0x0000000002c7af2c in std::thread::_State_impl<std::thread::_Invoker<std::tuple<Tianmu::utils::thread_pool::thread_pool(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long)::{lambda()#1}> > >::_M_run() (this=0x827e7b0) at /usr/include/c++/8/thread:196
#41 0x00007f52993d4b13 in execute_native_thread_routine () from /lib64/libstdc++.so.6
#42 0x00007f52990fa1ca in start_thread () from /lib64/libpthread.so.0
#43 0x00007f5296052e73 in clone () from /lib64/libc.so.6
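
Frames #30 through #35 of the backtrace show the aggregation worker entry point being dispatched through Tianmu::utils::thread_pool::add_task as a std::packaged_task built with std::bind. A minimal sketch of that submission pattern follows; the MiniPool class is a hypothetical stand-in used only to illustrate the mechanism, not the real thread_pool.h code.

    #include <condition_variable>
    #include <cstddef>
    #include <functional>
    #include <future>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <utility>
    #include <vector>

    // Hypothetical MiniPool; names and layout do not match the real thread_pool.h.
    class MiniPool {
     public:
      explicit MiniPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) workers_.emplace_back([this] { Run(); });
      }
      ~MiniPool() {
        { std::lock_guard<std::mutex> lk(mtx_); done_ = true; }
        cv_.notify_all();
        for (auto &w : workers_) w.join();
      }
      // Same pattern as the backtrace: bind the member function and its arguments,
      // wrap the bound call in a packaged_task, hand back the future, and let a
      // worker thread invoke the task (frames #30-#32).
      template <typename F, typename... Args>
      std::future<void> add_task(F &&f, Args &&...args) {
        auto task = std::make_shared<std::packaged_task<void()>>(
            std::bind(std::forward<F>(f), std::forward<Args>(args)...));
        std::future<void> fut = task->get_future();
        { std::lock_guard<std::mutex> lk(mtx_); jobs_.push([task] { (*task)(); }); }
        cv_.notify_one();
        return fut;
      }

     private:
      void Run() {
        for (;;) {
          std::function<void()> job;
          {
            std::unique_lock<std::mutex> lk(mtx_);
            cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
            if (done_ && jobs_.empty()) return;
            job = std::move(jobs_.front());
            jobs_.pop();
          }
          job();  // runs packaged_task::operator(), as in frame #31
        }
      }
      std::vector<std::thread> workers_;
      std::queue<std::function<void()>> jobs_;
      std::mutex mtx_;
      std::condition_variable cv_;
      bool done_ = false;
    };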


@adofsauron
Collaborator Author

The Linux OOM kill still occurs when memory-mapped files are used and the disk fills up:



Feb 27 10:42:44 kevin kernel: Out of memory: Killed process 4167 (mysqld) total-vm:57281236kB, anon-rss:15482648kB, file-rss:0kB, shmem-rss:0kB, UID:1001 pgtables:100820kB oom_score_adj:0
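
The kill above happened even though the intermediate results were file-backed, which suggests that once the disk fills up the spill file can no longer grow and the allocation pressure falls back onto anonymous memory. A hedged sketch of a guard that makes the spill decision depend on free disk space is shown below; the statvfs-based check and the helper name are illustrative assumptions, not the engine's actual logic.

    #include <sys/statvfs.h>
    #include <cstdint>
    #include <string>

    // Returns true if the filesystem holding `path` still has at least `need_bytes`
    // free, so the caller can refuse to grow the spill file instead of running
    // into ENOSPC and falling back to unbounded anonymous memory.
    bool HasFreeDiskSpace(const std::string &path, uint64_t need_bytes) {
      struct statvfs st {};
      if (statvfs(path.c_str(), &st) != 0) return false;  // treat errors as "no space"
      uint64_t avail = static_cast<uint64_t>(st.f_bavail) * st.f_frsize;
      return avail >= need_bytes;
    }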


@adofsauron
Collaborator Author

When disk space is sufficient, the system runs properly. Note that the machine has only 4 GB of physical memory; once available memory drops below 1 GB, the engine switches to memory-mapping the intermediate results to disk files.

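A hedged sketch of that kind of watermark check follows, using sysinfo as a rough proxy for available memory; the 1 GB threshold mirrors the observation above, and the function name is an illustrative assumption.

    #include <sys/sysinfo.h>
    #include <cstdint>

    // Spill the aggregation hash to a file-backed mapping once available RAM drops
    // below the watermark (1 GB here, matching the behaviour observed on the 4 GB box).
    bool ShouldSpillToDisk(uint64_t watermark_bytes = 1ULL << 30) {
      struct sysinfo si {};
      if (sysinfo(&si) != 0) return false;  // cannot tell, stay in memory
      uint64_t avail = (static_cast<uint64_t>(si.freeram) + si.bufferram) * si.mem_unit;
      return avail < watermark_bytes;
    }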

@adofsauron
Collaborator Author

7fc71f1ce000-7fce9f1ce000 rw-s 00000000 08:02 3670018 /tmp/tianmuhuge.12039
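
This /proc/<pid>/maps entry shows the spill file /tmp/tianmuhuge.12039 mapped read-write and shared ("rw-s") over roughly 30 GB of address space. A minimal sketch of how such a file-backed mapping is typically created with open/ftruncate/mmap is shown below; it is illustrative only, not the merged implementation.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstddef>
    #include <stdexcept>
    #include <string>

    // Creates the kind of mapping shown above: a shared, read-write, file-backed
    // region ("rw-s") of `bytes` bytes. The caller munmaps and unlinks it afterwards.
    void *MapSpillFile(const std::string &path, std::size_t bytes) {
      int fd = open(path.c_str(), O_RDWR | O_CREAT, 0600);
      if (fd < 0) throw std::runtime_error("open failed: " + path);
      if (ftruncate(fd, static_cast<off_t>(bytes)) != 0) {
        close(fd);
        throw std::runtime_error("ftruncate failed: " + path);
      }
      void *addr = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      close(fd);  // the mapping keeps the file contents reachable
      if (addr == MAP_FAILED) throw std::runtime_error("mmap failed: " + path);
      return addr;
    }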

@mergify mergify bot closed this as completed in #1355 Mar 10, 2023
mergify bot pushed a commit that referenced this issue Mar 10, 2023
    1. Use memory mapping
    2. Set the maximum number of concurrent threads for aggregation
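
The second point caps how many aggregation workers run at once. A hedged sketch of that idea, a simple counting limiter placed in front of the thread pool, is given below; the class name and limit are illustrative assumptions, not the code merged in #1355.

    #include <condition_variable>
    #include <mutex>

    // At most `max_running` aggregation workers hold a slot at any moment; the rest
    // block in Acquire() until a running task calls Release().
    class ConcurrencyLimiter {
     public:
      explicit ConcurrencyLimiter(int max_running) : slots_(max_running) {}
      void Acquire() {
        std::unique_lock<std::mutex> lk(mtx_);
        cv_.wait(lk, [this] { return slots_ > 0; });
        --slots_;
      }
      void Release() {
        { std::lock_guard<std::mutex> lk(mtx_); ++slots_; }
        cv_.notify_one();
      }

     private:
      std::mutex mtx_;
      std::condition_variable cv_;
      int slots_;
    };

Wrapping each pooled aggregation task in Acquire()/Release() keeps the number of simultaneously active aggregation workers, and therefore their memory footprint, bounded.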