
[Xpt] graph: refactor Block::work() function #81

Closed
3 of 5 tasks
wirew0rm opened this issue Jun 6, 2023 · 2 comments · Fixed by #297
wirew0rm commented Jun 6, 2023

The node::work() function has grown to more than 200 lines of code, handling

  • input/output data constraints
  • process-one/-bulk/-simd related logic
  • SIMD preambles and epilogues
  • tags
  • error and flowgraph-state
  • special cases for sinks required for the node-merging API

and consists of multiple nested constexpr-if branches.

This warrants a cleanup, including the following considerations:


ivan-cukic commented Oct 3, 2023

  • The if constexpr checks that control the code flow should use concepts that are less strict than what the API ultimately requires, and verify the specifics with a static_assert inside the branch. For example, if the user defines process_bulk but makes a mistake in its return type, we should not silently ignore it; we should take the branch that uses process_bulk and report that the return type is wrong.
  • Code that can fail should not ignore errors; all errors should be reported to the caller (this applies to library code, not to the tests).
  • The current work implementation should be split into reusable components. At the moment, anyone who decides to implement a custom work function faces a daunting task and will probably have to start by copy-pasting the existing work function.
  • Ensure the user can write process_one(auto... args) and process_bulk(auto... args).

@RalphSteinhagen RalphSteinhagen changed the title graph: refactor node::work() function [xpt] graph: refactor node::work() function Oct 3, 2023
@RalphSteinhagen RalphSteinhagen changed the title [xpt] graph: refactor node::work() function [xpt] graph: refactor Block::work() function Oct 12, 2023
@RalphSteinhagen RalphSteinhagen moved this from 📋 Backlog to 🔖 Selected (3) in Digitizer Reimplementation Oct 12, 2023
@RalphSteinhagen RalphSteinhagen changed the title [xpt] graph: refactor Block::work() function [5pt] graph: refactor Block::work() function Oct 27, 2023
@RalphSteinhagen RalphSteinhagen changed the title [5pt] graph: refactor Block::work() function [Xpt] graph: refactor Block::work() function Oct 27, 2023
@wirew0rm wirew0rm moved this from 🔖 Selected (3) to 🏗 In progress in Digitizer Reimplementation Jan 23, 2024
@wirew0rm wirew0rm self-assigned this Jan 23, 2024
wirew0rm commented:

Notes from discussion about work_internal refactoring

The work_internal function is currently not clearly separated between

  • checking processing preconditions
  • processing and forwarding tags and messages
  • validating and updating settings
  • error and state handling
  • resampling (+ stride)
  • actual processing (processOne, processBulk, simdized variants...)
  • SIMD handling (epilogue, etc)

This makes the function hard to read and difficult to extend: blocks that have to provide a custom implementation for one of these aspects must correctly reimplement all the others and keep them up to date with changes in the base Block implementation.

General structure of work_internal

work(requested) {
    dispatch_pmt_msg -> setting -> settings.set
                     -> info
                     -> control
           -> processMsg(...)
           port.publishMsg()
    //settings
    get & handle tags
    apply settings -> consume & reset cached&merged tag
    
    // in/out stats
    // - sync
    // - sync(resampling)
    // - async
    state
      available() // aggregate samples available in input buffers and free space in output buffers
      nextTag/nextTagAfter
      nextEOSTag

    processOne/Bulk/SIMD, ...
}
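The desired separation could be sketched by splitting each phase into a free function that custom work implementations can reuse. The struct and function names below (`WorkContext`, `computeChunkSize`, `workSkeleton`) are hypothetical, purely to illustrate the decomposition, not the library's API:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical per-call context; field names are illustrative only.
struct WorkContext {
    std::size_t nAvailableIn = 0; // aggregated samples available in input buffers
    std::size_t nFreeOut     = 0; // aggregated free space in output buffers
    std::size_t nRequested   = 0; // samples requested by the scheduler
};

// phase: check processing preconditions and clamp the chunk size
std::size_t computeChunkSize(const WorkContext& ctx) {
    return std::min({ctx.nAvailableIn, ctx.nFreeOut, ctx.nRequested});
}

// The other phases (settings application, tag handling, SIMD epilogue, ...)
// would be further small functions, called in sequence by the default work().
// A custom work() can then reuse exactly the pieces it needs.
template <typename ProcessFn>
std::size_t workSkeleton(const WorkContext& ctx, ProcessFn&& process) {
    const std::size_t n = computeChunkSize(ctx);
    if (n == 0) return 0; // nothing to do: insufficient input or output space
    process(n);           // actual processing (processOne/Bulk/SIMD dispatch)
    return n;             // consume/publish bookkeeping elided in this sketch
}
```

The point of the sketch is that a block needing, e.g., custom tag handling would replace only that one helper instead of re-deriving the whole 200-line function.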

Updating Block Settings

There are several ways in which block settings (i.e. block member variables) can be updated:

  • at construction via the pmt constructor
    • initializing settings via custom constructors is deprecated since it circumvents the default constructor and its checks!
  • by sending a msg to the block's default msg-port
    • contains a pmt with the block's id and a pmt with the setting names and values to update
    • allows changing block parameters instantaneously and asynchronously
  • by attaching a tag to an input stream sample which contains a pmt with settings updates
    • allows propagating settings changes through the graph
  • by staging settings changes inside the block's work function
  • by updating the member variables directly inside the work function? (not recommended; one must manually call applySettings afterwards to update the pmt map and notify other blocks/UI/...)

With the exception of the last, all settings are applied before the block's work function is called.

After updating a setting, the block will send a message on its out message port with the updated settings values.

All blocks' default msgIn ports are connected to the graph's msgIn port, and all blocks' default msgOut ports are connected to the graph's msgOut port.
Blocks can define additional message ports that the user can connect directly, allowing communication between blocks or from blocks to non-gnuradio parts of the program.
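The "staged settings applied before work()" model could look roughly like the following sketch. The `pmt` alias, `StagedSettings` struct, and method names are assumptions for illustration; the real settings machinery is richer (validation, change notification, tag merging):

```cpp
#include <map>
#include <string>
#include <utility>
#include <variant>

// Minimal stand-in for a pmt value (the real pmt type supports far more).
using pmt = std::variant<long, double, std::string>;

struct StagedSettings {
    std::map<std::string, pmt> active; // what the work function sees
    std::map<std::string, pmt> staged; // pending updates from msgs/tags/ctor

    // called from a message handler, a tag handler, or inside work()
    void stage(std::string key, pmt value) {
        staged[std::move(key)] = std::move(value);
    }

    // called by the runtime *before* invoking work(), so work() always
    // observes a consistent snapshot of all settings
    bool applyStaged() {
        if (staged.empty()) return false;
        for (auto& [key, value] : staged) active[key] = value;
        staged.clear();
        return true; // true -> runtime publishes the update on the block's msgOut port
    }
};
```

This mirrors the rule stated above: with the exception of direct member writes, every update path funnels into a staging area that is applied before the next work() call, and a successful apply triggers the outgoing settings-update message.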

Tag Handling Considerations

tags         t1     t2                 EOS
             |      |                   |
time     -//-----------------------------
             |      |          |        |
i local      0      42       1024     1070
i global     50     92       1074     1130

Since tags can modify settings or otherwise affect the work function, but handling tags and settings on every sample would be computationally expensive, the data is chunked appropriately before calling the processing function.

  • DEFAULT: split the stream such that tags only sit on the first sample of a chunk -> will/might violate the min-samples constraint
    • process_bulk(50..91, t1), N=42!
    • process_bulk(92..1115, t2), N=1024
    • process_bulk(1116..1130, EOS), N=14!
  • MOVE_FW/MOVE_BW: move the tags to the start of the current or the next chunk
    • if we move the tags, should the forwarded tags be at the original or the updated position?
    • process_bulk(50..1073, t1 [+ t2]), N=1024
    • process_bulk(1074..1130, [t2 +] EOS), N=56!
  • IGNORE: ignore the tag (still forward it?)
    • process_bulk(50..1073, t1), N=1024
    • process_bulk(1074..1130, EOS), N=56!
  • dropping the too-short chunk
    • process_bulk(92..1115, t2), N=1024
  • zero-padding (or zero-order holding) the too-short chunk
    • process_bulk(50..91 + padding, t1), N=42!
    • process_bulk(92..1115, t2), N=1024
    • process_bulk(1116..1130 + padding, EOS), N=14!
    • PROBLEM: changes the number of samples -> samples become out of sync with other parallel paths of the graph
  • we might need a special case for the end-of-stream tag to guarantee that it gets forwarded in any case
    • can be either sent with the incomplete remaining data (ignoring the restriction) or as a zero samples update, dropping the rest of the data
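The DEFAULT policy above can be sketched as a pure chunking function. This is an illustrative helper, not library code, and it deliberately omits the max-chunk-size constraint (the 1024-sample limit visible in the example) and the min-samples question the bullet raises:

```cpp
#include <cstddef>
#include <vector>

struct Chunk {
    std::size_t begin; // first sample index (inclusive)
    std::size_t end;   // one past the last sample index
};

// Split [begin, end) so that every tag position starts a new chunk, i.e.
// tags only ever sit on the first sample of the chunk they belong to.
// tagPositions must be sorted global sample indices.
std::vector<Chunk> splitAtTags(std::size_t begin, std::size_t end,
                               const std::vector<std::size_t>& tagPositions) {
    std::vector<Chunk> chunks;
    std::size_t cur = begin;
    for (std::size_t pos : tagPositions) {
        // a tag on the very first sample does not force an empty chunk
        if (pos > cur && pos < end) {
            chunks.push_back({cur, pos});
            cur = pos;
        }
    }
    chunks.push_back({cur, end}); // trailing samples up to (e.g.) the EOS tag
    return chunks;
}
```

Run on the timeline above (t1 at global index 50, t2 at 92), the first chunk comes out as 50..91 with N = 42, which is exactly the "will/might violate min samples" case the DEFAULT bullet warns about.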

Tag Propagation

Defines how tags are propagated from input streams to output streams.

    ┌───────┐      ┌───────┐      ┌───────┐
   ┌┤       ├┐    ┌┤       ├┐    ┌┤       ├┐
   ││ ────► ││    ││ ────► ││    ││       ││
   └┤       ├┘    └┤  \ /  ├┘    └┤       ├┘
    │       │      │   X   │      │       │
   ┌┤       ├┐    ┌┤  / \  ├┐    ┌┤       ├┐
   ││ ────► ││    ││ ────► ││    ││       ││
   └┤       ├┘    └┤       ├┘    └┤       ├┘
    └───────┘      └───────┘      └───────┘
  `one_to_one`    `all_to_all`      `none`
  • all_to_all: every input tag is copied to every output port
  • one_to_one: tags on input port i are forwarded to output port i
  • none: no automatic propagation; the work function has to manually forward the tags to the correct output ports
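The three policies can be sketched as a single dispatch function. The enum and function below are illustrative (tag payloads are reduced to plain ints); the real implementation operates on per-port tag lists with sample offsets:

```cpp
#include <cstddef>
#include <vector>

// Illustrative policy enum mirroring the three modes in the diagram.
enum class TagPropagation { AllToAll, OneToOne, None };

// Tags per port are modelled as std::vector<std::vector<int>>; the payload
// type is a stand-in for the real tag structure.
std::vector<std::vector<int>> propagate(TagPropagation policy,
                                        const std::vector<std::vector<int>>& inTags,
                                        std::size_t nOutPorts) {
    std::vector<std::vector<int>> outTags(nOutPorts);
    switch (policy) {
    case TagPropagation::OneToOne: // input port i -> output port i
        for (std::size_t i = 0; i < inTags.size() && i < nOutPorts; ++i) {
            outTags[i] = inTags[i];
        }
        break;
    case TagPropagation::AllToAll: // every input tag reaches every output port
        for (auto& out : outTags) {
            for (const auto& in : inTags) {
                out.insert(out.end(), in.begin(), in.end());
            }
        }
        break;
    case TagPropagation::None: // block's work function forwards tags itself
        break;
    }
    return outTags;
}
```

One open design point not covered by this sketch is what one_to_one should do when input and output port counts differ; here surplus ports are simply left untouched.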

Resampling API

There are (at least) two ways to represent resampling information at the block level:

a) Numerator + Denominator: N = 100, M = 200

  • currently implemented
  • restricts the API to rational resampling
  • allows including the minimum number of samples needed to perform the resampling
  • N > M -> downsampling, N < M -> upsampling, N == M -> no resampling

b) Decimal Ratio R: M = R * N

  • directly expresses the ratio
  • R > 1 -> upsampling, R == 1 -> no resampling, R < 1 -> downsampling

The API also includes a stride parameter S to allow skipping of samples. Semantics: use N samples, consume S samples (special case S == 0 -> consume N samples, i.e. back-to-back processing); S < N -> overlapping chunks, S > N -> samples are skipped. How this is expressed currently depends on variant a).
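The bookkeeping implied by variant a) plus the stride semantics can be written out as two small helpers. The struct and function names are hypothetical illustrations of the rules stated above, not the library's API:

```cpp
#include <cstddef>

// Variant a): rational resampling, N input samples per chunk yield M output
// samples; stride S controls how many inputs are consumed per chunk.
struct Resampling {
    std::size_t inPerChunk;  // N
    std::size_t outPerChunk; // M
    std::size_t stride;      // S (0 means back-to-back processing)
};

// input samples consumed after one chunk of N samples was processed:
//   S == 0 -> consume N (back to back), S < N -> overlap, S > N -> skip
std::size_t consumed(const Resampling& r) {
    return r.stride == 0 ? r.inPerChunk : r.stride;
}

// output samples produced by nChunks complete chunks
std::size_t produced(const Resampling& r, std::size_t nChunks) {
    return nChunks * r.outPerChunk;
}
```

With N = 100, M = 200 (the upsampling example from variant a)), two back-to-back chunks consume 200 inputs and produce 400 outputs; setting S = 50 instead makes consecutive chunks overlap by half.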

@wirew0rm wirew0rm linked a pull request Mar 14, 2024 that will close this issue
@wirew0rm wirew0rm moved this from 🏗 In progress to Finished Implementation (2) in Digitizer Reimplementation Mar 21, 2024
@RalphSteinhagen RalphSteinhagen moved this from Finished Implementation (2) to ✅ QA-Accepted/Merged (∞) in Digitizer Reimplementation Mar 25, 2024