
Hackathon 2023


NEURON Simulator Hackathon (3rd Edition)

Schedule: September 25-28, 2023 (Monday - Thursday), 8:30 AM to 5:00 PM

Venue: Yale School of Medicine: Steiner Room, SHM L-210, 333 Cedar St, New Haven, CT

Slide Deck

Program

We are still confirming a few external speakers, but the structure of the event will look like this:

(schedule overview image)

Extra room reservations for small group hacking:
  • Mon: 9-11 E37; 2-4 E37
  • Tue: 8:30-12 E23; 1:30-4:45 E37
  • Wed: 8:30-12 E24; 1-4 E37
  • Thurs: 8:30-12 E24; 1-2 E36


Technical topics:

We think it is better to settle beforehand on a few topics that are worth tackling and well suited to the format of a hackathon, rather than create a huge list of topics of which most will be left untouched (we're still brainstorming topics, so please feel free to add more). These topics could be worked on alone, or ideally in pairs, to benefit from being together in one place.

NMODL code generation for NEURON

Background:

Currently NMODL code is generated using the old nocmodl transpiler. Meanwhile, the NMODL Framework has become an extremely versatile and mature modern DSL transpiler for NMODL that we are using successfully for CoreNEURON. We'd like to make it the only transpiler used in the NEURON toolchain, replacing nocmodl. Recently a lot of work has been done to support language features that were left aside initially (cf. #959 and #958). Ultimately we will need to write a new code generator in NMODL (following the model of the CoreNEURON-compatible code generator) that produces code compatible with NEURON. We don't expect that this item can be fully completed and made production-ready during the hackathon, but starting this work together, in person, would give us a good initial boost and help solve a lot of the initial questions that will come up.

Hackathon Goals / Tasks:

  • Get a good understanding of the existing implementation for CoreNEURON
  • Review nocmodl to understand what code needs to be generated, but also how things can be cleaned up / improved in the new code generation
  • Review parts of NEURON that are called from the generated mechanism code - are there easy opportunities to improve and/or simplify the API?
  • Create the scaffold for a new code generator (a rough sketch follows this list)
  • Add the necessary CLI interface to make the new code generator run (even if it produces nothing)
  • Extend unit tests to start testing new code generator
  • See which parts of the existing code generator could be easily ported, add corresponding tests, port code
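
To make "create the scaffold" more concrete, here is a rough sketch of what a NEURON-targeting generator class could look like. All class and method names below are hypothetical, not the actual NMODL Framework API; the real implementation would plug into the same AST/visitor machinery as the existing CoreNEURON code generator.

```cpp
// Hypothetical scaffold only: names are illustrative, not the real NMODL API.
#include <ostream>
#include <string>

namespace nmodl {
namespace codegen {

class CodegenNeuronVisitor {
  public:
    explicit CodegenNeuronVisitor(std::ostream& out)
        : out_(out) {}

    // Emit the per-mechanism registration hook that NEURON expects
    // (the rough equivalent of what nocmodl produces today).
    void print_mechanism_register(const std::string& mech_name) {
        out_ << "extern \"C\" void _" << mech_name << "_reg() {\n"
             << "    // TODO: register the mechanism with NEURON\n"
             << "}\n";
    }

    // Entry point: emit registration plus nrn_init/nrn_state/nrn_cur kernels,
    // ported incrementally from nocmodl.
    void generate(const std::string& mech_name) {
        print_mechanism_register(mech_name);
        // ... further kernels to be added step by step ...
    }

  private:
    std::ostream& out_;
};

}  // namespace codegen
}  // namespace nmodl
```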

Preparatory Tasks:

  • Refactor the NMODL C++ code generator for CoreNEURON to be simpler and more accessible (@tristan0x has started this). This code will at the very least be a blueprint for the NEURON code generator; in the best case it could even be refactored to provide a couple of generic classes that can be specialized for one or the other target
  • Improve code documentation of NMODL C++ code generator, maybe even so far that it could be built almost as a tutorial
  • Review nocmodl to understand what code needs to be generated, but also how things can be cleaned up / improved in the new code generation
  • Is build time an issue worth fixing? See https://github.com/BlueBrain/nmodl/issues/1012
  • Review parts of NEURON that are called from the generated mechanism code - are there easy opportunities to improve and/or simplify the API? (See also #2357)

Finalisation of the Pythonic API for NEURON

Background:

During the 2022 hackathon, @Helveg, @ramcdougal, @ferdonline and @nrnhines (see https://github.com/orgs/neuronsimulator/projects/3/views/1?pane=issue&itemId=5659828) started working on improvements for NEURON's Python API. Over the last couple of years, Python has become the preferred interface for driving NEURON models and simulations for many users. While the HOC interpreter will probably have to remain the underlying engine for NEURON, we think that the Python interface should become a first-class citizen and the main programmable interface for NEURON. This item is about building on top of the work that has already been done, extending it, and ideally getting to a point where we can merge some of the improvements that were started last time.

Preparatory Tasks:

Hackathon Goals/Tasks:

Finalisation of C API for NEURON

Background:

Significant work has gone into understanding how to control NEURON from C and C++ (see e.g. https://github.com/mcdougallab/neuron-c-api-demos); this work was used to implement support for running NEURON from MATLAB (see e.g. https://github.com/mcdougallab/matlabneuroninterface). Some of these examples (and the MATLAB support) were broken by #2027, which revamped how memory is managed in NEURON. To prevent this from being an issue in the future, #2357 was launched to formalize a C API that would allow MATLAB and other languages to interact with NEURON without needing to know the details of its memory management. Most of the functionality has been implemented, but the project has stalled with no one devoting time to it. The goal here is to complete the missing pieces and get the pull request merged.
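
As a reminder of the style such an API follows, here is a tiny sketch of the opaque-handle pattern. Every name below is invented for illustration (the real interface is the one being defined in #2357); the point is that callers such as MATLAB only ever see forward-declared handles, never the memory layout behind them.

```cpp
// Hypothetical illustration of an opaque-handle C API; none of these names
// are the actual #2357 interface. Because the struct layout is never exposed,
// internal memory-management changes like #2027 cannot break callers.
extern "C" {

typedef struct example_Section example_Section;  // opaque handle

example_Section* example_section_create(const char* name);
void             example_section_set_length(example_Section* sec, double microns);
double           example_section_get_length(const example_Section* sec);
void             example_section_destroy(example_Section* sec);

}  // extern "C"
```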

Preparatory Tasks:

Hackathon Goals/Tasks:

Code cleanup: dead-code removal, one way to do one thing

Background:

NEURON has accrued a lot of legacy and dead code. Over the last couple of years various developers have put significant work into code cleanup, but there is still a lot to go through. Specifically:

Hackathon Goals/Tasks:

  • Get rid of incompatibly licensed code
  • Remove code paths that are never used and are essentially dead
  • Remove disabled code that we know will not be used again or that will be completely replaced in the future (we can always revisit it via the git history)
  • Remove branches that we won't be working on anymore, that are abandoned work

Preparation:

  • Review branches (last commit, committer, topic) and compile a table
  • Review code for #if 0 and similar guards that disable code and note down findings - to discuss with Michael
  • Check if the tools here could be of help.

Tackling a few essential SoA data structure todos

Background:

Olli has provided a number of possible next steps, described here: Data Structures / Future Work. For the workshop, we can choose one or at most two data-structure-related todos that are of high importance and would be good candidates for the hackathon.

Possible tasks to work on are:

Preparation:

  • Check the topics that are doable in a short window and worth hacking!
  • Start a prototype implementation before the hackathon. Contact Michael if specific help is needed.

Hackathon Goals/Tasks:

  • Draft PR implementing the selected topics
  • Help to develop a good understanding of NEURON SoA data structures (e.g. discussions with Michael & Robert; a toy AoS-vs-SoA illustration follows below)
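
For anyone ramping up on the topic before the hackathon, here is a minimal, NEURON-independent illustration of the AoS-vs-SoA distinction that the new data structures are built around (this is not NEURON's actual container code):

```cpp
// Minimal illustration of AoS vs SoA, unrelated to NEURON's real containers:
// the SoA layout keeps each field contiguous, which is what enables
// vectorization and cache-friendly sweeps over all instances of a mechanism.
#include <cstddef>
#include <vector>

// Array-of-Structures: the fields of one instance are adjacent, while the
// same field across instances is strided.
struct NodeAoS {
    double voltage;
    double area;
};

// Structure-of-Arrays: each field lives in its own contiguous array.
struct NodesSoA {
    std::vector<double> voltage;
    std::vector<double> area;

    std::size_t size() const { return voltage.size(); }
};

// A sweep over the SoA layout touches one contiguous array, so the compiler
// can vectorize it easily.
void scale_voltages(NodesSoA& nodes, double factor) {
    for (std::size_t i = 0; i < nodes.size(); ++i) {
        nodes.voltage[i] *= factor;
    }
}
```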

Improving CoreNEURON's File Mode: Replace per-rank files with a single file

Background:

For CoreNEURON execution with file-mode model transfer, up to 4 files are created for each rank: 2 files for the neuron model, 1 for gap junctions, and 1 for report mapping. While this setup is suitable for a small number of MPI ranks, large-scale simulations with BBP models have highlighted an issue. In such cases, the simulation generates a substantial number of files: 800 nodes x 40 ranks per node x 32 model-building steps x 3 files per rank leads to a total of 3,072,000 files. This significantly impacts I/O performance on compute clusters.

Hackathon Goal:

The objective for this task during the hackathon is as follows:

  • Revise the current approach of generating 2 to 4 files per rank.
  • Implement an alternative method: either generate a single file for all ranks or adopt a single file per compute node.

Note: We are concentrating on straightforward enhancements, not introducing MPI I/O or any other complex implementation. Instead, we'll work with the existing setup while adopting a single-file approach (or minimizing the file count) by writing the model data at pre-calculated offsets. This won't increase the number of write I/O operations, but it will address the challenge of metadata operations and the excessive number of files.
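
A minimal sketch of the offset bookkeeping this implies is shown below. The names and file layout are made up for illustration (in the real code the per-rank sizes would come from something like part1().rankbytes): each rank reports its byte count, an exclusive prefix sum turns that into a byte offset, and every rank then writes its own non-overlapping region of one shared file using plain file I/O.

```cpp
// Sketch only, not the real nrncore_write.cpp logic. Assumes MPI is already
// initialized and every rank knows how many bytes of model data it will write.
#include <mpi.h>
#include <cstdio>
#include <vector>

void write_single_file(const std::vector<char>& my_model_bytes,
                       const char* filename,
                       MPI_Comm comm) {
    long my_size = static_cast<long>(my_model_bytes.size());
    long my_offset = 0;

    // Exclusive prefix sum of the per-rank sizes gives each rank its offset.
    MPI_Exscan(&my_size, &my_offset, 1, MPI_LONG, MPI_SUM, comm);

    int rank = 0;
    MPI_Comm_rank(comm, &rank);
    if (rank == 0)
        my_offset = 0;  // MPI_Exscan leaves rank 0's output undefined

    // Rank 0 creates (truncates) the file once; everyone else waits, then all
    // ranks open it independently and write at their precomputed offsets.
    if (rank == 0)
        std::fclose(std::fopen(filename, "wb"));
    MPI_Barrier(comm);

    std::FILE* f = std::fopen(filename, "r+b");
    std::fseek(f, my_offset, SEEK_SET);  // real code would use 64-bit offsets
    std::fwrite(my_model_bytes.data(), 1, my_model_bytes.size(), f);
    std::fclose(f);
}
```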

Preparatory Tasks:

  • Review the existing implementation of nrncore_write.cpp in NEURON.
  • Enhance the current implementation to accurately calculate the model size:
    • Refer to the part1().rankbytes variable.
    • Note potential discrepancies in the current model size calculation due to additional factors: 1) writing specific elements in ASCII format and 2) inclusion of extra IDs for error validation when writing int/float vectors.
  • Prior to the workshop, develop a prototype for calculating the model size and offset. Explore the possibility of updating the part2() function to calculate the model size without performing the actual writing.

These details are based on ongoing discussions with @sergiorg-hpc and his work in progress.

NEURON as Fast as CoreNEURON: CPU Performance Showcase

Background:

In terms of execution speed, CoreNEURON holds a 2-4x CPU speed advantage over NEURON due to: 1) SoA memory layout for mechanism/model data, and 2) Auto-vectorization with proper compiler hints. Recent work by Olli Lupton introduced SoA memory layout to NEURON, showing up to 2x improvements. The remaining CPU performance gap likely results from a lack of auto-vectorization.

Hackathon Goal:

During the hackathon, we aim to:

  • Verify if the CPU performance gap is due to lack of auto-vectorization.
  • Introduce #pragma ivdep or #pragma omp simd to check whether the compiler auto-vectorizes the kernels, potentially matching CoreNEURON's CPU performance.
  • If a performance gap remains, investigate the cause.

Note that we prefer simple experiments that can be done within a day or two. To give an idea:

  • Begin with a ringtest, our standard benchmark (and channel-benchmark later if needed).
  • Insert #pragma ... in the nrn_state() and nrn_cur() kernels in hh.cpp (see the sketch after this list).
  • And then:
    • (Re-)compile the test
    • Review vectorization report from the (Intel) compiler
    • Compare NEURON vs CoreNEURON performance
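
A rough sketch of what that single-kernel experiment looks like; the loop and variable names below are made up and are not the actual generated hh.cpp code:

```cpp
// Illustration only: an OpenMP SIMD hint on an SoA state-update loop so the
// compiler is allowed to auto-vectorize it (build with e.g. -fopenmp-simd).
#include <cmath>

void nrn_state_example(int n, double* m, const double* minf,
                       const double* mtau, double dt) {
    // Without the hint, many compilers refuse to vectorize because they
    // cannot prove that the pointers do not alias.
    #pragma omp simd
    for (int i = 0; i < n; ++i) {
        m[i] += (1.0 - std::exp(-dt / mtau[i])) * (minf[i] - m[i]);
    }
}
```

After recompiling, the compiler's vectorization report (or a before/after benchmark run) tells us whether the kernel was actually vectorized.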

The main motivation for this task is to validate our assumption that NEURON, with the recent SoA data structure changes and improved NMODL code generation, can achieve performance similar to CoreNEURON. Note that we had auto-vectorization with MOD2C-generated kernels and that needed only minimal changes (a few lines). Thus, we believe NEURON with a generated kernel can attain comparable performance.

Preparatory Tasks:

  • Check how we use the ring test and channel-benchmark for benchmarking NEURON vs CoreNEURON.
  • Build NEURON+CoreNEURON with Caliper instrumentation and the Intel compiler. Understand the generated reports and how we cross-check the speedup of MOD file kernels.
  • As a reference, check how we were printing pragmas on the MOD2C side.
  • Check how to introduce pragmas in nocmodl:

Note that there are two versions of the kernel-printing code in noccout.c. We should look at the code in c_out_vectorize() and not c_out().

Other project ideas

Here are some other project ideas that came up that we think one could look into. Some are potentially way too big for the scope of the hackathon, but one could make a start and see; others are smaller, more experimental things to play with.

Modern Python bindings for NEURON

Background:

NEURON currently creates its Python bindings by using the Python C API directly. While that offers full control and works for C sources, it is a tedious, error-prone process, especially because of the need to manage reference counts correctly. With the transition of NEURON to C++ we can use the C++ Python APIs that some projects offer. Namely, nanobind implements "wrapper" classes as part of its components, which we can use. This would avoid bugs like those affecting Meaningful Python types #1858. In the context of the hackathon we could do a proof of concept.

The nanobind bindings should work as long as the nrn code to be wrapped is compiled as C++. We could do this task very progressively, i.e. transitioning a function to use the wrapper classes should not require any changes in the rest of the code.
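
A minimal sketch of what the wrapper-class style buys us. The function and module name below are invented for illustration and are not existing NEURON code; the contrast with today's manual Py_INCREF/Py_DECREF bookkeeping is the point.

```cpp
// Sketch only: a made-up function showing nanobind's owning wrapper types.
// With the raw Python C API, the equivalent code would juggle PyObject*
// pointers and manual Py_DECREF calls on every path, which is exactly where
// bugs like #1858 creep in.
#include <nanobind/nanobind.h>

namespace nb = nanobind;

// nb::list owns its reference; no manual reference counting on any path.
nb::list make_voltage_list(double v_init) {
    nb::list values;
    values.append(v_init);  // conversion and refcounting handled by nanobind
    return values;
}

NB_MODULE(example_ext, m) {
    m.def("make_voltage_list", &make_voltage_list);
}
```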

This will be a big task, maybe not as big as changing the data structures, but still.

Tasks during the hackathon:

  • Set up the new library
  • Transition the implementation of one or two functions as a proof of concept. Hopefully from that point on any developer could follow the same path; in the long run we would move most of it, and new development should adopt the new way

Things to consider:

  • Check what problems we are solving and whether they are shared by other devs (Robert/Michael)
  • The role of #2357, which would provide a stable API and is based on the work done for the NEURON toolbox for MATLAB. Consider connecting to this API rather than worrying about NEURON's internal interface, which largely masks the class-object relationship of the interface.
  • How complex is this task, and can we realistically get it done in a few weeks or maybe 1-2 months max?
  • The current bindings might be considered "old," but they're mostly stable and rarely need more than minor changes?
  • Native Windows builds: explore whether a Python bdist wheel is easily achievable, and look into dynamic library loading on Windows, which could get us one step closer to a native build (without IV support; that will need to be looked at separately)
  • PIN tool for NEURON to build a call graph, discover dead code, etc.
  • Try out SonarSource

Tutorial topics

Possible lightning talks (5-10 min how-tos):

  • Refactor src/gnu (https://github.com/neuronsimulator/nrn/issues/1330)
  • Modern C++ idioms used in NEURON
  • Continuous integration
    • live debugging
  • neuron documentation
    • nrn.readthedocs.io
  • neuron building and CI
    • wheel building, pypi, neuron-nightly
    • nrn-modeldb-ci
  • testing
    • how to write
    • how to run one (or a few, see all output, run directly)
    • Python idioms for tests
    • extra coverage resulting from a new test.

Things to look at before the hackathon:

  • NMODL sympy and Eigen

Useful Links
