
[WIP] More efficient template head model


To build a head model based on Colin27_4NIRS, precomputed volumetric fluences are available online. The current head model computation takes, for each optode pair, the product of the two fluence volumes and interpolates it onto the cortical surface. The output of the head model is a sensitivity matrix of size nb_pairs x nb_wavelengths x nb_vertices.
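
As a rough illustration of this pipeline, here is a minimal Python sketch. All names in it (`sensitivity_for_pair`, `build_head_model`, the `interp` operator) are hypothetical, and the sparse on-disk storage of fluences is glossed over:

```python
import numpy as np

def sensitivity_for_pair(fluence_src, fluence_det, interp):
    """Sensitivity map of one optode pair on the cortical surface.

    fluence_src, fluence_det: flat (nb_voxels,) fluence volumes of the
    two optodes (densified from their sparse on-disk form).
    interp: (nb_vertices, nb_voxels) volume-to-surface interpolation
    operator.
    """
    volume_sensitivity = fluence_src * fluence_det  # voxel-wise product
    return interp @ volume_sensitivity              # project onto cortex

def build_head_model(fluences, pairs, wavelengths, interp):
    """Assemble the nb_pairs x nb_wavelengths x nb_vertices matrix."""
    A = np.zeros((len(pairs), len(wavelengths), interp.shape[0]))
    for ip, (src, det) in enumerate(pairs):
        for iw, wl in enumerate(wavelengths):
            A[ip, iw] = sensitivity_for_pair(fluences[src][wl],
                                             fluences[det][wl], interp)
    return A
```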

There are two bottlenecks:

- File size: one fluence volume takes ~1 MB, and there are 7800 vertices that can serve as optode sites on the scalp. That makes ~7 GB of data to host online. However, since a montage rarely involves more than 60 optodes, the local disk usage is only ~60 MB.
- Computation time: building the sensitivity matrix involves computing one product of two fluence volumes for each optode pair, as in the sketch above. A fluence volume has the size of the MRI; even though it is stored as a flat sparse matrix, the product works on full volumetric arrays, which takes time. In addition, the interpolation onto the cortical surface is also time consuming.

Proposed solution: offer precomputed sensitivity maps for each admissible pair of scalp vertices. This would solve the computation-time issue by avoiding both the fluence products and the surface interpolation.

In terms of memory usage: the number of vertices that can be optode sites is 7800. For each of them, only a limited number of vertices can be paired: those that are neither too close (>= 1 cm) nor too far (<= 5 cm), which is quite permissive. That makes ~750 candidates per vertex. For each of these possible pairs, a sensitivity map is no larger than 1200 bytes (for a low-resolution cortical surface of 5000 vertices). The set of all candidate sensitivity maps for one vertex would then take 750 x 1200 bytes ≈ 0.9 MB.
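
For illustration, the candidate pairs could be enumerated along these lines (hypothetical sketch; `scalp_vertices` and the distance bounds come from the figures above):

```python
import numpy as np
from scipy.spatial.distance import cdist

def candidate_pairs(scalp_vertices, min_dist=0.01, max_dist=0.05):
    """For each scalp vertex, indices of the vertices it can pair with.

    scalp_vertices: (n, 3) coordinates in meters; 1 cm / 5 cm are the
    permissive bounds quoted above. For n = 7800 the full distance
    matrix takes ~0.5 GB, so a real implementation might chunk it.
    """
    d = cdist(scalp_vertices, scalp_vertices)   # pairwise distances
    mask = (d >= min_dist) & (d <= max_dist)
    return [np.flatnonzero(row) for row in mask]
```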

If a precomputed map is not available for a given pair, the process would fall back to the fluence-based computation.
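
A minimal sketch of this lookup-with-fallback, assuming the precomputed maps sit in some dict-like store (all names hypothetical):

```python
def pair_sensitivity(src, det, wavelength, precomputed, compute_from_fluences):
    """Return the sensitivity map of one pair, preferring the
    precomputed store; compute_from_fluences stands in for the current
    fluence-product pipeline sketched earlier."""
    key = (src, det, wavelength)
    if key in precomputed:
        return precomputed[key]
    return compute_from_fluences(src, det, wavelength)
```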

The amount of data to store on the server would be around 6 GB. The amount of data to download for a montage with 60 optodes would be around 55 MB.
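
As an order-of-magnitude check of these figures (the exact server total depends on whether the map of a symmetric pair is stored once or once per endpoint, which the two bounds below bracket):

```python
per_vertex_mb = 750 * 1200 / 1e6       # ~0.9 MB per scalp vertex
print(7800 * per_vertex_mb / 1e3)      # ~7 GB if stored per vertex
print(7800 * per_vertex_mb / 1e3 / 2)  # ~3.5 GB if symmetric pairs share one map
print(60 * per_vertex_mb)              # ~54 MB for a 60-optode montage
```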

This is a bit better in terms of disk usage, but we would still have to keep the fluence volume data for the fallback. The main advantage would be saving around 2 minutes of computation time.

Maybe not worth the work...