Home
With the advent of technology, medical science has moved beyond many of its traditional approaches, and histopathology is another field that has benefited, and can benefit further, from it. Traditionally, after slide preparation, pathologists had to examine each slide individually in order to make a diagnosis. Using volume rendering techniques, one can instead generate a 3-dimensional model of the tissue from the slides, enabling better histopathologic analysis.
We aim to create an application which provides a user with the following functionalities:
- Send a set of virtual slides (.svs, .tif, etc.)
- Generate a 3D volume using the provided set of slides
- Given that the slides can be very large in size, come up with a suitable architecture to manage data.
- Provide a set of UI based functionalities:
- Add a Slicing feature using an arbitrary plane or axis-aligned planes
- View the Cross-Section
- Select a sub-volume
- View certain artifacts based on their color
- Draw an ROI
- Zoom/scale the volume, replacing the current-resolution image with an appropriately higher-resolution section
- Annotations in 3D → provide an option to annotate artifacts in 3D
- Some WSIs provide polylines → generate a marked region in the 3D rendering from a set of points
- Alignment of different slides
- When we stack multiple slides, it is important to understand which section of each slide corresponds to a section in another.
- Furthermore, two slides created from the same tissue may not be aligned corner to corner, so it is important to correct such misalignments.
- Refer to the Inviwo documentation.
- Ensure the Ubuntu version supports Qt.
- Use the CMake GUI to select the necessary modules and add `CMAKE_PREFIX_PATH`.
- For any "module not found" error:
  - Run `apt-file search <module name>`
  - Install the suggested packages
  - Example: for a `Qt5Config.cmake not found` error, run `$ apt-file search Qt5Config` and then `$ sudo apt-get install qtbase5-dev qtdeclarative5-dev`.
- If certain issues still persist, refer to the Slack channel of Inviwo.
- Download the OpenSlide source code.
- Dependencies for OpenSlide:
  - OpenJPEG: `sudo apt install libopenjp2-7 libopenjp2-tools libopenjp2-7-dev`
  - Cairo: `sudo apt install libcairo2-dev`
  - GDK-Pixbuf: `sudo apt install libgdk-pixbuf2.0-dev`
  - SQLite 3: `sudo apt-get install sqlite3 libsqlite3-dev`
- Go to the source code directory and execute `./configure`, then `make`, then `make install`.
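Depending on the setup, `make install` may need `sudo`; running `sudo ldconfig` afterwards refreshes the dynamic linker cache so the newly installed OpenSlide library can be found at runtime.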
- The Inviwo documentation provides a simple approach for creating new modules.
- If the new module requires a dependency on another module, add it to `depends.cmake`. Ensure that there are no circular dependencies: if module A depends on module B, module B cannot depend on module A; the build system expects the dependency graph to be a DAG.
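For instance, a module that builds on Inviwo's BaseGL module would list `InviwoBaseGLModule` in its `depends.cmake` (conventionally inside the `set(dependencies ...)` call); the module name here is only an example.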
Add an image of the network, and upload the .inv file along with it.
Currently a user gets the following features:
- A user can provide slides in a supported format, from which a 3D volume is generated. Currently, for the purpose of the demo, we replicate a single slide and stack the copies.
3D volume rendering and axis-aligned volume cropping
- The user can choose between Voxel-value classification, as seen above, and a Transfer Function classification, which has been modified from the original processor. This method allows the user to select colors from the original slide and render only those artifacts with the same color. A Threshold property lets the user select a radius around the mouse click from which unique colors are captured.
Selecting the required colors and removing the rest to render only the cells
- A user can perform arbitrary-plane slicing of the volume and view the cross-section.
Defining the plane using the plane equation, by specifying a point on the plane and the normal to the plane
- For understanding any of the classes, the Inviwo documentation provides an inheritance chart which makes things easier to follow.
- Inviwo uses an indirect approach to setting up the shader variables, as explained for the VolumeRaycaster in the section below.
-
Understanding the existing Volume Raycaster and the supporting fragment shader
The existing processor uses a transfer function to determine how the output looks, but we had to come up with a different approach.
The 4×256 grid shown at the bottom of the image represents the isovalues, and the isovalues represent intensity. Say we determine an x–y range of isovalues representing an artifact, say a cell. The transfer function, however, gives us no information about the color or the position, which is its major drawback: another entity may have the same intensity, and we do not want to assign both the same color.
**A probable solution:**
For any color we have three channels (excluding the alpha value). Split the single transfer function into three transfer functions, each taking its values from one channel (R, G, B) of the original color, and assign the nodes the original color. This way we are able to specify the opacity for specific colors. A sketch of the idea follows.
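A minimal sketch of the idea in plain C++ (not Inviwo API; names such as `ChannelTF` and `addNode` are made up for illustration): each channel gets its own 1D opacity table, a selected color adds a node to each of the three tables at the corresponding channel value, and a voxel keeps its own color while its opacity comes from the three tables combined (here simply via a minimum).

```cpp
#include <algorithm>
#include <array>
#include <cstddef>

// One 1D opacity lookup table per color channel.
struct ChannelTF {
    std::array<float, 256> opacity{};  // opacity as a function of the channel intensity
};

struct RGBA { float r, g, b, a; };

// Add a node: mark a small window around the selected color's channel value as opaque.
void addNode(ChannelTF& tf, float channelValue, float alpha, int radius = 2) {
    const int center = static_cast<int>(channelValue * 255.0f);
    for (int i = std::max(0, center - radius); i <= std::min(255, center + radius); ++i) {
        tf.opacity[static_cast<std::size_t>(i)] = alpha;
    }
}

// A voxel keeps its original color; its opacity is the combination (here: minimum)
// of the three per-channel transfer functions evaluated at the voxel's channel values.
RGBA classify(const ChannelTF& tfR, const ChannelTF& tfG, const ChannelTF& tfB, RGBA voxel) {
    auto lookup = [](const ChannelTF& tf, float v) {
        return tf.opacity[static_cast<std::size_t>(v * 255.0f)];
    };
    const float a = std::min({lookup(tfR, voxel.r), lookup(tfG, voxel.g), lookup(tfB, voxel.b)});
    return {voxel.r, voxel.g, voxel.b, a};
}
```

With this, selecting a color (r, g, b) and calling `addNode` on each table with that color's channel value makes voxels of approximately that color opaque while everything else stays transparent.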
---
**How is the transfer function applied?**
```cpp
void MultichannelRaycaster::initializeResources() {
utilgl::addShaderDefines(shader_, raycasting_);
utilgl::addShaderDefines(shader_, camera_);
utilgl::addShaderDefines(shader_, lighting_);
utilgl::addShaderDefines(shader_, positionIndicator_);
utilgl::addShaderDefinesBGPort(shader_, backgroundPort_);
if (volumePort_.hasData()) {
size_t channels = volumePort_.getData()->getDataFormat()->getComponents();
auto tfs = transferFunctions_.getPropertiesByType<TransferFunctionProperty>();
for (size_t i = 0; i < tfs.size(); i++) {
tfs[i]->setVisible(i < channels ? true : false);
}
std::stringstream ss;
ss << channels;
shader_.getFragmentShaderObject()->addShaderDefine("NUMBER_OF_CHANNELS", ss.str());
std::stringstream ss2;
for (size_t i = 0; i < channels; ++i) {
ss2 << "color[" << i << "] = APPLY_CHANNEL_CLASSIFICATION(transferFunction" << i + 1
<< ", voxel, " << i << ");";
}
shader_.getFragmentShaderObject()->addShaderDefine("SAMPLE_CHANNELS", ss2.str());
shader_.build();
}
}
```
Look at the loop at the end of `initializeResources()`: the string `ss2` assigns each `color[i]` from the GLSL macro `APPLY_CHANNEL_CLASSIFICATION(transferFunction, voxel, channel)`. The macro itself is defined by the following code in `shaderutils.cpp`.
```cpp
// classification (default (red channel) or specific channel)
std::string_view value;
std::string_view valueMulti;
switch (property.classification_.get()) {
case RaycastingProperty::Classification::None:
value = "vec4(voxel.r)";
valueMulti = "vec4(voxel[channel])";
break;
case RaycastingProperty::Classification::TF:
value = "applyTF(transferFunc, voxel.r)";
valueMulti = "applyTF(transferFunc, voxel, channel)";
break;
case RaycastingProperty::Classification::Voxel:
default:
value = "voxel";
valueMulti = "voxel";
break;
}
const std::string_view key = "APPLY_CLASSIFICATION(transferFunc, voxel)";
const std::string_view keyMulti =
"APPLY_CHANNEL_CLASSIFICATION(transferFunc, voxel, channel)";
shader.getFragmentShaderObject()->addShaderDefine(key, value);
shader.getFragmentShaderObject()->addShaderDefine(keyMulti, valueMulti);
```
Notice that `addShaderDefine()` sets the value of the key generated above to one of the strings chosen by the classification mode. We also define a `bgKey` which is replaced by its value later:
```cpp
void addShaderDefinesBGPort(Shader& shader, const ImageInport& port) {
std::string_view bgKey = "DRAW_BACKGROUND(result,t,tIncr,color,bgTDepth,tDepth)";
if (port.isConnected()) {
shader.getFragmentShaderObject()->addShaderDefine("BACKGROUND_AVAILABLE");
shader.getFragmentShaderObject()->addShaderDefine(
bgKey, "drawBackground(result,t,tIncr,color,bgTDepth,tDepth)");
} else {
shader.getFragmentShaderObject()->removeShaderDefine("BACKGROUND_AVAILABLE");
shader.getFragmentShaderObject()->addShaderDefine(bgKey, "result");
}
}
```
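As a concrete example (assuming `Classification::TF` and a two-channel volume), the generated line `color[1] = APPLY_CHANNEL_CLASSIFICATION(transferFunction2, voxel, 1);` is turned by the define above into `color[1] = applyTF(transferFunction2, voxel, 1);` when the fragment shader is preprocessed.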
Let's look at the `applyTF` function definition.
```glsl
#ifndef IVW_CLASSIFICATION_GLSL
#define IVW_CLASSIFICATION_GLSL
vec4 applyTF(sampler2D transferFunction, vec4 voxel) {
return texture(transferFunction, vec2(voxel.r, 0.5));
}
vec4 applyTF(sampler2D transferFunction, vec4 voxel, int channel) {
return texture(transferFunction, vec2(voxel[channel], 0.5));
}
vec4 applyTF(sampler2D transferFunction, float intensity) {
return texture(transferFunction, vec2(intensity, 0.5));
}
#endif // IVW_CLASSIFICATION_GLSL
```
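The fixed y-coordinate of 0.5 is there because the transfer function is uploaded as a 2D texture that is only one texel tall, so the normalised intensity selects the position along x while 0.5 samples the centre of that single row.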
Here we get the value for each voxel based on the transfer function. Now that we have the calculated values, let's look at what `shader_.getFragmentShaderObject()->addShaderDefine("SAMPLE_CHANNELS", ss2.str());` does.
```glsl
while (t < tEnd) {
samplePos = entryPoint + t * rayDirection;
voxel = getNormalizedVoxel(volume, volumeParameters, samplePos);
// macro defined in MultichannelRaycaster::initializeResources()
// sets colors;
SAMPLE_CHANNELS;
result = DRAW_BACKGROUND(result, t, tIncr, backgroundColor, bgTDepth, tDepth);
result = DRAW_PLANES(result, samplePos, rayDirection, tIncr, positionindicator, t, tDepth);
if (color[0].a > 0 || color[1].a > 0 || color[2].a > 0 || color[3].a > 0) {
// World space position
vec3 worldSpacePosition = (volumeParameters.textureToWorld * vec4(samplePos, 1.0)).xyz;
gradients = COMPUTE_ALL_GRADIENTS(voxel, volume, volumeParameters, samplePos);
for (int i = 0; i < NUMBER_OF_CHANNELS; ++i) {
color[i].rgb =
APPLY_LIGHTING(lighting, color[i].rgb, color[i].rgb, vec3(1.0),
worldSpacePosition, normalize(-gradients[i]), toCameraDir);
result = APPLY_COMPOSITING(result, color[i], samplePos, voxel, gradients[i], camera,
raycaster.isoValue, t, tDepth, tIncr);
}
}
// early ray termination
if (result.a > ERT_THRESHOLD) {
t = tEnd;
} else {
t += tIncr;
}
}
```
Notice `SAMPLE_CHANNELS`: this is the macro whose value is set to the `ss2` string in *multichannelraycaster.cpp*. So at this point in the shader, the `color` array has been filled by the transfer functions.
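Also note the last branch of the loop: it implements early ray termination. Once the accumulated opacity `result.a` exceeds `ERT_THRESHOLD`, further samples cannot noticeably change the pixel anymore, so `t` is set to `tEnd` and the march stops.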
**How is the Transfer Function setup in the code for a VolumeRaycaster?**
Its setup is in `volumeraycaster.cpp`. We don't define a transfer function directly, but use an `IsoTFProperty`, which contains a transfer function as one of its attributes; in the code its object is `isotfComposite_`. In the `initializeResources()` function, `utilgl::addDefines()` is called, which (see *line 489* of `shaderutils.cpp`) sets up the property as follows:
```cpp
void addShaderDefines(Shader& shader, const IsoTFProperty& property) {
addShaderDefines(shader, property.isovalues_);
}
void addShaderDefines(Shader& shader, const IsoValueProperty& property) {
const auto isovalueCount = property.get().size();
// need to ensure there is always at least one isovalue due to the use of the macro
// as array size in IsovalueParameters
shader.getFragmentShaderObject()->addShaderDefine(
"MAX_ISOVALUE_COUNT", StrBuffer{"{}", std::max<size_t>(1, isovalueCount)});
shader.getFragmentShaderObject()->setShaderDefine("ISOSURFACE_ENABLED",
!property.get().empty());
}
```
So this only sets `MAX_ISOVALUE_COUNT` and `ISOSURFACE_ENABLED`; at this point the transfer function attribute of `isotfComposite_` hasn't been used yet.
Then, in `raycast()`, `utilgl::bindAndSetUniforms(shader_, units, isotfComposite_)` is called; this is where the transfer function comes into play. Its definition is in `textureutils.cpp`:
```cpp
void bindTexture(const IsoTFProperty& property, const TextureUnit& texUnit) {
if (auto tfLayer = property.tf_.get().getData()) {
auto transferFunctionGL = tfLayer->getRepresentation<LayerGL>();
transferFunctionGL->bindTexture(texUnit.getEnum());
}
}
void bindAndSetUniforms(Shader& shader, TextureUnitContainer& cont, const IsoTFProperty& property) {
TextureUnit unit;
bindTexture(property, unit);
shader.setUniform(property.tf_.getIdentifier(), unit);
cont.push_back(std::move(unit));
}
```
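The `TextureUnit` is moved into the `TextureUnitContainer` so that the unit, and the transfer function texture bound to it, stays reserved until rendering is done; the uniform named after the transfer function's identifier tells the shader which unit to sample.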
-
Understanding the CustomVolumeRaycaster
Since the transfer function method presented several issues, we decided on a different, simpler approach: render only those colors specified by the user. This doesn't provide control over opacity yet, but that can be a future improvement.
First we wrote a custom fragment shader to be used for this purpose. The important function here was `getColorVal`:
```glsl
vec4 getColorVal(vec4 colorArray[MAX_COLORS], vec4 voxel) {
    if (colorLen == 0) return voxel;
    for (int i = 0; i < colorLen; i++) {
        if (voxel.r <= colorArray[i].r + 0.01 && voxel.r >= colorArray[i].r - 0.01) {
            if (voxel.g <= colorArray[i].g + 0.01 && voxel.g >= colorArray[i].g - 0.01) {
                if (voxel.b <= colorArray[i].b + 0.01 && voxel.b >= colorArray[i].b - 0.01) {
                    if (colorArray[i].a != 0.0) {
                        return voxel;
                    } else {
                        return vec4(0.0, 0.0, 0.0, 0.0);
                    }
                }
            }
        }
    }
    return vec4(0.0, 0.0, 0.0, 0.0);
}
```
One thing to take care of: we must either clear the memory allocated earlier in the fragment shader for storing the selected color values, or ensure that we never iterate over memory that is not in use.
Next we defined a new function in `shaderutils.cpp` to accommodate the changes, with reasoning similar to that explained earlier:
```cpp
void addShaderDefines(Shader& shader, const CustomRaycastingProperty& property) {
    {  // rendering type
        switch (property.renderingType_.get()) {
            case CustomRaycastingProperty::RenderingType::Dvr:
            default:
                shader.getFragmentShaderObject()->addShaderDefine("INCLUDE_DVR");
                shader.getFragmentShaderObject()->removeShaderDefine("INCLUDE_ISOSURFACES");
                break;
        }
    }
    {  // classification (default (red channel) or specific channel)
        std::string_view value;
        std::string_view valueMulti;
        switch (property.classification_.get()) {
            case CustomRaycastingProperty::Classification::None:
                value = "vec4(voxel.r)";
                valueMulti = "vec4(voxel[channel])";
                break;
            case CustomRaycastingProperty::Classification::TF:
                value = "getColorVal(pointValue, voxel)";
                valueMulti = "getColorVal(pointValue, voxel, channel)";
                break;
            case CustomRaycastingProperty::Classification::Voxel:
            default:
                value = "voxel";
                valueMulti = "voxel";
                break;
        }
        const std::string_view key = "APPLY_CLASSIFICATION(pointValue, voxel)";
        const std::string_view keyMulti =
            "APPLY_CHANNEL_CLASSIFICATION(pointValue, voxel, channel)";
        shader.getFragmentShaderObject()->addShaderDefine(key, value);
        shader.getFragmentShaderObject()->addShaderDefine(keyMulti, valueMulti);
    }
}
```
Remember to add any new shader files to the CMakeLists.txt.
-
Understanding Custom Pixel Value Processor
The `getRegionColors` function is the main function of this processor. Given any coordinates, it appends the unique colors to a vector:
```cpp
void CustomPixelValue::getRegionColors(size2_t pos) {
    auto img = inport_.getData();
    auto dims = img->getDimensions();
    auto numCh = img->getNumberOfColorLayers();
    for (size_t i = 0; i < numCh; i++) {
        img->getColorLayer(i)
            ->getRepresentation<LayerRAM>()
            ->dispatch<void, dispatching::filter::All>([&](const auto layer) {
                using ValueType = util::PrecisionValueType<decltype(layer)>;
                using Comp = typename util::value_type<ValueType>::type;
                const auto data = layer->getDataTyped();
                const auto im = util::IndexMapper2D(dims);
                auto value = data[im(pos)];
                auto v = util::glm_convert<glm::vec<4, Comp>>(value);
                v = util::applySwizzleMask(v, img->getColorLayer(i)->getSwizzleMask());
                auto vf = util::glm_convert<glm::vec<4, float>>(v);
                if constexpr (std::is_integral_v<Comp>) {
                    vf /= std::numeric_limits<Comp>::max();
                }
                std::vector<inviwo::vec4> selectedColorVal;
                for (auto p : selectedPixelsData_.getProperties()) {
                    auto t = static_cast<FloatVec4Property*>(p);
                    selectedColorVal.push_back(t->get());
                }
                if (std::find(selectedColorVal.begin(), selectedColorVal.end(), vf) ==
                    selectedColorVal.end()) {
                    std::string selectedPixelsIdentifier = "Color" + std::to_string(MaxColorVal_ + 1);
                    std::string selectedPixelsName = "Color " + std::to_string(MaxColorVal_ + 1);
                    selectedPixelsData_.addProperty(new FloatVec4Property(
                        selectedPixelsIdentifier, selectedPixelsName, vf, vec4(0), vec4(1),
                        vec4(std::numeric_limits<float>::epsilon()), InvalidationLevel::Valid,
                        PropertySemantics::Color));
                    MaxColorVal_++;
                }
                areaPixelsData_ = selectedColorVal;
            });
    }
    return;
}
```
Based on the GPU there is a memory limit, so we cannot store a large number of colors; currently the maximum `array_size` defined is 100 (see the sketch below).
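A minimal sketch of how that limit is typically respected (plain C++; `kMaxColors`, `packSelectedColors` and `Vec4` are illustrative names, not code from the repository): the selected colors are truncated to the fixed array size declared in the shader, the unused tail is cleared, and the number of valid entries is what the shader receives as its `colorLen`.

```cpp
#include <array>
#include <cstddef>
#include <vector>

struct Vec4 { float r, g, b, a; };

constexpr std::size_t kMaxColors = 100;  // must match MAX_COLORS in the fragment shader

// Copy at most kMaxColors selected colors into the fixed-size buffer handed to the
// shader and report how many of them are valid (the shader's colorLen).
std::size_t packSelectedColors(const std::vector<Vec4>& selected,
                               std::array<Vec4, kMaxColors>& uniformBuffer) {
    const std::size_t count = selected.size() < kMaxColors ? selected.size() : kMaxColors;
    for (std::size_t i = 0; i < count; ++i) uniformBuffer[i] = selected[i];
    // Zero the remainder so stale colors from an earlier selection are never matched.
    for (std::size_t i = count; i < kMaxColors; ++i) uniformBuffer[i] = {0.f, 0.f, 0.f, 0.f};
    return count;
}
```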
-
Shortcomings and Possible Solutions
- As mentioned above, there is a memory limitation on the number of color values that can be passed to the shader. Such a limitation affects the user experience.
- As of now, the most efficient solution is the per-channel transfer function method explained above. This can be achieved by modifying the already existing `multichannelraycaster`. The only issue it brings is a lot of work for the user, who has to set the color of each transfer function node, and that in three separate transfer function widgets. This can be solved, but it would require creating a custom transfer function setter.
The aim was to show a visual plane that crops the volume. The issue I found was that the Volume Raycaster expects coordinates in [-1, 1], while the property value stored in the MeshClipping processor is in the original coordinates. However, even testing with normalised coordinates did not produce the visual plane.
The plane appears at an offset from the cropping plane and isn't visible for all values
A better solution would be to explore the VolumeSliceGL processor. As can be seen in the diagnosticlayout_head.inv example, this processor shows the planes correctly; the remaining task would be to allow arbitrary plane equations instead of only axis-aligned ones.
-
Understanding Custom Image Stack Volume Processor
This processor is a modified version of the ImageStackVolumeSource processor built by the Inviwo community.
The aim was to stack a bunch of Whole Slide Images (WSIs) and convert them into 3D volume data. WSIs contain millions of pixels at very high resolution and can be as big as 10 GB each, so naively stacking multiple such images is not feasible.
-
How to load such big data then?
We can't load such big data, and we don't even need to! We load only the portion the user wants to see at a time; for a normal screen, a resolution of (1920, 1080) is sufficient.
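To get a feel for the numbers (the slide dimensions here are only illustrative): a level-0 slide of 100,000 × 80,000 pixels is 8 × 10⁹ pixels, i.e. roughly 24 GB of uncompressed RGB data, whereas a single 1920 × 1080 window is about 6 MB.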
A file pattern property is used to fetch and store the locations of the WSIs present on the local machine.
```cpp
// Constructor
filePattern_("filePattern", "File Pattern", "####.svs", "")
addProperty(filePattern_);
filePattern_.onChange([&]() { isReady_.update(); });
auto image_paths = filePattern_.getFileList();
```
These images are sent to `SlideExtractor(image_paths)`:
```cpp
// Process
void CustomImageStackVolumeProcessorSingle::process() {
    util::OnScopeExit guard{[&]() { outport_.setData(nullptr); }};
    myFile.open(CustomImageStackVolumeProcessorSingle::IMG_PATH,
                std::ios_base::out | std::ios_base::binary);
    myFile.close();
    if (filePattern_.isModified() || reload_.isModified() || skipUnsupportedFiles_.isModified() ||
        isRectanglePresent_.isModified() || level_.isModified() || coordinateX_.isModified() ||
        coordinateY_.isModified()) {
        myFile.open(CustomImageStackVolumeProcessorSingle::IMG_PATH,
                    std::ios_base::out | std::ios_base::binary);
        auto image_paths = filePattern_.getFileList();
        std::cout << "Current image path is : " << image_paths[0] << std::endl;
        SlideExtractor(image_paths[0]);
        volume_ = load();
        if (volume_) {
            basis_.updateForNewEntity(*volume_, deserialized_);
            information_.updateForNewVolume(*volume_, deserialized_);
        }
        deserialized_ = false;
        myFile.close();
    }
    if (volume_) {
        basis_.updateEntity(*volume_);
        information_.updateVolume(*volume_);
    }
    outport_.setData(volume_);
    guard.release();
}
```
`SlideExtractor()` uses the OpenSlide library to extract a particular region of the data. A WSI contains different levels of the image, so the user can choose a particular level (0 to MAX_LEVEL):
```cpp
// SlideExtractor
void CustomImageStackVolumeProcessorSingle::SlideExtractor(std::string PATH) {
    // std::cout << "Image path is : " << PATH << std::endl;
    openslide_t* op = openslide_open(PATH.c_str());
    int32_t level = level_.get();
    int64_t dim_lvlk[2];
    int64_t dim_lvl0[2];
    level_.setMaxValue(openslide_get_level_count(op) - 1);  // Setting max value of level of a given WSI
    if (op != 0) {
        openslide_get_level_dimensions(op, 0, &dim_lvl0[0], &dim_lvl0[1]);
        openslide_get_level_dimensions(op, level, &dim_lvlk[0], &dim_lvlk[1]);
    } else
        std::cout << "Please enter a valid image path" << std::endl;
}
```
Given the coordinates (top-left corner), level, width and height, it can read the pixel data of a rectangular cross-section:
```cpp
// SlideExtractor
start_x = coordinateX_.get(), start_y = coordinateY_.get(), w = 1920, h = 1080;
uint32_t* dest = (uint32_t*)malloc(size_img);
start_x *= (dim_lvl0[0] / (float)dim_lvlk[0]);  // Converting to level0 frame
start_y *= (dim_lvl0[1] / (float)dim_lvlk[1]);  // Converting to level0 frame
if (op != 0) {
    openslide_read_region(op, dest, floor(start_x), floor(start_y), level, w, h);
    openslide_close(op);
    std::cout << "in OP" << std::endl;
}
```
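This matches the OpenSlide API: `openslide_read_region()` expects `x` and `y` in the level-0 reference frame, which is why `start_x` and `start_y` are converted above, while `w` and `h` are given in pixels of the requested level.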
The RGB pixel data is extracted and written to a JPG file using the TooJpeg library:
```cpp
int bytesperpixel = 3;
auto pixels = new unsigned char[w * h * bytesperpixel];
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        uint32_t argb = dest[y * w + x];
        double red = (argb >> 16) & 0xff;
        double green = (argb >> 8) & 0xff;
        double blue = (argb >> 0) & 0xff;
        // std::cout << red << " " << green << " " << blue << std::endl;
        int offset = (y * w + x) * bytesperpixel;
        pixels[offset] = red;
        pixels[offset + 1] = green;
        pixels[offset + 2] = blue;
    }
}
TooJpeg::writeJpeg(ImgOutput, pixels, w, h);
```
The corresponding JPG images from the different WSIs are then loaded and stacked together into 3D volume data using `load()`; refer to the Inviwo documentation for this.
-
Features in Inviwo UI
There are sliders for the level (Level) and the coordinates (PointerX, PointerY). Using these sliders, the user can view a particular region of the 3D volume (stacked WSIs) for the given level and coordinates.
Whenever the level is changed, the old level's coordinates have to be mapped to the new level's coordinates:
```cpp
if (level != zoom_in) {
    std::cout << "Zoomed to level: " << level << std::endl;
    std::cout << "Previous coordinates: " << start_x << " " << start_y << std::endl;
    int64_t dim_lvl_prev[2];
    openslide_get_level_dimensions(op, zoom_in, &dim_lvl_prev[0], &dim_lvl_prev[1]);
    start_x *= (dim_lvlk[0] / (float)dim_lvl_prev[0]);
    start_y *= (dim_lvlk[1] / (float)dim_lvl_prev[1]);
    std::cout << "New coordinates: " << start_x << " " << start_y << std::endl;
    zoom_in = level;
    coordinateX_.set(start_x);
    coordinateY_.set(start_y);
    // std::cout << "Zoom value: " << zoom_in << std::endl;
}
```
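As a worked example (with made-up dimensions): if the previous level is 40,000 pixels wide and the newly selected level is 20,000 pixels wide, a stored x-coordinate of 1,000 is scaled by 20,000 / 40,000 = 0.5 and becomes 500 in the new level's frame.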
-
Things to do
- The current code reads only one WSI; it has to be modified to work for multiple WSIs.
-
Understanding the processor
This processor is another version of the Custom Image Stack Volume Processor - Single. The goal was to enable a user to draw a rectangular region in a low-resolution image and return a higher-resolution section of that region.
As explained in the section above, we store the required images and then read them back. So it is clear that we cannot use the same image file to achieve our goal; hence this processor required us to create two separate files.
This is not the most efficient solution. Due to the limited integration of OpenSlide with Inviwo, we had to resort to this method. Once we are able to generate volume data directly from the WSI without needing to store intermediate images, the processor will become much more efficient.
```cpp
// SlideExtractor
start_x = inport_.getData()->at(0).x;
start_y = inport_.getData()->at(0).y;
w = abs(inport_.getData()->at(0).x - inport_.getData()->at(1).x);
h = abs(inport_.getData()->at(0).y - inport_.getData()->at(2).y);
w *= (dim_lvl0[0] / dim_lvlk[0]);  // Mapping width to level 0
h *= (dim_lvl0[1] / dim_lvlk[1]);  // Mapping height to level 0
if (w > 1920) w = 1920;
if (h > 1080) h = 1080;  // clamp the height
```
Understanding Histopathology and Volume Rendering
- One major problem to be solved is determining which slices to pull from the OpenSlide API when a user is not looking orthogonally at the volume. Currently, we pass the coordinates of an ROI to the OpenSlide API and pull the higher-resolution slide from there. Consider the case where a user slices the volume and zooms in on the sliced region: we must figure out how to pull the higher-resolution image of that region.
- Annotations could be a great addition to the application. They could utilise processors such as `DrawLines`, `TextOverlay` and `PixelValue` (if one wishes to highlight the region).