Using the ImageJ2 Framework for the 3D Deconvolution Grand Challenge

I recently participated in the Second International Challenge on 3D Deconvolution Microscopy. The challenge was described as follows:

The idea of the challenge is to make the participants follow a reasonably complete deconvolution workflow, with several hurdles that are representative of real situations. These hurdles may include algorithm selection, PSF-inaccuracy compensation, noise-model calibration and regularization-parameter adjustment.

My submission focused on the software engineering aspects of the problem. Multiple components would be needed to implement a solution; at a minimum:

  1. A regularized deconvolution algorithm.
  2. A module to extend the images in order to avoid boundary artifacts.
  3. A module to generate a PSF and/or process an acquired PSF.

A further wrinkle was that the organizers presented a unique model to deal with boundary conditions. For optimal results, the algorithms would have to be re-derived for this model. For example, the modified version of the Richardson-Lucy algorithm is derived here.
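The full derivation is in the note linked above. For orientation only, here is the classic Richardson-Lucy multiplicative update in my own notation (not necessarily the organizers'):

```latex
% Classic Richardson-Lucy update, circulant boundaries:
f^{(k+1)} = f^{(k)} \cdot \left[ h^{*} \ast \frac{g}{h \ast f^{(k)}} \right]
```

where g is the measured image, h the PSF, h* its mirrored copy, and ∗ convolution. Roughly speaking, the non-circulant re-derivation replaces the constant normalization implicit in this update with a spatially varying one (the mirrored PSF correlated with the measurement-window indicator), which is why the algorithm has to be reworked rather than simply applied to padded data.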

ImageJ2 was chosen as the framework to run the components. At the heart of ImageJ2 plugin development is the command. A command is a convenient way to package lower-level functionality. The programmer only needs to add a few annotations and the framework will take care of the rest, including menu items, threading, and automatic GUI generation (see the “hello world” example). I hacked around a bit with the GUI generation routines in order to place multiple commands on a single GUI. The GUI for a deconvolution “protocol” is shown below.
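To give a flavour of the annotation-driven style: the same mechanism is exposed to scripts via SciJava parameter declarations. A minimal “hello world” sketch in Python (the `globals().get` fallback is only there so the snippet also runs outside ImageJ, where no parameter harvesting happens):

```python
#@ String name
#@output String greeting

# The "#@" lines above are parameter declarations: ImageJ2 parses them,
# auto-builds an input dialog, and injects `name` before the body runs.
# Outside ImageJ nothing is injected, so default it for standalone runs.
name = globals().get("name", "world")
greeting = "Hello, " + name
print(greeting)
```

Java commands do the same thing with `@Plugin` and `@Parameter` annotations on a class; in both cases the framework, not the programmer, builds the dialog.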


Commands for Deconvolution, PSF generation, and Extension are placed on the same GUI. A combo box allows one to choose different versions of each command. In the screenshot above the deconvolution method is inverse filter, the PSF is just an input, and the extension method is “fft” (the extended size of the image will be optimized for fast FFT execution). Most of the GUI has been automatically generated by the framework; my code only placed the GUI for each command on the same dialog. If we choose a different set of commands, the GUI updates itself.
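The “fft” extension picks an extended size that is friendly to FFT libraries, i.e. one whose prime factors are all small. A minimal sketch of that idea (the helper name is mine; the plugin's actual size-selection logic may differ):

```python
def next_fast_size(n, primes=(2, 3, 5, 7)):
    """Smallest size >= n whose prime factors are all small (FFT-friendly)."""
    m = n
    while True:
        k = m
        for p in primes:
            while k % p == 0:
                k //= p
        if k == 1:          # m factored completely into small primes
            return m
        m += 1

# A 97-voxel axis would be padded to 98 (= 2 * 7 * 7) before the FFT.
print(next_fast_size(97))   # 98
```

Radix-2/3/5/7 sizes keep the FFT in its fast code paths, which matters when every deconvolution iteration does several 3D transforms.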


In the screenshot above the Deconvolution command has been changed to “TotalVariationRL” (Richardson-Lucy with Total Variation) and the PSF command to “CreatePsfCosmos” (Cosmos theoretical PSF). These commands have several input parameters, so the GUI becomes more complicated. Again, most of the GUI is automatically generated by the framework. It’s not pretty right now: the inputs are just placed vertically, which is fine for a single command but probably not optimal for several. Since all the code is open source, the GUI could easily be modified for a different look and feel. Here is the result after running the plugin.
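For readers unfamiliar with the base algorithm: a rough numpy sketch of the plain Richardson-Lucy iteration that TotalVariationRL builds on (the total-variation regularization term is omitted, and the function and variable names are mine, not the plugin's):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100, eps=1e-12):
    """Plain Richardson-Lucy with circulant (FFT) boundaries.

    TotalVariationRL adds a total-variation term to this multiplicative
    update; that regularization is omitted here for brevity.
    """
    otf = np.fft.rfft(psf)
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.fft.irfft(otf * np.fft.rfft(estimate), n=observed.size)
        ratio = observed / np.maximum(blurred, eps)
        # Correlation with the PSF (conjugate OTF) redistributes the ratio.
        estimate = estimate * np.fft.irfft(np.conj(otf) * np.fft.rfft(ratio),
                                           n=observed.size)
    return estimate

# Toy 1D check: blur two spikes with a small kernel, then deconvolve.
truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.5
psf = np.zeros(64)
psf[[-1, 0, 1]] = [0.25, 0.5, 0.25]        # centered at index 0 (circular)
observed = np.fft.irfft(np.fft.rfft(psf) * np.fft.rfft(truth), n=64)
restored = richardson_lucy(observed, psf)
```

The multiplicative form keeps the estimate non-negative, which is one reason Richardson-Lucy is popular for fluorescence data.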


When running a series of commands the intermediate outputs are automatically generated and displayed, so the user can see the result of each step of the algorithm. In this case we see the input, the image extended using the mirror technique, the deconvolved extended image, and the final result (cropped back to the original dimensions). Note these are all Z-slice views of a 3D simulated object. By inspecting the intermediate images one could hypothesise that mirror extension may cause a border artifact; we could then test this hypothesis by re-running with a different type of extension. The experiment was repeated using the non-circulant deconvolution model suggested by the challenge organizers (the Biomedical Imaging Group at EPFL).
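The mirror extension step can be pictured with numpy's reflect padding; this 1D row is a stand-in for what the plugin does along each axis of the 3D volume:

```python
import numpy as np

row = np.array([1.0, 2.0, 3.0, 4.0])           # one row of the measured image
extended = np.pad(row, pad_width=2, mode="reflect")
print(extended)                                 # [3. 2. 1. 2. 3. 4. 3. 2.]
```

The reflection keeps the signal continuous at the border, but it also invents structure outside the field of view, which is exactly the kind of thing that can show up as a border artifact after deconvolution.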

In this case the signal is extended with zeroes so as to avoid circulant wrap when convolving with the PSF, and the algorithm is re-derived assuming non-circulant convolution. After running the plugin we again see several images: the input, the image extended using the non-circulant technique, the deconvolved extended image, and the final result (cropped back to the original dimensions). Looking at the intermediate images is interesting. Notice that the extended image has an obvious discontinuity (it is not so much a discontinuity as a lack of knowledge of the values outside the imaging window). My implementation of the non-circulant algorithm actually had a slight difference compared to the BIG implementation (which they provided as a Matlab script): I kept the values outside the imaging window, which allows for potential reconstruction of the object outside the measurement window. In the extended “deconvolved” image it does appear the object has been partially reconstructed outside the imaging window. Is this information valid? Previous work has shown that it is feasible to partially reconstruct data outside of the measurement window, though obviously there are limitations dependent on the signal-to-noise ratio and the position of the objects.
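A toy 1D sketch of the non-circulant setup (sizes and kernel are illustrative, not the challenge's actual PSF): data is measured in a window of size n, the object estimate lives on the larger support n + p − 1 so linear convolution has no wrap, and the Richardson-Lucy normalization Hᵀ·1 tapers off toward the extended borders:

```python
import numpy as np

n, p = 8, 3                                 # window size, PSF size (toy values)
kernel = np.array([0.25, 0.5, 0.25])        # toy symmetric PSF
window = np.ones(n)                         # indicator of measured samples

# H^T applied to a vector of ones; for a symmetric kernel this is a convolution.
normal = np.convolve(window, kernel, mode="full")   # length n + p - 1
print(normal)   # 1.0 inside the window, tapering to 0.25 at the extended ends
```

In the re-derived update the estimate is divided by `normal` instead of a constant, so samples just outside the window, where the data provides only partial evidence, are boosted accordingly; that is what makes the partial reconstruction outside the imaging window possible at all.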

The main message is that ImageJ2 is a great framework for programming imaging algorithms.  GUI components are automatically generated.  The command structure leads to a convenient organization of algorithm components.  And using commands for each sub component of an imaging protocol makes it easy to inspect the intermediate results.

The code for the above examples is here. There are also linux64 binaries here. Keep in mind the project is still in the “early alpha” stage.
