Simulation posted on Figshare.

So I just posted my first dataset on figshare. This is really cool. Figshare that is. Well, the simulation too, even though it is simple. It is a simulation that reproduces a bead image that Kurt Thorn of UCSF shared with me. The actual image is here, and the deconvolved version of the actual image is here. I did the simulation as an additional check of the deconvolution results.

Below are the real bead image and the simulated bead image. At first glance they look almost identical, but an expert looking closely at the data could probably tell which one was simulated. And since the data is on figshare, they can.

[Figure: real bead z-projection (left) and simulated bead z-projection (right)]

Below are the deconvolution results.

[Figure: deconvolved real bead (left) and deconvolved simulated bead (right)]

And below are some line profiles taken through the bead in the original images. Looking at the line profiles, we see that one of them has a larger FWHM (it is the real image), so the simulation isn’t perfect. Since the data is on figshare, someone could look it over and figure out why the real-image FWHM is larger.

[Figure: line profile through the real bead image (100xPSF_2k_2_MMStack)]

[Figure: line profile through the simulated bead image (simulatedbeads.convolved)]

Here is the line profile from the deconvolved real image

[Figure: line profile through the deconvolved real image (100xPSF_deconvolved_nc_200)]

And the deconvolved simulated image

[Figure: line profile through the deconvolved simulated image (simulatedbeads_deconvolved_nc_200)]

And finally a line profile through the original “object”, that is, the ground truth of the simulation. The phantom had an intensity of 200000, and the reconstruction is pretty close. It may not seem like a big deal to reconstruct a simulated bead, but there are so many things that can go wrong in deconvolution that simple simulations and sanity checks are important.

[Figure: line profile through the ground truth phantom (simulatedbeads)]
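
As a concrete example of the kind of sanity check I mean, it can be as simple as comparing the peak of the reconstruction to the known phantom intensity. Here is a minimal ImgLib2 sketch (an Img is iterable, so this is just a loop):

    import net.imglib2.img.Img;
    import net.imglib2.type.numeric.real.FloatType;

    // peak intensity of the deconvolved image, for comparison
    // against the known ground truth value (200000 here)
    public static float peak(Img<FloatType> img) {
        float max = Float.NEGATIVE_INFINITY;
        for (FloatType t : img) {
            max = Math.max(max, t.getRealFloat());
        }
        return max;
    }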

Anyways, figshare is a terrific tool. It makes it really easy to share thoughts on data. I suspect that as I get better at using figshare I won’t have to actually embed figures in my blog entries; instead I can just share links to figshare or a similar service.

Simple ITK – ImageJ2 integration

As part of my larger deconvolution project I tested some of the ITK deconvolution algorithms. It was fairly easy to integrate Simple ITK into ImageJ2 using Eclipse in an Ubuntu development environment, though I haven’t attempted the non-trivial task of creating distributable libraries. I was really impressed at how simple it was to create an image processing plugin using ImageJ2 and Simple ITK, so I separated it into a self-contained example that can be found here.

Here are the steps I followed to create my ImageJ2/Simple ITK plugin. While this example is for deconvolution, it should be easy to modify it for different ITK functionality.

1. Create an ImageJ2 style maven project. There are great examples in imagej-tutorials (if you are not familiar with maven, google “maven in 5 minutes”). The “intro-to-imagej-api”, “simple-commands” and “create-a-new-op” projects are very useful.

2. Import the project into Eclipse and follow the instructions to get/build SimpleITK and to set it up for Java in Eclipse.

3. Write some utilities to copy between the ImageJ “Img” and the ITK “Image”. Mine are here (a rough sketch of the idea is below); let me know if anybody has better ones.
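
To give the idea, here is a minimal sketch of the Img-to-Image direction, assuming the SimpleITK Java bindings’ per-pixel accessors and a 3D FloatType image. (Per-pixel JNI calls are slow, so a real implementation should copy buffers in bulk.)

        import net.imglib2.Cursor;
        import net.imglib2.img.Img;
        import net.imglib2.type.numeric.real.FloatType;

        import org.itk.simple.Image;
        import org.itk.simple.PixelIDValueEnum;
        import org.itk.simple.VectorUInt32;

        // sketch: copy a 3D Img<FloatType> into a new SimpleITK Image
        public static Image toItk(Img<FloatType> img) {

            Image itk = new Image(img.dimension(0), img.dimension(1),
                img.dimension(2), PixelIDValueEnum.sitkFloat32);

            int[] pos = new int[3];
            VectorUInt32 idx = new VectorUInt32();
            idx.add(0); idx.add(0); idx.add(0);

            Cursor<FloatType> cursor = img.localizingCursor();

            while (cursor.hasNext()) {
                cursor.fwd();
                cursor.localize(pos);
                for (int d = 0; d < 3; d++) {
                    idx.set(d, pos[d]);
                }
                itk.setPixelAsFloat(idx, cursor.get().getRealFloat());
            }
            return itk;
        }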

4. Write an op to wrap the ITK function. An op that wraps ITK Richardson Lucy is here. It is derived from this base class. The key is the “@Parameter” annotation, which defines the inputs and outputs. The op can then be called using the op service. In a Python script it would look something like this (“in” is a reserved word in Python, so the input variable is renamed here):

out = ops.run("RichardsonLucyITK", img, kernel, 10)
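
For context, the skeleton of such an op looks roughly like the sketch below. This is a simplified, self-contained variant (my real op derives from a base class in the project; here the imagej-ops Op interface is implemented directly, the pixel type is fixed to FloatType for brevity, and details depend on the imagej-ops version):

        import net.imagej.ops.Op;
        import net.imglib2.img.Img;
        import net.imglib2.type.numeric.real.FloatType;

        import org.scijava.ItemIO;
        import org.scijava.plugin.Parameter;
        import org.scijava.plugin.Plugin;

        // the name given here is the name used in ops.run(...)
        @Plugin(type = Op.class, name = "RichardsonLucyITK")
        public class RichardsonLucyITK implements Op {

            // @Parameter defines the inputs...
            @Parameter
            private Img<FloatType> input;

            @Parameter
            private Img<FloatType> kernel;

            @Parameter
            private int numIterations;

            // ...and the output
            @Parameter(type = ItemIO.OUTPUT)
            private Img<FloatType> output;

            @Override
            public void run() {
                // convert to itk Images, run the itk filter, and
                // convert back (the conversion/call code is below)
            }
        }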

The code to convert between image formats and call ITK (through the Simple ITK wrapper) looks like this:

        // convert the inputs to itk Images
        Image itkImage =
            SimpleItkImagejUtilities.simple3DITKImageFromInterval(input);
        Image itkPsf =
            SimpleItkImagejUtilities.simple3DITKImageFromInterval(kernel);

        RichardsonLucyDeconvolutionImageFilter itkRL =
            new RichardsonLucyDeconvolutionImageFilter();

        // call itk Richardson Lucy through the simple itk wrapper
        Image out = itkRL.execute(itkImage, itkPsf, numIterations, true,
            RichardsonLucyDeconvolutionImageFilter.BoundaryConditionType.ZERO_PAD,
            RichardsonLucyDeconvolutionImageFilter.OutputRegionModeType.SAME);

        // the output pixel type should match the input pixel type
        T inputType = Util.getTypeFromInterval(input);

        // convert the itk output back to an ImageJ Img
        output = SimpleItkImagejUtilities.simple3DITKImageToImg(out,
            input.factory(), inputType);

5. Write a command to call the op. A command that calls the ITK Richardson Lucy op is here. Again the key is the @Parameter annotation; here it is also used to get access to services. For example

    @Parameter
    protected OpService ops;

is a means to access the service which runs the operations.   The code to call the op is as follows:

        // get the Imgs from the dataset
        Img<T> imgIn = (Img<T>) input.getImgPlus().getImg();
        Img<T> imgKernel = (Img<T>) kernel.getImgPlus().getImg();

        // call the op by name through the op service
        Img<T> imgOut = (Img<T>) ops.run("RichardsonLucyITK", imgIn,
            imgKernel, numIterations);

        // wrap the result in a new Dataset (data is the DatasetService)
        output = data.create(new ImgPlus(imgOut));

The images (Img) are retrieved from a Dataset. It is beyond the scope of this small blog post to go into detail about Datasets, but I should note that in general a dataset may contain n-dimensional data, so a more complete example would also have code to retrieve 3D hyperslices from a possibly n-dimensional space.
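
For example (a sketch, assuming a 4D x, y, z, t dataset; in real code the axis index would be looked up from the dataset’s metadata rather than hard-coded):

        import net.imglib2.RandomAccessibleInterval;
        import net.imglib2.view.Views;

        // fix the time axis (here assumed to be dimension 3) at time
        // point t to obtain a 3D view that can be passed to the op
        RandomAccessibleInterval<T> volume = Views.hyperSlice(imgIn, 3, t);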

The nice thing is that when we use a command we get a simple GUI for free, as shown below.

[Figure: the ITK Richardson Lucy plugin GUI]

Malaysia Flight 370 and Particle Detection

Image processing algorithms for microscopy are often tested on tiny round particles, such as metallic particles in electron microscopy or fluorescent particles in light microscopy. In 3 dimensional imaging the 3rd dimension is often stretched, so when you take a picture of something spherical, it will appear elongated in the 3rd dimension.

We can use image processing to restore the true shape of the particles, and oftentimes we use this type of sample as a test. In a 3D picture the particles will be elongated as below (it is a 2D picture showing some particles distributed in 3D space).

[Figure: particles (left) and aberrated particles (right)]

We can run a computer program to try and reconstruct the correct shape of the particles.   Below is the result of running such a program.  If the particles in the output become spherical again, then our reconstruction system is wonderful… right?

[Figure: reconstructed particles]

The problem is we haven’t learned any new information. We already knew the shape of the particles. We’ve only confirmed our expectations, which is good but not the entire story. In real life things become more complicated, and we want to learn something we don’t already know. For example, we may want to learn something new about a cellular structure. If the cell is alive, things might be moving. Our system that works quite well on the known may not (or may) work on the unknown. Making a conclusion about the complicated system is much more difficult.

This seems to be what happened to Malaysia Flight 370. The tracking systems were designed and tested for the known. A plane goes from one place to another, from Kuala Lumpur to Beijing or from Chicago to San Francisco, on predetermined paths, emitting its own radar signature. In retrospect the tracking procedure was tested on an obvious toy system. Usually we know exactly where the plane is going, and furthermore the plane tells us where it is. Under typical conditions the measurement procedure confirms the expectations, but under unusual circumstances the tracking system breaks down with dramatic results.

Noncirculant deconvolution and a simple simulation

This weekend I’m going to be giving a talk at the SUNY Albany Open Source Festival.  The talk will focus on work I’ve done using open source for the ISBI Deconvolution Grand Challenge.  Part of the talk presents a simple simulation showing the benefits of noncirculant deconvolution.

The phantom I used for this test is here: phantom_cropped.ome

The non-circulant convolution is here: phantom_.image.ome

And the PSF is here: psf.ome

The simulated image is simply two spheres in 3D space.  The spheres have an intensity of 100, the background has an intensity of 0.  A screen shot from ImageJ is shown below.

[Figure: ImageJ screen shot of the two spheres in 3D space]

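For reference, a phantom like this takes only a few lines with ImgLib2 (a sketch using HyperSphere from imglib2-algorithm; the image size, sphere centers, and radius are made up for illustration):

    import net.imglib2.Point;
    import net.imglib2.algorithm.region.hypersphere.HyperSphere;
    import net.imglib2.img.Img;
    import net.imglib2.img.array.ArrayImgs;
    import net.imglib2.type.numeric.real.FloatType;

    public class SpherePhantom {

        public static void main(String[] args) {
            // empty 3D volume: background intensity is 0
            Img<FloatType> phantom = ArrayImgs.floats(128, 128, 64);

            // draw two spheres of intensity 100
            Point[] centers = { new Point(40L, 64L, 32L),
                new Point(96L, 64L, 40L) };
            for (Point center : centers) {
                for (FloatType voxel : new HyperSphere<>(phantom, center, 5)) {
                    voxel.set(100f);
                }
            }
        }
    }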

The next step is to create a simulated blurred image by convolving with a PSF.

[Figure: the simulated blurred image]

In this case we convolve and then crop so as to avoid wrap-around (a technical description is here). Next we deconvolve. Keep in mind that the values outside our cropped section have been “lost”, so we need a strategy to deal with the boundaries. Here are 3 options.

1. Do nothing. This means that values near the edges will be calculated using values from the other side of the image. This is called the circulant model and can result in artifacts.

2. Pad the image. This adds pixels to the edges to reduce artifacts. There are many strategies for padding, for example zero-pad (add zeros), periodic (repeat the signal), and reflection (mirror the signal); see the sketch after this list. Sometimes a decay function is used such that the extended signal slowly decays.

3. Rederive the algorithm using a non-circulant model. A technical description of the rederivation of Richardson Lucy is here.
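
Below is a small ImgLib2 sketch of the padding strategies from option 2 (the image size and pad amount are made up; in practice the pad should be on the order of the PSF half-width):

    import net.imglib2.RandomAccessibleInterval;
    import net.imglib2.img.Img;
    import net.imglib2.img.array.ArrayImgs;
    import net.imglib2.type.numeric.real.FloatType;
    import net.imglib2.util.Intervals;
    import net.imglib2.view.Views;

    public class PaddingSketch {

        public static void main(String[] args) {
            // stand-in for the cropped, blurred image
            Img<FloatType> img = ArrayImgs.floats(128, 128, 64);
            long pad = 32;

            // zero-pad: values outside the image are 0
            RandomAccessibleInterval<FloatType> zeroPadded = Views.interval(
                Views.extendZero(img), Intervals.expand(img, pad));

            // periodic: the signal repeats (the circulant assumption)
            RandomAccessibleInterval<FloatType> periodic = Views.interval(
                Views.extendPeriodic(img), Intervals.expand(img, pad));

            // reflection: the signal is mirrored at the edges
            RandomAccessibleInterval<FloatType> mirrored = Views.interval(
                Views.extendMirrorSingle(img), Intervals.expand(img, pad));
        }
    }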

Let’s look at some results in the form of line profiles taken through the z (depth) dimension of the images.

First the original image. It is a very simple image: just 2 spheres in 3D space. The spheres have a brightness of 100, the background 0.

[Figure: line profile of the original image]

Next the convolved image. The spheres are now blurry. Note the scale: the signal is much dimmer, as the light has spread out over 3D space.

[Figure: line profile of the blurred image]

Next (below) is the deconvolution using the Richardson Lucy algorithm (using the implementation from the Insight Toolkit) with a boundary strategy called Neumann zero-flux. 200 iterations were performed. Again note the scale. The sphere near the edge has only been restored to about 60% of the original value. Interestingly, even the sphere near the center (where boundary effects aren’t as important) has only been restored to about 85% of the original value.

[Figure: line profile of the Neumann boundary deconvolution result]

Finally, here is the result using 200 iterations of the noncirculant Richardson Lucy algorithm. In this case the intensity of the deconvolved spheres is very close (within 1%) to the true value.

[Figure: line profile of the noncirculant deconvolution result]



Using the ImageJ2 Framework for the 3D Deconvolution Grand Challenge

I’ve recently participated in the Second International Challenge on 3D Deconvolution Microscopy. The challenge was described as follows:

The idea of the challenge is to make the participants follow a reasonably complete deconvolution workflow, with several hurdles that are representative of real situations. These hurdles may include algorithm selection, PSF-inaccuracy compensation, noise-model calibration and regularization-parameter adjustment.

My submission focused on the software engineering aspects of the problem. Multiple components would be needed to implement a solution; at a minimum:

  1. A regularized deconvolution algorithm.
  2. A module to extend the images in order to avoid boundary artifacts.
  3. A module to generate a PSF and/or process an acquired PSF.

A further wrinkle was that the organizers presented a unique model to deal with boundary conditions.  For optimal results the algorithms would have to be re-derived.  For example the modified version of the Richardson Lucy algorithm is derived here.

ImageJ2 was chosen as the framework to run the components. At the heart of ImageJ2 plugin development is the command. A command is a convenient way to package lower level functionality: the programmer only needs to add a few annotations, and the framework takes care of the rest, including menu items, threading, and automatic GUI generation (see the “hello world” example).
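
A minimal command looks roughly like the sketch below (the class name, menu path, and parameters are made up for illustration):

    import org.scijava.ItemIO;
    import org.scijava.command.Command;
    import org.scijava.plugin.Parameter;
    import org.scijava.plugin.Plugin;

    // the framework builds the menu entry and the input dialog
    // from the annotations alone
    @Plugin(type = Command.class, menuPath = "Plugins>Decon Demo")
    public class DemoCommand implements Command {

        @Parameter
        private String name;

        @Parameter(min = "1")
        private int iterations;

        @Parameter(type = ItemIO.OUTPUT)
        private String message;

        @Override
        public void run() {
            message = name + ": ran " + iterations + " iterations";
        }
    }

I hacked around a bit with the GUI generation routines in order to place multiple commands on the same GUI. The GUI for a deconvolution “protocol” is shown below.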

[Figure: screen shot of the simple deconvolution protocol GUI]

Commands for Deconvolution, PSF generation, and Extension are placed on the same GUI. A combo box allows one to choose different versions of each command. In the screen shot above the deconvolution method is inverse filter, the PSF is just an input, and the extension method is “fft” (the extended size of the image will be optimized for fast FFT execution). Most of the GUI has been automatically generated by the framework; my code only placed the GUI for each command on the same dialog. If we choose a different set of commands the GUI will update itself.

[Figure: screen shot of the more complex deconvolution protocol GUI]

In the screen shot above the Deconvolution command has been changed to “TotalVariationRL” (Richardson Lucy with Total Variation) and the PSF command to “CreatePsfCosmos” (Cosmos theoretical PSF). These commands have several input parameters, so the GUI becomes more complicated. Again most of the GUI is automatically generated by the framework. It’s not pretty right now: the inputs are just placed vertically, which is fine when dealing with a single command but probably not optimal for several commands. Since all the code is open source, the GUI could easily be modified for a different look and feel. Here is the result after running the plugin.

[Figure: output images using reflection (mirror) extension]

When running a series of commands the intermediate outputs are automatically generated and displayed, so the user can see the result of each step of the algorithm. In this case we see the input, the extended image using the mirror technique, the deconvolved extended image, and the final result (cropped back to the original dimensions). Note these are all Z-slice views of a 3D simulated object. By inspecting the intermediate images one could hypothesise that using mirror extension may cause a border artifact. We could then test this hypothesis by retesting with a different type of extension. The experiment was done again using the non-circulant deconvolution model suggested by the Challenge organizers (Biomedical Imaging Group at EPFL).

[Figure: output images using the non-circulant model]

In this case the signal is extended with zeroes so as to avoid circulant wrap when convolving with the PSF, and the algorithm is re-derived assuming non-circulant convolution. After running the plugin we again see several images: the input, the extended image using the non-circulant technique, the deconvolved extended image, and the final result (cropped back to the original dimensions). Looking at the intermediate images is interesting. Notice that the extended image has an obvious discontinuity (it is not so much a discontinuity as a lack of knowledge of the values outside the imaging window). My implementation of the non-circulant algorithm actually had a slight difference as compared to the BIG lab implementation (which they provided as a Matlab script): I kept the values outside the imaging window. This would allow for potential reconstruction of the object outside the measurement window, and in the extended “deconvolved” image it does appear the object has been partially reconstructed outside the imaging window. Is this information valid? Previous work has shown that it is feasible to partially reconstruct data outside of the measurement window. Obviously there are some limitations dependent on the signal to noise ratio and the position of the objects.

The main message is that ImageJ2 is a great framework for programming imaging algorithms. GUI components are automatically generated, the command structure leads to a convenient organization of algorithm components, and using commands for each sub-component of an imaging protocol makes it easy to inspect the intermediate results.

The code for the examples above is here. There are also linux64 binaries here. Keep in mind the project is still in the “early alpha” stage.

Stellaris FISH dataset #1 Deconvolution tests

Recently George McNamara, of M.D. Anderson Cancer Center, posted some datasets and invited the community to share deconvolution results (description here) and compare to results he generated using a GPU algorithm. The experiments below were done using a cropped version of the “GM 20131101Fri_StellarisFISH_1_w43 = Cy3 -N3” set (the cropped version can be downloaded here: GM 20131101Fri_StellarisFISH_1_w43 = Cy3 -N3- ROI).

Figure 1 shows an XY and YZ slice view of the original data, and Figure 2 shows the Bruce/Butte deconvolution result (provided by George McNamara). The YZ view of the original image shows spherical aberration consistent with the description of the experiment (“Imaging conditions are not perfect because the cells were in aqueous solution”). Note that in the Bruce/Butte deconvolution result some residual aberration can still be seen in YZ.

In an attempt to address aberrations, the ImageJ PSF Generator plugin from the BIG group at EPFL was used to generate an aberrated PSF model. Following suggestions from Model et al., the PSF model was set to Gibson-Lanni, the specimen RI was approximated as 1.4, and the particle depth was set near the middle slice of the stack (6400 nm). All other PSF parameters were set using the values provided with the data. The PSF with aberrations is shown in Figure 3. The Deconvolution Lab ImageJ plugin (also from the BIG group) was used to deconvolve the image using the aberrated PSF. The Richardson Lucy algorithm was used with 600 iterations. The result is shown in Figure 4 and the data set can be found here (Richardson-Lucy 600n_sri_1.4). The ImageJ result using a PSF without aberrations (depth=0) was also generated as a comparison (Figure 5).

Note that when using the aberrated PSF the deconvolution result is more symmetrical in YZ (Figure 4) compared to the CUDA result (Figure 2) and the ImageJ result without considering aberration (Figure 5). Further improvement could perhaps be obtained by applying axial scaling (Model), by using an even better model of the PSF (better estimates of the sample parameters, or measuring it), by using a depth-variant model (such as here), or by using a more sophisticated deconvolution algorithm (total variation RL and wavelet-based are both available in the ImageJ plugin).

The speed reported for the CUDA algorithm (on the order of seconds!) is very good; the entire process, PSF generation + deconvolution, takes minutes with the ImageJ BIG lab components (a comparison with other implementations would be interesting). If options exist to include aberrations in the CUDA algorithm’s PSF model, even better results may be obtainable with the CUDA deconvolution.


Figure 1: Original Image Slice Views, XY (z=32) and YZ (x=136)


Figure 2: Bruce/Butte CUDA deconvolution, slice views, XY (z=32) and YZ (x=136)


Figure 3: PSF with aberration.


Figure 4: ImageJ Result, 600 iterations Richardson Lucy using PSF with aberrations


Figure 5: ImageJ Result, 600 iterations Richardson Lucy using PSF at coverslip (depth=0)