“Deep learning” is a computational technique that is finding its way into a wide range of complex analysis problems, particularly in computer vision. From self-driving cars, to facial recognition, to the image processing built into smartphone cameras, deep learning has become a method of choice for image processing, classification, and segmentation. Perhaps unsurprisingly, a variety of deep learning networks have therefore been adopted by bioimage analysts for standard microscopy image analysis tasks.
While tools are being developed to make deep learning pipelines simpler to access, the barrier to entry remains comparatively high, particularly for “bench scientists”. Microscopy core facilities have been instrumental in getting more complex imaging modalities into the hands of biologists, and they are increasingly producing automated image analysis pipelines as well. Having computational scientists on staff allows state-of-the-art processing, such as deep learning, to be deployed rapidly for the benefit of scientists without the requisite computational background.
Here we will present some of our experiences using deep learning in a microscopy core. These include use of the Content-Aware image Restoration (CARE) network[1] to denoise lattice light sheet data, which required acquiring paired training data and training the network from scratch. We will also present preliminary results using other recently published networks for image segmentation, as well as processing experiments performed using networks designed for purposes other than bioimage analysis. The results of these experiments highlight the benefits of rapid deployment of new methods, but also the dangers of using tools that are perhaps not fit for purpose.
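For context, the CARE denoising workflow described above broadly follows the pattern documented for the authors' CSBDeep Python package: paired low and high signal-to-noise acquisitions are cut into training patches, a network is trained from scratch on those patches, and the trained model is then applied to new volumes. The sketch below is illustrative only; the folder names, patch sizes, and training parameters are assumptions and are not the settings used for our lattice light sheet data.

```python
# Minimal CARE training/prediction sketch using the CSBDeep package
# (https://github.com/CSBDeep/CSBDeep). Paths, axes, and hyperparameters
# are placeholders for illustration.
from csbdeep.data import RawData, create_patches
from csbdeep.io import load_training_data
from csbdeep.models import Config, CARE

# 1) Pair low-SNR acquisitions ("low") with matched high-SNR ground truth ("GT")
raw_data = RawData.from_folder(
    basepath='training_data', source_dirs=['low'], target_dir='GT', axes='ZYX')

# 2) Extract 3D patches for training and save them to disk
create_patches(raw_data, patch_size=(16, 64, 64),
               n_patches_per_image=512, save_file='patches.npz')

# 3) Train a CARE model from scratch on the extracted patches
(X, Y), (X_val, Y_val), axes = load_training_data(
    'patches.npz', validation_split=0.1)
config = Config(axes, n_channel_in=1, n_channel_out=1, train_epochs=100)
model = CARE(config, 'lls_denoising', basedir='models')
model.train(X, Y, validation_data=(X_val, Y_val))

# 4) Apply the trained model to restore a new noisy volume
# (given a 3D numpy array `noisy` with axes Z, Y, X):
# restored = model.predict(noisy, axes='ZYX')
```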