Automatic Unstained Cell Detection in Bright Field Microscopy
Bright field microscopy is preferred over other microscopic imaging modalities whenever ease of implementation and low cost are the main concerns. This simplicity in hardware comes at the cost of image quality, yielding images of low contrast. While staining can be employed to improve the contrast, it may complicate the experimental setup and cause undesired side effects on the cells. In this thesis, we tackle the problem of automatic cell detection in bright field images of unstained cells. The research was done in the context of the interdisciplinary research project COSIR, which aimed at developing novel microscope hardware with the following feature: the device can be placed inside an incubator so that cells can be cultivated and observed in a controlled environment. In order to cope with design difficulties and manufacturing costs, the bright field technique was chosen for implementing the hardware. The contributions of this work are briefly outlined in the text which follows. An automatic cell detection pipeline was developed based on supervised learning. It employs Scale Invariant Feature Transform (SIFT) keypoints, random forests, and agglomerative hierarchical clustering (AHC) in order to reliably detect cells. A keypoint classifier first classifies keypoints into cell and background. An intensity profile is then extracted between each pair of nearby cell keypoints, and a profile classifier decides whether the two keypoints belong to the same cell (inner profile) or to different cells (cross profile). This two-classifier approach has been used before in the literature. The proposed method, however, compares to the state of the art as follows: 1) It yields high detection accuracy (at least 14% improvement over baseline bright field methods) in a fully automatic manner with short runtime on low-contrast bright field images.
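The profile-extraction step above can be sketched as follows. The sampling scheme (bilinear interpolation along the straight line between the two keypoints, with a fixed sample count) is an illustrative assumption; the abstract does not specify these details:

```python
import numpy as np

def extract_profile(img, p, q, n_samples=32):
    """Sample intensities along the straight line between keypoints p and q.

    p and q are (row, col) coordinates. Bilinear interpolation and the
    sample count are illustrative assumptions, not the thesis's exact
    sampling scheme.
    """
    (r0, c0), (r1, c1) = p, q
    ts = np.linspace(0.0, 1.0, n_samples)
    rs = r0 + ts * (r1 - r0)          # fractional row coordinates
    cs = c0 + ts * (c1 - c0)          # fractional column coordinates
    r_lo = np.floor(rs).astype(int)
    c_lo = np.floor(cs).astype(int)
    r_hi = np.minimum(r_lo + 1, img.shape[0] - 1)
    c_hi = np.minimum(c_lo + 1, img.shape[1] - 1)
    fr, fc = rs - r_lo, cs - c_lo
    # Bilinear interpolation at each fractional position along the line.
    top = img[r_lo, c_lo] * (1 - fc) + img[r_lo, c_hi] * fc
    bot = img[r_hi, c_lo] * (1 - fc) + img[r_hi, c_hi] * fc
    return top * (1 - fr) + bot * fr
```

The resulting 1-D profile (possibly normalized) would serve as the feature vector fed to the profile classifier.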
2) Standard features from the literature were adapted from a pixel-based to a keypoint-based extraction scheme: this scheme is sparse, scale-invariant, and orientation-invariant, and feature parameters can be tailored in a meaningful way based on the scale and orientation of the relevant keypoint. 3) The pipeline is highly invariant with respect to illumination artifacts, noise, and changes in scale and orientation. 4) The probabilistic output of the profile classifier is used as input for an AHC step which improves detection accuracy. A novel linkage method was proposed which incorporates the information of SIFT keypoints into the linkage. This linkage was proven to be combinatorial, and thus it can be computed efficiently in a recursive manner. Due to the substantial difference in contrast and visual appearance between suspended and adherent cells, the above-mentioned pipeline attains higher accuracy when suspended and adherent cells are learned separately rather than jointly. Separate learning refers to training and testing either only on suspended cells or only on adherent cells, whereas joint learning refers to training the algorithm to detect cells in images which contain both suspended and adherent cells. Since these two cell types coexist in cell cultures, with a continuum of intermediate states between the two extremes, it is of practical importance to improve joint learning accuracy. We showed that this can be achieved using two types of phase-based features: 1) the physical light phase obtained by solving the transport of intensity equation, and 2) the monogenic local phase obtained from a low-passed axial derivative image. In addition to the supervised cell detection discussed so far, a cell detection approach based on unsupervised learning was proposed. Technically speaking, supervised learning was utilized in this approach as well.
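To illustrate what "combinatorial" means for a linkage, the toy sketch below runs AHC with the standard Lance-Williams recursive update, using average linkage as the concrete special case. The thesis's keypoint-aware linkage itself is not reproduced here; only the recursive-update property that combinatorial linkages permit is demonstrated:

```python
import numpy as np

def ahc_average_linkage(D):
    """Naive AHC over a symmetric distance matrix D (toy sketch).

    A linkage is combinatorial when the distance from a merged cluster
    (i u j) to any other cluster k can be updated recursively from the
    pre-merge distances. Average linkage is one such case:
        d(i u j, k) = (n_i * d(i,k) + n_j * d(j,k)) / (n_i + n_j).
    Returns the sequence of merged index pairs.
    """
    D = D.astype(float).copy()
    n = D.shape[0]
    sizes = {i: 1 for i in range(n)}
    active = set(range(n))
    merges = []
    while len(active) > 1:
        # Find the closest pair of active clusters.
        _, i, j = min((D[i, j], i, j) for i in active for j in active if i < j)
        # Recursive Lance-Williams update: no need to revisit raw points.
        for k in active - {i, j}:
            D[i, k] = D[k, i] = (
                sizes[i] * D[i, k] + sizes[j] * D[j, k]
            ) / (sizes[i] + sizes[j])
        sizes[i] += sizes[j]
        active.remove(j)
        merges.append((i, j))
    return merges
```

Because each merge only rewrites one row and column of the distance matrix, the whole dendrogram is built without ever recomputing distances from the underlying profiles.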
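A minimal sketch of the monogenic local phase, computed via the Riesz transform in the Fourier domain, is shown below. In the thesis the even-symmetric input would be the low-passed axial derivative image; here any real 2-D array serves for illustration, and the construction is the generic textbook one rather than the thesis's exact pipeline:

```python
import numpy as np

def monogenic_local_phase(even):
    """Monogenic local phase of a real 2-D image via the Riesz transform.

    `even` plays the role of the even-symmetric component (in the thesis,
    a low-passed axial derivative image). Returned values lie in [0, pi].
    """
    rows, cols = even.shape
    u = np.fft.fftfreq(rows)[:, None]   # vertical frequency grid
    v = np.fft.fftfreq(cols)[None, :]   # horizontal frequency grid
    mag = np.sqrt(u**2 + v**2)
    mag[0, 0] = 1.0                     # avoid division by zero at DC
    F = np.fft.fft2(even)
    # Riesz transform pair: the odd components of the monogenic signal.
    r1 = np.real(np.fft.ifft2(F * (-1j * u / mag)))
    r2 = np.real(np.fft.ifft2(F * (-1j * v / mag)))
    odd = np.sqrt(r1**2 + r2**2)
    return np.arctan2(odd, even)
```

The local phase is contrast-invariant by construction, which is what makes phase-based features attractive for the low-contrast bright field setting.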
However, instead of training the profile classifier on manually labeled ground truth, a self-labeling algorithm was proposed with which ground truth labels can be generated automatically from the cells and keypoints in the input image itself. The algorithm learns from extreme cases and applies the learned model to the intermediate ones. SIFT keypoints were successfully employed for unsupervised structure-of-interest measurements in cell images, such as mean structure size and dominant curvature direction. Based on these measurements, it was possible to define the notion of extreme cases in a way which is independent of image resolution and cell type.
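The extreme-case self-labeling idea can be sketched as follows, with keypoint pair distances standing in for intensity profiles and a trivial threshold model standing in for the profile classifier. The scale factors and the threshold rule are illustrative assumptions, not the thesis's actual self-labeling criteria:

```python
import numpy as np

def self_label_profiles(pair_distances, structure_scale, low=0.5, high=3.0):
    """Self-labeling sketch for keypoint pairs (illustrative assumptions).

    Pairs much closer than the measured structure scale are confidently
    labeled 'inner' (same cell); pairs much farther apart are labeled
    'cross' (different cells). A trivial model trained on these extremes
    then labels ALL pairs, including the intermediate ones. `low`, `high`,
    and the midpoint threshold are stand-ins for the thesis's rules.
    """
    d = np.asarray(pair_distances, dtype=float)
    inner = d[d < low * structure_scale]    # extreme 'same cell' cases
    cross = d[d > high * structure_scale]   # extreme 'different cells' cases
    # "Training" on the extremes: midpoint between the two class means.
    threshold = 0.5 * (inner.mean() + cross.mean())
    return (d >= threshold).astype(int)     # 1 = cross, 0 = inner
```

Because `structure_scale` comes from the keypoints themselves (e.g. a robust statistic of SIFT keypoint scales), the definition of "extreme" adapts to image resolution and cell type without manual labels.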