In dentistry, dental panoramic radiographs are used by specialists to complement the clinical examination when diagnosing dental diseases and planning treatment. They allow the visualization of dental irregularities such as missing teeth, bone abnormalities, tumors, and fractures. Dental panoramic radiographs are a form of extra-oral radiographic examination, meaning the patient is positioned between the radiographic film and the X-ray source. The scan describes a half-circle from ear to ear, producing a two-dimensional view of the upper and lower jaw. In contrast to intra-oral radiographs, such as bitewing and periapical radiographs, dental panoramic radiographs are not restricted to an isolated part of the dentition; they also show the skull, chin, spine, and other details originating from the bones of the nasal and facial areas, which makes these images much more difficult to analyze.
An automatic method to segment parts of dental panoramic radiographs could be a first step towards supporting dentists in their diagnoses; tooth segmentation in particular could be the starting point for an automated analysis of dental radiographs. In this thesis, the labeled dataset by Jader et al. will be used, supplemented by a dataset of 120,000 unlabeled images provided by the University Hospital Erlangen. It will be investigated how reasonable segmentation results can be achieved on a large unlabeled dataset by utilizing a smaller annotated dataset from a different source. For this purpose, different bootstrapping methods will be analyzed to improve the segmentation results using semi-supervised learning.
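The core idea behind bootstrapping (self-training) can be illustrated with a minimal sketch: a model is fitted on the labeled data, confident predictions on unlabeled data are turned into pseudo-labels, and the model is refitted on the enlarged training set. The toy one-dimensional threshold classifier, the confidence measure, and the threshold `tau` below are illustrative placeholders, not the segmentation model or criteria used in this thesis.

```python
def fit(labeled):
    """Toy 1-D classifier: boundary midway between the two labeled classes."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (min(pos) + max(neg)) / 2


def self_train(labeled, unlabeled, tau=2.0, rounds=3):
    """Bootstrapping loop: pseudo-label confident unlabeled points, refit.

    `tau` is a hypothetical confidence threshold (here: distance from the
    decision boundary); points closer than `tau` are left unlabeled.
    """
    labeled = list(labeled)
    remaining = list(unlabeled)
    for _ in range(rounds):
        boundary = fit(labeled)
        # Select unlabeled points the current model is confident about.
        confident = [x for x in remaining if abs(x - boundary) >= tau]
        if not confident:
            break  # no confident predictions left, stop early
        for x in confident:
            # Add the model's own prediction as a pseudo-label.
            labeled.append((x, 1 if x >= boundary else 0))
        remaining = [x for x in remaining if abs(x - boundary) < tau]
    return fit(labeled), labeled


labeled = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
unlabeled = [2.0, 8.0, 5.1]
boundary, augmented = self_train(labeled, unlabeled)
```

In a segmentation setting, the same loop would operate on per-pixel class probabilities instead of a scalar distance, but the structure of the procedure is identical.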