Student presents at 46th Annual IEEE AIPR 2017 Workshop

Our PhD student, Shuyue (Frank) Guan, attended the 46th Annual IEEE AIPR 2017 Workshop: Big Data, Analytics, and Beyond in Washington, DC. Frank gave a presentation on breast cancer detection using transfer learning in convolutional neural networks.

The IEEE-sponsored Applied Imagery Pattern Recognition (AIPR) workshop brings together researchers from government, industry, and academia across a broad range of disciplines. The Big Data analytics domains represented at AIPR 2017 included computer vision, remote sensing imagery, medical imaging, and robotics and tracking, with a focus on machine learning and deep learning.

Here is a brief summary of Frank’s project and presentation:

Traditional mammographic detection based on computer-aided diagnosis (CAD) tools relies on manually extracted features, but hand-crafted features have several drawbacks: they are domain-specific, and the process of feature design can be tedious, difficult, and hard to generalize. An alternative is to learn features directly from whole images through a convolutional neural network (CNN); however, training a CNN from scratch requires a huge number of labeled images. This requirement is infeasible for mammographic tumor images because such images are difficult to obtain, diseased cases are scarce in the datasets, and expert labeling is expensive. A promising solution is to use a limited number of labeled medical images to fine-tune a CNN model that has been pre-trained on very large image datasets from other fields, an approach known as transfer learning. In fact, some results of transfer learning are counter-intuitive: previous studies show that features learned from natural images can be transferred to medical images, even when the target images differ greatly from the pre-trained source images.
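To make the fine-tuning idea concrete, here is a minimal Keras sketch of the general setup: a VGG-16 backbone pre-trained on ImageNet, with only its last convolutional block left trainable and a new binary head attached. The input size, number of frozen layers, head architecture, and optimizer settings are illustrative assumptions, not the exact configuration used in the study.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

# Convolutional base pre-trained on natural images (ImageNet).
base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))

# Freeze everything except the last convolutional block, so only the
# top-most weights are updated by back-propagation on medical images.
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

# New binary head: abnormal vs. normal.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# A small learning rate helps avoid destroying the transferred features.
model.compile(optimizer=optimizers.SGD(learning_rate=1e-4, momentum=0.9),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)
```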

Using mammographic images from two databases, we tested three training methods: (1) training a CNN from scratch, (2) applying the complete VGG-16 model to extract features from input images and using these features to train a classifier, and (3) updating the weights in the last several layers of VGG-16 by back-propagation (fine-tuning) to detect abnormal regions. Comparing the three, we found that method (2) was the best fit for our study. We then used method (2) to classify regions from DDSM: benign vs. normal, malignant vs. normal, and abnormal vs. normal. Our results show an average accuracy of about 90.5% and an AUC of 0.96 for abnormal vs. normal classification on mammography, which are competitive results. Our best model reached 95% accuracy for the abnormal vs. normal case. Compared with recent studies, we used many more images for training, a different pre-trained model, and a simpler classifier.
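A rough sketch of method (2) in Python follows: the full pre-trained VGG-16 is used as a fixed feature extractor, and a simple classifier is trained on the extracted features. The choice of the "fc2" layer and of a logistic-regression classifier are assumptions for illustration, and the data-loading helper named in the comments is hypothetical; the study's exact classifier and preprocessing may differ.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import Model
from sklearn.linear_model import LogisticRegression

# Full VGG-16 (with its fully connected layers), truncated at "fc2",
# so each image maps to a 4096-dimensional feature vector.
vgg = VGG16(weights="imagenet", include_top=True)
extractor = Model(inputs=vgg.input, outputs=vgg.get_layer("fc2").output)

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3), RGB in [0, 255]."""
    return extractor.predict(preprocess_input(images.copy()))

# Simple classifier trained on the fixed VGG-16 features.
clf = LogisticRegression(max_iter=1000)

# Hypothetical usage with mammographic ROIs and binary labels
# (1 = abnormal, 0 = normal); loading and augmentation are omitted.
# X_train, y_train, X_test, y_test = load_ddsm_rois(...)  # hypothetical helper
# clf.fit(extract_features(X_train), y_train)
# print("accuracy:", clf.score(extract_features(X_test), y_test))
```

Because the CNN weights never change in this method, the features can be extracted once and cached, which is what makes it much faster to train than fine-tuning.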

This study shows that applying transfer learning in CNNs can detect breast cancer in mammographic images, and that training a classifier on extracted features is a fast way to build a good classifier with transfer learning.
