Skin cancer is the most common malignancy worldwide. In the US, more people are diagnosed with skin cancer each year than with all other cancers combined, and in the UK mortality rates from melanoma have risen by 156%.
Skin cancer is primarily diagnosed visually, beginning with an initial clinical screening by a dermatologist, and potentially followed by dermoscopic analysis, a biopsy, and histopathological examination.
According to the WHO, early detection of changes dramatically increases the chances of successful treatment. A study conducted by German researchers from the Association of Dermatological Prevention showed a 48% relative reduction in melanoma mortality after carrying out a skin cancer awareness campaign, clinician education and training, and screening of nearly 20% of eligible adults.
There's no denying that early diagnosis is essential for reducing mortality from the disease. And that's where machine learning software comes in.
Computers equipped with deep learning software, specifically convolutional neural networks (CNNs), can outperform experienced dermatologists at detecting skin cancer. This was demonstrated in a paper published in the leading cancer research journal Annals of Oncology in May 2018, in which the authors presented the results of a study by researchers in Germany, the US, and France who trained a CNN to identify skin cancer.
The researchers trained a neural network on 100,000 images of malignant melanomas (the most lethal form of skin cancer) together with pictures of benign moles. Once the network was trained, they compared its performance with that of 58 dermatologists from 17 countries. The network detected more melanomas than the trained professionals, and it also misdiagnosed benign moles as malignant less often than the dermatologists did.
Note that over half of the invited dermatologists were experts with more than five years of experience in detecting skin cancer visually. And yet, the physicians correctly detected 86.6% of melanomas and identified 71.3% of benign moles. When the researchers tuned the network so that its recognition of non-malignant changes matched the human level of correct identifications, the CNN detected an impressive 95% of melanomas.
That’s just one example of the many different projects that are now being developed in this field. In the next section, we will give you an overview of the various cutting-edge technologies used for diagnosing skin cancer.
Here are some examples of projects that use different technologies to build software capable of identifying visual signs of skin cancer.
Most of the time, image processing systems that are put to work in this medical field use the following flow in their analysis:
Image pre-processing – that's where we resize images and adjust other aspects such as contrast or brightness.
Image segmentation – we segment images using tools such as binary masks and edge detection.
Feature extraction – this is where we extract geometric properties of segmented lesions, such as area, perimeter, greatest diameter, circularity index, and irregularity index.
Classification – we use k-nearest neighbor algorithms (k-NN) and support vector machines (SVM) to enable image classification.
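The four stages above can be sketched end to end in pure Python. In the example below, the 8x8 "image", the intensity threshold, and the final classification rule are all illustrative assumptions, not parameters from any real diagnostic system:

```python
import math

# Toy 8x8 grayscale "image" (hypothetical values; the bright blob is the lesion).
image = [
    [10, 10, 12, 11, 10, 10, 10, 10],
    [10, 40, 80, 85, 42, 10, 10, 10],
    [11, 82, 90, 95, 88, 40, 10, 10],
    [10, 85, 92, 96, 90, 44, 10, 10],
    [10, 41, 86, 88, 45, 10, 10, 10],
    [10, 10, 12, 11, 10, 10, 10, 10],
    [10, 10, 10, 10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10, 10, 10, 10],
]

# 1. Pre-processing: stretch pixel values to the full 0-255 range.
lo = min(min(row) for row in image)
hi = max(max(row) for row in image)
stretched = [[(p - lo) * 255 // (hi - lo) for p in row] for row in image]

# 2. Segmentation: build a binary mask with a fixed intensity threshold.
mask = [[1 if p > 100 else 0 for p in row] for row in stretched]

# 3. Feature extraction: area, perimeter and circularity index 4*pi*A / P^2.
h, w = len(mask), len(mask[0])

def is_lesion(r, c):
    return 0 <= r < h and 0 <= c < w and mask[r][c] == 1

area = sum(sum(row) for row in mask)
perimeter = sum(                       # count exposed edges of lesion pixels
    not is_lesion(r + dr, c + dc)
    for r in range(h)
    for c in range(w)
    if mask[r][c]
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
)
circularity = 4 * math.pi * area / perimeter ** 2

# 4. Classification: a stand-in threshold rule; real systems feed such
# features into k-NN or SVM classifiers instead.
label = "suspicious" if circularity < 0.6 else "regular"
print(area, perimeter, round(circularity, 3), label)
```

The circularity index here is 1.0 for a perfect circle and drops as the outline becomes more irregular, which is why the stand-in rule flags low values.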
Sophisticated image processing systems are combined with classical machine learning algorithms: k-nearest neighbors (k-NN) and support vector machines (SVM). Both can be used as binary classifiers (malignant vs. benign in the case of skin cancer).
The k-NN algorithm is based on feature similarity: how closely a sample's features resemble those of the training set determines how we classify it. An SVM, by contrast, performs classification by finding the hyperplane that best separates the two classes.
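To make the k-NN idea concrete, here is a minimal pure-Python binary classifier over two-dimensional lesion feature vectors. The feature values and labels are invented for illustration; a real system would extract them from segmented images:

```python
import math
from collections import Counter

# Hypothetical training set: (circularity, irregularity) -> label.
training = [
    ((0.90, 0.10), "benign"),
    ((0.85, 0.20), "benign"),
    ((0.80, 0.15), "benign"),
    ((0.40, 0.80), "malignant"),
    ((0.35, 0.90), "malignant"),
    ((0.50, 0.70), "malignant"),
]

def knn_classify(sample, k=3):
    """Label a sample by majority vote among its k nearest training points."""
    by_distance = sorted(
        (math.dist(sample, features), label) for features, label in training
    )
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

print(knn_classify((0.45, 0.75)))  # its three nearest neighbours are malignant
```

With k=3 and well-separated clusters like these, the majority vote simply picks the cluster the sample falls into; in practice k and the distance metric are tuned on validation data.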
Example: Computer Aided Melanoma Skin Cancer Detection Using Image Processing by Jain Shivangi et al.
In this study, researchers analysed skin lesion images to determine whether skin cancer is present. Their Lesion Image analysis tool checks for various melanoma parameters, such as asymmetry, border, color, and diameter, during the image segmentation and feature extraction stages. The extracted feature parameters are then used to classify the image as normal skin or a melanoma lesion.
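The asymmetry parameter can be sketched as follows: mirror the segmented lesion mask and measure how much it fails to overlap itself. The mask and the score below are a simplified illustration of the idea, not the metric used in the paper:

```python
import numpy as np

# Hypothetical binary lesion mask (1 = lesion pixel).
mask = np.array([
    [0, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
])

# Reflect the mask across its vertical axis.
flipped = mask[:, ::-1]

# Asymmetry score: fraction of lesion area that does not overlap its mirror.
non_overlap = np.logical_xor(mask, flipped).sum()
asymmetry = non_overlap / mask.sum()
print(round(asymmetry, 2))
```

A perfectly symmetric lesion scores 0; higher scores indicate the kind of asymmetry that the ABCD rule treats as a warning sign.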
Other researchers have developed hybrid methods. On top of the standard image pre-processing workflow, the authors combined several other machine learning techniques, such as feature extraction and dimensionality reduction. Besides image processing and feature engineering, they also used a feed-forward artificial neural network (ANN) combined with classic machine learning classifiers such as k-NN. Because the ANN and k-NN classifiers are used in tandem to compute the final result, this is called a hybrid method. This approach allowed them to reach 95–98% accuracy in detecting cancerous changes.
Example: Automatic Skin Cancer Images Classification by Mahmoud Elgamal
The hybrid technique proposed in this paper consists of three stages: feature extraction with the discrete wavelet transform (DWT), dimensionality reduction with principal component analysis (PCA), and two classification algorithms. In the first stage, Elgamal extracted image features using the discrete wavelet transform. Next, he reduced these features to the most essential ones using principal component analysis. Finally, he developed two supervised classifiers: one a feed-forward back-propagation artificial neural network, the other based on the k-nearest neighbor algorithm. These classifiers were then used to label subjects' images as normal or abnormal, achieving success rates of 95% and 97.5%, respectively.
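The same three-stage idea (wavelet approximation, then PCA, then a classifier) can be compressed into a numpy sketch. Everything below is synthetic and simplified; it uses a one-level Haar approximation, SVD-based PCA, and a 1-nearest-neighbor classifier, and is not Elgamal's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar2d(img):
    """One level of a 2-D Haar transform; keep only the approximation band."""
    rows = (img[0::2, :] + img[1::2, :]) / 2.0
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0

# Synthetic dataset: 8x8 "images"; class 0 is darker than class 1.
dark = rng.normal(0.3, 0.05, size=(10, 8, 8))
bright = rng.normal(0.7, 0.05, size=(10, 8, 8))
images = np.concatenate([dark, bright])
labels = np.array([0] * 10 + [1] * 10)

# Stage 1: feature extraction - flatten the Haar approximation coefficients.
features = np.array([haar2d(img).ravel() for img in images])   # shape (20, 16)

# Stage 2: dimensionality reduction - project onto the top 2 PCA components.
mean = features.mean(axis=0)
centered = features - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:2]                                            # shape (2, 16)
reduced = centered @ components.T                              # shape (20, 2)

# Stage 3: classification - 1-nearest-neighbour in the reduced space.
def classify(img):
    z = (haar2d(img).ravel() - mean) @ components.T
    nearest = np.argmin(np.linalg.norm(reduced - z, axis=1))
    return labels[nearest]

print(classify(rng.normal(0.3, 0.05, size=(8, 8))))  # a dark sample, class 0
```

The Haar approximation halves each image dimension while keeping the coarse structure, and PCA then discards directions with little variance, so the classifier works in 2 dimensions instead of 64.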
In another machine learning approach, researchers used RBF (radial basis function) networks and SOM (self-organizing maps) to reach 96.15% and 95.45% accuracy in detecting skin cancer changes.
Example: Computer Vision for Skin Cancer Diagnosis and Recognition using RBF and SOM by Abrham Debasu Mengistu and Dagnachew Melesew Alemayehu
In this paper, the authors proposed an original approach to recognizing and predicting different types of skin cancer. The classification system was supervised, following predefined classes of skin cancer. The combination of self-organizing maps (SOM) and radial basis functions (RBF) for recognition and diagnosis of skin cancer was shown to be more efficient than k-NN, Naïve Bayes, and ANN classifiers. The tool achieved the best classification accuracy: 88%, 96.15%, and 95.45% for basal cell carcinoma, melanoma, and squamous cell carcinoma, respectively.
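At its core, an RBF network is a layer of Gaussian units followed by linear output weights. The sketch below trains such a network on invented 2-D features; the data, the center selection, and the kernel width are all assumptions, and the paper's SOM stage is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D feature vectors for two lesion classes.
X0 = rng.normal([0.3, 0.2], 0.05, size=(20, 2))
X1 = rng.normal([0.7, 0.8], 0.05, size=(20, 2))
X = np.vstack([X0, X1])
y = np.array([0.0] * 20 + [1.0] * 20)          # regression targets 0 / 1

# Hidden layer: Gaussian RBF units centred on a few training points.
centers = X[::8]                                # crude center selection
sigma = 0.3                                     # assumed kernel width

def rbf_features(samples):
    samples = np.atleast_2d(samples)
    d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

# Output layer: linear weights fitted by least squares.
H = rbf_features(X)
weights, *_ = np.linalg.lstsq(H, y, rcond=None)

def predict(samples):
    """Threshold the network output at 0.5 to get a class label."""
    return (rbf_features(samples) @ weights > 0.5).astype(int)

print(predict([[0.32, 0.22], [0.68, 0.79]]))
```

Because the hidden layer is fixed before fitting, training reduces to a single linear least-squares solve, which is one reason RBF networks are fast to train on small feature sets.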
Finally, let's not forget about the more sophisticated systems, which use complex convolutional neural networks – just like our first example. CNNs are a special class of artificial neural networks dedicated to processing visual data (images). Among many other applications, they are used in advanced systems for facial recognition, as well as object detection in self-driving cars. Now, these networks are being trained on labeled cancer images to detect malignant changes. Here's a preprint of the study by Titus J. Brinker et al. describing the use of CNNs in skin cancer detection: Skin Cancer Classification using Convolutional Neural Networks: Systematic Review.
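The building blocks that make CNNs suited to images can be shown in a few lines of numpy: a convolution that slides a small kernel over the image, a ReLU non-linearity, and max-pooling that downsamples the result. The hand-crafted edge-detecting kernel below is only a toy; in a trained diagnostic CNN, thousands of such kernels are learned from labeled images:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)                  # non-linearity between layers

def max_pool(x, size=2):
    """Downsample by taking the maximum over non-overlapping size x size blocks."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 6x6 image: dark left half, bright right half (one vertical edge).
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A hand-crafted kernel that fires on dark-to-bright vertical edges.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map)                              # peaks in the column with the edge
```

Stacking many such convolution, activation, and pooling layers, with learned kernels, is what lets a CNN turn raw lesion pixels into a malignant/benign decision.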
It's clear that machine learning technologies have massive potential in this field. Perhaps in the near future we'll outfit mobile devices with deep neural networks, extending the reach of dermatologists outside the clinic and allowing self-diagnosis with nothing more than a smartphone camera. Such wide availability of diagnostic tools could revolutionize the medical sector and save many lives through early detection of malignant changes.