Coronavirus: Rapid Detection Using Machine Learning

Deep Learning And Medical Image Analysis With Keras

Figure: CT scan

The purpose of this post is to extend the machine learning techniques and principles used in Adrian Rosebrock's post "Breast Cancer Classification with Keras" and to apply them as a model for coronavirus testing using CT scans.

This project will focus on CT scans as a rapid tool for diagnosing COVID-19. Indeed, Yan Li and Liming Xia "found that chest CT had a low rate of missed diagnosis of COVID-19 (3.9%, 2/51) and may be useful as a standard method for the rapid diagnosis of COVID-19 to optimize the management of patients." This first project will focus on how rapid diagnostics could play an important role in reducing the spread of the COVID-19 virus.

Further Research

Further projects will provide practical information on how CT scans and machine learning can play a role in not just detecting the virus but also tracking its progression and the effectiveness of treatments.

Datasets

https://data.europa.eu/euodp/en/data/dataset/covid-19-coronavirus-data/resource/55e8f966-d5c8-438e-85bc-c7a5a26f4863

Software Solutions

The following project uses the Keras deep learning library to automatically analyze medical images for coronavirus testing. Visual imaging diagnostics such as CT and MRI scans also provide a much richer tool for tracking the effectiveness of different treatments and the environmental factors that affect the spread and seriousness of the illness.

This initial project will focus on CT scans as a rapid tool for diagnosing COVID-19. Yan Li and Liming Xia "found that chest CT had a low rate of missed diagnosis of COVID-19 (3.9%, 2/51) and may be useful as a standard method for the rapid diagnosis of COVID-19 to optimize the management of patients."


However, CT is still limited for identifying specific viruses and distinguishing between them. The authors highlight "ground-glass opacities" (GGO): yes, you read that right, COVID-19 causes damage that appears similar in appearance to ground glass in the lungs. It is from these shadows on the scan that the extent of the damage can be tracked. Scheduling multiple scans for each patient across large populations, and sharing that data, could allow cross-correlation to see which factors and treatments affect the medical outcomes of those infected, as well as how to lower infection rates.

All the information on this site is provided open source and sources of information are cited. It is very much a work in progress and I am doing my best to provide links to the research this project is based on. My hope is to:

  1. Share research about the use of CT scans in rapid diagnosis.
  2. Share data and information on the spread of the disease.
  3. Share knowledge on best-practice techniques to analyse the coronavirus and track its progress within individuals, groups, and the wider population.
  4. Provide information about which machine learning approaches and techniques can be used.
  5. Provide practical methods for A/B testing of treatments, so that successful techniques can be deployed, shared, and standardised.
  6. Track the effectiveness of treatment using deep learning to perform medical image analysis; specifically, apply similar techniques that have been verified at roughly 95% accuracy for malaria and 97% for breast cancer diagnosis. This project therefore hopes to use the Keras deep learning library to automatically analyze medical images for coronavirus testing.

Different Methods That Are Currently Being Deployed To Detect The Virus:

  1. CT scans
  2. MRI scans
  3. Nasal swabs
  4. RNA (nucleic acid) tests
  5. Thermal scanner detection
  6. Ultrasound

This project will focus on MRI and CT scans as they can provide a frontline option for immediately applying machine learning: they not only have high detection rates but can also track the progress of the virus visually, show the damage over time, and show the effectiveness of treatment using A/B testing across groups. As part of a diagnostic tool set, the analysis can be set to run automatically and could be used as part of a triage system, since it provides faster results than laboratory testing.

I recommend the following research: "Coronavirus Disease 2019 (COVID-19): Role of Chest CT in Diagnosis and Management" by Yan Li and Liming Xia. I have summarized their findings. They state that the objective of their "study was to determine the misdiagnosis rate of radiologists for coronavirus disease 2019 (COVID-19) and evaluate the performance of chest CT in the diagnosis and management of COVID-19. The CT features of COVID-19 are reported and compared with the CT features of other viruses to familiarize radiologists with possible CT patterns." They studied the first "51 patients with a diagnosis of COVID-19 infection confirmed by nucleic acid testing (23 women and 28 men; age range, 26–83 years) and two patients with adenovirus (one woman and one man; ages, 58 and 66 years). We reviewed the clinical information, CT images, and corresponding image reports of these 53 patients. The CT images included images from 99 chest CT examinations, including initial and follow-up CT studies. We compared the image reports of the initial CT study with the laboratory test results and identified CT patterns suggestive of viral infection."

They found that chest CT had a low rate of missed diagnosis of COVID-19 (3.9%, 2/51) and may be useful as a standard method for the rapid diagnosis of COVID-19 to optimize the management of patients. However, CT is still limited for identifying specific viruses and distinguishing between viruses.

Their full article is available here: https://www.ajronline.org/doi/full/10.2214/ajr.20.22954

It is hoped that the content provided here can be used in practical diagnostic tools and adapted to provide a data-driven assessment of the success of different treatment options. All information provided is open source.

Key Sources Of Information:

Dr. Adrian Rosebrock from http://pyimagesearch.com/ – much of the structure of this project follows the approach Adrian Rosebrock applies on his website, which specialises in deep learning, Keras, and TensorFlow tutorials (e.g., "Grad-CAM: Visualize Class Activation Maps with Keras, TensorFlow, and Deep Learning"). He provides great tutorials on computer vision and some medical analysis projects, specifically malaria and cancer. This project is designed using the methodologies and much of the code provided on his website.

We've employed techniques suggested by Dr. Johnson Thomas, a practicing endocrinologist, who provided a great benchmark summarizing the work of the United States National Institutes of Health (NIH) used to build an automatic malaria classification system using deep learning.

This project seeks to adapt proven techniques which have been successfully used in medical analysis for cancer and malaria diagnosis. As Dr. Rosebrock recommends, "I decided I was going to minimize the amount of custom code I was going to write." Sounds good to me! He goes on to say, "Time is of the essence in disease outbreaks — if we can utilize pre-trained models or existing code, fantastic. We'll be able to help doctors and clinicians working in the field that much faster." Couldn't agree more! So let's look at how Dr. Rosebrock tackled breast cancer using deep learning and medical image analysis with Keras.

Deep Learning And Medical Image Analysis With Keras

In the first part of this project, I'll discuss how deep learning and medical imaging can be applied to the coronavirus epidemic.

From there we'll explore our coronavirus database, which contains CT scans that fall into one of two classes: positive for coronavirus or negative for coronavirus.

After we've explored the database, we'll briefly review the directory structure for today's project.

We'll then train a deep learning model on our medical images to predict whether a given patient's CT scan / X-ray is positive for coronavirus or not.

Finally, we'll review our results.

Deep learning, medical imaging, and the coronavirus epidemic

How can we quickly test for Coronavirus?

CT Scans

Research indicates a correlation between chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19); see "Correlation of Chest CT and RT-PCR Testing in Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases" by Tao Ai, Zhenlu Yang, Hongyan Hou, Chenao Zhan, Chong Chen, Wenzhi Lv, Qian Tao, Ziyong Sun, and Liming Xia.

Therefore CT scans could provide an excellent method not only of reducing testing time but also of tracking patient treatments and outcomes, helping to provide further evidence-based treatment.

NIH’s proposed deep learning solution

In 2018, Rajaraman et al. published a paper entitled Pre-trained Convolutional Neural Networks as Feature Extractors Toward Improved Parasite Detection in Thin Blood Smear Images. In their work, Rajaraman et al. utilized six pre-trained convolutional neural networks; feature extraction and subsequent training took a little over 24 hours and obtained an impressive 95.9% accuracy. The problem here is the number of models being utilized: it's inefficient. Dr. Rosebrock adapted a more efficient model, which will in turn be adapted for use with coronavirus.

Coronavirus

The malaria dataset we will be using in today's deep learning and medical image analysis tutorial is the exact same dataset that Rajaraman et al. used in their 2018 publication.

You'll want to go ahead and download the coronavirus.zip file to your local machine if you're following along with the tutorial.

The dataset consists of 10 images belonging to two separate classes:

  1. Infected: implying that the region contains coronavirus.
  2. Uninfected: meaning there is no evidence of coronavirus in the scanned region.

The number of images per class is equally distributed, with ? images per respective class.
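As a quick sanity check of whatever image set you are working with, you can count the files per class before training. The snippet below is a minimal sketch; the coronavirus/infected/ and coronavirus/uninfected/ directory names are assumptions for illustration, so adjust them to wherever your images actually live.

# count the images per class to verify the dataset is balanced
# (minimal sketch; the directory names below are assumptions)
from imutils import paths

for label in ("infected", "uninfected"):
    imageCount = len(list(paths.list_images("coronavirus/" + label)))
    print("{}: {} images".format(label, imageCount))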

Install necessary software

The software to run today's scripts is very easy to install. To set everything up, you'll use pip, virtualenv, and virtualenvwrapper. Be sure to follow the link in the Keras bullet below first.

To run today's code you will need:

  • Keras: Keras is my favorite deep learning framework. Read and follow my tutorial on installing Keras with the TensorFlow backend.
  • NumPy & scikit-learn: if you followed the Keras install instructions linked directly above, these packages for numerical processing and machine learning will be installed.
  • Matplotlib: the most popular plotting tool for Python. Once you have your Keras environment ready and active, you can install via pip install matplotlib.
  • imutils: my personal package of image processing and deep learning convenience functions, which can be installed via pip install --upgrade imutils. (A quick import check follows this list.)
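Once everything is installed, a minimal import check (a sketch, nothing more) confirms the environment is ready before moving on:

# verify that the core packages import and report their versions
import keras
import numpy
import sklearn
import matplotlib
import imutils  # imported only to confirm installation

print("keras:", keras.__version__)
print("numpy:", numpy.__version__)
print("scikit-learn:", sklearn.__version__)
print("matplotlib:", matplotlib.__version__)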

Project structure

Be sure to grab the "Downloads" for the post. The dataset isn't included, but the instructions in this section will show you how to download it as well.

First, change directories and unzip the files:

$ cd /path/where/you/downloaded/the/files
$ unzip dl-medical-imaging.zip

Then change directory into the project folder, create a malaria/ directory, and cd into it:


$ cd dl-medical-imaging
$ mkdir malaria
$ cd malaria

Next, download the dataset (into the dl-medical-imaging/malaria/ directory that you should currently be "in"):

$ wget https://ceb.nlm.nih.gov/proj/malaria/cell_images.zip

$ unzip cell_images.zip

If you don't have the tree package, you'll need it:

$ sudo apt-get install tree   # for Ubuntu
$ brew install tree           # for macOS

Now let's switch back to the parent directory:

$ cd ..

Finally, let's inspect our project structure using the tree command:

$ tree --dirsfirst --filelimit 10

The dataset is located in the malaria/ folder. The contents have been unzipped, and the cell_images/ for training and testing are categorized as parasitized/ or uninfected/.

The pyimagesearch module is the pyimagesearch/ directory. I often get asked how to pip-install pyimagesearch. You can't! It is simply included with the blog post "Downloads". Today's pyimagesearch module includes:

  • config.py: a configuration file. I opted to use Python directly instead of YAML/JSON/XML/etc. Read the next section to find out why as we review the config file.
  • resnet.py: this file contains the exact ResNet model class included with Deep Learning for Computer Vision with Python. In my deep learning book, I demonstrated how to replicate the ResNet model from the 2015 ResNet academic publication, Deep Residual Learning for Image Recognition by He et al.; I also show how to train ResNet on CIFAR-10, Tiny ImageNet, and ImageNet, walking you through each of my experiments, which parameters I changed, and why.

Today we'll be reviewing two Python scripts:

  • build_dataset.py: this file will segment our malaria cell images dataset into training, validation, and testing sets.
  • train_model.py: in this script, we'll employ Keras and our ResNet model to train a malaria classifier using our organized data.

But first, let's start by reviewing the configuration file which both scripts will need!

Our configuration file

When working on larger deep learning projects I like to create a config.py file to store all my constant variables.

I could use JSON, YAML, or equivalent files as well, but it's nice being able to introduce Python code directly into your configuration.

Let's review the config.py file now:

# import the necessary packages
import os

# initialize the path to the *original* input directory of images
ORIG_INPUT_DATASET = "malaria/cell_images"

# initialize the base path to the *new* directory that will contain
# our images after computing the training and testing split
BASE_PATH = "malaria"

# derive the training, validation, and testing directories
TRAIN_PATH = os.path.sep.join([BASE_PATH, "training"])
VAL_PATH = os.path.sep.join([BASE_PATH, "validation"])
TEST_PATH = os.path.sep.join([BASE_PATH, "testing"])

# define the amount of data that will be used for training
TRAIN_SPLIT = 0.8

# the amount of validation data will be a percentage of the
# *training* data
VAL_SPLIT = 0.1

Let's review the configuration briefly, where we:

  • Define the path to the original dataset of cell images (line 5).
  • Set our dataset base path (line 9).
  • Establish the paths to the output training, validation, and testing directories (lines 12-14). The build_dataset.py file will be responsible for creating the paths in your filesystem.
  • Define our training/testing split, where 80% of the data is for training and the remaining 20% will be for testing (line 17).
  • Set our validation split where, of that 80% for training, we'll take 10% for validation (line 21). Net of the full dataset, that works out to roughly 72% training, 8% validation, and 20% testing (see the quick check below).
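To make the arithmetic concrete, here is a quick check (a standalone sketch, not part of the pipeline) of what those two constants imply for the overall split:

# effective fractions of the full dataset implied by the config
TRAIN_SPLIT = 0.8
VAL_SPLIT = 0.1

test = 1 - TRAIN_SPLIT                  # 20% held out for testing
val = TRAIN_SPLIT * VAL_SPLIT           # 8% of the full dataset for validation
train = TRAIN_SPLIT * (1 - VAL_SPLIT)   # 72% of the full dataset for training

print(train, val, test)  # roughly 0.72 0.08 0.2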

Now let's build our dataset!

Building our deep learning + medical image dataset

Our malaria dataset does not have pre-split data for training, validation, and testing, so we'll need to perform the splitting ourselves.

To create our data splits we are going to use the build_dataset.py script. This script will:

  1. Grab the paths to all our example images and randomly shuffle them.
  2. Split the image paths into training, validation, and testing sets.
  3. Create three new sub-directories in the malaria/ directory, namely training/, validation/, and testing/.
  4. Automatically copy the images into their corresponding directories.

To see how the data split process is performed, open up build_dataset.py and insert the following code:

# import the necessary packages
from pyimagesearch import config
from imutils import paths
import random
import shutil
import os

# grab the paths to all input images in the original input directory
# and shuffle them
imagePaths = list(paths.list_images(config.ORIG_INPUT_DATASET))
random.seed(42)
random.shuffle(imagePaths)

Our packages are imported on lines 2-6. Take note that we're importing our config from pyimagesearch, and paths from imutils.

On lines 10-12, images from the malaria dataset are grabbed and shuffled.

Now let's split our data:

# compute the training and testing split
i = int(len(imagePaths) * config.TRAIN_SPLIT)
trainPaths = imagePaths[:i]
testPaths = imagePaths[i:]

# we'll be using part of the training data for validation
i = int(len(trainPaths) * config.VAL_SPLIT)
valPaths = trainPaths[:i]
trainPaths = trainPaths[i:]

The lines in the above code block compute our training and testing splits.

First, we compute the index of the train/test split (line 15). Then, using the index and a bit of array slicing, we split the data into trainPaths and testPaths (lines 16 and 17).

Again, we compute the index of the training/validation split from trainPaths (line 20). Then we split the image paths into valPaths and trainPaths (lines 21 and 22). Yes, trainPaths is reassigned because, as I stated in the previous section, "…of that 80% for training, we'll take 10% for validation".

Now that we have our image paths organized into their respective splits, let's define the datasets we'll be building:

# define the datasets that we'll be building
datasets = [
    ("training", trainPaths, config.TRAIN_PATH),
    ("validation", valPaths, config.VAL_PATH),
    ("testing", testPaths, config.TEST_PATH)
]

Here I've created a list of 3-tuples (called datasets) containing:

  1. The name of the split.
  2. The image paths for the split.
  3. The path to the output directory for the split.

With this information, we can begin to loop over each of the datasets:

# loop over the datasets
for (dType, imagePaths, baseOutput) in datasets:
    # show which data split we are creating
    print("[INFO] building '{}' split".format(dType))

    # if the output base output directory does not exist, create it
    if not os.path.exists(baseOutput):
        print("[INFO] 'creating {}' directory".format(baseOutput))
        os.makedirs(baseOutput)

    # loop over the input image paths
    for inputPath in imagePaths:
        # extract the filename of the input image along with its
        # corresponding class label
        filename = inputPath.split(os.path.sep)[-1]
        label = inputPath.split(os.path.sep)[-2]

        # build the path to the label directory
        labelPath = os.path.sep.join([baseOutput, label])

        # if the label output directory does not exist, create it
        if not os.path.exists(labelPath):
            print("[INFO] 'creating {}' directory".format(labelPath))
            os.makedirs(labelPath)

        # construct the path to the destination image and then copy
        # the image itself
        p = os.path.sep.join([labelPath, filename])
        shutil.copy2(inputPath, p)

On line 32 we begin to loop over the dataset type, image paths, and output directory.

If the output directory does not exist, we create it (lines 37-39).

Then we loop over the image paths themselves, beginning on line 42. In the loop, we:

  • Extract the filename + label (lines 45 and 46).
  • Create the subdirectory if necessary (lines 49-54).
  • Copy the actual image file itself into the subdirectory (lines 58 and 59).

To build your malaria dataset, make sure you have (1) used the "Downloads" section of this guide to download the source code + project structure and (2) properly downloaded the cell_images.zip file from NIH's website as well.

From there, open up a terminal and execute the following command:

$ python build_dataset.py
[INFO] building 'training' split
[INFO] 'creating malaria/training' directory
[INFO] 'creating malaria/training/uninfected' directory
[INFO] 'creating malaria/training/parasitized' directory
[INFO] building 'validation' split
[INFO] 'creating malaria/validation' directory
[INFO] 'creating malaria/validation/uninfected' directory
[INFO] 'creating malaria/validation/parasitized' directory
[INFO] building 'testing' split
[INFO] 'creating malaria/testing' directory
[INFO] 'creating malaria/testing/uninfected' directory
[INFO] 'creating malaria/testing/parasitized' directory

The script itself should only take a few seconds to create the directories and copy images, even on a modestly powered machine.

Inspecting the output of build_dataset.py, you can see that our data splits have been successfully created.

Let's take a look at our project structure once more just for kicks:

$ tree --dirsfirst --filelimit 10
.
├── malaria
│   ├── cell_images
│   │   ├── parasitized [13780 entries]
│   │   └── uninfected [13780 entries]
│   ├── testing
│   │   ├── parasitized [2726 entries]
│   │   └── uninfected [2786 entries]
│   ├── training
│   │   ├── parasitized [9955 entries]
│   │   └── uninfected [9887 entries]
│   ├── validation
│   │   ├── parasitized [1098 entries]
│   │   └── uninfected [1106 entries]
│   └── cell_images.zip
├── pyimagesearch
│   ├── __init__.py
│   ├── config.py
│   └── resnet.py
├── build_dataset.py
├── train_model.py
└── plot.png

15 directories, 9 files

Notice that the new directories have been created in the malaria/ folder and images have been copied into them.

Training a deep learning model for medical image analysis

Now that we've created our data splits, let's go ahead and train our deep learning model for medical image analysis.

As I mentioned earlier in this tutorial, my goal is to reuse as much code as possible from chapters in my book, Deep Learning for Computer Vision with Python. In fact, upwards of 75% of the code is directly from the text and code examples.

Time is of the essence when it comes to medical image analysis, so the more we can lean on reliable, stable code the better.

As we'll see, we'll be able to use this code to obtain 97% accuracy.

Let's go ahead and get started.

Open up the train_model.py script and insert the following code:

# set the matplotlib backend so figures can be saved in the background
import matplotlib
matplotlib.use("Agg")

# import the necessary packages
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler
from keras.optimizers import SGD
from pyimagesearch.resnet import ResNet
from pyimagesearch import config
from sklearn.metrics import classification_report
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--plot", type=str, default="plot.png",
    help="path to output loss/accuracy plot")
args = vars(ap.parse_args())

Since you followed my instructions in the "Install necessary software" section, you should be ready to go with the imports on lines 2-15.

We're using keras to train our medical image deep learning model, sklearn to print a classification_report, paths to grab image paths from our dataset, numpy for numerical processing, and argparse for command line argument parsing.

The tricky one is matplotlib. Since we're saving our plot to disk (and, in my case, on a headless machine) we need to use the "Agg" backend (line 3).

Line 9 imports my ResNet architecture implementation.

We won't be covering the ResNet architecture in this tutorial, but if you're interested in learning more, be sure to refer to the official ResNet publication as well as Deep Learning for Computer Vision with Python, where I review ResNet in detail.

We have a single command line argument that is parsed on lines 18-21, --plot. By default, our plot will be placed in the current working directory and named plot.png. Alternatively, you can supply a different filename/path at the command line when you go to execute the program (for example, python train_model.py --plot training_plot.png).

Now let's set our training parameters and define our learning rate decay function:

# define the total number of epochs to train for along with the
# initial learning rate and batch size
NUM_EPOCHS = 50
INIT_LR = 1e-1
BS = 32

def poly_decay(epoch):
    # initialize the maximum number of epochs, base learning rate,
    # and power of the polynomial
    maxEpochs = NUM_EPOCHS
    baseLR = INIT_LR
    power = 1.0

    # compute the new learning rate based on polynomial decay
    alpha = baseLR * (1 - (epoch / float(maxEpochs))) ** power

    # return the new learning rate
    return alpha

On lines 25-27, we define the number of epochs, initial learning rate, and batch size.

I found that training for NUM_EPOCHS = 50 (training iterations) worked well. A batch size of BS = 32 is adequate for most systems (CPU), but if you use a GPU you can increase this value to 64 or higher. Our initial learning rate INIT_LR = 1e-1 will decay according to the poly_decay function.

Our poly_decay function is defined on lines 29-40. This function will help us decay our learning rate after each epoch. We're setting power = 1.0, which effectively turns our polynomial decay into a linear decay. The magic happens in the decay equation on line 37, the result of which is returned on line 40.
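To see what the schedule actually does, here is a quick standalone check (a sketch using the same constants) that prints the learning rate at a few sample epochs; with power = 1.0 the rate falls linearly from 0.1 toward 0:

# evaluate the decay schedule at a few sample epochs
NUM_EPOCHS = 50
INIT_LR = 1e-1

for epoch in (0, 10, 25, 49):
    alpha = INIT_LR * (1 - (epoch / float(NUM_EPOCHS))) ** 1.0
    print("epoch {}: lr = {:.4f}".format(epoch, alpha))

# epoch 0:  lr = 0.1000
# epoch 10: lr = 0.0800
# epoch 25: lr = 0.0500
# epoch 49: lr = 0.0020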

Next, let's grab the number of image paths in our training, validation, and testing sets:

# determine the total number of image paths in training, validation,
# and testing directories
totalTrain = len(list(paths.list_images(config.TRAIN_PATH)))
totalVal = len(list(paths.list_images(config.VAL_PATH)))
totalTest = len(list(paths.list_images(config.TEST_PATH)))

We'll need these quantity values to determine the total number of steps per epoch for the training/validation/testing process.

Let's apply data augmentation (a process I nearly always recommend for every deep learning dataset):

# initialize the training data augmentation object
trainAug = ImageDataGenerator(
    rescale=1 / 255.0,
    rotation_range=20,
    zoom_range=0.05,
    width_shift_range=0.05,
    height_shift_range=0.05,
    shear_range=0.05,
    horizontal_flip=True,
    fill_mode="nearest")

# initialize the validation (and testing) data augmentation object
valAug = ImageDataGenerator(rescale=1 / 255.0)

On lines 49-57 we initialize our ImageDataGenerator, which will be used to apply data augmentation by randomly shifting, translating, and flipping each training sample. I cover the concept of data augmentation in the Practitioner Bundle of Deep Learning for Computer Vision with Python.

The validation ImageDataGenerator will not perform any data augmentation (line 60). Instead, it will simply rescale our pixel values to the range [0, 1], just like we have done for the training generator. Take note that we'll be using valAug for both validation and testing.

Let's initialize our training, validation, and testing generators:

# initialize the training generator
trainGen = trainAug.flow_from_directory(
    config.TRAIN_PATH,
    class_mode="categorical",
    target_size=(64, 64),
    color_mode="rgb",
    shuffle=True,
    batch_size=BS)

# initialize the validation generator
valGen = valAug.flow_from_directory(
    config.VAL_PATH,
    class_mode="categorical",
    target_size=(64, 64),
    color_mode="rgb",
    shuffle=False,
    batch_size=BS)

# initialize the testing generator
testGen = valAug.flow_from_directory(
    config.TEST_PATH,
    class_mode="categorical",
    target_size=(64, 64),
    color_mode="rgb",
    shuffle=False,
    batch_size=BS)

In this block, we create the Keras generators used to load images from an input directory.

The flow_from_directory function assumes:

  1. There is a base input directory for the data split.
  2. Inside that base input directory, there are N subdirectories, where each subdirectory corresponds to a class label.

Be sure to review the Keras preprocessing documentation as well as the parameters we're feeding each generator above. Notably, we:

  • Set class_mode equal to categorical so that Keras performs one-hot encoding on the class labels.
  • Resize all images to 64 x 64 pixels.
  • Set our color_mode to "rgb" channel ordering.
  • Shuffle image paths only for the training generator.
  • Use a batch size of BS = 32. (A quick sanity check of the resulting label mapping follows this list.)
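As a quick sanity check (a minimal sketch, assuming the generators above have been created), you can print the label-to-index mapping that flow_from_directory derives from the subdirectory names:

# inspect the class label mapping derived from the subdirectory names;
# the exact labels depend on your directory names
print(trainGen.class_indices)
# e.g. {'parasitized': 0, 'uninfected': 1}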

Let's initialize ResNet and compile the model:

# initialize our ResNet model and compile it
model = ResNet.build(64, 64, 3, 2, (3, 4, 6),
    (64, 128, 256, 512), reg=0.0005)
opt = SGD(lr=INIT_LR, momentum=0.9)
model.compile(loss="binary_crossentropy", optimizer=opt,
    metrics=["accuracy"])

On line 90, we initialize ResNet:

  • Images are 64 x 64 x 3 (3-channel RGB images).
  • We have a total of 2 classes.
  • ResNet will perform (3, 4, 6) stacking with (64, 128, 256, 512) conv layers, implying that:
    • The first conv layer in ResNet, prior to reducing spatial dimensions, will have 64 total filters.
    • Then we stack 3 sets of residual modules. The three conv layers in each residual module will learn 32, 32, and 128 conv filters respectively. We then reduce spatial dimensions.
    • Next, we stack 4 sets of residual modules, where each of the three conv layers will learn 64, 64, and 256 filters. Again, spatial dimensions are then reduced.
    • Finally, we stack 6 sets of residual modules, where each conv layer learns 128, 128, and 512 filters. Spatial dimensions are reduced a final time before average pooling is performed and a softmax classifier is applied.

Again, if you are interested in learning more about ResNet, including how to implement it from scratch, please refer to Deep Learning for Computer Vision with Python.

Line 92 initializes the SGD optimizer with the default initial learning rate of 1e-1 and a momentum term of 0.9.

Lines 93 and 94 compile the actual model using binary_crossentropy as our loss function (since we're performing binary, 2-class classification). For greater than two classes we would use categorical_crossentropy, as sketched below.
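For reference, a hypothetical multi-class variant (a sketch only; a four-class problem is assumed purely for illustration) would only change the class count and the loss:

# hypothetical 4-class variant: widen the classifier head and swap the loss
model = ResNet.build(64, 64, 3, 4, (3, 4, 6),
    (64, 128, 256, 512), reg=0.0005)
opt = SGD(lr=INIT_LR, momentum=0.9)
model.compile(loss="categorical_crossentropy", optimizer=opt,
    metrics=["accuracy"])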

We are now ready to train our model:

# define our set of callbacks and fit the model
callbacks = [LearningRateScheduler(poly_decay)]
H = model.fit_generator(
    trainGen,
    steps_per_epoch=totalTrain // BS,
    validation_data=valGen,
    validation_steps=totalVal // BS,
    epochs=NUM_EPOCHS,
    callbacks=callbacks)

On line 97 we create our set of callbacks. Callbacks are executed at the end of each epoch. In our case, we're applying our poly_decay LearningRateScheduler to decay our learning rate after each epoch.

Our model.fit_generator call on lines 98-104 instructs our script to kick off the training process.

The trainGen generator will automatically (1) load our images from disk and (2) parse the class labels from the image path.

Similarly, valGen will do the same process, only for the validation data.

Let's evaluate the results on our testing dataset:

# reset the testing generator and then use our trained model to
# make predictions on the data
print("[INFO] evaluating network...")
testGen.reset()
predIdxs = model.predict_generator(testGen,
    steps=(totalTest // BS) + 1)

# for each image in the testing set we need to find the index of the
# label with corresponding largest predicted probability
predIdxs = np.argmax(predIdxs, axis=1)

# show a nicely formatted classification report
print(classification_report(testGen.classes, predIdxs,
    target_names=testGen.class_indices.keys()))

Now that the model is trained, we can evaluate it on the test set.

Line 109 can technically be removed, but any time you use a Keras data generator you should get in the habit of resetting it prior to evaluation.

To evaluate our model, we'll make predictions on the test data and subsequently find the label with the largest probability for each image in the test set (lines 110-115).

Then we'll print our classification_report in a readable format in the terminal (lines 118 and 119).

Finally, we'll plot our training data:

# plot the training loss and accuracy
N = NUM_EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["acc"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_acc"], label="val_acc")
plt.title("Training Loss and Accuracy on Dataset")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(args["plot"])

Lines 122-132 generate an accuracy/loss plot for training and validation.

To save our plot to disk we call .savefig (line 133).

Medical image analysis results

Now that we've coded our training script, let's go ahead and train our Keras deep learning model for medical image analysis.

If you haven't yet, make sure you (1) use the "Downloads" section of today's tutorial to grab the source code + project structure and (2) download the cell_images.zip file from the official NIH malaria dataset page. I recommend following my project structure above.

From there, you can start training with the following command:

$ python train_model.py
Found 19842 images belonging to 2 classes.
Found 2204 images belonging to 2 classes.
Found 5512 images belonging to 2 classes.
...
Epoch 1/50
620/620 [==============================] - 67s - loss: 0.8723 - acc: 0.8459 - val_loss: 0.6020 - val_acc: 0.9508
Epoch 2/50
620/620 [==============================] - 66s - loss: 0.6017 - acc: 0.9424 - val_loss: 0.5285 - val_acc: 0.9576
Epoch 3/50
620/620 [==============================] - 65s - loss: 0.4834 - acc: 0.9525 - val_loss: 0.4210 - val_acc: 0.9609
...
Epoch 48/50
620/620 [==============================] - 65s - loss: 0.1343 - acc: 0.9646 - val_loss: 0.1216 - val_acc: 0.9659
Epoch 49/50
620/620 [==============================] - 65s - loss: 0.1344 - acc: 0.9637 - val_loss: 0.1184 - val_acc: 0.9678
Epoch 50/50
620/620 [==============================] - 65s - loss: 0.1312 - acc: 0.9650 - val_loss: 0.1162 - val_acc: 0.9678
[INFO] serializing network...
[INFO] evaluating network...
             precision    recall  f1-score   support

parasitized       0.97      0.97      0.97      2786
 uninfected       0.97      0.97      0.97      2726

avg / total       0.97      0.97      0.97      5512

Figure 10: Our malaria classifier model training/testing accuracy and loss plot shows that we've achieved high accuracy and low loss, and the model isn't exhibiting signs of over/underfitting. This deep learning medical imaging "malaria classifier" model was created with the ResNet architecture using Keras.

Here we can see that our model was trained for a total of 50 epochs.

Each epoch takes approximately 65 seconds on a single Titan X GPU.

Overall, the entire training process took only 54 minutes (significantly faster than the 24-hour training process of NIH's method). At the end of the 50th epoch we obtained:

  • 96.50% accuracy on the training data
  • 96.78% accuracy on the validation data
  • 97% accuracy on the testing data

There are a number of benefits to using the ResNet-based model we trained here today for medical image analysis.

To start, our model is a complete end-to-end malaria classification system.

Unlike NIH's approach, which leverages a multi-step process of (1) feature extraction from multiple models and (2) classification, we instead utilize a single, compact model and obtain comparable results.

Speaking of compactness, our serialized model file is only 17.7MB. Quantizing the weights in the model would allow us to obtain a model under 10MB (or even smaller, depending on the quantization method) with only slight, if any, decreases in accuracy.
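As a rough illustration of the quantization idea, post-training quantization could look something like the sketch below. Note that this assumes a TensorFlow 2 / tf.keras model and the TFLite converter rather than the standalone Keras setup used in this tutorial, so treat it as a direction, not a drop-in step:

# post-training quantization sketch (assumes a TF2 tf.keras model `model`)
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_model = converter.convert()

# write the quantized model to disk
with open("malaria_quantized.tflite", "wb") as f:
    f.write(tflite_model)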

Our approach is also faster in two ways.

First, it takes less time to train our model than NIH's approach.

Our model took only 54 minutes to train, while NIH's model took approximately 24 hours.

Second, our model is faster in terms of (1) forward-pass inference time and (2) significantly fewer parameters and memory/hardware requirements.

Consider the fact that NIH's method requires pre-trained networks for feature extraction.

Each of these models accepts input images with spatial dimensions in the range of 224×224, 227×227, and 299×299 pixels.

Our model requires only 64×64 input images and obtains near-identical accuracy.

All that said, I have not performed a full-blown accuracy, sensitivity, and specificity test, but based on our results we can see that we are on the right track to creating an automatic malaria classifier that is not only more accurate but also significantly smaller and less demanding of processing power.

My hope is that you will take the knowledge from today's tutorial on deep learning and medical image analysis and apply it to your own medical imaging problems.

Summary

In today's blog post you learned how to apply deep learning to medical image analysis; specifically, malaria prediction.

Malaria is an infectious disease that often spreads through mosquitoes. Given the fast reproduction cycle of mosquitoes, malaria has become truly endemic in some areas of the world and an epidemic in others. In total, over 400,000 deaths per year can be attributed to malaria.

NIH has developed a mobile application that, when combined with a special microscope attachment lens on a smartphone, enables field clinicians to automatically predict malaria risk factors for a patient given a blood smear. NIH's model combined six separate state-of-the-art deep learning models and took approximately 24 hours to train.

Overall, they obtained approximately 95.9% accuracy.

Using the model discussed in today's tutorial, a smaller variant of ResNet whose model size is only 17.7MB, we were able to obtain 97% accuracy in only 54 minutes.

Furthermore, more than 75% of the code utilized in today's tutorial came from my book, Deep Learning for Computer Vision with Python.

It took very little effort to take the code examples and techniques learned from the book and apply them to a custom medical image analysis problem.

During a disease outbreak, when time is of the essence, being able to leverage existing code and models can reduce engineering/training time, ensure the model is out in the field faster, and ultimately help doctors and clinicians better treat patients (and ideally save lives as well).

I hope you enjoyed today's post on deep learning for medical image analysis!