My goal with this post is to show you how you can use machine learning in your work, even if you aren't an expert.

Medical students often take part in research in various specialties of interest. Since the summer between my first and second years of school, I've been involved with research in ophthalmology. I've worked in the lab to understand the migration of different molecules across various parts of the ocular surface, used a multi-photon microscope to view the eye's structure at the molecular level, and figured out that ophthalmologists perform some of the coolest surgeries out there.

However, the most interesting research I've done involves analyzing a dataset of almost twenty thousand corneal donations. When a cornea is harvested from a donor, it goes through an eye bank that analyzes and prepares it for further use as a research specimen or a transplant candidate. The dataset I was looking at came from a large eye bank and contained a great amount of detail on each cornea and its donor. My team looked at three major diseases affecting the eye: diabetes, glaucoma, and cataracts (specifically, patients who had cataract surgery). Our measure of ocular quality was the endothelial cell count (ECC). These cells serve a vital function: they drain water from and provide nutrients to the cornea, which is required to keep the eye clear, so it's critical that we preserve them in patients throughout their lives. You can read more about the corneal endothelium here and read about my findings here.

Planning and Preparing the Data

Now to the meat and potatoes. I've been interested in machine learning for a few months now. I started off with Andrew Ng's class on Coursera, which opened my eyes to the possibility of applying these concepts to medical data. Then I started delving into other classes and tutorials. Finally, I felt confident enough to apply my knowledge to the eye bank dataset, which is ripe for machine learning. I outlined my plan of attack as follows:

1. Figure out which features in the dataset determine ECC
2. Clean the data for a machine learning algorithm
3. Create a model that can classify the quality of the cornea based on the features
4. Extract the features correlated with good and bad ECC

Instead of considering only the three diseases above, I decided to analyze the entire column that contained the medical history. Each record contained the patient's "recent medical history", which was the cause of death, and "past medical history", which covered the rest of the patient's medical conditions and surgeries. My goal was to figure out which terms in this column were correlated with "adequate" (>2000) and "inadequate" (<2000) ECC. This column needed a lot of cleaning and standardizing. For example, many diseases appeared both under their full name, "hypertension", and their abbreviation, "HTN", across different records, so many common abbreviations were expanded to their full text in the dataset.
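A minimal sketch of that standardization step, assuming a small hypothetical abbreviation map (the real cleaning pass covered far more terms than these three):

```python
import re

# Hypothetical abbreviation map; the real dataset needed many more entries.
ABBREVIATIONS = {
    "HTN": "hypertension",
    "DM": "diabetes mellitus",
    "CABG": "coronary artery bypass graft",
}

def standardize(history: str) -> str:
    """Expand known abbreviations to their full terms, then lowercase."""
    for abbrev, full in ABBREVIATIONS.items():
        # \b word boundaries keep "HTN" from matching inside longer words
        history = re.sub(rf"\b{abbrev}\b", full, history, flags=re.IGNORECASE)
    return history.lower()

print(standardize("HTN, DM type 2, s/p CABG"))
# hypertension, diabetes mellitus type 2, s/p coronary artery bypass graft
```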

Training vs Test Set

We first want to separate our data into training and test sets. Ultimately, we are going to use our model on real data, so we first need to test its accuracy. If we test our model on the same data we trained on, we may get an overly optimistic picture of how well it works. Instead, we take 20% of the data and set it aside as a test set, which will not be touched until we are finished training. The test set approximates the future data our model will use to classify eyes, and thus gives us a more realistic picture of its accuracy.
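With scikit-learn, this split is a single call. The lists below are toy stand-ins for the real columns, since the eye bank records aren't public:

```python
from sklearn.model_selection import train_test_split

# Toy stand-ins for the real columns: medical-history text and the ECC label.
histories = ["hypertension", "diabetes", "glaucoma",
             "no significant history", "cataract surgery"] * 20
labels = ["inadequate", "adequate", "inadequate",
          "adequate", "adequate"] * 20

# Hold out 20% as a test set; random_state makes the split reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    histories, labels, test_size=0.2, random_state=42
)
print(len(X_train), len(X_test))  # 80 20
```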

CountVectorizer

Machine learning algorithms, for the most part, are not able to learn from or understand text data. They typically require numerical input, which our column of medical history is not. I used scikit-learn's CountVectorizer to turn the text data into a special type of array (I'll get into what makes this array special later). The array allows us to use standard machine learning algorithms to train a model.

The first step in this process is assigning an integer, a "token", to every word that appears in the text column. Then, we count up the number of times each token occurs in each donor's medical history. Each set of counts becomes a row representing the original text input, a "feature vector" for the more technical. For example, let's consider the dataset of two sentences below.

My medical school is in New York
New York is next to New Jersey

The tokenized form of this dataset is below:

Sentence_ID  my  medical  school  is  in  new  york  next  to  jersey
1            1   1        1       1   1   1    1     0     0   0
2            0   0        0       1   0   2    1     1     1   1

Each word that appears in the dataset is thrown into a "bag of words." Each unique word is now a feature, a column in the output array. The numbers in the array represent the number of times each word appeared in the original text input. For example, the second sentence, "New York is next to New Jersey", contains the word "new" twice, so we have a '2' in that column. One thing we lose by vectorizing in this way is the order of the words: there's no way to recover how the medical history was structured. For example, if the medical history of a donor was "new-onset diabetes, hypertension", the vectorized form would disregard whether "new-onset" was associated with diabetes or hypertension. This may be important, in that the disease that is not "new-onset" may have more bearing on the ECC of that sample. Fortunately, we may have enough data to make this point irrelevant. Additionally, without going into too much detail, we may be able to tweak our model to keep some word order.

Alright, so now we have our array, which is a sparse array. A sparse array allows us to save memory by shrinking the size of the array. To make this point clear, let's look at some output.

vectorized feature count: 13773

The feature count of our array is 13773. In other words, we have 13773 distinct words in our dataset. If a donor's medical history was "diabetes, acne, and hypertension", the corresponding row would have three 1's and 13770 0's. That's a lot of memory used for a relatively small amount of information. Instead, we use the sparse matrix, which stores only the nonzero entries of each vectorized instance. If "diabetes" were the 300th word, "acne" the 345th, and "hypertension" the 12456th, the sparse matrix would save just those positions in the row. We've now saved a huge amount of memory and, ultimately, compute time.
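A sketch of that saving, using SciPy's compressed sparse row (CSR) format and the hypothetical word positions above:

```python
import numpy as np
from scipy.sparse import csr_matrix

n_features = 13773  # vocabulary size from the eye bank dataset

# One donor's row: "diabetes, acne, and hypertension" at the hypothetical
# word positions 300, 345, and 12456.
row = np.zeros((1, n_features))
row[0, [300, 345, 12456]] = 1

sparse_row = csr_matrix(row)
print(sparse_row.nnz)      # 3 stored values instead of 13773
print(sparse_row.indices)  # the positions 300, 345, 12456
```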

At this point, I vectorized the medical histories of my training and testing datasets into two sparse matrices.

Multinomial Naive Bayes

Since this is meant to be a less technical post, I will save the nitty-gritty details of this algorithm for a future write-up.

I used the multinomial naive Bayes machine learning algorithm for my eye classification. The features of a naive Bayes model are all assumed to be conditionally independent of each other. Let's imagine that a person could have two common diseases, chronic obstructive pulmonary disease (COPD) and gastroesophageal reflux disease (GERD). Now let's say that person was a smoker. COPD and GERD are conditionally independent if, knowing the person was a smoker, learning that the person had COPD would not change the likelihood of that person having GERD, and vice versa. In other words, the chance of a smoker having GERD is the same regardless of whether he has COPD, and the chance of his having COPD is the same regardless of whether he has GERD. This may not actually be true, but the naive Bayes model assumes it is and still performs pretty well.
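With made-up numbers, the assumption can be checked arithmetically: if the joint probability of the two diseases (given smoking) factors into the product of their individual probabilities, then learning one disease tells us nothing new about the other.

```python
from fractions import Fraction

# Hypothetical probabilities, conditioned on the person being a smoker.
p_copd = Fraction(2, 5)   # P(COPD | smoker) = 0.4
p_gerd = Fraction(3, 10)  # P(GERD | smoker) = 0.3

# The naive Bayes assumption: the joint factorizes into a product.
p_both = p_copd * p_gerd  # P(COPD and GERD | smoker)

# Then P(GERD | COPD, smoker) = P(both | smoker) / P(COPD | smoker),
# which equals P(GERD | smoker): knowing COPD changes nothing.
print(p_both / p_copd == p_gerd)  # True
```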

A Bernoulli naive Bayes would model our donors' medical histories by the presence or absence of each disease, while the multinomial naive Bayes uses the number of times each disease is mentioned. It's likely most records mention each disease only once, so the advantage of the multinomial model here is slim. However, the multinomial model is the most widely used for vectorized text classification, so it will be what we use. (The Gaussian variant, which models continuous features, does not accept the sparse matrix at all.)

Now for the grand finale! Let's train the model, test it, and calculate its accuracy.
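A minimal sketch of the train-and-score step, again on stand-in data since the eye bank records aren't public (the texts and labels here are hypothetical):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# Stand-in medical histories and ECC labels; the real set had ~20,000 records.
train_texts = ["hypertension, diabetes", "no significant history",
               "glaucoma, HTN", "healthy donor"]
train_labels = ["inadequate", "adequate", "inadequate", "adequate"]
test_texts = ["diabetes, hypertension", "healthy donor"]
test_labels = ["inadequate", "adequate"]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)  # fit vocabulary on training only
X_test = vectorizer.transform(test_texts)        # reuse that vocabulary on the test set

model = MultinomialNB()  # default parameters, as in the post
model.fit(X_train, train_labels)
print(accuracy_score(test_labels, model.predict(X_test)))  # 1.0 on this toy set
```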

Naive Bayes accuracy_score: 0.935534222483

I am pretty satisfied with this score, considering we used the default parameters of our algorithms. There's plenty of room for improvement: if we incorporate medications and adjust the ngram_range of our CountVectorizer, we may improve our accuracy further. My hope is that resource-strapped eye banks can use a model like this to determine which eyes are worth the time to harvest.

In the meantime, I exported the tokens associated with inadequate ECC and those associated with adequate ECC. We will delve into the differences between these tokens to further our understanding of which diseases influence eye health over the course of a patient's life.

This was a fun experiment in machine learning and I'm glad it was able to help me perform worthwhile research. The field is very exciting and I am grateful for the opportunity to incorporate it into medicine.