Your doctor taps away on her keyboard, documenting your symptoms and her observations. This electronic record captures her valuable clinical narrative: unstructured “free text” alongside structured data such as exam type, vital signs, and diagnosis codes.
“Natural language processing” (NLP) is a computing technique that Group Health Research Institute is testing and using to find mentions of specific words and phrases in this free text by parsing human-language sentence structure. As NLP’s accuracy improves, the technology can supplement skilled chart abstraction and may provide faster access to larger, richer bodies of data.
NLP also promises a significant boost in the scope of patient chart-based research: data could be pulled from 30,000 medical records in roughly the time it takes chart reviewers to process 3,000. Being able to conduct research on larger populations at lower overall cost, while reducing the tedium of manual chart review, would be a tremendous benefit.
“We’re exploring NLP to make population-based research more efficient, more timely, and cost less—key steps in discovering knowledge important to improving health care nationally,” says Eric B. Larson, MD, MPH, executive director of GHRI.
One example is a study of women with recurrent breast cancer who may benefit from new therapies, conducted by GHRI researchers and collaborators from the Cancer Research Network (CRN). NLP is being used to extract and combine information from chart notes, radiology reports, and pathology reports, all to explore new ways of determining if and when clinicians diagnose recurrent breast cancer. According to NLP technical lead David Carrell, PhD, this kind of “triangulation” is necessary because it’s often difficult to identify recurrences using diagnosis codes alone. Carrell is GHRI’s principal investigator on the National Cancer Institute-funded study, which serves the dual purposes of providing experience for Carrell’s team and demonstrating NLP’s scientific value for the CRN.
GHRI is developing NLP through the use of open-source software that Carrell says takes much of the “drudgery” out of manual chart abstraction. His team (D.T. Tran, Scott Halgrim, Roy Pardee, and Sharon Fuller) produces algorithms—step-by-step instruction sets that find mentions of words in chart notes and pathology and radiology reports. NLP-derived information can then be combined with other patient data to address a wide range of research questions or serve quality-assurance and evaluation purposes.
To create the algorithms, Carrell’s team uses the National Library of Medicine lexicon and localized resources developed through expert opinion and human review of patient charts. Custom dictionaries are also created to help target terms and account for typos and common misspellings. Roy Pardee is specifically leading a “machine learning” approach to identifying pathology reports that mention cancer in a way that should trigger follow-up.
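As a purely hypothetical illustration (the article does not show GHRI’s actual code or dictionary contents), a custom dictionary that maps typos and common misspellings to a canonical clinical term might look like this small Python sketch:

```python
# Hypothetical custom dictionary: variant spellings (including common
# typos) mapped to a single canonical term. The entries are invented
# examples, not GHRI's actual dictionary.
CUSTOM_DICTIONARY = {
    "carcinoma": "carcinoma",
    "carcenoma": "carcinoma",   # common misspelling
    "carcinma": "carcinoma",    # typo
    "stenosis": "stenosis",
    "stensis": "stenosis",      # typo
}

def normalize_tokens(text):
    """Return the canonical form of each recognized word in free text."""
    hits = []
    for word in text.lower().split():
        token = word.strip(".,;:()")
        if token in CUSTOM_DICTIONARY:
            hits.append(CUSTOM_DICTIONARY[token])
    return hits

print(normalize_tokens("Findings suggest ductal invasive carcenoma."))
# -> ['carcinoma']
```

Normalizing variants to one canonical term means the downstream algorithms only have to look for that one term, no matter how the clinician typed it.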
Algorithms use grammatical information from the text combined with “decision-tree logic” to find clinically important word mentions. One method uses parts of speech to help interpret words in context. Another specifies “near words”—text expected to appear close to key terms, such as the words “right side” or “lateral right” near the words “ductal invasive carcinoma.” One study required data on carotid artery stenosis (narrowing) from radiology reports; the algorithm for it captured the percentage of blockage, helping ensure that the condition was indeed present. Because clinicians express stenosis percentages in different ways, the algorithm was built to capture the many variations—such as “50%,” “fifty percent,” “50–70 percent,” and so on. NLP can even take into account medical conditions that have been ruled out. A “negation” algorithm helps determine when specific conditions aren’t present—something common in the clinical narrative, for example: “… there is no evidence of diabetes.”
Chart abstractors play a critical role for NLP by setting a “gold standard” for what the algorithms are expected to detect—the abstractors’ expert reviews of sets of sample documents are used to verify an NLP algorithm’s accuracy. “What motivates us is a well-performing algorithm—one we’re fairly certain can be used to process vast quantities of reports,” Carrell says. “And, the larger volumes of data that NLP produces still require some level of human validation.”
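One generic way to verify an algorithm against an abstractor gold standard—a sketch of standard practice, not a description of GHRI’s specific validation procedure—is to compare the two sets of flagged documents and compute precision and recall:

```python
# Invented example data: document IDs flagged by expert abstractors
# (the gold standard) versus those flagged by the NLP algorithm.
gold = {"doc01", "doc03", "doc07", "doc09"}
algorithm = {"doc01", "doc03", "doc05", "doc09"}

true_positives = gold & algorithm
precision = len(true_positives) / len(algorithm)  # share of flags that were correct
recall = len(true_positives) / len(gold)          # share of true cases the algorithm found

print(f"precision={precision:.2f}, recall={recall:.2f}")
# precision=0.75, recall=0.75
```

Only once both measures are high enough on the abstractor-reviewed sample would a team trust the algorithm to process the much larger volumes of reports Carrell describes.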
The NLP team is also supporting quality improvement at Group Health. At the suggestion of Matt Handley, MD, Group Health Director of Quality and Informatics, and David McCulloch, MD, Group Health Director of Clinical Improvement, Carrell’s team is exploring ways to identify shared decision-making discussions that take place during patient encounters—for example, a mention of educational videos about treatment options for hip replacement.
Carrell is excited about NLP’s potential. “Our hope is that we’ll be able to support bigger studies, faster—and at lower cost.”
by Gretchen Konrady