IM Symposium Presentation - OCR and Text Analytics for Medical Chart Review Process
TRANSCRIPT
OCR and Text Analytics for Medical Chart Review Process
Alex Zeltov, Darwin Leung, Ravi Chawla, Somesh Nigam
BIOGRAPHY
Alex Zeltov
Research Scientist, Advanced Analytics, Independence Blue Cross
Leads the development and research of the Big Data initiative and predictive analytics across the Informatics Division for Independence Blue Cross.
Contact Info: Phone: 215.241.9885, Email: [email protected]
BIOGRAPHY
Darwin Leung
Director, Informatics Application Development and Operations
Independence Blue Cross
Responsible for the development of analytical applications across the Informatics Division for Independence Blue Cross.
Contact Info: Phone: 215.241.2255, Email: [email protected]
Background on Text Analytics and Medical Documents
Providers have different levels of technology readiness – varying from Electronic Medical Records (EMR) to paper charts.
We want to apply text analytics to all information available for different business cases.
Need to bring all information collected to a level where our technologies can be applied.
OCR for Medical Documents
OCR (Optical Character Recognition) for medical documents is valuable because the software delivers cost savings and increases productivity.
High Speed Provided by OCR
OCR software can achieve accuracy rates comparable to manual data entry, but in a fraction of the time.
OCR + Text Analytics Process
IMG/PDF/TIF DropBox (Share) → ImageMagick + OCR → HADOOP cluster (store text + PDF version of EMR in HADOOP) → Text Analytics / NLP processing → Results
Supporting inputs: Clinical Ontology, Predictive Models
Custom Distributed OCR Application:
A high-performance distributed OCR process runs in the background, sharing resources with the Informatics Big Data HADOOP cluster.
Customized open source tools used in the OCR process:
• Custom distribution and parallelization framework for OCR
• PDFtk: normalizes PDF headers and splits up the PDF pages
• ImageMagick: resizes, rotates, increases DPI, and applies various special effects to enhance image quality. Creates an image version of each PDF page.
• Tesseract OCR: extracts the text from the image file and generates text files; also generates searchable PDFs by creating metadata in the original PDF image files
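As a rough illustration, the per-page flow through these tools can be sketched as shell commands assembled in Python. The file paths and tool flags below are illustrative assumptions, not the production configuration:

```python
# Sketch of the per-page OCR pipeline: PDFtk slices the chart,
# ImageMagick rasterizes/enhances the page, Tesseract extracts text.
# Paths and flags are illustrative, not the production settings.

def ocr_pipeline_commands(chart_pdf, page, workdir="/tmp/ocr"):
    """Build the shell commands for OCR-ing one page of a chart."""
    page_pdf = f"{workdir}/page_{page:04d}.pdf"
    page_img = f"{workdir}/page_{page:04d}.png"
    page_txt = f"{workdir}/page_{page:04d}"  # Tesseract appends .txt
    return [
        # PDFtk: slice out a single page from the normalized chart PDF
        f"pdftk {chart_pdf} cat {page} output {page_pdf}",
        # ImageMagick: render the page at higher DPI to improve OCR quality
        f"convert -density 300 {page_pdf} -sharpen 0x1 {page_img}",
        # Tesseract: extract text from the page image
        f"tesseract {page_img} {page_txt}",
    ]

for cmd in ocr_pipeline_commands("chart_123.pdf", 7):
    print(cmd)
```

In the real system these steps run per page across the cluster rather than sequentially on one machine.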
OCR Performance Statistics
Per server node:
• Image enhancement and document slicing + OCR: ≈ 2 sec/page
• 1,800 pages/hr on 1 node
18 HADOOP cluster nodes run the OCR process in parallel:
• 32,400 pages/hr on the cluster
• Assuming a typical chart of 100 pages: ≈ 324 charts/hr
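The throughput figures above follow directly from the stated 2 sec/page rate and 18 nodes; a quick arithmetic check:

```python
# Verify the cluster throughput figures from the stated per-page timing.
SEC_PER_PAGE = 2         # image enhancement + slicing + OCR, per node
NODES = 18               # HADOOP cluster nodes running OCR in parallel
PAGES_PER_CHART = 100    # assumed typical chart size

pages_per_hour_node = 3600 // SEC_PER_PAGE             # pages/hr on 1 node
pages_per_hour_cluster = pages_per_hour_node * NODES   # pages/hr on cluster
charts_per_hour = pages_per_hour_cluster // PAGES_PER_CHART

print(pages_per_hour_node, pages_per_hour_cluster, charts_per_hour)
```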
Text Analytics Components:
Custom text analysis code using Java and Python
• Lucene – tokenization, shingles, n-gramming
• Weka – a collection of machine learning algorithms for data mining.
• Annotation Query Language (AQL) – a powerful text analytics engine developed by IBM and used by IBM Watson. Executes extractors in a highly efficient manner by using the parallelism provided by the Informatics HADOOP platform.
• OpenNLP – a variety of Java-based NLP tools that perform sentence detection, tokenization, part-of-speech tagging, chunking, parsing, and named-entity detection.
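Shingling (word-level n-gramming), as performed here with Lucene, can be illustrated in a few lines of plain Python. This is a simplified stand-in for the Lucene analyzers, not the production code:

```python
def shingles(text, n=2):
    """Return word-level n-grams ('shingles') from the text."""
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Bigrams let ontology matching catch multi-word terms
# like "kidney disease" rather than only single tokens.
print(shingles("chronic kidney disease stage three", 2))
```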
Ontology and Preprocessing
Clinical Ontology DB Repo → Load ontology terms per medical condition → Tokenize → Stop word filters → N-grams / shingles, stemming → Generate token permutations → Intermediate ontology tokens per job type (Hadoop GPFS) → Hadoop text analytics MR jobs
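A minimal sketch of the ontology preprocessing steps above (tokenize, filter stop words, stem, generate token permutations). The stop-word list and suffix-stripping stemmer are toy stand-ins for the real components:

```python
from itertools import permutations

STOP_WORDS = {"of", "the", "and"}  # toy stop-word list

def stem(token):
    # Toy suffix-stripping stemmer standing in for a real one
    for suffix in ("ies", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def ontology_tokens(term):
    """Tokenize an ontology term, drop stop words, stem, and permute."""
    tokens = [t for t in term.lower().split() if t not in STOP_WORDS]
    stems = [stem(t) for t in tokens]
    # Permutations let a match succeed regardless of word order in the chart
    return {" ".join(p) for p in permutations(stems)}

print(sorted(ontology_tokens("diseases of the kidneys")))
```

The resulting token sets would be written out as the intermediate ontology tokens consumed by the Hadoop text analytics jobs.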
HADOOP
• The HADOOP framework is a mechanism for analyzing huge datasets, which do not have to be housed in a datastore.
• HADOOP scales out to myriad nodes and can handle all of the activity and coordination related to data processing.
• HADOOP MapReduce is a way to process large data sets by distributing the work across a large number of nodes.
HADOOP Components:
• Common – contains libraries and utilities needed by other Hadoop modules.
• Hadoop Distributed File System (HDFS) – a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster.
  – HDFS creates multiple replicas of each data block and distributes them on computers throughout a cluster to enable reliable and rapid access.
• MapReduce – a programming model for large-scale data processing.
HADOOP Components:
• HBase – a distributed, column-oriented NoSQL database.
• Hive – a data warehouse system for Hadoop that facilitates easy data summarization, ad-hoc queries, and the analysis of large datasets.
• Sqoop – a tool designed for efficiently transferring bulk data between Hadoop and structured datastores such as relational databases.
• Pig – scripting platform.
• Oozie – workflow scheduler.
• Zookeeper – cluster coordination.
• Mahout – machine learning library.
Map Reduce
MapReduce is a way to process large data sets by distributing the work across a large number of nodes.
• Map:
  o The master node partitions the input into smaller sub-problems
  o It distributes the sub-problems to the worker nodes
  o Worker nodes may repeat the same process on their sub-problems
• Reduce:
  o The master node then takes the answers to all the sub-problems
  o It combines them in some way to produce the output
Map Reduce - Word Count Example
http://www.cs.uml.edu/~jlu1/doc/source/report/MapReduce.html
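The canonical word-count example linked above can be sketched locally, with the map and reduce phases written as plain Python functions (the distributed shuffle/sort step between them is omitted):

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the document
    return [(word, 1) for word in document.lower().split()]

def reduce_phase(pairs):
    # Reduce: sum the counts for each word across all emitted pairs
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the chart the nurse reviewed", "the chart"]
pairs = [p for d in docs for p in map_phase(d)]  # shuffle step omitted
print(reduce_phase(pairs))
```

In a real cluster, map tasks run on the nodes holding each input split, and the framework groups pairs by key before the reduce tasks run.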
Business Cases
Product Recall
Entity Extraction from Medical Charts
Nurse Chart Review Process
Business Case 1: Product Recall
• The text mining process helps identify the manufacturers that are on the recall list.
• Scheduled report alerts with potential identified members that match the recall manufacturers.
• Create a database of extracted patient and manufacturer information.
• The OCR + text mining process analyzes charts averaging 300+ pages.
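The matching step can be pictured as a simple dictionary scan over the OCR'd chart text. The manufacturer names and chart snippet below are made-up examples, not actual recall data:

```python
# Hypothetical recall list; the real list would come from the recall database.
RECALL_MANUFACTURERS = {"acme implants", "globex devices"}

def find_recall_matches(chart_text):
    """Return recall-list manufacturers mentioned in the chart text."""
    text = chart_text.lower()
    return sorted(m for m in RECALL_MANUFACTURERS if m in text)

chart = "Patient received hip implant from Acme Implants in 2012."
print(find_recall_matches(chart))
```

Matched members would then feed the scheduled alert reports and the extracted-information database described above.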
Business Case 1: Product Recall
Business Case 1: Product Recall
• Generated reports on the OCR results
• BigSheets - Web-based spreadsheet look and feel
Business Case 1: Entity Extraction
• Generated reports on the Entity Extraction results
• Create a database of extracted entity information accessible via jdbc/odbc.
Business Case 2: Nurse Chart Review Process
• The text mining process helps identify conditions and diagnoses, based on the medical ontology matches, for the nurse review.
• The text analytics prioritizes the charts for nurse review; the highest-scored EMR charts are presented first in the nurse review process.
• The nurse can open the text version of the chart, created as part of the OCR process, at the exact location of the matched terms in the scanned version of the chart.
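Prioritizing charts by ontology-match score can be sketched as follows; the chart IDs, texts, and ontology terms are illustrative:

```python
def prioritize_charts(charts, ontology_terms):
    """Score each chart by ontology-term hits; highest score first."""
    def score(text):
        t = text.lower()
        return sum(t.count(term) for term in ontology_terms)
    return sorted(charts, key=lambda c: score(c["text"]), reverse=True)

charts = [
    {"id": "A", "text": "routine visit, no findings"},
    {"id": "B", "text": "chronic kidney disease; kidney function declining"},
]
ranked = prioritize_charts(charts, ["kidney", "chronic"])
print([c["id"] for c in ranked])
```

In the production system the scoring is done by the AQL extractors (see the appendix), which sum per-document 'disposition' scores over dictionary matches.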
Summary
OCR software
• It can operate at high speeds and can often process batches of medical documents in various formats (jpg, tiff, gif, pdf, etc.)
• The extracted text can be stored in a database and then used for analytics, predictive modeling, and data mining
• This technology provides invaluable benefits in terms of cost savings and productivity.
Q & A
Appendix
HADOOP Ecosystem
AQL
HADOOP Ecosystem
AQL: Advanced Text Analytics
• A powerful text analytics engine developed by IBM and used by IBM Watson on the Jeopardy! quiz show.
• A declarative Annotation Query Language (AQL) with familiar SQL-similar syntax for specifying text analytics extraction programs (or extractors) with rich, clean rule semantics.
• A runtime engine for executing extractors in a highly efficient manner by using the parallelism provided by the IBM InfoSphere BigInsights engine using HADOOP platform.
• Built-in multilingual support for tokenization and part-of-speech analysis.
• The text analytics system extracts information from unstructured and semi-structured data.
AQL
Sample AQL
/* Dictionary of minor conditions */
create dictionary minorConditions
from file 'minorConditions.dict'
with language as 'en';

/* Dictionary of major conditions */
create dictionary majorConditions
from file 'majorConditions.dict'
with language as 'en';

/* Extract instances of minor conditions and 'score' 1 for each instance */
create view minor as
extract 1 as disposition,
  dictionary 'minorConditions' on R.text as match
from Document R;

/* Extract instances of major conditions and 'score' 2 for each instance */
create view major as
extract 2 as disposition,
  dictionary 'majorConditions' on R.text as match
from Document R;

/* Union together all instances */
create view RawDisposition as
(select * from minor)
union all
(select * from major);

/* Aggregate per-document score */
create view ConsolidatedDisposition as
select Sum(R.disposition) as disposition
from RawDisposition R;

export view ConsolidatedDisposition;
Developing/Testing AQL query
Entity Integration
END