Scaling OCR using Ray Datasets#
In this example, we will show you how to run optical character recognition (OCR) on a set of documents and analyze the resulting text with the natural language processing library spaCy. Running OCR on a large dataset is computationally expensive, so distributing the processing with Ray can speed up the analysis considerably. Ray Datasets makes it easy to compose the different steps of the pipeline, namely the OCR and the natural language processing. Ray Datasets’ actor support also allows us to be more efficient by sharing the spaCy NLP context between several datapoints.
To make it more interesting, we will run the analysis on the LightShot dataset. It is a large, publicly available OCR dataset with a wide variety of documents, all of them screenshots of various forms. It is easy to replace that dataset with your own data and adapt the example to your own use cases!
Overview#
This tutorial will cover:
Creating a Ray Dataset that represents the images in the dataset
Running the computationally expensive OCR process on each image in the dataset in parallel
Filtering the dataset by keeping only images that contain text
Performing various NLP operations on the text
Walkthrough#
Let’s start by preparing the dependencies and downloading the dataset. First we install the OCR software tesseract and its Python client.
On macOS:
brew install tesseract
pip install pytesseract
On Debian or Ubuntu:
sudo apt-get install tesseract-ocr
pip install pytesseract
By default, the following example runs on a tiny dataset we provide. If you want to run it on the full dataset, we recommend running it on a cluster, since processing all the images with tesseract takes a long time.
Note
If you want to run the example on the full LightShot dataset, you need to download and extract the dataset. Extract it by first running unzip archive.zip and then unrar x LightShot13k.rar. You can then upload the dataset to S3 with aws s3 cp LightShot13k/ s3://<bucket>/<folder> --recursive.
Let’s now import Ray and initialize a local Ray cluster. If you want to run OCR at a very large scale, you should run this workload on a multi-node cluster.
# Import ray and initialize a local Ray cluster.
import ray
ray.init()
2022-07-04 14:35:19,444 INFO services.py:1476 -- View the Ray dashboard at http://127.0.0.1:8265
RayContext(dashboard_url='127.0.0.1:8265', python_version='3.7.4', ray_version='1.13.0', ray_commit='e4ce38d001dbbe09cd21c497fedd03d692b2be3e', address_info={'node_ip_address': '127.0.0.1', 'raylet_ip_address': '127.0.0.1', 'redis_address': None, 'object_store_address': '/tmp/ray/session_2022-07-04_14-35-16_950060_89285/sockets/plasma_store', 'raylet_socket_name': '/tmp/ray/session_2022-07-04_14-35-16_950060_89285/sockets/raylet', 'webui_url': '127.0.0.1:8265', 'session_dir': '/tmp/ray/session_2022-07-04_14-35-16_950060_89285', 'metrics_export_port': 60416, 'gcs_address': '127.0.0.1:61663', 'address': '127.0.0.1:61663', 'node_id': 'b6c981243d51558d13e4290f0f63552a6126f8a8d9e472baafe9dd5b'})
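If you already have a multi-node Ray cluster running, you can connect to it instead of starting a local one. A minimal sketch, assuming this script runs on the cluster’s head node:

# Connect to an existing Ray cluster instead of starting a local one.
import ray
ray.init(address="auto")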
Running the OCR software on the data#
We can now use the ray.data.read_binary_files function to read all the images from S3. We set the include_paths=True option to create a dataset of the S3 paths and image contents. We then run the ds.map function on this dataset to execute the actual OCR process on each file and convert the screenshots into text. This will create a tabular dataset with columns path and text; see also Row UDF Output Types.
Note
If you want to load the data from a private bucket, you have to run
import pyarrow.fs
ds = ray.data.read_binary_files("s3://<bucket>/<folder>",
include_paths=True,
filesystem=pyarrow.fs.S3FileSystem(
access_key="...",
secret_key="...",
session_token="..."))
from io import BytesIO
from PIL import Image
import pytesseract

def perform_ocr(data):
    # Each element is a (path, bytes) tuple because we read with include_paths=True.
    path, img = data
    return {
        "path": path,
        # Decode the raw bytes into an image and run tesseract on it.
        "text": pytesseract.image_to_string(Image.open(BytesIO(img)))
    }
ds = ray.data.read_binary_files(
    "s3://anonymous@air-example-data/ocr_tiny_dataset",
    include_paths=True)

results = ds.map(perform_ocr)
2022-07-04 14:35:53,683 WARNING read_api.py:256 -- The number of blocks in this dataset (3) limits its parallelism to 3 concurrent tasks. This is much less than the number of available CPU slots in the cluster. Use `.repartition(n)` to increase the number of dataset blocks.
Read->Map: 100%|██████████| 3/3 [00:07<00:00, 2.34s/it]
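The warning above suggests using .repartition(n) to increase parallelism. On the full dataset, you could repartition before the map so that more OCR tasks run concurrently; a minimal sketch, where the block count of 100 is an arbitrary illustration rather than a tuned value:

# Increase the number of blocks so more OCR tasks can run in parallel.
ds = ds.repartition(100)
results = ds.map(perform_ocr)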
Let us have a look at some of the data points with the take function.
results.take(10)
[ArrowRow({'path': 'air-example-data/ocr_tiny_dataset/gnome_screenshot.png',
'text': '= Cancel\n\nTake Screenshot\n© Grab the whole screen\n\nGrab the current window\n\n|_| eeeeeeter\n\nGrab after a delay of 0\n\nEffects\nInclude pointer\n\n¥ Include the window border\n\nApply effect: None Sa\n\n+. seconds\n'}),
ArrowRow({'path': 'air-example-data/ocr_tiny_dataset/miranda_screenshot.png',
'text': '© Viktor (Online) : Message Session\n\n“etto| © Whter | steno\n\nremus\ntet? Fiviha\n\n17: dokonca to vie aj video @\nViktor\n\n1818. 55 samozrejme\n\n1818: len moj brat to skusal\nremus\n\nWA\n\n098003 —\n\nseettsgmailcom [0]\n\nonline\n\nHacemen\n@ Ce\n\nieFFo\n169 6 je <>vin ©®\n\nBe 22\n\naway\n\nTue\nhn\n\n& Wee\n\nYep, Tm here\n\n&\nea\na\nLS]\n\n'}),
ArrowRow({'path': 'air-example-data/ocr_tiny_dataset/qemu_screenshot.png',
'text': 'File Edit View Bookmarks\n\n[i New Tab [If] split view ~\n\n43044 kousekip\n\nPlugins\n\nkousekip:ako-kaede-mirai(htop)\n\nkousekip:ako-kaede-mirai(qemu-system-x86)\n\nSettings\n\nHelp\n\nkousekip:ako-kaede-miral(htop) — Konsole vax\n\nFl Paste Q Find\n\nEMU vax\n\nMachine View\n\nApplications Places System @)C) Fri Feb 18, 13:56\n\nTerminal\n\nroot root\nroot sys\nroot sys\nroot sys\nroot sys\nroot sys\nroot root\nroot sys\nroot bin\nroot root\nroot sys\nroot root\nroot sys\nroot sys\nroot root\nroot root\nroot root\nroot sys\nroot root\nroot sys\nroot sys\n2 root —sys\nkousekip@ako-kaede-mirai-sun:~$ If\n\nbin -> ./usr/bin\nboot\ndev\ndevices\netc\nexport\nhome\nkernel\nlib\nmedia\nmnt\n\nnet\nopt\nplatform\nproc\nroot\nrpool\nsbin\nsystem\n‘tmp\nusr\nvar\n\n@kousekip\nidesktop\n\n©\n\n©\n\nBUNwnSunennh SnuNaeon\n\n(Documents\nDownloads\nGaMusic\n\n5\n\nBitrash\nDevices\n(Floppy Drive\nNetwork\n\n@ Browse Netw...\n\n9\n9\n6\n4\n9\n\n53\n5\n6\n4\n9\n10\n0\n6\n18\n7\n\nfovey\\aliarel(elare)\n\n'})]
Saving and loading the result of the OCR run#
Note
Saving the dataset is optional; you can also continue with the in-memory data without persisting it to storage.
We can save the result of running tesseract on the dataset to disk so we can read it back later if we want to re-run the NLP analysis without re-running the OCR (which is very expensive on the whole dataset). This can be done with the write_parquet function:
import os
results.write_parquet(os.path.expanduser("~/LightShot13k_results"))
Write Progress: 100%|██████████| 3/3 [00:00<00:00, 207.11it/s]
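If you are running on a cluster, you can also write the results directly to cloud storage rather than the local filesystem; for example, with placeholder bucket and folder names:

# Write the OCR results to S3 (placeholder bucket and folder).
results.write_parquet("s3://<bucket>/<folder>")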
You can later reload the dataset with the read_parquet function:
results = ray.data.read_parquet(os.path.expanduser("~/LightShot13k_results"))
2022-07-04 14:36:13,515 WARNING read_api.py:256 -- The number of blocks in this dataset (6) limits its parallelism to 6 concurrent tasks. This is much less than the number of available CPU slots in the cluster. Use `.repartition(n)` to increase the number of dataset blocks.
Process the extracted text data with spaCy#
This is the part where the fun begins. Depending on your task, there will be different post-processing needs, for example (see the sketch after this list):
If you are scanning books or articles you might want to separate the text out into sections and paragraphs.
If you are scanning forms, receipts or checks, you might want to extract the different items listed, as well as extra information for those items like the price, or the total amount listed on the receipt or check.
If you are scanning legal documents, you might want to extract information like the type of document, who is mentioned in the document and more semantic information about what the document claims.
If you are scanning medical records, you might want to extract the patient name and the treatment history.
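As a minimal sketch of such post-processing, here is how you could split each document into paragraphs with another map call; the blank-line split is just one simple heuristic, and the split_paragraphs name is ours:

# Split the OCR text of each document into paragraphs on blank lines.
def split_paragraphs(row):
    return {
        "path": row["path"],
        "paragraphs": [p.strip() for p in row["text"].split("\n\n") if p.strip()],
    }

paragraphs = results.map(split_paragraphs)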
In our specific example, let’s try to determine which documents in the LightShot dataset are chat logs and extract named entities from those documents. We will extract this data with spaCy. Let’s first make sure the libraries are installed:
!pip install "spacy>=3"
!python -m spacy download en_core_web_sm
!pip install spacy_langdetect
This is some code to determine the language of a piece of text:
import spacy
from spacy.language import Language
from spacy_langdetect import LanguageDetector

nlp = spacy.load('en_core_web_sm')

@Language.factory("language_detector")
def get_lang_detector(nlp, name):
    return LanguageDetector()

nlp.add_pipe('language_detector', last=True)
nlp("This is an English sentence. Ray rocks!")._.language
{'language': 'en', 'score': 0.9999976594668697}
It gives both the language and a confidence score for that language.
In order to run the code on the dataset, we should use Ray Datasets’ built-in support for actors, since the nlp object is not serializable and we want to avoid recreating it for each individual sentence. We also batch the computation with the map_batches function so that spaCy can use more efficient vectorized operations where available:
import spacy
from spacy.language import Language
from spacy_langdetect import LanguageDetector

class SpacyBatchInference:
    def __init__(self):
        self.nlp = spacy.load('en_core_web_sm')

        @Language.factory("language_detector")
        def get_lang_detector(nlp, name):
            return LanguageDetector()

        self.nlp.add_pipe('language_detector', last=True)

    def __call__(self, df):
        # Process a whole batch of texts at once so spaCy can vectorize.
        docs = list(self.nlp.pipe(list(df["text"])))
        df["language"] = [doc._.language["language"] for doc in docs]
        df["score"] = [doc._.language["score"] for doc in docs]
        return df
results.limit(10).map_batches(SpacyBatchInference, compute="actors")
Read progress: 100%|██████████| 6/6 [00:00<00:00, 485.55it/s]
Map Progress (1 actors 1 pending): 100%|██████████| 6/6 [00:06<00:00, 1.04s/it]
Dataset(num_blocks=6, num_rows=6, schema={path: object, text: object, language: object, score: float64})
We can now get language statistics over the whole dataset:
languages = results.map_batches(SpacyBatchInference, compute="actors")
languages.groupby("language").count().show()
Read: 100%|██████████| 6/6 [00:00<00:00, 19.95it/s]
Map Progress (1 actors 1 pending): 100%|██████████| 6/6 [00:05<00:00, 1.09it/s]
Sort Sample: 100%|██████████| 6/6 [00:00<00:00, 919.27it/s]
Shuffle Map: 100%|██████████| 6/6 [00:00<00:00, 159.14it/s]
Shuffle Reduce: 100%|██████████| 6/6 [00:00<00:00, 364.59it/s]
{'language': 'af', 'count()': 2}
{'language': 'en', 'count()': 4}
Note
On the full LightShot dataset, you would get the following:
{'language': 'UNKNOWN', 'count()': 2815}
{'language': 'af', 'count()': 109}
{'language': 'ca', 'count()': 268}
{'language': 'cs', 'count()': 13}
{'language': 'cy', 'count()': 80}
{'language': 'da', 'count()': 33}
{'language': 'de', 'count()': 281}
{'language': 'en', 'count()': 5640}
{'language': 'es', 'count()': 453}
{'language': 'et', 'count()': 82}
{'language': 'fi', 'count()': 32}
{'language': 'fr', 'count()': 168}
{'language': 'hr', 'count()': 143}
{'language': 'hu', 'count()': 57}
{'language': 'id', 'count()': 128}
{'language': 'it', 'count()': 139}
{'language': 'lt', 'count()': 17}
{'language': 'lv', 'count()': 12}
{'language': 'nl', 'count()': 982}
{'language': 'no', 'count()': 56}
We can now filter to include only the English documents and also sort them according to their score.
languages.filter(lambda row: row["language"] == "en").sort("score", descending=True).take(1000)
Filter: 100%|██████████| 6/6 [00:00<00:00, 561.84it/s]
Sort Sample: 100%|██████████| 6/6 [00:00<00:00, 1311.81it/s]
Shuffle Map: 100%|██████████| 6/6 [00:00<00:00, 319.24it/s]
Shuffle Reduce: 100%|██████████| 6/6 [00:00<00:00, 450.79it/s]
[ArrowRow({'path': 'air-example-data/ocr_tiny_dataset/gnome_screenshot.png',
'text': '= Cancel\n\nTake Screenshot\n© Grab the whole screen\n\nGrab the current window\n\n|_| eeeeeeter\n\nGrab after a delay of 0\n\nEffects\nInclude pointer\n\n¥ Include the window border\n\nApply effect: None Sa\n\n+. seconds\n',
'language': 'en',
'score': 0.9999976791815426}),
ArrowRow({'path': 'air-example-data/ocr_tiny_dataset/gnome_screenshot.png',
'text': '= Cancel\n\nTake Screenshot\n© Grab the whole screen\n\nGrab the current window\n\n|_| eeeeeeter\n\nGrab after a delay of 0\n\nEffects\nInclude pointer\n\n¥ Include the window border\n\nApply effect: None Sa\n\n+. seconds\n',
'language': 'en',
'score': 0.9999965244942747}),
ArrowRow({'path': 'air-example-data/ocr_tiny_dataset/miranda_screenshot.png',
'text': '© Viktor (Online) : Message Session\n\n“etto| © Whter | steno\n\nremus\ntet? Fiviha\n\n17: dokonca to vie aj video @\nViktor\n\n1818. 55 samozrejme\n\n1818: len moj brat to skusal\nremus\n\nWA\n\n098003 —\n\nseettsgmailcom [0]\n\nonline\n\nHacemen\n@ Ce\n\nieFFo\n169 6 je <>vin ©®\n\nBe 22\n\naway\n\nTue\nhn\n\n& Wee\n\nYep, Tm here\n\n&\nea\na\nLS]\n\n',
'language': 'en',
'score': 0.8571411027551514}),
ArrowRow({'path': 'air-example-data/ocr_tiny_dataset/miranda_screenshot.png',
'text': '© Viktor (Online) : Message Session\n\n“etto| © Whter | steno\n\nremus\ntet? Fiviha\n\n17: dokonca to vie aj video @\nViktor\n\n1818. 55 samozrejme\n\n1818: len moj brat to skusal\nremus\n\nWA\n\n098003 —\n\nseettsgmailcom [0]\n\nonline\n\nHacemen\n@ Ce\n\nieFFo\n169 6 je <>vin ©®\n\nBe 22\n\naway\n\nTue\nhn\n\n& Wee\n\nYep, Tm here\n\n&\nea\na\nLS]\n\n',
'language': 'en',
'score': 0.5714285419353925})]
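To sketch the named-entity extraction mentioned at the start of this section, you can reuse the same actor pattern; doc.ents is standard spaCy, while the EntitiesBatchInference class and the entities column are illustrative names of ours:

import spacy

class EntitiesBatchInference:
    def __init__(self):
        self.nlp = spacy.load('en_core_web_sm')

    def __call__(self, df):
        # Batch the texts through spaCy and collect the entity strings per document.
        docs = list(self.nlp.pipe(list(df["text"])))
        df["entities"] = [[ent.text for ent in doc.ents] for doc in docs]
        return df

english = languages.filter(lambda row: row["language"] == "en")
english.map_batches(EntitiesBatchInference, compute="actors").take(5)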
If you are interested in this example and want to extend it, you can do the following for the full dataset:
go through these results in order
create labels on whether the text is a chat conversation, and then train a model on the data with a library like Hugging Face Transformers.
Contributions that extend the example in this direction with a PR are welcome!