Alibek Jakupov

Azure OCR with PDF files

Updated: Nov 19, 2021



Azure OCR is an excellent tool that extracts text from images via API calls.

Azure's Computer Vision service provides developers with access to advanced algorithms that process images and return information. To analyze an image, you can either upload an image or specify an image URL. The images processing algorithms can analyze content in several different ways, depending on the visual features you're interested in. For example, Computer Vision can determine if an image contains adult or racy content, or it can find all of the human faces in an image. You can use Computer Vision in your application by using either a native SDK or invoking the REST API directly. This page broadly covers what you can do with Computer Vision.

(quoted from the official documentation)


What we are interested in is Optical Character Recognition:

You can use Computer Vision to extract text from an image into a machine-readable character stream using optical character recognition (OCR). If needed, OCR corrects the rotation of the recognized text and provides the frame coordinates of each word. OCR supports 25 languages and automatically detects the language of the recognized text. You can also use the Read API to extract both printed and handwritten text from images and text-heavy documents. The Read API uses updated models and works for a variety of objects with different surfaces and backgrounds, such as receipts, posters, business cards, letters, and whiteboards. Currently, English is the only supported language.
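The OCR endpoint returns nested JSON: regions, each containing lines, each containing words. A minimal sketch of flattening that structure into plain text lines, based on the documented v2.0 response shape (the sample payload below is made up for illustration):

```python
def parse_ocr_response(analysis):
    """Flatten an OCR JSON response into a list of lowercase text lines."""
    lines = []
    for region in analysis.get('regions', []):
        for line in region['lines']:
            # each line is a list of word objects; join their text fields
            text = ' '.join(word['text'] for word in line['words'])
            lines.append(text.lower())
    return lines


# A hypothetical response fragment in the documented shape
sample = {
    'regions': [
        {'lines': [
            {'words': [{'text': 'Hello'}, {'text': 'World'}]},
            {'words': [{'text': 'Azure'}, {'text': 'OCR'}]},
        ]}
    ]
}
print(parse_ocr_response(sample))  # ['hello world', 'azure ocr']
```

This is the same parsing logic used in the full script below, isolated so it can be tested without calling the API.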

However, if we want to analyze a PDF file with OCR, there is no direct way to do this. Below is fully working code that converts a PDF to images on the fly and extracts the text as an array of lines. No deep stuff. Enjoy.


# coding: utf-8
"""Convert each page of a given PDF to an image and send it to Azure OCR."""
import io

import requests
from pdf2image import convert_from_path


def pil_to_array(pil_image):
    """Convert a PIL image object to a byte array.

    Arguments:
        pil_image {PIL} -- Pillow image object

    Returns:
        {bytes} -- PIL image object in the form of a byte array
    """
    image_byte_array = io.BytesIO()
    pil_image.save(image_byte_array, format='PNG')
    return image_byte_array.getvalue()


def image_to_text(image_data):
    """Convert an image byte array to an array of text lines.

    Arguments:
        image_data {bytes} -- image byte array

    Returns:
        list -- array of strings representing lines
    """
    # Azure subscription key (keep real keys out of source control)
    subscription_key = "<your-subscription-key>"
    assert subscription_key
    # Azure Vision API base URL
    vision_base_url = "https://westeurope.api.cognitive.microsoft.com/vision/v2.0/"
    # OCR subsection
    ocr_url = vision_base_url + "ocr"
    headers = {'Ocp-Apim-Subscription-Key': subscription_key,
               'Content-Type': 'application/octet-stream'}
    params = {'language': 'unk', 'detectOrientation': 'true'}

    # get the response from the server
    response = requests.post(ocr_url, headers=headers, params=params, data=image_data)
    response.raise_for_status()
    # get the JSON payload to parse it
    analysis = response.json()
    # all the lines from the page, including noise
    full_text = []
    for region in analysis['regions']:
        for line in region['lines']:
            line_text = ' '.join(word['text'] for word in line['words'])
            full_text.append(line_text.lower())
    return full_text


def get_information(input_path):
    """Run OCR on every page of a PDF and return all extracted lines."""
    # points of interest from all the pages
    global_poi = []
    # get an array of PIL image objects -> one object per page
    images = convert_from_path(input_path)
    # convert each page to bytes and send it to the OCR endpoint
    for image in images:
        byte_array = pil_to_array(image)
        global_poi += image_to_text(byte_array)
    return global_poi


PATH = "your\\pdf-file\\path\\file.pdf"
poi = get_information(PATH)
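To run the script over a whole folder of PDFs rather than a single file, you only need a small helper that lists the PDF paths. A minimal sketch, assuming the `get_information` function above is available (`collect_pdfs` is a name introduced here for illustration):

```python
import os


def collect_pdfs(folder):
    """Return the full paths of all PDF files in a folder (non-recursive)."""
    return sorted(
        os.path.join(folder, name)
        for name in os.listdir(folder)
        if name.lower().endswith('.pdf')
    )


# Usage (requires the get_information function defined above and a valid key):
# for pdf_path in collect_pdfs("your\\pdf-folder"):
#     lines = get_information(pdf_path)
#     print(pdf_path, len(lines), "lines extracted")
```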

Hope you will find this helpful.


8 Comments


harpal.kalsi
Jul 29, 2020

@ajakupov You were correct. I was doing with the wrong URL. When I tried with the below, it worked. :o)

https://Harpal-MS-Computer-Vision.cognitiveservices.azure.com/vision/v3.0/ocr?language=unk&detectOrientation=true


Thank you @ajakupov. Appreciate your help and immediate response on this.


ajakupov
Jul 28, 2020

Hi harpal. Usually the URL is of the following format: https://{endpoint}/vision/v2.0/ocr[?language][&detectOrientation]. Your endpoint can be obtained from the Azure portal, so you may check it out there. If the URL is still unrecognized by the HTTP client, here is a great tool for testing your URL/API.

https://{endpoint}/vision/v2.0/ocr[?language][&detectOrientation]


Please, give it a try and keep me updated.


harpal.kalsi
Jul 28, 2020

I am getting the below error: HTTPError: 404 Client Error: Resource Not Found for url: https://harpal-computer-vision-3.cognitiveservices.azure.com/ocr?language=unk&detectOrientation=true Although, I provided the below:

- subscription_key

- vision_base_url


What am I doing wrong here....?


donniekerr01
May 17, 2020

Exactly what I was trying to do! Thanks for sharing!


ajakupov
May 17, 2020

Hi @donniekerr01, I've created a Docker image, uploaded it to Docker Hub and deployed everything to Azure Functions. To install Poppler, I've added the following command to the Docker script: sudo apt-get -y install poppler-utils. Here is the explanation: https://www.alirookie.com/post/azure-functions-custom-docker-configuration
