arthurflor23/handwritten-text-recognition


Handwritten Text Recognition (HTR) system implemented using TensorFlow 2.x and trained on the Bentham/IAM/Rimes/Saint Gall/Washington offline HTR datasets. This neural network model recognizes the text contained in images of segmented text lines.

Data partitioning (train, validation, test) was performed following the methodology of each dataset. The project implements the HTRModel abstraction (inspired by CTCModel) to facilitate the development of HTR systems.
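The block below is a generic, self-contained illustration (not this repository's code) of the idea behind an HTRModel/CTCModel-style abstraction: a Keras recognizer trained with the CTC loss, so that line images map to character sequences without per-frame alignment. The layer sizes, the charset size of 100, and the zero-padding label convention are assumptions made for the sketch.

```python
# Generic illustration of a CTC-trained Keras recognizer (not this repository's code).
import tensorflow as tf


def ctc_loss(y_true, y_pred):
    # y_true: zero-padded label sequences; y_pred: per-timestep character softmax
    batch = tf.shape(y_true)[0]
    input_len = tf.fill([batch, 1], tf.shape(y_pred)[1])
    label_len = tf.math.count_nonzero(y_true, axis=-1, keepdims=True, dtype=tf.int32)
    return tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_len, label_len)


# toy recognizer: (width, height, 1) line image -> per-timestep charset probabilities
inputs = tf.keras.Input(shape=(1024, 128, 1))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D(pool_size=(1, 2))(x)
x = tf.keras.layers.Reshape((1024, -1))(x)
x = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64, return_sequences=True))(x)
outputs = tf.keras.layers.Dense(100, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss=ctc_loss)
model.summary()
```

The HTRModel abstraction hides this CTC plumbing (loss, decoding, callbacks) behind a Keras-like interface, which is what makes swapping the puigcerver, bluche, and flor architectures straightforward.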

Notes:

  1. All references are commented in the code.
  2. This project doesn't offer post-processing, such as a statistical language model.
  3. Check out the presentation in the doc folder.
  4. For more information and a step-by-step demo run, check out the tutorial on Google Colab/Drive.

Datasets supported

a. Bentham

b. BRESSAY

c. IAM

d. Rimes

e. Saint Gall

f. Washington

Requirements

  • Python 3.x
  • OpenCV 4.x
  • editdistance
  • TensorFlow 2.x

Command line arguments

  • --source: dataset/model name (bentham, bressay, iam, rimes, saintgall, washington)
  • --arch: network to be used (puigcerver, bluche, flor)
  • --transform: transform dataset to the HDF5 file
  • --cv2: visualize sample from transformed dataset
  • --kaldi_assets: save all assets for use with Kaldi
  • --image: predict a single image with the source parameter
  • --train: train model using the source argument
  • --test: evaluate and predict model using the source argument
  • --norm_accentuation: discard accentuation marks in the evaluation (see the sketch after this list)
  • --norm_punctuation: discard punctuation marks in the evaluation
  • --epochs: number of epochs
  • --batch_size: size of each batch
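During evaluation (--test), recognition quality is commonly reported as character and word error rates computed with editdistance (listed in the requirements). The sketch below is an illustrative implementation, not the project's exact evaluation code; it also shows the kind of normalization toggled by --norm_accentuation and --norm_punctuation.

```python
# Illustrative CER/WER computation with editdistance (not the project's exact code),
# including the optional accentuation/punctuation normalization.
import string
import unicodedata

import editdistance


def normalize(text, norm_accentuation=False, norm_punctuation=False):
    """Optionally strip accentuation marks and/or punctuation before scoring."""
    if norm_accentuation:
        text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
    if norm_punctuation:
        text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())


def cer_wer(prediction, ground_truth, **norm_flags):
    """Character and word error rates for a single predicted line."""
    pd = normalize(prediction, **norm_flags)
    gt = normalize(ground_truth, **norm_flags)

    cer = editdistance.eval(pd, gt) / max(len(gt), 1)
    wer = editdistance.eval(pd.split(), gt.split()) / max(len(gt.split()), 1)
    return cer, wer


print(cer_wer("Bentham Papors", "Bentham Papers"))  # -> (0.0714..., 0.5)
```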

Tutorial (Google Colab/Drive)

A Jupyter Notebook is available for a demo run; check out the tutorial on Google Colab/Drive.

Sample

Bentham sample with default parameters in the tutorial file.

  1. Preprocessed image (network input); a minimal loading sketch follows this list
  2. TE_L: ground truth text (label)
  3. TE_P: predicted text (network output)
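The preprocessed image in step 1 is the tensor actually fed to the network. The sketch below shows one minimal way to load a segmented line image with OpenCV and fit it to a fixed input size; the 1024x128 target size, the white padding value, and the (width, height, 1) layout are assumptions, and the project's actual preprocessing may involve additional steps.

```python
# Minimal sketch of turning a segmented line image into a network input tensor
# (target size, padding value and layout are assumptions).
import cv2
import numpy as np


def preprocess(path, input_size=(1024, 128)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # resize while keeping the aspect ratio, then pad with a white background
    target_w, target_h = input_size
    scale = min(target_w / img.shape[1], target_h / img.shape[0])
    new_w, new_h = int(img.shape[1] * scale), int(img.shape[0] * scale)
    img = cv2.resize(img, (new_w, new_h))

    canvas = np.full((target_h, target_w), 255, dtype=np.uint8)
    canvas[:new_h, :new_w] = img

    # lay out as (width, height, 1) and scale pixel values to [0, 1]
    return np.expand_dims(canvas.transpose(), axis=-1) / 255.0
```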

References

If you would like to learn more about the project or about Handwritten Text Recognition in general, the following references may be helpful: