  • old master:
    • harder to converge compared to the beta version
    • supports both standard CTC and warpCTC
    • reads all data at once
  • dev:
    • the pipeline version of lstm_ctc_ocr; resizes images to the same size
    • uses TFRecords (a minimal TFRecord-writing sketch follows this list)
  • beta:
    • generates data on the fly
    • handles variable-width images by padding them to the same width
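
As a rough illustration of the TFRecords approach used in the dev branch, the sketch below writes image/label pairs to a TFRecord file. The feature names (image_raw, width, label) and the encoding are assumptions for illustration, not the repo's actual schema.

```python
# Hypothetical sketch of writing OCR samples to a TFRecord file (TF 1.x API).
# Feature names and encoding are assumptions, not the repo's actual schema.
import numpy as np
import tensorflow as tf

def write_tfrecord(images, labels, path):
    """images: list of uint8 arrays (H x W); labels: list of strings."""
    with tf.python_io.TFRecordWriter(path) as writer:
        for img, label in zip(images, labels):
            example = tf.train.Example(features=tf.train.Features(feature={
                'image_raw': tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[img.tobytes()])),
                'width': tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[img.shape[1]])),
                'label': tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[label.encode()])),
            }))
            writer.write(example.SerializeToString())

# Example: two fake 32-pixel-high images with their text labels.
write_tfrecord([np.zeros((32, 90), np.uint8), np.zeros((32, 120), np.uint8)],
               ['3a7f', '19bc2'], 'train.tfrecords')
```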

How to use

  1. run python genImg.py to generate the training images in train/ and the validation set in test/; each file name must follow the format 00000001_name.png (see the filename sketch after these steps). The number of generating processes is set to 16.
  2. cd standard or cd warpCTC
  3. run python lstm_ocr.py to start training
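
As a hedged illustration of the expected file naming, the helper below extracts the text after the underscore from a name such as 00000001_name.png; it assumes that part is the ground-truth label, which the format implies but the README does not state, and the function name is made up for this sketch.

```python
# Hypothetical helper: pull the label text out of a file name such as
# 'train/00000001_3a7f.png'.  Assumes the part after the underscore is the
# ground-truth text; this is an assumption, not confirmed by the repo.
import os

def label_from_filename(path):
    stem = os.path.splitext(os.path.basename(path))[0]  # '00000001_3a7f'
    index, label = stem.split('_', 1)                    # ('00000001', '3a7f')
    return label

print(label_from_filename('train/00000001_3a7f.png'))    # -> 3a7f
```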


Dependency

Some details

The training data: (sample image)

Note that parameters can be found in ./lstm.yml (higher priority) and lib/lstm/utils/config.y; some parameters need to be fine-tuned (a learning-rate decay sketch follows this list):

  • learning rate
  • decay step & decay rate
  • image_height
  • optimizer?
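
As a rough sketch of how the learning rate, decay step, and decay rate interact, the snippet below uses TF 1.x's tf.train.exponential_decay; the numeric values and the choice of Adam are placeholders, not the defaults shipped in ./lstm.yml.

```python
# Sketch of staircase learning-rate decay (TF 1.x).  The numeric values are
# placeholders, not the repo's defaults.
import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    learning_rate=1e-3,    # initial learning rate
    global_step=global_step,
    decay_steps=10000,     # "decay step": how often the rate is scaled
    decay_rate=0.9,        # "decay rate": multiplier applied at each decay step
    staircase=True)

# The decayed rate then feeds the chosen optimizer, e.g. Adam.
optimizer = tf.train.AdamOptimizer(learning_rate)
```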

In ./lib/lstm/utils/gen.py all images share the same height, and within each batch the widths are padded to the same value, so if you want to use your own data, the image height must be constant (a minimal padding sketch is shown below).
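
A minimal sketch of per-batch width padding, assuming fixed-height grayscale images stored as numpy arrays; the function name and the zero padding value are illustrative, not taken from gen.py.

```python
# Illustrative per-batch width padding: every image in a batch is padded on the
# right with zeros up to the widest image, and the true widths are kept so CTC
# can use them as sequence lengths.  Names and padding value are assumptions.
import numpy as np

def pad_batch(images):
    """images: list of (H, W) arrays with identical H but varying W."""
    max_width = max(img.shape[1] for img in images)
    padded, widths = [], []
    for img in images:
        pad = max_width - img.shape[1]
        padded.append(np.pad(img, ((0, 0), (0, pad)), mode='constant'))
        widths.append(img.shape[1])
    return np.stack(padded), np.array(widths)

batch, widths = pad_batch([np.ones((32, 90)), np.ones((32, 120))])
print(batch.shape, widths)  # (2, 32, 120) [ 90 120]
```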

Result

The accuracy can exceed 95%.

Read this blog for more details and this blog for how to use tf.nn.ctc_loss or warpCTC (a minimal tf.nn.ctc_loss sketch is shown below).
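
As a rough, hedged illustration of the standard-CTC path, the graph fragment below wires time-major logits into tf.nn.ctc_loss (TF 1.x); the class count, placeholder shapes, and label encoding are assumptions rather than the repo's actual code.

```python
# Minimal sketch of wiring tf.nn.ctc_loss (TF 1.x).  Shapes, class count, and
# label encoding are illustrative assumptions, not the repo's exact graph.
import tensorflow as tf

num_classes = 27          # e.g. 26 characters + 1 CTC blank (assumed)
batch_size, max_time = 2, 120

# Logits from the recurrent layers, time-major: [max_time, batch, num_classes].
logits = tf.placeholder(tf.float32, [max_time, batch_size, num_classes])
# True sequence lengths, e.g. the unpadded widths kept during batch padding.
seq_len = tf.placeholder(tf.int32, [batch_size])
# Ground-truth labels as a SparseTensor of character indices.
labels = tf.sparse_placeholder(tf.int32)

loss = tf.reduce_mean(tf.nn.ctc_loss(labels, logits, seq_len, time_major=True))
decoded, _ = tf.nn.ctc_beam_search_decoder(logits, seq_len)
```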