Just a quick post on how I intend to improve my Tellsis language translator.
Currently, the app is able to translate to and from Tellsis based on text entered by the user. Text in Tellsis (either the source or the target) is displayed in the Tellsis font. This is great when translating to Tellsis, since you can see the resulting Tellsis characters right away. The reverse is tedious, though, since the user needs to decode the Tellsis characters into English letters before they can be entered as the source text.
So the next step is to use optical character recognition (OCR) to automatically extract the Tellsis characters from an image and output them in English letters.
For OCR, I intend to use the tesseract_ocr package. It will be necessary to create a trained model that can "read" Tellsis characters, and I think the scripts in this repo (tesseract-training) should be able to help.
As for selecting an image, it should come from either the gallery or the camera, and the image_picker package is just the right tool for this job. I even found a tutorial/example article here on how to use image_picker.
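To jot down the idea while it's fresh, a rough sketch of the image_picker call might look like this (assuming image_picker 0.8 or later, where `pickImage` returns an `XFile?`; the helper name is mine, not from the app):

```dart
import 'package:image_picker/image_picker.dart';

// Let the user choose a photo from the gallery, or take a new one
// with the camera. Returns the file path, or null if they cancel.
Future<String?> pickImagePath(ImageSource source) async {
  final picker = ImagePicker();
  final XFile? image = await picker.pickImage(source: source);
  return image?.path;
}
```

Called with `ImageSource.gallery` or `ImageSource.camera` depending on which option the user taps.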
The actual change will be to add a "Select image" button to the app's main screen. When an image has been selected (either from the gallery, or taken with the camera), it will be passed to the OCR routine, which extracts the Tellsis characters and outputs them as English letters. This input will also be shown on the main screen, where the Tellsis text usually gets displayed. The user can then press "Translate" to translate to the target language.
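The whole flow might wire up roughly like this. This is only a sketch: it assumes a custom traineddata file named "tellsis" (still to be produced with the training scripts above), and that the tesseract_ocr package exposes an `extractText` call along these lines; all names here are illustrative, not the final implementation:

```dart
import 'package:image_picker/image_picker.dart';
import 'package:tesseract_ocr/tesseract_ocr.dart';

// Hypothetical handler for the "Select image" button: pick an image,
// run OCR with the (yet to be trained) Tellsis model, and return the
// recognised text in English letters for the source text field.
Future<String?> selectImageAndExtractText(ImageSource source) async {
  final XFile? image = await ImagePicker().pickImage(source: source);
  if (image == null) return null; // user cancelled the picker

  // 'tellsis' is the assumed name of the custom traineddata file.
  return TesseractOcr.extractText(image.path, language: 'tellsis');
}
```

The returned string would then be placed into the source text box, ready for the existing "Translate" button to do its job.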
Now to find time and motivation to actually work on this improvement... 😅
Update July 26, 2021: Image picker added in v0.1.4_alpha, which can be found here.