A Xamarin app using Azure Computer Vision's Read API and the Azure Immersive Reader's JavaScript SDK.
The Immersive Reader is an Azure Cognitive Service for developers who want to embed inclusive capabilities into their apps, enhancing text reading and comprehension for users regardless of age or ability. From what I can tell, it's designed largely to be dropped into web properties to extend their inclusive functionality with little effort. I wanted to try stretching it a bit by integrating a "Take Photo" capability.
This is a Xamarin.iOS sample application. It uploads an image of a book page or other text, either taken with the camera or picked from the gallery, to Azure Computer Vision to be read. The resulting text is built into a makeshift HTML page that loads and launches the Immersive Reader's JavaScript SDK, and that page is then loaded into a WKWebView.
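For orientation, here's a rough sketch of that pipeline in C#. It's illustrative rather than lifted from the sample: the class and method names (`ReaderPipeline`, `ShowInReader`), the `PrivateKeys` field names, the Read API version, and the SDK script URL are all assumptions, and the reader token/subdomain are assumed to come from the Immersive Reader authentication flow described in its docs.

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Foundation;
using WebKit;

public static class ReaderPipeline
{
    // Upload the photo to the Computer Vision Read API and poll for the OCR result.
    public static async Task<string> ReadTextAsync(byte[] imageBytes)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", PrivateKeys.ComputerVisionKey);

        var body = new ByteArrayContent(imageBytes);
        body.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        // The POST returns 202 Accepted; the Operation-Location header points at the async result.
        var submit = await http.PostAsync($"{PrivateKeys.ComputerVisionEndpoint}/vision/v3.2/read/analyze", body);
        submit.EnsureSuccessStatusCode();
        var operationUrl = submit.Headers.GetValues("Operation-Location").First();

        while (true)
        {
            await Task.Delay(1000);
            using var doc = JsonDocument.Parse(await http.GetStringAsync(operationUrl));
            var status = doc.RootElement.GetProperty("status").GetString();
            if (status == "failed") throw new Exception("Read operation failed.");
            if (status != "succeeded") continue;

            // Flatten every recognized line on every page into one block of text.
            var sb = new StringBuilder();
            foreach (var page in doc.RootElement.GetProperty("analyzeResult").GetProperty("readResults").EnumerateArray())
                foreach (var line in page.GetProperty("lines").EnumerateArray())
                    sb.AppendLine(line.GetProperty("text").GetString());
            return sb.ToString();
        }
    }

    // Wrap the recognized text in a bare-bones HTML page that launches the
    // Immersive Reader JS SDK, then hand the page to the web view.
    public static void ShowInReader(WKWebView webView, string text, string readerToken, string readerSubdomain)
    {
        // The SDK script URL below is the documented CDN build at the time of
        // writing; check the Immersive Reader docs for the current version.
        const string template = @"
<html><head>
<script src='https://ircdname.azureedge.net/immersivereadersdk/immersive-reader-sdk.1.2.0.js'></script>
</head><body><script>
ImmersiveReader.launchAsync(%TOKEN%, %SUBDOMAIN%, {
    title: 'Scanned page',
    chunks: [{ content: %TEXT%, mimeType: 'text/plain' }]
});
</script></body></html>";

        var html = template
            .Replace("%TOKEN%", JsonSerializer.Serialize(readerToken))
            .Replace("%SUBDOMAIN%", JsonSerializer.Serialize(readerSubdomain))
            .Replace("%TEXT%", JsonSerializer.Serialize(text)); // JSON-encode so quotes/newlines survive

        webView.LoadHtmlString(new NSString(html), null);
    }
}
```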
Since it's being demoed on a phone with a Retina screen and there aren't yet options for granular control over the reader's UI, the controls are a bit small. I'd need to experiment with increasing the web view's zoom/scale, and with launching the reader with preset options if that functionality becomes available in the future. But it's a fun example in the meantime.
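One direction that experiment could take (an untested sketch, not something the sample currently does) is injecting a viewport override into the generated page before it loads, so the reader chrome renders larger:

```csharp
using Foundation;
using WebKit;

static void BumpReaderScale(WKWebView webView)
{
    // Add a viewport meta tag once the document has rendered; the
    // initial-scale value here is a guess that would need hand-tuning.
    var js = @"var m = document.createElement('meta');
               m.name = 'viewport';
               m.content = 'width=device-width, initial-scale=1.5';
               document.head.appendChild(m);";

    // User scripts must be registered before content is loaded into the view.
    var script = new WKUserScript(new NSString(js), WKUserScriptInjectionTime.AtDocumentEnd, true);
    webView.Configuration.UserContentController.AddUserScript(script);
}
```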
Using this app: You'll need an Azure subscription with an Immersive Reader resource and a Cognitive Services resource. Once those are created, place the keys and endpoint URLs in the respective string declarations in the Help/PrivateKeys.cs file.
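The repo ships without those values filled in; the exact identifiers live in Help/PrivateKeys.cs, but the file's shape is presumably something like this (the field names below are guesses for illustration, not the file's actual members):

```csharp
// Help/PrivateKeys.cs — illustrative shape only; match the names to the real file.
namespace Help
{
    public static class PrivateKeys
    {
        // Cognitive Services (Computer Vision) resource
        public const string ComputerVisionKey = "<your-computer-vision-key>";
        public const string ComputerVisionEndpoint = "https://<your-region>.api.cognitive.microsoft.com";

        // Immersive Reader resource — these feed the AAD token request
        // that produces the token passed to launchAsync.
        public const string ImmersiveReaderSubdomain = "<your-reader-subdomain>";
        public const string ImmersiveReaderTenantId = "<your-tenant-id>";
        public const string ImmersiveReaderClientId = "<your-client-id>";
        public const string ImmersiveReaderClientSecret = "<your-client-secret>";
    }
}
```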
Provided open source under the MIT License.