Automatically build accessible food menus from pictures.
Built by OpenBrewAi - https://www.openbrewai.com
Snap a pic of a food menu and AI will turn it into an interactive version complete with images, translations, health info, and more. You can even talk to the menu as if it were your waiter.
Scroll through the menu as a website and chat with it like a waiter who will answer any questions about the menu or restaurant.
If a dish has no image, you can generate one based on its description (using AI in the "localhost" dev environment, or Google Search in production), so you never have to guess what you are about to order. Image generation requires a paid OpenAI API key.
Switch the menu to any of several languages.
Lists the ingredients in each food item along with any associated health-risk (allergy) info.
Menu data is stored on-device in LocalStorage.
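Since menu data lives in LocalStorage, persistence boils down to serializing menus under a key. A minimal sketch of how that might look; the key name `"menus"` and the menu object shape are assumptions, not the app's actual schema:

```javascript
// Hypothetical on-device persistence helpers. The storage object is
// passed in so this also works outside a browser; in the app you would
// pass window.localStorage.
const STORAGE_KEY = "menus"; // assumed key name

function saveMenu(storage, menu) {
  // Read the existing map of menus, add/overwrite this one, write back.
  const menus = JSON.parse(storage.getItem(STORAGE_KEY) || "{}");
  menus[menu.id] = menu;
  storage.setItem(STORAGE_KEY, JSON.stringify(menus));
}

function loadMenu(storage, id) {
  const menus = JSON.parse(storage.getItem(STORAGE_KEY) || "{}");
  return menus[id] || null;
}
```

In the browser this would be called as `saveMenu(window.localStorage, menu)`; everything is stored as one JSON string, which is fine for LocalStorage's size limits at typical menu counts.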
This script was originally built for other purposes and is kept here for posterity; it is not required for the app to function. You would run it offline as a batch job if, for some reason, you wanted to bulk-screenshot all of your menus (e.g. for a cache server).
- Run `npm start` to start the app on localhost:3000.
- Run `node puppeteer-screenshot` from the src/tools dir.
- This will go through each menu page, open a headless browser, and export a screenshot to src/tools/screenshots.
In the project directory, you can run the Vercel CLI to locally test the API backend (Vercel Edge Functions).
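For context, a Vercel Edge Function is a handler that receives a standard `Request` and returns a standard `Response`. A minimal sketch of one (the `/api/hello` route, query parameter, and response shape are hypothetical, not this app's actual API); in a real edge function file you would `export default handler` and `export const config = { runtime: "edge" }`:

```javascript
// Hypothetical edge-style handler: echoes a query parameter as JSON.
function handler(request) {
  const { searchParams } = new URL(request.url);
  const name = searchParams.get("name") || "world"; // assumed parameter
  return new Response(JSON.stringify({ hello: name }), {
    headers: { "content-type": "application/json" },
  });
}
```

Because it only uses web-standard `Request`/`Response`, a handler like this can be exercised directly in Node 18+ without a server.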
When pushing to the main branch, Vercel will automatically deploy the website.
Webpack configuration is handled by react-scripts; you can find all webpack config inside node_modules/react-scripts/config.