Our application, R-cubed, aims to integrate proper recycling habits into people's daily lives. It serves as a waste-management assistant that fits seamlessly into everyday routines.
The application captures frames from a live video feed and classifies the object in each frame as organic or recyclable.
To build the application, we used IBM Watson's Visual Recognition and Text-to-Speech APIs. The Text-to-Speech API announces the resulting category out loud, making the experience hands-free for the user.
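The announcement step might look like the sketch below, using the `ibm-watson` SDK's `TextToSpeechV1` service and `playsound`. The API key, service URL, voice, and file name are placeholders, and the phrasing of the spoken sentence is our assumption:

```python
def category_phrase(category):
    """Build the sentence spoken for a classified item."""
    return f"This item is {category.lower()}."

def speak(category, apikey, service_url):
    # Imports kept inside the function so the module loads without the SDK.
    from ibm_watson import TextToSpeechV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
    from playsound import playsound

    tts = TextToSpeechV1(authenticator=IAMAuthenticator(apikey))
    tts.set_service_url(service_url)
    audio = tts.synthesize(
        category_phrase(category),
        voice="en-US_AllisonV3Voice",  # placeholder voice
        accept="audio/mp3",
    ).get_result().content
    with open("category.mp3", "wb") as f:
        f.write(audio)
    playsound("category.mp3")  # play the announcement
```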
Instead of using IBM Watson's default Visual Recognition model, we trained a custom model with two positive classes: Organic and Recyclable.
We also trained a Negative class, which holds images of objects that do not belong to either positive class. Since a person is usually visible in the frame while holding the item they want to throw out, we used images of humans as the negative examples. As a result, the model ignores people in the frame and classifies only the item itself as organic or recyclable.
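A hedged sketch of calling a custom classifier through the SDK's `VisualRecognitionV3` service; the classifier ID, version date, threshold, and credentials are placeholders. The `top_class` helper is our illustration of how the negative class plays out: Watson only returns positive classes scoring above the threshold, so an empty result (e.g. only a person in frame) means nothing gets categorized:

```python
def classify_frame(jpeg_path, apikey, service_url, classifier_id):
    # Imports kept inside the function so the module loads without the SDK.
    from ibm_watson import VisualRecognitionV3
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    vr = VisualRecognitionV3(version="2018-03-19",  # placeholder version date
                             authenticator=IAMAuthenticator(apikey))
    vr.set_service_url(service_url)
    with open(jpeg_path, "rb") as f:
        return vr.classify(images_file=f,
                           classifier_ids=[classifier_id],
                           threshold="0.6").get_result()

def top_class(response):
    """Return (class_name, score) for the best-scoring class, or None when
    nothing scored above the threshold (e.g. only a person was in frame)."""
    classifiers = response["images"][0]["classifiers"]
    classes = classifiers[0]["classes"] if classifiers else []
    if not classes:
        return None
    best = max(classes, key=lambda c: c["score"])
    return best["class"], best["score"]
```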
To set up the environment, we installed the following dependencies:

pip3 install opencv-python-headless
pip3 install matplotlib
pip3 install --upgrade "ibm-watson>=4.0.1"
pip3 install image
pip3 install imutils
pip3 install playsound
pip3 install pyobjc