Prosthetic vision for the blind: Testing trained computer agents for phosphene vision in a realistic environment
- Training a Deep Reinforcement Learning (RL) agent and testing its performance on tasks such as navigation and obstacle avoidance in a realistic virtual environment developed in Unity.
Thanks to Burcu Kucukoglu, Sam Danen, and Jaap de Ruyter van Steveninck for their contributions to the code.
- Using a realistic environment to train and test the RL agent with phosphene vision. This clears the way for studies with more complex setups, such as dynamic environments, and widens the choice of experimental designs. Additionally, the realistic environment developed in this study provides a baseline testing environment for future work.
- Comparing different training parameters for the agent, which contributes to the aim of training Deep RL agents that can stand in for human participants. Achieving this could significantly reduce study costs and facilitate research in this area.
- Comparing different image pre-processing techniques, which may reveal the effect of extracting visual cues and the contribution of individual cues in a realistic navigation task.
➡️ The agent was spawned at the same location at the beginning of each episode, and the task of the trained agent was to navigate freely through the environment toward the target while avoiding collisions with walls and objects and taking the shortest path.
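The exact reward structure is not spelled out here; a common shaping for this kind of task (an assumption for illustration, not necessarily the scheme used in the thesis) combines a terminal goal bonus, a collision penalty, and a small per-step cost so that shorter, collision-free paths score higher:

```python
def step_reward(reached_target, hit_wall_or_object,
                step_cost=0.01, goal_bonus=1.0, collision_penalty=0.5):
    """Hypothetical reward shaping for the navigation task:
    reward reaching the target, penalize wall/object collisions,
    and apply a small per-step cost to favor the shortest path."""
    reward = -step_cost                # per-step cost -> shorter paths preferred
    if hit_wall_or_object:
        reward -= collision_penalty    # discourage collisions
    if reached_target:
        reward += goal_bonus           # terminal success bonus
    return reward
```

The relative magnitudes of these terms are tuning knobs: too large a collision penalty can make the agent overly conservative, while too small a step cost removes the incentive for short paths.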
➡️ Unity - ArchVizPro Interior Vol.1 3D Environment
- Phosphene Vision with Canny Edge Detection
- Phosphene Vision with Object Contour Segmentation and Canny Edge Detection
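The core idea behind both pre-processing pipelines is to reduce the camera image to an edge map and render it as a grid of phosphenes. The sketch below illustrates that idea in plain NumPy; the grid size, Gaussian-blob rendering, and the crude gradient threshold standing in for Canny (the project uses Canny edge detection, e.g. via OpenCV) are simplifying assumptions, not the thesis implementation:

```python
import numpy as np

def edge_map(gray, threshold=0.2):
    """Crude gradient-magnitude edge detector standing in for Canny,
    so the sketch stays dependency-free."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros_like(mag, dtype=bool)
    return mag > threshold * mag.max()

def simulate_phosphenes(gray, grid=(8, 8), sigma=1.5):
    """Render a phosphene image: mean edge activity in each grid cell
    drives the brightness of a Gaussian blob at the cell's center."""
    h, w = gray.shape
    edges = edge_map(gray)
    ys = np.linspace(0, h, grid[0] + 1).astype(int)
    xs = np.linspace(0, w, grid[1] + 1).astype(int)
    yy, xx = np.mgrid[0:h, 0:w]
    out = np.zeros((h, w))
    for i in range(grid[0]):
        for j in range(grid[1]):
            activity = edges[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
            if activity > 0:
                cy = (ys[i] + ys[i + 1]) / 2.0
                cx = (xs[j] + xs[j + 1]) / 2.0
                out += activity * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                         / (2 * sigma ** 2))
    return np.clip(out, 0.0, 1.0)
```

The segmentation pipeline differs only in what feeds the edge detector: object-contour masks instead of raw gray-scale intensities, which suppresses texture edges and keeps object outlines.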
- Double-DQN Sighted Agents (Input: Gray-Scale Images)
- Double-DQN Canny Agents (Input: Phosphene images after applying Canny Edge Detection)
- Double-DQN Segmentation Agents (Input: Phosphene images after applying Object Contour Segmentation and Canny Edge Detection)
- Random Agent (Agent chooses actions randomly)
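What distinguishes Double-DQN from vanilla DQN is the target computation: the online network selects the greedy next action, and the target network evaluates it, which reduces overestimation bias. A minimal NumPy sketch of that step (batch shapes and names are illustrative assumptions; the thesis networks themselves are not reproduced here):

```python
import numpy as np

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double-DQN bootstrap targets.

    q_online_next, q_target_next: (batch, n_actions) Q-values for the
    next states from the online and target networks respectively.
    rewards, dones: (batch,) arrays; dones is 1.0 for terminal steps.
    """
    best_actions = np.argmax(q_online_next, axis=1)        # online net selects
    next_q = q_target_next[np.arange(len(rewards)), best_actions]  # target net evaluates
    return rewards + gamma * next_q * (1.0 - dones)        # no bootstrap past terminal
```

In vanilla DQN both selection and evaluation would use `q_target_next`, coupling the max operator to the same (noisy) estimates and inflating the targets.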
- For details, please refer to my thesis project https://drive.google.com/file/d/1BpSDmnCU93v66h5bXSf1Cv3wfhMBIjIc/view