ExoEye

ExoEye is a project in the form of glasses, whose primary purpose is to help people with visual disabilities detect:

  • what kind of objects are in front of them
  • how far away the obstacle their head is pointed at is
  • and where they are currently located geographically

The project was originally intended only for people with Category 5 visual impairment (irreversible blindness with no light perception). But it turned out that people with other categories of visual impairment were also interested in the project, so I naturally adjusted the hardware a bit to satisfy the requirements of all categories. I did that by redesigning the smaller circuits so that the person wearing the glasses can see through them while they are operating (which was not possible with the first version).

Users can also do as they prefer:

  • leave the see-through lenses on (this will hide their eyes in case of a severe condition)
  • remove them completely (the electronics are not affected by this)
  • or insert their own custom lenses (useful for a mild case of visual impairment, so they can still partially see through them)

Project Description

The project consists of three main units:

  1. the glasses
  2. the custom-made controller
  3. the Central Processing Device (CPD, an ARM-based CPU device; a Raspberry Pi in this case)

And two optional units:

  1. a user-supplied power source (any 5V/2A power source, battery-based or not)
  2. user-supplied headphones/headset (the glasses feature a 3.5mm stereo audio jack)

Operation

To get the project operational, the user should follow these five simple steps:

  1. Connect the glasses to the CPD
  2. Connect the controller to the CPD
  3. Put the glasses on and plug in their favourite headphones
  4. Power on the CPD with their preferred power source (e.g. a power bank)
  5. Place the CPD in their pocket and hold the controller in their hand

ExoEye powers on automatically when connected to the power source, and it emits a short double-beep sound after it finishes booting. (All sounds are delivered through the 3.5mm stereo audio jack on the glasses.)
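The firmware itself is not shown here, but the "ready" signal could be synthesized roughly as follows — a minimal sketch, assuming the beep is a plain sine tone written out to the 3.5mm jack by the audio layer (the frequency and timing values below are illustrative, not the actual ones):

```python
import math

SAMPLE_RATE = 44100  # samples per second


def sine_tone(freq_hz, duration_s, amplitude=0.5):
    """Generate one sine tone as a list of float samples in [-1, 1]."""
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]


def double_beep(freq_hz=1000.0, beep_s=0.1, gap_s=0.05):
    """Two short beeps separated by a brief silence (boot 'ready' signal)."""
    silence = [0.0] * int(SAMPLE_RATE * gap_s)
    tone = sine_tone(freq_hz, beep_s)
    return tone + silence + tone
```

The resulting sample buffer would then be handed to whatever audio backend drives the jack (e.g. ALSA on the RPI).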

All actions are available through the custom-made controller that sits in the user's hand.

The controller board has five buttons providing eight actions in total:

+) Increase volume

-) Decrease volume

A) Distance

  • single click = tells the distance in meters, e.g. “Two point twenty five” (using the male TTS narrator)
  • press and hold = activates the “Obstacle Avoidance Mode”

In short, this mode lets users detect obstacles in their way by emitting a continuous beam of sound whose frequency changes with the obstacle's distance: the closer the obstacle, the higher the pitch, and conversely, the farther away, the lower the pitch.
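The distance-to-pitch mapping above can be sketched as a simple linear interpolation — a hypothetical illustration, since the sensor range and frequency bounds below are assumptions, not the device's actual calibration:

```python
def obstacle_pitch(distance_m, min_d=0.2, max_d=4.0,
                   low_hz=220.0, high_hz=1760.0):
    """Map an obstacle distance reading to a beep frequency.

    Closer obstacle -> higher pitch; farther obstacle -> lower pitch.
    min_d/max_d are an assumed usable sensor range; low_hz/high_hz
    are illustrative frequency bounds.
    """
    # Clamp the reading to the usable range.
    d = max(min_d, min(max_d, distance_m))
    # Normalize: 0.0 = farthest, 1.0 = closest.
    closeness = (max_d - d) / (max_d - min_d)
    return low_hz + closeness * (high_hz - low_hz)
```

The returned frequency would then drive the tone generator feeding the audio jack.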

B) Object Recognition

  • single click = the TTS agent speaks out all the objects it recognized

To recognize objects, it uses a custom-made on-board machine learning model that I developed and trained on the most common indoor items, so it works offline; the downside is that it's quite limited and a bit slow on lower-budget devices like the RPI. A better implementation would be to use a cloud-based AI computing service for maximum power and speed, but that would require users to have a reliable internet connection, which cuts the client interest in half, so I went with the first solution… (Californians wouldn’t get this, but for the rest of the world, the internet is still quite unreliable ;)

  • press and hold = activates the “Object Recognition Mode”

Periodically calls the single click action.
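The hand-off between the model and the TTS agent might look like the following sketch — assuming (hypothetically) that the model returns (label, confidence) pairs; the threshold and the spoken phrasing are illustrative, not the device's actual wording:

```python
def detections_to_sentence(detections, threshold=0.5):
    """Turn model output into the sentence the TTS agent speaks.

    `detections` is an assumed list of (label, confidence) pairs.
    Low-confidence hits and duplicate labels are dropped before speaking.
    """
    seen, labels = set(), []
    for label, score in detections:
        if score >= threshold and label not in seen:
            seen.add(label)
            labels.append(label)
    if not labels:
        return "No objects recognized"
    return "I can see: " + ", ".join(labels)
```

In "Object Recognition Mode" this function would simply be re-run on each periodic capture.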

C) Location

  • single click = tells the current closest city name, street name and house number

Works offline, but requires the user to download and maintain a map of the target country.

  • press and hold = activates the “Location Detection Mode”

Periodically calls the single click action, with an interval of ten seconds.
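The offline location lookup boils down to a nearest-neighbour search over the downloaded map data. A minimal sketch, assuming the map extract has been reduced to a flat list of addresses with coordinates (the data layout and field names are hypothetical):

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    R = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))


def closest_address(lat, lon, addresses):
    """Pick the nearest known address to the GPS fix.

    `addresses` is an assumed offline extract: a list of
    (lat, lon, city, street, house_number) tuples.
    """
    return min(addresses, key=lambda a: haversine_m(lat, lon, a[0], a[1]))
```

A real implementation would index the extract spatially instead of scanning it linearly, but the principle is the same: match the GPS fix to the closest known address and speak it.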

Note: any mode can be turned off by clicking any button except the ones used for volume control.
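All three press-and-hold modes share the same pattern: repeat the single-click action on a timer until any non-volume button cancels it. A minimal sketch of that shared scaffolding (names and the threading approach are my illustration, not the actual firmware):

```python
import threading


class PeriodicMode:
    """Repeat `action` every `interval_s` seconds until stopped.

    In the device, clicking any non-volume button would call `stop()`.
    """

    def __init__(self, action, interval_s):
        self.action = action
        self.interval_s = interval_s
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            self.action()
            # wait() doubles as a cancellable sleep between repeats
            self._stop.wait(self.interval_s)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

"Location Detection Mode", for example, would be `PeriodicMode(say_location, 10.0)`, while "Object Recognition Mode" would wrap the recognition action with its own interval.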

Technology overview

Software:

C/C++, Python, JavaScript, Bash.

Hardware:

Glasses:

  • a pair of “wayfarer style” sunglasses
  • modified camera circuit with a red laser (HM.LOFN48)
  • custom-made circuit for stereo 3.5mm audio jack
  • standard ethernet cable (RJ45)

Central Processing Device:

  • Raspberry Pi 3 Model B+ (with acrylic housing)
  • Drone GPS module (u-blox gps6mv2, neo-6m-0-001)
  • Custom-modified RJ45 double-port switch

Controller:

  • custom made circuit board
  • standard ethernet cable (RJ45)


Greetings! 👋🏻
I am Daniel Petrovich, a seasoned entrepreneur, dedicated to expanding digital boundaries for both companies and individuals.