
Navigating the world in the dark

The application, developed by London-based Microsoft engineer Saqib Sheikh, can help blind people read text from posters or menus by taking a photo of them. The app also guides the user on how to frame the shot, and once the picture is taken, the user can ask the app to read the menu aloud.

Blind people face many challenges in daily life. Despite those challenges, blind people today can read, get an education, browse the internet and access most books and media. Over the years many devices, methods and apps have been developed to help blind people find their way indoors and outdoors, but outdoor life is not just about finding your way across the street to the grocery store or catching the right bus. It is great to get out on a spring day and take in the world with all of the body's senses. Smelling the flowers and feeling the wind on your skin are uplifting, but they will not completely cheer you up without seeing the dazzling lights and colours, watching a little child on a swing or a bird washing its feathers in a fountain.

At its Build 2016 developer conference, Microsoft released a video explaining how its new intelligent software system, called Seeing AI, works. Saqib Sheikh, a Microsoft engineer who lost his sight at age 7, developed the app to help blind people navigate the world and understand what is going on around them. "It is an honour to share a stage with Saqib today. He took his passion and empathy to change the world…" said Microsoft CEO Satya Nadella during the release of the video.

Years ago this was science fiction, and Sheikh would never have imagined it would be something he could actually build. He says that talking computer technology inspired him to develop the application: "For me it's about taking that far-off dream and building it one step at a time. I love making things that improve people's lives and one of the things I've always dreamt of since I was at university was this idea of something that could tell you at any moment what's going on around you," Sheikh said in the presentation. The app uses artificial intelligence to capture images of the world and process them in order to understand what is happening.

The app runs on smartphones and also on Pivothead smart glasses, so the user can stay hands-free. The Pivothead sunglasses are wearable devices with a built-in camera: the wearer can take photos or videos simply by touching the side panel, and Seeing AI will describe what it sees, for example "a young man on a skateboard jumping in the air".
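Seeing AI's own code has not been published, but captions like the one above are the kind of result returned by the Microsoft Cognitive Services Computer Vision API's "describe" operation. The Python sketch below shows roughly what such a cloud call could look like; the endpoint region, subscription key and file name are illustrative placeholders, not details taken from the Seeing AI project.

```python
# Rough sketch: send a photo to the Cognitive Services Computer Vision
# "describe" endpoint and read back the best caption. The key, region
# and file name below are placeholders, not Seeing AI internals.
import requests

SUBSCRIPTION_KEY = "<your-cognitive-services-key>"  # hypothetical key
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v1.0/describe"

def describe_image(image_path):
    """Upload an image and return the highest-ranked caption,
    e.g. 'a young man on a skateboard jumping in the air'."""
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",
    }
    with open(image_path, "rb") as f:
        response = requests.post(ENDPOINT, headers=headers, data=f.read())
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    return captions[0]["text"] if captions else "No description available."

if __name__ == "__main__":
    print(describe_image("snapshot.jpg"))
```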

The wearer of the smart sunglasses hears the description through the glasses' built-in audio feedback. The smartphone app can also be used to take pictures of objects and written text. For example, a user can take a photo of a menu with his phone; a voice in the app guides him until the image is centred, and the artificial intelligence then reads the contents of the menu back to him. The image capture and analysis software that the glasses (or smartphone) use plugs into Microsoft cloud-based services that help determine what the user is looking at. The intelligence comes from Seeing AI, a research project that helps people who are blind or visually impaired understand who and what is around them. Currently, the image analysis software can tell the difference between men and women, recognise the shape of everyday objects (such as a desk, a building or a plate of food), identify facial expressions (such as happy, angry or confused) and detect whether motion is happening. The project is part of Microsoft's larger push to advance artificial intelligence and incorporate it into more aspects of everyday life. It has not been indicated when, or even if, the project will be released as a commercial app.
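The menu-reading flow described above amounts to optical character recognition followed by text-to-speech. As a rough illustration of the OCR half, a sketch like the one below could send the photo to the Computer Vision OCR endpoint and collect the recognised lines in reading order; again, the key, region and file name are placeholders rather than anything taken from Seeing AI itself.

```python
# Rough sketch: pull printed text (a menu, a poster) out of a photo with
# the Computer Vision OCR endpoint so it can be handed to a text-to-speech
# engine. The key, region and file name are illustrative placeholders.
import requests

SUBSCRIPTION_KEY = "<your-cognitive-services-key>"  # hypothetical key
OCR_ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr"

def read_text(image_path, language="unk"):
    """Return the recognised text of an image as one string,
    one line per recognised line, in reading order."""
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",
    }
    params = {"language": language, "detectOrientation": "true"}
    with open(image_path, "rb") as f:
        response = requests.post(OCR_ENDPOINT, headers=headers,
                                 params=params, data=f.read())
    response.raise_for_status()
    lines = []
    for region in response.json().get("regions", []):
        for line in region.get("lines", []):
            lines.append(" ".join(word["text"] for word in line["words"]))
    return "\n".join(lines)

if __name__ == "__main__":
    print(read_text("menu_photo.jpg"))
```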

Lohrasbi
