Wednesday, 6 April 2016 01:01
AFP: Facebook on Monday (5) began using artificial intelligence to help people with visual impairments enjoy photos posted on the leading social network.
Facebook introduced machine learning technology trained to recognise objects in pictures and then describe photos aloud.
“As Facebook becomes an increasingly visual experience, we hope our new automatic alternative text technology will help the blind community experience Facebook the same way others enjoy it,” said accessibility specialist Matt King.
The feature was being tested on mobile devices powered by Apple iOS software with screen readers set to English.
Facebook planned to expand the capability to devices with other kinds of operating systems and add more languages, according to King, who lost his vision as a US college student studying electrical engineering.
The technology works across Facebook’s family of applications and is based on a “neural network” taught to recognise things in pictures using millions of examples.
More than two billion pictures are shared daily across Facebook, Instagram, Messenger and WhatsApp, King said.
“While this technology is still nascent, tapping its current capabilities to describe photos is a huge step toward providing our visually impaired community the same benefits and enjoyment that everyone else gets from photos,” King said.
The Silicon Valley-based social network said that it was moving slowly with the feature to avoid offensive or embarrassing gaffes when automatically describing what is in pictures.
Words used in descriptions included those related to transportation, outdoor settings, sports, food, and people’s appearances.
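The cautious approach described above amounts to reporting only concepts the model is confident about and composing them into a short spoken phrase. Facebook has not published its implementation; the sketch below is purely illustrative, and the names `build_alt_text` and `CONFIDENCE_THRESHOLD` are hypothetical, as is the sample concept list.

```python
# Illustrative sketch only; not Facebook's actual code.
# Assumes a recogniser returns (concept, confidence) pairs for a photo.

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff: skip uncertain concepts

def build_alt_text(detections):
    """Turn (concept, confidence) pairs into a screen-reader-friendly string."""
    confident = [concept for concept, score in detections
                 if score >= CONFIDENCE_THRESHOLD]
    if not confident:
        return "Image"  # fall back to a generic label rather than guess
    return "Image may contain: " + ", ".join(confident)

# Example: the low-confidence concept ("pizza") is omitted entirely.
detections = [("two people", 0.95), ("smiling", 0.88),
              ("outdoor", 0.91), ("pizza", 0.40)]
print(build_alt_text(detections))
```

Dropping low-confidence concepts, instead of always describing everything the model sees, is one simple way to trade coverage for the kind of gaffe avoidance the company described.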
The Facebook technology made its debut less than a week after Microsoft enticed software developers with a suite of offerings that let them tap into the power of cloud computing, big data, and machine learning.
The Cortana Intelligence Suite boasted the ability to let applications see, hear, speak, understand and interpret people’s needs.
Microsoft said that a “Seeing AI” research project was underway to show how those capabilities could be woven into applications that help people who are blind or visually impaired learn what is around them, for example by scanning scenes with smartphone cameras or specially equipped eyewear.