Facebook Improves AI Used to Describe Images for Visually Impaired Users

The social media giant Facebook has announced new improvements to the artificial intelligence (AI) technology it uses to generate descriptions of images posted on the platform for visually impaired users. The technology, known as automatic alternative text (AAT), was first introduced by Facebook in 2016 to enhance the experience of visually impaired users. Until then, a visually impaired user who checked their Facebook news feed and came across a photo would only hear the word “photo” and the name of the person who shared it.


With the help of AAT, visually impaired users have been able to hear descriptions such as “image may contain: three people, smiling, outdoors.”

According to Facebook, the latest version of AAT can provide more detailed descriptions that incorporate activities, landmarks, types of food, and kinds of animals, for example “a selfie of two people, outdoors, the Leaning Tower of Pisa” rather than simply “an image of two people”.
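
Descriptions like these can be thought of as assembled from per-object detections (which supply the counts) plus whole-image tags such as scenes and landmarks. The Python sketch below is purely illustrative and is not Facebook's actual pipeline; the compose_alt_text helper and its inputs are hypothetical.

```python
from collections import Counter

def compose_alt_text(detected_labels, scene_tags):
    """Illustrative only: build an AAT-style phrase such as
    'Image may contain: 2 persons, outdoors, the Leaning Tower of Pisa'
    from per-object detector labels and whole-image tags."""
    counts = Counter(detected_labels)
    parts = []
    for label, n in counts.most_common():
        # Naive pluralization when more than one instance is detected.
        parts.append(f"{n} {label}s" if n > 1 else f"1 {label}")
    parts.extend(scene_tags)
    return "Image may contain: " + ", ".join(parts)

print(compose_alt_text(
    ["person", "person"],
    ["outdoors", "the Leaning Tower of Pisa"]))
# -> Image may contain: 2 persons, outdoors, the Leaning Tower of Pisa
```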

Furthermore, Facebook added that in order to provide more information about the position and count of objects in an image, the company trained its two-stage object detector using an open-source platform developed by Facebook AI Research.

According to the company,

We trained the models to predict locations and semantic labels of the objects within an image. Multilabel/multi-dataset training techniques helped make our model more reliable with the larger label space.
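
The quote does not name the platform, but Facebook AI Research's open-source detection library Detectron2 is a typical way to fine-tune a two-stage detector (region proposals first, then per-region classification and box refinement) that predicts both locations and labels. The sketch below is a minimal example assuming Detectron2 is used; the dataset names and class count are placeholders, not Facebook's configuration.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
# Faster R-CNN is a standard two-stage detector: region proposals,
# then per-region classification and bounding-box refinement.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")

# Hypothetical registered datasets with a larger, merged label space.
cfg.DATASETS.TRAIN = ("alt_text_concepts_train",)
cfg.DATASETS.TEST = ("alt_text_concepts_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1200  # illustrative label-space size

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()  # the trained model predicts boxes (locations) and labels
```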

In the past, other tech firms have made similar efforts to improve the user experience for visually impaired users.

