Microsoft Launches ‘deepfake’ Detector to Spot AI Altered Fake News

The tech giant Microsoft has launched new software that can assist in spotting ‘deepfake’ content. Deepfakes are photos, videos, or audio clips altered using artificial intelligence (AI) to look authentic and are already targeted by initiatives on popular social media platforms such as Facebook and Twitter.

The Video Authenticator software analyzes a photo or each frame of a video, searching for evidence of manipulation that may not be visible to the naked eye. In a blog post, the company said of deepfakes,

They could appear to make people say things they didn’t or to be places they weren’t.

Microsoft has announced a collaboration with the AI Foundation in San Francisco to make the video authentication tool available to political campaigns, news outlets, and others involved in the election process. Fake posts that appear genuine are a major concern ahead of the upcoming US presidential election, particularly after fake social media posts surged during the 2016 election.

Microsoft has also announced that it has built technology into its Azure cloud computing platform that lets creators of photos or videos embed data in the background that can later be used to check whether the imagery has been altered.
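The Azure feature described above is, at its core, a content-provenance scheme: the creator attaches a cryptographic fingerprint to the media, and anyone can later recompute it to detect tampering. Below is a minimal, hypothetical sketch of that idea using a keyed hash; the function names and the shared key are illustrative assumptions, not Microsoft's actual implementation.

```python
import hashlib
import hmac

def sign(data: bytes, key: bytes) -> str:
    """Producer side: derive an HMAC-SHA256 tag from the media bytes
    and a secret key, to be published alongside the media."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str, key: bytes) -> bool:
    """Consumer side: recompute the tag and compare in constant time.
    Any change to the media bytes invalidates the tag."""
    return hmac.compare_digest(sign(data, key), tag)

# Hypothetical example values
key = b"producer-secret-key"
original = b"raw-video-frame-bytes"
tag = sign(original, key)

print(verify(original, tag, key))          # untouched media: True
print(verify(original + b"!", tag, key))   # altered media: False
```

Real provenance systems use public-key signatures and certificates rather than a shared secret, so viewers can verify media without holding the producer's private key, but the tamper-detection principle is the same.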

Microsoft is also working with the University of Washington and others to help people become savvier at distinguishing misinformation from reliable facts.

According to a Microsoft post,

Practical media knowledge can enable us all to think critically about the context of media and become more engaged citizens while still appreciating satire and parody.
