OpenAI Plans To Leash AI To Deter Election Misinformation in 2024
According to the latest reports, OpenAI has laid out a plan to prevent its AI tools from being used to spread election misinformation. Voters in more than 50 countries are set to cast their ballots in national elections this year. That is why the ChatGPT maker is updating existing policies and introducing new initiatives to prevent the misuse of its wildly popular generative AI tools. These tools can create novel text and images in seconds, but they can also be used to spread misleading messages or convincing fake photographs.
AI Leashed! Here’s How OpenAI Plans to Deter Election Misinformation in 2024
OpenAI stated:
“We plan to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency.”
The company will ban people from using its technology to create chatbots that imitate real candidates or governments. The firm will not allow its users to build applications for political campaigning or lobbying until more research is done on the persuasive power of its technology.
The company will also partner with the National Association of Secretaries of State to direct ChatGPT users who ask logistical questions about voting to accurate information on that group’s nonpartisan website, CanIVote.org. Mekela Panditharatne, counsel in the democracy program at the Brennan Center for Justice, stated:
“OpenAI’s plans are a positive step toward combating election misinformation, but much will depend on how they are implemented.”
OpenAI’s ChatGPT and DALL-E are among the most powerful generative AI tools to date. Several other companies offer similarly sophisticated technology, but they do not have comparable election misinformation safeguards in place. OpenAI says its AI tools will emphasize factual accuracy, reduce bias, and decline certain requests. DALL-E, for instance, can refuse requests to generate images of real people, including candidates. For images created with DALL-E 3, the company plans to implement an encoding approach that records details about the content’s provenance. OpenAI is also testing a new tool that can detect whether an image was generated by DALL-E.
ChatGPT, meanwhile, will offer users greater transparency by providing access to real-time global news reporting with attributions and links. Similarly, social media companies such as YouTube and Meta have introduced AI labeling policies, though it remains to be seen whether they can reliably catch violators. So, let’s wait and watch what comes next. Stay tuned!