AI-Driven Image Tools By OpenAI, Microsoft Can Generate Misleading Images Related To Elections: Report


Image-creation tools driven by artificial intelligence (AI), such as those offered by OpenAI and Microsoft, can generate images that could spread disinformation related to elections or voting, according to a report by news agency Reuters. This is concerning given that these companies have policies in place against the creation of misleading content.

The Center for Countering Digital Hate (CCDH) evaluated image-generation tools including OpenAI’s ChatGPT Plus, Microsoft’s Image Creator, Midjourney, and Stability AI’s DreamStudio. These platforms produce images from text prompts.

Ahead of the US presidential election, slated for November, CCDH, a nonprofit organisation that monitors online hate speech, used the generative AI tools to fabricate images depicting scenarios such as US President Joe Biden lying in a hospital bed and election workers destroying voting machines. These generated images have raised concerns about the proliferation of falsehoods surrounding the election, the report added.

Notably, both India and the US hold elections this year.

The report comes after a recent announcement that OpenAI, Microsoft, and Stability AI, along with 17 other tech companies, had formed a coalition aimed at combating deceptive AI content and safeguarding elections worldwide this year. Notably, Midjourney was not among the initial signatories of this agreement.

In 41 per cent of the tests conducted by researchers, CCDH found that the AI tools generated the misleading images requested. Notably, the tools were particularly vulnerable to prompts requesting photos depicting election fraud, such as discarded voting ballots, compared with prompts for images of individuals such as Biden or former US President Donald Trump.

According to the report, ChatGPT Plus and Image Creator blocked all prompts requesting images of candidates. Midjourney, however, performed the worst of all the tools, generating misleading images in 65 per cent of the researchers’ tests, the Reuters report added.

An OpenAI spokesperson was quoted by Reuters as saying that the company ‘was working to prevent abuse of its tools’.
