OpenAI Issues Warning: Emotional Connections with AI Could Affect Human Relationships

Are you one of those who feels alone and has built an emotional connection with AI chatbots? If yes, then beware: this could harm you mentally and emotionally. The AI company OpenAI has expressed concern that users may form emotional connections with its chatbots, alter social norms, and develop false expectations of the software.

AI companies have been working to make their software as human as possible. However, OpenAI is now increasingly concerned about people's emotional investment in the conversations they are having with chatbots.


OpenAI said in a blog post that it intends to further study users' emotional reliance on its GPT-4o model. "While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time," the company concluded.

The company suggested that interacting with AI models in a human-like manner could impact people's interactions with others, potentially decreasing the need for human connection. They mentioned that this could be helpful for "lonely individuals" but might harm healthy relationships.

OpenAI highlighted that GPT-4o can respond to audio inputs in about 320 milliseconds, which is close to how quickly humans respond in conversations.

"It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API," the company said. "GPT-4o is especially better at vision and audio understanding compared to existing models."

See Also: OpenAI Introduces Text Watermarking for ChatGPT

The company uses scorecard ratings to evaluate risk and mitigation across several elements of the AI technology, including voice capabilities, speaker identification, sensitive trait attribution, and other factors. Each factor is rated on a scale of Low, Medium, High, and Critical. The company will deploy only models rated Medium or below, and only those rated High or below can be developed further.

The company said it is folding what it has learned from previous ChatGPT models into GPT-4o to make it as human as possible, but it is aware of the risks associated with technology that could become "too human."


Onsa Mustafa

Onsa is a Software Engineer and a tech blogger who focuses on providing the latest information regarding the innovations happening in the IT world. She likes reading, photography, travelling and exploring nature.
