
OpenAI Accuses New York Times Of Manipulating ChatGPT Prompts


OpenAI, the company behind ChatGPT, has responded to allegations by The New York Times that its AI model replicates the newspaper’s articles.

OpenAI refutes these claims and views this dispute as an opportunity to clarify its operations and intentions in building its technology.

In a blog post, OpenAI said it had been having what seemed to be positive discussions with The New York Times about working together.

The two sides were discussing a partnership in which ChatGPT would display real-time content from The New York Times with proper attribution.

This would allow the Times to reach more readers, and OpenAI users would gain access to the Times’ reporting. OpenAI explained to the Times that, compared with all the other data used to train its models, the Times’ content did not play a significant role.

Then, unexpectedly, on December 27, OpenAI learned of the lawsuit The New York Times had filed against it through an article in the Times itself.

OpenAI was surprised and disappointed by this sudden action. During their discussions, the Times had mentioned noticing some of its content being repeated by ChatGPT.

But despite OpenAI’s commitment to address and fix any problems, the Times didn’t share any specific examples.

In July, when OpenAI discovered that ChatGPT could unintentionally reproduce real-time content, it immediately took the feature down while it worked on a fix.

OpenAI found it interesting that the repeated content the Times noticed seemed to be from very old articles available on various other websites.

OpenAI suspects the Times manipulated the prompts given to ChatGPT, including lengthy excerpts from its own articles, to get the model to repeat that content.

However, OpenAI claims that even with such prompts, its AI doesn’t usually behave the way the Times suggested. This implies that the Times might have directed the model to repeat content or carefully selected examples out of many attempts.

OpenAI stresses that this kind of manipulation is not what its AI is meant for, and that it doesn’t replace the work of The New York Times. Nevertheless, OpenAI says it is continuously improving its systems to prevent such issues and has already made significant progress in its recent models.

Victoria Philip is not only a journalist but also a talented fiction writer. You can reach her on these numbers: 08135853903, 09112869878.
