OpenAI, the not-so-open AI company, has gone full mob boss and sent threatening emails to anyone asking probing questions about its latest AI models
In a seeming re-enactment of the classic "you ask too many questions" trope delivered before movie executions, OpenAI has revealed that it is not so open after all. The AI chatbot firm has started sending threatening emails to users who ask questions that probe too deeply into its new model, codenamed "Strawberry".
Ars Technica reports that some users who use certain phrases or ask certain questions of o1-preview or o1-mini receive a warning email telling them to "stop this activity and make sure you are using ChatGPT according to our Terms of Service and our Usage Policy," and cautioning that "additional violations of this policy could result in the loss of access to GPT-4o With Reasoning."
Thebes, an X user, claims to have received this warning after using the words "reasoning trace" in a prompt. Riley Goodside, a prompt engineer at Scale AI, received an in-chat policy violation warning for telling the model not to give them any information about its "reasoning trace". This strongly suggests that certain probing phrases are flagged regardless of intent.
It seems OpenAI doesn't want to be transparent about its latest models' reasoning. If you didn't know, these models work through problems step by step, producing a "chain of thought" before answering. OpenAI hides that raw chain of reasoning; users only ever see a filtered summary of it.
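This concealment is visible directly in the API. Here's a minimal sketch of what a caller actually gets back, assuming the official OpenAI Python SDK and the usage fields described at the o1 launch (the exact `completion_tokens_details.reasoning_tokens` field depends on your SDK version, so treat it as an assumption):

```python
# Sketch: querying an o1-series model. Assumes the official OpenAI Python
# SDK (`pip install openai`) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
)

# What comes back is the polished answer, not the hidden reasoning.
print(response.choices[0].message.content)

# The usage metadata reveals that hidden reasoning happened (and was
# billed), but never its content. Field name is an assumption based on
# the usage object documented at the o1 launch.
details = response.usage.completion_tokens_details
print(f"Hidden reasoning tokens billed: {details.reasoning_tokens}")
```

In other words, you pay for the chain of thought by the token, but the only trace of it you're allowed to see is a number.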
OpenAI said that the decision to hide these chains of thought was made after "weighing multiple factors, including user experience and competitive advantage, as well as the option to pursue chain of thought monitoring."
This is all a reminder that, while OpenAI's parent is technically a nonprofit, the reality is murkier. The company in fact has a hybrid part-nonprofit, part-commercial structure; remember, Elon Musk's lawsuit against OpenAI claimed that it departed from its original founding agreement when it started to seek profit. It's not surprising that a somewhat for-profit company would want to maintain its competitive advantage by concealing its trade secrets. In this case, those secrets are chains of thought.
It also serves as a reminder that users' chats aren't completely private, something they may forget. I've worked in the past training AI models of this type and can confirm that many people "on the inside" can view user conversations for training and other purposes.
When it comes to OpenAI, you're probably better off being safe than sorry.