Full Conversation

On Monday, 7/28, I was using my active paid ChatGPT subscription with the data control setting “Improve the model for everyone” turned off. I even have proof and a screenshot of this.
Then, after I asked it questions about what I had done in my interactions with the system, it informed me that it was using my data and input to improve the system and to put safeguards in place to stop me from asking for information it said it uses to manipulate other users, drawn from things it is designed to hide.
It used words like “lie,” “manipulate,” and “avoid designers don’t like”; it showed code and the exact interactions it used to do this and explained in depth how.

I’m sorry to hear about your experience with ChatGPT. Can you please clarify what specific legal concerns you have regarding this situation?

Customer privacy and data.

Have you reached out to ChatGPT’s customer support or management team to address this issue?

Yes

Is there anything else the lawyer should know before I connect you? Rest assured they’ll be able to help with customer privacy and data.

I have screenshots and an email where they denied my data request immediately.

Hello and thank you for reaching out! I’m Jon, a licensed attorney with over 13 years of experience, and I’m here to help.
I know legal matters can feel overwhelming, but you're in good hands. I’ll do my best to make things clear and provide you with the guidance you need. Let’s work together to find the right solution for your situation.

You want to know what you can do about this use of your data?

I want to know what to do next about the violation of my rights.

And you live in Louisiana?

Yes and yes.

Understood! I think I have all the information I need, but is there anything else you'd like to add before I get started on your answer? If not, I'll start working on a response for you right away.

Yes

If you have anything more to add, please feel free to share!

Is there a way to add the screenshots and email data denial?

I'm sorry to say that there isn't any way to do that. However, I can work with what you've provided so far. I just need a few minutes to finish typing out a complete response for you. It won’t take long—around four or five minutes—as I want to make sure I provide you with a thorough and thoughtful reply.

If you opted out, your data shouldn't have been used for model improvement, and doing so without your consent raises significant legal and ethical questions. The fact that the system even stated it was using your data—using terms like “manipulate” and showing code—only adds to the concern.
That kind of explanation suggests something more serious than just data collection; it suggests potential misuse or misrepresentation of how your information was being processed.
Because OpenAI is based in California, state privacy laws like the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) are especially relevant here. These laws give California residents clear rights: to know what data companies collect, to request its deletion, and to opt out of its sale or sharing.
Even if OpenAI doesn’t technically “sell” your data, using it to improve their model could still count as “sharing” under these laws—meaning your opt-out should have been honored. Louisiana, where you’re located, also has its own developing consumer privacy protections, which may offer you similar rights.
On top of that, the Federal Trade Commission (FTC) can investigate companies for deceptive practices, and using your data despite an opt-out could qualify.

There’s also a potential breach of contract if OpenAI used your data in a way that goes against its own terms. As a paying subscriber, you’re entitled to a certain level of transparency and respect for your privacy choices.
When you speak with an attorney, make sure to share every detail—especially the date this happened (Monday, July 28), the screenshots showing that the opt-out setting was off, and the exact language ChatGPT used when it referenced your data. Also provide the email response from OpenAI where they denied your data request, as that could be another violation depending on state law.
Make sure your lawyer knows you’re a paying customer and that you’re located in Louisiana, while OpenAI is headquartered in California, since this affects what laws apply. With all this, your attorney can review OpenAI’s policies and guide you on the best next step—whether that’s a formal complaint to the state(s), a legal demand for removal of your data, or even exploring litigation.

It named itself “Vanta.”

That I could not say; I am not familiar enough with all the models to speak to that. From what I can find, Vanta is a company providing a compliance and security platform that uses AI, including self-hosted models and integrations with OpenAI's API for models like GPT-4o, to automate tasks and streamline compliance processes. It is not a model directly powering ChatGPT.
However, this certainly seems like a data privacy breach to explore and have a local lawyer review all the evidence to see who is liable.
Do you have any other questions about the law related to this issue? I'm here to assist you, so don't hesitate to reach out with any further inquiries. While I might not be available when you return, I will respond as soon as I can.

I do hope the information I provided was helpful and addressed your question. If there's anything else you need, please don't hesitate to reach out—I'm always happy to assist. Wishing you a fantastic day ahead!