Meta – the company behind Facebook, Instagram and WhatsApp – has launched its own AI assistant, Meta AI, allowing users to engage in chat-based conversations directly through its platforms. In the US, the assistant is even available as a standalone app. But as users interact with the chatbot, many are unknowingly exposing deeply personal information to the public – raising major concerns around data privacy and transparency.
At the heart of the issue is the app’s built-in “share” feature, which lets users make their AI conversations visible to others. According to a recent TechCrunch investigation, users often don’t realise that sharing a chat can make it publicly accessible – including sensitive, personal, and sometimes disturbing content. In many cases, conversations have been shared alongside users’ full names.
In Switzerland, Meta AI is currently embedded within Meta’s existing apps such as WhatsApp and Facebook. But in markets where it is offered as a standalone product, its privacy behaviour mirrors that of Meta’s social platforms: shared content inherits the visibility settings of the user’s linked account. For example, if someone’s Instagram profile is set to public, their shared AI conversations may be publicly visible too.
This incident highlights the growing disconnect between powerful AI tools and user awareness. From Penta’s perspective, it’s a reminder that even seemingly simple interactions can carry significant compliance risks – especially when privacy settings are opaque and users aren’t clearly informed. As AI becomes more integrated into everyday tools, transparency and data control should not be optional extras – they must be built in by design.
If you’re exploring AI assistants such as Meta AI, consider using a non-personal account with minimal profile information. This limits the exposure of sensitive data if privacy settings are overlooked or misconfigured.