The ChatGPT iPhone app has serious privacy issues you need to know about

An iPhone showing the ChatGPT logo on its screen

OpenAI, the company behind ChatGPT, recently brought its artificial intelligence bot to phones with the ChatGPT iPhone app. The mobile version of the chatbot has already climbed the ranks and become one of the most popular free apps on the App Store right now. However, before you jump headfirst into the app, beware of getting too personal with the bot and putting your privacy at risk.

Plenty of ‘imposter’ apps have been floating around app stores trying to capitalize on the generative AI boom, so it makes sense that OpenAI would want to get its own app out into the world.

The official app is free (the paid ChatGPT Plus tier is supported in the app, but a subscription isn’t required). This is a huge plus considering other AI chatbot apps on the market charge a weekly subscription fee, making them super costly if not outright scams. You can also ‘talk’ to ChatGPT using speech-to-text, which makes a lot of sense for a conversational AI product.

The iOS app does come with an explicit trade-off that users should be aware of. Most of us know that ChatGPT does just, ahem, make things up sometimes, so there’s plenty of room for its responses to improve. But when you open the app on your phone, you get an interesting warning about sharing personal information: “Anonymized chats may be reviewed by our AI trainer to improve our systems.”

OpenAI’s privacy policy says that when you “use our services, we may collect personal information that is included in the input, file uploads, or feedback you provide”. In other words, if your questions to ChatGPT contain any personal information (read: facts about you that you’d rather not share with a living soul), that information is sent to OpenAI and could be read by a human reviewer. And that’s a big deal.

Why does this matter?

The company says conversations are anonymized before they’re seen by humans, but that only removes identifying information from the metadata of the file, not from the actual content of your prompt. So, if you use ChatGPT for things such as help with your anger issues, as a safe space to vent, to seek advice, or to edit and enhance personal documents and texts, all of that is being sent to – and possibly viewed by – humans at OpenAI.

So, you have no idea whether OpenAI is actually reading your conversations, and you have no option to opt out. Realistically, the company can’t read every conversation from every user, but it is something you should keep in mind as you continue to use the app.

Now that users can access the bot on their portable devices (and not just on their computers), they’re more likely to pull it up throughout the day, asking it questions about friends or family or referencing the things they see and interact with daily. It’s quite a different experience from just sitting down and having a play with ChatGPT on your laptop – and it certainly increases the likelihood of users revealing more personal information than they mean to.

Of course, we’re not saying ChatGPT is spying on you and harvesting your information for nefarious reasons, but it’s sensible to be mindful about what you put into your chats with the bot. Artificial intelligence is still an emerging technology and should be treated with caution until we’ve all become more accustomed to having these chatbots in our lives. If the founder of OpenAI is campaigning for regulations on his own product, the rest of us should definitely proceed with caution.