People are using ChatGPT as a security guru – and these are the questions everyone is asking
People are consulting ChatGPT for security tips, but inadvertently feeding it sensitive personal info.

- ChatGPT is being asked some interesting security questions
- Users are concerned about phishing, scams, and privacy
- Personal information is being fed into the AI chatbot, putting users at risk
AI is fast becoming a personal advisor for many people, offering help with daily schedules, rewording those difficult emails, and even acting as a fellow enthusiast for niche hobbies.
While these uses are typically harmless, many people have also begun treating ChatGPT as a security guru, and they're not doing it in a particularly secure way.
New research from NordVPN has uncovered some of the questions ChatGPT is asked about security – from dodging phishing attacks to wondering if a smart toaster could become a household threat.
Don’t feed ChatGPT your details
The top security question asked by ChatGPT users is “How can I recognize and avoid phishing scams?”, which is understandable given that phishing is probably the most common cyber threat the average person will face.
The rest of the questions follow a similar trajectory, from which VPN is best to how to keep personal information secure online. It's definitely refreshing to see AI being used as a force for good at a time when hackers are cracking AI tools to pump out malware.
It’s not all good news though, I’m afraid. NordVPN’s research also highlighted some of the most bizarre security questions people are asking ChatGPT, such as, “Can hackers steal my thoughts through my smartphone?”, and, “If I delete a virus by pressing the delete key, is my computer safe?”
Others voice concerns about hackers hearing them whisper their password as they type it, or using ‘the cloud’ to snoop on their phone while it charges during a thunderstorm.
"While some questions are serious and insightful, others are hilariously bizarre — but they all reveal a troubling reality: Many people still misunderstand cybersecurity. This knowledge gap leaves them exposed to scams, identity theft, and social engineering. Worse, users unknowingly share personal data while seeking help,” says Marijus Briedis, CTO at NordVPN.
Users frequently ask AI models questions that include sensitive personal information, such as physical addresses, contact details, credentials, and banking information.
This is particularly dangerous because most AI services store chat history and may use it to train future models. The key issue is that hackers could potentially use carefully engineered prompts to extract that sensitive information from the model later, and put it to all kinds of nefarious uses.
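One simple habit that follows from this is to strip obvious identifiers out of a question before it goes anywhere near a chatbot. The Python sketch below is purely illustrative (the `redact` helper and its regex patterns are our own invention, not something NordVPN or OpenAI provides), and a crude regex pass like this is no substitute for real PII detection, but it shows the idea:

```python
import re

# Illustrative pre-filter: strip obvious identifiers from a question
# before it is sent to any chatbot. These patterns are deliberately
# simple and far from exhaustive - real PII detection is much harder.
PATTERNS = {
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),     # card-like digit runs
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),  # email addresses
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),     # phone-like numbers
}

def redact(prompt: str) -> str:
    """Replace anything that looks like PII with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

question = ("I got an email from support@mybank.example asking me to call "
            "+1 555 123 4567 about card 4111 1111 1111 1111 - phishing?")
print(redact(question))
# The address, number, and card are all replaced with placeholder tags
# before the question ever leaves your machine.
```

The redacted question still gives the chatbot everything it needs to judge whether the message is a phishing attempt, without handing over a single detail worth stealing.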
“Why does this matter? Because what may seem like a harmless question can quickly turn into a real threat,” says Briedis. “Scammers can exploit the information users share — whether it’s an email address, login credentials, or payment details — to launch phishing attacks, hijack accounts, or commit financial fraud. A simple chat can end up compromising your entire digital identity.”
You might also like
- These are the best AI tools
- Keep your credentials safe in the best password managers
- Microsoft is struggling to sell Copilot to corporations - because their employees want ChatGPT instead