This article shares insights from Harsh Varshney, a 31-year-old software engineer at Google in New York. He explores how artificial intelligence (AI) has become an essential part of our lives, while also addressing the privacy challenges it brings.
AI is now part of everyday tasks like research, coding, and even note-taking. But as someone who has worked on privacy at Google since 2023, Harsh understands the risks. He moved from the privacy team to the Chrome AI security team, where he focuses on protecting users from threats such as hackers and phishing attempts.
AI tools generate responses based on whatever users type into them, which raises data privacy concerns: prompts can contain sensitive information, and users need to guard that information against cybercriminals and data brokers.
Here are four habits Harsh believes are crucial for protecting privacy while using AI.
Treat AI Like a Public Postcard
People often feel too comfortable sharing personal information with AI. Even though AI companies may have teams working on privacy, it’s best not to share sensitive details like credit card numbers or Social Security information. Information given to AI chatbots can sometimes be used for future model training, creating risks of accidental data leaks. Harsh thinks it’s wise to treat AI interactions like writing on a postcard—something that could be seen by anyone.
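The postcard rule can be turned into a small habit: scrub obvious identifiers before pasting text into a chatbot. Below is a minimal, illustrative Python sketch; the regex patterns and the redact helper are assumptions added for demonstration, not something Harsh or any specific tool uses, and simple patterns like these will miss plenty of sensitive data.

```python
import re

# Illustrative patterns only: reliably detecting sensitive data is harder
# than a few regexes, so treat this as a starting point, not a guarantee.
PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    """Mask obvious sensitive details before sharing text with an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    prompt = "Refund card 4111 1111 1111 1111, SSN 123-45-6789, reply to me@example.com"
    print(redact(prompt))
    # Refund card [REDACTED CREDIT CARD], SSN [REDACTED SSN], reply to [REDACTED EMAIL]
```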
Know Which ‘Room’ You’re In
There’s a big difference between public AI tools and enterprise models. Public tools may use your conversations to improve their models, while enterprise-grade tools usually don’t. Harsh likens it to chatting in a busy coffee shop versus a private meeting. The distinction matters, especially when discussing sensitive company projects, so it’s safer to use enterprise models for work-related tasks.
Delete Your History Regularly
Both public and enterprise AI tools often keep chat histories. Harsh recommends deleting these regularly, even if you think your data isn’t sensitive. He was surprised when an enterprise chatbot recalled his home address from an email he had previously asked it to edit. Deleting history helps mitigate such risks, and where available, “temporary chat” modes let you hold conversations that aren’t stored or used for training.
Use Well-Known AI Tools
Opt for established AI tools that have clear privacy policies. Harsh prefers Google’s products, OpenAI’s ChatGPT, and Anthropic’s Claude. It’s helpful to read privacy policies, which clarify how your data is used, and if a tool allows you to opt out of data sharing for model improvements, take that option.
AI technology is extremely useful, but it’s crucial to keep data and identities secure while using it. As AI continues to evolve, being aware of these privacy practices will only become more important.
For further information about privacy in AI, you can refer to the Brookings Institution report on privacy and security in AI.
