What is an AI assistant and is it safe to use?
An AI assistant is a computer program that can answer questions and hold conversations in plain language, like Siri on your iPhone, Alexa on an Amazon Echo, or ChatGPT online. They are generally safe for everyday questions, but there are important limits to keep in mind. AI assistants can make mistakes; they sometimes give confident answers that are wrong. Never share personal information such as your SIN, banking details, passwords, or medical information with an AI assistant. Do not rely on AI for medical diagnoses or legal advice; use it as a starting point, then verify with a professional. Most AI assistants also do not remember previous conversations, so each session starts fresh.
What to do
- Siri is already on your iPhone or iPad — just say 'Hey Siri' or press and hold the side button.
- Use AI assistants for everyday questions: recipes, weather, spelling, and general information.
- Never share personal information: SIN, passwords, banking details, or health card numbers.
- Do not trust AI for medical diagnoses, legal advice, or financial decisions — verify with a professional.
- Remember that AI can be wrong — always confirm important information from a reliable source.
- If something an AI says feels wrong or alarming, seek a second opinion from a real person.
The 3-Second Rule
Before you act on anything an AI tells you, pause for three seconds and ask: does this matter enough to double-check? Think of an AI assistant as a very well-read friend who sometimes gets things wrong. Useful for general questions, but always verify important information.
Important Warning
Many AI assistants cannot look up real-time information, such as today's news or current prices, unless the product specifically says so. Medical, financial, or legal advice from an AI should always be verified by a qualified professional.