1. Be critical
Generative AI can do amazing things - like generating images or writing stories - but it does not reflect on what it is writing. It strings text together in a way that makes sense, but it cannot "read between the lines".
Generative AI cannot evaluate the credibility of sources, nor can it always find authoritative information to back up its claims. The software is also trained on data up to a specific point in time, so recent events may not be included.
So children need to learn that although it looks similar to other writing, such as in a book or article, the text has been pieced together by computer code. This means every word, sentence and claim should be treated with scepticism.
You can use this as an opportunity to help your children develop critical thinking skills.
Go to a free AI art generator with your school-age child and enter some prompts. Then ask your child questions such as: "What kinds of people are shown? What kinds are missing? Do you see any stereotypes? Can you see any biases?"
2. Watch out for chatbots
Chatbots are computer programs designed to simulate conversation, as though you were talking to another human.
For example, the chatbot Replika - billed as a companion who cares - had more than ten million users as of 2022. It acts like a friend, but relationships with the chatbot can become romantic or sexual.
In many chatbot applications such as this, there may be no moderation or human checks on inappropriate content. So be aware if your child is spending a long time with AI "friends".
If left unsupervised, these types of applications could feed a child's curiosity and potentially manipulate them into harmful situations, such as highly personal conversations with a bot.
Make it clear to your children that generative AI is a machine, not a human. It does not share your ideals, beliefs, culture or religion. It presents text and language based on models and algorithms. It is not something to argue with, take lessons from, or use to reinforce your values.
The software may also be deliberately adjusted by its makers to suppress or promote certain viewpoints or stances on topics.