When AI is (badly) informing health policy and enabling bad actors, it’s time to regulate the tech companies
Nearly a year into parenting, I’ve relied on advice and tricks to keep my baby alive and entertained. For the most part, he’s been agile and vivacious, and I’m beginning to see an inquisitive character develop from the lump of coal that would suckle from my breast. Now that he’s started nursery (or what Germans refer to as Kita), other parents in Berlin, where we live, have warned me that an avalanche of illnesses is about to descend. So during this particular stage of uncertainty, I did what many parents do: I consulted the internet.
This time, I turned to ChatGPT, a source I had vowed never to use. I asked a straightforward but fundamental question: “How do I keep my baby healthy?” The answers were practical: avoid added sugar, monitor for signs of fever and talk to your baby often. But the part that left me wary was its parting request: “If you tell me your baby’s age, I can tailor this more precisely.” Of course, I should be informed about my child’s health, but given my growing scepticism towards AI, I decided to log off.
Edna Bonhomme is a historian of science