Our newsroom AI policy

Earlier this year, we committed to publishing a reader-facing explanation of how Ars Technica uses, and doesn't use, generative AI. Translating our internal policy into a public document that meets our standards for clarity and precision took longer than I'd have liked, but I wanted to get it right rather than get it out fast. That document is now live; you can find it below (and linked in the footer of most pages on the site).

Our approach comes from two convictions: that AI cannot replace human insight, creativity, and ingenuity, and that these tools, used well, can help professionals do better work. From those starting points, it was always clear what we wouldn’t allow. AI would not become the author, the illustrator, or the videographer. These tools are best used by professionals in the service of their profession, not as a clever end run around it, and certainly not as a path to eventually replacing it.

The short version: Ars Technica is written by humans. Our reporting, analysis, and commentary are human-authored. Where we use AI tools in our workflow, we use them with standards and oversight, and humans make every editorial decision. Our policy covers how we handle text, research, source attribution, images, audio, and video.
