Leaked Claude Code Shows Anthropic Building Mysterious “Tamagotchi” Feature Into It

After Anthropic accidentally leaked the source code to its blockbuster Claude Code tool, netizens swiftly pounced, plowing through its more than 512,000 lines of code and uncovering numerous curiosities sprinkled throughout.

In an extensive thread in the r/ClaudeAI subreddit, one user said they found a “Tamagotchi”-like feature buried in the code, referring to the handheld digital pets that you need to keep checking in on to keep them alive.

“There’s an entire pet system called /buddy. When you type it, you hatch a unique ascii companion based on your user id,” the user claimed. “The pet sits beside your input box and reacts to your coding.”

The user said they found 18 different pet species, including a duck, dragon, capybara, and a so-called “chonk,” along with a rarity system, resembling those found in gacha games, that assigns each user a pet based on chance.

Will Tamagotchis inside Claude be a mainstay? Likely not: the user also found an included string reading “friend-2026-401,” which almost certainly means Anthropic intended the feature as an April Fools’ one-off.

That wasn’t the only item of note internet sleuths found. They also uncovered a feature called “kairos” that purportedly can serve as an always-on AI agent that constantly runs in the background and can take actions on your behalf without you having to ask. It can even send push notifications to your phone or desktop to get your attention, users who viewed the code claimed.

Others said they found an “undercover” mode to mask the fact that Claude is an AI when contributing code in public repositories, as well as a mood tracking feature that measures a coder’s “frustration” levels based on their messages and clues like swear words. Someone also unearthed a message one of Anthropic’s coders left in the code, in which they admit that “memoization here increases complexity by a lot, and im not sure it really improves performance.”

In all, there are no smoking guns here, but the leak provides an intriguing peek behind the curtain, as well as easy fodder for any competitors looking to reverse engineer the company’s tech.

For Anthropic, it’s undoubtedly an embarrassing blunder. The code base appears to have been leaked after what’s known as a source map file was accidentally left in a public release of version 2.1.88 of the company’s Claude Code npm package. A source map links bundled code back to the original source, The Register explains, and one resourceful programmer used it to reconstruct Claude Code’s source, backing the whole thing up on GitHub.
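The mechanism is simple enough to sketch. A source map is a JSON file that, through its optional `sourcesContent` field, can carry the original, pre-bundling source files verbatim; anyone who finds the `.map` file can read them straight back out. The file names and code below are hypothetical, invented purely for illustration, and are not taken from the leak:

```javascript
// A minimal sketch of a source map. The "sourcesContent" field may embed
// the full text of the original source files alongside the bundle.
// All names and contents here are made up for illustration.
const exampleMap = {
  version: 3,                 // source map spec version
  file: "cli.js",             // the bundled output file
  sources: ["src/buddy.ts"],  // hypothetical original file path
  sourcesContent: [
    "export function hatchPet(userId) { /* ... */ }"
  ],
  mappings: "AAAA"            // VLQ-encoded position mappings (truncated)
};

// Recovering the originals is just a matter of reading sourcesContent:
const recovered = exampleMap.sources.map((name, i) => ({
  name,
  code: exampleMap.sourcesContent[i],
}));

for (const file of recovered) {
  console.log(`--- ${file.name} ---\n${file.code}`);
}
```

Bundlers emit maps like this when configured to, which is why production releases typically strip the `sourcesContent` field, or the map file entirely, before publishing.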

Anthropic scrambled to get the exposed source code pulled by issuing copyright takedowns, though at this point it may already be out of the company’s hands.

As to how the map file slipped through the cracks in the first place, Anthropic officially blamed “human error” and stressed it was not a “security breach.”

Notably, however, the leak comes after Anthropic figures have consistently boasted about how much of Claude’s code is now produced with the help of the AI itself. Recent incidents at Amazon and a cybersecurity blunder at Meta, all caused by AI models, raise the possibility that Anthropic’s own tool may have played a role in this one, too.

More on AI: Anthropic Just Leaked Upcoming Model With “Unprecedented Cybersecurity Risks” in the Most Ironic Way Possible

The post Leaked Claude Code Shows Anthropic Building Mysterious “Tamagotchi” Feature Into It appeared first on Futurism.
