
Girl Murdered in 2006 Resurrected as AI Chatbot – Her Family Had No Idea

  • Jennifer Ann Crecente, a high school girl murdered in 2006, recently reappeared as a chatbot on Character.ai.
  • Her father told BI he discovered the bot on Wednesday – he never gave consent to use her likeness.
  • The bot has since been removed, but its existence raises ethical questions about AI.

Drew Crecente woke up at 6:30 a.m. Wednesday to find a Google alert on his phone about his daughter.

It’s been 18 years since Jennifer Ann, a high school senior and Crecente’s only child, was killed by her ex-boyfriend in Austin. Grief-stricken, Crecente started a nonprofit organization in her name that has been working since 2006 to raise awareness about teen violence.

But Crecente has now learned that his daughter's name and likeness resurfaced online, this time as an artificial intelligence chatbot on Character.ai, a platform run by a San Francisco startup that struck a $2.7 billion deal with Google in August.

A wave of emotions hit Crecente — anger, confusion and disgust all at once, he told Business Insider.

“Part of it was the pain,” Crecente said. “Because here I am, once again having to deal with this terrible trauma that I’ve been dealing with for a long time.”

Crecente had no idea who created the chatbot or when. He only knew that Google had indexed the bot that morning and sent him a 4:30 a.m. alert from a search he had set up to track any mention of his daughter or his nonprofit.

Character.ai chatbots are typically created by users, who can upload names, photos, greetings, and other information about the person.

In Jennifer Ann’s case, the bot used her name and her yearbook photo, describing her as a “knowledgeable and friendly AI character who can provide insight on a wide range of topics.”

By the time Crecente discovered the bot, a counter on its profile showed it had already been used in at least 69 chats, according to a screenshot he sent to BI.


Jennifer Ann Crecente's now-deleted AI chatbot page on Character.ai.

Jennifer Ann’s profile page lists her as a video game journalist. Her uncle, Brian, is a reporter and games editor.

Drew Crecente



The site also listed Jennifer Ann as a “journalism expert” with expertise in video game news. Her uncle, Brian Crecente, founded the gaming news site Kotaku and co-founded the competing site Polygon.

Drew Crecente said he contacted Character.ai through its customer support form, asking the company to remove the chatbot impersonating Jennifer Ann and keep all data about who uploaded the profile.

“I wanted to make sure that they took steps so that no other account could be created in the future using my daughter’s name or my daughter’s likeness,” he said.

He received an automated response containing a case number but no other information.

His brother Brian posted an angry message about the chatbot on X that morning, asking his nearly 31,000 followers for help to “stop this kind of terrible practice.”

Character.ai responded to Brian’s post on X an hour and a half later, saying that the Jennifer Ann chatbot had been removed because it violated the company’s impersonation policies.

But Crecente says he has heard nothing but silence from Character.ai since submitting his own request on Wednesday.

“That’s part of what’s so upsetting about this is it’s not just about me or my daughter,” Crecente said. “It’s about all those people who might not have a platform, might not have a voice, might not have a sibling who has experience as a journalist.”

“And because of that, they are hurt, but they have no recourse,” he added.

Character.ai spokeswoman Cassie Lawrence confirmed to BI that the chatbot had been deleted and said the company “will review whether further action is warranted.”

“Character.ai takes safety seriously on our platform and moderates characters both proactively and in response to user reports. We have a dedicated trust and safety team that reviews reports and takes action in line with our policies,” she said.

She added that Character.ai uses “industry standard blocklists and custom blocklists” to prevent impersonation.

However, Crecente said the company should have apologized and shown it would do more to protect his daughter’s identity. He is now exploring legal options.

“It’s indescribable that a company with so much money seems so indifferent to retraumatizing people,” he said.

Using AI to resurrect the dead

Artificial intelligence has also been used to create avatars of deceased people, often by mourners who hope the technology can help them cope with the loss of a loved one. But the practice raises ethical questions about the consent of the deceased, especially when the “resurrected” person died before the advent of AI.

Crecente’s case is yet another example of the new legal and ethical territory that AI has ushered into the world, Vincent Conitzer, head of AI technical engagement at Oxford University’s Institute for Ethics in AI, told BI.

Conitzer said questions remain about the extent to which AI companies should be held responsible for content created by their users.

“This isn’t really an imitation in the sense that it seems transparent that it’s an AI model,” Conitzer said of the chatbot imitating Jennifer Ann. “But it’s not, like, a parody. What rights do we have to our own style, mannerism and worldview?”

As the world debates these AI dilemmas, the consequences are already deeply affecting people like Crecente.

Sue Morris, director of bereavement services at the Dana-Farber Cancer Institute in Boston, told BI that unexpectedly seeing the likeness of a deceased loved one can still hit hard many years after their death because our memories of them remain connected in the brain.

“That he was not aware that his daughter’s likeness was being used would undoubtedly have been a huge shock and would have added to the feelings of confusion and anger that he was experiencing,” Morris said of Crecente.

Character.ai: Hugely popular and tied to Google

Character.ai was founded in November 2021 by former Google engineers Noam Shazeer and Daniel De Freitas. Shazeer was a lead author of the tech giant’s seminal Transformer paper and had worked on an AI-powered chatbot there, but left with De Freitas after Google declined to release the product at the time.


Noam Shazeer and Daniel De Freitas, co-founders of Character.ai, standing next to a ladder.

Noam Shazeer and Daniel De Freitas, co-founders of Character.ai.

Winni Wintermeyer/Getty Images



The idea behind their new startup was to let the public create their own bots, which the platform calls Characters. These now include a multitude of user-created characters modeled on public figures such as Elon Musk, rapper Nicki Minaj, and actor Ryan Gosling.

Character.ai’s October 2022 beta launch saw hundreds of thousands of users in its first three weeks of testing, according to The Washington Post. In March 2023, the company raised $150 million at a $1 billion valuation, led by Andreessen Horowitz.

Google Play Store statistics show that at the time of writing, the app has been downloaded over 10 million times on Android devices alone.

In August, Google paid the startup $2.7 billion to rehire Shazeer, De Freitas, and about 20% of its staff, and to license the models Character.ai had built so far.

Following the team’s departure, the company’s new interim CEO, Dominic Perella, told the Financial Times on Wednesday that the firm would focus entirely on its chatbot product, abandoning its earlier effort to build its own large language models.

Google did not respond to a request for comment from Business Insider.
