OpenAI, the artificial intelligence (AI) developer, has temporarily pulled the “Sky” voice, one of the five voices available for ChatGPT, amid controversy over how the voice was selected and acquired. The company addressed the concerns after actress Scarlett Johansson publicly described her interactions with OpenAI’s CEO and co-founder, Sam Altman.
Johansson said that Altman had approached her with an offer to provide the voice for the GPT-4o model, an offer she declined. Shortly after the Sky voice was released, friends and family alerted her to its striking resemblance to her own. When Altman then contacted her agent to ask her to reconsider, Johansson hired a legal team and stressed the need for transparency at a time when deepfakes threaten the protection of personal likeness.
In response, OpenAI published a detailed explanation of how it selected the voice-over performers for ChatGPT’s voices, emphasizing that Johansson’s voice was not imitated. The company says each actor was given a clear understanding of human-AI voice interactions, OpenAI’s vision, and the technology’s capabilities, limitations, and risks.
Altman also addressed the incident personally, asserting that the similarity between Sky’s voice and Johansson’s was unintentional. His claim was undercut, however, by an earlier one-word post on X reading “her,” widely interpreted as a nod to the film “Her,” in which Johansson voices an AI system that the protagonist falls in love with. This is not the first time Johansson has confronted unauthorized AI use of her identity: she previously filed a lawsuit against another AI company for using her likeness without consent.
Johansson’s case highlights a broader problem of public figures being targeted by AI-generated impersonation. Hollywood actors have petitioned studios over the use of AI to clone their likenesses for future content without their consent. And Susan Bennett, the original voice of Apple’s Siri assistant, has revealed that recordings she made in 2005 were later used by Apple without her knowledge, raising questions about payment and recognition.
The unauthorized use of AI voices is a growing concern, and public figures are increasingly seeking legal protection against such practices. Safeguarding personal likeness in the era of deepfakes has become a pressing issue, prompting individuals like Johansson to take action to protect their rights.