OpenAI apologizes to Johansson, denies voice based on her
By Glenn CHAPMAN
San Francisco (AFP) May 21, 2024

OpenAI chief Sam Altman apologized Tuesday to Scarlett Johansson after the movie star said she was "shocked" by a new synthetic voice released by the ChatGPT-maker, but he insisted the voice was not based on hers.

At issue is "Sky," a voice OpenAI featured last week in the release of its more humanlike GPT-4o artificial intelligence technology.

In a demo, Sky was at times flirtatious and funny, seamlessly jumping from one topic to the next, unlike most existing chatbots.

The technology -- and the sound of the voice -- quickly drew comparisons to the Johansson-voiced AI character in the 2013 film "Her."

Altman has previously pointed to the Spike Jonze-directed movie -- a cautionary tale about the future in which a man falls in love with an AI chatbot -- as inspiration for where he would like AI interactions to go.

He fueled further speculation last week with a single-word post on X, formerly Twitter: "her."

"The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers," Altman said in a statement on Tuesday in a response to the controversy.

"We cast the voice actor behind Sky's voice before any outreach to Ms. Johansson.

"Out of respect for Ms. Johansson, we have paused using Sky's voice in our products. We are sorry to Ms. Johansson that we didn't communicate better."

The statement came after Johansson on Monday expressed outrage, saying she was "shocked, angered, and in disbelief that Mr Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets couldn't tell the difference."

She said Altman had offered in September to hire her to work with OpenAI to create a synthetic voice, saying it might help people engage with AI, but she declined.

- Risk team disbanded -

In a blogpost, the company explained that it began working to cast the voice actors in early 2023, "carefully considering the unique personality of each voice and their appeal to global audiences."

Some of the characteristics sought were "a voice that feels timeless" and "an approachable voice that inspires trust," the company said.

The five final actors were flown to San Francisco to record in June and July, it said, with their voices launched into ChatGPT last September.

"To protect their privacy, we cannot share the names of our voice talents," OpenAI said.

"We believe that AI voices should not deliberately mimic a celebrity's distinctive voice."

So far in the AI frenzy, most tech giants have been reluctant to overly humanize chatbots, and some observers expressed concern that OpenAI's demo last week had gone too far.

Microsoft Vice President Yusuf Mehdi cautioned that AI "should not be human."

"It shouldn't breathe. You should be able to...understand (it) is AI," he told AFP.

The Johansson dispute came just days after OpenAI admitted it disbanded a team devoted to mitigating the long-term dangers of artificial intelligence.

OpenAI began dissolving the so-called "superalignment" group weeks ago, integrating members into other projects and research.

OpenAI says AI is 'safe enough' as scandals raise concerns
Seattle (AFP) May 21, 2024 - OpenAI CEO Sam Altman defended his company's AI technology as safe for widespread use, as concerns mount over potential risks and a lack of proper safeguards for ChatGPT-style AI systems.

Altman's remarks came at a Microsoft event in Seattle, where he spoke to developers just as a new controversy erupted over an OpenAI AI voice that closely resembled that of the actress Scarlett Johansson.

The CEO, who rose to global prominence after OpenAI released ChatGPT in 2022, is also grappling with questions about the safety of the company's AI following the departure of the team responsible for mitigating long-term AI risks.

"My biggest piece of advice is this is a special time and take advantage of it," Altman told the audience of developers seeking to build new products using OpenAI's technology.

"This is not the time to delay what you're planning to do or wait for the next thing," he added.

OpenAI is a close partner of Microsoft and provides the foundational technology, primarily the GPT-4 large language model, for building AI tools.

Microsoft has jumped on the AI bandwagon, pushing out new products and urging users to embrace generative AI's capabilities.

"We kind of take for granted" that GPT-4, while "far from perfect...is generally considered robust enough and safe enough for a wide variety of uses," Altman said.

Altman insisted that OpenAI had put in "a huge amount of work" to ensure the safety of its models.

"When you take a medicine, you want to know what's going to be safe, and with our model, you want to know it's going to be robust to behave the way you want it to," he added.

However, questions about OpenAI's commitment to safety resurfaced last week when the company dissolved its "superalignment" group, a team dedicated to mitigating the long-term dangers of AI.

In announcing his departure, team co-leader Jan Leike criticized OpenAI for prioritizing "shiny new products" over safety in a series of posts on X (formerly Twitter).

"Over the past few months, my team has been sailing against the wind," Leike said.

"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there."

This controversy was swiftly followed by a public statement from Johansson, who expressed outrage over a voice used by OpenAI's ChatGPT that sounded similar to her voice in the 2013 film "Her."

The voice in question, called "Sky," was featured last week in the release of OpenAI's more human-like GPT-4o model.

In a short statement on Tuesday, Altman apologized to Johansson but insisted the voice was not based on hers.
