Microsoft criticized for creating a tool to create hyper-realistic “deepfakes”

At its latest developers conference, Microsoft announced its contribution to the race for artificial intelligence: software capable of generating new avatars and voices, or replicating the appearance and speech of an existing user, which raises concerns about the trend of creating “deepfakes”, AI-generated videos of events that never took place.

Announced at Microsoft Ignite 2023, Azure AI Speech is trained on human images and allows users to enter a script that can be “read” aloud by a photorealistic avatar created using artificial intelligence.

Users can choose a predefined Microsoft avatar or upload images of a person whose voice and appearance they want to replicate. Microsoft said in a blog post that the tool could be used to create “chatbots, virtual assistants and more.”

The post states: “Customers can choose between a pre-built or custom neural voice for their avatar. If the personalized neural voice and appearance of the same person are used for the avatar, the avatar will closely resemble that person.”

The company said the new text-to-speech software comes with a number of limitations and safeguards intended to prevent abuse.

“As part of Microsoft's commitment to responsible AI, the text-to-speech avatar is designed to protect individual and social rights, enable seamless human-computer interaction, and combat the proliferation of harmful deepfakes and misleading content.”

Customers can upload their own video recording, which the feature uses to generate a synthetic video of the custom avatar speaking.

The announcement quickly sparked criticism that Microsoft had launched a “deepfake maker” that would make it easier to replicate a person's likeness and make them say and do things they never said or did. Microsoft's own president said in May that deepfakes were his “biggest concern” when it comes to advances in artificial intelligence.

In a statement, the company responded to criticism by saying that custom avatars are now a “limited access” tool for which customers must apply for access and be approved by Microsoft. Additionally, users must disclose when artificial intelligence was used to create a synthetic voice or avatar.

“With these safeguards, we help mitigate potential risks and enable customers to integrate advanced voice capabilities into their AI applications seamlessly and securely,” the statement said.

With the rise of AI comes growing concerns about the capabilities of this technology. Sam Altman, CEO of OpenAI, warned Congress that it could be used to interfere in elections and that safeguards needed to be put in place.

Experts say deepfakes pose a particular danger when it comes to election interference. Microsoft this month launched a tool that lets politicians and campaigns authenticate and flag their videos to verify their legitimacy and prevent the spread of deepfakes. Meta announced a policy this week that requires disclosure of the use of AI in political ads and prohibits campaigns from using Meta's own generative AI tools for ads.