OpenAI’s ultra-realistic AI video tool raises concerns

Where will AI go next?

OpenAI, the maker of ChatGPT and image generator DALL-E, took another leap forward on Thursday by revealing its next generative artificial intelligence tool. Named Sora, the company’s latest AI model can turn written commands into high-quality short videos. And the realism of what Sora can do is raising as many societal concerns as security fears ahead of a pivotal year in world politics.   

Over a year after releasing a conversational AI tool that now draws some 1.7 billion monthly visits, OpenAI once again shook the tech world last week when it unveiled the latest revolutionary application in its arsenal. While tech giants like Meta and Google had already experimented with the technology, Sora astounded industry analysts with its never-before-seen photorealism. Only a year ago, a bizarre video of Will Smith eating pasta was the best such technology could produce: a nightmare-inducing clip showing a distorted Smith gulping giant chunks of pasta in the most unnatural way imaginable. But text-to-video technology has now decisively left uncanny valley territory, and the implications of these rapid advances are unsettling.

Sam Altman, the co-founder of OpenAI, took to X to showcase the capabilities of the company’s latest AI system, asking social media users for written prompts to convert into videos, whose lifelikeness came as a shock to even the most tech-savvy of users. From a stylish woman walking down the streets of Tokyo to puppies frolicking in the snow, the Sora model nailed everything from the neon lights reflecting on the pavement to the texture of the dogs’ fur blowing in the wind. An astute observer may still spot strange glitches betraying the AI-generated nature of these artificial clips, such as odd walking patterns and random objects materializing in the frame out of thin air. Still, this demonstration is far more believable and less cartoonish than any previous attempt. Like other ground-breaking technologies, it is bound to improve rapidly, opening up new avenues that could be harnessed to create seemingly undetectable deepfakes.

Combined with technologies like AI-powered voice cloning, a tool such as Sora could be put to nefarious purposes in the wrong hands. In an election year that will see over 4 billion people go to the polls in democracies like the US and India, many voters fear that AI could disrupt the electoral process – and destabilize society as a whole. Tech giants are very much aware of this threat. Earlier this month, some of the world’s leading tech firms announced a combined effort to combat the deceptive use of AI. Signed by companies like Amazon, Microsoft, and TikTok, the accord aims to fight false footage and deepfakes, with its twenty signatories agreeing to deploy technology to counter such voter-deceiving material.

Despite its shortcomings, the agreement is a step in the right direction to tackle harmful content. OpenAI, which signed the accord, reaffirmed its commitment to public safety by refraining from making Sora publicly available until a group of domain experts in areas like bias and misinformation has appraised its potential for misuse. Besides this select “red team,” Sam Altman’s company will also engage with policymakers and artists before releasing Sora to the public. Indeed, artificial intelligence is just as contentious among media professionals and creators.

In the gaming sphere, household names like Ubisoft and Activision Blizzard have openly embraced the technology, and many smaller studios also leverage AI tools to streamline development. Meanwhile, the iGaming sector is investing heavily in AI to detect and counter fraudulent activity – and keep users safe online. Incidentally, the best online casinos in Canada prioritize security above all other criteria. Players looking to try their luck at poker or slot machines can visit specialized websites to find reliable platforms to sign up with. Industry experts only recommend licensed casinos that comply with strict regulations, and they assess everything from a website’s payment options to its customer support and bonus policy to help users make an informed choice.

The use of artificial intelligence has many artists fearing for their craft, though. While Hollywood’s actors and screenwriters triumphed over AI in a strike that paralyzed the industry for months, this victory could prove short-lived. And the consequences could be even more dire in fields that trade in truth, not fiction. Courtesy of tools like Sora, realistic-looking fabrications could soon flood the web. Worse still, not every tech company will share OpenAI’s scruples regarding disinformation. So, where will AI go next? That lingering question will need to be answered decisively.