Is it Cake?

It’s interesting to see how quickly “AI”-based generative media (Midjourney, Stable Diffusion and the like) has developed over the last few years. OpenAI recently announced “Sora”, their new text-to-video AI model, which can generate videos up to one minute long.

The example videos in their showreel on YouTube are pretty impressive; go take a look if you haven’t already.

There’s definitely a bit of an “uncanny valley” quality to some of them though. They made me think of the Netflix series “Is it Cake?”, where contestants make a cake that looks like a real object, aiming to fool the judges, who have to pick the fake cake-based item out of a lineup alongside three real versions of the object.

This image shows several tool bags on plinths; one of them is actually a cake made to look like a real tool bag.

These cake versions of real objects usually look incredible, created with impressive cake-making techniques and edible materials; they really are remarkably realistic edible sculptures.

In the show the judges are not allowed to view the objects close up; they can only see them from about 15–20 feet (4.5–6 metres) away. At that distance it is much harder to notice the subtle inconsistencies, e.g. not-so-straight edges or odd surface textures (or smell!), that might give away the illusion. If they were allowed to get close, the flaws would likely be much more apparent.

This image shows three people standing next to podiums; they are the judges on the Netflix show “Is it Cake?”

It feels a bit like this with a lot of generative AI content too. Viewed broadly, especially on a small device screen, it looks incredible, but look closely and carefully and you can spot some of the same not-so-straight edges and/or unusual textures (and sometimes extra fingers or legs!) that make you think, “That’s a cake!”

Over time, though, I’m sure it is going to get increasingly difficult to tell these apart from real images and video as the subtle giveaways, such as soft or fuzzy edges and extra limbs, are ironed out. Even with the current flaws, however, they already present a big challenge when it comes to evaluating the authenticity of the images and videos we see online.

OpenAI does make a clear statement about the “safety” of their tools and aims to prevent them from being used to create content that is hateful or contains misinformation, but the challenge will come when these types of models become widely accessible to companies and organisations that don’t hold to such standards. It’s certainly going to be a bit of a wild west out there.


(Some of the AI footage also reminded me of this old “Mr Soft” TV advert for Trebor mints!)
