The Internet’s newest obsession is Kate Middleton, specifically her whereabouts following her unexpected January surgery. Despite initial assurances that the princess would not resume her duties until Easter, the world couldn’t help but speculate and theorize about Kate’s health and the state of her marriage to Prince William. It didn’t help, of course, that the only photos of the princess released since then have been, shall we say, less than definitive. There were grainy photos taken from afar, and of course an infamous family photo that was later discovered to have been manipulated. (A post on X (formerly Twitter) attributed to Kate Middleton was later published apologizing for the edited photo.)
Finally, on Monday, The Sun published a video of Kate and William walking around a farm shop, which should have put the matter to rest. But the video has done little to reassure the most ardent conspiracy theorists, who believe it is of too low a quality to confirm that the woman walking is really the princess.
In fact, many of these theorists go so far as to suggest that what we see proves this is not Kate Middleton. To settle it, some have turned to AI-based photo enhancement software to sharpen the pixels of the video frames and discover once and for all who was walking with the future King of England:
[Embedded post from X; the tweet may have been deleted.]
There you go, people: this woman is NOT Kate Middleton. It’s… one of those three people. Case closed! Or wait, this is actually the woman from the video:
[Embedded post from X; the tweet may have been deleted.]
Eh, maybe not. Jesus, these results are not consistent at all.
This is because these AI “enhancement” programs don’t do what users think they do. None of the results prove that the woman in the video is not Kate Middleton. They only prove that artificial intelligence cannot tell what a pixelated person actually looks like.
I don’t necessarily blame anyone who thinks AI has that power. After all, over the last year we’ve seen AI image and video generators do extraordinary things: if something like Midjourney can render a realistic landscape in seconds, or if OpenAI’s Sora can produce a realistic video of non-existent puppies playing in the snow, why can’t a program sharpen a blurry image and show us who’s really behind those pixels?
Artificial intelligence is only as good as the information it has
You see, when you ask an AI program to “enhance” a blurry photo or generate additional parts of an image, you are actually asking the AI to add information that isn’t there. Digital images are, after all, just ones and zeros, and showing more detail on someone’s face requires more information. But artificial intelligence cannot look at a blurry face and “know” who is really there through sheer computing power. All it can do is take the information it has and guess what should actually be there.
So, in the case of this video, the AI programs take the pixels that make up the woman in question and add more detail to the image based on what their training data suggests should be there, not what really is. That’s why you get wildly different results every time, and often terrible ones. It’s all just guessing.
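To make that concrete, here is a minimal Python sketch using Pillow, with a hypothetical “face.jpg” standing in for a video frame. It shows how much information pixelation destroys; no upscaling filter, AI or otherwise, can restore data that was never captured:

```python
from PIL import Image

original = Image.open("face.jpg").convert("L")             # e.g., a 512x512 portrait
pixelated = original.resize((16, 16))                      # crush it down to 256 pixels
upscaled = pixelated.resize(original.size, Image.BICUBIC)  # smooth it back up

# The pixelated version holds 16 * 16 = 256 values; a 512x512 original held
# 262,144. No filter can restore the ~99.9% of data that was thrown away;
# "upscaled.jpg" looks smoother, but contains no new facial detail.
print("original pixels: ", original.size[0] * original.size[1])
print("pixelated pixels:", pixelated.size[0] * pixelated.size[1])
upscaled.save("upscaled.jpg")
```

An AI upscaler differs from the bicubic filter above only in how it guesses: it invents plausible-looking detail from its training data instead of interpolating, which is why each tool invents a different face.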
Jason Koebler of 404 Media offers a great demonstration of how these tools simply don’t work. Not only did Koebler run programs like Fotor and Remini on The Sun’s video, with results as disastrous as the others floating around the Internet, but he also tried them on a blurry image of himself. The results, as you might guess, were not accurate. So apparently Jason Koebler is also missing, and his role at 404 Media has been taken over by an imposter. #Koeblergate.
Now, some AI programs are better at this than others, but usually in specific use cases. Again, these programs add data based on what they think should be there, so they work well when the answer is obvious. For example, Samsung’s “Space Zoom”, which the company advertises as capable of taking high-quality photos of the Moon, turned out to be using artificial intelligence to fill in the missing data. Your Galaxy takes a photo of a blurry Moon, and the artificial intelligence supplements that information with details from the real Moon.
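As a rough illustration of why the Moon case is tractable, here is a toy sketch. This is not Samsung’s actual pipeline, and the file names are hypothetical, but it shows how easy “filling in” becomes when you already know exactly what the subject should look like:

```python
from PIL import Image

blurry_moon = Image.open("blurry_moon.jpg").convert("L")   # what the sensor saw
reference = Image.open("reference_moon.jpg").convert("L")  # stored high-res Moon

# The Moon always shows Earth the same face, so a stored texture lines up
# with nearly any Moon photo. Scale it to match and blend it over the shot:
reference = reference.resize(blurry_moon.size)
enhanced = Image.blend(blurry_moon, reference, alpha=0.7)  # 70% reference detail
enhanced.save("enhanced_moon.jpg")
```

The trick only works because there is exactly one Moon. The “added detail” comes from the reference, not the photo, which is fine for the Moon and useless for identifying a stranger.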
But the Moon is one thing; specific faces are another. Sure, if you had a program like “KateAI” that was trained solely on photos of Kate Middleton, it could probably turn a woman’s pixelated face into Kate Middleton’s, but only because it was trained to do so, and it certainly wouldn’t tell you whether Kate Middleton was actually in the photo. As it stands, there is no AI program that can “zoom in and enhance” to reveal who a pixelated face really belongs to. If there isn’t enough data in the image to tell who’s really there, there isn’t enough data for the AI, either.
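For the skeptical, here is a toy version of that hypothetical “KateAI” in Python with NumPy. The “photos” are random stand-in vectors, but the logic shows why a single-identity “enhancer” proves nothing: whatever you feed it, the output comes from its training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "training set": five 64x64 photos of a single person,
# represented here by random stand-in vectors.
training_photos = [rng.random(64 * 64) for _ in range(5)]

def enhance(pixelated_face: np.ndarray) -> np.ndarray:
    """Return the training photo closest to the input: a guess, not a recovery."""
    return min(training_photos, key=lambda photo: np.linalg.norm(photo - pixelated_face))

# Feed it ANY blurry face and the output is always drawn from the training
# set, so this "enhancer" finds its one identity in every input.
mystery_face = rng.random(64 * 64)
result = enhance(mystery_face)
print(any(result is photo for photo in training_photos))  # True, every time
```

Commercial enhancers are trained on millions of faces rather than one, but the principle is the same: the output is pulled toward the training data, so it can never serve as evidence of who was actually in the frame.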