YouTube Caught Quietly ‘Fixing’ Creator Videos — Without Telling Them


YouTube insists it’s the home for creators — until the platform starts silently editing their work. That’s the charge from high-profile YouTubers who say the site has been running undisclosed processing on Shorts that changes the look of their videos after upload. The story broke when music creators Rick Beato and Rhett Shull compared near-identical clips and noticed obvious differences in the version YouTube served viewers. Beato told the BBC he first thought he was imagining things — “my hair looks strange,” “it almost seemed like I was wearing makeup” — until side-by-side comparisons showed a telltale pattern: sharpened facial features, waxy skin, haloed edges, and lettering that looked like it had been smeared by a digital thumb.

Other creators chimed in with receipts. A Reddit thread tracked a Short that went from a muddy 240p to a crisp 1080p look over a dozen hours — without the uploader doing anything. That kind of leap isn’t a mild codec touch-up; it’s an algorithm putting hands on your footage. After clips on X racked up seven-figure views, YouTube finally responded through Rene Ritchie, the company’s head of editorial and creator liaison. His explanation tried to draw a neat line between “traditional machine learning” and the “GenAI” most people think of when they hear the term AI. No generative models, no upscaling, he said — just an experiment to “unblur, denoise, and improve clarity” during processing, similar to computational photography on smartphones.
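YouTube hasn't published what its "clarity" experiment actually runs, but the kind of classical, non-generative filtering Ritchie described can be illustrated with something as simple as a box-filter denoise. This is a sketch of the general technique, not YouTube's pipeline: every pixel is averaged with its neighbors, which deterministically smooths out noise without inventing any new content.

```python
import numpy as np

def box_denoise(frame: np.ndarray, k: int = 3) -> np.ndarray:
    """Classical box-filter denoise: each pixel becomes the mean of its
    k x k neighborhood. Deterministic filtering -- no generative model,
    no hallucinated detail, just averaging away high-frequency noise."""
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

# A noisy flat gray patch: filtering pulls pixel values toward the mean.
rng = np.random.default_rng(0)
noisy = 128 + rng.normal(0, 10, size=(32, 32))
smooth = box_denoise(noisy)
print(noisy.std(), smooth.std())  # noise level drops after filtering
```

The point of the sketch is the distinction YouTube is drawing: a filter like this has no learned model of faces or text, which is exactly why creators' complaints about waxy skin and smeared lettering suggest something more aggressive than a plain averaging pass.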

Creators weren’t buying the euphemisms. “It’s still AI,” streamer Deano Sauruz shot back, calling the move dishonest and warning that nobody gave YouTube permission to run machine learning over their content — much less to use that content as training fodder that ultimately benefits YouTube’s bottom line. The distinction YouTube is leaning on may be technically accurate in a lab-coat sense — transformers and LLMs on one side, classical ML filters and deconvolution on the other — but it misses the point. If the platform is algorithmically altering videos after upload, and those changes are visible to audiences, that’s an edit. And editing someone else’s work without disclosure is a breach of trust.

The artifacts creators are reporting match what you’d expect from aggressive “enhancement” passes. Faces become overly defined at the edges and oddly uniform across the skin, as if the person were a 3D render. Fine hair clumps into painterly patches. Small text turns soft or warps at the edges, especially when it sits over motion. These are the same fingerprints viewers caught in other big-brand AI “touch-ups” — like Will Smith’s hyped concert Short that looked stitched together from animated stills, or Netflix’s 4K “upgrades” of old sitcoms where faces morph and background signs turn into hieroglyphics.
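Those haloed edges in particular are the classic signature of an unsharp-mask-style sharpening pass, which boosts contrast by overshooting pixel values on either side of an edge. A minimal one-dimensional sketch (illustrative only; YouTube's actual filter is undisclosed):

```python
import numpy as np

def unsharp_mask(signal: np.ndarray, amount: float = 1.5, k: int = 5) -> np.ndarray:
    """Sharpen by adding back the difference between the signal and a
    blurred copy of itself. The overshoot this creates at hard edges is
    what viewers perceive as 'halos' around faces and text."""
    pad = k // 2
    padded = np.pad(signal, pad, mode="edge")
    blurred = np.convolve(padded, np.ones(k) / k, mode="valid")
    return signal + amount * (signal - blurred)

# A clean step edge from dark (50) to bright (200), like text on a background.
edge = np.array([50.0] * 10 + [200.0] * 10)
sharp = unsharp_mask(edge)
print(sharp.max(), sharp.min())  # overshoots above 200 and undershoots below 50
```

Run on a clean step edge, the output shoots past both the original bright and dark values right at the transition; crank the `amount` up, as an aggressive enhancement pass would, and that overshoot becomes the bright rim and dark fringe creators are spotting around hair and lettering.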

Why does YouTube do this? On paper, to “improve quality.” In practice, because Shorts are a battleground for attention, and the platform will use every tool it can to make low-bitrate, shaky, vertical clips look passable on a 4-inch screen. The problem is consent and control. Creators obsess over color grades, skin tones, fonts, and grain. They export at specific bitrates for a reason. If the platform applies a secret beauty pass or de-noise filter that changes eyes, hair, or text, that’s not a neutral pipeline step — it’s a creative override.

There’s also the uncomfortable question of data. YouTube says this isn’t “GenAI,” and Ritchie stressed the experiment doesn’t change resolution or use fancy transformer models. But the company won’t answer the obvious follow-up: Is the content being fed into training to tune these enhancement systems? Even if the answer is “only to improve the filter,” that still makes your work part of a product YouTube controls and monetizes — without your say and without a cut.

This isn’t a trivial, inside-baseball spat. Monetization, brand deals, and viewer trust hang on what a video looks like and how faithfully it reflects the creator’s hand. If a music teacher’s face is weirdly smoothed, a tech reviewer’s on-screen specs go soft, or a filmmaker’s grainy aesthetic is sanitized, audiences blame the creator — not the platform. Worse, creators can’t revert the changes. YouTube processes the file, serves the “enhanced” version, and only after a social-media backlash did it even acknowledge an experiment was happening.

The fix isn’t complicated. If YouTube wants to run clarity filters on Shorts, make it opt-in and label it — the same way platforms label “enhanced” photos or “remixed” audio. Give uploaders a toggle at publish and a way to compare originals against processed versions. Publish the technical spec: what signal is altered, what isn’t, and whether any training use is involved. And above all, stop splitting hairs over ML versus GenAI. To creators and viewers, it’s all artificial manipulation unless they choose it.

YouTube loves to say it’s “always working on ways to provide the best video quality and experience possible.” Here’s a better way: let the people who made the video decide what their best version looks like — and don’t touch it without telling them.