A lot of companies essentially cherry-pick healthy patients and write insane inclusion/exclusion criteria to rule out anyone except the ideal participant, which is why more and more research sites are negotiating up-front payment for pre-screening and higher screen-fail % reimbursement into their study budgets.
Study design is sometimes optimized so that only the "best," most enticing participants will actually be eligible. I've seen randomization rates as low as 2%-12%, though 50% is more common. Some studies also have a 100-to-150-day screening period, both a limited AND a full screening period, etc.
Overly restrictive inclusion/exclusion criteria aimed at super narrowly defined ideal populations hinder enrollment, put a large prescreening burden on sites, and end with trial results that fail to reflect real-world demographics.
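To make the site burden concrete, here's a toy back-of-the-envelope calc (Python; the randomization rates echo the range above, the per-screen cost is entirely made up):

    def prescreening_burden(target_randomized, randomization_rate, cost_per_screen):
        # Candidates a site must prescreen, screen failures incurred, and
        # total screening cost to hit an enrollment target at a given rate.
        screened = target_randomized / randomization_rate
        screen_fails = screened - target_randomized
        return screened, screen_fails, screened * cost_per_screen

    # 20 randomized participants at a 5% rate vs. a 50% rate:
    print(prescreening_burden(20, 0.05, 500))  # (400.0, 380.0, 200000.0)
    print(prescreening_burden(20, 0.50, 500))  # (40.0, 20.0, 20000.0)

A tenfold difference in cost and workload for the same enrollment target, which is exactly why sites want that money up front.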
There are more than just 4 execs, and imo an unprecedented level of turnover for a historically very stable company. It's multiple senior leaders across legal, policy, AI, design, hardware, and operations leaving within a short period, making it one of Apple's most significant leadership shakeups in years, which is why several outlets are finding it newsworthy.
1) John Giannandrea, Senior VP of Machine Learning & AI Strategy and Apple's AI chief, is leaving in 2026 after setbacks with Siri; his entire team is being reorganized and cut.
2) Alan Dye, VP of Design and the person responsible for Liquid Glass, left for Meta (per Bloomberg)
3) Kate Adams, the top lawyer and general counsel, is leaving
4) Lisa Jackson, VP of Policy & Social Initiatives, is also leaving
5) Johny Srouji, hardware/chip head, said he is "seriously considering leaving", which is really interesting seeing as he actually said that out loud for the press to report on.
6) Jeff Williams, the COO, retired
7) Luca Maestri, the CFO, left earlier this year
8) Ruoming Pang, the AI foundation models lead, left for Meta
9) Ke Yang, head of Siri search, also left for Meta.
I was reading that Srouji (5 above) is 61 years old. While that is not too old, it does explain why he may not be the choice for next CEO (besides any other considerations). You want someone to helm the ship for a decade.
Apple has (for a very long time) been essentially a hardware company, so all the contrived drama about not embracing AI is perhaps Apple-style accumulation of data as it refines successive generations of "neural cores" to efficiently serve wherever the industry is careening.
While Apple wants its hardware to best run popular apps (AI included), it's premature to presume these people leaving for Meta (Dye in particular) have any impact beyond the tribal knowledge that departs with them.
(disclaimer: was an engineer in an inner sanctum of apple for several years)
He created Liquid Glass, the much-ballyhooed but controversial new iOS 26 update. It was marketed like "magic" but is mainly just visual updates that are buggy, drain battery, and make things hard to read. I wonder if they'll keep supporting/pushing it.
They'll virtualize the inkjet-printer ink model: the Totally Transparent Liquid Glass UI is free, DRM'ed dye is an in-app purchase that fades over time, or a subscription that you don't actually own.
User-defined colors won't be possible, only expensive premium licensed Pantone, Disney Princess Pink, Barbie Pink, Tiffany Blue, Coca-Cola Red, Cadbury Purple, UPS Brown, Target Red, Home Depot Orange, John Deere Green & Yellow, Vantablack, Stuart Semple Black 2.0, 3.0, etc.
>The sentiment within the ranks at Apple is that today’s news is almost too good to be true. People had given up hope that Dye would ever get squeezed out, and no one expected that he’d just up and leave on his own. [...]
>It’s rather extraordinary in today’s hyper-partisan world that there’s nearly universal agreement amongst actual practitioners of user-interface design that Alan Dye is a fraud who led the company deeply astray. It was a big problem inside the company too. I’m aware of dozens of designers who’ve left Apple, out of frustration over the company’s direction, to work at places like LoveFrom, OpenAI, and their secretive joint venture io. I’m not sure there are any interaction designers at io who aren’t ex-Apple, and if there are, it’s only a handful. From the stories I’m aware of, the theme is identical: these are designers driven to do great work, and under Alan Dye, “doing great work” was no longer the guiding principle at Apple. [...]
>That alone will be a win for everyone — even though the change was seemingly driven by Mark Zuckerberg’s desire to poach Dye, not Tim Cook and Apple’s senior leadership realizing they should have shitcanned him long ago. [...]
>My favorite reaction to today’s news is this one-liner from a guy on Twitter/X: “The average IQ of both companies has increased.”
It didn't say 26-year-old, it said 26 year, according to your own quote. You inserted the word "old", which changes the meaning entirely. If he's been working on design for 26 years then he is a veteran, not hired as a child.
I'd far prefer a kid with actual HCI usability design skills and compassion for users to a middle-aged superficial cosmetician who scoffs at "programmer talk" and prides himself on not knowing what a "key window" or "Fitts' Law" is.
>After I published that post, I got a note from a designer friend who left Apple, in frustration, a few years ago. After watching Jobs’s Aqua introduction for the first time in years, he told me, “I’m really struck by Steve directly speaking to ‘radio buttons’ and ‘the key window’.” He had the feeling that Dye and his team looked down on interface designers who used terms like Jobs himself once used — in a public keynote, no less. That to Dye’s circle, such terms felt too much like “programmer talk”. But the history of Apple (and NeXT) user interface design is the opposite. Designers and programmers used to — and still should — speak the exact same language about such concepts. Steve Jobs certainly did, and something feels profoundly broken about that disconnect under Alan Dye’s leadership. It’s like the head of cinematography for a movie telling the camera team to stop talking about nerdy shit like “f-stops”. The head of cinematography shouldn’t just abide talking about f-stops and focal lengths, but love it. Said my friend to me, regarding his interactions with Dye and his team at Apple, “I swear I had conversations in which I mentioned ‘key window’ and no one knew what I meant.”
I don't think this is all that revelatory; people have been updating the locations on their CVs and resumes to match the places where they're trying to get jobs for ages.
AI leadership and design operations at Apple have both also had high-level turnover recently. John Giannandrea, the Senior VP of Machine Learning & AI, left Apple after troubles with Siri, and their design lead, Alan Dye, left for Meta. Johny Srouji, their hardware guru, is also "considering leaving": https://wccftech.com/apple-chip-guru-johny-srouji-wants-to-l...
My mom, who is computer savvy (she has a CS degree and used to code in COBOL and FORTRAN to process payroll, budgeting, and inventory), fell for a PayPal phone scammer and let him log into her computer. It took us three weeks to undo everything the scammer did and lock down all her accounts.
Now I have her just not pick up ANY unknown number, but it's hard because her medical offices sometimes call from new numbers.
Sure, but that's not YouTube. That's Instagram. He says so at 1:30.
YouTube is not applying any "face filters" or anything of the sort. They did, however, experiment with AI-upscaling the entire image, which gives the classic "bad upscale" smeary look.
Like I said, I think that's still bad and they should have never done it without the clear explicit consent of the creator. But that is, IMO, very different and considerably less bad than changing someone's face specifically.
His followers also added screenshots of YouTube Shorts doing it. He says he reached out to both platforms, will report back with an update from their customer service, and is doing some compare-and-contrast testing for his audience.
> Here's some other creators also talking about it happening in youtube shorts (...)
If you open the context of the comment, they are specifically talking about the bad, entire-image upscaling that gives the entire picture the oily smeary look. NOT face filters.
EDIT: same thing with the two other links you edited into your comment while I was typing my reply.
Again, I'm not defending YouTube for this. But I also don't think they should be accused of doing something they're not doing. Face filters without consent are a far, far worse offense than bad upscaling.
I would like to urge you to be more cautious, and to actually read what you brandish as proof.
It's very hard to tell in that instagram video, it would be a lot clearer if someone overlaid the original unaltered video and the one viewers on YouTube are seeing.
That would presumably be an easy smoking gun for some content creator to produce.
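For anyone who wants to produce it, a rough sketch of that comparison (Python/OpenCV; the filenames are placeholders, and since re-encodes can drop or shift frames, the alignment may need hand-tuning):

    import cv2
    import numpy as np

    # "original.mp4" = the file the creator uploaded;
    # "served.mp4"   = the same video downloaded back from YouTube.
    orig = cv2.VideoCapture("original.mp4")
    served = cv2.VideoCapture("served.mp4")

    idx = 0
    while True:
        ok1, f1 = orig.read()
        ok2, f2 = served.read()
        if not (ok1 and ok2):
            break
        if f1.shape != f2.shape:
            f2 = cv2.resize(f2, (f1.shape[1], f1.shape[0]))
        # Per-pixel absolute difference; bright regions mark alterations.
        diff = cv2.absdiff(f1, f2)
        heat = cv2.applyColorMap(cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY),
                                 cv2.COLORMAP_JET)
        # Side by side: original | served | difference heatmap
        cv2.imwrite(f"compare_{idx:05d}.png", np.hstack([f1, f2, heat]))
        idx += 1

If the diff lights up mainly on skin and eyes rather than uniformly across the frame, that's the smoking gun; uniform noise would point at plain re-encoding.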
There are heavy alterations in that link, but having not seen the original, it's not clear to me in this format how the two compare.
You're misunderstanding the criticism the video levies. It's not that he tried to apply a filter and didn't like the result; it was applied without his permission. The reason you can't simply upload the unaltered original video is that that's what he was trying to do in the first place.
Wouldn't this just be unnecessary compute using AI? Compression or just normal filtering seems far more likely. It just seems like increasing the power bill for no reason.
Video filters aren't a radical new thing. You can apply things like 'slim waist' filters in real time with nothing more than a smartphone's processor.
People in the media business have long found their media sells better if they use photoshop-or-whatever to give their subjects bigger chests, defined waists, clearer skin, fewer wrinkles, less shiny skin, more hair volume.
Traditional manual photoshop tries to be subtle about such changes - but perhaps going from edits 0.5% of people can spot to bigger edits 2% of people can spot pays off in increased sales/engagement/ad revenue from those that don't spot the edits.
And we all know every tech company is telling every department to shoehorn AI into their products anywhere they can.
If I'm a Youtube product manager and adding a mandatory makeup filter doesn't need much compute; increases engagement overall; and gets me a $50k bonus for hitting my use-more-AI goal for the year - a little thing like authenticity might not stop me.
One thing we know for sure is that since ChatGPT humiliated Google, all teams seem to have been given carte blanche to do whatever it takes to make Google the leader again, and who knows what kind of people thrive in that kind of environment. Just today we saw what OpenAI is willing to do to eke out any advantage it can.
The examples shown in the links are not filters for aesthetics. These are clearly experiments in data compression.
These people are having a moral crusade against an unannounced Google data compression test thinking Google is using AI to "enhance their videos". (Did they ever stop to ask themselves why or to what end?)
This level of AI paranoia is getting annoying. This is clearly just Google trying to save money. Not undermine reality or whatever vague Orwellian thing they're being accused of.
Agreed. It looks like over-aggressive adaptive noise filtering, a smoothing filter and some flavor of unsharp masking. You're correct that this is targeted at making video content compress better which can cut streaming bandwidth costs for YT. Noise reduction targets high-frequency details, which can look similar to skin smoothing filters.
The people fixated on "...but it made eyes bigger" are missing the point. YouTube has zero motivation to automatically apply "photo flattery filters" to all videos. Even if a "flattery filter" looked better on one type of face, it would look worse on another type of face. Plus applying ANY kind of filter to a million videos an hour costs serious money.
I'm not saying YouTube is an angel. They absolutely deploy dark patterns and user manipulation at massive scale - but they always do it to make money. Automatically applying "flattery filters" to videos wouldn't significantly improve views, advertising revenue or cut costs. Improving compression would do all three. Less bandwidth reduces costs, smaller files means faster start times as viewers jump quickly from short to short and that increases revenue because more different shorts per viewer/minute = more ad avails to sell.
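To illustrate the kind of pre-pass I mean, here's a hypothetical sketch of that denoise + unsharp-mask chain (Python/OpenCV, parameters invented; YT's actual pipeline obviously isn't public):

    import cv2

    def compressibility_prepass(frame):
        # Non-local-means denoise: strips sensor noise, the high-frequency
        # detail that eats bitrate but carries no real information.
        den = cv2.fastNlMeansDenoisingColored(frame, None, 7, 7, 7, 21)
        # Unsharp mask: restore edge contrast so the result doesn't look
        # uniformly soft once the encoder quantizes it.
        blur = cv2.GaussianBlur(den, (0, 0), sigmaX=3)
        return cv2.addWeighted(den, 1.5, blur, -0.5, 0)

Run something like that over faces and you get exactly the waxy skin and haloed edges people are screenshotting.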
I agree; I don't really think there's anything here besides compression algos being tested. At the very least, I'd need to see far, far more evidence of filters being applied than what's been shared in the thread. But having worked at a social media company in the past, I must correct you on one thing:
>Automatically applying "flattery filters" to videos wouldn't significantly improve views, advertising revenue or cut costs.
You can't know this. Almost everything at YouTube is probably A/B tested heavily and many times you get very surprising results. Applying a filter could very well increase views and time spent on app enough to justify the cost.
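And the readout of such an experiment is cheap to compute; a minimal sketch of the usual two-proportion z-test (Python, with completely made-up counts for, say, "viewer finished the short"):

    from math import sqrt, erfc

    def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
        # Pooled two-proportion z-test: did variant B move the rate metric?
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return z, erfc(abs(z) / sqrt(2))  # two-sided p-value

    # Control vs. filtered bucket, a million viewers each (illustrative):
    print(two_proportion_ztest(512_000, 1_000_000, 514_500, 1_000_000))

At YouTube's traffic volumes even a fraction-of-a-percent lift clears significance easily, which is how "surprising" variants end up shipping.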
Let's be straight here: AI paranoia is near the top of the most propagated subjects across all media right now, probably for the worse. If it's not "Will you ever have a job again!?" it's "Will your grandparents be robbed of their net worth!?" or even just "When will the bubble pop!? Should you be afraid!? YES!!!", and in places like Canada, where the economy is predictably crashing because of decades of failures, it's both the cause of and answer to macroeconomic decline. Ironically/suspiciously it's all the same rehashed, redundant takes from everyone from Hank Green to CNBC to every podcast ever, late-night shows, radio, everything.
So to me the target of one's annoyance should be the propaganda machine, not the targets of the machine. What are people supposed to feel, totally chill because they have tons of control?
A makeup influencer I follow noticed YouTube and Instagram are automatically adding filters to his face in his videos without permission. If his content is about lip makeup they make his lips enormous, and if it's about eye makeup the filters make his eyes gigantic. They're having AI detect the type of content and automatically apply filters.
The video shown as evidence is full of compression artifacts. The influencer is non-technical and assumes it's an AI filter, but the output is obviously not good quality anywhere.
To me, this clearly looks like a case of a very high compression ratio with the motion blocks swimming around on screen. They might have some detail enhancement in the loop to try to overcome the blockiness which, in this case, results in the swimming effect.
It's strange to see these claims being taken at face value on a technical forum. It should be a dead giveaway that this is a compression issue because the entire video is obviously highly compressed and lacking detail.
You obviously didn't watch the video; the claims are beyond the scope of compression and include things like eye and mouth enlargement, and you can clearly see the filter glitching off on some frames.
Someone in the comments explained that this effect was in auto translated videos. Meta and YT apparently use AI to modify the videos to have people match the language when speaking. Which is a nightmare on its own, but not exactly the same.
I've come across these auto translated videos while traveling, and actually found them quite helpful. Lot of local "authentic" content that I wouldn't have seen otherwise.
It's all kinds of annoying if you're bilingual. YouTube now auto-translates ads served in my mother tongue to English, and I have not found a way to turn it off.
I really hate them. Once again, Google have completely failed to consider multi-lingual people. Like Google search, even if you explicitly tell it what languages it should show results in, it's often wrong and only gives results in Russian when searching in Cyrillic, even for words that do not exist in Russian but do in the language defined in the settings.
Also the voice is pretty unemotional and has nothing to do with the original voice. And it being a default that you can't even seem to disable...
Last night, I came across a video with a title in English and an "Autodubbed" tag. I assumed it would be dubbed into English (my language) from some other language. But it wasn't. It was in French, and clearly the creator's original voice. The automatic subtitles were also in French. I don't know what the "Autodubbed" tag meant, but clearly something wasn't working.
I am by no means fluent in French, but I speak it well enough to get by with the aid of the subtitles, so that was fine. In an ideal world, I'd have the original French audio with English subtitles, but that did not appear to be an option.
> I dont want default language. I understand multiple of them. And it is even ridiculous that I have to set it up.
For Silicon Valley, it is difficult to comprehend that people may speak more than one language.
That's why you get programs in languages other than the one you intended (phone set to English, so you get the English version of the app), or they "offer" (OK/not now) to translate.
That list would be incomplete. Americans at least don't tend to "helpfully" automatically proxy their whole site through Google Translate when they detect foreign IPs.
The most baffling thing is that we aren't talking about Hurrah-Americans here. We are talking about Google, which is full of Indians on all levels of the company. They, if anyone, should have understanding of multilingual people, and yet... such an incredible mess, which is still not fixed after many months.
There are some very clear examples elsewhere. It looks as if youtube applied AI filters to make compression better by removing artifacts and smoothing colors.
This seems like such an easy thing for someone to document with screenshots and tests against the content they uploaded.
So why is the top voted comment an Instagram reel of a non-technical person trying to interpret what's happening? If this is common, please share some examples (that aren't in Instagram reel format from non-technical influencers)
> So why is the top voted comment an Instagram reel of a non-technical person trying to interpret what's happening?
It's difficult for me to read this as anything other than dismissing this person's views as unworthy of discussion because they are "non-technical," a characterization you objected to. If you feel this shouldn't be the top-level comment, I'd suggest you submit a better one.
To me it's fairly subtle, but there's a waxy texture to the second screenshot. This video presents some more examples, some of them more textured: https://www.youtube.com/watch?v=86nhP8tvbLY
It's a different diagnosis, but the problem is still, "you transformed my content in a way that changes my appearance and undermines my credibility." The distinction is worth discussing but the people levying the criticism aren't wrong.
Perhaps a useful analogy is "breaking userspace." It's important to correctly diagnose a bug breaking userspace to ship a fix. But it's a bug if it's a change that breaks userspace workflows, full stop. Whether it met the letter of some specification and is "correct" in that sense doesn't matter.
If you change someone's appearance in your post processing to the point it looks like they've applied a filter, your post processing is functionally a filter. Whether you intended it that way doesn't change that.
Well, this was the original claim:
> If his content was about lip makeup they make his lips enormous and if it was about eye makeup the filters make his eyes gigantic. They're having AI detecting the type of content and automatically applying filters.
I didn't downplay it, I just wasn't talking about that at all. The video I was talking about didn't make that claim, and I wasn't responding to the comment which did. I don't see any evidence for that claim though. I would agree the most likely hypothesis is some kind of compression pipeline with an upsampling stage or similar.
ETA: I rewatched the video to the end, and I do see that they pose the question about whether it is targeted at certain content at the very end of the video. I had missed that, and I don't think that's what's happening.
In the best of gaslighting and redirection, Youtube invents a new codec with integrated AI, thus vastly complicating your ability to make this point.
After posting a cogent explanation as to why integrated AI filtering is just that, and not actually part of the codec, Youtube creates dozens of channels with AI-generated personalities, all explaining how you're nuts.
These channels and videos appear on every webpage supporting your assertions, including at the top of search results. Oh, and in AI summaries on Google search whenever the topic is searched, too.
This is an unfair analysis. They discuss compression artifacts. They highlight things like their eyes getting bigger, which is not what you usually expect from a compression artifact.
If your compression pipeline gives people anime eyes because it's doing "detail enhancement", your compression pipeline is also a filter. If you apply some transformation to a creator's content, and then their viewers perceive that as them disingenuously using a filter, and your response to their complaints is to "well actually" them about whether it is a filter or a compression artifact, you've lost the plot.
To be honest, calling someone "non-technical" and then "well actually"ing them about hair splitting details when the outcome is the same is patronizing, and I really wish we wouldn't treat "normies" that way. Regardless of whether they are technical, they are living in a world increasingly intermediated by technology, and we should be listening to their feedback on it. They have to live with the consequences of our design decisions. If we believe them to be non-technical, we should extend a lot of generosity to them in their use of terminology, and address what they mean instead of nitpicking.
> To be honest, calling someone "non-technical" and then "well actually"ing them about hair splitting details when the outcome is the same is patronizing, and I really wish we wouldn't treat "normies" that way.
I'm not critiquing their opinion that the result is bad. I also said the result was bad! I was critiquing the fact that someone on HN was presenting their non-technical analysis as a conclusive technical fact.
Non-technical is describing their background. It's not an insult.
I will be the first to admit I have no experience or knowledge in their domain, and I'm not going to try to interpret anything I see in their world.
It's a simple fact. This person is not qualified to explain what's happening, yet their analysis was being repeated as conclusive fact here on a technical forum.
"The influencer is non-technical" and "It's strange to see these claims being taken at face value on a technical forum," to me, reads as a dismissal. As in, "these claims are not true and this person doesn't have the background to comment." Non-technical doesn't need to be an insult to be dismissive. You are giving us a reason not to down weight their perspective, but since the outcome is the same regardless of their background, I don't think that's productive.
I don't really see where you said the output was "bad," you said it was a compression artifact which had a "swimming effect", but I don't really see any acknowledgement that the influencer had a point or that the transformation was functionally a filter because it changed their appearance above and beyond losing detail (made their eyes bigger in a way an "anime eyes" filter might).
If I've misread you I apologize but I don't really see where it is I misread you.
I'm not commenting on Instagram, I'm not asking anyone to provide this random stranger with emotional support, and I'm not disputing that the analysis was non technical.
From a technical standpoint it's interesting whether it's deliberate and whether it's compression, but no, it's not a fair criticism of this video. Dismissing someone's concerns over hair-splitting is textbook "well actually"-ing. I wouldn't have taken issue with a comment discussing the difference from a perspective of technical curiosity.
I agreed that the output was bad! I'm not dismissing their concerns, I was explaining that their analysis was not a good technical explanation for what was happening.
This is going to be a huge legal fight, as the terms of service you agree to on their platform are "they get to do whatever they want" (IANAL). Watch them try to spin this as a "user preference" that they just opted everyone into.
That’s the rude awakening creators get on these platforms. If you’re a writer or an artist or a musician, you own your work by default. But if you upload it to these platforms, they own it more or less. It’s there in the terms of service.
One of the comments on IG explains this perfectly:
"Meta has been doing this; when they auto-translate the audio of a video they are also adding an Al filter to make the mouth of who is speaking match the audio more closely. But doing this can also add a weird filter over all the face."
I don't know why you have to get into conspiracy theories about them applying different filters based on the video content; that would be such a weird micro-optimization. Why would they bother with that?
I doubt that’s what’s happening too but it’s not beyond the pale. They could be feeding both the input video and audio/transcript into their transformer and it has learned “when the audio is talking about lips the person is usually puckering their lips for the camera” so it regurgitates that.
Google has done so many incredibly stupid things, like autotranslating titles/information/audio from a language I already know into English, with no way to turn this off.
Assuming that they did something technically impressive, but stupid again is not a conspiracy, but a reasonable assumption based on previous behavior.
If any engineers think that's what they're doing they should be fired. More likely it's product managers who barely know what's going on in their departments except that there's a word "AI" pinging around that's good for their KPIs and keeps them from getting fired.
Videos are expensive to store, but generative AI is expensive to run. That will cost them more than the storage it allegedly saves.
To solve this problem of adding compute-heavy processing to serving videos, they would need to cache the output of the AI, which uses up the storage you say they are saving.
What type of compression would change the relative scale of elements within an image? None that I'm aware of, and these platforms can't really make up new video codecs on the spot since hardware accelerated decoding is so essential for performance.
Excessive smoothing can be explained by compression, sure, but that's not the issue being raised there.
Neural compression wouldn't be like HEVC, operating on frames and pixels. Rather, these techniques can encode entire features and optical flow, which can explain the larger discrepancies: larger fingers, slightly misplaced items, etc.
Neural compression techniques reshape the image itself.
If you've ever input an image into `gpt-image-1` and asked it to output it again, you'll notice that it's 95% similar, but entire features might move around or average out with the concept of what those items are.
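A toy version of the idea, for the unfamiliar: a small convolutional autoencoder (PyTorch; untrained and hypothetical here) squeezes a frame into a latent tensor roughly 24x smaller, and the decoder paints plausible detail back from those features. A trained version of this is exactly the kind of thing that lets content "move around":

    import torch
    import torch.nn as nn

    class TinyImageAutoencoder(nn.Module):
        # 3x256x256 frame -> 8x32x32 latent (~24x fewer values) -> frame.
        # The decoder reconstructs from features, not pixels, so the output
        # is plausible rather than exact.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 8, 4, stride=2, padding=1),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(8, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    x = torch.rand(1, 3, 256, 256)     # stand-in for a video frame
    x_hat = TinyImageAutoencoder()(x)  # lossy, feature-level reconstruction

Real neural codecs add entropy coding and temporal prediction on top, but the failure mode is the same: errors show up as wrong features, not blocky pixels.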
Maybe such a thing could exist in the future, but I don't think the idea that YouTube is already serving a secret neural video codec to clients is very plausible. There would be much clearer signs - dramatically higher CPU usage, and tools like yt-dlp running into bizarre undocumented streams that nothing is able to play.
A new client-facing encoding scheme would break hardware-accelerated decoding, which in turn slows down everyone's experience, chews through battery life, etc. They won't serve it that way; there's no support in the field for it.
It looks like they're compressing the data before it gets further processed with the traditional suite of video codecs. They're relying on the traditional codecs to serve, but running some internal first pass to further compress the data they have to store.
If they were using this compression for storage at the cache layer, it could allow more videos closer to where they serve them, but they'd decode back to WebM or whatever before sending them to the client.
I don't think that's actually what's up, but I don't think it's completely ruled out either.
That doesn't sound worth it, storage is cheap, encoding videos is expensive, caching videos in a more compact form but having to rapidly re-encode them into a different codec every single time they're requested would be ungodly expensive.
The law of entropy appears true of TikToks and Shorts. It would make sense to take advantage of this. That is to say, the content becomes so generic that it merges into one.
The resources required for putting AI <something> inline in the input (upload) or output (download) chain would likely dwarf the resources needed for the non-AI approaches.
Probably compression followed by regeneration during decompression. There's a brilliant technique called "Seam Carving" [1], invented two decades ago, that enables content-aware resizing of photos and can be sequentially applied to the frames of a video stream. It's used everywhere nowadays. It wouldn't surprise me if arbitrary enlargements were artifacts produced by such techniques.
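For the unfamiliar, one seam-removal step looks roughly like this (Python/numpy, grayscale for brevity): build an energy map, dynamic-program the cheapest vertical seam, delete it:

    import numpy as np

    def find_vertical_seam(gray):
        # Energy = gradient magnitude; seams avoid high-detail regions.
        gy, gx = np.gradient(gray)
        energy = np.abs(gx) + np.abs(gy)
        H, W = energy.shape
        cost = energy.copy()
        for i in range(1, H):
            left = np.roll(cost[i - 1], 1)
            left[0] = np.inf
            right = np.roll(cost[i - 1], -1)
            right[-1] = np.inf
            cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
        # Backtrack the cheapest path from the bottom row to the top.
        seam = np.empty(H, dtype=int)
        seam[-1] = int(np.argmin(cost[-1]))
        for i in range(H - 2, -1, -1):
            j = seam[i + 1]
            lo, hi = max(j - 1, 0), min(j + 2, W)
            seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
        return seam

    def remove_vertical_seam(gray, seam):
        H, W = gray.shape
        mask = np.ones((H, W), dtype=bool)
        mask[np.arange(H), seam] = False
        return gray[mask].reshape(H, W - 1)

Run that (or its seam-insertion twin) per frame without temporal consistency, and low-energy regions like smooth skin are exactly what gets stretched or squeezed.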
I largely agree, I think that probably is all that it is. And it looks like shit.
Though there is a LOT of room to subtly train many kinds of lossy compression systems, which COULD still imply they're doing this intentionally. And it looks like shit.
As soon as people start paying Google for the 30,000 hours of video uploaded every hour (2022 figure), then they can dictate what forms of compression and lossiness Google uses to save money.
That doesn't include all of the transcoding and alternate formats stored, either.
People signing up to YouTube agree to Google's ToS.
Google doesn't even say they'll keep your videos. They reserve the right to delete them, transcode them, degrade them, use them in AI training, etc.
That's the difference between the US and European countries. When you have SO MUCH POWER, like Google, you can't just go around saying ItSaFReeSeRViCe in Europe. With great power comes great responsibility, as the Americans put it.