(comments)

Original link: https://news.ycombinator.com/item?id=38345858

A relatively small decline in revenue seems unlikely to prompt Google to neglect testing on other major browsers. Rather, this trend highlights a broader problem: Google places its own technology, and its browser Chrome, above all other major browsers. While Chromium is an open-source project backed by multiple entities including Google, Google still retains significant control over Chromium's direction. Moreover, when considering the antitrust implications, the size of the Chromium open-source developer community relative to the number of Chrome users strongly suggests that anticompetitive behavior by Google may exist in the context of compatibility testing against non-Chrome browsers. Ultimately, addressing the antitrust concerns requires a nuanced understanding of the complex relationships between Google, Mozilla, and the other parties in the browser ecosystem.


Original
YouTube slows down video load times when using Firefox (reddit.com)
1807 points by csvm 22 hours ago | 450 comments

From reddit discussion (https://www.reddit.com/r/firefox/comments/17ywbjj/comment/k9...):

> To clarify it more, it's simply this code in their polymer script link:

> setTimeout(function() { c(); a.resolve(1) }, 5E3);

> which doesn't do anything except making you wait 5s (5E3 = 5000ms = 5s). You can search for it easily in https://www.youtube.com/s/desktop/96766c85/jsbin/desktop_pol...



That is not correct. The surrounding code gives some more context:

    h=document.createElement("video");l=new Blob([new Uint8Array([/* snip */])],{type:"video/webm"});
    h.src=lc(Mia(l));h.ontimeupdate=function(){c();a.resolve(0)};
    e.appendChild(h);h.classList.add("html5-main-video");setTimeout(function(){e.classList.add("ad-interrupting")},200);
    setTimeout(function(){c();a.resolve(1)},5E3);
    return m.return(a.promise)})}
As far as I understand, this code is part of the anti-adblocker code that (slowly) constructs an HTML fragment: a `video` element with the class `html5-main-video`, inside a container that gets the class `ad-interrupting`. It detects the adblocker once the `ontimeupdate` event hasn't fired for 5 full seconds (the embedded webm file itself is 3 seconds long), which is the actual goal of this particular code. I do agree that the anti-adblocker attempt itself is still annoying.
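For readability, here is a rough de-minified sketch of what that snippet appears to do (the function and parameter names are my guesses, and I'm assuming the minified `lc(Mia(l))` wraps `URL.createObjectURL`):

    // Rough de-minified sketch of the anti-adblock probe quoted above.
    function probeForAdblocker(container, deferred, cleanup, webmBytes) {
      // webmBytes: the ~340-byte, 3-second webm clip embedded as a Uint8Array
      var blob = new Blob([webmBytes], { type: "video/webm" });
      var probe = document.createElement("video");
      probe.src = URL.createObjectURL(blob);  // assumed equivalent of lc(Mia(l))
      // If the clip actually plays, timeupdate fires: resolve(0) = no adblocker.
      probe.ontimeupdate = function () { cleanup(); deferred.resolve(0); };
      container.appendChild(probe);
      probe.classList.add("html5-main-video");
      // After 200ms, tag the container with an ad-like class that blockers hide.
      setTimeout(function () { container.classList.add("ad-interrupting"); }, 200);
      // If playback never started within 5s, resolve(1) = adblocker detected.
      setTimeout(function () { cleanup(); deferred.resolve(1); }, 5E3);
      return deferred.promise;
    }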


For completeness, the omitted Uint8Array is the following 340-byte binary (here in base64):

    GkXfo59ChoEBQveBAULygQRC84EIQoKEd2VibUKHgQRChYECGFOAZwH/////////FUmpZpkq17GD
    D0JATYCGQ2hyb21lV0GGQ2hyb21lFlSua6mup9eBAXPFh89gnOoYna+DgQFV7oEBhoVWX1ZQOOCK
    sIEBuoEBU8CBAR9DtnUB/////////+eBAKDMoaKBAAAAEAIAnQEqAQABAAvHCIWFiJmEiD+CAAwN
    YAD+5WoAdaGlpqPugQGlnhACAJ0BKgEAAQALxwiFhYiZhIg/ggAMDWAA/uh4AKC7oZiBA+kAsQEA
    LxH8ABgAMD/0DAAAAP7lagB1oZumme6BAaWUsQEALxH8ABgAMD/0DAAAAP7oeAD7gQCgvKGYgQfQ
    ALEBAC8R/AAYADA/9AwAAAD+5WoAdaGbppnugQGllLEBAC8R/AAYADA/9AwAAAD+6HgA+4ID6Q==
VLC somehow refuses to play it, but its nominal length can be verified with a short JS code like:

    v = document.createElement('video');
    v.src = 'data:video/webm;base64,' + b64;  // b64 = the base64 string above
    await new Promise(resolve => v.onloadedmetadata = resolve);
    console.log(v.duration);  // logs 3 (seconds)


I tried to reproduce the 5s wait in Firefox in multiple scenarios (various combinations of being logged in / not logged in, with / without an adblocker) and couldn't reproduce it in any of them; playback started immediately in each case (when without an adblocker, using a second video so one would start without an ad). I tested on Linux.

What exact combination of circumstances is required to trigger the multi-second wait time?



I just tested this in Firefox on Ubuntu. Three subsequent new-tab tests.

Load: 4.34s, 5.14, 2.96, 3.35

DOMContentLoaded: 3.65s, 4.56, 2.92, 3.33

Finish: 13.14s, 10.77, 8.49, 12.02

So it's getting a bit faster over time, but still heinous, and crucially, it isn't hanging on requests. Individual asset GET/POST requests are taking tens of ms, worst was a few parallel 254ms GETs on a cold start. Usually 50-70ms. But there is a flurry of requests, then a period of very few requests until 5s after init, then another flurry.

Firefox 119.0 Ubuntu 22.04 uBlock Origin, Privacy Badger

Same OS, Chrome 115.0.5790.170, no blockers: YouTube is much snappier. It still definitely takes a few seconds to paint thumbnails, but it's basically done by 5s. DOMContentLoaded is never more than 1.75s, finish

Firefox private window with blockers off has similar time statistics. But actually doubleclick.net is still getting bounced.



I tested in Firefox (uBlock), LibreWolf (uBlock), Safari (AdGuard), and Chromium (no ad blocker), and the initial home page load takes a couple seconds, but I never witnessed a 5s delay. I would say it was actually fastest in Firefox for me, but that may have just been a result of some caching. I am a premium subscriber and have never seen a warning for using an ad blocker, so I'm not sure if premium subscribers get a pass.


I can't reproduce this either. YT on FF plays immediately for me


I am experiencing delay on both Firefox and Waterfox


It is still better to wait 5s without an ad than with an ad.


It has to be a background check; otherwise you can't explain cases (like me) where the code is running but users never noticed any delay.


I wonder if it is just a coincidence that 5s is the time before a skippable ad becomes skippable?


Either wait 5 seconds without ad, or get served an ad about switching to Chrome


Okay, I'm sold on the delay, but where's the code that detects non-chrome?

Do they serve different js based on the user agent header? If they delay chrome too there's no foul.



Just going off this tweet, it seems to be user-agent based: https://fixupx.com/endermanch/status/1726605997698068630


If YouTube are going to go down this path, then perhaps Firefox devs should set the user agent to Chrome for YouTube?


They delayed chrome too. At least for me.


Why is it only trying to detect ads when the user agent is Firefox?

https://old.reddit.com/r/firefox/comments/17zdpkl/this_behav...



Probably because there are other methods for Chrome that don't apply to Firefox.

Like when I noticed that some sites did some URL-rewriting trickery on Firefox and other browsers, but not for Chrome. The trick is to show you the proper URL the link points to, but as you click, it is substituted with one that is a redirection, for tracking purposes (e.g. "https://l.facebook.com/l.php?u=http://actualsite..."). On Chrome, they don't need to use these tricks, as the browser supports the "ping" attribute on links, so they can do their tracking without rewriting the URL.
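To illustrate the trick (a sketch; the tracker and destination URLs are made up):

    // Chrome gets the honest link, with tracking via the HTML ping attribute:
    //   <a href="https://actualsite.example/" ping="https://tracker.example/collect">
    // Other browsers get the href swapped out at click time:
    var link = document.querySelector('a');
    link.addEventListener('mousedown', function () {
      link.href = 'https://l.facebook.com/l.php?u=' +
        encodeURIComponent('https://actualsite.example/');
    });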



This kind of BS is why I don't ever click on links directly. I copy/paste them instead, so I can examine and trim them. Often, the actual link is through some sort of redirection service and I need to copy/paste the text the browser shows for the link rather than the actual link.

There's so much trickery and nonsense around this stuff that no link is safe to just click on.



Check out the Privacy Badger extension. I believe it removes the tracking stuff from (some) links.


I've also noticed this behavior popping up a lot lately, but I had no idea why. The URL with tracking included was still blocked by uBlock Origin, but having to manually copy-paste the relevant portion was an annoyance.

Thanks for the context!



Check out ClearURLs extension.


Wow, that is pretty disgusting behavior.


The web developer interprets missing features as damage and polyfills around them.


I have no idea because I didn't experience anything like that both in Chrome and in Firefox (both with uBO though). But I'm confident that this particular code is not related to the actual slowdown, if it did happen to some Firefox users, because I received the same code even in Chrome.


Does Firefox allow a wider range of plugins, including adblockers?


Yes, Chrome is severely hobbled in this by comparison.


Yes, there are plenty. You can have a look here: https://addons.mozilla.org/en-US/firefox/


This is just an anecdote, but sometimes (especially when I'm on slower internet) Safari + AdGuard will glitch [0] on YouTube. It has never happened with Firefox + uBlock Origin.

[0] Unable to press play and showing image with Ad instead.



I experience the same glitch, and I actually like it: you can just reload the page (cmd-R) and the video starts. Once you're used to it, you can skip ads in less than a second, and you don't get annoyed by the ad sound/video, just an image.


I would suspect because Google can do the detection in Chrome itself, but not in Firefox.


if it’s anti-adblock, does it run even with premium?


I'm not even mad about Google artificially making me wait 5s for using Firefox.

I'm mad that such a big company, with supposedly decent engineers, is making me wait 5s with literally a sleep. How is it even possible to do such a thing in such a rudimentary way? If it were clever, I'd think "damn, that was smart". But this? Seriously, this is the level?



IMHO, this kind of thing is not done by engineers.

    * Marketing/Sales asks engineers to add a feature flag to sleep N milliseconds for their research: "how slowing down impacts your revenue"
    * engineer adds a flag, with different control parameters
    * Some genius in Product figures this out and updates the experiment to slow down for competitors
When the company gets a backlash from the public: "oops, we forgot to clean up all the parameters of the feature flag and it accidentally impacted Firefox"




Google stopped testing stuff in Firefox; that is all they did, afaik. We should all know how many bugs and "oopsies" you get when you don't test before releasing new features: test code snippets being pushed to prod, etc.

Engineers tend to create paper trails on what they work on; code reviews, bug logs, etc. are everywhere, so I doubt there is any record where they say "make things shit for Firefox to hurt our competitors"; that would net them an easy loss in court. But not testing in browsers with small userbases will hold up in court.



Firefox has a small userbase partly because of the early "oopses" described in the article I linked. Those happened a while ago, when Firefox had more users than Chrome.


Chrome was bigger than Firefox by 2012, the accusations that Google intentionally made things worse for Firefox came many years after that.


But they referred to behaviour that was present pretty much from the start. It's just that Mozilla folks were extremely tolerant and assumed good faith for a very long time.

Google have been disgustingly anticompetitive for a very, very long time at this point.



Yeah, one of the biggest examples being the HTML5 video push and Chrome's claims around H.264: Google promised they were going all in on WebM and would remove H.264 support soon, but never did. That meant that Firefox users got more errors outright, but also that for years even sites like YouTube would leave Firefox at 100% CPU with your laptop fans on high doing software WebM decoding, while Chrome users got hardware-accelerated H.264. That became moot after Mozilla and Cisco struck their deal and hardware acceleration for other video formats shipped, but there was a multi-year period where Firefox suffered badly in comparison to other browsers.


Another person is claiming that Google writes custom code for Firefox (or other browsers) to enable tracking, because of the feature difference between Firefox and Chrome [1]. Only one of you can be correct.

[1] https://news.ycombinator.com/item?id=38347364



The company is big enough for both of them to be correct.

I have firsthand knowledge that Cloud, for instance, did not test regularly directly on Firefox. Team couldn't justify the cost of setting up and maintaining a FF test suite to satisfy 1 in 50 users, so they didn't (and nobody up-chain pushed back on that). Testing was done regularly on Chrome, Safari, and Edge, as per the usercounts and "top three browser" guidance (at the time, we didn't even test regularly on mobile, since there was a separate mobile client).

But the analytics team? I'm sure they test directly on Firefox. They're just testing an entirely different piece of the elephant and their end-to-ends won't include how, for example, changes they make interoperate with Firefox in the context of Cloud. Or YouTube. Or etc. Not unless they have a specific reason to be concerned enough to.

Google's like any other megacorp in the sense that costs of cross-team coordination are combinatoric.



Nah, they're totally incentivized to make sure tracking works while still having plenty of oopsies that could cause people to switch.


This should be a top level comment on news like this. Everyone needs to be reminded that this is neither a new behavior nor something unintentional.


Very good point. It's important to recognise that developers in many companies are often not fully aware of the intended use of features they're asked to create.

Another example that springs to mind is Uber, who used a tool called "Greyball" to avoid contact between drivers and authorities: https://www.reuters.com/article/uk-uber-greyball-idUKKBN16B0...

My initial reaction was astonishment that the engineers would happily implement this. And maybe that is what happened. But the alternative possibility is that product and senior management assigned different parts of the feature to different teams e.g. one team develops a pattern recognition system to detect users' professions, another team develops a spoofing system for use in demos, etc...



Why would you be surprised that they'd implement this? It's their job to implement things.


They were using it to evade law enforcement while flouting regulations. It's highly unethical and almost certainly illegal.


Oh I thought you were referring back to the YouTube issue


Tbh even that is ethically very questionable, if the engineers knew that the outcome would be a delay specific to Firefox.


> * Marketing/Sales asks engineers to add a feature flag to sleep N milliseconds for their research: "how slowing down impacts your revenue"

“Research”



They have done such research before; Google published this at a time when developers were all saying "100 ms more or less of web load time doesn't matter". Since then, webpages have become much more focused on performance.

https://blog.research.google/2009/06/speed-matters.html



The dog-slow load times of ad-infested AMP pages would suggest otherwise.


The prevailing developer discussion going from "load speed doesn't matter, stop complaining about useless stuff" to "load times matter, but here we choose to make it slow for other reasons" is a massive improvement, though. Today speed is valued; it wasn't back then.

There are many such tests written about in blogs today, so a developer can now get time to optimize load times by pointing at those blog posts, whereas before, managers would say it was worthless.



Untrue. I optimized pages pre-2000, and it has always mattered.

It's always, always mattered. If anything, people care less today, with the ridiculous 100 requests per page.



Of course it always mattered. But at the time lots of people argued it didn't matter, which is why the headline is "Speed matters". You thinking it did matter at the time doesn't mean the general community thought so.


But the general community did care about speed. Everyone worked towards small load times, optimized (for example) image size for optimal load time, everyone cared.

Whoever didn't care was weird.



AMP pages load way, way faster IME


Not as fast as with 90% of JS blocked. That's how the web was supposed to work, not downloading 50 MiB on every hyperlink.


Researching how best to fuck with your competitors.


Next: researching regulatory capture?


This doesn’t add up.

In order to slow things down by browser, someone needs to have coded the following:

- UA Detection

- Branching for when the flag is on or off

- a timeout that only runs when these two things are true

That takes an engineer to do the work. Marketing and product managers are certainly not writing this code.

If they're abusing a different flag, then the real question I have is what the flag's purpose is and why it's screening for Firefox.

Either way, there is an intention of UA checking and throttling based on the UA, and that takes an engineer to do.



Because it works.

Good engineering isn't about being obtuse and convoluted, it's about making stuff that works.



When the purpose is to abuse your monopoly to further your business interests in another area, being obtuse and convoluted to get plausible deniability is good engineering. This is just sloppy.


I think this is a good example of corporations being made up of people, rather than being contiguous coordinated entities as many of us sometimes think of them.

An engineer doing "good engineering" on a feature typically depends not only on them being a "good engineer" but also on them having some actual interest in implementing that feature.



I would imagine that in a well coordinated company engaging in this kind of thing, the order wouldn't be "slow down firefox", but something along the lines of "use XYZ feature that firefox doesn't support and then use this polyfill for FF, which happens to be slow". Something that doesn't look too incriminating during any potential discovery process, while still getting you what you want.


That's assuming a degree of engineering competency at the product decision making level that is usually absent in companies that are structured as Google is, with pretty strong demarcations of competencies across teams.


Nah, that's got a risk profile. They could implement whatever your strategy is in the next release. You aren't necessarily going to get the longevity of the naive approach.

Plus, a Firefox dev would discover that more easily, as opposed to this version, which they can just dismiss as some JavaScript bug on YouTube's part.



That's the beautiful thing: you make the polyfill contingent on the browser being Firefox rather than probing for the feature, and then you forget to remove it once they implement the feature.


But why do you have to be that clever? If you're caught the consequences are the same regardless and both implementations would exhibit equivalent behavior.

The only superior approach here would be one that is consistent enough to be perceived but noisy enough to be robust to analysis.

Also it should be hidden on the server side.

Who knows, maybe there are a bunch of equivalent slow downs on the server side in the Google property space.

Given this discovery it would probably be reasonable to do some performance testing and change the user agent header string of the request.

Google docs, image search and Gmail operations would be the place to hide them.



I dunno. How long has it been there without anybody noticing?

5 years? 7? Longer?

No matter how they approached it, you could demonstrate the pattern through the law of large numbers regardless. Might as well make the implementation straightforward.



Using an idle timer, like window.requestIdleCallback [1], is good engineering. Shipping whatever happens to pass is not good engineering; it's laziness.

I'm not even a JS programmer, but I know about timers; idle waiting is a common pattern in UI programming. It's the attitude of mediocre engineers not to bother looking up or learning new things.

If every OS/browser/stock-market dev did whatever they wanted "because it works", we wouldn't have a working system. We'd have systemic lags making the system sluggish and eventually unusable as more engineers follow the same mantra.

[1]: https://developer.mozilla.org/en-US/docs/Web/API/Window/requ...
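For illustration, a minimal sketch of the requestIdleCallback approach (the task queue is hypothetical): the browser runs the work when it has spare time, with a deadline as a fallback, instead of unconditionally sleeping.

    // Run queued low-priority work when the browser is idle, with a 5s
    // fallback deadline, instead of unconditionally sleeping for 5 seconds.
    var tasks = [/* queued low-priority functions */];
    requestIdleCallback(function runWhenIdle(deadline) {
      while (deadline.timeRemaining() > 0 && tasks.length > 0) {
        tasks.shift()();                  // run one queued task
      }
      if (tasks.length > 0) {
        requestIdleCallback(runWhenIdle); // reschedule the remainder
      }
    }, { timeout: 5000 });                // fire anyway if never idle within 5s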



Nah, then it doesn't work.

"It works" is The high engineering bar and it's the hard one to hit.

Oftentimes it's replaced these days with imagined complexity, ideological conformity or some arbitrarily defined set of virtues and then you get a really complicated thing that maybe works some of the time and breaks in really hard to understand ways.

Transcompiled frameworks inside of microservices talking to DBMS adapters over virtual networks to do a "select *" from a table and then pipe things in the reverse direction to talk to a variety of services and providers with their own APIs and separate dependencies sitting in different microservices as it just shepherds a JSON string through a dozen wrapper functions on 5 docker containers to just send it back to the browser is The way things are done these days. This is the crap that passes for "proper" engineering. Like the programming version of the pre-revolutionary French Court.

A simple solution, fit for purpose, that works as intended and is easy to understand, remove, debug and modify, with no bus factor: that's the actual high-end solution, not the spaghetti stacked as lasagna that is software haute couture these days.

Sometimes, in practice, the dumb solution can also be the smart one. True mastery is in what you choose Not to do.



I agree with the spirit of your comment; I too hate over-engineering. Choose your battles is an important step in mastery, yes, but being lazy can't be chalked up to mastery.

In this particular case I disagree with using `sleep`; using the idle timer is not as roundabout as what you describe: _Transcompiled frameworks inside of microservices talking to DBMS adapters over virtual networks_. It's a straightforward callback: some lower-level timekeeper signals you and you do your thing. It's nowhere close to the convoluted jumping through hoops you describe.

Mastery comes with balance: putting in the optimal effort, not more, not less. Of course, it depends on what one is trying to master: the job, or programming. The former means doing the minimum and getting maximum benefit from your job/boss; the latter means enjoying learning/programming and arriving at the most optimal solution (for no reason, just because you're passionate).



Speaking as someone who only very occasionally does browser related programming, what is the supposed sin committed here by implementing it this way?


In programming in general, sleeps are generally considered....(I'm lacking the word)...distasteful?

If your code needs to wait for something, it's better done with some sort of event system or interrupt or similar; the reason being that a 5s wait is a 5s wait, but if, say the thing you're waiting for returned in 10ms, if you're using an alternative solution you can carry on immediately, not wait the remaining 4.99 seconds. Conversely, if it takes longer than 5s, who knows what happens?
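For instance, a common JS pattern (a sketch, not YouTube's actual code) is to race the awaited event against a deadline, so execution continues the moment the event fires rather than always burning the full 5 seconds:

    // Resolve as soon as the event fires, or reject after timeoutMs.
    function waitForEvent(target, eventName, timeoutMs) {
      return new Promise(function (resolve, reject) {
        var timer = setTimeout(function () {
          target.removeEventListener(eventName, onEvent);
          reject(new Error(eventName + ' did not fire within ' + timeoutMs + 'ms'));
        }, timeoutMs);
        function onEvent(e) {
          clearTimeout(timer);  // cancel the deadline
          target.removeEventListener(eventName, onEvent);
          resolve(e);           // continue immediately, even after only 10ms
        }
        target.addEventListener(eventName, onEvent);
      });
    }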



Sure, but assuming we take it at face value that this is a straightforward attempt to force a UX-destroying delay, I don't see what makes this so terrible. It's meant to force a 5-second wait, and it does it. Problem solved.


The 5-second wait is the issue, not the means it was obtained -- a fixed wait time either wastes the user's time (by making it take longer than necessary) or is prone to bugs (if the awaited task takes >5 seconds, then the end of the timer will likely break). The better question is _why_ a 5-second wait was necessary, and there's almost certainly a better way to handle that need without the fixed wait time.


OP's point, I think, is that wasting the user's time is part of the point of the code. This specific code seems partially meant as a punishment of the user for using an adblocker.


*for using Firefox instead of Google's own browser.


That's somewhat in debate, the last I saw. The initial report was that it affected a user using Firefox, and that it didn't when they switched user agents. Since then, there have been reports of users not seeing it in Firefox, but seeing it in other (even Chromium-based) browsers. So it seems likely they are A/B testing it, but less clear whether they are intentionally targeting non-Chrome browsers.

Their goal, quite clearly, is to prevent (or at least heavily discourage) adblockers. This is one attempt to detect them, and maybe in Chrome they have a different detection mechanism so it doesn't show the same behavior.

It would be a particularly foolish move on their part to push Chrome by punishing everything else right now, while they are in the middle of multiple anti-trust lawsuits. It makes me think that is unlikely to be the intent of this change.



> In programming in general, sleeps are generally considered....(I'm lacking the word)...distasteful?

Hmmm.....

In programming in general, Javascript is generally considered....(I'm savouring the word)...distasteful?

Yea, nah. I put a sleep in a Javascript/Dart bridge the other day.... We can do better, I can do better,



I don't know if this is what was meant, but my assumption is that it is quite brazen and crude.

But then I think of some alternative method where they send an ajax request to "sleep.google.com/?t=5" and get a response back after five seconds.



For one, they didn't use React.


Yep, curious to know the same thing myself.


They are a lazy man's solution to race conditions that doesn't actually solve the race condition; it only makes it less likely to cause a problem, at an often extreme cost to responsiveness, as seen here.


You're mad that they're using a function for its intended purpose?


Maybe the engineer who was tasked with implementing it was annoyed with the task and did it this way on purpose.


It is not literally a sleep though; isn't setTimeout more like creating a delayed event? (I am not a webdev)


You can't directly do a sleep in Javascript because it runs in the same thread as the UI - it would block the user from interacting with the page. This is effectively a sleep because after 5 seconds it's running the code in the passed-in function (not firing an event). The code in the function then resolves a promise, which runs other functions that can be specified later by what called the one using setTimeout.
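In modern async/await terms, the pattern amounts to this (a sketch):

    // The closest JavaScript gets to sleep(): schedule a wake-up and await it.
    function sleep(ms) {
      return new Promise(function (resolve) { setTimeout(resolve, ms); });
    }
    // await sleep(5000);  // the event loop keeps running; only this code waits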


That's Javascript for you. Don't want to block the one thread from doing other things in the meanwhile.


I'm more mad about the complete failure of regulators to break up an obvious monopoly than I am with the engineers (though they're not saints either)


At least they didn't rewrite the sleep code to do crypto mining.


Google employs 30,000 engineers; it's impossible for them all to be decent.


follow the money

employees will follow orders, orders are made by people who control the money



Reminds me of A Ticket to Tranai by Robert Sheckley, where they deliberately slowed down robots in order for people to get angry and destroy them.


This is interesting, as I had noticed this happening to me (in Chrome) when the anti-ad-blocking started. I assumed that it was YT's way of still "annoying" me while no ads were shown... It was eventually replaced with the "You can't use adblockers" modal, and now I just tolerate the ads.

So I wonder if that 5s delay has always been there.



When I ran into the adblocker-blocker (Firefox + uBlock Origin), I noticed that I could watch videos if logged out. So I just stayed logged out, and haven't seen an anti-adblock message since. Or an ad.

Added bonus, I'm less tempted to venture into the comments section...



Same, I use Firefox + uBlock Origin + YouTube Unhook for a cleaner interface. I also always watch videos in private browsing windows (my default way of browsing the internet), and I manage subscriptions with the RSS feeds of the channels, which is much better for tracking what I have watched, since the default YouTube homepage does not display the latest videos of your subscriptions.

Edit: I forgot to add SponsorBlock to the list of extensions.



I've been randomly getting a situation where the video on Firefox doesn't work, but the sound does. It says something like "Sorry, something's gone wrong", but for a brief second I can see the video. I think it's connected to the ad-blocker changes, but it doesn't actually show a message about having an ad-blocker on.


One of the benefits of ublock origin for me is blocking the youtube comments section, along with all of the video overlay elements.


I'm using Firefox + uBlock Origin logged in, and it works totally fine. Maybe Youtube removed the anti-adblocker on select accounts? I remember I once entertained myself with writing a report in which I sounded like I was sitting in a retirement home with no clue what's going on with "ad block." Did someone perhaps actually read it?


I think you have simply been lucky. The full story is that uBlock Origin and Youtube have been trying to out-patch each other, with uBlock rolling out a bypass to the filters every one or two days since late October (https://github.com/stephenhawk8054/misc/commits/main/yt-fix....).

Depending on whether you've set uBlock to auto-update and when you've watched Youtube relative to when the block filters got updated, you might just not have been hit with the latest detectors while they were active. Personally, I know I got the "accept ads or leave" modal with Firefox + uBlock, locking me out completely on one of my devices.



It seems to be something which is randomly deployed. Not everybody gets the warning.


I got it in the past for weeks, though.


Same here. No problem with the anti-adblock. It was shown to me twice, so I googled "YouTube alternatives", tried Vimeo, and it was nice. Maybe they did register this? :D


It's weird, but I saw the anti-adblocker modal for a week or two, then it stopped appearing and I've never seen it since. shrug


Might be because of the EU ruling, if you're in the EU.


I'm in the US, and had the same experience.

I got the "you can't use an adblocker" message, but was able to close and/or reload the page to watch videos without ads. After a week or so it stopped popping up.

US, Firefox, uBlock Origin.



Another trick I've noticed that is good at skipping ads when the adblocker fails is to refresh the page. When it loads again, it does not play the ad.


It's still trivial to block ads, but the delay has recently started for me, after never happening before. So presumably it's a very intentional volley in the ongoing war to own your attention.


Just install an adblocker?


Or Freetube / Newpipe


No need to go that extreme; the fix is to just update uBlock Origin's filters:

Go into the uBlock Origin addon > click Filter lists > Purge all caches, then Update now.

All done.



Meh.. I could, but I have to tolerate them on TV anyway. I may look into installing Pi-hole one day.


If you have an Android TV, you can use SmartTube [1], which has Adblock + Sponsorblock.

[1] https://github.com/yuliskov/SmartTube



Pi-hole doesn't help, but there are various Android TV apps that do block ads. I still prefer the Roku ecosystem, but I switched after they started putting ads in the middle of music videos.


Pi-hole doesn't work for Youtube because ads and content are served from the same domains.


I still use adblockers perfectly fine on Youtube. There was never a real interruption in adblocking either. You just need uBlock Origin + Bypass Paywalls.


I think they only disabled adblockers for logged-in users, probably because non-logged-in users don't have to agree to the terms of service.


Blockers work with my throwaway Google accounts that I use for this and that. So maybe it's restricted further still, to very entrenched users.


I'm always logged in and using adblockers. So no, that's not it. I also use Youtube probably every day and am a very active user.


ABP also still works just fine. I prefer the arms race being taken care of by someone else.


How is this not blatant anticompetitive behavior?


This is happening to me in Chrome as well so I don't think it's tied to the browser you use.

Curiously it happens only on one profile, in another Chrome profile (which is also logged in to the same Google account) it does not happen. Both profiles run the code in your comment, but the one that does not have the delay does not wait for it to complete.

The only difference I spotted was that the profile that loads slowly does not include the #player-placeholder element in the initial HTML response. Maybe whether it sends it or not is tied to previous ad-blocker usage?

What does piss me off is that even if you clear cookies and local storage and turn off all extensions in both profiles it still somehow "knows" which profile is which, and I don't know how it's doing it.



Is the use of "E" notation common in JS? I can see that it could be fewer bytes, obviously more efficient for bigger values... Looking at the script, I can see it is minified, or whatever we call that these days. I guess my question really is: did someone write "5E3", or did the minifier choose it?

(Sorry this is heading into the weeds, but I'm not really a web developer so maybe someone can tell me!)



Because 5E3 is shorter than 5000, just like you often see !0 for "true" in minified code, because it saves two characters.


In js I thought 1==true, and 1 is shorter than !0 ??

I've never seen exponential notation used for numbers in JS though (not a surprise, I'm not really a programmer); it seems sensible to me from the point of view of shifting the domain from ms to seconds.



> In js I thought 1==true, and 1 is shorter than !0 ??

`1==true` but `1!==true` (`===` and `!==` check for type equality as well, and while `!0` is a boolean, `1` is not).
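A quick console check, for illustration:

    console.log(1 == true);     // true  (loose equality coerces types)
    console.log(1 === true);    // false (strict equality compares types too)
    console.log(!0 === true);   // true  (!0 is the boolean true)
    console.log(5E3 === 5000);  // true  (same number, one character shorter)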



!0 === true, but 1 !== true. I don't recall ever needing the strict comparison, but it seems to tickle the fancy of most js programmers.


Double-equals behaves differently than triple-equals. Minifiers probably can't swap them safely.


I wonder if this actually decreases the bytes over the wire; 5000 compresses a lot better... sorry for the OT.


Interesting question. Has anyone tested this?


Almost certainly the minifier.


Totally possible that the minifier did this, yes.


How/when does that script get loaded? It's not showing up in my network tab. Videos also load instantly, as usual.


Trying to be charitable here: could this be a debug/test artefact that inadvertently got into production?


Unlikely. Google has been breaking non-Chromium (or sometimes even just non-Google Chrome) browsers for years on YouTube and their other websites. It was especially egregious when MSFT was trying their own EdgeHTML/Trident-based Edge. Issues would go away by faking user-agent.


> It was especially egregious when MSFT was trying their own EdgeHTML/Trident-based Edge. Issues would go away by faking user-agent.

Why is there more than one user-agent? Does somebody still expect to receive different content based on the user-agent, and furthermore expect that the difference will be beneficial to them?

What was Microsoft trying to achieve by sending a non-Chrome user-agent?



User agents are useful. However, they tend to be abused much more often than effectively used.

1. They are useful for working around bugs. You can match the user agent to work around the bugs on known-buggy browser versions. Ideally this would be a handful of specific matches (like Firefox versions 12-14). You can't do feature detection for many bugs because they may only trigger in very specific situations. Ideally this blacklist would only be confirmed entries and manually tested if the new versions have the same problem. (Unfortunately these often end up open-ended because testing each new release for a bug that isn't on the priority list is tedious.)

2. Diagnosing problems. Oftentimes you see that some specific group of user-agents is hammering some API or failing to load a page. It is much easier to track down if the user agent is a precise identifier of the client for which your site doesn't work correctly.

3. Understanding users. For example if you see that a browser you have never heard of is a significant amount of traffic you may want to add it to your testing routine.

But yes, the abuse of if (/Chrome/.test(navigator.userAgent)) { mainCode() } else { untestedFallback() } is a major issue.
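For contrast, the feature-detection approach (a sketch, using an API mentioned elsewhere in this thread) keys off the capability rather than the browser name:

    // Probe for the capability instead of matching the browser name.
    if (typeof window.requestIdleCallback === 'function') {
      // use the native API
    } else {
      // polyfill or fall back: the same code path for every browser lacking it
    }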



Only option 1 is something that users, who are the people who decide what user-agent to send, might care about. And as you yourself point out, it doesn't happen.


Why do you think users wouldn't care about sites diagnosing problems that are making pages fail to load (#2) or sites testing the site on the browser that the user uses (#3)?


I'm pretty sure that users care that websites can fix bugs affecting their browser. In fact option 1 is very difficult to actually implement when you can't figure out which browser is having problems in the first place.


It is normal practice for each browser to have its own user-agent, no? But the fact that Google intentionally detected it and served polyfills, or straight-up invalid JS at the time, was insane. A similar spin today is the "Your browser is unsupported" you see here and there. When a major platform such as YouTube does it, it is really impactful.

It would never do feature detection, would give lower-quality h264 video, etc. Back then, there was a really nice third-party application, myTube, which made this less of an issue, but it was eventually killed through API changes.



It may have been intended as normal practice, but as far back as IE vs Netscape, everyone has been mucking with user agents for non-competitive (and counter-non-competitive) reasons.


Without studying the minified code, I wouldn't assume malice just yet; this could be just an inexperienced developer trying to lazily fix some browser-specific bug, or something that accidentally made it to production, like you say.


You think they let inexperienced developers touch the YT code base without proper code review? Even if that were the case, which is an extremely charitable assumption, that itself would be malice in my opinion.


> You think they let inexperienced developers touch the YT code base

Uh, yes? We were all inexperienced at some point. Just the linked file alone is like 300k lines of unminified code; I doubt it's all written by PhDs with 20 years of experience.



Some would argue that holding a PhD does not necessarily guarantee half-decent engineering skills.


It's the "without proper code review" part that I consider malice, not being inexperienced.


> You think they let inexperienced developers touch the YT code base without proper code review?

Yes



YouTube is way too stable for that to be the case.


lol

This reply is for everyone who has ever worked on the codebase...



Should be: LOL LGTM


There is such a thing as overextending the benefit of the doubt, to the point that malicious actors will abuse it.


It could even just be a timeout as part of retry logic or similar. A lot of people seem to be saying that there is no reasonable reason to have a `sleep` in a production application. But there are many legitimate reasons to need to delay execution of some code for a while.


As the saying goes: "we like naked girls, not naked sleeps". Even the interns should know that: a naked sleep is just bad, not fixing anything.


If, at YouTube's size, they do not test on Firefox, that is as much malice as doing it deliberately.


> Trying to be charitable here [...]

There is no reason for charity with such a large power difference. For Firefox, "bugs" like this can really end up being a lost one-shot game.

It's like people walking by and casually reaching for your phone. It's always meant as a joke, unless you don't pull it away fast enough. Then suddenly it wasn't a joke - and your phone is gone.

This is not rooted in any reservation against Google in particular. If you are a mega-corporation with the power to casually crush competitors, you should really want to be held to a high standard. You do not want to be seen as the accidentally-fucking-others-up-occasionally kind of company.



If you are fluent with the terminal, you don't need to suffer from the YT Web UI. Install mpv and yt-dlp. Play videos like this:

  mpv [--no-video] "https://www.youtube.com/watch?v=X9zVjEZ7W8Q"
Option in brackets is optional.


This is the way.

I really don't understand why any technically proficient user would willingly use any of the official YouTube frontends. You get bombarded with ads, you're constantly tracked and experimented on, and your behavior is used to improve their algorithms in order to keep you on the site for as long as possible. It's a hostile user experience, just like most of the mainstream web.

Whenever possible, I suggest using Invidious, Piped, Newpipe, yt-dlp, and anything but the official frontends.

I try to compensate the creators I follow via other means if they have an alternative income source, but I refuse to be forced to participate in an exploitative business model that is responsible for the awful state of the modern web.



>I really don't understand why any technically proficient user would willingly use any of the official YouTube frontends.

I'm a technically proficient user that's written custom bash scripts for youtube-dl combined with ffmpeg to download videos locally and I still use the official Youtube desktop web browser UI every day for several reasons:

+ transcripts and close-captioning (use Ctrl+F search for text to find the section of video that starts talking about the topic I'm interested in)

+ many videos have index of chapters (deep links), table-of-contents

+ viewers' comments (especially valuable for crowdsourced feedback on DIY videos to point out extra tips, or flaws, etc)

+ external links mentioned (Amazon links to products is especially valuable for DIY tutorials)

+ convenient hot links to related videos (part 2, part 3, etc). Not every creator makes "playlists"

+ The Youtube web UI has superfast video scrubbing of the timeline. Scrubbing the timeline in a local video player like VLC is very slow by comparison, because the Youtube backend pre-analyzes the entire video and generates a bunch of timeline thumbnails at multiple intervals. This makes the Youtube web UI timeline scrubbing very fluid, with responsive visual feedback.

I like downloading with yt-dlp, but I also lose a lot of functionality when I watch videos in VLC instead of the Youtube desktop web browser UI. The above points are not relevant to the terrible Youtube app on mobile and tablets.



Most of those features are available in OSS tools as well. And for those that are not, there are alternative solutions that might take a bit of work to implement.

I'm not claiming that the OSS tools have feature parity with 1st party frontends, or that they won't require some sacrifices, or effort adjusting. I just think that the trade-off of losing some of the convenience in return for not being tracked and manipulated is well worth it to me, though I can see how it might not be worth it for others.

I do actually think that OSS tools provide a better UX. I can download the media and consume it offline, using any player of choice, on any device, at any time. I find YouTube's recommendations a nuisance, and I can turn those off in Invidious and Piped. Scrubbing in mpv is instantaneous for me for local files and even those served on the LAN, though there is a slight delay when playing directly from YT. There is also a solution for generating thumbnails[1], though I had some issues with it, and didn't end up using it.

At the end of the day, it's a personal choice depending on what you value most, and I'm not trying to convince anyone my choice is inherently better. Thanks for providing your perspective.

[1]: https://github.com/tomasklaen/uosc



>Scrubbing in mpv is instantaneous for me for local files

Yes, I agree that scrubbing in mpv or VLC is "instantaneous", but Youtube's web UI is even more hyper-fast "instantaneous" than mpv.

>There is also a solution for generating thumbnails[1], though I had some issues with it, and didn't end up using it.

For me, using an offline tool like thumbfast to generate timeline previews defeats the purpose of using Youtube's pre-existing timeline thumbnails that Google's datacenter already generated. Let me explain...

>I do actually think that OSS tools provide a better UX. I can download the media and consume it offline, using any player of choice, on any device, at any time. I find YouTube's recommendations a nuisance,

I'm guessing it's a difference in usage pattern. I'm often browsing a bunch of Youtube videos as a research tool, like a "visual Wikipedia" for various topics (especially DIY tutorials and product research). I want to jump in and out of videos fast. Downloading videos with yt-dlp to play in mpv isn't the workflow here; that's too slow and cumbersome. Instead, I'm sampling a bunch of videos, and maybe a few of those will ultimately be downloaded. E.g. preview/scrub fragments of 10 related videos, read some viewer comments, scan some transcripts, etc... and eventually only yt-dlp 2 of them. This is why "mpv yt-dlp with workarounds" is not an acceptable substitute for using Youtube's web UI.



That's fair. It's indeed a difference in usage.

My only usage of YT is queueing up videos for short-term playback. So I browse a feed of my subscriptions in Piped, drag links of videos I'm interested in into a text file, and run a small script on my HTPC to download them with yt-dlp in parallel and add them to a playlist. With a fast connection, it only takes a few minutes to download even dozens of videos at a time. Then I serve the videos on my LAN over HTTP with nginx and watch them on any of my devices, using any media player that can stream HTTP, which is usually mpv.

I started a project some time ago to make this fancier, but honestly, this workflow does 90% of what I need, and I'm too lazy to change it.

To each their own :)



> + many videos have index of chapters (deep links)

In mpv, you can use PgUp and PgDown to select chapters.

> + external links mentioned

The video description is in the audio/video file if yt-dlp is given --embed-metadata; mpv prints it if present.



> I really don't understand why any technically proficient user would willingly use any of the official YouTube frontends.

- Because I don't see ads with YouTube Premium

- Because I add things to my playlists

- Because I more often than not find interesting things to watch there

- Because I like using it on my phone or TV

There's a lot of reasons why someone would prefer the official apps over some third party app that might break every few months.



> - Because I don't see ads with YouTube Premium

I was in that boat. But after a while I realized I could no longer in good conscience give Google any more money when they were pushing so many initiatives that went against my interests.



The web frontend just works. The other frontends tend to have issues, which even if they're not deal-breakers are annoying. I won't put ideology over using what works best. And clicking a link, then clicking play, beats copying the URL then pasting it into a command line.

Of course this only works because by default (since I have an ad blocker anyways) I don't get bombarded with ads on the web frontend, and so far I've seen the adblocker nag screen once (a failure which uBlock Origin seems to have swiftly corrected).



Because I don't want to fuck about working against the platform, opting myself into something that'll break at any moment.

I would much rather put up with Youtube than be frustrated when my 'alternate frontend' one day breaks and i need to figure out a workaround.



Because using the website is a better experience. None of those tools worked with Sponsorblock last time I tried, for one.

I don't want to yt-dlp every video, Piped and Invidious both have awful frontends in comparison, even the Newpipe dev admitted to using Vanced at some point, and yt-dlp needs some massaging to get the right video quality (and it can't download some videos at all).

If any of your solutions were better for the majority, the majority would be using them. Youtube's ad blocker war is making the platform worse for everyone, but having a couple of billions' worth of developer power behind your platform still beats any open-source video player built for fun.



> Because using the website is a better experience.

That is debatable. I personally find that the combination of Piped, yt-dlp and mpv provides a far better experience than the official frontends. But this is a personal opinion, and I'm not trying to convince anyone my choice is better. I just didn't think other technical users would prefer using the official frontends.

Thanks for your perspective, though I think it's a bit outdated.

> None of those tools worked with Sponsorblock last time I tried, for one.

Piped, yt-dlp and mpv all support Sponsorblock.





Or use any of the many alternative YouTube frontends: https://github.com/mendel5/alternative-front-ends#youtube


Are there any alternatives for iOS?


Thanks for this pointer--I hadn't heard of mpv, but it works amazingly smoothly.


One feature it lacks is seek bar previews. There are thumbnail scripts but they don't use the available youtube thumbnails.


I implemented downloading of youtube thumbnails for one of these scripts.

https://github.com/marzzzello/mpv_thumbnail_script



I think you’re missing the point. How can I browse Youtube in mpv?


In addition to Piped and Invidious, mentioned by sibling comments, which allow you to subscribe, search, and get recommendations, you can use a complete CLI workflow with something like ytfzf [0], or you can use the search commands of yt-dlp [1], which are also accessible through mpv using the ytdl:// prefix (for example, `mpv "ytdl://ytsearch5:some query"` should play the top five results for a hypothetical query, via yt-dlp's ytsearch syntax).

Getting familiar with such tools not only replaces the terrible UXes you have to be subjected to, but also gives you the power and freedom to be creative with how you use Youtube and other online streaming sites.

I wrote various tiny scripts to replace all my needs: for Youtube search, using any highlighted text, with a shortcut; for Youtube Music, with a synced plain-text file of song titles and a shuffle-on-read script; and, more curiously, a script to help me slowly go through the thousands of my partner's favorite songs and then, using shortcuts, add them to my own favorites, decide on them later, add them to the "what the heck do you listen to" friendly-banter list, or the "my ears are bleeding" list, etc. Much better UX than anything the slow web UIs can offer, and with minimal hacking.

[0]: https://ytfzf.github.io/

[1]: https://github.com/yt-dlp/yt-dlp



Use Invidious, Piped or any other frontend that doesn't track and manipulate you.




What do you mean by "browsing" Youtube? Clicking new links for the purpose of entertainment?

My post was only about playing videos.



Well, how do you get to the videos? How do you discover the links to pipe to mpv/yt-dlp?

One option is RSS (YouTube still supports it) for subscribing to channels. Do you know of others?



I don't, I use Youtube for listening to music or livestreams that I already know the title of.


Oh, and they also falsely show "4K" in the video quality icon, but "accidentally" play a 720p or even worse quality stream. If you manually select the 4K stream quality, then and only then will YouTube deign to show 4K to you.


Something related to this which I find extremely frustrating is that I'm capable of watching a 4K video in my browser just fine. Yet if I decide to buy or rent a movie on Youtube, it can only be played back at 480p.

Apparently this is due to DRM restrictions, but the frustrating part is that you can pay extra money for the HD version, and there's nothing telling you about this not being supported in your browser until you've made the purchase (it just allows 480p, and you need to search for why it's broken).

see https://www.reddit.com/r/youtube/comments/pm0eqh/why_are_my_...



This sort of behavior should be an open-and-shut case of false advertising. You were told that the video would be a certain resolution. You gave money as a result of that statement. You received an inferior product to the one that was described.


Isn't that fraudulent? It's amazing how an individual can commit fraud one time and it's FRAUD! But a company can do the exact same thing en masse, as a business model, over and over, and it's only ever a misunderstanding that they get a chance to correct with a gentleman's handshake. And even if they didn't, it seems impossible to move the dial from civil to criminal, as it's often left in the consumers' hands. It's not like there are attorneys that, like, represent the State and could exercise their legal authority to protect consumers.


To my not-a-lawyer understanding, it is fraudulent. Fine print is allowed to clarify an offer, but may not substantially alter the offer as originally made.

I could see an argument made that a reasonable person would know an offer to be limited to supported platforms, and that the fine print clarifies which platforms are supported. To me, though, I’d draw a line between unsupported due to underlying limitations (e.g. can’t serve 4k video on a NES) and unsupported due to seller-side limitations (e.g. won’t serve 4k without remote attestation). I’d see the former as a reasonable clarification of the offer, and the latter as an unreasonable alteration of the offer.



Even if it doesn't technically apply here, the larger point remains that people get handcuffs and corporations get handshakes...


Same deal with deferred prosecutions which is a bullshit designation because the company's legal is basically going to ensure that it becomes a nolle prosequi at that point


It's crappy behavior, but I think screaming fraud is taking things a bit far. If you buy a Blu-ray from a website, you don't come back screaming fraud because the browser or computer you used doesn't play Blu-rays due to the DRM requirements. A refund request fits the scenario much better, and the company's response tells you whether they are worth doing business with, not whether you were the victim of fraud. Some responsibility still lies with the buyer to understand what it takes to use the thing they are buying, and not to rely 100% on the seller to verify everything for them beforehand.

At the same time... I think the behavior is pretty shitty, just not illegal, in that it takes minor up front effort to resolve. An explicit message along the lines of "You won't be able to watch in higher quality on this browser/device combination. Do you still want to purchase the high quality version for use on another device? You'll still be able to watch either version on this device, just always in low quality" goes a long way.



Buy a movie on YT or DVD, and then... watch a torrented version? This isn't the future we were promised, but it sure is the future we have.


> Buy a movie on YT or DVD, and then... watch a torrented version?

In which case, why buy it at all? A torrent isn't going to load as fast as what you paid YT for.



The further time goes on toward segmented streaming platforms and DRM bullshit, the deeper my piracy hedge grows. Eventually there will be a streaming-service aggregation service a la cable channels, and we're back at square one. Add to that streaming services pushing new ad schemes now that they've captured enough market share for the risk to be worth it, and we've got a great storm brewing for a resurgence in piracy, and media execs going "but y?"

BTW modern piracy setups are far more streamlined and easier to manage/use than modern streaming platforms. Assuming you have some tech ability anyway.



Jellyfin on a NAS is just great. You don't even need a NAS. A Pi with a large SSD attached will do fine.


A half decent NAS, with Dockerised *rr is the gold standard of torrenting. I never knew it could be so painless.


>A torrent isn't going to load as fast as what you paid YT for.

Unless you want to rewind the video without it re-buffering...



Not Youtube specifically, but I wanted to watch the Wheel of Time series on my iPad, and:

#1 You cannot stream in a browser on iPadOS anymore. Amazon won't let you, you must use their app.

#2 They don't seem to give a fuck about making sure you're getting a quality stream in their app. Full of artifacts and horrible compression way more often than is warranted on my symmetric gigabit connection.

So I added it to my Sonarr instance (pirated it legally) and watched it in a browser from there with perfect quality and no pre-stream ads.

Once again: A paid service so bad that it couldn't compete with the pirate experience even if it was free.

Which once again confirms Gabe Newell's statement to be true: "piracy is not a pricing issue. It’s a service issue"



Netflix does the same thing. Actually, speaking of infuriating corporate bullshit, allow me to go on a rant about Netflix and subtitles.

They give you the option to choose between like four, maybe five languages. That's it!

If you want subtitles in any of the other hundred or so languages that they have available, well... no. Just no. Learn one of the four they've picked for you.

If you call their support, they'll gaslight you and mumble something about "copyright", which is patent nonsense. Copyright doesn't restrict Netflix from showing more translations for their own content that they made themselves. They own the copyright on it, which means, literally, that they have the right to do whatever they please with the copy. Including showing the associated subtitles to you.

You see, what actually happened, is that some too-smart UX guy at Netflix couldn't make a language picker look nice for that many options so he asked a too-smart data science (lol) guy to figure out the most common languages for each region.

Here in Australia they picked English, Italian, Vietnamese, Chinese because we have a lot of immigrants from those countries. I'm sure they used very clever algorithms on big data clusters to figure that out. Good job, well done.

Never mind that every other streaming app vendor figured this out. Netflix and their $500K total comp Stanford or wherever graduates couldn't. So they instructed their call centre staff to lie to their customers.

Then they had someone write this idiocy: https://help.netflix.com/en/node/101798

"If subtitles for a title are offered in a language but do not display on your device, try another device."

Oh, oh, I'll go do that right now! Let me try my PC... nope four languages. On the TV? Four languages. Actually, I have a phone... and... oh... four languages.

PS: Thai (only!) subtitles are "special" and use eye-searing HDR maximum white. Like 1,600 nits white that literally leaves green after-images etched into my retina. They have a support page and a pre-prepared set of lies for the support staff to read for that piece of shoddy engineering also.



A common thing where I live is for local companies to buy streaming rights for Netflix-created media, and then we can't watch Netflix-created media on Netflix because a local company bought the streaming/playback rights. Netflix doesn't care about the customer. They care about money, and that won't change. They'll max out the bullshit until customers push back, leave it there for a bit, wait for customers to get used to the new bullshit, then add more bullshit and repeat.


> Never mind that every other streaming app vendor figured this out

Did they? Both Prime Video and Disney+ have very very narrow subtitle and audio language choices.

> If you call their support, they'll gaslight you and mumble something about "copyright", which is patent nonsense. Copyright doesn't restrict Netflix from showing more translations for their own content that they made themselves. They own the copyright on it, which means, literally, that they have the right to do whatever they please with the copy. Including showing the associated subtitles to you.

Maybe they mean the subtitles' copyright?

As someone who speaks multiple languages and has the habit of watching with subtitles in the original language of the content if I speak it, otherwise defaulting to English subtitles with original audio... none of the streaming companies have managed to handle that properly. Way too often the audio is only dubbed (often badly), or only subtitles in my local language (French) are available, regardless of the original language of the content. I'd rather watch British movies with subtitles in English, not French, thank you very much.



Apple TV shows something like 50 languages. More than I can be bothered to count, certainly.

Are you saying it's some sort of challenge beyond the abilities of a Senior Technical Lead with total comp in the seven digits to figure out how to make a list of items more than 4 or 5 entries long? Too many megabytes of JSON to shove down the wire for more?

> Maybe they mean the subtitles' copyright?

They definitely do not. That's not how work-for-hire translations work. You pay someone to translate your shows' subtitles for you, then you own the copyright on that work that you paid for. That's how that works. No weird region-locked silliness.

You can make other languages appear by changing the entire UI language of Netflix, which then shows some other "data driven" subset of the subtitle languages.

But then, the entire UI is in another language, which not everyone watching may understand.

Essentially there are audio-subtitle language combinations that are impossible to achieve, no matter what. That combo may not be common enough to make any top-5 list anywhere.

So if you love someone of a sufficiently small minority, or have an unusual racial makeup in your household, Netflix would rather you weren't so weird.

Sit down and think about how absurd it is for the bastion of wokeness that is Netflix to discriminate this profoundly against inter-racial love. On purpose. They wrote the code to do this.

Blows my mind.



> Sit down and think about how absurd it is for the bastion of wokeness that is Netflix to discriminate this profoundly against inter-racial love

I'm in the same boat and I hear you. And since we are on this subject, do you know what else grinds my gears? The whole idea of cultural appropriation. So if your ancestry is X, then you can't do/wear/celebrate Y.

So when you ask these people something like "Is it okay for my half-X, half-Y children to do this?", they start feeling confused. But if you go "What about my grandchildren, who are 1/4 X, 1/4 Y and 1/2 Z?", some of them begin to realize how racist and simplistic they are being.

Learn and enjoy other people's cultures, for goodness' sake. It's called being human.



I consider this to be the answer to "cultural appropriation", which people seem to have made up because their hobby is being offended: https://rumble.com/v3wx1mz-is-this-outfit-offensive-students...

I've seen similar videos with Japanese garb too. The offendatrons hate it. The actual Japanese people love that you're enjoying their culture.



Should Italians feel offended that Japanese businessmen adopted the western (Italian-style) suit?


"No, because Italians are white and white people can't experience discrimination".

That is an actual response I've heard more than once.

To be fair, I agree with the "cultural appropriation folks" when they correctly point out that sometimes people intentionally mock other cultures and that's a dick thing to do. But conflating mockery and insult with an appreciation of other cultures is not helpful, and that's what they do in practice.

I'm a Spaniard and when I watch a Japanese person practicing flamenco, I feel flattered, not insulted.



> "No, because Italians are white and white people can't experience discrimination"

Tell them about the Yugoslav wars merely 30 years ago to blow their minds.



> They definitely do not. That's not how work-for-hire translations work. You pay someone to translate your shows' subtitles for you, then you own the copyright on that work that you paid for. That's how that works. No weird region-locked silliness.

That skips the fact that Netflix does regional deals with local content houses to sell Netflix-made stuff either in theatres or as TV releases, in which case translations could be part of the deal, to be provided by the local entity that's getting the rights. And in the other, more common scenario, where Netflix acquires local content for wider publication (e.g. Casa de Papel/Money Heist is a very popular example), there might again be complications.

> Apple TV shows something like 50 languages. More than I can be bothered to count, certainly.

I haven't found that to be the case, but I only had Apple TV briefly because of the generally poor quality (I watched 3 series on it, and all three devolved into trope after trope, barely going below the obvious surface).

> Sit down and think about how absurd it is for the bastion of wokeness that is Netflix to discriminate this profoundly against inter-racial love. On purpose. They wrote the code to do this.

Is woke in the room with us right now? Can you point it out and explain what it is? For the record, "races" are a stupid social construct that should have died out with the Nazis. And people can be of different ethnicities while speaking the same language(s), or inversely of the same ethnicity while speaking different languages. Being "woke", "inter-racial" and different languages are completely orthogonal topics.



I have to add two adjacent subtitle-related stupidities on Netflix:

1. Closed captions (CC). Okay, I'm willing to accept they improve the experience of a show / movie for a non-zero number of people. What I absolutely don't accept is CC being the ONLY VERSION OF ENGLISH SUBTITLES available. Either CC or nothing. I can't be the only one who prefers English subtitles for English-spoken media, while NOT needing every single sound described as [wet squelching] or [quirky synth music].

(Bonus points for everyone who recognizes those specific examples ↑)

2. Subtitles in all-caps. For the entire movie. Just why? If I'm able to read the text in time at all (it is widely known that words and sentences in all-caps are slower to read), then I'll just feel everyone's screaming all the time, even if they aren't. Whose idea was this? And also here, to my knowledge it only affects English. (I believe all Nolan movies got this "treatment" for example.)

There have been several occasions where even though it was readily available for me to stream from Netflix, I pirated a show or movie anyway, specifically to avoid one or both of these issues.



I don't know about browser options, but on the Android app I can choose between 7 different audio languages and 29 subtitle languages. Looked it up just for you with an episode of "The Good Doctor", which is not a Netflix original. I live in Germany. Definitely not a UI issue.


Seems like they'd want to let people pick up to 4 languages themselves in settings, if they're really attached to their picker. That would make more sense to me.


I love this rant with a passion.


Is it still a thing that you have to use Edge on Windows to get 4K HDR, but you can't on Chrome?




By the way, it is 480p. 420 is for something else :)


That has irked me for quite some time. I always manually select 1080p, because sometimes YT claims it's already playing 1080p, but it's obviously not and the video starts buffering anew when I select 1080p manually. Quite annoying


Get the "Enhancer for Youtube" extension, among many adjustments, it does this clicking for you.

I also had this issue, videos would frequently wobble down to like 240p or whatever, on a stable, high speed wired connection.

It's not an internet problem since I never have to buffer when using this forced setting, so it's probably YT trying to save a few bandwidth bucks when they think people aren't looking.
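For anyone who wants the forced-quality behavior without a full extension, a minimal userscript sketch along these lines can do the clicking for you. It leans on YouTube's internal movie_player element and its undocumented setPlaybackQualityRange() method, both of which are assumptions that can break with any new player build:

    // Minimal sketch, not a maintained extension: relies on YouTube's internal
    // #movie_player element and its undocumented setPlaybackQualityRange()
    // method, both of which may change without notice.
    (function forceQuality() {
      var player = document.getElementById('movie_player');
      if (player && typeof player.setPlaybackQualityRange === 'function') {
        // 'hd1080' is YouTube's internal label for 1080p
        player.setPlaybackQualityRange('hd1080', 'hd1080');
      } else {
        setTimeout(forceQuality, 1000); // player not ready yet; retry
      }
    })();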



Roughly around the same time as the anti-adblocking effort, YouTube started just not playing the video stream for me much of the time. I tell it to play a video, it starts playing the audio, and the video is just a frozen image.

In unrelated news, my youtube-dl usage is way, way up.



Enhancer for YouTube allows you to select a minimum quality; it's also great for blocking Shorts.


I haven't seen this other than for brief periods during quality switching (it seems to play out the current buffer in lower quality while new chunks are downloaded at the displayed target quality). However, for some reason it does often just load at a very low (sub-720p) resolution, and I need to manually up the quality or it will never get to the highest setting. (I'm watching on a 4K monitor with great internet and hardware decoding; 4K has never stuttered for me.)


I remember them starting to do automatic lower-quality streams when this came out[0], but I'm not sure if this is still the cause for the situation. It could be a general "we see this ISP/ASN failing more often with x many concurrent 4k streams, let's throw some people on 720p and see if it helps".

0: https://www.pcworld.com/article/398929/youtube-defaults-to-l...



I have personally noticed this many times. I'd blink and wonder if it was just my eyes going bad, but nope, as soon as I select HD quality manually I can read text again.


Yeah, this has bothered me for a while. Switching to alternative youtube interfaces solved that problem :)


It doesn't help that 720p quality seems subpar (to me) compared to some years ago.


Wait, that's a Firefox-only issue ?!


No, it does it on Chrome too.


I have a hard time believing that's actually what's happening. If they wanted to slow down other browsers, why would they choose this easily discoverable way? They could have easily slowed down serving of JS files (and other assets) based on the user agent to a similar effect. It seems more likely this is just a debug snippet that has made it into production by accident.


If I was working at Google and I was tasked with doing that, I'd half ass it too


I mean it could be that the programmer wanted it to be discovered to draw attention to Google managers' shenanigans but that seems kind of far fetched.


Ah yes, the fact that they are sabotaging other browsers in a very obvious way is actually proof that they didn't mean to sabotage other browsers!


> They could have easily slowed down serving of JS files (and other assets) based on the user agent to a similar effect.

And that is /not/ easily discoverable??



I would argue it's a bit harder to find out if the YouTube backend serves files slower for certain browsers. One could even randomize it and sometimes still serve files fast. Since you cannot look at the backend code, it would be hard to prove anything.
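To illustrate why that would be hard to prove, here is a purely hypothetical server-side sketch (Express, with made-up paths). Nothing suggests YouTube's backend actually looks like this; it only shows how little code randomized, user-agent-keyed throttling would take:

    // Hypothetical sketch only: illustrates the speculated behavior,
    // not anything YouTube is known to run.
    const express = require('express');
    const app = express();

    app.get('/player.js', (req, res) => {
      const ua = req.get('User-Agent') || '';
      // Randomize the delay so any single client-side measurement
      // looks like ordinary network jitter.
      const delayMs = /Firefox/.test(ua) && Math.random() < 0.5 ? 3000 : 0;
      setTimeout(() => res.sendFile('/srv/static/player.js'), delayMs);
    });

    app.listen(8080);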


Using Firefox I get "instant" YouTube. The video starts playing before most of the rest of the UI has loaded, definitely under 1 second.

Any idea what specifically causes it to happen, rather than just "firefox"?



An up-to-date adblocker blocker blocker, most likely. Paying for Premium may also do it.


I've gotten that really slow UI loading almost always lately and I've always assumed that it's because I'm running uBlock Origin.

Although I just tried opening two videos and both opened basically instantly.



https://www.thinkwithgoogle.com/marketing-strategies/app-and...

Seems odd to do something so brazen while also publishing information that (could) prove intent.

Google also modifies how business information can be accessed from Firefox Mobile. You can't read reviews easily from Firefox Mobile, at least not on my install.



That's because it's not actually what's happening. I'm all for bashing bigcorps, and especially ad empires, but the Reddit folks confused correlation with causation here.

The code in question is part of a function that injects a video ad (one that plays before the start), and the code itself is just a fallback in case the ad fails to load within 5 seconds, so that the video page doesn't break completely.

Why was this affected by the user-agent change? My best guess is that for some combinations they somehow decide not to show any ads at all (for now), so this function is not called and some other code path is taken. This is consistent with my own experience with the recent anti-adblock bullshit they implemented: the banner was not shown after a user-agent change, implying it's one of the considered variables.

You can verify all this if you click 'format code' in browser debugger.
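Stripped of the ad-specific details, the fallback being described is just a race between a success event and a timer. A generic sketch of that pattern (not the actual YouTube code) might look like this:

    // Generic sketch of the pattern described above: resolve as soon as the
    // probe element actually plays, or give up after 5 seconds. The 5 s wait
    // is only ever observed when the success event never fires.
    function probePlayback(videoEl) {
      return new Promise(function (resolve) {
        var timer = setTimeout(function () { resolve('timeout'); }, 5000);
        videoEl.ontimeupdate = function () { // fires once playback starts
          clearTimeout(timer);
          resolve('played');
        };
      });
    }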



That makes sense and explains why it seemed so odd.

I don't use YouTube so the comment was more of a way to bring up the other behavior in business reviews. It seemed relevant.

Edit: reviews are also broken (for me) on Firefox desktop, both with no extensions enabled and with uBlock enabled.



I use Firefox, and Google's sites are literally the only ones where I consistently have issues. There was a period of about a month this summer where Google Maps was just completely broken for me, the map wouldn't update at all when attempting to search or pan. There was recently a several day span where chat in Gmail had a 10+ second input lag due to some font-related JavaScript code spinning the CPU nonstop. It's literally gotten to the point where I keep a Chrome window open and use it exclusively for Gmail, Google Meet, YouTube, and Google Maps.

It's pretty obvious from the outside that supporting Firefox is not a product priority for Google. It also seems clear that it's in their best interest to have users choose Chrome over Firefox. My guess is that this likely emerges from a lot of very reasonable sounding local decisions, like "prioritize testing on browsers with the most market share," but it is convenient how those align with the anti-competitive incentives.



This sounds like classic MS behaviour. It is the kind of thing that ought to be addressed in an anti-trust case.


I've posted this here on HN numerous times over the years, and it's been a while since I last posted it:

Google is the new "Microsoft": they embrace, they extend, then they extinguish. Look at their email and messaging offerings: they built on top of XMPP, then eventually pulled the plug. Android is Linux-based but insanely proprietary; the app store is not open by any means, and you're fully at their whims to get your apps on there. Chrome is basically the IE of old, implementing proprietary things or APIs that are not yet standard for Google products, and pushing out competing browsers.



Don't forget the old Microsoft is still here. We have two Microsofts now!


The old, old Microsoft is still here, too. IBM is still there, a dinosaur in the mist.


A few weeks ago I posted on here about the maps lag, and it literally felt like it was fixed after the comment got some attention.

There's 100% targeted de-optimization for Firefox users, and the burden of finding it is on the users, it seems.



Not to diminish the other sketchy stuff Google's doing, but I think the maps lag issue might actually be Firefox's fault. Whenever it happens to me WebGL stops working across all websites and restarting the entire browser fixes it. It's almost like when Firefox has been open a long time it just forgets how to use graphics acceleration.


The thing that bothered me is that it didn't always happen. At one point there was performance parity, and things changed in a way that specifically worked worse in Firefox.

Which means:

A) Firefox had bad WebGL implementations (I didn't experience what you did, but I won't say it doesn't happen) and Google added features that regressed the experience on other browsers.

B) Google knowingly made performance worse on Firefox, regardless of WebGL implementations.

C) Google leverages its own browser and only tests on it, influencing the market to have to use Google's browser in order to use their services (not the same as the IE/Windows monopoly lawsuit, but it sure smells like it).



I believe for anything non-Chrome? Even Vivaldi has issues with some Google products.


Recently, Play Store images don't load (about 60%-80% missing per app) in Firefox.


It is fascinating how the simplest things on websites can be made arbitrarily involved, convoluted, over-complicated. And how those over-complications can then serve as plausible deniability.


Using Google sites with a VPN on Firefox has been really annoying for the past couple of months as well.


I had the same Google Maps issue; I disabled LocalCDN for the site and panning etc. worked again. Apparently the addon needs to be fixed to account for whatever they were doing.


Gmail has recently become extremely slow for me in Safari on a well-specced M1 Max.


Same here on Firefox, for both my laptop running Windows 11 on Alder Lake, and my desktop running Ubuntu on Zen 2.


Spotify's web player doesn't work well on Firefox.

We need to call them out too.

I'm basically forced to use Chromium on Linux.



I use Firefox across Windows, Mac, OpenBSD, and Ubuntu. I've not seen any specific issues with Google sites at all. I only really use Docs, Maps, and Youtube with any regularity but I've not really seen any of these issues.


Yea, I also haven't noticed any speed issues, but I do use noscript exclusively on Firefox.


The BigQuery console in Firefox has like +120ms of latency.


You're basically looking at testing being done on Chrome (because it is Google's, and because of its large market share), Safari (because it runs on a large percentage of completely exclusive platforms where the customer can't switch, and because of its large market share), and Edge (because there are still many corporations that do "Nobody ever got fired for choosing Microsoft" and lock down browser options to just Microsoft's offering).

At this point, Firefox is very much an also-ran on two axes: market share is tiny and nobody forces it on their captive audiences. We may as well ask why Google isn't optimizing testing on Opera, or Samsung Internet.

(There is also the issue of the under-the-hood engine. Since so many browsers have converged on a few rendering and JS stacks, testing on one exemplar of a stack tends to suss out bugs in the other browsers sharing it. Firefox still being its own special snowflake in terms of JS engine and rendering core means it has more opportunities to be different, for good or for ill. So there's a force multiplier in testing the other browsers that one lacks when testing Firefox.)



dupe?


It's clear Google is only testing for the Chrome engine and Safari, which comprise 97% of the browsers in use. Would you increase your testing by 50% to thoroughly test for 3% of the market?


As I said, the decisions are locally reasonable. However, if not supporting Firefox potentially exposed my company to scrutiny over anti-competitive behavior, then, yes, I would absolutely invest in testing procedures to mitigate that.

It's also worth emphasizing that it isn't difficult to support Firefox. I'm pretty sure that many of the sites that I visit do so largely by accident. I do a fair bit of web development, and Firefox/Chrome compatibility has never been an issue in the slightest for me. You almost have to go out of your way to choose Chrome-specific APIs in order to break compatibility. How does virtually every other website on the internet manage it—from my bank to scrappy startups with junior developers coming straight out of bootcamps—while Google with all of their engineering talent and $100+ billion cash on hand just can't seem to make it work?
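As a concrete example of that "going out of your way" failure mode: Chromium-only APIs generally have standard fallbacks, and plain feature detection keeps a site working everywhere. The API below is just one illustration, not a claim about what Google's sites do:

    // navigator.userAgentData (client hints) ships in Chromium-based browsers
    // but not in Firefox, so touching it unconditionally breaks there.
    // Feature-detect and fall back to the standard API instead.
    function browserInfo() {
      if (navigator.userAgentData) {
        return navigator.userAgentData.brands
          .map(function (b) { return b.brand; })
          .join(', ');
      }
      return navigator.userAgent; // supported everywhere
    }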



Serious question: does anti-competitive behavior even apply to open source? Also, it's the open-source Chromium, not necessarily the browser Chrome, that dominates the browser market. The largest players in the industry, except for Apple, have lined up to support Chromium. Firefox is going against the grain. Is it Google's job to help them with their mission? Loosely speaking, in anti-competitive scenarios you have to show how a significant fraction of the consumers are being harmed. You're going to have a tough time with that one.


The thing is that Firefox is the biggest independent, fully open-source browser project not tied to a big commercial company. Google having almost a monopoly is not good for users, because Google then has a lot of leverage to push browser technologies that mostly benefit them and not necessarily the users. It's important to have an independent browser that is not optimized for one particular company's technology and needs, so we can view the web from a somewhat more neutral position. And yes, I think it's Google's responsibility to adhere to web standards and at least test their stuff in Firefox so they preserve that neutrality. Otherwise they are only providing their websites for the Chromium web, and not the Open Web.


> you have to show how a significant fraction of the consumers are being harmed. You're going to have a tough time with that one.

I'm not a lawyer and can't speak to what qualifies as anti-competitive behavior in a legal sense. Qualitatively, Web Extensions Manifest v3 and Web Environment Integrity are clearly harmful to consumers in my opinion. The first significantly hinders ad blockers, and the second kicks down the ladder on building search engines and hinders competition in that space. Other browsers using Chromium as a base doesn't change the fact that Google almost unilaterally controls it, and Google has made it extraordinarily clear that they're interested in making decisions that prioritize their own best interests over those of their users. I don't see why Chromium being open source would absolve any responsibility here, especially when the open source project in question primarily exists to serve the interests of the profit center of a mega-corp. I deeply support open source software, and I'm glad that Chromium is open source, but being open source doesn't excuse behavior that is against the interests of users whether it qualifies as illegal or not.



I think you're going to have a tough time with Chromium seeing as how the likes of Microsoft and Canonical are contributing to the project. You're also going to have a tough time showing anti-trust when Google is working with Apple. I'm old enough to remember some famous anti-trust lawsuits where the plaintiffs had a much more solid case and still lost. In this case Google is literally working with the industry's largest companies. You're going to have a really hard time with that.


> Would you increase your testing by 50% to thoroughly test for 3% of the market?

I don't think you get to make these kind of cost cutting decisions when you're a vertically integrated mega-corp who also owns the browser with 65% of the market.



In most companies, when 3% represents what is in fact a huge absolute number because you have a very successful product, you absolutely do test for that 3%.

It's tiny companies that may ignore 3% as too expensive to worry about.



Here’s another way to answer that question: do Vimeo, Twitch, Netflix, Amazon Prime, Instagram, TikTok, etc. say “let them use Chrome” or do they manage to do entry-level browser testing? The cost increase is nowhere near 50% and clearly they aren’t willing to write off millions of users – only the company with a direct financial incentive does that.

Yes, Firefox’s market share has been declining but that’s substantially because Google spent billions of dollars marketing Chrome and promoted it heavily on YouTube, Gmail, Search, etc. Deciding not to test or optimize fits neatly into the same pattern.
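For a sense of what "entry-level browser testing" actually costs, a Playwright smoke test covering all three engines is on the order of a dozen lines. The URL and the "video element within 5 seconds" check below are placeholders, not anyone's real test suite:

    // Sketch of a minimal cross-engine smoke test with Playwright.
    const { chromium, firefox, webkit } = require('playwright');

    (async () => {
      for (const engine of [chromium, firefox, webkit]) {
        const browser = await engine.launch();
        const page = await browser.newPage();
        const t0 = Date.now();
        await page.goto('https://example.com/watch?v=placeholder');
        // Fail fast if the player never shows up.
        await page.waitForSelector('video', { timeout: 5000 });
        console.log(engine.name() + ': video ready in ' + (Date.now() - t0) + ' ms');
        await browser.close();
      }
    })();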



_I_ would in their shoes because I'm not just in it for the money and I care about the craft.

But clearly I am not them. :-) Mathematically it doesn't make sense for Google. It might make sense from an anti-trust perspective...



It's hard to argue anti-trust when all these browsers are based on Chromium - which is maintained in part by Google, Microsoft, Opera, Vivaldi, Intel, ARM, and Canonical plus several volunteers.


> hard to argue anti-trust

Makes me wonder if it's the wrong strategy and what an alternative might be. In context, one might assume that Google will use the Chromium monoculture to... ahem more assertively deliver advertisements, which would be "a real dick move" as it goes. I don't know how a concerned citizen might bring attention to or possibly prevent the actualization of such a strategy by Google.



Google is the largest contributor. The others chose Chromium because making a browser that's compatible with all the bloated standards invented by Google would require too much effort.


At a company at the scale of Google or Facebook, yes. 3% x N billion people = a central European country or two.


Isn't that a good deal? 50% more testing, in a way that can surely be parallelized to some extent, does not seem a very steep price at YouTube scale.


This is exactly the same situation that web developers faced with Internet Explorer 5 and 6, and it sucked for end users!


Since they throw "Google recommends Chrome!" adverts in my face on various of their services, even when I'm using a Chrome-based browser, it's not just a case of only testing for Chrome/Safari. It's active work against others.

