(Comments)

Original link: https://news.ycombinator.com/item?id=39423949

Overall, the author seems to prefer smaller, invite-only message boards and online communities, because the heavy presence of SEO content and AI bots has degraded the internet. The author suggests replacing large news aggregators and social media platforms with small communities built around specific niches. The author highlights Reddit as an alternative to Hacker News, indicating that people prefer a traditional forum structure. The author also points out that on Hacker News, the comment section provides more interesting and informative material than the articles themselves. Finally, the author expresses interest in exploring alternatives to highly bottled-up sources on their own website.


Original text
A steep rise of Hacker News in Google rankings (jonathanpagel.com)
336 points by jcmp 1 day ago | 339 comments

Is it possible that google is now turning to sites that have a strong moderation policy, as a vetting strategy against LLM generated posts? (Maybe November 2022 is turning out to be a new kind of Eternal September for the rest of the Web...)


I'd go a bit further and ask what other choice do they have...?

Unless they can make a reasonably effective LLM-generated content detection (sounds tough!) they're faced with a rather inconvenient problem of mining the internet for known human-generated content (while we still agree what 'to know' even means).

Interesting parallels to https://en.wikipedia.org/wiki/Low-background_steel.



It’s certainly hard to detect authorship per se, but I expect very easy to detect content people produce using LLMs. You can target affiliate marketing blogs and other SEO garbage, entirely ignoring whether it’s literally written by human hands.


> It’s certainly hard to detect authorship per se, but I expect very easy to detect content people produce using LLMs.

Yes. For now. Just flag the use of the words "delve" and "deep dive," flag bullet-pointed paragraphs and lists, and flag anything that closes with a too-neat one-paragraph conclusion — and you've just caught 85% of AIspam.

But this is already changing, and in about 6-10 months I expect it'll be impossible to detect at more than 70% confidence whether an article is written by an AI or a human. (You could say: The human writes with soul, with style, whereas the AI is totally bland, irritatingly didactic, and generic. But you can quite easily coax an AI into imitating any literary style, and some of the best AI-generated content is warped in this way.)
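
A minimal sketch of the kind of keyword-and-structure heuristic described above (the phrase list, weights, and threshold are invented for illustration, not a real detector):

    import re

    # Illustrative heuristic only: phrases, weights, and threshold are made up.
    SUSPECT_PHRASES = ["delve", "deep dive", "in conclusion", "it's important to note"]

    def looks_like_ai_spam(text: str, threshold: int = 3) -> bool:
        lowered = text.lower()
        score = sum(lowered.count(p) for p in SUSPECT_PHRASES)
        # Bullet-pointed paragraphs and lists
        score += sum(1 for line in text.splitlines()
                     if line.lstrip().startswith(("-", "*", "•")))
        # A too-neat one-paragraph conclusion at the end
        paragraphs = [p for p in text.split("\n\n") if p.strip()]
        if paragraphs and re.match(r"(in conclusion|overall|to sum up)",
                                   paragraphs[-1].strip(), re.I):
            score += 2
        return score >= threshold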



> Yes. For now. Just flag the use of the words "delve" and "deep dive," flag bullet-pointed paragraphs and lists, and flag anything that closes with a too-neat one-paragraph conclusion

Uhoh, is my copy pasta in danger of being snuffed out before anyone is served it?

Look, I get how some people would consider it spam. However, the problem still remains and the solution has not changed!

I have a life to live and can't spend every waking moment crafting a unique comment every time I need to reply to someone.



This is the Netflix algorithm problem.

They had an algorithm that perfectly recommended content to you. Then they scrapped it and went back to star ratings. Why?

Netflix makes money from monthly subscriptions. They lose money when actually streaming content to users. So the ideal user from their standpoint is one who keeps paying but watches almost nothing.

Thus, the company is incentivized to put out just enough good stuff so you won't cancel, but they don't want you binging.

If Google sends users to pages and advertisers pay them, the incentive to address the content problem isn't there. In fact, LLM content may be another cash cow waiting in the wings.



You're right, they never used the winning algorithm, possibly for reasons you describe.

The prize was a recruitment tool from the start, though. I don't think they ever intended to use any result.



Also the Reddit exodus, after their API changes. Don't underestimate people's inclination toward good open source (or 3rd-party clients).


If your revenue stream depends on your client injecting ads into your users' feeds, and on their inability to remove those ads from said feeds unless they pay, you balk at 3rd-party clients.

The same is also true if you're implementing half of your moderation at the client level.



I'd like this definition of "Eternal AI September": all the websites that existed before November 2022 and kept posting the same kind of content will be considered safe. Everything else will be vetted to prevent LLM-generated content.


Similarly HN is already an important data source for large language model training.

For example, the RefinedWeb paper lists HN as one of only 12 websites that were excluded. From what I understand, it was excluded because it went into the final dataset unvetted. RefinedWeb was used for the Falcon model.

https://arxiv.org/pdf/2306.01116.pdf



I bet a lot of the text content you see online is LLM-generated, and I'm not sure how exempt HN is from this. Probably more than most, but in 2024 it is trivial to generate something with an LLM that could be assumed to be written by a human and post it. Most of the internet pre-November 2022 isn't AI generated (with the exception of one-off things like Seychelles Anon, SubredditSimGPT2, etc.).


Not for nothing, but I was able to raise HN in my Kagi search rankings (and demote others) because I wanted to do it, not because a Googler on the Search Team made a slide deck about HN. https://help.kagi.com/kagi/features/website-info-personalize...

Best $10 I spend every month.



I started using this website after downloading Brave, when I typed "news" into the search bar and it automatically directed me here. I was instantly fascinated by the diversity of articles and high-quality discourse, and so I've stayed for way longer than I thought. But, yes, unfortunately, as with Reddit and Xitter, one day HN will suffer the fate of popular social media, probably not too long from now. Social control is the name of the game, and you can't win without monopolizing. I only hope Dang can hold down the fort for as long as possible; HN will be the Masada of the internet. (Well, I suppose there is still 4chan, but that's not going anywhere. 4chan will probably be around long after the death of every other social media website.)


If you're new you may not know this but the meme that HN is turning into Reddit is at least 17 years old, and has been the last entry in the Hacker News Guidelines since forever: https://news.ycombinator.com/newsguidelines.html


Unfortunately, the statistics bear out differently this time.


A simple way to stop the enshittification would be to keep the site from becoming too popular in the first place, and one way to do that is to throttle it so it can only handle x requests per minute.
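
A rough sketch of that throttling idea, assuming a fixed per-minute budget (the numbers and structure are illustrative, not anything HN actually does):

    import time

    # Illustrative fixed-window rate limiter: serve at most LIMIT requests per minute.
    LIMIT = 600
    _window_start = time.monotonic()
    _served = 0

    def allow_request() -> bool:
        global _window_start, _served
        now = time.monotonic()
        if now - _window_start >= 60:
            _window_start, _served = now, 0
        if _served < LIMIT:
            _served += 1
            return True
        return False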


Anyone know what dang's plans are for post-retirement moderation of Hacker News?


so the theory is that now there will be a lot of bots taking over HN and posting stuff that they want to rank up, destroying the value of the site? What's the defense there?


Bit afraid about that. Honestly, it is surprising how spam-free HN is, so maybe they have some good anti-spam system in place. Also I think it would make sense to set all outside links to "nofollow" so Google ignores them and the site becomes less interesting for SEO.


There's definitely a bit of influence / perception manipulation on HN. A few years back I heard a story that a tech company would monitor HN for certain keywords, and if their product or category was ever brought up or mentioned, multiple developers would always show up to engage on the topic. This isn't quite spamming or cheating the system, but it's a very effective tactic for shifting public perception.


You might be referring to GitLab: https://handbook.gitlab.com/handbook/marketing/developer-rel...

(hi to those from GitLab watching #hn-mentions on Slack)



> You might be referring to GitLab: https://handbook.gitlab.com/handbook/marketing/developer-rel...

That's really interesting that they'd disclose their strategy so publicly. On the one hand, I can't see it doing much harm. On the other hand, wouldn't it be somewhat self-defeating or even embarrassing to confess that you're doing that? One could probably reasonably assume that many companies have a social media strategy like that, but to have people know that's the strategy would probably drain away at least some of the goodwill provided by the posts.



I would think it is good if done in a good way and bad if done in a bad way.

Bad way: posting stuff to make your company's profile go up all the time (if you think you have something that might be of interest to HN, sure, but not everything you do should get posted), and swooping in when negative stuff comes up to defend the company and drag down those saying negative things about it, especially without disclosure but even with disclosure.

Good way: something technical about the company comes up, and the developers who worked on it come in and clarify the technical aspects for people. Or somebody has a problem with your product and you come in, ask for clarification, and help solve the problem.



Doesn't GitLab make all of their internal documentation public?


:waves:


It's a curious division between "old internet" and "new internet" (those on the *chans may have parallel but slightly different terminology for this dichotomy) to see people use BBS-style vs IRC-style emotive expressions.


You could also just hire people who spend all their time on this site.


That's already what everyone here does, mostly, it's just off the books.


I'm open to such offers.


Don't all FAANGs and fancy startups already do this ?


This is how some companies counter negative glassdoor reviews. Bury them with 10x positive reviews.


HR at my last place was very aggressive in reminding us to leave 5-star reviews on Glassdoor. They once even offered Starbucks gift cards to the first 5 of the month. I was very sure to leave an honest, dirty-laundry review after I left, including a mention of this practice.


Wouldn't what they were doing be against the ToS of places like Glassdoor?


Probably, but does anyone think Glassdoor is going to chase after potential revenue streams for violating their ToS?


Pinecone txtai …

Someone should build an AI agent that keeps a list.



OTOH that’s kinda the utopian version of open source - a public forum where you can engage with stakeholders on demand, as long as others find your critique/question interesting enough to upvote.

God I love hacker news… their insanely outdated moderation tools are a shame, but I can’t lie, holding a big stick makes for a peaceful forum.



Once you get enough karma (I can’t remember what the number is) you’ll be able to see dead submissions. If you look at new you’ll see that there are spam posts coming in every couple of minutes or so. They’re just very effectively detected, as you suspected.


It's not a karma gate, it's gated behind the showdead setting in your profile, which everyone has access to.


Ah — I stand corrected.


The karma-gated part is being able to vouch dead comments


I hope the links are getting set to nofollow so there really isn't any value anymore for the spammer. (Checked: they are nofollow links.)


People are starting to copy/paste from ChatGPT / equivalent. It's not spam, but it just says nothing and dilutes the conversation.


You don't recognize the spam because the bad ones are removed. Good marketing targets its audience and influences unaffiliated people into repeating its talking points.


It gets its fair share of spam but the platform is very effective at moderation. Show dead and scroll new for examples.


That would also eliminate any positive effect that HN has on Google search results.


There's a lot more undercover product pumping than you would think.


I wonder if nofollow actually does anything nowadays. I feel like the rules that used to be enforced have been replaced with hacky 'AI' hacked together by thousands of SWEs trying to improve some metric for their promotion packets.


The "Google way" would be to use "ugc", which stands for user-generated content, instead of "nofollow".[1] In the same document they state, "We'll generally treat them as we did with nofollow before and not consider them for ranking purposes". However, they have often lied about their ranking signals ;) [1] https://developers.google.com/search/blog/2019/09/evolving-n...
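
For illustration, the whole difference is the rel attribute a site puts on user-submitted links; a hedged sketch of how a forum might render them (the helper name is made up):

    from html import escape

    def render_user_link(url: str, text: str) -> str:
        # "ugc" marks the link as user-generated content; "nofollow" is the older hint.
        # Per the Google post linked above, both are treated as hints for ranking purposes.
        return '<a href="{}" rel="ugc nofollow">{}</a>'.format(
            escape(url, quote=True), escape(text))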


Did Google really say that? It makes sense to treat it as a different category of link. An organic backlink from a blog or website puts the weight of the website's name/reputation behind it, whilst UGC has users putting their name/reputation on it (as it relates to their relationship with the originating website). The latter is not worthless, as the website ultimately allows it, but it's not as strong a signal as an organic backlink.


Cut off the infected limb and block the site to Google entirely. Or split HN in an active and archived branch with an upvote threshold, the latter is made accessible by Google.


There is already an excellent search site for HN [0], so if appearing in Google turns out to cause problems, for many users nothing much will change. (I already use that one over Google with "site:.." in the query; it's just a good search experience.)

[0] https://hn.algolia.com/



Won’t that make everyone with a grandfathered account an AI Boomer? The value of selling HN alts just went way up!


I tried this several years ago on eBay, selling an HN account with downvote privileges. It did not sell or get any bids at all.


Over the past few weeks I've seen more crypto scam posts than ever. While they quickly become dead, I'm not sure if they are coming in at a faster rate than in the past or not.


I've seen that too. Usually these things come in waves and once the spammers figure out it's not working, they go away. Then new spammers show up.


HN can always change its robots.txt and prevent Google from crawling. Many of my copyleft friends do that to their blogs and sites to prevent Google/Bing from indexing them.
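
A minimal robots.txt along those lines might look like this (illustrative; it blocks only Google's crawler and leaves everyone else alone):

    User-agent: Googlebot
    Disallow: /

    User-agent: *
    Disallow: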


This is really counterproductive towards furthering the cause.


It's already a thing. With showdead, you can see the corpses of tens of marketing posts at the bottom of a submission with a high score.


Wouldn't the defense be to delist and starve/block the google crawlers? Gaming google rankings is a waste of time if it's not there.


The new way to build communities: SE-anti-O. Trying to rank as low as possible. Things will flip and sites will try to be uncrawlable to Google. I guess it's already happened with Discord.


You just gave me an idea for a fun project (like I need more projects!). I use OpenResty for my webserver, which has Lua already built in. Might be fun to see if I can hog up a crawler instance indefinitely by generating fake pages and links that just lead to new fake pages and links, taking the crawler in circles it thinks are unique.
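
The commenter's stack is OpenResty/Lua; purely to sketch the loop itself, here is a rough illustration in Python (the /trap/ path and page shape are invented):

    import hashlib

    # Illustrative crawler trap: every generated page links to a few more pages whose
    # URLs are derived from the current path, so the crawl never terminates and every
    # URL looks unique. Wire this up behind a catch-all route in whatever server you use.
    def fake_page(path: str, links_per_page: int = 3) -> str:
        links = []
        for i in range(links_per_page):
            token = hashlib.sha1(f"{path}/{i}".encode()).hexdigest()[:12]
            links.append(f'<a href="/trap/{token}">more</a>')
        return ("<html><body><p>Nothing to see here.</p>"
                + "".join(links) + "</body></html>")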


I really dig what HN is doing with karma, and stuff.

I really like what lobsters did with invitation tree https://lobste.rs/about#invitations



Prohibition of new user registrations.


It doesn't have to be prohibition, which over time would harm HN badly, just a slower, more delayed approach to earning full access. Or show the age and karma of commenting users next to their nick; some scam patterns would become obvious.


worldcoin integration


Invite only.

A site is spam-proof if it makes it hard enough to get an invitation. The relatively few spammers who slip through can be warned and/or banned.

Unfortunately, we may not be invited.



This is what Lobsters does, and it's pretty effective (although the site has other, much worse problems that outweigh that one feature).


That aligns well with their line of thought though, which is that there is a specific political view which is desired. They're basically building an echo chamber, and it will starve in the long run because it will become so specific that the chance of a new user fitting into that community on day one will become lower and lower.

Being open is hard, very hard, but it's the only long term strategy in my eyes. The rest is an illusion.



What are those problems?


What sort of problems would you say they have?


Votes are (to a greater extent than HN) used to indicate disagreement rather than low quality (which leads to more low quality comments); political content is allowed and encouraged (but only that which aligns with the beliefs of the moderators); many users have been banned without any transparency as to why; significant groupthink about technical topics (separate from the political content).


> A site is spam-proof if it makes it hard enough to get an invitation.

Yeah, but then it's also invisible to search engine users, in which case no one will want to spam the site for SEO anyway.



Heh, that doesn’t stop spammers. It only takes one spammer getting invited before they start inviting their spammer “friends”. Before you know it, you have thousands of dormant accounts just waiting to spam you. It makes it harder, sure, but that’s not an effective solution by itself. In fact, it leads to a false sense of security.


Do you have evidence for this claim?

As evidence against: Lobsters, which uses an invite tree, has virtually no spam to speak of.



There is nothing I can share because it is proprietary information and would assist in informing spammers on how spam detection works on the platform I used to work on.

That being said, if there are lots of dormant accounts on lobsters, they are likely spam bots just waiting to be activated once the rules are deeply understood and there is something to gain, or be waiting until there are more than the mods could actively fight against because they lack the tools to do so (see: false sense of security).



Given your refusal to provide any evidence or counterexamples, the cop-out that "there are probably actually lots of spam accounts of Lobsters that haven't activated yet", and the extremely naive nature of your comment that ignores simple modifications that can be made to an invite system to nullify all of the flaws you claim it has, I don't think that you actually have any evidence whatsoever.

It's pretty clear that invite trees provide actual value and the opposite of a "false sense of security".



I mean, I don't work on Lobsters, and I have no desire to be a member there. So why would I have any evidence to support my claim about a platform I don't give two shits about?

I'm simply telling you how spammers work on OTHER platforms, quite successfully. Whether lobsters is immune to that or not, I don't know, nor do I care to know. If you think they are immune, then great (see: false sense of security).



If you discover a spammer, ban their entire invite tree and probably the account who invited them as well.


I wish I could say more about how spammers work ... but this won't work like you think it will. You are assuming all the spammers are distributed among a single branch in the tree, and you'd only realize it wasn't working when you ban someone popular who obviously isn't a spammer and has the clout to do something about it.


I'm pretty sure they mean ban the invite tree below the branch of the spammer, not the entire tree that spammer is on. The chances of a false positive are much lower in the former case than in the latter.

Users who actually believe in the site, understand how the mechanism works, and appreciate the high SNR they've enjoyed will be understanding for the most part, even if they end up on the wrong side of one of these bans and have to go through an appeal system to rejoin the site on some other trust tree that is much more closely monitored by its members than the one they were on before, ensuring that future bans will be highly unlikely.



Out of curiosity, how is a scammer that has hundreds of thousands of actual real people under it handled? Are all of those hundreds of thousands of users banned as well? Not even talking about scammers: if someone near the root of the tree has a bad day and decides to take it out on the community (or simply gets hacked), it seems you could lose the entire community.


> Are all of those hundreds of thousands of users banned as well?

Depends on whether you can trust them. If someone at the top of the invite tree gets hacked or starts shitposting, it doesn't really imply anything about the people they invited, they're all probably innocent users. If they turn out to be a spammer though? Then you can't trust anyone they invited. Any false positive should be handled on a case-by-case basis or reinvited by someone else who is trusted and is willing to vouch for them.



So, what if the root node of the tree starts spamming everyone?


If it's that simple a cut-and-dried situation, then you just ban them. Invite trees aren't Merkle trees; you can certainly modify any node arbitrarily without invalidating subordinate relationships.


So, you're saying the root nodes have preferential treatment over leaf nodes... interesting. Yeah, def not interested in joining a nepotic org like that. I had other reasons for not joining, but ... interesting.

Thanks for answering my questions though. It was insightful.



It's not really "nepotic". The goal was to reduce the number of accounts you need to look at to some manageable number. Ideally you discover some root spammer account and ban the entire tree under the assumption they're all spam accounts. Then you ban the guy who was dumb enough to invite a spammer into the community since they clearly cannot be trusted. If you can't or won't ban them, you take away their invitation privileges.

If the problematic account is too high up in the tree, then they probably have way too many descendants which increases the false positive rate which means you can't conclude that they're bad based on the fact they're invitees of a bad account. Essentially, the spammers diluted themselves inside a huge number of maybe good users so that you can't blanket ban them without collateral damage. In that case you're better off banning the bad account individually and looking for new bad roots further down the tree.



A rationalization of nepotism doesn't make it not nepotism.

Let's look at the definition:

> the practice among those with power or influence of favouring relatives or friends, especially by giving them jobs

Basically, if you are friends with the "right people" (aka, people closer to the root), no matter what they do, you are fine. You aren't getting banned for one of their actions. That is people in power (closer to the root) favoring friends (the people they invited), where "power" is the ability to behave with near impunity without severe repercussions.



I don't think you understand how invitation trees work, or how Merkle trees work for that matter, if that's what you drew from my comment, which essentially said nearly the opposite. The graph is the shape of a tree due to the pattern of invitations fanning out, but there is no such thing as a branch that has priority over others in a structural or modeling sense.

The other commenter's remark, that you ban the entire space below the spammer, is a subjective judgment call they would make if they're the one paying to operate the servers and not wishing to use their time and energy to subsidize someone's desire to abuse the internet. Acting like this is some sort of petty fascist power move is not arguing in good faith; it is actually an attempt to escape an argument that you are failing to understand, by changing the subject from how to deal with a certain kind of common social network pathology to a completely different name-calling argument about someone's imaginary nepotistic motivations. It's probably best that you do excuse yourself from such discussions, even if you feel you need to disguise your exit with a juvenile parting shot.

Anyway, you already advocated for, as you say, a "nepotic" system, by presuming to kick out people (spammers) whose behavior you feel disrupts the environment you're trying to curate for the benefit of people who are there to support it. There's absolutely no pretending you're above the enforcement of value-system boundaries; it's only a question of how focused you want them to be, and what stable values you want to optimize for in order to create a consistently attractive experience for your users.

I use this site far more than reddit, in large part due to the fact that it does encourage its users to identify and punish behavior (through flagging, disliking) that is out of step with the objectives of the site, as outlined in the usage guidelines which everyone implicitly agrees to (try to) follow when posting here. And yet in the public scope reddit is the clear 'winner' if raw traffic metrics are the score.

After being on the internet for 20 something years and Usenet for 10 before that, I've had quite enough experience with free for all dumpster fire communities to know that they are not the bastions of high-minded openness that people naively or foolishly believe them to be.



> The other commenter's remark, that you ban the entire space below the spammer, is a subjective judgment call they would make if they're the one paying to operate the servers and not wishing to use their time and energy to subsidize someone's desire to abuse the internet.

How are people lying on the internet and me having a discussion about it somehow me acting in bad faith? I've made it pretty clear in this thread that I don't care for lobsters, but I haven't made any arguments, at all, in bad faith. If anything, I'm trying to help the commenters by informing them of how spam networks work. So far, I've only been told that I'm wrong. So, whatever.

And yes, treating the nodes closer to the root with "favoritism" by not banning all their children and then banning "further away" nodes by banning their children (and parent?!) is almost a textbook definition of nepotism if nepotism extended to trees.

Being suspicious of dormant accounts, is not nepotism.



> informing them of how spam networks work

You haven't informed us of much, to be honest. Whenever you're pressed for real bits of information, you just say you can't reveal the facts because they're secret and revealing them would help the spammers. So you're not doing much to reduce the uncertainty surrounding this topic, which is critical to quality information. You're just generally claiming that the methods we're describing won't work while providing nothing but appeals to authority.

The information you've provided is the fact spammers try to obtain invitations and create aged accounts in order to look legitimate. This is already known and is the reason why you turn accounts and invitations into scarce resources to begin with. Spammers should not be able to just randomly create accounts, they should need to be invited. Whoever invites someone is vouching for them and responsible for their behavior. People should be inviting good people who they trust, not random spammers on the internet. You enforce this by also punishing the guys who invited bad people into the community.

I'm not claiming you're wrong either. I'm defending the rationale for the methods I've seen actual administrators employ to manage real communities. Those guys weren't just up against spammers, either; they faced far more serious adversaries.



All that means that I can’t spell it out for you, I don’t know how you can be here and not understand contractual obligations. You have to come to the conclusions yourself. All I can kinda do is act as an oracle and say “yeah that might work” or “that sounds like a false sense of security.” Everyone here thus far has had zero curiosity or willingness to debate things on actual merits despite me dropping a few big hints and would rather either attack me personally for being willing to debate something I can’t go into details about (but could probably talk what-ifs), or tell me it is impossible because of a magic tree and I must not know what I’m talking about.

So sure, this thread isn’t very informative, but that’s not entirely my fault. I’ve said as much as I’m free to say, the rest relies on other creative people who can read between the lines and ask the right questions.

At the end of the day, I’m sure some of the people that have joined the lobsters fraternity, er, site, have at some point worked at trust+safety in big corps and can actually help if asked. So I suspect they would actually be fine. I was just hoping to have an interesting discussion about it here and help people understand that a magic tree won’t help you in the long term.



Trees are rooted somewhere. The problem is reduced to finding the root account which invited all the spammers and banning that account and its entire invite tree and the guy who invited that root account for good measure. This reduces the number of accounts that must be evaluated, making it much more manageable. Any false positives can be dealt with on a case-by-case basis.

Make accounts and invitations a limited resource. People will think twice before inviting someone who can get them banned. This reduces the problem to one of trust which is the foundation of real security.
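
A minimal sketch of that subtree ban, assuming the site keeps an invitee-to-inviter mapping (all names here are illustrative):

    from collections import deque

    def ban_subtree(invited_by: dict, root_spammer: str) -> set:
        """Ban the spammer, everyone they (transitively) invited, and their inviter.

        invited_by maps each account to the account that invited it.
        """
        children = {}
        for invitee, inviter in invited_by.items():
            children.setdefault(inviter, []).append(invitee)

        banned, queue = set(), deque([root_spammer])
        while queue:
            account = queue.popleft()
            banned.add(account)
            queue.extend(children.get(account, []))

        # "...and the guy who invited that root account for good measure"
        if invited_by.get(root_spammer) is not None:
            banned.add(invited_by[root_spammer])
        return banned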



What's worrying here also is Google's willingness to use a random comment as the source for its snippets.

I understand that the average HN commenter puts more effort into their comments, and their veracity, than the average internet user elsewhere, but still.

(For me, the top "result" for Monaco in Italian is the Google Translator widget, with Collins Dictionary, Wikipedia and Quora as the top 3 links. HN is 4 or 5 links below.)

Oh, and yes, let's hope spammers don't overwhelm dang!



I often find myself searching on HN, then Reddit (via google) and then good old sad google, in that order.

When Google arrived, it had the solution people were desperately waiting for. It was pretty much everything we wanted. And it even "wasn't evil!"

Now it feels like people at Google are just making sure they'll qualify for their annual bonus with total disregard to what happens to the company.

I forecast that in the next 1-3 years, we'll see another company steal the search market, just like Google did back in the day. But don't worry about Google; they'll "just be a search company".



>I forecast that in the next 1-3 years, we'll see another company steal the search market, just like Google it back in the day.

Really? Do you think this company exists today?



* huge asterisk since I pay to use it and feel the need to disclose that upfront.

Kagi is a very strong and viable candidate IMHO. The search is actually useful and I’ve noticed on a few occasions that my search was “too focused” and yielded “no results found” and I had to rethink my query to better find what I was looking for. Google on the other hand would have spared no opportunity to spam my results with ads even if it couldn’t find what I was looking for. Anything to put more ads in my face.



We'd know about them if they existed as a company today.

Remember that what became Google was Larry's and Sergey's PhD project that no one wanted to buy.



I would guess the new chatbots have already stolen a bit of the market.


Dear Google, we were only adding ‘site:reddit.com’ or ‘site:news.ycombinator.com’ to search queries so we could get to opinions that weren’t being manipulated by SEO fiddlers. What’s our alternative now?


Spammers have been heavily targeting reddit for years and site:reddit.com is still useful.

Why Google has not bought Reddit, I don't know (beyond moderation issues, but AMZN made it work with Twitch).



> Why Google has not bought Reddit, I don't know…

My guess is that too many people at Google are on Reddit and didn’t want to see it go to the Google graveyard the day after acquisition.



I don't think the Google graveyard is a concern, because it's obviously a valuable, profitable product they could sell, but if you haven't been horrified by the ongoing enshittification of Reddit, you haven't been paying attention.

I was a moderator of some very large subreddits, and due to Reddit's pigeonholing me into the app vs. the new mobile layout, I'm leaving those moderator positions (note: I am not complaining about the API issues). I don't want to participate in a community that is catering to the lowest common denominator such that the term "reddiquette" is a joke.

I've thought long and hard about it, and I think companies are intentionally creating Eternal Septembers in their products, because it's just easier to just put big pictures on the homepage to get clicks, when that type of UX only invites the type of people who see the site as something only to consume and not to contribute to.

I've been invited to multiple "moderator feedback" focus groups, that were worse than awful. After they defaulted an "annoying look here" icon in the right corner to try and get us to work more, I said "fuck this, I'm out."

My point here isn't just to bitch and moan, it's to point out that site:reddit.com only works because the community is one that actively values contribution over consumption... that's going away, and the usefulness of site:reddit.com will go away as that culture changes.



Reddit was enshittified circa 2014 or 2015; this is not a new thing. It's been garbage for a long time and I'm surprised it's taken so many this long to notice. In fairness, if you kept to subreddits that were eminently unpopular and off the beaten path, then it wasn't as obvious.

I guess the mobile app was what really broke the camel's back, but the quality of the posts had been on a severe downward trend since at least Obama's second term, when I think both political parties recognized it as important and began to manipulate it. This is made easier by the partitioning of the site into subreddits. I hopefully don't have to explain here why that makes automated sockpuppeting much more effective and easier to accomplish. It's a fundamental design flaw (if we were to assume the design of Reddit was intended at all to provide a space for authentic personal takes on real issues and by real humans).

There is a danger of the same thing happening to HackerNews but I hope the lack of financial incentives to allow that sort of thing does some work to mitigate it, along with the lack of partitioning of the community.



It’s still a lot less enshittified than Google or most of the web. You can find actual humans giving actual advice for a lot of categories where Google just gives you (likely AI generated) SEO garbage.

You can easily ignore the vitriol and fake news on Reddit; you cannot get around the commercial detritus anywhere else anymore.



I've been on Reddit for like 14 years now and politics have never affected me. At all. I stick to subs related to my interests like mechanical keyboards, retro computers, engineering, architecture... and hardly ever I see political stuff. But I don't remember when was the last time I browsed "All" or "Popular", or kept subbed to the large or "default" subs, which, I think, is where you'll find more political stuff. What I mean is that the best thing of Reddit is that you can -still- make of it whatever you want.


>This is made easier by the partitioning of the site into subreddits. ... It's a fundamental design flaw (if we were to assume the design of Reddit was intended at all to provide a space for authentic personal takes on real issues and by real humans).

Huh? I don't follow here: having multiple subreddits is exactly what makes the site usable for so many utterly different niche topics. There's probably a subreddit for repairing 1967 Camaros; do you really want to see posts like that every day in your news feed? I don't. Reddit isn't meant to just focus on tech topics like this site; it's meant to be a site with discussion forums for every topic imaginable, and there's no practical way to do that without subreddits.



People have been claiming that Reddit has been enshittified (not with that exact term) since it was first created. People were already longing for the good old days when I first began using Reddit in ~2010.

Any attempt at pinpointing the enshittification is bound to be extremely subjective. What is clear is that it has been a continuous decline for a long time.



Ha that’s not how corporate m&a works.


How many times can it jump the shark?


Do you believe that Google would have been a better custodian of Reddit than say Deja News ( https://www.wired.com/2001/02/google-buys-deja-archive/ ) aka Google Groups ( https://www.pcmag.com/news/end-of-an-era-google-groups-to-dr... )?


Google acquisitions suck because the thing gets left to rot. The last thing Reddit needed was all of its recent changes towards crypto and engagement-bait nonsense. I think they'd have it in better shape than it is now.


reddit has done a great job of letting itself rot -- for example, the moderation system tends to result in a hostile experience for users who attempt to participate.

On the other hand, if google owned it, getting banned from a subreddit would possibly mean getting locked out of all of your google accounts.



Maybe the internet needs a refresh, away from walled gardens and one-sided impositions without any accountability. Some classes of services need to be protected to the same level as access to basic utilities such as roads or power…


It depends on the subreddit, but as a long-time Reddit user I find that moderation has changed and you can be banned for posting on-topic posts.


I’ve been banned on two separate accounts for posting something the mod of the subreddit didn’t like. When I found out there was no appeal I kinda gave up on Reddit.


I got shadowbanned a TON till I realized you could be shadowbanned for leaving too many (on-topic) links to other websites or subreddits in the comments.

Sorry I am good at leaving sources to back up what I say, I guess?



The moderation system is pretty much the same as any forum. You just have to read the sidebar rules first when you're posting on an unfamiliar subreddit, as you would when joining any community.


But you can be banned from sub A if you post in sub B because mods from sub A don't like sub B, even if what you posted was something that mods from sub A would like... and AFAIK you won't find out you were banned from sub A until you try to post on sub A. Not that it happened to me, but I've seen plenty of cases.


On most forums there's no automated system that automatically shadowbans users for using a blacklisted IP.

They also don't quietly remove comments in a way that is invisible to a user for triggering some keyword in AutoModerator or a spam filter. And there are usually no minimum karma or account age requirements for posters.



> On most forums there's no automated system that automatically shadowbans users for using a blacklisted IP.

Any large site does this. Even HN.



Some of the moderators ban anyone who makes comments that they do not like, even if the comments are reasonable, polite, and within the rules.

Moderation of that type usually seems to be secret, but the problematic moderators that I have noticed seem to be trying to protect some political belief or pet disinformation from discussion. It poisons the entire site for me.



Getting OT, but what is the deal with all the completely different moderation guidelines (that amount to like 20-30 weird rules and exceptions) for every subreddit? I find it almost impossible to participate (except just adding a comment here and there).

For example, I wanted to post a funny Risitas youtube vid I made (you know the Spanish comedian with that laugh...), and couldn't find a single usable "funny" subreddit. Some banned youtube content completely. Some banned "video memes", some banned X and some banned Y... all of them had slightly different guidelines and you immediately got an insta-splurge from a bot-mod if you tried posting. Some required you to prefix every post subject with some code word. I had to give up eventually and post it on some super-small subreddit instead that accepted anything.



> what is the deal with all the completely different moderation guidelines ... for every subreddit

My sense is that the bigger the userbase, the more it attracts junk, spam, abuse, etc. So, the rules get tightened to combat it. Also, my impression is that the moderation tools are not great, so crude/heavy-handed methods are sometimes all that is available.

I think you found the corollary already: smaller subreddits have fewer rules and/or less strict enforcement.

I'm not sure of a better solution, given the situation. Though I agree it can be discouraging for well-meaning occasional contributors.



The other problem is the moderators themselves. Each subreddit has its own volunteer moderators. It's a thankless job, so who volunteers to do it? Frequently people who shouldn't have that power. So many subs have terrible mods who abuse their power.


It's like a little microcosm of politics (:

"... It is a well-known fact that those people who most want to rule people are, ipso facto, those least suited to do it. To summarize the summary: anyone who is capable of getting themselves made President should on no account be allowed to do the job."

— Douglas Adams



> (except just adding a comment here and there).

Your comment has been auto removed for not linking your reddit account with a email.



> you know the mexican comedian with that laugh...

He was Spanish, as in Spaniard, from Spain, Europe.



Ah my mistake, unfortunately I can't correct my comment now


I've taken the liberty of s/Mexican/Spanish/ing your comment. I hope that's what you wanted!


I've become really skeptical of Reddit comments around products for this reason. Searching "best X site:reddit.com" and going off the top comment recommendation seems really sketchy when that top comment is only 5-10 points.

Maybe I'm just really paranoid these days, but I would bet looking at searches with Reddit in them and creating threads or commenting on old ones and paying for up votes is probably lucrative.



Reddit has been astroturfed for a good 10yrs at this point.

EDIT: https://old.reddit.com/r/HailCorporate/



You should be skeptical. Every marketing professional knows and uses Reddit. Sentiment is casually discussed in meetings etc.


User reviews are useless without some sort of vetting process. As far as I know there isn’t a platform available today that provides this.


I don't think Reddit has ever been a clean acquisition - either because they've raised at high valuations many times or because of an undesirable content/moderation problem.


why would they buy a cesspool of bots and memes? google is for finding content, not hosting content (unless you want it shut down)


> google is for finding content

where does this notion come from? google is for them to find out what you think you want so they know what ads to serve you. if it was for finding content, they would show you the results that were actually related to your query.



Google hasn’t bought Reddit because it’s literally only downsides. Bad PR, low profit margin, etc…

I mean does anyone actually like Reddit anymore? The front page is almost entirely bots reposting and reusing the same comments that have been used for years.



Using reddit with the default front page is like getting a podcast app and only listening to the top 5 most subscribed podcasts.


> I mean does anyone actually like Reddit anymore?

I do like the good parts, deeply hidden, that can often be surfaced with a Google search. Case in point, I had a bug with some installed software yesterday, to which the only viable solution I found was in a Reddit post from a few months back.

But the experience of actively browsing the "leading edge" of the site? Absolutely not. I purposely-deprecated my credentials a year or so back and haven't regretted it.



You’re living in a bubble because Reddit is probably among the top 50 most visited sites in the world. You may not like the experience, hate the spam, hate what it stands for, but a lot of people still visit it and use it daily


You can use something you dislike, that wasn't my point.


> I mean does anyone actually like Reddit anymore?

Yes, but if there was a better alternative I would switch in an instant … but those network effects.



Come to Lemmy and follow Reddit content through one of the mirror instances.


> but AMZN made it work with twitch).

I don't know that the Twitch acquisition was smart. What was the strategic reason to acquire?

Amazon announced cutting 35% of Twitch staff (500) in January to stem losses after two rounds of layoffs last year.



This scares me. What happens when HN becomes the next spammer target? Where do we all migrate?


lobste.rs maybe.


Looks like you need to know someone to get in there, though. I'm not sure I do.


Nothing would ruin reddit faster than being a google "bet"


I need an `age:1year` operator or something; almost every time I search for something tech-related, I generally don't want some StackOverflow answer from 2014.

And don't get me started on the changes they made if you search for a product or something buyable. It takes like 4 clicks to get to the shop page, and none of the open-in-tab methods work.



   before/after:
Works on Google and Youtube
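
For example, a query limited to recent results might look like this (date format as assumed from Google's documented `before:`/`after:` operators):

    rust borrow checker site:news.ycombinator.com after:2023-02-01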


Thanks, but is that when it was first crawled?


It’s based on the published/updated date that the page provides.


From what I've noticed, the date is also inferred from the page.


I've started using smaller (ish) YouTubers instead of reddit when it comes to finding best in class products. For example, this channel is phenomenal: https://www.youtube.com/channel/UC2rzsm1Qi6N1X-wuOg_p0Ng


Project Farm is a treasure.


I have posted this criticism before, but I encountered Todd's videos and was quite impressed by them. I then watched his review of water purifiers, having done my own research beforehand, and while he does end up recommending a decent purifier, there is so much more to purification than just what he measures (TDS).

In the end his omission of other factors has led to a conclusion that isn't entirely accurate. (That Zerowater is the best because it filters TDS to 0.)

What about bacteria? Other filters that fared poorly in the TDS test focus more on that, and then there are a few major brands that weren't even featured.

What about clarity of water? He did touch upon this very briefly but did not go further into it. You would think that a TDS of 0 indicates the water is perfectly clean. This is wrong, as TDS cannot measure everything that may be in the water.

There is more to water purification but these are just two examples.

All in all, I have been impressed by Todd's reviews but after watching this review where I actually had done some research into the topic before watching, I came away doubting all his other videos.

What did I miss just because I am not a subject expert in the topic?

I guess at the very least his videos likely eliminate the very worst products but I bet you people are buying whatever products he recommends without thinking about them and may be getting burned or not getting the best product for them.



I have noticed the same issue with Todd's approach. He does a good job of establishing an experimental metric that allows apples-to-apples comparison, but he reviews so many categories of product that there is no way he could be capturing the whole story on all of them. There are a lot of products that won't do well in any "who scores the mostest" contest but strike a good balance of qualities.


Since you already did your research, what is your water purifier recommendation?


Depends on what you want to optimize for.

These are different suggestions based on assumptions about your situation. Assuming you have access to fairly decent tap water (Europe and the US), I'd follow Todd's recommendation of Zerowater, as that's what I daily drive. I used to rely on Brita but it does not do much... just slightly improves taste and removes the worst heavy metals.

Running tests on my water, there are still traces of some impurities like bacteria and some metals after Zerowater filtering.

I am also about to start testing the Aquatru system that he also recommends.

For Zerowater the filters are quite expensive ($15 per filter) and I have been peeved at how quickly they get used up (I have around 200 PPM in my tap water and they last 75-90 days) and still don't filter everything out fully... but subjectively I can't get over how much I like the taste, and since I'm not on the west coast I'm in the luckier group where Zerowater is good enough. Maybe Todd was also in this situation?

My family does not like it but I love it. It tastes like "flat" water and becomes filled with bubbles if left to sit out for a short time, but it's worth it because it "feels" so clean.

Sorry if this observation is unscientific but once you have physically removed impurities, you still will have some semblance of subjectivity. I try to target distilled water taste as a point of reference and Zerowater comes close.

I have worked on a system where I first filter using a Brita filter and then run the water through the Zerowater to help improve the results, but the problem with this is that my water is not bad enough for the Brita's simple activated charcoal to really help, so in my case it has actually ended up giving mixed results.

If you lived on the west coast, where the water is typically around 400 PPM, then you'd extend the life of the Zerowater filter quite a bit by doing this trick, but for me, well, I have to try something else.

But at that point maybe the Aquatru is better, which is why I am trying it. For me, the Aquatru is just for curious comparison, as this whole journey is reaching nutjob levels for me at this point and I'm not Elon Musk levels of rich (these water tests are not cheap)... the Zerowater is good enough for my usage because I live in a suburb away from any industrial sites or poorly managed municipalities (no major pathogens, just correcting the taste and eliminating any traces of metals and dirt).

In reality, Brita is probably fine but I want that taste of flat water now that I have gotten a craving for it. Every time I drink something else like Brita or bottled water it just tastes weird.

In reality, others have told me that if one is going to spend the dough on an Aquatru, you might as well get an under-sink reverse osmosis system installed. It takes up less room and is cheaper. I got a good deal on an open-box unit so I decided to go that route. Sorry I don't have results yet.

If you are concerned about pathogens that could make you sick, then it becomes tricky. I have traveled to Pakistan and lived in places there where the tap water makes you sick. I have relied on Grayl, and based on testing my water and sending it out, it seems to filter pathogens, but unfortunately it is a massive pain to use. Do not rely on Zerowater/Aquatru for this, as it will not help you. I am still searching for an excellent under-sink solution for eliminating pathogens plus giving me the taste that I get with Zerowater. My ideal combination would be some sort of automatic Grayl, filtered afterwards with Zerowater. Beautiful-tasting water but quite expensive. Might as well rely on water bottles at that point. Hope this helps a bit.



I used to enjoy his "will an engine run on xyz" videos, but unfortunately I'm not big on product reviews unless I'm considering buying something.


Totally, he is a shining star of objective reviews. He always captures key metrics well too, I'm always impressed by what he is choosing to measure.


I love the work he does, but can only watch him on mute.


And the question is why would you do that?


Honestly, his content would be better as articles with charts most of the time.


Looks better than most but the only reason it’s video is to show you ads. It’s the wrong format and it’s awkward.


> the only reason it’s video is to show you ads

Looking at those video lengths, I'm inclined to believe you.

Any video that is 9 to 11 minutes long, I automatically skip because that's the sweetspot length for maximizing ad revenue. This person's videos are a bit longer, but looking at the video subjects, I can't see why they need to be that long.



>Any video that is 9 to 11 minutes long, I automatically skip because that's the sweetspot length for maximizing ad revenue.

I think that may be outdated? 10 minutes was the sweet spot for ad revenue a few years ago, but I think around 8 minutes is the sweet spot now.



I sub to very few channels, but he's one of my favorites, along with CompanyMan videos.


Yes, project farm is a treasure


I think that entire "Hidden Gem" update is being spun by Google as a positive "opportunity" to bring in more useful content for users, when it is actually a defensive maneuver against the absolute gaming of their algo that has led to an insane deterioration in their results quality. Even simple queries now routinely return mountains of obvious SE spam.

Combine this with Google now placing only sponsored content on damn near the full first SERP for some terms and it has become less useful by an order of magnitude (I meticulously calculated this figure).

And, this degradation at the same time ChatGPT has come on the scene. I know I personally bypass Google altogether now more and more frequently in favor of ChatGPT. Wonder how many people do the same and whether there is a whiff of desperation at the Big G.



@dang I would like to make the argument that the increased attention from Google will exert a downwards pressure in HN quality due to marketers taking advantage of its influence and search ranking to conduct influence operations.

Would you be willing to consider the possibility of delisting the site with robots.txt?



HN has been under downward quality pressure for a long time. Figuring out how to withstand it has been the core idea all along:

HN is an experiment. As a rule, a community site that becomes popular will decline in quality. Our hypothesis is that this is not inevitable—that by making a conscious effort to resist decline, we can keep it from happening. - https://news.ycombinator.com/newswelcome.html

pg wrote that over 15 years ago. I've been saying for (a mere) 10 years that we're trying to stave off the arrow of internet entropy: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... – not doable forever, no doubt, but doable for a while if we expend enough energy.

Based on experience so far, the way to do this is through a combination of a dedicated community, software that does what software can, and moderation to bump the system out of its failure modes.

Trying to hide from the outside world is mostly not the way. pg used to do tricks like Erlang Day on occasion, but having that as the main strategy would be like trying to avoid infection by never going outside. Far better is to have a robust immune system, if possible. Trying to avoid (resistible) pathogens weakens the immune system, and hiding HN from new users is a path to decrepitude. The latter is probably a greater threat IMO, and too easy for established users to discount.

Spam, and its cousins like content marketing, could kill HN if it became orders of magnitude greater—but from my perspective, it isn't the hardest problem on HN. That's because of the dedicated community, which flags these things, reports them when they escape in the wild, and is vigilant about quality. Without such a community, HN would have died long ago.

By far the harder problem, from my perspective, is low-quality comments, and I don't mean by bad actors—the community is pretty good about flagging and reporting those; I mean lame and/or mean comments by otherwise good users who don't intend to and don't realize they're doing that. There's an unholy dynamic between those and the upvote system, so worse comments often get upvoted more than better comments do—often enough to choke the threads with weeds. That's the high-order bit and what I spend more time worrying about—not Cassandra E Oakley and her trading system*, nor the latest startup voting ring and whatnot. If those ever become the high bit, we might be doomed, but we should see it happening long enough in advance to readjust.

p.s. (@dang doesn't work - I only saw this by accident. Well, not by accident because it ended up at the top of the thread)

* https://news.ycombinator.com/item?id=39425371 - unkilled for the occasion



Dang I think you are a really good dude.

So many places on the internet used to be wonderful places of discussion. I remember as a teenager I would come to the internet and be in awe at this system that mankind created to allow for all humans to come together and discuss.

But I’ve noticed since 2015 every single discussion place I used to frequent online has become horrible.

I discovered hacker news only recently and this website seems like the last remaining jewel of the internet that still exists. I think the reason is because of you and the philosophy you use to moderate the website.

Just wanted to express my gratitude to you. Not sure how often people say thank you to you for the work you do but they should say it more often!



Thank you! How did you find HN?


I found it because one of my friends used to post links to insightful posts here on our group chat. Eventually my curiosity led me to explore the site more and I stuck around.

At first I thought it was a site entirely devoted to computer stuff but it was only later that I realized what this site was really about.



I found it relatively recently and because it was frequently mentioned in tech spaces. (From videos to articles). After a while I checked it out and started reading it regularly myself.


> There's an unholy dynamic between those and the upvote system, which means that (by default) worse comments get upvoted more than better comments do...

Can you elaborate on this a bit? I don't see why "lame and/or mean comments by otherwise good users who don't intend to and don't realize they're doing that" should have an "unholy dynamic" with the upvote system.

(FWIW, what I have observed is that once a comment becomes established as the top comment in a thread -- and it doesn't take much for that to happen -- it is nearly impossible to dislodge it. That means that getting into a thread early is crucial for getting noticed. I've pretty much stopped commenting on threads that are older than an hour or two because I can be 99.9% certain that whatever I write will never be noticed no matter how good it might be. And FWIW2, the comment I'm responding to is 50 minutes old as I write this.)



The dynamic being referred to is that low quality comments in the form of memes, distasteful jokes, attacks on other people, and similar comments tend to get upvoted a lot as they provide some entertainment to the upvoter, but said upvoted comment is highly damaging to the community in the kind of tone it sets for the thread, as well as the example it sets for the future.

Optimizing for this kind of low-effort but highly upvoted comment is called “karma whoring” in some places.



Most upvoting is reflexive rather than reflective [1], so posts which generate a quick response are more likely to get upvoted. I think that mostly happens when the reader has a rapid feeling response—could be indignation (how dare $THEY!), could be familiarity (no way! I like $THING too!), could be a quick association from $Familiar-A to $Obvious-B [2], but whatever it is, it's likely to be something that doesn't take much processing.

The reflective circuitry, which takes in new information, turns it over, and generates an unpredictable response, is much slower and harder to run. I suppose it's a bit like the difference between a sugar hit and eating nutritious food with fiber. The latter makes you feel better in the long run, but when it comes to mass dynamics, the sugar hit wins out every time.

> once a comment becomes established as the top comment in a thread -- and it doesn't take much for that to happen -- it is nearly impossible to dislodge it

Moderators downweight top subthreads that are generic or otherwise lame, and repeat this until the top subthread is no longer lame—if possible. The trouble is that this is an intensive manual process. Most likely the software needs to be adjusted as well.

[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...

[2] This is probably the basis for the generic subthreads which are the bane of this forum: not bad enough to flag, but predictable enough to suffocate.



> The trouble is that this is an intensive manual process.

And the other problem is that even when it works perfectly, this process as described can only produce non-lameness at best.

I have a suggestion based on something I did at Google 20+ years ago: compute Page Rank on commenters. I did this 24 years ago in the Google Translation Console, which was a (now long-since-retired) public interface for volunteers to translate Google's site content into other languages. Translators would not just input their own translations but also rate the translations submitted by others. Translators whose translations were upvoted more often were deemed more reliable raters, just as links from highly ranked web pages are weighted more highly when computing Page Rank. This successfully prevented any translation spam from ever making it through to the public site (as far as we could tell). It never reached the scale of HN, but it's a lot easier to implement such a thing nowadays too, so I think it might be worth a try.
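For what it's worth, that reputation loop lends itself to a short sketch. The following is a minimal Python illustration, not the original Translation Console code: it assumes a plain vote log, squashes scores into [0, 1], and blends in a uniform prior the way PageRank's damping factor does. All names and constants here are made up for the example.

    from collections import defaultdict

    def reputation_scores(ratings, iterations=20, damping=0.85):
        """ratings: list of (rater, author, vote) tuples, with vote in {+1, -1}."""
        raters = {r for r, _, _ in ratings}
        authors = {a for _, a, _ in ratings}
        users = raters | authors
        weight = {u: 1.0 for u in users}   # rater reliability; start uniform

        for _ in range(iterations):
            # 1. Score each author's submissions using the current rater weights.
            score = defaultdict(float)
            for rater, author, vote in ratings:
                score[author] += weight[rater] * vote

            # 2. A user's reliability as a rater follows the score of their own
            #    work, blended with a uniform prior (akin to PageRank damping).
            lo, hi = min(score.values()), max(score.values())
            span = (hi - lo) or 1.0
            for u in users:
                normalized = (score.get(u, lo) - lo) / span   # squash into [0, 1]
                weight[u] = (1 - damping) + damping * normalized

        return dict(weight)

    # Toy example: a spammer rated down by two contributors who rate each other up.
    votes = [("alice", "bob", +1), ("bob", "alice", +1),
             ("alice", "spammer", -1), ("bob", "spammer", -1),
             ("spammer", "spammer", +1)]
    print(reputation_scores(votes))

In that toy run the spammer's self-vote carries little weight because their own submissions score poorly, which is the point of the iteration.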



Why was it retired?


It happened after I left so I can't say for certain, but my guess is they decided to hire professional translators instead.


>The reflective circuitry ... is much slower and harder to run.

Consider this along with the comment below in the thread:

>most activity on a thread seems to happen in the first 24 hours of posting it. the discussion tapers off beyond that.

Together, they support more software experiments to encourage "slower" or longer discourse, and thus reflection.

This is my longtime wish as a user: reading via RSS most conversations are long dead when I get to them. I subscribe to replies on my comments, but since few other people do, it's not that useful. Simply promoting such features might help, as I bet few users are aware of them.



Well, when I say slower I mean something like (to be generous) a minute, as opposed to 500ms. So we're talking about very different time scales! But you make a good point.


Thanks! And on the other hand I'll note that since:

1) my comment actually did capture your attention, and

2) I did notice it (courtesy of HN Replies),

the current state isn't working too poorly, and sure beats the pants off any other option I know of ;)

So please keep up the awesome work.



I think reddit has the best system for ranking comments and threads of any I have seen. I haven't studied the source, but it seems to hinge on what I have taken to call 'piloting' new posts: Allow it a brief time in the top spot (possibly only for a random subset of users), and see how well it performs (upvote wise) compared with the other comments. And, importantly, the quality requirement for the post increases the higher in the total hierarchy it is: root-level post, top-level reply to a top comment, and so on.

I know HN does something similar, but it is not quite as good as reddit. From observation, specifically the 'penalty', or added performance requirement, of latching on to a top post is too weak. The result is that all HN comment threads consist of only a few top level posts, with subthreads growing off them, because you can easily 'jump the queue' just by commenting to a top comment. This is also what contributes to the idea that it is pointless to make new root level comments after an hour - because almost all the action is in subcomments to top comments.

Edited to add: Reddit soft-hides (collapses) subthreads that are deemed of lower importance, which is probably key to making the ranking system work. Anyone interested in a subthread may expand the hidden/collapsed sections, and they may even be upvoted back to an uncollapsed state. But by default they don't muscle into the main conversation. HN already has the collapse feature, which could be reused for this. It's just a client-side collapse, as is Reddit's (though in huge threads, deeper subthreads are loaded on demand).
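A rough sketch of that ranking idea, under this commenter's assumptions (quality measured as an upvote-to-view ratio, a short exploration window for brand-new comments, and a quality bar that rises with prominence, modeled here simply as the slot's position on the page). This is a toy model, not Reddit's or HN's actual algorithm, and all thresholds are invented.

    import time
    from dataclasses import dataclass, field
    from typing import List, Optional

    PILOT_SECONDS = 15 * 60        # trial window for a brand-new comment
    BASE_BAR = 0.02                # minimum upvote/view ratio for the top slot
    PROMINENCE_PENALTY = 0.01      # extra ratio required per slot of prominence

    @dataclass
    class Comment:
        text: str
        created: float = field(default_factory=time.time)
        upvotes: int = 0
        views: int = 0

        def quality(self) -> float:
            return self.upvotes / self.views if self.views else 0.0

    def rank(comments: List[Comment], now: Optional[float] = None) -> List[Comment]:
        now = now or time.time()
        # New comments ride near the top during their pilot window to gather data.
        pilots = [c for c in comments if now - c.created < PILOT_SECONDS]
        veterans = sorted((c for c in comments if c not in pilots),
                          key=Comment.quality, reverse=True)
        ordered = pilots + veterans
        # Keep a comment visible only if its quality justifies the slot it holds:
        # slot 0 needs BASE_BAR, slot 1 needs BASE_BAR + PROMINENCE_PENALTY, etc.
        return [c for i, c in enumerate(ordered)
                if c in pilots or c.quality() >= BASE_BAR + i * PROMINENCE_PENALTY]

Comments that fail the bar for their slot would be collapsed rather than deleted, which matches the soft-hide behaviour described above.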



> And, importantly, the quality requirement for the post increases the higher in the total hierarchy it is: root-level post, top-level reply to a top comment, and so on

How?

> the 'penalty', or added performance requirement, of latching on to a top post is too weak

I'm confused by what you mean by 'quality' and 'performance', unless you just mean upvotes.



By 'quality' or 'performance' I mean the metric by which a post (and its children) is shown higher or lower, or even auto-collapsed.

I think reddit just counts a ratio of upvotes to views (ofc downvotes too). It is possible that users collapsing a comment/subthread also has some weight. Would make sense.

> And, importantly, the quality requirement for the post increases the higher in the total hierarchy it is: root-level post, top-level reply to a top comment, and so on

What I mean by this, is that the more prominent a place a post holds, the better it must 'perform' (per the above definition). Prominence being mostly just how high on the page it is.



Thank you!


most activity on a thread seems to happen in the first 24 hours of posting it. the discussion tapers off beyond that.

context: i read the "past" page every day, so often am several hours or over a day behind discourse. interestingly enough this post is still relatively new and was popular enough before midnight utc to show up in today's past page.

i wonder how much positive feedback posts receive if they miss the initial window.



> “once a comment becomes established as the top comment in a thread -- and it doesn't take much for that to happen -- it is nearly impossible to dislodge it”

Sometimes moderators actively bump down top comments.

It’s probably related to what dang wrote above about the “unholy dynamic” of low-content comments that trigger easy upvotes. I remember writing some (frankly) lightweight, low-effort snark, seeing it end up at the top of the comments for a few hours, and then watching it drop mysteriously halfway down the list even though the upvotes didn’t stop.

To be clear, I think it’s a good thing the moderators do this kind of weighting, and the invisibility of comment upvote counts to non-authors is an important feature because it enables this.



Maybe it's not the growth of a community which really causes decline in quality but where the growth comes from. Folks that come because they saw a discussion link in a git message or startup conversation are likely to have very different interactions than folks that come from a search engine or a social media share. If 100k users showed up tomorrow from the former category, would quality here really go down?

I agree it probably doesn't make sense to try to block growth sources though. Just considering the perspective "it's not effective to try to play whack a mole anyways" already seems enough evidence of that alone.

All this reminds me though I've been using the site too much lately and my interaction quality has gone down. Time for another short break :). Anyways



I was browsing the archives of HN the other day and I noticed the quality of most comments to be pretty much the same. A similar proportion of off-topic or just-replying-to-the-title style comments. The length of the comments seemed longer now compared to back in the day. There were a similar number of joke / pun / silly comments which didn't get upvoted. I think there were fewer lame/mean/snarky comments in the past. I didn't see as much flame.

Would be great to see some actual data analysis.



On that note, Discourse nudges you to do a tutorial on how to use their forum through the private messages from Discobot. I wonder if something like that which shows the kinds of comments that the community needs, along with the upvote limitations I’ve talked about elsewhere and privately messaged you about, may provide a sufficient mitigation to this concern and help users be better contributors in this forum.

Also, stuff like Erlang day isn’t a bad idea: it is also a kind of nudge to remind the community about what kinds of discussions and comments to prioritize.



I disagree with the harder problem. I agree that the problem is low-quality comments, but not the lame or mean kind. When I see comments on a topic I happen to be an expert in, it's obvious to me that the majority of them are misinformation or uninformed users. These people don't mean harm, they are legitimately misinformed or inexperienced. There typically are insightful comments as well, and often they are upvoted, but they have become the minority compared to the low quality comments. This ratio has steadily gotten worse over the past years. It's not as bad as reddit, but every pseudonymous community seems to suffer this problem as it's directly tied to trust and identity. There is very little to be gained from comments on HN anymore; I mostly come here for the links now since it's a decent source of news curation.


I think I'd include those under "lame". Misinformed comments aren't lame if they're curious and (therefore) open to correction, but when they are categorical statements (usually some form of denunciation or grandiose generalization), yeah that's lame.

> This ratio has steadily gotten worse over the past years

I'm not sure. Everything has always been getting worse, or feels like it has. You'd have to discount for this bias in order to tell whether things have really gotten worse, and no one really knows how to do that, nor really wants to. It's more satisfying to feel the decline of civilization setting in as one ages. Yes, that's a grandiose generalization and therefore lame.

Or, to flip it into a positive, you're more experienced and informed than you used to be, so more comments appear misinformed or inexperienced.



Some titration of open-minded misinformed comments is probably vital to the functioning of the site; without them, a lot of interesting true stuff is buried as shared subtext among the specialists here, and sails over the heads of everyone else.

This is a problem I've seen in specialist forums elsewhere; the conversation runs out, because nobody's poking the community with wrong stuff, and correcting a wrong statement is intensely more motivating than just making a plain factual statement for its own benefit (also: when you do that, the rest of the community will go "uh, yeah, and?").



> p.s. @dang doesn't work

blame those dang programmers!



What's "Erlang Day"?




Mentions do not work here; you should send an email to hn [at] ycombinator.com.

(BTW, if this is gonna be done, we should also block all AI bots and search engines at the IP address level.)



> (BTW, if this is gonna be done, we should also block all AI bots and search engines at the IP address level.)

That's not possible or necessary.

Not possible, because it simply isn't possible in the general case to differentiate real people from scrapers, without using device attestation. For an extreme example of this, see the Recap the Law project[1], which gives real human users an extension which scrapes as they browse.

Not necessary, because scrapers for AI training data are an entirely separate problem, completely unrelated to marketing, and because robots.txt will serve to stop the majority of search engine indexing, which is all that we need. Actual blocking of engines isn't necessary, because all the big ones are well-behaved, and after they stop indexing HN, marketers won't care about HN for the purposes of SEO-related influence campaigns any more.





While @‘s aren’t an implemented feature, dang has quite the ability to find them.


It's random. Modulo an occasional email that points me to one.


Darn, I just assumed that you had some sort of super-hacky patch where one of the Arc functions used as part of the comment submission process looked for the string "@dang" and emailed you if it was found.

Email it is!



Would like to see this argument made. There should also be an argument made that more good contributors would come. In fact, more people without the mentality of this post (to block and hinder visibility) would, in effect, mean this is a net positive for aggregate sentiment.


I think you are trying to build walls around something that isn't entirely yours, both personally and “you-as-the-voice-of-the-community yours”.

The linked blog post actually describes that Google is drowning in ad-driven crap, and has to rely on manually chosen external structures to keep quality tolerable, not that “HN has become cool”.



> I think you are trying to build walls around something that isn't entirely yours, both personally and “you-as-the-voice-of-the-community yours”.

Profiling other users, in addition to being extremely bad form, is a pretty good sign that you don't actually have a valid argument to make - which, reading your comment, you don't.



It is a valid argument. Certainly it resonated with me. Your original comment made it seem like you were the voice of the community, that a majority or at least a plurality of users want your proposed change. But there’s no indication that they do.


You are incorrect. It is not a valid argument - it's profiling, uncharitable assumptions, and ad-hominem attacks with no actual logic, or any redeeming qualities whatsoever for that matter.

My language was "Would you be willing to consider the possibility" - a polite request. There's no way that a logical person could read that as either a command or as my acting as "the voice of the majority" or "a plurality of users" - you pulled that out of thin air.

Dang is not a tool or an idiot - if he thinks something is a bad idea, he won't do it. If he thinks that it might be worth doing, he'll think about it. If he needs to poll users, he will.



Their comment started with:

> I would like to make the argument

Which makes this perspective very confusing:

> it seem like you were the voice of the community



There's no indication that they don't either. I guarantee you he's not alone, if only because I agree with him. If my time here on HN taught me anything, it's that I'm not alone and that there are probably many more who agree but don't say so publicly due to the socially unacceptable nature of certain arguments.

My entire life it's been my experience that the fewer people involved, the better things are in general. There's probably an optimal number of people for every community. I don't really know what that number is, but I seriously doubt it's in anyone's best interests to exceed it.



I would like to make the argument that the increased attention from Google will exert an enshittification pressure on code quality due to the sudden appearance and accumulation of trackers, until the web is finally sold to any big publishing company, and, eventually, to an AI training company.

Browsing HN is a relief in times of Surveillance Capitalism

Edit: HN uses google tagmanager and analytics; amplitude and branch.io



> HN uses google tagmanager and analytics; amplitude and branch.io

We do? No we don't? I don't even know what those are.



I don't see any requests to any analytics or Google Tagmanager, actually the only requests I see when opening a post are to the `news.ycombinator.com` domains to fetch the document itself, a CSS file, a JS file, an image and a couple of SVGs, nothing else.

If you see requests outside of that there's something fishy going on in your browser, I believe.



I've always thought it was strange that HN never appeared in search results. I mean content here has a very high SNR and seems to me that it checks all of Google's SEO boxes. So I always assumed not showing up was intentional, as in dang has been delisting it on purpose and blocking bots. Now that it's showing up, I kind of want it to go back the way it was. The only reason I read HN so much is because I have given it a high degree of trust, that even though I don't know most commenters, I can easily reason about content, find the sources, discover amazing tools and read from founders directly. I really, really do not want to worry about whether the front page is now ads.


The frontpage has always been ads. That's like the entire point of this website...

I mean just look at the domain name



We need more sites like Hacker News. X/Twitter used to be that, but it's now overwhelmed with bots and SEO. The more boring you make a site (no colors, no images, no links) the better the defense against spammers.


Slight disagree.

"More boring" is a defence against spammers, but not the only the one.

There's also: "Take commercial incentives of the running company out of the picture."

The most prominent example of that is Mastodon. Its software is open-sourced, and its most popular server instance, https://mastodon.social, is run by a gGmbH non-profit. (Its hosting company runs it as a non-profit charity for the social good.)

Since it's developed in the open without financial incentives muddying up the experience, no advertisements are added and there aren't any algorithmic rankings to be gamed.

And since it's also based on open source, it's easy to share content from other server instances (it's all ActivityPub protocol underneath), and it's also easy to block (defederate) server instances with trolls and other problematic users.

------------------------------

It's like how twitter was at the start, but better.

If you want to try it out, you can make an account on any server then follow some developers in your languages/libraries/tooling of your choice.

You'll also see what those maintainers are discussing in the open and get an idea of how your languages/libraries/tools are going to evolve in the next version, or even participate in their evolution.



Another important factor, imo, is size. Quality on platforms like Twitter, Reddit, HN, Mastodon is inversely proportional to their size. If a platform gets big enough, regardless of its motives or polish, there will be more incentive to game it.

Platforms like HN and Mastodon are great because they are small. They cater more towards a smaller, more technical community, which isn't worth gaming with spam or whatnot because it's small and more aware of this kind of manipulation. Smaller "gems" in bigger platforms (think a small, old subreddit) can be good for the same reason.

I guess this advocates more for the small web, which I'm all for, but there's less money in that. I wonder what incentives could practically make the web smaller and more useful.



I don't see Mastodon getting worse by getting larger.

You only see who you follow, and there's no like/karma/upvotes algorithm.

Everything is sorted chronologically, and if anyone tries to "game" that by posting too much, they'd get unfollowed and/or banned.

Mastodon is a nurtured, cultivated Twitter.

Personally I follow the CSS/JS/TS community (for work), the gamedev community (for fun), and the space community (for passion)



Oh, I don't mean that scale is the only factor. Clearly, the structure of Mastodon is way better than the structure of Twitter. But, I'd bet that if Mastodon was as big as Twitter, if it was that juicy of a target, it would have way more spam than it does today.


I both agree and disagree, in that the amount of spam on Mastodon will surely increase in proportion to the size of its network, and yet Mastodon users won't usually see that new spam, given the dynamics of the current system, because we're only shown content from sources we explicitly follow.


HN is not small and hasn't been for many years. Your comment is item (comment or story) number 39 million 425 thousand 162.


Twitter gets 500 million new tweets every day.

HN has 39 million comments after 17 years.

HN is small.



Yeah, it's not some ten-person forum buried in the annals of the old Internet, HN is popular enough, but does a random person sitting in a Boston cafe know what HN is? Probably not, but they sure know what Twitter is.


I wonder how many of those are non-spam, human-written tweets each day.


About 42.


Well, quality over quantity.


I think a large part of the reason Hacker News is good is that the operator's incentives are more closely aligned with the desires of users than average. Y Combinator benefits if HN is the best place to discuss the creation of technology/software because it boosts their brand and gives their companies a communication and recruiting advantage; most other platforms are primarily interested in ad revenue and are only incentivized to provide a good experience to users as a factor in that equation.

Defense against spammers/bots is a tough problem though. Having great moderators and users who are savvy and intolerant of bad content probably helps but I think it would only go so far on a large site. HN might benefit from the relatively narrow appeal of its content in that regard.



bots and SEO have really destroyed the internet. and now that we have AI to further streamline the production of BS, I feel like the Internet is just going to become even more of a quality content wasteland.


https://en.m.wikipedia.org/wiki/Dead_Internet_theory

LLMs are only a deathblow, albeit a massive one, to a trend that's been 10 years in the making.

I feel it's time for another small internet for us, with blackjack and dancers. But this time let's agree not to make it friendly and accessible for everyone, alright?



Everyone just bind to some other port than 443 and there you go. The traffic won’t be worth any money so no spammers will show up there. All existing content and functionality will still work, just on a different port. It’s like a www fork.


If we move to port 404, they'll never find us.


> with blackjack and dancers

and Slurm, please.



How long until marketing powerhouses make a true hypnotoad capable of literal brainwashing, more so than the power of current FAANG?


Oh, if they could build a hypnotoad, they would. Governments, too. See project MKUltra in the 50s.


I fully believe that, especially with the rise of AI models, the future of the internet is going to be small enclaves of a few thousand people on invite only message boards. Anything else is just going to be far and away too much effort for anyone to maintain, especially when advertisers twig that their ads are mostly being shown to bots.

I just don't see how anything else could be sustainable.



So just like the past? Bring it on, that was the better internet.


It was and it wasn't. Search engines arose to solve a very real problem, and did so quite well for a long time.

Curation was what search engines replaced, as the scale of information available outgrew human capacity to keep up. We are almost certainly going to have that problem again soon, for a while at least.

Really, what I hope is that the already burgeoning problem of AI-generated garbage gets solved, and that people rediscover the virtues of social interaction that's based in reality, rather than in the optimization of strongly emotive idiocy that adtech-driven social media demands.



I used to believe this but I don't think so any longer after enough time on the internet.

There's probably not more than 50k meaningfully unique sites with some notable amount of actual desirable information, after excluding all the SEO'ed sites, blogs repeating each other, etc... at least for the English web.

Manual curation is entirely possible since probably there aren't even 50 such sites being created per day on average. This is including every single forum still open to public viewing. There really aren't that many left.



How long do you expect that will remain the case in the face of such a flood of zero-incremental-cost garbage as we here discuss?

Especially worth mentioning in this connection is https://news.ycombinator.com/item?id=39424688, as of this writing #1 on HN. I mention it here because what it says about moderation, and about centralized platforms being both the highest-value and most poorly managed targets, applies here also.



Forever, if they also have access to comparable tools to weed out lower quality sites.

Why would you expect otherwise, that intelligent people will suddenly lose their ability to perceive what's higher quality content?



There are a lot more sites than that when you throw in personal blogs.

The issue is that those are now impossible to find.



There aren't, if you exclude all the spam blogs, and include only the ones that are fully accessible without a paywall and have received an update in the last year.

A huge proportion have simply stopped updating, gone offline or moved to a paywall on substack/medium/etc...

The 50k number is all inclusive and probably even still an overestimate.



How did you derive that figure to begin with? And in what realm does only what's been posted in the last year qualify as information worth retaining the ability to retrieve?


Well maybe it's just because I'm an unpopular weirdo, but I think "invite only" is cancer. In fact it's another head of the hydra killing the Internet. Whatever alternatives exist to the spam wasteland are strangled in the crib by being overly walled gardens, e.g. Discord. I also swear to god I think secret private club fetishism crippled piracy.

This is not how the good years of the Internet grew. I can't think of a single good or popular thing that started out as "invite only" other than I guess Facebook and Gmail. Both of which were actually more marketing gimmicks.



> small enclaves of a few thousand people on invite only message boards

I'm okay with that. Honestly sounds a lot better than the current state of affairs. The only problem now is getting invited.



I swear my girlfriend relishes reading me the entire 300 words of SEO-bait product listings on Amazon. Babe, I'm begging you, please, you can stop at "xl dog bed". I don't need to hear the rest that goes "fluffy for best friend comfort for large dogs pitbull great Dane german Shepard..."


There's a nucleus for a standup bit in there somewhere.


"To get her to stop I said 'Sure, sounds good, buy it.

..anyway that's how I ended up with a Great Dane."



Opening up internet access to everybody in the world destroyed the hacker haven that the Internet once was.

But I don't think it's bad. Hackers/smart people "locked in the basement" or talking only with their friends in their bubble is not ideal. There are a lot of people out there, with their own opinions, ideas and understanding (or lack thereof) of the world. The Internet just converges towards the average human being. If we want a better internet, I hope some smart people will put some effort into making the "average person" in the world wiser, rather than blaming SEO and bots...



> Hackers/smart people "locked in the basement' or talking only with their friends in their bubble is not ideal.

Sounds quite ideal to me.

> hope some smart people will put some effort to make the "average person" in the world wiser

I'd rather hope for a future made by us and for us instead.

"Average" people just don't care about this stuff like we do. They don't care. I tried to get them to care, they refuse to care about all this stuff that we care about. That's fine, people like what they like and that's that but why should we care about their concerns then? We should not. And I do not.

Truth is I couldn't care less about such an "average" human being. Why is everything always about the "average" person? Why must all technology serve this mythical average human? Where is the technology that serves me? My programmer's computer system and network?

Isn't that why we all come to Hacker News?

People chase these "averages" because there's money in it. The money mostly comes from advertising consumer products to them. That's why advertising destroys everything.



I remember reading in the Wikimedia stats post here a few weeks ago that, for English at least, the average internet user is a 20-year-old from India watching porn.

Put it all in perspective



Goodhart’s Law destroyed the internet. The constant game of chess between Google and SEO marketers has turned the whole search product to crap.

It won’t improve, since when Google makes a change, SEO marketers adapt. The websites that actually provide value and don’t really care about SEO suffer, as do the users looking for that exact information.



I'm just waiting for some kind of real person yet anonymous protocol that gets introduced in the next era of forums.


HN requires no javascript and is accessible through Tor without issues. It feels right for those of us who are usually marginalised by a wish for better security.

Feels well moderated too, by people who care about the place, and that's the core of a sustainable community. Care really matters.



Exactly right. Whereas with Reddit you get the feeling the moderators have a very unhealthy relationship with the site.


> We need more sites like hackernews

HN is heavily moderated (censored) and people love to complain when that happens on sites like twitter.



It’s moderated for quality, not opinions. That’s what people complain about (and I haven’t seen done elsewhere).


You don't believe HN is moderated for low quality opinions?


Downvoted but true.

HN's pedestrian design makes for a much better experience.

- No flair for usernames: people concentrate on the message, not the messenger

- No visible karma for anyone but the top few: people don't spend (as much) time karma whoring

- Limited formatting, no images or video: increases the value and import of the written word

- No sharing or @user referencing: posts live and die more by their merit than by brigading or other shenanigans



On top of that no sharing/@-ing: there are also no notifications for responses (there's probably an add-on for that if you want it).

At first I thought I wouldn't like it, because I tend to post on subjects I'm knowledgeable about and want to do my best there. But now I'm so glad to not get the anxiety and rush of the back and forth in heated discussions.

Peace and quiet.



Would be nice to have the option. I've missed more than one belated response for sure...


I would say this site, user-design-wise, is about the same as Reddit, so I don't agree with any of your points. What makes it higher quality is that it has a niche theme (hackery), which makes it less popular. As is generally true online, the less popular a resource is, the higher the quality of communication it has. You can find the same quality of conversation in low-population subreddits too, despite the aforementioned design.


Go and look at Reddit again. Every post has a little avatar of the poster right next to it. Some of them are quite funky and some are offensive. The text of the username is bold - bolder than the post content itself.

In HN the username is simply plain text. You can't even see it's a link without hovering.



It's true. But it's not what contributes to the quality of the conversation. Find a niche (non-meme) subreddit that has at best 10 posts a day and check the conversations on there. They all have flashy avatars, a modern design for their posts, etc. But the conversation is high quality at the end of the day.


That's new reddit for you. Old reddit is bland af, and some people love it for that.


It isn’t love of old Reddit; it sucked, and you needed RES to make it usable.

It’s hate of the new Reddit, plain and simple. Well deserved might I add, it embodied the new priorities of the site: number go up.

I quit Reddit cold turkey when they took Apollo away.



There are user flairs on old reddit too. You can see the person's karma if you hover over their username. But even if that wasn't the case, Reddit (99.98% of the time) sucks, regardless of what UI you use.


with the experimental spirit of the site, the barebones look is enough for the "mvp". but for most of the web users today, it is not quite as engaging to use.

most of the readers here don't mind reading text and face a wall of text daily, so it works out here.



> most of the readers here don't mind reading text and face a wall of text daily, so it works out here.

Even for us, we do mind reading HN as a single wall of text. I've got my userContent.css file set up for HN so that:

1. There are maximum sizes to the paragraphs (I can't read a wall of text stretching across 1080 pixels!)

2. Vertical spacing between comments and lines is larger.

3. Each comment in a thread has a larger indent using padding so that, visually, it's easier to track the parent of any given comment.



would like to check it out and see how to set it up for my browser!


Place this into your `userContent.css` file in `$FIREFOX_PROFILE/chrome`.

The case of the filenames may or may not be important, depending on whether you're on Windows or not.

https://gist.github.com/lelanthran/873983febef21450b0afcb99d...



> no visible karma for anyone but the top few

It took one click to see that user abraae has 6671, as of this writing. Or did I miss the point in dramatic fashion?



I interpreted this point to mean that there is no visible karma for any of a user's posts.

E.g. A reply/comment is at the top of the chain, but you don't know if there is a difference of 1 karma or 100 karma between it and the next comment.



Oops. I really meant no badges, karma, etc. showing next to people's posts, but I forgot you can see it by clicking in.


I wonder the same, I can definitely see the karma of users who have only double digit (and I think I have seen single digit) karma.


I think it's a lot less obvious here than on Reddit, where you could hover over a username and see it.


You did. His score has little to do with anything on the site. That's the point.


There's no meaningful leaderboard.


There is, but the fact that almost nobody knows that it exists might be a feature.

I won't link it; per the hacker ethos, I encourage you to find it yourself.



My qualifier is empirical, and not at all pejorative.


Optimized for thoughtful discussion over engagement.


In a world where AI can impersonate people to virtual perfection, how would one even KNOW who to invite?

I think proof of identity may soon become the only way to keep AI powered bots from trying to manipulate every forum out there.

And for those who cherish the ability to be anonymous, there is probably a market for a trusted middle-man/site that can verify that an account is being created by a biological person, but that doesn't provide the actual identity of that person to the site where he/she is setting up a new account. Kind of like CA providers do for ssl domains.
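That middle-man idea can be sketched concretely. Below is a minimal, hand-wavy Python illustration (using the third-party cryptography package): the verifier checks a person out of band, then signs an opaque, single-use token; the forum only verifies the signature and never learns who the person is. It deliberately ignores the hard parts, such as making tokens unlinkable across sites (real designs such as Privacy Pass use blinded tokens for that), and none of these names refer to an existing service or standard.

    import secrets
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # --- verification service: knows the person, keeps that knowledge to itself ---
    verifier_key = Ed25519PrivateKey.generate()
    verifier_public = verifier_key.public_key()

    def issue_humanness_token():
        """Called only after the service has verified a real person, out of band."""
        token = secrets.token_bytes(32)           # opaque: carries no identity
        return token, verifier_key.sign(token)

    # --- the forum: knows the verifier's public key, never learns any identity ---
    seen_tokens = set()

    def accept_signup(token, signature):
        if token in seen_tokens:                  # single use, to limit token farming
            return False
        try:
            verifier_public.verify(signature, token)
        except InvalidSignature:
            return False
        seen_tokens.add(token)
        return True

    token, sig = issue_humanness_token()
    print(accept_signup(token, sig))              # True
    print(accept_signup(token, sig))              # False: token already spent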



You can still (basically) create your own HN / Twitter using RSS—my favorite reader is NetNewsWire, but there are others—particularly with the rise of Substack. I have a couple hundred feeds in mine, and it's great.

On my own site, I routinely post lists of links to interesting articles. You don't have to rely on Twitter or other highly botted sources.

That most people do is itself revealing.
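As a concrete starting point, a personal front page built from feeds is only a few lines. The sketch below uses the third-party feedparser package; the feed URLs are just examples to swap for the blogs and newsletters you actually follow.

    import time
    import feedparser

    FEEDS = [
        "https://hnrss.org/frontpage",           # example: HN front page as RSS
        "https://example.com/blog/feed.xml",     # placeholder for a personal blog
    ]

    def latest(feeds, limit=30):
        items = []
        for url in feeds:
            for entry in feedparser.parse(url).entries:
                published = entry.get("published_parsed") or time.gmtime(0)
                items.append((published, entry.get("title", "(untitled)"),
                              entry.get("link", "")))
        items.sort(reverse=True)                 # newest first, across all feeds
        return items[:limit]

    for published, title, link in latest(FEEDS):
        print(time.strftime("%Y-%m-%d", published), title)
        print("   ", link)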



The more interesting/useful part of HN is the comment section, not necessarily the articles themselves.


I often read the comments more thoroughly than the article itself


I often do not read the articles until a comment points out it’s really worth it.


Reddit was the more popular HN replacement. Twitter has too weird a form factor to ever allow the possibility of anything but hot takes.


We need more old school forums. Decentralized communities focused on their respective niches.






