Dangerous content slipping through the cracks and the algorithm messing up are the same thing: there is no way for content to "slip through the cracks" other than via the algorithm.
> You can't be licensing user content to AI as it's not yours.

It is theirs. Users agreed to grant Reddit a license to use the content when they accepted the terms of service.
> On HN, your front page is not different from my front page.

It’s still curated, and not entirely automatically. Does it make a difference whether it’s curated individually or not?
Per the court of appeals, TikTok is not in trouble for showing blackout challenge videos. TikTok is in trouble for not censoring them after knowing they were causing harm.

> "What does all this mean for Anderson’s claims? Well, § 230(c)(1)’s preemption of traditional publisher liability precludes Anderson from holding TikTok liable for the Blackout Challenge videos’ mere presence on TikTok’s platform. A conclusion Anderson’s counsel all but concedes. But § 230(c)(1) does not preempt distributor liability, so Anderson’s claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children can proceed."

As in, Dang would be liable if, say, somebody started a blackout challenge post on HN and he didn't start censoring all of them once news reports of programmers dying broke out.

https://fingfx.thomsonreuters.com/gfx/legaldocs/mopaqabzypa/...
I think it's a very different conversation when you're talking about social media sites pushing content they know is harmful onto people who they know are literal children.
Trying to define "all" is impossible; but it's also irrelevant in the context of this particular judgment: TikTok took no action whatsoever, so the definition of "all" doesn't come into play. See also, for example: https://news.ycombinator.com/item?id=41393921

In general, judges will ultimately be responsible for evaluating whether "any", "sufficient", "appropriate", etc. action was taken in each future case they decide. As with all things legal, it's impossible to define with certainty the specific degree of action that marks the boundary of acceptability; but, as is evident here, "none" is no longer within it. (I am not your lawyer, this is not legal advice.)
Uh, yeah, the court of appeals has reached an interesting decision.

But I mean, what do you expect from a group of judges who have themselves written that they're moving away from precedent?
The personalized aspect wasn't emphasized at all in the ruling. It was the curation. I don't think TikTok would have avoided liability by simply sharing the video with everyone.
Under Judge Matey's interpretation of Section 230, I don't even think option 1 would remain on the table. He includes every act except mere "hosting" as part of publisher liability.
Not sure about the downvotes on this comment, but what the parent says has precedent in Cubby, Inc. v. CompuServe Inc. [1], and this is one of the reasons Section 230 came to be in the first place.

HN is also heavily moderated, with moderators actively promoting thoughtful comments over less thoughtful or incendiary contributions by downranking the latter (which is entirely separate from flagging or voting; and unlike what people like to believe, this place relies more on moderator action than on voting patterns to maintain its vibe). I couldn't possibly see this working with the removal of Section 230.

[1] https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.
Nuff said. Underneath the everlasting political cesspool from /pol/ and the... _specific_ atmosphere, it's still one of the best places to visit for tech-based discussion.
Comments that are marked [dead] without the [flagged] indicator are like that because the user who posted the comment has been banned. For green (new) accounts, this can be due to automatic filters throwing up false positives. For older accounts, it means the account (not the individual comment) has been banned by moderators. Users who have been banned can email hn@ycombinator.com pledging to follow the rules in the future, and they'll be granted another chance. Even if a user remains banned, you can unhide a good [dead] comment by clicking on its timestamp and clicking "vouch."

Comments are marked [flagged] [dead] when ordinary users have clicked on the timestamp and selected "flag." So user downvotes cannot kill a comment, but flagging by ordinary non-moderator users can.
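Here is a minimal sketch of those visibility rules as a toy model; the data structure and function names are hypothetical illustrations, not HN's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author_banned: bool = False      # the account (not this comment) was banned by moderators or filters
    flagged_by_users: bool = False   # ordinary users clicked the timestamp and chose "flag"
    vouched: bool = False            # someone clicked "vouch" to unhide a good [dead] comment
    downvotes: int = 0               # downvotes alone never kill a comment

def labels(c: Comment) -> list[str]:
    """Return the markers a reader would see next to the comment."""
    marks = []
    if c.flagged_by_users:
        marks.append("[flagged]")
    # A comment shows as [dead] if its author is banned or users flagged it,
    # unless someone vouched for it.
    if (c.author_banned or c.flagged_by_users) and not c.vouched:
        marks.append("[dead]")
    return marks

print(labels(Comment(author_banned=True)))     # ['[dead]']: banned author, comment itself not flagged
print(labels(Comment(flagged_by_users=True)))  # ['[flagged]', '[dead]']
print(labels(Comment(downvotes=50)))           # []: heavy downvoting alone leaves the comment alive
```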
> Go ahead and type that search query into google and see what happens.

What are you expecting it to show? That site removes all content after a matter of days.
> The CDA was about making it clearly criminal to send obscene content to minors via the internet.

That part of the law was unconstitutional and pretty quickly got struck down, but it still goes to the same point: the intent of Congress was for sites to remove stuff, not to be "common carriers" that leave everything up.

> Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content.

It does have a subsection clarifying that attempting to remove objectionable content doesn't remove your common carrier protections, but I don't believe that was a response to the pre-CDA status quo. If you can forgive Masnick's chronic irateness, he does a decent job of explaining the situation: https://www.techdirt.com/2024/08/29/third-circuits-section-2...
It all comes down to the assertion made by the author:

> There is no way to run a targeted ad social media company with 40% margins if you have to make sure children aren’t harmed by your product.
80% of users will leave things at the default setting, or "choose" whatever the first thing in the list is. They won't understand the options; they'll just want to see their news feed.
I'm pretty sure going from X to Threads had very little to do with the feed algorithm for most people. It had everything to do with one platform being run by Musk and the other one not.
> Their review process was developed to hit the much more stringent speech standards of the Chinese market

TikTok isn't available in China. They have a separate app called Douyin.
For anyone making claims about what the authors of Section 230 intended, or about the extent to which Section 230 applies to targeted recommendations by algorithms: the authors of Section 230 (Ron Wyden and Chris Cox) wrote an amicus brief [1] for Gonzalez v. Google (2023). Here is an excerpt from the corresponding press release [2] by Wyden:

> “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” the members wrote. “That interpretation enables Section 230 to fulfill Congress’s purpose of encouraging innovation in content presentation and moderation. The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Section 230’s protection remains as essential today as it was when the provision was enacted.”

[1] [PDF] https://www.wyden.senate.gov/download/wyden-cox-amicus-brief...

[2] https://www.wyden.senate.gov/news/press-releases/sen-wyden-a...
This statement from Wyden's press release seems to be in contrast to Chris Cox's reasoning in his journal article [1] (linked in the amicus).

He goes on to list multiple similar cases and how they fit the original intent of the law. He then further clarifies that it's not just about illegal content, but about all legal obligations:

Though ultimately the original reasoning matters little in this case, as the courts are the ones who interpret the law. In fact, Section 230 is one part of the larger Communications Decency Act, which was mostly struck down by the Supreme Court.

EDIT: Added quote about additional legal obligations.

[1]: https://jolt.richmond.edu/2020/08/27/the-origins-and-origina...
The Accusearch case was a situation in which the very act of reselling a specific kind of private information would've been illegal under the FTC Act if you temporarily ignore Section 230. If you add Section 230 into consideration, then you have to consider knowledge, but the knowledge analysis is trivial: Accusearch should've known that reselling any one phone number was illegal, so it doesn't matter whether Accusearch knew the actual phone numbers it sold. Similarly, a social media site that only allows blackout challenge posts would be illegal regardless of whether the site's employees know whether post #123 is actually a blackout challenge post. In contrast, most of the posts on TikTok are legal, and TikTok is designed for an indeterminate range of legal posts. Knowledge of specific posts matters.
Whether an intermediary has knowledge of specific content that is illegal to redistribute is very different from whether the intermediary has "knowledge" that the algorithm it designed to rank legally distributable content can "sometimes" produce a high ranking for "some" content that's illegal to distribute. The latter case can be split further into specific illegal content that the intermediary has knowledge of and illegal content that the intermediary lacks knowledge of.

Unless a law such as KOSA passes (which it shouldn't [1]), the intermediary has no legal obligation to search for the illegal content that it isn't yet aware of. The intermediary need only respond to reports, and depending on the volume of reports the intermediary isn't obligated to respond within a "short" time period (except in "intellectual property cases", which are explicitly exempt from Section 230).

"TikTok knows that TikTok has blackout challenge posts" is not knowledge of post PQR. "TikTok knows that post PQR on TikTok is a blackout challenge post" is knowledge of post PQR. Was TikTok aware that specific users were being recommended specific "blackout challenge" posts? If so, then TikTok should've deleted those posts. Afterward, TikTok employees should've known that its algorithm was recommending some blackout challenge posts to some users.

Suppose that TikTok employees are already aware of post PQR. Then TikTok has an obligation to delete PQR. If in a week blackout challenge post HIJ shows up in the recommendations for users @abc and @xyz, then TikTok shouldn't be liable for recommendations of HIJ until TikTok employees read a report about it and then confirm that HIJ is a blackout challenge post. Outwardly, @abc and @xyz will think that TikTok has done nothing or "not enough" even though TikTok removed PQR and isn't yet aware of HIJ until a second week passes. The algorithm doesn't create knowledge of HIJ no matter how high the algorithm ranks HIJ for user @abc. The algorithm may be TikTok's first-party speech, but the content that is being recommended is still third-party speech.

Suppose that @abc sues TikTok for failing to prevent HIJ from being recommended to @abc during the first elapsed week. The First Amendment would prevent TikTok from being held liable for HIJ (third-party speech that TikTok lacked knowledge of during the first week). As a statute that provides an immunity (as opposed to a defense) in situations involving redistribution of third-party speech, Section 230 would allow TikTok to dismiss the case early; early dismissals save time and court fees.

Does the featured ruling by the Third Circuit mean that Section 230 wouldn't apply to TikTok's recommendation of HIJ to @abc in the first elapsed week? Because if so, then I really don't think that the Third Circuit is reading Section 230 correctly. At the very least, the Third Circuit's ruling will create a chilling effect on complex algorithms, in violation of social media websites' First Amendment freedom of expression. And I don't believe that Ron Wyden and Chris Cox intended for websites to only sort user posts in chronological order (as multiple commenters on this post are hoping will happen as a result of the ruling) when they wrote Section 230.

[1] https://reason.com/2024/08/20/censoring-the-internet-wont-pr...
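To make the knowledge standard argued for above concrete, here is a toy sketch under that reading; the class, method names, and post IDs are purely illustrative assumptions and have nothing to do with the ruling or with TikTok's actual systems:

```python
from dataclasses import dataclass, field

@dataclass
class Platform:
    """Toy model: knowledge of a specific post, not of a category of posts, triggers obligations."""
    reported: set[str] = field(default_factory=set)   # post IDs users have reported
    confirmed: set[str] = field(default_factory=set)  # post IDs staff have reviewed and confirmed harmful
    removed: set[str] = field(default_factory=set)    # post IDs taken down

    def report(self, post_id: str) -> None:
        self.reported.add(post_id)

    def review(self, post_id: str, is_harmful: bool) -> None:
        # Knowledge of this specific post exists only once staff review a report about it.
        if post_id in self.reported and is_harmful:
            self.confirmed.add(post_id)

    def remove(self, post_id: str) -> None:
        self.removed.add(post_id)

    def liable_for_recommending(self, post_id: str) -> bool:
        # Under this reading, liability requires knowledge of the specific post
        # plus a failure to act on that knowledge.
        return post_id in self.confirmed and post_id not in self.removed

platform = Platform()
platform.report("PQR")
platform.review("PQR", is_harmful=True)
print(platform.liable_for_recommending("PQR"))  # True: known but still up
platform.remove("PQR")
print(platform.liable_for_recommending("PQR"))  # False: the platform acted on its knowledge
print(platform.liable_for_recommending("HIJ"))  # False: no knowledge of HIJ yet, per this model
```

In this sketch, HIJ creates no liability until someone reports it and staff confirm it, which is exactly the distinction between knowing "there are blackout challenge posts" and knowing "post HIJ is one."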
I'm skeptical that Ron Wyden anticipated algorithmic social media feeds in 1996. But I'm pretty sure he gets a decent amount of lobbying cash from interested parties.
> If my forum has a narrow scope (say, 4×4 offroading), and I delete a post that’s obviously by a human but is seriously off-topic (say, U.S. politics), does that make me legally liable for every single post I don’t delete?

No. From the court of appeals [1]:

> "We reach this conclusion specifically because TikTok’s promotion of a Blackout Challenge video on Nylah’s FYP was not contingent upon any specific user input. Had Nylah viewed a Blackout Challenge video through TikTok’s search function, rather than through her FYP, then TikTok may be viewed more like a repository of third-party content than an affirmative promoter of such content."

So, assuming that users on your forum choose some kind of "4x4 Topic", they're intending to navigate a repository of third-party content. If you curate that repository, it's still a collection of third-party content and not your own speech. Now, if you had a landing page that showed "featured content", it seems like you could get into trouble. Although one wonders what the difference is between navigating to a "4x4 Topic" and to "Featured Content", since both are user actions.

[1]: https://fingfx.thomsonreuters.com/gfx/legaldocs/mopaqabzypa/...
I think the ultimate problem is that social media is not unbiased: it curates what people are shown. In that role, the platforms are no longer impartial parties merely hosting content. It seems this ruling is saying that the curation being algorithmic does not absolve the companies of liability.

In a very general sense, this ruling could be seen as a form of net neutrality. Currently, social media platforms favor certain content while down-weighting other content. Sure, it might be at a different level than peering agreements between ISPs and websites, but it amounts to a similar phenomenon when most people interact with social media through the feed.
Honestly, I think I'd love to see what changes this ruling brings about. HN is quite literally the only social media site (loosely interpreted) I even have an account on anymore, mainly because of how truly awful all the sites have become. Maybe this will make social media more palatable again? Maybe not, but I'm inclined to see what shakes out.