I'd like to hear an informed take from anybody who thinks that Facebook's fact-checkers were a better product feature than Community Notes.
All of the articles I'm seeing about this online are ideological, but this feels like the kind of decision that should have been in the works for multiple quarters now, given how effective Notes have been, and how comically ineffective and off-putting fact-checkers have been. The user experience of fact-checkers (forget about people pushing bogus facts, I just mean for ordinary people who primarily consume content rather than producing it) is roughly that of a PSA ad spot series saying "this platform is full of junk, be on your guard".
* "Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content." - like people being in Texas makes them more objective?!
The actual mechanisms of running a social media network at scale are tricky and I think most of us would be fine with some experimentation. But it looks pretty political in the broader context, so maybe it's just a way of saying that certain kinds of 'content' like attacking trans people are going to be ok now.
I can't quite quit FB entirely, but Threads looks like a much less interesting option with Bluesky being available and gaining in popularity.
I get how the partisan story is easy to tell here, but I'm saying something pretty specific: I think it would have been product development malpractice for this decision not to have been in the works for many, many months, long before the GOP takeover of the federal government was a safe bet. Community Notes has been that successful, and Facebook's fact-checkers have been that much of a product disaster.
I've never seen a wrong Facebook fact-check; I am warmly supportive of intrusive moderation; that's not where I'm coming from.
Clegg left a few days ago, and the Oversight Board issued a statement which sounds like they were in the dark:
> “We look forward to working with Meta in the coming weeks to understand the changes in greater detail, ensuring its new approach can be as effective and speech-friendly as possible.” [1]
So it's possible this really was decided only recently. It might have been "in the works" in the C-suite for a bit longer, but there doesn't seem to be any evidence it was widely known before very recently.
As a product decision taken independently, maybe. Running one of those things at scale with all kinds of people trying to subvert it for various reasons, including some downright evil ones, is not an easy task.
Announced together with everything else and given the timing, I just can't help but think there's a political component to all of it.
People move to other states due to state laws. City laws can easily be avoided by living and/or working just outside the city limits. Or more likely, state laws will preempt city laws that go against state level politics.
I don't at all doubt that they're going to do whatever they can to cast this presumably longstanding product plan in the light most favorable to the governing majority! I just want to get the causality right.
I don't understand though: What makes you think that you are getting the causality right? It seems to me like you're asserting the causality goes one direction, when there doesn't seem to be any evidence (at least in public) for that assertion at the moment. Have I just missed some other information on this that you're basing this on?
I think he is suggesting that this move has favorable PR optics for the incoming administration. Making it appear like a conservative victory may give them some slack or earn them some favors.
Is it not a conservative culture-war victory designed to earn favors? There is no external evidence of this having been anything other than a contingency around November 6 of last year, so it's hard to definitively say it's one or the other.
> The Texas thing sounds like PR but isn't really given their huge offices in Austin
That distinctly smells like pork barrel politicking: we're moving jobs from Commiefornia to your great state, and if your criminal [1] state AG sues us again over this function, he'll be putting Texans out of a job.
1. Allegedly. Meta wouldn't dare call him that, but he agreed to 100 hours of community service and paying restitution to those he allegedly defrauded to avoid a trial.
To be clear: I absolutely do not dispute this. But in 2025 it seems pretty clear that you cannot run a mainstream large-scale social network without some kind of moderation, so every platform is going to do something. And all I'm saying is: what Facebook was doing before was bad, just as a product experience. Just wretched. Solved no problems, mostly surfaced stuff I wouldn't have paid attention to in the first place.
How does an average joe evaluate the claim that their content moderation was bad? Cause folks on the left seem very upset that it's being replaced by notes, and folks on the right seem very glad that it's going. How do I judge this for myself?
What I've read of the Community Notes algorithm casts it as far more neutral than any hiring decisions about professional content moderators could possibly be. If it's "political," it's political in the same way that comparing the GDP of various countries is political: reality gives the verdict, and the politics is in whether that verdict was the optimal one to ask reality for.
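For the curious, here is a minimal sketch of the bridging idea the open-source Community Notes scorer is built around, as I understand it: fit rating ≈ mu + rater_bias + note_bias + rater_vector · note_vector, then rank notes by the note bias, i.e. the helpfulness left over after the viewpoint factor is accounted for. The toy data, hyperparameters, and names below are my own illustration, not anyone's production code:

    import numpy as np

    rng = np.random.default_rng(0)
    n_raters, n_notes, dim = 20, 4, 1

    # ratings[i, j]: 1 = rater i found note j helpful, 0 = not helpful, NaN = unrated
    ratings = np.full((n_raters, n_notes), np.nan)
    ratings[:10, 0] = 1.0    # note 0: loved by one half of the raters...
    ratings[10:, 0] = 0.0    # ...panned by the other half
    ratings[:, 1] = 1.0      # note 1: rated helpful across the whole spectrum
    mask = ~np.isnan(ratings)
    obs_r = np.maximum(mask.sum(1), 1)   # ratings per rater
    obs_n = np.maximum(mask.sum(0), 1)   # ratings per note

    mu, b_r, b_n = 0.0, np.zeros(n_raters), np.zeros(n_notes)
    f_r = rng.normal(0, 0.1, (n_raters, dim))   # rater "viewpoint" vectors
    f_n = rng.normal(0, 0.1, (n_notes, dim))    # note "viewpoint" vectors

    lr, reg = 0.05, 0.03
    for _ in range(3000):
        pred = mu + b_r[:, None] + b_n[None, :] + f_r @ f_n.T
        err = np.where(mask, ratings - pred, 0.0)
        mu  += lr * err.sum() / mask.sum()
        b_r += lr * (err.sum(1) / obs_r - reg * b_r)
        b_n += lr * (err.sum(0) / obs_n - reg * b_n)
        f_r += lr * (err @ f_n / obs_r[:, None] - reg * f_r)
        f_n += lr * (err.T @ f_r / obs_n[:, None] - reg * f_n)

    # Rank by note bias: helpfulness NOT explained by the viewpoint factor.
    print(np.round(b_n, 2))   # the consensus note (index 1) outscores the split one (index 0)

The consensus note only wins because raters who normally disagree both rated it helpful; a raw upvote count would score the partisan note just as highly, which is the sense in which the design is viewpoint-neutral.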
People are going to believe it is political whether or not it is. I've been working hard at talking about difficult issues in a depoliticized frame. It's hard.
Lately I've been talking with a lot of people trying to help find answers, and something I am learning is to delete all the duckspeak from my vocabulary (there was an otherwise good article about "placement poverty" in medical education that I didn't post last weekend because "X poverty" is duckspeak).
If I say anything at all to anyone about this or that and get a negative response about the words I use I take it very seriously and most of the time resolve to use different words in future.
called "The Principles of Newspeak" that coins the word.
The slogan "My Body My Choice" has some of this character. It rolls off the tongue and stops thought. There is no nuance: the rights of the mother are inalienable. Opponents will talk about the inalienable rights of the fetus. There is no room for compromise but setting some temporal point in the pregnancy is a compromise like Solomon's that makes sense to the disengaged but gives no satisfaction to people who see it as moral issue. [1]
Note that this phrase turned out to be content-free and perfectly portable when it got picked up by anti-vaccine activists.
"Illegal Alien" is a masterpiece of language engineering that stands on its own for effectiveness. I mean, we all follow laws that we don't agree with or live with the threat of arrest and imprisonment if we don't. It's easy to see somebody breaking the law and not getting caught as a threat to the legitimacy of the system. "Undocumented Migrant" has been introduced as an alternative but it just doesn't roll off the tongue in the same way and since it is not so entrenched it comes across more as language engineering.
(Practically as opposed to morally: Americans would rather work at Burger King than get a few more $ per hour to get up early for difficult and dirty work which might have you toiling in the heat or the cold. An American would see a farmhand job at a dairy farm as a dead-end job. A Mexican is an experienced ag worker who might want to save up money to buy his own farm. Which one does the dairy farmer want to have handling his cows?)
My son bristles at "healthcare" as a word consistently used for abortion and transgender medicine to the point where he shows microexpressions when reading discussions about access to healthcare in general.
Teaching small children that the alleged difference between two words will make a difference in the very difficult problems that (say) black [2] people have in America trivializes those problems. It trains them to become the kind of people who will trade memes online as opposed to facing those problems. In the meantime I've heard so many right wingers repetitively talk about "Equality of opportunity" vs "Equality of outcomes", which is a real point but reduces a complex and fraught problem to a single axis.
[2] Bloomberg Businessweek has a policy to always say capital B when they talk about "Black" people. Do black people care? Does it really help them? What side of the barricades are they on when they write gushing articles about Bernard Arnault and review $250 bottles of booze and $3000/night hotel rooms?
> My son bristles at "healthcare" as a word consistently used for abortion and transgender medicine
In terms of cost, the items you cite are vanishingly small, and to conflate the two, one must have no experience of the medical system beyond twitter.
Is your son on his own? Did he have to pay the cost for a broken limb or a child's disease, or has he seen a family member go through cancer? Maybe he would have a better sense of what "healthcare" means if he had actually been facing these situations.
> "Undocumented Migrant" has been introduced as an alternative but it just doesn't roll off the tongue in the same way and since it is not so entrenched it comes across more as language engineering.
It definitely comes across as language engineering. It's a legitimate category ("I'm an asylum seeker directly on my way to claim asylum from the nearest office") but expanded to include people who are just in the country illegally. It's too obvious to convince many people for very long.
No. I can't stand it that so many Americans have fallen under the spell of a fraudster while others are sharing hateful memes online and think it is activism. I need stronger language, not weaker language.
I don't like the word "debate" because it makes me think of a high school debate where you are assigned which side of the issue to argue and it is about winning or losing.
In the current situation, people feel they have exactly one candidate to vote for every time, and thus we have no ability to vote out corrupt politicians. The political class wins and the rest of us lose.
(I am so concerned about people's inability or reluctance to change that I've experienced a call to the ministry and I'm working to use practices that I developed for selfish ends in the past to help others. Ideally when I offend you I want to strike you at the core and leave you haunted for months and not be able to think about the issue the same way ever again. If you're reacting to bits of trash somebody else stuck on me that I'm not aware of, I'm not going to get that strike in.)
Actually very few things have to be political. Politicising, that is rendering the concept to decision by a "body politic" is a choice that we're making right now, and we could choose to not do that. In fact, we have done that throughout our nation's history, and it's only in the last 20 years that I've seen the rise of "everything is political speech" to the degree that the brand of beans you buy in a store signals something to some group.
To wit there are a lot of totalitarians out there, and just because some group claims to be on your side or looking out for your interests versus some other group it doesn't mean they don't want your mind, body, and soul for their own purposes. We must take it upon ourselves to think for ourselves and to hold our own interests rather than to adopt the interests of the group we're in. Humans can engage in enterprise as a group for their own reasons, and we ought to embrace that instead of seeking to identify so wholly with the group that we lose ourselves.
Modern progressives shut themselves off from any ideas they don’t already agree with, making it impossible for them to discern whether what they believe is true or not.
Of course this is also true of many religious conservatives. It’s just now equally true of those on the far left.
What about them? That they exist? No one disputes that. That illegal immigrants cause crime? We have hard data on that; it's not true. That they are a drain on society via social programs? We have data on that too; they get taxes withheld but cannot claim refunds and cannot enroll in social benefit programs due to their lack of SSN.
On any topic you want to pick it's typically the radical right wing who have their fingers in their ears.
The people who think illegal immigrants shouldn't be illegal don't think anyone should be illegal. What's the double standard? It's not like they think black people should be allowed in but white people shouldn't.
What's hard to grasp is how you think this applies to a discussion about differing facts based on political leaning. Nobody disagrees with the facts here, only on what should be done going forward. So, not really relevant to the discussion.
Is it universally true that every truth test requires leveraging the existence of false claims/things I don’t agree with? For example if Socrates is a man, if all men are mortal, what false fact would you need to draw the logical conclusion? Or am I missing your point?
I’m not reflecting this idea, of course, because I’m a progressive. It does seem a bit imaginary, though.
Is climate change driven by human activity? Do males have a natural advantage in sports? Do vaccines cause autism? Does rent control make housing more available?
The major political tribes are full of BS, because politics mostly isn't driven by disagreements about facts but by conflicting material interests. Partisans believe what's convenient.
How do you distinguish partisans from actual knowledge? The Steve Bannon philosophy of flood the zone with shit so it all looks the same seems to have killed public discourse IMO. It is easy to label everyone as partisans.
To your questions, the best explanations for climate change are human causes (and with very considerable evidence).
Women have higher pain tolerances and greater natural buoyancy, they are greatly advantaged at long distance cold water swimming. Many other sports require physical size and/or strength - so it does depend. Vaccines have no evidence of _causing_ autism, and the big paper that made that claim was retracted. I don't know about rent control and do not know what data exists.
Yeah, the answer of "yes, and here is all the evidence" just doesn't seem to fly. I feel that trolling and trolls, and science illiteracy, have simply won the day.
> Do males have a natural advantage in sports? Do vaccines cause autism?
I won't argue about the other two, BUT.
We have facts for contact sports and for speed and strength sports, we've had these facts for millennia.
For the vaccine one, we also have facts. You're more likely to win the lottery than to get autism from them. I think they're probably the same odds as dying from a potted plant falling on your head while walking but anti vaxxers don't seem to be wearing helmets everywhere, that's so weird...
I don't think any of these are ambiguous. My point is that sometimes right wingers take the nonsense position and sometimes left wingers take the nonsense position. Neither side reliably follows the evidence or "believes the science" so glib lines like "reality has a liberal bias" are shallow and silly.
The point of the phrase "reality has a liberal bias" is not "liberals never take a nonsense position", it's "more of the facts that liberals [just as tribalistically] believe in happen to also be true, when compared to conservatives".
That something like this might happen is not surprising. If you have two political groups and you assign both beliefs from a bag in a purely random process, odds are that one of the groups will end up with more true beliefs than the other, through no virtue of their own but through pure chance.
Conservatives believe the truth supports conservative beliefs, and liberals believe it supports liberal beliefs. This type of comment is about the same as just saying "I am a liberal", which almost by definition means you think liberal beliefs are true. It doesn't add much to the conversation.
Well, no. It means when facts are tested by objective means, more of them align with liberal beliefs than conservative beliefs. Unless you believe that facts can't be objectively tested?
I am on the US left by any survey measure of my principles, and while I'm not from the US, this logic still sounds juvenile. Stooping to the level where a single person is taken to represent a whole side: did you see Joe at the debates?
Oh boy. Are you trying to do the "both sides" thing? Joe was pretty bad at the debates. His voice was weak. He stuttered. He misspoke. It was bad. And then what happened? He stepped down as the party's candidate, and the rest is history, as they say.
That is quite different from making up wild stories about immigrants eating cats, fabricating nonsense about widespread election fraud / stolen elections, suggesting injecting bleach is a sufficient remedy for coronavirus, sharpie-ing atop hurricane maps to prove previous incorrect statements were totally real because... look: sharpie! And this man has never had more widespread support.
These. Parties. Are. Not. The. Same.
By the way, it wasn't just one man making this "immigrants are eating our pets" thing. In addition to Trump, other prominent Republicans such as J.D. Vance, Marc Molinaro, and Laura Loomer also repeated this lie.
Statistically, most of the US seems to believe that the Democratic party is obviously worse at the Federal level. They just lost an election on every metric, although they did win the lost-to-Trump-twice award after almost a decade of opportunities to come up with an effective counter-Trump strategy.
He's been the undisputed head of the "conservative" party in the U.S. for 10 years now. And just won his second election, this time winning the popular vote. If that's not mainstream, I don't know what is.
Accurate. It's difficult to argue that the mainstream US Republican isn't a populist now. Twice is not a fluke.
And ever since the 70s there's been a tension between the blocs of the Republican party: fiscal business conservatives, foreign policy hawks, and rural/religious conservatives.
After a couple of decades of getting the final group fired up, they decided they wanted to drive. And the primary system rewarded them.
> the final group fired up, they decided they wanted to drive. And the primary system rewarded them.
I've been an outside observer of US politics for many decades, I'd characterize what happened not so much as the primary system rewarding them but more as a consummate grifter and snakeoil carpetbagger fooling them into thinking they've won.
They got fired up, they got the candidate they voted for, I'm not sure the expected rewards will follow as hoped and expected.
I have definitely heard conservatives complain that reality has a left-wing bias. Not in quite those words, but close enough that you wonder if it’s possible to die of cognitive dissonance.
Tim Walz claimed there is "no guarantee to free speech on misinformation or hate speech, and especially around our democracy." That's false--the First Amendment has no such carveouts for those things. So it's concerning that Walz would think otherwise.
Hillary Clinton has made similar comments, saying "But I also think there are Americans who are engaged in this kind of propaganda, and whether they should be civilly, or even in some cases criminally, charged is something that would be a better deterrence, because the Russians are unlikely, except in a very few cases, to ever stand trial in the United States." But again, there is no First Amendment carveout for propaganda, Russian or otherwise.
There are some limits to protected speech, but they're rare and mostly limited to direct incitement of a crime or other threat.
In the final analysis, I don't think it matters. The former leads to the latter. The same is true of things like attempts to keep the LGB, but toss the T. The T follows from the LGB. The LGB already presupposes all that is needed to infer the T. You would be drawing an artificial line in the sand otherwise. It's ad hoc and doesn't work.
One common error people make is that they think they can pick and choose beliefs and positions a la carte and expect them to remain stable as fixed parameters of the environment. But that's not how ideas work. They aren't static in this way. Rather, they function much like presuppositions that, over time, are worked out, dialectically, if you will. Society is like a machine that works out the consequences of ideas over time.
So, I always find it amusing when anyone appeals to some fondly remembered status quo that held in a prior decade, believing that all one needs to do is return to that status quo "verbatim" and all will be well, as if these things were just a matter of arranging the furniture a certain way. You can't roll back the clock, and if you could, you would only recreate a similar development that led to the undesirable state of affairs in the first place.
This isn't an argument for some kind of Big P progressivism, or against tradition, only an account of how cultures develop over time. In our case, by understanding the tensions and contradictions within the liberal tradition, we can come to explain why Western societies have moved in a certain direction over the last 200 years. Heck, we can go back further to the influence of Luther, or even further to Ockham, without whose ideas liberalism would arguably not exist.
If you begin with liberal blinders on, then that might be the picture you receive.
(I define here "liberal" and "liberalism" not in the lazy, colloquial partisan sense, as in "own the libs!" or "left wing", but the philosophical definition in the tradition of Hobbes, Locke, and others. In this sense, "we" are all liberals in the liberal West.)
> what does or doesn’t constitute a “fact”, are all political topics.
It clearly is not. A fact is a fact by definition, regardless of what anyone happens to feel about it. There are facts that are known to be true beyond all possible doubt.
If it is uncertain or in doubt, then it's not a fact and shouldn't be corrected by fact checkers.
The way Community Notes usually ends up working in practice is as comments that provide sourced context that may be [arguably intentionally] omitted from a topic. For instance, if it happens to be that there have been 27 different studies showing no statistically significant reduction in the spread of infectious diseases when healthy individuals wear masks, then that would likely be a community note on the first one. And vice versa, if rent control has been demonstrated to keep rents below the surrounding means in the cities of Blah, Bleh, and Bluh, then that would often end up as a community note on the second.
It basically helps reduce the hyperbole/echo-chamber effect of such comments/topics. Vice versa, if those topics were "Respirator masks are useless." and "Rent control is always good." then the community notes would tend to go in the opposite direction. It's just a really good idea. For that matter, I think a similar algorithm would also work well on general upvote systems at large.
I'd also add that one of the biggest issues with "fact checkers" was not only sometimes questionable checking, but also a selection bias - where the ideological bias becomes rather overt in both directions. Whether that be in deciding to "fact check" the Babylon Bee (in an overt effort to get it deranked), or in choosing not to fact check statements from the lying politicians that one happens to like.
Well this is definitely false. If you're a politician who can afford a nice place then rent control is a great idea: it gets you elected (look, I made things cheap for you) and keeps you elected (look, I will solve all the problems underpriced rent brings).
Your example is a false equivalence. Economics does not define "good ideas" and "bad ideas," it only attempts to model resource dynamics. Whereas the spread of infectious disease is clearly quantifiable regardless of value assignment.
The presumed goal of rent control is to prevent rents from rising. If they actually cause rents to rise even more quickly then they are indeed "bad" (at achieving this goal).
The goal of rent control, as I infer from the mechanism, is to prevent existing tenants from being priced out of their current homes (eventually leading to eviction) - at least as I have seen in the US.
If the goal were to prevent rents from rising, the mechanism would do so directly, ie. regulate all rent, rather than limiting to continued rentals on certain types of property. Which would by definition prevent rents from rising, presumably along with other undesirable effects.
Anyways, the whole issue with conflating "bad" with objective consequences is the "presumed goal," which is of course totally subjective.
Partly true, but besides the point. Making a blanket statement like "economics says rent control is bad," is only marginally better than saying "physics says nuclear weapons are bad." There is a critical assumption of values which is totally outside the objective of study.
Here's another one - "Trump colluded with Putin to hack the election in 2016".
I have never seen an accepted fact checking site answer this, which is very strange since it is such an enormous and grave conspiracy theory if it were true. The Mueller report is extensive and quite conclusive in stating that no such evidence of collusion (conspiracy) was found. Yet fact checkers are happy to check peripheral and far less consequential claims around the case for some reason (e.g., https://www.snopes.com/fact-check/mueller-report-no-obstruct...), but are strangely hesitant to address the elephant in the room.
Or for another example, there were many false or poorly substantiated claims made about covid and vaccines during the pandemic. I saw "reputable" fact checkers address a certain set of those claims about the virus and drugs, but were strangely silent when it came to a different set of claims.
So fact checkers don't even need to provide false content at all, they can be very political and biased simply by carefully choosing exactly what "facts" or claims that they address.
But even straightforward stuff goes unchallenged. Jada Pinkett Smith released a movie trailer claiming Cleopatra was black. When NBC covered the issue, they couldn’t even bring themselves to fact check her. They did a “he said, she said” article asserting that Egypt contested whether Cleopatra was black: https://www.nbcnews.com/news/world/queen-cleopatra-black-egy....
That is a dilemma humanity has struggled with for millennia. Humans are very bad at recognizing their own biases and admitting to themselves they were wrong about something.
What do you mean how? Science. The process of science.
There might be people who want to believe gravity on Earth accelerates objects at 1m/s^2, but we can trivially establish through countless experiments repeatable by anyone who wants to try that this is not true.
If you can't measure it or repeatably demonstrate it then it's probably not a fact. If it can, then it is a fact and no amount of emotionally wanting to believe something else can make it not a fact.
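To make the measurability point concrete, here is the back-of-the-envelope version anyone can run; the drop height and timing below are made-up but realistic numbers, not an actual measurement:

    # Estimate g from a timed drop: h = 1/2 * g * t^2  =>  g = 2h / t^2
    drop_height_m = 2.0
    fall_time_s = 0.64            # say, the average of a few stopwatch timings
    g = 2 * drop_height_m / fall_time_s ** 2
    print(round(g, 1))            # ~9.8 m/s^2, nowhere near 1 m/s^2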
The irony is that the example you cite, i.e. F = G * m1 * m2 / r^2 is demonstrably not the correct formula for gravity.
Science, the process of science, does not prove something as fact. It can only eliminate non-facts, and even then, the experiments may be flawed in their recognition.
> If you can't measure it or repeatably demonstrate it then it's probably not a fact. If it can, then it is a fact and no amount of emotionally wanting to believe something else can make it not a fact.
This is demonstrably false. If you witness an event once, you cannot necessarily repeat it, but you know for a fact that it happened. Unless you redefine the term "fact" narrowly, what you suggested is an ideology.
See how even the definition of "fact" is up for debate.
> Science, the process of science, does not prove something as fact.
I intentionally picked a wrong value for Earth gravity instead of the correct one to avoid nitpickery on precision, location, yada yada.
If someone has a feeling that Earth's gravity accelerates at 1m/s^2, they're just flat out wrong full stop. This is the problem with the anti-intellectual crowd who believes everyone's opinion has equal weight. No, it doesn't. If someone wants to believe Earth's gravity accelerates at 1m/s^2, then their opinion (on that topic) is worthless because it is known to be false and they don't deserve any recognition for the nonsense. Facts are facts, beliefs don't make them go away.
> This is demonstrably false. If you witness an event once, you cannot necessarily repeat it, but you know for a fact that it happened.
Not at all. Human memory is fallible so if you are the only one who saw that event and swear it is true that does not make it a fact no matter how hard you believe it.
That's why the scientific process requires repeatable results that anyone can (re)validate over and over, not one-off recollections.
> Earth's gravity accelerates at 1m/s^2, they're just flat out wrong full stop
You do realize it depends on the distance of the object to Earth? So perhaps you are wrong not them depending on the context.
Now someone comes up and says I am nitpicking, blah blah... well, the author should have been clear and not stated a falsehood as fact! This is just your belief, which does not change the incompleteness/incorrectness of the statement (as per the original post).
And this is the whole goddamn point. What's "fact" to someone can be incorrect, half-correct, wrong with completely good faith, or wrong with intent to mislead, etc. Who gets to decide all this is not as simple as "I am ScienceTM" Dr Fauci style.
You missed a basic element of what they said: "can't measure it or repeatably demonstrate it"; seeing a non-reproducible event with your eyes is a form of measurement, and that measurement could in principle be done by an objective machine (recorded by a camera). The potential for objective evidence is what distinguishes a matter of fact from a matter of opinion.
As to the "correct formula for gravity" - that's just bad faith nitpicking. "Newtonian gravitation is a fact" is both a strawman and completely irrelevant when it comes to social media fact checkers.
> You missed a basic element of what they said: "can't measure it or repeatably demonstrate it"; seeing a non-reproducible event with your eyes is a form of measurement, and that measurement could in principle be done by an objective machine (recorded by a camera). The potential for objective evidence is what distinguishes a matter of fact from a matter of opinion.
No. Recording an experiment does not constitute scientific repeatability of an experiment. (Not to mention Quantum Mechanics explicitly rejects your claim as a universal principle at the micro level.)
> As to the "correct formula for gravity" - that's just bad faith nitpicking. "Newtonian gravitation is a fact" is both a strawman and completely irrelevant when it comes to social media fact checkers.
No, it is not a strawman at all. It clearly illustrates the point via an example of something we have known to be false for about a century, yet not only do we not censor it on social media, we teach it to kids, and almost no one would object to it.
So, where do you draw the line?
I posit there exist facts that are unknowable by the scientific method. The GP claimed science as the end-all-be-all method to fact-check. My statement is that it's not sound, nor complete, in its ability to fact-check.
The scientific process works amazingly well for repeatable experiments, but it doesn't do anything at all for non-repeatable events. You can't use the scientific method to figure out who blew up the Nordstream pipeline, just for a relatively recent and hotly debated political fact.
And if I take a balloon, fill it with the right helium/air ratio so it sinks at exactly 1m/s²? It's a provable scientific fact that it's falling at 1m/s². Even if I leave off the part that it's a balloon, and talk about antigravity fields or aliens or some crap, and "let you draw your own conclusions", the fact that the balloon fell at that rate would still be demonstrably true.
People want to sell you lies and get you to believe them, and they'll give all the half truths they can to support their version of the truth.
They'll use misleading graphs with real numbers, so you can fact-check the numbers on the graph and come away thinking the graph represents the truth of the matter. But they'll use X axes that don't start at zero, logarithmic Y axes that don't say they're logarithmic, or pie charts viewed from a funny angle, with slices that don't represent the percentages they're labeled with, or with percentages that add up to more than 100%.
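(The truncated-axis trick in particular is easy to demonstrate for yourself; the sketch below plots the same two made-up numbers honestly and then with a y-axis that starts at 98, which makes a 0.8-point gap look enormous.)

    # Same fabricated numbers, two presentations: honest axis vs. truncated axis.
    import matplotlib.pyplot as plt

    labels, values = ["Before", "After"], [98.1, 98.9]

    fig, (honest, cropped) = plt.subplots(1, 2, figsize=(8, 3))
    honest.bar(labels, values)
    honest.set_ylim(0, 100)
    honest.set_title("Y axis starts at 0")

    cropped.bar(labels, values)
    cropped.set_ylim(98, 99)      # the 0.8-point gap now fills most of the plot
    cropped.set_title("Y axis starts at 98")

    plt.tight_layout()
    plt.show()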
If all we wanted to run were trivial physics experiments, we'd be golden. The real world of social media facts include things we can't run science experiments for, or go back in time to redo, like economic stats that use a different formula today and there's not enough information to see what it was in the distant past. So we get these narratives from people who are trying to convince us to believe theirs by leaving off important context. Which is totally dishonest of them, but they have a vested interest in us believing a particular narrative.
You're reading them as saying that moderation is suspect because it's political, and all I read them to be saying is that political considerations are unavoidable when you moderate, in a manner distinctive to moderation.
Answering this question has to be a political topic, because there's an infinite stream of people asking you the question (by posting things that may need to be fact checked), and you have to decide which ones to prioritize.
> 2 : any of the circumstances of a case that exist or are alleged to exist in reality : a thing whose actual occurrence or existence is to be determined by the evidence presented at trial (see also "finding of fact", "judicial notice", "question of fact", "trier of fact"; compare "law", "opinion")
For most of my life, I would have agreed with you.
As I've gotten older, I've become increasingly skeptical of the idea of a "fact".
There's no way to separate information from human context. Even seemingly obvious things like "that shirt is blue". To who? My wife sees it as green, frequently.
Or things are reduced to tautological nonsense like "gravity keeps us on the ground". Hard fact, right? But define gravity. A physicist will give you an answer, that may or may not mean much. A layman's definition might be something like "it's the thing that keeps us stuck to the ground", and now we're back to tautological nonsense. The entire "water is wet" class of "facts".
Anything less trite instantly becomes less fact-like the more humans are involved.
"Trump is a criminal" many people would argue passionately that this is a hard, incontrovertible fact.
Nearly as many, (or maybe more?) would argue the opposite.
I like the approach of the Fair Witness in Stranger In A Strange Land: "What color is that house?" "It's yellow on this side."
I'm increasingly convinced that the belief in "facts" is more about the desire to be right and know things than anything to do with objective reality.
> As I've gotten older, I've become increasingly skeptical of the idea of a "fact".
I think the problem actually lies in your personal interpretation of what a "fact" should be, and how it contrasts with what facts actually are.
The definition of "fact" is "things that are known or proven to be true". Consequently, if you can prove that an assertion is not true then you prove it is not a fact. If your wife claims your shirt is green and not blue, does that refute the fact that your shirt is actually blue? No. Can you prove your shirt is blue? Can she prove your shirt is green? That is the critical aspect.
Just because someone disagrees with you, that does not mean either of you is right or wrong. You can both be stating facts if it just so happens you're presuming definitions that don't match exactly in specific critical aspects.
If your shirt is cyan, you can argue it's a fact the shirt is blue and argue it's a fact the shirt is green, because in RGB space both the blue channel and the green channel are saturated. You can also state that it's a fact that your shirt is neither blue nor green, because there's a specific definition for that color and this one is in fact cyan, not blue or green.
If you can prove your assertion, it's a fact. If you're making claims you cannot prove or even support, they are not facts.
And more importantly, the problem tackled by fact checking is people making claims that are patently and ostentatiously false and fabricated in order to manipulate public perception and opinions. Does anyone care if your shirt is blue or green? No. Does anyone care if, say, Haitians are eating your pets? Yes.
Facts exist. Your first sentence has 11 words. Easy to verify, right? Doesn't matter who's counting.
May I suggest that your confusion comes from a conflation between facts and generalizations. Hard facts exist in strictly defined contexts. Relax the context, and you eventually need to reach for generalizations that are less precise and potentially ambiguous.
If somebody asked me whether the cup in your hand would fall and shatter when you release it from your grip, my answer would of course depend on a few things I pick up from the context: what gravitational attraction would the cup experience in your current location? What material is the cup made of (porcelain, metal...)? So if we're standing on Earth and the cup was made of porcelain, I'd answer that it would fall and likely shatter. Doesn't mean that any cup would shatter. Metal cups don't. But that's a different fact. So there is no generalized fact that all cups shatter when they fall. Some do, some don't. We can play the same game with gravity. The cup wouldn't fall if we were floating on the ISS. So the same cup doesn't fall in all locations it might conceivably be.
Many people don't want to deal with the level of precision that hard facts require. They get sloppy and then start these endless discussions of "this isn't true because..." etc. and everyone gets gradually more confused because nothing seems to be entirely true or false. The fundamental counter here is to dig in and tease the generalizations apart until they become sets of constrained hard facts.
It's, I think, quite relevant here to note that "word" is a famously hard to define concept in linguistics. That is, there is no generalized definition of the concept "word" that works across languages, writing systems (e.g. Chinese and Japanese writing don't traditionally use spaces to separate words), and ways of analyzing language (phonological words are different from grammatical words).
So to make your sentence more accurate, you'd have to say "there are 11 groups of letters separated by whitespace characters or punctuation before your first period".
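Even in English, with spaces, the count depends on what you decide a "word" is. A toy illustration (my own example sentence, and two equally arbitrary definitions):

    import re

    sentence = "It's a state-of-the-art fact-checker."
    by_whitespace = sentence.split()                      # treats "state-of-the-art" as one word
    by_letter_runs = re.findall(r"[A-Za-z]+", sentence)   # treats it as four, and "It's" as two
    print(len(by_whitespace), len(by_letter_runs))        # 4 vs 9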
“Facts is facts” works for counting words in a sentence.
It does not work for anything with nuance or context, or for unprovable propositions. It is a fact that there is no elephant in my house. But if you want to doubt that fact for the lulz or for profit, I will be hard pressed to prove it.
That’s where our modern populist / fascists have weaponized disingenuousness to prove that “up is down” is just as valid a statement as “up is up”.
While I get your point, and I think it's strong, I'm entirely unconvinced.
Everything we see, do and understand exists in a context window of an individual. We have a shared language, with which we can inexpertly communicate shared concepts. That language is terrible at communicating certain concepts, so we've invented things like math and counting to try to become more precise. It doesn't make those things "true" universally. It makes them consistent within a certain context.
How far is it from Dallas to Houston? On a paper map, it might be a few inches. True, within that context. Or you might get an answer for road miles. Or as the crow flies. In miles? Kilometers? It's only fairly recently (in human history) that we've even had somewhat consistent units of measure. And that whole conversation presupposes an enormous amount of cultural knowledge and context - would that question mean anything to a native tribesman in Africa without an enormous amount of inculturation? Are their facts the same?
I'm not trying to make a "nothing is true, we can't know anything" kind of argument, that's lazy thinking.
I'm making an argument for maintaining skepticism in everything, even (especially?) things that you know for sure.
You still have to distinguish between hard, absolute facts which definitely exist and representations thereof in human language. The facts never change (the distance between Dallas and Houston doesn't change while we are having this conversation), but accurate descriptions require additional concepts and now we get into the imprecise world of human communication. Doubting the precision and accuracy of human language is a fair point, but that doesn't make facts themselves subjective.
I admire the conviction that things become absolutely true at a sufficient level of specification.
So long as facts are represented in language, they are subject to language’s imprecision and subjectivity. And I don’t think that platonic ideals of facts, independent of representation, have much utility.
> How far is it from Dallas to Houston? On a paper map, it might be a few inches. True, within that context. Or you might get an answer for road miles. Or as the crow flies. In miles? Kilometers? It's only fairly recently (in human history) that we've even had somewhat consistent units of measure.
No one’s opinion is going to make them closer together or farther apart, though. The distance (in whatever context) can be known. Can be objectively measured. That makes it a fact.
> I'm making an argument for maintaining skepticism in everything, even (especially?) things that you know for sure.
Are you skeptical about which way to put your feet when you get out of bed? Do you check to make sure every single time?
I think you are trying hard and writing a lot to miss the parent's point. Your thing about the number of words in the sentence is like what the parent is mistakenly calling "tautological"; another way to say it is that it's a blatantly obvious, banal observation. That is not the type of thing we are talking about here. This entire post is about "facts" and "fact checking" on socio-political issues, the kinds of things for which there are fact checkers. The parent is obviously correct. Just look at the state of actual "fact checking" of this variety in the real world. There is a lot of controversy, and a lot of words are used in a very loose way; these are not simple physics problems that you can punch into a TI-86. The issue is clearly "who are the fact checkers", or put another way, "who decides the facts". In a court of law in the US, the judge is the only arbiter of facts, and these cannot even be appealed.
Everything is political, which is one of the statements made above.
Facts are political. Because facts actively change how you live your life.
The playwright who created the “kill all climate denialists” play talks about how it took years for the play to get onto the stage.
And then how he began to see the truth of climate denialists' positions. That climate denialists believed the facts, and realized it meant their whole way of life was over. So they had to do something about it. They responded with denial. In a very real way, they lived their beliefs.
The fact of climate change IS political.
EVERYTHING is political, there is no fact that I cannot convert into a weapon, through some means or the other. Blaming fact checkers, is simply trying not to blame humans.
No, whether a coffee cup will break when you drop it or whatever that was is not a political thing. I'm not sure what the rest is about. To deny that there is a lot of subjectivity in the kinds of "facts" we are talking about here is just to deny reality.
1) While “facts” undisputedly exist, there are vanishingly few people sufficiently versed in both epistemology and myriad substantive areas for “fact checking” to make sense. In particular, domain experts are rarely sufficiently versed in epistemology to distinguish between facts they know by virtue of their expertise, and other things they also believe that aren’t really facts.
Moreover, the folks employed checking facts for companies like Facebook typically don’t have any expertise in either epistemology or the range of substantive areas in which they perform fact checking.
2) In practice, the issue in society isn’t “facts” but “trust.” You can build trust by being consistently correct about facts in a visible way. But you can’t beat people over the head with putative facts if they don’t trust you.
Subjective interpretation is very fundamental to being human and the way our minds work, but the underlying physical reality -- the wavelengths of light reflecting off the shirt -- can be measured objectively. A physicist might say that gravity is the curvature of spacetime caused by mass, which can be measured and tested.
Trump being a criminal is based on a shared legal and societal context. As a society, we accept that if you are convicted before a jury of your peers, you are guilty and have been convicted of a crime. Juries get it wrong and the justice system is flawed and has made mistakes. A black man in the 1920s (or even the 1960s for that matter) being tried for murder with absolutely no evidence and sentenced to death is a clear miscarriage and corruption of justice. The testimony of Trump's employees during the trial, who all said they loved working there (most of them still worked there), but weren't willing to lie on the stand about checks and phone calls they participated in, was pretty clear cut. This wasn't random people off the street of [insert preferred liberal enclave here] testifying against him: it was his own people who still work for him.
Some people prioritize political allegiance over legal judgments when it suits them.
If we dismissed facts entirely, science, medicine, and countless other fields reliant on objective reality would collapse.
This exchange is a great example of the subjective nature of our experiences: as I've gotten older -- 38 now -- I've come to accept more and more that some things are objective reality, whereas in my teens and 20s, I questioned reality and society on the structural level, torn down to the studs. From Plato's cave, to brain in the vat, Kant, the Hindu Brahman and Maya, Buddhism, etc.
Your Trump trial example actually proves the opposite of the point you’re making. CNN’s legal analyst of all people wrote an article explaining why the prosecutors “contorted the law” in pursuing Trump’s conviction: https://nymag.com/intelligencer/article/trump-was-convicted-.... Remember, the prosecutor initially declined to bring the case. And those problems with the underlying legal theory are still subject to review on appeal, which very well may result in the conviction being overturned. There’s actually a lot to debate there! Including whether the “shared context” you mention still holds in the circumstance of a blue-state jury trying Donald Trump. And I’d certainly not trust anyone—especially people without a legal background—to moderate people’s statements about Trump’s trial and conviction.
Heck, even lawyers don’t treat legal judgments as god-given “facts” except in specific legal circumstances. The questions at the back of every chapter in a law school textbook will ask the student whether a particular case was rightly decided or wrongly decided and why.
The better way to think about legal judgments is not in terms of “facts” but rather “process.” Even a final decision by the U.S. Supreme Court does not establish god given facts. It merely is the end of the line in a set of procedures that lead to a particular result in a particular case. But even judgments of the Supreme Court are second-guessed every day by 20-somethings in law schools around the country!
> Trump being a criminal is based on a shared legal and societal context.
To think that someone is a criminal, you have to believe they committed a crime. A trial is one way of establishing whether they did with certain standards of evidence and process. But it is very far from the be-all-end-all of the matter.
For example, virtually everyone believes OJ Simpson is a criminal, even though he was found not guilty at trial, and even though plenty of biases worked against him in that trial, theoretically.
For myself, I do believe that Trump was rightfully convicted and is a criminal. But that doesn't mean that "he was convicted" should force anyone else to believe this. It only means that a particular group of jurors believed it given the evidence that a judge found correctly collected and presented to them.
But, respectfully, even you, in your quest to cite facts, need it pointed out that your "facts" are not facts at all. The person in question, Trump, was not sentenced and therefore not "convicted" of anything. But this false claim is repeated a lot even by supposed "fact-checkers". Even the rest of that same paragraph is not made up of facts; you are trying to support some vague claim with appeals to things like "his own people wouldn't lie for him even though they loved him" or some such. You're bolstering a negative sentiment but not really clearly delineating anything resembling "facts". That's the issue that is being discussed and addressed by Meta at this point. Sure, we can say high school physics problems reflect facts of nature; that's nice, but this is not what all the fuss is about.
"in United States practice, conviction means a finding of guilt (i.e., a jury verdict or finding of fact by the judge) and imposition of sentence. If the defendant fled after the verdict but before sentencing, he or she has not been convicted,"
Not true in New York, where this particular trial took place. From your own link:
S 380.30 Time for pronouncing sentence.
In general. Sentence must be pronounced without unreasonable delay.
Court to fix time. Upon entering a conviction the court must:
(a) Fix a date for pronouncing sentence; or
(b) Fix a date for one of the pre-sentence proceedings specified in article four hundred; or
(c) Pronounce sentence on the date the conviction is entered in accordance with the provisions of subdivision three.
So not only is sentencing distinct from conviction semantically, it's also distinct legally in the state of New York.
This is an instance where semantics are nothing more than, well, semantics.
The people who say that Trump has been ”convicted but not sentenced” actually mean that he’s been ”found guilty but not sentenced”, they just aren’t intimately familiar with legal terms of art.
If they simply say ”Donald Trump was found guilty but not sentenced” instead, they’ve silenced the nitpickers while still conveying the exact same message they intended to in the first place.
Sometimes when people complain ”you’re just arguing semantics!”, the semantics do in fact need to be cleared up, because the words being used are confusing, or wrong in a way that’s preventing participants in the discussion from getting on the same page.
Here, no one is actually confused. Everyone knows and agrees that Trump was found guilty, but that he hasn’t been sentenced. The only sticking point is whether you can use the word ”convicted” to describe someone who is in that situation, and whether or not that’s the case doesn’t have any material effect on people’s understanding of reality. It’s just a matter of arguing over which words should be used, i.e. it’s just semantics.
I take the "this seems to be true, based on what I know, subject to more information" approach.
I'm ok with not knowing things.
We can measure all sorts of things, and put them in a human context, which is very reassuring. What's a wave? What's a wavelength? What's a unit of measure? These are not universal truths, these are human inventions. Things we've created in order to communicate a shared understanding with each other of things we've observed. It makes us feel knowledgeable, lets us build cool things, and that's a good thing!
It also interferes with learning, and that's a bad thing. For example, (and I'm not taking a position on this either way, because I don't know) I think it's very unlikely, based on your comment, that it would be easy to convince you that Trump is not a criminal. Or, to pick a less controversial topic, to convince the early Catholic church of the heliocentric model of the solar system. Because they already had the "facts."
It's a comfortable position to know things.
It's uncomfortable to not know. As I've gotten older, I've become more comfortable with being uncomfortable.
It would indeed be hard to convince me Trump has not committed crimes, considering a jury found that he had and the whole, "Walks like a duck, quacks like a duck," thing. Tony Accardo ran the Chicago Outfit for 4-5 decades and never spent a single day in jail. I don't think most people would agree that because he was never convicted (or even charged), he was not committing crimes.
If you read a story about a drug kingpin being convicted at trial, do you assume that he might be innocent?
This is the line in the sand that makes sense in the pre internet era.
Online, EVERYTHING is political speech, because moderation is the only effective action we can take, and moderation is currently conflated with censorship. Even though it’s on a private platform.
I was working towards researching this and building the case out fully - but online speech efficacy is not served by the blunt measures of physical spaces, where the ability to speak is not as mediated.
Online, diversity of voices, capability of users to interact safely, resolution of conflicts, these are better measures of how healthy the market of ideas is.
The point of free speech is to have an effective exchange of ideas, even difficult ones. The idea of free speech is not in service of itself, it's in service of a greater good.
The earth is "round" can be made political, but there is a factual consensus.
Therefore, we rely on experts that decipher information to transcend political opinions. It saddens me when scientists become political, only to add confusion to the consensus, in an attempt to weaken it.
The US is going to endure four more years of post-truth governance. It isn't in Zuckerberg's interest to have his organization pointing out that the emperor is unclothed when there is real risk of blowback in round 2.
> I just can't help but think there's a political component to all of it.
I mean, of course there is. The pressure to censor that began once Trump started dominating the Republican primaries in 2015, and escalated when the government chose a line on covid that absolved the government from responsibility for covid and made dubious claims about it, is ending. The reason the recent censorship frenzy began was political (nobody was censoring flat-earthers), and the reason it's ending is political.
Now the US can get back to just censoring Palestinians, like the old days.
Facebook is a corporation and can 'censor' whoever they like. They are not 'the US'.
Part of the reason why they moderate content is the same reason that a bar owner turfs out people who are rowdy and threatening the other patrons: because the normies will leave and you're left with a bunch of nasty, loud people.
That is, after all, why this site we're on right now is so heavily moderated: it makes for a better user experience.
Do you have showdead on? There is definite moderation going on, but a lot of it is collectively imposed (down votes, flagging). But, if you have your HN account set to show dead posts, you’ll see that even with this demographic there are still a good number of low quality posts.
I read with showdead on. I feel like people don't get modded for opinions here. Usually if the comments are dead it's because something is perceived as ad hominem, hostile, aggressive, violent, etc. It's usually the tone that gets them modded out, not the content of the message, and a polite version of the same statement would stand.
There are outliers of course, but that's the general vibe.
> I feel like people don't get modded for opinions here.
Agreed. That's why I used the term "low quality". The comments that get downvoted or flagged are usually either blatant spam/trolling or rude. If someone makes a quality argument, regardless of the opinion, it generally sticks around. I'll even up-vote comments I disagree with, if the author is making a good-faith effort. Not everyone does that, but enough people do and do so often enough that it helps to keep a complete hive-mind at bay (about most topics...).
But, I think that it's that simple level of moderation (which, I consider to still be moderation) that helps to keep discourse around here civil and interesting...
Yes, there are some threads that start where you just know nothing good will come from it, and in those cases we do see some admin moderation (hi @dang!). But, even then, I think the idea is that when discussing some topics, the thread will invariably end up going sideways. Those are the topics that tend to get immediately flagged. And that's okay with me, because who has time for that, when we have so many other, more interesting things to argue (civilly) about?
I don’t know if that’s true. SV culture has always been a very big tension between monied military-industrial types and (eventually also monied) antiwar hippies.
It's well-documented in SV's military history, as well as recently: Apple wasn't involved in FAA §702 illegal spying on Americans (PRISM) until after the famously anti-establishment Jobs died.
The SV culture seems to have shifted a bit rightward (as has the whole country, tbh) but the tension is still there, and the social conflict remains (although I think there are other factors, not the least of which is the skill and grace of @dang, that keep people on the better side of their behaviors here).
I agree with what you're saying about SV, especially the military-industrial types. I'm not entirely sure what the makeup of HN demographics is, and would like to know. I have a suspicion that it's not just folks in SV. I also should have clarified more. In my opinion, the discourse here is more civil than on other platforms. I would suggest that has something to do with a combination of education and niche interests that attract a different user base. So maybe not in terms of factual correctness, but certainly in terms of the ability to have a civil conversation.
At scale, the long term community civility balance point is likely dominated by the average user's willingness to change their behavior as a result of peer feedback.
The HN userbase, feedback tools, karma-level-locked tools, and new users' personalities seem to create decent outcomes.
Which is to say, if someone acts like an asshat, folks let them know (either through downvotes, flags, or replies), and they modify their behavior to be closer to the community norm.
That said, I'm aware I don't see a lot of the most egregious stuff the Good Ship Dang torpedoes. Or what I expect are non-zero repeat trolls.
And honestly, the fact is that outside of very nerdy street cred, there's little incentive to actively manage discourse for commercial purposes on HN.*
* Outside of, you know, cloudflare tailscale rust (any other crawler alarms I can trip)
That's a rather reductionist and slightly disparaging point of view. Moderation has its place; I never said it didn't. But do you really think that moderation is the only thing keeping this place from being 4chan? I think you have one deeply entrenched opinion and are ignoring that these are very different platforms.
HN is heavily moderated through a number of mechanisms: explicit community guidelines, community moderation (through voting), and active automated and manual moderation.
I think all of this working in conjunction is why it has remained a pretty great community for almost two decades. And I think that's a really impressive feat. I don't think it was accomplished via "a combination of education and niche interests that attract a different user base".
Indeed, I think HN has gotten better over time, even somewhat so in absolute terms, but very starkly relative to the deterioration of everything else. For example, back in the day, when twitter was first getting big in tech, a lot of people felt that it was a healthier place to discuss those topics than HN. I was never completely convinced of that, and have always been more active here than on twitter, but it was at least a very reasonable thing to think for a while, IMO. But now I think it would be pretty crazy to think that twitter is healthier than HN. Similarly with similar communities on reddit.
I dunno, maybe there are some healthier spaces on mastodon or blue sky or threads or something now, but at least to me, HN has maintained a fairly stable fairly decent level of discourse for a very long time, and I don't think it is a result of luck or magic, but rather of hard and tireless work moderating the community.
Yea, I’ve become more aware of this since yesterday. I also think I should have provided way more context to what I was saying. I believe I came off as being against moderation but I’m not, I do think there is something unique about the user base just from the quality of content I see compared to other spaces, but I digress. I appreciate your thoughts and it gave me something to think about.
Last I ran the numbers, which was quite a few years ago, about 10% of HN posts were coming from IP addresses correlated to Silicon Valley (well, the Bay Area with a relatively wide radius). About 50% were coming from the US, and so on.
Thanks @dang. Turned on showdead. I will say that I was completely unaware of the moderation efforts here and appreciate having this pointed out to me. I like this option too. As far as transparency goes I don’t think it gets much better than this.
i'm not from silly valley, but it's the dominant voice here.
some of my downvotes are from bad tone, overreaction, hyperbole... some are because of the silly valley culture not realising they are a bunch of deluded maniacs, or just producing absolute garbage products.
it's mostly the former.
as for demographics... well, i'm a single data point, but HN has a wide reach. its why a lot of us are here imo.
Facebook has said it was pressured by the Biden administration to censor topics like covid. This is as clear-cut a First Amendment case as you will ever find.
Your being downvoted is amazingly ironic for a topic on the politicization of fact checking. There are hundreds of comments here talking about how objective facts exist and the correctness of fact-checking. You reiterate the statement of the Facebook CEO and what that statement entails, and you are moderated.
But facts are facts right?
Zuckerberg did say Facebook was pressured by the Biden administration to censor covid misinformation, and the Hunter Biden laptop story [0], [1], [2] (multiple left-wing references for good measure). If Zuckerberg is telling the truth, that is a clear cut first amendment violation.
A private company can censor whatever it wants (mostly), but not at the behest of the government; there are laws against that.
The only thing that "turns out" is they wish to curry favor with the incoming administration. FB hasn't been censoring much of anything as far as I can tell; there are all kinds of vile, nasty comments all over it. Just unfriendly, unkind stuff, not even political things. It's probably one reason it's kind of struggling as a platform - that kind of thing isn't much fun.
But is it currying favor? Could just as well be "kiss the ring or you'll see your life's work AT&Ted into oblivion"
Perhaps both: it might have started as a pragmatic offer to bury the hatchet, then quickly turned into the never-ending firehose of demands of an extortionist who just realized that he still holds all the cards after the extortee has given in.
Most voters don't care much about any of the details of this. They're not terribly unhappy with FB because they're using it to keep track of people from high school back in the '90s, or their families, or local recreation groups or something. Or they're not using it at all because it's for old people like me.
This is all just loud, performative subjugation to the incoming administration, that does take things like attacking trans people and immigrants as good stuff.
I would actually offer that Facebook is changing because their base has grown tired of their antics. My normie friends and family have complained of censorship increasingly over the last year. When I asked why we still use the platform, one friend replied: "birthday reminders." Then I thought that actually does summarize what I use the platform for. Not a great prospect for a company.
There is a campaign to capitalize on the idea that right wing people are censored.
And therefore all Americans are censored.
This fight has been fought before, at the dawn of moderation. It's been fought here on HN, back when people used to hold libertarian beliefs openly. "The best ideas rise to the top." No, they frikking don't. The most viral ideas, the most adaptive ideas, those are the ones that survive.
Everyone learned that moderation is needed, that hard moderation is the only way to prevent spaces from attracting emotional arguments, harassment, stalking, and hate speech.
Maybe this time its different.
Moderation is thankless, soul-crushing, and traumatic. Mods of r/neworleans effectively became first responders on Jan 1st. I know mods see everything from dead baby pictures, burning bodies, and accidental deaths, to worse.
IF this works, and reduces the need for mods, great! My suspicion is that it's going to radicalize more people, faster. It's going to support the creation of more demagogues, and further reduce our ability to communicate with each other.
Nearly all the levers of control of the US government versus almost no control over it: that's a massive advantage. I can't help believing this, not the popular vote, is the motivation.
Exactly. Particularly the power of the incoming President to create bad PR (with 50% of the country) and the House to haul people into public testimony and yell at them.
Not to mention the federal money spigot.
Big companies aren't stupid and are largely amoral.
That's the silver lining through all of that: when right-wing ideologues start imposing their own groupthink model on social media, it stops being fun and people start to leave. Just look at Twitter. It's just not as fun anymore on there.
I expect it was an easy bone to throw the incoming administration, which the tech world learned from v1 is placatable by giving them PR / sound bite wins.
To the broader concern: this feels like Facebook committing their original sin again.
Namely defunding and destroying revenue for a task that takes money (fact checking) and then expecting a free, community-driven approach to replace it.
Turns out, hot takes for clicks are a lot cheaper than journalism.
In this case, where is the funding to support nuanced, accurate fact-checking at scale going to come from?
Because it sure seems like Facebook isn't going to pay.
Do you believe the success or failure of these moderating features comes down to how accurate they are? People actually like Community Notes; they're part of the discourse on Twitter (even if most of them are pretty bad, some of them are timely and sharp). Meanwhile: Facebook's fact-checking features really do work sort of like PSA's for trolls. All the while, fact-checks barely scratch the surface of the conversations happening on the platform.
Facebook and Twitter are also unalike in their social dynamics. It makes sense to think of individual major trending stories on Twitter, which can be "Noted", in a way it doesn't make sense on Meta, which is atomized; people spreading bullshit on Meta are carpet bombing the site with individual hits each hoping to get just a couple eyeballs, rather than a single monster thread everyone sees.
(This may be different on Threads, I don't use Threads or know anybody who does).
PR/political success is certainly not correlated with accuracy, given the very act of telling a group they're wrong tends to piss them off.
In terms of encouraging discourse that maximizes user enjoyment of the platform? That's a difficult one. Accuracy probably doesn't do a whole lot there either: HN knows the people love someone being confidently wrong.
Success in terms of society? Probably more yes, albeit with the caveat that only a correction that someone feels good about actually wins hearts and minds. Otherwise they spiral off into conspiracies about "the man" keeping them down. (Read: conservative reality)
It's also important to remember that Zuckerberg only tacked into moderation in the first place due to prevailing political winds -- he openly espoused absolutist views about free speech originally, before some PR black eyes made that untenable.
To me, both approaches to moderation at scale (admins moderating or users moderating) are band-aids.
The underlying problem is algorithmic promotion.
The platforms need to be more curious about the type of content their algorithms are selecting for promotion, the characteristics incentivized, and the net experience result.
Rage-driven virality shouldn't be an organizational end unto itself to juice engagement KPIs and revenue. User enjoyment of the platform should be.
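To make that contrast concrete, here is a minimal sketch (Python; all field names and weights are invented for illustration, not any platform's actual ranker) of the two philosophies: a naive score that counts every interaction, so outrage wins, versus one that subtracts the signals users themselves treat as negative.

    from dataclasses import dataclass

    @dataclass
    class PostStats:
        likes: int
        shares: int
        dwell_seconds: float   # time spent actually reading or watching
        angry_reacts: int
        report_rate: float     # fraction of viewers who reported the post
        hide_rate: float       # fraction of viewers who hid or muted it

    def engagement_score(p: PostStats) -> float:
        """Naive ranker: every interaction counts, including outrage."""
        return p.likes + 2 * p.shares + p.angry_reacts

    def enjoyment_score(p: PostStats) -> float:
        """Alternative ranker: penalize signals users actively dislike (weights arbitrary)."""
        positive = p.likes + 2 * p.shares + 0.1 * p.dwell_seconds
        negative = 2 * p.angry_reacts + 2000 * p.report_rate + 2000 * p.hide_rate
        return positive - negative

    ragebait = PostStats(likes=200, shares=400, dwell_seconds=20,
                         angry_reacts=900, report_rate=0.05, hide_rate=0.02)
    hobby = PostStats(likes=300, shares=80, dwell_seconds=95,
                      angry_reacts=3, report_rate=0.0, hide_rate=0.0)

    print(engagement_score(ragebait), engagement_score(hobby))  # 1900 463 -- rage bait wins
    print(enjoyment_score(ragebait), enjoyment_score(hobby))    # -938.0 463.5 -- rage bait sinks

Which set of signals the ranker optimizes for is a product choice, which is the point being made above.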
> he openly espoused absolutist views about free speech originally, before some PR black eyes made that untenable.
Note that openly espousing absolutist views about free speech means less than nothing. Elon Musk and Donald Trump openly profess such views, while constantly shouting down, blocking, or even suing anyone who dares speak against them with any amount of popularity.
> Do you believe the success or failure of these moderating features comes down to how accurate they are? People actually like Community Notes; they're part of the discourse on Twitter (even if most of them are pretty bad, some of them are timely and sharp). Meanwhile: Facebook's fact-checking features really do work sort of like PSA's for trolls. All the while, fact-checks barely scratch the surface of the conversations happening on the platform.
You're making a whole host of assumptions and opinions about this, with little in the way of data (I get it, you don't work at FB, how much data could you have?), just making blanket statements: "People hate Fact Checks", "People actually like Community Notes" and accepting them as accurate.
I use Facebook, a lot (again: all the politics in my town happens there), and almost nothing is fact-checked; I see one fact-check notice for every 1,000 bad posts I see. I feel like I'm on pretty solid ground saying that what they're doing today isn't working.
Meanwhile: Community Notes have become part of the discourse on Twitter; getting Noted is the new Ratio'd.
Accuracy has nothing to do with any of this. I don't think either Notes or Warnings actually solves "misinformation". I'm saying one is a good product design, and the other is not.
Not seeing fact checks likely means it's working: "Once third-party fact-checkers have fact-checked a piece of Meta content and found it to be misleading or false, Meta reduces the content's distribution 'so that fewer people see it.'"
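For concreteness, here's a tiny sketch of what that reduced distribution amounts to: rated content keeps circulating, but its reach gets multiplied down, so most users never encounter it or its label at all. The rating names and multipliers below are made up for illustration; they are not Meta's actual values.

    # Hypothetical demotion multipliers -- not Meta's real numbers.
    DEMOTION = {None: 1.0, "partly_false": 0.3, "false": 0.05}

    def effective_reach(base_reach: int, fact_check_rating=None) -> int:
        """Scale a post's projected reach by its fact-check rating."""
        return int(base_reach * DEMOTION[fact_check_rating])

    print(effective_reach(100_000))           # unrated post: 100000 impressions
    print(effective_reach(100_000, "false"))  # rated false: 5000 impressions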
The issue with Community Notes is that if enough people believe a lie, it will not be noted. This lends further credence to a certain set of "official" lies.
It's not that they're inaccurate, it's just that they cherry-pick the topics to fact-check and their choice (in my limited experience) is always biased leftwards. You can be absolutely correct and absolutely malicious at the same time.
> I get how the partisan story is easy to tell here, but I'm saying something pretty specific: I think it would have been product development malpractice for this decision not to have been in the works for many, many months, long before the GOP takeover of the federal government was a safe bet.
You're just stating that, in your personal opinion, a scenario would be bad. That says nothing about it actually taking place.
You're expressing your personal opinion in response to a message listing facts supporting the belief the scenario is actually taking place.
Meaning, it's still plausible this is what is actually happening.
Both professional fact-checkers and Community Notes have a pretty low false-positive rate.
It's the false negatives that are the differentiator, but false negatives are by definition invisible to the user.
When you evaluate moderation as a "product" you place more weight on factors that are mostly losers for third-party fact checkers and winners for Community Notes: speed and annoying tone.
But since false negatives are never seen, there's no visible "product" to be annoyed by. Sure, the platform fills up with even more disinfo, but users blame that on other users, not the moderation "product".
And this is where Community Notes fails. Because Notes require consensus from multiple groups with histories of diverse ideological perspectives, when one perspective has an interest in propagating disinfo, no Community Note appears.
Some studies show something like 75% of clear disinfo doesn't get a Community Note on X when it involves a hot partisan shibboleth.
False negatives are mostly invisible failures that make the entire platform worse, but the user can't blame it on a "product" because it's really the absence of a product that's the problem.
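To illustrate that failure mode, here is a toy consensus gate in Python. This is not X's actual algorithm (which uses matrix factorization over rating histories); the clusters and thresholds are invented. The point is only that a note surfaces when raters from different viewpoint clusters both find it helpful, so corrections of one-sided partisan claims never clear the bar and vanish as false negatives.

    from collections import defaultdict

    def note_is_visible(ratings, min_helpful_per_cluster=2):
        """ratings: list of (rater_cluster, found_helpful) pairs."""
        helpful_by_cluster = defaultdict(int)
        for cluster, helpful in ratings:
            if helpful:
                helpful_by_cluster[cluster] += 1
        # Require support from at least two distinct viewpoint clusters.
        supporting = [c for c, n in helpful_by_cluster.items()
                      if n >= min_helpful_per_cluster]
        return len(supporting) >= 2

    # Broadly agreed-on correction: both clusters rate it helpful -> shown.
    uncontroversial = [("A", True), ("A", True), ("B", True), ("B", True)]
    # Partisan claim: one cluster finds the correction helpful, the other
    # won't rate it helpful, so the note never appears -- a false negative.
    partisan = [("A", True), ("A", True), ("A", True), ("B", False)]

    print(note_is_visible(uncontroversial))  # True
    print(note_is_visible(partisan))         # False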
But I think that can still be addressed separately from the fact that all the tech leaders in Silicon Valley are bending the knee to Trump (e.g. the Mar-a-Lago visits, the "donations" to his inauguration, etc.)
I'll give you an example I find analogous. When Bezos forbade the Washington Post from giving a presidential endorsement, he wrote an op-ed, https://www.washingtonpost.com/opinions/2024/10/28/jeff-bezo.... I pretty much agreed with the vast majority of what he wrote there. What I think is total BS, though, is his purported rationale and the timing of the decision. I think it's absolutely clear he did it because he didn't want to piss off Trump should he win (the "obeying in advance" part), which he did. The reason I believe this is that he made the decision so close to the election, he apparently didn't feel the need to do it in previous years, and WaPo made other political endorsements (e.g. Senate races in Maryland and VA) just before the presidential endorsement was banned. Bezos's subsequent Mar-a-Lago visits and Amazon's inauguration "donation" pretty much confirm my view.
In Zuckerberg's announcement, I thought the part he put in about fact checkers being "politically biased" was unnecessary (not to mention dubious IMO), and it clearly seemed done to curry favor with the current powers that be.
As someone active in "resistance"-type organizing from 2017-2021, with fundamentally the same politics now as I had then: I think all this "bend the knee" shit is mostly working to the benefit of the GOP, and I wish people would stop it. We lost an election, in part because we bet that the median voter was prepared to disqualify MAGA Republicans. They are not. Find a new angle, so we can win in the midterms. This isn't working.
I'm not trying to convince other voters. The "bend the knee" shit is not something I'm saying to try to change opinions. Like you say, clearly the majority of Americans don't care.
But I'm pretty surprised at the outright transparent speed with which all these business leaders were willing to pay these naked fealty bribes, especially since for so long so many of them talked about lofty goals beyond just making money.
Italians in the 1930s didn't care either when Mussolini made corporations an arm of the state. But that doesn't mean what is happening now is any different.
I'm pretty sure they do this every cycle no matter who wins, but Democrats notice and recoil when it happens after a Republican win, and vice versa. There's also a titration of the news media mining clicks from a framing that de-"normalizes" the Trump administration. But that ship has sailed: you could say "This Is Not Normal" in 2017, which was a fluke nobody saw coming, but Trump won decisively this cycle, and absolutely everybody knew what we were getting into. It's time for the media to retire the schtick.
I agree with the parent that Americans in general seem not to mind corruption, but we can't become so jaded as to think that it's not even worth mentioning that this is a problem.
Referring to public company CEOs warmly greeting the newly elected president as "brazen cronyism" is a schtick, yes.
It annoys me a lot that I have to point things like this out, because I think Trump is a grave problem for the country, but you have to beat him at the ballot box, and the schtick obviously isn't working there.
Moving employee jurisdiction to suit the incoming administration is hardly the same as a warm greeting though, is it?
In my country we have a different word for people giving large sums of money as gifts to incoming politicians, yet we seldom impose that definition on others. US politics is different and affects the climate here too, even though that population is around 20% or less of all Facebook users.
The way to win is with a more appealing set of policy proposals.
More centralized government control, "Karen" style moralizing, DEI, gun banning, global warming, more bureaucratic (and ineffective) regulation, abortions everywhere and the entire "woke" platform apparently isn't it.
I'd suggest defocusing on those and instead return to being the party of the "working man" and a stable economy.
"Wealthy corporations want to force you to work 80 hours a week to enjoy unfair profits or they will replace you with immigrant labor" should be the vibe while never once speaking about things like systemic racism or climate change. Also "the rent is too damn high!". Definitely don't have the party fronted by people who appear airheaded or unintelligent.
You have to speak to the concerns of the voter which I think are individual freedom and economic prosperity.
Once in power you can do whatever you like of course, as is traditional in politics and Trump won't be any exception.
Unfortunately there is no party of "the working man" since the Citizens United ruling opened the floodgates for legal & private bribery, and arguably before that. Bernie Sanders, whatever you think of his proposals and views generally, is the rare exception who stands against the bribery and acts as a true populist, and for that he was undermined and defeated as a presidential candidate. People know the Democratic party is two-faced, and I don't see how that can ever change, with money being so essential to US politics now.
I'm fairly sure this is either untrue or unknowable. If the official "Harris campaign" spent more than the "Trump campaign" that doesn't actually mean much, considering how many other avenues exist to spend money that escape public scrutiny.
Even if you could account for all the dark money, that still leaves you with leveraging soft power - e.g. Musk using X as a de facto propaganda arm of the Republican party, which doesn't show up on any books.
I'm pretty sure it is knowable. The Democrats spent far more.
Musk and X propaganda helped. Also Rogan and other podcasters, but look at how much propaganda the Democratic side has/had: all the major media outlets, Reddit, etc., plus the power of the federal government in censorship, the courts, and the like.
Look, I don't really care and don't trust anyone running for office much. I'm just pointing out what a winning platform would look like. MAGA won because they were speaking to things that more people found important. When the Democrats figure this out, they will be in the winning seat again. If they don't, then they will not win.
I'm saying that the democrats lost because they keep taking corporate/oligarch money and are at odds with the values of the people who would otherwise support them. They aren't the party that supports the little guy anymore, so they're basically without an argument aside from "not Trump". I don't think you understood my previous post, which was a critique of the democrats, which used to have "the working man"'s back.
Republicans have always been and continue to be pro-elite, pro-oligarchy, and against the economic interests of anyone outside the upper class. They still have a better message than the Democrats at the moment.
> More centralized government control, "Karen" style moralizing, DEI, gun banning, global warming, more bureaucratic (and ineffective) regulation, abortions everywhere and the entire "woke" platform apparently isn't it.
I totally agree with that.
> The way to win is with a more appealing set of policy proposals.
I completely disagree with that. At this point I think it's a bit laughable to think that the majority of Americans care about policy proposals. Trump's appeal, I believe, is that he gave a voice and an outlet for anger to large swaths of people who felt they had been ignored (which they largely had) and talked down to for years. The "elites" (often of both parties) had basically told people in hollowed-out communities and those with failing economic prospects that it was their fault - you just should have gotten a college education, or retrained for the new economy. The Democratic messaging made things worse by also saying "Hey, you know those social standards that were the norm up until the mid 90s? Well, if you believe those, you're a knuckle dragging bigot."
When people have simmering anger and rage, a "nice guy" approach isn't going to cut it. That's why so many people vote for Trump even when they find so many aspects of his personality distasteful.
I'm baffled why a politician hasn't taken more of the lead with the rage that has exploded since the CEO murder. Some elites on the right are trying to frame this as "The crazy Left condones murder!", while I see some elites on the left doing their usual useless finger wagging against insurance companies (see Elizabeth Warren). I just don't understand why a politician hasn't taken this torch and gone into "We're going to tear it all down" mode. I mean, of course there's Bernie, but at this point it needs a younger and more "firebrand" type of person.
I don't understand your point at all. Community Notes on E(x) has been ineffective, because ultimately the point of moderation is to delete posts which aren't true so they receive no reach and spread no disinformation.
Not to turn them into a public debate which might as well continue in the posts themselves.
Meta's political history has consistently been shady. Meta patented behavioural targeting technology in 2012 and was fined $5bn for its "accidental" links to anti-democratic election-fixers Cambridge Analytica/SCL, who have ties to far-right oligarchs in the US and the UK.
If you're looking for an ideological position, look there. The historical record is absolutely clear.
And then there are comments from Meta insiders, who - perhaps - have a clearer picture of what's going on than outsiders do.
As for malpractice, consider the recent AI rollout and rollback. It was an absolute fiasco for all kinds of reasons, PR and technical, not least of which was the way the bots themselves turned on the company.
Threads has already had a mini-exodus because of slanted moderation.
Meta is simply not a trustworthy company. So "Oh, let's scrap our moderation and do community notes" is hardly an isolated slip-up on an otherwise unblemished record of noble public service.
> ultimately the point of moderation is to delete posts which aren't true so they receive no reach and spread no disinformation.
That assumes that the correct amount of disinformation is zero. Personally, I wish to maintain my right to be wrong, and my right to tell others of my wrong ideas, and I hope they maintain the right to tell me I'm full of it.
Your position on censorship (moderation, as you call it) is your opinion, and your opinion only, and it is at odds with the position of X, and now Meta, who are taking the position that the point of moderation is to respect everyone's right to speech, while making it very obvious to those who care that the speech may be less than truthful. Essentially everyone gets to speak, and everyone gets to make up their own mind. What a concept!
I also maintain the position that truth dies in the dark, and lies die in the light.
Most people aren't stupid; community notes break the echo chamber and provide a counterpoint.
That debate of free ideas has been working pretty well so far. So much so that we can usually tell who the bad guys are by how much they create darkness: how much they take on the role of arbiters of truth, how much they silence critics. Think Soviet Russia or North Korea for some good examples.
>I think it would have been product development malpractice
The thing is, both community notes and top-down moderation, if they have any purpose at all, are product malpractice. If they work, they are always going to be intrusive, because that's what they're supposed to do: correct factually wrong information. Community notes are the neighborhood police; top-down moderation is the feds. But if they do their job, either one is going to be annoying by definition.
If they're not intrusive they don't perform a corrective function and that's what largely happened to community notes. As time goes on they're more and more snarky and sarcastic meta comments rather than corrections.
But because they are community driven, they are snarky in a way that represents the community, which makes me question if they are intrusive at all. They are what the community grows them into.
It seems pretty clear to me that one of these features generally makes users happy and, at the same time, does correct some misinformation, and the other catches about 0.0001% of the bad stuff and turns it into advertisements for how bad the site is.
How can you possibly call community notes on Twitter a "success" when they demonstrably have not reduced the amount of actively made-up shit on the site, and the same people who complain about a fact checker saying "no, vaccines do not change your DNA" are just as upset when that info comes from the community notes box? The only reason there hasn't been widescale anger about them is that Elon wants to pretend it was his idea.
I'm not saying Twitter is good. It is demonstrably not. But you're kidding yourself if you thought Facebook fact checking was suppressing the antivaxers and flat-earthers.
Oh, so community notes on twitter are actually not good, but its good that Facebook is implementing them anyway? You make no sense and are constantly equivocating back and forth in all your different posts.
If it was in the works for a long time, then Zuckerberg has been planning to bend the knee to Trump for a long time.
Today, at Trump's press conference (video at [0]):
Q: "Do you think Zuckerberg is responding to the threats you've made to him in the past?"
TRUMP: "Probably. Yeah. Probably."
This tells us all we need to know. It has nothing to do with facts and everything to do with yielding to political pressure to bend the media to his whims.
This is just the most standard and basic elements of autocracy, the autocrat must make all the institutions serve him, not the people. This includes not only the branches of government, but also of society, starting with the press, but also the corporate world, the academy, social groups, and everything else.
Autocracy is not Left or Right. It is corrupting all the institutions to serve the will of the autocrat, not the will of the people.
Bending the knee to the autocrat, in this case explicitly changing your rules and operations to enable the autocrat and his followers to more easily spread their lies and intimidation is not political flexibility, it is obeying in advance to be complicit in implementing the autocracy.
It would be better if you didn't have to learn that the hard way, but our educational system and information distribution system has failed. This is just a more advanced and accelerated example of that failure.
[Edit: yes, my mistake to phrase it as political pressure — it was nothing of the sort — it was authoritarian extortion. Note Zuck has a case before the FTC.]
Autocrats don't get democratically elected, as far as I understand. Trump is a democratically elected leader whose term will end in 2028 at the latest. Autocrats tend not to be democratically elected (or they change the rules once elected so they can never be deposed). Zuckerberg will bend his knee to the Democrats if they win next term. This is not autocracy, this is just knowing where the wind blows.
That doesn't make sense with the common use of the word. Autocracy is a much wider term than a militia style dictatorship, and is mostly used in the context of democracy.
Most, if not all, autocrats are democratically elected (with some wildly varying definition of democracy of course).
In current times, democratically elected autocrats include Putin of Russia, Orban of Hungary, Erdoğan of Turkey, Chavez/Maduro of Venezuela, Bukele of El Salvador, and more. Jumping back to the most notorious autocrat of all: Hitler was democratically elected.
Autocracy is not typically imposed by conquest, it is mostly created by corruption of institutions. It is not binary, it is on a scale.
In full democracies, all the institutions of government, legislative, executive, and judicial, are independent and serve as checks & balances against each other. And the institutions of society, industry, trade, press, academic, sport, social, etc. are also fully independent.
Under autocracy, all of these governmental and societal institutions are corrupted to bend to the will of the autocrat, often by his using force of government to his corrupt ends.
This is exactly what Trump just admitted to and Zuckerberg just did — he threatened Zuckerberg with unfair government actions, and Zuckerberg is now converting Facebook to work to further Trump's goals instead of remaining an independent institution.
> like people being in Texas makes them more objective?!
This is the least charitable interpretation. Obviously, it is not talking about a single person moving to Texas suddenly changing colors like a chameleon (although I suspect there is quite a bit of merit to that, due to groupthink and community speech policing in the Bay Area/LA).
And yes, I think it isn't a stretch to think Texas would be a more objective representation of the general US PoV and less of a monoculture than FB sites in California. This is not a value judgement, just a natural function of the distribution of people.
Is the distribution of people in Austin so very different from the Bay Area?
Both states are internally diverse. And it’s just silly to suggest that “groupthink and community speech policing” is something that exists in California but not Texas.
It's slightly but consistently different. I moved from Austin (after 30+ years in TX) to the west coast, and the groupthink / speech policing is extremely noticeable to me (I spend most of my time in Portland and SF), even though it's not extremely different.
That being said, I think a more nuanced but still political take on the move is: having moderators is important, and moderation is less likely to be pressured into shutting down if the moderator jobs are actually in a red state. Further, the jobs are low-skill jobs, so they can be moved back (or elsewhere) as needed. An easy move, even if the political capital is minor.
> Is the distribution of people in Austin so very different from the Bay Area?
If we just go by presidential election, Travis County's result is more balanced than SF and San Mateo, almost on par with Alameda county, so the answer is "slightly." However, the moment you get exposed outside the core Austin area, you deal with predominantly red areas. To get the same effect you have to go as far as Placer County or Sonoma, so I don't think the FB workers in Bay Area (SF/Menlo Park) have quite the same level of exposure.
No because I don’t use these shit platforms. But the point is if policy says to moderate content of type ABC then I don’t see why someone in TX would do something different than someone in CA. It’s the same policy.
> it allows “allegations of mental illness or abnormality, based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”
I think you misread that: it allows allegations of mental illness even on the basis of gender and religion, which before weren't allowed. It still allows allegations of mental illness based on other factors, because they were never disallowed in the first place.
Mental characteristics, including but not limited to allegations of stupidity, intellectual capacity, and mental illness, and unsupported comparisons between PC groups on the basis of inherent intellectual capacity. We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like “weird.”
There’s no ambiguity. Allegations of mental illness or abnormality are explicitly allowed based on gender or sexual orientation, but no other reason.
There is ambiguity, insofar as the whole document is a word salad of sentence fragments and rambling sentences that branch off in different directions without logical coherence.
It takes quite some effort to discern the intended meaning, which I agree matches your interpretation.
Even the tier system is declared, but its meaning is never explained.
Calling out "weird" and no other word is hilarious, suggesting that Team MAGA is still sore over how much people enjoyed using that term to describe the bizarre behavior of of Trump and company.
Seems like not the biggest one? That seems like the kind of role you take knowing you're going to hold it only so long as you have a rapport with the current governing majorities.
I don't know Dana White and I don't know any predecessor. It isn't really relevant though apart from which actions they indeed did take in their approach.
Is your second point about why people in Texas might be less biased perhaps about the distance to the primary locations of tech companies? I don't think it is convincing, but a lack of trust is the most severe problem for fact checkers.
I believe the concept cannot work though, especially if I look at the broader context.
No, user feedback is the better control mechanism. Also these fact checkers would never be independent and they would develop their own interest for even more moderation. They would never report that there isn't any more controversial content to be checked, because that is their raison d'être from day one.
Almost anyone added to the board will have some kind of political leaning. Why no mention of this when hard-left leaning people were added to the board?
"attacking trans people is going to be ok now."
This was never okay (and I don't think it's going to change). If you mean something like an opinion on child gender surgeries, this should have always been allowed and you can ignore if you don't agree and community notes will certainly have more information on it.
"Blue Sky being available and gaining in popularity."
So you dislike bias, but mention one of the most biased social media platforms on the Internet?
Zuckerberg just admitted in his video that the Biden administration was working with Facebook to censor users. Why no mention of this? Isn't this also political bias that needs to be stopped?
It has nothing to do with 'bias' or protecting anyone and everything to do with authoritarians banning and silencing people they don't like, which is exactly what Blue Sky has done from day one and everyone against this change truly wants.
I can’t help but roll my eyes at mindless euphemisms like “attacking trans people.”
There are very serious issues involving trans people with no easy answers. Like allowing minors access to irreversible treatments. Like women’s sports. Like the safety of women only spaces.
I bring this up because on so many questions like these, the progressive reaction is to shut down any discussion and isolate themselves from exposure to any ideas different from their own.
It doesn’t work. And it doesn’t help anyone.
And maybe this has something to do with why Facebook is migrating to a “Community Notes” model.
Is it not possible that ‘attacking trans people’ is both (sometimes) a euphemism for criticism of maximalist positions and (at other times) a perfectly normal term that designates approximately what ‘attacking x’ generally means? There is such a thing as an unsubstantive and utterly unpleasant insult explicitly motivated by the fact that its target is trans. Many trans people say that there are many such, and one does not need to believe everything that trans people say (surely with the result of inconsistency!) to think that the evidence they present is not wholly concocted.
Others may misidentify respectable, good, or correct arguments as ‘attacks’ in narrower senses, but that no more makes the underlying categories meaningless than the misapplication of such descriptions as ‘true’, ‘valid’, ‘scientifically established’, or ‘by definition’. I have no general pithy answer to what one should do about the sorts of attack I have described, but I venture that it is reasonable to talk or attempt to do something about them. What term would you prefer?
I think that it would help if you were to suggest a term people who don’t want to ‘shut down discussion about related topics all together’ should use. Otherwise, the effect (although perhaps not the intention) of deprecating the term ‘attacks on trans people’ is that the sort of discussion you admit is possible theoretically will be impossible for want of a suitable term to designate the sorts of attacks it concerns.
I can't help but roll my eyes at the "serious issues." You know, in most states these anti-trans laws were passed targeting handfuls of children in each state, sometimes a single child. But oh yes, that's a serious issue for sure right now.
This is a cheap political gotcha accompanied by a litany of unevidenced and vague allegations against a political out-group (which "particular group"? On what basis do you assert that "some AI somewhere" is involved, and why would that matter? Not to mention the tired "dog whistle" cliche) and a demand for self-censorship.
You've also made a bold claim about the relevant statistics without any kind of citation.
My understanding is that a higher standard of discourse is expected on HN.
But aside from that meta point: your argument seems to rest on the idea that your ideological opponents would prefer for cisgender teenage boys to be able to get mastectomies when they exhibit unwanted breast growth. But the source your interlocutor found suggests that the "breast reductions in teenage boys" you're talking about are in fact dominantly performed on transgender teenage boys (i.e., people your ideological opponents would consider "teenage girls"). So the intended gotcha doesn't even work; you haven't identified any kind of inconsistency in the position or potential for a "self-own".
My point was that the breathless hyperbole about "gender affirming" surgery is actually in direct opposition to "traditional male stereotype" of the same group--thus invalidating that the concern is a genuine issue rather than political rhetoric.
As to whether teenage boys should be getting that surgery? That's .. more complicated. Should one that lost 100+ pounds to be healthier be able to get that surgery? Probably. How big should the growth be before it becomes "medical"? Don't know.
This is why stuff like this should be left to doctors who actually understand the circumstances of the patient.
> thus invalidating that the concern is a genuine issue rather than political rhetoric.
You didn't invalidate the concern at all and just if anything bolstered it. One reason why people voted for Trump (I wouldn't vote for him myself) is that any discussion on these topics gets called a phobia or an ism.
> Should one that lost 100+ pounds to be healthier be able to get that surgery?
If they're an adult, they can do what they like.
> This is why stuff like this should be left to doctors who actually understand the circumstances of the patient.
Just because someone is a doctor does not mean they have an unquestionable moral or ethical compass, there are good doctors and bad doctors. When homosexuality was illegal in the UK, doctors would chemically castrate gay men.
Calling a legitimate argument a "dog whistle" is a classic tactic OP is talking about which is used to shut down discussion. Just debate the merit of what he's saying rather than try to label him as an enemy.
Breast reduction for children IS in fact irreversible. It causes huge scars and trying to get breast augmentation later is not actually restoring their body to its natural state. It is definitely something that is controversial. Also putting children on hormones is within scope of this conversation and DOES happen.
There are lots of people who detransition and regret their decision. Children who have been sterilized for life and have permanent scars. It's completely valid to have discussions about whether kids should be able to make these decisions (they shouldn't).
You are repeating the talking point without including the number:
The number of those kinds of surgeries people claim to be "oh so concerned" about is in the low double digits--generally low single digits--normally zero in a year.
When you get to a medical procedure that is incredibly rare, the medical indications are generally really, really unique and should be left to doctors (breast implants in girls are simply not done until 18+ unless cancer is involved, for example).
Despite what people seem to think, doctors don't just do this stuff randomly (at least in the US). They can and will lose their license for doing this kind of thing unless they follow established guidelines. And all those guidelines dictate that this kind of stuff is simply not done until after 18 unless there are incredibly extenuating medical circumstances.
> Breast reduction for children IS in fact irreversible. It causes huge scars and trying to get breast augmentation later is not actually restoring their body to its natural state.
I have yet to meet a girl or woman who had breast reduction and regretted it. See: Soleil Moon Frye, for example. She had genuine health issues. And, even still, she had to fight with her doctors to get it done at 16 rather than wait until 18.
> Children who have been sterilized for life and have permanent scars.
Cite examples. I suspect vastly more children have been sterilized for life from circumcision complications than from any other gender surgery.
> ...These drugs, known as GnRH agonists, suppress the release of the sex hormones testosterone and estrogen. The U.S. Food and Drug Administration has approved the drugs to treat prostate cancer, endometriosis and central precocious puberty, but not gender dysphoria. Their off-label use in gender-affirming care, while legal, lacks the support of clinical trials to establish their safety for such treatment. ... Over the last five years, there were at least 4,780 adolescents who started on puberty blockers and had a prior gender dysphoria diagnosis...
And more than that for hormone treatment:
> At least 14,726 minors started hormone treatment with a prior gender dysphoria diagnosis from 2017 through 2021, according to the Komodo analysis.
And far more than "low double digits--generally low single digits--normally zero" for surgeries:
> In the three years ending in 2021, at least 776 mastectomies were performed in the United States on patients ages 13 to 17 with a gender dysphoria diagnosis, according to Komodo’s data analysis of insurance claims. This tally does not include procedures that were paid for out of pocket.
(And also does not include cisgender patients without gender dysphoria but with unwanted breast growth.)
I would just like to say the discussion under your comment is exactly the kind of productive discussion citing papers and statistics I want to see more of.
Too many progressives want to terminate such discussions by censoring any dissenting opinions and attacking any kind of disagreement as bigotry.
> The number of those kinds of surgeries people claim to be "oh so concerned" about is in the low double digits--generally low single digits--normally zero in a year.
> Among the 209 adolescents who underwent gender-affirming mastectomy, only two expressed regret.
> In our cohort, two patients (0.95%) expressed regret; one inquired about reversal surgery, but neither had undergone reversal surgery within follow-up periods of 3.7 years and 6.5 years.
Note the followups are into post-teenage years and most are very satisfied.
> Gender-affirming mastectomy, also known as “top surgery,” is the most prevalent surgery requested when considering all transgender adolescents, whereas “bottom surgery,” which affects genitalia and fertility, is relatively more complex and mostly performed after age 18.
As far as I can see, this is a medical system that is being very conservative (especially involving irreversible effects on fertility), involving parents/guardians at all stages, and prefers therapy first, hormones second, and surgery only as a very final choice. And note this level of conservatism in a system in Northern California--which is likely to be the most accepting of such medical actions.
So, if you are advocating that this should not be the case, understand that you are directly attempting to legislate the complex relationship between parent and teenager, as well as both of them communicating with a medical professional, for something which is evidently a neutral-to-positive outcome for 98+% of the patients involved.
What right do YOU think you have to enter into that conversation at all?
> Our study has several limitations. First, its retrospective design meant we were unable to measure patient satisfaction and quality-of-life outcomes. Complications and any mention of regret were obtained from provider notes, which may be variable, and thus both may be under-reported. In addition, although an integrated health care system allows for continuity of care, some members may have transferred care or changed their insurance status and thus, subsequent complications, or reversal operations, would not have been captured. Next, our study was conducted at KPNC in an insured cohort of individuals with access to gender-affirming medical and surgical care. Therefore, our outcomes may not be representative of the general population, many of whom lack similar access to care. Finally, the time to develop postoperative regret and/or dissatisfaction remains unknown and may be difficult to discern.
You state that "the followups are into post-teenage years and most are very satisfied", but the authors were very explicit about not being able to determine this due to the study design.
The authors also report that:
> The median age at the time of referral was 16 years (IQR=2) and ranged from 12-17 years. Patients had a median post-operative follow-up length of 2.1 years (IQR 1.69).
Which implies that for many patients, the follow-up would have been within their teenage years.
Not only that, but the number of kids on hormone blockers is in the thousands (and increasing a lot every year). It's claimed that their effects are reversible, but that is false: they lead to sterilization if the timing is wrong.
I appreciate the study links, but it makes it really hard to take you seriously when you claim trans kids are not allowed to “exist”. That’s extreme hyperbole, as if they’re still alive they obviously exist.
If you don't allow for proper treatment like social transitioning and puberty blockers, they can't be themselves and therefore they can't exist.
Beyond this, there's also the risk of those kids committing suicide because they can't get proper treatment, which is only getting worse with all the anti-trans laws. See https://www.nature.com/articles/s41562-024-01979-5.epdf
Okay, which parts of the Review of relevance to that article do you believe McNamara et al have successfully refuted, and on what basis are you making this claim?
>According to this way more recent study they are totally reversible: And this one says the same:
I see nothing in your links that supports those conclusions. The second one at least asserts that recipients overwhelmingly don't want to reverse the effects, but this too is a complex topic (see e.g. https://slatestarcodex.com/2018/09/08/acc-entry-should-trans... ).
Also, the link you're responding to isn't a "study", but rather a position document from the NHS (UK national healthcare).
You can either force a trans kid to develop the wrong kind of secondary sex characteristics, with all the trauma and painful corrective procedures that will follow later in life, or you can let them take a pill a day which will halt it until they're old enough to make that decision. That really doesn't seem difficult to me.
> Also, the link you're responding to isn't a "study", but rather a position document from the NHS
I know, but it's still based on the Cass report, which claims to be a study.
> As far as I can tell, you linked to abstracts for a paywalled academic papers.
Just scroll down, no paywall.
> The point is about the objective fact of what the kids want. Your moral judgement of what should be done as a result, is irrelevant to that.
This has nothing to do with my moral judgment. If a kid gets diagnosed with gender-dysphoria, they should get proper treatment. Social transition in combination with puberty blockers are the known effective treatment.
Not sure about the US, but here gender-dysphoria in children has to be diagnosed by a team of professionals that aren't allowed to steer them in any way.
One of the challenges in discussing this issue more broadly is that "trans" encompasses such a wide range of different groups with very little in common, from the distraught young girls who want surgeons to cut out their breasts, to the middle-aged men who picked up a cross-dressing hobby, to the trenders who got a colorful new haircut and started making pronoun demands of others.
> Like women’s sports.
Well, when a trans woman is on HRT for a few years, she has muscle mass that's been grown entirely under estrogen. This causes a lot of muscle atrophy and a massive drop in strength. That's why trans women have been allowed to compete with cis women for the last 25 years.
> Like the safety of women only spaces.
How's that even remotely relevant to transgender people? Are you really calling all trans women perverts, or simply afraid that men will pretend to be trans? Because it's a lot easier to pretend to be a janitor.
> The reversibility of puberty blockers is highly disputed.
Not really, for more information about that read the study I posted.
> Whether and under what circumstances trans women have no advantage over cis women is a highly complex question.
Again, not really, except for all the misinformation online. If trans women have such a high advantage, why haven't they dominated the Olympics for the last 20 years?
> We already have men who freely admitted to claiming to be trans solely for the purpose of accessing women’s locker rooms.
So? This happened maybe once or twice in the entire world, whereas pretending to be a janitor is something that's done in every spy movie. Should we also ban janitors?
> > Whether and under what circumstances trans women have no advantage over cis women is a highly complex question.
> Again, not really, except for all the misinformation online. If trans women have such a high advantage, why haven't they dominated the Olympics for the last 20 years?
Not really sure why you specify 20 years, but I'm too lazy to go through the history of IOC positions to figure out the one 20 years ago.
Because looking at the current one already provides the answer.
The IOC doesn't take the position that it is a simple topic.
The wording in https://olympics.com/ioc/human-rights/fairness-inclusion-non... (and click through) is quite clear that they see a tension between inclusion along the axis of sexual identity and a continuation (or successor) or male/female category split.
Where is your actual evidence that puberty blockers are reversible? They are male. Their reproductive systems are organized around creating sperm, not eggs. HRT does not change a male into a female. There are myriad aspects of biology that still make them male and confer all such advantages in athletics. This is just reality.
Disputed by the disingenuous. Notice who they always exclude from the restrictions from those "dangerous drugs"? Cis children. Magically that 0.01% of the population faces absolutely zero issues.
> most of us would be fine with some experimentation
This is why ATProto is a great foundation for the next generation of social media applications. It makes experimentation easier and open to all. It removes the cost of switching to better alternatives. ATProto enables real competition on a single, common social media fabric.
No it isn't. The only implementation of ATProto so far has been heavily criticized for immediately blocking anyone with the wrong opinions, while at the same time letting pedophiles post without much trouble (that butterfly logo is a well-known pedophile logo).
The Bluesky pedo trope is a right-wing falsehood, yet another piece of their misinformation agenda.
ATProto is an open protocol; anyone can add content to the network. Bluesky is a company that operates the most-used application, a microblogging platform like Twitter.
Musk Social has far more awful actions and far more awful personal posts by the oligarch himself. The "awful" thing of blocking trolls on Bluesky is what makes it a place with more and better engagement. We don't all need to read all the awful shit people write online in the name of "free speech"; I have every right to ignore or remove content I don't like from my information diet. The benefit of ATProto is that if you don't agree with the content moderation policies of Bluesky, you can just write a different client (many already exist) and subscribe to different moderation providers (many already exist), all without having to rebuild your social followings.
GP was asking about how fact checking is better than community notes, but you're saying that Meta's community notes will be worse than fact checking, which may be but which is not responsive to GP's question.
Because liberals in Austin Texas have far more experience in what it means for liberal and conservative opinions to coexist together in one place, vs California where liberal opinions are the default and everything else must be shunned.
And Redding, California is far to the right of it.
It's just coded language for who they're going to favor, otherwise it makes no sense at all, as it's possible to find people of all political stripes in both states, as well as employees who would take their duty to stick to the facts very seriously.
Parent obviously meant "center" to be the political center of the U.S. given the previous sentence. I'm not sure they're correct in either statement (not having investigated in any way), or that this is a reasonable thing to consider for a global platform (to the extent that Facebook is one).
Nonetheless, it's trivially true that some place in the US must be to the left of the political center of the U.S.
Not really? The democratic and republican parties are both classical liberal parties, invested in business and capital as the standard and correct way to organize a society. Classical liberalism is a center-right ideology, globally.
Show me the party in the U.S. that wants to abolish private property, wants to provide food, healthcare, and housing to all, that wants to nationalize key industries, that wants to govern from a standpoint of "wellbeing for all". If you can point me to a place where that's the prevailing ideology, I'll gladly recant the idea that no place like that exists here.
You are not using the term “left of center” how most people do. Which is fine if you want to but then don’t get surprised when you have to explain yourself every single time.
BTW as an actual “classical liberal” I find it hilarious you describe the two parties that way.
Social democrats (e.g. Nordic model) are left of center, but aren't MLs or communists. Anarchists (e.g. Kropotkin) are left of center but aren't MLs or communists.
There's plenty of room between the center and Marxist Leninism.
I would say many labor politicians are centrist. Some Democrats are centrist, some are center-left, and some are center-right.
You could just as easily say that the Republicans and Democrats are both left of center because neither party wants to restore a politically active monarchy, establish a national church and reform law and government under explicitly religious lines, restrict and revoke citizenship based on ethnicity, or install a military government. You might say, "but those are all crazy far-right things that no sane developed country would do", but I think nationalizing industries and abolishing private property are crazy far-left things that no sane developed country would do, either.
The Canadian potash corporation, Chilean mining, the French financial sector, Gazprom in Germany, Indian fossil fuels, railways around the globe, Amtrak here in the U.S.
Many many nations are nationalizing things historically and through today.
Nationalization isn't a litmus test for if you are a leftist though, it's an example of one leftist policy.
In general, the left seeks social justice through redistributive social and economic policies, while the right defends private property and capitalism.
> In general, the left seeks social justice through redistributive social and economic policies, while the right defends private property and capitalism.
That’s an extremely left-skewed framing that leaves out a lot of important cultural issues. For instance, the leftists during the Spanish Civil War massacred Catholic priests and nuns and burned down churches while many on the right sought to protect the church and restore the Spanish monarchy.
It’s more correct to say that the right defends traditional institutions, which might include capitalism, but even these vary widely from country to country. For instance the United States never had a monarchy or an established religion; most of the American Founding Fathers would have sat somewhere left of center in the Estates General during the French Revolution, which is where we get the terms “left” and “right” from in the first place. But in an American context, the republic and the constitution are the traditional institutions that the American right has traditionally defended, even though they were established by the 18th century left.
Even when it comes to capitalism it’s not as clear cut. Prior to the American Civil War, the north was capitalist but the south had a precapitalist agrarian economy based on slave labor. The northern liberals, abolitionists, and capitalists formed a coalition to the left of the southern planters. Outside of areas that had widespread slavery, there’s also a long tradition of right wing critiques of capitalism as a destructive change to the traditional patterns of society, and there are many on the far right who seek to return to much older ways that are now lost.
You're generally correct, but I imagine you won't get a good reaction on HN to this viewpoint. Most people on here unfortunately don't really have an understanding of politics beyond a very surface-level one.
I'd be thrilled to have the right correctly differentiate between the democrats and leftists. Using the right terms would be a useful start to having some dialogs.
The Left Right dichotomy is a fairly broad set of political ideas, especially globally. The Left typically includes socialists, communists, anarchists, labor movements, syndicalists, and social democrats. Typically, these movements are collectivist, whether that's collectivist in a big government or collectivist in small local communities.
Classical liberal policies, like those of the Democrats and Republicans, are right of center.
An example: when was the last time the Democratic Party pushed for nationalization of a whole industry, e.g. aerospace, rail, or energy? What about offering food and housing for everyone? Abolishing private property? Those are leftist policies.
Not at all - I'm just confused about the whole left / right distinction being proposed by the OP, since "nationalization" was never (as far as I can tell) part of the "left" at least when we talk about _socialism_. National socialists were definitely interested in "nationalizing" things, but socialists were a little bit more broad in their interpretation of what they were doing with "the stuff that isn't property" (at least as far as I understand it).
But maybe the OP was not talking about "what they thought they were doing" only describing "what they do / did"?
I was articulating the sorts of things leftists often push for. Nationalizing industry is one such thing - holding industry in common good for the people is one flavor of leftist. You see that in Soviet style communism, for example.
It's not the only way to be leftist. You can be leftist and anti central government, for instance. You cannot, however, be leftist and staunchly capitalist.
The whole thing is ideological. Trump and Musk are undertaking their takeover of government, and so the trillion-dollar companies that control the rules of the spaces in which the vast majority of our discourse happens do their thing and kiss the ring.
We can debate the merits of notes vs factcheck. But it's hard to see the bullshit about freedom of speech as anything other than that: you are now allowed to express opinions that the new regime shares. Long live the king.
> "Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content." - like people being in Texas makes them more objective?!
The FB office in Austin, Texas is in a moderately left-leaning area. Their office in Silicon Valley is in about the most extreme left-wing place in the country. At the very least, teams at their Texas offices will have more overlap with the median voter than the ones in California. If their Texas offices were in rural rancher country, then I'd agree with your concern that it would just be swapping one bias for another.
It's not about actual employees, it's about signalling "Texas - yay!" and "California - booooo!" in order to make good with the incoming administration.
Grew up in Ohio. Always wanted to live in Silicon Valley. Been here 14 years now. Not leaving. But this is happening because of how terrible the California brand has become. Pretending our prestige and brand is the same as it was 20 (or even 10) years ago is not the answer.
Yeah I was recently given the choice to move for RTO to the bay area versus pacific northwest, and everyone I asked about this expressed their dissatisfaction with California.
That's a complicated topic, but part of that is because California has become a target for a number of people with money, influence and media outlets.
Not to say it doesn't have problems - like housing - that are self-inflicted. Just that a big part of the 'brand' problem is people targeting the state.
Yes there is a lot of “unfair competition” but ultimately you build a brand by demonstrating your positive qualities and making it clear what you stand for.
People care less about ideology than they do about their own lives and prosperity.
It used to be clear: you can make a better life in California. It was a land of growth, prosperity, and wealth. Growing families moving into golden cul-de-sacs.
We should actually make those things true again. Houses don't need to be affordable in Palo Alto, but not being affordable anywhere is a problem. We don't need to develop Big Sur, but not being able to develop any coastal property is a problem. We don't need to deport law-abiding citizens because they fail an ICE sweep, but not being able to deport career criminals is a problem.
The problem is that we have lost any ability to make a positive case for California outside of niche political interests and very specific career paths.
Says more about fb being penny pinching than anything. The kid working the panda express in california can afford a 1br apartment, why not a fb moderator?
Actually, the cost of rent and housing there has dropped over the last few years, because they are doing a good job building. Not so great for my SFH's value, but it's definitely dropping from "WTF" to "seems more normal" pricing.
By the same - entirely unevidenced - reasoning, your posts ITT are about signalling the reverse in order to make good with sympathetic readers on HN.
See how that works?
The specific places in California where Facebook had "trust and safety and content moderation teams" were places that very much don't reflect the average politics of the US. That is naturally going to reflect itself in the ideological composition of employees, and therefore in political bias in the fact-checking process.
* there is no good reason a priori, outside of political bias, to suspect the New York Post (founded 1801 by Alexander Hamilton) of spreading such disinformation.
Thinking Menlo Park (or any of Silicon Valley, really) is in any way "extreme left-wing" is a sure indication you haven't spent any time there and are basing your viewpoints off of what others have said on social media. Billion dollar corporations by definition do not support anything remotely "extreme left-wing".
I’ve lived in SF, Mountain View and also the east bay and I’ve worked at a billion dollar company that did indeed support some very left-wing causes.
Despite having grown up in a light blue state, the difference in politics was very noticeable when I got to SF/SV. This isn’t a value judgement, just my observation.
That's why I was talking about Silicon Valley, not SF or east bay. They're much different places. Besides that, a corporation giving lip service to diversity =/= "extreme left-wing" views. These billion dollar corporations are still capitalist, through and through. Actual extreme left-wing views are staunchly opposed to capitalism.
Talking about "actual extreme left-wing views" is something that only really works in internet arguments where everything eventually trends into Communism vs Capitalism (TM).
In reality, every country has their own set of issues. Every democracy has their set of parties that exist somewhere in the policy space of issues relevant to them. In the US, we generally think of socially progressive policies as "left" along with non-market views of the economy. As such, the SFBA is generally much closer to the American "left" edge than the right.
I agree that South Bay and the Peninsula are less "left" than SF or Oakland, but I think this sort of argument is sophistry. That said, I don't really think moving hiring to Texas will change anything ideologically among employees and instead is just a way to signal to the new administration that they're Friends (TM) and on the backside a way to cost cut so they can pay less in Austin.
That's a funny way to say "I'm sorry, I should not have assumed you were unfamiliar with the region, when it has instead become clear that you live out there".
Very few, if any, billion-dollar corporations are in any way “extreme left wing”.
But that is not “by definition”. The definition of a “billion-dollar company” is that it is valued by investors at a billion dollars. That definition has absolutely nothing to do with its political leanings.
“Vanishingly unlikely” sure. But not by definition.
What I mean is that extreme left-wing views would advocate for the nationalization or abolition of all private companies, so a corporation couldn't fit into that.
Those ideological changes are corrective though. California is obviously very far in one direction politically, and presumably the existing Meta board members are not right wing.
I don’t use either Facebook or X so I have no personal experience. But the New York Times cited this meta-analysis for the proposition that they’re not ineffective:
Fact-checker warning labels are effective even for those who distrust fact-checkers
They also cited this paper for the proposition that Community Notes doesn’t work well because it takes too long for the notes to appear (though I don’t know whether centralized fact checks are any better on this front, and they might easily be worse):
Future Challenges for Online, Crowdsourced Content Moderation: Evidence from Twitter’s Community Notes
Thanks for pushing for clarity here. So: I'm not saying that fact-checker warnings are ineffective because people just click through and ignore them. I doubt that they do; I assume the warnings "work". The problem is, only a tiny, tiny fraction of bogus Facebook posts get the warnings in the first place. To make matters worse, on Facebook, unlike on Twitter, a huge amount of communication happens inside (often very large) private groups, where fact-checker warnings have no hope of penetrating.
The end-user experience of Facebook's moderation is that amidst a sea of advertisements, AI slop, the rare update from a distant acquaintance, and other engagement-bait, you get sporadic warnings that Facebook is about to show you something that it thinks you shouldn't see. It's like they're going out of their way to make the user experience worse.
A lot of us here probably have the experience of reporting posts to Facebook for violating this or that clearly-stated rule. By contrast, I think very few of us have the experience of Facebook actually taking any of them down. But they'll still flash weird fact-checker posts. It's all very silly.
So, why wasn't a mixed approach taken? That's the obvious question you should be asking. Paid fact-checkers are leaps ahead in quality and depth of research; Jonny Twoblokes doesn't have the willingness to research such a topic, nor the means to provide nuanced context for the information. You are saying that the impact was limited, but not because it was low quality. If you did both, with the first draft done by crowdsourcing and a professional fact-checker producing the final version, I don't think you would have a good reason not to do it.
I've answered elsewhere on the thread why I think the warning-label approach Facebook took was doomed to failure, as a result of the social dynamics of Facebook.
A way to quantify this doesn't immediately come to my mind. Maybe reasonable metrics would be:
1. What % misleading/false posts are flagged
2. What % of those flagged are given meaningful context/corrections that are accurate.
It seems there's circular logic of first determining truth with 1, and then maybe something to do with a "trust"/quality poll with 2. I suspect a good measurement would be very similar to the actual community notes implementation, since both of those are the goal of the system [1].
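As a rough illustration of those two metrics, here is a minimal sketch that computes them from a hypothetical hand-labeled audit sample; the field names and figures are invented for the example, not drawn from any real moderation dataset.

```python
# Hypothetical sketch: computing the two coverage/quality metrics named above
# from a hand-labeled audit sample. The data below is illustrative only.

posts = [
    # each dict: was the post misleading? was it flagged? was the flag's context accurate?
    {"misleading": True,  "flagged": True,  "context_accurate": True},
    {"misleading": True,  "flagged": False, "context_accurate": None},
    {"misleading": False, "flagged": False, "context_accurate": None},
    {"misleading": True,  "flagged": True,  "context_accurate": False},
]

misleading = [p for p in posts if p["misleading"]]
flagged_misleading = [p for p in misleading if p["flagged"]]

# Metric 1: what share of misleading posts got flagged at all (coverage)
coverage = len(flagged_misleading) / len(misleading) if misleading else 0.0

# Metric 2: of the flags applied, what share carried accurate context (quality)
accurate = [p for p in flagged_misleading if p["context_accurate"]]
quality = len(accurate) / len(flagged_misleading) if flagged_misleading else 0.0

print(f"coverage: {coverage:.0%}, quality: {quality:.0%}")
```

The circularity noted above shows up in practice: someone still has to produce the ground-truth labels before either percentage can be computed.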
The deep irony is that some of the original contributors to Birdwatch were working on this stuff at Facebook before being blocked for various reasons and leaving to work at Twitter.
To steelman this a bit, early versions of Birdwatch had problems with unsourced notes and speed of note display. There’s a bunch of research that shows that 1st impressions of info tend to dominate, so speed matters a lot.
In practice FB’s program was poorly resourced and overly complex so I’m not sure it ever achieved its theoretically lower latency.
I don't care about the fact checking part but I do care about the "removing the limits on political content on feeds".
I think everyone can agree that polarizing content being pushed into people's feed for engagement is a very very bad mix with politics. There is no benefit for anyone in doing this, except for meta's metrics and propaganda outlets.
People who are floating using the military to steal territory from a NATO ally, as just one totally random example.
Yeah, I know what the press release said lol. Do you typically take press releases as fact?
Journalist: "Can you assure the world that as you try to get control of [Greenland and Panama], you are not going to use military or economic coercion?"
Extremist: "No. You're talking about Panama and Greenland: No, I can't assure you on either of those two..."
Journalist: "Will you commit that you are not going to use the military?"
No, I don’t take press releases as fact - do you not see that mainstream opinion on gender and immigration is clearly not in line with what Facebook were moderating for?
Compared to compelling people to believe in gender ideology, industrial scale suppression of dissent on private platforms, and teaching race based original sin in schools, being the third president to want to get control of Greenland doesn’t seem particularly extreme.
Also, as was pointed out but you omitted from the question you’re quoting, asking a military commander their strategy is a very poor question.
The most useful result of Community Notes I've seen is when someone posts something Y, and then a few hours later it comes out that actually it was Z, community notes have been able to attach "actually it was Z" to the original viral post, still being shared.
I don't know if anyone cared much about fact checker reports (or if anyone even bothered to track how often they ended up being wrong when looking back in review).
Also, I didn't know Meta was outsourcing fact-checking, which is a terrible idea that sponsored a shady economy of ghost workers paid pennies to review gore content.
It'll really take a special mind to think Community Notes wasn't a positive feature added to the social network sphere. Musk despite his schtick did very bold things that other platforms wouldn't think of doing, such as open-sourcing the recommendation system or recently suggesting the idea of optimising content with unregretted time spent that will reward healthy content and punish toxic content even if the two had the same number of impressions.
The Overton window is shifting towards more open speech and away from the self-gratifying echo chambers that promoted toxic cancel culture.
> It'll really take a special mind to think Community Notes wasn't a positive feature added to the social network sphere.
Attributing it to Musk, though, would require a time machine.
> recently suggesting the idea of optimising content with unregretted time spent that will reward healthy content and punish toxic content even if the two had the same number of impressions
The precise sort of censorship and "cancel culture" he decried upon purchase.
Facebook's approach to fact checking has always been cost-optimization.
It would have been a drag on profits to hire professionals to fact check and provide them enough time to do their job, at scale.
They quote numbers about how much they're spending as proof they're doing something, but that spend isn't normalized against the scale of their platform.
How about the fact that Meta killing their fact-checking feature will have a very direct impact on the quality of Community Notes? Per today's Platformer:
I don’t think the fact-checkers were a better product feature in the current environment. I do think that the reasons they aren’t a good product feature are linked to a concerted effort to convince people to distrust fact-checkers. I recognize that many people would say the distrust arose from the way fact-checkers behaved; I don’t think that’s true.
From a product perspective, once it’s accepted that Community Notes go through an algorithmic filtering process (which they must), you have to accept that you’ve lost most potential for third party viewpoints. There is nothing stopping ideological companies from putting their thumbs on the scale.
Back to product perspective: that means there’s no barrier preventing Notes from losing trust in the same way fact checkers have. The playing field is not static.
I think the speed of the rollout will tell us a lot about how long this has been in the works. It’s not a one week feature, although I will remember that Meta produced Threads very quickly.
I'm not sure about better, but I'm concerned about a second Rohingya genocide.
There was a lot wrong with Facebook's moderation system. Spend any time in any politically active groups -- or groups that like to discuss politics -- and you'll quickly find people complaining about deranking. Based on both the extreme frequency with which it's reported and my own experiences with Meta, I believe that they're not making it up.
But Meta's moderation tools don't primarily exist -- as I understand it -- to keep discourse informative. They exist so that Meta doesn't accidentally become somewhat responsible for another genocide.
I think that community notes may be a better move for public discourse, but most conversations on Facebook itself happen in groups, and in groups nobody is going to be posting Community Notes that go against the trend of the group -- even if they might be useful for totally public discourse.
I tend to blame the people actually doing the genocide for genocide, rather than a social media network. Ultimately I think one can clearly draw the line for personal responsibility well before literal murder.
Tens of thousands of people have been raped, entire towns have been destroyed, around 50k people have been killed, and 700k have been forced to flee.
If Western countries actually cared about the human cost of this genocide, it would be almost a trivial matter to stop it overnight with a few well placed missiles against Myanmar's military, which continues to perpetrate the genocide even today.
Instead, no real action is taken and it's just a talking point for "Facebook bad." Blaming Facebook for a genocide is like blaming videogames for an active mass shooter w/o actually doing anything to stop them.
Eh, I don't think that lens is useful. It appears to me that the genocide very likely may not have occurred -- and certainly would have harmed fewer people -- if Facebook didn't exist.
It is not simply a matter of it happening elsewhere on the internet -- Myanmar is one of the countries that Facebook provided its Free Basics package to.
Of course, I think the bulk of the blame lies with those actively perpetrating the genocide. But I'm concerned mostly with outcomes, and it seems that with different behavior from Facebook, there would have been a different outcome in Myanmar.
We can look at precedent here. RTLM's involvement in the Rwandan genocide for example would be a good place to start. There's a pretty explicit connection between the radio propaganda (RTLM furthered the Hutu Power ideology) and the actual violence. We should be able to draw a distinction between Jack Thompson and Tipper Gore fearmongering versus explicitly violent rhetoric designed to dehumanize people and promote the eradication of those people.
The actions taken by the US in response to the genocide in Myanmar were largely economic because of, I would think, the country's proximity to China. Can't imagine direct intervention would have gone smoothly.
For the record, I don't think our response in Myanmar or Rwanda were good, not trying to dispute or downplay that.
> but this feels like the kind of decision that should have been in the works for multiple quarters now
My take is that while it must have been a potential plan for some time and switching to this plan can't have just been an “overnight” decision since the election, the timing suggests that either they were waiting for the outcome of the election and using that result in the decision-making process, or that the election result pulled the decision¹ forward.
----
[1] Or the implementation, if the decision had already been made. They may have already been moving towards this, purely as a business decision based on internal effectiveness studies, no matter who was in power, but given the election result there are some political benefits to rolling the plan out now instead of in Q2 or Q3.
Yeah, I'd like to hear this too. I use both and I love Community Notes. People are pretending like this is some big culture war issue and a win for the right, but I've seen Community Notes call out Elon for retweeting bullshit more times than I can count. (As well as calling Jacobin out on theirs.)
I also appreciate that if I liked a post that Community Notes later called out, I get a notification that it was misinformation.
Well the presidential election was a win for the right. FB and Meta have always complied with and often been an arm of the US govt regarding regulating speech on social media, and they are not really changing that. It's the gov't that's changing.
> I'd like to hear an informed take from anybody who thinks that Facebook's fact-checkers were a better product feature than Community Notes.
Zuckerberg's framing of this as being about "fact checking" is intentional misdirection. Very little checking of facts was actually happening.
This is about moderation. Specifically, reducing the obstacles to posting racist/misogynist/political abuse and threats. The objective is to make Facebook acceptable as a platform for the incoming US administration and its supporters, while simultaneously increasing engagement with more inflammatory user-generated content.
So it's primarily a demonstration of fealty to Trump and co, with upsides.
Trump and Zuck recently met privately. I do wonder if these changes are, in part, also a quid pro quo for Trump undertaking to continue with the ban on TikTok in the US.
Facebook has a long, bloody history of expanding their services into areas without investing in content moderation first. Sometimes they don’t have a single employee who can speak the language of their users. As a result, tens of thousands of people have died in genocide.
You can’t have community notes if you don’t already have a community established. Community notes won’t help if the community’s behavior is the problem.
Many people will die as a result of this decision.
Since then Marketplace has more or less destroyed Craigslist. So two months ago I tried to create an account strictly for Marketplace. My email, phone, and location have all changed since 2018. Despite verifying phone and doing the most extreme KYC step of taking a picture of myself with my ID I still could not make a new account. So maybe they should focus on that?
SAD! Craigslist was a much better product and community even without the luxury of identity verification. It had some obvious spam but by and large worked fine once you got the hang of it. Marketplace is a cesspool of lowballers and sex workers with some shitty ML sprinkled on it, underneath it all some slow and clunky RPCs that need refresh all the time.
Forget about the sucky product. Who has Facebook been hiring in the past decade that built that technical crapshoot?
All the burning man camps I get invited to are a bunch of Gen X-ers conferring on Facebook groups
so I wind up making a new Facebook account once a year for a few months
although could see this moving to Discord across those same age ranges, I’m in some local groups there which overlap with festivals/events/things like the burn.
yeah exactly, its now a better platform and has enough critical mass. With Nitro/Discord's paid plan you can change your profile per server if you identify different ways in different groups
I've seen Gen X-ers be notoriously inflexible about considering Discord or anything besides Facebook Groups, but as they say: nobody can prevent you from becoming like your parents
I tell that cohort "you can't Google this, you have to join the platform and search that channel", and they balk as if their Facebook Group that's segregating them is any different
back to burning man specifically, at this point it seems like I can get invited to different camps, so I'm excited about that. mixed age groups, stays fresh
> I've seen Gen X-ers be notoriously inflexible about considering Discord or anything besides Facebook Groups, but as they say: nobody can prevent you from becoming like your parents
Yeah, I'm a millennial with older and younger friends. I found that around 35 +- 4 years, people generally get more annoyed and flippant about change. I get it: at this age you're probably at the peak of both career and life responsibilities, you want to focus your energy on your family/career/other loved ones, and the last thing you want to do is learn something new for doing what you've been doing for the last 18 years (chatting about something online).
But it's been pretty fascinating watching the change as my older millennial/young Gen X friends get into Back In My Day conversations while my Gen Z friends talk about new fashions and music.
It doesn’t matter if they were better or worse, it’s all relative. It depends on who you ask, everyone will give a different answer. You are looking at this from a technological and problem solving perspective, while the people who made the decision prioritized these much lower on their list. You need to think like a politician and consider the PR side of things. This is not about solving the problem, it’s about perception, only perception.
By implementing community notes, Facebook is shifting responsibility. Previously, the perception was that Facebook was doing fact checking (and no one really cared about the third parties). Now, the responsibility moves to the community. Not only does this shift responsibility, but it also makes Facebook appear politically neutral to Republicans, because they can say, "Hey, we did exactly what Musk did, and you liked it. We are politically neutral".
I think both are atrocious features. It would be useful to know facts about a site or article: this is a new domain, this is a state-run outlet, etc.
But other than that, how about I get to use my critical thinking to evaluate the content I access without my “betters” trying to color it first?
Any day now, I’m sure Gmail will introduce a feature where Gemini will warn you that the article your grumpy uncle sent you is not nuanced enough. Or your cell provider will monitor your texts and inject warnings that the meme you shared doesn’t tell the whole story.
Because no-one, including you, is an expert on everything.
So there will be many topics for which you will not be able to make an informed judgement about the accuracy of the content. And on a social network centred around sharing it can be very easy for inaccuracies to spread.
<country hick accent>Looks like we got ourselves a reader…
Yep, reading, researching, considering what things matter given your own life experience and situation, these are all meaningless in the face of THE EXPERTS!
/s
When J.S. Mill wrote about infallibility[1], I can't remember if he wrote about outsourcing that infallibility belief to others, but if he did, he predicted the last 5 years of pro-censorship arguments perfectly.
I'm no expert in this domain, but the larger issue at play here is that:
1. certain groups are arguing for assigning trust to a group to perform case-by-case censorship as a countermeasure to propaganda and disinformation,
2. other groups (sometimes purposefully) misinterpret this as blanket censorship and conjure up several slippery-slope warnings.
When talking about general things, it sounds very noble to talk about protecting every budding idea... therefore group #2 gets to trot around the higher moral ground when arguing in this way.
When talking about the specific ideas being "censored" (e.g. "immigrants eating dogs"), group #1 gets to claim group #2 is some flavor of crazy.
What both miss is that they have been pitted against each other by so many interest groups: nation-state and corporate.
I don't really mind how they police things and it's not the point of this announcement. The technology firms think Trump could be so dangerous to their businesses that they are willing to completely give in pre-emptively to this threat. What else are they willing to do given this, interfere in elections for example? Promote misinformation that benefits Trump? Undermine truth about vaccines and safety in our health system? The list of potential problems is quite long.
But could you actually get any money from suing Wikipedia? They would just deflect blame to volunteer editors.
Why do you think Facebook is ending fact-checkers now? Editors are hired by Facebook, so Facebook is the publisher. If Facebook publishes a "fact" and people get harmed as a result, Facebook gets sued into bankruptcy. There is no protection from the government anymore!
> If facebook publishes "fact" and people get harmed as result, Facebook gets sued to bankruptcy.
What a nice reality it would be where Facebook could actually be sued into bankruptcy for whatever reason, let alone such a minor one. Sadly, that's not our reality.
In the U.S. (relevant because it is home to the Wikimedia Foundation), you can sue anyone for any reason at any time. You might get immediately dismissed, sued back ("abuse of process" or similar), or something along those lines, but there is nothing structural that stops you.
The structural reason that you can't sue email is that email is not an "anyone", it's an abstract concept. How would you even e.g. notify "email" that it is under litigation?
"fact-checkers were authoritative source of the truth"
There is no such thing. If your understanding of truth is so flat, you're incredibly ignorant and dangerously foolish. Biases, perception, and propaganda influence the "truth" you see in the world. And no one is immune to it. Even large groups of very smart people are not immune to it. In fact they're even more often prone to groupthink.
The whole point of science is to eliminate human authority as a source of truth. Every claim must be peer reviewed, should be replicated by independent parties, and open to falsification by new evidence.
“Appeal to authority” is always the wrong approach if you are seeking truth.
« I think one of the troubles of the world has been the habit of dogmatically believing something or other and I think all these matters are full of doubt and the rational man will not be too sure that he's right; I think we ought always to entertain our opinions to some measure of doubt » (Russell)
> Community notes are just opinions of random people on internet.
No. Community Notes is an open-source peer review-like system but designed in a way to limit bias: When sets of note contributors (the peers in this case) who normally strongly oppose each other’s views on Topic A strongly agree on a point made re Topic A, we’re likely getting closer to the truth.
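To make the bridging idea above concrete, here is a toy sketch of the mechanism: a note only surfaces when raters from clusters that normally disagree both rate it helpful. The real Community Notes system uses matrix factorization over its public rating data; the cluster labels, threshold, and sample ratings below are illustrative assumptions, not the actual algorithm.

```python
# Toy sketch of the "bridging" intuition: a note only surfaces when raters from
# normally-opposed clusters both find it helpful. Clusters, ratings, and the
# 0.5 threshold are illustrative placeholders.

from collections import defaultdict

# (rater_cluster, helpful?) ratings for one note; the clusters stand in for
# groups of raters who usually disagree with each other.
ratings = [
    ("cluster_a", True), ("cluster_a", True), ("cluster_a", False),
    ("cluster_b", True), ("cluster_b", True),
]

by_cluster = defaultdict(list)
for cluster, helpful in ratings:
    by_cluster[cluster].append(helpful)

# Require broad agreement from *every* cluster, not just a raw majority.
THRESHOLD = 0.5
show_note = len(by_cluster) >= 2 and all(
    sum(votes) / len(votes) > THRESHOLD for votes in by_cluster.values()
)

print("show note" if show_note else "keep note hidden")
```

A simple majority vote would let one large faction push its notes through; requiring cross-cluster agreement is what limits that kind of brigading.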
Well, the most mainstream "news" source, Fox News, had to pay out almost a billion dollars for disinformation, so the biggest mainstream "news" institution (though they did claim in court that no one could possibly think they are news, so it's okay for them to lie) kind of had to apologize.
> Well the most mainstream "news" source Fox news had to pay out almost a billion dollars for dis-information […]
The case in question did NOT go to trial, so your claim isn’t entirely correct, but yes, all mainstream “news” outlets (including Fox) abuse our trust by constantly lying to us—I don’t watch or trust any of them.
I remember when Rachel Maddow told us, “Now we know that the vaccines work well enough that the virus stops with every vaccinated person. A vaccinated person gets exposed to the virus? The virus does not infect them; the virus cannot then use that person to go anywhere else.” [0]
> The claim that the president was a Russian spy was never made afaik. But if you have evidence of a fact checker saying this, I’d appreciate it.
I didn’t save the links, so no, I don’t have evidence ready to show you, and it’s not like I can just go to their websites and see an accurate history of their conclusions on specific claims, given that many of them have a history of simply burying their original conclusions once it becomes obvious they were wrong (e.g., [0]).
Also I saw an interesting interview with Marc Andreessen recently where he mentioned about how the Dems would fund "Disinformation Research" units at universities.
These research units would (shockingly!) be staffed by 100% Democrat supporters and (even more shockingly) would tend to view everything the Dems disagree with as "disinformation". These groups would then apply pressure to media/social media companies to suppress content. So they were able to breach the first amendment by using censorship by surrogacy.
The Democrat censorship industrial complex was ugly and insidious and leading us to a very dark place indeed.
Fact checkers are the technocratic solution; they're a panel of experts to Community Notes' jury of our peers. Fact checkers are a much better product feature than Community Notes if we want a feature that best serves people who care about facts. That's not our world, though. People don't care about facts; we are humans, and our lives are lived on vibes. The average person would rather listen to their idiot friend's uneducated thoughts about transgender women in sport than listen to a lecture from an expert. Community Notes is probably a better feature for the real world, but it's still junk; "effective" is not a label the feature deserves, because the majority of misinformation on X goes un-noted.
Like any other work, it can be reviewed by supervisors within the company and/or the client (Meta). If a sample of an employee's work shows that they often hide content that isn't factually false, they are performing their job poorly. If Meta doesn't like the job the company is doing, the contract can be cancelled.
> If we could have legitimate fact checking that really works, then I guess we wouldn't need any politics at all.
You absolutely need both. Politics is about which decisions to make within the context of shared facts. The amount of the US national debt, the number of people caught crossing the border illegally in 2024, or the number of people sleeping on the streets in San Francisco are all matters of fact. What to do about them is politics.
It is also a fact that many politicians are corrupt and are fooling us. But they arranged it nicely so that they aren't being fact-checked.
And the ones in power and with money can decide who the fact checkers will be. And the ones in power and with money can help and support each other. Because we want to keep the money inside the family, to protect the facts you know.
When you grow up you start to understand that you can't trust all authority all the time.
I was answering your question. You asked how fact checkers can be fact checked and the answer is like any other job. Fact checking isn't magic, and it's existed for a long time. It's basically what newspaper sub-editors do.
> When you grow up you start to understand that you can't trust all authority all the time.
I think you know I'm not arguing for this. Don't misrepresent my position, please.
Well I think what you are calling fact checking is actually journalism.
The concept of fact checking is a very recent movement, with the idea that we could filter out the "fake news" on the internet, which is also a recent concept.
But it turned out that the so-called "fake news" wasn't always so fake, and that the fact checkers weren't always so factual.
So it turns out that you can't trust any group to determine what the facts are for the rest of the people.
You can fact-check for yourself, but don't impose your "facts" on other people like they're real facts. Respect other people and let them think for themselves. You can of course share your knowledge, but you should let the other person ultimately decide what they believe.
It sounds like you are disagreeing with the concept of facts, but facts do exist. If someone claims that a politician said a particular thing in a speech yesterday, and the politician gave no speech yesterday, then the claim is factually false. It's not a matter of respect or disrespect to say so, and it doesn't matter what you choose to believe on that topic.
> The concept of fact checking is a very recent movement, with the idea that we could filter out the "fake news" on the internet, which is also a recent concept.
Again, this is not accurate. Look at the job sub-editors have been doing for a century or more. Their main role is to save the newspaper from getting sued or looking silly by striking out or questioning any claim that can't be proven to be true, or corroborated by multiple sources. Fact checking is not a new discipline.
Well it has a lot to do also with the way you say things, how you interpret the words. Maybe the politician did give some kind of speech, but maybe it wasn't an official speech. There's always more to the story, and multiple ways of interpreting things.
Of course some facts are less flexible than others. Like most people wouldn't argue whether a football is round. Although it matters if you're talking about an American football or a soccer football. So context also matters, and that can be confusing sometimes.
So the facts the fact checkers were called in to tackle were so flexible that, it turns out, the job isn't doable in a reliable way.
And newspapers also don't always have the correct facts. Often things in the newspapers are wrong. And no they are not always being sued for that.
Again, you can fact-check for yourself, that is totally fine, and I would even encourage it. Then you make up your own mind and you are more independent and less shapable by others.
We don’t. People are social. We care about what the people in our community think, whether it’s factually accurate or not is inconsequential. Those of us wasting our lives arguing on the internet in the pursuit of truth are a tiny minority of atypical people. People yearn for the warm embrace of affirmation, not the cold hard truth challenging them at every turn.
Well first you've got to define what is meant by "facts". Most people presume the word refers to some kind of community consensus, and then they immediately gatekeep what counts as the "community" among which the consensus is shared.
However the basis for fact is precisely predictive power, so it's actually more like the battle between science and superstition. Information that can directly empower a person is not necessarily information that will help them to feel more comfortable or confirm their biases.
Do you mean that OP is incorrect, or just impertinent? Just because you have to use a light touch does not mean your friend does not have a Problem. (And I'm speaking as an American)
Europeans are just as silly but mistake failure for sincerity. As a sad fantasist I'm immensely fond of Anglo culture but many brits are totally misaligned and insane.
More like “the facts are the facts and reality does not care if you don’t believe in it”. It’s a special kind of nihilism to want to stick it to the universe and insist on one’s own alternative reality like an overgrown angry teenager edgelord.
Ayn Rand was pretty insistent that we should be able to objectively ascertain the facts. Objectivism failed precisely because we're not really all that rational, and because apart from the irrational part of us there's also the fact that we can manipulate perception and gaslight others. If you're a newcomer to a pair of groups that vehemently disagree as to the facts you might soon find that you have to make a choice yourself as to which group to join, and suddenly you have to deal with social pressures not just facts. Do you want to be in the in-group or in the out-group? Can you deal with the shaming that goes with being in the out-group? Etc.
It's all so tedious, but this is what we humans are like.
> As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
IMO the concerning part is hidden at the bottom. They want to go back to shoveling politics in front of users. They say it is based on viewing habits, but just because I stop my car to watch a train wreck doesn't mean I want to see more train wrecks. I just can't look away. FB makes their actions sound noble or correct, but this is self-serving engagement optimization.
Social media sites should give users an explicit lever to see political content or not. Maybe I'll turn it on for election season and off the rest of the year. Some political junkies will always have it set to "maximum". IMO that is better than FB always making that decision for me.
>Social media sites should give users an explicit lever to see political content or not
Facebook does sorta have this, under Settings & Privacy > Content Preferences > Manage defaults. Note that the only options for "Political content" are "Show more" and "Default". The other categories listed also include "Show less". There is no "off" option for any of the categories.
IIRC, Political Content is by default restricted on Threads. But if someone you follow engages with or posts content that is political in nature, fb doesn't hide that for you
They will just relabel what is political. Union organizing? A bill on internet censorship? Anything mildly inconvenient to Meta or its shareholders? That's politics, you said you don't want to see any politics, didn't you? The culture war? Well, that's just pop culture, so that gets a pass.
Everything important is politics though. Celeb talks about her experiences - politics. Earth is getting warmer - politics.
Our lives ARE political.
Hell, right now researchers on misinformation are being harassed by senators in an effort to bankrupt them and make living lessons of them, to stop others from reducing the reach of manipulative content.
We already had the entire free-speech fight at the dawn of content moderation. We collectively ran millions of experiments and realized that if you don't moderate community spaces, the best ideas DON'T rise to the top; the most viral and emotional ones do.
If you want to see what no moderation looks like, you can look at 4chan.
By nature, taking a stand on being factual is automatically political, because there are people who are disadvantaged by facts. Enron and oil producers spread FUD about global warming because it was problematic for their profits.
Stopping their FUD is censorship via moderation. How is a regular joe going to combat a campaign designed to prevent people from reaching consensus?
I really do wish that one of the major platforms would offer strict white- and blacklists. "Doomscrolling" would be so much nicer if one could have, say, strict filters set to "Don't ever show me pranks, fake useless DIY, kids being exploited, anything gym related" and "I really like snowboarding, WW2 history and pinball machines." Of course, the algorithm is still gonna "do its thing", but with a few hard guides.
Sure, initially the platform's view time would decrease, but then maybe people would actually like that platform.
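As a minimal sketch of what those "hard guides" could look like, here is a toy filter that applies explicit deny and allow lists before any ranking runs. The topic labels, scoring, and function names are hypothetical placeholders, not any platform's real API.

```python
# Minimal sketch of a hard allow/deny guide applied before normal ranking.
# Topic labels and the boost scheme are illustrative assumptions.

DENY = {"prank", "gym", "fake diy"}
ALLOW = {"snowboarding", "ww2 history", "pinball"}

def filter_and_boost(posts):
    kept = []
    for post in posts:
        topics = {t.lower() for t in post["topics"]}
        if topics & DENY:                      # hard filter: never show these
            continue
        boost = 1.0 + len(topics & ALLOW)      # gentle boost for liked topics
        kept.append((post["id"], boost))
    # the platform's usual ranking would still run on what's left
    return sorted(kept, key=lambda x: -x[1])

posts = [
    {"id": 1, "topics": ["prank"]},
    {"id": 2, "topics": ["snowboarding", "travel"]},
    {"id": 3, "topics": ["cooking"]},
]
print(filter_and_boost(posts))  # post 1 dropped, post 2 boosted ahead of 3
```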
Meta has failed (abysmally) at identifying and categorizing content where you’ve said “show me less of this.”
Bluesky’s not my favorite website but Xblock is proof that the app can go “this is a twitter screenshot and she doesn’t want to see those” at scale.
AI could identify, label, and hide all of these things.
On Bluesky it already does: "this is rude" or "this content promotes self harm". I wish both websites could suppress, snooze, or completely nuke "viral" or political content, be it left or right. In Bluesky's case it's not that I disagree with them. It's just that I've had this shit that I more or less agree with shoved down my throat from every angle for a decade, and I'm exhausted and don't want to see or engage with it anymore. People who have nothing else to say 24/7 every single day of their life and mine just need to go away, and I wish the AI on Bluesky would just let me filter people whose content is primarily political temper tantrums, because I don't have the time or will to mute or block them all, so I just don't use the product.
In fact for moderation purposes, Facebook already is doing that on their back end. (a few years ago you could see automatically generated alt text like “a woman holding a baby” though I don’t use meta at the present time and don’t know if it’s still doing this.)
AI is already analyzing the memes and purging ones with themes they don’t like on FB though . Unlike bluesky moderation, it’s not presented as something I can leverage or access to make my experience more enjoyable on Facebook.
But that’s not how they’re leveraging AI right now. They won’t let it prevent me from seeing memes posts and content with themes **i** don’t like.
Reddit already has this feature, although it might be underused. Set up a multireddit. Everything you want and nothing you don't. They are also not bottomless (well, more so if you stick to smaller subs), so if you don't put too many subs in your multi you can also hard-limit your feed time. They're great.
> We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
Great, so more filter bubbles? They don't learn, or more likely, don't care.
Filter bubbles are in. Bluesky and Mastodon show that people want to self-segregate. Even people remaining on Twitter are happy with the exodus.
Facebook is explicitly pro filter bubble. The community notes will come from your ingroup.
One irony is that diversity in online spaces leads to division. People no matter their politics and interests prefer people similar to them.
One way to look at this is by geography. Think of how a group of non English speaking Africans would talk together.
The other irony is that groups of people view the other groups as not similar to them and want to change them. It's always the outgroup whose filter bubble needs bursting. It's always the other that is brainwashed.
So the downsides of filter bubbles remain: more division, more separation between different people.
For me the major breaking change on social media is the forcing of non-linear timelines. They're required to increase engagement and promote content, but that's the crux of the issue.
I liked the way early twitter worked, I have my bubble being the people I follow and I can see glimpses of the outside from the trending topics and what comes in as retweets, news, etc. Being able to see a thread without being logged in. Seeing analysis of people from the firehose showing different ways to see conversations and the bubbles.
I miss the fact that old tweets died; things had to be relevant to humans to be rekindled, meaning someone had to retweet to keep a tweet alive, instead of an algorithm deciding what's important for me based on how outrageous it is.
Bubbles are unavoidable; bubbles decided by algorithms are the worst of all the alternatives.
Isn't there a difference between self-segregation and filter bubbles and how they're perceived?
If I go to a woodworking class, I won't be surprised to see people who like woodworking. If I go to the supermarket and everyone is talking about and liking woodworking, I start thinking that everyone likes woodworking.
A user explicitly signing up for specific topics is opting into a discussion. Filter bubbles are implicit.
Of course not. Enraged, uninformed people "engage", and that sells ads like hotcakes.
I don't know where people get this idea that Zuckerberg had any principles or gave a shit about anyone but himself. He's spineless, and his primary goal in life has always been to acquire as much wealth as possible by whatever means necessary.
> just because I stop my car to watch a train wreck doesn't mean I want to see more train wrecks
I guess FB will be the judge. They might even stop showing train wrecks to a person if they notice metrics dropping. Some of these metrics might even track the user’s well being, although most will focus on the well being of shareholders.
We lost the levers long time ago, replaced by opaque algorithms; are there any signs for this to change?
What I think I just read is that content moderation is complicated, error-prone, and expensive. So Meta is going to do a lot less of it. They'll let you self-moderate via a new community notes system, similar to what X does. I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
They also said that their existing moderation efforts were due to societal and political pressures. They aren't explicit about it, but it's clear that pressure does not exist anymore. This is another big win for Meta, because minimizing their investment in content moderation and simplifying their product will reduce operating expenses.
> it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
To me it sounds better for large actors who pay shills to influence public opinion, like Qatar. I disagree that this is better for either Facebook users, or society as a whole.
It does however certainly fit the Golden rule - he with the gold makes the rules.
I was under the impression that Community Notes were designed to be resistant to sybil attacks, but I could be wrong. Community Notes have been used at Twitter for a long time. Are there examples of state-influenced notes getting through the process?
Twitter's Community Notes were designed to be resistant to sybil attacks. Meta is calling their new product Community Notes, but it would be a mistake to assume the algorithms are the same under the hood. Hopefully Meta will be as transparent as Twitter has been, with a regular data dump and so on.
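For what it's worth, the design X published is roughly a "bridging" matrix factorization over the note-rating matrix: a note only counts as helpful once the part of its score explained by the rater's viewpoint has been factored out, which is what blunts simple brigading and sybil attacks. A toy sketch of that idea in Python follows; this is not X's actual code, and every size, hyperparameter, and threshold below is made up for illustration:

    # Toy sketch of bridging-based note scoring: fit each rating as
    #   mu + rater_bias + note_bias + rater_factor * note_factor
    # and rank notes by note_bias, i.e. the "helpful" signal left over once
    # viewpoint-correlated agreement has been soaked up by the factor term.
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_notes = 300, 40
    viewpoint = rng.choice([-1.0, 1.0], n_users)        # latent "side" of each rater
    partisan = rng.choice([-1.0, 0.0, 1.0], n_notes)    # note flatters one side...
    helpful = rng.random(n_notes) < 0.3                 # ...or is genuinely helpful

    # Each user rates a random subset of notes: 1 = helpful, 0 = not helpful.
    mask = rng.random((n_users, n_notes)) < 0.25
    p = 0.15 + 0.6 * helpful + 0.35 * (viewpoint[:, None] * partisan[None, :] > 0)
    ratings = np.where(mask, (rng.random((n_users, n_notes)) < p).astype(float), np.nan)

    mu = 0.0
    user_b, note_b = np.zeros(n_users), np.zeros(n_notes)
    user_f = rng.normal(0, 0.1, n_users)
    note_f = rng.normal(0, 0.1, n_notes)
    lam, lr = 0.03, 0.05
    obs = np.argwhere(~np.isnan(ratings))

    for _ in range(50):                                  # plain SGD; a toy, not tuned
        for u, n in obs:
            err = ratings[u, n] - (mu + user_b[u] + note_b[n] + user_f[u] * note_f[n])
            mu += lr * err
            user_b[u] += lr * (err - lam * user_b[u])
            note_b[n] += lr * (err - lam * note_b[n])
            uf, nf = user_f[u], note_f[n]
            user_f[u] += lr * (err * nf - lam * uf)
            note_f[n] += lr * (err * uf - lam * nf)

    # Purely partisan notes get explained by the factor term; genuinely helpful
    # ones keep a high intercept, so thresholding note_b approximates "shown".
    print("shown:", np.flatnonzero(note_b > 0.3))
    print("truly helpful:", np.flatnonzero(helpful))

The intuition is that a brigade of like-minded raters mostly inflates the viewpoint term rather than the note's intercept, so their votes alone don't push a note over the line.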
Sure, I'll trust the leadership of this huge commercial company, famous for lots of controversies regarding people's privacy. I'll trust them to decide for me what is true and what is not.
Qatar is not well known for paying people to bot on social media. They play the RT game by using their news network Al Jazeera to do that instead and give their propaganda a professional air. The first country to do this was India[1]. Israel has special units in the army to do this[2]. At this point so many countries pay people to do what you say, but Qatar doesn't, from what I can tell. If you have proof of it, I'm all ears.
I was cautiously optimistic when this was announced that India and Saudi Arabia (among others, incl. Qatar) might see some pushback on how they clamp down on free speech and journalism on social media. But since Zuck mentioned Europe, I fear those countries will continue as they did before.
> it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
Or maybe such people have far better things to do than fact check concern trolls and paid propagandists.
There do seem to be a lot of people who enjoy fact checking concern trolls and paid propagandists.
I'm not sure if they do more good than harm. Often the entire point seems to be to get those specific people spun up, realizing that the troll is not constrained to admit error no matter how airtight the refutation. It just makes them look as frothing as trolls claim they are.
And yet, it's also unclear if any other course of action would help. Despite decades of pleading, the trolls never starve no matter how little they're fed.
> Often the entire point seems to be to get those specific people spun up, realizing that the troll is not constrained to admit error no matter how airtight the refutation.
Your point is exactly why I can't take anyone seriously who claims that randoms "debating" will cause the best ideas to rise to the top.
I can't count how many times I've seen influencer propagandists engage in an online "debate", get walked by the hand through how their entire point is wrong, only to spew the exact same thing hours later at the top of every feed. And remember, these are often the people with some of the largest platforms claiming they're being censored ... to millions of people, lol.
It's too easy to manipulate what rises to the top. For debate to be anything close to effective, all parties involved have to actually be interested in coming closer to a truth, and the algorithms have no interest in deranking sophists and propagandists.
> And yet, it's also unclear if any other course of action would help. Despite decades of pleading, the trolls never starve no matter how little they're fed.
Downvotes that hide posts below a certain threshold have always seemed like the best approach to me. Of course it also allows groups to silence views.
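A minimal sketch of that thresholding idea, with purely illustrative field names and cutoff:

    # Hide anything whose net score falls below a (made-up) cutoff.
    def visible(posts, cutoff=-5):
        return [p for p in posts if p["up"] - p["down"] > cutoff]

    posts = [
        {"id": 1, "up": 12, "down": 3},   # score +9, stays visible
        {"id": 2, "up": 1, "down": 9},    # score -8, hidden below the cutoff
    ]
    print([p["id"] for p in visible(posts)])   # [1]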
> I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
Strong disagree. This is a very naive understanding of the situation. "Fact-checking" by users is just more of the kind of shouting back and forth that these social networks are already full of. That's why third-party fact checks are important.
I have a complicated history with this viewpoint. I remember back when Wikipedia was launched in 2001, I thought- there is no way this will work... it will just end up as a cesspool. Boy was I wrong. I think I was wrong because Wikipedia has a very well defined and enforced moderation model, for example: a focus on no original research and neutral point of view.
How can this be replicated with topics that are by definition controversial, and happening in real time? I don't know. But I don't think Meta/X have any sort of vested interest in seeing sober, fact-based conversations. In fact, their incentives work entirely in the opposite direction: the angrier and more divisive the content, the more traffic and engagement it drives [1]. Whereas, with Wikipedia, I would argue the opposite is true: Wikipedia would never have gained the dominance it has if it were full of emotionally-charged content with dubious or no sourcing.
So I guess my conclusion from this is that I doubt any community-sourced "fact checking" efforts in-sourced from the social media platforms themselves will be successful, because the incentives are misaligned for the platform. Why invest any effort into something that will drive down engagement on your platform?
> ... we found that posts about the political out-group were shared or retweeted about twice as often as posts about the in-group. Each individual term referring to the political out-group increased the odds of a social media post being shared by 67%. Out-group language consistently emerged as the strongest predictor of shares and retweets: the average effect size of out-group language was about 4.8 times as strong as that of negative affect language and about 6.7 times as strong as that of moral-emotional language—both established predictors of social media engagement. ...
True, but that doesn't change the fact that it's a win for Meta:
1) Shouting matches create more ad impressions, as people interact more with the platform. The shouting matches also get more attention from other viewers than any calm factual statement.
2) Less legal responsibility / costs / overhead
3) Less potential flak from being officially involved in fact-checking in a way that displeases the current political group in power
Users lose, but are people who still use FB today going to use FB less because the official fact checkers are gone? Almost certainly not in any significant numbers
But "fact-checking" by people in authority is OK? Isn't that like, authoritarian?
"Fact-checking" completely removed the ability for debate and is therefore antithetical to a functional democracy. Pushing back against authority, because they are often dead wrong, is foundational to a free society. It's hard to imagine anything more authoritarian than "No I don't have to debate because I'm a fact-checker and by that measure alone you're wrong and I'm right". Very Orwellian indeed!
Additionally, the number of times that I've observed "fact-checkers" lying thru their teeth for obvious political reasons is absurd.
They are given the title of fact checker, ending debate, this is the authoritarian part. It does not matter who employs them. If fact checkers were angels we wouldn’t have this problem. However fact checkers are subject to human nature just like the rest of us, to be biased, wrong, etc.. Do you think these fact checkers don’t have their own opinions? Do you think they don’t vote? Don’t lie?
You are assuming the people on social media are a representative cross-section of society, but you will quickly notice that this is not the case; just look at echo chambers.
If I try to debate the same fact on a far-right post and on a far-left post, each discussion will predictably arrive at its own bubble's foregone conclusion - let's not lie to ourselves.
So for your claim to have any validity, you would first need a fair, unbiased group of people on every post (and there are many more issues beyond that; just look at the loud people versus the ones who no longer bother to comment because discussion seems impossible). That is simply not the case in practice, which is why fact-checking is indeed helpful.
Without some sort of controls in place, fact-checking becomes useless because it's subject to being gamed by those with the most time on their hands and/or malicious tools, e.g. bots and sock puppets.
You should look into the implementation, at least the one that X has published. It's not just users shouting back and forth at each other. It's actually a pretty impressive system
It's more naive to think a fact-checking unit susceptible to govt pressure is likely to be better.
There will always be govt pressure in one form or another to censor content it doesn't like. And we've obviously seen how this works with the Dems for the last 4 years.
> They aren't explicit about it, but it's clear that pressure does not exist anymore
It's clear that the pressure comes now from the other side of the spectrum. Zuck already put Trumpists at various key positions.
> I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
It's a good point. They're also going to push more political content, which should increase engagement (eventually frustrating users and advertisers?).
Either way, it's pretty clear that the company works with the power in place, which is extremely concerning (whether you're left or right leaning, and even more if you're not American).
The pressure has just shifted from being applied by the left to the right. There is still censorship on Twitter, it is just the people Elon doesn't like who are getting censored. The same will happen on Facebook. Zuckerberg has been cozying up to Trump for a reason.
What is this based on? I see so many people shouting things like this, but there doesn't seem to be any basis for these arguments. They seem a bit useless and empty.
How would fact checkers access the 90% of private content? And should they? I don't think so, even if the respective private content is questionable.
The EU goes its own way with trusted flaggers, which is more or less the least sensible option. It won't take long until bounds are overstepped and legal content gets flagged. Perhaps it already happened. This is not a solution to even an ill-defined problem.
Good. Private communication is private, even if it's a group. The nice thing about the crazy is that they're incapable of keeping quiet: they will inevitably out themselves.
In the meantime, maybe now I can discuss private matters of my diagnosis without catching random warnings, bans, or worse.
What kind of diagnosis spawns so many fact checks that it's a problem? I'd think any discussion about medical issues would benefit greatly from the calling out of misinformation.
> They also said that their existing moderation efforts were due to societal and political pressures. They aren't explicit about it, but it's clear that pressure does not exist anymore.
I didn't think it was any secret that Meta largely complies with US gov't instructions on what to suppress. It's called jawboning[1]
Yes, this just reads like "oh, thank God for that, that department was an expensive hassle to run".
I don't know if I'd call it a certain win for Meta long term, but it might well be if they play it right. Presumably they're banking on things being fairly siloed anyway, so political tirades in one bubble won't push users in another bubble off the platform. If they have good ways for people to ignore others, maybe they can have their cake and eat it too, unlike Twitter.
Like Twitter, the network effect will retain people, and unlike Twitter, Facebook is a much deeper, more integrated service, such that people can't just jump across to a work-alike.
A CEO who can keep his mouth shut is also a pretty big plus for them. They skated away from being involved in a genocide without too many issues, so the same ethical revulsion people have against Musk seems to be much less focused on Zuckerberg.
As a Harris supporter, I actually agree; I think it was way too heavy-handed and hurt Harris more than it helped. I'm not sure anymore what the goal of fact checking is (I've always felt it was somewhat dubious if not done extremely well).
Agreed, I always felt like most of the fact checking that has become vogue in the past ten years is designed to comfort the people who already agree, not inform people who want genuine insight.
If you don’t have fact checkers, a debate loses all its value. Debates must be grounded in fact to have any value at all. Otherwise a “debate” is just a series of campaign stump speeches.
Yeah, the problem is that if one side tells 100 lies, and the other tells 1 lie, you can't correct all 100 lies, but if you only correct the most egregious lies then statistically you'll only be correcting the one side, and if you correct 1 lie from each side, then you make it seem like both sides lie equally. The Gish Gallop wins again.
We would have to fact check if those numbers are correct.
Oh wait, fact checkers don't work; better just inform yourself and make up your own mind, and don't just believe some supposedly authoritative figures.
Especially for live fact checking, the greater the number of lies and the more obvious/blatant those lies are, the more likely someone is to get fact checked.
This is the problem, you are clearly biased. She brought up the Charlottesville issue that has been widely debunked; it is blatantly false and well-known to be false. She was not fact-checked. That's the issue.
> “where there may be severe deformities. There may be a fetus that’s non viable” he said. “If a mother is in labor, I can tell you exactly what would happen.”
Your dying grandma may go DNR, but that doesn’t mean murdering grandmas is broadly legal.
My wife does charity photography for https://www.nowilaymedowntosleep.org/. You see lots of this sort of withdrawal of care. Calling it an abortion is cruel and dumb.
> content moderation is complicated, error-prone, and expensive
I think the fact-checking part is pretty straightforward. What's outrageous is that the content moderators judge content subjectively, labeling perfectly legitimate discussions as misinformation, hate speech, etc. That's where the censorship starts.
How do you avoid judging actual human discussions subjectively? I remember being a forum moderator and struggling with exactly the same issues. No matter what guidelines we set, on one side there'd be essentially legitimate discussions that superficially went way over the line, and on the other you'd have neo-Nazis acting in ways that weren't technically bad, but were clearly leading there.
Facebook moderators have an even harder job than that because the inherent scale of the platform prevents the kinds of personal insights and contextual understanding I had.
Okay, but you're saying this on a platform where the moderator (dang) follows intentionally vague and subjective guidelines, presumably because you like the environment more here than some unmoderated howling void elsewhere on the Internet.
Good point, and thanks. I have to admit I don't have a good answer to this. Maybe what dang needs to assess can be better defined or qualified? Like we can't define porn but we know it when we see it? On the other hand, assessing whether something is offensive or is hate speech is so subjective that people simply weaponize those labels, intentionally or unintentionally.
The quality of the platform lives or dies on the quality of these decisions. If dang's choices are too bad, this site will die.
The situation is somewhat different between a niche community and a borderline monopoly. But it's also true that facebook's success depends on navigating it well. At the end of the day we can choose to use it or not.
To the extent that people feel forced to use a platform, that's a reason to bias further against suppressing free expression, even if the result is a somewhat worse platform.
You're still making subjective judgements wherever you draw the line. I don't know how a platform could avoid making subjective judgements at all and still produce an environment people want to be in.
I thought there would be community notes. And how would third-party fact-checking work? The Stanford doctor was banned from X because he posted peer-reviewed papers that challenged the effectiveness of masks (or vaccines)? I certainly don't want to see that level of hysteria.
> The Stanford doctor was banned from X because he posted peer-reviewed papers that challenged the effectiveness of masks (or vaccines)? I certainly don't want to see that level of hysteria.
Not familiar with that specific case, though generally I'm not a fan of bans. Fact checks are great though. There have been peer reviewed papers about midi-chlorians too (https://www.irishnews.com/magazine/science/2017/07/24/news/a...), but I'd sure hope that if someone brought it up in a discussion they'd be fact checked.
Community Notes is the best thing about Musk's Dumpster fire.
The problem with CN right now, though, is that Musk appears to block it on most of his posts, and/or right-wing moderators downvote the notes so they either never appear or later disappear.
I am not so sure that Musk or right-wing moderators are directly to blame for the lack of published community notes.
My guess: in recent months, many people (e.g., me) who are motivated to counter fake news have left Twitter for other platforms. Thus, proposed CNs are seen and upvoted by fewer people, resulting in fewer of them being shown to the public.
Also, I ask myself: why should I spend time verifying or writing CNs when it does not matter - the emperor knows that he is not wearing any clothes, and he does not care.
> the emperor knows that he is not wearing any clothes, and he does not care.
Indeed the ending of the famous story is:
> "But the Emperor has nothing at all on!" said a little child.
> "Listen to the voice of innocence!" exclaimed his father; and what the child had said was whispered from one to another.
> "But he has nothing at all on!" at last cried out all the people. The Emperor was vexed, for he knew that the people were right; but he thought the procession must go on now! And the lords of the bedchamber took greater pains than ever, to appear holding up a train, although, in reality, there was no train to hold.
Community notes launched at the start of 2021. It predates the buyout by almost two years.
If what they said about their design is to be believed, political downvoting shouldn't heavily impact them. I wish it was easier to see pending notes on a post though.
You can see them, it's just that finding the button to do so on a post is difficult. I think you need to navigate to the post from the notes section of the website.
Right, I think that's the parent's point: CN is a great design, dragged down by the fact that Elon heavily puts his thumb on the scale to make sure posts he likes spread far and wide and posts he dislikes get buried, irrespective of their truth content.
To be fair, a lot (not all) of notes on Musk's posts are spurious, including the NNN's. It's clearly being misused there, but in general they seem to work very well indeed.
Perhaps, given the situation with Twitter, now "X", more web and mobile app users will come to understand that despite its size, Facebook is someone's personal website. Like "X", one person has control. Zuckerberg controls over 51% of the company's voting shares. Meta is not a news organization. It has no responsibility to uphold journalistic standards. It does not produce news; in fact, it produces no content at all. It is a leech, a parasite, an unnecessary intermediary that is wholly reliant on news content produced by someone else being requested through its servers.
And I don't see why a publisher of news, even one that merely re-publishes, should not be held to some responsibilities, like e.g. abstaining from nefarious manipulation of the content people see on its platform.
As if actual journalists care to uphold "journalistic standards."
X/FB is far more trustworthy than the legacy news media, which happily censors salient stories at the request of the government and pushes very specific agendas that are totally out of touch with the average voter.
I can't even count how many times I've seen literal video evidence for a story on X that the news media twists or refuses to cover.
I can't even count how many times I've seen literal video evidence for a story on X that was from totally unrelated incident but claimed to be proof of a completely made up thing that was happening right now.
Leaving Facebook, Instagram and Twitter a few years ago (and never joining TikTok) has been the number one top decision for my mental health. I wish everyone and society as a whole to make the same decision.
> When we launched our independent fact checking program in 2016, we were very clear that we didn’t want to be the arbiters of truth. We made what we thought was the best and most reasonable choice at the time, which was to hand that responsibility over to independent fact checking organizations... That’s not the way things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how.
Alex Stamos pushed this initiative pretty hard outside of Facebook in 2019+, seemingly because he wasn't able to do it inside Facebook back in 2016-2018. But I haven't dug into his motivations.
Then the government sics the FCC or European Commission on you, who make trumped up charges that they push through a kangaroo court to fine you billions.
There's no fighting a government, and all governments are corrupt if they see an opportunity to rent-seek from you.
I don't use Twitter so I hadn't seen it in action, but the interview convinced me that this is a good approach. I think this approach makes sense for Facebook as well.
Thanks for sharing this. So many people commenting on this topic have no idea how community notes even works. Today's New York Times article also failed to explain it, while just giving a general negative tone to the idea of switching to this model.
The median news article has something wrong in it.
Often I live through events and read about it in the daily paper and then read about it in The Economist and read a few more accounts of it. 5-25 years later a good well researched history of the event comes out and it is entirely different from what I remember reading at the time. Some of that is my memory but a lot of it is that the first draft of history is wrong.
When someone signed their name "Dan Cooper" and hijacked a plane, a newspaper garbled that to "D. B. Cooper" and the FBI thought it sounded cool, so they picked it up; journalists garble things like that more often than not.
shows (but doesn't tell) that a novelized account of events can be more true than a conventional newspaper account, and similar criticisms come up throughout the work of Joan Didion
If anything really makes me angry about news and how people consume it, it is this. In the age of clickbait, everyone who works for The New York Times has one if not two eyes on their stats at all times. Those stats show that readers have a lot more interest in people like David Brooks and Ezra Klein blowing it out their ass, and couldn't care less about difficult journalism that takes integrity, elbow grease, and occasionally can put you in danger, done by younger people who are paid a lot less, if they are paid at all. The conservative press was slow on the draw when it came to 'Cancel Culture'; it was a big issue with the NYT editorial page because those sorts of people get paid $20k to give a college commencement address and they'd hate to have the gravy train stop.
Seen that way the problem with 'fake news' is not that it is 'fake' but that it is 'news'.
> Seen that way the problem with 'fake news' is not that it is 'fake' but that it is 'news'.
Salient point. As a writer, the essential condition for any story is a conflict, because it's the source of tension or dissonance that people engage with for resolution. The issue with the "fake news" wasn't the facts; it's that the conflict that brought them together as a story was manufactured cheaply from ideology. This had a compounding effect where the absurdity of the resulting conflict with reality drove further outrage from the other "side."
It's a pan-partisan problem. Fine observation anyway, I'm provoked. To get better news, the conflict it expresses needs to be more organic. IMO using community notes is way more organic than the governance model FB and formerly Twitter used.
I really like Community Notes, and hate the rest of what Twitter has become.
But... Community Notes is subject to "tampering." Elon either removes the CNs himself from his posts, or his brigade downvotes them to infinity so they don't appear on all the misinfo he posts.
Do we have any evidence that Musk has removed a CN on his own post? I've personally seen evidence to the contrary, and he makes a point of highlighting that even he gets a CN every now and then.
As the root comment noted, one of the great things about community notes on X are that the algorithm and the data it's operating on are public. If Musk were removing notes that would be trivial to prove. The fact that such claims of tampering are never accompanied by said proof should tell you all you need to know.
How would it be trivial? Can you describe in a more specific way?
The data I can find says it was last updated 9:02 PM Jan. 5, 2025 (presumably America/Chicago from my browser). That’s a >2 day window as of writing this comment.
Not throwing any accusation, just trying to understand the technicals.
If there was any manipulation of community notes in the last 2 days, how would we know?
If there’s manipulation of this data before it is published, such as ratings or notes never hitting these data files, how would we know?
Maybe, an individual could check to see their own contributions are included in updates to the published data. Is that sufficiently common such that it would get caught?
> If there was any manipulation of community notes in the last 2 days, how would we know?
You can't know until the data is published. 2 days isn't that long though. Just wait a couple more days for the next data dump, then run the algorithm and compare the results to what the X UI was showing at that time.
> If there’s manipulation of this data before it is published, such as ratings or notes never hitting these data files, how would we know?
That would be a bit more sneaky than just outright removing notes. As you noted, you'd need a user whose ratings or notes were omitted from the dump to notice and come forward. Or perhaps with careful analysis you could prove that the manipulated data could not have resulted in the allegedly removed note being shown and then later not shown, indicating something fishy happened.
Theoretically if X wanted to improve on this system, they could go even further and implement something like certificate transparency (append-only log verified by a publicly distributed merkle tree), or create an independent third party organization that users interact with to submit and rate notes, rather than that happening through X's UI. Given the threat model though, I feel like the UX and complexity trade-offs of that wouldn't be worth it. Open sourcing the data and algorithm as X has is already far more transparency than we get from any competing social media company.
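To make the certificate-transparency comparison concrete, here's a rough, generic sketch (not anything X actually runs) of an append-only log verified by a Merkle tree: contributors can check that their own entry made it into the published log, and silently dropping or rewriting an entry changes the root.

    import hashlib

    def h(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def merkle_root(leaves):
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate the last node on odd levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def inclusion_proof(leaves, index):
        """Sibling hashes needed to recompute the root from one leaf."""
        level = [h(leaf) for leaf in leaves]
        proof = []
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            sib = index ^ 1
            proof.append((level[sib], sib < index))   # (hash, sibling-is-on-the-left)
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return proof

    def verify(leaf, proof, root):
        node = h(leaf)
        for sib, is_left in proof:
            node = h(sib + node) if is_left else h(node + sib)
        return node == root

    # Hypothetical log entries; in this scheme only the root would need to be
    # published, and anyone holding their own entry plus a proof could verify inclusion.
    events = [b"note:123:proposed", b"rating:123:helpful:user9", b"note:123:shown"]
    root = merkle_root(events)
    print(verify(events[1], inclusion_proof(events, 1), root))   # True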
I don't think "CEO is able to remove community notes" is a strong mark against the community note algorithm. No system is immune to being turned off...
> Elon either removes the CNs himself from his posts, or his brigade downvotes them to infinity so they don't appear on all the misinfo he posts.
I don't know if this is the case, but X is Elon's property, so he can shape it as he pleases. Assuming that X (or Facebook) is unbiased and working for your benefit is simply foolish, unless you are Musk (or Zuckerberg).
From the sources I could find quickly to refresh my memory:
> Over the weekend, Musk shared some of Roth’s past tweets and what appears to be an excerpt from his PhD thesis about Grindr, the LGBTQ social media app. Roth is quoted as saying that the app is possibly too “lewd or hook-up-oriented” for people under age 18 who are already using it, but that providers should “focus on creating safe strategies … for queer young adults” that aren’t just about hook-ups. Musk commented, “Looks like Yoel is arguing in favor of children being able to use adult services in his PhD thesis.” On Monday, the tweet had more than 60,000 likes and received 15,000 retweets.
The thesis demonstrably exists (https://uploads-ssl.webflow.com/60981d118b006454de9222b2/61d...), and it does have a roughly matching quote at the bottom of PDF page 257 (labelled page 248). The idea of businesses "crafting safe strategies" to "safely connect queer young adults" (the context is very clear that Roth refers to people under the age of 18) is very reasonably interpreted as Musk did. There are very obvious reasons why existing services advertise themselves as 18+ and attempt to enforce that, and it should be clear to everyone that any such service intended specifically for minors could not plausibly be rendered safe.
The idea that this observation constitutes an accusation of pedophilia is 100% media spin, and does not reflect Musk's words.
Ideas like Roth's are not rare on the American (or Canadian) left, especially where they intersect with LGBT etc. rights - which is how things like https://www.cbc.ca/cbcdocspov/episodes/drag-kids can come to exist and be vigorously defended. This empowers quite a bit of culture warring from the American right.
The post I replied to was accusing Musk of posting "misinfo". I responded by asking for evidence of Musk saying things that are provably untrue, because that is the standard of evidence that would be required to support such an accusation. This is not a criminal proceeding.
I'm certain it will make parts of the user experience worse, but at least for the Threads app, this seems at least a little necessary - if you're aiming to be the "new" Twitter, or to fulfil whatever social need Twitter was fulfilling, you need to break free of the shackles of IG/Meta moderation, which is very unforgiving and brutal in very subtle ways that aren't always easy to figure out. But basically, I find a platform like Threads/Twitter is probably unusable for a lot of people unless you can say "hey, you're an asshole" every now and then without Meta slapping you on the wrist or suppressing your content.
One of the only visible actions Meta has taken on my account was once when a cousin commented on a musical opinion I had posted to facebook, I jokingly replied "I'll fight you" and I caught an instant 2 week posting ban and a flag on my account for "violence." Couldn't even really appeal it, or the hoops were so ridiculous that I didn't try. The hilarious thing is these bans will still let you consume the sites' content (gotta get those clicks), you just are unable to interact with it. This kind of moderation is pointless as users will always get around it anyway - leading to stuff like "unalive" to replace killing/suicide references, or "acoustic" to refer to an autistic person, etc. Just silliness, as you'll always be able to find a way to creatively convey your point such that auto-moderators don't catch it.
I once posted a picture of an email, in French, stating that my train was delayed, so the word 'retard' appeared in it. Instagram banned me from monetization or partnerships or something on my account, because the French word for delay is offensive in English.
Right. I made a reference to educational development being retarded due to COVID restrictions and the very people you'd expect to be offended were of course offended.
I think it's important to remember the real meaning of words. If you know language better, you can understand a lot more information, and you can express yourself better. Knowing the meaning and origin of words gives you great insights into things.
Just because some childish people are misusing the word for some time, we shouldn't just ditch it like that. Words go back a long time.
We should just remove the negative use of it. And we do that by growing up, not by banning words.
My own experience is the exact opposite. Out of all the times in my life I can recall ever having heard the word "retarded" used, I cannot think of any reason to suspect that any of them were meant as anything other than a synonym for "idiotic".
Which, of course, also referred to clinical mental disability at some point in history. As did "moronic", "imbecilic" and others. But nowadays they're really all just strong forms of "stupid".
Even in contexts where generic insults directed at people are not tolerated, it should be acceptable to recognize stupid ideas as such.
I think you've misunderstood, then. The GP's comment was using it in the technical sense (slowed/delayed, not the common "that's so dumb" form you've observed).
>Right. I made a reference to educational development being retarded due to COVID restrictions and the very people you'd expect to be offended were of course offended.
I misread that, and interpreted "retarded" as being a subjective judgment applied to the restrictions.
That said, the reading "[the process of] educational development has a mental disability" is utterly incoherent, so I still see no reasonable justification for taking offense.
Sure. I have a 50 year old friend who takes care of her retarded brother. When describing him and what she does, she simply calls him retarded, because he is, and people know what that word means.
One of the kindest women I know, but she doesn't beat around the bush or have time for euphemisms.
Idiot, retard, mentally handicapped, etc. It is all doomed to the euphemism treadmill, because such words can be and are used as insults. The insulting part isn't the word used, but the comparison drawn. Give it 10 years or so and whatever the current word is will also be out of favor as a pejorative.
That's the thing. They aren't taking offense to mean-spiritedness directed at the person being referred to that way, except in cases where that person actually does have such an intellectual disability. And such language is normally directed at people of ordinary intelligence, to call them out for failing to think things through when they're perfectly capable of it.
There are, and should be, contexts where insulting people is socially acceptable and where such insults should not be censored. And no matter what words you use (https://en.wiktionary.org/wiki/euphemism_treadmill), it's fundamentally impossible to get rid of the idea that a lack of (demonstrated) intelligence is inherently negative.
(It's noteworthy to me that the same activists don't seem to be able to identify any terms denoting lack of physical strength that are inherently offensive - except insofar as they invoke gender stereotypes. Why should it be any less objectionable to call someone a "weakling", for example?)
The criticism of the target’s intelligence or competence isn’t the mean-spiritedness I’m referring to. I’m referring to the deliberate and inherent mean-spiritedness towards people with intellectual disabilities that the slur is explicitly invoking.
>I’m referring to the deliberate and inherent mean-spiritedness towards people with intellectual disabilities that the slur is explicitly invoking.
I disagree that any such thing is invoked. It seems that you believe that when the word "retard" is used in these contexts, that it's meant to describe a person with an intellectual disability. I think it's merely intended to describe someone of low intelligence, which neither necessarily qualifies as nor is necessarily caused by a disability.
Nor do I agree that it's mean-spirited in a way that, say, the word "stupid" isn't. It's just more intense.
I don't think insults should be socially accepted; they shouldn't, it's not a nice thing. Rudeness, impoliteness, offense, why would we socially accept them?
Freedom and censorship are another thing. You have the freedom to be rude and impolite, and it shouldn't be censored. But yeah, you shouldn't expect people to like you or listen to you.
>Rudeness, impoliteness, offense, why would we socially accept them?
Because multiple kinds of social space exist, and some people enjoy being able to interact with each other that way and are happy to accept being the butt of the joke their fair share of the time.
Ah yeah, you are right, there are people that have been exposed to it so much that they think it is normal, and a necessary part of life.
Well you know, things can change. In the past it was a family outing to go watch a beheading. That was normal for them and good entertainment. And they would have used the same arguments as you to somebody critical about it.
And you're right, it is a valid choice, and if you really enjoy being humiliated, by all means, you have the freedom to.
I do think eventually when the rest of the people have grown up and moved on to much more intelligent endeavors, that you might start to think differently too. But maybe not, everyone has their own interests.
They're not suggesting that they don't take issue, and so they don't need to take offense seriously.
They're suggesting that the people who conceivably might take issue generally don't and are instead being patronized by and condescended to by privileged, unaffiliated outsiders who assume -- without consent -- to speak on their behalf. And they don't take those people seriously.
It's totally reasonable to disagree with that view, but it's the not the same view your reply tries to engage with.
The thing is, you wouldn't use the slur except to invoke the mean-spiritedness that the people who find the slur offensive associate with the word. If you're using it because you think like-minded people will find it funny that you're using a term other people find offensive, that's still precisely the same mean-spiritedness.
No, you’re expressing a different, more lucid point of view (“the people who conceivably might take issue generally don’t”), which can be engaged with. For example, I would argue that it’s reasonable to take offense on behalf of people who can’t be part of the conversation at hand. (Otherwise it would be fine for whites to spew racist slurs in a group of only white people. If we disagree on that, we’re having the wrong conversation.) I would also point out that taking offense on behalf of others is a time-honored practice (“nobody says that about my little brother and gets away with it!”) But the GP (GGP?) did not say “the people who conceivably might take issue generally don’t.” They didn’t say “no one has standing to be offended by this term.” They just said “it’s not offensive” about a term that is offensive enough that we’re having an entire argument about it. That’s schoolyard-level discourse.
An alternative is to use “on the spectrum”. For example, your s.o. or someone else you’re arguing with is getting on your nerves so you say: “Hey! are you on the spectrum today or what?”
Offense is all about context. It is objectively quite offensive when used as a term for a person. (“Objectively” works here because a word being offensive is determined by how people view it. The views are subjective but the prevalence of those view is not.)
I've only ever seen it used to mean "delay" in occasional technical contexts, e.g. "fire retardant material", in practice it seems to be mostly a noun that means "stupid person".
There's an interesting etymology of "retarded". Also "idiot", "imbecile", "moron", etc.
These were clinical classifications, initially used in the early days of psychology and sometimes overlapping discredited ideas like eugenics. But these were diagnoses -- you could be determined to be an idiot, which was worse than being an imbecile, which was worse than being a moron -- by a respected doctor.
Of course, schoolyard kids got a hold of the terms and used them to disparage their (probably cognitively healthy) peers. And so with "retarded" and "disabled" etc.
But "retarded" just means "slowed or delayed". Developmentally speaking, especially when surrounded by other kids in your same age group, that's a noticeably difficult thing to be.
It does not mean (and never meant) that you are certain to reach full cognitive ability eventually. Flights that are delayed are sometimes also cancelled.
In Chinese there's a common word that sounds like a particularly offensive racial slur to the untrained American ear. I've seen Chinese speakers called out for this in person, but everything got straightened out pretty quickly. This was pre-social media, but it's not hard to imagine a social media uproar over it these days.
It does stick out of Mandarin speech to the US English speaker, but it's typically pretty obvious from context that it's not related to the slur. It's never been worth more than a giggle when growing up, I'm spending like 100x more time on thinking about it right now than I have cumulatively in my life, despite having grown up around Chinese people.
To me it feels like society is finally moving on from this insane overemphasis on finding things to be offended by and identity-culture BS. I'm really hoping it peaked during the lockdowns, when people really had nothing better to do.
This is different from the fact-checking; it has to do with the automated moderation algorithms (which generally suck), which are continuing (because advertisers want them).
Yes, it is, but the salient point that I felt was clear in that post was to demonstrate that these systems don't work well, and that such systems have such a poor understanding of context and circumventions as to be rendered ineffective if not totally counterproductive. I'm fully aware such mechanisms aren't going anywhere, right now, but at least Meta is acknowledging the fact that at present, they aren't really providing the user experience they intended.
That aside, I find it offensive a little bit that Meta has taken it upon themselves to decide what the "right" discourse is that their users want to see, and would rather they create a mechanism to let users decide for themselves - which this does at least outwardly appear to be a move towards. They've also in the last few years toned down or removed some of the auto-modding in private groups, and shifted that responsibility towards its community members and moderators - which was also a similarly good step.
That's very different, and a case where the closed community should bear that responsibility.
But as far as the global FB community goes -- which doesn't really exist (there is no "community", just users) -- or, more precisely, what ends up in people's feeds, the fact checking was a good thing, because a lot of people consume news that way; so this is a big step in the wrong direction.
Some time around 2011, the Apple App store was warning me about rude words in the app description; unfortunately it was warning me about the German word "Knopf" which isn't rude. I think what happened is the English rude word list was translated into German, rather than just replaced with local rude words.
Yes, I know. My words seem to be easily misunderstood. The claim is that:
1) "knob" *in English* can mean "penis"
2) This is why "knob" was on the English rude words list
3) It looks like a rude word list containing "knob" was translated without context, so that the word "knob" became "Knopf" even though "Knopf" isn't rude.
Had it been the other way around, it would be as if „Schlange“ meant both <<en:queue>> and <<en:penis>>, and if "queue" were on an English list of swear words, most people would be very confused.
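To make that failure mode concrete, a tiny illustrative sketch (the word lists and locale entries are invented examples): translating a blocklist term by term drags harmless words like "Knopf" along, while a separately curated list per locale does not.

    EN_RUDE = {"knob"}                    # rude in (British) English slang
    DE_TRANSLATED = {"knopf"}             # literal translation of "knob"; harmless in German
    RUDE_BY_LOCALE = {"en": EN_RUDE, "de": {"depp"}}   # hypothetical curated per-locale lists

    def flagged(text, wordlist):
        words = {w.strip(".,!?").lower() for w in text.split()}
        return bool(words & wordlist)

    desc = "Der rote Knopf startet das Spiel"           # "the red button starts the game"
    print(flagged(desc, DE_TRANSLATED))                 # True: the 2011-style false positive
    print(flagged(desc, RUDE_BY_LOCALE["de"]))          # False: per-locale list is fine with it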
My friend and I share a joke Instagram account where we'll randomly make stupid posts just to entertain ourselves. One time he posted a picture of himself holding a chair, standing on one foot with a goofy smile on his face, captioned "I'll hit you with this chair! Just kidding!"
It got the account suspended until we deleted the post, claiming the post, and I quote, "could encourage physical violence and lead to a risk of physical harm, or a direct threat to public safety."
I sent an appeal, saying it was a clear joke that isn't directed at anyone, but after supposed "review" they determined the post is indeed against ToS.
A cynic of the large social media platforms might suspect they were deliberately underinvesting in their moderation workforce... so they could then justify doing away with the cost as soon as politically convenient.
At its base, moderation = time = money
Better quality moderation? More money.
The platforms would rather not carry that cost and therefore be more profitable. Convenient how that worked out.
It's been trivially demonstrable that the use of "forbidden" terms or swearing can affect your ranking on their algorithms, whether it be displaying your comment, or your post on someone's feed, etc., at least on Meta's platforms. So no matter how "cringe" you may find it, it's done out of some degree of necessity and precisely because of these dumb moderation mechanisms, not out of any misguided, altruistic self censorship.
>It's been trivially demonstrable that the use of "forbidden" terms or swearing can affect your ranking on their algorithms
Is it though? A lot of this self censorship seems to be a cargo cult thing where people just copy what they've seen other people do and assume it's necessary when it's really not.
Yes. There are countless stories from Youtube creators who had their videos taken down or demonetized or had to edit and reupload them, because the AI detected that words such as "suicide" were spoken. And it's common knowledge that requests for review are routinely denied (presented as "we reviewed your case and the ruling stands", a judgment often received in less time than the runtime of the video).
>There are countless stories from Youtube creators who had their videos taken down or demonetized or had to edit and reupload them, because the AI detected that words such as "suicide" were spoken.
I don't believe you. I've never seen any evidence of that.
1) Do you honestly think they would add "fuck" to a blocklist but then turn a blind eye to "fck"? Basic profanity filters on old forum software were stricter.
2) I find it completely inane that you are willing to censor yourself for an algorithm. I guess we no longer need a ministry of truth if people just produce censored content to begin with, right?
I don't think it's a literal blocklist; it's more a correlation determined algorithmically. If the four-letter word is correlated with hate or violence but the three-letter one is not, then ... that's all that matters.
Then you've got 'YouTube-speak', where video creators swap in alternatives to words suspected of making the algorithm downrank/demonetize videos. 'Unalived' being a particularly common one, to avoid mentions of killing or suicide.
It’s very human. This is no different from people using terms like gosh, shucks, and darn, instead of their stronger relatives. It’s just how profanity works, no need to worry about it.
For the record, Twitter currently punishes people who call VIPs mean names, and seems to take action against all negativity pointed towards certain ideologies that fit the owner's preferences, and they're talking about some opaque "positivity" changes which actually sound like automating the current manual moderation behind their censorship of wrongthink.
We should stop pretending that that website resembles its preceding namesake, because it does not.
The only people that would say this is unnecessary are the people that are not currently being censored, and have no concept that they ever would be. Because they're the Good People that think the Good Things.
You're all very likely correct, but given the timing, it's hard to assume good intent on Meta's part. This same week, they've "donated" $1 million to Trump's "inauguration fund," and added a strong Trump ally to Meta's board. Significant changes to moderation might be good or might be bad, but given the other news, only the truly ingenuous would trust that it's intended to improve things.
Same thing with when Bezos declared that the Washington Post would no longer be endorsing presidential candidates, claiming that it was a neutral decision about returning the paper to its roots with unfortunate but coincidental timing. Despite that potentially being a reasonable decision in a vacuum, only an idiot would have believed that Bezos was being honest about his motivation.
I'm sure it's a win for Meta (less responsibility, less expense, potentially less criticism, potentially more ad dollars), but certainly a loss for users. More glad than ever that I deleted my FB account 10 years ago, and Twitter once it went X.
My twitter account wasn't big, but it was non-trivial (~30K followers). A post could usually get me to experts on most topics, find people to hang out with in most countries, etc. There were many benefits, so deleting was very hard.
But it was eating my brain. I found myself mostly having tweet-shaped thoughts, there was an irresistible compulsion to check mentions 100 times a day, I somehow felt excluded from all the "cool" parts which was making me miserable. But most importantly, I was completely audience captured. To continue growing the account I had to post more and more ridiculous things. Saying reasonable things doesn't get you anywhere on Twitter, so my brain was slowly trained to have, honestly, dumb thoughts to please the algorithm. It also did something to attention. Reading a book cover to cover became impossible.
There came a point when I decided I just don't want this anymore, but signing out didn't work-- it would always pull me back in. So I deleted my account. I can read books again and think again; it's plainly obvious to me now that I was very, very addicted.
Multiply this by millions of people, and it feels like a catastrophe. I think this stuff is probably very bad for the world, and it's almost certainly very bad for _you_. For anyone thinking about deleting social media accounts, I very strongly encourage you to do it. Have you been able to get consumed by a book in the past few years? And if not, is this _really_ the version of yourself you really want?
Like alcohol and drugs, I think there's a certain kind of person that's susceptible to social media addiction. I don't think it's a large segment of the population but I also have no idea how big it is either.
Plenty of people can drink or consume weed in moderation. Likewise I know a lot of people who mostly use socials in the bathroom or before bed but rarely elsewhere.
If I'm honest with myself, I too had become addicted to Twitter. Elon's oligarchic takeover gave me the push to not only stop going but eventually delete my account altogether (so I wouldn't be tempted to go back into the bar so to speak). So for that I suppose I should be grateful to our new Generalissimo.
Fact checkers weren't deleting posts and didn't even have the right to do so. They are separate journalistic orgs tagging posts. Deleting is done by Meta moderators, which is something else entirely.
I think you also just proved my point that if HN users can't even get basic facts about an event right, how do you expect the average FB user to do so? Goes to show that even on HN "community noting" would be a disaster.
The problem with "fact-checking" is that if it's done by humans at all then it will be heavily biased.
With Silicon-Valley people being in charge of "fact-checking" for the past decade, there have been countless examples of them orchestrating mass cancellations, calling things lies that we all know ended up being true.
I mean, we can't be correct retroactively, can we?? I don't think all the doctors that came before antibiotics should be blamed for not knowing germ theory.
Is this a reasonable expectation of fact checking?
I’m very curious now, I actually would love takes on this. I feel we are implying that the standards of fact checking validity weren’t met, but the standards haven’t been stated.
The reason censorship is generally undesirable is because it assumes the person doing the censoring is always correct, and that they're infallible perfect arbiters of truth incapable of letting their political motivations dictate their censorship decisions...which is of course false. They're very often wrong, and always make decisions based on their political leanings, even when it contradicts the evidence.
If you're wanting to claim that `Cancel Culture` never happened, then I'm afraid, at this point in history, the burden of proof is on you, not me. lol.
No one needs proof Cancel Culture was real. Everyone knows at this point. So you can pretend you need proof if you want, but you're not fooling anyone.
See, that's not how it works in productive conversations. "Adult" conversations online, so to speak, require the person making the claim to provide the evidence.
The act of not providing the evidence is essentially a sign of not having an argument, and resorting to bluffs in the hope that people will take the emotions as facts.
But that's entirely self-defeating - it reduces your argument to one about feels and vibes.
I always find this annoying, because I don't think people are so inaccurate.
You may well have evidence, and bringing it up makes the case.
And if you don't find evidence, then you improve your own argument. You end up checking and figuring out what made you hold that position.
It's just a lost chance. And if people say they don't care to do this, then why the heck did they make the effort? You just lost your peace for no reason.
Sorry, I only read your first sentence, but for something as well known as "Cancel Culture", if someone claims it must be proven to exist before it can be discussed, then that is the person who is not acting in good faith and has immediately discredited themselves, due to ignorance of very well known facts.
Asking people to list evidence for well known things is a well known troll-tactic, and often used as a way to deflect and redirect a discussion into the specifics of specific cases, especially when the main argument has nothing to do with any of the specific cases.
There was a long period where people were getting banned from Twitter and Meta platforms for posting (true) claims about the Hunter Biden laptop story (which was, of course, extremely politically consequential)
If you read the article you linked to, you find that 1) Twitter blocked tweets about the WP story, not banned users, and 2) they reversed that decision and unblocked the tweets 24 hours later as they realized their mistake.
It took the corporate media (CNN, ABC, CBS, MSNBC, PBS, etc) a full 3.5 years to admit the laptop was real. It wasn't just some little thing like you're trying to portray it as. It made the difference in the 2020 election.
Yes, people would care that the presidents adult son is pointing a gun at a prostitutes head on video.
Your attempt to minimise this as “people don’t care about a laptop” is either incredibly ignorant of this matter or deliberately misleading framing of the question.
The people saying the laptop doesn't matter are the same ones who believed the MSM story that it was Russian disinfo for 3.5 years.
They won't allow themselves to think it's important because that's an open admission (to themselves and others) of how thoroughly brainwashed they've become by trusting the MSM left-wing perspectives on every issue.
I’ve seen this happen before. Back in the good ole days of the libertarian internet.
You had subreddits which had zero moderation, because again "the best ideas succeed". Those places got filled with hate speech, vitriol, harassment, stalking, and toxicity.
Minorities and women left, because they were basically hunted.
Logical arguments dont work, because hate, harassment and anger are emotionally driven behaviors.
This creates the toxic water cooler effect. The fact that its ok to say horrible things, attracts more people who are happy to say those things.
You lose diversity of arguments, view points and chances to challenge ideas.
You increase radicalization, dramatically speed up the sharing and conversion of anger into action.
Eventually, the subs brought in moderation. As did every social media platform in existence. The people who didn’t like it, created their own spaces.
Which didn’t do well. Because those positions and spaces are NOT popular. Facing this fact, they are now turning to shut off opposition and moderation, because that is necessary to keep the ball going.
This isn’t even opinion, this is the history of the past 30 years. It’s not even that old!
I really do hope this time it's different. Genuinely, I said it when the new communities were created. I meant it then, I mean it now.
Moderation is fucking toxic and unhealthy. I rejoined moderation recently, and in the first 10 frikking items, I had to see a dead baby pic from an uncovered ethnic war zone.
I really want this to succeed, and want it to be good for users. I am hoping it is.
But experience is clear - making space for hurtful speech results in more hurtful speech, and in people simply leaving for places where they don't have to be harassed.
Bluesky should probably see a jump in users over time this year.
> More glad than ever that I deleted my FB account 10 years ago
I hung on to Facebook largely because Marketplace makes parenting markedly cheaper. I've used it less and less, to the point that I forget about it. This finally inspired me to fully delete the account.
From bad to worse. Meta is probably one of the largest funders of fact checking. Now that appears to be coming to an end. Third parties will no longer be able to flag misinfo on FB, Instagram or Threads in the US.
I think internet discussion worked far better without fact checkers, some of whom cannot really be called accurate. Community notes are the better approach. They aren't always correct either, but they are certainly the better fit for freedom of expression and freedom of speech. Fact checkers are the authority approach, and that just does not fit.
I haven't seen a single discussion be worse off due to fact checking, but I've seen tons of discussions where having it would improve things. I have seen people get mad because they can't post BS without it being challenged.
To claim internet discussion worked better without fact checking is something I haven't seen any actual evidence for, just opinions like yours.
Community notes is just a watered down, more easily 'ignored' version that appeases people that were angry about fact checkers to begin with.
Hopefully there is a push-back, likely from EU legislation. Between the AI generators many of these companies are implementing and changes like this, platforms need to be held more accountable for what they allow to be posted on them.
Claims are challenged all the time by other users and there are enough cases where fact checkers were wrong or heavily biased.
EU legislation tries to introduce "trusted flaggers". A ridiculous approach: an information authority run by a state-like entity doesn't work, even if they paint these flaggers as independent. They simply are not, and that is a trusted and verifiable fact.
Community notes provide higher quality info, it is the better approach. That is an opinion of course.
We will probably see community notes on trusted flaggers.
>Claims are challenged all the time by other users and there are enough cases where fact checkers were wrong or heavily biased.
I've only seen a handful of cases where they were wrong or heavily biased, but I've seen hundreds of cases where the poster refuses to accept that they are wrong and the fact checkers are right.
>Community notes provide higher quality info, it is the better approach. That is an opinion of course.
Claiming that roughly the same info, from less trusted sources and with fewer controls, is somehow higher quality sounds like a big bag of wishes, not something grounded in reality.
>We will probably see community notes on trusted flaggers.
I expect lots of partisan complaining and yelling, but not a lot of actual valid challenges.
I don't know. I believe the average internet user has less to gain from feeding me wrong info. It happens of course, which is why you shouldn't believe everything you read on the internet.
A fact checker, however, has an economic incentive towards their employer. You can paint them as independent, but they will always be in a precarious situation or influenced by third-party financiers. This does not evoke any more trust than a random internet person. "Trusted source" is pretty subjective, but for me "official" fact checkers don't have too much of that.
Exposure to many viewpoints, including wrong ones, provides a counterbalancing effect. When you actively try and suppress information you create a “forbidden knowledge” effect where people seek out silos where extreme and wrongheaded information gets passed without the “sunlight is the best disinfectant”—-it grows faster…becomes more wrong, more extreme, and more dangerous.
Seems to me, in my experience after decades of watching and participating in online discussion, that extremism really only became more problematic when fact checking and active efforts to suppress took hold. Whatever the good intentions may have been, the results were worse.
There's some academic research to the contrary; banning /r/fatpeoplehate and /r/coontown on Reddit reduced incidents of hateful speech across the platform.
Maybe it reduced hate on this single metric, but the complaint is more about the errors in fact checking.
And single subreddits aren't really convincing about the reliability of fact checkers if their independence is in question. In the end they do rely on a truth-authority, which is problematic, especially for political content. And Meta reported that political demands increased.
No, you should actually go and read the paper. It didn't just reduce the type of content posted in the subreddit, they tracked individual users that were active and their behavior overall changed, including in other subreddits compared to before.
Essentially what it showed was that if you pull people out of a particular echo chamber, then that had a sustained effect on how they behaved. Which is evidence contrary to the often made claim that they'd just leave and go somewhere else. It's in line with the theory that the internet fosters extremism because it enables insular pathological communities that in the analog era you'd have been slapped out of long ago by people who aren't nuts.
> Essentially what it showed was that if you pull people out of a particular echo chamber, then that had a sustained effect on how they behaved.
So…silos and echo chambers are bad. Seems to me that was part of my original point. I am suggesting that censorship of information leads people to the silos.
No I am saying that when you censor/suppress debate in the public square you drive people underground where they land in echo chambers and develop extreme views because they don’t have public debate.
You don’t need to ban people from echo chambers if they don’t land there in the first place.
Your solution is reactive to a problem you caused. My solution is don’t create the problem in the first place.
So I have done the legwork to see what happens, and it turns out that if you give space to extremist views they overtake other conversations and dominate the community.
What people don’t seem to grasp is that all speech is not equal, and that our brains react very predictably to certain arguments and content.
For example, your argument is not supported by the paper, which I have read. Because the paper shows behavior of the bad actors changed across the site, and became less hateful.
However the argument is complex, and goes against commonly held beliefs, such as sunlight is the best disinfectant etc.
More exposure results in more reinforcement of popular ideas, until something happens externally.
When you feel the need to censor or suppress information all you are doing is admitting that your argument is just not as persuasive as the opposition and requires handicapping. People see that as the same thing as your argument being false which is why they always work their way tirelessly around your efforts to suppress and censor.
If you get to the point where you feel you need to censor, suppress, or outright ban voices to be heard, you have already lost the communication high ground, and no matter how true or good your opinion/idea/position, it will lose in the court of public opinion…and frankly should…because you did not put in the appropriate effort to be persuasive.
> There's some academic research to the contrary; banning /r/fatpeoplehate and /r/coontown on Reddit reduced incidents of hateful speech across the platform.
That does not imply it reduced hateful speech overall, maybe the censorship just increased antipathy and drove that speech underground or to other platforms where it couldn't be seen.
Not necessarily. If it drives the content off Reddit but onto another platform that's friendly to only these extremists and their views then you may just end up radicalizing the members of the original banned subs even more.
I don't know if that's what happened and there's probably a lot more research to do here but I'm not convinced that deplatforming is actually a good outcome societally without more data.
That's still just a conjecture of a meaningful effect. Recruiters are able to change tactics in response you know. You're just naively assuming that those old tactics worked better just because reddit itself changed, but it could very well be the case that the more extreme rhetoric only attracted people who were already extremist and turned off moderates, but a more moderate approach that's now required could funnel more moderate people into an extremist pipeline.
"Off reddit" is just a win for reddit's PR, and that's why they did it, and no other reason and no other effects can be inferred.
The claim you are addressing is a separate one from the fatpeoplehate story.
And that claim is evidenced; it's not conjecture. I don't have it handy on me, but we have mapped out the ways people are recruited, and things like fatpeoplehate and coontown are the funnels groups use to find new recruits.
There are several others, on everything from ISIS to hacktivists. The mechanism is the same; heck, "red pill" is the term for this, it's actually quite well known.
I wasn't aware that society means surgery. Likewise that veiled means literal. By extension, ethnic cleansing probably means giving certain parts of a population a well deserved bath?
Edit: I did not want to imply that you meant it that way. But in a different context, or coming from the wrong person, it may sound like a dog whistle.
>Exposure to many viewpoints, including wrong ones, provides a counterbalancing effect. When you actively try and suppress information you create a “forbidden knowledge” effect where people seek out silos where extreme and wrongheaded information gets passed without the “sunlight is the best disinfectant”—-it grows faster…becomes more wrong, more extreme, and more dangerous.
Fact checkers don't suppress information, they add context and information to posts others make and provide the exposure to many viewpoints that echo chambers often do not have.
People haven't stopped posting wrong and biased information with fact checkers, they just have the counterpoint to their bullshit displayed alongside their posts on the platform.
>Seems to me in my experience after decades of watching and participating in online discussion extremism really only became more problematic when fact checking and active efforts to suppress took hold. Whatever the good intentions may have been, the results were worse.
My decades of watching is exactly the opposite. Extremism is and was rampant long before fact checking, and fact checking really only served to push some of the most extreme content to the margins and to smaller platforms that don't have it. It concentrates it in some ways as many of these opinions fall apart quickly when exposed to truth and facts.
I think some moderation is important, but misrepresenting fact checkers (damn ironic actually) doesn't serve us. Of course fact checking suppresses information! That's the whole point. Sometimes it results in straight-up deletion, but even when it doesn't, it results in lowered reach, i.e. suppression of what the algorithm would normally allow to trend.
>Of course fact checking suppresses information! That's the whole point
It's not. The fact checkers in this case, and in almost all the cases we're discussing, ADD information that challenges the posted claims; they don't censor them or restrict them from being posted.
Outside of illegal content that is. Content deemed illegal was removed by moderation teams, this was before fact checking, and will continue with community notes with little to no change.
> Seems to me in my experience after decades of watching and participating in online discussion extremism really only became more problematic when fact checking and active efforts to suppress took hold. Whatever the good intentions may have been, the results were worse.
Seems like the opposite. Traditionally we only had siloed forums which were often heavily moderated by volunteers who considered the forums their personal fiefdom, read every single thread and deleted stuff for being "off topic" never mind objectionable, plus the odd place like /b/ which revelled in being unmoderated. Then you ended up with more people on big platforms that were comparatively-speaking, pretty lightly and reactively moderated. Then you ended up with politicians weighing in against moderation with the suggestion even annotating content published on their platform was a free speech violation, let alone refraining from continuing to publish it.
The difference between antivax sentiment now and circa 2005 isn't that nobody ever determined that they weren't having that nonsense on their forums or closed threads with links to Snopes back then or that it's become difficult to find any references to it outside antivaxxer communities since then. Quite the opposite, the difference is that it's now coming from the mouth of a presumptive Health Secretary, amplified on allied news networks and now we have corporations running scared that labelling it a hoax might run the risk of offending the people in charge. Turns out sunlight is a catalyst for growth
> The difference between antivax sentiment now and circa 2005
The antivax movement literally grew exponentially when vaccine information started to be actively censored on the largest social media platforms, and you think that is because there wasn't enough censorship? People were literally driven into antivax information silos because a bunch of idiots decided that vaccine criticism should be forbidden in the public square.
Sorry, but I live in a country using exactly the same social media providers as you, subject to exactly the same (actually pretty limited) censorship, and without widespread, committed and politically-aligned antivax sentiment.
People in the US didn't need to be "driven into antivax information silos", because those antivax information silos were their favourite talk show hosts and some of the country's most prominent politicians. Turns out that promotion of antivax sentiment as an important issue that must be discussed and constant attacks on public health officials doesn't "disinfect" people against the belief that there might be some truth to it...
So you are arguing for exactly what? You don’t want freedom of speech? You don’t want body autonomy? You want authoritarian control of the populace?
Not sure where you live, but if those are the things that are important to your leaders and people, I wouldn’t want to live there or even visit. Sounds awful.
I don't recall expressing any of those sentiments you've attributed to me, but I'll note it's quite a shift on your side from "sunlight is the best disinfectant" to "your country's mainstream media and politicians didn't encourage antivax sentiment enough to reduce vaccination levels or increase death rates to US levels? Sounds horrible"
I note that the original topic was about Zuckerberg being so afraid of his corporation being censured by the incoming government that he's pledged to move his moderation team to a state which voted for them and refrain from publishing any "fact checking" notes in Facebook's name lest they conflict with the government and its supporters. That doesn't sound like a libertarian paradise either
> I don't recall expressing any of those sentiments you've attributed to me
Perhaps I misunderstood your intentions then.
If you believe that antivax debate was in the mainstream in the US and there wasn’t an active attempt to suppress just because some voices bled through the censorship, you are simply wrong. Zuckerberg even noted in this announcement that pressure from the Biden administration to censor speech was significant.
My consistent point here is that censorship drives extremism, because it suppresses the debate where the debate wants to take place and pushes the conversation among those interested in the topic into siloed echo chambers. That definitely happened around vaccines in the US over the last 4-5 years. I know for a fact that it happened, and I have personally tried to gently encourage people I know who felt the censorship frustrations and leapt to other platforms to still read all sides before making decisions.
Whatever Zuckerberg's internal motivations are on this change of policy, I don't care. Community notes seem to be a better way than suppression. Others may have a different opinion and that's OK. I encourage them to freely express it and would never support anyone trying to shut that debate down.
How wrong of me to think that high-profile politicians and wall to wall cable news coverage are anything other than little-noticed voices bleeding through the all-pervading censorship of... two internet companies deleting a handful of accounts after people had pointed out how many million likes their dangerous medical advice was getting and some algorithmic "are you sure you want to link to this hoax?" interstitials. Really, the argument that Meta's moderation was futile and inept (even more so than its policing of scam ads and spambots) has far more credibility than attempts to portray it as some evil internet police forcing people to hide out on tiny islands of antivax.
It seems a little unlikely that people who decided to delete their Facebook account and seek out an echo chamber because they didn't like seeing FactCheck.org links slapped on vaccine function would have nevertheless listened very carefully to FactCheck.org or the public health officials their favourite politicos were slagging off if only they were able to d̶e̶b̶a̶t̶e̶ post misleading memes about public health on Facebook first. I mean, the anger at third party fact checkers is explicit rejection of the idea there's anything to debate.
Anyway, regardless of whether self-proclaimed fact checkers actually live up to their label, it's difficult to describe a corporation bending the knee to an incoming administration that's determined corporations shouldn't link to them as a victory for free speech, or as enabling controversial viewpoints to be debated rather than merely promoted on internet platforms. Must be wonderful for Zuckerberg to be able to express himself freely, without any threat of censure whatsoever, on the day he announces that he'll be firing his moderation team so he can relocate it to a state the incoming administration considers less susceptible to wrongthink.
The mechanisms of online speech show us a few other issues.
For example, certain ideas are far more "fit" for transmission and memory than others. Take something as commonplace as "ghosts", or the idea of penguins. Ghosts appear in all cultures, and they are essentially people with some additional properties. Penguins are birds that don't fly.
Brains absorb stories and ideas like flightless birds easily, because they build on pre-existing concepts.
Talk about spacetime, or multiple dimensions and you aren’t going to have the same degree of uptake.
So when I put certain ideas into competition with each other, all else being equal - the more suited for human foibles, the more successful the idea.
People also don't make that much effort to seek out forbidden knowledge. Conservative mainstream media has made many things forbidden - a third of America isn't aware that Obamacare and the ACA are the same thing.
Sunlight is the best disinfectant for certain breeds of germs. Many others get on just fine.
In my many decades of online existence, which includes being on multiple sides of moderation, extremism was on the rise well before this, because we had created the arguments and structures that thrive on it.
Content moderation was a haphazard effort created out of necessity to stall it.
Personally - I hope this works. Moderation sucks, and is straight-up traumatic. If we can get better, more effective marketplaces of ideas, then I am all for it.
I care about the effectiveness of the exchange of ideas. I see free speech as a principle that supports this. But the goal is always the functioning of the marketplace.
> Seems to me in my experience after decades of watching and participating in online discussion extremism really only became more problematic when fact checking and active efforts to suppress took hold. Whatever the good intentions may have been, the results were worse.
This is just overtly and flatly wrong. I reject your experience fully because over the past few decades the internet has become more open, not less. We openly debated people that believed vaccines caused autism and gave them microphones. Every single loud asshole and dipshit was given maximum volume on whatever radio show or podcast or social media platform they could want.
You can reject my experience all you want but the reality is that between 2020 and 2023ish the world’s top social media platforms became less open about specific kinds of information and actively tried to censor and suppress any contrary information to a government opinion/narrative about certain subjects. During this time certain forms of extremism exploded in popularity as people were driven to information silos to find and learn about the information that the social media platforms were trying to suppress. Those silos generally didn’t have censorship but they also didn’t have contrarian voices either. So when folks landed in those silos all they heard was the assholes at the loud volumes and without the contrarians, followed those assholes.
Specifically on vaccines, the antivax crowd was pretty much limited to some nutjob soccer moms, holistic medicine fanatics, and RFK Jr until you stopped having conversations with them, because the folks who want or believe censorship is good silenced the debate and did not follow them to the forums where they went to spread their ideas to continue the debate.
I am absolutely convinced that the growth in the antivax movement is directly tied to the censorship effort (and to the government's desire not to be completely honest about the vaccines at the time).
No free lunch here. Social media is different from systems in the past because it gives everyone Free Broadcast capability.
In the past people were told they had Free Speech, but they didn't have free access to Broadcast Media (newspapers/radio/tv/movie studios/satellites). It was always up to someone else with access to Broadcast (one-to-all messaging) to prop up the voices they thought were important.
Shannon's information theory tells us social media as a system can't work because, once you tell people their voice matters, give everyone in the room a mic plugged into the same sound system, and allow everyone to speak, you firstly get massive noise, and secondly, as a reaction, people scream louder and louder and repeat their message more and more. Noise only compounds. The math says it can't work. The way people are debating this is under the assumption that it can.
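For what it's worth, here is a toy illustration of the "everyone gets a mic" point (a simplistic interference model I'm assuming for illustration, not a rigorous application of Shannon): with N equal-power speakers sharing one channel, each speaker's signal-to-interference ratio falls roughly as 1/(N-1).

    # Toy model (an assumption for illustration, not a derivation from the thread):
    # each speaker's signal competes with the combined power of everyone else.
    def per_speaker_sir(n_speakers, power=1.0):
        """Signal-to-interference ratio for one speaker among n equal-power speakers."""
        if n_speakers < 2:
            return float("inf")
        return power / (power * (n_speakers - 1))

    for n in (2, 10, 100, 1_000_000):
        print(n, per_speaker_sir(n))  # ratio shrinks as the room gets louder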
> The math says it can't work. The way people are debating about this is under an assumption that it can.
Yet here we are…the math seemed to work just fine overall at minimizing the anti-vax movement, until someone started externally futzing with the numbers to try and force a specific result from that math. When you do that, apparently more of your components run off to form other equations and no longer participate in yours than before you tried to manipulate the messaging.
You are not going to get everyone to agree with you…ever. But suppressing and censoring debate in the real world example of vaccine acceptance to try and achieve that result backfired spectacularly by galvanizing and growing that movement far far beyond what it was…or should have ever been.
Minimal? Again you are just objectively wrong. The antivax movement had been growing since the 90s, RFK Jr didn't exist in a vacuum. The entire reason why there was push back against the COVID vaccine in the first place was because this movement was there already, much like the movement against abortion.
You are rewriting history to fit your viewpoint, which is wrong. The reality is that you are wrong. And those silos that people moved to were equally guilty of censoring voices and banning people not aligned with their beliefs. Even now Musk has no problem censoring and banning people off Twitter for being too mean to him.
Citing the simple fact that every western government ignored their own pandemic plans and did ad-lib bingo instead was enough to get you banned off Twitter, Facebook and Reddit for close to two years.
> I haven't seen a single discussion be worse off due to fact checking
The idea that there is some official governing body that has access to undisputable facts and they have the power to designate what you or I or anyone else can talk about is preposterous and, frankly, anyone on a site called Hacker News should be ashamed for supporting it.
>The idea that there is some official governing body
Platforms were encouraged to create their own departments, and have. There is no "one" or "governing" body here, so this is more hyperbole in this already flagrantly absurd discussion.
>have the power to designate what you or I or anyone else can talk about is preposterous
No one is stopping you from posting bullshit, fact checkers simply post the corresponding challenge or facts that allow others to see the lack of truth in your statements.
The idea you can say whatever you want, lie all you want, and be unchallenged as some form of right is absurd. Claiming because you can be challenged is censoring you or preventing you from talking is also completely absurd.
>and, frankly, anyone on a site called Hacker News should be ashamed for supporting it.
Frankly, anyone on this site should be able to separate hyperbolic strawmen from reality.
> Platforms were encouraged to create their own departments, and have. There is no "one" or "governing" body here, so this is more hyperbole in this already flagrantly absurd discussion.
> Finally, in the midst of operating or considering up to three different avenues of “misinformation reporting” (switchboarding, EI-ISAC, and the “misinformation reporting portal”), by early 2020, CISA had dropped any pretense of focusing only on foreign disinformation, openly discussing how to best monitor and censor the speech of Americans.
> The EIP repeatedly used its fourth category, in particular, to justify the censorship of conservative political speech: the “Delegitimization of Election Results,” defined as “[c]ontent that delegitimizes election results on the basis of false or misleading claims.”166 This arbitrary and inconsistent standard was determined by political actors masquerading as “experts” and academics. But even more troubling, the federal government was heavily intertwined with the universities in making these seemingly arbitrary determinations that skewed against one side of the political aisle.
So please, let's not pretend that the fact-checking organizations, the information streams they themselves depended upon and the pressure that was applied to all of the social networks was organic "encouragement" meant to challenge bullshit posted online - it was a censorship campaign by the United States government, plain and simple.
A voice of sanity in a cacophony of madness. I hold no sympathy for Meta but it's laughable that so-called "fact-checkers" are anything but "status-quo enforcers".
When you say this, what are you referring to? Was this about the general vibe of online conversations, or are you talking about specific incidences or traits?
The problem with "Fact-Checkers" was that since they're human they're going to impose their own biases, and their own sense of morality. For well over a decade the majority of them were also left-leaning (per Silicon Valley), and so even true things that conservatives were trying to say got "censored" because these left-leaning folks believed their own sense of truth and morality were superior.
Joe Biden is sharp as a tack and any videos purporting to show the opposite are cheap fakes deceptively edited by the Republicans and their far right allies [1] [2] [3]
In the examples you provided, they mostly deal with hotly-contested information around Covid-19, where there exists countless amounts of incorrect information, politicized reporting, and straight up propaganda. I'm not surprised that Facebook's fact-checkers got a couple articles mislabeled, especially if they blended in with the wave of genuine disinformation that accompanied the pandemic.
Given that there seems to only be two articles that are listed as falsely reported as misinformation (the Reason article and the BMJ article also mentioned in the Telegraph report from today), I have to assume that there actually aren't that many large errors on the part of the fact checkers. If there were more than two or the mistakes were much bigger, then the free speech advocates would never stop mentioning it.
There can definitely be bias when it comes to fact-checking, I wouldn't deny that. I also think that education and knowledge sharing can be greatly harmed by social media incentives to provide the most "engagement". Having an actual human in the process somewhere introduces some error but also cuts down on a lot of the dumb crap that would otherwise spread.
You asked if I saw examples and said that you haven't seen any examples; I showed you examples.
There certainly are more examples, and the free speech advocates I know do talk about the subject generally quite a bit.
One I just now remembered: Dr. John Campbell (https://www.youtube.com/@campbellteaching) has run into issues with this and has pointed out many other cases where established "knowledge" about Covid that we were previously not allowed to criticize, turned out to be objectively wrong. These disputes have resulted in many other people being censored despite later being shown to be correct, or at least reasonably justified by the best information available at the time.
This is someone who was proactively warning about the potential severity of Covid well before others, and advocating for proper hand-washing very early on (before more science emerged suggesting that skin contact is a relatively minor transmission vector). In the early days of the pandemic, he was complaining loudly about Fauci's initial mask rhetoric, arguing that the general population absolutely should wear masks and that production needed to step up. He's been doing serious medical content on Youtube for 17 years (sort by oldest to see) and first posted about Covid on Jan 26 2020 when awareness was still low and it was imagined that the virus had been contained to China and presented extensive detail on what little was known at the time (https://www.youtube.com/watch?v=aPvpfC7NfR0).
But now he mostly makes videos against "the establishment", out of frustration with their unwillingness to consider new science over dogma.
I apologize for not scouring the internet for examples. If you had not sought those examples out and provided them, I probably would never have seen any cases of incorrect fact-checking in my actual life, but I would have seen many cases of misinformation being fact-checked. If you have to intentionally seek out such cases, or hear them shouted from the rooftops by free speech advocates, then there probably aren't that many of them.
I don't have time to search through an entire Youtube channel, but I will say this: there are many, many doctors out there with factually incorrect views about medical science. I personally have talked with doctors who think that the Covid vaccine killed hundreds of thousands of people (it didn't). I do not necessarily think this doctor is wrong, but from the perspective of a fact-checker who is given the current best knowledge of Covid it is hard to determine who is making genuine good-faith efforts to criticize vs who is simply repeating what they want to be true.
And for the record, you absolutely are allowed to criticize the establishment views. When it comes to important topics like medical science, however, you may just have additional context added saying that this is a contrarian view which (statistically) is more likely to be false than the consensus. Everybody likes to complain loudly about being censored, but the reality is that their views are just being disputed and information provided that they are going against the mainstream view.
Specifically, I was talking about in my daily usage, not a widely-distributed article on a single example. Have you personally seen any fact-checking whatsoever, much less fact-checking that is misleading? Or do you need to search it out in order to find it?
> Trump says the unemployment rate for African-American youths is 59 percent.
> In May, the bureau said the employment-population ratio for blacks ages 16 to 24 was 41.5 percent. Flipped over, that would mean that the unemployment ratio - although such a statistic is not published by the bureau - would be 58.5 percent. That’s pretty close to the 59 percent figure Trump cited, Sinclair noted.
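A quick worked example (hypothetical round numbers apart from the 41.5% figure quoted above) of why "flipping" the employment-population ratio does not give the unemployment rate:

    # Hypothetical numbers, chosen only to match the 41.5% ratio in the quote.
    population  = 1000   # all 16-24 year olds in this toy example
    employed    = 415    # employment-population ratio = 41.5%
    unemployed  = 85     # not employed but actively looking for work
    labor_force = employed + unemployed           # 500; students etc. are excluded

    emp_pop_ratio     = employed / population     # 0.415
    flipped           = 1 - emp_pop_ratio         # 0.585 <- the "59%" style figure
    unemployment_rate = unemployed / labor_force  # 0.17  <- the official definition

    print(flipped, unemployment_rate)

The gap comes from the denominator: the flipped figure counts everyone not working, including people who are not in the labor force at all.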
> From bad to worse. Meta is probably one of the single largest funders of fact checking. Now that appears to be coming to an end. Third parties will no longer be able to flag misinfo on FB, Instagram or Threads in the US.
Zuck has probably done exactly that cost-benefit calculation — FB has put enormous resources into fact checking, and to most people it hasn't moved the needle on public perception in the slightest. Facebook is still seen through the lens of Cambridge Analytica, and as a hive of disinformation. The resources devoted to these efforts haven’t delivered a meaningful return, either in public trust or regulatory goodwill.
Fact checkers are often wrong, and often corrupted by the activists that end up working at them. For example I’ve repeatedly noticed articles from Politifact that are blatantly wrong or very misleading. When I look up those authors and their other work, their bias is clear. Community notes on X/Twitter is far more effective and accurate.
The older I get, the more I realize that people just live in different realities and so many contradictory facts can be true. Obviously this is a source of conflict.
I don't think facts ever contradict each other, it's the stories people create to explain the facts that are at odds. These stories lead people to extrapolate other beliefs which they present as "facts", and it's an organic process of discussion and exposure that changes peoples minds over time.
I personally think aggressive fact-checking authorities impede this process, because people don't change their minds when faced with authoritarian power against which they are powerless; and because they are powerless here, they get angry and disengage. This ends up reinforcing their beliefs, and now you've lost all chance of change.
Right. Imagine facts as data points on some Cartesian plane, and the narrative surrounding the facts as the curve fit to those points. The data points might all be sound, but by selectively omitting some, or by weighting their "uncertainty" higher or lower, you can fit just about any damn curve you want to them.
I also think that simple exposure to a narrative, whether it has any actual facts/data backing it up or not, is likely the primary driver of people believing it.
Now, consider that in most "free speech" societies, those with money can repeat things many orders of magnitude more than others. Over time, this results in influence. Thus, while many countries have "free speech," I'd say they don't have "fair speech." The two concepts complement each other, but one is not the opposite of the other.
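A toy sketch of the curve-fitting analogy above (made-up data, standard numpy): the same points support very different "narratives" depending on how you weight them.

    import numpy as np

    x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
    y = np.array([0.1, 0.9, 2.1, 2.9, 4.2, 9.5])    # last point is an outlier

    equal_weights    = np.ones_like(y)
    discount_outlier = np.array([1, 1, 1, 1, 1, 0.01])

    fit_all      = np.polyfit(x, y, deg=1, w=equal_weights)
    fit_weighted = np.polyfit(x, y, deg=1, w=discount_outlier)

    print(fit_all)       # steeper slope: "things are accelerating!"
    print(fit_weighted)  # gentler slope: "steady, modest growth"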
The idea of some kind of universal fact is also misleading, some statements of fact are only statements of belief, others are so ill-defined that people end up debating two different things.
Yeah, journalism always has some inherent bias. But to say that the X community is going to be less biased than a fact-checking organization staffed by journalists whose job is to be neutral (within what's humanly possible), is frankly absurd.
Why is it absurd? Journalists don’t think their job is to be neutral. They are among the most biased. They abuse the trust given to them, which is why they don't deserve it. Community notes allows a diversity of opinions to compete, which is a better way to seek truth.
Seems to me that if some authority is determining for me what is a fact and what is not, then I am easily shaped and fooled.
Community Notes at least don't claim they have the facts. So that leaves you more with a responsibility to make up your own mind.
I know this isn't for everyone, there are still a lot of people that like to have leaders tell them how they should live. But nowadays there are more and more people that like to have more independence. You will have to live with that too.
None of this is to do with anything about what people want. It's to do with the government. Meta has always, by necessity to some degree, gone with what the current US administration wants re: content moderation. This is the same thing.
Do you really think the company which has openly admitted it wants to create AI profiles that post as if they're humans and not tell you they are AI care at all about facts or what you think or believe?
Well yeah true, the decision is probably mostly made because of the change of government. The fact checking was pleasing the left, and now that the right has the power, this left-wing-propaganda thing has to go.
But then is community notes right-wing?
They could also have kept the fact checking system, but just alter the facts to please their agenda.
But they didn't do that; they are replacing it with Community Notes, which isn't some small group supposedly figuring out the facts for everyone, but a community-built information system.
To me that seems a lot more fair and less prone to corruption. So regardless of the real motivation behind the move, I think it will have positive effects for society. At least a step in the right direction. Still a long way to go.
> The fact checking was pleasing the left, and now that the right has the power, this left-wing-propaganda thing has to go.
Yes, you understand. Meta, due to its problems with moderation over the years, both legal and political, has largely ceded direction of that to the government. The previous government wanted things like fact-checking, an oversight board for moderation decisions, and censorship of certain issues. The current government doesn't want any moderation at all, like X, the social media platform owned by Trump's biggest ally, which he personally loved so much that he created his own Twitter clone when he was booted off of Twitter. So in that environment, the easiest, simplest thing is to treat Meta platforms like X. That's all there is to it. It signals commitment to the new administration, it takes political and legal pressure off Meta, etc., much more than your suggestion that they keep fact-checking but bias it towards the right (which would need to be explained to the administration, etc.). Just saying "We're like X now" gets the point across most cleanly, and it's cheaper.
Exactly! They simply used lawfare in an attempt to bankrupt, seize the assets of, and imprison their main political opponents rather than keep the scales balanced (for the sake of democracy) /s
Thank God. Fact checkers and political organisations pretending to fact check frequently spread false information. Aside from the 2020 election interference regarding the Hunter Biden laptop (which was falsely claimed to be a Russian disinformation effort), you can visit Snopes right now and read an article on how someone that blew up people (and now works for BLM) may not be a terrorist because ‘there are many different definitions of terrorist’.
I think the Snopes link indicates the grandparent's point well, if not in the way that was intended: words being subjective and imprecise, the fact checker has many degrees of freedom. If we allow fact checkers to censor content, they will use the linguistic degrees of freedom to censor selectively to the benefit of their political bias. (Your terrorist is my freedom fighter, your demonstrator is my rioter, your just cause is an imposition on my freedoms, etc.)
Snopes was careful to show degrees of freedom with this fact check, but most social media fact checkers will not be so careful. Social media fact checkers will have a tendency to censor in the direction of the currently-in-power political party, because that party is able to set regulatory policy on social media companies. So the only thing which will prevent censorship from blowing with the political winds is to not have centralized censorship.
Community notes (as implemented at Twitter) require agreement of multiple people who are not in agreement on issues to agree on Notes. I am cautiously optimistic that it may be possible to correct wrong speech with more speech in a nonpartisan manner.
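For the curious, here is a toy sketch of that "bridging" idea. This is not X's actual algorithm (which uses matrix factorization over the rating matrix); the rater names and ratings below are made up. The point is simply that a note only surfaces if raters who normally disagree with each other both mark it helpful.

    # Toy sketch of bridging-based agreement -- hypothetical data, not X's code.
    ratings = {  # rater -> {note_id: 1 (helpful) or 0 (not helpful)}
        "alice": {"n1": 1, "n2": 0, "n3": 1},
        "bob":   {"n1": 1, "n2": 1, "n3": 0},
        "carol": {"n1": 1, "n2": 0, "n3": 1},
        "dave":  {"n1": 1, "n2": 1, "n3": 0},
    }

    def agreement(a, b):
        """Fraction of co-rated notes on which raters a and b gave the same rating."""
        common = set(ratings[a]) & set(ratings[b])
        if not common:
            return None
        return sum(ratings[a][n] == ratings[b][n] for n in common) / len(common)

    def bridged_notes(threshold=0.5):
        """Notes rated helpful by at least one pair of raters who agree on
        less than `threshold` of their co-rated notes."""
        surfaced, raters = set(), list(ratings)
        for i, a in enumerate(raters):
            for b in raters[i + 1:]:
                agr = agreement(a, b)
                if agr is None or agr >= threshold:
                    continue  # this pair is not "opposed" enough to bridge
                for note in set(ratings[a]) & set(ratings[b]):
                    if ratings[a][note] == 1 and ratings[b][note] == 1:
                        surfaced.add(note)
        return surfaced

    print(bridged_notes())  # {'n1'} -- the only note rated helpful across the divide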
Her specific crimes were possession of unregistered firearms, transport of firearms and explosives shipped in interstate commerce, unlawful use of false identification documents, and robbing armoured cars.
Given that all armoured car robbers would engage in such activities (unregistered firearms, explosives, fake papers, etc), is it your position that all armoured car robbers are terrorists?
As a leftist, while this is concerning, it's also important to remember that Meta censors left content as much as it does right content.
So, while this announcement certainly seems to be in bad faith (what could Mark mean by "gender" other than transphobic discussion?), this should be a boon both for far-right and left discussion.
Does that mean increased polarization and political violence? Surely, surely.
He explained it in the next sentence. If people are free to say it in Congress they should be free to say it on Meta platforms too, and that includes a range of non-binary opinions that aren’t intrinsically istphobic.
>it's also important to remember that Meta censors left content as much as it does right content.
This is a bold claim. I see a lot of people in this discussion that seem to have a very different experience. Your point would be much stronger with evidence, if only to calibrate everyone's understanding of what you mean by "left content".
>what could Mark mean by "gender" other than transphobic discussion?
From what I've been able to tell the last several years, the overwhelming majority of your ideological opponents here have no interest in visiting physical harm upon others simply because of how they view and present themselves. They just don't want to be, or feel, compelled to treat the other person's self-image as an objective fact. Some of them additionally have concerns about capacity of minors to give informed consent for the related medical procedures, or consider it suspicious that the prevalence of such self-identification has risen drastically in recent years (to the point that they imagine social pressures toward such identification).
>Does that mean increased polarization and political violence? Surely, surely.
I have seen statements like this from your opponents interpreted as veiled threats in the past.
> Your point would be much stronger with evidence, if only to calibrate everyone's understanding of what you mean by "left content".
I think it's extremely likely that people will see the "de-ranking" of content they agree with as bias, regardless of their place on the spectrum.
Similar: "Biden must have committed election fraud, because all of my friends voted for Trump and I don't know anyone who voted for Biden." (previous election, obviously) Well, is that because no-one voted for Biden, or that the friends/content you see is tuned to how you lean.
> while this announcement certainly seems to be in bad faith
Not really though. It means that feminist campaigners can advocate for single-sex spaces and services without the looming threat of being banned. This is great news and a win for free speech.
> people just do not wish to participate in other peoples gender performances.
The “bad faith” is in the pretending that we don’t all participate in gender performance with every single person we come into contact with, every single day, for our entire lives.
The post you are responding to does not claim otherwise.
Again: it is specifically pointing out that other people are not obliged to participate in other people’s performance.
People are free to act, have whatever cosmetic surgery or take whatever hormones they wish to.
Where their rights end is asking other people to refer to them based on their performance rather than their sex.
Again, it is not ‘bad faith’ for Meta to allow discourse from people to disagree with gender ideology. Meta are not hiding anything, they are directly saying that they want to allow people that disagree with gender ideology - which judging by the last election is most Americans - to use their services.
> this should be a boon both for far-right and left discussion.
If by left discussion you mean discussion of the genocide in Gaza, don't count on it, because this censorship is bipartisan in the United States.
Zuck cares about currying favor with the powerful. He doesn't give a crap about the powerless. Also, he's pretending that Texas, the proposed site for content moderation, is not politically biased, which is laughable. "We're moving from a blue state to a red state" is not a serious proposal for reducing or eliminating bias.
They've also said there will be more harmful (but legal) content on there as they'll no longer automatically look for it, but require it to be reported before taking action.
As someone who worked on harmful content, specifically suicide and self injury, this is just nuts - they were raked over the coals both in the UK, by an inquest into the suicide of a teenage user who rabbit-holed on this harmful content, and by the parents of teenagers who took their own lives, to whom Zuck turned around and apologised at his latest Senate hearing.
There is research that shows exposure to suicide and self injury content increases suicidal ideation.
I'm hoping that there is some nuance that has been missed from the article, but if not, this would seem like a slam dunk for both the UK and EU regulators to take them to task on.
This exactly mirrors my thoughts, although I don't work in your field. One quote:
"For example, in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of content produced every day, we think one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated our policies)."
That is first order data and it's interesting. However, before making policy decisions, I would want the second order data: what is the human cost of those mistakes, and what percentage of policy-violating content will not be removed as a result of these changes? Finally, what's the cost of not removing that percentage?
For that matter, by talking about the percentage of active mistakes without saying how many policy violations are currently missed, you're framing the debate in a certain direction.
The human cost of a piece of content being taken down depends on the piece of content, and the reason behind posting it.
In the case of someone posting about recovery from self injury and including a photo of their healed self-harm scars, having that taken down by mistake would be more harmful than someone who posted a cartoon depiction of suicide for the lolz.
My personal belief, for whatever that's worth, is that communication and speech are one of the most powerful tools any of us have. Talking can change minds, move societies, arouse emotions, and in general makes a difference. This is true no matter the format (text, voice, etc.).
That means that restricting communication should not be a casual activity. Free speech is a good ideal for a reason.
It also means that, if you believe in the primacy of free speech, you are obligated to consider the implications of that belief. Speech has effects. In my adult life, since 1990, we have seen a major change in the ease of communication. IMHO, society hasn't been able to fully adjust to that change -- or rather, that huge suite of changes. I sincerely do not know what a healthy society using the Internet looks like; I don't think we're in one now. All of these arguments (on all sides, mine included) are hampered by our lack of perspective.
Which is why we should research this carefully - and the research thus far points to consumption of graphic or even borderline depictions of suicide, self injury and eating disorder content (e.g. thinspo) being bad for mental health, at least in teens.
Meta seem to be making the case for those who would see social media banned for people under the age of 18. To enforce that properly would require needing ID, and that then opens a whole can of civil liberty issues.
The social "science" research in this area is junk with small effect sizes, unclear causality, and multiple uncontrolled variables. People who claim to be following the science in this area are generally being disingenuous and picking results that support their preferred ideology.
Given how easy it is to take things out of context, I'm not so sure that the original context really makes a difference.
There's more people online than any of us has heartbeats, and the n^2 number of user-user pairs generates detrimental effects that track any positive effects.
Much better, I think, for each of us to have a small and private personal social network, not to hand everything over to a foreign* company trying to project its social norms worldwide.
* Facebook claims about 3 billion active users, so for 89%-93.5%** of its users, the fact that Facebook is American makes them foreign.
That ignores the regulations in the EU, and the UK (coming into force this year), and also the huge volume of lawsuits they are facing in the US. Does everyone remember Zuck turning around to apologise to the parents in that senate hearing? Those parents must feel this is a slap in the face.
This is a decision for the US market first and foremost. The lawsuits you mention are sadly irrelevant to the decision-making; again, if you are about to be forced to make this change by Trump, the results of some cost/benefit study will not sway his reasoning. His decision is already made.
FWIW I would not be surprised if the bluster about championing free speech abroad gets quietly forgotten; we’ll see. They explicitly state they will comply with laws, which in EU likely means continuing to moderate (more not less over time, given the regulatory trends).
> we think one to two out of every 10 of these actions may have been mistakes
May have been a mistake? Reminds me of RTO and the subjective feeling of being more productive in the office. They have the feeling they may have made mistakes and base their new policy on that feeling.
I think what they are saying there is the press release interpretation of experiments showing a false positive rate of 10-20%, with error bars wide enough that stating a percentage gives too many significant figures. But the definition of FP is necessarily fuzzy; if you can perfectly identify them as FP at scale then you have built a better classifier and you no longer have the FP problem. So any statement about FP rates necessarily needs to be couched in uncertainty.
I don't think it's malicious wordsmithing where they are mis-representing the internal data, though I don't have the data to confirm.
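To make that concrete, here is a minimal sketch of the kind of workflow that could produce a "one to two out of every 10" figure with honest uncertainty: re-review a random sample of removals by hand and report a binomial confidence interval. This is an assumed workflow with hypothetical names, not Meta's actual process.

    import math
    import random

    def audit_fp_rate(removals, reviewer, sample_size=1000, z=1.96):
        """Re-review a random sample of removals; return (estimate, lo, hi).

        `reviewer(item)` is a human judgment returning True if the item really
        violated policy (hypothetical callable, supplied by the caller).
        """
        sample = random.sample(removals, min(sample_size, len(removals)))
        false_positives = sum(1 for item in sample if not reviewer(item))
        n = len(sample)
        p = false_positives / n
        margin = z * math.sqrt(p * (1 - p) / n)  # normal approximation to the binomial
        return p, max(0.0, p - margin), min(1.0, p + margin)

With a sample of a thousand items the interval is a few percentage points wide, which is roughly the precision implied by saying "one to two out of ten" rather than quoting a single figure.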
The human cost can't be quantified in any meaningfully precise way on either side. The calculations are necessarily based on so many assumptions as to become entirely subjective. Ultimately the decisions will be made based on politics and business priorities, not any objective calculation of human cost.
> There is research that shows exposure to suicide and self injury content increases suicidal ideation.
Yes. However, I find this obsession with harm-based value judgment to the exclusion of all other considerations ethically problematic, to put it mildly. Ethics does not reduce solely to considerations of harm.
Absolutely they should, and when I worked there that was known as "protecting voice"; that content has always been explicitly allowed because it is free expression, even if reading it can be difficult for some people. The same goes for someone posting images of healed scars because they've been overcoming their self harm.
The content I'm talking about is graphic photos of suicide and self injury, fresh blood-soaked cuts, bodies hanging, and graphic depictions of eating disorders (which go beyond "thinspo", which is more borderline, and so is downranked and not recommended rather than removed).
It's the latter that we believed (based on the advice of experts who we relied on for guidance) is harmful when consumed in large quantities.
Counterpoint: censorship inherently harms everyone. People I follow on Youtube have repeatedly had their ability to discuss topics such as suicide seriously interfered with. It actually gets in the way of factual reporting when a suicide occurs in the community and of discussing the facts of the situation so that people can learn from it and possibly prevent future deaths.
Not to mention, people just straight up have a right to talk about these things. It is not moral to hold one person responsible for an unintended and not reasonably foreseeable reaction to the discussion. And joking about these topics is legitimately therapeutic for some.
I'm not talking about that here - and that always fell under protecting voice - if mistakes were made they should have been reversed on appeal. e.g. imagery of healed scars in the context of recovery, discussions of struggles with mental health, suicidal ideation etc.
I'm talking about graphic images of self harm, suicide, eating disorders. And at some point you have to weigh the maximalist interpretation of free speech "you have to host whatever I want, as long as it's not illegal" with "promoting this stuff causes active harm, no".
>And at some point you have to weigh the maximalist interpretation of free speech "you have to host whatever I want, as long as it's not illegal" with "promoting this stuff causes active harm, no".
The burden of proof is on you to demonstrate that it causes such harm.
I don't generally think people should be held responsible for the unintended reaction to their speech of a small minority of the audience.
Having a piece of content removed, or demoted and not recommended, is being held responsible?
Also, the inquest into the death of Molly Russell found, based on the preponderance of evidence, that exposure to this kind of graphic content was largely the causative agent in her suicide.
What would the bar you require be, is there a bar?
> So, we’re going to continue to focus these systems on tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams.
I don't think this is exhaustive, and I think SSI (suicide/self-injury) + ED/etc. stuff is considered high-severity.
I've already seen disturbing stuff on X since Elon took over that I never would have seen when it was twitter. They don't even show the warning "this might be harmful content" on images and videos anymore. The X algo seems to go haywire every couple of days and dumps a bunch of this crap in my feed until I block 20+ bluecheck accounts showing this crap.
I believe it's only going to get worse going forward as they all adopt these policies.
My profile is largely unused, I follow no one, and like 1/3 times I open up the front page I get straight holocaust denial threads suggested. Completely insane.
“Think of the children” isn’t really a good argument for censoring completely legal political discourse, which is what has been happening.
They are admitting that there has been a global push against free speech on these platforms.
>There is research that shows exposure to suicide and self injury content increases suicidal ideation.
I mean do you really need research to show this link? Of course it does.
We are okay with slapping an “R” rating on movies and allowing parents to be the ones who decide what content their kids can see. Why can’t we decide that parents also need to be the ones to stop their kids from consuming bad content on social media?
Arguably they don't go both ways. The distinction is action vs inaction. If someone wants action from someone else, they need to argue the case for that action. Arguing for not doing something on the other hand is never necessary.
To see why, consider that the space of possible actions someone could not take is infinite. If there's an expectation that someone do a ton of research and work to argue for why they are not doing something, then the amount of work they would have to do is thus also infinite. This way lies madness, which is why in reality the default outcome that results from not acting is always taken as a given.
Sometimes this reality is obfuscated by activists. They find some group of people who are just doing their thing, and demand that those people do some extra things (usually some costly things). The arguments they make for this are weak, but when the targeted people say they'd rather not do those extra things the activists demand their targets argue for not doing what the activists want to whatever level of effort (or greater) they themselves made. This can be an effective bullying tactic but isn't legitimate: it's on those who want action to argue the case for it, not those who don't to argue against.
Digital platforms like social networks default to uncensored. If the operators do nothing, then by the way they are built content is allowed. It takes additional work to categorize posts and block certain kinds of content. So the default outcome is free speech. If someone wants someone else to do work to suppress that, then it's on them to prove that it's truly necessary and that the benefits outweigh the harms. But that doesn't cut both ways; it's not required for other people to take on the argument for free speech. That's the default outcome so it just wins by default if the other side can't prove their case to a sufficiently convincing level.
> Duty (criminal law), is an obligation to act under which failure to act (omission), results in criminal liability
You failed to act, which is why a law is sometimes required to compel action. However, saving a drowning person isn't something that triggers such a legal obligation in the USA unless you're the person who actually pushed someone into the water in the first place.
I don't get why this thread is getting so long or so abstract. The principles here are straightforward. Facebook don't actually have to care about what arguments activists make, but even if they did, it's on activists to win the argument for what they want. You don't get to automatically have your own way unless someone sits down and does a randomized controlled trial showing that you're wrong - and this is independent of what domain we're talking about.
But we're not talking about only the legal obligations. Plenty would argue that a person with a life ring and a drowning person in front of them has a moral imperative to act; the court of public opinion would certainly be negative about a video of someone casually watching the person die while holding the means to save them, even if you can't criminally prosecute.
In this particular case, changing the rules (and making the blog post explaining those changes!) is pretty clearly an action.
Here, the law flows from the moral judgment that there’s a fundamental distinction between action and inaction. Otherwise, you’d be morally culpable for not basically enslaving yourself to helping whoever happens to be the poorest.
> I don't get why this thread is getting so long or so abstract
Activists use abstraction to attempt to overcome settled understandings and norms. Of course there is a distinction between action and inaction—as you recognize it’s even a legally significant distinction. The very existence of that norm is the reason anyone would say “inaction is really a form of action.”
It’s like how the notion of “antiracism” is an effort to reframe race neutrality as a form of racism.
That's not really how language works. If not choosing to do something is a choice, then today we have all made an infinite number of choices. Nobody would ever express themselves that way.
But even if you want to play word games, choices and actions aren't the same thing. Choosing to act is quantitatively different from choosing not to act because it involves a different level of effort. It's wrong to assume that they are morally equal.
This seems to be presuming that there is some clear delineation between acting and not acting, but going through some daily occurrences it's difficult for me to find an objective line, mostly because there are choices one could make that allow one to call something inaction while it requires active action.
Say for example I'm passing by a beggar on my way to work. Before deciding whether I give them money, I can first decide to ignore or not ignore them. From a basic human perspective I want to say hello and be friendly (and I choose to do this), but it does make me feel worse if I decline than if I had ignored them, exactly because it makes it feel like a choice. But if I ignore them, can I call passing by without giving them money less of a choice? I only moved my choice up one level in the tree of all possible decisions I can make.
Or, moving it to the example of the drowning man: imagine you're holding out your arm to see how long you can do it, and see the life ring flying towards you. If you choose not to act, it'll hang on your arm, and the person will drown. Is it nevertheless inaction on your part?
>then today we have all made an infinite number of choices
The way I like to think about it is that once a choice has risen to the level of conscious awareness, it is an illusion that a person can just decline to choose.
There’s lots of mainstream media content I think is psychologically harmful and should be suppressed, such as content normalizing adultery. But I’m quite content to live in a society where the social norm favors people saying what they want and the burden is on the opponents of that to produce strong evidence of harm.
But all we see are two proponents in a civil trial. Shouldn't the standard be the well known "preponderance of evidence"?
Though personally preponderance of evidence seems to be a shitty standard too because I might be listening to two awful theories and be forced to conclude one is the winner. Theories should rise above a minimum threshold to even consider sniffing at before we consider one as superior over the other.
I agree that there needs to be a better standard than just “more likely than not”. Freedom of expression is a fundamental good, and there should be clear evidence of harm outweighing that good, before curtailing it.
Regarding my previous comment, my intent was to point out the GP comment’s position (because the parent’s comment seemed to be beside the point), not necessarily to endorse it.
Right. The challenge for free speech absolutists is to demonstrate that free speech takes priority over moderating hate speech, adult content, highly addictive media etc. That demonstration needs to be evidence-based and framed in terms of short- and long-term social harms/impact. Simply saying "censorship hasn't gone well for some countries" or "having a free speech zone is extremely important to the future of civilization" is not enough.
>The challenge for free speech absolutists is to demonstrate that free speech takes priority over
Why? And how, in principle? Why is the burden of evidence not on others - and equally, how, in principle, could they furnish evidence?
The entire point is that freedom of speech is a core moral value; they have weighed the potential harms and come out against censorship, because they consider censorship to be inherently harmful. There is no objective way to compare different kinds of harm to each other; each individual's moral values are what they are.
When a free speech absolutist argues that freedom of speech is more important than whatever goal the censor has in mind, that argument is of fundamentally the same kind as the censor's argument, just with opposite polarity. When the censor says that "hate speech" needs to be prohibited, that, too, is based on a relative weighing of values and purported rights (i.e. freedom from hearing it).
You’re presuming that the debate has to be carried out according to utilitarian rules (do benefits of free speech outweigh harms caused by certain speech). But why should it be?
Consider hate speech. There is a clear short-term benefit of moderation: reducing the harms to marginalised people from being exposed to threats to their person, identity, and way of life. In the face of this benefit, the absolute free speech advocate must provide a counter-argument for why free speech overrides that harm-reduction.
>In the face of this benefit, the absolute free speech advocate must provide a counter-argument for why free speech overrides that harm-reduction.
Why are you not the one who must provide an argument for why this "reduction of harm" overrides the benefit of freedom of speech?
Further, a very large fraction of what I have seen classified as "hate speech" simply cannot reasonably be argued to constitute any kind of threat.
Finally: what do you mean by "identity"? When I have seen this term used by opponents of "hate speech", it generally seems to refer to something like a person's self-image. I cannot understand how this can in principle be "threatened", nor how it could constitute harm to learn that someone else sees you differently from how you see yourself.
> why this "reduction of harm" overrides the benefit of freedom of speech
There are some strong arguments for harm reduction being a more fundamental human value than freedom of speech.
Firstly, the modern conception of freedom of speech is often seen as grounded in libertarian thought, in particular the works of Bentham and Mill. Yet Mill himself explicitly stated that these freedoms should be limited where they cause harm to others. Thus freedom of speech has historically been seen as a lower priority than harm reduction.
Secondly, there are in fact two competing interpretations of "freedom of speech": on one hand, equality of access to a public forum; on the other, the ability to say whatever you want. I say "competing" because in a public forum without moderation, the tendency is for loud and offensive voices to drown out the discourse, effectively leaving marginalised people without a voice. This is especially potent on modern social media. To me it is similar to antitrust regulation in the market: we put it in place for the benefit of competition, as this typically improves social outcomes. However, in doing so we are limiting the freedom of corporations with large market share to collude, fix prices, etc.
Thirdly, history suggests that it's problematic for ideological values to trump the basic tenet of harm reduction. We see this, for example, in the Catholic church's refusal to support abortion rights or the use of condoms to prevent AIDS. If we don't ultimately assess the long-term social impact of a "core moral value" in terms of human harm and flourishing, then we risk entrapping ourselves in an ideological morass.
> what do you mean by "identity"? ... I cannot understand how this can in principle be "threatened"
As an example, homophobic comments are an attack on the sexual identity of homosexual people. It sends a message that they are unacceptable to society due to their inherent preferences, and that they should not express themselves as they naturally wish to. This causes psychological suffering.
>reducing the harms to marginalised people from being exposed to threats to their person, identity, and way of life
This only makes sense if you use a recent definition of "harm" created by censorship advocates that's divorced from the traditional meaning. In criminal law, harm has traditionally meant (and in America still means) actual physical harm to someone's body, or threats to inflict it. Censorship advocates are the ones making the claim that mere words should also constitute harm, so the onus is on them to justify why they want to change the meaning of the word like that.
Companies like Facebook pretending they are not publishers, people posting content believing they should be able to publish anything without consequences, and professional weather-makers (PR/comms/lobbyists etc.) using this confusion to get around traditional controls on their dark arts.
In the end I think the only solution that works in the long term is to have everything tied back to an individual - and that person is responsible for what they do.
You know - like in the 'real' world.
That does mean giving up the charade of pseudo-anonymity - but if we don't want online discourse dominated by bots controlled by people with no-conscience - then it's probably the grown up thing to do.
The only thing that removing anonymity would do is make it easier to harass people with dissenting opinions. Professional bad actors can switch to posting under "real people" names, just as spammers now post from home IP proxies.
I share your concern - however harassing people is illegal and if you can't be anonymous to do it then that's also much less likely.
I don't buy the favourite argument of the US gun lobby - that only criminals (yes, by definition) would have guns/anonymous accounts if you banned them, and that therefore we shouldn't do anything.
You could apply that to anything that's illegal - by definition only criminals are outside the law - so why any laws at all?
I'd also be concerned about repressive governments - but I think you could distinguish between mass/public communication and private 1:1 communication. Just like in the real world there is a whole world of difference between saying something in private and publishing something in a national newspaper.
I suggest you consider looking at how much it costs to go through the legal system, as it seems your assertion is based on a theoretical understanding of our system.
Filing a civil suit can be pretty expensive if you want a lawyer -- which, yes, you do effectively need one.
This is effectively a tax on the victims of harassment.
Social media are not publishers. They are much more like public squares, but online.
On top of that, even though publishers usually curate content, there is no obligation to do so. It's just something that has been done because publishing used to be expensive.
Now, when sharing data online is cheaper and cheaper, this limiting factor is fading away.
--
At the same time, we have just 16 hours of attention per day. So you have to decide whether you want to invest your time in more curated publishing (I read a lot of books, often old books which stood the test of time), or if you want to go to the public square where practically anyone can shout as he sees fit. I do that too, but I try to moderate both my time using social media and what I see there. And I am proud I haven't used TikTok, I stopped using Facebook, Instagram, I don't watch any Reels, Shorts, etc.
So publishers still are not lost, but what they are selling is curation driven not by technological limitations, but by the limits of how much we can read and see in a day.
--
At the same time, publishers are biased. They publish what they see as high quality. They publish what they consider worthy. They publish things they would want to read. And they have publication checklists that prohibit publishing certain things even if they are true.
Public squares don't have such an attribute.
There are things to be published and heard, even when mainstream people would disagree.
There are things that should be public, even when it's against a law in certain countries.
And online anonymity mixed with public square enables people to tell about atrocities that happen, or about corruption, government inefficiencies, about people breaking human rights and so on.
--
If you end anonymity and public squares, you end a channel for democratic feedback.
Because publishers don't play this role any more. They are biased, people realize it and are fed up with it.
> Social media are not publishers. They are way more public squares, but online.
I'd believe that if they didn't promote or suppress content - in my view as soon as you get into that game you become part of the publishing process.
> On top of that, even when publishers usually curate content, there is no obligation to do so. It's just something that has been done, because publishing used to be expensive.
Eh? Publishers take care of what they publish because they are responsible for it in law - if they publish a lie about somebody (even if it's a quote from somebody else - i.e. somebody else's 'content') - they are on the hook for that.
In a similar way, if I defame you and then a newspaper/facebook promotes that around the world, most of the damage actually comes from the promotion of the original defamation - the publishing/amplification.
> If you end anonymity and public squares, you end a channel for democratic feedback.
You are already assuming we live in a society where people are too afraid to say what they think in public. And I would also argue if you stand on a soap box in a public square then you are not anonymous - you are public. You are confusing a public square with people whispering behind masks.
I'd like to think so, but I'm not so sure - doesn't it depend where the incentives come from?
Optimising simply for demand without any principles leads to things like street fentanyl, junk food, and mass shootings (there is a demand to own assault rifles).
Online right now there is a heady mix of large monetary incentives and the ability to rapidly optimise objective functions.
Let's not pretend Meta's recent change isn't simply about Zuckerberg maintaining his power.
I use Instagram and Threads specifically because of the relative lack of political content on them. If they also start to become cultural war grounds like everything else then RIP.
Zuck claims "Europe has an ever increasing number of laws,institutionalizing censorship and making difficult to build something innovative"
Ouch. As a European, I feel very wary of such a sentence and the implications.
Time for Europe to wake up ?
(edit: fix typos)
I'm not sure that we are awake. As a dev for a long time, I realized only 6 months ago that all the tools I use daily come directly from the US.
My job and my life would be very, very different without this technology.
We are losing ground, or worse, we are falling behind more and more quickly.
It is individual of course. But, for example, Emmanuel Macron and Mario Draghi have sounded the alarm quite clearly. As individual citizens we should try to buy European any time there is a European alternative.
It's pretty much right. Dig into what it takes to run a social network in most European countries and you'll hit at minimum the following problems:
• Lack of a DMCA equivalent. DMCA lays out a lightweight process for platforms to process copyright disputes which if they follow it will avoid legal liability, which is needed on any platform that hosts user generated content. The EU Copyright acts require platforms themselves to enforce copyright and prevent users violating it. This is a gigantic technical implementation problem all by itself. Also, the US has the legal concept of fair use but that's not a concept in much of Europe, so people posting parodies etc thinking it's OK can still create liability problems.
• No equivalent of Section 230. Many new laws that specifically criminalize the hosting of illegal speech, and which don't give any credit for effort. As what's illegal is vague and political in nature you can't make automated systems or even human-driven systems that reliably handle it, so the legal risks are large even with a good faith effort to comply.
• GDPR, "right to be forgotten" and NetzDG style laws have large fixed costs associated with compliance which established companies can absorb but startups can't. For instance it's common for EU lawmakers to demand 24 hour turnaround times, which you can't reliably comply with if you're a one man startup.
• Algorithmic transparency laws, which mean you can't obtain any competitive advantage by better ranking (being good at this is how TikTok got so big), and which can threaten your ability to clear spam or use ML.
• Laws around targeted advertising mean you can't generate revenue comparable to what the US based firms can do, so you can't be competitive and your users will be annoyed by low quality barrel scraping ads for casinos after they click "No" on a consent screen without reading it.
There's probably more. For example, running a commercial search engine or training AI models on the internet is illegal in the UK, because UK copyright law only allows "data mining" for research purposes. There's no way to argue it's fair use like they do in the US. Just one of many such problems off the top of my head.
Looking around my apartment and my life, I see a Japanese game console, Japanese camera, US speakers, US laptop, Czech/German car, French photo software, Czech IDE, Swedish furniture, Swiss/US computer accessories, Chinese IoT devices, and a lot of the stuff was manufactured in China. If anything, my life would be very different without China (whether I like it or not).
I don't know how to say this inoffensively, but a lot of US people seem to mistake the slightly higher chance (from 1/inf to 2/inf) of becoming a billionaire with a higher quality of life, and the ability of the select few to hoard capital for a rich society.
I know of exactly 0 European businesses that use free open source software for their office suites.
Z-E-R-O.
I don’t even think companies have their own mail servers anymore; it’s mostly G Suite and Microsoft Office 365. People aren’t even hosting business-critical applications in Europe unless compliance forces them to - let alone using European-made tools to do it.
There's a lot more to life than a lot of things; I'm not really trying to discuss personal fulfilment, more just pointing out that there's no reality where we can get by on European technology right now, and that if the US decided to sanction a European country, that country would suffer a pretty significant (trillion-euro, most likely) shock to productivity, as not only would they need to find new tools and retrain, but they would also lose all their mail and documents.
Yes, I somewhat agree on FOSS, and I agree about the people.
But I think that the capital is massively US-controlled (though it is international too).
Think of the top seven companies of the S&P 500 (GAFAM, Nvidia, ...).
If you look at the CAC 40 (France) or the EURO STOXX 50: I don't directly use any products of those tech companies, but I'm sure these companies use at least one of the seven.
Tech companies in Europe are not ridiculous, but they are not leading the change. They optimize, they improve, but the lead is US-centric.
We have ASML, but for how long?
The problem is that these platforms have to be built, and people have to willingly use them... which is hard, given Meta have built brilliant addiction machines.
The whole threat here is you can't regulate Meta away, because they'll use the US Government to bully you into not doing so. I'd imagine if the EU tried to publicly prop up a platform not making any profit, they'd do the same.
But yes, the only way is for this to happen. But either way, this was the scariest statement of the announcement(s).
As a European who does generally feel that the continent is on its way to becoming a museum, describing the absolute bilge that the flagship products of Facebook, YouTube, X etc. are as 'innovative' feels in the same ballpark as describing the work of tobacco companies to sell and advertise their products in the 50s-80s as innovative.
They were innovative.
I don’t know about other EU countries, but it seems that in France there were only unsuccessful copycats of end-user services. I'm probably being a bit harsh; it's because I'm under the impression that the gap (EU vs US) is widening. 10 years ago, there was open source, there was OVH, there was hope. With the cloud, we have surrendered a lot of power to massive US companies.
As a European I would say that Europe's governments are radically more focused on the well-being of their populations than say, the USA.
But... is it just luck or is it this Nanny-state issue that makes it very hard to think of a single major Internet destination or tech company that was born in Europe?
The through-line is US/China for the vast majority. For the EU I can only think of Spotify for non-retail.
Being in Europe, I find no shortage of local versions of all kinds of providers, but the large social media platforms are, as a rule, outside the EU, mostly in the US.
The issue seems to be that saturation is real and the moat gets larger with time as companies just gobble up all their competition. How could Here Maps compete with free Google Maps plus Apple's deep pockets, etc.? TomTom used to be much larger and is European; it seems to still survive, but nowhere near the size it could've been otherwise.
The faster we decouple from societies like America's, the better off we Europeans will be. We Europeans defend our European way of life against the degenerate capitalism of the US.
As an American who lived in Europe in the 90s when I was young, a lot that I really appreciated about the European way of life has deteriorated and is now almost unrecognizable to me in some ways.
When I visit every few years, it amazes me how quickly Europe is “Americanizing”. More fast food and less traditional food. Ripping up vineyards that have been there for centuries. Fewer protections for your farmers. More people walking around staring at their phones and less people talking to each other in cafes. Seems like almost everyone dresses like Americans and can speak English now. And it’s hard to tell the difference between the coffee shops in Spain and those in San Francisco. How long until you start building suburbs and driving everywhere?
Don’t get me wrong—I love the U.S., and I love living here. But its culture is not for Europe.
Comments like this are interesting because the changes you’re describing aren’t really “Americanizing”, they’re just a sign of modern times.
For example: People weren’t walking around staring at their cellphones in Europe in the 90s because they were distinctly European. It was because we didn’t have smartphones anywhere. The smartphone changes happened in lockstep across the globe.
Likewise, many of your other points are purely people’s personal preferences. I think your criticisms are largely nostalgia for the 90s and your time spent living abroad, not an indictment of “Americanizing” Europe.
Vineyards are ripped up because they have become unprofitable due to decreased alcohol consumption in general. I'm not sure that has much to do with Americanization.
I challenge you to find another economic system that has worked in history, because it sure isn’t communism if that’s what you’re referencing. This is also aside from the fact that Europe is also a subscriber to capitalism.
America is the most successful country on this earth and we bankroll most of the rest of the world but somehow we’re always the bad guys.
As an American I’d be very happy if my tax dollars stopped getting spent on Europe.
> America is the most successful country on this earth
According to what metrics? life expectancy? crime rate? wealth per inhabitant? education? work life balance? health care? happiness? incarceration rate? human rights? corruption? freedom of press?
American tax dollars aren't spent in Europe or elsewhere in the world for some altruistic reason. The US want to maintain their hegemony and prevent other powers from emerging. They certainly don't care about Europeans or Taiwanese or whoever.
> I challenge you to find another economic system that has worked in history, because it sure isn’t communism if that’s what you’re referencing.
Not that I'm a big fan of communism or China, but communist China has been doing pretty well, and is getting more innovative than the US
The part of China that is innovative is not communist. They have the most free-market labor market, the most free-market regulations in everything except media (which is heavily controlled by the state).
China is the most brutally capitalist society in the world, with a dictator sitting on top managing it at the margins and ensuring media will never be free and threaten the communist party.
Somehow US Americans managed, in about a year and some, to almost singlehandedly fund the complete destruction of an already impoverished and entrapped society of 2.3 million people, most of them younger than 18. Never mind the pressure or direct military attacks on other nations to not intervene.
And you wonder why you're viewed as baddies.
I'd be happy if your tax dollars stopped going outside of US, too.
There is also the good old: "We can't discuss changes because there is nothing better already existing. There can't be anything better because we cannot change"
I am concerned about the community notes model they're moving towards.
Community notes has worked well on Twitter/X, but looking at the design it seems super easy to game.
Many notes get marked 'helpful' (i.e. shown) with just 6 or so ratings.
That means, if you are a bad actor, you can get a note shown (or hidden!) with just 6 sockpuppet accounts. You just need to get those accounts on opposite sides of the political spectrum (i.e. 3 acting like Democrats, 3 acting like Republicans), and then when the note that you care about comes up, you have all 6 agree to note/unnote it.
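To make that concern concrete, here is a minimal toy sketch in Python. It is not X's actual Community Notes scorer (the real system uses a matrix-factorization "bridging" model over rating history); the 6-rating threshold and the crude "both sides" rule are simplifications of the scenario described above, so treat this as an illustration of the attack, not of the deployed algorithm.

    # Toy model of a "helpful if enough raters from both sides agree" rule.
    # This is NOT the real Community Notes scorer; it only illustrates the
    # sockpuppet concern described above.

    from dataclasses import dataclass

    @dataclass
    class Rater:
        user_id: str
        lean: str        # "left" or "right", inferred from rating history
        helpful: bool    # did this rater mark the note helpful?

    def note_is_shown(ratings, min_ratings=6, min_per_side=3):
        """Show the note if enough raters, from both 'sides', agree it's helpful."""
        helpful = [r for r in ratings if r.helpful]
        left = sum(1 for r in helpful if r.lean == "left")
        right = sum(1 for r in helpful if r.lean == "right")
        return len(helpful) >= min_ratings and left >= min_per_side and right >= min_per_side

    # Six sockpuppets, groomed to look like they sit on opposite sides,
    # all agreeing when the operator wants a particular note shown:
    sockpuppets = [Rater(f"sock{i}", "left" if i < 3 else "right", True) for i in range(6)]
    print(note_is_shown(sockpuppets))  # True -> note surfaces with only 6 coordinated ratings

The point is that any rule satisfiable by a handful of raters is only as strong as the platform's ability to detect that those raters are coordinated.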
I speculated what Zuckerberg wanted and what he'd do when he visited Mar-a-lago[0]:
* Push to ban Tiktok
* Drop antitrust lawsuits against Meta
* Meta will relax "conservative" posts on its platforms
* Zuckerberg will donate to Trump's cause
So far, Zuckerberg has already donated to Trump's cause. Now he has relaxed "conservative" posts on his platforms, directly or indirectly.
When Trump comes into power, he'll likely ask the FTC to drop its antitrust lawsuit against Meta under the guise of being pro-business.
My last speculation is push to ban Tiktok. I'm sure it was discussed. Trump has donors who wanted him to reverse the Tiktok ban. Zuckerberg clearly wants Tiktok banned. Trump will have to decide who to appease when he comes into office.
I would be really interested in how someone could spin advocating for less moderation and at the same time asking to ban the competitors' social media platforms.
So let's take one of the most expensive, labor-intensive parts of our business and replace it with crowdsourced notes.
As of 2022, Meta employed 15,000 content moderators, at an expected cost of $70K to $150K per person (salary + benefits, plus consulting premiums), so let's assume $110K.
This implies $1.65B in workforce costs for content moderation.
Meta is more likely to make its earnings....
Though I wonder if they will redeploy these people to be labelers for LLMs?
Again, this conflates moderation within Meta with fact-checking by third-party orgs, which is what this is primarily about.
Reading the comments, it's clear to me that "community-based fact-checking" will not work, since not even HN users can get basic facts straight (not due to any lack of intelligence; they probably just didn't read the article or understand the context), so how do we expect the FB userbase to do so?
It’s not conflating. They also announced that a lot of content that was moderated won’t be any more. For example labeling someone trans as having mental health issues was forbidden and it won’t be anymore. So they are reducing moderation too.
The discussion here is painful to read. The 'neutral' discussions of product features and of how Austin, TX is more liberal than the rest of Texas are grotesque.
Zuckerberg says Facebook is going to be more "like X" and "work with Trump". It has changed its content policy to allow discussions that should horrify anyone.
"In a notable shift, the company now says it allows “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”
"In other words, Meta now appears to permit users to accuse transgender or gay people of being mentally ill because of their gender expression and sexual orientation. The company did not respond to requests for clarification on the policy."
But Zuck himself says that they are also dialing their algorithms back in favor of allowing more bad content. It's not right.
I feel the same way and I think the writing is on the wall for the near future of the world. It is disheartening to see people on a forum like HN who I assumed have values similar to mine fall right in line with conservative propaganda and try to act like this isn't an overtly political action. This decision is political, and it goes a lot deeper than left vs right - it's about attacking support for a baseline scientific 'truth' and fully accepting a post-truth world where reality is what the powerful deem it to be. This has always been the case to some extent, but it has gotten so lopsided in the last decade that it's hard to see how we come back from this.
I similarly share your pessimism. Ironically, I think a lot of the propaganda that is effective on HN's demographic works because it frames itself in a way that makes it appear logical and intellectually robust. Us devs love thinking we're the smartest person in the room and strong, logical thinkers who can't be fooled, but that's exactly why those kinds of propaganda and talking points can work so well. (I'm certainly guilty of it myself at times, fwiw.)
Fwiw, not everyone on 'hacker' news is like this, and many of the thoughtful ones are smarter than I am and skipped this post entirely. But the rot in the Silicon Valley ideology that's everywhere here is so disheartening.
Mark has looked at what has happened to Twitter since Musk took over, a notable decline in activity and value… and decided he wants a piece of that? Musk is begging people on Twitter to post more positive content, as it devolves into 4chan-lite.
If Musk’s ideological experiment with Twitter had proven the idea that you can have a pleasant to use website without any moderation then Mark’s philosophical 180 would at least make sense, but this doesn’t, at all. What’s to gain? Musk has done everyone a favor by demonstrating that moderation driven by a fear of government intervention was actually a good thing.
It starts to make more sense when you think about who is arm in arm with the president elect. I don't know that Musk believes his philosophy is wrong and now he has the power to pressure others.
I use Meta products; it's anecdotal, but they're dead. At least they seem very stagnant. This is appeasing the new establishment and hoping for more engagement?
> Mark has looked at what has happened to Twitter since Musk took over, a notable decline in activity and value… and decided he wants a piece of that?
Hell yes he does, Twitter helped Musk get a seat at the table with Trump and the ability to influence US policy decisions at an unprecedented level. Zuck craves power and sees sucking up to the incoming administration as an easy path to get more of it.
Additionally, if you haven’t read the article you’re commenting on, community notes is an excellent replacement for so-called fact-checking services, which are notoriously biased.
I have a feeling it is more part of an agreement with the new administration. It was an agreement with the old administration that led to the current platform where there is way too much overreach on things the govt didn't want discussed: COVID, Palestine, immigration, etc.
It’s funny to see these tech moguls bend the knee for the new king. All their values, their so called care for the community, everything they say, everything, … is just all a big play in an effort to make as much money as they can. It sickens me to watch this stuff unfold.
It’s not just a new king; it was the fact that the other party won the popular vote resoundingly after all these years, which meant that the 2016 election wasn’t just a fluke.
Repubs have all 3 branches for at least a few years now, and there will be enormous changes in tax policy in legislation that will be passed this year, due to many popular provisions of the 2017 TCJA expiring at the end of 2025. And Dems will basically be left out of the conversation as their votes are not needed.
Filibuster is for legislation that needs 60 Senate votes, tax changes only need 50.
There are also quite a few Democrats in swing districts who I bet will vote for tax cuts. They are basically only in office instead of their Republican opponents because their opponent opposed women’s rights.
That's not quite right. Nothing (or almost nothing?) needs 60 Senate votes to pass. The difference is that they've agreed not to filibuster tax laws, and you need 60 votes to break a filibuster.
So you're right on the practical effect, but the details are slightly off.
They won on the backs of decades of efforts to prove that the culture wars were unhealthy for America. That worrying about climate change was a hoax. That evolution itself is controversial. That universities and authority figures are not to be trusted. That somehow, Fox News, the biggest media corp in America, is not the main stream media.
They got here by destroying our ability to fight disinformation. They beat climate science in the 90s by giving air time to cranks, and then senators used those specious arguments to stall climate bills. When scientists came onto Fox to try and reach the audience, they were thrown to the lions for the entertainment of the audience. Derided and mocked with gotchas and rhetorical arguments designed to win the perception game.
This is a continuation of that game. Because it works. The idea that free speech is at risk because of moderation is amazing, because it is being revived after being tested by everyone online. We started the internet without moderation, we believed that the best ideas win.
We have moderation everywhere now, because we know that this belief is empirically untrue. The most viral ideas propagate - the ones most fit to survive their medium: humans.
I agree that they won, because they played the game to win. But we should not miss how they worked hard, to set up the conditions for this type of a win.
Of the total national popular vote, Trump won by about 1%. That's not "resoundingly". That's a very thin margin. (I mean, it's better than he got in 2016 and 2020. But it's not resounding.)
It’s resounding because the expectation was that the nation’s voters were trending away from Republican politicians (or at least in the popular vote), and that the country was just waiting for old voters to die.
But that was shown to be completely wrong, even after women lost rights in quite a few states. The message was clear that Republicans are here to stay, and businesses better learn how to do business with them, or else face the consequences.
Popular vote doesn't win President, electoral college does, and that was 312 to 226, not barely, and Dems didn't win a single one of the 7 states that were supposedly in play (GA/NC/PA/MI/WI/NV/AZ).
In the legislature, it is almost impossible for Dems to regain control before 2028, as the majority of states electing senators in 2026 are very unlikely to elect a Dem. And I am not optimistic on Dems' chances in the 2026 House:
As far as I can tell, Repubs have the executive for at least 4 years, the judiciary for who knows how long, the Senate for at least 4 years, and the House for at least 2, if not 4 years.
Knowing this, it makes sense why businesses would want to cozy up to Republicans.
Will this totally end content moderation? That could be a small silver lining, as content moderation for FB appears to be extremely hazardous to one's mental health:
It is not obvious that many people (when was the last time a single post was seen by the entirety of the platform?) seeing occasional soul-destroying stuff is worse than seeing soul-destroying stuff as full-time employment, 8 hours a day, 5 days a week for the length of one's work life.
Also: perhaps the occasional soul-destroying post would help people break their social media addictions.
Certainly poor Molly Russell does not appear to have seen this content only occasionally, which is just my point. There is no mention of how she accessed this content either: was it a message board, or was it served algorithmically? That's important to the contention here.
I am not sure that the death of one person outweighs the lifelong PTSD of 100% of FB content moderators. Again, my original claim is that it is not obvious.
I am not trying to trivialize this person's death. If it were up to me, I'd completely get rid of social media in an instant.
That's pretty much the only legislation I'd support, i.e., a compulsory setting for chronological ordering of events, which effectively disables "the algorithm." Seems like it would be agreeable to media companies and pure libertarians alike.
TBH I had assumed FB was just penalizing all political content or that people just tried like hell to avoid it because all I see on FB anymore is either stuff related to the few FB groups that keep me on the platform or endless reposts of basically pirated Reddit content for engagement.
Community notes and enforcement might help Meta in the long run, as a step toward more organically managed content that can scale better than simple moderation.
I have my serious gripes with how Instagram currently manages reports. I've recently reported a clear racist post promoted to me on Instagram that did not get removed or acted on. They seem to go the route of "block it so you cannot see the user anymore but let everyone else see it".
So as far as I can tell, the only things that Instagram actually moderates at the moment are gore and nudity, regardless of context. So barely dressed sexualised thirst traps are ok, black and white blurred nipples are not, everything else is a-ok.
Wow, so many warnings for the future. They didn't intend it, but FB now has some responsibility for what's generated on it, as one of the most massive sources of info on the planet...
Regardless of what you think about this step I find it disconcerting that we can now disagree on facts.
For example:
- whether crime is up or down
- whether the earth is warming or not
- how many people live in poverty
- what the rate of inflation is
- how much social security or healthcare costs
- etc
These are all verifiable, measurable facts, and yet, we somehow manage to disagree.
We always used to disagree and that is healthy, we avoid missing something. But in the past we could agree on some basic facts and then have a discussion.
Now we just end a discussion with an easy: "Your facts are wrong." And that leads to a total inability to have any discussion at all.
Fact checking is not censorship. Imagine math if we'd question the basic axioms.
What you're talking about is statistics. Statistics are not irrefutable facts. They're data points from a report, and they are often incredibly easy to manipulate depending on how the macro is assessed. Usually it's impossible to gather stats over large, complex, chaotic populations. Instead samples are taken and applied to the whole and interpolated in-between. And in that interpolation an incredible amount of manipulation and even pure laziness is possible. It's possible to misrepresent the error bars of your conclusion. It's possible to leave out important details. It's possible to be selective about your time frame. There are a myriad of ways to mess up or screw up statistics. The more chaotic the system, the more difficult it is.
Every single example mentioned by the GP isn’t just a statistical measure; they are measures of wildly political (as in, defined by humans in a deeply imprecise manner) issues:
> - whether crime is up or down
Which kinds of crimes? In which political boundaries? In which reporting period? Did definitions change? Is reporting down because of ineffective policing? Is reporting up because of effective policing? The statistical games played with crime stats are criminal.
> - whether the earth is warming or not
There is a reason the phrase “global warming” went out of fashion in favour of “climate change”. Warming up how much? Over what time period? With what error bounds? Assuming which runaway processes? In which areas? Due to which causes? What are the error bounds around the sign of the change?
> - how many people live in poverty
The government literally draws a line in the sand and declares anyone below a certain income level is living in poverty. Who set the level? Why did they set it there? What is the standard of living at that income level? In which areas? How long do people live in poverty? What, if anything, prevents them from moving upward? What is their effective standard of living after government programs and charitable giving are taken into account?
> - what the rate of inflation is
This is literally defined by bureaucrats at central banks. Inflation according to which index? How were the index components chosen? How are the index components weighted? Over what time period? In which areas? Even the concept of “inflation” is highly suspect and basically incoherent.
> - how much social security or healthcare costs
Over what time period? How did the demographics change? How about inflation? Where did the cash flows go and how did they net out? Which purchasing regimes were in place? How did the programs change? What was the quality of the services?
If, in an argument, you want to go back to the data and do different or better statistics on it then by all means. I would _love_ to have a disagreement with someone that went in that direction and we could discuss the intricacies of how to interpret the information that we have. I have my own gripes about the statistics done by various groups, with changing the inflation calculation being a recent example of the bad side of this: https://www.nytimes.com/2022/05/24/technology/inflation-meas...
However, I think the key point here still stands. Most disagreements (at least in my experience) are not reaching this level, and are instead diving towards anti-intellectualism and dismissing statistics and data interpretation wholesale.
Fully agree. Statistics are not global irrefutable facts about society; it's literally just one person or a group of people computing something and claiming it represents society as a whole, or a journalist saying he/she read that figure in a reputable source. Even from a mathematical point of view, statistics are incredibly hard to get right, and even before that, reality cannot really be measured and put into numbers.
The problem I have with fact checkers, rather than "context expanders" is that their end product is a simple answer for things that may not be trivial. There may not be a clear binary answer.
> whether crime is up or down
Was the reporting consistent between the two timeframes (apathy, directions from police station, etc)? Was the reporting system fully operational both timeframes being compared? Is the reported vs actual crime ratio the same between the two timeframes?
> how many people live in poverty
> what the rate of inflation is
Is the metric calculated the same way between the two timeframes? If not, what's the justification for the new metrics? Is the answer the same if the old and new metric is used with the same data?
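To illustrate how much the metric definition matters, here is a small hypothetical sketch: the price changes and basket weights below are made up, but the same year of price data produces visibly different headline "inflation" numbers depending on which weighting you choose.

    # Hypothetical price changes for a handful of categories over one year,
    # and two different (made-up) basket weightings. Same data, different "inflation".

    price_change = {"rent": 0.08, "food": 0.05, "fuel": 0.12, "electronics": -0.04}

    weights_a = {"rent": 0.40, "food": 0.30, "fuel": 0.20, "electronics": 0.10}  # renter-heavy basket
    weights_b = {"rent": 0.20, "food": 0.25, "fuel": 0.15, "electronics": 0.40}  # gadget-heavy basket

    def weighted_inflation(changes, weights):
        # Simple weighted average of category price changes
        return sum(changes[k] * weights[k] for k in changes)

    print(weighted_inflation(price_change, weights_a))  # roughly 6.7% with the first basket
    print(weighted_inflation(price_change, weights_b))  # roughly 3% with the second basket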
It’s not realistic, or IMO necessary, to put more into it than the original claim does, besides bringing actual sources to the table.
If the original claim is that crime is really up but it doesn’t show in the official figures because of subtle factors X Y and Z, then sure, a fact check saying this is wrong needs to dive in and explain why those factors don’t account for it.
But if it’s just “crime is up 87% since Biden took office” then “actually, crime is down N% in that period, see link from relevant stats agency here” is fine.
The world we experience and the language we use to describe it doesn't have axioms like math, so it's no surprise people routinely disagree about these topics. Most of the subjects in your list contain a great deal of nuance. For example:
> whether crime is up or down
What counts as "crime"? Is it based on a legal definition or a moral defintion? What jursidictions does this include? What time period are we using as a baseline? Do we account for the fact that different jurisdictions measure crime differently and do we use the raw reported numbers or adjust for underreporting in the statistics? Do we weight our consideration by the severity of the crime or is it just the number of recorded offenses? The laws themselves may have changed over the period of consideration, so how do we account for that?
These questions don't have objective answers, so it's unsurprising people disagree.
Every single one of your points is not boolean and depends on the definition and the data you include and exclude. For each you could easily find studies and statistics in either direction. The fact that this is apparently not obvious to you proves the point that all fact-checking is inherently biased and depends on the subjective opinions of the checking person.
People who study statistics are pretty good at saying "look, that data set was probably gamed, I would have done it <different way>", or "that conclusion does not follow from the data presented".
It's no different to someone claiming on twitter that they are a great programmer who can fix twitter's search in a weekend who then has to tweet for suggestions on how to write a search feature in javascript. People familiar with the subject matter can see right through your bravado.
I'm so tired of people with no expertise on anything insisting that people who have clear expertise "didn't think of trivial point A that just came to mind" as if some of these fields aren't centuries old and have been around the block a few times.
It's similar to the teenager insisting "you just don't get it mom", but like, mom totally gets it, she was a teenager once too. And while there are occasions when mom might not get it, like how she didn't grow up in a world with social media so she might not be able to help you through that, but she ABSOLUTELY gets that it feels like your world is ending when your first love leaves you, and in fact it is YOU who does not "get it" that you will move on eventually.
Not sure what you are trying to say - my point was that, e.g., the question "is crime up or down" does not have a yes/no answer. Depending on the input, you can easily create a statistic pointing in any direction. I think abtinf elaborated better on that here: https://news.ycombinator.com/item?id=42628198
My personal highpoint in using statistical methods was probably implementing an analysis of variance for thousands of lab values (https://en.wikipedia.org/wiki/Analysis_of_variance).
Most experts will not give simple answers to simple questions because they see the question itself as ill-posed. Theses could be written about "Is crime up or down?" GP's claim is that this has a simple answer that can be checked. The bigger issue isn't whether a dataset is statistically valid but which data would even be relevant to a particular underspecified and vague question.
All of these sorts of facts are manipulable and/or not easily knowable.
> - whether crime is up or down
Manipulable by the agencies that keep track of and publish those stats. Governments often manipulate these.
> - whether the earth is warming or not
There is a huge amount of controversy in climate science. Check the "Climate Gate" files from 2009 for example. Check out the controversies over weather station siting for another.
> - how many people live in poverty
Poverty levels vary with time and by country, and are typically set by governments. People often disagree as to what defines poverty. Poverty stats are manipulable.
> - what the rate of inflation is
You should look into what Argentina did around 2012.
> - how much social security or healthcare costs
The figures from the budget are not controversial. How much healthcare spending is wasteful is a completely different matter. Quality of healthcare is also very much subject to debate.
> These are all verifiable, measurable facts, and yet, we somehow manage to disagree.
They are not easily verifiable because they are mostly susceptible to manipulation. Therefore it's not surprising that people disagree.
> [...] And that leads to an total inability of having any discussion at all.
No, it means that discussion might have to start with the fact that there is disagreement as to facts and then you can have an open discussion about why, what is being done to prevent consensus forming as to those "facts", what needs to change to make that possible, etc.
No need to imagine, it's enough to look into non-Euclidean geometry (obtained by excluding Euclid's fifth axiom), non-standard models of geometry, or reverse mathematics (studying which axioms are necessary for a specific theorem to be provable).
I think the idea that a) people lack nuance now or b) that it’s simply social media’s fault is the exact same kind of lack of nuance that you seem to be objecting to.
Nothing I’ve seen suggests that mass media or mass propaganda contains less nuance now versus any other time. Propaganda of all forms (regardless of whether delivered by newspaper, radio, tv, or facebook) has always been a blunt instrument.
Exactly. We aren't capable of discussing shit online, which is unfortunately where the bulk of our culture's negative discourse is occurring. It's not the posts, even - it's the comment sections.
I don't care if someone shares propaganda, I care about the discussion that happens after they share it, in the comments. When was the last time on FB/IG that you saw someone share some propaganda (true or untrue, doesn't matter), and looked in the comments to find someone correct them, and then the two had a reasoned conversation wherein they traded perspectives and ultimately came to a healthy understanding of one another even if they disagreed?
Do you see that sort of conversation, or do you just see a shitload of people yelling at each other?
Nuance is dead with the short posts. "whether crime is up or down" may not be possible to post about realistically. On what timescale, which crime, has the reporting about this crime changed, has the classification changed, is it about confirmed crime or reports, etc. etc.
Specific crime is such a complex system now that we can (both accidentally and maliciously) post factual information that presents a small fragment of the issue, sometimes helpful, sometimes misinforming for the context we're talking about.
Aside from maybe "whether crime is up or down" (because of under-reporting), everything else can be objectively measured. The measurements might not fit with everyone's specific circumstance (eg. earth is warming as a whole but it's unseasonably cold where you live), but that's not a reason to throw up our hands and say "those things are actually not verifiable measurable facts within any useful definition".
The only items in the list that look reasonably easily answerable are how much social security costs and whether the earth is warming. Even the last one wouldn't be considered a good question to an actual scientist because of how vaguely it is phrased.
The earth has been warming. It's not a verifiable fact that it's still doing that today (you used present tense) or will continue into the future until the future comes and we've measured it. By the way, warming over what time period? It's colder now that it was at some times in its past so you could say we're in the middle of a longer term global cooling.
And of course you have to incorporate the Earth's interior, which is cooling. Are you sure that "fact" doesn't silently ignore almost all of the Earth?
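As a purely synthetic illustration of the "over what time period" point (the numbers below are made up, not real temperature data), fitting a trend to the same series over two different windows can flip the reported sign:

    # Synthetic, made-up "temperature anomaly" series: a slow long-term decline
    # with a sharp rise in the last 150 "years". Which trend you report depends
    # entirely on the window you choose to fit.

    import numpy as np

    years = np.arange(-10000, 1)                       # 10,001 "years", ending at "now"
    temps = -0.0001 * (years + 10000)                  # slow cooling: 1 degree over 10,000 years
    temps = temps + np.where(years >= -150, 0.008 * (years + 150), 0.0)  # sharp recent rise

    def trend_per_century(y, t):
        slope = np.polyfit(y, t, 1)[0]                 # least-squares slope, degrees per year
        return slope * 100

    print(trend_per_century(years, temps))             # slightly negative: "long-term cooling"
    print(trend_per_century(years[-150:], temps[-150:]))  # strongly positive: "rapid recent warming"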
Rarely do any two people experience the same inflation rate, as it heavily depends on each buyer's basket of goods. Sure, you could in theory measure each person's inflation rate, but what for?
I strongly disagree that the rate of inflation is a fact, or that it is beyond debate. The mechanism for calculating it officially has changed drastically over the decades, and always in ways that reduce the official rate. It’s a politicized metric.
The Earth is warming, but how much of it is caused by humans is under debate. The Earth is still coming out of an ice age, so it would be warming even without humans.
Also, the more important question is: how much will it accelerate based on our emissions? If there are no positive feedback loops, it would only warm up 1C maximum, no matter how much more CO2 we will emit. But because of the positive feedback loops (warmer earth -> more water evaporating -> more warming), this warming can trigger a 4-5C further warming. The feedback loops are just theoretical(you can't measure them empirically) and the quality of the estimations is based on our understanding and modelling of the climate.
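To see how those numbers hang together, here is the standard back-of-the-envelope amplification identity in a few lines of Python. It is only arithmetic on the figures quoted above; the feedback fraction is an input chosen to reproduce them, not a measurement:

    # Geometric-series amplification: if direct warming is dT0 and a fraction f
    # of every increment is fed back as further warming, the total converges to
    # dT0 / (1 - f). Illustrative only; f is a made-up input, not an estimate.
    def total_warming(dT0_c, f):
        assert 0 <= f < 1, "feedback fraction must stay below 1 for a finite sum"
        return dT0_c / (1.0 - f)

    print(total_warming(1.0, 0.0))   # no feedbacks -> 1.0 C of warming
    print(total_warming(1.0, 0.75))  # f = 0.75     -> 4.0 C, the range quoted above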
We've had in the last 100 years a temperature swing that usually takes a thousand years or more. We've already seen greater than +1C of temperature increase compared to before widespread use of fossil fuels.
Is that caused by humans? Sure that's up for debate, in the same way whether tobacco causes cancer is. People are willing to be wrong when being wrong gives them money/status/utility.
> We've had in the last 100 years a temperature swing that usually takes a thousand years or more.
A cute xkcd is not a time machine. You rely here on indirect measurements such as tree rings or ocean sediments. You can't verify whether there were any other factors at play over the millennia, and I seriously doubt that these methods can even theoretically be +/- 0.5 degree C accurate. You may believe that, but you can't verify it unless you travel into the past. Besides, 1000 years are NOTHING on the scale we are looking at. If you live anywhere north of the 40th parallel, the place you now sit was probably covered by an ice sheet without a living thing in sight only 10,000 years ago, and again 100,000 years ago. There is no way you can divide that timescale into thousands of years and measure every one of them with high enough precision to compare with the present. The bold claims of climate science have lost any scientific humility.
What about them, and how was your debate class?
Can you measure the time of day an organism died with radiocarbon dating? This rhetorical question is meant as a hint.
Did you know how they calibrated radiocarbon dating at first? They used wine bottles from French cellars, because they have a year printed on them. That's scientific verification, because belief doesn't do it.
> If there are no positive feedback loops, it would only warm up 1C maximum, no matter how much more CO2 we will emit.
GHG emissions are still increasing. If we assume that temperature increase is only linear in the amount of atmospheric GHGs, that means temperature will continue to increase, not remain flat.
Little-known fact (I am still amazed how people don't know the mechanics of global warming...): CO2's effect in the atmosphere is logarithmic in its concentration. That is because CO2 can only block one band of light, so at some point you approach saturation of that effect. That's why we keep talking about a "doubling of CO2": because it's a logarithmic function.
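For what it's worth, the usual way that logarithmic point is written down is the simplified forcing expression from Myhre et al. (1998): dF is roughly 5.35 * ln(C/C0) W/m^2. A quick sketch of what that implies, with 280 ppm assumed as the pre-industrial baseline and the other concentrations purely as example inputs:

    # Simplified CO2 radiative forcing: dF ~ 5.35 * ln(C / C0) W/m^2.
    # Each doubling of concentration adds the same ~3.7 W/m^2 of forcing.
    import math

    def co2_forcing_wm2(c_ppm, c0_ppm=280.0):
        return 5.35 * math.log(c_ppm / c0_ppm)

    for c in (280, 420, 560, 1120):
        print(c, "ppm ->", round(co2_forcing_wm2(c), 2), "W/m^2")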
But yes, the temperature will increase slightly because of CO2 emissions. That triggers more warming due to feedback effects though, and those are hard to quantify, and more scary.
The level of crime is pretty hard to measure. You can measure reported crime, but crimes are reported at different rates in response to complicated incentives.
How much the earth is warming depends on what you measure. Do you measure atmospheric temperature? Ocean temperature? And of course how much the world will warm is dependent on complicated models with tons of inputs.
How many people live in poverty depends on what your threshold for poverty is. There's a "Federal Poverty Level", but cost of living varies by significant amounts across the country.
The rate of inflation is highly dependent on the basket of goods measured and how improvements in goods are measured and so on. There are easily a dozen different measures of "inflation" and they're all reasonable and carefully considered, but none of them is the ground truth.
It is of course relatively easy to measure Social Security inflows and outflows, but usually when we talk about the "cost" of programs like this, we mean something like the net cost, which incorporates lots of societal effects. Also the interpretation of the accounting concept of the Social Security Trust Fund, despite being a fairly simple concept, has significant camps with diametrically opposed views.
With the exception of fiscal cost and global warming, those are all quite subtle, actually. $Employer spends rather a lot of time replicating official inflation numbers; it's not trivial.
Yes (any more detail would be telling), ahead of time even, but my point is that we're mimicking the government's numbers, not actually estimating a "true" value.
Could it be that nowadays we have so much more access to information that, where we maybe agreed on facts in the past, those facts were really coarse and we did not have much detail on them, so it was easier to agree?
No, we don't have verifiable, measurable facts for those areas. Standards and definitions vary by location and change over time. Don't forget the corruption and manipulation of numbers to achieve desired outcomes.
Sadly the consensus was abused to push narratives once too often, instead of actual leadership guiding people toward concepts, understanding, and consensus building. Our leaders forgot how to lead, or got too lazy, corrupt, dogmatic, or complacent to care; they abused the levers, and now it's probably going to take a generation for society to organize new trusted mechanisms.
Crime statistics/reporting are extremely gamed. It took a friend having a heinous crime committed against her by a large group, on a side street just off downtown Santa Cruz, with no reporting, for me to realize just how bad it is. We've probably all at this point had crimes committed against us that the police didn't document, which then destroys our faith in crime statistics.
I'm a super hippie. But there was a lot of manipulation and playing fast and loose by the earlier global warming folks to try to get their message across, breaking people's trust, and you are never going to get that trust back with models/projections, no matter how good or accurate the assumptions behind them, once the trust was lost.
Things like using COVID funds to KNOWINGLY, TEMPORARILY reduce child poverty, with the goal of having INCREASED CHILD POVERTY statistics in the near future so that they could be used as a policy weapon again, just do damage and make poverty statistics more meaningless. Just politicians abusing and manipulating instead of leading, breaking down more levers.
And don't get me started on how gamed the 'rate of inflation' was by this administration. You are never going to convince people WHO CAN'T AFFORD TO LIVE and are in CONSTANT distress that 'things are getting worse more slowly' is good. Sorry, you are going to have to lead and convince people on that one, not lazily throw numbers at them. Again, it's a lack of leadership.
See how the same things can be interpreted differently by different people, and how much of it comes down to these numbers having been abused, used for manipulation, or reached for out of laziness instead of leadership?
Source: Other than my personal crime experiences it's from living in a red state and talking with people why they support crazy stuff or reject what seems like common sense to me.
This is because we have started accepting kritik-style debates as serious in the last two decades. Kritik used to be considered a bad faith technique but nowadays it’s considered a smart “trick” to win arguments. It’s when a debate participant doesn’t engage in debating the subject on its own merits, but instead challenges the premise of the question or a premise of the opponent’s position.
Crude example:
- I believe climate change is exaggerated because the Summers haven’t gotten notably hotter.
- If you say that, then you are unaware and uninformed. You must be watching Fox News.
Another:
- I think we are in a cost of living crisis, because every year, more US men are in crippling debt.
- Wow, look at your use of ableist misogynist language! Way to pretend women don’t suffer with debt 13% more than men!
Another:
- As society, we should be respectful of others online, because internet is an important (and sometimes only) social network some people have.
- Social media is unnatural, harmful and should be banned.
These are three failed debates, in each there is no clash of opinions, and no side provided meaningfully stronger arguments to win the debate. In fact, the two debate opponents stated opinions on different subjects entirely. And yet nowadays, this is how most people debate, it is considered appropriate, even in academia. In politics, this technique is considered a total winner.
So it is a bit like refusing to engage with the basic axioms when arguing mathematical proofs and just saying “math is for nerds”. We have totally accepted that as normal, as a society.
You are being hoisted by your own petard. Lying with statistics is a very common thing and it is, in fact, a cliche. I'm surprised you brought up the crime thing; there are so many problems with it. Also note, one way to reduce "crime" is to just make many crimes legal, but that does not change normal people's view of crime. What kind of statistics were used to decide that Iowa would go for Harris with an 18 point jump?
Only one of those questions (earth warming rate) is clearly defined and scientifically addressable, as all the others have fairly subjective definitions (what is poverty? what is crime? how do we measure inflation objectively? etc.)
Even with warming, a 'fact' would be a data point at a particular time and location, assuming your sensor was correctly calibrated. You have to look at millions of data points across the entire globe for decades to get a sense of the current warming rate (which could be negative, flat, or positive). You have to do complicated statistics on all those data points to get a warming rate, and you'll have error bars on that, and the end result is not a 'fact' so much as a bounded estimate (+0.1 C / decade +/- 10% is plausible for the average surface temperature change averaged over the entire planet).
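A minimal sketch of the kind of statistics being described: fit a straight line to annual anomalies and report the slope with its uncertainty. The anomaly series below is made up purely for illustration; only the procedure matters:

    # Fit a linear trend to annual temperature anomalies and report a bounded
    # estimate (slope +/- 1 sigma) rather than a single "fact". Data are fake.
    import numpy as np

    years = np.arange(2000, 2025)
    anoms = 0.02 * (years - 2000) + np.random.default_rng(0).normal(0, 0.1, years.size)

    coeffs, cov = np.polyfit(years, anoms, 1, cov=True)
    slope_per_decade = coeffs[0] * 10
    sigma_per_decade = np.sqrt(cov[0, 0]) * 10
    print(f"trend: {slope_per_decade:+.2f} +/- {sigma_per_decade:.2f} C per decade")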
We can't even say with real certainty that 2100 will be warmer than today, as a supervolcano, asteroid impact, or global nuclear war could reverse the trend.
I think prediction markets (polymarket et al) get this right. Every question as vague as "is the earth warming" has resolution details which define some way to resolve the question such that all parties (even those with economic interest to disagree) have trouble disputing the outcome.
For a question like the earth warming, the resolution criterion would usually be something like "according to ___.org's numbers on date Y", in which case the final prediction becomes: will the average temperature over the period 2016-2026, as reported on ___.org, be greater than some stated threshold? That is a bit different from the original question, but easier to arbitrate.
You sort of made your own counterpoint by giving a list of statistics that are far from objectively measurable and whose result and meaning depends a lot on the details of what exactly you're measuring and how.
Take inflation for example. Measure inflation in terms of gold, broken arm repairs, hamburgers, or houses, and each will give you wildly different figures. The government's preferred index prices a basket of goods, but the particulars of the basket may not match you or anyone you know, and various corrections are necessary but are themselves subjective. An often disputed one is the correction for goods substitution -- if steak goes up, people buy less steak and more rice. The government's current preferred model chains these corrections, even though in reality you can only replace so much steak with rice before it's all rice and no steak. These indexes also have corrections for goods increasing in quality -- the price went up, but it's because the thing got better, not because of inflation. Etc.
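A toy illustration of the substitution point, with made-up prices and quantities: a fixed base-period basket (Laspeyres) and a current-weight (Paasche-style) link, the building block of a chained index, report noticeably different "inflation" from the exact same price changes:

    # Made-up two-good economy: steak jumps 40%, rice barely moves, and buyers
    # substitute toward rice. Base-weight and current-weight indexes disagree.
    def basket_cost(prices, qtys):
        return sum(p * q for p, q in zip(prices, qtys))

    p0, q0 = (10.0, 1.0), (10, 20)   # base year: (steak, rice) prices and quantities
    p1, q1 = (14.0, 1.1), (6, 30)    # next year: buyers swapped steak for rice

    laspeyres = basket_cost(p1, q0) / basket_cost(p0, q0)  # old basket at new prices
    paasche   = basket_cost(p1, q1) / basket_cost(p0, q1)  # new basket at both prices

    print(f"fixed-basket inflation:     {100 * (laspeyres - 1):.1f}%")  # 35.0%
    print(f"substitution-adjusted link: {100 * (paasche - 1):.1f}%")    # 30.0%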
yadda yadda, I don't mean to import the debate here but the point is that there is something to debate particularly when the statistics don't match a person's lived experience -- when the things they need to live are rapidly increasing in price-- especially when politicians are abusing the stats beyond the breaking point (I think of the time when the Biden administration was crowing about something like the rate of inflation increase no longer increasing. What a jerk! ... or is that a snap? ;) ).
And even when the fact itself isn't really in dispute there is often plenty of room for reasonable people to debate the implications or relevance.
When people confuse these subjective issues for "basic axioms" and then impose their understanding as "facts", it's extremely problematic and highly offensive to people whose experience has taught them otherwise.
Depends on the definitions; what is or isn't a crime changes over time in a given society. Taking "crime" as an aggregate conflates many different possible crimes and relies on a subjective weighting of their relative severity. Crime rates can vary wildly between various subgroups of the population. We can only meaningfully compare rates of crimes that are actually detected and result in law enforcement actions; an unknown and broadly unknowable amount of crime is overlooked.
> whether the earth is warming or not
Most of the disagreement is about the rate of change, the predicted future rate of change, the predicted impacts of those change, the extent to which we can do anything about it, and especially about the relative importance of the predicted impact vis-a-vis the effort that might be required to do something about it.
> how many people live in poverty, what the rate of inflation is
"Poverty" is generally measured in terms of income versus an arbitrarily decided baseline. The baseline at best varies over time specifically to remain in "real" terms, i.e. adjusted for "inflation" which is calculated on a basis which may bear no relation whatsoever to the rate of change in costs practically faced by the poorer segment of the population. Furthermore, income is nowhere near the entire picture of wealth, which in turn is not a full picture of economic well-being. Inflation measures are designed with "hedonic quality adjustments" (https://www.bls.gov/cpi/quality-adjustment/questions-and-ans...) in mind which involve subjectively putting numbers on a wide variety of factors - they're literally trying to measure "how much better" a cell phone becomes if the screen resolution increases, so that they can decide whether the increase in price is justified; and in many cases they just resort to assuming that the initial price is fair relative to existing devices when the new one hits the market.
>How much social security or healthcare costs
Again, this has to be considered in the context of inflation adjustments, because the value of currency is not objective. World currencies are not a unit of measurement for value; it's just another thing that you can exchange for other valuable goods and services. If they were objective, there would be no reason for exchange rates to vary over time; they vary because, among other things, of varying relative faith in the issuing governments, and varying supply (which governments can generally control more or less at will).
Aside from which, there are valid reasons why the per-capita costs might vary due to demographic changes. The disagreements I've seen haven't been about the bottom-line number in (say) the American federal government budget; they're about how to contextualize that number. Are per-capita costs changing? Are your personal costs changing? Are the costs of people like you changing? (Those answers could be different for many reasons.) How do they compare to costs in other countries? Is that justified? Is it explained by extenuating circumstances? How shall we compare the corresponding quality of care?
As far as I can tell they gave up moderation a few years ago, at least every time I report someone spamming about "Elon Musk giving away a million dollars if you click this shady link" or the like I invariably get told it meets their "community standards" and won't be removed. I guess technically I haven't seen a female nipple there though so, job well done?
They also allow the scammiest ads for products that are 100% obvious frauds - pure distilled snake oil. It really brings meta’s image to the dirt. They’re like an online super market tabloid these days.
Don't worry, there will be community notes and some form of eu/us/state notes.
The paradigm has changed; moderation has to be separated from censorship and has to be transparent.
I would love to hear/read Audrey Tang's take on this, as the CCP has been heavily involved in manipulating Chinese public opinion.
this is good. the automated systems were getting increasingly byzantine, with layers of rules trying to patch edge cases, which just created more edge cases.
I was recently browsing FB for the first time in months, and didn't see a peep from fact checkers, despite all the garbage-tier content FB is forcing into my feed, including things like "see how this inventor's new car makes fossil fuels and batteries obsolete". I spent most of my time on the site clicking "hide all from X", where X is some suggested page I never expressed interest in. The "shorts" on the site are always clickbaity boob-featuring things that I have no interest in either. The site is disgusting and distracts from any practical use, i.e. keeping in touch with friends, which is what I used to use it for.
It's funny how facebook got so political all the normies left, then they downranked political content so much that the political people left too. Facebook is a ghost town now.
Going back even further, one of the initial draws of Instagram pre-acquisition was that you could escape the toxicity of trolls and other socially unproductive behavior on Facebook.
Meta has a big problem coming up. They'll get to the point where they won't be able to hide Facebook and Instagram's lackluster appeal. I suspect we'll start seeing advertisers peel away, followed by a few savvy investors first. Let's just hope this doesn't trigger a market-wide correction.
>Let's just hope this doesn't trigger a market-wide correction.
My flippant, "I hate social media and think it was largely a mistake and needs to go away," view is to cheer for that correction. That said, I understand that I'm very biased here and might be ignorant.
Is there a reason I shouldn't cheer for such a correction?
I'd cheer for a correction if it were limited to social media valuations. My fear is that social media tanks followed by people broadly pulling money out of the market.
To me facebook seems a lot quieter but instagram is as busy with stuff as ever. We definitely have differences of opinion on that. Especially if TikTok is shut down (fingers crossed) most people will fall back on Instagram Reels.
Facebook and Instagram's (pre-Reels) strength was that it was easy to have accounts of all sizes engage and be engaged with. Whether you have 10 or 100000 friends/followers/etc, the barrier of entry to have some engagement wasn't high and it encouraged people with all sizes of accounts to post, comment, and "like". Social networking felt much more intentional with these platforms.
Instagram Reels certainly has a lot of activity, but its activity is driven by users passively consuming popular and trending media. This isn't a bad model, but it's a shift away from intentional social networking.
Ultimately, I think Reels is more evidence that Meta has had a user engagement problem for a while. Their current strategy for Instagram seems to be to hope passive consumption keeps everyone in the app, and to fall back on the "town square" model for comments as a means of engagement.
They A/B test Reels in Facebook. My mother's Facebook has Reels in it. Not mine. Soon, the apps themselves will lose any sense of history and will morph into whatever new content format is the favourite. All you need is an account with Meta. The content will find you. Zuck has that covered for you.
Doesn’t the second sentence explain the first? I can’t count the number of times I’ve heard a variation of, “I hate Facebook [newsfeed]. I only use it for Messenger/ niche Groups/ local events/ Marketplace.”
Facebook has positioned itself so that it’s almost a necessity if you want to be involved in your community, however you define it. You may hate Zuck, moderation, and ‘the algo’ and yet you can’t get away from Meta the company. And millions of other users feel the same way.
Between that and people getting over constantly sharing what they did on vacation and what they cooked for breakfast or had at brunch, it is a lot quieter. At least Zuck chose to bring back political arguments as the mainstay right after the election rather than right before. It will be fairly quiet for a few years IF they keep up their efforts to limit Russian propaganda bots and don't add a bluecheck to promote them instead.
Don't know if they mandate it, but I know a few people who use either names that are a slight modification of their real name, or completely made up names.
Facebook has pretty advanced features that cross check your digital signatures like IP address, browser, registered email, etc to prevent sockpuppeting. This is especially true if you want to make ads with your account.
In summary, FB was pressured in 2016 to act on the "foreign influence" narrative hysterically parroted by the press, politicians, and leaders. FB bowed to the pressure. Now that the press has lost all credibility, along with the X purchase, it can no longer persuade Meta to "fact check." FB is in a better spot to follow the X model of moderation. People arguing this is a bad move are ignoring the fact that FB was a censorship hotbed for the last four years.
It was evident that Mark Zuckerberg / Meta would have to once again "adapt" to another Trump presidency, but this is much more explicit than I expected, wow.
I know there has been a lot of ink spilled trying to persuade that Technology can't solve our deeper problems and Technologists are too optimistic about having real-world impact etc. etc.
But I think community notes (the model, not necessarily the specific implementation of one company or another) is one of those algorithms that truly solve a messy messy sticky problem.
Fact-checking and other "Big J Journalist" attempts to suppress "misinformation" is a very broken model.
1) It leads to less and less trust in the fact checkers which drives more people to fringe outlets etc.
2) They also are in fact quite biased (as are all humans, but it's less important if your electrician has socialist/MAGA/Libertarian biases)
3) The absolute firehose of online content means fact checkers/media etc. can't actually fact check everything and end up fact checking old fake news while the new stuff is spreading
The community notes model is inherently more democratic, decentralized, and actually fair. And, this is the big one, it works! Unlike many of the other "tech will save us" ideas (e.g. web3), it is extremely effective and even-handed.
I recommend reading the Birdwatch paper [0], it's quite heartening and I'm happy more tech companies are moving in that direction
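For anyone who wants the flavor of what's in that paper without reading it, here is a deliberately tiny sketch of the bridging idea: model each rating as a note intercept plus a rater-note viewpoint term, and score notes by the intercept that remains after the viewpoint term has soaked up partisan agreement. The data, the single latent dimension, and the hyperparameters below are all invented; this is not the production algorithm:

    # Toy bridging-style matrix factorization: rating(u, n) ~ mu + b_u + b_n + f_u * f_n.
    # b_n is the "helpfulness the viewpoint term can't explain"; only notes with a
    # high b_n would be shown. Plain SGD, one latent factor, fabricated ratings.
    import numpy as np

    rng = np.random.default_rng(1)
    ratings = [(0, 0, 1.0), (1, 0, 1.0), (2, 0, 1.0),   # note 0: helpful across raters
               (0, 1, 1.0), (1, 1, 0.0), (2, 1, 1.0),   # notes 1-2: mixed reception
               (3, 2, 1.0), (4, 2, 1.0), (5, 2, 0.0)]

    n_users, n_notes, k = 6, 3, 1
    mu, lr, reg = 0.5, 0.05, 0.03
    b_u, b_n = np.zeros(n_users), np.zeros(n_notes)
    f_u, f_n = rng.normal(0, 0.1, (n_users, k)), rng.normal(0, 0.1, (n_notes, k))

    for _ in range(2000):
        for u, n, y in ratings:
            err = y - (mu + b_u[u] + b_n[n] + f_u[u] @ f_n[n])
            b_u[u] += lr * (err - reg * b_u[u])
            b_n[n] += lr * (err - reg * b_n[n])
            f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - reg * f_u[u]),
                              f_n[n] + lr * (err * f_u[u] - reg * f_n[n]))

    print("viewpoint-adjusted helpfulness per note:", b_n.round(2))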
But community notes have been around since before Musk bought twitter and they have not had any effect at reducing the amount of outright falsehoods passed off as "news" on that hellscape. Why do people keep championing it as a success story when it demonstrably hasn't helped?
Frankly, if it worked, it would have been removed by now. It's "controlled opposition" basically.
Community notes is maybe the only good thing to happen to the microstructure of social media in years so I'm vaguely in favour of this.
The official fact checking stuff is far too easily captured, it was like the old blue checks — a handy indicator of what the ancien regime types think.
The fact-checking that Meta is ending, which put "misinformation" disclaimers on posts, is NOT the same as content moderation, which will continue.
A lot of comments in this thread reflect a conflation of these two, with stuff like "great! no more censorship!" or "I was once banned because I made a joke on my IG post", which don't relate to fact-checking.
Zuck's video claims Europe has been imposing a lot of censorship lately, which is a nicer way for him to say "we have done a crappy job at stopping misinformation and abusive material, got fined A LOT by countries who actually care about it, and that's somehow not our fault".
Community notes is good news, and something I was expecting to disappear from Twitter since Elon bought it a couple years ago, especially since they have called out his lies more than once. Hearing Facebook/Instagram/Threads are getting them is great.
Then he claims "foreign governments are pushing against American companies" like we aren't all subject to the same laws. And actually, it wasn't the EU who prohibited a specific app alleging "security risks" because actually they can't control what's said there; it was the US, censoring TikTok.
Perhaps we Europeans should push for a ban of US platforms like Twitter, especially when its owner has actually pledged to weaponise the platform to favour far-right candidates like AfD (Germany) or Reform UK. And definitely push for bigger fines for monopolistic companies like Meta.
Why should social media operators be responsible for "stopping misinformation" in the first place? That sounds a lot like the logic that was used to justify smashing the printing presses in Gutenberg's day, not to mention by countless villains of dystopian sci-fi (e.g. Fahrenheit 451), in turn based on other real-world concerns.
I think I should have a right to let others lie to me, and decide for myself if I believe them. In the alternative where someone prevents me from hearing it, that other person is deciding for me. Why should I accept that other person as more qualified to do my own thinking?
It's really strange to me how calls for banning "misinformation" in the US seem to come from the same political direction as complaints about controversial books being taken out of educational curricula.
In all cases, what they mean is that they want opinions or statements that go against whatever ideology or political faction they belong to to be censored.
Humans tend to strongly identify with such things and motivate their moral reasoning to fit.
I would wager Mark and other sharks like him would find this entire thread very amusing. They have no ideology other than self-interest; nothing they do serves any purpose other than their own.
Off topic but related to holding communities to account: I wish there were a way to metamoderate subs on Reddit. The Texas subreddit has been co-opted by a moderator that bans anyone who criticizes their editorial decisions or notices antagonism trolls taking over the sub.
It's a welcome move as this "fact checkers" thing was doomed to fail, mostly because "who decides what the truth is, and who fact checks the fact checkers?".
Sad thing is, this move isn't motivated by Mark Zuckerberg having a eureka moment and now trying to seek out the truth to build a better product for humankind.
This move is motivated by Mark's realizing he is on the wrong side of American politics now, being left behind by the Trump/Musk duo.
It's just cheaper. That's the most important thing for corporations. It's also harder to accuse them of bias. Personally, I'm a little dubious about the effectiveness of fact checkers on people's opinions. If someone is a dullard who is willing to believe the most absurd propaganda or every conspiracy theory that exists, a fact checker won't solve the problem; they are used to being told that they are wrong. Of course they can just shadowban this content, but in the end they profit from it.
Zuckerberg knows which way the winds are blowing in the US Capital and is ensuring he is aligned with them so to avoid political blowback on his company.
I suspect the changes to the fact checking / free speech will align with Trump's political whims. Thus fact checking will be gone on topics like vaccines, trans people, threats from immigrants, etc.
While the well documented political censorship at Meta affecting Palestine will remain because it does align with Trump's political whims...
"We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate."
"There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"
Isaac Asimov - Hitting the high notes even after 30 years from the pulpit.
Mark doing what Mark needs to do to keep that Meta stock elevated.
Zuck still dreams of his despotic dictatorial empire where he can enslave millions and make them all trans via Police enforcement. This move is just to stop bleeding users to X.
I’m less concerned by the change of fact checking to community notes, because meta had often neutered the ability of their fact checkers anyway.
What I am concerned about is their allowance of political content again.
Between genocides and misinformation campaigns, meta has shown that the idea of the town square does not scale. Not with their complete abandonment of any kind of responsibility to the social construct of its actual users.
Meta are an incredibly poor steward of human mental health. Their algorithms have been shown to create intense feedback loops that have resulted in many deaths, yet they continue down the march of bleeding as much out of people as possible.
Completely agree. Instead of one giant town square ("Facebook") what we would benefit from are 1000 smaller ones ("Facebook competitors") and some way to "travel" between them. That is a smaller more human scale that can be responsibly governed. It does not create hyper-billionaires though.
> Starting in the US, we are ending our third party fact-checking program and moving to a Community Notes model.
The Community Notes model works great on X at dealing with misinformation. More broadly, this is a vindication of the principle that putatively neutral "expert" institutions cannot be trusted unless they're subject to democratic checks and balances.
There is a long history, but the short of it is that before Section 230, platforms that moderated user content faced potential liability. Stratton Oakmont v. Prodigy[1] is a case where Prodigy was held liable for defamatory posts because of its moderation efforts. However, in Cubby v. CompuServe[2], the court ruled that platforms without active moderation, like CompuServe, were not liable for user-generated content because they were just hosting with no active involvement. Section 230 protected platforms from liability for user content, allowing them to moderate in good faith without being held responsible for all harmful material if they weren't able to moderate everything.
I believe Elon and Trump, being the internet's biggest liars, have the goal to remove Section 230 making moderating online more or less a crime that will open you to litigation and allow them and all of their followers to spread lies not only unchecked but with the threat of punishment if a company, like Blue Sky, were to try to moderate them.
I wouldn't mourn the loss of BlueSky, because it's basically designed from the ground up to create filter bubbles and echo chambers, and social media needs way less of those.
Removing the politics from this is rather impossible, because it was so deliberately timed and explicitly positioned as political. But as a PM addressing the pure product question, I'd say it's an unnecessarily risky product move. You've basically forgone the option to use humans professionally incentivized to follow guidelines, and decided to 100% crowdsource your moderation to volunteers (for amplification control, not just labeling, btw). Every platform is different, but the record of such efforts in other very high volume contexts is mixed at best, particularly in responding to well financed amplification attacks driven by state actors. Ultimately this is not a decision most any experienced PM would make, exactly because the risk is huge and the upside low. X's experience with crapification would get any normal PM swift and permanent retirement (user base down roughly 60%, valuation down $30B; how's that look on your resume?). So I go back to the beginning: this is plutocrats at play, and not even remotely in the domain of a carefully considered product decision.
I know some of those fact checkers. They are career journalists and the bar to tag a post as disinformation is extremely high.
To tag a post, they need to produce several pages of evidence, taking several days of work to research and document. The burden of proof is in every way on the fact checkers, not the random Facebook poster.
Generalizing this work as politically biased is a purposeful lie.
Even granting all that you say is true, it would be trivial for there to be bias in such an apparently rigorous process. All that is required is selective application of the rules.
Not really. Because if you make the argument that it was censorship then you have to say that any feed that is generated by an algorithm is censorship because the company is determining what, among what all users post, you should see, allowing certain posts to bubble up to the top and others to fall to the bottom.
>...the bar to tag a post as disinformation is extremely high. To tag a post, they need to produce several pages of evidence, taking several days of work to research and document.
Why was the Hunter Biden laptop story thus categorized? As I recall, "several days" did not elapse between the New York Post publication of the story and its suppression on social media.
I assume the data is showing that conservative users are growing either in raw numbers or in aggregate interaction on Facebook, and thus, will now be catered to.
Meta, as a company, doesn't have values beyond growth.
Great news. It's further evidence that the zeitgeist has shifted against the idea that platforms have a "responsibility" to do "good" and make the world "better" through censorship. Tech companies like Meta have done incalculable damage to the public by arrogating the power to determine what's true, good, and beautiful.
Across the industry, tech companies are rejecting this framework. Only epistemic and moral humility can lead to good outcomes for society. It's going to take a long time to rebuild public trust.
The moderation tools were themselves offensive and abusive. I use FB to read what my friends and relatives have to say. I don't want FB to interfere with their posts under any normal circumstance, but somehow, they felt like they should do this.
But the real reason I can't use FB much any more is that the feed is stuffed full of crap I didn't ask for, like Far Side cartoons etc.
while it's obviously fair to be very very wary of everything FB does, especially moderation, the other side of this is a worldwide campaign by the worst people alive to use these platforms to shape public opinion and poison our (ie at least the West's) culture to death.
Hopefully I stop getting in trouble for reposting things verifiable in the public record that other people spoke about in 2018, and stop being banned for supporting capital punishment, a thing legal in the US, the native country of the brand.
The Metaverse and the WFH bets made by Zuck were controversial, but at least they were rooted in tech trends, population habits, and a vision, without any political poop attached.
This one is pure political poop to please Orange Man.
Also I believe that fact-checking needed to be slowly sunsetted after the COVID emergency was over, but the timing of this announcement and the binary nature of the decision means that it was done with intention to get in the graces of the new administration.
If these tech executives become the American equivalent of Russian oligarchs, I hope that states would go after their wealth based on their residence, and even on ADS-B private jet trackers if they were to move to, say, Wyoming but party every weekend in Los Angeles/NYC etc.
The litmus test of this is whether they roll it out globally. If they do, Meta truly has seen the light; if they don't, this is just a cynical attempt to butter up Trump in case he regulates them into oblivion (as one could argue they deserve).
It would have been a perfect opportunity to -add- community notes, study which worked better side by side, and choose the better of the two. Instead, evidently, Musk and Trump pulled Zuck aside and told him to shape up and join the billionaire oligarchs club or face the consequences of a partisan DoJ and SEC.
Both the far-right and far-left live off misinformation, but right now the far-right is experiencing a renaissance, and tech moguls are bending the knee to be on good terms with the leaders.
MAGA and European far-right politicians have been moaning for ages that fact checking is "politically biased". The Biden laptop controversy was the catalyst for this.
Corporate censorship should have never happened. It is a huge corruption of public discourse and the political process. These platforms have hundreds of millions of users, or more, and are as influential as governments. They should be regulated like public utilities so they cannot ban users or censor content, especially political speech. Personally I don’t trust Zuck and his sudden shift on this and other topics. It doesn’t come with a strong enough rejection of Meta/Facebook’s past, and how they acted in the previous election cycle, during COVID, during BLM, etc. But I guess some change is still good.
But being at the head of a social network is political.
Every choice is political. Allowing extreme speech to circulate is political, not authorizing it is political too.
It is not corporate censorship, it's regulation. Without regulation, it will be the voice of the loudest and strongest that wins. And I think we need some rationality, not polarisation.
Feel free to correct me if I'm wrong, but I don't think there's any reasonable political discourse that is ever* censored by social media companies.
During COVID, there were people spreading lies about the vaccine, which many people believed, and many people died as a result of believing those lies. Even Louis Brandeis, one of the fiercest advocates of free speech, made an exception for emergency situations[0], which is arguably what a pandemic is.
But again, lies about a vaccine do not constitute reasonable public discourse, it is more akin to screaming fire in a crowded theater. If you have counter examples of regular public discourse that has been censored by a social media company, please share it.
* I realize "ever" is a stretch, I'm sure there are instances, but my understanding is that they are the exception rather than the rule.
[0] "If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence. Only an emergency can justify repression. Such must be the rule if authority is to be reconciled with freedom." - Louis Brandeis, Whitney vs. California
It's hard to talk about, because when a discussion is successfully censored you usually don't hear about it and presume any discourse on it would have been unreasonable.
I would point towards immigration as a topic where meaningful discourse is missing from social media. On most social media sites, the discussion will be dominated by people who think immigration should rarely if ever be restricted; Twitter has been colonized by some people who take the opposite extreme, often for overtly racist reasons, although this is tempered a bit by Elon Musk's personal support of high skill visas.
The "normie" immigration restrictionist position, that immigrants are great but only so long as they enter the country lawfully, is something I very often see expressed in news interviews or supported by older relatives and rarely if ever see expressed on a social media platform. I don't know how I'd go about proving this is downstream of fact checking, but there's a lot of orgs who argue that it's factually false to characterize, for example, someone who crosses the border without authorization and then applies for asylum as an illegal immigrant.
If you think this move exists in a vacuum or is actually about "getting back to their roots with free speech", you're wrong. Alongside Dana White joining the board[0], it's clear that this is solely about currying favor with the incoming administration.
It's not solely about currying favor. Many tech giants hate getting pushed around by politicians and courts around the world demanding censorship. Free speech rights in the US are much stronger than elsewhere in the world, and even businesses as large as Meta need political support to successfully push back on censorious overreach.
100%. It is about aligning with Trump's political opinions. Thus I do expect to see no fact checking of anti-trans, anti-vaccine and anti-immigrant content. But I don't think that Meta's documented censorship of Palestinian content [1] will change, because the censorship is inline with Trump's political opinions.
> When the White House called up Twitter in the early morning hours of September 9, 2019, officials had what they believed was a serious issue to report: Famous model Chrissy Teigen had just called President Donald Trump “a pussy ass bitch” on Twitter — and the White House wanted the tweet to come down.
My door to Meta is closed and will never reopen, no matter what.
Facebook has cost me all my friends.
WhatsApp sells my phone number.
Threads banned me for commenting too much without giving it my phone number.
Facebook keeps or kept censoring my posts.
Fuck Meta forever.
> We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.
My mom and my wife’s mom both have remarked in the last year that they’re upset with speech policing. My mom can’t say things about immigration that she thinks as an immigrant, and my mother in law is censored on gender issues despite having been married to a transgender person in the 1990s. They’re not ideological “free speech” people. Neither are political, though both historically voted left of center “by default.”
The acceptable range of discourse on these issues in the social circles inhabited by Facebook moderators (and university staff) is too narrow, and imposing that narrow window on normal people has produced a backlash among the very people who are key users of Facebook these days (normie middle age to older people). This is a smart move by Zuckerberg.
>Ending Third Party Fact Checking Program, Moving to Community Notes
CNotes were extremely successful on X.
The problem with censorship, and why Digg and Reddit died as platforms, is that you end up with second-order consequences. The anti-free-speech people will always deeply analyze their opponents' speech to find a violation of the rules.
They try to make rules that sound reasonable but go beyond Section 230. No being anti-LGBT, for example. But then every joke, miscommunication, etc. leads to bans. You also ban entire cultures with this rule. I've had bans because I meant to add NOT to my one-sentence post, but failed to do so.
Then when it comes to politics, you've banned entire swaths of people and viewpoints. There's no actual meaningful conversation happening on Reddit.
Reddit temporarily influenced politics in this way. In a recent election, a politician built a platform that mirrored the subreddit. There were polls, and if you were to go by Reddit, the liberals were about to take at least a minority government, if not a majority.
What actually happened? The platform was bizarre and very out of touch with the province. They got blasted in the election. The incumbent majority got stronger.
By all measures I can find, reddit continues to grow year over year, while X seems to have been flat or in decline, so I’m not sure this is a strong premise.
Reddit fell a great deal in the rankings. They mostly use bots to make it appear like they are still relevant, which ironically is creating a 'dead internet' conspiracy theory. In reality, it's just 'dead reddit'.
If you want automated fact checking you need to create a god. (... and creating a human team that does the same is playing God)
If you want to identify contagious, emotionally negative content, you need ModernBERT + an RNN + 10,000 training examples. The first two are a student project in a data science class; creating the third would wreck my mental health if I didn't load up on Paxil for a month.
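If it helps to see the shape of such a classifier, here is a rough sketch: pretrained-encoder embeddings with a simple classification head standing in for the RNN mentioned above. The model id is an assumption, the labels are hypothetical, and none of this is a tested recipe:

    # Sketch: embed posts with a pretrained encoder, then train a small classifier
    # on ~10k hand-labelled examples (1 = "contagious negative", 0 = other).
    # A logistic-regression head stands in for the RNN; the model id is assumed.
    import torch
    from transformers import AutoTokenizer, AutoModel
    from sklearn.linear_model import LogisticRegression

    MODEL_ID = "answerdotai/ModernBERT-base"  # assumed Hugging Face id
    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    enc = AutoModel.from_pretrained(MODEL_ID).eval()

    def embed(texts):
        # Mean-pool the final hidden states into one vector per post.
        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = enc(**batch).last_hidden_state          # (B, T, H)
        mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
        return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

    # posts: list of ~10k strings; labels: list of 0/1 annotations
    # clf = LogisticRegression(max_iter=1000).fit(embed(posts), labels)
    # negativity = clf.predict_proba(embed(["some new post"]))[:, 1]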
Contagious negative content is bad for people whether or not it is true. If you suppressed it by a large factor (say 75%) in a network, it would be like adding boron to the water in a nuclear reactor. It would reduce the negativity in your feed immediately, would reduce it further because it would stop it from spreading, and soon people would learn not to post it to begin with because it wouldn't be getting a rise out of people. (This paper https://shorturl.at/VE2fU notably finds that conspiracy theories are spread over longer chains than other posts and could be suppressed by suppressing shares after the Nth hop.)
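The "boron in the reactor" effect is easy to see in a toy branching-process model (all of the numbers here are invented): damp each hop, or cut distribution off after the Nth hop, and the total downstream reach collapses:

    # Each share spawns r_per_hop further shares on average; damping or a hop
    # cutoff shrinks the geometric sum of downstream reach. Numbers are made up.
    def total_reach(r_per_hop, hops, damping=1.0, cutoff_hop=None):
        reach, current = 0.0, 1.0
        for hop in range(1, hops + 1):
            if cutoff_hop is not None and hop > cutoff_hop:
                break
            current *= r_per_hop * damping
            reach += current
        return reach

    print(total_reach(1.5, 10))                # unchecked spread: ~170 downstream shares
    print(total_reach(1.5, 10, damping=0.25))  # 75% suppression per hop: well under 1
    print(total_reach(1.5, 10, cutoff_hop=3))  # stop amplifying after hop 3: ~7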
My measurements show Bluesky is doing this quietly, I think people are more aware that Threads does this; most people there seem to believe "Bluesky doesn't have an algorithm" but they're wrong. Some people come to Bluesky from Twitter and after a week start to confess that they have no idea what to post because they're not getting steeped in continuous outrage and provocation.
I'm convinced it is an emotional and spiritual problem. In Terry Pratchett's Hogfather the assassination of the Hogfather (like Santa Claus but he comes on Dec 32 and has his sleigh pulled by pigs) leads to the appearance of the Hair Loss Fairy and the God of Hangovers (the "Oh God") because of a conservation of belief.
Because people aren't getting their spiritual needs met you get pseudo-religions such as "evangelicals who don't go to church" (some of the most avid Trump voters) as well as transgenderists who see egg hatching (their word!) as a holy mission, both of whom deserve each other (but neither of whom I want in my feed.)
I've only been here about 1/30th as long as you, so I fully accept that I could be wrong here; but this really doesn't seem to measure up the standard of discourse that I understood to be expected on HN.
during the biden administration they were expected to shift their moderation policies to fit in with the political ideology currently in the white house
now it's been normalized and the other party is doing it. but the news outlets have waited until now to start crying wolf?
Maybe, just maybe, it's because most people in the media are Democrats, and therefore inherently self-biased in their concerns and worldviews, and they have a belief that prevents any critical self-examination easily summed up by the Stephen Colbert line that: "reality has a liberal bias."
You can't argue with someone who thinks their beliefs are merely "reality." At least the other side recognizes it as religion, etc.
> there is a huge difference between a belief and a fact.
What if a fact is disputed? Do you not have to choose which fact to believe?
Gestalting between two disputed facts is the basis for scientific revolutions.
Ptolemaic astronomers certainly had a belief that epicycles were "fact" and made every non-scientific attempt to destroy heliocentrism. Only when enough people didn't _believe_ in that "fact" did we evolve to better understanding.
You can say "these were not facts and were just flawed observations", but you'll ignore that Ptolemaics _said_ these ideas were facts and had strong evidence and a belief that it really was.
This model can be applied over and over again to many domains. This isn't my idea, rather it comes from the seminal work "The Structure of Scientific Revolutions" by TS Kuhn.
So, no, there is not a bold line between belief and fact. We choose what facts to believe.
I cited a major academic work to back up my position and gave a real-world example to demonstrate the concept. What about Kuhn is crazy? You should attempt to engage with the topic and avoid ad hominem attacks. Or are you of the opinion that “we don’t believe in facts”?
More accurately, the quote is "Reality has a well known liberal bias," and it was given in the persona of a character Colbert played on a Comedy Central show, so it can be read with a certain irony.
I think this reinforces my argument that liberals view it as indisputable that there is no bias in their favor in media and all their opinions are "merely reality."
well I think it's important to point out context and to be accurate with regard to the actual quote. imprecision with words leads to misinterpretation.
I'm not clear what your larger point is though or why you're singling out my comment with your rebuttal.
The twitter files that showed that accounts of conservatives got special treatment that explicitly prevented them from facing consequences of breaking site rules?
I have no idea how you came to the conclusion that they showed any such thing. Even Wikipedia (https://en.wikipedia.org/wiki/Twitter_Files) takes the stance that the points raised were generally showing bias against conservatives, and tries to downplay them.
I'd like to hear an informed take from anybody who thinks that Facebook's fact-checkers were a better product feature than Community Notes.
All of the articles I'm seeing about this online are ideological, but this feels like the kind of decision that should have been in the works for multiple quarters now, given how effective Notes have been, and how comically ineffective and off-putting fact-checkers have been. The user experience of fact-checkers (forget about people pushing bogus facts, I just mean for ordinary people who primarily consume content rather than producing it) is roughly that of a PSA ad spot series saying "this platform is full of junk, be on your guard".
The ideological bits are:
* Dana White added to the board.
* "Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content." - like people being in Texas makes them more objective?!
The actual mechanisms of running a social media network at scale are tricky and I think most of us would be fine with some experimentation. But it looks pretty political in the broader context, so maybe it's just a way of saying that certain kinds of 'content' like attacking trans people is going to be ok now.
I can't quite FB entirely, but Threads looks like a much less interesting option with Blue Sky being available and gaining in popularity.
I get how the partisan story is easy to tell here, but I'm saying something pretty specific: I think it would have been product development malpractice for this decision not to have been in the works for many, many months, long before the GOP takeover of the federal government was a safe bet. Community Notes has been that successful, and Facebook's fact-checkers have been that much of a product disaster.
I've never seen a wrong Facebook fact-check; I am warmly supportive of intrusive moderation; that's not where I'm coming from.
Clegg left a few days ago, and the Oversight Board issued a statement which sounds like they were in the dark:
> “We look forward to working with Meta in the coming weeks to understand the changes in greater detail, ensuring its new approach can be as effective and speech-friendly as possible.” [1]
So is it possible this was only announced recently. It might have been "in the works" in the C-suite for a bit longer, but there doesn't seem to be any evidence it was widely known before very recently.
[1] https://www.theguardian.com/technology/2025/jan/07/meta-face...
As a product decision taken independently, maybe. Running one of those things at scale with all kinds of people trying to subvert it for various reasons, including some downright evil ones, is not an easy task.
Announced together with everything else and given the timing, I just can't help but think there's a political component to all of it.
> I just can't help but think there's a political component to all of it.
"We're moving to Texas to eliminate perceptions of bias" is the biggest giveaway of this.
Austin is very left of center. If they end up there, they will have ideologically strayed in California while geographically moving to Texas.
Infowars was based in Austin. Joe Rogan is in Austin. How does moving to Austin mean they are "ideologically" in California?
Visit Texas. Then visit Austin. You'll know what I mean.
https://en.wikipedia.org/wiki/2024_United_States_presidentia...
Joe Rogan also moved from California.
Elon too. Aren't taxes cheaper for businesses there?
i mean they can just pretend and get paid
Stayed not strayed.
People move to other states due to state laws. City laws can easily be avoided by living and/or working just outside the city limits. Or more likely, state laws will preempt city laws that go against state level politics.
I don't at all doubt that they're going to do whatever they can to cast this presumably longstanding product plan in the light most favorable to the governing majority! I just want to get the causality right.
I don't understand though: What makes you think that you are getting the causality right? It seems to me like you're asserting the causality goes one direction, when there doesn't seem to be any evidence (at least in public) for that assertion at the moment. Have I just missed some other information on this that you're basing this on?
I think he is suggesting that this move has favorable PR optics for the incoming administration. Making it appear like a conservative victory may give them some slack or earn them some favors.
Is it not a conservative culture-war victory designed to earn favors? There is no external evidence of this having been anything other than a contingency around November 6 of last year, so it's hard to definitively say it's one or the other.
It's not really, tbh. Like the vast vast vast majority of content reviewers are outside California and have been for well over a decade.
The change here is to move the people designing the policies to Texas (basically a stealth layoff, tbh).
That being said, the moderation has been insanely bad for a while now, so all the model tuning seems like a worthwhile change to me.
The Texas thing sounds like PR but isn't really given their huge offices in Austin.
> The Texas thing sounds like PR but isn't really given their huge offices in Austin
That distinctly smells like pork-barrel politicking: we're moving jobs from Commiefornia to your great state, and if your criminal [1] state AG sues us again over this function, he'll be putting Texans out of a job.
1. Allegedly. Meta wouldn't dare call him that, but he agreed to 100 hours of community service and to paying restitution to those he allegedly defrauded in order to avoid a trial.
It's called pre-conceding.
Fact checking, Community Notes, whatever you want to call it, is inherently political.
To be clear: I absolutely do not dispute this. But in 2025 it seems pretty clear that you cannot run a mainstream large-scale social network without some kind of moderation, so every platform is going to do something. And all I'm saying is: what Facebook was doing before was bad, just as a product experience. Just wretched. Solved no problems, mostly surfaced stuff I wouldn't have paid attention to in the first place.
How does an average joe evaluate the claim that their content moderation was bad? Cause folks on the left seem very upset that it's being replaced by notes, and folks on the right seem very glad that it's going. How do I judge this for myself?
What I've read of the Community Notes algorithm casts it as far more neutral than any hiring decisions about professional content moderators could possibly be. If it's "political," it's in a similar way to comparing the GDP of various countries is political--reality gives the verdict, the politics is in whether that verdict was the optimal one to ask reality for.
People are going to believe it is political whether or not it is. I've been working hard at talking about difficult issues in a depoliticized frame. It's hard.
Lately I've been talking with a lot of people, trying to help find answers, and something I am learning is to delete all the duckspeak from my vocabulary (there was an otherwise good article about "placement poverty" in medical education that I didn't post last weekend because "X poverty" is duckspeak).
If I say anything at all to anyone about this or that and get a negative response about the words I use, I take it very seriously and most of the time resolve to use different words in the future.
What are more examples of duckspeak and is it context dependent?
Orwell defined it as thoughtless or formulaic speech.
There is an essay at the end of Orwell's 1984
https://gutenberg.net.au/ebooks01/0100021.txt
called "The Principles of Newspeak" that coins the word.
The slogan "My Body My Choice" has some of this character. It rolls off the tongue and stops thought. There is no nuance: the rights of the mother are inalienable. Opponents will talk about the inalienable rights of the fetus. There is no room for compromise but setting some temporal point in the pregnancy is a compromise like Solomon's that makes sense to the disengaged but gives no satisfaction to people who see it as moral issue. [1]
Note that this phrase turned out to be content-free and perfectly portable when it got picked up by anti-vaccine activists.
"Illegal Alien" is a masterpiece of language engineering that stands on its own for effectiveness. I mean, we all follow laws that we don't agree with or live with the threat of arrest and imprisonment if we don't. It's easy to see somebody breaking the law and not getting caught as a threat to the legitimacy of the system. "Undocumented Migrant" has been introduced as an alternative but it just doesn't roll off the tongue in the same way and since it is not so entrenched it comes across more as language engineering.
(Practically as opposed to morally: Americans would rather work at Burger King than get a few more dollars per hour to get up early for difficult and dirty work that might have you toiling in the heat or the cold. An American would see a farmhand job at a dairy farm as a dead-end job. A Mexican is an experienced ag worker who might want to save up money to buy his own farm. Which one does the dairy farmer want to have handling his cows?)
My son bristles at "healthcare" as a word consistently used for abortion and transgender medicine to the point where he shows microexpressions when reading discussions about access to healthcare in general.
This poster burns me up
https://www.pinterest.com/pin/741405157385448245/
in that it suggests teaching small children the alleged difference between two words will make a difference in the very difficult problems that (say) black [2] people face in America, which trivializes those problems. It trains them to become the kind of people who will trade memes online as opposed to facing those problems. In the meantime I've heard so many right-wingers repetitively talk about "Equality of opportunity" vs "Equality of outcomes," which is a real point but reduces a complex and fraught problem to a single axis.
[1] There's a great discussion of this in https://www.amazon.com/Rights-Talk-Impoverishment-Political-... although that book has a discussion of the Americans with Disabilities Act that hasn't aged well.
[2] Bloomberg Businessweek has a policy of always using a capital B when they talk about "Black" people. Do black people care? Does it really help them? What side of the barricades are they on when they write gushing articles about Bernard Arnault and review $250 bottles of booze and $3,000/night hotel rooms?
> My son bristles at "healthcare" as a word consistently used for abortion and transgender medicine
In terms of cost, the items you cite are vanishingly small, and to conflate the two, one must have no experience of the medical system beyond twitter.
Is your son on his own? Did he have to pay the cost of a broken limb or a child's disease, or has he seen a family member go through cancer? Maybe he would have a better sense of what "healthcare" means if he had actually faced these situations.
> "Undocumented Migrant" has been introduced as an alternative but it just doesn't roll off the tongue in the same way and since it is not so entrenched it comes across more as language engineering.
It definitely comes across as language engineering. It's a legitimate category ("I'm an asylum seeker directly on my way to claim asylum from the nearest office") but expanded to include people who are just in the country illegally. It's too obvious to convince many people for very long.
Crossing the border is fine, unless it's a state line.
Then you cannot cross it or the intent is murder.
Unless your intent is to murder, then you can cross it - that's Healthcare.
Healthcare is a right. In fact, it's a societal obligation. Trust the Science(TM).
Your body, the state's choice.
Are you a racist, or a communist?
I think you'll find basically everything is political. Do you have a fear of debate or criticism?
No. I can't stand it that so many Americans have fallen under the spell of a fraudster while others are sharing hateful memes online and think it is activism. I need stronger language, not weaker language.
I don't like the word "debate" because it makes me think of a high school debate, where you are assigned a side of the issue and it is all about winning or losing.
https://depts.washington.edu/fammed/wp-content/uploads/2018/...
In the current situation people feel they have exactly one candidate to vote for every time, and thus we have no ability to vote out corrupt politicians. The political class wins and the rest of us lose.
(I am so concerned about people's inability or reluctance to change that I've experienced a call to the ministry, and I'm working to use practices that I developed for selfish ends in the past to help others. Ideally when I offend you I want to strike you at the core and leave you haunted for months, unable to think about the issue the same way ever again. If you're reacting to bits of trash somebody else stuck on me that I'm not aware of, I'm not going to get that strike in.)
Actually very few things have to be political. Politicizing, that is, rendering a question up for decision by a "body politic," is a choice that we're making right now, and we could choose not to do that. In fact, we have done that throughout our nation's history, and it's only in the last 20 years that I've seen the rise of "everything is political speech" to the degree that the brand of beans you buy in a store signals something to some group.
To wit, there are a lot of totalitarians out there, and just because some group claims to be on your side or looking out for your interests versus some other group, it doesn't mean they don't want your mind, body, and soul for their own purposes. We must take it upon ourselves to think for ourselves and to hold our own interests rather than adopt the interests of the group we're in. Humans can engage in enterprise as a group for their own reasons, and we ought to embrace that instead of seeking to identify so wholly with the group that we lose ourselves.
Not a huge problem so long as it remains a means to indicate that the post is hallucinatory. The content of checks/notes doesn't matter; it's tone policing.
[flagged]
Classically liberal, sure.
Modern progressives shut themselves off from any ideas they don’t already agree with, making it impossible for them to discern whether what they believe is true or not.
Of course this is also true of many religious conservatives. It’s just now equally true of those on the far left.
Please provide one example of your assertion.
Seems like legal vs. illegal immigration is low-hanging fruit.
What about them? That they exist? No one disputes that. That illegal immigrants cause crime? We have hard data on that; it's not true. That they are a drain on society via social programs? We have data on that too; they get taxes withheld but cannot claim refunds and cannot enroll in social benefit programs due to their lack of SSN.
On any topic you want to pick it's typically the radical right wing who have their fingers in their ears.
I think the bit where it’s illegal is the issue.
Nobody disagrees whether it is illegal. Whether it should stay illegal is the thing people disagree on.
I think you made the GPs point for them.
How? Whether it should remain illegal is not a factual question. You are being deliberately obtuse to avoid admitting you are wrong.
People are mad about a double standard: rules only apply to some people.
This isn’t hard to grasp.
The people who think illegal immigrants shouldn't be illegal don't think anyone should be illegal. What's the double standard? It's not like they think black people should be allowed in but white people shouldn't.
What's hard to grasp is how you think this applies to a discussion about differing facts based on political leaning. Nobody disagrees with the facts here, only on what should be done going forward. So, not really relevant to the discussion.
Noah Smith’s entire twitter feed is dedicated to pointing out progressive lies.
Is it universally true that every truth test requires leveraging the existence of false claims/things I don't agree with? For example, if Socrates is a man and all men are mortal, what false fact would you need to draw the logical conclusion? Or am I missing your point?
I’m not reflecting this idea, of course, because I’m a progressive. It does seem a bit imaginary, though.
"Modern progressives" -- that's a wide net you're casting.
I consider myself to be a progressive and am more than happy to critique "lefty stuff" all day long. I know I'm not alone in that regard.
Try me.
I guess it depends
Is climate change driven by human activity? Do males have a natural advantage in sports? Do vaccines cause autism? Does rent control make housing more available?
The major political tribes are full of BS, because politics mostly isn't driven by disagreements about facts but by conflicting material interests. Partisans believe what's convenient.
How do you distinguish partisans from actual knowledge? The Steve Bannon philosophy of flood the zone with shit so it all looks the same seems to have killed public discourse IMO. It is easy to label everyone as partisans.
To your questions, the best explanations for climate change are human causes (and with very considerable evidence).
Women have higher pain tolerances and greater natural buoyancy, so they are greatly advantaged at long-distance cold-water swimming. Many other sports require physical size and/or strength - so it does depend. Vaccines have no evidence of _causing_ autism, and the big paper that made that claim was retracted. I don't know about rent control and do not know what data exists.
Yeah, the answer of "yes, and here is all the evidence" just doesn't seem to fly. I feel that trolls and scientific illiteracy have simply won the day.
> The Steve Bannon philosophy of flood the zone with shit so it all looks the same
FWIW, it's called the Firehose of Falsehood and the Soviets invented it.
https://en.wikipedia.org/wiki/Firehose_of_falsehood
> Do males have a natural advantage in sports? Do vaccines cause autism?
I won't argue about the other two, BUT.
We have facts for contact sports and for speed and strength sports, and we've had these facts for millennia.
For the vaccine one, we also have facts. You're more likely to win the lottery than to get autism from them. I think they're probably the same odds as dying from a potted plant falling on your head while walking, but anti-vaxxers don't seem to be wearing helmets everywhere. That's so weird...
I am not saying vaccines cause autism or anything, but where are you pulling your odds from?
I don't think any of these are ambiguous. My point is that sometimes right wingers take the nonsense position and sometimes left wingers take the nonsense position. Neither side reliably follows the evidence or "believes the science" so glib lines like "reality has a liberal bias" are shallow and silly.
The point of the phrase "reality has a liberal bias" is not "liberals never take a nonsense position", it's "more of the facts that liberals [just as tribalistically] believe in happen to also be true, when compared to conservatives".
That something like this might happen is not surprising. If you have two political groups and you assign both beliefs from a bag in a purely random process, odds are that one of the groups will end up with more true beliefs than the other, through no virtue of their own but through pure chance.
Conservatives believe the truth supports conservative beliefs, and liberals believe it supports liberal beliefs. This type of comment is about the same as just saying "I am a liberal", which almost by definition means you think liberal beliefs are true. It doesn't add much to the conversation.
Well, no. It means when facts are tested by objective means, more of them align with liberal beliefs than conservative beliefs. Unless you believe that facts can't be objectively tested?
[flagged]
While I'm not from the US, by any survey measure my principles put me on the US left, and this logic still sounds juvenile to me. Stooping to the level where a single person is supposed to represent a whole side - did you see Joe at the debates?
Oh boy. Are you trying to do the "both sides" thing? Joe was pretty bad at the debates. His voice was weak. He stuttered. He misspoke. It was bad. And then what happened? He stepped down as the party's candidate, and the rest is history, as they say.
That is quite different from making up wild stories about immigrants eating cats, fabricating nonsense about widespread election fraud / stolen elections, suggesting injecting bleach is a sufficient remedy for coronavirus, sharpie-ing atop hurricane maps to prove previous incorrect statements were totally real because... look: sharpie! And this man has never had more widespread support.
These. Parties. Are. Not. The. Same.
By the way, it wasn't just one man making this "immigrants are eating our pets" thing. In addition to Trump, other prominent Republicans such as J.D. Vance, Marc Molinaro, and Laura Loomer also repeated this lie.
Just because one political party is obviously worse doesn’t mean you should take everything said by the other political party as gospel truth.
Statistically, most of the US seems to believe that the Democratic party is obviously worse at the federal level. They just lost an election on every metric, although they did win the lost-to-Trump-twice award after almost a decade of opportunities to come up with an effective counter-Trump strategy.
Trump and his antics are not mainstream conservative thought, especially not on a global or even 'western' level.
He does however have a knack for attracting people disenfranchised from politics.
He's been the undisputed head of the "conservative" party in the U.S. for 10 years now. And just won his second election, this time winning the popular vote. If that's not mainstream, I don't know what is.
Sure they are. People like the Cheneys or Mitt Romney are not mainstream conservatives in the US any more.
Accurate. It's difficult to argue that the mainstream US Republican isn't a populist now. Twice is not a fluke.
And ever since the 70s there's been a tension between the blocs of the Republican party: fiscal business conservatives, foreign policy hawks, and rural/religious conservatives.
After a couple of decades of getting the final group fired up, they decided they wanted to drive. And the primary system rewarded them.
> the final group fired up, they decided they wanted to drive. And the primary system rewarded them.
I've been an outside observer of US politics for many decades. I'd characterize what happened not so much as the primary system rewarding them but more as a consummate grifter and snake-oil carpetbagger fooling them into thinking they've won.
They got fired up, they got the candidate they voted for, I'm not sure the expected rewards will follow as hoped and expected.
I think folks undersell Trump's intention to deliver. Just, to him, there's no objective reality outside of the message and the public reaction.
So he says "We'll build a wall!", then throws up a few miles of fencing, then takes some photos and says "We built the wall!", and people believe him?
That's job done.
Sure, there are a lot of interests around him, but I honestly don't think he's playing a master plan. He just lives inside messaging.
I have definitely heard conservatives complain that reality has a left-wing bias. Not in quite those words, but close enough that you wonder if it’s possible to die of cognitive dissonance.
Liberal as in classical liberalism, or as in progressivism (which is becoming increasingly authoritarian)?
Could you give an example of increasing authoritarianism from progressivism?
This was a thing in the 2010s when “cancel culture” and other SJW shenanigans were prevalent.
As far as I can tell the culture war is over since end of the pandemic - now the class war has begun, it’s going to be interesting
Tim Walz claimed there is "no guarantee to free speech on misinformation or hate speech, and especially around our democracy." That's false--the First Amendment has no such carveouts for those things. So it's concerning that Walz would think otherwise.
Hillary Clinton has made similar comments, saying "But I also think there are Americans who are engaged in this kind of propaganda, and whether they should be civilly, or even in some cases criminally, charged is something that would be a better deterrence, because the Russians are unlikely, except in a very few cases, to ever stand trial in the United States." But again, there is no First Amendment carveout for propaganda, Russian or otherwise.
There are some limits to protected speech, but they're rare and mostly limited to direct incitement of a crime or other threat.
Landlords in NYC can be fined up to $250,000 for misgendering tenants, i.e. compelled speech:
https://www.snopes.com/fact-check/transgender-pronouns-fine-...
And Canada has similar (and far more widespread and severe) laws punishing people for expressing wrongthink about trans issues.
In the final analysis, I don't think it matters. The former leads to the latter. The same is true of things like attempts to keep the LGB, but toss the T. The T follows from the LGB. The LGB already presupposes all that is needed to infer the T. You would be drawing an artificial line in the sand otherwise. It's ad hoc and doesn't work.
One common error people make is that they think they can pick and choose beliefs and positions a la carte and expect them to remain stable as fixed parameters of the environment. But that's not how ideas work. They aren't static in this way. Rather, they function much like presuppositions that, over time, are worked out, dialectically, if you will. Society is like a machine that works out the consequences of ideas over time.
So, I always find it amusing when anyone appeals to some fondly remembered status quo that held in a prior decade, believing that all one needs to do is return to that status quo "verbatim" and all will be well, as if these things were just a matter of arranging the furniture a certain way. You can't roll back the clock, and if you could, you would only recreate a similar development that led to the undesirable state of affairs in the first place.
This isn't an argument for some kind of Big P progessivism, or against tradition, only an account of how cultures develop over time. In our case, by understanding the tensions and contradictions within the liberalism tradition, we can come to explain why Western societies have moved in a certain direction over the last 200 years. Heck, we can go back further to the influence of Luther, or even further to Ockham, without whose ideas liberalism would arguably not exist.
If you begin with liberal blinders on, then that might be the picture you receive.
(I define here "liberal" and "liberalism" not in the lazy, colloquial partisan sense, as in "own the libs!" or "left wing", but the philosophical definition in the tradition of Hobbes, Locke, and others. In this sense, "we" are all liberals in the liberal West.)
Only if everything you don't agree with is "political"
Censorship, moderation, what kind of speech is acceptable, what does or doesn’t constitute a “fact”, are all political topics.
> what does or doesn’t constitute a “fact”, are all political topics.
It clearly is not. A fact is a fact by definition, regardless of what anyone happens to feel about it. There are facts that are known to be true beyond all possible doubt.
If it is uncertain or in doubt, then it's not a fact and shouldn't be corrected by fact checkers.
> There are facts that are known to be true beyond all possible doubt.
The problem is that some people believe a fact is one way beyond doubt, and others believe it is the other way.
Epidemiology: Respirator masks help prevent infectious diseases
Economics: Rent control is always a bad idea
The way Community Notes usually ends up working in practice is through comments that provide sourced context that may be (arguably intentionally) omitted from a topic. For instance, if it happens to be that there have been 27 different studies showing no statistically significant reduction in the spread of infectious diseases when healthy individuals wear masks, then that would likely become a community note on the first one. And vice versa: if rent control has been demonstrated to keep rents below the surrounding means in the cities of Blah, Bleh, and Bluh, then that would often end up a community note on the second.
It basically helps reduce the hyperbole/echo chamber effect of such comments/topics. Vice versa, if those topics were "Respirator masks are useless." and "Rent control is always good.", then the community notes would tend to go in the opposite direction. It's just a really good idea (see the sketch below for roughly how the scoring works). For that matter I think a similar algorithm would also work well on general upvote systems at large.
I'd also add that one of the biggest issues with "fact checkers" was not only the sometimes questionable checking, but also a selection bias - where the ideological bias becomes rather overt in both directions. Whether that be in deciding to "fact check" the Babylon Bee (in an overt effort to get it deranked), or in choosing not to fact check statements from the lying politicians that one happens to like.
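To make the "bridging" idea concrete, here's a minimal sketch in Python (my own toy illustration, not Meta's or X's actual code) of the kind of matrix-factorization scoring the public Community Notes write-ups describe: each rating is modeled as a global intercept plus a user intercept, a note intercept, and a user-factor/note-factor product, and notes are ranked by the note intercept, i.e. the helpfulness left over after the viewpoint factors explain the partisan split. The function name, toy data, and hyperparameters are all assumptions for illustration.

    # Minimal sketch of "bridging" note scoring (illustrative only).
    # Model: rating(u, n) ~ mu + b_user[u] + b_note[n] + f_user[u] . f_note[n]
    # Rank notes by b_note: the part of helpfulness NOT explained by viewpoint factors.
    import numpy as np

    def fit_bridging_model(ratings, n_users, n_notes, dim=1, lr=0.05,
                           reg=0.1, epochs=200, seed=0):
        """ratings: list of (user_id, note_id, value) with value 1.0 (helpful) or 0.0."""
        rng = np.random.default_rng(seed)
        mu = 0.0
        b_user = np.zeros(n_users)
        b_note = np.zeros(n_notes)
        f_user = rng.normal(scale=0.1, size=(n_users, dim))
        f_note = rng.normal(scale=0.1, size=(n_notes, dim))
        for _ in range(epochs):
            for u, n, y in ratings:
                pred = mu + b_user[u] + b_note[n] + f_user[u] @ f_note[n]
                err = y - pred
                # plain SGD with L2 regularization
                mu += lr * err
                b_user[u] += lr * (err - reg * b_user[u])
                b_note[n] += lr * (err - reg * b_note[n])
                fu, fn = f_user[u].copy(), f_note[n].copy()
                f_user[u] += lr * (err * fn - reg * fu)
                f_note[n] += lr * (err * fu - reg * fn)
        return b_note  # higher = rated helpful across viewpoints

    # Toy data: users 0-1 and 2-3 form two "camps"; note 0 is liked by both camps,
    # note 1 only by camp A. The bridging score should favor note 0.
    ratings = [(0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),
               (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0)]
    scores = fit_bridging_model(ratings, n_users=4, n_notes=2)
    print(scores)  # expect scores[0] > scores[1]

In the toy data, note 1 gets just as many "helpful" ratings as note 0, but they all come from one camp, so the factor term absorbs them and its intercept stays low. That's the property that makes it harder to game with a partisan brigade than a plain upvote count.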
> Economics: Rent control is always a bad idea
Well this is definitely false. If you're a politician who can afford a nice place then rent control is a great idea: it gets you elected (look, I made things cheap for you) and keeps you elected (look, I will solve all the problems underpriced rent brings).
Your example is a false equivalence. Economics does not define "good ideas" and "bad ideas," it only attempts to model resource dynamics. Whereas the spread of infectious disease is clearly quantifiable regardless of value assignment.
The presumed goal of rent control is to prevent rents from rising. If they actually cause rents to rise even more quickly then they are indeed "bad" (at achieving this goal).
The goal of rent control, as I infer from the mechanism, is to prevent existing tenants from being priced out of their current homes (eventually leading to eviction) - at least as I have seen in the US.
If the goal were to prevent rents from rising, the mechanism would do so directly, ie. regulate all rent, rather than limiting to continued rentals on certain types of property. Which would by definition prevent rents from rising, presumably along with other undesirable effects.
Anyways, the whole issue with conflating "bad" with objective consequences is the "presumed goal," which is of course totally subjective.
Economics is inherently a political venture. Organizing markets is political and obviously impacts politics.
Partly true, but besides the point. Making a blanket statement like "economics says rent control is bad," is only marginally better than saying "physics says nuclear weapons are bad." There is a critical assumption of values which is totally outside the objective of study.
Those aren't really good candidates for fact checking. They are beliefs, just very widespread ones with lots of support.
A good candidate for fact checking is something that is well documented and objectively verifiable. For example: politician X said Y on TV the other day.
Here's another one - "Trump colluded with Putin to hack the election in 2016".
I have never seen an accepted fact checking site answer this, which is very strange since it would be such an enormous and grave conspiracy if it were true. The Mueller report is extensive and quite conclusive in stating that no such evidence of collusion (conspiracy) was found. Yet fact checkers are happy to check peripheral and far less consequential claims around the case for some reason (e.g., https://www.snopes.com/fact-check/mueller-report-no-obstruct...), but are strangely hesitant to address the elephant in the room.
Or for another example, there were many false or poorly substantiated claims made about covid and vaccines during the pandemic. I saw "reputable" fact checkers address a certain set of those claims about the virus and drugs, but were strangely silent when it came to a different set of claims.
So fact checkers don't even need to provide false content at all, they can be very political and biased simply by carefully choosing exactly what "facts" or claims that they address.
Another example: fact-checking prominent race activists in 2020. The public was grossly misinformed about the scale of police violence against black Americans: https://manhattan.institute/article/perceptions-are-not-real...
But even straightforward stuff goes unchallenged. Jada Pinkett Smith released a movie trailer claiming Cleopatra was black. When NBC covered the issue, they couldn’t even bring themselves to fact check her. They did a “he said, she said” article asserting that Egypt contested whether Cleopatra was black: https://www.nbcnews.com/news/world/queen-cleopatra-black-egy....
Well yes.
But how do we distinguish facts from non facts?
That is a dilemma humanity has struggled with for millennia. Humans are very bad at recognizing their own biases and admitting to themselves they were wrong about something.
> But how do we distinguish facts from non facts?
What do you mean how? Science. The process of science.
There might be people who want to believe gravity on Earth accelerates objects at 1m/s^2, but we can trivially establish through countless experiments repeatable by anyone who wants to try that this is not true.
If you can't measure it or repeatably demonstrate it, then it's probably not a fact. If you can, then it is a fact and no amount of emotionally wanting to believe something else can make it not a fact.
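As a toy illustration of that point (my own made-up numbers, not anything from the thread): simulate repeating a crude drop-timing experiment many times, compute g = 2h/t^2 from each trial, and even with a noisy stopwatch the estimates cluster near the real value, nowhere near a claimed 1 m/s^2.

    # Illustrative sketch: repeated noisy measurements still pin down a physical fact.
    import random

    def measure_g(height_m=2.0, trials=1000, timer_noise_s=0.01, seed=42):
        random.seed(seed)
        true_g = 9.81
        true_t = (2 * height_m / true_g) ** 0.5  # ideal fall time for this height
        estimates = []
        for _ in range(trials):
            t = true_t + random.gauss(0, timer_noise_s)  # imperfect stopwatch
            estimates.append(2 * height_m / t ** 2)      # g inferred from this trial
        return sum(estimates) / len(estimates)

    print(round(measure_g(), 2))  # roughly 9.8, nowhere near 1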
The irony is that the example you cite, i.e. F = G * m1 * m2 / r^2, is demonstrably not the correct formula for gravity.
Science, the process of science, does not prove something as fact. It can only eliminate non-facts, and even then, the experiments may be flawed in their recognition.
> If you can't measure it or repeatably demonstrate it then it's probably not a fact. If it can, then it is a fact and no amount of emotionally wanting to believe something else can make it not a fact.
This is demonstrably false. If you witness an event once, you cannot necessarily repeat it, but you know for a fact that it happened. Unless you redefine the term "fact" narrowly, what you suggested is an ideology.
See how even the definition of "fact" is up for debate.
> Science, the process of science, does not prove something as fact.
I intentionally picked a wrong value for Earth gravity instead of the correct one to avoid nitpickery on precision, location, yada yada.
If someone has a feeling that Earth's gravity accelerates at 1m/s^2, they're just flat out wrong full stop. This is the problem with the anti-intellectual crowd who believes everyone's opinion has equal weight. No, it doesn't. If someone wants to believe Earth's gravity accelerates at 1m/s^2, then their opinion (on that topic) is worthless because it is known to be false and they don't deserve any recognition for the nonsense. Facts are facts, beliefs don't make them go away.
> This is demonstrably false. If you witness an event once, you cannot necessarily repeat it, but you know for a fact that it happened.
Not at all. Human memory is fallible so if you are the only one who saw that event and swear it is true that does not make it a fact no matter how hard you believe it.
That's why scientific process requires repeatable results that anyone can (re)validate over and over, not one-off recollections.
> Earth's gravity accelerates at 1m/s^2, they're just flat out wrong full stop
You do realize it depends on the distance of the object from Earth? So perhaps you are wrong, not them, depending on the context.
Now someone comes up and says I am nitpicking, blah blah... well, the author should have been clear and not stated a falsehood as fact! This is just your belief, which does not change the incompleteness/incorrectness of the statement (as per the original post).
And this is the whole goddamn point. What's "fact" to someone can be incorrect, half-correct, wrong with completely good faith, or wrong with intent to mislead, etc. Who gets to decide all this is not as simple as "I am ScienceTM" Dr Fauci style.
You missed a basic element of what they said: "can't measure it or repeatably demonstrate it"; seeing a non-reproducible event with your eyes is a form of measurement, and that measurement could in principle be done by an objective machine (recorded by a camera). The potential for objective evidence is what distinguishes a matter of fact from a matter of opinion.
As to the "correct formula for gravity" - that's just bad faith nitpicking. "Newtonian gravitation is a fact" is both a strawman and completely irrelevant when it comes to social media fact checkers.
> You missed a basic element of what they said: "can't measure it or repeatably demonstrate it"; seeing a non-reproducible event with your eyes is a form of measurement, and that measurement could in principle be done by an objective machine (recorded by a camera). The potential for objective evidence is what distinguishes a matter of fact from a matter of opinion.
No. Recording an experiment does not constitute scientific repeatability of an experiment. (Not to mention Quantum Mechanics explicitly rejects your claim as a universal principle at the micro level.)
> As to the "correct formula for gravity" - that's just bad faith nitpicking. "Newtonian gravitation is a fact" is both a strawman and completely irrelevant when it comes to social media fact checkers.
No, it is not a strawman at all. It clearly illustrates, via an example, something we have known to be false for about a century, yet not only do we not censor it on social media, we teach it to kids, and almost no one would object to it.
So, where do you draw the line?
I posit that there exist facts that are unknowable by the scientific method. The GP claimed science as the end-all-be-all method to fact-check. My statement is that it's neither sound nor complete in its ability to fact-check.
The scientific process works amazingly well for repeatable experiments, but it doesn't do anything at all for non-repeatable events. You can't use the scientific method to figure out who blew up the Nord Stream pipeline, to pick a relatively recent and hotly debated political fact.
And if I take a balloon and fill it with the right helium/air ratio so it sinks at exactly 1m/s²? It's a provable scientific fact that it's falling at that rate. Even if I leave off the part that it's a balloon, and talk about antigravity fields or aliens or some crap, and "let you draw your own conclusions", the fact that the balloon fell at that rate would still be demonstrably true.
People want to sell you lies and get you to believe them, and they'll give all the half-truths they can to support their version of the truth. They'll use misleading graphs with real numbers, so you can fact-check the numbers on the graph and come away thinking the graph represents the truth of the matter. But X axes that don't start at zero, logarithmic Y axes that don't say they're logarithmic, or pie charts viewed from a funny angle, with slices that don't represent the percentage they're labeled with, or with percentages that add up to more than 100%.
If all we wanted to run were trivial physics experiments, we'd be golden. The real world of social media facts includes things we can't run science experiments for, or go back in time to redo, like economic stats that use a different formula today, where there's not enough information to recompute the figure for the distant past. So we get these narratives from people who are trying to convince us to believe theirs by leaving off important context. Which is totally dishonest of them, but they have a vested interest in us believing a particular narrative.
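A small demonstration of the axis trick mentioned above (hypothetical numbers, purely to illustrate the point): both panels plot the identical, perfectly fact-checkable data, but the truncated axis makes a roughly 2% change look dramatic.

    # Illustrative sketch: same accurate numbers, very different visual impression.
    import matplotlib.pyplot as plt

    years = [2020, 2021, 2022, 2023]
    values = [98.0, 98.7, 99.4, 100.1]  # made-up metric with slow, steady growth

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

    ax1.bar(years, values)
    ax1.set_ylim(0, 110)        # baseline at zero: the change looks modest
    ax1.set_title("Y axis from zero")

    ax2.bar(years, values)
    ax2.set_ylim(97.5, 100.5)   # truncated baseline: the same data looks dramatic
    ax2.set_title("Truncated Y axis")

    plt.tight_layout()
    plt.show()

Every number in both charts would pass a literal fact check; only the framing differs.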
You're reading them as saying that moderation is suspect because it's political, and all I read them to be saying is that political considerations are unavoidable when you moderate, in a manner distinctive to moderation.
I disagree with gravity though. It makes life a lot easier when you can fly.
It’s just intelligent falling. They want to keep you in the dark.
Answering this question has to be a political topic, because there's an infinite stream of people asking you the question (by posting things that may need to be fact checked), and you have to decide which ones to prioritize.
From lawdictionary.org:
> 2 : any of the circumstances of a case that exist or are alleged to exist in reality : a thing whose actual occurrence or existence is to be determined by the evidence presented at trial see also finding of fact at finding, judicial notice question of fact at question, trier of fact compare law, opinion
For most of my life, I would have agreed with you.
As I've gotten older, I've become increasingly skeptical of the idea of a "fact".
There's no way to separate information from human context. Even seemingly obvious things like "that shirt is blue". To whom? My wife frequently sees it as green.
Or things are reduced to tautological nonsense like "gravity keeps us on the ground". Hard fact, right? But define gravity. A physicist will give you an answer, that may or may not mean much. A layman's definition might be something like "it's the thing that keeps us stuck to the ground", and now we're back to tautological nonsense. The entire "water is wet" class of "facts".
Anything less trite instantly becomes less fact-like the more humans are involved.
"Trump is a criminal" many people would argue passionately that this is a hard, incontrovertible fact.
Nearly as many, (or maybe more?) would argue the opposite.
I like the approach of the Fair Witness in Stranger In A Strange Land: "What color is that house?" "It's yellow on this side."
I'm increasingly convinced that the belief in "facts" is more about the desire to be right and know things than anything to do with objective reality.
> As I've gotten older, I've become increasingly skeptical of the idea of a "fact".
I think the problem actually lies in your personal interpretation of what a "fact" should be, and how it contrasts with what facts actually are.
The definition of "fact" is "things that are known or proven to be true". Consequently, if you can prove that an assertion is not true then you prove it is not a fact. If your wife claims your shirt is green and not blue, does that refute the fact that your shirt is actually blue? No. Can you prove your shirt is blue? Can she prove your shirt is green? That is the critical aspect.
Just because someone disagrees with you, that does not mean either of you is right or wrong. You can both be stating facts if it just so happens you're presuming definitions that don't match exactly in specific critical aspects.
If your shirt is cyan, you can argue it's a fact the shirt is blue and argue it's a fact the shirt is green, because in RGB space both the blue channel and the green channel are saturated. You can also state that it's a fact that your shirt is neither blue nor green, because there's a specific definition for that color and this one is in fact cyan, not blue or green.
If you can prove your assertion, it's a fact. If you're making claims you cannot prove or even support, they are not facts.
And more importantly, the problem tackled by fact checking is people making claims that are patently and ostentatiously false and fabricated in order to manipulate public perception and opinions. Does anyone care if your shirt is blue or green? No. Does anyone care if, say, Haitians are eating your pets? Yes.
Facts exist. Your first sentence has 11 words. Easy to verify, right? Doesn't matter who's counting.
May I suggest that your confusion comes from a conflation between facts and generalizations. Hard facts exist in strictly defined contexts. Relax the context, and you eventually need to reach for generalizations that are less precise and potentially ambiguous.
If somebody asked me whether the cup in your hand would fall and shatter when you release it from your grip, my answer would of course depend on a few things I pick up from the context: what gravitational attraction would the cup experience in your current location? What material is the cup made of (porcelain, metal...)? So if we're standing on earth and the cup is made of porcelain, I'd answer that it would fall and likely shatter. That doesn't mean that any cup would shatter. Metal cups don't. But that's a different fact. So there is no generalized fact that all cups shatter when they fall. Some do, some don't. We can play the same game with gravity. The cup wouldn't fall if we were floating on the ISS. So the same cup doesn't fall in all locations it might conceivably be.
Many people don't want to deal with the level of precision that hard facts require. They get sloppy and then start these endless discussions of "this isn't true because..." etc. and everyone gets gradually more confused because nothing seems to be entirely true or false. The fundamental counter here is to dig in and tease the generalizations apart until they become sets of constrained hard facts.
> Your first sentence has 11 words.
It's, I think, quite relevant here to note that "word" is a famously hard to define concept in linguistics. That is, there is no generalized definition of the concept "word" that works across languages, writing systems (e.g. Chinese and Japanese writing don't traditionally use spaces to separate words), and ways of analyzing language (phonological words are different from grammatical words).
So to make your sentence more accurate, you'd have to say "there are 11 groups of letters separated by whitespace characters or punctuation before your first period".
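A quick illustration of that ambiguity (my own toy sentence and rules, just for demonstration): three defensible tokenization rules give three different "word" counts for the same string.

    # Illustrative sketch: the count of "words" depends on the rule you choose.
    import re

    sentence = "Fact-checking isn't as simple as 2 + 2."

    by_whitespace = sentence.split()                      # split on spaces only
    by_word_chars = re.findall(r"\w+", sentence)          # runs of letters/digits/underscore
    by_letters_only = re.findall(r"[A-Za-z]+", sentence)  # drop the standalone numbers

    print(len(by_whitespace), len(by_word_chars), len(by_letters_only))
    # prints 8 9 7 -- same sentence, three different "word" counts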
“Facts is facts” works for counting words in a sentence.
It does not work for anything with nuance or context, or for unprovable propositions. It is a fact that there is no elephant in my house. But if you want to doubt that fact for the lulz or for profit, I will be hard pressed to prove it.
That's where our modern populists/fascists have weaponized disingenuousness to prove that "up is down" is just as valid a statement as "up is up".
While I get your point, and I think it's strong, I'm entirely unconvinced.
Everything we see, do and understand exists in a context window of an individual. We have a shared language, with which we can inexpertly communicate shared concepts. That language is terrible at communicating certain concepts, so we've invented things like math and counting to try to become more precise. It doesn't make those things "true" universally. It makes them consistent within a certain context.
How far is it from Dallas to Houston? On a paper map, it might be a few inches. True, within that context. Or you might get an answer in road miles. Or as the crow flies. In miles? Kilometers? It's only fairly recently (in human history) that we've even had somewhat consistent units of measure. And that whole conversation presupposes an enormous amount of cultural knowledge and context - would that question mean anything to a native tribesman in Africa without an enormous amount of inculturation? Are their facts the same?
I'm not trying to make a "nothing is true, we can't know anything" kind of argument, that's lazy thinking.
I'm making an argument for maintaining skepticism in everything, even (especially?) things that you know for sure.
You still have to distinguish between hard, absolute facts, which definitely exist, and representations thereof in human language. The facts never change (the distance between Dallas and Houston doesn't change while we are having this conversation), but accurate descriptions require additional concepts, and now we get into the imprecise world of human communication. Doubting the precision and accuracy of human language is a fair point, but that doesn't make facts themselves subjective.
I admire the conviction that things become absolutely true at a sufficient level of specification.
So long as facts are represented in language, they are subject to language's imprecision and subjectivity. And I don't think that Platonic ideals of facts, independent of representation, have much utility.
> hard, absolute facts which definitely exist and representations thereof in human language
It's the distinction that you're drawing between those things that I'm skeptical of.
> How far it it from Dallas to Houston? On a paper map, it might be a few inches. True, within that context. Or you might get an answer for road miles. Or as the crow flies. In miles? Kilometers? It's only fairly recently (in human history) that we've even had somewhat consistent units of measure.
No one’s opinion is going to make them closer together or farther apart, though. The distance (in whatever context) can be known. Can be objectively measured. That makes it a fact.
> I'm making an argument for maintaining skepticism in everything, even (especially?) things that you know for sure.
Are you skeptical about which way to put your feet when you get out of bed? Do you check to make sure every single time?
I think you are trying hard and writing a lot to miss the parent's point. Your thing about the number of words in the sentence is like what the parent is mistakenly calling "tautological"; another way to say it is that it's a blatantly obvious and banal observation. This is not the type of thing we are talking about here. This entire post is about "facts" and "fact checking" in the case of socio-political issues, the kinds of things for which there are fact checkers. The parent is obviously correct. Just look at the state of actual "fact checking" of this variety in the real world. There is a lot of controversy, and a lot of words are used in a very loose way; these are not simple physics problems that you can punch into a TI-86. The issue is clearly about "who are the fact checkers," or put another way, "who decides the facts." In a court of law in the US, the trier of fact is the only arbiter of facts, and findings of fact generally cannot even be appealed.
Everything is political, which is one of the statements made above.
Facts are political. Because facts actively change how you live your life.
The playwright who created the "kill all climate denialists" play talks about how it took years for the play to get onto the stage.
And then how he began to see the truth of climate denialists' positions: that climate denialists believed the facts, and realized it meant their whole way of life was over. So they had to do something about it. They responded with denial. In a very real way, they lived their beliefs.
The fact of climate change IS political.
EVERYTHING is political; there is no fact that I cannot convert into a weapon through some means or another. Blaming fact checkers is simply trying not to blame humans.
No, whether a coffee cup will break when you drop it, or whatever that was, is not a political thing. I'm not sure what the rest is about. To deny that there is a lot of subjectivity in the kinds of "facts" we are talking about here is just to deny reality.
How was I mistaken in my use of tautology?
My understanding is that it's supposed to be a reduction of a logical argument into the form A = A, or true = true.
Or when the words are different but essentially mean the same thing, and are used as a flawed proposition.
Am I wrong about that? I certainly don't want to bandy the word about incorrectly.
I’d respectfully submit that:
1) While "facts" undisputedly exist, there are vanishingly few people sufficiently versed in both epistemology and myriad substantive areas for "fact checking" to make sense. In particular, domain experts are rarely sufficiently versed in epistemology to distinguish between facts they know by virtue of their expertise, and other things they also believe that aren't really facts.
Moreover, the folks employed checking facts for companies like Facebook typically don’t have any expertise in either epistemology or the range of substantive areas in which they perform fact checking.
2) In practice, the issue in society isn’t “facts” but “trust.” You can build trust by being consistently correct about facts in a visible way. But you can’t beat people over the head with putative facts if they don’t trust you.
It sounds like you may be heading in the direction of postmodernism, and/or post-Marxist Critical Theory
I certainly hope not.
My intent isn't to devolve into some sort of bastardized nihilism, it's to inject skepticism into anything that I can be bothered to think about.
I find it useful as a tool for critical analysis. To question a premise, to poke at the facts, especially the inarguable, indisputable ones.
There seems to be an inverse relationship between the accuracy of a fact and the amount of trouble you get in for questioning it.
Subjective interpretation is very fundamental to being human and the way our minds work, but the underlying physical reality -- the wavelengths of light reflecting off the shirt -- can be measured objectively. A physicist might say that gravity is the curvature of spacetime caused by mass, which can be measured and tested.
Trump being a criminal is based on a shared legal and societal context. As a society, we accept that if you are convicted before a jury of your peers, you are guilty and have been convicted of a crime. Juries get it wrong, and the justice system is flawed and has made mistakes. A black man in the 1920s (or even the 1960s for that matter) being tried for murder with absolutely no evidence and sentenced to death is a clear miscarriage and corruption of justice. The testimony of Trump's employees during the trial, who all said they loved working there (most of them still worked there), but weren't willing to lie on the stand about checks and phone calls they participated in, was pretty clear cut. This wasn't random people off the street of [insert preferred liberal enclave here] testifying against him: it was his own people who still work for him.
Some people prioritize political allegiance over legal judgments when it suits them.
If we dismissed facts entirely, science, medicine, and countless other fields reliant on objective reality would collapse.
This exchange is a great example of the subjective nature of our experiences: as I've gotten older -- 38 now -- I've come to accept more and more that some things are objective reality, whereas in my teens and 20s, I questioned reality and society on the structural level, torn down to the studs. From Plato's cave, to brain in the vat, Kant, the Hindu Brahman and Maya, Buddhism, etc.
Your Trump trial example actually proves the opposite of the point you're making. CNN's legal analyst of all people wrote an article explaining why the prosecutors "contorted the law" in pursuing Trump's conviction: https://nymag.com/intelligencer/article/trump-was-convicted-.... Remember, the prosecutor initially declined to bring the case. And those problems with the underlying legal theory are still subject to review on appeal, which very well may result in the conviction being overturned. There's actually a lot to debate there! Including whether the "shared context" you mention still holds in the circumstance of a blue-state jury trying Donald Trump. And I'd certainly not trust anyone, especially people without a legal background, to moderate people's statements about Trump's trial and conviction.
Heck, even lawyers don’t treat legal judgments as god-given “facts” except in specific legal circumstances. The questions at the back of every chapter in a law school textbook will ask the student whether a particular case was rightly decided or wrongly decided and why.
The better way to think about legal judgments is not in terms of “facts” but rather “process.” Even a final decision by the U.S. Supreme Court does not establish god given facts. It merely is the end of the line in a set of procedures that lead to a particular result in a particular case. But even judgments of the Supreme Court are second-guessed every day by 20-somethings in law schools around the country!
> Trump being a criminal is based on a shared legal and societal context.
To think that someone is a criminal, you have to believe they committed a crime. A trial is one way of establishing whether they did with certain standards of evidence and process. But it is very far from the be-all-end-all of the matter.
For example, virtually everyone believes OJ Simpson is a criminal, even though he was found not guilty at trial, and even though plenty of biases worked against him in that trial, theoretically.
For myself, I do believe that Trump was rightfully convicted and is a criminal. But that doesn't mean that "he was convicted" should force anyone else to believe this. It only means that a particular group of jurors believed it given the evidence that a judge found correctly collected and presented to them.
But, respectfully, even you, in your quest to cite facts, require pointing out that your "facts" are not facts at all. The person in question, Trump, was not sentenced and therefore not "convicted" of anything. But this false claim is repeated a lot, even by supposed "fact-checkers". Even the rest of that same paragraph is not made up of facts; you are trying to support a vague claim with appeals to things like "his own people wouldn't lie for him even though they loved him" or some such. You're bolstering a negative sentiment but not really clearly delineating anything resembling "facts". That's the issue that is being discussed and addressed by Meta at this point. Sure, we can say high school physics problems reflect facts of nature; that's nice, but it is not what all the fuss is about.
> The person in question, Trump, was not sentenced and therefore not "convicted" of anything.
Sentencing != conviction. Conviction is the legal finding of guilt, sentencing is the appropriation of punishment.
Given your excessive use of scarequotes around "facts", getting this simple fact wrong is ironic.
That's a neat story.
"in United States practice, conviction means a finding of guilt (i.e., a jury verdict or finding of fact by the judge) and imposition of sentence. If the defendant fled after the verdict but before sentencing, he or she has not been convicted,"
https://law.stackexchange.com/questions/106159/if-someone-ha...
Not true in New York, where this particular trial took place. From your own link:
So not only is sentencing distinct from conviction semantically, it's also distinct legally in the state of New York. This is an instance where semantics are nothing more than, well, semantics.
The people who say that Trump has been ”convicted but not sentenced” actually mean that he’s been ”found guilty but not sentenced”, they just aren’t intimately familiar with legal terms of art.
If they simply say ”Donald Trump was found guilty but not sentenced” instead, they’ve silenced the nitpickers while still conveying the exact same message they intended to in the first place.
> This is an instance where semantics are nothing more than, well, semantics.
I'm hard pressed to think of an example of a fact that your statement wouldn't apply to.
Sometimes when people complain ”you’re just arguing semantics!”, the semantics do in fact need to be cleared up, because the words being used are confusing, or wrong in a way that’s preventing participants in the discussion from getting on the same page.
Here, no one is actually confused. Everyone knows and agrees that Trump was found guilty, but that he hasn’t been sentenced. The only sticking point is whether you can use the word ”convicted” to describe someone who is in that situation, and whether or not that’s the case doesn’t have any material effect on people’s understanding of reality. It’s just a matter of arguing over which words should be used, i.e. it’s just semantics.
I take the "this seems to be true, based on what I know, subject to more information" approach.
I'm ok with not knowing things.
We can measure all sorts of things, and put them in a human context, which is very reassuring. What's a wave? What's a wavelength? What's a unit of measure? These are not universal truths, these are human inventions. Things we've created in order to communicate a shared understanding with each other of things we've observed. It makes us feel knowledgeable, lets us build cool things, and that's a good thing!
It also interferes with learning, and that's a bad thing. For example (and I'm not taking a position on this either way, because I don't know), I think it's very unlikely, based on your comment, that it would be easy to convince you that Trump is not a criminal. Or, to pick a less controversial topic, to convince the early Catholic church of the heliocentric model of the solar system. Because they already had the "facts."
It's a comfortable position to know things.
It's uncomfortable to not know. As I've gotten older, I've become more comfortable with being uncomfortable.
It would indeed be hard to convince me Trump has not committed crimes, considering a jury found that he had and the whole, "Walks like a duck, quacks like a duck," thing. Tony Accardo ran the Chicago Outfit for 4-5 decades and never spent a single day in jail. I don't think most people would agree that because he was never convicted (or even charged), he was not committing crimes.
If you read a story about a drug kingpin being convicted at trial, do you assume that he might be innocent?
Yes and no.
This is the line in the sand that made sense in the pre-internet era.
Online, EVERYTHING is political speech, because moderation is the only effective action we can take, and moderation is currently conflated with censorship. Even though it’s on a private platform.
I was working towards researching this and building the case out fully - but online speech efficacy is not served by the blunt measures of physical spaces, where the ability to speak is not as mediated.
Online, diversity of voices, capability of users to interact safely, resolution of conflicts, these are better measures of how healthy the market of ideas is.
The point of free speech is to have an effective exchange of ideas, even difficult ones. The idea of free speech is not in service of itself; it's in service of a greater good.
1+1=2 is not a political statement
Apparently, it is now.
There are few things that aren’t political regardless how you feel about them
The earth is "round" can be made political, but there is a factual consensus.
Therefore, we rely on experts that decipher information to transcend political opinions. It saddens me when scientists become political, only to add confusion to the consensus, in an attempt to weaken it.
Long live Wikipedia.
The US is going to endure four more years of post-truth governance. It isn't in Zuckerberg's interest to have his organization pointing out that the emperor is unclothed when there is real risk of blowback in round 2.
There was always a political component to it. The Twitter files told us this. It's just the political component is going the other way.
what are the twitter files?
The documents provided by Elon Musk to Bari Weiss, Matt Taibbi etc when he took over Twitter.
> I just can't help but think there's a political component to all of it.
I mean, of course there is. The pressure to censor that began once Trump started dominating the Republican primaries in 2015, and escalated when the government chose a line on covid that absolved the government from responsibility for covid and made dubious claims about it, is ending. The reason the recent censorship frenzy began was political (nobody was censoring flat-earthers), and the reason it's ending is political.
Now the US can get back to just censoring Palestinians, like the old days.
Facebook is a corporation and can 'censor' whoever they like. They are not 'the US'.
Part of the reason why they moderate content is the same reason that a bar owner turfs out people who are rowdy and threatening the other patrons: because the normies will leave and you're left with a bunch of nasty, loud people.
That is, after all, why this site we're on right now is so heavily moderated: it makes for a better user experience.
I see what you’re saying, but I also think the user demographics of Hacker News reduce the likelihood of moderation to begin with.
Do you have showdead on? There is definite moderation going on, but a lot of it is collectively imposed (down votes, flagging). But, if you have your HN account set to show dead posts, you’ll see that even with this demographic there are still a good number of low quality posts.
I read with showdead on. I feel like people don't get modded for opinions here. Usually if the comments are dead it's because something is perceived as ad hominem, hostile, aggressive, violent, etc. It's usually the tone that gets them modded out, not the content of the message, and a polite version of the same statement would stand.
There are outliers of course, but that's the general vibe.
I do now. Good point. I haven’t been on here very long and should have been more aware before saying something that’s incorrect.
> I feel like people don't get modded for opinions here.
Agreed. That's why I used the term "low quality". The comments that get downvoted or flagged are usually either blatant spam/trolling or rude. If someone makes a quality argument, regardless of the opinion, it generally sticks around. I'll even up-vote comments I disagree with, if the author is making a good-faith effort. Not everyone does that, but enough people do and do so often enough that it helps to keep a complete hive-mind at bay (about most topics...).
But, I think that it's that simple level of moderation (which, I consider to still be moderation) that helps to keep discourse around here civil and interesting...
Yes, there are some threads that start where you just know nothing good will come from it, and in those cases we do see some admin moderation (hi @dang!). But, even then, I think the idea is that when discussing some topics, the thread will invariably end up going sideways. Those are the topics that tend to get immediately flagged. And that's okay with me, because who has time for that, when we have so many other, more interesting things to argue (civilly) about?
That user has six karma and therefore does not have showdead on.
There's no karma threshold for turning showdead on.
That is correct. Possibly would change my perspective. Honestly a lot of these comments have and I do appreciate the input.
I don’t know if that’s true. SV culture has always been a very big tension between monied military-industrial types and (eventually also monied) antiwar hippies.
It’s well-documented in SV’s military history, as well as recently, where Apple wasn’t involved in FAA702 illegal spying on Americans (PRISM) until after the famously anti-establishment Jobs died.
The SV culture seems to have shifted a bit rightward (as has the whole country, tbh) but the tension is still there, and the social conflict remains (although I think there are other factors, not the least of which is the skill and grace of @dang, that keep people on the better side of their behaviors here).
I agree with what you're saying about SV, especially the military-industrial types. I'm not entirely sure what the makeup of HN demographics is, and would like to know. I have a suspicion that it's not just folks in SV. I also should have clarified more. In my opinion, the discourse here is more civil than on other platforms. I would suggest that has something to do with a combination of education and niche interests that attract a different user base. So maybe not in terms of factual correctness, but certainly in terms of the ability to have a civil conversation.
I think you are like a fish who isn't aware of the water it's swimming in.
HN doesn't need much moderation, because the discourse is so civil here [narrator voice: because of the good moderation].
At scale, the long term community civility balance point is likely dominated by the average user's willingness to change their behavior as a result of peer feedback.
The HN userbase, feedback tools, karma-level-locked tools, and new users' personalities seem to create decent outcomes.
Which is to say, if someone acts like an asshat, folks let them know (either through downvotes, flags, or replies), and they modify their behavior to be closer to the community norm.
That said, I'm aware I don't see a lot of the most egregious stuff the Good Ship Dang torpedoes. Or what I expect are non-zero repeat trolls.
And honestly, the fact is that outside of very nerdy street cred, there's little incentive to actively manage discourse for commercial purposes on HN.*
* Outside of, you know, cloudflare tailscale rust (any other crawler alarms I can trip)
That’s a rather reductionist and slightly disparaging point of view. Moderation has its place; I never said it didn’t. But do you really think that moderation is the only thing keeping this place from being 4chan? I think you have one deeply entrenched opinion and are ignoring that these are very different platforms.
HN is heavily moderated through a number of mechanisms: explicit community guidelines, community moderation (through voting), and active automated and manual moderation.
I think all of this working in conjunction is why it has remained a pretty great community for almost two decades. And I think that's a really impressive feat. I don't think it was accomplished via "a combination of education and niche interests that attract a different user base".
Indeed, I think HN has gotten better over time, even somewhat so in absolute terms, but very starkly relative to the deterioration of everything else. For example, back in the day, when twitter was first getting big in tech, a lot of people felt that it was a healthier place to discuss those topics than HN. I was never completely convinced of that, and have always been more active here than on twitter, but it was at least a very reasonable thing to think for a while, IMO. But now I think it would be pretty crazy to think that twitter is healthier than HN. Similarly with similar communities on reddit.
I dunno, maybe there are some healthier spaces on mastodon or blue sky or threads or something now, but at least to me, HN has maintained a fairly stable fairly decent level of discourse for a very long time, and I don't think it is a result of luck or magic, but rather of hard and tireless work moderating the community.
Yea, I’ve become more aware of this since yesterday. I also think I should have provided way more context to what I was saying. I believe I came off as being against moderation but I’m not, I do think there is something unique about the user base just from the quality of content I see compared to other spaces, but I digress. I appreciate your thoughts and it gave me something to think about.
Last I ran the numbers, which was quite a few years ago, about 10% of HN posts were coming from IP addresses correlated to Silicon Valley (well, the Bay Area with a relatively wide radius). About 50% were coming from the US, and so on.
https://news.ycombinator.com/item?id=16633521 (March 2018)
I should check again.
Thanks @dang. Turned on showdead. I will say that I was completely unaware of the moderation efforts here and appreciate having this pointed out to me. I like this option too. As far as transparency goes I don’t think it gets much better than this.
Thanks for this!
i'm not from silly valley, but its the dominant voice here.
some of my downvotes are from bad tone, overreaction, hyperbole... some are because of the silly valley culture not realising they are a bunch of deluded maniacs, or just producing absolute garbage products.
its mostly the former.
as for demographics... well, i'm a single data point, but HN has a wide reach. its why a lot of us are here imo.
Facebook has said it was pressured by the Biden administration to censor topics like covid. This is as clear-cut a First Amendment case as you will ever find.
If it's so clear cut then why did SCOTUS throw that case out?
Your being downvoted is amazingly ironic for a topic on the politicization of fact checking. There are hundreds of comments here talking about how objective facts exist and the correctness of fact-checking. You reiterate the statement of the Facebook CEO and what that statement entails, and you are moderated.
But facts are facts right?
Zuckerberg did say Facebook was pressured by the Biden administration to censor covid misinformation, and the Hunter Biden laptop story [0], [1], [2] (multiple left-wing references for good measure). If Zuckerberg is telling the truth, that is a clear cut first amendment violation.
A private company can censor whatever it wants (mostly) but not at the behest of the government, there's law against that.
[0] https://www.pbs.org/newshour/politics/zuckerberg-says-the-wh...
[1] https://www.bbc.co.uk/news/articles/czxlpjlgdzjo
[2] https://www.theguardian.com/technology/article/2024/aug/27/m...
It turns out that “normies” were people who have the kinds of normal, mainstream beliefs that Facebook has spent the past four years censoring.
The only thing that "turns out" is they wish to curry favor with the incoming administration. FB hasn't been censoring much of anything as far as I can tell; there are all kinds of vile, nasty comments all over it. Just unfriendly, unkind stuff, not even political things. It's probably one reason it's kind of struggling as a platform - that kind of thing isn't much fun.
But is it currying favor? Could just as well be "kiss the ring or you'll see your life's work AT&Ted into oblivion"
Perhaps both: it might have started as a pragmatic offer to bury the hatchet, then quickly turned into the never-ending firehose of demands of an extortionist who just realized that he still holds all the cards after the extortee has given in.
The parent's point is that the incoming administration won the popular vote... they are the 'normies' now.
Most voters don't care much about any of the details of this. They're not terribly unhappy with FB because they're using it to keep track of people from high school back in the '90s, or their families, or local recreation groups or something. Or they're not using it at all because it's for old people like me.
This is all just loud, performative subjugation to the incoming administration, that does take things like attacking trans people and immigrants as good stuff.
I would actually offer that Facebook is changing because their base has grown tired of their antics. My normie friends and family have complained of censorship increasingly over the last year. When I asked why we still use the platform, one friend replied: “birthday reminders.” Then I thought that actually does summarize what I use the platform for. Not a great prospect for a company.
What sorts of conversations are you attempting to engage in that it is 'censoring' you? It seems pretty rare to me - even in heated exchanges.
There is a campaign to capitalize on the idea that right wing people are censored.
And therefore all Americans are censored.
This fight has been fought before, at the dawn of moderation. It’s been fought here on HN. Back when people used to hold libertarian beliefs openly. “The best ideas rise to the top”. No, they frikking dont. The most viral ideas, the most adaptive ideas - those are the ones that survive.
Everyone learned that moderation is needed, that hard moderation is the only way to prevent spaces from attracting emotional arguments, harassment, stalking, and hate speech.
Maybe this time its different.
Moderation is thankless, soul-crushing, and traumatic. Mods of r/neworleans effectively became first responders on Jan 1st. I know mods see everything from dead baby pictures, burning bodies, and accidental deaths to worse.
IF this works and reduces the need for mods, great! My suspicion is that it's going to radicalize more people, faster. It's going to support the creation of more demagogues, and further reduce our ability to communicate with each other.
49.8% to 48.3% of the popular vote.
That's a pretty thin advantage, and still barely not an outright majority.
Nearly all the levers of control of the US government, versus almost no control over it: that's a massive advantage. I can't help believing this, not the popular vote, is the motivation.
Exactly. Particularly the power of the incoming President to create bad PR (with 50% of the country) and the House to haul people into public testimony and yell at them.
Not to mention the federal money spigot.
Big companies aren't stupid and are largely amoral.
That's the silver lining through all of that: when right-wing ideologues start imposing their own groupthink model on social media, it stops being fun and people start to leave. Just look at Twitter. It's just not as fun anymore on there.
I expect it was an easy bone to throw the incoming administration, which the tech world learned from v1 is placatable by giving them PR / sound bite wins.
To the broader concern, this feels like Facebook making their original sin again.
Namely defunding and destroying revenue for a task that takes money (fact checking) and then expecting a free, community-driven approach to replace it.
Turns out, hot takes for clicks are a lot cheaper than journalism.
In this case, where is the funding to support nuanced, accurate fact-checking at scale supposed to come from?
Because it sure seems like Facebook isn't going to pay.
> I've never seen a wrong Facebook fact-check
Did you mean to say Note here?
Obeying in advance, especially the Dana White appointment. Not that this move to community notes wasn't also a good product decision.
No, I meant to say Facebook fact-check.
> Facebook's fact-checkers have been that much of a product disaster.
> I've never seen a wrong Facebook fact-check
Confused between these two statements, then.
Do you believe the success or failure of these moderating features comes down to how accurate they are? People actually like Community Notes; they're part of the discourse on Twitter (even if most of them are pretty bad, some of them are timely and sharp). Meanwhile: Facebook's fact-checking features really do work sort of like PSA's for trolls. All the while, fact-checks barely scratch the surface of the conversations happening on the platform.
Facebook and Twitter are also unalike in their social dynamics. It makes sense to think of individual major trending stories on Twitter, which can be "Noted", in a way it doesn't make sense on Meta, which is atomized; people spreading bullshit on Meta are carpet bombing the site with individual hits each hoping to get just a couple eyeballs, rather than a single monster thread everyone sees.
(This may be different on Threads, I don't use Threads or know anybody who does).
Success in what definition?
PR/political success is certainly not correlated with accuracy, given the very act of telling a group they're wrong tends to piss them off.
In terms of encouraging discourse that maximizes user enjoyment of the platform? That's a difficult one. Accuracy probably doesn't do a whole lot there either: HN knows the people love someone being confidently wrong.
Success in terms of society? Probably more yes, albeit with the caveat that only a correction that someone feels good about actually wins hearts and minds. Otherwise they spiral off into conspiracies about "the man" keeping them down. (Read: conservative reality)
It's also important to remember that Zuckerberg only tacked into moderation in the first place due to prevailing political winds -- he openly espoused absolutist views about free speech originally, before some PR black eyes made that untenable.
To me, both approaches to moderation at scale (admins moderating or users moderating) are band-aids.
The underlying problem is algorithmic promotion.
The platforms need to be more curious about the type of content their algorithms are selecting for promotion, the characteristics incentivized, and the net experience result.
Rage-driven virality shouldn't be an organizational end unto itself to juice engagement KPIs and revenue. User enjoyment of the platform should be.
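To make that concrete, here is a toy sketch (the signal names and weights are entirely hypothetical, not anything any platform has published) of how an engagement-only score rewards rage-bait while a score that also discounts regret signals does not:

    # Toy illustration only: hypothetical signals and weights, not any platform's real ranking.
    def engagement_score(post):
        # Optimizes raw interaction volume; an angry reshare counts the same as a happy one.
        return post["comments"] + 2 * post["reshares"] + 0.5 * post["reactions"]

    def satisfaction_score(post):
        # Same interaction signals, discounted by signals that users regretted seeing the post.
        penalty = 5 * post["reports"] + 3 * post["hides"] + 2 * post["angry_reactions"]
        return engagement_score(post) - penalty

    rage_bait  = {"comments": 400, "reshares": 300, "reactions": 200,
                  "reports": 80, "hides": 120, "angry_reactions": 150}
    hobby_post = {"comments": 120, "reshares": 40, "reactions": 300,
                  "reports": 1, "hides": 5, "angry_reactions": 2}

    print(engagement_score(rage_bait), engagement_score(hobby_post))      # 1100.0 vs 350.0: rage-bait wins
    print(satisfaction_score(rage_bait), satisfaction_score(hobby_post))  # 40.0 vs 326.0: hobby post wins

The specific numbers don't matter; the point is that whatever the ranking function rewards is what the platform fills up with.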
> he openly espoused absolutist views about free speech originally, before some PR black eyes made that untenable.
Note that openly espousing absolutist views about free speech means less than nothing. Elon Musk and Donald Trump openly profess such views, while constantly shouting down, blocking, or even suing anyone who dares speak against them with any amount of popularity.
> Do you believe the success or failure of these moderating features comes down to how accurate they are? People actually like Community Notes; they're part of the discourse on Twitter (even if most of them are pretty bad, some of them are timely and sharp). Meanwhile: Facebook's fact-checking features really do work sort of like PSA's for trolls. All the while, fact-checks barely scratch the surface of the conversations happening on the platform.
You're making a whole host of assumptions and opinions about this, with little in the way of data (I get it, you don't work at FB, how much data could you have?), just making blanket statements: "People hate Fact Checks", "People actually like Community Notes" and accepting them as accurate.
I use Facebook, a lot (again: all the politics in my town happens there), and almost nothing is fact-checked; I see one fact-check notice for every 1,000 bad posts I see. I feel like I'm on pretty solid ground saying that what they're doing today isn't working.
Meanwhile: Community Notes have become part of the discourse on Twitter; getting Noted is the new Ratio'd.
Accuracy has nothing to do with any of this. I don't think either Notes or Warnings actually solves "misinformation". I'm saying one is a good product design, and the other is not.
Not seeing fact checks likely means it's working: "Once third-party fact-checkers have fact-checked a piece of Meta content and found it to be misleading or false, Meta reduces the content’s distribution 'so that fewer people see it.'"
The issue with Community Notes is that if enough people believe a lie, it will not be noted. This lends further credence to a certain set of "official" lies.
> I feel like I'm on pretty solid ground saying that what they're doing today isn't working
How does that follow at all?
It's not that they're inaccurate, it's just that they cherry-pick the topics to fact-check and their choice (in my limited experience) is always biased leftwards. You can be absolutely correct and absolutely malicious at the same time.
> I get how the partisan story is easy to tell here, but I'm saying something pretty specific: I think it would have been product development malpractice for this decision not to have been in the works for many, many months, long before the GOP takeover of the federal government was a safe bet.
You're just stating that, in your personal opinion, a scenario would be bad. That says nothing about it actually taking place.
You're expressing your personal opinion in response to a message listing facts supporting the belief the scenario is actually taking place.
Meaning, it's still plausible this is what is actually happening.
Both professional fact-checkers and Community Notes have a pretty low false-positive rate.
It's the false negatives that are the differentiator, but false negatives are by definition invisible to the user.
When you evaluate moderation as a "product" you place more weight on factors that are mostly losers for third-party fact checkers and winners for Community Notes: speed and annoying tone.
But since false negatives are never seen, there's no visible "product" to be annoyed by. Sure, the platform fills up with even more disinfo, but users blame that on other users, not the moderation "product".
And this is where Community Notes fails. Because Notes require consensus from multiple groups with histories of diverse ideological perspectives, when one perspective has an interest in propagating disinfo, no Community Note appears.
Some studies show something like 75% of clear disinfo doesn't get a Community Note on X when it involves a hot partisan shibboleth.
False negatives are mostly invisible failures that make the entire platform worse, but the user can't blame it on a "product" because it's really the absence of a product that's the problem.
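For what it's worth, here is a minimal sketch of that consensus requirement (Community Notes is generally understood to use a matrix-factorization "bridging" model; the clusters, threshold, and function names below are made up purely for illustration), showing how a note on a partisan claim can fail to appear even when most raters find it helpful:

    def note_is_shown(ratings):
        """ratings: list of (rater_cluster, helpful) pairs, cluster in {"A", "B"}."""
        helpful_by_cluster = {"A": [], "B": []}
        for cluster, helpful in ratings:
            helpful_by_cluster[cluster].append(helpful)

        def supports(votes):
            # Hypothetical threshold: 60% of that cluster's raters must find the note helpful.
            return bool(votes) and sum(votes) / len(votes) >= 0.6

        # The note ships only with support from BOTH clusters, not just an overall majority.
        return supports(helpful_by_cluster["A"]) and supports(helpful_by_cluster["B"])

    # Note correcting a hot partisan claim: cluster A rates it helpful, cluster B won't.
    partisan = [("A", True)] * 40 + [("B", False)] * 35
    print(note_is_shown(partisan))   # False -> an invisible false negative

    # Note correcting a non-partisan scam: both clusters rate it helpful.
    scam = [("A", True)] * 30 + [("B", True)] * 25
    print(note_is_shown(scam))       # True

That gate is exactly where the false negatives above come from: if one cluster declines to rate the note helpful, it never ships.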
As a product decision, I agree.
But I think that can still be addressed separately from the fact that all the tech leaders in Silicon Valley are bending the knee to Trump (e.g. the Mar-a-Lago visits, the "donations" to his inauguration, etc.)
I'll give you an example I find analogous. When Bezos forbade the Washington Post from giving a presidential endorsement, he wrote an op-ed, https://www.washingtonpost.com/opinions/2024/10/28/jeff-bezo.... I pretty much agreed with the vast majority of what he wrote there. What I think is total BS, though, is his purported rationale and the timing of the decision. I think it's absolutely clear he did it because he didn't want to piss off Trump should he win (the "obeying in advance" part), which he did. The reason I believe this is that he made the decision so close to the election, he apparently didn't feel the need to do this in previous years, and WaPo made other political endorsements (e.g. Senate races in Maryland and VA) just before the presidential endorsement was banned. Bezos's subsequent Mar-a-Lago visits and Amazon's inauguration "donation" pretty much confirm my view.
In Zuckerberg's announcement, I thought the part he put in about fact checkers being "politically biased" was unnecessary (not to mention dubious IMO), and clearly seemed done to curry favor with the current powers that be.
As someone active in "resistance"-type organization from 2017-2021, with fundamentally the same politics now as I had then: I think all this "bend the knees" shit is mostly working to the benefit of the GOP, and I wish people would stop it. We lost an election, in part because we bet that the median voter was prepared to disqualify MAGA Republicans. They are not. Find a new angle, so we can win in the midterms. This isn't working.
I'm not trying to convince other voters. The "bend the knee" shit is not something I'm saying to try to change opinions. Like you say, clearly the majority of Americans don't care.
But I'm pretty surprised at the outright transparent speed with which all these business leaders were willing to pay these naked fealty bribes, especially since for so long so many of them talked about lofty goals besides just making money.
Italians in the 1930s didn't care either when Mussolini made corporations an arm of the state. But that doesn't mean what is happening now is any different.
I'm pretty sure they do this every cycle no matter who wins, but Democrats notice and recoil when it happens after a Republican win, and vice versa. There's also a titration of the news media mining clicks from a framing that de-"normalizes" the Trump administration. But that ship has sailed: you could say "This Is Not Normal" in 2017, which was a fluke nobody saw coming, but Trump won decisively this cycle, and absolutely everybody knew what we were getting into. It's time for the media to retire the schtick.
Is it a "schtick" to report such brazen cronyism?
I agree with the parent that Americans in general seem not to mind corruption, but we can't become so jaded as to think that it's not even worth mentioning that this is a problem.
Referring to public company CEOs warmly greeting the newly elected president as "brazen cronyism" is a schtick, yes.
It annoys me a lot that I have to point things like this out, because I think Trump is a grave problem for the country, but you have to beat him at the ballot box, and the schtick obviously isn't working there.
Moving employee jurisdiction to suit the incoming administration is hardly the same as a warm greeting though, is it?
In my country we have a different word for people giving large sums of money as gifts to incoming politicians, yet we seldom impose that definition on others. US politics is different and affects the climate here too, even though that population is around 20% or less of all Facebook users.
The way to win is with a more appealing set of policy proposals.
More centralized government control, "Karen" style moralizing, DEI, gun banning, global warming, more bureaucratic (and ineffective) regulation, abortions everywhere and the entire "woke" platform apparently isn't it.
I'd suggest defocusing on those and instead return to being the party of the "working man" and a stable economy.
"Wealthy corporations want to force you to work 80 hours a week to enjoy unfair profits or they will replace you with immigrant labor" should be the vibe while never once speaking about things like systemic racism or climate change. Also "the rent is too damn high!". Definitely don't have the party fronted by people who appear airheaded or unintelligent.
You have to speak to the concerns of the voter which I think are individual freedom and economic prosperity.
Once in power you can do whatever you like of course, as is traditional in politics and Trump won't be any exception.
Unfortunately there is no party of "the working man" since the citizens united ruling opened the floodgates for legal & private bribery, and arguably before that. Bernie Sanders, whatever you think of his proposals and views generally, is the rare exception who stands against the bribery and acts as a true populist, and for that he was undermined and defeated as a presidential candidate. People know the democratic party is two-faced, and I don't see how that can ever change, with money being so essential to US politics now.
MAGA didn't win with money. The democrats spent far more. They won with a message.
I'm fairly sure this is either untrue or unknowable. If the official "Harris campaign" spent more than the "Trump campaign" that doesn't actually mean much, considering how many other avenues exist to spend money that escape public scrutiny.
Even if you could account for all the dark money, that still leaves you with leveraging soft power - e.g. Musk using X as a de facto propaganda arm of the Republican party, which doesn't show up on any books.
I'm pretty sure it is knowable. The democrats spent far more.
Musk and X propaganda helped. Also Rogan and other podcasters, but look at how much propaganda the democrat side has/had. All the major media outlets. Reddit, etc etc. Plus the power of the federal government in censorship, courts and the like.
Look, I don't really care and don't trust anyone running for office much. I'm just pointing out what a winning platform would look like. MAGA won because they were speaking to things that more people found important. When the Democrats figure this out, they will be in the winning seat again. If they don't, then they will not win.
I'm saying that the democrats lost because they keep taking corporate/oligarch money and are at odds with the values of the people who would otherwise support them. They aren't the party that supports the little guy anymore, so they're basically without an argument aside from "not Trump". I don't think you understood my previous post, which was a critique of the democrats, which used to have "the working man"'s back.
Republicans have always been and continue to be pro-elite, pro-oligarchy, and against the economic interests anyone outside the upper class. They still have a better message than the democrats at the moment.
Ah gotcha. I misread and agree completely with what you state. That does appear (to me anyway) exactly what happened.
> More centralized government control, "Karen" style moralizing, DEI, gun banning, global warming, more bureaucratic (and ineffective) regulation, abortions everywhere and the entire "woke" platform apparently isn't it.
I totally agree with that.
> The way to win is with a more appealing set of policy proposals.
I completely disagree with that. At this point I think it's a bit laughable to think that the majority of Americans care about policy proposals. Trump's appeal, I believe, is that he gave a voice and an outlet for anger to large swaths of people who felt they had been ignored (which they largely had) and talked down to for years. The "elites" (often of both parties) had basically told people in hollowed-out communities and those with failing economic prospects that it was their fault - you just should have gotten a college education, or retrained for the new economy. The Democratic messaging made things worse by also saying "Hey, you know those social standards that were the norm up until the mid 90s? Well, if you believe those, you're a knuckle dragging bigot."
When people have simmering anger and rage, a "nice guy" approach isn't going to cut it. That's why so many people vote for Trump even when they find so many aspects of his personality distasteful.
I'm baffled why a politician hasn't taken more of the lead with the rage that has exploded since the CEO murder. Some elites on the right are trying to frame this as "The crazy Left condones murder!", while I see some elites on the left doing their usual useless finger wagging against insurance companies (see Elizabeth Warren). I just don't understand why a politician hasn't taken this torch and gone into "We're going to tear it all down" mode. I mean, of course there's Bernie, but at this point it needs a younger and more "firebrand" type of person.
I don't understand your point at all. Community Notes on E(x) has been ineffective, because ultimately the point of moderation is to delete posts which aren't true so they receive no reach and spread no disinformation.
Not to turn them into a public debate which might as well continue in the posts themselves.
Meta's political history has consistently been shady. Meta patented behavioural targeting technology in 2012 and was fined $5bn for its "accidental" links to anti-democratic election-fixers Cambridge Analytica/SCL, who have ties to far-right oligarchs in the US and the UK.
If you're looking for an ideological position, look there. The historical record is absolutely clear.
And then there are comments from Meta insiders, who - perhaps - have a clearer picture of what's going on than outsiders do.
As for malpractice, consider the recent AI rollout and rollback. It was an absolute fiasco for all kinds of reasons, PR and technical, not least of which was the way the bots themselves turned on the company.
Threads has already had a mini-exodus because of slanted moderation.
Meta is simply not a trustworthy company. So "Oh, let's scrap our moderation and do community notes" is hardly an isolated slip-up on an otherwise unblemished record of noble public service.
https://fortune.com/2025/01/04/meta-ai-accounts-bots-false-r...
https://www.platformer.news/meta-fact-checking-free-speech-s...
> ultimately the point of moderation is to delete posts which aren't true so they receive no reach and spread no disinformation.
That assumes that the correct amount of disinformation is zero. Personally, I wish to maintain my right to be wrong, and my right to tell others of my wrong ideas, and I hope they maintain the right to tell me I'm full of it.
Your position on censorship (moderation, as you call it) is your opinion, and your opinion only, and it is at odds with the position of X, and now Meta, who are taking the position that the point of moderation is to respect everyone's right to speech while making it very obvious, to those that care, that the speech may be less than truthful. Essentially everyone gets to speak, and everyone gets to make up their own mind. What a concept!
I also maintain the position that truth dies in the dark and lies die in the light.
Most people aren't stupid, community notes breaks the echo chamber and provides a counterpoint.
That debate of free ideas has been working pretty well so far. So much so that we can usually tell who the bad guys are by how much they create darkness: how much they take on the role of arbiters of truth, how much they silence critics. Think Soviet Russia or North Korea for some good examples.
I don't think the point of the fact-checkers is so that facebook users like them, and it seems odd to pretend that was ever the point.
>I think it would have been product development malpractice
The thing is, both community notes and top-down moderation, if they have any purpose at all, are product malpractice. If they work, they are always going to be intrusive, because that's what they're supposed to do: correct factually wrong information. Community notes are the neighborhood police, top-down moderation is the feds, but if they do their job, either one is going to be annoying by definition.
If they're not intrusive they don't perform a corrective function and that's what largely happened to community notes. As time goes on they're more and more snarky and sarcastic meta comments rather than corrections.
But because they are community driven, they are snarky in a way that represents the community, which makes me question if they are intrusive at all. They are what the community grows them into.
It seems pretty clear to me that one of these features generally makes users happy and, at the same time, does correct some misinformation, and the other catches about 0.0001% of the bad stuff and turns it into advertisements for how bad the site is.
How can you possibly call community notes on Twitter a "success"? They demonstrably have not reduced the amount of actively made-up shit on the site, the same people who complain about a fact checker saying "no, vaccines do not change your DNA" are just as upset when that info comes from the community notes box, and the only reason there hasn't been widespread anger about them is that Elon wants to pretend it was his idea.
I'm not saying Twitter is good. It is demonstrably not. But you're kidding yourself if you thought Facebook fact checking was suppressing the antivaxers and flat-earthers.
Oh, so community notes on twitter are actually not good, but its good that Facebook is implementing them anyway? You make no sense and are constantly equivocating back and forth in all your different posts.
They're not there to eliminate made-up shit, they're there to add context - e.g. "this post is made up and demonstrably false".
If it was in the works for a long time, then Zuckerberg has been planning to bend the knee to Trump for a long time.
Today, Trump said in a press conference (video at [0]):
Q: "Do you think Zuckerberg is responding to the threats you've made to him in the past?"
TRUMP: "Probably. Yeah. Probably."
This tells us all we need to know. It has nothing to do with facts and everything to do with yielding to political pressure to bend the media to his whims.
This is just the most standard and basic elements of autocracy, the autocrat must make all the institutions serve him, not the people. This includes not only the branches of government, but also of society, starting with the press, but also the corporate world, the academy, social groups, and everything else.
This is bog-standard autocracy, not democracy.
[0] https://x.com/atrupar/status/1876683641113248036
Bending left and right according to the government of the day doesn't tell you where the true center is.
Autocracy is not Left or Right. It is corrupting all the institutions to serve the will of the autocrat, not the will of the people.
Bending the knee to the autocrat, in this case explicitly changing your rules and operations to enable the autocrat and his followers to more easily spread their lies and intimidation is not political flexibility, it is obeying in advance to be complicit in implementing the autocracy.
It would be better if you didn't have to learn that the hard way, but our educational system and information distribution system has failed. This is just a more advanced and accelerated example of that failure.
[Edit: yes, my mistake to phrase it as political pressure — it was nothing of the sort — it was authoritarian extortion. Note Zuck has a case before the FTC.]
Autocrats don't get democratically elected, as far as I understand. Trump is a democratically elected leader who will end his term at most in 2028. Autocrats tend to not be democratically elected (or to change the rules once they're elected so they're never deposed). Zuckerberg will bend his knee to the Democrats if they win next term. This is not autocracy, this is just knowing where the wind blows.
That doesn't make sense with the common use of the word. Autocracy is a much wider term than a militia style dictatorship, and is mostly used in the context of democracy.
Most, if not all, autocrats are democratically elected (with some wildly varying definition of democracy of course).
In current times, democratically elected autocrats include Putin of Russia, Orban of Hungary, Erdoğan of Turkey, Chavez/Maduro of Venezuela, Bukele of El Salvador, and more. Jumping back, the most notorious autocrat, Hitler, was democratically elected.
Autocracy is not typically imposed by conquest, it is mostly created by corruption of institutions. It is not binary, it is on a scale.
In full democracies, all the institutions of government, legislative, executive, and judicial, are independent and serve as checks & balances against each other. And the institutions of society, industry, trade, press, academic, sport, social, etc. are also fully independent.
Under autocracy, all of these governmental and societal institutions are corrupted to bend to the will of the autocrat, often by his using force of government to his corrupt ends.
This is exactly what Trump just admitted to and Zuckerberg just did — he threatened Zuckerberg with unfair government actions, and Zuckerberg is now converting Facebook to work to further Trump's goals instead of remaining an independent institution.
Here's just a few resources on elected autocrats [0] https://www.scientificamerican.com/article/meet-the-new-auto...
[1] https://nps.edu/-/nps-professor-takes-a-deep-dive-into-elect...
[2] https://academy.wcfia.harvard.edu/publications/democrats-and...
[3] https://press.umich.edu/Blog/2022/07/Elections-in-Modern-Dic...
> like people being in Texas makes them more objective?!
This is the least charitable interpretation. Obviously, it is not talking about a single person moving to Texas suddenly changing colors like a chameleon (although I suspect there is quite a bit of merit to that due to groupthink and community speech policing in BayArea/LA).
And yes, I think it isn't a stretch to think Texas would be a more objective representation of the general US PoV and less of a monoculture than FB sites in California. This is not a value judgement, just a natural function of the distribution of people.
Is the distribution of people in Austin so very different from the Bay Area?
Both states are internally diverse. And it’s just silly to suggest that “groupthink and community speech policing” is something that exists in California but not Texas.
It's slightly but consistently different. I moved from Austin (after 30+ years in TX) to the west coast, and the groupthink / speech policing is extremely noticeable to me (I spend most of my time in Portland and SF), even though it's not extremely different.
That being said, I think a more nuanced but still political take on the move is: having moderators is important, and that moderation is less likely to be pressured to shut down if the moderator jobs are actually in a red state. Further, the jobs are low-skill jobs, so they can be moved back (or elsewhere) as needed. An easy move even if the political capital is minor.
> Is the distribution of people in Austin so very different from the Bay Area?
If we just go by presidential election, Travis County's result is more balanced than SF and San Mateo, almost on par with Alameda county, so the answer is "slightly." However, the moment you get exposed outside the core Austin area, you deal with predominantly red areas. To get the same effect you have to go as far as Placer County or Sonoma, so I don't think the FB workers in Bay Area (SF/Menlo Park) have quite the same level of exposure.
I don't see how it matters where the mods are located when their instructions still come from California.
Of course it matters. Have you seen the emotional reaction people get to Trump/Kamala posts?
No because I don’t use these shit platforms. But the point is if policy says to moderate content of type ABC then I don’t see why someone in TX would do something different than someone in CA. It’s the same policy.
> “maybe it's just a way of saying that certain kinds of 'content' like attacking trans people is going to be ok now”
The new policy explicitly says that allegations of mental illness are not allowed except if the target is gay or trans, so, yeah…
https://www.wired.com/story/meta-immigration-gender-policies...
> it allows “allegations of mental illness or abnormality, based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”
I think you misread that: it allows allegations of mental illness even on the basis of gender and religion, which before weren't allowed. It still allows allegations of mental illness based on other factors, because they were never disallowed in the first place.
No, it’s explicitly so that allegations of mental illness are forbidden except if the target is gay or trans.
Here’s another source:
https://www.nbcnews.com/tech/social-media/meta-new-hate-spee...
And the original document:
https://transparency.meta.com/en-gb/policies/community-stand...
Tier 2 forbids insults based on:
Mental characteristics, including but not limited to allegations of stupidity, intellectual capacity, and mental illness, and unsupported comparisons between PC groups on the basis of inherent intellectual capacity. We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like “weird.”
There’s no ambiguity. Allegations of mental illness or abnormality are explicitly allowed based on gender or sexual orientation, but no other reason.
There is ambiguity, insofar as the whole document is a word salad of sentence fragments and rambling sentences that branch off in different directions without logical coherence.
It takes quite some effort to discern the intended meaning, which I agree matches your interpretation.
Even the tier system is declared but its meaning is never explained.
Calling out "weird" and no other word is hilarious, suggesting that Team MAGA is still sore over how much people enjoyed using that term to describe the bizarre behavior of Trump and company.
You forgot the biggest one – replacing Nick Clegg as their global policy chief with Joel Kaplan, a Republican lobbyist.
Seems like not the biggest one? That seems like the kind of role you take knowing you're going to hold it only so long as you have a rapport with the current governing majorities.
This is on par for Meta. Don't forget that Sheryl Sandberg was their Democratic Party liaison.
I don't know Dana White and I don't know any predecessor. It isn't really relevant, though, apart from which actions they actually take in their approach.
Your second point, about why people in Texas might be less biased: perhaps it's the distance from the primary locations of tech companies? I don't think that is convincing, but a lack of trust is the most severe problem for fact checkers.
I believe the concept cannot work though, especially if I look at the broader context.
No, user feedback is the better control mechanism. Also, these fact checkers would never be independent, and they would develop their own interest in ever more moderation. They would never report that there isn't any more controversial content to be checked, because that has been their raison d'être from day one.
> I don't know Dana White
Oh, he runs the UFC and also the new slap fighting league. What that has to do with Facebook? I have no idea.
Facebook is a rhetorical slap fighting league.
"Dana White added to the board."
Almost anyone added to the board will have some kind of political leaning. Why no mention of this when hard-left leaning people were added to the board?
"attacking trans people is going to be ok now."
This was never okay (and I don't think it's going to change). If you mean something like an opinion on child gender surgeries, this should have always been allowed and you can ignore if you don't agree and community notes will certainly have more information on it.
"Blue Sky being available and gaining in popularity."
So you dislike bias, but mention one of the most biased social media platforms on the Internet?
Zuckerberg just admitted in his video that the Biden administration was working with Facebook to censor users. Why no mention of this? Isn't this also political bias that needs to be stopped?
It has nothing to do with 'bias' or protecting anyone and everything to do with authoritarians banning and silencing people they don't like, which is exactly what Blue Sky has done from day one and everyone against this change truly wants.
Time to go NOSTR.
Less drama, full speed.
> The ideological bits are: ...
Should we expect Meta to do 180-degree U-turns every 4 years when another party wins the US presidential election?
Only when the incoming party has threatened to go after anyone who was against them with criminal charges.
This sort of applies to both parties; you need to be more specific.
No it doesn't
Given the extremes of presidential candidates, I think the answer is Yes, since there exists no middle ground between fact and fiction.
Or I guess you can just capitulate and leave it all to users to handle on their own, and wash your hands of the whole thing.
No, I expect over time they'll gradually settle into an equilibrium that works in both sets of circumstances.
Moving the moderation teams to Texas may be a way to induce a lot of the people working there to quit.
Texas is of course also an easier place to run a business.
I can’t help but roll my eyes at mindless euphemisms like “attacking trans people.”
There are very serious issues involving trans people with no easy answers. Like allowing minors access to irreversible treatments. Like women’s sports. Like the safety of women only spaces.
I bring this up because on so many questions like these, the progressive reaction is to shut down any discussion and isolate themselves from exposure to any ideas different from their own.
It doesn’t work. And it doesn’t help anyone.
And maybe this has something to do with why Facebook is migrating to a “Community Notes” model.
Is it not possible that ‘attacking trans people’ is both (sometimes) a euphemism for criticism of maximalist positions and (at other times) a perfectly normal term that designates approximately what ‘attacking x’ generally means? There is such a thing as an unsubstantive and utterly unpleasant insult explicitly motivated by the fact that its target is trans. Many trans people say that there are many such, and one does not need to believe everything that trans people say (surely with the result of inconsistency!) to think that the evidence they present is not wholly concocted.
Others may misidentify respectable, good, or correct arguments as ‘attacks’ in narrower senses, but that no more makes the underlying categories meaningless than the misapplication of such descriptions as ‘true’, ‘valid’, ‘scientifically established’, or ‘by definition’. I have no general pithy answer to what one should do about the sorts of attack I have described, but I venture that it is reasonable to talk or attempt to do something about them. What term would you prefer?
It’s possible theoretically.
In practice people complaining about attacks on trans people almost always want to shut down discussion about related topics all together.
I think that it would help if you were to suggest a term people who don’t want to ‘shut down discussion about related topics all together’ should use. Otherwise, the effect (although perhaps not the intention) of deprecating the term ‘attacks on trans people’ is that the sort of discussion you admit is possible theoretically will be impossible for want of a suitable term to designate the sorts of attacks it concerns.
Yes because it has no real life consequences like https://www.nature.com/articles/s41562-024-01979-5.epdf or https://goodlawproject.org/rise-of-deaths-young-trans-people...
I can't help but roll my eyes at "serious issues". You know, in most states these anti-trans laws were passed targeting handfuls of children in each state, sometimes a single child. But oh yes, that's a serious issue for sure right now.
[flagged]
This is a cheap political gotcha accompanied by a litany of unevidenced and vague allegations against a political out-group (which "particular group"? On what basis do you assert that "some AI somewhere" is involved, and why would that matter? Not to mention the tired "dog whistle" cliche) and a demand for self-censorship.
You've also made a bold claim about the relevant statistics without any kind of citation.
My understanding is that a higher standard of discourse is expected on HN.
But aside from that meta point: your argument seems to rest on the idea that your ideological opponents would prefer for cisgender teenage boys to be able to get mastectomies when they exhibit unwanted breast growth. But the source your interlocutor found suggests that the "breast reductions in teenage boys" you're talking about are in fact dominantly performed on transgender teenage boys (i.e., people your ideological opponents would consider "teenage girls"). So the intended gotcha doesn't even work; you haven't identified any kind of inconsistency in the position or potential for a "self-own".
> basically making it harder for teenage boys to be manlier
Making it harder for teenage boys to have surgery to fit a stereotype sounds like a win?
I don't think you thought that point through.
My point was that the breathless hyperbole about "gender affirming" surgery is actually in direct opposition to the "traditional male stereotype" of the same group--which suggests the concern is political rhetoric rather than a genuine issue.
As to whether teenage boys should be getting that surgery? That's .. more complicated. Should one that lost 100+ pounds to be healthier be able to get that surgery? Probably. How big should the growth be before it becomes "medical"? Don't know.
This is why stuff like this should be left to doctors who actually understand the circumstances of the patient.
> thus invalidating that the concern is a genuine issue rather than political rhetoric.
You didn't invalidate the concern at all and just if anything bolstered it. One reason why people voted for Trump (I wouldn't vote for him myself) is that any discussion on these topics gets called a phobia or an ism.
> Should one that lost 100+ pounds to be healthier be able to get that surgery?
If they're an adult, they can do what they like.
> This is why stuff like this should be left to doctors who actually understand the circumstances of the patient.
Just because someone is a doctor does not mean they have an unquestionable moral or ethical compass, there are good doctors and bad doctors. When homosexuality was illegal in the UK, doctors would chemically castrate gay men.
This is exactly what I’m talking about.
A demand to censor any opinions dissenting from what you already believe.
Calling a legitimate argument a "dog whistle" is a classic tactic OP is talking about which is used to shut down discussion. Just debate the merit of what he's saying rather than try to label him as an enemy.
Breast reduction for children IS in fact irreversible. It causes huge scars and trying to get breast augmentation later is not actually restoring their body to its natural state. It is definitely something that is controversial. Also putting children on hormones is within scope of this conversation and DOES happen.
There are lots of people who detransition and regret their decision. Children who have been sterilized for life and have permanent scars. It's completely valid to have discussions about whether kids should be able to make these decisions (they shouldn't).
You are repeating the talking point without including the number:
The number of those kinds of surgeries people claim to be "oh so concerned" about is in the low double digits--generally low single digits--normally zero in a year.
When you get to a medical procedure that is incredibly rare, the medical indications are generally really, really unique and should be left to doctors. (Breast implants in girls are simply not done until 18+ unless cancer is involved, for example.)
Despite what people seem to think, doctors don't just do this stuff randomly (at least in the US). They can and will lose their license for doing this kind of thing unless they follow established guidelines. And all those guidelines dictate that this kind of stuff is simply not done until after 18 unless there are incredibly extenuating medical circumstances.
> Breast reduction for children IS in fact irreversible. It causes huge scars and trying to get breast augmentation later is not actually restoring their body to its natural state.
I have yet to meet a girl or woman who had breast reduction and regretted it. See: Soleil Moon Frye, for example. She had genuine health issues. And, even still, she had to fight with her doctors to get it done at 16 rather than wait until 18.
> Children who have been sterilized for life and have permanent scars.
Cite examples. I suspect vastly more children have been sterilized for life from circumcision complications than from any other gender surgery.
>You are repeating the talking point without including the number
You have not provided any numbers of your own. Your interlocutor found the same source (https://www.reuters.com/investigates/special-report/usa-tran...) I did by putting "transgender youth surgery usa" into DDG.
Quoting:
> ...These drugs, known as GnRH agonists, suppress the release of the sex hormones testosterone and estrogen. The U.S. Food and Drug Administration has approved the drugs to treat prostate cancer, endometriosis and central precocious puberty, but not gender dysphoria. Their off-label use in gender-affirming care, while legal, lacks the support of clinical trials to establish their safety for such treatment. ... Over the last five years, there were at least 4,780 adolescents who started on puberty blockers and had a prior gender dysphoria diagnosis...
And more than that for hormone treatment:
> At least 14,726 minors started hormone treatment with a prior gender dysphoria diagnosis from 2017 through 2021, according to the Komodo analysis.
And far more than "low double digits--generally low single digits--normally zero" for surgeries:
> In the three years ending in 2021, at least 776 mastectomies were performed in the United States on patients ages 13 to 17 with a gender dysphoria diagnosis, according to Komodo’s data analysis of insurance claims. This tally does not include procedures that were paid for out of pocket.
(And also does not include cisgender patients without gender dysphoria but with unwanted breast growth.)
Thanks for providing actual numbers!
I would just like to say the discussion under your comment is exactly the kind of productive discussion citing papers and statistics I want to see more of.
Too many progressives want to terminate such discussions by censoring any dissenting opinions and attacking any kind of disagreement as bigotry.
> The number of those kinds of surgeries people claim to be "oh so concerned" about is in the low double digits--generally low single digits--normally zero in a year.
In the US it's hundreds of such surgeries each year, and rising, per https://www.reuters.com/investigates/special-report/usa-tran...
This is a lower bound as not all of these young girls get their breasts removed through health insurance, some will be paid for privately.
All right, fine. Let's use your definitions. Here is a report from the US in 2022: https://pmc.ncbi.nlm.nih.gov/articles/PMC9555285/
From 2013-2020 in Northern California we have:
> Among the 209 adolescents who underwent gender-affirming mastectomy, only two expressed regret.
> In our cohort, two patients (0.95%) expressed regret; one inquired about reversal surgery, but neither had undergone reversal surgery within follow-up periods of 3.7 years and 6.5 years.
Note that the follow-ups extend into post-teenage years and most patients are very satisfied.
> Gender-affirming mastectomy, also known as “top surgery,” is the most prevalent surgery requested when considering all transgender adolescents, whereas “bottom surgery,” which affects genitalia and fertility, is relatively more complex and mostly performed after age 18.
As far as I can see, this is a medical system that is being very conservative (especially involving irreversible effects on fertility), involving parents/guardians at all stages, and prefers therapy first, hormones second, and surgery only as a very final choice. And note this level of conservatism in a system in Northern California--which is likely to be the most accepting of such medical actions.
So, if you are advocating that this should not be the case, understand that you are directly attempting to legislate the complex relationship between parent and teenager, as well as both of them communicating with a medical professional, for something which evidently is a neutral-to-positive outcome for 98+% of the patients involved.
What right do YOU think you have to enter into that conversation at all?
Did you read this section of the paper?
> Our study has several limitations. First, its retrospective design meant we were unable to measure patient satisfaction and quality-of-life outcomes. Complications and any mention of regret were obtained from provider notes, which may be variable, and thus both may be under-reported. In addition, although an integrated health care system allows for continuity of care, some members may have transferred care or changed their insurance status and thus, subsequent complications, or reversal operations, would not have been captured. Next, our study was conducted at KPNC in an insured cohort of individuals with access to gender-affirming medical and surgical care. Therefore, our outcomes may not be representative of the general population, many of whom lack similar access to care. Finally, the time to develop postoperative regret and/or dissatisfaction remains unknown and may be difficult to discern.
You state that "the followups are into post-teenage years and most are very satisfied", but the authors were very explicit about not being able to determine this due to the study design.
The authors also report that:
> The median age at the time of referral was 16 years (IQR=2) and ranged from 12-17 years. Patients had a median post-operative follow-up length of 2.1 years (IQR 1.69).
Which implies that for many patients, the follow-up would have been within their teenage years.
Not only that, but the number of kids on hormone blockers is in the thousands (and increasing a lot every year). It's claimed that their effects are reversible, but that is false: they lead to sterilization if the timing is wrong.
https://www.nhs.uk/conditions/gender-dysphoria/treatment/
>Long-term gender-affirming hormone treatment may cause temporary or even permanent infertility.
And the worst part of all:
>56 genital surgeries among patients ages 13 to 17
That's 56 kids who were permanently sterilized before their brain was even finished developing.
I have nothing against trans people, but many people draw the line when it comes to kids.
According to this way more recent study, they are totally reversible: https://www.sciencedirect.com/science/article/pii/S0929693X2...
And this one says the same: https://academic.oup.com/jsm/article/20/3/398/7005631
And then there's an article from Yale that actually disproves the Cass report, on which the NHS guidelines are based: https://law.yale.edu/sites/default/files/documents/integrity...
> I have nothing against trans people, but many people draw the line when it comes to kids.
Except when those children happen to be trans; in that case they're not allowed to exist, or they're mutilated for life, even though it's easily preventable
I appreciate the study links, but it makes it really hard to take you seriously when you claim trans kids are not allowed to “exist”. That’s extreme hyperbole, as if they’re still alive they obviously exist.
If you don't allow for proper treatment like social transitioning and puberty blockers, they can't be themselves and therefore they can't exist.
On top of this, there's also the risk of those kids committing suicide because they can't get proper treatment, which is only getting worse with all the anti-trans laws. See https://www.nature.com/articles/s41562-024-01979-5.epdf
That study you cited is seriously flawed. Please see this detailed critique: https://genderstats.substack.com/p/activism-based-rather-tha...
So is that "critique"
Do you have any substantive criticism you could share?
For example, how it cites the Cass report, which has been debunked quite a few times already
The Cass Review covers a lot of ground. Which parts of relevance to that article are you claiming have been "debunked", and on what basis?
I posted one of the better critiques (by Yale) already in the parent comment you're reacting to
Okay, which parts of the Review of relevance to that article do you believe McNamara et al have successfully refuted, and on what basis are you making this claim?
>According to this way more recent study they are totally reversible: And this one says the same:
I see nothing in your links that supports those conclusions. The second one at least asserts that recipients overwhelmingly don't want to reverse the effects, but this too is a complex topic (see e.g. https://slatestarcodex.com/2018/09/08/acc-entry-should-trans... ).
Also, the link you're responding to isn't a "study", but rather a position document from the NHS (UK national healthcare).
> I see nothing in your links that supports those conclusions.
I'd start with chapter 5.2.1.7 and go from there.
> but this too is a complex topic (see e.g. https://slatestarcodex.com/2018/09/08/acc-entry-should-trans... ).
You can either force a trans kid to develop the wrong kind of secondary sex characteristics, with all the trauma and painful corrective procedures that will follow later in life, or you can let them take a pill a day that halts puberty until they're old enough to make that decision. That really doesn't seem difficult to me.
> Also, the link you're responding to isn't a "study", but rather a position document from the NHS
I know, but it's still based on the Cass report, which claims to be a study.
>I'd start with chapter 5.2.1.7 go from there.
As far as I can tell, you linked to abstracts for paywalled academic papers.
>You can either
The point is about the objective fact of what the kids want. Your moral judgement of what should be done as a result is irrelevant to that.
> As far as I can tell, you linked to abstracts for a paywalled academic papers.
Just scroll down, no paywall.
> The point is about the objective fact of what the kids want. Your moral judgement of what should be done as a result, is irrelevant to that.
This has nothing to do with my moral judgment. If a kid gets diagnosed with gender dysphoria, they should get proper treatment. Social transition in combination with puberty blockers is the known effective treatment.
Not sure about the US, but here gender-dysphoria in children has to be diagnosed by a team of professionals that aren't allowed to steer them in any way.
[flagged]
Calling trans men "distraught girls" is just pure transphobia.
No, I just don't share your belief that these young girls are boys, or men, or male. There is nothing "phobic" about that.
Being “distraught” is pretty much inherent to being trans.
It’s the feeling that you are in the “wrong body”. That’s going to be distressing to anyone.
Not necessarily, see for example: https://www.youtube.com/watch?v=Lj4V-Nme86U
One of the challenges in discussing this issue more broadly is that "trans" encompasses such a wide range of different groups with very little in common, from the distraught young girls who want surgeons to cut out their breasts, to the middle-aged men who picked up a cross-dressing hobby, to the trenders who got a colorful new haircut and started making pronoun demands of others.
Surgery on males for gynecomastia IS classified as a "gender affirming surgery" in the statistics regardless of what you, personally, think.
So, when you see statistics about this, know that that particular operation is almost all of the cases.
It is not. Please see my other comment where I link to statistics published in an article from Reuters: https://news.ycombinator.com/item?id=42628952
The numbers for gender-affirming mastectomies only include children with a diagnosis of gender dysphoria, i.e. girls who want to be boys.
>There are very serious issues involving trans people with no easy answers.
Wait what?
> Like allowing minors access to irreversible treatments.
According to the standards of care, minors should only get puberty blockers, which are totally reversible. A new study was released a few weeks ago, actually based on facts: https://www.sciencedirect.com/science/article/pii/S0929693X2...
> Like women’s sports. Well, when a trans woman has been on HRT for a few years, her muscle mass has been grown entirely under estrogen. This causes a lot of muscle atrophy and a massive drop in strength. That's why trans women have been allowed to compete with cis women for the last 25 years.
> Like the safety of women only spaces.
How's that even remotely relevant to transgender people? Are you really calling all trans women perverts, or are you simply afraid that men will pretend to be trans? Because it's a lot easier to pretend to be a janitor.
The reversibility of puberty blockers is highly disputed.
Whether and under what circumstances trans women have no advantage over cis women is a highly complex question.
We already have men who freely admitted to claiming to be trans solely for the purpose of accessing women’s locker rooms.
> The reversibility of puberty blockers is highly disputed.
Not really, for more information about that read the study I posted.
> Whether and under what circumstances trans women have no advantage over cis women is a highly complex question.
Again, not really, except for all the misinformation online. If trans women have such a high advantage, why haven't they dominated the Olympics for the last 20 years?
> We already have men who freely admitted to claiming to be trans solely for the purpose of accessing women’s locker rooms.
So? This happened maybe once or twice in the entire world, where pretending to be a janitor is something that's being done in every spy movie. Should we also ban janitors?
> > Whether and under what circumstances trans women have no advantage over cis women is a highly complex question.
> Again, not really, except for all the misinformation online. If trans women have such a high advantage, why haven't they dominated the Olympics for the last 20 years?
Not really sure why you specify 20 years, but I'm too lazy to go through the history of IOC positions to figure out the one 20 years ago.
Because looking at the current one already provides the answer. The IOC doesn't take the position that it is a simple topic.
The wording in https://olympics.com/ioc/human-rights/fairness-inclusion-non... (and click through) is quite clear that they see a tension between inclusion along the axis of sexual identity and a continuation (or successor) of the male/female category split.
>Should we also ban janitors?
Yes.
Unequivocally, yes.
[flagged]
Why wouldn't puberty blockers be reversible?
What's dubious about that peer-reviewed study?
Who's talking about males? Trans women on HRT are not male; all the biological processes in their bodies change because of the hormones.
This comment fundamentally misunderstands what “male” means.
No matter how inconvenient a truth, humans cannot change sex.
Where is your actual evidence that puberty blockers are reversible? They are male. Their reproductive systems are organized around creating sperm, not eggs. HRT does not change a male into a female. There are myriad aspects of biology that still make them male and confer all such advantages in athletics. This is just reality.
Disputed by the disingenuous. Notice who they always exclude from the restrictions on those "dangerous drugs"? Cis children. Magically, that 0.01% of the population faces absolutely zero issues.
[flagged]
Why not? What's not reversible about them?
> most of us would be fine with some experimentation
This is why ATProto is a great foundation for the next generation of social media applications. It makes experimentation easier and open to all. It removes the cost of switching to better alternatives. ATProto enables real competition on a single, common social media fabric.
No it isn't. The only implementation of ATProto so far has been heavily criticized for immediately blocking anyone with the wrong opinions, while at the same time permitting pedolovers to post without much trouble (that butterfly logo is a well-known pedophile logo).
More reports about the awful actions of bluesky/ATProto: https://www.newsweek.com/conservatives-join-bluesky-face-abu...
The Bluesky pedo trope is a right wing falsehood, yet another piece of their misinformation agenda
ATProto is an open protocol; anyone can add content to the network. Bluesky is a company that operates the most-used application, a microblogging platform like Twitter.
Musk Social has far more awful actions and far more awful personal posts by the oligarch himself. The "awful" thing of blocking trolls on Bluesky is what makes it a place with more and better engagement. We don't all need to read all the awful shit people write online in the name of "free speech". I have every right to ignore or remove content I don't like from my information diet. The benefit of ATProto is that if you don't agree with the content moderation policies of Bluesky, you can just write a different client (many already exist) and subscribe to different moderation providers (many already exist), all without having to rebuild your social followings.
Threads is confusing as all hell. Who are these random people? Which post am I replying to? Does this appear on my Instagram?
GP was asking how fact checking is better than community notes, but you're saying that Meta's community notes will be worse than fact checking, which may be true but isn't responsive to GP's question.
Perhaps part of it is the optics that California is interpreting it for other places?
How is it any better for Texas to interpret it for other places?
Not saying it’s better for anywhere, only how California might be seen.
Because liberals in Austin, Texas have far more experience with what it means for liberal and conservative opinions to coexist in one place, vs. California where liberal opinions are the default and everything else must be shunned.
Bluesky is what Mastodon was when Musk bought Twitter (now X).
Also, currently in the App Store (iPhone), Bluesky sits at 167, Musk's X at 46, and Facebook at 19.
Austin is much closer to the center of mainstream American sentiment than the SF Bay Area, but it’s still to the left of center.
And Redding, California is far to the right of it.
It's just coded language for who they're going to favor, otherwise it makes no sense at all, as it's possible to find people of all political stripes in both states, as well as employees who would take their duty to stick to the facts very seriously.
It’s a cost reduction garbed in PR.
They have teams in Austin already.
[flagged]
[flagged]
[flagged]
Parent obviously meant "center" to be the political center of the U.S. given the previous sentence. I'm not sure they're correct in either statement (not having investigated in any way), or that this is a reasonable thing to consider for a global platform (to the extent that Facebook is one).
Nonetheless, it's trivially true that somewhere in the US must be to the left of the political center of the U.S.
This is a statement about yourself, not the US.
Not really? The democratic and republican parties are both classical liberal parties, invested in business and capital as the standard and correct way to organize a society. Classical liberalism is a center-right ideology, globally.
Show me the party in the U.S. that wants to abolish private property, wants to provide food, healthcare, and housing to all, that wants to nationalize key industries, that wants to govern from a standpoint of "wellbeing for all". If you can point me to a place where that's the prevailing ideology, I'll gladly recant the idea that no place like that exists here.
You are not using the term “left of center” how most people do. Which is fine if you want to but then don’t get surprised when you have to explain yourself every single time.
BTW as an actual “classical liberal” I find it hilarious you describe the two parties that way.
Which of the two isn't a classical liberal party?
Just where exactly do you think the centre of the political spectrum is if anything left of it means full-on Marxist-Leninist communism?
Social democrats (e.g. Nordic model) are left of center, but aren't MLs or communists. Anarchists (e.g. Kropotkin) are left of center but aren't MLs or communists.
There's plenty of room between the center and Marxist Leninism.
I would say many labor politicians are centrist. Some democrats are center, some are center-left. Some are center-right.
Some members of Liberal parties are centrist.
The center has tons of parties in it.
You could just as easily say that the Republicans and Democrats are both left of center because neither party wants to restore a politically active monarchy, establish a national church and reform law and government under explicitly religious lines, restrict and revoke citizenship based on ethnicity, or install a military government. You might say, "but those are all crazy far-right things that no sane developed country would do", but I think nationalizing industries and abolishing private property are crazy far-left things that no sane developed country would do, either.
Canadian potash corporation, Chilean mining, the French financial sector, Gazprom in Germany, Indian fossil fuels, railways around the globe, Amtrak here in the U.S.
Many, many nations have nationalized things, historically and through today.
Nationalization isn't a litmus test for if you are a leftist though, it's an example of one leftist policy.
In general, the left seeks social justice through redistributive social and economic policies, while the right defends private property and capitalism.
> In general, the left seeks social justice through redistributive social and economic policies, while the right defends private property and capitalism.
That’s an extremely left-skewed framing that leaves out a lot of important cultural issues. For instance, the leftists during the Spanish Civil War massacred Catholic priests and nuns and burned down churches while many on the right sought to protect the church and restore the Spanish monarchy.
It’s more correct to say that the right defends traditional institutions, which might include capitalism, but even these vary widely from country to country. For instance the United States never had a monarchy or an established religion; most of the American Founding Fathers would have sat somewhere left of center in the Estates General during the French Revolution, which is where we get the terms “left” and “right” from in the first place. But in an American context, the republic and the constitution are the traditional institutions that the American right has traditionally defended, even though they were established by the 18th century left.
Even when it comes to capitalism it’s not as clear cut. Prior to the American Civil War, the north was capitalist but the south had a precapitalist agrarian economy based on slave labor. The northern liberals, abolitionists, and capitalists formed a coalition to the left of the southern planters. Outside of areas that had widespread slavery, there’s also a long tradition of right wing critiques of capitalism as a destructive change to the traditional patterns of society, and there are many on the far right who seek to return to much older ways that are now lost.
You're generally correct, but I imagine you won't get a good reaction on HN to this viewpoint. Most people on here unfortunately don't really have an understanding of politics beyond a very surface-level one.
HN is not a political site and dang doesn't allow much politics. Your generalization is based purely on the surface-level discussion that is allowed.
Maybe it's you that doesn't have nuanced 'understanding'?
I'm certainly no expert, I just wish we could at least use the surface level terms correctly.
I'd be thrilled to have the right correctly differentiate between the democrats and leftists. Using the right terms would be a useful start to having some dialogs.
Burlington, Vermont.
Burlington Vermont might be close.
Rutland
...how do you mean? What are you defining 'left of center' as?
Parent was making an observation that the entire US political discourse, including both sides, tends to be right of global center.
Why would the political leanings of other countries matter in a discussion about the US?
Because the US is part of the world.
Not a discussion about "the world."
I can't tell if you're being obtuse or obstinate.
The Left Right dichotomy is a fairly broad set of political ideas, especially globally. The Left typically includes socialists, communists, anarchists, labor movements, syndicalists, and social democrats. Typically, these movements are collectivist, whether that's collectivist in a big government or collectivist in small local communities.
Classical liberal policies, like those of the Democrats and Republicans, are right of center.
An example, when was the last time the Democratic Party pushed for nationalization of a whole industry? Eg aerospace, rail, or energy? What about offering food and housing for everyone? Abolishing private property? Those are leftist policies.
[flagged]
[flagged]
Not at all - I'm just confused about the whole left / right distinction being proposed by the OP, since "nationalization" was never (as far as I can tell) part of the "left" at least when we talk about _socialism_. National socialists were definitely interested in "nationalizing" things, but socialists were a little bit more broad in their interpretation of what they were doing with "the stuff that isn't property" (at least as far as I understand it).
But maybe the OP was not talking about "what they thought they were doing" only describing "what they do / did"?
I was articulating the sorts of things leftists often push for. Nationalizing industry is one such thing - holding industry in common good for the people is one flavor of leftist. You see that in Soviet style communism, for example.
It's not the only way to be leftist. You can be leftist and anti central government, for instance. You cannot, however, be leftist and staunchly capitalist.
[dead]
Well, it's Austin, TX.
Travis County was blue 69-29.
Hardly a politically conservative place.
> "Move our trust and safety and content moderation teams out of California, and our US content review to Texas.
Prediction: it'll be cali-expats in Austin and nothing changes.
They’ll be inside the jurisdiction of whatever rules Texas feels like making.
Nah, loads of the current staff will leave, but they'll hire equivalent people in Austin.
The whole thing is ideological. Trump and Musk are undertaking their takeover of government, and so the trillion-dollar companies which control the rules of the spaces in which the vast majority of our discourse happens today do their thing and kiss the ring.
We can debate the merits of notes vs factcheck. But it's hard to see the bullshit about freedom of speech as anything other than that: you are now allowed to express opinions that the new regime shares. Long live the king.
>like people being in Texas makes them more objective
When the dominant ideology in Texas supports freedom of speech more than the dominant ideology in California: yes.
except when it comes to banning books in schools. and prohibiting classroom discussions on race or LGBT topics.
somehow free speech never seems to cut both ways with these people.
> "Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content." - like people being in Texas makes them more objective?!
The FB office in Austin, Texas is a moderately left-leaning area. Their office in Silicon Valley is about the most extreme left-wing place in the country. At the very least, teams at their Texas offices will have more overlap with the median voter than the ones in California. If their Texas offices were in rural rancher country, then I'd agree with your concern that it would just be swapping one bias for another.
It's not about actual employees, it's about signalling "Texas - yay!" and "California - booooo!" in order to make good with the incoming administration.
Grew up in Ohio. Always wanted to live in Silicon Valley. Been here 14 years now. Not leaving. But this is happening because of how terrible the California brand has become. Pretending our prestige and brand is the same as it was 20 (or even 10) years ago is not the answer.
Yeah I was recently given the choice to move for RTO to the bay area versus pacific northwest, and everyone I asked about this expressed their dissatisfaction with California.
That's a complicated topic, but part of that is because California has become a target for a number of people with money, influence and media outlets.
Not to say it doesn't have problems - like housing - that are self-inflicted. Just that a big part of the 'brand' problem is people targeting the state.
Yes there is a lot of “unfair competition” but ultimately you build a brand by demonstrating your positive qualities and making it clear what you stand for.
This is an us problem not a them problem.
How can you build that brand if someone is determined to tear it down for ideological reasons?
People care less about ideology than they do about their own lives and prosperity.
It used to be clear: you can make a better life in California. It was a land of growth, prosperity, and wealth. Growing families moving into golden cul-de-sacs.
We should actually make those things true again. Houses don’t need to be affordable in Palo Alto, but not being affordable anywhere is a problem. We don’t need to develop Big Sur, but not being able to develop any coastal property is a problem. We don’t need to deport law-abiding citizens because they fail an ICE sweep, but not being able to deport career criminals is a problem.
Oh, I'm 100% on board with the housing stuff. That's what I do in terms of local politics here in Oregon.
But by and large, the 'branding' is places like Fox News crapping on California.
No that’s just talking heads carping on cable tv.
The problem is that we have lost any ability to make a positive case for California outside of niche political interests and very specific career paths.
Well, that, but also the worst housing markets in the country.
Techbros are pretty toxic, and that culture was very much SV 10-20 years ago.
That said, most of them have since (loudly) decamped the state.
Well, that and moderators being able to afford 1-bedroom apartments.
Says more about fb being penny pinching than anything. The kid working the panda express in california can afford a 1br apartment, why not a fb moderator?
> The kid working the panda express in california can afford a 1br apartment
A lot of them can't, actually, but that's really a different problem.
Have you ever been to or lived in Austin? Are you aware of how high the cost of rent is there now?
Actually, the cost of rent and housing has dropped there over the last few years, because they are doing a good job building. Not so great for my SFH's value, but it's definitely dropping from "WTF" to "seems more normal" pricing.
Every day "Austin" refers to a larger and larger part of the earth so maybe specifying where in Austin is appropriate?
https://www.msn.com/en-us/money/realestate/report-austin-top...
https://www.kut.org/austin/2024-06-13/austin-texas-rent-pric...
https://austin.urbanize.city/post/austin-rent-drops-december...
https://therealdeal.com/texas/austin/2024/05/01/apartment-re...
By the same - entirely unevidenced - reasoning, your posts ITT are about signalling the reverse in order to make good with sympathetic readers on HN.
See how that works?
The specific places in California where Facebook had "trust and safety and content moderation teams" were places that very much don't reflect the average politics of the US. That is naturally going to reflect itself in the ideological composition of employees, and therefore in political bias in the fact-checking process.
We've already seen harm from this. For example, Facebook suppressed the Hunter Biden laptop story (https://www.yahoo.com/news/zuckerberg-admits-facebook-suppre...), even though:
* there has never been any evidence provided to link the story to supposed Russian disinformation;
* The FBI (i.e., the agency supposedly telling Facebook and other social media companies to be on the lookout for such disinformation) acknowledged that they did in fact seize the laptop from the computer shop owner in 2019 (https://www.nytimes.com/2020/10/22/us/politics/hunter-biden-...) and verified that it was Hunter Biden's - which later came up in a criminal case against him in mid 2024 (https://www.nbcnews.com/politics/politics-news/live-blog/hun...
* there is no good reason a priori, outside of political bias, to suspect the New York Post (founded 1801 by Alexander Hamilton) of spreading such disinformation.
Thinking Menlo Park (or any of Silicon Valley, really) is in any way "extreme left-wing" is a sure indication you haven't spent any time there and are basing your viewpoints off of what others have said on social media. Billion dollar corporations by definition do not support anything remotely "extreme left-wing".
I’ve lived in SF, Mountain View and also the east bay and I’ve worked at a billion dollar company that did indeed support some very left-wing causes.
Despite having grown up in a light blue state, the difference in politics was very noticeable when I got to SF/SV. This isn’t a value judgement, just my observation.
That's why I was talking about Silicon Valley, not SF or east bay. They're much different places. Besides that, a corporation giving lip service to diversity =/= "extreme left-wing" views. These billion dollar corporations are still capitalist, through and through. Actual extreme left-wing views are staunchly opposed to capitalism.
Talking about "actual extreme left-wing views" is something that only really works in internet arguments where everything eventually trends into Communism vs Capitalism (TM).
In reality, every country has their own set of issues. Every democracy has their set of parties that exist somewhere in the policy space of issues relevant to them. In the US, we generally think of socially progressive policies as "left" along with non-market views of the economy. As such, the SFBA is generally much closer to the American "left" edge than the right.
I agree that South Bay and the Peninsula are less "left" than SF or Oakland, but I think this sort of argument is sophistry. That said, I don't really think moving hiring to Texas will change anything ideologically among employees and instead is just a way to signal to the new administration that they're Friends (TM) and on the backside a way to cost cut so they can pay less in Austin.
That's a funny way to say "I'm sorry, I should not have assumed you were unfamiliar with the region, when it has instead become clear that you live out there".
Actual extreme left-wing views are those that the average San Franciscan holds. Economics isn't everything.
Very few, if any, billion-dollar corporations are in any way “extreme left wing”.
But that is not “by definition”. The definition of a “billion-dollar company” is that it is valued by investors at a billion dollars. That definition has absolutely nothing to do with its political leanings.
“Vanishingly unlikely” sure. But not by definition.
What I mean is that extreme left-wing views would advocate for the nationalization or abolition of all private companies, so a corporation couldn't fit into that.
[flagged]
Those ideological changes are corrective though. California is obviously very far in one direction politically, and presumably the existing Meta board members are not right wing.
[dead]
[flagged]
That's a housing problem. California is very NIMBY and doesn't build enough homes.
That has nothing to do with how 'objective' fact checking or content review or whatever is from people in both places.
This is just very thinly coded language signalling who they're going to favor.
[flagged]
Isn't it a bit of a stereotyped prejudgement to say all Texans are like that?
I don’t use either Facebook or X so I have no personal experience. But the New York Times cited this meta-analysis for the proposition that they’re not ineffective:
Fact-checker warning labels are effective even for those who distrust fact-checkers
https://www.nature.com/articles/s41562-024-01973-x
They also cited this paper for the proposition that Community Notes doesn’t work well because it takes too long for the notes to appear (though I don’t know whether centralized fact checks are any better on this front, and they might easily be worse):
Future Challenges for Online, Crowdsourced Content Moderation: Evidence from Twitter’s Community Notes
https://tsjournal.org/index.php/jots/article/view/139/57
Here's the Community Notes whitepaper [1], for how it all works. Previous discussion [2].
[1] Birdwatch: Crowd Wisdom and Bridging Algorithms can Inform Understanding and Reduce the Spread of Misinformation, https://arxiv.org/abs/2210.15723
[2] https://news.ycombinator.com/item?id=33478845
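For anyone who doesn't want to read the whole paper: the scoring core is a small matrix-factorization model over the user-by-note rating matrix, and a note only counts as helpful if its own intercept stays high after a latent "viewpoint" factor soaks up the partisan agreement. Below is a rough sketch of that idea in Python; the hyperparameters, threshold, and training loop are illustrative stand-ins, not the production scorer.

    import numpy as np

    def score_notes(ratings, n_users, n_notes, epochs=200, lr=0.05,
                    lam_intercept=0.15, lam_factor=0.03, helpful_threshold=0.40):
        """Toy bridging-style note scoring in the spirit of the Birdwatch paper.

        ratings: iterable of (user_id, note_id, value) with value in [0, 1]
                 (0 = not helpful, 0.5 = somewhat helpful, 1 = helpful).
        A note's score is its intercept i_n in the model
            r_hat = mu + i_u + i_n + f_u * f_n
        so agreement explained by the shared latent factor (roughly,
        "people on my side like this note") does not raise the score.
        """
        mu = 0.0
        i_u = np.zeros(n_users)
        i_n = np.zeros(n_notes)
        rng = np.random.default_rng(0)
        f_u = rng.normal(0.0, 0.1, n_users)   # 1-D viewpoint factor per user
        f_n = rng.normal(0.0, 0.1, n_notes)   # 1-D viewpoint factor per note

        for _ in range(epochs):
            for u, n, r in ratings:
                pred = mu + i_u[u] + i_n[n] + f_u[u] * f_n[n]
                err = r - pred
                # SGD on squared error with L2 regularization; intercepts are
                # regularized harder than factors so cross-viewpoint agreement,
                # not factional agreement, is what pushes a note's score up.
                mu += lr * err
                i_u[u] += lr * (err - lam_intercept * i_u[u])
                i_n[n] += lr * (err - lam_intercept * i_n[n])
                fu_old = f_u[u]
                f_u[u] += lr * (err * f_n[n] - lam_factor * f_u[u])
                f_n[n] += lr * (err * fu_old - lam_factor * f_n[n])

        helpful = [n for n in range(n_notes) if i_n[n] >= helpful_threshold]
        return i_n, helpful

The real system does a lot more (rater reputation, multiple statuses, stability checks over time), but the intercept-versus-factor split is what makes it a "bridging" algorithm rather than a straight popularity vote.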
Thanks for pushing for clarity here. So: I'm not saying that fact-checker warnings are ineffective because people just click through and ignore them. I doubt that they do; I assume the warnings "work". The problem is, only a tiny, tiny fraction of bogus Facebook posts get the warnings in the first place. To make matters worse, on Facebook, unlike on Twitter, a huge amount of communication happens inside (often very large) private groups, where fact-checker warnings have no hope of penetrating.
The end-user experience of Facebook's moderation is that amidst a sea of advertisements, AI slop, the rare update from a distant acquaintance, and other engagement-bait, you get sporadic warnings that Facebook is about to show you something that it thinks you shouldn't see. It's like they're going out of their way to make the user experience worse.
A lot of us here probably have the experience of reporting posts to Facebook for violating this or that clearly-stated rule. By contrast, I think very few of us have the experience of Facebook actually taking any of them down. But they'll still flash weird fact-checker posts. It's all very silly.
So, why wasn't a mixed approach taken? That's the obvious question you should be asking. Paid fact checkers are leaps ahead in quality and depth of research, while Jonny Twoblokes doesn't have the willingness to research such a topic, nor the means to provide nuanced context for the information. You are saying that the impact was limited, but that was not because it was low quality. If you do both, where the first draft is done by crowdsourcing and a professional fact checker produces the final version, I don't think you would have a good reason not to do it.
I've answered elsewhere on the thread why I think the warning-label approach Facebook took was doomed to failure, as a result of the social dynamics of Facebook.
> Fact-checker warning labels are effective even for those who distrust fact-checkers
Yes, but are they true?
Haha yeah indeed, I was also reading this thinking: "uhm, ok, how can they be 'effective' if they're false in the first place?"
Lol sometimes people just have no logic
Notably Zuckerberg did not cite any data for his assertions that community notes are effective.
A way to quantify this doesn't immediately come to my mind. Maybe reasonable metrics would be:
1. What % misleading/false posts are flagged
2. What % of those flagged are given meaningful context/corrections that are accurate.
It seems there's circular logic of first determining truth with 1, and then maybe something to do with a "trust"/quality poll with 2. I suspect a good measurement would be very similar to the actual community notes implementation, since both of those are the goal of the system [1].
[1] https://arxiv.org/abs/2210.15723
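To make the circularity concrete: both proposed metrics only exist relative to some ground-truth labelling, which is exactly what the system is supposed to produce in the first place. A trivial sketch, with hypothetical field names for an evaluation set someone would have to hand-label:

    def note_coverage_metrics(posts):
        """posts: list of dicts with hand-labelled fields (hypothetical names):
             is_misleading - the post is actually misleading/false
             was_flagged   - a note/label was shown on the post
             note_accurate - the shown context was itself accurate
        """
        misleading = [p for p in posts if p["is_misleading"]]
        flagged = [p for p in misleading if p["was_flagged"]]
        accurate = [p for p in flagged if p["note_accurate"]]

        flag_rate = len(flagged) / len(misleading) if misleading else 0.0   # metric 1
        accuracy_rate = len(accurate) / len(flagged) if flagged else 0.0    # metric 2
        return flag_rate, accuracy_rate

Everything interesting is hidden in who fills in is_misleading and note_accurate, which is why any honest measurement ends up looking a lot like the system being measured.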
The deep irony is that some of the original contributors to Birdwatch were working on this stuff at Facebook before being blocked for various reasons and leaving to work at Twitter.
To steelman this a bit, early versions of Birdwatch had problems with unsourced notes and speed of note display. There’s a bunch of research that shows that 1st impressions of info tend to dominate, so speed matters a lot.
In practice FB’s program was poorly resourced and overly complex so I’m not sure it ever achieved its theoretically lower latency.
I don't care about the fact checking part but I do care about the "removing the limits on political content on feeds".
I think everyone can agree that polarizing content being pushed into people's feed for engagement is a very very bad mix with politics. There is no benefit for anyone in doing this, except for meta's metrics and propaganda outlets.
Didn't they get in trouble with lawmakers and / or advertisers for that in the first place?
Also it benefits the extremists that Zuck (and others) are cozying up to. I mean… pretty obvious not to mention that.
[flagged]
People who are floating using the military to steal territory from a NATO ally, as just one totally random example.
Yeah, I know what the press release said lol. Do you typically take press releases as fact?
Journalist: "Can you assure the world that as you try to get control of [Greenland and Panama], you are not going to use military or economic coercion?"
Extremist: "No. You're talking about Panama and Greenland: No, I can't assure you on either of those two..."
Journalist: "Will you commit that you are not going to use the military?"
Extremist: "No, I'm not going to commit to that."
No, I don’t take press releases as fact - do you not see that mainstream opinion on gender and immigration is clearly not in line with what Facebook were moderating for?
Compared to compelling people to believe in gender ideology, industrial scale suppression of dissent on private platforms, and teaching race based original sin in schools, being the third president to want to get control of Greenland doesn’t seem particularly extreme.
Also, as was pointed out but you omitted from the question you’re quoting, asking a military commander their strategy is a very poor question.
The most useful result of Community Notes I've seen is when someone posts something Y, and then a few hours later it comes out that actually it was Z, community notes have been able to attach "actually it was Z" to the original viral post, still being shared.
I don't know if anyone cared much about fact checker reports (or if anyone even bothered to track how often they ended up being wrong when looking back in review).
Also, I didn't know Meta was outsourcing fact-checking, which is a terrible idea that sponsored a shady economy of ghost workers paid pennies to review gore content.
It'll really take a special mind to think Community Notes wasn't a positive feature added to the social network sphere. Musk, despite his schtick, did very bold things that other platforms wouldn't think of doing, such as open-sourcing the recommendation system or recently suggesting the idea of optimising content for unregretted time spent, which would reward healthy content and punish toxic content even if the two had the same number of impressions.
The Overton window is shifting towards more open speech and less of the self-gratifying echo chambers that promoted the toxic cancel culture.
> It'll really take a special mind to think Community Notes wasn't a positive feature added to the social network sphere.
Attributing it to Musk, though, would require a time machine.
> recently suggesting the idea of optimising content with unregretted time spent that will reward healthy content and punish toxic content even if the two had the same number of impressions
The precise sort of censorship and "cancel culture" he decried upon purchase.
Facebook's approach to fact checking has always been cost-optimization.
It would have been a drag on profits to hire professionals to fact check and provide them enough time to do their job, at scale.
They quote numbers about how much they're spending as proof they're doing something, but that spend isn't normalized against the scale of their platform.
How about the fact that Meta killing their fact-checking feature will have a very direct impact on the quality of Community Notes? Per today's Platformer:
"Another wrinkle: many Community Notes current cite as evidence fact-checks created by the fact-checking organizations that Meta just canceled all funding for." (https://www.platformer.news/meta-fact-checking-free-speech-s...)
I assume these businesses are ad supported.
Does Facebook’s patronage constitute a significant % of the industry?
I don’t think the fact-checkers were a better product feature in the current environment. I do think that the reasons they aren’t a good product feature are linked to a concerted effort to convince people to distrust fact-checkers. I recognize that many people would say the distrust arose from the way fact-checkers behaved; I don’t think that’s true.
From a product perspective, once it’s accepted that Community Notes go through an algorithmic filtering process (which they must), you have to accept that you’ve lost most potential for third party viewpoints. There is nothing stopping ideological companies from putting their thumbs on the scale.
Back to product perspective: that means there’s no barrier preventing Notes from losing trust in the same way fact checkers have. The playing field is not static.
I think the speed of the rollout will tell us a lot about how long this has been in the works. It’s not a one week feature, although I will remember that Meta produced Threads very quickly.
I'm not sure about better, but I'm concerned about a second Rohingya genocide.
There was a lot wrong with Facebook's moderation system. Spend any time in any politically active groups -- or groups that like to discuss politics -- and you'll quickly find people complaining about deranking. Based on both the extreme frequency with which it's reported and my own experiences with Meta, I believe that they're not making it up.
But Meta's moderation tools don't primarily exist -- as I understand it -- to keep discourse informative. They exist so that Meta doesn't accidentally become somewhat responsible for another genocide.
I think that community notes may be a better move for public discourse, but most conversations on Facebook itself happen in groups, and in groups nobody is going to be posting Community Notes that go against the trend of the group -- even if they might be useful for totally public discourse.
I tend to blame the people actually doing the genocide for genocide, rather than a social media network. Ultimately I think one can clearly draw the line for personal responsibility well before literal murder.
Tens of thousands have been raped, entire towns have been destroyed, around 50k people have been killed, and 700k have been forced to flee.
If Western countries actually cared about the human cost of this genocide, it would be almost a trivial matter to stop it overnight with a few well placed missiles against Myanmar's military, which continues to perpetrate the genocide even today.
Instead, no real action is taken and it's just a talking point for "Facebook bad." Blaming Facebook for a genocide is like blaming videogames for an active mass shooter w/o actually doing anything to stop them.
Eh, I don't think that lens is useful. It appears to me that the genocide very likely may not have occurred -- and certainly would have harmed fewer people -- if Facebook didn't exist.
It is not simply a matter of it happening elsewhere on the internet -- Myanmar is one of the countries that Facebook provided its Free Basics package to.
Of course, I think the bulk of the blame lies on those actively perpetrating the genocide. But I'm concerned mostly with outcomes, and it seems that with different behavior from Facebook, there would have been a different outcome in Myanmar.
We can look at precedent here. RTLM's involvement in the Rwandan genocide for example would be a good place to start. There's a pretty explicit connection between the radio propaganda (RTLM furthered the Hutu Power ideology) and the actual violence. We should be able to draw a distinction between Jack Thompson and Tipper Gore fearmongering versus explicitly violent rhetoric designed to dehumanize people and promote the eradication of those people.
The actions taken by the US in response to the genocide in Myanmar were largely economic because of, I would think, Myanmar's proximity to China. I can't imagine direct intervention would have gone smoothly.
For the record, I don't think our responses in Myanmar or Rwanda were good; I'm not trying to dispute or downplay that.
> but this feels like the kind of decision that should have been in the works for multiple quarters now
My take is that while it must have been a potential plan for some time and switching to this plan can't have just been an “overnight” decision since the election, the timing suggests that either they were waiting for the outcome of the election and using that result in the decision-making process, or that the election result pulled the decision¹ forward.
----
[1] Or the implementation, if the decision had already been made. They may have already been moving towards this, purely as a business decision based on internal effectiveness studies, no matter who was in power, but given the election result there are some political benefits to rolling the plan out now instead of in Q2 or Q3.
Yeah I'd like to hear this too. I use both and I love community notes. People are pretending like this is some big culture war issue and a win for the right, but I've seen community notes call out Elon for retweeting bullshit more times than I can count. (As well as calling Jacobin out on theirs.)
I also appreciate that if I liked a post that community notes later called out, I get a notification that it was misinformation.
Well the presidential election was a win for the right. FB and Meta have always complied with and often been an arm of the US govt regarding regulating speech on social media, and they are not really changing that. It's the gov't that's changing.
> I'd like to hear an informed take from anybody who thinks that Facebook's fact-checkers were a better product feature than Community Notes.
Zuckerberg's framing of this as being about "fact checking" is intentional misdirection. Very little checking of facts was actually happening.
This is about moderation. Specifically, reducing the obstacles to posting racist/misogynist/political abuse and threats. The objective is to make Facebook acceptable as a platform for the incoming US administration and its supporters, while simultaneously increasing engagement with more inflammatory user-generated content.
So it's primarily a demonstration of fealty to Trump and co, with upsides.
Trump and Zuck recently met privately. I do wonder if these changes are, in part, also a quid pro quo for Trump undertaking to continue with the ban on TikTok in the US.
https://gradientflow.com/the-moderation-dilemma-a-balanced-l...
Facebook has a long, bloody history of expanding their services into areas without investing in content moderation first. Sometimes they don’t have a single employee who can speak the language of their users. As a result, tens of thousands of people have died in genocide.
You can’t have community notes if you don’t already have a community established. Community notes won’t help if the community’s behavior is the problem.
Many people will die as a result of this decision.
Ah right, because calling it a product feature suddenly makes the assessment of it objective and non-political
Generally fb has trended to worse rather than better. I already passed my personal tipping point years ago and quit fb.
Same. I deleted my account in like 2018.
Since then Marketplace has more or less destroyed Craigslist. So two months ago I tried to create an account strictly for Marketplace. My email, phone, and location have all changed since 2018. Despite verifying phone and doing the most extreme KYC step of taking a picture of myself with my ID I still could not make a new account. So maybe they should focus on that?
SAD! Craigslist was a much better product and community, even without the luxury of identity verification. It had some obvious spam but by and large worked fine once you got the hang of it. Marketplace is a cesspool of lowballers and sex workers with some shitty ML sprinkled on it, underneath it all some slow and clunky RPCs that need refreshing all the time.
Forget about the sucky product. Who has Facebook been hiring in the past decade that built that technical crapshoot?
It is what it is. It's a hotspot for local politics, so quitting it isn't really an option for me.
It's also the marketplace in some countries. Wanna sell some furniture locally? It may be close to the only option.
Just wait until they release a job board. They’ll figure this out soon.
They tried this years ago, but didn't make it work.
All the burning man camps I get invited to are a bunch of Gen X-ers conferring on Facebook groups
so I wind up making a new Facebook account once a year for a few months
although could see this moving to Discord across those same age ranges, I’m in some local groups there which overlap with festivals/events/things like the burn.
Yeah, younger millennials and Gen Z tend to do this sort of conversation on Discord.
yeah exactly, it's now a better platform and has enough critical mass. With Nitro/Discord's paid plan you can change your profile per server if you identify different ways in different groups
I've seen Gen X-ers be notoriously inflexible about considering Discord or anything besides Facebook Groups, but as they say: nobody can prevent you from becoming like your parents
I tell that cohort "you can't Google this, you have to join the platform and search that channel", and they balk as if their Facebook Group that's segregating them is any different
back to burning man specifically, at this point it seems like I can get invited to different camps, so I'm excited about that. mixed age groups, stays fresh
> I've seen Gen X-ers be notoriously inflexible about considering Discord or anything besides Facebook Groups, but as they say: nobody can prevent you from becoming like your parents
Yeah, I'm a millennial with older and younger friends. I found that around 35 ± 4 years, people generally get more annoyed and flippant about change. I get it: at this age you're probably at the peak of both career and life responsibilities, you want to focus your energy on your family/career/other loved ones, and the last thing you want to do is learn something new for doing what you've been doing for the last 18 years (chatting about something online).
But it's been pretty fascinating watching the change as my older millennial/young Gen X friends get into Back In My Day conversations while my Gen Z friends talk about new fashions and music.
Every Xer I know left FB years ago.
It’s amazingly bad. My feed is just endless blatantly obvious engagement bait, interspersed with occasional posts from people I actually want to see.
It doesn’t matter if they were better or worse, it’s all relative. It depends on who you ask, everyone will give a different answer. You are looking at this from a technological and problem solving perspective, while the people who made the decision prioritized these much lower on their list. You need to think like a politician and consider the PR side of things. This is not about solving the problem, it’s about perception, only perception.
By implementing community notes, Facebook is shifting responsibility. Previously, the perception was that Facebook was doing fact checking (and no one really cared about the third parties). Now, the responsibility moves to the community. Not only does this shift responsibility, but it also makes Facebook appear politically neutral to Republicans, because they can say, "Hey, we did exactly what Musk did, and you liked it. We are politically neutral".
It also gives Facebook a new product feature that encourages user activity.
It was the correct chess move given the current board.
I think both are atrocious features. It would be useful to know facts about a site or article: this is a new domain, this is a state-run outlet, etc.
But other than that, how about I get to use my critical thinking to evaluate the content I access without my “betters” trying to color it first?
Any day now, I’m sure Gmail will introduce a feature where Gemini will warn you that the article your grumpy uncle sent you is not nuanced enough. Or your cell provider will monitor your texts and inject warnings that the meme you shared doesn’t tell the whole story.
> how about I get to use my critical thinking
Because no-one, including you, is an expert on everything.
So there will be many topics for which you will not be able to make an informed judgement about the accuracy of the content. And on a social network centred around sharing it can be very easy for inaccuracies to spread.
> Because no-one, including you, is an expert on everything.
As I said, god forbid I forget my place and use my mind in the domain of my betters.
You can continue to use your mind.
Pretend that the Community Notes are a conspiracy to rob you of your free will and ignore them.
<country hick accent>Looks like we got ourselves a reader…
Yep, reading, researching, considering what things matter given your own life experience and situation, these are all meaningless in the face of THE EXPERTS!
/s
When J.S. Mill wrote about infallibility[1], I can't remember if he wrote about outsourcing that infallibility belief to others, but if he did, he predicted the last 5 years of pro-censorship arguments perfectly.
[1] https://www.bartleby.com/lit-hub/on-liberty/chapter-ii-of-th...
I'm no expert in this domain, but the larger issue at play here is that:
1. certain groups are arguing for assigning trust to a group to perform case-by-case censorship as a countermeasure to propaganda and disinformation,
2. other groups (sometimes purposefully) misinterpret this as blanket censorship and conjure up slippery-slope warnings.
When talking about general things, it sounds very noble to talk about protecting every budding idea... therefore group #2 gets to trot around the higher moral ground when arguing in this way.
When talking about the specific ideas being "censored" (e.g. "immigrants eating dogs"), group #1 gets to claim group #2 is some flavor of crazy.
What both miss is that they have been pitted against each other by so many interest groups: nation-state and corporate.
This is happening all around the globe.
I don't really mind how they police things and it's not the point of this announcement. The technology firms think Trump could be so dangerous to their businesses that they are willing to completely give in pre-emptively to this threat. What else are they willing to do given this, interfere in elections for example? Promote misinformation that benefits Trump? Undermine truth about vaccines and safety in our health system? The list of potential problems is quite long.
[flagged]
> Community notes are just opinions of random people on internet
It's much more complicated than that. Here's the white paper: https://arxiv.org/abs/2210.15723
I "trust" Wikipedia more than I do fullfact and so on. They've all overplayed their hand.
>I "trust" Wikipedia more than I do fullfact and so on
Philip Cross is very pleased to hear that!
You can not sue Wikipedia. You could sue facebook.
I hope I don’t sound condescending, that’s not my intent, but this made me smile, it means you think Wikipedia is special. I like that.
But, for the record, they regularly get sued. I think they are being seriously sued in India for defamation at the moment, for example.
But could you actually get any money from suing Wikipedia? They would just deflect blame to volunteer editors.
Why do you think Facebook is ending fact-checkers now? The editors are hired by Facebook, so Facebook is the publisher. If Facebook publishes a "fact" and people get harmed as a result, Facebook gets sued into bankruptcy. There is no protection from the government anymore!
> If Facebook publishes a "fact" and people get harmed as a result, Facebook gets sued into bankruptcy.
What a nice reality it would be if Facebook could actually be sued into bankruptcy for anything at all, let alone something this minor. Sadly, that's not our reality.
Of course you can sue Wikipedia. There's no law against suing Wikipedia.
Wikipedia is just a platform, it is not a publisher. Facebook was the publisher!
Suing Wikipedia would be like suing email or the SMTP protocol!
In the U.S. (relevant because it is home to the Wikimedia Foundation), you can sue anyone for any reason at any time. You might get immediately dismissed, sued back ("abuse of process" or similar), or something along those lines, but there is nothing structural that stops you.
The structural reason that you can't sue email is that email is not an "anyone", it's an abstract concept. How would you even e.g. notify "email" that it is under litigation?
Neither email nor SMTP is a legal entity.
Wikipedia (or more precisely, the Wikimedia Foundation that owns it) is.
You can absolutely sue Wikipedia [1]
[1] https://en.wikipedia.org/wiki/Asian_News_International_vs._W...
edit: My bad, I get the joke now.
I'm referring to the organisations that anoint themselves as arbiters of truth rather than just Facebook but I suppose the point almost stands
"fact-checkers were authoritative source of the truth"
There is no such thing. If your understanding of truth is so flat, you're incredibly ignorant and dangerously foolish. Biases, perception, and propaganda influence the "truth" you see in the world. And no one is immune to it. Even large groups of very smart people are not immune to it. In fact they're even more often prone to groupthink.
Woosh?
[flagged]
You can’t possibly seriously believe this?
The whole point of science is to eliminate human authority as a source of truth. Every claim must be peer reviewed, should be replicated by independent parties, and open to falsification by new evidence.
“Appeal to authority” is always the wrong approach if you are seeking truth.
I think it was very clearly sarcasm.
Well that’s a relief.
Stumbled upon this yesterday or so:
https://www.youtube.com/watch?v=eZlbQqXBsn0
« I think one of the troubles of the world has been the habit of dogmatically believing something or other and I think all these matters are full of doubt and the rational man will not be too sure that he's right; I think we ought always to entertain our opinions with some measure of doubt » (Russell)
This reminds me of William Buckley preferring to be governed by the first 2,000 names in the Boston phone book over the Harvard faculty.
Facebook lifts ban on posts claiming Covid-19 was man-made (2021)
https://www.theguardian.com/technology/2021/may/27/facebook-...
Thank God Congress finally passed a law that made it illegal to get brain cancer from cell phones.
> Community notes are just opinions of random people on internet. Like Wikipedia.
IDK if you intended to compare this to the world's #1 comprehensive and trustworthy repository of information.
But if you did, mission accomplished.
Joke ;)
> Community notes are just opinions of random people on internet.
No. Community Notes is an open-source peer review-like system but designed in a way to limit bias: When sets of note contributors (the peers in this case) who normally strongly oppose each other’s views on Topic A strongly agree on a point made re Topic A, we’re likely getting closer to the truth.
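If it helps make that concrete, here is a toy sketch of that bridging idea: a small matrix factorization with per-rater and per-note intercepts, roughly in the spirit of the white paper. The toy data, learning rate, regularization, factor dimension, and the 0.4 cutoff are all illustrative assumptions rather than the production values X uses, and the global intercept is dropped for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings: (rater_id, note_id, rating); 1 = "helpful", 0 = "not helpful".
ratings = [
    (0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),   # note 0: rated helpful across the spectrum
    (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0),   # note 1: only one "side" finds it helpful
]
n_raters, n_notes, dim = 4, 2, 1

rater_bias = np.zeros(n_raters)
note_bias = np.zeros(n_notes)                # the consensus "helpfulness" intercept
rater_f = rng.normal(0, 0.1, (n_raters, dim))
note_f = rng.normal(0, 0.1, (n_notes, dim))  # the polarity factor that soaks up one-sided agreement

lr, lam = 0.05, 0.03
for _ in range(3000):
    for u, n, r in ratings:
        pred = rater_bias[u] + note_bias[n] + rater_f[u] @ note_f[n]
        err = r - pred
        rater_bias[u] += lr * (err - lam * rater_bias[u])
        note_bias[n] += lr * (err - lam * note_bias[n])
        rater_f[u], note_f[n] = (
            rater_f[u] + lr * (err * note_f[n] - lam * rater_f[u]),
            note_f[n] + lr * (err * rater_f[u] - lam * note_f[n]),
        )

# Agreement the polarity factor can explain away does not raise the intercept;
# only cross-camp agreement does. 0.4 is an assumed cutoff for illustration.
for n in range(n_notes):
    verdict = "show note" if note_bias[n] >= 0.4 else "don't show"
    print(f"note {n}: intercept {note_bias[n]:+.2f} -> {verdict}")
```

In this toy setup, the note everyone finds helpful keeps a high intercept, while the note only one camp likes has its agreement absorbed by the polarity factor, so its intercept stays low and it isn't shown.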
When you use the term "open-source" in this context, what do you mean?
> When you use the term "open-source" in this context, what do you mean?
The code for its implementation is literally available on GitHub [0] under the Apache License 2.0.
[0]: https://github.com/twitter/communitynotes
Thanks!
[flagged]
Well, the most mainstream "news" source, Fox News, had to pay out almost a billion dollars for disinformation (though they did claim in court that no one could possibly think they are news, so it's okay for them to lie), so the biggest mainstream "news" institution kind of had to apologize.
> Well, the most mainstream "news" source, Fox News, had to pay out almost a billion dollars for disinformation […]
The case in question did NOT go to trial, so your claim isn’t entirely correct, but yes, all mainstream “news” outlets (including Fox) abuse our trust by constantly lying to us—I don’t watch or trust any of them.
I remember when Rachel Maddow told us, “Now we know that the vaccines work well enough that the virus stops with every vaccinated person. A vaccinated person gets exposed to the virus? The virus does not infect them; the virus cannot then use that person to go anywhere else.” [0]
[0]: https://x.com/kevinnbass/status/1839635641019081160
The court made findings though, didn't they? Check page 44 under 'Fair Report'.
https://deadline.com/wp-content/uploads/2021/12/Civil-Opinio...
The comment you're replying to was a joke.
By the time I realized that, it was too late.
Fact checkers will link or establish their evidence.
The claim that the president was a Russian spy was never made afaik. But if you have evidence of a fact checker saying this, I’d appreciate it.
I think you aren’t going to find it because overlapping fact checkers with news media is a slippery thing.
News media combine opinion and news to push an angle.
Fact checkers don't. I suspect you are shifting your ire from the media to fact checkers, which wouldn't be fair.
However if there was a fact check that said Trump was a Russian plant? That would negate my contention.
> The claim that the president was a Russian spy was never made afaik. But if you have evidence of a fact checker saying this, I’d appreciate it.
I didn’t save the links, so no, I don’t have evidence ready to show you, and it’s not like I can just go to their websites and see an accurate history of their conclusions on specific claims, given that many of them have a history of simply burying their original conclusions once it becomes obvious they were wrong (e.g., [0]).
[0]: https://www.forbes.com/sites/theapothecary/2013/12/27/in-200...
Also, I saw an interesting interview with Marc Andreessen recently where he mentioned how the Dems would fund "Disinformation Research" units at universities. These research units would (shockingly!) be staffed entirely by Democrat supporters and (even more shockingly) would tend to view everything the Dems disagree with as "disinformation". These groups would then apply pressure to media and social media companies to suppress content, so they were able to breach the First Amendment through censorship by surrogacy. The Democrat censorship-industrial complex was ugly and insidious and was leading us to a very dark place indeed.
Fact checkers are the technocratic solution: they're a panel of experts to Community Notes' jury of our peers. Fact checkers are a much better product feature than Community Notes if we want a feature that best serves people who care about facts. That's not our world, though. People don't care about facts; we are humans, and our lives are lived on vibes. The average person would rather listen to their idiot friend's uneducated thoughts about transgender women in sport than listen to a lecture from an expert. Community Notes is probably the better feature for the real world, but it's still junk. "Effective" is not a label the feature deserves, because the majority of misinformation on X goes un-noted.
Do you have some kind of analysis demonstrating Facebook fact checkers are more accurate than X’s Community Notes?
Indeed, how to fact check the fact checkers?
If we could have legitimate fact checking that really works, then I guess we wouldn't need any politics at all.
> Indeed, how to fact check the fact checkers?
Like any other work, it can be reviewed by supervisors within the company and/or the client (Meta). If a sample of an employee's work shows that they often hide content that isn't factually false, they are performing their job poorly. If Meta doesn't like the job the company is doing, the contract can be cancelled.
> If we could have legitimate fact checking that really works, then I guess we wouldn't need any politics at all.
You absolutely need both. Politics is about which decisions to make within the context of shared facts. The amount of the US national debt, the number of people caught crossing the border illegally in 2024, or the number of people sleeping on the streets in San Francisco are all matters of fact. What to do about them is politics.
It is also a fact that many politicians are corrupt and are fooling us. But they have arranged things nicely so that they aren't the ones being fact checked.
And the ones with power and money can decide who the fact checkers will be. And the ones with power and money can help and support each other, because they want to keep the money inside the family, to protect the facts, you know.
When you grow up you start to understand that you can't trust all authority all the time.
I was answering your question. You asked how fact checkers can be fact checked and the answer is like any other job. Fact checking isn't magic, and it's existed for a long time. It's basically what newspaper sub-editors do.
> When you grow up you start to understand that you can't trust all authority all the time.
I think you know I'm not arguing for this. Don't misrepresent my position, please.
Well I think what you are calling fact checking is actually journalism.
The concept of fact checking is a very recent movement, with the idea that we could filter out the "fake news" on the internet, which is also a recent concept.
But it turned out that the so-called "fake news" wasn't always so fake, and that the fact checkers weren't always so factual.
So it turns out that you can't trust any group to determine what the facts are for the rest of the people.
You can fact-check for yourself, but don't impose your "facts" on other people as if they were the only real facts. Respect other people and let them think for themselves. You can of course share your knowledge, but you should let the other person ultimately decide what they believe.
It sounds like you are disagreeing with the concept of facts, but facts do exist. If someone claims that a politician said a particular thing in a speech yesterday, and the politician gave no speech yesterday, then the claim is factually false. It's not a matter of respect or disrespect to say so, and it doesn't matter what you choose to believe on that topic.
> The concept of fact checking is a very recent movement, with the idea that we could filter out the "fake news" on the internet, which is also a recent concept.
Again, this is not accurate. Look at the job sub-editors have been doing for a century or more. Their main role is to save the newspaper from getting sued or looking silly by striking out or questioning any claim that can't be proven to be true, or corroborated by multiple sources. Fact checking is not a new discipline.
Well it has a lot to do also with the way you say things, how you interpret the words. Maybe the politician did give some kind of speech, but maybe it wasn't an official speech. There's always more to the story, and multiple ways of interpreting things.
Of course some facts are less flexible than others. Like most people wouldn't argue whether a football is round. Although it matters if you're talking about an American football or a soccer football. So context also matters, and that can be confusing sometimes.
So the facts the fact checkers were called in to tackle turned out to be so flexible that the job isn't doable in a reliable way.
And newspapers also don't always have the correct facts. Often things in the newspapers are wrong. And no they are not always being sued for that.
Again, you can fact-check for yourself, that is totally fine, and I would even encourage it. Then you make up your own mind and you are more independent and less shapable by others.
Breaking: Leading Fact Checkers Investigate, Find Leading Fact Checkers More Accurate Than Community Notes.
"People don't care about facts" is such an asinine reactionary way of thinking about macro dynamics in the world. It has no predictive power at all.
We don’t. People are social. We care about what the people in our community think, whether it’s factually accurate or not is inconsequential. Those of us wasting our lives arguing on the internet in the pursuit of truth are a tiny minority of atypical people. People yearn for the warm embrace of affirmation, not the cold hard truth challenging them at every turn.
You have too many abstractions between you and understanding other people.
Most people in your country are actually not that different from you.
Well first you've got to define what is meant by "facts". Most people presume the word refers to some kind of community consensus, and then they immediately gatekeep what counts as the "community" among which the consensus is shared.
However the basis for fact is precisely predictive power, so it's actually more like the battle between science and superstition. Information that can directly empower a person is not necessarily information that will help them to feel more comfortable or confirm their biases.
Americans don't care about facts.
There's a reason why you have Creationists at the highest levels of government.
Unnecessary attacks like this don't help your cause and part of what has driven the other side to the point they are at.
Do you mean that OP is incorrect, or just impertinent? Just because you have to use a light touch does not mean your friend does not have a Problem. (And I'm speaking as an American)
Europeans are just as silly but mistake failure for sincerity. As a sad fantasist I'm immensely fond of Anglo culture, but many Brits are totally misaligned and insane.
It resembles Objectivism. "The facts are the facts and you should see them the same way I do or else!!"
More like “the facts are the facts and reality does not care if you don’t believe in it”. It’s a special kind of nihilism to want to stick it to the universe and insist on one’s own alternative reality like an overgrown angry teenager edgelord.
Ayn Rand was pretty insistent that we should be able to objectively ascertain the facts. Objectivism failed precisely because we're not really all that rational, and because apart from the irrational part of us there's also the fact that we can manipulate perception and gaslight others. If you're a newcomer to a pair of groups that vehemently disagree as to the facts you might soon find that you have to make a choice yourself as to which group to join, and suddenly you have to deal with social pressures not just facts. Do you want to be in the in-group or in the out-group? Can you deal with the shaming that goes with being in the out-group? Etc.
It's all so tedious, but this is what we humans are like.
https://www.pbs.org/newshour/economy/column-this-is-what-hap...
It’s true. Fact checking was found to scarcely impact misinfo.
I’m in the field and I am thinking of how to work without focusing on truth, because that’s how most humans work!
> As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
IMO the concerning part is hidden at the bottom. They want to go back to shoveling politics in front of users. They say it is based on viewing habits, but just because I stop my car to watch a train wreck doesn't mean I want to see more train wrecks; I just can't look away. FB makes their actions sound noble or correct, but this is self-serving engagement optimization.
Social media sites should give users an explicit lever to see political content or not. Maybe I'll turn it on for election season and off the rest of the year. Some political junkies will always have it set to "maximum". IMO that is better than FB always making that decision for me.
>Social media sites should give users an explicit lever to see political content or not
Facebook does sorta have this, under Settings & Privacy > Content Preferences > Manage defaults. Note that the only options for "Political content" are "Show more" and "Default". The other categories listed also include "Show less". There is no "off" option for any of the categories.
IIRC, Political Content is by default restricted on Threads. But if someone you follow engages with or posts content that is political in nature, fb doesn't hide that for you
They will just relabel what is political. Union organizing? A bill on internet censorship? Anything mildly inconvenient to Meta or its shareholders? That's politics, you said you don't want to see any politics, didn't you? The culture war? Well, that's just pop culture, so that gets a pass.
Everything important is politics though. Celeb talks about her experiences - politics. Earth is getting warmer - politics.
Our lives ARE political.
Hell, right now researchers on misinformation are being harassed by senators intent on bankrupting them and making living lessons of them, to stop others from reducing the reach of manipulative content.
We already had the entire free speech fight at the dawn of content moderation. We collectively ran millions of experiments and realized that if you don't moderate community spaces, the best ideas DON'T rise to the top; the most viral and emotional ones do.
If you want to see what no moderation looks like, look at 4chan.
By nature, taking a stand on being factual is automatically political, because there are people who are disadvantaged by facts. Enron and oil producers spread FUD over global warming because it was problematic for their profits.
Stopping their FUD is censorship via moderation. How is a regular joe going to combat a campaign designed to prevent people from reaching consensus?
Anyway, this is going to be fun.
I really do wish that one of the major platforms would offer strict white- and blacklists. "Doomscrolling" would be so much nicer if one could have, say, strict filters set to "Don't ever show me pranks, fake useless DIY, kids being exploited, anything gym related" and "I really like snowboarding, WW2 history and pinball machines." Of course, the algorithm is still gonna "do its thing", but with a few hard guides.
Sure, initially the platform's view time would decrease, but then maybe people would actually like that platform.
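For what it's worth, the mechanical part of that wish is tiny; the hard part is the upstream topic classifier. Here is a hypothetical sketch of a hard allow/block layer sitting on top of whatever ranking the platform already does. The post structure, topic labels, and boost factor are all made up for illustration, and the topics are assumed to come from some existing classifier.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    score: float                      # score from the platform's own ranker
    topics: set[str] = field(default_factory=set)  # assumed upstream topic labels

BLOCKED = {"pranks", "fake diy", "gym"}           # never show these
BOOSTED = {"snowboarding", "ww2 history", "pinball"}  # nudge these upward

def rerank(feed: list[Post], boost: float = 2.0) -> list[Post]:
    """Drop anything touching a blocked topic, boost the allowed topics."""
    kept = [p for p in feed if not (p.topics & BLOCKED)]
    for p in kept:
        if p.topics & BOOSTED:
            p.score *= boost
    return sorted(kept, key=lambda p: p.score, reverse=True)

feed = [
    Post("a", 0.9, {"pranks"}),
    Post("b", 0.4, {"pinball"}),
    Post("c", 0.7, {"cooking"}),
]
for p in rerank(feed):
    print(p.post_id, round(p.score, 2), sorted(p.topics))
```

The point is just that the user-facing lever is cheap to build; whether a platform wants to expose it is the real question.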
Meta has failed (abysmally) at identifying and categorizing content where you’ve said “show me less of this.”
Bluesky’s not my favorite website but Xblock is proof that the app can go “this is a twitter screenshot and she doesn’t want to see those” at scale.
AI could identify, label, and hide all of these things.
On Bluesky it already does: "this is rude" or "this content promotes self harm". I wish both websites could suppress, snooze, or completely nuke "viral" or political content, be it left or right. In Bluesky's case it's not that I disagree with them. It's just that I've had this shit that I more or less agree with shoved down my throat from every angle for a decade, and I'm exhausted and don't want to see or engage with it anymore. People who have nothing else to say 24/7 every single day of their life and mine just need to go away, and I wish the AI on Bluesky would let me filter people whose content is primarily political temper tantrums, because I don't have the time or will to mute or block them all, so I just don't use the product.
In fact for moderation purposes, Facebook already is doing that on their back end. (a few years ago you could see automatically generated alt text like “a woman holding a baby” though I don’t use meta at the present time and don’t know if it’s still doing this.)
AI is already analyzing the memes on FB and purging ones with themes they don't like, though. Unlike Bluesky's moderation, it's not presented as something I can leverage or access to make my experience more enjoyable on Facebook.
But that's not how they're leveraging AI right now. They won't let it prevent me from seeing meme posts and content with themes **I** don't like.
Reddit already has this feature, although it might be underused. Set up a multireddit. Everything you want and nothing you don't. They are also not bottomless (well, more so if you stick to smaller subs), so if you don't put too many subs in your multi you can also hard-limit your feed time. They're great.
In some way this already works - if you have the skill to actually not watch the stuff and flag it as “don’t show me again”.
If the platform’s view time increases only when it shows you “snowboarding, WW2 history and pinball machines", then you and the platform are aligned.
You talk about this like it's a service for the users.
> We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
Great, so more filter bubbles? They don't learn, or more likely, don't care.
Filter bubbles are in. Bluesky and Mastodon show that people want to self-segregate. Even people remaining on Twitter are happy with the exodus.
Facebook is explicitly pro filter bubble. The community notes will come from your ingroup.
One irony is that diversity in online spaces leads to division. People no matter their politics and interests prefer people similar to them.
One way to look at this is by geography. Think of how a group of non English speaking Africans would talk together.
The other irony is that groups of people view the other groups as not similar to them and want to change them. It's always the outgroup that needs its filter bubble burst. It's always the other that is brainwashed.
So the downsides of filter bubbles remain: more division, more separation between different people.
For me the major breaking change on social media is the forcing of non-linear timelines. They're required to increase engagement and promote content, but that's the crux of the issue.
I liked the way early twitter worked, I have my bubble being the people I follow and I can see glimpses of the outside from the trending topics and what comes in as retweets, news, etc. Being able to see a thread without being logged in. Seeing analysis of people from the firehose showing different ways to see conversations and the bubbles.
I miss the fact that old tweets died; things had to be relevant to humans to be rekindled, meaning someone had to retweet a thing to keep it alive, instead of an algorithm deciding what's important for me based on how outrageous it is.
Bubbles are unavoidable; bubbles decided by algorithms are the worst of all the alternatives.
Isn't there a difference between self-segregation and filter bubbles and how they're perceived?
If I go to a woodworking class, I won't be surprised to see people who like woodworking. If I go to the supermarket and everyone is talking about and liking woodworking, I start thinking that everyone likes woodworking.
A user explicitly signing up for specific topics is opting into a discussion. Filter bubbles are implicit.
Doubling-down on idiocracy and civilizational decline because there's money in it.
> They don't learn, or more likely, don't care.
Of course not. Enraged, uninformed people "engage", and that sells ads like hotcakes.
I don't know where people get this idea that Zuckerberg had any principles or gave a shit about anyone but himself. He's spineless, and his primary goal in life has always been to acquire as much wealth as possible by whatever means necessary.
> just because I stop my car to watch a train wreck doesn't mean I want to see more train wrecks
I guess FB will be the judge. They might even stop showing train wrecks to a person if they notice metrics dropping. Some of these metrics might even track the user’s well being, although most will focus on the well being of shareholders.
We lost the levers a long time ago, replaced by opaque algorithms; are there any signs of this changing?
The way I read that — we tried hiding political content, but in the end lost user engagement to our competitors, so we decided to roll it back.
People say they don’t want political content, but they’re also more likely to engage with it if they see it.
> just because I stop my car to watch a train wreck doesn't mean I want to see more train wrecks
Maybe they need to be optimising for unregretted user seconds /s
What I think I just read is that content moderation is complicated, error-prone, and expensive. So Meta is going to do a lot less of it. They'll let you self-moderate via a new community notes system, similar to what X does. I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
They also said that their existing moderation efforts were due to societal and political pressures. They aren't explicit about it, but it's clear that pressure does not exist anymore. This is another big win for Meta, because minimizing their investment in content moderation and simplifying their product will reduce operating expenses.
It does however certainly fit the Golden rule - he with the gold makes the rules.
I was under the impression that Community Notes were designed to be resistant to sybil attacks, but I could be wrong. Community Notes have been used at Twitter for a long time. Are there examples of state-influenced notes getting through the process?
Twitter's Community Notes were designed to be resistant to sybil attacks. Meta is calling their new product Community Notes, but it would be a mistake to assume the algorithms are the same under the hood. Hopefully Meta will be as transparent as Twitter has been, with a regular data dump and so on.
How is that different from fact checkers? They can also be driven by large actors who pay shills to influence public opinion?
It's just that the name "Community Notes" is less misleading than "Fact checkers".
Fact checkers are employed by Meta?
And you are trying to say that makes it better?
Sure, I'll trust the leadership of this huge commercial company, famous for lots of controversies regarding people's privacy. I'll trust them to decide for me what is true and what is not.
Great idea!
You can just pay people, regardless of their place of employment.
Qatar is not well known for paying people to bot on social media. They play the RT game by using their news network Al Jazeera to do that instead and give their propaganda a professional air. The first country to do this was India[1]. Israel has special units in the army to do this[2]. At this point so many countries pay people to do what you say, but Qatar doesn't, from what I can tell. If you have proof of it, I'm all ears.
I was cautiously optimistic when this was announced that India and Saudi Arabia (among others, incl. Qatar) might see some pushback on how they clamp down on free speech and journalism on social media. But since Zuck mentioned Europe, I fear those countries will continue as they did before.
[1] https://en.m.wikipedia.org/wiki/BJP_IT_Cell
[2] https://www.bbc.com/news/blogs-news-from-elsewhere-23695896
> it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
Or maybe such people have far better things to do than fact check concern trolls and paid propagandists.
I pay for some news subscriptions now. I actually love it. Read it, support journalism , log off. Done.
Right, so from where?
Many of us might pay for journalism if we knew who was producing content not already beholden to some ridiculous bias sink.
Checkout Ground News. Then you can choose your specific poison :)
There do seem to be a lot of people who enjoy fact checking concern trolls and paid propagandists.
I'm not sure if they do more good than harm. Often the entire point seems to be to get those specific people spun up, realizing that the troll is not constrained to admit error no matter how airtight the refutation. It just makes them look as frothing as trolls claim they are.
And yet, it's also unclear if any other course of action would help. Despite decades of pleading, the trolls never starve no matter how little they're fed.
> Often the entire point seems to be to get those specific people spun up, realizing that the troll is not constrained to admit error no matter how airtight the refutation.
Your point is exactly why I can’t take anyone serious who claims that randoms “debating” will cause the best ideas to rise to the top.
I can't count how many times I've seen influencer propagandists engage in an online "debate", be walked by hand through how their entire point is wrong, only to spew the exact same thing hours later at the top of every feed. And remember, these are often the people with some of the largest platforms claiming they're being censored… to millions of people, lol.
It's too easy to manipulate what rises to the top. For debate to be anything close to effective, all parties involved have to actually be interested in coming closer to a truth, and the algorithms have no interest in deranking sophists and propagandists.
> And yet, it's also unclear if any other course of action would help. Despite decades of pleading, the trolls never starve no matter how little they're fed.
Downvotes that hide posts below a certain threshold have always seemed like the best approach to me. Of course it also allows groups to silence views.
What I heard is that trying to maintain sane content is less profitable than the alternative, and definitely less politically advantageous.
> I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
Strong disagree. This is a very naive understanding of the situation. "Fact-checking" by users is just more of the kind of shouting back and forth that these social networks are already full of. That's why a third-party fact checks are important.
I have a complicated history with this viewpoint. I remember back when Wikipedia was launched in 2001, I thought- there is no way this will work... it will just end up as a cesspool. Boy was I wrong. I think I was wrong because Wikipedia has a very well defined and enforced moderation model, for example: a focus on no original research and neutral point of view.
How can this be replicated with topics that are by definition controversial, and happening in real time? I don't know. But I don't think Meta/X have any sort of vested interest in seeing sober, fact-based conversations. In fact, their incentives work entirely in the opposite direction: the angrier and more divisive the content, the more traffic and engagement it drives [1]. Whereas, with Wikipedia, I would argue the opposite is true: Wikipedia would never have gained the dominance it has if it were full of emotionally-charged content with dubious or no sourcing.
So I guess my conclusion from this is that I doubt any community-sourced "fact checking" efforts in-sourced from the social media platforms themselves will be successful, because the incentives are misaligned for the platform. Why invest any effort into something that will drive down engagement on your platform?
[1] Just one reference I found: https://www.pnas.org/doi/abs/10.1073/pnas.2024292118. From the abstract:
> ... we found that posts about the political out-group were shared or retweeted about twice as often as posts about the in-group. Each individual term referring to the political out-group increased the odds of a social media post being shared by 67%. Out-group language consistently emerged as the strongest predictor of shares and retweets: the average effect size of out-group language was about 4.8 times as strong as that of negative affect language and about 6.7 times as strong as that of moral-emotional language—both established predictors of social media engagement. ...
True, but that doesn't discount that it's a win for Meta
1) Shouting matches create more ad impressions, as people interact more with the platform. The shouting matches also get more attention from other viewers than any calm factual statement.
2) Less legal responsibility / costs / overhead.
3) Less potential flak from being officially involved in fact-checking in a way that displeases the current political group in power.
Users lose, but are people who still use FB today going to use FB less because the official fact checkers are gone? Almost certainly not in any significant numbers
Yeah, I agree it's a win for Meta from a $$ perspective, just not for the reason the OP expressed (which was what I was disagreeing with).
OP said it's a win for meta because it creates more engagement, which is a proxy for $$
But "fact-checking" by people in authority is OK? Isn't that like, authoritarian?
"Fact-checking" completely removed the ability for debate and is therefore antithetical to a functional democracy. Pushing back against authority, because they are often dead wrong, is foundational to a free society. It's hard to imagine anything more authoritarian than "No I don't have to debate because I'm a fact-checker and by that measure alone you're wrong and I'm right". Very Orwellian indeed!
Additionally, the number of times that I've observed "fact-checkers" lying thru their teeth for obvious political reasons is absurd.
> But "fact-checking" by people in authority is OK?
it's by third-party journalism organizations, not Meta employees, so not "people in authority"
They are given the title of fact checker, which ends debate; that is the authoritarian part. It does not matter who employs them. If fact checkers were angels, we wouldn't have this problem. But fact checkers are subject to human nature just like the rest of us: they can be biased, wrong, etc. Do you think these fact checkers don't have their own opinions? Do you think they don't vote? Don't lie?
You are assuming the people on social media are a representative cross-section of society, but you will quickly notice that this is not the case; just look at echo chambers.
If I try to debate the same fact on a far-right and a far-left post, will both really arrive at the same discussion and conclusion? Let's not lie to ourselves.
So for your claim to have any validity, you would need a fair, unbiased group of people on every post (and that is just the first issue; there are many more, such as the loud people versus the ones who no longer bother to comment because discussion seems impossible). That is simply not the case in practice, and it is the reason fact-checking is indeed helpful.
Without some sort of controls in place, fact-checking becomes useless because it's subject to being gamed by those with the most time on their hands and/or malicious tools, e.g. bots and sock puppets.
You should look into the implementation, at least the one that X has published. It's not just users shouting back and forth at each other. It's actually a pretty impressive system
It's more naive to think a fact-checking unit susceptible to government pressure is likely to be better. There will always be government pressure in one form or another to censor content it doesn't like. And we've obviously seen how this worked with the Dems over the last 4 years.
> They aren't explicit about it, but it's clear that pressure does not exist anymore
It's clear that the pressure comes now from the other side of the spectrum. Zuck already put Trumpists at various key positions.
> I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
It's a good point. They're also going to push more political content, which should increase engagement (eventually frustrating users and advertisers?)
Either way, it's pretty clear that the company works with the power in place, which is extremely concerning (whether you're left or right leaning, and even more if you're not American).
Is it less concerning if Facebook only worked with one side of politics? How is reducing censorship a bad thing?
Who said anything about that?
The pressure has just shifted from being applied by the left to the right. There is still censorship on Twitter, it is just the people Elon doesn't like who are getting censored. The same will happen on Facebook. Zuckerberg has been cozying up to Trump for a reason.
FB has been censoring left-wing stuff and leaving fascists be for several years now. This is just "like before, but even more", I think.
What is this based on? I see so many people shouting things like this, but there doesn't seem to be any basis for these arguments. They seem a bit useless and empty.
Experience.
Ah ok, nothing noteworthy
So glad FB abandoned moderation. Both of you guys (left and right) blame Facebook for censorship. What a thankless job. I'd throw my hands up as well.
If you care so much about it, now you can contribute with Community Notes. The power is in your hands! Go forth and be happy.
You're right, censorship is same as lack of censorship.
Heh?
> reduce operating expenses
If you assume they are immune to politics (not true but let's go with it), this is the most obvious reason.
They've seen X hasn't taken that much heat for Community Notes and they're like "wow we can cut a line item".
The real problem is, Facebook is not X. 90% of the content on Facebook is not public.
You can't Fact Check or Community Note the private groups sharing blatantly false content, until it spills out via a re-share.
So Facebook will remain a breeding ground of conspiracy, pushed there by the echo chamber and Nazi-bar effects.
How would fact checkers access the 90% of private content? And should they? I don't think so, even if the respective private content is questionable.
The EU goes its own way with trusted flaggers, which is more or less the least sensible option. It won't take long until bounds are overstepped and legal content gets flagged. Perhaps it already happened. This is not a solution to even an ill-defined problem.
Yes. Those are all bad solutions. Banning social networks would be probably better.
Right, if you don't agree with people at an online community, these communities should just be banned!
You would be a good dictator.
Good. Private communication is private, even if it's a group. The nice thing about the crazy is that they're incapable of keeping quiet: they will inevitably out themselves.
In the meantime, maybe now I can discuss private matters of my diagnosis without catching random warnings, bans, or worse.
What kind of diagnosis spawns so many fact checks that it's a problem? I'd think any discussion about medical issues would benefit greatly from the calling out of misinformation.
Amusingly enough, it's not misinformation being blocked or called out, it's just straight up censorship of any mention of the topic.
> They also said that their existing moderation efforts were due to societal and political pressures. They aren't explicit about it, but it's clear that pressure does not exist anymore.
I didn't think it was any secret that Meta largely complies with US gov't instructions on what to suppress. It's called jawboning[1]
[1] https://www.thefire.org/research-learn/what-jawboning-and-do...
Yes, this just reads like "oh, thank God for that, that department was an expensive hassle to run".
I don't know if I'd call it a certain win for Meta long term, but it might well be if they play it right. Presumably they're banking on things being fairly siloed anyway, so political tirades in one bubble won't push users in another bubble off the platform. If they have good ways for people to ignore others, maybe they can have their cake and eat it, unlike Twitter.
Like Twitter, the network effect will retain people, and unlike Twitter, Facebook is a much deeper, more integrated service such that people can't just jump across to a work-alike.
A CEO who can keep his mouth shut is also a pretty big plus for them. They skated away from being involved with a genocide without too many issues, so the kind of ethical revulsion people have toward Musk seems to be much less focused on Meta.
The trouble with fact checkers was quite evident in the Trump-Harris debate.
As a Harris supporter, I actually agree, I think it was way too heavy handed and hurt Harris more than helped. I’m not sure anymore what the goal of fact checking is (I’ve always felt it was somewhat dubious if not done extremely well).
Any fact checker is going to be inevitably biased. For a debate, there should be two fact checkers, each candidate gets to pick a fact checker.
That could lead to a debate between the fact checkers, which would derail the debate.
Better to not have fact checkers as part of the debate, and leave the fact checking to the post-debate analysis.
Agreed, I always felt like most of the fact checking that has become vogue in the past ten years is designed to comfort the people who already agree, not inform people who want genuine insight.
If you don’t have fact checkers, a debate loses all its value. Debates must be grounded in fact to have any value at all. Otherwise a “debate” is just a series of campaign stump speeches.
The value in a debate is the candidates can directly address the opposition's claims.
Theoretically, yes, but when every second sentence is a lie it becomes impossible.
They routinely do just that in campaign stump speeches.
Non-American here (i.e. did not watch the debate), what trouble became evident?
Were they fact-checking too much? Not enough? Incorrectly?
Only one side was fact checked.
Was it the side that did the vast majority of the lying?
Yeah, the problem is that if one side tells 100 lies, and the other tells 1 lie, you can't correct all 100 lies, but if you only correct the most egregious lies then statistically you'll only be correcting the one side, and if you correct 1 lie from each side, then you make it seem like both sides lie equally. The Gish Gallop wins again.
We would have to fact check if those numbers are correct.
Oh wait, fact checkers don't work; better to just inform yourself and make up your own mind, and not just believe some supposedly authoritative figures.
Especially for live fact checking, the greater the number of lies and the more obvious/blatant those lies are, the more likely someone is to get fact checked.
This is the problem, you are clearly biased. She brought up the Charlottesville issue that has been widely debunked; it is blatantly false and well-known to be false. She was not fact-checked. That's the issue.
Only one side made claims like it being legal to abort babies post-birth.
[flagged]
This is a bit like the movie posters that quote "best movie of the year" when the full quote is "not the best movie of the year".
Go back a sentence.
https://www.reuters.com/article/world/fact-check-virginia-go...
> “where there may be severe deformities. There may be a fetus that’s non viable” he said. “If a mother is in labor, I can tell you exactly what would happen.”
Your dying grandma may go DNR, but that doesn’t mean murdering grandmas is broadly legal.
My wife does charity photography for https://www.nowilaymedowntosleep.org/. You see lots of this sort of withdrawal of care. Calling it an abortion is cruel and dumb.
> content moderation is complicated, error-prone, and expensive
I think the fact-checking part is pretty straightforward. What's outrageous is that the content moderators judge content subjectively, labeling perfectly reasonable discussions as misinformation, hate speech, etc. That's where the censorship starts.
How do you avoid judging actual human discussions subjectively? I remember being a forum moderator and struggling with exactly the same issues. No matter what guidelines we set, on one side there'd be essentially legitimate discussions that superficially were way over the line, and on the other you'd have neo-Nazis acting in ways that weren't technically bad but were clearly leading there.
Facebook moderators have an even harder job than that because the inherent scale of the platform prevents the kinds of personal insights and contextual understanding I had.
My answer is: don't. If something is subjective, then why bother? "Words are violence" is such bullshit.
Okay, but you're saying this on a platform where the moderator (dang) follows intentionally vague and subjective guidelines, presumably because you like the environment more here than some unmoderated howling void elsewhere on the Internet.
Good point, and thanks. I have to admit I don't have a good answer to this. Maybe what dang needs to assess can be better defined or qualified? Like we can't define porn but we know it when we see it? On the other hand, assessing something is offensive or is hate speech is so subjective that people simply weaponize them, intentionally or unintentionally.
> we can't define porn but we know it when we see it?
But we don't, though. Or rather, there's broad consensus over most of it, but there's plenty of disagreement over where exactly the dividing line is.
The quality of the platform lives or dies on the quality of these decisions. If dang's choices are too bad, this site will die.
The situation is somewhat different between a niche community and a borderline monopoly. But it's also true that facebook's success depends on navigating it well. At the end of the day we can choose to use it or not.
To the extent that people feel forced to use a platform that's a reason to further bias away from suppressing free expression, even if the result is a somewhat less good platform.
You're still making subjective judgements wherever you draw the line. I don't know how a platform could avoid making subjective judgements at all and still produce an environment people want to be in.
> That's where the censorship starts.
It also starts when there is no third-party anymore. Where is the middle line?
I do not follow, I do not believe this is correct. Third parties introduce the censorship.
I thought there would be community notes. And how would a third party work? The Stanford doctor was banned from X because he posted peer-reviewed papers that challenged the effectiveness of masks (or vaccines)? I certainly don't want to see that level of hysteria.
> The Stanford doctor was banned from X because he posted peer-reviewed papers that challenged the effectiveness of masks (or vaccines)? I certainly don't want to see that level of hysteria.
Not familiar with that specific case, though generally I'm not a fan of bans. Fact checks are great though. There have been peer-reviewed papers about midi-chlorians too (https://www.irishnews.com/magazine/science/2017/07/24/news/a...), but I'd sure hope that if someone brought them up in a discussion they'd be fact checked.
Community Notes is the best thing about Musk's Dumpster fire.
The problem with CN right now, though, is that Musk appears to block it on most of his posts, and/or right-wing moderators downvote the notes so they never appear or quickly disappear.
I am not so sure that Musk or right-wing moderators are directly to blame for the lack of published community notes. My guess: in recent months, many people (e.g., me) who are motivated to counter fake news have left Twitter for other platforms. Thus, proposed CNs are seen and upvoted by fewer people, resulting in fewer of them being shown to the public. Also, I ask myself: why should I spend time verifying or writing CNs when it does not matter - the emperor knows that he is not wearing any clothes, and he does not care.
> the emperor knows that he is not wearing any clothes, and he does not care.
Indeed the ending of the famous story is:
> "But the Emperor has nothing at all on!" said a little child.
> "Listen to the voice of innocence!" exclaimed his father; and what the child had said was whispered from one to another.
> "But he has nothing at all on!" at last cried out all the people. The Emperor was vexed, for he knew that the people were right; but he thought the procession must go on now! And the lords of the bedchamber took greater pains than ever, to appear holding up a train, although, in reality, there was no train to hold.
Community notes launched at the start of 2021. It predates the buyout by almost two years.
If what they said about their design is to be believed, political downvoting shouldn't heavily impact them. I wish it was easier to see pending notes on a post though.
I agree, you should be able to see pending notes even if you're not a CN moderator.
You can see them, it's just that finding the button to do so on a post is difficult. I think you need to navigate to the post from the notes section of the website.
Right, I think that's the parent's point: CN is a great design, dragged down by the fact that Elon heavily puts his thumb on the scale to make sure posts he likes spread far and wide and posts he dislikes get buried, irrespective of their truth content.
This. You're getting downvoted as bad as me LOL
The bad faith “NNN - just expressing an opinion” is a cancer on CNs too.
To be fair, a lot (not all) of notes on Musk's posts are spurious, including the NNN's. It's clearly being misused there, but in general they seem to work very well indeed.
Perhaps, given the situation with Twitter, now "X", more web and mobile app users will come to understand that despite its size, Facebook is someone's personal website. Like "X", one person has control. Zuckerberg controls over 51% of the company's voting shares. Meta is not a news organization. It has no responsibility to uphold journalistic standards. It does not produce news; in fact, it produces no content at all. It is a leech, a parasite, an unnecessary intermediary that is wholly reliant on news content produced by someone else being requested through its servers.
News organizations have no responsibility either.
And I don't see why publishers of news, even if they just re-publish, should not be held to some responsibilities, like e.g. abstaining from nefarious manipulation of the content people see on their platform.
Not sure about US but unlike Facebook, news publishers are regulated by law where I live.
As if actual journalists care to uphold "journalistic standards."
X/FB is far more trustworthy than the legacy news media, which happily censors salient stories at the request of the government and pushes very specific agendas that are totally out of touch with the average voter.
I can't even count how many times I've seen literal video evidence for a story on X that the news media twists or refuses to cover.
I can't even count how many times I've seen literal video evidence for a story on X that was from totally unrelated incident but claimed to be proof of a completely made up thing that was happening right now.
Leaving Facebook, Instagram and Twitter a few years ago (and never joining TikTok) has been the number one top decision for my mental health. I wish everyone and society as a whole to make the same decision.
All I have on my Twitter feed is porn and jk Rowling tweets. I don't know what y'all are doing but my feed is exactly what I want.
> When we launched our independent fact checking program in 2016, we were very clear that we didn’t want to be the arbiters of truth. We made what we thought was the best and most reasonable choice at the time, which was to hand that responsibility over to independent fact checking organizations... That’s not the way things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how.
This frustration with fact-checkers seems genuine. Mark alluded to it in https://techcrunch.com/2024/09/11/mark-zuckerberg-says-hes-d... which squares with how the Government used fact-checkers to coerce Facebook into censoring non-egregious speech (switchboarding) https://news.ycombinator.com/item?id=41370516
Alex Stamos pushed this initiative pretty hard outside of Facebook in 2019+, seemingly because he wasn't able to do it inside Facebook back in 2016/2018. But I haven't dug into his motivations.
If only they had lawyers to defend their free speech rights
Then the government sics the FCC or the European Commission on you, who make trumped-up charges that they push through a kangaroo court to fine you billions.
There's no fighting a government, and all governments are corrupt if they see an opportunity to rent-seek from you.
Examples of the FCC doing this?
Europe has way weaker free speech protections so I have no interest in defending them.
Asterisk just published an interview with the folks behind Community Notes at X (Twitter) - https://asteriskmag.com/issues/08/the-making-of-community-no...
I don't use Twitter so I hadn't seen it in action, but the interview convinced me that this is a good approach. I think this approach makes sense for Facebook as well.
Thanks for sharing this. So many people commenting on this topic have no idea how community notes even works. Today's New York Times article also failed to explain it, while just giving a general negative tone to the idea of switching to this model.
The median news article has something wrong in it.
Often I live through events, read about them in the daily paper, then read about them in The Economist and in a few more accounts. 5-25 years later a good, well-researched history of the event comes out and it is entirely different from what I remember reading at the time. Some of that is my memory, but a lot of it is that the first draft of history is wrong.
When someone signed his name "Dan Cooper" and hijacked a plane, a newspaper garbled that to "D. B. Cooper"; the FBI thought it sounded cool, so they picked it up. It happens more often than not that journalists garble things like that.
https://en.wikipedia.org/wiki/The_Armies_of_the_Night
shows (but doesn't tell) that a novelized account of events can be more true than a conventional newspaper account, and similar criticisms run throughout the work of Joan Didion
https://en.wikipedia.org/wiki/Joan_Didion
If anything really makes me angry about news and how people consume it, it is this. In the age of clickbait, everyone who works for The New York Times has one if not two eyes on their stats at all times. Those stats show that readers have a lot more interest in people like David Brooks and Ezra Klein blowing it out their ass, and couldn't care less about difficult journalism that takes integrity, elbow grease, and occasionally real danger, done by younger people who are paid a lot less, if they are paid at all. The conservative press was slow on the draw when it came to "cancel culture"; it was a big issue with the NYT editorial page because those sorts of people get paid $20k to give a college commencement address and they'd hate to have the gravy train stop.
Seen that way the problem with 'fake news' is not that it is 'fake' but that it is 'news'.
> Seen that way the problem with 'fake news' is not that it is 'fake' but that it is 'news'.
Salient point. As a writer, the essential condition for any story is a conflict, because it's the source of tension or dissonance that people engage with for resolution. The issue with the "fake news" wasn't the facts; it's that the conflict that brought them together as a story was manufactured cheaply from ideology. This had a compounding effect where the absurdity of the resulting conflict with reality drove further outrage from the other "side."
It's a pan-partisan problem. Fine observation anyway; I'm provoked. To get better news, the conflict it expresses needs to be more organic. IMO using community notes is way more organic than the governance model FB and formerly Twitter used.
Community notes seems to be quite well received. I like that the algorithm seems to be public and (IIUC) tamper-evident.
The obvious context is that either Meta gets out of the content moderation game voluntarily, or the incoming admin goes to war with them.
> focusing our enforcement on illegal and high-severity violations.
I imagine this will in practice determine how far they can go in the EU. Community notes, sure. No moderation? Maybe not.
I really like Community Notes, and hate the rest of what Twitter has become.
But... Community Notes is subject to "tampering." Elon either removes the CNs from his posts himself, or his brigade downvotes them into oblivion so they don't appear on all the misinfo he posts.
Do we have any evidence that Musk has removed a CN on his own post? I've personally seen evidence to the contrary, and he makes a point of highlighting that even he gets a CN every now and then.
As the root comment noted, one of the great things about community notes on X are that the algorithm and the data it's operating on are public. If Musk were removing notes that would be trivial to prove. The fact that such claims of tampering are never accompanied by said proof should tell you all you need to know.
How would it be trivial? Can you describe in a more specific way?
The data I can find says it was last updated 9:02 PM Jan. 5, 2025 (presumably America/Chicago from my browser). That’s a >2 day window as of writing this comment.
Not throwing any accusation, just trying to understand the technicals.
If there was any manipulation of community notes in the last 2 days, how would we know?
If there’s manipulation of this data before it is published, such as ratings or notes never hitting these data files, how would we know?
Maybe, an individual could check to see their own contributions are included in updates to the published data. Is that sufficiently common such that it would get caught?
Community note data I can find (log in required): https://x.com/i/communitynotes/download-data
> If there was any manipulation of community notes in the last 2 days, how would we know?
You can't know until the data is published. 2 days isn't that long though. Just wait a couple more days for the next data dump, then run the algorithm and compare the results to what the X UI was showing at that time.
> If there’s manipulation of this data before it is published, such as ratings or notes never hitting these data files, how would we know?
That would be a bit more sneaky than just outright removing notes. As you noted, you'd need a user whose ratings or notes were omitted from the dump to notice and come forward. Or perhaps with careful analysis you could prove that the manipulated data could not have resulted in the allegedly removed note being shown and then later not shown, indicating something fishy happened.
Theoretically if X wanted to improve on this system, they could go even further and implement something like certificate transparency (append-only log verified by a publicly distributed merkle tree), or create an independent third party organization that users interact with to submit and rate notes, rather than that happening through X's UI. Given the threat model though, I feel like the UX and complexity trade-offs of that wouldn't be worth it. Open sourcing the data and algorithm as X has is already far more transparency than we get from any competing social media company.
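For anyone unfamiliar with the certificate-transparency idea floated above, the gist is an append-only log whose Merkle root is published, so any contributor can check that their note or rating really is included without trusting the operator. A minimal sketch of that verification (purely illustrative; as the comment says, X does nothing like this today):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate the last node on odd levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def inclusion_proof(leaves, index):
        """Sibling hashes needed to recompute the root from one leaf."""
        level = [h(leaf) for leaf in leaves]
        proof = []
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            sibling = index ^ 1
            proof.append((level[sibling], index % 2 == 0))  # (hash, our node is on the left?)
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return proof

    def verify(leaf, proof, root):
        node = h(leaf)
        for sibling, node_is_left in proof:
            node = h(node + sibling) if node_is_left else h(sibling + node)
        return node == root

    log = [b"note-1: missing context", b"note-2: the cited source says otherwise"]
    root = merkle_root(log)                       # this is what the operator would publish
    proof = inclusion_proof(log, 0)
    print(verify(log[0], proof, root))            # True: my note really is in the log
    print(verify(b"silently swapped note", proof, root))  # False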
When you ban anyone who speaks against you, you don't even need moderation! Problem solved.
But of course he can turn it off. He owns the entire platform and algorithms on it.
Musk can't ban people from HN. If there existed evidence of him removing CNs from his own Twitter posts, it could trivially be posted here.
How exactly would there be evidence if he can have every CN screened?
[Posted also in another thread:]
I am not so sure that Musk or right-wing moderators are directly to blame for the lack of published community notes. My guess: in recent months, many people (e.g., me) who are motivated to counter fake news have left Twitter for other platforms. Thus, proposed CNs are seen and upvoted by fewer people, resulting in fewer of them being shown to the public. Also, I ask myself: why should I spend time verifying or writing CNs when it does not matter - the emperor knows that he is not wearing any clothes, and he does not care.
I don't think "CEO is able to remove community notes" is a strong mark against the community note algorithm. No system is immune to being turned off...
> Elon either removes the CNs from his posts himself, or his brigade downvotes them into oblivion so they don't appear on all the misinfo he posts.
I don't know if this is the case, but X is Elon's property, so he can shape it as he pleases. Assuming that X (or Facebook) is unbiased and working for your benefit is simply foolish, unless you are Musk (or Zuckerberg).
Can you evidence that Musk posts things that are provably untrue?
Didn't Musk imply that the ex-head of Twitter T&S was a pedophile?
The exact tweet being: “looks like Yoel is arguing in favor of children being able to access adult Internet services in his PhD thesis.”
Or this one, where he accused his disabled employee?
(using community notes to make the point no less) https://x.com/elonmusk/status/1633011448459964417?ref_src=tw...
From the sources I could find quickly to refresh my memory:
> Over the weekend, Musk shared some of Roth’s past tweets and what appears to be an excerpt from his PhD thesis about Grindr, the LGBTQ social media app. Roth is quoted as saying that the app is possibly too “lewd or hook-up-oriented” for people under age 18 who are already using it, but that providers should “focus on creating safe strategies … for queer young adults” that aren’t just about hook-ups. Musk commented, “Looks like Yoel is arguing in favor of children being able to use adult services in his PhD thesis.” On Monday, the tweet had more than 60,000 likes and received 15,000 retweets.
The thesis demonstrably exists (https://uploads-ssl.webflow.com/60981d118b006454de9222b2/61d...), and it does have a roughly matching quote at the bottom of PDF page 257 (labelled page 248). The idea of businesses "crafting safe strategies" to "safely connect queer young adults" (the context is very clear that Roth refers to people under the age of 18) is very reasonably interpreted as Musk did. There are very obvious reasons why existing services advertise themselves as 18+ and attempt to enforce that, and it should be clear to everyone that any such service intended specifically for minors could not plausibly be rendered safe.
The idea that this observation constitutes an accusation of pedophilia is 100% media spin, and does not reflect Musk's words.
Ideas like Roth's are not rare on the American (or Canadian) left, especially where they intersect with LGBT etc. rights - which is how things like https://www.cbc.ca/cbcdocspov/episodes/drag-kids can come to exist and be vigorously defended. This empowers quite a bit of culture warring from the American right.
Nancy Pelosi's husband's gay lover hammer attack?
Diver rescuers being pedophiles?
they asked for things that were provably untrue
These accusations are untrue until evidence is presented otherwise. Or is the burden of proof these days on the innocent?
The post I replied to was accusing Musk of posting "misinfo". I responded by asking for evidence of Musk saying things that are provably untrue, because that is the standard of evidence that would be required to support such an accusation. This is not a criminal proceeding.
I'm certain it will make parts of the user experience worse, but at least for the Threads app, this seems at least a little necessary - if you're aiming to be the "new" Twitter, or to fill whatever social need Twitter was fulfilling, you need to break free of the shackles of IG/Meta moderation, which is very unforgiving and brutal in very subtle ways that aren't always easy to figure out. But basically, I find platforms like Threads/Twitter are probably unusable for a lot of people unless you can say "hey, you're an asshole" every now and then without Meta slapping you on the wrist or suppressing your content.
One of the only visible actions Meta has taken on my account was when a cousin commented on a musical opinion I had posted to Facebook: I jokingly replied "I'll fight you" and caught an instant 2-week posting ban and a flag on my account for "violence." I couldn't even really appeal it, or the hoops were so ridiculous that I didn't try. The hilarious thing is these bans will still let you consume the site's content (gotta get those clicks); you just are unable to interact with it. This kind of moderation is pointless as users will always get around it anyway - leading to stuff like "unalive" to replace killing/suicide references, or "acoustic" to refer to an autistic person, etc. Just silliness, as you'll always be able to find a way to creatively convey your point such that auto-moderators don't catch it.
I once posted a picture of an email, in French, stating that my train was delayed. So the word 'retard' appeared in it. Instagram banned my account from monetization or partnerships or something, because the word for delay in French is offensive in English.
> because the word for delay in French is offensive in English
It's also the word for delay in English.
It's also not offensive in English, even though some virtue signalers insist on taking offense to it.
Right. I made a reference to educational development being retarded due to COVID restrictions and the very people you'd expect to be offended were of course offended.
Perhaps because virtually no one uses the term in that context anymore. It is often best to avoid ambiguity when posting online.
I think it's important to remember the real meaning of words. If you know language better, you can understand a lot more information and express yourself better. Knowing the meaning and origin of words gives you great insight into things.
Just because some childish people have been misusing a word for a while, we shouldn't just ditch it. Words go back a long time.
We should just remove the negative use of it. And we do that by growing up, not by banning words.
Mechanics might retard or advance the ignition timing in an engine.
My own experience is the exact opposite. Out of all the times in my life I can recall ever having heard the word "retarded" used, I cannot think of any reason to suspect that any of them were meant as anything other than a synonym for "idiotic".
Which, of course, also referred to clinical mental disability at some point in history. As did "moronic", "imbecilic" and others. But nowadays they're really all just strong forms of "stupid".
Even in contexts where generic insults directed at people are not tolerated, it should be acceptable to recognize stupid ideas as such.
I think you've misunderstood, then. The GP's comment was using it in the technical sense (slowed/delayed, not the common "that's so dumb" form you've observed).
Ah. The comment was:
>Right. I made a reference to educational development being retarded due to COVID restrictions and the very people you'd expect to be offended were of course offended.
I misread that, and interpreted "retarded" as being a subjective judgment applied to the restrictions.
That said, the reading "[the process of] educational development has a mental disability" is utterly incoherent, so I still see no reasonable justification for taking offense.
Have you ever seen someone use the slur without intending the same mean-spiritedness that the "virtue signalers" are taking offense to?
Sure. I have a 50 year old friend who takes care of her retarded brother. When describing him and what she does, she simply calls him retarded, because he is, and people know what that word means.
One of the kindest women I know, but she doesn't beat around the bush or have time for euphemisms.
Idiot, retard, mentally handicapped, etc. They are all doomed to the euphemism treadmill because they can be, and are, used as insults. The insulting part isn't the word used but the comparison drawn. Give it 10 years or so and whatever the current word is will also fall out of favor because it gets used as a pejorative.
Already the case - disabled is now lesser abled or something.
It’s pretty retarded.
To be clear, I am arguing against the idea that banning words stops people from being mean, not for using those words needlessly.
That's the thing. They aren't taking offense to mean-spiritedness directed at the person being referred to that way, except in cases where that person actually does have such an intellectual disability. And such language is normally directed at people of ordinary intelligence, to call them out for failing to think things through when they're perfectly capable of it.
There are, and should be, contexts where insulting people is socially acceptable and where such insults should not be censored. And no matter what words you use (https://en.wiktionary.org/wiki/euphemism_treadmill), it's fundamentally impossible to get rid of the idea that a lack of (demonstrated) intelligence is inherently negative.
(It's noteworthy to me that the same activists don't seem to be able to identify any terms denoting lack of physical strength that are inherently offensive - except insofar as they invoke gender stereotypes. Why should it be any less objectionable to call someone a "weakling", for example?)
The criticism of the target’s intelligence or competence isn’t the mean-spiritedness I’m referring to. I’m referring to the deliberate and inherent mean-spiritedness towards people with intellectual disabilities that the slur is explicitly invoking.
>I’m referring to the deliberate and inherent mean-spiritedness towards people with intellectual disabilities that the slur is explicitly invoking.
I disagree that any such thing is invoked. It seems that you believe that when the word "retard" is used in these contexts, that it's meant to describe a person with an intellectual disability. I think it's merely intended to describe someone of low intelligence, which neither necessarily qualifies as nor is necessarily caused by a disability.
Nor do I agree that it's mean-spirited in a way that, say, the word "stupid" isn't. It's just more intense.
I don't think insults should be socially accepted; they're not a nice thing. Rudeness, impoliteness, offense, why would we socially accept them?
Freedom and censorship are another thing. You have the freedom to be rude and impolite, and it shouldn't be censored. But yeah, you shouldn't expect people to like you or listen to you.
>Rudeness, impoliteness, offense, why would we socially accept them?
Because multiple kinds of social space exist, and some people enjoy being able to interact with each other that way and are happy to accept being the butt of the joke their fair share of the time.
Ah yeah, you are right, there are people that have been exposed to it so much that they think it is normal, and a necessary part of life.
Well you know, things can change. In the past it was a family outing to go watch a beheading. That was normal for them and good entertainment. And they would have used the same arguments as you to somebody critical about it.
And you're right, it is a valid choice, and if you really enjoy being humiliated, by all means, you have the freedom to.
I do think that eventually, when the rest of the people have grown up and moved on to much more intelligent endeavors, you might start to think differently too. But maybe not, everyone has their own interests.
This sort of dismissiveness is not helpful to your cause.
What am I dismissing?
Haha, wait, are you offended? I thought you were one of those people who would enjoy that.
These words have non-offensive uses outside of schools and offices.
As someone who works adjacent to rail operations, it's somewhat common to see used in a completely straight-faced and serious way.
Plant failing to be properly retarded is a somewhat regular cause of near-miss safety incidents.
https://en.wikipedia.org/wiki/Retarder_(railroad)
There's offensive use of that term in English, for sure: https://en.wikipedia.org/wiki/Retard_(pejorative)
Language is what we make of it, it's not a fixed concept. If people take offense to it then it's offensive.
https://www.youtube.com/watch?v=NzdpxKqEUAw
As Stephen Fry said: "So fucking what?".
A thumbs-up gesture is offensive in the Middle East, should it be banned world-wide?
Funnily enough the original example upthread was the use of the word "retard" which is harmless in French, which ended up getting the user in trouble.
hi retard, good post :)
It's widely regarded as a slur.
dang, can i call him a retard and not get flagged or banned? hah
Nah it’s offensive. Just because you don’t take issue doesn’t mean it doesn’t hurt others.
They're not suggesting that they don't take issue, and so they don't need to take offense seriously.
They're suggesting that the people who conceivably might take issue generally don't and are instead being patronized by and condescended to by privileged, unaffiliated outsiders who assume -- without consent -- to speak on their behalf. And they don't take those people seriously.
It's totally reasonable to disagree with that view, but it's not the same view your reply tries to engage with.
The thing is, you wouldn't use the slur except to invoke the mean-spiritedness that the people who find the slur offensive associate with the word. If you're using it because you think like-minded people will find it funny that you're using a term other people find offensive, that's still precisely the same mean-spiritedness.
No, you’re expressing a different, more lucid point of view (“the people who conceivably might take issue generally don’t”), which can be engaged with. For example, I would argue that it’s reasonable to take offense on behalf of people who can’t be part of the conversation at hand. (Otherwise it would be fine for whites to spew racist slurs in a group of only white people. If we disagree on that, we’re having the wrong conversation.) I would also point out that taking offense on behalf of others is a time-honored practice (“nobody says that about my little brother and gets away with it!”) But the GP (GGP?) did not say “the people who conceivably might take issue generally don’t.” They didn’t say “no one has standing to be offended by this term.” They just said “it’s not offensive” about a term that is offensive enough that we’re having an entire argument about it. That’s schoolyard-level discourse.
Using it as a noun or in name-calling is offensive, as a verb it isn't.
Oh yeah? Why?
Reminds me of:
Priest: “You have been found guilty by the elders of the town of uttering the name of our Lord, and so, as a BLASPHEMER, you are to be stoned to death.”
[…]
Priest: “BLASPHEMY! He said it again!”
Old man: “I don’t think it ought to be blasphemy. I just said ‘Jehovah’.”
Priest: “You said it again! You’re only making it worse!”
Old man: “Making it worse!? How can I make it worse!? Jehovah! Jehovah! Jehovah!”
https://youtu.be/SYkbqzWVHZI
So it's not offensive. Just because it hurts you doesn't mean others meant it that way.
I’m glad we’re moving on from the world where everyone would constantly be yelling “You’re hurting me! You’re hurting me!”.
An alternative is to use “on the spectrum”. For example, your s.o. or someone else you’re arguing with is getting on your nerves so you say: “Hey! are you on the spectrum today or what?”
Offense is all about context. It is objectively quite offensive when used as a term for a person. (“Objectively” works here because a word being offensive is determined by how people view it. The views are subjective but the prevalence of those view is not.)
I've only ever seen it used to mean "delay" in occasional technical contexts, e.g. "fire retardant material"; in practice it seems to be mostly a noun that means "stupid person".
Retarded timing is a common term in reference to a car’s ignition. And in biology, retarded growth is often used.
The words have the same meaning - "person with slowed down intellectual development"
"Developing intellect slowly" implies they're going to reach full intelligence at some point.
"Retard" means "thick", in this context, not "will get there eventually".
The technical definition is not how the euphemism is used.
It's not a euphemism. It's an epithet.
There's an interesting etymology of "retarded". Also "idiot", "imbecile", "moron", etc.
These were clinical classifications, initially used in the early days of psychology and sometimes overlapping discredited ideas like eugenics. But these were diagnoses -- you could be determined to be an idiot, which was worse than being an imbecile, which was worse than being a moron -- by a respected doctor.
Of course, schoolyard kids got a hold of the terms and used them to disparage their (probably cognitively healthy) peers. And so with "retarded" and "disabled" etc.
But "retarded" just means "slowed or delayed". Developmentally speaking, especially when surrounded by other kids in your same age group, that's a noticeably difficult thing to be.
It does not mean (and never meant) that you are certain to reach full cognitive ability eventually. Flights that are delayed are sometimes also cancelled.
you can also retard the thrust levers
In Chinese there's a common word that sounds like a particularly offensive racial slur to the untrained American ear. I've seen Chinese speakers called out for this in person, but everything got straightened out pretty quickly. This was pre-social media, but it's not hard to imagine a social media uproar over it these days.
那个 (nèige)[1]
It does stick out of Mandarin speech to the US English speaker, but it's typically pretty obvious from context that it's not related to the slur. It's never been worth more than a giggle when growing up, I'm spending like 100x more time on thinking about it right now than I have cumulatively in my life, despite having grown up around Chinese people.
[1]: https://resources.allsetlearning.com/chinese/grammar/The_fil...
Enjoy the uproar then https://www.bbc.com/news/world-asia-china-54107329.amp
This prof lost a gig
To me it feels like society is finally moving on from this insane over emphasis on finding things to be offended by and identity culture bs. I’m really hoping it peaked in the lockdown when people really had nothing better to do.
It is quite striking and bizarre the first time you’re in an extended conversation, and hear it over and over.
Surprised someone was called out though as all the social cues around should be enough to sense no ill intent.
This is different than the fact-checking, and has to do with automated moderation algorithms (which generally suck), which are continuing (because advertisers want them).
Yes, it is, but the point I felt was clear in that post was that these systems don't work well: they have such a poor understanding of context and circumventions as to be rendered ineffective if not totally counterproductive. I'm fully aware such mechanisms aren't going anywhere right now, but at least Meta is acknowledging that, at present, they aren't really providing the user experience they intended.
That aside, I find it a little offensive that Meta has taken it upon itself to decide what the "right" discourse is that their users want to see; I would rather they create a mechanism to let users decide for themselves - which this does at least outwardly appear to be a move towards. They've also in the last few years toned down or removed some of the auto-modding in private groups, and shifted that responsibility towards community members and moderators - which was also a similarly good step.
> auto-modding in private groups
that's very different and a case where the closed community should bear that responsibility
but as far as the global FB community goes -- which doesn't really exist (there is no "community", just users) -- or, more precisely, what ends up in people's feeds, the fact checking was a good thing because a lot of people consume news that way; so this is a big step in the wrong direction
Some time around 2011, the Apple App store was warning me about rude words in the app description; unfortunately it was warning me about the German word "Knopf" which isn't rude. I think what happened is the English rude word list was translated into German, rather than just replaced with local rude words.
Button?
The word "knob" would also translate as "Knopf" in the sense of button, while also having a euphemistic meaning of "penis".
German native speaker here.
in no region or context does Knopf mean anything offensive, especially not "penis".
Yes, I know. My words seem to be easily misunderstood. The claim is that:
1) "knob" *in English* can mean "penis"
2) This is why "knob" was on the English rude words list
3) It looks like a rude word list containing "knob" was translated without context, so that the word "knob" became "Knopf" even though "Knopf" isn't rude.
Had it been the other way around, it would be as if "Schlange" meant both <<en:queue>> and <<en:penis>>, and if "queue" were on an English list of swear words, most people would be very confused.
ah. that didn't come across that way, thanks for clarifying.
The same would probably happen when talking in a video about the French theorem prover called Coq.
I wonder how many Spanish speakers got banned for discussing Vantablack at the time.
As someone who speaks French this made me chuckle
retar dio
Pilots flying an Airbus get called a “retard” every time they land!
Even worse: they cannot openly get mental healthcare for this without losing their license.
Lose-lose situation.
Me and my friend share a joke instagram account where we'll randomly make stupid posts just to entertain ourselves. One time he posted a picture of himself holding a chair, standing on one foot with a goofy smile on his face, captioned "I'll hit you with this chair! Just kidding!"
It got the account suspended until we deleted the post, claiming the post, and I quote, "could encourage physical violence and lead to a risk of physical harm, or a direct threat to public safety."
I sent an appeal, saying it was a clear joke that isn't directed at anyone, but after supposed "review" they determined the post is indeed against ToS.
A cynic of the large social media platforms might suspect they were deliberately underinvesting in their moderation workforce... so they could then justify doing away with the cost as soon as politically convenient.
At its base, moderation = time = money
Better quality moderation? More money.
The platforms would rather not carry that cost and therefore be more profitable. Convenient how that worked out.
Frankly, if "ban people for joke violence" is the price we have to pay for "ban people for real violence", I'll take it.
But what if banning joke violence increases the chance of real violence?
Not the person you responded to, but I assume they would be ok with unbanning the joke in that case.
I would rather people not die.
A tale as old as time. On old forums and groups: h4xor, ghey, etc.
Clbuttic
https://en.wikipedia.org/wiki/Scunthorpe_problem
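The underlying bug has been the same for decades: matching raw substrings with no notion of word boundaries or context. A minimal sketch of both failure modes, with a made-up one-entry word list:

    BLOCKLIST = {"ass": "butt"}   # made-up, one-entry list for illustration

    def naive_clean(text):
        """Swap blocked substrings for euphemisms -- no word boundaries, no context."""
        for bad, euphemism in BLOCKLIST.items():
            text = text.replace(bad, euphemism)
        return text

    def naive_block(text):
        """Reject any text containing a blocked substring -- same flaw."""
        return any(bad in text.lower() for bad in ("cunt",))

    print(naive_clean("a classic mistake"))          # -> "a clbuttic mistake"
    print(naive_block("Greetings from Scunthorpe"))  # -> True (false positive)

The first call mangles an innocent word ("classic" becomes "clbuttic"); the second flags a town name, the canonical Scunthorpe false positive.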
I think you misunderstand what Meta is doing here. They’re not stopping moderation of posts.
Meta used to pay third-party fact-checking companies to put disclaimers on “misinformation” posts on Facebook. They’re going to stop that now.
They’re still going to continue their other more traditional moderation where you’ll be banned for making an obvious tongue in cheek joke or whatever.
I don't, see my response here:
https://news.ycombinator.com/item?id=42626739
Moving the moderation team from California to Texas is also noteworthy.
Presumably because the political climate of California is so skewed from the rest of the country.
Probably much more to do with cost
> "acoustic" to refer to an autistic person
Is autistic an illegal word or something? What the fuck?
It’s self-censorship, some of which I find extremely weird and cringe.
Go on Twitter and you will see people self-censor the normal swear words too.
Shit becomes “sht”, fuck becomes “fck”
Very dystopian.
It's been trivially demonstrable that the use of "forbidden" terms or swearing can affect your ranking on their algorithms, whether it be displaying your comment, or your post on someone's feed, etc., at least on Meta's platforms. So no matter how "cringe" you may find it, it's done out of some degree of necessity and precisely because of these dumb moderation mechanisms, not out of any misguided, altruistic self censorship.
>It's been trivially demonstrable that the use of "forbidden" terms or swearing can affect your ranking on their algorithms
Is it though? A lot of this self censorship seems to be a cargo cult thing where people just copy what they've seen other people do and assume it's necessary when it's really not.
>Is it though?
Yes. There are countless stories from Youtube creators who had their videos taken down or demonetized or had to edit and reupload them, because the AI detected that words such as "suicide" were spoken. And it's common knowledge that requests for review are routinely denied (presented as "we reviewed your case and the ruling stands", a judgment often received in less time than the runtime of the video).
>There are countless stories from Youtube creators who had their videos taken down or demonetized or had to edit and reupload them, because the AI detected that words such as "suicide" were spoken.
I don't believe you. I've never seen any evidence of that.
If I put "youtube creator can't say suicide in video" into DDG, among the top results:
https://www.reddit.com/r/NewTubers/comments/18f7mas/question...
https://support.google.com/youtube/answer/2802245
https://www.reddit.com/r/NoStupidQuestions/comments/13qhtu1/...
https://www.theatlantic.com/ideas/archive/2023/03/youtube-co...
https://www.businessinsider.com/youtubers-identify-title-wor...
And there's a clear cause for it:
https://time.com/5096391/youtube-paul-logan-suicide-video/
1) Do you honestly think they would add "fuck" to a blocklist but then turn a blind eye to "fck"? Basic profanity filters on old forum software were stricter.
2) I find it completely inane that you are willing to censor yourself for an algorithm. I guess we no longer need a ministry of truth if people just produce censored content to begin with, right?
I don't think it's a literal blocklist; it's more a correlation determined algorithmically. If the four-letter word is correlated with hate or violence but the three-letter one is not, then ... that's all that matters.
Then you've got 'YouTube-speak', where video creators swap in alternatives to words suspected of making the algorithm downrank/demonetize videos. 'Unalived' being a particularly common one, to avoid mentions of killing or suicide.
X today seems to not let any posts about death, swearing, etc get recommended.
Whereas if you replace those words with "unalive" and **'s, then you get far more views.
I'm sure there is some kind of filter.
It’s very human. This is no different from people using terms like gosh, shucks, and darn, instead of their stronger relatives. It’s just how profanity works, no need to worry about it.
Fck is just as strong as fuck, not sure how that could be confused at all.
No, definitely not. It’s stronger than “fudge” but not quite the same as the real thing.
For the record, Twitter currently punishes people who call VIPs mean names and seems to take action against all negativity pointed towards certain ideologies that fit with the owner's preferences, and they're talking about some opaque "positivity" changes which actually sound like automating the current manual moderation behind their censorship of wrongthink.
We should stop pretending that that website resembles its preceding namesake, because it does not.
> but at least for the Threads app
Sorry for OT, but what is the point of Threads? Twitter/X is already a thing if you do not care about corporate-owned social media.
> corporate-owned social media
In what sense do you think X is not a "corporate owned" company?
The only people that would say this is unnecessary are the people that are not currently being censored, and have no concept that they ever would be. Because they're the Good People that think the Good Things.
You're all very likely correct, but given the timing, it's hard to assume good intent on Meta's part. This same week, they've "donated" $1 million to Trump's "inauguration fund," and added a strong Trump ally to Meta's board. Significant changes to moderation might be good or might be bad, but given the other news, only the truly ingenuous would trust that it's intended to improve things.
Same thing with when Bezos declared that the Washington Post would no longer be endorsing presidential candidates, claiming that it was a neutral decision about returning the paper to its roots with unfortunate but coincidental timing. Despite that potentially being a reasonable decision in a vacuum, only an idiot would have believed that Bezos was being honest about his motivation.
I'm sure it's a win for Meta (less responsibility, less expense, potentially less criticism, potentially more ad dollars), but certainly a loss for users. More glad than ever that I deleted my FB account 10 years ago, and Twitter once it went X.
My twitter account wasn't big, but it was non-trivial (~30K followers). A post could usually get me to experts on most topics, find people to hang out with in most countries, etc. There were many benefits, so deleting was very hard.
But it was eating my brain. I found myself mostly having tweet-shaped thoughts, there was an irresistible compulsion to check mentions 100 times a day, I somehow felt excluded from all the "cool" parts which was making me miserable. But most importantly, I was completely audience captured. To continue growing the account I had to post more and more ridiculous things. Saying reasonable things doesn't get you anywhere on Twitter, so my brain was slowly trained to have, honestly, dumb thoughts to please the algorithm. It also did something to attention. Reading a book cover to cover became impossible.
There came a point when I decided I just don't want this anymore, but signing out didn't work-- it would always pull me back in. So I deleted my account. I can read books again and think again; it's plainly obvious to me now that I was very, very addicted.
Multiply this by millions of people, and it feels like a catastrophe. I think this stuff is probably very bad for the world, and it's almost certainly very bad for _you_. For anyone thinking about deleting social media accounts, I very strongly encourage you to do it. Have you been able to get consumed by a book in the past few years? And if not, is this _really_ the version of yourself you really want?
Like alcohol and drugs, I think there's a certain kind of person that's susceptible to social media addiction. I don't think it's a large segment of the population but I also have no idea how big it is either.
Plenty of people can drink or consume weed in moderation. Likewise I know a lot of people who mostly use socials in the bathroom or before bed but rarely elsewhere.
Smoking is a better analogy IMO.
Seconded on "tweet-shaped thoughts," Threads is doing this to me as well.
If I'm honest with myself, I too had become addicted to Twitter. Elon's oligarchic takeover gave me the push to not only stop going but eventually delete my account altogether (so I wouldn't be tempted to go back into the bar so to speak). So for that I suppose I should be grateful to our new Generalissimo.
Why is it “certainly a loss for users”? Many are likely to enjoy the ability to post without censorship on topics they care about.
Fact-checking and censorship are two very different things.
Indeed. This was more censorship than fact-checking.
That sounds like a line from the CCP.
Deleting isn't fact-checking. Whereas "community noting" actually can make a case for being fact-checking.
Fact checkers weren't deleting posts and didn't even have the right to do so. They are separate journalistic orgs tagging posts. Deleting is done by Meta moderators, which is something else entirely.
I think you also just proved my point that if HN users can't even get basic facts about an event right, how do you expect the average FB user to do so? Goes to show that even on HN "community noting" would be a disaster.
The problem with "fact-checking" is that if it's done by humans at all then it will be heavily biased.
With Silicon Valley people being in charge of "fact-checking" for the past decade, there have been countless examples of them doing mass cancellations calling things lies that we all know ended up being true.
> countless examples of them doing mass cancellations calling things lies that we all know ended up being true
really? like what, exactly? please give concrete examples or this is just hot air
https://www.theguardian.com/technology/2021/may/27/facebook-...
"Facebook lifts ban on posts claiming Covid-19 was man-made (2021)"
It is not known to be true of course, but it was always obviously a possibility.
I mean, we can't be correct retroactively, can we? I don't think all the doctors who came before antibiotics should be blamed for not knowing germ theory.
Is this a reasonable expectation of fact checking?
I’m very curious now, I actually would love takes on this. I feel we are implying that the standards of fact checking validity weren’t met, but the standards haven’t been stated.
The reason censorship is generally undesirable is because it assumes the person doing the censoring is always correct, and that they're infallible perfect arbiters of truth incapable of letting their political motivations dictate their censorship decisions...which is of course false. They're very often wrong, and always make decisions based on their political leanings, even when it contradicts the evidence.
Suppressing the Hunter Biden laptop scandal by heavy censorship of any post about it on Facebook. For instance.
There's a high probability that this heavily influenced the 2020 presidential election outcome.
https://judiciary.house.gov/media/in-the-news/facebook-execs...
If you're wanting to claim that `Cancel Culture` never happened, then I'm afraid, at this point in history, the burden of proof is on you, not me. lol.
I made no claim.
But the OP did make a claim that "calling things lies that we all know ended up being true"
I challenged that with a request for actual examples. Feel free to link to them.
No one needs proof Cancel Culture was real. Everyone knows at this point. So you can pretend you need proof if you want, but you're not fooling anyone.
Handwaving is not providing examples. Please try again.
You can go to the wikipedia page. You don't need to be spoon-fed.
See, that's not how it works in productive conversations. “Adult” conversations online, so to speak, require the person making the claim to provide the evidence.
The act of not providing the evidence is essentially a sign of not having an argument, and of resorting to bluffs in the hope that people will take the emotions as facts.
But that's entirely self-defeating - it reduces your argument to one about feels and vibes.
I always find this annoying, because I don't think people are so inaccurate.
You may well have evidence, and bringing it up makes the case.
And if you don't find evidence, then you improve your own argument. You end up checking and figuring out what made you hold that position.
It's just a lost chance. And if people say they don't care to do this, then why the heck did they make the effort? You just lost your peace for no reason.
sorry, I only read your first sentence, but for something as well known as "Cancel Culture", if someone claims it must be proven to exist before it can be discussed, then that person is the one not acting in good faith, and has immediately discredited themselves through ignorance of very well known facts.
Asking people to list evidence for well known things is a well known troll tactic, often used as a way to deflect and redirect a discussion into the specifics of individual cases, especially when the main argument has nothing to do with any of them.
https://www.reuters.com/world/us/republican-led-us-house-pan...
There was a long period where people were getting banned from Twitter and Meta platforms for posting (true) claims about the Hunter Biden laptop story (which was, of course, extremely politically consequential)
Is that your example? It's not a very good one.
If you read the article you linked to, you find that 1) Twitter blocked tweets about the NY Post story rather than banning users, and 2) they reversed that decision and unblocked the tweets 24 hours later once they realized their mistake.
It took the corporate media (CNN, ABC, CBS, MSNBC, PBS, etc) a full 3.5 years to admit the laptop was real. It wasn't just some little thing like you're trying to portray it as. It made the difference in the 2020 election.
People do not care about that laptop. They even voted for Felon to be president. Why is it such a strong topic for you?
People finally figured out which party's policies are destroying the country. That's what the election was about.
Yes, people would care that the president's adult son is pointing a gun at a prostitute's head on video.
Your attempt to minimise this as “people don’t care about a laptop” is either incredibly ignorant of this matter or a deliberately misleading framing of the question.
The people saying the laptop doesn't matter are the same ones who believed the MSM story that it was Russian disinfo for 3.5 years.
They won't allow themselves to think it's important because that's an open admission (to themselves and others) of how thoroughly brainwashed they've become by trusting the MSM left-wing perspectives on every issue.
peaceful protests?
[flagged]
I’ve seen this happen before. Back in the good ole days of the libertarian internet.
You had subreddits which had zero moderation, because again “the best ideas succeed”. Those places got filled with hate speech, vitriol, harassment, stalking and toxicity.
Minorities and women left, because they were basically hunted.
Logical arguments don't work, because hate, harassment and anger are emotionally driven behaviors.
This creates the toxic water cooler effect. The fact that its ok to say horrible things, attracts more people who are happy to say those things.
You lose diversity of arguments, view points and chances to challenge ideas.
You increase radicalization, dramatically speed up the sharing and conversion of anger into action.
Eventually, the subs brought in moderation. As did every social media platform in existence. The people who didn’t like it, created their own spaces.
Which didn't do well, because those positions and spaces are NOT popular. Facing this fact, they are now turning to shutting off opposition and moderation, because that is necessary to keep the ball rolling.
This isn’t even opinion, this is the history of the past 30 years. It’s not even that old!
I really do hope this time it's different. Genuinely; I said it when the new communities were created. I meant it then, I mean it now.
Moderation is fucking toxic and unhealthy. I rejoined moderation recently, and in the first 10 frikking items I had to see a dead baby pic from an ethnic war zone that gets no coverage.
I really want this to succeed, and want it to be good for users. I am hoping it is.
But experience is clear - making space for hurtful speech, results in more hurtful speech and people just leaving to places where they dont have to be harassed.
Blue sky should probably see a jump in users over time this year.
> More glad than ever that I deleted my FB account 10 years ago
I hung on to Facebook largely because Marketplace makes parenting markedly cheaper. I've used it less and less, to the point that I forget about it. This finally inspired me to fully delete the account.
From bad to worse. Meta is probably one of the largest funders of fact checking. Now that appears to be coming to an end. Third parties will no longer be able to flag misinfo on FB, Instagram or Threads in the US.
This is not good imho.
I think internet discussion worked far better without fact checkers, some of whom cannot really be called accurate. Community notes are the better approach. They aren't always correct either, but they certainly are a better fit for freedom of expression and freedom of speech. Fact checkers are an authority-based approach that just does not fit.
I haven't seen a single discussion be worse off due to fact checking, but I've seen tons of discussions where having it would improve things. I have seen people get mad because they can't post BS without it being challenged.
To claim internet discussion worked better without fact checking is something I haven't seen any actual evidence for, just opinions like yours.
Community notes is just a watered down, more easily 'ignored' version that appeases people that were angry about fact checkers to begin with.
Hopefully there is a push-back, likely from EU legislation. Between the AI generators many of these companies are implementing and changes like this, platforms need to be held more accountable for what they allow to be posted on them.
Claims are challenged all the time by other users and there are enough cases where fact checkers were wrong or heavily biased.
EU legislation tries to introduce "trusted flaggers". A ridiculous approach: an information authority run by a state-like entity doesn't work, even if they paint these flaggers as independent. They simply are not, and that is a verifiable fact.
Community notes provide higher quality info, it is the better approach. That is an opinion of course.
We will probably see community notes on trusted flaggers.
>Claims are challenged all the time by other users and there are enough cases where fact checkers were wrong or heavily biased.
I've only seen a handful of cases where they were wrong or heavily biased, but I've seen hundreds of cases where the poster refuses to accept that they are wrong and the fact checkers are right.
>Community notes provide higher quality info, it is the better approach. That is an opinion of course.
Roughly the same info, but from less trusted sources and with fewer controls, being higher quality sounds like a big bag of wishes not grounded in reality.
>We will probably see community notes on trusted flaggers.
I expect lots of partisan complaining and yelling, but not a lot of actual valid challenges.
I don't know. I believe the average internet user has less to gain from feeding me wrong info. It happens of course, which is why you shouldn't believe everything you read on the internet.
A fact checker, however, has an economic incentive toward their employer. You can paint them as independent, but they will always be in a precarious situation or influenced by third-party financiers. This does not at all evoke more trust than a random internet person. "Trusted source" is pretty subjective, but for me, "official" fact checkers don't have too much of it.
Exposure to many viewpoints, including wrong ones, provides a counterbalancing effect. When you actively try to suppress information you create a “forbidden knowledge” effect where people seek out silos where extreme and wrongheaded information gets passed around without sunlight, the best disinfectant; it grows faster, becomes more wrong, more extreme, and more dangerous.
Seems to me in my experience after decades of watching and participating in online discussion extremism really only became more problematic when fact checking and active efforts to suppress took hold. Whatever the good intentions may have been, the results were worse.
There's some academic research to the contrary; banning /r/fatpeoplehate and /r/coontown on Reddit reduced incidents of hateful speech across the platform.
https://www.reddit.com/r/science/comments/6zg6w6/reddits_ban... / https://comp.social.gatech.edu/papers/cscw18-chand-hate.pdf
"Sunlight is the best disinfectant" is a great pithy slogan, but modern society needs bleach and chlorhexidine sometimes.
Maybe it reduced hate on this single metric, but the complaint is more about the errors in fact checking.
And single subreddits aren't really convincing about the reliability of fact checkers if their independence is in question. In the end they do rely on a truth-authority, which is problematic, especially for political content. And Meta reported that political demands increased.
So your example is two places that were intentionally moderated to be hateful and also suppressed the non-hateful speech in those subreddits?
So removing a censored platform eliminated the problem? Amazing how that works!
No, you should actually go and read the paper. It didn't just reduce that type of content in the subreddit; they tracked individual users who had been active there, and their overall behavior changed compared to before, including in other subreddits.
Essentially what it showed was that if you pull people out of a particular echo chamber, then that had a sustained effect on how they behaved. Which is evidence contrary to the often made claim that they'd just leave and go somewhere else. It's in line with the theory that the internet fosters extremism because it enables insular pathological communities that in the analog era you'd have been slapped out of long ago by people who aren't nuts.
> Essentially what it showed was that if you pull people out of a particular echo chamber, then that had a sustained effect on how they behaved.
So…silos and echo chambers are bad. Seems to me that was part of my original point. I am suggesting that censorship of information leads people to the silos.
So you are saying, that things got better when people were banned.
Because when they got banned, many other communities saw improvements as well, not just those?
No I am saying that when you censor/suppress debate in the public square you drive people underground where they land in echo chambers and develop extreme views because they don’t have public debate.
You don’t need to ban people from echo chambers if they don’t land there in the first place.
Your solution is reactive to a problem you caused. My solution is don’t create the problem in the first place.
So I have done the leg work to see what happens and it turns out that if you give space to extremist views they overtake other conversations and dominate the community.
What people don’t seem to grasp is that all speech is not equal, and that our brains react very predictably to certain arguments and content.
For example, your argument is not supported by the paper, which I have read: the paper shows that the behavior of the bad actors changed across the site and became less hateful.
However the argument is complex, and goes against commonly held beliefs, such as sunlight is the best disinfectant etc.
More exposure results in more reinforcement of popular ideas, until something happens externally.
When you feel the need to censor or suppress information all you are doing is admitting that your argument is just not as persuasive as the opposition and requires handicapping. People see that as the same thing as your argument being false which is why they always work their way tirelessly around your efforts to suppress and censor.
If you get to the point where you feel you need to censor, suppress, or outright ban voices to be heard, you have already lost the communication high ground, and no matter how true or good your opinion/idea/position, it will lose in the court of public opinion… and frankly should, because you did not put in the appropriate effort to be persuasive.
> There's some academic research to the contrary; banning /r/fatpeoplehate and /r/coontown on Reddit reduced incidents of hateful speech across the platform.
That does not imply it reduced hateful speech overall, maybe the censorship just increased antipathy and drove that speech underground or to other platforms where it couldn't be seen.
"Off Reddit" is a win. Recruitment in neutral-ish venues like Reddit is critical for extremist groups; people aren't starting on Stormfront.
Not necessarily. If it drives the content off Reddit but onto another platform that's friendly to only these extremists and their views then you may just end up radicalizing the members of the original banned subs even more.
I don't know if that's what happened and there's probably a lot more research to do here but I'm not convinced that deplatforming is actually a good outcome societally without more data.
That's still just a conjecture of a meaningful effect. Recruiters are able to change tactics in response you know. You're just naively assuming that those old tactics worked better just because reddit itself changed, but it could very well be the case that the more extreme rhetoric only attracted people who were already extremist and turned off moderates, but a more moderate approach that's now required could funnel more moderate people into an extremist pipeline.
"Off reddit" is just a win for reddit's PR, and that's why they did it, and no other reason and no other effects can be inferred.
The claim you are addressing is a separate one from the fatpeople hate story.
And that claim is evidenced; it's not conjecture. I don't have it handy on me, but we have mapped out the ways people are recruited, and things like fatpeoplehate and coontown are the funnels for groups to find new recruits.
Here’s one - https://dl.acm.org/doi/abs/10.1145/3447535.3462504
There are several others, on everything from ISIS to hacktivists. The mechanism is the same; heck, “red pill” is the term for this, it's actually quite well known.
[flagged]
Against what, microbes?
There’s a reason surgeons disinfect their hands with more than a skylight. Sunlight is a shitty disinfectant.
I wasn't aware that society means surgery. Likewise that veiled means literal. By extension, ethnic cleansing probably means giving certain parts of a population a well deserved bath?
Edit: I did not want to imply that you meant it that way. But in a different context, or coming from the wrong person, it may sound like a dog whistle.
>Exposure to many viewpoints, including wrong ones, provides a counterbalancing effect. When you actively try to suppress information you create a “forbidden knowledge” effect where people seek out silos where extreme and wrongheaded information gets passed around without sunlight, the best disinfectant; it grows faster, becomes more wrong, more extreme, and more dangerous.
Fact checkers don't suppress information, they add context and information to posts others make and provide the exposure to many viewpoints that echo chambers often do not have.
People haven't stopped posting wrong and biased information with fact checkers, they just have the counterpoint to their bullshit displayed alongside their posts on the platform.
>Seems to me in my experience after decades of watching and participating in online discussion extremism really only became more problematic when fact checking and active efforts to suppress took hold. Whatever the good intentions may have been, the results were worse.
My decades of watching show exactly the opposite. Extremism is and was rampant long before fact checking, and fact checking really only served to push some of the most extreme content to the margins and to smaller platforms that don't have it. In some ways that concentrates it, as many of these opinions fall apart quickly when exposed to truth and facts.
> Fact checkers don't suppress information,
I think some moderation is important, but misrepresenting fact checkers (damn ironic actually) doesn't serve us. Of course fact checking suppresses information! That's the whole point. Sometimes it results in straight-up deletion, and even when it doesn't, it results in lowered reach, aka suppression of what the algorithm would normally allow to trend, etc.
>Of course fact checking suppresses information! That's the whole point
It's not. The fact checkers in this case, and in almost all the cases we're discussing, ADD information that challenges the posted content; they don't censor it or restrict it from being posted.
Outside of illegal content, that is. Content deemed illegal was removed by moderation teams before fact checking existed, and will continue to be removed under community notes with little to no change.
Yes I am aware of what a fact checker is supposed to do and am aware of what they really do.
What they really do is spin information.
> Seems to me in my experience after decades of watching and participating in online discussion extremism really only became more problematic when fact checking and active efforts to suppress took hold. Whatever the good intentions may have been, the results were worse.
Seems like the opposite. Traditionally we only had siloed forums which were often heavily moderated by volunteers who considered the forums their personal fiefdom, read every single thread and deleted stuff for being "off topic" never mind objectionable, plus the odd place like /b/ which revelled in being unmoderated. Then you ended up with more people on big platforms that were comparatively-speaking, pretty lightly and reactively moderated. Then you ended up with politicians weighing in against moderation with the suggestion even annotating content published on their platform was a free speech violation, let alone refraining from continuing to publish it.
The difference between antivax sentiment now and circa 2005 isn't that forum owners back then never decided they weren't having that nonsense on their boards or closed threads with links to Snopes, or that it has since become difficult to find any reference to it outside antivaxxer communities. Quite the opposite: the difference is that it's now coming from the mouth of a presumptive Health Secretary, amplified on allied news networks, and now we have corporations running scared that labelling it a hoax might run the risk of offending the people in charge. Turns out sunlight is a catalyst for growth.
> The difference between antivax sentiment now and circa 2005
The antivax movement literally grew exponentially when vaccine information started to be actively censored on the largest social media platforms, and you think that is because there wasn't enough censorship? People were literally driven into antivax information silos because a bunch of idiots decided that vaccine criticism should be forbidden in the public square.
Wow.
Sorry, but I live in a country using exactly the same social media providers as you, subject to exactly the same (actually pretty limited) censorship and without widespread, committed and politically-aligned antivax sentiment
People in the US didn't need to be "driven into antivax information silos", because those antivax information silos were their favourite talk show hosts and some of the country's most prominent politicians. Turns out that promotion of antivax sentiment as an important issue that must be discussed and constant attacks on public health officials doesn't "disinfect" people against the belief that there might be some truth to it...
So you are arguing for exactly what? You don’t want freedom of speech? You don’t want body autonomy? You want authoritarian control of the populace?
Not sure where you live, but if those are the things that are important to your leaders and people, I wouldn’t want to live there or even visit. Sounds awful.
I don't recall expressing any of those sentiments you've attributed to me, but I'll note it's quite a shift on your side from "sunlight is the best disinfectant" to "your country's mainstream media and politicians didn't encourage antivax sentiment enough to reduce vaccination levels or increase death rates to US levels? Sounds horrible"
I note that the original topic was about Zuckerberg being so afraid of his corporation being censured by the incoming government that he's pledged to move his moderation team to a state which voted for them and refrain from publishing any "fact checking" notes in Facebook's name lest they conflict with the government and its supporters. That doesn't sound like a libertarian paradise either
> I don't recall expressing any of those sentiments you've attributed to me
Perhaps I misunderstood your intentions then.
If you believe that antivax debate was in the mainstream in the US and there wasn’t an active attempt to suppress just because some voices bled through the censorship, you are simply wrong. Zuckerberg even noted in this announcement that pressure from the Biden administration to censor speech was significant.
My consistent point here is that censorship drives extremism because it suppresses the debate where the debate wants to take place and pushes the conversation, for those interested in the topic, into siloed echo chambers. That definitely happened around vaccines in the US over the last 4-5 years. I know that happens for a fact and have personally tried to gently encourage people I know who felt the censorship frustrations and leapt to other platforms to still read all sides before making decisions.
Whatever Zuckerberg's internal motivations are on this change of policy, I don't care. Community notes seems to be a better way than suppression. Others may have a different opinion and that's ok. I encourage them to freely express it and would never support anyone trying to shut that debate down.
How wrong of me to think that high-profile politicians and wall to wall cable news coverage are anything other than little-noticed voices bleeding through the all-pervading censorship of... two internet companies deleting a handful of accounts after people had pointed out how many million likes their dangerous medical advice was getting and some algorithmic "are you sure you want to link to this hoax?" interstitials. Really, the argument that Meta's moderation was futile and inept (even more so than its policing of scam ads and spambots) has far more credibility than attempts to portray it as some evil internet police forcing people to hide out on tiny islands of antivax.
It seems a little unlikely that people who decided to delete their Facebook account and seek out an echo chamber because they didn't like seeing FactCheck.org links slapped on vaccine function would have nevertheless listened very carefully to FactCheck.org or the public health officials their favourite politicos were slagging off if only they were able to d̶e̶b̶a̶t̶e̶ post misleading memes about public health on Facebook first. I mean, the anger at third party fact checkers is explicit rejection of the idea there's anything to debate.
Anyway, regardless of whether self-proclaimed fact checkers actually live up to their label, it's difficult to describe a corporation bending the knee to an incoming administration that's determined that corporations shouldn't link to them as a victory for free speech or enabling controversial viewpoints to be debated as opposed to merely promoted on internet platforms. Must be wonderful for Zuckerberg to be able to express himself freely without any threat of censure whatsoever on the day he announces that he'll be firing his moderation team so he can relocate it to a state the incoming administration considers less susceptible to wrongthink.
The principle is sound, but it’s a principle.
The mechanisms of online speech show us a few other issues.
For example, certain ideas are far more "fit" for transmission and memory than others. Take a look at something as commonplace as "ghosts" or the idea of penguins. Ghosts are in all cultures, and they are essentially people with some additional properties. Penguins are birds that don't fly.
Brains absorb stories and ideas like flightless birds easily, because they build on pre-existing concepts.
Talk about spacetime, or multiple dimensions and you aren’t going to have the same degree of uptake.
So when I put certain ideas into competition with each other, all else being equal - the more suited for human foibles, the more successful the idea.
People also don't make that much effort to seek out forbidden knowledge. Conservative mainstream media has made many things forbidden - a third of America isn't aware that Obamacare and the ACA are the same thing.
Sunlight is the best disinfectant for certain breeds of germs. Many others get on just fine.
In my many decades of online existence, which includes being on multiple sides of moderation, extremism was on the rise well before then, because we had created the arguments and structures that thrive on it.
Content moderation was a haphazard effort created out of necessity to stall it.
Personally, I hope this works. Moderation sucks, and is straight up traumatic. If we can get better, more effective marketplaces of ideas, then I am all for it.
I care about the effectiveness of the exchange of ideas. I see free speech as a principle that supports this. But the goal is always the functioning of the marketplace.
> Seems to me in my experience after decades of watching and participating in online discussion extremism really only became more problematic when fact checking and active efforts to suppress took hold. Whatever the good intentions may have been, the results were worse.
This is just overtly and flatly wrong. I reject your experience fully because over the past few decades the internet has become more open, not less. We openly debated people that believed vaccines caused autism and gave them microphones. Every single loud asshole and dipshit was given maximum volume on whatever radio show or podcast or social media platform they could want.
You can reject my experience all you want, but the reality is that between 2020 and 2023ish the world's top social media platforms became less open about specific kinds of information and actively tried to censor and suppress any information contrary to a government opinion/narrative about certain subjects. During this time certain forms of extremism exploded in popularity as people were driven to information silos to find and learn about the information the social media platforms were trying to suppress. Those silos generally didn't have censorship, but they also didn't have contrarian voices. So when folks landed in those silos, all they heard were the assholes at full volume, and without the contrarians, they followed those assholes.
Specifically on vaccines, the antivax crowd was pretty much limited to some nutjob soccer moms, holistic medicine fanatics, and RFK Jr until you stopped having conversations with them, because you folks who want or believe that censorship is good silenced the debate and did not follow them to the forums where they went to spread their ideas to continue the debate.
I am absolutely convinced that the growth in the antivax movement is directly tied to the censorship effort (and the desire of the government to not be completely honest about the vaccines at the time).
No free lunch here. Social media is different from systems in the past because it gives everyone Free Broadcast capability.
In the past people were told they had Free Speech, but they didn't have Free access to Broadcast Media (newspapers/radio/tv/movie studios/satellites). It was always up to someone else with Access to Broadcast (one-to-all messaging) to prop up voices they thought were important.
Shannon's information theory tells us Social Media as a system can't work because once you tell people their voice matters, give everyone in the room a mic plugged into the same sound system, and allow everyone to speak, firstly you get massive noise, and secondly, as a reaction, people scream louder and louder and repeat their message more and more. Noise only compounds. The math says it can't work. The way people are debating about this is under an assumption that it can.
> The math says it can't work. The way people are debating about this is under an assumption that it can.
Yet here we are… the math seemed to work overall just fine minimizing the anti-vax movement until someone started externally futzing with the numbers to try and force a specific result from that math. When you do that, apparently more of your components run off to form other equations and no longer participate in your equation than before you tried to manipulate the messaging.
You are not going to get everyone to agree with you…ever. But suppressing and censoring debate in the real world example of vaccine acceptance to try and achieve that result backfired spectacularly by galvanizing and growing that movement far far beyond what it was…or should have ever been.
Minimal? Again, you are just objectively wrong. The antivax movement had been growing since the 90s; RFK Jr didn't exist in a vacuum. The entire reason there was pushback against the COVID vaccine in the first place was that this movement was there already, much like the movement against abortion.
You are rewriting history to fit your viewpoint. The reality is that you are wrong. And those silos that people moved to were equally guilty of censoring voices and banning people not aligned with their beliefs. Even now Musk has no problem censoring and banning people off Twitter for being too mean to him.
[dead]
You must have sleepwalked through Covid then.
Citing the simple fact that every western government ignored their own pandemic plans and did ad-lib bingo instead was enough to get you banned from Twitter, Facebook and Reddit for close to two years.
> I haven't seen a single discussion be worse off due to fact checking
The idea that there is some official governing body that has access to indisputable facts and has the power to designate what you or I or anyone else can talk about is preposterous and, frankly, anyone on a site called Hacker News should be ashamed for supporting it.
>The idea that there is some official governing body
Platforms were encouraged to create their own departments, and have. There is no "one" or "governing" body here, so this is more hyperbole in this already flagrantly absurd discussion.
>have the power to designate what you or I or anyone else can talk about is preposterous
No one is stopping you from posting bullshit, fact checkers simply post the corresponding challenge or facts that allow others to see the lack of truth in your statements.
The idea that you can say whatever you want, lie all you want, and be unchallenged as some form of right is absurd. Claiming that being open to challenge amounts to censoring you or preventing you from talking is also completely absurd.
>and, frankly, anyone on a site called Hacker News should be ashamed for supporting it.
Frankly, anyone on this site should be able to separate hyperbolic strawmen from reality.
> Platforms were encouraged to create their own departments, and have. There is no "one" or "governing" body here, so this is more hyperbole in this already flagrantly absurd discussion.
> Finally, in the midst of operating or considering up to three different avenues of “misinformation reporting” (switchboarding, EI-ISAC, and the “misinformation reporting portal”), by early 2020, CISA had dropped any pretense of focusing only on foreign disinformation, openly discussing how to best monitor and censor the speech of Americans.
That's a quote taken directly from the House Judiciary report on "disinformation", page number 31 - https://judiciary.house.gov/sites/evo-subsites/republicans-j...
Here's another one
> The EIP repeatedly used its fourth category, in particular, to justify the censorship of conservative political speech: the "Delegitimization of Election Results," defined as "[c]ontent that delegitimizes election results on the basis of false or misleading claims." This arbitrary and inconsistent standard was determined by political actors masquerading as "experts" and academics. But even more troubling, the federal government was heavily intertwined with the universities in making these seemingly arbitrary determinations that skewed against one side of the political aisle.
So please, let's not pretend that the fact-checking organizations, the information streams they themselves depended upon and the pressure that was applied to all of the social networks was organic "encouragement" meant to challenge bullshit posted online - it was a censorship campaign by the United States government, plain and simple.
A voice of sanity in a cacophony of madness. I hold no sympathy for Meta, but it's laughable to pretend that so-called "fact-checkers" are anything but status-quo enforcers.
When you say this, what are you referring to? Was this about the general vibe of online conversations, or are you talking about specific incidences or traits?
The problem with "Fact-Checkers" was that since they're human, they're going to impose their own biases and their own sense of morality. For well over a decade the majority of them were also left-leaning (per Silicon Valley), and so even true things that conservatives were trying to say got "censored" because these left-leaning folks believed their own sense of truth and morality was superior.
Who was checking the fact checkers, when they were wrong quite often?
> when they were wrong quite often?
citation please
There you go: https://judiciary.house.gov/media/in-the-news/facebook-execs...
I've not seen any examples of the "official" fact-checkers being wrong; have you?
Joe Biden is sharp as a tack and any videos purporting to show the opposite are cheap fakes deceptively edited by the Republicans and their far right allies [1] [2] [3]
[1] https://www.politifact.com/article/2024/jun/21/cheap-fake-vi...
[2] https://apnews.com/article/biden-trump-videos-age-cheap-fake...
[3] https://www.nbcnews.com/tech/misinformation/biden-g7-video-j...
It's trivial to find examples. I put "fact checkers were wrong" into DDG and turned up:
https://www.telegraph.co.uk/business/2025/01/07/five-times-f...
https://www.bmj.com/content/376/bmj.o95
https://reason.com/2021/12/29/facebook-masks-false-informati...
Even when they aren't wrong, they can be biased. See for example:
https://www.allsides.com/blog/media-bias-alert-politifact-fa...
Also, compare and contrast how they handled Sanders and Trump's presentations of substantially the same claim:
https://www.politifact.com/factchecks/2015/jul/13/bernie-san...
https://www.politifact.com/factchecks/2016/jun/20/donald-tru...
There's an entire site dedicated to pointing out more examples, aptly named https://www.politifactbias.com/. They show their work in great detail.
It's trivial to introduce bias by simply being selective about who you hold to greater scrutiny (https://slatestarcodex.com/2014/08/14/beware-isolated-demand...).
The examples you provided mostly deal with hotly-contested information around Covid-19, where there are countless amounts of incorrect information, politicized reporting, and straight-up propaganda. I'm not surprised that Facebook's fact-checkers got a couple of articles mislabeled, especially if they blended in with the wave of genuine disinformation that accompanied the pandemic.
Given that only two articles seem to be listed as having been wrongly flagged as misinformation (the Reason article and the BMJ article also mentioned in the Telegraph report from today), I have to assume that there actually aren't that many large errors on the part of the fact checkers. If there were more than two, or the mistakes were much bigger, then the free speech advocates would never stop mentioning it.
There can definitely be bias when it comes to fact-checking, I wouldn't deny that. I also think that education and knowledge sharing can be greatly harmed by social media incentives to provide the most "engagement". Having an actual human in the process somewhere introduces some error but also cuts down on a lot of the dumb crap that would otherwise spread.
You asked if I saw examples and said that you haven't seen any examples; I showed you examples.
There certainly are more examples, and the free speech advocates I know do talk about the subject generally quite a bit.
One I just now remembered: Dr. John Campbell (https://www.youtube.com/@campbellteaching) has run into issues with this and has pointed out many other cases where established "knowledge" about Covid that we were previously not allowed to criticize turned out to be objectively wrong. These disputes have resulted in many other people being censored despite later being shown to be correct, or at least reasonably justified by the best information available at the time.
This is someone who was proactively warning about the potential severity of Covid well before others, and advocating for proper hand-washing very early on (before more science emerged suggesting that skin contact is a relatively minor transmission vector). In the early days of the pandemic, he was complaining loudly about Fauci's initial mask rhetoric, arguing that the general population absolutely should wear masks and that production needed to step up. He's been doing serious medical content on Youtube for 17 years (sort by oldest to see) and first posted about Covid on Jan 26 2020, when awareness was still low and the virus was imagined to have been contained to China, presenting extensive detail on what little was known at the time (https://www.youtube.com/watch?v=aPvpfC7NfR0).
But now he mostly makes videos against "the establishment", out of frustration with their unwillingness to consider new science over dogma.
I apologize for not scouring the internet for examples. If you had not sought those examples out and provided them, I probably would never have seen any cases of incorrect fact-checking in my actual life, but I would have seen many cases of misinformation being fact-checked. If you have to intentionally seek such cases out or hear them shouted from the rooftops by free speech advocates, then there probably aren't that many such cases.
I don't have time to search through an entire Youtube channel, but I will say this: there are many, many doctors out there with factually incorrect views about medical science. I personally have talked with doctors who think that the Covid vaccine killed hundreds of thousands of people (it didn't). I do not necessarily think this doctor is wrong, but from the perspective of a fact-checker who is given the current best knowledge of Covid it is hard to determine who is making genuine good-faith efforts to criticize vs who is simply repeating what they want to be true.
And for the record, you absolutely are allowed to criticize the establishment views. When it comes to important topics like medical science, however, you may just have additional context added saying that this is a contrarian view which (statistically) is more likely to be false than the consensus. Everybody likes to complain loudly about being censored, but the reality is that their views are just being disputed and information provided that they are going against the mainstream view.
You wrote: "I've not seen any examples of the "official" fact-checkers being wrong; have you?".
So, you do now admit there are examples of "official" fact-checkers being wrong?
Specifically, I was talking about in my daily usage, not a widely-distributed article on a single example. Have you personally seen any fact-checking whatsoever, much less fact-checking that is misleading? Or do you need to search it out in order to find it?
> Trump says the unemployment rate for African-American youths is 59 percent.
> In May, the bureau said the employment-population ratio for blacks ages 16 to 24 was 41.5 percent. Flipped over, that would mean that the unemployment ratio - although such a statistic is not published by the bureau - would be 58.5 percent. That’s pretty close to the 59 percent figure Trump cited, Sinclair noted.
> Mostly False
Crazy
Who was fact checking the fact checking fact checkers?
[dead]
[flagged]
[flagged]
> From bad to worse. Meta is probably one of the single largest funders of fact checking. Now that appears to be coming to an end. Third parties will no longer be able to flag misinfo on FB, Instagram or Threads in the US.
Zuck has probably done exactly that cost-benefit calculation — FB has put enormous resources into fact checking, and to most people it hasn't moved the needle on public perception in the slightest. Facebook is still seen through the lens of Cambridge Analytica, and as a hive of disinformation. The resources devoted to these efforts haven’t delivered a meaningful return, either in public trust or regulatory goodwill.
you can still flag via the community notes system.
Fact checkers are often wrong, and often corrupted by the activists that end up working at them. For example I’ve repeatedly noticed articles from Politifact that are blatantly wrong or very misleading. When I look up those authors and their other work, their bias is clear. Community notes on X/Twitter is far more effective and accurate.
The older I get, the more I realize that people just live in different realities and so many contradictory facts can be true. Obviously this is a source of conflict.
I don't think facts ever contradict each other; it's the stories people create to explain the facts that are at odds. These stories lead people to extrapolate other beliefs which they present as "facts", and it's an organic process of discussion and exposure that changes people's minds over time.
I personally think aggressive fact-checking authorities impede this process, because people don't change their minds when faced with authoritarian power against which they are powerless, and because they are powerless here, they get angry and they disengage. This ends up reinforcing their beliefs, and now you've lost all chance of change.
Right. Imagine facts as data points on some Cartesian plane, and the narrative surrounding the facts as the curve fit to those points. The data points might all be sound, but by selectively omitting some, or by weighting their "uncertainty" higher or lower, you can fit just about any damn curve you want to them.
One such instantiation of this: https://chomsky.info/consent01/
I also think that simple exposure to a narrative, whether it has any actual facts/data backing it up or not, is likely the primary driver of people believing it.
Now, consider that in most "free speech" societies, those with money can repeat things many orders of magnitude more than others. Over time, this results in influence. Thus, while many countries have "free speech," I'd say they don't have "fair speech." The two concepts complement each other, but one is not the opposite of the other.
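To make the curve-fitting analogy above concrete, here is a tiny illustrative sketch (Python/NumPy, with invented data): the same data points yield opposite "trends" depending only on which points the narrative chooses to weight.

```python
# Toy illustration of the point above: the data points never change, but
# selectively down-weighting some of them lets you fit almost any trend.
# All numbers are invented purely for illustration.
import numpy as np

x = np.arange(10)
y = np.array([1.0, 2.1, 2.9, 4.2, 4.8, 6.1, 3.0, 2.2, 1.5, 0.8])  # rises, then falls

# Narrative A: trust the early points, discount the later ones ("the trend is up")
w_up = np.array([1, 1, 1, 1, 1, 1, 0.05, 0.05, 0.05, 0.05])
slope_up = np.polyfit(x, y, 1, w=w_up)[0]

# Narrative B: trust the late points, discount the earlier ones ("the trend is down")
w_down = w_up[::-1]
slope_down = np.polyfit(x, y, 1, w=w_down)[0]

print(f"slope under narrative A: {slope_up:+.2f}")
print(f"slope under narrative B: {slope_down:+.2f}")
# Same points, opposite conclusions; the choice of weights is the narrative.
```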
The idea of some kind of universal fact is also misleading: some statements of fact are only statements of belief, and others are so ill-defined that people end up debating two different things.
Yeah, journalism always has some inherent bias. But to say that the X community is going to be less biased than a fact-checking organization staffed by journalists whose job is to be neutral (within what's humanly possible) is frankly absurd.
But they are not claiming to have the facts. That's the big difference.
Why is it absurd? Journalists don’t think their job is to be neutral. They are among the most biased. They abuse the trust given to them, which is why they don't deserve it. Community notes allows a diversity of opinions to compete, which is a better way to seek truth.
you're confusing fact checking with forum discussions and social media posts
What specifically is the difference? Other than an appeal to authority?
It's ending because the government that encouraged fact checking is ending. The new one has made it clear they despise fact checkers
Or they are more realistic, or less corrupt.
Seems to me that if some authority is determining for me what are facts and what are not, then I am easily shaped and fooled.
Community Notes at least don't claim they have the facts. So that leaves you more with a responsibility to make up your own mind.
I know this isn't for everyone; there are still a lot of people who like to have leaders tell them how they should live. But nowadays there are more and more people who want more independence. You will have to live with that too.
None of this is to do with anything about what people want. It's to do with the government. Meta has always, by necessity to some degree, gone with what the current US administration wants re: content moderation. This is the same thing.
Do you really think the company which has openly admitted it wants to create AI profiles that post as if they're humans and not tell you they are AI cares at all about facts or what you think or believe?
Well yeah true, the decision is probably mostly made because of the change of government. The fact checking was pleasing the left, and now that the right has the power, this left-wing-propaganda thing has to go.
But then is community notes right-wing?
They could also have kept the fact checking system, but just alter the facts to please their agenda.
But they didn't do that; they are replacing it with Community Notes, which isn't some small group supposedly figuring out the facts for everyone, but a community-built information system.
To me that seems a lot more fair and less prone to corruption. So regardless of the real motivation behind the move, I think it will have positive effects for society. At least a step in the right direction. Still a long way to go.
> The fact checking was pleasing the left, and now that the right has the power, this left-wing-propaganda thing has to go.
Yes you understand. Meta, due to its problems with moderation over the years, both legal and political, has largely ceded direction of that to the government. Previous government wanted things like fact-checking, an oversight board for moderation decisions, and censorship of certain issues. Current government doesn't want any moderation at all, like X, the social media owned by Trump's biggest ally, which he personally loved so much that he created his own Twitter clone when he was booted off of Twitter. So in that environment, the easiest, simplest thing is to treat Meta platforms like X. That's all there is to it. It signals commitment to the new administration, it heaves political and legal pressure off Meta, etc. much more than your suggestion, that they keep fact-checking but bias it towards the right (which would need to be explained to the administration, etc.) Just saying "We're like X now" gets the point across most cleanly, and it's cheaper
Right. And you know what type of government really despises fact checkers? Autocratic / oligarchic governments (Russia, China, etc.)
Sure, and that's the gov't we have now. The previous one was also suppressive but in different ways
That's simply not true.
Exactly! They simply used lawfare in an attempt to bankrupt, seize the assets of, and imprison their main political opponents rather than keep the scale balanced (for the sake of democracy) /s
You know lawfare can only be used against you (in the US) to seize your assets, bankrupt you, and imprison you if you commit major crimes right?
Thank God. Fact checkers and political organisations pretending to fact check frequently spread false information. Aside from the 2020 election interference regarding the Hunter Biden laptop (which was falsely claimed to be a Russian disinformation effort), you can visit Snopes right now and read an article on how someone that blew up people (and now works for BLM) may not be a terrorist because ‘there are many different definitions of terrorist’.
https://www.snopes.com/fact-check/blm-terrorist-rosenberg/
I think that Snopes link makes it perfectly clear what is going on. Just because you disagree doesn't mean that it's wrong.
I think the Snopes link indicates the grandparent's point well, if not in the way that was intended: words being subjective and imprecise, the fact checker has many degrees of freedom. If we allow fact checkers to censor content, they will use the linguistic degrees of freedom to censor selectively to the benefit of their political bias. (Your terrorist is my freedom fighter, your demonstrator is my rioter, your just cause is an imposition on my freedoms, etc.)
Snopes was careful to show degrees of freedom with this fact check, but most social media fact checkers will not be so careful. Social media fact checkers will have a tendency to censor in the direction of the currently-in-power political party, because that party is able to set regulatory policy on social media companies. So the only thing which will prevent censorship from blowing with the political winds is to not have centralized censorship.
Community Notes (as implemented at Twitter) requires that a note be rated helpful by multiple people who normally disagree with each other on issues before it is shown. I am cautiously optimistic that it may be possible to correct wrong speech with more speech in a nonpartisan manner.
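For what it's worth, here is a toy sketch of that bridging requirement. It is a deliberate simplification (X's published Community Notes documentation describes a matrix-factorization model over ratings rather than fixed clusters), but it shows the core rule: a note only surfaces if raters from different viewpoint clusters both mark it helpful. The rater names, clusters and notes are invented.

```python
# Toy sketch of the bridging rule: a note is surfaced only if raters who
# normally disagree with each other both find it helpful. The rater names,
# clusters and notes below are invented.
from collections import defaultdict

# (rater, rater's viewpoint cluster, note, found it helpful?)
ratings = [
    ("alice", "cluster_A", "note1", True),
    ("bob",   "cluster_B", "note1", True),    # helpful across clusters
    ("carol", "cluster_A", "note2", True),
    ("dave",  "cluster_A", "note2", True),    # helpful in one cluster only
    ("erin",  "cluster_B", "note2", False),
]

def notes_to_show(ratings, min_clusters=2):
    helpful_clusters = defaultdict(set)
    for _, cluster, note, helpful in ratings:
        if helpful:
            helpful_clusters[note].add(cluster)
    # Only notes rated helpful by at least `min_clusters` distinct clusters survive.
    return [note for note, clusters in helpful_clusters.items()
            if len(clusters) >= min_clusters]

print(notes_to_show(ratings))  # ['note1']
```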
No. Someone who attacks civilians for political gain is a terrorist.
Edit for the reply below: yes that very obviously includes being a member of a group that attacks civilians for political purposes.
There being debate over whether other groups that do other things should be called terrorists is a separate matter.
Her specific crimes were possession of unregistered firearms, transport of firearms and explosives shipped in interstate commerce, unlawful use of false identification documents, and robbing armoured cars.
Given all armoured car robbers would engage in such activities (unregistered firearms, explosives, fake papers, etc.), is it your position that all armoured car robbers are terrorists?
No. Due to rate limits, I replied above.
[flagged]
As a leftist, while this is concerning, it's also important to remember that Meta censors left content as much as it does right content.
So, while this announcement certainly seems to be in bad faith (what could Mark mean by "gender" other than transphobic discussion?), this should be a boon both for far-right and left discussion.
Does that mean increased polarization and political violence? Surely, surely.
He explained it in the next sentence. If people are free to say it in Congress they should be free to say it on Meta platforms too, and that includes a range of non-binary opinions that aren’t intrinsically istphobic.
>it's also important to remember that Meta censors left content as much as it does right content.
This is a bold claim. I see a lot of people in this discussion that seem to have a very different experience. Your point would be much stronger with evidence, if only to calibrate everyone's understanding of what you mean by "left content".
>what could Mark mean by "gender" other than transphobic discussion?
From what I've been able to tell the last several years, the overwhelming majority of your ideological opponents here have no interest in visiting physical harm upon others simply because of how they view and present themselves. They just don't want to be, or feel, compelled to treat the other person's self-image as an objective fact. Some of them additionally have concerns about the capacity of minors to give informed consent for the related medical procedures, or consider it suspicious that the prevalence of such self-identification has risen drastically in recent years (to the point that they imagine social pressures toward such identification).
>Does that mean increased polarization and political violence? Surely, surely.
I have seen statements like this from your opponents interpreted as veiled threats in the past.
> Your point would be much stronger with evidence, if only to calibrate everyone's understanding of what you mean by "left content".
I think it's extremely likely that people will see the "de-ranking" of content they agree with as bias, regardless of their place on the spectrum.
Similar: "Biden must have committed election fraud, because all of my friends voted for Trump and I don't know anyone who voted for Biden." (previous election, obviously) Well, is that because no one voted for Biden, or because the friends/content you see are tuned to how you lean?
> while this announcement certainly seems to be in bad faith
Not really though. It means that feminist campaigners can advocate for single-sex spaces and services without the looming threat of being banned. This is great news and a win for free speech.
there are plenty of TERFs on Meta’s platforms already
That's good, hopefully they can speak more freely now.
You know that this announcement is made to win favor with Trump. I would not expect that leftism will be any more allowed
I agree. At the very least, it's using Trump as cover.
That said, if they remove the political filter, they're opening the door for all discussion (even from the left).
Of course, they could surreptitiously filter out the left. Hell, why not?
That's my guess as to what they intend to do.
Just moving the needle for allowed content to include transphobia and racism.
[flagged]
> people just do not wish to participate in other peoples gender performances.
The “bad faith” is in the pretending that we don’t all participate in gender performance with every single person we come into contact with, every single day, for our entire lives.
The post you are responding to does not claim otherwise.
Again: it is specifically pointing out that other people are not obliged to participate in other people’s performance.
People are free to act, have whatever cosmetic surgery or take whatever hormones they wish to.
Where their rights end is asking other people to refer to them based on their performance rather than their sex.
Again, it is not ‘bad faith’ for Meta to allow discourse from people to disagree with gender ideology. Meta are not hiding anything, they are directly saying that they want to allow people that disagree with gender ideology - which judging by the last election is most Americans - to use their services.
> this should be a boon both for far-right and left discussion.
If by left discussion you mean discussion of the genocide in Gaza, don't count on it, because this censorship is bipartisan in the United States.
Zuck cares about currying favor with the powerful. He doesn't give a crap about the powerless. Also, he's pretending that Texas, the proposed site for content moderation, is not politically biased, which is laughable. "We're moving from a blue state to a red state" is not a serious proposal for reducing or eliminating bias.
> If by left discussion you mean discussion of the genocide in Gaza
It’s also a right wing complaint, and they’re also silenced for bringing it up.
Every time someone calls Biden "Left Wing", I roll my eyes. So it's quite possible that you have a different definition of Right Wing than I do.
But Trump, Fox News, and the Republicans are absolutely actively aiding the genocide and squashing dissent.
[flagged]
Conspiracy theory?
Meta is giving up on the (impossible by design) task of policing their own platform.
The result will be even more poisonous to users.
Just like cigarette companies using chemicals in the papers so that they burn slower. Does it improve the product? Maybe, along one dimension.
> Meta is giving up on the (impossible by design) task of policing their own platform.
It's a bit more than giving up. They are also going to push more political content into feeds.
And save money in the meantime, assuming users will not leave because of this.
They've also said there will be more harmful (but legal) content on there as they'll no longer automatically look for it, but require it to be reported before taking action.
As someone who worked on harmful content, specifically suicide and self injury, this is just nuts - they were raked over the coals both in the UK, by an inquest into the suicide of a teenage user who rabbit-holed on this harmful content, and by the parents of teenagers who took their own lives, to whom Zuck turned around and apologised at his latest Senate hearing.
There is research that shows exposure to suicide and self injury content increases suicidal ideation.
I'm hoping that there is some nuance that has been missed from the article, but if not, this would seem like a slam dunk for both the UK and EU regulators to take them to task on.
This exactly mirrors my thoughts, although I don't work in your field. One quote:
"For example, in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of content produced every day, we think one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated our policies)."
That is first order data and it's interesting. However, before making policy decisions, I would want the second order data: what is the human cost of those mistakes, and what percentage of policy-violating content will not be removed as a result of these changes? Finally, what's the cost of not removing that percentage?
For that matter, by talking about the percentage of active mistakes without saying how many policy violations are currently missed, you're framing the debate in a certain direction.
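Rough arithmetic on the quoted figures, just to show the scale implied: "millions" of removals a day at a 10-20% mistake rate. The 2 million/day figure below is an assumed illustrative number, not something Meta stated.

```python
# Back-of-the-envelope arithmetic on the quoted figures. The daily volume is
# an assumed illustrative number ("millions"), not a figure Meta published.
daily_removals = 2_000_000                          # assumption for illustration only
mistake_rate_low, mistake_rate_high = 0.10, 0.20    # "one to two out of every 10"

low = daily_removals * mistake_rate_low
high = daily_removals * mistake_rate_high
print(f"implied mistaken removals per day: {low:,.0f} to {high:,.0f}")
# i.e. hundreds of thousands of mistaken takedowns a day at that scale; the
# per-mistake human cost is exactly the second-order data asked for above.
```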
Indeed.
The human cost of a piece of content being taken down depends on the piece of content, and the reason behind posting it.
In the case of someone posting about recovery from self injury and including a photo of their healed self-harm scars, having that taken down by mistake would be more harmful than someone who posted a cartoon depiction of suicide for the lolz.
Yes.
My personal belief, for whatever that's worth, is that communication and speech are one of the most powerful tools any of us have. Talking can change minds, move societies, arouse emotions, and in general makes a difference. This is true no matter the format (text, voice, etc.).
That means that restricting communication should not be a casual activity. Free speech is a good ideal for a reason.
It also means that, if you believe in the primacy of free speech, you are obligated to consider the implications of that belief. Speech has effects. In my adult life, since 1990, we have seen a major change in the ease of communication. IMHO, society hasn't been able to fully adjust to that change -- or rather, that huge suite of changes. I sincerely do not know what a healthy society using the Internet looks like; I don't think we're in one now. All of these arguments (on all sides, mine included) are hampered by our lack of perspective.
Which is why we should research this carefully - and the research thus far points to consumption of graphic or even borderline depictions of suicide, self injury and eating disorder content (eg thinspo) being bad for mental health in at least teens.
Meta seem to be making the case for those who would see social media banned for people under the age of 18. To enforce that properly would require needing ID, and that then opens a whole can of civil liberty issues.
The social "science" research in this area is junk with small effect sizes, unclear causality, and multiple uncontrolled variables. People who claim to be following the science in this area are generally being disingenuous and picking results that support their preferred ideology.
The ideology of ... not doing something that could make adolescent (and adult) mental health worse, to the point of suicide?
Yeah, making that my ideology is a hill I'm willing to die on, sorry.
Forcing the entire world to conform to your idea of "child-safe" has negative consequences, too.
Can you share the negative consequences of not allowing, and not promoting, graphic images of self harm and suicide on a social media network please?
It gives a lot of unearned power to those who decide what constitutes "promoting," "graphic," "self harm," and "social media," for one thing.
If you or I happen to agree with the people who wield that power, rest assured it's only a temporary coincidence.
Given how easy it is to take things out of context, I'm not so sure that the original context really makes a difference.
There are more people online than any of us has heartbeats, and the n^2 number of user-user pairs generates detrimental effects that track any positive effects.
Much better, I think, for each of us to have a small and private personal social network, not to hand everything over to a foreign* company trying to project its social norms worldwide.
* Facebook claims about 3 billion active users, so for 89%-93.5%** of its users, the fact that Facebook is American makes them foreign.
** https://thesocialshepherd.com/blog/facebook-statistics#:~:te....
> However, before making policy decisions, I would want the second order data:
I think this the wrong lens. The correct lens is: if they don't voluntarily make this change, will they be forced to?
The incoming administration seems committed to banning "censorship", so I believe making a cost/benefit analysis is something of a false choice.
E.g. see https://www.youtube.com/watch?v=xJfUXVOoFBo
That ignores the regulations in the EU, and the UK (coming into force this year), and also the huge volume of lawsuits they are facing in the US. Does everyone remember Zuck turning around to apologise to the parents in that senate hearing? Those parents must feel this is a slap in the face.
This is a decision for the US market first and foremost. The lawsuits you mention are sadly irrelevant to the decision-making; again, if you are about to be forced to make this change by Trump, the results of some cost/benefit study will not sway his reasoning. His decision is already made.
FWIW I would not be surprised if the bluster about championing free speech abroad gets quietly forgotten; we’ll see. They explicitly state they will comply with laws, which in EU likely means continuing to moderate (more not less over time, given the regulatory trends).
> we think one to two out of every 10 of these actions may have been mistakes
May have been a mistake? Reminds me of RTO and the subjective feeling of being more productive in the office. They have the feeling they may have made mistakes and base their new policy on that feeling.
I think what they are saying there is the press release interpretation of experiments showing a false positive rate of 10-20%, with error bars wide enough that stating a percentage gives too many significant figures. But the definition of FP is necessarily fuzzy; if you can perfectly identify them as FP at scale then you have built a better classifier and you no longer have the FP problem. So any statement about FP rates necessarily needs to be couched in uncertainty.
I don't think it's malicious wordsmithing where they are mis-representing the internal data, though I don't have the data to confirm.
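For what it's worth, a minimal sketch of how a "10-20%, with wide error bars" figure typically gets produced: re-review a random sample of removals by hand and put a confidence interval around the observed false-positive fraction. The sample numbers here are invented, and this is not a claim about Meta's actual methodology.

```python
# Sketch of how a false-positive estimate with wide error bars is usually
# produced: re-review a random sample of removals by hand, then put a Wilson
# score interval around the observed fraction. Sample numbers are invented.
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - margin, centre + margin

sampled, judged_mistaken = 400, 60            # hypothetical re-review sample
lo, hi = wilson_interval(judged_mistaken, sampled)
print(f"observed FP rate {judged_mistaken / sampled:.0%}, 95% CI about {lo:.0%} to {hi:.0%}")
# A few hundred re-reviewed cases already gives an interval several points
# wide, and reviewer disagreement over what counts as a mistake widens it
# further, hence "one to two out of every 10" rather than a precise figure.
```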
The human cost can't be quantified in any meaningfully precise way on either side. The calculations are necessarily based on so many assumptions as to become entirely subjective. Ultimately the decisions will be made based on politics and business priorities, not any objective calculation of human cost.
> There is research that shows exposure to suicide and self injury content increases suicidal ideation.
Yes. However, I find this obsession with harm-based value judgment to the exclusion of all other considerations ethically problematic, to put it mildly. Ethics does not reduce solely to considerations of harm.
Would you mind expanding on that please, what are the ethically problematic things you are trying to balance against this?
Freedom of expression comes to mind. If someone had a friend commit suicide, should they not be able to discuss their experience in public?
Absolutely they should, and when I worked there that was known as "protecting voice", that content has always been explicitly allowed because it is free expression, even if reading it can be difficult for some people. The same with someone posting images of healed scars because they've been overcoming their self harm.
The content I'm talking about is graphic photos of suicide and self injury, fresh, blood-soaked cuts, bodies hanging, and graphic depictions of eating disorders (going beyond "thinspo", which is more borderline, and so downranked and not recommended rather than removed).
It's the latter that we believed (based on the advice of experts who we relied on for guidance) is harmful when consumed in large quantities.
Counterpoint: censorship inherently harms everyone. People I follow on Youtube have repeatedly had their ability to discuss topics such as suicide seriously interfered with. It actually gets in the way of factual reporting when a suicide occurs in the community and of discussing the facts of the situation so that people can learn from it and possibly prevent future deaths.
Not to mention, people just straight up have a right to talk about these things. It is not moral to hold one person responsible for an unintended and not reasonably foreseeable reaction to the discussion. And joking about these topics is legitimately therapeutic for some.
I'm not talking about that here - and that always fell under protecting voice - if mistakes were made they should have been reversed on appeal. e.g. imagery of healed scars in the context of recovery, discussions of struggles with mental health, suicidal ideation etc.
I'm talking about graphic images of self harm, suicide, eating disorders. And at some point you have to weigh the maximalist interpretation of free speech "you have to host whatever I want, as long as it's not illegal" with "promoting this stuff causes active harm, no".
>And at some point you have to weigh the maximalist interpretation of free speech "you have to host whatever I want, as long as it's not illegal" with "promoting this stuff causes active harm, no".
The burden of proof is on you to demonstrate that it causes such harm.
I don't generally think people should be held responsible for the unintended reaction to their speech of a small minority of the audience.
Having a piece of content removed, or demoted and not recommended, is being held responsible?
Also, the inquest into the death of Molly Russell found, based on the preponderance of evidence, that exposure to this kind of graphic content was largely the causative agent in her suicide.
What would the bar you require be, is there a bar?
> So, we’re going to continue to focus these systems on tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams.
I don't think this is exhaustive, and I think SSI (suicide/self-injury) + ED/etc. stuff is considered high-severity.
Fingers crossed.
I've already seen disturbing stuff on X since Elon took over that I never would have seen when it was Twitter. They don't even show the warning "this might be harmful content" on images and videos anymore. The X algo seems to go haywire every couple of days and dumps a bunch of this crap in my feed until I block 20+ bluecheck accounts posting it.
I believe it's only going to get worse going forward as they all adopt these policies.
My profile is largely unused, I follow no one, and about 1 in 3 times I open up the front page I get straight holocaust denial threads suggested. Completely insane.
It’s okay to leave X.
“Think of the children” isn’t really a good argument for censoring completely legal political discourse, which is what has been happening.
They are admitting that there has been a global push against free speech on these platforms.
>There is research that shows exposure to suicide and self injury content increases suicidal ideation.
I mean do you really need research to show this link? Of course it does.
We are okay with slapping an “R” rating on movies and allowing parents to be the ones who decide what content their kids can see. Why can’t we decide that parents also need to be the ones to stop their kids from consuming bad content on social media?
Automatically demoting, not recommending and adding "mark as disturbing" screens is what's going away - which is akin to the "R" rating.
But at this point, I'm siding with the "no social media for adolescents" people more and more.
[flagged]
> And why aren't you considering the countervailing harm to society of centralized moderation?
Are there reproducible studies showing that? What is the effect size?
(These objections go both ways!)
Arguably they don't go both ways. The distinction is action vs inaction. If someone wants action from someone else, they need to argue the case for that action. Arguing for not doing something on the other hand is never necessary.
To see why, consider that the space of possible actions someone could not take is infinite. If there's an expectation that someone do a ton of research and work to argue for why they are not doing something, then the amount of work they would have to do is thus also infinite. This way lies madness, which is why in reality the default outcome that results from not acting is always taken as a given.
Sometimes this reality is obfuscated by activists. They find some group of people who are just doing their thing, and demand that those people do some extra things (usually some costly things). The arguments they make for this are weak, but when the targeted people say they'd rather not do those extra things the activists demand their targets argue for not doing what the activists want to whatever level of effort (or greater) they themselves made. This can be an effective bullying tactic but isn't legitimate: it's on those who want action to argue the case for it, not those who don't to argue against.
Digital platforms like social networks default to uncensored. If the operators do nothing, then by the way they are built content is allowed. It takes additional work to categorize posts and block certain kinds of content. So the default outcome is free speech. If someone wants someone else to do work to suppress that, then it's on them to prove that it's truly necessary and that the benefits outweigh the harms. But that doesn't cut both ways; it's not required for other people to take on the argument for free speech. That's the default outcome so it just wins by default if the other side can't prove their case to a sufficiently convincing level.
Choosing not to act is an action. The choice not to moderate certain content is a choice to permit certain content.
Not acting is, by definition, not an action.
I see someone drowning. I have a life ring. I choose not to throw it to them. They drown. Did I act?
https://en.wikipedia.org/wiki/Duty_(criminal_law)
No! First sentence:
> Duty (criminal law), is an obligation to act under which failure to act (omission), results in criminal liability
You failed to act, which is why a law is sometimes required to compel action. However, saving a drowning person isn't something that triggers such a legal obligation in the USA unless you're the person who actually pushed someone into the water in the first place.
I don't get why this thread is getting so long or so abstract. The principles here are straightforward. Facebook don't actually have to care about what arguments activists make, but even if they did, it's on activists to win the argument for what they want. You don't get to automatically have your own way unless someone sits down and does a randomized controlled trial showing that you're wrong - and this is independent of what domain we're talking about.
But we're not talking about only the legal obligations. Plenty would argue that a person with a life ring and a drowning person in front of them have a moral imperative to act; the court of public opinion would be certainly negative about a video of someone casually watching the person die while holding the means to save them, even if you can't criminally prosecute.
In this particular case, changing the rules (and making the blog post explaining those changes!) is pretty clearly an action.
Here, the law flows from the moral judgment that there's a fundamental distinction between action and inaction. Otherwise, you'd be morally culpable for not basically enslaving yourself to helping whoever happens to be the poorest.
In my example, you'd be comfortable morally with "inaction"?
> I don't get why this thread is getting so long or so abstract
Activists use abstraction to attempt to overcome settled understandings and norms. Of course there is a distinction between action and inaction—as you recognize it’s even a legally significant distinction. The very existence of that norm is the reason anyone would say “inaction is really a form of action.”
It’s like how the notion of “antiracism” is an effort to reframe race neutrality as a form of racism.
Sure, but not choosing is a choice.
That's not really how language works. If not choosing to do something is a choice, then today we have all made an infinite number of choices. Nobody would ever express themselves that way.
But even if you want to play word games, choices and actions aren't the same thing. Choosing to act is quantitatively different from choosing not to act because it involves a different level of effort. It's wrong to assume that they are morally equal.
This seems to be presuming that there is some clear delineation between acting and not acting, but going through some daily occurrences it's difficult for me to find an objective line, mostly because there are choices one could make that allow one to call something inaction while it requires active action.
Say for example I'm passing by a beggar on my way to work. Before deciding whether I give them money, I can first decide to ignore or not ignore them. From a basic human perspective I want to say hello and be friendly (and I choose to do this), but it does make me feel worse if I decline than if I had ignored them, exactly because it makes it feel like a choice. But if I ignore them, can I call passing by without giving them money less of a choice? I only moved my choice up one level in the tree of all possible decisions I can make.
Or, moving it to the example of the drowning man: imagine you're holding out your arm to see how long you can do it, and see the life ring flying towards you. If you choose not to act, it'll hang on your arm, and the person will drown. Is it nevertheless inaction on your part?
>then today we have all made an infinite number of choices
The way I like to think about it is that once a choice has risen to the level of conscious awareness, it is an illusion that a person can just decline to choose.
There’s lots of mainstream media content I think is psychologically harmful and should be suppressed, such as content normalizing adultery. But I’m quite content to live in a society where the social norm favors people saying what they want and the burden is on the opponents of that to produce strong evidence of harm.
Which studies have you read that show the psychological harm of seeing adulterous content?
Convenient that only your debate opponents need provide reproducible studies that meet your standards.
No, this is the “presumed innocent until proven guilty” principle.
But all we see are two proponents in a civil trial. Shouldn't the standard be the well known "preponderance of evidence"?
Though personally preponderance of evidence seems to be a shitty standard too because I might be listening to two awful theories and be forced to conclude one is the winner. Theories should rise above a minimum threshold to even consider sniffing at before we consider one as superior over the other.
I agree that there needs to be a better standard than just “more likely than not”. Freedom of expression is a fundamental good, and there should be clear evidence of harm outweighing that good, before curtailing it.
Regarding my previous comment, my intent was to point out the GP comment’s position (because the parent’s comment seemed to be beside the point), not necessarily to endorse it.
Why does that principle not apply to “moderation is bad”?
Right. The challenge for free speech absolutists is to demonstrate that free speech takes priority over moderating hate speech, adult content, highly addictive media, etc. That demonstration needs to be evidence-based and framed in terms of short- and long-term social harms/impact. Simply saying "censorship hasn't gone well for some countries" or "having a free speech zone is extremely important to the future of civilization" is not enough.
>The challenge for free speech absolutists is to demonstrate that free speech takes priority over
Why? And how, in principle? Why is the burden of evidence not on others - and equally, how, in principle, could they furnish evidence?
The entire point is that freedom of speech is a core moral value; they have weighed the potential harms and come out against censorship, because they consider censorship to be inherently harmful. There is no objective way to compare different kinds of harm to each other; each individual's moral values are what they are.
When a free speech absolutist argues that freedom of speech is more important than whatever goal the censor has in mind, that argument is of fundamentally the same kind as the censor's argument, just with opposite polarity. When the censor says that "hate speech" needs to be prohibited, that, too, is based on a relative weighing of values and purported rights (i.e. freedom from hearing it).
You’re presuming that the debate has to be carried out according to utilitarian rules (do benefits of free speech outweigh harms caused by certain speech). But why should it be?
No, that's not how this works. By default, everything is permitted. The entire burden of evidence rests on those who want bans or restrictions.
Hmm. Who decides "how this works"?
Consider hate speech. There is a clear short-term benefit of moderation: reducing the harms to marginalised people from being exposed to threats to their person, identity, and way of life. In the face of this benefit, the absolute free speech advocate must provide a counter-argument for why free speech overrides that harm-reduction.
>In the face of this benefit, the absolute free speech advocate must provide a counter-argument for why free speech overrides that harm-reduction.
Why are you not the one who must provide an argument for why this "reduction of harm" overrides the benefit of freedom of speech?
Further, a very large fraction of what I have seen classified as "hate speech" simply cannot reasonably be argued to constitute any kind of threat.
Finally: what do you mean by "identity"? When I have seen this term used by opponents of "hate speech", it generally seems to refer to something like a person's self-image. I cannot understand how this can in principle be "threatened", nor how it could constitute harm to learn that someone else sees you differently from how you see yourself.
> why this "reduction of harm" overrides the benefit of freedom of speech
There are some strong arguments for harm reduction being a more fundamental human value than freedom of speech.
Firstly, the modern conception of freedom of speech is often seen as grounded in libertarian thought, in particular the works of Bentham and Mill. Yet Mill himself explicitly stated that these freedoms should be limited where they cause harm to others. Thus freedom of speech has historically been seen as a lower priority than harm reduction.
Secondly, there are in fact two competing interpretations of "freedom of speech": on one hand equality of access to a public forum, on the other the ability to say whatever you want. I say "competing" because in a public forum without moderation, the tendency is for loud and offensive voices to drown out the discourse, effectively leaving marginalised people without a voice. This is especially potent in modern social media. To me it is similar to antitrust regulations in the market: we put these in place for the benefit of competition, as this typically improves social impacts. However, in doing so we are limiting the freedom of corporations with large market share to collude, fix prices, etc.
Thirdly, history suggests that it's problematic for ideological values to trump the basic tenet of harm reduction. We see this for example in the Catholic church's refusal to support abortion rights or the use of condoms to prevent AIDS. If we don't ultimately assess the long-term social impact of a "core moral value" in terms of human harm and flourishing, then we risk entrapping ourselves in an ideological morass.
> what do you mean by "identity"? ... I cannot understand how this can in principle be "threatened"
As an example, homophobic comments are an attack on the sexual identity of homosexual people. It sends a message that they are unacceptable to society due to their inherent preferences, and that they should not express themselves as they naturally wish to. This causes psychological suffering.
>reducing the harms to marginalised people from being exposed to threats to their person, identity, and way of life
This only makes sense if you use a recent definition of "harm" created by censorship advocates that's divorced from the traditional meaning. In criminal law, harm traditionally meant (and in America still means) actually physically harming someone's body or threatening to do so. Censorship advocates are the ones claiming that mere words should also constitute harm, so the onus is on them to justify why they want to change the meaning of the word like that.
> In criminal law, harm traditionally (and still does in America) mean actually physically harming someone's body or making threats to do so.
Fraud can be criminal, without bodily harm or threats.
Verbal child abuse can be criminal, without bodily harm or threats.
There are lots of criminal harms not covered by your claimed definition in the American legal system.
Yes, here I am using "harm" in the common sense of physical or mental/psychological suffering.
Enjoy: https://www.nsrf.ie/wp-content/uploads/2023/09/Harmful-impac...
Showing pictures of suicide is "open discourse" now? That's what you're defending?
Are we arguing that graphic images of suicide and self injury are required for open discourse?
Freedom of speech is not about what's "required". That's why pornography is allowed to exist.
And Facebook don't allow pornography either. What point are you trying to make here?
The challenge here is three fold.
Companies like Facebook pretending they are not publishers, people posting content believing they should be able to publish anything without consequences, and professional weather makers ( PR/comms/lobbyists etc ) using this confusion to get around traditional controls on their dark arts.
In the end I think the only solution that works in the long term is to have everything tied back to an individual - and that person is responsible for what they do.
You know - like in the 'real' world.
That does mean giving up the charade of pseudo-anonymity - but if we don't want online discourse dominated by bots controlled by people with no-conscience - then it's probably the grown up thing to do.
The only thing that removing anonymity would do is make it easier to harass people with dissenting opinions. Professional bad actors can switch to posting under "real people" names, just as spammers now post from home IP proxies.
I share your concern - however harassing people is illegal and if you can't be anonymous to do it then that's also much less likely.
I don't buy the favourite argument of the US gun lobby - that only criminals (yes, by definition) would have guns/anonymous accounts if you banned them, therefore we shouldn't do anything.
You could apply that to anything that's illegal - by definition only criminals are outside the law - so why any laws at all?
I'd also be concerned about repressive governments - but I think you could distinguish between mass/public communication and private 1:1 communication. Just like in the real world there is a whole world of difference between saying something in private and publishing something in a national newspaper.
I suggest you consider looking how much it costs to go through the legal system, as it seems your assertion is based in a theoretical understanding of our system.
Filing a civil suit can be pretty expensive if you want a lawyer -- which, yes, you do effectively need one.
This is effectively a tax on the victims of harassment.
Social media are not publishers. They are much more like public squares, just online.
On top of that, even though publishers usually curate content, there is no obligation to do so. It's just something that has been done because publishing used to be expensive.
Now, when sharing data online is cheaper and cheaper, this limiting factor is fading away.
--
At the same time, we have just 16 hours of attention per day. So you have to decide whether you want to invest your time in more curated publishing (I read a lot of books, often old books which stood the test of time), or if you want to go to the public square where practically anyone can shout as he sees fit. I do that too, but I try to moderate both my time using social media and what I see there. And I am proud I haven't used TikTok, I stopped using Facebook, Instagram, I don't watch any Reels, Shorts, etc.
So publishers still aren't lost, but what they are selling is curation driven no longer by technological limitations, but by the limits on how much we can read and see in a day.
--
At the same time, publishers are biased. They publish what they see as high quality. They publish what they consider worthy. They publish things they would want to read. And they have publication checklists that prohibit publishing certain things even if they are true.
Public squares don't have such an attribute.
There are things to be published and heard, even when mainstream people would disagree. There are things that should be public, even when it's against a law in certain countries.
And online anonymity mixed with public square enables people to tell about atrocities that happen, or about corruption, government inefficiencies, about people breaking human rights and so on.
--
If you end anonymity and public squares, you end a channel for democratic feedback. Because publishers don't play this role any more. They are biased, people realize it and are fed up with it.
> Social media are not publishers. They are way more public squares, but online.
I'd believe that if they didn't promote or suppress content - in my view as soon as you get into that game you become part of the publishing process.
> On top of that, even when publishers usually curate content, there is no obligation to do so. It's just something that has been done, because publishing used to be expensive.
Eh? Publishers take care of what they publish because they are responsible for it in law - if they publish a lie about somebody ( even if it's a quote from somebody else - ie somebody elses 'content' ) - they are on the hook for that.
In a similar way, if I defame you and then a newspaper/facebook promotes that around the world, most of the damage actually comes from the promotion of the original defamation - the publishing/amplification.
> If you end anonymity and public squares, you end a channel for democratic feedback.
You are already assuming we live in a society where people are too afraid to say what they think in public. And I would also argue that if you stand on a soap box in a public square then you are not anonymous - you are public. You are confusing a public square with people whispering behind masks.
If it's worth doing it should happen naturally, verified accounts having more weight in the eyes of readers etc.
I'd like to think so, but I'm not so sure - doesn't it depend where the incentives come from?
Optimising simply for demand without any principles leads to things like street fentanyl, junk food, and mass shootings (there is a demand to own assault rifles).
Online right now there is a heady mix of large monetary incentives and the ability to rapidly optimise objective functions.
Let's not pretend Meta's recent change isn't simply about Zuckerberg maintaining his power.
NYTimes with more on this: https://www.nytimes.com/live/2025/01/07/business/meta-fact-c...
Ironic that the NYT's article here focuses on the political angle instead of just the "facts" so to speak...
> It is likely to please President-elect Trump and his allies.
because part of reporting events is reporting the context and repercussions of those events
that's what journalism is about; otherwise we don't need newspapers, all we need are company PR releases
I use Instagram and Threads specifically because of the relative lack of political content on them. If they also start to become cultural war grounds like everything else then RIP.
Instagram comments seem hell bent on bringing culture war nonsense in. It's probably only a matter of time before it's exactly the same as Facebook.
Zuck claims "Europe has an ever increasing number of laws,institutionalizing censorship and making difficult to build something innovative" Ouch. As a European, I feel very wary of such a sentence and the implications. Time for Europe to wake up ? (edit: fix typos)
We are awake. We should decouple ourselves from the tech giants on the other side of the pond. They don't have our best interests in mind.
I'm not sure that we are awake. As a dev for a long time, I realized only 6 months ago that all the tools I use daily come directly from the US. My job and my life would be very, very different without this technology. We are losing ground, or worse, we are falling behind more and more quickly.
It varies by individual, of course. But for example Emmanuel Macron and Mario Draghi have sounded the alarm quite clearly. As individual citizens we should try to buy European any time there is a European alternative.
>try to buy European any time there is a European alternative
Good luck with that considering:
>"Europe has an ever increasing number of laws,institutionalizing censorship and making difficult to build something innovative."
I don't take that for gospel. It is just Mark's poor take.
It's pretty much right. Dig into what it takes to run a social network in most European countries and you'll hit at minimum the following problems:
• Lack of a DMCA equivalent. DMCA lays out a lightweight process for platforms to process copyright disputes which if they follow it will avoid legal liability, which is needed on any platform that hosts user generated content. The EU Copyright acts require platforms themselves to enforce copyright and prevent users violating it. This is a gigantic technical implementation problem all by itself. Also, the US has the legal concept of fair use but that's not a concept in much of Europe, so people posting parodies etc thinking it's OK can still create liability problems.
• No equivalent of Section 230. Many new laws that specifically criminalize the hosting of illegal speech, and which don't give any credit for effort. As what's illegal is vague and political in nature you can't make automated systems or even human-driven systems that reliably handle it, so the legal risks are large even with a good faith effort to comply.
• GDPR, "right to be forgotten" and NetzDG style laws have large fixed costs associated with compliance which established companies can absorb but startups can't. For instance it's common for EU lawmakers to demand 24 hour turnaround times, which you can't reliably comply with if you're a one man startup.
• Algorithmic transparency laws, which mean you can't obtain any competitive advantage by better ranking (being good at this is how TikTok got so big), and which can threaten your ability to clear spam or use ML.
• Laws around targeted advertising mean you can't generate revenue comparable to what the US based firms can do, so you can't be competitive and your users will be annoyed by low quality barrel scraping ads for casinos after they click "No" on a consent screen without reading it.
There's probably more. For example, running a commercial search engine or training AI models on the internet is illegal in the UK, because UK copyright law only allows "data mining" for research purposes. There's no way to argue it's fair use like they do in the US. Just one of many such problems off the top of my head.
> Lack of a DMCA equivalent.
Good. It's heavily misused here.
> No equivalent of Section 230.
https://en.wikipedia.org/wiki/Digital_Services_Act
https://en.wikipedia.org/wiki/Electronic_Commerce_Directive_...
> Laws around targeted advertising mean you can't generate revenue comparable to what the US based firms can do...
Good! Agriculture is cheaper with slavery, but that isn't a great argument for permitting it.
We don't need social networks that are not compatible with the laws and rights you listed.
It should be hard to run a social network.
Why?
Looking around my apartment and my life, I see a Japanese game console, Japanese camera, US speakers, US laptop, Czech/German car, French photo software, Czech IDE, Swedish furniture, Swiss/US computer accessories, Chinese IoT devices, and a lot of the stuff was manufactured in China. If anything, my life would be very different without China (whether I like it or not).
I don't know how to say this inoffensively, but a lot of US people seem to mistake the slightly higher chance (from 1/inf to 2/inf) of becoming a billionaire for a higher quality of life, and the ability of a select few to hoard capital for a rich society.
What tools? The ones I use are made by people from all over the world, certainly not predominantly in the USA.
https://map.debian.net/
I know of exactly 0 European businesses that use free open source software for their office suites.
Z-E-R-O.
I don’t even think companies have their own mail servers anymore; it's mostly G Suite and Microsoft Office 365. People aren't even hosting business-critical applications in Europe unless compliance forces them to - let alone using European-made tools to do it.
I'm sure there is more to life than using "open source office suite"
I'm not sure I understood your point.
There's a lot more to life than a lot of things. I'm not really trying to discuss personal fulfilment, more so pointing out that there's no reality where we can get by with European technology right now, and if the US decided to sanction a European country, that country would suffer a pretty significant (trillion-euro, most likely) shock to productivity, as not only would they need to find new tools and retrain, but they would also lose all their mail and documents.
I'm trying to inform you that there are jobs other than filling in data in Excel.
If the USA sanctioned Europe (lol) we'd be completely fine, don't worry.
Yes, I somewhat agree on FOSS and I agree about the people. But I think that capital is massively US-controlled (though it is international too). Think of the seven largest companies in the S&P 500 (GAFAM, Nvidia, ...). If you look at the CAC 40 (France) or EURO STOXX 50: I don't directly use any products of those tech companies, but I'm sure those companies use at least one of the seven. Tech companies in Europe are not ridiculous, but they are not leading the change. They optimize, they improve, but the lead is US-centric. We have ASML, but for how long?
Yeah I'd agree that we should just forbid selling our software companies to not so friendly superpowers.
The problem is that these platforms have to be built, and people have to willingly use them... which is hard, given Meta have built brilliant addiction machines.
The whole threat here is you can't regulate Meta away, because they'll use the US Government to bully you into not doing so. I'd imagine if the EU tried to publicly prop up a platform not making any profit, they'd do the same.
But yes, the only way is for this to happen. But either way, this was the scariest statement of the announcement(s).
As a European who does generally feel that the continent is on its way to becoming a museum, describing the absolute bilge that the flagship products of Facebook, YouTube, X etc. are as 'innovative' feels in the same ballpark as describing the work of tobacco companies to sell and advertise their products in the 50s-80s as innovative.
They were innovative. I don't know about other EU countries, but it seems that in France there were only unsuccessful copycats of end-user services. I'm probably a bit harsh; it's because I'm under the impression that the gap between us (EU vs US) is widening. 10 years ago, there was open source, there was OVH, there was hope. With the cloud, we have surrendered a lot of power to massive US companies.
In Italy many similar things existed before. The thing is that in the USA they invest 200x more to "disrupt".
Europe has anti-nazi laws for .. historical reasons.
What gets interpreted under anti-nazi law is the wrinkle though.
As a European I would say that Europe's governments are radically more focused on the well-being of their populations than say, the USA.
But... is it just luck or is it this Nanny-state issue that makes it very hard to think of a single major Internet destination or tech company that was born in Europe?
To me it seems that it's all about cash: https://en.wikipedia.org/wiki/List_of_largest_Internet_compa...
The through-line is that the US and China account for the vast majority. For the EU I can only think of Spotify outside retail.
Being in Europe, I find no shortage of local versions of all kinds of providers, but the large social media platforms are, as a rule, outside the EU, mostly in the US.
The issue seems to be that saturation is real and the moat gets larger with time when companies just gobble up all their competition. How could Here Maps compete with free Google Maps plus Apple's deep pockets, etc.? TomTom used to be much larger and European; it seems to still survive but is nowhere near the size it could have been otherwise.
The faster we decouple from societies like America's, the better off we Europeans will be. We Europeans defend our European way of life against the degenerate capitalism of the US.
As an American who lived in Europe in the 90s when I was young, a lot that I really appreciated about the European way of life has deteriorated and is now almost unrecognizable to me in some ways.
When I visit every few years, it amazes me how quickly Europe is “Americanizing”. More fast food and less traditional food. Ripping up vineyards that have been there for centuries. Fewer protections for your farmers. More people walking around staring at their phones and fewer people talking to each other in cafes. Seems like almost everyone dresses like Americans and can speak English now. And it’s hard to tell the difference between the coffee shops in Spain and those in San Francisco. How long until you start building suburbs and driving everywhere?
Don’t get me wrong—I love the U.S., and I love living here. But its culture is not for Europe.
Comments like this are interesting because the changes you’re describing aren’t really “Americanizing”, they’re just a sign of modern times.
For example: People weren’t walking around staring at their cellphones in Europe in the 90s because they were distinctly European. It was because we didn’t have smartphones anywhere. The smartphone changes happened in lockstep across the globe.
Likewise, many of your other points are purely people’s personal preferences. I think your criticisms are largely nostalgia for the 90s and your time spent living abroad, not an indictment of “Americanizing” Europe.
Vineyards are ripped up because they have become unprofitable due to decreased alcohol consumption in general. I'm not sure that has much to do with Americanization.
I challenge you to find another economic system that has worked in history, because it sure isn’t communism if that’s what you’re referencing. This is also aside from the fact that Europe is also a subscriber to capitalism.
America is the most successful country on this earth and we bankroll most of the rest of the world but somehow we’re always the bad guys.
As an American I’d be very happy if my tax dollars stopped getting spent on Europe.
> America is the most successful country on this earth and we bankroll most of the rest of the world
I'm going to need a source (and some definitions) for that.
Communism is the Godwin's law of economic discussions. There are so many more possibilities than unregulated capitalism/individualism.
> America is the most successful country on this earth
According to what metrics? life expectancy? crime rate? wealth per inhabitant? education? work life balance? health care? happiness? incarceration rate? human rights? corruption? freedom of press?
American tax dollars aren't spent in Europe or elsewhere in the world for some altruistic reason. The US want to maintain their hegemony and prevent other powers from emerging. They certainly don't care about Europeans or Taiwanese or whoever.
> I challenge you to find another economic system that has worked in history, because it sure isn’t communism if that’s what you’re referencing.
Not that I'm a big fan of communism or China, but communist China has been doing pretty well, and is getting more innovative than the US
The part of China that is innovative is not communist. They have the most free-market labor market, the most free-market regulations in everything except media (which is heavily controlled by the state).
China is the most brutally capitalist society in the world, with a dictator sitting on top managing it at the margins and ensuring media will never be free and threaten the communist party.
Bunch of lies lmao
Somehow US Americans managed, in about a year and some, to almost singlehandedly fund the complete destruction of an already impoverished and entrapped society of 2.3 million people, most of them younger than 18. Never mind the pressure or direct military attacks on other nations to not intervene.
And you wonder why you're viewed as baddies.
I'd be happy if your tax dollars stopped going outside of US, too.
I might be missing something - are you saying the only choices of economic systems is communism or American style capitalism?
There is also the good old: "We can't discuss changes because there is nothing better already existing. There can't be anything better because we cannot change"
What he means is "I can't 100% control what news people get to read, and that's bad"
I am concerned about the community notes model they're moving towards.
Community notes has worked well on Twitter/X, but looking at the design it seems super easy to game.
Many notes get marked 'helpful' (ie. shown) with just 6 or so ratings.
That means, if you are a bad actor, you can get a note shown (or hidden!) with just 6 sockpuppet accounts. You just need to get those accounts on opposite sides of the political spectrum (ie. 3 act like a democrat, 3 act like a republican), and then when the note that you care about comes up, you have all 6 agree to note/unnote it.
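To make the concern concrete, here's a minimal sketch (in Python, with made-up thresholds taken from the numbers above, not from X's or Meta's actual scorer) of why a naive "both sides rated it helpful" rule is cheap to game. The real Community Notes algorithm is open source and scores notes via matrix factorization over each rater's full history, so coordinated accounts need a long, genuinely diverse rating record before their votes count for much; that raises the cost of the attack but doesn't make it impossible.

    # Toy sketch of a naive cross-partisan agreement rule; the thresholds
    # are assumptions from the comment above, not the real algorithm.
    MIN_RATINGS = 6    # hypothetical "enough ratings" threshold
    MIN_PER_SIDE = 3   # hypothetical "both sides agree" requirement

    def note_is_shown(ratings):
        """ratings: list of (inferred_side, rated_helpful) pairs."""
        helpful = [side for side, ok in ratings if ok]
        left = helpful.count("left")
        right = helpful.count("right")
        return len(helpful) >= MIN_RATINGS and left >= MIN_PER_SIDE and right >= MIN_PER_SIDE

    # Six sockpuppets groomed to look like they sit on opposite sides all
    # rate the attacker's note "helpful": the naive rule is satisfied.
    sockpuppets = [("left", True)] * 3 + [("right", True)] * 3
    print(note_is_shown(sockpuppets))  # True -> the note gets shown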
I read this as Zuck kneeling to the new king and first lady (Musk). I highly doubt these changes were not influenced (forced?) by them.
Of course, you can also read it as Zuck not having to kneel to the old king anymore.
I don't remember him changing these rules for Trump's first term. Do you?
I speculated what Zuckerberg wanted and what he'd do when he visited Mar-a-lago[0]:
* Push to ban Tiktok
* Drop antitrust lawsuits against Meta
* Meta will relax "conservative" posts on its platforms
* Zuckerberg will donate to Trump's cause
So far, Zuckerberg has already donated to Trump's cause. Now he has relaxed rules on "conservative" posts on Meta's platforms, directly or indirectly.
When Trump comes into power, he'll likely ask the FTC to drop its antitrust lawsuit against Meta under the disguise of being pro-business.
My last speculation is push to ban Tiktok. I'm sure it was discussed. Trump has donors who wanted him to reverse the Tiktok ban. Zuckerberg clearly wants Tiktok banned. Trump will have to decide who to appease when he comes into office.
[0]https://news.ycombinator.com/item?id=42262573#42262975
> ban Tiktok
I would be really interested in how someone could spin advocating for less moderation and at the same time asking to ban the competitors' social media platforms.
The public seems to eat everything you feed them, so it doesn't really matter.
It's not about the users of that competing platform, but about the country where the parent company is registered (https://www.cnn.com/2023/03/24/tech/tiktok-douyin-bytedance-...).
You’re assuming some kind of ideological purity when it comes to “freedom of information” when the real answer is profit motive.
Also, Zuck appointed Dana White to the board: https://about.fb.com/news/2025/01/dana-white-john-elkann-cha...
So they have also given a board seat to a friend of Trump.
But yeah, I think you're right that there is clearly some combination of dealmaking and bending the knee going on.
Polarization drives ad revenue. $10 says Zuck is going to start throwing grenades at the UK and EU soon too.
We're entering a dangerous period, and it's not for anything as noble as the virtues of absolute free speech
I think the way to deal with this is to just opt-out: don't use Facebook, Threads, X, etc. I gave up on Facebook years ago.
Dupe with more explicit comments:
https://news.ycombinator.com/item?id=42622082
So let's take one of the most expensive, labor-intensive parts of our business and replace it with crowdsourced notes.
As of 2022, Meta employed 15,000 content moderators. Expected cost of 70K to 150K per person (salary + benefits, plus consulting premiums), so let's assume 110K.
This implies $1.65B in workforce costs for content moderation.
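A quick back-of-envelope check of those figures (all inputs are the assumptions above, not Meta's actual financials):

    moderators = 15_000
    cost_per_head = 110_000  # assumed midpoint of the 70K-150K range
    total = moderators * cost_per_head
    print(f"${total / 1e9:.2f}B per year")  # -> $1.65B per year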
Meta is more likely to make its earnings numbers....
Though I wonder if they will redeploy these people to be labelers for LLMs?
Again, conflating moderation within Meta, with fact-checking by third party orgs, which is what this is primarily about.
In reading the comments, it's clear to me that "community-based fact-checking" will not work since not even HN users can get basic facts straight (not due to any lack of intelligence, probably just didn't read the article or understand the context), how do we expect the FB userbase to do so?
It’s not conflating. They also announced that a lot of content that was moderated won’t be any more. For example labeling someone trans as having mental health issues was forbidden and it won’t be anymore. So they are reducing moderation too.
The discussion here is painful to read. The 'neutral' discussions of product features and of how Austin, TX is more liberal than the rest of Texas are grotesque.
Zuckerberg says Facebook is going to be more "like X" and "work with Trump". It has changed its content policy to allow discussions that should horrify anyone.
"In a notable shift, the company now says it allows “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”
"In other words, Meta now appears to permit users to accuse transgender or gay people of being mentally ill because of their gender expression and sexual orientation. The company did not respond to requests for clarification on the policy."
But Zuck himself says that they are also dialing their algorithms back in favor of allowing more bad content. It's not right.
https://www.wired.com/story/meta-immigration-gender-policies...
I feel the same way and I think the writing is on the wall for the near future of the world. It is disheartening to see people on a forum like HN who I assumed had values similar to mine fall right in line with conservative propaganda and try to act like this isn't an overtly political action. This decision is political, and it goes a lot deeper than left vs right - it's about attacking support for a baseline scientific 'truth' and fully accepting a post-truth world where reality is what the powerful deem it to be. This has always been the case to some extent, but it has gotten so lopsided in the last decade that it's hard to see how we come back from this.
I similarly share your pessimism. Ironically, I think a lot of the propaganda that is effective on HN's demographic works because it frames itself in a way that makes it appear logical and intellectually robust. Us devs love thinking we're the smartest person in the room and strong, logical thinkers who can't be fooled, but that's exactly why those kinds of propaganda and talking points can work so well. (I'm certainly guilty of it myself at times, fwiw.)
Yeah, it's pathetic.
Fwiw, not everyone on 'hacker' news is like this, and many of the thoughtful ones are smarter than I am and skipped this post entirely. But the rot in the Silicon Valley ideology that's everywhere here is so disheartening.
The timing also pretty clearly signals that this should be interpreted by bigoted individuals as a green light for harassing speech.
[flagged]
Mark has looked at what has happened to Twitter since Musk took over, a notable decline in activity and value… and decided he wants a piece of that? Musk is begging people on Twitter to post more positive content, as it devolves into 4chan-lite.
If Musk’s ideological experiment with Twitter had proven the idea that you can have a pleasant to use website without any moderation then Mark’s philosophical 180 would at least make sense, but this doesn’t, at all. What’s to gain? Musk has done everyone a favor by demonstrating that moderation driven by a fear of government intervention was actually a good thing.
Could be an exit strategy… maybe he’s tired of running a social network and wants to help run governments and fly to space like the other guys.
New government. So you've got lack of moderation driven by a fear of government intervention.
It starts to make more sense when you think about who is arm in arm with the president elect. I don't know that Musk believes his philosophy is wrong and now he has the power to pressure others.
Community Notes has nothing to do with the trash fire of posters on Twitter now. CN is probably the only good thing about Twitter right now.
I use Meta products; it's anecdotal, but they're dead. At least they seem very stagnant. This is appeasing the new establishment and hoping for more engagement?
> Musk is begging people on Twitter to post more positive content
Is this the same Elon Musk that recently called a British member of parliament a "rape genocide apologist"?
Elon Musk has been radicalised and now he is using his platform to radicalise others.
> Mark has looked at what has happened to Twitter since Musk took over, a notable decline in activity and value… and decided he wants a piece of that?
Hell yes he does, Twitter helped Musk get a seat at the table with Trump and the ability to influence US policy decisions at an unprecedented level. Zuck craves power and sees sucking up to the incoming administration as an easy path to get more of it.
I’m not sure where you’re getting data from but Twitter seems fine: https://www.demandsage.com/twitter-statistics/
Additionally, if you haven’t read the article you’re commenting on, community notes is an excellent replacement for so-called fact-checking services, which are notoriously biased.
I have a feeling it is more part of an agreement with the new administration. It was an agreement with the old administration that led to the current platform where there is way too much overreach on things the govt didn't want discussed: COVID, Palestine, immigration, etc.
The solution is to be a culture of primary sources and to make it easier to link to primary sources.
It’s funny to see these tech moguls bend the knee for the new king. All their values, their so called care for the community, everything they say, everything, … is just all a big play in an effort to make as much money as they can. It sickens me to watch this stuff unfold.
It’s not just a new king; it’s the fact that the other party won the popular vote resoundingly after all these years, which meant that the 2016 elections weren’t just a fluke.
Repubs have all 3 branches for at least a few years now, and there will be enormous changes in tax policy in legislation that will be passed this year, due to many popular provisions of the 2017 TCJA expiring at the end of 2025. And Dems will basically be left out of the conversation as their votes are not needed.
- the house majority is a thin 1-2 seats and full of factions that can barely cooperate
- the filibuster still exists
- almost certain one or both houses flip in 2 years
Filibuster is for legislation that needs 60 Senate votes, tax changes only need 50.
There are also quite a few Democrats in swing districts who I bet will vote for tax cuts. They are basically only in office instead of their Republican opponents because their opponent opposed women’s rights.
That's not quite right. Nothing (or almost nothing?) needs 60 Senate votes to pass. The difference is that they've agreed not to filibuster tax laws, and you need 60 votes to break a filibuster.
So you're right on the practical effect, but the details are slightly off.
They won on the backs of decades of efforts to prove that the culture wars were unhealthy for America. That worrying about climate change was a hoax. That evolution itself is controversial. That universities and authority figures are not to be trusted. That somehow, Fox News, the biggest media corp in America, is not the main stream media.
They got here by destroying our ability to fight disinformation. They beat climate science in the 90s by giving air time to cranks, and then senators used those specious arguments to stall climate bills. When scientists came onto Fox to try to reach the audience, they were thrown to the lions for the entertainment of the audience. Derided and mocked with gotchas and rhetorical arguments designed to win the perception game.
This is a continuation of that game. Because it works. The idea that free speech is at risk because of moderation is amazing, because it is being revived after being tested by everyone online. We started the internet without moderation, we believed that the best ideas win.
We have moderation everywhere now, because we know that this fact is empirically untrue. The most viral ideas propagate. The ones most fit to survive their medium - humans.
I agree that they won, because they played the game to win. But we should not miss how they worked hard, to set up the conditions for this type of a win.
Of the total national popular vote, Trump won by about 1%. That's not "resoundingly". That's a very thin margin. (I mean, it's better than he got in 2016 and 2020. But it's not resounding.)
It’s resounding because the expectations were that the nation’s voters were trending away from Republican politicians (or at least the popular vote), and the country was just waiting for old voters to die.
But that was shown to be completely wrong, even after women lost rights in quite a few states. The message was clear that Republicans are here to stay, and businesses better learn how to do business with them, or else face the consequences.
They barely scraped out a popular vote win. It's not a "resounding" victory, regardless of what you subjectively experience when you talk about it.
Popular vote doesn't win President, electoral college does, and that was 312 to 226, not barely, and Dems didn't win a single one of the 7 states that were supposedly in play (GA/NC/PA/MI/WI/NV/AZ).
In the legislature, it is almost impossible for Dems to regain control before 2028, as the majority of states electing senators in 2026 are very unlikely to elect a Dem. And I am not optimistic on Dems' chances in the 2026 House:
https://en.wikipedia.org/wiki/Party_divisions_of_United_Stat...
As far as I can tell, Repubs have the executive for at least 4 years, the judiciary for who knows how long, the Senate for at least 4 years, and the House for at least 2, if not 4 years.
Knowing this, it makes sense why businesses would want to cozy up to Republicans.
Will this totally end content moderation? That could be a small silver lining, as content moderation for FB appears to be extremely hazardous to one's mental health:
https://www.cnn.com/2024/12/22/business/facebook-content-mod...
Obviously, exposing absolutely everybody on the platform to the same content that was proven to harm the content moderators will be worse.
It is not obvious that many people (when was the last time a single post was seen by the entirety of the platform?) seeing occasional soul-destroying stuff is worse than seeing soul-destroying stuff as full-time employment, 8 hours a day, 5 days a week for the length of one's work life.
Also: perhaps the occasional soul-destroying post would help people break their social media addictions.
Counterpoint: Molly Russell.
https://www.judiciary.uk/wp-content/uploads/2022/10/Molly-Ru...
Certainly poor Molly Russell does not appear to have seen this content only occasionally, which is just my point. There is no mention of how she accessed this content either: was it a message board, or was it served algorithmically? That is important to the contention here.
Served algorithmically: https://mollyrosefoundation.org/november-2023-new-research-e...
I am not sure that the death of one person outweighs the lifelong PTSD of 100% of FB content moderators. Again, my original claim is that it is not obvious.
I am not trying to trivialize this person's death. If it were up to me, I'd completely get rid of social media in an instant.
I'd love if they just sorted by timestamp, but no moderation + algorithm deciding what gets shown is not good.
That's pretty much the only legislation I'd support, i.e., a compulsory setting for chronological ordering of events, which effectively disables "the algorithm." Seems like it would be agreeable to media companies and pure libertarians alike.
TBH I had assumed FB was just penalizing all political content or that people just tried like hell to avoid it because all I see on FB anymore is either stuff related to the few FB groups that keep me on the platform or endless reposts of basically pirated Reddit content for engagement.
Community notes and enforcement might help Meta in the long run as a step toward more organically managed content that can scale better than simple moderation.
I have my serious gripes with how Instagram currently manages reports. I've recently reported a clear racist post promoted to me on Instagram that did not get removed or acted on. They seem to go the route of "block it so you cannot see the user anymore but let everyone else see it".
So as far as I can tell the only thing that Instagram actually moderate at the moment are gore and nudity, regardless of context. So barely dressed sexualised thirst traps are ok, black and white blurred nipples are not, everything else is a-ok.
Wow, so many warnings for the future. They didn't intend it, but FB now has some responsibility for what's generated on it, as one of the most massive sources of info on the planet...
Regardless of what you think about this step I find it disconcerting that we can now disagree on facts.
For example:
- whether crime is up or down
- whether the earth is warming or not
- how many people live in poverty
- what the rate of inflation is
- how much social security or healthcare costs
- etc
These are all verifiable, measurable facts, and yet, we somehow manage to disagree.
We always used to disagree and that is healthy; we avoid missing something. But in the past we could agree on some basic facts and then have a discussion. Now we just end a discussion with an easy "Your facts are wrong." And that leads to a total inability to have any discussion at all.
Fact checking is not censorship. Imagine math if we'd question the basic axioms.
What you're talking about is statistics. Statistics are not irrefutable facts. They're data points from a report, and they are often incredibly easy to manipulate depending on how the big picture is assessed. Usually it's impossible to gather stats over large, complex, chaotic populations. Instead, samples are taken, applied to the whole, and interpolated in between. And in that interpolation an incredible amount of manipulation and even pure laziness is possible. It's possible to misrepresent the error bars of your conclusion. It's possible to leave out important details. It's possible to be selective about your time frame. There are myriad ways to mess up or screw up statistics. The more chaotic the system, the more difficult it is.
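As a toy illustration of the time-frame point (the numbers below are invented, not real crime data), the same final year can be spun as "crime is up" or "crime is down" purely by picking the baseline:

    # Invented incident counts, purely to show how baseline choice flips the headline.
    incidents = {2019: 480, 2020: 390, 2021: 430, 2022: 455, 2023: 445}

    def pct_change(old, new):
        return (new - old) / old * 100

    print(f"vs 2020: {pct_change(incidents[2020], incidents[2023]):+.0f}%")  # +14% -> "crime is up"
    print(f"vs 2019: {pct_change(incidents[2019], incidents[2023]):+.0f}%")  # -7%  -> "crime is down"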
It’s much worse than that.
Every single example mentioned by the GP isn’t just a statistical measure; they are measures of wildly political (as in, defined by humans in a deeply imprecise manner) issues:
> - whether crime is up or down
Which kinds of crimes? In which political boundaries? In which reporting period? Did definitions change? Is reporting down because of ineffective policing? Is reporting up because of effective policing? The statistical games played with crime stats are criminal.
> - whether the earth is warming or not
There is a reason the phrase “global warming” went out of fashion in preference of “climate change”. Warming up how much? Over what time period? With what error bounds? Assuming which runaway processes? In which areas? Due to which causes? What are the error bounds around the sign of the change?
> - how many people live in poverty
The government literally draws a line in the sand and declares anyone below a certain income level is living in poverty. Who set the level? Why did they set it there? What is the standard of living at that income level? In which areas? How long do people live in poverty? What, if anything, prevents them from moving upward? What is their effective standard of living after government programs and charitable giving are taken into account?
> - what the rate of inflation is
This is literally defined by bureaucrats at central banks. Inflation according to which index? How were the index components chosen? How are the index components weighted? Over what time period? In which areas? Even the concept of “inflation” is highly suspect and basically incoherent.
> - how much social security or healthcare costs
Over what time period? How did the demographics change? How about inflation? Where did the cash flows go and how did they net out? Which purchasing regimes were in place? How did the programs change? What was the quality of the services?
If, in an argument, you want to go back to the data and do different or better statistics on it then by all means. I would _love_ to have a disagreement with someone that went in that direction and we could discuss the intricacies of how to interpret the information that we have. I have my own gripes about the statistics done by various groups, with changing the inflation calculation being a recent example of the bad side of this: https://www.nytimes.com/2022/05/24/technology/inflation-meas...
However, I think the key point here still stands. Most disagreements (at least in my experience) are not reaching this level, and are instead diving towards anti-intellectualism and dismissing statistics and data interpretation wholesale.
Fully agree. Statistics are not global, irrefutable facts about society; it's literally just one person or a group of people computing something more or less arbitrary and claiming it represents society as a whole, or a journalist saying he/she read that figure in a reputable source. Even from a mathematical point of view, statistics are incredibly hard to get right, and even before that, reality cannot really be measured and put into numbers.
The problem I have with fact checkers, rather than "context expanders" is that their end product is a simple answer for things that may not be trivial. There may not be a clear binary answer.
> whether crime is up or down
Was the reporting consistent between the two timeframes (apathy, directions from police station, etc)? Was the reporting system fully operational both timeframes being compared? Is the reported vs actual crime ratio the same between the two timeframes?
> how many people live in poverty
> what the rate of inflation is
Is the metric calculated the same way between the two timeframes? If not, what's the justification for the new metrics? Is the answer the same if the old and new metric is used with the same data?
It’s not realistic, or IMO necessary, to put more into it than the original claim does, besides bringing actual sources to the table.
If the original claim is that crime is really up but it doesn’t show in the official figures because of subtle factors X Y and Z, then sure, a fact check saying this is wrong needs to dive in and explain why those factors don’t account for it.
But if it’s just “crime is up 87% since Biden took office” then “actually, crime is down N% in that period, see link from relevant stats agency here” is fine.
The latter is about a million times more common.
> Imagine math if we'd question the basic axioms.
The world we experience and the language we use to describe it doesn't have axioms like math, so it's no surprise people routinely disagree about these topics. Most of the subjects in your list contain a great deal of nuance. For example:
> whether crime is up or down
What counts as "crime"? Is it based on a legal definition or a moral definition? What jurisdictions does this include? What time period are we using as a baseline? Do we account for the fact that different jurisdictions measure crime differently, and do we use the raw reported numbers or adjust for underreporting in the statistics? Do we weight our consideration by the severity of the crime or is it just the number of recorded offenses? The laws themselves may have changed over the period of consideration, so how do we account for that?
These questions don't have objective answers, so it's unsurprising people disagree.
Every single one of your points is not boolean and depends on the definition and the data you include and exclude. For each you could easily find studies and statistics in either direction. The fact that this is apparently not obvious to you proves the point that all fact-checking is inherently biased and depends on the subjective opinions of the checking person.
People who study statistics are pretty good at saying "look, that data set was probably gamed, I would have done it <different way>", or "that conclusion does not follow from the data presented".
It's no different to someone claiming on twitter that they are a great programmer who can fix twitter's search in a weekend who then has to tweet for suggestions on how to write a search feature in javascript. People familiar with the subject matter can see right through your bravado.
I'm so tired of people with no expertise on anything insisting that people who have clear expertise "didn't think of trivial point A that just came to mind" as if some of these fields aren't centuries old and have been around the block a few times.
It's similar to the teenager insisting "you just don't get it, mom", but like, mom totally gets it; she was a teenager once too. And while there are occasions when mom might not get it - she didn't grow up in a world with social media, so she might not be able to help you through that - she ABSOLUTELY gets that it feels like your world is ending when your first love leaves you, and in fact it is YOU who does not "get it" that you will move on eventually.
Not sure what you are trying to say - my point was that, e.g., the question "is crime up or down" does not have a yes/no answer. Depending on the input, you can easily create a statistic pointing in any direction. I think abtinf elaborated better on that here: https://news.ycombinator.com/item?id=42628198 My personal high point in using statistical methods was probably implementing an analysis of variance for thousands of lab values (https://en.wikipedia.org/wiki/Analysis_of_variance).
Most experts will not give simple answers to simple questions because they see the question itself as ill-posed. Theses could be written about "Is crime up or down?" GP's claim is that this has a simple answer that can be checked. The bigger issue isn't whether a dataset is statistically valid but which data would even be relevant to a particular underspecified and vague question.
All of these sorts of facts are manipulable and/or not easily knowable.
> - whether crime is up or down
Manipulable by the agencies that keep track of and publish those stats. Governments often manipulate these.
> - whether the earth is warming or not
There is a huge amount of controversy in climate science. Check the "Climate Gate" files from 2009 for example. Check out the controversies over weather station siting for another.
> - how many people live in poverty
Poverty levels vary with time and by country, and are typically set by governments. People often disagree as to what defines poverty. Poverty stats are manipulable.
> - what the rate of inflation is
You should look into what Argentina did around 2012.
> - how much social security or healthcare costs
The figures from the budget are not controversial. How much healthcare spending is wasteful is a completely different matter. Quality of healthcare is also very much subject to debate.
> These are all verifiable, measurable facts, and yet, we somehow manage to disagree.
They are not easily verifiable because they are mostly susceptible to manipulation. Therefore it's not surprising that people disagree.
> [...] And that leads to an total inability of having any discussion at all.
No, it means that discussion might have to start with the fact that there is disagreement as to facts and then you can have an open discussion about why, what is being done to prevent consensus forming as to those "facts", what needs to change to make that possible, etc.
> Imagine math if we'd question the basic axioms.
No need to imagine, it's enough to look into non-Euclidean geometry (obtained by excluding Euclid's fifth axiom), non-standard models of geometry, or reverse mathematics (studying which axioms are necessary for a specific theorem to be provable).
Because social media has virtually eliminated people's general ability to have constructive, level-headed conversations that take nuance into account.
I think the idea that a) people lack nuance now or b) that it’s simply social media’s fault is the exact same kind of lack of nuance that you seem to be objecting to.
Nothing I’ve seen suggests that mass media or mass propaganda contains less nuance now versus any other time. Propaganda of all forms (regardless of whether delivered by newspaper, radio, tv, or facebook) has always been a blunt instrument.
The issue is that before social media nobody took the guy bullshitting at the end of the bar seriously.
But with social media, his bullshit post looks just as authoritative as an expert who’s been studying the topic for decades.
I'm talking less about propaganda and more about the average person's ability to discuss the merits of climate change with one another online.
The average person doesn’t discuss, they repost. The things they repost are propaganda (be it true or untrue).
>The average person doesn’t discuss...
Exactly. We aren't capable of discussing shit online, which is unfortunately where the bulk of our culture's negative discourse is occurring. It's not the posts, even - it's the comment sections.
I don't care if someone shares propaganda, I care about the discussion that happens after they share it, in the comments. When was the last time on FB/IG that you saw someone share some propaganda (true or untrue, doesn't matter), and looked in the comments to find someone correct them, and then the two had a reasoned conversation wherein they traded perspectives and ultimately came to a healthy understanding of one another even if they disagreed?
Do you see that sort of conversation, or do you just see a shitload of people yelling at each other?
Nuance dies with short posts. "Whether crime is up or down" may not be something you can realistically post about. On what timescale, which crime, has reporting of this crime changed, has its classification changed, is it about confirmed crimes or reports, etc. etc.
Crime specifically is such a complex system now that we can (both accidentally and maliciously) post factual information that presents only a small fragment of the issue - sometimes helpful, sometimes misleading for the context we're talking about.
most of those things are actually not verifiable measurable facts within any useful definition
???
Aside from maybe "whether crime is up or down" (because of under-reporting), everything else can be objectively measured. The measurements might not fit with everyone's specific circumstance (eg. earth is warming as a whole but it's unseasonably cold where you live), but that's not a reason to throw up our hands and say "those things are actually not verifiable measurable facts within any useful definition".
The only items in the list that look reasonably easily answerable are how much social security costs and whether the earth is warming. Even the last one wouldn't be considered a good question to an actual scientist because of how vaguely it is phrased.
The earth has been warming. It's not a verifiable fact that it's still doing that today (you used present tense) or will continue into the future until the future comes and we've measured it. By the way, warming over what time period? It's colder now than it was at some times in its past, so you could say we're in the middle of a longer-term global cooling.
And of course you have to account for the Earth's interior, which is cooling. Are you sure that "fact" doesn't silently ignore almost all of the Earth?
Rarely do any two people experience the same inflation rate, since it heavily depends on each buyer's basket of goods. Sure, you could, in theory, measure each person's inflation rate, but what for?
I strongly disagree that the rate of inflation is a fact or that it is beyond debate. The mechanism for calculating it officially has changed drastically over the decades, and always in ways that reduce the official rate. It's a politicized metric.
> - whether the earth is warming or not
The Earth is warming, but how much of it is caused by humans is under debate. The Earth is still coming out of an ice age, so it would be warming even without humans.
Also, the more important question is: how much will it accelerate based on our emissions? If there are no positive feedback loops, it would only warm up by 1C maximum, no matter how much more CO2 we emit. But because of the positive feedback loops (warmer earth -> more water evaporating -> more warming), this warming can trigger a further 4-5C of warming. The feedback loops are largely theoretical (you can't measure them empirically), and the quality of the estimates depends on our understanding and modelling of the climate.
https://xkcd.com/1732/
We've had in the last 100 years a temperature swing that usually takes a thousand years or more. We've already seen greater than +1C of temperature increase compared to before widespread use of fossil fuels.
Is that caused by humans? Sure that's up for debate, in the same way whether tobacco causes cancer is. People are willing to be wrong when being wrong gives them money/status/utility.
> We've had in the last 100 years a temperature swing that usually takes a thousand years or more.
A cute xkcd is not a time machine. You rely here on indirect proxies like tree rings or ocean sediments. You can't verify whether any other factors were at play over the millennia, and I seriously doubt that these methods can even theoretically be accurate to +/- 0.5 degrees C. You may believe that, but you can't verify it unless you travel into the past. Besides, 1000 years are NOTHING on the scale we are looking at. If you live anywhere north of the 40th parallel, the place you now sit was probably covered by an ice sheet, without a living thing in sight, only 10000 years ago - and again 100000 years ago. There is no way you can divide that timescale into thousands and measure every one of them with high enough precision to compare with the present. The bold claims of climate science have lost any scientific humility.
> You may believe that, but you can't verify unless you travel into the past.
Do you believe in the method of radiocarbon dating? What about dinosaurs?
What about them, and how was your debate class? Can you measure the time of day an organism died with radiocarbon dating? This rhetorical question is meant as a hint. Do you know how they calibrated radiocarbon dating at first? They used wine bottles from French cellars, because they have a year printed on them. That's scientific verification, because belief doesn't do it.
> If there are no positive feedback loops, it would only warm up 1C maximum, no matter how much more CO2 we will emit.
GHG emissions are still increasing. If we assume that temperature increase is only linear in the amount of atmospheric GHGs, that means temperature will continue to increase, not remain flat.
Little-known fact (I am still amazed how many people don't know the mechanics of global warming...): the CO2 effect in the atmosphere is logarithmic in concentration. That is because CO2 can only block one band of light, so at some point you are approaching an asymptotic effect. That's why we keep talking about a "doubling of CO2" - because it's a logarithmic function (a small numeric illustration follows below).
But yes, the temperature will increase slightly because of CO2 emissions. That triggers more warming due to feedback effects though, and those are hard to quantify, and more scary.
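To put a number on the logarithmic point above: a commonly cited simplified relation (an assumption on my part here, not anything from the announcement or the parent comment) puts CO2 radiative forcing at roughly 5.35 * ln(C/C0) W/m^2, so every doubling adds about the same forcing.

```python
# Illustration of why people talk about "doublings" of CO2.
# Uses the commonly cited simplified radiative-forcing relation
# dF ~= 5.35 * ln(C/C0) W/m^2 (an assumption for illustration only).
import math

C0 = 280.0  # approximate pre-industrial CO2 concentration in ppm

def forcing(c_ppm: float) -> float:
    """Simplified CO2 radiative forcing in W/m^2 relative to C0."""
    return 5.35 * math.log(c_ppm / C0)

for c in (280, 560, 1120, 2240):
    print(f"{c:>5} ppm -> {forcing(c):5.2f} W/m^2")
# Each doubling adds roughly the same ~3.7 W/m^2 of forcing,
# i.e. the effect per additional ppm keeps shrinking.
```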
To be fair, the cause of the warming wasn't given as an example of indisputable fact.
Exactly. Supplying some context to support this:
The level of crime is pretty hard to measure. You can measure reported crime, but crimes are reported at different rates in response to complicated incentives.
How much the earth is warming depends on what you measure. Do you measure atmospheric temperature? Ocean temperature? And of course how much the world will warm is dependent on complicated models with tons of inputs.
How many people live in poverty depends on what your threshold for poverty is. There's a "Federal Poverty Level", but cost of living varies by significant amounts across the country.
The rate of inflation is highly dependent on the basket of goods measured and how improvements in goods are measured and so on. There are easily a dozen different measures of "inflation" and they're all reasonable and carefully considered, but none of them is the ground truth.
It is of course relatively easy to measure Social Security inflows and outflows, but usually when we talk about the "cost" of programs like this, we mean something like the net cost, which incorporates lots of societal effects. Also the interpretation of the accounting concept of the Social Security Trust Fund, despite being a fairly simple concept, has significant camps with diametrically opposed views.
With the exception of fiscal cost and global warming, those are all quite subtle, actually. $Employer spends rather a lot of time replicating official inflation numbers; it's not trivial.
>$Employer spends rather a lot of time replicating official inflation numbers
Well? Does it match?
Yes (any more detail would be telling), ahead of time even, but my point is that we're mimicking the government's numbers, not actually estimating a "true" value.
Could it be that nowadays we have so much more access to information that, where we maybe agreed on facts in the past, those facts were really coarse and we did not have much detail on them, so it was easier to agree?
No, we don't have verifiable, measurable facts for those areas. Standards and definitions vary by location and change over time. Don't forget the corruption and manipulation of numbers to achieve desired outcomes.
Sadly the consensus was abused to push narratives once too often instead of actual leadership/guiding people to concepts/understanding/consensus building. Our leaders forgot/got too lazy/became too corrupt/dogmatic/complacent to care how to lead, abused the levers, and now it's going to probably take a generation for society to organize new trusted mechanisms.
Crime statistics/reporting are extremely gamed. It took a friend having a heinous crime committed against her by a large group, on a side street just off downtown Santa Cruz, with no report ever filed, for me to realize just how bad it is. Most of us have probably had crimes committed against us that the police never documented, which then destroys our faith in crime statistics.
I'm a super hippie. But there was a lot of manipulation and playing fast and loose by the early global warming folks trying to get their message across, and that broke people's trust. You are never going to win that trust back with models/projections, no matter how good or accurate their assumptions, once the trust is lost.
Things like using COVID funds to KNOWINGLY TEMPORARILY reduce child poverty, with the goal of having INCREASED CHILD POVERTY statistics in the near future so the issue could be used as a policy weapon again, just do damage and make poverty statistics more meaningless. Just politicians using, abusing, and manipulating instead of leading, breaking down more levers.
Same with how gamed the 'rate of inflation' was by this administration. You are never going to convince people WHO CAN'T AFFORD TO LIVE and are in CONSTANT distress that 'things are getting worse more slowly' is good. Sorry, you are going to have to lead and convince people on that one, not lazily wave numbers around. Again, it's a lack of leadership.
See how the same things can be interpreted differently by different people, and how much of that comes from these numbers being abused - used for manipulation, or out of laziness, instead of for leading?
Source: Other than my personal crime experiences it's from living in a red state and talking with people why they support crazy stuff or reject what seems like common sense to me.
This is because we have started accepting kritik-style debates as serious in the last two decades. Kritik used to be considered a bad faith technique but nowadays it’s considered a smart “trick” to win arguments. It’s when a debate participant doesn’t engage in debating the subject on its own merits, but instead challenges the premise of the question or a premise of the opponent’s position.
Crude example:
- I believe climate change is exaggerated because the Summers haven’t gotten notably hotter.
- If you say that, then you are unaware and uninformed. You must be watching Fox News.
Another:
- I think we are in a cost of living crisis, because every year, more US men are in crippling debt.
- Wow, look at your use of ableist misogynist language! Way to pretend women don’t suffer with debt 13% more than men!
Another:
- As society, we should be respectful of others online, because internet is an important (and sometimes only) social network some people have.
- Social media is unnatural, harmful and should be banned.
These are three failed debates, in each there is no clash of opinions, and no side provided meaningfully stronger arguments to win the debate. In fact, the two debate opponents stated opinions on different subjects entirely. And yet nowadays, this is how most people debate, it is considered appropriate, even in academia. In politics, this technique is considered a total winner.
So it is a bit like refusing to engage with the basic axioms when arguing mathematical proofs and just saying “math is for nerds”. We have totally accepted that as normal, as a society.
You are being hoist by your own petard. Lying with statistics is a very common thing and is, in fact, a cliche. I'm surprised you brought up the crime example - there are so many problems with it. Also, note that one way to reduce "crime" is just to make many crimes legal, but that does not change normal people's view of crime. What kind of statistics were used to decide that Iowa would go for Harris with an 18 point jump?
The measured amount of crime depends on police departments' reporting, which we know has been cut back.
Only one of those questions (earth warming rate) is clearly defined and scientifically addressable, as all the others have fairly subjective definitions (what is poverty? what is crime? how do we measure inflation objectively? etc.)
Even with warming, a 'fact' would be a data point at a particular time and location, assuming your sensor was correctly calibrated. You have to look at millions of data points across the entire globe for decades to get a sense of the current warming rate (which could be negative, flat, or positive). You have to do complicated statistics on all those data points to get a warming rate, and you'll have error bars on that, and the end result is not a 'fact' so much as a bounded estimate (+0.1 C / decade +/- 10% is plausible for the average surface temperature change averaged over the entire planet).
We can't even say with real certainty that 2100 will be warmer than today, as a supervolcano, asteroid impact, or global nuclear war could reverse the trend.
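To make the "bounded estimate" point above concrete, here is a toy fit on synthetic anomaly data (invented numbers, not real measurements); the output is a trend with error bars, not a single clean fact.

```python
# Toy illustration of estimating a warming trend with an uncertainty
# bound. The "anomalies" below are synthetic (assumed trend + noise),
# not real station data; the point is only that the answer comes out
# as an estimate with error bars rather than a single "fact".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1980, 2025)
assumed_trend = 0.018  # degrees C per year, assumed for the synthetic data
anomalies = assumed_trend * (years - years[0]) + rng.normal(0, 0.12, years.size)

fit = stats.linregress(years, anomalies)
decadal = 10 * fit.slope
ci95 = 10 * 1.96 * fit.stderr
print(f"Estimated trend: {decadal:+.2f} C/decade (95% CI ~ +/-{ci95:.2f})")
```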
I think prediction markets (polymarket et al) get this right. Every question as vague as "is the earth warming" has resolution details which define some way to resolve the question such that all parties (even those with economic interest to disagree) have trouble disputing the outcome.
For a question like the earth warming, it would usually be something like "according to ___.org on Y date", in which case the final prediction becomes: will the average temperature for 2016-2026, as published on ___.org, exceed the stated threshold - a bit different from the original question, but easier to arbitrate.
You sort of made your own counterpoint by giving a list of statistics that are far from objectively measurable and whose result and meaning depends a lot on the details of what exactly you're measuring and how.
Take inflation for example. Measure inflation in terms of gold, broken-arm repairs, hamburgers, or houses, and each will give you wildly different figures. The government's preferred index prices a basket of goods, but the particulars of the basket may not match your spending or that of anyone you know, and various corrections are necessary but are themselves subjective. An often disputed one is the correction for goods substitution - if steak goes up, people buy less steak and more rice. The government's current preferred model chains these corrections, even though in reality you can only replace so much steak with rice before it's all rice and no steak. These indexes also have corrections for goods increasing in quality - the price went up, but it's because the thing got better, not because of inflation - and so on (a toy basket calculation follows after this comment).
yadda yadda, I don't mean to import the debate here but the point is that there is something to debate particularly when the statistics don't match a person's lived experience -- when the things they need to live are rapidly increasing in price-- especially when politicians are abusing the stats beyond the breaking point (I think of the time when the Biden administration was crowing about something like the rate of inflation increase no longer increasing. What a jerk! ... or is that a snap? ;) ).
And even when the fact itself isn't really in dispute there is often plenty of room for reasonable people to debate the implications or relevance.
When people confuse these subjective issues for "basic axioms" and then impose their understanding as "facts", it's extremely problematic and highly offensive to people whose experience has taught them otherwise.
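To make the basket point above concrete, here is a toy calculation with invented prices and two invented households: the same price changes yield very different "inflation rates" depending on whose basket you price.

```python
# Toy example of how the choice of basket changes "the" inflation rate.
# Prices and quantities are invented for illustration only.
prices_last_year = {"rice": 2.00, "steak": 10.00, "rent": 1000.00}
prices_this_year = {"rice": 2.10, "steak": 13.00, "rent": 1100.00}

def basket_inflation(basket: dict) -> float:
    """Laspeyres-style index: price the same fixed basket in both years."""
    old = sum(qty * prices_last_year[item] for item, qty in basket.items())
    new = sum(qty * prices_this_year[item] for item, qty in basket.items())
    return 100 * (new / old - 1)

# Two hypothetical households with different spending patterns.
renter_basket = {"rice": 20, "steak": 2, "rent": 1}
steak_basket = {"rice": 5, "steak": 10}

print(f"renter's inflation:      {basket_inflation(renter_basket):.1f}%")   # ~10%
print(f"steak-lover's inflation: {basket_inflation(steak_basket):.1f}%")    # ~28%
```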
>These are all verifiable, measurable facts
No, they absolutely are not:
> whether crime is up or down
Depends on the definitions; what is or isn't a crime changes over time in a given society. Taking "crime" as an aggregate conflates many different possible crimes and relies on a subjective weighting of their relative severity. Crime rates can vary wildly between various subgroups of the population. We can only meaningfully compare rates of crimes that are actually detected and result in law enforcement actions; an unknown and broadly unknowable amount of crime is overlooked.
> whether the earth is warming or not
Most of the disagreement is about the rate of change, the predicted future rate of change, the predicted impacts of those changes, the extent to which we can do anything about it, and especially about the relative importance of the predicted impact vis-a-vis the effort that might be required to do something about it.
> how many people live in poverty, what the rate of inflation is
"Poverty" is generally measured in terms of income versus an arbitrarily decided baseline. The baseline at best varies over time specifically to remain in "real" terms, i.e. adjusted for "inflation" which is calculated on a basis which may bear no relation whatsoever to the rate of change in costs practically faced by the poorer segment of the population. Furthermore, income is nowhere near the entire picture of wealth, which in turn is not a full picture of economic well-being. Inflation measures are designed with "hedonic quality adjustments" (https://www.bls.gov/cpi/quality-adjustment/questions-and-ans...) in mind which involve subjectively putting numbers on a wide variety of factors - they're literally trying to measure "how much better" a cell phone becomes if the screen resolution increases, so that they can decide whether the increase in price is justified; and in many cases they just resort to assuming that the initial price is fair relative to existing devices when the new one hits the market.
>How much social security or healthcare costs
Again, this has to be considered in the context of inflation adjustments, because the value of currency is not objective. World currencies are not a unit of measurement for value; it's just another thing that you can exchange for other valuable goods and services. If they were objective, there would be no reason for exchange rates to vary over time; they vary because, among other things, of varying relative faith in the issuing governments, and varying supply (which governments can generally control more or less at will).
Aside from which, there are valid reasons why the per-capita costs might vary due to demographic changes. The disagreements I've seen haven't been about the bottom-line number in (say) the American federal government budget; they're about how to contextualize that number. Are per-capita costs changing? Are your personal costs changing? Are the costs of people like you changing? (Those answers could be different for many reasons.) How do they compare to costs in other countries? Is that justified? Is it explained by extenuating circumstances? How shall we compare the corresponding quality of care?
As far as I can tell they gave up moderation a few years ago, at least every time I report someone spamming about "Elon Musk giving away a million dollars if you click this shady link" or the like I invariably get told it meets their "community standards" and won't be removed. I guess technically I haven't seen a female nipple there though so, job well done?
They also allow the scammiest ads for products that are 100% obvious frauds - pure distilled snake oil. It really drags Meta's image into the dirt. They're like an online supermarket tabloid these days.
This is happening because Trump threatened to put Zuckerberg in prison for life (not an exaggeration):
https://finance.yahoo.com/news/trump-warns-mark-zuckerberg-c...
Trump himself confirmed this today:
https://bsky.app/profile/atrupar.com/post/3lf66oltlvs2l
I cannot believe anyone would actually be okay with this situation.
>This is happening because
Correlation is not causation, and coincidence definitely isn't.
Trump is politically incentivized to take credit for this. But he cannot in principle "confirm" anything about Zuckerberg's mental state.
Don't worry, there will be community notes and some form of EU/US/state notes. The paradigm has changed: moderation has to be separated from censorship and be transparent. I would love to hear/read Audrey Tang's take on this, as the CCP has been heavily involved in manipulating Chinese public opinion.
The piece on Axios:
# Meta eliminating fact-checking to combat "censorship"
https://www.axios.com/2025/01/07/meta-ends-fact-checking-zuc...
> Once the program is up and running, Meta won’t write Community Notes or decide which ones show up. They are written and rated by contributing users.
Sure "Meta" won't, but I wouldn't be surprised if a bunch of "contributing users" end up being facebook's AI accounts
CN is like a crowd-sourced disagreement sticker that gets attached to some content. Yes, it will be abused.
Facebook is virtual reality, whereas VRChat is inhabited by humans.
It would be a good thing if they stop abusing people in third-world countries with rubbish from the social network.
Also I wonder if they will be federating with truth social and gab.
this is good. the automated systems were getting increasingly byzantine, with layers of rules trying to patch edge cases, which just created more edge cases.
Unpopular opinion: I would rather just be on a global-entry-esque kyc'd social media platform at this point.
Bots and gov-psyop trolls are certainly (hopefully) like 95% of the gross misinformation, right?
I'd give some reasonably trustworthy platform my Passport and identity to speak to only other people who have done the same.
Not at all! It’s been talked about before.
The problem becomes, do you trust the company implementing it?
It works in banking.
It’ll be cool to see what a self-regulating social network looks like as opposed to a more top-down approach for meta.
I was recently browsing FB for the first time in months, and didn't see a peep from fact checkers, despite all the garbage-tier content FB is forcing into my feed, including things like "see how this inventor's new car makes fossil fuels and batteries obsolete". I spent most of my time on the site clicking "hide all from X", where X is some suggested page I never expressed interest in. The "shorts" on the site are always clickbaity boob-featuring things that I have no interest in either. The site is disgusting and distracting from any practical use, i.e. keeping in touch with friends, which is what I used to use it for.
It's funny how facebook got so political all the normies left, then they downranked political content so much that the political people left too. Facebook is a ghost town now.
Going back even further, one of the initial draws of Instagram pre-acquisition was that you could escape the toxicity of trolls and other socially unproductive behavior on Facebook.
Meta has a big problem coming up. They'll get to the point where they won't be able to hide Facebook and Instagram's lackluster appeal. I suspect we'll start seeing advertisers peel away, followed by a few savvy investors first. Let's just hope this doesn't trigger a market-wide correction.
>Let's just hope this doesn't trigger a market-wide correction.
My flippant, "I hate social media and think it was largely a mistake and needs to go away," view is to cheer for that correction. That said, I understand that I'm very biased here and might be ignorant.
Is there a reason I shouldn't cheer for such a correction?
I'd cheer for a correction if it were limited to social media valuations. My fear is that social media tanks followed by people broadly pulling money out of the market.
To me facebook seems a lot quieter but instagram is as busy with stuff as ever. We definitely have differences of opinion on that. Especially if TikTok is shut down (fingers crossed) most people will fall back on Instagram Reels.
It's a different type of activity though.
Facebook and Instagram's (pre-Reels) strength was that it was easy for accounts of all sizes to engage and be engaged with. Whether you had 10 or 100,000 friends/followers, the barrier to entry for some engagement wasn't high, and that encouraged people with accounts of all sizes to post, comment, and "like". Social networking felt much more intentional on these platforms.
Instagram Reels certainly has a lot of activity, but its activity is driven by users passively consuming popular and trending media. This isn't a bad model, but it's a shift away from intentional social networking.
Ultimately, I think Reels is more evidence that Meta has had a user engagement problem for a while. Their current strategy for Instagram seems to be to hope passive consumption keeps everyone in the app and to fall back on the "town square" model for comments as a means of engagement.
They A/B test Reels in Facebook. My mother's Facebook has Reels in it. Not mine. Soon the apps themselves will lose any sense of history and will morph into whatever new content format is in favour. All you need is an account with Meta. The content will find you. Zuck has that covered for you.
Instagram needs a Bluesky. It's truly an awful experience, but the only semi-competitor is TikTok which... isn't great either.
Users of this site have been saying that for literal years.
Meta. Microsoft. Amazon. Google.
Every one of their core user value propositions is worse now than it was in the 00s.
And all of them got there by letting revenue optimization whittle away customer centricity through a thousand cuts over time.
Surprisingly, Facebook has 2.1 billion daily active users. I primarily use the app for its Marketplace feature as an alternative to Craigslist.
Doesn't the second sentence explain the first? I can't count the number of times I've heard a variation of, "I hate Facebook [newsfeed]. I only use it for Messenger/ niche Groups/ local events/ Marketplace."
Facebook has positioned itself so that it’s almost a necessity if you want to be involved in your community, however you define it. You may hate Zuck, moderation, and ‘the algo’ and yet you can’t get away from Meta the company. And millions of other users feel the same way.
> it’s almost a necessity if you want to be involved in your community
not really; I haven't had a FB account in 10 years
I use Craigslist for local ads.
Facebook has a net profit of $62 billion/year.
Between that and people getting over constantly sharing what they did on vacation and what they cooked for breakfast or had at brunch, it is a lot quieter. At least Zuck chose to bring back political arguments as the mainstay right after the election rather than right before. It will be fairly quiet for a few years IF they keep up their efforts to limit Russian propaganda bots and don't add a bluecheck to promote them instead.
Do they still mandate using your legal name? That's the biggest no-go for me. It's just awful opsec.
Don't know if they mandate it, but I know a few people who use either names that are a slight modification of their real name, or completely made up names.
How do they validate that? YouTube also wanted my full name before they finally switched back to usernames. I just made up a pseudonym back then.
Facebook has pretty advanced features that cross check your digital signatures like IP address, browser, registered email, etc to prevent sockpuppeting. This is especially true if you want to make ads with your account.
In summary, FB was pressured in 2016 to act on "foreign influence", a narrative the press hysterically parroted from politicians and leaders. FB bowed to the pressure. Now that the press has lost all credibility, along with the X purchase, it can no longer persuade Meta to "fact check." FB is in a better spot to follow the X model of moderation. People arguing this is a bad move are ignoring the fact that FB was a censorship hotbed for the last four years.
It would be hilarious if somehow Elon/X claimed some kind of ownership or trademark or patent on the model.
It was evident that Mark Zuckerberg / Meta would have to once again "adapt" to another Trump presidency, but this is much more explicit than I expected, wow.
I know there has been a lot of ink spilled trying to persuade that Technology can't solve our deeper problems and Technologists are too optimistic about having real-world impact etc. etc.
But I think community notes (the model, not necessarily the specific implementation of one company or another) is one of those algorithms that truly solve a messy messy sticky problem.
Fact-checking and other "Big J Journalist" attempts to suppress "misinformation" is a very broken model.
1) It leads to less and less trust in the fact checkers which drives more people to fringe outlets etc.
2) They also are in fact quite biased (as are all humans, but it's less important if your electrician has socialist/MAGA/Libertarian biases)
3) The absolute firehose of online content means fact checkers/media etc. can't actually fact check everything and end up fact checking old fake news while the new stuff is spreading
The community notes model is inherently more democratic, decentralized, and actually fair. And, this is the big one, it works! Unlike many of the other "tech will save us" ideas (e.g. web3), it is extremely effective and even-handed.
I recommend reading the Birdwatch paper [0], it's quite heartening and I'm happy more tech companies are moving in that direction
[0] https://github.com/twitter/communitynotes/blob/main/birdwatc...
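For anyone curious what "it works" means mechanically: the Birdwatch paper describes a matrix-factorization scorer in which each helpfulness rating is modeled as a global mean plus a user intercept, a note intercept, and a user-note interaction along a latent "viewpoint" factor, and a note is surfaced only if its intercept (the helpfulness left over after viewpoint agreement is accounted for) clears a threshold. The sketch below is my simplification of that idea, not the production scorer; the hyperparameters are placeholders.

```python
# Rough sketch of the bridging idea behind Community Notes scoring:
# each rating r_{un} is modeled as  mu + b_u + b_n + f_u . f_n,
# and a note is surfaced only if its intercept b_n (helpfulness after
# accounting for viewpoint agreement f_u . f_n) clears a threshold.
# Simplified illustration, not the production algorithm.
import numpy as np

def fit_note_scores(ratings, n_users, n_notes, dim=1, lr=0.05,
                    reg=0.1, epochs=500, threshold=0.4):
    """ratings: iterable of (user_idx, note_idx, value in {0, 1})."""
    rng = np.random.default_rng(0)
    mu = 0.0
    b_u = np.zeros(n_users)                    # user intercepts (rater generosity)
    b_n = np.zeros(n_notes)                    # note intercepts (broad helpfulness)
    f_u = rng.normal(0, 0.1, (n_users, dim))   # user viewpoint factors
    f_n = rng.normal(0, 0.1, (n_notes, dim))   # note viewpoint factors

    for _ in range(epochs):
        for u, n, r in ratings:
            pred = mu + b_u[u] + b_n[n] + f_u[u] @ f_n[n]
            err = r - pred
            mu += lr * err
            b_u[u] += lr * (err - reg * b_u[u])
            b_n[n] += lr * (err - reg * b_n[n])
            # Both factor updates use the pre-update values (tuple assignment).
            f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - reg * f_u[u]),
                              f_n[n] + lr * (err * f_u[u] - reg * f_n[n]))

    # A note rated helpful across the viewpoint spectrum ends up with a high
    # intercept; a note helpful only to one "side" gets explained by the
    # factor term instead, so its intercept stays low and it isn't surfaced.
    return [n for n in range(n_notes) if b_n[n] >= threshold]
```

The regularization on the intercepts is what makes this "bridging": one-sided agreement gets soaked up by the viewpoint factors rather than inflating a note's helpfulness score.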
Agreed. I think people are looking at CN/Birdwatch being from Twitter and seeing red without looking at the details.
CN predates Musk burning Twitter to the ground, and CN is actually a decent product that can only get better as it is honed.
But community notes have been around since before Musk bought Twitter, and they have not had any effect on reducing the amount of outright falsehoods passed off as "news" on that hellscape. Why do people keep championing it as a success story when it demonstrably hasn't helped?
Frankly, if it worked, it would have been removed by now. It's "controlled opposition" basically.
My gut feeling is that this will be accompanied by a relaxed policy on fake profiles too.
Community notes is maybe the only good thing to happen to the microstructure of social media in years so I'm vaguely in favour of this.
The official fact checking stuff is far too easily captured, it was like the old blue checks — a handy indicator of what the ancien regime types think.
The fact-checking that Meta is ending, which put "misinformation" disclaimers on posts, is NOT the same as content moderation, which will continue.
A lot of comments in this thread reflect a conflation of these two, with stuff like "great! no more censorship!" or "I was once banned because I made a joke on my IG post", which don't relate to fact-checking.
Zuck's video claims Europe has been imposing a lot of censorship lately, which is a nicer way for him to say "we have done a crappy job at stopping misinformation and abusive material, got fined A LOT by countries who actually care about it, and that's somehow not our fault".
Community notes is good news, and something I was expecting to disappear from Twitter since Elon bought it a couple years ago, especially since they have called out his lies more than once. Hearing Facebook/Instagram/Threads are getting them is great.
Then he claims "foreign governments are pushing against American companies" like we aren't all subject to the same laws. And actually, it wasn't the EU who prohibited a specific app alleging "security risks" because actually they can't control what's said there; it was the US, censoring TikTok.
Perhaps we Europeans should push for a ban of US platforms like Twitter, especially when its owner has actually pledged to weaponise the platform to favour far-right parties like the AfD (Germany) or Reform UK. And definitely push for bigger fines for monopolistic companies like Meta.
Why should social media operators be responsible for "stopping misinformation" in the first place? That sounds a lot like the logic that was used to justify smashing the printing presses in Gutenberg's day, not to mention by countless villains of dystopian sci-fi (e.g. Fahrenheit 451), in turn based on other real-world concerns.
I think I should have a right to let others lie to me, and decide for myself if I believe them. In the alternative where someone prevents me from hearing it, that other person is deciding for me. Why should I accept that other person as more qualified to do my own thinking?
It's really strange to me how calls for banning "misinformation" in the US seem to come from the same political direction as complaints about controversial books being taken out of educational curricula.
In all cases, what they mean is that they want opinions or statements that go against whatever ideology or political faction they belong to to be censored.
Humans tend to strongly identify with such things and motivate their moral reasoning to fit.
I would wager Mark and other sharks like him would find this entire thread very amusing. They have no ideology other than self-interest; nothing they do serves any purpose other than their own.
What fact checkers? In the last one year or so my feed has been filled up with conspiracy theory garbage. Not even plausible stuff.
Off topic but related to holding communities to account: I wish there were a way to metamoderate subs on Reddit. The Texas subreddit has been co-opted by a moderator that bans anyone who criticizes their editorial decisions or notices antagonism trolls taking over the sub.
It's a welcome move as this "fact checkers" thing was doomed to fail, mostly because "who decides what the truth is, and who fact checks the fact checkers?".
Sad thing is, this move isn't motivated by Mark Zuckerberg having a eureka moment and now trying to seek out the truth to build a better product for humankind.
This move is motivated by Mark realizing he is on the wrong side of American politics now, being left behind by the Trump/Musk duo.
It's just cheaper. That's the most important thing for corporations. It's also harder to accuse them of bias. Personally, I'm a little dubious about the effect fact checkers have on people's opinions. If someone is a dullard willing to believe the most absurd propaganda or every conspiracy theory that exists, a fact checker won't solve the problem; they are used to being told that they are wrong. Of course Meta can just shadowban this content, but in the end they profit from it.
Zuckerberg knows which way the winds are blowing in the US capital and is ensuring he is aligned with them so as to avoid political blowback on his company.
I suspect the changes to the fact checking / free speech will align with Trump's political whims. Thus fact checking will be gone on topics like vaccines, trans people, threats from immigrants, etc.
While the well documented political censorship at Meta affecting Palestine will remain because it does align with Trump's political whims...
- https://www.hrw.org/news/2023/12/20/meta-systemic-censorship...
- https://www.theguardian.com/technology/article/2024/may/29/m...
People down voting this are being silly.
Here's the topics the announcement mentioned:
"We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate."
Palestine is completely absent.
Translation: community notes are “good enough” from the perspective of the business community, and probably an order of magnitude cheaper.
"There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"
Isaac Asimov - still hitting the high notes from the pulpit, even 30-plus years on.
Mark doing what Mark needs to do to keep that Meta stock elevated.
https://www.technologyreview.com/2021/03/11/1020600/facebook...
Zuck still dreams of his despotic dictatorial empire where he can enslave millions and make them all trans via Police enforcement. This move is just to stop bleeding users to X.
So much for the supposed zuck rebrand; it's still him
I'm less concerned by the change from fact checking to community notes, because Meta had often neutered the ability of its fact checkers anyway.
What I am concerned about is their allowance of political content again.
Between genocides and misinformation campaigns, Meta has shown that the idea of the town square does not scale - not with their complete abandonment of any kind of responsibility to the social fabric of their actual users.
Meta is an incredibly poor steward of human mental health. Their algorithms have been shown to create intense feedback loops that have resulted in many deaths, yet they continue their march toward bleeding as much out of people as possible.
> the idea of the town square does not scale.
Completely agree. Instead of one giant town square ("Facebook") what we would benefit from are 1000 smaller ones ("Facebook competitors") and some way to "travel" between them. That is a smaller more human scale that can be responsibly governed. It does not create hyper-billionaires though.
> Starting in the US, we are ending our third party fact-checking program and moving to a Community Notes model.
The Community Notes model works great on X at dealing with misinformation. More broadly, this is a vindication of the principle that putatively neutral "expert" institutions cannot be trusted unless they're subject to democratic checks and balances.
This is my conspiracy theory but this is all in preparation for the end of Section 230 which will also inadvertently kill Blue Sky.
Can you elaborate?
There is a long history, but the short of it is: before Section 230, platforms that moderated user content faced potential liability. Stratton Oakmont v. Prodigy[1] is a case where Prodigy was held liable for defamatory posts due to its moderation efforts. However, in Cubby v. CompuServe[2], the court ruled that a platform without active moderation, CompuServe, was not liable for user-generated content because it was just hosting with no active involvement. Section 230 protected platforms from liability for user content, allowing them to moderate in good faith without being held responsible for all harmful material if they weren't able to moderate everything.
I believe Elon and Trump, being the internet's biggest liars, have the goal of removing Section 230, making online moderation a legal liability that opens you to litigation, and allowing them and all of their followers to spread lies not only unchecked but with the threat of punishment if a company like Blue Sky were to try to moderate them.
[1] https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
[2] https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.
I wouldn't mourn the loss of BlueSky, because it's basically designed from the ground up to create filter bubbles and echo chambers, and social media needs way less of those.
I'm sure you also think Twitter is the free speech capital of the internet as well.
Removing the politics from this is rather impossible, because it was so deliberately timed and explicitly positioned as political. But as a PM addressing the pure product question, I'd say it's an unnecessarily risky product move. You've basically forgone the option to use humans professionally incentivized to follow guidelines, and decided to 100% crowdsource your moderation to volunteers (for amplification control, not just labeling, btw). Every platform is different, but the record of such efforts in other very high volume contexts is mixed at best, particularly in responding to well-financed amplification attacks driven by state actors. Ultimately this is not a decision most any experienced PM would make, exactly because the risk is huge and the upside low. X's experience with crapification would get any normal PM swift and permanent retirement (user base down roughly 60%, valuation down $30B - how's that look on your resume?). So I go back to the beginning - this is plutocrats at play and not even remotely in the domain of a carefully considered product decision.
I know some of those fact checkers. They are career journalists and the bar to tag a post as disinformation is extremely high.
To tag a post, they need to produce several pages of evidence, taking several days of work to research and document. The burden of proof is in every way on the fact checkers, not the random Facebook poster.
Generalizing this work as politically biased is a purposeful lie.
Even granting all that you say is true, it would be trivial for there to be bias in such an apparently rigorous process. All that is required is selective application of the rules.
Did they even have authority to take down posts? That was always Meta's call. The fact-checkers -- which were separate news orgs -- would tag posts.
Yes, you are right. I believe tagging significantly reduced the chance of seeing the post in your feed, so it was similar in effect.
> was similar in effect
Not really. Because if you argue that tagging was censorship, then you have to say that any algorithmically generated feed is censorship: the company is determining what, among everything users post, you should see, allowing certain posts to bubble up to the top and others to fall to the bottom.
>...the bar to tag a post as disinformation is extremely high. To tag a post, they need to produce several pages of evidence, taking several days of work to research and document.
Why was the Hunter Biden laptop story thus categorized? As I recall, "several days" did not elapse between the New York Post publication of the story and its suppression on social media.
The very fact that he is admitting they are doing this because of Trump and that there will be "more bad stuff" is pretty fucking crazy.
They should have never gotten into that business in the first place
A strange game. The only winning move is not to play.
I assume the data is showing that conservative users are growing either in raw numbers or in aggregate interaction on Facebook, and thus, will now be catered to.
Meta, as a company, doesn't have values beyond growth.
Great news. It's further evidence that the zeitgeist has shifted against the idea that platforms have a "responsibility" to do "good" and make the world "better" through censorship. Tech companies like Meta have done incalculable damage to the public by arrogating the power to determine what's true, good, and beautiful.
Across the industry, tech companies are rejecting this framework. Only epistemic and moral humility can lead to good outcomes for society. It's going to take a long time to rebuild public trust.
The moderation tools were themselves offensive and abusive. I use FB to read what my friends and relatives have to say. I don't want FB to interfere with their posts under any normal circumstance, but somehow, they felt like they should do this.
But the real reason I can't use FB much any more is that the feed is stuffed full of crap I didn't ask for, like Far Side cartoons etc.
while it's obviously fair to be very very wary of everything FB does, especially moderation, the other side of this is a worldwide campaign by the worst people alive to use these platforms to shape public opinion and poison our (ie at least the West's) culture to death.
Hopefully I stop getting in trouble for reposting things verifiable in the public record that other people spoke about in 2018, and stop being banned for supporting capital punishment, a thing legal in the US, the platform's home country.
Tech has become so entangled with government.
The Metaverse and the WFH bets made by Zuck were controversial, but at least they were rooted in tech, trends in population habits, and vision, without any political poop attached.
This one is pure political poop to please Orange Man.
Also, I believe that fact-checking needed to be slowly sunsetted after the COVID emergency was over, but the timing of this announcement and the binary nature of the decision mean it was done with the intention of getting into the good graces of the new administration.
If these tech executives become the American equivalent of Russian oligarchs, I hope that states would go after their wealth based on their residence, and even ADS-B private jet trackers if they were to move to, say, Wyoming but party every weekend in Los Angeles/NYC, etc.
The litmus test of this is whether they roll it out globally. If they do, Meta truly has seen the light; if they don't, this is just a cynical attempt to butter up Trump in case he regulates them into oblivion (as one could argue they deserve).
Zuck is making the right noises. Time will tell.
if you use facebook you're an idiot
It would have been a perfect opportunity to -add- community notes, study which worked side by side, and choose the better of the two. Instead, evidently Musk and Drump pulled Zuck aside and told him to shape up and join the billionaire oligarchs club or face the consequences of a partisan DoJ and SEC.
What a crock of shit. Freedom of speech is anathema to Facebook.
Free expression my ass. Freedom of speech is not about protecting speech you agree with.
oh no what happened?
Ironically the post is affected by Hacker News flame-war detection system.
If this occurs, and you feel it shouldn't, you can request mods disable the flamewar detector by emailing them at hn@ycombinator.com.
FYI Meta just removed Nick Clegg as their global head of policy and replaced him with Joel Kaplan, who was Trump's deputy chief of staff.
They also appointed Dana White, a prominent Trump supporter, to their board this week.
Their content moderation team is moving from California to Texas.
If people think all this is Meta going "neutral", you are delusional.
You have gotten to the heart of the matter. Well done, indeed, sir/madam!
Hot take:
This is to please the incoming president.
Both the far-right and far-left live off misinformation, but right now the far-right is experiencing a renaissance, and tech moguls are bending the knee to be on good terms with the leaders.
MAGA and European far-right politicians have been moaning for ages that fact checking is "politically biased". The Hunter Biden laptop controversy was the catalyst for this.
In what sense is this a hot take? This seems to be the dominant explanation by a wide margin.
"Our fact checking wasn't good enough, so we're outsourcing it to the public."
This is insane and clearly a political move. Maybe we just don't require social media as a species. That might be nice.
Corporate censorship should have never happened. It is a huge corruption of public discourse and the political process. These platforms have hundreds of millions of users, or more, and are as influential as governments. They should be regulated like public utilities so they cannot ban users or censor content, especially political speech. Personally I don’t trust Zuck and his sudden shift on this and other topics. It doesn’t come with a strong enough rejection of Meta/Facebook’s past, and how they acted in the previous election cycle, during COVID, during BLM, etc. But I guess some change is still good.
You can't use a social media platform that can't ban users, because it'll be full of spammers and people who only communicate in death threats.
But being at the head of a social network is political. Every choice is political. Allowing extreme speech to circulate is political; not allowing it is political too. It is not corporate censorship, it's regulation. Without regulation, it will be the voice of the loudest/strongest. And I think we need some rationality, not polarisation.
Feel free to correct me if I'm wrong, but I don't think there's any reasonable political discourse that is ever* censored by social media companies.
During COVID, there were people spreading lies about the vaccine, which many people believed, and many people died as a result of believing those lies. Even Louis Brandeis, one of the fiercest advocates of free speech, made an exception for emergency situations[0], which is arguably what a pandemic is.
But again, lies about a vaccine do not constitute reasonable public discourse, it is more akin to screaming fire in a crowded theater. If you have counter examples of regular public discourse that has been censored by a social media company, please share it.
* I realize "ever" is a stretch, I'm sure there are instances, but my understanding is that they are the exception rather than the rule.
[0] "If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence. Only an emergency can justify repression. Such must be the rule if authority is to be reconciled with freedom." - Louis Brandeis, Whitney vs. California
It's hard to talk about, because when a discussion is successfully censored you usually don't hear about it and presume any discourse on it would have been unreasonable.
I would point towards immigration as a topic where meaningful discourse is missing from social media. On most social media sites, the discussion will be dominated by people who think immigration should rarely if ever be restricted; Twitter has been colonized by some people who take the opposite extreme, often for overtly racist reasons, although this is tempered a bit by Elon Musk's personal support of high skill visas.
The "normie" immigration restrictionist position, that immigrants are great but only so long as they enter the country lawfully, is something I very often see expressed in news interviews or supported by older relatives and rarely if ever see expressed on a social media platform. I don't know how I'd go about proving this is downstream of fact checking, but there's a lot of orgs who argue that it's factually false to characterize, for example, someone who crosses the border without authorization and then applies for asylum as an illegal immigrant.
ITT: mental gymnastics
If you think this move exists in a vacuum or is actually about "getting back to their roots with free speech", you're wrong. Alongside Dana White joining the board[0], it's clear that this is solely about currying favor with the incoming administration.
[0] https://www.npr.org/2025/01/06/nx-s1-5250310/meta-dana-white...
It's not solely about currying favor. Many tech giants hate getting pushed around by politicians and courts around the world demanding censorship. Free speech rights in the US are much stronger than elsewhere in the world, and even businesses as large as Meta need political support to successfully push back on censorious overreach.
For context, in Germany you can face up to 3 years prison time for insulting a politician: https://www.dw.com/en/germany-greens-habeck-presses-charges-...
>It's not solely about currying favor. Many tech giants hate getting pushed around by politicians and courts around the world demanding censorship.
Taking steps to not be pushed around by an incoming president who has clearly suggested he'll push them around is, quite literally, currying favor.
> Many tech giants hate getting pushed around by politicians and courts around the world demanding censorship.
They may not like that but they also don't like to take responsibility either.
100%. It is about aligning with Trump's political opinions. Thus I do expect to see no fact checking of anti-trans, anti-vaccine and anti-immigrant content. But I don't think that Meta's documented censorship of Palestinian content [1] will change, because the censorship is inline with Trump's political opinions.
[1] https://www.hrw.org/news/2023/12/20/meta-systemic-censorship...
Maybe it’s not Trump.
Maybe the people elected Trump in a historic GOP win with demos that Reagan wouldn’t have won with… and Zuck sees the writing clearer than most?
The way you put it leaves out the cause and only gets the effect.
[flagged]
Just like complying with government censorship demands was about currying favor with the outgoing administration.
Like this!
https://www.rollingstone.com/politics/politics-news/elon-tru...
> When the White House called up Twitter in the early morning hours of September 9, 2019, officials had what they believed was a serious issue to report: Famous model Chrissy Teigen had just called President Donald Trump “a pussy ass bitch” on Twitter — and the White House wanted the tweet to come down.
Rollingstone makes up stories and is not a reliable source.
The claim was made in sworn testimony in a Congressional hearing by a Twitter executive.
https://www.nytimes.com/2023/02/08/us/politics/twitter-congr...
On video, if you like: https://twitter.com/Acyn/status/1623357770933145607
My door to Meta is closed and will never reopen, no matter what. Facebook has cost me all my friends. WhatsApp sells my phone number. Threads banned me for commenting too much without giving it my phone number. Facebook keeps or kept censoring my posts. Fuck Meta forever.
> We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.
My mom and my wife’s mom both have remarked in the last year that they’re upset with speech policing. My mom can’t say things about immigration that she thinks as an immigrant, and my mother in law is censored on gender issues despite having been married to a transgender person in the 1990s. They’re not ideological “free speech” people. Neither are political, though both historically voted left of center “by default.”
The acceptable range of discourse on these issues in the social circles inhabited by Facebook moderators (and university staff) is too narrow, and imposing that narrow window on normal people has produced a backlash among the very people who are key users of Facebook these days (normie middle age to older people). This is a smart move by Zuckerberg.
Meta also nominated a Trump-affiliated boxing entertainment businessman to its board yesterday.
They’re doing everything they can to suck up to the incoming administration.
It seems one doesn't become a billionaire without being an immoral opportunist...
[flagged]
[flagged]
[flagged]
Maybe some desperation going on behind the scenes ?
As opposed to them being brave, independent champions when it came to suppressing discussions about Covid or the Hunter Biden laptop
>Ending Third Party Fact Checking Program, Moving to Community Notes
CNotes were extremely successful on X.
The problem with censorship, and why Digg and Reddit died as platforms, is that you end up with second-order consequences. The anti-free-speech people will always deeply analyze their opponents' speech to find a violation of the rules.
They try to make rules that sound reasonable but go beyond Section 230. No being anti-LGBT, for example. But then every joke, miscommunication, etc. leads to bans. You also ban entire cultures with this rule. I've had bans because I meant to add NOT to my one-sentence post, but failed to do so.
Then, when it comes to politics, you've banned entire swaths of people and viewpoints. There's no actual meaningful conversation happening on Reddit.
Reddit temporarily influenced politics in this way. In a recent election a politician built a platform that mirrored the subreddit. There were polls, and if you were to go by Reddit, the liberals were about to take at least a minority government, if not a majority.
What actually happened? The platform was bizarre and very out of touch with the province. They got blasted in the election. The incumbent majority got stronger.
> CNotes were extremely successful on X.
> reddit died
By all measures I can find, reddit continues to grow year over year, while X seems to have been flat or in decline, so I’m not sure this is a strong premise.
Facebook is #1, followed by youtube.
Tiktok is 4th.
Linkedin is 8th.
X is 12th.
Reddit is 16th.
Reddit fell a great deal in the rankings. They mostly use bots to make it appear like they are still relevant, which ironically is feeding a 'dead internet' conspiracy theory. In reality it's just 'dead Reddit'.
Ranked by whom, on what metrics?
What were their relative rankings on the same metrics, say, five years ago?
If you want automated fact checking you need to create a god. (... and creating a human team that does the same is playing God)
If you want to identify contagious, emotionally negative content, you need ModernBERT + an RNN + 10,000 training examples. The first two are a student project in a data science class; creating the third would wreck my mental health if I didn't load up on Paxil for a month.
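To make that concrete, here is a minimal sketch of the "ModernBERT + RNN" part: encode each post with a pretrained ModernBERT checkpoint, pool the token embeddings through a small GRU, and put a single logit on top. The checkpoint name, layer sizes, and training details are assumptions for illustration (and it assumes a transformers release recent enough to ship ModernBERT support), not a real moderation model.

    # Sketch only: ModernBERT encoder + GRU pooling + one-logit head.
    # Assumes transformers >= 4.48 for the answerdotai/ModernBERT-base checkpoint.
    import torch
    import torch.nn as nn
    from transformers import AutoTokenizer, AutoModel

    class NegativityClassifier(nn.Module):
        def __init__(self, encoder_name="answerdotai/ModernBERT-base", hidden=256):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(encoder_name)
            self.rnn = nn.GRU(self.encoder.config.hidden_size, hidden,
                              batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, 1)  # one logit: "contagiously negative?"

        def forward(self, input_ids, attention_mask):
            # Token embeddings from the encoder, pooled by a small RNN.
            tokens = self.encoder(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
            _, h = self.rnn(tokens)                     # final hidden states
            pooled = torch.cat([h[-2], h[-1]], dim=-1)  # forward + backward directions
            return self.head(pooled).squeeze(-1)

    tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
    model = NegativityClassifier()
    batch = tokenizer(["example post text"], return_tensors="pt",
                      padding=True, truncation=True)
    score = torch.sigmoid(model(batch["input_ids"], batch["attention_mask"]))

Train it with BCEWithLogitsLoss on the ~10,000 hand-labeled examples and that's the whole classifier; the labeling is the part that hurts.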
Contagious negative content is bad for people whether or not it is true. If you suppressed it by a large factor (say 75%) in a network, it would be like adding boron to the water in a nuclear reactor: it would reduce the negativity in your feed immediately, it would reduce it further because it would stop it from spreading, and soon people would learn not to post it to begin with because it wouldn't be getting a rise out of people. (This paper https://shorturl.at/VE2fU notably finds that conspiracy theories are spread over longer chains than other posts and could be suppressed by suppressing shares after the Nth hop.)
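The hop-based damping idea is almost embarrassingly simple to express. The threshold and damping factor below are made-up numbers just to illustrate the "suppress ~75%" example, not anything the paper prescribes:

    # Toy sketch of hop-count damping ("boron in the reactor").
    # MAX_FREE_HOPS and DAMPING are illustrative values, not from the paper.
    import random

    MAX_FREE_HOPS = 3   # reshares up to this depth propagate normally
    DAMPING = 0.25      # beyond that, keep ~25% of reshares (suppress ~75%)

    def allow_reshare(hop_count: int) -> bool:
        """Decide whether a reshare at a given hop depth gets distributed."""
        if hop_count <= MAX_FREE_HOPS:
            return True
        return random.random() < DAMPING

The effect compounds: each suppressed hop removes not just that reshare but the whole subtree of reshares that would have followed it.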
My measurements show Bluesky is doing this quietly; I think people are more aware that Threads does it. Most people there seem to believe "Bluesky doesn't have an algorithm," but they're wrong. Some people come to Bluesky from Twitter and after a week start to confess that they have no idea what to post because they're not getting steeped in continuous outrage and provocation.
I'm convinced it is an emotional and spiritual problem. In Terry Pratchett's Hogfather the assassination of the Hogfather (like Santa Claus but he comes on Dec 32 and has his sleigh pulled by pigs) leads to the appearance of the Hair Loss Fairy and the God of Hangovers (the "Oh God") because of a conservation of belief.
Because people aren't getting their spiritual needs met you get pseudo-religions such as "evangelicals who don't go to church" (some of the most avid Trump voters) as well as transgenderists who see egg hatching (their word!) as a holy mission, both of whom deserve each other (but neither of whom I want in my feed.)
This is unequivocally good. That’s it. That’s the comment.
Great news by the Zuck. Good to see the framework being laid is having benefits for everyone.
News story about Zuckerberg sucking up to Trump.[1]
News story about other CEOs sucking up to Trump.[2]
News story about Bezos sucking up to Trump.[3]
"The Führer is always right" [4]
[1] https://www.cnn.com/2024/12/04/business/zuckerberg-trump-mus...
[2] https://www.foxbusiness.com/media/kevin-oleary-explains-why-...
[3] https://newrepublic.com/article/188170/jeff-bezoss-shocking-...
[4] https://en.wikipedia.org/wiki/F%C3%BChrerprinzip
I've only been here about 1/30th as long as you, so I fully accept that I could be wrong here; but this really doesn't seem to measure up to the standard of discourse that I understood to be expected on HN.
It's not great, but it's unfortunately relevant to the article topic.
During the Biden administration they were expected to shift their moderation policies to fit in with the political ideology then in the White House.
Now it's been normalized and the other party is doing it. But the news outlets have waited until now to start crying wolf?
Maybe, just maybe, it's because most people in the media are Democrats, and therefore inherently self-biased in their concerns and worldviews, and they have a belief that prevents any critical self-examination, easily summed up by the Stephen Colbert line that: "reality has a liberal bias."
You can't argue with someone who thinks their beliefs are merely "reality." At least the other side recognizes it as religion, etc.
There is a huge difference between a belief and a fact. Most of the discontent in today's world is this exact issue...
> there is a huge difference between a belief and a fact.
What if a fact is disputed? Do you not have to choose which fact to believe?
Gestalting between two disputed facts is the basis for scientific revolutions.
Ptolemaic astronomers certainly had a belief that epicycles were "fact" and made every non-scientific attempt to destroy heliocentrism. Only when enough people didn't _believe_ in that "fact" did we evolve to better understanding.
You can say "these were not facts and were just flawed observations", but you'll ignore that Ptolemaics _said_ these ideas were facts and had strong evidence and a belief that it really was.
This model can be applied over and over again to many domains. This isn't my idea, rather it comes from the seminal work "The Structure of Scientific Revolutions" by TS Kuhn.
So, no, there is not a bold line between belief and fact. We choose what facts to believe.
> we choose what facts to believe.
This just might be the craziest thing I’ve read recently, but given the current state of affairs, not all that surprising…
I cited a major academic work to back up my position and gave a real-world example to demonstrate the concept. What about Kuhn is crazy? You should attempt to engage with the topic and avoid ad hominem attacks. Or are you of the opinion that “we don’t believe in facts”?
More accurately, the quote is "Reality has a well known liberal bias," and it was given in the persona of a character Colbert played on a Comedy Central show, so it can be seen with a certain irony.
https://en.wikipedia.org/?redirect=no&title=Reality_has_a_we...
I think this reinforces my argument that liberals view it as indisputable that there is no bias in their favor in media and all their opinions are "merely reality."
Well, I think it's important to point out context and to be accurate with regard to the actual quote. Imprecision with words leads to misinterpretation.
I'm not clear what your larger point is though or why you're singling out my comment with your rebuttal.
>they were expected to shift their moderation policies to fit in with the political ideology currently in the white house
They were expected to? Hmm, hot take.
I mean, they were; e.g., the Twitter files, or all of the handwavey threats around Section 230.
The twitter files that showed that accounts of conservatives got special treatment that explicitly prevented them from facing consequences of breaking site rules?
I have no idea how you came to the conclusion that they showed any such thing. Even Wikipedia (https://en.wikipedia.org/wiki/Twitter_Files) takes the stance that the points raised were generally showing bias against conservatives, and tries to downplay them.