YouTube More Likely to Direct Election-Fraud Videos to Users Already Skeptical (nyu.edu)
25 points by rbanffy on Sept 8, 2022 | 69 comments


Algorithm designed to show people what they want shows people what they want.


Do they really want that, though, or do they simply get more excited by it? Let's take a more striking example: someone trying to avoid wallowing in a pit of despair and grief over a loved one dying. I feel like many such people would tell you they actively want, and are even trying, to avoid triggers for it, but once they run into one it pulls at them, and they start to fall down a rabbit hole where, a few hours later, they are staring at old photos, listening to depressing music, and sobbing their eyes out.

Now, imagine you have the job of recommending content for them. I feel like not only would a compassionate person know to avoid these triggers "for the user's own good" (which I would find way too paternalistic), but even a really competent human (or a truly powerful AI) that was optimizing for "what this user wants" would avoid triggering content, as the customer could probably even express this want verbally.

The issue, though, is that if you happen to ascertain (whether by active analysis or more passive experimentation) "omg every time I show them this thing they immediately drop everything else they are doing and spend three hours watching similar videos!!! this is amazing: I have found the best possible thing to show them!!!" and act on that despite it not being what the user wants, you are just being evil.

And that's Google: the people at Google that work on this software are evil. Google employees seem to just be selfishly optimizing for "engagement" (to the extent to which they succeed... honestly they aren't even all that great at it, but they do much better than we would actually want ;P) and will just blindly show people stuff that they themselves might actively wish to avoid but can't help but get sucked into... so Google can make more ad money.

In the case of these conspiracy theories, I will contend most people want to be informed. They are momentarily skeptical of something, which makes them likely to get turned on easily by these videos, but then it is Google that radicalizes them by realizing "omg when I show them more videos of this form they just keep digging into the rabbit hole!!! SO MUCH AD MONEY" and "ugh when I show them videos from the other side they get turned off and maybe even drop the whole issue and go off to do something else more productive... that sucks! :/".

The engineers at Google have thereby built a system that, sure, "shows you what you want", but it shows you the most twisted, myopic, and even cruel version of what you want in the name of making them ad revenue over all other potential benefits to the user. Sure, sometimes this can be rationalized as showing the user something they legitimately found fun or interesting, and if you go into it with a careful mind and a wall of defenses it is possible to get what you need despite this perverted optimizer... but, I would hope we all can understand that it goes much deeper than that and that the negative effects are not ones that should be ignored in the name of making more money.


The machines don't have malice, they aren't like humans.

No company is required to provide intellectually healthy results. How would those be defined in a fair way? Is there even a good way of detecting less direct, codeword-driven searches that might silo perspective?

A better approach might involve multiple, hopefully not-for-profit, fact-checking and news organizations. Each might have a Truth/Lies and Bias/Informative score, among others, that helps show the quality of the information. This metadata could be a first-class citizen on the content. A warning in front of content that has not yet been vetted, or which includes lies or large bias, might help warn and inform end users.
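A rough sketch of what such metadata and a warning rule might look like (the field names, score ranges, and thresholds here are all hypothetical, just to make the idea concrete):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ContentRating:
        # Hypothetical scores from one independent fact-checking org, each 0.0-1.0
        source: str             # e.g. "factcheck-org-A"
        truthfulness: float     # 1.0 = no identified false claims
        bias: float             # 0.0 = neutral, 1.0 = heavily one-sided
        informativeness: float

    def warning_label(ratings: list[ContentRating]) -> Optional[str]:
        """Return a warning to show before the content plays, or None."""
        if not ratings:
            return "This content has not yet been reviewed by any fact-checking organization."
        if min(r.truthfulness for r in ratings) < 0.5:
            return "Independent reviewers found significant false claims in this content."
        if max(r.bias for r in ratings) > 0.8:
            return "Independent reviewers rated this content as heavily one-sided."
        return None

The point is only that the ratings would travel with the content and be checked before display, rather than living on some third-party site nobody visits.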


> The machines don't have malice, they aren't like humans.

Correct: it is the humans that built the machines in this way and continue to operate them that are malicious, not the machines. In this way I will note that it isn't even clear to me that companies can have malice: humans--employees of companies and developers of machines--have malice.

> No company is required to provide intellectually healthy results.

Correct: no one is required to not be evil. Google for a long time claimed to not be evil, but the people who work for Google are, in fact, quite evil :(. Regardless, the issue at hand was whether these systems are--and this is an exact quote from the post I am replying to--"designed to show people what they want"... they are not.


I agree with you that clicks and view time do not necessarily equate to desired content, but I disagree with you that Google uses those parameters because they're "evil". They make money when people continue using their products, regardless of whether that's because they're being sucked into a rabbit hole of engaging things they don't want, or because they're being sucked into a rabbit hole of things they actually do want.

I think the reason Google (and Amazon, and Netflix, and every other major tech company) uses clicks/view time as recommendation engine inputs is because... well, what else can they use? What quantifiable metric could possibly be used for large-scale, automated recommendations that more accurately indicates what someone actually wants to see more of? (This isn't rhetorical: if you have any ideas, I'd love to hear them.)

I don't think clicks/view time are the best metrics at all, and I don't think they're extremely accurate. But I also think they're the most accurate measure we've got, with the only other options being either (a) remove recommendations entirely, or (b) have humans manually monitor everything you do on the website, occasionally ask you why you clicked or watched things, and then make personal recommendations to you based on your answers. The latter of which is slow, more expensive, less scalable, more invasive to the user, and more tedious for the employees that would have to sit there monitoring you.
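As a toy illustration of that gap between engagement and what people actually want (the signals, weights, and numbers are entirely made up; this is not how any real platform ranks anything):

    # Toy ranking sketch: an engagement-only scorer vs. one that also discounts
    # by a hypothetical "regret" signal (e.g. post-watch surveys).

    def engagement_score(video) -> float:
        # Optimizes for time-on-site: long, compulsive watches score highest.
        return 0.7 * video["expected_watch_minutes"] + 0.3 * video["expected_click_prob"]

    def satisfaction_aware_score(video) -> float:
        # Same engagement signal, discounted by how often viewers later say they
        # regretted watching -- a signal a platform could collect but rarely weights.
        return engagement_score(video) * (1.0 - video["expected_regret_prob"])

    videos = [
        {"title": "calm explainer", "expected_watch_minutes": 8,
         "expected_click_prob": 0.2, "expected_regret_prob": 0.05},
        {"title": "outrage rabbit hole", "expected_watch_minutes": 40,
         "expected_click_prob": 0.6, "expected_regret_prob": 0.9},
    ]
    print(max(videos, key=engagement_score)["title"])           # -> outrage rabbit hole
    print(max(videos, key=satisfaction_aware_score)["title"])   # -> calm explainer

Whether anything like a reliable regret signal even exists at scale is exactly the open question.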

Maybe one day we'll have an AI method of using your comments and search terms as a better indication of desire (I dare say some of the recent LLMs are close to being ready for that task), but we're not there yet.


FWIW, my position is today (and has been for many years now) that recommendation systems are inherently problematic, so you aren't exactly trapping me in some kind of contradiction or paradox here by asking me "but how else could it be done?!"... I'd argue that some things simply shouldn't be done.

In the case of social networks, I think they served a positive function to both society and the people who used them back when they didn't have the recommendation algorithms, and your feeds were curated by you choosing to follow people explicitly; this, however, was not profitable, so we are now here.

If you are to do it, then yes: I think you probably need to do what TikTok is doing, and have humans heavily involved in the recommendation system in a way that attempts to put a hand on the wheel, rather than the Google approach of solving problems with algorithms on top of algorithms and no humans anywhere.


"They don't recommend people anti-democratic indoctrination videos because they actually want you to become anti-democratic; they recommend them because they don't care, and just want to make money" is not a solid argument against them being evil.


It depends - do you feel that Procter & Gamble are guilty of using psychological understanding of dopamine hits to design their Ranch-flavored Doritos?

In a similar vein, would there be a difference between an algorithm that shows you what you want and one that determines which videos create the most dopamine hits and juices you in a particular direction?

There may be fancier ways to describe this, but would an algorithm that merely enhances diffusion be the same as an algorithm designed to drive molecules to the most extreme directions?


I mean, if I see someone crying while eating a giant bag of Doritos because they really wish they weren't eating it but their cravings are making it hard for them to stop--and, honestly, I am pretty sure I've done exactly that in the past--and you were to tell me "Procter & Gamble is merely giving them what they want", I'd call you, at best, a jerk.

> In a similar vein, would there be a difference between an algorithm that shows you what you want and one that determines which videos create the most dopamine hits and juices you in a particular direction?

Yes, and the idea that you don't see this is literally shocking to me... it absolutely is the case that people can want things that they have a hard time avoiding, and the existence of a "dopamine hit" is not some kind of "proof of want". The comment I was replying to said that the algorithm was "designed to show people what they want"... that is categorically not true: it was designed to show people what makes Google the most money, full stop.

> There may be fancier ways to describe this, but would an algorithm that merely enhances diffusion be the same as an algorithm designed to drive molecules to the most extreme directions?

I'm sorry, but are you seriously now advocating for treating users--sentient beings with wants and motivations--as nothing more than molecules you can happily and gleefully move with your algorithms? Do you really not see anything wrong with this? :(


My post was directed at the Robot Toaster parent of your comment, and perhaps it's user error that it posted as a reply to yours instead. My questions were meant to ask if there was a distinction between the motives of the two types of algorithms, at least without (too much) predilection, and see where the answers may fall.

I happen to agree with you, actually. It's obvious to me there is a distinction between a preference-neutral recommendation algorithm vs. one juiced for corporate/profit interests. That being said, I don't appreciate personal attacks in replies and I don't think there will be any more useful comments on this matter at least on my part.


I agreed with you up until this point:

> And that's Google: the people at Google that work on this software are evil.

This is an asinine assertion. The people at Google that work on this are the same as you and I, motivated by precisely the same incentives. They're no different from the people at Microsoft or Amazon or Apple or Netflix, and many of them probably worked at those companies previously. The upper management working at any modern tech company will readily abuse their position as a service provider if it makes them more money. Apple does it with the App Store. Microsoft does it with Windows APIs. Amazon does it with Prime and their shipping networks.

The lack of regulation and blind money-chasing is what allows for evil decisions. There's no minion of Sauron working at Google, plotting ways to make your user experience subtly worse. Accusing them of that is ridiculous, and I hope you don't look at your coworkers the same way.


> This is an asinine assertion. The people at Google that work on this are the same as you and I, motivated by precisely the same incentives.

I have on multiple occasions been given the opportunity to make a lot of money by doing the things that people at companies like Google and Apple do. I have also on numerous occasions been given the opportunity to be hired by these companies to work against the interests I've been engaged in in the community. I have watched on in horror while friends of mine--people I thought I knew--chose to work on things that I consider evil and then suddenly start defending what they do with the very talking points we'd been arguing against, together, for years, proving they were never much more than mercenaries willing to work for whoever had enough money--or at least a fun problem to solve--rather than picking and choosing their opportunities to do good in the world.

> The upper management working at any modern tech company will readily abuse their position as a service provider if it makes them more money. Apple does it with the App Store. Microsoft does it with Windows APIs. Amazon does it with Prime and their shipping networks.

This is irrelevant. Are you seriously trying to say "the people at Google aren't evil because then we'd have to claim the people at Apple and Amazon are also evil"? If any assertion here is "asinine", I'd claim it is that one :/. Or maybe you somehow think you are trapping me in a hypocrisy? :( If so, maybe you should check who I am before attempting such ;P. I mean, I'm literally currently suing Apple over that App Store monopoly, and the vast majority of my comments on every single media platform for over a decade have called out people--including the people I am talking about in the previous paragraph here--who work at Apple!

> The lack of regulation and blind money-chasing is what allows for evil decisions. There's no minion of Sauron working at Google, plotting ways to make your user experience subtly worse.

There is no such thing as regulation that can regulate away all potential for evil. As for "blind money-chasing": yes, that's evil. And I never claimed that the people at Google were "plotting ways to make [our] user experience subtly worse"... I claimed they are blindly optimizing for money despite knowing the ill effects, and that makes them evil. You often have an opportunity to do good in a day as a human, and often that decision to do good comes at the cost of some money you could have made by being evil... I take it you don't think, unless there is a regulation preventing it, anyone should ever be good, and we should never hold other people to the standard of being good? :(

> Accusing them of that is ridiculous, and I hope you don't look at your coworkers the same way.

The company I work at has an internal culture of calling people out when they get greedy, and we try to screen the people we hire not merely on skills but on viewpoint and ethics. This is sometimes a tense discussion, and we've had somewhat brutal arguments and even people (once even me! ;P) who have quit to make a point about various issues; but, in the end, I have been fairly confident that we have acted in an ethical manner and have frankly left a LOT of money on the table in so doing. So, no: I don't look at my coworkers the same way... if I did, that would probably be a sign that I should quit the company I'm working at as maybe we are doing something evil :(.

So, no: I do not at all believe this is "an asinine assertion".

FWIW, I would be willing to accept a very different argument from the one you are making here, wherein some people simply don't have the privilege of working on things that are good, as they are in a situation where they are living day to day under the boot of a system that requires them to do something, and maybe it is better that someone who feels a bit bad about it but is forced to do it anyway does the job rather than someone else, as the work will carry some imprint of that belief... but most of the people I know who do these jobs just don't care: they find doing this kind of work fun and they try not to pay attention to the negative effects while making a ton of money doing but one thing out of the myriad they could be doing with their often over-qualified skillsets... they simply find the thought process of taking a pay cut to work on better things beneath them.


Well... yeah. That's the whole point of the algorithm - figure out what you're into, and show more of it to you.


I was going to say. Recommendation algorithms across the internet work on the simple principle that "if they click these things or watch these things a lot, then they must like them, so we should recommend more of that to them because they'll like our recommendations". So if people are thinking the election was a fraud, they've probably clicked and watched those conspiracy nut videos, and the algorithm will spit more back at them.

It's... how recommendations work.
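As a toy model of that principle (the topics, history, and catalog are invented, and real systems are vastly more complicated, but the feedback loop is the same):

    from collections import Counter

    # Toy "more of the same" recommender: whatever topic dominates your watch
    # history is what gets recommended next. Purely illustrative.
    watch_history = ["woodworking", "election-fraud", "election-fraud",
                     "election-fraud", "cooking"]

    def recommend(history, catalog):
        top_topic, _ = Counter(history).most_common(1)[0]
        return [v["title"] for v in catalog if v["topic"] == top_topic]

    catalog = [
        {"title": "Dovetail joints 101", "topic": "woodworking"},
        {"title": "PROOF the election was stolen!!", "topic": "election-fraud"},
        {"title": "Ballot audits, explained calmly", "topic": "election-fraud"},
    ]
    print(recommend(watch_history, catalog))
    # Both election-fraud videos come back, with no notion of which one informs
    # and which one enrages.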


They are doing exactly what they were designed to do, and what they should do.

That is, showing the user the type of content they are interested in.

And there is absolutely nothing wrong with it.


The "problem" (if it's real, which I think it is) is that youtube's recommendations don't show you more "skepticism" content, but will continually try to push you further down rabbit holes. It works great if you are learning and suddenly interested in woodworking, but if you don't trust government, it's just going to take you to all the crazy people saying (((they))) are trying to keep you down and that's why your life is bad and you should hate and wish violence upon the people trying to "destroy your way of life".

After owning the same very active youtube account since 2007, youtube finally tried to radicalize me a few months back. I watched a single video from a more radical leftist (I guess class solidarity-ist) channel, and the very next loading of the homepage was at least half extreme leftist and extreme far right channels, trying to tell me that "democrats are trying to kill millions of babies a year" and "republicans are trying to instate a theocracy tomorrow" and "why you shouldn't believe anything (((they))) tell you". Google has known my political affiliation and opinions for over a decade, and still tried to show me OAN stuff, because hey I'm radicalizable right now and that's pretty profitable.

I've watched slightly leftist youtube videos for years now, and youtube never tried to radicalize me, but the second I dipped my toe into the dark and disgusting pool of extremism, that's all youtube wanted to show me. These videos were not meant to inform, but to enrage. The fact of the matter is that extremist content is just way better at activating much more extreme emotional responses, which, fun story, leads to more engagement. So the algorithm LOVES pushing people towards extremism, because it's super effective at doing what Daddy Google tells it to do, which is to get people more "engaged".


And it's worth noting (so I have been informed by multiple reputable sources) that the triple-parentheses construction is a common antisemitic dogwhistle—that is, the "they" there is implicitly "the Jews".


It's mostly used ironically now by Jews and allies but yes.

https://en.wikipedia.org/wiki/Triple_parentheses


That....is not what I've heard from Jewish people I know (quite recently), nor what that Wikipedia article says. It says that some people have used it that way "in solidarity", but not that this has become its primary usage—and, indeed, it says that Jewish groups have officially labeled it as hate speech.


I may have been mistaken about that, then. I thought it had been more or less mocked and false-flagged out of use by now but I guess not.


Except there is? Mass consumption of this particular flavor of garbage has real consequences.


Like a special flavor of propaganda with political and corporate support that created a movement to burn down cities, loot stores, murder innocent people? That kind of thing?


Seems like a question of limiting consumption rather than the type of information though, right?

Overconsumption has health consequences.


The recommendation algorithm is also designed for overconsumption.


Yeah

YT is a drug pusher. In the worse of ways

I notice that the moment I watch a video slightly "off center" it goes full slippery slope on my recommendations. No kidding

"Ah but this is what it should do". No, it shouldn't

To not give a political example, this is like chasing and knocking on people's doors at dinner time with a full McD meal because you had a frozen yogurt in the afternoon


That's a poor analogy. You're going to the website, it's not coming to you, and you're given recommendations, not having them load and play the videos for you automatically. This is more like you buy a frozen yogurt at McDonald's in the afternoon, and when you return to McDonald's, the cashier asks if you'd like a McFlurry this time, and maybe to try a Big Mac combo?


> not having them load and play the videos for you automatically

Well, but YT does exactly that with Autoplay (and the recommendations are based on the last video seen)


That's what you're told, but you don't know that. The algorithm is opaque.

Even if the algo is fair to you, it could be devastating to the more feeble-minded... And you would never know, until some clownish grifter caricature became leader of the free world, wink. Even then, how would you trace that critical little 2% vote shift back to a particular algorithm?


What happens if a user of Youtube is depressed? Is Youtube more likely to direct and promote videos making the user more depressed? Say that you write a recommendation algorithm to maximize viewing time for advertising purposes: does the recommendation algorithm make ethical decisions about what is suitable content?

What if the user is viewing extreme content? What will the recommendation algorithm recommend?


Even humans have trouble deciding (a) what constitutes "suitable content" and (b) whether we should be deciding that for other people. So of course our current algorithms don't take that into account, since we can't define it in the first place.

Maybe one day AI will get to the point where people will accept its decisions on such matters (knowing humanity, I deem this unlikely), but for now, there's no way to do that beyond just removing that content in the first place. And then you get into a huge debacle where people complain about censorship, and... yeah, it's not really a solvable problem.


Sorry, the only videos youtube should recommend are those impulsively directed towards the watcher's present mental state, which is the sovereign basis of free thought. /s

It turns out, freedom = living in a casino for the mind.


> “Many believe that automated recommendation algorithms have little influence on online ‘echo chambers’ in which users only see content that reaffirms their preexisting views,” observes Bisbee, now an assistant professor at Vanderbilt University.

Really? I was under the impression pretty much everyone was aware that these recommendation algorithms can help reinforce existing views. Discussion around “filter bubbles”[1] was a big thing a few years ago.

[1] https://en.wikipedia.org/wiki/Filter_bubble


Whatever you think about election fraud in particular, the ultimate effect of these recommendation engines is to close everyone off into “filter bubbles” where they only see things that conform to their worldview, and are encouraged toward more and more extreme and polarized views. I can’t help but think this phenomenon explains many of the problems in the world today.


This doesn't seem very new though. Traditional news was the same way; if I'm a Democrat, I'm not going to subscribe to the Epoch Times, I'm going to subscribe to the New York Times. Similarly, if I'm interested in lgbt issues I'd have subscribed to Pink News and not the National Review. People expect the same from algorithms, so maybe now, if you're too paranoid about covid, YouTube isn't going to recommend you a Joe Rogan video; it'll recommend you the same content you've been watching.


2016 or 2020? The algorithm should know the difference


Is it accurate to say that the YouTube algorithm (and possibly the algorithms of other sites like TikTok) are creating personalized holes for people to fall into? I feel that way. I recognize that I sometimes get algorithmic recommendations for garbage content that I should stay away from even if the title, thumbnail, or preview is attractive to me.


Few things make me more fearful of fraud than an insistence that discussion of fraud is beyond the overton window.


Discussion of fraud is not beyond the Overton window. Continued discussion of dangerous ideas which have already been proven false should be, simply because of those two reasons. False balance is bad; false balance when one of the ideas on the scale is dangerous to human beings is worth removing.


Discussion of election fraud implies the dishonesty of government officials, which is strictly forbidden by the Anti-Bullying Act of 2031.


> discussion of fraud is beyond the overton window.

That is not what is happening here. There was discussion of fraud, it was investigated, and proven to be baseless.

Clinging to that insistence of fraud is what resulted in an attempted coup. Continued insistence at this point can only be attributed to ignorance or malice. There is no longer any good-faith reason to doubt the 2020 election results.


It's crazy how much of a rabbit hole recommendations are. I wish I could disable the "explore" page on every social media I use; it would be heaven.


We all know that 2016 was the only stolen election.


Not stolen, just a terrible consequence of a poor voting system (i.e. electoral college) and a larger-than-comfortable minority of voters who were also shitty humans.


Sorry, what does this have to do with the topic of the post? You're needlessly flaming political battles.


No, I don't believe 'we all' know such a thing, so could you please cite some data?

Offhand, my recollection of just about all the major press coverage (pick an article on it from any of them) is along these lines:

In 2016, Trump won the majority of Electoral College votes

In 2016, Hillary won the national popular vote

The interstate voting compact does not yet have enough electors to follow the popular vote.

My recollection of the timeline is: In early 2016 the DNC might have rigged their internal voting in support of Hillary, rather than Bernie. The press didn't do a good enough job covering that with transparency, so I'm not sure. I know this almost alienated me (as a generally left-leaning voter) from voting D. I know at least one friend who wanted change of any sort enough to vote for Trump after that. It was a close race, so alienating their base like that _may_ have rigged things in favor of the candidate that won the electoral vote.


Critics and pundits on the right like to portray the anger, resentment, and dissatisfaction on the left from the outcome of the vote as claiming trump "stole" the election.

There indeed were occasional headlines screaming Trump "stole" the election (IIRC), and there were screams of "not my president", and even a few weirdos here in Maine who were convinced Mitch McConnell and Susan Collins had cheated in their respective elections, but by the time of the Mueller report, the allegations were much gentler. None of the investigations or impeachments were about "stealing the election", but rather violations, sometimes flagrant and absurd, of campaign finance laws and other points of how you are allowed to campaign for the presidency.

It turned out that a lot of the way the US has worked for 250 years is literally the honor code, and it's not illegal to be elected by being a huge ass or an immoral person. Many people in Trump's campaign staff did end up in jail for campaign-related crimes. There is also significant circumstantial evidence that Trump and some Republicans are more friendly with Putin than the average American hopes for, but it turns out that is also not a crime.

Meanwhile, party Republicans, sometimes including Trump, have claimed significant, direct manipulation of voting equipment, tallies, and voter rolls. These claims have gone to court, and nearly every one was thrown out as baseless, and occasionally downright absurdist. The claims include millions of immigrants being bussed into places to vote, thousands (or more) of "dead" people voting, accusations that vote-counting volunteers were "fixing" the count against Trump, and countless other things.

Compare: "The Democrats managed to invent millions of fake votes", with no evidence, to "Trump worked directly with the Russian government or individuals in the Russian government to manipulate the voter base, hack the Democratic party's mail server, lie heavily to voters, and possibly hack into actual voting machines in a few states". For the latter, we have evidence that Russia did in fact hack the DNC's emails, after Trump openly "asked" them to in a campaign speech; Russia probably hacked some servers somewhat related to voting operations in some states (but not voting machines directly, probably); there were Russian disinfo campaigns against American voters (though you can't blame Trump for that, I don't think); Trump campaign advertisements said shitty things to discourage black people from voting (which isn't a crime; Hillary made stupid comments too); and there were enough bad actions and connections among his friends and compatriots to get at least 10 people in prison.


I recall a lot of pundits said "Trump stole the election", and then we entered a multi-year investigation into Russian collusion which found nothing. In hindsight, it was definitely a mistake to press so far into this.


Wrong. 2000 was stolen, as was 2004.


he said, without evidence


Recommendation engines need to have "escape hatches" to make sure society doesn't unravel at the seams.


"Escape Hatches" sounds like a deceptively benign term for ideologically motivated thought control.


No it doesn't. It sounds like a way to stop otherwise normal-minded folks from slipping down a YouTube-encouraged rabbithole of things that are demonstrably untrue.


I am not at all trying to delve into the topic of the legitimacy of the 2020 election. So, my next question is not an assertion that fraud took place.

When you say "demonstrably untrue", who determines such things? It is usually not the case that one side is entirely and wholly untrue. Losing a court case or having one be dismissed on procedural grounds does not mean the defense or prosecution had no merit whatsoever with their claims.

What is the process for letting people make up their own minds on conflicting evidence? I sense the danger is letting, or even demanding, that private companies make these judgments and filter dissent away from the masses under the guise of safety. I also sense the people calling for such things understand that their own ideological peers are the ones who would be in control of such algorithms.


Here's the thing about truth: the "who" doesn't matter. There shouldn't even be a question of "who determines if something is untrue?", nor should we be advocating for every individual to decide the truth for themselves. There is one objective reality that exists outside of our own minds, desires, and decisions, whether we like that reality or not.

What determines truth is empirical evidence. If something has no empirical evidence, it is untrue. If something has empirical evidence, it is true. If there is conflicting evidence, then some of that evidence is invalid and it must be re-analyzed using math, existing knowledge from provable things, and formal logic. After doing so, you will either determine which of those things is true, or arrive at the conclusion that there isn't enough evidence either way and stop after saying "I don't know" rather than deciding which version you prefer.

It's not about appeals to authority. Expertise is about people who have more practice at finding, testing, and analyzing the evidence in their field than random Joe Schmo on the street; and it's about nothing more than that.

We should not let people "make up their own minds" on conflicting evidence -- which is another way of saying "let people make up their own reality and expect to live in it" -- we should encourage everyone to stop at "I don't know" when they aren't sure where the evidence actually leads, and defer to people who can follow the evidence, if such a person exists. And if no such person exists, then we as a species should all stop at "I don't know (yet)".


The issue here is that videos aren't actually "recommended" in the sense that your considered preferences are reviewed and a genuine recommendation made. Youtube isn't a friend, an academic supervisor, or a doctor. It isn't even aware of your interests, let alone "has them at heart".

Rather, it's a casino. And that's not value-neutral, freedom-neutral. It's pretty antithetical to freedom, as you can see from all the people destroying their lives at the gambling machines.


> When you say "demonstrably untrue", who determines such things?

Do you believe in objective reality?


What would such an "escape hatch" look like?

I sometimes get YouTube recommendations where I think, "why would I ever watch that?" and I either ignore them or mark them as "not interested". Most people probably do the same.


Stores in my state are legally forbidden from selling alcohol to very very drunk people. Casinos in my state are legally required to help problem gamblers get better and to work with them to limit their problem gambling, even to the business's detriment.

Why should ANYONE be allowed to profit from an addiction/slippery slope/radicalization mechanism?


That would be self-defeating, as the moment the word gets out that algorithms are now intentionally manipulating away from certain content, the Streisand effect will instantly kick in. Which it already is.

Plus, while I do not buy into election conspiracies, imagine if there was an actual stolen election. Getting all of the tech companies to suppress the actual evidence would be fantastic if I were a dictator in the making. For that reason, I would argue, allowing election conspiracy theories to propagate is a necessary evil of a functioning democracy.


> I would argue, allowing election conspiracy theories to propagate is a necessary evil of a functioning democracy.

I think real-world experience has proven that this isn't true. I get the idea behind what you're saying, but the US (and UK/Europe) has had pretty much a constant onslaught of election conspiracies at all levels of our democracy.

The outcome has not been to make our democracies better functioning.


What's the alternative? Every corrupt leader since Nero has called their opposition's true statements "conspiracy theories" or similar and suppressed their statements as such; or deliberately invented conspiracy theories about their opponents and declared the truth to be the conspiracy. You can look up speeches in which Fascist and Nazi leaders denounced the "conspiracy theories" against them, which were (at the time) actually "unfounded" despite turning out almost completely true. We didn't know about the true horrors of the Nazi death camps until after the war ended, despite the "conspiracy theories" running rampant and often violently suppressed.

The alternative is to, quite literally, allow someone to be an arbiter of the truth, and then suppress information they determine is not factual. To which I say, give me this power, and in 30 years, I will be an emperor.


The alternative is listening to the people running the elections at the grassroots level. They should be a mix of people representing various interests. There should also be international observers. If there is widespread fraud, the word will get out before the results. And you will hear about the fraud from everywhere, not just from a handful of weirdos or from top-level people of one of the parties.

Every sane electoral system starts from the assumption that the authorities responsible for the elections cannot be trusted. Instead of trusting the authorities, you trust a large number of people watching each other. Then everyone knows that if there is widespread fraud, it will be obvious, and hence the electoral system can be trusted.


The alternative is making it almost impossible to have corrupt leaders in the first place, through taking care of society and the people in it.

Strong local media, strong local politics, strong education systems, strong protections for whistleblowers, a healthy environment, clean water, good cheap food, affordable housing, quality entertainment - there are countries in the world where citizens have all of this. And they don't have "single arbiters of truth" managing it.


You could do it transparently, and without targeting certain content.

Assuming recommendation engines are showing people what they normally watch, you could simply have every Xth recommendation be whatever your model says is the polar opposite of that. Label it specifically something like "a change of pace", "counter points", or whatever, to make it transparent.
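A minimal sketch of that idea (the interval, the label, and the notion of a model that can produce an "opposite" item are all assumptions):

    def diversify(recommendations, opposite_of, every_n=5):
        """Replace every Nth slot with a labelled counter-recommendation.

        `recommendations` is the normal ranked list; `opposite_of(item)` is
        assumed to return whatever the model considers the polar opposite of
        that item. Both are hypothetical.
        """
        out = []
        for i, item in enumerate(recommendations, start=1):
            if i % every_n == 0:
                out.append({"label": "a change of pace", "item": opposite_of(item)})
            else:
                out.append({"label": None, "item": item})
        return out

The injection is rule-based, labelled, and visible to the user, rather than yet another opaque optimization target.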

I do wonder if society has got so used to algorithmic bubbles, that just exposing people to the opposite viewpoint would cause outrage.


> that just exposing people to the opposite viewpoint would cause outrage

You mean in the way that video recommendations about stolen elections cause outrage?


I think this is one of those cases where optimization on the part of self-interested actors has ruined things for the rest of us.

It used to be that if one said, "allowing people to talk about obviously ridiculous things like conspiracy theories is fine, and important actually, because it's part of free speech," that would be pretty clearly true.

But at that time, the number of people actually talking about such ridiculous things was pretty small.

Since then, the number of people talking about election-related conspiracies has gone up precipitously, largely because of Trump and his supporters. Who, it apparently bears repeating, attempted an actual violent coup to force the election results to go the way they want them to, partly due to the constant repetition of the "election fraud" Big Lie.

So no, allowing these theories to propagate is demonstrably antithetical to the continued functioning of democracy.


I don't like how this article immediately equates election fraud to misinformation, without piecing together whether it actually was or wasn't. Fraud is a feature of every election, late night shows have run a segment on it every four years I've been alive, and with the sudden change in election rules and mail in ballots it seems reasonable that there could be legitimate fraud.

What would have been more interesting is if they looked into what the content said, to see if those users were being fed nonsense or actual news about election fraud. An election official deleting voter logs after receiving a subpoena to hand them over is real news, but this article lumps it in as misinformation, and what they're being shown would arguably be more interesting than just knowing they're reading something about election fraud.


I'm not skeptical because of what Youtube shows me, or doesn't show me. Youtube is becoming nigh unwatchable with ads every two minutes.

Am skeptical because:

I always question agendas, and lately, I'm being told not to question this election outcome by Uniparty Dems & Repubs along with the MSM

The MSM has done an excellent job of Uniparty political advocacy and discounting issues raised.

For the last 20 years or so, the hacker community, of which I am part, has said that computer voting machines were inherently insecure. Except in 2020, for some reason.

The weird disappearing story about the USPS-contracted trucker Jesse Morgan's ballot shipment from state to state

What I saw on broadcast news with huge leads being flipped by late night ballot drops

The weirdness of late-night ballot stuffing into machines recorded in GA after the monitors were told to go home.

The complete strangeness of removing one political party from observing election tallies

The already published Get out the Vote campaigns funded by Zuckerbucks that went overwhelmingly to specific party dominated areas

The interviews, and published canvassing in various states that did not match election results with frankly strange reports of folks living at houses not theirs

Deliberate attempts to change voting laws to maximize mail-in voting

Failed efforts to clean up voting rolls.

Weakening of vote integrity processes around signature matching

Delayed vote tallying intended to provide time for vote manipulation.

Secretaries of State deliberately changing the rules last minute to favor the Uniparty, despite the clear language of the US Constitution around Legislature controlling it

The reports of ghost voters in multiple locations in swing states that appear to have never existed, living out of commercial addresses and property lacking housing

The counts of ballots exceeding registered voters

Lack of consequences for people voting in areas in which they are not citizens

Data that confirms that certain states were awarded more population, and congressional representation, in the Census, than to which they were entitled

The statement that the then-candidate Biden made in a Freudian slip, "We're in a situation where we have put together, and you guys did it for President Obama's administration before this, we have put together, I think, the most extensive and inclusive voter fraud organization in the history of American politics"

The idea that Biden suddenly became infinitely more popular than Silver Tongued Charismatic leaders like Bill Clinton, and Barack Obama, who I thought were the most popular recent presidents, while hiding out in his basement and not giving much press. Where I live, there was a sea of support for Obama in road signs, on T shirts, everywhere. I saw nothing like that for Biden

The overall tepid support for Biden, and lack of signs of support for him, despite living in a Deep Deep Blue area, which I could easily compare with

Since I work in computer security focused on strong identity & authentication and proper security configurations, it is difficult to convince me that insecure processes, badly maintained voter rolls, and unauditable black box systems, that were not configured to secure standards, are going to actually result in a good outcome (just to name a few of the problems).

I'm not going to get into a heated defense of above, posting a thousand links or all that like I regularly do. I just have a well formed opinion at this point that the fix was definitely in. As it probably has been in for a great number of recent elections.


> The idea that Biden suddenly became infinitely more popular than Silver Tongued Charismatic leaders like Bill Clinton, and Barack Obama, who I thought were the most popular recent presidents, while hiding out in his basement and not giving much press. Where I live, there was a sea of support for Obama in road signs, on T shirts, everywhere. I saw nothing like that for Biden

That's because people were mostly voting against Trump rather than for Biden. Biden was never super popular; Trump is one of the most hated men on the planet.

Otoh Biden did campaign like crazy, so maybe you just weren't paying attention.



