Here is a link to a story published in journalism.co.uk just a couple of days ago, on September 12, entitled "Just how effective is the new YouTube's fact-checking feature?" and written by Daniel Green.

The issues being raised in this article impact everyone and I believe the developments being described constitute an extremely serious threat to the principles of human freedom, democratic self-governance, and human rights including freedom of speech and freedom of the press as articulated in the First Amendment in the Bill of Rights.

To that end, I have just published a new video entitled "Electronic heresy tribunals: Algorithms deployed to prevent you from hearing 'conspiracy'," discussing some of my concerns regarding the topics in this article and the infringements on the human rights of freedom of speech and freedom of the press which are already being implemented, right now, in the united states and elsewhere by corporations such as YouTube (though other corporations are certainly involved in this same type of illegal activity).

As the title suggests, the subject of the article is a new feature being tested by YouTube in India which will "combat the spread of misinformation" by throwing up "information panels" containing "content to debunk misleading claims" whenever users enter a search term related to "topics typically subject to misinformation."

Examples provided in the article of subjects deemed to be "responsible for the spread of misinformation" include "geoengineering" and "chemtrails." These topics, the article informs us, are seen as being conducive to "spreading conspiracy theories and views opposing the scientific consensus."

Apparently, the "scientific consensus" on those subjects (geoengineering and chemtrails) is that the men and women who are the citizens of the nation need not be informed about what jet aircraft might be spraying into the atmosphere over their heads -- such things need not be discussed or subjected to a vote. In fact, it would be best if the public were never informed that such things are going on, and if anyone who asks questions about spraying unknown substances out of airplanes were labeled a "conspiracy theorist," so that no pesky public debate on this topic need ever arise.

The article then goes on to inform us that this system, being tested out in India, is one which "YouTube intends to extend to other countries throughout the rest of the year."

However, "information panels" which appear on your search results when you are looking into a topic that someone doesn't want the "hoi polloi" to know anything about (such as chemtrails and geoengineering) won't protect you from seeing a conspiracy video which itself does not have a label -- and so the article informs us that YouTube has a different solution to "tackle this" perceived problem: altering their algorithms so that far fewer people will ever encounter videos containing "misinformation" or "conspiracy theory."

Jumping rather abruptly to the united states, the article's author informs us that:

Since changes to the recommendation system came into effect in the United States in January, views of so-called 'borderline content,' videos that misinform but do not violate the website's guidelines, have dropped by more than 50 per cent.

That should be reassuring to everyone who was worried about men and women in the general public seeing videos discussing "views opposing the scientific consensus" without helpful warning labels provided by the major news platforms.

What would we do if the people of this nation were able to see videos about the collapse of World Trade Center Building 7, which fell into its own footprint at freefall speed (or at a speed which is indistinguishable from freefall speed by any scientific measure)? We simply cannot have people finding videos regarding that awful, murderous event without helpful warning labels containing "debunking" information from sources such as the New York Times or NPR, which have spent the past 18 years pretending that the collapse of high-rise steel towers into their own footprints simply due to fire somehow does not oppose scientific consensus (or even the laws of physics).

Fortunately for all those who might have been worried that men and women might "stumble across videos in their feed" (as the article puts it, without a hint of irony) which don't come with helpful debunking material provided by the corporate-controlled media, the engineers at YouTube are on the case, and they have created algorithms which drastically reduce the chance that anyone might "stumble across" something so distasteful in their "feed."

The word feed, by the way, when used as a noun typically signifies grain and other material given to cattle or hogs in large troughs (as well as to horses, who sometimes get their "feed" in a "feed bag," one per horse -- in a kind of individualized fashion, more akin to the "feed" that is provided to individuals by the algorithms at YouTube).

So, rest easy: YouTube has come up with some excellent screens to ensure that you won't be getting any nasty bits in your individualized feed.

In fact, the article informs us, YouTube's algorithms are so effective that "views of so-called 'borderline content,' videos that misinform but do not violate the website's guidelines, are down by more than 50 per cent."

That's quite a drop! I know that if YouTube's revenues were to be "down by more than 50 per cent" in some quarter, the investment world would not like it very much.

Of course, the makers of videos which have been judged (in secret, apparently) to "misinform" probably do not even realize that they now have sophisticated algorithms working overtime to ensure that nobody sees the content they are creating and putting out on YouTube. They probably have absolutely no idea that they have violated some undefined standard, since (as the article informs us) these algorithms are set to find content that does not violate any actual published guidelines.

As I note in the above video, I myself have mentioned "chemtrails" and "geoengineering" many times before on this blog (you can do a search for these terms and find many photographs I took myself), which apparently makes me someone who "spreads views opposed to the scientific consensus" -- since merely raising the subject of chemtrails is apparently taken to indicate as much.

I have also mentioned my deep concern about Genetically Modified Organisms (GMOs) in the past, including during the period prior to a ballot initiative in my home state of California which would have required food manufacturers to inform buyers when genetically-modified ingredients are present in their food. That ballot initiative was defeated: there are a number of very powerful interests which are adamantly opposed to the men and women of this country knowing the extent to which their food contains GMOs (which might lead to political action).

I would not be at all surprised if videos discussing negative aspects of GMOs might be categorized as "misinformation" and actively suppressed by the algorithms in question -- although we have no actual way of knowing, since these algorithms do not judge videos based on any public guidelines (as the article explicitly states), but rather based on some sort of secret criteria known only to the shadowy adjudicators who have taken it upon themselves to steer you away from "borderline content."

I would in fact be very surprised if mention of inconvenient facts such as the collapse of Building 7 at freefall speed were not included in content considered to be not just "borderline" but actual "conspiracy theory," which must never be mentioned in any arena that would alert the majority of men and women in this country to an issue that they should look into as having a great deal of relevance to their own lives.

Judging by the fact that mainstream news sources such as the New York Times or the Washington Post have spent the past eighteen years studiously ignoring (or even ridiculing) the overwhelming evidence which points to the conclusion that the conventional or "official" narrative of the events of that awful day is completely bankrupt -- a narrative which still frames their reporting on the subject of 9/11, as you can see from any of the articles they had on their sites this past week discussing the anniversary and the "war on terror" -- you can be fairly certain that YouTube has its algorithms set to make it more difficult to find videos by those who offer evidence that challenges or refutes the "official" story. After all, as noted above, YouTube gets its "content to debunk misleading claims" from the major media corporations when it plasters warning labels on searches about chemtrails in its India beta-test.

I'm sure the same applies to video content which challenges or refutes the "official" story surrounding the events of the assassination of Martin Luther King, Jr., the assassination of John F. Kennedy, the assassination of Robert F. Kennedy, or the massacre at Jonestown in 1978. If someone doesn't want you to know about the information that refutes the accepted doctrine about these events, then YouTube is ready with algorithms to prevent the people from encountering any heresy.

Now, I am perfectly aware of the arguments in favor of creating algorithms and warning panels to combat deliberate misinformation (note that I have not achieved perfect awareness in all realms of human endeavor, but I am certainly very much aware of the arguments for the institution of some kind of counterbalance to false and misleading YouTube videos, which can and do influence the opinion of certain demographics in very significant and sometimes very harmful ways).

In this article, published in the New York Times about a month ago, the power of YouTube to spread lies which lead young men (primarily young men) toward acceptance of fascistic arguments -- in this case in Brazil, contributing to the election of Jair Bolsonaro -- is brought home to the reader very strongly. (Please note that my mention of the New York Times, which I have just criticized for continuing to promote the bankrupt "official" narrative of 9/11 -- along with the bankrupt narratives of the other important events mentioned in the preceding paragraphs -- is not ironic here: this article and others like it provide the basis for the argument in favor of the kind of censorship described in the above-mentioned journalism.co.uk article, which I oppose, and I oppose the New York Times position if indeed they are in favor of YouTube censorship.) Research credit for this story on Brazil and YouTube goes to the indefatigable Dave Emory, whose decades of invaluable research in opposition to the spread of fascism -- especially in the united states -- can be found at his website, spitfirelist.com.

After reading the above New York Times article about the use of misleading YouTube videos in Brazil, I can understand the impulse of well-meaning men and women to say, "We need to counter these kinds of outright fake headlines and YouTube videos that are leading impressionable young men (primarily young men) into the dangerous and despicable cesspool of fascist dogma."

However, I would argue that the implementation by YouTube (and other tech firms) of secret algorithms designed to dramatically reduce traffic to certain videos is wrong for the following reasons (much as I detest the use of misleading videos to drive disaffected young men towards fascistic beliefs, as described in the New York Times article):

  • First and foremost, secret algorithms or secret panels determining what is and what is not "misinformation" or "conspiracy theory," and then severely restricting the ability of such content to be found, stand in direct violation of the First Amendment and the Bill of Rights. Freedom of speech and freedom of the press are not simply "American" rights: they are universal rights. Secret algorithms that drastically impede the ability to find a video are a form of direct infringement of the First Amendment articulation of freedom of the press. This is illegal, according to the highest law of the land in the united states (i.e., the Constitution itself). So, no matter how much of a "good idea" it seems (on the surface) to restrict the ability to find a video based on the principle of preventing fascistic content, the founders recognized that this seductive idea is absolutely inimical to a free and democratic society -- hence their decision to enshrine the absolute protection of a free press in the very First Amendment of the Bill of Rights. Secret panels or secret algorithms which stymie someone's video as soon as it is published (without their knowledge of what protocols they have violated) are a clear violation of freedom of the press: the printing press in 1776 and 1787 did not belong only to big corporate media concerns such as the New York Times; individual citizens such as Benjamin Franklin or Thomas Paine could print their arguments and distribute them widely -- but under the new algorithms implemented by Google / YouTube, their pamphlets could be squelched before they ever reached a large number of citizens.

  • It should be abundantly clear that removing the right to freedom of speech and freedom of the press is in no way an appropriate counter to fascism: on the contrary, the ability to squash every expression of an opinion other than the "official" narrative is clearly an attribute of fascism itself.

  • The proper way to combat fascism is to remove the conditions in which it thrives. Ask any expert in the study of the conditions conducive to fascism and they will probably mention high levels of unemployment or underemployment, particularly of young men, and a feeling of economic despair or anxiety (leading to the feeling that you don't have a fair shot at obtaining economic success within the normal channels of the economy). This previous post discusses the work of economists such as Warren Mosler and Michael P. Hudson, who explain ways to provide for true employment, as well as lowering the crushing debt burdens which prevent young men and women from getting married, buying a house and having the kind of confidence in their future which is probably the strongest antidote to fascism, which is an ideology born of despair and desperation.

  • One of the most important ways to combat the rise of fascistic and extremist tendencies among young adults in their teens and twenties is to squarely oppose the policies of austerity and scarcity which create the kinds of despair and desperation mentioned in the preceding point. I have written numerous previous posts discussing my opposition to the politics of austerity, many of them referencing the absolutely critical analysis of Professor Michael P. Hudson, including: "Austerity is an affront to the gods," "Collaborators against the gods," "Thor's visit to Olaf Tryggvason," "Privatization vs the gods (and the people)," "Transforming everyone and everything into commodities," and "All this has happened before" (among many others). The real way to combat fascism is to remove the conditions in which it tends to grow -- not to remove freedom of speech and freedom of the press (itself a fascistic move, and one which clearly violates human rights as acknowledged in the Bill of Rights to the Constitution).

  • When videos lead to violence, or even to the communication of threats, then those responsible for that violence or those threats can and must be prosecuted under the law. There are laws against "communicating a threat." But quashing the videos of anyone judged by some secret panel to be "spreading conspiracy theories" or expressing views "in opposition to some amorphous 'scientific consensus'" is wrong and must be opposed. Prosecute those actually calling for violence or communicating threats, without violating the right to freedom of speech and freedom of the press guaranteed in the Bill of Rights.

  • Ultimately, the most organic and powerful way to prevent odious ideologies from gaining traction in society is to create an environment in which the vast majority of men and women see such ideologies as detestable and in which the vast majority of men and women make their opposition to the despicable propaganda of proponents of fascism and racism known in no uncertain terms. This type of natural reaction is immortalized in the famous scene from the movie The Blues Brothers, released in 1980 (perhaps the last year before aggressive austerity policies began to be implemented on a scale never seen before in the united states, and thus a movie which captures an attitude among the people of the nation which is now becoming more and more a thing of the past):

In such an environment, nobody feels the need to ask YouTube to create algorithms to combat detestable racist and fascist videos, because the vast majority of the population sees such ideology as hateful and harmful, and reacts to it in the way we see the people reacting to similar propaganda in the movie clip shown above.

The only way that ideologies such as those being espoused by the fascists in the clip above will find any foothold is if the economic environment is characterized by despair and hopelessness -- brought about by the deliberate institution of policies of austerity, neoliberalism, and privatization; by conditions which lead to ever-increasing indebtedness (including opposition to deficit spending by governments free to implement their own fiscal policy, as discussed in this previous post); and by the kind of unemployment and underemployment which characterize policies of austerity and the abandonment of fiscal policy as a tool for aiding the growth of the economy. All of this can be fixed without the imposition of YouTube algorithms suppressing freedom of speech for everybody in the name of (supposedly) combating the rise of fascism.

Indeed, without the economic policy fixes described above (and in the posts linked in the above paragraphs), implementing a bunch of YouTube algorithms to suppress certain types of videos will have little positive effect -- while at the same time violating the First Amendment and constituting a grave threat to democracy and the ability of men and women in the electorate to find and hear opposing views on important issues.

On the day I graduated from West Point, I took an oath to support and defend the Constitution of the united states against all enemies, foreign and domestic.

It is my considered opinion that the institution of secret "tribunals" (including secret tribunals implemented by impersonal algorithms and artificial intelligence) which have the ability to suppress and quash the visibility of videos containing "misinformation" (even when those videos violate no written guidelines) constitutes a grave threat to democracy and to the human rights recognized in the Bill of Rights.

As I say in the above video, there is virtually no difference if we change the term "conspiracy theory" to the term "heresy," and empower secret tribunals to implement algorithms drastically reducing the visibility of any video deemed to be promoting "heresy" (or even "borderline heresy"). If the united states were to implement such heresy tribunals, there would be no difficulty seeing that move as a clear and unequivocal violation of the Bill of Rights.

And yet that is exactly what appears to be taking place in the united states (and around the world) right now.

We had better wake up and ask ourselves whether we want to live in a world governed by these kinds of secret tribunals, deciding what information we are or are not allowed to see (in direct contradiction to the Bill of Rights).

Either that or just go back to our "feed" and take what we get.

original image: Wikimedia commons (link).

original image: Wikimedia commons (link).