The EU is getting ready for total control of the digital space

15.04.2024 – Norbert Häring

15 April 2024 | Under the pretext of promoting “civic” engagement, the EU Commission is funding the development of artificially intelligent software for the surveillance and manipulation of social media – for use by state-funded private watchdogs and government agencies. Preparatory work has been done in the USA. In the event of a “crisis”, all the stops can then be pulled out.

If you come across the website Hatedemics.eu, you have discovered the tip of a huge iceberg of systematic manipulation and censorship of the digital space by those in power. (I came across it via a post (German) on Apollo News).

Hatedemics is a project funded by the EU with one million euros to develop software (“artificial intelligence”) to search the digital space for oppositional views and movements. The artificial intelligence is also intended to help with the formulation and dissemination of counter-narratives.

Under the leadership of the Italian artificial intelligence research center Fondazione Bruno Kessler, a consortium of 13 partners has been awarded the contract to develop the software.

This includes the globally operating Estonian security company Saher (the name means “security guard”), which has branches in the UK and is active in areas such as counter-terrorism.

It also includes fact-checkers such as FACTA and Pagella Politica, which earn their money predominantly through EU projects and from social media platforms. The EU has forced the latter, via a code of conduct against disinformation, to hire such private fact-checking companies as content censors.

The consortium further includes other such fact-checkers from other countries and civil society organizations from the field of diversity and equality, as well as public institutions.

Hatedemics is part of the €16 million “Citizens, Equality, Rights and Values” (CERV) programme, with which the EU Commission provides politically convenient so-called “non-governmental organizations” with government funding in order to “raise awareness of capacity building and the implementation of the Charter of Fundamental Rights of the European Union”.

Public-private censorship partnership

The call for proposals of the EU’s CERV program states:

“Projects under this priority should aim to enable civil society organisations to establish mechanisms of cooperation with public authorities to support the reporting of episodes of hate crime and hate speech; to ensure support to victims of hate speech and hate crime; and to support law enforcement, including through training or data collection methodologies and tools. Projects should also focus on activities that tackle hate speech online, including reporting content to IT companies, designing countering narrative and awareness raising campaigns, and educational activities to address the societal challenges of hate speech online.”

The Hatedemics project, which was successful with its application, aims to use artificial intelligence to make so-called civil society organizations and government agencies fit for the fight against “hate speech” online. This is used as a synonym for conspiracy theories, hate, agitation and disinformation. By providing artificial intelligence tools, civil society organizations are to be enabled to monitor, detect and report hate speech on the internet.

The software will also create “dialogue-based counter-narratives” and automatically measure the changes in behavior that are achieved through the use of counter-narratives.

The project sponsors promise: “The combination of these technologies will enable more targeted and timely online interventions.”
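
Hatedemics has not published any code, so how such a tool would operate can only be inferred from the project description: scan posts, flag suspected “hate speech”, and queue the flagged items for a counter-narrative and a report to the platform. The following is a minimal sketch of that loop, not the project’s actual software; the choice of an off-the-shelf toxicity classifier (“unitary/toxic-bert”), the flagging threshold and the data format are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not Hatedemics code. Assumes the open-source
# Hugging Face "transformers" library and a publicly available toxicity model.
from transformers import pipeline

# Model choice and threshold are assumptions made for this example.
classifier = pipeline("text-classification", model="unitary/toxic-bert")
FLAG_THRESHOLD = 0.8

def scan_posts(posts):
    """Flag posts the classifier scores highly, mimicking the 'monitor and detect' step."""
    flagged = []
    for post in posts:
        result = classifier(post["text"])[0]  # e.g. {'label': 'toxic', 'score': 0.97}
        if result["score"] >= FLAG_THRESHOLD:
            flagged.append({**post, "label": result["label"], "score": result["score"]})
    return flagged

def queue_interventions(flagged):
    """Stub for the 'counter-narrative' and 'report to IT companies' steps."""
    for item in flagged:
        print(f"Post {item['id']}: flagged as {item['label']} ({item['score']:.2f}) "
              f"-> draft counter-narrative, report to platform")

if __name__ == "__main__":
    sample = [{"id": 1, "text": "Example post scraped from a social media platform."}]
    queue_interventions(scan_posts(sample))
```

The point of the sketch is only how little is needed to automate the “monitor, detect and report” loop once a classifier and access to platform data are in place.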

Mentor and pioneer: the USA

One million euros is hardly enough to develop an AI program of the kind described. But that is almost certainly not necessary, because there is a pioneer that is surely only too happy to make its preliminary work available to a censorship-minded EU Commission so that it can be adapted to local conditions and languages. Unsurprisingly, this preparatory work comes from the country in which the major digital platforms whose censorship is at issue are based.

The National Science Foundation (NSF), the US government’s science-funding agency, has awarded at least 39 million dollars since 2021 to various university teams and companies to develop artificial intelligence for the automated searching and censorship of media in the digital space. This emerges from an interim report on the NSF dated February 5, 2024, by the US Congressional Committee of Inquiry into Illegal Government Censorship Activities.

From emails and presentations that the committee was able to evaluate, it is clear that those involved were aware that this was a censorship program. It is also clear from the report that the NSF actively concealed its promotion of the questionable programs.

The creators of WiseDex, one of the funded AI programmes, praised it as “an opportunity for platform decision-makers to outsource the difficult task of censoring”.

The NSF-funded Co-Insights programme seems to come particularly close to what the EU Commission expects from Hatedemics. According to the description, it is able to filter out articles for fact-checking and to compare statements from articles with fact-check articles. It is also supposed to operate automated channels for whistleblowers and convert their information into countermeasures. In a presentation, the team promises that Co-Insights can analyse 750,000 blog and media articles per day and scan data from all major social media platforms.
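
Technically, “comparing statements from articles with fact-check articles” is a standard semantic-similarity task. The following minimal sketch shows the principle using the open-source sentence-transformers library; the model name, the toy fact-check database and the matching threshold are assumptions for illustration only and say nothing about Co-Insights’ actual implementation.

```python
# Illustrative sketch of claim-to-fact-check matching -- not Co-Insights code.
# Assumes the open-source "sentence-transformers" library; model, example data
# and threshold are assumptions chosen for this example.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# A toy "fact-check database" standing in for the real archives such systems query.
fact_checks = [
    "Fact-check: claim X about topic Y was rated false on 2024-01-10.",
    "Fact-check: the statistic cited in claim Z is taken out of context.",
]
fact_check_embeddings = model.encode(fact_checks, convert_to_tensor=True)

def match_claim(claim: str, threshold: float = 0.6):
    """Return the most similar fact-check for a claim, if similarity exceeds the threshold."""
    claim_embedding = model.encode(claim, convert_to_tensor=True)
    scores = util.cos_sim(claim_embedding, fact_check_embeddings)[0]
    best = scores.argmax().item()
    if scores[best] >= threshold:
        return fact_checks[best], float(scores[best])
    return None, float(scores[best])

print(match_claim("A post repeating claim X about topic Y."))
```

At the scale the team claims – 750,000 articles per day – the same matching step would simply be run in batches over a streamed feed of articles.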

The CourseCorrect programme, which “supports the efforts of journalists, developers and citizens to fact-check delegitimising information”, has a similar focus.

The task of the Hatedemics consortium is therefore likely to consist of little more than cobbling together something suitable for the multilingual EU from one or more of these censorship programmes.

Conclusion and discussion

The EU provides large-scale funding and technical assistance to friendly “civil society” organisations and public-private partnerships to help suppress or counter dissenting online narratives. This complements the various EU initiatives that force digital platforms to block or limit transmission of inconvenient content. Thanks to the EU’s Digital Services Act (DSA), the “targeted online interventions” that the project organisers at Hatedemics promise to make possible can become openly totalitarian.

The law makes it possible to declare perfectly legal content “harmful” and therefore subject to deletion. Because the platforms face very high penalties, their willingness to censor is correspondingly high: in case of doubt, they would rather delete and block than risk fines.

In the event of a “crisis”, the possible “online interventions” that Hatedemics helps to prepare take on an even more drastic quality (German). The “crisis response mechanism” of the DSA (Art. 36) then comes into play, and the EU Commission can immediately demand radical measures from the digital companies – such as manipulating search algorithms so that anything unwelcome can no longer be found, or demonetising all unwelcome publishers and publicists. It is up to the EU Commission to decide which further measures, beyond those listed as examples in the law, it wants to impose, and what it declares to be a crisis.

In the event of a “crisis”, the EU Commission can therefore use the findings and capabilities of programmes such as Hatedemics to take total control of the information and opinions disseminated on the internet via digital platforms. As long as it has not yet declared a crisis, it will limit itself to manipulation and censorship using the more subtle methods of public-private partnerships.

It is advisable to build up more analogue contacts and structures and not to rely too much on the rapidly dwindling freedom of the digital space. With the help of algorithms, technocrats with totalitarian ambitions can monitor the digital space automatically and closely; they cannot do the same with real-life encounters between people.

German version
