Modern astroturfing is a technique for creating the impression of broad grass-roots support for a product, service, or idea through highly automated social media “bots” and dozens or hundreds of amplifier accounts controlled by a single user.
When you read accounts that the Kremlin used 150,000 bots to magnify the appearance of support for Brexit in the final days of the 2016 referendum, that is a classic example of a modern astroturfing campaign. Consequently, there has been a lot of hysteria on both sides of the Atlantic about this “new” phenomenon. However, astroturfing is neither new nor uncommon. It predates the internet: letters to the editors of newspapers were written by public relations staff under pseudonyms and aliases.
Innovation is often a matter of taking two old ideas and fusing them into a new twist. Astroturfing for commercial purposes has been banned in the European Union since 2005 under the Unfair Commercial Practices Directive. Subsequently, it has also been banned in the United Kingdom under the Consumer Protection from Unfair Trading Regulations. It is illegal to pretend to be someone else on the internet in order to affect the perception of consumers.
When I worked in public relations as a copywriter, astroturfing was considered highly unethical, and we were barred from using it. But a technique being considered highly unethical does not mean it is uncommon. It is estimated that a third of all user reviews of products on the internet have been created by astroturfing campaigns. Because it is notoriously difficult to detect, it is rampant despite being illegal.
What is new is that this unethical practice has been twisted into use in mainstream politics. It has been used in politics before, of course. The term astroturfing itself comes from attempts to influence regulatory efforts with conjured-up grass-roots campaigns. One example is Philip Morris’s The Advancement of Sound Science Center, a group created in 1993 to discredit a 1992 report from the United States Environmental Protection Agency about the dangers of second-hand smoking, and to fight pushes for regulation of tobacco smoking and sales.
What is also new is that one or more state-level actors have adopted these existing techniques. The legal instruments to fight this kind of influence peddling already exist, then, but in the realm of commercial communications. It is a question for society whether those rules should also be applied to political communication, something that could open an ethical can of worms.
Free speech becomes an issue. The problem of correctly identifying who is a clone-warrior in a conjured-up army of near-identical user profiles, and who is a genuine opponent of something being debated politically, could prove too difficult for democracy to deal with. In the commercial space, the issue has been discussed and dealt with for at least a decade. Laws have been passed, and ethical guidelines have been adopted. All of it has largely failed because of the difficulty of identifying who is fake and who is not.