Deceptive Tactics: Unmasking Consumer Exploitation by AI-Driven Websites
- Geoff Cohen
- Aug 1, 2023
- 2 min read
Updated: Nov 28, 2023
In the vast expanse of the internet, a dark underbelly thrives, exploiting consumers and undermining truth. AI-driven websites employ deceptive tactics, spreading propaganda, misinformation, and disinformation. It's a tale of manipulation and eroded trust that demands investigation.
In the US, a recent report by the Association of National Advertisers (ANA) found that “advertisers unknowingly spend an astounding $13 billion annually on made-for-advertising (MFA) websites alone, which translates into 21% of all impressions and 15% of ad spend.”
These MFA websites use catchy headlines, clickbait, and appealing designs to captivate users. They disseminate false narratives, leveraging AI algorithms to exploit societal divisions and sow discord. Targeting vulnerable audiences, they shape public opinion and undermine democracy.
These platforms expertly blend ads with content, blurring the line between reliable information and sponsored messages. Users unwittingly become victims of manipulative advertising, perpetuating the spread of misinformation and propaganda.
The proliferation of these websites threatens privacy and data security. They collect user data without consent, exploiting personal information for targeted manipulation. Combined with AI algorithms, this data enables tailored content designed to mislead and manipulate users.
The consequences extend beyond individuals. Propaganda and misinformation erode public trust, destabilise society, and hinder decision-making. Fact-based discourse is replaced by echo chambers, fostering manipulation and social unrest.
There are ways to combat this. Consumers must exercise critical thinking, verify sources, and fact-check information, and media literacy programs are vital in teaching discernment between reliable and deceptive sources. Regulators and industry stakeholders also need to act. Stricter regulations on advertising, data privacy, and content standards are needed. Collaboration among technology companies, media organisations, fact-checkers, and governments can foster transparency and responsible AI use.
If we are to successfully mitigate the harm done by misinformation and information manipulation, media platforms and ad-tech companies must combat deceptive editorial and content practices. Transparency, content moderation, and ad verification processes can curb false narratives, and investment in accurate AI detection technologies aids in identifying deceptive content.
It’s also important to strike back at the agents of misinformation where it hurts, by squeezing the flow of revenue to MFA sites. One such tool is Trustlist, a platform that helps media buyers and advertisers cut off revenue streams for clickbait creators and disinformation spreaders, thereby promoting trust in journalism, curbing the spread of misleading information, and ensuring that ad spend isn’t wasted on MFA news sites. Ad buyers are supplied with a green list of approved sites and a red list of sites identified as lacking the trust markers Trustlist uses to recognise authentic, reputable news providers. Ad buyers then place advertising accordingly.
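Conceptually, the green/red-list approach works like a simple allow/deny filter applied at ad-buying time. The sketch below illustrates the idea only; the domain names, list contents, and function are hypothetical and do not represent Trustlist's actual data or API:

```python
# Hypothetical illustration of green/red-list ad placement filtering.
# These lists and domains are invented for demonstration purposes.
GREEN_LIST = {"reputable-news.example", "trusted-daily.example"}
RED_LIST = {"clickbait-mfa.example", "disinfo-hub.example"}

def placement_decision(domain: str) -> str:
    """Return 'place', 'block', or 'review' for a candidate ad slot."""
    if domain in RED_LIST:
        return "block"   # identified as an MFA / disinformation site
    if domain in GREEN_LIST:
        return "place"   # verified authentic news provider
    return "review"      # unknown site: hold for manual vetting

candidates = ["reputable-news.example", "clickbait-mfa.example", "new-site.example"]
decisions = {d: placement_decision(d) for d in candidates}
```

The point of the design is that unknown sites are neither automatically funded nor automatically blocked, but routed for review, which is how allow/deny lists typically handle the long tail of sites they have not yet assessed.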
The battle against consumer exploitation and the spread of propaganda, misinformation, and disinformation is challenging. Through awareness, responsible technology use, and collective action, we can restore trust, protect democracy, and better invest in the digital landscape. Together, we must stand against deception and safeguard the integrity of information.


