
Social Media Misinformation via Bots and Fake Accounts

Introduction

Social media platforms, integral to modern communication and information dissemination, are increasingly exploited by malicious actors to stir hatred and incite violence. This article delves into how these actors use fake accounts and bots to spread misinformation, foment racism, and provoke unrest. It also explores the best methods to prevent these threats and how social media companies can enhance their policing efforts.


[Image: Southport attack]

Exploiting Social Media: Fake Accounts and Bots


Creation and Use of Fake Accounts

Fake accounts are a cornerstone of malicious activities on social media. Malicious actors, including hostile states and extremist groups, create numerous fake profiles to amplify their messages and manipulate public opinion. These accounts often pose as legitimate users, adopting credible personas to infiltrate online communities and spread divisive content.

For instance, during political campaigns, fake accounts can be used to post inflammatory comments, share misleading news, and engage with real users to make harmful narratives appear widespread. These accounts often impersonate influential figures or ordinary citizens to gain trust and legitimacy, making their deceptive messages more persuasive.


How Do Threat Actors Get Hold of Fake Accounts?


CloudSEK identified the first advertisement for a Gold account on dark web marketplaces in March 2023. Since then, the firm has observed a flood of X Gold account ads on the dark web, alongside fake or stolen Facebook, Instagram, Yahoo, and TikTok accounts.


Cybercriminals selling those accounts use several methods to acquire them, including:

  • Manually creating fake accounts: the advertisers create accounts by hand, get them verified, and sell them ‘ready to use’. This suits criminals who want a pseudo-identity and do not want their actions attributed back to them.

  • Brute-forcing existing accounts: cybercriminals take over existing accounts by running generic username-and-password combination lists against them, using tools such as Open Bullet, SilverBullet, and SentryMBA (a defensive detection sketch follows this list).

  • Using malware to harvest credentials and steal accounts: infostealer malware feeds a centralized botnet infrastructure that harvests credentials from infected devices. These credentials are then validated against buyers' requirements, such as individual or corporate accounts, follower counts, or region-specific accounts.
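
As a rough idea of how platforms defend against the combo-list technique described above, the following Python sketch flags a source IP that attempts logins against an unusually large number of distinct usernames in a short window. It is a minimal, hypothetical illustration; the thresholds and data structures are invented, not taken from any real platform:

from collections import defaultdict, deque
from time import time

# Tracks recent login attempts per source IP and flags IPs that try many distinct
# usernames in a short window -- the typical footprint of a combo-list attack.
# The window and threshold values are invented for this sketch.
WINDOW_SECONDS = 300
MAX_DISTINCT_USERS = 20

attempts = defaultdict(deque)  # ip -> deque of (timestamp, username)

def record_login_attempt(ip, username, now=None):
    """Return True if this IP now looks like a credential-stuffing source."""
    if now is None:
        now = time()
    recent = attempts[ip]
    recent.append((now, username))
    # Discard attempts older than the sliding window.
    while recent and now - recent[0][0] > WINDOW_SECONDS:
        recent.popleft()
    distinct_users = {user for _, user in recent}
    return len(distinct_users) > MAX_DISTINCT_USERS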


Deployment of Bots

[Image: disinformation chart]

Bots are automated accounts programmed to perform specific tasks at high speed and volume. Malicious actors deploy bots to spread misinformation, amplify hate speech, and create the illusion of widespread support for certain views. Bots can flood social media platforms with posts, likes, and shares, rapidly disseminating harmful content far beyond what human users could achieve alone.

For example, during crises or contentious events, bots can swarm social media with coordinated messages, often using hashtags to trend topics and draw attention. This tactic was evident during the 2016 U.S. Presidential election, where bots played a significant role in spreading false information and deepening societal divisions.


Techniques of Misinformation and Incitement


Coordinated Campaigns

[Image: UK riots]

Malicious actors often organize coordinated campaigns to maximize the impact of their misinformation efforts. By synchronizing the activities of fake accounts and bots, they can dominate conversations, drown out legitimate discourse, and create a false sense of consensus. These campaigns are meticulously planned, with content tailored to exploit existing societal tensions and prejudices.
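
Coordination of this kind can leave a detectable footprint. The Python sketch below (a hypothetical illustration, with invented thresholds and a deliberately simple word-overlap similarity measure) flags pairs of different accounts that post near-identical text within minutes of each other:

from itertools import combinations

def jaccard(a, b):
    """Word-overlap similarity between two post texts (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def find_coordinated_pairs(posts, max_gap_seconds=600, min_similarity=0.8):
    """posts: list of dicts with 'account', 'text' and 'timestamp' (epoch seconds)."""
    pairs = []
    for p1, p2 in combinations(posts, 2):
        if p1["account"] == p2["account"]:
            continue
        close_in_time = abs(p1["timestamp"] - p2["timestamp"]) <= max_gap_seconds
        if close_in_time and jaccard(p1["text"], p2["text"]) >= min_similarity:
            pairs.append((p1["account"], p2["account"]))
    return pairs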


Amplifying Racism and Xenophobia

Racism and xenophobia are particularly potent targets for malicious actors. By spreading false narratives about minority groups and immigrants, they can incite fear and hatred, leading to real-world violence and discrimination. For instance, false stories about immigrants committing crimes can inflame public sentiment, resulting in attacks on immigration centers and other acts of violence.


Algorithm Manipulation

Social media algorithms, designed to prioritize engaging content, can inadvertently amplify harmful messages. Malicious actors exploit this by creating content that triggers strong emotional reactions, such as anger and fear. This content is more likely to be promoted by algorithms, increasing its reach and impact. By strategically using bots and fake accounts, malicious actors can manipulate algorithms to favor their divisive narratives.
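
As a rough intuition for why this works, the toy Python sketch below ranks posts by an engagement-weighted score. The weights are invented and real ranking systems are vastly more complex, but the underlying incentive is the same: content that provokes strong reactions rises to the top:

# Toy engagement-weighted ranking. Shares, comments and angry reactions are
# weighted more heavily than likes because they tend to predict further
# interaction -- which is exactly the property manipulators exploit.
def engagement_score(post):
    return (
        1.0 * post["likes"]
        + 2.0 * post["comments"]
        + 3.0 * post["shares"]
        + 4.0 * post["angry_reactions"]
    )

def rank_feed(posts):
    """Return posts ordered by descending engagement score."""
    return sorted(posts, key=engagement_score, reverse=True)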


 

Best Methods to Prevent Malicious Use of Social Media


[Image: Fake social media accounts]

Enhanced Detection and Removal of Fake Accounts and Bots

One of the most effective ways to combat the misuse of social media is through the enhanced detection and removal of fake accounts and bots. Social media companies can use advanced AI and machine learning algorithms to identify patterns of inauthentic behavior. For instance, bots often exhibit unusual activity patterns, such as posting at inhumanly consistent intervals or engaging with a high volume of content in a short period.
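
One such pattern is easy to illustrate. The Python sketch below (a hypothetical heuristic with invented thresholds, not any platform's actual detector) flags accounts whose posting intervals are suspiciously regular or whose posting rate is implausibly high for a human:

import statistics

def looks_automated(post_timestamps, max_interval_stdev=2.0, min_posts_per_hour=30):
    """post_timestamps: sorted list of epoch seconds for one account's posts."""
    if len(post_timestamps) < 3:
        return False
    # Humans post at irregular intervals; simple bots post on a near-fixed schedule.
    intervals = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    span_hours = (post_timestamps[-1] - post_timestamps[0]) / 3600 or 1
    posts_per_hour = len(post_timestamps) / span_hours
    return statistics.stdev(intervals) < max_interval_stdev or posts_per_hour > min_posts_per_hour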


Multi-Factor Authentication

Implementing multi-factor authentication (MFA) can make it more difficult for malicious actors to create and manage large numbers of fake accounts. By requiring additional verification steps, such as SMS codes or biometric data, social media companies can ensure that each account is tied to a real individual, reducing the prevalence of fake profiles.
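
As a minimal sketch of how one common second factor works, the example below uses the open-source pyotp library to generate and verify time-based one-time passwords (TOTP); the secret and verification flow are illustrative rather than a production design:

import pyotp

# In practice the secret is generated at enrolment and provisioned to the
# user's authenticator app (e.g. via a QR code); here it is created inline.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Current one-time code:", totp.now())

def verify_second_factor(submitted_code):
    # valid_window=1 tolerates slight clock drift between server and device.
    return totp.verify(submitted_code, valid_window=1)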


Transparency in Algorithmic Decisions

Social media companies should increase transparency regarding how their algorithms work and make decisions about content promotion. By providing users with more information about why certain content is shown to them, companies can reduce the manipulation of algorithms by malicious actors. Additionally, allowing users to have more control over their feeds and the ability to flag suspicious activity can empower communities to help identify and counter harmful behavior.


Policing Social Media More Effectively


Strengthening Content Moderation

[Image: Keir Starmer v Elon Musk]

Robust content moderation is crucial to policing social media effectively. Social media companies should invest in a combination of AI-driven tools and human moderators to identify and remove harmful content swiftly. AI can be used to flag potential violations, while human moderators can provide context and ensure accurate enforcement of community guidelines.
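
A simplified version of that hybrid pipeline might look like the Python sketch below, where an automated classifier (here a trivial stand-in) scores each post and borderline cases are routed to human moderators; all names and thresholds are invented for illustration:

REMOVE_THRESHOLD = 0.95
REVIEW_THRESHOLD = 0.60

def toy_classifier(text):
    """Stand-in for a real ML model: returns a harm probability between 0 and 1."""
    flagged_terms = {"attack them", "burn it down"}  # placeholder terms
    return 0.9 if any(term in text.lower() for term in flagged_terms) else 0.1

def triage(post_text):
    score = toy_classifier(post_text)
    if score >= REMOVE_THRESHOLD:
        return "auto_remove"      # unambiguous violations handled automatically
    if score >= REVIEW_THRESHOLD:
        return "human_review"     # context-sensitive cases go to moderators
    return "allow"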


Real-Time Monitoring and Rapid Response

Implementing real-time monitoring systems can help social media platforms respond more quickly to coordinated campaigns and sudden surges in harmful content. By detecting unusual spikes in activity, platforms can investigate and mitigate the impact of malicious actors before their messages spread widely.
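
As a minimal sketch of such spike detection, the Python example below flags a hashtag whose latest hourly post count sits far above its recent baseline, using a simple z-score test; the thresholds and sample data are invented for illustration:

import statistics

def is_suspicious_spike(hourly_counts, z_threshold=4.0):
    """hourly_counts: post counts per hour for one hashtag, oldest first."""
    if len(hourly_counts) < 5:
        return False
    baseline, latest = hourly_counts[:-1], hourly_counts[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0
    # How many standard deviations above normal is the latest hour?
    return (latest - mean) / stdev > z_threshold

# Example: steady chatter followed by a sudden coordinated burst.
print(is_suspicious_spike([12, 9, 14, 11, 10, 13, 250]))  # True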


Collaboration with External Experts

Social media companies should collaborate with external experts, including cybersecurity firms, academics, and non-governmental organizations (NGOs). These partnerships can provide valuable insights into emerging threats and effective countermeasures. For instance, NGOs focused on human rights can offer guidance on balancing content moderation with free speech protections.


 

Challenges and Ethical Considerations


Balancing Free Speech and Safety

Regulating social media content involves a delicate balance between ensuring user safety and protecting free speech. Overzealous content moderation can lead to censorship and the suppression of legitimate discourse, while insufficient moderation allows harmful content to proliferate. Social media companies must develop nuanced policies that consider both the intent and impact of content, ensuring that enforcement actions are proportionate and justified.


Addressing Global Diversity

Social media platforms operate in diverse cultural and legal environments, complicating efforts to create uniform moderation policies. Content deemed harmful in one country may be acceptable in another, necessitating localized approaches. Companies must respect different norms and laws while maintaining a consistent commitment to preventing harm and promoting safe online spaces.


Conclusion

The exploitation of social media by malicious actors to stir hatred and incite violence through fake accounts and bots poses a significant threat to societal cohesion and public safety. By understanding the tactics used—such as coordinated campaigns, algorithm manipulation, and the deployment of bots—social media companies can develop and implement effective countermeasures.


Enhanced detection and removal of fake accounts, increased transparency in algorithmic decisions, and robust content moderation are crucial steps in mitigating these threats. Through collaboration with external experts and a commitment to balancing free speech with safety, social media platforms can better police their spaces and contribute to a safer, more inclusive online environment.
