Fake news is not new, but the rate at which it can spread is. Many people have a hard time sorting real news from fake news on the internet, causing confusion.
One example of how quickly disinformation can spread is the war in Ukraine. As part of its war effort, Russia deployed another powerful weapon -- disinformation. Russia built a digital barricade to prevent its citizens from accessing outside information, cutting them off from the rest of the world. Instead, Russian citizens must rely on the information their authorities permit. The free and open internet does not exist in Russia.
One of the main problems with this digital barricade is the spread of disinformation. Russians receive false information, such as the assertion that Ukraine is the aggressor in this conflict. This digital isolation enables Russia to clamp down on information that does not follow the government line. Russia recently passed a censorship law preventing journalists, websites and other sources from publishing what government authorities deem disinformation.
Social media is becoming a more common way for readers to get their news and information. However, not all information on these sites can be trusted. Disinformation can cause mistrust, as its main goal is deception. Disinformation can spread through bots, bias, sharing and hackers. Keep reading to learn 10 ways to spot disinformation on social media.
What is fake news?
Fake news consists of articles that are intentionally false and designed to manipulate readers' perceptions of events, facts, news and statements. The information looks like news but either cannot be verified or did not happen. This fabricated content often mimics real news media but lacks its credibility and accuracy.
Some things that make a news story fake include:
- unverifiable information
- pieces written by nonexperts
- information not found on other sites
- information that comes from a fake site
- stories that appeal to emotions instead of stating facts
Categories of fake news include:
- Clickbait. This uses exaggerated, questionable or misleading headlines, images or social media descriptions to generate web traffic. These stories are deliberately fabricated to attract readers.
- Propaganda. This spreads information, rumors or ideas to harm an institution, country, group of people or individual -- typically for political gain.
- Imposter content. This mimics genuine news sites but contains made-up stories designed to deceive readers.
- Biased/slanted news. This attracts readers to confirm their own biases and beliefs.
- Satire. This creates fake news stories for parody and entertainment.
- State-sponsored news. This operates under government control to create and spread disinformation to residents.
- Misleading headlines. These stories may not be completely false but are distorted with misleading headlines and small snippets displayed in newsfeeds.
Fake news is harmful because it can create misunderstanding and confusion on important issues. Spreading false information can intensify social conflict and stir up controversy. These stories can also cause mistrust.
What contributes to disinformation?
Fake news spreads more rapidly than other news because it appeals to the emotions, grabbing attention. Here are some ways disinformation spreads on social media:
- Continuous sharing. It's easy to share and "like" content on social media. The number of people who see this content increases each time a user shares it with their social network.
- Recommendation engines. Social media platforms and search engines also provide readers with personalized recommendations based on past preferences and search history. This personalization further shapes who sees fake news.
- Engagement metrics. Social media feeds prioritize content using engagement metrics, including how often readers share or like stories. However, accuracy is not a factor.
- Artificial intelligence. AI systems can also promote disinformation. AI can create realistic fake material based on the target audience. An AI engine can generate messages and test them immediately for effectiveness at swaying targeted demographics. It can also use bots to impersonate human users and spread disinformation.
- Hackers. These people can plant stories in real news media outlets, making the stories appear to come from reliable sources. For example, Ukrainian officials reported hackers broke into government websites and posted false news about a peace treaty.
- Trolls. Fake news can also appear in the comments of reputable articles. Trolls deliberately post to upset and start arguments with other readers. They are sometimes paid for political reasons, which can play a part in spreading fake news.
Misinformation vs. disinformation
Misinformation and disinformation are often used interchangeably; however, they differ in meaning and intent.
Misinformation is inaccurate information shared without any intention to cause harm. Misinformation can be shared unintentionally either due to lack of knowledge or understanding of the topic. Typically, people spread misinformation unknowingly because they believe it to be true.
Disinformation is spread to deceive deliberately. Typically, disinformation serves an objective. For example, some of the most prominent disinformation campaigns come from governments, such as the Russian government's campaigns about its war with Ukraine, which aim to win public support by spreading information the government wants people to believe even though it is untrue.
10 ways to spot disinformation on social media
The first step of fighting the spread of disinformation on social media is to identify fake news. It's best to double-check before sharing with others. Here are 10 tips to recognize fake news and identify disinformation.
1. Check other reliable sources
Search other reputable news sites and outlets to see if they are reporting on this story. Check for credible sources cited within the story. Credible, professional news agencies have strict editorial guidelines for fact-checking an article.
2. Check the source of the information
If this story is from an unknown source, do some research. Examine the web address of the page and look for strange domains other than ".com" -- such as ".infonet" or ".offer." Check for any spelling errors of the company name in the URL.
Consider the reputation of the source and their expertise on the matter. Bad actors may create webpages to mimic professional sites to spread fake news. When in doubt, go to the home page of the organization and check for the same information. For example, if a story looks like it is from the U.S. Centers for Disease Control and Prevention (CDC), go to the CDC's secured website and search for that information to verify it.
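As a rough illustration of these URL checks, the heuristics above can be sketched in a short script. The domain list and the `flag_suspicious_url` helper are hypothetical and illustrative only; real vetting is far more involved:

```python
from urllib.parse import urlparse

# Illustrative list of familiar top-level domains; anything outside it
# merits a closer look. (Many legitimate sites use other TLDs, too.)
COMMON_TLDS = {"com", "org", "net", "gov", "edu"}

def flag_suspicious_url(url: str, expected_name: str) -> list[str]:
    """Return a list of red flags found in a news story's URL."""
    flags = []
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1] if "." in host else ""
    if tld not in COMMON_TLDS:
        # e.g. ".infonet" or ".offer" instead of ".com"
        flags.append(f"unusual top-level domain: .{tld}")
    if expected_name.lower() not in host.lower():
        # e.g. a misspelled or hyphenated imitation of the real domain
        flags.append(f"host does not contain expected name '{expected_name}'")
    return flags
```

For example, `flag_suspicious_url("https://www.cdc.gov/flu", "cdc.gov")` raises no flags, while a look-alike such as `"https://cdc-gov.offer/story"` trips both checks.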
3. Look at the author
Perform a search on the author. Check for credibility, how many followers they have and how long the account has been active.
Scan other posts to determine if they have bot behaviors, such as posting at all times of the day and from various parts of the world. Check for qualities such as a username with numbers and suspicious links in the author's bio. If the content is retweeted from other accounts and has highly polarized political content, it is likely a fake bot account.
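The bot signals described above lend themselves to a simple scoring sketch. The field names and thresholds below are hypothetical, not research-backed cutoffs:

```python
def bot_score(account: dict) -> int:
    """Count bot-like signals on a (hypothetical) social media account record.

    Expected keys: 'username', 'posts_per_day', 'bio_links', 'retweet_ratio'.
    """
    score = 0
    if any(ch.isdigit() for ch in account["username"]):
        score += 1  # username padded with numbers
    if account["posts_per_day"] > 50:
        score += 1  # posting at all times of the day
    if account["bio_links"] > 2:
        score += 1  # suspicious links in the bio
    if account["retweet_ratio"] > 0.9:
        score += 1  # almost no original content, mostly reshared posts
    return score

suspect = {"username": "newsfan83749", "posts_per_day": 120,
           "bio_links": 3, "retweet_ratio": 0.95}
print(bot_score(suspect))  # all four signals trip, so the score is 4
```

A high score doesn't prove an account is a bot -- real detection systems weigh many more signals -- but several of these traits together warrant skepticism.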
4. Search the profile photo
In addition to looking at the author's information and credibility, check their profile picture. Complete a reverse image search of the profile photo using Google Reverse Image Search. Make sure the image is not a stock photo or a picture of a celebrity. If the image doesn't appear to be original, the author is effectively anonymous, and the article is likely unreliable.
5. Read beyond the headline
Consider whether the story sounds unrealistic or too good to be true. A credible story has plenty of facts conveyed with expert quotes, official statistics and survey data. It can also have eyewitness accounts.
If there are not detailed or consistent facts beyond the headline, question the information. Look for evidence to support that the event really happened. Make sure facts are not solely used to back up a certain viewpoint.
6. Develop a critical mindset
Don't let personal beliefs cloud judgment. Biases can influence how someone responds to an article. Social media platforms suggest stories that match a person's interests, opinions and browsing habits.
Don't let emotions influence views on the story. Look at a story critically and rationally. If the story is trying to persuade the reader or send readers to another site, it is probably fake news.
7. Determine if it is a joke
Satirical websites present stories as parody or jokes. Check whether the website consistently posts funny stories and is known for satire. One well-known example is The Onion.
8. Watch for sponsored content
Look at the top of the content for "sponsored content" or a similar designation. These stories often have catchy photos and appear to link to other news stories. They are ads designed to reach the reader's emotions.
Check the page for labels such as "paid sponsor" or "advertisement." These articles bait readers into buying something, whether legitimate or deceitful. Some of these sites may also redirect users to malicious sites that install malware. Malware can steal data from devices, cause hardware failure or render a computer or network inoperable.
9. Use a fact-checking site
Fact-checking sites can also help determine if the news is credible or fake. These sites use independent fact checkers to review and research the accuracy of the information by checking reputable media sources. They are often part of larger news outlets that identify incorrect facts and statements. Popular fact-checking sites include:
- PolitiFact. This Pulitzer Prize-winning site researches claims from politicians to check accuracy.
- Fact Check. This site from the Annenberg Public Policy Center also checks the accuracy of political claims.
- Snopes. This is one of the oldest and most popular debunking sites on the internet that focuses on news stories, urban legends and memes. The independent fact-checkers cite all sources at the end of the debunking.
- BBC Reality Check. This site is part of the British Broadcasting Corporation (BBC) and fact-checks news stories.
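Many of these fact checks are also aggregated programmatically. Google's Fact Check Tools API, for instance, exposes a public claim-search endpoint that returns ClaimReview data from participating fact-checkers. The sketch below only builds the request URL (the `YOUR_API_KEY` placeholder stands in for a real API key, which is required to actually call the service):

```python
from urllib.parse import urlencode

# Public claim-search endpoint of Google's Fact Check Tools API.
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_fact_check_url(claim: str, api_key: str = "YOUR_API_KEY") -> str:
    """Build a claim-search request URL for a suspicious statement."""
    params = urlencode({"query": claim, "languageCode": "en", "key": api_key})
    return f"{ENDPOINT}?{params}"

print(build_fact_check_url("the moon landing was staged"))
```

Fetching that URL (with a valid key) returns a JSON list of matching claims and the publishers who reviewed them.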
10. Check image authenticity
Modern editing software makes it easy to create fake images that look real. Look for shadows or jagged edges in the photo. Google Reverse Image Search is another way to check the image to see where it originated and if it's altered.
What are social networks doing to combat disinformation?
Social media platforms are cracking down on false information. In October 2023, the Israel-Hamas war took center stage on social media, and as disinformation began to spread quickly, platforms took precautions.
Platforms issued statements about how they are handling disinformation on the war, which may be used to incite hate and violence. Here is what some social platforms released:
- TikTok. TikTok released a statement that said it launched a command center to manage safety globally. The company plans to improve the software to detect and remove any graphic or violent content, and it also hired Arabic and Hebrew linguists to moderate content.
- Facebook and Instagram. Their parent company, Meta, stated it launched a special operations center with experts who speak Arabic and Hebrew to monitor content. It also lowered its thresholds for restricting questionable content.
- X. X announced it increased resources for the crisis and is monitoring content around the clock, especially content about hostages.
- YouTube. YouTube has removed videos since the attack and says it continues to monitor hate, graphic images and extremism, according to community guidelines.
- Telegram. The messaging app Telegram restricted channels operated by or closely associated with the Hamas militant group. These channels are no longer accessible to Telegram users.
Regular moderations to prevent disinformation
Facebook runs two initiatives to address the general rise of disinformation: the News Integrity Initiative and the Facebook Journalism Project, which highlight problems with fake news and spread awareness. The company also takes action against pages and individuals that share fake news, removing them from the site.
Instagram and Facebook have a new "false information" label to combat disinformation. Third-party fact checkers review and identify potential false claims and posts. If this team determines this information is untrue, they flag it with a label to notify social media users it contains misinformation. When readers want to view a post with this label, they must click an acknowledgement that says the information is not true. If they try to share this information, they get a warning they are about to share false information.
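The labeling workflow described above -- flag, label, acknowledge, warn on share -- can be modeled in a small sketch. This is a hypothetical simplification of the platforms' internal process, with invented names like `Post`, `view` and `share`:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    flagged_false: bool = False   # set after third-party fact-checker review
    warnings: list = field(default_factory=list)

def view(post: Post) -> str:
    """A flagged post shows a label the reader must click through."""
    if post.flagged_false:
        return "LABEL: Independent fact-checkers say this is false. Tap to view anyway."
    return post.text

def share(post: Post) -> str:
    """Sharing a flagged post triggers a warning instead of sharing directly."""
    if post.flagged_false:
        post.warnings.append("share-warning shown")
        return "WARNING: You are about to share information marked false."
    return "shared"
```

In this toy model, an unflagged post is viewed and shared normally, while a flagged one always routes through the label and the share warning.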
Twitter released a statement that it does not tolerate disinformation and has suspended accounts for manipulative or spammy actions.
LinkedIn also encourages users to report disinformation. If a review deems the information false, LinkedIn removes the post. LinkedIn has a strict user agreement, and users who do not comply are removed.
To fight fake news on social media, users must first recognize what is false. If the user deems the information as fake news, it's best to report it to the platform.