What does social media algorithm bias mean? Can an algorithm actually be biased or prejudiced? And why does this matter?
Many of us follow the news on social media, and we each see different things in our feeds. More than once, false news of the death of an actor or other public figure has spread across social networks, republished even by accounts with thousands of followers, only to be denied within a few hours.
If you remember, when COVID-19 vaccination began, contradictory claims about the vaccines circulated on social networks. Some people opposed vaccination, and they had their reasons. What was striking was that posts making unrealistic and dangerous claims about the vaccines were widely shared, liked, and commented on. Some people published claims that were genuinely harmful to public health simply to attract attention and be seen.
I am sure you also remember dangerous trends that started on social networks. When we want to check anything, the first place we visit is social media; whether we like it or not, our judgment and behavior are shaped by it. Unfortunately, that influence is sometimes so strong that it has terrible consequences for individuals and societies.
The important question is what to do. First, we need to find the root of the problem: what causes some posts and content to be seen more than others? Then we need to look at the solutions available for preventing the harmful, destructive effects of social networks on how users perceive and react to a given issue. This article answers those questions.
The bias of social network algorithms
Artificial intelligence has made many once-impossible things possible. Using AI and machine learning, machines (computers and software) can be taught to learn and perform tasks. Applications of AI and machine learning across industry and technology have fundamentally changed the way people work and live.
Some applications of machine learning, or rather their results, are obvious to all of us. ChatGPT, for example, is one of the famous achievements of artificial intelligence: conversational chatbots are the product of advances in machine learning and a highly specialized field called natural language processing.
AI and machine learning also have applications we are less aware of, yet they affect every one of us. Social network companies employ AI and machine learning to organize, for each user, the enormous amount of content produced on these platforms.
Social network algorithms were created to use each user's reactions to learn that user's preferences and then fill the feed (News Feed) with posts and content (including advertising and sponsored content) they predict the user will like. Framed this way, there is no problem: the algorithms simply show users what they want.
Unfortunately, it is not that simple.
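The feed-ranking idea above can be sketched as a toy engagement score. This is a minimal illustration, not any platform's real algorithm; the signal names and weights are invented for the example.

```python
# Toy feed ranking (illustrative only; real platforms use many more
# signals and learned weights -- all names and numbers here are invented).

def engagement_score(post: dict) -> float:
    """Score a post by weighted engagement signals."""
    return (1.0 * post["likes"]
            + 3.0 * post["comments"]   # comments weighted above likes
            + 5.0 * post["shares"])    # shares weighted highest

posts = [
    {"id": "calm-news", "likes": 120, "comments": 4,  "shares": 2},
    {"id": "hot-rumor", "likes": 90,  "comments": 60, "shares": 30},
]

# Sort the feed by score, highest first: the provocative post wins,
# even though the calmer post has more likes.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # ['hot-rumor', 'calm-news']
```

Notice that nothing in the score measures truth or quality; the ranking rewards whatever provokes reactions, which is exactly the problem the rest of this article explores.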
Linguistic bias of algorithms
Algorithms are taught by humans, much like a child. What if a child grows up in an environment where they see only white people, or only Black people? Or where every principal and teacher is male? That child assumes this is what the whole world looks like; asked about people of other races, they have no answer, because they do not know other races exist.
Research has repeatedly shown that algorithms become biased when they are trained on biased, one-sided data. When something is not in an algorithm's vocabulary, the algorithm simply does not see it. And this is not the only problem with the linguistic bias of algorithms.
Social networks also use algorithms to block offensive or terrorist content and attacks aimed at a person or group. That is, the algorithms are trained to remove content containing certain words. The problem is that a word or phrase can be offensive in one text and entirely innocuous in another: the meaning of words changes with context. In addition, some words are not considered offensive at all within certain groups and communities.
So if an algorithm is trained on data from Standard American English (SAE), it will not properly understand African-American English (AAE). It may then mistakenly flag harmless content as offensive and remove it. As a result, the voices of some ethnic and racial groups on social networks may be wrongly and unfairly suppressed or even silenced.
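The failure mode described above can be shown with a deliberately naive sketch: a keyword blocklist that ignores context, so it flags a harmless use of a word exactly as readily as an abusive one. The blocklist term and posts are invented placeholders, not drawn from any real moderation system.

```python
# Deliberately naive moderation filter (illustrative only): it matches
# bare keywords with no notion of context, speaker, or dialect.

BLOCKLIST = {"trash"}  # placeholder standing in for a flagged word

def naive_filter(post: str) -> bool:
    """Return True if the post would be removed."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

posts = [
    "You are trash and everyone hates you",  # abusive use of the word
    "Took the trash out before the rain",    # harmless use, same word
]
print([naive_filter(p) for p in posts])  # [True, True] -- both removed
```

Both posts are removed even though only the first is abusive. Real classifiers are statistical rather than rule-based, but research has found they make the analogous mistake: patterns common in one dialect get learned as signals of offensiveness.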
Social and cognitive biases as a result of using algorithms in social networks
Like social networks, search engines use algorithms to organize web content. For these algorithms, the attention and interaction a piece of content, post, or account receives matters enormously: the more users react to a topic or to a particular account's posts, the more the algorithms show and recommend it, on the reasoning that the content and its author are popular.
An important question follows: what happens to less popular topics, content, and viewpoints? Doesn't this effectively censor certain opinions or people? Since popularity is no proof of correctness, accuracy, or validity, doesn't it let low-quality, emotional, baseless content spread quickly across the web and shape people's opinions? Doesn't it leave views that run against the majority unheard?
Some research suggests the answer to these questions is yes. This property of algorithms can produce various cognitive and social biases in users facing political, economic, social, cultural, religious, and even health-related issues.
The most common social and cognitive biases in social networks
The most important and common social and cognitive biases caused by algorithmic filtering of information online are the illusory (false) truth effect, confirmation bias, authority bias, the false consensus effect, and in-group bias. (Since in-group bias is self-explanatory, I will not describe it further.)
- Illusory truth effect: hearing the same claim over and over leads some people to believe it is true.
- Confirmation bias: some people see out and read only information that fits their existing beliefs.
- Authority bias: if a scientist, politician, or famous person (with many followers) says something, some people assume it must be true.
- False consensus effect: similar to authority bias, except that people believe something because "everyone" seems to think it is true. For example, a post with thousands of likes or a video viewed millions of times must therefore be valid, true, and reliable.