“Robots will come and replace humans.”

“Artificial intelligence will destroy many jobs and put people out of work.”

“Terrorists will use artificial intelligence as a weapon.”

“Governments are using artificial intelligence and machine learning to spy on their citizens.”

You don’t need to be interested in the field of artificial intelligence and machine learning to have heard at least one of those sentences. A great deal has been said and written about the drawbacks and dangers of artificial intelligence, and the news media have done a thorough job of frightening people about artificial intelligence and machine learning.

But is there really anything to fear in machine learning and training algorithms? Should humans be afraid of a future in which robots and voice assistants know them better than they know themselves? Is there any benefit in being afraid? Is it even possible to stop the rapid development of artificial intelligence? Is humanity’s inevitable fate something like the one depicted in the movies Her or I, Robot?

In this article, I try to answer the questions posed above and take a realistic look at the disadvantages and risks of artificial intelligence and machine learning.

The dark side of artificial intelligence and machine learning

Let me make one thing clear: in this article, I am not going to claim that artificial intelligence has no flaws, harms, or dangers. The point is how we look at them. AI is a science, and the history of artificial intelligence shows that many experts and scientists have worked in this field for many years. The progress of artificial intelligence and its sub-branches cannot be stopped, either, because the applications of those sub-branches, especially machine learning, have benefited humanity and made work and life easier in many fields.

So what should we do, and how should we talk about the dark side of AI (“dark AI”) and machine learning? Let me give some examples. Is the Internet something to be afraid of? Who denies the disadvantages and dangers of the Internet? What about cryptocurrencies? Don’t they have flaws or dangers? What about vaccines? Some people get vaccinated and, unfortunately, die. Medicines have side effects too!

Yet we use the Internet, some people buy cryptocurrencies, we get vaccinated, and we take the medicines our doctors prescribe. Precisely because we are aware of the dangers, we act carefully. We take precautions, scientists and experts keep looking for ways to reduce the risks of drugs and vaccines, and companies work to keep information and people safe on the Internet. The risks will never be zero, but they can be controlled and managed. Artificial intelligence and machine learning are no exception to that rule.

“Artificial intelligence and machine learning are human achievements and, like every other human achievement, come with flaws, misuses, and risks.”

Next, we will review the risks and disadvantages of artificial intelligence and machine learning together. For each one, I will also mention the solutions found so far to reduce those risks and harms.

Risks and disadvantages of artificial intelligence and machine learning

Discussing the disadvantages and risks of artificial intelligence, and finding solutions for them, is a subject of academic research. Scientists in related fields have studied issues such as algorithmic bias and discriminatory algorithms and proposed solutions in scientific papers. This article is not the place for an expert, scientific discussion of the dangers of artificial intelligence and machine learning, but it does try to introduce the most important risks of AI, and the solutions proposed for them, in a simple and understandable way.

1. Governments, global organizations, and tech companies are fighting black AI by creating laws, standards, and guidelines.

The most common danger of machines and systems built with artificial intelligence and machine learning is that they can make errors, even commit crimes, and be abused by individuals and groups. Algorithms can easily be trained with incorrect or deliberately slanted information; in other words, algorithms may be trained to commit crimes or even harm people.

Black artificial intelligence is a general term for any kind of harmful error or misuse of automated systems and machines. For example, drones are controlled by artificial intelligence. Imagine the dangers terrorist groups would create for people around the world if they gained access to military drones and were able to use them.

Black AI, then, is not only a threat to the privacy of users; it is also dangerous for the national and economic security of governments around the world. For this reason, governments, the United Nations, and technology companies in various countries are fighting black AI by establishing standards, guidelines, and even laws and regulations. These efforts are separate from the debate over military applications of artificial intelligence and machine learning, and from the campaigns launched by activists in different countries to fight the dangers of AI and machine learning.

Let me mention one example of those efforts. Microsoft has launched a multifaceted program called Responsible AI, which establishes standards for how artificial intelligence and machine learning are used at the company. With this program, Microsoft wants to show everyone that it is trying to use artificial intelligence and machine learning responsibly.

2. Professionals and companies are trying to ensure data security and user privacy against the criminal use of artificial intelligence and machine learning.

Data collection and analysis are among the most important applications of machine learning. Data scientists in various industries and businesses build predictive models by training algorithms; those models can predict the behavior and reactions of users or customers. Collecting data to train an algorithm is nothing more than recording every click, every like, and everything you and I do on the Internet. Now imagine a hacker gaining access to both the collected data and the user-behavior prediction models of a very large business. What would happen?
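
To make the idea of a behavior-prediction model concrete, here is a minimal sketch in Python using scikit-learn. The features (clicks, likes, minutes on site) and the tiny dataset are invented purely for illustration; real models of this kind are trained on millions of such records.

```python
# A toy sketch of a behavior-prediction model: hypothetical click/like
# counts are used to predict whether a user will make a purchase.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up training data: [clicks_last_week, likes_last_week, minutes_on_site]
X = np.array([
    [12, 3, 40],
    [2,  0,  5],
    [30, 9, 95],
    [5,  1, 12],
    [22, 6, 60],
    [1,  0,  3],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = bought something, 0 = did not

model = LogisticRegression()
model.fit(X, y)

# Predict the purchase probability for a new (hypothetical) user.
new_user = np.array([[18, 4, 50]])
print(model.predict_proba(new_user)[0, 1])
```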

Unfortunately, stolen data is not the only risk; there is a bigger one. Hackers, fraudsters, Internet thieves, and even organized crime networks and terrorist groups use machine learning and artificial intelligence to crack passwords, build chatbots that extract information from users, phish bank account details, automate hacking attacks, and more (another example of black artificial intelligence). Besides informing society about those risks, and beyond the efforts of companies and cybersecurity experts to protect data, governments need to pass laws that oblige every organization and business that collects user data to protect it.

3. The process of training algorithms and the way machines make decisions should be clear, transparent, and controlled.

Machine learning teaches a machine how to learn, and machine learning and deep learning also teach it natural human language (natural language processing). The important question is: can the machine be made to stop learning, or will it keep learning on its own? If a machine can go beyond its training and understand its surroundings, it can make decisions and act on its own. The danger is that humans, even the engineer who created the algorithm, may not be able to predict or understand those decisions.

Black-box AI, or closed artificial intelligence, refers to systems and machines whose data and decision-making processes are not transparent. To address that problem, a movement (and set of techniques) called Explainable Artificial Intelligence has emerged, which tries to increase control over how machines make decisions.
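
As a rough illustration of what an explainability technique can look like, here is a small sketch using permutation importance from scikit-learn, one common way to measure which inputs drive a model’s decisions. The data and feature names are made up for the example.

```python
# A minimal sketch of one explainability technique: permutation importance,
# which estimates how much each input feature drives a model's predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # hypothetical columns: [age, income, clicks]
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)   # label depends on income and clicks only

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much performance drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["age", "income", "clicks"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

In this toy setup, "income" and "clicks" should show clearly higher importance than "age", which reveals what the otherwise opaque model is actually relying on.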

When there is no control over how a machine is trained, the algorithm may be trained with bias, intentionally or unintentionally. The person who builds the machine and trains it on raw data is a human (a machine learning engineer). That person may never have taught the algorithm that not everyone has white skin and that there are people with dark skin in the world. Machine learning engineers must therefore train algorithms fairly, not only on narrow or biased data. Various campaigns are active in this area, and companies such as Microsoft are also trying to address the problem of bias by following standards.
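
A very simplified sketch of a bias check is shown below: it compares how often a (hypothetical) model’s predictions come out positive for two demographic groups. Real fairness audits are far more involved, but the basic idea is the same.

```python
# A toy bias check: compare how often a model's predictions are positive
# for two hypothetical demographic groups. A large gap can be a sign that
# the training data or the model is skewed toward one group.
import numpy as np

# Made-up model outputs: 1 = approved, 0 = rejected
predictions = np.array([1, 1, 0, 1, 1, 0, 0, 0, 1, 0])
groups      = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in ["A", "B"]:
    rate = predictions[groups == g].mean()
    print(f"Group {g}: positive rate = {rate:.2f}")
# Here group A is approved at a rate of 0.80 and group B at 0.20,
# a gap that deserves a closer look before the model is deployed.
```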
