One of the most important steps in cancer diagnosis is detecting cancer cells and determining where they are located.

Cell movements provide clues to how drugs or different genes affect the spread of tumors in the body, so researchers trace cell nuclei in time-lapse microscopy videos. Tracking cancer cells manually, however, is a difficult task. The project we examine in this field was carried out by Jacquemet, a cell biologist.

Jacquemet trained a machine to track cell nuclei for him. The methods used in this project come from the ZeroCostDL4Mic platform, part of a collection of resources aimed at making artificial intelligence (AI) technology easier to use for scientists with minimal programming experience.

Artificial intelligence encompasses several methods. One of them, machine learning, makes predictions from data that have been processed by hand: a person first measures or labels the relevant features, and the algorithm learns from those. Deep learning, in contrast, can identify complex patterns in raw data. It is used in self-driving cars, speech recognition software, and computer games, and also for finding cell nuclei in large microscopy datasets.
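To make the distinction concrete, here is a minimal sketch in Python (scikit-learn and PyTorch are used only as illustrations; the feature names, labels, and images are dummy data): classical machine learning is given features someone has already measured, while a deep network receives raw pixels and learns its own features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import torch
import torch.nn as nn

# --- Classical machine learning: hand-measured features ---
# Each row holds features a person extracted beforehand
# (hypothetical example: nucleus area, roundness, mean intensity).
features = np.random.rand(100, 3)
labels = np.random.randint(0, 2, size=100)   # dummy labels: 0 = healthy, 1 = tumor
clf = RandomForestClassifier().fit(features, labels)

# --- Deep learning: raw pixels in, learned features ---
# A small convolutional network that takes 64x64 grayscale image patches
# and learns for itself which pixel patterns matter.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 2),               # two output classes
)
raw_patches = torch.rand(4, 1, 64, 64)       # a dummy batch of image patches
scores = cnn(raw_patches)                    # class scores computed from raw data
```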

The origins of deep learning date back to the 1940s, when scientists designed computer models with interconnected layers organized like neurons in the human brain. Decades later, researchers trained these “neural networks” to recognize shapes, words, and numbers. Over the past decade, deep learning has entered the world of biology and medicine.

One of the biggest drivers of the growth of deep learning in this field has been the increase in biological data. Using new genome-sequencing technologies, a single experiment can generate several gigabytes of information. The Cancer Genome Atlas, launched in 2006, has collected information on tens of thousands of samples from 33 types of cancer, amounting to more than 2.5 petabytes (1 petabyte equals 1 million gigabytes).

Profiling the cells

Cancer biologist Neil Carragher took the first step in this direction in 2004. His research showed that artificial intelligence can improve screening processes. Even so, applying artificial intelligence is difficult for biologists, because it requires training in programming.

For this reason, Carragher’s team, together with a team of computational biologists, investigated the effects of various drugs on breast cancer cells. They developed their method further by staining the cells with a fluorescent dye and then using the open-source software CellProfiler to generate per-cell profiles.
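To illustrate the kind of segment-and-measure workflow that CellProfiler automates, here is a minimal sketch using scikit-image on a dummy image; it is not CellProfiler’s own interface or the team’s actual pipeline.

```python
import numpy as np
from skimage import filters, measure

# A dummy fluorescence image standing in for a stained-nucleus channel.
image = np.random.rand(256, 256)

# 1. Separate nuclei from background with an automatic threshold.
mask = image > filters.threshold_otsu(image)

# 2. Label each connected region as one candidate nucleus.
labels = measure.label(mask)

# 3. Measure a profile of morphological features per nucleus.
profiles = measure.regionprops_table(
    labels, intensity_image=image,
    properties=("area", "eccentricity", "mean_intensity"),
)
# 'profiles' is a dict of arrays with one row of measurements per cell,
# which can then be compared between drug-treated and untreated wells.
```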

Last year, the same group investigated how deep learning can improve this process. The researchers downloaded Carragher’s breast cancer data from the Broad Bioimage Benchmark Collection and used it to train a deep neural network that had previously only seen images of cars and animals. By scanning for patterns in breast cancer data, the model was trained to detect meaningful cellular changes. Because the software wasn’t told exactly what to look for, it found features the researchers hadn’t even considered.
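Reusing a network that has only ever seen cars and animals is a standard transfer-learning recipe. The sketch below shows the general pattern with PyTorch and an ImageNet-pretrained ResNet; the batch of images, the number of classes, and the single training step are placeholders, not the group’s actual setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on everyday photographs (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the final layer so it predicts microscopy-specific classes
# (here: a placeholder of 2 phenotype classes).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One fine-tuning step on a dummy batch standing in for breast cancer images.
images = torch.rand(8, 3, 224, 224)
targets = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), targets)
loss.backward()
optimizer.step()
```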

Another group of researchers investigated genetic mutations. They stained lung cancer cells using the Cell Painting protocol and examined how different drugs affected the cells. They found that machine learning could identify meaningful patterns in the images as well as gene-expression measurements in the cells could.

As part of the Cancer Cell Map Initiative, a pilot project that maps the molecular networks found in human cancer, researchers are training a deep learning model to predict drug responses from a cancer’s genome sequence. Such predictions can be a matter of life and death, so their accuracy is critical.

Some researchers resist accepting such results because the mechanisms behind them are not clear: deep neural networks generate answers without revealing how they arrived at them, a problem known as the “black box”.

Systems based on deep learning make decisions and carry out specific tasks through neural network algorithms that loosely imitate human thought patterns.

The neural layers of deep learning systems are not designed and built by engineers; rather, the data they are fed is what shapes and improves how these algorithms learn.

Deep learning in protein localization

Another research group studied the application of deep learning to protein localization. The work is part of the Human Protein Atlas, a multi-year effort to map human proteins. Spatial information shows in which part of the cell each protein resides; knowing this would let researchers gain deeper insights into biology.

This research group invited gamers to help locate proteins in cells using their computers. Within the game EVE Online, players earned points by finding fluorescently tagged proteins, and their contributions were used to improve an artificial intelligence system built for the same task.

The researchers eventually took their images to Kaggle, a platform that challenges machine learning experts to develop their best models for crunching data sets submitted by companies and researchers.

Over three months, more than two thousand teams around the world competed to develop a deep learning model able to detect proteins and their spatial distribution. This was a challenging project: roughly half of human proteins are found in more than one place in the cell, although the nucleus is where they accumulate most.
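Because a single protein can occupy several compartments at once, the competition task is naturally framed as multi-label classification: the network outputs an independent probability for every location instead of picking just one. The following is a minimal sketch of that idea; the tiny backbone, the number of location classes, and the dummy data are placeholders rather than any winning entry.

```python
import torch
import torch.nn as nn

NUM_LOCATIONS = 28                                  # placeholder count of subcellular compartments

# A tiny stand-in backbone; real entries used large pretrained CNNs.
backbone = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),      # 4 channels: one per fluorescent stain
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(16, NUM_LOCATIONS)

# Multi-label setup: one sigmoid probability per location, not a single softmax choice.
loss_fn = nn.BCEWithLogitsLoss()

images = torch.rand(8, 4, 128, 128)                          # dummy 4-channel cell images
targets = torch.randint(0, 2, (8, NUM_LOCATIONS)).float()    # each image may carry several labels

logits = head(backbone(images))
loss = loss_fn(logits, targets)
probabilities = torch.sigmoid(logits)               # e.g. P(nucleus), P(mitochondria), ...
```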


The algorithm was almost as accurate as human experts, but it was faster and more reproducible. In addition, it can express spatial information numerically. When information is expressed in numerical form, it can be integrated with other types of data, and this has helped drive cancer research forward.
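One way to picture this is that the model’s output for each protein is just a vector of numbers that can sit in the same table as other measurements. A hypothetical example (the protein names, scores, and expression values are invented):

```python
import pandas as pd

# Hypothetical per-protein localization scores produced by an image model ...
localization = pd.DataFrame(
    {"protein": ["A", "B"], "p_nucleus": [0.92, 0.10], "p_cytosol": [0.05, 0.85]}
)

# ... merged with an unrelated numeric measurement, e.g. an expression level.
expression = pd.DataFrame({"protein": ["A", "B"], "expression": [3.4, 1.1]})

combined = localization.merge(expression, on="protein")
print(combined)   # one table mixing image-derived and sequencing-derived numbers
```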

The future of deep learning in cellular localization

Many of the tools needed to build deep learning models are available online, including programming frameworks such as TensorFlow, PyTorch, Keras, and Caffe. Researchers with questions about image-analysis tools can turn to an online resource called the Image.sc Forum.

Google’s free cloud service gives AI developers access to several deep-learning tools for microscopy. Everything needed is installed in minutes. With a few clicks, users can run ready-made examples to train a neural network and then apply that network to their own data. No coding is required.

Researchers who want to use larger data sets or train more complex models may need computing resources beyond Google’s free service.
