
The impact of covid-19: to predict the breaking point of the disease from big data by neural networks


by Woohyun SHIN
Paris School of Business - MSc Data Management 2001
Category: Computer Science and Telecommunications > Artificial Intelligence

2.2. Deep Learning Model

To define deep learning, a neural network must be defined first. A neural network is a graph of nodes connected by weighted edges, where each node applies an activation function. Nodes are grouped into layers, and each layer is partially or fully connected to the previous one. As layers are added, these simple functions can learn to recognize increasingly complex patterns in the input signal. The goal of training is to adjust the connections between the nodes of the network, their weights, and the values at each node so that a given input is mapped to the desired output. Deep learning stacks many such layers, in a wide variety of architectures. Neural networks themselves are algorithms that were invented decades ago; attention later shifted to other machine learning algorithms and interest in neural networks declined. However, the recent availability of large datasets (big data), the development of powerful hardware (clusters and GPUs), and the advent of new learning algorithms have made it possible to train large neural networks that go far beyond the performance of many traditional machine learning methods.
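As a rough illustration of this structure, the sketch below (in Python with NumPy) pushes one input vector through a small stack of fully connected layers. The layer sizes, the random weights, and the ReLU activation are illustrative assumptions rather than the model used in this work; training would consist of adjusting the weight matrices and bias vectors so that inputs map to the desired outputs.

import numpy as np

def relu(x):
    # Activation function applied at each node
    return np.maximum(0.0, x)

def dense_layer(inputs, weights, biases, activation):
    # A layer combines the previous layer's outputs through its weights
    # and biases, then passes the result through an activation function.
    return activation(inputs @ weights + biases)

rng = np.random.default_rng(0)

# Illustrative sizes: 4 input features, two hidden layers of 8 nodes, 1 output
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=(1, 4))           # one input sample
h1 = dense_layer(x, W1, b1, relu)     # first hidden layer
h2 = dense_layer(h1, W2, b2, relu)    # second hidden layer
y = h2 @ W3 + b3                      # output node (linear)
print(y)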

For typical machine learning techniques, limited computational performance meant that results did not keep improving as data grew. Deep learning, by contrast, benefits as the amount of data and information grows, and in fact requires much larger datasets than conventional machine learning algorithms. The Deep Neural Network (DNN) has now become the standard for computer vision, speech processing, and some natural language processing tasks, providing better performance than models whose parameters are tuned manually by practitioners, and it has been actively introduced in other areas of machine learning.

The DNN model is very effective when a large amount of data must be processed and the problem is nonlinear, as in weather forecasting. According to the literature on various DNN algorithm models, MLP (multilayer perceptron) models are the most widely used and are considered to have high accuracy [13].
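A minimal sketch of such an MLP is given below, assuming the Keras API bundled with TensorFlow and purely synthetic data; the feature count, layer sizes, and target function are illustrative placeholders, not the dataset or architecture used in this study.

import numpy as np
import tensorflow as tf

# Hypothetical toy data: 6 numeric input features per sample and one
# nonlinear target value; the shapes and the formula are illustrative only.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

# A small MLP: fully connected (Dense) layers with nonlinear activations
model = tf.keras.Sequential([
    tf.keras.Input(shape=(6,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),            # single regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)

print(model.predict(X[:3]))

Stacking hidden layers with ReLU activations is what allows the model to fit the nonlinear relationship; a purely linear model could not capture it.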


The difference between Hadoop and a traditional Relational Database Management System (RDBMS) is that Hadoop can store structured, semi-structured, and unstructured data in any format, whereas an RDBMS can only store structured data. In terms of speed, an RDBMS applies a schema on write, which has the disadvantage of making data ingestion very slow. Hadoop, on the other hand, applies a schema on read, which has the advantage of making writes very fast. Hadoop also has the advantage that a cluster can be built from commodity hardware rather than specialized equipment, because it scales by connecting machines in parallel. These advantages are highly useful in economic terms: the cost of upgrading a single computer's memory or hard disk grows exponentially relative to the performance gained, whereas a parallel architecture lets performance grow arithmetically with the machines added, which is why many companies are now adopting the technology. Hadoop also has an advantage in terms of network efficiency, because its philosophy is to send lightweight code to where the data resides and process it there.
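The schema-on-read idea and the "send the code to the data" philosophy can be illustrated with the sketch below, which uses PySpark as one common way to process files stored in HDFS; the file path, column names, and schema are hypothetical placeholders, not the data pipeline used in this work.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

# Schema-on-read: the raw file was written to storage as-is (fast ingest);
# the structure below is only imposed at the moment the data is read back.
schema = StructType([
    StructField("country", StringType(), True),
    StructField("date", StringType(), True),
    StructField("confirmed_cases", IntegerType(), True),
])

# "cases.csv" and the column names are hypothetical placeholders.
df = spark.read.schema(schema).csv("hdfs:///data/cases.csv", header=True)

# The aggregation is shipped to the cluster nodes that hold the data blocks,
# rather than pulling all of the raw data back to a single machine.
df.groupBy("country").sum("confirmed_cases").show()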
