The old concept of neural networks has made quite a comeback lately, demonstrating its potential a few months ago when Google’s AI managed to thrash the world Go champion. As we all know, however, deep learning technology is nothing new and we have grown accustomed to using it daily (perhaps without even realizing it) in assistants such as Siri or the Markov-chain autocomplete on our cell phones. Big Data and IoT projects, as the two most complementary fields, have also driven the resurgence of this technology.
But do we really know what deep learning is and how to apply it?
Deep learning is really nothing more than a neural network in which a series of hidden layers is established to improve what the network can learn. These neural networks are then combined so that knowledge, and better actions, can be obtained from a single input. Suppose, for example, that we want to spot herds of elephants in images. We might set up two networks:
- The first to identify elephants
- The second to ascertain whether they form herds
These two networks would in turn be divided into new neural networks: one to see whether there are ears, another whether the ears are big, another for the trunk, and likewise for the herds. All of these would be aggregated from the bottom up until we obtain the set of qualities or facts that the last network works with. This is deep learning.
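The bottom-up decomposition can be sketched in a few lines of Python. To be clear, the detectors below are invented stubs, not real trained networks; the point is only to show how low-level scores are aggregated upward into the final decision.

```python
# Toy sketch of the decomposition above. Each sub-detector is a stub
# returning a score in [0, 1]; higher-level "networks" simply combine
# the scores of the networks below them, bottom-up.

def has_ears(image):       return 0.9   # stub: ear detector
def has_big_ears(image):   return 0.8   # stub: ear-size detector
def has_trunk(image):      return 0.95  # stub: trunk detector

def elephant_score(image):
    # First network: combine the low-level cues into an "is elephant" score
    cues = [has_ears(image), has_big_ears(image), has_trunk(image)]
    return sum(cues) / len(cues)

def herd_score(images):
    # Second network: a "herd" needs several likely elephants together
    scores = [elephant_score(img) for img in images]
    confident = [s for s in scores if s > 0.5]
    return len(confident) / max(len(images), 1)

frame = ["img1", "img2", "img3"]  # hypothetical crops from one photo
print(herd_score(frame))          # → 1.0
```

In a real deep network the same idea holds, except that each layer learns its own cues instead of having them hand-written.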
How might we implement it? This is the key question in AI, because there are two main aspects to clear up:
- Establishing the neural network model (the decomposition method described above)
- Implementing the neural network
As engineers, we can solve the first point by dint of mathematics and statistics. As regards the second, we at GMV are working with two principal tools: TensorFlow and Weka.
TensorFlow is an open-source software library for numerical computation using data flow graphs. The main difference between TensorFlow and other neural-network libraries is that it is a description language for neural networks based on data flow graphs. In these graphs the nodes represent mathematical operations, while the edges represent the “tensors” that flow between them (the data flows). This, for me, is the most interesting aspect of TF. The worst aspect… it is fiendishly complex to use at the conceptual level (at least for me).
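To give a feel for the dataflow idea, here is a toy graph evaluator in plain Python. This is deliberately not TensorFlow’s actual API; it is a minimal sketch of the concept TF is built on: operations at the nodes, values flowing along the edges.

```python
# Minimal sketch of a data flow graph, in the spirit of TensorFlow's
# model: nodes are operations, edges carry the values ("tensors")
# that flow between them. Illustrative toy only, not TF's API.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # mathematical operation at this node
        self.inputs = inputs  # incoming edges (other nodes)

    def eval(self):
        # Evaluate the inputs first, then apply this node's operation:
        # data "flows" along the edges toward the output node.
        return self.op(*(n.eval() for n in self.inputs))

def const(v):
    return Node(lambda: v)  # leaf node holding a constant value

# Graph for (2 + 3) * 4
a, b, c = const(2), const(3), const(4)
total = Node(lambda x, y: x * y, Node(lambda x, y: x + y, a, b), c)
print(total.eval())  # → 20
```

In TensorFlow you describe the graph in much the same declarative way, and the library then takes care of executing it efficiently, including on GPUs.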
It is important to point out here that TF was originally developed by researchers and engineers on the Google Brain team, no less.
We at GMV, for our part, are now cutting our neural-network teeth, and we have implemented a prototype for preliminary coding of air-quality data taken by sensor networks.
This coding process is one of the operations carried out in community control centers. When sensor data reaches the control center, its value is recorded, and it is then the task of an expert to judge the coding of each reading (normally taken at ten-minute intervals) and decide whether or not it is valid (it might be invalid because the sensor is under maintenance, for example, or the scale has bottomed out). Just imagine the delay this entails if the population has to be warned of an above-threshold, alarm-tripping reading. For this very reason, many public-authority websites carry the caveat that the figures shown have not yet been vetted.
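To make the task concrete, here is a hand-written version of the per-reading decision the expert makes. The field names and rules are invented for illustration (the real prototype’s features are not described here); in the actual system a neural network learns this decision from historical codings instead of following fixed rules.

```python
# Illustrative sketch only: a rule-based validity check mimicking the
# decision made for each ten-minute air-quality reading. Field names
# and conditions are assumptions for the example, not the real system.

def code_reading(reading):
    """Return 'valid' or 'invalid' for one sensor reading."""
    if reading["under_maintenance"]:
        return "invalid"                      # sensor offline for work
    if reading["value"] <= reading["scale_min"]:
        return "invalid"                      # scale has bottomed out
    return "valid"

sample = {"value": 42.0, "scale_min": 0.0, "under_maintenance": False}
print(code_reading(sample))  # → valid
```

The advantage of learning this mapping with a neural network, rather than coding rules by hand, is that it can pick up the subtler patterns an expert applies without being able to write them down.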
The prototype we have developed carries out this operation in seconds, freeing the experts for other, much more productive tasks, all with 99.8% accuracy thanks to the application of basic neural networks. Imagine the huge saving in money and time for the control center.
As you can see, I’m only two steps away from taking over the world.
Author: Ángel Cristóbal Lázaro Ríos
Engineer at GMV’s Seville office
The author’s views are entirely his own and may not reflect the views of GMV