Eden brings you embedded AI on a cloudless platform

Christian Morales

IoE Corp's Lead Internet Marketer

Lucas Thil

IoE Corp Developer
Published - 04/04/2022 | Reading time - 14 min 48 sec

Artificial Intelligence (AI) is, at its core, mathematical optimization that learns autonomously from experience to find fast paths to solving a wide range of problems. In the following lines, we’ll present in detail what AI is, starting with a discussion of what intelligence is and a brief history of the beginnings of AI, i.e., the creation of the Turing Machine. We’ll continue digging deeper into the fascinating world of AI, explaining the key differences between AI and Expert Systems. We also look at specific types of AI, e.g., Deep Learning (DL) and Machine Learning (ML), and the areas in which AI can be used, such as prediction, optimization, and planning.

To finalize, we'll conclude with what AI means to our company, Internet of Everything Corporation, and how we are using Artificial Intelligence within our decentralized autonomous virtual software technology projects: a solution that helps accelerate the deployment of Internet of Things (IoT) devices through the Eden 1.0 System, which works inside Online Private Gardens and functions with beyond-edge device clustering. Are you ready to begin?

What is Intelligence?

Intelligence is the ability to adapt to change, make smart decisions, and understand what matters in an environment. One dictionary description defines it as the ability to learn, understand, or deal with new or trying situations: REASON. We can summarize it as the skilled use of reason. Another dictionary definition relates intelligence to the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria, such as tests (Merriam-Webster).

The term ‘Artificial Intelligence’ was coined by Professor John McCarthy in 1955, in a proposal for the 1956 Dartmouth Conference, the first artificial intelligence conference. The objective was to explore ways to build a machine that could reason like a human: one capable of abstract thought, problem-solving, and self-improvement.

· Prof. McCarthy: “… every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

Although McCarthy coined the term and was the founder of MIT CSAIL and SAIL, other important figures built the foundations of AI before him, notably Alan Turing. Turing formalized the concept of the computer as a machine for carrying out mental and intellectual tasks: logic systems and automated learning procedures designed to solve a wide range of problems based on experience.

A brief history of Artificial Intelligence

Now that we know what intelligence is and who coined the term Artificial Intelligence, let’s go back to the birth of AI. As mentioned above, significant figures like Alan Turing began the story of Artificial Intelligence. AI as we know it started with the paper “Computing Machinery and Intelligence” (Mind, October 1950), in which Alan Mathison Turing introduced the concept of intelligence in computers and formulated the Turing Test to identify whether a computer could be said to "think". The idea was that if a human interrogator could not tell the computer apart from a human being through conversation, the computer could be considered to “think”.

From this point on, AI has gone through various developments and halts. It began to take shape in the first AI research workshops at the aforementioned 1956 Dartmouth Conference, which explored the possibility that human intelligence could be reduced to step-by-step symbol manipulation, known as Symbolic AI or, as John Haugeland named it, GOFAI (Good Old-Fashioned Artificial Intelligence). This step was made possible by access to digital computers in the mid-1950s.

By the 1960s and 1970s, AI research had become a priority: it was heavily funded by the U.S. Department of Defense, and AI labs were established worldwide. But by the mid-1970s, progress had slowed due to unforeseen difficulties, and funding was cut. This period later became known as the AI winter. It was not until the early 1980s that AI came back to life, driven by the commercial success of Expert Systems; by the mid-80s the market had passed the billion-dollar mark, while countries like the U.S. and U.K. also injected funding at the academic level.

This new wave of AI research lasted only a few years, as one of its flagships, the Lisp Machine market, collapsed in 1987. As a consequence, another AI winter set in, and it was not until the late 1990s and early 21st century, with specific solutions to specific problems, that AI was back on the scene. Today, roughly one in five companies reports having incorporated AI into some offering or process. But this new path has created a stream of researchers who understand that AI is deviating from its initial purpose: to create versatile, fully intelligent machines.

Key stepping stones in AI

Having looked at the foundations that brought us to AI's current status, we now turn to the stepping stones that continue to advance artificial intelligence: the improvements in deep learning.

Long Short-Term Memory (LSTM)

Long Short-Term Memory was introduced by Hochreiter & Schmidhuber (1997) as a solution to the long-term dependency problem of Recurrent Neural Networks (RNNs). It allows a Neural Network (NN) to perform at a higher level than other Artificial Neural Networks (ANNs). The breakthrough is that LSTM, a type of RNN, makes it possible to keep track of earlier inputs over longer periods of time, thus eliminating the long-term dependency problem seen in common RNNs.

This development was achieved by adding to the RNN a memory cell with an input gate, an output gate, and a forget gate. The cell inside the node can remove or add information to the cell state, carefully regulated by the gate structures. The forget gate decides what information is going to be omitted from the cell state, and the input gate determines what new information is relevant to store in it.

Once these gates have run, the cell state is updated with the information gathered by the input gate, and the output gate decides what information is going to be output. Through this process, the LSTM surpasses the abilities of common RNNs: it not only uses present and past information to reach a conclusion but also stores past information for longer periods of time to combine with present input data.
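To make the gate mechanics concrete, here is a minimal sketch of a single LSTM time step in plain Python with NumPy; the variable names and dimensions are illustrative and not taken from any particular framework:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step. W (shape 4n x (n+m)) maps [h_prev; x_t] to the
    four gate pre-activations; b (shape 4n) is the matching bias."""
    z = W @ np.concatenate([h_prev, x_t]) + b
    n = h_prev.shape[0]
    f = sigmoid(z[0:n])        # forget gate: what to drop from the cell state
    i = sigmoid(z[n:2*n])      # input gate: what new information to store
    g = np.tanh(z[2*n:3*n])    # candidate values for the cell state
    o = sigmoid(z[3*n:4*n])    # output gate: what part of the cell to expose
    c_t = f * c_prev + i * g   # updated cell state (long-term memory)
    h_t = o * np.tanh(c_t)     # new hidden state (short-term output)
    return h_t, c_t

# Illustrative usage with random weights.
n, m = 4, 3                    # hidden size, input size
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * n, n + m)) * 0.1
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
h, c = lstm_step(rng.normal(size=m), h, c, W, b)
```

In practice one would use a framework implementation (e.g., a recurrent layer in a deep learning library) rather than hand-rolled code; the sketch only shows how the three gates regulate the cell state.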

This approach helps AI work more like a human brain, retaining and forgetting information to help predict the current state. As such, LSTM works well for tasks such as:

· Connected handwriting recognition
· Speech recognition
· Anomaly detection in network traffic
· IDSs (intrusion detection systems)

BERT Transformers

Bidirectional Encoder Representations from Transformers (BERT) is a Neural Network used for Natural Language Processing (NLP) tasks; it uses attention mechanisms to understand the semantics and meaning of the text itself. It has been among the best-performing models of recent years, and in some areas it performs better than humans on text comprehension.

The innovation BERT introduced to Transformers is its capacity to read context bidirectionally; in other words, it enables the representation to fuse the left and the right context, which allows pre-training a deep bidirectional Transformer. In addition, it is also capable of “next sentence prediction” by jointly pre-training text-pair representations.

Before BERT, NLP Neural Networks could only read from left to right or from right to left. This produces errors when translating or identifying the meaning of words in different contexts, but BERT mitigates the problem because it can identify the different meanings of the same word in different contexts.

For example, BERT can understand the difference between “The wolf ate the lamb” and “The lamb ate the wolf”, something that wasn’t possible before bidirectional training. Another example of the accuracy of bidirectional reading appears in translation, as BERT can identify the different structures between languages. It doesn’t translate word by word but attends to the specifics of each language; e.g., when translating “The house is red” into Spanish, it knows it has to use the feminine “la”, so it would produce “La casa es roja”.
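As an illustration, the following sketch uses the Hugging Face transformers library (our choice for the example; the article doesn't prescribe a toolkit) to show BERT predicting a masked word from both its left and right context:

```python
from transformers import pipeline

# Masked-language-model head on top of a pre-trained BERT checkpoint.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT attends to both sides of the [MASK] token, so these two
# sentences yield different top predictions for the missing word.
print(unmasker("The wolf ate the [MASK]."))
print(unmasker("The [MASK] ate the wolf."))
```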

In conclusion, BERT has surpassed LSTM's capabilities because it can remember long pieces of text and handle different types of text. In practical terms, this means it can be used for a wide range of NLP tasks, and it has obtained great results in:

· General Language Understanding Evaluation (GLUE)
· Stanford Question Answering Dataset (SQuAD)
· Situations With Adversarial Generations (SWAG)
· Multi-Genre Natural Language Inference (MNLI)
· Quora Question Pairs (QQP)
· Stanford Sentiment Treebank (SST-2)
· Corpus of Linguistic Acceptability (CoLA)

AI vs Expert Systems

We have already indicated that the field of Artificial Intelligence went through several ‘winters’ in which its development slowed down. The causes lay in the limits of computing power at the time, but also in the success of Expert Systems: agents making decisions.

AI had been a heuristic approach: from the moment AI research began after the Dartmouth Conference, researchers in the following decades were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence, and they considered this the goal of their field. But tangible results didn’t materialize, and the promise of this heuristic computational method slowly died out.

Given this inability to produce machines with artificial general intelligence, another approach was developed: the expert system. This approach focused on specific activities in which experts had to be consulted for advice to reach good decisions. It therefore consists of artificial intelligence software that builds knowledge bases of expert information for companies to use.

Unlike the first promise of AI, expert systems are not capable of learning autonomously, so they have to be updated manually, and their development is focused on a specific domain. This leads to limitations: they cannot produce correct results from incomplete knowledge, and building them requires extensive input from human experts. Expert systems are also very hard to maintain over time, compared to AI, which can always be automatically retrained and improved, thereby saving costs.
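To illustrate the contrast, here is a toy sketch in which every name and rule is invented for the example: a hand-written rule base that must be edited manually whenever knowledge changes, next to a learned model that is simply retrained when new labeled data arrives:

```python
from sklearn.tree import DecisionTreeClassifier

# A toy, hand-written rule base: every change requires a manual edit
# by a domain expert, and inputs outside the rules get no answer.
RULES = {
    ("cough", "fever"): "flu suspected",
    ("fever", "rash"): "measles suspected",
}

def expert_system(symptoms):
    return RULES.get(tuple(sorted(symptoms)), "no rule matches")

print(expert_system(["fever", "cough"]))  # -> 'flu suspected'

# A learned model, by contrast, is simply retrained as data grows
# (features here are binary indicators for [fever, cough, rash]).
X = [[1, 1, 0], [1, 0, 1], [0, 1, 0]]
y = ["flu", "measles", "cold"]
model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[1, 1, 0]]))          # -> ['flu']
```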

Types of Artificial Intelligence

As we have seen, AI has gone through various approaches that have led to different types of solutions. Keeping it simple, we can indicate that AI types consist of smart decision systems and Neural Networks.

Smart decision systems are based on statistical approaches, as in Machine Learning. The AI trains a set of weights and can find patterns invisible to other techniques or to humans. In recent years, this set of techniques has outperformed every other prediction and classification method. The issue is that it requires a lot of data, extensive training time, and significant computational resources.

Automated or simple classification techniques differentiate basic data and derive rules, e.g., K-means clustering and decision trees. They are basic, draw simple boundaries to differentiate the data, do not require a lot of data, and can perform well at creating categories. On the other hand, their interpretation depends on the algorithms used, and automated learning approaches are often again the best way to perform classification.
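A minimal sketch of such a simple technique, assuming scikit-learn and synthetic data, shows K-means discovering two categories without any labels:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two blobs of 2-D points; K-means recovers the categories unsupervised.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)   # approximately (0, 0) and (5, 5)
print(kmeans.labels_[:5])        # cluster id assigned to each point
```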

More complex kinds exist with mathematical improvements, such as Support Vector Machines (SVMs) or XGBoost, which are highly popular and efficient. They combine mathematical differentiation or statistical approaches to discriminate the data between categories.
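A short sketch of this idea, again assuming scikit-learn and a synthetic dataset, trains an SVM to discriminate between two categories:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A synthetic two-class problem; the SVM learns the boundary that
# maximizes the margin between the two categories.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```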

Neural Networks are a subfield of machine learning in which networks built from sets of weights activate according to the data passed through them, mimicking the activation of neurons in the brain. Various architectures and types exist, from the basic perceptron through ANNs, the aforementioned RNN, LSTM, and BERT, to various kinds of Deep Learning architectures. They are extremely popular because they can derive information and rules that humans would not be able to detect.
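To ground this, here is a self-contained sketch of the basic perceptron mentioned above, learning the logical AND function; the training loop is the classic perceptron rule, written out purely for illustration:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron: a single 'neuron' whose weighted sum of
    inputs either activates (1) or stays silent (0)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            pred = 1 if w @ x_i + b > 0 else 0
            w += lr * (y_i - pred) * x_i   # nudge weights toward the target
            b += lr * (y_i - pred)
    return w, b

# Learn the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if w @ x + b > 0 else 0 for x in X])  # -> [0, 0, 0, 1]
```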

These types of AI are used in areas such as robotics, where input data makes robots aware of their environment. Neural networks are great for image processing, so they are used in self-driving cars, and, as we saw in detail with BERT, they are great for NLP and are therefore used in search engines.

It is worth mentioning that certain methods have limitations, such as overfitting, where a model memorizes its training data and fails to generalize to new inputs, as well as long training times and high power consumption. The latter issues arise from current computer architectures.

Beyond these types of AI, three different stages of artificial intelligence can be distinguished:

· Artificial Narrow Intelligence (ANI)
· Artificial General Intelligence (AGI)
· Artificial Super Intelligence (ASI)

Today, almost all advances in AI fall under the category of ANI, also known as Weak AI. The AGI or Strong AI stage will be reached when machines are capable of thinking and making decisions just as humans do. The ASI stage indicates that machines have surpassed the capabilities of humans; ASI is currently a hypothetical situation only seen in movies and science fiction books.

A more accurate description of weak and strong AI is whether an AI is actually smart (strong) or only pretends to be smart (weak). Currently, we don't know whether an AI actually understands or only blindly reproduces what it saw, and, as mentioned, it is often the latter. Will it be possible to break the barrier? That depends, but more complex forms of AI, such as Transformers, are getting closer. In this sense, AGI would be a possible direction for the field to begin to reach strong, truly smart AI.

Artificial Intelligence’s Areas

As the breadth of intelligence is immense, from general knowledge to specific technical knowledge, AI researchers have broken the field down into the areas of problems where AI is used. The major efforts are made in the following AI areas:

· Predicting → Based on data, it classifies an input into a specific output category; an example is predicting which patients have cancer based on image data (see the sketch after this list).
· Optimizing → Used in drug research to find the molecular dispositions most likely to affect the treatment of certain diseases, or to optimize the design of components in industrial processes to minimize resources and operating costs.
· Planning → A smart decision process for forecasting, such as predicting an increase in demand at certain stores and reasoning about the best decisions to meet that demand in supply chains.
· Perception → How aware an agent is of its environment; think of robots working in a warehouse that need to transport boxes optimally while keeping human workers safe. In the context of speech and text, perception means understanding the true meaning of a sentence, what it entails, and how to formulate clear answers in plain language.
· Reasoning → Fully or semi-automated logical processes able to derive conclusions from predicates, creating general reasoning systems.
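To make the prediction area concrete, here is an illustrative sketch using scikit-learn's built-in breast cancer dataset, whose features are measurements derived from digitized images; this is our example, not a system described in the article:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Features are computed from digitized images of cell nuclei; the task
# is to predict whether a tumor is malignant or benign.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```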

Artificial Intelligence’s Limitations

AI often requires an enormous amount of data and is costly to operate, but it can deliver results that outperform any other decision process when operated by state-of-the-art engineers and researchers.

Even though Artificial Intelligence has grown into a multi-billion-dollar market, AI still has limitations, due to a current approach that relies heavily on centralization. Our brains are not computers: they are made of neuronal connections built on biological hardware, whereas AIs run on computer chips executing binary code instructions serially. This hardware limits progress in developing next-generation AIs; they cannot be centralized on one chip but need more power.

To overcome this issue and begin developing next-generation AI, the best solution is to mimic our brains, which are made up of several areas, each processing a specific task: a decentralized architecture.

AI and IoT

At Internet of Everything Corporation, we have developed the Eden System, a new kind of computing platform and the next step in enabling greater AI capabilities. The need for more computational power is met by using Eden for scalable solutions and alternatives to the cloud, harnessing full power for training better models.

This groundbreaking approach makes it possible to apply artificial intelligence to massive IoT deployments sustainably and cost-efficiently. Our projects therefore encompass a wide range of solutions for, e.g., city infrastructure, supply chain management, energy and utilities, and space and defense. To achieve this, we built the Eden design on a decentralized model of scalable device clustering.

In this way, data is processed into information locally in the Eden Edge Cluster, so raw data doesn’t need to be pushed to the public cloud: a model that is compute-efficient and cost-effective because it saves on bandwidth and external resources. The orchestration of computing and storage is done via service manifests that describe service rules, policies, and logic; the underlying orchestration mechanics are managed by an autonomous knowledge-based AI using network consensus over a blockchain as the deciding mechanism.
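As a purely hypothetical illustration of what such a service manifest might contain (the article does not publish Eden's manifest schema, so every key and value below is an assumption made for the sketch), one could imagine something like:

```python
# Hypothetical illustration only: none of these field names come from
# Eden's actual schema; they merely mirror the rules/policies/logic
# split described in the paragraph above.
service_manifest = {
    "service": "building-energy-monitor",
    "rules": {"max_latency_ms": 50, "data_locality": "edge-only"},
    "policies": {"raw_data_leaves_cluster": False},
    "logic": {"model": "consumption-forecast", "retrain_interval_h": 24},
}
```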

Eden Depo services are generated and deployed similarly to container images; the difference is that Eden is Message Passing Interface (MPI) and AI cluster enabled by default. Installing our Eden system helps you audit your products, workflow, or even your organization; the audit report clearly tells you how different areas can be enhanced through optimization for:

· Profit
· Cost
· Consumer satisfaction

In addition, the Internet of Everything AI approach comes with sustainability-first thinking, which not only enables carbon reduction and lower power consumption but is inherently cost-saving. Using Edge AI, you can enhance current products and create new value by using datasets generated in the field on devices, refining that data into new, valuable information.

Some examples of what the Eden system’s Edge AI provides for informed infrastructure: alert and alarm data can be used to predict hardware failures before they happen, and operational data can be analyzed to find an optimal level of operation. In other areas, sensors placed in strategic building locations gather information on energy usage and help predict consumer behavior.
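As an illustrative sketch of the alert-data idea (our simplification, not Eden's actual pipeline), a rolling z-score can flag sensor readings that drift away from normal operation before a failure occurs:

```python
import numpy as np

def anomaly_flags(readings, window=24, threshold=3.0):
    """Flag readings whose z-score against the previous `window`
    samples exceeds `threshold` - a simple stand-in for predicting
    hardware trouble from alert/alarm data."""
    readings = np.asarray(readings, dtype=float)
    flags = []
    for t in range(window, len(readings)):
        recent = readings[t - window:t]
        z = (readings[t] - recent.mean()) / (recent.std() + 1e-9)
        flags.append(abs(z) > threshold)
    return flags

readings = [20.0] * 48 + [35.0]     # a sudden spike after stable data
print(anomaly_flags(readings)[-1])  # -> True (flagged as anomalous)
```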

If you would like further information, visit our Contact Us page; we will answer your questions or get in touch with you as soon as possible.

Talk to us to discover our range of solutions
Contact Us