What Are Neural Networks?
Neural networks (NNs), also known as artificial neural networks (ANNs), are computational models — essentially algorithms — and a branch of machine learning. Neural networks have a unique ability to extract meaning from imprecise or complex data in order to find patterns and detect trends that are too convoluted for humans or for other computer techniques. Neural networks power everyday conveniences in numerous ways, including ridesharing apps, Gmail smart sorting, and product suggestions on Amazon.
The most groundbreaking aspect of neural networks is that once trained, they continue to learn on their own. In this way, they emulate the human brain, which is made up of neurons, the fundamental building block of information transmission in both biological and artificial networks.
How the Biological Model of Neural Networks Functions
What are neural networks emulating in human brain structure, and how does training work?
All mammalian brains consist of interconnected neurons that transmit electrochemical signals. Neurons have several components: the body, which includes a nucleus and dendrites; axons, which connect to other cells; and axon terminals or synapses, which transmit information or stimuli from one neuron to another. Combined, this unit carries out communication and integration functions in the nervous system. The human brain has a massive number of processing units (86 billion neurons) that enable the performance of highly complex functions.
How Artificial Neural Networks Function
ANNs are statistical models designed to adapt and self-program, using learning algorithms to understand and sort concepts, images, and photographs. Developers arrange these processing units in layers that operate in parallel. The input layer is analogous to the dendrites in the human brain’s neural network. The hidden layer is comparable to the cell body and sits between the input layer and the output layer (which is akin to the synaptic outputs in the brain). In the hidden layer, artificial neurons take in a set of inputs scaled by synaptic weights, which represent the amplitude or strength of the connections between nodes. These weighted inputs generate an output through a transfer function to the output layer.
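The mechanics described above can be sketched in a few lines of Python. This is a toy illustration; the weights, bias values, and choice of a sigmoid transfer function are arbitrary assumptions, not taken from any particular system:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    passed through a sigmoid transfer function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes output to (0, 1)

def layer(inputs, weight_matrix, biases):
    """A hidden layer is simply several neurons sharing the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Two inputs flowing through a hidden layer of three neurons.
hidden = layer([0.5, -1.2],
               [[0.4, 0.6], [-0.3, 0.8], [1.1, -0.5]],
               [0.0, 0.1, -0.2])
print(hidden)  # three activations, each strictly between 0 and 1
```

Stacking another such layer on top of `hidden` would give the output layer; training then consists of adjusting the weights and biases.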
How Do You Train a Neural Network?
Once you’ve structured a network for a particular application, training (i.e., learning) begins. There are two approaches to training. In supervised learning, the network is provided with desired outputs, either through manual grading of its performance or by supplying matched sets of inputs and desired outputs. In unsupervised learning, the network makes sense of inputs without outside assistance or instruction.
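A minimal supervised-learning sketch, assuming the classic perceptron update rule: the network sees inputs together with desired outputs (here, the logical AND function) and nudges its weights whenever its prediction disagrees with the target.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Supervised learning: adjust weights whenever the predicted
    output disagrees with the desired output."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when the prediction is correct
            w[0] += lr * err * x1        # nudge each weight toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Input/output pairs for logical AND -- these are the "desired outputs."
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1]
```

In unsupervised learning, by contrast, no `target` column exists; the algorithm must find structure (clusters, features) in the inputs alone.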
There’s still a long way to go in the area of unsupervised learning. “Getting information from unlabeled data, [a process] we call unsupervised learning, is a very hot topic right now, but clearly not something we have cracked yet. It’s something that still falls in the challenge column,” observes Université de Montréal’s Yoshua Bengio in the article “The Rise of Neural Networks and Deep Learning in Our Everyday Lives.”
Bengio is referring to the fact that the number of connections in artificial neural networks can’t match the number of connections in the human brain, but the former’s ability to catch up may be just over the horizon. Moore’s Law, which states that overall processing power for computers will double every two years, gives us a hint about the direction in which neural networks and AI are headed. Intel CEO Brian Krzanich affirmed at the 2017 Consumer Electronics Show that “Moore’s Law is alive and well and flourishing.” Since their inception in the mid-20th century, neural networks and their ability to “think” have been changing our world at an incredible pace.
A Brief History of Neural Networks
Neural networks date back to the early 1940s when mathematicians Warren McCulloch and Walter Pitts built a simple algorithm-based system designed to emulate human brain function. Work in the field accelerated in 1957 when Cornell University’s Frank Rosenblatt conceived of the perceptron, the groundbreaking algorithm developed to perform complex recognition tasks. During the four decades that followed, the lack of computing power necessary to process large amounts of data put the brakes on advances. In the 2000s, thanks to the advent of greater computing power and more sophisticated hardware, as well as to the existence of vast data sets to draw from, computer scientists finally had what they needed, and neural networks and AI took off, with no end in sight. To understand how much the field has expanded in the new millennium, consider that ninety percent of internet data has been created since 2016. That pace will continue to accelerate, thanks to the growth of the Internet of Things (IoT).
For more background and an expansive timeline, read “The Definitive Guide to Machine Learning: Business Applications, Techniques, and Examples.”
Why Do We Use Neural Networks?
Neural networks’ human-like attributes and ability to complete tasks in infinite permutations and combinations make them uniquely suited to today’s big data-based applications. Because neural networks also have the unique capacity (known as fuzzy logic) to make sense of ambiguous, contradictory, or incomplete data, they are able to use controlled processes when no exact models are available.
According to a report published by Statista, in 2017, global data volumes reached close to 100,000 petabytes (one petabyte is one million gigabytes) per month; they were forecast to reach 232,655 petabytes by 2021. With businesses, individuals, and devices generating vast amounts of information, all of that big data is valuable, and neural networks can make sense of it.
Attributes of Neural Networks
With the human-like ability to problem-solve — and apply that skill to huge datasets — neural networks possess the following powerful attributes:
Adaptive Learning: Like humans, neural networks model non-linear and complex relationships and build on previous knowledge. For example, software uses adaptive learning to teach math and language arts.
Self-Organization: The ability to cluster and classify vast amounts of data makes neural networks uniquely suited for organizing the complicated visual problems posed by medical image analysis.
Real-Time Operation: Neural networks can (sometimes) provide real-time answers, as is the case with self-driving cars and drone navigation.
Prognosis: NN’s ability to predict based on models has a wide range of applications, including for weather and traffic.
Fault Tolerance: When significant parts of a network are lost or missing, neural networks can fill in the blanks. This ability is especially useful in space exploration, where the failure of electronic devices is always a possibility.
Tasks Neural Networks Perform
Neural networks are highly valuable because they can carry out tasks to make sense of data while retaining all their other attributes. Here are the critical tasks that neural networks perform:
Classification: NNs organize patterns or datasets into predefined classes.
Prediction: They produce the expected output from given input.
Clustering: They identify a unique feature of the data and classify it without any knowledge of prior data.
Associating: You can train neural networks to "remember" patterns. When you show an unfamiliar version of a pattern, the network associates it with the most comparable version in its memory and reverts to the latter.
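The associating task can be sketched without a full neural network. In this toy example (the stored patterns and the Hamming-distance matching rule are illustrative assumptions, not any production technique), the system recalls the stored pattern most comparable to a corrupted probe:

```python
def hamming(a, b):
    """Number of positions where two binary patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def recall(memory, probe):
    """Associative recall: return the stored pattern closest to the probe."""
    return min(memory, key=lambda stored: hamming(stored, probe))

# Patterns the system has "memorized" (e.g., idealized letter shapes).
memory = [(1, 1, 1, 0, 0, 0),
          (0, 0, 0, 1, 1, 1),
          (1, 0, 1, 0, 1, 0)]

noisy = (1, 1, 0, 0, 0, 0)       # corrupted version of the first pattern
print(recall(memory, noisy))     # (1, 1, 1, 0, 0, 0)
```

A Hopfield network (covered in the algorithm guide below) performs this same recall, but via neuron dynamics rather than an explicit distance search.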
Neural networks are fundamental to deep learning, a robust set of NN techniques that lends itself to solving abstract problems, such as bioinformatics, drug design, social network filtering, and natural language translation. Deep learning is where we will solve the most complicated issues in science and engineering, including advanced robotics. As neural networks become smarter and faster, we make advances on a daily basis.
Real-World and Industry Applications of Neural Networks
As an August 2018 New York Times article notes, “The companies and government agencies that have begun enlisting the automation software run the gamut. They include General Motors, BMW, General Electric, Unilever, MasterCard, Manpower, FedEx, Cisco, Google, the Defense Department, and NASA.” We’re just seeing the beginning of neural network/AI applications changing the way our world works.
Engineering Applications of Neural Networks
Engineering is where neural network applications are essential, particularly in the “high assurance systems that have emerged in various fields, including flight control, chemical engineering, power plants, automotive control, medical systems, and other systems that require autonomy.” (Source: Application of Neural Networks in High Assurance Systems: A Survey.)
We asked two experts in the engineering sector about how their applications improve retail, manufacturing, oil and gas, navigation, and information retrieval in office environments.
Rees offers some everyday examples of Wi-Fi use: “Supermarket chains use Wi-Fi scanners to scan produce in and out of their distribution centers and individual markets. If the Wi-Fi isn’t working well, entire businesses are disrupted. Manufacturing and oil and gas concerns are also good examples of businesses where Wi-Fi is mission critical, because ensuring reliability and optimization is an absolute requirement,” he says.
Wi-Fi is great, but it takes a lot of oversight to do its job. “Most enterprise or large-scale wireless local area network solutions require near-constant monitoring and adjustment by highly trained Wi-Fi experts, an expensive way to ensure the network is performing optimally,” Rees points out. “KodaCloud solves that problem through an intelligent system that uses algorithms and through adaptive learning, which generates a self-improving loop,” he adds.
Rees shares how KodaCloud technology takes advantage of neural networks to continuously improve: “The network learns and self-heals based on both global and local learning. Here’s a global example: The system learns that a new Android operating system has been deployed and requires additional configuration and threshold changes to work optimally. Once the system has made adjustments and measuring improvements necessitated by this upgrade, it applies this knowledge to all other KodaCloud customers instantaneously, so the system immediately recognizes any similar device and solves issues. For a local example, let’s say the system learns the local radio frequency environment for each access point. Each device then connects to each access point, which results in threshold changes to local device radio parameters. Globally and locally, the process is a continuous cycle to optimize Wi-Fi quality for every device.”
McOran-Campbell explains how Dawn functions based on two levels of biology: “At the first level, we use ANNs to process raw information. There are three different types of networks we use: recurrent neural networks, which use the past to inform predictions about the future; convolutional neural networks, which use ‘sliding’ bundles of neurons (we generally use this type to process imagery); and more conventional neural networks, i.e., actual networks of neurons. Conventional neural networks are very useful for problems like navigation, especially when they are combined with recurrent elements.
“At the more sophisticated, second level, Dawn’s structure emulates the best architecture that exists for processing information: the human brain. This allows us to break down the highly complex problem of autonomy the same way biology does: with compartmentalized ‘cortexes,’ each one with their neural networks and each with their communication pathways and hierarchical command structures. The result is that information flows in waves through the cortexes in the same way that it does in the brain. [In both instances, the process is optimized] for effectiveness and efficiency in information processing,” he explains.
Here’s a list of other neural network engineering applications currently in use in various industries:
Aerospace: Aircraft component fault detectors and simulations, aircraft control systems, high-performance auto-piloting, and flight path simulations
Automotive: Improved guidance systems, development of power trains, virtual sensors, and warranty activity analyzers
Electronics: Chip failure analysis, circuit chip layouts, machine vision, non-linear modeling, prediction of the code sequence, process control, and voice synthesis
Manufacturing: Chemical product design analysis, dynamic modeling of chemical process systems, process control, process and machine diagnosis, product design and analysis, paper quality prediction, project bidding, planning and management, quality analysis of computer chips, visual quality inspection systems, and welding quality analysis
Mechanics: Condition monitoring, systems modeling, and control
Robotics: Forklift robots, manipulator controllers, trajectory control, and vision systems
Telecommunications: ATM network control, automated information services, customer payment processing systems, data compression, equalizers, fault management, handwriting recognition, network design, management, routing and control, network monitoring, real-time translation of spoken language, and pattern recognition (faces, objects, fingerprints, semantic parsing, spell check, signal processing, and speech recognition)
Business Applications of Neural Networks
Real-world business applications for neural networks are booming. In some cases, NNs have already become the method of choice for businesses that use hedge fund analytics, marketing segmentation, and fraud detection. Here are some neural network innovators who are changing the business landscape.
“Neural nets and AI have incredible scope, and you can use them to aid human decisions in any sector. Deep learning wasn’t the first solution we tested, but it’s consistently outperformed the rest in predicting and improving hiring decisions. We trained our 16-layer neural network on millions of data points and hiring decisions, so it keeps getting better and better. That’s why I’m an advocate for every company to invest in AI and deep learning, whether in HR or any other sector. Business is becoming more and more data driven, so companies will need to leverage AI to stay competitive,” Donner recommends.
The field of neural networks and its use of big data may be high-tech, but its ultimate purpose is to serve people. In some instances, the link to human benefits is very direct, as is the case with OKRA’s artificial intelligence service.
Like many AI companies, OKRA leverages its technology to make predictions using multiple, big data sources, including CRM, medical records, and consumer, sales, and brand measurements. Then, Bouarfa explains, “We use state-of-the-art machine learning algorithms, such as deep neural networks, ensemble learning, topic recognition, and a wide range of non-parametric models for predictive insights that improve human lives.”
According to the World Cancer Research Fund, melanoma is the 19th most common cancer worldwide. One in five people develop skin cancer, and early detection is essential to prevent skin cancer-related death. There’s an app for that: users can perform photo self-checks with a smartphone.
Enevoldson adds that the phone app works fast: “In just 30 seconds, the app indicates which spots on the skin need to be tracked over time and gives the image a low, medium, or high-risk indication. The most recent data shows that our service has a specificity of 80 percent and a sensitivity of 94 percent, well above that of a dermatologist (a sensitivity of 75 percent), a specialist dermatologist (a sensitivity of 92 percent), or a general practitioner (a sensitivity of 60 percent). Every photo is double-checked by our team of image recognition experts and dermatologists for quality purposes. High-risk photos are flagged, and, within 48 hours, users receive personal medical advice from a doctor about next steps.” The app has 1.2 million users worldwide.
Talla’s neural network technology draws on different learning approaches. “We use semantic matching, neural machine translation, active learning, and topic modeling to learn what’s relevant and important to your organization, and we deliver a better experience over time,” he says. May differentiates Talla’s take on AI: “This technology has lifted the hood on AI, allowing users to train knowledge-based content with advanced AI techniques. Talla gives users the power to make their information more discoverable, actionable, and relevant to employees. Content creators can train Talla to identify similar content, answer questions, and identify knowledge gaps.”
Here are further current examples of NN business applications:
Banking: Credit card attrition, credit and loan application evaluation, fraud and risk evaluation, and loan delinquencies
Business Analytics: Customer behavior modeling, customer segmentation, fraud propensity, market research, market mix, market structure, and models for attrition, default, purchase, and renewals
Defense: Counterterrorism, facial recognition, feature extraction, noise suppression, object discrimination, sensors, sonar, radar and image signal processing, signal/image identification, target tracking, and weapon steering
Education: Adaptive learning software, dynamic forecasting, education system analysis and forecasting, student performance modeling, and personality profiling
Financial: Corporate bond ratings, corporate financial analysis, credit line use analysis, currency price prediction, loan advising, mortgage screening, real estate appraisal, and portfolio trading
Medical: Cancer cell analysis, ECG and EEG analysis, emergency room test advisement, expense reduction and quality improvement for hospital systems, transplant process optimization, and prosthesis design
Securities: Automatic bond rating, market analysis, and stock trading advisory systems
Transportation: Routing systems, truck brake diagnosis systems, and vehicle scheduling
The use of neural networks seems unstoppable. “With the advancement of computer and communication technologies, the whole process of doing business has undergone a massive change. More and more knowledge-based systems have made their way into a large number of companies,” researchers Nikhil Bhargava and Manik Gupta found in "Application of Artificial Neural Networks in Business Applications."
What Are the Types of Neural Networks?
Neural networks are sets of algorithms intended to recognize patterns and interpret data through clustering or labeling. A training algorithm is the method you use to execute the neural network’s learning process. Because a huge number of training algorithms exist, each with its own characteristics and performance capabilities, you use different algorithms to accomplish different goals.
Collectively, machine learning engineers develop many thousands of new algorithms on a daily basis. Usually, these new algorithms are variations on existing architectures, and they primarily use training data to make projections or build real-world models.
Here’s a guide to some of today’s common neural network algorithms. For greater clarity around unfamiliar terms, you can refer to the glossaries in the resource section of this article.
A Layman’s Guide to Common Neural Network Algorithms
Autoencoder (AE): You typically use AEs to reduce the number of random variables under consideration, so the system can learn a representation for a set of data and, therefore, process generative data models.

Bidirectional Recurrent Neural Network (BRNN): The goal of a BRNN is to increase the information inputs available to the network by connecting two hidden, directionally opposing layers to the same output. Using BRNNs, the output layer can get information from both past and future states.

Boltzmann Machine (BM): A recurrent neural network, this algorithm is capable of learning internal representations and can represent and solve tough combinatorial problems.

Convolutional Neural Network (CNN): Most commonly used to analyze visual imagery, CNNs are feed-forward neural networks designed to minimize pre-processing.

Deconvolutional Neural Network (DNN): DNNs enable unsupervised construction of hierarchical image representations. Each level of the hierarchy groups information from the preceding level to add more complex features to an image.

Deep Belief Network (DBN): When trained with an unsupervised set of examples, a DBN can learn to reconstruct its inputs probabilistically by using layers as feature detectors. Following this process, you can train a DBN to perform supervised classification.

Deep Convolutional Inverse Graphics Network (DCIGN): A DCIGN model aims to learn an interpretable representation of images that the system separates according to elements of three-dimensional scene structure, such as lighting variations and depth rotations. A DCIGN uses many layers of operators, both convolutional and deconvolutional.

Deep Residual Network (DRN): DRNs assist in handling sophisticated deep learning tasks and models. Their skip connections allow networks with many layers to avoid the degradation of results that depth usually causes.

Denoising Autoencoder (DAE): You use DAEs to reconstruct data from corrupted inputs; the algorithm forces the hidden layer to learn more robust features. As a result, the output yields a more refined version of the input data.

Echo State Network (ESN): An ESN works with a random, large, fixed recurrent neural network, wherein each node receives a nonlinear response signal. The algorithm randomly sets and assigns weights and connectivity in order to attain learning flexibility.

Extreme Learning Machine (ELM): This algorithm learns hidden-node output weightings in one step, creating a linear model. ELMs can generalize well and learn many times faster than backpropagation networks.

Feed Forward Neural Network (FF or FFNN) and Perceptron (P): These are the basic algorithms for neural networks. A feedforward neural network is an artificial neural network in which node connections don’t form a cycle; a perceptron is a binary classifier with only two possible outputs (yes/no, 0/1).

Gated Recurrent Unit (GRU): GRUs use connections through node sequences to perform machine learning tasks associated with clustering and memory. GRUs refine outputs through the control of model information flow.

Generative Adversarial Network (GAN): This system pits two neural networks, one generative and one discriminative, against each other. The generative network produces synthetic results, and the discriminative network tries to tell them apart from real data, pushing the generator toward ever more realistic output.

Hopfield Network (HN): This form of recurrent artificial neural network is an associative memory system with binary threshold nodes. Designed to converge to a local minimum, HNs provide a model for understanding human memory.

Kohonen Network (KN): A KN organizes a problem space into a two-dimensional map. The difference between these self-organizing maps (SOMs) and other problem-solving approaches is that SOMs use competitive learning rather than error-correction learning.

Liquid State Machine (LSM): Known as third-generation machine learning (or a spiking neural network), an LSM adds the concept of time as an element. LSMs generate spatiotemporal neuron network activation as they preserve memory during processing. Physics and computational neuroscience use LSMs.

Long Short-Term Memory (LSTM): An LSTM is capable of learning or remembering order dependence in sequence prediction problems. An LSTM unit holds a cell, an input gate, an output gate, and a forget gate. Cells retain values over arbitrary time intervals, and the gates regulate the flow of values through the unit. This sequencing capability is essential in complex problem domains, such as speech recognition and machine translation.

Markov Chain (MC): An MC is a mathematical process that describes a sequence of possible events in which the probability of each event depends exclusively on the state attained in the previous event. Use examples include typing-word prediction and Google PageRank.

Neural Turing Machine (NTM): Named for the mid-20th-century work of mathematician Alan Turing, an NTM extends the capabilities of neural networks by coupling them with external memory. Developers use NTMs in robots and regard them as one of the means to build an artificial human brain.

Radial Basis Function Network (RBF net): Developers use RBF nets to model data that represents an underlying trend or function. RBF nets learn to approximate the underlying trend using bell curves or non-linear classifiers, which analyze more deeply than simple linear classifiers that work on lower-dimensional vectors. You use these networks in system control and time series prediction.

Recurrent Neural Network (RNN): RNNs model sequential interactions via memory. At each time step, an RNN calculates a new memory or hidden state that relies on both the current input and the previous memory state. Applications include music composition, robot control, and human action recognition.

Restricted Boltzmann Machine (RBM): An RBM is a probabilistic graphical model, typically used in an unsupervised setting. An RBM consists of visible and hidden layers as well as the connections between the binary neurons in each of these layers. RBMs are useful for filtering, feature learning, and classification. Use cases include risk detection and business and economic analyses.

Support Vector Machine (SVM): Given a set of training examples, each labeled as belonging to one of two categories, an SVM algorithm builds a model that assigns new examples to one of the two. The model represents the examples as mapped points in space, dividing the two categories by the widest possible gap. The algorithm then maps new examples into that same space and predicts their category based on which side of the gap they occupy. Applications include face detection and bioinformatics.

Variational Autoencoder (VAE): A VAE is a specific type of neural network that helps generate complex models based on data sets. In general, an autoencoder is a deep learning network that attempts to reconstruct its inputs (i.e., match the target outputs to the provided inputs) through backpropagation. VAEs also yield state-of-the-art machine learning results in image generation and reinforcement learning.
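As a concrete taste of one of the simpler models in the guide above, here is a toy Markov chain for typing-word prediction. The corpus and function names are invented for illustration; the next word depends only on the current word, which is the defining Markov property:

```python
import random
from collections import defaultdict

def build_chain(words):
    """Markov chain: record, for each word, which words follow it."""
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain: repeatedly pick a random observed successor."""
    random.seed(seed)                 # fixed seed for reproducibility
    word, out = start, [start]
    for _ in range(length):
        if word not in chain:         # dead end: no observed successor
            break
        word = random.choice(chain[word])
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran".split()
chain = build_chain(corpus)
text = generate(chain, "the", 4)
print(text)  # e.g., a plausible 5-word phrase starting with "the"
```

A real typing-prediction system would train on far more text and usually condition on several previous words, but the underlying state-transition idea is the same.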
What Are Neural Networks in Data Mining?
In her paper “Neural Networks in Data Mining,” Priyanka Gaur notes that, “In more practical terms, neural networks are non-linear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data. Using neural networks as a tool, data warehousing firms are harvesting information from datasets in the process known as data mining.”
Gaur continues, “The difference between these data warehouses and ordinary databases is that there is actual manipulation and cross-fertilization of the data, helping users make more informed decisions.”
Although you can use neural networks for data mining, developers generally don’t, because NNs require long training times and often produce hard-to-comprehend models. When professionals do decide to use them, they have two types of neural network data mining approaches to choose from: one directly learns simple, easy-to-understand networks, while the other employs the more complicated rule extraction, which involves extracting symbolic models from trained neural networks.
Neural vs. Conventional Computers
One of the primary differences between conventional, or traditional, computers and neural computers is that conventional machines process data sequentially, while neural networks can do many things at once. Here are some of the other major differences between conventional and neural computers:
Following Instructions vs. Learning Capability: Conventional computers learn only by performing steps or sequences set by an algorithm, while neural networks continuously adapt their programming and essentially program themselves to find solutions. Conventional computers are limited by their design, while neural networks are designed to surpass their original state.
Rules vs. Concepts and Imagery: Conventional computers operate through logic functions based on a given set of rules and calculations. In contrast, artificial neural networks can run through logic functions and use abstract concepts, graphics, and photographs. Traditional computers are rules-based, while artificial neural networks perform tasks and then learn from them.
Complementary, Not Equal: Conventional algorithmic computers and neural networks complement each other. Some tasks are more arithmetically based and don’t require the learning ability of neural networks. Often though, tasks require the capabilities of both systems. In these cases, the conventional computer supervises the neural network for higher speed and efficiency.
As impressive as neural networks are, they’re still works-in-progress, presenting challenges as well as promise for the future of problem-solving.
The Challenges of Neural Networks
Cortx’s Cardinell says that the value and implementation of neural networks depend on the task, so it’s important to understand the challenges and limitations: “Our general approach is to do what works for each specific problem we’re trying to solve. In many of those cases, that involves using neural networks; in other cases, we use more traditional approaches.” Cardinell illustrates his point with this example: “For instance, in Perfect Tense, we try to detect whether someone is using a or an correctly. In this case, using a neural network would be overkill, because you can simply look at the phonetic pronunciation to make the determination (e.g., an banana is wrong). Neural networks are where most advances are being made right now. Things that were impossible only a year or two ago regarding content quality are now a reality.”
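Cardinell’s a/an example can be illustrated with a purely rule-based check. The exception lists below are a hypothetical sketch, not Perfect Tense’s actual implementation; the point is that the choice depends on the sound of the next word, not its spelling, and a small lookup handles that without any neural network:

```python
# Hypothetical sketch (not Perfect Tense's actual code): choose "a" vs.
# "an" from the SOUND of the following word, with a small exception list.

VOWEL_SOUND_EXCEPTIONS = {"hour", "honest", "heir"}        # silent "h" -> "an"
CONSONANT_SOUND_EXCEPTIONS = {"university", "unicorn",     # "yoo"/"wuh"
                              "one", "european"}           # sounds -> "a"

def correct_article(word):
    w = word.lower()
    if w in VOWEL_SOUND_EXCEPTIONS:
        return "an"
    if w in CONSONANT_SOUND_EXCEPTIONS:
        return "a"
    # Default: fall back to the spelled first letter.
    return "an" if w[0] in "aeiou" else "a"

print(correct_article("banana"))      # a   ("an banana" is wrong)
print(correct_article("hour"))        # an  (silent "h")
print(correct_article("university"))  # a   ("yoo" consonant sound)
```

A production checker would consult a pronunciation dictionary rather than a hand-written list, but either way the task needs no learning at all.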
As useful as neural networks can be, challenges in the field abound:
Training: A common criticism of neural networks, particularly in robotics applications, is that they require too much training to be ready for real-world operation. One way to overcome that hurdle is to randomly shuffle the training examples and, using a numerical optimization algorithm, take small steps rather than large ones as the network follows each example. Another is to group examples into so-called mini-batches. Improving training efficiency and convergence remains an ongoing research area for computer scientists.
Theoretical Issues: Unsolved problems remain, even for the most sophisticated neural networks. For example, despite its best efforts, Facebook still finds it impossible to identify all hate speech and misinformation by using algorithms. The company employs thousands of human reviewers to resolve the problem. In general, because computers aren’t human, their ability to be genuinely creative — prove math theorems, make moral choices, compose original music, or deeply innovate — is beyond the scope of neural networks and AI.
Inauthenticity: The theoretical challenges we address above arise because neural networks don’t function exactly as human brains do — they operate merely as a simulacrum of the human brain. The specifics of how mammalian neurons code information are still unknown. Artificial neural networks don’t strictly replicate neural function, but rather use biological neural networks as their inspiration, which enables the statistical association at the basis of artificial neural networks. An ANN’s learning process isn’t identical to that of a human, hence its inherent (at least for now) limitations.
Hardware Issues: This century’s focus on neural networks is due to the million-fold increase in computing power since 1991. More hardware capacity has enabled greater multi-layering and subsequent deep learning, and the use of parallel graphics processing units (GPUs) now reduces training times from months to days. Despite the great strides of NNs in very recent years, as deep neural networks mature, developers need hardware innovations to meet increasing computational demands. The search is on, and new devices and chips designed specifically for AI are in development. A 2018 New York Times article, “Big Bets on A.I. Open a New Frontier for Chip Startups, Too,” reported that “venture capitalists invested more than $1.5 billion in chip startups” in 2017.
Hybrids: One proposal for overcoming some of the challenges of neural networks combines NNs with symbolic AI, or human-readable representations of search, logic, and problems. To successfully duplicate human intelligence, it’s vital to translate the procedural or implicit knowledge (the skills and knowledge not readily accessible to conscious awareness) humans possess into an unequivocal form that uses symbols and rules. So far, the difficulties of developing symbolic AI have proven unresolvable — but that status may soon change.
Computer scientists are working to eliminate these challenges. Leaders in the field of neural networks and AI are writing smarter, faster, more human algorithms every day. Engineers are driving improvements by using better hardware and by cross-pollinating different hardware and software approaches.
The Future of Neural Networks
He adds, “It’s that old saying: ‘When your only tool is a hammer, everything looks like a nail.’ Except everything isn’t a nail, and deep learning doesn’t work for all problems. There are all sorts of developments to come in the next couple of decades that may provide better solutions: one-shot learning, contextual natural language processing, emotion engines, common sense engines, and artificial creativity.”
Here are some likely future developments in neural network technologies:
Fuzzy Logic Integration: Fuzzy logic recognizes more than simple true and false values — it takes into account concepts that are relative, like somewhat, sometimes, and usually. Fuzzy logic and neural networks are integrated for uses as diverse as screening job applicants, engineering automobiles, controlling construction cranes, and monitoring glaucoma. Fuzzy logic will be an essential feature in future neural network applications.
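The core idea of fuzzy logic is easy to show in code. In this hedged sketch (the “warm” temperature range is an invented example), a value isn’t simply true or false; it has a degree of membership between 0 and 1, computed here with a standard triangular membership function.

```python
def triangular(x, low, peak, high):
    """Degree (0.0 to 1.0) to which x belongs to a triangular fuzzy set."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)   # rising edge
    return (high - x) / (high - peak)     # falling edge

# "Warm" peaks at 22 degrees and fades out below 15 and above 30
print(triangular(22, 15, 22, 30))  # 1.0 — fully "warm"
print(triangular(18, 15, 22, 30))  # ~0.43 — somewhat "warm"
print(triangular(35, 15, 22, 30))  # 0.0 — not "warm" at all
```

That graded “somewhat warm” value is exactly the kind of relative concept (somewhat, sometimes, usually) that hard true/false logic cannot express.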
Pulsed Neural Networks: Recently, data from neurobiological experiments have clarified that mammalian biological neural networks connect and communicate through pulsing, using the timing of pulses to transmit information and perform computations. This recognition has accelerated significant research, including theoretical analyses, model development, neurobiological modeling, and hardware deployment, all aimed at making computing even more similar to the way our brains function.
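The classic model behind such pulsed (spiking) networks is the leaky integrate-and-fire neuron, sketched below. The constants are illustrative, not drawn from any particular neurobiological study: the neuron integrates its input while slowly leaking charge, and emits a pulse whose timing carries the information whenever its potential crosses a threshold.

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: return the time steps at which
    the neuron fires a spike (pulse)."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(t)                    # the pulse timing is the signal
            potential = 0.0                     # reset after firing
    return spikes

# A steady weak input produces regularly timed spikes
print(simulate_lif([0.3] * 12))  # spikes at t = 3, 7, 11
```

Note that the output isn’t a continuous activation value, as in a conventional artificial neuron, but a sparse train of timed events — which is what makes these models a closer match for both biological brains and event-driven neurosynaptic hardware.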
Specialized Hardware: There’s currently a development explosion to create the hardware that will speed up, and ultimately lower the price of, neural networks, machine learning, and deep learning. Established companies and startups are racing to develop improved chips and graphics processing units, but the real news is the fast development of neural network processing units (NNPUs) and other AI-specific hardware, collectively referred to as neurosynaptic architectures. Neurosynaptic chips are fundamental to the progress of AI because they function more like a biological brain than the core of a traditional computer. With its Brain Power technology, IBM has been a leader in the development of neurosynaptic chips. Unlike standard chips, which run continuously, Brain Power’s chips are event-driven and operate on an as-needed basis. The technology integrates memory, computation, and communication.
Improvement of Existing Technologies: Enabled by new software and hardware as well as by current neural network technologies and the increased computing power of neurosynaptic architectures, neural networks have only begun to show what they can do. The myriad business applications of faster, cheaper, and more human-like problem-solving and improved training methods are highly lucrative.
Robotics: There have been countless predictions about robots that will be able to feel like us, see like us, and make prognostications about the world around them. These prophecies even include some dystopian versions of that future, from the Terminator film series to Blade Runner and Westworld. However, futurist Yonck says that we still have a very long way to go before robots replace us: “While these robots are learning in a limited way, it’s a pretty far leap to say they’re ‘thinking.’ There are so many things that have to happen before these systems can truly think in a fluid, non-brittle way. One of the critical factors I bring up in my book is the ability to establish and act on self-determined values in real-time, which we humans do thousands of times a day. Without this, these systems will fail every time conditions fall outside a predefined domain.”
Mind-melding between human and artificial brains, according to Yonck, is in our future: “I think artificial intelligence, artificial neural networks, and deep learning will eventually play a far more active role in retraining our brains, particularly as brain-computer interfaces (BCIs) become more prevalent and widely used. Deep learning will be essential for learning to read and interpret an individual brain’s language, and it will be used to optimize different aspects of thought — focus, analysis, introspection. Eventually, this may be the path to IA (intelligence augmentation), a form of blended intelligence we’ll see around the middle of this century.”
Resources on Neural Networks
The brave new world of neural networks can be hard to understand and is constantly changing, so take advantage of these resources to stay abreast of the latest developments.
Neural network associations sponsor conferences, publish papers and periodicals, and post the latest discoveries about theory and applications. Below is a list of some of the major NN associations and how they describe their organizational goals:
The International Neural Network Society (INNS): The organization is for “individuals interested in a theoretical and computational understanding of the brain and applying that knowledge to develop new and more effective forms of machine intelligence.”
IEEE Computational Intelligence Society (IEEE CIS): This is a professional society of the Institute of Electrical and Electronics Engineers (IEEE) whose members focus on “the theory, design, application, and development of biologically and linguistically motivated computational paradigms that emphasize the neural networks, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained.”
European Neural Network Society (ENNS): This is an “association of scientists, engineers, students, and others seeking to learn about and advance our understanding of the modeling of behavioral and brain processes, develop neural algorithms, and apply neural modeling concepts to problems relevant in many different domains.”
International Institute for Forecasters (IIF): This organization is “dedicated to developing and furthering the generation, distribution, and use of knowledge on forecasting.”
Most of the titles provided below have been published within the last two years. We’ve also included a few classics of the discipline:
Aggarwal, Charu C. Neural Networks and Deep Learning: A Textbook. New York City: Springer International Publishing, 2018.
Goldberg, Yoav. Neural Network Methods for Natural Language Processing (Synthesis Lectures on Human Language Technologies). Williston: Morgan & Claypool Publishers, 2017.
Hagan, Martin T., Demuth, Howard B., and Beale, Mark H. Neural Network Design (2nd Edition). Martin Hagan, 2014.
Hassoun, Mohamad. Fundamentals of Artificial Neural Networks. Cambridge: The MIT Press | A Bradford Book, 2013.
Haykin, Simon O. Neural Networks and Learning Machines (3rd Edition). Chennai: Pearson India, 2008.
Heaton, Jeff. Introduction to the Math of Neural Networks. Heaton Research, Inc., 2012.
Taylor, Michael. Make Your Own Neural Network: An In-Depth Visual Introduction for Beginners. Independently Published, 2017.
The world of neural networks has its own language. Here are some resources to expand your technical vocabulary and understanding of the field:
ESA Neural Network Glossary: A compilation of neural networking terms from the European Space Agency’s Earthnet Online site
Medium Neural Network Glossary: A frequently updated list of the latest terminology from the publishing platform Medium
Skymind A.I. Wiki Glossary: A frequently updated compendium of clearly defined terms concerning neural networks and deep artificial networks
The Future of Work with Automated Processes in Smartsheet
Empower your people to go above and beyond with a flexible platform designed to match the needs of your team — and adapt as those needs change.
The Smartsheet platform makes it easy to plan, capture, manage, and report on work from anywhere, helping your team be more effective and get more done. Report on key metrics and get real-time visibility into work as it happens with roll-up reports, dashboards, and automated workflows built to keep your team connected and informed.
When teams have clarity into the work getting done, there’s no telling how much more they can accomplish in the same amount of time. Try Smartsheet for free, today.