The Definitive Guide to Machine Learning

By Diana Ramos | September 19, 2018 (updated November 12, 2021)

Today, machine learning (ML) drives much of artificial intelligence (AI), and their combined influence is growing tremendously. ML and AI technologies have far-reaching effects in virtually every industry. VCs have invested billions of dollars in AI-based startups, and numerous companies have increased their R&D budgets substantially to explore the potential of machine learning and AI.

In this article, we define machine learning, discuss the nature and structure of its essential algorithms, explain why the technology matters, and talk about ML’s real-world uses in every business sector. Plus, we interview 10 entrepreneurs who are changing the face of business, medicine, and government.

What Is Machine Learning?

Machine learning is a pathway to and subfield of artificial intelligence that enables computer systems to gain knowledge, improve from experience, and make predictions with minimal human intervention. Machine learning programs discover patterns and interpret data using training models or a learning data set.

These training models use algorithms (step-by-step procedures or formulas) that spur experience-based learning improvements, which, in turn, lead to greater accuracy over time.

There’s a lot of data to learn from. For instance, Americans alone generated 3,138,420 gigabytes of internet traffic for every minute of June 2018, according to Statista media usage research. To be clear, big data isn’t just a large database: The most desirable data holds complexity and depth and includes enough detail to solve problems that go beyond general computer programming capabilities. The explosion of big data has incited interest in machine learning and related fields, like neural networks and deep learning, which are leading to new applications every day.

Why Does Machine Learning Matter?

Machine learning is crucial because it builds precise models that help researchers and companies access insights, identify opportunities, and sidestep risk as they generate solutions to discover new drugs, build better cars, and ensure greater personal security. In fact, machine learning has become a business-critical problem-solving mechanism that’s recognized as a significant capability by business leaders: A July 2018 study by HFS Research found that “enterprises anticipate that machine learning will permeate and influence the majority of business operations, with over half (52 percent) expecting this impact in the next two years.”

Understanding machine learning data has led to an entirely separate field, called representation learning, and “a remarkable string of empirical successes in both academia and industry,” including natural language processing, object recognition, and speech recognition, according to the 2014 study, “Representation Learning: A Review and New Perspectives.”


Richard Yonck

Richard Yonck, Founder and Lead Futurist of Intelligent Future Consulting and author of Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence, notes that ubiquity has numbed us to the sheer wonder of the present state of machine learning technology. “From search engines to our music accounts to digital assistants like Siri, we’re surrounded by AI. It’s long been observed that once a particular AI challenge has been overcome, we stop thinking of it as AI, and it disappears into the background of our lives.” Yonck adds, “The future of machine learning and AI appears limitless. Machine learning is changing every industry and will continue to have a massive impact on our day-to-day lives.”

The History of Machine Learning

Making life easier via some kind of machine learning has been a goal of humankind for thousands of years. The first practical steps took root with devices such as Blaise Pascal’s mechanical adding machine, created in the mid-17th century. The work advanced in 1843, when Ada Lovelace published the first algorithm intended to be carried out by a machine, ideas that eventually led to the first general-purpose digital computers in 1946. Another major milestone was the founding of MIT's Laboratory for Computer Science (established in 1963 as Project MAC), which has since become the home of major advances in machine learning and artificial intelligence. The development of neural networks in the late 1950s was also an essential breakthrough in the evolution of machine learning. Neural networks are computer systems modeled on the human brain and nervous system. With researchers all over the world working on ways to mechanize thought through the use of data, the field of machine learning has developed at an accelerated pace since the 1990s.


Machine Learning Milestones

Top Ways to Use Machine Learning in Business

Machine learning began in academia, where crucial theory and algorithms have been developed and where research is ongoing. According to AI Magazine, while scholarship remains instrumental to ML’s progress, AI researchers have found that “contact with real problems, real data, real experts, and real users can generate the creative friction that leads to new directions in machine learning.”

Some of those new directions and real-world applications are highlighted below and explained by the people who developed them.

Machine Learning as a Service

Machine learning as a service (MLaaS) is an umbrella term for cloud computing offerings that provide machine learning tools. Service providers offer tools such as deep learning, predictive analytics, application programming interfaces (APIs), natural language processing, data visualization, and more.


Remy Kouffman

 “The name of the game for every business is efficiency. Machine learning and its ability to improve exponentially over time will be the obvious answer for every company trying to compete in an ever-evolving landscape,” says Remy Kouffman, CEO and Co-Founder of Knockout AI. The company’s mission is to drastically reduce the barrier of entry for entrepreneurs and app developers, so these users can leverage machine learning by integrating it directly into their code through a software development kit. “Within the next 5 to 7 years, every company will have some type of intelligence assisting it in its daily business objectives. We believe establishing an easy-to-use machine learning platform will open the floodgates to app developers, coders, and businesses,” Kouffman predicts.


Sivan Metzger

“Our solution solves the issues many organizations encounter when they automate and scale the deployment and management of machine learning services in production,” says Sivan Metzger, CEO and Co-Founder of ParallelM. He continues, “These machine learning issues stem from the significant gap between an organization’s data science and operations teams when it comes to areas of expertise and experience. They often share machine learning runtime responsibilities that frequently lead to the delay of business results and ROI, due to the immaturity of the handoff and ongoing processes between these two critical departments.” MCenter, ParallelM’s solution, moves machine learning pipelines into production, automates orchestration, and enables 24/7 machine learning performance. MCenter is a space where business analysts, data scientists, and IT operations align to deliver enterprise-wide machine learning through automation, optimization, and scale.

Machine Learning in Healthcare

Machine learning offers game-changing, real-world benefits in diagnosis, pharmaceuticals, personalized treatment, critical decision making in medical practice, and self-care.


Matthew Enevoldson

Matthew Enevoldson is the PR Manager for SkinVision, a phone app that uses algorithms to test for visible signs of skin cancer. One in five people develop skin cancer in their lifetime, and early detection and intervention are key to survival. “The algorithms have been developed and tested in cooperation with dermatologists to check for irregularities in shape, color, and texture of lesions,” explains Enevoldson. “For a SkinVision user, it starts with downloading the app and taking a photo with the automatic camera that ensures that all photos come out framed the same. The proprietary mathematical algorithm then calculates the skin lesion fractal dimensions and surrounding tissues to construct a structural map that brings to light the different growth patterns of involved tissues. The map indicates which skin irregularities should be tracked over time and gives each irregularity a low, medium, or high-risk indication within 30 seconds.”


Loubna Bouarfa

Loubna Bouarfa, CEO and Founder of OKRA and member of the EU’s High-Level Expert Group on AI, says, “We believe that transforming healthcare requires a scalable and multidisciplinary approach. OKRA Technologies is a machine learning platform with real-world, real-time capabilities. OKRA develops AI software for human outcome prediction in both healthcare and foster care (e.g., outcome prediction systems, decision support systems, prediction algorithms, and matching algorithms), using structured and unstructured data sources. OKRA works with healthcare organizations and consumer goods companies around the world to make better decisions for patients and achieve better outcomes. Our platform also reduces the time it takes to diagnose disease, identifies misdiagnosed patients, and predicts treatment response in real time.”


Vahid Zadeh

“Every second in a CrossFit workout counts,” says Vahid B. Zadeh, Chief Algorithms Officer at Canada-based PUSH, parent company of NEXUS, the first fitness wearable that measures an athlete’s workout and quantifies training. NEXUS automatically provides users with training metrics that would usually be calculated manually by coaches. “The amount of external interference with the athletes and their interaction with the app needs to be practically zero. So, the product has to act autonomously as much as possible during the workout and limit the interactions to the time prior to and the time following the session. A simplified version of the problem we are solving is known in the literature as the ‘exercise detection’ (ED) or ‘human activity recognition’ (HAR) problem. It entails the analysis of a movement and the classification of the type of motion based on the features extracted from the data that is collected during the execution of the movement.”
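
PUSH’s actual pipeline is proprietary, but a generic HAR workflow follows the shape Zadeh describes: slice sensor streams into windows, extract features from each window, and classify the motion. Below is a minimal Python sketch under those assumptions; the motion profiles, feature choices, and labels are all invented for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_window(label):
    """Simulate one 2-second window of 3-axis accelerometer data (50 Hz).
    Frequency and amplitude per movement are invented motion profiles."""
    freq, amp = {"squat": (0.5, 1.0), "jump": (2.0, 2.5)}[label]
    t = np.linspace(0, 2, 100)
    signal = amp * np.sin(2 * np.pi * freq * t)[:, None]
    return signal + rng.normal(0, 0.3, (100, 3))

def features(window):
    """Per-axis mean, spread, and peak magnitude -- simple hand-crafted HAR features."""
    return np.concatenate([window.mean(0), window.std(0), np.abs(window).max(0)])

labels = ["squat", "jump"] * 200
X = np.array([features(make_window(lbl)) for lbl in labels])
y = np.array(labels)

# Train on the first 300 windows, then score on 100 held-out windows.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))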

Machine Learning in Marketing and Sales

Marketers are using ML to anticipate, understand, and act on problems faster and more clearly in order to beat their competition. Machine learning takes marketing automation to a new level of accuracy and speed as it provides users with accelerated personalization, lead scoring, sales forecasting, and recommendations. In addition, machine learning can be used in fraud detection to protect businesses and their customers from security attacks. Chatbots have come into their own and are ever-evolving as a way to improve customer service communication with automated messaging apps. Learn more with this comprehensive guide to artificial intelligence chatbots.

Alex Cardinell

Alex Cardinell is the Founder and CEO of Cortx, an artificial intelligence company focused on content. The company’s WordAi is an intelligent content rewriter; its product Article Forge generates SEO content, and MicrositeMasters tracks marketing efforts and provides usable feedback. The latest product released by Cortx is Perfect Tense, an AI-powered spelling and grammar corrector. “In an article that has 10 mistakes, Perfect Tense might be able to automatically correct eight of them with no human involvement whatsoever. If it’s cost prohibitive to proofread that text, this [Perfect Tense] becomes a very good deal. And, even if you do have an editor, this might allow that editor to work five times faster than they were previously able to,” says Cardinell.


Rob May

“At Talla, we build digital workers that assist employees with daily tasks concerning information retrieval, access, and upkeep. We give businesses the ability to adopt AI in a meaningful way and the power to start realizing immediate improvements to employee productivity and knowledge sharing across the organization,” explains Rob May, CEO and Co-Founder of Talla and BotChain. “For example, if a company stores their product documentation in Talla, its sales reps can instantly access that information while on sales calls. This ability to immediately and easily access accurate, verified, and up-to-date information has a direct impact on revenue. Talla also makes the process of onboarding and training new reps better, faster, and less expensive by having information delivered to employees when they need it,” he says.

Machine Learning in Finance

Machine learning was used in finance before the advent of search engines, mobile banking apps, and chatbots. With its high volume of precise historical records and its quantitative nature, the finance world is ideally suited to machine learning and artificial intelligence. ML and AI apply well to every facet of finance operations, from portfolio management to trading to data security.


Sharan Gurunathan

"Our contract review automation solution leverages the power of IBM Watson Discovery Services for Element Classification in order to break down complex contracts into smaller categories that the machine can analyze quickly — things like privacy, deliverables, communication, payment terms, governing laws, and more. Our services are all based on natural language analysis and the artificial intelligence/machine learning (AI/ML) ‘training’ that the system has received on specific contract language,” says Sharan Gurunathan, Executive Vice President and Principal Solutions Architect at Coda Global.

“When it comes to the financial sector, there are several use cases. In retail banking, for example, banks can use this tool to highlight for customers the key contractual aspects inside long documents so that there is no ambiguity. Another potential application for this kind of AI technology lies in automating customer support through chatbots, which provide responses by leveraging natural language processing to decipher questions and correlate answers from the bank’s vast storehouse of available information. By matching customer questions with the bank’s standard operating procedures, the machine can handle some of the front-end workload, leveraging a human response only on an as-needed basis,” he emphasizes.
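
The question-to-procedure matching Gurunathan describes can be approximated without Watson. Here is a minimal sketch that stands in TF-IDF retrieval for the real NLP stack; the bank procedures are made up, and scikit-learn is an illustrative choice rather than the vendor’s tooling.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical standard operating procedure snippets a bank might store.
procedures = [
    "To dispute a credit card charge, submit the dispute form within 60 days.",
    "To reset online banking passwords, verify identity with two security questions.",
    "Wire transfers over $10,000 require a signed authorization on file.",
]

vectorizer = TfidfVectorizer(stop_words="english")
sop_vectors = vectorizer.fit_transform(procedures)

# Route an incoming question to the closest stored procedure.
question = "How do I reset my password for online banking?"
scores = cosine_similarity(vectorizer.transform([question]), sop_vectors)[0]
print("best match:", procedures[scores.argmax()])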

Machine Learning in Logistics and Transportation

Knowledge and insights from machine learning are revolutionizing supply chain management. Machine learning uncovers supply chain data patterns and quickly establishes the factors that influence a supply network’s success. Simultaneously, ML continually learns and upgrades its information and predictions. In the transportation realm, as self-driving cars take to the roads, video surveillance will monitor traffic, ease congestion, model large-scale transportation systems, and help guide all types of air traffic — including drones.

Neill McOran-Campbell

“AEIOU began as a ‘service-leasing’ company where we would create our own drones, combine them with our AI, and then lease their use as a service to companies traditionally interested in adding unmanned aerial systems/drones (UAS) to their operations,” AEIOU CEO Neill McOran-Campbell explains. “After working with both AI and drones for some time, we realized that the benefits AI has to offer UAS operations go far beyond what we originally planned. On-board AI offers UAS advantages in essentially all aspects of operation and use. Our on-board AI platform (known as 'Dawn') performs autonomously in navigation, obstacle avoidance, object tracking, aircraft monitoring, and services, such as infrastructure inspection and package delivery,” he says.

Machine Learning in Oil and Gas

Machine learning improves efficiency and safety and reduces costs in the oil and gas sectors. Vital to the operations of many oil and gas companies, machine learning enables huge volumes of information to be accumulated in real time. It also allows data sets to be translated into actionable insights in a business where cost margins are often a make-or-break proposition.


Huw Rees

Wireless communication is a good match for the oil and gas industry because production facilities are often remote, both on and offshore. In the past, high-latency, low-bandwidth communication satellites were adequate to transmit telemetry to producers. However, these satellites were insufficient for system automation controls that required higher speed. For example, mobile or fixed multi-services, such as the voice and video communication that many producers need, require more robust bandwidth. “Most enterprise or large-scale WLAN solutions require near-constant monitoring and adjustment by highly trained Wi-Fi experts — an expensive way to ensure the network is performing optimally,” says Huw Rees, VP of Sales and Marketing for KodaCloud. He adds, “In fact, human technicians can’t actually manage all the Wi-Fi interactions with every user and every device, so the network typically runs sub-optimally. Our cloud-based AI service monitors these interactions 24/7 and adjusts and alerts in real time, thereby optimizing each and every device’s connection to the Wi-Fi and significantly improving the overall quality of experience.”

Machine Learning in Government

The U.S. Federal Government is going through a digital revolution centered on using the cloud, machine learning, and AI to drive improved outcomes. Those outcomes range from more effective cyber defense strategies, including improved natural language processing, to improved public health. The government is moving to mobile and web-based technologies, the use of open-source software and open standards, and easily provisioned computing and storage — all with an eye toward enhanced data security. A quickly emerging feature of the government’s national security and public safety digital transformation strategy is the application of advanced mathematics and AI. The U.S. is utilizing advanced math and AI to reduce the use of resources, time, and currently ill-defined methods for processing and capitalizing on information.

Major Companies and Machine Learning in 2018

In futurist Yonck’s view, the current top use case for machine learning concerns cybersecurity: “Malicious activity and vulnerability detection, as well as countermeasures, are of paramount importance. The costs of cybercrime have been soaring, and financial institutions and big corporations have been struggling to stay ahead. Unfortunately, such machine learning defense strategies are leading to similar countermeasures. It’s all gradually leading to a digital immune system. This [immune system] will be especially crucial as we bring more and more Internet of Things (IoT) devices online, exponentially increasing the potential security holes in our networked world. In the end, the only viable response is automation.”

Companies that have been around for a long time are still coming up with new ways to use machine learning in order to improve company and consumer security, customer service, and more:

  • Facebook: Faced with the need to stop spammers, hoaxes, and hackers (e.g., the hostile interference in the 2016 election), Facebook is speeding up its use of machine learning to spot malicious actors and hoax articles and shut them down.

  • HubSpot: The company is currently building GrowthBot, a chatbot for marketing and sales. A bot is an automated computer program designed to simulate conversation with human users.

  • IBM: The company now provides AI-driven solutions, so manufacturers can aggregate data from multiple sources. With AI, these customers can run more efficient operations and reduce costs. Leveraging machine learning and AI to integrate vast quantities of plant data, manufacturers can get a holistic data picture that improves throughput, quality, cost, and fulfillment.

  • Salesforce: Salesforce's AI-powered Einstein Bots for Service provide more seamless and intuitive service to its customers. The bots use machine learning and natural language processing to automate routine service requests.

  • Yelp: Machine learning algorithms help the company’s staff to categorize, compile, and label images more efficiently. That’s no small feat, given that millions of photos are added to the site per year, according to DMR Business Statistics.

  • Alphabet/Google: For two decades, Google has connected people to information to solve real-world problems. The announcements made at its 2018 annual developers conference covered applications ranging from mapping to small business assistance to support for people with disabilities. At the event, CEO Sundar Pichai introduced “a deep learning model that can use images to predict a patient’s risk of a heart attack or stroke with a surprisingly high degree of accuracy.”

  • Twitter: The social network is looking at more and better ways to deliver the news of the day with improved machine learning-enabled timelines.

  • Amazon: As a leader in customer experience innovation, Amazon has taken things to the next level by announcing that it is reorganizing the company around its AI and machine learning efforts.

“As powerful as these tools are, these are still early days. A company can say they’ve reorganized around AI and machine learning, as Amazon has, but, thankfully, these organizations still remain very human-centric,” says Yonck.

Machine Learning Basics

How do all these machine learning-based applications actually work? Any attempt to simplify the machine learning process is challenging, particularly because advances in the field are being made daily. To gain a basic understanding, an excellent resource is "A Few Useful Things to Know about Machine Learning" by the University of Washington’s Pedro Domingos, who uses layman’s terms and clear, helpful explanations.

Domingos is one of the data scientists looking for a master algorithm that can learn anything. The iterative aspect of machine learning is important because it enables models to adapt independently, and data scientists hope that one day there will indeed be a one-size-fits-all algorithm that can learn anything. In the meantime, algorithms still need to be trained. All machine learning algorithms rely on three training elements to arrive at solutions (see the sketch after this list):

  • Representation: This is the set of candidate models that a learner algorithm can choose from, along with the features used to describe the data. A representation defines which mappings from inputs to results the system is capable of expressing.

  • Evaluation: A learner algorithm can create more than one model, but it doesn’t know the difference between a good model and a bad model. The evaluation function scores the models.

  • Optimization: This training process searches the models to determine which one will best solve the problem and selects it.
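
To make these three elements concrete, here is a minimal sketch in which the representation is the family of lines y = w * x, evaluation is mean squared error, and optimization is a plain search over candidate models. All numbers are invented.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])            # roughly y = 2x

candidates = np.linspace(0.0, 4.0, 401)       # representation: models y = w * x

def evaluate(w):                              # evaluation: score one model
    return np.mean((y - w * x) ** 2)

best_w = min(candidates, key=evaluate)        # optimization: pick the best model
print(f"learned w = {best_w:.2f}")            # prints a value near 2.0

Real learners swap the grid search for gradient descent or tree induction, but the division of labor among the three elements stays the same.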

What Is a Machine Learning Model? Supervised vs. Unsupervised Models

A machine learning model is the artifact that a training technique produces. Once the three training elements for machine learning algorithms have been established, you need to decide whether to use a supervised or unsupervised machine learning model.


Machine Learning Process Update
  • Supervised Learning: The supervised approach is similar to the process of a human learning under the guidance of a teacher. The algorithm learns from example data that is tagged with correct examples, so it can later predict the correct response when new examples are supplied. Supervised learning distinguishes between two different types of problems:

    • Classification: The target is a qualitative variable, such as physical characteristics.
    • Regression: The target is a numeric value, such as home pricing in a specific zip code.

  • Unsupervised Learning: In unsupervised learning, the algorithm learns from examples without any associated answers, organizing and restructuring the data into new classifications. This type of learning can give useful insights into data meanings, and many recommendation systems are based on it. Unsupervised learning relies on clustering, where the target is the discovery of inherent groupings in the data, such as customers being grouped by purchasing behavior (see the sketch below).
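
A minimal sketch of both paradigms, using scikit-learn on synthetic data (the dataset and models are illustrative choices): the supervised learner sees the labels, while the unsupervised learner must discover the groups on its own.

from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: the labels y act as the "teacher."
clf = LogisticRegression().fit(X, y)
print("supervised prediction:", clf.predict(X[:1]))

# Unsupervised: KMeans sees only X and must discover the two groups itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("discovered cluster for the same point:", km.labels_[0])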

Machine Learning Techniques and Algorithm Examples

The engine of machine learning is the algorithm, which is a procedure or formula used to solve a problem. The rub is selecting the right algorithm. The “no free lunch” theorem, which states that no algorithm works best for every problem, is a centerpiece of machine learning, and is especially applicable in the case of supervised learning. For example, decision trees are not superior to neural networks as a problem-solving tool. For a deeper dive into techniques and algorithms, watch this video, "Introduction to Machine Learning," from McGill University’s Doina Precup.
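
The theorem is easy to demonstrate in miniature. In the hedged sketch below, built entirely on synthetic data, a linear model typically wins on a noisy linear problem while a decision tree typically wins on an XOR-style problem; neither dominates both.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Problem A: a noisy linear boundary in 20 dimensions.
Xa = rng.normal(size=(400, 20))
ya = (Xa @ rng.normal(size=20) + rng.normal(0, 2.0, 400) > 0).astype(int)

# Problem B: XOR -- the label depends on an interaction no single line captures.
Xb = rng.normal(size=(400, 2))
yb = ((Xb[:, 0] > 0) ^ (Xb[:, 1] > 0)).astype(int)

for name, X, y in [("linear-ish", Xa, ya), ("XOR", Xb, yb)]:
    for model in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=5)):
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name:10s} {type(model).__name__:22s} {score:.2f}")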

There are many factors to consider in algorithm selection, like the structure and size of the data set. Machine learning algorithms number in the many thousands, and more are invented every day to solve specific challenges. For instance, Google changes its search engine algorithm up to 600 times each year. The type of problem being solved dictates (or at least provides guidelines for) which algorithm is needed.

Classification and Regression Algorithm Examples

Here are basic explanations of how the algorithms (shown in the machine learning diagram above) are applied to problems:

  • Decision Trees: Statisticians have been using decision trees since the 1930s. They were first presented as a machine learning algorithm in 1975, in a book by J. Ross Quinlan of the University of Sydney. Decision trees develop predictions by following a series of observations down branches to a conclusion about the value of a target. Here’s an example of a decision tree used to determine credit risk (a code sketch follows the diagram):


Decision Tree Example
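
To express the same idea in code rather than a diagram, here is a minimal scikit-learn sketch; the applicant fields, values, and risk labels are made up for illustration, not drawn from the diagram above.

from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicants: [annual_income_k, existing_debt_k]
X = [[20, 15], [85, 5], [40, 30], [120, 10], [30, 2], [60, 45]]
y = ["high", "low", "high", "low", "low", "high"]   # made-up risk labels

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income_k", "debt_k"]))  # the learned rules
print(tree.predict([[50, 40]]))                     # likely classified 'high'
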
  • Bayesian Networks: A set of variables and their associated conditional dependencies are represented in this graphical model. It’s often used to understand the probable relationships between symptoms and diseases. Given the symptoms, the network can be used to measure the chances of the presence of various diseases.

  • Support Vector Machines: Using training examples that are labeled as belonging to one of two possible categories, the support vector machine trains a model to assign new data examples to one of the two categories. It is often used for image classification.

  • Random Forest: Random forest constructs multiple decision trees and then merges them for a more accurate and stable prediction. In medicine, this algorithm can be used to identify diseases by analyzing patient medical records, or to identify the appropriate combination of components in medicines.

  • Neural Networks: Also known as artificial neural networks (ANNs), this algorithm is modeled on the human brain’s biological neural networks. Neural network “learning” happens as the network performs tasks after considering examples, generally without any task-specific programming rules. ANNs are applied to many different problems, including social network and email spam filtering, quantum chemistry, finance, gaming, machine translation, medical diagnosis, pattern recognition, and sequence recognition.

Regression Algorithm Examples in Brief:

  • Simple Linear Regression: This technique estimates the relationship between a single predictor and a target, quantifying how changes in one variable relate to changes in the other. It is used for tasks like forecasting and time series modeling.

  • Lasso Regression: Used for feature selection, this algorithm makes a model easier to interpret by removing redundant variables, and it reduces the size of the problem for faster analysis. This is useful for large and complex data sets (for example, cancer diagnosis).

  • Logistic Regression: Used in binary classification, this is one of the first algorithms developed for regression analysis. It finds the probability of success or failure and is employed when the dependent variable is binary (such as yes/no, true/false, or 0/1). It’s easy to implement and is used in many tasks, particularly to develop a performance baseline (see the sketch after this list).

  • Multiple Regression: This algorithm is used to learn more about the relationships between predictors or variables and a criterion or dependent variable. It can be used to predict behaviors, such as purchasing, based on multiple factors.

  • Boosting: If there are a number of weak classifiers, boosting helps generate a strong one. Using training data to build a model, a second model is created to correct the errors identified in the first model. The process is repeated until the training set is predicted accurately or the maximum number of models has been added. Use cases include predicting how many rental units will be available in a market at a specific time or predicting ratings on social media platforms.
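
As a minimal example of the baseline role described in the logistic regression item above, here is a hedged sketch on a synthetic scikit-learn dataset; the data and settings are illustrative.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real problem.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("baseline accuracy:", round(baseline.score(X_test, y_test), 3))
print("class probabilities:", baseline.predict_proba(X_test[:1]).round(3))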

Clustering Algorithm Examples in Brief:

  • K-Means: When data has no defined groups or categories, the goal is to locate groups within the data. With this method, the number of groups is represented by the variable K. This iterative algorithm then assigns each data point to one of the K groups based on identified features (see the sketch after this list).

  • Mean-Shift: This iterative algorithm looks for maximum density, or modes. Mean-shift algorithms have applications in the field of image processing and computer vision.

  • Gaussian Mixture: Weighting factors are assigned to data to label differing levels of importance. The model often results in overlapping bell-shaped curves. It can be applied to problems like weather observation modeling or feature extraction from speech data.  

  • EM-Clustering: The concept of this expectation-maximization algorithm is derived from the Gaussian mixture model. An iterative method, it’s used to find maximum likelihoods in a data set of variables. It’s often used in medical imaging applications.

  • Hierarchical Clustering: This process begins by treating each observation as a separate cluster. Then, it iteratively merges the two clusters that are in closest proximity until all similar clusters have been combined. The endpoint is a set of clusters, visualized as a type of tree diagram called a dendrogram. This process is often used in risk or needs analysis.
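
Returning to the K-means item above, here is a minimal sketch that groups synthetic customers by purchasing behavior; the three segments and feature names are invented for illustration.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical features per customer: [orders_per_year, avg_order_value]
customers = np.vstack([
    rng.normal([5, 20], [2, 5], (50, 2)),     # occasional small buyers
    rng.normal([50, 25], [10, 5], (50, 2)),   # frequent small buyers
    rng.normal([10, 200], [3, 40], (50, 2)),  # rare big-ticket buyers
])

# K = 3 is chosen up front, as the algorithm requires.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print("cluster sizes:", np.bincount(km.labels_))
print("cluster centers:\n", km.cluster_centers_.round(1))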

Machine Learning Software

It’s possible for non-pros to set up machine learning using basic programming and software with varying degrees of success. In his article, “Google Says Machine Learning Is the Future. So, I Tried It Myself,” journalist Alex Hern relays his journey and that of fellow writer Robin Sloan, who took the do-it-yourself plunge using open-source software with mixed results. Hern notes that while his attempts to train a neural network to write editorial content failed, a game-changing natural language program is sure to be right around the corner.

Whether you’re a machine learning professional or a novice, there are three different types of machine learning software to choose from:

  • Free and Open Source: Many open source machine learning frameworks are available that let engineers build, implement, and maintain systems and create original projects. Python, the preeminent open source language for machine learning, is freely usable and distributable, even for commercial use. Python is popular because it is more intuitive than many other programming languages, and it offers an array of frameworks, libraries, and extensions that make it suitable for many different applications.

  • Proprietary Software: This is software for sale. Machine learning proprietary software is the intellectual property of the developer(s) and the source code is closed and proprietary. The source code is sold as a commercial product built into a complete platform and licensed to users for a fee. Proprietary software is full-featured and designed to be ready to be deployed and used, complete with service and support.

  • Proprietary with Free and Open Source Editions: Machine learning is all about building on previous knowledge, and that applies to the ability to improve frameworks. Open source software with commercial support combines the customizability and community of open source options with dedicated support from commercial partners. These hybrid options are appropriate for teams that want the flexibility of open source packages but also need a support safety net for mission-critical applications.

Defining Machine Learning-Related Terms

Many terms in data science are still maturing and often used interchangeably — and incorrectly. For example, people frequently use the term machine learning interchangeably with the following terms: data science, artificial intelligence, and deep learning. To help clear up the confusion, following is a round-up of definitions in various fields and techniques related to machine learning:

Machine Learning vs. Data Science

Data science, which uses both structured and unstructured data, concerns itself with the extraction of knowledge from data, data cleansing, and data analysis. Machine learning creates systems that learn from data and includes techniques that can be highly useful for a data scientist, such as decision trees, algorithms, and deep learning.

Machine Learning vs. Neural Networks

Within the machine learning field, neural networks are one of many techniques used to identify underlying relationships in a data set. A neural network or artificial neural network (ANN) is a sequence of algorithms designed to analyze underlying relationships in a data set by using a process that mimics human brain functions.

A neural network passes data through layers of nodes that are interconnected. The network classifies information and layer characteristics before relaying the results to other nodes in succeeding layers. A typical application can be used to solve business problems such as risk management, customer research, or sales forecasting.
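
Here is a minimal numpy sketch of that flow, with random weights standing in for trained ones; the layer sizes and activation functions are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # one input record with 4 features

W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # layer 1: 4 inputs -> 3 nodes
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # layer 2: 3 nodes -> 1 output

hidden = np.tanh(x @ W1 + b1)                   # each node weighs all inputs, then "fires"
output = 1 / (1 + np.exp(-(hidden @ W2 + b2)))  # sigmoid gives a probability-like score
print("network output:", output.round(3))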

Machine Learning vs. Deep Learning

Deep learning uses a complex neural network that has many more layers than a basic neural network. A typical neural network may have two to three layers, while deep learning networks may have hundreds. The advantage of having multiple layers is the ability to develop the significantly greater levels of abstraction essential to complex tasks like automatic translation and image recognition.

Machine Learning vs. Data Mining

Data mining combines statistics with other programming techniques to find patterns buried within data. These patterns explain certain phenomena, so you can build models to predict future outcomes. Machine learning incorporates data mining principles, but it also makes automatic correlations and learns from them in order to create new algorithms.

Machine Learning vs. Statistical Learning

Statistics is a long-standing subfield of mathematics and refers to a vast set of tools for understanding data. Machine learning and statistics are closely related, so much so that some statisticians refer to machine learning as “statistical learning” or “applied statistics.” Supervised statistical learning builds statistical models for predicting an output based on one or more inputs, while machine learning puts more emphasis on finding patterns and making accurate predictions at scale.

Machine Learning vs. Predictive Analytics

Encompassing a variety of techniques, including data mining, predictive modeling, and machine learning, predictive analytics uses historical and current statistics to estimate future outcomes. Machine learning gives computers the ability to learn without programming.

Machine Learning vs. Artificial Intelligence (AI)

Artificial intelligence is designed to imitate human decision-making processes and complete tasks in increasingly human ways. Today, we use AI in numerous ways, including to recommend products or movies, enable natural language processing and understanding, utilize stored information in near real time, and enhance robotics.

The terms machine learning and artificial intelligence are often used interchangeably, but machine learning doesn’t fully define AI. Machine learning can’t exist without AI, whereas AI can exist without machine learning. And while a machine can become more efficient at learning, that doesn’t mean it is intelligently aware: so far, no machine can match human awareness or self-awareness.

The individual or combined practice of these fields can raise ethical challenges.

Ethical Considerations in Machine Learning

New technologies often create their own challenges — for example, putting large numbers of people out of work through the mechanization of tasks. ParallelM’s Metzger says, “Ethical concerns have emerged throughout our history and are eventually dealt with when they become material problems. I’m certain we will find ourselves at a crossroads in the future, confronting ethical concerns and other issues we cannot even grasp at this point. But, I’m also confident we will find ways to mitigate the risks these issues introduce.” Here are just a few ethical considerations that have already surfaced:

  • Responsible Data Collection: Human beings are now being defined by the quintillions of data records that a wide variety of entities collect about them and how they live on a daily basis: what they buy, what they eat, how and when they travel and surf the web, and who their “friends” are. With the new reality of such rapid information gathering, ideas about representation, privacy, and fairness are transforming, and some data interpretation may favor certain groups of people over others.

  • Language and Algorithmic Bias: How data is named, sorted, and trained reflects the inherent biases of developers. Systems trained on biased data sets may “bake in” various prejudices related to culture or ethnicity. For example, using hiring data from a company with xenophobic hiring practices may lead to a machine learning system that duplicates that bias by scoring potential hires negatively when those applicants have “foreign-sounding” names.

  • Healthcare Issues: Machine learning offers the possibility of improvement in many areas of healthcare, from diagnosis and pathology to treatment. But there’s always the possibility of biased training data, and the profit-driven design of clinical decision-support systems may hold sway. Data may also take on more power than it should in clinical decision making, changing the nature of patient-physician relationships by eroding trust, good will, confidentiality, respect, and personal responsibility.

For OKRA’s Bouarfa, “Ethical concerns are a critical consideration when implementing AI technologies, as the insights generated by the tech can have a real impact on human lives, values, and freedom. However, when well-enabled with regulations, these technologies can provide enhanced accountability and more statistical evidence for decisions which remain, ultimately, in the hands of highly skilled professionals, such as healthcare practitioners and foster care professionals. The only way to achieve uplifting human outcomes through AI is by applying the right regulations and policies to machine learning system development and production. With sufficient and intelligent oversight in place, AI can effectively impact the health, safety, and freedom of the wider society.”

Challenges and Limitations in Machine Learning

“One of the challenges is the leg work it can take to get old, unstructured data formatted so that it’s machine readable and ready for insights and automation-driven workflows,” Talla’s May points out. He continues, “If you took all the documents in your Google Drive, for instance, and tried to use AI to draw conclusions, they wouldn’t be very good because the information wasn’t set up with machine readability in mind. At Talla, we highly recommend starting as soon as possible with new document creation in the machine-readable knowledge base, because there are slight behavior shifts in annotating the new content you create that vastly amplify how useful it can be in the future.”

Here are some challenges and helpful tips to consider:

  • Need for Diverse Teams: It’s critical to have diverse teams working on machine learning algorithms in order to feed a full range of possibilities and features that make sense in the real world. For example, if you only feed your facial recognition algorithm Caucasian faces, it won’t be trained to recognize faces of color. Google discovered this a few years ago when people of color were incorrectly tagged. As a result, the company received some very negative publicity.

  • Overfitting and Underfitting: A common problem in machine learning is overfitting, in which the system learns a function that fits the training data so closely that it picks up peculiarities that aren’t representative of real-world patterns and can’t generalize to new test data. This issue becomes particularly problematic as models increase in complexity. Underfitting is the related issue in which the model isn’t complex enough to capture the underlying data trend (see the sketch after this list).

  • Dimensionality: While having a large amount of data to work from can be a good thing, some machine learning problems may involve thousands or even millions of possible features, i.e., dimensions, for training. Known as “the curse of dimensionality,” this phenomenon (coined by mathematician Richard Bellman) slows training and makes it harder to find solutions. The key is to reduce the number of features in order to make the process manageable. The two main approaches to reducing dimensionality are projection, which transforms data in a high-dimensional space into fewer dimensions, and manifold learning, a class of methods to describe the low-dimensional, smooth structure of high-dimensional data.

  • Handwriting Recognition: One of the greatest challenges in machine learning is enabling computers to interpret the endless variation in human-generated script, that is, handwritten text, and translate it into digital form.
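
To see the overfitting and underfitting items from this list in action, here is a minimal sketch that fits polynomials of increasing degree to noisy data; all values are synthetic, and the exact errors will vary.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 60)
y = np.sin(x) + rng.normal(0, 0.3, 60)        # true pattern: sin(x) plus noise
x_train, y_train, x_test, y_test = x[:40], y[:40], x[40:], y[40:]

for degree in (1, 4, 20):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # Degree 1 tends to underfit (both errors high); degree 20 tends to
    # overfit (tiny training error, inflated test error); degree 4 balances.
    print(f"degree {degree:2d}: train {train_err:.3f}  test {test_err:.3f}")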

“As with any bleeding-edge technology, there is always room for improvement [in machine learning], and language and algorithmic bias definitely fall into that category. While there are already a variety of tools and techniques available to developers that can help minimize these challenges, they cannot yet be avoided altogether,” explains Coda Global’s Sharan Gurunathan. “While we expect AI-based systems to be purely objective and task-driven, human beings built these systems, and, as a result, the machines can reflect the biases their human developers hold. Therefore, it is imperative for developers who build solutions using AI/ML to study this phenomenon and address language and algorithmic bias from the start,” he adds.

Machine Learning Best Practices

The answers to the technical challenges of machine learning are to keep up with the literature, share knowledge with colleagues, and use that newfound knowledge to inform best practices. Here are some best practice concepts:

  • Use Adequate Data: Having an abundance of data is generally best, even if the data is only tangentially related to the outcome being predicted. Data is the oxygen required to bring life to any machine learning solution.

  • Check for Data Problems: Spot-check algorithms to assess which ones to focus on and which ones to put aside.

  • Obtain Experimental Data: As you work, use some experimental data to check hypotheses if possible.  

  • Predict Effects: Whether the relationships in your data are correlative or causal, it’s important to forecast the effects you expect before modeling.

  • Cross-Validate: You want your chosen classifier or learning algorithm to perform well on new data, so set aside a portion of your training data set for cross-validation (see the sketch after this list).

  • Rely on Bootstrapping: Used in statistical classification and regression, this meta-algorithm trains multiple models on random samples of the training data, drawn with replacement, and combines their outputs to improve the accuracy and stability of machine learning algorithms. It also reduces variance and helps avoid overfitting.

  • Check for False Positive and False Negative Rates: A false positive occurs when the model predicts the positive class in error; a false negative occurs when it predicts the negative class in error. While it’s challenging to eliminate false positives and negatives in every situation, the programmer can choose and tweak algorithms to strike a balance for each use case.

  • Think about Sensitivity and Specificity Criteria: Sensitivity and specificity evaluate binary classifier predictive accuracy. Sensitivity is a measure of how well classifiers identify positive cases. Specificity is the proportion of truly negative cases classified as negative.

  • Use Total Operating Characteristic and Receiver Operating Characteristic Methods: To evaluate classified output quality, there are two graphical methods you can use. The total operating characteristic (TOC) expresses a model's diagnostic ability. The TOC shows the numerators and denominators of previously mentioned rates. This method provides more information than the commonly used receiver operating characteristic (ROC), which graphically expresses the diagnostic ability of a binary classifier system as you vary discrimination thresholds.
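
Several of these practices fit in one short sketch: cross-validation, false positive and negative counts, sensitivity, specificity, and ROC AUC, shown here with scikit-learn on a built-in dataset as an illustrative setup.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000)

# Cross-validate: average accuracy over five held-out folds.
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))

# Inspect error types and binary-classifier criteria on one held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf.fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("false positives:", fp, " false negatives:", fn)
print("sensitivity:", round(tp / (tp + fn), 3), " specificity:", round(tn / (tn + fp), 3))
print("ROC AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))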

The Future of Machine Learning

Futurist Yonck thinks privacy issues will be a concern in the near future: “There are and will be many negative aspects to the increasing use of facial recognition everywhere, including on social media apps. This will be especially evident as the IoT fills our world with all manner of sensors and feedback.”

Yonck believes people can learn to adjust: “Just as we view privacy differently from our grandparents, so too will the next generations view it differently from us. This may result in accepting a world where facial recognition is broadly used in exchange for the many conveniences and personalizations it will make possible.”

But Yonck adds a caveat: “Some unforeseen future event or catastrophe could result in a major backlash in which younger generations reject these technologies en masse, embracing a new age of actively sought hyper-privacy.”

What else lies ahead? Here are some potential futures:

  • Improved Unsupervised Algorithms: Advances in building smarter, unsupervised learning algorithms will lead to speedier, more precise outcomes and the ability of AI to handle unexpected events, create new behaviors, and carry on with its process in everything from air traffic control to complex medical analysis and, of course, robotics.

  • Collaborative Learning: As IoT expands, it’s likely that professionals will utilize large numbers of separate computational entities to learn collaboratively. This method will produce better learning results than solitary processing would.

  • Cognitive Services: Machine learning application programming interfaces (APIs) will let developers build speech, facial, and vision recognition, handwriting analysis, and speech and language understanding into their applications, all of which will bring us to a level of deeper personalization.

  • Deeper Personalization: In the future, users will likely receive far more precise recommendations as advertising becomes more accurate and effective. The use of this technology will result in a vastly improved user experience on a day-to-day basis.

  • Welcome to Westworld: While we’re much closer to building realistic artificial intelligence, creating a self-aware android that looks completely human, like the characters Maeve, Bernard, or Dolores, remains an enormous challenge. Futurists like Yonck think that it’s possible, and many of the AI technologies that underpin such beings are in the works, but the time frame is impossible to foresee — it could happen a decade or a half-century from now.

  • Guardrails for the Age of Machine Learning: If the negative possibilities of a future enabled by machine learning, robots, and other AI-related tech keep you up at night, you may want to check out the Future of Life Institute and its FAQ page. Industry leaders are also partnering to fill the gap in understanding the benefits of these new technologies. The Partnership on Artificial Intelligence to Benefit People and Society is a group founded by Amazon, Facebook, Google, Microsoft, and IBM; as of this writing, Apple is in talks to join.

In a recent presentation on machine learning, AI, and robots, Yonck suggested that the best attitude to take is that change is inevitable and that if you believe your job may soon be obsolete, it’s time to prepare now.

Alex Cardinell of Cortx agrees and warns, “Artificial intelligence and text generation in general are improving and will continue to improve at an incredibly fast rate, especially over the next five to 10 years. So, writers should probably be a little worried.”

There may be ethical, safety, and other challenges, but the ubiquity of machine learning and AI means the need for trained professionals is intensifying.

The Growing Job Market for Machine Learning Professionals

Finding talent in machine learning is emerging as a key challenge; in the HFS research cited previously in this article, 42 percent of companies recognize significant skill deficiencies as they shift from traditional IT to machine learning and data science skills.

LinkedIn published a report in late 2017 naming the fastest growing jobs in the U.S. The top two were machine learning engineer, which grew 9.8 times over the previous five years, and data scientist, which grew 6.5 times since 2012. The 2017 report The Quant Crunch: How the Demand for Data Science Skills Is Disrupting the Job Market found that machine learning job listings have increased by 17 percent, with an average starting salary of $114,000.

Machine Learning Resources

Keeping up with rapid changes in machine learning and related fields is easier with resources like professional publications, programs at universities, glossaries, and recently released books.

The following professional publications report on the latest research and developments:

  • ACM TKDD: Publishes papers addressing the technical and logical foundations of data mining and knowledge discovery.

  • Big Data: Covers the opportunities and challenges of the collection, analysis, and dissemination of vast amounts of data.

  • Case Studies In Business, Industry, and Government Statistics: Showcases data analysis case studies of novel techniques applied to known or novel data, to be used for instruction, training, or self-study.

  • Chance: Entertains and informs non-technical audiences about the latest in sound statistical practice.

  • Data Science Journal: Publishes papers on the use and reuse of research data and databases, and their management, across all research areas, including the arts, humanities, technology, and science.

  • EPJ Data Science Journal: Covers a wide range of research domains and applications with an emphasis on techno-socio-economic systems that view the digital “tracks” of people as first-order subjects for scientific investigation.

  • IEEE Transactions on Knowledge and Data Engineering: Informs developers, managers, researchers, users, and strategic planners about state-of-the-practice and state-of-the-art activities in data engineering and knowledge areas.

  • Intelligent Data Analysis: Explores issues related to the research and applications of artificial intelligence in data analysis across a variety of disciplines.

  • International Journal of Data Mining and Bioinformatics: Facilitates collaboration between data mining researchers and bioinformaticians who use data mining for bioinformatics, and provides a unified forum for students, researchers, practitioners, and policymakers in a rapidly expanding multi-disciplinary research area.

  • Journal Of Big Data: Publishes research papers and case studies covering a broad range of topics related to big data applications, analytics, and data-intensive computing.

  • Journal of Data Mining and Knowledge Discovery: Publishes technical papers related to the research and practice of data mining and knowledge discovery, surveys major areas and techniques, and describes significant applications in detail.

  • Journal of Machine Learning Research: Provides an international forum for high quality scholarly articles related to all areas of machine learning.

  • Knowledge and Information Systems (KAIS): Reports on new advances and emerging topics related to advanced information and knowledge systems and provides an international forum for professionals and researchers.

  • Machine Learning: Publishes articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems for an international audience.

  • Predictive Modeling News: Covers a wide range of healthcare predictive analytics topics from clinical, care management, actuarial, operations, and technological perspectives.

  • SIGKDD Explorations: Supports the adoption, advancement, and education related to the science of data mining and knowledge discovery gained from all data types stored in computers or computer networks.

  • Statistical Analysis and Data Mining: Addresses the broad area of data analysis, including statistical approaches, data mining algorithms, and practical applications, with an emphasis on solving real problems in commerce, engineering, and science.

Universities are training the next generation of data scientists and machine learning and AI experts to fill the current and anticipated talent gaps. Here are some top universities offering programs and courses in these related disciplines:

  • California Institute of Technology

  • Carnegie Mellon University

  • Columbia University

  • Cornell University

  • Georgia Tech

  • Johns Hopkins University

  • University of California at Berkeley

  • Stanford University

  • University of Washington

  • University of California San Diego

  • University of Massachusetts Amherst

  • University of Illinois Urbana Champaign

  • Penn State University

  • University of North Carolina at Chapel Hill

  • University of Michigan

  • University of Wisconsin-Madison

Useful glossaries that clarify machine learning and AI terminology for beginners and developers are also widely available online.

For newcomers to machine learning, having access to long-form, detailed information is a good starting point. Here are some recent book releases that can help:

  • Chollet, Francois. Deep Learning with Python (1st ed.). Shelter Island: Manning Publications, 2018.

  • Géron, Aurélien. Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems (1st ed.). Sebastopol: O’Reilly Media, 2017.

  • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. Cambridge: The MIT Press, 2016.

  • White, Michael B. Machine Learning: A Journey from Beginner to Advanced, Including Deep Learning, Scikit-Learn, and TensorFlow (2nd ed.). CreateSpace Independent Publishing, 2018.

  • Theobald, Oliver. Machine Learning for Absolute Beginners: A Plain English Introduction (2nd ed.). Scatterplot Press, 2017.

  • Gift, Noah. Pragmatic AI: An Introduction to Cloud-Based Machine Learning (1st ed.). Boston: Addison-Wesley Professional, 2018.

There are also hundreds of machine learning and AI conferences being staged around the world — they’re proliferating almost as fast as algorithms.

The Future of Work Automation with Smartsheet

Empower your people to go above and beyond with a flexible platform designed to match the needs of your team — and adapt as those needs change. 

The Smartsheet platform makes it easy to plan, capture, manage, and report on work from anywhere, helping your team be more effective and get more done. Report on key metrics and get real-time visibility into work as it happens with roll-up reports, dashboards, and automated workflows built to keep your team connected and informed. 

When teams have clarity into the work getting done, there’s no telling how much more they can accomplish in the same amount of time. Try Smartsheet for free, today.

