
Machine Learning: History, Use-Cases, and Problems

Carter McKay
22 min read · May 26, 2022

--

Abstract

Machine learning is a subfield of artificial intelligence that deals with the creation of algorithms that can learn from and make predictions on data. Machine learning has many real-world applications, including chatbots, translation, healthcare, finance, and customer service, as well as autonomous vehicles and robots. The impact of machine learning on the workforce is both significant and far-reaching. Many industries are being disrupted by it, with jobs being created and destroyed in the process, but machine learning is also creating new opportunities for workers in many industries. In the future, machine learning is likely to have an even greater impact on the workforce: job losses in some industries are likely to be offset by job gains in others. Overall, the impact of machine learning on the workforce is likely to be positive.

Artificial intelligence, also known as AI, has applications that range from simple problems to complex tasks like driving cars. This paper covers a branch of AI called Machine Learning (ML), its applications to real life, and its consequences. Machine learning and artificial intelligence have a place in this world helping humans with specific tasks like grammar, research, and development. But if we give these systems too much data and control, we will reach a point where we can't live without them; that is when the technology becomes a liability, the line between human and machine blurs, and the consequences can be dire. Machine learning is a way for computers to learn without being explicitly programmed: the computer learns on its own by recognizing patterns and making predictions. Machine learning algorithms are used in a variety of applications, such as email filtering and computer vision. The goal of machine learning is to automatically improve given experience. For example, if we were training a computer to recognize images, we would give it many different pictures until it could recognize the features that make up an image of a specific object. The more experience the computer has, the better it will be at recognizing images. This paper examines the history of AI and where ML fits into it; the current uses of ML in personal and work environments, including the ways I have applied them to my own life and work; and, lastly, the pitfalls of machine learning and where it should and shouldn't be used.

History

The history of AI is a long and complicated one. The term Artificial Intelligence was coined by John McCarthy in 1955. McCarthy was a mathematician who also worked in computer science, cognitive science, and linguistics, and he is considered the father of AI. He came up with the term to define the science and engineering of making intelligent machines. McCarthy, along with Marvin Minsky, Claude Shannon, and Alan Turing, is considered one of the founders of AI. Turing was a British mathematician and computer scientist who worked on breaking the German Enigma code during WWII. AI and mechanical machines that think for themselves have been discussed since Homer first wrote of mechanical "tripods" waiting on gods (Buchanan, B. 2005). Until the last half-century, though, humans weren't able to create AIs that could predict reliably. Machine learning is all about predicting an outcome, whether that is a question that needs an answer or a chess move that requires a countermove. You can't talk about AI/ML without talking about Alan Turing; one of his most significant achievements was theorizing that artificial intelligence could be created by training on data and then letting the AI learn by itself. This is still how AI/ML algorithms are trained today.

Turing was a founding father of artificial intelligence and of modern cognitive science, and he was a leading early exponent of the hypothesis that the human brain is in large part a digital computing machine. He theorized that the cortex at birth is an "unorganised machine" that through "training" becomes organized "into a universal machine or something like it." Turing proposed what subsequently became known as the Turing test as a criterion for whether an artificial computer is thinking (Copeland, B., 2021).

You've seen a version of the Turing test in action if you've ever visited a website with one of those "select all the stoplights" challenges (a CAPTCHA, essentially a reverse Turing test). Training data is the most important part of any AI/ML algorithm; if you give an algorithm bad training data, you will get bad results. One of the most impressive consequences of only needing to train an algorithm is that you can train it to solve almost any problem. By giving it chess problems and solutions, it will be able to start predicting chess moves. If you wanted a conversational AI/ML to answer questions, you would train it with hundreds of question-and-answer pairs. Just by changing the training data, you can get wildly different results.
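The idea that the same algorithm produces wildly different behavior depending on its training data can be sketched in a few lines of plain Python. Here the "algorithm" is a toy nearest-neighbor lookup, and the chess and trivia question-answer pairs are invented for illustration; real systems learn far richer representations.

```python
# Same learning procedure, different training data, different behavior.
# "Training" a nearest-neighbor model is just storing the labeled examples.

def train(pairs):
    return list(pairs)

def predict(model, question):
    """Answer with the stored question that shares the most words with the input."""
    words = set(question.lower().split())
    best = max(model, key=lambda pair: len(words & set(pair[0].lower().split())))
    return best[1]

chess_data = [("what counters the queen's gambit", "play the slav defense"),
              ("what opens with e4", "the king's pawn game")]
trivia_data = [("who coined the term artificial intelligence", "john mccarthy"),
               ("who broke the enigma code", "alan turing")]

chess_bot = train(chess_data)
trivia_bot = train(trivia_data)

print(predict(chess_bot, "how do i counter the queen's gambit"))
print(predict(trivia_bot, "who coined the term artificial intelligence"))
```

Swapping `chess_data` for `trivia_data` is the entire difference between the two bots, which is the point the paragraph above makes about training data.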

Claude Shannon was an American mathematician, electrical engineer, and cryptographer, and he is considered the father of information theory: the study of the representation, transmission, storage, and retrieval of information. Marvin Minsky was an American cognitive scientist, computer scientist, and author whose main focus was artificial intelligence, neural networks, and robotics. One of the first attempts at creating a thinking machine came in the 1830s, when Charles Babbage, an English mathematician, designed the Analytical Engine, a machine that could be programmed to perform any calculation that could be done by hand. Babbage never completed the machine, but his ideas laid the foundation for the modern computer. In 1956, a group of scientists organized a workshop at Dartmouth College called the Dartmouth Summer Research Project on Artificial Intelligence, the first AI research project. Its organizers included McCarthy, Minsky, Shannon, and Nathaniel Rochester, and they used McCarthy's term Artificial Intelligence to define their research. The Dartmouth Summer Research Project on Artificial Intelligence is considered the beginning of AI as a field of study.

The first AI applications were developed in the 1960s and could solve simple problems like playing checkers and solving algebraic equations. In 1963, Edward Feigenbaum and Julian Feldman published a collection called Computers and Thought, one of the first books to suggest that computers could be used to simulate human thought. In the 1970s, AI research began to focus on making computers more like humans, and researchers developed programs that could understand natural language and recognize objects. In the 1980s, the focus shifted from making computers more like humans to making them more efficient at specific tasks. This shift was due to the success of expert systems: computer programs that use rules to solve problems in a specific domain. In the 1990s, AI research turned to machine learning, a subfield of AI and a method of teaching computers to learn from data. The current state of AI is built on the state of machine learning, whose algorithms automatically improve at a task given data.

The simplest algorithms are if-else rules. They work by checking whether something is true (i.e. 1 + 1 = 2) and proceeding, or (else) doing something else instead. These simple rules are helpful when you want to automate simple tasks, for example: when I get a text message, create a row in Google Sheets with the time at which the message was received. While if-else rules are sometimes considered a basic form of AI, because the output differs based on the input, they don't actually learn from data, and they aren't very useful when you are trying to have an AI answer questions about people in history. The most common type of complex machine learning model is the artificial neural network. Neural networks are loosely modeled after the brain, and they are composed of a series of interconnected nodes, or neurons. Each node has weighted inputs and an activation function. The activation function determines whether or not the neuron will fire, and the weights determine how strongly each input contributes. Neural networks can have a single layer or multiple layers; the more layers there are, the more complex the network. Machine learning is divided into two main types: supervised learning and unsupervised learning. In supervised learning, the data are labeled, and the computer learns to generalize from a set of labeled training examples. In unsupervised learning, the data are not labeled, and the computer instead discovers patterns and structure in the data on its own, such as clusters of similar items.
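The two building blocks described above, a fixed if-else rule and a single neuron with weighted inputs and an activation function, can be sketched in a few lines. The weights and the step activation below are hand-picked for illustration, not learned; real networks adjust their weights from training data.

```python
# A single artificial neuron: weighted sum of inputs, then an activation
# function decides whether it "fires".

def step_activation(x):
    """Fire (1) if the weighted input crosses zero, else stay silent (0)."""
    return 1 if x >= 0 else 0

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return step_activation(total)

# With these hand-picked weights the neuron behaves like a logical AND gate:
weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))

# The if-else "algorithm" from the text, by contrast, never adjusts anything:
def if_else_rule(got_text_message):
    return "append row to spreadsheet" if got_text_message else "do nothing"
```

Stacking many such neurons into layers, and learning the weights instead of hand-picking them, is what turns this toy into a real neural network.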

Use Cases

Machine learning is a type of artificial intelligence that provides computers with the ability to learn without being explicitly programmed. It's scary how fast this technology has progressed in just a few years, and it will only get better at an even faster rate as more people become interested in the field. I use machine learning every day for work: I've been able to automate around 20% of my job with it. My assistant, named Carter McKAI, writes code, answers email and Slack messages, and finds relevant research for me and a couple of other people I've allowed to use the technology. You might not realize it, but you also use ML every day. It might not be making you money like it does for me, but applications like Siri and Cortana act as virtual assistants that can answer your questions and complete tasks. ML isn't just useful as a virtual assistant, though; it is also used by media companies like Netflix and Google to suggest certain titles to you. If you use Grammarly, the grammar autocorrect engine, that is another instance of ML being utilized to help humans.

There are three main categories of ML use cases in everyday life: active, passive, and autonomous. Active machine learning is where the user is actively involved in the learning process. This could be something as simple as playing a game that gets progressively more difficult the more you play it, or using a chatbot that gets better at understanding you the more you talk to it. Passive machine learning is where the user is not actively involved in the learning process. This could be something like a website that tracks your browsing habits and shows you targeted ads, or a music streaming service that creates a personalized playlist for you based on your listening history. Autonomous machine learning is where the machine is completely independent and makes its own decisions. This could be something like a self-driving car, or a robot that can clean your house.

Active Use-Cases

The idea of chatbots is to create some sort of artificial intelligence that can hold a conversation with a human. Chatbots are one of the most used ML technologies of the 21st century. They help users send texts, prove points, translate languages, and much more. Chatbots are built on top of an ML technology called NLP (Natural Language Processing).

“Natural Language Processing is a form of AI that gives machines the ability to not just read but to understand and interpret human language. With NLP, machines can make sense of written or spoken text and perform tasks including speech recognition, sentiment analysis, and automatic text summarization.” (Ximena Bolaños, 2021).

When was the last time you asked Siri to provide you with directions, to send a text, or to tell you a riddle? NLP is used to build chatbots: computer programs that can mimic human conversation. NLP allows chatbots to understand the sentiment of a conversation, the meaning of words, and the intent of a user. This technology is used in many different applications such as customer service, marketing, and even healthcare. One company that has used NLP-powered chatbots is Google. Google developed a messaging app called Allo (since discontinued). Allo used NLP to provide users with suggested replies to messages, as well as information about their surroundings based on their location data. Siri and other chatbots have become incredibly helpful technology that billions of people use every day.
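The core chatbot step described above, mapping a user's message to an intent, can be sketched with the crudest possible scoring: word overlap. The intents and canned replies below are invented for illustration; assistants like Siri use far richer language models.

```python
# A toy intent matcher: pick the known intent whose trigger phrase shares
# the most words with the user's message, then return its canned reply.

INTENTS = {
    "get directions": "Okay, getting directions.",
    "send a text": "Who should I text?",
    "tell a riddle": "What has keys but can't open locks? A piano.",
}

def respond(message):
    words = set(message.lower().split())
    # Score each intent by word overlap with the message.
    intent = max(INTENTS, key=lambda name: len(words & set(name.split())))
    return INTENTS[intent]

print(respond("can you send a text to mom"))
```

Everything a real NLP pipeline adds (tokenization, embeddings, learned intent classifiers) exists to make this matching step robust to how people actually phrase things.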

Have you ever been researching something and found that the article you were looking for was in a different language? You probably used Google translate or another translating technology. This technology has been built on NLP algorithms.

“Translating languages is more complex than a simple word-to-word replacement method. Since each language has grammar rules, the challenge of translating a text is to do so without changing its meaning and style. Since computers do not understand grammar, they need a process in which they can deconstruct a sentence, then reconstruct it in another language in a way that makes sense.”(Ximena Bolaños, 2021).

The NLP algorithm decides what the text is trying to communicate, then finds an effective way to communicate that in the second language while keeping the meaning and tone of the first. NLP is used in a variety of fields. One example is healthcare, where it is used to process and analyze patient records in order to improve diagnosis accuracy and treatment recommendations. It is used to help lawyers research cases and predict the outcomes of legal decisions. It is used to process financial documents and make predictions about the stock market. Its use cases extend to chatbots that provide customer support and answer questions, and more broadly it can process and analyze large amounts of text data in order to extract valuable information.
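The naive word-to-word replacement the quote above warns against is easy to demonstrate. The tiny English-to-Spanish dictionary below is a made-up example; the output is grammatically wrong precisely because no grammar is modeled, which is why real translators use NLP models instead.

```python
# Naive word-for-word "translation": replace each word via a lookup table.
# No grammar, no word order, no gender agreement.

WORD_MAP = {"the": "el", "white": "blanco", "house": "casa",
            "is": "es", "big": "grande"}

def naive_translate(sentence):
    return " ".join(WORD_MAP.get(w, w) for w in sentence.lower().split())

print(naive_translate("The white house is big"))
# Produces "el blanco casa es grande"; a fluent speaker would say
# "la casa blanca es grande": different gender, different word order.
```

The gap between those two outputs is exactly the "deconstruct, then reconstruct" work the quoted passage attributes to NLP.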

One of my use cases for Carter McKAI is paragraph generation. Given a prompt, it will generate a paragraph pertaining to that prompt. For example, given the prompt: "Write a paragraph on the effects on homework on children", Carter McKAI generated the paragraph: "Homework can have both positive and negative effects on children. The positive effects are that it can help them to learn time management skills, become more responsible and improve their grades. However, the negative effects are that it can cause them to feel overwhelmed, stressed and anxious." This paragraph is completely free of plagiarism and can be used as a good starting point or introduction for the topic. Carter McKAI can also generate and edit code based on prompts. Given a prompt of "generate a website that says Hello World, use css to make it look good", it generated the website: https://ztfojw.csb.app/test.html.

Passive Use-cases

Have you ever been scrolling through YouTube and received a recommendation for exactly the right video? Almost all social media platforms use ML to suggest what content you should consume. When you log in to YouTube for the first time, you will see wildly different videos on the home page; as you watch, YouTube builds ML models to recommend more relevant videos to you. You don't directly tell YouTube what categories of videos you like; instead it recommends videos passively. Something scary about this is that the more data you passively give to big companies, the better they can target you. Systems like these are slowly shaping us into consuming only content that we agree with. In a way, they constrain us from accessing content that could be fulfilling; instead we interact only with things that reaffirm our beliefs. More and more, we interact only with content directed toward our own interests, and this has major implications for what we know about the world.
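The passive loop described above can be caricatured in a few lines: recommend whatever category dominates a user's watch history. The history and catalog below are invented; YouTube's real system uses large learned models rather than a simple counter, but the feedback-loop concern is the same, since the output only ever reinforces the dominant category.

```python
# Recommend unwatched videos from the user's most-watched category.
from collections import Counter

def recommend(watch_history, catalog):
    favorite = Counter(cat for _, cat in watch_history).most_common(1)[0][0]
    watched = {title for title, _ in watch_history}
    return [t for t, cat in catalog if cat == favorite and t not in watched]

history = [("intro to chess", "chess"), ("endgame tricks", "chess"),
           ("cat compilation", "pets")]
catalog = [("sicilian defense basics", "chess"), ("puppy training", "pets"),
           ("intro to chess", "chess")]

print(recommend(history, catalog))
```

Note that the pets video never gets recommended even though the user watched one: the majority interest crowds everything else out, which is the filter-bubble effect in miniature.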

Grammarly is another passive way that machine learning helps users. Grammarly was trained on billions of lines of text from the internet and is meant to spot spelling and grammar errors. You don't directly ask Grammarly to find the errors in a sentence; it just does. There are a few potential downsides to using Grammarly. First, the software could make mistakes in its corrections; this is unlikely, but possible. Second, some people may not like the idea of having their writing analyzed by a machine learning algorithm. Why is collecting data a bad thing? There are a few potential problems with collecting data via machine learning. First, the data collected could be inaccurate or contain errors, especially if it is collected automatically without any human oversight. Second, there is always the possibility that the data could be used for nefarious purposes, such as identity theft or fraud.
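The bare idea behind data-driven spell correction can be sketched with edit distance: suggest the dictionary word requiring the fewest single-character changes. The four-word dictionary is an illustrative stand-in; tools like Grammarly are trained on vastly more data and also use surrounding context, which this sketch ignores.

```python
# Spell correction by minimum edit distance over a (tiny) dictionary.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

DICTIONARY = ["grammar", "hammer", "glimmer", "grandma"]

def correct(word):
    return min(DICTIONARY, key=lambda w: edit_distance(word, w))

print(correct("gramar"))
```

The "could make mistakes" downside noted above falls straight out of this design: when two dictionary words are equally close to the typo, the corrector has to guess.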

Autonomous Use-Cases

Tesla is a leader in self-driving consumer vehicles and in the use of machine learning in cars, with vehicles designed to handle much of the driving task with minimal intervention from a human driver. Tesla has announced that all new cars will come with the hardware needed for fully self-driving cars. The company said: "All new Tesla cars have the hardware needed in the future for full self-driving in almost all circumstances. The system is designed to be able to conduct short and long-distance trips with no action required by the person in the driver's seat." (Tesla) Another impressive feature Tesla has built into its vehicles is the ability to park itself, and to come pick you up when you're ready to leave. Tesla's website details this process: "When you arrive at your destination, simply step out at the entrance and your car will enter park seek mode, automatically search for a spot and park itself. A tap on your phone summons it back to you." (Tesla) In most ML use cases you don't need extremely fast response times, but when it comes to driving a vehicle in the real world, the ML system needs to make decisions in milliseconds. This is why Tesla has spent millions of dollars creating custom ML hardware and algorithms.

Do you own, or have you seen, one of the floor-cleaning robots? While not necessarily built using ML, they do use AI to avoid objects and find the most efficient routes to clean your floors. These robots have to make millions of decisions over their lifetimes, from how far to stay from a wall to whether to clean the floor in rows or columns. Amazon has been using autonomous robots in its facilities for years. These robots are responsible for retrieving items from shelves and delivering them to humans, who then pack them into boxes. The robots have to be very efficient in order to keep up with the high demand of online shopping, and they use sensors and cameras to map out the warehouse and find the shortest route to their destination.
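The route-planning step those warehouse robots need can be sketched with breadth-first search over a grid map, where `#` marks a shelf and `.` open floor. The map is invented for illustration; real robots fuse live camera and sensor data into their maps, and this sketch only captures the planning step, not perception.

```python
# Shortest 4-directional path through a grid warehouse via breadth-first search.
from collections import deque

def shortest_path_length(grid, start, goal):
    """Number of steps in the shortest path, or -1 if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

warehouse = ["..#.",
             "..#.",
             "...."]
print(shortest_path_length(warehouse, (0, 0), (0, 3)))
```

BFS guarantees the first time the goal is reached is via a shortest route, which is why it (and weighted variants like A*) is a standard planning primitive for robots like these.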

There are many different types of ML, and each has its own place in the world. Some are also more dangerous than others. If Siri tells you that the Eiffel Tower is 1183 ft tall instead of 1083 ft, it does not really have much of an effect; but if a Tesla makes a wrong prediction, that could result in a car accident. The following is from a research paper published by the European Parliamentary Research Service: "The potential impacts of AI are far-reaching, but they also require trust from society. AI will need to be introduced in ways that build trust and understanding, and respect human and civil rights. This requires transparency, accountability, fairness and regulation." (EPRS, 2020). Who is responsible for the technology, who regulates it, and where do you draw the line between what is ethical and what is not? If we want to use ML and AI, it is important that we do so in a way that is safe for us all.

Machine Learning’s Impact on the Job Industry

The impact of machine learning on job industries is both significant and far-reaching. Many industries are being disrupted by machine learning, with jobs being created and destroyed in the process. However, machine learning is also creating new opportunities for workers in many industries. Some of the most significant impacts are being felt in manufacturing and agriculture, where machines are increasingly able to perform tasks that have traditionally been done by human workers, such as sorting and packaging products, leading to a decline in the need for human workers. The healthcare industry is also being affected: medical diagnostic tools are becoming more sophisticated and can detect diseases and conditions with greater accuracy, reducing the need for human doctors and nurses in many settings. The financial services industry is affected as well, with algorithms being used to make investment decisions and provide financial advice, resulting in a decline in demand for human financial advisors. In the future, machine learning is likely to have an even greater impact on the workforce. Job losses in some industries are likely to be offset by job gains in others, and overall the impact is likely to be positive, as more jobs are created than destroyed. Machine learning is making jobs easier by automating tasks such as data entry and analysis, which frees up employees' time to focus on more creative and strategic work. It is also making work more fulfilling by giving employees new opportunities to learn new skills, such as how to use new software or how to interpret data. In the long run, machine learning is likely to have a positive impact on employment and job satisfaction.

Machine Learning Pitfalls

From the above, it seems like ML is very flexible and can be used in almost any industry, but there are many instances where machine learning shouldn't be used, or where it might become dangerous. Microsoft once built an ML chatbot called Tay, which was trained on hundreds of thousands of tweets. "Tay grew from Microsoft's efforts to improve their 'conversational understanding.' To that end, Tay used machine learning and AI. As more people talked with Tay, Microsoft claimed, the chatbot would learn how to write more naturally and hold better conversations." (Lexalytics, 2020). Instead, Tay began learning from toxic tweets and started regurgitating what it learned. How did this happen? Tay was not given clear guidelines on what kind of language was appropriate; without that guidance, the chatbot was left to learn from whatever tweets it was exposed to, which were often toxic and offensive. Microsoft has since taken Tay offline and apologized for the chatbot's offensive behavior. Tay is a prime example of why it is important to consider the data that you feed into a machine learning algorithm. If the data is not high quality, or does not represent the population you are trying to target, your algorithm will likely produce inaccurate or harmful results. Below is a screenshot that Lexalytics took of a couple of Tay's tweets.

Figure 1: Screenshots of tweets that Tay sent

If you want to build a machine learning bot that will converse with humans, you want to control what data you train it with. ML algorithms are like children: if a child grows up in a household where offensive language is accepted, they will be more likely to say offensive things. Another example of machine learning being used by companies is Google Photos. Google Photos uses machine learning to identify people in photos and group them together, by analyzing the pixels in the photos and finding patterns that match known faces. It can also identify objects in photos and label them accordingly. The bad implication of this is that Google can collect a lot of data about people without them knowing: who they are, where they are, and what they are doing. This can be used for targeted advertising or even for more nefarious purposes. Machine learning is a powerful tool that can be used for good or bad, and it is important to be aware of the implications before using it. One of the most common applications of machine learning is facial recognition software, which is used by law enforcement and security agencies to identify people in photos and videos. It can be used to track people's movements and to find out personal information about them, enabling intrusive surveillance and targeted marketing; facial recognition can have harmful implications if it is not used responsibly. Machine learning also suffers when not provided with enough data, because machine learning is all about predictions: if you don't provide enough data, the algorithm will not be able to understand what it is supposed to predict.
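The garbage-in, garbage-out failure behind Tay can be demonstrated with the most trivial classifier possible. The training sets below are invented: one is curated with sensible labels, the other simulates unvetted scraped data with hostile labels. The classifier faithfully reproduces whichever it was fed, which is the whole point.

```python
# The same trivial learner produces opposite behavior from clean vs. skewed
# training data: it labels a message with the label of the most word-similar
# training example.

def classify(training, message):
    words = set(message.lower().split())
    best = max(training, key=lambda ex: len(words & set(ex[0].lower().split())))
    return best[1]

clean_data = [("have a great day", "friendly"),
              ("you are awesome", "friendly"),
              ("i hate you", "hostile")]

# Simulated unvetted data scraped from a toxic source, with poisoned labels:
skewed_data = [("have a great day", "hostile"),
               ("you are awesome", "hostile")]

msg = "you are awesome"
print(classify(clean_data, msg))   # learned from curated data
print(classify(skewed_data, msg))  # learned from unvetted data
```

Nothing in the algorithm changed between the two calls; only the data did, which is why curating and auditing training data matters as much as the model itself.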

Conclusion

In conclusion, machine learning is a field of AI with enormous potential, and it is being used in many different industries, with new use cases appearing every day. While machine learning is very powerful, it also has the potential to do a lot of harm if not used correctly. I believe that machine learning will have a very positive impact on the workforce overall, but some industries will be disrupted in a negative way, whether through job loss or through malicious use of the technology. With the ability to learn and improve on its own, machine learning has the potential to change the way we live and work for the better, and in many ways it already has: it is being used to diagnose diseases, to improve self-driving cars, and to make financial predictions, to name just a few of the ways it is improving our lives. Given this vast potential, it is important that we use machine learning not just as a way to make computer users' jobs easier or faster, but as a way to improve the world as a whole. As the technology becomes more advanced and more prevalent, I believe we will see even more positive applications of it. It is important to remember, however, that with great power comes great responsibility: machine learning is a tool that can be used for good or for evil, and as we continue to use it, we must ensure that we do so in a way that benefits society as a whole. As machine learning continues to evolve, it is likely that we will see even more ways in which it can make our lives better.

Bibliography

A. Nayak and K. Dutta (2017), “Impacts of machine learning and artificial intelligence on mankind,” 2017 International Conference on Intelligent Computing and Control (I2C2), pp. 1–3, DOI: 10.1109/I2C2.2017.8321908.

This paper walks through the history of AI/ML and shows its potential significance in modern-day computing. It explains the design of data structures and algorithms and the reasoning behind machines doing tasks better and faster than humans. It also describes how machine learning is currently being utilized and the effect it has had on different industries.

Schmidt, J., Marques, M.R.G., Botti, S. et al. (2019). Recent advances and applications of machine learning in solid-state materials science. npj Comput Mater 5, 83. DOI: https://doi.org/10.1038/s41524-019-0221-0

This article details machine learning principles, algorithms, and descriptors, emphasizing their application to material science. It covers how data is collected, prepared, and labeled and the importance of these steps. After the data steps, it moves on to picking your model, training said model, and fine-tuning.

Bolaños, X. (2021, September 29). Natural language processing and machine learning. Encora. https://www.encora.com/insights/natural-language-processing-and-machine-learning

This article covers NLP, or Natural Language Processing, which is the technology that makes AI/ML chatbots run. It covers the companies and products that use NLP, and where NLP fits in the AI/ML realm. It explains how NLP can be and is applied, along with its advantages and disadvantages.

IBM Cloud Education. (n.d.). What is machine learning? IBM. https://www.ibm.com/cloud/learn/machine-learning

This article provides excellent simple explanations for average computer users. It shows the differences between Machine Learning, Deep Learning, and Neural Networks and their advantages. One of its main points is real-world use cases, including, Speech recognition, Customer service, and Automated stock trading. It also focuses on some of the challenges with machine learning, like the impact of AI on jobs, privacy, and bias/discrimination.

European Commission, Directorate-General for Research and Innovation, Cappelli, P. (2020). The consequences of AI-based technologies for jobs, Publications Office. https://data.europa.eu/doi/10.2777/348580

This article starts by covering the nature of the discussion behind AI in the job space, why the discussion is important and worth investigating. Next, it moves on to how AI has played a part in the job market in the past, and how it might play a part in the future. It explains that certain jobs that require a limited amount of skill will be replaced by raising the required skill for jobs.

Lexalytics. (2020, June 5). Stories of AI failure and how to avoid similar AI fails. Retrieved February 25, 2022, from https://www.lexalytics.com/lexablog/stories-ai-failure-avoid-ai-fails-2020

This article covers some of the biggest failed AI experiments, ranging from Microsoft's Tay Twitter bot, which turned racist, to Apple's Face ID technology being foiled by fake masks. It shows that by giving AI the ability to learn, you can get disastrous effects. The article also talks about how to avoid these downfalls and how to build more resilient AI/ML technology.

Tesla. Autopilot. Tesla.com. Retrieved March 25, 2022, from https://www.tesla.com/autopilot

This is the Tesla website that is dedicated to its autopilot feature. It goes into detail about what the autopilot is, what it can do, and how to use it. This website would be useful for someone who is researching the Tesla autopilot, or someone who is considering buying a Tesla with the autopilot feature.

EPRS (March 2020), The ethics of artificial intelligence: Issues and initiatives, from: https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf

This report provides an overview of the ethical issues associated with artificial intelligence (AI), as well as existing initiatives at the international, European and national levels aimed at addressing these issues. It begins by outlining some key concepts related to AI ethics, before discussing the main ethical concerns that have been raised in relation to its development and use. These include fears about job losses and other economic impacts resulting from increased automation; risks to privacy and data security; potential biases in decision-making systems based on machine learning algorithms; and the possibility of AI being used for malicious purposes, such as through cyber-attacks or autonomous weapons systems. The report then looks at efforts underway to develop ethical frameworks for AI, including initiatives led by businesses, civil society organizations and international bodies such as the United Nations Educational, Scientific and Cultural Organization (UNESCO). Finally, it summarizes recent debates on AI ethics within the European Union Institutions.

Buchanan, B. G. (2005). A (Very) Brief History of Artificial Intelligence. Aitopics. Retrieved April 15, 2022, from https://aitopics.org/doc/journals:0C596275/

This article provides a brief history of artificial intelligence and its development over time. It discusses the major milestones and achievements in AI technology and research, as well as the challenges that have been faced along the way. This article would be useful for anyone interested in learning more about artificial intelligence and its development.
