Despite science fiction's compendium of out-of-control artificial intelligences taking over the world, economic theory suggests that human judgment must accompany advances in AI. This makes those who display good judgment more valuable, not less. So the question is: what defines good judgment, and how do we measure it?
What Makes AI Useful?
AI advancements bring down the cost of prediction. Prediction is not mystic divination of the future. Rather, it takes into account all available data to create new information about possible outcomes. Often, this means sifting through a lot of data and breaking it into processable chunks. Facial recognition is a perfect example: breaking the patterns in images into components allows machines to determine whether each image contains a human face. Although this technology is useful on a number of levels, it has come at a high cost. Fortunately, as AI advances, machine prediction will become faster and more affordable.
AI is useful because its prediction capability helps humans make better decisions. However, decision-making also involves judgment, something humans still do better than machines. For instance, consider credit card fraud. Creditors want to let you make purchases while detecting possible fraud. AI makes this prediction, but it is imperfect: it cannot consistently decline transactions if and only if fraud exists. So there's a trade-off between preventing fraud and inconveniencing consumers with false alarms that block their legitimate purchases.
So, how does an industry based on convenience minimize client dissatisfaction? Someone has to decide what level of risk is acceptable to balance possible fraud against potential customer defection. This requires judgment. Humans weigh the risk of fraudulent transactions, which result in a loss to the credit card association, against the risk of losing customers who suddenly can't use their card while on vacation. The calculus may also differ for high-net-worth clients versus occasional users. This requires human intervention that minimizes annoying contact points with the customer.
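The trade-off described above can be sketched as a simple expected-cost comparison. The probabilities and dollar figures below are illustrative assumptions, not real creditor data; in practice, the annoyance cost is exactly the kind of number a human has to judge:

```python
def should_decline(fraud_prob, txn_amount, annoyance_cost=15.0):
    """Decline a transaction only when the expected fraud loss
    outweighs the expected cost of blocking a legitimate customer.

    fraud_prob     -- the AI's predicted probability of fraud (0 to 1)
    txn_amount     -- dollars lost if a fraudulent charge goes through
    annoyance_cost -- a human judgment call: estimated cost of a false
                      alarm (support calls, churn risk); illustrative
    """
    expected_fraud_loss = fraud_prob * txn_amount
    expected_annoyance = (1 - fraud_prob) * annoyance_cost
    return expected_fraud_loss > expected_annoyance

# A $500 charge with a 10% fraud score gets declined (50 > 13.5)...
print(should_decline(0.10, 500))  # True
# ...while a $40 charge with the same score goes through (4 < 13.5).
print(should_decline(0.10, 40))   # False
```

The AI supplies `fraud_prob`; everything else in the decision rule, especially `annoyance_cost`, comes from human judgment about the business.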
So, powerful AI algorithms still require programmers and other human talent to make judgments on how AI technology is implemented.
What is Judgment?
In the credit card example, judgment balances the reward for preventing fraud by declining bad transactions against the risk of losing customers over inadvertently canceling good ones. Other scenarios are more complicated, without an obvious payoff. Experience helps here: it lets you draw on past outcomes when the payoff isn't clear in advance.
The benefit of hindsight improves our decision-making in business. When the payoff isn't obvious, we have to apply long-term thinking to make decisions that will only benefit the organization months or years in the future. Things can go wrong. Is a new initiative worth an uncertain payoff that doesn't immediately show up on the bottom line?
Humans are needed to make judgments and will specialize in the cost-benefit analysis of every aspect of the decision-making process. So, why can't AI calculate this just as effectively? There's a trade-off between optimizing profit and reducing inconvenience to customers. Programmers have to show the AI what an appropriate measure is, and this requires human judgment for different scenarios, as well as constant recalibration of the AI's data points. As automation advances, the demand for this skill set will increase rather than diminish.
How to Determine Appropriate Reward
Like people, AIs learn from experience. A typical AI technique is reinforcement learning. During this process, a computer is trained to repeat behavior that maximizes a numerical score, called a reward function. The trouble is that reward functions can be gamed. In one game, the AI found that going in circles maximized the points scored without following the normal rules. It didn't cheat, exactly, but it wasn't engaging in what a reasonable person would call fair play.
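A toy scoring function makes the loophole concrete. All of the rules and point values below are invented for illustration: the designer rewards checkpoints, intending to encourage finishing the race, but a policy that circles back to re-collect checkpoints scores higher than one that finishes:

```python
# Invented point values for a toy racing game.
CHECKPOINT_REWARD = 10
FINISH_REWARD = 50

def score(actions):
    """Total reward earned by a sequence of actions under the point rules."""
    total = 0
    for action in actions:
        if action == "checkpoint":
            total += CHECKPOINT_REWARD
        elif action == "finish":
            total += FINISH_REWARD
    return total

# Intended behavior: pass three checkpoints, then finish the race.
finish_run = ["checkpoint"] * 3 + ["finish"]

# Reward hack: loop forever, re-collecting checkpoints, never finishing.
circling_run = ["checkpoint"] * 12

print(score(finish_run))    # 80
print(score(circling_run))  # 120 -- circling beats finishing
```

Nothing in the reward function says "finish the race"; a human has to notice that the measured score and the intended goal have come apart.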
Here, the important factor is that most AI applications are given goals that don't align with the intangible ideas that govern an organization, including ethical and moral tenets that businesses value above bottom-line goals like profit and growth. For the foreseeable future, you and other human actors will continue to supply judgment to organizational decision-making to prevent untenable AI decisions based on over-emphasized rewards.
Tweaking AI decisions to meet the desired goals involves a deep understanding of how and why machines come to each decision. Prediction mistakes, left uncorrected, can render AI useless, and even become detrimental to the business process. Instead, pairing the marvelous tenacity of the machine mind with moral and intuitive human judgment results in the optimal application of automated learning.
Reward Function Engineering
As AI capability advances, programmers and other professionals are needed to work out how to employ this technology. Reward function engineering is a new field that designs the rewards attached to different actions, informed by AI predictions. To succeed in this field, you should understand the needs of an organization as well as the limits and capabilities of machine learning. This differs from training AI via human interaction. In fact, it is something completely new.
Reward function engineering sometimes requires programmers to reverse-engineer which actions should lead to hard-coded rewards. It considers whether a decision fits the organization's overall goals. Sometimes, hard-coding goals is impractical: there are too many potential outcomes, making it too complex or too costly for anyone to predict every payoff. AI can still predict outcomes and suggest possible rewards that validate the automation. Then, humans read the prediction and assess whether the payoff makes sense.
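One way to picture this division of labor: the AI supplies outcome probabilities, a human supplies the payoff each outcome is worth to the organization, and the expected value falls out mechanically. The outcomes, probabilities, and dollar payoffs below are hypothetical:

```python
def expected_payoff(outcome_probs, payoffs):
    """Combine AI-predicted probabilities with human-judged payoffs."""
    return sum(outcome_probs[o] * payoffs[o] for o in outcome_probs)

# Hypothetical AI prediction for launching a new initiative.
probs = {"succeeds": 0.6, "breaks_even": 0.3, "fails": 0.1}

# Human judgment: what each outcome is worth, reflecting long-term goals.
payoffs = {"succeeds": 100_000, "breaks_even": 0, "fails": -200_000}

# Roughly 40,000: 0.6 * 100k - 0.1 * 200k.
print(expected_payoff(probs, payoffs))
```

The machine cannot fill in the `payoffs` table on its own; deciding that a failure costs twice what a success earns (reputation, morale, opportunity cost) is exactly the judgment the article describes.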
This is how decision-making currently works in most organizations, including ones that don't use AI. Most of us perform reward function engineering based on human data compilation and analysis. Consider a non-business application. As parents, we teach our children morals and values, while trainers show new employees how a certain task is performed optimally. In the moment, different points may be emphasized or clarified based on an individual understanding of the instructions provided.
Human interaction doesn't separate prediction and judgment; they are performed simultaneously, so distinct reward function engineering isn't required. As machines improve their predictions, human judgment becomes even more critical. We use intuition and morality to prevent decisions that would be detrimental to our customers, our product, and our employees. AI might focus on the desired outcome to the exclusion of these other factors.
It's hard to know what other roles in decision-making demand human intuition and guidance. There is a fear factor in imagining a world of self-driving cars and deep-thinking machines whose predictions happen too quickly and are too complex for human comprehension. Certainly, machines will replace us in some capacity. However, the ability of AI to surpass our capabilities isn't so different from other innovations. Two hundred years ago, nobody imagined that carriages and pedestrian travel would be universally replaced by a machine. Cars and trucks outperform our physical abilities to traverse distances, but who would want it any other way?
In this way, AI predictions are also a tribute to human innovation that can and should change the way we live and do business.
AI Affects Both Employers and Employees
Whether you are looking for talent to hire or a job, ICS can help. We are here for your staffing needs, and we're always looking to match the right people with the right positions. Check out our job postings if you need to find a job in AI now, or click below to hire the talent you need to make AI a success in your company.