Artificial intelligence explains itself to humans. And it pays off

Microsoft Corp’s LinkedIn boosted subscription revenue by 8% after arming its sales team with artificial intelligence software that not only predicts clients at risk of cancellation, but also explains how it arrived at its conclusions. The system, introduced last July and to be described in a LinkedIn blog post on Wednesday, represents a major advance in getting AI to “show its work” in a useful way.

While AI scientists have no problem designing systems that make accurate predictions about all kinds of business outcomes, they are finding that to make those tools more effective for human operators, the AI may need to explain itself through another algorithm. The emerging field of “explainable AI,” or XAI, has spurred big investment in Silicon Valley, as startups and cloud giants compete to make opaque software more understandable, and has stoked debate in Washington and Brussels, where regulators want to ensure automated decision-making is done fairly and transparently.

AI technology can perpetuate societal biases such as those around race, gender and culture. Some AI scholars view explanations as an essential part of mitigating those problematic outcomes. US consumer regulators, including the Federal Trade Commission, have warned over the past two years that AI which cannot be explained could be investigated. The European Union could next year pass the Artificial Intelligence Act, a sweeping set of requirements including that users be able to interpret automated predictions.

Proponents of explainable AI say it has helped make AI more effective in fields such as healthcare and sales. Google Cloud sells explainable AI services that, for example, tell clients trying to sharpen their systems which pixels, and soon which training examples, mattered most in predicting the subject of an image.

But critics say explanations of why an AI predicted what it did are too unreliable because the technology for interpreting the machines is not yet good enough; in precision, and in making explanations actionable for users, there is still room for improvement. Yet after two years of trial and error in a relatively low-stakes application, LinkedIn says its technology has yielded practical value.

The proof is the 8% increase in renewal bookings during the current fiscal year, above normally expected growth. LinkedIn declined to quantify the benefit in dollars, but described it as substantial. Before, LinkedIn salespeople relied on their intuition and some intermittent automated alerts about clients’ adoption of its services.

Now, the AI quickly handles the research and analysis. Dubbed CrystalCandle by LinkedIn, it calls out unnoticed trends, and its reasoning helps salespeople fine-tune their tactics to keep at-risk customers on board and pitch others on upgrades. LinkedIn says explanation-based recommendations have expanded to more than 5,000 of its sales employees, spanning its recruiting, advertising, marketing and education offerings.

“It has helped experienced salespeople by arming them with specific insights to navigate conversations with prospects. It has also helped new salespeople dive into the market right away,” said Parvez Ahammad, LinkedIn’s director of machine learning and head of data science applied research.

To explain or not to explain?

In 2020, LinkedIn had first provided predictions without explanations. A score with about 80% accuracy indicated the likelihood that a client soon due for renewal would upgrade, hold steady or cancel. Salespeople were not fully won over. The team selling LinkedIn’s Talent Solutions recruiting and hiring software was unclear on how to adapt its strategy, especially when the odds of a client not renewing were no better than a coin toss.
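To illustrate the kind of propensity score described above, here is a minimal, hypothetical sketch of a renewal/churn classifier. The feature names, the synthetic data and the model choice are all invented for the example; this is not LinkedIn’s actual system.

```python
# Hypothetical sketch of a renewal-propensity score (not LinkedIn's model).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# Invented per-account signals a team might track:
X = np.column_stack([
    rng.normal(0, 1, n),  # seat growth, year over year
    rng.normal(0, 1, n),  # change in candidate response rate
    rng.normal(0, 1, n),  # product-usage health trend
])
# Synthetic label: accounts with stronger signals are more likely to renew.
y = (X.sum(axis=1) + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# The "score" shown to salespeople would be a renewal probability per account.
renewal_prob = model.predict_proba(X_te)[:, 1]
print("holdout accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 2))
print("sample renewal probabilities:", renewal_prob[:3].round(2))
```

A score like this, on its own, is exactly the situation described above: a number between 0 and 1 with no indication of what is driving it.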

Last July, they began seeing a short, automatically generated paragraph highlighting the factors influencing the score. For example, the AI decided a client was likely to upgrade because it had grown by 240 workers over the past year and candidates had become 146% more responsive in the past month. In addition, an index measuring the client’s overall success with LinkedIn recruiting tools had surged 25% in the past three months.
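To make concrete how such an auto-generated explanation can be assembled, here is a small, hypothetical sketch that pairs a prediction with its top contributing factors. It uses a plain logistic regression, where coefficient times feature value is an exact per-prediction contribution to the log-odds; CrystalCandle’s actual modeling and wording pipeline is not public, so the feature names, data and phrasing here are invented for illustration.

```python
# Hypothetical sketch: turning per-account attributions into a short explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["seat_growth_yoy", "candidate_response_change", "usage_health_trend"]

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (500, 3))
y = (X @ np.array([1.0, 0.8, 0.6]) + rng.normal(0, 1, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_account(x):
    # For a linear model, coef * feature value is an exact contribution to the
    # log-odds of this prediction, so the attribution can be checked directly.
    prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = model.coef_[0] * x
    order = np.argsort(-np.abs(contributions))
    top = ", ".join(
        f"{feature_names[i]} ({'raises' if contributions[i] > 0 else 'lowers'} the score)"
        for i in order[:2]
    )
    return f"Renewal likelihood {prob:.0%}; main factors: {top}."

print(explain_account(X[0]))
```

A nonlinear model like the gradient-boosted classifier above would need a model-agnostic attribution method (for example, Shapley-value estimates) to fill the same role before the text is generated.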

Based on the explanations, sales representatives now direct clients to training, support and services that improve their experience and keep them spending, said Lekha Doshi, LinkedIn’s vice president of global operations.

But some AI experts question whether explanations are necessary. Researchers say they could even do harm, engendering a false sense of security in AI or prompting design sacrifices that make predictions less accurate.

People use products such as Tylenol and Google Maps whose inner workings are not precisely understood, said Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered Artificial Intelligence. In such cases, rigorous testing and monitoring have dispelled most doubts about their efficacy. Similarly, AI systems overall could be deemed fair even if individual decisions are inscrutable, said Daniel Roy, associate professor of statistics at the University of Toronto.

LinkedIn argues that an algorithm’s integrity cannot be evaluated without understanding how it thinks. It also maintains that tools like CrystalCandle could help AI users in other fields: doctors could learn why an AI predicts someone is more at risk of a disease, or people could be told why an AI recommended they be denied a credit card. The hope is that explanations reveal whether a system aligns with the concepts and values one wants to promote, said Been Kim, an artificial intelligence researcher at Google. “My view is that interpretability ultimately enables a conversation between machines and humans,” she said. “If we really want to enable human-machine collaboration, we need that.”
