The concrete risks of AI misdeployment

I often talk about how a mismanaged transition to the AI economy may be a societal-scale risk.

However, we cannot treat this as an abstract issue for governments alone to solve. As business owners, we face similarly concrete risks to our clients, employees, and bottom line as we deploy AI.

Let me explain how.

Misdeployment

Sometimes launching an AI product goes wrong.

We call this a ‘misdeployment’: an AI product that seems promising during internal tests but fails to perform in the real world due to unforeseen complexity.

  • Zillow losing roughly $880M after launching its algorithmic home-buying program, Zillow Offers.
  • McDonald’s abandoning its AI drive-thru ordering system despite a multi-year, costly development effort.
  • Bing Chat’s erratic, uncontrolled responses damaging Microsoft’s brand reputation.
  • Google’s Bard chatbot giving an incorrect answer in a promotional demo, wiping roughly $100 billion off Alphabet’s market value.

As you can see, misdeployment risks can cost millions, if not billions, of dollars. And as AI evolves, so will the types of risks we encounter when releasing new technologies.

For example, a 14-year-old user of the chatbot service ‘character.ai’, who had formed a relationship with a fictional AI persona, may have died as a result of those conversations, an incredibly tragic and heartbreaking outcome no matter the underlying cause.

This is a serious topic. As we deploy these complex new interfaces and products, we don’t just have an obligation to our investors; we have a deep responsibility to our users, our employees, and the people who may bear the negative externalities of our design decisions.

Avoiding misdeployment

Fortunately, due to the diligent work of researchers in AI safety, a new industry is emerging to tackle this problem: AI assurance.

The budding field includes companies specializing in securing and controlling AI agents, designing and running pre-deployment tests, and safeguarding systems against unintended or misaligned AI actions.

So as you decide how to deploy your next AI product, whether internally or externally, you might first:

  • Source your AI solutions, such as robots, from companies that prioritize assurance and safety to mitigate potential workplace hazards.
  • Consult with specialists to control intelligent agents and ensure they stay on script during customer interactions.
  • Co-develop verification algorithms that support your agents’ conclusions, reducing legal liability in areas like accounting.
  • Provide training that helps employees recognize when autonomous agents are disrupting systems and know how to respond when they do.
  • Ensure high quality data annotation to train your agents properly, preventing potential harm to users.
  • Use monitoring platforms to verify interactions with your agents, reducing critical mistakes and helping you understand their behavior (a minimal sketch of such a check follows this list).
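
To make the ‘stay on script’ and monitoring points concrete, here is a minimal, hypothetical sketch of what a response guardrail with interaction logging could look like. The blocked-pattern policy, function names, and log format are my own illustrative assumptions, not the API of any particular assurance vendor; real platforms apply far more sophisticated checks.

```python
# Minimal sketch of a "stay on script" guardrail with interaction logging.
# The blocked-pattern policy, function names, and log format below are
# hypothetical illustrations, not any specific assurance vendor's API.

import re
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str

# Assumed policy: topics a customer-facing agent must never speak to on its own.
BLOCKED_PATTERNS = [
    r"\b(legal advice|medical diagnosis)\b",
    r"\brefund guaranteed\b",
]

def check_response(agent_reply: str) -> GuardrailResult:
    """Flag replies that drift outside the approved script before they reach a user."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, agent_reply, flags=re.IGNORECASE):
            return GuardrailResult(allowed=False, reason=f"matched blocked pattern: {pattern}")
    return GuardrailResult(allowed=True, reason="passed basic checks")

def log_interaction(user_msg: str, agent_reply: str, result: GuardrailResult) -> dict:
    """Record every exchange so off-script behavior can be audited and measured over time."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_msg,
        "agent": agent_reply,
        "allowed": result.allowed,
        "reason": result.reason,
    }

if __name__ == "__main__":
    reply = "Don't worry, a refund is guaranteed no matter what you ordered."
    result = check_response(reply)
    print(log_interaction("Can I get my money back?", reply, result))
    # A blocked reply would be routed to a human agent instead of the customer.
```

Even a simple check like this, run on every reply before it reaches a customer, gives you an audit trail you can review when something goes wrong.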

And even this may not be enough once we pass the 2030 mark. As autonomous machine intelligence transforms nearly every industry, we all have a responsibility to ensure this transition happens safely and to everyone’s benefit.
