
How CXOs can navigate this roadmap for responsible AI

Feb 11, 2022 | Hi-network.com

Recently, the Business Roundtable, an influential group of CEOs of major US companies, published a Roadmap for Responsible Artificial Intelligence. While many companies are already thinking about responsible AI due to market forces such as the impending Artificial Intelligence Act in Europe and the demands of values-based consumers, this announcement will elevate the conversation to the C-suite. 

Some of the principles are refreshingly prescriptive, such as "innovate with and for diversity." Others, such as "mitigate the potential for unfair bias," are too vague or incomplete to be useful. For tech and business leaders interested in adopting any or all of these principles, the devil is in the details. Here's our brief take on each principle: 

  1. Innovate with and for diversity. When the folks conceiving of and developing an AI system all resemble each other, there are bound to be significant blind spots. Hiring diverse teams to develop, deploy, monitor, and use AI helps to eradicate these blind spots and is something we at Forrester have been recommending since our first report on the ethics of AI in 2018. 
  2. Mitigate the potential for unfair bias. There are over 20 different mathematical representations of fairness, and selecting the right one depends on your strategy, use case, and corporate values. In other words, fairness is in the AI of the beholder. (The first sketch after this list shows how two common fairness metrics can disagree.) 
  3. Design for and implement transparency, explainability, and interpretability. There are many different flavors of explainable AI (XAI): transparency relies on fully transparent "glass box" algorithms, while interpretability relies on techniques that explain how an opaque system, such as a deep neural network, arrives at its outputs. (The second sketch after this list contrasts the two approaches.) 
  4. Invest in a future-ready AI workforce. AI is more likely to transform most people's jobs than eliminate them, yet most employees aren't ready. They lack the skills, inclinations, and trust to embrace AI. Investing in the robotics quotient, a measure of readiness, can prepare employees for working side by side with AI. 
  5. Evaluate and monitor model fitness and impact. The pandemic was a real-world lesson for companies in the danger of data drift. Companies need to embrace machine learning operations (MLOps) to monitor AI for continued performance and consider crowdsourcing bias identification with bias bounties. (The third sketch after this list shows a simple drift check.) 
  6. Manage data collection and data use responsibly. While the Business Roundtable framework emphasizes data quality and accuracy, it overlooks privacy. Understanding the relationship between AI and personal data is crucial for the responsible management of AI. 
  7. Design and deploy secure AI systems. There is no secure AI without robust cybersecurity and privacy practices. 
  8. Encourage a companywide culture of responsible AI. Some firms are beginning to take a top-down approach to fostering a culture of responsible AI by appointing a chief trust officer or chief ethics officer. We expect to see more of these appointments in the coming year. 
  9. Adapt existing governance structures to account for AI. Ambient data governance, a strategy to infuse data governance into everyday data interactions and intelligently adapt data to personal intent, is ideally suited for AI. Map your data governance efforts in the context of AI governance. 
  10. Operationalize AI governance throughout the whole organization. In many organizations, governance has become a dirty word. That's not only unfortunate but also quite dangerous. Learn how to overcome governance fatigue. 
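
To make the fairness point concrete, here is a minimal Python sketch of ours (not part of the Business Roundtable roadmap) that computes two of the many competing fairness definitions, demographic parity and equal opportunity, on entirely hypothetical scoring data. The predictions, labels, and group assignments below are illustrative assumptions, not real results.

```python
# Illustrative only: two of the many mathematical definitions of fairness,
# computed from hypothetical predictions, labels, and a protected attribute.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates (recall) between the two groups."""
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])

# Hypothetical scoring results for ten applicants in two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print("Demographic parity gap:", demographic_parity_diff(y_pred, group))
print("Equal opportunity gap: ", equal_opportunity_diff(y_true, y_pred, group))
```

The same hypothetical model can look acceptable by one definition and problematic by another, which is exactly why the choice of metric has to follow from strategy, use case, and values.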
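
The distinction between transparency and interpretability can also be shown in a few lines. The sketch below assumes scikit-learn is available and uses synthetic data: it trains a shallow decision tree whose rules can be read directly (transparency by construction), then applies permutation importance to an opaque gradient-boosted model (post-hoc interpretability). It is an illustration of the two approaches, not a recommendation of specific tooling.

```python
# Illustrative only: a "glass box" model that is transparent by construction
# versus a post-hoc interpretability technique applied to an opaque model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# Transparency: a shallow decision tree can be read directly as rules.
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(glass_box, feature_names=feature_names))

# Interpretability: explain an opaque model after the fact by measuring how
# much shuffling each feature degrades its performance.
opaque = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(opaque, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```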
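
Finally, the drift monitoring that MLOps pipelines automate can be reduced to a simple statistical comparison. This hypothetical check, which assumes SciPy is available and uses made-up income data, compares a feature's training-time distribution with recent production values using a two-sample Kolmogorov-Smirnov test; a real pipeline would run such checks per feature on a schedule and route alerts into review and retraining workflows.

```python
# Illustrative only: a simple drift check that an MLOps pipeline might run,
# comparing a feature's training distribution against recent production data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature values captured at training time...
training_income = rng.normal(loc=60_000, scale=15_000, size=5_000)
# ...and the same feature observed in production after conditions changed.
production_income = rng.normal(loc=48_000, scale=18_000, size=1_000)

result = ks_2samp(training_income, production_income)
DRIFT_ALERT_THRESHOLD = 0.01  # significance level; tune to your risk tolerance

if result.pvalue < DRIFT_ALERT_THRESHOLD:
    print(f"Data drift detected (KS statistic={result.statistic:.3f}). Review or retrain.")
else:
    print("No significant drift detected for this feature.")
```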

What's Missing 

As robust and well-meaning as the Business Roundtable's roadmap is, it's missing two critical elements that companies must embrace to adopt AI responsibly: 

  • Mitigate third-party risk through rigorous due diligence. Most companies are adopting AI in partnership with third parties, either by buying third-party AI solutions or by developing their own solutions using AI building blocks from third parties. In either case, third-party risk is real and needs to be mitigated. Our report, AI Aspirants: Caveat Emptor, explains best practices for reducing third-party risk in the complex AI supply chain. 

  • Test AI to diminish risk and to increase business value. AI-infused software introduces uncertainty that necessitates extra testing of the interactions between the various models and the surrounding software. Forrester has developed a test strategy framework that is based on business risk and suggests the level and type of testing needed. 

The emphasis on responsible AI is not going away anytime soon. Companies that invest in people, processes, and technologies to ensure ethical and responsible adoption of AI will future-proof their businesses against regulatory or reputational disruption. 

This post was written by VP, Principal Analyst Brandon Purcell, and it originally appeared here.

Hot Tags: Artificial Intelligence, Innovation
