

ServiceNow's 4 key AI principles and why they matter to your business

Jul 11, 2024 | Hi-network.com

Amy Lokey, chief experience officer at ServiceNow.

Image: ServiceNow

ServiceNow is a $9 billion platform-as-a-service provider. Just about 20 years old, the Santa Clara, Calif.-based company focused initially on IT service management, a strategic approach to managing and delivering IT services within an organization based on business goals.

Over the years, it's become a full enterprise cloud platform, with a wide range of IT, operations, business management, HR, and customer service offerings. More recently, it has fully embraced AI, rebranding itself with the tagline, "Put AI to work with ServiceNow."

Also: Generative AI can transform customer experiences. But only if you focus on other areas first

In May, ServiceNow announced a suite of generative AI capabilities tailored to enterprise management. As with most large-scale AI implementations, there are a lot of questions and opportunities that arise from widespread AI deployment.

I had the opportunity to speak with Amy Lokey, chief experience officer at ServiceNow. Prior to her role at ServiceNow, Lokey served as VP for user experience -- first at Google and then at LinkedIn. She was also a user experience designer at Yahoo!

Let's get started.

Q: Please introduce yourself and explain your role as chief experience officer at ServiceNow.

Amy Lokey: I have one of the most rewarding roles at ServiceNow. I lead the global Experience team. We focus on making ServiceNow simple, intuitive, and engaging to use.

Using enterprise software at work should be as elegant as any consumer experience, so my team includes experts in design, research, product documentation, and strategic operations. Our mission is to create product experiences that people love, making their work easier, more productive, and even delightful.

Q: What are the primary responsibilities of the chief experience officer, and how do they intersect with AI initiatives at ServiceNow?

AL: The title, chief experience officer, is relatively new at ServiceNow. When I joined almost five years ago, we were in the early phases of our experience journey. Our platform has been making work work better for 15 years.

My job was to make the user experience match the power of the product. This approach is key to our business strategy. ServiceNow is an experience layer that can help users manage work and complete tasks across other enterprise applications. We can simplify how people do their work, and to do that, we need to be user-experience-driven in our approach and what we deliver for our customers.

Also: 6 ways AI can help launch your next business venture

Today, a critical part of my role is to work with our Product and Engineering teams to make sure that generative AI, embedded in the ServiceNow platform, unlocks new standards of usefulness and self-service. For example, enabling customer service agents to summarize case notes. This seemingly simple feature is helping cut our own agents' case resolution time in half.
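To make that example concrete, here is a minimal, hypothetical sketch of what case-note summarization can look like in code. None of this is ServiceNow's actual API; `call_llm` and the sample notes are invented stand-ins.

```python
# Hypothetical sketch of case-note summarization (not ServiceNow's API).
# `call_llm` is a placeholder for a real model call.
def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; returns a canned summary for illustration."""
    return "Summary: VPN drops began after the 7.2 client update; rollback to 7.1 resolved the issue."

def summarize_case(notes: list[str]) -> str:
    """Fold raw case notes into a prompt and ask the model for a short summary."""
    prompt = (
        "Summarize these IT case notes in two sentences for the next agent:\n"
        + "\n".join(f"- {note}" for note in notes)
    )
    return call_llm(prompt)

print(summarize_case([
    "User reports VPN disconnects every 10 minutes.",
    "Issue started after the client was updated to 7.2.",
    "Rolled back to 7.1; connection stable for 24 hours.",
]))
```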

That's what makes AI experiences truly magical: making people more productive, so they can do the work that's meaningful, rather than mundane.

Q: Can you elaborate on ServiceNow's approach to developing AI ethically, focusing on human-centricity, inclusivity, transparency, and accountability?

AL: These principles are at the heart of everything we do, ensuring that our AI solutions genuinely enhance people's work experiences in meaningful ways.

First and foremost, we place people at the center of AI development. This includes a "human-in-the-loop" process that allows users to evaluate and adjust what AI suggests, to ensure it meets their specific needs. We closely monitor usefulness through in-product feedback mechanisms and ongoing user experience research, allowing us to continuously refine and enhance our products to meet the needs of the people who use them.
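As an illustration of the "human-in-the-loop" idea, here is a minimal sketch with hypothetical names, in which an AI suggestion is applied only after a person reviews or edits it, and the decision is logged as feedback. It is not ServiceNow code, just a sketch of the pattern.

```python
# Minimal human-in-the-loop sketch (hypothetical names, not ServiceNow code):
# an AI draft is applied only after a person approves or edits it, and the
# outcome is recorded so the feature's usefulness can be evaluated later.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Suggestion:
    draft: str                      # AI-generated draft text
    accepted: Optional[bool] = None
    final_text: Optional[str] = None

@dataclass
class FeedbackLog:
    entries: list = field(default_factory=list)

def review(suggestion: Suggestion, approve: bool, log: FeedbackLog,
           edit: Optional[str] = None) -> Optional[str]:
    """Apply the suggestion only if a human approves it; log the decision."""
    suggestion.accepted = approve
    suggestion.final_text = (edit or suggestion.draft) if approve else None
    log.entries.append(suggestion)
    return suggestion.final_text

# Example: an agent tweaks the AI summary before accepting it.
log = FeedbackLog()
draft = Suggestion(draft="Customer reports VPN drops after the 7.2 client update.")
print(review(draft, approve=True, log=log,
             edit="VPN drops after 7.2 update; rollback fixed it."))
```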

Inclusivity is also essential, and it speaks directly to ServiceNow's core value of "celebrate diversity; create belonging." Our AI models are most often domain-specific: trained and tested to reflect and accommodate the incredible number of people who use our platform and the main use cases for ServiceNow.

Also: We need bold minds to challenge AI, not lazy prompt writers, bank CIO says

With a customer base of more than 8,100 enterprises, we also leverage diverse datasets to reduce the risk of bias in AI. All of this is underscored by our broad-based, customer-supported AI research and design program that puts, and keeps, inclusivity at the forefront of all our product experiences.

Transparency builds trust. We intentionally create product documentation that is both comprehensive and clear. Generative AI is built directly into the Now Platform, and we want customers to know how it works and understand that they're in control.

When designing our product experiences, we make it clear where Now Assist GenAI is available and allow people to decide when and how they use it. Our recently published Responsible AI Guidelines handbook is a testament to this commitment, offering resources to help customers evaluate their AI use and ensure it remains ethical and trustworthy.
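A rough, hypothetical sketch of that kind of control is a per-user setting that records which generative AI features exist and lets people switch them off. The feature names below are invented for illustration; this is not how Now Assist is actually configured.

```python
# Hypothetical per-user generative AI controls (feature names are invented).
class GenAISettings:
    def __init__(self, enabled: bool = True, disabled_features=None):
        self.enabled = enabled
        self.disabled_features = set(disabled_features or [])

    def allows(self, feature: str) -> bool:
        """A feature runs only if gen AI is on and the user hasn't opted out."""
        return self.enabled and feature not in self.disabled_features

settings = GenAISettings(enabled=True, disabled_features={"chat_reply_drafts"})
for feature in ("case_summarization", "chat_reply_drafts"):
    state = "available" if settings.allows(feature) else "switched off by user"
    print(f"{feature}: {state}")
```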

Lastly, accountability is the cornerstone of our AI experiences. We take our responsibilities regarding AI seriously and have adopted an oversight structure for governance. We collaborate with external experts and the broader AI community to help refine and pressure-test our approach. We also have an internal Data Ethics Committee and Governance Council that reviews the use cases for the technology.

Q: In what ways does ServiceNow ensure inclusivity in its AI development process?

AL: While AI has tremendous potential to make the world a better, more inclusive place, this is only possible if inclusivity is considered intentionally as part of the AI strategy from the start. Not only do we follow this principle, but we also continually review and refine our AI model datasets during development to make sure that they reflect the diversity of our customers and their end users.

While we offer customers a choice of models, our primary AI model strategy is domain-specific. We train smaller models on specific data sets, which helps weed out bias, significantly reduces hallucinations, and improves overall accuracy compared to general-purpose models.

Q: What measures does ServiceNow take to maintain transparency in its AI projects?

AL: We take a very hands-on approach to promoting open-science, open-source, open-governance AI development. For example, we've partnered with leading research organizations that are working on some of the world's biggest AI initiatives. This includes our work with Nvidia and Hugging Face to launch StarCoder2, a group of LLMs with open development that can be customized by organizations as they see fit.

We're also founding members of the AI Alliance, which includes members across academia, research, science, and industry, all of whom are dedicated to advancing AI that is open and responsible. Additionally, we have internally invested in AI research and development. Our Research team has published more than 70 studies on generative AI and LLMs, which have informed the work our Product Development team and Data Ethics Committee are doing.

Also: Generative AI is new attack vector endangering enterprises, says CrowdStrike CTO

On a day-to-day basis, transparency comes down to communication. When we think about how we communicate about AI with customers and their end users, we over-communicate both the limits and the intended usage of AI solutions to give them the best, most accurate picture of the tools we provide.

These mechanisms include the model cards we've created, which are updated with each scheduled release and explain every AI model's specific context, training data, risks, and limitations.

Sample model card.

Image: ServiceNow
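As a rough illustration of the kind of information a model card carries, here is a minimal sketch. The schema, model name, and values are hypothetical, not ServiceNow's actual format.

```python
# Illustrative model card structure (hypothetical schema and values).
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    release: str
    intended_context: str
    training_data: str
    known_risks: list
    limitations: list

card = ModelCard(
    model_name="case-summarization-llm",   # invented name
    release="2024.Q2",
    intended_context="Summarize IT case notes for service agents.",
    training_data="De-identified, domain-specific service-management text.",
    known_risks=["May omit details from very long case histories."],
    limitations=["English-only in this sketch.", "Not for legal or medical advice."],
)
print(json.dumps(asdict(card), indent=2))
```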

We also build trust by labeling responses that were provided by LLMs in the UI so that users know that they were AI-generated and by citing sources so customers can understand how the LLM came to that conclusion or found information.
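A minimal sketch of that labeling-and-citation pattern might look like the following; the field names and the knowledge-base identifier are hypothetical, not ServiceNow's UI model.

```python
# Sketch of labeling AI output and citing its sources (hypothetical fields).
from dataclasses import dataclass, field

@dataclass
class Citation:
    source_id: str   # e.g., a knowledge-base article ID (invented here)
    excerpt: str

@dataclass
class AssistantAnswer:
    text: str
    ai_generated: bool = True
    citations: list = field(default_factory=list)

    def render(self) -> str:
        label = "[AI-generated] " if self.ai_generated else ""
        sources = ", ".join(c.source_id for c in self.citations) or "none"
        return f"{label}{self.text}\nSources: {sources}"

answer = AssistantAnswer(
    text="Resetting the VPN profile resolves most post-update connection drops.",
    citations=[Citation("KB0012345", "Reset the VPN profile after upgrading the client.")],
)
print(answer.render())
```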

Q: Can you provide examples of how ServiceNow's Responsible AI Guidelines have been implemented in recent projects?

AL: Our Responsible AI Guidelines handbook serves as a practical tool to foster deeper, critical conversations between our customers and their cross-functional teams.

We applied our guidelines to Now Assist, our generative AI experience. Our Design team uses them as a north star to ensure that our AI innovations are human-centric. For example, when designing generative AI summarization, they referenced these principles and created acceptance criteria based on them. Additionally, to reinforce our core principle of transparency, we are also publishing model cards for all Now Assist capabilities.

Also: The ethics of generative AI: How we can harness this powerful technology

We have also developed an extensive AI product experience pattern and standards library that adheres to the guidelines and includes guidance on things like generative AI experience patterns, AI predictions, feedback mechanisms to support human feedback, toxicity handling, prompting, and more.
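To give a flavor of what "toxicity handling" can mean in such a pattern library, here is a deliberately simplified sketch. A real system would use a trained classifier rather than a keyword list, and nothing here reflects ServiceNow's implementation.

```python
# Deliberately simplified toxicity-handling pattern (illustrative only).
def looks_toxic(text: str, blocklist=("idiot", "stupid")) -> bool:
    """Toy stand-in for a real toxicity classifier."""
    lowered = text.lower()
    return any(term in lowered for term in blocklist)

def present_output(generated: str) -> str:
    """Show generated text only if it passes the safety check."""
    if looks_toxic(generated):
        return "This response was withheld by a safety check; a human agent will follow up."
    return generated

print(present_output("Please restart the router and retry the connection."))
print(present_output("That was a stupid mistake."))
```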

During our product experience reviews, we use the guidelines to ask our teams critical audit questions to ensure our AI-driven experiences are beneficial and operate responsibly and ethically for our customers. Multiple teams at ServiceNow have used the guidelines as reference for policies and other work. For example, the core value pillars of our guidelines play an important role in our ongoing AI governance development processes.

Our Research team references specific guidelines within the handbook to formulate research questions, offer recommendations to product teams, and provide valuable resources that inform product design and development, all while advocating for human-centered AI.

Most importantly, we recognize these guidelines are a living resource and we are actively engaging with our customers to gather feedback, allowing us to iterate and evolve our guidelines continually. This collaborative approach ensures our guidelines remain relevant and effective in promoting responsible AI practices.

Q: What steps does ServiceNow take to help customers understand and use AI responsibly and effectively? How does ServiceNow ensure that its AI solutions align with the ethical standards and values of its customers?

AL: Simply put, we build software we know our customers can use. We talk with customers across a range of industries, and we run ServiceNow on ServiceNow. We are confident that we and our customers have what's needed in the Now Platform to meet internal and external requirements.

We build models to meet specific use cases and know what we're solving for, all aligned to our responsible AI practices. Because we're a platform, customers don't have to piece together individual solutions. Customers leverage the comprehensive resources we've created for responsible AI right out of the box.

Also: How Deloitte navigates ethics in the AI-driven workforce: Involve everyone

Q: What challenges do companies face when communicating their use of AI to customers and partners, and how can they overcome these challenges?

AL: One of the biggest challenges companies face is misunderstanding. There is a lot of fear around AI, but at the end of the day, it's a tool like anything else. The key to communicating about the use of AI is to be transparent and direct.

At ServiceNow, we articulate both the potential and the limits of AI in our products to our customers from the start. This kind of open, honest dialogue goes a long way toward overcoming concerns and setting expectations.

Q: How can businesses balance the benefits of AI with the need to maintain stakeholder trust?

AL: For AI to be trusted, it needs to be helpful. Showing stakeholders, whether they're an employee, a customer, a partner, or anyone in between, how AI can be used to improve their experiences is absolutely critical to driving both trust and adoption.

Also: AI leaders urged to integrate local data models for diversity's sake

Q: How can companies ensure that their AI initiatives are inclusive and benefit a diverse range of users?

AL: The importance of engaging a diverse team simply can't be overstated. The use of AI has implications for everyone, which means everyone needs a seat at the table. Every company implementing AI should prioritize communicating and taking feedback from any audience that the solution will impact. AI doesn't work in a silo, so it shouldn't be developed inside one either!

At ServiceNow, we lead by example and take care to make sure that our teams who develop AI solutions are diverse, representing a wide range of people and viewpoints. For instance, we have an Employee Accessibility Panel that helps validate and test new features early in the development process so that they work well for those with different abilities.

Q: What are some best practices for companies looking to develop and deploy AI responsibly?

AL: Ultimately, companies should be thoughtful and strategic about when, where, and how to use AI. Here are three key considerations to help them do so:

  1. Incorporate human expertise and feedback: Practices such as user experience research should be done throughout the process of developing and deploying AI, and should continue after deployment. That way, companies can better ensure that AI use cases are always focused on making work better for human beings.
  2. Give more controls to users: This can include allowing users to review AI-generated outputs before accepting them, or to turn off generative AI capabilities within products. This helps maintain transparency and gives users control over how they want to interact with AI.
  3. Make sure documentation is clear: Whether it's model cards that explain each specific AI model's context, or labeling AI-generated outputs, it's important that end users are aware of when they are interacting with AI and the context behind the technology.

Q: What are the long-term goals for AI development at ServiceNow, and how do they align with ethical considerations?

AL: The beauty of the Now Platform is that our customers have a one-stop shop where they can apply generative AI to every critical business function, which drives tangible outcomes. Generative AI has moved from experimentation to implementation. Our customers are already using it to drive productivity and cost efficiency.

Also: Master AI with no tech skills? Why complex systems demand diverse learning

Our focus is on how we improve day-to-day work for customers and end users by helping them to work smarter, faster, and better. AI augments the work we already do. We're deeply committed to advancing its use responsibly.

That responsible approach is central to how we design our products, and we're committed to helping our customers take advantage of AI responsibly as well.

Q: What advice would you give to other companies looking to advance AI responsibly?

AL: Responsible AI development shouldn't be a one-time check box, but an ongoing, long-term priority. As AI continues to evolve, companies should be nimble and ready to adapt to new challenges and questions from stakeholders without losing sight of the four key principles:

  1. Build AI with humans at the core.
  2. Prioritize inclusivity.
  3. Be transparent.
  4. Remain accountable across your customers, employees, and humanity writ large.

Final thoughts

The editors and I would like to give a huge shoutout to Amy for taking the time to engage in this interview. There's a lot of food for thought here. Thank you, Amy!

What do you think? Did Amy's recommendations give you any ideas about how to deploy and scale AI responsibly within your organization? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

Hot Tags: Innovation
