
Human oversight key to keeping AI honest

Jun 29, 2023 | Hi-network.com
Image: Yuichiro Chino/Getty Images

Artificial intelligence (AI) still lacks key traits needed to decipher information, and should not be left on its own to decide what content people read. 

Human oversight and guardrails are critical to ensure the right content is pushed to users, said Arjun Narayan, head of trust, safety, and customer experience at SmartNews. 

Also: Six skills you need to become an AI prompt engineer

The news aggregator platform curates articles from 3,000 news sources worldwide, and its users spend an average of 23 minutes a day on its app. Available on Android and iOS, the app has clocked more than 50 million downloads. Headquartered in Tokyo, SmartNews has teams in Japan and the US comprising linguists, analysts, and policymakers. 

The company's stated mission, amid the vast amount of information now available online, is to push news that is reliable and relevant to its users. "News should be trustworthy. Our algorithms evaluate millions of articles, signals, and human interactions to deliver the top 0.01% of stories that matter most, right now," SmartNews pitches on its website. 

The platform uses machine learning and natural language processing technologies to identify and prioritize news that users want. It has metrics to assess the trustworthiness and accuracy of news sources. 
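
SmartNews does not disclose how these trust metrics are computed. As a rough mental model, source-reliability scoring of this kind typically blends several signals into a single number. The Python sketch below is purely illustrative; the signal names, weights, and example values are assumptions, not SmartNews' actual formula.

```python
from dataclasses import dataclass

@dataclass
class SourceSignals:
    # Hypothetical per-source signals; the real inputs are not public.
    correction_rate: float  # fraction of articles later corrected (lower is better)
    citation_score: float   # 0-1: how often claims link to identifiable sources
    report_rate: float      # normalized user-report rate (lower is better)

def trust_score(s: SourceSignals) -> float:
    """Blend signals into a 0-1 trustworthiness score (illustrative weights)."""
    score = (0.4 * (1.0 - s.correction_rate)
             + 0.4 * s.citation_score
             + 0.2 * (1.0 - s.report_rate))
    return max(0.0, min(1.0, score))

print(round(trust_score(SourceSignals(0.02, 0.85, 0.05)), 3))  # 0.922
```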

Also: Mass adoption of generative AI tools is derailing one very important factor, says MIT

This is critical as information is increasingly consumed through social media, where veracity can be questionable, Narayan said. 

Its proprietary AI engine powers a news feed that is tailored based on users' personal preferences, such as topics they follow. It also uses various machine learning systems to analyze and evaluate articles that have been indexed to determine if the content is compliant with the company's policies. Non-compliant sources are filtered out, he said. 
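
Narayan did not detail how this filtering works internally. One simple way to picture it is as a pipeline that checks each indexed article against policy rules and drops those that fail, as in the sketch below; the denylist, threshold, and field names are hypothetical.

```python
# Hypothetical policy filter over indexed articles; not SmartNews' actual rules.
BLOCKED_SOURCES = {"example-clickbait.net"}  # assumed denylist of non-compliant sources
MIN_TRUST = 0.6                              # assumed minimum trust score

def is_compliant(article: dict) -> bool:
    """Return True if the article passes the (illustrative) policy checks."""
    if article["source"] in BLOCKED_SOURCES:
        return False
    return article["trust_score"] >= MIN_TRUST

indexed = [
    {"title": "Budget bill passes", "source": "example-news.com", "trust_score": 0.9},
    {"title": "You won't believe this", "source": "example-clickbait.net", "trust_score": 0.7},
]
feed = [a for a in indexed if is_compliant(a)]
print([a["title"] for a in feed])  # ['Budget bill passes']
```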

Because customer support reports to his team, he added that user feedback can be quickly reviewed and incorporated where relevant. 

Like many others, the company currently is looking at generative AI and assessing how best to use the emerging technology to further enhance content discovery and search. Narayan declined to provide details on what these new features might be. 

He did stress, though, the importance of retaining human oversight over AI, which he said still falls short in some areas. 

Also: If you use AI-generated code, what's your liability exposure? 

Large language models, for instance, are not efficient at processing breaking or topical news, but are more accurate and reliable when used to analyze evergreen content, such as DIY or how-to articles.

These AI models also do well in summarizing large chunks of content and supporting some functions, such as augmenting content distribution, he noted. His team is evaluating the effectiveness of using large language models to determine if certain pieces of content meet the company's editorial policies. "It's still nascent and early days," he said. "What we've learnt is [the level of] accuracy or precision of AI models is as good as the data you feed it and train it."
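
He gave no specifics on how that evaluation is set up. One common pattern, sketched below, is to prompt a model to classify an article against a written policy and route anything other than a clear pass to a human reviewer; the prompt wording and the `call_llm` stub are assumptions, not a real API.

```python
# Sketch of an LLM-as-policy-classifier, with a human in the loop for anything
# that is not a clear pass. The prompt and stub below are illustrative only.

POLICY_PROMPT = """You are checking an article against editorial policy.
Policy: content must be factual, attributed to sources, and free of plagiarism.
Answer with exactly one word: COMPLIANT or NON_COMPLIANT.

Article:
{article}
"""

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to whatever model endpoint you actually use.
    return "NON_COMPLIANT"

def policy_verdict(article_text: str) -> str:
    verdict = call_llm(POLICY_PROMPT.format(article=article_text)).strip()
    if verdict == "COMPLIANT":
        return "publish"
    # Anything else, including malformed model output, goes to human review,
    # in line with the article's point about keeping oversight in place.
    return "human_review"

print(policy_verdict("An unsourced claim about a public figure."))  # human_review
```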

Models today largely are not "conscious" and lack contextual comprehension, Narayan said. These issues can be resolved over time as more datasets and kinds of data are fed into the model, he said.

Equal effort should be invested in ensuring training data is "treated" so it is free of bias and normalized for inconsistencies. This is especially important for generative AI, where open datasets commonly are used to train the AI model, he noted. He described this as the "shady" part of the industry, one that will lead to issues related to copyright and intellectual property infringement. 
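
What "treating" data involves is not spelled out; at minimum it usually means normalizing text and removing duplicate records so no source is over-represented. A minimal sketch, assuming exact-duplicate removal after Unicode and whitespace normalization:

```python
import unicodedata

def normalize(text: str) -> str:
    """Canonicalize text: unify Unicode forms, collapse whitespace, lowercase."""
    text = unicodedata.normalize("NFKC", text)
    return " ".join(text.split()).lower()

def dedupe(records: list[str]) -> list[str]:
    """Drop records that are duplicates after normalization."""
    seen: set[str] = set()
    kept = []
    for record in records:
        key = normalize(record)
        if key not in seen:
            seen.add(key)
            kept.append(record)
    return kept

print(dedupe(["Breaking News!", "breaking   news!", "Market update"]))
# ['Breaking News!', 'Market update']
```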

Also: A thorny question: Who owns code, images, and narratives generated by AI? 

"Right now, there isn't much public disclosure about what kind of data is going into the AI model," he said. "This needs to change. There should be transparency around how they're trained and the decision logic, because these AI models will shape our world views." 

He expressed concerns about "hallucinations," where the AI generates false information realistic enough to convince people it is true. 

Such problems further emphasize the need for some form of governance that involves humans overseeing the content that is pushed to users, he said. 

Organizations also need to audit what comes out of their AI models and implement the necessary guardrails. For instance, there should be safety nets in place for when the AI system is asked to provide instructions on building a bomb or to write an article that plagiarizes. 
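
Narayan did not describe what such safety nets look like in practice. A common, simple layer is a screen on model output that withholds flagged responses for human review, as sketched below; the single regex pattern is a placeholder, not a production blocklist.

```python
import re

# Illustrative blocklist; a real guardrail would use classifiers, not one regex.
BLOCKED_PATTERNS = [
    re.compile(r"\bhow to (build|make) a bomb\b", re.IGNORECASE),
]

def guardrail(response: str) -> str:
    """Withhold responses matching blocked patterns and flag them for review."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(response):
            return "[withheld: flagged for human review]"
    return response

print(guardrail("Step 1 of how to build a bomb ..."))  # withheld
print(guardrail("Here is today's weather forecast."))  # passes through
```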

"AI, right now, is not at a stage where you can let it run on its own," Narayan said, adding that there should always be investment in human capabilities and oversight. "You need guardrails. You don't want content that isn't proofread or fact-checked."

And amid the hype, it is important to be mindful of the limitations of generative AI, whose models still are not trained to handle breaking news and do not work well with real-time data. 

Also: Who owns the code? If ChatGPT helps you write your app, does it belong to you?

Where AI has worked better is in powering SmartNews' recommendation engine, which prioritizes articles deemed to be of higher interest based on background signals, such as the user's reading patterns. These AI systems have been in use over the past decade, during which rules and algorithms have been continuously fine-tuned, he explained. 
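
The actual signals and models behind SmartNews' recommender are private. As a toy illustration of ranking by reading patterns, the sketch below scores each article by how much of a user's reading history overlaps with its topics; the topics, history, and scoring rule are all invented.

```python
def interest_score(article_topics: set[str], history: dict[str, int]) -> float:
    """Share of the user's reads that fall in this article's topics (illustrative)."""
    total_reads = sum(history.values()) or 1
    return sum(history.get(topic, 0) for topic in article_topics) / total_reads

history = {"technology": 12, "finance": 3, "sports": 1}  # hypothetical reading log
articles = {
    "Chip startup raises new funding": {"technology", "finance"},
    "Cup final preview": {"sports"},
}
ranked = sorted(articles, key=lambda t: interest_score(articles[t], history), reverse=True)
print(ranked)  # ['Chip startup raises new funding', 'Cup final preview']
```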

While he was unwilling to divulge details about how generative AI could be incorporated, he pointed to its potential in easing human interaction with machines. 

Anyone, including those without a technical background, will be able to get the responses they need as long as they know how to ask the right questions. They then can repurpose the answers for use in their daily activities, he said. 

Some areas of generative AI, though, remain gray. 

According to Narayan, there are ongoing discussions with publishers on its news platform about how articles written completely by AI, as well as those written by humans but augmented with AI, should be managed. And should rules be established for such articles, how then would these be enforced?

In addition, there are questions about the level of disclosure that should apply to the different variations, so readers know when and how AI is used. 

Also: Is humanity really doomed? Consider AI's Achilles heel

However these questions are eventually addressed, editorial oversight remains a mandate. Again stressing the importance of transparency, Narayan said every piece of content still will have to meet SmartNews' editorial policies on accuracy and trustworthiness. 

He expressed alarm over tech layoffs that saw the removal of AI ethics and trust teams. "I will tell you now, it's so important to continue to have [human] oversight and investment in safety guardrails. If the diligence is missing, we're going to create a monster," he said. "Automation is great [and allows] you to scale systems, but nothing comes close to human ingenuity."

