
OpenAI announces updates to API, including improved function calling capabilities

Jun 15, 2023 | Hi-network.com
Jakub Porzycki/NurPhoto via Getty Images

Since the launch of ChatGPT last fall made OpenAI more popular than ever, its application programming interface (API) has become a sought-after tool for developers. Companies hoping to replicate OpenAI's success are driving developers to build more and more tools on its APIs.

To help meet this surge in demand, OpenAI has announced several changes to its API: an improved function calling capability, more steerable versions of GPT-4 and GPT-3.5 Turbo, a new 16k-context version of GPT-3.5 Turbo, and a 75% cost reduction in the Embeddings model, which lowers the bill for developers who pay for the API.

Also: 92% of programmers are using AI tools, says GitHub developer survey

Chief among the updates is a new function calling capability in the Chat Completions API, which lets developers connect the GPT models to external tools more reliably. Developers can describe functions to GPT-4 and GPT-3.5 Turbo, and the model will output a JSON object containing the arguments needed to call those functions.

This update makes it easier for developers to build chatbots or applications that interact with external tools and APIs to perform specific tasks, such as sending emails, retrieving weather or flight information, or extracting data from text sources such as websites.
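For illustration, here is a minimal sketch of that flow using the openai Python package as it existed around the time of this announcement (the 0.27.x releases); the get_current_weather function and its schema are assumptions made for this example, not code from OpenAI's announcement.

```python
import json
import openai  # openai Python package, 0.27.x era; openai.api_key must be set

def get_current_weather(city: str) -> dict:
    # Stand-in for a real weather API call.
    return {"city": city, "forecast": "sunny", "temperature_c": 22}

# Describe the function to the model as a JSON Schema.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Boston"},
        },
        "required": ["city"],
    },
}]

messages = [{"role": "user", "content": "What's the weather like in Boston?"}]

# First call: the model decides whether to call the function and, if so,
# returns a JSON object with the arguments instead of a normal reply.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    result = get_current_weather(**args)

    # Second call: pass the function's result back so the model can answer
    # the user in natural language.
    messages.append(message)
    messages.append({
        "role": "function",
        "name": message["function_call"]["name"],
        "content": json.dumps(result),
    })
    followup = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
    )
    print(followup["choices"][0]["message"]["content"])
```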

Also: GPT-3.5 vs GPT-4: Is ChatGPT Plus worth its subscription fee?

The updates also make GPT-4 and GPT-3.5 more steerable, so developers can exert greater control over the models' output. OpenAI is allowing developers to craft the context, specify desired formatting, and provide instructions to the model about the desired output. Essentially, developers have more say over the tone, style, and content of the responses generated by the models used in their applications.
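As a rough sketch of what that steering looks like in practice, a system message can set the tone and output format before the user's request; the prompt and parameters below are illustrative assumptions, not examples from OpenAI's announcement.

```python
import openai  # openai Python package, 0.27.x era; openai.api_key must be set

# The system message sets tone, style, and output format; the user message
# carries the actual request.
response = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a customer-support assistant for an airline. "
                "Answer in a formal tone and respond only with a JSON object "
                "containing the keys 'answer' and 'follow_up_question'."
            ),
        },
        {"role": "user", "content": "Can I bring a cello into the cabin?"},
    ],
    temperature=0.2,  # lower temperature keeps the output more predictable
)

print(response["choices"][0]["message"]["content"])
```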

OpenAI also announced the launch of a new 16k-context version of GPT-3.5 Turbo. GPT-3.5 Turbo differs from the GPT-3.5 model behind ChatGPT in that it's tailored for developers building chat-based applications, and the new 16k model is an upgraded variant of the standard 4k model used previously.

Also: AMD unveils MI300x AI chip as 'generative AI accelerator'

The "context" in "16k context" refers to the text in a conversation that the model can take into account when interpreting input and generating relevant responses. The 4k, or roughly 4,000, tokens of the standard GPT-3.5 Turbo model limited the context the model could maintain to a few paragraphs. 16k, or roughly 16,000, tokens equals about 20 pages of text, giving the model a much larger amount of text to draw on.
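To get a feel for those numbers, here's a small sketch that counts a prompt's tokens with OpenAI's tiktoken library and picks a model variant; the 4,000-token threshold and the 1,000-token reply budget are assumptions for this example, approximating the limits described above.

```python
import tiktoken  # OpenAI's open source tokenizer

# Count the tokens in a prompt and choose the smallest model that fits,
# leaving headroom for the model's reply.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
prompt = "Summarize the following meeting transcript: ..."

prompt_tokens = len(encoding.encode(prompt))
reply_budget = 1_000  # assumed space reserved for the response

if prompt_tokens + reply_budget <= 4_000:
    model = "gpt-3.5-turbo"
else:
    model = "gpt-3.5-turbo-16k"

print(f"{prompt_tokens} prompt tokens -> {model}")
```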

Lastly, OpenAI announced that efficiency gains are allowing it to cut prices: the cost of the Embeddings model drops 75% to $0.0001 per 1k tokens, and GPT-3.5 Turbo input tokens drop 25% to $0.0015 per 1k tokens, with output tokens priced at $0.002 per 1k. The new GPT-3.5 Turbo 16k model is priced at $0.003 per 1k input tokens and $0.004 per 1k output tokens.
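As a quick sanity check on those prices, the back-of-the-envelope math looks like this; the token counts in the example are made up for illustration.

```python
# Prices per 1K tokens as quoted above (USD).
PRICES = {
    "gpt-3.5-turbo":          {"input": 0.0015, "output": 0.002},
    "gpt-3.5-turbo-16k":      {"input": 0.003,  "output": 0.004},
    "text-embedding-ada-002": {"input": 0.0001, "output": 0.0},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of a single request."""
    price = PRICES[model]
    return (input_tokens / 1000) * price["input"] + (output_tokens / 1000) * price["output"]

# Example: a 10,000-token prompt and a 1,000-token reply on the 16k model.
print(f"${estimate_cost('gpt-3.5-turbo-16k', 10_000, 1_000):.4f}")  # $0.0340
```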


