Amazon is most commonly associated with its e-commerce platform, which has become a giant in the industry thanks to its ability to sell just about everything you can think of and deliver it to your doorstep within two days with a membership.
However, Amazon also has a strong presence in cloud computing and is about to become more involved with generative AI.
Also: How UPS workers' big contract win could impact Amazon
On Tuesday, Amazon held its AWS (Amazon Web Services) Summit in New York, an event focused on Amazon's work in the cloud that features expos, learning sessions, and a keynote address.
This year, Amazon used the event to make several significant generative AI announcements aimed at streamlining how developers build AI applications and how enterprises integrate AI.
Building and deploying an AI model involves several components, beginning with the chips used to power the model, then building and training the model itself, and finally applying the model in the real world.
Also: Generative AI will soon go mainstream, say 9 out of 10 IT leaders
AWS's announcements today help optimize every step of the process. Here is a roundup of some of the most noteworthy announcements.
AWS HealthScribe is a HIPAA-eligible generative AI-powered service that transcribes conversations between patients and clinicians and creates clinical documents.
The generated clinical notes include summaries of the patient-clinician interaction, AI-generated insights, references to the original transcription, and structured medical terms.
The overall purpose of the service is to cut down the time clinicians spend writing detailed documentation and let them spend that time on something more valuable, such as face-to-face time with patients.
"With AWS HealthScribe, healthcare software providers can use a single API to automatically create robust transcripts, extract key details (e.g., medical terms and medications), and create summaries from doctor-patient discussions that can then be entered into an electronic health record (EHR) system," said Amazon in thepress release .
Also: This is how generative AI will change the gig economy for the better
To address privacy concerns, Amazon says the service has data security and privacy built in: it does not retain customer data after processing and encrypts customer data in transit.
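For a sense of what that single-API workflow could look like, here is a minimal sketch using the AWS SDK for Python (boto3), where HealthScribe jobs are started through the Amazon Transcribe API; the bucket names, job name, and IAM role below are placeholders, not values from Amazon's announcement.

```python
# A minimal sketch (not Amazon's official example): starting an AWS HealthScribe
# job through the Amazon Transcribe API with boto3. Bucket names, the job name,
# and the IAM role ARN are placeholders.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_medical_scribe_job(
    MedicalScribeJobName="visit-2023-07-26",            # placeholder job name
    Media={"MediaFileUri": "s3://my-bucket/visit.wav"},  # recorded patient-clinician audio
    OutputBucketName="my-output-bucket",                 # where the transcript and notes land
    DataAccessRoleArn="arn:aws:iam::123456789012:role/HealthScribeAccess",
    Settings={
        "ShowSpeakerLabels": True,  # distinguish clinician and patient turns
        "MaxSpeakerLabels": 2,
    },
)

# Poll for completion, then fetch the clinical note and transcript from S3.
job = transcribe.get_medical_scribe_job(MedicalScribeJobName="visit-2023-07-26")
print(job["MedicalScribeJob"]["MedicalScribeJobStatus"])
```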
This isn't the first time generative AI has been geared towards the medical field, as seen by Google's launch of Med-PaLM 2 in April.
Generative AI has surged in popularity since the release of ChatGPT in November. The technology has the potential to improve workflows and productivity across industries and, as a result, has become a skill highly sought after by employers.
Despite the benefits and popularity of generative AI, many people feel they don't have the knowledge to use AI properly, as previously reported.
AWS added seven generative AI courses for people of all skill levels and backgrounds. The courses cover different aspects of generative AI, ranging from hands-on topics such as building with Amazon CodeWhisperer to big-picture topics such as the different ways businesses can use AI.
You can browse the courses here.
To help developers build with foundation models, Amazon launched its foundation model service, Amazon Bedrock, back in April.
With Amazon Bedrock, developers can choose which foundation model they want for their specific use case. Until today, the choices included Amazon's Titan, Anthropic's Claude, Stability AI's Stable Diffusion, and AI21 Labs' Jurassic-2.
Also: The best AI art generators
At the Summit, Amazon announced that the choices will also include Claude 2, Anthropic's latest LLM; SDXL 1.0, Stability AI's latest text-to-image model; and foundation models from a new provider, Cohere.
The expanded range of available models should make it easier for customers to find the one that best fits their needs.
Amazon Bedrock is also introducing agents, which allow developers to build AI applications using proprietary data without manually training the model on the data.
Since agents will allow applications to access organization-specific data, developers will be able to create applications capable of accomplishing a broader range of tasks with up-to-date answers.
These new Amazon Bedrock features are available in preview today.
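For developers wondering what calling a Bedrock-hosted model looks like in code, here is a rough sketch using boto3 with Anthropic's Claude 2; since Bedrock was still in preview at the time of writing, treat the client name and model ID as assumptions based on Amazon's documented conventions.

```python
# A rough sketch of calling one of the Bedrock-hosted models (here, Claude 2)
# with boto3. The prompt format follows Anthropic's conventions on Bedrock.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize what a vector embedding is.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # swap in another provider's model ID to change models
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```

Switching providers is largely a matter of changing the model ID and shaping the request body to match that model's expected input format.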
When you input a prompt into a generative AI model, both the prompt and the output are conversational, as seen in the rise of massively popular chatbots.
Many of these generative AI applications use vector embeddings, numerical representations of text, image, and video data that capture contextual relationships, to help generate accurate responses.
Also: 6 skills you need to become an AI prompt engineer
AWS's vector engine for Amazon OpenSearch Serverless makes it easier for developers to search embeddings and incorporate them into LLM applications.
Now available in preview, the new vector engine allows developers to store, search, and retrieve billions of vector embeddings in real time without worrying about the underlying infrastructure, according to the release.
This will make it easier for developers to fine-tune models, and as a result, create models that produce better, more accurate results.
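To illustrate what storing and searching embeddings actually involves, here is a simplified example using the standard OpenSearch k-NN vector field and query; the Serverless-specific connection details (endpoint and SigV4 authentication) are omitted, and the index name, field name, and tiny four-dimensional vectors are toy values.

```python
# A simplified illustration of vector search with OpenSearch's k-NN support.
# Real embeddings are typically hundreds or thousands of dimensions, produced
# by an embedding model; these four-dimensional vectors are toy values.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

# Create an index with a vector field that supports k-nearest-neighbor search.
client.indices.create(index="docs", body={
    "settings": {"index.knn": True},
    "mappings": {"properties": {"embedding": {"type": "knn_vector", "dimension": 4}}},
})

# Store a document along with its embedding.
client.index(index="docs",
             body={"text": "return policy", "embedding": [0.1, 0.8, 0.05, 0.3]},
             refresh=True)

# Retrieve the stored documents whose embeddings are closest to the query vector.
results = client.search(index="docs", body={
    "size": 3,
    "query": {"knn": {"embedding": {"vector": [0.12, 0.75, 0.1, 0.28], "k": 3}}},
})
for hit in results["hits"]["hits"]:
    print(hit["_source"]["text"], hit["_score"])
```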
In March, Amazon announced its Amazon Elastic Compute Cloud (Amazon EC2) P5 Instances, powered by Nvidia H100 Tensor Core GPUs and designed to deliver the compute performance needed to build and train machine learning (ML) models.
These instances can deliver up to six times faster training compared with the previous generation and cut training costs by up to 40%, according to Amazon.
Today, those EC2 P5 Instances became generally available.
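With general availability, launching a P5 instance works like any other EC2 request; in this sketch the AMI, key pair, and subnet are placeholders, and access still depends on your region and account quotas.

```python
# A sketch of launching an H100-backed P5 instance with boto3. The AMI ID,
# key pair, and subnet are placeholders for values from your own account.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: e.g., a Deep Learning AMI
    InstanceType="p5.48xlarge",        # the H100-backed P5 instance size
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair
    SubnetId="subnet-0123456789abcdef0",
)
print(response["Instances"][0]["InstanceId"])
```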