
Multi-LLM routing strategies for generative AI applications on AWS

AWS Machine Learning - AI

Organizations are increasingly using multiple large language models (LLMs) when building generative AI applications. Although an individual LLM can be highly capable, it might not optimally address a wide range of use cases or meet diverse performance requirements.
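One common routing strategy is a lightweight classifier or rule layer that inspects each request and dispatches it to the most suitable model. The sketch below is illustrative only; the model names and routing rules are assumptions, not an AWS API.

```python
# A minimal sketch of rule-based multi-LLM routing. Model identifiers
# and heuristics here are hypothetical, chosen for illustration.

def route_request(prompt: str) -> str:
    """Pick a model tier from a rough look at the request."""
    code_markers = ("def ", "class ", "SELECT", "Traceback")
    if any(marker in prompt for marker in code_markers):
        # Code-heavy requests go to a code-specialized model.
        return "code-specialist-llm"
    if len(prompt.split()) > 200:
        # Long prompts are routed to a model with a larger context window.
        return "large-context-llm"
    # Default: the cheapest, fastest general-purpose model.
    return "fast-general-llm"

print(route_request("def add(a, b): return a + b"))  # code-specialist-llm
```

In production, the rule layer is often replaced by a small classifier model or by semantic similarity against example prompts, but the dispatch structure stays the same.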


Build a multi-tenant generative AI environment for your enterprise on AWS

AWS Machine Learning - AI

While organizations continue to discover the powerful applications of generative AI, adoption is often slowed by team silos and bespoke workflows. To move faster, enterprises need robust operating models and a holistic approach that simplifies the generative AI lifecycle.


Reduce ML training costs with Amazon SageMaker HyperPod

AWS Machine Learning - AI

Training a frontier model is highly compute-intensive, requiring a distributed system of hundreds or thousands of accelerated instances running for several weeks or months to complete a single job. For example, pre-training the Llama 3 70B model used 15 trillion training tokens.


Accelerate AWS Well-Architected reviews with Generative AI

AWS Machine Learning - AI

In this post, we explore a generative AI solution leveraging Amazon Bedrock to streamline the WAFR process. We demonstrate how to harness the power of LLMs to build an intelligent, scalable system that analyzes architecture documents and generates insightful recommendations based on AWS Well-Architected best practices.


Principal Financial Group uses QnABot on AWS and Amazon Q Business to enhance workforce productivity with generative AI

AWS Machine Learning - AI

With QnABot on AWS (QnABot), integrated with Microsoft Entra ID access controls, Principal launched an intelligent self-service solution rooted in generative AI. Principal implemented several measures to improve the security, governance, and performance of its conversational AI platform.


Four generative AI use cases for businesses

CIO

As business leaders look to harness AI to meet business needs, generative AI has become an invaluable tool to gain a competitive edge. What sets generative AI apart from traditional AI is not just the ability to generate new data from existing patterns.


Supercharge your auto scaling for generative AI inference – Introducing Container Caching in SageMaker Inference

AWS Machine Learning - AI

Today at AWS re:Invent 2024, we are excited to announce the new Container Caching capability in Amazon SageMaker, which significantly reduces the time required to scale generative AI models for inference. In our tests, we’ve seen substantial improvements in scaling times for generative AI model endpoints across various frameworks.