
Automate Amazon Bedrock batch inference: Building a scalable and efficient pipeline

AWS Machine Learning - AI

To address this consideration and enhance your use of batch inference, we’ve developed a scalable solution using AWS Lambda and Amazon DynamoDB. We walk you through our solution, detailing the core logic of the Lambda functions. Amazon S3 invokes the {stack_name}-create-batch-queue-{AWS-Region} Lambda function.
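
As a rough sketch of the pattern this excerpt describes (an S3 upload event queuing work in DynamoDB), the handler below records each uploaded object for later batch submission. This is not the article's code; the table name, key schema, and status values are assumptions.

```python
import os
from datetime import datetime, timezone

import boto3

# Hypothetical table name; the real stack would pass this in via an environment variable.
TABLE_NAME = os.environ.get("BATCH_QUEUE_TABLE", "batch-inference-queue")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def handler(event, context):
    """Invoked by S3 when an input file for batch inference is uploaded.

    Records each uploaded object in a DynamoDB queue table so a downstream
    function can submit Amazon Bedrock batch inference jobs from it.
    """
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        table.put_item(
            Item={
                "job_id": f"{bucket}/{key}",                          # assumed partition key
                "status": "PENDING",                                  # assumed status field
                "created_at": datetime.now(timezone.utc).isoformat(),
            }
        )
    return {"queued": len(records)}
```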

Create a generative AI–powered custom Google Chat application using Amazon Bedrock

AWS Machine Learning - AI

Before processing the request, a Lambda authorizer function associated with the API Gateway authenticates the incoming message. After it’s authenticated, the request is forwarded to another Lambda function that contains our core application logic; implement your business logic in that function’s code.
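
A minimal sketch of the authorizer step follows. The header-based token check and environment variable are placeholders, not the article's implementation; a real Google Chat integration would verify the signed bearer token's issuer and audience.

```python
import os

# Hypothetical shared secret; a real deployment would validate a signed Google Chat token instead.
EXPECTED_TOKEN = os.environ.get("CHAT_APP_TOKEN", "")


def handler(event, context):
    """API Gateway Lambda request authorizer: allow or deny before the core Lambda runs."""
    headers = event.get("headers") or {}
    token = headers.get("authorization") or headers.get("Authorization") or ""
    effect = "Allow" if EXPECTED_TOKEN and token == f"Bearer {EXPECTED_TOKEN}" else "Deny"
    return {
        "principalId": "google-chat-app",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```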

Build a video insights and summarization engine using generative AI with Amazon Bedrock

AWS Machine Learning - AI

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. AWS Lambda is an event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.

Enhance customer support with Amazon Bedrock Agents by integrating enterprise data APIs

AWS Machine Learning - AI

The workflow includes the following steps: Documents (owner manuals) are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. The Lambda function runs the database query against the appropriate OpenSearch Service indexes, searching for exact matches or using fuzzy matching for partial information.
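
To illustrate the exact-plus-fuzzy lookup described here, a sketch of the OpenSearch query is shown below. The endpoint, index, and field names are assumptions, and a real deployment would add SigV4 authentication for the OpenSearch Service domain.

```python
from opensearchpy import OpenSearch

# Hypothetical domain endpoint; a real Lambda would read it from configuration
# and authenticate with SigV4 (e.g. opensearchpy.AWSV4SignerAuth).
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)


def search_manuals(term: str, index: str = "owner-manuals"):
    """Look for exact matches first, falling back to fuzzy matching for partial input."""
    body = {
        "query": {
            "bool": {
                "should": [
                    {"term": {"part_number.keyword": term}},           # exact match
                    {"match": {"description": {"query": term,
                                               "fuzziness": "AUTO"}}},  # fuzzy match
                ]
            }
        }
    }
    response = client.search(index=index, body=body)
    return [hit["_source"] for hit in response["hits"]["hits"]]
```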

Multi-LLM routing strategies for generative AI applications on AWS

AWS Machine Learning - AI

Semantic routing offers several advantages, such as efficiency gained through fast similarity search in vector databases, and scalability to accommodate a large number of task categories and downstream LLMs. These embeddings are then saved as a reference index inside an in-memory FAISS vector store, which is deployed as a Lambda layer.
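
A toy sketch of the semantic-routing idea follows. The route descriptions, embedding dimension, and hashing embedder are stand-ins; the article embeds route references offline with a real embedding model and ships the prebuilt FAISS index as a Lambda layer rather than building it per request.

```python
import numpy as np
import faiss

# Hypothetical task categories; in practice these are embedded offline.
ROUTES = ["summarize a document", "answer a coding question", "translate text"]
DIM = 384  # assumed embedding dimension


def embed(texts):
    """Toy bag-of-words hashing embedder; a real system would call an embedding model."""
    vecs = np.zeros((len(texts), DIM), dtype="float32")
    for i, text in enumerate(texts):
        for token in text.lower().split():
            vecs[i, hash(token) % DIM] += 1.0
    return vecs


# Build the reference index over route embeddings (done once, offline).
index = faiss.IndexFlatL2(DIM)
index.add(embed(ROUTES))


def route(query: str) -> str:
    """Route a request to the nearest task category by vector similarity."""
    _, ids = index.search(embed([query]), 1)
    return ROUTES[ids[0][0]]
```

IndexFlatL2 keeps the sketch exact and dependency-light; with a large number of routes, an approximate index such as IVF would trade a little accuracy for faster lookups.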

From “Secondary Storage” To Just “Storage”: A Tale of Lambdas, LZ4, and Garbage Collection

Honeycomb

When we introduced Secondary Storage two years ago, it was a deliberate compromise between economy and performance. Compared to Honeycomb’s primary NVMe storage attached to dedicated servers, secondary storage let customers keep more data for less money. Enter AWS Lambda. Today things look very different.

How BQA streamlines education quality reporting using Amazon Bedrock

AWS Machine Learning - AI

The solution consists of the following steps: Relevant documents are uploaded and stored in an Amazon Simple Storage Service (Amazon S3) bucket. The text extraction AWS Lambda function is invoked by the SQS queue, processing each queued file and using Amazon Textract to extract text from the documents.
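
As a hedged sketch of that text-extraction step, the SQS-triggered handler below calls Amazon Textract for each queued document. The message fields are assumptions, and multi-page PDFs would need Textract's asynchronous APIs rather than the synchronous call shown here.

```python
import json

import boto3

textract = boto3.client("textract")


def handler(event, context):
    """Triggered by SQS: extract text from each queued S3 document with Amazon Textract."""
    results = []
    for record in event["Records"]:
        body = json.loads(record["body"])      # assumed to carry the S3 bucket and key
        bucket, key = body["bucket"], body["key"]
        response = textract.detect_document_text(
            Document={"S3Object": {"Bucket": bucket, "Name": key}}
        )
        text = "\n".join(
            block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"
        )
        results.append({"key": key, "characters": len(text)})
    return results
```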
