Remove Hardware Remove Machine Learning Remove Training
article thumbnail

Reduce ML training costs with Amazon SageMaker HyperPod

AWS Machine Learning - AI

Training a frontier model is highly compute-intensive, requiring a distributed system of hundreds, or thousands, of accelerated instances running for several weeks or months to complete a single job. For example, pre-training the Llama 3 70B model with 15 trillion training tokens took 6.5 During the training of Llama 3.1

article thumbnail

ThirdAI raises $6M to democratize AI to any hardware

TechCrunch

Houston-based ThirdAI , a company building tools to speed up deep learning technology without the need for specialized hardware like graphics processing units, brought in $6 million in seed funding. It was when he was at Rice University that he looked into how to make that work for deep learning.

Hardware 277
Insiders

Sign Up for our Newsletter

This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.

article thumbnail

Stability AI backs effort to bring machine learning to biomed

TechCrunch

Called OpenBioML , the endeavor’s first projects will focus on machine learning-based approaches to DNA sequencing, protein folding and computational biochemistry. Stability AI’s ethically questionable decisions to date aside, machine learning in medicine is a minefield. Predicting protein structures.

article thumbnail

Dulling the impact of AI-fueled cyber threats with AI

CIO

While LLMs are trained on large amounts of information, they have expanded the attack surface for businesses. From prompt injections to poisoning training data, these critical vulnerabilities are ripe for exploitation, potentially leading to increased security risks for businesses deploying GenAI.

article thumbnail

Efficiently train models with large sequence lengths using Amazon SageMaker model parallel

AWS Machine Learning - AI

Across diverse industries—including healthcare, finance, and marketing—organizations are now engaged in pre-training and fine-tuning these increasingly larger LLMs, which often boast billions of parameters and larger input sequence length. This approach reduces memory pressure and enables efficient training of large models.

article thumbnail

Cellino is using AI and machine learning to scale production of stem cell therapies

TechCrunch

technology, machine learning, hardware, software — and yes, lasers! Founded by a team whose backgrounds include physics, stem cell biology, and machine learning, Cellino operates in the regenerative medicine industry. — could eventually democratize access to cell therapies.

article thumbnail

Specialized hardware for deep learning will unleash innovation

O'Reilly Media - Data

In this episode of the Data Show , I spoke with Andrew Feldman, founder and CEO of Cerebras Systems , a startup in the blossoming area of specialized hardware for machine learning. Since the release of AlexNet in 2012 , we have seen an explosion in activity in machine learning , particularly in deep learning.

Hardware 154