News

At the Johannesburg event, the tech giant showcased the evolution of services from simple model access to customisable, ...
Together with Anthropic, AWS is building an EC2 UltraCluster of Trn2 UltraServers, named Project Rainier, which will scale out distributed model training across hundreds of thousands of Trainium2 ...
AWS Trainium chips will be the preferred processors for training Mosaic AI models on the Databricks platform, the company announced today. The deal represents a blow to Nvidia’s continued AI dominance ...
AWS unveils Blackwell-powered instances for AI training and inference. To power customer training and inference workloads, AWS unveiled two new system configurations: the P6-B200 and P6e-GB200 ...
AWS added Intelligent Prompt Routing and Prompt Caching to Bedrock in a bid to bring down model usage costs.
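As a rough illustration of how Bedrock's Prompt Caching is used, here is a minimal sketch of a Converse API request body with a cache checkpoint after a long, reusable system prompt. The model ID and prompt strings are placeholders, not from the article, and a real call would go through boto3's `bedrock-runtime` client.

```python
# Hedged sketch: marking a cache point in an Amazon Bedrock Converse request.
# The "cachePoint" block asks Bedrock to cache everything above it (here, the
# system prompt) so later requests sharing that prefix are cheaper to serve.
# Model ID and prompt text below are placeholders.

def build_converse_request(system_prompt: str, user_text: str) -> dict:
    """Assemble a Converse API request body with a prompt-cache checkpoint."""
    return {
        "modelId": "anthropic.claude-3-5-sonnet-20241022-v2:0",  # placeholder ID
        "system": [
            {"text": system_prompt},
            {"cachePoint": {"type": "default"}},  # cache the prompt above this marker
        ],
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
        ],
    }

request = build_converse_request(
    "You are a support assistant for ExampleCo.",  # imagine a much longer prompt
    "How do I reset my password?",
)
# With credentials configured, this would be sent as:
#   boto3.client("bedrock-runtime").converse(**request)
```

Intelligent Prompt Routing works at a different layer: instead of a fixed model ID, the request targets a router, and Bedrock picks a cheaper or stronger model per request based on the prompt.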
AI foundation models are essential for the next generation of technology. AWS just added foundation models and new low-cost chips to its offerings at re:Invent ...
Another infrastructure priority at AWS is to continuously improve the energy efficiency of its data centers. Training and running AI models can be extremely energy-intensive. “AI chips perform ...
Among innovative demos in the GenAI Zone, one stood out for its advancements in accessibility, writes JASON BANNIER.
Amazon SageMaker launched in 2017 to manage the entire machine learning lifecycle, from building and training models to deploying and managing them at scale.
Amazon is launching a new AWS service that lets companies upload and customize generative AI models -- and serve those models through APIs.
Mindbeam AI Unveils Litespark Framework: Accelerating Large Language Model Training from Months to Days with NVIDIA Accelerated Computing ...