Texas is Central to Amazon's AI Initiative in the United States

As billions of dollars pour into artificial intelligence (AI), tech giant Amazon aims to loosen its dependence on Nvidia with bespoke "Trainium" chips designed specifically for machine learning.

Annapurna Labs, an Amazon subsidiary located in Austin, Texas, was evaluating the durability of its newest Trainium generation during a recent AFP visit to the site.

Texas is emerging as a US tech hub, attracting investment with low energy costs, lenient regulations, tax benefits, and relatively affordable real estate for large data centers.

Amid a thunderous din, UltraServers, each equipped with 144 Trainium AI-accelerator chips, were undergoing a standard pre-shipment inspection at Annapurna.

After years of relying on outside suppliers for chips, Amazon's cloud computing arm, Amazon Web Services (AWS), began designing its own following its acquisition of the Israeli startup Annapurna Labs in 2015.

The Graviton and Inferentia chips debuted in 2018, with Graviton aimed at general cloud computing and Inferentia designed to support AI models.

The first Trainium was released in 2020, followed by a second generation that promised significant performance improvements.

Launched in December, the Trainium 3 chips are claimed to deliver double the performance of the second generation, while remaining smaller than a credit card.

Kristopher King, director of the Annapurna lab in Austin, argued that the new Trainium chips can reduce the costs of creating and operating generative AI models by up to 40 percent compared to the graphics processing units (GPUs) that are currently considered the "gold standard" for AI.

Beyond pricing Trainium chips competitively, AWS is emphasizing reliability as a key selling point, since data centers must run continuously for extended periods.

According to Mark Carroll, head of engineering at Annapurna, developing AI requires hundreds of thousands of chips to operate simultaneously for extended periods.
