AMD Launches New Chips To Run Large Language Models In Advanced GenAI Era

New Delhi: AMD has announced new accelerators and processors for running large language models (LLMs), as graphics chip maker Nvidia leads the generative AI chips race. The AMD Instinct MI300X chip offers industry-leading memory bandwidth for generative AI and leadership performance for large language model (LLM) training and inferencing, while the AMD Instinct MI300A accelerated processing unit (APU) combines the latest AMD CDNA 3 architecture and "Zen 4" CPUs to deliver breakthrough performance for HPC and AI workloads.

“AMD Instinct MI300 Series accelerators are designed with our most advanced technologies, delivering leadership performance, and will be in large scale cloud and enterprise deployments,” said Victor Peng, president, AMD.

“By leveraging our leadership hardware, software and open ecosystem approach, cloud providers, OEMs and ODMs are bringing to market technologies that empower enterprises to adopt and deploy AI-powered solutions,” Peng added.

Customers using the latest AMD Instinct accelerator portfolio include Dell Technologies, Hewlett Packard Enterprise, Lenovo, Meta, Microsoft, Oracle, Supermicro and others.

The AMD Instinct MI300X delivers nearly 40 per cent more compute units, 1.5x more memory capacity and 1.7x more peak theoretical memory bandwidth, as well as support for new math formats such as FP8 and sparsity, all geared towards AI and HPC workloads.

AMD Instinct MI300X accelerators feature a class-leading 192GB memory capacity as well as 5.3 TB per second of peak memory bandwidth, delivering the performance needed for increasingly demanding AI workloads, the company said.

The AMD Instinct Platform is a leadership generative AI platform built on an industry-standard OCP design with eight MI300X accelerators, offering an industry-leading 1.5TB of HBM3 memory capacity.
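The platform-level figure follows directly from the per-chip numbers quoted above: eight accelerators at 192GB each work out to roughly 1.5TB of HBM3 in aggregate. A minimal Python sketch of that arithmetic, using only the values cited in this article (the aggregate bandwidth line is simply eight times the per-chip figure, not a number AMD quotes here):

# Quick arithmetic check of the platform figures quoted in this article.
ACCELERATORS_PER_PLATFORM = 8          # MI300X chips in the OCP platform
MEMORY_PER_ACCELERATOR_GB = 192        # HBM3 capacity per MI300X
BANDWIDTH_PER_ACCELERATOR_TBPS = 5.3   # peak memory bandwidth per MI300X

total_memory_tb = ACCELERATORS_PER_PLATFORM * MEMORY_PER_ACCELERATOR_GB / 1000
total_bandwidth_tbps = ACCELERATORS_PER_PLATFORM * BANDWIDTH_PER_ACCELERATOR_TBPS

print(f"Aggregate HBM3 capacity: {total_memory_tb:.2f} TB")       # ~1.54 TB, quoted as 1.5TB
print(f"Derived aggregate peak bandwidth: {total_bandwidth_tbps:.1f} TB/s")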

Content Source: zeenews.india.com
