Flexible memory capacity expansion for data-intensive workloads

Memory capacity expansion that optimizes cost and performance by intelligently balancing compute and memory resources.

Ability to scale servers with high-capacity CXL standards-based memory

The CZ120 memory expansion module, built on Compute Express Link™ (CXL), enables server OEMs to scale, integrate and expand memory capacity for a multitude of application workloads.

Optimized performance beyond the direct-attach memory channels

Flexibly compose servers with higher memory capacity and low latency to meet application workload demands, with up to 24%1 greater memory bandwidth per core versus RDIMM only.

1. MLC bandwidth using 12-channel 4800MT/s RDIMM + 4x256GB CZ120 vs. RDIMM only.
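
As a rough, illustrative sanity check of that figure (not Micron's test methodology): DDR5-4800 peaks at 38.4 GB/s per channel, and if each CZ120 module delivers on the order of 28 GB/s of effective bandwidth over its PCIe Gen5 x8 CXL link (an assumed, not measured, value), the numbers line up:

$$
\begin{aligned}
\text{RDIMM only:}\quad & 12 \times 38.4\ \text{GB/s} = 460.8\ \text{GB/s}\\
\text{Four CZ120 modules:}\quad & 4 \times {\sim}28\ \text{GB/s} \approx 112\ \text{GB/s}\\
\text{Uplift:}\quad & 112 / 460.8 \approx 24\%
\end{aligned}
$$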

Lower total cost of ownership (TCO)

Better utilization of compute and memory resources for constrained application workloads, lowering both capital and operating expenditures.

How CXL will aid AI and large language models (LLMs) in solving new problems

Ryan Baxter, Sr. Director of Product Management at Micron, joins Patrick Moorhead and Daniel Newman on Six Five Insider Edition to discuss memory expansion using CXL™, its role in AI, and next steps in development and deployment.
Watch the video >

How CXL will help overcome memory bandwidth challenges in the data center

Patrick Moorhead of Moor Insights and Strategy joins Ryan Baxter to discuss the memory-sharing advantages of CXL in the data center.
Watch the video >

Micron memory expansion for data-intensive workloads

See how Micron memory expansion modules supporting CXL address system memory bottlenecks by delivering memory capacity and bandwidth expansion for emerging data-intensive applications and workloads.
Watch the video >

Micron enabling the next-generation scalable and flexible data center using CXL

Data centers are becoming more complex with increasing workload demands. Micron is using CXL to shape the data center of the future by providing flexible, scalable memory sharing and data center memory expansion.
Watch the video >

Frequently asked questions

What is CXL?

CXL (Compute Express Link) is a high-speed, industry-standard interconnect for communication between processors, accelerators, memory, storage, and other I/O devices.

CXL increases efficiency by enabling composability, scalability, and flexibility for heterogeneous and distributed compute architectures. It allows applications to share memory among CPU, GPU, and FPGA devices, improving resource utilization and sustainability while accelerating compute.
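
As one hedged illustration of what that looks like from the host side: recent Linux kernels commonly surface CXL-attached memory as CPU-less NUMA nodes, so software can discover it with ordinary NUMA tooling. The minimal sketch below uses libnuma (not a CXL-specific API) to walk the nodes and flag the CPU-less ones; the behavior described is an assumption about the host environment, not part of the CXL specification.

```c
/*
 * Minimal sketch (assumes a Linux host that exposes CXL-attached memory
 * as CPU-less NUMA nodes): list each NUMA node, its size, and whether it
 * has CPUs.  Build: gcc list_nodes.c -lnuma
 */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    struct bitmask *cpus = numa_allocate_cpumask();

    for (int node = 0; node <= numa_max_node(); node++) {
        long long free_bytes = 0;
        long long total_bytes = numa_node_size64(node, &free_bytes);
        if (total_bytes < 0)
            continue;                 /* node not present */

        numa_node_to_cpus(node, cpus);
        int cpu_count = numa_bitmask_weight(cpus);

        printf("node %d: %lld MiB total, %d CPUs%s\n",
               node, total_bytes >> 20, cpu_count,
               cpu_count == 0 ? "  <-- CPU-less (possibly CXL-attached)" : "");
    }

    numa_free_cpumask(cpus);
    return 0;
}
```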

What are the three types of CXL devices?

Type 1 (CXL.io) CXL device

This protocol is used for device initialization, link-up, enumeration, and device discovery. It is used for devices such as FPGAs and IPUs that support CXL.io. Type 1 devices implement a fully coherent cache but no host-managed device memory.

Type 2 (CXL.cache) CXL device

This protocol implements an optional coherent cache and host-managed device memory. Typical applications are devices that have high-bandwidth memory attached.

Type 3 (CXL.mem) CXL device

This protocol is used only for host-managed device memory. Typical applications are as memory expanders for the host.
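
As one hedged illustration of how a Type 3 expander is consumed by software, the sketch below assumes the Linux behavior noted above (the expander surfaced as a CPU-less NUMA node, here node 2, a hypothetical ID; check `numactl --hardware` on the actual system) and pins a buffer to it with libnuma. This is a minimal sketch, not Micron sample code.

```c
/*
 * Minimal sketch: place a working buffer on a CXL Type 3 memory expander
 * that the kernel exposes as a CPU-less NUMA node.  CXL_NODE is a
 * hypothetical node ID.  Build: gcc expander_alloc.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 2                   /* hypothetical: CPU-less node backed by the expander */
#define BUF_SIZE (1UL << 30)         /* 1 GiB working buffer */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    long long free_bytes = 0;
    long long total_bytes = numa_node_size64(CXL_NODE, &free_bytes);
    printf("node %d: %lld bytes total, %lld bytes free\n",
           CXL_NODE, total_bytes, free_bytes);

    /* Bind this buffer's pages to the CXL-attached node. */
    char *buf = numa_alloc_onnode(BUF_SIZE, CXL_NODE);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, BUF_SIZE);        /* touch the pages so they are actually faulted in */
    numa_free(buf, BUF_SIZE);
    return 0;
}
```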

What is the main advantage of CXL?

The key advantage of CXL is the expansion of the memory for compute nodes, filling the gap for data-intensive applications that require high bandwidth, high capacity and low latency.

What is Micron’s perspective on CXL?

Modern compute architectures are prone to the “memory wall” problem. CXL provides the architecture needed to close the scaling gap between compute and memory. It creates a new vector for economically viable memory solutions through memory expansion, impacting the DRAM bit growth rate.

Additionally, CXL's flexible and scalable architecture delivers higher utilization of compute and memory resources and greater operational efficiency, allowing resources to be scaled up or out to match workload demands.

To learn more about Micron's perspective on the impact of CXL on DRAM bit growth rate, read our white paper.

What is the memory wall problem and how does CXL help?

Modern parallel computer architectures are prone to system bottlenecks that limit application processing performance. Historically, this has been known as the “memory wall”: microprocessor performance has improved far faster than DRAM memory speed.

The CXL protocol's attributes for memory-device cohesion and coherency address the memory wall by enabling memory expansion beyond the server's DIMM slots. CXL memory expansion is a two-pronged approach to overcoming the memory wall, adding bandwidth while also adding capacity for data-intensive workloads on CXL-enabled servers.
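
To make the two-pronged bandwidth-plus-capacity point concrete, the sketch below shows one common software-side pattern under the same Linux/libnuma assumptions as the earlier sketches: page-interleaving a large, bandwidth-bound buffer across the local DDR5 node and the CXL node so that traffic is spread over both the direct-attach channels and the CXL link. Node IDs remain hypothetical.

```c
/*
 * Minimal sketch: interleave a bandwidth-bound buffer across the local
 * DDR5 node and a CXL-attached node so accesses draw on both the
 * direct-attach channels and the CXL link.  Node IDs are hypothetical.
 * Build: gcc interleave_cxl.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define LOCAL_NODE 0                 /* hypothetical: CPU-attached DDR5 */
#define CXL_NODE   2                 /* hypothetical: CXL memory expander */
#define BUF_SIZE   (1UL << 30)       /* 1 GiB */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    /* Build a node mask containing both nodes and page-interleave across it. */
    struct bitmask *nodes = numa_allocate_nodemask();
    numa_bitmask_setbit(nodes, LOCAL_NODE);
    numa_bitmask_setbit(nodes, CXL_NODE);

    char *buf = numa_alloc_interleaved_subset(BUF_SIZE, nodes);
    if (buf == NULL) {
        perror("numa_alloc_interleaved_subset");
        numa_bitmask_free(nodes);
        return 1;
    }

    memset(buf, 0, BUF_SIZE);        /* pages now alternate between the two nodes */

    numa_free(buf, BUF_SIZE);
    numa_bitmask_free(nodes);
    return 0;
}
```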

What is Micron’s perspective on the impact of CXL on DRAM bit growth rate?

CXL-attached memory offers a significant growth opportunity in the new realm of tiered memory storage and enables memory expansion independent of CPU cores. CXL will help sustain a higher rate of DRAM bit growth, but don’t expect CXL to cause an acceleration in DRAM bit growth. Overall, it’s a net positive for DRAM growth.

What does Micron’s commitment to CXL technology bring to customers and suppliers?

Micron’s commitment to CXL technology enables customers and suppliers to drive an ecosystem of innovative memory solutions. Learn more about how Micron is enabling next-generation data center innovation across our platforms on our data center solution page.

How will CXL architecture change the data center?

CXL is a cost-effective, flexible, and scalable architectural solution that will shape the data center of the future. It will change how servers and fabric switches are deployed in the data center’s traditional rack-and-stack architecture.

Purpose-built servers with dedicated, fixed resources consisting of CPU, memory, network, and storage components will give way to these more flexible and scalable architectures. Servers in the rack – once interconnected to fixed resources for network, storage, and compute – will be composed dynamically to meet the needs of modern and emerging workloads such as AI and deep learning. Eventually, the data center will migrate to full disaggregation of all server elements, including compute, memory, network, and storage.
