Diverse Architectures for Unmatched Innovation

Unprecedented choice in architecture to solve for any compute need.

Leadership Across the Compute Spectrum

The range of computing applications today is incredibly varied, and it is growing more so with the proliferation of data, edge computing, and artificial intelligence. Different workloads require different types of compute.

Intel is uniquely positioned to deliver a diverse mix of scalar, vector, matrix, and spatial architectures deployed in CPU, GPU, accelerator, and FPGA sockets. This gives our customers the ability to use the most appropriate type of compute where it's needed. Combined with scalable interconnect and a single software abstraction, Intel's multiple architectures deliver leadership across the compute spectrum to power the data-centric world.

  • Scalar architecture typically refers to the type of workloads that run best on a CPU, where one stream of instructions operates at a rate typically driven by CPU clock cycles. From system boot and productivity applications to advanced workloads like cryptography and AI, scalar-based CPUs work across a wide range of topologies with consistent, predictable performance.
  • Vector architecture is optimal for workloads that can be decomposed into vectors of instructions or vectors of data elements. GPUs and VPUs deliver vector-based parallel processing to accelerate graphics rendering for gaming, rich media, analytics, and deep learning training and inference. By scaling vector architectures across the client, the data center, and the edge, we can take parallel processing performance from gigaFLOPS to teraFLOPS, petaFLOPS, and exaFLOPS.
  • Matrix architecture derives its name from a common operation in AI workloads: matrix multiplication. While other architectures can execute matrix multiply code, ASICs have traditionally achieved the highest performance on the types of operations typically needed for AI inference and training, including matrix multiplication.
  • Spatial architecture is usually associated with an FPGA. Here, data flows through the chip, and the computing operation performed on a data element depends on its physical location in the device and on the specific data transformation algorithm that has been programmed into the FPGA.
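The difference between the scalar and vector models above can be sketched in plain Python. This is an illustrative analogy, not Intel-specific code: the scalar version handles one element per step, while the vector version processes a fixed-width "lane" of elements per step, the way a SIMD unit would (the `lane_width` parameter is a hypothetical stand-in for a hardware vector width).

```python
def scale_scalar(data, factor):
    """Scalar style: one element per 'instruction', one stream at a time."""
    out = []
    for x in data:                  # one element per clock-driven step
        out.append(x * factor)
    return out

def scale_vector(data, factor, lane_width=4):
    """Vector style: operate on lane_width elements per step, as a SIMD unit would."""
    out = []
    for i in range(0, len(data), lane_width):
        lane = data[i:i + lane_width]            # load a vector of elements
        out.extend(x * factor for x in lane)     # one vector operation on all lanes
    return out

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
assert scale_scalar(data, 2.0) == scale_vector(data, 2.0)
```

Both produce identical results; the vector form simply exposes the data-level parallelism that GPU and VPU hardware exploits.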

Scalar Focused: Versatile, General Purpose

From system boot to productivity applications to advanced workloads like cryptography and AI, most computing needs can be covered by scalar-based central processing units, or CPUs. CPUs work across a wide range of topologies with consistent, predictable performance.

Intel delivers two world-class CPU microarchitectures: the Efficient-core microarchitecture and the Performance-core microarchitecture. These microarchitectures are at the center of Intel's various CPU lines, from low-TDP mobile devices to powerful Xeon®-based data center processors. Our scalable range of CPUs gives customers the choice to balance performance, power efficiency, and cost.

Vector Focused: Highly Parallel Processing

Graphics processing units, or GPUs, deliver vector-based parallel processing to accelerate workloads such as real-time graphics rendering for gaming. Because they excel at parallel computing, GPUs are also a good option to accelerate deep learning and other compute intensive workloads.

Intel's integrated GPUs bring excellent visuals to millions of PCs. With the Xe architecture, we've expanded our GPU IP portfolio to scale to discrete client and data center applications, providing increased functionality in fast-growing areas including rich media, graphics, and analytics.

Our current GPU IP portfolio will take parallel processing performance from teraFLOPS to petaFLOPS to exaFLOPS. It includes three microarchitectures:

  • Xe LP for efficient graphics
  • Xe HPG for high performance graphics
  • Xe HPC for high performance computing

Xe HPG and Xe HPC microarchitectures are built with a new compute block that we call Xe-core. Xe-cores integrate a set of vector engines and can be optimized for different workloads and market segments, notably with the addition of Xe Matrix Extensions (XMX) to accelerate AI training and inferencing.

Matrix Focused: Accelerators and New CPU Instructions

From the data center to edge devices, AI continues to permeate all aspects of the compute spectrum. To that end, we've developed purpose-built accelerators and added microarchitectural enhancements to our CPUs with new instructions to accelerate AI workloads.

An application-specific integrated circuit (ASIC) is a type of processor that is built from the ground up for a precise usage. In most cases, an ASIC will deliver best-in-class performance for the matrix compute workloads it was designed to support.
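The core operation these accelerators target is ordinary matrix multiplication. A minimal reference version in Python (purely illustrative, not how any accelerator is programmed) shows the dense multiply-accumulate pattern that dedicated matrix hardware parallelizes:

```python
def matmul(a, b):
    """Naive matrix multiply: the multiply-accumulate pattern matrix hardware optimizes."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must match"
    c = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):            # each output element is a dot product
                c[i][j] += a[i][k] * b[k][j]
    return c

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
assert matmul(a, b) == [[19, 22], [43, 50]]
```

The triple loop makes the arithmetic intensity obvious: every output element is an independent dot product, which is why systolic arrays and other purpose-built matrix engines can compute many of them concurrently.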

Intel is extending platforms with purpose-built ASICs that offer dramatic leaps in performance for matrix applications. These include Habana AI processors and the Ponte Vecchio high performance computing GPU with new Xe Matrix Extensions (XMX) technology. Each XMX engine is built with deep systolic arrays, enabling Ponte Vecchio to pack significant vector and matrix capability into a single device.

In addition, Intel® Deep Learning Boost (Intel® DL Boost), available on 3rd Gen Intel® Xeon® Scalable processors and 10th Gen Intel® Core™ processors, adds Vector Neural Network Instructions (VNNI), architectural extensions that accelerate AI inference. To dramatically increase the instructions per clock (IPC) for AI applications, we have introduced a new technology called Intel® AMX (Advanced Matrix Extensions). This technology will first be available as part of our next-generation Sapphire Rapids architecture and significantly accelerates matrix-type operations.
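The pattern VNNI-style instructions fuse is an INT8 dot product accumulated into a wider integer. A rough functional sketch in Python (an assumption-laden analogy, not the actual instruction semantics) shows why the fusion matters: several multiplies and adds collapse into one operation per step.

```python
def int8_dot(activations, weights):
    """Functional sketch of an INT8 dot product with wide accumulation,
    the multiply-accumulate pattern that VNNI-style instructions fuse."""
    assert len(activations) == len(weights)
    acc = 0  # a 32-bit accumulator in hardware; Python ints simply don't overflow
    for a, w in zip(activations, weights):
        assert -128 <= a <= 127 and -128 <= w <= 127, "operands must fit in INT8"
        acc += a * w        # multiply-accumulate, one fused step in hardware
    return acc

assert int8_dot([1, 2, 3], [4, 5, 6]) == 32
```

Accumulating into a wider type is the key detail: products of INT8 values easily exceed the INT8 range, so the hardware keeps the running sum in 32 bits.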

Learn more ›

Intel® Xe Matrix Extensions (Intel® XMX)

Deep systolic arrays

Intel® Advanced Matrix Extensions (Intel® AMX)

Tile Matrix Multiplication Accelerator

Spatial Focused: Reprogrammable FPGAs

Field programmable gate arrays, or FPGAs, are integrated circuits that can physically manipulate how their logic gates open and close. The circuitry inside an FPGA chip is not hard etched—it can be reprogrammed as needed.
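The spatial model can be pictured as a dataflow pipeline: each stage stands in for a block of logic placed at a fixed physical location on the device, and data streams through the stages in order. A toy Python analogy (the stage names and operations are hypothetical, chosen only to illustrate the flow):

```python
# Toy analogy of spatial/dataflow computation: each 'stage' stands in for a
# block of FPGA logic at a fixed location; data flows through them in order.

def threshold(x):       # stage 1: clamp negative samples to zero
    return max(x, 0)

def scale(x):           # stage 2: apply a fixed gain
    return x * 3

def offset(x):          # stage 3: add a bias
    return x + 1

PIPELINE = [threshold, scale, offset]   # the 'programmed' spatial layout

def run(stream):
    out = []
    for sample in stream:
        for stage in PIPELINE:          # each element visits every stage in turn
            sample = stage(sample)
        out.append(sample)
    return out

assert run([-2, 0, 5]) == [1, 1, 16]
```

Reprogramming the FPGA corresponds to replacing the stages in `PIPELINE`: the same fabric implements a different transformation without changing how data moves through it.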

Intel® FPGAs provide completely customizable hardware acceleration while retaining the flexibility to evolve with rapidly changing computing needs. As blank, modifiable canvases, their purpose and power can be easily adapted again and again.

Intel® Agilex™ FPGAs and SoCs

The Intel® Agilex™ FPGA family leverages heterogeneous 3D system-in-package (SiP) technology to integrate Intel's first FPGA fabric built on 10 nm process technology.

Learn more

Intel® Stratix® 10 NX FPGA

Intel® Stratix® 10 NX FPGA is an AI-optimized FPGA for high-bandwidth, low-latency artificial intelligence (AI) acceleration applications. It delivers an accelerated AI compute solution through AI-optimized compute blocks with up to 15X more INT8¹ throughput than the standard Intel® Stratix® 10 FPGA DSP block.

Learn more

Next-Generation Architectures

At Intel, we're planning for the architectures of the future with research and development in next-generation computing. Among these are quantum and neuromorphic architectures.

Unified Programming with oneAPI

Our oneAPI initiative will define programming for a multiarchitecture world. It will deliver a unified and open programming experience to developers on the architecture of their choice, eliminating the complexity of separate code bases, programming languages, tools, and workflows.

Explore software

Six Pillars of Technology Innovation for the Next Era of Computing

Intel is innovating across six pillars of technology development to unleash the power of data for the industry and our customers.

Notices & Disclaimers



Based on Intel internal estimates.


All product plans and roadmaps are subject to change without notice. Statements on this page that refer to future plans or expectations are forward-looking statements. These statements are based on current expectations and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in such statements. For more information on the factors that could cause actual results to differ materially, see our most recent earnings release and SEC filings at www.intc.com.


Intel uses code names to identify products, technologies, or services that are in development and not publicly available. These are not "commercial" names and are not intended to function as trademarks.