EDA enters a new era of AI design: Synopsys, Cadence, Google, and NVIDIA start using AI for complex chip design

Today’s advanced chip designs can require hundreds of design engineers, equipped with state-of-the-art EDA tools, and take 2-4 years to complete. EDA developers and advanced AI chip design companies have begun to ask whether AI-assisted chip design can speed up the design process and reduce the investment of resources such as labor and time. The answer is yes: if hardware development becomes more agile and autonomous, the expensive and lengthy chip design process could be shortened from 2-3 years to 2-3 months. AI plays a vital role in the new generation of EDA design tools. EDA vendors such as Synopsys and Cadence, as well as AI chip design companies such as Google and Nvidia, have begun to use AI for complex chip designs, and they have achieved remarkable results.

AI in chip design

Last year, Synopsys released DSO.ai (Design Space Optimization AI), software that allows IC design engineers to more autonomously determine the best way to arrange the layout on a chip to reduce area and power consumption, marking the start of AI in the EDA design process. Using reinforcement learning, DSO.ai can evaluate billions of alternatives against design goals and quickly produce designs that significantly outperform those of good engineers. The potential of the problems DSO.ai can solve is enormous: the number of possible ways to place the various components on a chip is roughly 10^90,000. In contrast, the game of Go that Google’s AI mastered in 2016 has only about 10^360 possible positions. Since AI using reinforcement learning can play Go better than world champions, AI should be able to design better chips than talented engineers, given enough computing time. DSO.ai’s trial results were impressive: an 18% increase in operating frequency and a 21% reduction in power consumption, while cutting engineering time from six months to one month.
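The core idea of searching a vast placement space against design goals can be sketched with a toy example. The block names, nets, and grid below are invented for illustration (nothing here comes from DSO.ai), and the search is a simple best-of-N random sampler standing in for the far more sophisticated reinforcement-learning policies production tools use:

```python
import random

# Toy "floorplanning" problem: place 4 blocks on a 4x4 grid and
# minimize total wirelength between connected blocks. All names and
# parameters are illustrative, not taken from any real tool.
GRID = 4
BLOCKS = ["cpu", "cache", "io", "dsp"]
NETS = [("cpu", "cache"), ("cpu", "io"), ("cache", "dsp")]

def wirelength(placement):
    # Manhattan distance summed over all nets: lower is better.
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1])
               for a, b in NETS)

def random_placement(rng):
    # Assign each block a distinct grid cell.
    cells = rng.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                       len(BLOCKS))
    return dict(zip(BLOCKS, cells))

def optimize(steps=2000, seed=0):
    # Best-of-N sampling: keep the candidate with the lowest wirelength.
    # A real RL engine would instead learn a placement policy from the
    # reward signal rather than sampling blindly.
    rng = random.Random(seed)
    best = random_placement(rng)
    for _ in range(steps):
        cand = random_placement(rng)
        if wirelength(cand) < wirelength(best):
            best = cand
    return best

best = optimize()
print(wirelength(best))
```

Even this naive sampler improves on a single random layout; the point of the article’s 10^90,000 figure is that the real space is far too large for any such enumeration, which is why a learned policy is needed.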

Similar results have been published recently by Google and Nvidia. Another EDA developer, Cadence, also released the Cerebrus smart chip explorer tool, an AI-optimized design platform similar to Synopsys DSO.ai. Before exploring these latest AI design trends, let’s take a look at how the semiconductor design space is changing.


A good starting point is the Gajski-Kuhn diagram, which depicts all the steps a chip design takes along three axes:

The first axis is the behavioral level, where the architect defines what the chip will do, including transfer functions, logic, RTL, algorithms, and systems.

The second axis is the structural level, where the architect determines how the chip is organized, including transistors, gate arrays and flip-flops, ALUs and registers, subsystems and buses, and CPUs and memories.

The third axis is the geometric level, where engineers define how the chip is laid out, including polygons, cell and module planning, macro and floor planning, clusters, chips, and physical partitioning.

All chip design teams work through specifications and steps on these three axes, with each step moving clockwise toward the central goal: delivery to the foundry for tape-out. All applications of AI to date have been in the geometric space, i.e., the physical-design axis, to address the slowdown of Moore’s Law.
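The three axes and their refinement levels described above can be captured in a small data structure. This is purely an illustrative encoding of the Gajski-Kuhn Y-chart as listed in this article, ordered from most abstract (outer ring) to most concrete (center); it is not drawn from any EDA tool:

```python
# Gajski-Kuhn Y-chart: each axis lists refinement levels, ordered here
# from most abstract (outer ring) to most concrete (center).
GAJSKI_KUHN = {
    "behavioral": ["systems", "algorithms", "RTL",
                   "logic", "transfer functions"],
    "structural": ["CPUs and memories", "subsystems and buses",
                   "ALUs and registers", "gate arrays and flip-flops",
                   "transistors"],
    "geometric":  ["physical partitions", "clusters",
                   "macro and floor plans", "cell and module plans",
                   "polygons"],
}

def levels(axis):
    """Return the refinement levels for one axis, abstract to concrete."""
    return GAJSKI_KUHN[axis]
```

Design flows then amount to stepping each axis inward toward tape-out, which is the clockwise spiral the article describes.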

Synopsys’ DSO.ai is the first tool to apply AI to the physical design process, generating floorplans that consume less power, run at higher frequencies, and occupy less area than the best plans experienced designers can produce. The far-reaching impact of AI on productivity is noteworthy: DSO.ai users can achieve in a few days what used to take a team of experts weeks to complete.

Both the Google and Nvidia research teams have published papers on using reinforcement learning for physical design. Google uses AI for floorplanning of its next-generation TPU chip designs and is investigating the role of AI in architectural optimization. Nvidia is also focusing on the low-hanging fruit of floorplanning, harnessing all of its in-house GPU computing power to use AI to design better AI chips.

Cerebrus Smart Chip Explorer

Cadence recently launched a “smart chip explorer” called Cerebrus that uses reinforcement learning to optimize the physical design flow. Cerebrus is functionally similar to Synopsys’ DSO.ai, focusing on physical design. While Google and Nvidia may have the resources and knowledge to develop their own AI for chip design optimization, they may just use it for themselves, and most chip design firms and projects will still opt for tools from EDA vendors. The release of Cadence Cerebrus appears to further validate reinforcement learning as the next big shift in chip design methodology. We believe that AI will gradually permeate every part of the IC design process as designers become more accustomed to letting machines determine layout, and as competitive pressure increases.


Improving productivity has always been the main theme in the history of chip design. In the early days, each transistor was created individually and wired up manually in a fully custom layout editor, a time-consuming process. To improve efficiency, digital chip designs moved to standard cells and schematic netlists, which let engineers implement digital logic faster, though creating a schematic netlist by hand still took a great deal of effort.

When desktop Unix workstations arrived and every engineer gained substantial computing power, RTL synthesis became popular: chip designers could capture digital logic functions in high-level languages such as VHDL and Verilog and easily synthesize netlists containing millions of gates. This giant leap in productivity raised another question, however: how do you lay out millions of standard cells? So, following RTL synthesis, automatic place-and-route systems were developed. With large netlists now implementable quickly, EDA once again significantly increased productivity.
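The gate-level netlist that synthesis emits is essentially a graph mapping each output net to a gate and its input nets. The following toy evaluator (purely illustrative; real flows work on Verilog/VHDL at million-gate scale with commercial tools) shows the structure for a half adder:

```python
# Toy gate-level netlist: each entry maps an output net to a gate type
# and its input nets -- the kind of structure RTL synthesis emits at
# massive scale. Names and structure here are illustrative only.
NETLIST = {
    "s": ("xor", ["a", "b"]),  # half-adder sum
    "c": ("and", ["a", "b"]),  # half-adder carry
}

GATES = {
    "and": lambda x, y: x & y,
    "xor": lambda x, y: x ^ y,
}

def evaluate(netlist, inputs):
    """Resolve every net's value by recursively evaluating its driver gate."""
    values = dict(inputs)

    def net(name):
        if name not in values:
            gate, ins = netlist[name]
            values[name] = GATES[gate](*(net(i) for i in ins))
        return values[name]

    for out in netlist:
        net(out)
    return values

# Half adder with a=1, b=1: carry is 1, sum is 0.
print(evaluate(NETLIST, {"a": 1, "b": 1}))  # {'a': 1, 'b': 1, 's': 0, 'c': 1}
```

Placing and routing millions of such gate instances by hand is hopeless, which is exactly why automatic place-and-route followed synthesis.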

Cadence’s Cerebrus is built on a large-scale computing and machine-learning architecture and leverages the complete Cadence digital full-flow solution. Cerebrus delivers better PPA (performance, power, and area) results with a unique reinforcement-learning engine. By using fully automated, machine-learning-driven optimization of the entire RTL-to-GDS flow, Cerebrus can deliver these better PPA results faster than a manually tuned flow, greatly increasing the productivity of engineering teams.

Cerebrus uses scalable distributed computing technology resources, whether on-premises or in the cloud, to accelerate the flow of complex SoC designs.


Both AI chip designers (NVIDIA and Google) and EDA tool developers (Synopsys and Cadence) are experimenting with AI-led chip design optimization to improve performance, cost, and power consumption. There is no doubt that NVIDIA and Google are focused on designing better GPUs and Cloud TPUs to strengthen their respective competitive advantages; for them, AI optimization is a tool for improving their own products and services.

The global EDA leaders have opened a new era of AI design. Synopsys’ DSO.ai and Cadence’s Cerebrus platform will take the lead in applying AI’s advantages over human engineers to physical design, accelerating the design process for today’s most complex chips.

At the upcoming Cadence Live China online user conference on October 12, Cadence CEO Chen Liwu, Cadence President Dr. Anirudh Devgan, and VeriSilicon Chairman, President, and CEO Dr. Dai Weimin will deliver keynote speeches on complex chip design and the latest EDA technology trends. Interested readers are welcome to register to attend.