Opinion: Custom IC design needs variation analysis
After more than 35 years in the semiconductor business, I still find custom IC design an extremely interesting challenge.
I have always felt the job of a designer is to optimize designs within the capabilities of the process. The design exercise is still one of trading off the marketing requirements of function, performance, cost and power, and this is especially true in custom design. There is never a perfect answer, only a most-right one, or, said differently, a least-wrong one. This is fundamentally due to the inherent reliance on statistical process control in discrete manufacturing industries such as semiconductors, chemicals, genetics and steel.
Designers who depend on discrete manufacturing processes need accurate process models (SPICE models, in the case of semiconductors). As variance increases, more models, or corners, are required to represent the process capability.
As custom designers shift to today's 45-, 32- and 28-nm technology nodes, they encounter exponentially worse variation, and measured silicon performance and parametric yield come in significantly worse than simulation predicted. These challenges prevent the promises of smaller nodes (improved power, performance and area) from ever being realized. Designers are forced either to sacrifice performance through unnecessary over-margining and guard-banding, or to accept a yield hit by under-designing.
With everyone using common commercial foundry processes, it is increasingly difficult to differentiate a design competitively. Performance hits are unacceptable because semiconductor companies are all trying to produce competitive chips at the same handful of foundries, while yield hits are unacceptable for high-volume applications, since costs quickly skyrocket.
Custom IC design constitutes a large percentage of a typical SoC. Analog/mixed-signal, RF, embedded memory and high-speed digital I/O designers all use the traditional custom EDA tools provided by Cadence, Synopsys and Mentor. How can they fully reap the benefits of technology migration, without taking a yield hit, given the design time and resources available?
Engineers must consider these types of variation during design: global random, local random (mismatch), environmental, proximity and parasitic variation. Traditional approaches using EDA tools to analyze for variation include PVT (process, voltage, temperature) corner analysis and Monte Carlo analysis. However, these approaches are breaking down for modern technology nodes.
PVT analysis and Monte Carlo analysis are poorly suited to rapid design iterations at modern technology nodes. In the past, there were only 5-20 PVT corners to analyze; to properly bound variation in modern technologies, custom designers must investigate more variables and more values per variable, leading to thousands of corners. In the iterative design loop, you can guess which corners matter (sacrificing accuracy) or run all of them (sacrificing speed), but neither is acceptable.
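To see how quickly the corner count explodes, here is a minimal Python sketch. The corner names and values below are illustrative assumptions for this example, not the corner set of any particular foundry PDK.

```python
from itertools import product

# Hypothetical corner sets, for illustration only; a real PDK defines its
# own process corners, and the design team picks the V/T points.
process = ["tt", "ff", "ss", "fs", "sf"]   # global process corners
voltage = [0.9, 1.0, 1.1]                  # supply points (V)
temperature = [-40, 25, 125]               # junction temperatures (C)

# At older nodes this cross product was the whole story: 5 * 3 * 3 = 45 runs.
legacy_corners = list(product(process, voltage, temperature))
print(len(legacy_corners))  # 45

# Modern nodes add more variables (e.g. back-bias, interconnect RC corners)
# and more values per variable, and the count grows multiplicatively.
back_bias = [-0.3, 0.0, 0.3]
rc_corner = ["cworst", "ctyp", "cbest", "rcworst", "rcbest"]
modern_corners = list(product(process, voltage, temperature, back_bias, rc_corner))
print(len(modern_corners))  # 45 * 3 * 5 = 675; one more three-value
                            # variable pushes it past 2,000 SPICE runs
```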
Monte Carlo analysis fares even worse in the design loop: with a small number of samples it is highly inaccurate, and with a large number it takes far too long. Making matters worse, Monte Carlo often requires too many simulations even for final verification. Given all this overhead, designers often fall back on educated guesses to save time. That is simply too risky given the cost of an SoC design and additional silicon spins.
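A toy sketch of why sample count matters: `simulate_circuit` below is a stand-in for one full SPICE Monte Carlo run, with an assumed true yield of 97% chosen purely for illustration. The binomial confidence interval shows how uncertain the yield estimate is at small sample counts, and how many simulations it takes to tighten it.

```python
import math
import random

def simulate_circuit(seed: int) -> bool:
    """Stand-in for one Monte Carlo SPICE run: draw random process and
    mismatch parameters, simulate, and check the spec. Here we simply
    assume a hypothetical design with a true parametric yield of 97%."""
    rng = random.Random(seed)
    return rng.random() < 0.97

def estimate_yield(n_samples: int) -> tuple[float, float]:
    """Return (yield estimate, 95% confidence half-width) from n samples."""
    passes = sum(simulate_circuit(seed) for seed in range(n_samples))
    y = passes / n_samples
    # Normal approximation to the binomial confidence interval.
    half_width = 1.96 * math.sqrt(max(y * (1.0 - y), 1e-12) / n_samples)
    return y, half_width

for n in (50, 500, 5000):
    y, hw = estimate_yield(n)
    print(f"{n:5d} samples: yield = {y:.3f} +/- {hw:.3f}")
# With ~50 samples the interval is far too wide to verify a high-yield
# target; tightening it to a usable width costs thousands of samples,
# and in a real flow every sample is a full SPICE simulation.
```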