Today's hi-tech electronics industry requires bigger, faster, more complex and “reasonably” priced chips to be put on the market faster and more predictably to meet time-to-market requirements. Because so many areas in our hi-tech society depend on VLSI chips, the Electronic Design Automation (EDA) industry, although a financially relatively small industry, is on the critical path for a great number of other industries. However, many of the latest indicators in this small industry point to difficulties in supplying what these other industries require fast enough. The effort required to design some of the latest chips is growing out of proportion compared to the design effort required in the past and, to make matters worse, the market windows are getting shorter.
Short market windows mean short periods for recouping investments made in developing chips. Accordingly, new chips have to be designed as rapidly and economically as possible or new ways have to be found to extend the useful life of these chips or at least parts of them. Both of these approaches are being vigorously pursued by the EDA industry. Design efficiency is addressed by streamlining, standardization, and higher and higher levels of abstraction in the specification and design processes. Useful lifetime extensions are obviously addressed with IP reuse through retargeting.
The time-to-market requirement is also addressed with IP reuse, largely by eliminating redesign steps. This means that IP reuse is a big step forward in helping to meet time-to-market requirements and other related challenges.
To supply the increased performance required by the market, the dimensions of critical minimum physical device geometries on VLSI designs must continue to shrink, while the complexity of these chips has to continue to grow; and it is growing dramatically. Minimum sizes of critical layout dimensions have traditionally been the dominant factor in determining maximum chip performance. In addition, the total level of functionality that can be placed on a chip critically depends on how small the layout features and how large the maximum chip sizes can be made. As a result of these market forces, the level of functionality of a single chip is reaching into the millions of transistors today and continues to increase rapidly. With such high transistor counts, innovative, more productive techniques for placing large numbers of devices on silicon are constantly needed. IP reuse helps to place large numbers of devices on a chip more rapidly by simply reusing and remapping previously designed blocks.
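The link between shrinking feature sizes, density, and speed can be made concrete with a rough back-of-the-envelope calculation. The sketch below applies classical constant-field scaling rules (an idealized model, not a claim from this text) to the 0.5-micron to 0.18-micron shrink mentioned later in this chapter; real processes deviate from these ideal ratios.

```python
# Rough sketch of classical constant-field scaling (idealized model):
# shrinking a layout linearly by factor s (< 1) packs devices ~1/s^2
# denser and cuts gate delay roughly in proportion to s.
def scaling_estimates(old_dim_um: float, new_dim_um: float) -> dict:
    s = new_dim_um / old_dim_um          # linear shrink factor
    return {
        "shrink_factor": s,
        "density_gain": 1 / s**2,        # devices per unit area scale as 1/s^2
        "delay_factor": s,               # gate delay scales roughly with s
        "speed_gain": 1 / s,             # so clock frequency rises roughly 1/s
    }

# Example: the 0.5 um -> 0.18 um transition discussed in this chapter.
est = scaling_estimates(0.5, 0.18)
print(f"~{est['density_gain']:.1f}x more devices per area, "
      f"~{est['speed_gain']:.1f}x faster")
```

Under this idealized model, the 0.5-to-0.18-micron shrink alone yields roughly a 7.7x density gain, which is one reason transistor counts are climbing into the millions even without design innovation.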
Placing more functionality on fewer chips or even a single chip offers many desirable features, such as increased miniaturization and increased packing density. Maximum packing density and miniaturization are very important for many applications. Minimizing the number of chips for a given function not only increases the packing density, it also reduces the number of times the electronic signal has to leave and get back to a chip.
If signals propagate within a single chip and do not have to travel between chips, the size of the system shrinks substantially, and system reliability and speed increase. The improvements are dramatic when the number of Printed Circuit Boards (PCBs) is reduced and still significant when the number of Multi-Chip Modules (MCMs) is reduced, although MCMs are a much better high-speed solution than PCBs. Moreover, beyond the advantages in packing density, reliability and speed, avoiding PCBs and MCMs also avoids their considerable cost, and timing analysis across all those interconnects is less precise than what is possible within a single chip.
Once a complex chip is designed, verification of its functionality and performance are other serious challenges. Verification or validation can be performed with an increasing number of methodologies, each claiming to offer the ultimate convenience, speed and accuracy: timing analysis, simulation on various levels of abstraction from the functional to the physical, cycle-based simulation, formal verification, and emulation; and, fortunately, new methods continue to appear on the horizon.
This increasing number of available methodologies indicates the difficulty of the tasks and the need to minimize the risk of in-field failures in a world that increasingly relies on hi-tech gadgets. Yet the tasks of verification and testing are generally so difficult that one can often only obtain high probabilities of failure-free parts, especially in testing. This is yet another good argument in favor of reusing previously field-tested parts, although there is no guarantee that even a field-tested part is completely fault-free. However, at least there is a track record that provides an additional sense of security.
Of course, the verification challenges are growing every day. While it is already very difficult to properly verify and test today's chips with around one million transistors, the multimillion-transistor chips will be even more difficult to verify, validate and test. Until now, an educated selection of just one or two of the available methods was often satisfactory, but the trend is toward using several verification tools rather than one: most complex designs probably require a combination of all the different verification and testing methods to achieve a sufficient level of confidence. Yet combining all the different methods, i.e. “throwing everything you have got at the chip,” makes verification and testing very costly and time-consuming, and it requires a wide spectrum of skills.
Considering all of this, it should not be surprising that any new ways to keep the efforts of simulation and verification for these large chips under control are welcome. Since the topology and netlist of a chip remain unchanged through migration with Hard IP reuse, simulation and test vector suites can generally be reused, and timing changes caused by a change in technology tend to stay within manageable limits.
Hierarchical approaches are often used to keep complexities within manageable limits. This is particularly true for one of the most difficult and time-consuming steps in the design process, i.e. verification. Performing verification hierarchically calls for Hard IP migration that itself preserves the full design hierarchy. Fortunately, fully hierarchical migration is, in fact, becoming available right now.
We examine what hierarchy maintenance means for migration in later chapters. In fact, we discuss and examine questions of limited hierarchy maintenance in Chapter 2 and complete hierarchy maintenance in Chapter 5.
Although the EDA and chip production industries are very young, there is now a substantial arsenal of very recent, excellent designs ranging from microprocessors to digital signal processors to controller chips and more. These designs are not outdated from the design concept point of view. Most of them are strictly state of the art and they are known to work. If anything is outdated about these chips, it is that they were fabricated by “yesterday's” processing technology. Processing technology has moved at a fierce pace, from minimum critical dimensions of 0.5 microns only recently to 0.18 microns, and is rapidly moving to even smaller dimensions.
Processing technology is moving so fast that design innovation cannot keep pace with it. In fact, it is estimated that only 25 percent of the advances in chip performance are due to design innovation, while an amazing 75 percent is due to advances in processing capabilities. This means that the large “mountain” of previously designed chips has only the minor flaw of having been laid out according to obsolete processing layout design rules.
Another way to keep pace with, and profit from, processing technology advances and the rather fluid set of process and layout parameters is the ability to implement changes for rapid retargeting. Considering the extremely competitive environment and the enormous investments, these processing lines are constantly tweaked to get the best possible performance and yield. Consequently, some “last minute” changes in design rules are always possible and indeed highly probable. Fortunately, for retargeting, these minor tweaks can be implemented “on the fly” by minor changes in the compactor technology file with a quick rerun to fully benefit from the latest processing changes. Even in the case of manually trimmed cells, such minor adjustments can often still squeeze a few more megahertz out of a chip or increase the yield.
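To make the idea of an “on the fly” rule tweak concrete, the fragment below sketches what such a change in a compactor technology file might look like. The syntax, rule names, and values are all hypothetical illustrations (real file formats vary from tool to tool); the point is that a design-rule change is a one-line edit followed by a compactor rerun, not a redesign.

```
; Hypothetical compactor technology file fragment (illustrative syntax only).
; A "last minute" process tweak: metal1 spacing relaxed per fab notice.
rule minWidth      metal1          0.18
rule minSpacing    metal1 metal1   0.22   ; was 0.24 -- updated design rule
rule minEnclosure  via1   metal1   0.05
```

After editing the rule, rerunning the compactor retargets the layout to the updated process without touching the netlist or the floorplan.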
Faster chips can be fabricated with minimum effort by retargeting to newer processes, using the exact same design from netlist to floorplan to routing.