Design challenges in physical design
  • The total number of design rules has doubled between the 90nm and 28nm nodes, and rule complexity is outpacing that of both LEF rules and the built-in DRC checkers provided with custom design tools. Growth in metal rules is partly due to an increasing number of metal layers, but the front end, where AMS/custom designers spend most of their time, has seen growth at the same rate without any new layers being involved (Figure 3, p. 42). Not only are there more rules; these rules are becoming ever more complex. For example, where there used to be a single type of via enclosure, there can now be five or more, depending on the local environment. At the front end, transistor rules now have to deal with complex local interactions involving multiple layers, some of which are not even drawn, making it difficult for the layout engineer to understand why a transistor is 'bad'.

The DRC checkers built into AMS/custom design tools are designed to assist with simple design rule specifications such as minimum width or space. Historically, custom designers simply memorized the rules not included in their tools, largely because they could: at 90nm, it was a short list. At 28nm, the list is far longer than designers can cope with, if they can even understand the rules, given the complexity they now encapsulate. Today, rules address such challenging requirements as multi-dimensional width/spacing interactions; 'keep-away' zones that aim to minimize the lithographic impact on transistor performance or reliability; and pitch checks that make printing easier and more consistent. Context can be as important as the actual configuration. While built-in DRC checkers, constraints, and LEF rules are continuously being expanded to increase coverage, keeping up with the leading edge of technology in its entirety is a full-time job.
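The multi-dimensional width/spacing interactions mentioned above can be illustrated with a small sketch: a spacing requirement that depends on the width of the wider of two neighboring shapes. The tier values below are hypothetical, not from any real foundry deck.

```python
# Sketch of a width-dependent spacing check, the kind of multi-dimensional
# rule described above. Tier values are hypothetical, illustrative only.

# (min_width_of_wider_shape, required_spacing) tiers, in microns
SPACING_TIERS = [(0.30, 0.20), (0.15, 0.12), (0.00, 0.05)]

def required_spacing(width_a, width_b):
    """Spacing requirement depends on the wider of the two adjacent shapes."""
    wider = max(width_a, width_b)
    for min_width, spacing in SPACING_TIERS:
        if wider >= min_width:
            return spacing
    return SPACING_TIERS[-1][1]

def check_pair(width_a, width_b, actual_spacing):
    """Return (passes, required) for one pair of adjacent shapes."""
    need = required_spacing(width_a, width_b)
    return actual_spacing >= need, need

# A narrow wire next to a wide one: the wide neighbor raises the requirement,
# so a spacing that would be legal between two narrow wires now fails.
ok, need = check_pair(0.05, 0.20, 0.10)
```

This is why a shape that is DRC-clean in one context fails in another: the requirement is a function of the environment, not of the shape alone.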
AMS/custom designers have historically implemented layouts using a combination of manual and automated techniques, then run large sections of their layouts through full-chip verification in batch mode. They fixed the resulting DRC errors, applied DFM optimizations, then moved on to the next section of the layout and repeated the process. This worked fine when designers could understand the issues, correct them, and reach closure within a few iterations. However, as DRC/DFM rules have become increasingly complex, the number of DRC violations and DFM recommendations has also increased, requiring more time to be spent debugging and correcting the design. Debugging itself is getting trickier; in some cases, the fixes applied by designers introduce new DRC errors that are not identified until a subsequent batch verification run. With the number of iterations rising, tapeout schedules have started to slip.

Full-featured DRC/DFM engines employ sophisticated techniques (Figure 4) to help engineers evaluate and optimize designs. With access to a fully featured DRC engine, designers can take advantage of advanced capabilities such as equation-based DRC, which provides precise information about the error condition and any necessary corrections, and pattern matching, which identifies known problematic configurations. In addition, automated evaluation against a qualified deck of recommended rules enables designers to quickly prioritize changes that will improve areas particularly susceptible to manufacturing defects. Process simulation, in the form of model-based tools, has long been employed by foundries. Simulating processes such as lithography or planarity helps predict design failures that will result from specific process conditions. This technique proved so useful at 40nm that foundries now recommend (and in some cases require) the use of such models alongside DRC within the design flow.
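The pattern-matching capability described above can be sketched as a sliding-window search for known problematic configurations over a rasterized layout window. The "bad" pattern below is purely illustrative, not a real foundry pattern.

```python
# Minimal sketch of layout pattern matching: known problematic configurations
# stored as small binary grids (1 = metal present) and slid across a
# rasterized layout. Patterns here are illustrative only.

def matches_at(layout, pattern, r, c):
    """True if the pattern occurs with its top-left corner at (r, c)."""
    return all(layout[r + i][c + j] == pattern[i][j]
               for i in range(len(pattern))
               for j in range(len(pattern[0])))

def find_pattern(layout, pattern):
    """Return all (row, col) positions where the bad pattern occurs."""
    hits = []
    ph, pw = len(pattern), len(pattern[0])
    for r in range(len(layout) - ph + 1):
        for c in range(len(layout[0]) - pw + 1):
            if matches_at(layout, pattern, r, c):
                hits.append((r, c))
    return hits

# A hypothetical diagonal configuration that is known to print poorly
BAD = [[1, 0],
       [0, 1]]

layout = [[1, 0, 0],
          [0, 1, 0],
          [0, 0, 0]]
```

Production engines use far more efficient matching over real polygon data, but the principle is the same: flag every occurrence of a configuration the foundry has already seen fail.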
Design houses that do not implement model-based DFM can encounter performance issues or yield variation caused by design conditions that could have been corrected during layout. At 28nm, fill is no longer a simple check-box task. Smarter solutions are needed to manage issues arising from tighter spacing, including poly and metal density variation, multi-dimensional planarity interactions, and the impact of fill on timing. Advanced fill techniques can both prevent defects, by maintaining better planarity, and improve parametric yield, by introducing less parasitic capacitance.

Double patterning has been identified as an enabling technology for 20nm and below, meaning designers must be able to analyze their designs for both patternability (whether a single layout structure can safely be decomposed into two masks) and composability (whether combinations of configurations in a complete layout remain decomposable). This new requirement makes layout even more challenging.

The complexity of all of these technologies, and the tight correlation they must maintain with foundry data and processes, makes it unlikely that AMS/custom design tool vendors will implement the full range of tools and technologies available in full-featured DRC/DFM engines. Even if the vendors wanted to provide signoff verification in their internal DRC checking, it would not make much sense. AMS/custom design tools are intended to make layout engineers more productive, not to perform signoff verification. Providing signoff verification requires extensive and continuous interaction with foundries to ensure that the latest process requirements and interactions have been identified, categorized, and translated into a complete set of design rules and checks, so that design compliance means manufacturability. The end result is a significant gap in the custom design process that is impacting the quality and capability of AMS designs at 28nm and below.
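The patternability check for double patterning is, at its core, a two-coloring problem: shapes spaced closer than the single-exposure limit get a conflict edge and must land on different masks, so decomposition succeeds only if the conflict graph is bipartite. A minimal sketch (the conflict graphs are hypothetical):

```python
from collections import deque

# Sketch of double-patterning decomposition as graph two-coloring.
# conflicts maps each shape index to the shapes it is too close to.

def decompose_two_masks(num_shapes, conflicts):
    """Return a mask assignment {shape: 0 or 1}, or None if not decomposable."""
    color = {}
    for start in range(num_shapes):
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in conflicts.get(u, ()):
                if v not in color:
                    color[v] = 1 - color[u]  # conflicting shapes -> other mask
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd cycle: layout fix (or third mask) needed
    return color

# Three mutually conflicting shapes (an odd cycle) cannot be split onto
# two masks; a simple chain of conflicts can.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
chain = {0: [1], 1: [0, 2], 2: [1]}
```

The odd-cycle case is exactly why composability is a separate concern: each structure may decompose on its own, yet combining them can close a cycle that makes the whole layout un-colorable.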
So how are custom designers coping? In general, their responses fall into three categories:
Less optimization: designers stop as soon as they reach a DRC-clean design. However, they know they are leaving quality and performance enhancements on the table, which means they are also surrendering some competitiveness. They know this will hurt profit down the road; what they don't know is when or how.
Longer production schedules: some design teams opt to let milestones slip so that they can undertake additional optimizations. However, this strategy works only for a limited time, for both individual designers and design companies.
Staff augmentation: design houses are adding layout engineers. While this approach enables more work to be done without a change in methodology, it is a costly solution that does not scale well, even if companies pursue least-cost engineering resources.
None of these approaches solves the underlying problem: the inability of designers to access the full range of DRC and DFM methodologies concurrently with design layout implementation. Designers have their own bandwidth constraints too; they can deal with many simple rules or a few complex ones, but not many complex rules all at the same time.

What can we do? As discussed, DRC and production schedules are not the only challenges here. Foundries are expanding tapeout requirements to include such processes as pattern matching, process simulation, and recommended rule compliance. New manufacturing requirements such as double patterning are being added to the design team's list of responsibilities. As a result, converging on a tapeout-ready design while maintaining production schedules is getting harder and harder. At 28nm, designers are better off using the same signoff tools used by the foundries to ensure concurrence between design layout checks and silicon results.
However, without real-time access to the tools and techniques discussed above, custom designers will always be fighting a losing battle on both design quality and time-to-market. Using industry interfaces such as the OpenAccess Run Time Model, it is now possible to embed a signoff-quality DRC/DFM engine in custom design tools and run DRC and DFM analysis in real time. This provides immediate feedback on each shape as it is incorporated into the layout. With sufficient speed, such an environment can even provide visual cues that show designers where a shape can be placed, or how dimensions must be optimized, during drawing. By making signoff DRC part of layout creation, designers can work with confidence, knowing that they are checking configurations against foundry-qualified signoff rule decks. They can automatically map PDK layers to GDSII layers. Without the need to run multiple batch DRC/DFM verifications, and with more error information provided in the design environment, designers have more time to implement design optimizations while still meeting time-to-market schedules.
  • These effects can erode timing margins, introduce reliability problems, and lead to circuit failure. The problems become significant, even dominant, in high-performance designs. The main issues in VDSM high-performance physical design are current density and power distribution, synchronization, manufacturing variability, and high-frequency and coupling noise.
  • IR drop: the reference voltage VDD differs at different places in the chip, causing on-chip variation. IR drop has a negative impact on timing because the effective supply is reduced to Vdd - I*R, so IR drop in the power grid creates variation in delay across the chip. Chip performance varies by approximately 7-9% when Vdd changes by 10%. IR drop also affects clock skew in the clock network.
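A first-order sketch of that delay impact, assuming a generic alpha-power delay model (gate delay proportional to Vdd / (Vdd - Vth)^alpha). The threshold voltage, alpha, and grid values below are illustrative, but a 10% droop lands close to the 7-9% sensitivity quoted above.

```python
# First-order sketch of how IR drop degrades gate delay, using the
# alpha-power law delay model: delay ~ Vdd / (Vdd - Vth)**alpha.
# Vth, alpha, and the grid current/resistance are illustrative values.

def effective_vdd(vdd_nominal, current, grid_resistance):
    """Effective supply seen by a gate after IR drop: Vdd - I*R."""
    return vdd_nominal - current * grid_resistance

def relative_delay(vdd, vth=0.3, alpha=1.3):
    return vdd / (vdd - vth) ** alpha

def delay_penalty(vdd_nominal, vdd_drooped, vth=0.3, alpha=1.3):
    """Fractional slowdown of a gate operating at the drooped supply."""
    return (relative_delay(vdd_drooped, vth, alpha)
            / relative_delay(vdd_nominal, vth, alpha)) - 1.0

vdd = 1.0
vdd_ir = effective_vdd(vdd, current=0.5, grid_resistance=0.2)  # 0.9 V droop
penalty = delay_penalty(vdd, vdd_ir)  # roughly 10% slowdown for a 10% droop
```

Because the droop (and hence the penalty) differs across the die, gates on the same clock path can slow down by different amounts, which is how IR drop turns into clock skew.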
  • Crosstalk: undesirable electrical interaction between two or more physically adjacent nets due to capacitive coupling. Solutions include increasing the spacing between interconnects or shielding nets with a ground connection.

Electromigration is the gradual displacement of metal atoms in a conductor; its primary causes are current density and temperature. The phenomenon is particularly likely to affect the thin, tightly spaced interconnect lines of deep-submicron designs. As current flows through a wire, the movement of electrons interacts with metal ions in the conductor, and atoms are forced to move along with the flow of electrons. The process is analogous to small pebbles in a stream being carried from one point to another by the water gushing through them. Some of the migrating atoms may be deposited into "hillocks" at metal grain boundaries; electromigration can also cause thinning or voids at grain boundaries that reduce or stop current flow. Contact holes and vias are particularly susceptible.

Because of this mass transport of metal atoms from one point to another, electromigration leads to the formation of voids at some points in the metal line and hillocks or extrusions at others. It can therefore result in either: 1) an open circuit, if the voids formed in the metal line become big enough to sever it; or 2) a short circuit, if the extrusions become long enough to bridge the affected metal line to an adjacent one. Electromigration is worst when signal direction is kept constant, as in analog/mixed-signal circuits and digital power supply lines. In these nets, wire self-heating or joule heating, caused by the interaction of electrons with the lattice of the conductor, can produce thermal energy that accelerates electromigration. The most common "cure" for electromigration is to widen wires to reduce current density.
Other approaches include providing redundant vias, designing the circuit to run at lower voltage levels, and controlling temperature through a thermal-aware IC design methodology.
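The "widen the wire" fix follows directly from keeping current density J = I/(w*t) under a limit, and the temperature dependence is conventionally captured by Black's equation, MTTF = A * J^-n * exp(Ea / kT). All numeric limits in this sketch are illustrative, not from a process manual.

```python
import math

# Sketch of the standard electromigration sizing calculation and of
# Black's equation for the lifetime trend. Numeric limits are illustrative.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def current_density(current_a, width_m, thickness_m):
    """J = I / (w * t), in A/m^2."""
    return current_a / (width_m * thickness_m)

def min_width_for_em(current_a, thickness_m, j_max):
    """Smallest width keeping J <= j_max: the 'widen the wire' fix."""
    return current_a / (j_max * thickness_m)

def black_mttf(a_const, j, n, ea_ev, temp_k):
    """Black's equation: MTTF = A * J**-n * exp(Ea / (k*T))."""
    return a_const * j ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k))

# Size a wire carrying 1 mA in a 0.2 um thick layer, hypothetical J limit
w = min_width_for_em(current_a=1e-3, thickness_m=0.2e-6, j_max=2e9)

# Hotter wire -> shorter predicted lifetime, which is why joule heating
# and thermal-aware design matter for EM.
life_cool = black_mttf(1.0, 1e9, 2, 0.9, 300.0)
life_hot = black_mttf(1.0, 1e9, 2, 0.9, 400.0)
```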
  • As difficult as the challenges of verification are at the 32- and 22-nm nodes, those of manufacturing a device with reasonable yields and reliability are perhaps even greater. Doing so will require an extremely sophisticated physical implementation environment that accounts for physical effects in the design loop as well as manufacturing variability in its optimization routines. The manufacturing challenges also open up new EDA opportunities after tapeout and signoff, such as source-mask optimization and other computational lithography techniques that extend the life of fabrication equipment far beyond prior physical limits. In addition, developments like multilevel (3D) die packaging, through-silicon via (TSV) structures, and other non-traditional techniques for device scaling are pushing system and silicon design issues closer together. The term “design for manufacturing” (DFM) reflects the need to consider manufacturing variability in design and to optimize for both functional and parametric yield. Yet it’s important to emphasize that DFM isn’t simply an additional tool or discrete step in the design process, but rather an integration of manufacturing-process information throughout the IC design and verification flow. In 2009, more designers will be forced to migrate to a manufacturing-aware approach that extends across the entire design and verification life cycle, starting with cell library development and extending through place and route, physical verification, layout optimization, mask preparation, testing, and failure analysis. A particularly difficult manufacturing challenge will come at 22 nm, when traditional equipment-based solutions enabling device scaling are no longer feasible due to engineering limitations on achievable scanner wavelength and numerical aperture. 
At 22 nm, traditional optical-proximity-correction (OPC) techniques and parametric shape optimization of source illumination won't be enough to ensure high yield at such small dimensions. Instead, the industry will have to look to computational lithography technology to take us through the next couple of process nodes. The next major improvement is called source-mask optimization, or SMO (see the figure). In this methodology, a "pixelated" source replaces predefined illuminator shapes, opening up an additional degree of freedom in computational lithography. As in the past, the objective is to create beneficial interference patterns. But with a pixelated source, the illumination shape is virtually unconstrained. This means each specific design going through the fab can have a custom-tailored illumination pattern, optimized to provide a maximum process window for that design. To enable source-mask optimization, computational algorithms and methods for optimizing both source and mask, pixel by pixel, will be developed. Equally important will be the selection and development of an appropriate compute platform providing the computational capacity and speed required for production usage.

Physical Prototyping Trends: The coming months and years will see renewed interest in physical prototyping of IC designs. This will serve the need for feasibility validation of projects earlier in the design process, coupled with the increased search for implementation options that will yield power and/or cost benefits. In the past, a relatively easily accomplished process shrink brought these gains, but below 65 nm such scaling is painfully difficult. So floorplanning will come back into vogue in a big way, in the form of structured, efficient "path finding" exploration and optimization methodologies. Another trend is the emergence of 3D silicon, with the advent of viable TSV technologies that will lead to a number of new EDA and silicon products.
TSV technology offers dramatic savings in power and a quantum leap in the availability of memory bandwidth at the point of need in large systems-on-a-chip (SoCs). With it, SoC architects can quickly develop multiple product variants with varying memory sizes and types, all of which can be customized at build time based on predefined "plugs and sockets," or TSVs. Looking beyond the die itself, there will be a greater need to cross chip, package, and board boundaries to effectively manage design issues such as signal integrity, power integrity, and electromagnetic interference (EMI). Chips that function properly in isolation can often fail when packaged and/or board-mounted. Designers will be forced to examine noise and power margins on a broader level that encompasses the chip, package, and board to effectively manage timing and signal-integrity closure. This will also entail cross-team collaboration with tools that facilitate real-time and incremental collaboration between widely geographically dispersed individuals and teams. Finally, the growing analog content in SoCs demands a more fleshed-out reuse methodology for analog IP, particularly when it comes to process portability. Look for an increased emphasis on technologies to achieve this aim (see "Automating Analog IP Process Migration," ED Online 20478).
  • The 28nm process node has once more raised the design bar in terms of the DFM checks needed to realize a design. This is particularly true for analog and mixed-signal engineering, where rules that could once be maintained manually now need to be addressed in a more integrated, automated, and timely way. The article explores the challenges 28nm presents and describes the kind of design infrastructure needed to overcome them. Second- and third-order effects have become serious concerns, and entirely new effects have emerged at 28nm. As a result, existing design and verification techniques are starting to fall short. The sky isn't falling, but this node still presents a much greater challenge than its predecessors.

22nm Design Challenges at ISSCC 2011, by David Kanter (03-14-2011). Differences at 22nm: overall, there seems to be a fairly firm consensus regarding scaling to the 22nm node. To a large extent this illustrates the nature of the challenges: physical phenomena impact everyone equally. Fortunately, there were some points of divergence between the panelists to liven the discussion. These differences largely stem from the economic situations facing leading semiconductor companies. Both IBM and AMD have used partially depleted silicon-on-insulator (PD-SOI) down to the 32nm node, and Ghavam Shahidi maintained there are sufficient performance and variability benefits going forward. While PD-SOI boosts performance over bulk silicon, it significantly increases the cost of wafers and thus raises the variable manufacturing cost of chips. In essence, PD-SOI reduces the fixed costs of developing a new process technology but increases the variable costs of manufacturing the resulting chips. One exception to the consensus view on 22nm was IBM's stance that PD-SOI will continue to be useful for high-performance applications.
Global Foundries did not seem to have nearly so sanguine an outlook, undoubtedly because foundry customers focus heavily on variable manufacturing costs. Most of the panelists expressed hope for fully depleted SOI at a future node (15nm or below), which eliminates random dopant fluctuation, but it is a fundamentally different technology than PD-SOI. Everyone seemed to agree that packaging will play an important role in the future, but Global Foundries had an even stronger view. Bill Liu suggested that 3D packaging and integration, particularly through-silicon vias (TSVs), were essential to continuing Moore's Law. While he acknowledged that there were still issues with wafer bumping, he was far more optimistic about the timeline for viability of TSVs; the other panelists did not seem to think TSVs were viable at the 22nm node. Mark Bohr of Intel made a contrarian point about the costs of double patterning, which most of the panelists considered unattractive. Using two exposures on critical layers seems expensive, since it reduces throughput. However, double patterning significantly reduces capital expenditures, since the expensive lithography equipment can be re-used across future generations. In contrast, using immersion lithography to achieve the same benefits requires new equipment and introduces yield risks. Moreover, judicious use of RDRs can mitigate the number of layers that need double patterning, and thus the throughput impact. Additionally, double patterning is a manufacturing technique that works with almost any type of lithography and is a valuable skill to master going forward. The last and least surprising difference was on 450mm wafers. Intel and TSMC clearly believe that increasing wafer size will significantly improve the cost structure for manufacturing; TSMC has even publicly committed to a 450mm fab for the 20nm node.
Global Foundries and IBM were much less enthusiastic about the prospect, and were concerned by the increased cost of process technology development and fabs. This is entirely expected, since Intel and TSMC have substantially higher volumes and are willing to increase their capital expenditures to reduce variable costs.

Conclusions: one point raised by Min Cao, but likely a universal opinion, was the importance of designers moving away from purely deterministic methods. Variation has the biggest impact on worst-case performance and power, but it is nearly impossible for all the transistors, interconnects, etc. within a single circuit to suffer worst-case variation simultaneously. Statistical design using Monte Carlo methods considers the entire circuit and more realistic 'worst case' scenarios, and substantially mitigates the impact of variation. However, the computational overhead is significant, so Monte Carlo modeling must be used selectively. In many respects, the idea that designers should think probabilistically rather than deterministically highlights the fundamental implication of the panel: at 22nm and beyond, manufacturing can no longer cleanly abstract away the underlying physical challenges of semiconductor scaling. This drives the need for co-optimization between process technology and chip design. The inescapable conclusion is that the physical design of integrated circuits is becoming ever more critical at smaller geometries. A keen grasp of semiconductor physics means a design team can more readily anticipate and adapt to the risks and implications of the challenges at 22nm and beyond, so circuit designers can help influence architectural choices in the right direction. It will improve the co-optimization process and enable teams to creatively adjust circuits to the necessary restrictive design rules.
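The deterministic-versus-statistical point can be made concrete with a small Monte Carlo sketch: on a hypothetical 10-stage path with independent per-stage variation, the realistic high-percentile delay sits far below the sum of per-stage worst cases. All delay numbers are illustrative.

```python
import random

# Monte Carlo sketch of statistical timing on a hypothetical 10-stage path.
# Each stage delay is Gaussian; the deterministic worst case puts every
# stage at +3 sigma simultaneously, which essentially never happens.

random.seed(42)

N_STAGES, NOMINAL, SIGMA = 10, 100.0, 10.0  # illustrative, ps per stage

def path_delay():
    """One random sample of the total path delay."""
    return sum(random.gauss(NOMINAL, SIGMA) for _ in range(N_STAGES))

samples = sorted(path_delay() for _ in range(20_000))
p999 = samples[int(0.999 * len(samples))]      # ~99.9th percentile delay
det_worst = N_STAGES * (NOMINAL + 3 * SIGMA)   # every stage at +3 sigma

# det_worst is 1300 ps, while the 99.9th-percentile Monte Carlo delay is
# near 1000 + 3*sqrt(10)*10 ~ 1095 ps: independent variation averages out
# across stages, so the deterministic corner leaves ~200 ps on the table.
```

This gap is exactly the margin that purely deterministic corner-based sign-off wastes, and why Monte Carlo, despite its cost, is applied to critical circuits.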
Similarly, changes in process characterization and design rules, or unexpected yield issues, are much less disruptive and dangerous to schedules for a team that intimately understands the related trade-offs. The fundamental implication is that engineers and design teams that best understand the underlying physics will likely achieve better performance, power, and cost, yielding a significant competitive advantage.

Designing at 45nm allows another doubling in transistor density versus 65nm, both for logic gates and for SRAM cells. However, the lithography implications are such that design rules have to be augmented with recommended rules and regular design, and verified with full lithography and CMP simulation. Low-power techniques have to be even more elaborate than at 65nm, to compensate for less natural voltage swing in logic and SRAMs, and more gate leakage. Finally, reliability and ESD models have to be taken into account in design, and phenomena such as Hot Carrier Injection (HCI) and Negative Bias Temperature Instability (NBTI) are part of library and chip design verification suites. Only with a holistic approach encompassing process, device modeling, reliability, and lithography, together with memory designers, I/O designers, and power-switch experts, will the full capabilities of the 45nm node be unleashed.
  • 3D systems offer higher integration density, reduced interconnect lengths, heterogeneous system integration, and a smaller footprint area. Recent problems in 3D physical design include vertical dependencies, sophisticated thermal management, and blockages due to thermal and through-silicon vias (TSVs). Solutions include new 3D layout representations, regions for thermal vias, and modeling of interconnect resources in 3D. The lack of physical design tools that can handle TSVs and 3D die stacking delays the mainstream acceptance of this technology. As of early 2010, the only commercial tool available for TSV-based 3D IC design is the MAX-3D Layout Editor by Micro Magic, Inc. [2]. This tool only supports layout editing for 3D ICs and does not offer automatic placement and routing. In addition, none of the commercial tools available for timing, power, signal integrity, power supply noise, and manufacturability analysis handles TSVs and 3D stacking directly. Advances in 3D integration and packaging are undoubtedly gaining momentum and have become of critical interest to the semiconductor community.

A multichip module (MCM) can be defined in a number of ways; some define it as a structure consisting of two or more integrated circuits electrically connected to a common circuit base and interconnected by conductors in that base. The driving forces behind the development of three-dimensional packaging technology are similar to those behind MCM technology, although the requirements for 3D technology are more aggressive. They include the need for significant size and weight reductions, higher performance, smaller delay, higher reliability and, potentially, reduced power consumption. TSVs make it possible to arrange digital and analog functional blocks across multiple dies at a very fine level of granularity, as illustrated in Figure 1. This results in a decrease in overall wire length, which naturally translates into less wire delay and less power.
Figure 1: (a) 2-tier 3D IC with face-to-face bonding, (b) top-down view of 2D vs 3D layout. (Source
  • ITRS: International Technology Roadmap for Semiconductors. CMOS or BEOL (back-end-of-line) metallization.
  • Several innovative cooling solutions have been proposed, including carbon-nanotube-based cooling [7] and liquid cooling with micro-scale fluidic channels (MFCs) inserted directly into the 3D ICs, as illustrated in Figure 5(b). For reference, the thermal conductivities of copper, silicon, and silicon dioxide at 25°C are 410, 149, and 1.4 W/m/K, respectively. Research is needed to demonstrate the benefit and overhead of these thermal solutions at the physical design level. Possible solutions include thermal vias, though these come at the cost of larger area, more power consumption, and more noise, which leads to lower performance and a diminishing benefit of TSV-based 3D IC technology. Research is needed on P/G network synthesis, optimization, and analysis to address these issues while minimizing on-chip resource usage such as P/G wires, P/G TSVs, and on-chip decoupling capacitors.
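The conductivity numbers above make the thermal problem easy to quantify with 1D steady-state conduction, delta_T = P * L / (k * A). The geometry in this sketch (power, area, layer thickness) is illustrative; only the conductivities come from the text.

```python
# Sketch of why oxide layers in a 3D stack trap heat: 1D steady-state
# conduction across a layer, delta_T = P * L / (k * A), using the
# conductivities quoted above. Geometry values are illustrative.

K = {"cu": 410.0, "si": 149.0, "sio2": 1.4}  # W/m/K at 25 C (from the text)

def delta_t(power_w, thickness_m, k_w_mk, area_m2):
    """Temperature rise across one layer for uniform heat flow."""
    return power_w * thickness_m / (k_w_mk * area_m2)

P = 10.0   # W of heat flowing through a tier (illustrative)
A = 1e-4   # 1 cm^2 cross-section
L = 1e-6   # 1 um layer thickness

rise = {mat: delta_t(P, L, k, A) for mat, k in K.items()}
# The same 1 um of SiO2 impedes heat flow about 100x more than silicon
# (149 / 1.4 ~ 106), which is why thermal TSVs punched through the
# bonding oxide are an effective relief path.
```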
  • The goal is to improve performance and power of the overall chip/interposer 3D system under reliability, manufacturability, and cost requirements. Thermal management in the interposer itself is another important reliability issue that must be investigated.
  • Possible solutions include TSV-aware CMP fill synthesis for the top and bottom metal layers, TSV stress-aware timing analysis and physical design [10], and TSV-aware substrate and device reliability modeling and optimization.
  • ITRS: International Technology Roadmap for Semiconductors.
  • Transcript

    • 1. Seminar: Design Challenges in Physical Design. Shankardas Deepti Bharat, CGB0911002, VSD531, M. Sc. [Engg.] in VLSI System Design. Module Title: IC Planning & Implementation. Module Leader: Mr. Chandramohan P. M. S. Ramaiah School of Advanced Studies
    • 2. AgendaIntroductionGeneral challenges faced in designAnalogDigitalMixedNode comparisons in detail3D challengesSummaryReference M. S. Ramaiah School of Advanced Studies 2
    • 3. Introduction With each new process technology, the number of transistors per unit areadoubles The typical area of a chip has remained more or less the same. Designers use the extra real estate to add more functions—Bluetooth one year, Wi-Fi the next, streaming video after that, and so on. As a result, the layout data for a chip design also doubles with each new node. The main issues of VDSM high performance physical designs are current density and power distribution, synchronization, manufacturing variability, high- frequency and coupling noise. M. S. Ramaiah School of Advanced Studies 3
    • 4. Design objectives & challenges. Design objectives: power (dynamic/static), timing (frequency), area (cost/yield), yield (cost). Challenges: manufacturing technology, leakage power, interconnect delay, congestion, reliability. Figure 1. Physical flow [1]
    • 5. Congestion Design is said to be congested if there more tracks to be routed than the available tracks Objective: Determine routes (tracks, layers, and vias) for each net Such that the total wire length is minimized. Be careful with routing critical nets and clock nets Figure 2. Routing congestion [1] M. S. Ramaiah School of Advanced Studies 5
    • 6. IR drop: Resistance in the power grid causes a reduced supply voltage at the delivery point. An IR drop from 1.7V to 1.6V can produce delay variation of 50% or more. IR drop can be minimized by increasing the number of core power pads, by increasing the number of metal layers carrying power, by making the top metal layer extra thick for increased conductivity, by increasing the width of the power rails, and by increasing the number of straps. IR drop leads to performance drop, signal integrity problems, and electromigration. Figure 3. IR drop analysis [3]
    • 7. Approach for power distribution: strapping and rings for standard cells; using power rings. Figure 4. Power distribution [2]
    • 8. Signal integrity: the ability of an electrical signal to carry information reliably and resist the effects of high-frequency interference from nearby signals. Conditions that can impact signal integrity are crosstalk and electromigration. Figure 5. Crosstalk & electromigration [2]
    • 9. Challenges in digital design. Chip assembly predictability: physical design integration, mixed-signal systems, verification, routing, continuous regression simulation. Chip integration speed and accuracy: floorplanning and optimization, physical verification, physical design integration. Rapid migration to a new process technology.
    • 10. Challenges in analog design: Analog design often requires a very different variety of technology features, model accuracy, and integration sensitivities than digital design. Sensitivity to parasitics and overall modeling accuracy is much higher for an analog design than for a digital one. Modeling of noise coupling is a critical need in analog design, and requires the extraction of substrate and well characteristics not typically required in most digital designs. The most problematic requirement of analog integration is the need for power-supply voltages at different potentials and/or electrically isolated from digital power supplies.
    • 11. Challenges in Mixed Design
    •Analog circuits have to follow the digital process, which limits analog circuit performance
    •Design methodology/tools for reliability
    •Complex interaction between analog and digital blocks
    •Mixed-signal designs involve signal paths crossing the interface between digital and analog blocks
    •Parasitic coupling
    •Power dissipation constraints
    •Hot carrier injection
    •Reuse of IPs
    •Testing
    •Adding RF to a mixed-signal chip adds considerable risk
    •Model accuracy
    • 12. Comparison of Nodes
    •LP devices do not follow HP scaling trends; LP devices provide higher values of Tox
    •Radios, sensors, I/O, controllers, and power management all require or provide interfaces to or from variable power and signal sources
    •The total number of design rules has doubled between 90 and 28nm, and rule complexity is outpacing that for both LEF rules and the built-in DRC checkers provided with custom design tools
    •Full-featured DRC/DFM engines offer a variety of techniques and information to assist designers during debugging and design optimization
    •The difference in emphasis also affects the roles people play in the design process
    Figure 6. Nodes comparison [5]
    • 13. Challenges in 3D design
    Thermal issues (power density)
    •Consideration of active regions
    •Heat conduction (thermal vias)
    High design complexity
    •Additional degree of freedom (3rd dimension)
    •Vertical constraints
    •Efficient data structures and algorithms
    Testability/Reliability/Yield/Costs
    •Redundant through-silicon vias (TSVs)
    •Conduct integrated test structures outside
    Reuse of existing (2D) IP blocks
    Blockage area for TSVs
    Figure 7. 3D design challenges [6] and [7]
    • 14. 3D design challenges
    •Pseudo 3D tools & their limitations
     •Existing tools are used as a 3D extension
     •Capable of handling simple 3D designs wherein existing 2D designs are stacked & connected without any major design change
     •This is done with the help of TSVs, which deliver signal, power & clock in the vertical direction
    •TSV management: how many & where?
     •The count and location of TSVs have a significant impact on the quality and reliability of 3D IC layouts
     •Research is also required to investigate the impact of TSV location on 3D IC design quality and reliability; possible solutions include tradeoff studies between regular and non-regular TSV placement with respect to these metrics
     •Mainly due to the large TSV size, wirelength begins to increase once the TSV count goes beyond its optimum point; the number of TSVs used in a layout entirely depends on how the design is partitioned into multiple dies
    •Cost
    Figure 8. 3D design challenges [6]
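The wirelength-versus-TSV-count tradeoff can be made concrete with a toy model (an assumption for illustration, not a model from the slides): more TSVs shorten inter-die connections (a term falling like A/n), while each large TSV occupies silicon area and pushes cells apart (a term growing like B*n), so total wirelength has a minimum at an intermediate TSV count.

```python
# Toy model of total wirelength vs. TSV count n.
# A and B are hypothetical constants: A captures the benefit of more
# vertical connections, B the area penalty of each large TSV.

def total_wirelength(n_tsv, a=1.0e6, b=100.0):
    """Falling connection term plus growing area-overhead term."""
    return a / n_tsv + b * n_tsv

# The minimum of A/n + B*n lies at n = sqrt(A/B) = 100 for these constants:
# too few TSVs (n=10) and too many (n=1000) both give longer total wirelength.
wl_few, wl_opt, wl_many = (total_wirelength(n) for n in (10, 100, 1000))
```

The model also reflects the slide's last point: since n is fixed by how the design is partitioned into dies, partitioning choices, not routing alone, set where a design lands on this curve.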
    • 15. 3D design challenges
    •Thermal management
     •As dies are stacked, several hotspots are created which increase the background temperature
     •Dummy TSVs are inserted to alleviate thermal problems, but may negatively impact area & manufacturability
     •Other cooling solutions have been proposed, including carbon-nanotube-based cooling and liquid cooling with micro-scale fluidic channels (MFC) inserted directly into the 3D IC
    •Clock delivery
     •The 3D clock tree itself is the longest wire in the circuit and contains many buffers to control skew and slew
     •Since the delay characteristics of clock wires, buffers, and TSVs are significantly affected by temperature, care must be taken to ensure that skew is kept minimal under a given non-uniform thermal profile
     •Clock TSVs, as with signal and P/G TSVs, occupy layout space and cause coupling
     •High thermal variations in 3D ICs induce a substantial amount of skew variation in the clock tree, which has adverse implications for the performance and reliability of 3D ICs
    Figure 9. 3D design challenges [6]
    • 16. 3D design challenges
    •TSV-induced design & manufacturing issues
     •TSVs in 3D IC layouts cause significantly non-uniform layout density distributions on the active, poly, and M1 layers
     •This density variation is expected to cause trouble during CMP steps and requires new TSV-aware solutions
     •Possible solutions include TSV-aware CMP fill synthesis for the top and bottom metal layers, TSV stress-aware timing analysis and physical design, and TSV-aware substrate and device reliability modelling and optimization
    •Interposer-based 3D integration
     •Interposers today are typically made of silicon or glass and provide several layers of metal and vias for fine-pitch electrical connection among the dies that are surface-mounted on them
     •A 3D IC/interposer co-design methodology is crucial, where physical design for the whole system, such as P/G network synthesis and analysis, is conducted at both levels simultaneously with collaborative methods
     •Thermal management in the interposer itself is another important reliability issue that must be investigated
    Figure 10. 3D design challenges [6]
    • 17. Summary
    •With each successive advancement of semiconductor technology, a new VDSM challenge is born
    •For high-performance, reliable designs the industry has to face a wide variety of phenomena such as heat dissipation, electromigration, interconnect coupling & more
    •Nevertheless, in many cases of high-performance design, current EDA technology does not have the full power to provide the best solution
    •In 3D ICs, TSVs have been widely used for thermal management and power & clock delivery
    •Accurate electrical, mechanical, and thermal modeling of TSVs is essential for successful physical design of TSV-based 3D ICs
    • 18. References
    [1] T. R. Bednar et al., 'Issues and strategies for physical design of SOC ASICs', IBM Journal of Research and Development, 46 (6), November 2002
    [2] Chung-Wei Lin et al., 'Recent Research & Emerging Challenges in Physical Design for Manufacturability/Reliability', [white paper] available at <> Retrieved on 01 Apr 2012
    [3] Dr. Danny Rittman, 'Challenges & Solutions in Physical Design for High-Performance IC Design in the Very-Deep-Sub-Micron (VDSM) Era', Jan 2004
    [4] Joe Davis (2011), 'The challenge of analog, mixed-signal and custom physical implementation at 28nm', [online] available at <> Retrieved on 01 Apr 2012
    [5] Robert Fischbach, Jens Lienig, Tilo Meister, '3D Physical Design: Challenges and Solutions', Institute of Electromechanical and Electronic Design, Dresden University of Technology, Dresden, October 12, 2011
    [6] Sung Kyu Lim, 'TSV-Aware 3D Physical Design Tool Needs for Faster Mainstream Acceptance of 3D ICs', [online] available at < .com.pdf> Retrieved on 01 Apr 2012
    • 19. Thank You
    • 20. Remarks
    Sl. No.  Topic                         Max. marks  Marks obtained
    1        Quality of slides             5
    2        Clarity of subject            5
    3        Presentation                  5
    4        Effort and question handling  5
             Total                         20