2014 9th International Design and Test Symposium
978-1-4799-8200-4/14/$31.00 ©2014 IEEE

Design, Manufacturing & Test of Integrated Circuits in the Nanotechnology Era

Omar Kebichi, ChipTestEstimate, Lexington, MA, USA. Omar.kebichi@chiptestestimate.com

Abstract

This article highlights the state of the art of chip design, manufacturing and test, and the challenges the industry faces in keeping up with Moore's law at nanoscale technology nodes. We outline the challenges the semiconductor industry faces at sub-45 nm and the approaches the engineering community is adopting in the design, manufacturing and test fields.

1- INTRODUCTION

In section 2 we highlight the main challenges seen by the manufacturing industry at sub-45 nm and how it is coping while keeping up with Moore's law. In section 3 we highlight the challenges that design engineers are facing and how they are coping with the deluge of transistors being integrated on a single chip (i.e. billions of transistors). Finally, in section 4 we discuss the different test and yield enhancement techniques being used to keep the overall chip cost under control.

2- SEMICONDUCTOR MANUFACTURING AT NANOSCALE

In order to keep up with Moore's law, transistor dimensions have been scaled down at a rate that doubles transistor density every one to two years. Such drastic downscaling of feature sizes has been beneficial for chip designers and system integrators, allowing them to embed more complex functions in the same integrated circuit at low cost. Lately, however, it has become harder to keep up with Moore's law, especially at nanometer-scale feature sizes. Designing and manufacturing a chip with nanoscale devices comes with some pain: many parameters that were of little importance at earlier technology nodes have become hurdles that must be cleared before the next, smaller node can be adopted.
There are hundreds of steps in the manufacturing process (see fig. 1), and the bottleneck resides in lithography, since it has not kept up with the device downscaling trend: the wavelength of current lithography tools is now larger than the size of the transistor features being printed through the mask onto the wafer. One solution the manufacturing industry has been pursuing is extreme ultraviolet (EUV) lithography [1]; however, many challenges remain to be solved before EUV can be used for high-volume production. An alternative to EUV is immersion lithography: the current lithography tool is kept, but its numerical aperture is increased thanks to the refractive index of a liquid placed between the last lens element and the wafer (see fig. 2). Another way to extend the life of current lithography is the double, triple or quadruple patterning technique. By increasing the number of sub-patterns, the density of features that can be printed on the wafer is increased accordingly (i.e. doubled, tripled or quadrupled; see fig. 3). Finally, coupling the above lithography solutions with Optical Proximity Correction (OPC) software to preprocess the layout edges definitely helps increase the overall yield (see fig. 4) [2].

Figure 1: Several hundred discrete process steps
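The multi-patterning idea above can be viewed as a layout-decomposition problem: any two features closer than the single-exposure pitch must be assigned to different masks, which for double patterning amounts to two-coloring a conflict graph. A minimal illustrative sketch (the feature coordinates, pitch values and function names are hypothetical, not from any real decomposition tool):

```python
from collections import deque

def decompose(features, min_pitch):
    """Assign 1-D feature coordinates to two masks so that no two
    features on the same mask are closer than min_pitch.
    This is 2-coloring of the conflict graph; returns None when an
    odd conflict cycle makes two masks insufficient."""
    n = len(features)
    # Conflict edges: pairs printed too close for a single exposure.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(features[i] - features[j]) < min_pitch:
                adj[i].append(j)
                adj[j].append(i)
    mask = [None] * n
    for start in range(n):
        if mask[start] is not None:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if mask[v] is None:
                    mask[v] = 1 - mask[u]   # opposite mask
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None             # odd cycle: two masks not enough
    return mask

# Lines drawn at a 40 nm pitch when a single exposure resolves only 80 nm:
print(decompose([0, 40, 80, 120], 80))  # -> [0, 1, 0, 1]
```

When the conflict graph contains an odd cycle, two masks are not enough; that is precisely the situation in which triple or quadruple patterning is brought in.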
Figure 2: Immersion lithography (projection optics, liquid supply, wafer stage with scanning motion)

Figure 3: Double patterning

Figure 4: Optical proximity correction (layout without and with OPC)

Another side effect of downscaling below 45 nm is power consumption. All of a sudden, static power starts growing exponentially at lower nodes (see fig. 6), so static power dissipation is also becoming a bottleneck for the Moore's law trend. The main contributor to static dissipation is gate leakage. One solution the industry is adopting is the high-K gate stack, which slows transistor gate leakage [3]. Another solution, adopted by many semiconductor manufacturers, is to use the FinFET transistor [4] instead of the planar transistor, since the FinFET has proven to give better control of gate leakage.

Finally, despite all the challenges, the industry is still keeping up with Moore's law and is also turning to the packaging industry to push miniaturization even further. The vertical dimension is being used to build denser chips, and different 3-D chip techniques are being adopted, such as flip chip, silicon interposers, through-silicon vias (TSVs), etc. (see fig. 5). Such drastic miniaturization only underlines the complexity the chip design community will face in keeping up with chips averaging billions of transistors at sub-20 nm.

Figure 5: 3-D chips (TSV connections, package substrate)

3- DESIGNING COMPLEX CHIPS (INTEGRATING NANOSCALE DEVICES)

The ability to keep up with Moore's law even at nanoscale allows today's designers to pack billions of transistors into the same chip. However, designing such a complex chip has proven to be far different from what designers were used to. For example, low power dissipation is an important differentiator for a chip design. Designers therefore have to design in a low-power context (i.e. reduce both dynamic and static current), especially if the chip will go into power-budgeted devices (e.g. servers or handhelds). As shown in figure 6, static leakage is becoming a bigger issue at lower nodes, so designers need to design with reducing not just dynamic current but also static current in mind. To reduce dynamic current, different design techniques are being used, such as gating off unused clocks, shutting off power domains when not in use, adding voltage islands, and better encodings such as 1-of-n domino logic. To reduce static current, designers use multiple threshold voltages (multi-Vt): low-Vt transistors on fast paths and high-Vt transistors on slow paths.
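The dynamic-versus-static trade-off described above can be made concrete with the textbook first-order formulas: switching power P = αCV²f, and subthreshold leakage growing exponentially as Vt is lowered, which is the reason multi-Vt libraries exist. A rough back-of-the-envelope sketch (all constants are illustrative and not tied to any real process node):

```python
import math

# Illustrative constants only -- not taken from any specific process.
THERMAL_V = 0.026     # kT/q at room temperature, in volts
SLOPE_N = 1.5         # subthreshold slope factor (hypothetical)

def dynamic_power(alpha, c_load, vdd, freq):
    """First-order switching power: P = alpha * C * Vdd^2 * f."""
    return alpha * c_load * vdd ** 2 * freq

def leakage_ratio(vt_low, vt_high):
    """Subthreshold leakage scales roughly as exp(-Vt / (n * kT/q)),
    so a low-Vt (fast) cell leaks exponentially more than a high-Vt one."""
    return math.exp((vt_high - vt_low) / (SLOPE_N * THERMAL_V))

# 1 nF of switched capacitance at 0.9 V and 1 GHz with 20% activity:
print(dynamic_power(0.2, 1e-9, 0.9, 1e9))   # watts
# How much leakier is a 0.30 V threshold cell than a 0.45 V one?
print(leakage_ratio(0.30, 0.45))
```

The quadratic dependence on Vdd is why voltage islands pay off for dynamic power, while the exponential Vt dependence is why low-Vt cells are reserved for timing-critical paths.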
Figure 6: Leakage power issue at lower nodes

Also, a new standard has emerged to help designers describe the power intent of a specific architecture (i.e. the Unified Power Format, UPF). As figure 7 shows, the impact of a power-intent description is bigger at higher levels of abstraction, so designers have to think about low power at an early stage of the design process: power intent has the best impact when applied at the architecture level. Also, all EDA point tools nowadays read in power intent, so at each step of the design flow the power intent needs to be read in by the point tool, which then acts upon it accordingly.

Figure 7: Power-intent impact at different levels of abstraction (about 90% at the software/macro-architecture level, 30% at micro-architecture, down to 10% at layout)

In today's complex chips, designers have to integrate many IPs from different vendors and tens of IP RAMs coming from different vendors, and have to take into account tens of clock domains and power domains, etc. (see fig. 8).

Figure 8: Multiple embedded memory IPs (SRAM, ROM, OTP, TCAM/BCAM, caches) in a multicore SoC

Such a proliferation of soft cores, and such an increase in the number of functions being implemented on the chip, is pushing the design complexity away from design per se and towards the verification of all the implemented functions and the integration of the different soft and hard cores. It is easy to see in a design group that the verification engineers outnumber by far the designers involved in the same chip design. To keep the design cost under control, new verification techniques are being adopted by the design community (i.e. assertion-based verification, code coverage, functional coverage, emulation, etc.). Such verification techniques help verify that the chip design still complies with the initial design specifications. For example, since the chip design has to be refined through multiple levels of abstraction (i.e. architecture level, RTL, gate level, soft and hard IP integration, etc.), the designer has to make sure no 'bugs' slip through as the design is refined down to the layout level. However, since the chip is very complex, it is humanly impossible to do this verification manually; therefore, automatic verification tools implementing the above techniques are used to perform a thorough verification of the final chip.
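Assertion-based verification is normally written in SVA or PSL and run inside a simulator; purely as an illustration of the idea, here is a toy trace checker for a made-up request/acknowledge property (the signal names and trace format are hypothetical):

```python
def check_req_ack(trace, max_latency):
    """Toy temporal assertion: whenever 'req' is high in a cycle, 'ack'
    must be seen within max_latency cycles (including the same cycle).
    'trace' holds one dict of signal values per clock cycle.
    Returns the cycle indices at which the property failed."""
    failures = []
    for t, cycle in enumerate(trace):
        if cycle.get("req"):
            window = trace[t:t + max_latency + 1]
            if not any(c.get("ack") for c in window):
                failures.append(t)
    return failures

# A passing trace (ack arrives two cycles after req) and a failing one:
good = [{"req": 1}, {}, {"ack": 1}, {}]
bad = [{"req": 1}, {}, {}, {}]
print(check_req_ack(good, 2))  # -> []
print(check_req_ack(bad, 2))   # -> [0]
```

A real assertion language adds clock/reset semantics, overlapping attempts and vacuity reporting on top of this basic "scan the trace for property violations" loop, and coverage tools count how often each property was exercised.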
4- CHIP TEST

In section 2 we highlighted that the semiconductor manufacturing process involves hundreds of discrete steps, each of them prone to defects. For example, a defect can be caused by contamination, extra metal, insufficient doping, a process or mask error, etc. (see fig. 9). It is therefore mandatory to test the chip thoroughly after manufacturing and screen out any defective part before shipping it to the customer (see fig. 10).

Figure 9: Defects during the manufacturing process

Figure 10: Manufacturing test

Also, one of the side effects of downscaling below 45 nm is the myriad of potential defects that the test engineer has to track. All of a sudden, the test engineer has to deal with new fault types and, at the same time, a large volume of faults to cover, while having only a low pin count at the chip periphery to use during test. New techniques therefore have to be adopted to keep test quality high (i.e. low DPPM) and, at the same time, test cost low. It is mandatory for the design group to invest in software tools that generate test patterns with high fault coverage [5]. Moreover, to keep the total test cost under control, the industry is adopting compression techniques [6][7]; such techniques reduce the total number of test vectors by an order of magnitude. Also, in today's chips up to 60% of the chip area can be dedicated to embedded memories. Such memories can be tested cost-effectively mainly through BIST (Built-In Self-Test) [8][9][10][11]: a BIST controller is implemented in the design and used to test all the embedded memories automatically, in situ (see fig. 11).

Figure 11: Chip test through ATPG, JTAG & BIST

The other aspect of test is yield enhancement. To increase the yield at a given technology node, the manufacturer would like to know which types of defects are causing yield loss and, more importantly, where in the chip the defects are occurring.
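Memory BIST controllers such as those in [8]-[11] typically run March algorithms over the array. As an illustration, here is a software model of the classic March C- element sequence applied to a simulated RAM with one injected stuck-at-0 cell (the RAM model and fault location are hypothetical):

```python
def march_cminus(mem, n):
    """Run March C- over an n-word memory exposing read(a)/write(a, v):
      up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0).
    Returns the sorted list of addresses whose reads mismatched."""
    fails = set()

    def rd(addr, expect):
        if mem.read(addr) != expect:
            fails.add(addr)

    up = list(range(n))
    down = up[::-1]
    for a in up:
        mem.write(a, 0)
    for a in up:
        rd(a, 0); mem.write(a, 1)
    for a in up:
        rd(a, 1); mem.write(a, 0)
    for a in down:
        rd(a, 0); mem.write(a, 1)
    for a in down:
        rd(a, 1); mem.write(a, 0)
    for a in down:
        rd(a, 0)
    return sorted(fails)


class FaultyRam:
    """Simulated RAM with a single stuck-at-0 cell (fault address made up)."""
    def __init__(self, n, stuck_at0=5):
        self.cells = [0] * n
        self.stuck = stuck_at0

    def write(self, a, v):
        self.cells[a] = 0 if a == self.stuck else v

    def read(self, a):
        return self.cells[a]


print(march_cminus(FaultyRam(16), 16))  # -> [5]
```

In hardware, the same loop becomes a small address counter plus data generator/comparator next to the array; March C- covers stuck-at, transition and coupling faults in linear time, which is what makes in-situ BIST of many embedded memories affordable.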
With such data in hand, the manufacturer will most likely identify the problem quickly, using yield-enhancement software tools and electron microscopy [12][13]. By understanding the cause of the defect, the manufacturer is in a position to fix it and move to volume production sooner. Therefore, any test tool used to derive test vectors that detect potential defects needs to be coupled with a diagnosis tool that can pinpoint the defect in the chip down to its (x, y) coordinates. Using an efficient diagnosis solution is the only way for a manufacturer to mature a new technology node quickly and use it to its full potential.

CONCLUSION

The advances made by the semiconductor industry at the nanotechnology scale are tremendous. Other industries, such as pharmaceuticals, sports and clothing, can easily make use of these advances. Also, such a leap in technology, allowing the integration of billions of transistors on the same chip, is
opening whole new markets for system integrators. For example, electronics & software are becoming a large segment of automotive and medical instruments, thanks to the miniaturization trend at low cost (i.e. transistor downscaling and vertical chip stacking). One question remains: how far can semiconductor technologists push transistor downscaling? At some point the physical limit of the transistor gate will be reached, and no further downscaling will be possible. We might then see other types of transistors coming to the rescue (e.g. the carbon nanotube transistor) in order to keep up with such a lovely miniaturization trend.

REFERENCES

[1] "Extreme ultraviolet lithography: A review", Banqiu Wu, Ajay Kumar, Journal of Vacuum Science and Technology, 2007.
[2] "High accurate optical proximity correction under the influences of lens aberration in 0.15 logic process", K. Harazaki et al., Microprocessors and Nanotechnology Conference, 2000.
[3] "Ultrathin high-K gate stacks for advanced CMOS devices", D. A. Buchanan et al., IBM Thomas J. Watson Research Center, NY, USA.
[4] "In search of "Forever," continued transistor scaling one new material at a time", R. S. Chau, T. Ghani, K. Mistry, S. Tyagi, M. T. Bohr.
[5] Digital Systems Testing & Testable Design, Miron Abramovici, Melvin Breuer, Arthur D. Friedman.
[6] "Survey of Test Vector Compression Techniques", N. Touba, IEEE Design & Test of Computers, 2006.
[7] "Embedded deterministic test", J. Rajski, J. Tyszer, M. Kassab, N. Mukherjee, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2004.
[8] "A Tool for Automatic Generation of BISTed and Transparent BISTed RAMs", Omar Kebichi et al., ICCD 1992.
[9] "Exact Aliasing Computation for RAM BIST", Omar Kebichi, M. Nicolaidis, V. N. Yarmolik, ITC 1995, USA.
[10] "Enabling Embedded Memory Diagnosis via Test Response Compression", Omar Kebichi, Janusz Rajski, Wojciech Maly, et al., VTS 2001, USA.
[11] "Full-speed scheme for memory BIST", Omar Kebichi, Wu-Tung Cheng, Chris Hill, IEEE International Workshop on Memory Testing, USA.
[12] "Survey of Scan Chain Diagnosis", Yu Huang et al., IEEE Design & Test of Computers, 2008.
[13] "Detection and Diagnosis of Static Scan Cell Internal Defect", Yu Huang et al., ITC 2008.