
US20110288808A1 - Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure - Google Patents

Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure

Info

Publication number
US20110288808A1
US20110288808A1
Authority
US
United States
Prior art keywords
test
failure
tests
test block
history data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/784,142
Inventor
Wei Fan
Nagui Halim
Mark C. Johnson
Srinivasan Parthasarathy
Deepak S. Turaga
Olivier Verscheure
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US12/784,142
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TURAGA, DEEPAK S., HALIM, NAGUI, PARTHASARATHY, SRINIVASAN, FAN, WEI, JOHNSON, MARK C., VERSCHEURE, OLIVIER
Publication of US20110288808A1
Priority to US13/972,566 (published as US9342424B2)
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01R: MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00: Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/28: Testing of electronic circuits, e.g. by signal tracer
    • G01R31/2851: Testing of integrated circuits [IC]
    • G01R31/2894: Aspects of quality control [QC]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/22: Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/26: Functional testing
    • G06F11/27: Built-in tests

Definitions

  • In the software-testing embodiment, $N_b^f$ refers to the number of programming codes that failed within a test block b, $t_b$ refers to the test duration of a test block b, and $N_b^f(i)$ refers to the number of programming codes that failed given that those programming codes passed test blocks 1, 2, . . . , i.
  • The computing system 305 determines whether the independent failure model or the dependent failure model is used for optimizing the test schedule. If there are many dependencies between the programming codes, e.g., methods or functions frequently call other methods or functions, the computing system 305 may choose the dependent failure model. If the programming codes are independent of each other, e.g., in unit testing, the computing system 305 may choose the independent failure model. The computing system 305 then arranges or orders the test blocks according to the chosen model and tests the programming codes in that order.
  • In another embodiment, the computing system 305 optimizes a test schedule for the system test and/or the simulation test conducted on the programming codes. In this case, a test block is a stage of the system test and/or the simulation test. Upon choosing a failure model (the independent failure model or the dependent failure model), the computing system 305 arranges or orders the test blocks (stages) of the system test and/or the simulation test according to the chosen failure model, and then conducts the system test and/or the simulation test in that order.
  • In one embodiment, the method steps in FIGS. 1-2 are implemented in hardware or reconfigurable hardware, e.g., an FPGA (Field Programmable Gate Array) or CPLD (Complex Programmable Logic Device), using a hardware description language (Verilog, VHDL, Handel-C, or SystemC). In another embodiment, the method steps in FIGS. 1-2 are implemented in a semiconductor chip, e.g., an ASIC (Application-Specific Integrated Circuit), using a semi-custom design methodology, i.e., designing a chip using standard cells and a hardware description language. In either case, the hardware, reconfigurable hardware or semiconductor chip operates the method steps described in FIGS. 1-2.
  • FIG. 5 illustrates an exemplary hardware configuration of a computing system 305 running and/or implementing the method steps in FIGS. 1-2. The hardware configuration preferably has at least one processor or central processing unit (CPU) 511. The CPUs 511 are interconnected via a system bus 512 to a random access memory (RAM) 514, read-only memory (ROM) 516, an input/output (I/O) adapter 518 (for connecting peripheral devices such as disk units 521 and tape drives 540 to the bus 512), a user interface adapter 522 (for connecting a keyboard 524, mouse 526, speaker 528, microphone 532, and/or other user interface device to the bus 512), a communication adapter 534 for connecting the system 305 to a data processing network, the Internet, an intranet, a local area network (LAN), etc., and a display adapter 536 for connecting the bus 512 to a display device 538 and/or printer 539.
  • the present invention can be realized in hardware, software, or a combination of hardware and software.
  • a typical combination of hardware and software could be a general purpose computer system with a computer program that, when being loaded and run, controls the computer system such that it carries out the methods described herein.
  • the present invention can also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which—when loaded in a computer system—is able to carry out these methods.
  • Computer program means or computer program in the present context include any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after conversion to another language, code or notation, and/or reproduction in a different material form.
  • the invention includes an article of manufacture which comprises a computer usable medium having computer readable program code means embodied therein for causing a function described above.
  • the computer readable program code means in the article of manufacture comprises computer readable program code means for causing a computer to effect the steps of a method of this invention.
  • the present invention may be implemented as a computer program product comprising a computer usable medium having computer readable program code means embodied therein for causing a function described above.
  • The computer readable program code means in the computer program product comprises computer readable program code means for causing a computer to effect one or more functions of this invention.
  • the present invention may be implemented as a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for causing one or more functions of this invention.
  • The present invention may be implemented as a computer readable medium (e.g., a compact disc, a magnetic disk, a hard disk, an optical disk, solid state drive, digital versatile disc) embodying program computer instructions (e.g., C, C++, Java, Assembly languages, .Net, Binary code) run by a processor (e.g., Intel® Core™, IBM® PowerPC®) for causing a computer to perform method steps of this invention.
  • the present invention may include a computer program product including a computer readable storage medium having computer readable program code embodied therewith.
  • the computer readable program code runs the one or more of functions of this invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Quality & Reliability (AREA)
  • Tests Of Electronic Circuits (AREA)
  • Testing Or Measuring Of Semiconductors Or The Like (AREA)

Abstract

The present invention describes a method and system for optimizing a test flow within each ATE (Automated Test Equipment) station. The test flow includes a plurality of test blocks. A test block includes a plurality of individual tests. A computing system schedules the test flow based on one or more of: a test failure model, a test block duration and a yield model. The failure model determines an order or sequence of the test blocks. There are at least two failure models: an independent failure model and a dependent failure model. The yield model describes whether a semiconductor chip is defective or not. Upon completion of the scheduling, the ATE station conducts tests according to the scheduled test flow. The present invention can also be applied to software testing.

Description

    BACKGROUND
  • The present invention generally relates to an ATE (Automated Test Equipment). More particularly, the present invention relates to optimizing a test flow within an ATE.
  • An ATE station refers to any automated device that is used to test printed circuit boards, integrated circuits or any other electronic components. Agilent® Medalist i1000D, Agilent® Medalist i3070, Teradyne® Catalyst, Teradyne® Tiger, Teradyne® FLEX and Teradyne® UltraFLEX are examples of an ATE station.
  • A semiconductor manufacturing process requires a sequence of complex operations on each wafer to create the multi-layered physical and electrical structures that form a desired very large scale integrated circuit (VLSI). Defects in the process may occur due to operational, mechanical or chemical control errors, or due to environmental uncertainty, e.g., contamination during the process. After manufacturing of semiconductor chips on each wafer is complete, a set of comprehensive electrical (e.g., a test for power consumption of each semiconductor chip on each wafer), functional (e.g., a behavioral test on each semiconductor chip on each wafer), and characterization tests (e.g., tests measuring area or clock frequency of each semiconductor chip on each wafer) are performed to determine the actual wafer and semiconductor chip yield. These tests require several detailed measurements of various electrical parameters, using different test configurations. An automated test equipment (ATE) station operates a sequence of such tests on all pins on all semiconductor chips on each wafer. Additional stages of tests may then be performed by other ATE stations to simulate different environmental settings, measure different parameters, etc. An end-to-end test process (i.e., a process running all tests on every semiconductor chip) consumes a significant amount of time, and it is critical that the process be optimized appropriately. Optimization of the test process involves an appropriate scheduling of wafers and lots onto multiple ATE stations and across stages of test settings in order to optimally utilize testers and maximize test throughput.
  • It would be desirable for the optimization of the test process to include an optimization of the test flow within each ATE station, to minimize the time to detect any defects or failures on a semiconductor chip or a wafer.
  • SUMMARY OF THE INVENTION
  • The present invention describes a system and method for optimizing a test flow within an ATE station to minimize the time to detect any defects or failures on a semiconductor chip or a wafer.
  • In one embodiment, there is provided a computer-implemented system for optimizing a test flow within an ATE (Automated Test Equipment) station for testing at least one semiconductor chip. The test flow lists a plurality of test blocks or tests in a sequence according to which the plurality of test blocks or tests are run. A test block includes one or more tests that need to be run together in a specific sequence. The system determines one or more of: a test failure model, a test block duration and a yield model. The test failure model determines an order or sequence of the test blocks. The test block duration describes how long it takes for the ATE station to complete all tests in a test block. The yield model describes whether a semiconductor chip is defective or not. The system schedules the test flow based on said one or more of: the test failure model, the test block duration and the yield model. The system then automatically conducts tests in the plurality of test blocks on at least one wafer or at least one semiconductor chip according to the scheduled test flow.
  • In a further embodiment, the test failure model comprises: an independent failure model and a dependent failure model. The independent failure model represents that a success or failure of a test, or test block, does not depend on a success or failure of any other tests or test blocks run before or after in the test flow. The dependent failure model represents that the success or failure of a test depends on a success or failure of another test or test block run before or after in the test flow.
  • In a further embodiment, the independent and dependent failure models respect the test process constraint on particular test blocks.
  • In another embodiment, there is provided a computer-implemented method for optimizing a test schedule of programming codes. The test schedule lists a plurality of test blocks or tests in a sequence according to which the plurality of test blocks or tests are run. A test block includes one or more tests that need to be run together in a specific sequence. The method comprises a step of determining one or more of: an independent failure model and a dependent failure model, a step of ordering the plurality of the test blocks according to the determined failure model, and a step of testing the programming codes according to the order. The independent failure model represents that a success or failure of a first test does not depend on a success or failure of a second test run before or after the first test. The dependent failure model represents that the success or failure of the first test depends on a success or failure of the second test run before or after the first test.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings are included to provide a further understanding of the present invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. In the drawings,
  • FIG. 1 illustrates a flow chart describing method steps for an independent failure model according to one embodiment of the present invention.
  • FIG. 2 illustrates a flow chart describing method steps for a dependent failure model according to one embodiment of the present invention.
  • FIG. 3 illustrates a system diagram for running the failure models according to one embodiment of the present invention.
  • FIG. 4 illustrates a distribution of $N_b^f$ and $t_b$ for test blocks according to one exemplary embodiment of the present invention.
  • FIG. 5 illustrates an exemplary hardware configuration for running the failure models according to one embodiment of the present invention.
  • DETAILED DESCRIPTION
  • According to one embodiment of the present invention, a test floor may include a plurality of ATE stations (i.e., individual testers). An ATE station includes at least one test flow, which is defined by a test program (e.g., a test program 340 in FIG. 3). A test flow comprises a plurality of test blocks. The test flow lists the plurality of tests or test blocks in a sequence according to which they are run. A test block includes one or more individual tests, e.g., tests measuring leakage current and threshold voltage. The test block refers to a group of individual tests that could not or should not be separated. The individual tests in the test block need to be run together in a specific sequence. The individual testers may perform the individual tests.
  • According to one embodiment of the present invention, an ATE station receives as input one or more of: a test process structure, a test process constraint and a stopping criterion. The test process structure includes, but is not limited to: a list of tests to be performed or already performed on a wafer and/or a semiconductor chip, characteristics of the tests and individual durations of the tests. The test process constraint describes relationships between the tests, e.g., precedence and dependence relationships between the tests. The test stopping criterion describes when one or more of the tests will stop, e.g., stop on a single test fail or stop on successful completion of all tests. A computing system (e.g., a computer 305 in FIG. 3) running one or more failure models (e.g., the independent failure model illustrated in FIG. 1 and/or the dependent failure model illustrated in FIG. 2) optimally orders the tests to minimize the mean time (i.e., the average time) to detect a failure on a semiconductor chip or a wafer. The computing system may utilize history data (e.g., history data 300) to determine an optimal sequence of the tests.
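  • For illustration only, these inputs can be captured in a small configuration structure. The Python sketch below is ours, not the patent's; every field name in it is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class TestProcessInput:
        # test process structure: test name -> characteristics, incl. duration
        tests: dict
        # test process constraints, e.g., precedence between tests or blocks
        constraints: list
        # stopping criterion for the flow
        stopping: str = "stop_on_single_fail"   # or "run_all_tests"

    example = TestProcessInput(
        tests={"05034": {"duration_s": 0.05}, "07803": {"duration_s": 0.03}},
        constraints=[("05034", "before", "07803")],   # precedence within a block
    )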
  • According to one embodiment of the present invention, the computing system 305 determines one or more of: a test failure model, a test block duration and a yield model. The test failure model determines an order or sequence of the test blocks. The test failure model includes, but is not limited to: an independent failure model (i.e., method steps described in FIG. 1) and a dependent failure model (i.e., method steps described in FIG. 2). The independent failure model represents that a success or failure of a test does not depend on a success or failure of any other test(s) in the test flow. The dependent failure model represents that the success or failure of a test depends on a success or failure of another test(s) in the test flow. The test block duration describes how long it takes for the ATE station to complete all tests in a test block. The yield model describes whether a semiconductor chip is good or defective, e.g., labeling a semiconductor chip as defective if a test on the semiconductor chip resulted in a fail. Then, the computing system 305 schedules a test flow within an ATE station according to one or more of: the test failure model, the test block duration and the yield model. After the computing system 305 completes the scheduling, the ATE station (e.g., an ATE station 310 in FIG. 3) automatically conducts the tests included in the test flow on at least one wafer and/or at least one semiconductor chip according to the scheduled test flow.
  • According to a further embodiment, to schedule the test flow, the computing system 305 respects the test process constraints (e.g., particular test blocks and individual tests cannot be reordered) while running the failure model(s).
  • During a manufacturing test on a wafer or semiconductor module lot, an ATE station 310 tests all semiconductor chips on the wafer by running through its test flow (comprising electrical, functional and characterization tests) on each semiconductor chip to determine semiconductor chip yield and defects. Traditionally, this test flow is pre-determined and fixed for all the semiconductor chips. However, according to one embodiment of the present invention, the computing system 305 determines and/or modifies the test flow in real time, e.g., by executing the failure models. Additionally, the test flow is logically partitioned into test blocks, e.g., by grouping inseparable tests (i.e., tests that must be run together in a specific sequence, e.g., a test sequence for a memory test and repair) or by grouping tests that need to be run in a specific order in the test flow. For example, electrical tests for detecting open circuits and short circuits may be grouped together and run first. When a semiconductor chip fails specific electrical or functional tests, the semiconductor chip is considered to have zero yield (along with a reason or sort code), and testing on the semiconductor chip may be stopped after performing all the tests in the relevant test block. Unless further characterization is warranted, the remaining tests in other test blocks are not performed on the failed semiconductor chip and the ATE station starts testing the next semiconductor chip. An example of test names, test blocks, and times per test for a semiconductor wafer or chip test is shown in Table 1.
  • TABLE 1
    Semiconductor test table illustrating test blocks, tests in
    each test block and test duration of each test.
    Test Block Test name Test Duration
    Block 1 Test 05034 0.05 sec
    Test 07803 0.03 sec
    Block 2 Test 01111 0.04 sec
    Test 09093 0.01 sec
    Test 07044 0.005 sec 
    Block 3 Test 04342 0.02 sec
    Test 98743 0.05 sec
  • In Table 1, tests are grouped into three test blocks: test block 1 including tests 05034 and 07803, test block 2 including tests 01111, 09093 and 07044, and test block 3 including tests 04342 and 98743. Though Table 1 illustrates three test blocks, there may be any number of test blocks, each including one or more tests. Table 1 lists the test duration of each test. Table 1 may further include a separate column (not shown) for a sort code to indicate a sort associated with a failure. When a particular test results in a fail and a semiconductor chip is determined to be a defective chip, the ATE station may stop testing the semiconductor chip and identify it with a fail sort. When multiple tests are grouped into a test block, even if one test results in a fail, the testing may continue until the end of the test block before stopping and assigning a sort. For example, even if Test 05034 results in a fail on a semiconductor chip, the ATE station will also run Test 07803 on that semiconductor chip before stopping; other tests/test blocks need not be run on that semiconductor chip to determine that it is defective (this stop-after-block rule is sketched in code below). Additionally, in one embodiment, there may be fixed precedence constraints between blocks, e.g., the first five test blocks are required to be performed for an initialization of the ATE station. A goal of the test flow optimization performed by the computing system 305 is to reorder the test blocks, while satisfying the test process constraint(s), to minimize the mean time to detect failure(s) on a semiconductor chip or a wafer. The computing system 305 runs both the independent failure model and the dependent failure model.
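  • As a concrete illustration of the two ideas above, the following Python sketch (ours, not the patent's) encodes Table 1 as data and implements the stop-after-block rule; run_test stands in for whatever per-test hook a real ATE exposes and is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Test:
        name: str
        duration_s: float          # per-test duration in seconds

    @dataclass
    class TestBlock:
        block_id: int
        tests: list                # inseparable tests, run in this sequence

        @property
        def duration_s(self) -> float:
            # t_b: sum of the test durations of all tests within the block
            return sum(t.duration_s for t in self.tests)

    # Table 1 encoded directly
    blocks = [
        TestBlock(1, [Test("05034", 0.05), Test("07803", 0.03)]),
        TestBlock(2, [Test("01111", 0.04), Test("09093", 0.01), Test("07044", 0.005)]),
        TestBlock(3, [Test("04342", 0.02), Test("98743", 0.05)]),
    ]

    def run_flow_on_chip(flow, run_test):
        """Run blocks in order; on a fail, finish the current block, then stop."""
        for block in flow:
            block_failed = False
            for test in block.tests:
                if not run_test(test):    # hypothetical hook: True means pass
                    block_failed = True   # keep running to the end of the block
            if block_failed:
                return ("fail", block.block_id)   # zero yield, sort by block
        return ("pass", None)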
  • FIG. 1 illustrates method steps for the independent failure model according to one embodiment of the invention. Under the independent failure model, a failure of any test does not depend on a failure, success and/or any parametric measurements (e.g., a measurement of the power consumption or threshold voltage of a semiconductor chip) of any other test. For example, consider a number B of test blocks, numbered 1 through B according to a predefined order, e.g., an order used in an ATE station. A test block b has a test duration $t_b$ associated with it, corresponding to the sum of the test durations of all tests within the test block b. Consider also a test process constraint: the first F (≤ B) test blocks cannot be reordered, as they are used for setup or initialization of the ATE station. Thus, the first F test blocks need to be performed before any other tests. The goal of the test flow optimization (e.g., reordering tests based on a failure model) is to reorder test blocks F+1st through Bth appropriately.
  • The computing system 305 accesses and retrieves from a storage device (e.g., a storage device 300) history data (e.g., a history of N tested semiconductor chips) representing a typical distribution of fails and passes of tests. At step 100 in FIG. 1, for each test block b, the computing system 305 computes, based on the history data, $N_P$, the number of pass chips (i.e., the number of semiconductor chips that passed all tests in the test flow) and $N_b^f$, the number of chips that were failed by a test within the test block b. The computation done by the computing system 305 is based on

    $N = \sum_{b=1}^{B} N_b^f + N_P$    (1)
  • At step 110, the computing system 305 obtains or computes the test block duration of each test block b from the history data. At step 120, the computing system 305 schedules a test flow including the B test blocks to optimize the mean time to detect a failure for a set of N semiconductor chips, e.g., by ordering test blocks F+1st through Bth in decreasing order of the fraction $N_b^f / t_b$, where F+1 ≤ b ≤ B. In other words, the computing system 305 places the test block having the largest value of $N_b^f / t_b$ at the F+1st position in the test flow, the test block having the second largest value of $N_b^f / t_b$ at the F+2nd position, and so on, down to the test block having the smallest value of $N_b^f / t_b$ at the Bth position. Then, the ATE station conducts tests in the B test blocks according to the scheduled test flow. The ATE station or the computing system 305 may update the history data based on the results of the conducted tests. The computing system 305 may periodically, e.g., every 0.1 second, re-run steps 100-120 in FIG. 1.
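  • Steps 100-120 reduce to a single sort. The sketch below is a minimal Python rendering under our assumptions: it reuses the TestBlock structure from the earlier sketch, and fail_counts stands for the $N_b^f$ values computed from history data (the example counts are made up).

    def schedule_independent(blocks, fail_counts, f_fixed):
        """Independent failure model: order blocks by decreasing N_b^f / t_b.

        blocks:      list of TestBlock in the predefined order
        fail_counts: {block_id: N_b^f} derived from history data
        f_fixed:     the first F blocks do setup/initialization, keep position
        """
        fixed, movable = blocks[:f_fixed], blocks[f_fixed:]
        # Blocks most likely to catch a defective chip per second of test
        # time are scheduled as early as the constraints allow.
        movable = sorted(movable,
                         key=lambda b: fail_counts[b.block_id] / b.duration_s,
                         reverse=True)
        return fixed + movable

    # Example with the Table 1 blocks, hypothetical counts, no fixed blocks:
    # schedule_independent(blocks, {1: 120, 2: 45, 3: 80}, f_fixed=0)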
  • The scheduled test flow can be understood in a straightforward manner: the computing system 305 places the tests with the highest likelihood of catching defective semiconductor chips in the shortest amount of time as early in the test flow as possible. Thus, the computing system 305 ensures that the resulting test flow minimizes the time to detect a defective chip.
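  • A standard adjacent-exchange argument (our addition; the patent does not spell this out) shows why the ratio order is optimal under the independence assumption. For two adjacent movable blocks a and b, the order-dependent part of the total failure-detection time is

    \[ C(a,b) = N_a^f\,t_a + N_b^f\,(t_a + t_b), \qquad C(b,a) = N_b^f\,t_b + N_a^f\,(t_a + t_b), \]

    and therefore

    \[ C(a,b) \le C(b,a) \iff N_b^f\,t_a \le N_a^f\,t_b \iff \frac{N_a^f}{t_a} \ge \frac{N_b^f}{t_b}, \]

    so repeatedly swapping out-of-order neighbors improves the schedule until the blocks appear in decreasing order of $N_b^f / t_b$, which is exactly the order produced at step 120.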
  • According to one embodiment, the independent failure model assumes that the computing system 305 can use the history data of already tested or currently tested semiconductor chips or wafers to build an optimized test flow. The computing system 305 may also dynamically change the test flow in real time. To achieve the dynamic real-time update of the test flow, the computing system 305 tracks $N_b^f$ and $t_b$ and periodically (e.g., every 0.1 second) reorders the tests in the test flow, as sketched below. The update period depends on the variability in the data characteristics of the history data. Additionally, there may be different optimal test flows for different collections of wafers, based on their common manufacturing equipment or period of manufacturing. The optimal test flow can also differ for different spatial locations of semiconductor chips on the wafer, as semiconductor chips in edge locations may experience different defect types than semiconductor chips in center locations of the wafer. The history data may also include chips' test data from previous test operations (once the semiconductor chips have been properly identified through electronic chip identification (ECID)) and any newly acquired test data that is pertinent.
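  • A minimal sketch of such a periodic update loop (ours, not the patent's; get_new_results is a hypothetical feed of (block_id, failed) outcomes observed since the last update, and schedule_independent is the function sketched above):

    import time

    def dynamic_reorder_loop(blocks, f_fixed, get_new_results, period_s=0.1):
        """Track N_b^f and periodically re-run the independent-model ordering."""
        fail_counts = {b.block_id: 0 for b in blocks}
        while True:
            for block_id, failed in get_new_results():   # hypothetical feed
                if failed:
                    fail_counts[block_id] += 1
            # hand the refreshed flow to the ATE station
            yield schedule_independent(blocks, fail_counts, f_fixed)
            time.sleep(period_s)   # tune the period to data variability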
  • FIG. 2 illustrates method steps for the dependent failure model according to one embodiment of the present invention. Under the dependent failure model, the conditional probability of a specific test resulting in a fail is the probability of a failure given the test results (e.g., pass rates, failure rates, or parametric measurements) of known dependent tests (a test A depends on a test B if a success or failure of test A depends on a success or failure of test B). The conditional and marginal probabilities of the test results can differ, and therefore result in different scheduling schemes (i.e., the dependent failure model may produce a schedule different from the independent failure model's). For instance, if test 05034 and test 07803 in Table 1 are related tests, the computing system 305 may compute the following conditional probability: the probability that test 07803 results in a fail given that test 05034 results in a success. In order to optimize a test flow under the dependent failure model, the history data further captures sufficient information about test failure dependencies, including the conditional probabilities of tests resulting in failures given that previous tests resulted in passes or in outlier measurements.
  • The computing system 305 determines an optimal test flow by an iterative approach: at the end of iteration i, the computing system 305 determines the ith block to be scheduled in the test flow. Let $N_P$, $N_b^f$ and $N$ denote the number of pass chips, the number of semiconductor chips that were failed by a test within the test block b, and the total number of chips tested, respectively. Let $N_b^f(i)$ denote the number of semiconductor chips that were failed by the test block b given that the semiconductor chips did not fail at any of the test blocks 1, . . . , i, where i represents a test block number less than or equal to b. At step 200 in FIG. 2, for each test block b, the computing system 305 computes $N_b^f(i)$ based on the history data. The history data may include, but is not limited to, a summary of test effectiveness over time, i.e., for each test, the number of semiconductor chips for which the test resulted in a success and the number for which it resulted in a failure over a period of time. At step 210, the computing system 305 computes or obtains a test block duration $t_b$ for each test block b from the history data. At step 220, the computing system 305 respects the test process constraint, e.g., by scheduling the F blocks which cannot be reordered at the beginning of the test flow. The F+1st block to be scheduled is the test block b with the largest ratio $N_b^f(i)/t_b$, where F+1 ≤ b ≤ B; in other words, the computing system 305 selects the test block with the largest $N_b^f(i)/t_b$ to be scheduled as the ith test block. More generally, the F+i+1st block in the test flow is the test block with the largest ratio $N_b^f(F+i)/t_b$. The dependent failure model thereby minimizes the test completion time, e.g., by scheduling test blocks in decreasing order of $N_b^f(i)/t_b$.
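  • A greedy rendering of steps 200-220 might look as follows. This Python sketch is ours; chip_history (one pass/fail record per previously tested chip) is an assumed format the patent does not specify, and TestBlock comes from the earlier sketch.

    def schedule_dependent(blocks, chip_history, f_fixed):
        """Dependent failure model: iteratively pick the block with the
        largest conditional fails-per-second ratio N_b^f(i) / t_b.

        chip_history: one dict per previously tested chip, mapping
                      block_id -> True (pass) or False (fail).
        """
        scheduled = list(blocks[:f_fixed])    # F fixed setup blocks
        remaining = list(blocks[f_fixed:])
        survivors = list(chip_history)
        while remaining:
            # condition on passing every block scheduled so far
            survivors = [c for c in survivors
                         if all(c.get(b.block_id, True) for b in scheduled)]
            def ratio(b):
                n_bf_i = sum(1 for c in survivors
                             if c.get(b.block_id, True) is False)
                return n_bf_i / b.duration_s
            best = max(remaining, key=ratio)
            scheduled.append(best)
            remaining.remove(best)
        return scheduled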
  • After the computing system 305 completes scheduling the test flow, e.g., by running method steps 200-220, the ATE station conducts tests according to the scheduled test flow. The ATE station or computing system 305 may update the history data based on test results of the conducted tests. The computing system 305 may periodically, e.g., every 0.1 second, re-run steps 200-220 in FIG. 2.
  • According to one embodiment of the present invention, the computing system 305 uses history data to build an optimal schedule for testing as described above. The computing system 305 dynamically updates the scheduled test flow in real time. In order to achieve the dynamic real-time update, the computing system 305 tracks the quantities $N_b^f$, $N_b^f(i)$, $N_P$ and $t_b$ and periodically reorders the tests in the test flow. The update period depends on the variability in the data characteristics obtained during the testing.
  • FIG. 3 illustrates a system diagram depicting an ATE station 310, a storage device (e.g., a disk, a flash drive, an optical disc, etc.) storing history data 300, and the computing system 305 (e.g., a server, laptop, desktop, netbook, workstation, etc.). The ATE station 310 includes, but is not limited to: a tester interface 330 and a control interface 335. The tester interface 330 provides common and compatible interfaces between the ATE station 310 and a broader network (i.e., a network to which the ATE station 310 is connected). The control interface 335 controls the individual testers. The control interface 335 includes, but is not limited to: a tester control system 350, a test program 340 and a test data interface (TDI) 325. The tester control system 350 may have an API (Application Programming Interface) 355 that specifies an interface of the tester control system 350 and controls the behavior of objects specified in that interface. The test program 340 may also have an API 345 that specifies an interface of the test program 340 and controls the behavior of objects specified in that interface. The TDI 325 updates the history data 300 based on the test results of current tests 360, and also provides the test results of current tests 360 to the computing system 305. The computing system 305 runs sequencing algorithms 320 (e.g., the independent failure model and/or the dependent failure model) upon receiving the history data 300 and the test results of current tests 360, and outputs a new test flow 370, updating the current test flow, to the ATE station 310. Upon receiving the new test flow 370, the ATE station 310 runs tests according to the new test flow 370. The computing system 305 may apply the failure models to sorts (i.e., distinguishing defective semiconductor chips), functional tests, and any other tests used for evaluating a chip. Decisions and actions by the computing system 305 may be initiated mid-chip (i.e., by a test that follows another test during the testing of a single semiconductor chip) or after completion of testing on all semiconductor chips in a wafer, or on samples of those chips. Actions initiated by a mid-chip test immediately impact the testing of that single semiconductor chip.
  • According to one exemplary embodiment, the computing system 305 accesses and retrieves history data for N semiconductor chips, e.g., N=21,645, with N_p=13,822. In this history data, the distribution of N_b^f and t_b for all 33 test blocks (i.e., B=33), given a predefined order (i.e., testing sequentially through test block 1, test block 2, . . . , test block 33), is illustrated in FIG. 4. Under this predefined order, the mean time to detect a fail (the sum of test times for failing chips divided by the number of failing chips) on a semiconductor chip is 3.03 seconds, while the mean time for pass chips (i.e., chips that successfully pass all tests, and hence are tested by all blocks) is 7.57 seconds (the sum of test times for passing chips divided by the number of passing chips).
  • Given that the first five test blocks cannot be reordered, i.e., F=5, the computing system 305 schedules the rest of the test blocks (i.e., test block 6 to test block 33) in decreasing order of the ratio N_b^f/t_b, as sketched below.
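  • A minimal sketch of this fixed-prefix reordering, together with the mean-time-to-detect-a-fail computation quoted above, follows. The inputs are synthetic stand-ins (the patent's 33-block data set is not reproduced here), the helper names are illustrative, and each failing chip is assumed to be characterized by the first block it fails.

```python
# Sketch of the F = 5 case: pin the first five blocks, sort the rest by
# decreasing N_b^f / t_b, then evaluate the mean time to detect a fail
# (sum of test time spent on failing chips / number of failing chips).
def reorder_with_fixed_prefix(fail_counts, durations, fixed=5):
    head = list(range(fixed))
    tail = sorted(range(fixed, len(durations)),
                  key=lambda b: fail_counts[b] / durations[b], reverse=True)
    return head + tail


def mean_time_to_detect_fail(order, durations, failing_block_of_chip):
    """failing_block_of_chip lists, per failing chip, the block it fails at."""
    total = 0.0
    for bad_block in failing_block_of_chip:
        pos = order.index(bad_block)
        total += sum(durations[b] for b in order[:pos + 1])  # time to detection
    return total / len(failing_block_of_chip) if failing_block_of_chip else 0.0
```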
  • The computing system 305 then obtains the optimal test flow illustrated in Table 2, which shows the predefined order (Pre) and the optimal test flow (Opt).
  • TABLE 2
    Test block reordering
    Pre Opt
    1 1
    2 2
    3 3
    4 4
    5 5
    6 18
    7 25
    8 26
    9 24
    10 27
    11 7
    12 5
    13 13
    14 8
    15 9
    16 19
    17 17
    18 28
    19 6
    20 15
    21 11
    22 16
    23 21
    24 10
    25 12
    26 22
    27 29
    28 20
    29 30
    30 23
    31 31
    32 32
    33 33

    The optimal test flow leads to a mean time to detect a fail of 1.52 seconds, while not affecting the mean time for pass chips. Thus, the computing system 305 achieves a 49.8% reduction in the time to detect fail(s) ((3.03 − 1.52)/3.03 × 100 ≈ 49.8) and an overall reduction of the testing time of 14.2% (((3.03 + 7.57) − (1.52 + 7.57))/(3.03 + 7.57) × 100 ≈ 14.2), e.g., by running the independent failure model and/or the dependent failure model.
  • According to one embodiment of the present invention, the computing system 305 optimizes a test schedule of programming codes (e.g., code written in C/C++, Java®, .Net, etc.). The test schedule includes a plurality of test blocks. A test block comprises at least one test including, but not limited to: a unit test, an integration test, a regression test, a system test, a simulation test, a compile/build-time test, and a runtime test. The unit test is a software verification method that evaluates whether an individual unit (e.g., a class or function) of the programming codes meets its intended design or behavior. The integration test is a software testing method in which individual software units are combined and evaluated together as a group. The regression test is a software testing method for discovering software bugs that did not exist in a previous software version but emerged in the current software version; for example, a function that worked correctly in the previous version might stop working properly in the current version. The system test refers to testing performed on a complete system to evaluate the system's performance and functionality. The simulation test includes a simulation of the logic behavior of the programming code for various configurations of a software design. The compile/build-time test is a test done during compilation of the programming codes and includes, but is not limited to, checking the syntax of the programming codes. The runtime test evaluates whether the programming codes operate as intended.
  • In this embodiment, N_b^f refers to the number of programming codes that failed within a test block b, t_b refers to the test duration of test block b, and N_b^f(i) refers to the number of programming codes that failed given that those programming codes passed test blocks 1, 2, . . . , i.
  • In this embodiment, the computing system 305 determines whether the independent failure model or the dependent failure model is used for optimizing the test schedule. If there exist a plurality of dependencies between programming codes, e.g., methods or functions frequently call other methods or functions, the computing system 305 may choose the dependent failure model. If the programming codes are independent of each other, e.g., in unit testing, the computing system 305 may choose the independent failure model. The computing system 305 then arranges or orders the test blocks according to the chosen model and tests the programming codes in that order, e.g., as in the sketch below.
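  • For illustration, this model selection might be sketched as follows, reusing the schedule_dependent and reorder_with_fixed_prefix sketches given earlier; the has_dependencies flag is an assumed stand-in for whatever dependency information (e.g., a call graph) is available, and none of these names come from the patent.

```python
# Illustrative model selection for the software-test embodiment: correlated
# failures (e.g., cross-module calls) use the dependent model; isolated
# units (e.g., pure unit tests) use the independent model.
def choose_and_order(history, durations, has_dependencies, fixed_prefix=0):
    """history[c][b] is True when test object c failed block b (assumed)."""
    if has_dependencies:
        return schedule_dependent(history, durations, fixed_prefix)
    # Independent failures: order by decreasing N_b^f / t_b.
    fail_counts = [sum(row[b] for row in history)
                   for b in range(len(durations))]
    return reorder_with_fixed_prefix(fail_counts, durations, fixed_prefix)
```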
  • In a further embodiment, the computing system 305 optimizes a test schedule for the system test and/or the simulation test conducted on the programming codes. In this embodiment, a test block is a stage of the system test and/or the simulation test. Upon choosing a failure model (the independent failure model or the dependent failure model), the computing system 305 arranges or orders the test blocks (stages) in the system test and/or the simulation test according to the chosen failure model, and then conducts the system test and/or simulation test in that order.
  • In one embodiment, the method steps in FIGS. 1-2 are implemented in hardware or reconfigurable hardware, e.g., an FPGA (Field Programmable Gate Array) or a CPLD (Complex Programmable Logic Device), using a hardware description language (Verilog, VHDL, Handel-C, or SystemC). In another embodiment, the method steps in FIGS. 1-2 are implemented in a semiconductor chip, e.g., an ASIC (Application-Specific Integrated Circuit), using a semi-custom design methodology, i.e., designing a chip using standard cells and a hardware description language. Thus, the hardware, the reconfigurable hardware or the semiconductor chip performs the method steps described in FIGS. 1-2.
  • FIG. 5 illustrates an exemplary hardware configuration of a computing system 305 running and/or implementing the method steps in FIGS. 1-2. The hardware configuration preferably has at least one processor or central processing unit (CPU) 511. The CPUs 511 are interconnected via a system bus 512 to a random access memory (RAM) 514, a read-only memory (ROM) 516, an input/output (I/O) adapter 518 (for connecting peripheral devices such as disk units 521 and tape drives 540 to the bus 512), a user interface adapter 522 (for connecting a keyboard 524, mouse 526, speaker 528, microphone 532, and/or other user interface device to the bus 512), a communication adapter 534 for connecting the system 305 to a data processing network, the Internet, an intranet, a local area network (LAN), etc., and a display adapter 536 for connecting the bus 512 to a display device 538 and/or printer 539 (e.g., a digital printer or the like).
  • Although the embodiments of the present invention have been described in detail, it should be understood that various changes and substitutions can be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Variations described for the present invention can be realized in any combination desirable for each particular application. Thus, particular limitations and/or embodiment enhancements described herein, which may have particular advantages for a particular application, need not be used for all applications. Also, not all limitations need be implemented in methods, systems and/or apparatus including one or more concepts of the present invention.
  • The present invention can be realized in hardware, software, or a combination of hardware and software. A typical combination of hardware and software could be a general purpose computer system with a computer program that, when being loaded and run, controls the computer system such that it carries out the methods described herein. The present invention can also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which—when loaded in a computer system—is able to carry out these methods.
  • Computer program means or computer program in the present context include any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after conversion to another language, code or notation, and/or reproduction in a different material form.
  • Thus the invention includes an article of manufacture which comprises a computer usable medium having computer readable program code means embodied therein for causing a function described above. The computer readable program code means in the article of manufacture comprises computer readable program code means for causing a computer to effect the steps of a method of this invention. Similarly, the present invention may be implemented as a computer program product comprising a computer usable medium having computer readable program code means embodied therein for causing a function described above. The computer readable program code means in the computer program product comprises computer readable program code means for causing a computer to effect one or more functions of this invention. Furthermore, the present invention may be implemented as a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for causing one or more functions of this invention.
  • The present invention may be implemented as a computer readable medium (e.g., a compact disc, a magnetic disk, a hard disk, an optical disk, a solid state drive, a digital versatile disc) embodying computer program instructions (e.g., C, C++, Java, Assembly languages, .Net, binary code) run by a processor (e.g., Intel® Core™, IBM® PowerPC®) for causing a computer to perform method steps of this invention. The present invention may include a computer program product including a computer readable storage medium having computer readable program code embodied therewith. The computer readable program code performs one or more functions of this invention.
  • It is noted that the foregoing has outlined some of the more pertinent objects and embodiments of the present invention. This invention may be used for many applications. Thus, although the description is made for particular arrangements and methods, the intent and concept of the invention are suitable and applicable to other arrangements and applications. It will be clear to those skilled in the art that modifications to the disclosed embodiments can be effected without departing from the spirit and scope of the invention. The described embodiments ought to be construed as merely illustrative of some of the more prominent features and applications of the invention. Other beneficial results can be realized by applying the disclosed invention in a different manner or by modifying the invention in ways known to those familiar with the art.

Claims (25)

1. A computer-implemented method for optimizing a test flow within an ATE (Automated Test Equipment) station for testing at least one semiconductor chip, the test flow listing a plurality of test blocks or tests in a sequence according to which the plurality of test blocks or tests are run, a test block including one or more tests that need to be run together in a specific sequence, the method comprising:
determining one or more of: a test failure model, a test block duration and a yield model, the test failure model determining an order or sequence of the test blocks, the test block duration describing how long it takes for the ATE station to complete all tests in a test block, the yield model describing whether a semiconductor chip is defective or not;
scheduling the test flow based on said one or more of: the test failure model, the test block duration and the yield model; and
automatically conducting tests in the plurality of test blocks on at least one wafer or at least one semiconductor chip according to the scheduled test flow.
2. The computer-implemented method according to claim 1 further comprises:
receiving one or more of: a test process structure, a test process constraint and a stopping criterion, the test process structure including a list of the tests performed on the wafer and the semiconductor chip, characteristics of the tests and individual durations of the tests, the test process constraint describing relationships between the tests and the stopping criterion describing when one or more of the tests will stop.
3. The computer-implemented method according to claim 2, wherein the test failure model comprises: an independent failure model and a dependent failure model, the independent failure model representing that a success or failure of a test or test block does not depend on a success or failure of any other test or test block run before or after in the test flow, the dependent failure model representing that the success or failure of a test depends on a success or failure of another test or test block run before or after in the test flow.
4. The computer-implemented method according to claim 3, wherein the independent failure model comprises steps of:
obtaining history data from a storage device;
computing a value N_b^f representing a number of semiconductor chips failed within each test block b based on the history data;
computing the test block duration t_b of each test block b based on the history data; and
reordering the plurality of test blocks in a decreasing order of a ratio N_b^f/t_b.
5. The computer-implemented method according to claim 4, wherein the independent failure model further comprises steps of:
updating the history data after conducting the tests; and
periodically re-running the computing of the value N_b^f, the computing of the duration based on the updated history data, and the reordering.
6. The computer-implemented method according to claim 3, wherein the dependent failure model comprises steps of:
obtaining history data from a storage device;
for each test block b, computing a value N_b^f(i) based on the history data, the N_b^f(i) representing a number of semiconductor chips failed by the test block b given that the semiconductor chips passed all test blocks 1, . . . , i, the i representing a test block number which is less than b;
computing the test block duration t_b of each test block b based on the history data; and
selecting a test block with a largest N_b^f(i)/t_b to be scheduled as an ith test block in the test flow.
7. The computer-implemented method according to claim 6, wherein the dependent failure model further comprises steps of:
updating the history data after conducting the tests; and
periodically re-running the computing of the value N_b^f(i), the computing of the duration based on the updated history data, and the selecting.
8. The computer-implemented method according to claim 6, wherein the history data include test failure dependency information including a conditional probability of a test resulting in a failure given that a previous test resulted in a pass.
9. The computer-implemented method according to claim 3, further comprising:
enforcing the test process constraint when scheduling the test flow based on the independent and dependent failure models.
10. The computer-implemented method according to claim 9, wherein the test process constraint includes that particular test blocks cannot be reordered.
11. The computer-implemented method according to claim 1, wherein the scheduling is different for each different collection of wafers or for each different spatial location of semiconductor chips on the wafers.
12. A computer-implemented system for optimizing a test flow within an ATE (Automated Test Equipment) station for testing at least one semiconductor chip, the test flow listing a plurality of test blocks or tests in a sequence according to which the plurality of test blocks or tests are run, a test block including one or more tests that need to be run together in a specific sequence, the system comprising:
a memory device; and
a processor unit in communication with the memory device, the processor unit performs steps of
determining one or more of: a test failure model, a test block duration and a yield model, the test failure model determining an order or sequence of the test blocks, the test block duration describing how long it takes for the ATE station to complete all tests in a test block, the yield model describing whether a semiconductor chip is defective or not;
scheduling the test flow based on said one or more of the test failure model, the test block duration and the yield model; and
automatically conducting tests in the plurality of test blocks on at least one wafer or at least one semiconductor chip according to the scheduled test flow.
13. The computer-implemented system according to claim 12, wherein the processor unit further performs a step of:
receiving one or more of: a test process structure, a test process constraint and a stopping criterion, the test process structure including a list of the tests performed on the wafer and the semiconductor chip, characteristics of the tests and individual durations of the tests, the test process constraint describing relationships between the tests and the stopping criterion describing when one or more of the tests will stop.
14. The computer-implemented system according to claim 13, wherein the test failure model comprises: an independent failure model and a dependent failure model, the independent failure model representing that a success or failure of a test or test block does not depend on a success or failure of any other test or test block run before or after in the test flow, the dependent failure model representing that the success or failure of a test depends on a success or failure of another test or test block run before or after in the test flow.
15. The computer-implemented system according to claim 14, wherein the independent failure model comprises steps of:
obtaining history data from a storage device;
computing a value N_b^f representing a number of semiconductor chips failed within each test block b based on the history data;
computing the test block duration t_b of each test block b based on the history data; and
reordering the plurality of test blocks in a decreasing order of a ratio N_b^f/t_b.
16. The computer-implemented system according to claim 15, wherein the independent failure model further comprises steps of:
updating the history data after conducting the tests; and
periodically re-running the computing of the value N_b^f, the computing of the duration based on the updated history data, and the reordering.
17. The computer-implemented system according to claim 14, wherein the dependent failure model comprises steps of:
obtaining history data from a storage device;
for each test block b, computing a value N_b^f(i) based on the history data, the N_b^f(i) representing a number of semiconductor chips failed by the test block b given that the semiconductor chips passed all test blocks 1, . . . , i, the i representing a test block number which is less than b;
computing the test block duration t_b of each test block b based on the history data; and
selecting a test block with a largest N_b^f(i)/t_b to be scheduled as an ith test block in the test flow.
18. The computer-implemented system according to claim 17, wherein the dependent failure model further comprises steps of:
updating the history data after conducting the tests; and
periodically re-running the computing of N_b^f(i), the computing of the duration based on the updated history data, and the selecting.
19. The computer-implemented system according to claim 17, wherein the history data include test failure dependency information including a conditional probability of a test resulting in a failure given that a previous test resulted in a pass.
20. The computer-implemented system according to claim 14, wherein the processor unit further performs a step of:
enforcing the test process constraint when scheduling the test flow based on the independent and dependent failure models.
21. The computer-implemented system according to claim 13, wherein the test process constraint includes that particular test blocks cannot be reordered.
22. A computer program product for optimizing a test flow within an ATE (Automated Test Equipment) station, the test flow including a plurality of test blocks, a test block including a plurality of tests, the computer program product comprising: a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising steps of:
determining one or more of a test failure model, a test block duration and a yield model, the test failure model determining an order or sequence of the test blocks, the test block duration describing how long it takes for the ATE station to complete all tests in a test block, the yield model describing whether a semiconductor chip is defective or not;
scheduling the test flow based on said one or more of: the test failure model, the test block duration and the yield model; and
automatically conducting tests in the plurality of test blocks on at least one wafer or at least one semiconductor chip according to the scheduled test flow.
23. The computer program product according to claim 22, wherein the test failure model comprises: an independent failure model and a dependent failure model, the independent failure model representing that a success or failure of a test or test block does not depend on a success or failure of any other test or test block run before or after in the test flow, the dependent failure model representing that the success or failure of a test depends on a success or failure of another test or test block run before or after in the test flow.
24. A computer-implemented method for optimizing a test schedule of programming codes, the test schedule listing a plurality of test blocks or tests in a sequence according to which the plurality of test blocks or tests are run, a test block including one or more tests that need to be run together in a specific sequence, the method comprising:
determining one or more of: an independent failure model and a dependent failure model, the independent failure model representing that a success or failure of a first test does not depend on a success or failure of a second test run before or after the first test, the dependent failure model representing that the success or failure of the first test depends on a success or failure of the second test run before or after the first test;
ordering the plurality of the test blocks according to the determined failure model; and
testing the programming codes according to the order.
25. The computer-implemented method according to claim 24, wherein the independent failure model comprises steps of:
obtaining history data from a storage device;
computing a value N_b^f representing a number of semiconductor chips failed within each test block b based on the history data;
computing the test block duration t_b of each test block b based on the history data; and
reordering the plurality of test blocks in a decreasing order of a ratio N_b^f/t_b,
wherein the dependent failure model comprises steps of:
obtaining history data from a storage device;
for each test block b, computing a value N_b^f(i) based on the history data, the N_b^f(i) representing a number of semiconductor chips failed by the test block b given that the semiconductor chips passed all test blocks 1, . . . , i, the i representing a test block number which is less than b;
computing the test block duration t_b of each test block b based on the history data; and
selecting a test block with a largest N_b^f(i)/t_b to be scheduled as an ith test block in the test flow.
US12/784,142 2010-05-20 2010-05-20 Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure Abandoned US20110288808A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/784,142 US20110288808A1 (en) 2010-05-20 2010-05-20 Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure
US13/972,566 US9342424B2 (en) 2010-05-20 2013-08-21 Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/784,142 US20110288808A1 (en) 2010-05-20 2010-05-20 Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/972,566 Continuation US9342424B2 (en) 2010-05-20 2013-08-21 Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure

Publications (1)

Publication Number Publication Date
US20110288808A1 (en) 2011-11-24

Family

ID=44973184

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/784,142 Abandoned US20110288808A1 (en) 2010-05-20 2010-05-20 Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure
US13/972,566 Active 2031-02-21 US9342424B2 (en) 2010-05-20 2013-08-21 Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/972,566 Active 2031-02-21 US9342424B2 (en) 2010-05-20 2013-08-21 Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure

Country Status (1)

Country Link
US (2) US20110288808A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101674344B1 (en) 2016-03-07 2016-11-08 사단법인 한국선급 Helicopter Take-off and Landing Facilities with Movable Access Platform
US9921264B2 (en) 2016-04-20 2018-03-20 International Business Machines Corporation Method and apparatus for offline supported adaptive testing
US10289535B2 (en) * 2016-05-31 2019-05-14 Accenture Global Solutions Limited Software testing integration
US11132286B1 (en) 2020-04-16 2021-09-28 International Business Machines Corporation Dynamic reordering of test case execution
US12050946B2 (en) 2020-09-21 2024-07-30 International Business Machines Corporation Just in time assembly of transactions

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5442562A (en) 1993-12-10 1995-08-15 Eastman Kodak Company Method of controlling a manufacturing process using multivariate analysis
US6078189A (en) * 1996-12-13 2000-06-20 International Business Machines Corporation Dynamic test reordering
US6138249A (en) 1997-12-11 2000-10-24 Emc Corporation Method and apparatus for monitoring computer systems during manufacturing, testing and in the field
US6167545A (en) 1998-03-19 2000-12-26 Xilinx, Inc. Self-adaptive test program
US6442445B1 (en) 1999-03-19 2002-08-27 International Business Machines Corporation, User configurable multivariate time series reduction tool control method
US6711514B1 (en) 2000-05-22 2004-03-23 Pintail Technologies, Inc. Method, apparatus and product for evaluating test data
KR20040067875A (en) 2001-05-24 2004-07-30 테스트 어드밴티지 인코포레이티드 Methods and apparatus for semiconductor testing
US6549864B1 (en) 2001-08-13 2003-04-15 General Electric Company Multivariate statistical process analysis systems and methods for the production of melt polycarbonate
US7248939B1 (en) 2005-01-13 2007-07-24 Advanced Micro Devices, Inc. Method and apparatus for multivariate fault detection and classification
US20110288808A1 (en) * 2010-05-20 2011-11-24 International Business Machines Corporation Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure
US9310437B2 (en) * 2011-03-25 2016-04-12 Taiwan Semiconductor Manufacturing Company, Ltd. Adaptive test sequence for testing integrated circuits

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5369604A (en) * 1993-02-05 1994-11-29 Texas Instruments Incorporated Test plan generation for analog integrated circuits
US20020155628A1 (en) * 2001-04-20 2002-10-24 International Business Machines Corporation Method for test optimization using historical and actual fabrication test data
US6716652B1 (en) * 2001-06-22 2004-04-06 Tellabs Operations, Inc. Method and system for adaptive sampling testing of assemblies
US20030182604A1 (en) * 2002-02-20 2003-09-25 International Business Machines Corporation Method for reducing switching activity during a scan operation with limited impact on the test coverage of an integrated circuit
US20040093180A1 (en) * 2002-11-07 2004-05-13 Grey James A. Auto-scheduling of tests
US7047463B1 (en) * 2003-08-15 2006-05-16 Inovys Corporation Method and system for automatically determining a testing order when executing a test flow
US20060195747A1 (en) * 2005-02-17 2006-08-31 Ankan Pramanick Method and system for scheduling tests in a parallel test system
US20090192754A1 (en) * 2005-07-06 2009-07-30 Optimaltest Ltd. Systems and methods for test time outlier detection and correction in integrated circuit testing
US20100235136A1 (en) * 2009-03-11 2010-09-16 International Business Machines Corporation System and method for automatically generating test patterns for at-speed structural test of an integrated circuit device using an incremental approach to reduce test pattern count
US20130006567A1 (en) * 2009-12-15 2013-01-03 Wolfgang Horn Method and apparatus for scheduling a use of test resources of a test arrangement for the execution of test groups

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150058668A1 (en) * 2010-05-20 2015-02-26 International Business Machines Corporation Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure
US9342424B2 (en) * 2010-05-20 2016-05-17 International Business Machines Corporation Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure
US20150066414A1 (en) * 2013-08-30 2015-03-05 Chroma Ate Inc. Automatic retest method for system-level ic test equipment and ic test equipment using same
US8856774B1 (en) * 2013-10-24 2014-10-07 Kaspersky Lab, Zao System and method for processing updates to installed software on a computer system
US20170228301A1 (en) * 2014-10-29 2017-08-10 Advantest Corporation Scheduler
US10255155B2 (en) * 2014-10-29 2019-04-09 Advantest Corporation Scheduler
WO2016172950A1 (en) * 2015-04-30 2016-11-03 Hewlett-Packard Development Company, L.P. Application testing
US10489282B2 (en) 2015-04-30 2019-11-26 Micro Focus Llc Application testing
US10452508B2 (en) 2015-06-15 2019-10-22 International Business Machines Corporation Managing a set of tests based on other test failures
US10365320B2 (en) * 2015-07-22 2019-07-30 Renesas Electronics Corporation Failure estimation apparatus and failure estimation method
US20170023634A1 (en) * 2015-07-22 2017-01-26 Renesas Electronics Corporation Failure estimation apparatus and failure estimation method
US10664390B2 (en) * 2016-06-22 2020-05-26 International Business Machines Corporation Optimizing execution order of system interval dependent test cases
US20190146904A1 (en) * 2016-06-22 2019-05-16 International Business Machines Corporation Optimizing Execution Order of System Interval Dependent Test Cases
US10592370B2 (en) * 2017-04-28 2020-03-17 Advantest Corporation User control of automated test features with software application programming interface (API)
US20180314613A1 (en) * 2017-04-28 2018-11-01 Advantest Corporation User control of automated test features with software application programming interface (api)
US11042680B2 (en) * 2018-09-14 2021-06-22 SINO IC Technology Co., Ltd. IC test information management system based on industrial internet
CN109901050A (en) * 2019-02-25 2019-06-18 哈尔滨师范大学 A kind of three dimension system chip testing method for optimizing resources and system
CN114264930A (en) * 2021-12-13 2022-04-01 上海华岭集成电路技术股份有限公司 Chip screening test method
CN115794506A (en) * 2022-10-26 2023-03-14 北京北方华创微电子装备有限公司 Wafer scheduling method and electronic equipment
CN115600045A (en) * 2022-11-30 2023-01-13 中国人民解放军海军工程大学(Cn) Average detection time calculation method and system adopting universal detection tool for detection

Also Published As

Publication number Publication date
US20160154719A9 (en) 2016-06-02
US20150058668A1 (en) 2015-02-26
US9342424B2 (en) 2016-05-17

Similar Documents

Publication Publication Date Title
US9342424B2 (en) Optimal test flow scheduling within automated test equipment for minimized mean time to detect failure
Miczo Digital logic testing and simulation
Jiang et al. Defect-oriented test scheduling
US8954918B2 (en) Test design optimizer for configurable scan architectures
US7478028B2 (en) Method for automatically searching for functional defects in a description of a circuit
US7487477B2 (en) Parametric-based semiconductor design
EP3945448A1 (en) Methods and systems for fault injection testing of an integrated circuit hardware design
US6327556B1 (en) AT-speed computer model testing methods
US6993470B2 (en) Method of evaluating test cases in a simulation environment by harvesting
US9715564B2 (en) Scalable and automated identification of unobservability causality in logic optimization flows
US6941497B2 (en) N-squared algorithm for optimizing correlated events
US9404972B2 (en) Diagnosis and debug with truncated simulation
Bodhe et al. Reduction of diagnostic fail data volume and tester time using a dynamic N-cover algorithm
CN110991124B (en) Integrated circuit repairing method and device, storage medium and electronic equipment
EP1403651B1 (en) Testing integrated circuits
US9057765B2 (en) Scan compression ratio based on fault density
US9098637B1 (en) Ranking process for simulation-based functional verification
WO2008010648A1 (en) Matching method for multiple stuck-at faults diagnosis
US20200242206A1 (en) Apparatus and method of operating timing analysis considering multi-input switching
US20170010325A1 (en) Adaptive test time reduction
Bittel et al. Data Center Silent Data Errors: Implications to Artificial Intelligence Workloads & Mitigations
US8516322B1 (en) Automatic test pattern generation system for programmable logic devices
Koppolu et al. Hierarchical diagnosis of identical units in a system
CN110749813B (en) Test system and method for generating adaptive test recipe
Kasturi Rangan Design-dependent monitors to track the timing of digital CMOS circuits

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FAN, WEI;HALIM, NAGUI;JOHNSON, MARK C.;AND OTHERS;SIGNING DATES FROM 20100504 TO 20100518;REEL/FRAME:024431/0205

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE