WHITE PAPER
The New Wave in Functional Verification:
Intelligent Testbench Automation
Mark Olen
System-Level Engineering Division
Mentor Graphics Corporation
www.mentor.com
Intelligent Testbench Automation
As design complexity increases, design verification becomes increasingly difficult. In fact, it is not uncommon for
project teams to spend more time verifying that a design will work than creating the actual design in the first place.
Project teams invest in a variety of tools to increase verification efficiency, including both formal analysis and functional simulation. Formal analysis performs useful checks to verify static and sometimes dynamic properties of a
design. Functional simulation exercises a design empirically to verify its response to external and sometimes internal
stimulus and data. However, verification by simulation requires testbenches - and creating good testbenches for a
complex design is a challenging endeavor. Issues such as growing chip complexities, HW/SW integration, and
module- to system-level design requirements have made it even more difficult to manually create a testbench that
achieves higher levels of functional coverage.
The need for more efficient testbench development has resulted in considerable investment by the EDA industry, with
a major emphasis on improving the languages used by engineers. Custom languages have been developed to provide
more programming capabilities for testbench development, and standards efforts are in place to promote common
testbench languages such as SystemVerilog or SystemC. While these languages do provide more power and flexibility for the testbench programmer, they do not reduce the time or complexity of programming the testbench.
An approach is needed that not only leverages the benefits of these new testbench languages, but applies a new layer
of technology to provide a steep functional gain in productivity and coverage. Intelligent testbench automation
enables design teams to leverage the benefits of common testbench languages while providing a next-generation level
of automation that increases functional coverage, which in turn, reduces overall testbench programming. This new
technology, called inFact, uses algorithms to automate generation of simulation sequences, data, and checks from a
concise behavioral description of a design's specifications. Intelligent testbench automation achieves a higher level of
functional coverage at the module, sub-module, and system level and finds more bugs faster than traditional methods.
When intelligent testbench technology is employed, project teams design with a higher level of confidence, design quality improves in a fraction of the time, and manufacturing respins are dramatically reduced.
So how does inFact work? The technology is based on rule sets. Rule sets show how high-level testing activities can be performed as a series of lower-level testing activities. For example, one rule in a router testbench might be "Packet -> Preamble -> MainHeader -> ExtensionHeader -> DataPayload -> Checksum." Other rules might show how "Preamble,"
"MainHeader," and the other components of "Packet" could be tested in terms of even lower-level testing activities.
The result is a highly layered testbench architecture that is easy to write, simple to understand, and highly reusable. At
the bottom of the rule hierarchy are individual "actions" which are no longer defined by rules. Instead, each action
corresponds to a task written in a standard testbench language such as SystemVerilog or SystemC, which implements
the action within a test.
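The layered rule/action structure described above can be sketched in a few lines. The following is a minimal illustrative sketch in Python, not the tool's actual interface (inFact testbenches are written in SystemVerilog or SystemC); the rule and action names below are invented, loosely following the "Packet" example:

```python
# Hypothetical rule set: each rule expands a high-level testing activity
# into a sequence of lower-level activities. Anything without a rule is a
# leaf "action," which in a real testbench would map to a SystemVerilog
# or SystemC task.
RULES = {
    "Packet": ["Preamble", "MainHeader", "ExtensionHeader", "DataPayload", "Checksum"],
    "MainHeader": ["SourceAddr", "DestAddr"],
    "DataPayload": ["DataWord", "DataWord"],
}

def expand(activity):
    """Recursively expand an activity into its sequence of leaf actions."""
    if activity not in RULES:          # no rule: this is a leaf action
        return [activity]
    seq = []
    for sub in RULES[activity]:
        seq.extend(expand(sub))
    return seq

print(expand("Packet"))
# -> ['Preamble', 'SourceAddr', 'DestAddr', 'ExtensionHeader',
#     'DataWord', 'DataWord', 'Checksum']
```

A few nested rules like these compactly describe the full action sequence for a high-level test, which is the layering the text refers to.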
When taken together, a modest number of rule sets can describe a very large set of tests. In fact, a well-chosen rule set
can compactly define how to run every test scenario for simulation, even when designs become large and complex.
While the application of rule sets, or Rules-Based Verification, is new to hardware design verification, it is actually an
established, time-proven technology. IBM pioneered the use of Rules-Based Verification in the late 1970s for compiler design and verification. Compilers are an excellent application for this verification technology, as they can be concisely expressed in Backus-Naur Form (BNF) grammars, which today are known as rule sets. First IBM, and then many
companies thereafter, successfully used rule sets to verify compilers and some types of software programs.
However, several aspects of Rules-Based Verification prevented serious consideration of its application to hardware
testing. First and foremost, the technology used to synthesize test sequences from rule sets was limited to
deterministic systems and expressions, hence the excellent application for compiler verification. But complex
hardware designs are non-deterministic in nature with behavior that cannot be modeled or predicted by deterministic
expressions. In addition, Rules-Based Verification was not supported in languages familiar to hardware designers.
Now, with more than 20 years of research and development in this technology, intelligent testbench automation is
ready for mainstream usage. This new technology applies concise rule sets to the non-deterministic behavior of
complex VLSI designs and is implemented in common testbench languages such as SystemVerilog and SystemC, in
addition to Verilog and C/C++.
Fundamental to intelligent testbench automation is that every simulation run has a purpose, which is captured as a
"verification goal." A verification goal makes references to relevant parts of a rule set to specify a particular
verification objective. When simulation begins, a seed-testbench starts up along with the simulator and begins
generating tests per the verification goal. As the simulation proceeds, the algorithms monitor the progress towards the
verification goal and intelligently adapt the testbench to synthesize tests to achieve the desired coverage. Engineers no
longer need to write hard constraints to prevent unwanted illegal tests or soft constraints to prevent duplicative testing.
The rule sets guide the testbench synthesis process to create only legal (or wanted illegal) sequences, and the
algorithms progress towards completion of the verification goal without repeating tests (unless repeated tests are
desired).
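The goal-driven, non-repeating generation loop described above can be sketched as follows. This is a minimal illustrative sketch in Python under invented names (a real inFact run adapts the testbench inside the simulator; here the "goal" is just a predicate over the sequences generated so far):

```python
import itertools

# Hypothetical rule set with alternatives: each activity maps to a list of
# possible expansions (lists of sub-activities). Leaves are actions.
RULES = {
    "Transfer": [["Setup", "Burst"], ["Setup", "Single"]],
    "Burst":    [["Beat", "Beat"], ["Beat", "Beat", "Beat", "Beat"]],
}

def all_sequences(activity):
    """Enumerate every legal action sequence the rules allow."""
    if activity not in RULES:
        return [[activity]]
    results = []
    for expansion in RULES[activity]:
        # cartesian product of the sub-activities' expansions
        parts = [all_sequences(sub) for sub in expansion]
        for combo in itertools.product(*parts):
            results.append([a for part in combo for a in part])
    return results

def generate_until_goal(root, goal):
    """Emit legal sequences, never repeating, until goal(emitted) is met."""
    emitted = []
    for seq in all_sequences(root):    # rule set admits only legal sequences
        emitted.append(seq)
        if goal(emitted):              # monitor progress toward the goal
            break
    return emitted

# Verification goal: observe both a Single and a Burst transfer.
done = lambda seqs: (any("Single" in s for s in seqs)
                     and any("Beat" in s for s in seqs))
tests = generate_until_goal("Transfer", done)
```

Because every sequence comes from the rule set, no illegal test is ever produced, and because the enumeration never revisits a sequence, no test is duplicated, which is the point the paragraph makes about no longer needing hand-written constraints.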
The verification goal also allows the engineer to focus the testbench synthesis process on producing whatever type of
test is currently required. This capability, known as "testbench redirection," is important because a project team may
need different tests at different times during the verification process. At one moment the team may want to investigate
a problem report, and they'll need a specific test sequence. Next, they are trying to give quick feedback on a design fix
and need to concentrate testing on a particular function. Minutes later, the team needs to run a regression test so they
can thoroughly test a wide variety of functionality. Each time the team runs a different test they only need to modify
the verification goal. The rest of the testbench does not change or need to be changed. This not only saves editing
time, but eliminates a significant source of testbench bugs.
In addition to testbench redirection, inFact offers "testbench retargeting." It is increasingly critical during the verification process to test complex designs at the module, subsystem, and system level. Usually, subsystem-level and system-level testing require completely new (or at least heavily edited) testbench code.
Because the rule sets are declarative in nature, they accurately describe the functional behavior of an interface or bus
protocol independent of that function's environment. The intelligent nature of the algorithms can adapt to different
design environments and simulation needs without requiring changes to the rule sets. As a result, the same testbench
modules can be used for verifying a module, subsystem, and complete system-level design.
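The retargeting idea, one declarative rule set driving different environments, can be sketched as follows. This is an illustrative Python sketch, not the tool's actual mechanism; the rule, action, and environment names are invented:

```python
# Hypothetical sketch: the rule set stays fixed; only the action library
# (mapping action names to environment-specific implementations) changes
# when the same testbench is retargeted from module to system level.
RULES = {"Write": ["Addr", "Data", "Ack"]}

def expand(activity):
    if activity not in RULES:
        return [activity]
    return [a for sub in RULES[activity] for a in expand(sub)]

def run(root, action_library, log):
    """Drive one test: execute each leaf action via the environment's library."""
    for action in expand(root):
        action_library[action](log)

# Module-level environment: drive pins directly (represented as log entries).
module_lib = {a: (lambda name: lambda log: log.append("pin:" + name))(a)
              for a in ("Addr", "Data", "Ack")}
# System-level environment: issue bus transactions instead.
system_lib = {a: (lambda name: lambda log: log.append("bus:" + name))(a)
              for a in ("Addr", "Data", "Ack")}

mod_log, sys_log = [], []
run("Write", module_lib, mod_log)   # same rule set...
run("Write", system_lib, sys_log)   # ...retargeted to a different environment
```

The rule set never changes between the two runs; only the bottom-layer action implementations differ, which is why the same testbench modules can serve module, subsystem, and system-level verification.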
inFact offers testbench reuse as well. When a project team creates a traditional testbench, its application across
multiple design projects is usually limited. One common problem is that normally, subsequent generations of a
design employ different configurations of system resources (otherwise why create next-generation designs?).
For instance, memory arrays, configurations and mapping, DMA channels, interrupts, and other system-level
hardware resources change from one design to the next. Further, different designs implement different subsets of
interface and protocol specifications. While an original design may implement a full range of AMBA AHB
functionality, including wrap transfers to optimize performance with the system's cache, a subsequent design
may implement an AMBA AHB subset with no wrap transfers because there is no cache in the system - perhaps
to reduce product price. To further complicate the reuse problem, two design teams might intentionally
implement functionality from a common specification in two very different ways. This is usually the result of
each team addressing the differences in its end-user market.
In all of these scenarios, it is extremely difficult for engineers to write highly flexible testbench code, while
getting all of the details right for each of the many possibilities.
Figure 1. Unlike traditional testbenches, intelligent testbenches can be used across different design projects, retargeted at different levels of a design's hierarchy, and redirected to generate different tests based on the verification engineer's goals for the next simulation run.
Figure 2. Without changing the way pin-level simulation activity is written today (stimulus, data, checks),
testbench code is reduced by about 75% overall with algorithmic description when compared with traditional methods.
inFact solves the testbench reuse problem with its rule set descriptions and an additional technology innovation that
manages design resources during simulation. It does this by separating the definition of a specification's functionality
from the decisions about what functionality to test next. In essence, each testbench module becomes highly reusable across multiple implementations of the same specification. The inherent architecture of a rule set models the functionality of a specification independent of any one design's unique implementation. Note in Figure 3 that the rule set, "Action
Library" and "Interface Description" are each derived completely from the functional specification. The information
required by the algorithms to generate test sequences that exercise a specific design's
implementation is separated from the rest of the testbench input. The algorithms use the DUT resources (described
above) to increase the density of stimulus across multiple busses and interfaces (thus stressing the design as much as
possible) without creating unintentional hardware conflicts. And finally, the verification engineer can adjust the algorithms' priorities and goals during simulation, synthesizing different sets of tests from the rule sets while responding to the design's dynamic behavior.
inFact makes it practical to reuse testbench modules on a variety of real-world designs without requiring project teams to incur the expense and risk of modifying unfamiliar code.
The technology also works exceedingly well for SoC design verification. In most cases, when building an SoC
testbench, engineers face two common challenges.
The first challenge relates to the fact that SoC designs must work in a variety of different configurations. Writing
code to assemble and operate the necessary configurations is prohibitively expensive and, as a result, only a few
configurations can be tested. But inFact addresses this quite well. It can recognize the various testbench modules connected to the SoC at the beginning of simulation and enable individual test generators to
automatically adapt to the presence of each module, thus ensuring that the overall usage scenarios produced by
the SoC testbenches are legal.
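The configuration-adaptation step described above can be sketched simply. This is an illustrative Python sketch with invented module and scenario names (in practice the discovery happens inside the simulator at time zero):

```python
# Hypothetical sketch: at simulation start the generator discovers which
# testbench modules are attached to the SoC and keeps only scenarios whose
# required modules are all present, so every generated scenario is legal
# for this particular configuration.
SCENARIOS = {
    "dma_to_mem": {"dma", "memory"},
    "cpu_boot":   {"cpu", "memory"},
    "usb_stream": {"usb", "dma"},
}

def legal_scenarios(connected_modules):
    """Return the scenarios runnable with the modules actually connected."""
    return sorted(name for name, needs in SCENARIOS.items()
                  if needs <= connected_modules)

# SoC configuration without a USB testbench module attached:
print(legal_scenarios({"cpu", "dma", "memory"}))
# -> ['cpu_boot', 'dma_to_mem']
```

The same testbench code thus runs unmodified on any configuration; the set of generated scenarios simply shrinks or grows with the modules present.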
Figure 3. By separating the functional specification from design implementation, a testbench module becomes highly reusable across multiple implementations of the same specification.
For more information visit: www.mentor.com
Copyright © 2007 Mentor Graphics Corporation. All Rights Reserved.
This document contains information that is proprietary to Mentor Graphics Corporation and may be duplicated in whole or in part by the original recipient for internal business purposes only, provided that
this entire notice appears in all copies. In accepting this document, the recipient agrees to make every reasonable effort to prevent the unauthorized use of this information. Seamless and Mentor Graphics are
registered trademarks of Mentor Graphics Corporation. All other trademarks mentioned in this document are trademarks of their respective owners.
The second challenge has to do with the programmable nature of SoC designs. The firmware interface of a
programmable SoC is a control interface that traditional pin-oriented testbenches simply cannot address. Without
controlling all design interfaces, thorough SoC testing is impossible. inFact solves this problem by creating
specific testbench modules that generate test firmware into the system memory for execution. By combining
firmware-generating testbench modules, project teams can construct powerful SoC testbenches that can exercise
any part of a design's complex functionality.
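Generating test firmware into system memory can be sketched as follows. This is an illustrative Python sketch; the opcodes, memory layout, and encoding are invented for illustration, not taken from any real instruction set:

```python
# Hypothetical sketch: a firmware-generating testbench module assembles a
# small test program into the system memory image, which the SoC's
# embedded CPU then fetches and executes during simulation.
OPCODES = {"LOAD": 0x01, "STORE": 0x02, "HALT": 0xFF}

def generate_firmware(memory, base, program):
    """Encode (mnemonic, operand) pairs into memory starting at base.

    Each instruction occupies two bytes: opcode, then an 8-bit operand.
    Returns the next free address after the generated program.
    """
    addr = base
    for op, operand in program:
        memory[addr] = OPCODES[op]
        memory[addr + 1] = operand
        addr += 2
    return addr

mem = bytearray(64)                      # stand-in for the system memory image
end = generate_firmware(mem, 0, [("LOAD", 0x10), ("STORE", 0x20), ("HALT", 0)])
```

By emitting programs like this rather than only driving pins, the testbench gains control of the firmware interface, the control path that pin-oriented testbenches cannot reach.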
In conclusion, the advantages of intelligent testbench automation have proven to be quite dramatic. The time needed for testbench programming is reduced while overall functional coverage is increased. Testbenches are reusable across multiple designs without requiring significant changes to the rule sets. Further, algorithmic test generation can be performed at the module, sub-module, and full-system design levels. And with testbench redirection capabilities, design teams can easily redirect tests at various times during verification.
Ultimately, inFact finds many more bugs faster - with less engineering time required. It cuts down on respins
caused by functional design errors that escape verification. It is a proven technology that is exactly
what companies need if they are to bring their verification costs in line with their design costs as they face ever-
growing and increasingly complex design challenges.
Figure 4. Actual results from a leading consumer electronics company.