A Novel Approach


Experiments on real examples show very promising results in terms of the performance and the accuracy of the coverage reports.

The remainder of this paper is organized as follows. First, the HDL model we use in our algorithm is described in Section 2. The operations of our dumpfile-based coverage analysis algorithm are presented in Section 3. In Section 4, we show some experimental results with five real designs. Finally, the conclusions and future works are given in Section 5.

2. HDL Modeling

Until now, most HDL simulators are still based on the event-driven approach. Therefore, it is convenient to model the HDL designs as some interacting events. In this work, we extend the event graph in [9] to a 3-D event graph, which is a directed graph G(V, E) with hierarchy, and use it as the HDL model.

In our model, an event is defined as the largest unit of an HDL program that will be executed atomically during simulation, i.e., the code in an event will be executed during one step of an event-driven simulation. Each vertex v ∈ V in the event graph represents an event in the HDL design, with some possible child nodes child(v) ⊆ V, which are generated by v. In HDL programs, if the code of an event j is put into another event i and separated by an @, wait, or delay (#d) statement, the event j can be regarded as being generated by the event i once the event i is executed. Those child nodes are generated temporarily and will be deleted after being executed. Each directed edge e(i, j) ∈ E, where i, j ∈ V, represents that the event j is triggered by the event i. In HDL programs, if any variables in the sensitivity list of event j are the left-hand-side variables of the assignments in the event i, we say event j is triggered by the event i.
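As a sketch of this model, the following C++ fragment shows one possible in-memory shape of the 3-D event graph; the type and field names (EventVertex, trigger_cond, and so on) are our own illustration and do not come from the paper.

    #include <memory>
    #include <string>
    #include <vector>

    // One vertex per event: the largest unit of HDL code executed atomically.
    struct EventVertex {
        std::string trigger_cond;            // e.g. "posedge clk" or an absolute time
        std::string code;                    // executable code of this event
        std::vector<EventVertex*> children;  // child(v): events generated by @/wait/#d
        std::vector<EventVertex*> triggers;  // e(i, j): events triggered by this event
    };

    // The graph owns its vertices; hierarchy (dashed arrows) and triggering
    // (solid arrows) are the two kinds of edges described in the text.
    struct EventGraph {
        std::vector<std::unique_ptr<EventVertex>> vertices;
    };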

In order to explain the event graph more clearly, an example of the event graph for a small Verilog program is shown in Figure 3. The example shown in Figure 3(a) can be partitioned into five events. Their relationship is built in the event graph shown in Figure 3(b). The dashed arrows represent their hierarchical relationship, and the solid ones represent their triggering relationship. Each vertex in the event graph has two rows: the upper row is the triggering condition of this event, and the lower row is the executable code of this event. Because the event graph may be changed dynamically, we only show the event graph at $time = 0. At $time = 0, E1 and E2 are executed. Then E11 and E21 are generated respectively at $time = 0, as shown in Figure 3(b). E11 will be executed after a constant delay once E1 is executed, so we set its triggering condition as the absolute time at which it will be executed. Because event E22 has not been executed yet at that time, it is not shown in Figure 3(b).

Figure 3: (a) A Verilog program. (b) The event graph at $time = 0.

In this simple example, we can directly show the executable code of each event in the event graph. However, in real cases, the executable code of an event could be very complex. Therefore, we use the statement tree to retain the complex actions of each event. A statement tree is a rooted, directed graph with vertex set N containing two types of vertices. A nonterminal vertex n has one or more children child(n) ⊆ N. A terminal vertex n, which represents a terminal block in the HDL code, has no children but a set of assignments, which are recorded in action(n). The entering conditions of each vertex n are recorded in cond(n).
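A minimal sketch of the statement tree, under the same illustrative naming as above (the field names are ours, not the paper's), might look like this in C++; cond(n) and action(n) become plain fields:

    #include <memory>
    #include <string>
    #include <vector>

    // A statement tree keeps the (possibly complex) actions of one event.
    struct StmtNode {
        std::string cond;                                // cond(n): entering condition
        std::vector<std::string> actions;                // action(n): assignments of a terminal block
        std::vector<std::unique_ptr<StmtNode>> children; // child(n): empty for terminal vertices
        bool is_terminal() const { return children.empty(); }
    };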

3. Dumpfile-Based Coverage Analysis

3.1 Value Change Dump Files

Value change dump (VCD) files [8] record every value change of selected variables during simulation. With the VCD feature, we can save the value changes of selected variables for any portion of the design hierarchy during any specified time interval. The VCD format is a very simple ASCII format. Besides some simple commands which define the parameters used in the file, the format is basically a table of the values of variables with timing information. Because the VCD files keep all the value information during simulation, they are widely used in many tools for post-processing, such as waveform viewers.
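For illustration, a fragment of a small, hypothetical VCD file for a one-bit signal clk could look as follows (the module name and the identifier code "!" are assumptions for this example); after the header commands, each "#n" line advances simulation time and each following line records one value change:

    $timescale 1ns $end
    $scope module example $end
    $var reg 1 ! clk $end
    $upscope $end
    $enddefinitions $end
    #0
    0!
    #5
    1!
    #10
    0!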

3.2 Dumpfile-Based Coverage Analysis

Since the dumpfiles record every value change of the selected variables, if we read a dumpfile from its beginning, we can obtain the value of each variable recorded in the dumpfile at any specific time. Then we can know exactly which code has been executed and what the execution count is, according to the dumpfiles with properly selected variables. By using these statistics, the code coverage can be easily calculated without running the simulation again.
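As a minimal sketch of this replay idea, the following reader tracks the current value of every dumped variable while scanning a VCD file from the beginning. It assumes the toy VCD shape shown above (single-line commands, one-bit variables) and is not the paper's actual implementation:

    #include <fstream>
    #include <iostream>
    #include <map>
    #include <string>

    int main(int argc, char** argv) {
        std::ifstream vcd(argc > 1 ? argv[1] : "dump.vcd");  // assumed file name
        std::map<std::string, char> value;  // identifier code -> current value
        long long now = 0;                  // current simulation time
        std::string line;
        while (std::getline(vcd, line)) {
            if (line.empty() || line[0] == '$') continue;    // skip commands
            if (line[0] == '#') {                            // time change
                now = std::stoll(line.substr(1));
            } else if (line.size() >= 2) {                   // e.g. "1!": value, code
                value[line.substr(1)] = line[0];
                std::cout << "t=" << now << "  " << line.substr(1)
                          << " -> " << line[0] << "\n";
            }
        }
        return 0;
    }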

Although the operations required for our DUCA (Dumpfile-based Coverage Analysis) algorithm are very similar to those in a simulation, the complexity is much lower because many operations can be skipped. In coverage analysis, the major concern is to decide which code is executed, not the value of each variable. Therefore, the operations whose results do not change the control flow are skipped. As a result, only a few operations have to be evaluated again in our DUCA algorithm, and the computing complexity can be significantly reduced.

3.2.1 Variable Selection

As described above, not all information is required to decide which part of the code is executed. In order to improve the efficiency of our DUCA algorithm, an analysis should be performed first to find the set of variables that affect the decisions. Actually, they are either the variables appearing in the triggering conditions of the vertices in the event graph or those in the entering conditions of the vertices in the statement trees. This variable selection operation also provides the capability of partial coverage analysis. If the user's concern is only the code coverage of a part of the entire code, we can select only those variables required for this part of the code, so that the analysis time can be further reduced.
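A sketch of this selection step, reusing the illustrative shapes from Section 2 in simplified form (conditions reduced to the lists of variables they mention; all names are ours):

    #include <set>
    #include <string>
    #include <vector>

    // Only variables appearing in a triggering condition (event graph) or an
    // entering condition (statement trees) can change the control flow, so
    // only they need to be recorded in the dumpfile.
    struct StmtNode { std::vector<std::string> cond_vars; std::vector<StmtNode*> children; };
    struct Event    { std::vector<std::string> trigger_vars; StmtNode* stmt_tree; };

    void collect(const StmtNode* n, std::set<std::string>& out) {
        if (!n) return;
        out.insert(n->cond_vars.begin(), n->cond_vars.end());
        for (const StmtNode* c : n->children) collect(c, out);
    }

    std::set<std::string> select_variables(const std::vector<Event>& events) {
        std::set<std::string> out;
        for (const Event& e : events) {
            out.insert(e.trigger_vars.begin(), e.trigger_vars.end());
            collect(e.stmt_tree, out);
        }
        return out;   // the set of signals to dump during simulation
    }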

3.2.2 Running Analysis

After conducting the variable selection step, we can start coverage analysis by tracing the changes of these signals in the dumpfiles. In order to explain the trace operation more clearly, an example is shown in Figure 4. Given a value change of a signal in the dumpfiles, we first find the fanouts of this signal, which are the code affected by this change, and then put marks on the associated positions in the event graph and the statement trees. If the affected code is the triggering condition of a vertex in the event graph, we mark it as Triggered; if the affected code is the entering condition of a vertex in the statement trees, we mark it as Modified. With those marks, we can avoid a lot of duplicate computations.

Figure 4: An illustration of the signal change tracing (T: Triggered, M: Modified).
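The marking step could be sketched as follows; the paper gives no code for it beyond the MarkTree call in Figure 5, so the fanout table and mark representation below are our own illustration:

    #include <map>
    #include <string>
    #include <vector>

    enum class Mark { None, Triggered, Modified };

    // A fanout entry points at a triggering condition in the event graph
    // and/or an entering condition in a statement tree (-1 = not applicable).
    struct Fanout { int event_id; int stmt_node_id; };

    void mark_tree(const std::string& signal,
                   const std::map<std::string, std::vector<Fanout>>& fanout_of,
                   std::map<int, Mark>& event_marks,
                   std::map<int, Mark>& stmt_marks) {
        auto it = fanout_of.find(signal);
        if (it == fanout_of.end()) return;
        for (const Fanout& f : it->second) {
            if (f.event_id >= 0)     event_marks[f.event_id] = Mark::Triggered;
            if (f.stmt_node_id >= 0) stmt_marks[f.stmt_node_id] = Mark::Modified;
        }
    }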

After all the signal changes at the same time are processed, we can traverse the statement trees of the triggered events to decide which code is executed. While traversing down the statement trees, if the current vertex is marked as Modified, the result of its entering condition may have been changed and has to be evaluated again to decide which path to go; if not, we can use the previous result to eliminate unnecessary computing. Then the used marks should be cleared for the use of the next labeling process. The condition evaluation of the vertices that are marked as Modified but not in the traversing paths is also skipped because it is unnecessary at that point. The Modified labels of these vertices are kept until they finally appear in the traversing paths.
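This traversal could be sketched as follows; eval_cond, taken_child, and the other names are ours, and eval_cond returns the index of the child branch to enter:

    #include <functional>
    #include <vector>

    struct StmtNode {
        bool modified = false;          // Modified mark from the tracing step
        int  taken_child = -1;          // branch chosen the previous time
        long exec_count = 0;            // statistics for the coverage report
        std::function<int()> eval_cond; // re-evaluates the entering condition
        std::vector<StmtNode*> children;
    };

    void traverse(StmtNode* n) {
        n->exec_count++;                        // this vertex is executed
        if (n->children.empty()) return;        // terminal block: record and stop
        if (n->modified) {                      // condition may have changed:
            n->taken_child = n->eval_cond();    //   evaluate again
            n->modified = false;                //   and clear the used mark
        }                                       // Modified marks off the path are kept
        if (n->taken_child >= 0)
            traverse(n->children[n->taken_child]);
    }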

3.2.3 Concurrent Events

Typically, there are a lot of concurrent events in an HDL program. In event-driven simulators, two events may occur at the same simulation time unit with a delta delay apart. They will be executed sequentially even though they occur at the same time unit. However, the delta delay is lost when recorded in the dumpfile. If we cannot recover this sequential relationship, we may use the wrong values when referencing the values of variables.

For this purpose, we keep the triggering information in the edges of the event graph. If an edge exists between two events, they must have a delta delay in between. Then we can easily recover the execution sequence between events by performing a breadth-first search (BFS) on the subgraph composed of the triggered events. The value of each variable in an event is updated after the evaluation of this event is completed. In other words, all value references take the value just before this event is evaluated. This makes the later events take the updated value even if they are in the same simulation time unit, separated only by delta delays.
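A sketch of this ordering step: given the events triggered at one simulation time (roots) and the triggering edges, a BFS yields an execution order consistent with the delta delays. The integer event ids and the adjacency representation are illustrative:

    #include <map>
    #include <queue>
    #include <vector>

    std::vector<int> delta_order(const std::vector<int>& roots,
                                 const std::map<int, std::vector<int>>& trigger_edges) {
        std::vector<int> order;
        std::map<int, bool> seen;
        std::queue<int> q;
        for (int r : roots) { q.push(r); seen[r] = true; }
        while (!q.empty()) {
            int e = q.front(); q.pop();
            order.push_back(e);          // evaluate e, then commit its values,
                                         // so successors see the updated values
            auto it = trigger_edges.find(e);
            if (it == trigger_edges.end()) continue;
            for (int succ : it->second)
                if (!seen[succ]) { seen[succ] = true; q.push(succ); }
        }
        return order;
    }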

3.2.4 Non-Blocking Assignments

In HDL programs, when a non-blocking assignment (<=) is executed, its effect does not appear until the end of this time unit. That means all the blocking assignments (=) will be executed prior to the non-blocking assignments, even if they are located after the non-blocking assignments in the HDL code. It can be viewed as if the simulator divides one time unit into two sequential parts: the blocking assignments are in the first part, and the non-blocking assignments are in the second part. Therefore, we use a two-pass traversal to traverse the event graph. The first traversal only handles the blocking assignments, and the non-blocking assignments are handled in the second traversal. Then all the issues introduced by non-blocking assignments can be resolved.
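The two-pass idea can be sketched as below, with illustrative types; for brevity, the right-hand side of a non-blocking assignment is assumed to have been evaluated already, and only its commit to the left-hand side is deferred to the second pass:

    #include <vector>

    struct Assign { bool non_blocking; int* lhs; int rhs_value; };

    void execute_time_step(std::vector<Assign>& assigns) {
        std::vector<Assign*> deferred;
        for (Assign& a : assigns) {            // pass 1: blocking (=) commits now
            if (a.non_blocking) deferred.push_back(&a);
            else *a.lhs = a.rhs_value;
        }
        for (Assign* a : deferred)             // pass 2: non-blocking (<=) commits
            *a->lhs = a->rhs_value;            // at the end of the time unit
    }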

3.2.5 Wait Statements

If there are wait or @ statements in the HDL program, some events will appear embedded in another event. Taking the Verilog program shown in Figure 3 as an example, event E21 is embedded in the statement tree of event E2 in the initial event graph. In other words, event E2 has no child node at the beginning. Once event E2 is triggered and the embedded event E21 is found in the traversing path of its statement tree, event E21 is generated as a child node of event E2, as shown in Figure 3(b). This kind of event is generated temporarily and will be deleted after being executed. Once event E21 appears, event E2 cannot be executed until event E21 is completed. Therefore, an extra step should be performed to check the existence of child nodes while searching for the triggered events in the event graph. If child nodes exist, this event cannot be executed until all the child events are completed.
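This extra check can be sketched as follows, again with illustrative types rather than the paper's code:

    #include <vector>

    struct Event {
        std::vector<Event*> children;   // child events generated by @/wait/#d
        bool completed = false;
    };

    // A triggered event may only execute once all of its temporarily
    // generated child events have completed.
    bool can_execute(const Event& e) {
        for (const Event* c : e.children)
            if (!c->completed) return false;
        return true;
    }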

3.2.6 Coverage Report

After traversing the dumpfiles, we can obtain the statistics on which code is executed and what the execution count is. However, there are many different coverage metrics which require different coverage reports. Therefore, a post-processing step is added to generate the reports for the user-specified coverage metrics. Most of the coverage reports can be easily obtained from those statistics with little computation overhead. This feature provides the capability of easily switching the report between different coverage metrics. Users who want to see the coverage report of another coverage metric with the same input patterns do not have to re-insert PLI tasks and re-run the long simulation again.
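As a sketch of how such a report can be derived from the statistics alone, the snippet below computes a statement-coverage percentage from an execution-count table; the exec_count map and the "covered means count > 0" rule are our illustration:

    #include <iostream>
    #include <map>
    #include <string>

    // Different metrics are just different views of the same statistics,
    // so switching metrics needs no re-simulation.
    double coverage(const std::map<std::string, long>& exec_count) {
        long covered = 0;
        for (const auto& [id, n] : exec_count)
            if (n > 0) covered++;
        return exec_count.empty() ? 0.0 : 100.0 * covered / exec_count.size();
    }

    int main() {
        std::map<std::string, long> stmts = {{"s1", 4}, {"s2", 0}, {"s3", 1}};
        std::cout << "statement coverage: " << coverage(stmts) << "%\n";  // ~66.7%
    }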

3.2.7 DUCA Algorithm

In order to give a summary of the above techniques, the overall pseudo code of our DUCA algorithm is shown in Figure 5.

    DUCA(hdl_code H, dump_file D, selected_coverage C) {
        // analyze the source code and build the event graph
        Event_Graph = CodeAnalysis(H);
        // traverse the dumpfile
        for each time change t in D {
            for each signal change s at time t in D {
                // mark nodes in the event graph and stmt trees
                MarkTree(Event_Graph, s, t);
            }
            // find executed code by the event graph
            Stats = ExecutedCode(Event_Graph, t);
        }
        // generate coverage report for selected coverage
        GenReport(Stats, C);
    }

Figure 5: The pseudo code of the DUCA algorithm.


4. Experimental Results

According to the proposed algorithm, we implement a prototype of DUCA in the C++ language. To conduct experiments, we applied DUCA to five real designs written in Verilog HDL. The design statistics are given in Table 1. The number of lines and the number of basic blocks [4] in the original HDL code are given in the columns lines and blocks respectively. The number of nodes in the event graph is given in the column events. The design ISDN-L2 is an ISDN level-2 receiver. The design PCPU is a simple 32-bit DLX CPU with a 5-stage pipeline. The design Divider is a divider with input and output latches. The design MEP is a block matching processor for motion estimation. The design MPC is a programmable MPEG-II system controller.

Table 1: Design Statistics.

The experimental results of DUCA, which are obtained on a 300 MHz UltraSparc II with the simulator Verilog-XL 2.5.16, are shown in Table 2 and Table 3. In Table 2, we demonstrate the performance of DUCA. The column vectors gives the number of random vectors used for simulation. The CPU times required for pure simulation and for coverage analysis are given in the columns simulation and analysis respectively. The gained speedup ratio, which is defined as (simulation / analysis), is given in the column speedup. In Table 3, we show three kinds of measured coverage data and the accuracy of those results. Because random patterns are used in those experiments, the coverage may not be very good, but this does not affect the accuracy of our tool.

As shown in Table 2, our DUCA takes less time than simulation on average. In other words, we could have better performance than instrumentation-based tools, because an extra simulation run is often necessary in those tools, so that their speedup ratio is surely smaller than 1. For the larger cases, we can obtain a good speedup ratio because a lot of operations can be skipped in the coverage analysis. For the smaller cases, the number of operations required in simulation is relatively small. Therefore, the number of skipped operations is also small, such that the speedup is not obvious. In addition, the Verilog simulator is a well-developed commercial tool which has been greatly optimized for speed. If our prototype could be carefully optimized by experienced engineers, we think the speedup would be even better.

Table 2: Experimental results of DUCA.

Table 3: The coverage data obtained in the experiments (columns: Design, Statement, Decision, Event, Accuracy).

5. Conclusions and Future Works

In this paper, we proposed a novel approach for functional coverage measurement in HDL. The usage flow of the proposed dumpfile-based coverage analysis can be much easier and smoother than that of existing instrumentation-based coverage tools. No pre-processing tool is required in our usage flow, so the design flow can be kept the same as usual. In addition, no extra code is inserted, such that the overhead on code size and simulator performance can be eliminated. Most importantly, the flexibility in choosing coverage metrics and measured code regions is increased. Only one simulation run is needed for any kind of coverage report because all of them can be derived from the same dumpfiles. The experiments on real examples show very promising results.

In this paper, the proposed algorithm is designed for measuring the execution-based coverage metrics such as statement coverage and decision coverage. However, this does not mean that the value-based coverage metrics, such as FSM coverage and toggle coverage, cannot be measured. Actually, those coverage metrics can be obtained from the same dumpfiles in much easier ways. Because the dumpfiles record the values of variables, if we retrieve the values of the desired variables, such as the state variables of an FSM, from the dumpfiles, we can easily obtain those coverage data. Therefore, measuring coverage from the dumpfiles could be a practical solution for most existing coverage metrics.
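As an illustrative sketch of this idea (the file name, the identifier code "!", the vector-change syntax, and the state count below are assumptions, not from the paper), FSM state coverage could be derived directly from the dumped state variable:

    #include <fstream>
    #include <iostream>
    #include <set>
    #include <string>

    int main() {
        std::ifstream vcd("dump.vcd");        // assumed dumpfile name
        std::set<std::string> visited;        // distinct state encodings seen
        std::string line;
        while (std::getline(vcd, line)) {
            // VCD vector changes look like "b0101 !": value, space, id code;
            // here "!" is assumed to be the FSM state variable.
            if (!line.empty() && line[0] == 'b' && line.back() == '!')
                visited.insert(line.substr(1, line.find(' ') - 1));
        }
        const int total_states = 8;           // assumed FSM size for illustration
        std::cout << "FSM coverage: "
                  << 100.0 * visited.size() / total_states << "%\n";
    }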

Although this algorithm can handle almost all descriptions accepted by the simulator, it may still fail when there is not enough information in the dumpfiles. Designers often dump all information into the dumpfiles in the debugging phase, but they often dump only a portion of the variables in the integration phase to save simulation time. In this situation, we need other techniques to recover the required information from the existing data such that the coverage analysis can still be performed. Developing such techniques and integrating them into the proposed coverage analysis algorithm is a subject of our future works.

References

[1] A. Gupta, S. Malik, and P. Ashar, "Toward Formalizing a Validation Methodology Using Simulation Coverage," 34th Design Automation Conference, Jun. 1997.
[2] M. Benjamin, D. Geist, A. Hartman, Y. Wolfsthal, G. Mas, and R. Smeets, "A Study in Coverage-Driven Test Generation," 36th Design Automation Conference, Jun. 1999.
[3] T.-H. Wang and C. G. Tan, "Practical Code Coverage for Verilog," International Verilog HDL Conference, Mar. 1995.
[4] D. Drako and P. Cohen, "HDL Verification Coverage," Integrated Systems Design Magazine, Jun. 1998. (http://www.isdmag.com/Editorial/1998/CodeCoverage9806.html)
[5] CoverMeter, Advanced Technology Center. (http://www.covermeter.com)
[6] CoverScan, Design Acceleration Incorporation. (http://www.designacc.com/products/coverscan/index.html)
[7] HDLScore, Summit Design Incorporation. (http://www.summit-design.com/products/hdlscore.html)
[8] Cadence Reference Manuals.
[9] R. S. French, M. S. Lam, J. R. Levitt, and K. Olukotun, "A General Method for Compiling Event-Driven Simulations," 32nd Design Automation Conference, 1995.
