Sequential Verilog Topics part 14


14.9 Exercises

1: A 4-bit full adder with carry lookahead was defined in Example 6-5 on page 109, using an RTL description. Synthesize the full adder, using a technology library available to you. Optimize for fastest timing. Apply identical stimulus to the RTL and the gate-level netlist and compare the output.

2: A 1-bit full subtractor has three inputs x, y, and z (previous borrow) and two outputs D (difference) and B (borrow). The logic equations for D and B are as follows:

D = x'y'z + x'yz' + xy'z' + xyz
B = x'y + x'z + yz

Write the Verilog RTL description for the full subtractor. Synthesize the full subtractor, using any technology library available to you. Optimize for fastest timing. Apply identical stimulus to the RTL and the gate-level netlist and compare the output. (A sketch of one possible RTL description appears after this exercise list.)

3: Design a 3-to-8 decoder, using a Verilog RTL description. A 3-bit input a[2:0] is provided to the decoder. The output of the decoder is out[7:0]. The output bit indexed by a[2:0] gets the value 1; the other bits are 0. Synthesize the decoder, using any technology library available to you. Optimize for smallest area. Apply identical stimulus to the RTL and the gate-level netlist and compare the outputs.

4: Write the Verilog RTL description for a 4-bit binary counter with a synchronous, active-high reset. (Hint: Use an always block with the @(posedge clock) statement.) Synthesize the counter, using any technology library available to you. Optimize for smallest area. Apply identical stimulus to the RTL and the gate-level netlist and compare the outputs.

5: Using a synchronous finite state machine approach, design a circuit that takes a single bit stream as an input at the pin in. An output pin match is asserted high each time the pattern 10101 is detected. A reset pin initializes the circuit synchronously. Input pin clk is used to clock the circuit. Synthesize the circuit, using any technology library available to you. Optimize for fastest timing. Apply identical stimulus to the RTL and the gate-level netlist and compare the outputs.
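As a reference for Exercise 2, one minimal RTL sketch of the full subtractor, coded directly from the equations given above, might look like the following; the module name full_subtractor and the continuous-assignment coding style are choices made for this sketch, not requirements of the exercise.

// 1-bit full subtractor, written directly from the given equations.
// D = x'y'z + x'yz' + xy'z' + xyz  (equivalent to x ^ y ^ z)
// B = x'y + x'z + yz
module full_subtractor(x, y, z, D, B);
  input  x, y, z;   // z is the previous borrow
  output D, B;

  assign D = (~x & ~y &  z) | (~x &  y & ~z) |
             ( x & ~y & ~z) | ( x &  y &  z);
  assign B = (~x & y) | (~x & z) | (y & z);
endmodule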
15.1 Traditional Verification Flow

A traditional verification flow consisting of certain standard components is illustrated in Figure 15-1. This flow addresses only the verification perspective. It assumes that logic design is done separately.

Figure 15-1. Traditional Verification Flow

As shown in Figure 15-1, the traditional verification flow consists of the following steps:

1. The chip architect first needs to create a design specification. In order to create a good specification, an analysis of architectural trade-offs has to be performed so that the best possible architecture can be chosen. This is usually done by simulating architectural models of the design. At the end of this step, the design specification is complete.

2. When the specification is ready, a functional test plan is created based on the design specification. This test plan forms the fundamental framework of the functional verification environment. Based on the test plan, test vectors are applied to the design-under-test (DUT), which is written in Verilog HDL. Functional test environments are needed to apply these test vectors. There are many tools available for generating and applying test vectors. These tools also allow the efficient creation of test environments.

3. The DUT is then simulated using traditional software simulators. (The DUT is normally created by logic designers. Verification engineers simulate the DUT.)

4. The output is then analyzed and checked against the expected results. This can be done manually using waveform viewers and debugging tools. Alternatively, analysis can be automated by the test environment checking the output of the DUT or by parsing the log files using a language like Perl. In addition, coverage results are analyzed to ensure that the tests have exercised
the design thoroughly and that the verification goals are met. If the output matches the expected results and the coverage goals are met, then the verification is complete.

5. Optionally, additional steps can be taken to decrease the risk of a future design re-spin. These steps include Hardware Acceleration, Hardware Emulation, and Assertion-based Verification.

Earlier, each step in the traditional verification flow was accomplished with Verilog HDL. Though Verilog HDL remains the dominant method for creating the DUT, many advances have occurred in the other steps of the verification flow. The following sections describe these advances in detail.

15.1.1 Architectural Modeling

This stage includes design exploration by the architects. The initial model typically does not capture exact design behavior, except to the extent required for the initial design decisions. For example, a fundamental algorithm like an MPEG decoder might be implemented, but the processor-to-memory bandwidth is not specified. The architect tries out several different variations of the model and makes some fundamental decisions about the system. These decisions may include the number of processors, algorithms implemented in hardware, memory architecture, and so on. These trade-offs will affect the eventual implementation of the target design.

Architectural models are often written in C and C++. Though C++ has the advantage of object-oriented constructs, it does not implement concepts such as parallelism and timing that are found in HDLs. Thus, creators of architectural models have to implement these concepts in their models. This is very cumbersome, resulting in long development times for architectural models. To solve this problem, architectural modeling languages were invented. These languages have both the object-oriented constructs found in C++ and the parallelism and timing constructs found in HDLs. Thus, they are well suited for high-level architectural models.

A likely advancement in the future is the design of chips at the architectural modeling level rather than at the RTL level. High-level synthesis tools will convert architectural models to Verilog RTL design implementations based on the trade-off inputs. These RTL designs can then go through the standard ASIC design and verification flow. Figure 15-2 shows an example of such a flow.
Figure 15-2. Architectural Modeling

Appendix E, Verilog Tidbits, contains further information on popular architectural modeling languages.

15.1.2 Functional Verification Environment

The functional verification of a chip can be divided into three phases:

• Block level verification: Block level verification is usually done by the block designer using Verilog for both design and verification. A number of simple test cases are executed to ensure that the block functions well enough for chip integration.

• Full chip verification: The goal of full chip verification is to ensure that all the features of the full chip described in the functional test plan are covered.

• Extended verification: The objective of extended verification is to find all corner case bugs in the design. This phase of verification is lengthy, since the set of tests is not predetermined, and it may continue past tape-out.

During the functional verification phase, a combination of directed and random simulation is used. Directed tests are written by the verification engineers to test a specific behavior of the design. They may use random data, but the sequence of events is predetermined. Random sequences of legal input transactions are used towards the end of functional verification and during the extended verification phase in order to simulate corner cases that the designer may have missed.

As Verilog HDL became popular, designers[1] started using Verilog HDL for both the DUT and its surrounding functional verification environment.

[1] In this chapter, the words "designer" and "verification engineer" have been used interchangeably. This is because logic designers perform block level verification and are often involved in the full chip verification process.

In a typical HDL-based verification environment:

• The testbench consisted of HDL procedures that wrote data to the DUT or read data from it (a sketch of one such procedure follows this list).

• The tests, which called the testbench procedures in sequence to apply manually selected input stimuli to the DUT and checked the results, were directed only towards specific features of the design as described in the functional test plan.
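As an illustration of this traditional style, here is a minimal, hypothetical testbench fragment; the DUT ports (clk, addr, wdata, valid) and the write_word task are invented for this sketch and are not taken from the text.

// Hypothetical directed testbench in the traditional HDL-based style:
// a task drives one write transaction, and the test calls it in sequence.
module tb;
  reg       clk = 0;
  reg       valid = 0;
  reg [7:0] addr, wdata;

  always #5 clk = ~clk;                 // free-running clock

  // HDL procedure that writes one data word to the DUT
  task write_word(input [7:0] a, input [7:0] d);
    begin
      @(posedge clk);
      addr  <= a;
      wdata <= d;
      valid <= 1;
      @(posedge clk);
      valid <= 0;
    end
  endtask

  // Directed test: manually selected stimuli applied in sequence
  initial begin
    write_word(8'h10, 8'hAA);
    write_word(8'h11, 8'h55);
    #20 $finish;
  end

  // dut u0 (.clk(clk), .addr(addr), .wdata(wdata), .valid(valid));  // assumed DUT instance
endmodule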
However, as design sizes exceeded a million gates, this approach became less effective because:

• The tests became harder and more time-consuming to write because of decreasing controllability of the design.

• Verifying correct behavior became difficult due to decreasing observability into internal design states.

• The tests became difficult to read and maintain.

• There were too many corner cases for the available labor.

• Multiple environments became difficult to create and maintain because they used little shared code.

To make the test environment more reusable and readable, verification engineers needed to write the tests and the test environment code in an object-oriented programming language. High-Level Verification Languages (HVLs) were created to address this need. Appendix E, Verilog Tidbits, contains further information on popular HVLs.

HVLs are powerful because they combine the object-oriented approach of C++ with the parallelism and timing constructs of HDLs and are thus best suited for verification. HVLs also help in the automatic generation of test stimuli and provide an integrated environment for functional verification, including input drivers, output drivers, data checking, protocol checking, and coverage. Thus, HVLs maximize productivity for creating and maintaining verification environments. Figure 15-3 shows the various components of a typical functional verification environment. HVLs greatly improve the designer's ability to create and maintain each test component. Note that Verilog HDL is still the primary method of creating a DUT.

Figure 15-3. Components of a Functional Verification Environment

In an HVL-based methodology, the verification components are simulated in the HVL simulator and the DUT is simulated with a Verilog simulator. The HVL simulator and the Verilog simulator interact with each other to produce the simulation results. Figure 15-4 shows an example of such an interaction. The HVL simulator and Verilog simulator are run as two separate processes and
communicate through the Verilog PLI interface. The HVL simulator is primarily responsible for all verification components, including test generation, input driver, output receiver, data checker, protocol checker, and coverage analyzer. The Verilog simulator is responsible for simulating the DUT.

Figure 15-4. Interaction between HVL and Verilog Simulators

The future trend in HVLs is to apply acceleration techniques to HVL simulators to greatly speed up the simulations. These acceleration techniques are similar to those used for Verilog HDL simulators and are discussed in Section 15.1.3, Simulation.

15.1.3 Simulation

There are three ways to simulate a design: software simulation, hardware acceleration, and hardware emulation.

Software Simulation

Software simulators were typically used to run Verilog HDL-based designs. Software simulators run on a generic computer or server. They load the Verilog HDL code and simulate the behavior in software. Appendix E, Verilog Tidbits, contains further information on popular software simulators.

However, when designs started exceeding one million gates, software simulations began to consume large amounts of time and became a bottleneck in the verification process. Thus, various techniques emerged to accelerate these simulations. Two techniques, hardware acceleration and emulation, were invented.

Hardware Acceleration

Hardware acceleration is used to speed up existing simulations and to run long sequences of random transactions during the functional and extended verification phases. In this technique, the Verilog HDL-based design is mapped onto a reconfigurable hardware box. The design is then run on the hardware box to produce simulation results. Hardware acceleration[2] can often accelerate simulations by two to three orders of magnitude.
[2] Also known as "Simulation Acceleration."

Hardware accelerators can be FPGA-based or processor-based. The simulation is divided between the software simulator, which simulates all Verilog HDL code that is not synthesizable, and the hardware accelerator, which simulates everything that is synthesizable (a small illustration of this split appears at the end of this subsection). Figure 15-5 shows the verification methodology with a hardware accelerator.

Figure 15-5. Hardware Acceleration

Verification components may be simulated using a Verilog simulator or an HVL simulator. The simulator and the hardware accelerator interact with each other to produce results. Hardware accelerators can cut simulation times from a matter of days to a few hours. Therefore, they can greatly shorten the verification timeline. However, they are expensive and need significant set-up time. Another drawback is that they usually require long compilation times, which means that they are most useful only for long regression simulations. As a result, smaller designs still employ software simulation as the simulation technique of choice. Appendix E, Verilog Tidbits, contains further information on popular hardware accelerators.
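As a rough illustration of this partitioning, the sketch below marks which parts of a trivial design-plus-testbench would typically map onto the accelerator and which would remain in the software simulator; the modules and signals are invented for this example.

// The synthesizable RTL below can be mapped onto the hardware accelerator.
module dff(input clk, input d, output reg q);
  always @(posedge clk)
    q <= d;                      // synthesizable: runs on the accelerator
endmodule

// Testbench-style code is not synthesizable, so it stays in the software
// simulator and drives the accelerated design.
module tb;
  reg  clk = 0;
  reg  d   = 0;
  wire q;

  dff u0 (.clk(clk), .d(d), .q(q));

  always #5 clk = ~clk;          // delays are not synthesizable
  initial begin
    #12 d = 1;
    #20 $display("q = %b", q);   // system tasks are not synthesizable
    $finish;
  end
endmodule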
Hardware Emulation

Hardware emulation[3] is used to verify the design in a real-life environment, with real system software running on the system. Hardware emulation is used during the extended verification phase, since the design must be fairly stable by that point.

[3] Also known as "In-Circuit Emulation."

One of the major benefits of hardware emulation is that hardware-software integration can start before the actual hardware is available, thus saving time in the schedule. By running real-life software, conditions that are very difficult to set up in a simulation environment can be tested. When the design is complete, software engineers often want to run their software on the design before the design is realized on a chip. Here are a few examples of what the designer of a chip might want to do before a chip is sent for fabrication:

1. Designers of a microprocessor want to try booting the UNIX operating system.

2. Designers of an MPEG decoder want to have live frames decoded and shown on a screen.

3. Designers of a graphics chip want actual frame renderings to show up on the screen in real time.

Running live systems with the design is an important verification step that reduces the possibility of bugs and a design re-spin. However, software simulators and hardware accelerators cannot be used for this purpose because they are too slow and do not have the necessary hooks to run a live system. For example, booting UNIX on a software simulation of a design may take many years, whereas hardware emulation can boot UNIX in a few hours. Figure 15-6 shows the setup of a typical emulation system. Emulation is done so that the software application runs exactly as it would on the real chip in a real circuit. The software application is oblivious to the fact that it is running on an emulator rather than the actual chip.

Figure 15-6. Hardware Emulation

Hardware emulators typically run at megahertz speeds. However, they are very expensive and require significant setup time. As a result, smaller designs still employ software simulation as the simulation technique of choice. Appendix E, Verilog Tidbits, contains further information on popular hardware emulators.

15.1.4 Analysis

An important step in the traditional verification flow is to analyze the design to check the following items:

1. Was the data received equal to the expected data?

2. Was the data received correctly according to the interface protocol?
To analyze the correctness of the data value and data protocol, various methods are used:

1. Waveform Viewers are used to see the dump files. The designer visually goes through the dump files from various tests and ensures that the data value and the data protocol are both correct.

2. Log Files contain traces of the simulation run. The designer visually looks at the log files from various tests and determines the correctness of the data value and the data protocol based on the simulation messages.

These methods are extremely tedious and time-consuming. Every time a test is run, the designer has to manually look through the dump files and log files. This method breaks down when a large number of simulation runs needs to be analyzed. Therefore, it is advisable to make your test environment self-checking. Two components are required for building a self-checking test environment:

1. Data Checker

2. Protocol Checker

Data checkers compare each value output from the simulation and check the value on-the-fly against the expected output (a minimal Verilog sketch of such a check appears at the end of this section). If there is a mismatch, the simulation can be stopped immediately to display an error message. If there are no error messages, the simulation is deemed to complete successfully. Scoreboards are often used to implement data checkers. They indicate the completion of transactions and verify that data is received on the correct interface. Scoreboards also ensure that, even if the protocol is followed, data is not lost in the DUT and that the data received is correct.

Protocol checkers check on-the-fly whether the data protocol is followed at each input and output interface. If there is a violation of the protocol, the simulation can be stopped immediately to display an error message. If there are no error messages, the simulation is deemed to complete successfully.

A self-checking methodology allows the designer to run thousands of tests without having to analyze each test for correctness. If there is a failure, the designer can probe further into the dump files and the log files to determine the cause of the error.
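A minimal data checker of this kind can be written even in plain Verilog. The sketch below is hypothetical: the ports (clk, out_valid, out_data) and the expected-value array are assumptions made for this example, not part of the text.

// Hypothetical on-the-fly data checker: compare each DUT output value
// against the next expected value and stop the simulation on a mismatch.
module data_checker(input clk, input out_valid, input [7:0] out_data);
  reg [7:0] expected [0:255];   // expected outputs, filled in by the test
  integer   rd_ptr;

  initial rd_ptr = 0;

  always @(posedge clk)
    if (out_valid) begin
      if (out_data !== expected[rd_ptr]) begin
        $display("DATA ERROR at %0t: got %h, expected %h",
                 $time, out_data, expected[rd_ptr]);
        $finish;                // stop immediately to display the error
      end
      rd_ptr = rd_ptr + 1;
    end
endmodule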
15.1.5 Coverage

Coverage helps the designer determine when verification is complete. Various methods have been developed and used to quantify the verification progress. There are two types of coverage: structural and functional.

Structural Coverage

Structural coverage deals with the structure of the Verilog HDL code and tells when key portions of that structure have been covered. There are three main types of structural coverage:

1. Code coverage: The basic assumption of code coverage is that unexercised code potentially bears bugs. However, code coverage checks how well the RTL code was exercised rather than the design functionality. Code coverage does not tell whether the verification is complete. It simply tells that verification is not complete until 100% code coverage is achieved. Therefore, code coverage is useful, but it is not a complete metric.

2. Toggle coverage: This is one of the oldest coverage measurements. It was historically used for manufacturing tests. Toggle coverage monitors the bits of logic that have toggled during simulation. If a bit does not toggle from 0 to 1, or from 1 to 0, it has not been adequately verified. Toggle coverage does not ensure completeness. It cannot assure that a specific bit toggle sequence that represents high-level functionality has occurred. Toggle coverage does not shorten the verification process. It may even prolong the verification process as engineers try to toggle a bit that cannot toggle according to the specification. Toggle coverage is very low-level coverage, and it is cumbersome to relate a specific bit to a high-level test plan item.

3. Branch coverage: Branch coverage checks if all possible branches in a control flow are taken. This coverage metric is necessary but not sufficient.

Functional Coverage

Functional coverage perceives the design from a system point of view. Functional coverage ensures that all possible legal values of input stimuli are exercised in all possible combinations at all possible times. Moreover, functional coverage also provides finite state machine coverage, including states and state transitions. Functional coverage can be enhanced with implementation coverage by inserting coverage points or assertions in the RTL code. For example, it enhances the functional coverage metric by determining if all transactions have been tested
while receiving an interrupt or when one of the internal FIFOs is full (a small hand-coded example is sketched below). Appendix E, Verilog Tidbits, contains further information on popular coverage tools.
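For instance, a coverage point of this kind can be approximated by hand in plain Verilog (dedicated coverage constructs are not used here); the signal names clk, txn_done, and fifo_full are assumptions made for this sketch.

// Hypothetical hand-coded coverage point: count transactions that complete
// while the internal FIFO is full. Signal names are assumed for this sketch.
module cov_fifo_full_txn(input clk, input txn_done, input fifo_full);
  integer hits;

  initial hits = 0;

  always @(posedge clk)
    if (txn_done && fifo_full)
      hits = hits + 1;           // this corner of the functionality was exercised

  // The testbench can read and print 'hits' before calling $finish to see
  // whether this condition was ever covered during the test run.
endmodule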