This article is a summary of a paper presented at DVCon India 2015, jointly by Freescale Semiconductor and CircuitSutra Technologies.
Developing the virtual model of a CPU and using it effectively in a virtual platform is a complex task. A major contributor to this complexity is ensuring the correct behavior of the CPU model. A bug in a CPU model that goes undetected until a later stage usually has a high cost. However, bugs can be prevented by testing the model effectively.
In this paper we discuss the approach we followed to build a framework for such verification.
Execution of any CPU instruction depends on an input state, determined by system register settings and instruction operands, and can alter the state of the registers, memory, exceptions, and MMU during execution. The basic underlying principle of the framework is to compare the output state of the CPU model against that of a golden reference model for every CPU instruction that is executed. If the state of the CPU model matches that of the golden reference model, the behavior of the CPU model is taken to be correct.
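The per-instruction comparison described above can be sketched as a lock-step loop. This is a minimal, hypothetical illustration, not the paper's actual implementation: the state fields (`gpr`, `pc`, `flags`, `exception`) and the `step()`/`state()` interface are assumed names chosen for the example.

```python
# Hypothetical sketch: lock-step comparison of a CPU model under test
# against a golden reference model. Field and method names are illustrative.

def compare_states(dut_state, ref_state):
    """Return the list of state fields that differ between the two models."""
    mismatches = []
    for field in ("gpr", "pc", "flags", "exception"):
        if dut_state[field] != ref_state[field]:
            mismatches.append(field)
    return mismatches

def run_lockstep(dut, ref, program):
    """Execute each instruction on both models and compare output states."""
    for insn in program:
        dut.step(insn)   # execute one instruction on the model under test
        ref.step(insn)   # execute the same instruction on the reference
        diff = compare_states(dut.state(), ref.state())
        if diff:
            raise AssertionError(f"state mismatch after {insn}: {diff}")
```

A mismatch in any compared field immediately pinpoints the first instruction whose behavior diverges from the reference.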
One of the challenges in verifying the CPU model is ensuring that the test case data is comprehensive enough to cover as much of the model's behavior as possible. The solution to this problem lies in randomizing the test case data and the instruction set.
Our methodology uses the following variations of randomization:
- Instructions with valid operands and constrained system register settings, such that no exception or interrupt condition is encountered.
- Instructions with valid operands and random system register settings, to cover interrupt/exception scenarios.
- Instructions with random operands and random system register settings.
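The three randomization modes above might be sketched as follows. This is an assumed, simplified illustration: the opcode names, operand ranges, and the single `exceptions_masked` system register field are invented for the example and do not come from the paper.

```python
import random

# Hypothetical sketch of the three randomization modes. Opcodes, operand
# ranges, and system register fields are illustrative placeholders.
MODES = ("constrained", "random_sysregs", "fully_random")

def gen_test(mode, rng):
    insn = {"opcode": rng.choice(["ADD", "LDR", "STR", "B"])}
    if mode == "constrained":
        # Valid operands, system registers constrained: no exception/interrupt.
        insn["operands"] = [rng.randrange(0, 31) for _ in range(3)]
        insn["sysregs"] = {"exceptions_masked": True}
    elif mode == "random_sysregs":
        # Valid operands, random system registers: exercise interrupt/exception paths.
        insn["operands"] = [rng.randrange(0, 31) for _ in range(3)]
        insn["sysregs"] = {"exceptions_masked": bool(rng.getrandbits(1))}
    else:
        # Fully random operands and system registers (negative scenarios).
        insn["operands"] = [rng.randrange(0, 2**32) for _ in range(3)]
        insn["sysregs"] = {"exceptions_masked": bool(rng.getrandbits(1))}
    return insn
```

Seeding the random generator (e.g. `random.Random(seed)`) makes any failing test case reproducible, which is essential when a mismatch against the reference model must be debugged.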
In this section we discuss our verification framework, its constituents, and the test case scenarios.
There are four basic building blocks in the framework: the test generator, the CPU model to be verified, the golden reference model, and test output validation.
The test generator takes as input the ISA and a list of constraints that need to be considered while generating a test case. Constraints can be specific to an instruction type or applicable to a set of instructions. The constraint list varies with the ISA and with the test case scenarios being exercised.
Our framework considers constraints such as randomization of core registers and of instruction operands, for both valid and invalid instructions; handling of scenarios such as address alignment, address ranges, and self-modifying code when generating test cases for load/store instructions; and randomization of condition flags for conditional instructions. Specific cases, such as the 128-bit registers used by SIMD and floating-point instructions, are handled accordingly.
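As one concrete example of such a constraint, address alignment for load/store instructions could be sketched as below. The memory region bounds, access size, and opcode names are assumptions made for the illustration, not details from the paper.

```python
import random

def gen_load_store(rng, mem_base=0x1000, mem_size=0x1000, access_size=4):
    """Generate a load/store with an aligned address inside the test region.

    Hypothetical sketch: the address is aligned to the access size and kept
    within [mem_base, mem_base + mem_size) so no alignment or out-of-range
    fault is triggered in the constrained test suites.
    """
    offset = rng.randrange(0, mem_size // access_size) * access_size
    return {"opcode": rng.choice(["LDR", "STR"]),
            "addr": mem_base + offset}
```

For the negative test suites, the same generator would simply drop the alignment and range constraints to deliberately provoke faults.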
In this section we discuss the classes of test suites generated by our framework to ensure maximum coverage of the CPU model. The framework considers the following variations of test scenarios.
Single starter / jump starter test suites - This class of test cases tests the basic functionality of every instruction across the integer, floating-point, SIMD, and crypto categories: arithmetic, logical, load, store, branch, system, etc. Operands are constrained to avoid generating exceptions, and register data is randomized. Jump starter test suites exercise the same behavior as single starter suites, but repeatedly: hundreds of copies of the same instruction with different random operands executing back to back.
Instruction test mix suites - This class of test cases tests a random sequence of valid instructions. Operands are again constrained to avoid generating exceptions, and register data is also randomized.
Random single starter, jump starter, and test mix suites - The focus here is on randomization of the control registers and handling of exception states.
Test suites with random data and operands - All system registers are fully randomized, along with the code itself. These suites cover negative test case scenarios and exercise the exception handling mechanism.
Test suites for MMU validation - This class of test suites targets the MMU. Test cases cover scenarios such as individual control of the MMU at different exception levels, support for virtual addresses, validation of the different translation levels, and translation lookaside buffer (TLB) caching.
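One property an MMU test suite can check is that the TLB actually caches translations, i.e. that repeated accesses to the same page trigger only one page-table walk. The toy single-level model below is an assumption made for illustration; a real MMU test would cover the multi-level translation and per-exception-level controls mentioned above.

```python
# Hypothetical sketch: a single-level TLB model with a walk counter, used to
# check that repeated translations of the same page hit the cached entry.
class SimpleTLB:
    def __init__(self, page_table, page_size=4096):
        self.page_table = page_table  # virtual page number -> physical page number
        self.page_size = page_size
        self.entries = {}             # cached translations
        self.walks = 0                # page-table walks performed so far

    def translate(self, va):
        vpn, offset = divmod(va, self.page_size)
        if vpn not in self.entries:   # TLB miss: walk the page table
            self.walks += 1
            self.entries[vpn] = self.page_table[vpn]
        return self.entries[vpn] * self.page_size + offset
```

A validation test would translate several addresses within one page, assert the results against the expected physical addresses, and assert that the walk counter increments only on the first access to each page.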
Extensive testing of the CPU model helped us fix bugs at an early stage; as a result, software boot-up succeeded with minimal effort.