Virtual-C IDE by Dieter R. Pawelczak E-Mail: [email protected] Web: https://sites.google.com/site/virtualcide/

Test Suite Compiler (TSC)
INTRODUCTION TO TSC
Version 1.7, 2016-03-27


DISCLAIMER This software is provided "as is". In no event shall the authors or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption). The applications TSC, SimpleC and Virtual-C IDE (VIDE) are non-commercial products. The author cannot provide support of any kind.

LICENSE and LICENSE INFO

The TSC (Test Suite Compiler) is part of the Virtual-C IDE written by Dieter R. Pawelczak. It uses portions of the integrated Simple-C Compiler, also written by Dieter R. Pawelczak. Syntactically, it is adapted from the Google C++ Testing Framework [1], but does not use any portions of its code. Conceptually, it is based on the well-known xUnit frameworks and their test dialogs [2]. Portions of this manual are taken from the paper "A new Testing Framework for C-Programming Exercises and Online-Assessments" by Dieter Pawelczak, Andrea Baumann, and David Schmudde, published at FECS'15 in Las Vegas, Nevada, USA, July 27-30, 2015. Many thanks to Andrea and David for your contributions! The Virtual-C IDE (VIDE) serves as a platform for teaching the C programming language and for learning how to write programs in C. It is freeware for private, non-commercial use. For more information see "Virtual-C IDE - USER MANUAL". Virtual-C IDE © 2016 by Dieter R. Pawelczak

[1] Zhanyong Wan, et al. Google C++ Testing Framework – googletest: https://code.google.com/p/googletest/
[2] K. Beck, Test Driven Development: By Example. Addison-Wesley, 2002.


Introduction

The Virtual-C IDE (VIDE) is a programming environment especially for learning and teaching the C programming language. In addition to debugging capabilities, it provides visualizations of the data and control flow. C programs run within a virtual machine (VM), which allows running platform-independent ISO C programs. Access to the VM during execution enables a large set of program analyses. Programming exercises are therefore available for either offline testing of students' programming code or with integration into an online assessment system. Crucial for the success of programming exercises is the feedback given to the students: it should be easy to understand, transparent, logical and target oriented. The proposed method is testing based on the xUnit test framework: providing per unit a test suite with a hierarchical set of test cases and tests. Ideally, each test checks an individual characteristic of the function under test, so that a student can easily follow the testing results. A test suite (TS) is a single test file and consists of test cases (TC) based on one or more tests. For those of you who are familiar with googletest, the differences (apart from the programming language) compared to the Google C++ Testing Framework are:

• Random test data via the macros ARGR & RAND.
• Simplified verification of output parameters with the ARG macro.
• Predefined tests: function and reference function tests, performance tests, and I/O tests.
• No test fixture macro TEST_F. Instead, test fixtures are provided in the test prologue.
• Pseudo variables $out, $result, etc.
• FUT macro for simple error reporting.
• ASSERT_RANGE macro and ASSERT_EPSILON macro.
• Test and test case names can be string literals.
• Heap and data segment re-initialization per test case for full application tests, i.e. execution of main().
• Automatic prototyping for functions under test.
• GUI based selection/de-selection of test cases.
• Dynamic re-linking of C functions.

In general, opening a test suite (file extension .tsc) automatically shows the available test cases in the test dialog; compare Fig. 1. In order to edit a test suite, press the edit button [3]. The test dialog has its own run button. A test suite is always run on the currently active file in the editor (even if a project is defined in the build options). On run, first the C module under test (MUT) is compiled and information about the source code is stored in the static source information database (SSID); second, the test suite compiler (TSC) runs and already performs static tests on the MUT; and last, the output of the TSC is linked together with the C module and the MOPVM library extensions. The latter provide access to the virtual machine during execution. These initial steps are called the static tests. This step might already produce errors and prevent further testing. The resulting executable represents the actual test: during its run, it invokes functions of the C module under test, checks assertions and generates the test report. Test cases summarize the results of the underlying tests in a traffic light scheme (red: fatal errors, yellow: errors and green: no errors). Each test logs its results in the report. A progress bar and a numerical display of the passes/fails summarize the overall results. The user can deselect test cases to focus on specific test cases.

[3] During online assessments editing is not allowed.

Fig. 1: The test dialog


Assertions vs. expectations and warnings

In accordance with the Google C++ Testing Framework, the test framework distinguishes between assertions and expectations, as expressed by the macros ASSERT_* and EXPECT_*. An assertion must be met; a contradiction leads to an immediate failure of a test. An expectation might not be fulfilled: this leads to a failure of the test, but its execution continues. A typical example for expectation vs. assertion is a function modifying a pointer passed as a parameter. It is wrong if the student does not test for a NULL pointer; still, the functional part might be implemented correctly for valid pointers. In case the programming assignment does not rely on the NULL pointer test, this test could use EXPECT_*, whereas the proper functionality is tested by assertions. The behavior of assertions and expectations can be expressed with the macros FATAL() and ERROR(), respectively, to print a corresponding message in the report. Additionally, a test can always print warnings with the macro WARN().

Assertion                         Expectation                       Verifies
ASSERT_TRUE(condition)            EXPECT_TRUE(condition)            condition is true (value not 0)
ASSERT_FALSE(condition)           EXPECT_FALSE(condition)           condition is false (value is 0)
ASSERT_EQ(val1, val2)             EXPECT_EQ(val1, val2)             val1 == val2
ASSERT_NE(val1, val2)             EXPECT_NE(val1, val2)             val1 != val2
ASSERT_LT(val1, val2)             EXPECT_LT(val1, val2)             val1 < val2
ASSERT_LE(val1, val2)             EXPECT_LE(val1, val2)             val1 <= val2
ASSERT_GT(val1, val2)             EXPECT_GT(val1, val2)             val1 > val2
ASSERT_GE(val1, val2)             EXPECT_GE(val1, val2)             val1 >= val2
ASSERT_STREQ(val1, val2)          EXPECT_STREQ(val1, val2)          strings with equal content, as with strcmp(val1, val2) == 0, or both values NULL
ASSERT_STRNE(val1, val2)          EXPECT_STRNE(val1, val2)          string contents differ, as with strcmp(val1, val2) != 0, or one value NULL
ASSERT_STRCASEEQ(val1, val2)      EXPECT_STRCASEEQ(val1, val2)      strings with equal content (case insensitive) or both values NULL
ASSERT_STRCASENE(val1, val2)      EXPECT_STRCASENE(val1, val2)      string contents differ (case insensitive) or one value NULL
ASSERT_RANGE(val, val1, val2)     EXPECT_RANGE(val, val1, val2)     checks on the int/floating point value val if val1 <= val <= val2
ASSERT_EPSILON(val, val1, val2)   EXPECT_EPSILON(val, val1, val2)   checks on the int/floating point value val if val1-val2 <= val <= val1+val2

Comparing structs or unions is not (yet) supported. An assertion or expectation checks the condition once. In case it fails, it generates an entry in the report, e.g.:

failure: (val1) should be equal to: (val2), but result is:
failure: (val) is not in range from (val1) to (val2), result is:
failure: for string (val1) content (val2) is expected, but result is:

These macros can be used anywhere in the tests. Note that ASSERT macros use a return statement to abort further test execution; thus ASSERT macros are only allowed in procedures, i.e. functions declared as void. Note that from version 1.5 onwards, the macros FATAL, ERROR and WARN act like printf(); therefore a proper format string is required. In order to print unknown string content, use %s.

Test suite and test cases

A TSC file is a standard C file with extensions for the TSC compiler; compare Fig. 2 for the TSC syntax. A file typically consists of some helper functions, mock functions and the test cases. A test case is defined between a test prologue and a test epilogue. The test fixture is defined in the prologue; between prologue and epilogue, each test will use a clean test fixture. Optionally, TCs can share local variables between tests. For each test case, even the data and heap segments are restored. Consecutive tests inside a test case share data and heap segments, which is for instance important when testing a set of functions with respect to linked lists. The epilogue allows modifying the overall test case result, adding text to the test report or performing cleanups, e.g. freeing system resources. A simple test suite with a single test case and a single test could look like this:


#include <tsc.h>
extern int main(void);

_testPrologue("return of main")
TEST($name, "main() returns EXIT_SUCCESS") {
    ASSERT_EQ(main(), EXIT_SUCCESS);
}
_testEpilogue();

The example above simply tests if the function main returns the value EXIT_SUCCESS: the whole file represents a test suite with a single test case, named "return of main", and a single test, named "main() returns EXIT_SUCCESS", which checks the return value of main() against EXIT_SUCCESS. The pseudo variable $name refers to the current test case name. As the syntax in Fig. 2 explains, a name is either a string literal or an identifier. You can for instance use the pseudo variable $name to print the test case name in an error message.

FUT Macro

Execution of a function under test can be simplified by using the FUT macro. A string representation of the actual expression is stored in the pseudo variable $fut and can be used for customized error messages. The resulting value is stored in the pseudo variable $return. Please note that $return in the FUT macro is stored as long double and can even hold pointers to strings. This is simply because FUT is a preprocessor macro and not evaluated by the TSC.

#include <tsc.h>
extern int main(void);

_testPrologue("return of main")
TEST($name, "main() returns EXIT_SUCCESS") {
    FUT(main());
    if ($return != EXIT_SUCCESS)
        FATAL("%s should return 0, but "
              "result is %lf", $fut, $return);
}
_testEpilogue();

tsc-syntax:
    tsc-syntax declaration-list
    tsc-syntax testcase-list
    testcase-list

testcase-list:
    testcase
    testcase-list testcase

testcase:
    prolog test-list_opt epilogue

test-list:
    test
    test-list test

prolog:
    _testPrologue ( name )
    _testPrologue ( name , fixture )

fixture:
    compound-statement
    compound-statement , compound-statement

epilogue:
    _testEpilogue ( ) ;
    _testEpilogue ( ) compound-statement

test:
    funcTest
    funcRefTest
    IoTest
    TEST ( tc-name , name ) compound-statement

name:
    identifier
    string-literal

tc-name:
    name
    $name

funcTest:
    _funcTest ( identifier , integer-constant , expression )
    _funcTestEx ( identifier , integer-constant , expression )

funcRefTest:
    _funcRefTest ( identifier , identifier , integer-constant , expression )
    _funcRefTestEx ( identifier , identifier , integer-constant , expression )

IoTest:
    _IOTest ( ioparam , expression ) ;
    _IOTest ( ioparam , expression ) compound-statement

Fig. 2: The TSC syntax (referring to ISO C syntax)

The example above tests again the return value of main() but this time with a customized failure message.

Function injection

The Virtual-C IDE uses dynamic linking for testing, i.e. the VM maintains a look-up table for each function. A test case can modify the look-up table with the _relinkSymbol() function by overwriting the original function pointer with a function pointer to a mock or a test function. This allows replacing any function as long as the new function provides the same signature. The example below shows a test case on the scanf() function by replacing


scanf() with myscanf(). This function counts the number of calls as well as checks the format specifiers. The function injection is done here in the test fixture, thus it is active throughout the test case. Function injection can also be done on a per-test basis, i.e. each test can provide its own mock function. The original function linking is restored when running the next test case, thus the following test case will operate again on the original scanf() function unless it is re-linked again.

#include <tsc.h>
extern int main(void);

static int scanfCalls = 0;

int myscanf(const char* format, ...)
{
    va_list argptr;
    va_start(argptr, format);
    switch (scanfCalls) {
    case 0:
        if (!strstr(format, "%hhc"))
            ERROR("use the specifier %%hhc at first");
        break;
    case 1:
        if (!_strregex(format, "%l[aefg]"))
            ERROR("use the specifier %%lf second");
        break;
    default:
        break;
    }
    scanfCalls++;
    return vfscanf(stdin, format, argptr);
}

_testPrologue("Format specifiers",
    { _relinkSymbol(scanf, myscanf); /* call myscanf instead of scanf */ }
);
TEST($name, "main() with input: a .4711") {
    _redirectStdin("a\n.4711\n", 8);
    ASSERT_EQ(main(), EXIT_SUCCESS);
    if (scanfCalls != 2)
        FATAL("Your main() function should call scanf() twice!");
}
_testEpilogue();

The example above tests if the main() function calls scanf() with proper format specifiers for the input "a" and ".4711". The functions _redirectStdin() and _strregex() are part of the MOPVM-Extension Library, see Table 1.

Function tests

In addition to the TEST() macro, the TF defines the two macros _funcRefTest() and _funcTest(). Both macros allow a simple but powerful notation for function tests; the first requires a reference implementation for comparing the results. This short test description is possible by implicitly invoking assertions for given parameters and return values and by adding functional extensions to C. For every function test, the TSC uses reflection by querying the SSID for function return and parameter types. The following example shows an implementation of a simple TC including four test descriptions. The _funcRefTest() macro expects the name of the FUT, a corresponding reference function, a factor specifying the count of allowed instructions compared to the reference function, and the arguments for the function call. For each argument, an expression is given; arguments are comma separated. The ARGR() macro generates random test data in a given range for a specified type. By default, each ARGR() adds three tests (additive, not combinatorial); an optional fourth argument can specify the number of


tests. Thus the _funcRefTest() example actually creates six tests. Strings are treated differently, as char or wchar_t pointers are commonly used for character arrays; thus ARGR() creates a modifiable null-terminated string array of printable ASCII characters with a length corresponding to the given range (actually, the allocated memory will always refer to the maximum range; only the string length varies). In a function test with _funcTest(), you provide the number of allowed instructions together with a list of arguments. For functions, the last parameter is an expression of the expected return value; compare the last test in the following example. This macro can easily be used to test fixed values or to forgo a reference implementation.

#include <tsc.h>

/* a reference function */
int refMax(int a, int b) { return a > b ? a : b; }

_testPrologue("Maximum test",        // name for the report
    { int x = 17; }                  // a dummy test setup
);

/* a function test with a reference function */
_funcRefTest(max,                    // function under test (FUT)
    refMax,                          // reference function
    5,                               // 5 times more instructions are allowed for execution
    ARGR(int, INT_MIN, INT_MAX),     // random argument a
    ARGR(int, INT_MIN, INT_MAX)      // random argument b
);

/* a function test with an expected result */
_funcTest(max,                       // FUT
    0,                               // default instruction limit
    a = ARGR(int, INT_MIN, x),       // random argument a
    b = rand()%x,                    // random argument b
    a > b ? a : b                    // expected result
);

_testEpilogue();

Function output parameters can be tested with the ARG() macro. In case a non-constant pointer parameter is passed via the macro, the result is compared with the argument of the reference implementation or the optional fourth argument of ARG(); e.g. ARG(char*, s, 128, "Hello World") checks if the contents of s is "Hello World" after the function call. The third parameter defines the maximum allocated memory size. The next example shows a test case with three different simple tests on strings. The second test uses the ARG() macro to feed an in-/output parameter and to verify its contents afterwards. The third test uses ARG() in combination with a reference function.

/* a simple reference procedure */
void refStrAppend(char* x, const char* append) { strcat(x, append); }

/* a test case with variable s as test setup */
_testPrologue("Append string test", { char s[128] = "Hello "; } );

/* a function test with reference function */
_funcRefTest(strAppend, refStrAppend, 5,
    ARG(char*, s, 128), "World" );

/* same test with ARG parameter */
_funcTest(strAppend, 0, ARG(char*, s, 128, "Hello World"), "World");

/* tests with strings of random size */
_funcRefTest(strAppend, refStrAppend, 5,
    ARG(char*, s, 128),      /* first argument s */
    ARGR(char*, 0, 120));    /* a random string */
_testEpilogue();


Performance tests evaluate the number of instructions required for the execution of a FUT; the instruction counter can be queried with the MOPVM extension library function _getExecutionCount(). Each test initially resets the counter, so that the counter can be evaluated in a TEST() macro. To access the execution counter from other tests within a TC, the instruction counter is additionally stored in the pseudo variables $1 … $n for the n tests. So each test can compare the performance of the previous tests. The execution count of a test can also be queried by $testName, as long as the test name is specified as a regular identifier; compare e.g. $insertAnn in the following example. These variables can be evaluated either in a TEST() macro or in the epilogue. A simple and far from complete example shows a test case checking the performance of a binary tree insertion. The test case expects that insertions of leaves at the same depth require about the same count of instructions. The insertion of the root, nodes or leaves at different depths cannot be performed with the same count of instructions, as an insertion in an array for instance would allow.

typedef struct binTree {
    struct binTree* right, *left;
} tBinTree;

/* performance on insertion in a binary tree */
extern tBinTree* insert(tBinTree** root, char* value);

_testPrologue("Insert in binary tree",   // name
    {},                                  // empty clean test fixture
    { tBinTree *root = NULL; }           // shared test fixture
);
TEST($name, insertMike) { ASSERT_NE(insert(&root, "Mike"), NULL); }
TEST($name, insertFred) { ASSERT_NE(insert(&root, "Fred"), NULL); }
TEST($name, insertAnn)  { ASSERT_NE(insert(&root, "Ann"),  NULL); }
TEST($name, insertStan) { ASSERT_NE(insert(&root, "Stan"), NULL); }
TEST($name, insertRose) { ASSERT_NE(insert(&root, "Rose"), NULL); }

TEST($name, performanceLeaf) {
    double ratio = (double)$insertAnn / $insertRose;
    if (ratio < 0.99 || ratio > 1.01)
        FATAL("Ann & Rose inserted at different cost!");
}
TEST($name, performanceTree) {
    if ($1==$2 || $1==$3 || $1==$4 || $1==$5)
        FATAL("Insertion of root/node at same cost!");
    if ($2==$4 || $3==$5)
        FATAL("Insertion of node/leaf at same cost!");
}
_testEpilogue();

Far more detailed performance tests can be implemented using the statistic functions; see Table 1 in the Appendix. You can, for instance, use the function _readVMprimOperations() to query the primitive instructions executed during a test:

int divMod, mul, subAdd, logic, condBranch, gotos, calls;
_readVMprimOperations(&divMod, &mul, &subAdd, &logic, &condBranch, &gotos, &calls);
printf("Div/Mod operations: %d\nMul operations: %d\nSub/Add operations: %d\n"
       "Bitwise logic operations: %d\nConditional branches: %d\nJump instructions: %d\n"
       "Function calls: %d\n", divMod, mul, subAdd, logic, condBranch, gotos, calls);

I/O Tests

A console C program typically reads data from stdin and prints results to stdout. I/O tests can be performed on functions or whole programs. The MOPVM extensions library allows simple redirection of stdin and stdout with the functions _redirectStdin() and _redirectStdout(), respectively. Both functions plus assertions can be combined with the _IOTest macro. In addition to the FUT, it requires a string literal as input for stdin. Instead of the NUL character, EOF is passed to the application. The optional third and further arguments present stdout. This is a list of string literals representing a regular expression on the expected or (with the ! operator) unexpected output plus


a descriptive error message. Alternatively, the test can have a body for an explicit test definition: the pseudo variable $return refers to the return value of the FUT, whereas $out can be used to check stdout; compare the following example.

/* an I/O test case for Fibonacci numbers */
_testPrologue("Fibonacci output");

/* simple positive I/O test */
_IOTest(main,                  // call main()
    "5\n8\n10\n",              // input is 5 8 10 plus enter
    "\\b5\\b",                 // regular expression, output should contain 5
    "\\b21\\b",                // 21 and
    "\\b55\\b",                // 55 in that order
    "Wrong output for the input 5, 8 and 10"   // error message
);

/* simple negative I/O test */
_IOTest(main,
    "11\n",                    // input is 11 plus enter
    !"55",                     // output should not contain 55
    "For input 11 your program should not print 55!"
);

/* explicit checks on stdout */
_IOTest(main,                  // call main()
    "3\n14\n-1\n"              // input is 3 14 -1 plus enter
) {
    if ($return == 0)          // check return value of main
        WARN("main() should return EXIT_FAILURE for -1.");
    if (!_containsRegEx("\\b2\\b[^3]*377\\s", $out))
        FATAL("Fibonacci of 3 and 14 expected!");
    if (!_containsRegEx("error|invalid|illegal", $out))
        FATAL("Error message expected for -1");
}
_testEpilogue();

Utility RAND()

The utility macro RAND() generates random data at an arbitrary place in the test specification. RAND() works on tokens, i.e. you can insert data anywhere in the code. You can use RANDS() to create a string representation of the data. The pseudo variable $rand represents the last value generated by RAND(); thus it allows referring to the value of RAND() or RANDS(). The pseudo variable $rands holds the string representation of the data, which is suitable if you want to check the output of an application, or if you want to print the test data. The macros RAND()/RANDS() expect a minimum and a maximum value. The type is determined from the first parameter, e.g. RAND(1, 10) generates an integer literal between 1 and 10, whereas RAND(1.0, 10.0) produces a floating point literal, and RAND('a', 'z') a character literal. You can use the macro as often as you wish; please note that $rand and $rands just refer to the last random literal [4].

/* an I/O test case using the RAND() macro */
_testPrologue("Entering your age");
_IOTest(main,
    RANDS(1,99) "\n",   // input is a number between 1 and 99 plus enter
    $rands,             // output should contain the age
    "Your program should print the age, that was entered!"
);
_testEpilogue();

[4] RAND & RANDS have been introduced for tests without a reference implementation. Thus more than one random parameter is typically not suitable for writing tests.


Editing test suites

Opening a .tsc file will open the test dialog. You can press Edit to edit the test file. To run the .tsc file, select the file under test in the editor and press the run button in the test dialog. Alternatively, you can use the menu Build/Compile while editing the .tsc file in the editor. This will compile the .tsc file and run the test on the C file in the left or right editor tab. The left tab is prioritized, as the right editor tab is often used for the TSC output. The results are shown in the test dialog.

Debugging test suites

Directly debugging tests is currently not supported and an open issue for future versions. A work-around is debugging the generated C file: for debugging test suites, you need to enable the Expert Mode of the Virtual-C IDE in the menu Settings. The Advanced Settings allow opening the resulting C file: toggle the checkbox Open in editor in the tab Tsc. Running a .tsc file will then pop up the generated file in the editor. Now select the item Mopcheck Target in the Build menu. Debugging this file will start at the function _mopcheck() instead of main(), so that a main() function can be called as a test function. You can either use the build options to combine the module under test with the generated C file or simply include the module under test in the generated C file. Please note that clearing the data segment is not fully supported in debug mode, nor is accessing compiler attributes, i.e. queryXXXAttributes().


APPENDIX

Tab. 1: MOPVM extensions library Version 1.5 (header file mopvmex.h)

Statistic
  int _readVMprimOperations(int *divMod, int *mul, int *subAdd, int *logic, int *condBranch, int *gotos, int *calls);
      Reads statistic information on the executed processor commands from the start of the test up to the call of the query function.
  int _readVMmemOperations(int *write, int *read);
      Reads statistic information on the executed memory operations.
  void _resetVMStatistics(void);
      Resets the statistic information.
  void _queryAttributes(char *dest, const char *key);
      Queries attributes (key) from the compiler/linker and copies them to (dest). Keys are:
      - globals: number of global variables
      - functions: total number of functions
      - floatOps: total number of floating point operations
      - maxDepth: max loop depth
      - calls: total number of function calls
      - xxx_Calls: number of function calls inside the function named xxx
      - XXXX: value of a #define which is an int constant, e.g. #define XXXX 10
  int _queryIntAttributes(const char *key);
      Convenient function to directly read integer attributes.
  int _getWarnCount(void);
      Number of calls to WARN().
  int _getFatalCount(void);
      Number of calls to FATAL().
  int _getErrorCount(void);
      Number of calls to ERROR().
  int _initMopCheck(void);
      Clears all counters and prints FUNCTIONAL TEST to the output window.

I/O
  void _redirectStdin(const char *buffer, size_t len);
      Redirects stdin to a string buffer. After len bytes, EOF is sent (implicitly called in an IOTest). Note that you should call fflush(stdin) before _redirectStdin() to clear the EOF flag.
  void _redirectStdout(char *buffer, size_t sizeOfBuffer);
      Redirects stdout to a string buffer. After sizeOfBuffer bytes printed, the test is aborted with a FATAL failure.
  int _printFault(const char *limit, const char *exception);
      Convenient function to check the valid execution of a test; it allows printing specific messages in case the test execution exceeds the given limit or in case an exception occurred (implicitly called after every test).
  int _printStringToOutput(char *str);
      Prints directly to the message window. Same as fprintf(stderr, "%s", str); during a test run, this message is copied into the report.
  int _printStringToOutputArgs(char *str, ...);
      Variadic version of _printStringToOutput().
  int _printWarningToOutput(char *str);
      Prints a warning (orange color); the header tsc.h provides the macro WARN().
  int _printErrorToOutput(char *str);
      Prints an error (red color); the header tsc.h provides the macro ERROR().
  int _printFatalToOutput(char *str);
      Prints a failure (red color); the header tsc.h provides the macro FATAL().

Testing
  void _relinkSymbol(void (*oldFunction)(void), void (*newFunction)(void));
      Dynamically relinks a function to a new function. Note that all function calls are relinked (even functions in the test).
  void _relinkFunctions(void);
      Restores any changes previously done by _relinkSymbol() (implicitly called on start of each test case).
  void _setExecutionLimit(int limit);
      Limits the number of processor instructions for the execution and enables exception handling: in case the number of instructions is exceeded or an exception occurs (which would usually invoke the signal handler), the processor continues execution in the stack frame that invoked _setExecutionLimit() (implicitly called before each test).
  int _getExecutionCount(void);
      Returns the number of operations which have been executed since the last call to _setExecutionLimit() (implicitly called after each test for the pseudo variables $n).
  void _relinkDataSegment(void);
      Relinks the data segment of the program under test (implicitly called at the beginning of each test case).

Heap
  int _resetHeap(void);
      Frees all memory on the heap. Note that all pointers become invalid (implicitly called at the beginning of each test case).
  size_t _ptrHeapSize(void *ptr);
      Returns the amount of memory bytes allocated for ptr on the heap. It returns 0 if the element is not in the heap segment or is already freed.
  int _heapBlockCount(void);
      Returns the number of memory blocks allocated on the heap. Note that any read from stdin will allocate a page of memory for keyboard input. You can call fclose(stdin) to free this memory.
  void *_heapElement(int index);
      Returns a pointer to an allocated memory block on the heap; the index runs from 0..heapBlockCount-1.
  void _dumpHeapElement(unsigned char *ptr);
      Prints a hex dump of a heap element if it exists on the heap; otherwise it does nothing.

Machine
  char *_getInvalidVariableValue(char *buffer, size_t buf_size, int index, const char *name);
      Evaluates the value of a local variable that is either not visible or not valid anymore. Values of invalid variables might have been overwritten. Returns NULL or a pointer to buffer with the evaluated value.

Editor and Utilities
  int _editorContains(const char *regex);
      Checks if the current editor window contains the given regular expression. Note: if the user changes the editor tab or closes the MUT, a wrong file is checked.
  int _containsRegEx(const char *str, const char *regEx);
      Checks if a string contains the given regular expression.
  char *_strregex(const char *str, const char *regEx);
      Searches a regular expression inside a string and returns the first occurrence or NULL.

Abbreviations
  FUT   Function under test
  MUT   Module under test, C file which is tested
  SSID  Static source information database
  TSC   Test suite compiler
  TC    Test case
  TS    Test suite
  VIDE  Virtual-C IDE


CHANGES

News and Changes in Version 1.7
• Added RAND()/RANDS()/$rand/$rands for simple random data generation.
• Removed _evaluateJavaScript(). Note: QtWebkit is deprecated and will be replaced in version 2 at the latest.
• Added _getInvalidVariableValue().
• Fix in type output for structs, pointers and function prototypes using structs/pointers.
• ARGR(char, 'a', 'z') now uses char for printable ASCII instead of int codes.

News and Changes in Version 1.5
• Bugfix in TSC regarding whitespace handling of the C part.
• Added FUT macro and $fut.
• Fixed ASSERT macros to call FUT only once.
• WARN, ERROR, FATAL are now variadic.

News and Changes in Version 1.4
• Changed failure handling; added updateTestCase() and different print functions for failure, exit and warning.
• Added ASSERT macros; introduced ARGR and ARG.

News and Changes in Version 1.3
• Added _evalJavaScript().

News and Changes in Version 1.2
• Added _containsRegEx().
• Added _strregex().

News in Version 1.1
• Added _printFault().
• Added _exitmop_().

Limits

Function tests cannot operate on functions returning or passing structs/unions. Structs/unions can neither be compared in ASSERT/EXPECT/ARG/ARGR macros nor directly printed with WARN/ERROR/FATAL. The program under test cannot use exit() or abort(); doing so leads to a failure of the test. Some auto-generated string operations are limited to 4096 characters.

Future
Unification of the JavaScript and MOPVMEX interfaces. Easier and more direct debugging of test suite files. Adding pointers to structs to the ARG macro, and structs and pointers to the ASSERT/EXPECT macros.

