# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.11 2001/05/23 18:24:41 schwern Exp $

package Test::Harness;

require 5.004;
use Exporter;
use Benchmark;
use Config;
use strict;

use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches
            @ISA @EXPORT @EXPORT_OK
           );

# Backwards compatibility for exportable variable names.
*verbose  = \$Verbose;
*switches = \$Switches;

$Have_Devel_Corestack = 0;

$VERSION = "1.21";

$ENV{HARNESS_ACTIVE} = 1;

# Some experimental versions of OS/2 build have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};


@ISA = ('Exporter');
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose  = 0;
$Switches = "-w";
$Columns  = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;         # Some shells have trouble with a full line of text.

=head1 NAME

Test::Harness - run perl standard test scripts with statistics

=head1 SYNOPSIS

  use Test::Harness;

  runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects.  However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers.  The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.

=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.

=over 4

=item B<'1..M'>

This header tells how many tests there will be.  It should be the
first line output by your test program (but it's okay if it's preceded
by comments).

In certain instances, you may not know how many tests you will
ultimately be running.  In this case, it is permitted (but not
encouraged) for the 1..M header to appear as the B<last> line output
by your test (again, it can be followed by further comments).  But we
strongly encourage you to put it first.
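
For example, a script that only knows its test count at the end might
emit the plan last (a sketch; C<@things> and C<works()> are
illustrative names, not real APIs):

    my $count = 0;
    for my $thing (@things) {
        $count++;
        print +(works($thing) ? "ok" : "not ok"), " $count\n";
    }
    print "1..$count\n";   # plan emitted last--legal, but discouraged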

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.


=item B<'ok', 'not ok'.  Ok?>

Any output from the test script to standard error is ignored and
bypassed, and thus will be seen by the user.  Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests().  All other lines are discarded.

C</^not ok/> indicates a failed test.  C</^ok/> is a successful test.


=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number.  It is tolerated if the test numbers after 'ok' are
omitted.  In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again.  So the following
test script

    print <<END;
    1..6
    not ok
    ok
    not ok
    ok
    ok
    END

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay


=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without altering the behavior otherwise.

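For example, to watch each test script's complete output as it runs
(a sketch; assumes @test_files holds your script names):

    use Test::Harness qw(&runtests $verbose);

    $verbose = 1;      # aliased to $Test::Harness::Verbose
    runtests(@test_files);
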
=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set the perl command-line options used for running the test
script(s).  The default value is C<-w>.

=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test.  If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

  ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include an explanation in a C<1..0> line
emitted if the test script is skipped completely:

  1..0 # Skipped: no leverage found

=item B<Todo tests>

If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test.  The text
afterwards is the thing that has to be done before this test will
succeed.

  not ok 13 # TODO harness the power of the atom

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list.  They are
B<not> expected to succeed.  Should a todo test begin succeeding,
Test::Harness will report it as a bonus.  This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.

=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately.  In that case the test script prints the magic words

  Bail out!

to standard output.  Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.
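
For example, a script might bail out before printing its plan when a
prerequisite is missing (a sketch; the module name is illustrative):

    unless (eval { require Some::Prerequisite; 1 }) {
        print "Bail out! Some::Prerequisite is not installed\n";
        exit;
    }
    print "1..5\n";
    # ... the actual tests ...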

=item B<Comments>

Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

  ok 1
  # Life is good, the sun is shining, RAM is cheap.
  not ok 2
  # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output Test::Harness sees will be silently ignored B<BUT WE
PLAN TO CHANGE THIS!>  If you wish to place additional output in your
test script, please use a comment.

=back


=head2 Failure

It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  t/waterloo..........dubious
          Test returned status 3 (wstat 768, 0x300)
  DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
          Failed 10/20 tests, 50.00% okay
  Failed Test  Stat Wstat Total Fail  Failed  List of Failed
  -----------------------------------------------------------------------
  t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
  Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t.  It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.

The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test I<umm, I need a better explanation here>.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back


=head2 Functions

Test::Harness currently only has one function; here it is.

=over 4

=item B<runtests>

  my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed, along with a summary report and
how long it all took.

It returns true if everything was ok, false otherwise.
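
For example, a minimal driver script (a sketch; the C<t/*.t> glob is
just the usual convention, not required by this module):

    use Test::Harness;

    my @test_files = glob 't/*.t';
    my $allok = runtests(@test_files);
    print "Everything passed.\n" if $allok;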

=for _private
This is just _run_all_tests() plus _show_results()

=cut

sub runtests {
    my(@tests) = @_;

    local ($\, $,);

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = ($tot->{bad} == 0 && $tot->{max});

    die q{Assert '$ok xor keys %$failedtests' failed!}
      unless $ok xor keys %$failedtests;

    return $ok;
}

=begin _private

=item B<_globdir>

  my @files = _globdir $dir;

Returns all the files in a directory.  This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

=cut

sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;

    return @f;
}

=item B<_run_all_tests>

  my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report).  $total is a hash ref summary of all the tests run.  Its keys
and values are this:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script, each value is another hash representing
how that script failed.  Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string).

Needless to say, $failed should be empty if everything passed.
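
For example, the two return values might be inspected like this (a
sketch only--_run_all_tests() is private and subject to change):

    my($total, $failed) = Test::Harness::_run_all_tests(@test_files);

    if ($total->{bad} == 0 and $total->{max} > 0) {
        print "All $total->{ok}/$total->{max} subtests passed.\n";
    }
    else {
        printf "%s failed tests %s\n", $_, $failed->{$_}{canon}
          for sort keys %$failed;
    }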

B<NOTE> Currently this function is still noisy.  I'm working on it.

=cut

sub _run_all_tests {
    my(@tests) = @_;
    local($|) = 1;
    my(%failedtests);

    # Test-wide totals.
    my(%tot) = (
                bonus    => 0,
                max      => 0,
                ok       => 0,
                files    => 0,
                bad      => 0,
                good     => 0,
                tests    => scalar @tests,
                sub_skipped  => 0,
                skipped  => 0,
                bench    => 0
               );

    # pass -I flags to children
    my $old5lib = $ENV{PERL5LIB};

    # VMS has a 255-byte limit on the length of %ENV entries, so
    # toss the ones that involve perl_root, the install location
    # for VMS
    my $new5lib;
    if ($^O eq 'VMS') {
        $new5lib = join($Config{path_sep}, grep {!/perl_root/i;} @INC);
        $Switches =~ s/-(\S*[A-Z]\S*)/"-$1"/g;
    }
    else {
        $new5lib = join($Config{path_sep}, @INC);
    }

    local($ENV{'PERL5LIB'}) = $new5lib;

    my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $maxlen = 0;
    my $maxsuflen = 0;
    foreach (@tests) { # The same code in t/TEST
        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $len    = length;
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    # + 3 : we want three dots between the test name and the "ok"
    my $width = $maxlen + 3 - $maxsuflen;
    foreach my $tfile (@tests) {
        my($leader, $ml) = _mk_leader($tfile, $width);
        print $leader;

        my $fh = _open_test($tfile);

        # state of the current test.
        my %test = (
                    ok          => 0,
                    'next'      => 0,
                    max         => 0,
                    failed      => [],
                    todo        => {},
                    bonus       => 0,
                    skipped     => 0,
                    skip_reason => undef,
                    ml          => $ml,
                   );

        my($seen_header, $tests_seen) = (0,0);
        while (<$fh>) {
            if( _parse_header($_, \%test, \%tot) ) {
                warn "Test header seen twice!\n" if $seen_header;

                $seen_header = 1;

                warn "1..M can only appear at the beginning or end of tests\n"
                  if $tests_seen && $test{max} < $tests_seen;
            }
            elsif( _parse_test_line($_, \%test, \%tot) ) {
                $tests_seen++;
            }
            # else, ignore it.
        }

        my($estatus, $wstatus) = _close_fh($fh);

        my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;

        if ($wstatus) {
            $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                   $estatus, $wstatus);
            $failedtests{$tfile}{name} = $tfile;
        }
        elsif ($allok) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                  if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                  if $test{bonus};
                print "$test{ml}ok, ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_reason}) {
                print "skipped: $test{skip_reason}\n";
                $tot{skipped}++;
            } else {
                print "skipped test on this platform\n";
                $tot{skipped}++;
            }
            $tot{good}++;
        }
        else {
            if ($test{max}) {
                if ($test{'next'} <= $test{max}) {
                    push @{$test{failed}}, $test{'next'}..$test{max};
                }
                if (@{$test{failed}}) {
                    my ($txt, $canon) = canonfailed($test{max},$test{skipped},
                                                    @{$test{failed}});
                    print "$test{ml}$txt";
                    $failedtests{$tfile} = { canon   => $canon,
                                             max     => $test{max},
                                             failed  => scalar @{$test{failed}},
                                             name    => $tfile,
                                             percent => 100*(scalar @{$test{failed}})/$test{max},
                                             estat   => '',
                                             wstat   => '',
                                           };
                } else {
                    print "Don't know which tests failed: got $test{ok} ok, ".
                          "expected $test{max}\n";
                    $failedtests{$tfile} = { canon   => '??',
                                             max     => $test{max},
                                             failed  => '??',
                                             name    => $tfile,
                                             percent => undef,
                                             estat   => '',
                                             wstat   => '',
                                           };
                }
                $tot{bad}++;
            } elsif ($test{'next'} == 0) {
                print "FAILED before any test output arrived\n";
                $tot{bad}++;
                $failedtests{$tfile} = { canon       => '??',
                                         max         => '??',
                                         failed      => '??',
                                         name        => $tfile,
                                         percent     => undef,
                                         estat       => '',
                                         wstat       => '',
                                       };
            }
        }

        $tot{sub_skipped} += $test{skipped};

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }
    $tot{bench} = timediff(new Benchmark, $t_start);

    if ($^O eq 'VMS') {
        if (defined $old5lib) {
            $ENV{PERL5LIB} = $old5lib;
        } else {
            delete $ENV{PERL5LIB};
        }
    }

    return(\%tot, \%failedtests);
}

=item B<_mk_leader>

  my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such).  $ml may be empty if Test::Harness doesn't think you're
on TTY.  The width is the width of the "yada/blah..." string.

=cut

sub _mk_leader {
    my ($te, $width) = @_;

    $te =~ s/\.\w+$/./;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}


sub _show_results {
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if ($tot->{bad} == 0 && $tot->{max}) {
        print "All tests successful$bonusmsg.\n";
    } elsif ($tot->{tests}==0){
        die "FAILED--no tests were run for some reason.\n";
    } elsif ($tot->{max} == 0) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                              $tot->{max} - $tot->{ok}, $tot->{max},
                              100*$tot->{ok}/$tot->{max};

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};
            write;
        }
        if ($tot->{bad}) {
            $bonusmsg =~ s/^,\s*//;
            print "$bonusmsg.\n" if $bonusmsg;
            die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
                "$subpct\n";
        }
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
}


sub _parse_header {
    my($line, $test, $tot) = @_;

    my $is_header = 0;

    print $line if $Verbose;

    # 1..10 todo 4 7 10;
    if ($line =~ /^1\.\.([0-9]+) todo([\d\s]+);?/i) {
        $test->{max} = $1;
        for (split(/\s+/, $2)) { $test->{todo}{$_} = 1; }

        $tot->{max} += $test->{max};
        $tot->{files}++;

        $is_header = 1;
    }
    # 1..10
    # 1..0 # skip  Why?  Because I said so!
    elsif ($line =~ /^1\.\.([0-9]+)
                      (\s*\#\s*[Ss]kip\S*\s* (.+))?
                    /x
          )
    {
        $test->{max} = $1;
        $tot->{max} += $test->{max};
        $tot->{files}++;
        $test->{'next'} = 1 unless $test->{'next'};
        $test->{skip_reason} = $3 if not $test->{max} and defined $3;

        $is_header = 1;
    }
    else {
        $is_header = 0;
    }

    return $is_header;
}


sub _open_test {
    my($test) = shift;

    my $s = _set_switches($test);

    # XXX This is WAY too core specific!
    my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
                ? "./perl -I../lib ../utils/perlcc $test "
                  . "-r 2>> ./compilelog |"
                : "$^X $s $test|";
    $cmd = "MCR $cmd" if $^O eq 'VMS';

    if( open(PERL, $cmd) ) {
        return \*PERL;
    }
    else {
        print "can't run $test.  $!\n";
        return;
    }
}

sub _run_one_test {
    my($test) = @_;


}


sub _parse_test_line {
    my($line, $test, $tot) = @_;

    if ($line =~ /^(not\s+)?ok\b/i) {
        my $this = $test->{'next'} || 1;
        # "not ok 23"
        if ($line =~ /^(not )?ok\s*(\d*)(\s*#.*)?/) {
            my($not, $tnum, $extra) = ($1, $2, $3);

            $this = $tnum if $tnum;

            my($type, $reason) = $extra =~ /^\s*#\s*([Ss]kip\S*|TODO)(\s+.+)?/
              if defined $extra;

            my($istodo, $isskip);
            if( defined $type ) {
                $istodo = $type =~ /TODO/;
                $isskip = $type =~ /skip/i;
            }

            $test->{todo}{$tnum} = 1 if $istodo;

            if( $not ) {
                print "$test->{ml}NOK $this" if $test->{ml};
                if (!$test->{todo}{$this}) {
                    push @{$test->{failed}}, $this;
                } else {
                    $test->{ok}++;
                    $tot->{ok}++;
                }
            }
            else {
                print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
                $test->{ok}++;
                $tot->{ok}++;
                $test->{skipped}++ if $isskip;

                $reason = '[no reason given]'
                    if $isskip and not defined $reason;
                if (defined $reason and defined $test->{skip_reason}) {
                    # print "was: '$skip_reason' new '$reason'\n";
                    $test->{skip_reason} = 'various reasons'
                      if $test->{skip_reason} ne $reason;
                } elsif (defined $reason) {
                    $test->{skip_reason} = $reason;
                }

                $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
            }
        }
        # XXX ummm... dunno
        elsif ($line =~ /^ok\s*(\d*)\s*\#([^\r]*)$/) { # XXX multiline ok?
            $this = $1 if $1 > 0;
            print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
            $test->{ok}++;
            $tot->{ok}++;
        }
        else {
            # an ok or not ok not matching the cases above...
            # just ignore it for compatibility with TEST
            next;
        }

        if ($this > $test->{'next'}) {
            # print "Test output counter mismatch [test $this]\n";
            # no need to warn probably
            push @{$test->{failed}}, $test->{'next'}..$this-1;
        }
        elsif ($this < $test->{'next'}) {
            # we have seen more "ok" lines than the number suggests
            print "Confused test output: test $this answered after ".
                  "test ", $test->{'next'}-1, "\n";
            $test->{'next'} = $this;
        }
        $test->{'next'} = $this + 1;

    }
    elsif ($line =~ /^Bail out!\s*(.*)/i) { # magic words
        die "FAILED--Further testing stopped" .
            ($1 ? ": $1\n" : ".\n");
    }
}


sub _bonusmsg {
    my($tot) = @_;

    my $bonusmsg = '';
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")
        if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
                     . " skipped";
    }

    return $bonusmsg;
}

# VMS has some subtle nastiness with closing the test files.
sub _close_fh {
    my($fh) = shift;

    close($fh); # must close to reap child resource values

    my $wstatus = $Ignore_Exitcode ? 0 : $?;    # Can trust $? ?
    my $estatus;
    $estatus = ($^O eq 'VMS'
                  ? eval 'use vmsish "status"; $estatus = $?'
                  : $wstatus >> 8);

    return($estatus, $wstatus);
}


# Set up the command-line switches to run perl as.
sub _set_switches {
    my($test) = shift;

    local *TEST;
    open(TEST, $test) or print "can't open $test. $!\n";
    my $first = <TEST>;
    my $s = $Switches;
    $s .= " $ENV{'HARNESS_PERL_SWITCHES'}"
        if exists $ENV{'HARNESS_PERL_SWITCHES'};
    $s .= join " ", q[ "-T"], map {qq["-I$_"]} @INC
        if $first =~ /^#!.*\bperl.*-\w*T/;

    close(TEST) or print "can't close $test. $!\n";

    return $s;
}


# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus,$wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }

    $tot->{bad}++;

    if ($test->{max}) {
        if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
            print "\tafter all the subtests completed successfully\n";
            $percent = 0;
            $failed = 0;        # But we do not set $canon!
        }
        else {
            push @{$test->{failed}}, $test->{'next'}..$test->{max};
            $failed = @{$test->{failed}};
            (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
            $percent = 100*(scalar @{$test->{failed}})/$test->{max};
            print "DIED. ",$txt;
        }
    }

    return { canon   => $canon,  max => $test->{max} || '??',
             failed  => $failed,
             percent => $percent,
             estat   => $estatus, wstat => $wstatus,
           };
}


sub _garbled_output {
    my($gibberish) = shift;
    warn "Confusing test output: '$gibberish'\n";
}


sub _create_fmts {
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail  Failed  ";
    my $list_str   = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen  = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)
                  . $middle_str
                  . $list_str . "\n"
                  . "-" x $Columns
                  . "\n.\n";

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . "  @>> @>>>> @>>>> @>>> ^##.##%  "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . '  $Curtest->{wstat}, $Curtest->{max},'
              . '  $Curtest->{failed}, $Curtest->{percent},'
              . '  $Curtest->{canon}'
              . "\n}\n"
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'
              . "\n.\n";

    eval $fmt_top;
    die $@ if $@;
    eval $fmt;
    die $@ if $@;

    return($fmt_top, $fmt);
}


{
    my $tried_devel_corestack;

    sub corestatus {
        my($st) = @_;

        eval { require 'wait.ph' };
        my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;

        eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
          unless $tried_devel_corestack++;

        $ret;
    }
}

sub canonfailed ($@) {
    my($max,$skipped,@failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;
    my @result = ();
    my @canon = ();
    my $min;
    my $last = $min = shift @failed;
    my $canon;
    if (@failed) {
        for (@failed, $failed[-1]) { # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {
                if ($min == $last) {
                    push @canon, $last;
                } else {
                    push @canon, "$min-$last";
                }
                $min = $_;
            }
            $last = $_;
        }
        local $" = ", ";
        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;
    } else {
        push @result, "FAILED test $last\n";
        $canon = $last;
    }

    push @result, "\tFailed $failed/$max tests, ";
    push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
    my $ender = 's' x ($skipped > 1);
    my $good = $max - $failed - $skipped;
    my $goodper = sprintf("%.2f",100*($good/$max));
    push @result, " (-$skipped skipped test$ender: $good okay, ".
                  "$goodper%)"
        if $skipped;
    push @result, "\n";
    my $txt = join "", @result;
    ($txt, $canon);
}

=end _private

=back

=cut


1;
__END__


=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.


=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped%s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back

=head1 ENVIRONMENT

=over 4

=item C<HARNESS_IGNORE_EXITCODE>

When defined, makes harness ignore the exit status of child processes.

=item C<HARNESS_NOTTY>

When set to a true value, forces Test::Harness to behave as though
STDOUT were not a console. You may need to set this if you don't want
harness to output more frequent progress messages using carriage
returns. Some consoles may not handle carriage returns properly (which
results in a somewhat messy output).

=item C<HARNESS_COMPILE_TEST>

When true, harness will attempt to compile each test with C<perlcc>
before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called. Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal. If it is not
set then it will default to C<COLUMNS>. If this is not set, it will
default to 80. Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests. This allows
the tests to determine if they are being executed through the harness
or by any other means.

=back

=head1 EXAMPLE

Here's how Test::Harness tests itself

    $ cd ~/src/devel/Test-Harness
    $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
    Using /home/schwern/src/devel/Test-Harness/blib
    t/base..............ok
    t/nonumbers.........ok
    t/ok................ok
    t/test-harness......ok
    All tests successful.
    Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)

=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years.

Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>

=head1 TODO

Provide a way of running tests quietly (ie. no printing) for automated
validation of tests. This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.

Merge back into bleadperl.

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

=for _private
Keep whittling away at _run_all_tests()

=for _private
Clean up how the summary is printed. Get rid of those damned formats.

=head1 BUGS

Test::Harness uses $^X to determine the perl binary to run the tests
with. Test scripts running via the shebang (C<#!>) line may not be
portable because $^X is not consistent for shebang scripts across
platforms. This is no problem when Test::Harness is run with an
absolute path to the perl binary or when $^X can be found in the path.

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

=cut