# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.17 2001/09/07 06:20:29 schwern Exp $

package Test::Harness;

require 5.004;

use Exporter;
use Benchmark;
use Config;
use strict;

use vars qw($VERSION $Verbose $Switches $Have_Devel_Corestack $Curtest
            $Columns $verbose $switches
            @ISA @EXPORT @EXPORT_OK
           );

# Backwards compatibility for exportable variable names.
*verbose = \$Verbose;
*switches = \$Switches;

$Have_Devel_Corestack = 0;

$VERSION = 1.25;

$ENV{HARNESS_ACTIVE} = 1;

# Some experimental versions of OS/2 build have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};


@ISA = ('Exporter');
@EXPORT = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose = 0;
$Switches = "-w";
$Columns = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;                # Some shells have trouble with a full line of text.


=head1 NAME

Test::Harness - run perl standard test scripts with statistics

=head1 SYNOPSIS

  use Test::Harness;

  runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects.  However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers.  The first line
output by a standard test script is C<"1..M"> with C<M> being the
number of tests that should be run within the test
script.  Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.

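For example, a minimal test script following this protocol might look like
the hypothetical F<t/example.t> below; all that matters to Test::Harness is
what the script prints to standard output:

  # t/example.t -- hypothetical sketch of a hand-rolled test script
  print "1..2\n";                        # declare two tests up front

  print 1 + 1 == 2       ? "ok 1\n" : "not ok 1\n";
  print "perl" =~ /perl/ ? "ok 2\n" : "not ok 2\n";
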
=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.

=over 4

=item B<'1..M'>

This header tells how many tests there will be.  It should be the
first line output by your test program (but it's okay if it's preceded
by comments).

In certain instances, you may not know how many tests you will
ultimately be running.  In this case, it is permitted (but not
encouraged) for the 1..M header to appear as the B<last> line output
by your test (again, it can be followed by further comments).  But we
strongly encourage you to put it first.

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.


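If you do emit the header last, the output might be produced by something
like this sketch, where the plan is only known once all the checks have run
(the list of cases is made up for illustration):

  # hypothetical sketch: the 1..M header comes last
  my @cases = (1, 2, 3);        # pretend these are discovered at run time
  my $n = 0;
  for my $case (@cases) {
      $n++;
      print $case > 0 ? "ok $n\n" : "not ok $n\n";
  }
  print "1..$n\n";              # header emitted as the final line
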
=item B<'ok', 'not ok'.  Ok?>

Any output from the test script to standard error is ignored and
bypassed, and will thus be seen by the user.  Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests().  All other lines are discarded.

C</^not ok/> indicates a failed test.  C</^ok/> is a successful test.


=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number.  It is tolerated if the test numbers after 'ok' are
omitted.  In this case Test::Harness temporarily maintains its own
counter until the script supplies test numbers again.  So the following
test script

    print <<END;
    1..6
    not ok
    ok
    not ok
    ok
    ok
    END

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay


=item B<$Test::Harness::verbose>

The global variable $Test::Harness::verbose is exportable and can be
used to let runtests() display the standard output of the script
without altering the behavior otherwise.

=item B<$Test::Harness::switches>

The global variable $Test::Harness::switches is exportable and can be
used to set perl command line options used for running the test
script(s).  The default value is C<-w>.

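For example, a driver script could import and set both variables before
calling runtests(); this is only a sketch, and the extra C<-Ilib> switch is
an illustration, not a requirement:

  use Test::Harness qw(&runtests $verbose $switches);

  $verbose  = 1;           # echo each test script's standard output
  $switches = "-w -Ilib";  # the default is just "-w"

  runtests(@ARGV);
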
=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test.  If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

  ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include an explanation in a C<1..0> line
emitted if the test script is skipped completely:

  1..0 # Skipped: no leverage found

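A test script can emit that line itself when it decides up front that none
of its tests apply.  A minimal sketch (the module being probed for is purely
illustrative):

  unless (eval { require Some::Optional::Module; 1 }) {
      print "1..0 # Skipped: Some::Optional::Module not installed\n";
      exit 0;
  }
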
=item B<Todo tests>

If the standard output line contains the substring C< # TODO> after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test.  The text
afterwards is the thing that has to be done before this test will
succeed.

  not ok 13 # TODO harness the power of the atom

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list.  They are
B<not> expected to succeed.  Should a todo test begin succeeding,
Test::Harness will report it as a bonus.  This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.

=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately.  In that case the test script prints the magic words

  Bail out!

to standard output.  Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.

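For example, a test script might bail out as soon as it notices that a hard
prerequisite is missing; the DBI check below is just an illustration:

  unless (eval { require DBI; 1 }) {
      print "Bail out! DBI is not installed; none of these tests can run.\n";
      exit;
  }
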
=item B<Comments>

Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

  ok 1
  # Life is good, the sun is shining, RAM is cheap.
  not ok 2
  # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output Test::Harness sees, it will silently ignore B<BUT WE
PLAN TO CHANGE THIS!>  If you wish to place additional output in your
test script, please use a comment.

=back


=head2 Failure

It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  t/waterloo..........dubious
          Test returned status 3 (wstat 768, 0x300)
  DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
          Failed 10/20 tests, 50.00% okay
  Failed Test  Stat Wstat Total Fail  Failed  List of Failed
  -----------------------------------------------------------------------
  t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
  Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t.  It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.

The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test, i.e. the raw C<$?> of the child process.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back


=head2 Functions

Test::Harness currently has only one function; here it is.

=over 4

=item B<runtests>

  my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok, false otherwise.

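Note that when test scripts fail, runtests() also dies with a summary
message (see L</DIAGNOSTICS>), so a driver that wants to continue afterwards
can wrap the call; a minimal sketch:

  use Test::Harness;

  my @test_files = @ARGV;
  my $all_passed = eval { runtests(@test_files) };
  warn "Some tests failed: $@" if $@;
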
=for _private
This is just _run_all_tests() plus _show_results().

=cut

sub runtests {
    my(@tests) = @_;

    local ($\, $,);

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    die q{Assert '$ok xor keys %$failedtests' failed!}
      unless $ok xor keys %$failedtests;

    return $ok;
}

=begin _private

=item B<_all_ok>

  my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

=cut

sub _all_ok {
    my($tot) = shift;

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
}

=item B<_globdir>

  my @files = _globdir $dir;

Returns all the files in a directory.  This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

=cut

sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;

    return @f;
}

=item B<_run_all_tests>

  my($total, $failed) = _run_all_tests(@test_files);

Runs all the given @test_files (as runtests()) but does it quietly (no
report).  $total is a hash ref summary of all the tests run.  Its keys
and values are these:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If $total->{bad} == 0 and $total->{max} > 0, you've got a successful
test.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script, each value is another hash representing
how that script failed.  Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string).

Needless to say, $failed should be empty if everything passed.

B<NOTE> Currently this function is still noisy.  I'm working on it.

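For illustration only, a caller inside this module could summarize the two
return values like this (assuming @test_files holds the script names and
using only the keys documented above):

  my($total, $failed) = _run_all_tests(@test_files);

  printf "%d of %d test scripts failed\n", $total->{bad}, $total->{tests};
  for my $script (sort keys %$failed) {
      printf "%s: %s/%s subtests failed (%s)\n",
             $script,
             $failed->{$script}{failed},
             $failed->{$script}{max},
             $failed->{$script}{canon};
  }
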
=cut
373
374sub _run_all_tests {
9c5c68c8 375 my(@tests) = @_;
a0d0e21e 376 local($|) = 1;
377 my(%failedtests);
378
379 # Test-wide totals.
380 my(%tot) = (
381 bonus => 0,
382 max => 0,
383 ok => 0,
384 files => 0,
385 bad => 0,
386 good => 0,
387 tests => scalar @tests,
388 sub_skipped => 0,
2fe373ce 389 todo => 0,
9c5c68c8 390 skipped => 0,
2fe373ce 391 bench => 0,
9c5c68c8 392 );
774d564b 393
394 # pass -I flags to children
81ff29e3 395 my $old5lib = $ENV{PERL5LIB};
774d564b 396
397 # VMS has a 255-byte limit on the length of %ENV entries, so
398 # toss the ones that involve perl_root, the install location
399 # for VMS
400 my $new5lib;
401 if ($^O eq 'VMS') {
402 $new5lib = join($Config{path_sep}, grep {!/perl_root/i;} @INC);
403 $Switches =~ s/-(\S*[A-Z]\S*)/"-$1"/g;
404 }
405 else {
406 $new5lib = join($Config{path_sep}, @INC);
407 }
408
409 local($ENV{'PERL5LIB'}) = $new5lib;
a0d0e21e 410
b82fa0b7 411 my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
a0d0e21e 412 my $t_start = new Benchmark;
9c5c68c8 413
63b097aa 414 my $maxlen = 0;
415 my $maxsuflen = 0;
416 foreach (@tests) { # The same code in t/TEST
417 my $suf = /\.(\w+)$/ ? $1 : '';
418 my $len = length;
419 my $suflen = length $suf;
420 $maxlen = $len if $len > $maxlen;
421 $maxsuflen = $suflen if $suflen > $maxsuflen;
7a315204 422 }
423 # + 3 : we want three dots between the test name and the "ok"
424 my $width = $maxlen + 3 - $maxsuflen;
2fe373ce 425
b82fa0b7 426 foreach my $tfile (@tests) {
7a315204 427 my($leader, $ml) = _mk_leader($tfile, $width);
b82fa0b7 428 print $leader;
9c5c68c8 429
b82fa0b7 430 my $fh = _open_test($tfile);
431
432 # state of the current test.
433 my %test = (
434 ok => 0,
b82fa0b7 435 'next' => 0,
436 max => 0,
437 failed => [],
438 todo => {},
439 bonus => 0,
440 skipped => 0,
441 skip_reason => undef,
442 ml => $ml,
443 );
444
445 my($seen_header, $tests_seen) = (0,0);
2fe373ce 446 while (<$fh>) {
447 if( _parse_header($_, \%test, \%tot) ) {
448 warn "Test header seen twice!\n" if $seen_header;
449
450 $seen_header = 1;
451
452 warn "1..M can only appear at the beginning or end of tests\n"
453 if $tests_seen && $test{max} < $tests_seen;
454 }
455 elsif( _parse_test_line($_, \%test, \%tot) ) {
456 $tests_seen++;
d667a7e6 457 }
9c5c68c8 458 # else, ignore it.
2fe373ce 459 }
460
461 my($estatus, $wstatus) = _close_fh($fh);
462
463 my $allok = $test{ok} == $test{max} && $test{'next'} == $test{max}+1;
464
2fe373ce 465 if ($wstatus) {
b82fa0b7 466 $failedtests{$tfile} = _dubious_return(\%test, \%tot,
9c5c68c8 467 $estatus, $wstatus);
b82fa0b7 468 $failedtests{$tfile}{name} = $tfile;
2fe373ce 469 }
b82fa0b7 470 elsif ($allok) {
471 if ($test{max} and $test{skipped} + $test{bonus}) {
472 my @msg;
473 push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
474 if $test{skipped};
475 push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
476 if $test{bonus};
477 print "$test{ml}ok, ".join(', ', @msg)."\n";
478 } elsif ($test{max}) {
479 print "$test{ml}ok\n";
480 } elsif (defined $test{skip_reason}) {
481 print "skipped: $test{skip_reason}\n";
482 $tot{skipped}++;
483 } else {
484 print "skipped test on this platform\n";
485 $tot{skipped}++;
486 }
487 $tot{good}++;
488 }
489 else {
490 if ($test{max}) {
491 if ($test{'next'} <= $test{max}) {
492 push @{$test{failed}}, $test{'next'}..$test{max};
493 }
494 if (@{$test{failed}}) {
495 my ($txt, $canon) = canonfailed($test{max},$test{skipped},
496 @{$test{failed}});
497 print "$test{ml}$txt";
498 $failedtests{$tfile} = { canon => $canon,
499 max => $test{max},
500 failed => scalar @{$test{failed}},
501 name => $tfile,
502 percent => 100*(scalar @{$test{failed}})/$test{max},
503 estat => '',
504 wstat => '',
505 };
506 } else {
507 print "Don't know which tests failed: got $test{ok} ok, ".
508 "expected $test{max}\n";
509 $failedtests{$tfile} = { canon => '??',
510 max => $test{max},
511 failed => '??',
512 name => $tfile,
513 percent => undef,
514 estat => '',
515 wstat => '',
516 };
517 }
518 $tot{bad}++;
519 } elsif ($test{'next'} == 0) {
520 print "FAILED before any test output arrived\n";
521 $tot{bad}++;
522 $failedtests{$tfile} = { canon => '??',
523 max => '??',
524 failed => '??',
525 name => $tfile,
526 percent => undef,
527 estat => '',
528 wstat => '',
529 };
530 }
531 }
532
533 $tot{sub_skipped} += $test{skipped};
534
535 if (defined $Files_In_Dir) {
536 my @new_dir_files = _globdir $Files_In_Dir;
537 if (@new_dir_files != @dir_files) {
538 my %f;
539 @f{@new_dir_files} = (1) x @new_dir_files;
540 delete @f{@dir_files};
541 my @f = sort keys %f;
542 print "LEAKED FILES: @f\n";
543 @dir_files = @new_dir_files;
544 }
545 }
a0d0e21e 546 }
9c5c68c8 547 $tot{bench} = timediff(new Benchmark, $t_start);
d667a7e6 548
774d564b 549 if ($^O eq 'VMS') {
550 if (defined $old5lib) {
551 $ENV{PERL5LIB} = $old5lib;
552 } else {
553 delete $ENV{PERL5LIB};
554 }
774d564b 555 }
556
557 return(\%tot, \%failedtests);
558}

=item B<_mk_leader>

  my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given $test_file as well
as a similar version which will overwrite the current line (by use of
\r and such).  $ml may be empty if Test::Harness doesn't think you're
on a TTY.

The $width is the width of the "yada/blah.." string.

=cut

sub _mk_leader {
    my($te, $width) = @_;
    chomp($te);
    $te =~ s/\.\w+$/./;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
        if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}

589
590sub _show_results {
591 my($tot, $failedtests) = @_;
592
593 my $pct;
594 my $bonusmsg = _bonusmsg($tot);
595
596 if (_all_ok($tot)) {
597 print "All tests successful$bonusmsg.\n";
598 } elsif (!$tot->{tests}){
599 die "FAILED--no tests were run for some reason.\n";
600 } elsif (!$tot->{max}) {
601 my $blurb = $tot->{tests}==1 ? "script" : "scripts";
602 die "FAILED--$tot->{tests} test $blurb could be run, ".
9c5c68c8 603 "alas--no output ever seen\n";
c07a80fd 604 } else {
605 $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
606 my $percent_ok = 100*$tot->{ok}/$tot->{max};
607 my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
608 $tot->{max} - $tot->{ok}, $tot->{max},
609 $percent_ok;
0a931e4a 610
9c5c68c8 611 my($fmt_top, $fmt) = _create_fmts($failedtests);
0a931e4a 612
613 # Now write to formats
614 for my $script (sort keys %$failedtests) {
615 $Curtest = $failedtests->{$script};
616 write;
617 }
618 if ($tot->{bad}) {
619 $bonusmsg =~ s/^,\s*//;
620 print "$bonusmsg.\n" if $bonusmsg;
621 die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
9c5c68c8 622 "$subpct\n";
2fe373ce 623 }
c07a80fd 624 }
f0a9308e 625
626 printf("Files=%d, Tests=%d, %s\n",
627 $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
628}
629
630
631sub _parse_header {
632 my($line, $test, $tot) = @_;
633
634 my $is_header = 0;
635
636 print $line if $Verbose;
637
638 # 1..10 todo 4 7 10;
639 if ($line =~ /^1\.\.([0-9]+) todo([\d\s]+);?/i) {
640 $test->{max} = $1;
641 for (split(/\s+/, $2)) { $test->{todo}{$_} = 1; }
642
643 $tot->{max} += $test->{max};
644 $tot->{files}++;
645
646 $is_header = 1;
647 }
648 # 1..10
649 # 1..0 # skip Why? Because I said so!
650 elsif ($line =~ /^1\.\.([0-9]+)
b82fa0b7 651 (\s*\#\s*[Ss]kip\S*\s* (.+))?
652 /x
653 )
654 {
655 $test->{max} = $1;
656 $tot->{max} += $test->{max};
657 $tot->{files}++;
b82fa0b7 658 $test->{'next'} = 1 unless $test->{'next'};
659 $test->{skip_reason} = $3 if not $test->{max} and defined $3;
660
661 $is_header = 1;
662 }
663 else {
664 $is_header = 0;
665 }
666
667 return $is_header;
c07a80fd 668}
669
9c5c68c8 670
671sub _open_test {
672 my($test) = shift;
673
674 my $s = _set_switches($test);
675
676 # XXX This is WAY too core specific!
677 my $cmd = ($ENV{'HARNESS_COMPILE_TEST'})
678 ? "./perl -I../lib ../utils/perlcc $test "
679 . "-r 2>> ./compilelog |"
680 : "$^X $s $test|";
681 $cmd = "MCR $cmd" if $^O eq 'VMS';
682
683 if( open(PERL, $cmd) ) {
684 return \*PERL;
685 }
686 else {
687 print "can't run $test. $!\n";
688 return;
689 }
690}
691
692sub _run_one_test {
693 my($test) = @_;
694
695
696}
697
698
699sub _parse_test_line {
700 my($line, $test, $tot) = @_;
701
702 if ($line =~ /^(not\s+)?ok\b/i) {
703 $test->{'next'} ||= 1;
704 my $this = $test->{'next'};
9c5c68c8 705 # "not ok 23"
706 if ($line =~ /^(not )?ok\s*(\d*)[^#]*(\s*#.*)?/) {
707 my($not, $tnum, $extra) = ($1, $2, $3);
708
709 $this = $tnum if $tnum;
710
711 my($type, $reason) = $extra =~ /^\s*#\s*([Ss]kip\S*|TODO)(\s+.+)?/
712 if defined $extra;
713
714 my($istodo, $isskip);
715 if( defined $type ) {
716 $istodo = 1 if $type =~ /TODO/;
717 $isskip = 1 if $type =~ /skip/i;
718 }
719
720 $test->{todo}{$this} = 1 if $istodo;
721
722 $tot->{todo}++ if $test->{todo}{$this};
723
724 if( $not ) {
725 print "$test->{ml}NOK $this" if $test->{ml};
726 if (!$test->{todo}{$this}) {
727 push @{$test->{failed}}, $this;
728 } else {
729 $test->{ok}++;
730 $tot->{ok}++;
731 }
732 }
733 else {
734 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
735 $test->{ok}++;
736 $tot->{ok}++;
737 $test->{skipped}++ if $isskip;
738
739 $reason = '[no reason given]'
740 if $isskip and not defined $reason;
741 if (defined $reason and defined $test->{skip_reason}) {
742 # print "was: '$skip_reason' new '$reason'\n";
743 $test->{skip_reason} = 'various reasons'
744 if $test->{skip_reason} ne $reason;
745 } elsif (defined $reason) {
746 $test->{skip_reason} = $reason;
747 }
748
749 $test->{bonus}++, $tot->{bonus}++ if $test->{todo}{$this};
750 }
751 }
752 # XXX ummm... dunno
753 elsif ($line =~ /^ok\s*(\d*)\s*\#([^\r]*)$/) { # XXX multiline ok?
754 $this = $1 if $1 > 0;
755 print "$test->{ml}ok $this/$test->{max}" if $test->{ml};
756 $test->{ok}++;
757 $tot->{ok}++;
758 }
759 else {
760 # an ok or not ok not matching the 3 cases above...
761 # just ignore it for compatibility with TEST
762 next;
763 }
764
b82fa0b7 765 if ($this > $test->{'next'}) {
2fe373ce 766 print "Test output counter mismatch [test $this]\n";
b82fa0b7 767 push @{$test->{failed}}, $test->{'next'}..$this-1;
9c5c68c8 768 }
b82fa0b7 769 elsif ($this < $test->{'next'}) {
770 #we have seen more "ok" lines than the number suggests
771 print "Confused test output: test $this answered after ".
772 "test ", $test->{'next'}-1, "\n";
773 $test->{'next'} = $this;
9c5c68c8 774 }
b82fa0b7 775 $test->{'next'} = $this + 1;
776
777 }
778 elsif ($line =~ /^Bail out!\s*(.*)/i) { # magic words
779 die "FAILED--Further testing stopped" .
780 ($1 ? ": $1\n" : ".\n");
781 }
782}
783
784
785sub _bonusmsg {
786 my($tot) = @_;
787
788 my $bonusmsg = '';
789 $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
790 " UNEXPECTEDLY SUCCEEDED)")
791 if $tot->{bonus};
792
793 if ($tot->{skipped}) {
2fe373ce 794 $bonusmsg .= ", $tot->{skipped} test"
9c5c68c8 795 . ($tot->{skipped} != 1 ? 's' : '');
796 if ($tot->{sub_skipped}) {
797 $bonusmsg .= " and $tot->{sub_skipped} subtest"
798 . ($tot->{sub_skipped} != 1 ? 's' : '');
799 }
800 $bonusmsg .= ' skipped';
801 }
802 elsif ($tot->{sub_skipped}) {
803 $bonusmsg .= ", $tot->{sub_skipped} subtest"
804 . ($tot->{sub_skipped} != 1 ? 's' : '')
805 . " skipped";
806 }
807
808 return $bonusmsg;
809}
810
811# VMS has some subtle nastiness with closing the test files.
812sub _close_fh {
813 my($fh) = shift;
814
815 close($fh); # must close to reap child resource values
816
2fe373ce 817 my $wstatus = $Ignore_Exitcode ? 0 : $?; # Can trust $? ?
818 my $estatus;
819 $estatus = ($^O eq 'VMS'
820 ? eval 'use vmsish "status"; $estatus = $?'
821 : $wstatus >> 8);
822
823 return($estatus, $wstatus);
824}
825
826
827# Set up the command-line switches to run perl as.
828sub _set_switches {
829 my($test) = shift;
830
831 local *TEST;
832 open(TEST, $test) or print "can't open $test. $!\n";
833 my $first = <TEST>;
834 my $s = $Switches;
835 $s .= " $ENV{'HARNESS_PERL_SWITCHES'}"
836 if exists $ENV{'HARNESS_PERL_SWITCHES'};
837 $s .= join " ", q[ "-T"], map {qq["-I$_"]} @INC
838 if $first =~ /^#!.*\bperl.*-\w*T/;
839
b82fa0b7 840 close(TEST) or print "can't close $test. $!\n";
841
842 return $s;
843}
844
845
846# Test program go boom.
847sub _dubious_return {
848 my($test, $tot, $estatus, $wstatus) = @_;
849 my ($failed, $canon, $percent) = ('??', '??');
850
851 printf "$test->{ml}dubious\n\tTest returned status $estatus ".
852 "(wstat %d, 0x%x)\n",
853 $wstatus,$wstatus;
854 print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';
855
856 if (corestatus($wstatus)) { # until we have a wait module
857 if ($Have_Devel_Corestack) {
858 Devel::CoreStack::stack($^X);
859 } else {
860 print "\ttest program seems to have generated a core\n";
861 }
862 }
863
864 $tot->{bad}++;
865
866 if ($test->{max}) {
b82fa0b7 867 if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
868 print "\tafter all the subtests completed successfully\n";
869 $percent = 0;
2fe373ce 870 $failed = 0; # But we do not set $canon!
871 }
872 else {
b82fa0b7 873 push @{$test->{failed}}, $test->{'next'}..$test->{max};
874 $failed = @{$test->{failed}};
875 (my $txt, $canon) = canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
876 $percent = 100*(scalar @{$test->{failed}})/$test->{max};
877 print "DIED. ",$txt;
878 }
879 }
880
881 return { canon => $canon, max => $test->{max} || '??',
882 failed => $failed,
66fd8cb9 883 percent => $percent,
884 estat => $estatus, wstat => $wstatus,
885 };
886}
887
888
889sub _garbled_output {
890 my($gibberish) = shift;
891 warn "Confusing test output: '$gibberish'\n";
892}
893
894
895sub _create_fmts {
896 my($failedtests) = @_;
897
898 my $failed_str = "Failed Test";
899 my $middle_str = " Stat Wstat Total Fail Failed ";
900 my $list_str = "List of Failed";
901
902 # Figure out our longest name string for formatting purposes.
903 my $max_namelen = length($failed_str);
904 foreach my $script (keys %$failedtests) {
905 my $namelen = length $failedtests->{$script}->{name};
906 $max_namelen = $namelen if $namelen > $max_namelen;
907 }
908
909 my $list_len = $Columns - length($middle_str) - $max_namelen;
910 if ($list_len < length($list_str)) {
911 $list_len = length($list_str);
912 $max_namelen = $Columns - length($middle_str) - $list_len;
913 if ($max_namelen < length($failed_str)) {
914 $max_namelen = length($failed_str);
915 $Columns = $max_namelen + length($middle_str) + $list_len;
916 }
917 }
918
919 my $fmt_top = "format STDOUT_TOP =\n"
b82fa0b7 920 . sprintf("%-${max_namelen}s", $failed_str)
9c5c68c8 921 . $middle_str
922 . $list_str . "\n"
923 . "-" x $Columns
924 . "\n.\n";
925
926 my $fmt = "format STDOUT =\n"
2fe373ce 927 . "@" . "<" x ($max_namelen - 1)
b82fa0b7 928 . " @>> @>>>> @>>>> @>>> ^##.##% "
929 . "^" . "<" x ($list_len - 1) . "\n"
930 . '{ $Curtest->{name}, $Curtest->{estat},'
931 . ' $Curtest->{wstat}, $Curtest->{max},'
932 . ' $Curtest->{failed}, $Curtest->{percent},'
933 . ' $Curtest->{canon}'
934 . "\n}\n"
935 . "~~" . " " x ($Columns - $list_len - 2) . "^"
936 . "<" x ($list_len - 1) . "\n"
937 . '$Curtest->{canon}'
938 . "\n.\n";
939
940 eval $fmt_top;
941 die $@ if $@;
942 eval $fmt;
943 die $@ if $@;
944
945 return($fmt_top, $fmt);
946}
947
948{
949 my $tried_devel_corestack;
9c5c68c8 950
951 sub corestatus {
952 my($st) = @_;
c0ee6f5c 953
954 eval {require 'wait.ph'};
955 my $ret = defined &WCOREDUMP ? WCOREDUMP($st) : $st & 0200;
c0ee6f5c 956
957 eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
958 unless $tried_devel_corestack++;
c0ee6f5c 959
960 $ret;
961 }
c0ee6f5c 962}
963
c07a80fd 964sub canonfailed ($@) {
89d3b7e2 965 my($max,$skipped,@failed) = @_;
966 my %seen;
967 @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
c07a80fd 968 my $failed = @failed;
969 my @result = ();
970 my @canon = ();
971 my $min;
972 my $last = $min = shift @failed;
760ac839 973 my $canon;
c07a80fd 974 if (@failed) {
975 for (@failed, $failed[-1]) { # don't forget the last one
976 if ($_ > $last+1 || $_ == $last) {
977 if ($min == $last) {
978 push @canon, $last;
979 } else {
980 push @canon, "$min-$last";
981 }
982 $min = $_;
983 }
984 $last = $_;
985 }
986 local $" = ", ";
987 push @result, "FAILED tests @canon\n";
988 $canon = join ' ', @canon;
a0d0e21e 989 } else {
990 push @result, "FAILED test $last\n";
991 $canon = $last;
a0d0e21e 992 }
c07a80fd 993
994 push @result, "\tFailed $failed/$max tests, ";
995 push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
996 my $ender = 's' x ($skipped > 1);
997 my $good = $max - $failed - $skipped;
998 my $goodper = sprintf("%.2f",100*($good/$max));
999 push @result, " (-$skipped skipped test$ender: $good okay, ".
1000 "$goodper%)"
1001 if $skipped;
89d3b7e2 1002 push @result, "\n";
1003 my $txt = join "", @result;
1004 ($txt, $canon);
1005}
1006
=end _private

=back

=cut


1;
__END__


=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.


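So a typical import line looks like one of these:

  use Test::Harness;                                   # just runtests()

  use Test::Harness qw(&runtests $verbose $switches);  # plus the variables
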
=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped%s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back

=head1 ENVIRONMENT

=over 4

=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces harness to behave as though STDOUT were
not a console.  You may need to set this if you don't want harness to
output more frequent progress messages using carriage returns.  Some
consoles may not handle carriage returns properly (which results in a
somewhat messy output).

=item C<HARNESS_COMPILE_TEST>

When true, it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

  LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the current
directory at the moment runtests() was called.  Putting an absolute path
into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test.  For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

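A driver script can also set it programmatically before calling runtests();
a minimal sketch:

  use Test::Harness;

  $ENV{HARNESS_PERL_SWITCHES} = "-W";   # children run with all warnings on
  runtests(@ARGV);
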
=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal.  If it is not
set then it will default to C<COLUMNS>.  If this is not set, it will
default to 80.  Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests.  This allows
the tests to determine if they are being executed through the harness
or by any other means.

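For example, a test script might use it to decide whether to emit extra
human-oriented chatter; a minimal sketch:

  unless ($ENV{HARNESS_ACTIVE}) {
      print "# Running outside the harness; printing extra diagnostics.\n";
  }
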
=back

=head1 EXAMPLE

Here's how Test::Harness tests itself:

  $ cd ~/src/devel/Test-Harness
  $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
  Using /home/schwern/src/devel/Test-Harness/blib
  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  All tests successful.
  Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)

=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know.  What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages.  Numerous anonymous contributors
exist.  Andreas Koenig held the torch for many years.

Current maintainer is Michael G Schwern E<lt>schwern@pobox.comE<gt>

=head1 TODO

Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests.  This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure out a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.

Merge back into bleadperl.

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

=for _private
Keep whittling away at _run_all_tests()

=for _private
Clean up how the summary is printed.  Get rid of those damned formats.

=head1 BUGS

Test::Harness uses $^X to determine the perl binary to run the tests
with.  Test scripts running via the shebang (C<#!>) line may not be
portable because $^X is not consistent for shebang scripts across
platforms.  This is no problem when Test::Harness is run with an
absolute path to the perl binary or when $^X can be found in the path.

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

=cut