9e7973fa
DM
1#!/usr/bin/perl
2#
3# A tool for analysing the performance of the code snippets found in
4# t/perf/benchmarks or similar
5
6
7=head1 NAME
8
9bench.pl - Compare the performance of perl code snippets across multiple
10perls.
11
12=head1 SYNOPSIS
13
4a1358c2
FC
14 # Basic: run the tests in t/perf/benchmarks against two or
15 # more perls
9e7973fa 16
99b1e78b 17 bench.pl [options] perlA[=labelA] perlB[=labelB] ...
32dfbb33 18
5db17e29 19 # run the tests against the same perl twice, with varying options
32dfbb33 20
88f3a7c3 21 bench.pl [options] perlA=bigint --args='-Mbigint' perlA=plain
9e7973fa 22
5db17e29
DM
23 # Run bench on blead, saving results to file; then modify the blead
24 # binary, and benchmark again, comparing against the saved results
4044748b 25
88f3a7c3 26 bench.pl [options] --write=blead.time ./perl=blead
5db17e29 27 # ... hack hack hack, updating ./perl ...
88f3a7c3 28 bench.pl --read=blead.time ./perl=hacked
4044748b 29
68de41bc 30 # You can also combine --read with --write and new benchmark runs
5db17e29 31
c3bb902a 32 bench.pl --read=blead.time --write=last.time -- ./perl=hacked
4044748b 33
9e7973fa
DM
34=head1 DESCRIPTION
35
36By default, F<bench.pl> will run code snippets found in
37F<t/perf/benchmarks> (or similar) under cachegrind, in order to calculate
38how many instruction reads, data writes, branches, cache misses, etc.
5db17e29
DM
39one execution of the snippet uses. Usually it will run them against two or
40more perl executables and show how much each test has gotten better or
41worse.
9e7973fa
DM
42
43It is modelled on the F<perlbench> tool, but since it measures instruction
44reads etc., rather than timings, it is much more precise and reproducible.
e34630bf 45It is also considerably faster, and is capable of running tests in
9e7973fa
DM
46parallel (with C<-j>). Rather than displaying a single relative
47percentage per test/perl combination, it displays values for 13 different
48measurements, such as instruction reads, conditional branch misses etc.
49
50There are options to write the raw data to a file, and to read it back.
51This means that you can view the same run data in different views with
4044748b
YO
52different selection and sort options. You can also use this mechanism
53to save the results of timing one perl, and then read it back while timing
5db17e29
DM
54a modification, so that you don't have rerun the same tests on the same
55perl over and over, or have two perl executables built at the same time.
9e7973fa
DM
56
57The optional C<=label> after each perl executable is used in the display
4044748b 58output. If you are doing a two-step benchmark then you should provide
99b1e78b
DM
59a label for at least the "base" perl. If a label isn't specified, it
60defaults to the name of the perl executable. Labels must be unique across
61all current executables, plus any previous ones obtained via --read.
62
63In its most general form, the specification of a perl executable is:
64
88f3a7c3
DM
65 path/perl=+mylabel --args='-foo -bar' --args='-baz' \
66 --env='A=a' --env='B=b'
99b1e78b
DM
67
68This defines how to run the executable F<path/perl>. It has a label,
69which due to the C<+>, is appended to the binary name to give a label of
70C<path/perl+mylabel> (without the C<+>, the label would be just
71C<mylabel>).
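
For example (paths and labels here are purely illustrative):

    ./perl=+patched    # label "./perl+patched"
    ./perl=patched     # label "patched"
    ./perl             # label defaults to "./perl"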
72
73It can be optionally followed by one or more C<--args> or C<--env>
74switches, which specify extra command line arguments or environment
75variables to use when invoking that executable. Each C<--env> switch
88f3a7c3
DM
76should be of the form C<--env=VARIABLE=value>. Any C<--args> values are
77concatenated to the eventual command line, along with the global
78C<--perlargs> value if any. The above would cause a system() call looking
79something like:
99b1e78b 80
88f3a7c3
DM
81 PERL_HASH_SEED=0 A=a B=b valgrind --tool=cachegrind \
82 path/perl -foo -bar -baz ....
9e7973fa
DM
83
84=head1 OPTIONS
85
5db17e29
DM
86=head2 General options
87
9e7973fa
DM
88=over 4
89
90=item *
91
92--action=I<foo>
93
94What action to perform. The default is I<grind>, which runs the benchmarks
95using I<cachegrind> as the back end. The only other action at the moment is
96I<selftest>, which runs some basic sanity checks and produces TAP output.
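
For example, to run just the sanity checks (note that no perl executables are
given):

    bench.pl --action=selftest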
97
98=item *
99
5db17e29 100--debug
9e7973fa 101
aa46525d 102Enable debugging output.
9e7973fa
DM
103
104=item *
105
5db17e29 106--help
9e7973fa 107
5db17e29 108Display basic usage information.
9e7973fa
DM
109
110=item *
111
aa46525d 112-v
5db17e29 113--verbose
9e7973fa 114
5db17e29 115Display progress information.
9e7973fa 116
5db17e29 117=back
9e7973fa 118
5db17e29
DM
119=head2 Test selection options
120
121=over 4
9e7973fa
DM
122
123=item *
124
5db17e29 125--tests=I<FOO>
df3d7b3a 126
68de41bc 127Specify a subset of tests to run (or in the case of C<--read>, to read).
5db17e29
DM
128It may be either a comma-separated list of test names, or a regular
129expression. For example
df3d7b3a 130
5db17e29
DM
131 --tests=expr::assign::scalar_lex,expr::assign::2list_lex
132 --tests=/^expr::/
df3d7b3a 133
9e7973fa 134
5db17e29
DM
135=back
136
137=head2 Input options
138
139=over 4
140
9e7973fa
DM
141
142=item *
143
5db17e29
DM
144-r I<file>
145--read=I<file>
9e7973fa 146
5db17e29 147Read in saved data from a previous C<--write> run from the specified file.
68de41bc
DM
148If C<--tests> is present too, then only tests matching those conditions
149are read from the file.
150
151C<--read> may be specified multiple times, in which case the results
152across all files are aggregated. The list of test names from each file
153(after filtering by C<--tests>) must be identical across all files.
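
For example, a sketch of benchmarking two perls in separate runs and then
viewing the combined results (the file names here are illustrative):

    bench.pl --write=A.time ./perlA=A
    bench.pl --write=B.time ./perlB=B
    bench.pl --read=A.time --read=B.time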
154
4533e88f
DM
155This list of tests is used instead of that obtained from the normal
156benchmark file (or C<--benchfile>) for any benchmarks that are run.
9e7973fa 157
88f3a7c3
DM
158The perl labels must be unique across all test results that are read in.
159
5db17e29 160Requires C<JSON::PP> to be available.
9e7973fa 161
5db17e29
DM
162=back
163
164=head2 Benchmarking options
165
166Benchmarks will be run for all perls specified on the command line.
167These options can be used to modify the benchmarking behavior:
168
169=over 4
170
171=item *
172
1e072f25
DM
173--autolabel
174
175Generate a unique label for every executable which doesn't have an
176explicit C<=label>. Works by stripping out common prefixes and suffixes
177from the executable names, then for any non-unique names, appending
88f3a7c3
DM
178C<-0>, C<-1>, etc. Text directly surrounding the unique part which looks
179like a version number (i.e. which matches C</[0-9\.]+/>) isn't stripped.
1e072f25
DM
180For example,
181
182 perl-5.20.0-threaded perl-5.22.0-threaded perl-5.24.0-threaded
183
184stripped to unique parts would be:
185
186 20 22 24
187
188but is actually only stripped down to:
189
190 5.20.0 5.22.0 5.24.0
191
0a1b8eb0
DM
192If the resulting labels are plain integers, they are prefixed with "p"
193to avoid looking like column numbers to switches like C<--norm=2>.
194
1e072f25
DM
195
196=item *
197
5db17e29
DM
198--benchfile=I<foo>
199
200The path of the file which contains the benchmarks (F<t/perf/benchmarks>
201by default).
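
A benchmark file is Perl source that returns a reference to an array of
name/spec pairs; the recognised keys are C<desc>, C<setup>, C<code>, C<pre>,
C<post> and C<compile>. As a rough sketch (the name and values below are only
illustrative), one entry might look like:

    [
        'expr::assign::scalar_lex' => {
            desc  => 'lexical $x = 1',
            setup => 'my $x',
            code  => '$x = 1',
        },
    ];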
9e7973fa
DM
202
203=item *
204
205--grindargs=I<foo>
206
8a094fee
JC
207Optional command-line arguments to pass to all cachegrind invocations.
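
For example (the cache-geometry values below are purely illustrative), this is
where cachegrind's own options, such as C<--I1>, can be passed through:

    bench.pl --grindargs='--I1=32768,8,64' ....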
208
9e7973fa
DM
209=item *
210
211-j I<N>
212--jobs=I<N>
213
214Run I<N> jobs in parallel (default 1). This determines how many cachegrind
88f3a7c3 215 processes will run at a time, and should generally be set to the number
9e7973fa
DM
216of CPUs available.
217
218=item *
219
5db17e29 220--perlargs=I<foo>
9e7973fa 221
99b1e78b 222Optional command-line arguments to pass to every perl executable. This
88f3a7c3 223may optionally be combined with C<--args> switches following individual
99b1e78b
DM
224perls. For example:
225
226 bench.pl --perlargs='-Ilib -It/lib' .... \
227 perlA --args='-Mstrict' \
228 perlB --args='-Mwarnings'
229
230would cause the invocations
231
232 perlA -Ilib -It/lib -Mstrict
233 perlB -Ilib -It/lib -Mwarnings
5db17e29
DM
234
235=back
236
237=head2 Output options
238
88f3a7c3 239Any results accumulated via --read or by running benchmarks can be output
5db17e29
DM
240in any or all of these three ways:
241
242=over 4
9e7973fa
DM
243
244=item *
245
5db17e29
DM
246-w I<file>
247--write=I<file>
9e7973fa 248
5db17e29
DM
249Save the raw data to the specified file. It can be read back later with
250C<--read>. If combined with C<--read> then the output file will be
251the merge of the file read and any additional perls added on the command
252line.
253
254Requires C<JSON::PP> to be available.
9e7973fa
DM
255
256=item *
257
5db17e29 258--bisect=I<field,minval,maxval>
9e7973fa 259
88f3a7c3
DM
260Exit with a zero status if the named field is in the specified range;
261exit with 1 otherwise. It will complain if more than one test or perl has
262been specified. It is intended to be called as part of a bisect run, to
263determine when something changed. For example,
5db17e29
DM
264
265 bench.pl -j 8 --tests=foo --bisect=Ir,100,105 --perlargs=-Ilib \
266 ./miniperl
267
268might be called from bisect to find when the number of instruction reads
269for test I<foo> falls outside the range 100..105.
9e7973fa
DM
270
271=item *
272
5db17e29 273--show
9e7973fa 274
5db17e29
DM
275Display the results to stdout in human-readable form. This is enabled by
276default, except with --write and --bisect. The following sub-options alter
277how --show behaves.
9e7973fa 278
5db17e29 279=over 4
9e7973fa
DM
280
281=item *
282
5db17e29 283--average
9e7973fa 284
5db17e29
DM
285Only display the overall average, rather than the results for each
286individual test.
9e7973fa 287
5db17e29
DM
288=item *
289
88f3a7c3 290--compact=I<perl>
5db17e29
DM
291
292Display the results for a single perl executable in a compact form.
293Which perl to display is specified in the same manner as C<--norm>.
9e7973fa
DM
294
295=item *
296
5db17e29 297--fields=I<a,b,c>
9e7973fa 298
5db17e29 299Display only the specified fields; for example,
9e7973fa 300
5db17e29
DM
301 --fields=Ir,Ir_m,Ir_mm
302
303If only one field is selected, the output is in more compact form.
9e7973fa
DM
304
305=item *
306
5db17e29 307--norm=I<foo>
9e7973fa 308
5db17e29 309Specify which perl column in the output to treat as the 100% norm.
a6d04d4a
DM
310It may be:
311
312=over
313
314=item * a column number (0..N-1),
315
316=item * a negative column number (-1..-N) which counts from the right (so -1 is
317the right-most column),
318
319=item * a perl executable name, or
320
321=item * a perl executable label.
322
323=back
324
5db17e29 325It defaults to the leftmost column.
9e7973fa
DM
326
327=item *
328
5db17e29 329--raw
9e7973fa 330
5db17e29
DM
331Display raw data counts rather than percentages in the outputs. This
332allows you to see the exact number of instruction reads, branch misses etc.
333for each test/perl combination. It also causes the C<AVERAGE> display
334per field to be calculated based on the average of each test's count
335rather than the average of each percentage. This means that tests with very
336high counts will dominate.
9e7973fa 337
5db17e29
DM
338=item *
339
340--sort=I<field:perl>
341
342Order the tests in the output based on the value of I<field> in the
343column I<perl>. The I<perl> value is as per C<--norm>. For example
344
345 bench.pl --sort=Dw:perl-5.20.0 \
346 perl-5.16.0 perl-5.18.0 perl-5.20.0
347
348=back
9e7973fa
DM
349
350=back
351
352=cut
353
354
355
356use 5.010000;
357use warnings;
358use strict;
d54523c4 359use Getopt::Long qw(:config no_auto_abbrev require_order);
9e7973fa
DM
360use IPC::Open2 ();
361use IO::Select;
c2d21e7a 362use IO::File;
9e7973fa
DM
363use POSIX ":sys_wait_h";
364
365# The version of the file format used to save data. We refuse to process
366# the file if the integer component differs.
367
368my $FORMAT_VERSION = 1.0;
369
370# The fields we know about
371
372my %VALID_FIELDS = map { $_ => 1 }
373 qw(Ir Ir_m1 Ir_mm Dr Dr_m1 Dr_mm Dw Dw_m1 Dw_mm COND COND_m IND IND_m);
374
375sub usage {
376 die <<EOF;
5db17e29
DM
377Usage: $0 [options] -- perl[=label] ...
378
379General options:
380
381 --action=foo What action to perform [default: grind]:
382 grind run the code under cachegrind
383 selftest perform a selftest; produce TAP output
9e7973fa 384 --debug Enable verbose debugging output.
9e7973fa 385 --help Display this help.
aa46525d 386 -v|--verbose Display progress information.
5db17e29
DM
387
388
389Selection:
390
391 --tests=FOO Select only the specified tests for reading, benchmarking
392 and display. FOO may be either a list of tests or
393 a pattern: 'foo,bar,baz' or '/regex/';
394 [default: all tests].
395
396Input:
397
398 -r|--read=file Read in previously saved data from the specified file.
399 May be repeated, and be used together with new
400 benchmarking to create combined results.
401
402Benchmarking:
403 Benchmarks will be run for any perl specified on the command line.
404 These options can be used to modify the benchmarking behavior:
405
1e072f25 406 --autolabel generate labels for any executables without one
5db17e29
DM
407 --benchfile=foo File containing the benchmarks.
408 [default: t/perf/benchmarks].
409 --grindargs=foo Optional command-line args to pass to cachegrind.
9e7973fa 410 -j|--jobs=N Run N jobs in parallel [default 1].
9e7973fa 411 --perlargs=foo Optional command-line args to pass to each perl to run.
5db17e29
DM
412
413Output:
414 Any results accumulated via --read or running benchmarks can be output
415 in any or all of these three ways:
416
417 -w|--write=file Save the raw data to the specified file (may be read
418 back later with --read).
419
420 --bisect=f,min,max Exit with a zero status if the named field f is in
421 the specified min..max range; exit 1 otherwise.
422 Produces no other output. Only legal if a single
423 benchmark test has been specified.
424
425 --show Display the results to stdout in human-readable form.
426 This is enabled by default, except with --write and
427 --bisect. The following sub-options alter how
428 --show behaves.
429
430 --average Only display average, not individual test results.
431 --compact=perl Display the results of a single perl in compact form.
432 Which perl specified like --norm
433 --fields=a,b,c Display only the specified fields (e.g. Ir,Ir_m,Ir_mm).
434 --norm=perl Which perl column to treat as 100%; may be a column
435 number (0..N-1) or a perl executable name or label;
436 [default: 0].
437 --raw Display raw data counts rather than percentages.
438 --sort=field:perl Sort the tests based on the value of 'field' in the
9e7973fa 439 column 'perl'. The perl value is as per --norm.
9e7973fa 440
9e7973fa
DM
441
442The command line ends with one or more specified perl executables,
443which will be searched for in the current \$PATH. Each binary name may
444have an optional =LABEL appended, which will be used rather than the
99b1e78b
DM
445executable name in output. The labels must be unique across all current
446executables and previous runs obtained via --read. Each executable may
447optionally be succeeded by --args= and --env= to specify per-executable
448arguments and environment variables:
9e7973fa 449
99b1e78b
DM
450 perl-5.24.0=strict --args='-Mwarnings -Mstrict' --env='FOO=foo' \
451 perl-5.24.0=plain
9e7973fa
DM
452EOF
453}
454
455my %OPTS = (
456 action => 'grind',
457 average => 0,
4533e88f 458 benchfile => undef,
9e7973fa 459 bisect => undef,
df3d7b3a 460 compact => undef,
9e7973fa
DM
461 debug => 0,
462 grindargs => '',
463 fields => undef,
464 jobs => 1,
465 norm => 0,
466 perlargs => '',
467 raw => 0,
468 read => undef,
5db17e29 469 show => undef,
9e7973fa
DM
470 sort => undef,
471 tests => undef,
472 verbose => 0,
473 write => undef,
474);
475
476
477# process command-line args and call top-level action
478
479{
480 GetOptions(
481 'action=s' => \$OPTS{action},
482 'average' => \$OPTS{average},
1e072f25 483 'autolabel' => \$OPTS{autolabel},
9e7973fa
DM
484 'benchfile=s' => \$OPTS{benchfile},
485 'bisect=s' => \$OPTS{bisect},
df3d7b3a 486 'compact=s' => \$OPTS{compact},
9e7973fa
DM
487 'debug' => \$OPTS{debug},
488 'grindargs=s' => \$OPTS{grindargs},
f9fa26a6 489 'help|h' => \$OPTS{help},
9e7973fa
DM
490 'fields=s' => \$OPTS{fields},
491 'jobs|j=i' => \$OPTS{jobs},
492 'norm=s' => \$OPTS{norm},
493 'perlargs=s' => \$OPTS{perlargs},
494 'raw' => \$OPTS{raw},
ee172d48 495 'read|r=s@' => \$OPTS{read},
5db17e29 496 'show' => \$OPTS{show},
9e7973fa
DM
497 'sort=s' => \$OPTS{sort},
498 'tests=s' => \$OPTS{tests},
aa46525d 499 'v|verbose' => \$OPTS{verbose},
9e7973fa 500 'write|w=s' => \$OPTS{write},
f9fa26a6 501 ) or die "Use the -h option for usage information.\n";
9e7973fa
DM
502
503 usage if $OPTS{help};
504
505
9e7973fa
DM
506 if (defined $OPTS{read} or defined $OPTS{write}) {
507 # fail early if it's not present
508 require JSON::PP;
509 }
510
511 if (defined $OPTS{fields}) {
512 my @f = split /,/, $OPTS{fields};
513 for (@f) {
514 die "Error: --fields: unknown field '$_'\n"
515 unless $VALID_FIELDS{$_};
516 }
517 my %f = map { $_ => 1 } @f;
518 $OPTS{fields} = \%f;
519 }
520
521 my %valid_actions = qw(grind 1 selftest 1);
522 unless ($valid_actions{$OPTS{action}}) {
523 die "Error: unrecognised action '$OPTS{action}'\n"
524 . "must be one of: " . join(', ', sort keys %valid_actions)."\n";
525 }
526
527 if (defined $OPTS{sort}) {
528 my @s = split /:/, $OPTS{sort};
529 if (@s != 2) {
530 die "Error: --sort argument should be of the form field:perl: "
531 . "'$OPTS{sort}'\n";
532 }
533 my ($field, $perl) = @s;
5ad96e9e 534 die "Error: --sort: unknown field '$field'\n"
9e7973fa
DM
535 unless $VALID_FIELDS{$field};
536 # the 'perl' value will be validated later, after we have processed
537 # the perls
538 $OPTS{'sort-field'} = $field;
539 $OPTS{'sort-perl'} = $perl;
540 }
541
5db17e29
DM
542 # show is the default output action
543 $OPTS{show} = 1 unless $OPTS{write} || $OPTS{bisect};
9e7973fa
DM
544
545 if ($OPTS{action} eq 'grind') {
546 do_grind(\@ARGV);
547 }
548 elsif ($OPTS{action} eq 'selftest') {
5db17e29
DM
549 if (@ARGV) {
550 die "Error: no perl executables may be specified with selftest\n"
551 }
9e7973fa
DM
552 do_selftest();
553 }
554}
555exit 0;
556
557
558# Given a hash ref keyed by test names, filter it by deleting unwanted
559# tests, based on $OPTS{tests}.
560
561sub filter_tests {
562 my ($tests) = @_;
563
564 my $opt = $OPTS{tests};
565 return unless defined $opt;
566
567 my @tests;
568
569 if ($opt =~ m{^/}) {
570 $opt =~ s{^/(.+)/$}{$1}
571 or die "Error: --tests regex must be of the form /.../\n";
572 for (keys %$tests) {
573 delete $tests->{$_} unless /$opt/;
574 }
575 }
576 else {
577 my %t;
578 for (split /,/, $opt) {
9e7973fa 579 $t{$_} = 1;
e89a8e10
DM
580 next if exists $tests->{$_};
581
582 my $e = "Error: no such test found: '$_'\n";
583 if ($OPTS{verbose}) {
584 $e .= "Valid test names are:\n";
585 $e .= " $_\n" for sort keys %$tests;
586 }
587 else {
588 $e .= "Re-run with --verbose for a list of valid tests.\n";
589 }
590 die $e;
9e7973fa
DM
591 }
592 for (keys %$tests) {
593 delete $tests->{$_} unless exists $t{$_};
594 }
595 }
4044748b 596 die "Error: no tests to run\n" unless %$tests;
9e7973fa
DM
597}
598
599
600# Read in the test file, and filter out any tests excluded by $OPTS{tests}
957d8930
DM
601# return a hash ref { testname => { test }, ... }
602# and an array ref of the original test names order,
9e7973fa
DM
603
604sub read_tests_file {
605 my ($file) = @_;
606
ea572010
DM
607 my $ta;
608 {
609 local @INC = ('.');
610 $ta = do $file;
611 }
9e7973fa 612 unless ($ta) {
1137c9fa
DM
613 die "Error: can't load '$file': code didn't return a true value\n"
614 if defined $ta;
615 die "Error: can't parse '$file':\n$@\n" if $@;
9e7973fa
DM
616 die "Error: can't read '$file': $!\n";
617 }
618
1836b255
DM
619 # validate and process each test
620
621 {
a9b10838 622 my %valid = map { $_ => 1 } qw(desc setup code pre post compile);
1836b255
DM
623 my @tests = @$ta;
624 if (!@tests || @tests % 2 != 0) {
625 die "Error: '$file' does not contain evenly paired test names and hashes\n";
626 }
627 while (@tests) {
628 my $name = shift @tests;
629 my $hash = shift @tests;
630
631 unless ($name =~ /^[a-zA-Z]\w*(::\w+)*$/) {
632 die "Error: '$file': invalid test name: '$name'\n";
633 }
634
635 for (sort keys %$hash) {
636 die "Error: '$file': invalid key '$_' for test '$name'\n"
637 unless exists $valid{$_};
638 }
b0ecc2e1
DM
639
640 # make description default to the code
641 $hash->{desc} = $hash->{code} unless exists $hash->{desc};
1836b255
DM
642 }
643 }
644
957d8930
DM
645 my @orig_order;
646 for (my $i=0; $i < @$ta; $i += 2) {
647 push @orig_order, $ta->[$i];
648 }
649
9e7973fa
DM
650 my $t = { @$ta };
651 filter_tests($t);
957d8930 652 return $t, \@orig_order;
9e7973fa
DM
653}
654
655
5db17e29
DM
656# Process the perl name/label/column argument of options like --norm and
657# --sort. Return the index of the matching perl.
9e7973fa
DM
658
659sub select_a_perl {
660 my ($perl, $perls, $who) = @_;
a6d04d4a
DM
661 $perls ||= [];
662 my $n = @$perls;
663
664 if ($perl =~ /^-([0-9]+)$/) {
665 my $p = $1;
666 die "Error: $who value $perl outside range -1..-$n\n"
667 if $p < 1 || $p > $n;
668 return $n - $p;
669 }
670
671 if ($perl =~ /^[0-9]+$/) {
9e7973fa 672 die "Error: $who value $perl outside range 0.." . $#$perls . "\n"
a6d04d4a 673 unless $perl < $n;
9e7973fa
DM
674 return $perl;
675 }
676 else {
677 my @perl = grep $perls->[$_][0] eq $perl
678 || $perls->[$_][1] eq $perl,
679 0..$#$perls;
78d44f6b
DM
680 unless (@perl) {
681 my $valid = '';
682 for (@$perls) {
683 $valid .= " $_->[1]";
684 $valid .= " $_->[0]" if $_->[0] ne $_->[1];
685 $valid .= "\n";
686 }
687 die "Error: $who: unrecognised perl '$perl'\n"
688 . "Valid perl names are:\n$valid";
689 }
9e7973fa
DM
690 die "Error: $who: ambiguous perl '$perl'\n"
691 if @perl > 1;
692 return $perl[0];
693 }
694}
695
696
99b1e78b
DM
697# Validate the list of perl executables on the command line.
698# The general form is
699#
700# a_perl_exe[=label] [ --args='perl args'] [ --env='FOO=foo' ]
701#
702# Return a list of [ exe, label, {env}, 'args' ] tuples
703
704sub process_executables_list {
705 my ($read_perls, @cmd_line_args) = @_;
9e7973fa 706
99b1e78b 707 my @results; # returned, each item is [ perlexe, label, {env}, 'args' ]
81cb9d79
DM
708 my %seen_from_reads = map { $_->[1] => 1 } @$read_perls;
709 my %seen;
1e072f25 710 my @labels;
d54523c4 711
99b1e78b
DM
712 while (@cmd_line_args) {
713 my $item = shift @cmd_line_args;
714
715 if ($item =~ /^--(.*)$/) {
716 my ($switch, $val) = split /=/, $1, 2;
717 die "Error: unrecognised executable switch '--$switch'\n"
718 unless $switch =~ /^(args|env)$/;
719
720 die "Error: --$switch without a preceding executable name\n"
721 unless @results;
d54523c4 722
99b1e78b
DM
723 unless (defined $val) {
724 $val = shift @cmd_line_args;
725 die "Error: --$switch is missing value\n"
726 unless defined $val;
727 }
728
729 if ($switch eq 'args') {
730 $results[-1][3] .= " $val";
731 }
732 else {
733 # --env
734 $val =~ /^(\w+)=(.*)$/
735 or die "Error: --env is missing =value\n";
736 $results[-1][2]{$1} = $2;
737 }
738
739 next;
740 }
741
742 # whatever is left must be the name of an executable
743
744 my ($perl, $label) = split /=/, $item, 2;
1e072f25
DM
745 push @labels, $label;
746 unless ($OPTS{autolabel}) {
747 $label //= $perl;
748 $label = $perl.$label if $label =~ /^\+/;
749 }
81cb9d79
DM
750
751 die "Error: duplicate label '$label': "
752 . "each executable must have a unique label\n"
1e072f25 753 if defined $label && $seen{$label}++;
81cb9d79
DM
754
755 die "Error: duplicate label '$label': "
756 . "seen both in --read file and on command line\n"
1e072f25 757 if defined $label && $seen_from_reads{$label};
955a736c 758
9e7973fa 759 my $r = qx($perl -e 'print qq(ok\n)' 2>&1);
99b1e78b
DM
760 die "Error: unable to execute '$perl': $r\n" if $r ne "ok\n";
761
762 push @results, [ $perl, $label, { }, '' ];
9e7973fa 763 }
99b1e78b
DM
764
765 # make args '' by default
766 for (@results) {
767 push @$_, '' unless @$_ > 3;
768 }
769
1e072f25
DM
770 if ($OPTS{autolabel}) {
771
772 # create a list of [ 'perl-path', $i ] pairs for all
773 # $results[$i] which don't have a label
774 my @labels;
775 for (0..$#results) {
776 push @labels, [ $results[$_][0], $_ ]
777 unless defined $results[$_][1];
778 }
779
780 if (@labels) {
781 # strip off common prefixes
782 my $pre = '';
783 STRIP_PREFIX:
784 while (length $labels[0][0]) {
785 my $c = substr($labels[0][0], 0, 1);
786 for my $i (1..$#labels) {
787 last STRIP_PREFIX if substr($labels[$i][0], 0, 1) ne $c;
788 }
789 substr($labels[$_][0], 0, 1) = '' for 0..$#labels;
790 $pre .= $c;
791 }
792 # add back any final "version-ish" prefix
793 $pre =~ s/^.*?([0-9\.]*)$/$1/;
794 substr($labels[$_][0], 0, 0) = $pre for 0..$#labels;
795
796 # strip off common suffixes
797 my $post = '';
798            STRIP_SUFFIX:
799 while (length $labels[0][0]) {
800 my $c = substr($labels[0][0], -1, 1);
801 for my $i (1..$#labels) {
802                last STRIP_SUFFIX if substr($labels[$i][0], -1, 1) ne $c;
803 }
804 chop $labels[$_][0] for 0..$#labels;
805 $post = "$c$post";
806 }
807 # add back any initial "version-ish" suffix
808 $post =~ s/^([0-9\.]*).*$/$1/;
809 $labels[$_][0] .= $post for 0..$#labels;
810
0a1b8eb0
DM
811 # avoid degenerate empty string for single executable name
812 $labels[0][0] = '0' if @labels == 1 && !length $labels[0][0];
813
814 # if the auto-generated labels are plain integers, prefix
815 # them with 'p' (for perl) to distinguish them from column
816 # indices (otherwise e.g. --norm=2 is ambiguous)
817
818 if ($labels[0][0] =~ /^\d*$/) {
819 $labels[$_][0] = "p$labels[$_][0]" for 0..$#labels;
820 }
821
1e072f25
DM
822 # now de-duplicate labels
823
824 my (%seen, %index);
825 $seen{$read_perls->[$_][1]}++ for 0..$#$read_perls;
826 $seen{$labels[$_][0]}++ for 0..$#labels;
827
828 for my $i (0..$#labels) {
829 my $label = $labels[$i][0];
830 next unless $seen{$label} > 1;
831 my $d = length($label) ? '-' : '';
832 my $n = $index{$label} // 0;
833 $n++ while exists $seen{"$label$d$n"};
834 $labels[$i][0] .= "$d$n";
835 $index{$label} = $n + 1;
836 }
837
838 # finally, store them
839 $results[$_->[1]][1]= $_->[0] for @labels;
840 }
841 }
842
843
99b1e78b 844 return @results;
9e7973fa
DM
845}
846
847
8fbd1c2c 848
485eb009
DM
849# Return a string containing a perl program which runs the benchmark code
850# $ARGV[0] times. If $body is true, include the main body (setup) in
851# the loop; otherwise create an empty loop with just pre and post.
852# Note that an empty body is handled with '1;' so that a completely empty
853# loop has a single nextstate rather than a stub op, so more closely
854# matches the active loop; e.g.:
855# {1;} => nextstate; unstack
856# {$x=1;} => nextstate; const; gvsv; sassign; unstack
857# Note also that each statement is prefixed with a label; this avoids
a9b10838
DM
858# adjacent nextstate ops being optimised away.
859#
860# A final 1; statement is added so that the code is always in void
861# context.
862#
863# If the compile flag is set for a test, the body of the loop is wrapped in
864# eval 'sub { .... }' to measure compile time rather than execution time
9e7973fa
DM
865
866sub make_perl_prog {
485eb009 867 my ($name, $test, $body) = @_;
a9b10838
DM
868 my ($desc, $setup, $code, $pre, $post, $compile) =
869 @$test{qw(desc setup code pre post compile)};
485eb009 870
ed7dc8b7 871 $setup //= '';
485eb009
DM
872 $pre = defined $pre ? "_PRE_: $pre; " : "";
873 $post = defined $post ? "_POST_: $post; " : "";
874 $code = $body ? $code : "1";
875 $code = "_CODE_: $code; ";
a9b10838
DM
876 my $full = "$pre$code$post _CXT_: 1; ";
877 $full = "eval q{sub { $full }};" if $compile;
878
9e7973fa
DM
879 return <<EOF;
880# $desc
485eb009 881package $name;
9e7973fa
DM
882BEGIN { srand(0) }
883$setup;
884for my \$__loop__ (1..\$ARGV[0]) {
a9b10838 885 $full
9e7973fa
DM
886}
887EOF
888}
889
890
891# Parse the output from cachegrind. Return a hash ref.
892# See do_selftest() for examples of the output format.
893
894sub parse_cachegrind {
895 my ($output, $id, $perl) = @_;
896
897 my %res;
898
899 my @lines = split /\n/, $output;
900 for (@lines) {
901 unless (s/(==\d+==)|(--\d+--) //) {
902 die "Error: while executing $id:\n"
903 . "unexpected code or cachegrind output:\n$_\n";
904 }
905 if (/I refs:\s+([\d,]+)/) {
906 $res{Ir} = $1;
907 }
908 elsif (/I1 misses:\s+([\d,]+)/) {
909 $res{Ir_m1} = $1;
910 }
911 elsif (/LLi misses:\s+([\d,]+)/) {
912 $res{Ir_mm} = $1;
913 }
914 elsif (/D refs:\s+.*?([\d,]+) rd .*?([\d,]+) wr/) {
915 @res{qw(Dr Dw)} = ($1,$2);
916 }
917 elsif (/D1 misses:\s+.*?([\d,]+) rd .*?([\d,]+) wr/) {
918 @res{qw(Dr_m1 Dw_m1)} = ($1,$2);
919 }
920 elsif (/LLd misses:\s+.*?([\d,]+) rd .*?([\d,]+) wr/) {
921 @res{qw(Dr_mm Dw_mm)} = ($1,$2);
922 }
923 elsif (/Branches:\s+.*?([\d,]+) cond .*?([\d,]+) ind/) {
924 @res{qw(COND IND)} = ($1,$2);
925 }
926 elsif (/Mispredicts:\s+.*?([\d,]+) cond .*?([\d,]+) ind/) {
927 @res{qw(COND_m IND_m)} = ($1,$2);
928 }
929 }
930
931 for my $field (keys %VALID_FIELDS) {
932 die "Error: can't parse '$field' field from cachegrind output:\n$output"
933 unless exists $res{$field};
934 $res{$field} =~ s/,//g;
935 }
936
937 return \%res;
938}
939
940
941# Handle the 'grind' action
942
943sub do_grind {
7570f185 944 my ($cmd_line_args) = @_; # the residue of @ARGV after option processing
9e7973fa 945
5db17e29 946 my ($loop_counts, $perls, $results, $tests, $order, @run_perls);
9e7973fa 947 my ($bisect_field, $bisect_min, $bisect_max);
81cb9d79 948 my ($done_read, $processed, $averages, %seen_labels);
9e7973fa
DM
949
950 if (defined $OPTS{bisect}) {
951 ($bisect_field, $bisect_min, $bisect_max) = split /,/, $OPTS{bisect}, 3;
952 die "Error: --bisect option must be of form 'field,integer,integer'\n"
953 unless
954 defined $bisect_max
955 and $bisect_min =~ /^[0-9]+$/
956 and $bisect_max =~ /^[0-9]+$/;
957
958 die "Error: unrecognised field '$bisect_field' in --bisect option\n"
959 unless $VALID_FIELDS{$bisect_field};
960
961 die "Error: --bisect min ($bisect_min) must be <= max ($bisect_max)\n"
962 if $bisect_min > $bisect_max;
963 }
964
5db17e29
DM
965 # Read in previous benchmark results
966
ee172d48
YO
967 foreach my $file (@{$OPTS{read}}) {
968 open my $in, '<:encoding(UTF-8)', $file
1137c9fa 969 or die "Error: can't open '$file' for reading: $!\n";
9e7973fa
DM
970 my $data = do { local $/; <$in> };
971 close $in;
972
973 my $hash = JSON::PP::decode_json($data);
974 if (int($FORMAT_VERSION) < int($hash->{version})) {
975 die "Error: unsupported version $hash->{version} in file"
1137c9fa 976 . " '$file' (too new)\n";
9e7973fa 977 }
ee172d48 978 my ($read_loop_counts, $read_perls, $read_results, $read_tests, $read_order) =
957d8930 979 @$hash{qw(loop_counts perls results tests order)};
68de41bc
DM
980
981 # check file contents for consistency
982 my $k_o = join ';', sort @$read_order;
983 my $k_r = join ';', sort keys %$read_results;
984 my $k_t = join ';', sort keys %$read_tests;
985 die "File '$file' contains no results\n" unless length $k_r;
986 die "File '$file' contains differing test and results names\n"
987 unless $k_r eq $k_t;
988 die "File '$file' contains differing test and sort order names\n"
989 unless $k_o eq $k_t;
990
991 # delete tests not matching --tests= criteria, if any
ee172d48
YO
992 filter_tests($read_results);
993 filter_tests($read_tests);
68de41bc 994
81cb9d79
DM
995 for my $perl (@$read_perls) {
996 my $label = $perl->[1];
997 die "Error: duplicate label '$label': seen in file '$file'\n"
998 if exists $seen_labels{$label};
999 $seen_labels{$label}++;
1000 }
1001
f850a012 1002 if (!$done_read) {
ee172d48
YO
1003 ($loop_counts, $perls, $results, $tests, $order) =
1004 ($read_loop_counts, $read_perls, $read_results, $read_tests, $read_order);
f850a012 1005 $done_read = 1;
68de41bc
DM
1006 }
1007 else {
1008 # merge results across multiple files
1009
1010 if ( join(';', sort keys %$tests)
1011 ne join(';', sort keys %$read_tests))
ee172d48 1012 {
68de41bc
DM
1013 my $err = "Can't merge multiple read files: "
1014 . "they contain differing test sets.\n";
1015 if ($OPTS{verbose}) {
1016 $err .= "Previous tests:\n";
1017 $err .= " $_\n" for sort keys %$tests;
1018 $err .= "tests from '$file':\n";
1019 $err .= " $_\n" for sort keys %$read_tests;
1020 }
1021 else {
1022 $err .= "Re-run with --verbose to see the differences.\n";
1023 }
1024 die $err;
1025 }
1026
1027 if ("@$read_loop_counts" ne "@$loop_counts") {
1028 die "Can't merge multiple read files: differing loop counts:\n"
1029 . " (previous=(@$loop_counts), "
1030 . "'$file'=(@$read_loop_counts))\n";
ee172d48
YO
1031 }
1032
9daf692f
DM
1033 push @$perls, @{$read_perls};
1034 foreach my $test (keys %{$read_results}) {
1035 foreach my $label (keys %{$read_results->{$test}}) {
1036 $results->{$test}{$label}= $read_results->{$test}{$label};
ee172d48
YO
1037 }
1038 }
957d8930 1039 }
9e7973fa 1040 }
4533e88f
DM
1041 die "Error: --benchfile cannot be used when --read is present\n"
1042 if $done_read && defined $OPTS{benchfile};
9e7973fa 1043
5db17e29
DM
1044 # Gather list of perls to benchmark:
1045
7570f185 1046 if (@$cmd_line_args) {
f850a012 1047 unless ($done_read) {
4044748b
YO
1048 # How many times to execute the loop for the two trials. The lower
1049 # value is intended to do the loop enough times that branch
1050 # prediction has taken hold; the higher loop allows us to see the
1051 # branch misses after that
1052 $loop_counts = [10, 20];
8fbd1c2c 1053
4533e88f
DM
1054 ($tests, $order) =
1055 read_tests_file($OPTS{benchfile} // 't/perf/benchmarks');
4044748b 1056 }
8fbd1c2c 1057
7570f185 1058 @run_perls = process_executables_list($perls, @$cmd_line_args);
4044748b 1059 push @$perls, @run_perls;
9e7973fa
DM
1060 }
1061
244df321
DM
1062 # strip @$order to just the actual tests present
1063 $order = [ grep exists $tests->{$_}, @$order ];
1064
5db17e29
DM
1065 # Now we know what perls and tests we have, do extra option processing
1066 # and checking (done before grinding, so time isn't wasted if we die).
1067
5825b6d4
YO
1068 if (!$perls or !@$perls) {
1069 die "Error: nothing to do: no perls to run, no data to read.\n";
1070 }
5db17e29
DM
1071 if (@$perls < 2 and $OPTS{show} and !$OPTS{raw}) {
1072 die "Error: need at least 2 perls for comparison.\n"
1073 }
1074
1075 if ($OPTS{bisect}) {
1076 die "Error: exactly one perl executable must be specified for bisect\n"
1077 unless @$perls == 1;
1078 die "Error: only a single test may be specified with --bisect\n"
1079 unless keys %$tests == 1;
1080 }
8fbd1c2c
DM
1081
1082 $OPTS{norm} = select_a_perl($OPTS{norm}, $perls, "--norm");
5db17e29 1083
8fbd1c2c
DM
1084 if (defined $OPTS{'sort-perl'}) {
1085 $OPTS{'sort-perl'} =
1086 select_a_perl($OPTS{'sort-perl'}, $perls, "--sort");
1087 }
1088
df3d7b3a
DM
1089 if (defined $OPTS{'compact'}) {
1090 $OPTS{'compact'} =
1091 select_a_perl($OPTS{'compact'}, $perls, "--compact");
1092 }
5db17e29
DM
1093
1094
1095    # Run the benchmarks; accumulate with any previously read results.
1096
1097 if (@run_perls) {
1098 $results = grind_run($tests, $order, \@run_perls, $loop_counts, $results);
1099 }
1100
1101
1102 # Handle the 3 forms of output
1103
9e7973fa
DM
1104 if (defined $OPTS{write}) {
1105 my $json = JSON::PP::encode_json({
1106 version => $FORMAT_VERSION,
1107 loop_counts => $loop_counts,
1108 perls => $perls,
1109 results => $results,
1110 tests => $tests,
957d8930 1111 order => $order,
9e7973fa
DM
1112 });
1113
1114 open my $out, '>:encoding(UTF-8)', $OPTS{write}
5825b6d4 1115 or die "Error: can't open '$OPTS{write}' for writing: $!\n";
9e7973fa
DM
1116 print $out $json or die "Error: writing to file '$OPTS{write}': $!\n";
1117 close $out or die "Error: closing file '$OPTS{write}': $!\n";
1118 }
5db17e29
DM
1119
1120 if ($OPTS{show} or $OPTS{bisect}) {
1121 # numerically process the raw data
1122 ($processed, $averages) =
9e7973fa 1123 grind_process($results, $perls, $loop_counts);
5db17e29 1124 }
9e7973fa 1125
5db17e29
DM
1126 if ($OPTS{show}) {
1127 if (defined $OPTS{compact}) {
df3d7b3a
DM
1128 grind_print_compact($processed, $averages, $OPTS{compact},
1129 $perls, $tests, $order);
1130 }
9e7973fa 1131 else {
957d8930 1132 grind_print($processed, $averages, $perls, $tests, $order);
9e7973fa
DM
1133 }
1134 }
5db17e29
DM
1135
1136 if ($OPTS{bisect}) {
1137c9fa 1137 # these panics shouldn't happen if the bisect checks above are sound
5db17e29
DM
1138 my @r = values %$results;
1139 die "Panic: expected exactly one test result in bisect\n"
1140 if @r != 1;
1141 @r = values %{$r[0]};
1142 die "Panic: expected exactly one perl result in bisect\n"
1143 if @r != 1;
1144 my $c = $r[0]{$bisect_field};
1145 die "Panic: no result in bisect for field '$bisect_field'\n"
1146 unless defined $c;
1147
a387d7f0
DM
1148 print "Bisect: $bisect_field had the value $c\n";
1149
5db17e29
DM
1150 exit 0 if $bisect_min <= $c and $c <= $bisect_max;
1151 exit 1;
1152 }
9e7973fa
DM
1153}
1154
1155
1156# Run cachegrind for every test/perl combo.
1157# It may run several processes in parallel when -j is specified.
1158# Return a hash ref suitable for input to grind_process()
1159
1160sub grind_run {
4044748b 1161 my ($tests, $order, $perls, $counts, $results) = @_;
9e7973fa
DM
1162
1163 # Build a list of all the jobs to run
1164
1165 my @jobs;
1166
957d8930 1167 for my $test (grep $tests->{$_}, @$order) {
9e7973fa
DM
1168
1169 # Create two test progs: one with an empty loop and one with code.
9e7973fa 1170 my @prog = (
485eb009
DM
1171 make_perl_prog($test, $tests->{$test}, 0),
1172 make_perl_prog($test, $tests->{$test}, 1),
9e7973fa
DM
1173 );
1174
1175 for my $p (@$perls) {
99b1e78b 1176 my ($perl, $label, $env, $args) = @$p;
9e7973fa
DM
1177
1178 # Run both the empty loop and the active loop
1179 # $counts->[0] and $counts->[1] times.
1180
1181 for my $i (0,1) {
1182 for my $j (0,1) {
60858fe8
JC
1183 my $envstr = '';
1184 if (ref $env) {
1185 $envstr .= "$_=$env->{$_} " for sort keys %$env;
1186 }
1187 my $cmd = "PERL_HASH_SEED=0 $envstr"
9e7973fa
DM
1188 . "valgrind --tool=cachegrind --branch-sim=yes "
1189 . "--cachegrind-out-file=/dev/null "
1190 . "$OPTS{grindargs} "
99b1e78b 1191 . "$perl $OPTS{perlargs} $args - $counts->[$j] 2>&1";
9e7973fa 1192 # for debugging and error messages
c385646f 1193 my $id = "$test/$label "
9e7973fa
DM
1194 . ($i ? "active" : "empty") . "/"
1195 . ($j ? "long" : "short") . " loop";
1196
1197 push @jobs, {
1198 test => $test,
1199 perl => $perl,
1200 plabel => $label,
1201 cmd => $cmd,
1202 prog => $prog[$i],
1203 active => $i,
1204 loopix => $j,
1205 id => $id,
1206 };
1207 }
1208 }
1209 }
1210 }
1211
1212 # Execute each cachegrind and store the results in %results.
1213
1214 local $SIG{PIPE} = 'IGNORE';
1215
1216 my $max_jobs = $OPTS{jobs};
1217 my $running = 0; # count of executing jobs
1218 my %pids; # map pids to jobs
1219 my %fds; # map fds to jobs
9e7973fa
DM
1220 my $select = IO::Select->new();
1221
1222 while (@jobs or $running) {
1223
1224 if ($OPTS{debug}) {
1225 printf "Main loop: pending=%d running=%d\n",
1226 scalar(@jobs), $running;
1227 }
1228
1229 # Start new jobs
1230
1231 while (@jobs && $running < $max_jobs) {
1232 my $job = shift @jobs;
1233 my ($id, $cmd) =@$job{qw(id cmd)};
1234
1235 my ($in, $out, $pid);
1236 warn "Starting $id\n" if $OPTS{verbose};
1237 eval { $pid = IPC::Open2::open2($out, $in, $cmd); 1; }
1238 or die "Error: while starting cachegrind subprocess"
1239 ." for $id:\n$@";
1240 $running++;
1241 $pids{$pid} = $job;
1242 $fds{"$out"} = $job;
1243 $job->{out_fd} = $out;
1244 $job->{output} = '';
1245 $job->{pid} = $pid;
1246
1247 $out->blocking(0);
1248 $select->add($out);
1249
1250 if ($OPTS{debug}) {
1251 print "Started pid $pid for $id\n";
1252 }
1253
1254 # Note:
1255 # In principle we should write to $in in the main select loop,
1256 # since it may block. In reality,
1257 # a) the code we write to the perl process's stdin is likely
1258 # to be less than the OS's pipe buffer size;
1259 # b) by the time the perl process has read in all its stdin,
1260 # the only output it should have generated is a few lines
1261 # of cachegrind output preamble.
1262 # If these assumptions change, then perform the following print
1263 # in the select loop instead.
1264
1265 print $in $job->{prog};
1266 close $in;
1267 }
1268
1269 # Get output of running jobs
1270
1271 if ($OPTS{debug}) {
1272 printf "Select: waiting on (%s)\n",
1273 join ', ', sort { $a <=> $b } map $fds{$_}{pid},
1274 $select->handles;
1275 }
1276
1277 my @ready = $select->can_read;
1278
1279 if ($OPTS{debug}) {
1280 printf "Select: pids (%s) ready\n",
1281 join ', ', sort { $a <=> $b } map $fds{$_}{pid}, @ready;
1282 }
1283
1284 unless (@ready) {
1285 die "Panic: select returned no file handles\n";
1286 }
1287
1288 for my $fd (@ready) {
1289 my $j = $fds{"$fd"};
1290 my $r = sysread $fd, $j->{output}, 8192, length($j->{output});
1291 unless (defined $r) {
1292 die "Panic: Read from process running $j->{id} gave:\n$!";
1293 }
1294 next if $r;
1295
1296 # EOF
1297
1298 if ($OPTS{debug}) {
1299 print "Got eof for pid $fds{$fd}{pid} ($j->{id})\n";
1300 }
1301
1302 $select->remove($j->{out_fd});
1303 close($j->{out_fd})
1304 or die "Panic: closing output fh on $j->{id} gave:\n$!\n";
1305 $running--;
1306 delete $fds{"$j->{out_fd}"};
1307 my $output = $j->{output};
1308
1309 if ($OPTS{debug}) {
1310 my $p = $j->{prog};
1311 $p =~ s/^/ : /mg;
1312 my $o = $output;
1313 $o =~ s/^/ : /mg;
1314
1315 print "\n$j->{id}/\nCommand: $j->{cmd}\n"
1316 . "Input:\n$p"
1317 . "Output\n$o";
1318 }
1319
4044748b 1320 $results->{$j->{test}}{$j->{plabel}}[$j->{active}][$j->{loopix}]
9e7973fa
DM
1321 = parse_cachegrind($output, $j->{id}, $j->{perl});
1322 }
1323
1324 # Reap finished jobs
1325
1326 while (1) {
1327 my $kid = waitpid(-1, WNOHANG);
1328 my $ret = $?;
1329 last if $kid <= 0;
1330
1331 unless (exists $pids{$kid}) {
1332 die "Panic: reaped unexpected child $kid";
1333 }
1334 my $j = $pids{$kid};
1335 if ($ret) {
1336 die sprintf("Error: $j->{id} gave return status 0x%04x\n", $ret)
1337 . "with the following output\n:$j->{output}\n";
1338 }
1339 delete $pids{$kid};
1340 }
1341 }
1342
4044748b 1343 return $results;
9e7973fa
DM
1344}
1345
1346
1347
1348
1349# grind_process(): process the data that has been extracted from
1350# cachegrind's output.
1351#
8b6302e0 1352# $res is of the form ->{benchmark_name}{perl_label}[active][count]{field_name},
9e7973fa
DM
1353# where active is 0 or 1 indicating an empty or active loop,
1354# count is 0 or 1 indicating a short or long loop. E.g.
1355#
1356# $res->{'expr::assign::scalar_lex'}{perl-5.21.1}[0][10]{Dw_mm}
1357#
1358# The $res data structure is modified in-place by this sub.
1359#
1360# $perls is [ [ perl-exe, perl-label], .... ].
1361#
1362# $counts is [ N, M ] indicating the counts for the short and long loops.
1363#
1364#
1365# return \%output, \%averages, where
1366#
8b6302e0
DM
1367# $output{benchmark_name}{perl_label}{field_name} = N
1368# $averages{perl_label}{field_name} = M
9e7973fa
DM
1369#
1370# where N is the raw count ($OPTS{raw}), or count_perl0/count_perlI otherwise;
1371# M is the average raw count over all tests ($OPTS{raw}), or
1372# 1/(sum(count_perlI/count_perl0)/num_tests) otherwise.
1373
1374sub grind_process {
1375 my ($res, $perls, $counts) = @_;
1376
1377 # Process the four results for each test/perf combo:
1378 # Convert
8b6302e0 1379 # $res->{benchmark_name}{perl_label}[active][count]{field_name} = n
9e7973fa 1380 # to
8b6302e0 1381 # $res->{benchmark_name}{perl_label}{field_name} = averaged_n
9e7973fa
DM
1382 #
1383 # $r[0][1] - $r[0][0] is the time to do ($counts->[1]-$counts->[0])
1384 # empty loops, eliminating startup time
1385 # $r[1][1] - $r[1][0] is the time to do ($counts->[1]-$counts->[0])
1386 # active loops, eliminating startup time
1387 # (the two startup times may be different because different code
1388 # is being compiled); the difference of the two results above
1389 # divided by the count difference is the time to execute the
1390 # active code once, eliminating both startup and loop overhead.
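    #
    # A worked example with made-up numbers, using counts (10, 20):
    #   empty loop:  short=1000, long=1010  =>  10 units for the 10 extra iterations
    #   active loop: short=1500, long=1710  => 210 units for the 10 extra iterations
    #   cost of one execution of the active code: (210 - 10) / (20 - 10) = 20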
1391
1392 for my $tests (values %$res) {
1393 for my $r (values %$tests) {
1394 my $r2;
1395 for (keys %{$r->[0][0]}) {
1396 my $n = ( ($r->[1][1]{$_} - $r->[1][0]{$_})
1397 - ($r->[0][1]{$_} - $r->[0][0]{$_})
1398 ) / ($counts->[1] - $counts->[0]);
1399 $r2->{$_} = $n;
1400 }
1401 $r = $r2;
1402 }
1403 }
1404
1405 my %totals;
1406 my %counts;
1407 my %data;
1408
1a961f9f 1409 my $perl_norm = $perls->[$OPTS{norm}][1]; # the label of the reference perl
9e7973fa
DM
1410
1411 for my $test_name (keys %$res) {
1412 my $res1 = $res->{$test_name};
1413 my $res2_norm = $res1->{$perl_norm};
1414 for my $perl (keys %$res1) {
1415 my $res2 = $res1->{$perl};
1416 for my $field (keys %$res2) {
1417 my ($p, $q) = ($res2_norm->{$field}, $res2->{$field});
1418
1419 if ($OPTS{raw}) {
1420 # Avoid annoying '-0.0' displays. Ideally this number
1421 # should never be negative, but fluctuations in
1422 # startup etc can theoretically make this happen
1423 $q = 0 if ($q <= 0 && $q > -0.1);
1424 $totals{$perl}{$field} += $q;
1425 $counts{$perl}{$field}++;
1426 $data{$test_name}{$perl}{$field} = $q;
1427 next;
1428 }
1429
1430 # $p and $q are notionally integer counts, but
1431 # due to variations in startup etc, it's possible for a
1432 # count which is supposedly zero to be calculated as a
1433 # small positive or negative value.
1434 # In this case, set it to zero. Further below we
1435 # special-case zeros to avoid division by zero errors etc.
1436
1437 $p = 0.0 if $p < 0.01;
1438 $q = 0.0 if $q < 0.01;
1439
1440 if ($p == 0.0 && $q == 0.0) {
1441 # Both perls gave a count of zero, so no change:
1442 # treat as 100%
1443 $totals{$perl}{$field} += 1;
1444 $counts{$perl}{$field}++;
1445 $data{$test_name}{$perl}{$field} = 1;
1446 }
1447 elsif ($p == 0.0 || $q == 0.0) {
1448 # If either count is zero, there were too few events
1449 # to give a meaningful ratio (and we will end up with
1450 # division by zero if we try). Mark the result undef,
1451 # indicating that it shouldn't be displayed; and skip
1452 # adding to the average
1453 $data{$test_name}{$perl}{$field} = undef;
1454 }
1455 else {
1456 # For averages, we record q/p rather than p/q.
1457 # Consider a test where perl_norm took 1000 cycles
1458 # and perlN took 800 cycles. For the individual
1459 # results we display p/q, or 1.25; i.e. a quarter
1460 # quicker. For the averages, we instead sum all
1461 # the 0.8's, which gives the total cycles required to
1462 # execute all tests, with all tests given equal
1463 # weight. Later we reciprocate the final result,
1464 # i.e. 1/(sum(qi/pi)/n)
1465
1466 $totals{$perl}{$field} += $q/$p;
1467 $counts{$perl}{$field}++;
1468 $data{$test_name}{$perl}{$field} = $p/$q;
1469 }
1470 }
1471 }
1472 }
1473
1474 # Calculate averages based on %totals and %counts accumulated earlier.
1475
1476 my %averages;
1477 for my $perl (keys %totals) {
1478 my $t = $totals{$perl};
1479 for my $field (keys %$t) {
1480 $averages{$perl}{$field} = $OPTS{raw}
1481 ? $t->{$field} / $counts{$perl}{$field}
1482 # reciprocal - see comments above
1483 : $counts{$perl}{$field} / $t->{$field};
1484 }
1485 }
1486
1487 return \%data, \%averages;
1488}
1489
1490
9e7973fa 1491
df3d7b3a 1492# print a standard blurb at the start of the grind display
9e7973fa 1493
df3d7b3a
DM
1494sub grind_blurb {
1495 my ($perls) = @_;
9e7973fa
DM
1496
1497 print <<EOF;
1498Key:
1499 Ir Instruction read
1500 Dr Data read
1501 Dw Data write
1502 COND conditional branches
1503 IND indirect branches
1504 _m branch predict miss
1505 _m1 level 1 cache miss
1506 _mm last cache (e.g. L3) miss
1507 - indeterminate percentage (e.g. 1/0)
1508
1509EOF
1510
1511 if ($OPTS{raw}) {
1512 print "The numbers represent raw counts per loop iteration.\n";
1513 }
1514 else {
1515 print <<EOF;
1516The numbers represent relative counts per loop iteration, compared to
df3d7b3a 1517$perls->[$OPTS{norm}][1] at 100.0%.
9e7973fa
DM
1518Higher is better: for example, using half as many instructions gives 200%,
1519while using twice as many gives 50%.
1520EOF
1521 }
df3d7b3a
DM
1522}
1523
1524
1525# return a sorted list of the test names, plus 'AVERAGE'
9e7973fa 1526
df3d7b3a
DM
1527sub sorted_test_names {
1528 my ($results, $order, $perls) = @_;
9e7973fa 1529
df3d7b3a 1530 my @names;
9e7973fa
DM
1531 unless ($OPTS{average}) {
1532 if (defined $OPTS{'sort-field'}) {
1533 my ($field, $perlix) = @OPTS{'sort-field', 'sort-perl'};
beb8db25 1534 my $perl = $perls->[$perlix][1];
df3d7b3a 1535 @names = sort
9e7973fa
DM
1536 {
1537 $results->{$a}{$perl}{$field}
1538 <=> $results->{$b}{$perl}{$field}
1539 }
1540 keys %$results;
1541 }
1542 else {
df3d7b3a 1543 @names = grep $results->{$_}, @$order;
9e7973fa
DM
1544 }
1545 }
1546
1547 # No point in displaying average for only one test.
df3d7b3a
DM
1548 push @names, 'AVERAGE' unless @names == 1;
1549 @names;
1550}
1551
1552
8f25a3c4
DM
1553# format one cell data item
1554
1555sub grind_format_cell {
1556 my ($val, $width) = @_;
31952d39 1557 my $s;
8f25a3c4 1558 if (!defined $val) {
31952d39 1559 return sprintf "%*s", $width, '-';
8f25a3c4 1560 }
8924d398
DM
1561 elsif (abs($val) >= 1_000_000) {
1562 # avoid displaying very large numbers (which might be the
1563 # result of e.g. 1 / 0.000001)
1564 return sprintf "%*s", $width, 'Inf';
1565 }
8f25a3c4 1566 elsif ($OPTS{raw}) {
31952d39 1567 return sprintf "%*.1f", $width, $val;
8f25a3c4
DM
1568 }
1569 else {
31952d39 1570 return sprintf "%*.2f", $width, $val * 100;
8f25a3c4
DM
1571 }
1572}
1573
df3d7b3a
DM
1574# grind_print(): display the tabulated results of all the cachegrinds.
1575#
1576# Arguments are of the form:
8b6302e0
DM
1577# $results->{benchmark_name}{perl_label}{field_name} = N
1578# $averages->{perl_label}{field_name} = M
df3d7b3a
DM
1579# $perls = [ [ perl-exe, perl-label ], ... ]
1580# $tests->{test_name}{desc => ..., ...}
31952d39 1581# $order = [ 'foo::bar1', ... ] # order to display tests
df3d7b3a
DM
1582
1583sub grind_print {
1584 my ($results, $averages, $perls, $tests, $order) = @_;
1585
1586 my @perl_names = map $_->[0], @$perls;
1a961f9f 1587 my @perl_labels = map $_->[1], @$perls;
df3d7b3a
DM
1588 my %perl_labels;
1589 $perl_labels{$_->[0]} = $_->[1] for @$perls;
1590
df3d7b3a
DM
1591 # Print standard header.
1592 grind_blurb($perls);
1593
1594 my @test_names = sorted_test_names($results, $order, $perls);
9e7973fa 1595
31952d39
DM
1596 my @fields = qw(Ir Dr Dw COND IND
1597 COND_m IND_m
1598 Ir_m1 Dr_m1 Dw_m1
1599 Ir_mm Dr_mm Dw_mm
1600 );
1601
1602 if ($OPTS{fields}) {
1603 @fields = grep exists $OPTS{fields}{$_}, @fields;
1604 }
1605
9e7973fa
DM
1606 # If only a single field is to be displayed, use a more compact
1607 # format with only a single line of output per test.
1608
31952d39 1609 my $one_field = @fields == 1;
9e7973fa 1610
31952d39
DM
1611 # The width of column 0: this is either field names, or for
1612 # $one_field, test names
9e7973fa 1613
31952d39
DM
1614 my $width0 = 0;
1615 for ($one_field ? @test_names : @fields) {
1616 $width0 = length if length > $width0;
1617 }
9e7973fa 1618
31952d39 1619 # Calculate the widths of the data columns
9e7973fa 1620
31952d39 1621 my @widths = map length, @perl_labels;
9e7973fa 1622
31952d39
DM
1623 for my $test (@test_names) {
1624 my $res = ($test eq 'AVERAGE') ? $averages : $results->{$test};
1625 for my $field (@fields) {
1626 for my $i (0..$#widths) {
1627 my $l = length grind_format_cell(
1628 $res->{$perl_labels[$i]}{$field}, 1);
1629 $widths[$i] = $l if $l > $widths[$i];
9e7973fa 1630 }
9e7973fa
DM
1631 }
1632 }
1633
31952d39 1634 # Print the results for each test
9e7973fa 1635
31952d39
DM
1636 for my $test (0..$#test_names) {
1637 my $test_name = $test_names[$test];
9e7973fa 1638 my $doing_ave = ($test_name eq 'AVERAGE');
31952d39
DM
1639 my $res = $doing_ave ? $averages : $results->{$test_name};
1640
1641 # print per-test header
9e7973fa 1642
31952d39
DM
1643 if ($one_field) {
1644 print "\nResults for field $fields[0]\n\n" if $test == 0;
1645 }
1646 else {
9e7973fa
DM
1647 print "\n$test_name";
1648 print "\n$tests->{$test_name}{desc}" unless $doing_ave;
1649 print "\n\n";
31952d39 1650 }
9e7973fa 1651
31952d39
DM
1652 # Print the perl executable names header.
1653
1654 if (!$one_field || $test == 0) {
9e7973fa 1655 for my $i (0,1) {
31952d39 1656 print " " x $width0;
9e7973fa
DM
1657 for (0..$#widths) {
1658 printf " %*s", $widths[$_],
31952d39 1659 $i ? ('-' x$widths[$_]) : $perl_labels[$_];
9e7973fa
DM
1660 }
1661 print "\n";
1662 }
1663 }
1664
31952d39
DM
1665 my $field_suffix = '';
1666
1667 # print a line of data
9e7973fa 1668
31952d39 1669 for my $field (@fields) {
91cde97c 1670 if ($one_field) {
31952d39 1671 printf "%-*s", $width0, $test_name;
91cde97c
DM
1672 }
1673 else {
31952d39
DM
1674 # If there are enough fields, print a blank line
1675 # between groups of fields that have the same suffix
1676 if (@fields > 4) {
1677 my $s = '';
1678 $s = $1 if $field =~ /(_\w+)$/;
1679 print "\n" if $s ne $field_suffix;
1680 $field_suffix = $s;
1681 }
1682 printf "%*s", $width0, $field;
91cde97c 1683 }
9e7973fa
DM
1684
1685 for my $i (0..$#widths) {
31952d39
DM
1686 print " ", grind_format_cell($res->{$perl_labels[$i]}{$field},
1687 $widths[$i]);
9e7973fa
DM
1688 }
1689 print "\n";
1690 }
1691 }
1692}
1693
1694
df3d7b3a
DM
1695
1696# grind_print_compact(): like grind_print(), but display a single perl
1697# in a compact form. Has an additional arg, $which_perl, which specifies
1698# which perl to display.
1699#
1700# Arguments are of the form:
8b6302e0
DM
1701# $results->{benchmark_name}{perl_label}{field_name} = N
1702# $averages->{perl_label}{field_name} = M
df3d7b3a
DM
1703# $perls = [ [ perl-exe, perl-label ], ... ]
1704# $tests->{test_name}{desc => ..., ...}
31952d39 1705# $order = [ 'foo::bar1', ... ] # order to display tests
df3d7b3a
DM
1706
1707sub grind_print_compact {
1708 my ($results, $averages, $which_perl, $perls, $tests, $order) = @_;
1709
df3d7b3a
DM
1710 # Print standard header.
1711 grind_blurb($perls);
1712
1713 print "\nResults for $perls->[$which_perl][1]\n\n";
1714
1715 my @test_names = sorted_test_names($results, $order, $perls);
1716
1717 # Dump the results for each test.
1718
1719 my @fields = qw( Ir Dr Dw
1720 COND IND
1721 COND_m IND_m
1722 Ir_m1 Dr_m1 Dw_m1
1723 Ir_mm Dr_mm Dw_mm
1724 );
1725 if ($OPTS{fields}) {
1726 @fields = grep exists $OPTS{fields}{$_}, @fields;
1727 }
1728
31952d39 1729    # calculate the max width of the test names
df3d7b3a 1730
d00aa1f4
DM
1731 my $name_width = 0;
1732 for (@test_names) {
1733 $name_width = length if length > $name_width;
1734 }
1735
31952d39
DM
1736 # Calculate the widths of the data columns
1737
1738 my @widths = map length, @fields;
1739
1740 for my $test (@test_names) {
1741 my $res = ($test eq 'AVERAGE') ? $averages : $results->{$test};
1742 $res = $res->{$perls->[$which_perl][1]};
1743 for my $i (0..$#fields) {
1744 my $l = length grind_format_cell($res->{$fields[$i]}, 1);
1745 $widths[$i] = $l if $l > $widths[$i];
1746 }
1747 }
1748
1749 # Print header
1750
1751 printf " %*s", $widths[$_], $fields[$_] for 0..$#fields;
1752 print "\n";
1753 printf " %*s", $_, ('-' x $_) for @widths;
1754 print "\n";
1755
1756 # Print the results for each test
1757
df3d7b3a
DM
1758 for my $test_name (@test_names) {
1759 my $doing_ave = ($test_name eq 'AVERAGE');
1760 my $res = $doing_ave ? $averages : $results->{$test_name};
beb8db25 1761 $res = $res->{$perls->[$which_perl][1]};
d00aa1f4
DM
1762 my $desc = $doing_ave
1763 ? $test_name
1764 : sprintf "%-*s %s", $name_width, $test_name,
1765 $tests->{$test_name}{desc};
df3d7b3a 1766
31952d39
DM
1767 for my $i (0..$#fields) {
1768 print " ", grind_format_cell($res->{$fields[$i]}, $widths[$i]);
1769 }
d00aa1f4 1770 print " $desc\n";
df3d7b3a
DM
1771 }
1772}
1773
1774
9e7973fa
DM
1775# do_selftest(): check that we can parse known cachegrind()
1776# output formats. If the output of cachegrind changes, add a *new*
1777# test here; keep the old tests to make sure we continue to parse
1778# old cachegrinds
1779
1780sub do_selftest {
1781
1782 my @tests = (
1783 'standard',
1784 <<'EOF',
1785==32350== Cachegrind, a cache and branch-prediction profiler
1786==32350== Copyright (C) 2002-2013, and GNU GPL'd, by Nicholas Nethercote et al.
1787==32350== Using Valgrind-3.9.0 and LibVEX; rerun with -h for copyright info
1788==32350== Command: perl5211o /tmp/uiS2gjdqe5 1
1789==32350==
1790--32350-- warning: L3 cache found, using its data for the LL simulation.
1791==32350==
1792==32350== I refs: 1,124,055
1793==32350== I1 misses: 5,573
1794==32350== LLi misses: 3,338
1795==32350== I1 miss rate: 0.49%
1796==32350== LLi miss rate: 0.29%
1797==32350==
1798==32350== D refs: 404,275 (259,191 rd + 145,084 wr)
1799==32350== D1 misses: 9,608 ( 6,098 rd + 3,510 wr)
1800==32350== LLd misses: 5,794 ( 2,781 rd + 3,013 wr)
1801==32350== D1 miss rate: 2.3% ( 2.3% + 2.4% )
1802==32350== LLd miss rate: 1.4% ( 1.0% + 2.0% )
1803==32350==
1804==32350== LL refs: 15,181 ( 11,671 rd + 3,510 wr)
1805==32350== LL misses: 9,132 ( 6,119 rd + 3,013 wr)
1806==32350== LL miss rate: 0.5% ( 0.4% + 2.0% )
1807==32350==
1808==32350== Branches: 202,372 (197,050 cond + 5,322 ind)
1809==32350== Mispredicts: 19,153 ( 17,742 cond + 1,411 ind)
1810==32350== Mispred rate: 9.4% ( 9.0% + 26.5% )
1811EOF
1812 {
1813 COND => 197050,
1814 COND_m => 17742,
1815 Dr => 259191,
1816 Dr_m1 => 6098,
1817 Dr_mm => 2781,
1818 Dw => 145084,
1819 Dw_m1 => 3510,
1820 Dw_mm => 3013,
1821 IND => 5322,
1822 IND_m => 1411,
1823 Ir => 1124055,
1824 Ir_m1 => 5573,
1825 Ir_mm => 3338,
1826 },
1827 );
1828
5051ccfe
DM
1829 for ('./t', '.') {
1830 my $t = "$_/test.pl";
1831 next unless -f $t;
1832 require $t;
9e7973fa
DM
1833 }
1834 plan(@tests / 3 * keys %VALID_FIELDS);
1835
1836 while (@tests) {
1837 my $desc = shift @tests;
1838 my $output = shift @tests;
1839 my $expected = shift @tests;
1840 my $p = parse_cachegrind($output);
1841 for (sort keys %VALID_FIELDS) {
1842 is($p->{$_}, $expected->{$_}, "$desc, $_");
1843 }
1844 }
1845}