-Mixing declarations and code
-
- void zorklator()
- {
- int n = 3;
- set_zorkmids(n); /* BAD */
- int q = 4;
-
That is C99 or C++. Some C compilers allow that, but you shouldn't rely on it.
-
The gcc option C<-Wdeclaration-after-statement> scans for such problems
(it has been part of Perl's default flags starting from Perl 5.9.4).
-
-=item *
-
-Introducing variables inside for()
-
- for(int i = ...; ...; ...) { /* BAD */
-
-That is C99 or C++. While it would indeed be awfully nice to have that
-also in C89, to limit the scope of the loop variable, alas, we cannot.
-
-=item *
-
-Mixing signed char pointers with unsigned char pointers
-
- int foo(char *s) { ... }
- ...
- unsigned char *t = ...; /* Or U8* t = ... */
- foo(t); /* BAD */
-
While this is legal practice, it is certainly dubious, and downright
fatal on at least one platform: for example, VMS cc considers this a
fatal error. One reason people often make this mistake is that a
"naked char", and therefore dereferencing a "naked char pointer", has
undefined signedness: whether the result is signed or unsigned depends
on the compiler, the compiler's flags, and the underlying platform.
For this very same reason, using a plain 'char' as an array index is
bad.
-
-=item *
-
-Macros that have string constants and their arguments as substrings of
-the string constants
-
- #define FOO(n) printf("number = %d\n", n) /* BAD */
- FOO(10);
-
Pre-ANSI semantics for that were equivalent to
-
- printf("10umber = %d\10");
-
which is probably not what you were expecting. Unfortunately at least
one reasonably common and modern C compiler does "real backward
compatibility" here: in AIX that is what still happens, even though
the rest of the AIX compiler is quite happily C89.
-
-=item *
-
-Using printf formats for non-basic C types
-
- IV i = ...;
- printf("i = %d\n", i); /* BAD */
-
While this might by accident work on some platforms (where IV happens
to be an C<int>), in general it cannot. IV might be something larger.
The situation is even worse with more specific types (defined by
Perl's configuration step in F<config.h>):
-
- Uid_t who = ...;
- printf("who = %d\n", who); /* BAD */
-
The problem here is that Uid_t might not only be wider than C<int>,
it might also be unsigned, in which case large uids would be printed
as negative values.
-
There is no simple solution to this because of printf()'s limited
intelligence, but for many types the right format is available as a
macro with either an 'f' or '_f' suffix, for example:
-
 IVdf /* IV in decimal */
 UVxf /* UV in hexadecimal */
-
- printf("i = %"IVdf"\n", i); /* The IVdf is a string constant. */
-
- Uid_t_f /* Uid_t in decimal */
-
- printf("who = %"Uid_t_f"\n", who);
-
-Or you can try casting to a "wide enough" type:
-
- printf("i = %"IVdf"\n", (IV)something_very_small_and_signed);
-
-Also remember that the C<%p> format really does require a void pointer:
-
- U8* p = ...;
- printf("p = %p\n", (void*)p);
-
-The gcc option C<-Wformat> scans for such problems.
-
-=item *
-
-Blindly using variadic macros
-
gcc has had them for a while with its own syntax, and C99 brought
them with a standardized syntax. Don't use the former, and use the
latter only if HAS_C99_VARIADIC_MACROS is defined.
-
-=item *
-
-Blindly passing va_list
-
Not all platforms support passing va_list to further varargs (stdarg)
functions. The right thing to do is to copy the va_list using
Perl_va_copy() if NEED_VA_COPY is defined.
-
-=item *
-
-Using gcc statement expressions
-
- val = ({...;...;...}); /* BAD */
-
-While a nice extension, it's not portable. The Perl code does
-admittedly use them if available to gain some extra speed
-(essentially as a funky form of inlining), but you shouldn't.
-
-=item *
-
-Binding together several statements in a macro
-
-Use the macros STMT_START and STMT_END.
-
- STMT_START {
- ...
- } STMT_END
-
-=item *
-
Testing for operating systems or versions when you should be testing for features
-
- #ifdef __FOONIX__ /* BAD */
- foo = quux();
- #endif
-
Unless you know with 100% certainty that quux() is only ever available
for the "Foonix" operating system B<and> that it is available B<and>
correctly working for B<all> past, present, B<and> future versions of
"Foonix", the above is very wrong. This is more correct (though still
not perfect, because the below is a compile-time check):
-
- #ifdef HAS_QUUX
- foo = quux();
- #endif
-
How does HAS_QUUX become defined where it needs to be? Well, if
Foonix happens to be Unixy enough to be able to run the Configure
script, and Configure has been taught about detecting and testing
quux(), HAS_QUUX will be correctly defined. On other platforms, the
corresponding configuration step will hopefully do the same.
-
-In a pinch, if you cannot wait for Configure to be educated,
-or if you have a good hunch of where quux() might be available,
-you can temporarily try the following:
-
- #if (defined(__FOONIX__) || defined(__BARNIX__))
- # define HAS_QUUX
- #endif
-
- ...
-
- #ifdef HAS_QUUX
- foo = quux();
- #endif
-
-But in any case, try to keep the features and operating systems separate.
-
-=back
-
-=head2 Problematic System Interfaces
-
-=over 4
-
-=item *
-
-malloc(0), realloc(0), calloc(0, 0) are non-portable. To be portable
-allocate at least one byte. (In general you should rarely need to
-work at this low level, but instead use the various malloc wrappers.)
-
-=item *
-
snprintf() - the return value is unportable. Use my_snprintf() instead.
-
-=back
-
-=head2 Security problems
-
-Last but not least, here are various tips for safer coding.
-
-=over 4
-
-=item *
-
-Do not use gets()
-
-Or we will publicly ridicule you. Seriously.
-
-=item *
-
-Do not use strcpy() or strcat() or strncpy() or strncat()
-
-Use my_strlcpy() and my_strlcat() instead: they either use the native
-implementation, or Perl's own implementation (borrowed from the public
-domain implementation of INN).
-
-=item *
-
-Do not use sprintf() or vsprintf()
-
-If you really want just plain byte strings, use my_snprintf()
-and my_vsnprintf() instead, which will try to use snprintf() and
-vsnprintf() if those safer APIs are available. If you want something
-fancier than a plain byte string, use SVs and Perl_sv_catpvf().
-
-=back
-
-=head1 EXTERNAL TOOLS FOR DEBUGGING PERL
-
-Sometimes it helps to use external tools while debugging and
-testing Perl. This section tries to guide you through using
-some common testing and debugging tools with Perl. This is
-meant as a guide to interfacing these tools with Perl, not
-as any kind of guide to the use of the tools themselves.
-
B<NOTE 1>: Running under memory debuggers such as Purify, valgrind, or
Third Degree greatly slows down execution: seconds become minutes,
minutes become hours. For example, as of Perl 5.8.1, the
ext/Encode/t/Unicode.t test takes extraordinarily long to complete
under e.g. Purify, Third Degree, and valgrind. Under valgrind it takes
more than six hours, even on a snappy computer -- the said test must
be doing something quite unfriendly to memory debuggers. If you don't
feel like waiting, you can simply kill the perl process.
-
B<NOTE 2>: To minimize the number of memory leak false alarms (see
L</PERL_DESTRUCT_LEVEL> for more information), you have to set the
environment variable PERL_DESTRUCT_LEVEL to 2. The F<TEST> and
harness scripts do that automatically. But if you are running some of
the tests manually -- for csh-like shells:
-
- setenv PERL_DESTRUCT_LEVEL 2
-
-and for Bourne-type shells:
-
- PERL_DESTRUCT_LEVEL=2
- export PERL_DESTRUCT_LEVEL
-
-or in Unixy environments you can also use the C<env> command:
-
- env PERL_DESTRUCT_LEVEL=2 valgrind ./perl -Ilib ...
-
-B<NOTE 3>: There are known memory leaks when there are compile-time
-errors within eval or require, seeing C<S_doeval> in the call stack
-is a good sign of these. Fixing these leaks is non-trivial,
-unfortunately, but they must be fixed eventually.
-
-B<NOTE 4>: L<DynaLoader> will not clean up after itself completely
-unless Perl is built with the Configure option
-C<-Accflags=-DDL_UNLOAD_ALL_AT_EXIT>.
-
-=head2 Rational Software's Purify
-
-Purify is a commercial tool that is helpful in identifying
-memory overruns, wild pointers, memory leaks and other such
-badness. Perl must be compiled in a specific way for
-optimal testing with Purify. Purify is available under
-Windows NT, Solaris, HP-UX, SGI, and Siemens Unix.
-
-=head2 Purify on Unix
-
On Unix, Purify creates a new Perl binary. To get the most benefit
out of Purify, you should create the perl binary to be Purified
using:
-
- sh Configure -Accflags=-DPURIFY -Doptimize='-g' \
- -Uusemymalloc -Dusemultiplicity
-
-where these arguments mean:
-
-=over 4
-
-=item -Accflags=-DPURIFY
-
Disables Perl's arena memory allocation functions and forces the use
of memory allocation functions derived from the system malloc.
-
-=item -Doptimize='-g'
-
-Adds debugging information so that you see the exact source
-statements where the problem occurs. Without this flag, all
-you will see is the source filename of where the error occurred.
-
-=item -Uusemymalloc
-
-Disable Perl's malloc so that Purify can more closely monitor
-allocations and leaks. Using Perl's malloc will make Purify
-report most leaks in the "potential" leaks category.
-
-=item -Dusemultiplicity
-
-Enabling the multiplicity option allows perl to clean up
-thoroughly when the interpreter shuts down, which reduces the
-number of bogus leak reports from Purify.
-
-=back
-
-Once you've compiled a perl suitable for Purify'ing, then you
-can just:
-
- make pureperl
-
-which creates a binary named 'pureperl' that has been Purify'ed.
-This binary is used in place of the standard 'perl' binary
-when you want to debug Perl memory problems.
-
-As an example, to show any memory leaks produced during the
-standard Perl testset you would create and run the Purify'ed
-perl as:
-
- make pureperl
- cd t
- ../pureperl -I../lib harness
-
which would run the test harness under the Purify'ed perl and report any memory problems.
-
Purify outputs messages in "Viewer" windows by default. If you don't
have a windowing environment, or if you simply want the Purify output
to go unobtrusively to a log file instead of to the interactive
window, use the following options to output to the log file
"perl.log":
-
- setenv PURIFYOPTIONS "-chain-length=25 -windows=no \
- -log-file=perl.log -append-logfile=yes"
-
-If you plan to use the "Viewer" windows, then you only need this option:
-
- setenv PURIFYOPTIONS "-chain-length=25"
-
-In Bourne-type shells:
-
- PURIFYOPTIONS="..."
- export PURIFYOPTIONS
-
-or if you have the "env" utility:
-
- env PURIFYOPTIONS="..." ../pureperl ...
-
-=head2 Purify on NT
-
-Purify on Windows NT instruments the Perl binary 'perl.exe'
-on the fly. There are several options in the makefile you
-should change to get the most use out of Purify:
-
-=over 4
-
-=item DEFINES
-
-You should add -DPURIFY to the DEFINES line so the DEFINES
-line looks something like:
-
- DEFINES = -DWIN32 -D_CONSOLE -DNO_STRICT $(CRYPT_FLAG) -DPURIFY=1
-
-to disable Perl's arena memory allocation functions, as
-well as to force use of memory allocation functions derived
-from the system malloc.
-
-=item USE_MULTI = define
-
-Enabling the multiplicity option allows perl to clean up
-thoroughly when the interpreter shuts down, which reduces the
-number of bogus leak reports from Purify.
-
-=item #PERL_MALLOC = define
-
-Disable Perl's malloc so that Purify can more closely monitor
-allocations and leaks. Using Perl's malloc will make Purify
-report most leaks in the "potential" leaks category.
-
-=item CFG = Debug
-
-Adds debugging information so that you see the exact source
-statements where the problem occurs. Without this flag, all
-you will see is the source filename of where the error occurred.
-
-=back
-
-As an example, to show any memory leaks produced during the
-standard Perl testset you would create and run Purify as:
-
- cd win32
- make
- cd ../t
- purify ../perl -I../lib harness
-
which would instrument Perl in memory, run the test harness, and
finally report any memory problems.
-
-=head2 valgrind
-
-The excellent valgrind tool can be used to find out both memory leaks
-and illegal memory accesses. As of version 3.3.0, Valgrind only
-supports Linux on x86, x86-64 and PowerPC. The special "test.valgrind"
-target can be used to run the tests under valgrind. Found errors
-and memory leaks are logged in files named F<testfile.valgrind>.
-
-Valgrind also provides a cachegrind tool, invoked on perl as:
-
- VG_OPTS=--tool=cachegrind make test.valgrind
-
As system libraries (most notably glibc) also trigger errors,
valgrind allows such errors to be suppressed using suppression files.
The default suppression file that comes with valgrind already catches
a lot of them. Some additional suppressions are defined in
F<t/perl.supp>.
-
-To get valgrind and for more information see
-
- http://developer.kde.org/~sewardj/
-
-=head2 Compaq's/Digital's/HP's Third Degree
-
-Third Degree is a tool for memory leak detection and memory access checks.
-It is one of the many tools in the ATOM toolkit. The toolkit is only
-available on Tru64 (formerly known as Digital UNIX formerly known as
-DEC OSF/1).
-
When building Perl, you must first run Configure with the
-Doptimize=-g and -Uusemymalloc flags; after that you can use the
make targets "perl.third" and "test.third". (What is required is that
Perl must be compiled using the C<-g> flag; you may need to
re-Configure.)
-
-The short story is that with "atom" you can instrument the Perl
-executable to create a new executable called F<perl.third>. When the
-instrumented executable is run, it creates a log of dubious memory
-traffic in file called F<perl.3log>. See the manual pages of atom and
-third for more information. The most extensive Third Degree
-documentation is available in the Compaq "Tru64 UNIX Programmer's
-Guide", chapter "Debugging Programs with Third Degree".
-
Running "test.third" leaves a lot of files named F<foo_bar.3log> in
the t/ subdirectory. There is a problem with these files: Third Degree
is so effective that it finds problems also in the system libraries.
Therefore you should use the Porting/thirdclean script to clean up the
F<*.3log> files.

There are also leaks that, given a certain definition of a leak,
aren't really leaks. See L</PERL_DESTRUCT_LEVEL> for more information.
-
-=head2 PERL_DESTRUCT_LEVEL
-
If you want to run any of the tests yourself manually using e.g.
valgrind, or the pureperl or perl.third executables, please note that
by default perl B<does not> explicitly clean up all the memory it has
allocated (such as global memory arenas) but instead lets the exit()
of the whole program "take care" of such allocations, also known as
"global destruction of objects".
-
There is a way to tell perl to do complete cleanup: set the
environment variable PERL_DESTRUCT_LEVEL to a non-zero value. The
t/TEST wrapper does set this to 2, and this is what you need to do
too if you don't want to see the "global leaks". For example, for
"third-degreed" Perl:
-
- env PERL_DESTRUCT_LEVEL=2 ./perl.third -Ilib t/foo/bar.t
-
(Note: the mod_perl apache module also uses this environment variable
for its own purposes and extends its semantics. Refer to the mod_perl
documentation for more information. Also, spawned threads do the
equivalent of setting this variable to the value 1.)
-
-If, at the end of a run you get the message I<N scalars leaked>, you can
-recompile with C<-DDEBUG_LEAKING_SCALARS>, which will cause the addresses
-of all those leaked SVs to be dumped along with details as to where each
-SV was originally allocated. This information is also displayed by
Devel::Peek. Note that the extra details recorded with each SV
increase memory usage, so it shouldn't be used in production
environments. It also
-converts C<new_SV()> from a macro into a real function, so you can use
-your favourite debugger to discover where those pesky SVs were allocated.
-
-If you see that you're leaking memory at runtime, but neither valgrind
-nor C<-DDEBUG_LEAKING_SCALARS> will find anything, you're probably
-leaking SVs that are still reachable and will be properly cleaned up
-during destruction of the interpreter. In such cases, using the C<-Dm>
-switch can point you to the source of the leak. If the executable was
-built with C<-DDEBUG_LEAKING_SCALARS>, C<-Dm> will output SV allocations
-in addition to memory allocations. Each SV allocation has a distinct
-serial number that will be written on creation and destruction of the SV.
-So if you're executing the leaking code in a loop, you need to look for
-SVs that are created, but never destroyed between each cycle. If such an
-SV is found, set a conditional breakpoint within C<new_SV()> and make it
-break only when C<PL_sv_serial> is equal to the serial number of the
-leaking SV. Then you will catch the interpreter in exactly the state
-where the leaking SV is allocated, which is sufficient in many cases to
-find the source of the leak.
-
As C<-Dm> is using the PerlIO layer for output, it will by itself
allocate quite a few SVs, which are hidden to avoid recursion.
-You can bypass the PerlIO layer if you use the SV logging provided
-by C<-DPERL_MEM_LOG> instead.
-
-=head2 PERL_MEM_LOG
-
-If compiled with C<-DPERL_MEM_LOG>, both memory and SV allocations go
-through logging functions, which is handy for breakpoint setting.
-
-Unless C<-DPERL_MEM_LOG_NOIMPL> is also compiled, the logging
-functions read $ENV{PERL_MEM_LOG} to determine whether to log the
-event, and if so how:
-
- $ENV{PERL_MEM_LOG} =~ /m/ Log all memory ops
- $ENV{PERL_MEM_LOG} =~ /s/ Log all SV ops
- $ENV{PERL_MEM_LOG} =~ /t/ include timestamp in Log
- $ENV{PERL_MEM_LOG} =~ /^(\d+)/ write to FD given (default is 2)
-
-Memory logging is somewhat similar to C<-Dm> but is independent of
-C<-DDEBUGGING>, and at a higher level; all uses of Newx(), Renew(),
-and Safefree() are logged with the caller's source code file and line
-number (and C function name, if supported by the C compiler). In
-contrast, C<-Dm> is directly at the point of C<malloc()>. SV logging
-is similar.
-
-Since the logging doesn't use PerlIO, all SV allocations are logged
-and no extra SV allocations are introduced by enabling the logging.
-If compiled with C<-DDEBUG_LEAKING_SCALARS>, the serial number for
-each SV allocation is also logged.
-
-=head2 Profiling
-
Depending on your platform, there are various ways of profiling Perl.
-
-There are two commonly used techniques of profiling executables:
-I<statistical time-sampling> and I<basic-block counting>.
-
The first method periodically samples the CPU program counter, and
since the program counter can be correlated with the code generated
for functions, we get a statistical view of which functions the
program is spending its time in. The caveats are that very small or
fast functions have a lower probability of showing up in the profile,
and that periodically interrupting the program (this is usually done
rather frequently, on the scale of milliseconds) imposes an additional
overhead that may skew the results. The first problem can be
alleviated by running the code for longer (in general this is a good
idea for profiling); the second problem is usually accounted for by
the profiling tools themselves.
-
-The second method divides up the generated code into I<basic blocks>.
Basic blocks are sections of code that are entered only at the
-beginning and exited only at the end. For example, a conditional jump
-starts a basic block. Basic block profiling usually works by
-I<instrumenting> the code by adding I<enter basic block #nnnn>
-book-keeping code to the generated code. During the execution of the
-code the basic block counters are then updated appropriately. The
-caveat is that the added extra code can skew the results: again, the
-profiling tools usually try to factor their own effects out of the
-results.
-
-=head2 Gprof Profiling
-
gprof is a profiling tool available on many Unix platforms; it uses
I<statistical time-sampling>.
-
You can build a profiled version of perl called "perl.gprof" by
invoking the make target "perl.gprof". (What is required is that Perl
must be compiled using the C<-pg> flag; you may need to re-Configure.)
Running the profiled version of Perl will create an output file called
F<gmon.out>, which contains the profiling data collected during the
execution.
-
-The gprof tool can then display the collected data in various ways.
-Usually gprof understands the following options:
-
-=over 4
-
-=item -a
-
-Suppress statically defined functions from the profile.
-
-=item -b
-
-Suppress the verbose descriptions in the profile.
-
-=item -e routine
-
-Exclude the given routine and its descendants from the profile.
-
-=item -f routine
-
-Display only the given routine and its descendants in the profile.
-
-=item -s
-
-Generate a summary file called F<gmon.sum> which then may be given
-to subsequent gprof runs to accumulate data over several runs.
-
-=item -z
-
-Display routines that have zero usage.
-
-=back
-
-For more detailed explanation of the available commands and output
-formats, see your own local documentation of gprof.
-
-quick hint:
-
- $ sh Configure -des -Dusedevel -Doptimize='-pg' && make perl.gprof
- $ ./perl.gprof someprog # creates gmon.out in current directory
- $ gprof ./perl.gprof > out
- $ view out
-
-=head2 GCC gcov Profiling
-
Starting from GCC 3.0, I<basic block profiling> is officially
available for GNU CC.
-
You can build a profiled version of perl called F<perl.gcov> by
invoking the make target "perl.gcov". (What is required is that Perl
must be compiled using gcc with the flags C<-fprofile-arcs
-ftest-coverage>; you may need to re-Configure.)
-
-Running the profiled version of Perl will cause profile output to be
-generated. For each source file an accompanying ".da" file will be
-created.
-
-To display the results you use the "gcov" utility (which should
-be installed if you have gcc 3.0 or newer installed). F<gcov> is
-run on source code files, like this
-
- gcov sv.c
-
-which will cause F<sv.c.gcov> to be created. The F<.gcov> files
-contain the source code annotated with relative frequencies of
-execution indicated by "#" markers.
-
-Useful options of F<gcov> include C<-b> which will summarise the
-basic block, branch, and function call coverage, and C<-c> which
-instead of relative frequencies will use the actual counts. For
-more information on the use of F<gcov> and basic block profiling
-with gcc, see the latest GNU CC manual, as of GCC 3.0 see
-
- http://gcc.gnu.org/onlinedocs/gcc-3.0/gcc.html
-
-and its section titled "8. gcov: a Test Coverage Program"
-
- http://gcc.gnu.org/onlinedocs/gcc-3.0/gcc_8.html#SEC132
-
-quick hint:
-
- $ sh Configure -des -Doptimize='-g' -Accflags='-fprofile-arcs -ftest-coverage' \
- -Aldflags='-fprofile-arcs -ftest-coverage' && make perl.gcov
- $ rm -f regexec.c.gcov regexec.gcda
- $ ./perl.gcov
- $ gcov regexec.c
- $ view regexec.c.gcov
-
-=head2 Pixie Profiling
-
-Pixie is a profiling tool available on IRIX and Tru64 (aka Digital
-UNIX aka DEC OSF/1) platforms. Pixie does its profiling using
-I<basic-block counting>.
-
-You can build a profiled version of perl called F<perl.pixie> by
-invoking the make target "perl.pixie" (what is required is that Perl
-must be compiled using the C<-g> flag, you may need to re-Configure).
-
In Tru64 a file called F<perl.Addrs> will also be silently created;
this file contains the addresses of the basic blocks. Running the
profiled version of Perl will create a new file called "perl.Counts",
which contains the counts of the basic blocks for that particular
program execution.
-
To display the results you use the F<prof> utility. The exact
incantation depends on your operating system: "prof perl.Counts" in
IRIX, and "prof -pixie -all -L. perl" in Tru64.
-
-In IRIX the following prof options are available:
-
-=over 4
-
-=item -h
-
-Reports the most heavily used lines in descending order of use.
-Useful for finding the hotspot lines.
-
-=item -l
-
-Groups lines by procedure, with procedures sorted in descending order of use.
-Within a procedure, lines are listed in source order.
-Useful for finding the hotspots of procedures.
-
-=back
-
-In Tru64 the following options are available:
-
-=over 4
-
-=item -p[rocedures]
-
-Procedures sorted in descending order by the number of cycles executed
-in each procedure. Useful for finding the hotspot procedures.
-(This is the default option.)
-
-=item -h[eavy]
-
-Lines sorted in descending order by the number of cycles executed in
-each line. Useful for finding the hotspot lines.
-
-=item -i[nvocations]
-
-The called procedures are sorted in descending order by number of calls
-made to the procedures. Useful for finding the most used procedures.
-
-=item -l[ines]
-
-Grouped by procedure, sorted by cycles executed per procedure.
-Useful for finding the hotspots of procedures.
-
-=item -testcoverage
-
-The compiler emitted code for these lines, but the code was unexecuted.
-
-=item -z[ero]
-
-Unexecuted procedures.
-
-=back
-
-For further information, see your system's manual pages for pixie and prof.
-
-=head2 Miscellaneous tricks
-
-=over 4
-
-=item *
-
-Those debugging perl with the DDD frontend over gdb may find the
-following useful:
-
You can extend the data conversion shortcuts menu, so for example you
can display an SV's IV value with one click, without doing any typing.
To do that, simply edit the ~/.ddd/init file and add after:
-
- ! Display shortcuts.
- Ddd*gdbDisplayShortcuts: \
- /t () // Convert to Bin\n\
- /d () // Convert to Dec\n\
- /x () // Convert to Hex\n\
 /o () // Convert to Oct\n\
-
-the following two lines:
-
- ((XPV*) (())->sv_any )->xpv_pv // 2pvx\n\
- ((XPVIV*) (())->sv_any )->xiv_iv // 2ivx
-
so now you can do ivx and pvx lookups, or you can plug in the
sv_peek "conversion":
-
- Perl_sv_peek(my_perl, (SV*)()) // sv_peek
-
(The my_perl is for threaded builds.)
Just remember that every line but the last one should end with \n\
-
-Alternatively edit the init file interactively via:
-3rd mouse button -> New Display -> Edit Menu
-
-Note: you can define up to 20 conversion shortcuts in the gdb
-section.
-
-=item *
-
-If you see in a debugger a memory area mysteriously full of 0xABABABAB
-or 0xEFEFEFEF, you may be seeing the effect of the Poison() macros,
-see L<perlclib>.
-
-=item *
-
-Under ithreads the optree is read only. If you want to enforce this, to check
-for write accesses from buggy code, compile with C<-DPL_OP_SLAB_ALLOC> to
-enable the OP slab allocator and C<-DPERL_DEBUG_READONLY_OPS> to enable code
-that allocates op memory via C<mmap>, and sets it read-only at run time.
-Any write access to an op results in a C<SIGBUS> and abort.
-
-This code is intended for development only, and may not be portable even to
-all Unix variants. Also, it is an 80% solution, in that it isn't able to make
-all ops read only. Specifically it
-
-=over
-
-=item 1
-
Only sets read-only on all slabs of ops at C<CHECK> time; hence ops
allocated later via C<require> or C<eval> will be read-write.
-
-=item 2
-
-Turns an entire slab of ops read-write if the refcount of any op in the slab
-needs to be decreased.
-
-=item 3
-
-Turns an entire slab of ops read-write if any op from the slab is freed.
-
-=back
-
It's not possible to turn the slabs back to read-only after an action
requiring read-write access, as either can happen during op tree
building time, so there may still be legitimate write access.
-
-However, as an 80% solution it is still effective, as currently it catches
-a write access during the generation of F<Config.pm>, which means that we
-can't yet build F<perl> with this enabled.
-
-=back
-
-
-=head1 CONCLUSION
-
-We've had a brief look around the Perl source, how to maintain quality
-of the source code, an overview of the stages F<perl> goes through
-when it's running your code, how to use debuggers to poke at the Perl
-guts, and finally how to analyse the execution of Perl. We took a very
-simple problem and demonstrated how to solve it fully - with
-documentation, regression tests, and finally a patch for submission to
-p5p. Finally, we talked about how to use external tools to debug and
-test Perl.
-
-I'd now suggest you read over those references again, and then, as soon
-as possible, get your hands dirty. The best way to learn is by doing,
-so:
-
-=over 3
-
-=item *
-
-Subscribe to perl5-porters, follow the patches and try and understand
-them; don't be afraid to ask if there's a portion you're not clear on -
-who knows, you may unearth a bug in the patch...
-
-=item *
-
-Keep up to date with the bleeding edge Perl distributions and get
-familiar with the changes. Try and get an idea of what areas people are
-working on and the changes they're making.
-
-=item *
-
-Do read the README associated with your operating system, e.g. README.aix
-on the IBM AIX OS. Don't hesitate to supply patches to that README if
-you find anything missing or changed over a new OS release.
-
-=item *
-
-Find an area of Perl that seems interesting to you, and see if you can
-work out how it works. Scan through the source, and step over it in the
-debugger. Play, poke, investigate, fiddle! You'll probably get to
-understand not just your chosen area but a much wider range of F<perl>'s
-activity as well, and probably sooner than you'd think.
-
-=back
-
-=over 3
-
-=item I<The Road goes ever on and on, down from the door where it began.>
-
-=back
-
-If you can do these things, you've started on the long road to Perl porting.
-Thanks for wanting to help make Perl better - and happy hacking!
-
-=head2 Metaphoric Quotations
-
-If you recognized the quote about the Road above, you're in luck.
-
-Most software projects begin each file with a literal description of each
-file's purpose. Perl instead begins each with a literary allusion to that
-file's purpose.
-
-Like chapters in many books, all top-level Perl source files (along with a
few others here and there) begin with an epigrammatic inscription that alludes,
-indirectly and metaphorically, to the material you're about to read.
-
Quotations are taken from writings of J.R.R. Tolkien pertaining to his
-Legendarium, almost always from I<The Lord of the Rings>. Chapters and
-page numbers are given using the following editions:
-
-=over 4
-
-=item *
-
-I<The Hobbit>, by J.R.R. Tolkien. The hardcover, 70th-anniversary
-edition of 2007 was used, published in the UK by Harper Collins Publishers
-and in the US by the Houghton Mifflin Company.
-
-=item *
-
-I<The Lord of the Rings>, by J.R.R. Tolkien. The hardcover,
-50th-anniversary edition of 2004 was used, published in the UK by Harper
-Collins Publishers and in the US by the Houghton Mifflin Company.
-
-=item *
-
-I<The Lays of Beleriand>, by J.R.R. Tolkien and published posthumously by his
-son and literary executor, C.J.R. Tolkien, being the 3rd of the 12 volumes
-in Christopher's mammoth I<History of Middle Earth>. Page numbers derive
-from the hardcover edition, first published in 1983 by George Allen &
-Unwin; no page numbers changed for the special 3-volume omnibus edition of
-2002 or the various trade-paper editions, all again now by Harper Collins
-or Houghton Mifflin.
-
-=back
-
-Other JRRT books fair game for quotes would thus include I<The Adventures of
-Tom Bombadil>, I<The Silmarillion>, I<Unfinished Tales>, and I<The Tale of
-the Children of Hurin>, all but the first posthumously assembled by CJRT.
-But I<The Lord of the Rings> itself is perfectly fine and probably best to
-quote from, provided you can find a suitable quote there.
-
-So if you were to supply a new, complete, top-level source file to add to
-Perl, you should conform to this peculiar practice by yourself selecting an
-appropriate quotation from Tolkien, retaining the original spelling and
-punctuation and using the same format the rest of the quotes are in.
-Indirect and oblique is just fine; remember, it's a metaphor, so being meta
-is, after all, what it's for.