NOTE: The following is not a bug (although one can use it as an invitation to think about ways to reduce resource consumption in yakpro-po), so feel free to close it whenever you like. You might, however, want to include some of its information in your documentation, or README, to prepare users who obfuscate large projects.
Problem
Trying to obfuscate ~5000 PHP files of ~1000 lines each, yakpro-po stopped after processing ~1600 files with a simple (and frustrating)
Segmentation fault
No other messages were printed, except two lines in syslog:
However, rerunning yakpro-po would pick up at the file where it had previously stopped, as if nothing had happened, process another 1500-1600 files, and then stop at the next segmentation fault. A third run would continue from there up to the end. The files produced this way were unusable, however: the information that yakpro-po normally saves in its own directories (the translation tables and the like) was lost in the segfaults, so each run obfuscated as if it were the first one, only with a different "start file" each time. This indicated that the problem was "insufficient memory" rather than anything else.
But the value of memory_limit in the php.ini file of PHP CLI (which is different from the one for PHP on the web server!) was high enough:
memory_limit = 4096M
and PHP did not complain about it, as it had earlier with much lower settings:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 421888 bytes) in /usr/local/bin/yakpro-po/include/functions.php on line 391
Debugging
I was thus confronted (for the first time) with the question:
How is one supposed to debug segmentation faults on the PHP CLI?
I found the article at Debugging Segfaults in PHP helpful: for the PHP CLI, start php from gdb with
gdb php
and, inside the gdb shell, run your script with your options, e.g.
run /usr/local/bin/yakpro-po original-dir -o destination-dir
When the segfault happens, gdb gives you the opportunity to type commands. Type:
bt
for 'backtrace'.
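Taken together, the whole session looks roughly like this (a sketch; the run arguments are the ones from this report, and the output is abridged):

```
$ gdb php
(gdb) run /usr/local/bin/yakpro-po original-dir -o destination-dir
Program received signal SIGSEGV, Segmentation fault.
(gdb) bt
#0  0x0000555555afb110 in gc_mark_grey ()
...
```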
Although I had not compiled PHP with debug support, this was enough to point me in the right direction about the reason for the segfault.
Reason
I had already put various echo statements in place, in yakpro-po.php and (mainly) in include/functions.php. From these, it was clear that the problem occurred inside the call to $traverser->traverse in the latter:
$stmts = $traverser->traverse($stmts);
The backtrace command in gdb showed more than 100,000 lines like these:
#61947 0x0000555555afb110 in gc_mark_grey ()
#61948 0x0000555555afb110 in gc_mark_grey ()
#61949 0x0000555555afb110 in gc_mark_grey ()
...
and, at the end:
#104532 0x0000555555afb110 in gc_mark_grey ()
#104533 0x0000555555afb110 in gc_mark_grey ()
#104534 0x0000555555afb110 in gc_mark_grey ()
#104535 0x0000555555afb110 in gc_mark_grey ()
#104536 0x0000555555afbe0a in zend_gc_collect_cycles ()
#104537 0x00007ffff7f40f57 in xdebug_gc_collect_cycles () from /usr/lib64/php7.2/lib/extensions/no-debug-zts-20170718/xdebug.so
#104538 0x0000555555afb93f in gc_possible_root ()
#104539 0x0000555555b17a74 in ZEND_DO_FCALL_SPEC_RETVAL_UNUSED_HANDLER ()
#104540 0x0000555555b7c5fe in execute_ex ()
#104541 0x00007ffff7f1c1ed in xdebug_execute_ex () from /usr/lib64/php7.2/lib/extensions/no-debug-zts-20170718/xdebug.so
...
gc stands for 'garbage collector', so there was obviously a memory problem there. Looking at Segfault in garbage collector brought the breakthrough - namely, the solution. :-)
Solution
This is a stack overflow in the garbage collector - the backtrace above shows gc_mark_grey recursing tens of thousands of times. The solution is to increase the stack size limit. To see your current limit (in KiB), type
ulimit -s
I had 8192 - for a task of this size obviously totally undersized... Change this to something more appropriate, say
ulimit -s 102400
and retry - the segmentation fault is gone! :-)
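Note that ulimit only affects the current shell and its child processes, so the raised limit has to be set in the same shell that then runs yakpro-po. A minimal session sketch (102400 KiB is the value that worked here; the commented invocation uses the paths from this report):

```shell
#!/bin/sh
# Print the current soft stack limit in KiB (the default here was 8192).
ulimit -s

# Raise it for this shell and its children only; other sessions keep the
# default. This works as long as the hard limit (`ulimit -Hs`) allows it.
ulimit -s 102400

# Verify the new limit, then rerun the obfuscator from this same shell:
ulimit -s
# php /usr/local/bin/yakpro-po original-dir -o destination-dir
```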
thanx for your reporting... it could help people....
can you make a little try?
just insert gc_collect_cycles(); at line 307 of include/functions.php ...
just before the `continue;` statement...
and tell me if the problem is gone or not (with the default ulimit value)
I tried it, but it did not work: with the max. stack size at 8192 (the default value), I got a segmentation fault at exactly the same place as before. It's as if the gc_collect_cycles() call had no effect at all...