:wq – blog · http://writequit.org/blog · “Tu fui, ego eris”

PHFOS update – more than 20 threads usable
http://writequit.org/blog/2007/10/29/phfos-update-more-than-20-threads-useable/
Mon, 29 Oct 2007

[UPDATE 10/30/07]: In the post below, use the link to the text file to get the latest version; I can’t edit the actual text on the page every time I update the script. The most up-to-date script can be found here.

Just a small update: you *can* actually use more than ~20 threads when using the phfos script. The key is to make sure you are using the --readstop option, so each thread doesn’t sit there for rand(maxTime-minTime+1) + minTime seconds keeping its file open. I was able to run it successfully with 1500 threads on my MacBook Pro, and again with 1500 threads on my Ubuntu desktop machine.
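For the curious, the hold time each thread would otherwise sleep for is computed like this (same formula as in the script below; the min/max values here are just the script’s defaults):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my ($minTime, $maxTime) = (5, 10);   # the script's defaults
# Without --readstop, every worker thread sleeps this long with its file open:
my $holdTime = int(rand($maxTime - $minTime + 1)) + $minTime;   # 5..10 inclusive
print "would hold the file open for $holdTime seconds\n";
```

With --readstop the thread closes the file as soon as it has finished reading it, so thread slots free up almost immediately instead.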

Hopefully this will make the script infinitely more useful, as a lot more connections can be simulated now.

PHFOS – Perl Hold File Open Script for stressing disk access like a customer environment
http://writequit.org/blog/2007/10/25/phfos-perl-hold-file-open-script-for-stressing-disk-access-like-a-customer-environment/
Thu, 25 Oct 2007

[UPDATE 10/29/07]: You can now use phfos with lots more threads; read this post.

It’s quick, it’s dirty, but here it is: “PHFOS” (note to self: get better at naming scripts). Here’s what it does.

Basically, you specify a directory with some kind of files in it, and the script spawns <n> threads that each keep a random file open for a random amount of time (to simulate customers accessing files in a random manner). There are options to change the maximum number of threads, the minimum and maximum time to keep a file open, whether to read the contents of the file, whether to short-circuit and close immediately after reading the file, etc. Take a look at the “-h” option to see what they all do.

*** WARNING ***
Don’t run this on a production machine; depending on the command-line options it can (and will, if you aren’t careful) take down an entire machine in a matter of seconds. I accidentally ran this with the number of threads set to 1550 and killed my MacBook Pro immediately. Experiment with low numbers until you find the sweet spot; if you run into bus errors or kernel protection errors, try decreasing them. Here are some good starter numbers:

If you have small (<10MB) files, try this:
./phfos.pl -r -v -d <dir> -n 20 --min=2 --max=3 --readstop

If you have larger files, you might try this:
./phfos.pl -r -v -d <dir> -n 15 --min=10 --max=30

Of course, you can take out the -r if you don’t want the files actually read, just opened. Read on past the script if you want to learn a bit about how it works and the problems you might run into.

Download the script here if you can’t copy it from below (and because WordPress mangles indentation).


#!/usr/bin/perl

use warnings;
use strict;

# PHFOS - Perl Hold File Open Script
use Getopt::Long;
use threads;
use threads::shared;
use POSIX;

sub print_usage {
    print "Flags:\n";
    print " -d <directory>\t\tDirectory to read files from (REQUIRED)\n";
    print " -n <number>\t\tMaximum number of threads to spawn (default 10)\n";
    print " -r\t\t\tRead the contents of the files in addition to opening\n";
    print " --min=<seconds>\tMinimum number of seconds to keep a file open (default 5)\n";
    print " --max=<seconds>\tMaximum number of seconds to keep a file open (default 10)\n";
    print " --readstop\t\tIf set, immediately exit thread after reading contents of the file\n";
    print " -v\t\t\tVerbose mode, tell me what files are open and for how long\n";
    print " -h\t\t\tDisplay this usage\n";
    exit(0);
}

my %options = ();
my $verbose = 0;

GetOptions("v|verbose" => \$verbose,           # verbose mode
           "d:s"       => \$options{dir},      # directory to read files from
           "n:i"       => \$options{numOpen},  # number of threads to open at the same time
           "min:i"     => \$options{minTime},  # minimum time to keep a file open
           "max:i"     => \$options{maxTime},  # maximum time to keep a file open
           "readstop"  => \$options{readStop}, # close as soon as the file has been read
           "r"         => \$options{readFile}, # read the contents of each file
           "h"         => \$options{help}      # display usage
);

if ($options{help})          { print_usage(); } # if help is set, display usage and exit
if (!defined($options{dir})) { print_usage(); }

my $dir        = $options{dir};
my $maxThreads = $options{numOpen} || 10;
my $minTime    = $options{minTime} || 5;
my $maxTime    = $options{maxTime} || 10;
my @filelist   = get_dir_list($dir);
my $index      = 0;
our $readfile : shared = $options{readFile};
our $readstop : shared = $options{readStop};

while (1) {
    my $file = $filelist[ rand @filelist ];
    my $time = int(rand($maxTime - $minTime + 1)) + $minTime;
    if (!$readstop) {
        print "Randomly selected file: $file will be opened for $time seconds\n" if $verbose;
    }
    my $filename = $dir . "/" . $file;

    my $newthread = threads->new(\&hold_file_open, $filename, $time);
    $newthread->detach;
    $index++;

    if ($index >= $maxThreads) {
        # give the OS a chance to recover
        sleep(int(($minTime + $maxTime) / 2));
        $index = int($index / 2);
    }
}

sub get_dir_list {
    my $dirname = shift;
    opendir(DIR, $dirname) || die "can't opendir $dirname: $!";
    my @files = grep { /[^\.]/ && -f "$dirname/$_" } readdir(DIR);
    closedir DIR;
    return @files;
}

sub hold_file_open {
    my $filename   = shift; # which file to hold open
    my $openlength = shift; # the length in seconds to keep it open
    my $data;
    my $size      = 0;
    my $bytesread = 0;
    my $FIN = POSIX::open($filename, &POSIX::O_RDONLY);
    if (!defined($FIN)) { die "Unable to open $filename\n"; }
    if ($readfile) {
        my $starttime = time();
        while (($bytesread = POSIX::read($FIN, $data, 65536)) > 0) {
            $size = $size + $bytesread;
        }
        my $endtime  = time();
        my $readtime = $endtime - $starttime;
        print "thread[" . threads->self->tid . "] read $size bytes from $filename in $readtime seconds\n" if $verbose;
        if ($readstop) {
            sleep(1); # give the OS a bit of time to play catchup
            POSIX::close($FIN);
            return 0;
        }
        my $newsleeptime = $openlength - $readtime;
        if ($newsleeptime < 1) {
            # reading took at least as long as the requested hold time,
            # so just close the file and let the thread finish
            POSIX::close($FIN);
            return 0;
        }
        $openlength = $newsleeptime;
    }
    sleep($openlength);
    POSIX::close($FIN);
    return 0;
}

Okay, so basically, how it works: the script traverses the directory, looking for files and adding them to an array, then it repeatedly spawns a thread to open/read a randomly-chosen file (up to “n” threads). Once it reaches the maximum number of threads, it waits ((minTime+maxTime)/2) seconds before halving the thread counter. In other words, it waits the average hold time (assuming true randomness), because by then, statistically, half of the threads *should* have finished (I can’t keep track of the real count because of the thread->detach). So at times you will have slightly more than “n” threads running, and sometimes slightly fewer.
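Stripped of the file handling, that throttling loop boils down to the following pattern (a sketch only; spawn_one() is a hypothetical stand-in for the threads->new(...)->detach call in the real script):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my ($maxThreads, $minTime, $maxTime) = (20, 5, 10);   # example values
my $count = 0;

while (1) {
    spawn_one();   # hypothetical: detach a worker we can no longer track
    $count++;
    if ($count >= $maxThreads) {
        # wait the average hold time; statistically ~half the workers
        # should have finished by then, so halve the counter to match
        sleep(int(($minTime + $maxTime) / 2));
        $count = int($count / 2);
    }
}
```

The effect is that the number of live threads oscillates roughly between n/2 and n rather than growing without bound.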

Let’s also talk about a problem I’ve run into that I can’t seem to figure out. Here’s the crash dump from Perl:

Exception: EXC_BAD_ACCESS (0x0001)
Codes: KERN_PROTECTION_FAILURE (0x0002) at 0x00000038

Thread 0 Crashed:
0 libSystem.B.dylib 0x90025c82 flockfile + 18
1 libSystem.B.dylib 0x900017c5 fileno + 37
2 libperl.dylib 0x97035b03 PerlIOStdio_dup + 159
3 libperl.dylib 0x970374da PerlIO_fdupopen + 156
4 libperl.dylib 0x96fd6bce Perl_fp_dup + 102
5 libperl.dylib 0x97037789 PerlIO_clone + 442
6 libperl.dylib 0x96fdb246 perl_clone + 1979
7 threads.bundle 0x0000e8bb Perl_ithread_create + 557
8 threads.bundle 0x0000ef01 XS_threads_new + 351
9 libperl.dylib 0x96fc11ad Perl_pp_entersub + 897
10 libperl.dylib 0x96fb8277 Perl_runops_standard + 19
11 libperl.dylib 0x96f4b5d8 perl_run + 724
12 perl 0x000020d2 0x1000 + 4306
13 perl 0x00001f92 0x1000 + 3986
14 perl 0x00001eb9 0x1000 + 3769

For some reason, after a random amount of time, the dispatch thread dies because it attempts to access bad memory. I’ve run ktrace and kdump to see if I could figure it out, and it *looks* like it might be a file descriptor problem; however, I can’t figure out why. You shouldn’t run into the problem unless you run the program with a high thread count and a small min/max time (which you’re welcome to do, if you are masochistic).

Anyone out there that’s better at perl than I am, do you know what could be causing this problem? Send me an email or leave a comment!
