Process pooling with a Rinda Tuplespace
http://writequit.org/blog/2009/08/25/process-pooling-with-a-rinda-tuplespace/
Tue, 25 Aug 2009 22:28:49 +0000

I'm going to talk about how I'm doing the process pooling in Forkify in this post, so let's get started.

Feel free to look at the code or fork the project from the Forkify project on GitHub.

The case for pool forking

So, why even do pooling instead of serial forking? Let’s look at an example:

[1, 2, 6, 1, 2, 3].forkify(3) { |n| sleep(n); }

What happens in the background when this is run? Forkify will fork 3 processes at a time, each one sleeping for its element's value, so for the first pass 3 processes will be created, sleeping for 1 second, 2 seconds and 6 seconds (respectively). Forkify will then do a Process.waitpid(<pid>) on all 3 of the processes.

It will end up waiting a tiny bit over 6 seconds (because the longest process will take 6 seconds to finish).

Next, it will create 3 more processes, which will sleep for 1, 2 and 3 seconds respectively.

This will take a bit over 3 seconds.

So, the total time will end up being a little over 9 (6 + 3) seconds (the extra is overhead), as seen here:

% time ruby -I lib -rforkify -e '[1, 2, 6, 1, 2, 3].forkify(3) { |n| sleep(n); }'
0.08s user 0.10s system 1% cpu 9.131 total
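
Under the hood, the serial strategy amounts to something like this rough sketch (fork a slice, wait for the whole slice; not the actual forkify source, and result collection is left out):

items = [1, 2, 6, 1, 2, 3]
items.each_slice(3) do |slice|
  # fork one child per item in the slice...
  pids = slice.map { |n| fork { sleep(n) } }
  # ...then block until every child in the slice has exited; one slow
  # item holds up the whole slice even if the others finished long ago
  pids.each { |pid| Process.waitpid(pid) }
end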

How it works with a pool

Let’s take the same example and see how it works for pool forking:

[1, 2, 6, 1, 2, 3].forkify(:procs => 3, :method => :pool) { |n| sleep(n); }

First, 3 processes will be created to perform the work; they will be given 1, 2 and 6 seconds to sleep. The parent thread will immediately wait for them to finish. The first process will sleep for 1 second, check for work, grab another '1' from the Tuplespace and start doing the work immediately. This will continue with the second process (which will finish after 2 seconds and grab more "work to be done" from the Tuplespace).

This ends up taking less time because you don't block waiting for one process that takes a long time while the others finished a while ago:

% time ruby -I lib -rforkify -e '[1, 2, 6, 1, 2, 3].forkify(:procs => 3, :method => :pool) { |n| sleep(n); }'
0.12s user 0.11s system 2% cpu 8.380 total

Not a lot of difference, huh? A lot of the difference has to do with the overhead of creating a Tuplespace and sharing it between processes. Here's a better example (pool first, then serial):

% time ruby -I lib -rforkify -e '[1, 1, 6, 1, 5, 1].forkify(:procs => 3, :method => :pool) { |n| sleep(n); }'
0.12s user 0.10s system 3% cpu 7.157 total

% time ruby -I lib -rforkify -e '[1, 1, 6, 1, 5, 1].forkify(3) { |n| sleep(n); }'
0.08s user 0.10s system 1% cpu 11.112 total

Imagine if the array had 100 items in it, but only a few took a really long time to process. As that disparity increases, pool forking makes more and more sense.

How it works in the code

The Parent’s job:

(This is extremely stripped down to illustrate what’s happening)
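
Something along these lines (a minimal sketch rather than the exact forkify_pool code: the DRb port, the [:work, index, item] / [:result, index, value] tuple layout, and the run_child helper sketched in the child section below are all illustrative assumptions):

require 'rinda/tuplespace'
require 'drb/drb'

DRB_URI = 'druby://localhost:53421'  # hardcoded port, an illustrative assumption
items   = [1, 2, 6, 1, 2, 3]
procs   = 3
work    = proc { |n| sleep(n); n }   # stands in for the user's block

ts = Rinda::TupleSpace.new
# write every work item *before* forking, so no child can start
# pulling while the queue is only half-written
items.each_with_index { |item, i| ts.write([:work, i, item]) }
# share the TupleSpace with the children over DRb
DRb.start_service(DRB_URI, ts)

# start the pool: only `procs` children, no matter how many items there are
pids = procs.times.map { fork { run_child(DRB_URI, work) } }
pids.each { |pid| Process.waitpid(pid) }

# pull the results back out in their original order (nil is a wildcard)
results = items.each_index.map { |i| ts.take([:result, i, nil])[2] }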

The key here is to make sure that all the work items are in the TupleSpace before forking any children, so they don't pull while we're in the middle of writing and mess up the order; this is why I write all the tuples before starting the DRb service with the TupleSpace object.

After starting the children, I only need to wait on 3 pids; then I retrieve the results from the same TupleSpace in the correct order (regardless of the order they were added in).

The Child’s job:

(Again stripped down to essentials)
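
Roughly (again a sketch, not the exact code: run_child matches the parent sketch above, and the zero-second take timeout is just one way for a worker to notice the queue is empty):

require 'rinda/rinda'  # Rinda::TupleSpaceProxy, Rinda::RequestExpiredError
require 'drb/drb'

def run_child(uri, work)
  DRb.start_service  # the child needs its own DRb endpoint for take callbacks
  ts = Rinda::TupleSpaceProxy.new(DRbObject.new_with_uri(uri))
  loop do
    # grab the next work tuple; a 0-second timeout raises
    # Rinda::RequestExpiredError as soon as the queue is empty
    _tag, index, item = ts.take([:work, nil, nil], 0)
    # do the work and push the result straight back into the TupleSpace
    ts.write([:result, index, work.call(item)])
  end
rescue Rinda::RequestExpiredError
  # no work left, this worker's job is done
end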

The interesting parts of this are the TupleSpaceProxy and being able to write objects to a TupleSpace without any kind of mutex. Rinda's TupleSpaces take care of performing operations in a thread- and process-safe way, so I don't have to worry about synchronizing the writing/taking of items in the TupleSpace.

The child pulls items out of the TupleSpace, calls the block on them and writes the result tuple back into the same TupleSpace; since it's shared over DRb, the parent will be able to see all of the result tuples when it grabs the results.

Are there problems with this code? Yes. Hardcoding a port is a bad idea, and I've run into some issues with deadlock if 2 pooled forkify calls are run immediately after each other (even a 'puts "foo"' in between them fixes the problem, which makes me crazy when debugging). I also imagine that while forkify itself works with processes, it is (ironically) not actually thread- or process-safe (because of the hardcoded DRb port, amongst other things).

So, does anyone have any ideas on how I can do a producer/consumer pool-forking library? Please let me know :) Sadly, there is a criminally small amount of Ruby Rinda documentation available online right now (most of it covers RingServers instead of TupleSpaces).

Check out the full forkify_pool method and let me know if you have any suggestions. Pool forking requires Ruby 1.9 right now because some of the Rinda stuff doesn’t work in 1.8.

Forkify 0.0.1 Ruby gem released, threadify with processes
http://writequit.org/blog/2009/06/24/forkify-0-0-1-ruby-gem-released-threadify/
Wed, 24 Jun 2009 19:02:39 +0000

I'm happy to announce the release of another tiny Ruby gem, Forkify.

Forkify is basically Ara Howard's Threadify (minus a few features), but instead of using threads, it uses fork to create multiple processes to perform the operations on an Enumerable type. It was born out of the desire to have the forkoff gem work on Ruby 1.9.1 (which it currently doesn't).
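
If you're wondering how results make it back from a forked child to the parent at all, the usual trick is a pipe per child plus Marshal. Here's a sketch of that pattern (fork_map is a hypothetical helper to illustrate the idea; forkify's actual internals may differ):

# fork_map is a hypothetical helper, not forkify's API
def fork_map(items)
  readers = items.map do |item|
    reader, writer = IO.pipe
    fork do
      reader.close
      writer.write(Marshal.dump(yield(item)))  # child sends its result up the pipe
      writer.close
      exit!(0)
    end
    writer.close
    reader
  end
  results = readers.map do |r|
    value = Marshal.load(r.read)  # blocks until that child closes its pipe
    r.close
    value
  end
  Process.waitall  # reap the children
  results
end

p fork_map([1, 2, 3]) { |n| n * 2 }  # => [2, 4, 6]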

You can get Forkify using RubyGems:

sudo gem install forkify

Using it is super simple, too:

require 'forkify'
require 'pp' # optional, just for this example
r = [1, 2, 3].forkify { |n| n*2 }
pp r

=> [2, 4, 6]

Another example demonstrating the time-saving feature of using forks:

[3:hinmanm@Xanadu:~]% time ruby -r forkify -e "5.times.forkify { |n| sleep(1) }"
ruby -r forkify -e "5.times.forkify { |n| sleep(1) }"  0.02s user 0.03s system 5% cpu 1.030 total

You can specify the maximum number of processes to spawn with an argument to forkify, the default is 5:

[1, 2, 3, 4, 5, 6].forkify(3) { |n| n**2 } # This will spawn 3 processes at a time to process the 6 items

In the timing example above, you can see that sleeping for 1 second, 5 times, only takes slightly longer than 1 second total (instead of over 5 seconds if you hadn't forkified it).

Forkify is available as a gem via RubyGems, or you can check out the code here: http://github.com/dakrone/forkify/. It's still beta-quality, so please let me know if you run into any issues with it.

P.S. 100th post, woo!
