My question is perhaps poorly worded; it stems from my amateurish understanding of memory management.
My concern is this: I have a Perl script that forks many times. As I understand from the fork page in perldoc, the children share the parent's memory copy-on-write. Each child then calls system(), which forks again, to run an external program. The child reads the external program's output back in and dumps it to a Storable file, which the parent reaps and processes once all the children have exited.
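Here is a minimal sketch of the pattern I mean (the external program, its arguments, and the file names are all placeholders; I use echo to stand in for the real program):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(store retrieve);

my @jobs = (1 .. 4);    # placeholder work items
my @pids;

for my $job (@jobs) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        # Child: shares the parent's pages copy-on-write until it writes.
        # system() forks again internally; echo stands in for the real
        # external program here.
        my $outfile = "result_$job.txt";
        system("echo data for job $job > $outfile") == 0
            or die "external program failed: $?";

        # Read the external program's output back into this child ...
        open my $fh, '<', $outfile or die "open $outfile: $!";
        my @data = <$fh>;
        close $fh;

        # ... and dump it as a Storable file for the parent to reap.
        store(\@data, "child_$job.storable");
        exit 0;
    }
    push @pids, $pid;
}

# Parent: wait for every child, then process their Storable dumps.
waitpid($_, 0) for @pids;
my @results = map { retrieve("child_$_.storable") } @jobs;
```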
What worries me is how fragile this arrangement seems. The worst case I can imagine is this: for each child, as soon as new data arrives, the entire copy-on-write memory gets copied. If that is the case, I will quickly run into memory problems after creating only a few forks.
Or does copy-on-write copy only the smallest chunk of memory containing the data being written? If so, what is this quantum of memory, and how is its size set?
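If the relevant quantum turns out to be the virtual-memory page, here is how I have been checking its size on my system; note that I am assuming page granularity here, which is exactly the thing I am unsure about:

```perl
use POSIX qw(sysconf _SC_PAGESIZE);

# Ask the OS for its VM page size; typically 4096 bytes on x86-64 Linux.
my $page_size = sysconf(_SC_PAGESIZE);
print "page size: $page_size bytes\n";
```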
I am unsure whether the specifics of what I am asking are language-dependent or depend on some lower-level mechanism.