I don't have a web page to refer you to, but I might have a "back of the envelope" explanation that will help. Simple random number generators work by following these steps:
- Use the last number generated, n, or a seed number
- Multiply that number by a special large number
- Add another special large number
- Divide that by a third special large number and throw away the remainder
- Return the result
Now think about what happens in every step but step 4: you are doing operations where only the lower bits of the inputs can alter the lower bits of the result. Adding 1001 and 100...00001 will end in ...02 (ha, you thought I was talking base 2; really these numbers are base 12, for giggles) regardless of what is on the high end of the calculation. Similarly, when you multiply those two numbers, the result will end in a 1, no matter what the high digits are.
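The same point, checked in binary rather than base 12: two numbers that agree in their bottom 8 bits produce sums and products that also agree in their bottom 8 bits, no matter how different their high bits are.

```python
# The low bits of a sum or product depend only on the low bits of the
# inputs; the high bits cannot reach down into them.
a = 0b1001_11101001  # two arbitrary numbers that share
b = 0b0110_11101001  # the same bottom 8 bits (11101001)

low = lambda x: x & 0xFF  # keep only the bottom 8 bits

for c in (12345, 99999):
    assert low(a + c) == low(b + c)  # same low bits in the sum
    assert low(a * c) == low(b * c)  # same low bits in the product
print("low bits agree regardless of the high bits")
```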
There is a similar problem at the top end: a billion times a billion will invariably dominate the contribution from the hundreds place of either number. This points to the fact that the middle is where the good stuff happens: lots of bits interact there--high, middle, and low.
That is the purpose of the division step: it cuts off the bottom chunk of the result, where there was not as much interaction. The top chunk usually does not need to be chopped off explicitly, because the computer already drops the upper bits once the multiplications no longer fit into a machine word.
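Just how bad those bottom bits are is easy to demonstrate. With a power-of-two modulus and odd multiplier and increment (the same illustrative constants as before), the very lowest bit of the raw state simply alternates:

```python
# The lowest bit of the raw LCG state carries no randomness at all:
# odd * even + odd is odd, odd * odd + odd is even, so the parity
# flips on every single call.
state = 42
def raw_state():
    global state
    state = (state * 1103515245 + 12345) % 2**31
    return state

bits = [raw_state() & 1 for _ in range(10)]
print(bits)  # → [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```

This is exactly the chunk the division step throws away.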
In the end, though, the cut-off points are somewhat arbitrary, and you can be more picky than the people who designed the algorithm and chop off a few more bits.
As for your question of how bad they can be: they can be really bad. The easiest way to see this is to group the individual numbers into tuples and graph them. So if you had random numbers a, b, c, d, ...
graph (a,b), (c,d), ...
and look at the results. This is called a spectral test, and Rand fails it beautifully. This one I do have a link for: try http://random.mat.sbg.ac.at/results/karl/spectraltest/
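The classic example of a generator failing this kind of tuple test is RANDU (x_{k+1} = 65539 · x_k mod 2^31). Graphed as triples, all of its output falls on just 15 planes, because 65539² mod 2^31 equals 6·65539 − 9, which forces an exact linear relation between any three consecutive outputs. That relation can be checked directly:

```python
# RANDU, the infamous generator: x_{k+1} = 65539 * x_k mod 2^31.
# Every triple of consecutive outputs satisfies
#   9*x_k - 6*x_{k+1} + x_{k+2} == 0 (mod 2^31),
# which is why plotted triples collapse onto a handful of planes.
def randu(seed, count):
    x = seed
    out = []
    for _ in range(count):
        x = (65539 * x) % 2**31
        out.append(x)
    return out

xs = randu(1, 1000)
for x0, x1, x2 in zip(xs, xs[1:], xs[2:]):
    assert (9 * x0 - 6 * x1 + x2) % 2**31 == 0
print("every RANDU triple satisfies the 15-plane relation")
```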