Although I have not tested G1 in production, I thought I would comment that GC pauses are already a problem for cases well short of "humongous" heaps. Specifically, services with just, say, 2 or 4 GB can be severely impacted by GC. Young-generation collections are usually not a problem, as they finish in single-digit milliseconds (or at most double-digit). But old-generation collections are much worse: they can take multiple seconds once the old generation reaches 1 GB or more.
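For what it's worth, here is a rough sketch of how to watch this from inside the process with the standard GarbageCollectorMXBean API (not from any particular system; the collector names in the comments and the 10-second polling interval are just illustrative). It prints, per collector, how many collections happened in the last interval and the approximate accumulated collection time, which makes the young-gen vs. old-gen difference very visible:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.util.HashMap;
    import java.util.Map;

    public class GcWatcher {
        public static void main(String[] args) throws InterruptedException {
            Map<String, Long> lastCount = new HashMap<>();
            Map<String, Long> lastTime = new HashMap<>();
            while (true) {
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    long count = gc.getCollectionCount(); // cumulative number of collections
                    long time = gc.getCollectionTime();   // approximate cumulative collection time, ms
                    long dCount = count - lastCount.getOrDefault(gc.getName(), 0L);
                    long dTime = time - lastTime.getOrDefault(gc.getName(), 0L);
                    if (dCount > 0) {
                        // Old-gen collectors (e.g. "ConcurrentMarkSweep", "PS MarkSweep") are the
                        // ones whose per-collection times blow up; young-gen collectors
                        // (e.g. "ParNew", "PS Scavenge") usually stay in the millisecond range.
                        System.out.printf("%s: %d collection(s), ~%d ms each%n",
                                gc.getName(), dCount, dTime / dCount);
                    }
                    lastCount.put(gc.getName(), count);
                    lastTime.put(gc.getName(), time);
                }
                Thread.sleep(10_000); // poll every 10 seconds
            }
        }
    }

Note that the reported time is accumulated collection time rather than strictly stop-the-world pause time (for CMS it includes the concurrent phases), but the order-of-magnitude gap between young and old collections still shows up clearly.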
Now: in theory CMS helps a lot there, as it runs most of its work concurrently. However, over time there will be cases where it cannot keep up and has to fall back to a full "stop the world" collection (a concurrent mode failure). And when that happens (after, say, 1 hour -- not often, but still too often), well, hold on to your f***ing hats: it can take a minute or more. This is especially painful for services that try to bound maximum latency; instead of taking, say, 25 milliseconds to serve a request, it now takes ten seconds or more. To add insult to injury, clients will then often time out the request and retry, leading to further problems (aka "shit storm").
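For anyone running CMS: the fallback shows up in the GC log as a "concurrent mode failure", so it is worth logging GC activity and, if needed, making CMS start its concurrent cycle earlier so it is less likely to lose the race. Something along these lines on a HotSpot JVM of that era (the 70% threshold is just an illustrative value, not a recommendation):

    -XX:+UseConcMarkSweepGC
    -XX:CMSInitiatingOccupancyFraction=70
    -XX:+UseCMSInitiatingOccupancyOnly
    -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log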
This is one area where G1 was hoped to help a lot. I worked for a big company that offers cloud services for storage and message dispatching, and we could not use CMS: although most of the time it behaved better than the parallel collectors, it had these periodic meltdowns. So for about an hour things were nice, and then stuff hit the fan... and because the service was clustered, when one node got in trouble, others typically followed (GC-induced timeouts led other nodes to believe the node had crashed, triggering re-routes).
I don't think GC is that much of a problem for apps, and perhaps even non-clustered services are affected less often. But more and more systems are clustered (especially thanks to NoSQL data stores) and heap sizes keep growing. Old-gen GC time grows super-linearly with heap size (meaning that doubling heap size more than doubles GC time, assuming the size of the live data set also doubles).
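Just to put (made-up) numbers on "super-linear": if a full collection took about one second with 1 GB of live data and the cost grew like size^1.3, you would get something like the figures this little snippet prints. The 1.3 exponent is invented purely for illustration; the real shape depends on the collector and on what the live object graph looks like:

    public class PauseScaling {
        public static void main(String[] args) {
            double baselineSeconds = 1.0; // hypothetical full GC time at 1 GB of live data
            double exponent = 1.3;        // hypothetical super-linear exponent (> 1)
            for (double gb : new double[] {1, 2, 4, 8}) {
                double pause = baselineSeconds * Math.pow(gb, exponent);
                System.out.printf("%.0f GB live -> ~%.1f s full GC (illustrative only)%n", gb, pause);
            }
        }
    }

So in this toy model, going from 4 GB to 8 GB more than doubles the pause, which is the kind of behavior described above.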