As mentioned before, there were even software-rendered demos achieving realtime raytracing, like Heaven7 or Nature Suxx. I remember running them at decent speeds on a Pentium MMX at 200 MHz at the time, in 320*200 resolution. They ran so fast by subdividing the screen into tiles and raytracing only the pixels at the corners of each tile; only if the corner values differed too much would they subdivide and do more calculations in 4*4 tiles, then 2*2 tiles, etc. Otherwise, they simply interpolated the values in between.
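The corner-sampling trick above can be sketched roughly like this. This is a hedged illustration, not the actual Heaven7 code: `trace` is a hypothetical stand-in for a full per-pixel raytrace (here just a cheap checkerboard), and the threshold logic is simplified.

```python
def trace(x, y):
    # Placeholder for an expensive per-pixel raytrace; a real demo would
    # cast a ray here. A cheap checkerboard stands in for illustration.
    return (x // 8 + y // 8) % 2

def shade_tile(img, x0, y0, size, threshold=0):
    """Fill the tile at (x0, y0) of side `size`, tracing only its corners."""
    c00 = trace(x0, y0)
    c10 = trace(x0 + size - 1, y0)
    c01 = trace(x0, y0 + size - 1)
    c11 = trace(x0 + size - 1, y0 + size - 1)
    corners = (c00, c10, c01, c11)
    if size <= 2 or max(corners) - min(corners) <= threshold:
        # Corners agree (or the tile is tiny): bilinearly interpolate
        # between the four traced corners instead of tracing every pixel.
        for y in range(size):
            for x in range(size):
                fx, fy = x / (size - 1), y / (size - 1)
                top = c00 * (1 - fx) + c10 * fx
                bot = c01 * (1 - fx) + c11 * fx
                img[y0 + y][x0 + x] = top * (1 - fy) + bot * fy
    else:
        # Corners disagree: split into four sub-tiles and recurse.
        h = size // 2
        for dy in (0, h):
            for dx in (0, h):
                shade_tile(img, x0 + dx, y0 + dy, h)

W = H = 16
img = [[0.0] * W for _ in range(H)]
for ty in range(0, H, 8):
    for tx in range(0, W, 8):
        shade_tile(img, tx, ty, 8)
```

The payoff is that flat regions of the image cost four rays per tile instead of one ray per pixel, while edges still get fully subdivided.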
Nowadays it's much easier to optimize raytracing on the GPU with shader code and do per-pixel raytracing of many spheres in HD resolution at good frame rates. Sult just uses another technique called raymarching, but essentially it's another way to raytrace (depending on how you define the terms), with its own advantages and disadvantages. For example, it's very well suited to small-size intros, and to specific twisted objects and repetitions of objects in space that are not as easy or fast to achieve with traditional raytracing.
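To make the raymarching idea concrete, here is a minimal sketch of sphere tracing with domain repetition: a single signed distance function (SDF) for one sphere, repeated across the x/y plane by folding the ray position with a modulo. All names and constants here are my own illustration, not taken from Sult.

```python
import math

def sphere_sdf(px, py, pz, radius=0.3):
    # Signed distance from point (px, py, pz) to a sphere at the origin.
    return math.sqrt(px * px + py * py + pz * pz) - radius

def repeated_sdf(px, py, pz, cell=2.0):
    # Fold x and y into one cell centered at the origin: a whole plane of
    # spheres for the cost of a single distance evaluation.
    fx = (px + cell / 2) % cell - cell / 2
    fy = (py + cell / 2) % cell - cell / 2
    return sphere_sdf(fx, fy, pz)

def raymarch(ox, oy, oz, dx, dy, dz, max_steps=128, eps=1e-4, far=100.0):
    """March along the ray; the SDF tells us how far we can safely step."""
    t = 0.0
    for _ in range(max_steps):
        d = repeated_sdf(ox + dx * t, oy + dy * t, oz + dz * t)
        if d < eps:
            return t      # close enough to a surface: hit
        t += d            # safe step: no surface is closer than d
        if t > far:
            break
    return None           # ray escaped: miss

# A ray marching along +z from z = -1 hits the sphere at the origin
# (radius 0.3) at roughly t = 0.7.
hit = raymarch(0.0, 0.0, -1.0, 0.0, 0.0, 1.0)
```

This is why raymarching suits 4k intros so well: the modulo fold gives infinite repetition almost for free, and twisting or distorting the objects is just another transform applied to the point before evaluating the SDF.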
But based on the phrasing of your original question, I guess you might also be wondering: why do these people do it, and why haven't we seen it in the mainstream? We have all heard from game industry developers that raytracing is something from the future, not achievable yet in gaming. So how come some hobbyists from the demoscene make it possible, while industry veterans say the time is not yet? Technically, we can already raytrace 50 spheres, or raymarch an infinite repetition of the same sphere and make it twist and distort. But games use polygons, hundreds of thousands or millions of them. Checking a single ray against millions of polygons is an entirely different story. I know, there are methods like kd-tree subdivision of space to test a ray against only a few polygons locally, but it's still a very hard problem even with powerful GPUs. And maybe there is not much to gain, besides getting accurate shadows and reflections for free (which in polygon engines you have to achieve through tedious workarounds), and a lot to lose. Meanwhile, the demoscene intros mostly raytrace scenes of abstract geometric shapes, implicit functions, or voxel data, which are all far removed from real-life 3D scenes and video game characters. And most of those scenes are small; they won't easily scale to an open sandbox game.
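To show the scale gap in code: raytracing 50 spheres, as claimed above, is just a brute-force loop of cheap intersection tests per ray. This is a hedged sketch with a made-up random scene; a polygon game scene would replace these 50 tests with millions of ray-triangle tests, which is exactly why kd-trees and similar structures become mandatory there.

```python
import math
import random

def ray_sphere_t(ox, oy, oz, dx, dy, dz, cx, cy, cz, r):
    """Nearest positive hit distance t along a unit-direction ray, or None."""
    lx, ly, lz = cx - ox, cy - oy, cz - oz
    tca = lx * dx + ly * dy + lz * dz          # projection of center onto ray
    d2 = lx * lx + ly * ly + lz * lz - tca * tca
    if d2 > r * r:
        return None                             # ray passes outside the sphere
    thc = math.sqrt(r * r - d2)
    t = tca - thc                               # near intersection point
    return t if t > 0 else None

# 50 spheres scattered in front of the camera (arbitrary illustrative layout).
random.seed(1)
spheres = [(random.uniform(-50, 50), random.uniform(-5, 5),
            random.uniform(10, 60), 1.0) for _ in range(50)]

def brute_force(ray):
    # O(N) per ray: fine for 50 spheres, hopeless for millions of polygons.
    best = None
    for s in spheres:
        t = ray_sphere_t(*ray, *s)
        if t is not None and (best is None or t < best):
            best = t
    return best

# Aim a ray from the origin straight at the first sphere's center.
cx, cy, cz, r0 = spheres[0]
n = math.sqrt(cx * cx + cy * cy + cz * cz)
t_hit = brute_force((0.0, 0.0, 0.0, cx / n, cy / n, cz / n))
```

At HD resolution that loop runs two million times per frame for 50 spheres, which a GPU handles easily; multiply the inner loop by a few million triangles and you see why the acceleration structures, and the "not ready yet" claims, exist.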
So, while these intros really do achieve realtime raytracing, and we have done it even on the CPU since around 2000, it's not really practical for game development, where polygon engines are still more efficient and useful in the real world. That's why you'll hear professionals claim the hardware is not ready for raytracing (in their million-polygon scenes), yet see some hobbyist doing it on the GPU, even in 4k (four-kilobyte) intros.