Simulation and synthetic video generation for evaluation of computer vision algorithms

I am looking for an easy way to generate synthetic videos to test computer vision software.

Currently I am only aware of one tool that targets this need: ObjectVideo Virtual Video (OVVV). It is a Half-Life 2 mod that lets you simulate cameras in a virtual world.

But I am looking for a more open (as in open source) and perhaps portable solution. One way would be to implement the needed functionality on top of one of the dozens of open-source 3D engines. Still, it would be great if somebody knows of a library or tool that already does what OVVV does.

Also, if you do not know of a ready-to-use solution: how would you tackle the problem?

PS: The reason I ask here is that I want to minimize the effort I spend on this issue. It's not that I have no idea how to do it, but my own solutions would require me to invest too much time. So I am looking for concrete tips here ... :-)

Answers

If I were in your situation, I'd probably use POV-Ray, since you can write code in any language to produce .pov files to feed it. This is great where precise geometry, lighting, textures, and complex, exact motions are important. POV-Ray can be run entirely from the command line or programmatically with a system() call or equivalent.

Although POV-Ray isn't open source in the usual sense, it is free and you can get the source for it.
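To illustrate the idea, here is a minimal sketch in Python: it writes a .pov scene that uses POV-Ray's built-in `clock` variable to animate a sphere, then builds a command line using POV-Ray's `+I`/`+W`/`+H`/`+KFF` options (`+KFF` sets the final frame, so `clock` steps from 0 to 1 over the sequence). The file name, resolution, and scene contents are arbitrary choices for the example, and the render step is only attempted if a `povray` binary is actually on your PATH.

```python
import pathlib
import shutil
import subprocess

# A tiny POV-Ray scene; "clock" runs 0..1 over the animation,
# moving the sphere along the x axis from -2 to +2.
SCENE = """\
camera { location <0, 1, -5> look_at <0, 0, 0> }
light_source { <10, 10, -10> color rgb <1, 1, 1> }
sphere { <clock*4 - 2, 0, 0>, 0.5 pigment { color rgb <1, 0, 0> } }
"""

def render(scene_file="moving_sphere.pov", frames=30):
    """Write the scene and build (and, if possible, run) a POV-Ray command."""
    pathlib.Path(scene_file).write_text(SCENE)
    # +I = input file, +W/+H = resolution, +KFFn = render frames 1..n
    cmd = ["povray", f"+I{scene_file}", "+W320", "+H240", f"+KFF{frames}"]
    if shutil.which("povray"):  # only invoke POV-Ray if it is installed
        subprocess.run(cmd, check=True)
    return cmd
```

Because the scene text is generated by ordinary code, you can parameterize object positions, camera paths, or lighting per test case and regenerate the whole sequence deterministically.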

What about using one of the open source game engines? The Quake engine source has been released under the GPL, and it may be sufficient for your needs.

Most of these engines provide scripting features (often Lua) intended for AI and object behaviors, which could easily supply the programmability you need.

Edit: tricks for applying noise/distortion and other post-processing effects to video programmatically

A short script written in AviSynth will provide blur, distortion, contrast/frame-rate changes, noise addition, and a host of other effects. The effects are applied on the fly, frame by frame, so you don't need to render the output to a huge video file for testing. Video programs treat the script files like normal videos, albeit with more CPU load during playback. So you can feed your computer vision package a bunch of AviSynth scripts for testing, which may all read from the same video source but apply different levels of noise, blur, etc. This can save a LOT of time and disk space in testing!
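A sketch of that workflow: generate one .avs script per degradation so the same source clip yields many test variants without re-encoding anything. The source file name and the chosen filters are illustrative; `Blur`, `Tweak`, and `ChangeFPS` are built-in AviSynth filters, while a grain/noise filter such as AddGrain would need an external plugin.

```python
import pathlib

SOURCE = "clean_input.avi"  # hypothetical source clip

# One AviSynth filter chain per test variant, applied to the same source.
VARIANTS = {
    "blurred": "Blur(1.0)",           # soften the image
    "low_contrast": "Tweak(cont=0.5)",  # halve the contrast
    "low_fps": "ChangeFPS(10)",       # drop the frame rate
}

def write_scripts(out_dir="avs_variants"):
    """Write one .avs file per variant and return the created paths."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    paths = []
    for name, chain in VARIANTS.items():
        script = f'AviSource("{SOURCE}")\n{chain}\n'
        path = out / f"{name}.avs"
        path.write_text(script)
        paths.append(path)
    return paths
```

Each generated script is only a few bytes, yet to any DirectShow-aware video consumer it looks like a full video, so a whole matrix of degradations costs essentially no disk space.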

Their site seems to be down at the moment, but since AviSynth is open source and widely used, the packages are easy to find elsewhere.

I've seen Ogre used for this exact purpose.




