Heavy CPU usage when drawing GL scene; origin?

Since nothing but the size of the window changes, is it normal that my program needs one full core to render the scene in a maximized window?

I'm using Qt 4.7 with C++ on Windows to draw 150 pictures (RGBA components, one byte each) of dimensions 1754×1240. I load my textures like this:

glGenFramebuffers(TDC_NB_IMAGE, _fborefs);
glBindFramebuffer(GL_FRAMEBUFFER, _fbo);
//initialize tex
glGenTextures(TDC_NB_IMAGE, _picrefs);
for (int i = 0 ; i < TDC_NB_IMAGE ; i++)
{
    qDebug() << "loading texture num : " << i;
    _pics[i].scale = 1.f;
    _pics[i].pos.rx() = i % ((int)sqrt((float)TDC_NB_IMAGE));
    _pics[i].pos.ry() = i / ((int)sqrt((float)TDC_NB_IMAGE));
    _pics[i].text.load("imgTest.png");
    glBindTexture(GL_TEXTURE_2D, _picrefs[i]);
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP );
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);//GL_LINEAR_MIPMAP_LINEAR
    glTexImage2D (GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
        TDC_IMG_WIDTH, TDC_IMG_HEIGHT, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE,
        _pics[i].text.toImage().bits()
        );
    //glGenerateMipmap(GL_TEXTURE_2D);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _picrefs[i], 0);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);

I draw my scene like this:

glBindFramebuffer(GL_FRAMEBUFFER, _fbo);
glClear( GL_COLOR_BUFFER_BIT);
//for each image
for (int i = 0 ; i < TDC_NB_IMAGE ; i++)
{
    //compute coords
    if (_update)
    {
        //pos on 0,0
        _pics[i].quad.topleft.rx() = 0;
        _pics[i].quad.topleft.ry() = 0;
        _pics[i].quad.topright.rx() = TDC_IMG_WIDTH;
        _pics[i].quad.topright.ry() = 0;
        _pics[i].quad.botright.rx() = TDC_IMG_WIDTH;
        _pics[i].quad.botright.ry() = TDC_IMG_HEIGHT;
        _pics[i].quad.botleft.rx() = 0;
        _pics[i].quad.botleft.ry() = TDC_IMG_HEIGHT;
        //translate
        QPointF dec(0, 0);
        dec.rx() = _pics[i].pos.x() * TDC_IMG_WIDTH + _pics[i].pos.x() * TDC_SPACE_IMG;
        dec.ry() = _pics[i].pos.y() * TDC_IMG_HEIGHT + _pics[i].pos.y() * TDC_SPACE_IMG;
        _pics[i].quad.topleft += dec;
        _pics[i].quad.topright += dec;
        _pics[i].quad.botright += dec;
        _pics[i].quad.botleft += dec;
        //scale
        _pics[i].quad.topleft *= _globalScale;
        _pics[i].quad.topright *= _globalScale;
        _pics[i].quad.botright *= _globalScale;
        _pics[i].quad.botleft *= _globalScale;
        _update = false;
    }
    //prepare tex drawing
    //draw drawing area
    glBindTexture (GL_TEXTURE_2D, 0);
    glBegin (GL_QUADS);
    glTexCoord2f (0.0, 0.0);glVertex3f (_pics[i].quad.topleft.x(), _pics[i].quad.topleft.y(), 0);
    glTexCoord2f (1.0, 0.0);glVertex3f (_pics[i].quad.topright.x(), _pics[i].quad.topright.y(), 0);
    glTexCoord2f (1.0, 1.0);glVertex3f (_pics[i].quad.botright.x(), _pics[i].quad.botright.y(), 0);
    glTexCoord2f (0.0, 1.0);glVertex3f (_pics[i].quad.botleft.x(), _pics[i].quad.botleft.y(), 0);
    glEnd();
    //draw texture
    glBindTexture (GL_TEXTURE_2D, _picrefs[i]);
    glBegin (GL_QUADS);
    glTexCoord2f (0.0, 0.0);glVertex3f (_pics[i].quad.topleft.x(), _pics[i].quad.topleft.y(), 0);
    glTexCoord2f (1.0, 0.0);glVertex3f (_pics[i].quad.topright.x(), _pics[i].quad.topright.y(), 0);
    glTexCoord2f (1.0, 1.0);glVertex3f (_pics[i].quad.botright.x(), _pics[i].quad.botright.y(), 0);
    glTexCoord2f (0.0, 1.0);glVertex3f (_pics[i].quad.botleft.x(), _pics[i].quad.botleft.y(), 0);
    glEnd();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);

After some benchmarking, the heavy CPU usage seems to come from the "//draw texture" block. Sometimes it takes 0 ms and sometimes 400 ms. Overall, the paintGL function takes 5 seconds to render the scene when the window is maximized, and close to 0 when the window is 800×600.
I change the scale during rendering (modifying only _globalScale) so I can see all 150 pictures whatever the size of the window. The scale of the pictures makes no difference to the CPU usage.
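
For reference, a minimal sketch of how such per-block timing can be done (QElapsedTimer is available since Qt 4.7; the exact placement is illustrative). Note that GL calls are asynchronous, so CPU-side timings like this mostly capture driver stalls rather than actual GPU cost, which would explain the 0 ms / 400 ms jitter:

#include <QElapsedTimer>

QElapsedTimer t;
t.start();
//draw texture
glBindTexture (GL_TEXTURE_2D, _picrefs[i]);
glBegin (GL_QUADS);
//... same four glTexCoord2f/glVertex3f pairs as above ...
glEnd();
qDebug() << "draw texture block:" << t.elapsed() << "ms";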

I started using OpenGL two weeks ago, so I have surely missed something in the documentation and tutorials, but even after rereading them I can find neither an explanation nor another way to render those 150 pictures.
In the future it will be possible to modify a picture (more precisely, a layer of that picture, which implies more textures) with a graphics tablet or even the mouse, so I need the speed improvement.
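
For that editing case, a region of an existing texture can be updated in place with glTexSubImage2D instead of re-uploading the whole 1754×1240 image (a sketch; the brush-rectangle variables are hypothetical, and the pixel buffer is assumed to hold the modified region tightly packed in BGRA order):

//update only the rectangle touched by the brush in picture i
glBindTexture(GL_TEXTURE_2D, _picrefs[i]);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                brushX, brushY,   //offset of the modified region in the texture
                brushW, brushH,   //size of the modified region
                GL_BGRA_EXT, GL_UNSIGNED_BYTE, modifiedPixels);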

Best answer

• Downscale your images so they are quick to upload and put less pressure on memory. Then, when you zoom in on a particular image, load the full-resolution image in place of the low-resolution texture.
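
A minimal sketch of that idea, reusing the loading code from the question (the downscale factor of 4 is an arbitrary example, and the images are converted to QImage::Format_ARGB32 so the GL_BGRA_EXT upload matches the pixel layout):

QImage full = _pics[i].text.toImage().convertToFormat(QImage::Format_ARGB32);
QImage thumb = full.scaled(full.width() / 4, full.height() / 4,
                           Qt::IgnoreAspectRatio, Qt::SmoothTransformation);
//upload only the small version while the image sits in the overview grid
glBindTexture(GL_TEXTURE_2D, _picrefs[i]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
             thumb.width(), thumb.height(), 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, thumb.bits());

//later, when the user zooms into picture i, swap in the full-resolution pixels
glBindTexture(GL_TEXTURE_2D, _picrefs[i]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
             full.width(), full.height(), 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, full.bits());

At 1754×1240 with 4 bytes per pixel, the 150 full-resolution images amount to roughly 1.3 GB of raw data, so keeping only downscaled copies resident until one is actually inspected is a substantial saving.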

Other answers

No other answers yet.



