Block filters using fragment shaders

I was following this tutorial using Apple's OpenGL Shader Builder (a tool similar to Nvidia's FX Composer, but simpler).

I could easily apply the filters, but I don't understand whether they work correctly (and if so, how I could improve the output). Take the blur filter, for example: OpenGL itself already does some image processing on the textures, so if they are displayed at a higher resolution than the original image they are already blurred by OpenGL. Second, the blurred part is brighter than the unprocessed part, which I think doesn't make sense, since it just takes pixels from the direct neighborhood. That neighborhood is defined by

float step_w = (1.0/width);

Which I don't quite understand: are the pixels really indexed using floating-point values?

Blurred image: http://img218.imageshack.us/img218/6468/blurzt.png

Edit: I forgot to attach the exact code I used:

Fragment Shader

// Originally taken from: http://www.ozone3d.net/tutorials/image_filtering_p2.php#part_2

#define KERNEL_SIZE 9

float kernel[KERNEL_SIZE];

uniform sampler2D colorMap;
uniform float width;
uniform float height;

float step_w = (1.0/width);
float step_h = (1.0/height);

// float step_w = 20.0;
// float step_h = 20.0;

vec2 offset[KERNEL_SIZE];

void main(void)
{
   int i = 0;
   vec4 sum = vec4(0.0);

   offset[0] = vec2(-step_w, -step_h);  // south west
   offset[1] = vec2(0.0, -step_h);      // south
   offset[2] = vec2(step_w, -step_h);   // south east

   offset[3] = vec2(-step_w, 0.0);      // west
   offset[4] = vec2(0.0, 0.0);          // center
   offset[5] = vec2(step_w, 0.0);       // east

   offset[6] = vec2(-step_w, step_h);   // north west
   offset[7] = vec2(0.0, step_h);       // north
   offset[8] = vec2(step_w, step_h);    // north east


// Gaussian kernel
// 1 2 1
// 2 4 2
// 1 2 1


   kernel[0] = 1.0;    kernel[1] = 2.0;    kernel[2] = 1.0;
   kernel[3] = 2.0;    kernel[4] = 4.0;    kernel[5] = 2.0;
   kernel[6] = 1.0;    kernel[7] = 2.0;    kernel[8] = 1.0;


// TODO make grayscale first
// Laplacian Filter
// 0   1   0
// 1  -4   1
// 0   1   0

/*
kernel[0] = 0.0;    kernel[1] = 1.0;    kernel[2] = 0.0;
kernel[3] = 1.0;    kernel[4] = -4.0;   kernel[5] = 1.0;
kernel[6] = 0.0;    kernel[7] = 1.0;    kernel[8] = 0.0;
*/

// Mean Filter
// 1  1  1
// 1  1  1
// 1  1  1

/*
kernel[0] = 1.0;    kernel[1] = 1.0;    kernel[2] = 1.0;
kernel[3] = 1.0;    kernel[4] = 1.0;    kernel[5] = 1.0;
kernel[6] = 1.0;    kernel[7] = 1.0;    kernel[8] = 1.0;
*/

   if(gl_TexCoord[0].s<0.5)
   {
       // For every pixel sample the neighbor pixels and sum up
       for( i=0; i<KERNEL_SIZE; i++ )
       {
            // select the pixel with the concerning offset
            vec4 tmp = texture2D(colorMap, gl_TexCoord[0].st + offset[i]);
            sum += tmp * kernel[i];
       }

        sum /= 16.0;
   }
   else if( gl_TexCoord[0].s>0.51 )
   {
        sum = texture2D(colorMap, gl_TexCoord[0].xy);
   }
   else // Draw a red line
   {
        sum = vec4(1.0, 0.0, 0.0, 1.0);
   }

   gl_FragColor = sum;
}

Vertex Shader

void main(void)
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
Answers

Texture coordinates conventionally range from (0,0) (bottom left) to (1,1) (top right), so yes, they are in fact floats.

So if you have texture coordinates (u, v), the "original" pixel coordinates are computed as (u*textureWidth, v*textureHeight).

If the resulting values are not integers, there are different ways to handle that:

  • just take the floor or ceiling of the result to make it an integer
  • interpolate between the neighbouring texels

However, I think every shading language has a method to access a texture by its "original", i.e. integer, index.
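For illustration, here is a minimal GLSL sketch of both options, assuming the same colorMap, width and height uniforms as in the question's shader (the texel index is just a made-up example); the texelFetch variant needs GLSL 1.30 or later:

uniform sampler2D colorMap;
uniform float width;
uniform float height;

void main(void)
{
    // Made-up integer texel index, converted to a normalized coordinate;
    // the +0.5 samples at the texel centre to avoid filtering bleed.
    ivec2 index = ivec2(40, 25);
    vec2 uv = (vec2(index) + 0.5) / vec2(width, height);
    gl_FragColor = texture2D(colorMap, uv);

    // GLSL 1.30+ only: fetch the same texel directly by its integer index,
    // without any filtering or wrapping:
    // gl_FragColor = texelFetch(colorMap, index, 0);
}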

@Nils, thanks for posting this code. I've been trying to figure out a simple way to do a convolution on the GPU for some time now. I tried your code and ran into the same dimming problem myself. Here's how I solved it.

  • You have to be careful with your step size: use the texture width, not the image width. Images usually get resized to a power of two when the texture is bound in OpenGL.
  • You must also be sure to normalize your kernel by summing up all of its values and dividing by that sum (see the sketch after this list).
  • It also helps to convolve R, G and B separately, without the illumination (the fourth component of the sample).
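To make the second point concrete, here is a minimal sketch of that normalization applied to the summation loop of the original shader; it reuses the kernel, offset, colorMap and sum variables from the question's code and simply replaces the hard-coded division by 16.0:

   // Compute the sum of the kernel weights instead of hard-coding 16.0,
   // so any 3x3 kernel keeps the overall brightness unchanged.
   float weightSum = 0.0;
   for (int i = 0; i < KERNEL_SIZE; i++)
       weightSum += kernel[i];

   for (int i = 0; i < KERNEL_SIZE; i++)
       sum += texture2D(colorMap, gl_TexCoord[0].st + offset[i]) * kernel[i];

   // Edge-detection kernels sum to zero; only normalize when the sum is non-zero.
   if (weightSum != 0.0)
       sum /= weightSum;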

Here's a solution that doesn't have the dimming issue and that also bypasses the need for an offset array for 3x3 kernels.

I've included 8 kernels that worked for me without dimming.

uniform sampler2D colorMap;
uniform float width;
uniform float height;


const mat3 SobelVert= mat3( 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, -1.0, -2.0, -1.0 );
const mat3 SobelHorz= mat3( 1.0, 0.0, -1.0, 2.0, 0.0, -2.0, 1.0, 0.0, -1.0 );
const mat3 SimpleBlur= (1.0/9.0)*mat3( 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 );
const mat3 Sharpen= mat3( 0.0, -1.0, 0.0, -1.0, 5.0, -1.0, 0.0, -1.0, 0.0 );
const mat3 GaussianBlur= (1.0/16.0)*mat3( 1.0, 2.0, 1.0, 2.0, 4.0, 2.0, 1.0, 2.0, 1.0 );
const mat3 SimpleHorzEdge= mat3( 0.0, 0.0, 0.0, -3.0, 3.0, 0.0, 0.0, 0.0, 0.0 );
const mat3 SimpleVertEdge= mat3( 0.0, -3.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0 );
const mat3 ClearNone= mat3( 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0 );

void main(void)
{
   vec4 sum = vec4(0.0);
   if(gl_TexCoord[0].x <0.5)
   {
      mat3 I, R, G, B;
      vec3 sample;

      // fetch the 3x3 neighbourhood and use the RGB vector s length as intensity value
      for (int i=0; i<3; i++){
        for (int j=0; j<3; j++) {
          sample = texture2D(colorMap, gl_TexCoord[0].xy + vec2(i-1,j-1)/vec2(width, height)).rgb;
          I[i][j] = length(sample); //intensity (or illumination)
          R[i][j] = sample.r; 
          G[i][j] = sample.g;
          B[i][j] = sample.b;  
        }
      }

      //apply the kernel convolution
      mat3 convolvedMatR = matrixCompMult( SimpleBlur, R);
      mat3 convolvedMatG = matrixCompMult( SimpleBlur, G);
      mat3 convolvedMatB = matrixCompMult( SimpleBlur, B);
      float convR = 0.0;
      float convG = 0.0;
      float convB = 0.0;
      //sum the result
      for (int i=0; i<3; i++){
        for (int j=0; j<3; j++) {
          convR += convolvedMatR[i][j];
          convG += convolvedMatG[i][j];
          convB += convolvedMatB[i][j];
        }
      }
      sum = vec4(vec3(convR, convG, convB), 1.0);

  }
   else if( gl_TexCoord[0].x >0.51 )
   {
        sum = texture2D(colorMap, gl_TexCoord[0].xy );
   }
   else // Draw a red line
   {
        sum = vec4(1.0, 0.0, 0.0, 1.0);
   }

   gl_FragColor = sum;
}
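As a small side note (not part of the original answer), the component-multiply-then-sum step above can be wrapped in a tiny helper, which makes switching kernels a one-line change. A sketch, using a hypothetical helper named convolve and the same mat3 kernels and channel matrices as above:

// Sum of the component-wise product of two mat3 values, i.e. the 3x3
// convolution result for one colour channel.
float convolve(mat3 kernel, mat3 channel)
{
    mat3 p = matrixCompMult(kernel, channel);
    return dot(p[0] + p[1] + p[2], vec3(1.0));
}

// Inside main(), after filling R, G and B:
// sum = vec4(convolve(GaussianBlur, R),
//            convolve(GaussianBlur, G),
//            convolve(GaussianBlur, B), 1.0);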



