I have `W x H x D` volumetric data that is zero everywhere except for small spherical regions containing 1.
I have written a shader that extracts the "intersection" of that 3D volume with a generic object made of vertices.
Vertex shader
varying vec3 textureCoordinates;
uniform float objectSize;
uniform vec3 objectTranslation;
void main()
{
    vec4 v = gl_Vertex;
    // Map the object-space position into [0,1]^3 texture coordinates:
    // v.xz feed the first two components, v.y the third.
    textureCoordinates = vec3(((v.xz - objectTranslation.xz) / objectSize + 1.0) * 0.5,
                              ((v.y  - objectTranslation.y)  / objectSize + 1.0) * 0.5);
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
Fragment shader
varying vec3 textureCoordinates;
uniform sampler3D volumeSampler;
void main()
{
    vec4 uniformColor = vec4(1.0, 1.0, 0.0, 1.0); // yellow
    // Black out fragments whose texture coordinates fall outside the volume in x and z
    if (textureCoordinates.x <= 0.0 || textureCoordinates.x >= 1.0 ||
        textureCoordinates.z <= 0.0 || textureCoordinates.z >= 1.0)
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // use uniformColor here instead to keep the object colored outside the volume
    else
        gl_FragColor = uniformColor * texture3D(volumeSampler, textureCoordinates);
}
In the OpenGL program I am looking at the centered object, with those almost-spherical patches on it, from the eye position (0, 100, 0). However, I would like the spheres that lie on the same line of sight from a second viewpoint at (0, 0, 0) to be correctly occluded, so that only the parts I underlined in red in the picture are visible.
Is this an application of raycasting or similar?
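To make the question more concrete, here is a rough sketch of the kind of occlusion test I imagine, in the same style as the fragment shader above. The viewerTexCoord uniform and the fixed step count are my assumptions: the second viewer at (0, 0, 0) would have to be mapped into [0, 1]^3 texture space on the CPU with the same formula the vertex shader uses, and the ray is marched from that point towards each fragment, darkening the fragment if a non-zero voxel is met along the way.
varying vec3 textureCoordinates;
uniform sampler3D volumeSampler;
uniform vec3 viewerTexCoord; // assumed: (0,0,0) mapped into texture space on the CPU
void main()
{
    vec4 uniformColor = vec4(1.0, 1.0, 0.0, 1.0); // yellow
    const int STEPS = 64;
    // Direction from the second viewer to this fragment, in texture space.
    vec3 dir = textureCoordinates - viewerTexCoord;
    float occlusion = 0.0;
    // NOTE: the last steps may sample the sphere the fragment itself lies in;
    // shortening the ray slightly would avoid false self-occlusion.
    for (int i = 1; i < STEPS; ++i)
    {
        vec3 samplePos = viewerTexCoord + dir * (float(i) / float(STEPS));
        // Only positions inside the volume can occlude.
        if (all(greaterThan(samplePos, vec3(0.0))) && all(lessThan(samplePos, vec3(1.0))))
            occlusion = max(occlusion, texture3D(volumeSampler, samplePos).r);
    }
    if (occlusion > 0.5)
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // occluded by another sphere
    else
        gl_FragColor = uniformColor * texture3D(volumeSampler, textureCoordinates);
}
Is this (or a single-pass version of it) the right way to think about the problem, or is there a more standard technique?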