We have an MRI scan, and we would like to perform real-time raycasting in OpenGL on iOS in order to render the surface from different angles without polygonizing it. In fact, we are only interested in the depth map produced by the rendering.
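To make the goal concrete, here is a rough sketch of the kind of fragment shader we have in mind (GLSL ES 3.00 stored as a C string; all uniform and varying names are hypothetical). It marches a ray through a 3D texture and writes out the distance at which the intensity first crosses an iso-surface threshold; the entry points would come from rasterizing the front faces of a unit cube whose positions double as texture coordinates:

    // Hypothetical ray-marching fragment shader: outputs ray depth to the surface.
    static const char *kRaycastFS =
        "#version 300 es\n"
        "precision highp float;\n"
        "precision highp sampler3D;\n"
        "uniform sampler3D uVolume;\n"
        "uniform vec3 uCamPos;      // camera position in [0,1]^3 texture space\n"
        "uniform float uThreshold;  // iso-surface intensity threshold\n"
        "in vec3 vEntry;            // ray entry point on the cube, in [0,1]^3\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    vec3 dir = normalize(vEntry - uCamPos);\n"
        "    vec3 p = vEntry;\n"
        "    for (int i = 0; i < 256; ++i) {\n"
        "        if (texture(uVolume, p).r > uThreshold) {\n"
        "            // Grayscale depth: distance marched from the entry face.\n"
        "            fragColor = vec4(vec3(distance(p, vEntry)), 1.0);\n"
        "            return;\n"
        "        }\n"
        "        p += dir * (1.0 / 256.0);\n"
        "        if (any(lessThan(p, vec3(0.0))) || any(greaterThan(p, vec3(1.0)))) break;\n"
        "    }\n"
        "    fragColor = vec4(1.0);  // ray left the volume without a hit\n"
        "}\n";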

I've seen a number of examples of this in the App Store, so I'm sure it's possible (e.g. ImageVis3D). Can we use glTexImage3D to perform the rendering? Is there a good online resource for using this function on iOS? Better yet, is there a GitHub project or something similar that demonstrates the use of glTexImage3D on iOS?

Now, assuming a 2D or 3D texture already exists in OpenGL ES memory, is it possible to write to that same memory using a fragment shader and then re-render it, without copying it back to the CPU? I'm imagining a sculpting scenario that deforms the volume using a fragment shader.


1 Answer


I'm not sure what you mean by "Can we use glTexImage3D to perform rendering?" glTexImage3D does not render anything; it uploads 3D texture data into a texture object (on iOS it requires OpenGL ES 3.0; a sketch of an upload follows below). You cannot read from and write to the same memory in a shader (that would create read/write hazards). However, what you can do is double buffer:

Read from texture1 and write to texture2, and before the next frame swap the bindings of the textures so that you read from texture2 and write to texture1.
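For the upload itself, a minimal sketch (OpenGL ES 3.0; assuming an 8-bit single-channel volume, and the function name is made up):

    #include <OpenGLES/ES3/gl.h>

    // Upload an MRI volume as a single-channel 3D texture.
    // `voxels` is width * height * depth bytes of intensity data.
    GLuint uploadVolume(const GLubyte *voxels, GLsizei width, GLsizei height, GLsizei depth) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_R8,   // one 8-bit channel per voxel
                     width, height, depth, 0,
                     GL_RED, GL_UNSIGNED_BYTE, voxels);
        return tex;
    }

And the double buffering can be a ping-pong over a single FBO, along these lines (shader and draw-call setup elided; names are illustrative):

    // Ping-pong pass: read from tex[readIdx], write into tex[1 - readIdx],
    // then flip the index so the roles swap on the next frame.
    static GLuint tex[2];   // two same-size 2D textures, created elsewhere
    static int readIdx = 0;

    void pingPongPass(GLuint fbo) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex[1 - readIdx], 0);  // write target
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex[readIdx]);                  // read source
        // ... draw a full-screen quad with a shader that samples unit 0 ...
        readIdx = 1 - readIdx;                                       // swap roles
    }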

Just to clarify: if you want to perform your writing operations in the fragment shader (meaning you want to write a value for each fragment of your projected primitive), then you would render to texture using an FBO (it sounds like this is what you are looking for).
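For a 3D volume this works slice by slice: on ES 3.0 you can attach one layer of the 3D texture to the FBO with glFramebufferTextureLayer and run the sculpting shader over each slice. A sketch (the function name is illustrative, and it assumes the texture uses a color-renderable format such as GL_R8):

    #include <OpenGLES/ES3/gl.h>

    // Attach slice `layer` of a 3D texture as the render target, so a
    // fragment shader pass can overwrite that slice of the volume.
    GLuint attachVolumeSlice(GLuint volumeTex, GLint layer) {
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  volumeTex, 0 /* mip level */, layer);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            // handle an incomplete framebuffer (e.g. non-renderable texture format)
        }
        return fbo;
    }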

However, if you only want to write values per vertex, i.e. capture the output values of your vertex shader, then you would use transform feedback.
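A transform feedback pass on ES 3.0 looks roughly like this (a sketch; the varying name "outPosition", the vertex count, and the buffer setup are assumptions):

    #include <OpenGLES/ES3/gl.h>

    // Capture the vertex shader's output varying into `captureBuf`
    // instead of rasterizing anything.
    void captureVertices(GLuint prog, GLuint captureBuf, GLsizei count) {
        const GLchar *const varyings[] = { "outPosition" };
        glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
        glLinkProgram(prog);   // must relink after declaring the varyings

        glUseProgram(prog);
        glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, captureBuf);
        glEnable(GL_RASTERIZER_DISCARD);   // we only want the captured values
        glBeginTransformFeedback(GL_POINTS);
        glDrawArrays(GL_POINTS, 0, count); // vertex attribute setup elided
        glEndTransformFeedback();
        glDisable(GL_RASTERIZER_DISCARD);
    }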

answered 2015-06-18T08:58:40.893