Wow... that's the trick! reinterpret_cast is a "hidden" feature of the expression language - it's neither in the help nor offered by the content assistant. I had completely forgotten about it. Thanks Kjell!
Mainly for Kjell and Ville: during my experiments with GPU computations I observed one problem (or at least strange behavior) - if a RenderTarget is set in MaterialTexture, the vertical scale of the applied RenderTarget (texture) depends on the ZApplication.ViewportRatio setting. For instance, if ZApplication.ViewportRatio = Full window, the applied "texture" covers the whole area; if ZApplication.ViewportRatio = 16:9, the applied "texture" covers just the middle 16:9 rectangle of the whole area. To illustrate what I mean, open the attached example project, start it, and change ZApplication.ViewportRatio at runtime.
Suggestion: I would appreciate it if MaterialTexture's Texture* + Origin parameters could also be applied to RenderTarget, if specified. Or at least, a RenderTarget should always be stretched to the full area, independently of ZApplication.ViewportRatio.
Question 1 (Kjell): in the example I used rendering of a sprite for the GPU computation. It is half the size of a MeshBox with scale (1,1,1), so I used gl_Position.xy *= 2.0 in the vertex shader to cover the whole area of the RenderTarget. Which shape is more efficient for rendering GPU computations, a mesh or a sprite, or does it make no difference?
Question 2: do you think this example could cause problems on "usual" GPUs? I think VBOs are very common, even on older GPUs.
Concerning ViewportRatio: simply set the viewport size to the same size as your FBO ( using glViewport ). This isn't the regular usage case for RenderTarget anyway .. in fact, you should allocate your own FBO instead ( so you can get the exact format you want ).
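In case it helps, here is a minimal sketch of that in plain C / OpenGL ( the GLEW include, the FBO handle and the sizes are placeholders, not ZGE specifics ):

Code:
#include <GL/glew.h>   /* assumes a current OpenGL context with FBO support */

#define FBO_WIDTH  256   /* size of the computation texture ( placeholder ) */
#define FBO_HEIGHT 256

/* fbo is your own framebuffer object, windowWidth / windowHeight is the real window size */
void renderComputationPass(GLuint fbo, int windowWidth, int windowHeight)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, FBO_WIDTH, FBO_HEIGHT);     /* viewport = FBO size, so ViewportRatio no longer matters */

    /* ... draw the computation pass here ... */

    glBindFramebuffer(GL_FRAMEBUFFER, 0);        /* back to the window framebuffer */
    glViewport(0, 0, windowWidth, windowHeight); /* restore the window viewport */
}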
And concerning "shape for rendering": it doesn't make much of a difference, but the best option is to use the following Mesh ..
StevenM wrote:GpuComputationExample.zgeproj - Does anyone know how to fix the normals?
That example doesn't use any light calculations in its shader, so there's no use for normals. But if there were, in this specific case you can derive the normal from the sine wave .. but otherwise / alternatively you can calculate ( or sample ) the height for the adjacent cells over the X and Y axes, and use the cross product between the two generated vectors from those coordinates ( which is the more "generic" approach ).
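To make the "generic" approach concrete, here is a small C sketch ( not taken from the example project ) that builds the normal from two tangent vectors over adjacent samples - height() is a stand-in for your displacement function or texture lookup:

Code:
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 cross3(vec3 a, vec3 b)
{
    vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}

static vec3 normalize3(vec3 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* height() is a placeholder for your displacement ( e.g. the sine wave or a texture sample ),
   step is the distance between adjacent cells. */
vec3 heightFieldNormal(float (*height)(float, float), float x, float z, float step)
{
    /* tangents along X and Z, built from the adjacent samples */
    vec3 tx = { 2.0f * step, height(x + step, z) - height(x - step, z), 0.0f };
    vec3 tz = { 0.0f,        height(x, z + step) - height(x, z - step), 2.0f * step };
    return normalize3(cross3(tz, tx));   /* order chosen so the normal points up ( +Y ) */
}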
@Rado1 - By the way, even though i'm sure it was just a example .. since you're not reading from a buffer, you don't need to use a separate ( computation ) shader pass for that kind of effect at all.
Kjell wrote:@Rado1 - By the way, even though i'm sure it was just a example .. since you're not reading from a buffer, you don't need to use a separate ( computation ) shader pass for that kind of effect at all.
Sure Kjell, this is just an example, a preparation for more complex usage of RenderTargets I want to apply in BallZ. In that case I want to use RenderTargets to keep the results of previous computations of positions, colors and sizes in order to achieve smooth transformations when changing scenes. The problem I have is that a RenderTarget supports just 4 floats per pixel and I need 7 floats, so I have to use three RenderTargets - one temporary for the computation of shape parameters (used by two consecutive shaders), another one for the computation of positions and sizes, and the last one for the computation of colors. The computations of position, size and color use the previously computed values stored in the RenderTargets plus the parameters computed in the previously applied shader.
Kjell, is there a way to use an FBO with more than 4 floats per pixel? Or could a shader somehow access memory shared across rendering cycles (a kind of "persistent" array)? Simply, something that would allow me to use just one shader for the computations and maybe also for rendering...
Rado1 wrote:is there a way to use an FBO with more than 4 floats per pixel? Or could a shader somehow access memory shared across rendering cycles (a kind of "persistent" array)? Simply, something that would allow me to use just one shader for the computations and maybe also for rendering...
You can't use more than 4 channels per color attachment. However, you can bind & render to multiple render targets ( MRT / G-Buffer ) at the same time ( so you have multiple outputs instead of just the default gl_FragColor ).
Alternatively you could take the transform feedback route ( which outputs directly to a vertex buffer instead ), but this is something that got added to OpenGL over time, so you have to be careful that you only use functions / features that are supported in the OpenGL version you are targeting.
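If you do take that route, the key calls look roughly like this ( a sketch assuming an OpenGL 3.0+ context; program, feedbackBuffer, particleCount and the varying name "outPosition" are placeholders ):

Code:
#include <GL/glew.h>   /* assumes an OpenGL 3.0+ context ( or EXT_transform_feedback ) */

/* program, feedbackBuffer and particleCount are assumed to be set up elsewhere */
void capturePositions(GLuint program, GLuint feedbackBuffer, GLsizei particleCount)
{
    /* before linking: declare which vertex shader output gets captured */
    const GLchar *varyings[] = { "outPosition" };
    glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(program);

    glUseProgram(program);
    glEnable(GL_RASTERIZER_DISCARD);               /* skip rasterization, we only want the buffer */
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackBuffer);
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, particleCount);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);
}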
Kjell wrote:However, you can bind & render to multiple render targets ( MRT / G-Buffer ) at the same time ( so you have multiple outputs instead of just the default gl_FragColor ).
Even though MRT seems to be a good idea, applicable also to older versions of GLSL, I'm not sure how to use it in ZGE; could you please give me some hints or an example? Thanks in advance.
Rado1 wrote:Even though MRT seems to be a good idea, applicable also to older versions of GLSL, I'm not sure how to use it in ZGE; could you please give me some hints or an example?
There's not much additional work required compared to what you need for a simple / single floating-point FBO. Simply generate & attach more texture objects to your FBO ( instead of just one ) and use gl_FragData instead of gl_FragColor in your fragment shader.
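A rough C sketch of that setup, assuming GL_RGBA32F textures ( pick whatever float format your target hardware supports ) and two attachments:

Code:
#include <GL/glew.h>   /* assumes a context with framebuffer objects and float textures */

#define W 256
#define H 256

GLuint fbo, tex[2];

void createMrtFbo(void)
{
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(2, tex);
    for (int i = 0; i < 2; i++)
    {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, W, H, 0, GL_RGBA, GL_FLOAT, NULL);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, tex[i], 0);
    }

    /* tell GL that the shader writes to both attachments */
    const GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, buffers);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

/* the fragment shader then writes one vec4 per attachment, e.g.
     gl_FragData[0] = vec4(position, size);
     gl_FragData[1] = color;                 */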
"That example doesn't use any light calculations in its shader, so there's no use for normals. But if there were, in this specific case you can derive the normal from the sine wave .. but otherwise / alternatively you can calculate ( or sample ) the height for the adjacent cells over the X and Y axes, and use the cross product between the two generated vectors from those coordinates ( which is the more "generic" approach )."
Yes, that's what I want to do - textured surfaces. Vertex displacement is so simple, but calculating the normals is a bit difficult for me - that sort of math is not something I do too often.
I came across an interesting, somewhat related article here - I'll have to try this:
Kjell wrote:There's not much additional work required compared to what you need for a simple / single floating-point FBO. Simply generate & attach more texture objects to your FBO ( instead of just one ) and use gl_FragData instead of gl_FragColor in your fragment shader.
Hi Kjell, can I use some of the ZGE components - for instance, Bitmap.Handle as the texture id in glFramebufferTexture2D? Or RenderTarget for defining the FBO and SetRenderTarget for setting the current FBO binding? For instance, can I use SetRenderTarget and then call glFramebufferTexture2D in a subsequent ZExpression? ... something like that... Can I use renderbuffers instead of textures - is it feasible, and if so, are there any advantages?
Rado1 wrote:can I use some of the ZGE components - for instance, Bitmap.Handle as the texture id in glFramebufferTexture2D?
You can .. but in that case you might as well use a RenderTarget, since that also provides a 32-bit ( 8-bit per channel ) texture object. However, you probably need something other than 8-bit per channel.
But perhaps Ville is willing to add a dropdown to RenderTarget allowing you to select some more formats.
Rado1 wrote:Or RenderTarget for defining the FBO and SetRenderTarget for setting the current FBO binding?
The handle of a RenderTarget isn't exposed. You could bind it ( using SetRenderTarget ) and get the handle through an OpenGL call .. but i wouldn't recommend taking this route.
Rado1 wrote:Can I use renderbuffers instead of textures - is it feasible, and if so, are there any advantages?
You can .. the advantage is that they can be faster in some circumstances, but the biggest downside is that you can't use them as a sampler in a shader.
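For completeness, attaching a renderbuffer instead looks like this ( just a sketch ) - note that there is no texture object you could bind as a sampler afterwards:

Code:
#include <GL/glew.h>

/* attach a renderbuffer as the color target; fine for pure off-screen output,
   but it can't be used as a texture / sampler later */
void attachColorRenderbuffer(GLuint fbo, int width, int height)
{
    GLuint rbo;
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}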
So what is the recommended sequence of commands/components? My idea is:
1. Bind the FBO for drawing (by glBindFramebuffer(GL_DRAW_FRAMEBUFFER, ...) + glViewport).
2. Render with a material having a shader which uses gl_FragData for writing the computation results, and B0 and B1 as input textures from the previous computation.
3. Set rendering back to the window framebuffer (by glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)).
4. Render the scene with a material having a shader which uses B0 and B1 as textures with the inputs of the latest computation.
OnClose:
Rado1 wrote:so what is the recommended sequence of commands/components?
Depends on what you're trying to do obviously .. but the sequence you describe is correct ( do keep in mind that you can't use the same texture as input & output at the same time ).
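A common way to respect that restriction is to ping-pong between two textures / FBOs, so each pass reads from one and writes to the other - a minimal sketch with placeholder names:

Code:
#include <GL/glew.h>

/* fbo[i] has tex[i] attached as its color target ( set up elsewhere ) */
GLuint fbo[2], tex[2];
int current = 0;   /* index of the texture holding the latest results */

void computationPass(void)
{
    int next = 1 - current;

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[next]);   /* write to the "other" target */
    glBindTexture(GL_TEXTURE_2D, tex[current]);          /* read the previous results   */
    /* ... draw the computation quad ... */
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);

    current = next;   /* the freshly written texture becomes the input of the next pass */
}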