Posted: Tue Jul 14, 2015 10:55 pm
Code:
reinterpret_cast<xptr>(8)
Code:
ZZDC<?xml version="1.0" encoding="iso-8859-1" ?>
<Mesh Name="CanvasMesh">
  <Producers>
    <MeshBox Grid2DOnly="255"/>
  </Producers>
</Mesh>
Rado1 wrote: I think VBOs are very common even for older GPUs.

Don't you mean FBO?
StevenM wrote: GpuComputationExample.zgeproj - Does anyone know how to fix the normals?

That example doesn't use any light calculations in its shader, so there's no use for normals. But if there were, in this specific case you can derive the normal from the sine wave .. but otherwise / alternatively you can calculate ( or sample ) the height for the adjacent cells over the X and Y axes, and use the cross product between the two generated vectors from those coordinates ( which is the more "generic" approach ).
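Roughly, that "generic" approach comes down to something like this ( a plain C sketch of the math, the same idea works in a vertex shader; height(), its frequency and the step size are just placeholders ):
Code:
#include <math.h>

typedef struct { float x, y, z; } vec3;

/* Example heightfield - stand-in for whatever drives the displacement
   ( here a simple sine wave, like the example project ). */
static float height(float x, float y)
{
    return 0.1f * sinf(x * 8.0f) * cosf(y * 8.0f);
}

/* Sample the height of the adjacent cells over the X and Y axes, build one
   tangent vector per axis, and take their cross product. `step` is the
   distance between grid cells. */
static vec3 heightfield_normal(float x, float y, float step)
{
    vec3 tx = { 2.0f * step, 0.0f, height(x + step, y) - height(x - step, y) };
    vec3 ty = { 0.0f, 2.0f * step, height(x, y + step) - height(x, y - step) };

    /* Cross product tx x ty - points along +Z when the grid is flat. */
    vec3 n = {
        tx.y * ty.z - tx.z * ty.y,
        tx.z * ty.x - tx.x * ty.z,
        tx.x * ty.y - tx.y * ty.x
    };

    /* Normalize. */
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}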
Kjell wrote: @Rado1 - By the way, even though i'm sure it was just an example .. since you're not reading from a buffer, you don't need to use a separate ( computation ) shader pass for that kind of effect at all.

Sure Kjell, this is just an example, a preparation for more complex usage of RenderTargets that I want to apply in BallZ. In that case I want to use RenderTargets to keep the results of previous computations of positions, colors and sizes in order to achieve smooth transformations when changing scenes. The problem I have is that a RenderTarget supports just 4 floats per pixel and I need 7 floats, so I have to use three RenderTargets - one temporary for computation of shape parameters ( used by two consecutive shaders ), another one for computation of positions and sizes, and the last one for computation of colors. Computations of position, size and color use the previously computed values stored in the RenderTargets + the parameters computed by the previously applied shader.
Rado1 wrote: is there a way to use an FBO with more than 4 floats per pixel? Or how a shader could access some memory shared across rendering cycles ( kinds of "persistent" arrays )? Simply, something which would allow me to use just one shader for computations and maybe also for rendering...

You can't use more than 4 channels per color attachment. However, you can bind & render to multiple render targets ( MRT / G-Buffer ) at the same time ( so you have multiple outputs instead of just the default gl_FragColor ).
Kjell wrote: However, you can bind & render to multiple render targets ( MRT / G-Buffer ) at the same time ( so you have multiple outputs instead of just the default gl_FragColor ).

MRT seems to be a good idea, applicable also to older versions of GLSL, but I'm not sure how to use it in ZGE; could you please give me some hints or an example? Thanks in advance.
Rado1 wrote: MRT seems to be a good idea, applicable also to older versions of GLSL, but I'm not sure how to use it in ZGE; could you please give me some hints or an example?

There's not much additional work required compared to what you need for a simple / single floating-point FBO. Simply generate & attach more texture objects to your FBO ( instead of just one ) and use gl_FragData instead of gl_FragColor in your fragment shader.
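In plain OpenGL terms that looks roughly like this ( a sketch only, not ZGE-specific code; it assumes an extension loader such as GLEW and a current GL context, and the count / size / format are placeholders ):
Code:
#include <GL/glew.h>

/* Sketch: one FBO with three floating-point color attachments ( MRT ). */
GLuint fbo, tex[3];
GLenum buffers[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };

void createMRT(int width, int height)
{
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(3, tex);
    for (int i = 0; i < 3; i++)
    {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        /* Attach each texture to its own color attachment point. */
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, tex[i], 0);
    }

    /* Route the fragment shader outputs gl_FragData[0..2] to the three attachments. */
    glDrawBuffers(3, buffers);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
In the fragment shader you then write to gl_FragData[0], gl_FragData[1] and gl_FragData[2] instead of gl_FragColor - for example positions + size in one output and colors in another.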
Kjell wrote: That example doesn't use any light calculations in its shader, so there's no use for normals. But if there were, in this specific case you can derive the normal from the sine wave .. but otherwise / alternatively you can calculate ( or sample ) the height for the adjacent cells over the X and Y axes, and use the cross product between the two generated vectors from those coordinates ( which is the more "generic" approach ).

Yes, that's what I want to do - textured surfaces. Vertex displacement is so simple, but calculating the normals is a bit difficult for me - that sort of math is not something I do too often.
Kjell wrote: There's not much additional work required compared to what you need for a simple / single floating-point FBO. Simply generate & attach more texture objects to your FBO ( instead of just one ) and use gl_FragData instead of gl_FragColor in your fragment shader.
Rado1 wrote: can I use some of the ZGE components, for instance, Bitmap.Handle used as the id in glFramebufferTexture2D?

You can .. but in that case you might as well use a RenderTarget, since that also provides a 32-bit ( 8-bit per channel ) texture object. However, you probably need something other than 8 bits per channel.
Rado1 wrote: Or RenderTarget for defining the FBO and SetRenderTarget for setting the current binding of the FBO?

The handle of a RenderTarget isn't exposed. You could bind it ( using SetRenderTarget ) and get the handle through an OpenGL call .. but i wouldn't recommend taking this route.
Rado1 wrote: Can I use Renderbuffers instead of textures, is it feasible and if so are there some advantages?

You can .. the advantage is that they can be faster in some circumstances, but the biggest downside is that you can't use them as a sampler in a shader.
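For comparison, this is roughly what a renderbuffer attachment looks like ( again a sketch assuming GLEW and a current context ) - you can render into it and read it back with glReadPixels / glBlitFramebuffer, but there's no texture object to bind as a sampler afterwards:
Code:
#include <GL/glew.h>

/* Sketch: attach a renderbuffer ( instead of a texture ) to an existing FBO. */
void attachRenderbuffer(GLuint fbo, int width, int height)
{
    GLuint rbo;
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}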
Rado1 wrote: so what is the recommended sequence of commands/components?

Depends on what you're trying to do obviously .. but the sequence you describe is correct ( do keep in mind that you can't use the same texture as input & output at the same time ).
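The usual way around that restriction is to ping-pong between two targets, roughly like this ( sketch; fbo[], tex[] and runComputationShader() are placeholders for whatever does the actual pass ):
Code:
#include <GL/glew.h>

/* Sketch: two FBO / texture pairs that swap roles every pass, so the shader
   always reads the previous result and writes into the other target. */
extern GLuint fbo[2], tex[2];
extern void runComputationShader(void); /* e.g. draws a full-screen quad */

void computePass(void)
{
    static int ping = 0;
    int pong = 1 - ping;

    glBindFramebuffer(GL_FRAMEBUFFER, fbo[pong]);  /* write target      */
    glBindTexture(GL_TEXTURE_2D, tex[ping]);       /* previous result   */
    runComputationShader();
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    ping = pong;                                   /* swap for next pass */
}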