Kjell wrote:
Since ZGE doesn't support floating-point buffers ( it actually does internally for the GPU Array, but it's not exposed anywhere in the Editor ) you need to pull some ( fixed-point ) tricks to get the data around.

I can't test right now because I'm reading the forum on a small netbook with integrated graphics, but can't the buffer be passed to the shader just like how AudioArray is defined (a ShaderVariable with array binding)?
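For reference, the usual "fixed-point trick" is to spread a value in the 0..1 range across the 8-bit channels of an ordinary RGBA texture and reassemble it when reading it back. A rough GLSL sketch of that idea ( the helper names are made up for illustration, this is not code from ZGE ):

// Pack a 0..1 value into four 8-bit channels.
vec4 packFloat(float value)
{
  vec4 enc = fract(vec4(1.0, 255.0, 65025.0, 16581375.0) * value);
  enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
  return enc;
}

// Reassemble the value from the four channels.
float unpackFloat(vec4 enc)
{
  return dot(enc, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}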
JPH, Ville - help with user bitmaps Import help for a script
alphanimal wrote:
OK thanks that's useful! How can I store float (or fixed if necessary) variables for each vertex?

I would help you with that - but I'm too much of a noob myself. Kjell can probably help, but you will have to do some research - a lot of data is processed in parallel in GLSL, which is why it is so incredibly fast - and, as Ville put it, with GLSL you have to "think like the GPU".
So it's a paradigm that is different from what you may be used to.
The Orange Book is a great start - the 2006 2nd edition is free online -
http://wiki.labomedia.org/images/1/10/O ... dition.pdf
Hey guys,
VilleK wrote:
can't the buffer be passed to the shader just like how AudioArray is defined (a ShaderVariable with array binding)?

For any constant values that's probably the way to go yes ( due to the lack of vertex attributes ) .. but the results of his physics calculations should never leave the GPU, so using a ( CPU-bound ) array for those is obviously not an option ( if you're going for high-performance at least ).
Edit - Actually, just went through the source ( Renderer.pas line 2103 ) and it seems that the data is uploaded each frame, in which case it probably pays off to bake a Bitmap instead ( and take the fixed-point overhead if needed ).
alphanimal wrote:
How can I store float (or fixed if necessary) variables for each vertex?

Haven't looked at your effect closely, but you're probably going to end up with at least two FBO's ( RenderTarget ) between which you "ping-pong" double buffer style. One containing the results for the current frame, the other for the previous frame.
StevenM wrote:
So it's a paradigm that is different from what you may be used to.

You're making it sound more mythical / difficult than it really is
K
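To make the double-buffer idea a bit more concrete, here is a rough GLSL sketch of what the state-update pass could look like ( an illustration only, not code from this thread ). It assumes the RenderTarget holding last frame's results is bound as the first texture of the pass's material ( so it shows up as tex1 ), that the vertex stage passes gl_MultiTexCoord0 through as gl_TexCoord[0], and that a full-screen quad is drawn into the other RenderTarget via SetRenderTarget:

// Fragment program of the state-update pass; all values are kept in the 0..1 range.
uniform sampler2D tex1;  // previous frame's RenderTarget

void main()
{
  // Read this cell's state as it was written last frame ...
  vec4 previous = texture2D(tex1, gl_TexCoord[0].xy);

  // ... advance it ( placeholder "physics" step ) ...
  vec4 current = previous;
  current.r = fract(previous.r + 0.01);

  // ... and write it out - the rendered pixel is the storage.
  gl_FragColor = current;
}

Each frame you point SetRenderTarget at the buffer that is not being read, and swap the two roles for the next frame.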
Kjell wrote:
Haven't looked at your effect closely, but you're probably going to end up with at least two FBO's ( RenderTarget ) between which you "ping-pong" double buffer style. One containing the results for the current frame, the other for the previous frame.

OK thanks! Can you give me some example code?
I'm not sure if I need a double buffer. Why would I? As long as I can store some values somewhere I'm fine.
Particles don't need to interact - no calculation includes any parameter from a different vertex (in case that makes room for any parallel-processing tweaks or simplifies things).
cheers
Hi alphanimal,
alphanimal wrote:
I'm not sure if I need a double buffer. Why would I?

OpenGL doesn't like it when you're reading from and writing to the same FBO. Check out the following video ( the right side of the screen ) to see what happens if you do.
alphanimal wrote:
As long as I can store some values somewhere I'm fine.

The only way to write / store values from a shader is by rendering to a buffer.
K
Indeed, I'm lost.
In the example above...
How does the variable tex1 get to the shader? It's declared but never assigned anything (unlike the coord var).
I can imagine how it works though. texture() reads color information from the texture tex1, but where is it defined that my Bitmap that's assigned to the Material is called "tex1"?
Also, if I change the vertex coordinates within the shader code, it does not affect the actual vertices but only how they are rendered, right?
How can I define a buffer and how can I "render" my physics properties into that?
alphanimal wrote:
How does the variable tex1 get to the shader? It's declared but never assigned anything (unlike the coord var). I can imagine how it works though. texture() reads color information from the texture tex1, but where is it defined that my Bitmap that's assigned to the Material is called "tex1"?

This is a bit counter-intuitive yes. While you do explicitly need to define a variable name for ShaderVariable, ZGE assigns these names automatically when it comes to Textures. The first texture is called tex1, second tex2 etc.
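In other words, the fragment shader only has to declare uniforms with those automatic names and ZGE binds the Material's textures to them. A minimal sketch, assuming the Material has two textures and the vertex stage passes gl_MultiTexCoord0 through as gl_TexCoord[0]:

uniform sampler2D tex1;  // first texture of the Material ( name assigned automatically by ZGE )
uniform sampler2D tex2;  // second texture, if there is one

void main()
{
  vec4 a = texture2D(tex1, gl_TexCoord[0].xy);  // read from the first texture
  vec4 b = texture2D(tex2, gl_TexCoord[0].xy);  // read from the second texture
  gl_FragColor = a * b;                         // arbitrary combination, just for illustration
}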
alphanimal wrote:
Also, if I change the vertex coordinates within the shader code, it does not affect the actual vertices but only how they are rendered, right?

Correct, it only changes the way they are rendered, not the actual mesh data.
alphanimal wrote:
How can I define a buffer and how can I "render" my physics properties into that?

If you want to store results calculated by a shader, you can use RenderTarget in combination with the SetRenderTarget component.
K
Anyway, attached is a simple example of a double-buffer FBO setup. The actual "simulation" is super boring and frame-dependent .. but that's beside the point. What matters is that it's running entirely on the GPU.
K
Attachments
Buffer.zgeproj (2.83 KiB)
Impressive, though I have no idea what's going on here.
Can you explain what steps are carried out in the OnRender node?
Why did you switch to a Mesh instead of the DisplayList (whatever that is)?
What's this about:
glMatrixMode(0x1700);
glPushMatrix();
glLoadIdentity();
What's going on in the BufferShader and in the EffectShader?
If I understand correctly, BufferShader is used to process "invisible" parameters.
EffectShader uses BufferShader's output to manipulate vertex positions.
Hi alphanimal,
alphanimal wrote:
Why did you switch to a Mesh instead of the DisplayList?

Just being lazy, taking the Display List route requires a bit more typing
alphanimal wrote:
What's this about:
glMatrixMode(0x1700);
glPushMatrix();
glLoadIdentity();

Sorry, this is poor thinking on my part. Should have simply ignored the OpenGL matrices in the vertex program of the Buffer shader instead. Updated the file again
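( For reference, 0x1700 is the value of the GL_MODELVIEW enum, so that snippet simply resets the modelview matrix before the buffer pass. ) Ignoring the OpenGL matrices in the vertex program instead means outputting the quad's position directly, without multiplying by gl_ModelViewProjectionMatrix. A rough sketch, assuming the buffer quad's vertices are already in the -1..1 clip-space range:

// Vertex program for the buffer pass that ignores the OpenGL matrices entirely.
void main()
{
  gl_TexCoord[0] = gl_MultiTexCoord0;  // pass the buffer coordinate through
  gl_Position = gl_Vertex;             // no modelview / projection matrix involved
}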
alphanimal wrote:
What's going on in the BufferShader and in the EffectShader?

The Buffer shader performs the calculations / simulation, while the Effect shader is simply used to show the results on the screen .. so you can ignore that.
K
It's important to understand that in order to let the GPU do the physics you need to get the output of the calculations somehow. But what are shaders designed to output? Pixels! So this is what Kjell takes advantage of here: the physics is calculated in the shader when it renders a polygon to an off-screen buffer. It stores the value in the r-channel of the output pixel.
This off-screen buffer is then used as a texture when rendering the mesh. So the vertex-shader on EffectShader reads a pixel from the texture and uses the r-value to modify the z-coordinate of the vertex. So clever tricks like this are required to make GPUs do calculations.
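Put as a rough GLSL sketch ( an illustration of the idea, not the actual code in Buffer.zgeproj ), the EffectShader side could look something like this vertex program, assuming the off-screen buffer is bound as the mesh material's first texture ( tex1 ):

// Vertex program that displaces each vertex using the simulation buffer.
uniform sampler2D tex1;  // the off-screen buffer written by the Buffer shader

void main()
{
  // Look up this vertex's pixel in the buffer via its texture coordinate
  // ( texture2DLod is the vertex-shader-friendly lookup in this GLSL generation ).
  vec4 state = texture2DLod(tex1, gl_MultiTexCoord0.xy, 0.0);

  // Use the value stored in the r-channel to push the vertex along z.
  vec4 displaced = gl_Vertex;
  displaced.z += state.r;

  gl_Position = gl_ModelViewProjectionMatrix * displaced;
}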