I tried to use the post-processing video tutorial as a basis for adding some effects to my game, but I can't get it to work as intended. The RenderSprite I added at the bottom of the OnRender stack only displays as a small box in the middle of the game view, instead of covering the entire window. In the video, something is done to prevent this and let it render to the entire screen, but I can't seem to figure out how to apply that to my game.
I suppose I could create a model and have the texture rendered to it instead, but using a sprite that covers the entire screen seems more convenient.
As always, thanks in advance for any help! I realize I don't understand all the processes involved in making something like this, but it's so much fun to play around with that I can't help it!
My current code is included.
Post processing tutorial
- Attachments
- SmallBallHD.zgeproj (36.55 KiB)
Hi Imerion,
Attached are two ways to display a full-screen quad. One uses the fixed-function pipeline (by overwriting the projection and modelview matrices), and the other uses a shader.
Obviously this is only the first step of setting up post-processing, but you've got to start somewhere.
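(Editor's sketch, not taken from the attachments: the shader approach typically boils down to a GLSL 1.x vertex shader that ignores the matrices entirely and emits the quad directly in clip space. It assumes the mesh is a flat quad with x/y vertex positions in the -1..1 range, and the sampler name `tex` is a placeholder.)

```glsl
// Vertex shader: bypass the projection/modelview matrices and place the
// quad directly in clip space, so it always covers the whole viewport.
void main()
{
  gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0);
  // Map the -1..1 positions to 0..1 texture coordinates for the fragment shader.
  gl_TexCoord[0] = vec4(gl_Vertex.xy * 0.5 + 0.5, 0.0, 1.0);
}
```

```glsl
// Fragment shader: just sample the render-target texture.
uniform sampler2D tex;
void main()
{
  gl_FragColor = texture2D(tex, gl_TexCoord[0].xy);
}
```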
K
- Attachments
- Shader.zgeproj (756 Bytes)
- Fixed.zgeproj (825 Bytes)
Thanks! After modifying my code according to your examples, I got it to display on the entire screen correctly. I also managed to add things which I could then render on the canvasmesh. But everything rendered in my game is split up into various states and models. Is there any way to "copy" everything displayed on the screen and render it to my canvasmesh?
Hi Imerion,
Imerion wrote:Is there any way to "copy" everything displayed on the screen and render it to my canvasmesh?
You could literally copy the framebuffer content to a texture, but that's not the way you should be doing that kind of thing.
What you want to do instead is to set the render target to your own RenderTarget before anything gets rendered. A good moment for this is the OnBeginRenderPass event. Then, after everything has rendered, you want to set it back to the default framebuffer (by using the SetRenderTarget component set to "none") and render your full-screen quad with the RenderTarget applied to it. Since the App.OnRender event is executed before the OnRender event of an active AppState, you generally want to do this in AppState:OnRender. Of course, when you're not using the OnRender event of any of the AppStates you want to use post-processing with, you can simply put it in App:OnRender instead.
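(Editor's sketch: laid out as a component tree, the whole pass looks roughly like this. Names such as MyTarget, PostMaterial and FullScreenQuad are placeholders, not taken from the project files.)

```
AppState
  OnBeginRenderPass
    SetRenderTarget  RenderTarget = MyTarget    // everything now draws into the texture
  OnRender
    ...normal rendering components...
    SetRenderTarget  RenderTarget = (none)      // back to the default framebuffer
    UseMaterial      Material = PostMaterial    // material that uses MyTarget as its texture
    RenderMesh       Mesh = FullScreenQuad      // the full-screen quad from the earlier examples
```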
By the way, in case you don't want to put all the components you use to render your "canvas" in each AppState:OnRender that uses post-processing, you can put them in a Group in App:Content and use a CallComponent component instead. Attached is an example illustrating this approach.
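(Editor's sketch of the Group/CallComponent idea, not a reproduction of the attached CallGroup.zgeproj; the name RenderCanvas is illustrative.)

```
App:Content
  Group  Name = RenderCanvas
    ...all the components that render the "canvas"...

AppState:OnRender
  CallComponent  Component = RenderCanvas   // reuse the same render components in every state
```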
K
- Attachments
- CallGroup.zgeproj (1.18 KiB)
Last edited by Kjell on Thu Mar 20, 2014 11:45 am, edited 1 time in total.
Hej Ville,
K
VilleK wrote:There is a CallComponent component. At least it is visible in the Add Component dialog in the latest ZGE version. So you don't have to use a ZExpression.
Wow, no idea how I could have missed that. Was already wondering where it went. Anyway, I updated the example attached to my previous post.
VilleK wrote:And why would it be better if AppState OnRender would be executed before App.OnRender?
Well, I just think the current order is a bit strange (as previously mentioned). There are a number of possible variations you could defend, but I do feel that App:OnRender should be either first or last.
K