I'm currently writing a game on Android using a design inspired by Replica Island. That is, a dual thread approach: one logic thread and one render thread.
I have this set up with a lightweight render module sitting in the logic thread that basically just handles game objects that want to be renderable. It sends render commands to the render thread, like "render sprite f at x,y".
I like this design, but for me the problem currently lies in resource creation. Say sprite f needs a texture: it will request that texture in the logic thread at load time. The problem is that because the OpenGL context lives in the render thread, the texture can only be created there. Currently the only sync point between the logic thread and the render thread is when they exchange render queues at the end of a frame, and I'd like to keep it that way.
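For concreteness, the single sync point described above could look something like the following double-buffered command queue. This is a minimal sketch with hypothetical names (`RenderQueuePair`, `submit`, `swap`) and string commands standing in for real render commands; it is not Replica Island's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the logic thread fills one queue while the render thread
// drains the other; the two lists are swapped at the frame boundary,
// which is the only point where the threads synchronize.
public class RenderQueuePair {
    private List<String> writeQueue = new ArrayList<>(); // filled by logic thread
    private List<String> readQueue  = new ArrayList<>(); // drained by render thread

    // Called by the logic thread, e.g. "render sprite f at 10,20".
    public synchronized void submit(String command) {
        writeQueue.add(command);
    }

    // Called once per frame at the sync point: hands the render thread
    // everything recorded last frame and gives the logic thread an
    // empty queue to fill.
    public synchronized List<String> swap() {
        List<String> tmp = readQueue;
        readQueue = writeQueue;
        writeQueue = tmp;
        writeQueue.clear();
        return readQueue;
    }
}
```

In a real engine the commands would be pooled objects rather than strings, to avoid per-frame allocation, but the swap-at-frame-boundary structure is the same.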
How can I solve the problem of wanting to create textures, VBOs, etc. on the logic thread while still only rendering on the render thread? Would sharing OpenGL contexts be a good approach? Or perhaps some other engine design?
EDIT: One alternative would be to have another queue for requests from the logic thread, like "create texture". The render thread would process this queue each frame before starting on the render queue. I'm starting to like this solution.
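The idea in the EDIT could be sketched as a concurrent queue of resource requests that only the render thread ever executes. The names here (`ResourceRequestQueue`, `request`, `drain`) are hypothetical, and the requests are plain `Runnable`s standing in for real GL calls such as `glGenTextures`/`glTexImage2D`:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch: the logic thread enqueues resource-creation requests; the
// render thread (which owns the GL context) drains them at the start
// of its frame, before executing the render queue.
public class ResourceRequestQueue {
    private final ConcurrentLinkedQueue<Runnable> requests =
            new ConcurrentLinkedQueue<>();

    // Logic thread: ask for a resource to be created on the GL thread.
    public void request(Runnable createResource) {
        requests.add(createResource);
    }

    // Render thread: process all pending requests before rendering.
    // Returns how many requests were handled this frame.
    public int drain() {
        int processed = 0;
        Runnable r;
        while ((r = requests.poll()) != null) {
            r.run(); // real GL calls would happen here, on the GL thread
            processed++;
        }
        return processed;
    }
}
```

The logic thread would typically hand over a callback or future along with each request so it can learn the resulting texture handle once the render thread has created it.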
OpenGL requires glBindTexture to be called from the render thread. I strongly suggest you do your binding from the GLSurfaceView.Renderer onSurfaceCreated method (which is called on the render thread). Should the GL context get recreated, this method will be called again, letting you rebind everything.
If you choose to drive it from another thread, pass the render thread a Runnable by calling queueEvent on your GLSurfaceView object. Watch out for the race where onSurfaceCreated hasn't been called yet.
Good luck with your game!