Optimizing instantiated geometry for VR
A look at dynamic vs. static batching and combining meshes for improving performance in VR.
I wanted to play with the idea of visualizing proteins in VR. It's pretty cool to walk around a microscopic object, toss it in the air, and even stick your head inside! It also gives you a better understanding of the 3D structure of a protein.
I was able to whip up a system to color, scale, and change resolution quickly enough. Unfortunately, VR requires frame rates of 90fps or higher so you don't get sick and disoriented. Creating a sphere, even one with just 20 tris, for every atom in a protein caused my Vive to drop to the compositor. I looked into optimizing geometry and learned about dynamic/static batching and mesh combining. Following are my findings, with links to the Unity documentation:
Dynamic / Static Batching
Moderate FPS improvement
Super easy to implement. Dynamic batching occurs automatically, but is not as efficient as static.
Static batching requires objects to be marked Static, meaning they can't be moved, have their materials updated, and so on.
Once an object has been set to static, you can mark it non-static again, but I still had issues changing materials and so on; I had to destroy the objects and re-instantiate them.
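For objects instantiated at runtime, like my atom spheres, Unity's StaticBatchingUtility.Combine is the way to batch them after spawning. Here's a minimal sketch; the component and field names are just illustrative:

```csharp
using UnityEngine;

// Minimal sketch: statically batch atom spheres that were instantiated at runtime.
// Assumes the spheres are already parented under a single root GameObject.
public class AtomStaticBatcher : MonoBehaviour
{
    public GameObject atomRoot; // parent of all instantiated atom spheres (illustrative)

    void Start()
    {
        // StaticBatchingUtility.Combine prepares all child renderers under the root
        // for static batching. After this call the children can no longer be moved
        // independently; to change them later you have to destroy and re-instantiate,
        // as noted above.
        StaticBatchingUtility.Combine(atomRoot);
    }
}
```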
Mesh Combine
Massive FPS improvements
More difficult and computationally intensive
Because all the meshes are combined into a single mesh, they share the same material. You can combine in stages, though, so the final mesh ends up with multiple submeshes and materials (see the sketch after this list).
More materials = more meshes = smaller gains.
The whole combined mesh is always drawn, regardless of distance. If you combine objects that are far apart from each other, you can actually lose efficiency, since individually they might have been frustum-culled.
Meshes have a max of ~65k vertices (the 16-bit index buffer limit).
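If you do want multiple materials on the final object, here's a minimal sketch of the staged combine mentioned above, assuming you already have one merged mesh per material from a first pass. The helper name is illustrative, not code from the post I worked from:

```csharp
using UnityEngine;

// Minimal sketch of a staged combine: each per-material mesh becomes its own
// submesh of the final mesh, so the renderer can keep one material per submesh.
public static class StagedCombine
{
    // Assumes 'target' already has a MeshFilter and MeshRenderer attached,
    // and that the input meshes are already in the target's local space.
    public static void BuildFinalMesh(GameObject target, Mesh[] perMaterialMeshes, Material[] materials)
    {
        var instances = new CombineInstance[perMaterialMeshes.Length];
        for (int i = 0; i < perMaterialMeshes.Length; i++)
            instances[i] = new CombineInstance { mesh = perMaterialMeshes[i], transform = Matrix4x4.identity };

        var combined = new Mesh();
        // mergeSubMeshes: false keeps one submesh per input mesh,
        // so a separate material can be assigned to each submesh.
        combined.CombineMeshes(instances, mergeSubMeshes: false, useMatrices: true);

        target.GetComponent<MeshFilter>().sharedMesh = combined;
        target.GetComponent<MeshRenderer>().sharedMaterials = materials;
    }
}
```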
For my purposes, mesh combine was the way to go. I based my code on a post found here, and created a more generic method that takes all the currently used materials, builds a dictionary with material names as keys and lists of combine instances as values, then processes each list into its own mesh. Since meshes max out around 65k vertices and I had WAY more geometry than that, I had to batch them into smaller meshes. The end result went from 12-24fps to over 150fps, with over a million polygons. Colliders were a no-go at this point, so raycasting to individual atoms is out of the question. Luckily I have an array of coordinates, so I can still draw callouts to different points of interest.
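Here's a simplified sketch of that approach, not my exact code; the class name, the 65,000-vertex budget, and the chunk naming are illustrative:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Simplified sketch of the group-by-material + chunking approach described above.
public static class AtomMeshCombiner
{
    const int MaxVertsPerMesh = 65000; // stay under the 16-bit index buffer limit

    public static void Combine(MeshFilter[] sources, Transform parent)
    {
        // Group the source meshes by material name, keeping the material for later.
        var parts = new Dictionary<string, List<CombineInstance>>();
        var materials = new Dictionary<string, Material>();

        foreach (var mf in sources)
        {
            var mat = mf.GetComponent<MeshRenderer>().sharedMaterial;
            if (!parts.ContainsKey(mat.name))
            {
                parts[mat.name] = new List<CombineInstance>();
                materials[mat.name] = mat;
            }
            parts[mat.name].Add(new CombineInstance
            {
                mesh = mf.sharedMesh,
                transform = mf.transform.localToWorldMatrix // bake world position into the vertices
            });
        }

        // For each material, combine in chunks that stay under the vertex limit.
        foreach (var pair in parts)
        {
            var batch = new List<CombineInstance>();
            int vertCount = 0;

            foreach (var ci in pair.Value)
            {
                if (vertCount + ci.mesh.vertexCount > MaxVertsPerMesh && batch.Count > 0)
                {
                    BuildChunk(batch, materials[pair.Key], parent);
                    batch.Clear();
                    vertCount = 0;
                }
                batch.Add(ci);
                vertCount += ci.mesh.vertexCount;
            }
            if (batch.Count > 0)
                BuildChunk(batch, materials[pair.Key], parent);
        }
        // The original source objects can then be disabled or destroyed.
    }

    static void BuildChunk(List<CombineInstance> batch, Material mat, Transform parent)
    {
        var mesh = new Mesh();
        mesh.CombineMeshes(batch.ToArray(), mergeSubMeshes: true, useMatrices: true);

        var go = new GameObject("CombinedChunk");
        // Keep the chunk at the world origin, since the vertices are already in world space.
        go.transform.SetParent(parent, true);
        go.AddComponent<MeshFilter>().sharedMesh = mesh;
        go.AddComponent<MeshRenderer>().sharedMaterial = mat;
    }
}
```

Chunking per material keeps the draw call count low while still respecting the vertex cap, which is what made the big frame rate jump possible.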
This is a pretty specific use case, but I think the idea is easily transferable to situations where the view is limited (rooms, areas obscured by hills, etc.) so that you don't run into the issue of drawing objects far in the distance. I think it can be really useful in situations where you have geometry that shares a lot of simple materials. With the trend towards low-poly art in indie games, this can allow a much greater amount of geometry on screen, leading to more interesting and higher-definition environments.