Old 24th September 2008, 10:16 PM   #5
lisa
Senior Member
Professional user
 
 
Join Date: Mar 2005
Location: Phoenix, AZ
Posts: 917
Default Determining Polygon Count

Quote:
Originally Posted by headerko
Got a question: how do you determine the level of detail of your model?
Is there a way to set some vertex/surface limit per object or for the whole scene?
I don't want to end up modeling and then having to redo everything just because the game runs at 5fps :P
Really, the whole question of how many polys comes down to just one thing: how many polygons can you see *on-screen* per frame.

There is a physical limit to how many polygons a video card can draw. Most manufacturers publish this spec, although the published number is how many triangles the card could draw if that were the only thing your computer was doing, i.e. no AI, no sound, no physics, perfectly sized buffers, etc. In the real world you'll probably get maybe 20-50% of the published numbers, maybe less, maybe more depending on the engine. Some types of geometry are easier to throw at the video card than others--basically anything that can be made into one long strip, like terrain or particles--so you might squeeze out more if your game is heavy on that type of geometry, but it's better to be conservative when estimating. You can always add more objects/higher detail later if it turns out you have room. Side note: most manufacturers list polygons *per second*, not polygons *per frame*. Divide the number of polygons per second by your target framerate to get the number of polygons per frame.
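To make that concrete, here's a quick sketch of turning a published triangles-per-second spec into a realistic per-frame budget. The 30% efficiency factor and the 60fps target are my assumptions for illustration, not numbers from any particular card:

```python
def per_frame_budget(tris_per_second, target_fps, efficiency=0.3):
    """Estimate usable triangles per frame from a published spec.

    efficiency reflects the real-world 20-50% of the marketing
    number; 0.3 here is just a middle-of-the-road assumption.
    """
    return int(tris_per_second * efficiency / target_fps)

# A card claiming 100 million tris/sec, targeting 60 fps:
print(per_frame_budget(100_000_000, 60))  # 500000
```

Run the numbers with your own engine's measured throughput rather than the spec sheet whenever you can.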

So, as you can see, the total number of polygons you can draw never changes. It's determined by your engine and your hardware.

Therefore your polygon budget per object is determined by the max polygons you can draw per frame, divided by the number of things you want to draw. For example, if you know your engine can render about 100,000 triangles per frame and you want 50 objects, that means you can have about 2,000 triangles per object (100,000 / 50 = 2,000). If you want 200 objects, that number drops to 500 polys per object. If you only have 10 objects, you can spend 10,000 per object.
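The arithmetic above, as a one-liner (the numbers are just the examples from the paragraph):

```python
def per_object_budget(tris_per_frame, object_count):
    """Split the per-frame triangle budget evenly across objects."""
    return tris_per_frame // object_count

print(per_object_budget(100_000, 50))   # 2000
print(per_object_budget(100_000, 200))  # 500
print(per_object_budget(100_000, 10))   # 10000
```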

Of course, some objects don't need as many polygons as others so you can "rob Peter to pay Paul" to balance your scene. For example, if your budget is 2000 per object you probably don't need 2000 polys to make, say, a chair. But the stone lion next to the chair might benefit from a few more, so you might reduce the budget for the chair to 1000 polys so you can increase the budget for the lion to 3000. You want to be careful doing this so you don't accidentally cluster all your polygons in one spot, but this is really the key to budgeting your objects. You can also split the budget to have more of a certain object. Instead of one 2000 poly chair, you can make four 500 poly chairs.
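The "rob Peter to pay Paul" idea boils down to one rule: individual budgets can vary however you like as long as the *total* stays within the frame budget. A tiny sanity check using the chair/lion numbers from above:

```python
# Per-object budgets after rebalancing (numbers from the example above):
budgets = {"chair": 1000, "stone_lion": 3000}

# Two objects at the original 2000-per-object budget:
frame_budget_for_two = 2 * 2000

# The only thing that matters is that the total still fits.
total = sum(budgets.values())
assert total <= frame_budget_for_two
print(total)  # 4000, still within budget
```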

Importantly, the number of objects you can *see* is not the same as the number of objects in the *scene*. For most games, you can rarely if ever see the entire scene at one time. The number of objects visible on-screen depends on your draw distance and any culling algorithms your engine uses to reduce the number of objects that are drawn. For example, some engines can remove objects that are hidden behind other objects, aka occlusion culling; other engines use "portals" to determine whether one room can be seen from another. (Note, these culling algorithms aren't "free", so they don't eliminate the cost of the objects you can't see entirely, but they help tremendously.) To reiterate, your polygon budget per object is based on what you can see, not what's in the scene... it's only the number of polygons in the viewport that matters for determining how many polys your objects should have. The polygon budget for the scene as a whole is normally based on available memory. Some games may stream or "swap" chunks in and out so they can have scenes bigger than the amount of memory, but a surprising number load the whole scene at once. Of course, you can go to the other extreme as well: for example, since Speeding Ticket is a coin-op and we wanted *zero* load times, it loads *all* the scenes at boot and keeps them in memory all the time.
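Here's the simplest possible sketch of the idea: with plain distance culling, only objects inside the draw distance count against your on-screen polygon budget. Real engines layer frustum, occlusion, or portal culling on top of this; the scene and numbers below are invented for illustration:

```python
import math

def visible_objects(objects, camera, draw_distance):
    """objects: list of (name, (x, y, z), triangle_count) tuples.
    Returns only the objects within draw_distance of the camera."""
    return [o for o in objects
            if math.dist(o[1], camera) <= draw_distance]

scene = [("chair", (5, 0, 0), 1000),
         ("lion",  (10, 0, 0), 3000),
         ("tower", (500, 0, 0), 8000)]  # far beyond the draw distance

on_screen = visible_objects(scene, (0, 0, 0), 100)
print(sum(o[2] for o in on_screen))  # 4000 -- only chair + lion are drawn
```

The tower's 8,000 triangles still cost scene *memory*, but they never touch the per-frame drawing budget.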

The other thing to take into account is LOD, or "level-of-detail". This lets you have many more objects on screen without blowing your polygon budget. Many games have anywhere from three to fifteen versions of each model, each at a different polygon count. As the model moves further from the player, the game engine swaps out the high-polygon version for a low-polygon version. If you do it right, the player can't tell you did it. If you do it wrong, the objects will "pop" and the player will see the change. Make sure the object is far enough away when you swap and that the silhouettes match, and you'll avoid popping. You can find a great tutorial on building LODs here: http://www.ai-aardvark.com/modeling/...1/aia_LOD.html A little note on the tutorial: the author seems to advocate building a very large number of LODs. I'd say three to five is more typical, but like everything else it's a trade-off: more LODs give you smoother transitions, but they eat a lot more RAM, so you can have fewer objects overall. If your draw distance is really long, you'll want more LODs. If your draw distance is short, you can get away with fewer. We use a different number of LODs for each model depending on where it will be seen and how it will be used.
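The swap itself is usually just a distance lookup. A minimal sketch, with switch distances and triangle counts invented for illustration (a real table would be tuned per model, as described above):

```python
# (max_distance, triangle_count) pairs, sorted nearest-first.
# Three LODs, per the three-to-five rule of thumb above.
LODS = [
    (20.0, 2000),   # full-detail model up close
    (60.0, 800),    # medium detail
    (150.0, 200),   # far away
]

def pick_lod(distance, lods=LODS):
    """Return the triangle count of the LOD to draw at this distance."""
    for max_dist, tris in lods:
        if distance <= max_dist:
            return tris
    return lods[-1][1]  # past the last threshold, keep the lowest LOD

print(pick_lod(10))   # 2000
print(pick_lod(100))  # 200
```

The savings compound: a crowd of 50 chairs at 100 units costs 10,000 triangles here instead of 100,000.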

Some games use a technique called CLOD, or "continuous" LOD, to avoid popping. CLOD is basically "on the fly" polygon reduction. It's an interesting technique, and it certainly has the potential to save a lot of work building LODs. The downside, of course, is more processor load and less control over the final product. Jonathan Blow (Braid) has an old article arguing *against* both CLOD and progressive meshes: http://number-none.com/product/Rende...ast/index.html Even this many years later, I'd still have to agree. Sometimes simpler is better.

Incidentally, since I didn't mention this already, for 98% of cards/engines polygons = triangles, and only triangles. Anything that's not a triangle gets turned into triangles when it's sent to the card, so combining surfaces in AC3D from triangles into other shapes won't reduce your poly count in any way. Even engines that purportedly render "curved surfaces" generally convert the output into triangles when they send it to the card--in fact, it's almost impossible to do otherwise, with the exception of some DX10 cards with "geometry shaders". (But even then most are just doing the exact same thing on the GPU instead of the CPU.) So when you're figuring out your polygon budget, always budget in *triangles*, even if you plan to build your model with quads. 1 quad = 2 tris, of course. You can triangulate your model in AC3D to get an accurate count by clicking Surface > Triangulate.
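If you'd rather count without triangulating, the rule generalizes: a convex n-sided polygon fans out into n - 2 triangles. A quick sketch of totaling a model's true triangle cost from its face sizes:

```python
def triangle_count(face_sizes):
    """face_sizes: vertex count of each face (3 = tri, 4 = quad, ...).
    Each n-gon becomes n - 2 triangles on the card."""
    return sum(n - 2 for n in face_sizes)

# A "500 surface" model built entirely of quads is really 1000 triangles:
print(triangle_count([4] * 500))  # 1000
```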

One final note: I said earlier that the number of polygons your engine can draw never changes. That's not entirely true. Some kinds of polygons draw faster than others. In general, unlit polygons draw much faster than lit polygons; opaque polygons draw faster than translucent polygons; and untextured polygons draw faster than textured polygons. If you're using shaders, some shaders are faster than others. Finally, the size of your textures can make a dramatic difference... in many cases textures can affect rendering speed more than geometry! A common mistake is to make something like a ladder out of a transparent texture on a quad to "save polys". Unless it's a ladder to the moon, a ladder doesn't have that many polygons, and the alpha-mapped texture will cost you a lot more than a few triangles. For many engines, large alpha-mapped, heavily-textured objects tend to be the most expensive thing you can draw. Try to budget your texture memory as well as your polygons when planning your scene; the fewer things you have to swap in and out of texture memory, the faster your game will run.
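Budgeting texture memory is the same kind of arithmetic as budgeting polygons. A rough sketch, assuming uncompressed 32-bit RGBA and the standard ~4/3 overhead for a full mipmap chain (compressed formats like DXT divide these numbers considerably):

```python
def texture_bytes(width, height, bytes_per_texel=4, mipmapped=True):
    """Rough memory cost of one texture.

    A full mip chain adds about one third on top of the base level.
    """
    base = width * height * bytes_per_texel
    return int(base * 4 / 3) if mipmapped else base

# One 1024x1024 RGBA texture with mips is ~5.3 MB:
print(texture_bytes(1024, 1024))  # 5592405
```

A handful of textures that size can dwarf the memory cost of the geometry they're painted on, which is exactly why the alpha-mapped "ladder on a quad" trick backfires.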