Continuing from yesterday’s introduction, we’ll start by making a garbage model to show what issues we’re dealing with and what the goal is. My time tracker reports 17 minutes for the entire process, including the creation of these images, so this is a very quick-n-dirty workflow.
I began by box-modeling a rough form to encompass the skull, then smoothed and subdivided it until it fit somewhat tightly around the mesh.
Then I smooth this geometry again, but this time with a background constraint. This takes each point in my volume and fires a ray along its normal, noting the distance at which it encounters a surface on the background mesh. The calculation is given a default cutoff distance in case the ray doesn’t find anything, in which case various fallback options are available, e.g. averaging the distances of neighboring vertices. After this process, each point is relocated to the corresponding position on the STL mesh. Voila: shrink-wrapped CG surfaces.
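The shrink-wrap step can be sketched in a few lines of Python. This is not the actual tool’s code, just a minimal illustration of the idea: each point fires a ray along its normal, missed rays fall back to an averaged hit distance (a global average here, for brevity, instead of a per-neighborhood one), and each point is then moved by that distance. The `sphere_hit` target is a stand-in for the scanned STL mesh.

```python
import numpy as np

def shrink_wrap(points, normals, target_hit, max_dist=10.0):
    """Project each point along its (unit) normal onto a target surface.
    target_hit(origin, direction) returns a hit distance or None."""
    dists = np.full(len(points), np.nan)
    misses = []
    for i, (p, n) in enumerate(zip(points, normals)):
        d = target_hit(p, n)
        if d is not None and d <= max_dist:
            dists[i] = d
        else:
            misses.append(i)
    # Fallback for rays that found nothing within the cutoff:
    # reuse the average hit distance of the rays that did hit.
    fallback = np.nanmean(dists)
    for i in misses:
        dists[i] = fallback
    return points + normals * dists[:, None]

def sphere_hit(origin, direction, radius=2.0):
    """Toy target surface: smallest positive ray-hit distance on a
    sphere of the given radius centered at the origin
    (direction is assumed to be unit length)."""
    b = 2.0 * (origin @ direction)
    c = origin @ origin - radius**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    for t in ((-b - np.sqrt(disc)) / 2.0, (-b + np.sqrt(disc)) / 2.0):
        if t > 1e-9:
            return t
    return None
```

Running it on a few cage points sitting inside the toy sphere snaps them all onto its surface, which is exactly the “shrink-wrapped” result described above.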
This may or may not be sufficient for volume analysis – there’s a lot of empty space here. If it is, the model weighs in at 199 vertices versus 499,362.
Each point in the mesh now has a roughly desired position in space, defined by three axes (x, y, z). The next important ingredient is to give each of these points a relative coordinate on a constructed 2D surface – a UV map. Why? So that…
…we can repeat the ray-firing process, this time not for each point on the mesh but for each pixel laid out along the UV map. This generates a grayscale map recording each pixel’s distance from the rough volume to the scanned surface. Here again, a cutoff is involved. This range determines not only the distance at which a value simply defaults, but also the spread of information between the darkest and lightest values. Ideally, you want your rough volume to approximate the scanned item fairly consistently. Notice that we have not done anything artistic – no sculpting, no deviations from the source scan.
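To sketch how the cutoff shapes the gray values: each distance is clamped to the cutoff (misses default to it too), and what remains is stretched across the available gray range. The function name and the 8-bit depth are my assumptions for illustration, not the tool’s actual implementation.

```python
import numpy as np

def bake_gray(distances, cutoff):
    """Map per-pixel ray-hit distances to 8-bit gray values.
    NaN entries (rays that found nothing) default to the cutoff,
    and the [0, cutoff] range is spread across 0..255 – so a tighter
    cutoff means finer gradation between darkest and lightest."""
    d = np.nan_to_num(distances, nan=cutoff)
    d = np.clip(d, 0.0, cutoff)
    return np.round(d / cutoff * 255.0).astype(np.uint8)
```

This also shows why a consistently close rough volume matters: if most distances cluster near zero while a few outliers sit at the cutoff, almost all of the 256 gray levels are wasted on the outliers.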
What’s the point of all this? As you can see, our result is visually meager, yet it renders in 3.9 seconds as opposed to 5. And yes, we CG artists consider that a big deal, because render times climb steeply with resolution and often with additional shading complexity, which matters a lot when rendering 24 to 30 frames for each and every second of footage. What’s more important here is that the number of polygons is now dynamic. At this resolution, we calculated 783,380 polys compared to 997,593 in the STL mesh. Worth it? Not likely. Yet as a dynamic asset it quickly becomes valuable – for example, if the item is rendered far from the camera it will generate as few as 296 polygons. That’s a major savings. Conversely, there are also methods to drive the amount of generated geometry at macro ranges, so that only the information currently needed is loaded, not the mesh as a whole.
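A toy illustration of distance-driven polygon counts (the function, its parameters, and the floor value are made up for this sketch, not taken from any actual renderer): the object’s on-screen area shrinks with the square of its distance, so each doubling of distance can divide the polygon budget by roughly four.

```python
import math

def lod_polycount(base_polys, distance, full_detail_dist=1.0, floor=8):
    """Scale a polygon budget with apparent on-screen area:
    each doubling of camera distance divides the budget by four,
    never dropping below a small floor (all numbers hypothetical)."""
    if distance <= full_detail_dist:
        return base_polys
    levels = math.log2(distance / full_detail_dist)
    return max(int(base_polys / 4 ** levels), floor)
```

The point is only that the tessellated polycount becomes a function of viewing conditions, rather than a fixed property of the file, which is what makes the dynamic asset cheap at a distance.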
Useful for scientific purposes? I have my reasons for thinking that, yes, for certain purposes it is – it enables sculpting processes and visualization, for example, combined with a quantifiable deviation from the scanned material. Finally, we have to spend a bit more time making our mesh fit more closely. That comes next.
I’m not doing a 100% step-by-step. Wouldn’t have time for that, unless you want a video walkthrough. Feedback is very welcome. Is this too much? Too little? Helpful?