The advent of materials processing (normal and specular maps) in Second Life brought about a number of changes to the way things are rendered, compared to how they used to be – at least for those of us whose graphics cards allow us to enable the Advanced Lighting Model (formerly known as “deferred rendering”). For detailed coverage of this capability, please go over to Inara Pey’s excellent blog. When this new capability was added, many people started jumping up and down about how “irrelevant” or “useless” it was, about how only… twenty users in total would be able to see materials, how it would kill everyone’s viewer performance, and so on.
I’m going to speak from my own experience. Up until this month, my main machine for using Second Life was a laptop: a 2009 midrange model with a dual-core Intel T4300 CPU, 4GB of RAM, and an ATI (now AMD) Mobility Radeon HD4500 graphics card. Those in the know understand that this was hardly “high end” even then, and it became antiquated relatively fast. I can’t vouch for how people with older, probably lower-spec, dedicated graphics cards, or with integrated Intel chipsets, would fare, but ever since the 2012 updates to the rendering pipeline, I was able to run in deferred (ALM) practically all the time – without shadows and ambient occlusion. Yes, I know my computer’s performance wasn’t much. It was usable, though, and the in-world pictures I once envied so much were now within my reach. So, I believe that ALM, which is a prerequisite for viewing materials, is within the reach of more people than was believed back then.
Nowadays, I’m the happy owner of a laptop with a fourth-generation dual-core i7, 8GB RAM and an NVIDIA GeForce GT 840M, as well as a desktop with an i7-4770K CPU, 16GB RAM and an ASUS ROG Poseidon GTX780 graphics card. As one would expect, my machines’ performance in SL is a few orders of magnitude above what I once was used to. Still, I have the feeling that, as beautiful as SL looks right now, it could be even more spectacular, had some rendering capabilities not been removed with the advent of materials processing, and had others been added.
What we lost
Before I proceed any further, I must say I’m very much aware that we all love our shinies, but we want them to cause no performance hit at all. This, of course, just can’t be done. With materials, I expected that, depending on how extensively they were used and how large the normal and specular maps were, I stood to take quite a performance hit. And indeed, that’s what happened with my older machine. So, when we see that the new shinies make our preferred viewer slower, we immediately start complaining – sometimes with reason, and sometimes without.
As I said in the introduction, materials processing changed quite a few things, and, unfortunately, we lost two capabilities along the way, in the name of performance. These are:
- The ability of the reflections in Linden Water (the sea in SL) to include the shininess of an object (this is described in BUG-5575, which I had filed);
- The ability of point lights and projected lighting (i.e. not sun and moon) to be reflected in the Linden Water (this is described in BUG-5583, which was filed by Whirly Fizzle of the Firestorm Team).
Both of these JIRAs were unceremoniously closed because, as per NORSPEC-310 (the specification for the materials processing capabilities in Second Life; it’s an internal document that most of us can’t see), they would cause too much of a performance hit.
I don’t agree with the way things were done. Starting with my own JIRA (BUG-5575), I believe that the Lab could have followed a different way of thinking. Instead of “let’s delete it, because they’ll start complaining about lag again”, they could have said “let’s make no-materials, no-shininess reflections the default behaviour, and add a switch to add this rendering on top of everything else; if they think their machines can take it, let them have it.” I sincerely think this would have been a better approach.
Dwelling on it a bit more, I think it might have been better for the Lab to enable full materials (normal and specular maps) to be reflected in the water in two discrete stages. They could have allowed us to add the reflection of objects’ shininess, without the materials, as a first stage; and then, as a second stage, we would add the reflections of full materials.
Why am I saying that? The reason is that SL doesn’t offer any true mirror capability. To get a mirror effect for your in-world photography, you need either to resort to complex and highly convoluted methods involving custom-made poses, camera-locking scripts and post-processing, as Laverne Unit did, or to use the sea in SL as a mirror, as Oracolo Janus did. The latter method (Oracolo’s blog is no longer available, due to the closure of My Opera) was first documented by Zonja Capalini, whose tutorial I found through Inara Pey, who had used it to great effect in her own photography (and yours truly explained how you can create this effect with Firestorm’s tools back in March). And these aren’t the only people who have used this effect in their SL photography; many SL photographers have, from Caitlin Tobias to Whiskey Monday. So, my thought is… why not give them the full package as an option? I believe it’d enable them to take even more stunning photographs, and those would be an excellent showcase for the platform in general.
Too expensive for the rendering pipeline
I’m not going to argue that these options wouldn’t cause a performance hit. The more you ask your GPU and CPU to process, the worse your performance gets; I doubt there are many users in SL who don’t understand this simple fact. But these options would only be enabled by users on certain occasions, when there was reason enough to use them – i.e. when taking in-world photographs.