Let’s talk shinies

The advent of materials processing (normal and specular maps) in Second Life brought about a number of changes to the way things are rendered, compared to how they used to be – at least for those of us whose graphics cards allow us to enable the Advanced Lighting Model (formerly known as “deferred rendering”). For detailed coverage of this capability, please go over to Inara Pey’s blog. Now, when this new capability was added, many people started jumping up and down about how “irrelevant” or “useless” it was, about how only… twenty users in total would be able to see materials, how it would really kill the performance of everyone’s viewer, etc.

I’m going to speak from my own experience. Up until this month, my main machine for using Second Life was a laptop. A 2009 midrange model, with a dual-core Intel T4300 CPU, 4GB of main RAM, and an ATI (now known as AMD) Mobility Radeon HD4500 graphics card. Those in the know understand that this was hardly “high end” even then, and it became antiquated relatively fast. I can’t vouch for how other people with older, and probably lower-spec, dedicated graphics cards, or with integrated Intel chipsets, would fare, but, ever since the 2012 updates to the rendering pipeline were made, I was able to run in deferred (ALM) practically all the time – without shadows and ambient occlusion. Yes, I know my computer’s performance wasn’t much. It was usable, though, and the in-world pictures I once envied so much were now within my reach. So, I believe that ALM, which is a prerequisite for viewing materials, is within the reach of more people than was believed back then.

Nowadays, I’m the happy owner of a laptop with a fourth-generation dual-core i7, 8GB RAM and an NVidia GeForce GT840M, as well as a desktop with an i7-4770K CPU, 16GB RAM and an ASUS ROG Poseidon GTX780 graphics card. As one would expect, my machines’ performance in SL is in an entirely different league from what I was once used to. Still, I have the feeling that, as beautiful as SL looks right now, it could be even more spectacular, had (i) some rendering capabilities not been removed with the advent of materials processing, and (ii) a few others been added.

What we lost

Before I proceed any further, I must say I’m very much aware we all love our shinies, but we want them to cause no performance hit at all. This, of course, is something that just can’t be done. With materials, I expected to take quite a performance hit, depending on how extensively they were used and how large the normal and specular maps were. And indeed, that’s what happened with my older machine. So, when we see that the new shinies cause our preferred viewer to go slower, we immediately start complaining – sometimes with reason, and sometimes without.


Inara Pey’s “Exotix” latex look, photographed using Linden water as a mirror. The reflection has lacked its shininess ever since materials processing was added. Image courtesy: Inara Pey. Please click on the picture for the full-size version.

As I said in the introduction, materials processing changed quite a few things, and, unfortunately, we lost two capabilities along the way, in the name of performance. These are:

  • The ability of the reflections in Linden Water (the sea in SL) to include the shininess of an object (this is described in BUG-5575, which I had filed);
  • The ability of point lights and projected lighting (i.e. not sun and moon) to be reflected in the Linden Water (this is described in BUG-5583, which was filed by Whirly Fizzle of the Firestorm Team).

As of mid-2012, shininess was no longer rendered in an object’s reflection in Linden water. Please click on the image for the full-size version.

Both of these JIRAs were unceremoniously closed because, as per NORSPEC-310 (the specification for the materials processing capabilities in Second Life; it’s an internal document and most of us can’t see it), they would cause too much of a performance hit.

I don’t agree with the way things were done. Starting with my own JIRA (BUG-5575), I believe the Lab could have followed a different line of thinking. Instead of “let’s delete it, because they’ll start complaining about lag again”, they could have said “let’s make no-materials, no-shininess reflections the default behaviour, and add a switch that layers this rendering on top of everything else; if they think their machines can take it, let them have it.” I sincerely think this would have been a better approach.

Dwelling on it a bit more, I think it might have been better for the Lab to enable full materials (normal and specular maps) to be reflected in the water in two discrete stages. They could have allowed us to add the reflection of objects’ shininess, without the materials, as a first stage; and then, as a second stage, we would add the reflections of full materials.
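
To make the idea concrete, here is a minimal sketch of what such a staged, opt-in switch might look like. Everything in it is hypothetical: the setting name, the stage values and the render-pass hook are mine, not the viewer’s actual code; it only illustrates the “off by default, opt in per stage” approach.

```cpp
#include <cstdio>

// Hypothetical sketch only: a staged, opt-in water-reflection quality setting.
// None of these names exist in the actual viewer code; the point is simply
// "off by default, and each stage adds work only for users who ask for it".
enum class WaterShineReflection {
    Off = 0,         // current behaviour: no shininess in the reflection
    ShininessOnly,   // stage one: legacy shininess reflected, no materials
    FullMaterials    // stage two: normal and specular maps reflected as well
};

struct RenderSettings {
    // Default keeps today's performance profile; users opt in per shot.
    WaterShineReflection water_shine = WaterShineReflection::Off;
};

// Imagined hook called while drawing the Linden-water reflection map.
void render_water_reflection_pass(const RenderSettings& s) {
    std::puts("render reflection: geometry and diffuse textures (as today)");
    if (s.water_shine >= WaterShineReflection::ShininessOnly)
        std::puts("  + apply legacy shininess to reflected surfaces");
    if (s.water_shine == WaterShineReflection::FullMaterials)
        std::puts("  + bind normal/specular maps and run the materials shader");
}

int main() {
    RenderSettings settings;  // default: Off, so no extra cost for anyone
    render_water_reflection_pass(settings);

    // A photographer flips the switch just for a shot, then flips it back.
    settings.water_shine = WaterShineReflection::FullMaterials;
    render_water_reflection_pass(settings);
    return 0;
}
```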

Why am I saying that? The reason is that SL doesn’t offer any true mirror capability. To get a mirror effect for your in-world photography, you need to either resort to complex and highly convoluted methods involving custom-made poses, camera-locking scripts and post-processing, like Laverne Unit did, or use the sea in SL as a mirror, as Oracolo Janus did. The latter method was first documented by Zonja Capalini (Oracolo’s blog is no longer available, due to the closure of My Opera); I found her tutorial through Inara Pey, who has used it to great effect in her own photography, and yours truly explained how you can create this effect with Firestorm’s tools back in March. And these aren’t the only people who have used this effect for their SL photography. Many SL photographers have used this technique, from Caitlin Tobias to Whiskey Monday. So, my thought is… Why not give them the full package as an option? I believe it’d enable them to take even more stunning photographs, and these would act as excellent promotion for the platform in general.

Too expensive for the rendering pipeline

I’m not going to argue that these options wouldn’t cause a performance hit. The more you ask your GPU and CPU to process, the worse your performance gets. I doubt there are many users in SL who don’t understand this simple fact. But these options would only be enabled by users on specific occasions, when there is reason enough to use them, i.e. when taking in-world photographs.

This means that they’ll have framed the shot beforehand, chosen the desired windlight, taken a few quick’n’dirty snapshots to see how things are working, and then, when they’re satisfied with their composition and lighting, they’ll flip the switch to enable the reflection of shininess and / or materials. Then they’ll disable this extra rendering to frame a new shot, or simply move on. That way, the performance hit would be irrelevant on most occasions: whatever performance loss they’d suffer would be only temporary, lasting a few moments before they’re back to “business as usual”. And I must say here that, sadly, certain things simply cannot be easily done in post-processing (Photoshop, GIMP, etc.). The more you can do from within the platform, the better – and your post-processing work will be made a lot easier.
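
This “flip it on only for the shot” workflow is already how many photographers treat the existing heavy options. As a rough illustration, the sketch below expresses an everyday profile and a snapshot profile as debug-setting overrides; the setting names (RenderDeferred, RenderShadowDetail, RenderDeferredSSAO, RenderDepthOfField, CameraFNumber) are, to the best of my knowledge, the ones behind the corresponding preferences, while the “preset” mechanism itself is purely illustrative; in practice you toggle these by hand in the graphics preferences.

```cpp
#include <cstdio>
#include <map>
#include <string>

int main() {
    // Everyday profile: ALM on, but the expensive extras off.
    std::map<std::string, std::string> everyday = {
        {"RenderDeferred",     "TRUE"},   // Advanced Lighting Model
        {"RenderShadowDetail", "0"},      // no shadows
        {"RenderDeferredSSAO", "FALSE"},  // no ambient occlusion
        {"RenderDepthOfField", "FALSE"},  // no depth of field
    };

    // Snapshot profile: flipped on just before taking the picture,
    // then reverted once the shot is done.
    std::map<std::string, std::string> snapshot = everyday;
    snapshot["RenderShadowDetail"] = "2";     // sun/moon + projectors
    snapshot["RenderDeferredSSAO"] = "TRUE";  // ambient occlusion
    snapshot["RenderDepthOfField"] = "TRUE";  // DoF for the final framing
    snapshot["CameraFNumber"]      = "2.8";   // keep the aperture sane

    for (const auto& kv : snapshot)
        std::printf("%s = %s\n", kv.first.c_str(), kv.second.c_str());
    return 0;
}
```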

It must be said here that everything we add, graphics-wise, adds to the load our computers have to face in SL. Turning ALM on equates to a performance hit, which can be significant or insignificant, depending on your machine. Adding shadows (Sun / moon, or Sun / moon and projectors) further taxes your machine’s performance. Ambient occlusion slows things down even more. And if you add the DoF effect, then you’re really making your machine work hard. Now, ALM, shadows, ambient occlusion, and DoF are all things that have been with us for a very long time. Personally, I’ve only been able to use them on my previous laptop since late 2011, when the rendering pipeline received a number of upgrades. Before that time, I’d just get a black screen. On all the ALM-capable machines I’ve used, however, I’ve made the following observations:

  • High shadow resolution scales (values above 2.00) slow your machine down and can crash the viewer if you’re trying to take high-resolution snapshots;
  • Screen space reflections cause an extra slowdown;
  • Ambient occlusion itself slows down your machine when enabled, although I’ve yet to confirm the impact of its individual sub-settings;
  • DoF-wise, now… The lower you set your f-number (i.e. the wider you open your lens’ aperture) and the longer (i.e. more telephoto) your lens (a greater focal length value), the worse your performance will be; a back-of-the-envelope calculation after this list shows why. I’ve found f-number values lower than 2.8 to be rather undesirable; on some machines, f-number values lower than 1.4 can cause the viewer to crash.

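Why do a wide aperture and a long lens hurt so much? A rough thin-lens calculation makes it obvious. The little program below computes the diameter of the circle of confusion (the size of the blur spot) for an out-of-focus object; it is a back-of-the-envelope sketch of the underlying optics, not the viewer’s actual DoF code, but the proportionality carries over: the bigger the blur spot, the more samples the post-process blur has to gather per pixel, and the more frame time it eats.

```cpp
#include <cmath>
#include <cstdio>

// Diameter of the circle of confusion (the blur spot) for an object at
// distance d when the camera is focused at distance s, using a lens of
// focal length f (all distances in metres) at f-number N.
// Thin-lens approximation: c = (|d - s| / d) * f^2 / (N * (s - f))
double circle_of_confusion(double f, double N, double s, double d) {
    return std::fabs(d - s) / d * (f * f) / (N * (s - f));
}

int main() {
    const double s = 3.0;   // focused on a subject 3 m away (e.g. an avatar)
    const double d = 20.0;  // background object 20 m away

    double modest  = circle_of_confusion(0.050, 2.8, s, d);  // 50 mm lens at f/2.8
    double extreme = circle_of_confusion(0.135, 1.4, s, d);  // 135 mm lens at f/1.4

    std::printf("blur diameter, 50 mm at f/2.8 : %.2f mm\n", modest * 1000.0);
    std::printf("blur diameter, 135 mm at f/1.4: %.2f mm\n", extreme * 1000.0);
    // The second case yields a blur spot roughly fifteen times larger, and a
    // post-process DoF blur has to gather more samples per pixel as the blur
    // radius grows, so the frame-time cost grows along with it.
    return 0;
}
```
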
Yet, almost all of these things (with the exception of screen space reflections) have been taken for granted for quite some time now. They’re there; they’re an accepted reality, and they represent a graphical quality standard we strive to have in our viewers. In a way, they represent a benchmark for us all, and we aim to have machines powerful enough to allow us to have ALM with shadows (Sun / moon and projectors) and ambient occlusion almost always on (except maybe for very crowded regions) and to activate DoF for snapshots. From my experience, these settings can cause a significant performance hit on midrange and entry-level machines, and even high-end rigs can be significantly slowed down with these settings enabled while we’re in a particularly heavy and crowded region.


The advent of materials, as documented in BUG-5583 (which was unceremoniously closed as “expected behavior”), means that we can no longer produce snapshots that emulate this RL photograph. The only lights now reflected in Linden water are the Sun and the moon. Image credit: Wikimedia Commons. Please click on the picture for a larger version.

Of course, not everyone’s machines can handle such settings. My previous laptop was struggling. But here’s the deal: no reasonable person I know questions the usefulness or desirability of this level of graphical quality. Only the usefulness of materials processing was questioned in the beginning, but now more and more content creators (especially in the virtual fashion and architecture business) have been adopting materials, effectively telling their customers to go ahead and upgrade their machines if they want to enjoy their new clothes, shoes, etc. in all their materials-enabled glory.

The shared experience requirement and performance issues

As it stands, SL has a number of rendering features that make it look better, but also make your computer work harder: ALM, antialiasing, anisotropic filtering, shadows, ambient occlusion, DoF. Here’s where I think the “shared experience” requirement, as documented in LL’s third-party viewer (TPV) policy for SL, comes into play (at the end of Section 2):

You must not provide any feature that alters the shared experience of the virtual world in any way not provided by or accessible to users of the latest released Linden Lab viewer.

The “shared experience” term has always been rather vague, but I think it means that a TPV must not offer features that make the “look” and “feel” of SL as experienced through it differ from that of the official viewer. As to the reasons behind this requirement, I can immediately think of issues like user support, codebase maintenance, and coordination and cooperation with TPV developers (merging new features from the official viewer into the codebase of a TPV that differs significantly in certain areas can be a very difficult task); there’s no need to dive into conspiracy theories.

The flipside

While such a policy can be advantageous for the Lab, it can also mean that TPV developers who want to make SL look better are discouraged from contributing what they can. Yes, I’m fully aware that not all TPV developers qualify as experts in the field of designing 3D engines, but their ideas could at least be picked up by the Lab and made to work more efficiently and more reliably.


Crepuscular rays (god rays) in Golden Gate Park. Yes, rendering such lighting in real time would be demanding, hardware-wise, but things move forward, so I believe such capabilities are worth having in SL. Image source: Wikipedia. Please click on the picture for a larger version.

Then again, I know many people will start saying that shinies like god rays (crepuscular rays), prims that can be used as mirrors (see STORM-2055, which has also been documented by Inara Pey), lens flares, screen space reflections (I must say I’m not particularly fond of this particular option) and the like are not worth the effort, because only a tiny fraction of users will be able to use them, while the rest will simply never be able to even see them. So, adopting a maximalist approach to the shared experience requirement, they could demand that none of these “useless shinies” ever be implemented, because their 10-year-old computers will never be able to render them.

In fact, I know that many such people have done exactly that. As late as mid-2013, the managers of several in-world dance clubs I know were going to great lengths to discourage their patrons from wearing mesh garments, hair, shoes and other accessories because “people on older viewers couldn’t see it.” Then again, back in 2006-2007, we didn’t even have antialiasing – and even that requires its share of GPU resources. Maybe, in the name of having a “shared experience” with the lowest common hardware denominator, we should go back to the way SL was in 2003?

I don’t think so. I believe that the best course of action for the Lab, tech-wise, is to always strive to keep SL relevant and reliable. As far as reliability is concerned, I think many, if not most, of the goals have already been achieved. It’s far more stable than it used to be, it’s much faster than it used to be, and it’s considerably prettier; the software engineers at the Lab deserve recognition for their efforts, and I must say I feel they’re underappreciated and get a lot of unfair flak.

Getting back on the topic of BUG-5575 and BUG-5583, I think it’s unfortunate that these capabilities were removed. Since they existed before materials processing was implemented, the Lab could presumably bring them back. However, as explained before, they were removed in the name of performance; to the best of my understanding, what Marissa Linden meant was that an “average” SL user’s computer wouldn’t be able to handle them without being slowed down too much.

Why bother?

Another factor we could examine in this matter is what exactly the Lab can expect as a reaction from the user base if it brought these capabilities back. As a matter of fact, NiranV Dean, developer of the Black Dragon Viewer, seems to have brought this functionality back already. Would the restored capabilities go unnoticed? Would the user base go “meh”? Would anyone actually benefit from them?

Honestly, these questions are hard to answer. I fully understand that many users out there (the ones with low-end hardware, most notably) will not care. Some of them might even complain about how the Lab works on “stupid shinies that no one cares about or no one will be able to see” instead of “reducing lag” (even though most of the lag in SL has nothing to do with the platform itself and is caused by factors LL has no control over). Sadly, the Lab feels obliged to listen to these complaints, however unfair they are.

Then, I understand that even those of us with high-end hardware could find these capabilities to be taking a toll on our systems’ performance. But still, being able to turn them on or off according to the needs of the situation at hand, as I’ve already explained, would mean that this performance hit would only occur in very specific moments.

This brings us to an argument against the return of these capabilities: their use would be very limited and thus not worth the time and effort of LL’s developers. Would in-world photographers and machinima creators care? Would they use them? Would the “if you build it, they will come” approach work here or not? This is hard to predict. I know I’d use them, but that’s just my own personal take – I can’t speak for anyone else, as this depends hugely on each individual machinima creator’s or in-world photographer’s personal technique, style and thematic preferences.

I understand where the Lab’s people are coming from, really. SL has been receiving tonnes of criticism all these years for its outdated engine, sub-par camera offsets, poorly designed avatar skeleton and mesh, and for having ugly visuals, so the developers have tried (sometimes in half-hearted steps, as we saw with the introduction of rigged mesh) to improve its visual appeal. And then you see people whose machines are perfectly capable of running on ultra or high-ultra graphics settings at consistently smooth frame rates keep ALM off almost all the time. Yes, you people know who you are; the snapshots you upload on gyazo are not only taken with ALM off, but even without anti-aliasing. I know I’d be frustrated if I were a Linden Lab developer; I’d feel I’d put in endless hours of work for nothing.

My take on all this

I believe BUG-5575 and BUG-5583 shouldn’t have been closed, and that these capabilities should be reinstated officially. However well the Black Dragon Viewer (which I’ve yet to test, to be honest) may implement them, these features are unofficial and unsupported there, so if they stop working well at some point, now or in the future, there’s really no one to turn to. Also, I do believe STORM-2055 deserves to be worked on officially by the Lab. Yes, I understand that these features will not be for everyone or for every occasion. But they’re really nice to have. I think they’re worth bringing back, and I’ll explain why.

First of all, even if their usefulness is limited compared to the “bog-standard” features that the majority of low-end machines out there can use comfortably, knowing that a hardware upgrade of moderate cost can give you an SL experience whose graphical quality is on par with the best games of the current crop is reassuring for both the user base and the people working on the platform, and it confirms SL’s relevance in the current context.

Second, it provides designers and developers of virtual environments for artistic, business, promotional and educational purposes with extra selling points, which, in turn, can reinvigorate the revenue generated by SL. Remember that the lack of a supermassive user base was not the only reason corporations left SL after the bubble burst; the role of the platform’s graphical and technical immaturity should not be underplayed. Yes, even these two “minor” features could help make SL look more appealing to the tourist, cultural, and business markets. They could very well keep virtual environment designers and developers interested in SL (rather than flocking off to the likes of Unity).

Third, removing functionality that, with the appropriate safeguards (for instance, keeping these features turned off by default and enabling them via a checkbox or something similar), could be prevented from impacting a user’s performance is never the best idea.

Fourth, the Lab needs to harden its stance towards the “stop working on laggy shinies, I can’t see them on my 10-year-old Celeron box” brigade. In fact, it should adopt a “caveat emptor” stance. In the world of “normal” gaming, people understand completely that they are in control of, and responsible for, their graphics settings, depending on their hardware. One can’t expect to play, for example, Crysis 3 at a display resolution of 1920×1080 with ultra settings on an old, low-spec machine, and there’s no way Crytek can be considered responsible for the hardware a user has chosen to purchase.

Anyone who uses Second Life or other MMOs that are heavy on 3D graphics should understand that these environments demand some decent hardware. That’s just the way it is. Complaining that you can’t run SL on a seven-year-old Celeron-based machine with 1GB of overall system RAM and an ancient integrated Intel graphics chipset (with shared memory) is pointless, and LL should finally point this simple fact out to its users rather than live in constant fear of people going up in arms in the forums.

So, to wrap it up: I think LL would do well to reconsider its approach to these three JIRAs, and to ignore those who would bash it for trying to improve the graphical quality and beauty of Second Life.


