Back in June, I wrote a piece on Philip Rosedale’s (and others’) illusion, voiced in statements he made to the Guardian in May 2007, that Linden Lab’s virtual world Second Life would become the next worldwide web. Of course, six years on from that era, when SL was the darling of the media, we all know that Second Life and the other virtual worlds are certainly not considered mainstream. It’s true that SL is the most prolific and popular virtual world platform, but it simply doesn’t have the number of users (or the image, but that’s another issue entirely) to be “mainstream” in any generally accepted sense of the term. Mind you, none of the other virtual world platforms that were spawned from it have fared any better; in fact, some have even gone under.
If you look for explanations and theories for this failure of virtual worlds in general to become “mainstream” and fulfill the promise and the hype of yesteryear, the internet is full of them, but most concern Second Life only. I personally find that expectable, as it’s the most prominent target for criticism and scrutiny; even on this blog, there’s an older post that tried to approach the matter, although I now think my then-limited understanding of the technical and conceptual aspects of virtual worlds kept it from getting to the core of the issue. Skim through any of these explanations and you’ll see people constantly complaining about lag, griefing, complex viewer software, content portability and tier cost. While it’s true that these issues are important to many, some are largely specific to Second Life and, in reality, they are not problems but symptoms: they are manifestations of underlying problems, as I have explained before, and I think it would be beneficial to reiterate this particular point if we are to have a meaningful discussion of the subject at hand.
Problems and Symptoms
Before we go any further, we must sit down and consider whether the “problem-solving” that LL engages in and is demanded to engage in actually solves problems or not. Please note that I did not place the emphasis on “solves”, but on “problems”, straying quite far from the norm when it comes to speaking of LL’s and SL’s woes.
Because far too many people confuse problems with symptoms. I believe that right now is the best moment to draw a clearly visible line between the two, to help not only the analysis that will be part of these posts, but also readers’ understanding of it.
Symptom: The result of a problem; it is caused by a problem and is, in essence, the evidence by which a problem can become known to us. However, because symptoms are usually all the warning and indication we get that something is wrong, and because of the lack of rational thinking that characterises far too many people (especially among those arrogant, ignorant fools who fashion themselves as “paradigms of rationality”), symptoms are routinely misidentified as problems.
Problem: It is a holistic and systemic failure of something we are trying to accomplish and manifests itself through a variety of symptoms.
Why is this so important? Because, when you mistake a symptom for a problem, you are not fighting the underlying cause, but the result. This is a waste of time, money and resources: lots of effort spent, and nothing to show for it. It has no end and, by allowing the underlying problems to persist, it allows the symptoms to persist too, resulting in frustration, disillusionment and, more often than not, serious tensions.
On the other hand, identifying the problem and treating the problem instead of its symptoms gets things done; it has an end. It creates momentum and, yes, satisfaction. And, of course, it allows you to move on to the next issue.
Talking about failure
Now that the “problem vs symptom” issue is behind us, I believe we have a common ground for this analysis and we will be able to better focus on what’s really important instead of merely scratching the surface ad infinitum. The title of this section is “Talking about failure” and I’m sure most of you will be familiar with the almost boilerplate articles about how Second Life has “failed”; they keep popping up every now and then in the media, only to be followed by apologetics written by members of SL’s community. Occasionally, we may see some PR fluff from Linden Lab itself, such as this infographic that LL posted as part of its 10th birthday PR efforts.
Personally, I’m beginning to find this game of table tennis between SL’s detractors and apologists to have less and less value, as it’s becoming more and more a set of parallel monologues; journalists who know nothing about SL take a few bits from sources of dubious credibility and reliability, mash the shoddy source materials together and write articles that aren’t worth the electricity and caffeine used to write them. Then, people who like SL try to debunk the crap (some of them even seem to take it all personally). In other cases, of course, there are journalists who simply are not willing to forgive LL for its numerous mistakes and keep raising those points, especially if nothing has changed in the interim – but LL’s relationship with the media is a different topic entirely.
The thing is, has Second Life failed?
If we’re to see things strictly from a businessman’s point of view, the answer is no. It hasn’t. Ten years on, it’s still got a steady number of dedicated users who simply won’t consider leaving. While this is certainly far from the levels of World of Warcraft‘s success, it’s still a profitable business that can afford to do serious development work to improve its platform. Actually, it’s still lucrative enough to continue being Linden Lab’s flagship product – and LL is a profitable enterprise as well. So no, neither Second Life nor Linden Lab has failed.
But it certainly hasn’t lived up to the hype of the “golden days”. It has not become the next worldwide web, or even part of it. It has not become the marketing tool for corporations. And, even though virtual worlds provide some utterly amazing opportunities for artistic expression, what has been achieved by the artistic communities is generally shunned, as it doesn’t make for the kind of sensationalist “journalism” that focuses on gambling, griefing and the sexual aspect of SL and SL-based virtual worlds (oddly enough, they never focus on the sexcapades that IMVU users keep spamming their friends’ Facebook profiles with – but that’s another story). In this respect, we need to acknowledge a certain perceived failure. As I wrote earlier in this article, many bloggers (including yours truly) and journalists have tried to explain this perceived failure, but most attempts have been superficial, at best, with only a few people bothering to attempt to get deeper than the epidermis.
Fallacies of infallible pundits
A common mistake among many “infallible” virtual reality pundits is their tendency to believe that the only virtual worlds out there are Second Life, the OpenSim grids (which essentially ride on Second Life’s coat-tails – and I don’t care whom this annoys, or how much) and, more recently, the various implementations and installations of Unity3D. However, MMORPGs are, in fact, virtual worlds – and quite successful ones, at that. Actually, they are a lot more successful than any of the aforementioned “true” virtual worlds.
In the infographic released by Linden Lab, we see some official figures, which should raise a few eyebrows as to their validity. LL speaks of 36 million accounts, with more than 1 million accounts active (according to various Lab sources, an account is generally considered active if it is used at least once per month) and 400,000 new registrations each month. Personally, I believe that the real number of users is between 300,000 and 600,000 and the rest are alts – some people even openly boast about maintaining large armies of alts/sockpuppets that they use for trolling and bullying on the feeds and forums. And don’t even get me started on the throwaway accounts used by griefers and copybotters. Meanwhile, SL’s concurrency fluctuates around 50,000 users at any given time. Still, these numbers far outstrip anything we’ve seen in OpenSim.
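To put those official figures in perspective, here is a quick back-of-the-envelope calculation; the inputs are the numbers quoted above, and the script itself is purely illustrative:

```python
# Back-of-the-envelope sketch using the figures quoted above. The inputs
# come from LL's infographic and this article; nothing here is official.
total_accounts = 36_000_000   # registrations since launch (LL infographic)
monthly_active = 1_000_000    # accounts used at least once per month
concurrency = 50_000          # typical simultaneous users in-world

print(f"monthly active / total registrations: {monthly_active / total_accounts:.1%}")
print(f"concurrent / monthly active:          {concurrency / monthly_active:.1%}")
```

Under these figures, fewer than 3% of registered accounts are active in any given month, and only about one in twenty of those is in-world at a given moment – which is precisely why the 36-million headline number deserves a raised eyebrow.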
These figures, though, are dwarfed by the ones we see in MMORPGs. Shall we take World of Warcraft as an example? Despite recent losses, its userbase is still well over 8 million strong, as this report from February shows; it was 9.6 million back then. And, combined with Skylanders and Call of Duty (of which only World of Warcraft is subscription-based), it raised Activision Blizzard’s sales from $1.17 billion to $1.32 billion in Q1 2013, meaning that we’re talking about a total of around $4.7 billion for the year. Or how about the Korean Silkroad Online franchise?
We’re talking about games with a long-standing, loyal, persistent following. Like the most successful grid-based virtual worlds, they are long-lived and their persistence has moved them beyond the status of a “traditional” game (whose lifespan usually doesn’t exceed three years). Also, given that their users are allowed to develop and mod them and even create content for them, I believe we could reliably say they are actually virtual worlds. It’s true that they don’t allow their users nearly as much freedom of creativity and expression as SL and OpenSim do, given that they’re quite strictly themed, but they’re persistent virtual environments that appeal to a broad userbase.
The Trough of Disillusionment
As you can see in this infographic from Gartner’s 2013 Hype Cycle for Social Software report (more information on the Hype Cycle can be found here), virtual worlds are at the bottom of the “trough of disillusionment”, where “[i]nterest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investments continue only if the surviving providers improve their products to the satisfaction of early adopters.”
So, why aren’t virtual worlds anywhere near mainstream yet?
There are many contributing factors to this. I’m not going to get into the factors that have to do with the habits and mentality of the (potential) users at this moment. Instead, I’m going to talk about the factors that are determined by the decisions of the companies that develop virtual world platforms, as these form the framework within which users (either individual users or companies, universities and non-profits) have to “play”. During a discussion with Will Burns (Aeonix Aeon in SL), Vice Chair of the IEEE Virtual World Standard Group and project manager of the Andromeda Media Group, he identified the following factors:
- A “walled garden” mentality
- Poor technical design choices that deprive the virtual world of the performance it could (and should) have
- Proprietary code (which goes hand-in-hand with the “walled garden” mentality)
- Lack of common standards, which means limited content portability, compatibility and cross-functionality
- Steep learning curve for users and budding content creators
- The persistent perception of virtual worlds as entities separate from the internet or, at best, as yet another addition to the internet
Explaining the “walled garden” mentality
To give you an idea of what a “walled garden” is, I’ll remind you that virtual world developers and proprietors like to tell everyone that they are basically Internet Service Providers (ISPs). The virtual land we rent from them is server space and time. The content we create, buy and use there is data. Yet, these ISPs have made sure that we can’t transfer our data from one ISP to the other. Whatever portability we have is extremely limited and shackled by technical constraints and idiotic copywrong licences. I can’t transfer my inventory, my clothing, my skins, my avatar shape, the textures I’ve bought, from Second Life to Kitely or InWorldz or what have you: I’ll have to negotiate with the content creators, pay extra for an additional licence to use these textures or copies of this content on other virtual worlds etc.
This, of course, applies to things we’ve bought as finished products in-world or on that particular grid’s marketplace. For objects we’ve made ourselves or from full-perm builders’ kits, we can extract these objects as mesh (either with tools like Mesh Studio or with the recent extraction capabilities offered in Singularity and Kokua). However, editing mesh objects in-world is not straightforward at all and, once you’ve set the texture repeats and offsets for each individual face, what you’ve done is pretty much final – unless you edit the object in an external application. Still, this does allow us to at least bring an object to a certain level of completion (from a rough sketch to “try things on for size” to a nearly-complete creation) and then improve it in an external application, taking advantage of its greater flexibility and power.
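To make that split concrete, here is a small Python sketch (not tied to any actual viewer’s export code) that inspects a hypothetical, heavily trimmed Collada (.DAE) file of the kind such exporters emit. The point it illustrates: the geometry travels inside the exported file, while the textures are mere filename references – which is exactly why they remain shackled to the creators’ licences.

```python
import xml.etree.ElementTree as ET

# Hypothetical miniature of a Collada (.DAE) export such as Singularity or
# Kokua might produce; real exports are far larger, but the split is the
# same: geometry is embedded, textures are only referenced by name.
DAE_SNIPPET = """\
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
  <library_images>
    <image id="wood-diffuse"><init_from>wood_1024.png</init_from></image>
  </library_images>
  <library_geometries>
    <geometry id="table-mesh" name="table"/>
  </library_geometries>
</COLLADA>
"""

NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}
root = ET.fromstring(DAE_SNIPPET)

# Geometry nodes carry the mesh data itself.
meshes = [g.get("id") for g in root.findall(".//c:geometry", NS)]
# Image nodes merely point at external texture files.
textures = [img.findtext("c:init_from", namespaces=NS)
            for img in root.findall(".//c:image", NS)]

print("geometry exported with the file:", meshes)
print("textures only referenced, licensed separately:", textures)
```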
Gwyneth Llewelyn doesn’t see much wrong with this DRM policy, but this is short-sighted, to say the least. Yes, it allowed Second Life to grow, but it does nothing to help virtual worlds as a whole grow. Since all of these companies love likening themselves to ISPs so much, this policy is exactly like this scenario:
Let’s say I build a website and host it on ISP X. Then one day I decide I no longer want to host it there, but take it to a different ISP – call it Y. If ISP X embraced the “walled garden” mentality that LL has embraced and encouraged, I would not only be practically unable to reliably transfer my data between the two ISPs, but I would also be required to pay the creators of the stock images, widgets, buttons etc. I used for extra licences covering the new ISP. Oh, and, in all likelihood, I’d have to pay the creators extra to make content that is compatible with the new ISP’s standards.
Now, it is said that this is all about “user retention”, but personally I don’t agree – I find it rather pathetic and I can’t help but wonder how the content creators in Second Life have encouraged (or, to be more exact, imposed) such a policy. Yes, they make money by making sure their stuff cannot reliably and legally (at least without extra pay) be used in other virtual worlds, but they’re (a) missing out on a much bigger market that could be formed if the “walled garden” mentality were ditched, and (b) effectively driving their customers to ignore those demands and reuse the textures and other assets without the extra licensing fees, hoping that no one notices.
Content has to be portable from one virtual world to another
Thankfully, some third-party viewer developers have started understanding this simple fact. Singularity started allowing people to export their sculpts and prim builds as mesh (see Gwyneth Llewelyn’s report on this), and Kokua followed suit shortly afterwards, as Inara Pey reported. Personally, I expect other viewers to incorporate this capability soon enough, such as the ever-popular Firestorm, whose team, for the time being, has its hands full with CHUI and materials processing integration. But really, the ability to export content created in-world for external processing and for use in other virtual worlds should have been a no-brainer all along, regardless of what the paranoids who see copybotters everywhere babble. Furthermore, a common scripting language needs to be developed and implemented, so that scripts from one virtual world will work in other virtual worlds without a problem.
I am aware, of course, of a certain paranoid idiot that squawks about how such requirements are “techno-communist propaganda” of the “open source cult”, but interoperability makes technical and business sense – at least to those who have functional brains. If you are a businessperson or someone who runs a company and you need to exchange data with other people from different computing ecosystems, you understand that proper interoperability and data portability will save you time and money, allowing you and your employees to focus on the important stuff. Exactly the same applies to content creators who wish to be present in a number of different virtual worlds – proper interoperability and content portability will allow you to sell your products in all these virtual worlds, with the same functionality, the same appearance, the same quality.
Did the “walled garden” mentality drive investors away?
Will Burns certainly seems to think so. He believes that LL’s abandonment of the Hypergrid was a sign that they opted to turn Second Life into a “walled garden” and believes that this decision led them to miss a great opportunity. In his view, investors want two things: (1) control and (2) money. Philip Rosedale, according to Burns, was not going to give either to them, so they left. He says that Rosedale could have turned things around in LL’s and SL’s favour by saying something like:
Second Life is based not as a platform by which Linden Lab is the creator of content, but instead the curator of content for those who utilize the platform. Instead of focusing on controlling the platform, we focus on expanding the usage of that platform worldwide and curate the content in turn creating a multi-billion dollar content curation cash cow. This also addresses our need through expanded usage, no or low cost entry from brands to reach that expanded population through our curated content services as marketing to them, and we also focus on the curation of content while offsetting our business model focus from in-house simulator rental to instead allow third party licenses to run them independently and interface with us. This is why I am focusing on Hypergrid, standards and the community – because in the end the more of them there are the more content they create, and the more they create the bigger this gets and the bigger it get the more they will sell. And we take a percentage of it.
Could this have been the single most important statement in the history of virtual worlds? As Burns points out, we’ll never know, since such words have never been uttered by any virtual world founder or CEO. History, after all, is not written in conditional speech, but only in the past tense. It seems that the idea of an underlying curator is not entirely understood – it took LL a fair while to get its own marketplace, and even as we speak its implementation leaves much to be desired. Furthermore, given LL’s idiotic ToS change, as documented by ON SL and commented on by both ON SL and Inara Pey, such a statement is unlikely ever to be made. Especially if we consider that LL’s insistence on not removing the offending part from the revised ToS (a part that was copied verbatim from Section 2 of Desura’s ToS), despite the angry responses by top stock content providers like CGTextures and Renderosity, could set a precedent for other virtual world owners and CEOs to follow suit, causing significant trouble to in-world content creators; stock content providers certainly aren’t willing (and who can blame them?) to accept damage-control canned statements like the one Peter Gray sent out.
Technical design choices
Burns pointed to issues like scalability and the lack of support for features like procedural textures, which could reduce the file sizes (and, therefore, the resulting lag) of the textures used without any sacrifice in graphical quality. Personally, I am in no position to speak about scalability or about what could be done to ensure solid performance in a region with over 50 avatars on it. I do understand, however, that SL’s technological lock-in with Kakadu Software’s implementation of the JPEG2000 standard, and its exclusive reliance on raster textures (which are notoriously bad for portraying text or a combination of text and graphics), contribute to the increased use of 1024×1024 textures, thereby crippling SL’s performance for everyone. Of course, there are also content creators who smack 1024×1024 textures on everything, even the tiniest objects, just because they can; but if the option of procedural textures were there, I am sure SL’s overall performance would be much better. Please keep in mind that, since OpenSim grids are based on Second Life, these issues would afflict them too if their concurrency were anywhere near SL’s.
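To illustrate why procedural textures would help, here is a toy sketch in Python; it is in no way SL’s actual rendering pipeline, just a demonstration of the principle that a procedural texture is a tiny recipe evaluated per texel, while a raster texture must store every texel and therefore grows quadratically with resolution:

```python
# Illustrative only: a procedural texture is a small recipe; a raster
# texture stores every texel. Byte counts below are for uncompressed RGBA;
# JPEG2000 shrinks the raster considerably, but it must still be stored,
# transferred and decoded, while the recipe stays constant at any size.

def checker(u, v, tiles=8):
    """Procedural checkerboard: this recipe IS the whole 'texture'."""
    return (int(u * tiles) + int(v * tiles)) % 2  # 0 or 1 per texel

recipe_size = len("checker(u, v, tiles=8)")       # a handful of bytes
for side in (256, 512, 1024):
    raster_size = side * side * 4                 # RGBA, 1 byte per channel
    print(f"{side}x{side} raster: {raster_size / 2**20:5.2f} MiB "
          f"vs recipe: {recipe_size} bytes")
```

An uncompressed 1024×1024 RGBA raster weighs in at 4 MiB, while the recipe stays the same few bytes regardless of the resolution it is evaluated at.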
The only open source bit of Second Life is the viewer – and both it and its derivatives (i.e. the Third-Party Viewers we all know, use and love) are also used, with various adaptations, for OpenSim. The server, though, is not open source. So, OpenSim developers essentially look at the viewer’s source code and write server/simulator software that will work with the viewer’s existing code. I understand that, if LL’s simulator code was released under an open source licence, other developers could improve upon it – and LL could pick up and incorporate these improvements. Personally, although I am an “open source cultist”, I can understand why LL believes it’s in its best commercial interest to keep this code proprietary; furthermore, something doesn’t need to be open source to become mainstream – look at Microsoft’s Windows, or MS Office, or AutoCAD, or Photoshop. All of them are pieces of proprietary software, yet they’re the de facto standards for their industries.
Common standards (or lack thereof)
Builds (prim-based, sculpts and mesh) are 100% compatible between SL and OpenSim; it couldn’t be otherwise, after all, as OpenSim relies on SL’s viewer. There are differences in the scripting languages of the two platforms, however, so 100% script compatibility is not guaranteed – OSSL is based on LSL, but it adds (for instance) extensions that LSL does not share. Oh, and LL hasn’t implemented a mesh deformer for rigged mesh clothing.
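As a rough illustration of the script-compatibility problem, here is a hedged Python sketch of the kind of checker one could write; the function sets below are tiny hand-picked samples (llSay and llListen are core LSL, osTeleportAgent and osSetDynamicTextureURL are OpenSim extensions), not the real API inventories:

```python
import re

# Tiny, hand-picked samples of the two APIs -- NOT complete inventories.
LSL_FUNCS = {"llSay", "llSetText", "llListen"}    # shared by SL and OpenSim
OSSL_EXTRAS = {"osTeleportAgent", "osSetDynamicTextureURL"}  # OpenSim-only

def portability_report(script: str) -> dict:
    """Flag calls that would break when moving a script between platforms."""
    # Crude lexing: any ll*/os* identifier followed by an opening paren.
    calls = set(re.findall(r"\b(?:ll|os)\w+(?=\s*\()", script))
    return {
        "portable": sorted(calls & LSL_FUNCS),
        "opensim_only": sorted(calls & OSSL_EXTRAS),
        "unknown": sorted(calls - LSL_FUNCS - OSSL_EXTRAS),
    }

script = 'default { state_entry() { llSay(0, "hi"); osTeleportAgent(id, "grid", pos, pos); } }'
print(portability_report(script))
```

A common scripting standard would make such checkers unnecessary; as things stand, every os* call is a porting landmine.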
Steep learning curve
Let’s face it – none of the grid-based virtual worlds are easy to get to grips with. Numerous functions, settings, a user interface that’s only a few steps away from becoming as baffling as that of a professional 3D graphics application… Plus, people are thrown into the deepest waters when they join any of these platforms and don’t know what they’re supposed to do. This drives thousands of people away.
The perception of grid-based virtual worlds by the public at large
Time to spit it out: The coverage of virtual reality by the “mainstream” media has always been, for the most part and with very few exceptions, complete and utter bollocks, laced with hypocrisy, ignorance and sensationalism that’d make rags like the Daily Mail look like The New Statesman. It’s a vicious cycle that consists of the following:
- A few months (or years) of overhyping (which is not unlikely to be “encouraged” by the company that develops the product in question). During this time, the product is hailed as the silver bullet that’ll make the areas of Chernobyl and Fukushima safe for humans to live in, cure cancer, genital herpes and AIDS, bring about world peace, make you the best cappuccino in the world – and all that while standing on its left foot’s big toe and singing Buddhist hymns in Klingon.
- Of course, there’s no way that any product hyped in such a way can live up to expectations like these. So, once it becomes apparent that it wasn’t the aforementioned silver bullet at all, the very journalists that overhyped it so much do a face-heel turn and start bashing it; in the case of Second Life, the exact same journos that so eagerly and without any scrutiny or question promoted LL’s corporate hype (and many even went a few steps beyond that) were the ones that, a little later on, dismissed it as a “has-been” and a “dead duck”, as a virtual environment for socially inept and/or maladjusted people with no “real life” to speak of, and as a platform that enables and encourages “sexual perversion”.
Now, it’s true that LL has its share of responsibility for the way it has presented Second Life to outsiders. The creativity of individual users was practically ignored while LL tried to present Second Life as the best way for a corporation to have a web presence, even though its userbase has always been a drop in the ocean compared to that of other social networks, and even though it was still in its infancy from a technical standpoint. As a matter of fact, the most vocal part of OpenSim’s community (and this is not limited to mere users) seems more concerned with taking pot shots at LL and SL than with improving the visibility and perception of grid-based virtual worlds. This factionalism has contributed in no small part to the current image of SL and similar virtual worlds:
- SL is presented by many in the media as “the underbelly of the internet”: a haven for griefers, socially inept and maladjusted individuals and sexual perverts, in its death throes as far as its viability and usefulness are concerned.
- OpenSim, on the other hand, is seen as an empty space where several scientists indulge in their geekiness until the sequester cuts their funding.
At least within Second Life and its community, there have been efforts to reverse this distorted image (such as Draxtor Despres’ excellent series of presentations entitled The Drax Files). But still, a lot remains to be done and both LL and the OpenSim grids will have to stop “antagonising” each other (really, none of the OpenSim-based grids is going to offer LL any kind of competition anytime soon) and start working hard to show the public at large what this kind of virtual reality can offer.
A different view
It’s no secret that one of the people I choose to discuss various topics with is Inara Pey. I admire her blogging work and her in-world presence; throughout her existence in the metaverse, she has become highly influential, as she’s quite diligent and matter-of-fact. So, we sat down and talked about this topic, and I must say that her points are quite compelling. First of all, she pointed out a false assumption made by the media, by companies specialising in virtual worlds, and even by much of the userbase: do virtual worlds actually need to become “mainstream”?
Virtual worlds can do fine without “going mainstream”
As surprising as it might sound, virtual worlds can survive, exist and thrive as niche products just fine, like they’ve done for so long. As she has pointed out in several of her own posts, there’s nothing wrong with a product being “niche”. Real-world businesses know this fact very well and strive to focus their products to fit the needs of certain market “niches”, and even to create new niches and new wishes and “needs” that the buying public didn’t know it had (like Ford did with the Mustang back in the 1960s). Some of these “niche” products are adaptations of existing products in their portfolios, while some are developed from scratch with specific buyers in mind. It’s not rocket science – it happens all the time. And a niche can actually be quite long-lived.
And here’s a thought of my own, prompted by her point. Maybe this drive to “go mainstream” actually distracts virtual world developers from improving what they already do well, and diverts resources to modifications for things they don’t need to do? Maybe it makes the marketing and PR people focus their attention on trying to appeal to people who won’t seriously consider using virtual worlds, instead of addressing the right audience? Furthermore, given the success of MMORPGs that have adopted some of the features of virtual worlds, maybe virtual worlds have already achieved mainstream status, in a different form than originally envisioned and expected?
The “3D web” brouhaha
Second Life was touted as the next iteration of the web, and so were the OpenSim grids. Much has been said about how the 3D web is “around the corner”, but this 3D web has yet to materialise. Then again, we must take into consideration the following facts:
- Most web content creators out there probably don’t need to make their content 3D.
- Most web content creators don’t have the skills required for 3D graphics and optimisation and probably can’t justify the effort needed for them to acquire such skills, as it’s doubtful the benefits will offset the cost.
- “3D web pages” would really be viewable by a smaller audience, as many computers that are now used primarily for web browsing and basic productivity work simply don’t have the processing power needed for rendering high-quality 3D graphics.
So, besides these facts that hold the much-talked-about “3D web” back, we come to a question that was rightly raised: Do virtual worlds need to be the “3D web”?
Virtual worlds are still not mature enough
Sad, but true. Both Second Life and OpenSim are still under heavy development as far as their underlying code base is concerned, which should be a wake-up call for everyone still talking about “mainstream adoption”. There are so many changes going on every week to improve performance, safety, security and stability that grid-based virtual worlds don’t look like a mature, stable platform needing only minor tweaking here and there. Instead, they’re in their very early stages, despite the fact that Second Life has already been around for a decade. And that’s why Gartner puts virtual worlds 5 to 10 years away from reaching the Plateau of Productivity.
What does a virtual world need to survive?
Inara Pey considers content portability (which has been fully implemented – up to region terrain maps – in OpenSim, although precious little is done to drive up its usage), freedom of movement between grids etc, as features that would be nice to have, but not essential for a grid-based virtual world’s survival. According to her, a virtual world only needs four things:
- A stable operating platform with well-documented, flexible capabilities and functions which are transparent to the user.
- Applications to run on that platform; be they business-oriented, learning-oriented, entertainment-oriented or whatever, regardless of whether these are “curated content” or user-generated content or both.
- An easily-understood means of access (i.e. an easy-to-use viewer).
- An advertising budget that will put it right in the face of people and make them want to use it.
She actually has a point; the problematic data portability between WordPerfect and MS Word, Lotus 1-2-3 and MS Excel etc. never stopped any of these applications from becoming de facto standards. Yes, it’d be great if OSSL and LSL became 100% compatible with each other, but has the fact that they’re not stopped Second Life from dominating the grid-based virtual world market and gaining a foothold in a long-lived market niche?
I do believe that Will Burns’ idea of “curated content” could, if properly and wholeheartedly adopted and implemented by grid-based virtual world developers, really drive up their usage and attractiveness, greatly boosting their appeal to the average user who will become part of a large, active userbase. As to what it actually takes for a virtual world to survive, well, potential users need to find it useful and easy to use, plus it needs to be marketed appropriately (and please LL, ditch the “become your avatar” crap – this concept failed before; bringing it back from the dead is not a good idea), with campaigns that will target the right audience. And, of course, expectations and goals will have to be realistic – pipe dreams have no place in business.
- Linden Lab’s corporate pipe dream – this blog
- Today Second Life, tomorrow the world – Philip Rosedale’s interview with The Guardian
- Why Second Life has not fulfilled its potential – this blog
- Second Life’s state of affairs – this blog
- Infographic: 10 Years of Second Life
- Sockpuppet (Internet) – Wikipedia
- World of Warcraft down to 9.6 million subscribers – WoW Insider
- Gartner 2013 Hype Cycle for Social Software Reveals a Wealth of Emerging Innovations – Gartner Inc.
- Hype Cycle Research Methodology – Gartner Inc.
- Official Blog of the Andromeda Media Group
- Mesh Studio – Second Life Marketplace
- On object export and Third-Party Viewers – this blog
- Prim-to-Mesh done just right – by Gwyneth Llewelyn
- Kokua offers .DAE exports – by Inara Pey
- Hypergrid – OpenSim
- IP Related Changes in the New TOS – ON SL
- Linden Lab’s PR Department Responds to TOS Concerns – ON SL
- ToS change and content rights: LL provides statement – by Inara Pey
- 6th September 2013: Terms of Service update, using our images in Second Life is no longer allowed. – CGTextures
- Renderosity Products NOT Allowed at Second Life – Renderosity
- How we turn Second Life into a Lag Hell – this blog