For the moment, these are fully animated avatars with some Bento support + eye control. I’ll touch on plans as follows…
Some of you may recall the wyvern having made an appearance in the original Bento video put together by Linden Lab. By that point, the main model had been completed and rigged with a preliminary Bento rig. At the time, I chose to make use of both hind and wing bones to make this a mount rather than a full avatar, but that also meant making a few compromises in how a human avatar rider would be handled.
While I am not sure I couldn’t have made it work, there were a number of factors that spurred me to abandon the mount-as-part-of-existing-rig solution.
First, I was working on a number of projects at the time and various events called for projects that just made more sense to be released first.
Second, both the underlying Bento rig and the development tools have changed since I started: we went from no wings, to some wings, to the prospect of eventually being able to ride rigged objects.
Third, the final rig implementation as it stands best accommodates a fully opening and closing webbed wing structure by using most of the hand bones, which is a lot more difficult to do with just the Bento wings.
The added difficulty of dealing with an especially thin webbed membrane and preventing as much clipping as possible has been a major bone of contention for me and I feel like the current setup strikes the best balance.
As it stands now, I hope to keep an eye on the avatar and how folks like it. As always, updates are free, and with the new vendor system, I look forward to being able to just push updates to you automatically.
I will be looking into a parallel implementation as Animesh and hope, ultimately, to also provide a means of use as an Animesh mount… but until then I will be catching up on a few items for both shops, along with packing up for a major life change coming in the next month or so.
I don’t want to bore folks much about it, but I must say I will be moving at the end of October. While I’ve made good memories where I am now, I’m really looking forward to making the new place my home.
Now that the Wyvern is out of the way, I hope to delve a bit more into upcoming Animesh releases as well as other things I’ve been meaning to work on for a while.
Look for some freaky Halloweeny stuff coming up soon, along with some new household items as well.
I am also moving things around in the Organica shop and hoping to better feature everything that is on offer. Please let me know if you think things are going in a good direction or if you have any other feedback.
If you enjoy what I’m doing here or think someone else might also find it of use, please feel free to share this blog with them. If you’d like to keep up to date with posts, the RSS for this blog is here; I can also be found on Twitter and Plurk. The Discord server is here.
If you really like my stuff, perhaps consider donating to my Patreon? Your continued support helps to produce regular content (written, modelled, animated or otherwise) and helps to keep original content creation in Second Life!
This week has been a bit of a mish-mash from a work perspective – not only is Candy Faire coming up but we’ve gotten a major go-ahead with a work project, as well as some additional requests that I’ll touch on later this month.
Today, I figured I’d discuss some techniques with regard to texture maps and how they can provide additional detail to your work.
The Advanced Lighting Model has been part of the SL viewer for around five years, yet even now few content creators make use of it to its fullest extent.
There is a common misconception that the Advanced Lighting Model is only usable if you can view shadows and projectors, but that’s not actually the case. The option to turn off shadows and projectors may be in a different spot depending on which viewer you use, but on the mainstream viewer, it’s located here:
So with that addressed, how can Advanced Lighting Model help us?
As I touched on during the last article, a major part of making more efficient content is finding ways to add detail without adding costly geometry, and a large part of that has to do with the texture map options provided by the ALM functionality.
What do all these texture maps do?
Diffuse (Texture tab)
Diffuse maps are the basic textures we use to give objects their flat colour. On their own, they do not provide any effect with regard to how shiny or bumpy an object may be, instead offering only a means to see colour on the object under diffuse lighting conditions (IE: no reflections or projections).
We might sometimes ‘bake in’ more details to a diffuse texture, but it’s important to note that these details will be static and not change with differing light conditions.
Normal maps (Bumpiness tab)
In SL, the Bumpiness tab uses ‘Normal’ maps, which tell the renderer which angle each pixel should face and to what degree those pixels should recede or protrude. These shouldn’t be confused with bump or height maps, which use a monochrome spectrum to define how far each pixel sits above or below the surface – SL does use these to some degree, but not for the purpose one might expect.
Height maps come up more often in SL applications where a strict height difference (and no angle info) is needed – IE: height maps for full regions – but I won’t be going into that today.
Under the right circumstances, a normal map can make the difference between a flat plane and one that appears to have rounded buttons. The addition of normals is most noticeable when combined with Shiny maps.
Shiny maps (Shininess tab)
Shiny maps are a combination of two different types of maps – Specular and Glossiness maps.
Specular maps control how much light gets bounced back from an object’s surface, as well as what colour that light would be. Glossy maps control how shiny or matte something is.
Combined, Glossy and Specular maps can control how matte or shiny something is as well as how diffuse or sharp the reflected highlight will be.
Adding Material maps to your workflow:
Creating your own bump/height maps using a graphics editor:
Height/Bump maps are created in greyscale. Of note, it is typically not enough to simply convert your diffuse texture to black & white, as not all lighter details on your diffuse texture are necessarily supposed to be closer to the camera, nor darker details necessarily supposed to be farther from the camera.
I usually start with 50% grey as a background layer, then use additional layers to create depth. Anything I want to recede gets a darker shade. Anything I want to come forward gets a lighter one.
Areas of higher contrast will see sharper bumps. Areas of lower contrast will be more subtle.
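To make the layering idea concrete, here’s a minimal sketch in plain Python that stands in for the image-editor workflow above – nested lists play the role of pixels, and the shapes and values are purely illustrative:

```python
# Height-map values run 0 (receded) to 255 (protruding);
# 128 is the neutral 50% grey base layer.
SIZE = 64

def flat_base(size=SIZE, grey=128):
    """Start with a 50% grey background layer."""
    return [[grey for _ in range(size)] for _ in range(size)]

def paint_rect(height, x0, y0, x1, y1, value):
    """Paint a rectangular 'layer': lighter values come forward,
    darker values recede."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            height[y][x] = value
    return height

heightmap = flat_base()
paint_rect(heightmap, 8, 8, 24, 24, 200)   # raised button (lighter)
paint_rect(heightmap, 40, 40, 56, 56, 60)  # recessed panel (darker)
```

The wider the gap between a layer’s value and the 128 base, the sharper the resulting bump – exactly the contrast rule above.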
But Aki! You just got done telling us we usually don’t use height maps on objects!
While I did mention Normal maps are more commonly used within Second Life, the truth is most Normal maps are acquired by creating a height or bump-map, then having a program convert it for you.
Just follow the plugin’s installation instructions and you’ll find it under Filters in Photoshop the next time you load the program up. In it, you can control the scale of your normal map’s height, sample granularity and more, but usually those two (height and granularity) are all I change.
Alternatively, there are a fair number of web-based converters with varying degrees of customization. Notably:
There are a variety of additional programs that will do conversions for you, usually (but not always) bundled in with other functionality.
Crazybump – offers a variety of licenses depending on use. For selling in SL, expect to purchase the professional license.
Quixel Suite – offers a variety of licenses depending on use. For selling in SL, an Indie license is usually sufficient. Of note, this suite includes nDo (which is the Normal generation component), dDo (which permits application of high detail scanned materials) and recently 3Do (which is a baking utility.) Each of these can be purchased separately.
Substance Painter – This is a pretty commonly used standalone program amongst SL creators these days. It allows similar functionality to Quixel’s dDo, but also offers more in-depth painting-style application, along with procedural and particle brushes. If you have already created some height/bump maps, you can still use this to export Normal maps too! Content creators earning under US$100K per year can safely use the Indie license.
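For the curious, the conversion all of these tools perform is conceptually simple: sample neighbouring height pixels, derive a slope, and encode the resulting surface direction as RGB. Here’s a rough sketch in plain Python – the `strength` knob is my own stand-in for the ‘scale’ setting those filters expose:

```python
import math

def height_to_normal(height, strength=2.0):
    """Convert a greyscale height map (0-255 values) into
    tangent-space normals encoded as RGB tuples. `strength`
    scales the apparent bump depth."""
    h, w = len(height), len(height[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Central differences give the slope in x and y;
            # sampling is clamped at the image edges.
            left  = height[y][max(x - 1, 0)] / 255.0
            right = height[y][min(x + 1, w - 1)] / 255.0
            up    = height[max(y - 1, 0)][x] / 255.0
            down  = height[min(y + 1, h - 1)][x] / 255.0
            nx = (left - right) * strength
            ny = (up - down) * strength
            nz = 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            # Remap each component from [-1, 1] into an RGB byte.
            normals[y][x] = tuple(
                int(round((c / length * 0.5 + 0.5) * 255))
                for c in (nx, ny, nz)
            )
    return normals

# A perfectly flat height map yields the familiar uniform
# 'normal map blue' colour:
flat = [[128] * 4 for _ in range(4)]
print(height_to_normal(flat)[1][1])  # → (128, 128, 255)
```

This is why untouched areas of a converted normal map come out that characteristic lavender-blue: a flat surface points straight at you.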
Creating your own normal maps using a modelling program:
I mostly just use Blender or ZBrush, but I’m including a series of videos that go through normal creation using the big four modellers used for SL content creation below.
All of these methods assume that you have a higher-poly version of your low-poly model and that your low-poly model has already been assigned a UV layout:
Creating your own shiny maps using a graphics editor:
Much like creating Height/bump maps, both Specular and Glossy maps can be created with your favourite graphics editor or with a variety of other programs (Quixel Suite, Substance Painter). Most of these other programs will give you the appropriate separate maps when exporting ‘PBR’ materials – that is, Physically Based Rendering – but you’ll have to combine the specular and glossy maps in your graphics program later if you would like some variance in the matte/glossiness of the object.
In my own work, the glossy map is applied to the specular map as an alpha channel. Any areas on the alpha channel that are black will mask off the reflectiveness of the object.
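As a sketch of that packing step – plain Python nested lists standing in for image pixels here; in practice you’d do this with your graphics editor’s channels panel:

```python
def pack_shiny(spec_rgb, gloss):
    """Combine a specular (RGB) map and a glossiness (greyscale)
    map into a single RGBA texture: gloss rides in the alpha
    channel, and black alpha masks off reflectivity entirely."""
    return [
        [(r, g, b, a) for (r, g, b), a in zip(spec_row, gloss_row)]
        for spec_row, gloss_row in zip(spec_rgb, gloss)
    ]

spec  = [[(255, 240, 220)] * 2] * 2   # warm specular tint
gloss = [[255, 0], [128, 64]]         # per-pixel glossiness
shiny = pack_shiny(spec, gloss)
print(shiny[0][1])  # → (255, 240, 220, 0): alpha 0, so no shine here
```

Varying that alpha across the texture is what lets one material carry both mirror-bright and completely matte areas at once.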
What does this all end up looking like?
Finally, here’s what all of these texture maps look like, put together.
With the normal map added, the circle now has a bit more dimensionality. We are also able to see how specular highlights become less noticeable as the camera moves away from viewing them face-on.
If we add back that simple baked shadow texture from above, we can give this surface even more depth!
Together, these texture maps have extensive application. You can add iridescence to a Christmas tree ornament, for example, or texture to an elephant’s skin. You can add droplets of water to a leaf or provide a scaly sheen to a dragon’s hide.
Hopefully this provides a good starting-off point for your creations – I’d love to see what you come up with! If you have any questions, please feel free to give me a shout in-world on my main account (Aki Shichiroji) and I’ll be happy to provide feedback or help as necessary.
Hopefully I’ll have more to discuss release-wise, since Candy Faire is imminent! Additionally, there are some other Organica releases in the pipeline. See you next week!
Did you know I have a Patreon account? If you enjoy this content, please consider becoming a Patron! It helps me create more like it and offers a variety of rewards. Alternatively, if you like the sorts of things that I make for Second Life, drop by my Marketplace listing or my in-world stores and check out what I have to offer!
Unless otherwise noted, I (Aki Shichiroji) and this blog are not sponsored in any way. My thoughts are my own and not indicative of endorsement by any associated or discussed product/service/company.
There has been a lot of talk, recently, with regard to what is and isn’t possible within the platform when it comes to content creation and detail. We see this complaint come up commonly with all content, but more recently this has become more of a touchy issue with the coming of Animesh (animated mesh) objects, which are currently being tested on the Beta grid.
As things go, Animesh is currently limited to 20K tris per linkset, which means that content creators have to be very cautious about the complexity of the models they intend to use.
The most immediate use case being presented is full-scale animated NPCs based on existing mesh content (bodies, clothing, hair). Additional use cases include accessorizing rezzable pets, customizing vehicles, and more.
However, the efficacy of Animesh in terms of accomplishing those goals is questionable.
As things stand, the current limitations (20K tris per linkset, minimum 200LI cost for animated mesh objects) are deliberately conservative, so as to accurately assess graphics and server load under heavy use. These limitations are likely to change, but suggestions as to what degree have varied widely – at recent user group meetings, we’ve seen anywhere from 100-500K tris per linkset floated so as to accommodate clothing, hair and body mesh.
Those numbers might not sound like much on their own, but consider that the average fashionista avatar already lands somewhere in the 250-800K tri range. Today in SL, a room with multiple such avatars is only just manageable because we can now elect to filter out performance-heavy individuals using Avatar Complexity filters. (If you routinely let your viewer skip that filtering, chances are you spend a fair amount on a new video card every couple of years. Not everyone can afford that!)
There is no immediate indication that we will have any similar functionality with Animesh. Apart from the polycount restriction and LI, there is also no immediate restriction on how many animesh can be drawn by your camera. As such, placing multiple such linksets in a given area may well create a negative experience for a large portion of the SL userbase, who may not have the most up to date equipment for enjoying Second Life.
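As a back-of-envelope illustration of why a per-linkset cap alone doesn’t bound scene cost – the triangle figures here are my own illustrative assumptions, not measured numbers:

```python
# Illustrative assumptions, not measured data:
TRIS_PER_AVATAR = 500_000   # a heavy 'fashionista' avatar
TRIS_PER_ANIMESH = 20_000   # current Animesh cap per linkset

def scene_tris(avatars, animesh):
    """Total triangles the viewer must draw for one scene.
    Avatar Complexity filters can trim the first term; nothing
    currently caps how many animesh the camera can see."""
    return avatars * TRIS_PER_AVATAR + animesh * TRIS_PER_ANIMESH

print(scene_tris(avatars=10, animesh=0))   # → 5000000
print(scene_tris(avatars=0, animesh=50))   # → 1000000
```

Fifty capped animesh in view is already a million triangles with no filter to fall back on – and that’s before any cap easement.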
This raises a question for content creators – notwithstanding any easing of these restrictions, what can we do to create more efficient models for use as Animesh (or even for daily use on our own avatars)?
Design with efficiency in mind.
There are many workflows out there. Some of us are working with Blender, Maya, 3DSMax, SketchUp, ZBrush or even Marvelous Designer.
I am hesitant to point out any one workflow as being ‘bad’, but frankly some of these workflows are designed for higher-detail applications and not for immediate use in gaming.
Does this mean I think they shouldn’t be used?
Not at all, however it’s important for content creators to understand what kind of issues they are introducing to the viewer experience when they present un-optimized content to the consumer market, what the repercussions may be and how to mitigate them.
For example, Marvelous Designer allows designers to create garments based on traditional patterning and to see how those garments will fall on an avatar, but even with the recent addition of its quadrangulate functionality, it produces mesh with counter-intuitive edge flow. Additionally, the common practice with MD is to simply export a high-poly mesh to include fine details and call it a day, without regard to how that might impact the viewer in SL.
We have similar problems with ZBrush, which can handle millions of vertices at a time and which does have a means of retopology (making something less complex), but which still requires a lot of tweaking to create something with good enough edgeflow to work well in lower poly situations.
You can work from low-poly to high or high-poly to low based on your preference, but it should be noted that the average avatar doesn’t actually *need* Pixar-level graphic fidelity in their everyday SL experience.
Rather than importing garments to SL with every possible nook and cranny modelled in geometry, designers can (and should) make use of the tools afforded them by advanced materials! This can be done by baking down some of the details from their higher-poly models to diffuse maps but also by creating normal and specular maps that will take advantage of textures instead of geometry to create detail.
With this sort of workflow in mind, a 20K-tri blouse could easily be reduced to 4-5K tris with minimal detail loss.
Even more savings can be had if animesh are designed and modelled with these restrictions in mind, rather than cobbled together from multiple sources.
With a custom designed animesh human, for example, there is no need to include a full mesh body – only those parts which are visible need be included. Clothing, hair, accessories – all of these can be developed with efficiency in mind to fit the criteria for Animesh limits.
Level of Detail models are also helpful for reducing viewer load at a distance, given that Animesh do not currently become impostors at a distance (even though they express sped-up animations just as avatars do).
Of course, it’s helpful not to think of SL purely in terms of efficiency. We could all just wear stick-figures or rez them and call it a day… but if the visual element were removed what would be the point?
Instead, I’d love to see limitations on these resources to encourage both more efficiency as well as stylistic choices that deviate from the norm. There is a vast niche of style that continues to go untapped within the platform and I’d be really interested to see more interesting art styles rather than a constant push towards photo-realism, personally.
I’m pleased to announce Wilds of Organica will be participating in this month’s round of The Arcade.
This time you can play for up to 10 different static passerine and pelagic birds, which go great with the special exclusive reward item – a bronze birdcage!
The birds are static models, but four poses for each bird are available via exchange (rez your transfer-only prize in the blue circle noted in the accompanying notecard for an automated exchange).
Please note! While this gacha will make its way to the Wilds of Organica main store eventually, the cage itself will be exclusive to the event and is a mod/copy (with some copy-only scripts) item given out after 25 plays of the machine!
If you’d like your chance at it, be sure to visit The Arcade during the month of September to play the machine!
Have you heard about Project Bento yet? It’s an extension of the Second Life avatar, adding two new sets of limbs, a tail, face and hand bones, and more! It will bring a host of new possibilities to content creation in Second Life and is slated to arrive on AGNI (the main SL grid) by the end of Quarter 2.
I’ve got a new video up discussing new things I’m working on, as well as progress on Project Bento, which is just around the corner!
By the way, as I mentioned in the video, I recently wrote a new JIRA issue which focuses on maximum file size for animations. The current cap is 120kB & 60 seconds; while I feel 60 seconds is quite generous, the addition of so many new bones to the avatar – and the implicit necessity for many animations to include *both* rotation and location data – is likely to inflate the size of many Bento-related animations significantly. When I was putting together some simple standing animations for the wyvern, I found even a 300-frame animation – one that utilized a couple of spine bones, the hind limbs, wings and tail – put me over the limit. While I was later able to scale the animation down in frames over time, not all animations will be as simple as a short looping stand, so I am somewhat concerned about what this means for content creators who wish to utilize Bento bones to this degree.
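To show roughly why the cap fills so quickly, here’s a hedged back-of-envelope estimate – the per-key byte counts are my own assumptions about a compressed key layout (16-bit time plus three 16-bit components per key), not official figures, and headers are ignored:

```python
# Assumed per-key sizes, not official numbers:
BYTES_PER_ROT_KEY = 8   # U16 time + 3 x U16 rotation components
BYTES_PER_POS_KEY = 8   # U16 time + 3 x U16 position components

def estimate_kb(bones, frames, with_position=True):
    """Approximate animation payload in kilobytes, assuming every
    bone is keyed on every frame and ignoring file headers."""
    per_frame = BYTES_PER_ROT_KEY
    if with_position:
        per_frame += BYTES_PER_POS_KEY
    return bones * frames * per_frame / 1024

# ~20 bones (spine, hinds, wings, tail) keyed across all 300 frames
# with both rotation and location data already approaches the 120kB cap:
print(round(estimate_kb(bones=20, frames=300)))  # → 94
```

Under these assumptions, keying location as well as rotation doubles the per-frame cost – which matches my experience that Bento animations needing both blow past the limit far sooner than rotation-only ones.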