Corgi Week 3: Weighting the Avatar

There are a few methods of particular relevance when it comes to weighting the avatar:

Automatic Weights

This is a shortcut – one which attempts to assign each part of the mesh to the bones closest to it. Automatic weighting is achieved by right-clicking the mesh, then Shift + right-clicking the armature in Object mode, then parenting the former to the latter by pressing Ctrl + P. This brings up the ‘Set Parent To’ menu, from which ‘Armature Deform > With Automatic Weights’ should be chosen.
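If you prefer to script repetitive steps, the same operation boils down to a few lines of Blender 2.79 Python. This is only a sketch – the object names are placeholders for your own mesh and Avastar armature:

import bpy

# Placeholder names - substitute your own mesh and armature.
mesh = bpy.data.objects["CorgiMesh"]
armature = bpy.data.objects["Avastar"]

# Select the mesh first, then make the armature the active object,
# mirroring the click order described above.
bpy.ops.object.select_all(action='DESELECT')
mesh.select = True
armature.select = True
bpy.context.scene.objects.active = armature

# Parent with automatic (bone heat) weights - same as Ctrl + P > With Automatic Weights.
bpy.ops.object.parent_set(type='ARMATURE_AUTO')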

It is important to note here that with human avatars, automatic weighting can usually predict which parts of the mesh should be associated with which bones. The same cannot necessarily be said for avatars that depend upon a modified skeleton (like this Corgi) – sometimes Bone Heat works, sometimes it doesn’t. In any case, there will always be some degree of tweaking required afterward, so ‘Automatic Weights’ should be considered a useful tool in most cases, but not a magic bullet. This is why the following two methods are also very important to learn.

Manual assignment:

To accomplish either of the following two methods, the mesh needs to be parented to the armature. This can be done either by choosing ‘With Empty Groups’ from the Armature Deform options in the Set Parent To menu (reached by selecting the mesh, then the armature – both in Object mode – and pressing Ctrl + P), OR by selecting the mesh and adding an Armature modifier, taking care to point the ‘Object’ field in that modifier to the appropriate armature.
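For the script-minded, both options look roughly like this in Blender 2.79 Python – treat it as a sketch, with placeholder object names:

import bpy

mesh = bpy.data.objects["CorgiMesh"]       # placeholder names
armature = bpy.data.objects["Avastar"]

# Option 1: Ctrl + P > With Empty Groups. Creates one empty vertex group per
# deform bone, ready to be filled in manually.
bpy.ops.object.select_all(action='DESELECT')
mesh.select = True
armature.select = True
bpy.context.scene.objects.active = armature
bpy.ops.object.parent_set(type='ARMATURE_NAME')

# Option 2 (instead of option 1): add an Armature modifier yourself and point
# its Object field at the armature. No vertex groups are created for you.
mod = mesh.modifiers.new(name="Armature", type='ARMATURE')
mod.object = armature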

Once this is done, the mesh can be weighted using either or both of the following methods:

A) Assignment as an Edit Mode property – by selecting a single vertex or a whole group of them in Edit Mode, you can affect their bone weighting from the Vertex Groups panel in the Object Data properties tab, adjusting the associated weight and then hitting either ‘Assign’ or ‘Remove’. Vertices associated with a given bone can also be selected or deselected from this same panel.
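Under the hood, method A is just editing vertex groups – each deform bone has a vertex group of the same name. Here is a small sketch of the equivalent in Python; the group name, vertex indices and weight are all placeholders:

import bpy

mesh = bpy.data.objects["CorgiMesh"]                                          # placeholder
group = mesh.vertex_groups.get("mTail1") or mesh.vertex_groups.new(name="mTail1")

# Run in Object mode; these indices stand in for whatever you had selected.
tail_verts = [101, 102, 103]
group.add(tail_verts, 0.75, 'REPLACE')   # like setting Weight to 0.75 and clicking Assign
# group.remove(tail_verts)               # like clicking Remove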

B) Weight Painting – This method allows you to visualize the degree to which verts are weighted to a particular bone through the use of colour. It allows you to use a digital brush to add, subtract, draw, lighten, darken, blur or otherwise affect bone influence, which in this mode is represented by a gradient of colour ranging from blue (no influence) through yellow (middling influence) to red (full influence).

There are pros and cons to each of these methods; almost 100% of the time, I use the second method *after* having used the first, in order to make the result look more natural.

The weighting process with the cute, fuzzy and not-at-all skinny corgi has been, inevitably, a bit different (and long-winded) compared to weighting the sleek & non-squishy Drider avatar covered not too long ago.

I’ve always found this to be the case – coming to a happy medium between influence from multiple bones in a soft mass is a very organic process that depends heavily upon an understanding of what you want to move, and where.  Don’t be discouraged if this doesn’t work out right away. Understanding a lot of this comes with experience & experimentation.

Whereas you can (for the most part) assign heavy influence of a rigid mass to a single bone, rigging to ensure smooth movement along a curvy mass often requires more of a gradual transition – sometimes extending well past the immediate location of the bone.

For example, you could weight the mesh along a tail rigidly, but when it comes time to move it, the mesh will look overly faceted and read as unnatural even from afar.

Adding geometry judiciously at this stage is a good way of achieving a more natural look, and it’s also a great opportunity to smooth out weights along those new edge loops. It’s at this stage that I have added more geometry to critical areas, such as the joints and the tail.

Maintaining a low poly-count to start with is very helpful in reducing additional work when it comes to correcting delicate bone weights, but it’s also in these cases where adding intervening geometry is appropriate. This is why, despite having used a Subdivision Surface modifier to visualize, I have not applied such modifiers permanently to my mesh. Being able to easily select and divide up edge loops and rings manually gives me the greatest ability to create more natural shapes while maintaining clean edge-flow.

The weighting and animation processes are inevitably intertwined. In the next little while, I’ll not only be animating but correcting vertices with stray weights as well. I will often be animating, find that a certain movement deforms the mesh badly, and as a result need to go back to editing weights to fix it.

It’s also during this process that any final joint position tweaks should be made. As mentioned in the previous post, it’s important to ensure any such position changes are carried out on *both* the Control bones (green) and the Deform bones (blue, purple, red). Failing to do so can cause some unpredictable results upon export (which happened here some time ago while working on the Yeti).

So far, I’ve explained the concept of these weighting methods and discussed a few pitfalls, but I’d like to delve a little deeper next week with some video content, demonstrating the use of these weighting tools in greater detail. If you’re looking to learn about weighting with Blender & Avastar and have any particular questions for me to work into these videos, please leave a comment here or drop by my Discord server for a chat within the next couple of days! (March 4 at the latest, please!)


If you enjoy what I’m doing here or think someone else might also find it of use, please feel free to share this blog with them. If you’d like to keep up to date with posts, the RSS feed for this blog is here; I can also be found on Twitter and Plurk. The Discord server is here.

If you really like my stuff, perhaps consider donating to my Patreon? Your continued support helps to produce weekly content (written, modelled, animated or otherwise) and helps to keep original content creation in Second Life!

Thanks for your support!

Corgi Week 2: Using Blender 2.79+ & Avastar 2.X, Part 1

Just a note! Apologies for the late post. Due to some unforeseen circumstances, I’ve been really occupied this week prepping proposals at work and dealing with RL matters. With that said, this article is gonna run long, so I’m splitting it into two more manageable sections.

Let’s begin!

Repositioning joints

The joint editing process has been refined a bunch lately – particularly with the most recent iteration of Avastar’s 2.X plugin, which now only works with Blender 2.79 and above.

Of note, the folks behind Avastar were able to iron out pretty much all of the kinks when it comes to ‘snapping’ one set of armature bones to the other and vice-versa. Whereas previously there were some issues with spine bones not translating properly, they seem to work well now. I can’t say that I’ve tried doing any significant constraint editing yet (such as in previous cases with the weeping willow and drider rigs) but I’m happy to see this functionality relatively stable now.

With the current setup, you can select your Avastar armature in Object mode, then switch to edit mode and either select Animation bone groups or Deform bone groups to work with from the Avastar tab menu on the left-hand side.

Joints can be repositioned by selection and translation, just as you would a vertex, edge or face.

If you are significantly changing the position of the hip and shoulder joints, also be sure to click ‘Enable’ next to ‘Structure’ on the Rig Config menu to allow structural joints in those areas to move; otherwise you’ll have a hard time moving the heads of the mCollar/Collar and mHip/Hip bones. Additionally, somewhere along the way, Blender finally fixed X-mirroring for armatures, so you should definitely make use of this option (found in the ‘Options’ tab when you’re in Edit Mode for your armature) in cases where you would like to maintain symmetry across your armature.
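As a rough Python equivalent of the joint nudging and mirroring above (Blender 2.79 API; the armature and bone names are placeholders, and Avastar’s own naming may differ):

import bpy
from mathutils import Vector

armature = bpy.data.objects["Avastar"]        # placeholder
bpy.context.scene.objects.active = armature
bpy.ops.object.mode_set(mode='EDIT')

# Keep symmetrical joints in sync while editing.
armature.data.use_mirror_x = True

# Move a joint just as you would a vertex: translate the edit bone's head.
bone = armature.data.edit_bones["mHipLeft"]   # placeholder bone name
bone.head += Vector((0.0, 0.02, -0.05))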

Here, I referenced dog skeletons and bone placement for general positioning. It might seem like the outer shape is what one should pay attention to when rigging, but it’s important to look at the analogous joint positions in your critter’s real-life counterpart and to place your own joints similarly. This will help to keep your animations as natural as possible later.

 

Once the joint positions are roughly where they need to be with the Animation bone group, ensure that the corresponding Deformation bone group positions are in place by clicking ‘Cleanup Rig’ with ‘Target Animation Bone Group’ (green bones) selected in the following menu.

The alternative is to simply show both the Animation AND Deformation bone groups while in Edit mode and to adjust their joints concurrently.

You can check that the effect has been applied correctly by toggling visibility of said bone groups and ensuring that they match and overlap precisely. Once you have ensured that this is the case, it’s time to parent the avatar mesh to the skeleton.


The next post will be up in the next couple of days – in the meantime, if you like what you see but don’t think it’s quite right for you, perhaps consider donating to my Patreon? Your continued support helps to produce weekly content (written, modelled, animated or otherwise) and helps to keep original content creation in Second Life!

Corgi Week 1: Working with modifiers in Blender to help visualize advanced geometry efficiently

This week I figured I’d get started on some Wilds of Organica-related items, since the last couple weeks have been pretty landscaping and decor-heavy.

Linden Lab have taken the past few weeks off as far as Content Creation user group meetings go – they will resume tomorrow (Feb 15, 2018) at 1PM SLT, on the Animesh 1 region (on the ADITI grid).

It’s hard to gauge how much longer the testing period will take. When we last met, it seemed like development was ongoing, but it felt like we were moving a bit closer toward final performance and animesh limits testing. There has been some push in terms of increasing the triangle limit; however, I’m still of the opinion that 50K is far more than enough.


With that said, I’ve begun development on a new WoO product this week that would likely see use as either an avatar, animesh, or both. We’ll see how development goes moving forward – I’m excited to see just how well it can be implemented, given the positive experience I’ve had popping in animations and some basic scripts for existing content thus far.

When I’ve chosen a real-world subject, typically I’ll look up some basic reference material to get the proportions down. In this case, it’s mostly been looking up corgis on Google Image search. It’s easy to get caught up researching cute critters and your search results are likely to be the same as mine, so suffice it to say that I use such references as a touchstone to develop an idea of what age and proportion I want my avatar to be.

My preference, from a stylistic point of view, is to avoid being photorealistic in my translation of a real dog to an SL dog. I don’t really enjoy the uncanny-valley look that often comes from taking references straight from photograph to projected final texture, so these photos usually serve only as general proportional references later on, when I paint my fur textures manually.

In this case, I was going for non-puppy, but still on the young side, just to get a nice balance between lovable huge ears and adorable elongated & lowered carriage. I’ve also elected to go the non-docked route, although a bobtail option might be in the cards at some point.

The early hours of my avatar making process usually start with something very simple – like a box with some very simple extruded faces, to block out limbs, head, etc. Once I feel I have the main proportions down, I will usually toss on a few modifiers to help me get a clear idea of what the final product will look like, even if I might not apply all of these modifiers by the end of the project.

 

By using a SubSurf modifier, I can non-destructively visualize how my geometry might look if I were to subdivide and smooth. I don’t typically ever apply this modifier permanently, because it’s too easy to just apply it and call it a day without addressing some of the geometry problems that it introduces.

In particular, SubSurf tends to cause geometry to form vertices which branch off in three or five directions, which is not ideal from an edge-flow perspective. It’s also typically a lot more efficient to add edge loops and rings only where necessary, as opposed to allowing a modifier to add them all over your model.

I tend to add some edge loops and rings at this stage, to get the basic silhouette established, but tend to add more later on, during the rigging stage, for added flexibility and attention to specific bone weights. One might conceivably use the ‘Simple’ subdivision method rather than Catmull-Clark; however, that method does not apply any vertex-to-vertex smoothing and, at least for this use case, adds little to the workflow.

I will typically mix the SubSurf modifier along with a Mirror and EdgeSplit modifier.

The Mirror modifier allows me to model on one side-only and have those changes propagate symmetrically.

I usually add an EdgeSplit modifier to help visualize the edge-flow of my model – in this case to help define key features in the surface as well as to give some definition to the fur.

I usually turn off angle calculations for this modifier, opting instead to define my sharp edges manually by right-clicking a sequence of edges, hitting Control E, then selecting ‘Mark Sharp’. Basically I want to adjust the geometry based on shapes I want to add to it, rather than adjusting the geometry based on what I have so far.

This helps to give the mesh a more distinct silhouette and to give me a better idea of how the geometry will need to be broken down later, from the standpoint of someone who needs to rig and animate. As with the SubSurf modifier, unless I actually need a very sharp edge in my final model, I usually don’t apply this modifier permanently either, since the resulting edge-splits create duplicate vertices that are not always necessary.
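For reference, the whole visualization stack can be set up in a few lines of Blender 2.79 Python – a sketch with a placeholder object name, and nothing here gets applied permanently:

import bpy

mesh = bpy.data.objects["CorgiMesh"]          # placeholder

# Mirror: model one side, see both.
mirror = mesh.modifiers.new(name="Mirror", type='MIRROR')
mirror.use_x = True

# EdgeSplit: angle-based splitting off; only edges marked Sharp (Ctrl + E > Mark Sharp) split.
split = mesh.modifiers.new(name="EdgeSplit", type='EDGE_SPLIT')
split.use_edge_angle = False
split.use_edge_sharp = True

# SubSurf: Catmull-Clark smoothing for preview only.
subsurf = mesh.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.subdivision_type = 'CATMULL_CLARK'
subsurf.levels = 2                            # viewport preview level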

Below, you can see the difference between my Mirrored model (with no other modifiers added, but faces rendered as smooth), my mirrored model with edge-split, and finally with SubSurface divisions.

Once I am happy with this edgeflow and overall shape, I will usually begin modifying the Avastar Extended Bento skeleton to fit the avatar.

We’ll cover this, along with rigging our corgi, next week.


If you like what you see but don’t think it’s quite right for you, perhaps consider donating to my Patreon? Your continued support helps to produce weekly content (written, modelled, animated or otherwise) and helps to keep original content creation in Second Life!

Willow Tree Process (Part 4) – Rigging and animating

A small amount of downtime over the past couple of days has given me the opportunity to move forward with my Animesh Willow experiment.

At this point, I have to mention that this is all it is – an experiment. In the course of playing with animating a tree, I ran into a number of hurdles, and I’ll have to consider whether I want to get around them before any possible release. (I’ll go into these a little later.)

From the first hint that animesh might be a thing, I’d been thinking about using it for more efficient modelling of animated vegetation. Willows are the most obvious candidate for me, since I’ve long avoided creating more of them.

Original solutions for willows have historically included flexiprims, and while these may still prove useful, I wanted to see what I could come up with that wouldn’t be so taxing on the viewer. The opportunity to create something that isn’t so heavily dependent on SL wind is also promising.

My willow tree armature required  some significant modification of the default Bento avatar armature.

Currently, Avastar allows a user to select and move bone joints for either the blue/purple SL armature or green Control Bones in edit mode, then to align them to their counterparts. This is what I did and (so far) I haven’t needed to adjust any of the parenting for this rig.

I opted not to make use of the lower limbs (for now) because doing so can present some orientation issues due to how bones are parented. If I need to in the future, I may put in more time to figure this out, but in this particular use case I chose to just use the bones from the torso up – arms, hands, wings, neck and head (no face) – simply because these would handle the geometry sufficiently.

The result is, in a very general sense, positive.

For the most part, the trunk was parented to bones which are logically closer to the middle of the skeleton – torso, chest, collarbones, upper and lower arms, neck, head, and so on. Most of the fingers got assigned to equidistant areas around the trunk, for foliage.

In hindsight, I would probably rig and model concurrently. Because there was a significant amount of foliage geometry mixed together, selecting appropriate foliage and assigning it to its nearest bone was a bit tedious. Doing this a bit at a time to ensure proper movement would have been the better way to go.

Fortunately, Avastar offers a means of checking for unweighted verts, so this process was made a bit easier as a result.

Weighting was undertaken mostly using the weight painting brush, but occasionally I would also hold down Ctrl while making my brush strokes to create a gradient of weights for my selected vertices.

Because there were so many vertices in relatively close proximity, I selected the bones I wanted in weight-painting mode, then hit ‘V’ to show vertices. I then selected the vertices I wanted to paint (rather than painting on everything)  and brushed on only the areas highlighted by the selected vertices.

Animating the tree:

Once all of the vertices in the geometry were assigned, it was time to try some basic animation. So far, I’ve just put together a basic sway animation as a test case, but I may continue to create a variety of other animations the tree can play on command.

In order to create an animation, I split off a window pane in Blender and switch it to ‘Dope Sheet’ view. This gives me a frame-by-frame listing of bones for which location and/or rotation* has changed, over time (in frames). There are other more detailed and useful views you can use for animation, but this is the most basic view you’ll need right away.

(* Scale changes are ignored by SL, both on the armature and animation side.)

The Dope Sheet operates mostly from left to right, although it also lists the bones which have been keyframed as ‘channels’ down the left-hand side. When a bone is selected in the 3D view, the appropriate channel will highlight in the Dope Sheet. On the flipside, you can also left-click the name of a bone in the Dope Sheet to select that bone in the 3D view.

To animate, we need to ‘keyframe’ a set of changes in rotation and/or location and have Blender interpolate the transitions from keyframe to keyframe. In this case, the chief transforms we need to key will be rotations.

To begin, I select every bone in the armature and insert a keyframe for the current rotation (hotkey I, then select ‘Rotation’). This will be my starting frame.

Next, we need to create the second position for the appropriate bones. Since I am only moving the hanging foliage, I select the appropriate bones (mostly just finger bones) and rotate them in the general direction I want.

Then, since I just want to test and loop motion between these two keyframes, I select all of the points from the first keyframe, duplicate them and move them to where I want my end frame to go, allowing the animation to seamlessly move from the last frame to the first when it loops.
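Scripted, the keyframing above looks something like this (Blender 2.79 Python; the armature name, bone prefix, angles and frame numbers are all placeholders):

import bpy
from math import radians

armature = bpy.data.objects["WillowRig"]              # placeholder
bpy.context.scene.objects.active = armature
bpy.ops.object.mode_set(mode='POSE')

# The hanging-foliage bones; the name prefix here is just an example.
swaying = [b for b in armature.pose.bones if b.name.startswith("Hand")]

for bone in swaying:
    bone.rotation_mode = 'XYZ'

    # Frame 1: key the rest rotation.
    bone.rotation_euler = (0.0, 0.0, 0.0)
    bone.keyframe_insert(data_path="rotation_euler", frame=1)

    # Frame 15: key the swayed rotation.
    bone.rotation_euler = (radians(8.0), 0.0, 0.0)
    bone.keyframe_insert(data_path="rotation_euler", frame=15)

    # Frame 30: duplicate the first pose so the loop closes seamlessly.
    bone.rotation_euler = (0.0, 0.0, 0.0)
    bone.keyframe_insert(data_path="rotation_euler", frame=30)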

 

Next, we need to define our export settings to convert these keyframes to a full blown animation that can be used in Second Life.

Of note: Normally, frames per second (FPS) is set around 24. This particular animation has been slowed down significantly such that only two frames play per second, for a much more subtle effect. This can be played with depending on application – sometimes I will tinker with this to speed up or slow down walk-loops for avatars.

By default, I export .ANIM files instead of .BVH files – I don’t play much with the system morphs that come with .BVH and in this case, such morphs (system avatar-based facial expressions, hand gestures) are not applicable to this sort of content.

Once I have defined the start and end frame for the animation as well as the start and end frame for the loop (not always the same!), I click ‘Export: AvatarAction’ and save it with an appropriate file name.

In-world, I enable my willow as an ‘Animated Mesh’ object and drop the animation into the mesh. Some additional scripts are needed to make use of this animation – some sample scripts to get you started can currently be found on the Animesh regions on the ADITI grid. Hopefully we’ll see some more sample scripts on the wiki soon too.

The result:

Current downsides:

  • Animesh objects currently can’t be resized. They make use of the armature, whose size is defined upon upload. It may be necessary to create several different sizes for variety and, depending on application, special attention to scaled animations may be necessary as well.
  • Transparent textures placed upon Animesh-enabled geometry currently do not cast a correct shadow.
  • Base 200LI – this is likely to change for the better. Vir Linden has maintained that the current 200LI base is a placeholder, mainly intended to be more restrictive than the ultimate release will be. Once I have a better idea of the base cost, I’ll have a better idea of whether I’d like to move ahead with further LOD optimization and more detailed animations.

So for now, this willow will be on my backburner until we have more info from the weekly content creation meetings (Thursdays at 1PM SLT, Animesh 4 region on ADITI grid).

In any case, I wish you all a very Happy New Year!

I’ve had the fortune of being able to pick up more work in the past year, as well as the opportunity to share my thoughts and new releases with you here on the blog – I’m really looking forward to keeping the ball rolling in the coming year and hope to have more to share with you soon!


If you found this or any other of my articles helpful, please consider becoming a Patron! Doing so supports further articles of this kind and my content creation in general.  Alternatively, if you like the sorts of things that I make for Second Life, drop by my Marketplace listing or my in-world stores and check out what I have to offer!

Unless otherwise noted, I (Aki Shichiroji) and this blog are not sponsored in any way. My thoughts are my own and not indicative of endorsement by any associated or discussed product/service/company.

 

Willow Tree Process (Part 3) – Foliage In the Round, Trunk texturing too!

Last time, we left off with the start of some great foliage for our willow tree, but the placement overall was a bit sparse.

Today, we’ll look in to ways of bulking up the foliage so that it looks more healthy.

At this stage, the easiest way to develop a stronger silhouette from all angles is to consider the foliage as multiple pieces of a whole, each varying in size but as a whole ‘mounding’ or ‘padding’ in key areas.

There are a few different techniques available for the tree-making process, but because we’re dealing with a tree that has somewhat out-of-the-ordinary foliage, I’ve chosen to create planes of geometry which have been mapped to parts of a larger texture and to have each of these planes intersect at a common area, to simulate a branch.

 

Here, I’ve used much the same process as last time to create a variety of different foliage shapes based upon some underlying branch drawings.

The same leaves and stems we used for the sideways texture are repurposed here, again with the help of bezier curves, which allow for non-destructive manipulation of geometry when a ‘curve’ modifier is added to the mesh object.

A gentle sweeping shape is added to the plane to simulate the slight upward growth, then strong downward plunge of foliage due to gravity. Once I have a shape I’m happy with, the geometry gets duplicated and resized, then I’ll take the geometry and map it to a different strand of foliage within the same UV map for some variety.

 

 

At this point, I split up my 3D view so that one pane uses Rendered shading and the other Solid or Wireframe, in order to place each piece so that they intersect properly.

Once I have a cluster of this type of foliage that I’m happy with, it gets placed in strategic places where the other foliage type was lacking. It can also be helpful to hide the other foliage material temporarily to aid in clear placement.

It’s important to take multiple angles into consideration here; while it’s not always possible for an object to look good from all angles, the goal is to create visual interest through a play between areas where there is foliage and areas where there is not.

There’s still a ways to go in terms of filling out volume from the top-down view, but progress is being made!

My immediate priority is to create an effective silhouette along the top surface of the tree. Then, I do the same working from a top-down view, taking care to create leaf cover in trunk/branch areas which are still bare.

It’s during this stage that some experimentation in balancing the different foliage geometry shapes is important. I started out using a variety of upright planes to create the impression of volume from the front view, but adding rounded foliage makes a big difference! There’s still a lot of push and pull to go, but this has come a long way compared to the tree we were left with by the end of last week’s post.

Also, you might notice that I got around to texturing the trunk; this was accomplished by importing a .OBJ copy of the trunk to Substance Painter and working with the tools therein.

I usually start with a base wood material, but never leave it as is. For one thing, Substance Painter still isn’t smart enough to figure out how to hide seams, and for another, I like to add a lot of little touches to make the look a little more unique.

In this case, I created another layer overtop of the wood and used a scratchy brush to create the deep furrows this tree’s bark tends to have. The brush included both a diffuse and height element so that I could give the impression of accumulated dirt and shadow, paying particular attention to seams and minimizing the tonal differences in these areas.

I then also made use of a particle brush to blow some dust and grime all over to add a bit more age and wear to the texture.

These textures were then exported using my usual PBR SpecGloss configuration (the default preset in the exporter) and added back to the model in Blender for one more rendering pass, since I wanted just a bit more kick than the plain textures would provide, given SL’s somewhat limited material shaders.

Moving forward, I’m likely to do a bit more balancing of foliage to make it a bit more subtle, but the basics are there.

Next week, I hope to have enough time to experiment with rigging & animations, plus consider the feasibility under current testing conditions.


If you found this or any other of my articles helpful, please consider becoming a Patron! Doing so supports further articles of this kind and my content creation in general.  Alternatively, if you like the sorts of things that I make for Second Life, drop by my Marketplace listing or my in-world stores and check out what I have to offer!

Unless otherwise noted, I (Aki Shichiroji) and this blog are not sponsored in any way. My thoughts are my own and not indicative of endorsement by any associated or discussed product/service/company.

 

Willow Tree Process (Part 2)

Today, I figured I’d touch on my process for creating textures.

While many folks prefer to use a photograph for their texture, I’ve always worked from scratch, creating my own textures digitally, while referencing a large number of photographs for ideas and clues about growth habit.

With respect to trees, I usually start with a few variations on a base leaf, taking care to work out the base silhouette.

In the case of weeping willows, the leaves are narrow, oblong, and taper gradually. While the final product will ultimately be much smaller and not show small details like serrated edges, I usually add them anyway, along with veins, so that these elements can give hints of themselves later.

It’s usually a good idea to create a variety of different leaves, even if they are a slight modification of one base shape. This allows the final branch texture to have some variation to it, even if, at a distance, the differences are small.

Sometimes, the use of traditional media for texturing is helpful too. I have used my share of drawing tablets, but (even considering Cintiq tablets) none of them can truly replicate the intuitiveness of simply taking pen or pencil to paper and drawing. Sometimes it’s just easier to sketch out a base to work from and clean it up or paint over it, rather than drawing and erasing ad nauseam via tablet, and this is what I’ve done here.

This and some other branches were drawn with pencil, scanned, cleaned up and painted over.  Using this process, I was able to put together a sideways branch, which is now at a prime stage for the addition of leaves in Blender.

I usually start by unwrapping the UV of a plane to fill the whole area of a UV layout matching the proportions of my leaf texture. In the Node Editor, this object gets assigned a material with the leaf texture as a diffuse map. I additionally assign transparency to the material, using transparency from the texture to be the deciding factor in what gets rendered.

The plane gets cut up so that each piece of geometry gets a different leaf. I then also bring in the branch texture and put it on a vertical plane object (using a similar node setup as above) by adding it to my Diffuse Map node in the Node Editor.
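For those who’d rather build the same material by script, here is a minimal Cycles node setup along the lines described above (Blender 2.79 Python; the image path is a placeholder):

import bpy

mat = bpy.data.materials.new("WillowLeaves")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

tex = nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images.load("//textures/willow_leaves.png")   # placeholder path

diffuse = nodes.new('ShaderNodeBsdfDiffuse')
transparent = nodes.new('ShaderNodeBsdfTransparent')
mix = nodes.new('ShaderNodeMixShader')
output = nodes.new('ShaderNodeOutputMaterial')

# Colour drives the diffuse shader; the texture's alpha decides what gets rendered at all.
links.new(tex.outputs['Color'], diffuse.inputs['Color'])
links.new(tex.outputs['Alpha'], mix.inputs['Fac'])
links.new(transparent.outputs['BSDF'], mix.inputs[1])    # Fac = 0 -> transparent
links.new(diffuse.outputs['BSDF'], mix.inputs[2])        # Fac = 1 -> leaf colour
links.new(mix.outputs['Shader'], output.inputs['Surface'])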

Once this is in place, I divide the Blender windows such that I can view a preview in Render mode on one side as well as edit in either Texture or Wireframe mode on the other. This allows me to move leaf textures to match the branch texture relatively quickly, while still seeing the results (and how the transparent textures interact with each other) in real-time.

In this case, leaf geometry was laid out and duplicated with an Array modifier and also given a curve modifier, so that the geometry would conform to an extruded curve (to act as a stem). This allowed me to move and deform the long string of leaves in any way I wanted.
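The Array + Curve combination boils down to two modifiers – a sketch in Blender 2.79 Python, with placeholder object names:

import bpy

leaf = bpy.data.objects["LeafPlane"]          # placeholder
stem = bpy.data.objects["StemCurve"]          # a bezier curve acting as the stem

# Array: repeat the leaf geometry along one local axis.
arr = leaf.modifiers.new(name="Array", type='ARRAY')
arr.count = 12
arr.relative_offset_displace = (0.0, 1.0, 0.0)   # stack copies along Y

# Curve: deform the whole strand along the stem so it can droop and sway.
curve = leaf.modifiers.new(name="Curve", type='CURVE')
curve.object = stem
curve.deform_axis = 'POS_Y'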

Special consideration is made to maintain variation and depth. Being able to use a 3D program to put together this texture means that I can take the time to create parts of the foliage which move forward or recede. Setting my texture workflow up this way also means it would be easy to replace the leaf texture later for other texture sets (fall colours, for example).

Once I have an arrangement I’m happy with, I add a solid emissive blue background, set up some appropriate lighting, position the camera and take a render (F12) of the camera view.

The result gets saved and opened in Photoshop. I select as much blue as possible, then delete it from the layer, leaving behind a transparent background. I then add a Hue/Saturation adjustment layer and de-saturate any remaining blue colour on the preceding layer.

Any additional cleanup should be done to the texture at this stage. I save a .PSD file as well as a .PNG at full size, then I repeat the placement process for branches along the full trunk. Once I have finalized placement, the file gets saved again as a .TGA, with an appropriate background & alpha channel and at a more SL-appropriate image size.
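If you’d rather automate the blue removal, here is a rough chroma-key sketch using Pillow and NumPy instead of Photoshop (the file names are placeholders, and the thresholds would need tuning per render):

from PIL import Image
import numpy as np

img = np.array(Image.open("willow_render.png").convert("RGBA")).astype(np.float32)
r, g, b, a = img[..., 0], img[..., 1], img[..., 2], img[..., 3]

# Treat strongly blue pixels as background: knock out their alpha entirely.
background = (b > 1.5 * r) & (b > 1.5 * g)
a[background] = 0

# De-saturate any remaining blue fringing on the leaf edges by pulling the
# blue channel down toward the average of red and green.
fringe = (b > r) & (b > g) & ~background
b[fringe] = (r[fringe] + g[fringe]) / 2.0

out = np.dstack([r, g, b, a]).clip(0, 255).astype(np.uint8)
Image.fromarray(out, "RGBA").save("willow_branch.png")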

There can be a lot of experimentation at this stage and the solution, for trees, isn’t always a flat billboarded texture. As it stands, this tree still looks a little spare!

In my next article, I’ll show what additional geometry and texture work goes in to making the tree look believable from multiple angles.


If you found this or any other of my articles helpful, please consider becoming a Patron! Doing so supports further articles of this kind and my content creation in general.  Alternatively, if you like the sorts of things that I make for Second Life, drop by my Marketplace listing or my in-world stores and check out what I have to offer!

Unless otherwise noted, I (Aki Shichiroji) and this blog are not sponsored in any way. My thoughts are my own and not indicative of endorsement by any associated or discussed product/service/company.

Texture Maps and How to Use Them

This week has been a bit of a mish-mash from a work perspective – not only is Candy Faire coming up but we’ve gotten a major go-ahead with a work project, as well as some additional requests that I’ll touch on later this month.

Today, I figured I’d discuss some techniques with regard to texture maps and how they can provide additional detail to your work.

The Advanced Lighting Model has been a part of the SL viewer for around five years, yet content creators still quite seldom make use of it to its greatest extent.

There is a common misconception that the Advanced Lighting Model is only usable if you can view shadows and projectors, but that’s not actually the case. The option to turn off shadows and projectors may be in a different spot depending on which viewer you use, but on the mainstream viewer, it’s located here:

So with that addressed, how can Advanced Lighting Model help us?

As I touched on during the last article, a major part of making more efficient content is finding ways to add detail without adding costly geometry, and a large part of that has to do with the texture map options provided by the ALM functionality.

What do all these texture maps do?

Diffuse (Texture tab)

Diffuse maps are the basic textures we use to give objects their flat colour. On their own, they do not provide any effect with regard to how shiny or bumpy an object may be, instead offering only a means to see colour on the object under diffuse lighting conditions (IE: no reflections or projections).

Diffuse grey colour with no baked detail
In-World GIF. Click to animate!

We might sometimes ‘bake in’ more details to a diffuse texture, but it’s important to note that these details will be static and not change with differing light conditions.

 

In-world GIF! Click to Animate. Notice how the white highlight doesn’t change?

 

Normal maps (Bumpiness tab)

In SL, the Bumpiness tab uses ‘Normal’ maps, which tell the viewer which angle each pixel should face and to what degree those pixels should recede or protrude. This shouldn’t be confused with bump or height maps, which use a greyscale value to define how far a given pixel is raised or recessed – SL does use these to some degree, but not for the same purpose as one might expect.

Height maps become more commonly used when it comes to applications in SL where a strict height difference (and lack of angle info) is used – IE: height maps for full regions – but I won’t be going into that today.

A spherical normal map
In-world GIF – click to animate!

Under the right circumstances, a normal map can make a big difference between a flat plane and a flat plane with rounded buttons. The addition of normals is most noticeable when combined with Shiny maps.

Shininess (Specular)

Shiny maps are a combination of two different types of maps – Specular and Glossiness maps.

Specular maps control how much light gets bounced back from an object’s surface, as well as what colour that light would be. Glossy maps control how shiny or matte something is.

Combined, Glossy and Specular maps can control how matte or shiny something is as well as how diffuse or sharp the reflected highlight will be.

Shiny maps are gloss and spec maps combined! Click to see this animated & notice the glossy ‘gloss map’ lettering and sphere.

Adding Material maps to your workflow:

Creating your own bump/height maps using a graphics editor:

Height/Bump maps are created in greyscale. Of note, it is typically not enough to simply convert your diffuse texture to black & white: not all lighter details on your diffuse texture are necessarily supposed to be closer to the camera, nor darker details farther away. As such, plainly converting a diffuse texture to a height or normal map without any changes isn’t always the best solution.

I  usually start with 50% grey as a background layer, then use additional layers to create depth. Anything I want to recede gets a darker shade.  Anything I want to come forward gets a lighter one.

Areas of higher contrast will see sharper bumps. Areas of lower contrast will be more subtle.

But Aki! You just got done telling us we usually don’t use height maps on objects!

While I did mention Normal maps are more commonly used within Second Life, the truth is most Normal maps are acquired by creating a height or bump-map, then having a program convert it for you.

Free:

With Photoshop, you can use the NVIDIA Normal Map filter.

Just follow the installation instructions and you’ll find it under Filters in Photoshop the next time you load the program up. In it, you can control the scale of your normal map’s height, sample granularity and more, but usually those two (height and granularity) are all that I change.

Alternatively, there are a fair number of web-based converters with varying degrees of customization. Notably:

Normalmap Online

For pay:

There are a variety of additional programs that will do conversions for you, usually (but not always) bundled in with other functionality.

Crazybump – offers a variety of licenses depending on use. For selling in SL, expect to purchase the professional license.

Quixel Suite – offers a variety of licenses depending on use. For selling in SL, an Indie license is usually sufficient. Of note, this suite includes nDo (which is the Normal generation component), dDo (which permits application of high detail scanned materials) and recently 3Do (which is a baking utility.)  Each of these can be purchased separately.

Substance Painter – This is a pretty commonly used standalone program amongst SL creators these days. It allows similar functionality to Quixel’s dDo, but also offers more in-depth painting-style application, along with procedural and particle brushes. If you have already created some height/bump maps, you can still use this to export Normal maps too! Content creators making under US$100K can safely use the indie license.

Creating your own bump/height maps using a modelling program:

I mostly just use Blender or ZBrush, but I’m including a series of videos that go through normal creation using the big four modellers used for SL content creation below.

All of these methods assume that you have a higher-poly version of your low-poly model and that your low-poly model has already been assigned a UV layout:

3DS Max (by BracerJack): https://www.youtube.com/watch?v=pGHirP8WE-I

Blender (free) (by Grant Abbitt): https://www.youtube.com/watch?v=o8giubIE1LY

Maya (by Academic Phoenix Plus): https://www.youtube.com/watch?v=aoxs5c1bjw0

ZBrush (by Duylinh Nyugen): https://www.youtube.com/watch?v=xaGMq1YJwio
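If you’d rather roll your own converter, a tangent-space normal map can also be derived from a greyscale height map with a short script. This is a sketch only, using NumPy and Pillow with placeholder file names – the dedicated tools above will filter more carefully:

from PIL import Image
import numpy as np

strength = 2.0   # comparable to the scale/height control in the NVIDIA filter
height = np.array(Image.open("bark_height.png").convert("L")).astype(np.float32) / 255.0

# Slope of the height field in X and Y (wrapping keeps tiling textures seamless).
dx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * strength
dy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * strength

# Build per-pixel normals and normalize them.
nz = np.ones_like(height)
length = np.sqrt(dx * dx + dy * dy + nz * nz)
nx, ny, nz = -dx / length, -dy / length, nz / length

# Pack from [-1, 1] into the usual 0-255 RGB encoding (a flat surface becomes 128, 128, 255).
normal = np.dstack([nx, ny, nz]) * 0.5 + 0.5
Image.fromarray((normal * 255).astype(np.uint8), "RGB").save("bark_normal.png")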

Creating your own shiny maps using a graphics editor:

Much like Height/Bump maps, both Specular and Glossy maps can be created with your favourite graphics editor or with a variety of other programs (Quixel Suite, Substance Painter). Most of these programs will give you the appropriate separate maps when exporting ‘PBR’ materials – that is, Physically Based Rendering – but you’ll have to combine the specular and glossy maps in your graphics editor later if you would like some variance in the matte/glossiness of the object.

A specular map, without any glossy channel applied.
Specular map only, applied to the grey diffuse texture – notice how the surface is uniformly glossy? Click for animation.

In my own work, the glossy map is applied to the specular map as an alpha channel. Any areas on the alpha channel that are black will mask off the reflectiveness of the object.

A shiny map – includes both specular highlights (inside the circle) and glossy elements (opaque vs transparent elements)
With the glossy alpha channel added to the specular map, we now have a shiny map. Notice how certain parts of the surface are now matte? (Click to animate)
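Packing the gloss map into the spec map’s alpha channel can also be done with a couple of lines of Pillow, if you’d rather not do it by hand in a graphics editor (file names are placeholders; both maps are assumed to share the same resolution):

from PIL import Image

spec = Image.open("vendor_spec.png").convert("RGB")    # specular colour/intensity
gloss = Image.open("vendor_gloss.png").convert("L")    # white = glossy, black = matte

shiny = spec.copy()
shiny.putalpha(gloss)                 # the gloss map becomes the alpha channel
shiny.save("vendor_shiny.png")        # upload this as your shiny (specular) texture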

What does this all end up looking like?

Finally, here’s what all of these texture maps look like, put together.

With the normal map added, the circle now has a bit more dimensionality. We are also able to see how specular highlights become less noticeable as the camera moves away from viewing the surface face-on.

If we add back that simple baked shadow texture from above, we can give this surface even more depth!

Together, these texture maps have extensive application. You can add iridescence to a Christmas tree ornament, for example, or texture to an elephant’s skin. You can add droplets of water to a leaf or provide a scaly sheen to a dragon’s hide.

Hopefully this provides a good starting-off point for your creations – I’d love to see what you come up with! If you have any questions, please feel free to give me a shout in-world on my main account (Aki Shichiroji) and I’ll be happy to provide feedback or help as necessary.

Hopefully I’ll have more to discuss release-wise, since Candy Faire is imminent! Additionally, there are some other Organica releases in the pipeline.  See you next week!


Did you know I have a Patreon account? If you enjoy this content, please consider becoming a Patron! It helps me create more like it and offers a variety of rewards. Alternatively, if you like the sorts of things that I make for Second Life, drop by my Marketplace listing or my in-world stores and check out what I have to offer!

Unless otherwise noted, I (Aki Shichiroji) and this blog are not sponsored in any way. My thoughts are my own and not indicative of endorsement by any associated or discussed product/service/company.

Why depending solely on the LI system is a false equivalency for good modelling

urdoinitrong

I’ve been thinking a lot about this lately – I’m sure I’m not the first one to bring this up…

Land Impact, as it relates to mesh, seems to be the be-all and end-all for consumer products in the Second Life space these days. From the start, back in the closed beta days, there was always a lot of push and pull in trying to design the system so that users would be encouraged to design intelligently, effectively and efficiently. As it happens, costs were put in place to better reflect this (as compared to sculpts).

Prim equivalencies for normal prims and sculpts didn’t change – Linden Lab cited not wanting to break content as the reason for this, and I could go into some depth about how this was a bad idea. I also have scripter friends who would take issue even with that policy, given that it seems to be enforced inconsistently, from modelling/texturing/animating to scripting. Neither of these issues is here nor there as it pertains to this discussion, though.

The point I’m making here is that Linden Lab attempted to encourage better content creation practices by promoting the use of multiple Levels of Detail (LODs) and physics models, and the need for efficiency in scripting these items. The Knowledge Base goes into some detail regarding this.

Essentially, it is possible to have a good looking model upload for a high LI, with no LOD or physics optimization, OR upload for a fraction of that LI by making clever use of the opportunities afforded by LODs and physics models.

An important part of designing a good model is being able to make compromises in complexity in order to make the viewer experience better, while not sacrificing too much in the way of quality.

Note, here, that I make a distinction between a complex model and a quality model – the two are not necessarily the same.

For one thing, it’s incredibly easy to abuse the system provided in order to upload a highly complex model while maintaining an unfairly low LI. How?

Well, you can upload up to three LODs to every model you upload to SL. In fact, you probably should. The rule of thumb, as far as the LOD generator is concerned, is that the ‘high’ version should have roughly 25% the number of faces compared to the full version of the model. ‘Medium’ should be half that of ‘high’, and ‘Low’ should be half that of ‘Medium’. If you leave these as they are, you may get a low LI object, or you might not. It depends on a few factors, including how complex the model was to begin with, but also how many discrete parts there are, as well as whether the item is rigged or scripted.
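As a quick worked example of that rule of thumb (the starting count is hypothetical – your own uploader numbers will differ):

full_tris = 8000            # hypothetical full-detail triangle count
high = full_tris * 0.25     # ~2000
medium = high * 0.5         # ~1000
low = medium * 0.5          # ~500
print(high, medium, low)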

But what happens if you suddenly just tell the viewer to load the full version of the model, then use the lowest possible quality for all the LODs, and tell the viewer that a substantial physics model isn’t necessary?

Basically you end up cheating the system, possibly without even knowing it.

I recently noted a significantly slow and poor viewer experience while exploring, and the persons involved had done exactly this. This prominent venue has dozens of vendors and they all come in at 3LI. Offhand, not bad, you might think, and under normal circumstances you might be right.

But in this case, no. I was getting frustrated. If I was a customer who didn’t know any better, I might just chalk it up to a bad computer. To be honest I’ll admit here my desktop computer could probably use more RAM and an upgrade from my midgrade video card within the next year. I also run three screens for my daily workflow and my usual load of programs includes Photoshop, Blender and either Chrome or iTunes.  But the common user of SL probably has a crappier computer than I do and it’s always a good idea to design for the lowest common denominator.

I decided to try to diagnose what might be the cause of framerates approaching 2-3FPS, even with basic shaders turned off, draw distance cranked all the way down, and avatar impostors turned on and at their most stringent setting. I took the advice of Drongle McMahon on the SL Forums, who showed a way to turn on rendering info and, in particular, to ascertain triangle and vertex counts for selected objects.

Upon closer inspection, the vendor had over 18000 triangles – 20000 vertices. For reference, most mid-sized mesh houses come in at under 4000 triangles. Most main character avatars in video games similar in appearance to SL come in between 5000-7000 triangles. A simple box prim has 108 triangles and you can even make a box prim less complex by using a mesh box instead (since SL’s box has 18 triangles per face and 6 faces), which would get you 12 triangles. So basically, one of these vendors was taking up roughly the rendering capacity of four or five mesh houses, two or three avatars, or potentially *thousands* of mesh box equivalents. Within a 2m by 2m space. And there are dozens of these same vendors all over the venue.

What’s worse is that this particular vendor takes significant advantage of the LOD uploads. If you are having a hard time loading things and as a result reduce the object detail in your graphics preferences (or if you’re an advanced user and have lowered your RenderVolumeLODFactor setting), then you won’t be able to view this vendor in any way other than a) the full, high-poly model or b) a broken mess of the minimum number of triangles required to upload and match up to the high-poly model.

Whereas in other modern platforms, such as Unity3D, it may be commonplace to budget perhaps between 100K-300K polygons for an entire scene, in SL it is often difficult to stick within those boundaries even if you control and created the immediate surroundings yourself. Content creators often design their objects to be ‘the stars’ of the show, regardless of their overall importance or how many resources have already been expended in the environment. It does not help that the content creation community is generally poorly educated on the subject or willfully ignores the consequences of irresponsible design decisions, in the name of creating something pretty.

Let me get something straight here – I’m not saying ‘don’t make or sell pretty things’; what I am saying is make pretty, quality things. But be smart about it and consider what is possible as well as what is responsible to inflict on an unpredictable number of people.

There are many things a designer can do in order to create more efficient quality content.

– For the full version, model to create good edge flow and avoid using the uploaded model to create fine details which could otherwise be achieved using a diffuse or normal map. High-poly models can be used to help create cavity and normal maps, but should not be uploaded to Second Life.

– Create efficient and accurate LODs which not only cut down on vert count but which are at least (at the medium level) somewhat representative of the object at a distance, to allow users to see at a glance what they should be looking at, rather than requiring them to zoom in. Most modelling programs should allow you to collapse parts of your model without significantly affecting the UV layout. Some also have modifiers (such as Blender’s most recent Decimate Modifier) which do some automatic face reduction, within limits.

– Don’t ‘cheat’ the LOD system by uploading a high-poly model and crappy, non-representational LODs.

– Stop making excuses for poor content by saying more complex content is necessary for a high quality, immersive SL experience. 1) SL isn’t a Pixar movie. You can’t expect it to look that way and run to any reasonable degree. 2) Quality content can look great, even at lower levels of detail. Any good content creator should know that and should know how to do that.

– Look forward to, and PUSH, the Materials project. Coupling a quality, low-poly model with a great diffuse, normal and specular map is the best road toward creating great, efficient content. A developer build is out for it already, and it’s always a good idea for content creators to provide their feedback to the developers so that any bugs can be acknowledged and fixed quickly. It’s not on the main viewer stream yet, but development has been moving at a steady pace. In the meantime, it wouldn’t hurt to learn how to create normal and specular maps anyway, since you can already bake normal and specular effects into your diffuse map and upload that as a flat file.

– Make use of the wireframe mode in SL at least once or twice to see how it looks in-world, zoomed in. If the model looks almost solid even if you’re zoomed in, YOU’RE DOING IT WRONG.