Art Pipeline Part 1

Posted On: 2019-01-21

By Mark

I recently completed a customized art pipeline for maintaining character art, and both the process and results were pretty interesting, so I thought I would write up a bit about it. There's quite a bit to say on the topic, so I'll be making this a multi-part series. This post will explain in detail what the goals are, as well as why I picked those particular goals, while future posts will be detailed explanations of the implementation.

Gathering Requirements

Before starting on any of the pipeline work, I spent some time getting the art to look right in-engine. As I worked through the manual steps, I kept mental tabs on anything that consumed time or was error prone. Well before I was done with the art prototyping, I was quite confident that there were lots of opportunities to improve things, so long as I built the right tools to match the issues I was facing.

The first major aspect that I needed to account for was the way I composed the character out of multiple parts. Specifically, the way I implemented the art in-engine, the character's hair, clothes, and body are three separate textures that are placed directly on top of one another. This approach affords several advantages:

  1. I found the normal maps were far easier to construct when I separated layers that could have massively different normals from their adjacent parts (hair, for example, does not lie flat on the head, so the boundary between the hair and the head has a sharp change in normals.)
  2. I expect that using separate textures will allow me the flexibility to use different material properties for each of the different textures (admittedly, I haven't yet used this feature, but I expect it will prove useful if I ever need to convey metallic or silky materials using lighting effects.)
  3. During art creation, I often skip over the construction stage of building out the character. This approach does have its place (largely thumbnails and motion-centric work), but I found myself in that same habit as I worked on the art that would eventually go in-engine. By separating the character's body into its own layer, I am able to use it as a form of post-hoc construction, which is useful for cross-checking that the art I've created still makes sense with regard to how the body is physically moving. (I often go back and forth between the complete image and this construction view, as adjustments to a pose often alter the silhouette, which in turn usually means more changes to the character to ensure it still reads clearly.)

Additionally, as I worked through additional poses (for the animation prototype), I found that the number of layers that go into a single pose varies (for example, some poses have one or more arms on a separate layer, to make it easy to adjust the position of an arm that overlaps the body.) All of this complexity has benefits, but it also makes the process of converting the layers from the art software into the application more complex and error-prone. As such, this was a great candidate for automation.
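The merge step described above can be sketched in plain Python. This is a minimal illustration, not the actual plugin code, and it assumes a hypothetical "<id>-<category>" layer-name format (the real naming convention only needs to let the plugin recover a category from each layer name):

```python
from collections import defaultdict

def group_by_category(layer_names):
    """Group layer names by category, so each group can be merged into one image.

    Assumes the hypothetical convention "<unique id>-<category>",
    e.g. "left_arm-body"; all layers sharing a category get merged.
    """
    groups = defaultdict(list)
    for name in layer_names:
        # The category is everything after the last "-"; a name with
        # no separator is treated as its own category.
        _, _, category = name.rpartition("-")
        groups[category or name].append(name)
    return dict(groups)
```

So a pose containing `left_arm-body`, `torso-body`, and `bangs-hair` would produce two merged images, one for `body` and one for `hair`.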

The second aspect to account for was the use of normal maps together with transparent sprites. Although I was able to figure out a combination of settings that worked using the built-in shaders (the Standard shader using the Fade render mode, with both albedo and normal maps assigned), the process of setting up each material correctly was time-consuming and error-prone (the render mode in particular is especially prone to being forgotten, due to its unusual position in the UI.) Finding a way to set this up automatically would save time and reduce the number of simple mistakes that I make.

The third aspect to account for was the size of the sprites themselves. Although I originally started with a non-square sprite at 64x128, I ended up changing it to 128x128 as some of the poses were clipping outside the frame. While this works well for this character, it seems reasonably likely that I will need to use a different size for other characters (perhaps especially large or small characters). Since the actual size of the sprite is not what is rendered in-engine (I use textures on quads instead of sprites, due to the shader they use), I need to make sure that changes to a sprite's size are easy to spot in the context of editing the art. For that reason (and several others) I am combining all the sprites for each character into a single file, but separating them back out before creating normal maps or adding them to the game. Notably, I did not use this approach until I was sure that automation would work, as the time/effort cost of maintaining this without automation is higher than the actual benefits.
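The "separate them back out" step boils down to slicing the combined sheet along a fixed grid. A minimal sketch of that grid math (the sizes here are just the examples from above, and the function name is my own):

```python
def tile_rects(sheet_w, sheet_h, tile_w, tile_h):
    """Yield (x, y, w, h) rectangles covering a sprite sheet.

    Assumes the sheet is an exact grid of uniformly sized tiles
    (e.g. a row of 128x128 poses); raises if the sizes don't divide evenly.
    """
    if sheet_w % tile_w or sheet_h % tile_h:
        raise ValueError("sheet size is not a multiple of the tile size")
    for y in range(0, sheet_h, tile_h):      # rows, top to bottom
        for x in range(0, sheet_w, tile_w):  # columns, left to right
            yield (x, y, tile_w, tile_h)
```

For example, a 256x128 sheet of 128x128 poses yields two rectangles, one per pose. Because the grid is derived from the tile size rather than hard-coded, changing a character's sprite size only needs to change one number.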

Picking the Tools

Once I knew what I wanted to automate, I next needed to decide which tools to use to perform the automation. Since I normally use the Gimp art program, I figured some portion of the process could be automated by creating my own plugin for it. The remainder could be automated within Unity, using a combination of import presets, an AssetPostprocessor, and an editor extension. Deciding which tool would be responsible for which aspect was actually quite straightforward: between Gimp and Unity sits the step of creating normal maps (using Sprite Illuminator), so anything that must be completed before the normal maps is the domain of the Gimp plugin, and anything that must be done after is done with Unity.

High-level Overview

Since the automation I am interested in is focused on the steps performed immediately before saving the file(s), the Gimp plugin provides a custom "export" function that performs all of those steps automatically. Although the primary goal for the automation is to remove as many manual steps as possible, it still needs to afford flexibility for the aspects that I expect are likely to change over the course of the project. To achieve that, I decided to use a combination of layer groups and naming conventions to communicate intention from the artist (just myself, for now) to the program.

Each pose should be grouped together in a single layer group, with the name of the pose as the name of the group. Each layer inside that group should be named both to uniquely identify the layer and to indicate to the plugin which layers should automatically be merged together into a single image (I call this the "category" of the layer: all layers for a pose that belong to the same category will be automatically merged together.) Since it is likely that the number, names, and purposes of the categories will change over time, using the layer names gives me the flexibility to use whatever categories make sense (and the plugin will continue to work without changes). Lastly, since the size of the sprites may vary (they've changed once already), the artist can communicate the desired size using a combination of the individual layers' sizes and an offset position recorded in the layer group's name (this is actually something that I intend to revisit later, as the process is still too cumbersome.)
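To make the conventions concrete, here is a sketch of the parsing side in plain Python. The exact formats shown ("pose@x,y" for the group name, "<id>-<category>" for layers) are illustrative assumptions, not the plugin's actual syntax:

```python
import re

def parse_pose_group(group_name):
    """Split a layer-group name into (pose name, offset).

    Assumes a hypothetical "pose_name@x,y" convention, where the
    optional "@x,y" suffix records the pose's offset within the sheet.
    """
    m = re.fullmatch(r"(?P<pose>[^@]+)(?:@(?P<x>-?\d+),(?P<y>-?\d+))?", group_name)
    if m is None:
        raise ValueError(f"unrecognized group name: {group_name!r}")
    x = int(m.group("x")) if m.group("x") else 0
    y = int(m.group("y")) if m.group("y") else 0
    return m.group("pose"), (x, y)

def parse_layer(layer_name):
    """Split a layer name into (unique id, category), e.g. "left_arm-body"."""
    ident, _, category = layer_name.rpartition("-")
    # A name with no separator acts as its own one-layer category.
    return (ident, category) if ident else (layer_name, layer_name)
```

The key property is that categories are never enumerated anywhere in the plugin: whatever categories appear in the layer names are the categories that get exported, so renaming or adding one requires no code changes.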

On the Unity side of things, the automation needs to apply the correct import settings to each texture, based on whether it is a normal map or a sprite. It then also should create materials from the imported textures, using the correct settings, including the right textures for albedo, normal map, and ambient occlusion. Additionally, since the materials may be in varying degrees of completeness (for example, having just the albedo is enough to work on animations, so normal maps and/or occlusion may be added later), it should be possible to re-run this material generation process, with it instead updating the existing material.
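The re-runnable part is essentially a create-or-update: fill in whichever texture slots are available now, without clobbering slots assigned on an earlier run. The actual editor extension is C#, but the shape of the logic can be sketched in Python with materials as plain dicts (the slot names and "Fade" default are the ones from this project; everything else here is a stand-in):

```python
def build_material(existing, textures):
    """Create a material record, or update an existing one, from available textures.

    `textures` maps a slot name ("albedo", "normal", "occlusion") to a
    texture path. Slots missing now can be filled in by a later re-run;
    slots already assigned are only overwritten if a new path is given.
    """
    material = dict(existing) if existing else {"render_mode": "Fade"}
    for slot in ("albedo", "normal", "occlusion"):
        if slot in textures:
            material[slot] = textures[slot]
    return material
```

Running it once with just an albedo produces a usable material for animation work; running it again later with a normal map fills in the missing slot while keeping everything else intact.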

To Be Continued

In the next post, I will dive into the details of how I created the automation. I will include the code for all of it, in an effort to make the post as useful as possible. Hope you'll join me then!