From the Burrow

TaleSpire Dev Log 194

2020-06-25 00:05:40 +0000

WARNING: The visuals for, and even the kinds of, rulers in this log are 100% temporary. The point is to work out a possible structure for the code and network sync. Drawing any data or conclusions from the clips presented in this update is meaningless.

Heya folks, today Ree has been prototyping props, and I’m going to ramble below about what I’ve been noodling with.

After seeing a conversation in the discord about area-of-effect, I got a strong desire to hack on rulers again. My goal was to make a prototype of one possible mechanism which could give us a fairly simple way to define rulers.

I wanted the ability to support simple point-to-point or volume rulers, but also ones with some additional adjustable parameter (like a cone area-of-effect).

The approach is as follows:

  • A ruler is a container which holds some different visuals for rulers (which I’ll call presentations in this update)
  • The ruler is a simple state machine with four states (sketched in code after this list)
    • Placing the first point
    • Placing the second point
    • Adjusting the ‘parameter’ (optional step)
    • Done
  • The ruler might be visible to other players, in which case it handles the sync
  • A presentation does not have to support a parameter
  • Presentations can be cycled at any point before the ‘Done’ state
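
Here is a rough code sketch of that container/state-machine shape. None of the names below are TaleSpire’s real API; it’s just to make the idea above concrete.

// Sketch only - type and member names are illustrative, not TaleSpire's code.
using UnityEngine;

public enum RulerState
{
    PlacingFirstPoint,
    PlacingSecondPoint,
    AdjustingParameter, // optional; skipped when the presentation has no parameter
    Done
}

public interface IRulerPresentation
{
    bool HasParameter { get; }
    void Show(Vector3 first, Vector3 second, Vector3? parameter);
}

public class Ruler
{
    public RulerState State { get; private set; } = RulerState.PlacingFirstPoint;
    IRulerPresentation presentation; // can be cycled at any point before Done

    public void SetPresentation(IRulerPresentation p) => presentation = p;

    public void Advance()
    {
        switch (State)
        {
            case RulerState.PlacingFirstPoint:
                State = RulerState.PlacingSecondPoint;
                break;
            case RulerState.PlacingSecondPoint:
                State = presentation.HasParameter
                    ? RulerState.AdjustingParameter
                    : RulerState.Done;
                break;
            case RulerState.AdjustingParameter:
                State = RulerState.Done;
                break;
        }
        // If this ruler is visible to other players, a sync message would go out here.
    }
}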

The ‘parameter’ is one part that might seem a bit vague right now. We should talk about it, but first, have a peek at this clip.

testing ruler kinds

Unlike the sphere and line rulers, the circle area-of-effect presentation had an additional step after placing the center and defining the radius: it allowed the user to show a slice of the circle. To make this, I needed the third point to be on a plane defined by the first and second points. From the volume selection tool, I had already seen that it was handy to define a plane to raycast against, so I decided to add that to the parameter. The optional parameter is either a point on the board (just like the first two), or it can define a plane to constrain the point. Simple stuff, but it might allow some useful tools.
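
The constrained case then boils down to raycasting the cursor ray against that plane rather than against the board. A minimal sketch (the exact plane construction here is a guess for illustration; only the Plane.Raycast idea is the point):

// Sketch only: constrain the optional third point to a plane instead of the board.
// The plane used here (horizontal, through the first point) is a stand-in choice.
using UnityEngine;

public static class RulerParameter
{
    public static bool TryGetParameterPoint(Vector3 first, Ray cursorRay, out Vector3 point)
    {
        var plane = new Plane(Vector3.up, first);
        if (plane.Raycast(cursorRay, out float distance))
        {
            point = cursorRay.GetPoint(distance);
            return true;
        }
        point = default;
        return false;
    }
}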

Now that this prototype is done, we can kick the tires and see if it’s in the right ballpark. It’s totally fine if we throw it all away, or gut it and do something similar. The main thing is that it’s something tactile in-game that we can play around with.

With this itch scratched, I’ve jumped branch to look at the session/chat panel prototype again.

That’s all for now. Seeya

p.s. Here is a little clip showing sync working between two copies of TaleSpire.

testing ruler sync

TaleSpire Dev Log 193

2020-06-21 16:31:32 +0000

Hi folks.

I’ve been poking a little at the occlusion culling, so I thought I’d show something I found handy.

We are not moving to Unity’s scriptable rendering pipeline yet. This means much of the work I’m doing right now is finding out how we should hook into Unity’s traditional rendering stack.

I really like BatchRendererGroup as it lets us set up batches and not have to update them per frame[0]. We can then map the batch-id to a specific zone and kind of asset. This now means that any active camera (or one that is told to render) will render the things we have submitted in those batches.

We really want to use lower-poly meshes for the shadows and occlusion-culling, so what we should put in those batches is actually the low-poly meshes (which we are going to call occlusion-meshes from now on). However, we don’t want to render the mesh itself to the final scene; we just want it to cast shadows in the main view and populate a depth buffer for us to use later.

Now, those used to Unity might think of layers. You can tag objects with a specific layer and then tell a camera to only render particular layers. The issue with this is that if you filter out an object, you also can’t see its shadows, so that won’t do. Next, we saw that when adding batches to the BatchRendererGroup, we can set the ShadowCastingMode to ShadowsOnly. Awesome, now we only get shadows in the main view… however, we then don’t get any depth information in the depth buffer. Damn. What ended up working was the following:

  • Make a shader with only a ShadowCaster pass [1].
  • Make a material using this, and use that in AddBatch.
  • Render the scene from the same orientation using a camera with a replacement shader and with a depth RenderTexture as the target.

What’s nice about this is that lights will now use our occlusion-meshes, but those meshes won’t show up in the final render.
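
For anyone wanting to picture the third step, here is a hedged sketch (not our actual code) of a camera set up to render the occlusion-meshes into a depth RenderTexture via a replacement shader; the batches themselves use a material built from the ShadowCaster-only shader shown in [1]. The "depthReplacement" shader is an assumed depth-only shader, not a real asset name.

// Sketch of the occluder-depth render described above.
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class OccluderDepthCamera : MonoBehaviour
{
    public Shader depthReplacement; // assumed simple depth-only shader
    Camera depthCam;
    RenderTexture occluderDepth;

    void Start()
    {
        occluderDepth = new RenderTexture(1024, 1024, 24, RenderTextureFormat.Depth);
        depthCam = GetComponent<Camera>();
        depthCam.enabled = false; // we render it manually
        depthCam.targetTexture = occluderDepth;
        depthCam.SetReplacementShader(depthReplacement, "");
    }

    // Match the main camera's view, then render the occluders into the depth target.
    public RenderTexture RenderDepth(Camera mainCamera)
    {
        depthCam.transform.SetPositionAndRotation(mainCamera.transform.position,
                                                  mainCamera.transform.rotation);
        depthCam.projectionMatrix = mainCamera.projectionMatrix;
        depthCam.Render();
        return occluderDepth;
    }
}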

With the depth buffer available, we now generate the depth-chain, as mentioned in a previous dev-log. We then have everything we need to get into the meat of the occlusion culling. Once we know what to draw, we will use DrawMeshInstancedIndirect to submit the high-poly meshes for rendering. At this point, we make sure to set receiveShadows to false and the ShadowCastingMode to Off. This lets us receive the shadows we computed earlier without asking Unity to compute more from these more detailed meshes.
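
When the culling has decided what survives, that final draw could look roughly like this (sketch only; the culling compute shader is assumed to write the visible instance count into the args buffer):

// Sketch: drawing the surviving high-poly instances without extra shadow work.
using UnityEngine;
using UnityEngine.Rendering;

public class VisibleBatchDrawer
{
    readonly ComputeBuffer args;

    public VisibleBatchDrawer(Mesh mesh)
    {
        // index count, instance count, start index, base vertex, start instance
        args = new ComputeBuffer(1, 5 * sizeof(uint), ComputeBufferType.IndirectArguments);
        args.SetData(new uint[] { mesh.GetIndexCount(0), 0, 0, 0, 0 });
    }

    public void Draw(Mesh mesh, Material material, Bounds bounds)
    {
        // The culling compute shader writes the visible instance count into args[1].
        Graphics.DrawMeshInstancedIndirect(
            mesh, 0, material, bounds, args, 0, null,
            ShadowCastingMode.Off,   // shadows already come from the occlusion-meshes
            false);                  // receiveShadows off, as described above
    }
}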

Right now, all the tests I’m doing are just with randomly placed spheres, so screenshots are a little meaningless; however, once we get into using real assets, I’ll start showing more.

Next, I need to work out a little mistake I’ve created in the depth-chain, refactor the tests to be a little easier to handle, and then get into the culling compute shader. Once I have this first test working, I’ll need to do a lot of planning to work out how we want to handle the data in TaleSpire. The goal is to find a balance where we do as little per-frame work as possible and keep per-frame GPU uploads as small as possible.

Until next time, Peace.

[0] and handle culling ourselves, which is handy too
[1] here is the shader we used in our recent tests:

// Very minimal
Shader "OccluderShadowCaster"
{
    SubShader
    {
        Pass
        {
            Tags {"LightMode"="ShadowCaster"}

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_shadowcaster
            #pragma multi_compile_instancing
            #include "UnityCG.cginc"

            struct appdata
            {
                UNITY_VERTEX_INPUT_INSTANCE_ID
                float4 vertex : POSITION;
                float3 normal : NORMAL;
            };

            struct v2f {
                V2F_SHADOW_CASTER;
            };

            v2f vert(appdata v)
            {
                v2f o;
                UNITY_SETUP_INSTANCE_ID(v);
                TRANSFER_SHADOW_CASTER_NORMALOFFSET(o)
                return o;
            }

            float4 frag(v2f i) : SV_Target
            {
                SHADOW_CASTER_FRAGMENT(i)
            }
            ENDCG
        }
    }
}

TaleSpire Dev Log 192

2020-06-18 01:53:15 +0000

A small ‘still working’ update for the last couple of days. Ree has continued with terrain experiments and business work. I’ve been playing around with Unity’s BatchRendererGroup type to learn what we have in our toolbox as we start to rewrite how we handle the rendering of tiles and props.

More coming soon.

Ciao

TaleSpire Dev Log 191

2020-06-15 23:13:04 +0000

Hi folks.

Today, some of the things Ree and I have been working on don’t make for great writeups. He was on business tasks, which included working on our GDPR documentation, and I was finishing a feature we’ll be shipping in a few days if all goes well.

I’m not ready to say what it is as, although it’s tiny, we want to ship it alongside an upcoming asset update. You’ll just have to wait :p

After I finished off that feature, I started work on the in-game chat system. For messages to players within the same board, it’s easiest to use the RPC calls of the networking library we use. However, for campaign-wide messages, we will use our server to relay the message to all connections which are subscribed to that campaign. This is likely to get changed up depending on what we can do with WebRTC down the line.
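
As a rough illustration of that split (all the type and method names below are made up for this log, not our actual networking code):

// Sketch only: board-local messages use the networking library's RPCs,
// campaign-wide messages are relayed by our server to subscribed connections.
public enum ChatScope { Board, Campaign }

public static class ChatRouter
{
    public static void Send(ChatScope scope, string message)
    {
        if (scope == ChatScope.Board)
            BoardRpc.Broadcast(message);         // stand-in for an RPC broadcast
        else
            CampaignRelay.SendToServer(message); // stand-in for the server relay
    }
}

// Stand-in stubs so the sketch is self-contained.
static class BoardRpc { public static void Broadcast(string message) { } }
static class CampaignRelay { public static void SendToServer(string message) { } }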

I wrote the code to handle this on the backend, and while I was at it, I added some code to handle the upcoming bookmarks feature.

With that done, I started mocking up the UI for the chat panel. I’m no designer, so please excuse the ugliness, but a rough shape is forming at least.

uuuugly

The little boxes underneath the message text box are attachment slots. I think we will allow you to drop references to creatures, locations in the board, dice rolls, and other stuff and send them along with your message.

By the time this is ready, it will replace the session history; the messages you had there will then be in this log, which will support tag-based filtering.

You will be able to target your messages at any of the following:

  • Everyone in the board
  • Everyone in the campaign
  • Everyone in a specific party
  • A specific player

My current thinking is that you’ll usually use the drop-down to pick the target of the message, but if your message starts with @<someone>, it will act as if you switched the target to that player, sent your message, and switched the target back. This is handy in cases when you need to send something very quickly, like if you are co-ordinating a mutiny with another player.
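
A tiny sketch of that shortcut (the ChatTarget type and method names are made up for illustration):

// Sketch only: messages starting with "@<someone>" go to that player for this
// message only; otherwise the currently selected drop-down target is used.
public readonly struct ChatTarget
{
    public readonly string Kind; // "board", "campaign", "party", or "player"
    public readonly string Name; // player or party name when relevant
    public ChatTarget(string kind, string name) { Kind = kind; Name = name; }
}

public static class ChatInput
{
    public static (ChatTarget target, string body) Resolve(string text, ChatTarget current)
    {
        if (text.StartsWith("@"))
        {
            int space = text.IndexOf(' ');
            if (space > 1)
                return (new ChatTarget("player", text.Substring(1, space - 1)),
                        text.Substring(space + 1));
        }
        return (current, text);
    }
}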

In time, we will also want to support language proficiency in the chat. This would mean you can read messages that arrive in languages your creature understands (e.g. elvish), but the text will be garbled if your creature doesn’t know that language. Obviously, this needs more information to be attached to each creature, so this is more likely to happen closer to when we add rule-system support. It doesn’t feel like something that should be prioritized over other features, though.

Time to call it a night.

Seeya!

TaleSpire Dev Log 190

2020-06-14 17:08:38 +0000

This week has been dedicated to research and planning, and it’s been fantastic.

We have a lot of things that need to be done by Early Access. Between our plans, user requests, and polish, we definitely have our work cut out for us.

Luckily, some features are also polish, probably none more so than performance. We’ve known for a long time that we would need to take some big steps to tackle performance issues. TaleSpire is a user-generated-content game, and those come with some challenges which are not present in many games. They typically revolve around the fact that the scene can change dramatically at any time, so you cannot use techniques that require precalculating lots of things at build time.

There are tools which we’ve mentioned we wanted to look into, the big two being Unity’s new ECS and GPU occlusion culling.

ECS

We started by performing some quick and dirty tests with entities by adding an entity conversion component to the tiles being spawned. We happily noted that in our first scene, the time spent on rendering noticeably dropped. There were some cases, however, where it didn’t seem so cut and dried, and so we did some digging. It certainly feels like there has been a significant performance regression in the hybrid-renderer since the ‘Mega City’ demo Unity was showing off last year. Based on the code, this seems centered around material properties. This was a bit disheartening but did teach me about the BatchRendererGroup class, which is likely going to become very important to us this year.

Now, before folks get up in arms against Unity: this system is in preview, it’s not shipping quality, and the version in Unity 2020 apparently already has significant performance improvements. This is just how software development is. We are just inside the sausage factory here.

Now, it’s very true that, given the same engineers, general systems will often be slower than ones made for a specific task. And from my reading, it seems that, in TaleSpire’s case, we have opportunities to cache more heavily and get speedups based on assumptions we can make about our own data usage. We will dig into this over the coming months.

Another similar system is Unity’s new stateless physics system. From our tests, we see very promising speedups compared to the current system. Once again, I believe there are places where we can feed the system in a way that limits the impact of some of the tasks it has to do when starting the physics step for that frame.

Now, changing the physics system means changing everything that uses the old system. That means board building tools, dice, and creatures, just to name three. And, let’s face it, those three are a lot of what TaleSpire is :D We are going to have to work very aggressively on this to get TS back into a playable state as quickly as possible, but it’s still going to take time.

If we can achieve both of these changes, we can eliminate the time it takes to spawn tiles. This means that we can remove the progressive-tile-load system, which is one of the most complicated parts of TaleSpire. Its whole job was to mitigate the fact that Unity’s previous object-oriented approach resulted in us not being able to spawn many objects per frame. That meant that what you saw and the actual state of the board data were necessarily out of sync. We managed that, but, as mentioned, making it robust took a lot of work. I’d love to remove it.

So that’s all on the entities side. I’ve left out a lot of details and complexity, but I’m very optimistic that we can see some big wins here.

Gpu Occlusion Culling

Occlusion culling is the act of not drawing things hidden from view (occluded) by other objects in the scene. It’s a very powerful concept but a reasonably complicated one, which often requires a lot of number crunching. Many games do this ‘offline’: the level is analyzed by a program that precalculates what is visible from where. This would be done while building the levels, and the result would then be used at runtime. As mentioned above, offline approaches are not ones TaleSpire can use, so what can we do? More recently, as GPUs have improved, people have worked out ways to achieve realtime versions of this on the GPU. It is very common in TaleSpire that people decorate the insides of builds, and none of those tiles usually need to be drawn when the camera is outside the building. In short, we might get substantial wins from occlusion culling.

I’d started implementing a GPU occlusion culler outside Unity before, but I’d lacked the Unity knowledge to know how to do it there without writing my own render pipeline. After seeing BatchRendererGroup, a bunch of stuff clicked, and now I’m pretty sure I know how to do this. As with my previous experiments, I’m basing my work on this blog post, and yesterday I wrote a MonoBehaviour that computes the hierarchical depth (Hi-Z) mip chain for a camera.
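
For context, building the Hi-Z chain amounts to repeatedly halving the occluder depth, keeping the most conservative depth of each 2x2 block, and writing the result into the next mip of a single texture. A rough sketch of the dispatch loop (the compute shader, its "Downsample" kernel, and the 8x8 thread-group size are all assumptions, not real project assets):

// Sketch only: drive a (hypothetical) compute shader that writes mip N from mip N-1.
using UnityEngine;

public class HiZChain : MonoBehaviour
{
    public ComputeShader downsample; // assumed shader with a "Downsample" kernel using 8x8 threads
    RenderTexture hiZ;
    int kernel;

    void Start()
    {
        kernel = downsample.FindKernel("Downsample");
        hiZ = new RenderTexture(512, 512, 0, RenderTextureFormat.RFloat)
        {
            useMipMap = true,
            autoGenerateMips = false,
            enableRandomWrite = true
        };
        hiZ.Create();
    }

    // Call once the occluder depth has been copied into mip 0 of hiZ.
    public void BuildChain()
    {
        int size = hiZ.width;
        for (int mip = 1; size > 1; mip++)
        {
            size /= 2;
            downsample.SetTexture(kernel, "_Source", hiZ, mip - 1);
            downsample.SetTexture(kernel, "_Dest", hiZ, mip);
            downsample.Dispatch(kernel, Mathf.Max(1, size / 8), Mathf.Max(1, size / 8), 1);
        }
    }
}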

I’ve got a few things on my todo list for this week, but as soon as possible, I want to write a working test version of this so I can explore how far we can take this.

One side note from this is that we will be moving to use low poly meshes for all occluders. This means line-of-sight, fog-of-war, and shadows should all get a speed boost.

Games Systems

As well as the research above, we spent a good deal of time exploring ideas for the following game systems.

Terrain/Water

We have an initial idea for a system that we hope will balance a decent game experience with compact data representation. No details on it today, but Ree is currently prototyping to see if the idea will feel good enough for a v1 in the Early Access.

Props System

The prop system has come close to releasing at least 3 times over the last 2 years. We sat down and started by discussing the implementation details (as it’s going to have to be made to work with all the changes mentioned above). However, as we talked, we grew increasingly concerned that the attachment-point based approach might not be good enough for handling the kinds of cases we had seen watching people build. We went back and forth on this for a bit and realized that we needed to prototype the tooling and see what it would end up feeling like. The last 20% of making something often takes 80% of the time, so it’s far better that we convince ourselves now than build the wrong thing.

Locking the Unity version

Our modding system relies on Unity’s prefab format. Once people can make mods, we need to make sure that we don’t break them by upgrading to a Unity version that uses an updated prefab format. To us, this means we need to pick a Unity version and stick with it for a long time. Naturally, that means we can’t take advantage of newer Unity features or fixes, so we have to be sure that the version we have is something we can stick with. We’ll keep you posted on this too.

Wrapping Up

I’d like to talk on stream a bit about this stuff and a selection of the feature requests. We won’t have timelines for the requests, but we can at least address what we see going into TS and what we don’t. At that time, I’ll release the list of feature requests. I’m not going to do it before then, as the notes need cleaning up, and that work takes a lot of time.

Alright, that’s it for now,

Seeya tomorrow!

TaleSpire Dev Log 189

2020-06-11 00:42:26 +0000

Hi all.

Today I finished collating all the entries in the #feature-requests channel on the TaleSpire discord. With that done, @Ree and I went through them to work out what we wanted to do for each. What was cool was that there weren’t many major surprises. Most were about systems that we still have plenty of plans around improving. I’ll probably need to organize a dev-stream at some point so we can go through some of them and talk about the plans for TaleSpire in general. However, I’m not scheduling that today.

The rest of this week will be planning and prototypes around other major systems we need. The big ones being terrain, water, and the rendering changes we need to get the performance we all crave :)

Hope this finds you well. Back with more tomorrow

TaleSpire Dev Log 188

2020-06-09 02:02:37 +0000

Evening all.

Work goes well. @Ree has been looking back into the emote system work he did previously. Once that is up to scratch, we can get the data & sync side worked out, and then hopefully, it’s just testing and tweaks before we ship.

Today I started with a few hours of going through all the posts in the #feature-request channel in discord, making the big list of things to discuss internally. Starting on Wednesday, @Ree and I are taking some days to plan and hack on some experiments to set the direction for what we need to achieve over the coming year.

Inspired by an item on that list, I’ve been revisiting my old NDI tests to see how much work it would take to get this shipped experimentally. I can pick the source from NDI and get the texture, but I’m currently fighting the UI side. Once that is doing what I want, I’ll need to handle some strangeness I saw when the call reconnects before looking at sync. I’m not yet sure how the sources are named on each client, so I’m not sure how to synchronize the feed information for each client. Regardless, we’ll work it out or, if it takes too long, it’ll go back on the shelf for now.

That’s all for today. I’ll put out a bugfix tomorrow as today I was made aware of a bug that was introduced to line-of-sight in the last release.

Goodnight!

TaleSpire Dev Log 187

2020-06-02 17:19:15 +0000

A quick update tonight.

Work continues on the fog of war. I’ve been fixing bugs and implementing the code that handles cleaning up fog-of-war jobs when transitioning boards. I still think there are some memory bugs, but it’s all going in the right direction.

Ree’s work on the keyboard movement has gone well. We’ve found a couple of small bugs in the line-of-sight and creature scaling code, so we’ll get those fixed up asap.

Peace

p.s. A quick video showing what the 30-unit radius of the fog-of-war update looks like.

TaleSpire Dev Log 186

2020-06-02 00:58:13 +0000

TLDR: The first prototype of 3D fog of war started working about an hour ago. Check it out:

BIG OL’ CAVEAT: This is not the final mesh or shader used on the fog. This just shows that the raw transforms can work.

A lot of the leg work over the last days has been managing the pools of cubemaps, buffers, and such used in the fog-of-war system so that we never allocate more often than we absolutely have to. We have also been making sure that no step blocks for longer than necessary and that we have something we can easily tune.

So here is the very rough rundown:

  • when you place a creature that you control, the scene (for a 30 unit radius) is rendered into a cubemap holding creature ids and distances from the observer.
  • the ids are used by the line of sight system to accurately determine what creatures are visible to the creature you just placed.
  • the cubemap (and some other data) is kicked over to the fog-of-war system that works out every cell (a 1x1x1 unit volume) visible for the observer.
  • It packs this data into a buffer that is then applied to the zone’s fog-mask.
  • The updated fog mask is handed over to the mesher, which generates a relatively low-poly mesh for the fog and gives it to the zone to display

Multiple of these can be done at once. The line-of-sight and fog-of-war updates that rely on processing the data in the cubemap are all on the GPU, and any other step that does any kind of data processing is done in jobs dispatched over multiple cores. I’ve not profiled this yet, but it’s feeling ok, and we have the tools to make this quick.
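
To give a flavour of those jobs (a minimal sketch under my own assumptions, not the real code: here the zone’s fog-mask is treated as one bit per cell), folding a batch of newly visible cells into the mask could look like this:

// Sketch only: OR a batch of newly-visible cell indices into a bit-per-cell fog mask.
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

[BurstCompile]
struct ApplyVisibleCellsJob : IJob
{
    [ReadOnly] public NativeArray<int> visibleCellIndices; // flat indices into the zone's cell grid
    public NativeArray<ulong> fogMask;                     // 64 cells per element

    public void Execute()
    {
        for (int i = 0; i < visibleCellIndices.Length; i++)
        {
            int cell = visibleCellIndices[i];
            fogMask[cell >> 6] |= 1ul << (cell & 63);
        }
    }
}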

Right now, I’m just stoked this is starting to work. Tomorrow I’ll be bug-fixing, and after that, I’ll start work on the network sync for these updates. I’ll write a dev-log on that problem as it’s a fun one too.

We are also getting closer to the creature scaling update. We found a bug in the keyboard movement that we need to get fixed before we ship. We’ll keep you posted on that.

For now, I’m gonna go poke this system some more :)

Seeya!

p.s. Here is a pic of a fog mesh. You can see that the mesher does an ok job of cutting down the number of polys from the worst case.

pew

TaleSpire Dev Log 185

2020-05-27 23:41:49 +0000

Heya folks,

Today has been spent testing and fixing the backend changes for creature scaling and hooking the creature scale feature into the data model.

The scaling is feeling great, Ree’s done great work, and I’m very excited to see this ship. There are still bugs to be fixed, so we’ll keep working on those.

The server side has now been patched, and so everything is ready to go.

The first thing I’ll be doing tomorrow is adding some code to detect mismatched versions of the client. This will stop a person with an older version of the client from connecting to a session hosted by someone with a newer version (and vice versa). This is important as the board sync format can change and is not compatible between different versions of the client [0].
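
The check itself can be as simple as exchanging a sync-format version number during the handshake and refusing the connection on a mismatch (an illustrative sketch; the constant and names are made up):

// Sketch only: bump SyncFormatVersion whenever the board sync format changes.
public static class VersionGate
{
    public const int SyncFormatVersion = 42; // made-up number for illustration

    // Both ends exchange their version as part of the handshake and call this.
    public static bool CanConnect(int remoteSyncFormatVersion)
    {
        return remoteSyncFormatVersion == SyncFormatVersion;
    }
}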

That’s the lot for today

Ciao

[0] Please note that this format is not the same as the board persistence format for which there are format upgrade methods written.
