A Disclaimer for Jettomero

February 16, 2017 - devlog

Please note this is not a tutorial. This is me attempting to explain a number of things to both you and me, to alleviate certain expectations about Jettomero that I’m concerned some people may have, and that may also be causing me stress/depression. I know there are lots of great games out there that don’t conform to any pre-existing ideas of what a video game should be – even so, I find it difficult at times to feel confident in pushing that myself. Before I start hyping the game for launch I’d like to make a few things clear.

  1. Jettomero probably isn’t the game you think it might be. This is intentional. The core concept of the game (long before it ever took any sort of shape) is still at the base of everything and is very important to me. In order for it to work I do need to undermine the player’s expectation. Some people may not like this. So be it.
  2. Jettomero is more of an experience than a game. This relates to my first point: I’ll be playing with some gaming tropes explicitly to flip them on their heads.
  3. Jettomero will probably be confusing when and if you first pick it up. I’ll tell you how to play, but I won’t tell you why. I’m not going to give you a mission to complete. The meaning to your actions will need to come from you.
  4. Jettomero is only intended to be 1-2 hours long. This is all the time I need to do the things I want to do and I don’t see any reason to drag this out.

That being said, I do hope you enjoy Jettomero.

World Space to Screen Space UI

December 20, 2016 - devlog

It’s not actually that complicated – this is a simple technique for achieving the effect of 2D UI tracking world space positions. If you’re new to Unity then this tutorial may be helpful to you.

The Unity3D UI offers great flexibility, allowing easy setup of multiple canvases in screen space and world space. World space UI works exceptionally well for things like interactive computer screens, but may not always be exactly what you want due to scaling/orientation. In the above case, I needed a way to clearly mark all the planets in the system, so I render them all in the main screen space UI and use the setup as I will describe below.

Step 1) The parent rect transform should extend to all edges of the screen, otherwise this will not work.

Step 2) The UI object should be configured as such:

The important details to note are that the anchor is in the lower left corner and the pivot is set to 0, 0. This ensures that the origin of the UI element will always track the exact screen point. If you want it off-centered you can modify the children of the main tracking UI element.
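If you’d rather set this configuration up from script than in the inspector, a minimal sketch might look like this (my assumption is that the script sits on the tracking UI element itself):

```csharp
// Sketch: anchor a UI element to the lower-left corner with a (0, 0) pivot,
// matching the inspector setup described above.
RectTransform label = GetComponent<RectTransform>();
label.anchorMin = Vector2.zero;
label.anchorMax = Vector2.zero;
label.pivot = Vector2.zero;
```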

Step 3) Now in your Update function you’ll want to find the world position of your target and then set the UI RectTransform’s anchored position. Here’s the example from my code (where I’m iterating over a list of available planets):

Vector3 viewPos = Camera.main.WorldToViewportPoint(planets[i].transform.position);
planetLabels[i].anchoredPosition = new Vector2(viewPos.x * canvasSize.x, viewPos.y * canvasSize.y);

planetLabels[i].gameObject.SetActive(viewPos.z > 0);

The viewPos x and y values are normalized viewport coordinates – between 0 and 1 while the target is on screen. So when you apply them to the anchored position of the UI you’ll need to multiply by the canvas’s size. The canvas size won’t always match the screen’s dimensions, so to find it you should use the canvas’s pixelRect.width and pixelRect.height – referencing the Canvas object in the UI element’s upper hierarchy.
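As a sketch, the canvas size lookup could be cached once like this (the GetComponentInParent call is my assumption about where the Canvas lives relative to this script):

```csharp
// Cache the canvas dimensions once (re-run this if the resolution changes).
Canvas canvas = GetComponentInParent<Canvas>();
Vector2 canvasSize = new Vector2(canvas.pixelRect.width, canvas.pixelRect.height);
```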

It’s also very important to note that if viewPos.z is less than 0, the object is behind the camera. Unless you take this into account in your handling (in my case I disable the game object), the UI will show up in a mirrored position whenever the target is behind the camera.

That’s all there is to it. In the gif above I also modify the alpha of the parent CanvasGroup to fade in and out the elements. As always, let me know if you have any questions @GhostTimeGames on twitter.

Runtime crater mesh deformations

November 12, 2016 - devlog

Since I’m already deforming my planets at runtime using a noise map, it made sense to include the ability to add craters to the planets when a large event occurs (landing or blasting off for example).


It’s not entirely ideal but my system seems to work alright – the biggest issue is that I use the deformed surfaces for my mesh collider as well so I need to be careful how much I warp the surface or it might cause issues with character mobility.


Here’s the gist of it:

float craterSize = 5.0f;

Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vertices = mesh.vertices;
for (int i = 0; i < vertices.Length; i++)
{
     // work in world space so we can measure against the crater origin
     vertices[i] = transform.TransformPoint(vertices[i]);
     float distFromCrater = Vector3.Distance(vertices[i], craterOrigin);
     if (distFromCrater < craterSize)
     {
          // push the vertex away from the crater origin - strongest at the center
          Vector3 dirFromCrater = (craterOrigin - vertices[i]).normalized;
          vertices[i] = vertices[i] - dirFromCrater * (craterSize - distFromCrater) / 2.0f;
     }
     vertices[i] = transform.InverseTransformPoint(vertices[i]);
}
mesh.vertices = vertices;
mesh.RecalculateNormals();


So once I have an array of all the vertices I convert each one into world space (using transform.TransformPoint()) and I can check how close each vertex is to the source of the crater explosion. If a vertex is within range of the crater explosion then I move it back away from the crater origin based on its proximity – so it’s most impacted at the center and becomes gentle towards the edges. Then I convert the vertices back to local space using transform.InverseTransformPoint() and I can assign my modified vertex array back to the mesh. It’s also very important to recalculate the normals.

That’s essentially all there is to it. I use a particle system with rock models to represent the earth that has been displaced, and the explosion effects also help mask the sudden model change. I’ll likely need to limit the number of craters that can be made in a single area to avoid the terrain becoming completely distorted, since it’s definitely possible to start making things weird with enough repeated deformations.

Shader-driven Flame Effect

October 19, 2016 - devlog

I saw a cool gif of some animated flames the other day and wondered if I’d be able to recreate something similar in Unity3D. I’ve been using particle effects for almost everything up until now because I’m terrible at writing shaders but I thought I’d see what I could do. So here’s my less than perfect shader which you can copy and tweak as you desire.


I’ll try my best to explain what I’m doing in the shader so you’ll know the best places to start tweaking things.

First things first, my base texture was literally created by opening Photoshop and selecting Filter->Render->Difference Clouds. I just needed something quick to test with and this ended up being perfect.


You could try out different textures and see how they affect the result. For this example I think you’ll want to make sure you select the Alpha from Grayscale and Transparency from Alpha options in the texture import settings.


Now for the shader code. Remember, I don’t really know what I’m doing here. But it works.

Shader "GhostTime/FlameCutoffShader" {
     Properties {
          _Color ("Main Color", Color) = (1,1,1,1)
          _MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
          _SecTex ("Second (RGB) Trans (A)", 2D) = "white" {}
          _Scale ("Scale", Float) = 1
     }
     SubShader {
          Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" "DisableBatching"="True" }
          LOD 200
          Lighting Off
          Cull Off

          CGPROGRAM
          #pragma surface surf NoLighting noforwardadd alpha

          sampler2D _MainTex;
          sampler2D _SecTex;
          fixed4 _Color;
          float _Scale;

          struct Input {
               float2 uv_MainTex;
               float3 worldPos;
          };

          fixed4 LightingNoLighting(SurfaceOutput s, fixed3 lightDir, fixed atten) {
               fixed4 c;
               c.rgb = s.Albedo;
               c.a = s.Alpha;
               return c;
          }

          void surf (Input IN, inout SurfaceOutput o) {
               float4 time = _Time;

               fixed4 c = tex2D(_MainTex, IN.uv_MainTex + float2(0, time.r * 10));
               float cRight = tex2D(_SecTex, IN.uv_MainTex + float2(time.r * 0.1, 0)).a;
               float cLeft = tex2D(_SecTex, IN.uv_MainTex + float2(time.r * -0.1, 0)).a;

               float mixedAlpha = (c.a + 0.1) * (cRight + 0.5) * (cLeft + 0.5);

               // the object's origin in world space
               float3 worldPos = mul(_Object2World, float4(0,0,0,1)).xyz;
               float wDist = distance(worldPos, IN.worldPos);
               float distCut = _Scale / wDist;

               if (mixedAlpha < distCut) {
                    o.Alpha = 1.0;
                    o.Albedo = _Color + distCut;
               }
               else if (mixedAlpha - 0.01 < distCut) {
                    o.Alpha = 1.0;
                    o.Albedo = float3(0, 0, 0);
               }
               else {
                    o.Alpha = 0;
               }
          }
          ENDCG
     }
     Fallback "Transparent/VertexLit"
}

Firstly, you’ll notice there are two texture slots in the properties – I used the same texture for both, but you could change them up if you wanted to experiment with the effect. The first texture is the main flames source. The _Scale property is important for adjusting the reach of the flames. You’ll need to play with it and adjust it based on the size of the mesh you’re using. Unfortunately I couldn’t find a good way to automatically get the scale/bounds of the mesh from inside the shader, so you’ll probably need to set this property from script.

So in the surf function we get the time from _Time – this is essential for animating the flame over time. Below that is where we sample the textures using offsets adjusted by the time.

fixed4 c = tex2D(_MainTex, IN.uv_MainTex + float2(0, time.r * 10));
float cRight = tex2D(_SecTex, IN.uv_MainTex + float2(time.r * 0.1, 0)).a;
float cLeft = tex2D(_SecTex, IN.uv_MainTex + float2(time.r * -0.1, 0)).a;

So the main texture sampler is being offset in the Y axis by 10 times the normal time rate. The cRight and cLeft values are being adjusted in the x direction at a much slower rate, and I only need their alpha. It’s worth noting that these time multipliers are entirely arbitrary and you may want to include them as customizable properties in the head of the shader.
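For instance, one way to expose them (a sketch – the property names here are my own, not part of the original shader) is to add two extra entries to the Properties block and use them in place of the hard-coded 10 and 0.1:

```
// Hypothetical properties so the scroll rates are tweakable per material:
_MainScroll ("Main Scroll Speed", Float) = 10
_SideScroll ("Side Scroll Speed", Float) = 0.1
```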

Now I multiply the alpha of all three of these samples together to create a combined, shifting alpha. I add an arbitrary amount to each one to prevent the product from being pulled down too significantly.
float mixedAlpha = (c.a + 0.1) * (cRight + 0.5) * (cLeft + 0.5);
The left and right samples help keep the flames a little more random and flickering, instead of just scrolling a single texture straight up the mesh.

Now I get the distance of each point from the origin of the object, because we want the flames to start dithering as they travel farther from the source.
float3 worldPos = mul(_Object2World, float4(0,0,0,1)).xyz;
float wDist = distance(worldPos, IN.worldPos);

Now that we have the distance we use it with our _Scale property to create the cutoff value. If you’re not seeing any flames on your mesh you’ll need to adjust the _Scale property in the material properties.
float distCut = _Scale/wDist;

Now we do our check against the cutoff value.
if (mixedAlpha < distCut) {
    o.Alpha = 1.0;
    o.Albedo = _Color + distCut;
}
else if (mixedAlpha - 0.01 < distCut) {
    o.Alpha = 1.0;
    o.Albedo = float3(0, 0, 0);
}
else
    o.Alpha = 0;

So first thing to point out here – if we are setting our alpha to 1 then we also slightly modify the colour by the distance value – this creates the effect of the flame becoming darker the further away it gets from the origin. It’s a subtle gradient but you could also consider doing a lerp to a secondary colour here, etc.

Next we do an else if check where we modify the original alpha value by -0.01 (another arbitrary value). This section determines the edge of the flame, which I’ve coloured black. It’s a thin line but you could replace the -0.01 with a customizable property to adjust the width of the edges. Same goes for the edge colour.

And that’s pretty much it. If you have any questions I’ll see what I can do, but I’m also open to any suggestions for making this shader better. I’m sure it has lots of room for improvement but hopefully this is a helpful starting point for you.

Creating Lazer Eyes

September 29, 2016 - devlog

Yes, I know I spelled ‘laser’ wrong. But that’s just how I spell lazer because Z is a cool futuristic letter and lazers are also cool and futuristic. Anyway… I’m going to outline a couple techniques I used to produce this lazer vision effect in Jettomero using Unity3D’s line renderer component.
Ignoring the fact that each eye has its own beam, let’s focus on a single beam. The beam is made up of 3 line renderers. One is a solid line – the other two use a jagged lightning-like texture which I animate from script.
When using the Unity line renderer it’s a good idea to store your Vector3[] of line points in its own separate array, since the line renderer annoyingly doesn’t have a property or function to retrieve them otherwise. So all 3 line renderers in the beam should use the same start and end points – that much should be obvious. Now we’ll want to get the 2 points of the beam to find the distance between them. Why? So we can adjust the tiling on the lightning material so that it doesn’t get stretched out when the beam becomes shorter or longer. Take a look at my Update function on a script I attach to the lightning lines.
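As a sketch of that setup, a hypothetical SetBeam helper (not my actual code) shows the idea of one shared positions array feeding all three renderers:

```csharp
// Keep the beam's points in our own array, since the line renderer
// doesn't let us read them back, and push them to every renderer.
Vector3[] positions = new Vector3[2];

void SetBeam(Vector3 start, Vector3 end)
{
    positions[0] = start;
    positions[1] = end;
    foreach (LineRenderer line in GetComponentsInChildren<LineRenderer>())
    {
        line.SetPosition(0, positions[0]);
        line.SetPosition(1, positions[1]);
    }
}
```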

    void Update ()
    {
        currentOffset -= Vector2.right * Time.deltaTime * scrollSpeed;
        float length = Vector3.Distance(positions[0], positions[1]);
        GetComponent<LineRenderer>().material.mainTextureScale = new Vector2(length / 5.0f, 1);
        GetComponent<LineRenderer>().material.mainTextureOffset = currentOffset;
    }

You can see I calculate the length of the line each frame and then apply that to the mainTextureScale of the line renderer material (the length/5.0f is just an arbitrary value for how many tiles I want per unit of distance). That keeps the lightning arcs consistent across any length.

The other thing I do in the Update function is animate the texture offset. I store a Vector2 on the script which is continually changed over time using a speed that I randomize within a range in my Start function. If you’re using multiple lines for an animated effect, you’ll want to randomize the speed and start offset so that they aren’t all running overtop one another – you’ll get a much nicer effect. Here’s my Start function:

    void Start ()
    {
        float randomOffset = Random.Range(0.0f, 1.0f);
        currentOffset = new Vector2(randomOffset, 0.0f);   // assign the field, don't declare a new local
        scrollSpeed = Random.Range(4.0f, 6.0f);
    }

Finally, for the end of the lazer I have two particle systems – one dispersing some sparks from a sphere emitter. The spark lines are just a thin cross texture all being spawned at a single point using random rotation, random start size, and a size over lifetime from 0 to 1. It’s the same effect I use for the sun’s rays in Jettomero only much faster.


If there’s anything here that isn’t totally clear please let me know and I’ll try to elaborate more thoroughly. This technique isn’t too fancy but it is a very simple way to get an interesting effect.

Hacking the lighting in a sandbox solar system

September 28, 2016 - devlog

I’ve been able to get most things working reasonably well in Unity3D, but this is one obstacle I’ve run up against that I haven’t found an effective solution for yet. The issue? With a sun sitting right in the middle of my scene I need a point light in order to shine light evenly on all my planets. However, the point light’s shadows don’t work in the same way that a directional light’s shadows do – so I actually do need to use directional lights (as far as I’ve been able to tell so far).

Let’s look at some comparison screens – point light on the left, directional on the right:

As you can see here, everything looks about the same from space except for the angle of the light. But when we land on a planet you’ll see…

The directional light looks way better, right!? Even if I turn on shadows on the point light, the vast distance between the sun’s light and the planet means that the shadows won’t ever appear. At one point I had set things up so that each planet had its own point light at very close proximity so that it could cast some shadows – but the way that point lights and directional lights cast shadows is very different and ultimately I needed the directional effect.

And then there’s the other issue with point lights – which is that the light won’t be occluded by an object, so you can see all my objects on the dark side of the planet are still receiving light right through the planet’s body – which is not at all what I want. So the directional light is working pretty well for me on planets except that…

If I now look at the other planets from this planet, you’ll notice the light source for those isn’t accurate at all for the directional light – my script always points the directional light at the player’s current planet.

My solution up until now has been to switch my light source between point and directional whenever I land on or take off from a planet, and try to hide the switch behind some particle effects. One potential solution might be to have both lights on at all times and change the layers of the lights’ culling masks and the target objects – but that might complicate my existing layer system. I spent a long time cracking my head against the wall on this one, but this is the best solution I’ve come up with so far. It’s super hacky but unfortunately it’s a problem unique to using this open solar system setup.
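For reference, the hand-off I describe might be sketched like this (field and method names are illustrative, not my actual code):

```csharp
public Light sunPointLight;       // point light at the sun, used while in space
public Light planetDirectional;   // directional light, used while on a planet

void OnLand(Transform planet)
{
    // aim the directional light from the sun toward the current planet
    planetDirectional.transform.rotation =
        Quaternion.LookRotation(planet.position - sunPointLight.transform.position);
    planetDirectional.enabled = true;
    sunPointLight.enabled = false;
}

void OnTakeOff()
{
    sunPointLight.enabled = true;
    planetDirectional.enabled = false;
}
```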

I’d love to hear alternative ideas from anyone who might have one.

UPDATE – While playing with the lights some more I discovered that setting the Render Mode on the point light to ‘Important’ will make the shadows work properly on the dark side of the planet. If I set up a spot light for each planet (that I can turn on/off when appropriate) then I can get my directional shadows this way as well – as long as the spot light is also set to ‘Important’.

Creating a Steady Cam Effect

September 24, 2016 - devlog

Here’s a quick overview of how I managed my idle camera movement for Jettomero. I really enjoy spending a lot of time on camera behaviour to get things feeling natural and effective at motivating gameplay. To capture that ‘giant monster’ movie feel I wanted a constant drifting of the camera – almost a handheld camera simulation.

My camera control in Jettomero is actually surprisingly complex since I have 2 objects which use smoothing for their position and rotation – the camera object itself, and a camera target object that changes behaviour based on the game context (whether we’re flying around space or walking on a planet). I may go into depth about the camera system in another post but for now we’ll just look at the idle movement.


I was originally modifying position, rotation and zoom level all using the same technique but I’ve since simplified it to only adjust rotation because I found there was too much movement happening previously. So in the Start() function of my SteadyCam script I call this Coroutine.

    IEnumerator RotFlux()
    {
        while (true)
        {
            yield return new WaitForSeconds(Random.Range(0.5f, 1.5f));
            rotOffsetTarget = Quaternion.Euler(
                Random.Range(-rotRange, rotRange),
                Random.Range(-rotRange, rotRange),
                Random.Range(-rotRange, rotRange));
            yield return null;
        }
    }
The variable rotOffsetTarget is a Quaternion that I declare at the top of the script, so this coroutine modifies it every 0.5 to 1.5 seconds – using a range (rotRange) that I’ve also defined at the top of the script, as 3.0f. Now that I have a randomized rotation variation being set frequently I can include it in my Update function.

To simplify things – these are the two lines that affect my Camera rotation in the update function:

rotOffset = Quaternion.Lerp(rotOffset, rotOffsetTarget, Time.deltaTime);

transform.rotation = Quaternion.Lerp(transform.rotation, CameraFollow.Instance.transform.rotation * rotOffset, Time.deltaTime * rotSpeed);

First, I take my rotOffsetTarget and smooth to it using a Quaternion Lerp from rotOffset. Yes, I do a lot of smoothing.

As I mentioned earlier, I have 2 levels of smoothing on the Camera, so CameraFollow.Instance.transform refers to the second level of smoothing, which I want my steady cam to follow and mimic. I take the rotation of that transform, add my random rotation variation to it (Quaternions are combined through multiplication) and then Lerp between my Camera object’s current rotation and this target rotation. My variable rotSpeed (set at 2.0f earlier in my script) determines how quickly I settle into the new rotation.

I don’t have a professional background in code so I’m sure there’s a more effective way to do all this, but I’m happy to share my technique. If you have any further questions about my Camera behaviour feel free to drop me a line on Twitter.

Using a Particle System to Create Static Decorations

September 23, 2016 - devlog

Here’s a quick look at a useful trick for using the Shuriken Particle System in Unity3D to quickly make procedural arrangements of static elements.

I was playing with the idea of rings around planets when someone on twitter suggested an asteroid belt around the planets. This was an appealing idea so I considered how to quickly prototype it to get an idea of how it might work. Fortunately the Shuriken particle system in Unity3D is perfect for getting this up and running quickly. First I whipped up a ring mesh in Blender to use as the Mesh emitter, so all my asteroids would originate somewhere on the mesh (the edges in this particular case).


Here you can see the frame of the mesh that I use as my emission source.

Now there are several important settings to use if you want the particle system to behave more like a prop than a dynamic effect.

1- Set the Start Lifetime to max – for me Unity capped the value at 100000 seconds – which is ~27 hours – so those particles aren’t dying any time soon.

2- Turn off the emission rate and use a single burst to spawn all your particles at once (also be sure to turn off looping). Here you can choose the range of how many particles you’d like to decorate with. For my asteroid belts I set it between 0 and 300 so that some planets would only have a few scattered here and there.


3- Use mesh renderers under the Render setting (optional). For me I wanted to use the same material as my planet for the asteroids and also didn’t want the default billboarding effect here so I was able to select a few different meshes to randomly assign to each particle.
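The settings above can also be applied from script. Here’s a rough sketch using the ParticleSystem module API (I set mine up in the inspector, so treat this as an assumption, not my actual code):

```csharp
ParticleSystem ps = GetComponent<ParticleSystem>();

var main = ps.main;
main.loop = false;
main.startLifetime = 100000f;   // effectively immortal particles

var emission = ps.emission;
emission.rateOverTime = 0f;     // no continuous emission
// a single burst at t=0 spawning between 0 and 300 particles
emission.SetBursts(new ParticleSystem.Burst[] {
    new ParticleSystem.Burst(0f, (short)0, (short)300)
});
```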


Even if you want more control over your scene decoration at a later point, I think using Shuriken as a quick visual prototype is amazingly quick, clean and ideal for rapid iteration. In my case I might end up just sticking with my particle belt because it does pretty much everything I need it to do.


There’s lots of great ways to use this little hack, so be creative and see what you can do with it.

Handling Audio Fades in Coroutines

September 17, 2016 - devlog

Hi, here’s another quick demo of some C# code I wrote for Jettomero in Unity3D.

I was just adding some new functionality to my music system and needed to be able to transition volume levels on a given audio source. This is simple to accommodate with a coroutine: I can pass an AudioSource and a target volume to a generic function to keep things clean and reusable. However, I needed to be able to call these volume transitions on potentially multiple audio sources at once, and also to stop any given transition if that audio source received an updated transition call. Fortunately Unity’s StopCoroutine will accept an IEnumerator as an argument, so I can keep a reference to every coroutine before I start it and then use that reference to stop it later on. Since every audio source will only ever have a single active transition, I decided to use a dictionary to store paired AudioSource and IEnumerator values. This way I can easily check the dictionary for an existing transition given the audio source. As long as I’m careful to remove completed transitions from the dictionary, this technique seems to work perfectly for my needs. This code could obviously be adapted for non-audio purposes as well.

Here’s what the code looks like, (remember to add “using System.Collections.Generic” at the top of your .cs script to import the Dictionary support):


Dictionary<AudioSource, IEnumerator> activeTransitions = new Dictionary<AudioSource, IEnumerator>();

    void ChangeVolume(AudioSource source, float targetVolume, float transitionTime)
    {
        // stop any transition already running on this source
        IEnumerator activeTransition;
        if (activeTransitions.TryGetValue(source, out activeTransition))
        {
            StopCoroutine(activeTransition);
            activeTransitions.Remove(source);
        }

        IEnumerator audioTransition = TransitionVolume(source, targetVolume, transitionTime);
        activeTransitions.Add(source, audioTransition);
        StartCoroutine(audioTransition);
    }

    IEnumerator TransitionVolume(AudioSource source, float targetVolume, float transitionTime)
    {
        float currentVolume = source.volume;
        float progress = 0.0f;
        while (progress < 1.0f)
        {
            progress += Time.deltaTime / transitionTime;
            source.volume = Mathf.Lerp(currentVolume, targetVolume, progress);
            yield return null;
        }
        // the transition is done - remove it so the dictionary only holds active ones
        activeTransitions.Remove(source);
    }

Creating Semi-Procedural Textures

August 8, 2016 - devlog

Hello, this is the first of what I hope will be a series of development logs documenting some of the process of creating my projects. Some of the posts may be specific to certain applications and others may just be rants on broader game dev subjects. This particular log outlines a system that I set up in Unity3D. The technique relies on some familiarity with Unity3D and is mostly just a creative solution for something I needed in my game. Hopefully you find it interesting and/or inspiring.

The planets in Jettomero are procedurally generated using multiple techniques. In this dev log I’ll explain how the textures for each planet are created. I may still adjust it in the future to allow for greater diversity but in the mean time…

This is the hierarchy view of my texture generation process. A disabled camera lives on the parent object and points at a set of 10 quads that live underneath. By default each quad is using an identical texture, using a threshold shader so that whites are transparent and the black lines will all overlap on top of a single white background quad.


Here you can see what the layered quads look like in the scene view. It’s not much to look at yet. The lines are taken from a hand-drawn sketch and were additionally thresholded to black/white in Photoshop before being imported into Unity.


If I were to wrap a texture like this around one of my planets right now it would look like this:

It still looks kind of cool because of the thresholded shader I’m using but I want more detail and more randomness.

Now, when I run the game, every time a new planet is created it requests a new texture from my texture generator script. So here’s what the layered quads look like when they generate a new texture:


For each quad renderer I assign one of my inkline textures and then randomize the texture scale and offset while also rotating the quads. This creates a big old mess of lines that will never be the same again. Note that the lower layers are all visible because I use a shader on the quads that makes white transparent.

From here my disabled camera gets rendered to a Render Texture and I can copy that into a Texture2D which can be passed along to the planet that requested it.

The code for this is all fairly simple:

public Texture2D NewGeneratedTexture(float scale = 0.0f)
{
        Texture2D refTex = refTextures[Random.Range(0, refTextures.Count)];

        foreach (Renderer r in GetComponentsInChildren<Renderer>())
        {
            r.material.mainTexture = refTex;
            float randomScale = scale == 0.0f ? Random.Range(4.0f, 8.0f) : scale;
            float randomOffset = Random.Range(-1.0f, 1.0f);
            r.material.mainTextureScale = Vector2.one * randomScale;
            r.material.mainTextureOffset = Vector2.one * randomOffset;

            r.transform.localEulerAngles = new Vector3(0.0f, 0.0f, Random.Range(0.0f, 1.0f) * 360.0f);
        }

        Texture2D newTex = new Texture2D(defaultSize, defaultSize);

        RenderTexture.active = GetComponent<Camera>().targetTexture;
        GetComponent<Camera>().Render();   // the camera is disabled, so render it manually

        newTex.ReadPixels(new Rect(0, 0, defaultSize, defaultSize), 0, 0);
        newTex.Apply();

        return newTex;
}

I keep an array of different inklines, one of which is chosen at random at the start (although I could mix and match if I wanted). Then for each quad renderer you can see I assign the texture to that renderer’s material and proceed to warp it in basic ways. Then I create an empty texture and render my extra camera’s view into the texture.

I’m not sure if this is the most efficient code or the best solution for what I’m doing but I’ve been happy with the results so far. My original approach generated a texture completely in code using Perlin noise (which is still what I use for my terrain deformation maps), but this new technique lets me easily use a texture with a hand-drawn feel, which creates what I find to be a very appealing image on its own.


If you have any questions feel free to follow up with me by email @ ghosttimegames at gmail dot com or via Twitter @GhostTimeGames.