Blur filter for UITexture in NGUI

SHADDERRRRS! Well, that's it, I'm caught in shader land. The thing is, I don't know jack **** about them. What I miss most from Flash in Unity are filters like blur, drop shadow and glow. The way you would go about doing that in Unity is with shaders (I guess), so I stuck my nose into them. Here is my first version of a blur filter/shader.

 

Shader "Unlit/Transparent Colored Blurred"
{
  Properties
  {
    _MainTex ("Base (RGB), Alpha (A)", 2D) = "white" {}
    _Distance ("Distance", Float) = 0.015
  }
 
  SubShader
  {
    LOD 100
 
    Tags
    {
      "Queue" = "Transparent"
      "IgnoreProjector" = "True"
      "RenderType" = "Transparent"
    }
 
    Cull Off
    Lighting Off
    ZWrite Off
    Fog { Mode Off }
    Offset -1, -1
    Blend SrcAlpha OneMinusSrcAlpha
 
    Pass
    {
      CGPROGRAM
      #pragma vertex vertexProgram
      #pragma fragment fragmentProgram
 
      #include "UnityCG.cginc"
 
      struct appdata_t
      {
        float4 vertex : POSITION;
        float2 textureCoordinate : TEXCOORD0;
        fixed4 color : COLOR;
      };
 
      struct vertexToFragment
      {
        float4 vertex : SV_POSITION;
        half2 textureCoordinate : TEXCOORD0;
        fixed4 color : COLOR;
      };
 
      sampler2D _MainTex;
      float4 _MainTex_ST;
      float _Distance;
 
      vertexToFragment vertexProgram (appdata_t vertexData)
      {
        vertexToFragment output;
        output.vertex = mul(UNITY_MATRIX_MVP, vertexData.vertex);
        output.textureCoordinate = TRANSFORM_TEX(vertexData.textureCoordinate, _MainTex);
        output.color = vertexData.color;
        return output;
      }
 
      fixed4 fragmentProgram (vertexToFragment input) : COLOR
      {
        float distance = _Distance;
        fixed4 computedColor = tex2D(_MainTex, input.textureCoordinate) * input.color;
        computedColor += tex2D(_MainTex, half2(input.textureCoordinate.x + distance , input.textureCoordinate.y + distance )) * input.color;
        computedColor += tex2D(_MainTex, half2(input.textureCoordinate.x + distance , input.textureCoordinate.y)) * input.color;
        computedColor += tex2D(_MainTex, half2(input.textureCoordinate.x , input.textureCoordinate.y + distance )) * input.color;
        computedColor += tex2D(_MainTex, half2(input.textureCoordinate.x - distance , input.textureCoordinate.y - distance )) * input.color;
        computedColor += tex2D(_MainTex, half2(input.textureCoordinate.x + distance , input.textureCoordinate.y - distance )) * input.color;
        computedColor += tex2D(_MainTex, half2(input.textureCoordinate.x - distance , input.textureCoordinate.y + distance )) * input.color;
        computedColor += tex2D(_MainTex, half2(input.textureCoordinate.x - distance , input.textureCoordinate.y)) * input.color;
        computedColor += tex2D(_MainTex, half2(input.textureCoordinate.x , input.textureCoordinate.y - distance )) * input.color;
        computedColor = computedColor / 9;
 
        return computedColor;
      }
      ENDCG
    }
  }
}

It works OK, with a lot of restrictions. First, you can only use it for NGUI UITextures. I would love for it to work with UISprites, but they all share the same atlas, so if you blur one sprite you blur them all! Also, for now, you should leave a padding of 10 transparent pixels around your texture for a better effect.
 

[Screenshot: ShaderBlurred]

The Distance parameter is relative to the size of your texture, so correct values will change from one texture to the other. Anyway, there you have it. I will keep working on it, but if you see anything that can be improved, don't be afraid to tell me; I would really like to make it better.
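If it helps, here is a minimal sketch of how I hook it up from code; the field names (blurTexture, sourceTexture, blurPixels) are just illustrative, and the idea is simply to convert a pixel offset into UV space before feeding it to _Distance:

using UnityEngine;

public class BlurredTextureExample : MonoBehaviour
{
  public UITexture blurTexture;    // the NGUI UITexture to blur
  public Texture2D sourceTexture;  // the image to display
  public float blurPixels = 2f;    // desired sample offset, in pixels

  void Start ()
  {
    // Give the widget its own material so no other widget is affected.
    blurTexture.material = new Material(Shader.Find("Unlit/Transparent Colored Blurred"));
    blurTexture.mainTexture = sourceTexture;
    blurTexture.MakePixelPerfect();

    // _Distance is in UV space (0..1), so divide the pixel offset by the
    // texture width to get a comparable blur from one texture to another.
    blurTexture.material.SetFloat("_Distance", blurPixels / sourceTexture.width);
  }
}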



Updated version of the Masking shader for NGUI

I made a shader before to mask a texture so that it doesn't have to be rectangular. It turns out I was using an old version of NGUI, so when I updated (to version 2.65), my previous shader didn't work anymore. Also, Nicki Thomas Hansen made another shader so that you could use the masked texture inside a clipped panel. In doing so, he also explained what NGUI was doing and how it selects the correct shader. So, based on his AlphaClip, I remade my shader so that it works with the new version of NGUI. Here is the code for it:

Shader "Unlit/Transparent Colored Masked"
{
	Properties
	{
		_MainTex ("Base (RGB), Alpha (A)", 2D) = "white" {}
		_AlphaTex ("MaskTexture", 2D) = "white" {}
	}
 
	SubShader
	{
		LOD 100
 
		Tags
		{
			"Queue" = "Transparent"
			"IgnoreProjector" = "True"
			"RenderType" = "Transparent"
		}
 
		Cull Off
		Lighting Off
		ZWrite Off
		Fog { Mode Off }
		Offset -1, -1
		Blend SrcAlpha OneMinusSrcAlpha
 
		Pass
		{
			CGPROGRAM
				#pragma vertex vertexProgram
				#pragma fragment fragmentProgram
 
				#include "UnityCG.cginc"
 
				struct appdata_t
				{
					float4 vertex : POSITION;
					float2 textureCoordinate : TEXCOORD0;
					fixed4 color : COLOR;
				};
 
				struct vertexToFragment
				{
					float4 vertex : SV_POSITION;
					half2 textureCoordinate : TEXCOORD0;
					fixed4 color : COLOR;
				};
 
				sampler2D _MainTex;
				float4 _MainTex_ST;
				sampler2D _AlphaTex;
 
				vertexToFragment vertexProgram (appdata_t vertexData)
				{
					vertexToFragment output;
					output.vertex = mul(UNITY_MATRIX_MVP, vertexData.vertex);
					output.textureCoordinate = TRANSFORM_TEX(vertexData.textureCoordinate, _MainTex);
					output.color = vertexData.color;
					return output;
				}
 
				fixed4 fragmentProgram (vertexToFragment input) : COLOR
				{
					fixed4 computedColor = tex2D(_MainTex, input.textureCoordinate) * input.color;
					fixed4 alphaGuide = tex2D(_AlphaTex, input.textureCoordinate);
 
					if (alphaGuide.a < computedColor.a) computedColor.a = alphaGuide.a;
 
					return computedColor;
				}
			ENDCG
		}
	}
}

Funny how writing my previous post ended up solving a future problem of mine, through someone else writing an answer post.
I will also update the previous post so that it points to this one.

UPDATE: I renamed all the variables to something readable; I thought it might help in understanding what the shader is doing.



Kickstarter is my main new way to get games

I love Kickstarter. I think it is one of the best inventions of this decade. It gives people a say in what gets made or not, and that is a very powerful thing. So I decided that I would participate more in crowdfunding, notably in the video game area. Since I don't really have time to get into huge games, the type of games usually found on Kickstarter is perfect for me. I'm going to try to buy most of my games through crowdfunding.

Here are two games that I participated in:

The Fall

http://www.kickstarter.com/projects/189665092/the-fall-dark-story-driven-exploration-in-an-alien?ref=live

This seems to be made by one guy alone and it looks awesome. Just for that, it was worth helping it a bit. Also, who wouldn't want a deeper, darker Metroid?

Sunless Sea

http://www.kickstarter.com/projects/failbetter/sunless-sea?ref=live

I liked the look of this one; plus they said their inspirations were Don't Starve, FTL and roguelikes, so it has to be good.

So there you have it, I think you should back those projects too, so that they are made even more awesome. I will leave you with this tip: do not back too many projects at the same time, because you'll get too many email updates from them.



What is coming for UI in Unity3D

As you might have guessed from previous posts, I am currently building UI using NGUI in Unity3D. It is quite challenging to do everything I am used to from Flash. So when I saw this video from Unite 2013 by ArenMook, you can't believe how happy I was. It solves almost all the problems I have with doing 2D in Unity. I am also pretty happy they didn't go further with the OnGUI stuff; that was really awkward to use.

Here is the video (the very nice slide is at the 10 minute mark!):

 

Now all I am missing are bitmap filters like blur, drop shadow and glow, and you'll find a pretty happy person in me!



Unity iOS Keyboards while in landscape

I have been playing with input fields in NGUI and it wasn't very clear what each keyboard type looked like, so I took a screenshot of every type accessible. Here they are:

Default

ASCIICapable

NumbersAndPunctuation

URL

NumberPad

PhonePad

NamePhonePad

EmailAddress

 

As you can see, some are exactly the same (maybe they are different in portrait), so there are not as many options as it seems at first. I wish I could have done this for Android too, but I don't have access to an Android tablet…
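For reference, here is a minimal sketch of how those types map to Unity's TouchScreenKeyboard API, which is what ultimately opens the keyboard on iOS; swap the enum value to get any of the layouts above. This is plain Unity code, not the NGUI input field itself:

using UnityEngine;

public class KeyboardTypeExample : MonoBehaviour
{
  private TouchScreenKeyboard keyboard;

  void OnGUI ()
  {
    if (GUILayout.Button("Edit email"))
    {
      // Open the EmailAddress variant; any TouchScreenKeyboardType works here.
      keyboard = TouchScreenKeyboard.Open("", TouchScreenKeyboardType.EmailAddress);
    }

    if (keyboard != null && keyboard.done)
    {
      Debug.Log("Typed: " + keyboard.text);
      keyboard = null;
    }
  }
}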



Unity Quick tips: Clearing the Cache

So when you build, sometimes it doesn't take into account the new work you did. Or sometimes it just gives you a weird error that you can't figure out, like this one: "ArgumentException: get_temporaryCachePath can only be called from the main thread." Well, maybe it's time to clear the Unity player's cache. But how do you do such a thing? There doesn't seem to be any option for it in the player settings. Well, you could search for "Clear cache unity" on Google (that is probably what you searched for to arrive at this blog post) or you could go directly to this link.
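If the cache giving you trouble is the asset bundle / WWW cache, you can also wipe it from code with Unity's Caching API; a little sketch, assuming that is indeed the cache in question:

using UnityEngine;

public class CacheCleaner : MonoBehaviour
{
  void Start ()
  {
    // CleanCache removes everything Unity has cached for this player;
    // it returns false if the cache is currently in use.
    bool success = Caching.CleanCache();
    Debug.Log(success ? "Cache cleared" : "Cache is in use, try again later");
  }
}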

Well that is it, clear cache away!



Unity quick tips: Settings for Textures

Hey, I am working on a bigger post, but in the meantime, allow me to write this quick tip. In NGUI, when you don't use an atlas for your sprites, you use the UITexture class. At first I was having a lot of trouble with it, because the Make Pixel Perfect button just didn't seem to make my texture look pixel perfect. That was until I figured it out.

Making it work

When you select your texture file in Unity (the actual png file), the inspector will show you its import settings. By default, the type is set to 'Texture', which seems fine, but what you really want to set it to is 'Advanced'. Then you will be able to change the next combobox. In 3D you want images (textures) whose width and height are powers of 2, for example: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024. When you do 2D (UI), that really never happens; never. That is why you will likely set Non Power of 2 to 'None'. That is the most important setting. Now when you press the Make Pixel Perfect button in NGUI, it will resize the texture to the correct size.

Figuring out compression

To be frank, I don't totally understand the rest of the settings, as they refer to 3D stuff, but by setting the Wrap Mode to 'Clamp', Filter Mode to 'Trilinear' and Aniso Level to '4', your texture will look better at runtime. Also, since you probably don't need them, you should remove the check mark after Generate Mip Maps.
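If you get tired of clicking those settings for every texture, you could automate them with an AssetPostprocessor; here is a sketch (it assumes your UI textures live in a folder called UI, and the script goes in an Editor folder):

using UnityEngine;
using UnityEditor;

public class UITexturePostprocessor : AssetPostprocessor
{
  void OnPreprocessTexture ()
  {
    // Only touch textures that live under a UI folder.
    if (!assetPath.Contains("/UI/")) return;

    TextureImporter importer = (TextureImporter)assetImporter;
    importer.textureType = TextureImporterType.Advanced;   // unlock every option
    importer.npotScale = TextureImporterNPOTScale.None;    // keep non power of 2 sizes
    importer.wrapMode = TextureWrapMode.Clamp;
    importer.filterMode = FilterMode.Trilinear;
    importer.anisoLevel = 4;
    importer.mipmapEnabled = false;                        // UI doesn't need mip maps
  }
}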

Well, that is it for this quick tip. I hope it saves you some time when dealing with textures.

 

 



Masking textures using shaders in NGUI

If you follow this blog, you know that I have been having some problems with Unity3D and NGUI. Mostly it's because I was so familiar with Flash/AS3 that I am feeling kind of lost. But I am getting better at this 2D-in-a-3D-world thing. One of the things I miss the most is masks. In Flash they are very easy to use, and with them you can do a plethora of effects and animations. With bitmap-based technologies, a mask is not such a simple thing to implement.

Clipping

NGUI panels have the option of being clipped panels, which means that only a rectangle of the panel will be shown. This is great for some cases, like when you need your masked region to be a rectangle, but for most masking cases it won't work. Also, it doesn't allow nested clipping, which is a bummer.

Using another camera

Also, this guy created a shader that allows you to do masking similar to Flash. It looks good and it does the desired effect, but there is one drawback: for every mask, you need a new camera… That makes it very hard to manage in a large project or if you have multiple masks. I would use clipping over this technique because it is easier to deal with.

Transparency shader

Now, this is the technique I devised that allows you to have multiple textures masked at the same time, each with its own mask. This is really good if you load images (thumbnails) from a server and need them to be masked.

To do it we need to create a new shader. We start by taking the Unlit – Transparent Colored shader and adding two lines of code to it. First, we give it another texture as input. Second, we take the output of the shader, keep its RGB colors, but use the alpha of the new input texture we added. Here is the code:

Shader "Unlit/Transparent Colored with mask" {
  Properties {
    _MainTex ("Base (RGB), Alpha (A)", 2D) = "white" {}
    _AlphaTex ("Yeahyeah", 2D) = "white" {}
  }
 
  SubShader{
    LOD 100
 
    Tags{
      "Queue" = "Transparent"
      "IgnoreProjector" = "True"
      "RenderType" = "Transparent"
    }
 
    Pass {
      Cull Off
      Lighting Off
      ZWrite Off
      Fog { Mode Off }
      Offset -1, -1
      ColorMask RGB
      AlphaTest Greater .01
      Blend SrcAlpha OneMinusSrcAlpha
      ColorMaterial AmbientAndDiffuse
 
      SetTexture [_MainTex] {
        Combine Texture * Primary
      }
 
      SetTexture [_AlphaTex] {
        Combine previous, texture
      }
    }
  }
}

So that is the shader, but now we have to use it. This is actually what I found to be the most difficult part, because there is a lot of documentation about how to make shaders, but not about how to use them. So in the next chunk of code, we create a texture in NGUI and give it the shader. After that, we feed the shader the textures it needs to calculate the mask.

_newTexture = NGUITools.AddWidget<UITexture>(gameObject);
_newTexture.pivot = UIWidget.Pivot.TopLeft;
_newTexture.material = new Material(Shader.Find("Unlit/Transparent Colored with mask"));
_newTexture.mainTexture = myTexture2D;
_newTexture.MakePixelPerfect();
 
//now we give the shader the textures
 
_newTexture.material.SetTexture("_MainTex", testRed);
_newTexture.material.SetTexture("_AlphaTex", testAlpha);

Here, testRed is the image we want to mask and testAlpha is the alpha channel we want that image to use.

So there you have it. I will add pictures later to illustrate it better, but for now that's how it is. Note that with this technique you can't really animate or nest the masks, but you can have a lot of them at the same time.
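To make the thumbnail case concrete, here is a sketch that loads an image with WWW and runs it through the mask; maskAlpha and the URL are placeholders, the rest is the same NGUI code as above:

using UnityEngine;
using System.Collections;

public class MaskedThumbnailLoader : MonoBehaviour
{
  public Texture2D maskAlpha;  // the alpha channel to apply
  public string url = "http://example.com/thumbnail.png";

  IEnumerator Start ()
  {
    WWW www = new WWW(url);
    yield return www;

    // Create the widget and give it the masking shader.
    UITexture thumb = NGUITools.AddWidget<UITexture>(gameObject);
    thumb.pivot = UIWidget.Pivot.TopLeft;
    thumb.material = new Material(Shader.Find("Unlit/Transparent Colored with mask"));
    thumb.mainTexture = www.texture;
    thumb.MakePixelPerfect();

    // Feed the shader the loaded image and the mask.
    thumb.material.SetTexture("_MainTex", www.texture);
    thumb.material.SetTexture("_AlphaTex", maskAlpha);
  }
}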

UPDATE: If you are using a version of NGUI higher than 2.64, you should probably use this shader instead.



Creating new GameObjects

I don't know what you want when you create new GameObjects for 3D, but for my UI stuff, I need the new instance to be a child of the GameObject I have selected in the hierarchy. When you use the command provided by the Unity editor, it creates the new GameObject at the root of your project. Plus, when you move that GameObject (haha, I wrote MovieClip at first) in the hierarchy, its position and scale will change according to the parent's values. So basically, I was spending my time moving new GameObjects around and setting their position to (0,0,0) and their scale to (1,1,1).

When I had had enough of it, I took matters into my own hands and created a little panel that creates a new GameObject as a child of the GameObject selected in the editor. It also sets its position to 0 and scale to 1. Bam! Here is the code for it:

using UnityEngine;
using UnityEditor;
 
public class CreateGameObject : EditorWindow{
	[MenuItem("zedia/Utility/GameObject Creator")]
	public static void ShowWindow() {
		EditorWindow.GetWindow<CreateGameObject>("GameObject Creator");
	}
 
	void OnGUI (){
		GUILayout.Label("GameObject Creator", EditorStyles.boldLabel);
		GUILayout.Label("Creates a new GameObject under the selected GameObject in the Hierarchy,", EditorStyles.label);
		GUILayout.Label("with local position (0,0,0) and local scale (1,1,1)", EditorStyles.label);
		EditorGUILayout.Separator();
		GUILayout.BeginHorizontal();
 
		bool createClicked = GUILayout.Button("Add GameObject To:");
		// Shows which GameObject is currently selected in the Hierarchy.
		EditorGUILayout.ObjectField(Selection.activeGameObject, typeof(GameObject), true, GUILayout.Width(140f));
		GUILayout.EndHorizontal();
 
		if (createClicked && Selection.activeGameObject != null){
			CreateTheEmptyGameObject();
		}
	}
 
	public static void CreateTheEmptyGameObject(){
		GameObject newGameObject = new GameObject();
		// Parent it to the current selection, then reset the local transform.
		newGameObject.transform.parent = Selection.activeGameObject.transform;
		newGameObject.transform.localScale = new Vector3(1,1,1);
		newGameObject.transform.localPosition = new Vector3(0,0,0);
		Selection.activeObject = newGameObject;
	}
 
	void OnSelectionChange () { Repaint(); }
}

Now, that was what I wanted, but it would have been even better if it could listen for a keyboard shortcut. So I set out to do just that in the previous little panel, except it didn't work. Something was preventing me from receiving the keyboard events (I can Debug.Log, but as soon as I execute code it doesn't work). So I decided I would just create a MenuItem associated with a keyboard shortcut. Here is what the code looks like:

using UnityEngine;
using System.Collections;
using UnityEditor;
 
public class CreateGameObjectCommand : MonoBehaviour {
	// "#%m" registers the shortcut Shift+Ctrl+M (Shift+Cmd+M on a Mac).
	[MenuItem("zedia/Utility/GameObject Creator Command #%m")]
	static void CreateGameObject () {
		if (Selection.activeGameObject == null) return;
		GameObject newGameObject = new GameObject();
		newGameObject.transform.parent = Selection.activeGameObject.transform;
		newGameObject.transform.localScale = new Vector3(1,1,1);
		newGameObject.transform.localPosition = new Vector3(0,0,0);
		Selection.activeObject = newGameObject;
	}
}

So here you go: either a panel in the editor, or a shortcut, or both if you want. Just grab the code, put the files in a folder named Editor, and Unity will add the menu for you. I hope Unity3D changes its default behavior for creating new GameObjects, but until then, we will make it work ourselves!



Unity3D: the editor/code duality

So, as most of you know, I have been doing Flash for a long time, and recently I have been doing Unity3D (well, mostly NGUI, you could say). It has been mostly fun and mostly rigid.

Rigid???

Yeah, I would say rigid. Unity3D imposes a way of working on you, which is mostly to use the editor rather than code. They really, really want you to use the editor. And coming from Flash, this feels really weird, probably because after 6 years of using Flash you learn that the more you can do in code, the better and easier it is. Also, if you constantly switch between the editor and the code, it gets confusing, always switching paradigms.

GameObject for president!

GameObject is the root of everything you do in Unity, but one of the most annoying things is that you can't extend GameObject… Come on, let me do it, just the tip ;) . Here is what I would do if I could: I would make myself APIs for 2D. Moving something in x would be

gameObject.x = 30;

instead of

gameObject.transform.localPosition = new Vector3(30, gameObject.transform.localPosition.y, gameObject.transform.localPosition.z);

Man, have you seen the size of that thing just to move something in x? Let me make it better for myself. I don't care about 3D, so I wouldn't even have a z value; I would call it depth, and it would make the code way more readable.
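Since extending GameObject isn't allowed, the closest I can get is C# extension methods; a sketch of the idea (x has to be a method because C# has no extension properties, and none of this is a Unity or NGUI API):

using UnityEngine;

public static class GameObjectExtensions
{
  // gameObject.SetX(30f) instead of rebuilding the whole Vector3 by hand.
  public static void SetX (this GameObject go, float x)
  {
    Vector3 p = go.transform.localPosition;
    go.transform.localPosition = new Vector3(x, p.y, p.z);
  }

  public static float GetX (this GameObject go)
  {
    return go.transform.localPosition.x;
  }
}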

Encapsulation, what encapsulation

The other thing that bugs me is that to interact through code with a GameObject hierarchy you created, you basically have to know how it is built, and you have to get your pieces using Find or GetComponent. Find statements are the ugliest ones, as they use a string to get you what you want, which is really error-prone.

So you add your script to your GameObject (proof that the editor takes precedence over code), but to have interactions between multiple GameObjects you have to go through their scripts, whose types you have to know, without any idea whether they exist and are linked or not. You'll know at runtime, when the error pops up.

Everything on a GameObject

Basically my point here is that you can't do anything if it ain't on a GameObject. Some core features just won't work if it ain't. Like the WWW class (used to load stuff from the web; really cryptic name if you ask me). It won't work if it ain't called from a MonoBehaviour, and MonoBehaviours can't be instantiated directly; they need to be added to a GameObject. So what about models, code that only keeps the state of an application or that loads data to hold it? Models have nothing to do with GameObjects; they should be allowed to use WWW. But no, if you want to, you need to create a GameObject, add a loading script to it, and wait for it to pass you back the data. Doesn't that sound devious to you?
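For what it's worth, here is the kind of throwaway loader I end up writing; the Loader name and callback shape are my own, the point is just that the GameObject only exists to run the WWW call and hand the text back to a plain C# model:

using UnityEngine;
using System;
using System.Collections;

public class Loader : MonoBehaviour
{
  // Called from any plain class: Loader.Load(url, text => model.Parse(text));
  public static void Load (string url, Action<string> onComplete)
  {
    GameObject go = new GameObject("Loader");
    Loader loader = go.AddComponent<Loader>();
    loader.StartCoroutine(loader.DoLoad(url, onComplete));
  }

  IEnumerator DoLoad (string url, Action<string> onComplete)
  {
    WWW www = new WWW(url);
    yield return www;
    onComplete(www.text);
    Destroy(gameObject); // the GameObject existed only for this request
  }
}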

Let it define itself

I like Unity3D; it lets you build for Android / iOS very easily, but the way it is so rigid really annoys me. I think a project as young as Unity3D should not force its users down one path; it should let them find incredible and unthought-of ways of using it. I want to do 2D / UI with it, and right now that is pretty annoying to do.
