Although URP ships with its own 2D lighting system, it cannot be used together with the forward renderer; you would have to switch to the 2D renderer workflow entirely. So I looked into how to implement a 2D lighting system inside Unity myself.
The code comes from https://github.com/SardineFish/Unity2DLighting
The project was written by a Zhihu user who also published an accompanying article, "Unity中实现2D光照系统" (https://zhuanlan.zhihu.com/p/67923713). The shadow-casting technique comes from "如何在unity实现足够快的2d动态光照" (https://zhuanlan.zhihu.com/p/52423823). Both articles explain the theory in detail, but when I actually opened the project I was still lost: the code involves some plane geometry and has almost no comments, which makes it hard to follow. After a week of study, though, I worked out what every file does and how they connect, and I am sharing that here.
The two geometry diagrams used below are uploaded to Baidu Cloud (extraction code: 5ohy): https://pan.baidu.com/s/1RjqFIfN_bysRotxc5kmHAw
If you are interested, you can open them with GeoGebra: https://www.geogebra.org/
Since I am using URP, I first deleted the code related to the built-in render pipeline and kept only the URP part. The key pieces are concentrated in the Runtime and Shader folders.
First, look at the Light2DRenderFeature file, which defines the RenderFeature and RenderPass for URP. The AddRenderPasses method of the Light2DRenderFeature class enqueues our custom RenderPass into the render queue, and during actual rendering the Execute method of the RenderPass class submits the recorded commands to the render context.
I changed the render event to run after transparents are rendered; the original ran after all rendering finished (which meant it drew on top of post-processing effects and could not be previewed in the Scene view).
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

namespace Lighting2D
{
    public class Light2DRenderFeature : ScriptableRendererFeature
    {
        public Light2DSystemSettings Settings;

        public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
        {
            var pass = new RenderPass(Settings, renderer.cameraColorTarget);
            pass.renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
            renderer.EnqueuePass(pass);
        }

        public override void Create()
        {
        }

        [System.Serializable]
        public class RenderPass : ScriptableRenderPass
        {
            RenderTargetIdentifier cameraColorTarget;
            readonly Light2DSystemSettings settings;
            readonly Light2DPass pass = new Light2DPass();

            public RenderPass(Light2DSystemSettings settings, RenderTargetIdentifier colorTarget)
            {
                this.settings = settings;
                cameraColorTarget = colorTarget;
            }

            public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
            {
                var cmd = CommandBufferPool.Get("Lighting 2D");
                cmd.BeginSample("Lighting 2D");
                var data = new Light2DRenderingData()
                {
                    camera = renderingData.cameraData.camera,
                    cameraColorTarget = cameraColorTarget,
                    settings = settings,
                };
                pass.Render(cmd, ref data);
                cmd.EndSample("Lighting 2D"); // close the sample before the buffer is executed
                context.ExecuteCommandBuffer(cmd);
                CommandBufferPool.Release(cmd);
            }
        }
    }
}
As you can see, the main command buffer work happens inside our custom Light2DPass class, so let's jump to the Light2DPass file.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.Rendering;

namespace Lighting2D
{
    public class Light2DPass
    {
        private readonly int ShaderIDLightMap = Shader.PropertyToID("_2DLightMap");
        private readonly int ShaderIDShadowMap = Shader.PropertyToID("_2DShadowMap");

        public void Render(CommandBuffer cmd, ref Light2DRenderingData data)
        {
            var lightmap = ShaderIDLightMap;
            var shadowmap = ShaderIDShadowMap;
            var screenSize = new Vector2(data.camera.pixelWidth, data.camera.pixelHeight);
            cmd.GetTemporaryRT(
                lightmap,
                Mathf.FloorToInt(screenSize.x * data.settings.LightMapResolutionScale),
                Mathf.FloorToInt(screenSize.y * data.settings.LightMapResolutionScale),
                0,
                data.settings.LightMapFliterMode,
                data.settings.LightMapFormat);
            cmd.GetTemporaryRT(
                shadowmap,
                Mathf.FloorToInt(screenSize.x * data.settings.ShadowMapResolutionScale),
                Mathf.FloorToInt(screenSize.y * data.settings.ShadowMapResolutionScale),
                0,
                data.settings.ShadowMapFliterMode,
                data.settings.ShadowMapFormat);
            data.lightmap = lightmap;
            data.shadowmap = shadowmap;
            cmd.SetRenderTarget(lightmap);
            cmd.ClearRenderTarget(true, true, Color.black);
            bool shouldClearShadowMap = true;
            foreach (var light in Light2Dbase.AssetsManager.Assets)
            {
                if (!light.enabled || !light.gameObject.activeInHierarchy)
                    continue;
                if (shouldClearShadowMap)
                {
                    cmd.SetRenderTarget(shadowmap);
                    cmd.ClearRenderTarget(true, true, Color.black);
                    shouldClearShadowMap = false;
                }
                if (light.LightShadows != LightShadows.None)
                {
                    light.RenderShadow(cmd, ref data);
                    shouldClearShadowMap = true;
                }
                light.RenderLight(cmd, ref data);
            }
            cmd.SetGlobalFloat("_ExposureLimit", data.settings.ExposureLimit);
            cmd.SetGlobalTexture("_LightMap", lightmap);
            cmd.SetGlobalColor("_GlobalLight", data.settings.GlobalLight);
            cmd.Blit(BuiltinRenderTextureType.None, data.cameraColorTarget, ShaderPool.Get("Lighting2D/DeferredLighting"), 0);
            cmd.ReleaseTemporaryRT(lightmap);
            cmd.ReleaseTemporaryRT(shadowmap);
        }
    }
}
This is the complete 2D light drawing process. Combined with the two articles above, it breaks down into three stages: 1. acquire resources; 2. draw a shadow map and light map for each light; 3. blend the light map with the camera image.
1. Acquiring resources: temporary render textures are requested at the required sizes and initialized. Note that both maps are cleared to black; this matters a great deal when we read the shaders shortly.
2. Per-light drawing: the shadow map is drawn first, then the light map, and the shadow map is cleared again before the next shadow-casting light. So the shadow map is recomputed for every light and never kept; it only serves as a reference for the light-map draw that follows. The light map, by contrast, accumulates light after light until a complete light map is produced. We will dig into how each is drawn below.
3. Compositing: this step is comparatively simple. The light map is blended over the previously rendered image; the shadow map is not used at all here. We will look at this shader later.
Now let's see what happens for each individual light. Since shadows are drawn first, we start with Light2Dbase, a custom class that is the base of all 2D lights. Here is its shadow-drawing method, RenderShadow:
public void RenderShadow(CommandBuffer cmd, ref Light2DRenderingData data)
{
    if (LightShadows == LightShadows.None)
        return;
    if (shadowMat == null)
        shadowMat = new Material(Shader.Find("Lighting2D/Shadow"));
    if (!ShadowMesh)
        ShadowMesh = new Mesh();
    if (!tempMesh)
        tempMesh = new Mesh();
    ShadowMesh.Clear();
    var meshBuilder = new MeshBuilder();
    int count = Physics2D.OverlapCircleNonAlloc(transform.position, LightDistance, shadowCasters);
    for (var i = 0; i < count; i++)
    {
        Collider2D caster = shadowCasters[i];
        if (caster is PolygonCollider2D)
        {
            var mesh = PolygonShadowMesh(caster as PolygonCollider2D);
            meshBuilder.AddCopiedMesh(mesh);
            mesh.Clear();
        }
    }
    ShadowMesh = meshBuilder.ToMesh(ShadowMesh);
    if (LightShadows == LightShadows.Soft && ShadowSmooth == ShadowSmooth.VolumnLight)
    {
        cmd.SetGlobalFloat("_LightSize", LightVolumn);
        cmd.DrawMesh(ShadowMesh, Matrix4x4.TRS(transform.position, transform.rotation, transform.localScale), shadowMat, 0, 1);
    }
    else
    {
        cmd.DrawMesh(ShadowMesh, Matrix4x4.TRS(transform.position, transform.rotation, transform.localScale), shadowMat, 0, 0);
        if (LightShadows == LightShadows.Soft && ShadowSmooth == ShadowSmooth.Blur)
        {
            GaussianBlur.Blur(SmoothRadius, cmd, data.shadowmap, data.shadowmap, ShaderPool.Get("GaussianBlur/Blur"));
        }
    }
}
At a glance: each light finds every collider within its light range, builds a sub shadow mesh for each collider, copies all of the sub meshes into one combined shadow mesh, and finally draws that mesh. The hard parts are how the sub shadow mesh is computed, and how the shadow value is computed while the mesh is drawn.
First, how the sub shadow mesh is computed. Jump to the PolygonShadowMesh method in the same file: its input is a collider and its output is that collider's sub shadow mesh.
public Mesh PolygonShadowMesh(PolygonCollider2D pol)
{
    var points = pol.GetPath(0);
    var z = new Vector3(0, 0, 1);
    MeshBuilder meshBuilder = new MeshBuilder(5 * points.Length, 3 * points.Length);
    var R_2 = Mathf.Pow(LightDistance, 2); // outer radius (light range) squared
    var r_2 = Mathf.Pow(LightVolumn, 2);   // inner radius (light volume) squared
    for (var i = 0; i < points.Length; i++)
    {
        // transform points from collider space to light space
        Vector3 p0 = transform.worldToLocalMatrix.MultiplyPoint(pol.transform.localToWorldMatrix.MultiplyPoint(points[(i + 1) % points.Length]));
        Vector3 p1 = transform.worldToLocalMatrix.MultiplyPoint(pol.transform.localToWorldMatrix.MultiplyPoint(points[i]));
        p0.z = p1.z = 0;
        var ang0 = Mathf.Asin(LightVolumn / p0.magnitude); // angle between light dir & tangent of light circle
        var ang1 = Mathf.Asin(LightVolumn / p1.magnitude); // angle between light dir & tangent of light circle
        Vector3 shadowA = MathUtility.Rotate(p0, -ang0).normalized * (Mathf.Sqrt(R_2 - r_2) - p0.magnitude * Mathf.Cos(ang0));
        Vector3 shadowB = MathUtility.Rotate(p1, ang1).normalized * (Mathf.Sqrt(R_2 - r_2) - p1.magnitude * Mathf.Cos(ang1));
        shadowA += p0;
        shadowB += p1;
        int meshType = 0;
        if (Vector3.Cross(p1 - p0, shadowB - p1).z >= 0)
        {
            meshType |= 1;
            shadowB = MathUtility.Rotate(p0, ang0).normalized * (Mathf.Sqrt(R_2 - r_2) - p0.magnitude * Mathf.Cos(ang0));
            shadowB += p0;
        }
        if (Vector3.Cross(p0 - shadowA, p1 - p0).z >= 0)
        {
            meshType |= 2;
            shadowA = MathUtility.Rotate(p1, -ang1).normalized * (Mathf.Sqrt(R_2 - r_2) - p1.magnitude * Mathf.Cos(ang1));
            shadowA += p1;
        }
        var OC = (shadowA + shadowB) / 2;
        Vector3 shadowR = OC.normalized * (R_2 / OC.magnitude);
        if (meshType == 0)
        {
            meshBuilder.AddVertsAndTriangles(
                new Vector3[] { p0, p1, shadowB, shadowA, shadowR },
                new int[]
                {
                    0, 3, 4,
                    1, 0, 4,
                    1, 4, 2,
                },
                new Vector2[] { p0, p0, p0, p0, p0 },
                new Vector2[] { p1, p1, p1, p1, p1 });
        }
        else if (meshType == 1) // merge p0->p1 & p1->shadowB
        {
            meshBuilder.AddVertsAndTriangles(
                new Vector3[] { p0, shadowB, shadowA, shadowR },
                new int[]
                {
                    0, 2, 3,
                    0, 3, 1,
                },
                new Vector2[] { p0, p0, p0, p0 },
                new Vector2[] { p1, p1, p1, p1 });
        }
        else if (meshType == 2) // merge shadowA->p0 & p0->p1
        {
            meshBuilder.AddVertsAndTriangles(
                new Vector3[] { p1, shadowB, shadowA, shadowR },
                new int[]
                {
                    0, 2, 3,
                    0, 3, 1,
                },
                new Vector2[] { p0, p0, p0, p0 },
                new Vector2[] { p1, p1, p1, p1 });
        }
        else if (meshType == 3) // cross
        {
            meshBuilder.AddVertsAndTriangles(
                new Vector3[] { p1, p0, shadowB, shadowA, shadowR },
                new int[]
                {
                    0, 3, 4,
                    1, 0, 4,
                    1, 4, 2,
                },
                new Vector2[] { p1, p1, p1, p1, p1 },
                new Vector2[] { p0, p0, p0, p0, p0 });
        }
        if (DebugShadow)
        {
            Debug.DrawLine(transform.localToWorldMatrix.MultiplyPoint(p0), transform.localToWorldMatrix.MultiplyPoint(p1), Color.red);
            Debug.DrawLine(transform.localToWorldMatrix.MultiplyPoint(p1), transform.localToWorldMatrix.MultiplyPoint(shadowB), Color.green);
            Debug.DrawLine(transform.localToWorldMatrix.MultiplyPoint(p0), transform.localToWorldMatrix.MultiplyPoint(shadowA), Color.blue);
            Debug.DrawLine(transform.localToWorldMatrix.MultiplyPoint(shadowA), transform.localToWorldMatrix.MultiplyPoint(shadowR), Color.white);
            Debug.DrawLine(transform.localToWorldMatrix.MultiplyPoint(shadowB), transform.localToWorldMatrix.MultiplyPoint(shadowR), Color.white);
            return meshBuilder.ToMesh(ShadowMesh); // note: early-out after the first edge when debugging
        }
    }
    var mesh = meshBuilder.ToMesh(tempMesh);
    mesh.RecalculateNormals();
    return mesh;
}
This is a plane-geometry problem. It is simple to state, but reading the code cold is painful: you have no idea what it is trying to do. So let's pair the code with the diagrams, which makes it much easier to follow.
First, from the for loop you can tell that a sub-sub shadow mesh is computed for every edge of the collider, and each one is then appended to the collider's sub shadow mesh.
p0 and p1 are the two endpoints of one occluding edge. The chain of coordinate transforms moves the two points into the light's local space, with the current light at the origin. LightVolumn is the light's physical radius and LightDistance is its range, giving an inner circle and an outer circle, as in the figure below:
The two ang values are the angles between each O-px segment and the tangent lines from px to the LightVolumn circle, as shown below: the four dashed lines are the four tangents.
Next, the two shadow points. First draw the vectors p0 and p1 and translate them to the corresponding p points; this makes the rotation easier to visualize, and at the end we simply translate them back:
Rotating by the ang angles effectively splays the two px' vectors outward so that each one coincides with the outer tangent line below it; these are p0'' and p1'' in the figure:
Each is then normalized and scaled. Look at the scale factor closely: Mathf.Sqrt(R_2 - r_2) is the distance from the inner circle's tangent point along the tangent line out to the outer circle (R_2 is |O-t2| squared, r_2 is |O-t1| squared, so the result is the length t1-t2), while p0.magnitude * Mathf.Cos(ang0) is the distance from the p point to the tangent point. Subtracting the two gives the distance from p along the tangent line out to the outer circle, the length p-t2 in the figure, which pins down the two shadow points.
Mapping this back to the shadow figure: t2 is where each p point, moved along its p'' vector, meets the outer circle, and these two intersections are the shadow points, as shown below:
But remember that earlier we translated the p vectors outward, so the points we actually computed still need the corresponding p vector subtracted. Looking at the code, it adds the corresponding p vector right back, which cancels the translation, so the shadow points are exactly the two points in the figure.
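The shadow-point construction can be sanity-checked numerically. Below is a small plain-Python sketch (my own rotate helper stands in for MathUtility.Rotate; R, r and p0 are arbitrary sample values, not from the project). It reproduces shadowA = Rotate(p0, -ang0).normalized * (sqrt(R_2 - r_2) - p0.magnitude * cos(ang0)) + p0 and confirms that the resulting point lands exactly on the outer circle of radius LightDistance:

```python
import math

def rotate(v, ang):
    # 2D counter-clockwise rotation, standing in for MathUtility.Rotate
    c, s = math.cos(ang), math.sin(ang)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def norm(v):
    return math.hypot(v[0], v[1])

def shadow_point(p, R, r, sign):
    # shadow = Rotate(p, sign*ang).normalized * (sqrt(R^2 - r^2) - |p|*cos(ang)) + p
    ang = math.asin(r / norm(p))
    d = rotate(p, sign * ang)
    k = math.sqrt(R * R - r * r) - norm(p) * math.cos(ang)
    m = norm(d)
    return (p[0] + d[0] / m * k, p[1] + d[1] / m * k)

R, r = 5.0, 0.5          # LightDistance (outer) and LightVolumn (inner) radii
p0 = (2.0, 1.0)          # one edge endpoint, already in light space
shadowA = shadow_point(p0, R, r, -1)
print(norm(shadowA))     # distance from the light: the outer radius R
```

The offset direction is tangent to the inner circle, so any point on it at tangent-distance sqrt(R² - r²) from the tangent point sits at distance sqrt(r² + (R² - r²)) = R from the light, which is what the assertion below checks for both rotation signs.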
The two if blocks that follow handle special cases: when the p0-p1 segment sits at an awkward angle (nearly perpendicular to the circle), or its relative position is flipped compared with our figure, the corresponding p'' vector is recomputed so that the triangulation below still works; the meshType flags record which case occurred.
We now have four vertices (p0, p1, shadowA, shadowB) and need one last one. The OC vector points to the midpoint of shadowA and shadowB, i.e. half the diagonal of the quadrilateral spanned by the two shadow vectors:
The final expression, OC.normalized * (R_2 / OC.magnitude), is hard to parse at first. It computes the intersection of the two tangent lines at the shadow points with the extension of OC; after simplification only this expression remains, and the intersection is shadowR:
Deriving it by hand: C is the midpoint of the chord shadowA-shadowB, so OC is perpendicular to the chord, and with θ the angle between O-shadowA and OC we have |OC| = R·cosθ. A point X lies on the tangent at shadowA exactly when OA · OX = R². For a point P = t · OC/|OC| on the ray OC, OA · (OC/|OC|) = R·cosθ = |OC|, so OA · OP = t·|OC|; setting this equal to R² gives t = R²/|OC|, which is exactly OC.normalized * (R_2 / OC.magnitude).
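That relation is easy to verify numerically. A quick plain-Python check (R and the chord half-angle are arbitrary sample values of my own): the point OC.normalized * (R² / |OC|) satisfies OA · OP = R², i.e. it lies on the tangent line at shadowA:

```python
import math

R = 5.0
theta = math.radians(30)                            # angle between OA and OC (sample value)
A = (R * math.cos(theta),  R * math.sin(theta))     # shadowA on the outer circle
B = (R * math.cos(theta), -R * math.sin(theta))     # shadowB, symmetric about OC
C = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)          # midpoint of the chord
oc = math.hypot(C[0], C[1])
P = (C[0] / oc * R * R / oc, C[1] / oc * R * R / oc) # OC.normalized * (R^2 / |OC|)
# P lies on the tangent at A iff OA . OP == R^2
print(A[0] * P[0] + A[1] * P[1], R * R)
```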
With that we have all five vertices, which form the sub-sub shadow mesh we need. For the computation that comes later, the two endpoints of the occluding edge are also stored in the mesh's UV channels. Remember, this is only one edge of the collider; the sub-sub shadow meshes of every edge are stitched together to form the collider's sub shadow mesh:
Jumping back to the RenderShadow function: stitching together the sub shadow meshes of all colliders yields the light's full shadow mesh, which can now be drawn.
The tail of RenderShadow (full listing above) then chooses which pass of the shadow shader to use:
if (LightShadows == LightShadows.Soft && ShadowSmooth == ShadowSmooth.VolumnLight)
{
    cmd.SetGlobalFloat("_LightSize", LightVolumn);
    cmd.DrawMesh(ShadowMesh, Matrix4x4.TRS(transform.position, transform.rotation, transform.localScale), shadowMat, 0, 1);
}
else
{
    cmd.DrawMesh(ShadowMesh, Matrix4x4.TRS(transform.position, transform.rotation, transform.localScale), shadowMat, 0, 0);
    if (LightShadows == LightShadows.Soft && ShadowSmooth == ShadowSmooth.Blur)
    {
        GaussianBlur.Blur(SmoothRadius, cmd, data.shadowmap, data.shadowmap, ShaderPool.Get("GaussianBlur/Blur"));
    }
}
The parameters decide whether soft or hard shadows are drawn, but both live in one shader file, so open the 2DShadow file. First, the hard-shadow pass:
Pass
{
    BlendOp Add
    Blend One One

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    struct v2f
    {
        float4 vertex : SV_POSITION;
        float2 texcoord : TEXCOORD0;
    };

    v2f vert(appdata_base IN)
    {
        v2f OUT;
        OUT.vertex = UnityObjectToClipPos(IN.vertex);
        OUT.texcoord = IN.texcoord;
        return OUT;
    }

    fixed4 frag(v2f IN) : SV_Target
    {
        float3 color = float3(1, 1, 1);
        return fixed4(color, 1.0);
    }
    ENDCG
}
As you can see, it simply draws the whole mesh pure white. Why white? Note that the blend is additive (BlendOp Add, Blend One One), and remember that the shadow map was cleared to black: shadow is being stored inverted. In reality shadows accumulate; the more shadows overlap, the darker the result should be. With inverted storage, white rgb(1,1,1) accumulates trivially under additive blending, whereas storing the "normal" color would be more awkward. We only need to flip it back to the normal value when sampling the map later.
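The arithmetic of that inverted storage can be written out in a few lines of plain Python (a single float stands in for one shadow-map texel, and the clamp models the render target's 0-1 range; both are my own simplifications):

```python
texel = 0.0                        # shadow map cleared to black
for _ in range(3):                 # three overlapping shadow triangles drawn
    texel = min(1.0, texel + 1.0)  # BlendOp Add, Blend One One, clamped by the target
lit_factor = 1.0 - texel           # flipped back to "normal" when sampled later
print(texel, lit_factor)
```

However many shadow triangles cover a texel, the stored value saturates at 1, and flipping it back yields 0: fully shadowed.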
Next, how soft shadows are drawn, i.e. the second pass:
Pass
{
    BlendOp Add
    Blend One One

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    struct appdata_t
    {
        float4 vertex : POSITION;
        float4 color : COLOR;
        float2 edgeA : TEXCOORD0;
        float2 edgeB : TEXCOORD1;
    };

    struct v2f
    {
        float4 vertex : SV_POSITION;
        float2 edgeA : TEXCOORD0;
        float2 edgeB : TEXCOORD1;
        float2 pos : TEXCOORD2;
    };

    uniform float _LightSize;

    v2f vert(appdata_t i)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(i.vertex);
        o.edgeA = i.edgeA;
        o.edgeB = i.edgeB;
        o.pos = i.vertex;
        return o;
    }

    inline float2 rotate(float2 v, float ang)
    {
        float2 r = float2(cos(ang), sin(ang));
        return float2(r.x * v.x - r.y * v.y, r.x * v.y + r.y * v.x);
    }

    inline float cross2(float2 u, float2 v)
    {
        return cross(float3(u, 0), float3(v, 0)).z;
    }

    fixed4 frag(v2f i) : SV_Target
    {
        float d = distance(float2(0, 0), i.pos);
        float ang = asin(_LightSize / d);
        float2 left = normalize(rotate(-i.pos, ang));
        float2 right = normalize(rotate(-i.pos, -ang));
        float2 u = normalize(i.edgeA.xy - i.pos);
        float2 v = normalize(i.edgeB.xy - i.pos);
        if (cross2(v, u) < 0)
        {
            float2 t = u;
            u = v;
            v = t;
        }
        float leftLeak = saturate(sign(cross2(u, left))) * acos(dot(left, u));
        float rightLeak = saturate(sign(cross2(right, v))) * acos(dot(right, v));
        float total = acos(dot(right, left));
        float3 color = saturate((leftLeak + rightLeak) / total);
        return fixed4(1 - color, 1.0);
    }
    ENDCG
}
The blend is again additive One One. The appdata and v2f structs carry the two UV channels (TEXCOORD0 and TEXCOORD1) that we filled in earlier, i.e. the two endpoints of the occluding edge. The two inline functions are just 2D rotation and a 2D cross product. The vert function is routine; the difficulty is concentrated in the frag function.
The _LightSize used here is the light's physical radius from before, LightVolumn, i.e. the inner circle's radius. At this point the origin is the light position, and the two edge points are the two p points, already in light space, so everything can be computed directly with no further coordinate transforms. i.pos is the position of an arbitrary point of the shadow mesh being shaded. The initial configuration looks like this:
First the distance from pos to the origin is computed, and ang is the angle between the pos segment and the tangents from pos to the inner circle:
Next, pos is mirrored through the origin, rotated to both sides by ang, and normalized. First mirror pos across the origin and look:
You should now see that the resulting left and right vectors are unit vectors parallel to the two inner-circle tangents. For easier viewing, translate them to the pos point and call them left' and right':
Then each edge point minus pos, normalized, gives the unit vectors from pos toward the two edge points; again translate them to pos and call them u' and v':
The next group of expressions is harder to understand, but keep the goal in mind: we ultimately want an occlusion value, and occlusion = 1 - illumination. Looking closely at the four unit vectors u, v, left and right, it seems the illumination can be expressed as a ratio of the angles between left-right and u-v; this idea is the key.
First the leading factors, saturate(sign(cross2(u, left))) and saturate(sign(cross2(right, v))): they map the cross product's sign to 0 or 1 (0 when the cross is negative, 1 when positive). In other words, when u or v lies on the outside the leak term is zeroed, and when it lies on the inside the leak is the second factor. Now the second factors, acos(dot(left, u)) and acos(dot(right, v)): since all vectors are unit length, these are simply the angles between u and left and between v and right:
Combining the two parts: only when u or v is on the inside does the corresponding leak equal that angle. So when exactly are u and v on the inside?
Clearly, in this configuration u and v are both on the outside and both leaks are 0. Move edgeA a little and look again:
Now u is on the inside, so the left term is no longer 0, and the right side behaves symmetrically. Given the name, "leak" is exactly the amount of light leaking past the occluder to reach pos.
Zoom in to study it:
Of course the illumination should be a 0-1 value, so the sum above has to be divided by the unobstructed illumination. From the code, that is clearly the angle between left and right, computed the same way: the total value. The ratio of the two is the lit fraction we want; but since we need the occlusion value, the output color is 1 minus that ratio.
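The frag logic can be replayed on the CPU to build intuition. The sketch below is a line-by-line transcription into plain Python (the light sits at the origin of light space; the edge and the sample points are my own test values, and the dot product is clamped so acos never sees a value outside [-1, 1]). A point straight behind the edge comes out fully occluded, a point far off to the side fully lit, and a point behind an edge endpoint lands in the penumbra:

```python
import math

def rotate(v, a):
    c, s = math.cos(a), math.sin(a)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def normalize(v):
    m = math.hypot(v[0], v[1])
    return (v[0] / m, v[1] / m)

def cross2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def dot(u, v):
    return min(1.0, max(-1.0, u[0] * v[0] + u[1] * v[1]))

def occlusion(pos, edge_a, edge_b, light_size):
    # mirrors the soft-shadow frag: the angle of light leaking past the edge,
    # divided by the full angle subtended by the light disc, flipped to occlusion
    ang = math.asin(light_size / math.hypot(pos[0], pos[1]))
    neg = (-pos[0], -pos[1])
    left = normalize(rotate(neg, ang))
    right = normalize(rotate(neg, -ang))
    u = normalize((edge_a[0] - pos[0], edge_a[1] - pos[1]))
    v = normalize((edge_b[0] - pos[0], edge_b[1] - pos[1]))
    if cross2(v, u) < 0:
        u, v = v, u
    left_leak = (1.0 if cross2(u, left) > 0 else 0.0) * math.acos(dot(left, u))
    right_leak = (1.0 if cross2(right, v) > 0 else 0.0) * math.acos(dot(right, v))
    total = math.acos(dot(right, left))
    return 1.0 - min(1.0, max(0.0, (left_leak + right_leak) / total))

edge_a, edge_b = (-1.0, 1.0), (1.0, 1.0)           # one occluding edge, light space
print(occlusion((0.0, 3.0), edge_a, edge_b, 0.2))  # straight behind the edge
print(occlusion((5.0, 0.5), edge_a, edge_b, 0.2))  # well off to the side
print(occlusion((3.2, 3.2), edge_a, edge_b, 0.2))  # behind the endpoint: penumbra
```

In the real renderer only points inside the shadow mesh ever run this computation; here we evaluate the formula freely just to see its behavior.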
With that we can produce the shadow map for one light. Now let's see how the lighting itself is computed; back in the Light2DPass file:
The relevant part is the per-light loop in Light2DPass.Render (the full listing appears earlier):
foreach (var light in Light2Dbase.AssetsManager.Assets)
{
    // ... shadow map clearing and light.RenderShadow, covered above ...
    light.RenderLight(cmd, ref data);
}
Each light's illumination is drawn by its RenderLight call, so jump to the RenderLight method of Light2D, the subclass of Light2Dbase:
using UnityEngine;
using System.Collections;
using UnityEngine.Rendering;

namespace Lighting2D
{
    public enum LightType
    {
        Analytical,
        Textured,
    }

    [ExecuteInEditMode]
    public class Light2D : Light2Dbase
    {
        public LightType LightType = LightType.Analytical;
        [Range(-1, 1)]
        public float Attenuation = 0;
        public Color LightColor = Color.white;
        public float Intensity = 1;
        public Texture LightTexture;
        public Mesh Mesh;
        private Material LightMaterial;

        void Reset()
        {
            LightMaterial = new Material(Shader.Find("Lighting2D/2DLight"));
        }

        protected override void Awake()
        {
            Reset();
            var halfRange = LightDistance / 2;
            Mesh = new Mesh();
            Mesh.vertices = new Vector3[]
            {
                new Vector3(-halfRange, -halfRange, 0),
                new Vector3(halfRange, -halfRange, 0),
                new Vector3(-halfRange, halfRange, 0),
                new Vector3(halfRange, halfRange, 0),
            };
            Mesh.triangles = new int[]
            {
                0, 2, 1,
                2, 3, 1,
            };
            Mesh.RecalculateNormals();
            Mesh.uv = new Vector2[]
            {
                new Vector2(0, 0),
                new Vector2(1, 0),
                new Vector2(0, 1),
                new Vector2(1, 1),
            };
            Mesh.MarkDynamic();
        }

        // Update is called once per frame
        void Update()
        {
            UpdateMesh();
        }

        public void UpdateMesh()
        {
            Mesh.vertices = new Vector3[]
            {
                new Vector3(-LightDistance, -LightDistance, 0),
                new Vector3(LightDistance, -LightDistance, 0),
                new Vector3(-LightDistance, LightDistance, 0),
                new Vector3(LightDistance, LightDistance, 0),
            };
        }

        public override void RenderLight(CommandBuffer cmd, ref Light2DRenderingData data)
        {
            cmd.SetRenderTarget(data.lightmap);
            cmd.SetGlobalColor("_Color", LightColor);
            cmd.SetGlobalFloat("_Attenuation", Attenuation);
            cmd.SetGlobalFloat("_Intensity", Intensity);
            cmd.SetGlobalFloat("_LightRange", LightDistance);
            cmd.SetGlobalTexture("_ShadowMap", data.shadowmap);
            cmd.SetGlobalTexture("_Lightcookie", LightTexture);
            var trs = Matrix4x4.TRS(transform.position, transform.rotation, transform.localScale);
            switch (LightType)
            {
                case LightType.Analytical:
                    cmd.DrawMesh(Mesh, trs, LightMaterial, 0, 0);
                    break;
                case LightType.Textured:
                    cmd.DrawMesh(Mesh, trs, LightMaterial, 0, 1);
                    break;
            }
        }
    }
}
The initialization just builds a quad light mesh from LightDistance and refreshes its vertices every frame. Compared with RenderShadow, RenderLight is very simple: it sets a handful of shader parameters, including passing the shadow map in as a texture, and issues the draw. Now open the 2DLight file:
Shader "Lighting2D/2DLight"
{
    Properties
    {
    }
    SubShader
    {
        Tags
        {
            "Queue" = "Transparent"
            "RenderType" = "Transparent"
            "PreviewType" = "Plane"
            "CanUseSpriteAtlas" = "True"
        }
        Lighting Off
        ZWrite Off
        BlendOp Add
        Blend One One

        // #0 analytic light
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            #include "2DLighting.cginc"

            struct appdata_t
            {
                float4 vertex : POSITION;
                float2 texcoord : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 texcoord : TEXCOORD0;
                float4 shadowUV : TEXCOORD2;
            };

            uniform fixed4 _Color;
            uniform float _Attenuation;
            uniform float _Intensity;

            v2f vert(appdata_t v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.texcoord = v.texcoord;
                o.shadowUV = ComputeGrabScreenPos(o.vertex);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                float dist = distance(i.texcoord, float2(.5, .5));
                dist /= .5;
                dist = saturate(dist);
                float illum = 0;
                if (_Attenuation <= -1) // (-inf, -1]
                {
                    illum = 0;
                }
                else if (_Attenuation <= 0) // (-1, 0]
                {
                    float t = 1 / (_Attenuation + 1) - 1;
                    illum = exp(-dist * t) - exp(-t) * dist;
                }
                else if (_Attenuation < 1) // (0, 1)
                {
                    float t = 1 / (1 - _Attenuation) - 1;
                    dist = 1 - dist;
                    illum = 1 - (exp(-dist * t) - exp(-t) * dist);
                }
                else
                {
                    illum = dist >= 1 ? 0 : 1;
                }
                float3 color = illum * _Intensity * _Color;
                i.shadowUV.xy /= i.shadowUV.w;
                color = color * SAMPLE_SHADOW_2D(i.shadowUV);
                return fixed4(color, 1.0);
            }
            ENDCG
        }

        // #1 Textured Light
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata_t
            {
                float4 vertex : POSITION;
                float2 texcoord : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 texcoord : TEXCOORD0;
                float4 shadowUV : TEXCOORD2;
            };

            uniform sampler2D _Lightcookie;
            uniform fixed4 _Color;
            uniform float _Attenuation;
            uniform float _Intensity;
            sampler2D _ShadowMap;

            v2f vert(appdata_t v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.texcoord = v.texcoord;
                o.shadowUV = ComputeGrabScreenPos(o.vertex);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                float3 color = tex2D(_Lightcookie, i.texcoord.xy) * _Intensity * _Color;
                i.shadowUV.xy /= i.shadowUV.w;
                color = color * (1 - tex2D(_ShadowMap, i.shadowUV).r);
                return fixed4(color, 1.0);
            }
            ENDCG
        }
    }
}
Look at the first pass, the analytic (procedural) light. vert computes the clip position and UV, and since the shadow map lives in screen space, it also computes this point's screen-space position for the sampling later. In frag, the pixel's illumination is computed from the distance and the attenuation value, and then the shadow map is sampled. The sampling macro is defined in the 2DLighting.cginc file, shown below: it flips the shadow map from inverted storage back to normal, so shadowed areas become black, and multiplying it straight into the light value yields this light's contribution to the light map.
sampler2D _ShadowMap;
#define SAMPLE_SHADOW_2D(uv) (1 - tex2D(_ShadowMap, (uv).xy).r)
Note that the blend here is also additive One One: where lights overlap, lit areas simply get brighter, while pure black stays pure black, which is rather elegant.
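The piecewise attenuation function in the analytic pass is also worth seeing outside shader code. This is a direct plain-Python transcription (dist is the shader's 0-1 normalized distance from the light's center; the sample attenuation values in the loop are my own). Every setting in (-1, 1] gives full brightness at the center and zero at the edge of the range, with _Attenuation morphing the shape of the falloff:

```python
import math

def illum(dist, attenuation):
    # transcription of the analytic pass: _Attenuation in [-1, 1] bends the
    # falloff between concave (exponential) and convex shapes
    dist = min(1.0, max(0.0, dist))
    if attenuation <= -1:                  # (-inf, -1]
        return 0.0
    elif attenuation <= 0:                 # (-1, 0]
        t = 1 / (attenuation + 1) - 1
        return math.exp(-dist * t) - math.exp(-t) * dist
    elif attenuation < 1:                  # (0, 1)
        t = 1 / (1 - attenuation) - 1
        d = 1 - dist
        return 1 - (math.exp(-d * t) - math.exp(-t) * d)
    else:                                  # hard cutoff at the light range
        return 0.0 if dist >= 1 else 1.0

for a in (-0.5, 0.0, 0.5, 1.0):
    print(a, illum(0.0, a), illum(0.5, a), illum(1.0, a))
```

The `exp(-t) * dist` term is what pulls the curve down to exactly 0 at dist = 1, so the light fades out precisely at its LightDistance instead of being clipped.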
The second pass is very similar, except that the initial light value is sampled from a light cookie texture instead of computed analytically.
Iterating over every light like this produces the full-screen light map, which brings us to the last part of Light2DPass: compositing the computed lighting and shadow onto the final image, i.e. the DeferredLighting shader:
// Upgrade NOTE: replaced '_Object2World' with 'unity_ObjectToWorld'
Shader "Lighting2D/DeferredLighting"
{
    Properties
    {
        _MainTex ("Main Texture", 2D) = "white" {}
    }
    // #0 Deferred lighting
    SubShader
    {
        Tags
        {
            "Queue" = "Transparent"
            "IgnoreProjector" = "True"
            "RenderType" = "Transparent"
            "PreviewType" = "Plane"
            "CanUseSpriteAtlas" = "True"
        }
        Cull Off
        Lighting Off
        ZWrite Off
        Blend DstColor Zero

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 texcoord : TEXCOORD0;
            };

            uniform sampler2D _LightMap;
            uniform float _ExposureLimit;
            uniform float3 _GlobalLight;
            uniform int _UseMSAA;
            uniform int _SceneView;

            v2f vert(appdata_base v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.texcoord = v.texcoord;
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                float2 uv = i.texcoord;
                #if SHADER_API_D3D11
                if (!_SceneView && !_UseMSAA)
                    uv.y = 1 - uv.y;
                #endif
                float3 ambient = _GlobalLight;
                // sample with the (possibly flipped) uv; the original sampled
                // i.texcoord here, which left the flip above as dead code
                float3 light = ambient + tex2D(_LightMap, uv).rgb;
                if (_ExposureLimit >= 0)
                    light = clamp(light, 0, _ExposureLimit);
                return fixed4(light, 1.0);
            }
            ENDCG
        }

        // #1 Gaussian blur
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 texcoord : TEXCOORD0;
            };

            sampler2D _MainTex;
            float2 _MainTex_TexelSize;

            v2f vert(appdata_base IN)
            {
                v2f OUT;
                OUT.vertex = UnityObjectToClipPos(IN.vertex);
                OUT.texcoord = IN.texcoord;
                return OUT;
            }

            inline float3 filter(float2 uv, float2 d)
            {
                return tex2D(_MainTex, uv + d.xy * _MainTex_TexelSize.x).rgb
                     + tex2D(_MainTex, uv - d.xy * _MainTex_TexelSize.x).rgb
                     + tex2D(_MainTex, uv + d.yx * _MainTex_TexelSize.y).rgb
                     + tex2D(_MainTex, uv - d.yx * _MainTex_TexelSize.y).rgb;
            }

            fixed4 frag(v2f IN) : SV_Target
            {
                float2 uv = IN.texcoord;
                float3 col = 0.29234 * tex2D(_MainTex, uv).rgb;
                col += 0.111768 * filter(uv, float2(1, 0));
                col += 0.0499491 * filter(uv, float2(2, 0));
                col += 0.013032 * filter(uv, float2(3, 0));
                col += 0.00198168 * filter(uv, float2(4, 0));
                return fixed4(col, 1.0);
            }
            ENDCG
        }
    }
}
The second pass is an ordinary Gaussian-style blur, which I won't go into; when the shadow map and light map are low resolution, blurring the light map before the final composite still gives fairly good results. Let's focus on the first pass.
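One small check worth doing on that blur pass: the `filter` helper samples each ring four times (±x and ±y), so the center weight plus four times each ring weight should sum to roughly 1 if the blur is to preserve overall brightness. A quick plain-Python check with the weights copied from the shader:

```python
center = 0.29234
rings = [0.111768, 0.0499491, 0.013032, 0.00198168]
total = center + 4 * sum(rings)   # each ring weight is applied to 4 taps
print(total)                      # close to 1, so brightness is preserved
```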
Looking at it directly: it just samples the light map and applies a few extra parameters, yet it never samples the original image. Now look at the blend mode: DstColor Zero, meaning the computed light value is multiplied straight onto whatever the camera already rendered; the blend stage replaces the sampling, which is quite clever. With that, we have a picture with both lighting and shadow.
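The effect of Blend DstColor Zero can be written out directly (plain Python sketch; one RGB triple stands in for a pixel, and the input values are my own samples): the shader's output is multiplied component-wise onto the destination, which is exactly a light-map modulation of the scene:

```python
def composite(scene_rgb, light_rgb, global_light, exposure_limit):
    # fragment output: ambient + light map, clamped by the exposure limit
    src = [min(g + l, exposure_limit) for g, l in zip(global_light, light_rgb)]
    # Blend DstColor Zero: result = src * dst (dst = already-rendered scene)
    return [s * d for s, d in zip(src, scene_rgb)]

scene = [1.0, 0.5, 0.25]                                        # camera image before lighting
print(composite(scene, [0.0, 0.0, 0.0], [0.2, 0.2, 0.2], 1.0))  # unlit: ambient only
print(composite(scene, [1.0, 1.0, 1.0], [0.2, 0.2, 0.2], 1.0))  # fully lit, clamped
```

Unlit areas collapse to the scene times the ambient level, while fully lit areas (after the exposure clamp) pass the scene through unchanged.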
Finally, Light2DPass releases the temporary render textures it no longer needs, and Light2DRenderFeature submits the recorded command buffer to the context; with that, the result shows up on screen:
To summarize:
The principle itself is fairly simple.
The geometry code is hard to read.
The blend modes are used very cleverly.
That said, this system is still at an early stage: compared with the official 2D system it lacks many features, and it contains some bugs and dead code. To fit my project's needs I plan to fix the bugs, remove the dead code, try to improve performance, and add features similar to the official 2D system's, both to improve my own skills and to serve the project. Stay tuned.
21.9.27.01:14



