Unity Rendering 5: Multiple Lights study notes

To add support for multiple lights to our shader, we'll have to add more passes to it. These passes will end up containing nearly identical code. To prevent code duplication, we're going to move our shader code to an include file.

Unity doesn't have a menu option to create a shader include file, so you'll have to go to the asset folder of your project manually, via the file browser of your operating system. Create a My Lighting.cginc plain text file in the same folder as your lighting shader. You could do so by duplicating our shader, renaming it, and then clearing its contents.

// An include guard prevents the file's contents from being defined twice
// when it's included by more than one pass.
#if !defined(MY_LIGHTING_INCLUDED)
#define MY_LIGHTING_INCLUDED

#include "UnityPBSLighting.cginc"

…

#endif

Even though we have two directional lights, there is no visual difference. We can see their light independently, by having only one active at a time. But when both are active, only the main light has any effect.

We see only a single light, because our shader only computes one light. The forward base pass is for the main directional light. To render an additional light, we need an additional pass.

SubShader {

	Pass {
		// Base pass, for the main directional light.
		Tags {
			"LightMode" = "ForwardBase"
		}

		CGPROGRAM

		#pragma target 3.0

		#pragma vertex MyVertexProgram
		#pragma fragment MyFragmentProgram

		#include "My Lighting.cginc"

		ENDCG
	}

	Pass {
		// Additive pass, run once per additional light.
		Tags {
			"LightMode" = "ForwardAdd"
		}

		CGPROGRAM

		#pragma target 3.0

		#pragma vertex MyVertexProgram
		#pragma fragment MyFragmentProgram

		#include "My Lighting.cginc"

		ENDCG
	}
}

We now see the secondary light, instead of the main light. Unity renders both, but the additive pass ends up overwriting the results of the base pass. This is wrong. The additive pass has to add its results to the base pass, not replace it. We can instruct the GPU to do this, by changing the blend mode of the additive pass.
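In ShaderLab that's a Blend statement at the top of the additive pass. The default effectively acts like One Zero, which replaces whatever is in the frame buffer; for additive blending we want One One:

Pass {
	Tags {
		"LightMode" = "ForwardAdd"
	}

	// result = 1 * this pass's color + 1 * what's already in the frame buffer
	Blend One One

	…
}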


The first time an object is rendered, the GPU checks whether a fragment ends up in front of anything else that's already been rendered to that pixel. This distance information is stored in the GPU's depth buffer, also known as the Z buffer. So each pixel has both a color and a depth. This depth represents the distance to the nearest surface from the camera, per pixel. It's like sonar.

If there's nothing in front of the fragment that we want to render, then it's currently the surface closest to the camera. The GPU goes ahead and runs the fragment program. It overwrites the pixel's color and also records its new depth.

If the fragment ends up further away than what's already there, then there's something in front of it. In that case, we cannot see it, and it won't be rendered at all.

The depth-buffer approach only works with fully opaque objects. Semitransparent objects require a different approach.

This process is repeated for the secondary light, except now we're adding to what's already there. Once again, the fragment program is only run if nothing is in front of what we're rendering. When it does run, we end up at the exact same depth as the previous pass, because it's the same object. So we would record the exact same depth value again.

Because writing to the depth buffer twice is not necessary, let's disable it. This is done with the ZWrite Off shader statement.
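So the top of the additive pass ends up as:

Blend One One
// The base pass already wrote this object's depth; don't write it again.
ZWrite Off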

As soon as the second pass is enabled, dynamic batching stops working entirely.

The tutorial's author doesn't know why you'd get an extra batch either. I tried it out: it seems the paid version of Unity doesn't get the extra batch, while an empty project in the free version of Unity has 4 vertices and 2 faces that need an extra batch to move those vertices to the GPU.

Preferably, opaque objects close to the camera are drawn first. This front-to-back draw order is efficient, because thanks to the depth buffer, hidden fragments will be skipped. If we drew back-to-front instead, we'd keep overwriting pixels of more distant objects. This is known as overdraw, and should be avoided as much as possible.

Unity orders objects front-to-back, but that's not the only thing that determines the draw order. Changing GPU state is also expensive, and should be minimized too. This is done by rendering similar objects together. For example, Unity prefers to render the spheres and cubes in groups, because then it doesn't have to switch between meshes as often. Likewise, Unity prefers to group objects that use the same material.


You'll clearly see when objects come in and out of range, as they'll suddenly switch between being lit and unlit. This happens because the light would still be visible beyond the range that we chose. To fix this, we have to ensure that the attenuation and range are synchronized.

Realistically, light has no maximum range. So any range that we set is artistic liberty. Our objective then becomes to make sure that there is no sudden light transition when objects move out of range. This requires that the attenuation factor reaches zero at maximum range.
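The Catlike Coding tutorial these notes follow does this by letting Unity's AutoLight macros compute the attenuation, rather than dividing by squared distance manually. A sketch of the include file's light-construction function, assuming the tutorial's Interpolators struct carries worldPos and normal (those names are from the series, not Unity itself):

#include "AutoLight.cginc"

UnityLight CreateLight (Interpolators i) {
	UnityLight light;

	#if defined(POINT)
		light.dir = normalize(_WorldSpaceLightPos0.xyz - i.worldPos);
	#else
		light.dir = _WorldSpaceLightPos0.xyz;
	#endif

	// The macro samples an attenuation lookup texture that reaches
	// zero exactly at the light's configured range.
	UNITY_LIGHT_ATTENUATION(attenuation, 0, i.worldPos);

	light.color = _LightColor0.rgb * attenuation;
	light.ndotl = DotClamped(i.normal, light.dir);
	return light;
}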


We want to create two shader variants for our additive pass. One for directional lights, and one for point lights. We do this by adding a multi-compile pragma statement to the pass. This statement defines a list of keywords. Unity will create multiple shader variants for us, each defining one of those keywords.

Each variant is a separate shader. They are compiled individually. The only difference between them is which keywords are defined.

In this case, we need DIRECTIONAL and POINT, and we should no longer define POINT ourselves.
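In the additive pass, that looks like this, replacing the manual POINT define:

CGPROGRAM

#pragma target 3.0

// Unity compiles one shader variant per keyword in this list.
#pragma multi_compile DIRECTIONAL POINT

#pragma vertex MyVertexProgram
#pragma fragment MyFragmentProgram

#include "My Lighting.cginc"

ENDCG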

This works for our two additive pass variants. It also works for the base pass, because it doesn't define POINT.

Unity decides which variant to use based on the current light and the shader variant keywords. When rendering a directional light, it uses the DIRECTIONAL variant. When rendering a point light, it uses the POINT variant. And when there isn't a match, it just picks the first variant from the list.


The default spotlight mask texture is a blurry circle. But you could use any square texture, as long as it drops to zero at its edges. These textures are known as spotlight cookies. This name is derived from cucoloris, which refers to a film, theatre, or photography prop that adds shadows to a light.

The alpha channel of cookies is used to mask the light. The other channels don't matter. Here's an example texture, which has all four channels set to the same value.
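Under the hood, Unity's AutoLight code masks the attenuation with the cookie's alpha. A simplified sketch of its spot-cookie handling (the real macros also guard against fragments behind the light, and older Unity versions call the matrix _LightMatrix0 instead of unity_WorldToLight):

// Project the fragment into light space, then sample the cookie's alpha.
float4 lightCoord = mul(unity_WorldToLight, float4(i.worldPos, 1));
attenuation *= tex2D(_LightTexture0, lightCoord.xy / lightCoord.w + 0.5).w;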

Directional lights can have cookies too. These cookies are tiled. So they don't need to fade to zero at their edge. Instead, they have to tile seamlessly.

Cookies for directional lights have a size. This determines their visual size, which in turn affects how quickly they tile. The default is 10, but a small scene requires a much smaller scale, like 1.

To keep the number of draw calls in check, you can limit the Pixel Light Count via the quality settings. This defines the maximum number of pixel lights used per object. Lights are referred to as pixel lights when they are computed per fragment.

Higher quality levels allow more pixel lights. The default of the highest quality level is four pixel lights.

Which lights are rendered is different for each object. Unity orders lights from most to least significant, based on their relative intensity and distance. The lights that are expected to contribute the least are discarded first.


Rendering a light per vertex means that you perform the lighting calculations in the vertex program. The resulting color is then interpolated and passed to the fragment program. This is so cheap that Unity includes such lights in the base pass. When this happens, Unity looks for a base pass shader variant with the VERTEXLIGHT_ON keyword.

Vertex lighting is only supported for point lights. So directional lights and spot lights cannot be vertex lights.

To use vertex lights, we have to add a multi-compile statement to our base pass. It only needs a single keyword, VERTEXLIGHT_ON. The other option is simply no keyword at all. To indicate that, we have to use _.
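In the base pass:

// Either no keyword at all, or vertex lights enabled.
#pragma multi_compile _ VERTEXLIGHT_ON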

To include all four vertex lights that Unity supports, we have to perform the same vertex-light computations four times, and add the results together. Instead of writing all the code ourselves, we can use the Shade4PointLights function, which is defined in UnityCG. We have to feed it the position vectors, light colors, attenuation factors, plus the vertex position and normal.
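A sketch of that vertex-program helper, following the tutorial's naming (ComputeVertexLightColor, vertexLightColor, and Interpolators are names from the series; Shade4PointLights and the unity_4Light… variables are Unity built-ins):

void ComputeVertexLightColor (inout Interpolators i) {
	#if defined(VERTEXLIGHT_ON)
		// Evaluate up to four point lights per vertex; the result is
		// interpolated and added to the indirect light in the fragment program.
		i.vertexLightColor = Shade4PointLights(
			unity_4LightPosX0, unity_4LightPosY0, unity_4LightPosZ0,
			unity_LightColor[0].rgb, unity_LightColor[1].rgb,
			unity_LightColor[2].rgb, unity_LightColor[3].rgb,
			unity_4LightAtten0, i.worldPos, i.normal
		);
	#endif
}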

Now up to four lights will be included as vertex lights, if an object ends up with more lights than the pixel light count allows. Actually, Unity tries to hide the transitions between pixel and vertex lights by including one light as both a pixel and a vertex light. That light is included twice, with varying intensity for its vertex and pixel versions.

By default, Unity decides which lights become pixel lights. You can override this by changing a light's Render Mode. Important lights are always rendered as pixel lights, regardless of the limit. Lights that are not important are never rendered as pixel lights.

Spherical Harmonics

Spherical Harmonic Lighting, as defined in 2003 by Robin Green of Sony Computer Entertainment, "is a technique for calculating the lighting of 3D models from image-based lighting (IBL) sources, allowing us to capture, relight, and display global-illumination-style images in real time. It was introduced to the public as a fast, realistic lighting technique by Sloan, Kautz, and Snyder in a paper at Siggraph 2002."

First, we'll only define the function from the point of view of the object's local origin. This is fine for lighting conditions that don't change much along the surface of the object. This is true for small objects, and lights that are either weak or far away from the object. Fortunately, this is typically the case for lights that don't qualify for pixel or vertex light status.

Second, we also have to approximate the function itself. You can decompose any continuous function into multiple functions of different frequencies. These are known as bands. For an arbitrary function, you might need an infinite number of bands to do this.
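In symbols, the approximation truncates the series after the first n bands. This is the standard spherical-harmonics expansion, not notation from the tutorial itself:

f(s) \approx \sum_{l=0}^{n-1} \sum_{m=-l}^{l} c_l^m \, Y_l^m(s)

Here the Y_l^m are the spherical-harmonics basis functions and the c_l^m are the coefficients of the function being approximated. Unity keeps the first three bands, which is nine coefficients per color channel; that's where the name of its ShadeSH9 function comes from.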

Now activate a bunch of lights. Make sure that there are enough so all pixel and vertex lights are used up. The rest are added to the spherical harmonics. Again, Unity will split a light to blend the transition.

Surprise! Our objects aren't black anymore. They have picked up the ambient color. Unity uses spherical harmonics to add the scene's ambient color to objects.
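In the include file this ends up as one extra contribution to the indirect diffuse light. A sketch, assuming the tutorial's FORWARD_BASE_PASS define and vertexLightColor interpolator (both names from the series); ShadeSH9, from UnityCG.cginc, evaluates the scene's spherical-harmonics coefficients for a direction:

UnityIndirect CreateIndirectLight (Interpolators i) {
	UnityIndirect indirectLight;
	indirectLight.diffuse = 0;
	indirectLight.specular = 0;

	#if defined(VERTEXLIGHT_ON)
		indirectLight.diffuse = i.vertexLightColor;
	#endif

	#if defined(FORWARD_BASE_PASS)
		// Evaluate the SH coefficients for the world-space normal;
		// this picks up the ambient color and all SH lights at once.
		indirectLight.diffuse += max(0, ShadeSH9(float4(i.normal, 1)));
	#endif

	return indirectLight;
}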