// This script is a helper for the artists to re-synchronise all layered materials
[MenuItem("HDRenderPipeline/Synchronize all Layered materials")]
static void SynchronizeAllLayeredMaterial()
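// A minimal sketch of what such a helper can look like (names below are assumptions, not
// the shipped implementation): walk every Material asset that uses the LayeredLit shader
// and ask the LayeredLit inspector code to re-synchronise its layers.
static void SynchronizeAllLayeredMaterialSketch()
{
    foreach (var guid in UnityEditor.AssetDatabase.FindAssets("t:Material"))
    {
        string path = UnityEditor.AssetDatabase.GUIDToAssetPath(guid);
        var mat = UnityEditor.AssetDatabase.LoadAssetAtPath<UnityEngine.Material>(path);
        if (mat == null || mat.shader == null || mat.shader.name != "HDRenderPipeline/LayeredLit") // assumed shader name
            continue;

        LayeredLitGUI.SynchronizeAllLayers(mat); // assumed static helper exposed by the LayeredLit material editor
        UnityEditor.EditorUtility.SetDirty(mat);
    }
}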
// In case the shader code has changed and the inspector has been updated with new kinds of keywords, we need to regenerate the set of keywords used by the material.
// This script removes all keywords from a material and triggers the inspector, which re-sets up all the used keywords.
// It requires that the material's inspector expose a static function that updates all keywords based on the material properties.
[MenuItem("HDRenderPipeline/Reset all materials keywords")]
[MenuItem("HDRenderPipeline/Test/Reset all materials keywords")]
static void ResetAllMaterialKeywords()
{
    try { /* ... */ }
    finally { /* ... */ }
}
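// Hedged sketch of the mechanism described above (helper names are assumptions): strip every
// keyword from the material, then let the static setup function of the material's inspector
// re-enable the keywords required by the current property values.
static void ResetMaterialKeywordsSketch(UnityEngine.Material mat)
{
    foreach (var keyword in mat.shaderKeywords)
        mat.DisableKeyword(keyword); // remove the stale keyword set

    BaseLitGUI.SetupMaterialKeywordsAndPass(mat); // assumed static entry point of the material inspector
    UnityEditor.EditorUtility.SetDirty(mat);
}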
// Function used only to check the performance of data with and without tessellation
// We currently support two different SSS modes: Jimenez (with a 2-Gaussian profile) and Disney.
// We have added the ability to switch between them for subsurface scattering, but for transmittance this is trickier, as it would require adding
// shader variants for the forward, GBuffer and deferred shaders. We want to avoid this.
// So for transmittance we use the Disney profile formulation (which we know is more correct) in both cases, and in the Jimenez case we hack the parameters with 2-Gaussian parameters (ideally we should fit them, but we haven't found a good fit) so that it approximately matches.
// Note: Jimenez SSS uses centimeters whereas Disney uses millimeters, which makes it inconsistent to compare the models side by side.
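// Illustrative sketch of the unit mismatch mentioned above (constant and function names are
// made up for the example): Jimenez radii are expressed in centimeters, Disney radii in
// millimeters, so a factor of 10 is needed before comparing the two models side by side.
const float k_CentimetersToMillimeters = 10.0f;
static float JimenezRadiusToDisneyUnits(float radiusCm)
{
    return radiusCm * k_CentimetersToMillimeters;
}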
// TODO: Find a correct place to bind these material textures
// We have to bind the material-specific global parameters in this mode
}
else
{
RenderVelocity(m_CullResults, hdCamera, renderContext); // Note: we may have to render velocity earlier if we do temporal AO, temporal volumetrics, etc. That would mean we don't take forward opaque into account in the case of deferred rendering?
// TODO: Check with VFX team.
// Rendering distortion here of course has a lot of artifacts.
// The SSSSS algorithm needs to know which pixels contribute to SSS and which don't. We could use the stencil for that, but it would increase the cost of SSSSS.
// A simpler solution is to add a slight contribution here that isn't visible (we chose the fp16 minimum, which is also the fp11 and fp10 minimum).
// The SSSSS algorithm will then check whether the diffuse lighting is black and discard the pixel if that is the case.
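// Numeric side note for the trick above (sketch, not engine code): the "slight contribution"
// is the smallest positive normalized fp16 value, 2^-14 = 6.103515625e-05, which is also the
// smallest normalized value of the fp11/fp10 formats, so the bias survives whichever of these
// lighting buffer formats is used.
const float k_SmallestNormalizedHalf = 6.103515625e-05f; // 2^-14
// The SSSSS pass can then treat "diffuse lighting is exactly black" as "no SSS on this pixel".
static bool ContributesToSss(UnityEngine.Color diffuseLighting)
{
    return diffuseLighting.maxColorComponent > 0.0f;
}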
#define PROP_DECL(type, name) type name##0, name##1, name##2, name##3
// Samplers are shared per texture type inside a layered material, but we need to support the case where a particular layer has no texture, so we take the sampler of the first available texture as the shared one.
// Caution: the C# code in BaseLitUI.cs calls LightmapEmissionFlagsProperty(), which assumes that an "_EmissionColor" property exists.
// Set of user variables
float4 _BaseColor;
TEXTURE2D(_BaseColorMap);
SAMPLER2D(sampler_BaseColorMap);
TEXTURE2D(_MaskMap);
SAMPLER2D(sampler_MaskMap);
TEXTURE2D(_SpecularOcclusionMap);
SAMPLER2D(sampler_SpecularOcclusionMap);
TEXTURE2D(_NormalMap);
SAMPLER2D(sampler_NormalMap);
TEXTURE2D(_NormalMapOS);
SAMPLER2D(sampler_NormalMapOS);
TEXTURE2D(_DetailMask);
SAMPLER2D(sampler_DetailMask);
TEXTURE2D(_DetailMap);
SAMPLER2D(sampler_DetailMap);
TEXTURE2D(_HeightMap);
SAMPLER2D(sampler_HeightMap);
TEXTURE2D(_TangentMap);
SAMPLER2D(sampler_TangentMap);
TEXTURE2D(_TangentMapOS);
SAMPLER2D(sampler_TangentMapOS);
TEXTURE2D(_AnisotropyMap);
SAMPLER2D(sampler_AnisotropyMap);
TEXTURE2D(_SubsurfaceRadiusMap);
SAMPLER2D(sampler_SubsurfaceRadiusMap);
TEXTURE2D(_ThicknessMap);
SAMPLER2D(sampler_ThicknessMap);
TEXTURE2D(_SpecularColorMap);
SAMPLER2D(sampler_SpecularColorMap);
float _TexWorldScale;
float4 _UVMappingMask;
// TODO: It is currently not possible to use this feature; we need to provide previousPositionCS to the vertex shader as part of the Attributes for the GBuffer pass.
// TODO: How to enable this feature only on meshes that actually require it, like skinned and moving meshes (others can be handled with depth reprojection, but TAA can be an issue)?
// TODO: It is not possible to use the VelocityInGBuffer feature yet. This feature allows rendering motion vectors during the GBuffer pass. However, Unity currently has limitations that forbid doing that:
// 1) Currently previousPositionCS is provided to the vertex shader with a hard-coded NORMAL semantic (in the vertex declaration - see MeshRenderingData.cpp, "pSecondaryFormat = gMotionVectorRenderFormat.GetVertexFormat();"), meaning it will overwrite the normal.
// 2) All currently available semantics (see ShaderChannelMask) are used in our Lit shader. This means just changing the semantic is not enough; Unity needs to unlock other Texcoord semantics.
// 3) Even when this is solved (i.e. previousPositionCS is moved to a free attribute semantic), Unity only supports one pSecondaryFormat. This means that if we have a vertex color instance stream and motion vectors, the motion vectors will overwrite the vertex color stream. See MeshRenderingData.cpp.
// All this could be fixed with a new Mesh API, which is not ready yet. Note that this feature only affects animated meshes (vertex or skinned), as others use depth reprojection.
// TODO: not working yet, waiting for UINT16 RT format support