using System.Collections.Generic;
|
|
using UnityEngine.Rendering;
|
|
using System;
|
|
using System.Linq;
|
|
using UnityEngine.Rendering.PostProcessing;
|
|
using UnityEngine.Experimental.Rendering.HDPipeline.TilePass;
|
|
|
|
#if UNITY_EDITOR
|
|
using UnityEditor;
|
|
#endif
|
|
|
|
namespace UnityEngine.Experimental.Rendering.HDPipeline
|
|
{
|
|
[Serializable]
|
|
public class RenderingSettings
|
|
{
|
|
public bool useForwardRenderingOnly = false; // TODO: Currently there is no way to strip the extra forward shaders generated by the shader compiler, so we can switch dynamically.
public bool useDepthPrepass = false;

// We have to fall back to forward-only rendering when the scene view uses wireframe rendering mode,
// as rendering everything in wireframe + deferred does not play well together.
|
|
public bool ShouldUseForwardRenderingOnly()
|
|
{
|
|
return useForwardRenderingOnly || GL.wireframe;
|
|
}
|
|
}
|
|
|
|
// This holds all the matrix data we need for rendering, including data from the previous frame
|
|
// (which is the main reason why we need to keep them around for a minimum of one frame).
|
|
// HDCameras are automatically created & updated from a source camera and will be destroyed if
|
|
// not used during a frame.
|
|
public class HDCamera
|
|
{
|
|
public Matrix4x4 viewMatrix;
|
|
public Matrix4x4 projMatrix;
|
|
public Matrix4x4 nonJitteredProjMatrix;
|
|
public Vector4 screenSize;
|
|
public Vector4[] frustumPlaneEquations;
|
|
public Camera camera;
|
|
|
|
public Matrix4x4 viewProjMatrix
|
|
{
|
|
get { return projMatrix * viewMatrix; }
|
|
}
|
|
|
|
public Matrix4x4 nonJitteredViewProjMatrix
|
|
{
|
|
get { return nonJitteredProjMatrix * viewMatrix; }
|
|
}
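// Note: when TAA is disabled, Update() assigns the same projection to projMatrix and
// nonJitteredProjMatrix, so viewProjMatrix and nonJitteredViewProjMatrix are then identical.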
|
|
|
|
public bool isFirstFrame
|
|
{
|
|
get { return m_FirstFrame; }
|
|
}
|
|
|
|
public Vector4 invProjParam
|
|
{
|
|
// Ref: An Efficient Depth Linearization Method for Oblique View Frustums, Eq. 6.
|
|
get { var p = projMatrix; return new Vector4(p.m20 / (p.m00 * p.m23), p.m21 / (p.m11 * p.m23), -1.0f / p.m23, (-p.m22 + p.m20 * p.m02 / p.m00 + p.m21 * p.m12 / p.m11) / p.m23); }
|
|
}
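// Note: these four terms pack what is needed to recover a linear view-space depth from a
// device depth value (per the paper referenced above); the vector is pushed to shaders as
// _InvProjParam in SetupGlobalParams() below.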
|
|
|
|
// View-projection matrix from the previous frame.
|
|
public Matrix4x4 prevViewProjMatrix;
|
|
|
|
// We need to keep track of these when camera relative rendering is enabled so we can take
|
|
// camera translation into account when generating camera motion vectors
|
|
public Vector3 cameraPos;
|
|
public Vector3 prevCameraPos;
|
|
|
|
// The only way to reliably keep track of a frame change right now is to compare the frame
|
|
// count Unity gives us. We need this as a single camera could be rendered several times per
|
|
// frame and some matrices only have to be computed once. Realistically this shouldn't
|
|
// happen, but you never know...
|
|
int m_LastFrameActive;
|
|
|
|
// Always true for cameras that just got added to the pool - needed for previous matrices to
|
|
// avoid one-frame jumps/hiccups with temporal effects (motion blur, TAA...)
|
|
bool m_FirstFrame;
|
|
|
|
public HDCamera(Camera cam)
|
|
{
|
|
camera = cam;
|
|
frustumPlaneEquations = new Vector4[6];
|
|
Reset();
|
|
}
|
|
|
|
public void Update(PostProcessLayer postProcessLayer)
|
|
{
|
|
// If TAA is enabled projMatrix will hold a jittered projection matrix. The original,
|
|
// non-jittered projection matrix can be accessed via nonJitteredProjMatrix.
|
|
bool taaEnabled = camera.cameraType == CameraType.Game
|
|
&& Utilities.IsTemporalAntialiasingActive(postProcessLayer);
|
|
|
|
Matrix4x4 nonJitteredCameraProj = camera.projectionMatrix;
|
|
Matrix4x4 cameraProj = taaEnabled
|
|
? postProcessLayer.temporalAntialiasing.GetJitteredProjectionMatrix(camera)
|
|
: nonJitteredCameraProj;
|
|
|
|
// The projection matrix used in shaders is massaged a bit to work across all platforms
// (different Z value ranges, etc.)
Matrix4x4 gpuProj = GL.GetGPUProjectionMatrix(cameraProj, true); // Had to change this from 'false'
|
|
Matrix4x4 gpuView = camera.worldToCameraMatrix;
|
|
Matrix4x4 gpuNonJitteredProj = GL.GetGPUProjectionMatrix(nonJitteredCameraProj, true);
|
|
|
|
Vector3 pos = camera.transform.position;
|
|
|
|
if (ShaderConfig.s_CameraRelativeRendering != 0)
|
|
{
|
|
// Zero out the translation component.
|
|
gpuView.SetColumn(3, new Vector4(0, 0, 0, 1));
|
|
}
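// Zeroing the view translation is what makes rendering camera-relative: geometry is
// transformed as if the camera sat at the world origin, which reduces floating-point
// precision issues far from the origin. The removed translation is accounted for through
// cameraPos / prevCameraPos (see the camera motion vectors note above).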
|
|
|
|
Matrix4x4 gpuVP = gpuNonJitteredProj * gpuView;
|
|
|
|
// A camera can be rendered multiple times per frame, so only update the previous view-projection matrix and position if needed
|
|
if (m_LastFrameActive != Time.frameCount)
|
|
{
|
|
if (m_FirstFrame)
|
|
{
|
|
prevCameraPos = pos;
|
|
prevViewProjMatrix = gpuVP;
|
|
}
|
|
else
|
|
{
|
|
prevCameraPos = cameraPos;
|
|
prevViewProjMatrix = nonJitteredViewProjMatrix;
|
|
}
|
|
|
|
m_FirstFrame = false;
|
|
}
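// Note: the previous view-projection intentionally uses the non-jittered matrices, so TAA
// jitter does not leak into the camera motion vectors derived from prevViewProjMatrix.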
|
|
|
|
viewMatrix = gpuView;
|
|
projMatrix = gpuProj;
|
|
nonJitteredProjMatrix = gpuNonJitteredProj;
|
|
cameraPos = pos;
|
|
screenSize = new Vector4(camera.pixelWidth, camera.pixelHeight, 1.0f / camera.pixelWidth, 1.0f / camera.pixelHeight);
|
|
|
|
Plane[] planes = GeometryUtility.CalculateFrustumPlanes(viewProjMatrix);
|
|
|
|
for (int i = 0; i < 6; i++)
|
|
{
|
|
frustumPlaneEquations[i] = new Vector4(planes[i].normal.x, planes[i].normal.y, planes[i].normal.z, planes[i].distance);
|
|
}
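// Each plane is packed as (normal.xyz, distance); a world-space point p lies inside the
// frustum when dot(plane.xyz, p) + plane.w >= 0 for all six planes.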
|
|
|
|
m_LastFrameActive = Time.frameCount;
|
|
}
|
|
|
|
public void Reset()
|
|
{
|
|
m_LastFrameActive = -1;
|
|
m_FirstFrame = true;
|
|
}
|
|
|
|
static Dictionary<Camera, HDCamera> m_Cameras = new Dictionary<Camera, HDCamera>();
|
|
static List<Camera> m_Cleanup = new List<Camera>(); // Recycled to reduce GC pressure
|
|
|
|
// Grab the HDCamera tied to a given Camera and update it.
|
|
public static HDCamera Get(Camera camera, PostProcessLayer postProcessLayer)
|
|
{
|
|
HDCamera hdcam;
|
|
|
|
if (!m_Cameras.TryGetValue(camera, out hdcam))
|
|
{
|
|
hdcam = new HDCamera(camera);
|
|
m_Cameras.Add(camera, hdcam);
|
|
}
|
|
|
|
hdcam.Update(postProcessLayer);
|
|
return hdcam;
|
|
}
|
|
|
|
// Look for any cameras that haven't been used during the last frame and remove them from the pool.
|
|
public static void CleanUnused()
|
|
{
|
|
int frameCheck = Time.frameCount - 1;
|
|
|
|
foreach (var kvp in m_Cameras)
|
|
{
|
|
if (kvp.Value.m_LastFrameActive != frameCheck)
|
|
m_Cleanup.Add(kvp.Key);
|
|
}
|
|
|
|
foreach (var cam in m_Cleanup)
|
|
m_Cameras.Remove(cam);
|
|
|
|
m_Cleanup.Clear();
|
|
}
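// Note: an HDCamera therefore has to be rendered every frame to stay in the pool; any camera
// whose m_LastFrameActive is older than the previous frame is evicted here.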
|
|
|
|
public void SetupGlobalParams(CommandBuffer cmd)
|
|
{
|
|
cmd.SetGlobalMatrix(HDShaderIDs._ViewMatrix, viewMatrix);
|
|
cmd.SetGlobalMatrix(HDShaderIDs._InvViewMatrix, viewMatrix.inverse);
|
|
cmd.SetGlobalMatrix(HDShaderIDs._ProjMatrix, projMatrix);
|
|
cmd.SetGlobalMatrix(HDShaderIDs._InvProjMatrix, projMatrix.inverse);
|
|
cmd.SetGlobalMatrix(HDShaderIDs._NonJitteredViewProjMatrix, nonJitteredViewProjMatrix);
|
|
cmd.SetGlobalMatrix(HDShaderIDs._ViewProjMatrix, viewProjMatrix);
|
|
cmd.SetGlobalMatrix(HDShaderIDs._InvViewProjMatrix, viewProjMatrix.inverse);
|
|
cmd.SetGlobalVector(HDShaderIDs._InvProjParam, invProjParam);
|
|
cmd.SetGlobalVector(HDShaderIDs._ScreenSize, screenSize);
|
|
cmd.SetGlobalMatrix(HDShaderIDs._PrevViewProjMatrix, prevViewProjMatrix);
|
|
cmd.SetGlobalVectorArray(HDShaderIDs._FrustumPlanes, frustumPlaneEquations);
|
|
}
|
|
|
|
// Does not modify global settings. Used for shadows, low res. rendering, etc.
|
|
public void OverrideGlobalParams(Material material)
|
|
{
|
|
material.SetMatrix(HDShaderIDs._ViewMatrix, viewMatrix);
|
|
material.SetMatrix(HDShaderIDs._InvViewMatrix, viewMatrix.inverse);
|
|
material.SetMatrix(HDShaderIDs._ProjMatrix, projMatrix);
|
|
material.SetMatrix(HDShaderIDs._InvProjMatrix, projMatrix.inverse);
|
|
material.SetMatrix(HDShaderIDs._NonJitteredViewProjMatrix, nonJitteredViewProjMatrix);
|
|
material.SetMatrix(HDShaderIDs._ViewProjMatrix, viewProjMatrix);
|
|
material.SetMatrix(HDShaderIDs._InvViewProjMatrix, viewProjMatrix.inverse);
|
|
material.SetVector(HDShaderIDs._InvProjParam, invProjParam);
|
|
material.SetVector(HDShaderIDs._ScreenSize, screenSize);
|
|
material.SetMatrix(HDShaderIDs._PrevViewProjMatrix, prevViewProjMatrix);
|
|
material.SetVectorArray(HDShaderIDs._FrustumPlanes, frustumPlaneEquations);
|
|
}
|
|
|
|
public void SetupComputeShader(ComputeShader cs, CommandBuffer cmd)
|
|
{
|
|
cmd.SetComputeMatrixParam(cs, HDShaderIDs._ViewMatrix, viewMatrix);
|
|
cmd.SetComputeMatrixParam(cs, HDShaderIDs._InvViewMatrix, viewMatrix.inverse);
|
|
cmd.SetComputeMatrixParam(cs, HDShaderIDs._ProjMatrix, projMatrix);
|
|
cmd.SetComputeMatrixParam(cs, HDShaderIDs._InvProjMatrix, projMatrix.inverse);
|
|
cmd.SetComputeMatrixParam(cs, HDShaderIDs._NonJitteredViewProjMatrix, nonJitteredViewProjMatrix);
|
|
cmd.SetComputeMatrixParam(cs, HDShaderIDs._ViewProjMatrix, viewProjMatrix);
|
|
cmd.SetComputeMatrixParam(cs, HDShaderIDs._InvViewProjMatrix, viewProjMatrix.inverse);
|
|
cmd.SetComputeVectorParam(cs, HDShaderIDs._InvProjParam, invProjParam);
|
|
cmd.SetComputeVectorParam(cs, HDShaderIDs._ScreenSize, screenSize);
|
|
cmd.SetComputeMatrixParam(cs, HDShaderIDs._PrevViewProjMatrix, prevViewProjMatrix);
|
|
cmd.SetComputeVectorArrayParam(cs, HDShaderIDs._FrustumPlanes, frustumPlaneEquations);
|
|
// Copy values set by Unity which are not configured in scripts.
|
|
cmd.SetComputeVectorParam(cs, HDShaderIDs.unity_OrthoParams, Shader.GetGlobalVector(HDShaderIDs.unity_OrthoParams));
|
|
cmd.SetComputeVectorParam(cs, HDShaderIDs._ProjectionParams, Shader.GetGlobalVector(HDShaderIDs._ProjectionParams));
|
|
cmd.SetComputeVectorParam(cs, HDShaderIDs._ScreenParams, Shader.GetGlobalVector(HDShaderIDs._ScreenParams));
|
|
cmd.SetComputeVectorParam(cs, HDShaderIDs._ZBufferParams, Shader.GetGlobalVector(HDShaderIDs._ZBufferParams));
|
|
cmd.SetComputeVectorParam(cs, HDShaderIDs._WorldSpaceCameraPos, Shader.GetGlobalVector(HDShaderIDs._WorldSpaceCameraPos));
|
|
}
|
|
}
|
|
|
|
public class GBufferManager
|
|
{
|
|
public const int MaxGbuffer = 8;
|
|
|
|
public void SetBufferDescription(int index, string stringId, RenderTextureFormat inFormat, RenderTextureReadWrite inSRGBWrite)
|
|
{
|
|
IDs[index] = Shader.PropertyToID(stringId);
|
|
RTIDs[index] = new RenderTargetIdentifier(IDs[index]);
|
|
formats[index] = inFormat;
|
|
sRGBWrites[index] = inSRGBWrite;
|
|
}
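// Illustrative usage (hypothetical values, for clarity only):
//   SetBufferDescription(0, "_GBufferTexture0", RenderTextureFormat.ARGB32, RenderTextureReadWrite.Default);
// The actual descriptions come from the deferred material via GetMaterialGBufferDescription()
// in the HDRenderPipeline constructor.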
|
|
|
|
public void InitGBuffers(int width, int height, CommandBuffer cmd)
|
|
{
|
|
for (int index = 0; index < gbufferCount; index++)
|
|
{
|
|
cmd.GetTemporaryRT(IDs[index], width, height, 0, FilterMode.Point, formats[index], sRGBWrites[index]);
|
|
}
|
|
}
|
|
|
|
private RenderTargetIdentifier[] m_ColorMRTs;
|
|
public RenderTargetIdentifier[] GetGBuffers()
|
|
{
|
|
if (m_ColorMRTs == null || m_ColorMRTs.Length != gbufferCount)
|
|
m_ColorMRTs = new RenderTargetIdentifier[gbufferCount];
|
|
|
|
for (int index = 0; index < gbufferCount; index++)
|
|
{
|
|
m_ColorMRTs[index] = RTIDs[index];
|
|
}
|
|
|
|
return m_ColorMRTs;
|
|
}
|
|
|
|
public int gbufferCount { get; set; }
|
|
int[] IDs = new int[MaxGbuffer];
|
|
RenderTargetIdentifier[] RTIDs = new RenderTargetIdentifier[MaxGbuffer];
|
|
RenderTextureFormat[] formats = new RenderTextureFormat[MaxGbuffer];
|
|
RenderTextureReadWrite[] sRGBWrites = new RenderTextureReadWrite[MaxGbuffer];
|
|
}
|
|
|
|
public partial class HDRenderPipeline : RenderPipeline
|
|
{
|
|
readonly HDRenderPipelineAsset m_Asset;
|
|
|
|
readonly RenderPipelineMaterial m_DeferredMaterial;
|
|
readonly List<RenderPipelineMaterial> m_MaterialList = new List<RenderPipelineMaterial>();
|
|
|
|
readonly GBufferManager m_gbufferManager = new GBufferManager();
|
|
|
|
Material m_CopyStencilForSplitLighting;
|
|
Material m_CopyStencilForRegularLighting;
|
|
|
|
// Various materials used in the render loop
|
|
ComputeShader m_SubsurfaceScatteringCS { get { return m_Asset.renderPipelineResources.subsurfaceScatteringCS; } }
|
|
int m_SubsurfaceScatteringKernel;
|
|
Material m_CombineLightingPass;
|
|
// Old SSS Model >>>
|
|
Material m_SssVerticalFilterPass;
|
|
Material m_SssHorizontalFilterAndCombinePass;
|
|
// <<< Old SSS Model
|
|
|
|
Material m_CameraMotionVectorsMaterial;
|
|
|
|
Material m_DebugViewMaterialGBuffer;
|
|
Material m_DebugDisplayLatlong;
|
|
Material m_DebugFullScreen;
|
|
|
|
// Various buffers
|
|
readonly int m_CameraColorBuffer;
|
|
readonly int m_CameraSssDiffuseLightingBuffer;
|
|
// Old SSS Model >>>
|
|
readonly int m_CameraFilteringBuffer;
|
|
// <<< Old SSS Model
|
|
readonly int m_VelocityBuffer;
|
|
readonly int m_DistortionBuffer;
|
|
|
|
readonly int m_DeferredShadowBuffer;
|
|
|
|
// 'm_CameraColorBuffer' does not contain diffuse lighting of SSS materials until the SSS pass. It is stored within 'm_CameraSssDiffuseLightingBuffer'.
|
|
readonly RenderTargetIdentifier m_CameraColorBufferRT;
|
|
readonly RenderTargetIdentifier m_CameraSssDiffuseLightingBufferRT;
|
|
// Old SSS Model >>>
|
|
readonly RenderTargetIdentifier m_CameraFilteringBufferRT;
|
|
// <<< Old SSS Model
|
|
readonly RenderTargetIdentifier m_VelocityBufferRT;
|
|
readonly RenderTargetIdentifier m_DistortionBufferRT;
|
|
|
|
readonly RenderTargetIdentifier m_DeferredShadowBufferRT;
|
|
|
|
private RenderTexture m_CameraDepthStencilBuffer = null;
|
|
private RenderTexture m_CameraDepthBufferCopy = null;
|
|
private RenderTexture m_CameraStencilBufferCopy = null;
|
|
private RenderTexture m_HTile = null; // If the hardware does not expose it, we compute our own, optimized to only contain the SSS bit
|
|
|
|
private RenderTargetIdentifier m_CameraDepthStencilBufferRT;
|
|
private RenderTargetIdentifier m_CameraDepthBufferCopyRT;
|
|
private RenderTargetIdentifier m_CameraStencilBufferCopyRT;
|
|
private RenderTargetIdentifier m_HTileRT;
|
|
|
|
// Post-processing context and screen-space effects (recycled on every frame to avoid GC alloc)
|
|
readonly PostProcessRenderContext m_PostProcessContext;
|
|
readonly ScreenSpaceAmbientOcclusionEffect m_SsaoEffect;
|
|
|
|
// Stencil usage in HDRenderPipeline.
// Currently we use only 2 bits to identify the kind of lighting that is expected from the render pipeline.
// Usage is defined in LightDefinitions.cs
|
|
[Flags]
|
|
public enum StencilBitMask
|
|
{
|
|
Clear = 0, // 0x0
Lighting = 3, // 0x3 - 2 bits
All = 255 // 0xFF - 8 bits
|
|
}
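// The Lighting bits (0x3) are matched against StencilLightingUsage.SplitLighting /
// RegularLighting (see the CopyStencilBuffer materials set up in the constructor); this is
// presumably how split-lighting (SSS) pixels are distinguished later in the frame.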
|
|
|
|
// Detect when the window size changes
|
|
int m_CurrentWidth;
|
|
int m_CurrentHeight;
|
|
|
|
// Used to detect frame changes
|
|
int m_FrameCount;
|
|
|
|
public int GetCurrentShadowCount() { return m_LightLoop.GetCurrentShadowCount(); }
|
|
public int GetShadowAtlasCount() { return m_LightLoop.GetShadowAtlasCount(); }
|
|
|
|
readonly SkyManager m_SkyManager = new SkyManager();
|
|
readonly LightLoop m_LightLoop = new LightLoop();
|
|
readonly ShadowSettings m_ShadowSettings = new ShadowSettings();
|
|
|
|
// Debugging
|
|
MaterialPropertyBlock m_SharedPropertyBlock = new MaterialPropertyBlock();
|
|
public DebugDisplaySettings m_DebugDisplaySettings = new DebugDisplaySettings();
|
|
private int m_DebugFullScreenTempRT;
|
|
private bool m_FullScreenDebugPushed = false;
|
|
|
|
public SubsurfaceScatteringSettings sssSettings
|
|
{
|
|
get { return m_Asset.sssSettings; }
|
|
}
|
|
|
|
private CommonSettings.Settings m_CommonSettings = CommonSettings.Settings.s_Defaultsettings;
|
|
private SkySettings m_SkySettings = null;
|
|
private ScreenSpaceAmbientOcclusionSettings.Settings m_SsaoSettings = ScreenSpaceAmbientOcclusionSettings.Settings.s_Defaultsettings;
|
|
|
|
public CommonSettings.Settings commonSettingsToUse
|
|
{
|
|
get
|
|
{
|
|
if (CommonSettingsSingleton.overrideSettings)
|
|
return CommonSettingsSingleton.overrideSettings.settings;
|
|
|
|
return m_CommonSettings;
|
|
}
|
|
}
|
|
|
|
public SkySettings skySettingsToUse
|
|
{
|
|
get
|
|
{
|
|
if (SkySettingsSingleton.overrideSettings)
|
|
return SkySettingsSingleton.overrideSettings;
|
|
|
|
return m_SkySettings;
|
|
}
|
|
}
|
|
|
|
public ScreenSpaceAmbientOcclusionSettings.Settings ssaoSettingsToUse
|
|
{
|
|
get
|
|
{
|
|
if (ScreenSpaceAmbientOcclusionSettingsSingleton.overrideSettings)
|
|
return ScreenSpaceAmbientOcclusionSettingsSingleton.overrideSettings.settings;
|
|
|
|
return m_SsaoSettings;
|
|
}
|
|
}
|
|
|
|
public HDRenderPipeline(HDRenderPipelineAsset asset)
|
|
{
|
|
m_Asset = asset;
|
|
|
|
// Scan the material list and assign it
m_MaterialList = Utilities.GetRenderPipelineMaterialList();
// Find the first material that has a non-zero GBuffer count and assign it as the deferred material
|
|
m_DeferredMaterial = null;
|
|
foreach (RenderPipelineMaterial material in m_MaterialList)
|
|
{
|
|
if (material.GetMaterialGBufferCount() > 0)
|
|
{
|
|
m_DeferredMaterial = material;
|
|
}
|
|
}
|
|
|
|
// TODO: Handle the case of no GBuffer material
// TODO: The assert is commented out because, for some reason, m_DeferredMaterial contains the correct class but with "null" in the name instead of the real name, which triggers the assert
// even though it works. Not sure what is happening; DebugDisplay uses the same code and the name is correct there.
// Debug.Assert(m_DeferredMaterial != null);
|
|
|
|
m_CameraColorBuffer = HDShaderIDs._CameraColorTexture;
|
|
m_CameraColorBufferRT = new RenderTargetIdentifier(m_CameraColorBuffer);
|
|
m_CameraSssDiffuseLightingBuffer = HDShaderIDs._CameraSssDiffuseLightingBuffer;
|
|
m_CameraSssDiffuseLightingBufferRT = new RenderTargetIdentifier(m_CameraSssDiffuseLightingBuffer);
|
|
m_CameraFilteringBuffer = HDShaderIDs._CameraFilteringBuffer;
|
|
m_CameraFilteringBufferRT = new RenderTargetIdentifier(m_CameraFilteringBuffer);
|
|
|
|
CreateSssMaterials(sssSettings.useDisneySSS);
|
|
|
|
m_CopyStencilForSplitLighting = Utilities.CreateEngineMaterial("Hidden/HDRenderPipeline/CopyStencilBuffer");
|
|
m_CopyStencilForSplitLighting.EnableKeyword("EXPORT_HTILE");
|
|
m_CopyStencilForSplitLighting.SetInt(HDShaderIDs._StencilRef, (int)StencilLightingUsage.SplitLighting);
|
|
m_CopyStencilForRegularLighting = Utilities.CreateEngineMaterial("Hidden/HDRenderPipeline/CopyStencilBuffer");
|
|
m_CopyStencilForRegularLighting.DisableKeyword("EXPORT_HTILE");
|
|
m_CopyStencilForRegularLighting.SetInt(HDShaderIDs._StencilRef, (int)StencilLightingUsage.RegularLighting);
|
|
m_CameraMotionVectorsMaterial = Utilities.CreateEngineMaterial("Hidden/HDRenderPipeline/CameraMotionVectors");
|
|
|
|
InitializeDebugMaterials();
|
|
|
|
// Init Gbuffer description
|
|
m_gbufferManager.gbufferCount = m_DeferredMaterial.GetMaterialGBufferCount();
|
|
RenderTextureFormat[] RTFormat;
|
|
RenderTextureReadWrite[] RTReadWrite;
|
|
m_DeferredMaterial.GetMaterialGBufferDescription(out RTFormat, out RTReadWrite);
|
|
|
|
for (int gbufferIndex = 0; gbufferIndex < m_gbufferManager.gbufferCount; ++gbufferIndex)
|
|
{
|
|
m_gbufferManager.SetBufferDescription(gbufferIndex, "_GBufferTexture" + gbufferIndex, RTFormat[gbufferIndex], RTReadWrite[gbufferIndex]);
|
|
}
|
|
|
|
m_VelocityBuffer = HDShaderIDs._VelocityTexture;
|
|
if (ShaderConfig.s_VelocityInGbuffer == 1)
|
|
{
|
|
// If velocity is in GBuffer then it is in the last RT. Assign a different name to it.
|
|
m_gbufferManager.SetBufferDescription(m_gbufferManager.gbufferCount, "_VelocityTexture", Builtin.GetVelocityBufferFormat(), Builtin.GetVelocityBufferReadWrite());
|
|
m_gbufferManager.gbufferCount++;
|
|
}
|
|
m_VelocityBufferRT = new RenderTargetIdentifier(m_VelocityBuffer);
|
|
|
|
m_DistortionBuffer = HDShaderIDs._DistortionTexture;
|
|
m_DistortionBufferRT = new RenderTargetIdentifier(m_DistortionBuffer);
|
|
|
|
m_DeferredShadowBuffer = HDShaderIDs._DeferredShadowTexture;
|
|
m_DeferredShadowBufferRT = new RenderTargetIdentifier(m_DeferredShadowBuffer);
|
|
|
|
m_MaterialList.ForEach(material => material.Build(asset.renderPipelineResources));
|
|
|
|
m_LightLoop.Build(asset.renderPipelineResources, asset.tileSettings, asset.textureSettings, asset.shadowInitParams, m_ShadowSettings);
|
|
|
|
m_SkyManager.Build(asset.renderPipelineResources);
|
|
m_SkyManager.skySettings = skySettingsToUse;
|
|
|
|
m_PostProcessContext = new PostProcessRenderContext();
|
|
m_SsaoEffect = new ScreenSpaceAmbientOcclusionEffect();
|
|
m_SsaoEffect.Build(asset.renderPipelineResources);
|
|
|
|
m_DebugDisplaySettings.RegisterDebug();
|
|
m_DebugFullScreenTempRT = HDShaderIDs._DebugFullScreenTexture;
|
|
}
|
|
|
|
void InitializeDebugMaterials()
|
|
{
|
|
m_DebugViewMaterialGBuffer = Utilities.CreateEngineMaterial(m_Asset.renderPipelineResources.debugViewMaterialGBufferShader);
|
|
m_DebugDisplayLatlong = Utilities.CreateEngineMaterial(m_Asset.renderPipelineResources.debugDisplayLatlongShader);
|
|
m_DebugFullScreen = Utilities.CreateEngineMaterial(m_Asset.renderPipelineResources.debugFullScreenShader);
|
|
}
|
|
|
|
public void CreateSssMaterials(bool useDisneySSS)
|
|
{
|
|
m_SubsurfaceScatteringKernel = m_SubsurfaceScatteringCS.FindKernel("SubsurfaceScattering");
|
|
|
|
Utilities.Destroy(m_CombineLightingPass);
|
|
m_CombineLightingPass = Utilities.CreateEngineMaterial("Hidden/HDRenderPipeline/CombineLighting");
|
|
|
|
// Old SSS Model >>>
|
|
Utilities.Destroy(m_SssVerticalFilterPass);
|
|
m_SssVerticalFilterPass = Utilities.CreateEngineMaterial("Hidden/HDRenderPipeline/SubsurfaceScattering");
|
|
m_SssVerticalFilterPass.DisableKeyword("SSS_FILTER_HORIZONTAL_AND_COMBINE");
|
|
m_SssVerticalFilterPass.SetFloat(HDShaderIDs._DstBlend, (float)BlendMode.Zero);
|
|
|
|
Utilities.Destroy(m_SssHorizontalFilterAndCombinePass);
|
|
m_SssHorizontalFilterAndCombinePass = Utilities.CreateEngineMaterial("Hidden/HDRenderPipeline/SubsurfaceScattering");
|
|
m_SssHorizontalFilterAndCombinePass.EnableKeyword("SSS_FILTER_HORIZONTAL_AND_COMBINE");
|
|
m_SssHorizontalFilterAndCombinePass.SetFloat(HDShaderIDs._DstBlend, (float)BlendMode.One);
|
|
// <<< Old SSS Model
|
|
}
|
|
|
|
public void OnSceneLoad()
|
|
{
|
|
// Recreate the textures which went NULL
|
|
m_MaterialList.ForEach(material => material.Build(m_Asset.renderPipelineResources));
|
|
}
|
|
|
|
public override void Dispose()
|
|
{
|
|
base.Dispose();
|
|
|
|
m_LightLoop.Cleanup();
|
|
|
|
m_MaterialList.ForEach(material => material.Cleanup());
|
|
|
|
Utilities.Destroy(m_DebugViewMaterialGBuffer);
|
|
Utilities.Destroy(m_DebugDisplayLatlong);
|
|
|
|
m_SkyManager.Cleanup();
|
|
|
|
m_SsaoEffect.Cleanup();
|
|
|
|
#if UNITY_EDITOR
|
|
SupportedRenderingFeatures.active = SupportedRenderingFeatures.Default;
|
|
#endif
|
|
}
|
|
|
|
#if UNITY_EDITOR
|
|
private static readonly SupportedRenderingFeatures s_NeededFeatures = new SupportedRenderingFeatures()
|
|
{
|
|
reflectionProbe = SupportedRenderingFeatures.ReflectionProbe.Rotation
|
|
};
|
|
#endif
|
|
|
|
void CreateDepthStencilBuffer(Camera camera)
|
|
{
|
|
if (m_CameraDepthStencilBuffer != null)
|
|
{
|
|
m_CameraDepthStencilBuffer.Release();
|
|
}
|
|
|
|
m_CameraDepthStencilBuffer = new RenderTexture(camera.pixelWidth, camera.pixelHeight, 24, RenderTextureFormat.Depth);
|
|
m_CameraDepthStencilBuffer.filterMode = FilterMode.Point;
|
|
m_CameraDepthStencilBuffer.Create();
|
|
m_CameraDepthStencilBufferRT = new RenderTargetIdentifier(m_CameraDepthStencilBuffer);
|
|
|
|
if (NeedDepthBufferCopy())
|
|
{
|
|
if (m_CameraDepthBufferCopy != null)
|
|
{
|
|
m_CameraDepthBufferCopy.Release();
|
|
}
|
|
m_CameraDepthBufferCopy = new RenderTexture(camera.pixelWidth, camera.pixelHeight, 24, RenderTextureFormat.Depth);
|
|
m_CameraDepthBufferCopy.filterMode = FilterMode.Point;
|
|
m_CameraDepthBufferCopy.Create();
|
|
m_CameraDepthBufferCopyRT = new RenderTargetIdentifier(m_CameraDepthBufferCopy);
|
|
}
|
|
|
|
if (NeedStencilBufferCopy())
|
|
{
|
|
if (m_CameraStencilBufferCopy != null)
|
|
{
|
|
m_CameraStencilBufferCopy.Release();
|
|
}
|
|
m_CameraStencilBufferCopy = new RenderTexture(camera.pixelWidth, camera.pixelHeight, 0, RenderTextureFormat.R8, RenderTextureReadWrite.Linear); // DXGI_FORMAT_R8_UINT is not supported by Unity
|
|
m_CameraStencilBufferCopy.filterMode = FilterMode.Point;
|
|
m_CameraStencilBufferCopy.Create();
|
|
m_CameraStencilBufferCopyRT = new RenderTargetIdentifier(m_CameraStencilBufferCopy);
|
|
}
|
|
|
|
if (NeedHTileCopy())
|
|
{
|
|
if (m_HTile != null)
|
|
{
|
|
m_HTile.Release();
|
|
}
|
|
// We use 8x8 tiles in order to match the native GCN HTile as closely as possible.
|
|
m_HTile = new RenderTexture((camera.pixelWidth + 7) / 8, (camera.pixelHeight + 7) / 8, 0, RenderTextureFormat.R8, RenderTextureReadWrite.Linear); // DXGI_FORMAT_R8_UINT is not supported by Unity
|
|
m_HTile.filterMode = FilterMode.Point;
|
|
m_HTile.enableRandomWrite = true;
|
|
m_HTile.Create();
|
|
m_HTileRT = new RenderTargetIdentifier(m_HTile);
|
|
}
|
|
|
|
}
|
|
|
|
void Resize(Camera camera)
|
|
{
|
|
// TODO: Detect if RenderDoc has just loaded and force a resize in this case, as RenderDoc often requires resources to be reallocated.

// TODO: This is the wrong way to handle resize/allocation. We can have several different cameras here, meaning the loop over cameras will allocate and deallocate
// the buffers below, which is bad. It would be better to have a persistent set of buffers per camera and only reallocate resources when needed.
// For now, assume only one camera (the main one) reaches this code.
|
|
m_SkyManager.skySettings = skySettingsToUse;
|
|
m_SkyManager.Resize(camera.nearClipPlane, camera.farClipPlane); // TODO: Also badly named; here we just want to reallocate the texture if the sky parameters change (useful for LookDev)
|
|
|
|
bool resolutionChanged = camera.pixelWidth != m_CurrentWidth || camera.pixelHeight != m_CurrentHeight;
|
|
|
|
if (resolutionChanged || m_CameraDepthStencilBuffer == null)
|
|
{
|
|
CreateDepthStencilBuffer(camera);
|
|
}
|
|
|
|
if (resolutionChanged || m_LightLoop.NeedResize())
|
|
{
|
|
if (m_CurrentWidth > 0 && m_CurrentHeight > 0)
|
|
{
|
|
m_LightLoop.ReleaseResolutionDependentBuffers();
|
|
}
|
|
|
|
m_LightLoop.AllocResolutionDependentBuffers(camera.pixelWidth, camera.pixelHeight);
|
|
}
|
|
|
|
if (resolutionChanged && m_VolumetricLightingEnabled)
|
|
{
|
|
CreateVolumetricLightingBuffers(camera.pixelWidth, camera.pixelHeight);
|
|
}
|
|
|
|
// update recorded window resolution
|
|
m_CurrentWidth = camera.pixelWidth;
|
|
m_CurrentHeight = camera.pixelHeight;
|
|
}
|
|
|
|
public void PushGlobalParams(HDCamera hdCamera, CommandBuffer cmd, SubsurfaceScatteringSettings sssParameters)
|
|
{
|
|
using (new Utilities.ProfilingSample("Push Global Parameters", cmd))
|
|
{
|
|
hdCamera.SetupGlobalParams(cmd);
|
|
|
|
// TODO: cmd.SetGlobalInt() does not exist, so we are forced to use Shader.SetGlobalInt() instead.
|
|
|
|
if (m_SkyManager.IsSkyValid())
|
|
{
|
|
m_SkyManager.SetGlobalSkyTexture();
|
|
Shader.SetGlobalInt(HDShaderIDs._EnvLightSkyEnabled, 1);
|
|
}
|
|
else
|
|
{
|
|
Shader.SetGlobalInt(HDShaderIDs._EnvLightSkyEnabled, 0);
|
|
}
|
|
|
|
// Broadcast SSS parameters to all shaders.
|
|
Shader.SetGlobalInt( HDShaderIDs._EnableSSSAndTransmission, m_DebugDisplaySettings.renderingDebugSettings.enableSSSAndTransmission ? 1 : 0);
|
|
Shader.SetGlobalInt( HDShaderIDs._TexturingModeFlags, (int)sssParameters.texturingModeFlags);
|
|
Shader.SetGlobalInt( HDShaderIDs._TransmissionFlags, (int)sssParameters.transmissionFlags);
|
|
Shader.SetGlobalInt( HDShaderIDs._UseDisneySSS, sssParameters.useDisneySSS ? 1 : 0);
|
|
cmd.SetGlobalVectorArray(HDShaderIDs._ThicknessRemaps, sssParameters.thicknessRemaps);
|
|
cmd.SetGlobalVectorArray(HDShaderIDs._ShapeParams, sssParameters.shapeParams);
|
|
cmd.SetGlobalVectorArray(HDShaderIDs._HalfRcpVariancesAndWeights, sssParameters.halfRcpVariancesAndWeights);
|
|
cmd.SetGlobalVectorArray(HDShaderIDs._TransmissionTints, sssParameters.transmissionTints);
|
|
|
|
SetGlobalVolumeProperties(m_VolumetricLightingEnabled, cmd);
|
|
}
|
|
}
|
|
|
|
bool NeedDepthBufferCopy()
|
|
{
|
|
// For now we consider only PS4 to be able to read from a bound depth buffer. Need to test/implement for other platforms.
|
|
return SystemInfo.graphicsDeviceType != GraphicsDeviceType.PlayStation4;
|
|
}
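// Note: the copy exists because most platforms cannot sample a depth buffer that is still
// bound as the current depth target; PS4 is treated as the exception here.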
|
|
|
|
bool NeedStencilBufferCopy()
|
|
{
|
|
// Currently, Unity does not offer a way to bind the stencil buffer as a texture in a compute shader.
|
|
// Therefore, it's manually copied using a pixel shader.
|
|
return m_DebugDisplaySettings.renderingDebugSettings.enableSSSAndTransmission || LightLoop.GetFeatureVariantsEnabled(m_Asset.tileSettings);
|
|
}
|
|
|
|
bool NeedHTileCopy()
|
|
{
|
|
// Currently, Unity does not offer a way to access the GCN HTile even on PS4 and Xbox One.
|
|
// Therefore, it's computed in a pixel shader, and optimized to only contain the SSS bit.
|
|
return m_DebugDisplaySettings.renderingDebugSettings.enableSSSAndTransmission;
|
|
}
|
|
|
|
RenderTargetIdentifier GetDepthTexture()
|
|
{
|
|
return NeedDepthBufferCopy() ? m_CameraDepthBufferCopy : m_CameraDepthStencilBuffer;
|
|
}
|
|
|
|
RenderTargetIdentifier GetStencilTexture()
|
|
{
|
|
return NeedStencilBufferCopy() ? m_CameraStencilBufferCopyRT : m_CameraDepthStencilBufferRT;
|
|
}
|
|
|
|
RenderTargetIdentifier GetHTile()
|
|
{
|
|
// Currently, Unity does not offer a way to access the GCN HTile.
|
|
return m_HTileRT;
|
|
}
|
|
|
|
private void CopyDepthBufferIfNeeded(CommandBuffer cmd)
|
|
{
|
|
using (new Utilities.ProfilingSample(NeedDepthBufferCopy() ? "Copy DepthBuffer" : "Set DepthBuffer", cmd))
|
|
{
|
|
if (NeedDepthBufferCopy())
|
|
{
|
|
using (new Utilities.ProfilingSample("Copy depth-stencil buffer", cmd))
|
|
{
|
|
cmd.CopyTexture(m_CameraDepthStencilBufferRT, m_CameraDepthBufferCopyRT);
|
|
}
|
|
}
|
|
|
|
cmd.SetGlobalTexture(HDShaderIDs._MainDepthTexture, GetDepthTexture());
|
|
}
|
|
}
|
|
|
|
private void PrepareAndBindStencilTexture(CommandBuffer cmd)
|
|
{
|
|
if (NeedStencilBufferCopy())
|
|
{
|
|
using (new Utilities.ProfilingSample("Copy StencilBuffer", cmd))
|
|
{
|
|
cmd.SetRandomWriteTarget(1, GetHTile());
|
|
// Our method of exporting the stencil requires one pass per unique stencil value.
|
|
Utilities.DrawFullScreen(cmd, m_CopyStencilForSplitLighting, m_CameraStencilBufferCopyRT, m_CameraDepthStencilBufferRT);
|
|
Utilities.DrawFullScreen(cmd, m_CopyStencilForRegularLighting, m_CameraStencilBufferCopyRT, m_CameraDepthStencilBufferRT);
|
|
cmd.ClearRandomWriteTargets();
|
|
}
|
|
}
|
|
|
|
cmd.SetGlobalTexture(HDShaderIDs._HTile, GetHTile());
|
|
cmd.SetGlobalTexture(HDShaderIDs._StencilTexture, GetStencilTexture());
|
|
}
|
|
|
|
public void UpdateCommonSettings()
|
|
{
|
|
var commonSettings = commonSettingsToUse;
|
|
|
|
m_ShadowSettings.maxShadowDistance = commonSettings.shadowMaxDistance;
|
|
m_ShadowSettings.directionalLightNearPlaneOffset = commonSettings.shadowNearPlaneOffset;
|
|
}
|
|
|
|
CullResults m_CullResults;
|
|
public override void Render(ScriptableRenderContext renderContext, Camera[] cameras)
|
|
{
|
|
base.Render(renderContext, cameras);
|
|
|
|
#if UNITY_EDITOR
|
|
SupportedRenderingFeatures.active = s_NeededFeatures;
|
|
#endif
|
|
|
|
if (m_FrameCount != Time.frameCount)
|
|
{
|
|
HDCamera.CleanUnused();
|
|
m_FrameCount = Time.frameCount;
|
|
}
|
|
|
|
GraphicsSettings.lightsUseLinearIntensity = true;
|
|
GraphicsSettings.lightsUseColorTemperature = true;
|
|
|
|
// This is the main command buffer used for the frame.
|
|
CommandBuffer cmd = CommandBufferPool.Get("");
|
|
|
|
m_MaterialList.ForEach(material => material.RenderInit(cmd));
|
|
|
|
// Do anything we need to do upon a new frame.
|
|
m_LightLoop.NewFrame();
|
|
|
|
ApplyDebugDisplaySettings();
|
|
UpdateCommonSettings();
|
|
|
|
// Set Frame constant buffer
|
|
// TODO...
|
|
|
|
// We only want to render one camera for now,
// so select the "main" camera if there is one.
|
|
|
|
Camera camera = null;
|
|
foreach (var cam in cameras)
|
|
{
|
|
if (cam == Camera.main)
|
|
{
|
|
camera = cam;
|
|
break;
|
|
|
|
}
|
|
}
|
|
|
|
if (camera == null && cameras.Length > 0)
|
|
camera = cameras[0];
|
|
|
|
if (camera == null)
|
|
{
|
|
renderContext.Submit();
|
|
return;
|
|
}
|
|
|
|
// Set camera constant buffer
|
|
// TODO...
|
|
|
|
ScriptableCullingParameters cullingParams;
|
|
if (!CullResults.GetCullingParameters(camera, out cullingParams))
|
|
{
|
|
renderContext.Submit();
|
|
return;
|
|
}
|
|
|
|
m_LightLoop.UpdateCullingParameters(ref cullingParams);
|
|
|
|
// emit scene view UI
|
|
#if UNITY_EDITOR
|
|
if (camera.cameraType == CameraType.SceneView)
|
|
ScriptableRenderContext.EmitWorldGeometryForSceneView(camera);
|
|
#endif
|
|
|
|
CullResults.Cull(ref cullingParams, renderContext, ref m_CullResults);
|
|
|
|
Resize(camera);
|
|
|
|
renderContext.SetupCameraProperties(camera);
|
|
|
|
var postProcessLayer = camera.GetComponent<PostProcessLayer>();
|
|
HDCamera hdCamera = HDCamera.Get(camera, postProcessLayer);
|
|
PushGlobalParams(hdCamera, cmd, m_Asset.sssSettings);
|
|
|
|
// TODO: Find a correct place to bind these material textures
|
|
// We have to bind the material specific global parameters in this mode
|
|
m_MaterialList.ForEach(material => material.Bind());
|
|
|
|
var additionalCameraData = camera.GetComponent<HDAdditionalCameraData>();
|
|
if (additionalCameraData && additionalCameraData.renderingPath == RenderingPathHDRP.Unlit)
|
|
{
|
|
// TODO: Add another path dedicated to planar reflection / real time cubemap that implement simpler lighting
|
|
string passName = "Forward"; // It is up to the users to only send unlit object for this camera path
|
|
|
|
using (new Utilities.ProfilingSample(passName, cmd))
|
|
{
|
|
Utilities.SetRenderTarget(cmd, m_CameraColorBufferRT, m_CameraDepthStencilBufferRT, ClearFlag.ClearColor | ClearFlag.ClearDepth);
|
|
RenderOpaqueRenderList(m_CullResults, camera, renderContext, cmd, passName);
|
|
RenderTransparentRenderList(m_CullResults, camera, renderContext, cmd, passName);
|
|
}
|
|
|
|
renderContext.ExecuteCommandBuffer(cmd);
|
|
CommandBufferPool.Release(cmd);
|
|
renderContext.Submit();
|
|
return;
|
|
}
|
|
|
|
InitAndClearBuffer(hdCamera, cmd);
|
|
|
|
RenderDepthPrepass(m_CullResults, camera, renderContext, cmd);
|
|
|
|
// Forward opaque with deferred/cluster tile lighting requires that we fill the depth buffer
// correctly to build the light list.
|
|
RenderForwardOnlyOpaqueDepthPrepass(m_CullResults, camera, renderContext, cmd);
|
|
RenderGBuffer(m_CullResults, camera, renderContext, cmd);
|
|
|
|
// If full forward rendering, we have not done any rendering yet, so there is no need to copy the buffer.
// If deferred, the depth buffer is complete (regular GBuffer + ForwardOnly depth prepass are done), so we can copy it safely.
|
|
if (!m_Asset.renderingSettings.useForwardRenderingOnly)
|
|
{
|
|
CopyDepthBufferIfNeeded(cmd);
|
|
}
|
|
|
|
// Required for the SSS and the shader feature classification pass.
|
|
PrepareAndBindStencilTexture(cmd);
|
|
|
|
if (m_DebugDisplaySettings.IsDebugMaterialDisplayEnabled())
|
|
{
|
|
RenderDebugViewMaterial(m_CullResults, hdCamera, renderContext, cmd);
|
|
}
|
|
else
|
|
{
|
|
using (new Utilities.ProfilingSample("Build Light list and render shadows", cmd))
|
|
{
|
|
// TODO: Everything here (SSAO, shadows, light list build, material and light classification) can be parallelized with async compute
|
|
m_SsaoEffect.Render(ssaoSettingsToUse, this, hdCamera, renderContext, cmd, m_Asset.renderingSettings.useForwardRenderingOnly);
|
|
m_LightLoop.PrepareLightsForGPU(m_ShadowSettings, m_CullResults, camera);
|
|
m_LightLoop.RenderShadows(renderContext, cmd, m_CullResults);
|
|
|
|
cmd.GetTemporaryRT(m_DeferredShadowBuffer, camera.pixelWidth, camera.pixelHeight, 0, FilterMode.Point, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear, 1, true);
|
|
m_LightLoop.RenderDeferredDirectionalShadow(hdCamera, m_DeferredShadowBufferRT, GetDepthTexture(), cmd);
|
|
|
|
PushFullScreenDebugTexture(cmd, m_DeferredShadowBuffer, hdCamera.camera, renderContext, FullScreenDebugMode.DeferredShadows);
|
|
|
|
renderContext.SetupCameraProperties(camera); // Need to recall SetupCameraProperties after m_ShadowPass.Render
|
|
m_LightLoop.BuildGPULightLists(camera, cmd, m_CameraDepthStencilBufferRT, GetStencilTexture());
|
|
}
|
|
|
|
// Caution: We require the sun light here because some skies use it to render, which means UpdateSkyEnvironment
// must be called after BuildGPULightLists.
// TODO: Try to arrange the code so we can trigger this call earlier and use async compute here to run the sky convolution during other passes (once we move the convolution shader to compute).
|
|
UpdateSkyEnvironment(hdCamera, cmd);
|
|
|
|
RenderDeferredLighting(hdCamera, cmd);
|
|
|
|
// We compute subsurface scattering here. Therefore, no objects rendered afterwards will exhibit SSS.
|
|
// Currently, there is no efficient way to switch between SRT and MRT for the forward pass;
|
|
// therefore, forward-rendered objects do not output split lighting required for the SSS pass.
|
|
SubsurfaceScatteringPass(hdCamera, cmd, m_Asset.sssSettings);
|
|
|
|
// For opaque forward we have split rendering into two categories:
// materials that are always forward, and materials that can be deferred or forward depending on render pipeline options (like switching to forward-only mode).
// Materials that are always forward are unlit and complex ones (like hair) and don't require sorting, so it is OK to split them.
|
|
RenderForward(m_CullResults, camera, renderContext, cmd, true); // Render deferred or forward opaque
|
|
RenderForwardOnlyOpaque(m_CullResults, camera, renderContext, cmd);
|
|
|
|
RenderLightingDebug(hdCamera, cmd, m_CameraColorBufferRT, m_DebugDisplaySettings);
|
|
|
|
// If full forward rendering, we have just rendered everything, so we can copy the depth buffer.
// If deferred, nothing needs copying anymore.
|
|
if (m_Asset.renderingSettings.useForwardRenderingOnly)
|
|
{
|
|
CopyDepthBufferIfNeeded(cmd);
|
|
}
|
|
|
|
RenderSky(hdCamera, cmd);
|
|
|
|
// Render all types of transparent forward (unlit, lit, complex (hair...)) to keep the sorting between transparent objects.
|
|
RenderForward(m_CullResults, camera, renderContext, cmd, false);
|
|
|
|
// Render fog.
|
|
VolumetricLightingPass(cmd, hdCamera);
|
|
|
|
PushFullScreenDebugTexture(cmd, m_CameraColorBuffer, camera, renderContext, FullScreenDebugMode.NanTracker);
|
|
|
|
// Planar reflections and real-time cubemaps don't need post-processing and render in FP16
|
|
if (camera.cameraType == CameraType.Reflection)
|
|
{
|
|
using (new Utilities.ProfilingSample("Blit to final RT", cmd))
|
|
{
|
|
// Simple blit
|
|
cmd.Blit(m_CameraColorBufferRT, BuiltinRenderTextureType.CameraTarget);
|
|
}
|
|
|
|
}
|
|
else
|
|
{
|
|
RenderVelocity(m_CullResults, hdCamera, renderContext, cmd); // Note: we may have to render velocity earlier if we do temporal AO, temporal volumetrics, etc... meaning we would not take forward opaque into account in the deferred rendering case?
|
|
|
|
// TODO: Check with VFX team.
// Rendering distortion here of course produces a lot of artifacts.
// But resolving after each object that writes distortion is not possible (we would need to sort transparents, render those that do not distort, then resolve, and so on...).
// Instead we chose to apply distortion at the end, after accumulating the distortion vectors and desired blurriness.
RenderDistortion(m_CullResults, camera, renderContext, cmd);
|
|
|
|
RenderPostProcesses(camera, cmd, postProcessLayer);
|
|
}
|
|
}
|
|
|
|
RenderDebug(hdCamera, cmd);
|
|
|
|
// bind depth surface for editor grid/gizmo/selection rendering
|
|
if (camera.cameraType == CameraType.SceneView)
|
|
{
|
|
cmd.SetRenderTarget(BuiltinRenderTextureType.CameraTarget, m_CameraDepthStencilBufferRT);
|
|
}
|
|
|
|
renderContext.ExecuteCommandBuffer(cmd);
|
|
CommandBufferPool.Release(cmd);
|
|
renderContext.Submit();
|
|
}
|
|
|
|
private static Material m_ErrorMaterial;
|
|
private static Material errorMaterial
|
|
{
|
|
get
|
|
{
|
|
if (m_ErrorMaterial == null)
|
|
m_ErrorMaterial = new Material(Shader.Find("Hidden/InternalErrorShader"));
|
|
return m_ErrorMaterial;
|
|
}
|
|
}
|
|
|
|
void RenderOpaqueRenderList(CullResults cull, Camera camera, ScriptableRenderContext renderContext, CommandBuffer cmd, string passName, RendererConfiguration rendererConfiguration = 0)
|
|
{
|
|
if (!m_DebugDisplaySettings.renderingDebugSettings.displayOpaqueObjects)
|
|
return;
|
|
|
|
// This is done here because the DrawRenderers API lives outside command buffers, so we need to execute the command buffer before doing any DrawRenderers calls
|
|
renderContext.ExecuteCommandBuffer(cmd);
|
|
cmd.Clear();
|
|
|
|
var drawSettings = new DrawRendererSettings(camera, new ShaderPassName(passName))
|
|
{
|
|
rendererConfiguration = rendererConfiguration,
|
|
sorting = { flags = SortFlags.CommonOpaque }
|
|
};
|
|
drawSettings.SetShaderPassName(1, new ShaderPassName("SRPDefaultUnlit"));
|
|
var filterSettings = new FilterRenderersSettings(true) {renderQueueRange = RenderQueueRange.opaque};
|
|
renderContext.DrawRenderers(cull.visibleRenderers, ref drawSettings, filterSettings);
|
|
|
|
#if UNITY_EDITOR
|
|
// in editor draw invalid things with error material
|
|
ConfigureErrorDraw(ref drawSettings);
|
|
renderContext.DrawRenderers(cull.visibleRenderers, ref drawSettings, filterSettings);
|
|
#endif
|
|
}
|
|
|
|
void RenderTransparentRenderList(CullResults cull, Camera camera, ScriptableRenderContext renderContext, CommandBuffer cmd, string passName, RendererConfiguration rendererConfiguration = 0)
|
|
{
|
|
if (!m_DebugDisplaySettings.renderingDebugSettings.displayTransparentObjects)
|
|
return;
|
|
|
|
// This is done here because the DrawRenderers API lives outside command buffers, so we need to execute the command buffer before doing any DrawRenderers calls
|
|
renderContext.ExecuteCommandBuffer(cmd);
|
|
cmd.Clear();
|
|
|
|
var drawSettings = new DrawRendererSettings(camera, new ShaderPassName(passName))
|
|
{
|
|
rendererConfiguration = rendererConfiguration,
|
|
sorting = { flags = SortFlags.CommonTransparent }
|
|
};
|
|
var filterSettings = new FilterRenderersSettings(true) {renderQueueRange = RenderQueueRange.transparent};
|
|
renderContext.DrawRenderers(cull.visibleRenderers, ref drawSettings, filterSettings);
|
|
|
|
#if UNITY_EDITOR
|
|
// in editor draw invalid things with error material
|
|
ConfigureErrorDraw(ref drawSettings);
|
|
renderContext.DrawRenderers(cull.visibleRenderers, ref drawSettings, filterSettings);
|
|
#endif
|
|
}
|
|
|
|
private static void ConfigureErrorDraw(ref DrawRendererSettings drawSettings)
|
|
{
|
|
drawSettings.SetShaderPassName(0, new ShaderPassName("Always"));
|
|
drawSettings.SetShaderPassName(1, new ShaderPassName("ForwardBase"));
|
|
drawSettings.SetShaderPassName(2, new ShaderPassName("Deferred"));
|
|
drawSettings.SetShaderPassName(3, new ShaderPassName("PrepassBase"));
|
|
drawSettings.SetShaderPassName(4, new ShaderPassName("Vertex"));
|
|
drawSettings.SetShaderPassName(5, new ShaderPassName("VertexLMRGBM"));
|
|
drawSettings.SetShaderPassName(6, new ShaderPassName("VertexLM"));
|
|
drawSettings.SetOverrideMaterial(errorMaterial, 0);
|
|
}
|
|
|
|
void RenderDepthPrepass(CullResults cull, Camera camera, ScriptableRenderContext renderContext, CommandBuffer cmd)
|
|
{
|
|
if (!m_Asset.renderingSettings.useDepthPrepass)
|
|
return;
|
|
|
|
using (new Utilities.ProfilingSample("Depth Prepass", cmd))
|
|
{
|
|
// TODO: Must do opaque then alpha-masked for performance!
// TODO: Sort front to back for opaque, and by material for alpha-tested opaque, once we split this in two
|
|
Utilities.SetRenderTarget(cmd, m_CameraDepthStencilBufferRT);
|
|
RenderOpaqueRenderList(cull, camera, renderContext, cmd, "DepthOnly");
|
|
}
|
|
}
|
|
|
|
void RenderGBuffer(CullResults cull, Camera camera, ScriptableRenderContext renderContext, CommandBuffer cmd)
|
|
{
|
|
if (m_Asset.renderingSettings.ShouldUseForwardRenderingOnly())
|
|
return;
|
|
|
|
string passName = m_DebugDisplaySettings.IsDebugDisplayEnabled() ? "GBufferDebugDisplay" : "GBuffer";
|
|
|
|
using (new Utilities.ProfilingSample(passName, cmd))
|
|
{
|
|
// setup GBuffer for rendering
|
|
Utilities.SetRenderTarget(cmd, m_gbufferManager.GetGBuffers(), m_CameraDepthStencilBufferRT);
|
|
// render opaque objects into GBuffer
|
|
RenderOpaqueRenderList(cull, camera, renderContext, cmd, passName, Utilities.kRendererConfigurationBakedLighting);
|
|
}
|
|
}
|
|
|
|
// This pass is used when forward opaque objects are combined with deferred rendering. We need to render forward objects' depth before the tile lighting pass
|
|
void RenderForwardOnlyOpaqueDepthPrepass(CullResults cull, Camera camera, ScriptableRenderContext renderContext, CommandBuffer cmd)
|
|
{
|
|
// If we are forward-only we don't need to render ForwardOnlyOpaqueDepthOnly objects,
// but if a depth prepass is requested we render them.
|
|
if (m_Asset.renderingSettings.ShouldUseForwardRenderingOnly() && !m_Asset.renderingSettings.useDepthPrepass)
|
|
return;
|
|
|
|
using (new Utilities.ProfilingSample("Forward opaque depth", cmd))
|
|
{
|
|
Utilities.SetRenderTarget(cmd, m_CameraDepthStencilBufferRT);
|
|
RenderOpaqueRenderList(cull, camera, renderContext, cmd, "ForwardOnlyOpaqueDepthOnly");
|
|
}
|
|
}
|
|
|
|
void RenderDebugViewMaterial(CullResults cull, HDCamera hdCamera, ScriptableRenderContext renderContext, CommandBuffer cmd)
|
|
{
|
|
using (new Utilities.ProfilingSample("DisplayDebug ViewMaterial", cmd))
|
|
{
|
|
if (m_DebugDisplaySettings.materialDebugSettings.IsDebugGBufferEnabled() && !m_Asset.renderingSettings.ShouldUseForwardRenderingOnly())
|
|
{
|
|
using (new Utilities.ProfilingSample("DebugViewMaterialGBuffer", cmd))
|
|
{
|
|
Utilities.DrawFullScreen(cmd, m_DebugViewMaterialGBuffer, m_CameraColorBufferRT);
|
|
}
|
|
}
|
|
else
|
|
{
|
|
Utilities.SetRenderTarget(cmd, m_CameraColorBufferRT, m_CameraDepthStencilBufferRT, Utilities.kClearAll, Color.black);
|
|
// Render Opaque forward
|
|
RenderOpaqueRenderList(cull, hdCamera.camera, renderContext, cmd, "ForwardDisplayDebug", Utilities.kRendererConfigurationBakedLighting);
|
|
|
|
// Render forward transparent
|
|
RenderTransparentRenderList(cull, hdCamera.camera, renderContext, cmd, "ForwardDisplayDebug", Utilities.kRendererConfigurationBakedLighting);
|
|
}
|
|
}
|
|
|
|
// Last blit
|
|
{
|
|
using (new Utilities.ProfilingSample("Blit DebugView Material Debug", cmd))
|
|
{
|
|
cmd.Blit(m_CameraColorBufferRT, BuiltinRenderTextureType.CameraTarget);
|
|
}
|
|
}
|
|
}
|
|
|
|
void RenderDeferredLighting(HDCamera hdCamera, CommandBuffer cmd)
|
|
{
|
|
if (m_Asset.renderingSettings.ShouldUseForwardRenderingOnly())
|
|
{
|
|
return;
|
|
}
|
|
|
|
RenderTargetIdentifier[] colorRTs = { m_CameraColorBufferRT, m_CameraSssDiffuseLightingBufferRT };
|
|
RenderTargetIdentifier depthTexture = GetDepthTexture();
|
|
|
|
LightLoop.LightingPassOptions options = new LightLoop.LightingPassOptions();
|
|
options.volumetricLightingEnabled = m_VolumetricLightingEnabled;
|
|
|
|
if (m_DebugDisplaySettings.renderingDebugSettings.enableSSSAndTransmission)
|
|
{
|
|
// Output split lighting for materials asking for it (masked in the stencil buffer)
|
|
options.outputSplitLighting = true;
|
|
|
|
m_LightLoop.RenderDeferredLighting(hdCamera, cmd, m_DebugDisplaySettings, colorRTs, m_CameraDepthStencilBufferRT, depthTexture, m_DeferredShadowBuffer, options);
|
|
}
|
|
|
|
// Output combined lighting for all the other materials.
|
|
options.outputSplitLighting = false;
|
|
|
|
m_LightLoop.RenderDeferredLighting(hdCamera, cmd, m_DebugDisplaySettings, colorRTs, m_CameraDepthStencilBufferRT, depthTexture, m_DeferredShadowBuffer, options);
|
|
}
|
|
|
|
// Combines specular lighting and diffuse lighting with subsurface scattering.
|
|
void SubsurfaceScatteringPass(HDCamera hdCamera, CommandBuffer cmd, SubsurfaceScatteringSettings sssParameters)
|
|
{
|
|
// Currently, forward-rendered objects do not output split lighting required for the SSS pass.
|
|
if (!m_DebugDisplaySettings.renderingDebugSettings.enableSSSAndTransmission || m_Asset.renderingSettings.ShouldUseForwardRenderingOnly())
|
|
return;
|
|
|
|
using (new Utilities.ProfilingSample("Subsurface Scattering", cmd))
|
|
{
|
|
if (sssSettings.useDisneySSS)
|
|
{
|
|
hdCamera.SetupComputeShader(m_SubsurfaceScatteringCS, cmd);
|
|
|
|
cmd.SetComputeIntParam( m_SubsurfaceScatteringCS, HDShaderIDs._TexturingModeFlags, sssParameters.texturingModeFlags);
|
|
cmd.SetComputeVectorArrayParam(m_SubsurfaceScatteringCS, HDShaderIDs._WorldScales, sssParameters.worldScales);
|
|
cmd.SetComputeVectorArrayParam(m_SubsurfaceScatteringCS, HDShaderIDs._FilterKernels, sssParameters.filterKernels);
|
|
cmd.SetComputeVectorArrayParam(m_SubsurfaceScatteringCS, HDShaderIDs._ShapeParams, sssParameters.shapeParams);
|
|
|
|
cmd.SetComputeTextureParam(m_SubsurfaceScatteringCS, m_SubsurfaceScatteringKernel, HDShaderIDs._GBufferTexture0, m_gbufferManager.GetGBuffers()[0]);
|
|
cmd.SetComputeTextureParam(m_SubsurfaceScatteringCS, m_SubsurfaceScatteringKernel, HDShaderIDs._GBufferTexture1, m_gbufferManager.GetGBuffers()[1]);
|
|
cmd.SetComputeTextureParam(m_SubsurfaceScatteringCS, m_SubsurfaceScatteringKernel, HDShaderIDs._GBufferTexture2, m_gbufferManager.GetGBuffers()[2]);
|
|
cmd.SetComputeTextureParam(m_SubsurfaceScatteringCS, m_SubsurfaceScatteringKernel, HDShaderIDs._GBufferTexture3, m_gbufferManager.GetGBuffers()[3]);
|
|
cmd.SetComputeTextureParam(m_SubsurfaceScatteringCS, m_SubsurfaceScatteringKernel, HDShaderIDs._DepthTexture, GetDepthTexture());
|
|
cmd.SetComputeTextureParam(m_SubsurfaceScatteringCS, m_SubsurfaceScatteringKernel, HDShaderIDs._StencilTexture, GetStencilTexture());
|
|
cmd.SetComputeTextureParam(m_SubsurfaceScatteringCS, m_SubsurfaceScatteringKernel, HDShaderIDs._HTile, GetHTile());
|
|
cmd.SetComputeTextureParam(m_SubsurfaceScatteringCS, m_SubsurfaceScatteringKernel, HDShaderIDs._IrradianceSource, m_CameraSssDiffuseLightingBufferRT);
|
|
cmd.SetComputeTextureParam(m_SubsurfaceScatteringCS, m_SubsurfaceScatteringKernel, HDShaderIDs._CameraColorTexture, m_CameraColorBufferRT);
|
|
cmd.SetComputeTextureParam(m_SubsurfaceScatteringCS, m_SubsurfaceScatteringKernel, HDShaderIDs._CameraFilteringBuffer, m_CameraFilteringBufferRT);
|
|
|
|
// Perform the SSS filtering pass which fills 'm_CameraFilteringBufferRT'.
|
|
cmd.DispatchCompute(m_SubsurfaceScatteringCS, m_SubsurfaceScatteringKernel, ((int)hdCamera.screenSize.x + 15) / 16, ((int)hdCamera.screenSize.y + 15) / 16, 1);
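// The (size + 15) / 16 rounding implies the kernel is declared with 16x16 thread groups,
// so we dispatch enough groups to cover every screen pixel.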
|
|
|
|
cmd.SetGlobalTexture(HDShaderIDs._IrradianceSource, m_CameraFilteringBufferRT); // Cannot set a RT on a material
|
|
|
|
// Combine diffuse and specular lighting into 'm_CameraColorBufferRT'.
|
|
Utilities.DrawFullScreen(cmd, m_CombineLightingPass, m_CameraColorBufferRT, m_CameraDepthStencilBufferRT);
|
|
}
|
|
else
|
|
{
|
|
cmd.SetGlobalTexture(HDShaderIDs._IrradianceSource, m_CameraSssDiffuseLightingBufferRT); // Cannot set a RT on a material
|
|
m_SssVerticalFilterPass.SetVectorArray(HDShaderIDs._WorldScales, sssParameters.worldScales);
|
|
m_SssVerticalFilterPass.SetVectorArray(HDShaderIDs._FilterKernelsBasic, sssParameters.filterKernelsBasic);
|
|
m_SssVerticalFilterPass.SetVectorArray(HDShaderIDs._HalfRcpWeightedVariances, sssParameters.halfRcpWeightedVariances);
|
|
// Perform the vertical SSS filtering pass which fills 'm_CameraFilteringBufferRT'.
|
|
Utilities.DrawFullScreen(cmd, m_SssVerticalFilterPass, m_CameraFilteringBufferRT, m_CameraDepthStencilBufferRT);
|
|
|
|
cmd.SetGlobalTexture(HDShaderIDs._IrradianceSource, m_CameraFilteringBufferRT); // Cannot set a RT on a material
|
|
m_SssHorizontalFilterAndCombinePass.SetVectorArray(HDShaderIDs._WorldScales, sssParameters.worldScales);
|
|
m_SssHorizontalFilterAndCombinePass.SetVectorArray(HDShaderIDs._FilterKernelsBasic, sssParameters.filterKernelsBasic);
|
|
m_SssHorizontalFilterAndCombinePass.SetVectorArray(HDShaderIDs._HalfRcpWeightedVariances, sssParameters.halfRcpWeightedVariances);
|
|
// Perform the horizontal SSS filtering pass, and combine diffuse and specular lighting into 'm_CameraColorBufferRT'.
|
|
Utilities.DrawFullScreen(cmd, m_SssHorizontalFilterAndCombinePass, m_CameraColorBufferRT, m_CameraDepthStencilBufferRT);
|
|
}
|
|
}
|
|
}
|
|
|
|
void UpdateSkyEnvironment(HDCamera hdCamera, CommandBuffer cmd)
|
|
{
|
|
m_SkyManager.UpdateEnvironment(hdCamera, m_LightLoop.GetCurrentSunLight(), cmd);
|
|
}
|
|
|
|
void RenderSky(HDCamera hdCamera, CommandBuffer cmd)
|
|
{
|
|
m_SkyManager.RenderSky(hdCamera, m_LightLoop.GetCurrentSunLight(), m_CameraColorBufferRT, m_CameraDepthStencilBufferRT, cmd);
|
|
}
|
|
|
|
public Texture2D ExportSkyToTexture()
|
|
{
|
|
return m_SkyManager.ExportSkyToTexture();
|
|
}
|
|
|
|
void RenderLightingDebug(HDCamera camera, CommandBuffer cmd, RenderTargetIdentifier colorBuffer, DebugDisplaySettings debugDisplaySettings)
|
|
{
|
|
m_LightLoop.RenderLightingDebug(camera, cmd, colorBuffer, debugDisplaySettings);
|
|
}
|
|
|
|
void RenderForward(CullResults cullResults, Camera camera, ScriptableRenderContext renderContext, CommandBuffer cmd, bool renderOpaque)
|
|
{
|
|
if (!m_Asset.renderingSettings.ShouldUseForwardRenderingOnly() && renderOpaque)
|
|
return;
|
|
|
|
string passName = m_DebugDisplaySettings.IsDebugDisplayEnabled() ? "ForwardDisplayDebug" : "Forward";
|
|
|
|
using (new Utilities.ProfilingSample(passName, cmd))
|
|
{
|
|
Utilities.SetRenderTarget(cmd, m_CameraColorBufferRT, m_CameraDepthStencilBufferRT);
|
|
|
|
m_LightLoop.RenderForward(camera, cmd, renderOpaque);
|
|
|
|
if (renderOpaque)
|
|
{
|
|
RenderOpaqueRenderList(cullResults, camera, renderContext, cmd, passName, Utilities.kRendererConfigurationBakedLighting);
|
|
}
|
|
else
|
|
{
|
|
RenderTransparentRenderList(cullResults, camera, renderContext, cmd, passName, Utilities.kRendererConfigurationBakedLighting);
|
|
}
|
|
}
|
|
}
|
|
|
|
// Render materials that are forward opaque only (like eyes); this includes unlit materials
|
|
void RenderForwardOnlyOpaque(CullResults cullResults, Camera camera, ScriptableRenderContext renderContext, CommandBuffer cmd)
|
|
{
|
|
string passName = m_DebugDisplaySettings.IsDebugDisplayEnabled() ? "ForwardOnlyOpaqueDisplayDebug" : "ForwardOnlyOpaque";
|
|
|
|
using (new Utilities.ProfilingSample(passName, cmd))
|
|
{
|
|
Utilities.SetRenderTarget(cmd, m_CameraColorBufferRT, m_CameraDepthStencilBufferRT);
|
|
|
|
m_LightLoop.RenderForward(camera, cmd, true);
|
|
|
|
RenderOpaqueRenderList(cullResults, camera, renderContext, cmd, passName, Utilities.kRendererConfigurationBakedLighting);
|
|
}
|
|
}
|
|
|
|
void RenderVelocity(CullResults cullResults, HDCamera hdcam, ScriptableRenderContext renderContext, CommandBuffer cmd)
|
|
{
|
|
using (new Utilities.ProfilingSample("Velocity", cmd))
|
|
{
|
|
// If opaque velocity has been rendered during the GBuffer pass, there is no need to render it here
|
|
if ((ShaderConfig.s_VelocityInGbuffer == 1) || m_Asset.renderingSettings.ShouldUseForwardRenderingOnly())
|
|
return;
|
|
|
|
// These flags are still required in SRP or the engine won't compute previous model matrices...
|
|
// If the flag hasn't been set yet on this camera, motion vectors will skip a frame.
|
|
hdcam.camera.depthTextureMode |= DepthTextureMode.MotionVectors | DepthTextureMode.Depth;
|
|
|
|
int w = (int)hdcam.screenSize.x;
|
|
int h = (int)hdcam.screenSize.y;
|
|
|
|
m_CameraMotionVectorsMaterial.SetVector(HDShaderIDs._CameraPosDiff, hdcam.prevCameraPos - hdcam.cameraPos);
|
|
|
|
cmd.GetTemporaryRT(m_VelocityBuffer, w, h, 0, FilterMode.Point, Builtin.GetVelocityBufferFormat(), Builtin.GetVelocityBufferReadWrite());
|
|
Utilities.DrawFullScreen(cmd, m_CameraMotionVectorsMaterial, m_VelocityBufferRT, null, 0);
|
|
cmd.SetRenderTarget(m_VelocityBufferRT, m_CameraDepthStencilBufferRT);
|
|
|
|
RenderOpaqueRenderList(cullResults, hdcam.camera, renderContext, cmd, "MotionVectors", RendererConfiguration.PerObjectMotionVectors);
|
|
|
|
PushFullScreenDebugTexture(cmd, m_VelocityBuffer, hdcam.camera, renderContext, FullScreenDebugMode.MotionVectors);
|
|
}
|
|
}
|
|
|
|
void RenderDistortion(CullResults cullResults, Camera camera, ScriptableRenderContext renderContext, CommandBuffer cmd)
|
|
{
|
|
if (!m_DebugDisplaySettings.renderingDebugSettings.enableDistortion)
|
|
return;
|
|
|
|
using (new Utilities.ProfilingSample("Distortion", cmd))
|
|
{
|
|
int w = camera.pixelWidth;
|
|
int h = camera.pixelHeight;
|
|
|
|
cmd.GetTemporaryRT(m_DistortionBuffer, w, h, 0, FilterMode.Point, Builtin.GetDistortionBufferFormat(), Builtin.GetDistortionBufferReadWrite());
|
|
cmd.SetRenderTarget(m_DistortionBufferRT, m_CameraDepthStencilBufferRT);
|
|
cmd.ClearRenderTarget(false, true, Color.black); // TODO: can we avoid this clear for performance ?
|
|
|
|
// Only transparent objects can render distortion vectors
|
|
RenderTransparentRenderList(cullResults, camera, renderContext, cmd, "DistortionVectors");
|
|
}
|
|
}
|
|
|
|
        void RenderPostProcesses(Camera camera, CommandBuffer cmd, PostProcessLayer layer)
        {
            using (new Utilities.ProfilingSample("Post-processing", cmd))
            {
                if (Utilities.IsPostProcessingActive(layer))
                {
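                    // Bind the depth and motion-vector textures globally: post-processing effects that sample them
                    // (e.g. temporal anti-aliasing or motion blur) expect these shader IDs to be set before rendering.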
                    cmd.SetGlobalTexture(HDShaderIDs._CameraDepthTexture, GetDepthTexture());
                    cmd.SetGlobalTexture(HDShaderIDs._CameraMotionVectorsTexture, m_VelocityBufferRT);

                    var context = m_PostProcessContext;
                    context.Reset();
                    context.source = m_CameraColorBufferRT;
                    context.destination = BuiltinRenderTextureType.CameraTarget;
                    context.command = cmd;
                    context.camera = camera;
                    context.sourceFormat = RenderTextureFormat.ARGBHalf;
                    context.flip = true;

                    layer.Render(context);
                }
                else
                {
                    cmd.Blit(m_CameraColorBufferRT, BuiltinRenderTextureType.CameraTarget);
                }
            }
        }

        public void ApplyDebugDisplaySettings()
        {
            m_ShadowSettings.enabled = m_DebugDisplaySettings.lightingDebugSettings.enableShadows;

            LightingDebugSettings lightingDebugSettings = m_DebugDisplaySettings.lightingDebugSettings;
            Vector4 debugAlbedo = new Vector4(lightingDebugSettings.debugLightingAlbedo.r, lightingDebugSettings.debugLightingAlbedo.g, lightingDebugSettings.debugLightingAlbedo.b, 0.0f);
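            // Pack the smoothness override into a vector: x = 1 when the override is enabled, y = the smoothness value to use.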
            Vector4 debugSmoothness = new Vector4(lightingDebugSettings.overrideSmoothness ? 1.0f : 0.0f, lightingDebugSettings.overrideSmoothnessValue, 0.0f, 0.0f);

            Shader.SetGlobalInt(HDShaderIDs._DebugViewMaterial, (int)m_DebugDisplaySettings.GetDebugMaterialIndex());
            Shader.SetGlobalInt(HDShaderIDs._DebugLightingMode, (int)m_DebugDisplaySettings.GetDebugLightingMode());
            Shader.SetGlobalVector(HDShaderIDs._DebugLightingAlbedo, debugAlbedo);
            Shader.SetGlobalVector(HDShaderIDs._DebugLightingSmoothness, debugSmoothness);
        }

        public void PushFullScreenDebugTexture(CommandBuffer cb, RenderTargetIdentifier textureID, Camera camera, ScriptableRenderContext renderContext, FullScreenDebugMode debugMode)
        {
            if (debugMode == m_DebugDisplaySettings.fullScreenDebugMode)
            {
                // We need this flag because otherwise, if no fullscreen debug texture was pushed,
                // the temporary RT would not exist when we render the result in RenderDebug.
                m_FullScreenDebugPushed = true;
                cb.GetTemporaryRT(m_DebugFullScreenTempRT, camera.pixelWidth, camera.pixelHeight, 0, FilterMode.Point, RenderTextureFormat.ARGBHalf, RenderTextureReadWrite.Linear);
                cb.Blit(textureID, m_DebugFullScreenTempRT);
            }
        }

        public void PushFullScreenDebugTexture(CommandBuffer cb, int textureID, Camera camera, ScriptableRenderContext renderContext, FullScreenDebugMode debugMode)
        {
            PushFullScreenDebugTexture(cb, new RenderTargetIdentifier(textureID), camera, renderContext, debugMode);
        }
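
        // Typical usage of the two overloads above: a pass that wants its output to be inspectable pushes it right
        // after producing it, e.g.
        //     PushFullScreenDebugTexture(cmd, m_VelocityBuffer, hdcam.camera, renderContext, FullScreenDebugMode.MotionVectors);
        // (see RenderVelocity above); RenderDebug then blits the captured texture when that debug mode is active.
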
        void RenderDebug(HDCamera camera, CommandBuffer cmd)
        {
            // We don't want any overlay for these kinds of rendering.
            if (camera.camera.cameraType == CameraType.Reflection || camera.camera.cameraType == CameraType.Preview)
                return;

            using (new Utilities.ProfilingSample("Render Debug", cmd))
            {
                // Make sure the depth buffer is bound: we need it to write depth at the near plane for the overlays,
                // otherwise the editor grid ends up visible in them.
                Utilities.SetRenderTarget(cmd, BuiltinRenderTextureType.CameraTarget, m_CameraDepthStencilBufferRT);

                // First, render the full screen debug texture.
                if (m_DebugDisplaySettings.fullScreenDebugMode != FullScreenDebugMode.None && m_FullScreenDebugPushed)
                {
                    m_FullScreenDebugPushed = false;
                    cmd.SetGlobalTexture(HDShaderIDs._DebugFullScreenTexture, m_DebugFullScreenTempRT);
                    m_DebugFullScreen.SetFloat(HDShaderIDs._FullScreenDebugMode, (float)m_DebugDisplaySettings.fullScreenDebugMode);
                    Utilities.DrawFullScreen(cmd, m_DebugFullScreen, (RenderTargetIdentifier)BuiltinRenderTextureType.CameraTarget);
                }

                // Then the overlays.
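                // Overlays are laid out starting from the top-left corner; each one occupies a square of overlaySize
                // pixels and Utilities.NextOverlayCoord advances the cursor to the next slot, wrapping on the screen width.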
                float x = 0;
                float overlayRatio = m_DebugDisplaySettings.debugOverlayRatio;
                float overlaySize = Math.Min(camera.camera.pixelHeight, camera.camera.pixelWidth) * overlayRatio;
                float y = camera.camera.pixelHeight - overlaySize;

                LightingDebugSettings lightingDebug = m_DebugDisplaySettings.lightingDebugSettings;

                if (lightingDebug.displaySkyReflection)
                {
                    Texture skyReflection = m_SkyManager.skyReflection;
                    m_SharedPropertyBlock.SetTexture(HDShaderIDs._InputCubemap, skyReflection);
                    m_SharedPropertyBlock.SetFloat(HDShaderIDs._Mipmap, lightingDebug.skyReflectionMipmap);
                    cmd.SetViewport(new Rect(x, y, overlaySize, overlaySize));
                    cmd.DrawProcedural(Matrix4x4.identity, m_DebugDisplayLatlong, 0, MeshTopology.Triangles, 3, 1, m_SharedPropertyBlock);
                    Utilities.NextOverlayCoord(ref x, ref y, overlaySize, overlaySize, camera.camera.pixelWidth);
                }

                m_LightLoop.RenderDebugOverlay(camera.camera, cmd, m_DebugDisplaySettings, ref x, ref y, overlaySize, camera.camera.pixelWidth);
            }
        }

        void InitAndClearBuffer(HDCamera camera, CommandBuffer cmd)
        {
            using (new Utilities.ProfilingSample("InitAndClearBuffer", cmd))
            {
                // We clear only the depth buffer; there is no need to clear the various color buffers as we overwrite them.

                // Clear depth/stencil and init buffers
                using (new Utilities.ProfilingSample("InitGBuffers and clear Depth/Stencil", cmd))
                {
                    // Init buffers
                    // With the scriptable render loop we must allocate the depth and color buffers ourselves
                    // (we must stay independent of the backbuffer for now; hopefully that can be fixed later).
                    // We also manage the HDR format ourselves, allocating fp16 directly here.
                    // With the scriptable render loop, temporary RTs can be allocated in a command buffer; they are not
                    // released by ExecuteCommandBuffer. Temporary surfaces that are not released explicitly are released
                    // automatically at the end of the scriptable render pipeline.
                    int w = camera.camera.pixelWidth;
                    int h = camera.camera.pixelHeight;

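                    // The trailing 'true' requests random-write (UAV) access on these targets, presumably because
                    // compute-based passes write into them; the SSS buffers use the more compact RGB111110Float format.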
                    cmd.GetTemporaryRT(m_CameraColorBuffer, w, h, 0, FilterMode.Point, RenderTextureFormat.ARGBHalf, RenderTextureReadWrite.Linear, 1, true); // Enable UAV
                    cmd.GetTemporaryRT(m_CameraSssDiffuseLightingBuffer, w, h, 0, FilterMode.Point, RenderTextureFormat.RGB111110Float, RenderTextureReadWrite.Linear, 1, true); // Enable UAV
                    cmd.GetTemporaryRT(m_CameraFilteringBuffer, w, h, 0, FilterMode.Point, RenderTextureFormat.RGB111110Float, RenderTextureReadWrite.Linear, 1, true); // Enable UAV

                    if (!m_Asset.renderingSettings.ShouldUseForwardRenderingOnly())
                    {
                        m_gbufferManager.InitGBuffers(w, h, cmd);
                    }

                    Utilities.SetRenderTarget(cmd, m_CameraColorBufferRT, m_CameraDepthStencilBufferRT, ClearFlag.ClearDepth);
                }

                // Clear the diffuse SSS lighting target
                using (new Utilities.ProfilingSample("Clear SSS diffuse target", cmd))
                {
                    Utilities.SetRenderTarget(cmd, m_CameraSssDiffuseLightingBufferRT, ClearFlag.ClearColor, Color.black);
                }

                // Old SSS Model >>>
                if (!sssSettings.useDisneySSS)
                {
                    // Clear the SSS filtering target
                    using (new Utilities.ProfilingSample("Clear SSS filtering target", cmd))
                    {
                        Utilities.SetRenderTarget(cmd, m_CameraFilteringBuffer, ClearFlag.ClearColor, Color.black);
                    }
                }
                // <<< Old SSS Model

                if (NeedStencilBufferCopy())
                {
                    using (new Utilities.ProfilingSample("Clear stencil texture", cmd))
                    {
                        Utilities.SetRenderTarget(cmd, m_CameraStencilBufferCopyRT, ClearFlag.ClearColor, Color.black);
                    }
                }

                if (NeedHTileCopy())
                {
                    using (new Utilities.ProfilingSample("Clear HTile", cmd))
                    {
                        Utilities.SetRenderTarget(cmd, m_HTileRT, ClearFlag.ClearColor, Color.black);
                    }
                }

                if (m_VolumetricLightingEnabled)
                {
                    ClearVolumetricLightingBuffers(cmd, camera.isFirstFrame);
                }

                // TEMP: as we are still in development and do not yet have all the setup passes, we still clear the
                // color of the emissive buffer and the GBuffer, but this will be removed later.

                // Clear the HDR target
                using (new Utilities.ProfilingSample("Clear HDR target", cmd))
                {
                    Utilities.SetRenderTarget(cmd, m_CameraColorBufferRT, m_CameraDepthStencilBufferRT, ClearFlag.ClearColor, Color.black);
                }

                // Clear GBuffers
                if (!m_Asset.renderingSettings.ShouldUseForwardRenderingOnly())
                {
                    using (new Utilities.ProfilingSample("Clear GBuffer", cmd))
                    {
                        Utilities.SetRenderTarget(cmd, m_gbufferManager.GetGBuffers(), m_CameraDepthStencilBufferRT, ClearFlag.ClearColor, Color.black);
                    }
                }
                // END TEMP
            }
        }
    }
}