// This function is a coarse approximation of computing fresnel0 for a top interface other than air (here a clear coat of IOR 1.5) when we only have fresnel0 for the air interface
// This function is equivalent to IorToFresnel0(Fresnel0ToIor(fresnel0), 1.5)
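// For reference, a minimal C# sketch (an illustration, not the shader code) of the exact round trip
// that the approximation above targets, using the standard normal-incidence Fresnel relations.
// The helper names mirror IorToFresnel0/Fresnel0ToIor from the comment; uses UnityEngine.Mathf.
static float Fresnel0ToIor(float fresnel0)
{
    // fresnel0 = ((n - 1) / (n + 1))^2  =>  n = (1 + sqrt(f0)) / (1 - sqrt(f0))
    float sqrtF0 = Mathf.Sqrt(fresnel0);
    return (1.0f + sqrtF0) / (1.0f - sqrtF0);
}

static float IorToFresnel0(float transmittedIor, float incidentIor)
{
    // Normal-incidence reflectance between two media.
    float t = (transmittedIor - incidentIor) / (transmittedIor + incidentIor);
    return t * t;
}

// Exact value the coarse approximation stands in for: fresnel0 seen under a clear coat of IOR 1.5.
static float F0AirToF0ClearCoat15(float fresnel0)
{
    return IorToFresnel0(Fresnel0ToIor(fresnel0), 1.5f);
}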
stringmsg="Platform "+SystemInfo.operatingSystem+" with device "+SystemInfo.graphicsDeviceType.ToString()+" is not supported, no rendering will occur";
DisplayUnsupportedMessage(msg);
}
public static void DisplayUnsupportedXRMessage()
{
stringmsg="AR/VR devices are not supported, no rendering will occur";
DisplayUnsupportedMessage(msg);
}
// Returns 'true' if "Animated Materials" are enabled for the view associated with the given camera.
All notable changes to this package will be documented in this file.

The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## [2018.2 undecided]
### Improvements
- Add shader variant stripping when building a player to save shader compile time.
- Disable per-object culling that was executed in C++ for HD but never used (optimization)
- Enable texture streaming debugging (was not working before 2018.2)
## [2018.1 undecided]
### Improvements
- Configure the volumetric lighting code path to be on by default
- Trigger a build exception when trying to build an unsupported platform
- Introduce the VolumetricLightingController component, which can (and should) be placed on the camera, and allows one to control the near and the far plane of the V-Buffer (volumetric "froxel" buffer) along with the depth distribution (from logarithmic to linear)
### Changed, Removals and deprecations
- Remove Resource folder of PreIntegratedFGD and add the resource to RenderPipeline Asset
- Default number of planar reflections changed from 4 to 2
### Bug fixes
- Fix ConvertPhysicalLightIntensityToLightIntensity() function used when creating light from script to match HDLightEditor behavior
- Fix numerical issues with the default value of mean free path of volumetric fog
- Fix the bug preventing decals from coexisting with density volumes
## [2018.1.0f2]
### Improvements
- Screen Space Refraction projection model (Proxy raycasting, HiZ raymarching)
- Screen Space Refraction settings as volume component
- Added buffered frame history per camera
- Port Global Density Volumes to the Interpolation Volume System.
- Optimize ImportanceSampleLambert() to not require the tangent frame.
- Generalize SampleVBuffer() to handle different sampling and reconstruction methods.
- Improve the quality of volumetric lighting reprojection.
- Optimize Morton Order code in the Subsurface Scattering pass.
- Planar Reflection Probes support roughness (Gaussian convolution of the captured probe)
- Use an atlas instead of a texture array for clustered transparent decals
- Add a debug view to visualize the decal atlas
- Only store decal textures to atlas if decal is visible, debounce out of memory decal atlas warning.
- Add manipulator gizmo on decal to improve authoring workflow
- Add a minimal StackLit material (work in progress, this version can be used as a template to add new materials)
## [0.1.6] - 2018-xx-yy
### Changed, Removals and deprecations
- EnableShadowMask in FrameSettings (but shadowMaskSupport is still disabled by default)
- Forced Planar Probe update modes to (Realtime, Every Update, Mirror Camera)
- Removed Planar Probe mirror plane position and normal fields in inspector, always display mirror plane and normal gizmos
- Screen Space Refraction proxy model uses the proxy of the first environment light (Reflection probe/Planar probe) or the sky
- Moved RTHandle static methods to RTHandles
- Renamed RTHandle to RTHandleSystem.RTHandle
- Move code for PreIntegratedFGD (Lit.shader) into its dedicated folder to be shared with other materials
- Move code for LTCArea (Lit.shader) into its dedicated folder to be shared with other materials
### Bug fixes
- Fix fog flags in the scene view so they are now taken into account
- Fix sky in preview windows disappearing after loading a new level
- Fix numerical issues in IntersectRayAABB().
- Fix alpha blending of volumetric lighting with transparent objects.
- Fix the near plane of the V-Buffer causing out-of-bounds look-ups in the clustered data structure.
- Depth and color pyramid are properly computed and sampled when the camera renders inside a viewport of a RTHandle.
- Fix decal atlas debug view to work correctly when shadow atlas view is also enabled
new DebugUI.Value { displayName = string.Empty, getter = () => "Click in the scene view, or press 'End' key to select the pixel under the mouse in the scene view to debug." },
debugSettingsContainer.children.Insert(1, new DebugUI.Value { displayName = string.Empty, getter = () => "Press PageUp/PageDown to Increase/Decrease the HiZ step." });
new DebugUI.Value { displayName = "Start Linear Depth", getter = () => screenSpaceTracingDebugData.loopStartLinearDepth },
new DebugUI.Value { displayName = "Ray Direction SS", getter = () => new Vector2(screenSpaceTracingDebugData.loopRayDirectionSS.x, screenSpaceTracingDebugData.loopRayDirectionSS.y) },
list.Add(new DebugUI.FloatField { displayName = "Shadow Range Max Value", getter = () => lightingDebugSettings.shadowMaxValue, setter = value => lightingDebugSettings.shadowMaxValue = value });
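// A minimal sketch (not the actual HDRP registration code) of how widgets like the ones above are
// typically grouped and attached to a panel via the core DebugManager; the panel name and the exact
// container layout here are assumptions for illustration.
var container = new DebugUI.Container();
container.children.Add(new DebugUI.Value { displayName = "Start Linear Depth", getter = () => screenSpaceTracingDebugData.loopStartLinearDepth });
container.children.Add(new DebugUI.FloatField { displayName = "Shadow Range Max Value", getter = () => lightingDebugSettings.shadowMaxValue, setter = value => lightingDebugSettings.shadowMaxValue = value });
DebugManager.instance.GetPanel("Hypothetical Debug Panel", true).children.Add(container);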
normalToWorldBatch[instanceCount].m03 = fadeFactor * m_Blend; // 3x3 rotation matrix, so the bottom row and last column can be used for other data to save space
normalToWorldBatch[instanceCount].m13 = m_DiffuseTexIndex; // texture atlas indices needed for clustered
if (!m_AllocationSuccess && m_PrevAllocationSuccess) // still failed to allocate, the decal atlas size needs to increase; debounce so that we don't spam the console with warnings
{
Debug.LogWarning("Decal texture atlas out of space, decals on transparent geometry might not render correctly, atlas size can be changed in HDRenderPipelineAsset");
public readonly GUIContent shape = new GUIContent("Type", "Specifies the current type of light. Possible types are Directional, Spot, Point, Rectangle and Line lights.");
public readonly GUIContent[] shapeNames;
public readonly GUIContent enableSpotReflector = new GUIContent("Enable spot reflector", "When enabled, the spot light is simulated with a reflector (the intensity of the light is more focused with a narrower angle); otherwise, light outside of the cone is simply absorbed (the intensity is constant whatever the size of the cone).");
// Additional shadow data
public readonly GUIContent shadowResolution = new GUIContent("Resolution", "Controls the rendered resolution of the shadow maps. A higher resolution will increase the fidelity of shadows at the cost of GPU performance and memory usage.");
// boundsOnClick implies that the bounds get refreshed only when the handle is clicked on again, but we need the actual center and scale, so we set them before the handle is drawn every frame
EditorGUILayout.PropertyField(d.renderPipelineResources, _.GetContent("Render Pipeline Resources|Set of resources that need to be loaded when building a standalone player"));
// 'm_CameraColorBuffer' does not contain diffuse lighting of SSS materials until the SSS pass. It is stored within 'm_CameraSssDiffuseLightingBuffer'.
m_CameraStencilBufferCopy = RTHandles.Alloc(Vector2.one, depthBufferBits: DepthBits.None, colorFormat: RenderTextureFormat.R8, sRGB: false, filterMode: FilterMode.Point, enableMSAA: true, name: "CameraStencilCopy"); // DXGI_FORMAT_R8_UINT is not supported by Unity
Debug.LogError("Platform "+SystemInfo.operatingSystem+" with device "+SystemInfo.graphicsDeviceType.ToString()+" is not supported, no rendering will occur");
sv.ShowNotification(newGUIContent("Platform "+SystemInfo.operatingSystem+" with device "+SystemInfo.graphicsDeviceType.ToString()+" is not supported, no rendering will occur"));
#endif
return false;
}
// VR is not supported currently in HD
if (XRSettings.isDeviceActive)
{
CoreUtils.DisplayUnsupportedXRMessage();
returnfalse;
}
bool IsSupportedPlatform()
{
// Note: If you add a new platform in this function, think about adding support for it when building the player in HDRPCustomBuildProcessor.cs
if (!SystemInfo.supportsComputeShaders)
    return false;
}
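// Hedged sketch of a call site: how the support check above and the "unsupported" message helpers
// shown earlier might be combined at pipeline creation time. The exact call site and the CoreUtils
// method name used here are assumptions based on the snippets above.
if (!IsSupportedPlatform())
{
    CoreUtils.DisplayUnsupportedMessage("Platform " + SystemInfo.operatingSystem + " with device " + SystemInfo.graphicsDeviceType + " is not supported, no rendering will occur");
    return false;
}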
// Warning: (resolutionChanged == false) if you open a new Editor tab of the same size!
DecalSystem.instance.UpdateCachedMaterialData(); // textures, alpha or fade distances could've changed
DecalSystem.instance.CreateDrawData(); // prepare data is separate from draw
DecalSystem.instance.UpdateTextureAtlas(cmd); // as this is only used for the transparent pass, it would've been nice not to have to do this when no transparent renderers are visible; needs to happen after CreateDrawData
// TODO: Try to arrange code so we can trigger this call earlier and use async compute here to run sky convolution during other passes (once we move convolution shader to compute).
// This call overwrites camera properties passed to the shader system.
// TODO: check if the statement below still applies
renderContext.SetupCameraProperties(camera, m_FrameSettings.enableStereo); // Need to recall SetupCameraProperties after RenderShadows as it modifies our view/proj matrix
// Overwrite camera properties set during the shadow pass with the original camera properties.
// During rendering we use our own depth buffer instead of the one provided by the scene view (because we need to be able to control its life cycle)
// In order for scene view gizmos/icons etc. to be depth tested correctly, we need to copy the content of our own depth buffer into the scene view depth buffer.
// One subtlety here is that our buffer can be bigger than the camera one, so we need to copy only the corresponding portion
// (it's handled automatically by the copy shader because it uses a load in pixel coordinates based on the target).
// This copy will also have the effect of re-binding this depth buffer correctly for subsequent editor rendering.
// NOTE: This needs to be done before the call to RenderDebug because debug overlays need to update the depth for the scene view as well.
// Need to account for the fact that the Gaussian pyramid is actually rendered inside the camera viewport in a square texture, so we multiply by the PyramidToScreen scale
if((m_FrameSettings.enableDBuffer)&&(DecalSystem.m_DecalsVisibleThisFrame>0))// enable d-buffer flag value is being interpreted more like enable decals in general now that we have clustered
// The DebugNeedsExposure test allows us to set a neutral value if exposure is not needed. This way we don't need to make various tests inside shaders but only in this function.
// When we render using a camera whose viewport is smaller than the RTHandles reference size (and thus smaller than the RT actual size), we need to set it explicitly (otherwise, native code will set the viewport at the size of the RT)
// For auto-scaled RTs (like for example a half-resolution RT), we need to scale this viewport accordingly.
// For non scaled RTs we just do nothing, the native code will set the viewport at the size of the RT anyway.
// It means that we can end up rendering inside a partial viewport for one of these "camera space" renderings.
// In this case, we need to make sure that when we blit from one such camera texture to another, we only blit the necessary portion corresponding to the camera viewport.
// Here, both source and destination are camera-scaled.
// This particular case is for blitting a camera-scaled texture into a non scaling texture. So we setup the full viewport (implicit in cmd.Blit) but have to scale the input UVs.
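// Hedged sketch of the "scale the input UVs" idea described above, for blitting a camera-scaled
// RTHandle into a non-scaling target: only the sub-rectangle actually rendered by the camera should
// be sampled. The names used here (hdCamera, rtHandleReferenceSize, blitMaterial, propertyBlock and
// the '_BlitScaleBias' property) are assumptions for illustration, not the actual HDRP helpers.
Vector2 viewportScale = new Vector2(
    hdCamera.actualWidth / (float)rtHandleReferenceSize.x,
    hdCamera.actualHeight / (float)rtHandleReferenceSize.y);
// Scale (xy) and bias (zw) applied to the full-screen UVs by the blit shader.
propertyBlock.SetVector("_BlitScaleBias", new Vector4(viewportScale.x, viewportScale.y, 0.0f, 0.0f));
cmd.DrawProcedural(Matrix4x4.identity, blitMaterial, 0, MeshTopology.Triangles, 3, 1, propertyBlock);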