Gaussian/depth pyramids now allocate their buffers rounded up to the next power of two, to account for platforms where NPOT RenderTexture mips are not supported yet.
// In the context of HDRP, the internal render targets used during the render loop are shared by all cameras, regardless of camera size.
// This means we can end up rendering into a partial viewport for one of these "camera space" render passes.
// In that case, when we blit from one such camera texture to another, we must make sure to blit only the portion corresponding to the camera viewport.
// This particular case covers blitting a camera-scaled texture into a non-scaling texture: we set up the full viewport (implicit in cmd.Blit) but have to scale the input UVs.
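The scaled-UV blit described above can be sketched as follows. The helper name, parameter names, and the exact scale computation are assumptions for illustration (the real HDRP helper differs); the idea is a full-viewport `cmd.Blit` whose source UVs are scaled down to the camera's sub-rect:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

static class BlitSketch
{
    // Hypothetical sketch: blit only the camera's viewport portion of a
    // camera-scaled render target into a non-scaling destination texture.
    // cameraSize: the camera's actual pixel size this frame.
    // rtSize: the (possibly larger) allocated size of the shared render target.
    public static void BlitCameraTexture(CommandBuffer cmd,
                                         RenderTargetIdentifier source,
                                         RenderTargetIdentifier destination,
                                         Vector2Int cameraSize,
                                         Vector2Int rtSize)
    {
        // Only the cameraSize portion of the source contains valid data, so we
        // scale the input UVs; the full destination viewport is implicit in cmd.Blit.
        var scale = new Vector2((float)cameraSize.x / rtSize.x,
                                (float)cameraSize.y / rtSize.y);
        cmd.Blit(source, destination, scale, Vector2.zero); // scale/offset overload scales the source UVs
    }
}
```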
m_DepthPyramidBuffer = RTHandle.Alloc(size => CalculatePyramidSize(size), filterMode: FilterMode.Trilinear, colorFormat: RenderTextureFormat.RFloat, sRGB: false, useMipMap: true, autoGenerateMips: false, enableRandomWrite: true); // enableRandomWrite is needed because we downsample the first mip with a compute shader.
// Instead of using the screen size directly, we round up to the next power of two because some platforms currently don't support NPOT render textures with mip maps (PS4, for example).
// We then render into a screen-sized viewport.
// Note that even if PS4 supported NPOT mips, the buffers would be padded to the next power of two anyway (TODO: check with other platforms...)
m_DepthPyramidBuffer = RTHandle.Alloc(sizeScale, filterMode: FilterMode.Trilinear, colorFormat: RenderTextureFormat.RFloat, sRGB: false, useMipMap: true, autoGenerateMips: false, enableRandomWrite: true); // enableRandomWrite is needed because we downsample the first mip with a compute shader.
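A minimal sketch of what a `CalculatePyramidSize` helper could look like under the constraint described above. This is an assumption for illustration, not the actual HDRP implementation; the key point is rounding each dimension up to the next power of two so the mip chain is valid on NPOT-limited platforms:

```csharp
using UnityEngine;

static class PyramidSizeSketch
{
    // Hypothetical version of CalculatePyramidSize: round each dimension of the
    // reference size up to the next power of two, since some platforms (e.g. PS4)
    // don't support NPOT render textures with mip maps.
    public static Vector2Int CalculatePyramidSize(Vector2Int size)
    {
        return new Vector2Int(Mathf.NextPowerOfTwo(size.x),
                              Mathf.NextPowerOfTwo(size.y));
    }
    // e.g. a 1920x1080 screen allocates a 2048x2048 pyramid buffer,
    // and rendering then happens inside a screen-sized (1920x1080) viewport.
}
```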