So if Nvidia wants to, they could enable CUDA-accelerated 10-bit VP9 decoding/encoding on the GP102 without needing the ASIC at all. It's purely a business and resource-control decision: how much sense it makes given the shader count available for certain targets, the usage scenarios of their customers, and what the competition is doing.

Their HEVC encoder already performs better than their AVC result-wise; HEVC is overall often sharper, especially vs Nvidia AVC with B-frames. But the metrics don't agree with me: neither PSNR nor SSIM does, they would rate the Nvidia AVC + B-frames results as overall better. It's fully up to Nvidia's business decisions what they enable where, and what makes sense for which platform and predicted target audience's system resources (nowadays gathered by Nvidia via direct telemetry data from their target customers).

The encoder itself is still updated via CUDA additions, much like OpenCL can be used for x264's lookahead. Nvidia uses CUDA for their lookahead, and additionally for their AQ and 2-pass, and most probably the coming weighted B-frames as well :) I'm pretty sure Nvidia also tests their future ASIC optimizations inside CUDA and has a pretty nice, efficient CUDA(GPU)->ASIC conversion workflow :)

Nvidia did this in the past on many levels. I remember motion-adaptive deinterlacing being enabled only on one specific piece of hardware, and then by accident in a beta driver for every shader count; it was dauntingly slow and unusable on the lower-shader cards though, so economically it made no sense to keep it, plus the possible support overhead following it. Later it became the default, as the shader power rose on the more consumer-targeted cards.

The more CUDA power, the more tasks you can route there, like the 10-bit decoding overhead. The ASIC is mostly a concern for power efficiency; it makes the most sense for underpowered shader cards or mobile, depending on the target audience.
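On the PSNR point above: a minimal sketch (my own toy helper, nothing from Nvidia's or x264's tooling) of why a pure pixel-error metric can disagree with perceived sharpness. PSNR only sees mean squared error, so two distortions with the same total squared error score identically no matter how differently they look, and faint uniform softening can score the same as a localized artifact:

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer pixel-wise."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((100, 100))

# Distortion A: a tiny error spread over every pixel (reads as faint uniform haze).
a = ref + 1.0

# Distortion B: the same total squared error concentrated in 1% of the pixels
# (reads as one obvious blocky artifact).
b = ref.copy()
b[:10, :10] = 10.0  # 100 pixels * 10^2 = same sum of squared error as A

print(psnr(ref, a))  # ~48.13 dB
print(psnr(ref, b))  # ~48.13 dB: identical score, very different look
```

That blindness to *where* and *how* the error sits is why a visibly sharper HEVC result can still lose to AVC + B-frames on PSNR/SSIM scoreboards.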