Intel Quick Sync Video is Intel's brand for its dedicated video encoding and decoding hardware core. Quick Sync was introduced with the Sandy Bridge CPU microarchitecture on 9 January 2011 and has been found on the die of Intel CPUs ever since.
The name "Quick Sync" refers to the use case of quickly transcoding ("converting") a video, for example from a DVD or Blu-ray Disc, to a format appropriate for, say, a smartphone. This is critically important in the professional video workplace, where source material may have been shot in any number of video formats, all of which must be brought into a common format (commonly H.264) for inter-cutting.
Like most desktop hardware-accelerated encoders, Quick Sync has been praised for its speed.[5] The eighth annual MPEG-4 AVC/H.264 video codecs comparison showed that Quick Sync was comparable to x264 superfast preset in terms of speed, compression ratio and quality (SSIM);[6] tests were performed on an Intel Core i7 3770 (Ivy Bridge) processor. However, Quick Sync could not be configured to spend more time to achieve higher quality, whereas x264 improved significantly when allowed to use more time using the recommended settings.[6]
A 2012 evaluation by AnandTech showed that Quick Sync on Intel's Ivy Bridge produced similar image quality compared to the NVENC encoder on Nvidia's GTX 680 while performing much better at resolutions lower than 1080p.[7]
Quick Sync was first unveiled at the Intel Developer Forum 2010 (September 13) but, according to Tom's Hardware, had been conceptualized five years before that.[1] The older Clarkdale processors had hardware video decoding support, known as Intel Clear Video, but no hardware encoding support.[5]
The Quick Sync Video SIP core needs to be supported by the device driver. The device driver provides one or more interfaces, for example VDPAU, Video Acceleration API (VA-API) or DXVA for video decoding, and OpenMAX IL or VA API for video encoding. One of these interfaces is then used by end-user software, for example VLC media player or GStreamer, to access the Quick Sync Video hardware and make use of it.
Quick Sync support on Linux is available through both the Intel VAAPI driver (legacy, pre-Broadwell) and the Intel Media Driver (Broadwell and newer), both of which expose VA-API,[25][26] as well as through the Intel Media SDK.
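As an illustration of how an application hands work to these codecs, here is a minimal sketch that builds an ffmpeg command line using the QSV decoder and encoder. It assumes an ffmpeg build with Quick Sync support and an Intel iGPU; the file names and bitrate are hypothetical.

```python
import subprocess

def qsv_transcode_cmd(src, dst, bitrate="4M"):
    """Build an ffmpeg command that decodes H.264 and encodes HEVC on the iGPU."""
    return [
        "ffmpeg",
        "-hwaccel", "qsv",    # use Quick Sync for decoding
        "-c:v", "h264_qsv",   # QSV H.264 decoder
        "-i", src,
        "-c:v", "hevc_qsv",   # QSV HEVC encoder
        "-b:v", bitrate,
        dst,
    ]

cmd = qsv_transcode_cmd("input.mp4", "output.mp4", "6M")
# subprocess.run(cmd, check=True)  # uncomment on a machine with QSV hardware
```

Whether this actually runs on a given system depends on the driver stack described above being in place (the Media Driver or VAAPI driver, plus an ffmpeg built against it).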
Microsoft offers support for Quick Sync in Windows (Windows Vista and later), based on supporting driver software from Intel, through both DirectX and Windows Media Foundation (WMF). A wide range of applications build on this support in Windows.
Apple added Quick Sync support in OS X Mountain Lion for AirPlay, FaceTime, iTunes, Safari, QuickTime X, iMovie, Final Cut Pro X, Motion and Compressor. Third-party software includes Adobe Premiere Pro, Adobe Media Encoder, DaVinci Resolve and others.
Support for Quick Sync hardware accelerated decoding of H.264, MPEG-2, and VC-1 video is widely available. One common way to gain access to the technology on Microsoft Windows is by use of the free ffdshow filter. Some other free software like VLC media player (since version 2.1.0 "Rincewind") supports Quick Sync as well. Many commercial applications also benefit from the technology today, including CyberLink PowerDVD, CyberLink PowerDirector and MacroMotion Bogart "gold" edition.
Support for hardware-assisted media encoding tailored for Quick Sync is widely available. Examples of such software with Quick Sync support during encoding processes are Emby Media Server,[28] Plex Media Server,[29] Badaboom Media Converter, CyberLink MediaShow, CyberLink MediaEspresso, ArcSoft MediaConverter, MAGIX Video Pro X, Pinnacle Studio (since version 18), Roxio Toast, Roxio Creator, XSplit Broadcaster,[30] XSplit Gamecaster[31] (all commercial) and projects like HandBrake,[32][33] Open Broadcaster Software[34] or applications for operation with a video content entering in Adobe CC2018.
I've never been a fan of hardware encoding, going back to my earliest impressions of CUDA. Now that I've been blessed with a modestly spiffy home machine, I've been doing a lot of testing of lightweight renderers for quick HD proofs (dailies) to throw up on the home media server for preview, as well as proxies, prerenders, rough cuts, and the like. It's the way I learned to work.
I really, really promised myself to give HEVC, and Intel QuickSync in general, a chance. Boy, have I been disappointed. I mean, 900% realtime renders at lower bitrates are tempting, even to retired warriors. However, this on-cpu stuff, next to x265/x264, which take longer to render, is simply horrid. That's a technical term.
+1. I look at it as "horses for courses." For best quality go slow with x264/5 cpu render. For other purposes perhaps QSV is good enough. I think I remember reading somewhere that QSV is not at the same quality standard as NVENC and VCE.
@Musicvid keep in mind HEVC is a new codec standard regardless of how you are rendering it; it yields superior image quality for comparable file sizes to H264. That said, it also requires a lot more overhead to decode, so it's definitely not for every situation (yet).
I wrote here (under another username) last year that NVENC encoding to AVC or HEVC at low bitrates is far better than the QSV option, which I will never use for encoding.
A lot of artifacts in renders with the same settings compared to NVENC, and when the development team of Vegas wrote here that there is a lot of communication between them and the developers of Nvidia, my suspicion about quality grew more and more.
No, it claims to yield equal quality at lower file sizes; that's a big difference in thinking. What I'm finding out is yeah, I can render them three or four times smaller, but can I bear to watch them?
All I was saying is, let's not throw HEVC under the bus because NVENC's hardware encoding does a piss poor job. :) HEVC in and of itself is a very good codec and provides many advantages over H264 (smaller file sizes, 10bit color, 4:4:4 colour space in version 2, 4K etc.) Anyway, I haven't played with any of the hardware accelerated codecs, we just use the good ol' tried and true let-the-CPU-chug-it-out.
You would typically want to double the bitrate when hardware encoding, except at very low bitrates, where doubling may not be enough to look similar to the software equivalent. If you have the bandwidth available and you're uploading it somewhere for re-encode, then that may work for you. If you are encoding the version that your viewers will see on the internet (no re-encode), then hardware is not good enough, as efficiency is most important: you want your bitrate as low as possible, especially if you know many viewers will be using mobile devices.
What if you have a GPU that has HEVC encoding "built-in" like NVIDIA's Quadro P2000? I have Magix's Video Pro X and if I check 'Hardware Encoding' when rendering, I either get an encoding failure or the rendering slows to a crawl so painful, I end up cancelling the render.
In order to assess the viability of software/hardware HEVC, I used a quantitative approach similar to JN_'s. I focused entirely on UHD, though, and compared XAVC-I, XAVC-S, Magix AVC and Magix HEVC at descending Mbit/s, largely using the same "intervals" as suggested by the default render templates.
I used the Happy Otter Scripts (HOS) tool "Render Quality Metrics" from W. Waag to evaluate mean squared error (MSE) and peak signal-to-noise ratio (PSNR; see Wikipedia for some mathematical background).
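For reference, both metrics are straightforward to compute by hand. A minimal pure-Python sketch, assuming 8-bit pixel values (peak 255):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB; infinite for identical inputs."""
    e = mse(a, b)
    if e == 0:
        return math.inf
    return 10 * math.log10(peak ** 2 / e)
```

For example, a uniform error of 16 levels per sample gives an MSE of 256 and a PSNR of about 24 dB; a perfect match gives MSE 0 and infinite PSNR.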
P.S. Something is wrong with your numbers. Mainconcept VBR with a final average bitrate of 52.1 gives you a file size of 0.377 GB, while Mainconcept VBR with a final average bitrate of 29.2 gives a file size of 0.431 GB. This is mathematically impossible for the same source. If the sources were different (the only way a lower bitrate can give a bigger file), you cannot compare metrics.
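The consistency check above is simple arithmetic: for a fixed duration, file size is roughly average bitrate times duration, so a lower average bitrate cannot yield a larger file from the same source. A sketch (the duration here is hypothetical, chosen only to show the relation):

```python
def expected_size_gb(avg_mbps, seconds):
    """Approximate file size in GB from average bitrate (Mbit/s) and duration."""
    return avg_mbps * seconds / 8 / 1000  # Mbit -> megabytes -> gigabytes

dur = 120  # seconds, hypothetical
hi = expected_size_gb(52.1, dur)  # higher average bitrate
lo = expected_size_gb(29.2, dur)  # lower average bitrate
```

For any fixed duration, `hi` must exceed `lo`; when the reported sizes invert that order, the two encodes cannot have come from the same source clip.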
P.P.S. It is not correct to compare VBR (Mainconcept) vs. CBR (x264). An encoder in CBR mode cannot increase the bitrate for more complicated frames and decrease it for simple frames, which leads to worse overall quality. You should compare them under similar conditions (all VBR or all CBR). Note that even under these unfair conditions, x264 won in resulting size and quality (MSE), based on the numbers you provided.
There are several variables and common templates left out of your first round, so I would regard any conclusions as tentative, contingent on including more data sets in subsequent testing (e.g., x264 CRF, minimum bitrate considerations for local encoders, etc.)
It should be noted that for PSNR (shadow noise), higher is better, and anything over 30dB should be fine for digital source. For the MSE scale, lower is better, with 0 being a perfect match [corrected]. Even my tired old eyes consistently see a difference above MSE 4.0.
Since MSE (actually RMS Regression, remember Algebra II?) is not perceptually weighted, I ran a round of software intermediate tests using the MSU SSIM freebie, but it is a hassle because one needs to crop to 640x480 for comparison. Also, I'm sure you know their annual encoder shootouts are regarded world-wide.
I have looked at the examples you provided in the forum, and I too can see the difference when looking at your material on my TV. As far as the root cause is concerned, it is difficult for me to conclude from a distance whether it is rooted in a set-up issue with your computer/Vegas software (too many variables), or in the averaging of the PSNR/MSE algorithm (e.g., a sharp drop in minimum bitrate), which of course cannot be excluded. Because I was aware of your results, I chose a sequence (3715 frames altogether) including dark/shady pictures (dark clothes), and I did not see any difference (which is of course not proof either).