Most software-based video mixers, from OBS to the TriCaster, have integrated encoding capabilities, which are always an option. However, because encoding is a CPU-intensive process that can steal cycles from these programs (and potentially degrade performance or stability), many producers eschew the integrated encoder in favor of a separate hardware encoder.
The major differentiators here are connection types (HDMI vs. SDI) as well as resolution and frame rate (1080p/4K, 30/60 fps). Also consider the number of connections, particularly in the context of a broadcast application or lecture capture, where the output signal will combine multiple sources like PowerPoint input from a laptop with a talking head video.
All transcoding and delivery services publish specifications that detail the protocols, container formats, codecs, and other requirements for the streams they ingest. Here are the specifications for Wowza Streaming Engine. Some services are relatively generic, with H.264 and RTMP as the main flavors available; all encoders should be able to connect to these services without issue.
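To see what a spec-compliant contribution stream looks like in practice, here is a sketch using ffmpeg (assuming it is installed) that encodes a synthetic test clip to H.264/AAC and packages it in FLV, the container format RTMP carries. The bitrates and settings below are illustrative choices, not Wowza's actual ingest requirements:

```shell
# Generate a 5-second synthetic test pattern and tone, encode to H.264/AAC,
# and package in FLV -- the container format carried over RTMP.
# (Assumes ffmpeg is installed; settings are illustrative, not a service's
# actual required values -- always check the service's ingest spec.)
ffmpeg -y \
  -f lavfi -i testsrc=duration=5:size=1280x720:rate=30 \
  -f lavfi -i sine=frequency=440:duration=5 \
  -c:v libx264 -preset veryfast -b:v 2500k -g 60 -pix_fmt yuv420p \
  -c:a aac -b:a 128k \
  -f flv ingest_test.flv
```

To push to a live service, you would replace the output file with the rtmp:// ingest URL from the service's dashboard (e.g., `-f flv rtmp://live.example.com/app/streamkey`, a placeholder address).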
One unit that ticks all the right boxes (and provides a great diagram to assist our discussion) is the URayCoder Wired 1080P H.264 HDMI Video Encoder, an Amazon's Choice product with a 4.5-star average across 100 ratings ($188). As the name suggests, the unit accepts HDMI input at a maximum of 1080p.
As you can see in the diagram, you can also display the video from a separate decoder box, or on a computer or mobile device that can access the URL of the video encoder, both relatively common features. URayCoder also has relatively low-cost units that accept SDI input ($248) and output HEVC ($238).
The unit has A/V inputs for HDMI and SDI, with 2 XLR inputs plus RCA connectors for other audio sources. The unit supports FAT32, exFAT, and NTFS file systems, enabling recordings with no preset file size limits, and features dual-stream output so you can stream to two services simultaneously, or record to the SD card and stream to a service simultaneously.
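The dual-stream behavior described above, encoding once and sending the result to two destinations, can be approximated in software with ffmpeg's tee muxer. This sketch (assuming ffmpeg is installed) writes the same encoded stream to two local files; in a real workflow one output could be a local recording and the other an rtmp:// or srt:// URL:

```shell
# Sketch of dual-output encoding: encode a synthetic clip once and write the
# same stream to two destinations via the tee muxer. (Assumes ffmpeg is
# installed; both outputs are local files here for illustration.)
ffmpeg -y -f lavfi -i testsrc=duration=3:size=640x360:rate=30 \
  -map 0:v -c:v libx264 -preset veryfast -pix_fmt yuv420p \
  -f tee "local_record.mp4|stream_copy.mp4"
```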
Most cellular bonding units include multiple modems whose separate connections are combined to create sufficient bandwidth for the encoded stream. Connecting to different carriers, such as T-Mobile and Verizon, also provides redundancy if one provider's network is congested.
You can visualize the combination of these features in the Epiphan diagram for the Pearl2 encoder. This high-end Pearl2 has multiple inputs for up to 4K SDI and HDMI video with XLR connectors to support professional audio gear. As you can see, the unit has a large LCD panel for monitoring and some control plus a headphone jack for audio.
You create compositions from your various input sources by connecting to the system via a computer and web browser. Beyond the connected sources, you can add still images, logos, and text to the production. During the production, you can switch between various shots using controls on the unit or via a browser-based controller called Epiphan Live. You can record the program stream output and even record ISO streams of the various inputs.
Until recent generations, hardware encoders generally produced lower-quality videos with much larger file sizes. The encoders built into consumer PCs and laptops were designed more for fast streaming of video than for producing high-quality videos.
Thus, you cannot expect Shotcut to use close to 0% CPU and mostly GPU when exporting with the hardware encoder, because reading and decoding the files alone becomes a bottleneck in feeding the hardware encoder. Also, if you apply any significant amount of image processing, you should expect heavy CPU usage, especially if parallel processing is enabled (it is by default).
Next, assuming you do not need to deinterlace and agree to limit yourself to GPU effects, there is a major technical hurdle in transferring the decoded video in GPU RAM to OpenGL textures, owing to the multiple APIs across operating systems. There is similar complexity in converting OpenGL textures to hardware encoder frames for the various hardware encoding APIs. Any tool that claims to do all of this but does not ensure the video stays in GPU RAM will see limited performance gains, if any.
I tried it last week (h264_qsv) on my Surface Pro, which has an integrated GPU. It shaved about 10% off my export time (compared to libx264) and produced a video that was about 25% larger. When I exported as hevc_qsv, it performed quite a bit better than libx265. However, it depends heavily on which filters you are using.
If you have a discrete graphics card in your computer, rather than a GPU integrated on your CPU chip, you may get better performance. But you will have to run some experiments yourself, comparing export time, file size, and video quality. There is no simple answer to your question.
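One way to run that kind of experiment is to encode the same clip with different settings and compare elapsed time and file size. The sketch below (assuming ffmpeg is installed) compares two libx264 presets on a synthetic clip; on supported hardware you could substitute h264_qsv (Intel), h264_nvenc (NVIDIA), or h264_videotoolbox (Apple), provided your ffmpeg build includes those encoders:

```shell
# Encode the same synthetic clip twice with different libx264 presets, then
# compare the resulting file sizes (wrap each command in `time` to compare
# speed as well). Assumes ffmpeg is installed; substitute a hardware encoder
# such as h264_qsv for libx264 if your build and GPU support it.
ffmpeg -y -f lavfi -i testsrc=duration=5:size=1280x720:rate=30 \
  -c:v libx264 -preset medium -crf 23 -pix_fmt yuv420p sw_medium.mp4
ffmpeg -y -f lavfi -i testsrc=duration=5:size=1280x720:rate=30 \
  -c:v libx264 -preset veryfast -crf 23 -pix_fmt yuv420p sw_veryfast.mp4
ls -l sw_medium.mp4 sw_veryfast.mp4
```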
Streaming encoders are an essential tool for transporting live video across the internet. Their utility is twofold: content distributors use encoders to digitise video (converting analog signals to digital) while simultaneously shrinking gigabytes of data down to megabytes.
Look no further. Our team of video engineering experts has put together a comprehensive comparison of the best live contribution encoders available in 2023. From free software options to 4K live streaming encoder hardware, we cover it all. Keep reading for the what, where, when, and why of video encoding.
Milliseconds after a stream is captured, an encoder uses video compression algorithms called codecs to condense the data. Live encoders employ lossy compression, tossing out unnecessary data to achieve the greatest possible reduction in file size without degrading perceptual video quality.
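The quality/size trade-off of lossy compression is easy to demonstrate: the same clip encoded at two H.264 quality levels (CRF 18, near transparent; CRF 32, visibly compressed) produces very different file sizes. A minimal sketch, assuming ffmpeg is installed:

```shell
# Lossy compression in practice: encode the same synthetic clip at two H.264
# quality levels. A lower CRF keeps more data (larger file, better quality);
# a higher CRF throws more data away (smaller file, more visible artifacts).
ffmpeg -y -f lavfi -i testsrc=duration=5:size=1280x720:rate=30 \
  -c:v libx264 -crf 18 -pix_fmt yuv420p high_quality.mp4
ffmpeg -y -f lavfi -i testsrc=duration=5:size=1280x720:rate=30 \
  -c:v libx264 -crf 32 -pix_fmt yuv420p low_quality.mp4
```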
The encoder then packages the stream for delivery across the internet. This involves putting the components of the stream into a commonly accepted contribution format such as Real-Time Messaging Protocol (RTMP) or Secure Reliable Transport (SRT). RTMP and SRT are streaming protocols that transport content between the encoder and the online video host.
In most cases, these streams are repackaged at the next step of the workflow for delivery to the end user. Protocols like HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) come into play here. These protocols make the content more scalable and adaptable for delivery to viewers with varying internet speeds.
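Segmented delivery of this kind can be sketched with ffmpeg's hls muxer, which splits an encoded clip into an .m3u8 playlist plus short media segments. A dedicated packager or media server normally handles this step; ffmpeg (assumed installed) stands in for it here:

```shell
# Repackage a synthetic clip for HLS delivery: the hls muxer emits an .m3u8
# playlist plus ~2-second .ts media segments that an HLS player fetches in
# sequence. (Assumes ffmpeg is installed; -g 60 forces a keyframe every 2s
# at 30 fps so segments can be cut cleanly.)
ffmpeg -y -f lavfi -i testsrc=duration=6:size=640x360:rate=30 \
  -c:v libx264 -preset veryfast -g 60 -pix_fmt yuv420p \
  -f hls -hls_time 2 -hls_list_size 0 playlist.m3u8
```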
Once the stream reaches viewers, a video decoder built into the player software or set-top box will decompress the data for playback. At this point, the video content has often been encoded, transcoded, delivered globally, and decompressed. Thanks to the efficiency afforded by the encoding solution used, viewers are none the wiser. All they know is that the video content is streaming live and in high quality.
Sometimes the encoder is built into the capture device itself; other times, a separate software or hardware encoder is required for live streaming. With contribution encoding, content distributors generally convert the stream for delivery via RTMP, RTSP, SRT, or another ingest protocol.
The ability to fit more data into less space has changed the way video is stored and distributed. What once required renting VHS tapes or purchasing DVDs can now be accomplished by simply streaming video content over the top (OTT) or storing it in the cloud. Video encoders make this possible by compressing streaming data into a manageable size.
For use cases where reducing latency is a must, there are multiple opportunities to decrease the broadcast delay across the video supply chain. The live encoder, packager, CDN, and player must all be optimised accordingly.
Aside from the considerations above (whether or not your workflow will include encoding and transcoding, software vs. hardware solutions, 4K resolution, and low-latency encoding), here are 10 factors to mull over before selecting a live streaming encoder.
Regardless, we always suggest testing your internet connection to verify the stability of your broadcast signal. High-speed internet is also crucial for producing 4K streams, so opt for an Ethernet-connected encoder when UHD resolution is the goal.
Depending on the criticality of your streaming content, you might require encoder and/or output redundancy. This helps ensure that your stream is resilient enough to survive a cable failure, loss of internet connectivity, or hardware (computer or encoder) failure. For anyone hosting live shopping experiences or news streams, redundancy is an important consideration.
OBS is the live encoding software that everyone should start with. The open-source solution is free, proven, and available on multiple platforms (Windows, Mac, Linux). Its dedicated base of developers works to keep the tool relevant, which means its feature set is constantly growing, with useful plug-ins and integrations always being added.
Support can be found in the community on the OBS forums, Discord, and Facebook groups. OBS also offers developer docs and a knowledge base of guides curated by its volunteer support team. When you first fire up the program, OBS Studio even offers a wizard to optimise your setup.
vMix bridges the gap between software encoding and professional-quality video production. Designed to run on a laptop but robust enough to run alongside purpose-built hardware, vMix offers a strong feature set even at the minimum $60 price tag. Plus, the 60-day trial includes all of the Pro features.
Rather than taking the form of a black box, the NAGASOFT VCS NSCaster-X1 is a touchscreen tablet for broadcasting, switching, mixing, recording, special effects, and monitoring. This complete live production system provides the flexibility to input an encoded stream from multiple cameras and devices and produce a highly professional live stream with graphic overlays, audio mixing, recording, and distribution. Alternatively, it can also be used like a contribution encoder.
Designed to make live streaming easier to operate, the touchscreen allows broadcasters to quickly switch between channels and start broadcasts. The NSCaster-X1 also offers Ethernet, Wi-Fi, and 4G connectivity to meet the needs of remote encoding.