Wowza Streaming Engine does support multicast, but you have to use the RTP or MPEG-TS protocols to deliver the video to players like QuickTime or VLC. Silverlight embedded players should also be able to grab the RTP multicast stream from Wowza Streaming Engine.
For anyone else who finds this page while troubleshooting: if Wowza is running on Linux with multiple interfaces, you may need to disable reverse path filtering in Linux. The filter drops any packet whose source address is not routable back out the interface it arrived on. While this is usually a good thing, the multicast source may not be on the same subnet as the interface you expect to receive multicast streams on. Disabling reverse path filtering for all interfaces must be done as root. I have not been successful in disabling the filter for just a specific interface on Ubuntu 16.04, but there may be a way.
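The command itself did not make it into the post; this is a sketch of the usual sysctl approach, assuming a stock Ubuntu kernel (the persistence location may vary by distro):

```shell
# Disable reverse path filtering for every interface (run as root; effective immediately)
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.default.rp_filter=0   # also covers interfaces created later

# To persist across reboots, put the same keys in /etc/sysctl.conf or /etc/sysctl.d/:
#   net.ipv4.conf.all.rp_filter = 0
#   net.ipv4.conf.default.rp_filter = 0
```

Note that the kernel applies the stricter of conf/all/rp_filter and the per-interface value, which may be why clearing it on a single interface alone appears to have no effect.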
To set this up, connect your encoder to Wowza. Set up the Wowza Transcoder to transcode to VP8. Then connect that to your stream target. In a few seconds, your video will be delivered over multicast, and you can play it back with our Vivoh Multicast Player.
Vivoh now offers app-based video playback with the same VP8 video codec, but delivered via multicast. This takes the same royalty-free codec that WebRTC uses in the Google Chrome and Microsoft Edge browsers and enables you to play it via multicast in the Vivoh Multicast App.
Vivoh is working to update the Vivoh Multicast App to support the new AV1 video codec. AV1 is also royalty-free and is currently used by both the Netflix and Webex apps (though with only limited and uncertain browser support). Its quality is even better than both H.264 and VP8.
Today, enterprise video streaming is stuck with HTTP delivery, which does not scale, and with the H.264 video codec, which has inferior video quality compared to the modern AV1 codec now used by Netflix and Webex.
All 10 channels have a unique multicast IP address and port. I have gone through the documentation and found that the streams have to be pushed to port 10000 (please correct me if I am wrong), but since my input feed is from a multicast source, I am not sure how to achieve this. I am able to play the stream over the UDP protocol with the multicast IP using the VLC player.
I have gone through the article and have set up the media server as per the document. Now I can see something being received, but I am not able to play it in the browser. The following are the logs for your reference.
We have a home where we are using a single NVS200 to stream an analog DVR to 35 TSW panels. The client decided he wants to view video on all panels simultaneously, and the NVS will only handle 10 concurrent streams, so we need to multicast. Multicast is enabled on the network equipment as well as on the NVS stream we are using.
The question is: should we be pointing our video URL to the multicast address of the stream, as listed in the NVS? Or do we still use the IP address of the NVS itself? I was in a support chat and they told me to keep using the NVS IP address; the unit does all the work in the background. I really wasn't confident in their answer. What is the point of the multicast IP address and port if I am not connecting to it?
The URL we are using currently is rtsp://crestron:crestron@...:554/live2.sdp. The multicast IP address is 239.128.1.100. Do we really just enable it in the NVS and the network equipment, and that's it? And just ignore the multicast IP address? I cannot find any other docs on this aside from the outdated info in Online Help.
That's what I figured. The person I was chatting with swore I should be using the NVS IP address instead. I mean, what's the point of the multicast address then? Unfortunately, I always seem to get the same person in support, and that person has been less than helpful in the past and many times incorrect...
A peculiar scenario: a client is running a schedule on their local computer and wishes to publish (UDP/MPEG-TS) to their Wowza server for streaming. I am aware that multicast packets are discarded at the gateway/periphery, but they say they have contacted Wowza, whose response was that this can be done, and they were redirected to the standard post:
This is really more of a networking question, but since it is connected with Wowza and the client says it is possible according to Wowza, I am posting it here. I have also read other related posts, but none of them were relevant or covered a similar situation.
You can re-stream MPEG-TS over the internet using Wowza by following the MPEG-TS guide, and you can publish MPEG-TS over the internet using Wowza Push Publish, but not via multicast. Multicast is only supported on a LAN/VLAN that supports multicast.
Here is the setup: a Wowza server running on Linux, connected to a network segment with an MPEG-4 encoder that is outputting a live UDP (not RTP or RTSP) multicast stream. I would like to configure the Wowza server to pick up the stream and then unicast it to multiple destinations.
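Wowza's MediaCaster handles the join once you point a .stream file at the UDP address, but when debugging it helps to verify that the host can actually see the stream at all. A minimal sketch of joining a group at the socket level in Python (the group and port values are examples, not from the post):

```python
import socket
import struct

def open_multicast_socket(group: str, port: int, iface_ip: str = "0.0.0.0") -> socket.socket:
    """Join an IPv4 UDP multicast group and return a socket ready for recvfrom().

    group/port are whatever the encoder announces (e.g. 239.128.1.100:10000);
    iface_ip selects the NIC to join on, and 0.0.0.0 lets the kernel pick.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))  # listen on the port the encoder is sending to
    # IP_ADD_MEMBERSHIP takes the packed group address plus the local interface address
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface_ip))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

If recvfrom() on this socket times out while VLC on another box plays the stream fine, look at switch IGMP snooping or the reverse-path-filter issue mentioned elsewhere in this thread.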
In a TCP network with hundreds of connected users, network bandwidth becomes a limiting factor, so complex compression schemes are used to reduce the required bandwidth. These schemes add latency because they encode blocks of samples to improve compression. If even a single bit error occurs, the entire block is lost. Retransmission takes care of most packet drops.
But we want low latency. Loss of a block would create a long silence that would quickly become fatiguing to the audience. The answer is to use a coding scheme that is very simple; though it is not very efficient, using multicast keeps the required network bandwidth in check. Audio is normally encoded at the CD standard rate of 44,100 samples per second or the video-production standard rate of 48,000 samples per second. Suppose you decide to encode 8 stereo samples per packet. If one packet is lost, about 0.2 milliseconds of audio will be replaced by silence or by the value of the last good sample. If the packet is errored, just throw it away. The result may or may not be a noticeable pop, depending on the audio being encoded. Most listeners can tolerate this until the network error rate gets so high that the pops or silences make the stream unlistenable. I would argue that at that error rate, retransmission falls apart as a mitigating scheme too: so many packets are being retransmitted that they become the predominant traffic in the network and just make things worse.
So I think multicast in a WiFi network can be a good choice if a coding scheme like PCM (WAV) is chosen over complex block-oriented coding. Upwards of 1 Mbps of channel bandwidth will be occupied, compared to something like 32 Kbps for a more complex coder, but if there are more than about 40 listeners, multicast is more conservative of network bandwidth.
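The numbers above can be checked directly; this is a sketch assuming 16-bit stereo PCM, which the post implies but does not state:

```python
SAMPLE_RATE = 44_100        # CD standard, samples per second
SAMPLES_PER_PACKET = 8      # stereo sample frames carried in one packet
CHANNELS = 2
BITS_PER_SAMPLE = 16
COMPLEX_CODER_BPS = 32_000  # e.g. a 32 Kbps block-oriented codec

# Audio lost when one packet is dropped
loss_ms = SAMPLES_PER_PACKET / SAMPLE_RATE * 1000

# Raw PCM bandwidth for one multicast stream
pcm_bps = SAMPLE_RATE * CHANNELS * BITS_PER_SAMPLE

# Unicast listener count at which one PCM multicast stream uses less total bandwidth
break_even = pcm_bps / COMPLEX_CODER_BPS

print(f"{loss_ms:.2f} ms lost per dropped packet")   # about 0.18 ms, i.e. the ~0.2 ms above
print(f"{pcm_bps} bps PCM stream")                   # 1,411,200 bps, a bit over 1 Mbps
print(f"break-even at ~{break_even:.0f} listeners")  # about 44, matching "more than 40"
```

At 48,000 samples per second the per-packet loss shrinks slightly and the PCM bandwidth grows to about 1.54 Mbps, so the break-even point moves up to roughly 48 listeners.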
Let's say we have a broadcaster "B" and two attendees, "A1" and "A2". Of course it seems solvable: we just connect B with A1 and then B with A2, so B sends a video/audio stream directly to A1 and another stream to A2. B sends the stream twice.
Is it possible to solve this so that B sends only one stream to some server, and the attendees just pull the stream from that server? Yes, this means the outgoing bandwidth on the server must be high, but I can provide that.
26.05.2015 - There is currently no solution for scalable WebRTC broadcasting that uses no media servers at all. There are server-side solutions as well as hybrid ones (P2P plus server-side, depending on conditions) on the market.
There are some promising techs, though, like -khan/WebRTC-Scalable-Broadcast, but they still need to answer these possible issues: latency, overall network connection stability, and the scalability formula (they are probably not infinitely scalable).
As has been pretty much covered here, what you are trying to do is not possible with plain, old-fashioned WebRTC (strictly peer-to-peer). As was said earlier, WebRTC connections negotiate encryption keys per session, so your broadcaster (B) will indeed need to upload its stream as many times as there are attendees.
This works well because the broadcaster (B) only uploads its stream once, to Janus. Janus then decodes the data using its own key, has access to the raw data (that is, the RTP packets), and can emit those packets back to each attendee (Janus takes care of the encryption for you). And since you put Janus on a server, it has great upload bandwidth, so you will be able to stream to many peers.
So yes, it does involve a server, but that server speaks WebRTC, and you "own" it: you implement the Janus part, so you don't have to worry about data corruption or a man in the middle. Unless your server is compromised, of course, but there is only so much you can do.
To show you how easy it is to use: in Janus you have a function called incoming_rtp() (and incoming_rtcp()) that is invoked with a pointer to the RT(C)P packets. You can then send them to each attendee (the attendees are stored in sessions, which Janus makes very easy to use). Look here for one implementation of the incoming_rtp() function; a couple of lines below it, you can see how the packets are transmitted to all attendees, and here you can see the actual function that relays an RTP packet.
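The fan-out Janus performs can be sketched generically. This is not the Janus C API, just the relay idea in plain Python over unencrypted UDP; the function name, packet size, and max_packets parameter are all illustrative:

```python
import socket
from typing import List, Optional, Tuple

def relay_rtp(sock: socket.socket,
              attendees: List[Tuple[str, int]],
              max_packets: Optional[int] = None) -> None:
    """Forward every datagram arriving on sock to all attendees (SFU-style fan-out).

    A real Janus plugin receives decrypted RTP via its incoming_rtp() callback
    and Janus re-encrypts per attendee; here we just copy raw UDP payloads.
    """
    relayed = 0
    while max_packets is None or relayed < max_packets:
        packet, _src = sock.recvfrom(2048)   # one packet from the broadcaster
        for addr in attendees:               # send the same payload to each peer
            sock.sendto(packet, addr)
        relayed += 1
```

The key property is the same as with Janus: the broadcaster uploads each packet once, and the server's bandwidth, not the broadcaster's, scales with the number of attendees.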