I use a separate API wrapper that makes the request for the streams through either the WebSocket interface or the HTTP REST interface.
The HTTP REST request looks like this:
{
"body":
{
"request":"rtp_forward",
"publisher_id":100,
"room":5,
"host":"192.168.0.17",
"audio_port":5000,
"video_port":5002
},
"janus":"message",
"transaction":"o4vup0qoomd"
}
This requests that publisher 100 in room 5 forward its video RTP packets to port 5002 and its audio RTP packets to port 5000 on host 192.168.0.17.
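As a rough sketch, building and sending that request over the HTTP REST interface could look like the following (the base URL and the session/handle IDs in the path are placeholders here, not real values; substitute your own):

```javascript
// Build the rtp_forward request shown above. The transaction string is just
// a random identifier that Janus echoes back in its response.
function buildRtpForwardRequest(publisherId, room, host, audioPort, videoPort) {
  return {
    body: {
      request: "rtp_forward",
      publisher_id: publisherId,
      room: room,
      host: host,
      audio_port: audioPort,
      video_port: videoPort
    },
    janus: "message",
    transaction: Math.random().toString(36).slice(2)
  };
}

// With the HTTP REST transport, plugin messages are POSTed to the
// session/handle path of the Janus HTTP endpoint.
async function sendRtpForward(baseUrl, sessionId, handleId, request) {
  const res = await fetch(`${baseUrl}/janus/${sessionId}/${handleId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request)
  });
  return res.json();
}
```

You would call it with something like `sendRtpForward("http://192.168.0.17:8088", sessionId, handleId, buildRtpForwardRequest(100, 5, "192.168.0.17", 5000, 5002))`, where 8088 is the default Janus HTTP port.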
This is the response you will receive:
{
"janus": "success",
"sender": 4003334502,
"transaction": "o4vup0oqoomd",
"plugindata": {
"plugin": "janus.plugin.videoroom",
"data": {
"publisher_id": 100,
"rtp_stream": {
"audio_stream_id": 123113458,
"audio": 5000,
"video_stream_id": 584326990,
"video": 5002,
"host": "192.168.0.17"
},
"room": 5,
"videoroom": "rtp_forward"
}
}
}
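A small helper to pull the forwarder details out of that response might look like this (a sketch only; the field names are exactly those in the response above):

```javascript
// Extract the forwarder details from a successful rtp_forward response.
// Throws if Janus did not report success.
function parseRtpForwardResponse(resp) {
  if (resp.janus !== "success") {
    throw new Error("rtp_forward failed: " + JSON.stringify(resp));
  }
  const data = resp.plugindata.data;
  const stream = data.rtp_stream;
  return {
    room: data.room,
    publisherId: data.publisher_id,
    host: stream.host,
    // Keep the stream IDs around: you need them if you later want to stop
    // the forwarder with a "stop_rtp_forward" request.
    audioStreamId: stream.audio_stream_id,
    audioPort: stream.audio,
    videoStreamId: stream.video_stream_id,
    videoPort: stream.video
  };
}
```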
If you want to use the websocket interface you will have to add the session_id and handle_id fields to the request and fill them out accordingly.
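For example, the same message over the WebSocket transport just gains those two fields (the ID values below are placeholders):

```javascript
// Take an HTTP-style Janus message and add the session_id and handle_id
// fields that the WebSocket transport requires inside the message itself.
function toWebSocketMessage(request, sessionId, handleId) {
  return { ...request, session_id: sessionId, handle_id: handleId };
}
```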
What programming language are you using? I already have a crude C# API wrapper (it is very crude... and ugly, but it works well enough for me). You could also simply use the JavaScript API wrapper.
As for gstreamer, you can do multiple things.
This is the pipeline I use to receive the stream:
"udpsrc port=5000 caps="application/x-rtp, media=(string)audio, payload=(int)111, clock-rate=(int)48000, encoding-name=(string)X-GST-OPUS-DRAFT-SPITTKA-00\" ! rtpjitterbuffer do-lost=true latency=300 mode=0 ! rtpopusdepay ! queue ! opusparse ! opusdec ! audioresample ! audioconvert ! appsink sync=false caps="audio/x-raw, format=S16LE, layout=interleaved, rate=8000, channels=1" name=audio_sink_1 udpsrc port=5002 caps="application/x-rtp, media=(string)video, payload=(int)100, clock-rate=(int)90000, encoding-name=(string)VP8-DRAFT-IETF-01" ! rtpjitterbuffer do-lost=true latency=300 mode=0 ! rtpvp8depay ! queue ! vp8dec ! videoconvert ! videoscale ! videorate ! video/x-raw, format=(string)RGB, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, framerate=(fraction)16/1 ! appsink sync=false name=video_sink_1 caps="video/x-raw, format=(string)RGB, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive"
Set the caps to anything you would like, and you obviously don't have to push to an appsink; you can do whatever you like with the decoded streams.
I actually take the feeds, put them through a compositor and an audiomixer, encode them again, multiplex them, and write the result to a file.
Let me know if you have any specific questions about anything I have said (typed :)).