Having implemented this more or less in licode (which lets you play data from a file), I can say it's surprisingly hard to get right. Example: when a client requests a keyframe via a FIR (Full Intra Request), there's no way to fulfill the request, because no encoder is running that could produce one on demand: the stream is already encoded, so the client just has to wait until a keyframe comes along in the file. This also creates all sorts of race conditions at stream start, where if one side starts sending data before the other is ready, the receiver misses the initial keyframe and can't render any video until the next one arrives.
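To make that concrete, here's a minimal sketch (not licode's actual code, and the names are made up) of the bind you're in: with no live encoder, the only possible response to a FIR is to gate output and wait for the next keyframe already baked into the file.

```typescript
interface Frame {
  data: Uint8Array;
  isKeyframe: boolean; // parsed from the codec payload, e.g. the VP8 payload header
}

class FileBackedSender {
  // True at stream start too, which is exactly the race: frames sent before
  // the receiver is ready are lost, and nothing renders until the next keyframe.
  private waitingForKeyframe = true;

  onFir(): void {
    // Can't ask an encoder for a fresh keyframe; all we can do is wait for one.
    this.waitingForKeyframe = true;
  }

  onFrame(frame: Frame): void {
    if (this.waitingForKeyframe) {
      if (!frame.isKeyframe) return; // drop frames until the file yields a keyframe
      this.waitingForKeyframe = false;
    }
    this.send(frame);
  }

  private send(frame: Frame): void {
    // packetize into RTP and hand off to the transport (elided)
  }
}
```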
Unless that encode is a huge burden, it's not worth the effort of getting rid of it. Either that, or stop using WebRTC and switch to a playback streaming method (DASH, HLS, etc.). Is there some reason typical HTTP streaming won't work for you? It's *way* easier than WebRTC.
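For comparison, the entire client side of the HLS route can be a few lines with hls.js (the element id and playlist URL below are placeholders):

```typescript
import Hls from "hls.js";

const video = document.querySelector<HTMLVideoElement>("#player")!;

if (video.canPlayType("application/vnd.apple.mpegurl")) {
  // Safari plays HLS natively
  video.src = "/streams/talk.m3u8";
} else if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource("/streams/talk.m3u8");
  hls.attachMedia(video);
}
```

No signaling, no ICE, no RTCP feedback to handle; keyframe placement is just a property of how you segment the file.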