It would depend on how comfortable you are with OpenGL.
I can only speak for the Android API, not the iOS API, at this point, but what you can do is:
Create a camera VideoTrack as usual.
Set up your own custom VideoRenderer (VideoSink in newer releases of the library) to capture frames from the local camera media stream's video track. Each frame typically carries the raster image as an OpenGL texture, so it can stay on the GPU.
Derive your own texture from each frame.
Create your own custom VideoCapturer object that simply passes along that texture as a frame.
Use the VideoCapturer to build a VideoSource, the VideoSource to build a VideoTrack, and the VideoTrack to build a local MediaStream.
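The steps above boil down to a "forwarder" object that acts as a renderer/sink on the camera track and as a capturer for the new source. Here is a minimal sketch of that pattern; note that the real org.webrtc types (VideoSink, CapturerObserver, VideoFrame) are replaced by hypothetical stand-in interfaces so the sketch is self-contained. With the actual library you would implement org.webrtc.VideoSink and org.webrtc.VideoCapturer and wire the observer up via VideoSource.getCapturerObserver().

```java
// Stand-ins for the org.webrtc types, kept to the one method each that
// this sketch needs. These are NOT the real library interfaces.
interface VideoFrame { int textureId(); }            // a texture-backed frame
interface VideoSink { void onFrame(VideoFrame f); }  // receives rendered frames
interface CapturerObserver { void onFrameCaptured(VideoFrame f); }

// Attached as a sink on the camera VideoTrack; re-emits every frame
// through the observer of a new VideoSource, without a CPU copy.
class VideoTrackForwarder implements VideoSink {
    private CapturerObserver observer;
    private boolean capturing;

    void initialize(CapturerObserver observer) { this.observer = observer; }
    void startCapture() { capturing = true; }
    void stopCapture() { capturing = false; }

    @Override
    public void onFrame(VideoFrame frame) {
        // Pass the texture-backed frame straight through to the new source.
        if (capturing && observer != null) {
            observer.onFrameCaptured(frame);
        }
    }
}

public class Main {
    public static void main(String[] args) {
        VideoTrackForwarder forwarder = new VideoTrackForwarder();
        // In real code the observer comes from the new VideoSource.
        forwarder.initialize(f -> System.out.println("forwarded texture " + f.textureId()));
        forwarder.startCapture();
        // Simulate the camera track delivering one frame to its sink.
        forwarder.onFrame(() -> 42);  // prints "forwarded texture 42"
    }
}
```

In real code you would call cameraTrack.addSink(forwarder) (or addRenderer on older releases), then hand the forwarder to PeerConnectionFactory as the capturer behind the new VideoSource.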