These are different things. The constraints you pass to `getUserMedia` are used to actually acquire a local stream. You then hand those tracks to the peer connection, and it makes whatever adjustments it needs within the resource constraints that exist, which are not the same as the constraints you just set to get a video stream locally. The spec says this [0]:
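For reference, a minimal sketch of acquiring a local stream with constraints. The specific width/height/frameRate values here are made up for illustration, not recommendations:

```javascript
// Constraints only shape the *local* capture; they do not bind what the
// peer connection later encodes and sends. Values below are illustrative.
const constraints = {
  video: {
    width: { ideal: 1280 },
    height: { ideal: 720 },
    frameRate: { min: 24, ideal: 30 }
  },
  audio: true
};

// In a browser:
// const stream = await navigator.mediaDevices.getUserMedia(constraints);
// stream.getVideoTracks().forEach(t => console.log(t.getSettings()));
```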
> The encoding and transmission of each MediaStreamTrack SHOULD be made such that its characteristics (width, height and frameRate for video tracks; sampleSize, sampleRate and channelCount for audio tracks) are to a reasonable degree retained by the track created on the remote side. There are situations when this does not apply, there may for example be resource constraints at either endpoint or in the network, or there may be RTCRtpSender settings applied that instruct the implementation to act differently.
The sender settings you can change with `setParameters` [1] -- in particular, you'd update the encodings [2]. There isn't any knob to set a minimum framerate; there does seem to be a `maxFramerate`, but it isn't in the spec [3], so I don't know how it actually works.
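Roughly, updating the encodings would look like this. This is a sketch: `maxFramerate` support varies by browser, the value 15 is arbitrary, and the helper just builds a new parameters object from a copy (keeping fields like `transactionId` intact, which `setParameters` requires):

```javascript
// Sketch: cap the framerate on a sender's encodings via setParameters.
// maxFramerate support varies by implementation; treat this as experimental.
function withMaxFramerate(params, fps) {
  // getParameters() normally returns an encodings array; guard anyway.
  const encodings = (params.encodings && params.encodings.length)
    ? params.encodings
    : [{}];
  return {
    ...params, // preserve transactionId and the other fields unchanged
    encodings: encodings.map(e => ({ ...e, maxFramerate: fps }))
  };
}

// In a browser, against a live RTCRtpSender:
// const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
// await sender.setParameters(withMaxFramerate(sender.getParameters(), 15));
```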
You can experiment a bit and see how the implementations behave, even taking the sender's track and calling `applyConstraints` on it, though I wouldn't expect that to work. Alternatively, you could restrict the size by setting `scaleResolutionDownBy`, so that the bits you have on the network and your CPU can be spent on keeping a higher framerate.
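The `scaleResolutionDownBy` idea, sketched the same way. The factor of 2 (half width, half height) is arbitrary, and the `applyConstraints` experiment is shown commented out since it may well be a no-op for the encoder:

```javascript
// Sketch: scale down the sent resolution so bandwidth/CPU can go
// toward sustaining the framerate instead of pixels.
function withDownscale(params, factor) {
  const encodings = (params.encodings && params.encodings.length)
    ? params.encodings
    : [{}];
  return {
    ...params,
    encodings: encodings.map(e => ({ ...e, scaleResolutionDownBy: factor }))
  };
}

// In a browser:
// const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
// // The applyConstraints experiment (don't expect much from it):
// // await sender.track.applyConstraints({ frameRate: { min: 24 } });
// await sender.setParameters(withDownscale(sender.getParameters(), 2));
```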
Someone might come along with something smarter or more correct, but this should at least help a bit :)