Nathan,
Your first value is correct, since sin(NaR) = NaR.
I can confirm that if your input is 0b1000000000000001 = –maxPos = –2^56 = –72057594037927936, then the sine of that is the number you got from Wolfram Alpha, and the 16-bit posit (eS = 2) for it is 0b1100000010000111 (I like to color-code the bit fields for legibility).

It sounds like you may be new to the problem of argument reduction. You need to store the value of π to enough precision that your angle modulo π is correct to high precision, high enough that the result rounds correctly. This is a classic problem, and some great shortcuts have been found for it, because you only need a window of digits of π, not all of them. Beyond some large magnitude, you are multiplying an integer by 2π, and you can ignore the effect of that; below some small place value, the digits cannot affect the sine of the reduced angle. The window moves as you increase the magnitude of the argument.
It looks like you multiplied that big number by some representation of π, perhaps a double-precision float with only 53 bits of significand, and your modulo logic returned zero, so you got sin(posit) = 0.
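To make the failure mode concrete, here is a minimal Python sketch (not full Payne–Hanek reduction; the hex constant for π and the helper name reduce_angle are my own illustration, not part of any standard library). It reduces the angle with π stored to about 192 bits as an exact rational, and contrasts that with naive reduction using the 53-bit double value of π:

```python
import math
from fractions import Fraction

# pi to ~192 bits, as an exact rational (first 48 hex digits of pi).
PI = Fraction(0x3243F6A8885A308D313198A2E03707344A4093822299F31D, 16**47)

def reduce_angle(x: int) -> float:
    """Return x mod 2*pi, mapped into [-pi, pi], using the 192-bit pi."""
    q = round(Fraction(x) / (2 * PI))   # nearest integer multiple of 2*pi
    return float(Fraction(x) - q * (2 * PI))

x = 2**56
# Naive reduction with the 53-bit double pi: the quotient is ~1.1e16, so
# the rounding error in float(2*pi) is amplified into whole radians.
bad = math.fmod(x, 2 * math.pi)
good = reduce_angle(x)
print(bad, good)
```

For an argument near 2^56, the two reductions disagree by an amount on the order of radians, which is how a 53-bit π can make the modulo come out as effectively garbage (or zero, if the logic snaps small remainders to zero).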
While the Posit Standard includes Sin(posit) as a required function, purely for compatibility with just about every existing environment, I strongly recommend that people use SinPi(posit) = Sin(posit × π) instead. Outside some range, all posits are signed integers (no fractional parts), and the SinPi of those values is exactly zero, which greatly simplifies the code and makes it run faster. Those angle values are the ones you need for radix-2 and radix-4 Fast Fourier Transforms, anyway. It seems ridiculous to go around the unit circle on the order of 2^54 times and then want a precise value of a trig function, any trig function.
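To sketch why SinPi is so much cleaner, here is an illustration in plain Python doubles rather than posits (the function name sinpi and its exact-case handling are my own sketch, not the standard's wording):

```python
import math

def sinpi(x: float) -> float:
    """sin(pi * x), exact for integer and half-integer x.
    Sketch only: the general case still rounds pi * r in double precision."""
    r = math.fmod(x, 2.0)          # exact: fmod of two doubles is exact
    if r > 1.0:
        r -= 2.0
    elif r < -1.0:
        r += 2.0
    if r == math.floor(r):         # x is an integer: sin(pi*x) is exactly 0
        return 0.0
    if abs(r) == 0.5:              # half-integer: sin(pi*x) is exactly +/-1
        return math.copysign(1.0, r)
    return math.sin(math.pi * r)

# Every double of magnitude >= 2^52 is an integer, so the reduction is
# exact and the answer is exactly zero -- no big-pi machinery needed.
print(sinpi(2.0**56))   # 0.0
```

The reduction modulo 2 is exact in the working arithmetic, so the hard part of the problem simply disappears for large arguments.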
Don't reinvent the wheel… just look up the standard ways to do high-accuracy argument reduction for trig functions and apply them to the input values.
Once you have the reduced argument, I recommend the methods described in "An Approach to Generate Correctly Rounded Math Libraries for New Floating-Point Variants" by Jay Lim et al. They applied my "Minefield Method" with full automation, and they get correctly rounded functions that are uniformly fast for all inputs. While Intel's ICC math library and the gcc math library have many outputs that are not correctly rounded, the method of that paper not only achieved perfect rounding but also ran faster than ICC.
If you want to discuss further, maybe we should take this off the shared email thread, and I can share my algorithms for SinPi(posit), written in Mathematica. Unlike the methods of Jay Lim, I use 100% integer operations (no reliance on the availability of fast floating-point arithmetic), so those should be even faster than the ones described in Lim's paper. They might be helpful just for getting all the exception regions right, such as posit values so small that sin(posit) can return posit as the correctly rounded result.
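On that last point, the small-argument region is easy to see even with doubles (an illustration in plain Python, not my posit code): since sin(x) = x − x^3/6 + …, once x^3/6 drops below half an ulp of x, the correctly rounded sine is x itself.

```python
import math

# When the first neglected Taylor term x**3/6 is smaller than half an
# ulp of x, rounding sin(x) to working precision gives back x exactly.
x = 1e-9
assert x**3 / 6 < math.ulp(x) / 2
print(math.sin(x) == x)   # True on a libm with a faithful small-x path
```

A correctly rounded implementation has to detect this region explicitly; otherwise it wastes work (or loses accuracy) computing a polynomial whose answer is just the input.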
I'll attach a plot of SinPi(posit) where posit ranges from 0 to 32767 as a bit-string input (representing real values from 0 to 2^56). You can see how organized and predictable the output becomes, as opposed to the chaotic and usually meaningless scatter plot you get with the Sin function (whether using posits or floats).
Best,
John