Hi again Alex,
I have been there myself, forgetting to set the build target to x64 :-) Glad to hear that it works now.
To simplify usage, I am preparing a simple VS 2010 project that I intend to upload to the GitHub repository once it is ready. I am struggling a little with the x86 builds right now, but eventually I should be able to upload something.
While doing this, I actually stumbled upon problems trying to invoke the 3.9.2 binaries, and the README file shipped with those binaries also states that the DLLs do not work with VS 2010. So I was apparently mistaken when claiming I had been able to use the 3.9.2 binaries. Anyway, I have tested with the 3.9.1 binaries, and they do seem to be invokable from VS 2010 applications.
Regarding your questions, you might be able to get more expert advice from the main Ipopt mailing list, but here is what I think:
1. No, you always need to provide at least the first derivatives. There is support for approximating the Hessian (by setting the "hessian_approximation" option to "limited-memory"), but there is no first-derivative approximation support in Ipopt. If your objective is not too complex, you might be able to use automatic differentiation; according to Wikipedia there are at least two C# AD libraries, but I have tried neither. Alternatively, you could fall back on finite differences (see the first sketch after this list), although I am not sure how efficient that would be.
2. If I am not mistaken, you still need to provide dummy implementations of the callbacks. I think it will suffice to leave the output arrays unchanged and return true (see the second sketch below).
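To illustrate the finite-difference fallback from point 1: below is a minimal sketch of a central-difference gradient in C#. Everything in it (the class name, the fixed step size, the example function) is my own illustration and not part of csipopt; you would call something like this from your gradient callback.

    using System;

    static class FiniteDifference
    {
        // Central-difference approximation of the gradient of f at x.
        // grad must have the same length as x; h is a naive fixed step size.
        public static void Gradient(Func<double[], double> f, double[] x, double[] grad)
        {
            const double h = 1e-7;
            var xp = (double[])x.Clone();
            for (int i = 0; i < x.Length; i++)
            {
                double xi = xp[i];
                xp[i] = xi + h;
                double fPlus = f(xp);
                xp[i] = xi - h;
                double fMinus = f(xp);
                xp[i] = xi;                             // restore coordinate i
                grad[i] = (fPlus - fMinus) / (2.0 * h); // O(h^2) error
            }
        }
    }

For example, for f(x) = x0^2 + 3*x1 at (1, 2):

    var grad = new double[2];
    FiniteDifference.Gradient(v => v[0] * v[0] + 3.0 * v[1], new[] { 1.0, 2.0 }, grad);
    // grad is now approximately { 2.0, 3.0 }

Keep in mind that each gradient evaluation then costs 2*n objective evaluations, which is why I am unsure how efficient this would be.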
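And to illustrate point 2: a sketch of what a dummy Hessian callback could look like, together with enabling the limited-memory approximation. A caveat: I am writing this from memory, so the parameter list below (which mirrors Ipopt's C interface, Eval_H_CB) and the AddOption call are assumptions on my part; please check them against the csipopt source.

    static class DummyCallbacks
    {
        // With hessian_approximation = "limited-memory", Ipopt should never
        // use these values, so we leave iRow/jCol/values untouched and just
        // report success.
        // NOTE: parameter list copied from Ipopt's C interface; verify the
        // exact delegate name and order against csipopt.
        public static bool EvalH(
            int n, double[] x, bool newX, double objFactor,
            int m, double[] lambda, bool newLambda,
            int neleHess, int[] iRow, int[] jCol, double[] values)
        {
            return true;
        }
    }

    // Somewhere after constructing the problem (AddOption name and overloads
    // are from memory, please verify):
    // problem.AddOption("hessian_approximation", "limited-memory");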
Actually, if you only have variable bounds in your problem, there might be other optimization codes that are more efficient than Ipopt. I think I recall a recent discussion on the mailing list indicating that Ipopt is not particularly suited for bounds-only NLPs. I might be mistaken, though.
I hope this response has been of help to you. Good luck with your future use of (cs)ipopt!
Best regards,
Anders