For what it's worth, I ran into the same issue on the same platform (MS Windows) about two years ago. Do note, however, that this was with my own Tesseract build!
My investigation at the time showed that OpenMP, once triggered to start, would run my 16 cores at 100% forever, regardless of the actual workload. Using a sampling profiler I found OpenMP was simply running 16 threads, where every thread that wasn't given any work at that moment was idling by spinning on the work queue, driving the CPU to max temperature without doing anything useful.
This experience with OpenMP was in line with what I observed it doing with other test applications on my machine (not built by me): CPU nice and quiet until the first #pragma omp is hit, then *BAM!* all cores maxed out until the end of the application. The stuff that can run multithreaded does, but that's always only part of the code / run-time; OpenMP kept my cores at full throttle by spinning through the intermissions, until the application terminated. So my preliminary conclusion was that the issue sat inside OpenMP itself (or that I was missing some non-obvious setting XYZ for OpenMP). Anyway, I booted OpenMP off my system and went back to doing multithreading old school, which is sometimes hard but always felt more comfortable to me.
Take-away: if you want to investigate what happens on your end, grab a sampling profiler (I used a commercial one from Intel at the time, IIRC) and build from C/C++ source (or use other means to get legible function names from debug info in the profiler reports), e.g. using Visual Studio. It's work, it's effort, but nobody else can look into your box(es), so otherwise you'll always depend on others' guesswork.