Hugin computing power


E Kow

Jan 11, 2024, 6:19:44 AM
to hugin and other free panoramic software
Hi,

As mentioned earlier I am often stitching 500 or more microscope images. 
I am thinking to get a new dedicated computer for this. 
How much computing power can Hugin utilize (RAM, GPU etc)?
Does it make sense to buy a really high spec desktop computer with high end graphics card?

David W. Jones

Jan 11, 2024, 11:04:46 PM
to hugi...@googlegroups.com

Hello!

I don't know how much Hugin can utilize regarding RAM, GPU, etc., but I run Hugin on a laptop with an 8-core/16-thread i9, 64GB of RAM, and a 2TB NVMe PCIe drive. The laptop has two GPUs - an Intel UHD630 and an NVidia GTX-1650 Max.

I run Linux and have never been able to get any application to use the GTX, but the Intel GPU works fine for remapping images.

I've done some big panoramas - not as many frames as yours! - and had the system consume more than 64GB of RAM. I don't think the RAM consumption is related to processes running on a GPU. It comes when Hugin goes to the blending process. Hugin happily runs 16 threads and takes as much memory as it needs.

I'm giving up on laptops as power computing platforms. While modern ones can pack fast processors and almost enough memory, they can't dissipate heat fast enough. Throttling kicks in, and then a processor capable of hitting a nominal 5GHz is running at 3GHz instead, with a temperature reported at 212F.

For comparison, the 2-core, no-thread Pentium 4 in my server (in a midsize desktop tower case) is set to run a constant 3GHz (and does) and runs at 110F. It never throttles.

My current plan would be an AMD 7950X with 128GB (256GB if possible). My current camera shoots 20MP and I prefer to work with RAW and high dynamic range. I'm hoping to replace that camera with a 61MP Sony A7R IVA, so I expect RAM consumption will go up a lot.

I don't know if the Windows version of Hugin supports NVidia GPUs better than the Linux version does. I understand Linux supports AMD GPUs better than the NVidia line.

I'm sure there are people on the list who know more about Hugin and GPUs; maybe they have thoughts?

-- 
David W. Jones
gnome...@gmail.com
wandering the landscape of god
http://dancingtreefrog.com
My password is the last 8 digits of π.

David W. Jones

Jan 12, 2024, 7:33:53 PM
to Maarten Verberne, hugin-ptx
Thanks, Maarten, but I wasn't the original poster. I was just responding
to E Kow and the list with the thoughts I've had.

On 1/11/24 23:12, Maarten Verberne wrote:
> https://www.cpubenchmark.net/singleThread.html
>
> maybe this helps you to find a speedy cpu for hugin.
>
I've already settled on that - Ryzen 7950X. Enblend and other graphics
applications I use really benefit from multiple cores and threads, so
the more cores, the better.

Since I want performance, Intel's continued love affair with efficiency
cores doesn't appeal to me. My experience with Intel performance
processors has been that while Intel loves to chant their peak clock
speeds, their processors can only hit that speed when just a single core
is active. What's the point of that when the software happily and rapidly
uses multiple cores?

The i9 in my laptop has a nominal 5GHz max. The best it has ever done is
4.6GHz, and that only briefly before it throttled down due to temperature.

>
>
> On 12-Jan-24 at 8:52, Maarten Verberne wrote:
>> Hi David,
>>
>> I do not know your working method, but a high-end graphics card is
>> not really needed and won't improve speed that much, since only nona
>> uses it for a split second.
>> Recompiling Hugin so enblend can use the GPU did not lead to a
>> speed increase in my case, so I now work without -gpu in enblend... but
>> that might be different for you.
My working method is interactive. I don't do nearly as many images as
you do!
>>
>> I have stitched about 2 million images this year via cmd scripts.
>> Depending on how many cores you have, you might be able to start more
>> than one cmd process.
>> For instance, I use a Ryzen 5-3600 for stitching and have 3 cmd scripts
>> running simultaneously.
>> As for the -g (GPU use) option for nona, I've discovered nona doesn't
>> work well with NVidia; even an Intel iGPU is quicker... as is an AMD card.

I think nona uses OpenGL, which NVidia doesn't really support. NVidia
wants to lock customers into their platform; the antithesis of OpenGL.
Nona plus the on-board Intel UHD-630 works fine.
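
In case it's useful, this is roughly how I'd invoke it by hand (a sketch
with made-up names; project.pto is the project file and -g asks nona to do
the remapping on the GPU):

    nona -g -m TIFF_m -o remapped project.pto   # writes remapped0000.tif, remapped0001.tif, ...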

Another question to think about.

Multicore CPU: yes, many cores/threads. Start three processes, each gets
a core/thread.

Is the same true about GPUs? Or does a GPU handle input from only one
source at a time? So if script 1 fires off Nona on the GPU, what happens
when script 2 and script 3 try to run nona on the GPU at the same time?

>>
>> Lastly, my system uses 3 HDDs (one for each cmd script that is
>> running), but an SSD would naturally help a bit there speed-wise.
>>
>> All in all it takes me about 1 week to process 160,000 images down to 80,000.
>>
>> If you do find a way to speed that up, I'm very interested.
>> Maarten
>>
Maarten, in my experience, replacing your HDDs with SSDs would
make a big difference. Even connected via SATA cables, an SSD is faster.
NVMe drives (if your motherboard supports them) would be even faster.

If your motherboard doesn't support NVMe, you might invest in a 4-port
PCIe expansion card that adds NVMe connections, and replace your HDDs
with NVMe SSDs on the card. I think it would massively increase read and
write speeds.

>>
>>
>> On 12-Jan-24 at 5:04, David W. Jones wrote:
>>> On 1/11/24 01:19, E Kow wrote:
>>>> Hi,
>>>>
>>>> As mentioned earlier I am often stitching 500 or more microscope
>>>> images.
>>>> I am thinking to get a new dedicated computer for this.
>>>> How much computing power can Hugin utilize (RAM, GPU etc)?
>>>> Does it make sense to buy a really high spec desktop computer with
>>>> high end graphics card?

E Kow, if you're still reading this:

Go for as much processor performance and memory as you can. Hugin spends
nearly all of its processing time running on the CPU and using memory.

Don't worry about the GPU. An Intel or AMD on-board GPU is plenty. Nona is
the only part of Hugin that uses GPUs, and Maarten is right: at worst, nona
takes only a tiny fraction of the total time to remap images using an
onboard Intel UHD GPU. No need to spend money on a high-end, mid-level, or
low-end GPU.

Maarten Verberne

Jan 13, 2024, 3:34:09 AM
to David W. Jones, hugi...@googlegroups.com
> I've already settled on that - Ryzen 7950X. Enblend and other graphics
> applications I use really benefit from multiple cores and threads, so
> the more cores, the better.

I'm still dreaming of replacing my 3600 with a 5900 or 5950... but alas I
do not have the resources, let alone to move to a new platform. A 7950
would probably outpace the 3600 by a factor of 2 or more.

>
> Since I want performance, Intel's continued love affair with efficiency
> cores doesn't appeal to me. My experience with Intel performance
> processors has been that while Intel loves to chant their peak clock
> speeds, their processors can only hit that speed when just a single core
> is active. What's the point of that when the software happily and rapidly
> uses multiple cores?
>
Since the whole process of stitching is largely linear, it is
mostly a single-threaded process. That does not mean it won't run on
multiple cores; they just wait on each other.
So in my experience with Hugin, the best single-thread machine is the
quickest.


> My working method is interactive. I don't do nearly as many images as
> you do!

It was an experiment for me, but I'm not sure I will continue this; the
fact that, by now, I have a dedicated PC running for some 3 months a year
just to stitch the images is a bit much.


> I think nona uses OpenGL, which NVidia doesn't really support. NVidia
> wants to lock customers into their platform; the antithesis of OpenGL.
> Nona plus the on-board Intel UHD-630 works fine.

True, however NVidia does state their RTX 30xx series supports
OpenGL, but not without a heavy penalty. When I saw my UHD630 run
circles around the dedicated RTX while the RTX was drawing some 100
watts, I knew all I needed to know.

> Another question to think about.
>
> Multicore CPU: yes, many cores/threads. Start three processes, each gets
> a core/thread.
>
> Is the same true about GPUs? Or does a GPU handle input from only one
> source at a time? So if script 1 fires off Nona on the GPU, what happens
> when script 2 and script 3 try to run nona on the GPU at the same time?

It stays linear, so they will wait for each other to finish, and
sometimes one of the scripts gets ahead. The same is true for the GPU.

But with only one cmd script my CPU will only run at some 20-30% (all
cores), so that is where multiple scripts come in handy.
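
To sketch the idea (bash here rather than the Windows cmd scripts I
actually use, and all names and paths made up; each copy grinds through
its own folder of project files):

    #!/bin/sh
    # stitch.sh: stitch every project in the given folder, one after another
    for pto in "$1"/*.pto; do
        base=${pto%.pto}
        nona -g -m TIFF_m -o "$base" "$pto"    # remap: the brief GPU peak
        enblend -o "$base.tif" "$base"0*.tif   # blend: the long CPU-bound stretch
    done

Then start three of them, ideally one per disk:

    ./stitch.sh /mnt/disk1/jobs & ./stitch.sh /mnt/disk2/jobs & ./stitch.sh /mnt/disk3/jobs & wait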


> Maarten, in my experience, replacing your HDDs with SSDs would
> make a big difference. Even connected via SATA cables, an SSD is faster.
> NVMe drives (if your motherboard supports them) would be even faster.
>
> If your motherboard doesn't support NVMe, you might invest in a 4-port
> PCIe expansion card that adds NVMe connections, and replace your HDDs
> with NVMe SSDs on the card. I think it would massively increase read and
> write speeds.
>
>
I do have room for one extra M.2 and have enough SATA ports left, and am
aware of the advantage of SSDs, but I just can't afford that... at least
not this year.
My solution of using one HDD per running script and one 'master' for the
originals is the best I can currently do with the means I have.
If I had a method to separate the nona TIFFs from the location where
enblend writes the final image, I might be able to gain a bit... but I
wouldn't know how, or if that can be done.

But if I decide to continue this year, there will come a time that I get
one or more SSDs for this. (I would need something like 4TB just
to get started.)

>
> Go for as much processor performance and memory as you can. Hugin spends
> nearly all of its processing time running on the CPU and using memory.

CPU and GPU are more important; in my experience Hugin isn't that memory
intensive.

David W. Jones

Jan 13, 2024, 5:11:42 AM
to hugi...@googlegroups.com
On 1/12/24 22:34, Maarten Verberne wrote:
>> I've already settled on that - Ryzen 7950X. Enblend and other
>> graphics applications I use really benefit from multiple cores and
>> threads, so the more cores, the better.
>
> I'm still dreaming of replacing my 3600 with a 5900 or 5950... but alas
> I do not have the resources, let alone to move to a new platform. A 7950
> would probably outpace the 3600 by a factor of 2 or more.
I don't have resources for it yet, either. But if my Dell laptop gives
up the ghost (the Thunderbolt/USB-C port died last year), the
replacement dollars go into the desktop. That currently has a
motherboard running an Intel Pentium 4, so a motherboard replacement/new
memory/new cooling system is inevitable.
>
>>
>> Since I want performance, Intel's continued love affair with
>> efficiency cores doesn't appeal to me. My experience with Intel
>> performance processors has been that while Intel loves to chant their
>> peak clock speeds, their processors can only hit that speed when just a
>> single core is active. What's the point of that when the software
>> happily and rapidly uses multiple cores?
>>
> Since the whole process of stitching is largely linear, it is
> mostly a single-threaded process. That does not mean it won't run on
> multiple cores; they just wait on each other.
> So in my experience with Hugin, the best single-thread machine is the
> quickest.

Not in my experience. Stitching starts, 16 threads fire up, and checking
in htop shows none of them waiting for others.

I understand that enblend isn't always compiled with multiprocessor
support; maybe that's the difference?
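
One way to check, I think: on my 4.2 build, the version banner lists the
compiled-in extras (the exact lines will differ per build, so treat this
output as an example):

    enblend --version
    # enblend 4.2
    # ...
    # Extra feature: OpenMP: yes   <-- the multiprocessor support line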

>
>> My working method is interactive. I don't do nearly as many images as
>> you do!
>
> It was an experiment for me, but I'm not sure I will continue this;
> the fact that, by now, I have a dedicated PC running for some 3 months
> a year just to stitch the images is a bit much.
>
Yeah, might be a bit much. But might be more cost effective than
alternatives?
>
>> I think nona uses OpenGL, which NVidia doesn't really support. NVidia
>> wants to lock customers into their platform; the antithesis of
>> OpenGL. Nona plus the on-board Intel UHD-630 works fine.
>
> True, however NVidia does state their RTX 30xx series supports
> OpenGL, but not without a heavy penalty. When I saw my UHD630 run
> circles around the dedicated RTX while the RTX was drawing some 100
> watts, I knew all I needed to know.

Yes, Nvidia isn't on my list. If I was using Adobe under Windows on this
machine, I suppose the resident RTX would get used.

I understand AMD's GPUs are more power efficient than the RTX3000 series
GPUs.

>
>> Another question to think about.
>>
>> Multicore CPU: yes, many cores/threads. Start three processes, each
>> gets a core/thread.
>>
>> Is the same true about GPUs? Or does a GPU handle input from only one
>> source at a time? So if script 1 fires off Nona on the GPU, what
>> happens when script 2 and script 3 try to run nona on the GPU at the
>> same time?
>
> It stays linear, so they will wait for each other to finish, and
> sometimes one of the scripts gets ahead. The same is true for the GPU.
>
> But with only one cmd script my CPU will only run at some 20-30% (all
> cores), so that is where multiple scripts come in handy.
Ah. Interesting.
>
>> Maarten, in my experience, replacing your HDDs with SSDs would
>> make a big difference. Even connected via SATA cables, an SSD is faster.
>> NVMe drives (if your motherboard supports them) would be even faster.
>>
>> If your motherboard doesn't support NVMe, you might invest in a
>> 4-port PCIe expansion card that adds NVMe connections, and replace
>> your HDDs with NVMe SSDs on the card. I think it would massively
>> increase read and write speeds.
>>
> I do have room for one extra M.2 and have enough SATA ports left, and
> am aware of the advantage of SSDs, but I just can't afford that... at
> least not this year.
> My solution of using one HDD per running script and one 'master' for the
> originals is the best I can currently do with the means I have.
> If I had a method to separate the nona TIFFs from the location where
> enblend writes the final image, I might be able to gain a bit... but I
> wouldn't know how, or if that can be done.
>
Hmm, I'd think that since you're doing this through scripts, you'd have
control over where final images are written. I've never done that, but
might be worth asking about.
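
I'm imagining something like this (a sketch with made-up paths: nona's
intermediate TIFFs land on one drive, enblend writes the final image to
another):

    nona -m TIFF_m -o /mnt/scratch/pano_ project.pto              # intermediates on drive 1
    enblend -o /mnt/output/panorama.tif /mnt/scratch/pano_*.tif   # final image on drive 2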
> But if I decide to continue this year, there will come a time that I
> get one or more SSDs for this. (I would need something like 4TB
> just to get started.)
>
I survive on a mere 2TB M2 drive, but don't do as much heavy lifting as you.

>>
>> Go for as much processor performance and memory as you can. Hugin spends
>> nearly all of its processing time running on the CPU and using memory.
>
> CPU and GPU are more important; in my experience Hugin isn't that
> memory intensive.
>
Hmm, I've had Hugin (particularly enblend) consume more than the 64GB
RAM in my laptop when stitching. Probably depends on the sizes of the
source images and the final image. Perhaps the image format, too?

I don't think GPU matters at all, as you pointed out about Intel onboard
GPUs outrunning the fancy GPUs. If the GPU supports OpenGL (without
throwing you out of house and home with its electric bill!), then any
basic GPU is good. :)

wirz

Jan 13, 2024, 8:51:11 AM
to hugi...@googlegroups.com
Hei!


>>>
>>> I have stitched about 2 million images this year via cmd scripts.
>>> Depending on how many cores you have, you might be able to start more
>>> than one cmd process.
>>> For instance, I use a Ryzen 5-3600 for stitching and have 3 cmd scripts
>>> running simultaneously.
>>> As for the -g (GPU use) option for nona, I've discovered nona doesn't
>>> work well with NVidia; even an Intel iGPU is quicker... as is an AMD card.
>
> I think nona uses OpenGL, which NVidia doesn't really support. NVidia
> wants to lock customers into their platform; the antithesis of OpenGL.
> Nona plus the on-board Intel UHD-630 works fine.

One might add that enblend optionally uses OpenCL.  I recently started
using that again on Linux with an Intel UHD620 and it works fine.  NVidia
isn't the biggest fan of supporting OpenCL, of course, but the last time
I tried there were no unexpected issues with OpenCL on Linux or Windows
on NVidia hardware.


cheers, Lukas Wirz

Maarten Verberne

Jan 13, 2024, 9:22:40 AM
to David W. Jones, hugin and other free panoramic software


On 13-Jan-24 at 11:11, David W. Jones wrote:
> On 1/12/24 22:34, Maarten Verberne wrote:
> I don't have resources for it yet, either. But if my Dell laptop gives
> up the ghost (the Thunderbolt/USB-C port died last year), the
> replacement dollars go into the desktop. That currently has a
> motherboard running an Intel Pentium 4, so a motherboard replacement/new
> memory/new cooling system is inevitable.

In that case even a few-generations-old i3 will speed things up
considerably due to AVX :)

> Not in my experience. Stitching starts, 16 threads fire up, and checking
> in htop shows none of them waiting for others.

Ah yes, that part is not single-threaded; what I meant was that it starts
with nona > enblend and then goes back to the next image, nona > enblend.

If I start one process you'll see so many peaks per minute on the GPU,
where each peak is 2 images being processed with nona.

If I start 3 (close after each other) you'll see a multitude of peaks,
close to 3x as many per minute.

But after a while they will 'latch up' for quite some time, so the peaks
on the GPU get wider and only as many peaks are left as with one
script.
Every now and then one of the quicker cores beats one of the slower
cores to finishing an image save, and the whole sequence of loose
peaks starts over until they come together again.

>
> I understand that enblend isn't always compiled with multiprocessor
> support; maybe that's the difference?
>

I'm using the precompiled version of Hugin on Windows, and that seems to
be compiled properly for multiprocessing; it only lacks support for enblend
GPU use out of the box. But I didn't find any speed improvements by
using enblend -gpu when I tried that.

> Yeah, might be a bit much. But might be more cost effective than
> alternatives?

At this moment, that is definitely true for me :)

>
> Yes, Nvidia isn't on my list. If I was using Adobe under Windows on this
> machine, I suppose the resident RTX would get used.
>
> I understand AMD's GPUs are more power efficient than the RTX3000 series
> GPUs.
>

Yes they are. I used an HD6850 for a while that was twice the speed
of the UHD630 and 6x an RTX3060... and that's a 10-year-old card :)
However, I think the Arc might be the real killer.

> Hmm, I'd think that since you're doing this through scripts, you'd have
> control over where final images are written. I've never done that, but
> might be worth asking about.

I think it is too little gain to pursue that for now.

> I survive on a mere 2TB M2 drive, but don't do as much heavy lifting as
> you.
>
For last year's images I have 12TB in store; sorting and then stitching
triples that for the duration of the project.

> Hmm, I've had Hugin (particularly enblend) consume more than the 64GB
> RAM in my laptop when stitching. Probably depends on the sizes of the
> source images and the final image. Perhaps the image format, too?

Absolutely.

>
> I don't think GPU matters at all, as you pointed out about Intel onboard
> GPUs outrunning the fancy GPUs. If the GPU supports OpenGL (without
> throwing you out of house and home with its electric bill!), then any
> basic GPU is good. :)
>
It matters for the speed at which nona works, and that's still significant
if you do a lot of stitching.

I might be able to catch some screenshots if you like.

Ah, as Lukas Wirz wrote, it appears it's OpenCL... got those mixed up, but
the point stands: a GPU that does OpenCL well is what you want, and that
ain't NVidia; it's Intel and AMD.

About speedy cards, is it the FP64 performance that makes specific cards
shine more?

wirz

Jan 13, 2024, 10:10:18 AM
to hugi...@googlegroups.com

>>
>> I don't think GPU matters at all, as you pointed out about Intel
>> onboard GPUs outrunning the fancy GPUs. If the GPU supports OpenGL
>> (without throwing you out of house and home with its electric bill!),
>> then any basic GPU is good. :)
>>
> It matters for the speed at which nona works, and that's still
> significant if you do a lot of stitching.
>
> I might be able to catch some screenshots if you like.
>
> Ah, as Lukas Wirz wrote, it appears it's OpenCL... got those mixed up,
> but the point stands: a GPU that does OpenCL well is what you want,
> and that ain't NVidia; it's Intel and AMD.

No no, what I wrote was an addition, not a correction.  Hugin / nona use
OpenGL and enblend uses OpenCL.
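
On Linux one can check what the drivers expose for each API (glxinfo ships
in mesa-utils, clinfo in the clinfo package; package names may differ per
distro):

    glxinfo | grep "OpenGL renderer"   # the GPU that nona's OpenGL path will use
    clinfo | grep -i "device name"     # the devices enblend's OpenCL path can see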



Maarten Verberne

Jan 13, 2024, 10:54:31 AM
to hugi...@googlegroups.com


On 13-Jan-24 at 16:10, wirz wrote:
In that case, OpenGL is the issue with NVidia; I couldn't tell if OpenCL
is a problem for NVidia... I didn't test that combo with the RTX and enblend.

David W. Jones

Jan 13, 2024, 8:54:10 PM
to hugin-ptx
Oh, it would have been a bit quicker, but my swap partition is also on
the NVMe SSD, so it's pretty fast.

For comparison, ages ago, the desktop machine originally had a Sempron
processor and two GB of RAM. It took about 8 hours to stitch a panorama
made from 6MP images.

On 1/13/24 06:27, Maarten Verberne wrote:
> If you do not have >64GB I would try to separate the images into 2
> groups, stitch them first, and then stitch the 2 group frames together...
>
> Although it means more work for you and might cause some quality
> degradation, it will probably be quicker if you can keep it within
> your available RAM.
>
>
> On 13-Jan-24 at 11:11, David W. Jones wrote:
>> Hmm, I've had Hugin (particularly enblend) consume more than the 64GB
>> RAM in my laptop when stitching. Probably depends on the sizes of the
>> source images and the final image. Perhaps the image format, too?


David W. Jones

Jan 13, 2024, 8:59:34 PM
to hugin-ptx
Interestingly, while I have OpenCL installed here, neither Hugin, nona,
nor enblend uses it. The only apps it seems to be connected with are
Ardour (pro audio DAW), Blender, Kdenlive, etc.

David W. Jones

Jan 13, 2024, 9:11:32 PM
to hugin-ptx
On 1/13/24 04:22, Maarten Verberne wrote:
>
>
> On 13-Jan-24 at 11:11, David W. Jones wrote:
>> On 1/12/24 22:34, Maarten Verberne wrote:
>> I don't have resources for it yet, either. But if my Dell laptop
>> gives up the ghost (the Thunderbolt/USB-C port died last year), the
>> replacement dollars go into the desktop. That currently has a
>> motherboard running an Intel Pentium 4, so a motherboard
>> replacement/new memory/new cooling system is inevitable.
>
> In that case even a few-generations-old i3 will speed things up
> considerably due to AVX :)
My Dell has an i9, so AVX and such is already there. The desktop upgrade
will bring those benefits to the desktop machine.
>
>> Not in my experience. Stitching starts, 16 threads fire up, and
>> checking in htop shows none of them waiting for others.
>
> Ah yes, that part is not single-threaded; what I meant was that it
> starts with nona > enblend and then goes back to the next image, nona > enblend.
Nona is single-threaded. It runs through image remapping in less than 5
seconds on the UHD630.
>
> If I start one process you'll see so many peaks per minute on the GPU,
> where each peak is 2 images being processed with nona.
>
> If I start 3 (close after each other) you'll see a multitude of peaks,
> close to 3x as many per minute.
>
> But after a while they will 'latch up' for quite some time, so the
> peaks on the GPU get wider and only as many peaks are left as
> with one script.
> Every now and then one of the quicker cores beats one of the slower
> cores to finishing an image save, and the whole sequence of
> loose peaks starts over until they come together again.
>
>>
>> I understand that enblend isn't always compiled with multiprocessor
>> support; maybe that's the difference?
>>
>
> I'm using the precompiled version of Hugin on Windows, and that seems
> to be compiled properly for multiprocessing; it only lacks support for
> enblend GPU use out of the box. But I didn't find any speed improvements
> by using enblend -gpu when I tried that.

Enblend 4.2 here doesn't offer the option to use the GPU. The "-g"
option here says "associated-alpha hack for Gimp (before version 2) and
Cinepaint".

>
>> Yeah, might be a bit much. But might be more cost effective than
>> alternatives?
>
> At this moment, that is definitely true for me :)
>
So we shall save up our pennies!
>>
>> Yes, Nvidia isn't on my list. If I was using Adobe under Windows on
>> this machine, I suppose the resident RTX would get used.
>>
>> I understand AMD's GPUs are more power efficient than the RTX3000
>> series GPUs.
>>
>
> Yes they are. I used an HD6850 for a while that was twice the
> speed of the UHD630 and 6x an RTX3060... and that's a 10-year-old card :)
> However, I think the Arc might be the real killer.
Arc cards sound interesting, but Linux support for AMD is much more mature.
>
>> Hmm, I'd think that since you're doing this through scripts, you'd
>> have control over where final images are written. I've never done
>> that, but might be worth asking about.
>
> I think it is too little gain to pursue that for now.
>
>> I survive on a mere 2TB M2 drive, but don't do as much heavy lifting
>> as you.
>>
> For last year's images I have 12TB in store; sorting and then stitching
> triples that for the duration of the project.
My server has 14TB.
>
>> Hmm, I've had Hugin (particularly enblend) consume more than the 64GB
>> RAM in my laptop when stitching. Probably depends on the sizes of the
>> source images and the final image. Perhaps the image format, too?
>
> Absolutely.
>
Many years ago, I was using a 6MP DSLR. I decided to run cpfind from
the command line, set to use --fullscale. Processing just a single 5MP
image consumed 2GB of memory.
>>
>> I don't think GPU matters at all, as you pointed out about Intel
>> onboard GPUs outrunning the fancy GPUs. If the GPU supports OpenGL
>> (without throwing you out of house and home with its electric bill!),
>> then any basic GPU is good. :)
>>
> It matters for the speed at which nona works, and that's still
> significant if you do a lot of stitching.
>
> I might be able to catch some screenshots if you like.
>
> Ah, as Lukas Wirz wrote, it appears it's OpenCL... got those mixed up,
> but the point stands: a GPU that does OpenCL well is what you want,
> and that ain't NVidia; it's Intel and AMD.
>
> About speedy cards, is it the FP64 performance that makes specific
> cards shine more?

I have no idea.

Maarten Verberne

Jan 14, 2024, 2:11:33 AM
to hugi...@googlegroups.com
To make it more confusing: while it is nona -g, it is enblend -gpu.

But you'll have to compile enblend yourself to add GPU support.
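
On Linux the build would go roughly like this (a sketch only - I haven't
verified the option name myself, so treat --with-opencl as an assumption
and check ./configure --help in your source tree first):

    tar xf enblend-enfuse-4.2.tar.gz && cd enblend-enfuse-4.2
    ./configure --with-opencl   # option name assumed; verify with ./configure --help
    make && sudo make install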


On 14-Jan-24 at 3:11, David W. Jones wrote:

David W. Jones

Jan 14, 2024, 2:46:42 AM
to hugin-ptx
Ah. Doesn't sound worth it to me. Thanks.

David W. Jones

Jan 16, 2024, 7:14:51 PM
to hugin-ptx
Very impressive gain! I do think replacing the external HDDs with
internal NVMe drives would really speed up reading those 8K images and
writing out intermediate files.

I sometimes think of Hugin as a GUI for the tools in pano-tools. Useful
front end for me, while pano-tools can be used directly by those who do
heavy image processing (sounds like what you do).

Are the results of what you do publicly visible anywhere?

On 1/13/24 23:48, Maarten Verberne wrote:
> When I replied yesterday, I realized it was time to start sorting and
> stitching the last 2 months of 2023... I was dragging my feet starting
> this.
>
> With 3 terminals open running the script, I see one after another
> print "nona.exe: using graphic card", then one after another "Done Nona"
> (from my script), and then one after another "Done Enblend", and then it
> starts again.
> The system now runs at 100 watts total, where about 15 watts is for the
> AMD RX 480 (only 10% peak load when nona is active); the rest is CPU
> (average 65% load, R5-3600), board, NVMe + 1 HDD and >80% power
> supply... the 3 external HDDs are not part of the 100 watts.
>
> Since yesterday it has stitched up some 20,000 images that are 8K,
> while with one script running it would be 7,000-8,000 images.
> Impressive gain, isn't it?
> Still, 10 days to go before it's finished with this run :)
>
> And that's the thing that keeps me from moving to more than 8K for
> this: time to stitch and HDD space.
> But for smaller ideas in the future, I love Hugin.
>
>
> On 14-Jan-24 at 8:46, David W. Jones wrote:

Jeff “weltyj” Welty

Jan 21, 2024, 10:10:51 AM
to hugin and other free panoramic software
A few more thoughts:
---
multiblend is exceptionally fast compared to enblend (maybe 10x to 20x faster, as I recall).   I found in most situations it produces output just as visually appealing as enblend.  I can't remember the details now, but a couple of years ago I noticed some seam issues in some cases.  It is worth a look.
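
If you want to try it, the invocation is essentially a drop-in for the
enblend step - something like this (file names made up; check multiblend
--help for the options your build supports):

    multiblend -o panorama.tif remapped0000.tif remapped0001.tif remapped0002.tif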
---
I've been doing some OpenCL coding recently and have a general idea of how the GPU processing works:
There was a question upstream about how GPUs are utilized. Basically the same way as a multi-core processor with multiple threads. The application asks to use the GPUs, and a mid-level management layer coordinates all the GPU requests. What can slow things down the most in GPU-land (I suspect) is when multiple apps are all heavily using the GPUs and a lot of data is being moved back and forth between the general CPU memory area and the GPU memory.

If you are also running something like darktable (or Adobe...) you ABSOLUTELY should be looking at the graphics chip. For example, my Lenovo laptop cost about 30% more and got me the NVIDIA GeForce GTX 1050 Ti chip, which is (was!) on the low end of the scale. But it has 1024 GPU cores and 16 gigs of GPU memory. darktable will literally run 100x faster for operations that are coded specifically for the GPUs.
---
As GnomeNomad said: SSD - yes. SSD+NVMe -- double yes.
---
(If you aren't already doing this)  If finding control point matches is really slow, perhaps you could narrow down that process to only look for matches between images you *know* overlap, creating a lot of intermediate pto files, and then use pto_merge as a final step.
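
Sketched out with made-up file names, that could look like this (cpfind
and pto_merge both ship with Hugin):

    cpfind -o row1_cp.pto row1.pto                      # match control points only within a known-overlapping subset
    cpfind -o row2_cp.pto row2.pto
    pto_merge -o combined.pto row1_cp.pto row2_cp.pto   # merge the partial projects into one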
---
This info is a little out of date, but many years back, when running highly CPU-intensive processes (Linux), we found that turning off the virtual CPUs in the BIOS was a performance gain:

  - The virtual CPUs are handy when you have something like a web server, where there is plenty of idle time between processing requests and thread creation/deletion is using extra time.

  - But if everything is being performed on the same system CPUs, with data sitting in memory (SSD/NVMe), only using the physical cores results in a noticeable gain in processing time compared to virtual cores -- because in the virtual-core case threads are being swapped in/out of the physical cores (context-switching overhead), which eats into the processing time with no actual gain in the amount of data processed. I don't remember the exact gain, but I think it was in the range of 20%.
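
On a reasonably recent Linux kernel you can also test this without a BIOS
visit (needs root; as far as I know it resets at reboot, and the file is
only present on kernels that expose SMT control):

    cat /sys/devices/system/cpu/smt/control            # "on" means virtual CPUs (SMT) are active
    echo off > /sys/devices/system/cpu/smt/control     # turn them off until the next reboot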
---
Hope that helps

Maarten Verberne

Jan 24, 2024, 4:04:07 AM
to hugi...@googlegroups.com
My first Hugin project turned into a trilogy.
While the story stayed the same, you might still have a preference.
It will be publicly available tomorrow, but the links should already be
active.


RED: https://youtu.be/LMaIQQZKF14
WHITE: https://youtu.be/b1S9WM55-Dg
BLUE: https://youtu.be/JVLevahXJJ4

Chronicle of Breda 2023:
Time travel through the year.
On the left you'll find the Breda Precipitation Level, where the red
line represents the average precipitation for that day. The water level
shows the actual rain that falls and is related to surface water levels.
On top there is a temperature indicator that writes the temperature
change per day and creates an annual overview.
The images follow the date.

David W. Jones

Jan 24, 2024, 11:40:08 PM
to hugin-ptx
That's pretty good. How did you use Hugin in this project?

Maarten Verberne

Jan 25, 2024, 1:26:47 PM
to hugi...@googlegroups.com
I'm not sure what you are asking.
Hugin was used to combine the images of the 2 cams into a panorama; I then
used the template it created for the rest of the images.


On 25-Jan-24 at 5:40, David W. Jones wrote:

David W. Jones

Jan 25, 2024, 6:43:59 PM
to hugin-ptx
Thanks. I didn't know you had two cameras. Very cool!

Monkey

Jan 26, 2024, 6:01:38 AM
to hugin and other free panoramic software
"multiblend is exceptionally fast compared to enblend (maybe 10x to 20x faster, as I recall)."

It's not a fixed increase. Enblend has O(n^2) runtime (it scales with the square of the number of pixels) whereas Multiblend is linear. I don't know of anyone who's tried a terapixel blend with both, but Multiblend should be on the order of 100,000 times faster :D
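
Back-of-the-envelope, for anyone curious where that number comes from
(assuming the 10x-20x figure above was measured on panoramas of very
roughly 10^8 pixels): the ratio of an O(n^2) runtime to an O(n) runtime
grows in proportion to n, so going from 10^8 to 10^12 pixels multiplies
the gap by 10^4 - roughly 10 x 10^4 = 10^5, i.e. ~100,000x.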

"Hugin was used to combine the images of 2 cams in a panorama"

For videos, you might want to consider Avisynth+ instead of Hugin. There are plugins for warping and fusing videos (I keep meaning to release an update for the latter).

Michael Sass

Jan 26, 2024, 7:19:39 PM
to hugi...@googlegroups.com
Hi there,

I have used Hugin to stitch panoramas for a long time and need to download the latest free version, please.

How do I go about it?

Cheers Mike.




David W. Jones

Jan 27, 2024, 12:48:43 AM
to hugin-ptx
For Mac and Windows:

https://hugin.sourceforge.io/

We Linux users have to download the source and compile our own. At least
Debian seems to lag behind on Hugin releases; I don't know about other
distros.

On 1/26/24 12:15, Michael Sass wrote:
> Hi there,
>
> I have used Hugin to stitch panoramas for a long time and need to
> download the latest free version, please.
>
> How do I go about it?
>
> Cheers Mike.


chaosjug

Jan 27, 2024, 3:48:37 PM
to hugi...@googlegroups.com
Hi,

There is also a Flatpak version, which works great in my experience.
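
For reference, installing it from Flathub looks like this (assuming the
application ID net.sourceforge.Hugin, which I believe is what Flathub
uses):

    flatpak install flathub net.sourceforge.Hugin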

Regards,
Stephan

Carl von Einem

Feb 2, 2024, 8:38:15 AM
to hugi...@googlegroups.com
I am running Hugin on a Mac Pro (from 2012, the "cheese grater" casing
similar to the PowerMac G5 tower) with Intel Xeon hardware, using Xubuntu
(which is equivalent to Ubuntu 22.04.3 but comes closer to the well-known
OS X look and feel).
Here I used this installation description:
https://ubuntuhandbook.org/index.php/2022/12/hugin-2022-0-0-released-ubuntu-2204-2004/

Due to regular update intervals my current Hugin version is
Hugin Version: 2023.0.0.d88dc56ded0e

It runs smoothly... I installed Xubuntu on an extra HD, but you can also
use a simple extra partition if there is enough space available on an
existing hard disc.

Carl

On 27.01.24 at 21:48, 'chaosjug' via hugin and other free
panoramic software wrote:

photohounds

Feb 4, 2024, 9:23:52 AM
to hugin and other free panoramic software
Nah, no compile needed.
Just get it from your standard repository.
Works for RHEL and Fedora, and probably others that are RPM-based.
dnf install hugin   <--- from a console will usually do it.