I suppose I should really have said hello on the introductions post first but, like any child with a new toy, I want to get straight on with the exciting stuff and get an issue addressed ASAP. I appreciate that I may not be using the correct tool (guidance appreciated), and that the programming behind the user interface may operate completely differently from PS, but I am getting some super slow reactions.
I bought Affinity to manipulate film camera images, which I've scanned as TIFF files, and to remove the inevitable dust spots I get from scanning. The thing is, however, that what appear to be the same tool (Affinity's Inpainting and PS's Spot Healing) operate at massively different rates. I did a test on the same image, removing the same mark with the same brush pixel width, and whilst PS took less than a second to make the repair, Affinity took 1 minute 6 seconds!!
With PS it is a seamless operation of just moving from one dust speck to another; Affinity does not allow this. Considering that there are usually several dozen dust spots on each negative, this is very problematic for me.
It would be interesting to know what the pixel dimensions of your images are, because normally, the inpainting process should only take significantly longer with large areas, not with those small spots you get from dust particles. You could try the Dust & Scratches filter instead, which works well depending on the contents of your images; if you have a lot of details to preserve, things get more complicated.
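For anyone curious what a Dust & Scratches-style filter actually does under the hood, here is a minimal pure-Python sketch. Such filters are commonly described as a median filter with a threshold: a pixel is replaced by the median of its neighbourhood only if it differs from that median by more than the threshold, which is what lets small specks vanish while most detail survives. This is an illustrative assumption about how such filters typically work, not Affinity's documented implementation.

```python
# Sketch of a Dust & Scratches-style filter on a grayscale image
# (a list of lists of ints). Each pixel is compared to the median of
# its 3x3 neighbourhood; only clear outliers (likely dust) are replaced,
# so the threshold is what protects fine detail from being smoothed away.
# This mirrors common descriptions of such filters, not Affinity's code.

from statistics import median

def dust_and_scratches(img, threshold=40):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # copy; borders stay untouched
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hood = [img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            m = median(hood)
            if abs(img[y][x] - m) > threshold:   # only fix clear outliers
                out[y][x] = m
    return out

# A bright dust speck (255) on a mid-grey (120) background:
img = [[120] * 5 for _ in range(5)]
img[2][2] = 255
cleaned = dust_and_scratches(img)
print(cleaned[2][2])   # speck replaced by the neighbourhood median: 120
```

With a low threshold the filter starts replacing ordinary pixels too, which would explain the overall softening reported later in this thread.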
It may not be of any help here, but just as a side note: I've usually observed quite the opposite when comparing the speed of the two tools. With the images I tried, Photoshop always needed more time to finish the spot-healing job. Not sure what makes the difference in your case.
After reading your post I cropped a 500x500 pixel sample around the same area mentioned in my first post (a small hair or dust mark), in case Affinity was trying to regenerate the whole image after each inpainting action, but it took the same extended period of time. I then used Save As to save the same 500x500 crop as an Affinity file, but it still took the same time, and the file size was still 1.13 GB??
I actually expected something like this. I've never dealt with images this large, so I can't say much more from personal experience. I wouldn't be surprised if the giant pixel dimensions are a main factor in the inpainting job taking so much time. All the information about what to inpaint needs to be sampled from somewhere, and with large images, the Inpainting Brush has a lot to analyse. Why Photoshop is so much better at these high resolutions is hard to tell. Maybe Adobe's algorithms just scale better in this case? Don't forget we're talking gigapixels here.
This is possibly because Affinity Photo's cropping actions are non-destructive by default. The parts of the image that you cropped are just hidden because of the smaller canvas size. If you use the Crop Tool again and then increase the width and height again, you'd see the cropped area of the image reappear. This means that you have to rasterise your layer first after cropping to actually discard the pixel information outside the visible image area.
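To get a feel for why the file can stay over a gigabyte even after the canvas is cropped, here is a rough back-of-envelope calculation of the raw pixel data in a large film scan. The resolution and bit depth below are assumptions picked for illustration, not the poster's actual scanner settings:

```python
# Rough estimate of the raw (uncompressed) pixel data in a film scan.
# 2400 dpi and 16-bit RGB are illustrative assumptions, not the
# original poster's actual settings.

def raw_scan_bytes(width_in, height_in, dpi, channels=3, bits_per_channel=16):
    """Uncompressed size in bytes of a scan at the given settings."""
    w_px = int(width_in * dpi)
    h_px = int(height_in * dpi)
    return w_px * h_px * channels * (bits_per_channel // 8)

# A 10x8 inch negative scanned at 2400 dpi, 16-bit RGB:
size = raw_scan_bytes(10, 8, 2400)
print(f"{size / 1024**3:.2f} GiB")  # prints 2.57 GiB
```

So if the non-destructively cropped document still carries all of the original scan's pixels, a file size around 1 GB is entirely plausible; rasterising after the crop is what actually discards that data.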
I only work with 645, 6x7, and 10x8 scanned film, so I am always going to encounter this problem, as I can never get a scan done without dust. I always wet mount my negatives, which keeps contamination to a minimum, but I can't remove it completely.
If what you are suggesting is true, I am surprised that Affinity needs to sample the whole image to inpaint an area, as it is only a local repair. Am I missing something, like a pixel area to sample for the repair? I suppose if this repair sample is a percentage of the image rather than a pixel count, that could result in a massive resample area.
Inpainting algorithms analyze the textural and/or structural elements of the image to determine what to inpaint into the selected area. In practice this means both local (nearby) & global (whole image) elements may need to be analyzed to determine what to use. It all depends on the detectable structure & texture present in the image; IOW, there is no one fixed percentage or pixel count of the image that the algorithms will always use.
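To make the local-versus-global distinction concrete, here is a naive Python sketch (emphatically not Affinity's actual algorithm) that just counts how many candidate source patches an exhaustive search would have to compare. The point is only that the candidate count grows with the search area, so a whole-image search over a big scan is enormously more work than a search in a small window around the spot:

```python
# Toy illustration of exemplar-based inpainting search cost.
# Not Affinity's algorithm -- just a count of candidate patch positions
# an exhaustive search would compare, to show how the work scales
# with the area being searched.

def count_candidates(img_w, img_h, patch=16, search_radius=None):
    """Number of candidate source patches in an exhaustive search.
    search_radius=None means a global search over the whole image."""
    if search_radius is None:
        w, h = img_w, img_h               # global: the entire image
    else:
        w = h = 2 * search_radius         # local: a window around the spot
    return max(0, w - patch + 1) * max(0, h - patch + 1)

# A hypothetical 24000 x 19200 px scan (illustrative numbers only):
local_n = count_candidates(24000, 19200, search_radius=100)
global_n = count_candidates(24000, 19200)
print(local_n, global_n)   # tens of thousands vs. hundreds of millions
```

Real implementations use far cleverer search strategies than this, but the underlying asymmetry is the same, which is why image size can matter even for a tiny repair.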
From what I read in this tutorial, it is more about excluding areas from the 'donor' image than marking the areas to be used. It also looks, from the images in that tutorial, as if it does not try to do anything in the excluded areas, so the tunic of the man in blue ends up with an unnatural-looking cutout in the excluded area. But maybe I am wrong about that -- I have not downloaded the 'try before you buy' version to see how well it works in practice.
While we are at it: Inpainting can also be done non-destructively in Photo. Create a new pixel layer, make sure it's selected, switch to the Inpainting Brush Tool and from the context toolbar, choose "Current Layer & Below". All the inpainting then goes to a separate layer.
But that's obviously the way it works. You can tell from how the Inpainting Brush Tool behaves, especially when it produces faulty results. I've seen the algorithm sample a completely unsuitable part right from the opposite corner of the image, only because it couldn't find an appropriate pattern nearby. Of course, that's nothing you can't fix with other tools, and most of the time, inpainting gives you a very good starting point. That being said, I doubt there's a way to speed it up unless you can make your images smaller.
I suppose it is habit, Walt, but having played with the Dust & Scratches filter, it does seem to soften the whole image even when making very small amounts of adjustment. The other thing I have to contend with besides dust is tiny hairs, which have a different shape and form to dust.
I guess so. My comment was really questioning the need for the program to sample a part of the image several tens of thousands of pixels away from the few dozen pixels that were being repaired. If you were repairing the paintwork on your car, you wouldn't take a paint sample from your neighbour's car, would you? You would take a sample of the paint on the damaged panel and blend it in. This is the logic by which I am trying to understand Affinity. I'm old, you see :-)
Please bear in mind that I have very limited experience with either PS or Affinity (free trials of both, plus an Affinity licence purchased at the Photo Show a couple of weeks ago), so I am not trying to undermine either product, just trying to get my purchase to work for me.
In the analog world you are relying on the fact that dust, hairs, & such are on top of the negative -- IOW, there is a third dimension that the real world brush operates in to sweep this stuff off the negative. You don't have that luxury in the digital world -- everything is in a single two dimensional plane. But there is a trick some scanners use to do something similar, with a feature called Infrared cleaning, which relies on the fact that most photo dye pigments are mostly transparent to infrared light. But to use this, you need a scanner & the software that supports it.
Programs, or at least most of them, have no idea what the image contains. Unlike humans, they have no way of 'seeing' your car or your neighbor's or anything else like you do. You know what a car or tree or whatever looks like (or if it is your neighbor's car or your own) through a vast store of learned experience, an incredibly complex neural network, & some very sophisticated processing in your brain. Programs don't have any of that; they can only 'see' the textural & structural patterns in the pixels of the image, much like a newborn infant sees the world as just a bunch of colored patterns.
Perhaps, then, rather than the Inpainting brush you should be using the Healing Brush, the Patch tool, or the Blemish Removal tool? I think that any of those will allow you to specify the source for the repair yourself, which may allow faster operation (but I'm still new with Affinity Photo myself, and haven't investigated those tools much yet).
All I can tell you is after my earlier reply, I downloaded the trial version & played with it for about an hour. Maybe that was not enough time to learn how to use it properly, but basically I could never get results close to the tutorial's.