Telecine without stepper motor


Wheat Buckley

Jan 4, 2021, 9:52:37 AM1/4/21
to Raspberry Pi Film Capture

I've seen that most everybody constructs their homemade telecine using stepper motors.  I can see that making a lot of sense if you're building from scratch; however, for someone modifying a projector, is the stepper motor absolutely necessary?  I can think of a couple of reasons why, but I'm also curious what the consensus is on the forum.

Anyway, I've been attempting to keep the original projector motor for my solution before moving on to a stepper.  Fortunately, the original projector motor has a variable speed switch and an adjustment dial covering something like 16-6fps.  I would be using a Pi 4 triggering the shutter with a Hall effect sensor (pin 17), and so far I'm capturing the image with the OV5647 camera, likely planning to upgrade.

Using a simple FOR loop, I can capture just over 17fps at a resolution of 1640×1232; however, once I introduce the trigger mechanism, the speed decreases to one image every 2-4 seconds.  Vastly slower.

The following code is what I've cobbled together so far, and it seems to work as I expect.  There isn't any interface or anything fancy, just the fundamental process of capturing an image as the projector plays.  Any advice or tips would be much appreciated, as I am just figuring this all out.

Uncomment the camera.capture line to compare the speed.

import time
import picamera
import RPi.GPIO as GPIO

sensorPin = 17
GPIO.setmode(GPIO.BCM)
GPIO.setup(sensorPin, GPIO.IN)
photo_width = 640   # 1640
photo_height = 480  # 1232
counter = 0

def how_long(start, op):
    # Print the time elapsed since 'start' and return the current time.
    print('%s took %.2fs' % (op, time.time() - start))
    return time.time()

def sensorCapture(channel):
    # GPIO callback: fires on each rising edge from the Hall effect sensor.
    global counter, start
    if GPIO.input(channel):
        counter += 1
        filename = 'images/image%02d.png' % counter
        print("Frame ", counter)
        #camera.capture(filename)
        start = how_long(start, 'Capture')

start = time.time()
camera = picamera.PiCamera()
camera.resolution = (photo_width, photo_height)
camera.shutter_speed = 800
camera.awb_mode = 'sunlight'
camera.iso = 60
start = how_long(start, 'Camera initialize')
time.sleep(1)
camera.start_preview(fullscreen=False, window=(0, 40, 640, 480))
start = time.time()

GPIO.add_event_detect(sensorPin, GPIO.RISING, callback=sensorCapture, bouncetime=1)

# Keep the script alive so the event callback can run; Ctrl-C to quit.
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    camera.close()
    GPIO.cleanup()

Dominique Galland

Jan 4, 2021, 11:47:52 AM1/4/21
to Raspberry Pi Film Capture

In principle, it is possible to keep the original motor if it can run at a variable speed and sufficiently slowly.
But a stepper controlled by the Pi is still much more convenient: forward and backward, stop, frame advance, etc.
For convenience you should at least control the start and stop.

For the capture, two methods are possible:
Repeat
  Advance one frame to the trigger and capture
or 
Start Continuous feed
Repeat
   On trigger event capture

In my "Yart" project, both the "On frame" and "On trigger" methods are implemented, though users generally choose the more reliable "On frame" method.

Now, for your code, there's no reason for it to be longer with the trigger.
Without capturing and writing the frames, you should get triggers at exactly the fps of the motor; otherwise there is a sensor problem.
This is the first thing you need to check.
You should also modify your how_long function to get the difference with the previous time.

Then, with capture enabled, the Pi also needs time to capture and process the frame before the next trigger.
This should be the case if the motor is running quite slowly, but be aware that writing the file to the SD card may take a long time.

It would also be clearer to have a capture loop.
Repeat
   wait trigger event
   capture and process

Trigger
    on trigger set trigger event
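That two-part loop can be sketched with Python's threading.Event: the trigger callback does nothing but set the event, and a separate capture loop does the slow work. This is a minimal sketch of the pattern, not code from either project; the GPIO callback and camera call are replaced by stand-ins so it runs without hardware.

```python
import threading
import time

trigger = threading.Event()
frames = []

def on_trigger():
    # In the real setup this would be the GPIO rising-edge callback;
    # it should do nothing except set the event.
    trigger.set()

def capture_loop(n_frames):
    # The capture loop waits for the event, clears it, then does the
    # slow capture-and-process work outside the callback.
    for i in range(n_frames):
        trigger.wait()
        trigger.clear()
        frames.append('image%02d.jpg' % (i + 1))  # stand-in for camera.capture()

worker = threading.Thread(target=capture_loop, args=(3,))
worker.start()
for _ in range(3):
    time.sleep(0.01)
    on_trigger()          # simulate three sensor pulses
worker.join()
print(frames)
```

Keeping the callback this small matters: GPIO event callbacks should return quickly, and Event.wait returns immediately if the trigger fired while the previous frame was still being processed.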

Then, for optimal performance, you should use capture_sequence and, especially, capture and process the frames in different threads.
In my project, as in Joe Herman's project, the frames are sent over the network to a PC.

In theory you could use my code in "On trigger" mode only, simplifying TelecineMotor to just start and stop.

Manuel Ángel

Jan 4, 2021, 1:19:59 PM1/4/21
to Raspberry Pi Film Capture

In principle, if you can detect that a frame is in position to be captured, and you can at least stop the motor long enough for that image to be captured, a stepper motor would not be necessary.
However, the stepper motor brings many advantages.

It is very easy to control with the Raspberry Pi. You can go forward, stop, go back, etc.

In my case, I don't use any type of trigger to capture the images, which has allowed me to simplify the hardware and software.

Once I have the film referenced in a known position, for example the first frame, it is possible to position the film in any desired frame, both forwards and backwards.

If a trigger is not used, make sure that the film faithfully follows the movement of the stepper motor. In my case, I did it using a gear-and-pinion transmission.

My projector goes back/forward exactly one frame for every revolution of the main axis. So in my software, I just have to worry about the stepper motor making the main shaft turn one full revolution.

My program is based on Joe Herman's original software, albeit heavily modified. I use a Raspberry Pi 3 that acts as a server for the host PC.

Basically, it runs the following sequence:

- The client (PC) requests the server to capture an image.
- The server captures the image and sends it via LAN to the PC. As soon as the image has been sent, it commands the motor control program to advance one frame. The control program runs concurrently in a separate process.
- As soon as the client receives the image, it orders the server to capture the next image and in the meantime, it processes the captured image. Post-processing is done in a separate thread.
- Once the image has been processed and saved on the client's hard drive, the next image is requested and the sequence is repeated.
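That request/capture/send cycle can be sketched with a plain TCP socket standing in for the LAN link. Everything below is illustrative, not Manuel's actual code: the 4-byte length-prefix framing, the command strings, and the stand-in "capture" are all assumptions made for the sketch.

```python
import socket
import threading

def recv_exact(sock, n):
    # recv() may return fewer bytes than asked for; loop until we have n.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('socket closed')
        buf += chunk
    return buf

def server(srv):
    # Pi side: on each request, "capture" a frame and send it back
    # length-prefixed; a real server would then advance the motor one frame.
    conn, _ = srv.accept()
    with conn:
        frame_no = 0
        while True:
            if recv_exact(conn, 4) == b'QUIT':
                break
            frame_no += 1
            frame = b'JPEGDATA-%d' % frame_no   # stand-in for a camera capture
            conn.sendall(len(frame).to_bytes(4, 'big') + frame)

srv = socket.socket()
srv.bind(('127.0.0.1', 0))                      # OS-assigned free port
srv.listen(1)
threading.Thread(target=server, args=(srv,), daemon=True).start()

received = []
with socket.socket() as cli:                    # PC side
    cli.connect(srv.getsockname())
    for _ in range(3):
        cli.sendall(b'CAPT')                    # request the next image...
        size = int.from_bytes(recv_exact(cli, 4), 'big')
        received.append(recv_exact(cli, size))  # ...then post-process it
    cli.sendall(b'QUIT')
srv.close()
print(received)
```

The explicit request before each frame is the part Dominique questions further down the thread: it adds a network round trip per frame that a queue-based design avoids.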

In my case, if I only digitize each frame with a single image from the camera, a capture rate of 1 frame per second can be achieved. But normally I digitize each frame with several images, with increasing exposure times, in order to get HDR images. Each photo is captured at the camera's maximum resolution of 2592 x 1944 pixels and in the post-processing I resize it to HD.
Under these conditions each image takes several seconds to be captured.

In your case, I see that you use images in png format, which is much slower than jpg.
I have not tested it, but I think it is slower to save the image on the Raspberry Pi than to send it via LAN.
The camera's video port is significantly faster than the still image port, and the difference in quality is minimal.

Greetings and success in the construction of your telecine machine.

Dominique Galland

Jan 4, 2021, 2:03:11 PM1/4/21
to Raspberry Pi Film Capture
@Manuel

I agree with you that the trigger is not totally necessary if the motor control is precise enough, but it is still a guarantee of good positioning.

But in the question asked, the original motor will run continuously, so the trigger is necessary.

I think your capture-send-receive-process flow could be faster:

1. On the Pi, capture and send are I/O operations that should be done in separate threads.
2. On the PC, receive and process likewise, as you already do.
3. The client does not have to ask for the next frame every time. This network exchange is not useful and will slow down the process. The network itself serves as a queue between the Pi and the PC.

In my project:
On the Pi:
- A capture thread captures and advances the motor at maximum speed (capture_sequence is used).
- A send thread sends the frames over the network.
A queue is used between the two threads.
If the queue is full, the capture thread stops momentarily.

On the PC, as you do:
- A receiving thread
- A process/save thread
There is also a queue between the two threads.
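A minimal sketch of that bounded-queue, two-thread pattern on the Pi side, with the capture and the network send replaced by stand-ins (queue size and names are illustrative, not from the YART code):

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=8)   # bounded: capture stalls when full
sent = []

def capture_thread(n_frames):
    # Stand-in for the capture loop that also advances the motor.
    for i in range(n_frames):
        frame_queue.put('frame%03d' % i)   # blocks while the queue is full
    frame_queue.put(None)                  # sentinel: no more frames

def send_thread():
    # Stand-in for the thread that streams frames over the network.
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        sent.append(frame)

producer = threading.Thread(target=capture_thread, args=(20,))
consumer = threading.Thread(target=send_thread)
producer.start()
consumer.start()
producer.join()
consumer.join()
print(len(sent))   # → 20
```

The bounded queue is what makes the capture thread "stop momentarily": Queue.put simply blocks when the sender falls behind, so no frames are dropped.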

Manuel Ángel

Jan 4, 2021, 2:36:01 PM1/4/21
to Raspberry Pi Film Capture

Hi Dominique.

Thanks for your suggestions.

I think I have not explained myself well in my previous post.

The capture and sending are done in separate threads. I tried doing the motor control with another thread, but using a separate process (multiprocessing module, Process class) has given me better results.
I advance the film when the capture has finished, not the sending, as I mistakenly said before.

In the client I use a single thread for the reception and post-processing of the images.

I find your proposal very interesting: eliminate the request for a new image and entrust the arrival of images to a queue. That possibility had not occurred to me. I think I'll give it a try, along with using separate threads for reception and post-processing.

Greetings and thanks again.

Wheat Buckley

Jan 5, 2021, 1:17:39 PM1/5/21
to Raspberry Pi Film Capture
Hello all, thank you for your responses.  

As to Dominique's first reply:  
"Now for your code there's no reason for it to be longer with the trigger."
     I feel the same, but simply uncommenting it does cause a delay, which is my dilemma.  I figure it's some kind of poor programming on my part.
     Once I can figure out this bottleneck, I can then concentrate on ensuring I am not recording the moment the frame is advancing.

"Without capturing and writing the frames you have to get the triggers exactly at the fps  of the motor  otherwise there is a sensor problem."
     I am feeling fairly confident about the sensor operation because I am printing out the time between frames with that last statement of the sensorCapture() function.
     It shows appropriate values as I adjust the speed.  The only reason I have the how_long function is to determine the time it takes the sensor to react from one frame to the next.

As to Manuel's first reply:
"In your case, I see that you use images in png format, which is much slower than jpg." 
     Yes, you are right; when I changed it to jpg, the image captures take almost half as long.
     Incidentally, I reviewed the FOR loop program I originally used to determine the 17fps capture (with jpg or png): it was capturing using the video port,
     which from what I understand is faster but of lesser quality.  So that would be my tradeoff if I were to attempt a higher speed capture.

If I were to send images over a network to get processed, wouldn't that get bottlenecked by the faster capture speed?  I plan on using the projector's slowest speed, and I am not attempting to capture multiple frames for HDR.

Some of your conversation is over my head, but I am slowly learning as I go, and it has also given me some ideas.

Thank you!

Dominique Galland

Jan 5, 2021, 1:45:17 PM1/5/21
to Raspberry Pi Film Capture
So, to have a correct framerate, it is absolutely necessary to capture on the video port.
There is no quality difference in the capture itself, only in the result, because the ISP applies image enhancement.
These treatments (denoising, sharpening, etc.) can be done in post-processing.
With a client-server design and elaborate programming (not so easy!), the only limiting factor remains the camera framerate.
The network speed (wired, not wifi) and the speed of the PC and the Pi are sufficient not to slow things down.
Finally, if you plan to do HDR, the motor speed must be very low, no more than 0.5 fps.
I do not think your original motor can go that slow; the stepper will be absolutely necessary.

See the performance discussion on the Google group of my project.

Ted Yurkon

Jan 6, 2021, 8:01:36 PM1/6/21
to Raspberry Pi Film Capture
First-time poster, so forgive me if this is inappropriate. The question intrigues me because I made a telecine using a Bell and Howell 471A projector. I replaced the original drive with a cheap geared motor (150 rpm, IIRC), driving with cogged belts to maintain sync. I mounted a shutter disk on the motor shaft and a home-made IR transmitter/sensor to trigger photos. I was running guvcview on Linux and needed to type an "i" on a keyboard to take each photo. Therefore, I ripped the controller from a USB keyboard and wired a solid-state relay on the sensor board to the controller to "tap" the letter "i" key, causing guvcview to capture a still image. I added HDR by spinning a transparent disk with neutral density filters in front of the light source and taking 3 photos per frame. I used a vintage Fujicon 55 mm, 1:1.8 lens and focused the image onto a Sony IMX179 sensor in an open USB camera.

The above worked great except that the Sony sensor has horrible chief-ray characteristics, 25 degrees at only 1/2 image height. This resulted in terrible color shift.

Now, my excuse for posting is that I bought a Raspberry Pi HQ camera using the IMX477 sensor, where the CRA is only 6 degrees at 1/2 image height. I don't have a Raspberry Pi, and at age 77 I don't feel like learning it, so I bought an Arducam IMX477 UVC Camera Adapter Board for the 12MP Raspberry Pi HQ Camera so I can use it on my system with few mods. First tests look great.

If interested, a video is here: https://youtu.be/M0VZVPd1-Wg
I hope my post is appropriate and promise not to do it again.

Wheat Buckley

Jan 7, 2021, 1:17:32 PM1/7/21
to Raspberry Pi Film Capture

Hi Ted, no worries, and I much appreciated your comments.

After realizing the limitations of what I've wanted to attempt with the Pi, I was considering something similar to what you've described.  I was looking at USB3 cameras yesterday, but it seems your use of a USB2 camera was plenty sufficient.  I could still use the Pi to trigger; however, I would likely use a beefier machine to sustain the necessary image captures.  That said, I was looking for solutions that would provide GPIO ports (as on the Pi) but for use on a laptop or desktop.  Any ideas?

Very clever use of the spinning ND filters for the HDR imaging.

Dominique Galland

Jan 8, 2021, 4:57:38 AM1/8/21
to Raspberry Pi Film Capture
What are the limitations on the Pi?
You can reach 12fps, and 2fps in HDR, as in Joe's project or mine.

There are some projects that use a UVC webcam (or a DSLR).
Mostly, with no programming, guvcview is used and a mechanism is invented to trigger the capture of an image by simulating the keyboard or the mouse.
But probably with no motor control.
In that case there's no need for a Pi, and why a GPIO?

Now, if you want to program your own application, and to answer your question, there are USB/serial or USB/GPIO adapters for PCs, e.g.:

So, in theory, we could build the application completely on the PC: UVC webcam, trigger, and motor control.
An ordinary webcam will not be better than the Picamera; it would only be interesting with a high-end camera (native HDR, external trigger, ...).
But I really think it would be more difficult than with a Pi and a Picamera.
Dominique

Wheat Buckley

Jan 8, 2021, 6:22:30 PM1/8/21
to Raspberry Pi Film Capture
Hey there Dom, I didn't mean to dismiss the Pi with my comment; all the projects I've seen are excellent and very capable.  The point I was making in my previous post is based on what I think I'm understanding: if I were to capture images using the video port, I would need to determine an absolute FPS before starting the capture.  My thinking is that because I have the original projector motor, the FPS would likely vary too much, definitely more than the precision of a stepper.  But now you've got me rethinking whether the variation in FPS could be small enough to work.

So, when I manually engage the motor, the FPS varies greatly for the first 5 or 6 frames; over the next 15-20 frames it stabilizes, and from there the speed varies from 12.8 to 13.1 FPS.  It isn't much, but I was thinking that might be insufficient.  I can certainly wait for 30 frames to go by before the device starts recording.  Do you think this speed variation is workable?  I don't need HDR imaging.  I guess I can just try it and find out, now that I've thought it through.

FYI, the projector does go slower when I enable the 1/3 speed switch; however, the main rotor still turns at the same speed and advances a frame every 3 turns.  So, at the moment, I cannot go slower unless I were to dial down the main power to the motor.

I was asking about the GPIO because I personally don't know of any other way to programmatically trigger something.  Thanks for the link to the RASP-FT232H!

Wheat

Dominique Galland

Jan 9, 2021, 5:48:26 AM1/9/21
to Raspberry Pi Film Capture
I don't understand why you think you need to determine an absolute fps before the capture, or what that has to do with the video port.
It's a misconception to think of the Raspberry camera as a DSLR; see:
When you open the camera with a framerate, it continuously sends frames to the ISP and the Pi even if you don't make a capture; the unprocessed frames are ignored.
So no matter how fast the motor is running, you will get a frame when the trigger fires. On the other hand, if the motor goes too fast, frames will be skipped.
Anyway, 12 to 13 fps is too fast even without HDR.
Also, if your projector advances one frame every 3 turns, it probably has a three-blade shutter: the image is illuminated three times before moving on to the next one.
Absolutely useless, and even harmful, in our case.
So you need to see whether you can modify your projector to remove this three-blade shutter and install a stepper motor.
This is the only reliable solution.

Ted Yurkon

Jan 9, 2021, 10:49:09 AM1/9/21
to Raspberry Pi Film Capture
Well explained, domdom. On my modified projector system mentioned earlier, the three-opening shutter blade was a problem and got removed. As for fps, that's why my projector now has a DC geared motor. It only runs at about 0.75 fps; I could triple that if I didn't do HDR.