Maybe the exec-* operations can help with that? Other people are using those to trigger Whisper jobs outside of Opencast workers as well.
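
Untested sketch of what that could look like in the workflow definition; the script path is a placeholder for whatever your setup would call, and the flavor needs adjusting to yours:

    <operation id="execute-once"
               description="Hand the source track to an external Whisper runner">
      <configurations>
        <!-- The script does not have to transcribe anything itself; it can
             just submit the file to whatever runs outside Opencast -->
        <configuration key="exec">/opt/scripts/submit-whisper.sh</configuration>
        <!-- #{flavor(...)} is replaced with the path of the matching track -->
        <configuration key="params">#{flavor(presenter/source)}</configuration>
        <configuration key="load">0.1</configuration>
      </configurations>
    </operation>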
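
As for the Lambda you describe below, a rough sketch of what it could look like. All names here are placeholders, and both /services/count and the org.opencastproject.speechtotext service type are from memory, so check them against your version's REST docs:

    # Rough sketch: scale a GPU autoscaling group based on queued Whisper jobs.
    import os
    import boto3
    import requests

    OPENCAST = os.environ["OPENCAST_URL"]    # e.g. https://admin.example.org
    AUTH = (os.environ["OC_USER"], os.environ["OC_PASS"])
    ASG = os.environ["ASG_NAME"]             # the GPU instance group

    # Assumed service type of the speech-to-text service; verify for your version.
    WHISPER_SERVICE = "org.opencastproject.speechtotext"

    def handler(event, context):
        # Ask the service registry how many jobs are waiting for the
        # speech-to-text service.
        resp = requests.get(
            f"{OPENCAST}/services/count",
            params={"serviceType": WHISPER_SERVICE, "status": "QUEUED"},
            auth=AUTH,
            timeout=10,
        )
        resp.raise_for_status()
        queued = int(resp.text)

        # One instance while work is waiting, none otherwise; you could
        # also scale the desired capacity with the queue depth.
        desired = 1 if queued > 0 else 0
        boto3.client("autoscaling").set_desired_capacity(
            AutoScalingGroupName=ASG,
            DesiredCapacity=desired,
            HonorCooldown=True,
        )
        return {"queued": queued, "desired": desired}

That only helps once jobs actually get queued, though, which is exactly the dummy-service problem you describe below. It is another reason the exec-* route may be simpler: the external runner does the work, so no Whisper service has to be registered at all.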
> On 1. Oct 2024, at 17:05, Franck Tanoh <fta...@gmail.com> wrote:
>
> Hi everyone!
>
> We've decided to go the Lambda function route: a Python script that triggers the autoscaling based on the Whisper job type.
> The issue we have, though, is that Opencast does not create Whisper jobs if the Whisper service does not exist (correct me if I'm wrong), meaning we have to keep at least one GPU instance running all the time.
> Is there a way to create a kind of dummy Whisper service that doesn't process jobs but allows Opencast to at least queue them? That way, the autoscaling can kick in and process the jobs.
>
> Alternatively, if you have any ideas on how we can trigger the GPU instances based on transcription requests or jobs, please comment.
>
> Thanks,
> Franck
>
> On Mon, Sep 16, 2024 at 1:26 PM Franck Tanoh <fta...@gmail.com> wrote:
> Hi everyone!
>
> Has anyone in the community implemented AWS autoscaling for Whisper GPU instances?
> We've now moved to Whisper for our transcriptions, but since processing happens between 9am and 6pm, it's a waste of money to leave AWS GPU instances running idle the rest of the time.
>
> Happy to hear your thoughts.
>
> Thanks,
> Franck