Looking for help adding OpenToonz to my AI flow


Hep Katt

Jun 20, 2024, 1:57:52 PM
to OpenToonz Users Forum

I've been using AI to make animations by taking images made with Midjourney and animating them. Most of the time this works well enough for my simple music videos. Recently I've been trying, and failing, to add lip sync to one of my animated music videos using AI, and I'm having issues because most of the models were trained on real, three-dimensional people rather than 2D animated images.

So I started researching and discovered OpenToonz and Rhubarb. This combination seems like it would work well for my purposes, and I even found a tutorial where someone did a simple animation with lip sync that way. The problem I'm having is that the layout in the tutorial looks really different from the version offered as a Flatpak on Linux. I'm also not sure how to scale it up, or how to configure mine to match what's shown in the tutorial.
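For reference, this is roughly how I understand the Rhubarb step to work. It's only a rough sketch in Python: I'm assuming the rhubarb binary is on my PATH, using its TSV export, and the file names are just placeholders for my project, so please correct me if I've got it wrong.

import csv
import subprocess

# Run Rhubarb on the vocal track and export timed mouth cues as TSV.
# (File names are placeholders; "rhubarb" is assumed to be on PATH.)
subprocess.run(
    ["rhubarb", "-f", "tsv", "-o", "mouth_cues.tsv", "song_vocals.wav"],
    check=True,
)

# As far as I can tell, each TSV row is a start time in seconds followed
# by a mouth-shape letter (A-F, plus X for the closed/rest mouth).
with open("mouth_cues.tsv", newline="") as f:
    for start_time, shape in csv.reader(f, delimiter="\t"):
        print(f"{float(start_time):6.2f}s -> mouth shape {shape}")

My plan is to match those letters to my six mouth drawings in OpenToonz, but I don't know where in the interface that mapping happens.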

I took the video clip I animated and used a cross dissolve transition to make a longer clip of my character against a sunset background, with moving clouds and a slight twinkle in her eyes, then used ffmpeg to extract the frames, which gave me a total of 1,158 images. OpenToonz asks if I want to discard the duplicates and I tell it yes, so that gets me a single series on a horizontal timeline. I have also created six mouth images, but because of the way Midjourney works I'm having to cut them out of the images I generated. They're not consistent in size, though they all fit my character's face.
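In case it matters, this is how I've been thinking about getting the six mouth cut-outs to one consistent size before loading them as a level. Again just a sketch, using Pillow; the folder names and the 300x300 target size are made up for my project, not anything from the tutorial.

from pathlib import Path
from PIL import Image

# Pad each mouth cut-out onto a transparent canvas of one fixed size
# so all six frames line up when used as a mouth level in OpenToonz.
# Folder names and the 300x300 target are placeholders.
TARGET = (300, 300)
Path("mouth_padded").mkdir(exist_ok=True)

for path in sorted(Path("mouth_cutouts").glob("*.png")):
    mouth = Image.open(path).convert("RGBA")
    canvas = Image.new("RGBA", TARGET, (0, 0, 0, 0))
    # Centre the cut-out so the mouth stays in the same spot from
    # frame to frame instead of jumping around.
    offset = ((TARGET[0] - mouth.width) // 2,
              (TARGET[1] - mouth.height) // 2)
    canvas.paste(mouth, offset, mouth)
    canvas.save(Path("mouth_padded") / path.name)

I don't know if padding them like this is the right approach, or if OpenToonz has its own way of scaling a mouth level to fit the character.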

I'm really feeling confused and lost. Can anyone please give me some advice on how to make this work? I'd ask the person who made the tutorial, but he has comments disabled. Thanks in advance for any advice.
