Yeah, I checked with them. Since they're quite new to this, there's no support for alpha transparency in videos as of yet. But I could make do by rendering PNG sequences, so that should hopefully work.
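For anyone else going the same route, here's a minimal sketch of stitching an alpha-channel PNG sequence into a transparent video, assuming ffmpeg is installed and the frames follow a numbered naming pattern (the file names, frame rate and codec choice below are my own illustrative assumptions, not anything the vendor prescribes):

```python
# Minimal sketch: stitch a PNG sequence (with alpha) into a video that
# keeps the transparency. Assumes ffmpeg is on the PATH and frames are
# named frame_0001.png, frame_0002.png, ...; adjust to your export.
import subprocess

def pngs_to_alpha_video(pattern="frame_%04d.png", fps=24, out="out.mov"):
    subprocess.run([
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", pattern,               # numbered PNG sequence as input
        "-c:v", "prores_ks",         # ProRes 4444 preserves the alpha channel
        "-profile:v", "4444",
        "-pix_fmt", "yuva444p10le",  # the 'a' in the pixel format is the alpha plane
        out,
    ], check=True)

if __name__ == "__main__":
    pngs_to_alpha_video()
```

ProRes 4444 is one of the few widely supported codecs that keeps an alpha plane; VP9 in a WebM container is a common alternative if the clip is destined for the web.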
Demand to try Dream Machine overloaded Luma's servers so much that the company had to introduce a queuing system. I waited all night for my prompts to become videos, but the actual "dreaming" process takes about two minutes once you reach the top of the queue.
I created several clips while testing it out. One was ready within about three hours; the rest took most of the night. Some of them have questionable blending or blurring, but for the most part they capture movement better than any model I've tried.
I had them show walking, dancing and even running. From prompts requesting that type of motion, older models might have people walking backwards, or apply a dolly zoom to a dancer standing still. Not Dream Machine.
Dream Machine captured the concept of the subject in motion brilliantly, with no need to specify the area of motion. It was particularly good at running. But you get minimal granular control beyond the prompt.
It could also be a feature of video models built on transformer diffusion technology rather than straight diffusion. Haiper, the UK-based AI video startup, also says its model works best when you let the prompt do the work, and Sora is said to offer little more than a simple text prompt with minimal additional controls.
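For intuition on what "transformer diffusion" means here, below is a toy PyTorch sketch, purely illustrative and not Luma's, Haiper's or OpenAI's actual architecture. The core idea is that the noisy video latent is split into space-time patch tokens, and self-attention lets every patch condition on every other, which is one plausible reason global motion comes out coherently from the prompt alone:

```python
# Toy diffusion-transformer denoiser: patch tokens + timestep conditioning
# + global self-attention. Illustrative only; the dimensions are made up.
import torch
import torch.nn as nn

class ToyDiffusionTransformer(nn.Module):
    def __init__(self, patch_dim=64, dim=128, depth=2, heads=4, num_patches=256):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)        # patchified latent -> tokens
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.time_mlp = nn.Sequential(                # diffusion timestep conditioning
            nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.out = nn.Linear(dim, patch_dim)          # predict the noise per patch

    def forward(self, noisy_patches, t):
        # noisy_patches: (batch, num_patches, patch_dim); t: (batch, 1) in [0, 1]
        x = self.embed(noisy_patches) + self.pos
        x = x + self.time_mlp(t).unsqueeze(1)         # broadcast timestep to every token
        x = self.blocks(x)                            # global attention across space-time
        return self.out(x)

model = ToyDiffusionTransformer()
x = torch.randn(2, 256, 64)   # fake noisy space-time patches
t = torch.rand(2, 1)          # fake diffusion timesteps
print(model(x, t).shape)      # torch.Size([2, 256, 64])
```

Because the attention is global across the whole clip rather than local like a convolutional U-Net, a prompt describing the scene as a whole maps naturally onto every token, which fits the "let the prompt do the work" advice.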
I came up with a series of prompts to test out Dream Machine. For some of these I also tried existing AI video models to see how they compare, and none of them achieved the same level of motion accuracy or realistic physics.
In some cases I just gave it a simple text prompt and enabled the enhance feature. For others I wrote a longer prompt myself, and in a couple of cases I gave it an image I'd generated in Midjourney.
This time I just gave Dream Machine a simple prompt and told it not to enhance it, just to take what it was given. It actually created two videos that flow into one another, as if they were the first and second shot in a scene.
The prompt: "A person walking along a busy city street at dusk, holding a smartphone vertically. The camera captures their hand as they swing it slightly while walking, showing glimpses of shop windows, people passing by, and the glow of streetlights. The video has a slight handheld shake to mimic the natural movement of holding a phone."
Next up I started with an image generated in Midjourney of a dancer in silhouette. I've tried using this with Runway, Pika Labs and Stable Video Diffusion, and in each case the result shows movement into the shot but not the character herself moving.
The prompt: "Create a captivating tracking shot of a woman dancing in silhouette against a contrasting, well-lit background. The camera should follow the dancer's fluid movements, maintaining focus on her silhouette throughout the shot."
One of the first prompts I try with any new generative AI image or video model is "cats dancing on the moon in a spacesuit". It is weird enough that there are no existing videos to draw from, and complex enough that video models struggle with the motion.
Next up was another one that started with a Midjourney image. It was a picture showing a bustling European food market. The original Midjourney prompt was: "An ultra-realistic candid smartphone photo of a bustling, open-air farmers market in a quaint, European town square."
I wish I'd been more specific about how the characters should move. It captured the motion of the camera really well, but the result had a lot of warping and merging between people in the scene. This was one of my first attempts, so I hadn't yet tried out better techniques for prompting the model.
Phoenix did a good job, but that was just an image, so I decided to put the exact same prompt into Dream Machine: "A surreal, weathered chessboard floating in a misty void, adorned with brass gears and cogs, where intricate steampunk chess pieces - including steam-powered robot pawns."
It pretty much ignored everything but the chessboard and created this surrealist video of chess pieces being swept off the end of the board as if they were melting. Because of the surrealism element I can't tell if this was deliberate or a failure of its motion understanding. It looks cool though.
Over the past two years AI image generation has gone from weird, low-res depictions of humans with extra fingers and faces closer to an Edvard Munch painting than a photograph, to images that are near indistinguishable from reality.
Abel Art, an avid AI artist who had early access to Dream Machine, has created some impressive work. But he said he needed to create hundreds of generations to get just one coherent minute of video once unusable clips were discarded.
Ryan Morrison, AI Editor

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover.

When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?
Did you know that the State of Connecticut has a list of all of the movies, TV shows, and videos that were produced in our state? As I was scrolling through the Connecticut Department of Economic and Community Development's list, I was surprised to find out that the music video for one of my favorite Billy Joel songs was shot in Middletown.
The music video for Billy Joel's classic song 'The River of Dreams' was shot right here in Connecticut in 1993. It was directed by Andy Morahan, and it features several scenes shot on and around the Connecticut River.
Remember that video? God, I always imagined that he was singing about a river in the Midwest or South, the Mighty Mississippi or a lazy river running through Savannah, Georgia. Nope, those scenes were shot right near the former America's Cup restaurant, the Arrigoni Bridge, and the dreaded 66/Rt. 9 intersection.
According to Wikipedia, portions of the video were shot on the Glastonbury-Rocky Hill Ferry; the footage of Billy and his backup singers was filmed on the western span of the Providence and Worcester Railroad Bridge, and that's the Connecticut River behind them. Scenes were also shot along the river in Portland, East Haddam, and Old Saybrook. The scenes with Billy at his piano were shot in a tobacco barn in South Glastonbury. You can also see Christie Brinkley painting the artwork that was actually used for 'The River of Dreams' LP cover.
Taylor Swift's video for 'Wildest Dreams', which she premiered on the red carpet during Sunday's MTV Video Music Awards, is being slammed by some critics for romanticizing white colonialism, and for using Africa as a backdrop for a story about white people.