In SEO, for example, Google has to constantly evolve its algorithm to stay a step ahead, and it can't reveal the inner workings of its process for fear of people seeking out vulnerabilities. Facebook too has to continually refine and tweak its algorithm to ensure people aren't being inundated with junk - if it were to over-emphasize Page Likes, for example, Like sellers would ramp up their promotions.
People are always looking for ways to get ahead, to 'hack' the systems in order to gain an advantage - which makes sense to a degree, but it often goes against the purpose for which such options exist, and ends up annoying the platform, the users, or both.
One of the more recent examples of this has come about because of Facebook's increased emphasis on video. Because Facebook's News Feed algorithm gives preferential treatment to video content, some Pages have worked out that they can game the system by posting static images as video - like this one:
This is not actually a video - it merely plays that static image for 14 seconds - but because it's posted as a video, it gets more reach. This is a tactic that's clearly working for this Page - check out the view counts here, and all of these are static images posted as videos, all of similar length.
It's not necessarily scamming - they're not advertising their content as anything different from what it is, and as the videos autoplay in the News Feed, most users wouldn't even notice that these are videos. But they generate a lot more reach than they would as static images.
Using a new 'motion scoring' system, Facebook will be able to detect movement inside a video, and demote content that's not actual video, despite being posted as such. This will likely also impact those Facebook Live posts which include virtually static counters, which have also helped some Pages boost their reach.
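Facebook hasn't published how its motion scoring works, but the core idea - measuring how much pixels actually change from frame to frame - can be sketched very simply. The `motion_score` function below is an illustrative assumption using basic frame differencing, not Facebook's actual implementation:

```python
import numpy as np

def motion_score(frames):
    """Mean absolute pixel change between consecutive frames.

    A genuine video produces a meaningful score; a static image
    re-encoded as a video scores zero, since no pixel ever changes.
    """
    frames = np.asarray(frames, dtype=np.float64)
    if len(frames) < 2:
        return 0.0
    diffs = np.abs(np.diff(frames, axis=0))  # frame-to-frame change
    return float(diffs.mean())

rng = np.random.default_rng(0)

# A "video" that is just the same 64x64 image repeated for 30 frames.
static_image = rng.integers(0, 256, size=(64, 64))
static_clip = np.stack([static_image] * 30)

# A clip with real frame-to-frame movement.
moving_clip = rng.integers(0, 256, size=(30, 64, 64))

print(motion_score(static_clip))  # 0.0 - no movement at all
print(motion_score(moving_clip))  # large - genuine motion
```

A system like this would demote anything whose score sits near zero across the whole clip, which also explains why Live streams showing a near-static counter would be caught by the same check.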
"When people click on an image in their News Feed featuring a play button, they expect a video to start playing. Spammers often use fake play buttons to trick people into clicking links to low quality websites."
So what will the impact be for your Page? Nothing, so long as you don't use these tactics. To avoid any negative impacts, ensure you're not putting play buttons in your preview images and don't post static content as video. Such tactics may have benefited some Pages in the short term, but as Facebook rolls out these new changes, those posts will see a significant drop in reach.
It's a good update for Facebook, further removing ambiguity around the types of content being posted, helping to ensure a better user experience by providing what you would expect from both video and non-video content.
Now, when you see a video play button, you can expect it to actually work, while eliminating misuses of Facebook Live can only improve the overall quality of the offering, bringing users back to Facebook Live more often.
People share millions of photos and videos on Facebook every day, creating some of the most compelling and creative visuals on our platform. Some of that content is manipulated, often for benign reasons, like making a video sharper or audio more clear. But there are people who engage in media manipulation in order to mislead.
Today we want to describe how we are addressing both deepfakes and all types of manipulated media. Our approach has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to expose the people behind these efforts.
As a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes. Going forward, we will remove misleading manipulated media if it meets the following criteria:
Consistent with our existing policies, audio, photos or videos, whether a deepfake or not, will be removed from Facebook if they violate any of our other Community Standards including those governing nudity, graphic violence, voter suppression and hate speech.
Our enforcement strategy against misleading manipulated media also benefits from our efforts to root out the people behind these efforts. Just last month, we identified and removed a network using AI-generated photos to conceal their fake accounts. Our teams continue to proactively hunt for fake accounts and other coordinated inauthentic behavior.
The video, created by artists Bill Posters and Daniel Howe in partnership with advertising company Canny, shows Mark Zuckerberg sitting at a desk, seemingly giving a sinister speech about Facebook's power. The video is framed with broadcast chyrons that say "We're increasing transparency on ads," to make it look like it's part of a news segment.
"Imagine this for a second: One man, with total control of billions of people's stolen data, all their secrets, their lives, their futures," Zuckerberg's likeness says, in the video. "I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future."
The original, real video is from a September 2017 address Zuckerberg gave about Russian election interference on Facebook. The caption of the Instagram post says it's created using CannyAI's video dialogue replacement (VDR) technology.
This deepfake of Zuckerberg is one of several made by Canny in collaboration with Posters, including ones of Kim Kardashian and Donald Trump, as part of Spectre, an exhibition that took place as part of the Sheffield Doc Fest in the UK.
Following the viral spread of a manipulated Facebook video of House Speaker Nancy Pelosi, Facebook has been forced to take a stance on whether fake or altered images are allowed to stay up on the site. Instead of deleting the video, the company chose to de-prioritize it, so that it appeared less frequently in users' feeds, and placed the video alongside third-party fact-checker information.
Canny's founders, Omer Ben-Ami and Jonathan Heimann, told special effects blog FXGuide that their work builds on algorithms developed by University of Washington researchers, which turned audio clips of people speaking into realistic videos of those people made to look like they're speaking the words. The UW researchers demonstrated this, at the time, using Barack Obama's face. They said they're also inspired by Stanford's Face2Face program, which enabled real-time facial reenactment.
Ben-Ami told Motherboard that to create the fake videos, Canny used a proprietary AI algorithm, trained on 20 to 45 second scenes of the target face for between 12 and 24 hours. That doesn't seem like much, but we've already seen deepfakes made from as little as one image of a face.
For the Zuckerberg deepfake, Canny engineers arbitrarily clipped a 21-second segment out of the original seven minute video, trained the algorithm on this clip as well as videos of the voice actor speaking, and then reconstructed the frames in Zuckerberg's video to match the facial movements of the voice actor.
Ben-Ami said that Canny saw this as both an opportunity to educate the public on the uses of AI today, but also to imagine what's next. "The true potential we see for this tech lies in the ability of creating a photo realistic model of a human being," he said. "For us it is the next step in our digital evolution where eventually each one of us could have a digital copy, a Universal Everlasting human. This will change the way we share and tell stories, remember our loved ones and create content."
Facebook has also announced the winner of its Deepfake Detection Challenge, in which 2,114 participants submitted around 35,000 models trained on its data set. The best model, developed by Selim Seferbekov, a machine-learning engineer at mapping firm Mapbox, was able to detect whether a video was a deepfake with 65% accuracy when tested on a set of 10,000 previously unseen clips, including a mix of new videos generated by Facebook and existing ones taken from the internet.
Facebook does not plan to use any of the winning models on its site. For one thing, 65% accuracy is not yet good enough to be useful. Some models achieved more than 80% accuracy with the training data, but this dropped when pitted against unseen clips. Generalizing to new videos, which can include different faces swapped in using different techniques, is the hardest part of the challenge, says Seferbekov.
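That gap between 80%+ on training data and 65% on unseen clips is the standard symptom of a model that has partly memorized its training set rather than learned a general signal. A toy illustration of how the two numbers are computed (the prediction arrays here are hypothetical, invented purely to show the gap):

```python
import numpy as np

def accuracy(predictions, labels):
    """Fraction of clips classified correctly (1 = deepfake, 0 = real)."""
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    return float((predictions == labels).mean())

# Hypothetical detector output on clips it was trained on: 9 of 10 right.
train_labels = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
train_preds  = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])

# The same detector on unseen clips - new faces, new swap techniques -
# where its memorized cues no longer apply: only 6 of 10 right.
test_labels = np.array([1, 0, 0, 1, 1, 0, 1, 0, 1, 0])
test_preds  = np.array([1, 1, 0, 0, 1, 0, 0, 1, 1, 0])

print(accuracy(train_preds, train_labels))  # 0.9
print(accuracy(test_preds, test_labels))    # 0.6 - the generalization gap
```

This is why the challenge scored entries against a held-out set of 10,000 clips the models had never seen: the held-out number is the one that predicts real-world usefulness.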
When millions of people are able to create and share videos, trusting what we see is more important than ever. Fake news spreads through Facebook like wildfire, and the mere possibility of deepfakes sows doubt, making us more likely to question genuine footage as well as fake.
Zuckerberg never uttered those words. The video was a "deepfake," a technique that uses AI to create videos of people saying something they didn't, highlighting the challenges social networks face when it comes to policing manipulated content.
The Zuckerberg video could also be a test for Facebook, which has come under fire after it refused to remove an altered video of House Speaker Nancy Pelosi that was slowed to make her seem drunk, according to Vice, which reported earlier on the video. Zuckerberg called Pelosi but she wasn't "eager" to hear what he had to say, The Washington Post reported on Tuesday.
"We will treat this content the same way we treat all misinformation on Instagram," a spokesperson for the photo sharing site said. "If third-party fact-checkers mark it as false, we will filter it from Instagram's recommendation surfaces like Explore and hashtag pages."
Fact-checker Lead Stories, which called the video "art," said in a post that it's flagging the video as satire, which means its distribution will not be reduced; users will instead see a warning label noting that it isn't real.