I pulled this chapter together from dozens of sources that were at times somewhat contradictory. Facts on the ground change over time and depend on who is telling the story and what audience they're addressing. I tried to create as coherent a narrative as I could. If there are any errors, I'd be more than happy to fix them. Keep in mind this article is not a technical deep dive. It's a big-picture type of article. For example, I don't mention the word microservice even once :-)
Given our discussion in the What is Cloud Computing? chapter, you might expect Netflix to serve video using AWS. Press play in a Netflix application and video stored in S3 would be streamed over the internet directly to your device.
Another relevant fact: Netflix is subscription-based. Members pay Netflix monthly and can cancel at any time. When you press play to chill on Netflix, it had better work. Unhappy members unsubscribe.
The client is the user interface on any device used to browse and play Netflix videos. It could be an app on your iPhone, a website on your desktop computer, or even an app on your smart TV. Netflix controls each and every client for each and every device.
Everything that happens before you hit play happens in the backend, which runs in AWS. That includes things like preparing all new incoming video and handling requests from all apps, websites, TVs, and other devices.
In 2007 Netflix introduced their streaming video-on-demand service that allowed subscribers to stream television series and films via the Netflix website on personal computers, or the Netflix software on a variety of supported platforms, including smartphones and tablets, digital media players, video game consoles, and smart TVs.
Netflix succeeded. They certainly executed well, but they were also late to the game, and that helped them. By 2007 the internet was fast enough and cheap enough to support streaming video services; that had never been the case before. The addition of fast, low-cost mobile bandwidth and the introduction of powerful mobile devices like smartphones and tablets have made it easier and cheaper for anyone to stream video at any time from anywhere. Timing is everything.
Building out a datacenter is a lot of work. Ordering equipment takes a long time. Installing and getting all the equipment working takes a long time. And as soon as they got everything working, they would run out of capacity and the whole process had to start over again.
The long lead times for equipment forced Netflix to adopt what is known as a vertical scaling strategy: handling growth by buying bigger, more powerful machines rather than adding many smaller ones. Netflix made big programs that ran on big computers. This approach is called building a monolith. One program did everything.
What Netflix was good at was delivering video to its members. Netflix would rather concentrate on getting better at delivering video than on getting better at building datacenters. Building datacenters was not a competitive advantage for Netflix; delivering video was.
It took about seven years for Netflix to complete the move from their own datacenters to AWS. During that period Netflix grew its number of streaming customers eightfold. Netflix now runs on several hundred thousand EC2 instances.
The advantage of having three regions is that any one region can fail, and the other regions will step in and handle all the members in the failed region. Netflix calls this process evacuating a region.
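To make the idea concrete, here's a minimal sketch, not Netflix's actual code, of what evacuation boils down to: steer each member away from an unhealthy region to one that can absorb the load. The region names and the shape of the health check are assumptions for illustration.

```python
# A toy model of region evacuation. Assumes three regions and a simple
# health map; real failover involves DNS steering, load shedding, and
# pre-scaled capacity in the surviving regions.

REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]

def route_member(home_region, health):
    """Serve from the member's home region, or evacuate to a healthy one."""
    healthy = [r for r in REGIONS if health.get(r, False)]
    if not healthy:
        raise RuntimeError("all regions down")
    return home_region if home_region in healthy else healthy[0]

# us-east-1 fails: its members are quietly served from another region.
health = {"us-east-1": False, "us-west-2": True, "eu-west-1": True}
print(route_member("us-east-1", health))  # -> us-west-2
```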
The header image is meant to intrigue you, to draw you into selecting a video. The idea is the more compelling the header image, the more likely you are to watch a video. And the more videos you watch, the less likely you are to unsubscribe from Netflix.
The first thing Netflix does is spend a lot of time validating the video. It looks for digital artifacts, color changes, or missing frames that may have been caused by previous transcoding attempts or data transmission problems.
A pipeline is simply a series of steps data is put through to make it ready for use, much like an assembly line in a factory. More than 70 different pieces of software have a hand in creating every video.
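As a rough sketch of the assembly-line idea, a pipeline is just a list of functions applied in order. The step names below are invented stand-ins; Netflix's real pipeline has far more stages.

```python
# A toy video pipeline: each step takes the work-in-progress video and
# returns a transformed version, like stations on an assembly line.

def validate(video):
    # e.g., check for missing frames or digital artifacts
    return video

def transcode(video):
    # e.g., convert to a particular codec and bitrate
    return video

def package(video):
    # e.g., wrap the result for streaming delivery
    return video

PIPELINE = [validate, transcode, package]

def run_pipeline(video):
    for step in PIPELINE:
        video = step(video)
    return video

finished = run_pipeline({"title": "House of Cards", "source": "raw.mov"})
```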
The idea behind a CDN is simple: put video as close as possible to users by spreading computers throughout the world. When a user wants to watch a video, find the nearest computer with the video on it and stream to the device from there.
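In code, the core CDN decision is tiny. Here's a hedged sketch with made-up server locations and catalogs: among the servers that actually hold the video, pick the one closest to the user.

```python
# A toy CDN lookup: filter to servers that have the video cached,
# then pick the nearest. Distances and catalogs are illustrative only.

servers = [
    {"name": "london", "distance_km": 7800, "videos": {"house-of-cards"}},
    {"name": "newark", "distance_km": 120,  "videos": {"house-of-cards"}},
    {"name": "dallas", "distance_km": 2200, "videos": set()},
]

def pick_server(video_id, servers):
    have_it = [s for s in servers if video_id in s["videos"]]
    return min(have_it, key=lambda s: s["distance_km"])

print(pick_server("house-of-cards", servers)["name"])  # -> newark
```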
In the years after debuting its streaming service in 2007, Netflix grew to 36 million members in 50 countries, watching more than a billion hours of video each month and streaming multiple terabits of content per second.
At the same time, Netflix was also putting a lot of effort into all the AWS services we talked about earlier. Netflix calls the services in AWS its control plane. Control plane is a telecommunications term for the part of the system that controls everything else. In your body, your brain is the control plane; it controls everything else.
In 2011, Netflix realized that, at its scale, it needed a dedicated CDN solution to maximize network efficiency. Video distribution is a core competency for Netflix and could be a huge competitive advantage.
The number of OCAs on a site depends on how reliable Netflix wants the site to be, the amount of Netflix traffic (bandwidth) delivered from that site, and the percentage of traffic the site allows to be streamed from it.
Within a location, a popular video like House of Cards is copied to many different OCAs. The more popular a video, the more servers it will be copied to. Why? If there were only one copy of a very popular video, streaming it to members would overwhelm the server. As they say, many hands make light work.
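A back-of-the-envelope sketch of that replication rule, with invented numbers rather than Netflix's real ones: size the number of copies to the expected audience so no single server gets overwhelmed.

```python
# Toy popularity-based replication: more expected viewers -> more OCAs
# holding a copy. The per-server capacity figure is an assumption.

def replica_count(expected_viewers, viewers_per_oca=10_000, max_ocas=50):
    needed = -(-expected_viewers // viewers_per_oca)  # ceiling division
    return max(1, min(needed, max_ocas))

print(replica_count(5_000))    # niche title -> 1 copy
print(replica_count(400_000))  # a hit like House of Cards -> 40 copies
```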
Right now, up to 100% of Netflix content is served from within ISP networks. This relieves internet congestion and reduces costs for ISPs. At the same time, Netflix members get a high-quality viewing experience, and network performance improves for everyone.
What may not be immediately obvious is that the OCAs are independent of each other. OCAs act as self-sufficient video-serving islands. Members streaming from one OCA are not affected when other OCAs fail.
Last week I saw M3GAN, the new horror-comedy starring Allison Williams and a robot-doll in a blond wig. I liked it enough. The doll character is genuinely well done (a seemingly hard-to-nail mix of creepy and campy), but I walked out of the theater with a vaguely empty feeling. I couldn't quite place it until I started talking with my friends about where the movie was set, and I realized I had no idea. One answer is somewhere in Silicon Valley, given its bald critique of big tech. It didn't actually feel like Silicon Valley, though. It didn't feel like anywhere at all. (Update: I've been informed it's set in Seattle, although it didn't feel like there either.) Every backdrop was generic and crisp: the scrubbed tech compound where Gemma (Allison Williams) works; the bland, Wayfair-decorated house she lives in; the clean, non-specific streets she drives on. I thought little of this while watching. The movie looked expensive and professional, or at least had the hallmarks of those things: glossy, filtered, smooth. Only after it ended did it occur to me that it seemed, like so many other contemporary movies and shows, to exist in a phony parallel universe we've come to accept as relevant to our own.
To be clear, this isn't about whether the movie was "realistic." Movies with absurd, surreal, or fantastical plots can still communicate something honest and true. It's actually, specifically, about how movies these days look. That is, more flat, more fake, over-saturated, or else over-filtered, like an Instagram photo in 2012, but rendered in commercial-like high-def. This applies to prestige television, too. There are more green screens and sound stages, more CGI, more fixing-it-in-post. As these production tools have gotten slicker and cheaper, and thus more widely abused, it's not that everything looks obviously shitty or too good to feel true; it's that most things look mid in the exact same way. The ubiquity of the look is making it harder to spot, and the overall result is weightless and uncanny. An endless stream of glossy vehicles that are easy to watch and easier to forget. I call it the "Netflix shine," inspired by one of the worst offenders, although some reading on the topic revealed others call it (more boringly) the "Netflix look."
In a 2022 Vice piece called "Why Does Everything on Netflix Look Like That," writer Gita Jackson describes the Netflix look as unusually bright and colorful, or too dark, the characters lit inexplicably by neon lights, everything shot at a medium close-up. Jackson discovered this aesthetic monotony is in part due to the fact that Netflix requires the same "technical specifications from all its productions." This is of course an economic choice: more consistency = less risk. They've also structured their budgets to favor pre-production costs like securing top talent. So despite the fact that their budgets are high, they're spending it all on what is essentially marketing, pulling resources away from things like design and location. This style-over-substance approach is felt in most things Netflix makes, and it's being replicated across the industry. (For more proof of concept, Rachel Syme's recent New Yorker profile of Netflix Global Head of Television Bela Bajaria is perfectly tuned and genuinely chilling. I'm still thinking about her "Art is Truth" blazer and lack of jet lag despite constant world travel. She's a walking metaphor.)
I'm not a film buff, so I write this from a layman's perspective. But every time I watch something made before 2000, it looks so beautiful to me: not otherworldly or majestic, but beautiful in the way the world around me is beautiful. And I don't think I'm just being nostalgic. Consider these two popular rom-com movie stills: the first from When Harry Met Sally, shot on film in 1989, the second from Moonshot, shot digitally in 2022.