warwain hardie jeffrey

Emerio Boykins

Aug 2, 2024, 12:32:58 PM
to nforutticde

Though some of their stories struck me as intense and overwhelming (like receiving feedback in the middle of leading a large meeting), after reading the book I've found myself more open to feedback... even criticism!

This has been backed by research: in trials where a team had to work with a difficult person or underperformer (an actor placed in the study), the team took on the negative characteristics of that difficult or underperforming teammate. These teams performed 30-40% worse than teams without the actor. Netflix particularly disagrees with the concept of performance improvement plans (PIPs): with the Netflix culture of candor, employees should already have access to regular, helpful feedback.

They cited a Duke University and MIT study where a group of employees offered a medium bonus performed no better than the employees offered a low bonus. Employees offered the largest bonus actually performed worse than the groups with the low or medium bonus.

This is in spite of a study they shared from Vienna University: when newspapers were placed next to a sign stating the price, a plea for honesty, and a payment slot, two-thirds of people took a newspaper without paying.

The Writers Guild of America and the AMPTP came to an agreement, ending a strike of more than 150 days. One provision of the new contract is a performance bonus for original streaming films and series, and that will be the focus of this article. Which 2023 Netflix US shows and films would have earned it?

Per the agreement, starting January 1st, 2024, a streaming original series or film earns the performance bonus if, within its first 91 days of release, it reaches CVEs (or "views," as Netflix refers to them) equivalent to 20% of the streaming service's US subscriber base.
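The qualification rule reduces to a single comparison. A minimal sketch, with purely illustrative subscriber and view counts (not actual Netflix figures):

```python
# Hypothetical check of the 20% performance-bonus threshold.
# Both numbers below are made up for illustration.
us_subscribers = 80_000_000     # assumed US subscriber base
views_in_91_days = 18_000_000   # a title's "views" in its first 91 days

threshold = 0.20 * us_subscribers   # 20% of the US base
qualifies = views_in_91_days >= threshold

print(f"threshold: {threshold:,.0f}, qualifies: {qualifies}")
```

With these assumed figures the threshold works out to 16 million views, so the hypothetical title qualifies.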

I've been searching for answers on which internet, cable, phone, and select streaming services are included for extra points with the RR Visa. So far, I learned that "Only subscriptions paid for, or purchases made with the following select merchants will qualify for this category: Apple Music, Apple TV, Disney+, ESPN+, Fubo TV, HBO Max, Hulu, Netflix, Pandora, Paramount+, Peacock, Showtime, SiriusXM, Sling, Spotify, YouTube Premium, YouTube TV and Vudu."

So that covers the streaming - but what about the cable, internet, and phone aspects of the category - which is broken out in the rewards summary? What providers are included? I earned zero points for this category this last month despite putting my phone, internet provider, and cable bills on this card - and all are major providers.

The most interesting thing to me is that the wording in the advertising is not "select cable, phone, internet" but "cable, phone, internet, and select streaming." That implied that cable, phone, and internet would be broadly covered. Is anyone else having better luck getting points here than me?

In the Chase app I see the charge from my cable company, Comcast, is identified as merchant type "cable and paid television service". It then also shows +2x points for the transaction. What merchant type is your transaction being labeled as? That's what determines when the bonus points are issued.

Thanks, you are right. It's Spectrum and it says it's in that same category. So I should get the points. My statement has not closed for this month yet so I will look for it. I assume the points will be there.

Shareholders voted on the non-binding "say-on-pay" advisory measure at Netflix's annual meeting on Thursday, with the result coming just days after the Writers Guild of America, a union representing striking entertainment industry writers, urged shareholders to say no.

Sarandos could earn as much as $40 million this year from the combination of his base pay, performance bonus and stock options, compared to the roughly $50.3 million he made in 2022, Netflix's proxy statement shows. Meanwhile, Peters, his new co-CEO, could earn just over $34 million this year through a combination of base pay and stock options, a company SEC filing shows.

"While investors have long taken issue with Netflix's executive pay, the compensation structure is more egregious against the backdrop of the strike," WGA West president Meredith Stiehm wrote in a letter to Netflix shareholders.

Stiehm added, "If the company could afford to spend $166 million on executive compensation last year, it can afford to pay the estimated $68 million per year that writers are asking for in contract improvements and put an end to the disruptive strike."

Shareholders' rejection of the compensation packages comes as the streaming giant is under pressure after more than 11,000 television and film writers went on strike last month following the breakdown of negotiations between the Writers Guild of America and Hollywood studios.

Sarandos declined to accept an award at the PEN American Spring Literary Gala last month, citing the potential for a disruption due to the strike. Picketers have disrupted events such as Boston University's graduation ceremony, where Warner Bros. Discovery CEO David Zaslav gave the commencement address.

"Given the threat to disrupt this wonderful evening, I thought it was best to pull out so as not to distract from the important work that PEN America does for writers and journalists," Sarandos told Variety last month.

Shareholders have voted against Netflix's pay packages before. Last year, only about 27% of Netflix's investors voted for the proposed pay packages. Still, the pay packages for the company's then-CEOs, Sarandos and Reed Hastings, rose by about 31% and 25%, respectively, from 2021 to 2022, according to regulatory filings.

The story of the Netflix Prize differs from traditional diversity narratives in which a single talented individual, given an opportunity, creates a breakthrough because of some idiosyncratic piece of information. Instead, teams of diverse, brilliant people competed to attain a goal. The contest attracted thousands of participants with a variety of technical backgrounds and work experiences. The teams applied an algorithmic zoo of conceptual, computational, and analytical approaches. Early in the contest, the top ten teams included a team of American undergraduate math majors, a team of Austrian computer programmers, a British psychologist and his calculus-wielding daughter, two Canadian electrical engineers, and a group of data scientists from AT&T research labs.

In the end, the participants discovered that their collective differences contributed as much as or more than their individual talents. By sharing perspectives, knowledge, information, and techniques, the contestants produced a sequence of quantifiable diversity bonuses.

Winning the Netflix Prize required the inference of patterns from an enormous data set. That data set covered a diverse population of people. Some liked horror films. Others preferred romantic comedies. Some liked documentaries. The modelers would attempt to account for this heterogeneity by creating categories of movies and of people.

To understand the nature of the task, imagine a giant spreadsheet with a row for each person and a column for each movie. If each user rated every movie, that spreadsheet would contain over 8.5 billion ratings. The data consisted of a mere 100 million ratings. Though an enormous amount of data, it fills in fewer than 1.2 percent of the cells. If you opened the spreadsheet in Excel, you would see mostly blanks. Computer scientists refer to this as sparse data.
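The sparsity figures can be checked against the published dimensions of the Netflix Prize data set (roughly 480,189 users, 17,770 movies, and 100,480,507 ratings; treat these as approximate):

```python
# Back-of-the-envelope sparsity of the Netflix Prize rating matrix.
n_users = 480_189
n_movies = 17_770
n_ratings = 100_480_507

total_cells = n_users * n_movies        # every (user, movie) pair
fill_rate = n_ratings / total_cells     # fraction of cells actually rated

print(f"{total_cells:,} possible ratings, {fill_rate:.2%} filled in")
```

That comes to about 8.5 billion cells with roughly 1.18 percent filled, matching the "fewer than 1.2 percent" figure above.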

The contestants had to predict the blanks, or, to be more precise, predict the values for the blanks that consumers would fill in next. Inferring patterns from existing data, what data scientists call collaborative filtering, requires the creation of similarity measures between people and between movies. Similar people should rank the same movie similarly. And each person should rank similar movies similarly.
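One common similarity measure for collaborative filtering is cosine similarity computed over co-rated movies. A minimal sketch with a toy rating matrix (the matrix, the zero-means-unrated convention, and the masking rule are illustrative assumptions, not the contestants' actual method):

```python
import numpy as np

# Toy user-movie rating matrix; 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1],   # user A
    [4, 5, 0, 2],   # user B: tastes similar to A
    [1, 0, 5, 4],   # user C: tastes unlike A's
], dtype=float)

def cosine_sim(u, v):
    # Compare users only on movies both have rated.
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    u, v = u[mask], v[mask]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_ab = cosine_sim(ratings[0], ratings[1])
sim_ac = cosine_sim(ratings[0], ratings[2])
print(sim_ab, sim_ac)  # A should score as far more similar to B than to C
```

A recommender would then predict A's missing ratings by weighting B's opinions more heavily than C's.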

One might think that including more features would lead to more accurate predictions. That need not hold. Models with too many variables can overfit the data. To guard against overfitting, computer scientists divide their data into two sets: a training set and a testing set. They fit their model to the first set, then check to see if it also works on the second set.[2] In the Netflix Prize competition, the size of the data set and the costs of computation limited the number of variables that could be included in any one model. The winner would therefore not be the person or team that could think up the most features. It would be the team capable of identifying the most informative and tractable set of features.
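The train/test guard against overfitting can be demonstrated with synthetic data. This is a minimal sketch of the idea, not the contest's actual protocol: the outcome depends on only two of thirty features, so fitting all thirty to a small training set interpolates noise that the held-out set exposes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends on only the first 2 of 30 features.
n, d = 60, 30
X = rng.normal(size=(n, d))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Split: fit on the first half, evaluate on the held-out second half.
X_train, X_test = X[:30], X[30:]
y_train, y_test = y[:30], y[30:]

# Ordinary least squares using all 30 features.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

train_err = np.mean((X_train @ w - y_train) ** 2)
test_err = np.mean((X_test @ w - y_test) ** 2)
print(train_err, test_err)  # training error is optimistic; test error is honest
```

With 30 samples and 30 features the model fits the training set almost exactly, yet its error on the held-out set is far larger: the signature of overfitting.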

Given a feature set, each team also needed an algorithm to make predictions. Dinosaur Planet, a team of three mathematics undergraduates that briefly led the competition in 2007, tried multiple approaches, including clustering (partitioning movies into sets based on similar characteristics), neural networks (algorithms that take features as inputs and learn patterns), and nearest-neighbor methods (algorithms that assign numerical scores to each feature for each movie and compute a distance based on vectors of features).
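The nearest-neighbor idea, scoring each movie on a handful of features and comparing distances between feature vectors, can be sketched as follows; the movies and feature values are made up for illustration:

```python
import numpy as np

# Illustrative feature vectors: (action, romance, humor) scores per movie.
movies = {
    "Movie A": np.array([0.9, 0.1, 0.3]),
    "Movie B": np.array([0.8, 0.2, 0.4]),
    "Movie C": np.array([0.1, 0.9, 0.7]),
}

def nearest(name):
    # Find the other movie with the smallest Euclidean feature distance.
    target = movies[name]
    return min(
        (np.linalg.norm(target - vec), other)
        for other, vec in movies.items()
        if other != name
    )[1]

print(nearest("Movie A"))
```

A viewer who liked Movie A would then be recommended its nearest neighbor in feature space.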

At the end of the first year, a team from AT&T research labs, known as BellKor, led the competition. Their best single model relied on fifty variables per movie and improved on Cinematch by 6.58 percent. That was just one of their models. By combining their fifty models in an ensemble, they could improve on Cinematch by 8.43 percent.
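Why an ensemble beats its best member can be seen with two synthetic models whose errors are independent: averaging their predictions cancels part of the noise. The numbers here are illustrative, not BellKor's:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(1, 5, size=1000)  # stand-in "true" ratings

# Two imperfect models with independent, equally sized errors.
pred_a = truth + rng.normal(scale=0.5, size=1000)
pred_b = truth + rng.normal(scale=0.5, size=1000)

def rmse(pred):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

blend = 0.5 * (pred_a + pred_b)  # simplest ensemble: average the predictions
print(rmse(pred_a), rmse(pred_b), rmse(blend))
```

Averaging two models with independent errors of equal size cuts the error by a factor of about sqrt(2), which is why diverse models combine into something better than any one of them.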

A year and a half into the competition, BellKor knew they could outperform the other teams, but also that they could not reach the 10 percent threshold. Rather than give up, BellKor opted to call in reinforcements. In 2008, they merged with the Austrian computer scientists, Big Chaos, a team that had developed sophisticated algorithms for combining models. BellKor had the best predictive models. Big Chaos knew better ways to combine them. By combining these repertoires, they produced a diversity bonus. However, that bonus was not sufficient to push them above the 10 percent threshold.
