Will I "Really Like" this Movie?

Navigating Movie Website Ratings to Select More Enjoyable Movies

Wonder Woman Is Wonderful But Is It the GOAT Superhero Movie?

Everybody is talking about Wonder Woman and its record-breaking box office last weekend. Critics and audiences agree that Wonder Woman is worth a trip to the theater. The Mad Movie Man is convinced as well. You’ll find the movie in the top half of the 2017 Top Ten List and it is on my Watch List for the week, which means I plan on seeing it within the next week.

I mentioned last week that critics were falling all over themselves in praising this movie with some calling it the Superhero GOAT (Greatest Of All Time). Does it warrant such acclaim? Maybe. When you compare it to four other highly rated superhero movies that kicked off franchises, it holds up pretty well.

Movie | Oscar Noms/Wins | IMDB Rating | Rotten Tomatoes Rating | RT % Fresh | Combined Rating
Wonder Woman (2017) | 0/0 | 8.3 | Certified Fresh | 93% | 17.6
Iron Man (2008) | 2/0 | 7.9 | Certified Fresh | 94% | 17.3
Batman Begins (2005) | 1/0 | 8.3 | Certified Fresh | 84% | 16.7
Superman (1978) | 3/0 | 7.3 | Certified Fresh | 93% | 16.6
Spider-Man (2002) | 2/0 | 7.3 | Certified Fresh | 89% | 16.2

All four of these comparison movies were Oscar nominated. We’ll have to wait until next January to see if Wonder Woman earns Oscar recognition. The combined rating presented here totals the IMDB rating and the Rotten Tomatoes % Fresh (converted to a 10 pt. scale) to measure the response of both critics and audiences to the five movies. It is still early, and IMDB ratings tend to fade a little over time, but for now Wonder Woman is clearly in the GOAT discussion.
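For anyone who wants to check the arithmetic, here is a minimal sketch of that combined rating in Python (the function name is mine; the inputs come straight from the table above):

```python
def combined_rating(imdb_rating: float, rt_pct_fresh: int) -> float:
    """Sum of the IMDB rating (0-10) and the % Fresh converted to a 10 pt. scale."""
    return imdb_rating + rt_pct_fresh / 10.0

movies = {
    "Wonder Woman (2017)": (8.3, 93),
    "Iron Man (2008)": (7.9, 94),
    "Batman Begins (2005)": (8.3, 84),
    "Superman (1978)": (7.3, 93),
    "Spider-Man (2002)": (7.3, 89),
}

for title, (imdb, fresh) in movies.items():
    print(f"{title}: {combined_rating(imdb, fresh):.1f}")
# Wonder Woman comes out on top at 17.6, matching the table above.
```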

If Wonder Woman holds on to its statistical GOAT position it will be fueled by the response of women to the movie. A comparison of Female and Male IMDB ratings for the five movies compared here lays this out pretty clearly.

Movie | Female IMDB Rating | Male IMDB Rating | Rating Difference
Wonder Woman | 8.6 | 8.2 | 0.4
Iron Man | 7.9 | 7.9 | 0.0
Superman | 7.3 | 7.3 | 0.0
Batman Begins | 8.1 | 8.3 | -0.2
Spider-Man | 7.1 | 7.3 | -0.2

While men “really like” Wonder Woman, women love it. They are responding to a superhero movie like they never have before. Men, on the other hand, have a slight preference for Christopher Nolan’s vision of Batman. I also have to admit that I personally consider Batman Begins one of the GOAT movies, irrespective of genre. That being said, I am really excited about seeing Wonder Woman.

***

After all of this praise for Wonder Woman, you might be wondering why it is only fifth on the 2017 Top Movies List. Does that mean that the four movies ahead of it are better movies? It might, but not necessarily. The top four movies all went into limited release in 2016 to qualify for Oscar consideration. They didn’t go into wide release until early 2017, which is why they are on this list. All of the other movies on the list won’t be considered for Oscar recognition until January 2018. As I mentioned last week, this list is based on objective criteria. The Oscar nominations that the top four movies received are additional objective pieces of evidence that they are quality movies. That evidence allows the algorithm to be more confident in its evaluation and, as a result, produces a higher “really like” probability. Again, just in case you were wondering.


“Really Like” Previews of Coming Attractions 

Recently I mentioned to someone that I was a movie blogger. Naturally they assumed I wrote movie reviews. It did get me thinking, though, “what is my blog really about?”

Looking back at my previous 92 posts, it’s hard to discern a consistent theme. I confess that it has been hard to come up with a fresh idea every single week. The result has been a hodgepodge of articles that intersect movies and data, but lack a unifying core. That is…until now.

It occurs to me that, while I’m not in the movie reviewing business, I am in the movie previewing business. I use statistical analysis to preview what movies I might “really like”. It also occurs to me that I created my algorithm for my benefit, not yours. I write this blog, though, for your benefit.

With all of that in mind, I’ve decided to reorient this blog to a discussion of movies you might “really like”, using my statistical analysis as the foundation of the discussion. My algorithm has two parts. The first produces a “really like” probability based on data from websites like Netflix, Movielens, and Criticker that are oriented to movies that I, personally, will “really like”.

The second part of the equation is based on general data that has nothing to do with my personal taste in movies. IMDB and Rotten Tomatoes produce ratings based on the input of all of their website participants. Oscar award performance has nothing to do with me. I’m not a member of the academy. For now, these are the factors that go into my “really like” probability based on general information. It’s this “really like” probability that might be most useful to you, the followers of this blog.
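To make the two-part structure concrete, here is a rough sketch. The posts above describe the two probabilities but not how they are combined, so the 50/50 weighting below is purely an illustrative assumption, not my actual formula:

```python
def really_like_probability(personal_prob: float,
                            general_prob: float,
                            personal_weight: float = 0.5) -> float:
    """Blend a taste-based probability (Netflix, Movielens, Criticker) with a
    general-data probability (IMDB, Rotten Tomatoes, Oscar performance).
    The 50/50 default weight is a placeholder, not the real weighting."""
    return personal_weight * personal_prob + (1 - personal_weight) * general_prob

# Example: my personal sites suggest an 80% chance, the general data says 60%.
print(round(really_like_probability(0.80, 0.60), 2))  # 0.7
```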

On Monday I added a new list to this site. The Top Ten 2017 Movies Based on Objective Criteria uses this second half of my algorithm to suggest movies that you might “really like”. I intend to update this list every Monday after the initial data from the previous weekend’s new releases comes in. This Friday, for example, Wonder Woman goes into wide release. Some critics are calling it the “best superhero movie of all time”. It will be interesting to look at the data feedback on Monday to see if it’s actually trending that way.

I’m also exploring the addition of other general data to the criteria. For example, is there statistical significance to when a movie is released? I’m in the data-gathering stage of that study. In future months, I’m also planning to add Top Ten lists for years prior to 2017.

I will also continue to update my Watch List for the week each Wednesday. While it is based on movies I should “really like”, you might find some movies there that pique your interest.

As for this blog, I plan to orient each week’s post around one or two of the movies on my lists and offer up some ideas as to why it might be a movie that you’ll “really like”. For now I would encourage you to check back on Monday to see if the hyperbolic buzz surrounding Wonder Woman is supported by strong enough numbers to move it into 2017’s “really like” Top Ten. Then, return again on Thursday to see what movies that you might “really like” have caught my eye.

This One Is All About You and Movielens

A few months ago my daughter texted me for recommendations for good movies on Netflix or Amazon Prime. I recommended a hidden treasure of a movie, Begin Again, but I couldn’t remember if it was on Netflix or Amazon. I knew it was on one of them. I had to go to each site to find the movie to nail down which streaming service it was on.

My daughter, and others like her, will no longer need to search blindly for movies on the streaming services they subscribe to if they’ve signed up to use my favorite movie recommender site, Movielens. Aside from being a very reliable movie recommender site, it is also the most useful in terms of finding movies to watch.

Within the last couple of months Movielens has added its best feature to date. Not only can you get pages and pages of recommended movies, once you’ve taken the time to rate the movies you’ve seen, but now you can filter them by the most popular streaming services.

Movielens allows you to filter recommendations by movies currently on Netflix, Amazon, Hulu, HBO, and Showtime. You can filter them individually or in combination. In my case, I filter by Netflix, Amazon and HBO. This means that you can get a list of movies that you can watch right now, ranked by the probability that you will “really like” them.

If I go to the Home Page of Movielens right now and go to Top Picks, I can click on the filter’s drop down menu and select Streaming Services. This will provide me with a list of the five services mentioned previously. By clicking on Netflix, Amazon, and HBO, I get a list of movies that I can watch now that I haven’t seen before. There are 5,256 movies available for me to watch right now ranked from the one I’m most likely to enjoy, last summer’s box office surprise Me Before You (Amazon), to the movie I’m least likely to enjoy, The Admirer (Amazon). You’ve never heard of The Admirer? Neither have I. It is a 2012 Russian movie based on the love between Anton Chekhov and a young writer, Lidiya Avilova. ZZZ.

More often than not my posts are about my experiences in finding movies that I will “really like”. This one’s for you. If you only have time to track one movie recommender website, go to Movielens. It will be fun and it will save you time scrolling through lines and lines of movies searching for movies that you might like.

When It Comes to Unique Movies, Don’t Put All of Your Movie Eggs in the Netflix Basket.

It is rare to find a movie that isn’t a sequel, or a remake, or a borrowed plot idea, or a tried and true formula. Many of these movies are entertaining because they feel familiar and remind us of another pleasant movie experience. The movie recommender websites Netflix, Movielens, and Criticker do a terrific job of identifying those movie types that you “really liked” before and surfacing those movies that match the familiar plot lines you’ve enjoyed in the past.

But what about those movies that are truly original? What about those movies that present ideas and plot lines that aren’t in your usual comfort zone? Will these reliable websites surface these unique movies that I might like? Netflix has 20,000+ genre categories that they slot movies into. But what if a movie doesn’t fit neatly into one of those 20,000 categories?

Yesterday I watched a great movie, Being There.

Being There

This 1979 movie, starring Peter Sellers in an Academy Award nominated performance, is about a simple-minded gardener who never left the home of his employer over the first fifty years of his life. Aside from gardening, the only knowledge he has is what he’s seen on television. After his employer dies he is forced to leave his home. The movie follows Chance the Gardener as he becomes Chauncey Gardner, economic advisor to the President. It’s not a story of transformation but of perception. The movie is fresh.

My most reliable movie recommenders, Netflix and Movielens, warned me away from this movie. The only reason I added it to my weekly Watch List is because I saw the movie in the theater when it first came out in 1979 and “really liked” it.

Another possible reason Netflix missed on this movie is that I hated Peter Sellers’ other classic movie, Dr. Strangelove. I rated it 1 out of 5 stars. If Being There is slotted into a Netflix category of Peter Sellers classics, that might explain the mismatch.

Is it impossible, then, to surface creative, original movies that you might like? Not entirely. Criticker predicted I would rate Being There an 86 out of 100. I gave it an 89. The IMDB rating is 8.0 based on over 54,000 votes. Rotten Tomatoes has it at 96% Certified Fresh. This is why I incorporate ratings from five different websites into the “really like” model rather than relying on Netflix alone.

When it comes to unique movies, don’t put all of your movie eggs in the Netflix basket.


What Am I Actually Going to Watch This Week? Netflix Helps Out with One of My Selections.

The core mission of this blog is to share ideas on how to select movies to watch that we’ll “really like”. I believe there have been times when I’ve gotten bogged down in how to build the “really like” model. I’d like to reorient the dialogue back to the primary mission: which “really like” movies I am going to watch and, more importantly, why.

Each Wednesday I publish the ten movies on my Watch List for the week. These movies usually represent the ten movies with the highest “really like” probability that are available to me to watch on platforms that I’ve already paid for. This includes cable and streaming channels I’m paying for and my Netflix DVD subscription. I rarely use a movie on demand service.

Now, ten movies is too many, even for the Mad Movie Man, to watch in a week. The ten-movie Watch List instead serves as a menu for the 3 or 4 movies I actually most want to watch during the week. So, how do I select those 3 or 4 movies?

The first and most basic question to answer is who, if anyone, am I watching the movie with. Friday night is usually the night that my wife and I will sit down and watch a movie together. The rest of the week I’ll watch two or three movies by myself. So, right from the start, I have to find a movie that my wife and I will both enjoy. This week that movie is Hidden Figures, the 2016 Oscar nominated film about the role three black female mathematicians played in John Glenn’s orbit of the earth in the early 1960’s.

This movie became available to Netflix DVD subscribers on Tuesday, May 9. I received my Hidden Figures DVD on that day. Something I’ve learned over the years is that Netflix ships DVDs on Monday that become available on Tuesday. For this to happen, you have to time the return of your old DVD to arrive on the Saturday or Monday before the Tuesday release. This gives you the best chance to avoid “long wait” queues.

I generally use Netflix DVD to see new movies that I don’t want to wait another 3 to 6 months to see or for old movies that I really want to see but aren’t available on my usual platforms.

As of the first quarter of 2017, Netflix reported that there are only 3.94 million subscribers to their DVD service. I am one of them. The DVD service is the only way you can still access Netflix’s best-in-the-business 5-star system of rating movies. It is easily the most reliable predictor of how you’ll rate a movie or TV show. Unfortunately, Netflix streaming customers no longer have the benefit of the 5-star system. They have gone to a less granular “thumbs up” and “thumbs down” rating system. To be fair, I haven’t gathered any data on this new system yet, so I’ll reserve judgment as to its value. As for the DVD service, they will have me as a customer as long as they maintain their 5-star recommender system as one of the benefits of being a DVD subscriber.

The 5-star system is a critical assist in finding a movie for both my wife and me. Netflix allows you to set up profiles for other members of the family. After my wife and I watch a movie, she gives it a rating and I give it a rating. These ratings are entered under our separate profiles. This allows a unique predicted rating for each of us based on our individual taste in movies. For example, Netflix predicts that I will rate Hidden Figures a 4.6 out of 5 and my wife will rate it a 4.9. In other words, according to Netflix, this is a movie that both of us will not only “really like” but should absolutely “love”.

Hidden Figures has a “really like” probability of 61.4%. Its Oscar Performance probability is 60.7% based on its three nominations. Its probability based solely on the feedback from the recommender sites that I use is 69.1%. At this point in time, it is a Quintile 1 movie from a credibility standpoint. This means that the 69.1% probability is based on a limited number of ratings. It’s not very credible yet. That’s why the 61.4% “really like” probability is closer to the Oscar Performance probability of 60.7%. I would fully expect that, as more people see Hidden Figures and enter their ratings, the “really like” probability will move higher for this movie.
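To illustrate how credibility pulls the blended number toward the Oscar Performance probability, here is a sketch. The quintile weights are guesses for illustration only; all that the paragraph above establishes is that a Quintile 1 movie leans heavily on the Oscar Performance side:

```python
# Hypothetical weights on the recommender-site probability, by credibility quintile.
QUINTILE_WEIGHT = {1: 0.10, 2: 0.30, 3: 0.50, 4: 0.70, 5: 0.90}

def blended_probability(recommender_prob: float, oscar_prob: float, quintile: int) -> float:
    """Credibility-weighted blend of the recommender-site and Oscar probabilities."""
    w = QUINTILE_WEIGHT[quintile]
    return w * recommender_prob + (1 - w) * oscar_prob

# Hidden Figures: recommender sites say 69.1%, Oscar Performance says 60.7%.
print(round(blended_probability(0.691, 0.607, quintile=1), 3))
# A 10% weight gives 0.615, close to the 61.4% quoted above.
```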

Friday Night Movie Night this week looks like a “really like” lock…thanks to Netflix DVD.


Can You Increase Your Odds of Having a “Really Like” Experience at the Movie Theater?

Last Friday, my wife and I were away from home visiting two different sets of friends. One group we met for lunch. The second group we were meeting in the evening. With some time to spare between visits, we decided to go to a movie. The end of April usually has slim pickings for “really like” movies at the theater. With the help of IMDB and Rotten Tomatoes, I was able to surface a couple of prospects but only one that both my wife and I might “really like”. We ended up seeing a terrific little movie, Gifted.

My experience got me thinking about the probabilities of seeing “really like” movies at the movie theater. These movies have the least data on which to base a decision, and yet I can’t recall many movies that I’ve seen in the theater that I haven’t “really liked”. Was this reality or merely perception?

I created a subset of my database of movies that I’ve seen within 3 months of their release. Of the 1,998 movies in my database, 99 movies, or 5%, met the criteria. Of these 99 movies, I “really liked” 86% of them. For the whole database, I “really liked” 60% of the movies I’ve watched over the last 15 years. My average score for the 99 movies was 7.8 out of 10. For the remaining 1,899 movies my average score was 6.8 out of 10.
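For the curious, a query like that might look something like the pandas sketch below. The column names and the 7-out-of-10 “really like” cutoff are placeholders, not my actual database schema:

```python
import pandas as pd

def theater_window_stats(df: pd.DataFrame, really_like_cutoff: float = 7.0) -> dict:
    """Compare movies watched within roughly 3 months of release to the rest.
    Assumes columns release_date, watch_date (datetimes) and my_score (0-10)."""
    recent = (df["watch_date"] - df["release_date"]) <= pd.Timedelta(days=92)
    subset, rest = df[recent], df[~recent]
    return {
        "movies_within_3_months": len(subset),
        "really_like_pct": (subset["my_score"] >= really_like_cutoff).mean(),
        "avg_score_within_3_months": subset["my_score"].mean(),
        "avg_score_everything_else": rest["my_score"].mean(),
    }
```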

How do I explain this? My working theory is that when a movie comes with an additional cash payout, i.e. theater tickets, I become a lot more selective in what I see. But how can I be more selective with less data? I think it’s by selecting safe movies. There are movies that I know I am going to like. When I went into the movie theater a couple of months ago to see Beauty and the Beast, I knew I was going to love it, and I did. Those are the types of movie selections I tend to reserve for the theater experience.

There are occasions like last Friday when a specific movie isn’t drawing me to the movies but instead I’m drawn by the movie theater experience itself. Can I improve my chances of selecting a “really like” movie in those instances?

Last week I mentioned in my article that I needed to better define what I need my “really like” probability model to do. One of the things it needs to do is provide better guidance for new releases. The current model has a gap when it comes to new releases. Because the data is scarce, most new releases will be Quintile 1 movies in the model. In other words, very little of the indicators based on my taste in movies (i.e. Netflix, Movielens, and Criticker) is factored into the “really like” probability.

A second gap in the model is that new releases haven’t been considered for Academy Awards yet. The model treats them as if they aren’t award worthy, even though some of them will be Oscar nominated.

I haven’t finalized a solution to these gaps but I’m experimenting with one. As a substitute for the Oscar performance factor in my model I’m considering a combined IMDB/Rotten Tomatoes probability factor. These two outputs are viable indicators of the quality of a new release. This factor would be used until the movie goes through the Oscar nomination process. At that time, it would convert to the Oscar performance factor.
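A sketch of that substitution idea is below. How the IMDB and Rotten Tomatoes numbers would map to a probability isn’t settled; the simple average here is just a placeholder to show where the fallback slots in:

```python
from typing import Optional

def quality_factor(imdb_rating: float, rt_pct_fresh: int,
                   oscar_prob: Optional[float]) -> float:
    """Use the Oscar performance factor once nominations are known; until then,
    fall back to a combined IMDB/Rotten Tomatoes factor (placeholder average)."""
    if oscar_prob is not None:
        return oscar_prob
    return (imdb_rating / 10 + rt_pct_fresh / 100) / 2

# A brand-new release with no Oscar history yet: 8.2 on IMDB, 86% Fresh.
print(round(quality_factor(8.2, 86, oscar_prob=None), 2))  # 0.84
```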

I’ve created a 2017 new release list of the new movies I’m tracking. You can find it on the sidebar with my Weekly Watch List movies. This list uses the new “really like” probability approach I’m testing for new releases. Check it out.

If you plan on going to the movies this weekend to see Guardians of the Galaxy Vol. 2, it is probably because you really liked the first one. Based on IMDB and Rotten Tomatoes, you shouldn’t be disappointed. It is Certified Fresh 86% on Rotten Tomatoes and 8.2 on IMDB.


“Really Like” Movies: Is That All There Is?

After scoring a movie that I’ve watched, one of my rituals is to read a critic’s review of the movie. If the movie is contemporaneous to Roger Ebert’s tenure as the world’s most read critic, he becomes my critic of choice. I choose Ebert, first of all, because he is a terrific writer. He has a way of seeing beyond the entertainment value of the movie and observing how it fits into the culture of the time. I also choose Ebert because I find that he “really likes” many of the movies I “really like”. He acts as a validator of my film taste.

The algorithm that I use to find “really like” movies to watch is also a validator. It sifts through a significant amount of data about a movie I’m considering and validates whether I’ll probably “really like” it or not based on how I’ve scored other movies. It guides me towards movies that will be “safe” to watch. That’s a good thing. Right? I guess so. Particularly, if my goal is to find a movie that will entertain me on a Friday night when I might want to escape the stress of the week.

But what if I want to experience more than a comfortable escape? What if I want to develop a more sophisticated movie palate? That won’t happen if I only watch movies that are “safe”. Is it possible that my algorithm is limiting my movie options by guiding me away from movies that might expand my taste? My algorithm suggests that because I “really liked” Rocky I & II, I’ll “really like” Rocky III as well. While that’s probably a true statement, the movie won’t surprise me. I’ll enjoy the movie because it is a variation of a comfortable and enjoyable formula.

By the same token, I don’t want to start watching a bunch of movies that I don’t “really like” in the name of expanding my comfort zone. I do, however, want to change the trajectory of my movie taste. In the end, perhaps it’s an algorithm design issue. Perhaps, I need to step back and define what I want my algorithm to do. It should be able to walk and chew gum at the same time.

I mentioned that I used Roger Ebert’s reviews because he seemed to “really like” many of the same movies that I “really liked”. It’s important to note that Roger Ebert “really liked” many more movies than I have over his lifetime. Many of those movies are outside my “really like” comfort zone. Perhaps I should aspire to “really like” the movies that Ebert did rather than find comfort that Ebert “really liked” the movies that I did.


Does Critic Expertise on Rotten Tomatoes Overcome the Law of Large Numbers?

In the evolution of my “really like” movie algorithm, one of the difficulties I keep encountering is how to integrate Rotten Tomatoes ratings in a statistically significant way. Every time I try, I keep rediscovering that its ratings are not as useful as those of the other websites I use. It’s not that it has no use. To determine if a movie is worth seeing within a week of its release, you’ll be hard pressed to find a better indicator. The problem is that most of the data for a particular movie is counted in that first week. Most critic reviews are completed close to the release date to provide moviegoers with guidance on the day a movie is released. After that first week, the critics are on to the next batch of new movies to review. With all of the other websites, the ratings continually get better as more people see the movie and provide input. The data pool gets larger and the law of large numbers kicks in. With Rotten Tomatoes, there is very little data growth. Its value rests on the expertise of the critics and less on the law of large numbers.

The question becomes: what is the value of film critics’ expertise? It is actually pretty valuable. When Rotten Tomatoes slots movies into one of its three main rating buckets (Certified Fresh, Fresh, Rotten), it does create a statistically significant differentiation.

Rotten Tomatoes Rating | “Really Like” %
Certified Fresh | 69.7%
Fresh | 50.0%
Rotten | 36.6%

Rotten Tomatoes is able to separate pretty well those movies I “really like” from those I don’t.

So what’s the problem? If we stick to Certified Fresh movies we’ll “really like” them 7 out of 10 times. That’s true. And, if I’m deciding on which new release to see in the movie theater, that’s really good. But, if I’m deciding what movie my wife and I should watch on Friday night movie night and our selection is from the movies on cable or our streaming service, we can do better.

Of the 1,998 movies I’ve seen in the last 15 years, 923 are Certified Fresh. Which of those movies am I most likely to “really like”? Based on the following table, I wouldn’t rely on the Rotten Tomatoes % Fresh number.

Rating | % Fresh Range | “Really Like” %
Certified Fresh | 96 to 100% | 69.9%
Certified Fresh | 93 to 95% | 73.4%
Certified Fresh | 89 to 92% | 68.3%
Certified Fresh | 85 to 88% | 71.2%
Certified Fresh | 80 to 84% | 73.0%
Certified Fresh | 74 to 79% | 65.3%

This grouping of six equal-size buckets suggests that there isn’t much difference between a movie in my database that is 75% Fresh and one that is 100% Fresh. Now, it is entirely possible that there is an actual difference between 75% Fresh and 100% Fresh. It is possible that, if my database were larger, my data might produce a less random pattern, one that might be statistically significant. For now, though, the data is what it is. Certified Fresh is predictive; the % Fresh part of the rating, less so.
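For reference, the bucket analysis above can be reproduced with a few lines of pandas. The column names are assumptions about the underlying database, not its actual layout:

```python
import pandas as pd

def really_like_by_fresh_bucket(df: pd.DataFrame) -> pd.Series:
    """Split Certified Fresh movies into six equal-size groups by % Fresh and
    compute the share of movies I 'really liked' in each group."""
    cf = df[df["rt_rating"] == "Certified Fresh"].copy()
    cf["fresh_bucket"] = pd.qcut(cf["pct_fresh"], q=6, duplicates="drop")
    return cf.groupby("fresh_bucket", observed=True)["really_liked"].mean()
```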

Expertise can reduce the numbers needed for meaningful differentiation between what is Certified Fresh and what is Rotten. The law of large numbers, though, may be too daunting for credible guidance much beyond that.


Some Facts Are Not So Trivial

As I’ve mentioned before on these pages, I always pay a visit to the IMDB trivia link after watching a movie. Often I will find a fun but ultimately trivial fact such as the one I discovered after viewing Beauty and the Beast. According to IMDB, Emma Watson was offered the Academy Award winning role of Mia in La La Land but turned it down because she was committed to Beauty and the Beast. Coincidentally, the heretofore non-musical Ryan Gosling was offered the role of the Beast and turned it down because he was committed to that other musical, La La Land. You really can’t fault either of their decisions. Both movies have been huge successes.

On Tuesday I watched the “really like” 1935 film classic Mutiny on the Bounty. My visit to the trivia pages of this film unearthed facts that were more consequential than trivial. For example, the film was the first movie based on historically factual events, with actors playing real historical figures, to win the Academy Award for Best Picture. The previous eight winners were all based on fiction. Real life became a viable source for great films, as the next two Best Picture winners, The Great Ziegfeld and The Life of Emile Zola, were also biographies. Interestingly, it would be another 25 years before another non-fictional film, Lawrence of Arabia, would win Best Picture.

Mutiny on the Bounty also has the distinction of being the only movie ever to have three actors nominated for Best Actor. Clark Gable, Charles Laughton, and Franchot Tone were all nominated. Everyone expected one of them to win. After splitting the votes amongst themselves, none of them did. Oscar officials vowed never to let that happen again. For the next Academy Awards in 1937, they created two new awards, for Actor and Actress in a Supporting Role. Since then, there have been only six other instances in which two actors from the same movie were nominated for Best Actor.

Before leaving Mutiny on the Bounty, there is one more non-trivial fact to relate about the movie. The characters of Captain Bligh and First Mate Fletcher Christian grow to hate each other in the plot. To further that requisite hate in the movie, Irving Thalberg, one of the producers, purposely cast the overtly gay Charles Laughton as Bligh and the notorious homophobe Gable as Fletcher Christian. This crass manipulation of the actors’ prejudice seemed to have worked as the hate between the two men was evident on the set and clearly translated to the screen. This kind of morally corrupt behavior was not uncommon in the boardrooms of the Studio system in Hollywood at the time.

Some other older Best Picture winning films with facts, not trivial, but consequential to the film industry or the outside world include:

  • It Happened One Night, another Clark Gable classic, in 1935 became the first of only three films to win the Oscar “grand slam”. The other two were One Flew Over the Cuckoo’s Nest and Silence of the Lambs. The Oscar “grand slam” is when a movie wins all five major awards, Best Picture, Director, Actor, Actress, and Screenplay.
  • Gone with the Wind, along with being the first Best Picture filmed in color,  is the longest movie, at four hours, to win Best Picture. Hattie McDaniel became the first black actor to be nominated and win an Oscar for her role in the film.
  • In Casablanca, there is a scene where the locals drown out the Nazi song “Watch on the Rhine” by singing the “Marseillaise”. In that scene you can see tears running down the cheeks of many of the locals. For many of these extras the tears were real, since they were actual refugees from Nazi tyranny. Ironically, many of the actors playing Nazis in the scene were also German Jews who had escaped Germany.
  • To prepare for his 1946 award winning portrayal of an alcoholic in The Lost Weekend, IMDB reveals that “Ray Milland actually checked himself into Bellevue Hospital with the help of resident doctors, in order to experience the horror of a drunk ward. Milland was given an iron bed and he was locked inside the “booze tank.” That night, a new arrival came into the ward screaming, an entrance which ignited the whole ward into hysteria. With the ward falling into bedlam, a robed and barefooted Milland escaped while the door was ajar and slipped out onto 34th Street where he tried to hail a cab. When a suspicious cop spotted him, Milland tried to explain, but the cop didn’t believe him, especially after he noticed the Bellevue insignia on his robe. The actor was dragged back to Bellevue where it took him a half-hour to explain his situation to the authorities before he was finally released.”
  • In the 1947 film Gentlemen’s Agreement about anti-Semitism, according to IMDB, “The movie mentions three real people well-known for their racism and anti-Semitism at the time: Sen. Theodore Bilbo (D-Mississippi), who advocated sending all African-Americans back to Africa; Rep. John Rankin (D-Mississippi), who called columnist Walter Winchell  “the little kike” on the floor of the House of Representatives; and leader of “Share Our Wealth” and “Christian Nationalist Crusade” Gerald L. K. Smith, who tried legal means to prevent Twentieth Century-Fox from showing the movie in Tulsa. He lost the case, but then sued Fox for $1,000,000. The case was thrown out of court in 1951.”

One of the definitions of “trivia” is “an inessential fact; trifle”. The fact that IMDB lists these items under its Trivia link does not make them trivia. The facts presented here either promoted creative growth in the film industry or made a significant statement about society. Some facts are not so trivial.


Sometimes When You Start To Go There You End Up Here

There are some weeks when I’m stumped as to what I should write about in this weekly trip to Mad Moviedom. Sometimes I’m in the middle of an interesting study that isn’t quite ready for publication. Sometimes an idea isn’t quite fully developed. Sometimes I have an idea but I find myself blocked as to how to present it. When I find myself in this position, one avenue always open to me is to create a quick study that might be halfway interesting.

This is where I found myself this week. I had ideas that weren’t ready to publish yet. So, my fallback study was going to be a quick study of which movie decades present the best “really like” viewing potential. Here are the results of my first pass at this:

“Really Like” Decades
Based on Number of “Really Like” Movies
As of April 6, 2017

Decade | Really Liked (my rating) | Didn’t Really Like (my rating) | Total | “Really Like” Probability
All | 1,108 | 888 | 1,996
2010’s | 232 | 117 | 349 | 60.9%
2000’s | 363 | 382 | 745 | 50.5%
1990’s | 175 | 75 | 250 | 62.0%
1980’s | 97 | 60 | 157 | 58.4%
1970’s | 56 | 49 | 105 | 54.5%
1960’s | 60 | 55 | 115 | 53.9%
1950’s | 51 | 78 | 129 | 46.6%
1940’s | 55 | 43 | 98 | 55.8%
1930’s | 19 | 29 | 48 | 46.9%

These results are mildly interesting. The 2010’s, 1990’s, 1980’s, and 1940’s are above-average decades for me. An unusually large number of the movies in the sample were released in the 2000’s. Remember that movies stay in my sample for 15 years from the year I last watched them. After 15 years they are removed from the sample and put into the pool of movies available to watch again. The good movies get watched again and the other movies are never seen again, hopefully. Movies last seen after 2002 have not yet gone through this process of separating out the “really like” movies to be watched again and permanently weeding out the movies I didn’t “really like”. The contrast between the 2000’s and the 2010’s is a good measure of the impact of undisciplined versus disciplined movie selection.

As I’ve pointed out in recent posts, I’ve made some changes to my algorithm. One of the big changes I’ve made is that I’ve replaced the number of movies that are “really like” movies with the number of ratings for the movies that are “really like” movies. After doing my decade study based on number of movies, I realized I should have used the number of ratings method to be consistent with my new methodology. Here are the results based on the new methodology:

“Really Like” Decades
Based on Number of “Really Like” Ratings
As of April 6, 2017

Decade | Really Liked (my rating) | Didn’t Really Like (my rating) | Total | “Really Like” Probability
All | 2,323,200,802 | 1,367,262,395 | 3,690,463,197
2010’s | 168,271,890 | 166,710,270 | 334,982,160 | 57.1%
2000’s | 1,097,605,373 | 888,938,968 | 1,986,544,341 | 56.6%
1990’s | 610,053,403 | 125,896,166 | 735,949,569 | 70.8%
1980’s | 249,296,289 | 111,352,418 | 360,648,707 | 65.3%
1970’s | 85,940,966 | 25,372,041 | 111,313,007 | 67.7%
1960’s | 57,485,708 | 15,856,076 | 73,341,784 | 68.0%
1950’s | 28,157,933 | 23,398,131 | 51,556,064 | 59.5%
1940’s | 17,003,848 | 5,220,590 | 22,224,438 | 67.4%
1930’s | 9,385,392 | 4,517,735 | 13,903,127 | 64.6%

While the results are different, the big reveal was that 63.0% of the ratings are for “really like” movies while only 55.5% of the movies are “really like” movies. It starkly reinforces the impact of the law of large numbers. Movie website indicators of “really like” movies are more reliable when the number of ratings driving those indicators is larger. The following table illustrates this better:

“Really Like” Decades
Based on Average Number of Ratings per Movie
As of April 6, 2017

Decade | Really Liked (my rating) | Didn’t Really Like (my rating) | All Movies | “Really Like” % Difference
All | 2,096,751.63 | 1,539,709.90 | 1,848,929.46 | 36.2%
2010’s | 725,309.87 | 1,424,874.10 | 959,834.27 | -49.1%
2000’s | 3,023,706.26 | 2,327,065.36 | 2,666,502.47 | 29.9%
1990’s | 3,486,019.45 | 1,678,615.55 | 2,943,798.28 | 107.7%
1980’s | 2,570,064.84 | 1,855,873.63 | 2,297,125.52 | 38.5%
1970’s | 1,534,660.11 | 517,796.76 | 1,060,123.88 | 196.4%
1960’s | 958,095.13 | 288,292.29 | 637,754.64 | 232.3%
1950’s | 552,116.33 | 299,976.04 | 399,659.41 | 84.1%
1940’s | 309,160.87 | 121,409.07 | 226,779.98 | 154.6%
1930’s | 493,968.00 | 155,783.97 | 289,648.48 | 217.1%

With the exception of the 2010’s, the average number of ratings per movie is larger for the “really like” movies. In fact, they are dramatically different for the decades prior to 2000. My educated guess is that the post-2000 years will end up fitting the pattern of the other decades once those years mature.
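If you want to see how the two counting methods differ, here is a small sketch of the 55.5% (by movie) versus 63.0% (by rating) comparison; the column names are assumptions:

```python
import pandas as pd

def really_like_share(df: pd.DataFrame) -> tuple:
    """Share of 'really like' movies counted two ways: once per movie, and
    weighted by each movie's number of ratings (assumed columns:
    really_liked as a boolean, num_ratings as an integer)."""
    by_movie = df["really_liked"].mean()
    by_rating = (df["num_ratings"] * df["really_liked"]).sum() / df["num_ratings"].sum()
    return by_movie, by_rating
```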

So what is the significance of this finding? It clearly suggests that waiting to decide whether to see a new movie until a sufficient number of ratings come in will produce a more reliable result. The unanswered question is how many ratings are enough.

The finding also reinforces the need to have something like Oscar performance to act as a second measure of quality for movies that will never have “enough” ratings for a reliable result.

Finally, the path from “there to here” is not always found on a map.
