Random Thoughts – Randocity!

Why Rotten Tomatoes is rotten

Posted in botch, business, california by commorancy on December 31, 2019

When you visit a site like Rotten Tomatoes to get information about a film, you need to ask yourself one very important question: “Is Rotten Tomatoes trustworthy?”

Rotten Tomatoes, as a movie review service, has come under fire many times for review bombing and manipulation. That is, Rotten Tomatoes seems to allow shills to join the service and review bomb a movie, raising or lowering its various scores by manipulating the Rotten Tomatoes review system. In the past, these claims couldn’t be verified. Today, they can.

As of a change in May 2019, Rotten Tomatoes has made it exceedingly easy for both movie studios and Rotten Tomatoes itself to game and manipulate the “Audience Score” ratings. Let’s explore.

Rotten Tomatoes as a Service

Originally, Rotten Tomatoes began its life as an independent movie review service where both critics and audience members could have a voice in what they think of a film. So long as Rotten Tomatoes remained independent and separate from movie studio influence and corruption, it could make that claim. Its reviews were fair and, for the most part, accurate.

Unfortunately, all good things must come to an end. In February of 2016, Fandango purchased Rotten Tomatoes. Let’s understand the ramifications of this purchase. Because Fandango is majority-owned by Comcast, with Warner Brothers also holding an ownership stake in Fandango, this firmly plants Rotten Tomatoes well out of the possibility of remaining neutral in film reviews. Keep in mind that Comcast also owns NBC as well as Universal Studios.

Fandango doesn’t own a stake in Disney as far as I can tell, but that won’t matter based on what I describe next about the Rotten Tomatoes review system.

Review Bombing

As stated in the opening, Rotten Tomatoes has come under fire over several notable recent movies whose scores appear to have been manipulated. Rotten Tomatoes has later debunked those claims by stating that its system was not manipulated, while offering no real proof of that fact. We simply have to take them at their word. One of these allegedly review bombed films was Star Wars: The Last Jedi… where the scores inexplicably dropped dramatically over about a one month period. Rotten Tomatoes apparently validated the drop as “legitimate”.

Unfortunately, Rotten Tomatoes has become a bit more untrustworthy as of late. Let’s understand why.

As of May of 2019, Rotten Tomatoes introduced a new feature known as “verified reviews”. For a review’s score to be counted towards the “Audience Score”, the reviewer must have purchased a ticket from a verifiable source. Unfortunately, the only source from which Rotten Tomatoes can verify ticket purchases is its parent company, Fandango. All other ticket purchases don’t count… thus, if you purchase your ticket from the theater’s box office, from MovieTickets.com or via any other means, your review or rating won’t count as “verified”. Only Fandango ticket purchases count towards “verified” reviews, thus altering the Audience Score. This change is BAD. Very, very bad.

Here’s what Rotten Tomatoes has to say from the linked article just above:

Rotten Tomatoes now features an Audience Score made up of ratings from users we’ve confirmed bought tickets to the movie – we’re calling them “Verified Ratings.” We’re also tagging written reviews from users we can confirm purchased tickets to a movie as “Verified” reviews.

While this might sound like a great idea in theory, it’s ripe for manipulation problems. Rotten Tomatoes also states that “IF” it can confirm “other” reviews as verified ticket purchases, it will mark them as “verified”. Yeah, but that’s a manual process and is impossibly difficult to do. We can pretty much forget that this option even exists. Let’s list the problems coming out of this change:

  1. Fandango only sells a small percentage of overall tickets for a film. If the “Audience Score” is calculated solely from reviews tied to Fandango ticket sales, then it is a horribly inaccurate metric to rely on.
  2. Fandango CAN handpick “other” non-Fandango ticketed reviews to be included. That’s not likely to happen often, but it also means they can pick their favorite (and most favorable) reviews to include. This opens Rotten Tomatoes up to payola or “pay for inclusion”.
  3. By specifying exactly how this process works, this change opens the Rotten Tomatoes system to being gamed and manipulated, even by Rotten Tomatoes staff themselves. Movie studios can also ask their employees, families and friends to buy their tickets exclusively from Fandango and then write “glowing, positive reviews” or submit “high ratings”… or face job consequences. Studios might even be willing to pay for these positive reviews.
  4. Studios can even hire outside people (sometimes known as shills) to see a movie by buying tickets from Fandango and then rate the film highly… because they were paid to do so. As I said, manipulation.

Trust in Reviews

It’s clear that while Rotten Tomatoes is trying to fix its ills, its approach is incredibly naive. It gets worse. Not only is Rotten Tomatoes incredibly naive, the company is also not at all tech savvy. Its system is so ripe for being gamed that the “Audience Score” is a nearly pointless metric. For example, 38,000 verified ratings out of the millions of people who watched the film? Yeah, if that “Audience Score” number isn’t now skewed, I don’t know what is.

Case in point: the “Audience Score” for The Rise of Skywalker is 86%. The difficulty with this number is that the vast majority of the reviews I’ve seen from people on chat forums don’t rate the film anywhere close to 86%. What that means is that the new way Rotten Tomatoes calculates scores is effectively a form of manipulation BY Rotten Tomatoes itself.

To have the most fair and accurate metric, ALL reviews must be counted and included in all ratings. You can’t just toss out the vast majority of reviews simply because you can’t verify their authors as holding a ticket. Even then, holding a ticket doesn’t mean someone has actually watched the film. Buying a ticket and actually attending a showing of the film are two entirely separate things.

While you may have verified a ticket purchase, did you verify that the person actually watched the film? Are you excluding reviews from brand new Rotten Tomatoes accounts from the Audience Score? How trustworthy can someone be if this is their first and only review on Rotten Tomatoes? What about people who downloaded the app just to buy a ticket for that film? Simply buying a ticket from Fandango doesn’t make the rating or the reviewer trustworthy.

Rethinking Rotten Tomatoes

Someone at Rotten Tomatoes needs to drastically reconsider this change, and they need to do it fast. If Rotten Tomatoes wasn’t guilty of manipulating review scores before this late spring 2019 change, it is now. Rotten Tomatoes is definitely guilty of manipulating the “Audience Score” by the sheer lack of reviews counted under this “verified review” change. Nothing can be considered valid when the sampling size is so small as to be useless. Verifying a ticket holder also doesn’t validate a review author’s sincerity, intent or, indeed, legitimacy. It also severely limits who can be counted in the ratings, thus skewing the usefulness of the “Audience Score”.

In fact, only by looking at past reviews can someone determine if a review author has trustworthy opinions.

Worse, Fandango holds a very small portion of all ticket sales made for theaters (see below). By tabulating the score solely (or primarily) from people who bought tickets through Fandango, this change effectively discards well over half of the written reviews on Rotten Tomatoes. Worse still, because of the way the metric is calculated, nefarious entities can game the system to their own benefit and manipulate the score quickly.

This has a chilling effect on Rotten Tomatoes. The staff at Rotten Tomatoes needs to roll back this change pronto. For Rotten Tomatoes to return to being a trustworthy, neutral entity in the art of movie reviews, it needs a far better way to determine the trustworthiness of its reviews and of its reviewers. Trust comes from well written, consistent reviews. Ratings come from trusted sources. Trust is earned. The sole act of buying a ticket from Fandango doesn’t earn trust. It earns bankroll.

Why, then, are ticket buyers from Fandango any more trustworthy than people purchasing tickets elsewhere? They aren’t… and here’s where Rotten Tomatoes has failed. Rotten Tomatoes incorrectly assumes that “verifying” the sale of a ticket via Fandango alone somehow makes a review or rating more trustworthy. It doesn’t.

It gets worse: while Fandango represents at least 70% of online ticket sales, it STILL accounts for only a tiny fraction of overall ticket sales, since online ticketing as a whole was just 5-6% of the box office (as of 2012).

“Online ticketing still just represents five to six percent of the box office, so there’s tremendous potential for growth right here.” –TheWrap in 2012

Granted, this TheWrap article is from 2012. Even if Fandango had managed to grab 50% of overall ticket sales in the 7 years since that article, that would still leave the remaining 50% of ticket holders’ voices untallied in Rotten Tomatoes’ current “Audience Score” metric. I seriously doubt that Fandango has managed to achieve anywhere close to 50% of total movie ticket sales. If it held at most 5-6% of overall sales in 2012, then 7 years of growth might put Fandango at 10-15% of ticket sales at most by 2019. That still excludes roughly 85% of all ticket holders’ reviews from Rotten Tomatoes’ “Audience Score” metric. In fact, it behooves Fandango to keep this overall ticket sales share as low as possible so as to influence its “Audience Score” number with more ease and precision.

To put this in a little more perspective, a movie theater might have 200 seats. 10% of that is 20. That means that for every 200 people who might fill a theater, roughly 20 have bought their ticket from Fandango and are eligible for their review to count towards the “Audience Score”. Considering that only a small percentage of those 20 will actually take the time to write a review, that could mean that out of every 200 people who’ve seen the film legitimately, between 1 and 5 might be counted towards the Audience Score. Scaling that up, for every 1 million people who see a blockbuster film, somewhere between 5,000 and 25,000 reviews may contribute to the Rotten Tomatoes “Audience Score”… even if there are hundreds of thousands of reviews on the site.
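To make this back-of-the-envelope arithmetic concrete, here is a small Python sketch of the estimate above. The figures (a roughly 10% Fandango share and a 5–25% review rate among Fandango buyers) are this article’s assumptions, not measured data:

  # Back-of-the-envelope estimate of how many "verified" ratings could feed
  # the Audience Score. The share and review-rate figures are assumptions.

  fandango_share = 0.10                              # assumed: ~10% of tickets sold via Fandango by 2019
  review_rate_low, review_rate_high = 0.05, 0.25     # assumed: 5%-25% of Fandango buyers rate the film

  moviegoers = 1_000_000                             # people who actually saw the film

  fandango_buyers = moviegoers * fandango_share
  verified_low = fandango_buyers * review_rate_low
  verified_high = fandango_buyers * review_rate_high

  print(f"Fandango ticket buyers: {fandango_buyers:,.0f}")                           # 100,000
  print(f"Ratings eligible to count: {verified_low:,.0f} to {verified_high:,.0f}")   # 5,000 to 25,000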

The fewer the reviews contributing to that score, the easier it is to manipulate that score by adding just a handful of reviews to the mix… and that’s where Rotten Tomatoes’ “handpicked reviews” come into play (and with them, the potential for payola). Rotten Tomatoes can then handpick positive reviews for inclusion. The problem is that while Rotten Tomatoes understands all of this, so do the studios. Which means that studios can, like I said above, “invite” employees to buy tickets via Fandango before writing a review on Rotten Tomatoes. They can even contact Rotten Tomatoes and pay for “special treatment”. This situation can allow movie studios to unduly influence the “Audience Score” for a current release… and it is compounded because so few reviews count towards creating the “Audience Score”.

Where Rotten Tomatoes likely counted every review towards this score before, the new “verified” methodology greatly drops the number of reviews which contribute to the tally. This smaller pool of reviews means it is now much easier to manipulate the Audience Score, either by gaming the system or by Rotten Tomatoes handpicking which reviews to include.
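To see why the size of the counted pool matters, here is a hypothetical Python sketch of how far a fixed batch of coordinated five-star ratings moves an average score. The pool sizes and averages below are invented purely for illustration:

  # Hypothetical illustration: the same 500 coordinated five-star ratings move a
  # small rating pool's average far more than a large one's. Figures are invented.

  def shifted_average(pool_size, pool_avg, shill_count, shill_score=5.0):
      """Average after adding shill_count ratings of shill_score to an existing pool."""
      total = pool_size * pool_avg + shill_count * shill_score
      return total / (pool_size + shill_count)

  print(round(shifted_average(5_000, 3.0, 500), 2))      # small "verified" pool: 3.0 -> 3.18
  print(round(shifted_average(500_000, 3.0, 500), 3))    # all reviews counted:   3.0 -> 3.002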

Fading Trust

While Rotten Tomatoes was once a trustworthy site for movie reviews, it has greatly reduced its trustworthiness by instituting such a backwards and easily manipulated system.

Whenever you visit a site like Rotten Tomatoes, you must always question everything you see. When you see something like an “Audience Score”, you must question how that number is calculated and what is included in that number. Rotten Tomatoes isn’t forthcoming.

In the case of Rotten Tomatoes, they have drastically reduced the number of reviews included in that metric because of their “verified purchase” mechanism. Unfortunately, the introduction of that mechanism at once destroys Rotten Tomatoes’ trustworthiness and trashes the concept of the site.

It Gets Worse

What’s even more of a problem is the following two images:

[Screenshots, December 23, 2019: Rotten Tomatoes “Verified Ratings” and “Verified Audience” review counts]

From the two screenshots above, Rotten Tomatoes claims 37,956 “Verified Ratings”, yet only 3,342 “Verified Audience” reviews. That’s a huge discrepancy. Where are those other 34,614 “Verified” ratings? The Audience Score shouldn’t be calculated solely from a simplistic “rate this movie” tap on a phone; it should be calculated in combination with an author actually writing a review. And then there are 5,240 audience reviews that didn’t contribute to any score at all on Rotten Tomatoes. Those reviews are just “there”, taking up space.

Single-number ratings are pointless without at least some accompanying text to validate them. Worse, we know that these “Verified Ratings” likely have little to do with the “Verified Audience” reviews shown above. Even if those 3,342 audience reviews are actually calculated into the “Verified Ratings” (they probably aren’t), that’s still such a small number compared with the rest of the “Verified Ratings” that the score can easily be skewed by people who may not have even attended the film.

You can only determine whether someone has actually attended a film by asking them to WRITE even the smallest of reviews. Simply tapping “five stars” in the app without even caring is pointless. It’s possible the ratings weren’t even tabulated correctly via the app. The app itself may even submit star data after a period of time without the owner’s knowledge or consent. The app can even word its rating question in such a way as to manipulate the response in a positive direction. Can we say, “Skewed”?

None of this leads to trust. Without knowing exactly how that data was collected, the method(s) used and how it is presented on the site and in the app, how can you trust any of it? It’s easy to check professional critic reviews because Rotten Tomatoes must cite back to the source of the review. However, with audience metrics, it’s all nebulous and easily falsified… particularly when Rotten Tomatoes is intentionally obtuse and opaque about exactly how it collects this data and how it presents it.

Even still, with over one million people having seen The Rise of Skywalker, Rotten Tomatoes has counted just under 38,000 verified ratings. Something doesn’t add up. Yeah, Rotten Tomatoes is so very trustworthy (yeah right), particularly after this “verified” change. Maybe it’s time for those Rotten Tomatoes to finally be tossed into the garbage?


Rant Time: Google doesn’t understand COPPA

Posted in botch, business, california, rant by commorancy on November 24, 2019

We all know what Google is, but what is COPPA? COPPA stands for the Children’s Online Privacy Protection Act and is legislation designed to protect children indirectly by protecting the personal data they give to web site operators. YouTube has recently made a platform change allegedly driven by COPPA, but the change is entirely misguided. It also shows that Google doesn’t fundamentally understand the COPPA legislation. Let’s explore.

COPPA — What it isn’t

The COPPA legislation is intended to protect how and when a child’s personal data may be collected, stored, used and processed by web site operators. It has very specific verbiage describing how and when such data can be collected and used. It is, by its very nature, a data protection and privacy act. It protects the data itself… and, by extension, protecting that data is meant to protect the child. This Act isn’t intended to protect the child directly, and it is misguided to assume that it does. COPPA protects the personal, private data of children.

By the above, that means that the child is incidentally protected by how their collected data can (or cannot) be used. For the purposes of COPPA, a “child” is defined to be any person under the age of 13. Let’s look at a small portion of the body of this text.

General requirements. It shall be unlawful for any operator of a Web site or online service directed to children, or any operator that has actual knowledge that it is collecting or maintaining personal information from a child, to collect personal information from a child in a manner that violates the regulations prescribed under this part. Generally, under this part, an operator must:

(a) Provide notice on the Web site or online service of what information it collects from children, how it uses such information, and its disclosure practices for such information (§312.4(b));

(b) Obtain verifiable parental consent prior to any collection, use, and/or disclosure of personal information from children (§312.5);

(c) Provide a reasonable means for a parent to review the personal information collected from a child and to refuse to permit its further use or maintenance (§312.6);

(d) Not condition a child’s participation in a game, the offering of a prize, or another activity on the child disclosing more personal information than is reasonably necessary to participate in such activity (§312.7); and

(e) Establish and maintain reasonable procedures to protect the confidentiality, security, and integrity of personal information collected from children (§312.8).

This pretty much sums up the tone of what follows in the body text of this legislation. Essentially, it is all about “data collection” and what you (as a web site operator) must do if you intend to collect data from someone under the age of 13… and, more specifically, what data you can and cannot collect.
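As a rough, hypothetical illustration of what these requirements demand of a site operator (and emphatically not how Google actually implements anything), a data-collection gate keyed to the parental-consent requirement in §312.5 might look like this Python sketch. The field names and logic are illustrative assumptions only:

  # Hypothetical, simplified sketch of an operator-side gate reflecting the
  # COPPA requirements quoted above. Names and logic are illustrative only.

  from dataclasses import dataclass

  COPPA_CHILD_AGE = 13   # COPPA defines a "child" as any person under 13

  @dataclass
  class Account:
      age: int
      verifiable_parental_consent: bool = False

  def may_collect_personal_info(account: Account) -> bool:
      """Personal information from a child may be collected only after
      verifiable parental consent is obtained (per 312.5)."""
      if account.age >= COPPA_CHILD_AGE:
          return True   # not a "child" under COPPA
      return account.verifiable_parental_consent

  print(may_collect_personal_info(Account(age=12)))                                     # False
  print(may_collect_personal_info(Account(age=12, verifiable_parental_consent=True)))   # True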

YouTube and Google’s Misunderstanding of COPPA

YouTube’s parent company is Google. That means I may essentially use “Google” and “YouTube” interchangeably because both are one and the same company. With that said, let’s understand how Google / YouTube fundamentally does not understand the COPPA legislation.

Google has recently rolled out a new feature to its YouTube content creators: a checkbox available both as a channel-wide setting and as an individual video setting. This setting flags whether the video is targeted towards children (see the image below for this setting’s details). Let’s understand Google’s misunderstanding of COPPA.

COPPA is a data protection act. It is not a child protection act. Sure, it incidentally protects children because of what is allowed to be collected, stored and processed, but make no mistake, it protects collected data directly, not children. With that said, checking a box indicating whether a video is made for children has nothing whatever to do with data collection. Let’s understand why.

Google has, many years ago in fact, already implemented a system to prevent “children” (as defined by COPPA) from signing up for and using Google’s platforms. What that means is that when someone signs up for a Google account, that person is asked questions to ascertain their age. If that age is identified as under 13, the account is classified by Google as being in use by a “child”. Once Google identifies a child, it is then obligated to uphold ALL laws governed by COPPA (and other applicable child privacy laws)… that includes all data collection practices required by COPPA and other applicable laws. It can then further apply Google-specific child protections to that account (i.e. to prevent the child from viewing inappropriate content on YouTube). Google would have needed to uphold these data privacy laws since the year 2000, when COPPA took effect. If Google has failed to protect a child’s collected data or failed to uphold COPPA’s other provisions, then that’s on Google. It is also a situation firmly between Google and the FTC… the governmental body tasked with enforcing the COPPA legislation. Google alone collects the data. Therefore, it is exclusively on Google if that data is used or collected in inappropriate ways, counter to COPPA’s requirements.

YouTube’s newest “not appropriate for children” flag

As of November 2019, YouTube has implemented a new flag for YouTube content creators. The channel-wide setting looks like so:

[Screenshot, November 24, 2019: YouTube Studio channel-wide “made for kids” audience setting]

This setting, for all intents and purposes, isn’t related to COPPA. COPPA doesn’t care whether video content is targeted towards children. COPPA cares about how data is collected from children and how that data is then used by web sites. COPPA is, as I said above, all about data collection practices, not about whether content is targeted towards children.

Let’s understand that in the visual entertainment arena, there are already ratings systems which apply, such as the ESRB ratings system founded in 1994, which sets ratings for video games depending on the types of content contained within. For TV shows, there are the TV Parental Guidelines, which began in 1996 and were developed jointly by the US Congress, the TV industry and the FCC. These guidelines rate TV shows as TV-Y, TV-14, TV-MA and so on depending, again, on the content within. This was mandated in 1997 by the US Government due to its stranglehold on TV broadcast licenses. For theatrical films, there’s the MPAA’s movie ratings system, which began in 1968. So, it’s not as if there aren’t already effective content ratings systems available. These voluntary systems have been in place for many years.

For YouTube, marking your channel or video content as “made for kids” has nothing whatever to do with COPPA legislated data collection practices.

YouTube Creators

Here is exactly where we see Google and YouTube’s fundamental misunderstanding of COPPA. COPPA is about the protection and collection of data from children. Google collects, stores and uses this and all other data it gathers. YouTube creators have very, very limited access to any of this Google-collected data. YouTube creators have no hand in its collection or its use. Google controls all of the data collection on YouTube. With the exception of comments and a channel’s subscriber list, the data Google supplies to creators is almost exclusively aggregate, unpersonalized statistical data. Even then, this data can be inaccurate depending on what the account holder stated when they signed up. Still, the limited personal subscriber data Google does supply to content creators consists of the subscriber’s ID only. Google offers its content creators no access to deeper personal data, not even the ages of their subscribers.

Further, Google (and pretty much every other web site) relies on truthfulness when people sign up for services. Google does not in any way verify that the information given during the signup process is accurate or truthful. Indeed, Google doesn’t even verify the identity of the person using the account or require the use of real names. The only time Google does ANY level of identity verification is when using Google Wallet. Even then, it’s only as a result of needing identity verification due to possible credit card fraud issues. Google Wallet is a pointless service that many other payment systems do better, such as Apple Pay, Amazon Checkout and, yes, PayPal. I digress.

With that said, Google is solely responsible for all data collection practices associated with YouTube (and its other properties), including storing, processing and managing that data. YouTube creators have no control over what YouTube (or Google) chooses to collect, store or disseminate. Indeed, YouTube creators have no control over YouTube’s data collection or storage practices whatsoever.

This new alleged “COPPA mechanism” that YouTube has implemented has nothing whatever to do with data collection practices and everything to do with content which might be targeted towards “children”. Right now, this mechanism is pretty much binary (a very limited system): the channel either does or doesn’t target content towards children (either as a whole or video by video). It’s entirely unclear what happens on YouTube when you choose one or the other, though some creators have had seemingly bad luck with their content, which has been manually reviewed by YouTube staff and misclassified as “for children” when the content clearly is not. These manual overrides have even run counter to the global channel setting, which had been set to “No, set this channel as not made for kids.”

Clearly, this new mechanism has nothing to do with data collection and everything to do with classifying which content is suitable for children and which isn’t. This defines a …

Ratings System

Ratings systems in entertainment content are nothing new. TV has had a content ratings system since the mid 90s. Movies have had ratings systems since the late 60s. Video games have had them since the mid 90s. COPPA, on the other hand, has nothing to do with ratings or content. It is legislation that protects children by protecting their data. It’s pretty straightforward what COPPA covers, but one thing it does not cover is whether video content is appropriate to be viewed by children. Indeed, COPPA isn’t a ratings system. It is child data protection legislation.

How YouTube got this law’s interpretation so entirely wrong is anyone’s guess. I can’t even fathom how Google could have been led this far astray. Perhaps Google’s very own lawyers are simply inept and not at all versed in COPPA? I have no idea… but whatever led YouTube’s developers to think the above mechanism in any way relates to COPPA is entirely wrong thinking. Nowhere does COPPA legislate the appropriateness of YouTube video content. Categorizing content is entirely up to a ratings system to handle.

Indeed, YouTube is treading on very thin ice with the FTC. Not only did they interpret the COPPA legislation completely wrong, they have implemented “a fix” even more incorrectly. What Google and YouTube have done is shoot themselves in the foot… not once, but twice. The second shot is that Google has effectively admitted it doesn’t even have a functional, working ratings system. Indeed, it doesn’t… and now everyone knows it.

With the addition of this “new” mechanism, Google has now additionally admitted that children under the age of 13 use YouTube. With this one mechanism, Google has admitted to many things about children using its platform… which means YouTube and Google are both now in the hot seat with regards to COPPA. They must now completely ensure that YouTube (and Google by extension) is fully complying with the letter of COPPA’s verbiage when collecting children’s data.

YouTube Creators Part II

YouTube creators have no control over what Google collects from its users, that’s crystal clear. YouTube creators also don’t have access to view most of this data or access to modify anything related to this data collection system. Only Google has that level of access. Because Google controls its own data collection practices, it is on Google to protect any personal information it may have received from children using its platform.

That also means that content creators should be entirely immune from prosecution over such data collection practices… after all, the creators don’t own or control Google’s data collection systems.

This new YouTube mechanism seems to imply that creators have some level of liability and/or culpability for Google’s collection practices, when creators simply and clearly do not. Even the FTC made a striking statement that it may try to “go after” content creators. I’m not even sure how that’s possible under COPPA. Content creators don’t collect, store or manage data about children, regardless of the content they create. The only thing content creators control is the appropriateness of their content for children… and that has nothing to do with COPPA and everything to do with a ratings system… a system that Google doesn’t even have in place within YouTube.

Content creators, however, can voluntarily label their content as TV-MA or whatever they deem appropriate based on the TV Parental Guidelines. After all, YouTube is more like TV than it is like a video game. Therefore, YouTube should offer and have in place the same ratings system as is listed in the TV Parental Guidelines. This recent COPPA-attributed change is actually YouTube’s effort at enacting a content ratings system, albeit an extremely poor attempt at one. As I said, creators can only specify the age appropriateness of the content that they create. YouTube is simply the platform where it is shown.

FTC going after YouTube Creators?

Google controls its data collection systems, not its content creators (though YouTube does hold leverage over whether content is or remains monetized). What that means is that it makes absolutely no sense for the FTC to legally go after content creators based on violations of COPPA. There may be other legislation they can lean on, but COPPA isn’t it. COPPA also isn’t intended to be a “catch all” piece of legislation to police children’s behavior on the Internet. It is intended to protect how data is collected from and used about children under 13 years of age… that’s it. COPPA isn’t intended to be used as a “ratings system” for appropriateness by video sharing platforms like YouTube.

I can’t see even one judge accepting, let alone a prosecutor pursuing, such a clear-cut case of legal abuse of the justice system. Going after Google for COPPA violations? Sure. Google collected and stored that data. Going after the YouTube content creators? No, I don’t think so. They created a video and uploaded it, but that had nothing whatever to do with how Google controls, manages or collects data from children.

If the US Federal Government wants to create a law to manage the appropriateness of Internet content, then it needs to draft it and pass it. COPPA isn’t intended for that purpose. Voluntary ratings systems have been in place for years, including for motion pictures, TV and video games. So why should YouTube be immune from such ratings systems? Indeed, it’s time YouTube was forced to implement a proper ratings system instead of this haphazard binary system under the false guise of COPPA.

Content Creator Advice

If you are a YouTube content creator (or create on any other online platform), you should take advantage of the thumbnail and describe the audience your content targets. The easiest way to do this is to use the same ratings implemented by the TV Parental Guidelines… such as TV-Y, TV-14 and TV-MA. Placing this information firmly on the thumbnail and also placing it onto the video at the beginning explicitly states which age group and audience your content targets. By voluntarily rating not only the thumbnail but also the content itself in the first 5 minutes of the video, your video cannot be misconstrued as being for any other group or audience. This means that even if your video is not intended for children, placing the TV Parental Guidelines rating literally onto the video states that fact in plain sight.

If a YouTube employee manually reclassifies your video as being “for children” even when it isn’t, labeling your content in the video’s opening as TV-MA explicitly states that the program is not suitable for children. You might even create an additional disclaimer as some TV programs do stating:

This content is not suitable for all audiences. Some content may be considered disturbing or controversial. Viewer or parental discretion is advised.

Labeling your video means that even the FTC can’t argue that your video somehow inappropriately targeted children… even though this new YouTube system has nothing to do with COPPA. Be cautious, use common sense and use best practices when creating and uploading videos to YouTube. YouTube isn’t there to protect you, the creator. The site is there to protect YouTube and Google. In this case, this new creator feature is entirely misguided as a COPPA helper, when it is clearly intended to be a ratings system.

Before you go…

One last thing… Google controls everything about the YouTube platform including the “recommended” lists of videos. If, for whatever reason, Google chooses to promote a specific video towards an unintended audience, the YouTube creator has no control over this fact. In point of fact, the content creator has almost no control over any promotion or placement of their video within YouTube. The only exception is if YouTube allows for paid promotion of video content (and they probably do). After all, YouTube is in it for the $$$. If you’re willing to throw some of your money at Google, I’m quite sure they’d be willing to help you out. Short of paying Google for video placement, however, all non-paid placement is entirely at the sole discretion of Google. The YouTube creator has no control over their video’s placement within “recommended” lists or anywhere else on YouTube.

