Rant Time: Flickr is running out of time & money?
I received a rather questionable email about Flickr allegedly from Don MacAskill, CEO of SmugMug.
Unfortunately, his email is also wrapped in the guise of email marketing and arrived through the same marketing channel as all other email marketing from Flickr.
Don, if you want us to take this situation seriously, you shouldn’t use email marketing platforms to do it. These emails need to come personally from you using a SmugMug or Flickr address. They also shouldn’t contain several email marketing links. An email from the CEO should contain only ONE link and it should be at the very bottom of the email.
The information contained in this letter is not a surprise in general, but the way it arrived and the tone it takes is a surprise coming from a CEO, particularly when it takes the format of generic email marketing. Let’s explore.
Flickr Pro
I will place the letter at the bottom so you can read it in full. The gist of the letter is, “We’re running out of money, so sign up right away!”
I want to take the time to discuss the above “running out of money” point. Here’s an excerpt from Don’s email:
We didn’t buy Flickr because we thought it was a cash cow. Unlike platforms like Facebook, we also didn’t buy it to invade your privacy and sell your data. We bought it because we love photographers, we love photography, and we believe Flickr deserves not only to live on but thrive. We think the world agrees; and we think the Flickr community does, too. But we cannot continue to operate it at a loss as we’ve been doing.
Let’s start by saying, why on Earth would I ever sign up for a money losing service that is in danger of closing? Seriously, Flickr? Are you mad? Don’t give me assurances that *I* can save your business with my single conversion. It’s going to take MANY someones to keep Flickr afloat if it’s running out of money. Worse, sending this email to former Pro members trying to get us to convert again is a losing proposition. Send it to someone who cares, assuming there is anyone like that.
A single conversion isn’t likely to do a damned thing to stem the tide of your money hemorrhaging, Flickr. Are you insane to send out a letter like this in this generic email marketing way? If anything, a letter like this may see even MORE of your existing members run for the hills by cancelling their memberships, instead of trying to save Flickr from certain doom. But, let’s ignore this letter’s asinine message and focus on why I decided to write this article.
Flickr is Dead to Me
I had an email exchange in November of 2018 with Flickr’s team. I made my stance exceedingly clear about exactly why I cancelled my Pro membership and why their inexplicable price increase is pointless. And yes, it is a rant. This exchange goes as follows:
Susan from Flickr states:
When we re-introduced the annual Flickr Pro at $49.99 more than 3 years ago, we promised all grandfathered Pros (including the bi-annual and 3-month plans) a 2-year protected price period. We have kept this promise, but in order to continue providing our best service to all of our customers, we are now updating the pricing for grandfathered Pros. We started this process on August 16, 2018.
With this being the case, bi-annual Pros pay $99.98 every 2 years, annual Pros pay $49.99 every year, and 3-month Pros pay $17.97 every 3 months. Notifications including the price increase have been sent out to our users starting from August 16.
I then write back the following rant:
Hi Susan,
Yes, and that means you’ve had more than ample time to make that $50 a year worth it for Pro subscribers. You haven’t and you’ve failed. It’s still the same Flickr it was when I was paying $22.48 a year. Why should I now pay over double the price for no added benefits? Now that SmugMug has bought it, here we are now being forced to pay the $50 a year toll when there’s nothing new that’s worth paying $50 for. Pro users have been given ZERO tools to sell our photos on the platform as stock photos. Being given these tools is what ‘Pro’ means, Susan. We additionally can’t in any way monetize our content to recoup the cost of our Pro membership fees. Worse, you’re displaying ads over the top our photos and we’re not seeing a dime from that revenue.
Again, what have you given that makes $50 a year worth it? You’re really expecting us to PAY you $50 a year to show ads to free users over the top of our content? No! I was barely willing to do that with $22.48 a year. Of course, this will all fall on deaf ears because these words mean nothing to you. It’s your management team pushing stupid efforts that don’t make sense in a world where Flickr is practically obsolete. Well, I’m done with using a 14 year old decrepit platform that has degraded rather than improved. Sorry Susan, I’ve removed over 2500 photos, cancelled my Pro membership and will move back to the free tier. If SmugMug ever comes to its senses and actually produces a Pro platform worth using (i.e., actually offers monetization tools or even a storefront), I might consider paying. As it is now, Flickr is an antiquated 14 year old platform firmly rooted in a 2004 world. Wake up, it’s 2018! The iStockphotos of the world are overtaking you and offering better Pro tools.
Bye.
Flickr and SmugMug
When Flickr was purchased by SmugMug, I wasn’t expecting much from Flickr. But, I also didn’t expect Flickr to double its prices while also providing nothing in return. The platform has literally added nothing to improve the “Pro” aspect of its service. You’re simply paying more for the privilege of having ads placed over the top of your photos. Though, what SmugMug might claim you’re paying for is entirely the privilege of the tiniest bit more storage space to store a few more photos.
Back when storage costs were immense, that pricing might have made sense. In an age where storage costs are impossibly low, that extra per-month pricing is way out of line. SmugMug and Flickr should have spent their time adding actual “Pro” tools so that photographers can, you know, make money from their photos by selling them, leasing them, producing framed physical wall hangings, mugs, t-shirts, mouse pads, and so on. Let us monetize our one and only product… you know, like Deviant Art does. Instead, SmugMug has decided to charge more, then place ads over the top of our photos and not provide even a fraction of what Deviant Art does for free.
As a photographer, why should I spend $50 a year on Flickr only to gain nothing when I can move my photos to Deviant Art and pay nothing a year AND get many more tools which help me monetize my images? I can also submit them to stock photo services and make money off of leasing them to publications, something still not possible at Flickr.
Don’s plea is completely disingenuous. You can’t call something “Pro” when there’s nothing professional about it. But then, Don feels compelled to call out where they have actually hosted Flickr and accidentally explains why Flickr is losing money.
We moved the platform and every photo to Amazon Web Services (AWS), the industry leader in cloud computing, and modernized its technology along the way.
What modernization? Hosting a service on AWS doesn’t “modernize” anything. It’s a hosting platform. Worse, this hosting decision is entirely the cause of SmugMug’s central money woes with Flickr. AWS is THE most expensive cloud hosting platform available. There is nothing whatsoever cheap about using AWS’s storage and compute platforms. Yes, AWS works well, but the bill at the end of the month sucks. To keep the lights on when hosting at AWS, plan to spend a mint.
If SmugMug wanted to save on the costs of hosting Flickr, they should have migrated it to a much lower-cost hosting platform instead of sending empty marketing promises asking people to “help save the platform”. Changing hosting platforms might require more hands-on effort for SmugMug’s technical staff, but SmugMug can likely halve the costs of hosting this platform by moving it to lower-cost hosting providers… providers that will work just as well as AWS.
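To put some rough numbers behind that claim, here is a back-of-envelope sketch. The fleet size and per-gigabyte prices below are assumptions for illustration only (they ignore egress, request fees, compute and any negotiated discounts), so treat the output as directional rather than as Flickr’s actual bill.

```python
# Back-of-envelope storage cost comparison. All numbers are assumptions
# for illustration only: a hypothetical fleet size and approximate list
# prices, ignoring egress, request fees, compute and negotiated discounts.
PETABYTES_STORED = 10          # hypothetical amount of photo storage
GB_PER_PB = 1_000_000

aws_s3_per_gb_month = 0.023    # assumed S3 Standard list price per GB-month
budget_per_gb_month = 0.005    # assumed low-cost object-storage price per GB-month

def monthly_cost(price_per_gb: float) -> float:
    """Monthly storage bill for the assumed fleet at a given per-GB price."""
    return PETABYTES_STORED * GB_PER_PB * price_per_gb

aws = monthly_cost(aws_s3_per_gb_month)
budget = monthly_cost(budget_per_gb_month)
print(f"AWS-style storage:  ${aws:,.0f}/month")
print(f"Low-cost provider:  ${budget:,.0f}/month")
print(f"Rough savings:      {100 * (1 - budget / aws):.0f}%")
```

Even with assumptions that are generous to AWS, commodity object-storage pricing alone suggests the “halve the costs” estimate is at least plausible.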
Trying to urge past subscribers to re-up into Pro again simply to “save its AWS hosting decision”? Not gonna happen. Those of us who’ve gotten no added benefit by paying money to Flickr in the past are not eager to return. Either give us a legitimate reason to pay money to you (add a storefront or monetization tools) or spend your time moving Flickr to a lower-cost hosting service, one where Flickr can make money.
Don, why not use your supposed CEO prowess to have your team come up with lower-cost solutions? I just did. It’s just a thought. You shouldn’t rely on such tactless and generic email marketing practices to solve the ills of Flickr and SmugMug. You bought it, you have to live with it. If that means Flickr must shut down because you can’t figure out a way to save it, then so be it.
Below is Don MacAskill’s email in all of its unnecessary email marketing glory (links redacted):
Dear friends,

Flickr—the world’s most-beloved, money-losing business—needs your help. Two years ago, Flickr was losing tens of millions of dollars a year. Our company, SmugMug, stepped in to rescue it from being shut down and to save tens of billions of your precious photos from being erased. Why? We’ve spent 17 years lovingly building our company into a thriving, family-owned and -operated business that cares deeply about photographers. SmugMug has always been the place for photographers to showcase their photography, and we’ve long admired how Flickr has been the community where they connect with each other. We couldn’t stand by and watch Flickr vanish. So we took a big risk, stepped in, and saved Flickr. Together, we created the world’s largest photographer-focused community: a place where photographers can stand out and fit in.

We’ve been hard at work improving Flickr. We hired an excellent, large staff of Support Heroes who now deliver support with an average customer satisfaction rating of above 90%. We got rid of Yahoo’s login. We moved the platform and every photo to Amazon Web Services (AWS), the industry leader in cloud computing, and modernized its technology along the way. As a result, pages are already 20% faster and photos load 30% more quickly. Platform outages, including Pandas, are way down. Flickr continues to get faster and more stable, and important new features are being built once again. Our work is never done, but we’ve made tremendous progress.

Now Flickr needs your help. It’s still losing money. Hundreds of thousands of loyal Flickr members stepped up and joined Flickr Pro, for which we are eternally grateful. It’s losing a lot less money than it was. But it’s not yet making enough. We need more Flickr Pro members if we want to keep the Flickr dream alive.

We didn’t buy Flickr because we thought it was a cash cow. Unlike platforms like Facebook, we also didn’t buy it to invade your privacy and sell your data. We bought it because we love photographers, we love photography, and we believe Flickr deserves not only to live on but thrive. We think the world agrees; and we think the Flickr community does, too. But we cannot continue to operate it at a loss as we’ve been doing.

Flickr is the world’s largest photographer-focused community. It’s the world’s best way to find great photography and connect with amazing photographers. Flickr hosts some of the world’s most iconic, most priceless photos, freely available to the entire world. This community is home to more than 100 million accounts and tens of billions of photos. It serves billions of photos every single day. It’s huge. It’s a priceless treasure for the whole world. And it costs money to operate. Lots of money.

Flickr is not a charity, and we’re not asking you for a donation. Flickr is the best value in photo sharing anywhere in the world. Flickr Pro members get ad-free browsing for themselves and their visitors, advanced stats, unlimited full-quality storage for all their photos, plus premium features and access to the world’s largest photographer-focused community for less than $5 per month. You likely pay services such as Netflix and Spotify at least $9 per month. I love services like these, and I’m a happy paying customer, but they don’t keep your priceless photos safe and let you share them with the most important people in your world. Flickr does, and a Flickr Pro membership costs less than $1 per week.

Please, help us make Flickr thrive. Help us ensure it has a bright future.
Every Flickr Pro subscription goes directly to keeping Flickr alive and creating great new experiences for photographers like you. We are building lots of great things for the Flickr community, but we need your help. We can do this together.

We’re launching our end-of-year Pro subscription campaign on Thursday, December 26, but I want to invite you to subscribe to Flickr Pro today for the same 25% discount. We’ve gone to great lengths to optimize Flickr for cost savings wherever possible, but the increasing cost of operating this enormous community and continuing to invest in its future will require a small price increase early in the new year, so this is truly the very best time to upgrade your membership to Pro. If you value Flickr finally being independent, built for photographers and by photographers, we ask you to join us, and to share this offer with those who share your love of photography and community.

With gratitude,

Don MacAskill
Am I impacted by the FTC’s YouTube agreement?
This question is currently a hot debate among YouTubers. The answer to this question is complex and depends on many factors. This is a long read as there’s a lot to say (~10000 words = ~35-50 minutes). Grab a cup of your favorite Joe and let’s explore.
COPPA, YouTube and the FTC
I’ve written a previous article on this topic entitled Rant Time: Google doesn’t understand COPPA. You’ll want to read that article to gain a bit more insight around this topic. Today’s article is geared more towards YouTube content creators and parents looking for answers. It is also geared towards anyone with a passing interest in the goings on at YouTube.
Before I start, let me write this disclaimer by saying I’m not a lawyer. Therefore, this article is not intended in any way to be construed as legal advice. If you need legal advice, there are many lawyers available who may be able to help you with regards to being a YouTube content creator and your specific channel’s circumstances. If you ARE HERE looking for legal advice, please go speak to a lawyer instead. The information provided in this article is strictly for information purposes only and IS NOT LEGAL ADVICE.
For Kids or Not For Kids?

With that out of the way, let’s talk a little about what’s going on at YouTube for the uninitiated. YouTube has recently rolled out a new channel creator feature. This feature requires that you mark your channel “for kids” or “not for kids”. Individual videos can also be marked this way (which becomes important a little later in the article). Note, this “heading” is not the actual text on the screen in the settings area (see the image), but this is what you are doing when you change this YouTube creator setting. This setting is a binary setting. Your content is either directed at kids or it is not directed at kids. Let’s understand the reasoning around COPPA. Also, a “kid” or “child” is defined in COPPA as any person 12 or younger.
When you set the “for kids” setting on a YouTube channel, a number of things happen to your channel: comments are disabled, monetization is severely limited or eliminated, and how YouTube promotes your content drastically changes. There may also be other subtle changes that are as yet unclear. The reason for all of these restrictions is that COPPA prevents the collection of personal information from children 12 and under… or, at minimum, requires that such information be deleted if parental consent cannot be obtained. In the 2013 update, COPPA added tracking cookies to the list of items that cannot be collected.
By disabling all of these features under ‘For Kids’, YouTube is attempting to reduce or eliminate its data collection vectors that could violate COPPA… to thwart future liabilities for Google / YouTube as a company.
On the other hand, if you set your channel as ‘Not For Kids’, YouTube maintains your channel as it has always been, with comments enabled, full monetization possible, etc. Seems simple, right? Wrong.
Not as Simple as it Seems
You’re a creator thinking, “Ok, then I’ll just set my channel to ‘Not for Kids’ and everything will be fine.” Not so fast there, partner. It’s not quite as simple as that. COPPA applies to your channel if even one child visits and Google collects any data from that child. But, there’s more to it.
YouTube will also be rolling out a tool that attempts to identify the primary audience of video content. If YouTube’s new tool identifies a video as content primarily targeting “kids”, that video’s “Not for Kids” setting may be overridden by YouTube and set as “For Kids”. Yes, YouTube’s tool can do this, overriding your channel-wide setting. It’s not enough to set this on your channel; you must also make sure your content is not being watched by kids and is not overly kid-friendly. How exactly YouTube’s scanner will work is entirely unknown as of now.
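For creators who want to catch a reclassification early, one option is to periodically audit each video’s status. Below is a minimal sketch using the YouTube Data API v3. The status field names used here (selfDeclaredMadeForKids and madeForKids) are assumptions about how YouTube exposes this setting, and the self-declared field may only be returned when you authenticate as the video’s owner, so verify against the current API documentation before relying on it.

```python
# Minimal sketch: audit the "made for kids" flags on your own videos.
# Assumption: the YouTube Data API v3 exposes the setting via the video
# "status" fields selfDeclaredMadeForKids (what you set) and madeForKids
# (what YouTube decided). Verify these names against current API docs;
# selfDeclaredMadeForKids may only appear when authenticated as the owner.
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"                  # hypothetical placeholder
VIDEO_IDS = ["VIDEO_ID_1", "VIDEO_ID_2"]  # your own video IDs

youtube = build("youtube", "v3", developerKey=API_KEY)
response = youtube.videos().list(part="status", id=",".join(VIDEO_IDS)).execute()

for item in response.get("items", []):
    status = item.get("status", {})
    declared = status.get("selfDeclaredMadeForKids")
    effective = status.get("madeForKids")
    if declared is not None and declared != effective:
        print(f"{item['id']}: you set {declared}, but YouTube classified it as {effective}")
    else:
        print(f"{item['id']}: madeForKids={effective}")
```

An audit like this only tells you what the flag currently says; it cannot tell you why the scanner decided what it decided.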
And here is where we get to the crux of this whole matter.
What is “Kid Friendly” Content?
Unfortunately, there is no clear answer to this question. Your content could be you reviewing toys, it could be drawing pictures by hand on the screen, it could be reviewing comic books, you might ride skateboards, you might play video games, you might even assemble Legos into large sculptures. These are all video topics that could go either way… and it all depends on which audience your video tends to draw in.
It also depends on your existing subscriber base. If a vast majority of your current active subscribers are children 12 and under, this fact can unfairly influence how your content is classified, even if your current content is most definitely not for kids. The fact that ‘kids’ are watching your channel is a problem for ANY content that you upload.
But you say, “My viewer statistics don’t show me a 12-and-under category.” No, they don’t, and there’s a good reason why. Google has always professed that it doesn’t allow children 12 and under on its platform. But clearly, that was a lie. Google does, in fact, allow children 12 and under onto its platform. That’s crystal clear for two reasons: 1) the FTC fined Google $170 million for violating COPPA (meaning the FTC found kids 12 and under are using the platform) and 2) YouTube has rolled out this “for kids / not for kids” setting, confirming that children 12 and under do, in fact, watch YouTube and have active Google Account IDs.
I hear someone else saying, “I’m a parent and I let my 11 year old son use YouTube.” Yeah, that’s perfectly fine and legal, so long as you have given “verifiable consent” to the company that is collecting data from your 11 year old child. As long as a parent gives ‘verifiable consent’ for their child under 12 to Google or YouTube or even to the channel owner directly, it’s perfectly legal for your child to be on the platform watching and participating and for Google and YouTube to collect data from your child.
Unfortunately, verifiable consent is difficult to manage digitally (see the DIY method of parental consent below), and Google doesn’t offer any “verifiable consent” mechanism for itself or for YouTube content creators. This means that even if you as a parent are okay with your child being on YouTube, Facebook, Instagram or even Snapchat, if you haven’t provided explicit and verifiable parental consent to that online service for your child 12 and under, that service is in violation of COPPA by handling data that your child may input into that service. Data can include name, telephone number, email address or even sharing photos or videos of themselves. It also includes cookies placed onto their devices.
COPPA was written to penalize the “web site” or “online services” that collect a child’s information. It doesn’t penalize the family. Without “verifiable consent” from a parent or legal guardian given to the “web site” or “online service”, it’s the same as no consent at all. Implicit consent isn’t valid for COPPA. Consent must be explicit and verifiable, given by a parent or legal guardian to the service being used by the child.
The Murky Waters of Google
If only YouTube were Google’s only property to consider. It isn’t. Google has many, many properties. I’ll make a somewhat short-ish list here:
- Google Search
- Google Games
- Google Music
- Google Play Store (App)
- Google Play Games (App)
- Google Stadia
- Google Hangouts
- Google Docs
- Google’s G Suite
- Google Voice
- Google Chrome (browser)
- Google Chromebook (device)
- Google Earth (App)
- Google Movies and TV
- Google Photos
- Google’s Gmail
- Google Books
- Google Drive
- Google Home (the smart speaker device)
- Google Chromecast (TV device)
- Android OS on Phones
- … and the list goes on …
To drive all of these properties and devices, Google relies on the creation of a Google Account ID. To create an account, you must supply Google with specific identifying information, including an email address, first and last name and various other required details. Google then grants you credentials in the form of a login identifier and a password, which allow you to log into and use any of the above Google properties, including (you guessed it) YouTube.
Without “verifiable consent” supplied to Google for a child 12 and under, whatever data Google has collected from your child during the Google Account signup process (or through any of the above apps) violates COPPA, a rule set whose enforcement is tasked to the Federal Trade Commission (FTC).
Yes, this whole situation gets even murkier.
Data Collection and Manipulation
The whole point to COPPA is to protect data collected from any child aged 12 and under. More specifically, it rules that this data cannot be collected / processed from the child unless a parent or legal guardian supplies “verifiable consent” to the “web site” or “online service” within a reasonable time of the child having supplied their data to the site.
As of 2013, data collection and manipulation isn’t defined just by what the child personally uploads and types, though this data is included. The Act was expanded to include cookies placed onto a child’s computer device to track and target that child with ads. These cookies are also considered protected data by COPPA, as they can be used to personally identify the child. If a service does not have “verifiable consent” on file for that child from a parent or guardian, the “online service” or “web site” is considered by the FTC to be in violation of COPPA.
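To make the cookie point concrete, here is a generic sketch of how a typical persistent identifier cookie gets set. This is not a claim about Google’s or YouTube’s actual implementation (which is unknown to outsiders); it simply shows why a random-looking, content-free cookie still counts as a “persistent identifier” once it is used to recognize one browser across visits and target ads.

```python
# Generic illustration of a persistent identifier cookie. This is NOT a
# claim about how Google or YouTube implement theirs; it only shows why a
# random, content-free cookie still uniquely tags one browser across visits.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def index():
    visitor_id = request.cookies.get("visitor_id")
    resp = make_response("hello")
    if visitor_id is None:
        # First visit: mint a random, persistent identifier.
        visitor_id = uuid.uuid4().hex
        # Persist it for a year. Every later request carries it back, which
        # is the kind of "persistent identifier" the 2013 COPPA amendments
        # treat as personal information when it is used to serve targeted
        # ads to a child without verifiable parental consent.
        resp.set_cookie("visitor_id", visitor_id, max_age=60 * 60 * 24 * 365)
    return resp
```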
The difficulty with Google’s situation is that Google actually stores a child’s data within the child’s Google Account ID. This account ID is entirely separate from YouTube. For example, if you buy your child a Samsung Note 10 Phone running Android and you as a parent create a Google Account for your 12 or under child to use that device, you have just helped Google violate COPPA. This is part of the reason the FTC fined Google $170 million for violations of COPPA. Perhaps not this specific scenario, but the fact that Google doesn’t offer a “verifiable consent” system to verify a child’s access to its services and devices prior to collecting data or granting access to services led the FTC to its ruling. The FTC’s focus, however, is currently YouTube… even though Google is violating COPPA all over its properties as a result of the use of a Google Account ID.
YouTube’s COPPA Fallout
Google wholly owns YouTube; it purchased the YouTube property in 2006. In 2009, Google retired YouTube’s original login credential system and began requiring viewers to use Google Accounts to access the YouTube property. This change is important.
It also seems that YouTube still operates mostly as an autonomous entity within Google’s larger corporate structure. What all of this means more specifically is that YouTube now uses Google Accounts, a separately controlled and operated system within Google, to manage credentials and grant access not only to the YouTube property, but to every other property that Google has (see the short-ish list above).
In 2009, the YouTube developers deprecated their own home-grown credentials system and began using the Google Accounts system of credential storage. This change very likely means that YouTube itself no longer stores or controls any credential or identifying data. That data is now contained within the Google Accounts system. YouTube likely now only manages the videos that get uploaded, comments, supplying ads on videos (whose tracking and management is probably also controlled by Google), content ID matching and anything else that appears in the YouTube UI. Everything else is likely out of the YouTube team’s control (or even access). In fact, I’d suspect that the YouTube team likely has zero access to the data and information stored within the Google Accounts system (with the exception of the specific data which the account holder has authorized to be publicly shown).
Why is this Google Accounts information important?
So long as Google Accounts remains a separate entity from YouTube (even though YouTube is owned by the same company), this means that YouTube can’t be in violation of COPPA (at least not where the storage of credentials is concerned). There is one exception which YouTube does control… its comment system.
The comment system on YouTube is one of the earliest “modern” social networks ever created. Only Facebook and MySpace were slightly earlier, though all three were generally created within 1 year of one another. It is also the only free-form place left in the present (2019) YouTube interface that allows a child 12 or under to incidentally type some form of personally identifying information into a public forum for YouTube to store (in violation of COPPA).
This is the reason that the “for kids” setting disables comments. YouTube formerly had a private messaging service, but it was retired as of September of 2019. It is no longer possible to use YouTube to have private conversations between other YouTube users. If you want to converse with another YouTube viewer, you must do it in a public comment. This change was likely also fallout from Google’s COPPA woes.
Google and Cookies
For the same reason as Google Accounts, YouTube likely doesn’t even manage its own site cookies. It might, but it likely relies on a centralized internal Google service to create, manage and handle cookies. The reason for this is obvious. Were YouTube’s developers to create and manage their own separate cookie, it would be a cookie that holds no use for other Google services. However, if YouTube developers were to rely on a centralized Google controlled service to manage their site’s cookies, it would allow the cookie to be created in a standardized way that all Google services can consume and use. For this reason, this author supposes a centralized system is used at YouTube rather than something “homegrown” and specific to YouTube.
While it is possible that YouTube might create its own cookies, it’s doubtful that YouTube does this for one important reason: ad monetization. For YouTube to participate in Google Advertising (yet another service under the Google umbrella of services), YouTube would need to use tracking cookies that the Google Advertising service can read, parse and update while someone is watching a video on YouTube.
This situation remains murky because YouTube can manage its own internal cookies. I’m supposing that YouTube doesn’t because of a larger corporate platform strategy. But, it is still entirely possible that YouTube does manage its own browser cookies. Only a YouTube employee would know for certain which way this one goes.
Because of the ambiguity in how cookies are managed within Google and YouTube, this is another area where YouTube has erred on the side of caution by disabling ads and ad tracking if a channel is marked as ‘for kids’. This prevents placing ad tracking cookies on any computers from ‘for kids’ marked channels and videos, again avoiding violations of COPPA.
The FTC’s position
Unfortunately, the FTC has put themselves into a constitutionally precarious position. The United States Constitution has a very important provision within its First Amendment.
Let me quote the US Constitution’s First Amendment:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
The constitutional difficulty that the FTC has placed themselves in is that YouTube, by its very nature, offers a journalistic platform which is constitutionally protected from tortious interference by the United States government. The government (or more specifically, Congress) cannot make law that in any way abridges freedom of speech or of the press.
A video on YouTube is not only a form of journalism, it is a form of free speech. As long as YouTube and Google remain operating within the borders of the United States, United States residents must be able to use this platform unfettered without government tortious interference.
How does this apply to the FTC? It applies because the FTC is a governmental entity created by an act of the US Congress and, therefore, acts on behalf of the US Congress. This means that the FTC must uphold all provisions of the United States Constitution when dealing with matters of Freedom of Speech and Freedom of the Press.
How does this problem manifest for the FTC? The FTC has repeatedly stated that it will use “tools” to determine if a YouTube channel’s content is intended for and primarily targets children 12 and under. Here’s the critical part. If a channel’s content is determined to be targeting children 12 and under, the channel owner may be fined up to $42,530 per video as it will have been deemed in violation of COPPA.
There are two problems with the above statements the FTC has made. Let’s examine text from this FTC provided page about YouTube (italics provided by the FTC):
So how does COPPA apply to channel owners who upload their content to YouTube or another third-party platform? COPPA applies in the same way it would if the channel owner had its own website or app. If a channel owner uploads content to a platform like YouTube, the channel might meet the definition of a “website or online service” covered by COPPA, depending on the nature of the content and the information collected. If the content is directed to children and if the channel owner, or someone on its behalf (for example, an ad network), collects personal information from viewers of that content (for example, through a persistent identifier that tracks a user to serve interest-based ads), the channel is covered by COPPA. Once COPPA applies, the operator must provide notice, obtain verifiable parental consent, and meet COPPA’s other requirements.
and there’s more, which contains the most critical part of the FTC’s article:
Under COPPA, there is no one-size-fits-all answer about what makes a site directed to children, but we can offer some guidance. To be clear, your content isn’t considered “directed to children” just because some children may see it. However, if your intended audience is kids under 13, you’re covered by COPPA and have to honor the Rule’s requirements.
The Rule sets out additional factors the FTC will consider in determining whether your content is child-directed:
- the subject matter,
- visual content,
- the use of animated characters or child-oriented activities and incentives,
- the kind of music or other audio content,
- the age of models,
- the presence of child celebrities or celebrities who appeal to children,
- language or other characteristics of the site,
- whether advertising that promotes or appears on the site is directed to children, and
- competent and reliable empirical evidence about the age of the audience.
Content, Content and more Content
The above quotes discuss YouTube content becoming “covered by COPPA”. This is a ruse. Content is speech protected by the United States Constitution under the First Amendment (see above). Nothing in any YouTube visual content published by a United States citizen can be “covered by COPPA”. The First Amendment sees to that.
Let’s understand why. First, COPPA is a data collection Act. It has nothing whatever to do with content ratings or content age appropriateness, nor does it discuss anything else related to visual content targeted towards children of ANY age. Indeed, there is no verbiage within the COPPA provisions that discusses YouTube, visual content, audio content or anything else to do with Freedom of Speech matters.
It gets worse… at least for the FTC. Targeting channels for disruption by fining them strictly over content uploaded onto the channel is less about protecting children’s data and more about content censorship on YouTube. Indeed, fining a channel $42,530 is tantamount to censorship as it is likely to see that content removed from YouTube… which is, indeed, censorship in its most basic form. Any censorship of Freedom of Speech is firmly against First Amendment rights.
Since the FTC is using fines based on COPPA as leverage against content creators, the implication is that the FTC will use this legal leverage to have YouTube take down content it feels inappropriately targets children 12 and under, rather than upholding COPPA’s actual data protection provisions. Indeed, the FTC will actually be making new law by fining channels based on content, not on whether data was actually collected in violation of COPPA’s data collection provisions. Though the first paragraph may claim “data collection” as a metric, the second paragraph is solely about “offending content”… which is entirely about censorship. Why is that? Let’s continue.
COPPA vs “Freedom of Speech”
The FTC has effectively hung themselves out to dry. In fact, if the FTC does fine even ONE YouTube channel for “inappropriate content”, the FTC will be firmly in the business of censorship of journalism. Or, more specifically, the FTC will have violated the First Amendment rights of U.S. Citizens’ freedom of speech protections.
This means that in order for the FTC to enforce COPPA against YouTube creators, it has now firmly put itself into the precarious position of violating the U.S. Constitution’s First Amendment. In fact, the FTC cannot fine even one channel owner without violating the First Amendment.
In truth, they can fine only under the following circumstances:
- The FTC proves that the YouTube channel actually collected and currently possesses inappropriate data from a child 12 and under.
- The FTC leaves the channel entirely untouched. The channel and content must remain online and active.
Number 2 is actually quite a bit more difficult for the FTC than it sounds. Because YouTube and the FTC have made an agreement, that means that YouTube can be seen as an agent of the FTC by doing the FTC’s bidding. This means that even if YouTube takes down the channel after a fine for TOS reasons, the FTC’s fining action can still be construed as in violation of First Amendment rights because YouTube acted as an agent to take down the “offending content”.
It gets even more precarious for the FTC. Even the simple act of levying a fine against a YouTube channel could be seen as a violation of First Amendment rights. This action by the FTC seems less about protecting children’s data and more about going after YouTube content creators “targeting children with certain types of content” (see above). Because the latter quote from the FTC article explicitly calls out types of content as “directed at children”, it shows that this is not about COPPA, but about visual content rules. Visual content rules DO NOT exist in COPPA.
Channel Owners and Content
If you are a YouTube channel owner, all of the above should greatly concern you for the following reasons:
- You don’t want to become a guinea pig for testing the First Amendment legal waters of the FTC + COPPA
- The FTC’s content rules above effectively state, “We’ll know it when we see it.” This is constitutionally BAD. This heavily implies content censorship intent. This means that the FTC can simply call out any content as being inappropriate and then fine a channel owner for uploading that content.
- It doesn’t state whether the rule applies retroactively. Does previously uploaded content become subject to the FTC’s whim?
- The agreement takes effect beginning January 1, 2020
- YouTube can “accidentally” reclassify content as “for kids” when it clearly isn’t… which can trigger an FTC action.
- The FTC will apparently have direct access to the YouTube platform scanning tools. To what degree it has access is unknown. If it has direct access to take videos or channels offline, it has direct access to violate the First Amendment. Even if it must ask YouTube to do this takedown work, the FTC will still have violated the First Amendment.
The Fallacy
The difficulty I have with this entire situation is that the FTC now appears to be holding content creators to blame for heavy deficiencies within YouTube’s and Google’s platforms. Because Google failed to properly police its own platform for users 12 and under, it now seeks to pass that blame down onto YouTube creators simply because they create and upload video content. Content, I might add, that is completely protected under the United States Constitution’s First Amendment as “Freedom of Speech”. Pre-shot video content is a one-way, passive form of communication.
Just like broadcast and cable TV, YouTube is a video sharing platform. It is designed to allow creators to impart one-way passive communication using pre-made videos, just like broadcast TV. If these FTC actions apply to YouTube, then they equally apply to broadcast and cable television providers… particularly now that CBS, ABC, NBC, Netflix, Disney+ (especially Disney+), Hulu, Vudu, Amazon, Apple and cable TV providers also offer “web sites” and “online services” where their respective video content can (and will) be viewed by children 12 and under via a computer device or web browser and where a child is able to input COPPA-protected data. For example, is Disney+ requiring verifiable parental consent to comply with COPPA?
Live Streaming
However, YouTube now also offers live streaming, which changes the game a little for COPPA. Live streaming offers two-way communication in near real-time. Live streaming is a situation where a channel creator might be able to collect inappropriate data from a child simply by asking pointed questions during a live stream event. A child might even feel compelled to write information into live chat that they shouldn’t be giving out. Live streaming may be more likely to collect COPPA-protected data than pre-made video content simply because of the live interactivity between the host and the viewers. You don’t get that level of interaction with pre-made video content.
Live streaming or not, there is absolutely no way a content creator can in any way be construed as an “Operator” of Google or of YouTube. The FTC is simply playing a game of “Guilty by Association”. They are using this flawed logic… “You own a YouTube channel, therefore you are automatically responsible for YouTube’s infractions.” It’s simply Google’s way of passing down its own legal burdens by your channel’s association with YouTube. Worse, the FTC seems to have bought into this Google shenanigan. It’s great for Google, though. They won’t be held liable for any more infractions against COPPA so long as YouTube creators end up shouldering that legal burden for Google.
The FTC seems to have conveniently forgotten this next part. In order to have collected data from a child, you must still possess a copy of that data to prove that you actually did collect it and that you are STILL in violation of COPPA. If you don’t have a copy of the alleged violating data, then either you didn’t collect it, the child didn’t provide it, you never had it to begin with or you have since deleted it. As for cookie violations, it’s entirely a stretch to say that YouTube creators had anything to do with how Google / YouTube manages cookies. Regarding deletion, the COPPA verbiage states under parental consent:
§312.4(c)(1). If the operator has not obtained parental consent after a reasonable time from the date of the information collection, the operator must delete such information from its records;
If an “operator” deletes such records, then the “operator” is not in violation of COPPA. If an “operator” obtains parental consent, then the “operator” is also not in violation of COPPA. Nothing, though, states definitively that a YouTube creator assumes the role of “operator”.
This is important because Google is and remains the “operator”. Until or unless Google extends access to its Google Accounts collected data to ALL YouTube creators so that a creator can take possession of said data, a creator cannot be considered an “operator”. The YouTube creator doesn’t have (and never has had) access to the Google Account personal data (other than what is publicly published on Google). Only Google has access to this account data which has been collected as part of creating a new Google Account. Even the YouTube property and its employees likely don’t have access to Google Account personal data, as mentioned. This means that, by extension, a YouTube creator doesn’t have a copy of any personal data that a Google Accounts signup may have collected… and therefore the YouTube content creator is NOT in violation of COPPA, though that doesn’t take Google off of the hook for it.
A YouTube content creator must actually POSSESS the data to be in violation. The FTC’s burden of proof is to show that the YouTube content creator actually has possession of that data. Who possesses that data? Google. Who doesn’t possess that data? The YouTube content creator. Though, there may be some limited edge cases where a YouTube creator might have requested personal information from a child in violation of COPPA. Even if a YouTube creator did request such data, so long as it has since been deleted fully, it is not in violation of COPPA. You must still be in possession of said data to be in violation of COPPA, at least according to how the act seems to read. If you have questions about this section, you should contact a lawyer for definitive confirmation and advice. Remember, I’m not a lawyer.
There is only ONE situation where a YouTube content creator may be in direct violation of COPPA: live streaming. If a live streamer prompts viewers to write personal data into the live chat area and one of those viewers is 12 or under, the creator will have access to COPPA-violating personal data. Additionally, comments on videos might be construed as in violation of COPPA if a child 12 or under writes something personally identifying into a comment. Though, I don’t know of many content creators who would intentionally ask their viewers to reveal personally identifying information in a comment on YouTube. Most people (including content creators) know all too well the dangers of posting such personally identifying information in a YouTube comment. A child might not, though. I can’t recall having watched a single YouTube channel where the host requests personally identifying information be placed into a YouTube comment. Ignoring COPPA for a second, such a request would be completely irresponsible. Let’s continue…
COPPA does state this about collecting data under its ‘Definitions’ section:
Collects or collection means the gathering of any personal information from a child by any means, including but not limited to:
(1) Requesting, prompting, or encouraging a child to submit personal information online;
(2) Enabling a child to make personal information publicly available in identifiable form. An operator shall not be considered to have collected personal information under this paragraph if it takes reasonable measures to delete all or virtually all personal information from a child’s postings before they are made public and also to delete such information from its records; or
(3) Passive tracking of a child online.
The “Enabling a child” section above is the reason for the removal of comments when the “for kids” setting is applied. Having comments enabled on a video when a child 12 and under could be watching enables the child to write in personal information if they so choose. Simply having a comment system available to someone 12 and under appears to be an infraction of COPPA. YouTube creators DO have access to enable or disable comments. What YouTube creators don’t have access to is the age of the viewer. Google hides that information from YouTube content creators. YouTube content creators, in good faith, do not know the ages of anyone watching their channel.
Tracking a child’s activities is not possible for a YouTube content creator. A content creator has no direct or even incidental access to Google’s systems which perform any tracking activities. Only Google does. Therefore, number 3 does not apply to YouTube content creators. The only way number 3 would ever apply to a creator is if Google / YouTube offered its YouTube content creators direct access to its cookie tracking systems. Therefore, only numbers 1 and 2 could potentially apply to YouTube content creators.
In fact, because Google Accounts hides its personal data from YouTube content creators (including the ages of their viewers), content creators don’t know anything personal about any of their viewers. Which means, how are YouTube content creators supposed to know if a child 12 and under is even watching?
Google’s Failures
The reality is, Google has failed to control its data collection under Google Accounts. It is Google Accounts that needs to have COPPA applied to it, not YouTube. In fact, this action by the FTC will actually solve NOTHING at Google.
Google’s entire system is tainted. Because of the number of services that Google owns and controls, placing COPPA controls on only ONE of these services (YouTube) is the absolute bare minimum for an FTC action enforcing COPPA. It’s clear that the FTC simply doesn’t understand the breadth and scope of Google’s COPPA failures within its systems. Placing these controls on YouTube will do NOTHING to fix the greater COPPA violations which continue unabated within the rest of Google’s services, including its brand new video game streaming service, Google Stadia. Google Stadia is likely to draw in just as many children 12 and under as YouTube. Probably more. If Stadia has even one sharing or voice chat service active or uses cookies to track its users, Stadia is in violation for the same exact reasons YouTube is… Google’s failure of compliance within Google Accounts.
Worse, there’s Android. Many parents are now handing brand new Android phones to their children 12 and under. Android has MANY tracking features enabled on its phones. From the GPS on board, to cookies, to apps, to the cell towers, to the OS itself. Talk about COPPA violations.
What about Google Home? You know, that seemingly innocuous smart speaker? Yeah, that thing is going to track not only each individual’s voice, it may even store recordings of those voices. It probably even tracks what things you request and then, based on your Google Account, will target ads on your Android phone or on Google Chrome based on things you’ve asked Google Home to provide. What’s more personally identifying than your own voice being recorded and stored after asking something personal?
Yeah, YouTube is merely the tippiest tip of a much, much, MUCH larger corporate iceberg that is continually in violation of COPPA within Google. The FTC just doesn’t get that its $170 million fine and First Amendment violating censorship efforts on YouTube isn’t the right course of action. Not only does the FTC’s involvement in censorship on YouTube lead to First Amendment violations, it won’t solve the rest of the COPPA violations at Google.
Here’s where the main body of this article ends.
Because there are still more questions, thoughts and ideas around this issue, let’s explore some deeper ideas which might answer a few more of your questions as a creator or as a parent. Each question is prefaced by a ➡️ symbol. At this point, you may want to skim the rest of this article for specific thoughts which may be relevant to you.
➡️ “Should I Continue with my YouTube Channel?”
This is a great question and one that I can’t answer for you. Since I don’t know your channel or your channel’s content, there’s no way for me to give advice to you. Even if you do tell me your channel and its content, the FTC explicitly states that it will be at the FTC’s own discretion if a channel’s content “is covered by COPPA”. This means you need to review your own channel content to determine if your video content drives kids 12 and under to watch. Even then, it’s a crap shoot.
Are there ways you can begin to protect your channel? Yes. The first way is to post a video requesting that all subscribers who are 12 and under either unsubscribe from the channel or alternatively ask their parents to provide verifiable consent to you to allow that child to continue watching. This consent must come from a parent or guardian, not the child. Obtaining verifiable consent is not as easy as it sounds. Though, after you have received verifiable parental consent from every “child” subscriber on your channel, you can easily produce this consent documentation to the FTC if they claim your channel is in violation.
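If you do collect consent yourself, you will also need a consistent way to document and produce it. Below is a hypothetical sketch of a minimal consent record a channel owner might keep; the field names are my own invention and not an FTC-prescribed format, so treat it as an organizational aid rather than a compliance guarantee.

```python
# Hypothetical consent-record format for a channel owner's own files.
# The fields are illustrative, not an FTC-prescribed schema; keep the
# recorded call itself as the actual evidence and this index alongside it.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ConsentRecord:
    parent_name: str
    child_public_handle: str   # the child's public YouTube handle only; store no extra child data
    consent_date: str          # ISO date the consent call took place
    method: str                # e.g. "recorded two-way video call"
    recording_file: str        # where the recorded call is archived

records = [
    ConsentRecord(
        parent_name="Jane Doe",
        child_public_handle="@example-child-viewer",
        consent_date=date(2019, 12, 20).isoformat(),
        method="recorded two-way video call",
        recording_file="consents/2019-12-20_jane_doe.mp4",
    ),
]

with open("consent_records.json", "w") as fh:
    json.dump([asdict(r) for r in records], fh, indent=2)
```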
The next option is to apply for TRUSTe’s Children’s Privacy Certification. This affords your YouTube channel “Safe Harbor” protections against the FTC. This one is likely most helpful for large YouTube channels which tend to target children and which make significant income through ad monetization. TRUSTe’s certification is not likely to come cheap. This is the reason this avenue would only be helpful for the largest channels receiving significant monetization enough to pay for such a service.
Note, if you go through the “Safe Harbor” process or obtain consent for every subscriber, you won’t need to set your channel as ‘for kids’. Also note that “Safe Harbor” may not be possible due to Google owning all of the equipment that operates YouTube. Certification programs usually require you to have direct access to systems to ensure they continue to comply with the terms of the certification. Certifications usually also require direct auditing of systems to ensure the systems comply with the certification requirements. It’s very doubtful that Google will allow an auditing firm to audit YouTube’s servers on behalf of a content creator for certification compliance… and even if they did allow such an audit, YouTube’s servers would likely fail the certification audit.
The final option is to suspend your channel. Simply hide all of your content and walk away from YouTube. If you decide to use another video service like DailyMotion, Vimeo, or Twitch, the FTC may show up there as well. If they can make the biggest video sharing service in the world bow down to the FTC, then the rest of these video sharing services are likely not far behind.
➡️ “I don’t monetize my channel”
This won’t protect you. It’s not about monetization. It’s about data collection. The FTC is holding channel owners responsible for Google’s irresponsible data collection practices. Because Google can’t seem to police its own data collection to shield its end users from COPPA, Google / YouTube has decided to skip trying to fix its broken system and, instead, YouTube has chosen to pass its violations down onto its end users… the YouTube creators.
This “passing off liability” action is fairly unheard of in most businesses. Most businesses attempt to shield their end users as much as possible from legal liabilities arising from the use of their services. Not Google or YouTube. They’re more than willing to hang their end users out to dry and let those users take on the burden of Google’s continued COPPA violations.
➡️ “My content isn’t for kids”
That doesn’t matter. What matters is whether the FTC thinks it is. If your content is animated, video game related, toy related, art related, craft related or in any way might draw in children as viewers, that’s all that matters. Even one child 12 and under is enough to shift Google’s COPPA data collection liabilities down onto your shoulders.
➡️ “I’ve set my channel as ‘not for kids'”
This won’t protect you. Google has a tool in the works that will scan the visual content of a video and potentially reclassify a video as “for kids” in defiance of the channel-wide setting of “not for kids”. Don’t expect that the channel-wide setting will hold up for every single video you post. YouTube can reclassify videos as it sees fit. Whether there will be a way to appeal this is as yet unknown. To get rid of that reclassification of a video, you may have to delete the video and reupload. Though, if you do this and the content remains the same, it will likely be scanned and marked “for kids” again by YouTube’s scanner. Be cautious.
➡️ “I’ll set my channel ‘for kids'”
Do this only if you’re willing to live with the restrictions AND only if your content really is for kids (or is content that could easily be construed as for kids). While this channel setting may seem to protect your channel from COPPA violations, it actually doesn’t. On the other hand, if your content truly isn’t for children and you set it ‘for kids’ that may open your channel up to other problems. I wouldn’t recommend setting content as ‘for kids’ if the content you post is not for kids. Though, there’s more to this issue… keep reading.
Marking your content “for kids” won’t actually protect you from COPPA. In fact, it makes your channel even more liable to COPPA violations. If you mark your content as “for kids”, you are then firmly under the obligation of providing proof that your channel absolutely DID NOT collect data from children under the age of 13. Since the FTC is making creators liable for Google’s problematic data collection practices, you could be held liable for Google’s broken data collection system simply by marking your content as ‘for kids’.
This setting is very perilous. I definitely don’t recommend ANY channel use this setting… not even if your channel is targeted at kids. By setting ‘for kids’ on any channel or content, your channel WILL become liable under COPPA’s data collection provisions. Worse, you will be held liable for Google’s data collections practices… meaning the FTC can come after you with fines. This is where you will have to fight to prove that you presently don’t have access to any child’s collected data, that you never did and that it was solely Google who stored and maintained that data. If you don’t possess any of this alleged data, it may be difficult for the FTC to uphold fines against channel owners. But, unfortunately, it may cost you significant attorney fees to prove that your channel is in the clear.
Finally, it’s entirely possible that YouTube may change this ‘for kids’ setting so that it becomes a one-way transition. This means that you may be unable to undo this change in the future. If it becomes one way, then a channel that is marked ‘for kids’ may never be able to go back to ‘not for kids’. You may have to create an entirely new channel and start over. If you have a large channel following, that could be a big problem. Don’t set your channel ‘for kids’ thinking you are protecting your channel. Do it because you’re okay with the outcome and because your content really is targeted for kids. But, keep in mind that setting ‘for kids’ will immediately allow the FTC to target your channel for COPPA violations.
➡️ “I’m a parent and I wish to give verifiable parental consent”
That’s great. Unfortunately, doing so is complicated. Because it’s easy for a child to fabricate such information using friends or parents of friends, giving verifiable consent to a provider is more difficult for parents than it sounds. It requires first verifying your identity as a parent, then it requires the provider to collect consent documentation from you.
It seems that Google / YouTube have not yet set up a mechanism to collect verifiable consent themselves, let alone for YouTube content creators. What that means is that there’s no easy way for you as a parent to give (or a channel to get) verifiable consent. On the flip side, as a content creator, it is left to you to handle contacting parents and collecting verifiable consent for child subscribers. You can use a service that will cost you money or you can do it yourself. As a parent, you can do your part by contacting a channel owner and giving them explicit verifiable consent. Keep reading to understand how to go about giving consent.
Content Creators and Parental Consent
Signing up for a service that provides verifiable consent is something that larger YouTube channels may be able to afford. But, for a small YouTube channel, collecting such information from every new subscriber will be difficult. Google / YouTube could set up such an internal verification service for its creators, but YouTube doesn’t care about that or about complying with COPPA. If Google cared about complying with COPPA, it would already have a properly working age verification system in Google Accounts that forces children to set their real age and that requires verifiable consent from the parent of a child 12 and under. If a child 12 and under is identified, Google could then block access to all services that might trigger COPPA’s data collection rules until such consent is given.
It gets even more complicated. Because YouTube no longer maintains a private messaging service, there’s no way for a channel owner to contact subscribers directly on the YouTube platform other than posting a one-way communication video showing an email address or other means of contact. This is why it’s important for each parent to reach out to each YouTube channel owner where their child subscribes and offer verifiable consent to the channel owner.
As a creator, this means you will need to post a video stating that ALL subscribers under the age of 13 must have parental consent to watch your channel. Each such child will need to ask their parent to contact you using a COPPA-authorized mechanism to provide consent. This will allow you to begin the collection of verifiable consent from parents of any children watching or subscribed to your content. Additionally, every video you post must include an intro stating that all new subscribers 12 and under must have their parent contact the channel owner to provide consent. This shows the FTC that your channel is serious about collecting verifiable parental consent.
So what is involved in Do It Yourself consent? Not gonna lie: it’s going to be very time consuming. However, the easiest way to obtain verifiable consent is setting up and using a two-way video conferencing service like Google Hangouts, Discord or Skype. You can do this yourself, but it’s better if you hire a third party to do it. It’s also better to use a service like Hangouts that shows all parties’ faces together on the screen at once. This way, when you record the call for your records, both your face and the parent’s and child’s faces are readily shown. This shows you didn’t fabricate the exchange.
To be valid consent, both the parent and the child must be present and visible in the video while conferencing with the channel owner. The channel owner should also be present in the call and visible on camera if possible. Before beginning, the channel owner must notify the parent that the call will be recorded by the channel owner for the sole purpose of obtaining and storing verifiable consent. You may want to ensure the parent understands that the call will only ever be used for this purpose (and hold to that). Posting these videos on YouTube as content, even as a montage, is off limits. Then, you may record the conference call and keep it in the channel owner’s records. As a parent, you need to be willing to offer a video recorded statement to the channel owner stating something similar to the following:
“I, [parent or guardian full name], am 18 years of age or older and give permission to [your channel name] for my child / my ward [child’s YouTube public profile name] to continue watching [your channel name]. I additionally give permission to [your channel name] to collect any necessary data from my child / my ward while watching your channel named [your channel name].”
If possible, the parent should hold up the computer, tablet, phone or device that the child will use to the camera so that it clearly shows the child’s profile name logged into YouTube on your channel. This will verify that it is, indeed, the parent or legal guardian of that child’s profile. You may want to additionally request that the parent hold up a valid form of picture ID (driver’s license or passport), obscuring any addresses or identifiers with paper or similar, to verify the picture and name against the person giving consent. You don’t need to know where they live; you just need to verify that the name and photo on the ID match the person you are speaking to.
Record this video statement and store the recording in a safe place in case you need to produce it for the FTC. There should be no posting of these videos to YouTube or any other place. These are solely to be filed for consent purposes. Be sure to also notice whether the person with the child is old enough to be an adult, that the ID seems legitimate and that the person is not the child’s sibling or someone falsifying the verification process. If this is a legal guardian situation, validating legal guardianship is more difficult. Just do your best and hope that the guardian is being truthful. If in doubt, thank the people on the call for their time and then block the subscriber from your channel.
If your channel is owned by a corporation, the statement should include the name of the business as well as the channel. Such a statement over a video offers verifiable parental consent for data collection from that child by that corporation and/or the channel. This means that the child may participate in comment systems related to your videos (and any other data collection as necessary). Yes, this is a lot of work if you have a lot of under 13 subscribers, but it is the work that the U.S. Government requires to remain compliant with COPPA. The more difficult part is knowing which subscribers are 12 and under. Google and YouTube don’t provide any place to determine this. Instead, you will need to ask your child subscribers to submit parental consent.
If the DIY effort is too much work, then the alternative is to post a video requesting 12 and under subscribers contact you via email stating their YouTube public subscriber identifier. Offer up an email address for this purpose. It doesn’t have to be your primary address. It can be a ‘throw away’ address solely for this purpose. For any account that emails you their account information, block it. This is the simplest way to avoid 12 and under children who may already be in your subscriber pool. Additionally, be sure to state in every future video that any 12 and under watching this channel must have their parental consent or risk being blocked.
Note, you may be thinking that requesting any information from a child 12 and under is in violation of COPPA, but it isn’t. COPPA allows a reasonable period of time to collect personal data while in the process of obtaining parental consent before that data needs to be irrevocably deleted. After you block 12 and under subscribers, be sure to delete all correspondence via that email address. Make sure that the email correspondence isn’t sitting in a trashcan, and make sure that not only are the emails fully deleted, but any collected contact information is fully purged from that email system. Many email services automatically collect and store email addresses into an automatic address list; make sure that these automatic lists are also purged. As long as all contact data has been irrevocably deleted, you aren’t violating COPPA.
COPPA recognizes the need to collect personal information to obtain parental consent:
(c) Exceptions to prior parental consent. Verifiable parental consent is required prior to any collection, use, or disclosure of personal information from a child except as set forth in this paragraph:
(1) Where the sole purpose of collecting the name or online contact information of the parent or child is to provide notice and obtain parental consent under §312.4(c)(1). If the operator has not obtained parental consent after a reasonable time from the date of the information collection, the operator must delete such information from its records;
This means you CAN collect a child’s or parent’s name or contact information in an effort to obtain parental consent and that data may be retained for a period of “reasonable time” to gain that consent. If consent is not obtained in that time, then the channel owner must “delete such information from its records”.
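To make the “reasonable time” bookkeeping concrete, here’s a minimal sketch of how a channel owner might track pending consent requests and purge stale ones. This is entirely my own illustration: the struct, the field names and the 30-day window are assumptions, not anything COPPA or YouTube prescribes.

```swift
import Foundation

// Hypothetical record of a pending consent request (names are illustrative only).
struct PendingConsentRequest {
    let subscriberHandle: String   // the child's public YouTube profile name
    let parentContact: String      // contact info collected solely to obtain consent
    let collectedOn: Date          // when that contact info was collected
    var consentReceived = false    // set true once verifiable consent is on file
}

// Drop any request where consent was not obtained within a "reasonable time".
// COPPA does not define the window; the 30-day default here is an assumption.
func purgeExpired(_ requests: [PendingConsentRequest],
                  reasonableTime: TimeInterval = 30 * 24 * 60 * 60,
                  now: Date = Date()) -> [PendingConsentRequest] {
    requests.filter { request in
        request.consentReceived || now.timeIntervalSince(request.collectedOn) < reasonableTime
    }
}
```

The point of the sketch is simply that the collection date travels with the contact information, so anything that ages out without consent gets deleted rather than lingering in your records.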
➡️ “How can I protect myself?”
As long as your channel remains on YouTube with published content, your channel is at risk. As mentioned above, there are several steps you can take to reduce your risks. I’ll list them here:
- Apply for Safe Harbor with TrustArc’s TRUSTe certification. It will cost you money, but once certified, your channel will be safe from the FTC so long as you remain certified under the Safe Harbor provisions.
- Remove your channel from YouTube. So long as no content remains online, the FTC can’t review your content and potentially mark it as “covered by COPPA.”
- Wait and see. This is the riskiest option. The FTC makes some claims that it intends to prove you had access to, stored and maintained protected data from children. However, there are just as many statements indicating it will take action first, then request proof later. Establishing that a channel actually held such data will be a difficult burden of proof in most cases, but it also means a court battle.
- Use DIY methods or locate a service to obtain verifiable parental consent for every subscriber 12 and under.
➡️ “What went wrong?”
A whole lot failed on Google and YouTube’s side. Let’s run through Google’s failures point by point.
- Google has failed to identify children 12 and under to YouTube content creators.
- Google has failed to offer mechanisms to creators to prevent children 12 and under from viewing content on YouTube.
- Google has failed to prevent children 12 and under from creating a Google Account.
- Google has failed to offer a system to allow parents to give consent for children 12 and under to Google. If Google had collected parental consent for 12 and under, that consent should automatically apply to content creators… at least for data input using Google’s platforms.
- Google has failed to warn parents that they will need to provide verifiable consent for children 12 and under using Google’s platform(s). Even the FTC has failed to warn parents of this fact.
- YouTube has failed to provide an unsubscribe tool to creators to easily remove any subscribers from a channel. See question below.
- YouTube has failed to provide a blocking mechanism that prevents a Google Account from searching, finding or watching a YouTube channel.
- YouTube has failed to identify accounts that may be operated by a child 12 and under and to warn content creators of this fact, thus allowing the creator to block any such accounts.
- YouTube has failed to offer a tool to allow creators to block specific (or all) content from viewers 12 and under.
- YouTube has failed to institute a full ratings system, such as the TV Parental Guidelines, that sets a rating on each video and displays a rating identifier within the first 2 minutes, stating that a video may contain content inappropriate for certain age groups. Such a full ratings system would allow parents to use parental controls to block specific ratings of content from their child. This would let parents not only prevent children 12 and under from viewing more mature YouTube content, but also block content for every age group handled by the TV Parental Guidelines. A rough sketch of what such a ratings gate could look like follows this list.
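To illustrate that last suggestion, here’s a rough sketch of what a ratings gate could look like. The rating categories follow the TV Parental Guidelines; everything else (the types, the filter, the cap a parent chooses) is purely hypothetical and not any existing YouTube feature.

```swift
import Foundation

// Hypothetical ratings gate; the categories mirror the TV Parental Guidelines.
enum TVRating: Int, Comparable {
    case tvY, tvY7, tvG, tvPG, tv14, tvMA

    static func < (lhs: TVRating, rhs: TVRating) -> Bool { lhs.rawValue < rhs.rawValue }
}

struct Video {
    let title: String
    let rating: TVRating
}

// A parental control that hides anything rated above the limit a parent chose.
func allowedVideos(_ videos: [Video], maxRating: TVRating) -> [Video] {
    videos.filter { $0.rating <= maxRating }
}

// Example: a parent of a child 12 and under might cap viewing at TV-PG.
let sample = [Video(title: "Cartoon short", rating: .tvY),
              Video(title: "Gaming stream", rating: .tv14)]
let visible = allowedVideos(sample, maxRating: .tvPG)  // keeps only the cartoon
```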
➡️ “I’m a creator. Can I unsubscribe a subscriber from my channel?”
No, you cannot. But, you can “Block” the user and/or you can “Hide user from channel” depending on where you are in the YouTube interface. Neither of these functions is available directly under the Subscriber area of YouTube Creator; both require digging into separate public Google areas. These mechanisms don’t prevent a Google Account from searching for your channel and watching your public content, however.
To block a subscriber, enter the Subscribers area of your channel using Creator Studio Classic to view a list of your subscribers. A full list of subscribers is NOT available under the newest YouTube Studio. You can also see your subscribers (while logged into your account) by navigating to https://www.youtube.com/subscribers. From here, click on the username of the subscriber. This will take you to that subscriber’s YouTube page. From this user page, locate a small grey flag in the upper portion of the screen. I won’t snapshot the flag or give its exact location because YouTube is continually moving this stuff around and changing the flag image shape. Simply look for a small flag icon and click on it, which will drop down a menu. This menu will allow you to block this user.
Blocking a user prevents all interactions between that user and your channel(s). They will no longer be able to post comments on your videos, but they will still be able to view your public content and they will remain subscribed if they already are.
The second method is to use “Hide user from channel”. You do this by finding a comment on the video from that user and selecting “Hide user from channel” using the 3 vertical dot drop down menu to the right of the comment. You must be logged into your channel and viewing one of your video pages for this to work.
Hiding a user and blocking a user are effectively the same thing, according to YouTube. The difference is only in the method of performing the block. Again, none of the above allows you to unsubscribe users manually from your channel. Blocking or hiding a user still allows the user to remain subscribed to your channel as stated above. It also allows them to continue watching any public content that you post. However, a blocked or hidden user will no longer receive notifications about your channel.
This “remaining subscribed” distinction is important because the FTC appears to be using audience viewer demographics as part of its method to determine if a channel is directing its content towards children 12 and under. It may even use subscriber demographics. Even if you do manage to block an account of a child 12 and under who has subscribed to your channel, that child remains a subscriber and can continue to search for your channel and watch any content you post. That child’s subscription to your channel may, in fact, continue to impact your channel’s demographics, thus leading to possible action by the FTC. By blocking 12 and under children, you may be able to use this fact to your advantage by proving that you are taking action to prevent 12 and under users from posting inappropriate data to your channel.
➡️ “What about using Twitch or Mixer?”
Any video sharing or live streaming platforms outside of and not owned by Google aren’t subject to Google’s / YouTube’s FTC agreement.
Twitch
Twitch isn’t owned or operated by Google. They aren’t nearly as big as YouTube, either. Monetization on Twitch may be less than can be had on YouTube (at least before this COPPA change).
Additionally, Twitch’s terms of service are fairly explicit regarding age requirements, which should prevent COPPA issues. Twitch’s terms state the following about minors using Twitch:
2. Use of Twitch by Minors and Blocked Persons
The Twitch Services are not available to persons under the age of 13. If you are between the ages of 13 and 18 (or between 13 and the age of legal majority in your jurisdiction of residence), you may only use the Twitch Services under the supervision of a parent or legal guardian who agrees to be bound by these Terms of Service.
This statement is more than Google provided for its creators. This statement by Twitch explicitly means Twitch intends to protect its creators from COPPA and any other legal requirements associated with minors or “children” using the Twitch service. For creators, this peace of mind is important.
Unfortunately, Google offers no such peace of mind to creators. In fact, the whole way YouTube has handled COPPA is sloppy at best. If you are a creator on YouTube, you should seriously consider this a huge breach of trust between Google and you, the creator.
Mixer
Mixer is presently owned by Microsoft. I’d recommend caution using Mixer. Because Microsoft allows children 12 and under onto its ID system, it may end up in the same boat as YouTube. It’s probably only a matter of time before the FTC targets Microsoft and Mixer with similar actions.
Here’s what Mixer’s terms of service say about age requirements:
User Age Requirements
- Users age 12 years and younger cannot have a channel of their own. The account must be owned by the parent, and the parent or guardian MUST be on camera at all times. CAT should not have to guess whether a parent is present or not. If such a user does not appear to have a guardian present, they can be reported, so CAT can investigate further.
- Users aged 13-16 can have a channel, with parental consent. They do not require an adult present on camera. If they are reported, CAT will take steps to ensure that the parent is aware, and has given consent.
This looks great and all, but within the same terms of service area it also states:
Users Discussing Age In Chat
We do NOT have any rule against discussing or stating age. Only users who claim to be (or are suspected to be) under 13 will be banned from the service. If someone says they are under 13, it is your choice to report it or not; if you do report it, CAT will ban them, pending proof of age and/or proof of parental consent.
If someone is streaming and appears to be under 16 without a parent present, CAT may suspend the channel, pending proof of parental consent and age. Streamers under 13 have a special exception, noted [above].
If you’re wondering what “CAT” is, it stands for Community Action Team (AKA moderators) for Mixer. The above is effectively a “Don’t Ask, Don’t Tell” policy. It also means Mixer has no one actively policing the service for underage users, not even its CAT team. It also means that Mixer is aware that persons 12 and under are using Mixer’s services. By making the above statement, Mixer opens itself up to auditing by the FTC for COPPA compliance. If you’re considering using Mixer, be aware that this platform could end up in the same boat as YouTube sooner rather than later, given Microsoft’s size as a company.
Basically, Twitch’s Terms of Service are better written for creator peace of mind.
➡️ “What is ‘burden of proof’?”
When faced with civil legal circumstances, you are either the plaintiff or the defendant. The plaintiff is the party levying the charges against the other party (the defendant). Depending on the type of case, the plaintiff must establish burden of proof to show that the defendant did (or didn’t) do the act(s) alleged. The standard of proof also differs between a civil suit and a criminal suit.
Some cases require the plaintiff to take on the burden of proof to show the act(s) occurred. But, it’s not that simple for the defendant. The defendant may be required to bring both character witnesses and fact witnesses, which can, in effect, establish that the acts could not have occurred. Even though burden of proof is not explicitly required of a defendant, that doesn’t mean you won’t need to provide evidence to exonerate yourself. In the case of a civil FTC action, the FTC is the plaintiff and your channel will be the defendant.
The FTC itself can only bring civil actions against another party. The FTC will be required to handle the burden of proof to prove that your channel not only collected the alleged COPPA protected data, but that you have access to and remain in possession of such data.
However, the FTC can hand its findings over to the United States Department of Justice, which has the authority to file both civil and criminal lawsuits. Depending on where the suit is filed and by whom, you could face either civil or criminal penalties. It is assumed that the FTC will file its COPPA-related legal actions directly as civil suits… but that’s just an assumption. The FTC does have the freedom to request that the Department of Justice handle the complaint.
One more time, this article is not legal advice. It is simply information. If you need actual legal advice, you are advised to contact an attorney who can understand your specific circumstances and offer you legal advice for your specific circumstances.
↩︎
Rant Time: What is a Public Safety Power Shutoff?
Here’s where jurisprudence meets our everyday lives (and safety), and here is also where PG&E is severely deluded and fast becoming a menace. There is actually no hope for this company. Let’s explore.
California Fire Danger Forecasting
“Officials” in California (I’m not sure exactly which specific organization is being referred to here) predicted the possibility of high winds, which could spark wildfires. This happened earlier in the week of October 7 (or possibly earlier). As I said, these are “predictions”. Yet, as far as I can see, no strong winds have come to pass… a completely separate issue, but one heavily tied to this story.
Yet, PG&E has taken it upon themselves to begin powering off areas of Northern California in “preparation” for these “predictions”… not because of an actual wind event. If the high winds had begun to materialize, then yes, perhaps mobilize and begin the power shut offs. Did PG&E wait for this? No, they did it anyway.
What exactly is Public Safety?
In the context of modern society, pretty much everything today relies on electric power generation to operate our public safety infrastructure. This infrastructure includes everything from traffic lights to street lights to hospitals to medical equipment to refrigeration. All of these need power to function and keep the public safe. To date, we have come to rely on monopoly services like PG&E to provide these energy delivery services. Yet, what happens when the one and only thing PG&E is supposed to do is the one thing it can’t manage to do?
Granted, what PG&E has done is intentional, but the argument is, “Are the PG&E power outages in the best interest of public safety?” Let’s explore this even further.
PG&E claims that these power outages will reduce the possibility of a wildfire. Well, that might be true from a self-centered perspective of PG&E as a corporation. After all, they’ve been tapped several times for legal liability over recent wildfire events. They’ve even had to declare bankruptcy to cover those costs incurred as a result. We’ll come to the reason behind this issue a little bit later. However, let’s stay focused on the Public Safety aspect for the moment.
PG&E claims it is in the best public safety interest to shut down its power grid. Yet, let’s explore that rationale. Sure, this outage action might reduce the possibility of sparking from a power line, but what it doesn’t take into account is the loss of all of the other normal, everyday public safety mechanisms which have also had their power cut. As I said: street lights, traffic lights, hospitals, medical equipment, 911 services, airports and refrigeration.
The short term effect of shutting the power off might save some lives (based on a fire prediction that might not even come true), but then there are other lives which might be lost as a result of the power being shut off for days. Keep in mind that PG&E claims it might take up to 5 days to restore power after this scheduled power off event. That’s a long time to be without standard public safety mechanisms (even setting aside the high wind advisory itself).
If PG&E has been found responsible for wildfires, then why aren’t they responsible for these incidental deaths that wouldn’t have occurred if the power had remained on? Worse, what about medical equipment and refrigeration? For people who rely on medical equipment to sustain their lives, what happens to them? How many of them could die from this outage? If it truly takes 5 days to get the power back on, what about the foods being sold at restaurants and grocery stores? Can you trust that the food stayed properly refrigerated the whole time? If you do trust it, you might get sick… very, very sick… as in food poisoning sick. Who is responsible for that? The retailer or the restaurant?
Sure, I guess to some degree it is the retailer / restaurant. They should have thrown the food out and replaced it with fresh foods. Even then, perhaps the distributors were also affected by the outage. We can’t really know how far the food spoilage chain might go. At the root of all of this, though, it is PG&E who chose to cut the power. How many people might die as a result of PG&E force shutting off the power grid versus how many might potentially die if a wildfire ignites?
I’ve already heard there have been a number of traffic accidents because the power has been cut to traffic lights. It’s not a common occurrence to have the power out at intersections. When it does happen, many motorists don’t know the rules… and worse, they don’t pay enough attention to follow them. They just blast on through the intersection. Again, who is responsible for this? The city? No. In this case, it is truly PG&E’s responsibility. The same goes for food poisoning as a result of the lack of refrigeration. What about the death of someone because their medicine spoiled without refrigeration?
Trading One Evil For Another
Truly, PG&E is playing with fire. They are damned if they do and damned if they don’t. The reality is, either way, shutting off electricity or leaving it on, PG&E risks the public’s safety. They are simply trading one set of public safety risks for another. Basically, they are “robbing Peter to pay Paul.” By trying to thwart the possibility of setting an accidental wildfire, the outage can cause traffic accidents and deaths in hospitals, create food poisoning circumstances, and the list goes on and on. When there is no power, there is real danger. Sometimes immediate danger, sometimes latent danger (food poisoning) which may present weeks later.
The reality is, it is PG&E who is responsible for this. PG&E “thinks” (and this is the key word here) that they are being proactive to prevent forest fires. In reality, they’re creating even more public safety issues by cutting the power off indiscriminately.
Cutting Power Off Sanely?
The first problem was in warning the public. PG&E came up with this plan on far too short notice. The public was not properly notified in advance. If this outage scenario was on the table of options for PG&E to pursue during the wildfire season, this information should have been disseminated early in the summer. People could have had several months to prepare for this eventuality. Instead of notifying months ahead, they chose to notify at a moment’s notice, forcing a cram situation in which everyone floods the stores and gas stations trying to keep their homes in power and prepare. At the bare minimum, PG&E should be held responsible for inciting a public frenzy. With proper planning and notification, people could have had several months’ notice to buy generators, stock up on water, buy a propane fridge, buy a propane stove, prep their fridges and freezers, and so on.
With a propane fridge, many people can still have refrigeration in their home during an extended (up to 7 day) power outage. This prevents spoilage of both foods and medicines. Unfortunately, when it comes to crunch-time notices, supplies and products run out quickly. Manufacturers don’t build products for crunch time; they build enough for a limited number of people to buy over a longer period of time. Over several months, these manufacturers could have ramped up production for such a situation, but that can’t happen overnight. PG&E was entirely remiss with this notification. For such drastic, knee-jerk actions affecting public safety, PG&E needs to notify the public months in advance of the possibility. This is public menace situation #1.
Indiscriminate Power Outages
Here’s a second big problem with PG&E’s outage strategy. PG&E can’t pick and choose its outages. Instead, its substations cover whole swaths of territory which may include such major public safety infrastructure as traffic lights and hospitals, let alone restaurants and grocery stores whose food is likely to spoil.
If PG&E could sanely turn off power to only specific businesses and residences without cutting power to hospitals, cell phone infrastructure, 911 and traffic infrastructure, then perhaps PG&E’s plan might be in better shape. Unfortunately, PG&E’s outage strategy is a sledgehammer approach. “Let’s just shut it all down,” I can almost hear them say. Dangerous! Perhaps even more dangerous in the long term than the wildfire it’s meant to prevent. Who’s to say? This creates public menace situation #2.
Sad Infrastructure
Unfortunately, this whole situation seems less about public safety and more about CYA. PG&E has been burned (literally) several times over the last few wildfire seasons. In fact, they were burned so hard, both literally and monetarily, that this is less about actual public safety and more about covering PG&E’s legal butt. Even then, as I said above, PG&E isn’t without legal liability simply because they decided to cut the power to thwart a wildfire. This time the liability might not be for causing a wildfire, but for incidental deaths at darkened intersections, deaths in hospitals and homes due to medical equipment failure, and illness caused by food spoilage in restaurants, grocery stores and homes… or by a lack of medical care in the home.
The reality behind PG&E’s woes is not tied to its supposedly proactive power outage measures, it is actually tied to its aging infrastructure. Instead of being proactive and replacing its wires to be less prone to sparking (what it should have been doing for the last 10 years or more), it has done almost nothing in this area. Instead of cutting back brush around its equipment, it has resorted to turning the power off. Its liability in wildfires is almost directly attributable to relying on infrastructure created and installed decades ago by the likes of Hetch Hetchy (and other early electric infrastructure builders) back in the early 1900s. I’m not saying that every piece of this infrastructure is nearly 100 years old, but some of it is. That’s something to think about right there.
PG&E does carry power from Hetch Hetchy’s generation facilities to its end users but, more importantly, it does so through PG&E’s monopoly electric lines. PG&E also generates its own electricity from its own facilities and carries power from other generation providers like SVCE. The difficulty with PG&E is its monopoly in end user delivery. No other company is able to deliver power to PG&E’s end user territory, leaving consumers with only ONE commercial choice to power their home. End users can opt to install their own in-home energy generation systems such as solar, wind or even diesel generators (when the city allows), but that’s not a “commercial” provider like PG&E.
Because PG&E has the market sewn up, everyone who uses PG&E is at their mercy to provide solid continuous power… that is, until they don’t. This is public menace situation #3.
Legal Troubles
I’m surprised that PG&E has even decided to use this strategy considering its risky nature. To me, this forced power outage strategy seems as big a liability in and of itself as it does against wildfires.
PG&E is assigned one task: deliver power. If it can’t do this, then PG&E needs to step aside and let another, more experienced company replace PG&E’s dominance in power delivery. If PG&E can’t even be bothered to update its aging equipment, which is at the heart of this entire problem, then it definitely needs to step aside and let a new company start over. Sure, a new company will take time to set it all up, but once it’s going, PG&E can quietly wind down and go away… which may happen anyway considering both its current legal troubles and its bankruptcy.
The state should, likewise, allow parties significantly impacted by this forced power outage (i.e., through death or injury) to bring lawsuits against PG&E for its improperly planned and indiscriminately executed power outage. Except, because PG&E is still in bankruptcy court, consumers who are wronged by this outage must stand in line behind all of those who are already in line at PG&E’s bankruptcy court. I’m not even sure why the bankruptcy judge would have allowed this action by PG&E while still in bankruptcy. Considering the possibility of significant additional legal liabilities incurred by this forced outage, the bankruptcy judge should have foreseen this and denied the action. It’s almost like PG&E execs are all, “F-it, we’ll just turn it all off and if they want to sue us, they’ll have to get in line.” This malicious level of callous disregard for public safety needs much more state and legal scrutiny. The bankruptcy judge should have had a say over this action by PG&E. That they didn’t makes this public menace situation #4, truly making PG&E an official public safety menace and a nuisance.
Updated 10/11/2019 — Clarification
I’ve realized that while one point was made in the article, it wasn’t explicitly called out. To clarify this point, let’s explore. Because PG&E acted solely on a predicted forecast and didn’t wait for the wind event to actually begin, PG&E’s actions egregiously disregarded public safety. As I said in the main body of the article above, PG&E traded one “predicted” public safety event for actual, real, incurred public safety events. By proceeding to shut down the power WITHOUT the predicted wind event manifesting, PG&E acted recklessly towards public safety. As a power company, their sole reason to exist is to provide power and thereby maintain that public safety. By summarily shutting down power, not only did they fail to provide the one thing they are in business to do, they shut the power down for reasons other than actual fire safety. As I stated above, this point is the entire reason that PG&E is now an official menace to the public.
↩︎
Can I use my Xbox One or PS4 controller on my iPhone?
This is a common question regarding the two most popular game controllers to have ever existed. Let’s explore.
MFi Certification
Let’s start with a little history behind why game controllers have been a continual problem for Apple’s iOS devices. The difficulty comes down to Apple’s MFi controller certification program. Since MFi’s developer specification release, not many controller makers have chosen to adopt it. The one notable exception is the SteelSeries Nimbus controller. It’s a fair controller: it holds well enough in the hand and has okay battery life, but it’s not that well made. It does sport a Lightning port so you can charge it with your iPhone’s charger, however. That’s little consolation, though, when you actually want to use an Xbox One or PS4 controller instead.
Because Apple chose to rely on its own MFi specification and certification system, manufacturers would need to build a controller that satisfies that MFi certification. Satisfying the requirements of MFi and getting certified likely requires licensing technology built by Apple. As we know, licenses typically cost money paid to Apple for the privilege of using that technology. That’s great for Apple, not so great for the consumer.
Even though the SteelSeries Nimbus is by no means perfect, it really has become the de facto MFi controller simply because no other manufacturers have chosen to adopt Apple’s MFi system. And why would they?
Sony and Microsoft
Both Sony and Microsoft have held (and continue to hold) the market with the dominant game controllers. While the SteelSeries Nimbus may have become the de facto controller for Apple’s devices, simply because there is nothing else really available, the DualShock and the Xbox One controllers are far and away better controllers for gaming. Apple hasn’t yet been able to break into the console market, even as much as it has tried with the Apple TV. Game developers just haven’t embraced the Apple TV in the same way they have the Xbox One and the PS4. The reason is obvious: the Apple TV, while reasonable for some games, simply does not offer the same level of graphics and gaming power as an Xbox One or PS4. It also doesn’t have a controller built by Apple.
Until Apple gets its head into the game properly with a suitably named system actually intended for gaming, rather than general purpose entertainment, Apple simply can’t become a third console. Apple keeps trying these roundabout methods of introducing hardware to usurp, or at least insert itself into, certain markets. Because of this subtle, roundabout approach, it just never works out. In the case of MFi, it hasn’t worked out too well for Apple.
Without a controller that Apple has built themselves, few people see the Apple TV as anything more than a TV entertainment system with built-in apps… even if it can run limited games. The Apple TV is simply not seen as a gaming console. It doesn’t ship with a controller. It isn’t named appropriately. Thus, it is simply not seen as a gaming console.
With that said, the PS4 and the Xbox One are fully seen as gaming consoles and prove that with every new game release. Sony and Microsoft also chose to design and build their own controllers based on their own specifications; specifications that are intended for use on their consoles. Neither Sony nor Microsoft will go down the path to MFi certification. That’s just not in the cards. Again, why would they? These controllers are intended to be used on devices Sony and Microsoft make. They aren’t intended to be used with Apple devices. Hence, there is absolutely zero incentive for Microsoft or Sony to retool their respective game controllers to cater to Apple’s MFi certification whims. To date, this has yet to happen… and it likely never will.
Apple is (or was) too caught up in itself to understand this fundamental problem. If Apple wanted Sony or Microsoft to bend to its will, Apple would have to pay Sony and Microsoft to spend their time, effort and engineering to retool their console controllers to fit within the MFi certification. In other words, not only would Apple have to entice Sony and Microsoft to retool their controllers, it would likely have to pay them for that privilege. And so, here we are… neither the DualShock nor the Xbox One controller supports iOS via MFi certification.
iOS 12 and Below
To answer the above question, we have to observe Apple’s stance on iOS. As of iOS 12 and below, Apple chose to rely solely on its MFi certification system to certify controllers for use with iOS. That left few consumer choices. I’m guessing that Apple somehow thought that Microsoft and Sony would cave to their so-called MFi pressure and release updated controllers to satisfy Apple’s whims.
Again, why would either Sony or Microsoft choose to do this? Would they do it out of the goodness of their own heart? Doubtful. Sony and Microsoft would ask the question, “What’s in it for me?” Clearly, for iOS, not much. Sony doesn’t release games on iOS and neither does Microsoft. There’s no incentive to produce MFi certified controllers. In fact, Sony and Microsoft both have enough on their plates supporting their own consoles, let alone spending extra time screwing around with Apple’s problems.
That Apple chose to deny the use of the DualShock 4 and the Xbox One controllers on iOS was clearly an Apple problem. Sony and Microsoft couldn’t care less about Apple’s dilemmas. Additionally, because both of these controllers dominate the gaming market, even on PCs, Apple has simply lost out when sticking to their well-intentioned, but misguided MFi certification program. The handwriting was on the wall when they built the MFi developer system, but Apple is always blinded by its own arrogance. I could see that MFi would create more problems than it would solve for iOS when I first heard about it several years ago.
And so we come to…
iOS 13 and iPhone 11
With the release of iOS 13, it seems Apple has finally seen the light. They have also realized both Sony and Microsoft’s positions in gaming. There is simply no way that the two most dominant game controllers on the market will bow to Apple’s pressures. If Apple wants these controllers certified under its MFi program, it will need to take steps to make that a reality… OR, they’ll need to relax this requirement and allow these two controllers to “just work”… and the latter is exactly what Apple has done.
As of the release of iOS 13, you will be able to use both the Xbox One controller (Bluetooth version) and the PS4’s DualShock 4 controller on iOS. Apple has realized its certification system was simply a pipe dream, one that never got realized. Sure, MFi still exists. Sure, iOS will likely support it for several more releases, but eventually Apple will obsolete it entirely or morph it into something that includes Sony and Microsoft’s controllers.
What that means for the consumer is great news. As of iOS 13, you can now grab your PS4 or Xbox One controller, pair it to iOS and begin gaming. However, it is uncertain exactly how compatible this will be across iOS games. Older games that only supported MFi controllers may not recognize these new controllers until they are updated for iOS 13. The problem here is that many projects have been abandoned over the years and their respective developers are no longer updating those apps. That means you could find your favorite game doesn’t work with the PS4 or Xbox One controller if it is now abandoned.
Even though iOS 13 will support the controllers, it doesn’t mean that older games will. There’s still that problem to be solved. Apple could solve that by folding the controllers under the MFi certification system internally to make them appear as though they are MFi certified. I’m pretty sure Apple won’t do that. Instead, they’ll likely offer a separate system that identifies “third party” controllers separately from MFi certified controllers. This means that developers will likely have to go out of their way to recognize and use Sony and Microsoft’s controllers. Though, we’ll have to wait and see how this all plays out in practice.
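For developers, here’s a minimal sketch of how controller detection generally works through Apple’s GameController framework, assuming the newly supported Xbox One and DualShock 4 controllers surface through the same GCController / extended gamepad path that MFi controllers already use. Treat it as an illustration, not a statement of how Apple has wired this up internally:

```swift
import GameController

// Watch for any controller (MFi, Xbox One or DualShock 4) connecting over Bluetooth.
NotificationCenter.default.addObserver(forName: .GCControllerDidConnect,
                                       object: nil, queue: .main) { note in
    guard let controller = note.object as? GCController,
          let gamepad = controller.extendedGamepad else { return }

    print("Connected: \(controller.vendorName ?? "Unknown controller")")

    // React to button and thumbstick input through the extended gamepad profile.
    gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
        if pressed { print("A / Cross pressed") }
    }
    gamepad.leftThumbstick.valueChangedHandler = { _, x, y in
        print("Left stick moved to (\(x), \(y))")
    }
}
```

If Apple does end up exposing these controllers through a separate “third party” mechanism, as speculated above, detection code along these lines would need to be adjusted accordingly.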
Great News
Even still, this change is welcome news to iOS and tvOS users. This means that you don’t have to go out and buy some lesser controller and hope it will feel and work right. Instead, you can now grab a familiar controller that’s sitting right next to you, pair it up and begin playing on your iPad.
This news is actually more than welcome, it’s a necessity. I think Apple finally realizes this. There is no way Sony or Microsoft would ever cave to Apple’s pressures. In fact, there was no real pressure at all. Ultimately, Apple shot itself in the foot by not supporting these two controllers. Worse, by not supporting them, Apple kept the Apple TV from becoming the gaming system it had hoped for. Instead, it’s simply a set-top box that provides movies, music and limited live streaming services. Without an adequate controller, it simply couldn’t become a gaming system.
Even the iPad and iPhone have been suffering without good solid controllers. Though, I’m still surprised that Apple itself hasn’t jumped in and built their own Apple game controller. You’d think that if they set out to create an MFi certification system that they’d have taken it to the next step and actually built a controller themselves. Nope.
Because Apple relied on third parties to fulfill its controller needs, it only really ever got one controller out of the deal. A controller that’s fair, but not great. It’s expensive, but not that well made. As I said above, it’s the SteelSeries Nimbus. It’s a mid-grade controller that works fine in most cases, but cannot hold a candle to the PS4’s or the Xbox One’s controller for usability. Personally, I always thought of the Nimbus controller as a “tide me over” controller until something better came along. That never happened. Unfortunately, it has taken Apple years to own up to this mistake. A mistake that they’ve finally decided to rectify in iOS 13.
A little late, yes, but well done Apple!
↩︎
Security Tip: Apple ID locked for security?
This one also doubles as a Rant Time. Having my Apple ID account locked is an issue I face far too often with Apple. Perhaps you do, too? In my case, no one knows my account ID. Yet, I face having to unlock my account frequently because of this issue. I personally think Apple is causing this issue. Let’s explore.
Unlocking an Apple ID
As with far too many things, Apple’s unlocking system is unnecessarily complex and fraught with digital peril after-the-fact… particularly if you enable some of Apple’s more complex security features (i.e., Two Factor authentication).
One of the things Apple has yet to get correct is properly securing its Apple ID system from intrusion attempts. That doesn’t mean your password is unsafe. What it means is that your account is vulnerable to malicious lockout attempts targeting your account ID. But, there’s an even bigger risk in Apple’s ID system… keying your credentials to an email address. I’ll come back to this practice a little later.
Once your account becomes locked, a number of major problems present themselves. The first immediate problem is that you need to remember your security questions OR face changing your password (assuming standard security). If you use Apple’s two-factor authentication, you face even more problems. If you don’t use two-factor and you’ve forgotten your security questions, you can at least contact Apple Support for help regaining access to your account. On the other hand, if you’ve forgotten the security information you set up when enabling two-factor, you’re screwed. Apple can’t help you once two-factor is set up… one of the major reasons I have chosen not to use two-factor at Apple. Two-factor IS more secure, but by using it you risk losing your Apple ID if you lose a tiny bit of information. That risk is far too great. For all of the “ease of use” Apple is known for, its Apple ID system is far too complex.
The second problem is that once you do manage to get your account unlocked, you are then required to go touch EVERY SINGLE DEVICE that uses your account ID and reenter your password AGAIN. This includes not only every Apple device, but every device utilizing Apple services, such as Alexa’s account linking for Apple Music on the Amazon Echo. If you use Apple Music on an Android, you’ll need to go touch that too. It’s not just the locking and unlocking of your account, it’s the immense hassle of signing into your Apple ID on EVERY SINGLE DEVICE. Own an Apple Watch? Own an Apple TV? Own a HomePod? Own an iPad? Own a MacBook? Use Apple Music on your Android? You’ll need to go to each and every one of these devices and touch them.
On the iPhone, it’s particularly problematic. You’ll be presented with at least 3 login prompts simultaneously, all competing with one another on the screen. Later, you’ll be presented with a few more stragglers over the course of 30 minutes or an hour. Apple still can’t seem to figure out how to use a single login panel to authenticate the entire device and all of its services. Instead, it must request passwords for each “thing” separately. So many prompts pop up so fast that you have no idea which one is which, because none of them are labeled with the service they’re attached to. You could even be giving your account ID and password to a random nefarious app on your device. You’d never know. If you own an Apple Watch, you’ll have to re-enter it separately for that device as well. Literally every single device that uses your Apple ID must be touched after unlocking your Apple ID. Unlike a Wi-Fi password, which you enter once and share across every device you own, Apple can’t seem to let us enter the Apple ID password once and have it populate ALL of our devices. No. We must touch each and every device we own.
Worse, if you don’t go touch each and every one of these devices immediately upon unlocking your account, you risk having your account locked again almost immediately by just one of these devices. Apple’s ID system is not forgiving if even one of these devices hasn’t logged in properly after a security lock. You could face being locked out just a few hours later.
So the rant begins…
Using Email Addresses as Network IDs
Here’s a security practice that needs to stop. Apple, I’m l👀king at you! Using email addresses as IDs was the “norm” during the mid-to-late 00s and is still in common practice throughout much of the Internet industry. It is, however, a practice that needs to end. Email addresses are public entities: easily seen, easily found and, most easily, attacked. They are NOT good candidates for use as login identifiers. Login identifiers need to use words, phrases or information that are not generally publicly accessible or known. Yes, people will continue to use their favorite pet’s name or TV show or girlfriend’s name as login IDs. At least those are only found by asking the person involved. Email addresses are not required when developing login systems. You can tie the email address to the account via its profile, but it SHOULD NOT be used as a login identifier. The sketch below illustrates the idea.
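As a rough illustration (my own sketch, not a description of how Apple actually structures Apple IDs), a login system can key authentication on a private, user-chosen identifier while the email address lives only in the profile:

```swift
import Foundation

// Hypothetical account model: the email lives in the profile and is never the login key.
struct Account {
    let loginID: String       // private, user-chosen identifier; not derivable from public info
    let passwordHash: String  // store a hash, never the plaintext password
    var profileEmail: String  // used for receipts and recovery, NOT for signing in
}

// Authentication looks up the private login identifier rather than a public email address,
// so strangers can't trigger lockouts just by typing someone's email into a login panel.
func authenticate(loginID: String, passwordHash: String,
                  accounts: [String: Account]) -> Account? {
    guard let account = accounts[loginID], account.passwordHash == passwordHash else {
        return nil
    }
    return account
}
```

With a design like this, knowing someone’s email address tells you nothing about their login identifier, so drive-by lockout attempts against the public address simply never reach the account.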
When an Apple ID account gets continually locked, Apple Support suggests changing the login ID, but that’s not going to change anything. You’re simply moving the crap from one toilet to another. Crap is still crap. The problem is that the new ID is still an email address and, to reiterate, email addresses are easily seen, found and attacked. What I need is a login ID of my own choosing that is not an email address. This way, random folks can’t go to Apple’s iCloud web site and enter an email address intentionally to lock accounts. If I can choose my own login identifier, then unless I give that information out explicitly to someone, it’s not guessable AT ALL and far less likely to be locked out by random folks entering junk into Apple’s web-based login panels.
Oh, and make no mistake, it’s not people on an iPhone or iPad doing this. It’s people going to Apple’s web site and doing it there. There is no other place where it can be happening. And yet, we unsuspecting users are penalized by having to spend half an hour finding and reentering passwords on all our devices because someone spent 5 minutes at Apple’s web site entering random information incorrectly 3 times. Less than 5 minutes of effort triggers at least 30 minutes of work unlocking the account and reentering passwords on many devices and services. And then there are the stragglers that continue to prompt for at least an hour or two after… all because Apple refuses to secure its own web site login panels from this activity. This is not my problem, Apple, it’s yours. You need to fix your shit and that’s something I absolutely cannot do for you.
Notifications
Apple prides itself on its push notification system, yet it can’t even use it to alert users of potential unusual activity on its very own Apple IDs. If someone is incorrectly trying passwords on a web site, Apple knows where that vector is. So then, tell me about it, Apple. Send me an alert that someone is trying to log into the Apple Store or the iCloud site. Inform me that my ID is being used in a place that seems suspect. You know the IP address where the user is coming from. Alert me. Google does. You can, too.
Additionally, Apple stores absolutely NO information about bad login attempts. If you attempt to contact Apple Support about your account activity, they don’t have access. They can’t even tell you what triggered your Account ID lock. This level of information is the absolute bare minimum a company using centralized login IDs must offer to its users. If Apple can’t even bother to help you find out why and where your account was locked, why would you trust Apple to store your information? Apple puts all its cards on its functionality side, but it can’t put a single card on this side of the security fence? What the hell, Apple?
Apple Locking Accounts
I also firmly believe that Apple is intentionally locking accounts. When these lockouts occur, it’s not me doing it. I’m not out there entering my account credentials incorrectly. It’s not my devices, either; my devices ALL have my correct password set up. This means that either someone has guessed my email address or, more likely, Apple is locking the account internally, and it’s not incorrect password attempts at all. The more it happens, the more I believe Apple is forcing this. I don’t know why they would want to do this, but I do believe they are. Maybe it’s a disgruntled employee who just randomly feels the need to screw with Apple’s users?
Apple’s Response
I’ve called Apple Support at least twice regarding this issue and gotten absolutely nowhere. They can’t and, more importantly, won’t help with this issue. They claim to have no access to security logs. They can’t determine where, when or why an account was locked. In fact, I do believe Apple does have access to this information, but I believe Apple Support has been told not to provide any information.
If Apple Support can’t give out this information, then it should be offered through the Apple ID account site (appleid.apple.com). This site should not only provide the ability to manage your Apple ID, it should also store and offer security information about when and where your ID was used (and where it was being used when it locked). Yet, Apple offers NOTHING. Not a single thing. You can log into this site, but there are no such tools offered to the user. Apple exposes nothing about my account use to me. Google, on the other hand, is very transparent. So transparent, in fact, that they send “unusual activity” alerts whenever your ID is used in an unusual way. Google errs on the side of over-communication. Yet, Apple hasn’t done shit in this area and errs on the side of absolute ZERO communication.
Get your act together Apple. Your Apple ID system sucks. Figure it out!
↩︎
Apple Cancels AirPower charge mat
While I realize that this “news” is a little old at this point (announced March 29th), the intention of this article is not to report on the announcement, but to analyze its ramifications for Apple. Let’s explore.
Think Different
Apple used this slogan for a time when it was touting its innovative approach to the creation of its devices and systems. However, Apple pretty much abandoned this slogan after Steve Jobs’s passing.
Since the loss of Jobs, Apple’s innovation has waned, which has left industry pundits with a conundrum. Do these Apple expert journalists continue to be fanboys for this brand and “love everything Apple” or do they finally drop that pretext and begin reporting the realities of the brand?
I’ve never been an Apple “fanboy” in the sense that I “automatically love everything Apple”. There are too many legitimate journalists and social media influencers who already follow that trend. However, I won’t name any names, iJustine. Whoops. If you’re another of these people, you know who you are.
Think The Same
In recent years, Apple has been trailing its competition with its phone and other tech ideas; ideas that have already been done, sometimes better than Apple does them. For example, the iPhone X is an iPhone version of the Galaxy Note 8, and the Note 8 released months earlier than the iPhone X. The wired EarPods were simply Apple’s version of a similar Bose earbud. And… the AirPower would simply have been an Apple version of a Qi wireless charging mat.
As you can see, Apple’s most recent innovations aren’t innovations at all. Even the AirPods, while wireless, are not new. While they do sound pretty good, they leave something to be desired in long-term wearability and comfort. They also take way too long to connect, when they decide to connect at all (at least the gen 1 AirPods). These are iterations of products that have already existed on the market.
The iPhone 1 demonstrated actual innovation. No one had created a smartphone like the iPhone when it arrived. Sure, some handsets had limited apps and a few had a touch screen, but Apple took the handheld phone to a whole new level. The first iPad was also quite innovative. No comparable tablet was on the market at the time, and it offered something never before seen. Just look at the tablet market today!
Unfortunately, the innovation that was once so prevalent at Apple has evaporated since Jobs's untimely death.
Qi
Inductive wireless charging is nothing new. It's been a staple of Braun's rechargeable toothbrushes since the early 90s. Bringing inductive charging to mobile devices was simply the next logical step. Samsung did that by backing the Qi standard, which was established in 2008, and by shipping its own Qi wireless charging mats and Qi-capable handsets years before Apple.
With the introduction of the iPhone X in November of 2017 (and the other Apple phone models released that same year), Apple finally added induction charging to its handsets. That's nine years after Qi became a thing and years after Samsung had it on its handsets. There's nothing at all innovative about wireless charging on an Apple device. Yes, it may have been a "most requested" feature, but it certainly was not innovative or even new. If anything, Apple decided it was time to fill a technology gap on its mobile devices… a gap it had refused to fill on earlier phones. We won't get into the whys of it all (ahem… Samsung).
Alongside its iPhone X announcement, Apple also announced a new product called AirPower. This product would be a rival inductive charging mat to the Qi charging mats already on the market. The primary iterative difference between AirPower and existing Qi charger bases is that the AirPower would output more power to wirelessly charge the iPhone much faster… perhaps even faster than a Lightning cable. We'll never know now. The AirPower announcement also showed three devices charging simultaneously, including an AirPods case.
Unfortunately, Apple wasn't able to release this product at the same time as the iPhone X. Apple said it would release the charging mat sometime in mid-to-late 2018. That window came and went with no release and no explanation. By the end of March 2019 (nearly a year and a half after Phil Schiller announced it to the public), Apple officially pulled the plug on the AirPower.
Everyone reading that announcement should take it as a sign of problems within Apple. And… that brings us to the crux and analysis portion of this article.
The Apple Bites
The cancellation of the AirPower signifies a substantial problem brewing within Apple's infinite circle. If Apple's engineers cannot manage to design and build a functional wireless charging base, a relatively simple device using technology that's been around since the 1990s and in the mobile phone market for roughly a decade, how can we trust Apple to provide innovative, functional products going forward?
This cancellation is a big, big deal for Apple's reputation. If Apple cannot build a reasonably simple device after nearly a year and a half, what does that say about Apple's current engineering as a whole?
Assuming Apple's internal engineers truly were incapable of producing this product in-house, Apple could have farmed the design out to a third-party company (e.g., Samsung or Belkin) and had that third party design and build the product to Apple's specs. This product didn't need to die on the vine, let alone be cancelled outright.
Instead of outright abandoning the product, Apple should have brought it to market in a different form. As I said, cancelling it outright signals much deeper problems within Apple. This is one of the first times I've seen Apple publicly announce a vapor product and then cancel it (albeit over a year later). It's a completely surprising, disappointing, unusual and highly unprecedented move by Apple… especially considering that Apple's newest devices rely on this unreleased charger for one of their headline features. I guess this is why Apple has always been so secretive about product announcements in the past. If you cancel an unannounced product, no one knows. When you cancel a publicly announced product, it tarnishes your reputation… particularly when functional products already exist on the market from other manufacturers (and competitors) and when the product is rather simple in nature. That's a huge blow to Apple's "innovative" reputation.
AirPods 2
The AirPower cancellation is also particularly disappointing and disheartening on the heels of the announcement of the AirPods 2 wireless charging case. The lack of the AirPower mat is a significant blow to one of the biggest features of the newest generation of AirPods. Effectively, without AirPower, the AirPods 2 are basically the same as the gen 1 AirPods, except that the AirPods 2 offer hands-free "Hey Siri" support (and a better-placed LED charge light).
The one feature many people really looked forward to with the new AirPods is now basically unavailable. Sure, you can charge the AirPods 2 case on a standard Qi wireless charger, but at a much slower rate than via the Lightning port. You don't want to sit around waiting on a slow Qi charger to top off the AirPods case. No, you're going to plug it in to make sure you can walk out the door with a fully charged case. The case already charges slowly enough on a Lightning cable; there's no reason to make it charge even slower by using a Qi charger. That was the sole reason for the AirPower to exist… to charge at much faster rates. Without AirPower, the reason to charge wirelessly has more-or-less evaporated.
Of course, you can also buy a wireless charging case for the gen 1 AirPods, but what's the point in that? With the AirPower cancelled, you pay Apple's brutal $80 price tag for the case and then still have to invest in a Qi charger and live with its very slow charge speed. No thanks. You also get no other benefit out of placing your gen 1 AirPods earbuds into a gen 2 wireless charging case for that $80. You might as well put that $80 toward a new set of AirPods 2, even though the AirPods 2 cost $199 (with the wireless charging case) versus $159 for the gen 1 AirPods (without a wireless charging case).
Of course, in typical Apple form, they also offer the AirPods 2 without a wireless charging case for $159, the same price as the gen 1 AirPods. But this is all diversionary minutiae.
Analysis
Apple's level of innovation has been both flagging and lagging for several years. With the AirPower cancellation, it should now be crystal clear, not only to journalists and analysts but also to Apple's fanboys, that Apple's luster has officially worn off. Apple's once strong "reality distortion field" is now a distant memory.
Even the iPhone X isn't faring well in terms of durability of design just slightly over a year after its introduction. I've seen several people report Face ID failing over time, as well as other hardware problems on this phone model. A premium phone at a premium price tag should hold up longer than this. Arguably, the iPhone X is also one of Apple's ugliest phones ever made, with that stupid, unsightly "notch" covering up a portion of that expensive OLED screen.
It seems the iPhone 8 design (based on the iPhone 7 case design) is faring much better than the iPhone X. Even the iPhone 7, which Apple still sells, holds up better. That should also be an indication of Apple's current practical level of design. Of course, the problems showing up in the iPhone X could simply reflect there being more iPhone Xs in circulation than iPhone 8s. Still, the iPhone X seems to turn up in repair shops at a disproportionate rate compared to the iPhone 8. That says something about the build quality and durability (or lack thereof) of the iPhone X's design at that premium price tag.
Apple now needs to pull a rabbit out of a hat very soon to prove it still has the chops not only to innovate and provide high-quality goods, but to be first to the table with a new product idea. Otherwise, it should forever hold its peace and accept becoming an underdog in the tech industry. That doesn't mean Apple won't continue to sell products. It doesn't mean Apple won't design products. It does mean that the "fanboy" mentality so many previously held toward Apple's products should finally evaporate, just as Apple's innovation has. Before the AirPower cancellation announcement, we only had a hunch that Apple's design wasn't up to par. With the cancellation, we finally have confirmation.
Eventually, everyone must take off their rose-colored glasses and see things as they really are at Apple. With this article, I hope we're finally at that point.
Rant Time: Pizza Hut “Service Fee”?
If you're wondering what Pizza Hut's "Service Fee" is, you're not alone. I was wondering this myself on my last visit to Pizza Hut. Let's explore.
Update for November 2020
The Pizza Hut that was formerly across from the United States Post Office in Cupertino is now closed. I drove by there last night. I don't know when it closed, but it is no longer open. I suppose either COVID or this "Service Fee" business did it in. I'm not sad one bit. If a business can't operate in a fair and equitable manner, then it deserves closure.
Service Fee
Apparently, some restaurants have found it hard to continue doing business in California. In response, some of these restaurants have tried various tactics to raise their prices without appearing to raise their prices. I know, it doesn't make sense to me either. But there it is.
Toward that goal, some restaurants have instituted add-on fees as new line items on the bill. For example, The Counter (a hamburger chain) has opted to add an "optional" service fee. This fee is meant to offset the higher wage costs they must pay while allowing their menu prices to remain competitive with other chains. Except it doesn't actually keep the cost of the food competitive.
Pizza Hut appears to have latched onto this slippery-slope approach with the "Service Fee" on its bills.
Confused
Even the staff taking orders don't really know what this fee is, who collects it, or how to properly describe it. However, they do call it out when reading back the total cost of the bill.
When I placed my order, the waitperson misrepresented it as a State of California fee… meaning that the state of California was collecting this fee through the restaurant. As far as I know, the only state-mandated charge is sales tax. I've ordered from plenty of other restaurants and paid no such "Service Fee" on top of state-mandated taxes.
No, this cashier wasn't just confused; she had no idea what the fee was even for and clearly hadn't been trained to answer the question.
Money Collected Versus What?
While I can't speak specifically to the legality of this "fee", it doesn't seem all that legal to me, particularly when the cashier misrepresents it. As far as I know, a business adding a line item and collecting a fee must provide some kind of product or service for that fee. Otherwise, it's fraud. I can tell you that my takeout order arrived bagged without plates, utensils or condiments. If the fee was meant to cover the takeout portion, they clearly didn't offer any setup for my food. I also ordered pasta, which requires a utensil.
It's clear that this "Service Fee" is a price-gouging attempt by Pizza Hut to rake in more money while providing nothing in return.
High Percentage
Here’s the kicker on my bill. The “Service Fee” was actually higher than state sales tax. State tax on my order was $2.08 and Pizza Hut’s “Service Fee” was $2.10 (exactly 10% of the $20.98 subtotal).
Then she presented me with a credit card receipt that prompted for a tip. I gave $1. That mandated $2.10 service fee covered the rest of what I'd normally tip. I usually give up to 10% on takeout, but that was already collected via their "Service Fee". In effect, between the fee and my $1, Pizza Hut extracted nearly a 15% tip on a takeout order.
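For anyone who wants to check that math, here is the quick arithmetic using the receipt numbers above (a back-of-the-envelope check, nothing more):

\[
\frac{2.10}{20.98} \approx 0.100 \;(10\%)
\qquad
\frac{2.10 + 1.00}{20.98} \approx 0.148 \;(\text{nearly } 15\%)
\]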
Last Visit
Ultimately, this will be the last time I do business with Pizza Hut in California. Not only are they now charging customers fees they haven't earned, the pizza sauce just wasn't tasty. I simply won't go back to this restaurant to get swindled over low-quality pizza.
We all know what Google is, but what is COPPA? COPPA stands for the Children's Online Privacy Protection Act.