Random Thoughts – Randocity!

Is Google running a Racket?

Posted in botch, business, california, corruption, Uncategorized by commorancy on March 16, 2020

In the 1930s, we had crime syndicates that would shake down small business owners for protection money. This became known as a "Racket". These mob bosses would use coercion and extortion to ensure that their syndicates got their money. It seems that Google is now performing similar actions with AMP. Let's explore.

AMP

AMP is an acronym that stands for Accelerated Mobile Pages. To be honest, this technology is only "accelerated" because it strips out much of what makes HTML pages look good and function well. The HTML technologies that make a web page function are also what make it usable. When you strip out the majority of that usability, what you are left with is a stripped-down protocol named AMP… which should stand for Antiquated Markup Protocol.

This “new” (ahem) technology was birthed by Google in 2016. It claims to be an open source project and also an “open standard”, but the vast majority of the developers creating this (ahem) “standard” are Google employees. Yeah… so what does this say about AMP?

AMP as a technology would be fine if it were allowed to stand on its own merit. Unfortunately, Google is playing hardball to get AMP adopted.

Hardball

Google seems to feel that everyone needs to adopt and support AMP. To that end, Google has created a racket. Yes, an old-fashioned mob racket.

To ensure that AMP becomes adopted, Google requires web site owners to create, design and manage "properly formatted" AMP pages or face having their entire web site's rankings lost within Google Search.

In effect, Google is coercing web site owners into creating AMP versions of their web sites or effectively facing extortion by being delisted from Google Search. Yeah, that's hardball, guys.

It also may be very illegal under RICO laws. While no money is being transferred to Google (at least not explicitly), this action has the same effect. Basically, if, as a web site owner, you don't keep up with your AMP pages, Google will remove your web site from its search results, thus forcing you to comply with AMP to reinstate the listing.

Google Search as Leverage

If Google Search were, say, 15% or less of the search market, I might not even make a big deal out of this. However, because Google Search holds around 90% of the search market (an effective monopoly), it can make or break a business by reducing site traffic through lower rankings. Complying with AMP to avoid having your rankings reduced is much the same as handing Google protection money… and, yes, this is still very much a racket. While rackets have traditionally been about collecting money, Google's currency isn't money. Google's currency is search rankings. Search rankings make or break companies, much the same as paying or not paying the mobsters did back in the 1930s.

Basically, by coercing and extorting web site owners into creating AMP pages, Google has effectively joined the ranks of those 1930s mob boss racketeers. Google is now basically racketeering.

Technology for Technology’s Sake

I'm fine when a technology is created, then released and allowed to land where it may. If it's adopted by people, great. If it isn't, so be it. However, Google felt the need to force AMP's adoption by playing the extortion game. Basically, Google is extorting web site owners to force them to support AMP or face consequences. This forces web site owners to create and maintain AMP versions of their web pages, not only to appease Google, but to prevent their entire site from being heavily reduced in search rankings and, by extension, losing visitors.

RICO Act

In October of 1970, Richard M. Nixon signed into law the Racketeer Influenced and Corrupt Organizations Act… or RICO for short. This Act makes it illegal for corrupt organizations to coerce and extort people or businesses for personal gain. Yet, here we are in 2020 and that's exactly what Google is doing with AMP.

It’s not that AMP is a great technology. It may have merit at some point in the future. Unfortunately, we’ll never really know that. Instead of Google following the tried-and-true formula of letting technologies land where they may, someone at Google decided to force web site owners to support AMP … or else. The ‘else’ being the loss of that business’s income stream by being deranked from Google’s Search.

Google Search can make or break a business. By extorting businesses into using AMP under fear of losing search ranking, Google very much runs afoul of RICO. Google gains AMP adoption, yes, but that's Google's gain at the site owner's loss. "What loss?", you ask. Site owners are forced to hire staff to learn and understand AMP because the alternative is loss of business. Is Google paying business owners back for this extortion? No.

So, here we are. A business the size of Google wields a lot of power. In fact, it wields around 90% of the Internet’s search power. One might even consider that a monopoly power. Combining a monopoly and extortion together, that very much runs afoul of RICO.

Lawsuit City and Monopolies

Someone needs to bring Google up in front of Congress for its actions here. It's entirely one thing to create a standard and let people adopt it on their own. It's entirely another matter when you force adoption of that standard on people who have no choice by using your monopoly power against them.

Google has already lost one legal battle with COPPA and YouTube. It certainly seems time that Google needs to lose another legal battle here. Businesses like Google shouldn’t be allowed to use their monopoly power to brute force business owners into complying with Google technology initiatives. In fact, I’d suggest that it may now be time for Google, just like the Bell companies back in the 80s, to be broken up into separate companies so that these monopoly problems can no longer exist at Google.


FX TV Series Review: Devs

Posted in botch, california, entertainment, Uncategorized by commorancy on March 7, 2020

Devs is a new "limited" series from FX, also being streamed on Hulu. Let's explore everything that went wrong here.

Silicon Valley Startups

Having worked in Silicon Valley for several tech companies, I can confirm exactly how unrealistic this show is. Let’s start by discussing all of the major flaws within the pilot. I should also point out that the pilot is what sets the tone of a series. Unfortunately, the writers cut so many corners setting up the pilot’s plot, the rest of the series will suffer for it.

As a result of the sloppy writing for the pilot, the writers will now be required to retcon many plot elements into the series as the need arises. Retconning the story wouldn't have been needed had they simply set up this series properly. Unfortunately, they rushed the pilot's story.

Slow Paced

You might be thinking, "Well, I thought the pacing of the series was extremely slow." The dialog and scene pacing is indeed slow. But the story itself moves along so rapidly that if you blink, you'll miss it.

What’s it about?

A girlfriend and boyfriend pair work for the same fictional tech company, named "Amaya", apparently located in a redwood-forested area near San Francisco. The show never specifically states where the company exists, only that it's somewhere in a wooded area.

The female lead, Lily, and the male lead, Sergei, are in a relationship. She's of Chinese-American heritage and he's of Russian descent. She works on the cryptography team at Amaya and he works in the AI division (at least in the pilot of the show).

Things Go Awry

Almost immediately, the series takes a bad turn. Sergei shows off his project to the leader of 'Devs', another team in the company. We later come to find that this unkempt leader is actually the founder of the company and that the company was named after his daughter, Amaya, who died. He also apparently heads up the part of the company that we come to find is named 'Devs'. Unfortunately, because there's no setup around what 'Devs' exactly is, the viewer is left firmly lost over the magnitude of what's going on at this meeting. Clearly, it isn't lost on Sergei, as he's extremely nervous about the meeting, but he still goes in reasonably confident of his project. As viewers, though, we're mostly lost until much later in the episode.

Sergei demonstrates his project to this unexplained team and they seem suitably impressed with his project's results… that is, until the end of the meeting when the results begin failing due to insufficient processing power.

Still, Sergei’s results are impressive enough that he is invited (not the rest of his team) to join ‘Devs’ right then and there.

And then we hear the sound of a record needle being ripped across a record…

Not how Silicon Valley works

You don't get invited to join some kind of "elite, coveted" team at the drop of a hat like that. Managers have paperwork, transfer requests have to be made and budgets have to be allotted. There are lots of HR-related things that must happen when transferring a person from one department to another, even at the request of the CEO. It's not a "You're now on my team effective immediately" kind of thing. That doesn't occur and is horribly unrealistic.

Ignoring the lack of realism of this transfer, the actor playing Sergei is either not that great of an actor or was directed poorly. Whatever the reason, he didn’t properly convey the elation required upon being invited and accepted into “the most prestigious” department at Amaya. If he were actually trying to get into ‘Devs’, his emotions should have consisted of at least some moment of joy. In fact, the moment he’s accepted into ‘Devs’, it almost seems like fear or confusion blankets him. That’s not a normal emotion one would experience having just stepped into a “dream job”.

This is where the writers failed. The writers failed to properly explain that this was Sergei’s dream job. This is also where the writers failed to properly set up the ‘Devs’ team as the “Holy Grail” of Amaya.

Clearly, the writers were attempting to set up this fictional Amaya company to mirror a company of similar size to Google or Apple.

Location

Ignoring the meeting that sets up the whole opening (and which also fails to do so properly), Sergei heads home to explain to Lily his change in company status and his transfer into ‘Devs’. They have a conversation about the closed nature of that team and that they won’t be able to discuss his new job in ‘Devs’.

The next day, Sergei heads over to the head of Amaya security to be ‘vetted’ for the ‘Devs’ team. Apparently, there’s some kind of security formality where the security team must interview and vet out any potential problems. The security manager even points out that because Sergei is native Russian and because Lily is Chinese that there’s strong concern over his transfer. If this security person is so concerned over his background, then he should rescind his transfer effective immediately.

Instead, he sends Sergei on his way to meet with the ‘Devs’ manager who then escorts him through a heavily wooded area into what amounts to an isolated fortress.

Record needle rips across again… “Hold it right there”

While it's certainly possible a tech startup might attempt to locate its headquarters deep in a wooded area, it's completely unrealistic. California is full of tree huggers. There are, in fact, way too many tree huggers in California. There is no way a company like Google or Apple could buy a heavily forested area and then plop down a huge fortress in the middle of it. No, not possible. In fact, an organization like the "Open Space Trust" would see to it that such a land purchase request was blocked. There is no way a private company could set this up.

A governmental organization could do it simply through annexation via eminent domain, but not a private company. Let’s ignore this straight up California fact and continue onward with this show. Though, it would have made more sense if Amaya had been government sanctioned and funded.

Sergei’s First (and Last) Day

Ignoring the improbable setup of this entire show, Sergei is escorted by his new boss, who remarkably looks like Grizzly Adams… but dirtier, more unkempt and seemingly homeless. Typically, Silicon Valley companies won't allow men who look like this into managerial roles. Because we come to find later that he is apparently the "founder" of Amaya, the rest of the company lets his unkempt look slide. His look is made worse by the long-haired wig they've glued onto the actor. If you want a guy to look like Grizzly Adams, at least have him grow his hair out to some length so a lace-front wig looks at least somewhat realistic.

Anyway, let's move on. Sergei is escorted through a heavily wooded area (complete with a monstrously huge and exceedingly ugly statue of a child in a creepy pose) and on to his new work location… the fortress I described earlier. His boss explains how well secured the location is by pointing out its security features, including an "unbroken vacuum seal", which Sergei puzzles over aloud before being shown how it works. Sergei is then told that there is only one rule: no personal effects go into the building and nothing else comes out of it. Yet, this rule is already broken when they head inside. Even the "manager" breaks this rule.

Once they enter the building and get past the entry area, Mr. Grizzly explains that nothing inside the building is passworded. It's all open access to everything. Sergei is then shown his workspace and left to his own devices. Grizzly explains he'll figure it out on his own by "reading the code".

Unrealistic. No company does this.

Last Day

Here's where everything turns sour. We are left to assume that only one day has passed since Sergei was escorted into the building. Sergei then stares at his terminal screen, not doing anything, for about 5 minutes. He gets up, goes to the bathroom, barfs and then fiddles with his watch.

He then attempts to leave the building, yet somehow it’s night time. It was probably morning when he entered. Here’s where the storytellers failed again. There was no explanation of time passage. The same screen he was looking at when he entered is the same screen that was on his terminal when he attempts to leave. Yet, now it’s night time?

His manager assumes that Sergei has absconded with the code (remember the open access?) and that he is attempting to leave with it on his "James Bond watch". Sergei is jumped and seemingly suffocated by none other than the head of Amaya security.

And so the retcon begins…

The writers have now killed the one person they needed to explain this story. So now, they have to rely on Lily (as a newly minted detective) to unravel what happened. Here's where the show goes from being a possibly uplifting story to an implausible detective horror story.

To enable Lily to even get a first clue about what has happened to her boyfriend, the 'Devs' and security teams collude to fabricate footage making it appear as if Sergei is acting oddly while walking around the campus.

Instead of creating an actual story, the writers rely on fake security footage to retell the story. They even go so far as to fabricate footage of a person setting himself on fire, with Sergei's face composited in… to make it appear to be some kind of suicide. Yeah, I doubt Lily is buying any of it. Unfortunately, the writers leave too much unsaid, so we have no idea what Lily is really thinking.

Instead, Lily heads off to find her ex-boyfriend and ask him for help… and he summarily tells her to "fuck off". This whole ex-boyfriend premise is so contrived and unrealistic it actually tops the list of unrealistic tropes in this show.

Questions without Answers

Would a Silicon Valley company stoop to murder to protect its intellectual property? I guess it could happen, but it is very unlikely. Would they allow a thug to head up its security team? Exceedingly doubtful. If a company were to need to protect its property through acts of violence, it would hire out for that.

Though, really, Amaya is actually very naive. If they didn’t trust Sergei, they shouldn’t have hired him. Worse, they allowed their one rule to be broken… allowing personal effects inside the building. Both Sergei and Grizzly wear watches into the building. If no personal effects are to be carried in or out, then that includes ALL forms of technology including wrist watches of any form. In fact, they should require everyone to change their clothes before entering the building, forcing ALL personal effects into a locker with no access to that locker until shift end. The staff would then wear issued wardrobe for the duration of their work shift.

If Amaya had simply followed its own rules by setting the whole system up correctly, there wouldn't have been the possibility of any code theft or the need to murder an employee. Yet, Sergei is allowed to wear his watch into the building? It is then assumed that Sergei has managed to copy all (?) of the code onto his watch? Setting up such a secure system would have forced Sergei to thwart it in some way, creating more drama and reinforcing the fact that Sergei is, indeed, a spy. By killing Sergei off so quickly, the writers were required to take many shortcuts to get this story told.

Clearly, corporate espionage does exist, but would anyone attempt corporate espionage on their first day on a new team? On their second day? I think not. In fact, this setup is so contrived and blatantly stupid, it treats not only Sergei, but the audience, as if we haven't a brain in our heads. That the writers assume Russian espionage is this stupid is also insane.

No. If Sergei were being handled as a spy, he would only attempt espionage after having been in the position for a long time… perhaps even years. Certainly long enough to be considered "trusted". No company fully trusts a new employee on the first day. No company gives a new employee full access to all data on the first day, either. There is no way that "first day" Sergei could have ever been put in the position of having access to everything.

Further, a new employee needs to fully understand exactly what's going on in the new department, where everything is, and get accustomed to the new work area and new co-workers. There is no way Sergei would have attempted to abscond with any of the code when he barely understands what that code is even doing. Preposterous.

Episode 2

The writers then further insult us with the passworded Sudoku app that Lily finds on Sergei's phone. Lily enlists her ex-boyfriend again (whom she hadn't talked to in years) to help unlock the app. Amazingly, this second time he agrees. He then explains to Lily that it's a Russian messaging app and that Sergei was a spy.

Here’s the insulting part. After her ex-boyfriend unlocks the app, all of the messages are in English. Seriously? No, I don’t think so. Every message would have been in Russian, not English. If it’s a Russian app, they would communicate using the Russian language. But then the next part wouldn’t have made any sense.

Lily then decides to text whomever is on the other end. If the text had been in Russian, she would have had to learn enough Russian to message the other party. By making the text app English, it avoids this problem. That’s called “lazy writing”.

Inexplicably, the other end decides to meet with Lily. Needle rips again… No, I don’t think so. If it were really Sergei’s handler with the power to delete the app, the app would have been deleted immediately after Lily made contact. No questions asked. If they wanted to meet with Lily, they likely would have abducted her separately much, much later.

Still, it all conveniently happens. Worse, when the meeting takes place, the head of Amaya’s security is somehow there eavesdropping on the whole conversation. Yeah, I don’t think so. If the head of Amaya’s security is there, that either means he’s spying on Sergei’s apps (which are likely encrypted, so there’s no real way) or Amaya’s future prediction algorithm is already fully functional.

Basically, everything is way too convenient. Worse, if Amaya does manage to crack the prediction algorithm, the show's writers have a huge problem on their hands. There's no way for them to write any fresh stories in that universe without it all turning out contrived. With a prediction algorithm fully functional, Amaya can predict future events with 100% accuracy. This means they can then thwart anything negative that might hinder Amaya's business. The whole concept is entirely far-fetched, but it's actually made worse by the idea of an omniscient computer system that Amaya is attempting to build. But really, would a company actually kill an exceedingly bright software engineer who is just about to give its computer full future omniscience? I don't think so.

Omniscience is actually the bane of storytelling. If you have an omniscient being (or anything) available to see the future, then a company could effectively rule the world by manipulating historical events to their own benefit. This situation is a huge predicament for the writers and show runners.

In fact, I would make sure that Amaya's computer is firmly destroyed within the first 4 episodes. Amaya's omniscience can't come to exist or the show will jump the shark. The show should remain focused on Sergei's death and Lily uncovering it, rather than on creating Amaya's omniscient computer. That computer becoming fully functional will actually be the downfall of the show. The espionage doesn't need to succeed. In fact, it shouldn't succeed. Instead, some of Amaya's existing internal staff should be enlightened to the danger of Amaya's management once the actual reality of Sergei's death becomes widely known. The now-enlightened staff should turn on Amaya and subvert the soon-to-be "omniscient" computer, now comprehending the magnitude of just how far their bosses are willing to take everything. That computer is not only a danger to the show, it's a danger to that entire fictional world. Worse, though, are the murderous bosses, who are the real travesty here.

Any person working at a company whose management is willing to murder its staff should, at best, seek to leave the company immediately (fearing for their own safety)… alternatively, some of these employees might subversively see to that company's demise before exiting the organization. In fact, Devs should become a cautionary tale.

Technical staff always hold all of the cards at any tech company. Companies are left extremely vulnerable to their trusted coders and technical staff. These staff can insert damaging code at any time… code that can, in fact, take down a company from within. This is the real danger. This is where this show should head. Let's forget all about the silly omniscience gimmick and focus on the dangers of what can happen to a company when trusted technical staff become personally threatened by their own employer. This is the real point. This is the real horror. The omniscience gimmick is weak and subverts the show. Instead, bring the staff back to reality by having them take a stand against an employer who is willing to commit murder merely to protect company secrets.

[Updated: 7/11/2020]

About a week after I wrote this article, the next episode arrived. The term "Jump the Shark" immediately popped out at me about halfway into this episode.

There’s a scene where the Devs manager, Katie (Alison Pill), walks into the room and observes two of her team watching what is effectively porn on the company’s core technology. In fact, it’s not just any porn, but famous celebrities from the past “doing it”.

I can most definitely certify that while Silicon Valley’s hiring practices are dominated by males, no manager would allow this behavior in a conference room, let alone by using the company’s primary technology. They could have been watching literally anything and this is what they chose?

I can guarantee you that any manager who found out that an employee was watching such things on a work computer would, at best, give that employee a stern talking-to and put a reprimand in their file. At worst, that person would be fired. Katie just shrugs it off and makes a somewhat off-handed comment as she leaves the room. That's completely unrealistic for Silicon Valley companies. Legal issues abound in the Bay Area. There's no way any company would risk its own existence by letting that behavior slide by any employee.

Of course, having a security manager running around and offing employees isn’t something companies in SV do either.


Apple and Law Enforcement

Posted in Apple, botch, business, california by commorancy on January 14, 2020

Apple always seems to refuse law enforcement requests. Let's understand why this is bad for Apple… and for Silicon Valley as a whole. Let's see how this can be resolved.

Stubbornness

While Apple and other "Silicon Valley" companies may be stubborn about reducing encryption strength on phones, reducing encryption strength isn't strictly necessary for law enforcement to get what they need out of a phone. In fact, it doesn't really make sense to reduce encryption across all phone devices simply so law enforcement can gain access to a small number of devices in a small set of criminal cases.

That's like using a sledgehammer to open a pea. Sure, it works, but not very well. Worse, these legal cases might not even be impacted by what's found on the device. Making all phones vulnerable to potentially even worse crimes, such as identity theft and the theft of money, in order to prosecute a smaller number of crimes which might not even hinge on unlocking a phone, doesn't make sense.

There Are Solutions

Apple (and other phone manufacturers) should be required to partner with law enforcement to create a one-use unlocking system for law enforcement use. Federal law could even mandate that any non-law-enforcement personnel who attempt to access the law enforcement mode of a phone would be in violation of federal law. Though, policing this might be somewhat difficult. It should be relatively easy to build and implement such a one-use system. Such a system would be relatively easy to use (with the correct information) and equally difficult to hack (without the correct information).

Here's how this enforcement system would work: Apple (or any phone vendor) would be required to build both a law enforcement support web site and a law enforcement mode on the phone, for law enforcement use only. This LE support server is naturally authentication-protected. A verified law enforcement agent logs into Apple's LE system and enters key information from/about a specific device along with their own Apple-issued law enforcement ID number. Apple could even require law enforcement officers to have access to an iPhone themselves and use Face ID to verify their identity before access.

The device information from an evidence phone may include the iPhone's IMEI (available on the SIM tray), ICCID (if available), SEID (if available), serial number, phone number (if available) and, finally, a valid federally issued warrant number. Apple's validation system would then log in to a federal system and validate the warrant number. Once the warrant is validated, and provided the required input data all match the device (along with the agent's Apple-issued law enforcement ID), Apple would issue a one-time-use unlocking code to the law enforcement agent. This code can then be used one time to unlock the device in Law Enforcement Mode (LEM).
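To make that flow concrete, here's a minimal sketch of what such an issuance step might look like. Nothing here reflects any real Apple system; every function and field name (issue_le_code, warrant_service, le_registry and so on) is hypothetical and only illustrates the sequence described above: check the agent's pre-issued ID, validate the warrant, match the device identifiers, then mint a lengthy single-use code.

```python
# Hypothetical sketch of the server-side issuance step described above.
# None of these functions exist in any real Apple system; they only
# illustrate the proposed flow.

import hmac, hashlib, secrets, time

def issue_le_code(agent_id: str, device: dict, warrant_number: str,
                  le_registry: set, warrant_service, device_registry: dict,
                  signing_key: bytes) -> str:
    # 1. The agent must hold a pre-issued, still-valid law enforcement ID.
    if agent_id not in le_registry:
        raise PermissionError("Unknown or expired law enforcement ID")

    # 2. The warrant number must validate against the (hypothetical) federal system.
    if not warrant_service.is_valid(warrant_number):
        raise PermissionError("Warrant could not be validated")

    # 3. Every supplied identifier (IMEI, ICCID, SEID, serial, phone number)
    #    must match the vendor's record of that exact device.
    record = device_registry.get(device["serial_number"])
    if record is None or any(record.get(k) != v for k, v in device.items()):
        raise PermissionError("Device identifiers do not match")

    # 4. Mint a lengthy, single-use code bound to this device and a 24-hour window.
    nonce = secrets.token_hex(16)
    window = int(time.time()) // 86400
    msg = f"{device['serial_number']}:{warrant_number}:{window}:{nonce}".encode()
    mac = hmac.new(signing_key, msg, hashlib.sha256).hexdigest()
    return f"{nonce}-{mac}"
```

The point of the sketch is simply that every check happens server-side, before any code ever exists, so there is nothing on the phone itself worth attacking until a warrant has already been validated.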

To unlock an evidence device, the agent then boots the phone into LEM (a mode Apple would need to build) and manually enters the Apple-generated code into the phone's interface along with their law enforcement ID. Law enforcement mode then allows setup and connection to a local WiFi network (if no data network is available), but only after a valid code is entered. The code is verified against Apple's servers and the phone is temporarily unlocked. Valid entry of a law enforcement code unlocks the device for a period of 24 hours for law enforcement use. There is no "lock out" for entering a wrong code while the phone is in law enforcement mode, because these codes are far too complex and lengthy to guess by trial and error; though, the phone can reboot out of LEM after a number of wrong attempts.

This specific one-use code allows unlocking the device one time only and only for a period of 24 hours. The phone will accept that specific code only once and never accept it again. If law enforcement needs to unlock the phone again, they will have to go back through the same process and have Apple generate a new code from the same input data, again valid for only 24 hours.

A successfully used LE code will suspend all phone screen lock security for a period of 24 hours. This means that the only action needed to get into the phone for up to 24 hours (even after it has been powered off and back on) is pressing the home key or swiping up. No Touch ID or Face ID is needed while the phone is unlocked during this 24-hour period. This allows the phone to be used by multiple people for gathering evidence, downloading information or as otherwise needed by law enforcement. This mode also suspends all security around connecting to and trusting iTunes: iTunes will allow downloading data from the phone without going through its "trust" prompt. After 24 hours, the phone reboots, deletes LE configuration parameters (such as WiFi networks) and reverts back to its original locked and secured state.

The iPhone will also leave a notification for the owner of the phone that it has been unlocked and accessed by law enforcement (much the same as the note left in luggage by the TSA after it has been searched). If the phone still has Internet access, it will contact Apple and inform the Apple ID owner that the phone has been unlocked and accessed by law enforcement. This Internet notification can be suspended for up to 30 days to allow law enforcement enough time to get what they need before the system notifies the Apple ID owner of access to that device. Though, I'd recommend that Apple notify the owner right away of any access by law enforcement.

How to use the code

When a valid Apple-generated law enforcement code is entered into the phone in LEM, the phone calculates the validity of the code using an internal process that runs on the phone continuously. While the phone is in normal use by its owner, this process periodically syncs with Apple's LE servers to ensure that the phone's LEM process will work properly should the phone fall into the possession of law enforcement. This information would have to be spelled out and agreed to in Apple's terms and conditions. Apple's servers and the phone remain synchronized in the same way as RSA one-time keys remain synchronized (within a small, calculable margin of error), so the phone won't need to synchronize often.
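Here's a rough sketch of how that offline, time-windowed validation could work, loosely in the spirit of the RSA-style synchronization mentioned above. The shared secret, the 24-hour window and the drift tolerance are all assumptions for illustration, not a description of anything Apple actually ships.

```python
# Sketch of offline code validation on the device, assuming a shared
# secret provisioned in advance and codes scoped to 24-hour windows.

import hmac, hashlib, time

WINDOW_SECONDS = 86400      # codes are scoped to a 24-hour window
ALLOWED_DRIFT = 1           # tolerate one window of clock drift

def expected_code(shared_secret: bytes, serial: str, window: int) -> str:
    msg = f"{serial}:{window}".encode()
    return hmac.new(shared_secret, msg, hashlib.sha256).hexdigest()

def code_is_valid(shared_secret: bytes, serial: str, supplied_code: str) -> bool:
    now_window = int(time.time()) // WINDOW_SECONDS
    for drift in range(-ALLOWED_DRIFT, ALLOWED_DRIFT + 1):
        candidate = expected_code(shared_secret, serial, now_window + drift)
        # constant-time comparison avoids leaking information via timing
        if hmac.compare_digest(candidate, supplied_code):
            return True
    return False
```

Because the code is derived from the device's own identifiers and the current time window, there is nothing to brute-force within the 24-hour validity period, which is why a lock-out counter isn't strictly needed.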

How to use Law Enforcement Mode

This mode can be brought up by anyone, but to unlock it fully, a valid Apple-issued law enforcement ID and a one-use code must be entered into the iPhone for the mode to unlock and allow setup of a WiFi network. Without entry of an Apple-issued law enforcement ID number, or after successive incorrect entries, the phone will reboot out of LEM after a short period of time.

Law Enforcement ID

Law enforcement IDs must be generated by Apple, and these IDs will synchronize to all Apple devices before those devices fall into law enforcement possession. To keep this list small, it will remain compressed on the device until LEM successfully activates, at which time the file is decompressed for offline validation use. This means that a nefarious someone can't simply get into this mode and start mucking about to gain entry to a random phone. It also means someone can't request that Apple issue a brand-new ID on the spot. Even if Apple were to create a new ID, the phone would take up to 24 hours to synchronize… and that assumes the phone still has data service (which it probably doesn't). Without data service, the phone cannot synchronize new IDs. This is the importance of creating these IDs in advance.
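A tiny sketch of that offline ID check might look like the following; the file path, format and compression choice are all invented for illustration.

```python
# Sketch of the offline ID check described above: the phone keeps a
# compressed list of pre-synchronized law enforcement IDs and only
# decompresses it once LEM activates. Path and format are hypothetical.

import gzip, json

def load_le_ids(path: str = "/private/le_ids.json.gz") -> set:
    # Decompressed only when Law Enforcement Mode is activated.
    with gzip.open(path, "rt") as fh:
        return set(json.load(fh))

def le_id_is_known(agent_id: str, known_ids: set) -> bool:
    # A brand-new ID issued today would not yet have synced to the device,
    # which is why the article stresses issuing IDs well in advance.
    return agent_id in known_ids
```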

Apple will also need to go through a validation process to ensure the law enforcement officer requesting an ID is a valid officer working for a legitimate law enforcement organization. This in-advance validation may require a PDF of the officer's badge and number, an agency-issued ID card and any other relevant agency information to ensure the officer is a valid LE officer or an officer of the court. This requires some effort on the part of Apple.

To get an Apple law enforcement ID, the department needing access must apply for such access with Apple under its law enforcement support site (to be created). Once an Apple law enforcement ID has been issued, within 24 hours the ID will sync to phones, thus activating the use of this ID with the phone’s LEM. These IDs should not be shared outside of any law enforcement department. IDs must be renewed periodically through a simple validation process, otherwise they will expire and fall off of the list. Manufacturers shouldn’t have to manage this list manually.

Such a system is relatively simple to build, but may take time to implement. Apple, however, may not be cool with developing such a law enforcement system on its own time and dime. This is where the government may need to step in and mandate that such a law enforcement support system be built by phone manufacturers who insist on using overly strong encryption. Governments could legislate that companies reduce the encryption strength of their devices; instead, I'd strongly recommend that companies be required to build a law enforcement support and unlocking system into their devices should they wish to continue using ever stronger encryption. Why compromise the security of all devices simply for a small number of law enforcement cases? Apple must meet law enforcement somewhere in the middle via technological means.

There is also no reason for Apple and other device manufacturers to deny access to law enforcement agents when there are software and technical solutions that would let manufacturers cooperate with law enforcement without "giving away the farm".

I don’t even work for Apple and I designed this functional system in under 30 minutes. There may be other considerations of which I am not aware within iOS or Android, but none of these considerations are insurmountable in this design. Every device that Apple has built can support such a mode. Google should also be required to build a similar system for its Android phones and devices.

Apple is simply not trying.


Rant Time: Google’s Lie

Posted in botch, business, california, rant by commorancy on January 7, 2020

I've already written an article or two about YouTube giving content creators the finger. I haven't really put that information into context so that everyone can understand what's actually going on at YouTube, with the FTC and with Google. Let's explore.

Lies and Fiction

Google has asserted and maintained, since at least 2000 when COPPA came into effect, that it didn't allow children under age 13 on its platforms. Well, Google was caught with its proverbial pants down and suffered a $170 million fine at the hands of the FTC based on COPPA. Clearly, Google lied. To maintain that lie, it has had to do a number of things:

  1. For YouTube content creators, YouTube has hidden its metrics for anyone under the age of 13 from viewer stats on YouTube. What that means for creators is that the viewer metrics you see on your stats page are completely inaccurate for those under the age of 13. If Google had disclosed the under-13 age group's stats on this page, Google's lie would have unraveled far faster than it did. For Google to maintain its lie, it had to hide any possible trail that could lead to uncovering it.
  2. For other Google platforms (Stadia, Chromebook, Android phones, etc), they likely also kept these statistics secret for the same reasons. Disclosure that the 12 and under age group existed on Google meant disclosing to the FTC that they had lied about this age group using its services all along.
  3. For Android phones, well, let's just say that many a kid 12 and under has owned an Android phone. Parents have bought them and handed them over to their children. For the FTC to remain so oblivious to this fact for years is a testament to how badly operated this portion of the government is.
  4. Google / YouTube had to instruct engineers to design software systems around this “we don’t display under age 13 metrics” lie.

Anyway, so lie Google did. They lied from 2000 all of the way to 2019. That’s almost 20 years of lying to the government… and to the public.

YouTube’s Lie

Consider that even just one COPPA infraction found to be "valid" could leave a YouTube channel owner destitute. After all, Google's fine was $170 million. Because a single violation could cost a whopping $42,530, simply maintaining a YouTube channel is a major risk.

Because of the problem of Google perpetuating its lie about 12 and under for so long, this lie has become ingrained in Google's corporate culture (and software systems). What this means is that for Google to maintain this lie, it had to direct its engineers to write software that avoids showing any statistical information, anywhere, that could disclose to anyone that Google allows 12-and-unders onto any of its platforms, let alone YouTube.

This also means that YouTube content creators are entirely left in the dark when it comes to viewer statistics for ages 12 and under. Because Google intended to keep maintaining its "we don't serve 12 and under here" lie, its systems were designed around that lie. Anywhere the 12-and-under age group could have been disclosed, that data was specifically culled and redacted from view. No one, especially not YouTube content creators, could see viewer metrics for anyone 12 and under. By intentionally redacting this information from its statistics interfaces, Google ensured no one could see that 12-and-unders were actually viewing YouTube videos or even buying products. As a creator, you really have no idea how many 12-and-under viewers you have. The FTC will have access into YouTube's systems to see this information, even if you as a content creator do not.

This means that content creators are actually in the dark for this viewer age group. There's no way to really know if this age group is being accurately counted. Google is likely collecting this information, but simply not disclosing it over public interfaces. Though, to be fully safe and to fully protect its lie, Google might have been purging this data more often than 13-and-older data. If the data isn't on the system, they can't easily be caught with it. Still, that didn't help when Google finally did get caught and was fined $170 million.

Unfortunately, because Google's systems were intentionally designed around a lie and because they are now already in place, undoing that intentional design could be a challenge for Google. They've had 19 years' worth of engineering effort building code upon code to avoid disclosing that 12-and-unders use Google's platforms. Undoing 19 years of coding might be a problem.

Swinging back around to that huge fine, this leaves YouTube in a quandary. It means that content creators have no way to know if the metrics that are being served to content creators are in any way accurate. After all, Google has been maintaining this lie for 19 years. They’ve built and maintained their systems around this lie. But now, Google must undo 19 years of lies built into their systems to allow content creators to see what we already knew… that 12 and under have been using the platform probably since 2000.

For content creators, you need to think twice when considering setting up a channel on YouTube. It doesn't matter what your content is. If that content attracts children under 13, you're at risk. The only type of channel content that cannot at all be seen as "for kids" is content that kids would never watch. There is really only a handful of content types I can name that wouldn't appeal to children (not an exhaustive list):

  1. Legal advice from lawyers
  2. Court room video
  3. Horror programs
  4. Political programs
  5. Frank sex topics

It would probably be easier to state those types of programs that do appeal to children:

  1. Pretty much everything else

What that means is topics like music videos, video game footage, cartoons, pet videos, singing competitions, beauty channels, fashion channels, technology channels and toy reviews could appeal to children… and the list goes on. You name it and pretty much every other content type has the possibility of attracting children 12 and under… some content more than others. There’s literally very little that a child 12 and under might not consider watching.

The thing is, when someone decides to create a channel on YouTube, you must now consider if the content you intend to create might appeal to children 12 and under. If it’s generalized information without the use of explicit information, children could potentially tune in. Though, YouTube doesn’t allow true adult content on its platform.

Google's lie has really put would-be channel creators into a huge bind with YouTube, plummeting the value of YouTube as a platform. For monetization, not only must you now get past the 1,000-subscriber and 4,000-watch-hour hurdles, you must also be cognizant of the audience your content might attract. Even seemingly child-unfriendly content might draw in children unintentionally. If you interview the wrong person on your channel, you might find that you now have a huge child audience. Operating a YouTube channel is a huge risk.

YouTube’s Value as a Platform

With this recent Google change, compounded by Google's lie, the value of YouTube as a video sharing platform has significantly dropped. Not only did Google drop a bomb on its content creators, it lied not only to the government, but to the public, for years. With the FTC's hand now in what you're doing on YouTube, YouTube really IS moving towards the "Big Brother is watching" world described in George Orwell's book 1984. Why Google would allow such a deep level of governmental interference over its platform is a major problem, not just for Google, but for the computer industry as a whole. It's incredibly chilling.

$42,530 per COPPA violation is not just small change you can pull out of your pocket. That's significant bank. So much bank, in fact, that a single violation could bankrupt nearly any YouTube channel with fewer than 100,000 subscribers.

Not only do you have to overcome YouTube’s silly monetization hurdles, you must attempt to stay far away from the COPPA hurdle that YouTube has now foisted on you.

Google’s Mistake

Google did have a way to rectify and remediate this situation early. It's called honesty. Google could have simply fixed its platform to accurately protect and steer the 12-and-under crowd away from its properties where they don't belong. It could have stated that it did (and does) allow 12-and-unders to sign up.

If Google had simply been honest about 12 and under and allowed 12 and under to sign up, Google could have set up the correct processes from the beginning that would have allowed not only Google to become COPPA compliant, but by extension allow YouTube creators to remain compliant through Google’s tools. Google should have always remained in the business of protecting its creators from governmental interference. Yet, here we are.

In fact, the COPPA legislation allows for parental permission and consent, and it's not actually that hard to set up, particularly for a large organization like Google. Google, in fact, already has mechanisms it could leverage to obtain verifiable parental consent. If Google had chosen to set up and maintain a 12-and-under verifiable parental consent program all along, YouTube content creators could have been left off the hook. Instead, YouTube has given content creators the finger.

If YouTube content creators must share in Google's lack of COPPA compliance, then content creators should equally share in a Google-created parental consent system. Parental consent isn't that hard to implement; Google could have spent its time building such a system instead of lying.
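As a purely hypothetical illustration of how simple the gating logic could be once verifiable consent is actually collected (COPPA permits methods such as a small card authorization or a signed consent form), here is a minimal sketch; the account fields and function names are invented and do not describe any real Google system.

```python
# Minimal sketch of a consent gate: under-13 accounts are blocked from
# child-directed features until verifiable parental consent is recorded.

from dataclasses import dataclass

@dataclass
class Account:
    age: int
    parental_consent_verified: bool = False  # set only after a verifiable consent step

def may_use_child_directed_features(account: Account) -> bool:
    if account.age >= 13:
        return True
    # Under 13: allowed only once a parent has completed a verifiable
    # consent step (card authorization, signed consent form, etc.).
    return account.parental_consent_verified
```

The hard part of COPPA compliance is the verification workflow itself, not the check; the point is that the check is trivial once the workflow exists.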

Trust and Lies

When companies as big as Google participate in lies of this magnitude, you should seriously question any business you do with such a company. Companies are supposed to be ethically bound to do the right thing. When companies don't do the right ethical thing and perpetuate lies for years, everyone must consider how much they trust that company.

What else are they lying about? It’s difficult to trust someone who lies. Why is it any different when a company chooses to lie?

When that lie can cost you $42,530 per violation, that’s what comes out of lying. Google not only didn’t protect its content creators, it perpetuated a lie that has now left its content creators hanging out to dry.

This is why YouTube as a content creator platform is about as worthless as it can possibly be… not only for the lie and COPPA, but also the monetization clampdown from 2017-2018. Every year has brought another downside to YouTube and for 2019, it’s Google’s lie.

For large creators who have an entrenched audience and who are making ad revenue bank from that audience (at least for the moment), I understand the dilemma of ditching YouTube. But for those content creators who make maybe $5 a month, is it worth that $5 a month to risk $42,530 every time you upload a video? Worse, the FTC can go back through your back video catalog and fine you for every single video they find! That's a lot of $42,530 fines, potentially at least one per video. Now that's risky!
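For a sense of scale, here's the back-of-the-envelope arithmetic behind that risk, using the article's $5-a-month creator and a hypothetical 100-video back catalog:

```python
# Back-of-the-envelope risk arithmetic from the paragraph above.
# The per-violation figure is the one cited in the article; the revenue
# and catalog size are illustrative assumptions.

FINE_PER_VIOLATION = 42_530
monthly_revenue = 5            # the hypothetical $5/month creator
videos_in_catalog = 100        # hypothetical back-catalog size

worst_case_exposure = FINE_PER_VIOLATION * videos_in_catalog
years_to_cover_one_fine = FINE_PER_VIOLATION / (monthly_revenue * 12)

print(f"Worst-case exposure: ${worst_case_exposure:,}")        # $4,253,000
print(f"Years of revenue to cover one fine: {years_to_cover_one_fine:,.0f}")  # ~709 years
```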

Solutions

There are solutions. The biggest solution: ditch YouTube for other video platforms such as Facebook, SnapChat, Vimeo or DailyMotion. If you're live streaming, there's YouNow, Twitch and Mixer. You're not beholden to YouTube to gain an audience and following. In fact, with the huge black COPPA cloud now permanently hanging over YouTube, it's only a matter of time before the FTC starts its tirade and cements what I'm saying here in this article. For small and medium sized creators, particularly brand new creators, it's officially time to give YouTube the finger (just as Google has given us the finger). It's long past time to ditch YouTube and to find an alternative video sharing platform. You might as well make that a 2020 New Year's resolution. Let's all agree that YouTube is officially dead and move on.

Just be sure to read the fine print of whatever service you are considering using. For example, Twitch’s terms and conditions are very explicit with regards to age… no one under 13 is permitted on Twitch. If only Google had been able to actually maintain that reality instead of lying about it for nearly 20 years.


 

Why Rotten Tomatoes is rotten

Posted in botch, business, california by commorancy on December 31, 2019

When you visit a site like Rotten Tomatoes to get information about a film, you need to ask yourself one very important question, "Is Rotten Tomatoes trustworthy?"

Rotten Tomatoes as a movie review service has come under fire many times for review bombing and manipulation. That is, Rotten Tomatoes seems to allow shills to join the service and bomb a movie's reviews to either raise or lower its various scores by manipulating the Rotten Tomatoes review system. In the past, these claims couldn't be verified. Today, they can.

As of a change in May 2019, Rotten Tomatoes has made it exceedingly easy for both movie studios and Rotten Tomatoes itself to game and manipulate the “Audience Score” ratings. Let’s explore.

Rotten Tomatoes as a Service

Originally, Rotten Tomatoes began its life as an independent movie review service such that both critics and audience members can have a voice in what they think of a film. So long as Rotten Tomatoes remained an independent and separate service from movie studio influence and corruption, it could make that claim. Its reviews were fair and for the most part accurate.

Unfortunately, all good things must come to an end. In February of 2016, Fandango purchased Rotten Tomatoes. Let's understand the ramifications of this purchase. Because Fandango is majority-owned by Comcast, with Warner Brothers also holding an ownership stake, this firmly places Rotten Tomatoes well outside the possibility of remaining neutral in film reviews. Keep in mind that Comcast also owns NBC as well as Universal Studios.

Fandango doesn’t own a stake in Disney as far as I can tell, but that won’t matter based on what I describe next about the Rotten Tomatoes review system.

Review Bombing

As stated in the opening, Rotten Tomatoes has come under fire over several notable recent movies whose scores appear to have been manipulated. Rotten Tomatoes has later debunked those claims by stating that its system was not manipulated, while offering no real proof of that fact. We simply have to take them at their word. One of these allegedly review-bombed films was Star Wars: The Last Jedi… where the scores inexplicably dropped dramatically over about a one-month period. Rotten Tomatoes apparently validated the drop as "legitimate".

Unfortunately, Rotten Tomatoes has become a bit more untrustworthy as of late. Let’s understand why.

As of May of 2019, Rotten Tomatoes introduced a new feature known as "verified reviews". For a review's score to be counted towards the "Audience Score", the reviewer must have purchased a ticket from a verifiable source. Unfortunately, the only source from which Rotten Tomatoes can verify ticket purchases is its parent company, Fandango. All other ticket purchases don't count… thus, if you review or rate a film after purchasing your ticket from the theater's box office, from MovieTickets.com or via any other means, your ticket won't count as "verified". Only Fandango ticket purchases count towards "verified" reviews, thus altering the audience score. This change is BAD. Very, very bad.

Here’s what Rotten Tomatoes has to say from the linked article just above:

Rotten Tomatoes now features an Audience Score made up of ratings from users we’ve confirmed bought tickets to the movie – we’re calling them “Verified Ratings.” We’re also tagging written reviews from users we can confirm purchased tickets to a movie as “Verified” reviews.

While this might sound like a great idea in theory, it's ripe for manipulation problems. Fandango also states that "IF" it can determine that "other" reviews correspond to confirmed ticket purchases, it will mark them as "verified". Yeah, but that's a manual process and impossibly difficult to determine. We can pretty much forget that this option even exists. Let's list the problems coming out of this change:

  1. Fandango only sells a small percentage of overall tickets for a film. If the "Audience Score" is calculated solely from Fandango ticket sales, then it is a horribly inaccurate metric to rely on.
  2. Fandango CAN handpick “other” non-Fandango ticket purchased reviews to be included. Not likely to happen often, but this also means they can pick their favorites (and favorable) reviews to include. This opens Rotten Tomatoes up to Payola or “pay for inclusion”.
  3. By specifying exactly how this process works, this change opens the Rotten Tomatoes system to being gamed and manipulated, even by Rotten Tomatoes staff themselves. Movie studios can also ask their employees, families and friends to exclusively purchase their tickets from Fandango and request these same people to write “glowing, positive reviews” or submit “high ratings” or face job consequences. Studios might even be willing to pay for these positive reviews.
  4. Studios can even hire outside people (sometimes known as shills) to go see a movie by buying tickets from Fandango and then rate their films highly… because they were paid to do so. As I said, manipulation.

Trust in Reviews

It's clear that while Rotten Tomatoes is trying to fix its ills, it is incredibly naive at it. It gets worse. Not only is Rotten Tomatoes incredibly naive, the company is also not at all tech savvy. Its system is so ripe for being gamed that the "Audience Score" is a nearly pointless metric. For example, 38,000 verified reviews out of the millions of people who watched a film? Yeah, if that "Audience Score" number isn't skewed, I don't know what is.

Case in point: the "Audience Score" for The Rise of Skywalker is 86%. The difficulty with this number is that the vast majority of the reviews I've seen from people on chat forums don't rate the film anywhere close to 86%. What that means is that the new way Rotten Tomatoes calculates scores is effectively a form of manipulation itself BY Rotten Tomatoes.

To have the most fair and accurate metric, ALL reviews must be counted and included in all ratings. You can't just toss out the vast majority of reviews simply because you can't verify their authors as holding a ticket. Even then, holding a ticket doesn't mean someone has actually watched the film. Buying a ticket and actually attending a showing of the film are two entirely separate things.

While you may have verified a ticket purchase, did you verify that the person actually watched the film? Are you excluding brand-new Rotten Tomatoes accounts from the audience score? How trustworthy can someone be if this is their first and only review on Rotten Tomatoes? What about people who downloaded the app just to buy a ticket for that film? Simply buying a ticket from Fandango doesn't make the rating or reviewer trustworthy.

Rethinking Rotten Tomatoes

Someone at Rotten Tomatoes needs to drastically reconsider this change, and they need to do it fast. If Rotten Tomatoes wasn't guilty of manipulating review scores before this late-spring 2019 change, it is now. Rotten Tomatoes is definitely guilty of manipulating the "Audience Score" through the sheer lack of reviews covered under this "verified review" change. Nothing can be considered valid when the sampling size is so small as to be useless. Verifying a ticket holder also doesn't validate a review author's sincerity, intent or, indeed, legitimacy. It also severely limits who can be counted in the ratings, thus skewing the usefulness of the "Audience Score".

In fact, only by looking at past reviews can someone determine if a review author has trustworthy opinions.

Worse, Fandango holds a very small portion of all ticket sales made for theaters (see below). By tabulating scores only (or primarily) from people who bought tickets from Fandango, this change effectively discards well over half of the written reviews on Rotten Tomatoes. Worse, because of the way the metric is calculated, nefarious entities can game the system to their own benefit and manipulate the score quickly.

This has a chilling effect on Rotten Tomatoes. The staff at Rotten Tomatoes needs to roll back this change pronto. For Rotten Tomatoes to return to being a trustworthy, neutral entity in the art of movie reviews, it needs a far better way to determine the trustworthiness of its reviews and of its reviewers. Trust comes from well written, consistent reviews. Ratings come from trusted sources. Trust is earned. The sole act of buying a ticket from Fandango doesn't earn trust. It earns bankroll.

Why, then, are ticket buyers from Fandango any more trustworthy than people purchasing tickets elsewhere? They aren't… and here's where Rotten Tomatoes has failed. Rotten Tomatoes incorrectly assumes that by "verifying" a sale of a ticket via Fandango alone, that somehow makes a review or rating more trustworthy. It doesn't.

It gets worse because while Fandango represents at least 70% of online sales, it STILL only represents a tiny fraction of overall ticket sales, at just 5-6% (as of 2012).

“Online ticketing still just represents five to six percent of the box office, so there’s tremendous potential for growth right here.” –TheWrap in 2012

Granted, this TheWrap article is from 2012. Even if Fandango had managed to grab 50% of overall ticket sales in the subsequent 7 years since that article, that would still leave out the remaining 50% of ticket holders’ voices, which will not be tallied into Rotten Tomatoes’ current “Audience Score” metric. I seriously doubt that Fandango has managed to achieve anywhere close to 50% of total movie ticket sales. If it held 5-6% of overall sales in 2012, Fandango might have grown to between 10-15% at most by 2019. That still leaves roughly 85% of all reviews excluded from Rotten Tomatoes’ “Audience Score” metric. In fact, it behooves Fandango to keep this overall ticket sales number as low as possible so as to influence its “Audience Score” number with more ease and precision.

To put this in a little more perspective, a movie theater might have 200 seats. 10% of that is 20. That means that for every 200 people who might fill a theater, only about 20 bought their ticket from Fandango and are eligible for their review to count towards “Audience Score”. Considering that only a small percentage of those 20 will actually take the time to write a review, that could mean that out of every 200 people who’ve seen the film legitimately, between 1 and 5 people might be counted towards the Audience Score. Scaling that up, for every 1 million people who see a blockbuster film, somewhere between 5,000 and 25,000 reviews may contribute to the Rotten Tomatoes “Audience Score”… even if there are hundreds of thousands of reviews on the site.
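To make that back-of-envelope math concrete, here’s a small Python sketch of the same estimate. The percentages are the assumptions made above (a roughly 10% Fandango share and a 5-25% review-writing rate), not anything published by Rotten Tomatoes or Fandango:

```python
# Back-of-envelope estimate of how many reviews can feed the "Audience Score".
# All percentages are this article's assumptions, not published figures.
fandango_share = 0.10      # assumed share of all tickets sold through Fandango
review_rate_low = 0.05     # assumed: 1 in 20 eligible ticket buyers writes a review
review_rate_high = 0.25    # assumed: 5 in 20 eligible ticket buyers write a review

total_viewers = 1_000_000
eligible = total_viewers * fandango_share          # "verified" ticket buyers
low = eligible * review_rate_low                   # ~5,000 reviews
high = eligible * review_rate_high                 # ~25,000 reviews

print(f"Eligible 'verified' viewers: {eligible:,.0f}")
print(f"Reviews that might count:    {low:,.0f} to {high:,.0f}")
```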

The fewer the reviews contributing to that score, the easier it is to manipulate that score by adding just a handful of reviews to the mix… and that’s where Rotten Tomatoes’ “handpicked reviews” come into play (and with them, the potential for Payola). Rotten Tomatoes can then handpick positive reviews for inclusion. The problem is that while Rotten Tomatoes understands all of this, so do the studios. Which means that studios can, like I said above, “invite” employees to buy tickets via Fandango before writing a review on Rotten Tomatoes. They can even contact Rotten Tomatoes and pay for “special treatment”. This situation can allow movie studios to unduly influence the “Audience Score” for a current release… and it’s compounded because so few reviews count towards creating the “Audience Score”.

Where Rotten Tomatoes likely counted every review towards this score before, the new “verified score” methodology greatly reduces the number of reviews which contribute to tallying it. This lower number of reviews means that it is now much easier to manipulate the Audience Score number, either by gaming the system or by Rotten Tomatoes handpicking reviews to include.
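To see why a smaller pool is so much easier to swing, here’s a quick hypothetical calculation. The pool sizes and the 300 coordinated positive ratings are invented purely for illustration:

```python
# Hypothetical: how far 300 added positive ratings move a percentage score
# at two different pool sizes. All numbers are invented for illustration.
def audience_score(positive: int, total: int) -> float:
    return 100.0 * positive / total

# Small "verified" pool: 3,000 ratings, 60% positive.
small_before = audience_score(1_800, 3_000)
small_after = audience_score(1_800 + 300, 3_000 + 300)

# Large open pool: 100,000 ratings, 60% positive.
large_before = audience_score(60_000, 100_000)
large_after = audience_score(60_000 + 300, 100_000 + 300)

print(f"Small pool: {small_before:.1f}% -> {small_after:.1f}%")  # 60.0% -> 63.6%
print(f"Large pool: {large_before:.1f}% -> {large_after:.1f}%")  # 60.0% -> 60.1%
```

The same handful of ratings that barely registers against an open pool moves a restricted “verified” pool by several points.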

Fading Trust

While Rotten Tomatoes was once a trustworthy site for movie reviews, it has greatly reduced its trust levels by instituting such backwards and easily manipulable systems.

Whenever you visit a site like Rotten Tomatoes, you must always question everything you see. When you see something like an “Audience Score”, you must question how that number is calculated and what is included in that number. Rotten Tomatoes isn’t forthcoming.

In the case of Rotten Tomatoes, they have drastically reduced the number of reviews included in that metric because of their “verified purchase” mechanism. Unfortunately, the introduction of that mechanism at once destroys Rotten Tomatoes’ trust and trashes the concept of their site.

It Gets Worse

What’s even more of a problem is the following two images:

[Screenshots taken 2019-12-23: Rotten Tomatoes’ “Verified Ratings” count and “Verified Audience” review count for The Rise of Skywalker]

From the above two images, Rotten Tomatoes claims 37,956 “Verified Ratings”, yet only 3,342 “Verified Audience” reviews. That’s a huge discrepancy. Where are those other 34,614 “Verified” ratings? The Audience Score shouldn’t be calculated solely from a simplistic “rate this movie” tap on a phone; it should be calculated in combination with an author actually writing a review. And then there are 5,240 audience reviews that didn’t contribute to any score at all on Rotten Tomatoes. Those reviews are just “there”, taking up space.

Single number ratings are pointless without at least some validating text. Worse, we know that these “Verified Ratings” likely have nothing to do with the “Verified Audience” shown in the images above. Even if those 3,342 audience reviews are actually calculated into the “Verified Ratings” (they probably aren’t), they are such a small slice of the total that the “Verified Ratings” number can easily be skewed by people who may not have even attended the film.

You can only determine if someone has actually attended a film by asking them to WRITE even the smallest of reviews. Simply pressing “five stars” in the app without even caring is pointless. It’s possible the ratings weren’t even tabulated correctly via the App. The App itself may even submit star data after a period of time without the owner’s knowledge or consent. The App can even word its rating question in such a way as to manipulate the response in a positive direction. Can we say, “Skewed”?

None of this leads to trust. Without knowing exactly how that data was collected, the method(s) used and how it was presented on the site and in the app, how can you trust any of it? It’s easy to check professional critic reviews because Rotten Tomatoes must cite back to the source of the review. However, with audience metrics, it’s all nebulous and easily falsified… particularly when Rotten Tomatoes is intentionally obtuse and opaque about exactly how it collects this data and how it presents it.

Even still, with over one million people having attended and viewed The Rise of Skywalker, Rotten Tomatoes has only counted just under 38,000 verified people; something doesn’t add up. Yeah, Rotten Tomatoes is so very trustworthy (yeah right), particularly after this “verified” change. Maybe it’s time for those Rotten Tomatoes to finally be tossed into the garbage?


Rant Time: Flickr is running out of time & money?

Posted in botch, business, california by commorancy on December 19, 2019

I received a rather questionable email about Flickr allegedly from Don MacAskill, CEO of SmugMug.

Unfortunately, his email is also wrapped in the guise of email marketing and arrived through the same marketing channel as all other email marketing from Flickr.

Don, if you want us to take this situation seriously, you shouldn’t use email marketing platforms to do it. These emails need to come personally from you using a SmugMug or Flickr address. They also shouldn’t contain several email marketing links. An email from the CEO should contain only ONE link and it should be at the very bottom of the email.

The information contained in this letter is not a surprise in general, but the way it arrived and the tone it takes is a surprise coming from a CEO, particularly when it takes the format of generic email marketing. Let’s explore.

Flickr Pro

I will place the letter at the bottom so you can read it in full. The gist of the letter is, “We’re running out of money, so sign up right away!”

I want to take the time to discuss the above “running out of money” point. Here’s an excerpt from Don’s email:

We didn’t buy Flickr because we thought it was a cash cow. Unlike platforms like Facebook, we also didn’t buy it to invade your privacy and sell your data. We bought it because we love photographers, we love photography, and we believe Flickr deserves not only to live on but thrive. We think the world agrees; and we think the Flickr community does, too. But we cannot continue to operate it at a loss as we’ve been doing.

Let’s start by saying, why on Earth would I ever sign up for a money losing service that is in danger of closing? Seriously, Flickr? Are you mad? Don’t give me assurances that *I* can save your business with my single conversion. It’s going to take MANY someones to keep Flickr afloat if it’s running out of money. Worse, sending this email to former Pro members trying to get us to convert again is a losing proposition. Send it to someone who cares, assuming there is anyone like that.

A single conversion isn’t likely to do a damned thing to stem the tide of your money hemorrhaging, Flickr. Are you insane to send out a letter like this in this generic email marketing way? If anything, a letter like this may see even MORE of your existing members run for the hills by cancelling their memberships, instead of trying to save Flickr from certain doom. But, let’s ignore this letter’s asinine message and focus on why I decided to write this article.

Flickr is Dead to Me

I had an email exchange in November of 2018 with Flickr’s team. I made my stance exceedingly clear about exactly why I cancelled my Pro membership and why their inexplicable price increase is pointless. And yes, it is a rant. This exchange goes as follows:

Susan from Flickr states:

When we re-introduced the annual Flickr Pro at $49.99 more than 3 years ago, we promised all grandfathered Pros (including the bi-annual and 3-month plans) a 2-year protected price period. We have kept this promise, but in order to continue providing our best service to all of our customers, we are now updating the pricing for grandfathered Pros. We started this process on August 16, 2018.

With this being the case, bi-annual Pros pay $99.98 every 2 years, annual Pros pay $49.99 every year, and 3-month Pros pay $17.97 every 3 months. Notifications including the price increase have been sent out to our users starting from August 16.

I then write back the following rant:

Hi Susan,

Yes, and that means you’ve had more than ample time to make that $50 a year worth it for Pro subscribers. You haven’t and you’ve failed. It’s still the same Flickr it was when I was paying $22.48 a year. Why should I now pay over double the price for no added benefits? Now that SmugMug has bought it, here we are now being forced to pay the $50 a year toll when there’s nothing new that’s worth paying $50 for. Pro users have been given ZERO tools to sell our photos on the platform as stock photos. Being given these tools is what ‘Pro’ means, Susan. We additionally can’t in any way monetize our content to recoup the cost of our Pro membership fees. Worse, you’re displaying ads over the top our photos and we’re not seeing a dime from that revenue.

Again, what have you given that makes $50 a year worth it? You’re really expecting us to PAY you $50 a year to show ads to free users over the top of our content? No! I was barely willing to do that with $22.48 a year. Of course, this will all fall on deaf ears because these words mean nothing to you. It’s your management team pushing stupid efforts that don’t make sense in a world where Flickr is practically obsolete. Well, I’m done with using a 14 year old decrepit platform that has degraded rather than improved. Sorry Susan, I’ve removed over 2500 photos, cancelled my Pro membership and will move back to the free tier. If SmugMug ever comes to its senses and actually produces a Pro platform worth using (i.e., actually offers monetization tools or even a storefront), I might consider paying. As it is now, Flickr is an antiquated 14 year old platform firmly rooted in a 2004 world. Wake up, it’s 2018! The iStockphotos of the world are overtaking you and offering better Pro tools.

Bye.

Flickr and SmugMug

When Flickr was purchased by SmugMug, I wasn’t expecting much from Flickr. But, I also didn’t expect Flickr to double its prices while also providing nothing in return. The platform has literally added nothing to improve the “Pro” aspect of its service. You’re simply paying more for the privilege of having ads placed over the top of your photos. Though, what SmugMug might claim you’re paying for is entirely the privilege of the tiniest bit more storage space to store a few more photos.

Back when storage costs were immense, that pricing might have made sense. In an age where storage costs are impossibly low, that extra per month pricing is way out of line. SmugMug and Flickr should have spent their time adding actual “Pro” tools so that photographers can, you know, make money from their photos by selling them, leasing them, producing framed physical wall hangings, mugs, t-shirts, mouse pads, and so on. Let us monetize our one and only product… you know, like Deviant Art does. Instead, SmugMug has decided to charge more, then place ads over the top of our photos and not provide even a fraction of what Deviant Art does for free.

As a photographer, why should I spend $50 a year on Flickr only to gain nothing when I can move my photos to Deviant Art and pay nothing a year AND get many more tools which help me monetize my images? I can also submit them to stock photo services and make money off of leasing them to publications, something still not possible at Flickr.

Don’s plea is completely disingenuous. You can’t call something “Pro” when there’s nothing professional about it. But then, Don feels compelled to call out where they have actually hosted Flickr and accidentally explains why Flickr is losing money.

We moved the platform and every photo to Amazon Web Services (AWS), the industry leader in cloud computing, and modernized its technology along the way.

What modernization? Hosting a service on AWS doesn’t “modernize” anything. It’s a hosting platform. Worse, this hosting decision is entirely the cause of SmugMug’s central money woes with Flickr. AWS is THE most expensive cloud hosting platform available. There is nothing whatsoever cheap about using AWS’s storage and compute platforms. Yes, AWS works well, but the bill at the end of the month sucks. To keep the lights on when hosting at AWS, plan to spend a mint.

If SmugMug wanted to save on the costs of hosting Flickr, they should have migrated it to a much lower cost hosting platform instead of sending empty marketing promises asking people to “help save the platform”. Changing hosting platforms might require more hands-on effort from SmugMug’s technical staff, but SmugMug could likely halve the cost of hosting this platform by moving it to lower cost hosting providers… providers that will work just as well as AWS.

As for trying to urge past subscribers to re-up into Pro again simply to “save its AWS hosting decision”? Not gonna happen. Those of us who’ve gotten no added benefit by paying money to Flickr in the past are not eager to return. Either give us a legitimate reason to pay money to you (add a storefront or monetization tools) or spend your time moving Flickr to a lower cost hosting service, one where Flickr can make money.

Don, why not use your supposed CEO prowess to have your team come up with lower cost solutions? I just did. It’s just a thought. You shouldn’t rely on such tactless and generic email marketing practices to solve the ills of Flickr and SmugMug. You bought it, you have to live with it. If that means Flickr must shutdown because you can’t figure out a way to save it, then so be it.

Below is Don MacAskill’s email in all of its unnecessary email marketing glory (links redacted):

Dear friends,

Flickr—the world’s most-beloved, money-losing business—needs your help.

Two years ago, Flickr was losing tens of millions of dollars a year. Our company, SmugMug, stepped in to rescue it from being shut down and to save tens of billions of your precious photos from being erased.

Why? We’ve spent 17 years lovingly building our company into a thriving, family-owned and -operated business that cares deeply about photographers. SmugMug has always been the place for photographers to showcase their photography, and we’ve long admired how Flickr has been the community where they connect with each other. We couldn’t stand by and watch Flickr vanish.

So we took a big risk, stepped in, and saved Flickr. Together, we created the world’s largest photographer-focused community: a place where photographers can stand out and fit in.

We’ve been hard at work improving Flickr. We hired an excellent, large staff of Support Heroes who now deliver support with an average customer satisfaction rating of above 90%. We got rid of Yahoo’s login. We moved the platform and every photo to Amazon Web Services (AWS), the industry leader in cloud computing, and modernized its technology along the way. As a result, pages are already 20% faster and photos load 30% more quickly. Platform outages, including Pandas, are way down. Flickr continues to get faster and more stable, and important new features are being built once again.

Our work is never done, but we’ve made tremendous progress.

Now Flickr needs your help. It’s still losing money. Hundreds of thousands of loyal Flickr members stepped up and joined Flickr Pro, for which we are eternally grateful. It’s losing a lot less money than it was. But it’s not yet making enough.

We need more Flickr Pro members if we want to keep the Flickr dream alive.

We didn’t buy Flickr because we thought it was a cash cow. Unlike platforms like Facebook, we also didn’t buy it to invade your privacy and sell your data. We bought it because we love photographers, we love photography, and we believe Flickr deserves not only to live on but thrive. We think the world agrees; and we think the Flickr community does, too. But we cannot continue to operate it at a loss as we’ve been doing.

Flickr is the world’s largest photographer-focused community. It’s the world’s best way to find great photography and connect with amazing photographers. Flickr hosts some of the world’s most iconic, most priceless photos, freely available to the entire world. This community is home to more than 100 million accounts and tens of billions of photos. It serves billions of photos every single day. It’s huge. It’s a priceless treasure for the whole world. And it costs money to operate. Lots of money.

Flickr is not a charity, and we’re not asking you for a donation. Flickr is the best value in photo sharing anywhere in the world. Flickr Pro members get ad-free browsing for themselves and their visitors, advanced stats, unlimited full-quality storage for all their photos, plus premium features and access to the world’s largest photographer-focused community for less than $5 per month.

You likely pay services such as Netflix and Spotify at least $9 per month. I love services like these, and I’m a happy paying customer, but they don’t keep your priceless photos safe and let you share them with the most important people in your world. Flickr does, and a Flickr Pro membership costs less than $1 per week.

Please, help us make Flickr thrive. Help us ensure it has a bright future. Every Flickr Pro subscription goes directly to keeping Flickr alive and creating great new experiences for photographers like you. We are building lots of great things for the Flickr community, but we need your help. We can do this together.

We’re launching our end-of-year Pro subscription campaign on Thursday, December 26, but I want to invite you to subscribe to Flickr Pro today for the same 25% discount.

We’ve gone to great lengths to optimize Flickr for cost savings wherever possible, but the increasing cost of operating this enormous community and continuing to invest in its future will require a small price increase early in the new year, so this is truly the very best time to upgrade your membership to Pro.

If you value Flickr finally being independent, built for photographers and by photographers, we ask you to join us, and to share this offer with those who share your love of photography and community.

With gratitude,

Don MacAskill
Co-Founder, CEO & Chief Geek

SmugMug + Flickr

Use and share coupon code [redacted] to get 25% off Flickr Pro now.


Am I impacted by the FTC’s YouTube agreement?

Posted in botch, business, california, ethics, family by commorancy on December 16, 2019

This question is currently a hot debate among YouTubers. The answer to this question is complex and depends on many factors. This is a long read as there’s a lot to say (~10,000 words = ~35-50 minutes). Grab a cup of your favorite Joe and let’s explore.

COPPA, YouTube and the FTC

I’ve written a previous article on this topic entitled Rant Time: Google doesn’t understand COPPA. You’ll want to read that article to gain a bit more insight around this topic. Today’s article is geared more towards YouTube content creators and parents looking for answers. It is also geared towards anyone with a passing interest in the goings on at YouTube.

Before I start, let me write this disclaimer by saying I’m not a lawyer. Therefore, this article is not intended in any way to be construed as legal advice. If you need legal advice, there are many lawyers available who may be able to help you with regards to being a YouTube content creator and your specific channel’s circumstances. If you ARE HERE looking for legal advice, please go speak to a lawyer instead. The information provided in this article is strictly for information purposes only and IS NOT LEGAL ADVICE.

For Kids or Not For Kids?

[Screenshot, 2019-11-24: the audience setting in YouTube’s channel settings area]

With that out of the way, let’s talk a little about what’s going on at YouTube for the uninitiated. YouTube has recently rolled out a new channel creator feature. This feature requires that you mark your channel “for kids” or “not for kids”. Individual videos can also be marked this way (which becomes important a little later in the article). Note, this “heading” is not the actual text on the screen in the settings area (see the image), but this is what you are doing when you change this YouTube creator setting. This setting is binary. Your content is either directed at kids or it is not directed at kids. Let’s understand the reasoning around COPPA. Also, a “kid” or “child” is defined in COPPA as any person 12 or younger.

When you set the “for kids” setting on a YouTube channel, a number of things will happen to your channel: comments are disabled, monetization is severely limited or eliminated, and how your content is promoted by YouTube drastically changes. There may also be other subtle changes that are as yet unclear. The reason for all of these restrictions is that COPPA prevents the collection of personal information from children 12 and under… or, at the very least, requires that such information be deleted if parental consent cannot be obtained. In the 2013 update, COPPA added cookie tracking to the list of items that cannot be collected.

By disabling all of these features under ‘For Kids’, YouTube is attempting to reduce or eliminate its data collection vectors that could violate COPPA… to thwart future liabilities for Google / YouTube as a company.

On the other hand, if you set your channel as ‘Not For Kids’, YouTube maintains your channel as it has always been, with comments enabled, full monetization possible, etc. Seems simple, right? Wrong.

Not as Simple as it Seems

You’re a creator thinking, “Ok, then I’ll just set my channel to ‘Not for Kids’ and everything will be fine.” Not so fast there, partner. It’s not quite as simple as that. COPPA applies to your channel if even one child visits and Google collects any data from that child. But, there’s more to it.

YouTube will also be rolling out a tool that attempts to identify the primary audience of video content. If YouTube’s new tool identifies a video as content primarily targeting “kids”, that video’s “Not for Kids” setting may be overridden by YouTube and set as “For Kids”. Yes, this can be done by YouTube’s tool, thus overriding your channel-wide settings. It’s not enough to set this setting on your channel, you must make sure your content is not being watched by kids and the content is not overly kid friendly. How exactly YouTube’s scanner will work is entirely unknown as of now.
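As a purely hypothetical sketch of how such an override could behave (YouTube hasn’t published how its classifier works, and none of these names come from YouTube), the effective audience setting for a video might resolve something like this:

```python
# Purely hypothetical sketch of a channel default plus a per-video override
# by an automated classifier. Names and rules are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Video:
    creator_flag: Optional[str]     # "for_kids", "not_for_kids", or None (inherit channel)
    classifier_flag: Optional[str]  # what the automated scanner decided, if anything

def effective_audience(video: Video, channel_default: str) -> str:
    # Start from the creator's per-video choice, falling back to the channel default.
    setting = video.creator_flag or channel_default
    # The scanner can force "for_kids" regardless of what the creator chose.
    if video.classifier_flag == "for_kids":
        return "for_kids"
    return setting

video = Video(creator_flag="not_for_kids", classifier_flag="for_kids")
print(effective_audience(video, channel_default="not_for_kids"))  # -> "for_kids"
```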

And here is where we get to the crux of this whole matter.

What is “Kid Friendly” Content?

Unfortunately, there is no clear answer to this question. Your content could be you reviewing toys, it could be drawing pictures by hand on the screen, it could be reviewing comic books, you might ride skateboards, you might play video games, you might even assemble Legos into large sculptures. These are all video topics that could go either way… and it all depends on which audience your video tends to draw in.

It also depends on your existing subscriber base. If a vast majority of your current active subscribers are children 12 and under, this fact can unfairly influence how your content is classified even if your current content is most definitely not for kids. The fact that ‘kids’ are watching your channel is a problem for ANY content that you upload.

But you say, “My viewer statistics don’t show me a 12 and under category.” No, they don’t, and there’s a good reason why. Google has always professed that it doesn’t allow 12 and under on its platform. But clearly, that was a lie. Google does, in fact, allow 12 and under onto its platform. That’s crystal clear for two reasons: 1) the FTC fined Google $170 million for violating COPPA (meaning the FTC found kids 12 and under are using the platform) and 2) YouTube has rolled out this “for kids / not for kids” setting, a confirmation by Google that kids 12 and under do, in fact, watch YouTube and have active Google Account IDs.

I hear someone else saying, “I’m a parent and I let my 11 year old son use YouTube.” Yeah, that’s perfectly fine and legal, so long as you have given “verifiable consent” to the company that is collecting data from your 11 year old child. As long as a parent gives ‘verifiable consent’ for their child under 12 to Google or YouTube or even to the channel owner directly, it’s perfectly legal for your child to be on the platform watching and participating and for Google and YouTube to collect data from your child.

Unfortunately, verifiable consent is difficult to manage digitally. See the DIY method of parental consent below. Unfortunately, Google doesn’t offer any “verifiable consent” mechanism for itself or for YouTube content creators. This means that even if you as a parent are okay with your child being on YouTube, Facebook, Instagram or even Snapchat, if you haven’t provided explicit and verifiable parental consent to that online service for your child 12 and under, that service is in violation of COPPA by handling data that your child may input into that service. Data can include name, telephone number, email address or even sharing photos or videos of themselves. It also includes cookies placed onto their devices.

COPPA was written to penalize the “web site” or “online services” that collect a child’s information. It doesn’t penalize the family. Without “verifiable consent” from a parent or legal guardian, the “web site” or “online service” has the same as no consent at all. Implicit consent isn’t valid for COPPA. Consent must be explicit and verifiable, and it must be given by a parent or legal guardian to the service being used by the child.

The Murky Waters of Google

If only YouTube were Google’s only property to consider. It isn’t. Google has many, many properties. I’ll make a somewhat short-ish list here:

  • Google Search
  • Google Games
  • Google Music
  • Google Play Store (App)
  • Google Play Games (App)
  • Google Stadia
  • Google Hangouts
  • Google Docs
  • Google’s G Suite
  • Google Voice
  • Google Chrome (browser)
  • Google Chromebook (device)
  • Google Earth (App)
  • Google Movies and TV
  • Google Photos
  • Google’s Gmail
  • Google Books
  • Google Drive
  • Google Home (the smart speaker device)
  • Google Chromecast (TV device)
  • Android OS on Phones
  • … and the list goes on …

To drive all of these properties and devices, Google relies on the creation of a Google Account ID. To create an account, you must supply Google with certain specific identifying information including email address, first and last name and various other required information. Google will then grant you a login identifier and a password in the form of credentials which allows you to log into and use any of the above Google properties, including (you guessed it) YouTube.

Without “verifiable consent” supplied to Google for a child 12 and under, whatever data Google has collected from your child during the Google Account signup process (or via any of the above apps) violates COPPA, a ruleset whose enforcement is tasked to the Federal Trade Commission (FTC).

Yes, this whole situation gets even murkier.

Data Collection and Manipulation

The whole point to COPPA is to protect data collected from any child aged 12 and under. More specifically, it rules that this data cannot be collected / processed from the child unless a parent or legal guardian supplies “verifiable consent” to the “web site” or “online service” within a reasonable time of the child having supplied their data to the site.

As of 2013, data collection and manipulation isn’t defined just by what the child personally uploads and types, though this data is included. This Act was expanded to include cookies placed onto a child’s computer device to track and target that child with ads. These cookies are also considered protected data by COPPA as these cookies could be used to personally identify the child. If a service does not have “verifiable consent” on file for that child from a parent or guardian, the “online service” or “web site” is considered by the FTC in violation of COPPA.

The difficulty with Google’s situation is that Google actually stores a child’s data within the child’s Google Account ID. This account ID is entirely separate from YouTube. For example, if you buy your child a Samsung Note 10 Phone running Android and you as a parent create a Google Account for your 12 or under child to use that device, you have just helped Google violate COPPA. This is part of the reason the FTC fined Google $170 million for violations of COPPA. Perhaps not this specific scenario, but the fact that Google doesn’t offer a “verifiable consent” system to verify a child’s access to its services and devices prior to collecting data or granting access to services led the FTC to its ruling. The FTC’s focus, however, is currently YouTube… even though Google is violating COPPA all over its properties as a result of the use of a Google Account ID.

YouTube’s and COPPA Fallout

Google wholly owns YouTube. Google purchased the YouTube property in 2006. In 2009, Google retired YouTube’s original login credential system and began requiring viewers to use Google Accounts to gain access to the YouTube property. This change is important.

It also seems that YouTube still operates mostly as an autonomous entity within Google’s larger corporate structure. What all of this means, more specifically, is that YouTube now uses Google Accounts, a separately controlled and operated system within Google, to manage credentials and grant access not only to the YouTube property, but to every other property that Google has (see the short-ish list above).

In 2009, the YouTube developers deprecated their own home grown credentials system and began using the Google Accounts system of credential storage. This change very likely means that YouTube itself no longer stores or controls any credential or identifying data. That data is now contained within the Google Accounts system. YouTube likely now only manages the videos that get uploaded, comments, supplying ads on videos (the tracking and management of which is probably also controlled by Google), content ID matching and anything else that appears in the YouTube UI. Everything else is likely out of the YouTube team’s control (or even access). In fact, I’d suspect that the YouTube team likely has zero access to the data and information stored within the Google Accounts system (with the exception of that specific data which is authorized by the account holder to be publicly shown).

Why is this Google Accounts information important?

So long as Google Accounts remains a separate entity from YouTube (even though YouTube is owned by the same company), this means that YouTube can’t be in violation of COPPA (at least not where storage of credentials is concerned). There is one exception which YouTube does control… its comment system.

The comment system on YouTube is one of the earliest “modern” social networks ever created. Only Facebook and MySpace were slightly earlier, though all three were generally created within 1 year of one another. It is also the only free form place left in the present 2019 YouTube interface that allows a 12 or under child to incidentally type some form of personally identifying information into a public forum for YouTube to store (in violation of COPPA).

This is the reason that the “for kids” setting disables comments. YouTube formerly had a private messaging service, but it was retired as of September of 2019. It is no longer possible to use YouTube to have private conversations between other YouTube users. If you want to converse with another YouTube viewer, you must do it in a public comment. This change was likely also fallout from Google’s COPPA woes.

Google and Cookies

For the same reason as Google Accounts, YouTube likely doesn’t even manage its own site cookies. It might, but it likely relies on a centralized internal Google service to create, manage and handle cookies. The reason for this is obvious. Were YouTube’s developers to create and manage their own separate cookie, it would be a cookie that holds no use for other Google services. However, if YouTube developers were to rely on a centralized Google controlled service to manage their site’s cookies, it would allow the cookie to be created in a standardized way that all Google services can consume and use. For this reason, this author supposes a centralized system is used at YouTube rather than something “homegrown” and specific to YouTube.
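To illustrate the difference this author is supposing (the format and field names below are invented; this is not Google’s actual cookie scheme), compare a cookie only one service understands with an identifier minted by a shared, central service that several properties could consume:

```python
# Invented illustration of a service-specific cookie versus a centrally issued
# identifier that multiple cooperating services could read. Not Google's scheme.
import json
import secrets

def youtube_only_cookie() -> str:
    # Minted by one service, meaningful only to that service.
    return json.dumps({"service": "youtube", "session": secrets.token_hex(8)})

def central_id_cookie(user_id: str) -> str:
    # Minted by a shared identity/ads layer; any cooperating service that
    # understands the format can read the same identifier back out.
    return json.dumps({"issuer": "central-accounts", "uid": user_id,
                       "scope": ["video", "ads", "search"]})

print(youtube_only_cookie())
print(central_id_cookie("user-12345"))
```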

While it is possible that YouTube might create its own cookies, it’s doubtful that YouTube does this for one important reason: ad monetization. For YouTube to participate in Google Advertising (yet another service under the Google umbrella of services), YouTube would need to use tracking cookies that the Google Advertising service can read, parse and update while someone is watching a video on YouTube.

This situation remains murky because YouTube can manage its own internal cookies. I’m supposing that YouTube doesn’t because of a larger corporate platform strategy. But, it is still entirely possible that YouTube does manage its own browser cookies. Only a YouTube employee would know for certain which way this one goes.

Because of the ambiguity in how cookies are managed within Google and YouTube, this is another area where YouTube has erred on the side of caution by disabling ads and ad tracking if a channel is marked as ‘for kids’. This prevents placing ad tracking cookies on any computers from ‘for kids’ marked channels and videos, again avoiding violations of COPPA.

The FTC’s position

Unfortunately, the FTC has put themselves into a constitutionally precarious position. The United States Constitution has a very important provision within its First Amendment.

Let me cite a quote from the US Constitution’s First Amendment (highlighting and italics added by author to call out importance):

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

The constitutional difficulty that the FTC has placed themselves in is that YouTube, by its very nature, offers a journalistic platform which is constitutionally protected from tortious interference by the United States government. The government (or more specifically, Congress) cannot make law that in any way abridges freedom of speech or of the press.

A video on YouTube is not only a form of journalism, it is a form of free speech. As long as YouTube and Google remain operating within the borders of the United States, United States residents must be able to use this platform unfettered without government tortious interference.

How does this apply to the FTC? It applies because the FTC is a governmental entity created by an act of the US Congress and, therefore, acts on behalf of the US Congress. This means that the FTC must uphold all provisions of the United States Constitution when dealing with matters of Freedom of Speech and Freedom of the Press.

How does this problem manifest for the FTC? The FTC has repeatedly stated that it will use “tools” to determine if a YouTube channel’s content is intended for and primarily targets children 12 and under. Here’s the critical part. If a channel’s content is determined to be targeting children 12 and under, the channel owner may be fined up to $42,530 per video as it will have been deemed in violation of COPPA.

There are two problems with the above statements the FTC has made. Let’s examine text from this FTC provided page about YouTube (italics provided by the FTC):

So how does COPPA apply to channel owners who upload their content to YouTube or another third-party platform? COPPA applies in the same way it would if the channel owner had its own website or app. If a channel owner uploads content to a platform like YouTube, the channel might meet the definition of a “website or online service” covered by COPPA, depending on the nature of the content and the information collected. If the content is directed to children and if the channel owner, or someone on its behalf (for example, an ad network), collects personal information from viewers of that content (for example, through a persistent identifier that tracks a user to serve interest-based ads), the channel is covered by COPPA. Once COPPA applies, the operator must provide notice, obtain verifiable parental consent, and meet COPPA’s other requirements.

and there’s more, which contains the most critical part of the FTC’s article:

Under COPPA, there is no one-size-fits-all answer about what makes a site directed to children, but we can offer some guidance. To be clear, your content isn’t considered “directed to children” just because some children may see it. However, if your intended audience is kids under 13, you’re covered by COPPA and have to honor the Rule’s requirements.

The Rule sets out additional factors the FTC will consider in determining whether your content is child-directed:

  • the subject matter,
  • visual content,
  • the use of animated characters or child-oriented activities and incentives,
  • the kind of music or other audio content,
  • the age of models,
  • the presence of child celebrities or celebrities who appeal to children,
  • language or other characteristics of the site,
  • whether advertising that promotes or appears on the site is directed to children, and
  • competent and reliable empirical evidence about the age of the audience.

Content, Content and more Content

The above quotes discuss YouTube Content becoming “covered by COPPA”. This is a ruse. Content is protected speech under the United States Constitution, as defined within the First Amendment (see above). Nothing in any YouTube visual content, when published by a United States citizen, can be “covered by COPPA”. The First Amendment sees to that.

Let’s understand why. First, COPPA is a data collection Act. It has nothing whatever to do with content ratings or content age appropriateness, nor, indeed, anything else related to visual content targeted towards children of ANY age. Indeed, there is no verbiage within the COPPA provisions that discusses YouTube, visual content, audio content or anything else to do with Freedom of Speech matters.

It gets worse… at least for the FTC. Targeting channels for disruption by fining them strictly over content uploaded onto the channel is less about protecting children’s data and more about content censorship on YouTube. Indeed, fining a channel $42,530 is tantamount to censorship as it is likely to see that content removed from YouTube… which is, indeed, censorship in its most basic form. Any censorship of Freedom of Speech is firmly against First Amendment rights.

Since the FTC is using fines based on COPPA as leverage against content creators, the implication is that the FTC will use this legal leverage to have YouTube take down content it feels inappropriately targets children 12 and under, rather than upholding COPPA’s actual data protection provisions. Indeed, the FTC will effectively be making new law by fining channels based on content, not on whether data was actually collected in violation of COPPA’s data collection provisions. Though the first paragraph quoted above may claim “data collection” as the metric, the second is solely about “offending content”… which is entirely about censorship. Why is that? Let’s continue.

COPPA vs “Freedom of Speech”

The FTC has effectively hung themselves out to dry. In fact, if the FTC does fine even ONE YouTube channel for “inappropriate content”, the FTC will be firmly in the business of censorship of journalism. Or, more specifically, the FTC will have violated the First Amendment rights of U.S. Citizens’ freedom of speech protections.

This means that in order for the FTC to enforce COPPA against YouTube creators, it has now firmly put itself into the precarious position of violating the U.S. Constitution’s First Amendment. In fact, the FTC cannot fine even one channel owner without violating the First Amendment.

In truth, they can fine only under the following circumstances:

  1. The FTC proves that the YouTube channel actually collected and currently possesses inappropriate data from a child 12 and under.
  2. The FTC leaves the channel entirely untouched. The channel and content must remain online and active.

Number 2 is actually quite a bit more difficult for the FTC than it sounds. Because YouTube and the FTC have made an agreement, that means that YouTube can be seen as an agent of the FTC by doing the FTC’s bidding. This means that even if YouTube takes down the channel after a fine for TOS reasons, the FTC’s fining action can still be construed as in violation of First Amendment rights because YouTube acted as an agent to take down the “offending content”.

It gets even more precarious for the FTC. Even the simple act of levying a fine against a YouTube channel could be seen as a violation of First Amendment rights. This action by the FTC seems less about protecting children’s data and more about going after YouTube content creators “targeting children with certain types of content” (see above). Because the latter quote from the FTC article explicitly calls out types of content as “directed to children”, it intentionally shows that this isn’t about COPPA, but about visual content rules. Visual content rules DO NOT exist in COPPA.

Channel Owners and Content

If you are a YouTube channel owner, all of the above should greatly concern you for the following reasons:

  1. You don’t want to become a Guinea Pig to test First Amendment legal waters of the FTC + COPPA
  2. The FTC’s content rules above effectively state, “We’ll know it when we see it.” This is constitutionally BAD. This heavily implies content censorship intent. This means that the FTC can simply call out any content as being inappropriate and then fine a channel owner for uploading that content.
  3. It doesn’t state if the rule applies retroactively. Does previously uploaded content become subject to the FTC’s whim?
  4. The agreement takes effect beginning January 1, 2020
  5. YouTube can “accidentally” reclassify content as “for kids” when it clearly isn’t… which can trigger an FTC action.
  6. The FTC will apparently have direct access to the YouTube platform scanning tools. To what degree it has access is unknown. If it has direct access to take videos or channels offline, it has direct access to violate the First Amendment. Even if it must ask YouTube to do this takedown work, the FTC will still have violated the First Amendment.

The Fallacy

The difficulty I have with this entire situation is that the FTC now appears to be holding content creators to blame for heavy deficiencies within YouTube’s and Google’s platforms. Because Google failed to properly police its own platform for users 12 and under, it now seeks to pass that blame down onto YouTube creators simply because they create and upload video content. Content, I might add, that is completely protected under the United States Constitution’s First Amendment as “Freedom of Speech”. Pre-shot video content is a one-way passive form of communication.

Just like broadcast and cable TV, YouTube is a video sharing platform. It is designed to allow creators to impart one-way passive communication using pre-made videos, just like broadcast TV. If these FTC actions apply to YouTube, then they equally apply to broadcast and cable television providers… particularly now that CBS, ABC, NBC, Netflix, Disney+ (especially Disney+), Hulu, Vudu, Amazon, Apple and cable TV providers also offer “web sites” and “online services” where their respective video content can (and will) be viewed by children 12 and under via a computer device or web browser and where a child is able to input COPPA-protected data. For example, is Disney+ requiring verifiable parental consent to comply with COPPA?

Live Streaming

However, YouTube now also offers live streaming which changes the game a little for COPPA. Live streaming offers two-way live communication and in somewhat real-time. Live streaming is a situation where a channel creator might be able to collect inappropriate data from a child simply by asking pointed questions during a live stream event. A child might even feel compelled to write into live chat information that they shouldn’t be giving out. Live streaming may be more likely to collect COPPA protected data than pre-made video content simply because of the live interactivity between the host and the viewers. You don’t get that level of interaction when using pre-made video content.

Live streaming or not, there is absolutely no way a content creator can in any way be construed as an “Operator” of Google or of YouTube. The FTC is simply playing a game of “Guilty by Association”. They are using this flawed logic… “You own a YouTube channel, therefore you are automatically responsible for YouTube’s infractions.” It’s simply Google’s way of passing down its own legal burdens by your channel’s association with YouTube. Worse, the FTC seems to have bought into this Google shenanigan. It’s great for Google, though. They won’t be held liable for any more infractions against COPPA so long as YouTube creators end up shouldering that legal burden for Google.

The FTC seems to have conveniently forgotten this next part. In order to have collected data from a child, you must still possess a copy of that data to prove that you actually did collect it and that you are STILL in violation of COPPA. If you don’t have a copy of the alleged violating data, then you either didn’t collect it, the child didn’t provide it, you never had it to begin with or you have since deleted it. As for cookie violations, it’s entirely a stretch to say that YouTube creators have anything to do with how Google / YouTube manages cookies. On deletion, the COPPA verbiage states under ‘Parental Consent’:

§312.4(c)(1). If the operator has not obtained parental consent after a reasonable time from the date of the information collection, the operator must delete such information from its records;

If an “operator” deletes such records, then the “operator” is not in violation of COPPA. If an “operator” obtains parental consent, then the “operator” is also not in violation of COPPA. Nothing, though, states definitively that a YouTube creator assumes the role of “operator”.
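As a rough sketch of what §312.4(c)(1) asks an “operator” to do in practice (the record layout and the 30-day window below are assumptions for illustration; the rule itself only says “a reasonable time”):

```python
# Illustrative-only sketch of the §312.4(c)(1) obligation: if verifiable parental
# consent hasn't arrived within a "reasonable time" of collection, delete the record.
# The 30-day window and record layout are assumptions, not anything from the rule.
from datetime import datetime, timedelta

REASONABLE_WINDOW = timedelta(days=30)   # assumption: what counts as "reasonable"

records = [
    {"child_id": "a1", "collected": datetime(2019, 11, 1), "consent_verified": False},
    {"child_id": "b2", "collected": datetime(2019, 12, 10), "consent_verified": True},
]

def purge_unconsented(records, now):
    """Keep a record only if consent was verified or the consent window is still open."""
    return [r for r in records
            if r["consent_verified"] or now - r["collected"] <= REASONABLE_WINDOW]

records = purge_unconsented(records, now=datetime(2019, 12, 23))
print(records)  # the expired, unconsented record is gone; the consented one remains
```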

This is important because Google is and remains the “operator”. Until or unless Google extends access to its Google Accounts collected data to ALL YouTube creators so that a creator can take possession of said data, a creator cannot be considered an “operator”. The YouTube creator doesn’t have (and never has had) access to the Google Account personal data (other than what is publicly published on Google). Only Google has access to this account data which has been collected as part of creating a new Google Account. Even the YouTube property and its employees likely don’t even have access to Google Account personal data as mentioned. This means that, by extension, a YouTube creator doesn’t have a copy of any personal data that a Google Accounts signup may have collected… and therefore the YouTube content creator is NOT in violation of COPPA, though that doesn’t take Google off of the hook for it.

A YouTube content creator must actually POSSESS the data to be in violation. The FTC’s burden of proof is to show that the YouTube content creator actually has possession of that data. Who possesses that data? Google. Who doesn’t possess that data? The YouTube content creator. Though, there may be some limited edge cases where a YouTube creator might have requested personal information from a child in violation of COPPA. Even if a YouTube creator did request such data, so long as it has since been deleted fully, it is not in violation of COPPA. You must still be in possession of said data to be in violation of COPPA, at least according to how the act seems to read. If you have questions about this section, you should contact a lawyer for definitive confirmation and advice. Remember, I’m not a lawyer.

There is only ONE situation where a YouTube content creator may be in direct violation of COPPA. That is live streaming. If a live streamer prompts viewers to write personal data into the live chat area and one of those viewers is 12 or under, the creator will have access to COPPA-violating personal data. Additionally, comments on videos might be construed as in violation of COPPA if a child 12 and under writes something personally identifying into a comment. Though, I don’t know of many content creators who would intentionally request that their viewers reveal personally identifying information in a comment on YouTube. Most people (including content creators) know the dangers all too well of posting such personally identifying information in a YouTube comment. A child might not, though. I can’t recall having watched one single YouTube channel where the host requests personally identifying information be placed into a YouTube comment. Ignoring COPPA for a second, such a request would be completely irresponsible. Let’s continue…

COPPA does state this about collecting data under its ‘Definitions’ section:

Collects or collection means the gathering of any personal information from a child by any means, including but not limited to:

(1) Requesting, prompting, or encouraging a child to submit personal information online;

(2) Enabling a child to make personal information publicly available in identifiable form. An operator shall not be considered to have collected personal information under this paragraph if it takes reasonable measures to delete all or virtually all personal information from a child’s postings before they are made public and also to delete such information from its records; or

(3) Passive tracking of a child online.

The “Enabling a child” section above is the reason for the removal of comments when the “for kids” setting is defined. Having comments enabled on a video when a child 12 and under could be watching enables the child to write in personal information if they so choose. Simply having a comment system available to someone 12 and under appears to be an infraction of COPPA. YouTube creators DO have access to enable or disable comments. What YouTube creators don’t have access to is the age of the viewer. Google hides that information from YouTube content creators. YouTube content creators, in good faith, do not know the ages of anyone watching their channel.

Tracking a child’s activities is not possible by a YouTube content creator. A content creator has no direct or even incidental access to Google’s systems which perform any tracking activities. Only Google does. Therefore, number 3 does not apply to YouTube content creators. The only way number 3 would ever apply to a creator is if Google / YouTube offered direct access to its cookie tracking systems to its YouTube content creators. Therefore, only numbers 1 and 2 could potentially apply to YouTube content creators.

In fact, because Google Accounts hides its personal data from YouTube content creators (including the ages of their viewers), content creators don’t know anything personal about any of their viewers. Which means, how are YouTube content creators supposed to know if a child 12 and under is even watching?

Google’s Failures

The reality is, Google has failed to control its data collection under Google Accounts. It is Google Accounts that needs to have COPPA applied to it, not YouTube. In fact, this action by the FTC will actually solve NOTHING at Google.

Google’s entire system is tainted. Because of the number of services that Google owns and controls, placing COPPA controls on only ONE of these services (YouTube) is the absolute bare minimum for an FTC action against COPPA. It’s clear that the FTC simply doesn’t understand the breadth and scope of Google’s COPPA failures within its systems. Placing these controls on YouTube will do NOTHING to fix COPPA’s greater violations which continue unabated within the rest of Google’s Services, including its brand new video gaming streaming service, Google Stadia. Google Stadia is likely to draw in just as many children 12 and under as YouTube. Probably more. If Stadia has even one sharing or voice chat service active or uses cookies to track its users, Stadia is in violation for the same exact reasons YouTube is… Google’s failure of compliance within Google Accounts.

Worse, there’s Android. Many parents are now handing brand new Android phones to their children 12 and under. Android has MANY tracking features enabled on its phones. From the GPS on board, to cookies, to apps, to the cell towers, to the OS itself. Talk about COPPA violations.

What about Google Home? You know, that seemingly innocuous smart speaker? Yeah, that thing is going to track not only each individual’s voice, it may even store recordings of those voices. It probably even tracks what things you request and then, based on your Google Account, will target ads on your Android phone or on Google Chrome based on things you’ve asked Google Home to provide. What’s more personally identifying than your own voice being recorded and stored after asking something personal?

Yeah, YouTube is merely the tippiest tip of a much, much, MUCH larger corporate iceberg that is continually in violation of COPPA within Google. The FTC just doesn’t get that its $170 million fine and its First Amendment violating censorship efforts on YouTube aren’t the right course of action. Not only does the FTC’s involvement in censorship on YouTube lead to First Amendment violations, it won’t solve the rest of the COPPA violations at Google.

Here’s where the main body of this article ends.

Because there are still more questions, thoughts and ideas around this issue, let’s explore some deeper ideas which might answer a few more of your questions as a creator or as a parent. Each question is prefaced by a ➡️ symbol. At this point, you may want to skim the rest of this article for specific thoughts which may be relevant to you.


➡️ “Should I Continue with my YouTube Channel?”

This is a great question and one that I can’t answer for you. Since I don’t know your channel or your channel’s content, there’s no way for me to give advice to you. Even if you do tell me your channel and its content, the FTC explicitly states that it will be at the FTC’s own discretion if a channel’s content “is covered by COPPA”. This means you need to review your own channel content to determine if your video content drives kids 12 and under to watch. Even then, it’s a crap shoot.

Are there ways you can begin to protect your channel? Yes. The first way is to post a video requesting that all subscribers who are 12 and under either unsubscribe from the channel or alternatively ask their parents to provide verifiable consent to you to allow that child to continue watching. This consent must come from a parent or guardian, not the child. Obtaining verifiable consent is not as easy as it sounds. Though, after you have received verifiable parental consent from every “child” subscriber on your channel, you can easily produce this consent documentation to the FTC if they claim your channel is in violation.

The next option is to apply for TRUSTe’s Children’s Privacy Certification. This affords your YouTube channel “Safe Harbor” protections against the FTC. This one is likely most helpful for large YouTube channels which tend to target children and which make significant income through ad monetization. TRUSTe’s certification is not likely to come cheap. This is the reason this avenue would only be helpful for the largest channels earning enough from monetization to pay for such a service.

Note, if you go through the “Safe Harbor” process or obtain consent for every subscriber, you won’t need to set your channel as ‘for kids’. Also note that “Safe Harbor” may not be possible due to Google owning all of the equipment that operates YouTube. Certification programs usually require you to have direct access to systems to ensure they continue to comply with the terms of the certification. Certifications usually also require direct auditing of systems to ensure the systems comply with the certification requirements. It’s very doubtful that Google will allow an auditing firm to audit YouTube’s servers on behalf of a content creator for certification compliance… and even if they did allow such an audit, YouTube’s servers would likely fail the certification audit.

The final option is to suspend your channel. Simply hide all of your content and walk away from YouTube. If you decide to use another video service like DailyMotion, Vimeo, or Twitch, the FTC may show up there as well. If the FTC can make the biggest video sharing service in the world bow down, the rest of these video sharing services are likely not far behind.

➡️ “I don’t monetize my channel”

This won’t protect you. It’s not about monetization. It’s about data collection. The FTC is holding channel owners responsible for Google’s irresponsible data collection practices. Because Google can’t seem to police its own data collection to shield its end users from COPPA, Google/YouTube has decided to skip trying to fix their broken system and, instead, has chosen to pass their violations down onto their end users… the YouTube creators.

This “passing off liability” action is fairly unheard of in most businesses. Most businesses attempt to shield their end users, as much as possible, from legal liabilities arising from the use of their services. Not Google or YouTube. They’re more than willing to hang their end users out to dry and let those users take the burden of Google’s continued COPPA violations.

➡️ “My content isn’t for kids”

That doesn’t matter. What matters is whether the FTC thinks it is. If your content is animated, video game related, toy related, art related, craft related or in any way might draw in children as viewers, that’s all that matters. Even one child 12 and under is enough to shift Google’s COPPA data collection liabilities down onto your shoulders.

➡️ “I’ve set my channel as ‘not for kids'”

This won’t protect you. Google has a tool in the works that will scan the visual content of a video and potentially reclassify a video as “for kids” in defiance of the channel-wide setting of “not for kids”. Don’t expect that the channel-wide setting will hold up for every single video you post. YouTube can reclassify videos as it sees fit. Whether there will be a way to appeal this is as yet unknown. To get rid of that reclassification of a video, you may have to delete the video and reupload. Though, if you do this and the content remains the same, it will likely be scanned and marked “for kids” again by YouTube’s scanner. Be cautious.

➡️ “I’ll set my channel ‘for kids'”

Do this only if you’re willing to live with the restrictions AND only if your content really is for kids (or is content that could easily be construed as for kids). While this channel setting may seem to protect your channel from COPPA violations, it actually doesn’t. On the other hand, if your content truly isn’t for children and you set it ‘for kids’ that may open your channel up to other problems. I wouldn’t recommend setting content as ‘for kids’ if the content you post is not for kids. Though, there’s more to this issue… keep reading.

Marking your content “for kids” won’t actually protect you from COPPA. In fact, it makes your channel even more liable to COPPA violations. If you mark your content as “for kids”, you are then firmly under the obligation of providing proof that your channel absolutely DID NOT collect data from children under the age of 13. Since the FTC is making creators liable for Google’s problematic data collection practices, you could be held liable for Google’s broken data collection system simply by marking your content as ‘for kids’.

This setting is very perilous. I definitely don’t recommend ANY channel use this setting… not even if your channel is targeted at kids. By setting ‘for kids’ on any channel or content, your channel WILL become liable under COPPA’s data collection provisions. Worse, you will be held liable for Google’s data collection practices… meaning the FTC can come after you with fines. This is where you will have to fight to prove that you presently don’t have access to any child’s collected data, that you never did and that it was solely Google who stored and maintained that data. If you don’t possess any of this alleged data, it may be difficult for the FTC to uphold fines against channel owners. But, unfortunately, it may cost you significant attorney fees to prove that your channel is in the clear.

Finally, it’s entirely possible that YouTube may change this ‘for kids’ setting so that it becomes a one-way transition. This means that you may be unable to undo this change in the future. If it becomes one way, then a channel that is marked ‘for kids’ may never be able to go back to ‘not for kids’. You may have to create an entirely new channel and start over. If you have a large channel following, that could be a big problem. Don’t set your channel ‘for kids’ thinking you are protecting your channel. Do it because you’re okay with the outcome and because your content really is targeted for kids. But, keep in mind that setting ‘for kids’ will immediately allow the FTC to target your channel for COPPA violations.

➡️ “I’m a parent and I wish to give verifiable parental consent”

That’s great. Unfortunately, doing so is complicated. Because it’s easy for a child to fabricate such information using friends or parents of friends, giving verifiable consent to a provider is more difficult for parents than it sounds. It requires first verifying your identity as a parent, then it requires the provider to collect consent documentation from you.

It seems that Google / YouTube have not yet set up a mechanism to collect verifiable consent themselves, let alone for YouTube content creators. That means there’s no easy way for you as a parent to give (or for a channel to get) verifiable consent. On the flip side, as a content creator, it is left to you to handle contacting parents and collecting verifiable consent for child subscribers. You can use a service that will cost you money or you can do it yourself. As a parent, you can do your part by contacting a channel owner and giving them explicit verifiable consent. Keep reading to understand how to go about giving consent.

Content Creators and Parental Consent

Signing up for a service that handles verifiable consent is something that larger YouTube channels may be able to afford. But for a small YouTube channel, collecting such information from every new subscriber will be difficult. Google / YouTube could set up such an internal verification service for its creators, but YouTube doesn’t care about that or about complying with COPPA. If Google cared about complying with COPPA, it would already have a properly working age verification system in Google Accounts that forces children to set their real age and which requires verifiable consent from the parent of a child 12 and under. If a child 12 and under is identified, Google could then block access to all services that might allow the child’s data to be collected in violation of COPPA until such consent is given.

It gets even more complicated. Because YouTube no longer maintains a private messaging service, there’s no way for a channel owner to contact subscribers directly on the YouTube platform other than posting a one-way communication video to your channel showing an email address or other means to contact you. This is why it’s important for each parent to reach out to each YouTube channel owner where the child subscribes and offer verifiable consent to the channel owner.

As a creator, this means you will need to post a video stating that ALL subscribers who are under the age of 13 must have parental consent to watch your channel. The child will need to ask their parent to contact you using a COPPA authorized mechanism to provide consent. This will allow you to begin the collection of verifiable consent from parents of any children watching or subscribed to your content. Additionally, with every video you post, you must also include an intro stating that all new subscribers 12 and under must have their parent contact the channel owner to provide consent. This shows the FTC that your channel is serious about collecting verifiable parental consent.

So what is involved in Do It Yourself consent? Not gonna lie. It’s going to be very time consuming. However, the easiest way to obtain verifiable consent is setting up and using a two-way video conferencing service like Google Hangouts, Discord or Skype. You can do this yourself, but it’s better if you hire a third party to do it. It’s also better to use a service like Hangouts which shows all parties’ faces together on the screen at once. This way, when you record the call for your records, your face and the parent’s and child’s faces are all readily shown. This shows you didn’t fabricate the exchange.

To be valid consent, both the parent and the child must be present and visible in the video while conferencing with the channel owner. The channel owner should also be present in the call and visible on camera if possible. Before beginning, the channel owner must notify the parent that the call will be recorded by the channel owner for the sole purposes of obtaining and storing verifiable consent. You may want to ensure the parent understands that the call will only and ever be used for this purpose (and hold to that). It is off limits to post these videos as a montage on YouTube as content. Then, you may record the conference call and keep it in the channel owner’s records. As a parent, you need to be willing to offer a video recorded statement to the channel owner stating something similar to the following:

“I, [parent or guardian full name], am 18 years of age or older and give permission to [your channel name] for my child / my ward [child’s YouTube public profile name] to continue watching [your channel name]. I additionally give permission to [your channel name] to collect any necessary data from my child / my ward while watching your channel named [your channel name].”

If possible, the parent should hold up the computer, tablet, phone or device that the child will use to the camera so that it clearly shows the child account’s profile name is logged into YouTube on your channel. This will verify that it is, indeed, the parent or legal guardian of that child’s profile. You may want to additionally request the parent hold up a valid form of picture ID (driver’s license or passport) obscuring any addresses or identifiers with paper or similar to verify the picture and name against the person performing consent. You don’t need to know where they live, you just need to verify that the name and photo on the ID match the person you are speaking to.

Record this video statement for your records and store this video recording in a safe place in case you need to recall this video for the FTC. There should be no posting of these videos to YouTube or any other place. These are solely to be filed for consent purposes. Be sure to also check that the person with the child is old enough to be an adult, that the ID seems legit and that the person is not that child’s sibling or someone falsifying this verification process. If this is a legal guardian situation, validating legal guardianship is more difficult. Just do your best and hope that the guardian is being truthful. If in doubt, thank the people on the call for their time and then block the subscriber from your channel.

If your channel is owned by a corporation, the statement should include the name of the business as well as the channel. Such a statement over a video offers verifiable parental consent for data collection from that child by that corporation and/or the channel. This means that the child may participate in comment systems related to your videos (and any other data collection as necessary). Yes, this is a lot of work if you have a lot of under 13 subscribers, but it is the work that the U.S. Government requires to remain compliant with COPPA. The more difficult part is knowing which subscribers are 12 and under. Google and YouTube don’t provide any place to determine this. Instead, you will need to ask your child subscribers to submit parental consent.

If the DIY effort is too much work, then the alternative is to post a video requesting that 12 and under subscribers contact you via email stating their YouTube public subscriber identifier. Offer up an email address for this purpose. It doesn’t have to be your primary address. It can be a ‘throw away’ address solely for this purpose. For any account that emails you their account information, block it. This is the simplest way to avoid 12 and under children who may already be in your subscriber pool. Additionally, be sure to state in every future video that any viewers 12 and under watching the channel must have parental consent or risk being blocked.

Note, you may be thinking that requesting any information from a child 12 and under is in violation of COPPA, but it isn’t. COPPA allows for a reasonable period of time to collect personal data while in the process of obtaining parental consent before that data needs to be irrevocably deleted. After you block 12 and under subscribers, be sure to delete all correspondence sent to that email address. Make sure that the email correspondence isn’t sitting in a trashcan. Also make sure that not only are the emails fully deleted, but that any collected contact information is fully purged from that email system. Many email services automatically collect and store email addresses into an automatic address list. Make sure that these automatic lists are also purged. As long as all contact data has been irrevocably deleted, you aren’t violating COPPA.
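If the throw-away consent address lives in a standard IMAP mailbox, a small script can help with the purge step. The sketch below is only an illustration, not a supported tool: the host name, account, password and folder names are all placeholder assumptions, and any auto-saved contact lists still have to be purged separately through your provider’s contacts interface.

```python
import imaplib

# Everything here is a placeholder assumption -- substitute your own provider,
# throw-away consent account and folder names (providers name Trash differently,
# e.g. "[Gmail]/Trash"). Auto-saved contact lists must be purged separately.
IMAP_HOST = "imap.example.com"
ACCOUNT = "consent-requests@example.com"
PASSWORD = "app-password-here"
FOLDERS = ["INBOX", "Sent", "Trash"]

with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
    imap.login(ACCOUNT, PASSWORD)
    for folder in FOLDERS:
        if imap.select(folder)[0] != "OK":
            continue  # this provider doesn't have a folder by that name
        ok, data = imap.search(None, "ALL")
        for num in data[0].split():
            imap.store(num, "+FLAGS", r"\Deleted")  # flag every message as deleted
        imap.expunge()  # permanently remove the flagged messages from this folder
```

Run something like this only against the dedicated consent mailbox, and only after consent has been obtained or the accounts in question have been blocked.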

COPPA recognizes the need to collect personal information to obtain parental consent:

(c) Exceptions to prior parental consent. Verifiable parental consent is required prior to any collection, use, or disclosure of personal information from a child except as set forth in this paragraph:

(1) Where the sole purpose of collecting the name or online contact information of the parent or child is to provide notice and obtain parental consent under §312.4(c)(1). If the operator has not obtained parental consent after a reasonable time from the date of the information collection, the operator must delete such information from its records;

This means you CAN collect a child’s or parent’s name or contact information in an effort to obtain parental consent and that data may be retained for a period of “reasonable time” to gain that consent. If consent is not obtained in that time, then the channel owner must “delete such information from its records”.
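To keep yourself honest about that “reasonable time” window, it may help to keep a dated log of every consent request and flag the entries that have gone stale. The sketch below is only an illustration under assumptions: the CSV file name, its columns and the 30-day window are placeholders, since COPPA doesn’t define an exact number of days.

```python
import csv
from datetime import date, timedelta

# Placeholder assumptions: file name, column names and the 30-day window.
CONSENT_LOG = "consent-requests.csv"   # columns: subscriber_id, requested_on, consent_received
REASONABLE_DAYS = 30

today = date.today()
overdue = []

with open(CONSENT_LOG, newline="") as fh:
    for row in csv.DictReader(fh):
        if row["consent_received"].strip().lower() == "yes":
            continue  # consent obtained; the record may be kept
        requested = date.fromisoformat(row["requested_on"])
        if today - requested > timedelta(days=REASONABLE_DAYS):
            overdue.append(row["subscriber_id"])

# Anything listed here has passed the window without consent and must be
# irrevocably deleted from your records (emails, contacts and this log).
for subscriber_id in overdue:
    print(f"DELETE all stored contact info for: {subscriber_id}")
```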

➡️ “How can I protect myself?”

As long as your channel remains on YouTube with published content, your channel is at risk. As mentioned above, there are several steps you can take to reduce your risks. I’ll list them here:

  1. Apply for Safe Harbor with TrustArc’s TRUSTe certification. It will cost you money, but once certified, your channel will be safe from the FTC so long as you remain certified under the Safe Harbor provisions.
  2. Remove your channel from YouTube. So long as no content remains online, the FTC can’t review your content and potentially mark it as “covered by COPPA.”
  3. Wait and see. This is the riskiest option. The FTC makes some claims that it intends to prove you had access to, stored and maintained protected data from children. However, there are just as many statements indicating it will take action first, then request proof later. Either way, contesting that burden of proof will be difficult for most channels. It also means a court battle.
  4. Use DIY methods or locate a service to obtain verifiable parental consent for every subscriber 12 and under.

➡️ “What went wrong?”

A whole lot failed on Google and YouTube’s side. Let’s get started with bulleted points of Google’s failures.

  • Google has failed to identify children 12 and under to YouTube content creators.
  • Google has failed to offer mechanisms to creators to prevent children 12 and under from viewing content on YouTube.
  • Google has failed to prevent children 12 and under from creating a Google Account.
  • Google has failed to offer a system to allow parents to give consent for children 12 and under to Google. If Google had collected parental consent for 12 and under, that consent should automatically apply to content creators… at least for data input using Google’s platforms.
  • Google has failed to warn parents that they will need to provide verifiable consent for children 12 and under using Google’s platform(s). Even the FTC has failed to warn parents of this fact.
  • YouTube has failed to provide an unsubscribe tool to creators to easily remove any subscribers from a channel. See question below.
  • YouTube has failed to provide a blocking mechanism that prevents a Google Account from searching, finding or watching a YouTube channel.
  • YouTube has failed to identify accounts that may be operated by a child 12 and under and warn content creators of this fact, thus allowing the creator to block any such accounts.
  • YouTube has failed to offer a tool to allow creators to block specific (or all) content from viewers 12 and under.
  • YouTube has failed to institute a full ratings system, such as the TV Parental Guidelines, which sets a rating on each video and displays a rating identifier within the first 2 minutes, stating that a video may contain content inappropriate for certain age groups. Such a full ratings system would allow parents to block specific ratings of content from their child using parental controls. This would not only prevent children 12 and under from viewing more mature rated YouTube content, it would let parents block content for all age groups handled by the TV Parental Guidelines.

➡️ “I’m a creator. Can I unsubscribe a subscriber from my channel?”

No, you cannot. But, you can “Block” the user and/or you can “Hide user from channel” depending on where you are in the YouTube interface. Neither of these functions is available as a feature directly under the Subscriber area of YouTube Creator. Both of these features require digging into separate public Google areas. These mechanisms don’t prevent a Google Account from searching your channel and watching your public content, however.

To block a subscriber, enter the Subscribers area of your channel using Creator Studio Classic to view a list of your subscribers. A full list of subscribers is NOT available under the newest YouTube Studio. You can also see your subscribers (while logged into your account) by navigating to https://www.youtube.com/subscribers. From here, click on the username of the subscriber. This will take you to that subscriber’s YouTube page. From this user page, locate a small grey flag in the upper portion of the screen. I won’t snapshot the flag or give its exact location because YouTube is continually moving this stuff around and changing the flag image shape. Simply look for a small flag icon and click on it, which will drop down a menu. This menu will allow you to block this user.

Blocking a user prevents all interactions between that user and your channel(s). They will no longer be able to post comments on your videos, but they will still be able to view your public content and they will remain subscribed if they already are.

The second method is to use “Hide user from channel”. You do this by finding a comment on the video from that user and selecting “Hide user from channel” using the three vertical dot drop-down menu to the right of the comment. You must be logged into your channel and viewing one of your video pages for this to work.

Hiding a user and blocking a user are effectively the same thing, according to YouTube. The difference is only in the method of performing the block. Again, none of the above allows you to unsubscribe users manually from your channel. Blocking or hiding a user still allows the user to remain subscribed to your channel as stated above. It also allows them to continue watching any public content that you post. However, a blocked or hidden user will no longer receive notifications about your channel.

This “remaining subscribed” distinction is important because the FTC appears to be using audience viewer demographics as part of its method to determine if a channel is directing its content towards children 12 and under. It may even use subscriber demographics. Even if you do manage to block an account of a child 12 and under who has subscribed to your channel, that child remains a subscriber and can continue to search for your channel and watch any content you post. That child’s subscription to your channel may, in fact, continue to impact your channel’s demographics, thus leading to possible action by the FTC. By blocking 12 and under children, you may be able to use this fact to your advantage by proving that you are taking action to prevent 12 and under users from posting inappropriate data to your channel.

➡️ “What about using Twitch or Mixer?”

Any video sharing or live streaming platforms outside of and not owned by Google aren’t subject to Google’s / YouTube’s FTC agreement.

Twitch

Twitch isn’t owned or operated by Google. They aren’t nearly as big as YouTube, either. Monetization on Twitch may be less than can be had on YouTube (at least before this COPPA change).

Additionally, Twitch’s terms of service are fairly explicit regarding age requirements, which should prevent COPPA issues. Twitch’s terms state the following about minors using Twitch:

2. Use of Twitch by Minors and Blocked Persons

The Twitch Services are not available to persons under the age of 13. If you are between the ages of 13 and 18 (or between 13 and the age of legal majority in your jurisdiction of residence), you may only use the Twitch Services under the supervision of a parent or legal guardian who agrees to be bound by these Terms of Service.

This statement is more than Google provided for its creators. This statement by Twitch explicitly means Twitch intends to protect its creators from COPPA and any other legal requirements associated with minors or “children” using the Twitch service. For creators, this peace of mind is important.

Unfortunately, Google offers no such peace of mind to creators. In fact, the whole way YouTube has handled COPPA is sloppy at best. If you are a creator on YouTube, you should seriously consider this a huge breach of trust between Google and you, the creator.

Mixer

Mixer is presently owned by Microsoft. I’d recommend caution using Mixer. Because Microsoft allows 12 and under onto its ID system, it may end up in the same boat as YouTube. It’s probably a matter of time before the FTC targets Microsoft and Mixer with similar actions.

Here’s what Mixer’s terms of service say about age requirements:

User Age Requirements

  • Users age 12 years and younger cannot have a channel of their own. The account must be owned by the parent, and the parent or guardian MUST be on camera at all times. CAT should not have to guess whether a parent is present or not. If such a user does not appear to have a guardian present, they can be reported, so CAT can investigate further.
  • Users aged 13-16 can have a channel, with parental consent. They do not require an adult present on camera. If they are reported, CAT will take steps to ensure that the parent is aware, and has given consent.

This looks great and all, but within the same terms of service area it also states:

Users Discussing Age In Chat

We do NOT have any rule against discussing or stating age. Only users who claim to be (or are suspected to be) under 13 will be banned from the service. If someone says they are under 13, it is your choice to report it or not; if you do report it, CAT will ban them, pending proof of age and/or proof of parental consent.

If someone is streaming and appears to be under 16 without a parent present, CAT may suspend the channel, pending proof of parental consent and age. Streamers under 13 have a special exception, noted [above].

If you’re wondering what “CAT” is, it stands for Community Action Team (AKA moderators) for Mixer. The above is effectively a “Don’t Ask, Don’t Tell” policy. It also means Mixer has no one to actively police the service for underage users, not even its CAT team. It also means that Mixer is aware that persons 12 and under are using Mixer’s services. By making the above statement, it opens Mixer up to auditing by the FTC for COPPA compliance. If you’re considering using Mixer, this platform could also end up in the same boat as YouTube sooner rather than later considering the size of Microsoft as a company.

Basically, Twitch’s Terms of Service are better written for creator peace of mind.

➡️ “What is ‘burden of proof’?”

When faced with civil legal circumstances, you are either the plaintiff or the defendant. The plaintiff is the party levying the charges against the other party (the defendant). Depending on the type of case, burden of proof must be established by the plaintiff to show that the defendant did (or didn’t) do the act(s) alleged. The type of burden of proof is slightly different when the action is a civil suit versus a criminal suit.

Some cases require the plaintiff to take on the burden of proof to show the act(s) occurred. But, it’s not that simple for the defendant. The defendant may be required to bring both character witnesses and actual witnesses which may, in fact, establish a form of burden of proof that the acts could not have occurred. Even though burden of proof is not explicitly required of a defendant, that doesn’t mean you won’t need to provide evidence to exonerate yourself. In the case of a civil FTC action, the FTC is the plaintiff and your channel will be the defendant.

The FTC itself can only bring civil actions against another party. The FTC will be required to handle the burden of proof to prove that your channel not only collected the alleged COPPA protected data, but that you have access to and remain in possession of such data.

However, the FTC can hand its findings over to the United States Department of Justice, which has the authority to file both civil and criminal lawsuits. Depending on where the suit is filed and by whom, you could face either civil penalties or criminal penalties. It is assumed that the FTC will file its COPPA-related legal actions directly as civil suits… but that’s just an assumption. The FTC does have the freedom to request that the Department of Justice handle the complaint.

One more time, this article is not legal advice. It is simply information. If you need actual legal advice, you are advised to contact an attorney who can understand your specific circumstances and offer you legal advice for your specific circumstances.


Rant Time: Google doesn’t understand COPPA

Posted in botch, business, california, rant by commorancy on November 24, 2019

We all know what Google is, but what is COPPA? COPPA stands for the Children’s Online Privacy Protection Act and is legislation designed to incidentally protect children by protecting their personal data given to web site operators. YouTube has recently made a platform change allegedly around COPPA, but it is entirely misguided. It also shows that Google doesn’t fundamentally understand the COPPA legislation. Let’s explore.

COPPA — What it isn’t

The COPPA body of legislation is intended to protect how and when a child’s personal data may be collected, stored, used and processed by web site operators. It has very specific verbiage describing how and when such data can be collected and used. It is, by its very nature, a data protection and privacy act. It protects the data itself… and, by extension, the protection of that data hopes to protect the child. This Act isn’t intended to protect the child directly and it is misguided to assume that it does. COPPA protects personal private data of children.

By the above, that means that the child is incidentally protected by how their collected data can (or cannot) be used. For the purposes of COPPA, a “child” is defined to be any person under the age of 13. Let’s look at a small portion of the body of this text.

General requirements. It shall be unlawful for any operator of a Web site or online service directed to children, or any operator that has actual knowledge that it is collecting or maintaining personal information from a child, to collect personal information from a child in a manner that violates the regulations prescribed under this part. Generally, under this part, an operator must:

(a) Provide notice on the Web site or online service of what information it collects from children, how it uses such information, and its disclosure practices for such information (§312.4(b));

(b) Obtain verifiable parental consent prior to any collection, use, and/or disclosure of personal information from children (§312.5);

(c) Provide a reasonable means for a parent to review the personal information collected from a child and to refuse to permit its further use or maintenance (§312.6);

(d) Not condition a child’s participation in a game, the offering of a prize, or another activity on the child disclosing more personal information than is reasonably necessary to participate in such activity (§312.7); and

(e) Establish and maintain reasonable procedures to protect the confidentiality, security, and integrity of personal information collected from children (§312.8).

This pretty much sums up the tone for what follows in the body text of this legislation. What it essentially states is all about “data collection” and what you (as a web site operator) must do specifically if you intend to collect specific data from someone under the age of 13… and, more specifically, what data you can and cannot collect.

YouTube and Google’s Misunderstanding of COPPA

YouTube’s parent company is Google. That means that I may essentially interchange “Google” for “YouTube” because both are one and the same company. With that said, let’s understand how Google / YouTube fundamentally does not understand the COPPA body of legislation.

Google has recently rolled out a new feature to its YouTube content creators. It is a checkbox available both as a channel-wide setting and as an individual video setting. This setting flags whether the video is targeted towards children or not (see image below for this setting’s details). Let’s understand Google’s misunderstanding of COPPA.

COPPA is a data protection act. It is not a child protection act. Sure, it incidentally protects children because of what is allowed to be collected, stored and processed, but make no mistake, it protects collected data directly, not children. With that said, checking a box on a video whether it is appropriate for children has nothing whatever to do with data collection. Let’s understand why.

Google has, many years ago in fact, already implemented a system to prevent “children” (as defined by COPPA) from signing up for and using Google’s platforms. What that means is when someone signs up for a Google account, that person is asked questions to ascertain the person’s age. If that age is identified as under 13, that account is classified by Google as in use by a “child”. Once Google identifies a child, it is then obligated to uphold ALL laws governed by COPPA (and other applicable child privacy laws) … that includes all data collection practices required by COPPA and other applicable laws. It can also then further apply Google-related child protections to that account (e.g. to prevent the child from viewing inappropriate content on YouTube). Google would have needed to uphold these data privacy laws since the year 2000, when COPPA took effect. If Google has failed to protect a child’s collected data or failed to uphold COPPA’s other provisions, then that’s on Google. It is also a situation firmly between Google and the FTC … the governmental body tasked with enforcing the COPPA legislation. Google solely collects the data. Therefore, it is exclusively on Google if that data is used or collected in inappropriate ways, counter to COPPA’s requirements.

YouTube’s newest “not appropriate for children” flag

As of November 2019, YouTube has implemented a new flag for YouTube content creators. The channel-wide setting looks like so:

[Screenshot: YouTube’s channel-wide “made for kids” audience setting]

This setting, for all intents and purposes, isn’t related to COPPA. COPPA doesn’t care whether video content is targeted towards children. COPPA cares about how data is collected from children and how that data is then used by web sites. COPPA is, as I said above, all about data collection practices, not about whether content is targeted towards children.

Let’s understand that in the visual entertainment area, there are already ratings systems which apply. Systems such as the ESRB ratings system, founded in 1994, specifically set ratings for video games depending on the types of content contained within. For TV shows, there are the TV Parental Guidelines, which began in 1996 and were developed jointly by the US Congress, the TV industry and the FCC. These guidelines rate TV shows as TV-Y, TV-14 or TV-MA depending, again, on the content within. This was mandated in 1997 by the US Government due to its stranglehold on TV broadcast licenses. For theatrical films, there’s the MPAA’s movie ratings system, which began in 1968. So, it’s not as if there aren’t already effective content ratings systems available. These voluntary systems have been in place for many years already.

For YouTube, marking your channel or video content as “made for kids” has nothing whatever to do with COPPA legislated data collection practices.

YouTube Creators

Here is exactly where we see Google and YouTube’s fundamental misunderstanding of COPPA. COPPA is about the protection and collection of data from children. Google collects, stores and uses all of the data it gathers. YouTube creators have very, very limited access to any of this Google-collected data. YouTube creators have no hand in its collection or its use. Google controls all of the data collection on YouTube. With the exception of comments and the list of subscribers of a channel, the data Google supplies to creators is almost exclusively limited to aggregate, unpersonalized statistical data. Even then, this data can be inaccurate depending on what the Google account holder stated when they signed up. Still, the personal subscriber data it does supply to content creators is limited to the subscriber’s public ID only. Google offers its content creators no access to deeper personal data, not even the age of its subscribers.

Further, Google (and pretty much every other web site) relies on truthfulness when people sign up for services. Google does not in any way verify the information given to Google during the signup process or that this information is in any way accurate or truthful. Indeed, Google doesn’t even verify the identity of the person using the account or even require the use of real names. The only time Google does ANY level of identity verification is when someone uses Google Wallet. Even then, it’s only because identity verification is needed due to possible credit card fraud issues. Google Wallet is a pointless service; many other payment systems, such as Apple Pay, Amazon Checkout and, yes, PayPal, do it better. I digress.

With that said, Google is solely responsible for all data collection practices associated with YouTube (and its other properties) including storing, processing and managing of that data. YouTube creators have no control over what YouTube (or Google) chooses to collect, store or disseminate. Indeed, YouTube creators have no control over YouTube’s data collection or storage practices whatsoever.

This new alleged “COPPA mechanism” that YouTube has implemented has nothing whatever to do with data collection practices and everything to do with content which might be targeted towards “children”. Right now, this limited mechanism is pretty much a binary system (a very limited system). The channel either does or doesn’t target content towards children (either as a whole or video by video). It’s entirely unclear what happens in either case on YouTube, though some creators have had seemingly bad luck with their content, which has been manually reviewed by YouTube staff and misclassified as “for children” when the content clearly is not. These manual overrides have even run counter to the global channel settings, which have been set to “No, set this channel as not made for kids.”

Clearly, this new mechanism has nothing to do with data collection and everything to do with classifying which content is suitable for children and which isn’t. This defines a …

Ratings System

Ratings systems in entertainment content are nothing new. TV has had a content ratings system since the mid 90s. Movies have had ratings systems since the late 60s. Video games have had them since the mid 90s. COPPA, on the other hand, has entirely nothing to do with ratings or content. It is legislation that protects children by protecting their data. It’s pretty straightforward what COPPA covers, but one thing it does not cover is whether video content is appropriate to be viewed by children. Indeed, COPPA isn’t a ratings system. It is child data protection legislation.

How YouTube got this law’s interpretation so entirely wrong is anyone’s guess. I can’t even fathom how Google could have been led this astray. Perhaps Google’s very own lawyers are simply inept and not at all versed in COPPA? I have no idea… but whatever led YouTube’s developers to thinking the above mechanism in any way relates to COPPA is entirely wrong thinking. Nowhere does COPPA legislate YouTube video content appropriateness. Categorizing content is entirely up to a ratings system to handle.

Indeed, YouTube is treading on very thin ice with the FTC. Not only did they interpret the COPPA legislation completely wrong, they implemented “a fix” even more incorrectly. What Google and YouTube have done is shoot themselves in the foot… not once, but twice. The second shot is that Google has fully admitted that it doesn’t even have a functional, working ratings system. Indeed, it doesn’t… and now everyone knows it.

By adding this “new” mechanism, Google has now additionally admitted that children under the age of 13 use YouTube. With this one mechanism, Google has admitted to many things about children using its platform… which means YouTube and Google are both now in the hot seat with regards to COPPA. They must now completely ensure that YouTube (and Google by extension) is fully complying with the letter of COPPA’s verbiage when collecting children’s data.

YouTube Creators Part II

YouTube creators have no control over what Google collects from its users, that’s crystal clear. YouTube creators also don’t have access to view most of this data or access to modify anything related to this data collection system. Only Google has that level of access. Because Google controls its own data collection practices, it is on Google to protect any personal information it may have received by children using its platform.

That also means that content creators should be entirely immune from prosecution over such data collection practices… after all, the creators don’t own or control Google’s data collection systems.

This new YouTube mechanism seems to imply that creators have some level of liability and/or culpability for Google’s collection practices, when creators simply and clearly do not. Even the FTC made a striking statement that they may try to “go after” content creators. I’m not even sure how that’s possible under COPPA. Content creators don’t collect, store or manage data about children, regardless of the content that they create. The only thing content creators control is appropriateness of the content towards children… and that has nothing to do with COPPA and everything to do with a ratings system… a system that Google does not even have in place within YouTube.

Content creators, however, can voluntarily label their content as TV-MA or whatever they deem is appropriate based on the TV Parental Guidelines. After all, YouTube is more like TV than it is like a video game. Therefore, YouTube should offer and have in place the same ratings system as is listed in the TV Parental Guidelines. This recent COPPA-attributed change is actually YouTube’s efforts at enacting a content ratings system, albeit an extremely poor attempt at one. As I said, creators can only specify the age appropriateness of the content that they create. YouTube is simply the platform where it is shown.

FTC going after YouTube Creators?

Google controls its data collections systems, not its content creators (though YouTube does hold leverage over whether content is or remains monetized). What that means is that it makes absolutely no sense for the FTC to legally go after content creators based on violations of COPPA. There may be other legislation they can lean on, but COPPA isn’t it. COPPA also isn’t intended to be a “catch all” piece of legislation to protect children’s behaviors on the Internet. It is intended to protect how data is collected and used by children under 13 years of age… that’s it. COPPA isn’t intended to be used as a “ratings system” for appropriateness by video sharing platforms like YouTube.

I can’t see even one judge accepting, let alone prosecuting such a clear cut case of legal abuse of the justice system. Going after Google for COPPA violations? Sure. They stored and collected that data. Going after the YouTube content creators? No, I don’t think so. They created a video and uploaded it, but that had nothing whatever to do with how Google controls, manages or collects data from children.

If the US Federal Government wants to create law to manage appropriateness of Internet content, then they need to draft it up and pass it. COPPA isn’t intended for that purpose. Voluntary ratings systems have been in place for years including within motion pictures, TV and now video games. So then why is YouTube immune from such rating systems? Indeed, it’s time YouTube was forced to implement a proper ratings system instead of this haphazard binary system under the false guise of COPPA.

Content Creator Advice

If you are a YouTube content creator (or create on any other online platform), you should take advantage of the thumbnail and describe the audience your content targets. The easiest way to do this is to use the same ratings system implemented by the TV Parental Guidelines… such as TV-Y, TV-14 and TV-MA. Placing this information firmly on the thumbnail and also placing it onto the video at the beginning explicitly states towards which age group and audience your content is targeted. By voluntarily rating not only the thumbnail, but also the content itself within the first 5 minutes of the video opening, your video cannot be misconstrued as being for any other group or audience. This means that even though your video is not intended for children, placing the TV Parental Guidelines rating literally onto the video states that fact in plain sight.
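For creators comfortable with the command line, here is one possible way to burn a rating bug into the opening seconds of a video before uploading. This is only a sketch under assumptions: it assumes ffmpeg with the drawtext filter is installed, and the file names, rating text, font path and 15-second display window are all placeholders to adjust for your own content.

```python
import subprocess

# Placeholders/assumptions: input and output file names, the rating text,
# the font path and the 15-second display window.
INPUT = "my-video-master.mp4"
OUTPUT = "my-video-rated.mp4"
RATING = "TV-MA"
FONT = "/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf"

# drawtext overlays the rating in a semi-transparent box during the first 15 seconds.
drawtext = (
    f"drawtext=fontfile={FONT}:text='{RATING}':"
    "fontsize=48:fontcolor=white:box=1:boxcolor=black@0.5:"
    "x=40:y=40:enable='lt(t,15)'"
)

subprocess.run(
    ["ffmpeg", "-i", INPUT, "-vf", drawtext, "-c:a", "copy", OUTPUT],
    check=True,  # raise if ffmpeg reports an error
)
```

The audio stream is copied untouched; only the video is re-encoded with the overlay applied.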

If a YouTube employee manually reclassifies your video as being “for children” even when it isn’t, labeling your content in the video’s opening as TV-MA explicitly states that the program is not suitable for children. You might even create an additional disclaimer as some TV programs do stating:

This content is not suitable for all audiences. Some content may be considered disturbing or controversial. Viewer or parental discretion is advised.

Labeling your video means that even the FTC can’t argue that your video somehow inappropriately targeted children… even though this new YouTube system has nothing to do with COPPA. Be cautious, use common sense and use best practices when creating and uploading videos to YouTube. YouTube isn’t there to protect you, the creator. The site is there to protect YouTube and Google. In this case, this new creator feature is entirely misguided as a COPPA helper, when it is clearly intended to be a ratings system.

Before you go…

One last thing… Google controls everything about the YouTube platform including the “recommended” lists of videos. If, for whatever reason, Google chooses to promote a specific video towards an unintended audience, the YouTube creator has no control over this fact. In point of fact, the content creator has almost no control over any promotion or placement of their video within YouTube. The only exception is if YouTube allows for paid promotion of video content (and they probably do). After all, YouTube is in it for the $$$. If you’re willing to throw some of your money at Google, I’m quite sure they’d be willing to help you out. Short of paying Google for video placement, however, all non-paid placement is entirely at the sole discretion of Google. The YouTube creator has no control over their video’s placement within “recommended” lists or anywhere else on YouTube.


Rant Time: What is a Public Safety Power Shutoff?

Posted in bankruptcy, botch, business, california by commorancy on October 10, 2019

Here’s where jurisprudence meets our everyday lives (and safety) and here is also where PG&E is severely deluded and fast becoming a menace. There is actually no hope for this company. Let’s explore.

California Fire Danger Forecasting

“Officials” in California (I’m not sure exactly which specific organization is being referred to here) predicted the possibility of high winds, which could spark wildfires. This happened earlier in the week of October 7 (or possibly before). As I said, these are “predictions”. Yet, as far as I can see, no strong winds have come to pass… a completely separate issue, but one heavily tied to this story.

Yet, PG&E has taken it upon themselves to begin powering off areas of Northern California in “preparation” for these “predictions”… not because of an actual wind event. If the high winds had begun to materialize, then yes, perhaps mobilize and begin the power shutoffs. Did PG&E wait for this? No, they did it anyway.

What exactly is Public Safety?

In the context of modern society, pretty much everything today relies on electric power generation to operate our public safety infrastructure. This infrastructure includes everything from traffic lights to street lights to hospitals to medical equipment to refrigeration. All of these need power to function and keep the public safe. To date, we have come to rely on monopoly services like PG&E to provide these energy delivery services. Yet, what happens when the one and only thing that PG&E is supposed to do is the one thing it can’t manage to do?

Granted, what PG&E has done is intentional, but the argument is, “Are the PG&E power outages in the best interest of public safety?” Let’s explore this even further.

PG&E claims that these power outages will reduce the possibility of a wildfire. Well, that might be true from a self-centered perspective of PG&E as a corporation. After all, they’ve been tapped several times for legal liability over recent wildfire events. They’ve even had to declare bankruptcy to cover those costs incurred as a result. We’ll come to the reason behind this issue a little bit later. However, let’s stay focused on the Public Safety aspect for the moment.

PG&E claims it is in the best public safety interest to shut down its power grid. Yet, let’s explore that rationale. Sure, this outage action might reduce the possibility of sparking from a power line, but it doesn’t take into account the reduction in and lack of public safety from all of the other normal, everyday public safety mechanisms which have also had their power cut. As I said: street lights, traffic lights, hospitals, medical equipment, 911 services, airports and refrigeration.

The short term effect of shutting the power off might save some lives (based on a fire prediction that might not even come true), but then there are other lives which might be lost as a result of the power being shut off for days. Keep in mind that PG&E claims it might take up to 5 days to restore power after this scheduled power off event. That’s a long time to be without standard regular public safety mechanisms (simply ignoring the high wind advisory).

If PG&E has been found responsible for wildfires, then why aren’t they responsible for these incidental deaths that wouldn’t have occurred if the power had remained on? Worse, what about medical equipment and refrigeration? For people who rely on medical equipment to sustain their lives, what about these folks? How many of them could die from this outage? If it truly takes 5 days to get the power back on, what about the foods being sold at restaurants and grocery stores? If you do trust that food, you might get sick… very, very sick… as in food poisoning sick. Who is responsible for that? The retailer or the restaurant?

Sure, I guess to some degree it is the retailer / restaurant. They should have thrown the food out and replaced it with fresh foods. Even then, perhaps the distributors were also affected by the outage. We can’t really know how far the food spoilage chain might go. At the root of all of this, though, it is PG&E who chose to cut the power. How many people might die as a result of PG&E force shutting off the power grid versus how many might potentially die if a wildfire ignites?

I’ve already heard there have been a number of traffic accidents because the power has been cut to traffic lights. It’s not a common occurrence to have the power out at intersections. When it does happen, many motorists don’t know the rules… and worse, they simply don’t pay enough attention to follow them. They just blast on through the intersection. Again, who is responsible for this? The city? No. In this case, it is truly PG&E’s responsibility. The same goes for food poisoning as a result of the lack of refrigeration. What about the death of someone because their medicine spoiled without refrigeration?

Trading One Evil For Another

Truly, PG&E is playing with fire. They are damned if they do and damned if they don’t. The reality is, either way, shutting off electricity or leaving it on, PG&E risks the public’s safety. They are simply trading one set of public safety risks for another. Basically, they are “Robbing Peter to pay Paul.” In trying to thwart the possibility of igniting an accidental wildfire, the outage can cause traffic accidents, deaths in hospitals, food poisoning circumstances and so on down the list. When there is no power, there is real danger. Sometimes immediate danger, sometimes latent danger (food poisoning) which may present weeks later.

The reality is, it is PG&E who is responsible for this. PG&E “thinks” (and this is the key word here) that they are being proactive to prevent forest fires. In reality, they’re creating even more public safety issues by cutting the power off indiscriminately.

Cutting Power Off Sanely?

The first problem was in warning the public. PG&E came up with this plan on too short a notice. The public was not properly notified in advance. If this outage scenario were on the table of options for PG&E to pursue during the wildfire season, this information should have been disseminated early in the summer. People could have had several months to prepare for this eventuality. Instead of notifying months ahead, they chose to notify at a moment’s notice, forcing a cram situation in which everyone floods the stores and gas stations trying to keep their homes powered and prepare. At a bare minimum, PG&E should be held responsible for inciting a public frenzy. With proper planning and notification, people could have had several months’ notice to buy generators, stock up on water, buy a propane fridge, buy a propane stove, prep their fridges and freezers, and so on.

With a propane fridge, many people can still have refrigeration in their home during an extended (up to 7 day) power outage. This prevents spoilage of both foods and medicines. Unfortunately, when it comes to crunch-time notices, supplies and products run out quickly. Manufacturers don’t build products for crunch time. They build for a limited number of people buying over a longer period of time. Over several months, these manufacturers could have ramped up production for such a situation, but that can’t happen overnight. PG&E was entirely remiss with this notification. For such drastic, knee-jerk actions affecting public safety, it needs to notify the public months in advance of the possibility. This is public menace situation #1.

Indiscriminate Power Outages

Here’s a second big problem with PG&E’s outage strategy. PG&E can’t pick and choose its outages. Instead, its substations cover whole swaths of areas which may include such major public safety infrastructure as traffic lights and hospitals, let alone restaurants and grocery stores whose food is likely to spoil.

If PG&E could sanely turn off power to only specific businesses and residences without cutting power to hospitals, cell phone infrastructure, 911 and traffic infrastructure, then perhaps PG&E’s plan might be in better shape. Unfortunately, PG&E’s outage strategy is a sledgehammer approach. “Let’s just shut it all down.”, I can almost hear them say. Dangerous! Perhaps even more dangerous in the long term than the wildfire risk it is meant to avoid. Who’s to say? This creates public menace situation #2.

Sad Infrastructure

Unfortunately, this whole situation seems less about public safety and more about CYA. PG&E has been burned (literally) several times over the last few wildfire seasons. In fact, they were both literally and monetarily burned so hard that this is less about actual public safety and more about covering PG&E’s legal butt. Even then, as I said above, PG&E isn’t without legal liability simply because they decided to cut the power to thwart a wildfire. While the legal liability might not be for causing a wildfire, it might instead be for incidental deaths created by outages at intersections, by deaths created in hospitals and in homes due to medical equipment failure, by deaths created via food spoilage in restaurants and grocery stores… and even by food spoilage or lack of medical care in the home.

The reality behind PG&E’s woes is not tied to its supposedly proactive power outage measures, it is actually tied to its aging infrastructure. Instead of being proactive and replacing its wires to be less prone to sparking (what it should have been doing for the last 10 years or more), it has done almost nothing in this area. Instead of cutting back brush around its equipment, it has resorted to turning the power off. Its liability in wildfires is almost directly attributable to relying on infrastructure created and installed decades ago by the likes of Hetch Hetchy (and other early electric infrastructure builders) back in the early 1900s. I’m not saying that every piece of this infrastructure is nearly 100 years old, but some of it is. That’s something to think about right there.

PG&E does carry power from Hetch Hetchy to its end users via Hetch Hetchy generation facilities, but more importantly, through PG&E’s monopoly electric lines to its end users. PG&E also generates its own electricity from its own facilities. It also carries power from other generation providers like SVCE. The difficulty with PG&E is its monopoly in end user delivery. No other company is able to deliver power to PG&E’s end user territory, leaving consumers with only ONE commercial choice to power their home. End users can opt to install their own in-home energy generation systems such as solar, wind or even diesel generators (when the city allows), but that’s not a “commercial” provider like PG&E.

Because PG&E has the market sewn up, everyone who uses PG&E is at its mercy for solid, continuous power… that is, until it stops delivering. This is public menace situation #3.

Legal Troubles

I’m surprised that PG&E has even decided to use this strategy considering its risky nature. To me, this forced power outage strategy seems as big a liability in and of itself as the wildfire liability it is meant to avoid.

PG&E is assigned one task: Deliver Power. If it can’t do this, then PG&E needs to step aside and let a more capable company take over power delivery. If PG&E can’t even be bothered to update its aging equipment, which is at the heart of this entire problem, then it definitely needs to step aside and let a new company start over. Sure, a new company will take time to set it all up, but once it’s going, PG&E can quietly wind down and go away… which may happen anyway considering both its current legal troubles and its bankruptcy.

The state should, likewise, allow parties significantly impacted by this forced power outage (i.e., death or injury) to bring lawsuits against PG&E for its improperly planned and indiscriminately executed power outage. Except, because PG&E is still in bankruptcy court, consumers wronged by this outage must stand in line behind everyone already waiting in PG&E’s bankruptcy proceedings. I’m not even sure why the bankruptcy judge would have allowed this action by PG&E while still in bankruptcy. Considering the possibility of significant additional legal liabilities incurred by this forced outage, the bankruptcy judge should have foreseen this and denied the action. It’s almost like PG&E execs are all, “F-it, we’ll just turn it all off and if they want to sue us, they’ll have to get in line.” This malicious level of callous disregard for public safety needs much more state and legal scrutiny. The bankruptcy judge should have had a say over this action by PG&E. That they didn’t makes this public menace situation #4, truly making PG&E an official public safety menace and a nuisance.

Updated 10/11/2019 — Clarification

I’ve realized that while one point was made in the article, it wasn’t explicitly called out. To clarify this point, let’s explore. Because PG&E acted solely on a predicted forecast and didn’t wait for the wind event to actually begin, PG&E’s actions egregiously disregarded public safety. As I said in the main body of the article above, PG&E traded one “predicted” public safety event for actual, incurred public safety events. By proceeding to shut down the power WITHOUT the predicted wind event manifesting, PG&E acted recklessly towards public safety. As a power company, its sole reason to exist is to provide power and maintain public safety. By summarily shutting down power, not only did PG&E fail to provide the one thing it is in business to do, it shut the power down for reasons other than fire safety. As I stated above, this point is the entire reason that PG&E is now an official menace to the public.


Can I use my Xbox One or PS4 controller on my iPhone?

Posted in Apple, botch, california, game controller, gaming, video game by commorancy on September 16, 2019

This is a common question regarding the two most popular game controllers to have ever existed. Let’s explore.

MFi Certification

Let’s start with a little history behind why game controllers have been a continual problem for Apple’s iOS devices. The difficulty comes down to Apple’s MFi controller certification program. Since MFi’s developer specification release, not many controller makers have chosen to adopt it. The one notable exception is the SteelSeries Nimbus controller. It’s a fair controller, it holds well enough in the hand and has an okay battery life, but it’s not that well made. It does sport a Lightning port so you can charge it with your iPhone’s charger, however. That’s little consolation, though, when you actually want to use an Xbox One or PS4 controller instead.

Because Apple chose to rely on its own MFi specification and certification system, manufacturers would need to build a controller that satisfies that MFi certification. Satisfying the requirements of MFi and getting certified likely requires licensing technology built by Apple. As we know, licenses typically cost money paid to Apple for the privilege of using that technology. That’s great for Apple, not so great for the consumer.

Even though the SteelSeries Nimbus is by no means perfect, it really has become the de facto MFi controller simply because no other manufacturers have chosen to adopt Apple’s MFi system. And why would they?

Sony and Microsoft

Both Sony and Microsoft have held (and continue to hold) the market with the dominant game controllers. While the SteelSeries Nimbus may have become the de facto controller for Apple’s devices, simply because there is nothing else really available, the DualShock 4 and the Xbox One controllers are far and away better controllers for gaming. Apple hasn’t yet been able to break into the console market, even as much as they have tried with the Apple TV. Game developers just haven’t embraced the Apple TV in the same way they have the Xbox One and the PS4. The reason is obvious. The Apple TV, while reasonable for some games, simply does not offer the same level of graphics and gaming power as an Xbox One or PS4. It also doesn’t have a controller built by Apple.

Until Apple gets its head into the game properly with a suitably named system actually intended for gaming, rather than general purpose entertainment, Apple simply can’t become a third console. Apple tends to rely on these roundabout methods of introducing hardware to try and usurp, or at least insert itself into, certain markets. Because of this subtle, roundabout approach, it just never works out. In the case of MFi, it hasn’t worked out well for Apple.

Without a controller that Apple has built itself, few people see the Apple TV as anything more than a TV entertainment system with built-in apps… even if it can run limited games. It doesn’t ship with a controller. It isn’t named appropriately. Thus, it is simply not seen as a gaming console.

With that said, the PS4 and the Xbox One are fully seen as gaming consoles and prove that with every new game release. Sony and Microsoft also chose to design and build their own controllers based on their own specifications; specifications intended for use on their own consoles. Neither Sony nor Microsoft will go down the path to MFi certification. That’s just not in the cards. Again, why would they? These controllers are intended to be used on devices Sony and Microsoft make. They aren’t intended to be used with Apple devices. Hence, there is absolutely zero incentive for Microsoft or Sony to retool their respective game controllers to cater to Apple’s MFi certification whims. To date, this has yet to happen… and it likely never will.

Apple is (or was) too caught up in itself to understand this fundamental problem. If Apple wanted Sony or Microsoft to bend to its will, Apple would have to pay Sony and Microsoft to spend their time, effort and engineering to retool their console controllers to fit within the MFi certification. In other words, not only would Apple have to entice Sony and Microsoft to retool their controllers, it would likely have to pay them for the privilege. And so, here we are… neither the DualShock 4 nor the Xbox One controller supports iOS via MFi certification.

iOS 12 and Below

To answer the above question, we have to observe Apple’s stance on iOS. As of iOS 12 and below, Apple chose to rely solely on its MFi certification system to certify controllers for use with iOS. That left few consumer choices. I’m guessing that Apple somehow thought that Microsoft and Sony would cave to their so-called MFi pressure and release updated controllers to satisfy Apple’s whims.

Again, why would either Sony or Microsoft choose to do this? Would they do it out of the goodness of their own heart? Doubtful. Sony and Microsoft would ask the question, “What’s in it for me?” Clearly, for iOS, not much. Sony doesn’t release games on iOS and neither does Microsoft. There’s no incentive to produce MFi certified controllers. In fact, Sony and Microsoft both have enough on their plates supporting their own consoles, let alone spending extra time screwing around with Apple’s problems.

That Apple chose to deny the use of the DualShock 4 and the Xbox One controllers on iOS was clearly an Apple problem. Sony and Microsoft couldn’t care less about Apple’s dilemmas. Additionally, because both of these controllers dominate the gaming market, even on PCs, Apple has simply lost out when sticking to their well-intentioned, but misguided MFi certification program. The handwriting was on the wall when they built the MFi developer system, but Apple is always blinded by its own arrogance. I could see that MFi would create more problems than it would solve for iOS when I first heard about it several years ago.

And so we come to…

iOS 13 and iPhone 11

With the release of iOS 13, it seems Apple has finally seen the light. It has also come to terms with Sony’s and Microsoft’s positions in gaming. There is simply no way that the two most dominant game controllers on the market will bow to Apple’s pressures. If Apple wants these controllers certified under its MFi program, it will need to take steps to make that a reality… OR, it will need to relax this requirement and allow these two controllers to “just work”… and the latter is exactly what Apple has done.

As of the release of iOS 13, you will be able to use both the Xbox One (Bluetooth version) and the PS4’s DualShock 4 controller on iOS. Apple has realized its certification system was simply a pipe dream, one that never materialized. Sure, MFi still exists. Sure, iOS will likely support it for several more releases, but eventually Apple will obsolete it entirely or morph it into something that includes Sony and Microsoft’s controllers.

What that means for the consumer is great news. As of iOS 13, you can now grab your PS4 or Xbox One controller, pair it to iOS and begin gaming. However, it is uncertain exactly how compatible this will be across iOS games. It could be that some games may not recognize these controllers until they are updated for iOS 13, and older games that only supported MFi controllers may not work until they are updated. The problem here is that many projects have been abandoned over the years and their respective developers are no longer updating them. That means you could find your favorite game doesn’t work with the PS4 or Xbox One controller if it is now abandoned.

Even though iOS 13 will support the controllers, it doesn’t mean that older games will. There’s still that problem to be solved. Apple could solve that by folding the controllers under the MFi certification system internally to make them appear as though they are MFi certified. I’m pretty sure Apple won’t do that. Instead, they’ll likely offer a separate system that identifies “third party” controllers separately from MFi certified controllers. This means that developers will likely have to go out of their way to recognize and use Sony and Microsoft’s controllers. Though, we’ll have to wait and see how this all plays out in practice.
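For developers wondering what “recognizing” a controller looks like in practice, here’s a minimal Swift sketch using Apple’s GameController framework. It assumes the DualShock 4 and Xbox One controllers surface through the same extendedGamepad profile that MFi controllers already use; exactly how Apple exposes them could differ, so treat this as a sketch rather than a definitive recipe.

```swift
import GameController

// Listen for any game controller connecting over Bluetooth.
// Assumption: DualShock 4 and Xbox One pads appear as standard
// extendedGamepad profiles on iOS 13, just like MFi controllers.
NotificationCenter.default.addObserver(
    forName: .GCControllerDidConnect,
    object: nil,
    queue: .main
) { notification in
    guard let controller = notification.object as? GCController,
          let gamepad = controller.extendedGamepad else { return }

    print("Connected: \(controller.vendorName ?? "Unknown controller")")

    // React to button and stick changes through value-changed handlers.
    gamepad.buttonA.valueChangedHandler = { _, _, pressed in
        if pressed { print("A / Cross pressed") }
    }
    gamepad.leftThumbstick.valueChangedHandler = { _, x, y in
        print("Left stick moved to (\(x), \(y))")
    }
}
```

Note that nothing in this snippet is Sony- or Microsoft-specific. If Apple routes these controllers through the existing profile, code paths like the one above should pick them up automatically, while games that hard-coded MFi assumptions elsewhere may still need updates.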

Great News

Even still, this change is welcome news to iOS and tvOS users. This means that you don’t have to go out and buy some lesser controller and hope it will feel and work right. Instead, you can now grab a familiar controller that’s sitting right next to you, pair it up and begin playing on your iPad.

This news is actually more than welcome, it’s a necessity. I think Apple finally realizes this. There is no way Sony or Microsoft would ever cave to Apple’s pressures. In fact, there was no real pressure at all. Ultimately, Apple shot itself in the foot by not supporting these two controllers. Worse, by not supporting them, Apple kept the Apple TV from becoming the gaming system it had hoped for. Instead, it’s simply a set-top box that provides movies, music and limited live streaming services. Without an adequate controller, it simply couldn’t become a gaming system.

Even the iPad and iPhone have been suffering without good, solid controllers. Though, I’m still surprised that Apple itself hasn’t jumped in and built its own game controller. You’d think that if Apple set out to create an MFi certification system, it would have taken the next step and actually built a controller itself. Nope.

Because Apple relied on third parties to fulfill its controller needs, it only really ever got one controller out of the deal. A controller that’s fair, but not great. It’s expensive, but not that well made. As I said above, it’s the SteelSeries Nimbus. It’s a mid-grade controller that works fine in most cases, but cannot hold a candle to the PS4’s or the Xbox One’s controller for usability. Personally, I always thought of the Nimbus controller as a “tide me over” controller until something better came along. That never happened. Unfortunately, it has taken Apple years to own up to this mistake. A mistake that they’ve finally decided to rectify in iOS 13.

A little late, yes, but well done Apple!
