Random Thoughts – Randocity!

Why I stopped using Twitter

Posted in botch, business, california by commorancy on November 25, 2022

In my recent article, Is the Demise of Twitter imminent?, I outlined the reasons why I believe Twitter is very close to closing down entirely. While that is one reason not to use the platform, it isn’t my primary reason for leaving Twitter. Twitter has a lot more wrong with it than potential closure. Let’s explore.

Content Moderation and Trust

Let’s jump right into the heart of the reason why Twitter is in serious jeopardy. Any social network that offers User Generated Content (UGC) is at risk if the operators of the site are unwilling to handle that UGC appropriately.

Terms of Service (TOS) agreements and Acceptable Use Policies (AUP) exist to protect the site from lawsuits. Meaning, so long as the site adheres to the terms laid out in its agreements, the site is fulfilling its responsibility to its users.

TOS and AUP agreements define what is considered acceptable conduct by anyone who uses the web site. Most such agreements state that hate speech, harassment, bullying, threats of violence, death threats and any conduct considered illegal federally or locally are prohibited on the web site. The article I mentioned above also touches on this topic.

Whenever a site is created that publishes such user generated content on behalf of its users, the site must make sure that the speech remains within the confines of acceptable use. That means offering mechanisms such as user reporting features (allowing users to report offensive content), automated scanning to detect infringing content and a team of content moderators to remove content or suspend users who willfully break the rules.

Why do these agreements exist?

Trust. These agreements are in place to help users understand that Twitter is a safe and trustworthy space. As long as the agreements are upheld, users can know that Twitter is looking out for them. Without such agreements or, more specifically, once users know the agreements aren’t being enforced, the safety level of the site drops precipitously, along with the site’s level of trust.

Politics and AUP

Recently, too many people on Twitter are seeing everything through a political lens. Specifically, right wingers are now seeing everything they say through a political lens of free speech.

Let’s understand first and foremost that First Amendment Free Speech DOES NOT apply to Twitter or any non-governmental organization operating a social network. It never has. The First Amendment only applies to Governmental organizations and staff. While your local county official cannot abridge your freedom of speech or freedom of press, Twitter can.

Further, let’s understand that terms of service (conduct) agreements are not built with politics in mind. They are built by lawyers who are paid to provide legal services to corporations. These agreements are not politically slanted. They apply to everyone using the services equally. Anyone who infringes the agreement is subject to disciplinary action… yes, ANYONE.

Right Wing Activists and Lying

Right wingers have jumped on the bandwagon claiming that Twitter somehow selectively applies its rules only to right wing activists and not to left wing activists. That would be unfair application of terms of service, but it’s also a false statement. That kind of false rhetoric is now a staple among right leaning conservatives. They’re willing to lie about nearly anything and everything. Why would social media be an exception? It isn’t.

Twitter has applied its rules equally to all people who infringe, left, right or center. It doesn’t matter what your political beliefs are; if you put forth infringing content, you’re suspended or banned.

Left wing activists have also been banned from the platform. Thus, this right wing falsehood is just that, a falsehood… like many others. Yet, they keep saying it with careless abandon as though saying it multiple times will somehow make it true. It doesn’t.

As of this moment, right wingers are completely out of control on Twitter… running afoul of Twitter’s rules without any disciplinary action by Twitter staff. That’s not to say left wingers aren’t out of control, because they are also. In fact, there are a lot of apolitical people on Twitter simply playing games with Twitter’s rules because Twitter isn’t enforcing them… and here is the problem in a nutshell.

Rules, Chaos and Crowd Sourced Moderation

Rules exist to stem the chaos and enforce trust. Without enforcement of rules, a social media site is simply a cesspool without trust… and that’s exactly where Twitter sits right now.

If Twitter had been designed to allow thread creators to manage and moderate user comments within their created threads, the way YouTube channel owners can moderate comments on their videos, then Twitter would be in a much better place right now.

It would mean that I, as a Twitter user, could dump off comments from my thread that break not only Twitter’s rules, but my own personal rules of decorum. Unfortunately, Twitter doesn’t afford that level of content moderation to the thread creator. That means relying on Twitter’s now non-existent staff. Of course, when that staff doesn’t exist, there’s no one there to do the moderation work that’s needed.

If Twitter had moved to crowd based moderation, the platform would be in a much better place. It wouldn’t need nearly as much moderation staff as thread creators could simply remove comments from threads they own. If someone chimes in with an insensitive, inappropriate or problematic comment, then “Delete” and the comment is gone. No Twitter staff needed.

In fact, this is the way social media needs to operate now and in the future. Twitter still firmly believes that content moderation is solely the responsibility of Twitter’s staff. That’s not doable when you have perhaps billions of messages being sent daily. A company can’t grow its moderation team to scale to that number of messages. It’s also an antiquated idea that should have been abandoned years ago. To be fair, at the time of Twitter’s conception, crowd managed UGC wasn’t really commonplace. Partly it simply wasn’t being done, but partly Jack Dorsey’s team didn’t have the foresight to realize that staff moderation of billions of small messages was never humanly scalable.

In recent years, crowd managed moderation has become not only more acceptable, it’s become commonplace and even important. YouTube has allowed this for quite some time. It allows the channel owner to remove any and all messages from their videos that they deem problematic. It firmly puts the burden of content moderation on the creator. That’s also a completely acceptable situation.

Crowd Moderation

Wikipedia has completely proven that crowd moderation of content works. As a company, you can’t afford to hire the thousands of people needed to scour billions of messages all over the platform. Instead, it’s better to empower content creators to manage a much smaller number of messages.

Reporting inappropriate comments is still available, however. This allows staff the opportunity to jump in and manage inappropriate content if the content creator reports a comment.

However, conscientious creators should be willing to hold and moderate comments prior to allowing them to be published. With Twitter, publishing is instantaneous with no advance moderation possible. Considering the sheer volume of messages on Twitter, it might be almost impossible to handle a single hold-queue style moderation system. With a spam filter, though, it may be possible to separate the wheat from the chaff into more easily manageable piles, as in the sketch below.
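
To make that concrete, here’s a tiny sketch in Python of how a crude spam pre-filter could split incoming comments into manageable piles before a creator ever reads them. This is entirely hypothetical; Twitter exposes no such feature, and the keyword list and queue names are invented purely for illustration.

    # Hypothetical sketch: route incoming comments into a small "review" queue
    # and a large "chaff" queue before the thread creator ever reads them.
    SPAM_MARKERS = {"crypto giveaway", "dm me for", "click this link"}  # invented examples

    def triage(comment: str) -> str:
        """Return the name of the hold queue a new comment should land in."""
        text = comment.lower()
        if any(marker in text for marker in SPAM_MARKERS):
            return "chaff"   # likely spam; the creator may never need to read it
        return "review"      # everything else waits in the creator's hold queue

    for c in ["Great thread, thanks!", "Crypto giveaway!! DM me for your prize"]:
        print(triage(c), "<-", c)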

Trust, Quality and Moderation

Here’s something that Twitter has needed for a very, very long time. Twitter is chock full of bad actors. Any bad actor who consistently writes comments of low or questionable quality would see their comments moved into the “junk” moderation pile for the content creator to manage and/or report.

Such a system would allow Twitter to offer up content moderation for all of its content creators. Enabling content moderation places moderation in the hands of the content creator using a hold queue. This halts many instant responses, but it ensures higher quality comments. Comments are then examined and filtered into trust and quality buckets. High quality comments from more trusted individuals get placed into the pile the creator manages first. Successively lower quality comments from lesser trusted people get moved into successively lower moderation piles.

Content creators can move comments from one pile to another, and they can mark commenters so that future comments land in specific piles, all the way up to a block which prevents the user from commenting at all.

For example, piles might be labeled as:

  • Instant Publish
  • Mostly-Trusted
  • Semi-Trusted
  • Untrusted
  • Untrusted Junk
  • Junk

These 6 piles are a good starting place. Instant Publish is for your most trusted followers. You know that these followers can be completely trusted to instantly publish a high quality comment with no holds. No moderation is needed for fully trusted people. For people who are mostly trusted, their comments go into the Mostly-Trusted pile for a moderation hold. These are people who are very close to getting instant publish, but you still hold their messages because you want to read the comment first.

All other piles are reviewed at the sole discretion of the content creator. If the content creator chooses not to look through the remaining piles, then the comments get purged after 7-30 days on hold.
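
As a rough sketch of how comments could be filed into these piles (my own illustration, not anything Twitter offers), assume each commenter carries a trust level from 1 for Instant Publish down to 6 for Junk, and that unreviewed holds purge after 30 days:

    from datetime import datetime, timedelta

    # Pile names keyed by trust level (1 = most trusted), mirroring the list above.
    PILES = {1: "Instant Publish", 2: "Mostly-Trusted", 3: "Semi-Trusted",
             4: "Untrusted", 5: "Untrusted Junk", 6: "Junk"}

    HOLD_DAYS = 30  # unreviewed comments purge after 7-30 days; 30 picked arbitrarily

    def file_comment(trust_level: int, received: datetime) -> dict:
        """Place a new comment into a pile based on the commenter's trust level."""
        pile = PILES.get(trust_level, "Junk")
        return {"pile": pile,
                "published": pile == "Instant Publish",  # only level 1 skips the hold
                "purge_after": received + timedelta(days=HOLD_DAYS)}

    print(file_comment(1, datetime.now()))   # publishes immediately
    print(file_comment(4, datetime.now()))   # sits in the "Untrusted" pile

The exact numbers don’t matter. The point is that a hold queue plus trust levels lets a single creator manage thousands of comments without any Twitter staff involved.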

How does a user become trusted?

Trust comes both from following and from a new button labeled ‘trust’ with an assigned level (1-6). Following someone only places them into the Semi-Trusted pile. Meaning, you’ve followed them, so you’re assigning them the default trust level of 3. However, you haven’t completely trusted them. This means you’ll need to moderate their comment content.

As a user gets more and more messages approved out of moderation, the user will automatically move up the ranks of trust, eventually reaching Instant Publish, unless the content creator explicitly sets the user’s trust level.
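
A sketch of that automatic climb might look like the following. The ten-approvals-per-level threshold is purely an assumption, and an explicit creator-set level always wins:

    from typing import Optional

    APPROVALS_PER_LEVEL = 10  # assumed: ten approved comments to climb one level

    def effective_trust(approved_count: int, creator_override: Optional[int] = None) -> int:
        """Trust level from 1 (Instant Publish) to 6 (Junk). A follower starts at the
        default of 3 and climbs one level per APPROVALS_PER_LEVEL approved comments,
        unless the content creator has explicitly pinned a level."""
        if creator_override is not None:
            return creator_override
        return max(1, 3 - approved_count // APPROVALS_PER_LEVEL)

    print(effective_trust(0))       # 3 -> the Semi-Trusted default
    print(effective_trust(25))      # 1 -> Instant Publish after enough approvals
    print(effective_trust(25, 6))   # creator pinned this user to Junk regardless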

User trust levels can also be managed by interactions with others. A content creator can enable “inherit trust averages” for new followers. This means the user’s trust level is calculated and inherited based on past interactions. If a user has had consistently bad interactions, been reported a number of times, been blocked by many people and so on, these bad activities affect the user’s inherited trust level and the user’s trust level goes down. Instead of being assigned a default of level 3, the user might inherit a level of 5 or 6.

Note, being blocked by lower trust level users doesn’t influence a user’s inherited trust. Only blocks from people of higher trust levels influence the inherited trust level. This stops bad seeds from gaming the system by creating hundreds of accounts and blocking someone of a higher level of trust. A block only impacts a user’s inherited trust level if the blocking user has a trust level above 2. That means bad seeds would need to work their hundreds of accounts up to level 2 before blocking people to reduce trust. Even then, any user attempting to game the trust system would automatically be banned.

Note that there are effectively two trust levels at play. There is the inherited trust level of the user themselves, which is gained by behaving correctly, producing high quality content and, in small part, by having someone follow you. The second trust level is set by a content creator. Even if a person has an inherited trust level of 90%, if they follow someone and comment, the content creator can set that 90% trusted user down to level 6 if they choose. That moderation trust level only applies to that content creator and doesn’t impact the follower’s inherited trust level… unless many highly trusted people all mark that user down.
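
Here’s one possible reading of that inherited-trust calculation, again as a hypothetical sketch. The starting level, the penalty sizes and the decision to count only blocks from level 1 and level 2 users are assumptions layered on top of the description above:

    from typing import List

    DEFAULT_LEVEL = 3                # new followers start Semi-Trusted
    TRUSTED_BLOCKER_LEVELS = {1, 2}  # only blocks from already-trusted accounts count

    def inherited_trust(blocker_levels: List[int], reports: int = 0) -> int:
        """Inherited trust level (1 best, 6 worst) derived from past interactions.
        Blocks from low-trust throwaway accounts are ignored, so the system can't
        be gamed by creating hundreds of accounts just to block someone."""
        qualifying_blocks = sum(1 for lvl in blocker_levels if lvl in TRUSTED_BLOCKER_LEVELS)
        return min(6, DEFAULT_LEVEL + qualifying_blocks + reports // 5)

    print(inherited_trust([6, 6, 6]))   # 3: blocks from junk accounts change nothing
    print(inherited_trust([6, 2, 1]))   # 5: two qualifying blocks lower the trust

Notice the asymmetry: hundreds of throwaway accounts can block someone and accomplish nothing, while a handful of trusted accounts carry real weight. That asymmetry is what keeps the system from being gamed.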

Trust levels are the means by which bad actors sink to the bottom of the pile and good actors bubble to the top. To date, no social network has instituted such a trust system. Instead, they have chosen to let chaos reign supreme rather than forcing users to learn behavioral norms when interacting on social networks. Enforcing behavioral norms is something social media desperately needs.

Trust Numbers

Implementing a trust numbering system would also add more control for users and content creators alike. Users who insist on being untrustworthy, lying or generally being toxic will see their trust numbers reduced. It doesn’t matter if it’s a celebrity or a nobody. Trust numbers are what people will judge. Like any scoring system, it can be used to let users auto-block and auto-ignore users whose trust scores fall below a certain threshold. If a comment from a user with a trust score below 50 would otherwise appear on a timeline, a rule saying “hide comments from users below trust score 50” would automatically weed out toxic comments.

More than this, if a user has a trust score below 50, a content creator can make a rule that prevents low score users from commenting at all. In effect, the trust score auto-blocks the user from commenting. If the user wishes to make a comment, then they need to do the right things to raise their trust score. A trust scoring system is the only way for users and content creators to know that they can be safe on a platform like Twitter.
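
As a sketch, a creator-side rule built on those thresholds could look like this. The 50-point cutoff comes from the example above, while the rule names and the 0-100 score itself are my own assumptions:

    HIDE_BELOW = 50    # comments from users under this score are kept off the timeline
    BLOCK_BELOW = 50   # the same threshold can refuse the comment outright

    def apply_trust_rules(trust_score: int, block_low_scores: bool = True) -> str:
        """Decide what happens to a comment based on the commenter's 0-100 trust score."""
        if trust_score < BLOCK_BELOW and block_low_scores:
            return "blocked"   # auto-blocked: raise your score if you want to comment
        if trust_score < HIDE_BELOW:
            return "hidden"    # accepted, but never shown on the timeline
        return "visible"

    for score in (20, 45, 80):
        print(score, apply_trust_rules(score, block_low_scores=False))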

Chaos now reigns at Twitter

Because Elon Musk has decided to cut over half of Twitter’s staff, there’s really no one left to enforce much of anything on Twitter. In effect, Twitter is now overrun by untrustworthy, lying, conniving bad actors. It is these toxic people who don’t deserve to have any interactions at all. They are the absolute dregs of social media. These are toxic people you would never interact with in person, yet here they are on full display on Twitter.

Because Twitter has no moderation staff left to manage these bad seeds, the platform is overrun by people of bad intent. These are people who insist on sowing seeds of chaos and doing as much damage as possible, all while providing no value to the platform. Their comments are worthless, bordering on toxic and sometimes even dangerous.

With no moderation team, there’s no one at Twitter who can review these comments for their toxicity, let alone do anything about it. Worse, Elon Musk is pushing a “new freer” Twitter, which simply doubles down on this level of toxicity all over the Twitter platform.

If Twitter were to introduce a trust and moderation system as described above, Twitter could forgo the moderation staff, instead letting content creators manage these bad seeds and push them off of the platform. Such a moderation system would also take a huge burden off of Twitter’s staff. Bad seeds would eventually disappear when they find their comments don’t get published. They also can’t claim Twitter is at fault, because a content creator moderation system would mean people of all political persuasions would be kicking these bad seeds to the curb.

There’s really no other way for Twitter to manage such bad seeds other than a crowd managed moderation system like the above. Unfortunately, Twitter’s staff is dwindling at an astonishing rate, including the very software engineers needed to design and build such a system.

If Twitter wants to become a platform about trust and safety, it needs to institute a mechanism that enforces this philosophy, like the above content creator moderation system. Without such a system, Twitter remains chaos.

Toxic People

Toxic people are everywhere, but it seems that social media like Twitter attracts them in droves. I don’t know why, other than the anonymity it seems to afford. Suffice it to say that while Twitter was relatively toxic prior to Musk’s takeover, the content moderation staff took care of a lot of that toxicity through suspensions and bans.

Unfortunately, Musk seems to have reversed that stance and is now allowing (and even condoning) toxic people back into Twitter who were formerly removed. That means Twitter is now becoming even less of a safe and welcoming space than it formerly was. Toxicity now prevails. Toxicity is something no one needs in their life, least of all on Twitter. Toxic people are draining for all of the wrong reasons.

  • Toxic people waste your time — Toxic people ask you to do stuff for them while providing nothing in return. Even if you do spend the time providing what they request…
  • Toxic people always criticize you — After you’ve wasted time on someone toxic, they will turn that wasted time against you by arguing and criticizing that what you provided was not what they requested.
  • Toxic people spread negativity — Even after trying to talk to them to convince them, they will still turn it back around on you as a negative, as though you did something wrong. You didn’t.
  • Toxic people are jealous — The most likely reason they interacted with you in the first place is that they are jealous of what you have. In order to make themselves feel better, they will argue and downplay whatever they are jealous of… or they will try to make you feel jealous by claiming they have something that they don’t actually have.
  • Toxic people play the victim — Instead of accepting their own faults and failings, it’s always someone else who is to blame. If you happen to get in their way, you’ll be cast as the one who victimized them. That goes back to being jealous. If they are jealous over something, they will blame you for their being victimized by their own jealousy.
  • Toxic people are self-centered — This is a form of narcissism. How bad the narcissism is depends on them, not you. This means that not only are they likely to blame you for them being a victim, it all revolves around them, never around you. These people never see you as anything more than a punching bag to inflate their own ego.
  • Toxic people really don’t care — In other words, they argue with you because it inflates their ego, but honestly they don’t care about you or how you feel as long as it makes them feel better. It’s a form of manipulation.
  • Toxic people will manipulate you — This is another form of narcissism. It all ends up revolving around them. Most toxic people don’t care about your feelings at all. All they care about is getting whatever they want out of you. If that’s money or a ride or food, they’ll do or say whatever makes that a reality. On Twitter, you have to be cautious as money is really the only motivating factor. If Twitter enables money transfers, expect these toxic people to turn into scam artists.

Twitter currently enables, facilitates and now condones these toxic types of people. Not only will they waste your time, they will attempt to play the victim game as though you caused them to be the victim. They will always claim that you are the one who is wrong and they are the one who is right. There is no middle ground, concession or compromise with toxic people. It’s always them and no one else.

If you feed into their garbage, you are likely the one to be harmed by them. Don’t allow it. As soon as you see someone like this, block them instantly. Don’t interact with them. If Twitter isn’t willing to handle toxic people, you have two choices, block and hope they don’t come back using another account or stop using Twitter.

Leaving Twitter

What Twitter currently means for sincere AUP-abiding content creators is increased effort to block toxic people, which actually does little to stop that user’s toxicity. They simply move on to other victims to vomit their toxic rhetoric, and those users are forced to block them as well. In other words, the best a legitimate user or content creator can do is block these toxic people for themselves alone, but that makes no impact on the toxic user’s account. Even reporting such an account today is likely to go ignored by Twitter. Musk appears to have no interest in holding rule breakers accountable.

A trust system would change this game. Meaning, users who insist on being toxic face the consequences of being toxic. The more toxic they become, the further their account sinks. When an account drops below a certain threshold, Twitter can review it for being a problem… thus requiring far less staff.

Unfortunately, Twitter has now placed this time-suck burden onto each user to block, mute and dump users and to clean up the mess afterward. I don’t have time for that. Not only is that a complete waste of my time, I’m not being paid by Twitter to do it. It also means Twitter is not a safe or welcoming space. Spending my time managing my account affects my account alone. It doesn’t in any way stop those toxic bad seeds from laying siege to other users on the platform. Since Twitter has no staff to manage these toxic bad seeds, Twitter is simply a cesspool of the lowest social media dregs running amok in a quagmire of chaos. No one is safe from these toxic people.

If you’re looking for a safe and trusting space where you can feel like the social media site is looking out for you and your best interests, Twitter is not that place. Twitter has now become literally the worst, most toxic environment you could join right now, second only to Facebook. Twitter doesn’t care about trust or safety or protecting you. They’re only interested in letting toxic social media users run roughshod all over everyone else.

Because of toxic users and Twitter actively choosing to be unsafe, I am off of Twitter. I simply cannot condone using a platform where management is more interested in allowing chaos to rule than in offering appropriate safety measures for its users to use against toxic people.

Twitter’s Safety Rating

Safety: 1 out of 10
Toxicity: 10 out of 10
Recommendation: Avoid until Twitter closes or Musk figures it out

Is the Demise of Twitter imminent?

Posted in botch, business, california by commorancy on November 20, 2022

With Elon Musk’s $44 billion hostile takeover of Twitter now closed, it’s clear that Musk is way out of his depth operating this social media platform, and with that inexperience, the platform is very likely to die. Note, this is an unfolding story. Please check back for new updates to this article covering Twitter’s latest blunders. Let’s explore.

Twitter as a Microblogging Platform

The rise of Jack Dorsey’s Twitter was rather unexpected considering its severe limits, such as its initial 140 character limit which was later doubled to 280 characters. Small messages are akin to SMS messages and I suppose that’s why so many people readily adopted this character limit.

Twitter has gained a lot of “people”, but unfortunately has also gained a lot of “bots”… which at this moment appear to far outnumber actual live people.

Blogging platforms, like WordPress.com on which this article is hosted, allow users to mostly say whatever they like. However, saying things isn’t without problems. Sure, free speech is important on blogging platforms, but what can be said isn’t without bounds. There are, in fact, TOS limits that prevent certain types of speech. For example, there are rules against hate speech and the perpetuation of misinformation and disinformation, and there are even laws against certain types of speech like “fighting words” and “defamation”. Free speech most definitely has its limits. Free speech is also not without consequences.

Freedom of speech is not truly “free” in the sense that you are free to say whatever pops into your head. You do have to consider the ramifications of what you say to those around you. One classic example is yelling, “Fire” in a crowded theater. That’s a form of trolling. It is most definitely not protected speech and could see the perpetrator fined and/or jailed for performing such reckless activities. Yes, freedom of speech has limits.

Those limits can be defined both by laws and by Terms of Service agreements. If you sign up for a service, you must read the Terms of Service and Acceptable Use Policies carefully to determine where the boundaries begin and end. Running afoul of Terms of Service rules can see your account restricted, suspended, banned or deleted. Such suspensions and bans can be limited to a few days or the action could be permanent. It might even see your account removed from the platform depending on the egregiousness of the action.

Suffice it to say that Free Speech, as I reiterate, has limits and boundaries. You are not allowed to say whatever you want when using private company services. Other violating examples include such speech as death threats, threats of self-harm or of harm to other people, bullying, harassment, inciting people to violence, stalking or any other activities which are considered illegal or which condone violence upon others.

Freedom of Speech

Many people hold up the first amendment as though it’s some sort of shield when using platforms like Facebook, Twitter or YouTube. The First Amendment is not a shield! Let’s examine the text of the First Amendment to better understand where and how it applies:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

Let’s break it down. “Congress shall make no law” firmly states that the limits of the First Amendment are strictly on the Congress and, by that same extension, all Government entities. The Constitution strictly governs how the U.S. Government operates. It does not cover protection of speech for private businesses at all. Thus, the text of this amendment does not apply to how Facebook, Twitter or any other social media site operates unless that service is wholly or partially owned by the Government. How the First Amendment applies is by preventing Government workers, including any branch of the government, from abridging speech either written (press) or verbal (protests).

For example, using sites operated by the U.S. Government, such as the FTC’s call for comments area, the First Amendment fully applies. If you say something that may become publicly visible on such Government web sites, your speech is protected by the First Amendment. However, if you say something on Twitter, a site not owned or operated by the U.S. Government (or any government), your speech is not protected by the First Amendment, but instead is governed by Twitter’s Terms of Service agreement and/or any other associated agreement(s).

Too many people believe that First Amendment free speech rights apply to private enterprise, but they do not. While most speech is allowed on these platforms, some forms of speech are not, and those that are not are clearly written into the Terms and Conditions to which you must agree by opening an account.

For example, Twitter only allows impersonation of accounts as parody when the parody accounts are clearly labeled in specific ways. This Twitter rule restricts your freedom of speech in very specific ways. Meaning, you are not allowed to impersonate an account in a way that makes it appear as if you are genuinely the person you are attempting to impersonate. If you don’t label your account according to Twitter’s rules, your account is considered in violation and will be disciplined accordingly.

The First Amendment doesn’t restrict this type of impersonation activity, however. Other state or local laws might restrict such impersonation activities, but the First Amendment does not. However, Twitter does restrict this activity via its rules, to which you must agree as part of using its services. There are other activities considered in violation of Twitter’s rules, some of which may only become apparent after you violate them.

In other words, Free Speech on Twitter is firmly at the whims and rules of those who operate Twitter… rules that can be changed at a moment’s notice.

Twitter as a Viable Platform

Prior to Elon Musk’s takeover, Jack Dorsey (and his successor team) operated the platform in a way that many political pundits believed to be unfair to certain parts of the political spectrum. Politics are generally divisive. After all, there are two parties and each party believes they are superior to the other. I won’t get into who’s right or who’s wrong politically, but suffice it to say that the rules must apply to political activists in the same way as any other person using the platform.

Unfortunately, Musk is now seeking to shield political activists from Twitter’s rules, instead choosing not to hold any political activists accountable to Twitter’s established rules.

For example, Musk has recently chosen to reinstate Donald Trump’s account to Twitter. Donald Trump intentionally and willfully violated Twitter’s rules in the past. Yet, because Musk now owns Twitter, he has forgiven Donald Trump those past transgressions and has reinstated his account. This is a very clear example of how Musk chooses to break Twitter’s own rules at Musk’s own whim.

“Rules are made to be Broken”

This is an old saying, but it’s one that has no place in Social Media. If rules only govern some people, but not others, then there can be no ethics or justice. Rules must apply to all or they apply to none. Selective rule application is the basis for no rules at all. That’s how law works. If law enforcement fails to enforce laws on some criminals, then laws mean nothing. Likewise, if rule breakers can get away with breaking rules, then rules mean nothing.

Twitter has firmly moved into ethically questionable territory. If Musk thinks that selective application of rules to some people, but not others, is a recipe for success, then Twitter is truly no platform anyone should be using. It’s part of the reason I am no longer using Twitter. I have walked away from the platform and will not return. Here’s another example of Musk applying selective rules.

Musk’s Selective Rules and Instant Rule Changes

With Kathy Griffin’s suspension, Musk has made it clear that Musk makes the rules and no one else. This means that if someone does something that Musk doesn’t like, he’ll instantly rewrite the rules to satisfy his own whims. That’s actually called a moving target. Any user who ends up rubbing Musk the wrong way might end up with a suspension simply because Musk decides he doesn’t like whatever it was and he’ll then rewrite the rules instantly to make that activity against Twitter’s terms.

He did that with Kathy Griffin. She parodied Musk in a way that Musk didn’t like, then Musk retaliated by strictly applying Twitter’s terms, but more than this, he also rewrote Twitter’s rules by not giving her the 3 required warnings. Instead, he gave her zero warnings and instant suspension. Twitter’s rules about warnings are clear. You’re supposed to get at least 1 warning in advance of suspension. Kathy Griffin didn’t get that. She got the boot from Musk without any warnings at all.

Again, that’s a moving target. If you don’t know what the full rules are, you can’t abide by them. Sure, Kathy should have read the terms on impersonation more closely to avoid even getting warned. However, Musk should have read Twitter’s terms and upheld those rules by warning her before suspension, not changed the rules on a whim. Both Musk and Griffin are guilty of not following the rules.

For Twitter users, it means Musk can instantly rewrite Twitter’s rules without warning and then suddenly a user is in violation. That’s no way to run a site. The rules are written in advance so we all understand them and have a fair chance at abiding by them. Instant changes mean there’s no way to comply with randomly changing rules simply because you can’t know what they are or what they could become if Musk gets triggered.

App Store and Twitter about to Square Off

[Update 11/25/2022] Twitter’s new “freer speech” rules, combined with its lack of enough staff to manage the deluge of hate speech on Twitter, are leading Twitter down many wrong paths. In addition, Elon Musk is also complaining about losing between 15% and 30% of Twitter’s $8/mo subscription fees to Apple and Google when purchased in-app.

Because Apple is now investigating Twitter’s latest “freer speech” maneuvers, Twitter is poised to potentially lose its app listing in the Apple App Store over Twitter’s own inability to abide by its App Store agreements with Apple. If Apple shuts Twitter out of the app store, Google is likely to follow suit for similar reasons. That leaves Twitter with no new app users. Existing Twitter app owners can continue to use the Twitter app, but new users will be shut out. That means new users will be forced to use a browser to consume Twitter.

An app store removal is an even bigger blow to Twitter than the mere loss of 15-30% to Apple’s and Google’s in-app purchase fees. Elon Musk is playing with fire by not honoring Twitter’s own Terms of Service agreements against both previous and current violators, a fact that could lead to an app store removal. Instead, Twitter is giving former violating accounts “amnesty”, allowing them to be reinstated. Apple’s app store agreements have rules against apps which don’t properly handle hate speech and other objectionable content.

With Twitter’s more lax rules around objectionable content and reduced “freer speech” filtering, Twitter is very likely now in violation of Apple’s developer rules. Such an app store removal would have a devastating effect on Twitter’s bottom line, especially after advertisers have begun abandoning the platform. When even Apple staffers are abandoning Twitter, that doesn’t say good things for Twitter’s longevity:

Over the weekend, Phil Schiller, the former head Apple marketing executive who still oversees the App Store, apparently deleted his widely-followed Twitter account with hundreds of thousands of followers. —cnbc.com

Twitter’s Demise

In addition to all of the above, Musk has saddled Twitter with mountains of debt numbered in the billions of dollars. Some people speculate that it’s $13 billion because that’s what banks have issued Musk in loans. However, that doesn’t take into account the “investors” whom Musk didn’t pay out, or private loans from people who aren’t banks. Twitter’s debt is likely well higher than $13 billion; it’s just that $13 billion is what we can visibly see. Since Twitter is now private, Musk is not obligated to report anything to anyone about Twitter’s total debt burden or any of its other finances.

One thing is certain: Twitter (and by extension, Musk) was required to pay out all shareholders to take Twitter private. That payout delisted Twitter’s stock and made Twitter a private company. If Twitter was in debt at around $1 billion prior to the takeover, Twitter is likely carrying at least 20-30x more debt now. If Twitter couldn’t make ends meet prior to the takeover, there’s absolutely no way Twitter has any hope of doing that under Musk’s “leadership” (and I use this term quite loosely).

When attempting to reduce expenses in any company rapidly, there are only so many places to begin. The first place is in staffing. Staff reduction is low hanging fruit and it’s relatively easy to let staff go to stop at least that cash hemorrhaging quickly. It’s also the first place where Musk chose to begin. Nine days after taking over Twitter, Musk let half of Twitter’s staff go. But that’s not where the staff changes end. That’s just the beginning. In amongst Musk’s crass jokes and public displays about these staff reductions on Twitter, Musk continues to reduce staff every single day. There’s no way to know when Musk will be satisfied with the staff reductions. In fact, he could eliminate every single staffer and still not reduce expenses enough to keep Twitter from running out of money.

Other places to reduce after the above low hanging fruit include real estate (i.e., leases), employee perks and travel expenses.

Employee Perks

Musk has also taken aim at employee perks. Musk has claimed that it cost Twitter upward of $400 per day to feed each employee at Twitter’s onsite employee cafeteria. While that claim is bold, it’s not really backed up with actual information. Though, Musk has claimed that less than 10% of the company participates in that free food program. If that’s true, then…

My assumption is that the cafeteria continues to buy enough food to feed an overly large lunch crowd every day, yet much of that food goes to waste as employees don’t show up. That’s really a food expense and food prediction problem.

If you want to operate a cafeteria, you have to buy enough food to handle future crowds. You can’t buy only enough food to handle 10% of the employees because then you’ll run out of food when 20% of the employees show up. The first option for this free food perk is to shut it down. If you don’t want to pay for the food expenses of a cafeteria, then you don’t run a cafeteria or you run it more intelligently.

For an example of a more intelligently run cafeteria, the cafeteria could publish its menu a week in advance. Employees who wish to order a meal for any given day submit their orders early. Orders would be accepted up to two days before, to prevent people from ordering a week’s worth of food in advance but never showing up to eat it. They also can’t order the day of, because a cafeteria can’t operate that way without over-ordering. This then allows the cafeteria to know a few days in advance how much food to order to handle that day’s lunch orders. This limits food costs to only those who order meals and only to the amount of food needed to create those ordered meals.

The cafeteria could add on a limited number of extra meals beyond those that were ordered to handle a limited number of walk-ins as well as replacement meals, just in case.
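
The back-of-the-envelope math here is simple. Here’s a sketch, with the two-day cutoff described above and a 10% walk-in buffer invented for illustration:

    import math

    WALKIN_BUFFER = 0.10  # assumed: prepare 10% extra for walk-ins and replacement meals

    def meals_to_prepare(preorders: int) -> int:
        """Meals the cafeteria should prepare for a given day: only what was
        pre-ordered by the two-day cutoff, plus a small walk-in buffer."""
        return preorders + math.ceil(preorders * WALKIN_BUFFER)

    # 120 pre-orders by the cutoff -> prepare 132 meals, not a full headcount's worth.
    print(meals_to_prepare(120))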

Alternatively, Twitter could contract with a meal provider like Eat Club, which essentially does the same as what I describe above. You order your meal up to a couple of days in advance. This allows Eat Club to buy only enough food to cover the meals ordered. It also means that Musk doesn’t need to operate a cafeteria at all, removing food costs and all cafeteria staff.

Beyond smartening up food costs of a cafeteria, other perks may also be targeted for removal, such as child care, reimbursement of certain types of expenses and other employee benefits which are costly. The public may never know about the other perks that get eliminated unless Musk states them publicly or employees speak up, but that’s unlikely because Musk has likely required an NDA for all employees.

Moving Twitter’s HQ

To reduce yet more expenses, the next place for a CEO to look is to expensive office leases. Twitter operates in one of the most expensive real estate markets in the nation, San Francisco, California. Worse, Twitter operates in San Francisco city proper. While San Francisco has, at least in the past, been amenable to offering tax incentives and subsidies to companies willing to remain in San Francisco, there’s no way to know if Twitter benefits from those.

Unfortunately, San Francisco does not extend those tax breaks and incentives to individuals who work in the city. San Francisco is one of THE most expensive places in the nation to live and work. That’s why so many people commute into San Francisco rather than actually living there… that and the crime rate in SF is astonishing. If you work in San Francisco and commute there, expect to spend at least $340 per month simply for a parking space every day. And no, most companies operating in San Francisco won’t pay parking expenses for employees. That’s simply a pay cut you deal with when working at San Francisco companies. The same lack of reimbursement goes for gas expenses or choosing to ride BART or Caltrain every day.

What this expensive lease means for Twitter staffers is that eventually Musk is likely to move Twitter’s HQ to Texas alongside Tesla’s HQ. That means staffers will eventually be forced to make the decision to move to Texas or find a different job in California. This mandate has not yet come down from Musk, but looking ahead to the future, this is very likely Musk’s trajectory. That all assumes Twitter doesn’t fail long before a move.

Bankruptcy

Twitter may not quite yet be on the verge of bankruptcy, but only because Musk apparently still seems to have some liquid cash stashed somewhere to pay Twitter’s bills. He may even be using some of his own personal cash to prop Twitter up at this point. Considering that many advertisers have left Twitter, which is made worse because the previous management team failed to secure pre-buys for advertising in 2023, Twitter is about to come into a cash crunch very soon. No advertisers means no ad revenue. For this reason, Musk has his hands tied trying to keep Twitter from running out of cash. Hence, Musk’s $8/mo plan to try and keep Twitter afloat. If Twitter runs out of cash, it’s all over.

There are very likely no banks willing to extend Twitter yet more loans amid the billions that Twitter has already leveraged in Musk’s ill advised buyout. Musk knows this. That’s throwing good money after bad.

Once Twitter’s liquid cash runs out, there’s no way to pay the server bills or staff or electric bills or any other bills. Considering how drastically and rapidly Musk is cutting, Twitter’s cash flow situation must be relatively dire.

What that all means is that Twitter is very likely just weeks away from bankruptcy, which is dependent on Twitter’s cash burn rate. As I said above, Musk may be dipping into his own personal wallet to fund Twitter at this point. If so, it’s understandable why Musk is cutting so deeply and so rapidly. Who wants to prop up millions in cash burn every day? Musk is wealthy, but that’s not a smart way to use (or rather, lose) money.

[UPDATE] It looks increasingly likely that Twitter will need to file bankruptcy. This New York Times article explains that some of Twitter’s bills are now going unpaid. That’s the first step toward not being able to pay any bills.

But once Mr. Musk took over the company, he refused to reimburse travel vendors for those bills, current and former Twitter employees said. Mr. Musk’s staff said the services were authorized by the company’s former management and not by him. His staff have since avoided the calls of the travel vendors, the people said….

Twitter’s spending has dropped, but the moves have spurred complaints from insiders — as well as from some vendors who are owed millions of dollars in back payments. —New York Times

Yeah, this is a bad sign. If vendors are now going unpaid, that indicates lawsuits from just about every angle are imminent against Twitter. It’s also a matter of time before Musk stops paying other critical bills.

Check Mark for $8/mo

One additional thing Musk has banked on to make up for Twitter’s loss of advertising revenue is charging users $8/mo for Twitter. Not only was Twitter free to use in the past, the compensation for using Twitter was Twitter’s free access to the IP content generated by its users.

Musk has forgotten and ignored that gentleman’s agreement between Twitter users and Twitter, choosing instead to try to make money off the backs of its content creators. That would be tantamount to YouTube charging its content creators monthly for the privilege of creating content for YouTube. It’s a ridiculous ask.

The Check Mark verification system originally instituted by Twitter was intended to prove that those with a check mark are who they say they are. Unfortunately, by reducing this feature to an $8/mo plan and because more than half of Twitter employees have been sacked, there’s effectively no one left at Twitter who can actually verify someone who buys the $8/mo plan.

That fact was borne out when Musk released the not-ready-for-primetime feature to the public before it was ready, let alone tested. A bunch of bad actors all paid $8 and then began impersonating nearly every celebrity you could possibly think of. This forced Musk to halt the program, but not before much damage had been done to the platform and to the reputation of the “new” Check Mark program.

Musk was forced to shut down the subscription plan in an attempt to revamp it. So far, the fixed plan has not been released. Those who purchased and who played games were left holding the bag when they were unable to change their usernames back. Irony shines hard on bad actors for being bad actors. Anyway, Musk is a loose cannon and this is a clear example of that. Musk was so desperate for revenue, he was willing to release an unfinished feature that was easily gamed by the bad actors on Twitter.

Worse, it has brought even more bad actors to the platform, and those are now beginning their own tirades. Twitter is now so understaffed, and because the bad actors know this, they are running rampant all over the platform harassing, trolling and spewing hate speech, and there’s no one there watching or enforcing. Twitter is literally a cesspool. If we thought Twitter was bad under Dorsey, it’s 1000 times worse under Musk… and Musk literally doesn’t care.

Above all of this, Musk plans to prioritize tweets for those who pay and de-prioritize tweets for those who don’t. Meaning, if you pay, you get placement and visibility. If you don’t, your tweets don’t get seen. More than this, Musk even admitted to hiding tweets that he doesn’t like. I’ve even seen this behavior. Hidden tweets are not new. Thread creators can hide tweets of those they don’t like. This goes one step beyond hidden tweets. This allows Twitter to hide tweets silently. No one knows tweets have been hidden unless you go check. Even then, you can’t know it’s been hidden unless you see certain behaviors within Twitter’s UI. Your tweet could be visible one moment and invisible the next, with no notification.

This behavior goes way beyond benign and lands well into nefarious territory. There is zero difference between suspending people over bad tweets and hiding people’s tweets from view without warning or notification. They’re both forms of oppression and speech suppression by an overly wealthy man-boy who simply becomes triggered too easily. This cliché comes to mind, “Out of the frying pan and into the fire!” Which leads to…

The Rise of Oligarchy in Journalism

Make no mistake, even 280 characters is considered a form of journalism. However, because users aren’t journalists, they aren’t bound by journalistic ethics. Meaning, bad actors believe they can say anything they wish, sometimes even doing so willfully to test the boundaries for how far they can take their speech.

Regardless, wealthy individuals are beginning to buy up these large platforms for their own egocentric interests. For example, Rupert Murdoch built Fox News (and similar news outfits) to push his own personal political agendas. Later, after Warner Bros. Discovery acquired CNN, we came to find that billionaire John Malone is a large stakeholder in the newly merged outfit. The latest, of course, is billionaire Elon Musk, who has now purchased Twitter, yet another more or less news outfit. Even Facebook’s Mark Zuckerberg has his own biases which get injected into Facebook’s operation… and yes, Zuckerberg is also considered a media influencing oligarch.

Oligarchy is now firmly entrenched in our media sources in ways that are not amenable to providing unbiased news sources. With Fox News’ right leaning bent at the hand of Rupert Murdoch and now CNN’s more-or-less right leaning bent with John Malone and Musk’s somewhat right leaning bent with Twitter, more and more news organizations are becoming right wing news sources because of these right wing billionaires.

Yet, the government is doing nothing to halt or stymie this harm to consumers. Overall, right wing propaganda is getting more and more intense, with these right wing news organizations spewing false propaganda claiming it is the left who is doing the damage. It’s not left wing billionaires buying up news sources. Note, there is another blog article yet to be written which is born out of this section; look for it soon.

I’m not saying that left wing or right wing political slants are at all good business for media. However, it appears that the vast majority of disinformation is coming from right wing media: false information passed off as truth, particularly about left wing politics.

I’m not here to get into who’s right and who’s wrong. I’m simply disclosing that the political discourse on many media platforms is now being swayed by right wing billionaires. This is to the loss of professional unbiased journalism. It will have to fall to small, independently run blog sites, like WordPress, that aren’t owned by right or left wing billionaires, for news to be had in unbiased ways. That assumes right wing billionaires don’t buy up these blogging sites, too. Unfortunately, too many people are willing to listen to these biased news organizations thinking they are both unbiased and truthful when, in fact, they are neither the vast majority of the time.

Alternative Platforms

While there isn’t a clear winner for a Twitter replacement, some are in the works while others are trying. For example, both Tribel and Mastodon are giving it a good college try and likely have seen an influx of traffic since Twitter’s wobbly last few weeks.

One might also consider Truth Social were it not simply a playground for Donald Trump’s exceedingly fragile ego. If you go over to Truth Social, expect to be barraged by ads. Also, don’t expect to be able to say anything negative about Trump or any of his sycophants or you’ll be banned. Freedom of speech is most definitely not alive and well at Truth Social.

As for Tribel and Mastodon, read their terms and conditions closely before opening an account. Tribel, for example, requires you to agree to hand over all rights to any Intellectual Property (IP) that you upload into Tribel. You forfeit all rights for anything you submit to Tribel. Twitter’s terms allow you to retain ownership, but give Twitter rights to use it. However, with Musk’s haphazard behavior, anything is now possible. I simply can’t trust that Twitter is a safe space any longer.

One possibility is waiting for Jack Dorsey’s BlueSky Social, which is based on a decentralized system like Mastodon. However, there’s no way to know if Dorsey’s BlueSky will become the de facto social media site like Twitter was. Still, it may be worth waiting for BlueSky to see if it can become a sufficient replacement for Twitter.

For now, there’s no real leader in social media… unless you trust Facebook and its ilk completely (i.e., Instagram and WhatsApp), which I personally do not. Facebook, or more specifically Meta, has proven itself time and again to be a completely untrustworthy organization. And now, Twitter has fallen into this same trap of being entirely untrustworthy.

Overall

Twitter is a train wreck unfolding right before our eyes. Musk says he wants Twitter to succeed, but his actions say the opposite: from his lackadaisical application of Terms and Conditions, to random suspensions, to sacking half of Twitter’s staff without understanding that there’s now no one there to moderate the platform.

Because of all of these factors, Twitter has effectively become a free for all for bad actors. By ‘Bad Actors’, I mean people who are intent on causing mischief, trolling, attacking people and being general nuisances all without any supervision. In effect, the crazies are running the show at Twitter and Musk clearly doesn’t care.

Unfortunately, I don’t have the hours needed to spend babysitting Twitter trolls. Prior to Musk, at least 50% of the time you could have civilized discourse between various people. Now, there’s almost no one willing or able to have civilized discourse on Twitter, instead choosing to attack, troll or vomit random memes in hopes of solely getting a rise out of someone… simply to pick a fight.

I don’t have time to become a babysitter for Twitter babies. That’s Twitter’s job, not mine… and Twitter is not doing it. Twitter doesn’t pay me to do that work, yet I’m expected to deal with it? No.

As long as Twitter can’t get their shit together, I’m out. I simply can’t spend hours babysitting a Twitter account to continually mute, block and report thousands of users for inappropriate behavior. I don’t even want to think about what celebrities are going through right now with perhaps tens of thousands or millions of followers. Twitter is simply a disaster.

One thing is certain, there will be a dedicated chapter on “How not to run a business” in business school textbooks covering Musk’s incredibly shitty handling of Twitter.

Once Twitter folds, the best thing I can say about it is, “Good riddance to bad rubbish.” I’ll also say that, for the record, it does appear that Twitter is on the brink of collapse. Clearly, Musk didn’t perform due diligence to ensure Twitter’s books were solid before making an offer to purchase. Instead, he harped only on the excessive number of bots on the platform. If Twitter was in this dire a financial situation prior to the purchase, that should have been enough for Musk to quash the purchase contract. Who agrees to buy a financially insolvent company?

Musk, if you’re reading…


If you enjoy reading Randocity, I urge you to click the follow button to continue to get notifications for all new content.

Elizabeth Holmes: Why aren’t more CEOs in prison?

Posted in botch, business, california by commorancy on August 23, 2022

On the heels of Elizabeth Holmes’s conviction on four counts of fraud, the question arises… Why aren’t more startup CEOs in prison for fraud? Before we get into the answer, let’s explore a little about Elizabeth Holmes.

Theranos

Theranos was a technological biomedical startup, not unlike so many tech startup companies before it. Like many startups, Theranos began based out of Palo Alto, California… what some might consider the heart of Silicon Valley. Most startups that begin their life in or around Palo Alto seem able to rope in a lot of tech investors and tech money. Theranos was no different.

Let’s step back to understand who was at the helm of Theranos before we get into what technology this startup purported to offer the world. Theranos was helmed by none other than Elizabeth Holmes. Holmes founded Theranos in 2003 at the age of 19, after she had dropped out of Stanford University. In 2002, prior to founding Theranos, Holmes was a student at Stanford studying chemical engineering. No, she was not a medical student, nor did she have any medical training.

Clearly, by 2003, she had envisioned grandiose ideas about how to make her way in the world… and it didn’t seem to involve actually completing her degree at Stanford. Thus, Theranos was born after she had gotten her dean, though not the medical experts at the school, to sign off on her blood testing idea.

Medical Technology

What was her medical idea? Holmes’s idea involved gathering vast amounts of data from a few drops of blood. Unfortunately, not everyone agreed that her idea had merit, particularly medical professors at Stanford. However, she was able to get some people to buy into her idea and, thus, Theranos was born.

Going from the drawing board to a device that actually does what Holmes claimed would pose the ultimate challenge, one that would eventually see her convicted of fraud.

Software Technology

Most startup products in Silicon Valley involve software innovation, with the occasional product that also requires a specialty hardware device to support the software. Such hardware and software examples include the Apple iPhone, the Fitbit and even the now defunct Pebble.

Software-only solutions include such notables as Adobe Photoshop, Microsoft Office and even operating systems like Microsoft Windows. Even video games fall under such possible startups, like Pokémon Go. Yes, these standalone software products do require separate hardware, but they run on existing products that consumers either own or can easily purchase. These software startups don't need to build any specialty hardware.

Software solutions can solve problems for many different industries, including finance, medicine, fast food and law enforcement, and can even solve problems for home consumers.

There are so many different ideas that can make life much simpler; some are well worth exploring. However, as Theranos shows, some aren't.

Theranos vs Silicon Valley

Elizabeth Holmes’s idea that a few drops of blood could reveal a lot of information was a radical idea that didn’t, at her young age of 19, have a solution. This is what Elizabeth Holmes sought to create with Theranos.

Every Silicon Valley startup must craft a way to solve the problem it envisions, whether that's accessing data faster and more reliably or creating a queuing system for restaurants using an iPhone app.

It’s not so much the idea, but the execution of it. That’s where the CEO comes into play. The CEO must assemble a team capable of realizing and executing the idea they have in their head. For example, is it possible to create a device to extract mountains of data from a few drops of blood? That’s what Elizabeth Holmes was hoping she could create. It was the entire basis for the creation of Theranos.

Investors

To create that software and device, it takes money and time. Time to develop and money to design and build necessary devices using R&D. A startup must also hire experts in various fields who can step into the role and determine what is and isn’t possible.

In other words, a CEO's plan is "fake it until you make it". That saying goes for every single startup CEO who's ever attempted to build a company. Investors see to it that there's sufficient capital to make sure a company can succeed, or at least give it a very good shot. Early investors include seed and angel investors, where the money may come with few if any strings attached, and later-stage investors such as venture capitalists, where heavy strings come attached in the form of trading company ownership for capital.

Later-stage investors are usually much more hands-on than many angel or seed investors. In fact, sometimes late-stage investors can be so hands-on as to push a company to pivot in unwanted directions, away from the original vision. This article isn't intended to become a lesson in how VCs work, but suffice it to say that they can become quite important in directing a company's vision.

In Theranos's case, however, Elizabeth Holmes locked out investors by creating a …

Black Box

One thing Silicon Valley investors don't like is a black box. What is a black box? It's a metaphor for a wall erected between a company's product and any investors involved. A black box company is one that refuses to share how its technology actually works. Many investors won't invest in such "black box" companies. Investors want to know how their money is being spent and how a company's technology is progressing. Black boxes don't allow for that information flow.

Theranos employed such a black box approach to its blood analyzer device. It’s actually a wonder Theranos got as much investor support as it did, particularly for a CEO that young and, obviously, inexperienced when insisting on a black box approach. That situation is ripe for abuse. At 19, how effective could Elizabeth Holmes be as a CEO? How trustworthy and responsible could a 19 year old be with millions of dollars of funding? How many 19 year olds would you entrust with millions of dollars, after they had dropped out of college? For investors, this should have been a huge red flag.

There’s something to be said for the possibility of a wunderkind in Elizabeth Holmes, except she hadn’t proven herself to be a prodigy while attending Stanford. Even the medical experts she had consulted about her idea clearly didn’t think she had the necessary skills to make her far-fetched idea a reality. A chemical engineering student hopping into the biotech field with the creation of small, almost portable blood analysis machine at a time when commercial blood analysis machines where orders of magnitudes bigger and required much more blood volume? Holmes’s idea was fantastical, yet clearly unrealistic.

However, Theranos’s black box, dubbed the Edison or miniLab, was a small piece of equipment about half the size of a standard tower computer case and included a touch screen display and blood insertion port. How $9 Billion Startup Theranos Blew Up And Laid Off 41%

Unfortunately, this black box was truly a black box in all senses of the word, including its actual case coloring. Not only were the Edison's innards kept a strict company secret, its testing methodologies were also kept secret, even from employees. In other words, no one knew exactly how the Edison truly worked. No, not even the engineers Theranos hired to try to actually make Holmes's vision a reality.

Theranos and Walgreens

By 2016, Theranos had secured a contract with Walgreens for Walgreens to use Theranos's Edison machine to test blood samples from patients. Unfortunately, what came of those tests was less than stellar. It's also what led to the downfall of Theranos and, ultimately, Elizabeth Holmes and her business partner, Sunny Balwani.

The engineers Theranos hired knew that the Edison didn't work, even though they hadn't been privy to all of its inner workings. What they saw instead was those tiny vials of blood being run on larger commercial blood testing machines like the Siemens Advia 1800.

When the engineers, Erika Cheung and Tyler Shultz, confronted Holmes and Balwani about the Edison machine's lack of functionality and about being asked to falsify test results, they were given the cold shoulder. Both Cheung and Shultz decided to blow the whistle on Theranos's fraud, and both left Theranos after whistleblowing to start their own companies.

Ultimately, Theranos had been using third-party diagnostic equipment in lieu of its own Edison machine. The Edison clearly didn't function properly, and neither did the third-party systems when fed the tiny blood volumes Holmes claimed were sufficient.

This left patients at Walgreens with false test results, requiring many patients to retest with another lab to confirm the validity of Theranos’s results.

Elizabeth Holmes's Fate

In January of 2022, Elizabeth Holmes was found guilty of four counts of fraud. However, the jury acquitted her of all counts involving patient fraud… even though the patients were, in fact, hurt the most by Theranos's fraud. The convictions covered defrauding investors, not the patients who may have been irreparably harmed by her machine's failure to function.

Why aren’t more CEOs in prison for fraud?

While the Theranos and Elizabeth Holmes case is somewhat unique among Silicon Valley startups, it is not completely unique. Defrauding investors is a slippery slope for Silicon Valley. Once one company is found perpetrating fraud on investors, it actually opens the door up to many more such cases.

Taking money from investors to attempt to bring a dream to life is exactly what CEOs do. However, Theranos (and Elizabeth Holmes) between 2003 and 2016 couldn’t produce a functional machine.

Most CEOs, given enough time and, of course money, can likely produce a functional product in some form. Whether that product resembles the original idea that founded the company remains to be seen. Some CEOs pivot a year or two in and change directions. They either realize their initial idea wasn’t unique enough or that there would be significant problems bringing it to market. They then change direction and come up with a new idea that may be more easily marketable.

Startups that Go Bankrupt

In the wake of Theranos, other startups that go bankrupt could find their CEOs held accountable on fraud charges, just like Ms. Holmes. The Elizabeth Holmes case has now set that precedent. Taking investor money may no longer be without legal peril for company executives. If you agree to bring a product to market and are given investor capital to do it… and then you fail and the company folds, you may find yourself in court on fraud charges.

Silicon Valley investors do understand that the odds of any single startup succeeding are relatively low… which is why they typically invest in many at once. The one that succeeds typically more than makes up for the others that fail. If more than one succeeds, even better. It's called "playing the odds". The more you bet, the better your chances of winning. However, playing the odds won't stop investors from wanting to recoup losses for money given to failed startups.

The Elizabeth Holmes case may very well be chilling for startups. It's ultimately chilling to would-be CEOs who see dollar signs in their eyes, only to find months later that the startup is out of cash and closing down in failure.

CEOs and Prison Time

Elizabeth Holmes should be considered a cautionary tale for all would-be CEOs looking for some quick cash to get their idea off the ground. If you do manage to secure funding, be cautious with how you use that cash. Also, always, and I mean ALWAYS, make sure the progress in building your idea is shown to your investors regularly. Let them know how their investment is being used. When software is available for demonstrations, show it off. Don't hide it inside of a black box.

Black boxes have no place in startup investing. As for Elizabeth Holmes, she's facing up to 20 years in prison, though her sentence has yet to be handed down. It's possible she may be given the possibility of parole or a reduced sentence for good behavior… all of which is up to the sentencing judge.

Elizabeth Holmes opened this door for startup CEOs. It’s only a matter of time before investors begin using this precedent to hold CEO founders to account should an investment in a startup fail.


Is Google running a Racket?

Posted in botch, business, california, corruption, Uncategorized by commorancy on March 16, 2020

In the 1930s, we had crime syndicates that would shake down small business owners for protection money. This became known as a "racket". These mob bosses would use coercion and extortion to ensure that their syndicates got their money. It seems that Google is now performing similar actions with AMP. Let's explore.

AMP

AMP is an acronym that stands for Accelerated Mobile Pages. To be honest, this technology is only "accelerated" because it strips out much of what makes HTML pages look good and function well. The HTML technologies that make a web page function are also what make it usable. When you strip out the majority of that usability, what you're left with is a stripped-down protocol named AMP… which should stand for Antiquated Markup Protocol.

This “new” (ahem) technology was birthed by Google in 2016. It claims to be an open source project and also an “open standard”, but the vast majority of the developers creating this (ahem) “standard” are Google employees. Yeah… so what does this say about AMP?

AMP as a technology would be fine if it were allowed to stand on its own merit. Unfortunately, Google is playing hardball to get AMP adopted.

Hardball

Google seems to feel that everyone needs to adopt and support AMP. To that end, Google has created a racket. Yes, an old-fashioned mob racket.

To ensure that AMP becomes adopted, Google requires web site owners to create, design and manage "properly formatted" AMP pages or face having their entire web site's rankings lost within Google Search.

In effect, Google is coercing web site owners into creating AMP versions of their web sites or effectively face extortion by being delisted from Google Search. Yeah, that’s hardball guys.

It may also be very illegal under RICO laws. While no money is being transferred to Google (at least not explicitly), this action has the same effect. Basically, if, as a web site owner, you don't keep up with your AMP pages, Google will remove your web site from its search results, thus forcing you to comply with AMP to reinstate the listing.

Google Search as Leverage

If Google Search were, say, 15% or less of the search market, I might not even make a big deal out of this. However, because Google Search holds around 90% of the search market (an effective monopoly), it can make or break a business by reducing site traffic through low ranking. Being forced to comply to keep your search rankings is much the same as handing Google protection money… and, yes, this is still very much a racket. While rackets have traditionally been about collecting money, Google's currency isn't money. Google's currency is search rankings. Search rankings make or break companies, much the same as paying or not paying the mobsters back in the 1930s.

Basically, by coercing and extorting web site owners into creating AMP pages, Google has effectively joined the ranks of those 1930s mob boss racketeers. Google is now basically racketeering.

Technology for Technology’s Sake

I’m fine when a technology is created, then released and let land where it may. If it’s adopted by people, great. If it isn’t, so be it. However, Google felt the need to force AMP’s adoption by playing the extortion game. Basically, Google is extorting web site owners to force them to support AMP or face consequences. This forces web site owners to adopt creating and maintaining AMP versions of their web pages to not only appease Google, but prevent their entire site from being heavily reduced in search rankings and, by extensions, visitors.

RICO Act

In October of 1970, Richard M. Nixon signed into law the Racketeer Influenced and Corrupt Organizations Act… or RICO for short. This Act makes it illegal for corrupt organizations to coerce and extort people or businesses for personal gain. Yet, here we are in 2020 and that's exactly what Google is doing with AMP.

It’s not that AMP is a great technology. It may have merit at some point in the future. Unfortunately, we’ll never really know that. Instead of Google following the tried-and-true formula of letting technologies land where they may, someone at Google decided to force web site owners to support AMP … or else. The ‘else’ being the loss of that business’s income stream by being deranked from Google’s Search.

Google Search can make or break a business. Google extorting businesses into using AMP under fear of lost search ranking very much runs afoul of RICO. Google gains AMP adoption, yes, but that's Google's gain at the site owner's loss. "What loss?", you ask. Site owners are forced to hire staff to learn and understand AMP because the alternative is loss of business. Is Google paying business owners back for this extortion? No.

So, here we are. A business the size of Google wields a lot of power. In fact, it wields around 90% of the Internet’s search power. One might even consider that a monopoly power. Combining a monopoly and extortion together, that very much runs afoul of RICO.

Lawsuit City and Monopolies

Someone needs to bring Google up in front of Congress for its actions here. It's entirely one thing to create a standard and let people adopt it on their own. It's entirely another matter to force adoption of that standard on people who have no choice by using your monopoly power against them.

Google has already lost one legal battle over COPPA and YouTube. It certainly seems time for Google to lose another legal battle here. Businesses like Google shouldn't be allowed to use their monopoly power to brute-force business owners into complying with Google technology initiatives. In fact, I'd suggest that it may now be time for Google, just like the Bell companies back in the 80s, to be broken up into separate companies so that these monopoly problems can no longer exist at Google.


FX TV Series Review: Devs

Posted in botch, california, entertainment, Uncategorized by commorancy on March 7, 2020

Devs is a new "limited" series from FX, also being streamed on Hulu. Let's explore everything that went wrong here.

Silicon Valley Startups

Having worked in Silicon Valley for several tech companies, I can confirm exactly how unrealistic this show is. Let’s start by discussing all of the major flaws within the pilot. I should also point out that the pilot is what sets the tone of a series. Unfortunately, the writers cut so many corners setting up the pilot’s plot, the rest of the series will suffer for it.

As a result of the sloppy writing for the pilot, the writers will now be required to retcon many plot elements into the series as the need arises. Retconning story wouldn’t have been needed had they simply set up this series properly. Unfortunately, they rushed the pilot story.

Slow Paced

You might be thinking, "Well, I thought the pacing of the series was extremely slow." The dialog and scene pacing is indeed slow. But the story itself moves along so rapidly that if you blink, you'll miss it.

What’s it about?

A girlfriend and boyfriend pair work for the same fictional tech company, named "Amaya". It is located in a redwood-forested area near San Francisco, apparently. The show never specifically states where, but it's somewhere in a wooded area.

The female lead, Lily, and the male lead, Sergei, are in a relationship. She's of Chinese-American heritage and he's of Russian descent. She works on the cryptography team at Amaya and he works in the AI division (at least in the pilot of the show).

Things Go Awry

Almost immediately, the series takes a bad turn. Sergei shows off his project to the ‘Devs’ team leader, another team in the company. We later come to find that this unkempt leader is actually the founder of the company and Amaya was his daughter who died. He also apparently heads up a part of the company that we come to find is named ‘Devs’. Unfortunately, because there’s no setup around what ‘Devs’ exactly is, this leaves the viewer firmly lost over the magnitude of what’s going on at this meeting. Clearly, it isn’t lost on Sergei as he’s extremely nervous about the meeting, but he still goes in reasonably confident of his project. As viewers, though, we’re mostly lost until much later in the episode.

Sergei demonstrates his project to this not-explained team and they seem suitably impressed with Sergei’s project’s results… that is until the end of the meeting when the results begin failing due to insufficient amounts of processing power.

Still, Sergei’s results are impressive enough that he is invited (not the rest of his team) to join ‘Devs’ right then and there.

And then we hear the sound of a record needle being ripped across a record…

Not how Silicon Valley works

You don’t get invited to join some kind of “elite coveted” team at the drop of a hat like that. Managers have paperwork, transfer requests have to be made and budgets have to be allotted. There are lots of HR related things that must result when transferring a person from one department to another, even at the request of the CEO. It’s not a “You’re now on my team effectively immediately” kind of thing. That doesn’t occur and is horribly unrealistic.

Ignoring the lack of realism of this transfer, the actor playing Sergei is either not that great of an actor or was directed poorly. Whatever the reason, he didn’t properly convey the elation required upon being invited and accepted into “the most prestigious” department at Amaya. If he were actually trying to get into ‘Devs’, his emotions should have consisted of at least some moment of joy. In fact, the moment he’s accepted into ‘Devs’, it almost seems like fear or confusion blankets him. That’s not a normal emotion one would experience having just stepped into a “dream job”.

This is where the writers failed. The writers failed to properly explain that this was Sergei’s dream job. This is also where the writers failed to properly set up the ‘Devs’ team as the “Holy Grail” of Amaya.

Clearly, the writers were attempting to set this fictional Amaya company up to mirror a company of similar size to Google or Apple.

Location

Ignoring the meeting that sets up the whole opening (and which also fails to do so properly), Sergei heads home to explain to Lily his change in company status and his transfer into ‘Devs’. They have a conversation about the closed nature of that team and that they won’t be able to discuss his new job in ‘Devs’.

The next day, Sergei heads over to the head of Amaya security to be ‘vetted’ for the ‘Devs’ team. Apparently, there’s some kind of security formality where the security team must interview and vet out any potential problems. The security manager even points out that because Sergei is native Russian and because Lily is Chinese that there’s strong concern over his transfer. If this security person is so concerned over his background, then he should rescind his transfer effective immediately.

Instead, he sends Sergei on his way to meet with the ‘Devs’ manager who then escorts him through a heavily wooded area into what amounts to an isolated fortress.

Record needle rips across again… “Hold it right there”

While it’s certainly possible a tech startup might attempt to locate its headquarters deep in a wooded area, it’s completely unrealistic. California is full of tree huggers. There are, in fact, way too many tree huggers in California. There is no way a company like Google or Apple could buy a heavily forested area and then plop down a huge fortress in the middle of it. No, not possible. In fact, an organization like “Open Space Trust” would see to it that they would block such a land purchase request. There is no way a private company could set this up.

A governmental organization could do it simply through annexation via eminent domain, but not a private company. Let’s ignore this straight up California fact and continue onward with this show. Though, it would have made more sense if Amaya had been government sanctioned and funded.

Sergei’s First (and Last) Day

Ignoring the improbable setup of this entire show, Sergei is escorted by his new boss, who remarkably looks like Grizzly Adams… but more dirty, homeless and unkempt. Typically, Silicon Valley companies won’t allow men who look like this into managerial roles. Because we come to find later that he is apparently the “founder” of Amaya, the rest of the company lets his unkempt look slide. His look is made worse by the long hair wig they’ve glued onto this actor. If you want a guy to look like Grizzly Adams, at least have him grow his hair out to some length so a lacefront wig looks at least somewhat realistic.

Anyway, let’s move on. Sergei is escorted through a heavily wooded area (complete with a monstrously huge and exceedingly ugly statue of a child in a creepy pose) and onto his new work location… the aforementioned fortress I described earlier. His boss explains how well secured the location is by pointing out its security features including an “unbroken vacuum seal” to which Sergei ponders aloud before being shown how it works. Sergei is then told that there is only one rule. That rule being that no personal effects go into the building and nothing else comes out of it. Yet, this rule is already broken when they head inside. Even the “manager” breaks this rule.

Once they enter the building and get past the entry area, Mr. Grizzly explains that nothing inside the building is passworded. It's all open access to everything. Sergei is then shown his workspace and left to his own devices. Grizzly explains he'll figure it out on his own by "reading the code".

Unrealistic. No company does this.

Last Day

Here’s where everything turns sour. We are left to assume that only one day has passed since Sergei has been been escorted into the building. Sergei then stares at his terminal screen not doing anything for about 5 minutes. He gets up, goes to the bathroom, barfs and then fiddles with his watch.

He then attempts to leave the building, yet somehow it’s night time. It was probably morning when he entered. Here’s where the storytellers failed again. There was no explanation of time passage. The same screen he was looking at when he entered is the same screen that was on his terminal when he attempts to leave. Yet, now it’s night time?

His manager assumes that Sergei has absconded with the code (remember the open access?) from the facility and that he is attempting to leave with it on his "James Bond watch". Sergei is jumped and seemingly suffocated by the head of Amaya security, no less.

And so the retcon begins…

The writers have now killed the person they needed to explain this story. So now, they have to rely on Lily to unravel what happened (as a newly minted detective). Here’s where the show goes from being a possible uplifting story to an implausible detective horror story.

To enable Lily to even get the first clue what has happened to her boyfriend, the ‘Devs’ and the security teams collude to fabricate footage to make it appear as if Sergei is acting oddly while walking around the campus.

Instead of creating actual story, the writers rely on fake security footage to retell the story. They even go so far as to fabricate footage of a person setting themselves on fire with Sergei's face superimposed… to make it appear as some kind of suicide. Yeah, I doubt Lily is buying any of it. Unfortunately, the writers leave too much unsaid, so we have no idea what Lily is really thinking.

Instead, Lily heads off to find her ex-boyfriend and ask him for help… and he summarily tells her to "fuck off". This whole ex-boyfriend premise is so contrived and unrealistic it actually tops the list of unrealistic tropes in this show.

Questions without Answers

Would a Silicon Valley company stoop to murder to protect its intellectual property? I guess it could happen, but it is very unlikely. Would they allow a thug to head up its security team? Exceedingly doubtful. If a company were to need to protect its property through acts of violence, it would hire out for that.

Though, really, Amaya is actually very naive. If they didn’t trust Sergei, they shouldn’t have hired him. Worse, they allowed their one rule to be broken… allowing personal effects inside the building. Both Sergei and Grizzly wear watches into the building. If no personal effects are to be carried in or out, then that includes ALL forms of technology including wrist watches of any form. In fact, they should require everyone to change their clothes before entering the building, forcing ALL personal effects into a locker with no access to that locker until shift end. The staff would then wear issued wardrobe for the duration of their work shift.

If Amaya had simply followed its own rules by setting the whole system up correctly, there wouldn't have been the possibility of any code theft or the need to murder an employee. Yet, Sergei is allowed to wear his watch into the building? It is then assumed that Sergei has managed to copy all (?) of the code onto his watch? Setting up such a secure system would have forced Sergei to thwart it in some way, creating more drama and reinforcing the fact that Sergei is, indeed, a spy. By killing Sergei off so quickly, the writers were required to take many shortcuts to get this story told.

Clearly, corporate espionage does exist, but would anyone attempt corporate espionage on their first day on a new team? On their second day? I think not. In fact, this setup is so contrived and blatantly stupid, it treats not only Sergei, but the audience as if we haven’t a brain in our heads. That the writers also assume that Russian espionage is this stupid is also insane.

No. If Sergei were being handled as a spy, he would only attempt espionage after having been in the position for a long time… perhaps even years. Definitely long enough to be considered "trusted". No company fully trusts a new employee on the first day. No company gives full access to all data to a new employee on the first day, either. There is no way that "first day" Sergei could have ever been put in the position of having access to everything.

Further, a new employee needs to fully understand exactly what's going on in the new department, where everything is, and get accustomed to the new work area and new co-workers. There is no way Sergei would have attempted to abscond with any of the code when he barely understood what that code was even doing. Preposterous.

Episode 2

The writers then further insult us with the passworded Sudoku app that Lily finds on Sergei's phone. Lily enlists her ex-boyfriend again (whom she hadn't talked to in years) to help unlock the app. Amazingly, this second time he agrees. He then explains to Lily that it's a Russian messaging app and that Sergei was a spy.

Here’s the insulting part. After her ex-boyfriend unlocks the app, all of the messages are in English. Seriously? No, I don’t think so. Every message would have been in Russian, not English. If it’s a Russian app, they would communicate using the Russian language. But then the next part wouldn’t have made any sense.

Lily then decides to text whomever is on the other end. If the text had been in Russian, she would have had to learn enough Russian to message the other party. By making the text app English, it avoids this problem. That’s called “lazy writing”.

Inexplicably, the other end decides to meet with Lily. Needle rips again… No, I don’t think so. If it were really Sergei’s handler with the power to delete the app, the app would have been deleted immediately after Lily made contact. No questions asked. If they wanted to meet with Lily, they likely would have abducted her separately much, much later.

Still, it all conveniently happens. Worse, when the meeting takes place, the head of Amaya’s security is somehow there eavesdropping on the whole conversation. Yeah, I don’t think so. If the head of Amaya’s security is there, that either means he’s spying on Sergei’s apps (which are likely encrypted, so there’s no real way) or Amaya’s future prediction algorithm is already fully functional.

Basically, everything is way too convenient. Worse, if Amaya does manage to crack the prediction algorithm, the show's writers have a huge problem on their hands. There's no way for them to write any fresh stories in that universe without it all turning out contrived. With a prediction algorithm fully functional, Amaya can predict future events with 100% accuracy. This means they can then thwart anything negative that might hinder Amaya's business. The whole concept is entirely far-fetched, but it's actually made worse by the idea of an omniscient computer system that Amaya is attempting to build. But really, would a company actually kill an exceedingly bright software engineer who is just about to give its computer full future omniscience? I don't think so.

Omniscience is actually the bane of storytelling. If you have an omniscient being (or anything) available to see the future, then a company could effectively rule the world by manipulating historical events to their own benefit. This situation is a huge predicament for the writers and show runners.

In fact, I would make sure that Amaya's computer is firmly destroyed within the first 4 episodes. Amaya's omniscience can't come to exist or the show will jump the shark. The show should remain focused on Sergei's death and Lily uncovering it, rather than on creating Amaya's omniscient computer. That computer becoming fully functional will actually be the downfall of the show. The espionage doesn't need to succeed. In fact, it shouldn't succeed. Instead, one of Amaya's existing internal staff should be enlightened to the danger of Amaya's management once the actual reality of Sergei's death becomes widely known. The now-enlightened staff should turn on Amaya and subvert the soon-to-be "omniscient" computer, now comprehending the magnitude of just how far their bosses are willing to take everything. That computer is not only a danger to the show, it's a danger to that entire fictional world. Worse, though, are the murderous bosses, who are the real travesty here.

Any person working at a company with management willing to commit murder of its staff should at best seek to leave the company immediately (fearing for their own safety)… alternatively, some of these employees might subversively see to that company’s demise before exiting the organization. In fact, Devs should become a cautionary tale.

Technical staff always hold all of the cards at any tech company. Trusted coders and technical staff leave companies extremely vulnerable. These staff can insert damaging code at any time… code that can, in fact, take down a company from within. This is the real danger. This is where this show should head. Let’s forget all about the silly omniscience gimmick and focus on the dangers of what can happen to a company when trusted technical staff become personally threatened by their own employer. This is the real point. This is the real horror. The omniscience gimmick is weak and subverts the show. Instead, bring the staff back to reality by having them take a stand against an employer who is willing to commit murder merely to protect company secrets.

[Updated: 7/11/2020]

About a week after I wrote this article, the next episode arrived. The term "jump the shark" immediately popped into my head about halfway into this episode.

There’s a scene where the Devs manager, Katie (Alison Pill), walks into the room and observes two of her team watching what is effectively porn on the company’s core technology. In fact, it’s not just any porn, but famous celebrities from the past “doing it”.

I can most definitely certify that while Silicon Valley’s hiring practices are dominated by males, no manager would allow this behavior in a conference room, let alone by using the company’s primary technology. They could have been watching literally anything and this is what they chose?

I can guarantee you that any manager who found out that an employee was watching such things on a work computer would, at best, give a stern talking-to and place a reprimand in the employee's file. At worst, that person is fired. Katie just shrugs it off and makes a somewhat off-handed comment as she leaves the room. That's completely unrealistic for Silicon Valley companies. Legal issues abound in the Bay Area. There's no way any company would risk its own existence to let that behavior slide by any employee.

Of course, having a security manager running around and offing employees isn’t something companies in SV do either.


Apple and Law Enforcement

Posted in Apple, botch, business, california by commorancy on January 14, 2020

Apple always seems to refuse law enforcement requests. Let's understand why this is bad for Apple… and for Silicon Valley as a whole. Let's see how this can be resolved.

Stubbornness

While Apple and other "Silicon Valley" companies may be stubborn about reducing encryption strength on phones, reducing encryption strength isn't strictly necessary for law enforcement to get what they need out of a phone. In fact, it doesn't really make sense to weaken encryption across all phones simply so law enforcement can gain access to a small number of devices in a small set of criminal cases.

That’s like using a sledgehammer to open a pea. Sure, it works, but not very well. Worse, these legal cases might not even be impacted by what’s found on the device. Making all phones vulnerable to potentially even worse crimes, such as identity theft and stealing money in order to prosecute a smaller number of crimes which might not be impacted by unlocking a phone doesn’t make sense.

There Are Solutions

Apple (and other phone manufacturers) should be required to partner with law enforcement to create a one-use unlocking system for law enforcement use. Federal law could even mandate that any non-law-enforcement personnel who attempts to access the law enforcement mode of a phone would be in violation of federal law. Though, policing this might be somewhat difficult. It should be relatively easy to build and implement such a one-use system. Such a system would be relatively easy to use (with the correct information) and equally difficult to hack (without the correct information).

How this enforcement system would work is that Apple (or any phone vendor) would be required to build both a law enforcement support web site and a law enforcement mode on the phone, for law enforcement use only. This LE support server is naturally authentication protected. A verified law enforcement agent logs into Apple's LE system and enters key information from/about a specific device along with their own Apple-issued law enforcement ID number. Apple could even require law enforcement officers to have access to an iPhone themselves and use Face ID to verify their identity before access.

The device information from an evidence phone may include the iPhone's IMEI (available on the SIM tray), ICCID (if available), SEID (if available), serial number, phone number (if available) and then finally a valid federally issued warrant number. Apple's validation system would then log in to a federal system and validate the warrant number. Once the warrant is validated and the required input data all match the device (along with the Apple-issued law enforcement ID), Apple will issue a one-time-use unlocking code to the law enforcement agent. This code can then be used one time to unlock the device in Law Enforcement Mode (LEM).
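
To make this concrete, here's a rough sketch in Python of how such a code issuance step might work. To be clear, everything in it, the function name, the fields, the HMAC-based derivation and the example values, is my own illustration of the design I'm describing, not anything Apple has actually built.

```python
import hmac, hashlib, time

def issue_le_code(device_secret: bytes, imei: str, serial: str,
                  warrant_id: str, le_officer_id: str,
                  validity_hours: int = 24) -> str:
    """Derive a one-time unlock code bound to one device, one warrant,
    one officer and one 24-hour window (illustrative sketch only)."""
    window = int(time.time() // (validity_hours * 3600))
    message = f"{imei}|{serial}|{warrant_id}|{le_officer_id}|{window}".encode()
    digest = hmac.new(device_secret, message, hashlib.sha256).hexdigest()
    # Truncate to something an officer can reasonably type, grouped for readability.
    code = digest[:20].upper()
    return "-".join(code[i:i+5] for i in range(0, 20, 5))

# Example with fabricated values:
if __name__ == "__main__":
    secret = b"per-device secret provisioned at manufacture"  # assumption, not Apple's design
    print(issue_le_code(secret, imei="356938035643809", serial="C02XXXXXXXXX",
                        warrant_id="FED-2020-001234", le_officer_id="LE-98765"))
```

Binding the code to the device identifiers, the warrant and the officer's ID is what would keep a leaked code useless against any other phone.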

To unlock an evidence device, the agent boots the phone into LEM (a mode Apple would need to build) and then manually enters the Apple-generated code into the phone's interface along with their law enforcement ID. The law enforcement mode then allows setup and connection to a local WiFi network (if no data network is available), but only after entering a valid code. The code is then verified by Apple's servers and the phone is temporarily unlocked. Valid entry of a law enforcement code unlocks the device for a period of 24 hours for law enforcement use. There is no "lock out" when entering a wrong code while the phone is in law enforcement mode, because these codes are far too complex to guess by brute force; the phone can, however, reboot out of LEM after a number of wrong attempts. You simply can't randomly guess these codes by trial and error. They are too complex and lengthy for that.

This specific one-use code allows unlocking the device one time only and only for a period of 24 hours. This means that phone will accept that specific code only once and never accept that specific code again. If law enforcement needs to unlock the phone again, they will have to go through the law enforcement process of having Apple generate a new code using the same input data which would then generate a new code, again, valid for only 24 hours.

A successfully used LE code suspends all phone screen lock security for a period of 24 hours. This means the only action needed to get into the phone for up to 24 hours (even after it has been powered off and back on) is pressing the home key or swiping up. No Touch ID or Face ID is needed while the phone is unlocked during this 24-hour period. This allows the phone to be used by multiple people for gathering evidence, downloading information or whatever else law enforcement needs. This mode also suspends all security around connecting and trusting iTunes; iTunes will allow downloading data from the phone without going through its "trust" security. After 24 hours, the phone reboots, deletes the LE configuration parameters (such as WiFi networks) and reverts back to its original locked and secured state.
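
Here's an equally rough sketch of the one-time-use rule and the 24-hour window described in the last two paragraphs. Again, the class and its logic are purely my own illustration:

```python
import time

class LawEnforcementUnlock:
    """Tracks codes already consumed on this device and the 24-hour unlock window."""
    def __init__(self):
        self.used_codes = set()
        self.unlock_expires_at = 0.0

    def consume_code(self, code: str) -> bool:
        """Accept a valid code exactly once and open a 24-hour unlock window."""
        if code in self.used_codes:
            return False              # one-time use: the same code is never accepted again
        self.used_codes.add(code)
        self.unlock_expires_at = time.time() + 24 * 3600
        return True

    def is_unlocked(self) -> bool:
        """True while the 24-hour window is open; afterwards the phone re-locks."""
        return time.time() < self.unlock_expires_at
```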

The iPhone will also leave a notification for the owner of the phone that the phone has been unlocked and accessed by law enforcement (much the same as the note left in luggage by the TSA after it has been searched). If the phone still has Internet access, it will contact Apple and inform the Apple ID that the phone has been unlocked and accessed by law enforcement. This Internet notification can be suspended for up to 30 days to allow law enforcement time enough to get what they need before the system notifies the Apple ID owner of access to that device. Though, I’d recommend that Apple notify the owner right away of any access by law enforcement.

How to use the code

When a valid generated Apple law enforcement code is entered into the phone in LEM, the phone calculates the validity of the code based on an internal process that runs on the phone continuously. While the phone is validly being used by its owner, this process periodically syncs with Apple's LE servers to ensure that the iPhone's LEM process will work properly should the phone fall into the possession of law enforcement. This information would have to be spelled out and agreed to in Apple's terms and conditions. Apple's servers and the phone remain synchronized in much the same way as RSA one-time keys remain synchronized (within a small, calculable margin of error). Thus, it won't need to synchronize often.
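
Sketching that phone-side check using the same made-up derivation from the earlier issuance example, with a small allowance for clock drift as just described (again, illustrative only, not Apple's actual mechanism):

```python
import hmac, hashlib, time

def verify_le_code(device_secret: bytes, imei: str, serial: str,
                   warrant_id: str, le_officer_id: str,
                   presented_code: str, validity_hours: int = 24) -> bool:
    """Re-derive the expected code for the current window (plus one window on
    either side to tolerate clock drift) and compare in constant time."""
    window = int(time.time() // (validity_hours * 3600))
    for w in (window - 1, window, window + 1):
        message = f"{imei}|{serial}|{warrant_id}|{le_officer_id}|{w}".encode()
        digest = hmac.new(device_secret, message, hashlib.sha256).hexdigest()
        expected = "-".join(digest[:20].upper()[i:i+5] for i in range(0, 20, 5))
        if hmac.compare_digest(expected, presented_code):
            return True
    return False
```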

How to use Law Enforcement Mode

This mode can be brought up by anyone, but to unlock this mode fully, a valid Apple-issued law enforcement ID and one-use code must be entered into the iPhone for the mode to unlock and allow setup of a WiFi network. Without entry of an Apple-issued law enforcement ID number, or after successive incorrect entries, the phone will reboot out of LEM after a short period of time.

Law Enforcement ID

Law enforcement IDs must be generated by Apple, and these IDs will synchronize to all Apple devices in advance, before a device ever falls into law enforcement possession. To keep this list small, it will remain compressed on the device until LEM successfully activates, at which time the file is decompressed for offline validation use. This means that a nefarious someone can't simply get into this mode and start mucking about to gain entry to a random phone. It also means someone can't request that Apple issue a brand new ID on the spot. Even if Apple were to create a new ID, the phone would take up to 24 hours to synchronize… and that assumes the phone still has data service (which it probably doesn't). Without data service, the phone cannot synchronize new IDs. This is the importance of creating these IDs in advance.
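
A rough sketch of that offline ID check, with fabricated IDs and my own choice of compression, purely to illustrate the idea:

```python
import zlib

def pack_id_list(ids: list[str]) -> bytes:
    """What the advance sync would ship to the phone: a compressed ID list."""
    return zlib.compress("\n".join(ids).encode())

def le_id_is_known(compressed_id_list: bytes, le_officer_id: str) -> bool:
    """Decompress the locally stored list only when LEM activates, then look the ID up offline."""
    known = set(zlib.decompress(compressed_id_list).decode().splitlines())
    return le_officer_id in known

# Example with fabricated IDs:
packed = pack_id_list(["LE-98765", "LE-12345"])
print(le_id_is_known(packed, "LE-98765"))   # True
print(le_id_is_known(packed, "LE-00000"))   # False
```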

Apple will also need to go through a validation process to ensure the law enforcement officer requesting an ID is a valid officer working for a legitimate law enforcement organization. This in-advance validation may require a PDF of the officer’s badge and number, an agency issued ID card and any other agency relevant information to ensure the officer is a valid LE officer or an officer of the court. This requires some effort on the part of Apple.

To get an Apple law enforcement ID, the department needing access must apply for such access with Apple under its law enforcement support site (to be created). Once an Apple law enforcement ID has been issued, within 24 hours the ID will sync to phones, thus activating the use of this ID with the phone’s LEM. These IDs should not be shared outside of any law enforcement department. IDs must be renewed periodically through a simple validation process, otherwise they will expire and fall off of the list. Manufacturers shouldn’t have to manage this list manually.

Such a system is relatively simple to design, but may take time to implement. Apple, however, may not be cool with developing such a law enforcement system on its own time and dime. This is where the government may need to step in and mandate that such a law enforcement support system be built by phone manufacturers who insist on using overly strong encryption. While government(s) could simply legislate that companies reduce the encryption strength on their devices, I'd instead strongly recommend that companies be required to build a law enforcement support and unlocking system into their devices should they wish to continue using ever stronger encryption. Why compromise the security of all devices simply for a small number of law enforcement cases? Apple must meet law enforcement somewhere in the middle via technological means.

There is also no reason for Apple and other device manufacturers to deny access to law enforcement agents when software and technical solutions exist that would let manufacturers cooperate with law enforcement and yet not "give away the farm".

I don’t even work for Apple and I designed this functional system in under 30 minutes. There may be other considerations of which I am not aware within iOS or Android, but none of these considerations are insurmountable in this design. Every device that Apple has built can support such a mode. Google should also be required to build a similar system for its Android phones and devices.

Apple is simply not trying.


Rant Time: Google’s Lie

Posted in botch, business, california, rant by commorancy on January 7, 2020

finger-512I’ve already written an article or two about YouTube giving content creators the finger. I didn’t really put that information into this article’s context so that everyone can really understand what’s actually going on at YouTube, with the FTC and with Google. Let’s explore.

Lies and Fiction

Google has asserted and maintained, since at least 2000 when COPPA came into effect, that it didn't allow children under age 13 on its platforms. Well, Google was caught with its proverbial pants down and suffered a $170 million fine at the hands of the FTC based on COPPA. Clearly, Google lied. To maintain that lie, it has had to do a number of things:

  1. For YouTube content creators, YouTube has hidden its metrics for anyone under the age of 13 from viewer stats on YouTube. What that means to creators is that the viewer metrics you see on your stats page are completely inaccurate for viewers under the age of 13. If Google had disclosed the under-13 age group stats on this page, Google's lie would have unraveled far faster than it did. For Google to maintain its lie, it had to hide any possible trail that could lead to uncovering this lie.
  2. For other Google platforms (Stadia, Chromebook, Android phones, etc), they likely also kept these statistics secret for the same reasons. Disclosure that the 12 and under age group existed on Google meant disclosing to the FTC that they had lied about this age group using its services all along.
  3. For Android phones, well, let's just say that many a kid 12 and under has owned an Android phone. Parents have bought them and handed them over to their children. For the FTC to remain so oblivious to this fact for years is a testament to how badly operated this portion of the government is.
  4. Google / YouTube had to instruct engineers to design software systems around this “we don’t display under age 13 metrics” lie.

Anyway, so lie Google did. They lied from 2000 all of the way to 2019. That’s almost 20 years of lying to the government… and to the public.

YouTube’s Lie

Consider that even just one COPPA infraction found to be "valid" could leave a YouTube channel owner destitute. After all, Google's fine was $170 million. Because a single violation can cost a whopping $42,530, it's a major risk simply to maintain a YouTube channel.

Because of the problem of Google perpetuating its lie about 12 and under for so long, this lie has become ingrained in Google’s corporate culture (and software systems). What this means is that for Google to maintain this lie, it had to direct its engineers to write software to avoid showing any statistic information anywhere that could disclose to anyone that Google allows 12 and under onto any of its platforms, let alone YouTube.

This also means that YouTube content creators are entirely left in the dark when it comes to viewer statistics of ages 12 and under. Because Google had intended to continue maintaining its “we don’t serve 12 and under here” lie, it meant that its systems were designed around this lie. This meant that any place where 12 and under could have been disclosed, this data was specifically culled and redacted from view. No one, specifically not YouTube content creators, could see viewer metrics for anyone 12 and under. By intentionally redacting this information from its statistics interfaces, no one could see that 12 and under were actually viewing YouTube videos or even buying products. As a creator, you really have no idea how many 12 and under viewers you have. The FTC will have access into YouTube’s systems to see this information, even if you as a content creator do not.

This means that content creators are actually in the dark for this viewer age group. There’s no way to really know if this age group is being accurately counted. Actually, Google is likely collecting this information, but they’re simply not disclosing it over public interfaces. Though, to be fully safe and to fully protect Google’s lie, they might have been purging this data more often than 13 and older data. If they don’t have the data on the system, they can’t be easily caught with it. Still, that didn’t help when Google finally did get caught and were fined $170 million.

Unfortunately, because Google's systems were intentionally designed around a lie and because they are already in place, undoing that intentional design lie could be a challenge for Google. They've had 19 years' worth of engineering effort building code upon code to avoid disclosing that 12-and-unders use Google's platforms. Undoing 19 years of coding might be a problem.

Swinging back around to that huge fine, this leaves YouTube in a quandary. It means that content creators have no way to know if the metrics that are being served to content creators are in any way accurate. After all, Google has been maintaining this lie for 19 years. They’ve built and maintained their systems around this lie. But now, Google must undo 19 years of lies built into their systems to allow content creators to see what we already knew… that 12 and under have been using the platform probably since 2000.

For content creators, you need to think twice when considering setting up a channel on YouTube. It doesn't matter what your content is. If that content attracts children under 13, you're at risk. The only type of channel content that cannot at all be seen as "for kids" is content that kids would never watch. There is really only a handful of content types I can name that wouldn't appeal to children (not an exhaustive list):

  1. Legal advice from lawyers
  2. Court room video
  3. Horror programs
  4. Political programs
  5. Frank sex topics

It would probably be easier to state those types of programs that do appeal to children:

  1. Pretty much everything else

What that means is topics like music videos, video game footage, cartoons, pet videos, singing competitions, beauty channels, fashion channels, technology channels and toy reviews could appeal to children… and the list goes on. You name it and pretty much every other content type has the possibility of attracting children 12 and under… some content more than others. There’s literally very little that a child 12 and under might not consider watching.

The thing is, when someone decides to create a channel on YouTube, you must now consider if the content you intend to create might appeal to children 12 and under. If it’s generalized information without the use of explicit information, children could potentially tune in. Though, YouTube doesn’t allow true adult content on its platform.

Google’s lie has really put would-be channel creators into a huge bind with YouTube, plummeting the value of YouTube as a platform. For monetization, not only is there now the 1,000 subscriber hurdle you must get past and you must also have 14,000 views in a month, but now you must also be cognizant of the audience your content might attract. Even seemingly child-unfriendly content might draw in children unintentionally. If you interview the wrong person on your channel, you might find that you now have a huge child audience. Operating a YouTube Channel is a huge risk.

YouTube’s Value as a Platform

With this recent Google change, compounded by Google's lie, the value of YouTube as a video sharing platform has significantly dropped. Not only did Google drop a bomb on its content creators, it has lied not only to the government, but to the public for years. With the FTC's hand watching what you're doing on YouTube, YouTube really IS moving towards the "Big Brother is watching" world described in George Orwell's book 1984. Why Google would allow such a deep level of governmental interference over its platform is a major problem, not just for Google, but for the computer industry as a whole. It's incredibly chilling.

$42,530 per COPPA violation is not just small change you can pull out of your pocket. That’s significant bank. So much bank, in fact, that a single violation could bankrupt nearly any less than 100,000 subscriber channel on YouTube.

Not only do you have to overcome YouTube’s silly monetization hurdles, you must attempt to stay far away from the COPPA hurdle that YouTube has now foisted on you.

Google’s Mistake

Google did have a way to rectify and remediate this situation early. It's called honesty. Google could have simply fixed its platform to accurately protect and steer the 12-and-under crowd away from properties where they don't belong, or it could have stated that it did (and does) allow 12 and under to sign up.

If Google had simply been honest about 12 and under and allowed 12 and under to sign up, Google could have set up the correct processes from the beginning that would have allowed not only Google to become COPPA compliant, but by extension allow YouTube creators to remain compliant through Google’s tools. Google should have always remained in the business of protecting its creators from governmental interference. Yet, here we are.

In fact, the COPPA legislation allows for parental permission and consent and it’s not actually that hard to set up, particularly for a large organization like Google. For Google, in fact, it already has mechanisms it could leverage to attempt to obtain verifiable parental consent. If Google had chosen to setup and maintain a 12 and under verifiable parental consent program all along, YouTube content creators could have been left off of the hook. Instead, YouTube has given content creators the finger.

If YouTube content creators must share in Google’s lack of COPPA compliance, then content creators should equally share in a Google created parental consent system. Parental consent isn’t that hard to implement. Google could have spent its time building such a system instead of lying.

Trust and Lies

When companies as big as Google participate in lies of this magnitude, you should seriously question any business you do with such a company. Companies are supposed to be ethically bound to do the right thing. When companies don’t do the right ethical thing and perpetuate lies for years, everyone must consider how much you trust that company.

What else are they lying about? It’s difficult to trust someone who lies. Why is it any different when a company chooses to lie?

When that lie can cost you $42,530 per violation, that’s what comes out of lying. Google not only didn’t protect its content creators, it perpetuated a lie that has now left its content creators hanging out to dry.

This is why YouTube as a content creator platform is about as worthless as it can possibly be… not only for the lie and COPPA, but also the monetization clampdown from 2017-2018. Every year has brought another downside to YouTube and for 2019, it’s Google’s lie.

For large creators who have an entrenched audience and who are making ad revenue bank from that audience (at least for the moment), I understand the dilemma of whether to ditch YouTube. But for those content creators who make maybe $5 a month, is it worth that $5 a month to risk $42,530 every time you upload a video? Worse, the FTC can go back through your back video catalog and fine you for every single video they find! That's a lot of $42,530 fines, potentially at least one per video. Now that's risky!
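
To put that in perspective: at $42,530 per violation, a back catalog of just 100 videos deemed "directed at children" could, at least on paper, add up to more than $4.2 million in fines.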

Solutions

There are solutions. The biggest solution: ditch YouTube for other video platforms such as Facebook, SnapChat, Vimeo or DailyMotion. If you're live streaming, there's YouNow, Twitch and Mixer. You're not beholden to YouTube to gain an audience and following. In fact, with the huge black COPPA cloud now permanently hanging over YouTube, it's only a matter of time before the FTC starts its tirade and cements what I'm saying here in this article. For small and medium sized creators, particularly brand new creators, it's officially time to give YouTube the finger (just as Google has given us the finger). It's long past time to ditch YouTube and to find an alternative video sharing platform. You might as well make that one a 2020 New Year's resolution. Let's all agree that YouTube is officially dead and move on.

Just be sure to read the fine print of whatever service you are considering using. For example, Twitch’s terms and conditions are very explicit with regards to age… no one under 13 is permitted on Twitch. If only Google had been able to actually maintain that reality instead of lying about it for nearly 20 years.


Why Rotten Tomatoes is rotten

Posted in botch, business, california by commorancy on December 31, 2019

When you visit a site like Rotten Tomatoes to get information about a film, you need to ask yourself one very important question, "Is Rotten Tomatoes trustworthy?"

Rotten Tomatoes as a movie review service has come under fire many times for review bombing and manipulation. That is, Rotten Tomatoes seems to allow shills to join the service to review bomb a movie, raising or lowering its various scores by manipulating the Rotten Tomatoes review system. In the past, these claims couldn't be verified. Today, they can.

As of a change in May 2019, Rotten Tomatoes has made it exceedingly easy for both movie studios and Rotten Tomatoes itself to game and manipulate the “Audience Score” ratings. Let’s explore.

Rotten Tomatoes as a Service

Originally, Rotten Tomatoes began its life as an independent movie review service where both critics and audience members could have a voice in what they think of a film. So long as Rotten Tomatoes remained independent and separate from movie studio influence and corruption, it could make that claim. Its reviews were fair and, for the most part, accurate.

Unfortunately, all good things must come to an end. In February of 2016, Fandango purchased Rotten Tomatoes. Let's understand the ramifications of this purchase. Because Fandango is majority owned by Comcast, with Warner Brothers also holding an ownership stake, this firmly plants Rotten Tomatoes well outside the possibility of remaining neutral in film reviews. Keep in mind that Comcast also owns NBC as well as Universal Studios.

Fandango doesn’t own a stake in Disney as far as I can tell, but that won’t matter based on what I describe next about the Rotten Tomatoes review system.

Review Bombing

As stated in the opening, Rotten Tomatoes has come under fire over several notable recent movies whose scores appeared to have been manipulated. Rotten Tomatoes later disputed those claims by stating that its system was not manipulated, while offering no real proof of that fact. We simply have to take them at their word. One of these allegedly review bombed films was Star Wars: The Last Jedi… where the score inexplicably dropped dramatically over about a month. Rotten Tomatoes apparently validated the drop as "legitimate".

Unfortunately, Rotten Tomatoes has become a bit more untrustworthy as of late. Let’s understand why.

As of May of 2019, Rotten Tomatoes introduced a new feature known as "verified reviews". For a review's score to be counted towards the "Audience Score", the reviewer must have purchased a ticket from a verifiable source. Unfortunately, the only source from which Rotten Tomatoes can verify ticket purchases is its parent company, Fandango. All other ticket purchases don't count… thus, if you purchase your ticket at the theater's box office, from MovieTickets.com or via any other means, your review or rating won't count as "verified". Only Fandango ticket purchases count towards "verified" reviews, and only those alter the audience score. This change is BAD. Very, very bad.

Here’s what Rotten Tomatoes has to say from the linked article just above:

Rotten Tomatoes now features an Audience Score made up of ratings from users we’ve confirmed bought tickets to the movie – we’re calling them “Verified Ratings.” We’re also tagging written reviews from users we can confirm purchased tickets to a movie as “Verified” reviews.

While this might sound like a great idea in theory, it's ripe for manipulation problems. Fandango also states that IF it can determine that "other" reviews correspond to confirmed ticket purchases, it will mark them as "verified". Yeah, but that's a manual process and is impossibly difficult to perform at scale. We can pretty much forget that this option even exists. Let's list the problems coming out of this change:

  1. Fandango only sells a small percentage of overall tickets for a film. If the "Audience Score" is calculated primarily or solely from Fandango ticket purchases, then it is a horribly inaccurate metric to rely on.
  2. Fandango CAN handpick "other" non-Fandango ticket purchased reviews to be included. Not likely to happen often, but this also means it can pick its favorite (and favorable) reviews to include. This opens Rotten Tomatoes up to payola, or "pay for inclusion".
  3. By specifying exactly how this process works, this change opens the Rotten Tomatoes system to being gamed and manipulated, even by Rotten Tomatoes staff themselves. Movie studios can also ask their employees, families and friends to exclusively purchase their tickets from Fandango and request these same people to write "glowing, positive reviews" or submit "high ratings"… or face job consequences. Studios might even be willing to pay for these positive reviews.
  4. Studios can even hire outside people (sometimes known as shills) to buy tickets from Fandango, see the movie and then rate the film highly… because they were paid to do so. As I said, manipulation.

Trust in Reviews

It's clear that while Rotten Tomatoes is trying to fix its ills, it is incredibly naive at it. It gets worse. Not only is Rotten Tomatoes incredibly naive, the company is also not at all tech savvy. Its system is so ripe for being gamed that the "Audience Score" is a nearly pointless metric. For example, 38,000 verified ratings out of the millions of people who watched a film? Yeah, if that "Audience Score" number isn't now skewed, I don't know what is.

Case in point. The “Audience Score” for The Rise of Skywalker is 86%. The difficulty with this number is the vast majority of the reviews I’ve seen from people on chat forums don’t rate the film anywhere close to 86%. What that means is that the new way that Rotten Tomatoes is calculating scores is effectively a form of manipulation itself BY Rotten Tomatoes.

To have the most fair and accurate metric, ALL reviews must be counted and included in all ratings. You can't just toss out the vast majority of reviews simply because you can't verify them as holding a ticket. Even still, holding a ticket doesn't mean someone has actually watched the film. Buying a ticket and actually attending a showing of the film are two entirely separate things.

While you may have verified a ticket purchase, did you verify that the person actually watched the film? Are you excluding reviews from brand new Rotten Tomatoes accounts from the audience score? How trustworthy can someone be if this is their first and only review on Rotten Tomatoes? What about people who downloaded the app just to buy a ticket for that film? Simply buying a ticket from Fandango doesn't make the rating or the reviewer trustworthy.

Rethinking Rotten Tomatoes

Someone at Rotten Tomatoes needs to drastically reconsider this change, and they need to do it fast. If Rotten Tomatoes wasn't guilty of manipulating review scores before this late spring 2019 change, it is now. Rotten Tomatoes is definitely guilty of manipulating the "Audience Score" by the sheer lack of reviews covered under this "verified review" change. Nothing can be considered valid when the sample size is so small as to be useless. Verifying a ticket holder also doesn't validate a review author's sincerity, intent or, indeed, legitimacy. It also severely limits who can be counted under the ratings, thus reducing the trustworthiness of the "Audience Score".

In fact, only by looking at past reviews can someone determine if a review author has trustworthy opinions.

Worse, Fandango holds a very small portion of all theater ticket sales (see below). By tabulating the score only (or primarily) from people who bought tickets through Fandango, this change eliminates well over half of the written reviews on Rotten Tomatoes from consideration. Worse, because of the way the metric is calculated, nefarious entities can game the system to their own benefit and manipulate the score quickly.

This has a chilling effect on Rotten Tomatoes. The staff at Rotten Tomatoes needs to roll back this change pronto. For Rotten Tomatoes to return to being a trustworthy, neutral entity in the art of movie reviews, it needs a far better way to determine the trustworthiness of its reviews and of its reviewers. Trust comes from well written, consistent reviews. Ratings come from trusted sources. Trust is earned. The sole act of buying a ticket from Fandango doesn't earn trust. It earns bankroll.

Why then are ticket buyers from Fandango any more trustworthy than people purchasing tickets elsewhere? They aren't… and here's where Rotten Tomatoes has failed. Rotten Tomatoes incorrectly assumes that "verifying" the sale of a ticket via Fandango alone somehow makes a review or rating more trustworthy. It doesn't.

It gets worse because while Fandango may represent at least 70% of online ticket sales, online ticketing itself accounted for just 5-6% of overall box office sales (as of 2012), meaning Fandango STILL represents only a tiny fraction of all tickets sold.

“Online ticketing still just represents five to six percent of the box office, so there’s tremendous potential for growth right here.” –TheWrap in 2012

Granted, this TheWrap article is from 2012. Even if Fandango had managed to grab 50% of overall ticket sales in the subsequent 7 years since that article, that would still leave out the remaining 50% of ticket holders' voices, which will not be tallied into Rotten Tomatoes' current "Audience Score" metric. I seriously doubt that Fandango has managed to achieve anywhere close to 50% of total movie ticket sales. If online ticketing held 5-6% of overall sales in 2012, Fandango might account for perhaps 10-15% of total sales at most by 2019. That's still 85% of all reviews excluded from Rotten Tomatoes' "Audience Score" metric. In fact, it behooves Fandango to keep this overall ticket sales number as low as possible so as to influence its "Audience Score" number with more ease and precision.

To put this in a little more perspective, a movie theater might have 200 seats. 10% of that is 20. That means that for every 200 people who fill a theater, roughly 20 bought their ticket from Fandango and are eligible for their review to count towards the "Audience Score". Considering that only a small percentage of those 20 will actually take the time to write a review or rate the film, that could mean that out of every 200 people who've seen the film legitimately, between 1 and 5 people might be counted towards the Audience Score. Scaling that up, for every 1 million people who see a blockbuster film, somewhere between 5,000 and 25,000 reviews may contribute to the Rotten Tomatoes "Audience Score"… even if there are hundreds of thousands of reviews on the site.
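
Here's that back-of-the-envelope math as a quick sketch. The percentages are the rough estimates used above, not measured figures:

    # Rough estimate of how many of a film's viewers can even count toward "Audience Score".
    viewers = 1_000_000                      # blockbuster audience (illustrative)
    fandango_share = 0.10                    # rough 2019 guess at Fandango's share of tickets
    review_rate_low, review_rate_high = 0.05, 0.25   # guess at how many buyers bother to rate

    eligible = viewers * fandango_share                  # 100,000 bought through Fandango
    counted_low = eligible * review_rate_low             # 5,000
    counted_high = eligible * review_rate_high           # 25,000

    print(f"Of {viewers:,} viewers, only about {counted_low:,.0f} to "
          f"{counted_high:,.0f} ratings can contribute to the Audience Score")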

The fewer the reviews contributing to that score, the easier it is to manipulate that score by adding just a handful of reviews to the mix… and that's where Rotten Tomatoes' "handpicked reviews" come into play (and with them, the potential for payola). Rotten Tomatoes can then handpick positive reviews for inclusion. The problem is that while Rotten Tomatoes understands all of this, so do the studios. Which means that studios can, like I said above, "invite" employees to buy tickets via Fandango before writing a review on Rotten Tomatoes. They can even contact Rotten Tomatoes and pay for "special treatment". This situation can allow movie studios to unduly influence the "Audience Score" for a current release… and it's compounded because so few reviews count towards creating the "Audience Score".

Where Rotten Tomatoes likely counted every review towards this score before the change, the new "verified score" methodology greatly drops the number of reviews that contribute to the tally. This smaller pool of reviews makes it much easier to manipulate the Audience Score number, either by gaming the system or by Rotten Tomatoes handpicking which reviews to include.
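
A toy calculation makes the point. Injecting the same 200 five-star shill ratings barely moves a large, everyone-counts pool, but noticeably moves a small, verified-only pool. All numbers are invented, and the simple star average below is a stand-in for however Rotten Tomatoes actually aggregates its scores:

    def shifted_average(pool_size, honest_avg, shill_count, shill_rating=5.0):
        """Average star rating after injecting shill ratings into an honest pool."""
        total = pool_size * honest_avg + shill_count * shill_rating
        return total / (pool_size + shill_count)

    honest_avg = 3.0       # invented baseline score
    shills = 200           # invented number of paid or "invited" ratings

    small_pool = shifted_average(3_000, honest_avg, shills)      # verified-only pool
    large_pool = shifted_average(300_000, honest_avg, shills)    # everyone-counts pool

    print(f"Small pool:  {honest_avg:.2f} -> {small_pool:.2f}")   # 3.00 -> 3.12
    print(f"Large pool:  {honest_avg:.2f} -> {large_pool:.2f}")   # 3.00 -> 3.00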

Fading Trust

While Rotten Tomatoes was once a trustworthy site for movie reviews, it has greatly reduced its trust levels by instituting such backwards and easily manipulable systems.

Whenever you visit a site like Rotten Tomatoes, you must always question everything you see. When you see something like an “Audience Score”, you must question how that number is calculated and what is included in that number. Rotten Tomatoes isn’t forthcoming.

In the case of Rotten Tomatoes, they have drastically reduced the number of reviews included in that metric because of their "verified purchase" mechanism. Unfortunately, the introduction of that mechanism at once destroys Rotten Tomatoes' trust and trashes the concept of their site.

It Gets Worse

What’s even more of a problem is the following two images:

Screen Shot 2019-12-23 at 7.26.58 AM

Screen Shot 2019-12-23 at 7.26.24 AM

From the above two images, Rotten Tomatoes claims 37,956 "Verified Ratings", yet only 3,342 "Verified Audience" reviews. That's a huge discrepancy. Where are the other 34,614 "Verified" ratings? An Audience Score shouldn't be calculated solely from a simplistic "rate this movie" tap on a phone; it needs to be calculated in combination with an author actually writing a review. Of course, there are also 5,240 audience reviews that didn't contribute to any score at all on Rotten Tomatoes. Those audience reviews are just "there", taking up space.

Single number ratings are pointless without at least some accompanying text to validate them. Worse, we know that these "Verified Ratings" likely have little to do with the "Verified Audience" reviews shown in the images above. Even if those 3,342 audience reviews are actually rolled into the "Verified Ratings" (they probably aren't), that's still such a limited number compared to the rest of the "Verified Ratings" that the score can easily be skewed by people who may not have even attended the film.

You can only determine if someone has actually attended a film by asking them to WRITE even the smallest of a review. Simply pressing “five star” on the app without even caring is pointless. It’s possible the reviews weren’t even tabulated correctly via the App. The App itself may even submit star data after a period of time without the owner’s knowledge or consent. The App can even word its rating question in such a way as to manipulate the response in a positive direction. Can we say, “Skewed”?

None of this leads to trust. Without knowing exactly how that data was collected, the method(s) used and how it was presented on the site and in the app, how can you trust any of it? It's easy to check professional critic reviews because Rotten Tomatoes must cite back to the source of the review. With audience metrics, however, it's all nebulous and easily falsified… particularly when Rotten Tomatoes is intentionally obtuse and opaque about exactly how it collects this data and how it presents it.

Even still, with over one million people having attended and viewed The Rise of Skywalker, Rotten Tomatoes has counted just under 38,000 verified people; something doesn't add up. Yeah, Rotten Tomatoes is so very trustworthy (yeah right), particularly after this "verified" change. Maybe it's time for those Rotten Tomatoes to finally be tossed into the garbage?


Rant Time: Flickr is running out of time & money?

Posted in botch, business, california by commorancy on December 19, 2019

I received a rather questionable email about Flickr, allegedly from Don MacAskill, CEO of SmugMug.

Unfortunately, his email is also wrapped in the guise of email marketing and arrived through the same marketing channel as all other email marketing from Flickr.

Don, if you want us to take this situation seriously, you shouldn’t use email marketing platforms to do it. These emails need to come personally from you using a SmugMug or Flickr address. They also shouldn’t contain several email marketing links. An email from the CEO should contain only ONE link and it should be at the very bottom of the email.

The information contained in this letter is not a surprise in general, but the way it arrived and the tone it takes is a surprise coming from a CEO, particularly when it takes the format of generic email marketing. Let’s explore.

Flickr Pro

I will place the letter at the bottom so you can read it in full. The gist of the letter is, "We're running out of money, so sign up right away!"

I want to take the time to discuss the above “running out of money” point. Here’s an excerpt from Don’s email:

We didn’t buy Flickr because we thought it was a cash cow. Unlike platforms like Facebook, we also didn’t buy it to invade your privacy and sell your data. We bought it because we love photographers, we love photography, and we believe Flickr deserves not only to live on but thrive. We think the world agrees; and we think the Flickr community does, too. But we cannot continue to operate it at a loss as we’ve been doing.

Let’s start by saying, why on Earth would I ever sign up for a money losing service that is in danger of closing? Seriously, Flickr? Are you mad? Don’t give me assurances that *I* can save your business with my single conversion. It’s going to take MANY someones to keep Flickr afloat if it’s running out of money. Worse, sending this email to former Pro members trying to get us to convert again is a losing proposition. Send it to someone who cares, assuming there is anyone like that.

A single conversion isn’t likely to do a damned thing to stem the tide of your money hemorrhaging, Flickr. Are you insane to send out a letter like this in this generic email marketing way? If anything, a letter like this may see even MORE of your existing members run for the hills by cancelling their memberships, instead of trying to save Flickr from certain doom. But, let’s ignore this letter’s asinine message and focus on why I decided to write this article.

Flickr is Dead to Me

I had an email exchange in November of 2018 with Flickr's team. I made my stance exceedingly clear about exactly why I cancelled my Pro membership and why the inexplicable price increase is pointless. And yes, it is a rant. The exchange goes as follows:

Susan from Flickr states:

When we re-introduced the annual Flickr Pro at $49.99 more than 3 years ago, we promised all grandfathered Pros (including the bi-annual and 3-month plans) a 2-year protected price period. We have kept this promise, but in order to continue providing our best service to all of our customers, we are now updating the pricing for grandfathered Pros. We started this process on August 16, 2018.

With this being the case, bi-annual Pros pay $99.98 every 2 years, annual Pros pay $49.99 every year, and 3-month Pros pay $17.97 every 3 months. Notifications including the price increase have been sent out to our users starting from August 16.

I then write back the following rant:

Hi Susan,

Yes, and that means you’ve had more than ample time to make that $50 a year worth it for Pro subscribers. You haven’t and you’ve failed. It’s still the same Flickr it was when I was paying $22.48 a year. Why should I now pay over double the price for no added benefits? Now that SmugMug has bought it, here we are now being forced to pay the $50 a year toll when there’s nothing new that’s worth paying $50 for. Pro users have been given ZERO tools to sell our photos on the platform as stock photos. Being given these tools is what ‘Pro’ means, Susan. We additionally can’t in any way monetize our content to recoup the cost of our Pro membership fees. Worse, you’re displaying ads over the top our photos and we’re not seeing a dime from that revenue.

Again, what have you given that makes $50 a year worth it? You’re really expecting us to PAY you $50 a year to show ads to free users over the top of our content? No! I was barely willing to do that with $22.48 a year. Of course, this will all fall on deaf ears because these words mean nothing to you. It’s your management team pushing stupid efforts that don’t make sense in a world where Flickr is practically obsolete. Well, I’m done with using a 14 year old decrepit platform that has degraded rather than improved. Sorry Susan, I’ve removed over 2500 photos, cancelled my Pro membership and will move back to the free tier. If SmugMug ever comes to its senses and actually produces a Pro platform worth using (i.e., actually offers monetization tools or even a storefront), I might consider paying. As it is now, Flickr is an antiquated 14 year old platform firmly rooted in a 2004 world. Wake up, it’s 2018! The iStockphotos of the world are overtaking you and offering better Pro tools.

Bye.

Flickr and SmugMug

When Flickr was purchased by SmugMug, I wasn’t expecting much from Flickr. But, I also didn’t expect Flickr to double its prices while also providing nothing in return. The platform has literally added nothing to improve the “Pro” aspect of its service. You’re simply paying more for the privilege of having ads placed over the top of your photos. Though, what SmugMug might claim you’re paying for is entirely the privilege of the tiniest bit more storage space to store a few more photos.

Back when storage costs were immense, that pricing might have made sense. In an age where storage costs are impossibly low, that extra per month pricing is way out of line. SmugMug and Flickr should have spent their time adding actual "Pro" tools so that photographers can, you know, make money from their photos by selling them, leasing them, producing framed physical wall hangings, mugs, t-shirts, mouse pads, and so on. Let us monetize our one and only product… you know, like Deviant Art does. Instead, SmugMug has decided to charge more, then place ads over the top of our photos and not provide even a fraction of what Deviant Art does for free.

As a photographer, why should I spend $50 a year on Flickr only to gain nothing when I can move my photos to Deviant Art and pay nothing a year AND get many more tools which help me monetize my images? I can also submit them to stock photo services and make money off of leasing them to publications, something still not possible at Flickr.

Don’s plea is completely disingenuous. You can’t call something “Pro” when there’s nothing professional about it. But then, Don feels compelled to call out where they have actually hosted Flickr and accidentally explains why Flickr is losing money.

We moved the platform and every photo to Amazon Web Services (AWS), the industry leader in cloud computing, and modernized its technology along the way.

What modernization? Hosting a service on AWS doesn’t “modernize” anything. It’s a hosting platform. Worse, this hosting decision is entirely the cause of SmugMug’s central money woes with Flickr. AWS is THE most expensive cloud hosting platform available. There is nothing whatsoever cheap about using AWS’s storage and compute platforms. Yes, AWS works well, but the bill at the end of the month sucks. To keep the lights on when hosting at AWS, plan to spend a mint.

If SmugMug wanted to save on the cost of hosting Flickr, it should have migrated to a much lower cost hosting platform instead of sending empty marketing promises asking people to "help save the platform". Changing hosting platforms might require more hands-on effort from SmugMug's technical staff, but SmugMug could likely halve the cost of hosting this platform by moving it to a lower cost hosting provider… a provider that will work just as well as AWS.
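
As a purely illustrative calculation of why the hosting choice matters so much, here's a sketch. Every number below is an assumption (the photo count comes from Don's own "tens of billions" claim; the average photo size and the non-AWS rate are guesses; the S3 rate is roughly Amazon's published standard-storage list price at the time), so treat the result as directional, not as SmugMug's actual bill:

    # All figures are assumptions for illustration; Flickr's real footprint is unknown.
    photos = 20_000_000_000              # "tens of billions of photos", per Don's letter
    avg_mb_per_photo = 2                 # assumed average size, including generated thumbnails
    total_gb = photos * avg_mb_per_photo / 1024

    aws_s3_per_gb_month = 0.023          # approximate S3 Standard list price circa 2019
    alt_host_per_gb_month = 0.010        # hypothetical lower-cost provider or colocation rate

    aws_monthly = total_gb * aws_s3_per_gb_month
    alt_monthly = total_gb * alt_host_per_gb_month

    print(f"Estimated footprint: ~{total_gb / 1_000_000:.0f} PB")           # ~39 PB
    print(f"AWS S3 storage alone:   ~${aws_monthly:,.0f}/month")            # ~$898,000/month
    print(f"Lower-cost alternative: ~${alt_monthly:,.0f}/month "
          f"({1 - alt_monthly / aws_monthly:.0%} less)")                    # ~57% less

Storage is only part of the bill (bandwidth and compute add more), but it shows why halving hosting costs with a cheaper provider is at least plausible.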

Trying to urge past subscribers to re-up into Pro again simply to "save its AWS hosting decision"? Not gonna happen. Those of us who've gotten no added benefit by paying money to Flickr in the past are not eager to return. Either give us a legitimate reason to pay money to you (add a storefront or monetization tools) or spend your time moving Flickr to a lower cost hosting service, one where Flickr can make money.

Don, why not use your supposed CEO prowess to have your team come up with lower cost solutions? I just did. It’s just a thought. You shouldn’t rely on such tactless and generic email marketing practices to solve the ills of Flickr and SmugMug. You bought it, you have to live with it. If that means Flickr must shutdown because you can’t figure out a way to save it, then so be it.

Below is Don MacAskill’s email in all of its unnecessary email marketing glory (links redacted):

Dear friends,

Flickr—the world’s most-beloved, money-losing business—needs your help.

Two years ago, Flickr was losing tens of millions of dollars a year. Our company, SmugMug, stepped in to rescue it from being shut down and to save tens of billions of your precious photos from being erased.

Why? We’ve spent 17 years lovingly building our company into a thriving, family-owned and -operated business that cares deeply about photographers. SmugMug has always been the place for photographers to showcase their photography, and we’ve long admired how Flickr has been the community where they connect with each other. We couldn’t stand by and watch Flickr vanish.

So we took a big risk, stepped in, and saved Flickr. Together, we created the world’s largest photographer-focused community: a place where photographers can stand out and fit in.

We’ve been hard at work improving Flickr. We hired an excellent, large staff of Support Heroes who now deliver support with an average customer satisfaction rating of above 90%. We got rid of Yahoo’s login. We moved the platform and every photo to Amazon Web Services (AWS), the industry leader in cloud computing, and modernized its technology along the way. As a result, pages are already 20% faster and photos load 30% more quickly. Platform outages, including Pandas, are way down. Flickr continues to get faster and more stable, and important new features are being built once again.

Our work is never done, but we’ve made tremendous progress.

Now Flickr needs your help. It’s still losing money. Hundreds of thousands of loyal Flickr members stepped up and joined Flickr Pro, for which we are eternally grateful. It’s losing a lot less money than it was. But it’s not yet making enough.

We need more Flickr Pro members if we want to keep the Flickr dream alive.

We didn’t buy Flickr because we thought it was a cash cow. Unlike platforms like Facebook, we also didn’t buy it to invade your privacy and sell your data. We bought it because we love photographers, we love photography, and we believe Flickr deserves not only to live on but thrive. We think the world agrees; and we think the Flickr community does, too. But we cannot continue to operate it at a loss as we’ve been doing.

Flickr is the world’s largest photographer-focused community. It’s the world’s best way to find great photography and connect with amazing photographers. Flickr hosts some of the world’s most iconic, most priceless photos, freely available to the entire world. This community is home to more than 100 million accounts and tens of billions of photos. It serves billions of photos every single day. It’s huge. It’s a priceless treasure for the whole world. And it costs money to operate. Lots of money.

Flickr is not a charity, and we’re not asking you for a donation. Flickr is the best value in photo sharing anywhere in the world. Flickr Pro members get ad-free browsing for themselves and their visitors, advanced stats, unlimited full-quality storage for all their photos, plus premium features and access to the world’s largest photographer-focused community for less than $5 per month.

You likely pay services such as Netflix and Spotify at least $9 per month. I love services like these, and I’m a happy paying customer, but they don’t keep your priceless photos safe and let you share them with the most important people in your world. Flickr does, and a Flickr Pro membership costs less than $1 per week.

Please, help us make Flickr thrive. Help us ensure it has a bright future. Every Flickr Pro subscription goes directly to keeping Flickr alive and creating great new experiences for photographers like you. We are building lots of great things for the Flickr community, but we need your help. We can do this together.

We’re launching our end-of-year Pro subscription campaign on Thursday, December 26, but I want to invite you to subscribe to Flickr Pro today for the same 25% discount.

We’ve gone to great lengths to optimize Flickr for cost savings wherever possible, but the increasing cost of operating this enormous community and continuing to invest in its future will require a small price increase early in the new year, so this is truly the very best time to upgrade your membership to Pro.

If you value Flickr finally being independent, built for photographers and by photographers, we ask you to join us, and to share this offer with those who share your love of photography and community.

With gratitude,

Don MacAskill
Co-Founder, CEO & Chief Geek

SmugMug + Flickr

Use and share coupon code [redacted] to get 25% off Flickr Pro now.


Am I impacted by the FTC’s YouTube agreement?

Posted in botch, business, california, ethics, family by commorancy on December 16, 2019

This question is currently a hot debate among YouTubers. The answer to this question is complex and depends on many factors. This is a long read as there's a lot to say (~10,000 words = ~35-50 minutes). Grab a cup of your favorite Joe and let's explore.

COPPA, YouTube and the FTC

I’ve written a previous article on this topic entitled Rant Time: Google doesn’t understand COPPA. You’ll want to read that article to gain a bit more insight around this topic. Today’s article is geared more towards YouTube content creators and parents looking for answers. It is also geared towards anyone with a passing interest in the goings on at YouTube.

Before I start, let me write this disclaimer by saying I’m not a lawyer. Therefore, this article is not intended in any way to be construed as legal advice. If you need legal advice, there are many lawyers available who may be able to help you with regards to being a YouTube content creator and your specific channel’s circumstances. If you ARE HERE looking for legal advice, please go speak to a lawyer instead. The information provided in this article is strictly for information purposes only and IS NOT LEGAL ADVICE.

For Kids or Not For Kids?

screen-shot-2019-11-24-at-2.33.32-am.png

With that out of the way, let's talk a little about what's going on at YouTube for the uninitiated. YouTube has recently rolled out a new channel creator feature. This feature requires that you mark your channel "for kids" or "not for kids". Individual videos can also be marked this way (which becomes important a little later in the article). Note, this "heading" is not the actual text on the screen in the settings area (see the image), but this is what you are doing when you change this YouTube creator setting. This setting is binary: your content is either directed at kids or it is not directed at kids. Let's understand the reasoning around COPPA. Also, "kids" or "child" is defined in COPPA as any person 12 or younger.

When you set the "for kids" setting on a YouTube channel, a number of things happen to your channel: comments are disabled, monetization is severely limited or eliminated and the way YouTube promotes your content changes drastically. There may also be other subtle changes that are as yet unclear. The reason for all of these restrictions is that COPPA prevents the collection of personal information from children 12 and under… or at least requires that, if it is collected, it is deleted when parental consent cannot be obtained. In the 2013 update, COPPA added tracking cookies to the list of items that cannot be collected.

By disabling all of these features under ‘For Kids’, YouTube is attempting to reduce or eliminate its data collection vectors that could violate COPPA… to thwart future liabilities for Google / YouTube as a company.

On the other hand, if you set your channel as 'Not For Kids', YouTube maintains your channel as it has always been, with comments enabled, full monetization possible, etc. Seems simple, right? Wrong.

Not as Simple as it Seems

You’re a creator thinking, “Ok, then I’ll just set my channel to ‘Not for Kids’ and everything will be fine.” Not so fast there, partner. It’s not quite as simple as that. COPPA applies to your channel if even one child visits and Google collects any data from that child. But, there’s more to it.

YouTube will also be rolling out a tool that attempts to identify the primary audience of video content. If YouTube’s new tool identifies a video as content primarily targeting “kids”, that video’s “Not for Kids” setting may be overridden by YouTube and set as “For Kids”. Yes, this can be done by YouTube’s tool, thus overriding your channel-wide settings. It’s not enough to set this setting on your channel, you must make sure your content is not being watched by kids and the content is not overly kid friendly. How exactly YouTube’s scanner will work is entirely unknown as of now.

And here is where we get to the crux of this whole matter.

What is “Kid Friendly” Content?

Unfortunately, there is no clear answer to this question. Your content could be you reviewing toys, it could be drawing pictures by hand on the screen, it could be reviewing comic books, you might ride skateboards, you might play video games, you might even assemble Legos into large sculptures. These are all video topics that could go either way… and it all depends on which audience your video tends to draw in.

It also depends on your existing subscriber base. If a vast majority of your current active subscribers are children 12 and under, this fact can unfairly influence how your content is classified, even if your current content is most definitely not for kids. The fact that 'kids' are watching your channel is a problem for ANY content that you upload.

But you say, "My viewer statistics don't show me a 12 and under category." No, they don't, and there's a good reason why. Google has always professed that it doesn't allow 12 and under on its platform. But clearly, that was a lie. Google does, in fact, allow 12 and under onto its platform. That's crystal clear for two reasons: 1) the FTC fined Google $170 million for violating COPPA (meaning, the FTC found that kids 12 and under are using the platform) and 2) YouTube has rolled out this "for kids / not for kids" setting, confirming that 12 and under do, in fact, watch YouTube and have active Google Account IDs.

I hear someone else saying, “I’m a parent and I let my 11 year old son use YouTube.” Yeah, that’s perfectly fine and legal, so long as you have given “verifiable consent” to the company that is collecting data from your 11 year old child. As long as a parent gives ‘verifiable consent’ for their child under 12 to Google or YouTube or even to the channel owner directly, it’s perfectly legal for your child to be on the platform watching and participating and for Google and YouTube to collect data from your child.

Unfortunately, verifiable consent is difficult to manage digitally. See the DIY method of parental consent below. Unfortunately, Google doesn’t offer any “verifiable consent” mechanism for itself or for YouTube content creators. This means that even if you as a parent are okay with your child being on YouTube, Facebook, Instagram or even Snapchat, if you haven’t provided explicit and verifiable parental consent to that online service for your child 12 and under, that service is in violation of COPPA by handling data that your child may input into that service. Data can include name, telephone number, email address or even sharing photos or videos of themselves. It also includes cookies placed onto their devices.

COPPA was written to penalize the "web site" or "online services" that collect a child's information. It doesn't penalize the family. Without "verifiable consent" from a parent or legal guardian, it's the same to the "web site" or "online service" as no consent at all. Implicit consent isn't valid for COPPA; consent must be explicit, verifiable and given by a parent or legal guardian directly to the service being used by the child.

The Murky Waters of Google

If only YouTube were Google’s only property to consider. It isn’t. Google has many, many properties. I’ll make a somewhat short-ish list here:

  • Google Search
  • Google Games
  • Google Music
  • Google Play Store (App)
  • Google Play Games (App)
  • Google Stadia
  • Google Hangouts
  • Google Docs
  • Google’s G Suite
  • Google Voice
  • Google Chrome (browser)
  • Google Chromebook (device)
  • Google Earth (App)
  • Google Movies and TV
  • Google Photos
  • Google’s Gmail
  • Google Books
  • Google Drive
  • Google Home (the smart speaker device)
  • Google Chromecast (TV device)
  • Android OS on Phones
  • … and the list goes on …

To drive all of these properties and devices, Google relies on the creation of a Google Account ID. To create an account, you must supply Google with certain specific identifying information including email address, first and last name and various other required information. Google will then grant you a login identifier and a password in the form of credentials which allows you to log into and use any of the above Google properties, including (you guessed it) YouTube.

Without "verifiable consent" supplied to Google for a child 12 and under, whatever data Google has collected from your child during the Google Account signup process (or in any of the above apps) violates COPPA, a rule the Federal Trade Commission (FTC) is tasked with enforcing.

Yes, this whole situation gets even murkier.

Data Collection and Manipulation

The whole point to COPPA is to protect data collected from any child aged 12 and under. More specifically, it rules that this data cannot be collected / processed from the child unless a parent or legal guardian supplies “verifiable consent” to the “web site” or “online service” within a reasonable time of the child having supplied their data to the site.

As of 2013, data collection and manipulation isn't defined just by what the child personally uploads and types, though this data is included. The Act was expanded to include cookies placed onto a child's computer device to track and target that child with ads. These cookies are also considered protected data by COPPA as they could be used to personally identify the child. If a service does not have "verifiable consent" on file for that child from a parent or guardian, the "online service" or "web site" is considered by the FTC to be in violation of COPPA.

The difficulty with Google's situation is that Google actually stores a child's data within the child's Google Account ID, an account ID that is entirely separate from YouTube. For example, if you buy your child a Samsung Note 10 phone running Android and you, as a parent, create a Google Account for your 12 or under child to use that device, you have just helped Google violate COPPA. This is part of the reason the FTC fined Google $170 million for violations of COPPA. Perhaps not this specific scenario, but the fact that Google doesn't offer a "verifiable consent" system to verify a child's access to its services and devices prior to collecting data or granting access to services led the FTC to its ruling. The FTC's focus, however, is currently YouTube… even though Google is violating COPPA all over its properties as a result of the use of a Google Account ID.

YouTube’s and COPPA Fallout

Google wholly owns YouTube. Google purchased the YouTube property in 2006. In 2009, Google retired YouTube’s original login credential system and began requiring YouTube to use Google Accounts to gain access to the YouTube property by viewers. This change is important.

It also seems that YouTube still operates mostly as an autonomous entity within Google's larger corporate structure. What all of this means, more specifically, is that YouTube now uses Google Accounts, a separately controlled and operated system within Google, to manage credentials and grant access not only to the YouTube property, but to every other property that Google has (see the short-ish list above).

In 2009, the YouTube developers deprecated their own home grown credentials system and began using the Google Accounts system of credential storage. This change very likely means that YouTube itself no longer stores or controls any credential or identifying data. That data is now contained within the Google Accounts system. YouTube likely now only manages the videos that get uploaded, comments, the serving of ads on videos (the tracking and management of which is probably also controlled by Google), content ID matching and anything else that appears in the YouTube UI. Everything else is likely out of the YouTube team's control (or even access). In fact, I'd suspect that the YouTube team has essentially zero access to the data and information stored within the Google Accounts system (with the exception of the specific data the account holder has authorized to be publicly shown).

Why is this Google Accounts information important?

So long as Google Accounts remains a separate entity from YouTube (even though YouTube is owned by the same company), YouTube can't be in violation of COPPA (at least not where the storage of credentials is concerned). There is one exception which YouTube does control… its comment system.

The comment system on YouTube is one of the earliest "modern" social networks ever created. Only Facebook and MySpace were slightly earlier, though all three were created within a couple of years of one another. It is also the only free form place left in the present 2019 YouTube interface that allows a 12 or under child to incidentally type some form of personally identifying information into a public forum for YouTube to store (in violation of COPPA).

This is the reason that the “for kids” setting disables comments. YouTube formerly had a private messaging service, but it was retired as of September of 2019. It is no longer possible to use YouTube to have private conversations between other YouTube users. If you want to converse with another YouTube viewer, you must do it in a public comment. This change was likely also fallout from Google’s COPPA woes.

Google and Cookies

For the same reason as Google Accounts, YouTube likely doesn’t even manage its own site cookies. It might, but it likely relies on a centralized internal Google service to create, manage and handle cookies. The reason for this is obvious. Were YouTube’s developers to create and manage their own separate cookie, it would be a cookie that holds no use for other Google services. However, if YouTube developers were to rely on a centralized Google controlled service to manage their site’s cookies, it would allow the cookie to be created in a standardized way that all Google services can consume and use. For this reason, this author supposes a centralized system is used at YouTube rather than something “homegrown” and specific to YouTube.

While it is possible that YouTube might create its own cookies, it’s doubtful that YouTube does this for one important reason: ad monetization. For YouTube to participate in Google Advertising (yet another service under the Google umbrella of services), YouTube would need to use tracking cookies that the Google Advertising service can read, parse and update while someone is watching a video on YouTube.

This situation remains murky because YouTube can manage its own internal cookies. I’m supposing that YouTube doesn’t because of a larger corporate platform strategy. But, it is still entirely possible that YouTube does manage its own browser cookies. Only a YouTube employee would know for certain which way this one goes.

Because of the ambiguity in how cookies are managed within Google and YouTube, this is another area where YouTube has erred on the side of caution by disabling ads and ad tracking if a channel is marked as ‘for kids’. This prevents placing ad tracking cookies on any computers from ‘for kids’ marked channels and videos, again avoiding violations of COPPA.
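
Conceptually, the gate YouTube appears to have built looks something like the sketch below. This is a simplification of whatever YouTube actually does internally; the field and function names are invented for illustration:

    import uuid
    from dataclasses import dataclass

    @dataclass
    class Video:
        title: str
        made_for_kids: bool              # the binary audience setting discussed earlier

    def playback_policy(video: Video) -> dict:
        """Decide which data-collection features are allowed for a given playback."""
        if video.made_for_kids:
            return {
                "comments_enabled": False,         # no free-form text from viewers
                "tracking_cookie": None,           # no persistent identifier (COPPA, 2013 update)
                "ads": "contextual_only",          # ads keyed to the video, not the viewer
            }
        return {
            "comments_enabled": True,
            "tracking_cookie": str(uuid.uuid4()),  # persistent identifier used for ad targeting
            "ads": "personalized",
        }

    print(playback_policy(Video("Lego sculpture timelapse", made_for_kids=True)))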

The FTC’s position

Unfortunately, the FTC has put themselves into a constitutionally precarious position. The United States Constitution has a very important provision within its First Amendment.

Let me cite a quote from the US Constitution’s First Amendment (highlighting and italics added by author to call out importance):

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

The constitutional difficulty the FTC has placed itself in is that YouTube, by its very nature, offers a journalistic platform which is constitutionally protected from interference by the United States government. The government (or more specifically, Congress) cannot make law that in any way abridges freedom of speech or of the press.

A video on YouTube is not only a form of journalism, it is a form of free speech. As long as YouTube and Google remain operating within the borders of the United States, United States residents must be able to use this platform unfettered, without government interference.

How does this apply to the FTC? It applies because the FTC is a governmental entity created by an act of the US Congress and, therefore, acts on behalf of the US Congress. This means that the FTC must uphold all provisions of the United States Constitution when dealing with matters of Freedom of Speech and Freedom of the Press.

How does this problem manifest for the FTC? The FTC has repeatedly stated that it will use "tools" to determine if a YouTube channel's content is intended for and primarily targets children 12 and under. Here's the critical part. If a channel's content is determined to be targeting children 12 and under, the channel owner may be fined up to $42,530 per video, as it will have been deemed in violation of COPPA.

There are two problems with the above statements the FTC has made. Let’s examine text from this FTC provided page about YouTube (italics provided by the FTC):

So how does COPPA apply to channel owners who upload their content to YouTube or another third-party platform? COPPA applies in the same way it would if the channel owner had its own website or app. If a channel owner uploads content to a platform like YouTube, the channel might meet the definition of a “website or online service” covered by COPPA, depending on the nature of the content and the information collected. If the content is directed to children and if the channel owner, or someone on its behalf (for example, an ad network), collects personal information from viewers of that content (for example, through a persistent identifier that tracks a user to serve interest-based ads), the channel is covered by COPPA. Once COPPA applies, the operator must provide notice, obtain verifiable parental consent, and meet COPPA’s other requirements.

and there’s more, which contains the most critical part of the FTC’s article:

Under COPPA, there is no one-size-fits-all answer about what makes a site directed to children, but we can offer some guidance. To be clear, your content isn’t considered “directed to children” just because some children may see it. However, if your intended audience is kids under 13, you’re covered by COPPA and have to honor the Rule’s requirements.

The Rule sets out additional factors the FTC will consider in determining whether your content is child-directed:

  • the subject matter,
  • visual content,
  • the use of animated characters or child-oriented activities and incentives,
  • the kind of music or other audio content,
  • the age of models,
  • the presence of child celebrities or celebrities who appeal to children,
  • language or other characteristics of the site,
  • whether advertising that promotes or appears on the site is directed to children, and
  • competent and reliable empirical evidence about the age of the audience.

Content, Content and more Content

The above quotes discuss YouTube content becoming "covered by COPPA". This is a ruse. Content is protected speech under the United States Constitution, as defined within the First Amendment (see above). Nothing in any YouTube visual content published by a United States citizen can be "covered by COPPA". The First Amendment sees to that.

Let's understand why. First, COPPA is a data collection Act. It has nothing whatever to do with content ratings or content age appropriateness and, indeed, does not discuss anything else related to visual content targeted towards children of ANY age. Indeed, there is no verbiage within the COPPA provisions that discusses YouTube, visual content, audio content or anything else to do with Freedom of Speech matters.

It gets worse… at least for the FTC. Targeting channels for disruption by fining them strictly over content uploaded onto the channel is less about protecting children’s data and more about content censorship on YouTube. Indeed, fining a channel $42,530 is tantamount to censorship as it is likely to see that content removed from YouTube… which is, indeed, censorship in its most basic form. Any censorship of Freedom of Speech is firmly against First Amendment rights.

Since the FTC is using fines based on COPPA as leverage against content creators, the implication is that the FTC will use this legal leverage to have YouTube take down content it feels inappropriately targets children 12 and under, rather than upholding COPPA's actual data protection provisions. Indeed, the FTC will effectively be making new law by fining channels based on content, not on whether data was actually collected in violation of COPPA's data collection provisions. Though the first quoted paragraph may claim "data collection" as a metric, the second is solely about "offending content"… which is entirely about censorship. Why is that? Let's continue.

COPPA vs “Freedom of Speech”

The FTC has effectively hung themselves out to dry. In fact, if the FTC does fine even ONE YouTube channel for “inappropriate content”, the FTC will be firmly in the business of censorship of journalism. Or, more specifically, the FTC will have violated the First Amendment rights of U.S. Citizens’ freedom of speech protections.

This means that in order to enforce COPPA against YouTube creators, the FTC has now firmly put itself in the precarious position of violating the U.S. Constitution's First Amendment. In fact, the FTC cannot fine even one channel owner without violating the First Amendment.

In truth, it can fine only under the following circumstances:

  1. The FTC proves that the YouTube channel actually collected and currently possesses inappropriate data from a child 12 and under.
  2. The FTC leaves the channel entirely untouched. The channel and content must remain online and active.

Number 2 is actually quite a bit more difficult for the FTC than it sounds. Because YouTube and the FTC have made an agreement, that means that YouTube can be seen as an agent of the FTC by doing the FTC’s bidding. This means that even if YouTube takes down the channel after a fine for TOS reasons, the FTC’s fining action can still be construed as in violation of First Amendment rights because YouTube acted as an agent to take down the “offending content”.

It gets even more precarious for the FTC. Even the simple act of levying a fine against a YouTube channel could be seen as a violation of First Amendment rights. This action by the FTC seems less about protecting children's data and more about going after YouTube content creators "targeting children with certain types of content" (see above). Because the latter quote from the FTC article explicitly calls out types of content as "directed to children", it shows that this isn't about COPPA, but about visual content rules. Visual content rules DO NOT exist in COPPA.

Channel Owners and Content

If you are a YouTube channel owner, all of the above should greatly concern you for the following reasons:

  1. You don’t want to become a Guinea Pig to test First Amendment legal waters of the FTC + COPPA
  2. The FTC’s content rules above effectively state, “We’ll know it when we see it.” This is constitutionally BAD. This heavily implies content censorship intent. This means that the FTC can simply call out any content as being inappropriate and then fine a channel owner for uploading that content.
  3. It doesn't state if the rule applies retroactively. Does previously uploaded content become subject to the FTC's whim?
  4. The agreement takes effect beginning January 1, 2020
  5. YouTube can “accidentally” reclassify content as “for kids” when it clearly isn’t… which can trigger an FTC action.
  6. The FTC will apparently have direct access to the YouTube platform scanning tools. To what degree it has access is unknown. If it has direct access to take videos or channels offline, it has direct access to violate the First Amendment. Even if it must ask YouTube to do this takedown work, the FTC will still have violated the First Amendment.

The Fallacy

The difficulty I have with this entire situation is that the FTC now appears to be holding content creators to blame for heavy deficiencies within YouTube's and Google's platforms. Because Google failed to properly police its own platform for users 12 and under, it now seeks to pass that blame down onto YouTube creators simply because they create and upload video content. Content, I might add, that is completely protected under the United States Constitution's First Amendment as "Freedom of Speech". Pre-shot video content is a one-way, passive form of communication.

Just like broadcast and cable TV, YouTube is a video sharing platform. It is designed to allow creators to impart one-way, passive communication using pre-made videos, just like broadcast TV. If these FTC actions apply to YouTube, then they equally apply to broadcast and cable television providers… particularly now that CBS, ABC, NBC, Netflix, Disney+ (especially Disney+), Hulu, Vudu, Amazon, Apple and cable TV providers also offer "web sites" and "online services" where their respective video content can (and will) be viewed by children 12 and under via a computer device or web browser and where a child is able to input COPPA protected data. For example, is Disney+ requiring verifiable parental consent to comply with COPPA?

Live Streaming

However, YouTube now also offers live streaming which changes the game a little for COPPA. Live streaming offers two-way live communication and in somewhat real-time. Live streaming is a situation where a channel creator might be able to collect inappropriate data from a child simply by asking pointed questions during a live stream event. A child might even feel compelled to write into live chat information that they shouldn’t be giving out. Live streaming may be more likely to collect COPPA protected data than pre-made video content simply because of the live interactivity between the host and the viewers. You don’t get that level of interaction when using pre-made video content.

Live streaming or not, there is absolutely no way a content creator can in any way be construed as an “Operator” of Google or of YouTube. The FTC is simply playing a game of “Guilty by Association”. They are using this flawed logic… “You own a YouTube channel, therefore you are automatically responsible for YouTube’s infractions.” It’s simply Google’s way of passing down its own legal burdens by your channel’s association with YouTube. Worse, the FTC seems to have bought into this Google shenanigan. It’s great for Google, though. They won’t be held liable for any more infractions against COPPA so long as YouTube creators end up shouldering that legal burden for Google.

The FTC seems to have conveniently forgotten this next part. In order to have collected data from a child, you must still possess a copy of that data to prove that you actually did collect it and that you are STILL in violation of COPPA. If you don’t have a copy of the alleged violating data, then you either didn’t collect it, the child didn’t provide it, you never had it to begin with, or you have since deleted it. As for cookie violations, it’s entirely a stretch to say that YouTube creators had anything to do with how Google / YouTube manages cookies. Regarding deletion, the COPPA verbiage under Parental Consent states:

§312.4(c)(1). If the operator has not obtained parental consent after a reasonable time from the date of the information collection, the operator must delete such information from its records;

If an “operator” deletes such records, then the “operator” is not in violation of COPPA. If an “operator” obtains parental consent, then the “operator” is also not in violation of COPPA. Nothing, though, states definitively that a YouTube creator assumes the role of “operator”.

This is important because Google is and remains the “operator”. Until or unless Google extends access to its Google Accounts collected data to ALL YouTube creators so that a creator can take possession of said data, a creator cannot be considered an “operator”. The YouTube creator doesn’t have (and never has had) access to the Google Account personal data (other than what is publicly published on Google). Only Google has access to this account data, which has been collected as part of creating a new Google Account. Even the YouTube property and its employees likely don’t have access to this Google Account personal data. This means that, by extension, a YouTube creator doesn’t have a copy of any personal data that a Google Accounts signup may have collected… and therefore the YouTube content creator is NOT in violation of COPPA, though that doesn’t take Google off of the hook for it.

A YouTube content creator must actually POSSESS the data to be in violation. The FTC’s burden of proof is to show that the YouTube content creator actually has possession of that data. Who possesses that data? Google. Who doesn’t possess that data? The YouTube content creator. Though, there may be some limited edge cases where a YouTube creator might have requested personal information from a child in violation of COPPA. Even if a YouTube creator did request such data, so long as it has since been deleted fully, it is not in violation of COPPA. You must still be in possession of said data to be in violation of COPPA, at least according to how the act seems to read. If you have questions about this section, you should contact a lawyer for definitive confirmation and advice. Remember, I’m not a lawyer.

There is only ONE situation where a YouTube content creator may be in direct violation of COPPA. That is for live streaming. If a live streamer prompts for personal data to be written into the live chat area from its viewers and one of those viewers is 12 or under, the creator will have access to COPPA violating personal data. Additionally, comments on videos might be construed as in violation of COPPA if a child 12 and under writes something personally identifying into a comment. Though, I don’t know of many content creators who would intentionally request their viewers to reveal personally identifying information in a comment on YouTube. Most people (including content creators) know the dangers all too well of posting such personally identifying information in a YouTube comment. A child might not, though. I can’t recall having watched one single YouTube channel where the host requests personally identifying information be placed into a YouTube comment. Ignoring COPPA for a second, such a request would be completely irresponsible. Let’s continue…

COPPA does state this about collecting data under its ‘Definitions’ section:

Collects or collection means the gathering of any personal information from a child by any means, including but not limited to:

(1) Requesting, prompting, or encouraging a child to submit personal information online;

(2) Enabling a child to make personal information publicly available in identifiable form. An operator shall not be considered to have collected personal information under this paragraph if it takes reasonable measures to delete all or virtually all personal information from a child’s postings before they are made public and also to delete such information from its records; or

(3) Passive tracking of a child online.

The “Enabling a child” section above is the reason for the removal of comments when the “for kids” setting is defined. Having comments enabled on a video when a child 12 and under could be watching enables the child to write in personal information if they so choose. Simply having a comment system available to someone 12 and under appears to be an infraction of COPPA. YouTube creators DO have access to enable or disable comments. What YouTube creators don’t have access to is the age of the viewer. Google hides that information from YouTube content creators. YouTube content creators, in good faith, do not know the ages of anyone watching their channel.

Tracking a child’s activities is not possible by a YouTube content creator. A content creator has no direct or even incidental access to Google’s systems which perform any tracking activities. Only Google does. Therefore, number 3 does not apply to YouTube content creators. The only way number 3 would ever apply to a creator is if Google / YouTube offered direct access to its cookie tracking systems to its YouTube content creators. Therefore, only numbers 1 and 2 could potentially apply to YouTube content creators.

In fact, because Google Accounts hides its personal data from YouTube content creators (including the ages of its viewers), content creators don’t know anything personal about any of their viewers. Which raises the question: how are YouTube content creators supposed to know if a child 12 and under is even watching?

Google’s Failures

The reality is, Google has failed to control its data collection under Google Accounts. It is Google Accounts that needs to have COPPA applied to it, not YouTube. In fact, this action by the FTC will actually solve NOTHING at Google.

Google’s entire system is tainted. Because of the number of services that Google owns and controls, placing COPPA controls on only ONE of these services (YouTube) is the absolute bare minimum for an FTC COPPA enforcement action. It’s clear that the FTC simply doesn’t understand the breadth and scope of Google’s COPPA failures within its systems. Placing these controls on YouTube will do NOTHING to fix the greater COPPA violations which continue unabated within the rest of Google’s services, including its brand new video game streaming service, Google Stadia. Google Stadia is likely to draw in just as many children 12 and under as YouTube. Probably more. If Stadia has even one sharing or voice chat service active or uses cookies to track its users, Stadia is in violation for the same exact reasons YouTube is… Google’s failure of compliance within Google Accounts.

Worse, there’s Android. Many parents are now handing brand new Android phones to their children 12 and under. Android has MANY tracking features enabled on its phones. From the GPS on board, to cookies, to apps, to the cell towers, to the OS itself. Talk about COPPA violations.

What about Google Home? You know, that seemingly innocuous smart speaker? Yeah, that thing is going to track not only each individual’s voice, it may even store recordings of those voices. It probably even tracks what things you request and then, based on your Google Account, will target ads on your Android phone or on Google Chrome based on things you’ve asked Google Home to provide. What’s more personally identifying than your own voice being recorded and stored after asking something personal?

Yeah, YouTube is merely the tippiest tip of a much, much, MUCH larger corporate iceberg that is continually in violation of COPPA within Google. The FTC just doesn’t get that its $170 million fine and First Amendment violating censorship efforts on YouTube aren’t the right course of action. Not only does the FTC’s involvement in censorship on YouTube lead to First Amendment violations, it won’t solve the rest of the COPPA violations at Google.

Here’s where the main body of this article ends.

Because there are still more questions, thoughts and ideas around this issue, let’s explore some deeper ideas which might answer a few more of your questions as a creator or as a parent. Each question is prefaced by a ➡️ symbol. At this point, you may want to skim the rest of this article for specific thoughts which may be relevant to you.


➡️ “Should I Continue with my YouTube Channel?”

This is a great question and one that I can’t answer for you. Since I don’t know your channel or your channel’s content, there’s no way for me to give advice to you. Even if you do tell me your channel and its content, the FTC explicitly states that it will be at the FTC’s own discretion whether a channel’s content “is covered by COPPA”. This means you need to review your own channel content to determine if your video content drives kids 12 and under to watch. Even then, it’s a crap shoot.

Are there ways you can begin to protect your channel? Yes. The first way is to post a video requesting that all subscribers who are 12 and under either unsubscribe from the channel or alternatively ask their parents to provide verifiable consent to you to allow that child to continue watching. This consent must come from a parent or guardian, not the child. Obtaining verifiable consent is not as easy as it sounds. Though, after you have received verifiable parental consent from every “child” subscriber on your channel, you can easily produce this consent documentation to the FTC if they claim your channel is in violation.

The next option is to apply for TRUSTe’s Children’s Privacy Certification. This affords your YouTube channel “Safe Harbor” protections against the FTC. This one is likely most helpful for large YouTube channels which tend to target children and which make significant income through ad monetization. TRUSTe’s certification is not likely to come cheap. This is the reason this avenue would only be helpful for the largest channels receiving enough monetization to pay for such a service.

Note, if you go through the “Safe Harbor” process or obtain consent for every subscriber, you won’t need to set your channel as ‘for kids’. Also note that “Safe Harbor” may not be possible due to Google owning all of the equipment that operates YouTube. Certification programs usually require you to have direct access to systems to ensure they continue to comply with the terms of the certification. Certifications usually also require direct auditing of systems to ensure the systems comply with the certification requirements. It’s very doubtful that Google will allow an auditing firm to audit YouTube’s servers on behalf of a content creator for certification compliance… and even if they did allow such an audit, YouTube’s servers would likely fail the certification audit.

The final option is to suspend your channel. Simply hide all of your content and walk away from YouTube. If you decide to use another video service like DailyMotion, Vimeo, or Twitch, the FTC may show up there as well. If they can make the biggest video sharing service in the world bow down to the FTC, then the rest of these video sharing services are likely not far behind.

➡️ “I don’t monetize my channel”

This won’t protect you. It’s not about monetization. It’s about data collection. The FTC is holding channel owners responsible for Google’s irresponsible data collection practices. Because Google can’t seem to police its own data collection to shield its end users from COPPA, Google/YouTube has decided to skip trying to fix their broken system and, instead, YouTube has chosen to pass their violations down onto their end users… the YouTube creators.

This “passing off liability” action is fairly unheard of in most businesses. Most businesses attempt to shield their end users as much as possible from legal liabilities arising from the use of their services. Not Google or YouTube. They’re more than willing to hang their end users out to dry and let their end users take the burden of Google’s continued COPPA violations.

➡️ “My content isn’t for kids”

That doesn’t matter. What matters is whether the FTC thinks it is. If your content is animated, video game related, toy related, art related, craft related or in any way might draw in children as viewers, that’s all that matters. Even one child 12 and under is enough to shift Google’s COPPA data collection liabilities down onto your shoulders.

➡️ “I’ve set my channel as ‘not for kids’”

This won’t protect you. Google has a tool in the works that will scan the visual content of a video and potentially reclassify a video as “for kids” in defiance of the channel-wide setting of “not for kids”. Don’t expect that the channel-wide setting will hold up for every single video you post. YouTube can reclassify videos as it sees fit. Whether there will be a way to appeal this is as yet unknown. To get rid of that reclassification of a video, you may have to delete the video and reupload. Though, if you do this and the content remains the same, it will likely be scanned and marked “for kids” again by YouTube’s scanner. Be cautious.

➡️ “I’ll set my channel ‘for kids’”

Do this only if you’re willing to live with the restrictions AND only if your content really is for kids (or is content that could easily be construed as for kids). While this channel setting may seem to protect your channel from COPPA violations, it actually doesn’t. On the other hand, if your content truly isn’t for children and you set it ‘for kids’ that may open your channel up to other problems. I wouldn’t recommend setting content as ‘for kids’ if the content you post is not for kids. Though, there’s more to this issue… keep reading.

Marking your content “for kids” won’t actually protect you from COPPA. In fact, it makes your channel even more liable to COPPA violations. If you mark your content as “for kids”, you are then firmly under the obligation of providing proof that your channel absolutely DID NOT collect data from children under the age of 13. Since the FTC is making creators liable for Google’s problematic data collection practices, you could be held liable for Google’s broken data collection system simply by marking your content as ‘for kids’.

This setting is very perilous. I definitely don’t recommend ANY channel use this setting… not even if your channel is targeted at kids. By setting ‘for kids’ on any channel or content, your channel WILL become liable under COPPA’s data collection provisions. Worse, you will be held liable for Google’s data collections practices… meaning the FTC can come after you with fines. This is where you will have to fight to prove that you presently don’t have access to any child’s collected data, that you never did and that it was solely Google who stored and maintained that data. If you don’t possess any of this alleged data, it may be difficult for the FTC to uphold fines against channel owners. But, unfortunately, it may cost you significant attorney fees to prove that your channel is in the clear.

Finally, it’s entirely possible that YouTube may change this ‘for kids’ setting so that it becomes a one-way transition. This means that you may be unable to undo this change in the future. If it becomes one way, then a channel that is marked ‘for kids’ may never be able to go back to ‘not for kids’. You may have to create an entirely new channel and start over. If you have a large channel following, that could be a big problem. Don’t set your channel ‘for kids’ thinking you are protecting your channel. Do it because you’re okay with the outcome and because your content really is targeted for kids. But, keep in mind that setting ‘for kids’ will immediately allow the FTC to target your channel for COPPA violations.

➡️ “I’m a parent and I wish to give verifiable parental consent”

That’s great. Unfortunately, doing so is complicated. Because it’s easy for a child to fabricate such information using friends or parents of friends, giving verifiable consent to a provider is more difficult for parents than it sounds. It requires first verifying your identity as a parent, then it requires the provider to collect consent documentation from you.

It seems that Google / YouTube have chosen not to set up a mechanism to collect verifiable consent themselves, let alone for YouTube content creators. What that means is that there’s no easy way for you as a parent to give (or a channel to get) verifiable consent. On the flip side, as a content creator, it is left to you to handle contacting parents and collecting verifiable consent for child subscribers. You can use a service that will cost you money or you can do it yourself. As a parent, you can do your part by contacting a channel owner and giving them explicit verifiable consent. Keep reading to understand how to go about giving consent.

Content Creators and Parental Consent

Signing up for a service that provides verifiable consent is something that larger YouTube channels may be able to afford. But, for a small YouTube channel, collecting such information from every new subscriber will be difficult. Google / YouTube could set up such an internal verification service for its creators, but YouTube doesn’t care about that or about complying with COPPA. If Google cared about complying with COPPA, they would already have a properly working age verification system in Google Accounts that forces children to set their real age and which requires verifiable consent from the parent of a child 12 and under. If a child 12 and under is identified, Google can then block access to all services that might allow the child to violate COPPA until such consent is given.

It gets even more complicated. Because YouTube no longer maintains a private messaging service, there’s no way for a channel owner to contact subscribers directly on the YouTube platform other than posting a one-way communication video to your channel showing an email address or other means to contact you. This is why it’s important for each parent to reach out to each YouTube channel owner where the child subscribes and offer verifiable consent to the channel owner.

As a creator, this means you will need to post a video stating that ALL subscribers who are under the age of 13 must have parental consent to watch your channel. The child will need to ask their parent to contact you using a COPPA authorized mechanism to provide consent. This will allow you to begin the collection of verifiable consent from parents of any children watching or subscribed to your content. Additionally, with every video you post, you must also have an intro on every video stating that all new subscribers 12 and under must have their parent contact the channel owner to provide consent. This shows the FTC that your channel is serious about collecting verifiable parental consent.

So what is involved in Do It Yourself consent? Not gonna lie. It’s going to be very time consuming. However, the easiest way to obtain verifiable consent is setting up and using a two-way video conferencing service like Google Hangouts, Discord or Skype. You can do this yourself, but it’s better if you hire a third party to do it. It’s also better to use a service like Hangouts which shows all parties’ faces together on the screen at once. This way, when you record the call for your records, both your face and the parent’s and child’s faces are readily shown. This shows you didn’t fabricate the exchange.

To be valid consent, both the parent and the child must be present and visible in the video while conferencing with the channel owner. The channel owner should also be present in the call and visible on camera if possible. Before beginning, the channel owner must notify the parent that the call will be recorded by the channel owner for the sole purposes of obtaining and storing verifiable consent. You may want to ensure the parent understands that the call will only and ever be used for this purpose (and hold to that). It is off limits to post these videos as a montage on YouTube as content. Then, you may record the conference call and keep it in the channel owner’s records. As a parent, you need to be willing to offer a video recorded statement to the channel owner stating something similar to the following:

“I, [parent or guardian full name], am 18 years of age or older and give permission to [your channel name] for my child / my ward [child’s YouTube public profile name] to continue watching [your channel name]. I additionally give permission to [your channel name] to collect any necessary data from my child / my ward while watching your channel named [your channel name].”

If possible, the parent should hold up the computer, tablet, phone or device that the child will use to the camera so that it clearly shows the child account’s profile name is logged into YouTube on your channel. This will verify that it is, indeed, the parent or legal guardian of that child’s profile. You may want to additionally request the parent hold up a valid form of picture ID (driver’s license or passport), obscuring any addresses or identifiers with paper or similar, to verify the picture and name against the person performing consent. You don’t need to know where they live, you just need to verify that the name and photo on the ID match the person you are speaking to.

Record this video statement for your records and store this video recording in a safe place in case you need to recall this video for the FTC. There should be no posting of these videos to YouTube or any other place. These are solely to be filed for consent purposes. Be sure to also notice whether the person with the child is old enough to be an adult, that the ID seems legit and that the person is not that child’s sibling or someone falsifying this verification process. If this is a legal guardian situation, validating legal guardianship is more difficult. Just do your best and hope that the guardian is being truthful. If in doubt, thank the people on the call for their time and then block the subscriber from your channel.
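If you do go the DIY route, it also helps to keep these consent records organized so they can be produced quickly if the FTC ever asks. As a purely illustrative sketch for creators comfortable with a little scripting (the file name and fields below are my own assumptions, not anything COPPA or the FTC prescribes), a simple ledger kept alongside the stored recordings might look like this:

# Illustrative only: a tiny consent ledger. The file name and field names
# are assumptions, not anything COPPA or the FTC prescribes.
import csv
from datetime import date
from pathlib import Path

LEDGER = Path("consent_ledger.csv")  # hypothetical location on your own secure storage

def record_consent(subscriber_handle, parent_name, recording_path):
    """Append one verifiable-consent entry to the ledger."""
    new_file = not LEDGER.exists()
    with LEDGER.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "subscriber_handle", "parent_name", "recording_path"])
        writer.writerow([date.today().isoformat(), subscriber_handle, parent_name, recording_path])

# Hypothetical example:
# record_consent("KidGamer123", "Jane Doe", "/secure/consents/kidgamer123-2019-12-01.mp4")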

If your channel is owned by a corporation, the statement should include the name of the business as well as the channel. Such a statement over a video offers verifiable parental consent for data collection from that child by that corporation and/or the channel. This means that the child may participate in comment systems related to your videos (and any other data collection as necessary). Yes, this is a lot of work if you have a lot of under 13 subscribers, but it is the work that the U.S. Government requires to remain compliant with COPPA. The more difficult part is knowing which subscribers are 12 and under. Google and YouTube don’t provide any place to determine this. Instead, you will need to ask your child subscribers to submit parental consent.

If the DIY effort is too much work, then the alternative is to post a video requesting 12 and under subscribers contact you via email stating their YouTube public subscriber identifier. Offer up an email address for this purpose. It doesn’t have to be your primary address. It can be a ‘throw away’ address solely for this purpose. For any account that emails you their account information, block it. This is the simplest way to avoid 12 and under children who may already be in your subscriber pool. Additionally, be sure to state in every future video that any viewer 12 and under watching the channel must have parental consent or risk being blocked.

Note, you may be thinking that requesting any information from a child 12 and under is in violation of COPPA, but it isn’t. COPPA allows for a reasonable period of time to collect personal data while in the process of obtaining parental consent before that data needs to be irrevocably deleted. After you block 12 and under subscribers, be sure to delete all correspondence via that email address. Make sure that the email correspondence isn’t sitting in a trashcan. Also make sure that not only are the emails fully deleted, but any collected contact information is fully purged from that email system. Many email services automatically collect and store email addresses into an automatic address list; make sure these automatic lists are also purged. As long as all contact data has been irrevocably deleted, you aren’t violating COPPA.
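For creators comfortable with a little scripting, here is a minimal sketch of that purge step, assuming the throw-away consent address lives on a standard IMAP mailbox; the host, account, password and folder names are hypothetical, and any provider-side automatic contact lists still have to be cleared through the provider’s own interface:

# Illustrative only: permanently delete all messages from a throw-away IMAP
# mailbox used solely for consent correspondence. Host, credentials and
# folder names are assumptions; adjust for your provider.
import imaplib

HOST = "imap.example.com"            # hypothetical mail provider
USER = "coppa-consent@example.com"   # hypothetical throw-away consent address

with imaplib.IMAP4_SSL(HOST) as mail:
    mail.login(USER, "app-password-here")
    for folder in ("INBOX", "Trash"):  # folder names vary by provider
        status, _ = mail.select(folder)
        if status != "OK":
            continue
        status, data = mail.search(None, "ALL")      # every message in the folder
        for num in data[0].split():
            mail.store(num, "+FLAGS", "\\Deleted")   # mark for deletion
        mail.expunge()                               # permanently remove marked messages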

COPPA recognizes the need to collect personal information to obtain parental consent:

(c) Exceptions to prior parental consent. Verifiable parental consent is required prior to any collection, use, or disclosure of personal information from a child except as set forth in this paragraph:

(1) Where the sole purpose of collecting the name or online contact information of the parent or child is to provide notice and obtain parental consent under §312.4(c)(1). If the operator has not obtained parental consent after a reasonable time from the date of the information collection, the operator must delete such information from its records;

This means you CAN collect a child’s or parent’s name or contact information in an effort to obtain parental consent and that data may be retained for a period of “reasonable time” to gain that consent. If consent is not obtained in that time, then the channel owner must “delete such information from its records”.

➡️ “How can I protect myself?”

As long as your channel remains on YouTube with published content, your channel is at risk. As mentioned above, there are several steps you can take to reduce your risks. I’ll list them here:

  1. Apply for Safe Harbor with TrustArc’s TRUSTe certification. It will cost you money, but once certified, your channel will be safe from the FTC so long as you remain certified under the Safe Harbor provisions.
  2. Remove your channel from YouTube. So long as no content remains online, the FTC can’t review your content and potentially mark it as “covered by COPPA.”
  3. Wait and see. This is the most risky option. The FTC makes some claims that it intends to prove you had access to, stored and maintained protected data from children. However, there are just as many statements that indicate it will take action first, then request proof later. Gathering such proof will be a difficult burden for most channels. It also means a court battle.
  4. Use DIY or locate a service to obtain verifiable parental consent for every subscriber 12 and under.

➡️ “What went wrong?”

A whole lot failed on Google and YouTube’s side. Let’s get started with bulleted points of Google’s failures.

  • Google has failed to identify children 12 and under to YouTube content creators.
  • Google has failed to offer mechanisms to creators to prevent children 12 and under from viewing content on YouTube.
  • Google has failed to prevent children 12 and under from creating a Google Account.
  • Google has failed to offer a system to allow parents to give consent for children 12 and under to Google. If Google had collected parental consent for 12 and under, that consent should automatically apply to content creators… at least for data input using Google’s platforms.
  • Google has failed to warn parents that they will need to provide verifiable consent for children 12 and under using Google’s platform(s). Even the FTC has failed to warn parents of this fact.
  • YouTube has failed to provide an unsubscribe tool to creators to easily remove any subscribers from a channel. See question below.
  • YouTube has failed to provide a blocking mechanism that prevents a Google Account from searching, finding or watching a YouTube channel.
  • YouTube has failed to identify accounts that may be operated by a child 12 and under and warn content creators of this fact, thus allowing the creator to block any such accounts.
  • YouTube has failed to offer a tool to allow creators to block specific (or all) content from viewers 12 and under.
  • YouTube has failed to institute a full ratings system, such as the TV Parental Guidelines, that sets a rating on the video and provides a video rating identifier within the first 2 minutes, thus stating that a video may contain content inappropriate for certain age groups. Such a full ratings system would allow parents to block specific ratings of content from their child using parental controls. This would allow parents not only to prevent children 12 and under from viewing more mature rated YouTube content, but also to block content for all age groups handled by the TV Parental Guidelines.

➡️ “I’m a creator. Can I unsubscribe a subscriber from my channel?”

No, you cannot. But, you can “Block” the user and/or you can “Hide user from channel” depending on where you are in the YouTube interface. Neither of these functions is available as a feature directly under the Subscriber area of YouTube Creator. Both of these features require digging into separate public Google areas. These mechanisms don’t prevent a Google Account from searching your channel and watching your public content, however.

To block a subscriber, enter the Subscribers area of your channel using Creator Studio Classic to view a list of your subscribers. A full list of subscribers is NOT available under the newest YouTube Studio. You can also see your subscribers (while logged into your account) by navigating to https://www.youtube.com/subscribers. From here, click on the username of the subscriber. This will take you to that subscriber’s YouTube page. From this user page, locate a small grey flag in the upper portion of the screen. I won’t snapshot the flag or give its exact location because YouTube is continually moving this stuff around and changing the flag image shape. Simply look for a small flag icon and click on it, which will drop down a menu. This menu will allow you to block this user.

Blocking a user prevents all interactions between that user and your channel(s). They will no longer be able to post comments on your videos, but they will still be able to view your public content and they will remain subscribed if they already are.

The second method is to use “Hide user from channel”. You do this by finding a comment on the video from that user and selecting “Hide user from channel” using the three-vertical-dot drop-down menu to the right of the comment. You must be logged into your channel and viewing one of your video pages for this to work.

Hiding a user and blocking a user are effectively the same thing, according to YouTube. The difference is only in the method of performing the block. Again, none of the above allows you to unsubscribe users manually from your channel. Blocking or hiding a user still allows the user to remain subscribed to your channel as stated above. It also allows them to continue watching any public content that you post. However, a blocked or hidden user will no longer receive notifications about your channel.

This “remaining subscribed” distinction is important because the FTC appears to be using audience viewer demographics as part of its method to determine if a channel is directing its content towards children 12 and under. It may even use subscriber demographics. Even if you do manage to block an account of a child 12 and under who has subscribed to your channel, that child remains a subscriber and can continue to search for your channel and watch any content you post. That child’s subscription to your channel may, in fact, continue to impact your channel’s demographics, thus leading to possible action by the FTC. By blocking 12 and under children, you may be able to use this fact to your advantage by proving that you are taking action to prevent 12 and under users from posting inappropriate data to your channel.

➡️ “What about using Twitch or Mixer?”

Any video sharing or live streaming platforms outside of and not owned by Google aren’t subject to Google’s / YouTube’s FTC agreement.

Twitch

Twitch isn’t owned or operated by Google. They aren’t nearly as big as YouTube, either. Monetization on Twitch may be less than can be had on YouTube (at least before this COPPA change).

Additionally, Twitch’s terms of service are fairly explicit regarding age requirements, which should prevent COPPA issues. Twitch’s terms state the following about minors using Twitch:

2. Use of Twitch by Minors and Blocked Persons

The Twitch Services are not available to persons under the age of 13. If you are between the ages of 13 and 18 (or between 13 and the age of legal majority in your jurisdiction of residence), you may only use the Twitch Services under the supervision of a parent or legal guardian who agrees to be bound by these Terms of Service.

This statement is more than Google provided for its creators. This statement by Twitch explicitly means Twitch intends to protect its creators from COPPA and any other legal requirements associated with minors or “children” using the Twitch service. For creators, this peace of mind is important.

Unfortunately, Google offers no such peace of mind to creators. In fact, the whole way YouTube has handled COPPA is sloppy at best. If you are a creator on YouTube, you should seriously consider this a huge breach of trust between Google and you, the creator.

Mixer

Mixer is presently owned by Microsoft. I’d recommend caution using Mixer. Because Microsoft allows children 12 and under onto its ID system, it may end up in the same boat as YouTube. It’s probably a matter of time before the FTC targets Microsoft and Mixer with similar actions.

Here’s what Mixer’s terms of service say about age requirements:

User Age Requirements

  • Users age 12 years and younger cannot have a channel of their own. The account must be owned by the parent, and the parent or guardian MUST be on camera at all times. CAT should not have to guess whether a parent is present or not. If such a user does not appear to have a guardian present, they can be reported, so CAT can investigate further.
  • Users aged 13-16 can have a channel, with parental consent. They do not require an adult present on camera. If they are reported, CAT will take steps to ensure that the parent is aware, and has given consent.

This looks great and all, but within the same terms of service area it also states:

Users Discussing Age In Chat

We do NOT have any rule against discussing or stating age. Only users who claim to be (or are suspected to be) under 13 will be banned from the service. If someone says they are under 13, it is your choice to report it or not; if you do report it, CAT will ban them, pending proof of age and/or proof of parental consent.

If someone is streaming and appears to be under 16 without a parent present, CAT may suspend the channel, pending proof of parental consent and age. Streamers under 13 have a special exception, noted [above].

If you’re wondering what “CAT” is, it stands for Community Action Team (AKA moderators) for Mixer. The above is effectively a “Don’t Ask, Don’t Tell” policy. It also means Mixer has no one to actively police the service for underage users, not even its CAT team. It also means that Mixer is aware that persons 12 and under are using Mixer’s services. By making the above statement, it opens Mixer up to auditing by the FTC for COPPA compliance. If you’re considering using Mixer, this platform could also end up in the same boat as YouTube sooner rather than later considering the size of Microsoft as a company.

Basically, Twitch’s Terms of Service are better written for creator peace of mind.

➡️ “What is ‘burden of proof’?”

When faced with civil legal circumstances, you are either the plaintiff or the defendant. The plaintiff is the party levying the charges against the other party (the defendant). Depending on the type of case, burden of proof must be established by the plaintiff to show that the defendant did (or didn’t) do the act(s) alleged. The type of burden of proof is slightly different when the action is a civil suit versus a criminal suit.

Some cases require the plaintiff to take on the burden of proof to show the act(s) occurred. But, it’s not that simple for the defendant. The defendant may be required to bring both character witnesses and actual witnesses which may, in fact, establish a form of burden of proof that the acts could not have occurred. Even though burden of proof is not explicitly required of a defendant, that doesn’t mean you won’t need to provide evidence to exonerate yourself. In the case of a civil FTC action, the FTC is the plaintiff and your channel will be the defendant.

The FTC itself can only bring civil actions against another party. The FTC will be required to handle the burden of proof to prove that your channel not only collected the alleged COPPA protected data, but that you have access to and remain in possession of such data.

However, the FTC can hand its findings over to the United States Department of Justice, which has the authority to file both civil and criminal lawsuits. Depending on where the suit is filed and by whom, you could face either civil penalties or criminal penalties. It is assumed that the FTC will file its COPPA enforcement actions directly as civil suits… but that’s just an assumption. The FTC does have the freedom to request that the Department of Justice handle the complaint.

One more time, this article is not legal advice. It is simply information. If you need actual legal advice, you are advised to contact an attorney who can understand your specific circumstances and offer you legal advice for your specific circumstances.
