# Random Thoughts – Randocity!

## Elizabeth Holmes: Why aren’t more CEOs in prison?

Posted in botch, business, california by commorancy on August 23, 2022

On the heels of Elizabeth Holmes’s conviction on four counts of fraud, a question arises: why aren’t more startup CEOs in prison for fraud? Before we get into the answer, let’s explore a little about Elizabeth Holmes.

Theranos

Theranos was a technological biomedical startup, not unlike so many tech startup companies before it. Like many startups, Theranos began based out of Palo Alto, California… what some might consider the heart of Silicon Valley. Most startups that begin their life in or around Palo Alto seem able to rope in a lot of tech investors and tech money. Theranos was no different.

Let’s step back to understand who was at the helm of Theranos before we get into what technology this startup purported to offer the world. Theranos was helmed by none other than Elizabeth Holmes. Holmes founded Theranos in 2003 at the age of 19, after she had dropped out of Stanford University. In 2002, prior to founding Theranos, Elizabeth Holmes was a chemical engineering student. No, she was not a medical student, nor did she have any medical training.

Clearly, by 2003, she had envisioned grandiose ideas about how to make her way in the world… and it didn’t seem to involve actually completing her degree at Stanford. Thus, Theranos was born after she had gotten her dean, but not the medical experts at the school, to sign off on her blood testing idea.

Medical Technology

What was her medical idea? Holmes’s idea involved gathering vast amounts of data from a few drops of blood. Unfortunately, not everyone agreed that her idea had merit, particularly medical professors at Stanford. However, she was able to get some people to buy into her idea and, thus, Theranos was born.

Moving from the drawing board to a device that actually did what Holmes claimed would pose the ultimate challenge, one that would see her convicted of fraud.

Software Technology

Most startup products in Silicon Valley involve software innovation, with the occasional product that also requires a specialty hardware device to support the software. Such hardware-and-software examples include the Apple iPhone, the Fitbit and even the now-defunct Pebble.

Software-only solutions include such notables as Adobe Photoshop, Microsoft Office and even operating systems like Microsoft Windows. Even video games, like Pokémon Go, can come out of such startups. Yes, these standalone software products do require hardware, but they run on existing devices that consumers either own or can easily purchase. These software startups don’t need to build any specialty hardware.

Software solutions can solve problems for many differing industries including the financial industry, the medical industry, the fast food industry and the law enforcement industry and even solve problems for home consumers.

There are so many differing ideas that can make life much simpler; some of those ideas are well worth exploring. However, like Theranos, some aren’t.

Theranos vs Silicon Valley

Elizabeth Holmes’s idea that a few drops of blood could reveal a lot of information was a radical one that, at her young age of 19, had no existing solution. That solution is what she sought to create with Theranos.

Many Silicon Valley startups must craft a way to solve the problem they envision, whether that’s accessing data faster and more reliably or creating a queuing system for restaurants using an iPhone app.

It’s not so much the idea, but the execution of it. That’s where the CEO comes into play. The CEO must assemble a team capable of realizing and executing the idea they have in their head. For example, is it possible to create a device to extract mountains of data from a few drops of blood? That’s what Elizabeth Holmes was hoping she could create. It was the entire basis for the creation of Theranos.

Investors

To create that software and device takes money and time: time to develop, and money to design and build the necessary devices through R&D. A startup must also hire experts in various fields who can step into the role and determine what is and isn’t possible.

In other words, a CEO’s plan is “fake it until you make it”. That saying goes for every single startup CEO who’s ever attempted to build a company. Investors see to it that there’s sufficient capital to make sure a company can succeed, or at least give it a very good shot. Early investors include seed and angel investors, where the money may have few if any strings attached. Later-stage investors, such as venture capitalists, attach heavy strings to their money in the form of company ownership exchanged for capital.

Later-stage investors are usually much more hands-on than many angel or seed investors. In fact, sometimes late-stage investors can be so hands-on as to pivot a company in unwanted directions, away from the original vision. This article isn’t intended to become a lesson in how VCs work, but suffice it to say that they can become quite important in directing a company’s vision.

In Theranos’s case, however, Elizabeth Holmes locked out investors by creating a …

Black Box

One thing that Silicon Valley investors don’t like is a black box. What is a black box? It’s a metaphor for a wall erected between a company’s product and any investors involved. A black box company is one that refuses to share how its technology actually works. Many investors won’t invest in such “black box” companies. Investors want to know how their money is being spent and how a company’s technology is progressing. Black boxes don’t allow for that information flow.

Theranos employed such a black box approach with its blood analyzer device. It’s actually a wonder Theranos got as much investor support as it did, particularly with a CEO that young and obviously inexperienced insisting on a black box approach. That situation is ripe for abuse. At 19, how effective could Elizabeth Holmes be as a CEO? How trustworthy and responsible could a 19-year-old be with millions of dollars of funding? How many 19-year-olds would you entrust with millions of dollars after they had dropped out of college? For investors, this should have been a huge red flag.

There’s something to be said for the possibility of a wunderkind in Elizabeth Holmes, except she hadn’t proven herself to be a prodigy while attending Stanford. Even the medical experts she had consulted about her idea clearly didn’t think she had the necessary skills to make her far-fetched idea a reality. A chemical engineering student hopping into the biotech field to create a small, almost portable blood analysis machine, at a time when commercial blood analysis machines were orders of magnitude bigger and required much more blood volume? Holmes’s idea was fantastical and clearly unrealistic.

However, Theranos’s black box, dubbed the Edison or miniLab, was a small piece of equipment about half the size of a standard tower computer case and included a touch screen display and blood insertion port.

Unfortunately, this black box was truly a black box in every sense of the word, including its actual case coloring. Not only were the Edison’s innards kept a strict company secret, its testing methodologies were also kept secret, even from employees. In other words, no one knew exactly how the Edison truly worked… not even the engineers Theranos hired to try to make Holmes’s vision a reality.

Theranos and Walgreens

By 2013, Theranos had secured a contract with Walgreens for Walgreens to use Theranos’s Edison machine to test blood samples from patients. Unfortunately, what came of those tests was less than stellar. It’s also what led to the downfall of Theranos and, ultimately, Elizabeth Holmes and her business partner, Sunny Balwani.

The engineers Theranos hired knew that the Edison didn’t work, even though they hadn’t been privy to all of its inner workings. Instead, what they saw were those tiny vials of blood being run on larger third-party blood testing machines like the Siemens Advia 1800.

When the engineers Erika Cheung and Tyler Shultz confronted Holmes and Balwani about the Edison machine’s lack of functionality and about being asked to falsify test results, they were given the cold shoulder. Both Cheung and Shultz decided to blow the whistle on Theranos’s fraud, and both left Theranos after whistleblowing to start their own companies.

Ultimately, Theranos had been using third-party diagnostic machines in lieu of its own Edison machine. The Edison clearly didn’t function properly, and neither did the third-party systems when fed the tiny amount of blood Holmes claimed was required.

This left patients at Walgreens with false test results, requiring many patients to retest with another lab to confirm the validity of Theranos’s results.

Elizabeth Holmes Fate?

In January of 2022, Elizabeth Holmes was found guilty of four counts of fraud. However, the jury acquitted her of all counts involving patient fraud… even though the patients were, in fact, hurt the most by Theranos’s fraud. The guilty verdicts covered only the defrauded investors, not the patients who may have been irreparably harmed by her machine’s failure to function.

Why aren’t more CEOs in prison for fraud?

While the Theranos and Elizabeth Holmes case is somewhat unique among Silicon Valley startups, it is not completely unique. Defrauding investors is a slippery slope for Silicon Valley. Once one company is found perpetrating fraud on investors, it actually opens the door up to many more such cases.

Taking money from investors to attempt to bring a dream to life is exactly what CEOs do. However, Theranos (and Elizabeth Holmes) between 2003 and 2016 couldn’t produce a functional machine.

Most CEOs, given enough time and, of course, money, can likely produce a functional product in some form. Whether that product resembles the original idea that founded the company remains to be seen. Some CEOs pivot a year or two in and change directions. They either realize their initial idea wasn’t unique enough or that there would be significant problems bringing it to market. They then change direction and come up with a new idea that may be more easily marketable.

Startups that Bankrupt

In the wake of Theranos, other startups that go bankrupt could see their CEOs held accountable on fraud charges, just like Ms. Holmes. The Elizabeth Holmes case has now set that precedent. Taking investor money may no longer be without legal peril for company executives. If you agree to bring a product to market and are given investor capital to do it… and then you fail and the company folds, you may find yourself in court on fraud charges.

Silicon Valley investors do understand that the odds of a successful startup are relatively low… which is why they typically invest in many at once. The one that succeeds typically more than makes up for the others that fail. If more than one succeeds, even better. It’s called “playing the odds”. The more you bet, the better your chances of a win. However, playing the odds won’t stop investors from wanting to recoup losses for money given to failed startups.
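The portfolio logic behind “playing the odds” can be sketched with a short simulation. Every number below (success rate, return multiple, stake) is a hypothetical assumption chosen purely for illustration, not a real venture figure:

```python
# Illustrative sketch of "playing the odds" in startup investing.
# All rates and multiples are hypothetical assumptions, not real VC data.
import random

random.seed(42)

SUCCESS_RATE = 0.1   # assume 1 in 10 startups succeeds
WIN_MULTIPLE = 30    # assume a winner returns 30x the stake
STAKE = 1_000_000    # units invested per startup

def portfolio_return(num_bets: int) -> float:
    """Simulate spreading the same stake across num_bets startups.

    Returns the multiple earned on total invested capital.
    """
    invested = num_bets * STAKE
    returned = sum(
        STAKE * WIN_MULTIPLE
        for _ in range(num_bets)
        if random.random() < SUCCESS_RATE
    )
    return returned / invested

# A single bet is all-or-nothing; a large portfolio converges
# toward the expected multiple (0.1 * 30 = 3x) with less variance.
for bets in (1, 10, 100):
    print(f"{bets:>3} bets -> {portfolio_return(bets):.2f}x")
```

The expected return per bet never changes; what a bigger portfolio buys the investor is a lower chance of ending at zero, which is why losses on individual failed startups still sting enough to pursue.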

The Elizabeth Holmes case may very well be chilling for startups. It’s ultimately chilling to would-be CEOs who start out with dollar signs in their eyes, only to find their startup out of cash and closing down in failure months later.

CEOs and Prison Time

Elizabeth Holmes should be considered a cautionary tale for all would-be CEOs looking for some quick cash to get their idea off the ground. If you do manage to secure funding, you should be cautious with how you use that cash. Also, always, and I mean ALWAYS, make sure the progress in building your idea is shown to your investors regularly. Let them know how their investment money is being used. When software is available for demonstrations, show it off. Don’t hide it inside of a black box.

Black boxes have no place in startup investing. As for Elizabeth Holmes, she faces up to 20 years in prison on each count. Her sentence has yet to be handed down. It’s possible she may be given the possibility of parole or a reduced sentence for good behavior… all of which is up to the sentencing judge.

Elizabeth Holmes opened this door for startup CEOs. It’s only a matter of time before investors begin using this precedent to hold CEO founders to account should an investment in a startup fail.


## No Man’s Sky Review: What are Expeditions?

Posted in botch, video game, video game design by commorancy on July 27, 2022

What exactly are No Man’s Sky Expeditions? Simply put, they are extended gameplay tutorials. Let’s explore.

Early Tutorials

Sometime around the year 2000, game designers found that it was simpler (and cheaper) to include a small intro tutorial in the game than to produce an expensive and time-consuming instruction manual. The purpose of a tutorial is to show the gamer how to use basic game mechanics to accomplish various tasks. It’s easier to do this interactively than to try to explain it in a written manual, which ultimately nobody really reads.

In games like Call of Duty, these tutorials show you how to use the weapons, learn the controller button layout, perform stealth moves and so on. These tutorials began as simple introductory systems and typically occur outside of the game’s normal play mode. Some games force tutorials to be completed, preventing you from progressing into the game’s normal play mode until you complete all of the necessary tutorials.

However, game developers quickly realized the problem with these locked tutorials. Gamers weren’t happy with a forced intro system that tested their patience before they could actually begin playing the main game. Gamers simply wanted to get into the meat of the gameplay immediately and couldn’t while locked into completing a silly, but long, tutorial session.

Worse, many of the tutorials teach situations that never materialize in the game. It’s really frustrating to be taught a specific technique that’s never useful once you’re actually in the game. The Titanfall game is guilty of forcing such a tutorial system, which also taught techniques never used in the game. What was the point in teaching a useless technique?

Tutorials Today

Today, game developers still use these in-game tutorial systems in various forms. Rarely are these tutorials forced, like in the above example. Many games allow you to skip the tutorials entirely, but they allow you to revisit any of the tutorials later if you want to learn a specific move, understand a mechanic better or simply hone skills around specific mechanics.

The best of all worlds is when a game developer chooses not to force a tutorial, but allows the player to skip them and revisit them later if needed. If tutorials are required, then the game developer should offer up a reward for completing them… as is done in the No Man’s Sky Expeditions.

Since most games have settled on standard controller layouts and many use similar mechanics, most gamers can easily fall right into a game within a few minutes, being required to learn only a few new concepts specific to that game.

No Man’s Sky has taken Tutorials to a New Level (aka Ways to Improve)

With the introduction of the Expedition idea in 2021, Hello Games has turned long, but very basic, tutorials into a gameplay mode, for better or worse. It is a gameplay mode that sees gamers earn rewards for all of their other game saves, but only after enduring very basic tutorial concepts.

Personally, I’d have preferred if Expeditions could be played on our existing game saves rather than cluttering up our game with a bunch of new saves, each used for a separate expedition. It’s a waste of space on our PC or console… space that we can’t get rid of easily because those saves earned the rewards.

More than this, I’d like to see expeditions offer us more than simple, basic tutorials. Instead of teaching us basic concepts like using a portal, flying our starship, using the hyperdrive on our freighter or installing technology modules, I’d prefer to see much more advanced features added to the expeditions, features that eventually get added into our normal play after the expedition is over. Basically, Hello Games should use expeditions as a preview mode for new features that eventually get rolled up and unlocked for our regular saves. As for these tutorials, the vast majority of players aren’t playing No Man’s Sky for the first time. We already understand all of these basics in abundance. That we must endure a somewhat condescending tutorial gameplay mode just to get some very basic rewards is both time-wasting and insulting.

I’d have preferred a system that turns No Man’s Sky on its head, like allowing us to test out new features before they’re fully released to all game save modes. As an example, pick a planet and set up a PVP area. Then flatten that area and allow players to use a new unique vehicle to enter a new arena tournament. This allows full-on competitive PVP on a specific planet. More than this, allow normal-save players (not part of the expedition) to visit and spectate if they choose not to play the expedition. Simply spoon-feeding us basics to collect a few low-level rewards seems mostly pointless. Instead, design brand new creative uses of the game engine, worlds and environments… then allow players to use those new areas to complete an expedition. Better, use expeditions as a pre-release area to entice gamers to see what’s new and what’s coming.

Another example: most worlds have large cave systems. Enable some kind of “egg hunt” in the caves of a specific world. Once you collect all of the necessary items and turn them in, you get your expedition reward. As it is now, it’s almost impossible to locate caves, so this might require a new Multitool scanner technology that allows searching for hollowed areas underground. Such a new Multitool feature would be an excellent use of an expedition, letting Hello Games test the tool and get player feedback.

Now, I’m not advocating for expeditions to become strictly beta test areas, but pre-releasing fully working, but unreleased ideas allows Hello Games to understand if a feature is a hit or a bomb.

No Man’s Sky — 2016 Version

When No Man’s Sky (NMS) arrived in 2016, it had no tutorial. Gamers had to learn to play by doing. That’s fine, too. I find that tutorial systems take some of the fun out of learning the mechanics of a game and how far you might be able to take those mechanics. Tutorials teach you a straight-and-narrow approach for an individual mechanic, but they don’t at all teach you how to use those mechanics in creative and unique new ways… ways that the developers might not have intended or, indeed, understood.

No Man’s Sky Expeditions

Let’s get into the meat of this review. What exactly is an expedition? To make an analogy, an Expedition is to No Man’s Sky as a Season is to Fortnite… mostly. More than this, an expedition is simply an extended tutorial for No Man’s Sky.

There are a number of pluses and minuses to expeditions, and that’s what this article intends to uncover. Before we get into the advantages and disadvantages, let’s dig deeper into what an expedition actually is.

Extended Tutorial

Yes, a No Man’s Sky expedition is effectively an extended tutorial. That’s pretty much it in a nutshell. Hello Games takes you on a long-winded, convoluted journey by teaching you how to use, obtain and unlock features in No Man’s Sky in each expedition in an extremely detailed tutorialized way. Along the way, this very long extended tutorial will unlock a few exclusive items such as decals and decorations and an occasional piece of technology. These are both a plus and a minus.

If you’re thinking you would like to jump into the latest expedition, understand that it really only serves to teach you, in an almost condescending way, how to do extremely basic things in the game. When you complete a single phase milestone, the game unlocks certain rewards.

For example, in Expedition 8 (the current expedition as of this article), completing the milestone of visiting another player’s freighter gives you a full Atlas Pass set, 5 million units and all of the Portal glyphs. Nevermind that you have to find a random gamer, team up with them in a group, then hop aboard their freighter. Other players’ freighters don’t appear in multiplayer; they only appear after you’ve explicitly teamed up with another player. A hassle, to say the least. Why HG couldn’t have improved the game to allow all active gamers’ freighters to be visible and visitable without teaming up is unknown. That would have been an exceptional improvement to the game.

Almost all of the items that a phase milestone unlocks can usually be had without it. It’s just that if you perform a specific milestone, you’ll unlock them more quickly and easily, in one step. The exceptions here are the expedition-exclusive rewards. You can’t easily see which ones these are, but you’ll know if you’ve played No Man’s Sky before. In effect, think of an expedition as a way to cheat your way through the game in just a few weeks simply by following the extended tutorial. By cheating, I mean that you get “sets” of basic items unlocked, like the Portal glyphs, simply by doing a fairly simple thing. In a “Normal” game, it would take you way longer to get those glyphs.

On the flip side, however, the “new” exclusive Expedition rewards require extensive hoop-jumping before the game unlocks them. If it’s an old mechanic, one thing unlocks the entire set. If it’s a new mechanic, expect to spend hours and hours jumping through burning hoops to unlock a couple of silly decals.

As mentioned just above, each phase milestone issues various rewards; most are standard, while a few are exclusive to the expedition. With exclusives, you can only get them by performing a milestone. With the basics, you can get them by performing the milestone (the fastest way) or through standard in-game play (slower). While this is mostly an advantage, it honestly teaches the gamer the wrong way to get the basics if you intend to play a “Normal” save. In other words, to get the basic rewards in a non-expedition save, you’ll have to do a whole lot more work by following the originally designed and intended method. There are no “expedition shortcuts” in a normal game.

Likewise, the primary disadvantage of expeditions is that they offer fairly crappy exclusive rewards. For example, in Expedition 7, the final “big” reward was the Living ‘Leviathan’ Frigate for your frigate fleet. While this frigate is unusual in its looks, it’s really nothing special as a frigate. It doesn’t have any features more unique than any other frigate you can find in the game. It’s a living frigate, but beyond that skin, it offers little unique benefit. In fact, it’s not even a great frigate in and of itself. I have far better frigates in my fleet than this “reward” frigate.

What that means, then, is spending 6 weeks working your way through 40 different milestones, each taking a substantial amount of time to complete, only to be rewarded with something that isn’t substantially better than what you can buy with a couple million units. You can spend 6 weeks getting this “reward”, or you can spend 6 weeks or less gathering enough units to buy several regular frigates, not just one.

That’s not to say that the unique rewards, such as decals, aren’t somewhat interesting, but they may not be worth spending 6 weeks to complete an expedition that’s effectively a basic, but very extended, tutorial.

Advantage: Rewards collected on other saves

Since the introduction of the Quicksilver shop in the Anomaly station, Hello Games has added the ability to collect expedition rewards for all of your game saves. So long as you play through an expedition, it will unlock unique rewards during that expedition. Thus, your other game saves can also collect those rewards, such as the Leviathan Frigate. That’s cool and all. Again, is it really worth spending 6 weeks just to unlock this reward for another game save?

To start any expedition, you must create a brand new game save. This means starting No Man’s Sky over from scratch with a minimally configured Exosuit, Multitool, Starship and, if given, a Freighter. It also means spending loads of time collecting resources, units and nanites all over again. It further means collecting salvaged data so you can, once again, unlock all of the unlockables at the Anomaly station, and spending loads of time finding and unlocking features in the Exosuit, Starship, Multitool and Freighter.

In the case of Expedition 8, this “tutorial” is all about the newly introduced Freighter features, which I’ll discuss more below. This means that Expedition 8 is effectively a very long tutorial teaching you how to unlock rooms in your freighter and build them. As a tutorial, it’s really basic. There’s also a huge disadvantage to Expedition 8 in and of itself, which I’ll discuss below.

After about the 3rd time going through an Expedition, the entire having-to-start-over thing gets very old. Being plopped down in a system with hazardous planets and being forced to forage for resources on these annoying planets is at once time consuming and very, very pointless. Yet if you choose to join an Expedition, that’s where you start.

Hello Games needs to figure out a way for us to import our character and at least one starship from a previous save into an Expedition save, so we can start off with our suits and ships unlocked. Unless the goal is to tutorialize our way through the Exosuit and the Starship, give us the ability to import the things that make no difference to the Expedition. Why make us continually start over from the beginning when it isn’t needed or relevant? It’s pointless and a severe disadvantage of expeditions. It also makes an expedition take far longer than necessary.

When starting over from scratch, that means minimal slots unlocked on the Exosuit, Multitool, Starship and Freighter… with the Freighter being the most difficult to unlock and the Exosuit being the most expensive. You can wait through achieving milestones to unlock some Exosuit and Multitool slots or you can buy your way into unlocking them.

During Expedition 8, I needed my Exosuit slots unlocked much, much faster than the milestones were offering. There were simply not enough slots given on the Freighter, Starship or Exosuit to proceed. I decided not to wait and paid 6 million units for 72 Drop Pod Coordinate Data, which I randomly found at a single Trade Terminal vendor.

I’d already paid to unlock all of the “General” slots by visiting space stations and the Anomaly station. These slots are relatively cheap, with the most expensive costing around 220,000 units. The only slots left locked were the incredibly expensive “Cargo” slots. I could spend a million or more units to unlock one slot at a time (after the first two or three), or I could pay 6 million units and chase down Drop Pods on a planet. I chose the latter.

I found a suitable anomaly “mushroom” planet with perfect weather and few sentinels. Then I went to work with my trusty Signal Booster. Just craft this bad boy and drop it on the ground, then use it to locate a Drop Pod, consuming one purchased Drop Pod Coordinate Data item each time. Rinse and repeat. I did this maybe 45 times, or however many slots were locked. It’s also way faster than hyper-traveling from system to system to unlock slots at new space stations.

After I finished unlocking all Cargo slots, I proceeded to unlock the remaining Technology slots. That left me with 26 unused Drop Pod Coordinate Data items, which I sold back for 3 million units. Unlocking almost every single Cargo slot and half of my Technology slots cost me around 3 million units all told… way, way cheaper than purchasing Cargo slots from a bunch of newly discovered space stations. In fact, had I paid at space stations, I’d have spent maybe 20-50 million units in total to unlock all of the Cargo slots. No. Using Drop Pod Coordinate Data is the cheapest and fastest way to unlock Cargo slots… and it can be done on one single docile planet. In my case, unlocking that number of slots took me around 2 hours of real time. It’s pretty monotonous and repetitive, but once it’s done, you don’t need to do it again.
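The back-of-the-envelope cost comparison from this playthrough can be sanity-checked with a quick sketch. The prices here are the ones quoted in this particular run and will vary by save and vendor; the per-slot station price is the rough figure mentioned above, not an exact in-game constant:

```python
# Sanity check of the Drop Pod slot-unlock economics described above.
# All prices come from this one playthrough and may vary per save/vendor.
PODS_BOUGHT = 72
BUY_COST = 6_000_000       # units paid for 72 Drop Pod Coordinate Data
PODS_LEFT_OVER = 26
RESALE_VALUE = 3_000_000   # units recouped by selling the 26 spares

pods_used = PODS_BOUGHT - PODS_LEFT_OVER
net_cost = BUY_COST - RESALE_VALUE
cost_per_slot = net_cost / pods_used

print(f"Slots unlocked via Drop Pods: {pods_used}")
print(f"Net cost: {net_cost:,} units (~{cost_per_slot:,.0f} units/slot)")

# Versus buying Cargo slots at space stations, where later slots can
# run 1,000,000+ units each (assumed rough figure from the text above).
station_estimate = pods_used * 1_000_000
print(f"Station route estimate: at least {station_estimate:,} units")
```

Run as written, this works out to 46 slots for a net 3 million units, roughly 65,000 units per slot, against an estimated 46+ million via space stations, which is consistent with the 20-50 million range quoted above.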

Expedition 8: Polestar

Let’s review this latest expedition. With Expedition 8: Polestar, Hello Games has introduced some questionable new additions to the Freighter that really offer no added value to the game or to the freighter itself.

To reiterate, these new freighter room additions really add no substantial value to the overall game. In fact, the building additions dumb down parts of the game so much as to take the game in a completely wrong direction.

Building and Freighters

Building in No Man’s Sky has always been about using a construction kit and then placing specific technology objects wherever the player chooses. It’s a creative and rewarding endeavor because the player can use these objects in creative and interesting ways. The construction kits offer basic room designs that can be placed in unique layouts, including upper and lower floors.

Unfortunately, Hello Games has taken a huge step backwards with this latest freighter update. Gone is the basic room construction kit in the freighter; in its place we get dumbed-down, single-purpose rooms. Worse, these single-purpose rooms are unconfigurable, meaning you must plop each room down whole, as is. Gone is the empty room where you could place technology objects creatively. Now it’s just a single-purpose, pre-built room. Nothing creative about that at all.

Worse, Hello Games has decided to force the player to unlock these rooms from the freighter configuration area using Salvaged Frigate Modules (a form of in-game currency). Unfortunately, bar none, these modules are the single most difficult items (and currency) to locate in the game world. The only way to obtain them is through random spawns. The chance of one spawning is probably 1 in 50, and the odds may be even longer than that. In short, these rarely spawn.

They can’t be purchased at all with any other more abundant currency, such as units or nanites. Nope. You must spend loads of time grinding in and around places that may or may not randomly spawn them.

Prior to this latest update, only a limited number of unlocks required the Salvaged Frigate Module currency. Since this update, many, many new items require them. Yet Hello Games has not improved the spawn rate of these modules or made them easier to locate, making this Freighter update (and this expedition) at best a chore to complete. Worse, few of the expedition rewards offer Salvaged Frigate Modules. When they do, it’s between one and three at most, while the game ultimately requires around 15-20 of them to unlock all of the rooms, not counting the need for at least that many again to unlock hyperdrive add-ons and other useful freighter features.

When you’re playing outside of an expedition, you could spend several weeks and chase down only a handful of Salvaged Frigate Modules. Yes, they’re that rare.
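To put that grind into numbers, here is a minimal sketch using the rough 1-in-50 spawn odds and module counts estimated above (these are this article’s estimates, not confirmed drop rates from Hello Games):

```python
# Expected number of spawn opportunities needed to collect a target
# number of Salvaged Frigate Modules, assuming each opportunity is an
# independent 1-in-50 chance (a geometric distribution per module,
# so the expected attempts per module is 1 / spawn_chance).

def expected_attempts(modules_needed: int, spawn_chance: float) -> float:
    """Expected spawn checks to collect modules_needed modules."""
    return modules_needed / spawn_chance

# ~20 modules just for the rooms at 1/50 odds -> ~1,000 spawn opportunities
print(expected_attempts(20, 1 / 50))   # 1000.0

# Roughly double that again for hyperdrive add-ons and other features
print(expected_attempts(40, 1 / 50))   # 2000.0
```

Even at these back-of-envelope odds, unlocking everything means encountering on the order of a thousand or more random spawn chances, which is consistent with spending weeks to find only a handful.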

Hello Games did a complete disservice to us with this update. Not only are these rooms almost 100% pointless to unlock, as they don’t increase the freighter’s usefulness (thus wasting Salvaged Frigate Modules), but the game itself is worse because of the new dumbed down building system combined with the need for even more Salvaged Frigate Modules to unlock these new features.

Overall, Endurance (the name of the update) is probably one of the crappiest updates Hello Games has dropped for No Man’s Sky.

New Rooms vs Old

Why is it so crappy? Because these new rooms don’t play well with one another or, more importantly, with the older legacy rooms. When you put these new rooms side by side with an older room, there are too many glitches and visual problems. Sometimes, the game leaves huge gaping holes. Yes, it’s that bad.

It’s also crappy because of the dumbed down building. In a game that includes building features, we don’t want single use rooms. We want a construction kit that offers creative building options. By dumbing down the construction in this way, Hello Games shows it clearly doesn’t understand what we as gamers want from a building mode. Though Hello Games was on the right track with the newest construction kit add-on for bases, these new one-use rooms in the freighter are a huge step backward for the game.

Freighter Improvement?

That’s the question: does this update greatly improve the freighter? No. Why? The freighter’s two main purposes are 1) being a starship garage and 2) launching frigate missions. That’s really the entire purpose of a freighter. With this update, nothing’s changed. The freighter’s usefulness is still limited to those two purposes. It’s far easier to equip your Starship for long distance hyperdrive travel using easier-to-obtain Nanites than to chase down rare Salvaged Frigate Modules only to get maybe half the distance with a freighter. No, the way to hyperdrive travel long distances in No Man’s Sky is still by using a starship. You simply cannot equip a freighter to achieve the hyperdrive distances that a starship can when properly equipped with technology modules. Freighters still do not offer enough technology modules in this or any other area.

With Endurance, we are once again forced to run around re-buying and re-unlocking all of the technology we had already spent weeks unlocking for base building. Instead, Hello Games has firmly separated base building from freighter building to the detriment of No Man’s Sky.

Freighter and base building should remain interlocked using the exact same features. If there’s a zone where you can build, all building construction tools should be available in every location. Instead, now we have these stupid one-use rooms that only work on a freighter and which also make zero sense. This change effectively takes the fun of building out of the game.

Base Building

The bigger problem is that, eventually, Hello Games will pull these single purpose rooms down into planetary base building. It doesn’t make sense to support two completely separate build systems. Eventually, Hello Games will want to marry this newer room based build system onto all build zones. What that means is that eventually base building will inherit this single use room concept, doing away with all of the current structures and technology by replacing them with these insipid all-in-one rooms.

For a game with construction capabilities, this really takes No Man’s Sky too far backwards. If you’re planning to take building back this far and dumb it down this much, then simply take building out of the game entirely. There’s no purpose in offering single purpose rooms and calling it “building”. Plopping down a handful of single purpose rooms is not considered in-game building. There’s nothing at all creative about that. Creation comes from construction kits, not from single pre-configured rooms.

This idea was a huge mistake, and it is badly implemented as well. In short, it’s an extremely disappointing move for No Man’s Sky.

Should I play No Man’s Sky Expeditions?

It depends. For Expedition 8, I’d suggest not. The freighter additions are ultimately pointless and useless, with one exception: the Singularity Drive. This drive alone might be worth playing through the expedition to get. Unfortunately, to get this drive, you have to play through Phases 1-4 and parts of Phase 5 to unlock it. There are still questions surrounding this drive, though. Since it’s a Singularity Drive, it likely uses the same jump mechanism as a black hole. When you traverse a black hole in No Man’s Sky, technology ends up breaking once you emerge.

This means repairing technology after using a black hole and likely after using the Singularity Drive. I’ve stopped using this mode of travel because 1) it’s too random, 2) it doesn’t really get that much closer to the center and 3) technology breaks after using it. Traveling through a black hole is like circling a drain. You pop a teeny bit closer to the center, but you’re still just circling. It takes hundreds of hops through a black hole to get you even the tiniest bit closer to the center. It’s really, really pointless and it means repairing lots of technology with wiring looms along the way.

Outfitting your starship with the longest light year jump distance is really the best way to get to the center of the galaxy. It also avoids the broken technology problem each time you jump. I really despise it when Hello Games insists on breaking technology on the ship after using a jump technology. It’s such a complete waste of time and resources.

Also keep in mind that the Polestar expedition is entirely designed as a tutorial to teach you about these pointless freighter add-ons. Since the freighter itself isn’t drastically improved by these additions, I can’t recommend playing Polestar. Play if you like, but don’t expect great things if you do.

I also find that the rewards from the expeditions don’t match the time and energy expended to get through the milestones. While the rewards are “nice to haves”, they’re not ultimately required to play the game. That’s partly because Hello Games knows there’s no other way to get these rewards other than completing an expedition that eventually ends and may never return.

That means that if you never play a single expedition, you’re locked out of those expedition rewards. You can’t unlock them in any other way than by playing the expeditions. Ultimately, that means the rewards offered by playing an expedition must remain inconsequential to any other game saves you may already have. This is why most of the rewards consist of posters or decals or other cosmetic items to decorate your base, with only one or two rewards being even moderately functional items.

Completed

[Updated Aug 6, 2022] I’ve recently completed Expedition Polestar. I didn’t complete the “Optional” milestone because it is a pointless multiplayer exercise that does nothing to help this expedition succeed, despite its rewards of 5 million units, the unlocking of 16 glyphs and the Atlas Pass set. The extra units are actually the most useful portion of this milestone, but units can be had in so many better ways than this. Unlocking the portal glyphs and the Atlas passes is entirely pointless as they are unneeded.

After completing Expedition Polestar, there are still a large number of unresolved problems. The first problem is that while Starship Hyperdrive plans are unlocked, the red, green and blue drives are not! This means your Starship is limited to yellow star systems only, forcing you to unlock all of the drives for the freighter instead?! This also means that even though you have completed the expedition, the game is still nowhere near a “normal” save game mode. Second, and more importantly, the base computer remains locked with no way to unlock it. This precludes any base building after completing Expedition Polestar. Worthless!

I don’t know if the failure to unlock these items was a simple oversight on the part of Hello Games or if it was intentional. Either way, the leftover save is pointless. Not only can you not build bases after you’ve completed this expedition, you can’t mine for resources on planets. This means you’re stuck using your crappy multitool alone to gather resources from resource piles on planets. A complete waste of time and effort.

Some may think that these plans might get unlocked after the expedition clock times out weeks later, but I doubt it. If it hasn’t unlocked by the end of the expedition as part of the expedition, it’s never likely to unlock for that expedition save.

If you’re thinking of playing this expedition with the intent you can continue to use this game save after, you likely won’t want to. Even the biggest reward, the Singularity Drive, is more of a gimmick than it is useful. I wouldn’t suggest playing this expedition strictly for the Singularity Drive. It’s really not worth it for that. In fact, it seems Hello Games has been giving us ever crappier rewards (and saves) for each successive expedition.

To be honest, this is not only the single crappiest update for No Man’s Sky, Expedition Polestar is the single crappiest expedition to date. There’s nothing really of value to be had from these Freighter additions. In fact, these additions are so bad as to take the game back to a worse state than before the update… not just from a bugs perspective, but also from the single-purpose room building that Hello Games has now foisted onto us. There’s really very little that’s redeeming about this expedition overall.

Recommended: No
Stars: 1.5 out of 5
Play Value: 1.5 out of 5
Overall Rating: 1.5 out of 5
Overall Comment: Don’t play expeditions unless you really enjoy condescending tutorials that take forever and offer mostly pointless rewards.


## Are Nielsen Ratings Accurate?

Posted in botch, ratings, television by commorancy on June 9, 2022

This article seeks to show how the way Nielsen Media Research chooses its ratings families may alter the accuracy of Nielsen’s ratings. More than that, it seeks to uncover just how antiquated and unreliable Nielsen’s household rating system actually is. Let’s explore.

What is Nielsen?

I’ll give a small synopsis here, but Wikipedia does a much better job of describing who and what Nielsen Media Research (one of this company’s many names) is. For the purposes of this article, I’ll refer to Nielsen Media Research simply as Nielsen.

Nielsen is a research group that seeks to identify, among the other avenues of information it gathers, how viewers watch television. During the 70s, this was the primary means by which TV executives learned the ratings fate of their television programs.

How does Nielsen work?

Nielsen still relies on its Nielsen households to provide the vast majority of its television ratings information. It does this by sending unsolicited mail to households around the country, attempting to entice a household into becoming a Nielsen household. Carrying this moniker means the family residing at that household must do certain things not only to participate in the Nielsen program, but must also provide feedback to Nielsen about its viewing habits.

How does Nielsen collect its ratings information?

According to Nielsen’s own site, it says the following:

To measure TV audiences and derive our viewing metrics (i.e., ratings, reach, frequency), we use proprietary electronic measuring devices and software to capture what content, network or station viewers are watching on each TV and digital devices in the homes of our Nielsen Families. In total, we measure hundreds of networks, hundreds of stations, thousands of programs and millions of viewers. In the U.S., electronic measuring devices and millions of cable/satellite boxes are used to provide local market-level viewing behaviors, enabling the media marketplace to gain a granular view of TV audiences.

What that means is that, as a Nielsen household, Nielsen will send you a device and/or require you to install certain software on your existing devices to “measure” your viewing habits. In other words, they spy on what you’re watching, and the device reports back to Nielsen what you specifically watched and for how long. For example, Nielsen might install software onto your smart TV device, Roku, TiVO, Apple TV or possibly even your cable TV provider’s supplied box.

Nielsen may even be willing to supply you with their own device, which you will place in-line with your existing TV and devices. It does say “devices and software”, meaning one or both can be used.

Rural vs Urban

Typically, larger urban city areas tend to vote Democrat more often than Republican. These urban areas are also typically more densely populated. On the flip side, rural areas tend to vote Republican more often than Democrat. Why is this information important? It’s important to understand these facts because it can drastically alter the accuracy of Nielsen’s ratings. Let’s understand why.

For participating as a Nielsen household, you’re given a stipend. In other words, you’re paid for this service. Let’s understand more about this pay. You’re paid around $10 a month to participate. If you remain a Nielsen household for a certain period, around 6 months, Nielsen will pay you a bonus. All told, for 6 months of service, a Nielsen household will receive around $200.

Here’s where the Urban vs Rural comes into play. Rural areas tend to be more depressed economically. Meaning, income is generally less and the need for extra money is, therefore, higher. Urban areas tend to boom more economically meaning the need for extra money is, therefore, lessened.

If a rural household receives a card inviting them to become part of the Nielsen family, explaining all of the “benefits” (including the pay), rural viewers are much more likely to take Nielsen up on their pitch. It seems easy enough to get paid simply for watching TV. On the other hand, urban areas are less likely to take Nielsen up on their offer not only because the pay is so low, but because urban viewers are much more savvy around their privacy.

Who would intentionally invite a company into their household to spy on them, even for money? One might say, well, there’s Alexa. But Alexa offers benefits to the user far greater than what Nielsen provides. Nielsen provides spying for cash. Alexa offers app features, smart home features, music, calling features, recipe helpers, and the list goes on. Nielsen’s devices and software don’t provide any such extended features.

Nielsen’s spying is one-track and only helps out TV executives. I might add that those TV executives PAY Nielsen to gain access to this information. Which means that if you’re a Nielsen household, you’re getting paid out of money collected from TV executives. In effect, it is the TV executives who sign the Nielsen paycheck that you receive. I digress.

Random Solicitation

Make no mistake, Nielsen solicits households through a random mail selection process. It sends pitch cards out to inform and solicit households to participate. They may even include a crisp $1 bill to entice the household. Nielsen knows that a certain percentage of people will take Nielsen up on its offer to participate in the program.

The difficulty is that this selection process relies on random chance for whoever chooses to participate, which goes back to the Urban vs Rural argument. Because depressed areas are more “hard up” for cash, they are more likely to take Nielsen up on its offer than urban areas, whose viewers are not only more likely to be mistrustful of spying via digital devices, but who also don’t necessarily need the small-ish amount of cash that Nielsen is offering… considering the amount of time required to watch TV (and do whatever else Nielsen requires). Yes, Nielsen requires you to watch TV to participate. The whole thing doesn’t work unless you actually watch TV.

This ultimately means that rural Republican areas of the country are likely over-represented in Nielsen’s households, with Democrat urban areas equally likely to be under-represented in Nielsen’s ratings. While Nielsen has no control over who chooses to accept the “Nielsen Household” solicitation, Nielsen does control the parameters used to entice people into the program. Thus, those parameters are skewed toward lower income households, which are likely to be in predominantly rural areas. In other words, depressed rural areas are far more likely to need the extra cash and be willing to jump through Nielsen’s hoops than more affluent urban areas. That’s not to say there won’t be a percentage of viewers in urban areas, as some households there may elect to participate.

Disposable Income

Urban areas can be a bit more affluent than rural areas. Urban residents may have more disposable income, but because they live in a larger city, they also have more entertainment options. This means entertainment options besides watching TV. When you live in a small rural town, entertainment options can be extremely limited even if disposable income is available. Rural townships tend to encourage TV watching more often than urban areas, where night clubs, restaurants, theme parks, opera, live theater events, shopping and large cinemas are common. More entertainment options means less need to watch TV as often… except for specific shows. Thus, urban viewers are less likely to want to participate in Nielsen’s household program than rural viewers, whose entertainment options may be limited by both what’s available near them and by their disposable income.

Extrapolation

Here’s the crux of Nielsen’s problems. Based on the over- and under-represented areas resulting from Nielsen’s flawed selection process, Nielsen attempts to make up for the skew by extrapolating data. Regardless of how the households may be skewed, Nielsen extrapolates its data anyway.

Nielsen estimates that it has around 42,000 households participating in 2022. Though, I’d venture to guess that that number is not completely accurate. I’d suggest Nielsen may have perhaps half that number actively participating at any one time. There might be 42,000 households signed up as Nielsen households, but likely only a fraction actively participate at any specific moment in time. For example, not every household will watch a specific sporting event when it’s on. Only those who truly enjoy watching football might be watching a specific game. This could drop that 42,000 households down to under 5,000 viewers. If it’s a local sporting event, it could drop that number well below 1,000 and maybe even below 200 actively watching.

200 equals 1 million, 5 million, 100 million?

How does this affect the ratings? Good question. Only Nielsen really knows. The problem is, as I stated above, Nielsen uses extrapolation. What is extrapolation?
Extrapolation is the process of using 1 viewer to represent many viewers. How many is a matter of debate. It is a process that Nielsen has employed for many years, and it is highly inaccurate. It makes the assumption that for every one viewer watching, a specific number of others are also watching. How many are extrapolated is really up to Nielsen. Nielsen must come up with those numbers, and herein lies the inaccuracy. Effectively, Nielsen fudges the numbers to appear great (or poor) depending on how it decides to pull the numbers together. In other words, extrapolation is an exceedingly poor and inaccurate way to determine actual viewership numbers. Yet, here we are.

Digital Media Streaming

With digital streaming services, such as Netflix, Hulu, Amazon and Crackle… and more specifically, devices such as DVRs like TiVO and devices like Apple TV, Nielsen’s numbers may be somewhat more accurate when using these devices. However, one thing is certain. Nielsen still doesn’t have 100% accuracy because it doesn’t have 100% of every TV household participating. Again, Nielsen’s numbers may be somewhat more accurate because we now have active digital streaming devices, but Nielsen still employs extrapolation to inflate the data it collects. Nielsen takes the numbers it collects, then guesses at how many might be watching based on each single viewer’s behaviors.

Why Extrapolation over Interpolation?

Interpolation requires two distinct sets of data points in which to fill in the interior data gap between those two sets. Filling in data between two distinct sets of data is a bit more accurate than attempting to guess at data points outside of them. With viewership numbers, there’s only one set of data at a single point in time. Everything gleaned from that single set of data is always considered “outside” or “extrapolated” data. There’s nothing in a single data set to interpolate. You have 42,000 households. You have a smaller number watching a TV program at any point in time. That’s all there is. If Nielsen ran two unique and separate sets of 42,000 households of viewers (a total of 84,000 viewers), interpolation would be possible between those two separate sets of 42,000. Nielsen doesn’t utilize this technique, thus making interpolation of its collected data impossible.

How Accurate is Extrapolation?

Not very. I’ll point to this StackExchange article to explain the details as to exactly why. In short, the larger the number gets outside of the original sample size, the larger the margin for error… to the point where the error outweighs the value of the extrapolation. One answer provides this quote:

[Extrapolation] is a theoretical result, at least for linear regression. Indeed, if one computes the so-called ”prediction error” (see this link, slide 11), one can easily see that the further the independent variable 𝑥 is away from the sample average 𝑥¯ (and for extrapolation one may be far away), the larger the prediction error. In the link that I referred to one can also see that in a graphical way.

In a system where there is no other option, such as during the 70s when computers were room-sized devices, extrapolation may have been the only choice. Today, with palm-sized internet-enabled phones containing compute power orders of magnitude faster than many of those 70s room-sized computers, continuing to use extrapolation honestly makes zero sense… especially when accuracy is exceedingly important and, indeed, required.

Extrapolation Examples

If 1 Nielsen viewer represents 1,000 viewers extrapolated (1:1,000), then 100 Nielsen households watching suggests 100,000 viewers may actually be watching. If 100 Nielsen viewers watch a program and each household represents 100,000 viewers (1:100,000), then this suggests 10,000,000 viewers may be watching. Just by changing the ratio, Nielsen can alter how many it suggests may be watching. Highly inaccurate and completely beholden to Nielsen making up these ratios.
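The ratio arithmetic above can be sketched in a few lines of Python. Note that the 1:1,000 and 1:100,000 ratios are this article’s hypothetical examples, not Nielsen’s actual (proprietary) figures:

```python
# Extrapolation: each measured household is assumed to "stand in" for
# some fixed number of real viewers. The ratio is chosen by the rating
# agency, which is exactly where the inaccuracy creeps in.

def extrapolate(measured_households: int, ratio: int) -> int:
    """Estimate total viewers from a sample, given a 1:ratio assumption."""
    return measured_households * ratio

# 100 measured households at a 1:1,000 ratio -> 100,000 estimated viewers
print(extrapolate(100, 1_000))      # 100000

# The same 100 households at 1:100,000 -> 10,000,000 estimated viewers
print(extrapolate(100, 100_000))    # 10000000
```

The measured data never changed between the two calls; only the assumed ratio did, yet the reported “viewership” swung by two orders of magnitude.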
As stated above, the larger the number diverges from the original sample size, the larger the margin of error… possibly making this data worthless. These suggested extrapolated viewership numbers do not actually mean that that many viewers were, in reality, watching. In fact, the real viewership number may be far, far lower than the extrapolated numbers suggest. This is why extrapolation is a bad, bad practice. Extrapolation is always error prone, and usually in the wrong direction. It makes too many assumptions that are more than likely to be wrong. Unless the person doing the extrapolation has additional data points which logically suggest a specific ratio is at play, it’s all “best guess” and “worst error”. How many businesses would choose to run their corporation on “best guess”? Yet, that’s exactly what TV executives are doing when they “rely” (and I use this term loosely) on Nielsen.

Biased

Above and beyond the fact that extrapolation has no real place in business because of its highly inaccurate and “best guess” nature, these numbers can be highly biased. Why? Because of the Urban vs Rural acceptance rates. Unless Nielsen explicitly goes out of its way to take the under- vs over-represented nature of Nielsen households into account when extrapolating, what Nielsen suggests is even more inaccurate than the use of extrapolation alone would make it.

CNN vs Fox News

CNN has tended to be a more liberal and, thus, Democrat-favorable news organization. Though, I’d say CNN tends to be more moderate in its liberal Democrat leanings. Fox News, on the other hand, makes no bones about its viewpoint. Fox News is quite far right and Republican in much of its leanings. Fox News is not always as far right as, for example, Alex Jones or other extremist right media. However, some of its leanings can be as far right as some quite far right media.

Here’s an image from the Pew Research Center that visually explains what I’m describing:

Whether Pew’s research and datapoints are spot on, I’ll leave for you to decide. I’ve reviewed this chart and believe it to be mostly accurate in terms of each outlet’s political leanings. Though, I personally have found PBS to be somewhat closer to the “Average Respondent” location than this chart purports… which is why even Pew might not have this chart 100% correct. For the purposes of CNN, Fox News and Hannity, I’ve found this chart to be spot on.

As you can see in the chart above, Fox News itself is considered a right-leaning news organization, but not far off center at around a 2. However, the Sean Hannity show is considered just as far right as Breitbart at about 6-7. CNN is considered slightly left-leaning at around a 1 (less left-leaning than Fox News is right-leaning at 2).

What does all this mean for Nielsen? It means that Republicans tend to include more rural viewers than urban, and those rural viewers tend to be conservative. Because Nielsen is more likely to see participation from rural viewers than urban viewers, due to its enticement practices, this skews Nielsen’s accuracy towards conservative viewership and away from liberal viewership. Nielsen’s enticement practice isn’t the only problem which can lead to this skew, though.

Meaning, Fox News viewership numbers as stated by Nielsen may be highly overestimated and inaccurate. Quantifying that more specifically, Fox News viewers may be over-represented where CNN viewers may be severely under-represented. It further means that unless Nielsen actually recognizes this liberal vs conservative under- vs over-representation disparity in its Nielsen households and alters its extrapolated numbers accordingly, its published viewership numbers for CNN vs Fox News are highly suspect and likely to be highly inaccurate.

Worse, Fox News is owned by Rupert Murdoch.
Because this man is in it for the cash he can milk from the Fox News network, he’s more than willing to pay-for-play. Meaning, if he can get companies to favor Fox News by asking them for favors in exchange for money, he (or one of his underlings) will do it. Murdoch can then make more money because more advertisers will flock to Fox News under the guise of more viewership. Fake viewership is most definitely lucrative. Because Nielsen extrapolates data, this makes faking data extremely easy. Unlike YouTube, where Google has no reason to lie about its reported views, Fox News has every reason to lie about its viewership, particularly if it can game other companies into complying with its wishes.

Nielsen Itself

Nielsen purports to offer objective data. Yet, we know that businesses are helmed by fallible human CEOs who have their own viewpoints and political leanings and who are in it for the money. One only needs to look at Rupert Murdoch and Fox News to understand this problem. Some CEOs also choose to micromanage their company’s products. Meaning, if Nielsen’s current CEO is micromanaging its ratings product, which is also likely to be Nielsen’s highest moneymaking product, then it’s entirely possible that the ratings being reported are biased, particularly in light of the above about Rupert Murdoch (who is also a Republican).

Conflict of Interest

When money gets involved, common sense goes out the window. What I mean by this statement is that since TV executives / networks pay Nielsen to receive its ratings results periodically, Nielsen is beholden to its customers. The word “beholden” can have many meanings in this “sales” context. Typically in business, “beholden” means the more you pay, the more you get. In the case of Nielsen, it’s possible that paying more to Nielsen means a business may get more / better ratings. That sort of breaks the “objective” context of Nielsen’s data service. It’s called “Conflict of Interest”. In essence, this could represent a pay-for-play solution, a true conflict of interest.

There’s honestly no way to know what deals Nielsen has brokered with its clients, or more specifically with Rupert Murdoch’s Fox News Network. Most companies who do sales deals keep those details close to the vest and under non-disclosure binding contracts. The only way these deals ever get exposed is during court trials, when those contracts can become discovery evidence. Otherwise, they remain locked in digital filing cabinets between both parties. Even then, such contracts are very unlikely to contain words disclosing any “back room” verbal handshake deals discussed. Those deal details will be documented in a separate system or set of systems describing how to handle that customer’s account.

Let me count the ways

There are many problems in Nielsen’s rating services that may lead to highly inaccurate information being released. Let’s explore them:

1. Nielsen’s solicitation of households can easily lead to bias due to its probability of luring in people who are hard up for cash (e.g., rural Republicans) vs those who are not (e.g., urban Democrats).

2. Nielsen’s products and software spy on knowing users’ viewership habits. Spying of any variety is usually viewed with skepticism and disdain, especially these days and especially by certain types of people in the population (usually liberal leaning individuals). Rural Republicans are less likely to understand the ramifications of this spying (and more willing to accept it) than urban Democrats (who tend to be more likely to work in tech based businesses and who see this type of spying as too intrusive).

3. Nielsen’s numbers are “fortified” using extrapolation. Fortified is a nice way of saying “padded”. By padding their numbers, Nielsen staff can basically gyrate the numbers any way they want and make any channel’s viewership numbers look any particular way. Which ties directly into…

4. Nielsen sells its ratings product to TV producers and networks. Because these deals are brokered separately for varying amounts of money, the network who pays the most is likely to see the best results (i.e., pay-for-play).

5. Nielsen moved away from its “on paper” auditing system to digital device auditing. Because Nielsen removed the human factor from this ratings equation (and fired people as a result), it also means that fewer and fewer people can see the numbers to know what they truly are (or at least were before the extrapolation). Fewer people seeing the numbers means higher chances of fabrication.

Looking at all of the above, it’s easy to see how Nielsen’s numbers could be seriously inaccurate, possibly even intentionally. I won’t go so far as to say fake, although that’s entirely possible. However, because Nielsen employs extrapolation, it would be easy for a Nielsen staffer (or even Nielsen’s very CEO) to make up anything they want and justify it based on “proprietary” extrapolation techniques. Meaning, numbers stated for any network’s viewership could be entirely fabricated by Nielsen, possibly even at a network’s request or possibly even as part of that network’s deal with Nielsen. In fact, fabrication is possible based entirely on number 4 above. A TV network could pay significantly to make sure its network and its programming are always rated the highest, at least until it stops paying for it. With Nielsen’s extrapolation system, where data can get played fast and loose, it’s entirely possible for such a sales scenario to manifest.

Why are Nielsen’s Numbers Important?

Advertising. That’s the #1 reason. Companies using TV advertising wish to invest their advertising dollars into channels with the highest viewership. The higher, the better. Nielsen’s ratings are therefore taken to indicate that a higher ratings share means higher viewership. The problem is, Nielsen’s extrapolation gets in the way of that.
Regardless of whether or not cheating or fabrication is involved, the sheer fact that extrapolation is used should be considered a problem. The only thing Nielsen really knows is that of the 42,000 Nielsen households it has devices in, only a fraction of those households watched a given program or channel at any specific time. Meaning, the real viewership number Nielsen offers is a maximum of 42,000 viewers at any moment in time… nowhere close to the millions that they claim. Any number higher than 42,000 is always fabricated, whether extrapolation or any other means is used to inflate it.

That companies like Procter and Gamble rely on those 42,000 Nielsen households to determine whether to invest perhaps millions of dollars in advertising on a channel is suspect. That companies have been doing this since the 70s is a much bigger problem. In the 70s, when there was no other way to really determine TV viewership, Nielsen’s system may have held some measure of value, even though it used extrapolation. However, in 2022, with always-on internet-enabled phones, tablets, computers, game consoles and other smart TV devices, measuring actual live viewers seems quite feasible directly from each device tuned in.

If someone is live streaming CNN over the Internet, for example, it’s not hard to determine and count this at all. If hundreds of people are streaming, that should be easy to count. If millions, it’s also easy. Why extrapolate when you can use real numbers? The days of extrapolation should have long ended, replaced by live viewer tallies from various digital streaming devices, such as phones, computers and Apple TVs. Whether these devices are allowed to phone home to provide that data is for each viewer to decide. If the viewer wishes to opt in to allowing their viewership metrics to be shared with each TV station, then that produces far more realistic viewership numbers than Nielsen’s extrapolated ones.
If they opt out, then those stations can’t see the numbers. Opting in and out should be the choice of the viewer. That’s where privacy meets data sharing. Some people simply don’t want any of their private data to be shared with companies… and that’s okay. That then means some level of extrapolation (there’s that word again) must be used to attempt to inflate the numbers accordingly.

Let’s consider that 42,000 is about 0.0127% of 330 million. That’s trying to represent the entire population of TV viewers in the United States from roughly one hundredth of one percent of people watching. Insane! With always-on digital devices, even if 10% of viewers opt out, the remaining 90% still provide far more accurate viewership numbers than Nielsen’s tiny number of households, leaving far less of the data to be filled in by extrapolation. That advertisers don’t get this point is really surprising.

Auditing

You might think, “Well, isn’t Nielsen audited?” Most companies dealing with numbers are typically audited. Unfortunately, I’ve found that a tech business which sees regular audits can still have fabrication. How? Because those who work on the technical side of the house are not those who get audited. Meaning, the systems administrators who maintain the logs and records (i.e., databases) aren’t under the scrutiny that the financial side of the house gets. If it relates to money and sales, auditing of the accounting books is a regular occurrence and must uphold specific standards due to legal requirements. Auditing of anything else is catch-as-catch-can, particularly when laws don’t exist. Meaning, the auditors must rely on the statements of staffers being accurate. There’s no way for an auditor to know if something has or hasn’t been fabricated when viewing a log. Worse, if the company employs a proprietary (read: private) algorithm to manage its day-to-day operations, auditors typically are unable to break through its proprietary nature to understand if there’s a problem afoot.
In other words, auditors must take what’s told to them at face value. This is why auditing is, and can be, a highly inaccurate profession. I should also point out that auditing isn’t really intended to uncover treachery and deception. It’s intended to document what a company states about specific questions, whether true or false. Treachery and deception may fall out of an audit, but usually only if legal action is brought against the company. In the case of money, it’s easy to audit records of both the company and third parties to ensure the numbers match. In the case of proprietary data, there are no such records to perform this sort of matching. What an auditor sees is what they must accept as genuine.

The only real way that such deception and fabrication becomes known is if an employee performing such fabrication blows the whistle. An independent auditor likely won’t be able to find it without a whistleblower. Because jobs tend to be “on the line” around such matters, employees are usually told by their boss what they can and cannot say to an auditor. Meaning, the boss might be acutely aware of the fabrication and may instruct their employees not to talk about it, even if directly asked. In fact, employees performing such fabrication of data may intentionally be shielded from audits, with employees who have no knowledge thrown at the auditors instead. It’s called plausible deniability.

Overall

None of the above is intended to state that Nielsen fabricates numbers maliciously. However, know that extrapolation of data is actually the art of data fabrication. It takes lower numbers and then applies some measure of logic and reasoning that “makes sense” to deduce a larger number. For example, if one person complains of a problem, it’s guaranteed a number of other people have also encountered the same exact problem, but didn’t complain. The art is in deducing how many didn’t complain. That’s extrapolation: using logic and reasoning to deduce the larger number.
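As a quick sanity check on the figures used throughout this post (42,000 panel households, roughly 122 million U.S. TV households, a 330 million population), the scale of the extrapolation is easy to compute. This is just arithmetic on the publicly quoted numbers, not Nielsen’s actual methodology:

```python
# Sanity-check the sampling figures quoted in this post.
US_POPULATION = 330_000_000   # approximate U.S. population
TV_HOUSEHOLDS = 122_000_000   # approximate U.S. households with TVs
NIELSEN_HOMES = 42_000        # Nielsen's metered household panel

# What fraction of the population does the panel represent?
pct_of_population = NIELSEN_HOMES / US_POPULATION * 100
print(f"{pct_of_population:.4f}% of the population")

# When viewership is extrapolated nationally, each panel home
# effectively stands in for this many TV households:
scale_factor = TV_HOUSEHOLDS / NIELSEN_HOMES
print(f"1 panel home represents ~{scale_factor:,.0f} TV households")
```

In other words, every single metered home gets multiplied by roughly three thousand when Nielsen quotes national viewership, which is the scale at which the extrapolation argument above applies.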
Extrapolation clearly isn’t without errors. Everyone who deals in extrapolation knows there’s a margin of error, which might be as high as 10% or possibly higher, and which grows as the gap between the sample and the extrapolated total increases. Are Nielsen’s ratings numbers accurate? Not when you’re talking about 42,000 households attempting to represent the roughly 122 million U.S. households with TVs. This data doesn’t even include the phones, tablets and computers capable of streaming TV… smartphones alone account for about 7.26 billion devices worldwide. Yes, billion. In the United States, the number of smartphone owners is around 301 million. There are more smartphones in existence in the United States (and the rest of the world) than there are TVs in people’s homes. So, exactly why does Nielsen continue to cling to its extremely outdated business model? Worse, why do advertisers still rely on it? 🤷‍♂️

## Rant: Fallout 76 Event — Invaders from Beyond

Posted in botch, video game design, video gaming by commorancy on March 11, 2022

On the close of the Fasnacht event, not a week later, Bethesda launches Invaders from Beyond, a new limited-time “seasonal” Fallout 76 event. Let’s explore.

Invaders from Beyond

Since the inception of Fallout 76 (and indeed, the Fallout franchise), hints at aliens have been littered throughout the lore. However, Bethesda has now taken the leap and created a full-fledged event out of aliens. Too bad they released this just a month or so after the alien invasion in Grand Theft Auto. This one feels like Bethesda is ripping off Rockstar.

The event begins with a typical round saucer ship hovering overhead. The aliens are what you might expect when you think of an alien, but a bit more menacing looking, with jagged teeth. There are some in power armor. There are also robotic floating drones, for whatever reason. Fallout 76 has hinted at the presence of aliens with the inclusion of the Alien Blaster weapon since the launch of the game.
This weapon could be found in Toxic Valley in a sunken and broken safe, along with a few other items and a key, since day one of the game. This weapon, unfortunately, has always been more of a joke than useful. In fact, it still is. Additionally, while a small amount of AB rounds of ammo have been available in the game, they could never be crafted. Thus, you had to get the plan to convert the Alien Blaster to use Fusion Cells, which could be crafted. Unfortunately, the conversion to Fusion Cells heavily nerfs the damage output of this pistol, making it effectively worthless for in-game use. Even with the AB rounds, it’s not that powerful.

With the introduction of the Invaders from Beyond event, the Alien Blaster Pistol and the new Alien Disintegrator Rifle plans now drop as potential loot from this event. Not only can the weapon plans drop, so too do the mod plans that go with them, adding cryo or poison damage to each gun’s energy damage, in addition to some other limited mods. Additionally, there are a number of CAMP decoration plans which can drop, such as the Alien Autopsy Bed, Alien and Human Tubes, an Asteroid and an Alien Stashbox.

Because of the new Alien Disintegrator addition, Bethesda has unlocked crafting of AB ammo, which works in both the Alien Blaster and the Alien Disintegrator. Unfortunately, Bethesda forgot to unlock AB round creation in the CAMP ArmCo Ammo appliance and to supply AB rounds in the Ammo Converter appliance. This is currently Bethesda’s half-assed method of operation: unlock something new in one place, like the Tinker’s bench, but then forget all about all of the other places where it also needs to be supported.

Bethesda did this same shtick with Fallout 1st members. Sure, Bethesda gives us an infinite Scrapbox with Fallout 1st, but then conveniently forgets to support Fallout 1st members at train stations by adding Scrapboxes there. Fallout 1st members should be considered “premium” players.
1st members are actively paying monthly for that service. Yet, Bethesda still treats Fallout 1st members as second-class players, giving priority to non-1st players. It makes zero sense. I digress.

Locations for the Event

This event, unlike Fasnacht, which spawns only in Helvetia, spawns in a number of different locations on the map. The multiple event locations are both a good and a bad thing. The good thing is that it prevents players from nuking the area in advance of the event. Though, they could wait and nuke the location immediately after the event starts. It’s possible, though, that the event disallows nuking while active. I haven’t tried nuking the area with the event active to find out. The bad thing is that one of the locations entirely sucks when playing this event. The event locations are as follows:

• Dyer Chemical (The Mire)
• Charleston (Forest)
• Sparse Sundew Grove (Cranberry Bog)
• Garrahan Mining HQ / Garrahan Estate (Ash Heap)
• Monongah (Savage Divide)
• Wavy Willard’s (Toxic Valley)

Couldn’t they have chosen some better locations? These locations really do suck overall. The event claims to be “Easy”, but that is all dependent on the player and the location where it spawns. It also depends on your character’s build. For example, Sparse Sundew Grove is the most difficult location, but only because the plants are like rocks: immovable. Unlike most plants in most games (and in real life), which move out of the way when you shove past, you know, like plants actually do, these plants do not move. The game applies no physics to these plants at all, preventing them from moving should you run into them. They become like brick walls that block movement. This makes the event much more difficult than it should be.
Additionally, unlike Helvetia, where the Scorched are temporarily removed to make way for the event, these event locations are not cleared of enemies, requiring players to clear the entire area of the existing enemies prior to starting the event. Way to go, Bethesda. You had one job.

Grenade Drops

This event is narrated by “Homer Saperstein” and the event has 3 “Brainwave Siphons” which must be destroyed. To do this, you must kill all of the aliens in each wave (30), then kill the bosses that appear for each siphon. The final siphon boss is a 3-star legendary which drops random legendary loot, usually worthless one-star crap. By the second siphon, the overhead alien ship changes tactics and begins dropping grenades, denoted by a red streak. The grenades aren’t randomly dropped. Oh, no no no. The game explicitly targets player positions, sometimes multiple times in a row. Sometimes even without warning. Just, boom, and if you’re Bloodied, you’re dead. No warning.

Let’s talk about the worst location for these grenade drops. Because Sundew Grove plants don’t move with physics, if you get pinned by one of the plants, unable to move, the grenade will hit you. In open areas like Dyer Chemical or Garrahan, there’s no problem moving away. With these stupid plants, player movement can become impossible. Even simple movement like jumping or running can see the player blocked by a plant. It’s a pain in the ass. The lack of plant physics makes this event 3 times harder in this grove than it is in other open-area locations. Even simply walking through the plants is a pain in the ass.

Why is all of this important? It’s important because Bethesda has changed how (and where) characters respawn. No longer do you spawn near where you fell. No. Now you sometimes respawn so far away that you’re outside of the event area. You have to spend at least 30 seconds running at full speed to get back to where you were.
By the time you reach that location, it’s too late to participate because other players have already killed everything and the event is over. This respawn mechanic fucking sucks. So too do these fucking grenades.

Crap Event for Bloodied Builds

Here’s where things get exceedingly dicey when you’re running a bloodied build. This event explicitly targets bloodied players, both in dropping grenades on them and in heavily nerfing bloodied weapons against the aliens at the same time. Oh, it gets so much worse. Because a bloodied build must rely on ranged weapons, implicitly requiring VATS, to effectively make the bloodied build actually work, Bethesda heavily nerfs VATS against the aliens. Where you can stand a car length away from any other enemy in the game and see a 95% VATS hit chance, aliens show 72% or less. Way less if you’re a house distance away or more. Bethesda has explicitly targeted bloodied builds to make this event much more difficult for no added benefit. I also find that alien grenades target my bloodied character far more frequently than other players.

The “Brainwave Siphons” also aren’t siphons. What they are is big-ass grenades. When they go off, they wipe away HP instantly. If you’re running a bloodied build, being anywhere near a siphon will instantly kill you when it pulses. Homer says that the siphons may “sting” a bit. It’s way more than a “sting” if you’re running a bloodied build. However, it is simple enough to stay far away from the siphons. The grenades, on the other hand, are frustrating as hell for all the reasons I discuss above (and below).

Sneak Card

This event negates the Sneak perk card entirely, specifically against alien grenade drops. The point of the Sneak card is that if you’re [ CAUTION ] or [ HIDDEN ], then nothing should know you’re there. Yet, the grenades ALWAYS target my character directly, even when [ HIDDEN ]. The red warning indicator doesn’t land in front of my character or next to my character.
It always lands directly ON my character. Sneak should protect you from grenades if you’re seeing [ CAUTION ] or [ HIDDEN ]. Yet, many of these crap Bethesda events entirely disable Sneak from functioning. This Sneak card bullshit started during the first Daily Ops event, when Super Mutants had stealth ability. That Daily Ops sneak card bullshit is the reason I don’t play Daily Ops at all, ever. It seems Bethesda intentionally keeps bypassing Perk cards willy-nilly with these new game modes. We spend our time tweaking our character builds and combat strategies and Bethesda spends their time building game modes that bypass it all. What’s the point in buying into this perk card system (or, indeed, this game at all) if you don’t intend to use it as it was built? Why even have a Sneak card if you don’t intend to honor it ALL of the time? Or, do you Bethesda guys sit in a room when designing and say, “Fuck the Sneak card users. Let’s target them anyway.”?

Invaders from Beyond is a Technical Failure

This event shows everything wrong with Bethesda in one single event. Not only does the event unfairly target certain player types, it does so with intentional vengeance. Yes, I said intentional. Basically, the event intentionally penalizes bloodied builds for being bloodied. Not just from the player health perspective, but by reducing damage output from bloodied weapons to mere pin pricks on the aliens, reducing VATS to being entirely useless and negating perks from perk cards. Literally, targeting the torso (the most basic of VATS hits) misses at least 50% of the time even when VATS shows a 95% chance. Missing 50% of the time is not a 95% chance. While other non-bloodied builds can shred aliens almost instantly, bloodied characters must take 6-10 (or more) shots to kill a single alien. It’s ridiculous. Bloodied weapons that shred HP on robots, Liberators, Scorched, animals and even Super Mutants can’t kill one tiny alien in one or two shots?
It doesn’t make any sense and it’s entirely fucked up. Because it makes no sense, it means Bethesda has intentionally and unfairly targeted Bloodied build characters in this event, though this issue probably also affects other player builds to a lesser degree.

And yes, it even gets worse. The grenade drops are timed perfectly to interfere with the event. Right when the boss arrives, within 5-10 seconds, a barrage of grenades falls, typically targeting my player. This means I have to stop what I’m doing and move far away to avoid grenade damage, which means I can’t even shoot at the boss (or at anything). Typically, that allows other players to shred the HP of the boss with their OP weapons while I’m trying to avoid a stupid grenade. Once the boss is down, within 5-10 seconds of that and right after Homer suggests we start shooting the siphon to destroy it, another huge barrage of grenades falls from the sky, again targeting my character. This means I have to move again to someplace else, preventing me from, you know, actually shooting the siphon.

This event should not be about avoiding damage from fucking grenades! It’s about the combat against the aliens. Why the hell should I invest time in an event when the only thing I’m doing is avoiding fucking grenades? Grenades that I shouldn’t even be avoiding, because my Sneak 3 card is active and my screen says [ HIDDEN ]. If I’m hidden, then those grenades can’t find me. Capeesh, Bethesda? That the grenades can and do find me with the Sneak card active is fucking insane.

“Don’t play Bloodied,” I can hear some players exclaim. To that I say, “Fuck off!” It took me months to get my character tweaked to be a bloodied character. I can’t just turn it off overnight and choose an entirely different play style.
That means not just redoing my character’s stats and SPECIAL; that means changing crap-tons of things about my character, including carry weight, finding entirely different weapons and armor and completely rearranging my perk card stack to accommodate that new build. I don’t tell you how to run your character; don’t tell me how to run mine. So, “Fuck off”. To Bethesda and this event I also say, “Fuck off”. Do you really want us to play this game or not? Why must you keep rewriting the game’s established rules arbitrarily for each of these events? If my gun does a specific and expected level of damage using VATS, then stick with that on ALL enemy types. These aliens aren’t special. In fact, they’re weaker than even a Ghoul. So why the hell are my weapons so fucking nerfed during this event?

Environmental Perks

Here’s something that’s ongoing with the game, but is now exacerbated by this event. When your character picks up any environmental perk, such as Kindred Spirit or the Strength, Agility, Luck or Endurance perks, via camp items like the exercise bike, the weight bench, the fortune teller or similar, Bethesda has not only reduced damage resistance while these perks are active, but allows enemies to unfairly target your character. It’s even worse, though. The game has seemingly put a bright red halo around the character, alerting every enemy in the game to the character’s presence while carrying these perks. It also seems that the more of these you stack, the brighter that halo becomes. Not only is your damage resistance drastically reduced by carrying these perks, the game allows enemies to find and target your position in an attempt to instantly kill you. Carrying these perks even seems to give the enemies better accuracy. Worse, it seems that this “halo” allows enemies to teleport instantly to your position and silently attack from behind. If you’ve been wondering what the hell is going on with this game, well, this is it!
Again, it’s another case of Bethesda intentionally, yet unfairly, targeting players who are using standard in-game features, like a bloodied build and bloodied weapons AND environmental perks. The problem with these environmental perks is that they’re instantly wiped away upon death and respawn. It’s like someone at Bethesda doesn’t want you to actually use or carry these perks. Why the fuck did you include them in the game if you didn’t want us to use them? Worse, why the hell are you penalizing us when we do use them? Literally, one swipe from a ghoul of any level can kill my character instantly with ANY environmental perk active. Without these perks, a swipe with my character carrying the same exact amount of HP does only half the damage and my character remains alive. I’ve tested this. Why the hell did Bethesda reduce damage resistance while carrying these perks? I don’t know, but it’s entirely disingenuous. Bethesda also knows these perks disappear after a character death. Why paint a huge fucking target on me when I carry them? That’s not cool at all and it’s entirely unbalanced and unfair gameplay… which is entirely what Fallout 76 has devolved into.

Why Intentional?

Because of the duping scandal in the game’s early life, Bethesda has been itching to take intentional vengeance against ALL players by specifically and unfairly targeting players choosing to play using officially supported builds and taking advantage of official environmental perks. Worse, it seems that Bethesda is now targeting not only players carrying environmental perks, but also those playing using a bloodied build. How do I know this? Because whenever I kill legendary enemies, the chances of a bloodied weapon drop have drastically increased. Just like the game knows that I’m predominantly using a .45 ammo weapon and drops this exact kind of ammo with every enemy’s loot, the game knows I’m playing using a bloodied build and using a bloodied weapon.
Thus, the chances of receiving a bloodied drop are drastically higher. The game is now unfairly targeting bloodied build players: not only does it try to instantly kill them as often as possible, the devs are also intentionally nerfing damage output and screwing with VATS percentages to reduce the frequency of hits and damage output. Again, I call bullshit on this. Bethesda, if you don’t want us playing a bloodied build, then remove all of these fucking bloodied weapons and all perk cards enabling this build. Simply remove it. Don’t play internal games to fuck us over in an attempt to deter us from using this build; simply TAKE IT OUT entirely. If you don’t want us playing this build, then take it out! Don’t silently fuck us over because we have chosen to use this build.

Losing Perks after Death

This one chaps me so hard. Oh no, can’t lose that stupid disease after death and respawn, but yes, we’ll wipe away all of those environmental perks and force you to go get them all again. What a fuck job, Bethesda. Even Homer’s Aid remains after death and respawn, which is the same fucking thing as an environmental perk. Oh sure, that one can remain, but not Kindred Spirit. Not the SPECIAL perks. Oh no. Gotta fuck us over, but keep only the things you think we should keep. This game is completely inconsistent and ridiculous. If one perk can remain, then they all can. If one can’t remain, then they ALL must be wiped away. That’s what consistent means. Having these exceptions is bullshit.

Crashing

The one last important thing I almost forgot to include is crashing. The game client now officially crashes just as often as the Beta did in 2018. Probably more often. Bethesda had been working towards some semblance of stability in the game client, but it seems that has all been tossed out the window. Now the game crashes randomly after having played the event. I’m playing on a PS4, so I guess Bethesda has given up any thought of trying to keep this game playable on the “last gen” consoles.
If you’re going to go so far as to abandon all stability for the game, then just pull it from the platform entirely. Why support a “last gen” platform when the game literally crashes at the drop of a hat? Crashing was bad during Fasnacht, but is officially twice as bad with Invaders from Beyond. And note, I’m no longer reporting crashes on my PS4. They never fix the bugs anyway. So, why bother reporting them? If they can’t be bothered to fix bugs, I can’t be bothered to report them. Seems only fair.

Rating

Overall, I give this event 1 star out of 5. Not only is the event insanely predictable and stupid easy once you know what to do, the fact that Bethesda has chosen to play fuck games with certain types of player builds makes this event (and this game) completely worthless. More than this, the loot drops are effectively junk. The best items are the CAMP decorations. The weapons are worthless. The reason for the 1 star and not 0 is that the event is playable and it does drop loot. You can also choose to stand at the edge of the event border, do nothing, and collect loot from the event without participating. Though, I expect Bethesda will nerf this too. At some point, events may require participation (i.e., killing at least one enemy) to get dropped loot out of the event. I wouldn’t mind this change, as it prevents players from idling while farming events for loot.

The lowered rating is also because the alien grenades are entirely pointless and they intentionally bypass the Sneak card. Bloodied weapons are nerfed all to hell and so is VATS, making the event frustrating and pointless for no real benefit. So now, there are other builds that are way overpowered. It used to be Bloodied builds, but Bethesda has seen to it that anyone running a bloodied build is now so weak it’s pointless. Meanwhile, I’ve seen other non-bloodied builds that can shred the HP of the final Legendary enemy in seconds.
In fact, this same build can shred the HP of any of the alien enemies in seconds. So, what was the point in screwing with bloodied builds here, Bethesda? You simply pushed the problem off to other overpowered builds. Now, those overpowered builds are the ones using machine gun weapons or shotguns. Are you going to go and nerf those builds and weapons, too? If you plan on nerfing every single build in the game, then why even run Fallout 76 as a game? The point of this game is to build out high-powered characters. That’s the reward for reaching the endgame. That’s why we as gamers play your games. That’s why we spend time getting our characters to level 400 or 500 or 1000: because we want to have an overpowered build. By picking these builds out and nerfing the hell out of each and every one (because some random game player complains), you’re simply chasing more and more gamers away.

Leave the game the fuck alone. If you don’t want players building overpowered characters, then just shut the game down. Don’t fuck with nerfing every single weapon, armor piece and build in the game. SHUT IT DOWN. There’s no point in running a Fallout game if players aren’t rewarded for reaching the endgame and reaching a high level. Instead, it seems we’re expected to live with level 1 underpowered weapons because you want to fuck us over with every single release and every single event type. Stop screwing with us. If you can’t do that, then just shut the whole fucking thing down. There’s no reason to keep the game system alive if all you want is for every player to play with level 1 weapons against level 100 enemies.

## Can the Steam Deck succeed?

Posted in botch, business, gaming, portable by commorancy on February 28, 2022

I’ve not yet had my hands on this new Valve bad boy of a handheld, but I still want to give my first impressions of this device with its base $399 price tag. Let’s explore.

Handhelds in Gaming

Before I jump into my opinion of Valve’s new Steam Deck, let’s take a step back in time to understand this device’s origins. I’m sure you may already be aware of many of these devices, but for those who may be new to some of them, I’ll list them below.

Handheld gaming began with Nintendo, going as far back as the early 80s with the Nintendo Game & Watch series of handheld gaming devices. These were single-game devices that played very simplistic games, such as Fire and Ball. These games had you doing very simple things: in Fire, catching people as they tumble out of a burning building; in Ball, juggling balls.

Nintendo realized the magic of these small single-game handhelds and introduced the more flexible cartridge-based GameBoy. Using cartridges, this handheld gaming unit offered the ability to switch games out and play as many different games as there were cartridges made. It also offered gameplay on the go in a compact format.

Since then, we’ve seen a number of portable gaming handhelds in the subsequent years including:

• Gameboy Color
• Gameboy Color Clamshell
• Atari Lynx
• Sega Gamegear
• Nintendo DS
• Nintendo 3DS
• Neo Geo Pocket
• Sony PSP
• Nokia NGage
• Sony Xperia Play
• Sony PS Vita
• NVIDIA Shield
• Nintendo Switch

and now we have the Steam Deck to add to this list. I didn’t include the Nintendo Wii U because, while it had a portable element in the Gamepad, it simply wasn’t possible to play games strictly on the Gamepad on the go.

The Reviews

The early reviewers of the Steam Deck call it groundbreaking. Yet, the Steam Deck doesn’t solve any of the fundamental problems of handheld consoles of this variety. So, how exactly is it groundbreaking? It isn’t. It has one huge limitation that makes it fall far short of “groundbreaking”. It’s also buggy as all get-out in far too many places that matter. Let’s take a closer look at the Steam Deck.

From the above image, it looks like a fairly standard kind of handheld with …

• A large touch screen
• Two thumbsticks
• ABXY buttons (using the Xbox Controller layout)
• A ‘Steam’ button
• Two shoulder and two trigger buttons
• (new) Two trackpad buttons below the thumb sticks
• (new) Two additional buttons (View and Options) between the thumbsticks and the D-Pad and ABXY buttons.
• Power, volume up (+) and down (-), headphone jack, USB-C port and reset buttons are on the top edge.

This console runs Linux, or rather SteamOS. I’m uncertain why Valve chose to go with Linux on this handheld when Windows would have been a much more compatible option. I mean, every single PC game would operate right out of the box on a handheld built on Windows. My only thought is that Gabe Newell didn’t want to fork over a bunch of cash to Microsoft to make this console a reality. Using a Linux-based SteamOS meant cheaper outlay and no royalty fees.

Unfortunately, that design choice immediately sacrifices game compatibility right out of the gate. Nowhere is this more apparent than when you attempt to play certain games, which simply crash outright. Because SteamOS is Linux based, it must use a Windows compatibility layer to run Windows games. SteamOS uses the open source Proton for this purpose, which works in similar fashion to Wine (Wine Is Not an Emulator). In fact, Proton’s Windows compatibility is based on Wine, but is being developed and improved by Valve and CodeWeavers. If you’ve ever used Wine, then you know it often isn’t ready for everyday use. Valve’s Proton layer may be better, but it can’t be that much better than Wine.

One thing I’ll say about compatibility layers is that they can work great one minute and fail hard the next. Everything depends on so many tiny little things lining up. Instead of fighting with compatibility layers, the Steam Deck could have run Windows directly and avoided all of these compatibility problems. Assuming the point is to play Windows games, why fake Windows when you can use the real thing? If the Steam Deck had at least been given the option of loading Windows as its operating system, then a buggy Windows compatibility layer wouldn’t have been required.

Pricing

The Steam Deck isn’t cheap. Let’s examine the Steam Deck’s pricing levels at launch:

• $399 — 64 GB eMMC storage, carrying case
• $529 — 256 GB NVMe SSD storage, carrying case
• $649 — 512 GB NVMe SSD storage, anti-glare etched glass, carrying case

For $399, that gets you an entry level handheld device and a carrying case. I’m assuming the Steam Deck also ships with a power cord and power brick, but it’s not listed above. If you want the top end version of the Steam Deck, you’re going to fork over $649 … and more once you’ve bought accessories and games and paid taxes.

Let’s put this into perspective. For $499 ($100 more than the Steam Deck’s base model), you can buy a PS5 or an Xbox Series X. Both are true gaming consoles, but not handhelds.

PlayStation Vita, NVIDIA Shield, Nintendo Switch

All three of these consoles are the most recent iterations of touch screen “tablet” handhelds. These are the handhelds that should have been able to perform the best. In fact, the NVIDIA Shield should have been competitive with this console, although the Shield is now several years old.

However, the Shield tried exactly what the Steam Deck is now trying: bringing PC gaming to a handheld. Yet, for whatever reason, it never ultimately worked.

That’s not to say that the Steam Deck won’t have some success, but ultimately it likely won’t succeed in the way Gabe hopes. It’s not for lack of trying. This format has been tried multiple times, each with varying degrees of success, but never so successfully as to be called a runaway hit. As I said above, though, there’s one huge fail in the Steam Deck’s design. I’ll come to it shortly.

Of all of them, Nintendo’s Switch is probably the closest to a ‘runaway success’, but it’s still not winning the handheld space. What is? The smartphone. Why? Because of its multifunction nature. You can play games as easily as answer the phone as easily as book your next flight to Aruba. A phone supports always-on data and gaming can take full advantage of that fact… making a phone the easiest gaming handheld to carry around. On top of this, smartphones are updated about every year, keeping them 100% compatible with your current software while running that software much more fluidly. Gaming handhelds, like the Steam Deck, are lucky to be updated once every 3 years.

The point is, the handheld market is dominated by smartphones, not gaming handhelds. The reason for this, as I stated above, is clear. Not to mention, the battery life on phones, while not perfect, is about as good as you can expect in that sized device. It lasts all day, at least 8 hours. Many phone batteries can last up to 12 hours.

Handheld Gaming Battery Woes

Handheld gaming devices, at best, offer about 2 hours of play time with the most demanding games. That’s a problem with the Nintendo Switch, the PS Vita, the NVIDIA Shield and, yes, even the Steam Deck. The point is, 2 hours of play time is simply not enough when you’re attempting to become immersed in a brand new game. All immersion is immediately broken when you see that sad red flashing battery icon letting you know your battery is about to die.

Sure, you can then find a wall outlet to plug into to continue your gaming, but that can be a hassle. This is also the fundamental reason why such handheld gaming consoles don’t sell as well as they should. You can’t produce fabulous looking games with a rock stable 60 FPS gaming experience when you’re limited to 2 hours of play time. The OS must then play internal conservation tricks with frame rates, CPU power levels, GPU power levels, spinning down storage and so on. These power saving techniques mean better battery life, but poorer gaming performance. It also gets worse as the battery runs down. Less available current and voltage means limiting CPU and GPU clock speeds.

This problem exists even with the Nintendo Switch, but Nintendo has taken a balanced approach by reducing the resolution to 720p when playing on the console’s screen. Moving the Switch to OLED, though, likely means even better battery life. An LCD screen backlight is a huge power drain. With OLED, each LED in the screen uses much less overall power than a full sized backlight. Unfortunately, OLED also raises the cost of the unit simply due to its inclusion. Basically, getting a small amount of battery savings from an OLED display means the consumer shelling out $50 more ($349) to replace a Switch that you may have already paid $300 for previously.

Display Technology

Unfortunately, Valve chose not to use OLED to help save battery power on the Steam Deck. Instead, Valve chose to build it with a TFT LCD screen with a backlight. Let’s talk about the screen for a moment. What type of screen does the Steam Deck offer?

• 7-inch touchscreen
• 1280 x 800 (16:10)
• 60 Hz
• IPS (In-Plane Switching)
• Anti-glare etched glass

This is a decent screen type for a handheld. Consider, though, that the Nintendo Switch offers OLED at 1280×720 (720p) at $349, meaning that the screen on the Steam Deck is only 80 pixels taller. However, even were the Steam Deck to ship with an OLED screen, the CPU and GPU are the power hungry hogs in this unit. Yes, the backlight does consume power, but not at the same rate as the CPU and GPU. An OLED screen might buy the unit an additional 15-45 minutes of play time, depending on how the screen is used. Meaning, the fewer pixels lit, the less power it takes to drive an OLED screen.
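To see why the OLED savings are real but modest, here’s a rough back-of-the-envelope sketch in Python. The Steam Deck ships with a 40 Wh battery; the power draw and backlight-saving figures below are illustrative assumptions, not measurements:

```python
# Rough play-time estimate: hours = battery capacity (Wh) / total draw (W).
# The draw figures below are assumptions for illustration, not measurements.

BATTERY_WH = 40.0        # Steam Deck battery capacity (40 Wh)

def play_time_hours(total_draw_w: float) -> float:
    """Hours of play for a given total power draw in watts."""
    return BATTERY_WH / total_draw_w

heavy_game_draw = 20.0   # assumed total draw in a demanding game (~2 h of play)
backlight_saving = 2.0   # assumed watts an OLED panel might save vs. an LCD backlight

lcd_hours = play_time_hours(heavy_game_draw)
oled_hours = play_time_hours(heavy_game_draw - backlight_saving)
extra_minutes = (oled_hours - lcd_hours) * 60

print(f"LCD:  {lcd_hours:.2f} h")
print(f"OLED: {oled_hours:.2f} h")
print(f"Gain: {extra_minutes:.0f} min")
```

With these assumptions the gain is only about 13 minutes; assume a larger saving (4-5 W at full brightness) and it lands closer to 40 minutes, which is where the 15-45 minute estimate above comes from.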

On a handheld, OLED should always be the first consideration when choosing a display, if only because of the power savings over a backlight. On a TV or monitor, OLED’s main benefit is inky blackness in dark areas.

I’ll give it to Valve, though. You’ve got to start your design somewhere, and LCD was the easiest place to start the Steam Deck, I suppose. Let’s hope that in the next iteration, assuming there is one, Valve considers the power savings an OLED screen affords in a handheld design.

Can the Steam Deck succeed?

Unknown, but probably not this version. Did I mention that one huge flaw in this design? It’s still coming. Based on its current specifications, I’d give it a relatively low chance of success. Why? Because a gaming-only piece of hardware swimming in a sea of smartphones doesn’t exactly indicate success. Oh, the unit will sell to various die-hard gamers and those who really want to be out-and-about gaming (ahem). But those die-hard gamers are not going to prop up this market. If that were enough, the PS Vita would have succeeded. Yet, it hasn’t.

The only reason the Nintendo Switch has done as well as it has isn’t because of the Switch itself. It’s because of the franchises that Nintendo owns: Super Mario, Donkey Kong, Pokémon, Animal Crossing, Super Smash, Zelda, Mario Party, Kirby, Metroid and so on. These franchises drive the sales of the device, not the other way around. Nintendo could put out the worst piece of handheld garbage imaginable and people would still flock to it so long as they can play Pokémon.

Unfortunately, the Steam Deck doesn’t have legitimate access to these Nintendo franchises (other than through emulation after-the-fact). The Steam Deck must rely on games written for SteamOS or that are compatible with SteamOS. Even then, not all of these Steam games work on the Steam Deck properly, because they were designed to work with a mouse and keyboard, not a console controller. In essence, putting these games on a Steam Deck is tantamount to shoving a square peg into a round hole. Sometimes you can get it to work. Sometimes you can’t.

Ultimately, what this all means for the Steam Deck is a mixed bag and a mixed gaming experience.

Games Should Just Work

Consoles have taught us that games should simply “just work”. To the gamer, that means the simple act of opening a game on a console launches it and it plays without problems. Though, recently, many game devs have taken this to a whole new and bad level… I’m looking at you, Bethesda.

With the Nintendo Switch, for example, games simply just work. The Nintendo Switch is, if nothing else, one of the best “just works” handheld gaming experiences I’ve had. Not from a battery perspective, but from a reliability perspective. I can’t even recall the last time a Nintendo game crashed outright back to the dashboard on me. Nintendo’s games are always rock solid.

Unfortunately, for the Steam Deck, that experience doesn’t exist. Some games may work well. Some may work halfway. Some may crash part way through. Some games won’t launch. The experience is a mixed bag. This poor level of experience is exactly why the Steam Deck may or may not succeed. That and the Steam Deck’s one big flaw… yes, that info is still coming.

When paying $400 to $700 for a gaming console, you expect games to play. Yet, just because a game is listed on the Steam store doesn’t mean the Steam Deck will run it. That’s a fairly major problem with the Steam Deck. Instead, the Steam Deck folks need to create a tried-and-tested list of Steam Deck games and limit the Steam Deck’s interface to showing only those games as visible, available and playable. Basically, the unit should prevent you from seeing, downloading and attempting to play any game which has not been thoroughly tested as functional on the Steam Deck.

This vetting is important to bring the Steam Deck back to a stable play experience similar to other handhelds, like Nintendo’s. If a game doesn’t work, you can’t see it or download it on the Steam Deck. This “games just work” mentality is an important aspect of a gaming handheld like the Steam Deck. It’s a make-or-break aspect of marketing this handheld. It’s not that difficult to limit which games can be seen and downloaded. There’s absolutely no reason why this handheld shows games that knowingly don’t work or that are knowingly unstable. Yes, such limits will reduce the number of games available, but they will improve the overall play experience of this device for buyers. When spending $400, we’ve come to expect a specific level of sanity and stability. This expectation goes hand-in-hand with the price tag, even for the Steam Deck.
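The vetting idea is straightforward to implement in principle. Here’s a minimal sketch in Python of filtering a store catalog down to verified titles; the game names and the `verified` flag are hypothetical stand-ins, not Valve’s actual catalog data or API:

```python
# Minimal sketch: show only games vetted as working on the handheld.
# The catalog entries and the "verified" flag are hypothetical examples.

catalog = [
    {"title": "Game A", "verified": True},
    {"title": "Game B", "verified": False},  # known to crash: hide it
    {"title": "Game C", "verified": True},
]

def visible_games(catalog):
    """Return only the titles vetted as functional on the device."""
    return [game["title"] for game in catalog if game["verified"]]

print(visible_games(catalog))  # ['Game A', 'Game C']
```

The interface would then render only the returned titles, so an untested or broken game never appears in search or download lists at all.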

Yet, reviewers have stated that their review models have been a completely mixed bag. Some games that work, work well. Some games that were expected to work haven’t worked at all. Some games that worked well one day have crashed the next. This all comes back to the Proton compatibility problems mentioned earlier.

Success or Failure?

At this point, it’s too early to tell, but one big flaw will likely prevent full success. However, let me dive right into my own opinion of this handheld console. My first observation is that the Steam Deck is physically too big. I understand that Valve wanted the Steam Deck to have a big enough screen, but it’s simply too big and bulky. With the added bulk of the controls, it becomes oversized.

Instead, how I would have handled the design of this platform is with a separate wireless joystick. Design the screen portion as a simple tablet display with a hefty fan, a kickstand and no controls on the tablet at all (other than volume, power, reset and a headphone jack). Then, have a separate joystick that can be charged and carried separately. This does a couple of things. A joystick with its own battery means no drain on the console battery during use. Additionally, the battery in the joystick could be hefty enough to serve as a supplemental power source, which the console could tap to extend its battery life while out and about. Having a separate battery means longer play time; carrying a separately charged joystick means extra playtime.

Separate joysticks also feel better in the hand than the attached joysticks on handhelds like this. The wide-spaced joystick layout always feels somewhat awkward to use. This awkwardness can be overcome with time, but I’ve never really gotten past it on the Nintendo Switch. I always prefer using the Pro Controller over the attached Joy-Cons. The mechanisms used to drive the sticks in a small form factor like the Steam Deck’s are squashed down and usually don’t feel right. With a full-sized joystick, these awkward sizing and design issues don’t exist.

With the PS Vita, this spacing issue was less of a problem due to the smaller screen. Though, playing games with the PS Vita’s smashed-flat thumbsticks always felt awkward.

I do get having an attached controller, though. For places like on a train or a bus or in a car, it may be difficult to use two separate devices. Thus, having a controller built-in solves that problem in these few situations. Yet, I don’t know if I’d hamper a handheld device’s size simply to cover a few limited places where having a separate controller won’t easily work. The vast majority of out-and-about play locations would allow for using a controller separately from the LCD screen base, which can be propped up or even hung.

However, these are all relatively minor problems, not likely to cause the device to fail at sales. The major problem with this device is its lack of additional functionality. For example, it’s not a phone. You can’t make calls using the device.

SteamOS also seems to offer limited productivity apps, such as word processors, video editors and so on. It does have a browser, but even that seems limited because of the limited controls. You’d have to pair a keyboard if you want more functionality.

Because SteamOS is based on Linux, there’s limited commercial software available for it. Unlike macOS and Windows, for which the vast majority of software is written, Linux doesn’t have many of these commercial software options. Running Windows apps requires a compatibility layer like Proton, whose problems have been discussed above.

The Steam Deck’s problem goes way deeper than not being a phone, though. It has no cellular data capability at all. I’ve been teasing the Steam Deck’s biggest flaw… so here it is: no always-on networking while on the go. Meaning, there’s no way to play multiplayer games while out and about without carrying additional devices. Which leads to…

Multiplayer Games?

Don’t go into the purchase of a Steam Deck for any purpose other than single player offline gaming. Know that you won’t be making cell phone calls, nor will you have any easy always-on data options available. If you want data while out and about, you’re going to need a WiFi network handy, a MiFi hotspot to carry with you, or your phone acting as a WiFi hotspot. This means you’ll need to carry a second device for playing any games which require multiplayer. Because too many games these days require always-on Internet, the Steam Deck substantially misses the boat here.

Even then, using a phone or MiFi hotspot may limit your speeds enough to prevent multiplayer in some games. If you try to use a Starbucks or Target store WiFi, you may find that gaming is blocked entirely. This is a huge downside to this device for out-and-about multiplayer gaming. Basically, the only games you can play while out and about are single player offline games. While some offline games are still being made, many games now require an Internet connection at all times, regardless of whether you are playing multiplayer. As more and more game devs require always-online status, this will limit the usefulness of this model of the Steam Deck over time.

Instead, Valve needs to rethink the design of the Steam Deck. Valve should include a cellular radio so that this unit can join a 5G network for always-on networking. This is a huge miss for the Steam Deck. Multiplayer gaming is here to stay and, pretty much, so is always-online Internet. As I said, many game devs now require it.

The lack of cellular data on the Steam Deck limits out-and-about play for far too many games. Ultimately, the Steam Deck’s portability only buys you multiplayer gaming in and around your home or at places where you know high speed online gaming is allowed, which isn’t very many places. Even hotels may limit speeds such that some online games won’t function properly. Thus, the lack of always-on Internet actually undermines the portability of the Steam Deck, making it far less portable for gaming than one might expect.

Instead, Valve needs to team up with a large mobile carrier to offer always-on data networking for the Steam Deck that allows for full speed gaming. That would mean including a built-in cellular radio and offering a data plan supporting high-speed 5G always-on multiplayer gaming. Only once this is achieved could this device be considered ‘groundbreaking’. Without always-on networking, the Steam Deck handheld is firmly tied to a past of offline games, fewer and fewer of which are being created today.

Success or Failure Part II

Circling back around… the Steam Deck, while novel and while offering access to the Steam library of games, may not yet be all that it can be. This handheld needs a lot more design consideration to become truly useful in today’s gaming circles.

Some gamers may be willing to shell out $399 to play it, but many won’t. The limitations of this unit far outweigh its usefulness as a modern handheld console. Back when the PS Vita offered two versions, a WiFi-only version and a cellular version, multiplayer gaming was still not always online. Today, because many games require always-on Internet, not having a cellular network available on this gaming “tablet” (yes, it is a tablet) is highly limiting for multiplayer gaming. Multiplayer gaming isn’t going away. If anything, it’s getting bigger each year. Choosing not to include or offer a cellular data version of this tablet is a huge miss.

My guess is that this specific version of the Steam Deck will see limited success. It will sell some, but only to very specific gamers. I seriously doubt that it will be considered “groundbreaking” in any substantial way, particularly after missing the general purpose nature of a tablet combined with an always-on cell data network feature. I felt this way with both the Nintendo Switch and the NVIDIA Shield. Both of those tablets have done okay in their respective markets, particularly Nintendo’s Switch. It’s done exceedingly well, but only because of Nintendo’s major game franchises and because none of those franchises (other than Mario Kart) require heavy networking. The Shield, like this tablet, has only done okay in sales. Not great, not horrible.

If Valve wants to sell this gaming tablet as it is, it needs to strike while the iron is hot and while this tablet is new. Advertise the crap out of it everywhere. Because it’s new, people will be interested to have a look. Many more will buy it because it’s new. Eventually, all of the above limitations will become apparent, but only after people have paid their cash and already purchased it. Personally, this unit has too many limitations for me to consider it.
If this gaming tablet offered both cellular data options AND full Windows gaming compatibility, I might have considered it. It isn’t enough to offer many games from the Steam library. It also needs to offer the fundamental basics for multiplayer gaming. For example, you wouldn’t be able to play Fallout 76 while out and about without access to a high speed MiFi hotspot. Likewise, you won’t be able to play multiplayer games like Fortnite, Overwatch or Destiny using a Steam Deck while riding a train to work. The lack of an always-on data network for multiplayer is a huge miss that ultimately undermines the usability of the Steam Deck and is also its biggest design flaw; a flaw that shouldn’t have been missed by the Valve team at the Steam Deck’s price tag.

Overall, I can’t personally recommend the purchase of the Steam Deck as a portable modern gaming device, strictly because of its lack of thoughtful design around multiplayer gaming while on the go. However, the Steam Deck is probably fine if used as a home console device with a wireless controller while hooked to a widescreen TV and connected to a home high speed WiFi network. It may also be worth it if you intend to use it primarily to play offline single player games, or if you intend to use it as a retro emulator for 80s and 90s games. Still, it’s way overpowered for the likes of Joust, Dig Dug or Defender.

## CEO Question: Should I sell my business to a Venture Capital group?

Posted in botch, business, howto, tips by commorancy on February 5, 2022

This may seem like a question with a simple answer, but there’s lots to consider. The answer also depends on your goals as CEO. If you’re here reading this, then you’re clearly weighing all of your options. Let’s get started.

Selling Anything

A sale is a sale is a sale. Money is money is money. What these cliché statements lack in brilliance is more than made up for in realism.
What these statements ultimately mean is, if the entire goal of selling your business is to make you (personally) some quick money, then it honestly doesn’t matter to whom you sell. Whether you sell your company to your brother, a bank, another corporation or, yes, even a Venture Capital group, the end result is the same: a paycheck. If your end goal is that paycheck and little else matters, then you can end your reading here and move forward with your sale. However, if your goal is to keep your hard-built business, brand and product alive and allow it to move into the future, I urge you to keep reading to find out the real answer.

Selling your Company

Because you’re here reading and you’ve got some level of interest in the answer to the question posed, I assume you’re looking for more than the simple “paycheck” answer. With that assumption in place, let’s keep going.

Companies are complex beasts. Not only does a company have its own product parts that make the company money, it must also have staffing parts, the people who are hired to support those products and maintain new sales. Basically, there are always two primary aspects of any business: product and staff. As a CEO, it’s on you to gauge how important each of these aspects is to you. After all, your staff looks to you for guidance and relies on you for continued employment. There’s also your legacy to consider and how you may want to be remembered by the business (and history): positively, negatively or possibly not at all.

Reputation

Let’s understand that in countries like China, reputation or “face” is the #1 most important aspect of doing business. I don’t mean the business’s reputation. I mean the person’s own reputation is at stake. If the person makes a critical misstep in business, that can prevent future opportunities. In the United States, however, “face” (or personal reputation) is almost insignificant in its importance, especially to CEOs.
Short of being found guilty of criminal acts (i.e., Elizabeth Holmes), there’s very little a CEO can do to tank their career. Indeed, I’ve seen many “disgraced” CEOs found, start and operate many more businesses even after their “disgrace”. It’s even possible Elizabeth Holmes may be able to do this after serving her sentence. As I said, in the United States, someone’s business reputation means very little when being hired. In fact, a hiring business only performs background checks to uncover criminal acts, not to determine whether the person has a track record of success or failure at their previous business ventures.

Why does any of this matter? It matters because no matter what you do as a CEO, the only person you have to look at every day in the mirror is you. If you don’t like what you see, then that’s on you. The rest of the industry won’t care or even know what you’ve done in the past unless you disclose it.

Venture Capitalist Buyouts

At this point, you’re probably asking, what about those Venture Capital buyouts? Are they good deals? That all depends on your point of view. If you’ve put “blood, sweat, tears and sleepless nights” into building your business from literally nothing into something to be proud of, and you still hold any measure of pride in that fact, then a Venture Capital group buyout is probably not what you want. Let’s understand the differences between the types of buyouts.

1. Direct Business Buyouts — These are sales made directly to other businesses like Google, Facebook, Amazon, Apple and the like. These are sales where the buyer sees value in not only maintaining the brand and products under that brand, but building that brand as a sub-product under the bigger buyout company. With these kinds of buyouts, your product will live on under the new company. Additionally, the staff have the option to remain on board and continue to maintain that product for the new company, potentially for many years.
This kind of buyout helps maintain the product and maintains “face” among staff members. This kind of buyout rarely involves resale and, after the acquisition dust settles, is usually seen as a positive change.

2. Venture Capital Buyouts — This kind of buyout is an entirely different beast. Venture Capitalists are in the purchase solely to make money off of their “investment” as a whole. The business itself is the commodity, not the products sold by the company being purchased. Venture Capital buyers are a type of investor who buys a “business commodity” to “fix up” and then “flip” to make their investment return. Thus, Venture Capitalists don’t honestly care about the internals of the products or solutions the company offers, only that those products and solutions become marketing fodder for their sale. Venture Capitalists do weigh the value of the products prior to the purchase, but beyond that and once the purchase completes, the business is treated not as a going concern, but as a commodity to be leaned out, fixed up and made attractive to a buyer. This kind of buyout always involves resale. That means remaining staff must endure acquisition twice in succession, probably within 1-2 years. This kind of buyout is usually viewed by staff (and the industry) as a negative change.

Thus, the difference between these two types of purchases is quite noticeable, particularly to the staff who must endure them.

Undervalued

[Update 2/8/2022] Everything up to this point has only implied what this section explicitly states. I’ve decided to spell this portion out because it may not be obvious, even though I thought it was quite obvious while writing the initial article.

Bottom Line: If a Venture Capital group is considering a purchase of your business, know that what the VC group is offering is only a fraction of what your business is actually worth. They can’t make money if they pay you, the seller, the company’s full value.
Keep in mind that the VCs consider the business a “fixer upper”. That means they will invest “some” money into the business to “dress it up”. How that “dress up” manifests isn’t intended to turn your business around, however. What “dress up” means is investing money to make the business look pretty on paper… or, more specifically, so the books look better. That means they’ll pay an accountant to dress up the numbers, not pay to make your business actually better. Though, they will cut staff and then pull out the whips to make sure everyone sells, sells, sells so the business appears to have better year-over-year profits. When a prospective buyer looks at the books, the buyer will notice improved numbers and, hopefully, be willing to fork over double (or more) what the VCs paid to buy the “company” from you, the original seller.

Even the smartest, brightest, most intelligent CEOs can be taken in by the lure of a Venture Capital group purchase offer. Know, then, that what the VCs have offered you isn’t what your company is actually worth. Ultimately, it also means that you as the seller are being taken for a ride by the VCs. You can dress up your own company and do exactly as the VCs would. Then, find a direct buyer willing to pay double what the VCs offered, which will make you twice as much money AND remove the VCs entirely from the picture as an unnecessary profiting middleman.

Acquisition Woes

Being the acquired company in an acquisition is hard on staff. Lots of questions, few answers and, during the transition, practically silence. It’s a difficult process, and once the deal closes, it only gets worse. Typically, the then-CEO becomes a lesser executive in the new firm. However, most times the CEO changes position not because they want to, but because the buyout contract stipulates a 6-9 month transition period and, obviously, most companies don’t want two CEOs. Though, I have rarely seen transitions that agree to co-CEOs. It’s an odd arrangement.
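To make the “flip” economics above concrete, here’s a tiny worked sketch in Python. All the dollar figures and ratios are hypothetical illustrations, not data from any real deal:

```python
# Hypothetical VC "flip" arithmetic. All figures are illustrative only.

true_value = 10_000_000          # what the business is actually worth
vc_offer = 0.5 * true_value      # VCs offer only a fraction of true value
dress_up_cost = 500_000          # spend to "pretty up" the books
resale_price = 2 * vc_offer      # VCs aim to resell for double what they paid

vc_profit = resale_price - vc_offer - dress_up_cost
seller_left_on_table = resale_price - vc_offer

print(f"Seller receives:      ${vc_offer:,.0f}")
print(f"VC resells for:       ${resale_price:,.0f}")
print(f"VC profit:            ${vc_profit:,.0f}")
print(f"Seller left on table: ${seller_left_on_table:,.0f}")
```

Under these assumed numbers, the seller pockets half the company’s value while the VCs capture nearly all of the rest, which is exactly why dressing up your own books and selling direct can double your take.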
This means that the newly demoted executive is only on board to complete the transition and receive 100% of their contractually agreed buyout payment. In fact, most buyout contracts stipulate that for the CEO to receive their 100% payout, they must not only remain on board in a specific position for a specified period of time, they may also be required to meet certain key performance indicator (KPI) metrics. So long as all goals are met, the contract is considered satisfied and the former CEO receives 100% payment. However, if some of the goals are only partially met, the payment is reduced. Such metrics may include retaining key staff on board for a minimum of 6 months. If any general staff have ever gone through a buyout and received a special bonus or incentive package, that’s the reason. The incentive package is to ensure the CEO’s KPIs are reached so that the contractually defined buyout payment is paid at, or as close to, 100% as possible. This is also why these acquired executives can get both grumpy and testy when they realize their KPIs are in jeopardy.

Trust

Let me pause for just a moment to discuss a key issue: “trust”. While contracts stipulate very specific criteria, such as payment terms, not everything in a buyout is covered under the contract. For example, the acquiring company’s executives can find anything they wish wrong with the KPIs to reduce payment. Contracts usually do not contain intent clauses that hold the acquiring company’s execs accountable if they “make up” flaws that don’t exist. It is ultimately the acquiring executives who decide whether the KPIs have been met, not the incoming CEO. If you trust these people to be morally and ethically sound, then you have nothing to worry about.
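A KPI-gated earn-out like the one described above can be sketched as a simple weighted calculation. The KPI names, weights and scores in this Python sketch are hypothetical, invented for illustration; real buyout contracts spell out their own terms:

```python
# Hypothetical KPI-gated earn-out calculation. All terms are illustrative.

BUYOUT_PAYMENT = 2_000_000   # contractually agreed maximum payout

# Each KPI carries a weight; achievement is scored 0.0 to 1.0.
kpis = {
    "remain_on_board_9_months": (0.40, 1.0),   # (weight, achievement)
    "retain_key_staff_6_months": (0.35, 0.8),  # some key staff left early
    "revenue_target": (0.25, 1.0),
}

def earn_out(payment: float, kpis: dict) -> float:
    """Payout scaled by weighted KPI achievement."""
    achieved = sum(weight * score for weight, score in kpis.values())
    return payment * achieved

print(f"${earn_out(BUYOUT_PAYMENT, kpis):,.0f}")
```

Note that the scores here are the soft part: as discussed under “Trust”, it’s the acquiring executives who assign them, which is exactly where a payout can be shaved.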
However, because Venture Capitalists aren’t always practical in what they do and are driven by the need to see a return on their investment, they could find faults in the KPIs that don’t exist, solely to reduce payments. Basically, you’ll need to be careful when extending trust. You must place full trust in the VCs willing to purchase your company. That means doing your homework on these people to find out where they’ve been, who they’ve worked with and, if possible, getting references. Let’s continue…

Buyouts with Strings

Every buyout has strings attached. No buyer will purchase a company outright for straight-up cash without such strings. Such strings ensure the company remains intact, that key staff remain on board and that the product remains functional. These are handled via stipulated “insurance policy” clauses in the form of KPIs applied to the acquired CEO and executive team. These KPIs, when reached, allow the business seller to receive payment. Were key staff to leave and the product left with no knowledgeable or trained staff to operate it, the purchase would be useless and the product would fail. For a buyer, such insurance policies are always a key portion of buyout contracts. Expect them.

Saving Face

Circling back around to Venture Capital group buyouts, it’s important to understand that the point of such a buyout is for those “investors” to recoup their investment sooner rather than later. The sooner, the better. That means the point of a company purchase by a Venture Capital group is not to take your business in new and bigger directions by dumping loads of money in and growing it. If they dangle that carrot in front of you, know that that’s absolutely not how these deals work. Don’t be deceived by this carrot. It is dangled absolutely to get you to sell, but will almost as certainly not pan out… unless it’s contractually obligated.
On the contrary: they’ve spent loads of money already simply buying the company. They’re not planning on dumping loads more cash into it. Instead, they plan to lean it out, get rid of things that waste money (typically HR, insurance and such first), then move on to cutting what they deem “useless” staff and costly third-party services (ticketing systems, email systems, marketing systems, etc.). As for staff cuts, this means asking managers to identify key staff and jettisoning those who aren’t “key”. This usually comes down in the form of a mandate that only X people out of Y can be kept on board. For example, 10 people may be employed, but only 3 may stay. Who will you pick? That means jettisoning 7 people from the staff roster. You won’t know this aspect going into the deal because they won’t have made you privy to these “plan” details. It likely won’t be in the buyout contract either, unless you requested such a stipulation. It’s guaranteed you’ll find out about this plan within 10–20 days after the deal closes. As I said, the Venture Capitalists don’t look at it as an ongoing business to help flourish; they look at it as a commodity to lean out, pretty up and, hopefully, sell to a high-priced buyer. Venture Capitalists understand that it costs some money to make money, but they’re not looking for a money pit. The purchase price is typically where the money pit ends. You shouldn’t expect an infusion of cash as soon as the Venture Capital firm closes the sale, unless such investment has been stipulated in writing in the purchase contract. Of course, you are free to take some of your own sale money and invest it in the business, but I don’t know why you’d do that since you no longer own the company. What this means, and why this section is labeled “Saving Face”, is that eventually you’re going to have to look into the faces of not only the 7 people you had to fire, but the 3 people left, and explain what’s going on.
These situations are extremely hard on morale and make it exceedingly difficult for the 3 who stayed on to remain positive. Surviving a huge layoff is not a win. In fact, it’s just the opposite. It’s not simply a perception issue, either. Such a huge layoff places an even bigger burden on those who remain. The 3 who remain feel as though they’ve lost the lottery. Now those 3 must work at least 10 times harder to make up the work of the 7 who are no longer there. Honestly, it’s a lose-lose situation for the acquired company. For the venture capitalists, it doesn’t matter. They’ve leaned out the company, the books now “appear” way better and the business also “appears” far less costly to operate in the short term. “Short term” is exactly what the VCs are banking on to sell the company. This makes the “business” look great on paper for a buyer. As I said, the quicker the Venture Capitalists can flip their investment and make their money back, the better. The VCs are more than willing to endure hardship within the acquired company to make it appear better to a buyer. As the saying goes, “It’s no skin off their noses.”

Technologists vs Venture Capitalists

Being a Venture Capitalist and being a Technologist are two entirely separate and nearly diametrically opposed jobs. It’s difficult to be both at the same time. As a technologist-founder-turned-CEO, the point is to build a business from scratch, allow the business revenues to grow the business further, and expand and build a reputation and customer base. Building a business from scratch is a slow road to a return on investment, one that typically takes many, many years. That investment takes years to accrue, but it can make an executive a lot of ongoing money. Just look at Jeff Bezos and Amazon. It can and does work. As a Venture Capitalist group buying companies, the point isn’t to build a business.
It’s to buy already-built businesses as “commodities”, lean them out, make the books look great, then sell them for at least double the money, usually in months, not years. If the VCs dangle a “five year plan” in front of you, claiming they’ll grow the business, please re-read the above. To spell it out, there is no “five year plan”, unless it randomly takes the VCs that long to line up a buyer. That’s more of an accident than a plan. The VCs would prefer to line up a buyer far sooner than 5 years. The “five year plan” rhetoric is just that: rhetoric. It was told to you, the seller, to keep you interested in the buyout, not because it is true. If the “5 year plan” carrot is dangled in front of you, then you need to make the VCs put up or shut up. What this means is, make them write the “5 year plan” investment explicitly into the purchase contract. If they are legitimately interested in growing the acquired company, they should have no problem adding this language to the buyout contract. This will also be your litmus test. I’d be highly surprised to see VCs contractually agree to such “5 year plan” language in a purchase contract. As I said above, these two types of jobs are nearly diametrically opposed. One slowly builds the company as a long-term investment opportunity; the other uses the existing company itself as a commodity to sell quickly for a fast return on investment. As a CEO, this is what you must understand when considering selling your business to a group of Venture Capitalists. If you want your business and brand to continue into the future and have a legacy listed in Wikipedia, then you want to keep your business going and growing. Once you sell your business to VCs, the brand, the product and, eventually, the staff will all disappear. Nothing of what you built will remain. Selling to a Venture Capital group likely ensures that this process happens in less than a year.
Selling to a direct business, the brand name may hang around much longer than a year. It’s really all about whether you care about your legacy and your résumé. You can’t exactly point to producing a successful business when nothing of it remains. Selling the company makes money, yes, but it carries a high chance of losing everything you’ve spent so much time building. Unfortunately, Venture Capital group purchases almost ensure the fastest means to the dissolution of the brand and of the time spent building your business. Still, a paycheck is a paycheck and you can’t argue with that in the end.

## Politics: What happens if Trump runs again?

Posted in advice, botch, politics by commorancy on February 1, 2022

While I’ve pretty much avoided political debate and politics on Randocity, I also recognize that this blog is called Randocity. Political discussion is never off the table. I’ve avoided politics because it’s like playing with Play-Doh: it’s salty, dries out and becomes no fun after just a few minutes. Because democracy actually hangs in the balance with this former President, I’ll grin and bear my way through this article, as this needs to be said. Hopefully, you’re willing to do the same. Let’s explore.

Prophetic

I’m not one to try to be a prophet, but let me don this hat for the next few sections of the article. We all know what Trump did during the 2020 election. Let’s just list his actions leading up to and after the election:

1. Trump began his lead-up to the 2020 election by sowing seeds of mistrust and doubt in the election system, claiming that mail-in ballots are a major source of fraud. Don’t trust me on this? Follow the link. Trump made these claims many times well prior to the election. Trump’s actions were intended to sow distrust in the United States’s election system. For better or worse, it worked. It also set the stage for what came after. This is the start of Trump’s “Big Lie”.

2. Election day arrives and Joe Biden wins.
Yet, according to Trump (and his followers), Biden and the Democrat party somehow managed to “rig” the election (and 50 states’ worth of voting systems) to see Biden win. See #1 for the beginnings of this “Big Lie”.

3. Trump refuses to concede the election on election day, the day after, or even today as I write this article. Instead, he begins a concerted effort to prove that he won and that Biden lost. This effort includes a number of steps: discrediting election officials, election workers, election polling places, election equipment and basically anyone involved in the election system. Make no mistake, this discrediting tactic was systematic and entailed making wild claims about the entirety of the election system… claims which, of course, could not at all be supported or corroborated. Courts all over the country were entangled in many (frivolous) lawsuits set up by Trump and his followers to challenge the election’s integrity, discrediting many people in the process. Trump didn’t stop here. However, Trump lost every single one of those lawsuits, over and over and over. No election fraud was (or has since been) uncovered.

4. On January 2nd, 2021, in a vain attempt at overturning the election results in the state of Georgia, Trump calls Brad Raffensperger, the Secretary of State of Georgia, requesting that Raffensperger “find” 11,780 votes for Trump. Of course, he made no mention of exactly how Raffensperger might go about “finding” those votes. Clearly, this was an attempt at persuading election officials into performing actual voter fraud on behalf of Trump using veiled words. It’s most definitely not the first time Trump has used veiled words to prompt someone to take potentially illegal actions that greatly benefit Trump. Those words can then be claimed by Trump as “innocent”. It also wouldn’t be the last time Trump uses veiled words to do his bidding.

5.
Trump organizes a rally at the Ellipse near Capitol Hill on January 6th, 2021. January 6th was the day the winning candidate was to be confirmed as the Presidential winner through Congress’s counting of the Electoral College votes. This congressional procedure is primarily symbolic in nature, but it also serves a purpose: Congress goes through the motions to ensure the candidate is fully recognized as having been duly elected. Trump’s rally brought throngs of Trump supporters to Washington DC on the day of the Electoral College vote count in the hopes that he could somehow disrupt the process.

6. On the same day as the rally, Trump calls Mike Pence, the then Vice President of the United States, to request that he discredit Electoral Vote counts from key states… states that, if discredited, would aid Trump in remaining in office by overturning the election results. Pence refuses and performs his duties as President of the Senate. Pence, as Vice President, is the person who presides over the Electoral Vote tabulation in front of the House and Senate. In fact, the Vice President doesn’t appear to have the requested power even if he had wanted to do as Trump asked. Again, Trump likely used veiled words with Pence to “get him” to do something untoward that, again, would greatly benefit Trump.

7. Trump, along with a bunch of Trump allies, delivers veiled but inflammatory rhetoric riling up the crowd at the Ellipse, effectively making it appear as if the election was about to be stolen from Trump by the Electoral College. Again, Trump uses flowery, veiled rhetoric to incite the crowd into a frenzy. Trump knew exactly what his rhetoric would have the crowd do, particularly knowing a large extremist Trump-supporting fringe element had also shown up. The vote, at that time, was just several hours away. Trump and Co’s inflammatory but veiled rhetoric led to the riotous results which immediately followed on Capitol Hill.

8. After the walk from the Ellipse, the riots begin in earnest.
As a result, this riot forces the Electoral Vote count proceedings to halt for a period of time while the House and Senate staff take cover in a safe location until the grounds can be brought back under control and the rioters are gone. Until that time, the Electoral Vote count remains suspended. Yes, Trump was instrumental in encouraging this action. Yes, Trump, the then sitting President of the United States, through his veiled rhetoric, intentionally caused the suspension of the prescribed formality of counting and tabulating the Electoral College votes. Keep in mind that this intentional suspension was all for the purpose of overturning the election results… IN TRUMP’S FAVOR.

9. Only several hours later, after the rioters had gone and the DC police had brought the grounds under control, did the vote count resume, with Mike Pence presiding. The vote count was uneventful and, once the voting had concluded, Biden was confirmed as the next President by the Electoral College.

These facts are irrefutable, even though Trump would have you believe it’s all fake. Let’s stop here. I think I’ve included enough pertinent information to predict the outcome should Trump run again. Trump is, if anything, predictable.

Trump Hates Losing

It’s clear: Trump hates losing. In fact, he hates it so much that he began planning his road to the “Big Lie” months before the election to ensure he couldn’t lose, at least in his own mind. If he can drag some people into his “world” of lies, then all the better. To date, Trump has still not conceded the election and still insists that the election (and election system) was (and is) rigged. In fact, Trump is so adamant that he won the election that he filed many, many lawsuits in an attempt to “prove” the election was somehow rigged, sometimes forcing a vote recount.
In some places, like Arizona, the votes were recounted a number of separate times, each recount confirming that Biden had won… even recounts performed by Trump’s own requested staffers. Yet, Trump simply won’t take “No” for an answer. Trump still insists that the election was rigged, is fraudulent and that he is the rightful winner of the 2020 election. No evidence has ever been shown that this claim is, in fact, true. In fact, all evidence points to the 2020 election having been free, fair and without major fraud. Sure, every election has its irregularities, but no more than any other past election. Trump simply can’t look at the irregularities and call foul when the statistics indicate no such fraud exists.

Election Lies and Rigging

Let’s understand the preposterousness of Trump’s lie and understand better who is actually doing the rigging here. In order for Joe Biden and others in the Democrat party to have truly rigged the election in favor of Joe Biden, it would have required an extremely enormous coordinated effort from many, many election officials and election workers, plus the modification of election equipment, all over the United States, in every single state. Such an enormous coordinated effort would have required the synchronized participation of many thousands of people at the polls and many, many hours of planning. If our election system is truly that easily compromised, then there’s no way we can possibly use it for any future elections… ever. Let’s examine what’s more probable, plausible or even possible: Trump’s lie that thousands and thousands of election workers all conspired against Trump to make Biden win? Or that the American people voted correctly, accurately and fairly… and that Biden was duly and fairly elected? Let’s qualify this even more. Whom do you trust in the above scenario?
One single person who is known to lie (i.e., Trump), or thousands of election workers all over the country who voluntarily devote their time and resources to ensuring we have a free and fair election? Again, I ask, “Which situation is more probable?” Just to be sure we’re on the same page, I’ll answer that question. Trump has more than proven he is not trustworthy. Thousands of election workers and election staff cannot ALL be at the level of untrustworthiness that Trump claims in his “Big Lie”. It is, therefore, Trump who is lying. Obviously, Trump’s lie is THE ludicrous and unbelievable claim here. It is far more probable that Trump is lying than that an enormous coordinated effort existed to place Biden into the Presidential seat over the will of the voters. Further, if such a coordinated effort truly existed, why stop at such narrow voting margins and not go for an all-out landslide victory? If the election machines can truly be compromised and modified, then why bother with slim margins? No, Trump’s claims just don’t hold water. Biden didn’t win by any sense of a “landslide”. Oh, no no. The votes were so close that some battleground states weren’t able to call their election results for days after the election. By “close”, this could be as few as several thousand votes. That meant election workers were forced into counting and recounting to ensure the votes were all counted accurately and tabulated properly. With that many recounts all showing Biden won, there is no possible way that Trump’s “Big Lie” is in any way plausible, let alone realistic or true.

Trump and 2024

Looking ahead, let’s really talk about what’s likely to occur should Trump end up on the ballot again. In fact, Trump is already sowing the seeds of distrust even deeper right at this very moment. As long as Trump maintains his “big election lie”, he WILL continue to both expand it and reuse it against the 2024 presidential election should he choose to run again.
Believe me, he will most definitely use it again and will up his game based on what he learned during the 2020 process! He’s that predictable. Prediction noted.

Let me say right now that this man should never be able to run for President again. In fact, Congress performed a major disservice to this country by not finding Trump guilty in his final impeachment hearing. Had they found him guilty, that would have prevented Trump from ever holding the office of President again. This would be a blessing come 2024. The man cannot be President again, or even be allowed to run, or else this country may entirely lose the meaning of the word “Democracy.” Prediction noted.

Trump Wins?

Assuming Trump were to win in 2024, Trump will do everything with his reacquired Presidential power to discredit the election system entirely. It’s nearly guaranteed he will want to ensure that he remains in office indefinitely by attempting to halt everything to do with future elections. That’s just the beginning of his tirade. Trump will see to it that not only can he not be voted out again, but that no one else can be voted in. At that point, Democracy and the Constitution’s power end. Worse, Trump’s “back pocket” GOP will likely follow the leader here and continue to do Trump’s bidding by seeing to it that legislation is passed that allows Trump to remain in power beyond 4 years, possibly even indefinitely. Prediction noted.

Election Lie 2.0

What if Trump loses? The outcome is just as bad, simply because he was allowed to run. If we think Trump’s election lie is bad now, just wait. If Trump is allowed to participate and again loses, not only will Trump parade his next version of the Election Lie, v2.0, he will see to it that both he and the GOP undermine the elections so badly that we can’t even use our election system come 2028.
The courts will also be completely saturated with meritless case after meritless case, all for the sole purpose of attempting to prove that the election was, once again, rigged and stolen from Trump. Trump will most definitely up his lying game to make sure everyone knows he was, again, cheated out of his win… that somehow the election system was (and is) majorly rigged against him, backed by yet more fabricated evidence. This will then lead to even more voter law changes by Trump-supporting states. Prediction noted.

Let’s put this into a bit more perspective with how Trump can leverage the GOP leadership team. The GOP (aka the Republican party) is hanging onto Trump’s coattails for all it’s worth. These elected officials continually and constantly push Trump’s lie, but not verbally. They do so by introducing legislation that is tantamount to a modern version of gerrymandering. Gerrymandering is, technically, redrawing district lines around population centers so as to change the outcome of an election from a Democrat win to a GOP win. It is a form of political scam. When districts are drawn correctly and properly, the vote distributions are fair. When redrawn using gerrymandering, voting is unfairly rigged in favor of one party over the other. Gerrymandering is an old tactic, but there are many new-age tactics that can be used in addition to redrawing districts in unfair ways. States have now taken it upon themselves to craft laws that restrict voting in ways that make it easier for Republicans to vote and much more difficult for Democrats to vote. This is a legal form of gerrymandering. Such laws, in combination with actual district gerrymandering, pretty much ensure a win for the party who set all of these scams up, even if that party is in the minority. This is a form of…

Election Rigging

Who is actually doing the rigging here? The problem I really have with all of Trump’s (and by extension, Trump’s GOP’s) hoo-ha above is that the reverse is actually true.
Everything that Trump has crafted in an attempt to discredit the election results was actually performed with the intent to rig the 2020 election in Trump’s (and, more specifically, the GOP’s) favor. His “Big Lie” wasn’t intended to uncover any truth, as there was no “truth” to actually uncover. Instead, his claims of election fraud by Biden were all intended to allow him to rig the election in Trump’s favor. It’s a reverse ploy. He takes a functional, legitimate, working system and twists it into something that appears broken, corrupt and perverse for the sole purpose of turning it around and using it to his own benefit. It’s a classic victim ploy. It’s also diabolical. Rigging is rigging, whether by Trump or by someone else. Trump’s attempts to use the justice system, the media, his supporters and veiled words are simply attempts to get people to do his bidding, which meant overturning the 2020 election results by illegitimate means and usurping the 2020 election for himself. Can we say, “Rigged by Trump”? Yet, for whatever reason, people fail to see this diabolical scheme that Trump has concocted. It’s a plot that seemingly turns Trump into a victim rather than exposing him as a con man. Trump is, plain and simple, a con man. He intended to deceive his followers into believing fake information and, thus, to take a legitimate, free and fair election system and twist it by rigging it to Trump’s will. Let me ask: who exactly is doing the rigging here? It’s certainly not Biden. However, few Trump supporters want to believe that they’ve been conned by Trump. It’s way easier to accept Trump as a victim than to view themselves as having been duped by Trump. If you accept Trump’s lies, however, you ARE being duped. Accepting a known lie is the very definition of being duped.

Can Trump be Trusted?

A very good question. Let’s examine.
At this point, it should be completely clear that this man cannot be trusted: not with Presidential power, not even with participating in the election system as a candidate. Anyone so intent on treating our election system so recklessly, so callously, with such disdain and with so much malice of intent cannot be trusted. Trust is earned. It’s clear that Trump has failed to earn trust and respect from almost anyone. Yet, followers still flock in his direction. I’m still at a loss as to why. The man has proven that he has no morals, no moral compass, no ethics and no scruples. It’s one thing for a politician to make boastful claims about doing great deeds while in office, then fail to accomplish those goals. It’s entirely another when the President of the United States holds a rally intended to halt the counting of the Electoral Votes, undermining the election system and the basic fundamentals that hold Democracy together.

Lies and Fraud

Trump’s deception has not ended and will not end until he is pushed out of politics entirely. That means the GOP must force Trump out of the party. The GOP cannot continue as a legitimate political party while someone so corrupt and so ill-intentioned remains within it. Someone who was (and still is) willing to sacrifice the entirety of the United States Constitution and the fabric of Democracy itself simply so that he can remain in office is someone we absolutely do not need running this country, let alone being allowed on the ballot. If Trump is placed on the ballot in 2024, Democracy literally hangs in the balance. If we think we’re in a constitutional crisis after the January 6th Capitol attack, that’s simply the first salvo in what will likely bring down the United States if Trump regains the office of President. Prediction noted. Trump in no way cares about the continuance of Democracy; he only cares about one thing… Trump and his ability to gain and retain power, particularly Presidential power.
He also wants to take that power and bastardize it into something that was never intended by the framers of the Constitution. Regardless of whether Trump wins or loses in 2024, the United States faces a serious existential threat, with Trump seeming to want to seriously undermine Democracy (at best) or dismantle it (at worst). No, Trump cannot be allowed to participate in the 2024 election process at all. His corruption will taint the election system, win or lose. The GOP leadership must eject Trump from the party and shun any further interaction with him. That is, unless the GOP (Republican party) wants to become known as the party that brought down United States Democracy, which also likely means the GOP (and all other parties) will cease to exist once Democracy dies. There’s no need for Democratic processes once the President wields all of the power, forever.

## Review: Star Wars: The Rise of Skywalker

Posted in botch, entertainment, movies, reviews, storytelling by commorancy on December 22, 2021

Usually, I write reviews and analysis immediately after I see a film. Well, I have to be honest, I did just see Star Wars: The Rise of Skywalker recently. You might be wondering why that is. Let’s explore.

Obligatory Note: This review contains major *spoilers*. Stop reading now if you haven’t seen this film.

Rewarding Poor Business Decisions

I’m not one to necessarily boycott businesses, but with Star Wars I’ve made an exception. I boycotted seeing the film in the theater and, likewise, boycotted paying money to see it at any rental venue. The reason I saw it last weekend is that a channel finally released an on-demand version included with something I already pay for. To be honest, Disney will get a small amount of money from me watching it via on-demand. It’s called the pay-for-play royalty system. That means that every time someone plays it, Disney derives some amount of money from the playback (probably 10–25¢ at most).
I’m okay with that because that’s about what it’s worth, and I don’t have to pay directly. I refuse to reward companies for producing crap. I simply won’t do it. I know that this paragraph’s sentiment is entirely brutal… but hey, that’s part of the review.

Retroactive Continuity Bonanza

Congratulations! You’ve hit the Retcon Bonanza! One thing about applying retroactive continuity (retcon) to a story line is that it’s fairly obvious. See, the thing is, retcon runs all through Star Wars: The Rise of Skywalker in very blatant and obvious ways. I already knew going into The Rise of Skywalker that it would be chock full of retroactive continuity. So what’s wrong with retconning a story? Let me count the ways:

1. Trite
2. Cliché
3. Poor writing
4. Bad planning
5. Bad storytelling
6. Contrived
7. Unsatisfying

Great storytelling sets up little bits and pieces all along the way, then brings those bits and pieces together at the end in a cohesive way to explain why those seemingly unrelated bits and pieces were included. It’s a standard storytelling practice that shows the writer planned with forethought when crafting their story. It’s also an immensely satisfying storytelling practice. If you’re an astute observer, you can put these foreshadowing pieces together early to conclude what’s about to occur. If storytellers are too obvious with their clues, it makes guessing the ending too easy. For example, many people were able to easily guess the premise of M. Night Shyamalan’s The Sixth Sense, when the ending was all but revealed by four words of dialogue spoken very early in the film. However, this situation also depended heavily on whether you believed the visuals of the film or chose to believe the spoken words. It also means the writers concocted a poorly conceived clue delivery system. It should have been way more subtle than that. In fact, those words shouldn’t have been uttered until much later in the film.
That’s not the case with The Rise of Skywalker, though. With this film, it wasn’t a matter of clumsy clues. It was the fact that no clues were given at all, not in The Force Awakens and not in The Last Jedi, where it makes much more sense to leave these clues behind.

Emperor Palpatine

Palpatine was the primary villain in the first 3 Star Wars films. He was dispatched at the end of Return of the Jedi by being dropped down a power shaft. This villain was firmly dead. However, The Rise of Skywalker latches onto this story context for all that it’s worth. That, and cloning. The thing is, Attack of the Clones wasn’t really referenced… or more specifically, Kamino wasn’t. Mentioning this planet somewhere along the way, such as earlier in The Force Awakens, would have set up the notion of cloning as a possibility somewhere in the story. For example, if Snoke had been found to be a clone, based on DNA testing or something similar, after he’d been chopped in half in The Last Jedi, that would have explained what was said by Palpatine in The Rise of Skywalker. Yet, no such reference exists in either of the first two films. As another example, even the simple act of dropping Palpatine’s name in any small way, such as mentioning the similarity to Snoke’s villainy, would have helped. Even simple name-dropping can open whole doors later, and it’s those kinds of clues that avoid retroactive continuity problems. Name-dropping Palpatine or Kamino or the Cloners in any capacity along the way in The Force Awakens or The Last Jedi would have been enough to prove the writers were thinking about the closure of the story at its beginning. Instead, the writers and filmmakers were so absorbed in their own self-indulgence that they couldn’t even consider such prior setup in the writing of the first two installments. To be honest, this is really the fault of J.J. Abrams. He had the task of opening the storyline in The Force Awakens, but failed to give any real hint at what was to come.
Hints and clues are what make great stories. It’s called foreshadowing, and it’s an incredibly impressive storytelling tactic when it’s done correctly. When it’s not done at all, then it’s called retroactive continuity… building a new story by making up establishing facts on the spot rather than relying on clues laid down earlier. Sure, the original films and the prequels had information that could be leveraged, but not in a way that would be seen as clues for Disney’s trilogy. You don’t just pull crap out of the air and hope people somehow magically get the reference. Proper build-up is essential to a story. Without it, a story fails.

Palpatine Again!?

When Palpatine is, again, introduced as “the man behind the curtain” in The Rise of Skywalker, it’s groan time… ugh! I’m thinking, “Not again.” Can’t these guys think up anything original? At least there wasn’t yet a third Death Star… at least we’ve made some progress, I guess. Not much, though. Bringing Palpatine back to life without so much as an explanation is such a bad storytelling idea that it makes the rest of the story feel like garbage. You either believe Palpatine is back or you don’t. The worst thing about Palpatine is that he stands there like a statue and simply taunts people with words. Granted, in Return of the Jedi, he was also fairly catatonic, though he did get up and walk around a little. In this film, he’s a literal statue standing in one spot the entire time, spouting platitudes. It’s his same old tired, self-assured, over-confident, self-righteous Sith rhetoric about eliminating the Jedi. He died for those same clichéd thoughts in Return of the Jedi. Has he learned nothing? You’d think that after his first death at the hands of Vader, he’d be a little more cautious and wiser the second time around. Yet, *crickets*. The storytellers don’t give Palpatine an ounce of credit as intelligent or thoughtful. The man is made out to be as dumb as a brick.
Seriously, after Palpatine’s trip down the power conduit, you’d think he’d rethink his over-confident, self-assured, self-righteous threatening demeanor and, instead, try something new. Nope.

Snoke

You might also want to point to Snoke as an example of trying something new, but then you’d be wrong, because Snoke was summarily chopped in half midway through The Last Jedi. That was that for Snoke. It’s one thing to use Snoke as a puppet, but it’s clear that puppet failed utterly, to its own demise.

Stupid Villains!

Just to make it perfectly clear, none of the above was mentioned anywhere in The Last Jedi. Again, no such clues were left behind for bringing it all together in the end. Nowhere was it mentioned that Snoke was a puppet of Palpatine, though a clue should have been left somewhere in TLJ, if not by Snoke himself. For example, a quick scene where we see Snoke nodding to a shadowy figure in a cloak which fades out, followed by Snoke going directly into communication with Ben. That would have been something. Of course, given Star Wars revisionist tendencies, Disney may go back into both The Force Awakens and The Last Jedi and retrofit dialog, extra scenes and whatnot to shoehorn these clues in… which is an even worse practice than the contrived storytelling in The Rise of Skywalker. Revisionism has no place in movies, let alone Star Wars films. To be honest, what George Lucas did with his revisionism was add better FX and reintroduce scenes that he wanted, but those changes didn’t fundamentally alter the storyline and were not introduced to ‘fix’ a story problem for a later film. No, George’s stories were solid from the beginning, so they didn’t need ‘fixing’.

Disney Hires Crap Writers

Part of the problem here is that Disney doesn’t have a clue how to run a live action film business, nor exactly what a good live action script is. Disney comes from an animation background.
The stories in Disney’s animated films have been simplistic and intended for children. For some reason, Disney thought they could insinuate themselves into the live action movie business and have those films turn out great. Well, it’s clear that’s not true. Nowhere is that more apparent than in how the stories for the Disney trilogy were handled. The first mistake was hiring J.J. Abrams to write these films. Instead, Disney should have hired actual film writers with experience in writing. Before that, they should have hired actual story writers to come up with the overall story arc encompassing the three films prior to embarking on filming them. This would have meant that going into each film there was an outline of the necessary elements needed to craft each film’s story in a way that supports the rest. The director might take some liberties in some areas around portions of the storytelling, but the required story elements must be included for the entire story arc to work. This would have also meant that all three films were essentially written up-front. Instead, Disney apparently allowed the writers of each film to craft their own story in pre-production for each film. Basically, the films were made up at the time of each production. This isn’t a recipe for success. In fact, it’s a recipe for failure. It’s exactly why J.J. Abrams’ Alias and Lost series failed to ultimately work. The stories were “made up” as they went along rather than attempting to at least write an overarching story outline that encompasses the entire season. Each story doesn’t need to be written in advance, but certain specific points must be included in the season to reach the conclusion properly. Without such inserted clues, the conclusion absolutely cannot be satisfying… and so it goes with Lost. Lost‘s conclusion was such an awful mess that not only did it make no sense, what few pieces did try to make sense were awful. It was like watching a train wreck unfold.
So then, Disney hires this two-bit hack to pen Star Wars? Here’s a guy who can’t even write two TV series properly, and yet Disney hires him for Star Wars? Yeah, I could see this wasn’t going to end well… and so it goes.

Endings

Speaking of things not ending well, let’s continue with The Rise of Skywalker and its ending. Disney would have been smarter to leave a thread open that could be followed up with a new trilogy. Instead, Disney, and more specifically J.J. Abrams and Kathleen Kennedy, were so focused on damage control that they forgot to add intentional cliffhangers leading into a new series of films. However, I believe at the time the film was being created, damage control was the primary means of closure for The Rise of Skywalker storyline. With that said, the ending is simultaneously satisfying and disappointing. On the surface, it’s a satisfying conclusion to this series of films. Diving deeper, the entire story is incredibly unsatisfying, thus leaving the conclusion disenchanting. The whole shoehorn-this-story-into-a-Palpatine-issue is deeply distasteful. Not only does it ruin the thought that Palpatine is, in fact, dead, it does so in a way that doesn’t make a whole lot of sense and simultaneously leaves a gaping hole open as wide as the Grand Canyon. The original Palpatine was shrewd, cunning and incredibly intelligent. Yet, this film treats Palpatine as one of the dumbest villains to have ever graced the Star Wars universe. Granted, the Palpatine in The Rise of Skywalker is supposed to be a clone. I suppose one could argue that the cloning process dumbs down its clones unintentionally (or even intentionally). The Kaminoan cloners might have seeded their clones so that they would never become aggressive towards Kamino, thus dumbing them down in other ways. It would make sense for the Kaminoans to protect Kamino from its clones turning on their masters or on the world. The same argument could be made of all of the Clone Troopers.
Yet, this fact has never been established in canon outright. Palpatine, the original, would have also known and understood this dumbing-down limitation of Kamino clones and probably would have attempted to mitigate it long before it became a problem. Yet, it seems that didn’t happen, based on clone Palpatine’s overall dumb, self-righteous behavior. This cloned Palpatine is one of the least intelligent villains I’ve yet seen in a Star Wars film, save that perhaps Snoke was likely also a clone, considering that Palpatine claims to have “made Snoke” (implying a clone). Whether Palpatine used Kamino to produce the clones or bought and established his own cloning technology separately isn’t really stated. Watching this film, I assumed that all of the cloning occurred on Kamino… or at least, that Kamino cloning technology was utilized by Palpatine even if the cloning didn’t happen directly on Kamino. I know that Palpatine suggested bringing the dead back to life in the prequel Revenge of the Sith (which was lightly referenced in The Rise of Skywalker). Don’t take my word for it. Here’s the conversation from Palpatine himself. This platitude by Palpatine may have been a veiled reference to cloning or to an unseen force power or both; by the time of this scene, the world of Kamino and its technology had been established by the prequel Attack of the Clones. Of course, this information wasn’t definitively stated in The Rise of Skywalker or even in Attack of the Clones or Revenge of the Sith. The information in The Rise of Skywalker was all left to the audience to put 2 and 2 together and theorize that Palpatine was talking about cloning and/or the conversation above. If you hadn’t watched the prequels before seeing The Rise of Skywalker, you wouldn’t be able to correlate this information, leaving the means by which Palpatine reappears a mystery that doesn’t make a whole lot of sense and isn’t resolved in the narrative.
What this all means for the ending is a somewhat convoluted, complex, yet simpleminded ending. In fact, the ending was so simpleminded and single-tracked, it was easy to predict the outcome.

Is It Over?

This is a lingering question that remains. If there’s one clone, there can be many. Did Rey fight the last and final clone? We don’t know. This is the gaping hole the size of the Grand Canyon. If it took Rey to the point of death to kill one single clone, then if she fights any more, she probably won’t succeed in killing the others. After all, she won’t have Ben there to give her his remaining life force and bring her back to life again. Because of the clones, the ending is entirely unsatisfying. Once you open this story door to clones (plural), it’s a never-ending cycle. You simply can’t win against potentially thousands of Palpatine clones strewn throughout the Star Wars galaxy. This is why the ending is simultaneously satisfying at face value and completely unsatisfying when you dig deeper.

Cheap Cop Out

Ultimately, the two main problems in this story stem from relying on the concept of cloning combined with using a duplicate (cloned) Palpatine to carry this story. Out of thousands of better possible ideas, JJ chose these two weakest and most trite ideas over any others? This simply shows just how inept a writer JJ actually is. Though, the “Mary Sue” idea was almost completely squashed by introducing the “Palpatine’s Granddaughter” idea. My problem with the ending of this story is, why did we skip a generation? In fact, the whole “Palpatine having children” storyline could have been a far better story idea on which to base this final set of films than what’s included in this mess of a trilogy. Definitely, the “Palpatine having children” story idea is a far, far superior story for establishing the carrying forward of the Sith vs Jedi conflict over the mess-of-a-story shown in this bankrupt trilogy.
This is particularly true if you truly want to hand off this conflict to a new generation of Sith and Jedi. Unfortunately, JJ has already given away the farm. Following the “Palpatine had children” idea, when did Palpatine procreate and with whom? Why wasn’t it THIS story that begins these final 3 films? If, as a storyteller, you’re going to tease us that Palpatine had children, then we need to know more about this situation. Who was his “wife”? How many children did Palpatine have? Was Rey an only child? Have these children chosen to be dark or light? None of these questions are answered. They’re left open. JJ’s story elements weren’t added to tell us that Palpatine had children. They were useless contrivances included simply to carry The Rise of Skywalker to conclusion. These contrivances are the very definition of retroactive continuity: “Let’s add something random about the past that lets the future proceed in a specific way.” That’s entirely retroactive contrivance. If these past historical events had been introduced early in The Force Awakens or The Last Jedi, I’d not be critical of these “convenient” story elements included in The Rise of Skywalker. It would have meant that the writers were thinking ahead to the future film. It would also have meant that the story arc was properly planned. Without these elements in any prior films, they’re included for mere storytelling convenience. It’s also the very definition of a “hack writer”.

Palpatine’s Children

Before we dive deep into the “hack writer” concept, let’s explore what we could have had in this final trilogy. Oh, and boy is it a doozy! It’s actually hard to believe that JJ chose not to run with this story idea, which would have made the final trilogy not only completely satisfying, but would have opened the door to so many more films and TV shows.
Disney could have made twice the amount of money off of this (and it would still be going) and the Star Wars brand would be stronger than ever, instead of petering out after The Last Jedi like dropping a gallon of water on a lit candle. If The Force Awakens had opened, instead, using one of Palpatine’s children as a primary villain, with that child obviously dark side leaning, the whole tone and concept of this entire trilogy would have completely changed. Talk about introducing a “new generation”; well, this was the way to do it! It would have also changed the entire story concept over these three films. Instead of a Mary Sue story unfolding around Rey, we could have focused on the brashness, harshness and destructiveness of a Palpatine child and on a growing Jedi order combating that new Palpatine threat. Except, this time it’s not Palpatine. It’s the child of Palpatine, and they have a completely new idea on how to squash the Jedi order, not using Palpatine’s old, tired rhetoric… which didn’t work anyway. If Palpatine had had more than one child (children we of course knew nothing about), another child could emerge as a conflict mechanism, both against the Jedi and also against the Sith. This would allow the story to pit the Palpatine children against one another, but at the same time against the Jedi. See, so much potential lost! This could have turned Star Wars a bit darker, more modern, more updated, yet still fall within Star Wars ideas and visuals. Instead of the crappy Disney trilogy that we got, which was a bunch of cotton candy fluff, we could have dived deep into a darker, more sinister plot involving Palpatine’s children. Snoke could have still been involved as a puppet of this Palpatine child, and we wouldn’t even have to bring back Palpatine as a clone to accomplish it. We’d simply need this dark side leaning child to “carry the torch”.
With so many ideas and so many concepts swirling, it’s amazing JJ didn’t realize that THIS is where the story should have headed… not with his carnival of cotton candy and candied apples. JJ’s trilogy was, in fact, sweet enough to give you diabetes. No, that’s not where Star Wars needed to go. Star Wars needed to begin with a darker, more sinister villain to launch the story, then slowly emerge (over 3 films) from that darkness with a huge win at the end… a win that perhaps doesn’t even stem from the Jedi. Such a win could then lead into not only more films, but also spin off into a whole bunch of TV series. Disney missed the boat here in an immense way. So much potential completely wasted and lost.

Hack Writer

A hack writer is a pejorative term for a writer who is paid to write low-quality, rushed articles or books “to order”, often with a short deadline. That’s exactly how J.J. Abrams comes to The Rise of Skywalker. He was most definitely paid to write a rushed, low-quality script, and the film most definitely reveals that. It also reveals that JJ doesn’t have the creative chops to come up with solid, great story ideas and concepts, such as using a Palpatine child to not only bring Star Wars to a brand new generation of children, but also breed a whole new generation of Sith and Jedi alike. Instead, we got…

High Gloss Cotton Candy

One of the things that most disturbs me about this film is its high gloss nature. This gloss defines the term putting “lipstick on a pig“. The phrase means taking a low quality, bad product and dressing it up to disguise its fundamental failings. The “gloss” here is the film’s far too quick pacing and the overuse of CG effects, right from the opening. Yes, it’s a pretty film. It also throws random, rapid-paced information at the viewer without giving them enough time to react to that information. If the viewer attempts to think anything through, they’ll miss the next scene of the film. This is intentional.
You can’t really go into deep thought and stay focused on the film in front of you. You can only go into deep thought after the film is over, at which point you’ll already be initially “satisfied” (or at least sated) by the film’s intended conclusion. However, thinking the film through, you’ll understand all of the points I’ve made above. That’s the whole point of the “glossy coating” and, thus, of putting “lipstick on a pig”. It’s not that the story is the worst story I’ve ever seen in a film, but it’s definitely not a great story by any stretch. It was cobbled together from elements not established in this trilogy. Instead, the story had to fall back on story elements established in the prequels and the original films, but which hadn’t been discussed in this trilogy until the final film. Yes, that’s the very definition of a “cop out”. Instead, this trilogy should have relied on itself and its own stories to carry its way through to conclusion. It didn’t need a cloned Palpatine to carry this story. That’s perfectly clear. Here’s one of the primary problems I have with this whole cloned Palpatine issue. How and when did Palpatine become cloned? Is someone else pulling the strings? Was that cloned Palpatine merely a test for Rey? Was it merely the first in a series of tests? Was that clone the only one? So many questions left unanswered. So many questions that needed to be answered for a proper conclusion. Yet, no. These are not “cliffhanger” questions. These are fundamental questions which should have been answered over the course of the Disney trilogy, yet were not. To really underscore the cop out problem, we must examine…

The Last Jedi

The closing shot of the kid in The Last Jedi shows a force-capable child. Yet, The Rise of Skywalker doesn’t even attempt to close that narrative. The ring that Finn and Rose bestow onto that kid meant nothing? The whole nearly 30-minute romp through the casino was pointless?
Indeed, it means the whole Rose storyline was more-or-less pointless, considering they set up an almost blatant new romantic interest in The Rise of Skywalker in Naomi Ackie’s Jannah character. Yet, neither romantic storyline, with Rose or with Jannah, materializes in The Rise of Skywalker. Rose has a few scenes in the Leia camp, but it’s all for naught and is a fairly useless means of closure for this character. Set her up in The Last Jedi to be a romantic interest, then ignore Rose as mere wallpaper in The Rise of Skywalker. The interest around Rose was molded into yet another new character in Jannah. Yes, The Rise of Skywalker trounces all over The Last Jedi in an attempt to right its wrongs, for better or worse. More specifically, The Rise of Skywalker simply chooses to ignore those things it deems unimportant from the previous film. Examples: the force-capable kid, the casino romp, Rose and even the ring. Whatever The Rise of Skywalker’s writers deem unimportant is left without acknowledgement or conclusion. Indeed, The Rise of Skywalker plays too much to fan service and not enough at closing elements already opened in prior films. It wouldn’t have taken much to include a small scene showing that force-capable kid wearing the ring somewhere in The Rise of Skywalker. It doesn’t need to be a long or even important scene; it simply needs to be in there. Maybe a scene between Rey and that kid moving rocks around briefly, as though she or Leia is training him. We don’t need to know more about the kid other than that he’s still around and he may or may not become important later, just not in this film.

Change of Clothing

One of the most obvious and out of place elements is that Rey wears the same outfit and hairstyle throughout much of all three films.
At least Leia was given proper costume changes along the way, including her iconic opening outfit with buns, her braided ponytail ceremonial outfit at the end of Episode 4, her Hoth ice outfit, her Bespin outfit, her ever-important Jabba bikini and so on. With each new environment, she changes clothing. No, it’s not explained how Leia does this, but she does. Rey, on the other hand, almost never changes clothes. She effectively has two outfits: the scavenger outfit she wore in The Force Awakens and again in The Rise of Skywalker, and the darker outfit and new hairstyle the costumers gave her in The Last Jedi while on the Luke Skywalker banishment planet. That was a short stint, though; once she leaves, she’s back in yet another version of her scavenger outfit. For battling, I guess that outfit is fine, but you’d think that Leia could have issued her more appropriate resistance clothing along the way. For scavenging on a hot planet, what she was originally wearing was fine. As a resistance member, she should have changed into something more befitting her new role. Additionally, being a budding Jedi, she should have at least donned more Jedi-befitting clothing. Nope, she was placed right back into her scavenger outfit all throughout The Rise of Skywalker, even at the end of the film. This is a small point, but it’s relevant to the development of a character. The costumes indicate the growth of a character as much as her actions and words.

Story

After all of this lead up, let’s finally talk about the film’s story as a whole. The story itself is both simplistic and meh. It concludes in a way that leaves a bad taste for Star Wars and for Disney in general. Because hack writers were chosen, we got not a cohesive whole, but a chopped-up mess of a hack-job over three films which have almost no relation to one another other than characters. It ends up a truly sad affair, and it concludes as one.
However, Disney also felt obligated to conclude this problem child. They did so only because they had started down this road and felt the need to finish it. Personally, I think Disney should have shelved the entire project after The Last Jedi and called it done. The whole thing was irreparably damaged by that point, at least as a creative project. For Disney, though, the dollar$igns lingered too much in front of someone’s eyes to give it up.

Let’s talk about the film itself. When we begin The Rise of Skywalker, we’re greeted by the familiar text crawl followed by the familiar and obligatory space pan shot. Before we step into the visuals, let’s talk about this text crawl. The text crawl mentions Palpatine by name and that he’s back, never mind those pesky details of exactly how. Basically, the story opens with retroactive continuity before an actor ever graces the silver screen. We already know the lay of the land before one single actual live action shot. From that crawl alone, we now know exactly what we’re in for in The Rise of Skywalker, but we don’t yet know how it will unfold. Though, if you give it two minutes of thought, you can understand where the story is heading; we simply need to see it visually.

How it actually ends up playing out is a series of scenes, the Millennium Falcon, a cameo by a now-aging Lando Calrissian and a bunch of throwbacks and nods to the original Star Wars, simply to keep the visual interest high. In other words, visually the film relies almost solely on reminiscing over the original three films, attempting to ignore the failings of The Last Jedi specifically while also glossing over some of The Force Awakens. The Rise of Skywalker attempts to be the one and only film that matters in this Disney trilogy. In fact, it tries way too hard at this and ultimately feels hollow and disappointing.

It’s a film that feels whole and solid while you watch it, but like a chocolate Easter bunny, once you bite down you realize it’s hollow; the film ultimately lacks any real reason to exist. This is why George Lucas decided not to create films 7, 8 and 9 himself. He realized that once the 6 films were complete, there was nothing left to say.

The Rise of Skywalker proves this fact out in amazing abundance. At the end, we’re left asking not how great Rey is, but what the hell just happened? More importantly, what was the point? How exactly does Rey’s existence perpetuate the Star Wars narrative in a positive or useful way? Rey is clearly not a Skywalker. She’s a Palpatine. She’ll always be a Palpatine. She’ll always have the potential for falling into the dark side. Yet, she takes the Skywalker name because, plot.

Was it necessary or important for Rey to be a Skywalker? *shrug* I’ve no idea. There’s nothing that comes after to explain the need for this inexplicable naming. Yet, that’s exactly how the story ends. She’s now Rey Skywalker in name only. She’ll always be Rey Palpatine, or whatever her father’s family surname was. We don’t even know if it was her father or her mother who was the child of Emperor Palpatine. For all we know, Palpatine didn’t even have a child. Instead, he may have made a clone of himself who ultimately broke away, got married and had a child. We just don’t have enough backstory to know how this whole Rey situation came about.

We came too late in The Force Awakens to get this backstory. It was also never explained throughout the Disney trilogy. We’re simply left in the dark. Even at the very end of The Rise of Skywalker, we’re still left in the dark about how Rey came to be the granddaughter of Palpatine. Bad storytelling. If you’re planning on including retroactive continuity, you could at least fill in these rather important details so we can better understand how and from where Rey came… or, more specifically, how Emperor Palpatine managed to have kids. We don’t even know if Palpatine’s kids were from the “original” Palpatine or if one of Palpatine’s clones had kids. Yes, I said clones… as in the plural form, meaning “more than one”.

Ben and Rey

One thing that The Rise of Skywalker postulates is that Rey and Ben are a force dyad. The only way that’s possible is if Ben and Rey are twins, or at least from the same parent. That implies that Leia may have given birth to twins (like her mother, who also had the twins Luke and Leia), and that somehow Rey was kidnapped by a Palpatine clone who assumed her to be his own child, birthed by, well, whomever was on the ship with Rey when she was left on Jakku.

Again, this was not explained in the film, but a force dyad doesn’t make much sense unless they’re siblings or, in some way related… which makes that kiss at the end all the more “ewww”. Again, not explained.

Never Ending Ending

Here’s the ultimate problem that exists and persists after the closure of The Rise of Skywalker, and it’s a big one! What we’re left with is an ending that never ends. What exactly do I mean? Because Palpatine is a clone, there were likely many Palpatine clones. If Palpatine were to make one clone, he would make several. Why? To ensure the survival of at least one of the clones, there must be many.

The question remains, how many and where are they? We don’t know. Clearly, Rey seems to have fought a particularly weak clone. Perhaps they’re all weak. Because they’re clones, they might not have inherited all of the force strength of the original. The fact that Rey couldn’t defeat this Palpatine clone all by herself implies that she herself was most likely born of a clone and not of the original Palpatine. While that may or may not be a problem, the bigger problem is that the ending of The Rise of Skywalker has no end.

As Rey heads off into the galaxy for future travels, she’ll inevitably encounter more Palpatine clones and she’ll be forced to dispatch each and every one. In fact, it’s highly likely she’ll have to dispatch many Palpatine clones, because like the original Palpatine, even the clones will have the drive to survive, and those clones will also hire cloners to clone the clone, making yet more Palpatines. Like a virus, this situation perpetuates and never ends. Rey will never run out of an army of Palpatines to defeat.

This is the problem you bring into a story when forcing in a concept like clones as the element of story closure. Like waking up from a dream sequence as an ending, using clones to close the final story element leaves the story’s ending unsatisfying. There’s nothing at all satisfying about the possibility of hundreds or thousands of Palpatines all infesting the universe waiting to attack the next Jedi who happens along.

See, I didn’t even have to resort to holding up the unmitigated pretentious disaster of a story that was J.J.’s Star Trek to illustrate just how much of a hack writer J.J. Abrams really is. Oops, I guess I just did. Yes indeed, J.J. seems to have the uncanny ability to ruin just about any franchise he touches.

Graphics: 5 out of 5
Story: 1 out of 5
Pacing: 2 out of 5
Overall: 2 out of 5 (wait until it’s available to watch without paying)


## Smart Bulb Rant: Avoid Bluetooth + Alexa Bulbs

Posted in Amazon, botch, business by commorancy on November 28, 2021

Having worked with a number of smart Internet of Things devices (IoT), mostly light bulbs and hubs, I’ve come to learn what works and what doesn’t. Let’s explore.

Smart Hubs

Overall, the smartest value for your money is the purchase of a smart hub with light bulbs, such as the Philips Hue system. Why? These smart hubs use a mesh network that is separate from your WiFi network. These systems also have their own custom iOS apps that allow for extreme customization of colors, scenes and grouping. These hub-based devices also don’t require or consume IP addresses, unlike WiFi bulbs. However, there are drawbacks to using a hub-based system.

The biggest drawback is that smart hubs require an active Internet connection be available 24×7. When the Internet goes down, the smart devices, including light bulbs, don’t work well or at all. This is where WiFi bulbs typically shine, though not always. Controlling WiFi bulbs almost always works even with the Internet down, when the mobile app is written properly. However, some mobile apps must check in with the mothership before enabling remote control features, which means the lack of Internet connectivity makes it difficult to control your devices other than manually. The good news is that most of these light bulbs still work correctly using the light switch on the lamp. This means you can still turn lamps on and off the “old fashioned way” … assuming you have electric power, of course.

The second drawback is that these systems are subject to interference by certain types of wireless systems such as some Bluetooth devices, wireless routers and cordless phone systems.

However, utilizing voice control, such as with Google Home, Alexa or Apple’s Siri, requires the Internet. The same goes for most smart apps. Though, I have found that Hue’s iOS or Android app can sometimes control lighting even with the Internet offline. Without the Internet, however, the hub may perform poorly, work intermittently or fail to take commands until the Internet is restored.
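Why can the Hue app sometimes control lighting with the Internet offline? The Hue bridge exposes a local REST API on your LAN, so a properly written app can talk straight to the bridge. Here’s a minimal sketch in Python using only the standard library; the bridge address `192.168.1.2` and the `my-app-key` username are placeholders, not values from this article — you’d obtain your own from your bridge.

```python
import json

def hue_light_command(bridge_ip: str, username: str, light_id: int,
                      on: bool, brightness: int = 254):
    """Build the local REST call that sets a Hue bulb's state.

    The Hue bridge accepts a PUT to /api/<username>/lights/<id>/state
    with a small JSON body. Only LAN access is needed, not the Internet.
    """
    url = f"http://{bridge_ip}/api/{username}/lights/{light_id}/state"
    body = json.dumps({"on": on, "bri": brightness})
    return url, body

# Example with placeholder credentials:
url, body = hue_light_command("192.168.1.2", "my-app-key", 1, True)
# To actually send it, PUT `body` to `url`, e.g. with
# urllib.request.Request(url, data=body.encode(), method="PUT").
```

This is also why the light switch fallback works: the bulb itself powers on to a default state with no network involvement at all.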

While the Internet is online and functional, however, control of lighting and devices is easy and seamless. Not always so with…

Bluetooth and Alexa

Recently, some IoT LED bulb manufacturers have begun designing and selling smart LED light bulbs based strictly on Bluetooth combined with Alexa. These Bluetooth based lights also don’t require or consume IP addresses, unlike WiFi bulbs. After all, Echo devices do support Bluetooth to allow for connecting to and controlling remote Bluetooth devices. The problem is, the Echo’s Bluetooth can be spotty at best. The main reason Bluetooth is spotty is that it uses the same frequency as many home cordless phone systems (as well as WiFi routers and other Bluetooth devices). Not cell phones, mind you, but those old 2.4GHz cordless handsets that sit in a charging base. Because these phone systems burst data periodically to keep the remote handsets up-to-date, these bursts can interfere with Bluetooth devices. Note that this can be a major problem if you live in a condo or apartment where adjacent neighbors could have such cordless phone systems or routers. Unfortunately, cordless phones aren’t the only reason these bulbs can end up being problematic.

Likewise, if you live in a large house with a number of different Echo devices on multiple floors (and you also have these cordless phone handsets), the bulb randomly chooses an Echo device to connect to as its Bluetooth ‘hub’. Whenever a command is issued from any Echo to control that light bulb, the devices must contact this elected Echo ‘hub’ device to perform the action. This could mean that the light bulb has hubbed itself to the farthest device with the worst connection. I’ve seen these bulbs connect not to the closest Echo device, but to the farthest. As an example, I have a small Echo Dot in the basement, and this is the unit that tends to be elected by these bulbs when upstairs. This unit is also likely to have the spottiest connection and the worst Bluetooth reception because it’s in the basement. There’s no way to ensure that one of these bulbs chooses the best and closest device without first turning off every Echo device except the one you want it connected to… a major hassle.
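To see why random election hurts, here’s a toy simulation in Python. The device names and signal-strength numbers are invented for illustration (RSSI in dBm, closer to 0 is better); the point is just the contrast between picking any Echo at random and picking the one with the strongest signal.

```python
import random

# Hypothetical Echo devices with made-up Bluetooth signal strengths.
# The basement Dot has by far the worst link to an upstairs bulb.
echoes = {
    "living-room-echo": -45,
    "bedroom-echo": -60,
    "basement-dot": -85,
}

def elect_randomly(devices):
    """What these Bluetooth bulbs appear to do: pick any Echo as the 'hub'."""
    return random.choice(list(devices))

def elect_strongest(devices):
    """What a smarter bulb would do: pick the Echo with the best signal."""
    return max(devices, key=devices.get)

best = elect_strongest(echoes)   # always the strongest link
lucky = elect_randomly(echoes)   # one chance in three of the basement Dot
```

With random election, roughly a third of bulbs in this toy setup end up hubbed to the basement Dot, which matches the ‘Device Not Responding’ behavior described above.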

In the end, because the bulb chooses randomly and poorly, you’ll frequently see ‘Device Malfunction’ or ‘Device Not Responding’ inside the Alexa app. If you click the gear icon with the device selected, you can see which Echo device the bulb has chosen. Unfortunately, while you can see the elected device, you cannot change it. These messages mean that the Alexa device is having trouble contacting the remote device, likely because of interference from something else using the same frequency (i.e., cordless handsets or routers).

This makes any Bluetooth-only LED light bulb an exceedingly poor choice for Alexa remote control. Amazon could make this better by letting the user change the hub to a closer unit, but as of now, the Alexa app doesn’t allow it.

Hub-based Systems

Why don’t hub-based systems suffer from this problem? Hub-based systems set up and use a mesh network, which means the devices can all talk to one another. Instead of each device relying on a direct connection to the hub, the devices link to one another to determine which device in the mesh has the best connection to the hub. When the hub issues a command, it works the other way: the command is sent down the mesh chain through better-connected devices until it reaches the destination bulb. This smart mesh network makes controlling lights via a hub + mesh system much more reliable than it would otherwise be. The Philips Hue hub does also use 2.4 GHz for the ZigBee protocol, but the smart mesh prevents many connectivity problems, unlike these Sengled Bluetooth LED bulbs.
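The mesh relay idea above can be sketched as a shortest-path search over link quality. This is a toy model, not ZigBee’s actual routing protocol; the topology and link costs are invented for illustration (lower cost = stronger link). The mesh effectively finds the cheapest chain of hops from the hub to the target bulb.

```python
# Toy model of mesh relaying (NOT the real ZigBee routing protocol).
# Link "cost" is lower for stronger connections; values are made up.
import heapq

links = {
    "hub":    {"bulb_a": 1, "bulb_b": 5},
    "bulb_a": {"hub": 1, "bulb_b": 1, "bulb_c": 2},
    "bulb_b": {"hub": 5, "bulb_a": 1},
    "bulb_c": {"bulb_a": 2},
}

def best_path(start, goal):
    """Dijkstra over link costs: the kind of route a mesh converges on."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in links.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return None

print(best_path("hub", "bulb_b"))  # -> (2, ['hub', 'bulb_a', 'bulb_b'])
```

Note that the command to `bulb_b` relays through `bulb_a` (total cost 2) rather than using the weak direct link (cost 5). That relaying is exactly what a lone Bluetooth bulb hubbed to one distant Echo cannot do.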

This is exactly why purchasing a Bluetooth-based light is a poor choice. Because these BT light bulbs don’t have enough intelligence to discover which Echo device is closest and has the best connectivity, and because they can’t relay through just any Echo device, the bulbs are left prone to problems and failure.

Sure, these BT bulbs may be less costly than a Hue bulb, but you get the quality you pay for. Alexa’s Bluetooth was never designed or intended for this type of remote control. It’s being sledgehammered into this role by these Chinese bulb manufacturers. Sure, it can work, but it also fails frequently. Much depends on the bulb itself; not all bulb electronics are manufactured equally, particularly when made in China.

If you find a specific bulb isn’t working as expected, the bulb is probably cheaply made of garbage parts and crappy electronics. You’ll want to return the bulb for replacement… or better, for a Hue system / bulb.

Color Rendition

These cheap bulb brands include such manufacturers as Sengled (shown in the photo)… a brand commonly found on Amazon. Separate from the BT issues already mentioned, because these bulbs are made cheaply all around, you’ll also find the color rendition on these LED bulbs to be problematic. For example, asking for a Daylight color might yield something that ends up too blue. Asking for Soft White might end up with something too yellow (or a sorry shade of yellow). These are cheap bulbs made of exceedingly cheap parts through and through, including cheap LEDs that aren’t properly calibrated.

Asking for Yellow, for example, usually yields something more closely resembling orange or green. That would be fine if Alexa would allow you to create custom colors and name them. Unfortunately, the Alexa app doesn’t allow this.

Whatever colors are preset in Alexa are all the colors you can use. There is no such thing as a custom color inside of Alexa. If you don’t like the color rendition that the bulb produces, then you’re stuck. Or, you’ll need to replace the bulb with one that allows for custom color choices.

Bulbs purchased for a hub-based system, like the Philips Hue bulbs, typically offer a custom iOS or Android app that allows for building not only custom colors and presets, but also custom scenes that set individual bulbs differently while controlling them as a group. The Alexa app wasn’t designed for this kind of granular lighting control and is extremely lean on options. Everything that the Alexa app offers is set in stone and extremely rudimentary for lighting control. The Alexa app is designed as a can-opener, not as a specific tool. It does many things passably, but it doesn’t do any one thing particularly well.

Purchasing these BT Alexa-controlled LED lights is a poor choice overall. If you want the flexibility of color choices and color temperatures, buy a bulb system like Philips Hue, which also offers a custom app. If you’re looking for something on-the-cheap that still allows quick control, then a Sengled, Cree or GE smart bulb might fit the bill. Just don’t be surprised when the bulb fails to respond at all or produces a color that’s not what you were expecting. Worse, don’t be surprised when the bulb’s LED driver fails and the bulb begins to flash uncontrollably after a month’s use.

Updated Dec 7th after Amazon Outage

Today, Amazon Web Services (AWS) had a severe outage that impacted many different services, including Ring and, yes, Amazon’s Smart Home features, including Alexa + Sengled bulbs. In fact, the only system that seems to have remained unaffected (at least in my home) was the Philips Hue system. Alexa was able to properly control all of my Philips Hue lights throughout the day.

However, Alexa failed to control Kasa, Wemo, Wyze and even its own Bluetooth bulbs like Sengled. Indeed, pretty much all of my other lights were uncontrollable by Alexa for the duration of the outage, which lasted most of the day.

Amazon was able to isolate the root cause of the failure, but it still took hours to recover all of the equipment needed to restore those services. This failure meant that it was impossible to control smart lights or, indeed, even my Ring alarm system.

Smart lights are still controllable by the switch. Flipping the switch off and back on will illuminate the light, and you can then switch it off like a normal bulb. However, that also means that if the switch is off, Alexa can’t control the light. You must leave all lamp fixtures in the on position for the lights to be turned on, off and dimmed by Alexa. If you turn the light switch off, then the smart features are no longer available and the lamp will display “Device is Unresponsive” in the Alexa app.

Failures

In fact, this “Device is Unresponsive” error is the exact failure response I saw throughout the day in the Alexa app during the outage. How does this all work? Alexa is powered by Amazon Web Services servers. These servers store data about your lamps, your routines, your Alexa usage and, indeed, how to control your devices. Almost nothing is stored on any given Echo device itself. Some small amounts of settings and cache are used, but only to keep track of limited things for short periods of time. For example, if you’re playing music and pause, Alexa will keep track of that pause pointer for maybe 10-20 minutes at most. After that time, it purges the resume information so that the stream can no longer resume.
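The short-lived cache described above behaves like a simple time-to-live (TTL) store. Here’s a minimal sketch of that pattern; the 15-minute TTL and the key name are guesses based on the observed behavior, not Amazon’s actual implementation.

```python
# Sketch of a TTL (time-to-live) cache, the pattern an Echo appears to
# use for its resume pointer. TTL and key name below are hypothetical.
import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, stored_at)

    def set(self, key, value):
        self.store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self.store[key]  # expired: the stream can no longer resume
            return None
        return value

cache = TTLCache(ttl_seconds=15 * 60)      # assumed ~15-minute window
cache.set("music_pause_position", 217.4)   # seconds into the stream
print(cache.get("music_pause_position"))   # -> 217.4 while still fresh
```

Everything else — device definitions, routines, skill handling — lives upstream in AWS, which is why the outage hit so hard.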

All information about Alexa’s Smart Home devices is stored in the cloud on AWS. It also seems that state information about the lights (on, off, not responding) is stored in AWS. When connectivity stopped earlier on the 7th, Alexa couldn’t reach those servers to determine the state of the devices. The outage also prevented Alexa from controlling those devices handled strictly by Alexa. Because Alexa skills seem to be handled by those same servers, Alexa skills were unavailable as well.

However, some services, like Ring, are also hosted on AWS. The outage affected those servers too, which not only broke Alexa’s interface to those services but also prevented Ring’s very own app from controlling its own services. Yes, it was a big outage.

This outage also affected many other servers and services unrelated to Alexa’s Smart Home systems. So, yes, it was a wide-ranging, long-lasting outage. In fact, as I type this update, the outage may still be affecting some services. However, it seems that the Smart Home services may now be back online as of this writing. If you’re reading this days later, it’s likely all working again.

Smart Home Devices and Local Management

Using a hub Smart Home system like the Philips Hue hub system can allow for local management of equipment without the need for continuous internet. This means that if the Internet is offline for a period of time, you can still control your lighting with the Philips Hue app using local control. While you can control your lights with your switch, it’s just as important to be able to control your lighting even if your Internet goes down temporarily.
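The local control described above works because the Hue bridge exposes an HTTP API right on your LAN, so no cloud round-trip is needed to flip a light. The sketch below only builds such a request rather than sending one; the bridge IP and app key are placeholders you’d obtain from your own bridge.

```python
# Sketch of a local Philips Hue (v1 API) light command. The bridge IP
# and app key are placeholders; pairing with your bridge provides real
# values. This builds the request without sending it.
import json

def build_light_command(bridge_ip, app_key, light_id, on, brightness=None):
    """Construct a Hue v1 API call to set a light's state over the LAN."""
    url = f"http://{bridge_ip}/api/{app_key}/lights/{light_id}/state"
    body = {"on": on}
    if brightness is not None:
        body["bri"] = brightness  # brightness is 1-254 in the v1 API
    return "PUT", url, json.dumps(body)

method, url, payload = build_light_command("192.168.1.2", "<app-key>", 3, True, 200)
print(method, url, payload)
```

Because this request never leaves your LAN, it keeps working even when AWS (or your Internet connection) is down — which is exactly what I observed during the outage.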

What this all means is that investing in a system like a Philips Hue hub and Philips Hue lights allows your smart lighting system to remain functional even if your Internet service goes down. In this case, Philips Hue didn’t go down and neither did my Internet. Instead, what went down was part of Amazon’s infrastructure and systems. This had an impact on much of Alexa and Alexa’s control over Smart Home devices. Yet even though this was true of Alexa skills and Alexa-controlled devices, Philips Hue remained functional throughout.

That doesn’t necessarily mean that investing in a Philips Hue system is the best choice, but clearly in this instance it was a better choice than investing in the cheaper Alexa-only bulbs, which remained nonfunctional throughout the outage.

If there’s any caveat here, it’s that Smart Home systems are still subject to outages when services like AWS go belly up for a time. If you’re really wanting to maintain the ability to control your lights during such outages, then investing in a system like Philips Hue, which seems to be able to weather such outage storms, is the best of all worlds. Unfortunately, the Alexa only Sengled Bluetooth bulbs were the absolute worst choice for this type of AWS outage.


## Rant Time: It’s time for Gutenberg to go.

Posted in botch, business, rant by commorancy on October 9, 2021

As the title may suggest and as a WordPress.com blogger, I’ve given up using the Gutenberg editor for articles. Let’s explore exactly the reasons why.

Gutenberg, Block Editing and Calypso

One of the biggest selling points of Gutenberg (the latest WordPress editor, first released in 2018 and headed up by Matias Ventura) is its use of literal text blocks. Each paragraph is a discrete block, separate from all other blocks, that can be moved up and down with arrow buttons. The point of this movement system is to allow for easily rearranging your articles. At least, that was the main selling point.

In reality, the blocks are more of a chore than a help. I’ll explain this more in a bit. When Gutenberg first launched, it replaced the previous editor, Calypso, which was released in 2015. Calypso loaded extremely fast (in under 3 seconds you’re editing). Typing in text was flawless and simply “worked”. When Calypso first released, there were a number of performance issues, some bugs, and it didn’t always work as expected. However, after several updates over the initial months, all of that was solved. The slowness and performance issues were completely gone.

Before Calypso arrived, there was the much older “black colored” editor, which was a simple text-only editor. Meaning, there was no ability to graphically place or drag-move objects. Instead, you had to use specific HTML tags to manually place images and use inline CSS to get things done. It was a hassle, but it worked for the time. The big update for WordPress was that Calypso would bring modern word processor features and a more WYSIWYG experience to blogging. Calypso did that exceedingly well, if in an occasionally limited way.

Unfortunately, Calypso had a short lifespan of about 3 years. For whatever reason, the WordPress.org team decided that a new editor was in order and so the Gutenberg project was born.

Gutenberg Performance

The real problem with Gutenberg is its performance. Since its release, Gutenberg’s block-building system has immense overhead. Every time you type something into a block, the entire page and all blocks around it must react and shift to those changes. Performance is particularly bad if you’re typing into a block in the middle of an article with many other blocks. Not only does the editor have to readjust the page on every single keystroke entered, it has to do it both up and down. Because of this continual adjustment of the page, keystrokes can become lagged by up to 12 seconds behind the keyboard typing.
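The cost described above can be put in simple terms: if every keystroke forces the editor to re-measure every block, the work per keystroke grows with article length. This is a simplified model for illustration, not Gutenberg’s actual code.

```python
# Simplified cost model (NOT Gutenberg's actual code): per-keystroke
# work that touches every block scales linearly with block count,
# while a single-block editor's cost stays constant.

def block_editor_keystroke_cost(num_blocks, per_block_work=1):
    """Every block re-measured on each keystroke: O(n) per keypress."""
    return num_blocks * per_block_work

def classic_block_keystroke_cost(per_block_work=1):
    """One Classic block: O(1) per keypress, regardless of length."""
    return 1 * per_block_work

for n in (1, 50, 200):
    print(n, "blocks ->", block_editor_keystroke_cost(n), "units per keystroke")
# The Classic block stays at 1 unit no matter how long the article gets.
```

A 200-paragraph article costs 200x the work per keystroke under this model, which matches the experience of lag growing with article length.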

Where Calypso’s typing performance is instant and without lag, Gutenberg suffers incredible lag due to its poorly conceived block design. Gutenberg has only gotten worse over time. Unlike wine, which gets better with age, Gutenberg gets worse every day. There are literally hundreds of bugs in the Gutenberg editor that have never been corrected, let alone the aforementioned severe performance issue.

Classic Editor

You might be asking, “What editor are you using?” Technically, I’m using Calypso inside of Gutenberg, because there’s no other option than the antiquated “black editor”. When Gutenberg came about, the developers had to find a way to make old articles written in Calypso compatible with Gutenberg without converting every single article into the new Gutenberg block format. To do this, the Gutenberg team packaged Calypso into a block called the “Classic Editor” block. It’s effectively a full version of Calypso in a single block.

The Classic Block type is what I’m now using to type this and all new articles. I must also say that every character I type into the Classic Block is spot on in speed. No lags at all. Typing is instantaneous. However, with Gutenberg, typing words into a Gutenberg “paragraph” block can see text show up literally many seconds after I’ve typed it… sometimes more than 10 seconds later. I can literally sit and watch the cursor make each letter appear after I’ve stopped typing. It’s incredibly frustrating.

Few typists are 100% accurate 100% of the time. This means using the backspace key to remove a double-tapped letter, adding a missing letter, or rewriting a portion of text is required. When you’re waiting on the editor to “catch up” with your typing, you can’t even see what errors you’ve made until the text finally shows up. It’s like watching paint dry. It’s incredibly frustrating and time-wasting.

Editor Performance

Gutenberg’s performance has gotten progressively worse since 2018. By comparison, Calypso’s launch performance suffered when it was first released, taking 10-12 seconds to launch. The Calypso team managed to get that under control within 6 months and reduced the launch time to under 2 seconds. Literally, you could go from a new browser tab to editing an existing or brand new article in under 2 seconds. Gutenberg’s launch performance has remained consistent at ~10 seconds and has never wavered in the many years since it launched in 2018. And… that 10 seconds all for what? An editor with horrible performance?

Gutenberg launched with “okay” block performance years ago, but in the last 6 months, its performance level has significantly degraded. Literally, the Gutenberg paragraph block, the mainstay of the entire Gutenberg editor, is now almost completely unusable in far too many circumstances.

If you’re looking to type a single short paragraph article, you might be able to use Gutenberg. Typing an article like this one with a large number of paragraphs of reasonable length means slower and slower performance the longer the article gets, especially if you need to edit in the middle of the article. That’s not a problem when using the Classic Block as the article has only one block. It’s when there’s an ever growing number of blocks stacking up that Gutenberg gets ever slower and slower. Gutenberg is literally one of the most horrible editing experiences I’ve ever had as a WordPress blogger.

Gutenberg’s Developers

As a user of Gutenberg, I’ve attempted to file bug reports for the Gutenberg team in hopes that they would not only be receptive to these bug reports, but willing to fix them. Instead, what I got was an ever-growing level of hostility with every bug reported… culminating in myself and one of the Gutenberg developers basically having words. He accused me of not taking the right path to report bugs… but what other path is there to report bugs if not the official bug reporting system devoted to Gutenberg’s bugs? This one entirely baffled me. Talk about ungrateful.

Sure, I’m a WordPress.com user, but the WordPress.com team doesn’t accept bug reports for Gutenberg as they have nothing to do with Gutenberg’s development. They’ll help support the WordPress.com product itself, but they don’t take official bug reports for sub-product components. In fact, I’d been told by multiple WordPress.com support staffers to report my bugs directly into the Gutenberg project bug reporting system. That’s what I did. I explained that to the developer who suddenly became somewhat apologetic, but remained terse and condescending.

Let’s understand one thing. WordPress.com is a separate entity from the WordPress.org Gutenberg development team. The two have no direct relationship whatsoever, making this whole situation even more convoluted. It’s a situation that WordPress.com must work out with WordPress.org. As a blogger, it’s not my responsibility to become the “middle man” communicating between these orgs.

Any development team with this level of hostility towards its end users needs to be reevaluated for its project values. Developers can’t develop in a bubble. They need the feedback from users to improve their product. Developers unwilling to accept this feedback need to be pulled from the project and, if their attitude does not improve, be jettisoned. Bad attitudes need to be culled from any development project. It will only serve to poison the end product… and nowhere is this more abundantly clear than in the Gutenberg editor. This editor is now literally falling apart at the seams.

At this point, WordPress needs to make a choice. It’s clear, the Gutenberg editor can’t last. WordPress.com must make a new editor choice sooner rather than later. Gutenberg is on its last legs and needs to be ushered out of the door.

If that means re-wrapping the entire editor so that the Classic Block becomes the default and only block available, then so be it. I’d be perfectly happy if WordPress.com would make the Classic Block not only the default editor block type when entering a new editor, but the ONLY block type available. After all, everything that can be done with individual blocks in Gutenberg can be done in the Classic Block.

Then, refocus the Gutenberg development team’s efforts to improving ONLY the Classic Block. Have them drop the entirety of development for every other block type from that horrible Gutenberg editor product.

Blocks and Gutenberg

Let’s talk about Gutenberg’s design for a moment. The idea behind Gutenberg is noble, but ultimately its actual design is entirely misguided. Not only has Gutenberg failed to improve the editor in any substantial way, it has made text editing slower, more complex and difficult in an age when an editor should make blogging easier, faster and simpler. All of the things that should have improved over Calypso have actually failed to materialize in Gutenberg.

The multiple block interface doesn’t actually improve the blogging experience at all. Worse, the overhead of more and more blocks stacking to create an article makes the blogging experience progressively slower and less reliable. In fact, there are times when the editor becomes so unresponsive that it requires refreshing the entire editor page in the browser to recover. Simply, Gutenberg easily loses track of its blocks causing the editor to essentially crash internally.

None of this is a problem with the Classic Editor block because editing takes place in one single block. Because the Classic Editor is a single block, Gutenberg must only keep up with one thing, not potentially hundreds. For this reason, the Classic Editor is a much easier solution for WordPress.com. WordPress.com need only force the Classic Block as the primary editor in Gutenberg and hide all of the rest of Gutenberg’s garbage blocks that barely work. Done. The editor is now back to a functional state and bloggers can now move on with producing blog articles rather than fighting Gutenberg to get a single sentence written. Yes, Gutenberg is that bad.

Worse, however, is Gutenberg’s block design idea. I really don’t fully understand what the Gutenberg team was hoping to accomplish with this odd block design. Sure, it allows movement of the blocks easily, but it’s essentially a technical replacement for cut and paste. How hard is it really to select a paragraph of text, cut it and then paste it into a different location? In fact, cut and paste is actually easier, faster and simpler than trying to move a block. Block movement is up or down by one position at a time when clicked. If you need the block moved up by 10 paragraphs, then you’re clicking the up button 10 times. And, you might have to do this for 5 different paragraphs. That’s a lot of clicking. How does that much clicking save time or make blogging easier? Cut and paste is always four actions. Select the text, cut, click cursor to new location, paste. Cut and paste has none of this click-click-click-click-clickity-click BS. Of course, you can cut and paste a whole block, but that sort of defeats the purpose of building the up and down function for movement, doesn’t it?

Instead, I’ve actually found in practice that Gutenberg’s alleged more advanced “design” actually gets in the way of blogging. You’d think that with a brand new editor design, a developer would strive to bring something new and better to the table. Gutenberg fails. The whole cornerstone and supposed “benefit” of Gutenberg’s design is its blocks. The blocks are also its biggest failing. Once you realize the blocks are mostly a gimmick… a pointless and a slow gimmick at that, you then realize Calypso was a much better, more advanced editor overall, particularly after using a Classic Block to blog even just one article.

Change for Change’s Sake

Here’s a problem that’s plagued the software industry for years, but in more recent times has become a big, big problem. With the rush to add new features, no one stops to review the changes for functionality. Product managers are entirely blinded by their job requirement to deliver something new all of the time. However, new isn’t always better and Gutenberg proves this one out in droves. Simply because someone believes a product can be better doesn’t mean that the software architects are smart or creative enough to craft that reality.

We must all accept that creating new things sometimes works and sometimes fails. More than that, we need to recognize a failure BEFORE we proceed down the path of creation. Part of that is in the “Proof of Concept” phase. This is the time when you build a mini-version of a concept to prove out its worth. It is typically at the “Proof of Concept” stage where we can identify success or failure.

Unfortunately, it seems that many companies blow right past the proof-of-concept stage and jump from on-paper design into full-bore development efforts. Without a proper design review by at least some stakeholders, there’s no way to know if the end result will be functional, useful or indeed solve any problems. This is exactly where Gutenberg sits.

While I can’t definitively state that the Gutenberg team blew past the proof-of-concept stage, it certainly seems that they did. Anyone reviewing Gutenberg’s blocks idea could have asked one simple question, “How exactly are blocks better than cut and paste?” The answer here is the key. Unfortunately, the actual answer to this question likely would have been political double-speak which doesn’t answer the question or it might end up being a bunch of statistical developer garbage not proving anything. The real answer is that this block system idea doesn’t actually improve blogging. In fact, it weighs down the blogging experience tremendously.

Instead of spending time writing, which is what we bloggers do (and actually want to do), we now spend more time playing Legos with the editor to determine which block fits where. As a blogger, an editor should work for us, not against us. Spending 1/3 of our time managing editor blocks means losing 1/3 of the time we could have been writing. Less time writing means fewer articles written.

Because blogging is about publishing information, speed is of utmost importance. Instead of fumbling around in clumsy blocks, we should spend our time formulating our thoughts and putting them down onto the page. For this reason, Gutenberg gets in our way, not out of our way.

At a Crossroads — Part II

Circling back around, we can now see exactly WHY WordPress.com is at a crossroads. The managers at WordPress.com need to ask this simple question, “What makes our bloggers happy?” The answer to this question is, “A better and faster editor.”

Are Gutenberg’s failings making bloggers happy? No. Since the answer to this question is “No”, WordPress.com managers need to realize there’s a problem afoot… a problem which can be solved. Nothing requires the WordPress.com platform to use Gutenberg… or at least the block portions of it. Because there exists a solution in the Classic Block, it would be simple to launch Gutenberg directly into a locked-in version of the Classic Block and not allow any further blocks to be created… essentially dumping the vast majority of Gutenberg.

This change reverts the editor back to Calypso and effectively does away with Gutenberg almost entirely. Though, this is a stop-gap measure. Eventually, the WordPress.com managers will need to remove Gutenberg entirely from the WordPress.com platform and replace it with a suitably faster and more streamlined editor, perhaps based on a better, updated version of Calypso. It’s time for this change. Why?

If the Gutenberg team cannot get a handle on crafting an editor that works after 3 years, then Gutenberg needs to be removed and replaced by an editor team actually willing to improve the blogging experience. WordPress.com needs to be able to justify its sales offerings, but that’s exceedingly difficult when the editor, which should be the cornerstone of the platform, is working against you. This makes it exceedingly difficult for would-be buyers to spend money on the WordPress.com platform. Paying for an editor that barely works is insane. Surely WordPress.com managers can’t be so blind as to not see this effect?

The bottom line is, how do you justify replacing an editor with an under 2 second launch time with an editor that now has a 10-20 second launch time? That’s taking steps backwards. How do you justify an editor that lags behind the keyboard typing by up to 12 seconds when the previous editor had no lag at all? Again, steps backwards. Isn’t the point in introducing new features to make a product better, faster and easier? Someone, somewhere must recognize this failure in Gutenberg besides me!! Honestly, it’s in the name of the product “WordPress”. How can we “press words” without an editor that “just works”?

WordPress.com, hear me: it’s time to make a change for the better. Dumping Gutenberg from the WordPress.com platform is your best hope for a brighter future at WordPress.com. As for the WordPress.org team, let them wallow in their own filth. If they want to drag that Gutenberg trash forward, that’s on them.
