Random Thoughts – Randocity!

Review: Pokémon — Let’s Go! Pikachu

Posted in botch, business, video game, video game design by commorancy on November 16, 2018

I’ll make this one short and sweet. This is the first Pokémon game for the Nintendo Switch and in some ways it’s fun, but in many ways it’s a sheer disappointment. Let’s Go!

Pikachu

In this review, I’m playing the Pikachu edition. I’m sure that the Eevee edition will likely be very similar in play value, with the exception of certain Pokémon you can only collect in each separate edition.

Controller Problems

Here’s the first disappointment with this game, and I want to get it out of the way right up front: the Nintendo Pro Controller doesn’t work at all in this game. When you press the connect button, the light Cylons back and forth, but it never connects.

Unfortunately, you are forced to use the JoyCons with this game. This is an extreme disappointment. But wait, it gets worse. If you pull the JoyCons off of the console to hold them in your hands and use them wirelessly, you can’t use both of them together like you can when they are attached to the console. When they are separated from the console, the game mistakenly assumes that two people will be using one each. An entirely stupid decision. If there’s only one player, then let that player use both. If a second player wants to join, then remap the buttons so each player is separate. Don’t just make bad assumptions about this.
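
To be clear about how simple the right behavior would have been, here’s a rough sketch of the assignment logic I’m asking for. This is hypothetical Python, not anything from Nintendo’s code; the names and structure are entirely mine:

def assign_joycons(players, joycons):
    # One player: let them use both JoyCons together as one controller.
    if len(players) == 1:
        return {players[0]: list(joycons)}
    # Two players: remap so each player gets their own JoyCon.
    return {player: [joycon] for player, joycon in zip(players, joycons)}

print(assign_joycons(["P1"], ["Left", "Right"]))        # {'P1': ['Left', 'Right']}
print(assign_joycons(["P1", "P2"], ["Left", "Right"]))  # {'P1': ['Left'], 'P2': ['Right']}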

Even if you place the two controllers into a JoyCon Grip to make the JoyCons feel like a Pro Controller, the game still assumes one controller per person. Bad, bad design. It gets worse, again. If you want to hold a single JoyCon horizontally so that the buttons are on the right and you can hold it with both hands… not possible. The only possible orientation for holding the JoyCon is vertical.

I’m very disappointed in Nintendo and Game Freak here, and it keeps getting worse. Because the JoyCons don’t have the same wireless range as the Pro Controller, connectivity to the console is entirely spotty when the console is docked several feet from you. Unless you intend to game with the console just a few inches in front of you (in which case you might as well attach them), using the JoyCons at a distance is entirely problematic and frustrating.

So, the only way to use both controllers to play the game as a single player is when they are attached to the console, and that means holding the Switch in your hand and playing on the built-in screen. You CANNOT play Pokémon Let’s Go using the Pro Controller at all, or by using both JoyCons together when they are not attached. You are forced to play this game using a single JoyCon per player when detached. A stupid and unnecessary requirement. And people wonder why Nintendo is in third place for its consoles.

Dock

This game almost completely ignores the fact that there’s a dock. Instead of allowing use of the Pro Controller when docked, it forces you to pull the JoyCons off of the Switch and use those instead. I found this cumbersome, problematic and unnecessary. We spend $70 for the Pro Controller and we can’t even use it. Not being able to use the Pro Controller in Pokémon (one of Nintendo’s flagship properties) is just an extremely bad design decision.

Game Play

Setting aside the stupidity that is the controller system, the gameplay is underwhelming. Sure, Nintendo finally added the ability to see the Pokémon running around in the weeds before you collect it, but that’s little consolation when the game is basically the same game as every other DS version.

Let’s go back to the controller again, but for a different reason than above. When you are attempting to capture Pokémon with the JoyCons attached to the Switch, it’s much, much easier and simpler to throw Pokéballs. The ball-throwing motion needed when using a detached JoyCon is much, much more difficult for no apparent reason. Worse, when using a loose JoyCon, steering the on-screen hand to interact with your Pokémon is entirely difficult, where using the touch screen is easy peasy. Here’s another place where forcing the use of a JoyCon is a tremendously bad idea. The motion to throw a Pokéball with the Pro Controller would mimic the same motion used when holding the console… where using a detached JoyCon for throwing a Pokéball is… well… strange.

Game Design

I was actually expecting a whole lot more use of the player camera than what is being offered. It’s effectively a 3DS game ported to the Switch. Nintendo completely missed the opportunity to give this game a much needed facelift for the Switch, like they did for Breath of the Wild. It is effectively the same game as every other Pokémon game. This is quite disappointing, but it’s also a double-edged sword.

For some players, it is like a comfortable glove. If you’ve played Pokémon in the past, then you can fall right into this game without any problems at all. It’s old hat and feels old hat. The graphics are improved, but it needed a more open world RPG style update rather than this constrained old-school Pokémon conversion.

I’m sure a lot of people will absolutely adore this game. But because Nintendo has chosen to play games with how the controllers work, the whole thing ends up feeling rushed and unfinished, like a really bad port.

Graphics

To be honest, the graphics are very low-res, flat and cartoony. I sort of expected this, but not at this low a level. It’s so low that it looks like a Nintendo DS game. Though, as I said above, it is somewhat better than the DS, but only because the resolution is higher… and that’s not really saying much.

Overall, I was expecting a whole lot more from this game.

Score

Graphics: 4.5 out of 10 (Underwhelming)
Sound: 2 out of 10 (Music is way too loud and unnecessary)
Controls: 2 out of 10 (Controller system is strange, no Pro controller support)

Overall: 4 out of 10 (Antiquated, strange controller design; seems unfinished or like a bad port)

Rant Time: SmugMug and Flickr

Posted in botch, business, california by commorancy on November 12, 2018

While you may or may not be aware, if you’re a Flickr user, you should be: SmugMug bought Flickr and they’re more than doubling the yearly price. They’re also changing the free tier. Let’s explore.

Flickr Out

When Flickr came about under Yahoo, it was really the only photo sharing site out there. It had a vibrant community that cared about its users and it offered very good tools. It also offered a Pro service that was reasonably priced.

After Marissa Mayer took over Yahoo, she had the Flickr team redesign the interface, and not for the better. It took on a look and feel that was counter-intuitive, displaying the photos in a jumbled mass that made the photos look bad and the interface look even worse.

The last time I paid for Pro service, it was for 2 years at $44.95; that’s $22.48 a year. Not a horrible price for what was being offered… a lackluster interface and a crappy display of my photos.

Since SmugMug took over, it has done little to improve the interface. In fact, it is still very much the same as it was when it was redesigned. We’re talking about a product that started in 2004, and in many ways Flickr still feels like 2004 even with its current offerings.

Status Quo

While Flickr kept their pricing reasonable at about $23 a year, I was okay with that… particularly with the 2 year billing cycle. I had no incentive to do anything different with the photos I already had in Flickr. I’d let them sit and do whatever they wanted. In recent months, I hadn’t been adding photos to the site simply because viewership has gone way, way down. At one point, Flickr was THE go-to photo service on the Internet. Today, it’s just a shell of what it once was. With Instagram, Tumblr and Pinterest, there’s no real need to use Flickr any longer.

A true Pro photographer can take their work and make money off of it at sites like iStockPhoto, Getty, Alamy and similar stock photo sites. You simply can’t sell your work on Flickr. They just never offered that feature for Pro users. Shit, for the money, Flickr was heavily remiss in not giving way more tools to the Pro users to help them at least make some money off of their work.

Price Increase

SmugMug now owns the Flickr property and has decided to more than double the yearly price. Instead of the old $44.95 every 2 years, they now want us to pay $50 a year for Pro service.
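
Doing the quick math on that increase, using their own numbers:

$44.95 / 2 years = $22.48 per year   (the old grandfathered Pro rate)
$50.00 - $22.48  = $27.52            (roughly $28 extra per year)
$50.00 / $22.48  = about 2.2x        (more than double)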

[RANT ON] So, what the hell SmugMug? What is it that you think you’re offering now that is worth more than double what Yahoo was charging Pro members before you took over Flickr? You’ve bought a 14 year old property. That’s no spring chicken. And you now expect us to shell out an extra $28 a year for an antiquated site? For what? Seriously, FOR WHAT?

We’re just graciously going to give you an extra $28 a year to pay for a 14 year old product? How stupid do you think we are? If you’re going to charge us $28 extra a year, you damned well better give us much better Pro tools and reasons to pay that premium. For example, offer tools that let us charge for and sell our photos as stock photos right through the Flickr interface. You need to provide Pro users with a hell of a lot more service for that extra $28 per year than what you currently offer.

Unlimited GB? Seriously? It already was unlimited. Photos are, in general, small enough not to even worry about size.

Advanced stats? They were already there. It’s not like the stats are useful or anything.

Ad-free browsing? What the hell? How is this even a selling point? It’s definitely not worth an extra $28 per year.

10 minutes worth of video? Who the hell uses Flickr for video? You can’t sell them as stock video and you can’t monetize them, so you can’t even make money that way! What other reason is there to use Flickr for video? YouTube still offers nearly unlimited video lengths AND monetization (if applicable). Where is Flickr in this process? Nowhere.

Flickr is still firmly stuck in 2004 with 2004 ideals and 2004 mentality. There is no way Flickr is worth $50 a year. It’s barely worth $20 a year. [RANT MOSTLY OFF]

New Subscribers and Pro Features

Granted, my pricing was grandfathered from Yahoo. If you have recently joined Flickr as a Pro user, you’re likely already paying $50 a year… 50 US dollars per year which, I might add, is entirely not worth it.

Let’s understand what you (don’t) get from Flickr. As a Pro user, you’re likely buying into this tier to get more space and storage. But what does that do for you other than allowing you to add more photos? Nothing. In fact, you’re paying Flickr for the privilege of letting them advertise on the back of your photo content.

Yes, you read that right. Most people searching Flickr are free tier users. Free tier users get ads placed onto their screens, including on your pages of content. You can’t control which ads they see, or prevent your page from appearing to endorse a specific product when an ad lands near one of your photos. Ads that you might actually be offended by. Ads that make Flickr money, money that Flickr doesn’t trickle back to its paying Pro users. Yes, they’re USING your content to make money. Money that they wouldn’t have had without your content being there. Think about that for a moment.

Advertising on your Content

Yes, that’s right, you’re actually paying Flickr $50 for the privilege of them placing ads onto your page of content. What do they give you in return? A larger storage limit that’s effectively useless? Even the biggest photos don’t take much space… not nearly as much space as a YouTube video. Flickr knows that. They hope we users don’t see the wool being pulled over our eyes. Yet, do you see YouTube charging its channels for the privilege of uploading or storing content? No. In fact, if your channel is big enough, YouTube’ll even share ad revenue with you. Yahoo, now SmugMug, has never shared any of its ad revenue with its users, let alone Pro users. Bilking… that’s what it is.

On the back of that problem, Flickr has never offered any method of selling or licensing your photos within Flickr. If ever there was a ‘Pro’ feature that needed to exist, it would be selling / licensing photos… like Getty, like iStockPhoto, like Alamy. Instead, what has Flickr done in this area? Nothing… other than the highly unpopular and horrible redesign released in 2013, which was all cosmetic (and ugly at that)… and that affected all users, not just Pro. Even further, what has SmugMug done for Flickr? Absolutely nothing… zip, zero, zilch, nada. Other than spending money to acquire Flickr, SmugMug has done nothing… and it shows.

Free Tier Accounts

For free tier users, SmugMug has decided to limit the maximum number of uploaded photos to 1000. This is simply a money-making ploy. They assume that free tier users will upgrade to Pro simply to keep their 1000+ photos in the account. Well, I can’t tell you what to do with your account, but I’ve already deleted many photos to reduce my photo count below 1000. I have no intention of paying $50 a year to SmugMug for the “privilege” of allowing them to monetize my photos. No thanks.

If you are a free tier user, know that very soon they will be instituting the 1000 photo limit. This means you’ll either have to upgrade or delete photos to get below 1000.

Because the Flickr platform is now far too old to be considered modern, I might even say that it’s on the verge of being obsolete… and because the last upgrade that Marissa had Yahoo perform on Flickr made it look like a big turd, I’m not willing to pay Flickr / SmugMug $50 a year for that turd any longer. I’ve decided to get off my butt and remove photos, clean up my account and move on. If SmugMug decides to change their free tier further, I’ll simply move many of my photos over to DeviantArt where there are no such silly limits and then delete my Flickr account entirely.

If enough people do this, it will hurt SmugMug badly enough to turn that once vibrant Flickr community into a useless wasteland. I believe that outcome will actually become a reality anyway in about 2 years.

SmugMug

This company is aptly named, particularly after this Flickr stunt. They’re definitely smug about their ability to bilk users out of their money without delivering any kind of useful new product. It would be entirely one thing if SmugMug had spent 6-12 months and delivered a full-featured ad revenue system, a stock photo licensing tool and a storefront to sell photos. With all of these additions, $50 a year might be worth it, particularly if SmugMug helped Flickr users promote and sell their photos.

Without these kinds of useful changes, that $50 is just cash handed over without anything useful delivered in return. If all you want to do is park your images, you can do that at Google, at Tumblr, at Pinterest, at Instagram and several other photo sharing sites just like Flickr. You can even park them at Alamy and other sites and make money from your photographic efforts.

Why would you want to park them at Flickr / SmugMug when they only want to use your photos to make Flickr / SmugMug money from advertising on a page with your content? It just doesn’t make sense. DeviantArt is actually a better platform and lets you sell your photos on various types of media in various sizes.

Email Sent to Support

Here’s an email I sent to Flickr’s support team. This email is in response to Margaret, who claims they gave us a “3 year grace period” of lower grandfathered pricing:

Hi Margaret,

Yes, and that means you’ve had more than ample time to make that $50 a year worth it for Pro subscribers. You haven’t, and you’ve failed. It’s still the same Flickr it was when I was paying $22.48 a year. Why should I now pay over double the price for no added benefits? Now that SmugMug has bought it, here we are, forced to pay the $50 a year toll when there’s nothing new that’s worth paying $50 for. Pro users have been given ZERO tools to sell our photos on the platform as stock photos. Being given these tools is what ‘Pro’ means, Margaret. We additionally can’t in any way monetize our content to recoup the cost of our Pro membership fees. Worse, you’re displaying ads over the top of our photos and we’re not seeing a dime of that revenue.

Again, what have you given that makes $50 a year worth it? You’re really expecting us to PAY you $50 a year to show ads to free users over the top of our content? No! I was barely willing to do that with $22.48 a year. Of course, this will all fall on deaf ears because these words mean nothing to you. It’s your management team pushing stupid efforts that don’t make sense in a world where Flickr is practically obsolete. Well, I’m done with using a 14 year old decrepit platform that has degraded rather than improved. Sorry Margaret, I’ve removed over 2500 photos, cancelled my Pro membership and will move back to the free tier. If SmugMug ever comes to its senses and actually produces a Pro platform worth using (i.e., actually offers monetization tools or even a storefront), I might consider paying. As it is now, Flickr is an antiquated 14 year old platform firmly rooted in a 2004 world. Wake up, it’s 2018! The iStockphotos of the world are overtaking you and offering better Pro tools.

Bye.

Reasons to Leave

With this latest stupid pricing effort and the lack of effort from SmugMug, I now firmly have a reason to leave Flickr Pro. I have deleted over 2500 photos from Flickr, bringing my account below 1000 photos (the free tier limit). From here, it will remain on the free tier unless SmugMug decides to get rid of that too. If that happens, I’ll simply delete the rest of the photos, delete the account and move on.

I have no intention of paying a premium for a 14 year old site that feels 14 years old. It’s 2004 technology given a spit and polish shine using shoelaces and chewing gum. There’s also no community at Flickr, not anymore. There’s really no reason to even host your photos at Flickr. It’s antiquated by today’s technology standards. I also know that I can’t be alone in this. Seriously, paying a huge premium to use a site that was effectively designed in 2004? No, I don’t think so.

Oh, well, it was sort of fun while it lasted. My advice to SmugMug…

“Don’t let the door hit you on the way out!” Buh Bye. Oh and SmugMug… STOP SENDING ME EMAILS ABOUT THIS ‘CHANGE’.


If you’re a Flickr Pro subscriber, I think I’ve made my thoughts clear. Are you willing to pay this price for a 14 year old aging photo sharing site? Please leave a comment below.

Why Star Trek Discovery is not canon

Posted in botch, business, entertainment, TV Shows by commorancy on November 2, 2018

A lot of “fans” of the latest Star Trek TV series, Star Trek Discovery, claim to love the show. They also claim that because the show runners have said Discovery is official canon, the show is canon. But is it? Let’s explore.

What is Canon?

Canon is the previously established story and characters that a show must follow so as not to contradict what has come before. Yet Discovery has contradicted established canon all along the way. The first contradiction is the Klingons, with their… well, let me show a picture:

[Image: T’Kuvma, a Klingon leader from Star Trek Discovery]

This is a Discovery Klingon. This Klingon above looks nothing like these 3:

[Image: Klingons from The Next Generation]

or even this Klingon from a TOS episode:

[Image: a classic Klingon from a TOS episode]

The latter two show Klingons from The Next Generation and The Original Series, respectively. The “bonehead” Klingon became the norm from 1979 onward, and it was the bonehead Klingon design that Gene Roddenberry himself approved.

With Star Trek Discovery, that all changed and now we have the Klingon pictured in the topmost image. The difficulty is: where did this Klingon come from? It doesn’t match the canon approved and used throughout the 80s and 90s, and even into the 00s with Star Trek Enterprise.

Now Discovery appears and gives us this oddly designed Klingon that has never been used in any previous series. It doesn’t much resemble a Klingon, even though these characters speak Klingon and have a kind of “bonehead”. The question remains: what happened? Is this design canon or not? Before I answer that question, let’s talk about how this intellectual property has been fractured between studios.

Paramount versus CBS

When Roddenberry was alive, and even up until not too long ago, Paramount was the sole rights holder of Star Trek. However, when Viacom bought Paramount and then split Paramount and CBS apart, ownership changed and the Star Trek franchise was fractured in unnecessary and inexplicable ways.

A little history. In 1994, Paramount was purchased by Viacom. In 1999, Viacom agreed to purchase CBS. This means that from 1999 to 2005, Viacom owned both Paramount and CBS. In 2005, Viacom’s then board of directors voted to split Paramount and CBS into separate companies for better “shareholder value”.

When the companies split, CBS was given the rights to the Star Trek TV series universe and Paramount was given the rights to the Star Trek motion picture universe. Ultimately, this now gives two separate entertainment companies the rights to create and make up canon in their respective universes. This is ultimately where the fracturing of the intellectual property comes into play and why Discovery is such a mess when it comes to producing its series based on canon.

This split also means that the canon is now split between two separate companies. A franchise disaster, to be honest.

Motion Pictures versus TV series

The TV series side includes Star Trek The Original Series, The Animated Series, The Next Generation, Deep Space Nine, Voyager and Enterprise. All of these properties, up through Enterprise, existed at the time of the split. Discovery did not exist then.

The original cast motion pictures include Star Trek The Motion Picture, II, III, IV, V and VI. The Next Generation cast pictures include Generations, First Contact, Insurrection and Nemesis. The Kelvin time line pictures (i.e., J.J. Abrams) include Star Trek (2009 Reboot), Into Darkness, Beyond and there is a possibility of a fourth film which is in limbo as of this article.

This means that CBS owns the rights to the above TV series properties (in addition to Discovery) and Paramount owns the rights to the above Motion Picture properties. It also means that CBS can now ignore motion picture canon and Paramount can ignore TV series canon when producing future works.

Clearly, this is how CBS is proceeding with its latest TV series, Star Trek Discovery. One could argue that the “bonehead” Klingons appear in the TV series. They do. And, to a degree, the design above does look somewhat like a bonehead Klingon, except without hair, with much darker skin, oddly shaped facial features and oddly shaped outfits. However, no Klingon that looks like this Discovery Klingon has ever appeared on screen in any way (TV or movie). This Klingon type is actually the first of its kind… which means it is NOT Roddenberry canon.

The Trouble with Tribbles

Or, more specifically, the trouble with double ownership of the Star Trek franchise is that there is no effective steward maintaining canon. There can’t be. There are two separate companies competing for your almighty Star Trek dollar. One company can make shit up and the other company doesn’t have to use it. This is effectively what CBS is doing… making shit up as they go along because they don’t have to answer to canon placed into the motion pictures. Even then, they’re not following canon established by previous Star Trek TV series either. After all, Star Trek Discovery is clearly set in the same era as The Original Series.

The TV series timeline goes something like this (timeline courtesy of Memory Alpha):

2151-2155 -- Star Trek Enterprise (Seasons 1 thru 4)
2254      -- Star Trek The Original Series: "The Cage" (Episode)
2256-2257 -- Star Trek Discovery (Season 1)
2265-2269 -- Star Trek The Original Series (Seasons 1, 2 and 3)
2269-2270 -- Star Trek The Animated Series (Seasons 1 and 2)
2364-2370 -- Star Trek The Next Generation (Seasons 1 thru 7)
2369-2375 -- Star Trek Deep Space Nine (Seasons 1 thru 7)
2369-2370 -- Star Trek Enterprise: "These Are the Voyages..." (Episode)
2371-2378 -- Star Trek Voyager (Seasons 1 thru 7)

As you can see, Star Trek Discovery is actually set BEFORE Star Trek The Original Series, The Animated Series and every other series, with the exception of Star Trek Enterprise and one TOS episode (“The Cage”), both of which come before Discovery.

[Image: a Star Trek The Motion Picture Klingon]

Basically, the canon that Star Trek Discovery must adhere to is what is seen in Star Trek Enterprise and in one episode of The Original Series (and, of course, anything in later TV series that corroborates Enterprise and TOS). Enterprise and this one episode of Star Trek TOS are both enough to set canon for how Discovery should run. Discovery also occurs 9 years prior to The Original Series. However, The Original Series only showed the non-bonehead Klingons, while Enterprise showed us both styles of Klingons. This means that both Klingon types already existed in the Roddenberry universe when Star Trek TOS took place, and that both Klingon types exist at the time when Discovery is operating. One could argue that Enterprise broke canon by showing us the bonehead Klingons that we wouldn’t see until Star Trek The Motion Picture in 1979 (pictured above). However, Discovery’s Klingon type comes out of nowhere and goes back into nowhere, because this Klingon type won’t exist after Discovery ends.

However, the fourth season Enterprise episode “Affliction” is, I guess, supposed to explain the difference between the bonehead and non-bonehead Klingons and why the non-bonehead Klingons appear in The Original Series. I think it was a cheap cop-out episode, but hey, at least they held true to the TMP and TOS Klingon designs… which is more than I can say for Discovery.

Discovery, on the other hand, doesn’t hold true to either design. They made their own Klingon canon. They made a Klingon design that has never existed before or since… not in ENT, TOS, TAS, TNG, DS9 or Voyager. They’re clearly “making shit up”.

Additionally, there’s the Spore Drive. Yet again, Discovery is found “making shit up”. This drive type has never been discussed either before or since, yet Discovery introduces this propulsion system as some experimental thing that only existed during Discovery’s run. I’m sorry, but if the spore drive were a real thing in the Roddenberry universe, it would have been mentioned in Star Trek TOS, likely in Star Trek Enterprise, and even in TNG, DS9 and Voyager (it would at least have come up in Voyager when the crew was looking for a way home). Since no information was ever discussed regarding this drive system, Discovery is simply creating things out of thin air to make the series more watchable (and make more money). However, there may be another reason… so, keep reading.

Because “The Cage” shows us that the Federation chain of command already exists as a formalized, hierarchical command structure, having Discovery portray its characters as chaotic, insubordinate and outright informal makes me believe that the Discovery creators had no intention of following established Roddenberry “Federation” canon. In fact, I will go so far as to say that Star Trek Discovery is actually operating in its own universe. Perhaps it exists in the Kelvin universe alongside the reboot Star Trek motion pictures, but I believe it lives in its own new CBS universe. Either way, Discovery does not live in the same universe as the Roddenberry universe TV shows do.

CBS Universe

Because Star Trek Discovery lives in its own universe, the creators of Discovery can literally make up anything they wish and it will be canon. It’s canon because the show isn’t set in the Roddenberry universe. It’s set in a CBS offshoot universe where everything can and does exist if the creators want it to. In this universe, weirdly shaped Klingons, spore drives and insubordination are all accepted, because in this universe it’s all there.

In the Roddenberry universe, Discovery never existed and couldn’t exist. The spore drive doesn’t exist. The weird Discovery Klingons don’t exist. The F-bombs don’t exist. The nonsensical, highly sophisticated NCC-1031 starship doesn’t exist, with its operating panel designs that don’t appear on the Federation’s flagship Enterprise NCC-1701 just 9 years later.

[Image: the Star Trek Discovery bridge]

Discovery living in a CBS universe is the only explanation that can possibly work for this TV show. When a show runner says it’s canon, well, it is. But it’s only canon if you consider that Discovery is a show created in an offshoot CBS universe that has never before existed. It is not canon were it to exist in the Roddenberry universe. Obviously, the show creators aren’t going to make this distinction because they don’t want viewers to understand the difference between the CBS universe and the Roddenberry universe. They just want the viewer to believe it somehow magically exists in the Roddenberry universe when this show clearly cannot.

It’s clear, Discovery does not exist in the Roddenberry universe. It can’t. That universe ended with the close of Star Trek Enterprise. It remains to be seen whether the new Patrick Stewart series will be set in Discovery’s CBS universe or whether CBS will try to set that series in the Roddenberry universe. My guess is that CBS may want to attempt some type of crossover episodes between Discovery and the as yet unnamed Patrick Stewart series. However, that would be a feat considering that Discovery occurs roughly 108 years earlier than the original TNG series (see the timeline above). Considering Patrick Stewart’s age now, they’ll have to age the new series forward to have it make sense with Stewart’s current age… which means this new series must occur over 100 years in Discovery’s future. It will then be difficult to have a crossover without time travel. However, they can engineer dual episodes in which something happens in Discovery that impacts the Picard series 100 years later. This is akin to a crossover and would establish both series as being in the same universe; the CBS universe.

Personally, I’d rather the two series remain entirely independent. No crossovers. No incidental references to prior events in Discovery. That way, Discovery can officially be announced as operating in its own CBS universe, the Picard series can be set in the Roddenberry universe, and no crossovers will be possible.

Kelvin Universe

When J.J. Abrams became part of Paramount’s effort to reboot the Star Trek movie franchise, he decided to create an entirely new and separate universe. In that effort, he had elder Spock (from the Roddenberry universe) fall through a time hole and land in an alternate universe at a much earlier point in its unfolding life. Elder Spock then meets up with a much younger alternate version of Spock, along with younger versions of Kirk, Sulu, Chekov, Uhura, Bones and so on. Basically, these alternate versions of the main characters set the tone of this alternate universe’s ‘Five Year Mission’, set on an alternate Enterprise, in an alternate timeline known as Kelvin. It’s named after the USS Kelvin, the Federation ship destroyed when Nero’s vessel emerged from the time hole ahead of elder Spock. Why is this important?

It’s important to understand this Kelvin alternate universe idea because it appears CBS has done the same exact thing with the Discovery TV series. Instead of trying to hold true to the Roddenberry universe canon (and inevitably disturbing it), it’s far easier to create a brand new offshoot universe set in its own timeline. This then means the writers can write anything they wish, on any ship they wish, with any technology they wish. Because Paramount has already established its own playground universe for the movies to live in, it appears CBS is running with this idea and has done the exact same thing with Discovery. Even the name ‘Discovery’ hints at the existence of this alternate universe.

In fact, I believe that this alternate universe will reveal itself and will likely become a big part of Discovery’s future stories. I’m assuming the writers are holding this point back until just the right moment, when they can reveal a character like Picar… er, Spock falling through a time distortion so we can clearly see that Discovery is not set in the Roddenberry universe. It makes for a good plot twist, don’t you think? Holding this point back allows the Discovery writers to craft and unfold an entire season-long story arc about this new CBS universe (or whatever name they decide to give it). For now, I’m calling it the CBS universe, but it will likely be named differently after someone from the Roddenberry universe falls into it.

I’d suspect it might be a TNG character who falls through this time hole. Perhaps Q created this universe? I’d steer clear of Q, as using this character always feels like a cop-out. Because Wesley had become a kind of universe traveler, I’d like to see him return a bit older so we can finish out his story arc, which never really closed properly in TNG. I might also like to see Kes show up, as she didn’t get proper closure in Voyager either. Seeing a new Dax might also be a good way to handle this reveal. Dax’s immense knowledge and age would allow for some very good stories. Even Guinan might be a good choice to land in Discovery’s alternate universe.

For this reason, I believe that Discovery’s writers and creators are holding back on this idea, but will eventually reveal it. It’s also why the show runners can say that Discovery is canon, because it is… in its own universe. They just haven’t revealed this alternate universe point in the TV series yet. They can string the fans along, letting them think it’s in the Roddenberry universe, because they haven’t yet unveiled the story. It’s still too early in this TV series to reveal a story point this big.

Canon or not?

Because I surmise that Discovery is set in its own CBS universe, which is entirely separate from the Roddenberry and the Kelvin universes, Discovery can be its own bubble show and do whatever it wants with its stories. It doesn’t need to follow any Trek lore or, indeed, anything to do with Trek. It can feel free to “make shit up” however it wishes. I’m fine with that as long as the show runners finally fess up to this. As it is now, trying to shoehorn Discovery into the Roddenberry universe where it doesn’t belong is just stupid.

To answer this blog’s ultimate question: Discovery is not canon in Roddenberry’s universe. It is canon in its newly created CBS universe. It’s possible that Discovery exists in the Kelvin universe (doubtful), where it may or may not be canon. The difficulty is that, as I said above, the motion picture canon is operated by Paramount and the TV series canon is operated by CBS. This means that never the twain shall meet. This fracturing of intellectual property rights was a horribly bad idea for Star Trek. It has now left this franchise with a fracture right down the middle of its canon. Show producers for Discovery can claim canon when what they’re doing clearly isn’t canon and cannot possibly be, unless the show is set in its own CBS universe (and that CBS universe ultimately has no canon except what Discovery has created so far).

Should I beta test Fallout 76?

Posted in best practices, botch, business, video gaming by commorancy on November 1, 2018

While I know that beta testing for Fallout 76 is already underway, let’s explore what it means to beta test a game and whether or not you should participate.

Fallout 76

Before I get into the nitty gritty details of beta testing, let’s talk about Fallout 76. Fallout 76, like The Elder Scrolls Online before it, is a massively multiplayer online role playing game (MMORPG). Like The Elder Scrolls Online, which offered an Elder Scrolls themed universe, Fallout 76 will offer a Fallout themed universe in an online landscape.

How the game ultimately releases is yet to be determined, but a beta test gives you a solid taste of how it will all work. Personally, I didn’t like The Elder Scrolls Online much. While it had the flavor and flair of an Elder Scrolls entry, the whole thing felt hollow and unconnected to the franchise. It also meant that Bethesda spent some very valuable time building this online game when they could have been building the next installment of The Elder Scrolls.

It is as yet undetermined how these online games play into the canon of The Elder Scrolls or, in Fallout 76’s case, in the Fallout universe. Personally, I see them as offshoots with only a distant connection. For example, The Elder Scrolls Online felt Elder Scrollsy, but without the deep solid connections and stories that go with building that universe. Instead, it was merely a multiplayer playground that felt like The Elder Scrolls in theme, but everything else was just fluff. I’m deeply concerned that we’ll get this same treatment from Fallout 76.

The Problem with Online Games

Online games have gotten a bad rap in recent years… and for good reason: game developers focus on the inclusion of silly things like character emoting and taking selfies. While these are fun little inclusions, they are by no means intrinsic to the fundamental play of an actual game.

Games should be about the story that unfolds… about why your character is there and how your character is important in that universe. When the game expands to include an online component, now it’s perhaps tens of thousands of people all on the server at the same time. So, how can each of these characters be important to that universe? The answer is, they can’t.

Having many characters all running around doing the “same” things in the universe all being told by the game that they are “the most important thing” to the survival of that universe is just ludicrous.

This leads to the “importance syndrome” which is present in any MMORPG. As a developer, you either acknowledge the importance syndrome and avoid it by producing a shallow multiplayer experience that entirely avoids player importance (i.e., Fortnite, Overwatch, Destiny, etc.) or you make everyone important, each in their own game (i.e., The Elder Scrolls Online). Basically, the game is either a bunch of people running around doing nothing important at all and simply trying to survive whatever match battles have been set up (boring and repetitive), or the game treats each user as if they are individually important in their own single player game, except there are a bunch of other users online, all doing the same exact thing.

The Elder Scrolls Online fell into the latter camp which made the game weird and disconnected, to say the least. It also made the game feel less like an Elder Scrolls game and more like any cheap and cheesy iPad knockoff game you can download for free… except you’ve paid $60 + DLC + online fees for it.

I’ve played other MMORPG games similar to The Elder Scrolls Online including Defiance. In fact, Defiance played so much like The Elder Scrolls Online, I could swear that Bethesda simply took Defiance’s MMORPG engine and adapted it to The Elder Scrolls Online.

Environments and Users

The secondary problem is how to deal with online users. In both The Elder Scrolls Online and Defiance, there were areas that were player versus environment (PvE). PvE areas mean that players cannot attack other players; only NPCs can attack your character, or your character can die from the environment (i.e., falling onto spikes). There were also some areas of the online map that were player versus player (PvP). PvP means any online player can attack any other online player in any way they wish.
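
Sketched out in Python, the rule reduces to something this simple (hypothetical names; real MMORPG code is far more involved, but this is the logic):

def can_attack(attacker_is_player, zone):
    # PvE zones: only NPCs and the environment can hurt a player.
    if zone == "PvE":
        return not attacker_is_player
    # PvP zones: any player may attack any other player.
    if zone == "PvP":
        return True
    raise ValueError("unknown zone type: " + zone)

print(can_attack(attacker_is_player=True, zone="PvE"))   # False
print(can_attack(attacker_is_player=True, zone="PvP"))   # True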

In The Elder Scrolls Online, the PvP area was Cyrodiil, which was unfortunate for ESO. The PvP made this territory mostly a dead zone for the game. Even though there were a few caves in the area and some exploring you could do, you simply couldn’t go dungeon diving there because as soon as you tried, some player would show up and kill your player. Yes, the NPCs and AI enemies could also show up and kill your player, but so could online players.

The difficulty with Cyrodiil was that a death at the hands of another player in the PvP area was treated entirely differently than a death by the environment. If another player killed your character, you had to respawn at a fort, which could place your character perhaps half a map away from where you were. If your character died by the environment or an NPC, you could respawn in the same location where your character died. This different treatment of character death was frustrating, to say the least.
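
Here’s that death handling written out as a hypothetical sketch; this is my reading of ESO’s behavior, not their actual code:

def respawn_point(killed_by_player, death_position, nearest_fort):
    # PvP death: forced back to a fort, perhaps half a map away.
    if killed_by_player:
        return nearest_fort
    # Environment or NPC death: respawn right where the character fell.
    return death_position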

With Fallout 76, I’m unsure how all of this will work, but it’s likely that Bethesda will adopt a similar strategy from what they learned in building The Elder Scrolls Online. This likely means both PvE areas and PvP area(s). Note that ESO only had one PvP zone, but had many PvE zones. This made questing easier in the PvE zones, but also caused the “importance syndrome”. This syndrome doesn’t exist in single player offline games, but is omnipresent in MMORPGs.

MMORPGs and Characters

The difficulty with MMORPGs is that they’re primarily just clients of a server based environment. The client might be a heavy client that handles rendering the character and environment graphics, but it is nonetheless still a client. This means that to play an MMORPG, you must log into the server. When you log in, your character information, bank account, level-ups, weapons, armor and so on are kept on the server.

This means that you can’t save off your character information. It also means you can’t mod your game or your character through game mods. Online games are strict about how you can change or manage your game and your character. In fact, these systems are so strict that if a new version of the game comes out, you must download and install the update before they’ll let you back onto the server… unlike standalone games, which let you play even if networking components are disabled. In other words, you cannot play an MMORPG until your client is current, which could mean 50GB and hours later.

This means that you’ll need an always on Internet connection to play Fallout 76 and you’ll need to be able to handle very large client downloads (even if you own the game disc).

Beta Testing

Many game producers, particularly for server based MMORPGs, like to offer players the chance to beta test their new game. Most online games allow for this.

However, I refuse to do this for game developers. They have a team they’ve hired to test their environments, quests and landscapes. I just don’t see any benefit in getting early access to their game environment. Sometimes, characters you build and grow in a beta won’t even carry over into the released game. This means that whatever loot you have found and whatever leveling you may have done may be lost when release day comes. In exchange for that early access, the developer will also expect you to submit bug reports. I won’t do that for them, and I don’t want to feel obligated to.

Bethesda stands to make millions of dollars off of this game. Yet, they’re asking me to log into their game early, potentially endure huge bugs preventing quest progress, potentially lose my character and all of its progress and also spend time submitting bug reports? Then, spend $60 to buy the game when it arrives? Then, rebuild my character again from scratch?

No, I don’t think so. I’m not about to spend $60 for the privilege of spending my time running into bugs and submitting bug reports for that game. You, the game developer, stand to make millions from this game. So, hire people to beta test it for you. Or, give beta testers free copies of the game in compensation for the work they’re doing for you.

If you’re a gamer thinking of participating in beta testing, you should think twice. Not only are you helping Bethesda to make millions of dollars, you’re not going to see a dime of that money and you’re doing that work for free. In addition, you’re still going to be expected to spend $60 + DLC costs to participate in the final released game. No, I won’t do that. If I’m doing work for you, you should pay me as a contractor. How you pay me for that work is entirely up to you, but the minimum payment should consist of a free copy of the game. You can tie that payment to work efforts if you like.

For example, for each report submitted and verified as a new bug, the beta tester gets $5 in credit toward the cost of the game, up to the full price of the game. This encourages beta testers to actually submit useful bug reports (i.e., duplicates or useless reports won’t count). It also means you earn your game as you report valid and useful bugs, and that you won’t have to pay for the game at all if you submit enough useful, genuine reports.
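
Here’s that proposed credit scheme written out; the numbers are mine from the example above, not any real program:

GAME_PRICE = 60.00       # full price of the game
CREDIT_PER_BUG = 5.00    # credit per verified, non-duplicate bug report

def beta_credit(verified_bugs):
    # Credit accumulates per valid report, capped at the game's full price.
    return min(verified_bugs * CREDIT_PER_BUG, GAME_PRICE)

print(beta_credit(3))    # 15.0
print(beta_credit(12))   # 60.0 (the game is fully earned)
print(beta_credit(20))   # 60.0 (still capped at full price)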

Unfortunately, none of these game developers offer such incentive programs and they simply expect gamers to do it “generously” and “out of the kindness of their hearts”. No, I’m not doing that for you for free. Pay me or I’ll wait until the game is released.

Should I Participate in Beta Tests?

As a gamer, this is why you should not participate in beta tests. Just say no to them. If enough gamers say no and fail to participate in beta releases, this will force game developers to encourage gamers to participate with incentive programs such as what I suggest above.

Unfortunately, there are far too many unwitting gamers who are more than willing to see the environment early without thinking through the ramifications of what they are doing. For all of the above reasons, this is why you should NEVER participate (and this is why I do not participate) in any high dollar game beta tests.

Game Review: Red Dead Redemption 2

Posted in botch, video gaming by commorancy on October 27, 2018

[Screenshot: Red Dead Redemption 2]

I was so wanting to like Red Dead Redemption 2 right out of the gate. For Rockstar, this game’s lengthy intro and dragging pace are a total misfire. Let’s explore.

A Horrible, Horrible Intro

The whole slow snow covered mountain terrain opening is an incredible fail for a game series like Red Dead Redemption. It’s so slow and rail based that I just want to toss the disc in the trash. This insipid opening doesn’t inspire me to want to “wait it out” for the “rest” of this game. All I desperately want to do is skip this opening and get through it as fast as possible. Unfortunately, not only is it unskippable, it’s ….

Slow, Slow, Slow

[Screenshot: Red Dead Redemption 2]

When following the rail based opening “stories”, even when you do manage to follow the correct path (a feat in and of itself), the pace is entirely too slow. I could run to the kitchen and make a sandwich in the time it takes to get from point A to point B in this game.

The horses run like they’re drugged. Even worse is the forced stamina meter on horses. This isn’t a simulation, it’s an RPG style “Old West” game. We don’t want to water and feed our horses so they can run fast, then have to stop and feed them again when they run out of “energy”. That’s akin to making us fill our GTA5 cars up with gas at in-game gas stations. Thankfully, we weren’t made to endure that stupidity in GTA5. Unfortunately, that stupidity is included in RDR2. We also don’t want our horses to run out of energy while running at full gallop. A stupid concept made stupider by its mere inclusion in this game.

The game seems like it’s running in slow motion. I’m not sure what’s going on here or why Rockstar thought this opening play style would be okay, but it isn’t. At least with GTA, when you got in a car, it was fast. Here, everything moves at a snail’s pace and the rail based gang quests are sheer torture. I just want this part to be over so I can finally get to the meat of the game.

Rockstar, let us skip these insanely boring, long and insipid intros. I don’t want to endure this crap. This opening is a horrible misfire for a game in a franchise like Red Dead Redemption. It’s fine if a tutorial opening takes 15-20 minutes. But when an opening takes 2 hours or more to get past, it’s entirely WAY TOO LONG. Cut it down… seriously.

Failed Intro Setup

I understand what Rockstar was trying to do with this opening. Unfortunately, it just doesn’t work. It’s fine to see the gang camaraderie being built, but it doesn’t take 2+ hours of snail’s pacing to do it. This dragged-out opening is horribly unnecessary.

I realize the opening of any game is typically tutorial city, but let me skip most of it. I don’t want to be told how to open a cabinet or how to sit down. I can figure this out on my own. Just show me the screen icon and let me do the rest. I don’t need little black boxes appearing in the corner telling me how to do the most simplistic things. It’s like Rockstar thinks we’ve never played a video game in our entire lives. Shit, it’s RDR 2, for crissake. It’s a sequel. We’ve likely already played RDR. I have.

Condescendingly hand-holding gamers through even the most basic of actions is as torturous as this far-too-slow-paced intro. Whoever greenlit this intro should be removed from producing future video games. Just get to the game already, Rockstar!


Camera

Batter Batter Batter… swing and a miss. And what a miss this one is for Rockstar! Let me start this section by saying there is no “photo mode” at all in this game. Instead, you have to obtain an “old timey” camera from some hack in a bar. Then, you have to equip it from your satchel. Only after you obtain and equip this camera can you actually take pictures in-game. Uh, no. I realize this is supposed to be some kind of immersion tactic, but having characters take photos for quests with an in-game camera should be entirely separate from having a photo mode built into the game for player use and sharing. A photo mode should be available from the moment gameplay begins. It shouldn’t be something that’s “found or earned” later in the game.

Rockstar again swung and missed on this one. Rockstar, next time, just add a photo mode into the game as part of the UI for the player to use from the start. If the player character needs a camera to take pictures for a quest, just make it disposable and disappear after the quest is completed.

The reason for having a photo mode is so you can offer features like exposure controls and filters, and allow bird’s-eye views of the environment. Limiting photos to the perspective of the character holding the camera is stupid and wasteful. We want to use an actual photo mode, not a character-acquired and limited camera.

Lighting and Graphics

I was actually expecting a whole lot more from the RAGE engine here. While Grand Theft Auto wasn’t perfect and didn’t always offer the most realistic results, the lighting did offer realistic moments, particularly with certain cars and certain building structures under certain daylight conditions. With Red Dead Redemption 2, I was expecting at least some improvement in the RAGE engine for 4K rendering. Nope. It seems that Rockstar simply grabbed the same engine used in GTA and plopped it right into Red Dead Redemption 2.

So far with Red Dead Redemption 2, I’m entirely underwhelmed with the indoor lighting model being used. “Wow” is all I can say, and that’s not “wow” in a good way. I’m not only underwhelmed by the realism of the character models themselves, but also by how the lighting falls on them. When a character opens his/her mouth, the teeth read as a child’s attempt at a drawing. It’s bad. B.A.D! Let’s take a look at RAGE’s poor quality indoor lighting:

[Screenshots: RDR2 indoor lighting]

The wood looks flat and dull. The clothing looks flat and dull. Metal doesn’t look like metal. Glass doesn’t look like glass. The faces just don’t read as skin. The skin on the characters looks shiny and plastic and, at the same time, flat and dull. The teeth look like a child’s drawing. Part of this is poor quality lighting, but part of it is poor quality models and textures. The three main character models in GTA5 (Michael de Santa, Trevor Philips and Franklin Clinton) looked way better than this, likely using the same RAGE engine. The RAGE engine is not aging well at all. Even the “sunlight rays” here look forced and unrealistic. This game looks like something I would have expected to see in 2004, not 2018. Let’s compare this to Ubisoft’s AnvilNext engine, which is night and day different:

[Screenshots: Assassin’s Creed Odyssey]

Wow! What a difference… (click to read Randocity’s Assassin’s Creed Odyssey review)

Screenshots vs Camera

And speaking of teeth… trying to get these Red Dead Redemption 2 screenshots is like pulling teeth. I have to position the gameplay camera just so. I can’t use the “old timey” camera for the above in-game shots, as there’s no way to get that camera into the proper position using the player character. Using the actual gameplay camera is always hit or miss. If the camera moves a little too far from or too close to a figure, it pops over the character and you can’t see them.

The point of adding a photo mode is positioning the camera exactly where you want it to get the best shot. It also allows you to use depth of field. I can’t do that in Red Dead 2; I’m limited to playing tricks with the camera placement and hoping it turns into a shot using the PS4’s share button. Not to mention, I have to spend time running to the menu to turn off HUD elements (the reason the map and the money are visible in one of the RDR2 screenshots).

R⭑, get with the program. It’s time to add a real photo mode to RAGE… a photo mode that offers so much more than the player character holding and using an “old timey” camera. It’s fine if the character needs an in-game camera for quest reasons, but it’s time for a real photo mode… which is how I captured all of the Assassin’s Creed Odyssey screenshots above. I should also point out that the reason for having a photo mode in a game is for the game player, not for the benefit of the in-game character. Adding a photo mode means you’re thinking of the gamer and how they want to capture and share their gameplay. By not including a photo mode and having such poor quality graphics, Rockstar shows that its interest is more in making money than in advancing RAGE technology to provide a next gen quality experience.

Red Dead Redemption 2 is a huge step backwards for realism in video games.


Meat of the Game

I’m finally past the torturous intro and I’m sad to say that the game itself is absolutely nothing like Red Dead Redemption. Red Dead Redemption was open prairies, tumbleweed and Arizona-like environments. Those environments worked tremendously well for “The Old West”. This game is lush green valleys with trees, forests and streams, which isn’t so great for setting an “Old West” ambiance. Even ignoring the wrong environmental setting for an “Old West” game, the game’s pacing is sheer torture to endure. The pacing in Red Dead Redemption was near perfect.

Here, the leisurely pace at which the player character moves and walks and how slowly the horse runs is totally wrong for this game and is *yawn* b.o.r.i.n.g. Again, this is nothing like Red Dead Redemption. I’m not looking for Lamborghini speeds here. But I am looking for a much quicker pace than the la-la-la leisurely pace of this game. In fact, this game’s pacing is so arduous, it makes you want to pop the game out and go do something else at a faster pace. Again, another total Rockstar misfire.

Town Bounties and Game Interference

Just for the sheer heck of it, while trying to relieve the boredom of the game’s slow pacing and lame story activities, I decided to have a shootout in Valentine, the first town you’re supposed to reach in this game. As you keep dying and racking up a higher and higher bounty, the game stupidly pushes your character farther and farther away from the town with each respawn. Game, if you don’t want the character doing this in a town, then just prevent it. Don’t respawn my character farther away from the town each time. Respawn the character where he fell and let me choose whether to leave or stay. This intentional interference is not only an asinine game design mechanic, it makes me want to break the game disc in half.

I’m merely trying to make the game at least somewhat more interesting and tolerable than the forced slow pacing… but then the game feels the need to frustrate and interfere with my efforts by sending my character farther and farther away from town. On top of that, once you get a bounty, the NPCs that come after you are practically unkillable. I’ve hit them with perhaps 5-10 shots from a shotgun (many times in the head) and they’re still getting up and shooting at me. There is absolutely no way that’s possible. I realize this is a game, but that’s taking the unrealistic nature of this game way too far. It’s not like they’re wearing Kevlar. If I shoot an NPC twice, they need to die. This includes any character, deputy or otherwise. These are not SWAT characters in Los Santos wearing police armor. It’s asinine how the game works this bounty mechanic by protecting the town residents.

If this game is truly supposed to offer RPG-style open world play, then I should be able to go into any town and have a gunfight with the entire town if I so choose… and the characters in the town need to die from a realistic number of bullets. It might make my character wanted, put a bounty on his head, turn him to the “dark side” or whatever, but I should be able to play this game on my own terms without the game interfering with my choice of play. By interfering with my choice of play, the game is specifically telling me that this isn’t what I’m supposed to be doing and that I should be following the story path laid out by the game developers. That’s the very definition of a rail-based game. That’s NOT an open world make-my-own-choices game.

Now, I do realize this interference is intended, but it takes away an important gamer choice… to play the game in any way the gamer chooses. If you’re going to offer guns and bullets, you need to make them count in the game. Bullets can’t be deadly in some situations and act as mere bee stings against other NPCs. Bullet damage must remain consistent against ALL NPCs under ALL conditions unless you implement a visible character level system.
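
To make the point concrete, here’s a minimal sketch of what consistent damage might look like (purely illustrative; this is not Rockstar’s code): one damage rule for every NPC, with toughness coming only from a visible level stat.

from dataclasses import dataclass

SHOTGUN_DAMAGE = 60
HEADSHOT_MULTIPLIER = 2.5

@dataclass
class NPC:
    name: str
    health: int = 100
    level: int = 1  # the only sanctioned source of toughness, visible to the player

    def take_hit(self, base_damage: int, headshot: bool = False) -> None:
        # Same formula for every NPC: deputy, townsfolk or outlaw alike.
        damage = base_damage * (HEADSHOT_MULTIPLIER if headshot else 1.0)
        # A visible level scales toughness transparently instead of silently
        # turning bounty-response NPCs into bullet sponges.
        self.health -= int(damage / self.level)

deputy = NPC("deputy")
deputy.take_hit(SHOTGUN_DAMAGE, headshot=True)
print(deputy.health <= 0)  # True: a level-1 character doesn't survive a clean headshot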

Because of the boring slow pace, the lame story elements (really? A tavern brawl is the best you can do?), the absolute crap hand-to-hand combat mechanic, the unkillable-NPC-bounty situation, the lackluster lighting, the game’s meddling interference in my choice of play, the poorly created character models and textures, the lack of a photo mode and the broken Social Club site, my 2-out-of-10-star rating firmly stands for this game.

An Utter Disaster

This game is a disaster for Rockstar. I guess every game studio is entitled to a dud. Most times, I can give some creative advice on how to improve a game. Here, I’m at such a loss that I can’t even begin to tell Rockstar how to get this hot mess back on track. I think it needed to go back to the drawing board. Oh well, my high hopes for this game have been utterly dashed. It’d be nice to get my money back. This game is crap. Avoid.


Graphics: 5 out of 10
Sound: 7 out of 10
Voice Acting: 2 out of 10
Brawling: 2 out of 10
Gunfights: 5 out of 10
Pacing and Stories: 1 out of 10
Stability: N/A

Overall Rating: 2 out of 10
Recommendation: Don’t buy. Avoid. If you must try it, rent only.

I’d actually rate it lower, but I’m giving it 2 stars for sheer effort. Let’s just forget all about this game and remember the fun we had with Red Dead Redemption.


Agree or disagree? Please leave a comment below and let me know what you think about Red Dead Redemption 2.


What does Reset Network Settings in iOS do?

Posted in Apple, botch, business, california by commorancy on October 25, 2018

If you’ve experienced networking issues with your iPad or iPhone, you may have called Apple for support. Many times they recommend that you “Reset Network Settings.” But, what exactly does this operation do? Let’s explore.

What’s included in this Reset Network Settings process?

This is a complicated answer and how it affects you depends on several factors. What this process does, in addition to resetting a bunch of locally stored settings on the iOS device itself, is delete network settings stored in your iCloud Keychain. If you have only an iPhone and own no other devices (i.e., no iPads, no Macs, no iPods, no Apple Watches, no Apple TVs, nothing else), resetting these settings will likely work just fine for you.

However, if you own or use multiple Apple devices and these devices participate in iCloud Keychain, things can get complicated… very, very complicated. The “or use” statement is the one that makes this process much more complicated. If you have a work Mac computer that’s hooked up to your Apple ID and is participating in iCloud Keychain, performing “Reset Network Settings” on an iPhone can become problematic for your work computer. How? First, let’s find out more about iCloud Keychain.

iCloud Keychain

What is iCloud Keychain? This is an iCloud network service that stores sensitive passwords and credit card information in a secure way. This iCloud service also lets multiple iOS, MacOS, tvOS and WatchOS devices participate and use this data as part of your Apple ID. If you own multiple Apple devices, they can all share and use this same set of sensitive data without having to enter it individually on each device (convenience).

Your iCloud Keychain is specific to your Apple ID, which is protected by your Apple ID login and password. The iCloud Keychain was designed both for convenience (all devices can share data) and for security, in that this data is protected behind your Apple ID credentials.

When you “Reset Network Settings” on any iOS (or possibly even MacOS, tvOS or WatchOS) device and your devices participate in iCloud Keychain synchronization, that reset can cause networking issues for all of your devices. Why?

The iCloud Keychain stores WiFi access point names (SSIDs) and passwords. Not only that, it also stores credit cards that you might use with Apple Pay (this becomes important later). When you run “Reset Network Settings” on any iOS device, it will wipe all access point SSIDs and passwords from your iCloud Keychain.

You might be asking, “Why is this a problem?” This will become a problem for all devices participating in iCloud Keychain. All of your Apple devices share in using this SSID and password data from your iCloud Keychain. This is important to understand. Because of this level of sharing, it only takes one device to learn of an access point for all Apple devices to use that network when in range. For example, if you bring your Mac to a convention and log it into an access point at the convention, your Mac logs this access point data to the iCloud Keychain. Your phone will immediately pick up on this new access point addition and also connect to that access point using the newly stored password as soon as it finds it… automagically.

Likewise, it only takes one device to wipe an access point and all devices lose access to it. It’s a single shared location for this networking data. One device adds it, all can use it. One device deletes it, all devices forget about it. Is this a good idea? You decide.
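
Conceptually (and this is purely an illustrative sketch, not Apple’s actual implementation), the Keychain behaves like a single shared table that every device on the Apple ID reads and writes:

# A conceptual sketch of iCloud Keychain's sharing semantics: one shared
# store, many devices. Illustrative only; this is not Apple's implementation.
shared_keychain = {}  # SSID -> password, shared by every device on the Apple ID

def device_learns_access_point(ssid, password):
    # Any one device joining a network stores it for all devices.
    shared_keychain[ssid] = password

def device_forgets_access_point(ssid):
    # Any one device deleting a network deletes it for all devices.
    shared_keychain.pop(ssid, None)

device_learns_access_point("ConventionWiFi", "badge1234")  # the Mac joins it
print("ConventionWiFi" in shared_keychain)  # True: the iPhone can now join too

device_forgets_access_point("ConventionWiFi")  # one device wipes it...
print("ConventionWiFi" in shared_keychain)  # False: ...and every device forgets it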

Reset Network Settings and Multiple Devices

Here’s where things get complicated with iCloud Keychain. If you are having network troubles with your iPhone, you might be requested by Apple Support to “Reset Network Settings”.

If all of your MacOS, tvOS, iOS and WatchOS devices participate in iCloud Keychain and you actually perform “Reset Network Settings” on your iPhone, it will wipe not only the current access point, but every access point that every device is aware of. It returns your network settings on iOS (and in iCloud Keychain) to a clean slate to start it over. It does this to try and clear out any problematic network settings. It also deletes known access points from the iCloud Keychain. This wipes access to this data for ALL of your Apple devices, not just the one you performed “Reset Network Settings” on.

What this means is that every device participating in iCloud Keychain will lose access to ALL access points that had previously been known because they have been deleted as part of “Reset Network Settings”. If your iOS device knew of all access points, they will ALL be wiped from iCloud Keychain. This means that every device will immediately lose access to its current access point. It also means that every Apple device you own must now be touched to reselect a new access point requiring you to reenter the password for that access point… On. Every. Apple. Device!

For example, I own two Macs, two iPads, three iPhones and two iPod Touches. A “Reset Network Settings” from a single device means I will need to go and manually touch 9 different devices to reconnect them to WiFi after a single iOS device performs a “Reset Network Settings” operation. It requires this because every device has lost access to even its home network, which means no access to iCloud Keychain… which means touching every device to get them back onto a WiFi network.

For me, it was even more complicated than the mere hassle of setting up WiFi on every device. The reset wiped known access points created by my employer on my Mac, which had been put into my iCloud Keychain… access points whose names and passwords I didn’t know. Thankfully, I was able to recover this data from a co-worker’s Mac and get back onto my corporate network. Otherwise, I’d have been down at my IT team’s desk asking them to fix my Mac… all as a result of performing “Reset Network Settings” on my iPhone.

Horrible, horrible design.

Avoiding This Problem

Can this problem be avoided? Possibly. If you turn off iCloud Keychain on your iOS device BEFORE you perform “Reset Network Settings”, it may avoid wiping the data in the iCloud Keychain. I say “may” because after you take the device out of iCloud Keychain, reset the network settings and then rejoin it to iCloud Keychain, it may propagate the differences at the time the device rejoins. Hopefully not. Hopefully, the newly reset device will ONLY download the existing data in the iCloud Keychain without making any modifications to it. With Apple, you never know.

The secondary issue is that removing your iPhone from iCloud Keychain may remove stored credit cards. This may mean reentry of all of your credit cards after you have “Reset Network Settings” and after you have rejoined your device to the iCloud Keychain. This may also depend on iOS version. I just tried removing iCloud Keychain, then performed “Reset Network Settings”, then rejoined iCloud Keychain and all my cards are still intact on the device. If you’re on iOS 11 or iOS 10, your results may vary.

Why is this a problem?

First off, I don’t want to have to go touch many devices after a single device reset. That’s just stupid. Second, removing the device from iCloud Keychain to perform “Reset Network Settings” will wipe all of your current credit card data from the device and likely from the iCloud Keychain. Third, Apple needs to fix their shit and allow more granularity in what “Reset Network Settings” wipes. In fact, it shouldn’t even touch iCloud Keychain data. It should wipe only locally stored information on the device and then see if that works. If that doesn’t work, then wipe the data in iCloud Keychain, but only as a LAST RESORT!

I understand that Apple seems to think that wiping all network data (including what’s in iCloud Keychain) might solve “whatever the problem is”, but that’s just a sledgehammer. If what’s stored in iCloud Keychain were the problem, my 8 other devices should be experiencing the same issue as well. It’s basically stupid Apple troubleshooting logic.

As I mentioned, disabling iCloud Keychain may unregister your credit cards from your device (and from the Keychain). I know this was the case in iOS 11, but iOS 12 no longer seems to require this. I definitely don’t want to have to rescan all of my credit cards onto my iOS device again to restore them. It takes at least 30 minutes to do this with the number of cards I have to input. With the Apple Watch, this process is horribly unreliable and lengthy. It can sometimes take over an hour diddling with Bluetooth timeouts and silly unreliability problems to finally get all of my cards back onto the Watch (in addition to the iPhone).

Such time-wasting problems over a single troubleshooting step that should be extremely straightforward and easy. Horrible, horrible design.

Representatives and Suggestions

If you’re talking to an Apple representative about a networking problem and they suggest that you “Reset Network Settings”, you should refer them to this article so they can better understand what it is they are asking you to do.

Neither Apple Support nor any of your phone carrier’s support teams will warn you of this iCloud Keychain problem when requesting “Reset Network Settings.” They will ask you to perform this step as though it’s some simple little step. It’s not!

Whenever Apple asks me to perform the “Reset Network Settings” troubleshooting step, I always decline citing this exact problem. Perhaps someone at Apple will finally wake up and fix this issue once and for all. Until then, you should always question Apple’s troubleshooting methods before blindly following them.

How to disable iCloud Keychain

To disable the iCloud Keychain on your iOS device, go to …

Settings=>Your Name=>iCloud=>Keychain

… and toggle it off. “Your Name” is literally your name; it is located at the very top of Settings. Once toggled off, it will likely unregister the credit cards stored on your iOS device, but I guess that’s a small price to pay if you really need to reset these network settings to restore your networking to 100% functionality. Of course, there’s no guarantee that “Reset Network Settings” or jumping through any of these hoops will solve the problem. There’s also the possibility that “Reset Network Settings” could still screw with your iCloud Keychain even if you disable it beforehand.

With Apple, your mileage may vary.

How to Reset Network Settings

Settings=>General=>Reset=>Reset Network Settings

If you own multiple Apple devices and they are using iCloud Keychain, don’t perform this step first. Instead, disable iCloud Keychain first (above), then perform this step. If you only own one Apple device, there is no need to disable iCloud Keychain.

Network Problems and Quick Fixes

In my most recent case of being prompted to “Reset Network Settings”, my phone’s Wi-Fi calling feature simply stopped working. I first called T-Mobile and they referred me to “Reset Network Settings” (based on Apple’s documentation) and they also referred me to Apple Support. Because I already knew about the iCloud Keychain problem from a previous inadvertent wipe of all of my network access points, this time I opted to turn off iCloud Keychain before attempting “Reset Network Settings.” Suffice it to say that “Reset Network Settings” didn’t do a damned thing, as I full well expected.

In fact, I tried many options prior to “Reset Network Settings”. These included:

  • Disabling and enabling Wi-Fi calling
  • Joining a different access point
  • Restarting my Comcast modem
  • Restarting my network router
  • Restarting my Apple Airport
  • Restarting my phone
  • Hard restarting my phone
  • Disabling and enabling Wi-Fi
  • Dumping Sysdiagnose logs and digging through them
  • Killing and restarting the Phone app

I tried all of the above and nothing resolved the issue. No, not even “Reset Network Settings”.

I recall reading a year or two back that sometimes Airplane Mode can resolve some network connectivity issues. I’m not sure exactly what Airplane Mode actually does under the hood in detail, but it appears to modify a bunch of configs and disable all networking including Cellular, Wi-Fi, Bluetooth and anything else that performs networking.

Once Airplane Mode was enabled, I allowed the phone to sit for 30 seconds to make sure all components recognized Airplane Mode. Then, I disabled Airplane Mode. Almost immediately, the phone’s menu bar said ‘T-Mobile Wi-Fi’. Wow, that actually worked.

If you’re having networking problems on your iPhone, try enabling then disabling Airplane Mode instead of “Reset Network Settings”. At least, it’s worth a try before resorting to disabling iCloud Keychain followed by “Reset Network Settings”.

iOS 11 vs 12

The first time I experienced my issue with the iCloud Keychain and “Reset Network Settings”, I was using iOS 11. I’m firmly of the “Once Bitten, Twice Shy” school. This means I haven’t tested this on iOS 12 to see if Apple has changed their ways. It’s very doubtful they have, and very likely this problem still persists even in the most current version of iOS.

Design Rant Mode On

Apple seems to be under the delusion that we’re still living in a one-device-ownership world. We’re not. We now own Macs, Apple TVs, Watches, iPhones and iPads that all rely on their multi-device services, such as iCloud Keychain. To design a feature that can wipe data shared by multiple devices is not only the very definition of shit software, it’s also the very definition of a shit company that hasn’t the first clue of what the hell they’ve actually built.


If this article is helpful to you, please leave a comment below.


How to iCloud unlock an iPad or iPhone?

Posted in botch, business, california by commorancy on October 21, 2018

A lot of people seem to be asking this question. So, let’s explore whether there are any solutions to the iCloud unlock problem.

Apple’s iCloud Lock: What is it?

Let’s examine what exactly an iCloud lock is. When you use an iPhone or iPad, a big part of that experience is using iCloud. You may not even know it. You may not realize how much of iCloud you are actually using (which is how Apple likes it) as it is heavily integrated into every Apple device. The iCloud service uses your Apple ID to gain access. Your Apple ID consists of your username (an email address) and a password. You can enable extended security features like two factor authentication, but for simplicity, I will discuss devices using only a standard login ID and password… nothing fancy.

iCloud is Apple’s cloud network services layer that supports synchronization between devices of things like calendars, email, contacts, phone data, iMessage, iCloud Drive, Apple Music, iTunes Playlists, etc. As long as your Apple ID remains logged into these services, you will have access to the same data across all of your devices. Note, your devices don’t have to use iCloud at all. You can disable it and not use any of it. However, Apple makes it terribly convenient to use iCloud’s services, including such features as Find my iPhone, which allows you to lock or erase your iPhone if it’s ever lost or stolen.

One feature that automatically comes along for the ride when using iCloud services is an iCloud lock. If you have ever logged your iPhone or iPad into iCloud, your device is now locked to your Apple ID. This means that if it’s ever lost or stolen, no one can use your device because it is locked to your iCloud Apple ID and locked to Find my iPhone for that user (which I believe is now enabled by default upon logging into iCloud).

This also means that any recipient of such an iCloud locked device cannot use that device as their own without first disassociating that device from the previous Apple ID. This lock type is known as an iCloud lock. This type of Apple lock is separate from a phone carrier lock, which limits which carriers a phone can be used with. Don’t confuse or conflate the two.

I should further qualify what “use your device” actually means after an iCloud lock is in place. A thief cannot clean off your device and then log it into their own Apple ID and use the phone for themselves. Because the phone is iCloud locked to your account, it’s locked to your account forever (or until you manually disassociate it). This means that unless you explicitly remove the association between your Apple ID and that specific device, no one can use that device again on Apple’s network. The best a would-be thief can do with your stolen phone is open it up and break it down for limited parts. Or, they can sell the iCloud locked device to an unsuspecting buyer before the buyer has a chance to notice that it’s iCloud locked.

Buying Used Devices

Because the iCloud lock is an implicit and automatic feature enabled simply by using iCloud services, if you’re thinking of buying a used iPhone from an individual or any online business that is not Apple, you will always need to ask the seller whether the device is iCloud unlocked before you pay. Or, more specifically, you will need to ask if the previous owner of the device has logged out and removed the device from Find my iPhone and all other iCloud and Apple ID services. If this action has not been performed, then the device will remain iCloud locked to that specific Apple ID. In that case, avoid the purchase and look for a reputable seller.

What this means to you as a would-be buyer of used Apple products is that you need to check for this problem immediately, before you walk away from the seller. If the battery on the device is dead, walk away from the sale. If you’re buying a device sight unseen over the Internet, you should be extremely wary before clicking ‘Submit’. In fact, I’d recommend not buying used Apple equipment from eBay or Craigslist because of how easy it is to buy bricked equipment and lose your money. Anything you buy from Apple shouldn’t be a problem. Anything you buy from a random third party, particularly if they’re in China, might be a scam.

Can iCloud Lock be Removed?

Technically yes, but none of the solutions are terribly easy or in some cases practical. Here is a possible list of solutions:

1) This one requires technical skills, equipment and repair of the device. With this solution, you must take the device apart, unsolder a flash RAM chip, reflash it with a new serial number, then reassemble the unit.

Pros: This will fix the iPad or iPhone and allow it to work
Cons: May not work forever if Apple notices the faked and changed serial number. If the soldering job was performed poorly, the device hardware could fail.

Let’s watch a video of this one in action:

2) Ask the original owner of the device, if you know who they are, to disassociate the iDevice from their account. This will unlock it.

Pros: Makes the device 100% functional. No soldering.
Cons: Requires knowing the original owner and asking them to disassociate the device.

3) Contact Apple with your original purchase receipt and give Apple all of the necessary information from the device. Ask them to remove the iCloud lock. They can iCloud unlock the device if they so choose and if they deem your device purchase as valid.

Pros: Makes the device 100% functional.
Cons: Unlocking Apple devices through Apple Support can be difficult, if not impossible. Your mileage may vary.

4) Replace the logic board in the iPad / iPhone with one from another device. Again, this one requires repair knowledge, tools, experience and the necessary parts.

Pros: May restore most functionality to the device.
Cons: Certain features, like the touch ID button and other internal systems may not work 100% after a logic board replacement.

As you can see, none of these are particularly easy, but none are impossible either. If you’re not comfortable cracking open your gear, you might need to ask a repair center if they can do any of this for you. However, reflashing a new serial number might raise eyebrows at some repair centers, with the assumption being that your device is stolen. Be careful when asking a repair center to perform #1 above for you.

iCloud Locking

It seems that the reason the iCloud Lock came into existence is to thwart thieves. Unfortunately, it doesn’t actually solve that problem. Instead, it creates a whole new set of consumer problems. Not only are would-be thieves still stealing iPads, they’re selling these devices iCloud locked to unsuspecting buyers and scamming them out of their money. The thieves don’t care. The only thing this feature does is screw used device buyers out of their money.

Thieves

That Apple thought they could stop thievery by implementing the iCloud lock shows just how idealistically naïve Apple’s technical team really is. Instead, they created a whole new scamming market for iCloud locked Apple devices. In fact, the whole reason this article exists is to explain this problem.

For the former owner of an iPad which was stolen, there’s likely no hope of ever getting it back. The iCloud lock feature does nothing to identify the thief or return stolen property to its rightful owner. The iCloud lock is simply a tiny nuisance to the thief and would-be scammer. As long as they can get $100 or $200 for selling an iCloud locked iPad, they don’t care that it’s iCloud locked. The existence of this feature makes no difference at all to a thief.

It may reduce the “value” of the stolen property some, but not enough to worry about. If it was five-finger discounted, then any money gotten is money gained, even if it’s a smaller amount than anticipated. For thieves, the iCloud lock does absolutely nothing to stop thievery.

Buyers

Here’s the place where the iCloud lock technology hurts the most. Instead of thwarting would-be thieves, it ends up placing the burden of the iCloud lock squarely on the consumer. If you are considering buying a used device, which should be a simple straightforward transaction, you now have to worry about whether the device is iCloud locked.

It also means that buying an iPhone or iPad used could scam you out of your money if you’re not careful. It’s very easy to buy these used devices sight unseen from online sellers. Yet, when you get the box open, you may find the device is iCloud locked to an existing Apple ID. At that point, unless you’re willing to jump through one of the four hoops listed above, you may have just been scammed.

If you can’t return the device, then you’re out money. The only organization that stands to benefit from the iCloud lock is Apple and that’s only because they’ll claim you should have bought your device new from them. If this is Apple’s attempt at thwarting or reducing used hardware sales, it doesn’t seem to be working. For the consumer, the iCloud lock seems intent on harming consumer satisfaction for device purchases of used Apple equipment… a market that Apple should want to exist because it helps them sell more software product (their highest grossing product).

Sellers

For honest sellers, an iCloud lock makes selling used iPads and iPhones a small problem. For unscrupulous sellers, there is no problem here at all. An honest seller must make sure that the device has been disassociated from its former Apple ID before putting the item up for sale. If an honest seller doesn’t know the original owner and the device is locked, it should not be sold. For the unscrupulous seller, the situation becomes one of selling locked gear and potentially trafficking stolen goods.

It should be said that it is naturally assumed that an iCloud locked device is stolen. It makes sense. If the owner had really wanted the item sold as used, they would have removed the device from iCloud services… except that Apple doesn’t make this process at all easy to understand.

Here’s where Apple fails would-be sellers. Apple doesn’t make it perfectly clear that selling the device requires removing the Apple ID information fully and completely from the device. Even wiping the device doesn’t always do this as there are many silent errors in the reset process. Many owners think that doing a wipe and reset of the device is enough to iCloud unlock the device. It isn’t.

As a would-be seller and before wiping it, you must go into your iPad or iPhone and manually remove the device from Find my iPhone and log the phone out of all Apple ID services. This includes not only logging it out of iCloud, but also logging out of iTunes and Email and every other place where Apple requires you to enter your Apple ID credentials. Because iOS requires logging in separately to each of these services, you must log out of each of these services separately on the device. Then, wipe the device. Even after all of that, you should double check Find my iPhone from another device to make sure the old device no longer shows up there. In fact, you should walk through the setup process once, to the point where it asks you for your Apple ID, to confirm the device is not locked to your Apple ID.

This is where it’s easy to sell a device thinking you’ve cleared it all out when you actually haven’t. It also means a device can be legitimately sold as used, but because it wasn’t properly removed from iCloud, it looks stolen. Instead, Apple needs to offer a ‘Prep for Resell’ option in Settings. This setting would not only wipe the device in the end, but would also 100% ensure an iCloud unlock of the device and log it out of all logged-in Apple ID services. It would truly wipe the device clean as though it were an unregistered, brand new device. If it’s a phone, it should also carrier unlock the device so that it can accept a SIM card from any carrier.
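
If such a feature existed, its flow might look something like this sketch (entirely hypothetical; Apple offers no such API and every name here is invented for illustration):

# An entirely hypothetical sketch of what a 'Prep for Resell' flow could do.
# Apple offers no such API; every name here is invented for illustration.
class Device:
    def __init__(self):
        self.services = {"find_my_iphone", "icloud", "itunes", "email"}
        self.carrier_locked = True
        self.wiped = False

    def sign_out(self, service):
        self.services.discard(service)  # each service must be logged out separately

    def prep_for_resell(self):
        for service in list(self.services):
            self.sign_out(service)       # remove every Apple ID association first
        self.carrier_locked = False      # accept a SIM card from any carrier
        self.wiped = True                # erase user data last, after all sign-outs

    def is_ready_for_buyer(self):
        return not self.services and not self.carrier_locked and self.wiped

phone = Device()
phone.prep_for_resell()
print(phone.is_ready_for_buyer())  # True: nothing left to lock the next owner out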

Apple makes it very easy to set up brand new devices, but Apple makes it equally difficult to properly clear off a device for resale. Apple should make this part a whole lot easier for would-be sellers. If need be, maybe Apple needs to sell a reseller toolkit to scan and ensure devices are not only iCloud unlocked, but run diagnostic checks to ensure they are worthy of being sold.



If you like what you’ve just read, please leave a comment below and give me your experiences.


Software Engineering and Architecture

Posted in botch, business, Employment by commorancy on October 21, 2018

Here’s a subject with which I’m all too familiar and one that is in need of commentary. Since my profession is technical in nature, I’ve definitely run into various issues regarding software engineering, systems architecture and operations. Let’s explore.

Software Engineering as a Profession

One thing that software engineers like is to be able to develop their code on their local laptops and computers. That’s great for rapid development, but it causes many problems later, particularly when it comes to security, deployment, systems architecture and operations.

For a systems engineer / devops engineer, the problem arises when that code needs to be productionalized. This is fundamentally a problem with pretty much any newly designed software system.

Having come from a background of systems administration, systems engineering and devops, I know there’s a lot to be considered when deploying freshly designed code.

Designing in a Bubble

I’ve worked in many companies where development occurs offline on a notebook or desktop computer. The software engineer has built out a workable environment on their local system. The problem is, this local environment doesn’t take into account certain constraints which may be in place in a production environment, such as internal firewalls, ACLs, web caching systems, software version differences, lack of compilers and other such security or software constraints.

What this means is that, far too many times, deploying the code for the first time is fraught with problems. Specifically, problems that were not encountered on the engineer’s notebook… and failures that are sometimes extremely bad. In fact, many of these failures are silent (the worst kind), where everything looks like it’s functioning normally, but the code is sending its data into a black hole and nothing is actually working.

This is the fundamental problem with designing in a bubble without any constraints.

I understand that building something new is fun and challenging, but not taking into account the constraints the software will be under when finally deployed is naive at best and reckless at the very worst. It also makes life as a systems engineer / devops engineer a living hell for several months until all of these little failures are sewn shut.

It’s like receiving a garment that looks complete, but on inspection, you find a bunch of holes all over that all need to be fixed before it can be worn.

Engineering as a Team

To me, this situation means that the software engineer is not a team player. They might be playing on the engineering team, but they’re not playing on the company team. Part of software design is designing for the full use case of the software, including not only code authoring, but systems deployment.

If systems deployment isn’t your specialty as a software engineer, then bring in a systems engineer and/or devops engineer to help guide your code during the development phase. Designing without taking the full scope of that software release into consideration means you didn’t earn your salary and you’re not a very good software engineer.

Yet, Silicon Valley is willing to pay top dollar for “Principal Engineers” who fail to do this part of their jobs.

Building and Rebuilding

It’s entirely a waste of time to get to the end of a software development cycle and claim “code complete” when that code is nowhere near complete. I’ve had so many situations where software engineers toss their code to us as complete and expect the systems engineer to magically make it all work.

It doesn’t work that way. Code works when it’s written in combination with understanding of the architecture where it will be deployed. Only then can the code be 100% complete because only then will it deploy and function without problems. Until that point is reached, it cannot be considered “code complete”.

Docker and Containers

More and more, systems engineers want to get out of the long, drawn-out business of integrating square code into a round production hole. Eventually, after much time has passed, molding the code into that round hole is possible, but it usually takes months. Months that could have been avoided if the software engineer had designed the code in an environment where the production constraints exist.

That’s part of the reason for containers like Docker. When a container like Docker is used, the whole container can be deployed without thought to square pegs in round holes. Instead, whatever flaws are in the Docker container are there for all to see because the developer put them there.
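
As a sketch of why this helps, here’s what a minimal container recipe might look like (the base image, file names and versions here are illustrative assumptions, not from any real project). The code ships with its own runtime, dependencies and security posture, so production runs exactly what the developer built and tested:

# A minimal, illustrative container recipe. Base image, file names and
# versions are assumptions, not from any real project.
FROM python:3.12-slim

WORKDIR /app

# Dependencies are declared and versioned alongside the code, not assumed
# to already exist on the production host.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Run as a non-root user to respect production security constraints.
RUN useradd --create-home appuser
USER appuser

EXPOSE 8080
CMD ["python", "app.py"]

Whatever is wrong inside that container is at least visible and reproducible, rather than discovered piecemeal during deployment.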

In other words, the middle folks who take code from engineering and mold it onto production gear don’t relish the thought of ironing out hundreds of glitchy problems until it all seamlessly works. Sure, it’s a job, but at some level it’s also a bit janitorial, wasteful and unnecessary.

Planning

Part of the reason for these problems is the delineation between the engineering teams and the production operations teams. Because many organizations separate these two functional teams, it forces the above problem. Instead, these two teams should be merged into one and work together from project and code inception.

When a new project needs code to be built that will eventually be deployed, the production team should be there to move the software architecture onto the right path and be able to choose the correct path for that code throughout its design and building phases. In fact, every company should mandate that its software engineers be a client of the operations team. Meaning, they’re writing code for operations, not the customer (even though the features eventually benefit the customer).

The point here is that the code’s functionality is designed for the customer, but deploying and running that code is entirely the operations team’s job. Yet, so many software engineers don’t give a single thought to how much the operations team will be required to support that code going forward.

Operational Support

For every component needed to support a specific piece of software, there needs to be a likewise knowledgeable person on the operations team to support that component. Not only do they need to understand that it exists in the environment, they need to understand its failure states, its recovery strategies, its backup strategies, its monitoring strategies and everything else in between.

This is also yet another problem that software engineers typically fail to address in their code design. Ultimately, your code isn’t just to run on your notebook for you. It must run on a set of equipment and systems that will serve perhaps millions of users. It must be written in ways that are fail safe, recoverable, redundant, scalable, monitorable, deployable and stable. These are the things that the operations team folks are concerned with and that’s what they are paid to do.
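
To make two of those concerns concrete, here’s a minimal sketch (all names and ports are illustrative assumptions): a health endpoint that monitoring can poll, and a retry helper so transient failures recover on their own instead of paging someone at 3 AM.

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def call_with_retries(operation, attempts=3, base_delay=0.5):
    # Retry a flaky dependency with exponential backoff so transient
    # failures recover by themselves rather than becoming incidents.
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; let monitoring surface the failure
            time.sleep(base_delay * (2 ** attempt))

class HealthHandler(BaseHTTPRequestHandler):
    # A /healthz endpoint gives load balancers and monitors something to poll.
    def do_GET(self):
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()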

Each new code deployment makes the environment just that much more complex.

The Stacked Approach

This is an issue that happens over time. No software engineer wants to work on someone else’s code. Instead, it’s much easier to write something new and from scratch. That’s easy for the software engineer, but it’s difficult for the operations team. As these new pieces of code get written and deployed, they drastically increase the technical debt and burden on the operations staff. Meaning, the problems get pushed off onto the operations team, which must continue supporting more and more components if none ever get rewritten or retired.

In one organization where I worked, we had such an approach to new code deployment. It made for a spider’s web mess of an environment. We had so many environments and so few operations staff to support them that the on-call staff were overwhelmed by the incessant pages from so many of these components.

That’s partly because the environment was unstable, but that’s partly because it was a house of cards. You shift one card and the whole thing tumbles.

Software stacking might seem like a good strategy from an engineering perspective, but only because the software engineers don’t have to provide first-line support for it. Sometimes they don’t have to support it at all. Yes, stacking makes code writing and deployment much simpler.

How many times can an engineering team do this before the house of cards tumbles? Software stacking is not an ideal that any software engineering team should endorse. In fact, it simply comes down to laziness. You’re a software engineer because writing code is hard, not because it is easy. You should always do the right thing even if it takes more time.

Burden Shifting

While this is related to software stacking, it is separate and must be discussed separately. We called this problem “throwing shit over the fence”. It happens a whole lot more often than one might like to realize. When designing in a bubble, it’s really easy to call “code complete” and “throw it all over the fence” as someone else’s problem.

While I understand this behavior, it has no place in any professionally run organization. Yet, I’ve seen so many engineering team managers endorse this practice. They simply want their team off of that project because “their job is done”, so they can move them onto the next project.

You can’t just throw shit over the fence and expect it all to just magically work on the production side. Worse, I’ve had software engineers actually ask my input into the use of specific software components in their software design. Then, when their project failed because that component didn’t work properly, they threw me under the bus for that choice. Nope, that’s not my issue. If your code doesn’t work, that’s a coding and architecture problem, not a component problem. If that open source component didn’t work in real life for other organizations, it wouldn’t be distributed around the world. If a software engineer can’t make that component work properly, that’s a coding and software design problem, not an integration or operational problem. Choosing software components should be the software engineer’s choice; use whatever is necessary to make the software system work correctly.

Operations Team

The operations team is the lifeblood of any organization. If the operations team isn’t given the tools to get their job done properly, that’s a problem with the organization as a whole. The operations team is the third hand recipient of someone else’s work. We step in and fix problems many times without any knowledge of the component or the software. We do this sometimes by deductive logic, trial and error, sometimes by documentation (if it exists) and sometimes with the help of a software engineer on the phone.

We use all available avenues at our disposal to get that software functioning. In the middle of the night the flow of information can be limited. This means longer troubleshooting times, depending on the skill level of the person triaging the situation.

Many organizations treat their operations team as a bane, as a burden, as something that shouldn’t exist but does out of necessity. Instead of treating the operations team as second class citizens (a degrading view that typically comes top down from the management team), treat this team with all of the importance it deserves. The operations team is not a burden, nor is it simply there out of necessity. It exists to keep your organization operational and functioning. It keeps customer data accessible, reliable, redundant and available. It is responsible for long term backups, storage and retrieval. It’s responsible for the security of that data and for making sure spying eyes can’t get to it. It is ultimately responsible for making sure the customer experience remains at a high standard of excellence.

If you recognize this problem in your organization, it’s on you to try and make change here. Operations exists because the company needs that job role. Computers don’t run themselves. They run because of dedicated personnel who make it their job and passion to make sure those computers stay online, accessible and remain 100% available.

Your company’s uptime metrics are directly impacted by the quality of your operations team staff members. These are the folks using the digital equivalent of chewing gum and shoelaces to keep the system operating. They spend many a sleepless night keeping these systems online. And, they do so without much, if any thanks. It’s all simply part of the job.

Software Engineer and Care

It’s on each and every software engineer to care about their fellow co-workers. Tossing code over the fence assuming there’s someone on the other side to catch it is insane. It’s an insanity that has run for far too long in many organizations. It’s an insanity that needs to be stopped and the trend needs to reverse.

In fact, by merging the software engineering and operations teams into one, it will stop. It will stop by merit of having the same bosses operating both teams. I’m not talking about at the VP level only. I mean that software engineering managers need to take on the operational burden of the components they design and build. They need to understand and handle day-to-day operations of these components. They need to wear pagers and understand just how much operational work their component creates.

Only then can engineering organizations change for the positive.


As always, if you can identify with what you’ve read, I encourage you to like and leave a comment below. Please share with your friends as well.


Cytokine Storm Syndrome: The Drug Trial That Went Wrong

Posted in botch, business, medical by commorancy on October 13, 2018

Here’s a story about six men who, in 2006, endured the fight for their lives after a drug trial went horribly wrong. The above program’s runtime is 58m 15s. Let’s explore.

Method of Action

As soon as the method of action of this drug was revealed in this documentary, my first thought was, “Uh oh”. Trying to teach the immune system to do anything is somewhat akin to attempting to steer a flood away from a town. The immune system attacks foreign invaders. That they injected this drug not knowing exactly how many receptors it might bind to was a severe “UH OH” moment before I even watched this. I already know how unpredictable the immune system can be. To intentionally try to tame the immune system to solve a medical problem is essentially playing with fire.

Too Many Mistakes

There were a number of mistakes made during this trial as well.

  • Not enough separation between patient injections
  • When reactions began to occur, the trial should have been halted until the extent of each injected patient’s reaction was determined. Isn’t the point to document the reactions?
  • Waiting too long to determine the problem and attempt countermeasures
  • The trial doctor was horribly uninformed about possible reactions
    • Because the doctor was uninformed of the side effects, the facilities were ill prepared to handle what came after
    • Not enough drugs or equipment on hand to handle medical complications

Trial Paradigm Failure?

The 10-minute separation between patient injections was far too quick a succession to fully understand how the drug might react, particularly when you’re screwing with the immune system. When the first patient began experiencing problems, the trial should have halted further injections to assess the already injected patients. This trial simply threw caution to the wind and endangered all of its trial participants even when they had huge red warning flags from patient 001.

That the doctor wasn’t self-informed on the possible reactions and had to spend valuable time seeking information later… “Wow”. If that’s not the very definition of uninformed, I don’t know what is. Before a single vial was injected, the doctor should have read and understood each and every possible side effect documented by the manufacturer, and had enough known remedies handy. You can’t know what you don’t know, but you can know what is written down by the manufacturer. Not reading and comprehending that literature fully before starting the trial is a huge mistake. If he had fully understood the ramifications of cytokine storm syndrome before injecting a single patient, he could have started countermeasures much, much sooner in these patients.

If he wasn’t proficient in cytokine storm syndrome, he should have had a doctor on standby should the patients need another opinion.

The almost fatal mistake here was that the attending doctor bought fully into the manufacturer’s hype that “nothing bad” would happen after injection. That’s called taking things for granted. Trial drugs are experimental for a reason and must be treated with all of the seriousness and respect they deserve.

Patient Trials

While it’s critically important to trial medicines in humans, it’s equally important to perform those trials in as safe a manner as humanly possible. This includes performing these trials in facilities capable of handling the load of every patient in the trial potentially crashing. If there’s not enough equipment in the hospital facility to handle that number of simultaneous crashes, then the trial needs to be moved to a hospital that can handle this patient load.

No trial clinic should be waiting for ambulances, equipment and medicines to arrive from around the city. All of this should be immediately on-hand, ready and waiting. To me, that’s a huge failing of the company that scheduled this trial. That company should definitely be held accountable for any problems that arise from being ill prepared at its clinic facilities.

Cytokine Storm Syndrome

One of the possible side effects listed in the manufacturer’s literature for the trial drug TGN-1412 was a cytokine storm, which the doctor only read about after the trial had started and patients were already suffering. A cytokine storm is when the body’s immune system reacts systemically over the whole body. It can cause rapid shutdown of organs along with fever, nausea and redness (heat) because the body’s immune system is attacking… well, basically everything. That this reaction was fully documented in the drug’s literature is telling. It says that the manufacturer knew this was a possible complication, yet the trial doctor didn’t look at this literature until it was nearly too late.

Of course, by the time other doctors had been consulted in the midst of crashing patients, these other doctors felt the need to throw their own wrenches into the works by claiming the drug itself may have been tainted or improperly stored, prepared or handled… possibly causing these patients to have a systemic infection. Throwing this wrench into the works was also reckless of those additional doctors who joined in on the action. Perhaps they too needed to read the manufacturer’s literature before jumping to that conclusion.

It’s good that someone finally decided the correct course of action was to treat for cytokine storm, as the manufacturer’s literature suggests, but not before one of the trial patients had ended up with dry gangrene, losing his fingertips and parts of his feet. A horrible ending to a drug trial that was ill prepared and improperly staffed for that kind of drug reaction.

Hindsight

I know it’s easy to both see and say all of this in hindsight. But, I have worked at many companies where the almighty buck rules… basically, “Do it as cheaply as possible”. The saying “You get what you pay for” applies in every situation. I’ve worked for many organizations that blaze ahead with projects without fully evaluating the consequences of their actions. They do this simply because they want the product out the door fast for the least amount of money. They don’t care what problems might arise. Instead, they deal with the problems along the way. If that means throwing more money at it later, so be it. Just don’t spend it now.

To me, that’s reckless. Thankfully, I have never worked for a medical organization. I’ve chosen to stay away from that line of work for the simple reason of what this level of recklessness can do when put into the hands of medical organizations. This trial should be considered the very definition of reckless, and of what can happen when the almighty buck is more important than patients’ lives. Thankfully, the NHS stepped in on behalf of the patients and treated them as the sick patients they were, not as guinea pig trial participants.

I encourage you to watch the program in full. Then please leave a comment below if you agree or disagree.


Rant Time: Bloomberg and Hacked Servers

Posted in best practices, botch, data security, reporting by commorancy on October 5, 2018

Bloomberg has just released a story claiming SuperMicro motherboards destined for large corporations may have been hacked with a tiny “spy” chip. Let’s explore.

Bloomberg’s Claims

Supposedly, the reporters for Bloomberg have been working on this story for months. Here’s a situation where Bloomberg’s reporters have just enough information in hand to be dangerous. Let’s understand how this tiny chip might or might not be able to do what Bloomberg’s alarmist view claims. Thanks, Bloomberg, for killing the stock market today with your alarmist reporting.

Data Compromise

If all of these alleged servers had been compromised by a Chinese hardware hack, someone would have noticed data streaming out of their servers to Chinese IP addresses, or at least to some consistent address. Security scans of network equipment involve looking through inbound and outbound data logs for patterns. If these motherboards had been compromised, the only way for the Chinese to have gotten that data back is through the network. This means data passing through network cards, switches and routers before ever hitting the Internet.

Even if such a tiny chip were embedded in the system, many internal only servers have no direct Internet access. This means that if these servers are used solely for internal purposes, they couldn’t have transmitted their data back to China. The firewalls would prevent that.

For servers that may have had direct access to the Internet, these servers could have sent payloads, but eventually these patterns would have been detected by systems administrators, network administrators and security administrators performing standard security checks. It might take a while to find the hacks, but they would be found, strictly because of odd outbound data being sent to locations that don’t make sense.
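
As a sketch of the kind of check a security team might run (the log format, file name and address ranges here are all assumptions for illustration), flagging outbound flows to destinations that aren’t on a known-good list takes only a few lines of scripting:

import csv
import ipaddress

# Destinations we expect servers to talk to (illustrative ranges only).
KNOWN_GOOD = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8",       # internal networks
    "203.0.113.0/24",   # e.g. a partner or CDN range
)]

def is_expected(dest):
    addr = ipaddress.ip_address(dest)
    return any(addr in net for net in KNOWN_GOOD)

# Assume a flow log exported as CSV with columns: timestamp, src, dest, bytes_out
with open("outbound_flows.csv", newline="") as f:
    for row in csv.DictReader(f):
        if not is_expected(row["dest"]):
            print(f"suspect outbound flow to {row['dest']} "
                  f"({row['bytes_out']} bytes) at {row['timestamp']}")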

Bloomberg’s Fantasy

While it is definitely not out of the realm of possibility that China could tamper with and deliver compromised PCB goods to the US, it’s doubtful that this took place in the numbers that Bloomberg has reported.

Worse, Bloomberg makes the claim that this so-called hacked hardware was earmarked for specific large companies. I don’t even see how that’s possible. How would a Chinese factory know the end destination of any specific SuperMicro motherboard? As far as I know, most cloud providers like AWS and Google buy fully assembled equipment, not loose motherboards. How could SuperMicro board builders possibly know a board will end up in a server at AWS or Google or Apple? If SuperMicro’s motherboard products have been hacked, they would be hacked randomly and everywhere, not just at AWS or Google or wherever else Bloomberg dreams up.

The Dangers of Outsourcing

As China’s technical design skills grow, so will the plausibility of receiving hacked goods from that region. Everyone takes a risk ordering any electronics from China. China has no scruples about any other country than China. China protects China, but couldn’t give a crap about any other country outside of China. This is a dangerous situation for China. Building electronics for the world requires a level of trust that must exist or China won’t get the business.

Assuming this alleged “spy chip” is genuinely found on SuperMicro motherboards, that throws a huge damper on buying motherboards and other PCBs made in China. China’s trust level is gone. If Chinese companies are truly willing to compromise equipment at that level, they’re willing to compromise any hardware built in China, including cell phones, laptops and tablets.

This means that any company considering manufacturing their main logic boards in China might want to think twice. The consequences here are as serious as they can get for China. China has seen a huge resurgence of inbound money flow. If Bloomberg’s notion is true, this situation severely undermines China’s ability to continue at this prosperity level.

Ultimately, what this means is that these tiny chips could just as easily be attached to the main board of an iPhone, an Android phone or any mobile device, and those devices could easily phone home with their data. While the SuperMicro motherboard problem might or might not be real, adding such a circuit to a phone is much more undetectable and likely to provide a wealth more data than placing it onto servers behind corporate firewalls.

Rebuttal to Bloomberg

Statements like the one from this next reporter are why no one should take these media outlets seriously. Let’s listen. Bloomberg’s Jordan Robertson states, “Hardware hacking is the most effective type of hacking an organization can engineer… There are no security systems that can detect that kind of manipulation.” Wrong. There are several security systems that look for unusual data patterns, including most intrusion detection systems. Let’s step back for a moment.

If the point in the hardware hacking is to corrupt data, then yes, it would be hard to detect that. You’d just assume the hardware is defective and replace it. However, if the point to the hardware hack is to phone data home, then that is easily detected via various security systems and is easily blocked by firewalls.

The assumption that Jordan is making is that we’re still in the 90s with minimal security. We are no longer in the 90s. Most large organizations today have very tight security around servers. Depending on the role of the server, it might or might not have direct trusted access to secured data. That server might have to ask an internal trusted server to get the data it needs.

For detection purposes, if the server is to be used as a web server, then the majority of the data should have a 1:1 relationship. Basically, one request inbound, some amount of data sent outbound from that request. Data originating from the server without an inbound request would be suspect and could be detected. For legitimate requests, you can see these 1:1 relationships in the logs and when watching the server traffic on an intrusion detection system. For one-sided transactions sending data outbound from the server, the IDS would easily see it and could block it. If you think that most large organizations don’t have an IDS, even simply in watch mode, you are mistaken.
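
Here’s a minimal sketch of that 1:1 idea (the log formats and time window are assumptions for illustration): outbound flows that can’t be matched to a recent inbound request get flagged for investigation.

# Inbound web requests and outbound flows as (timestamp, ip) pairs.
inbound = [(100.0, "198.51.100.7"), (101.2, "198.51.100.9")]
outbound = [(100.1, "198.51.100.7"),   # reply to a request: expected
            (250.0, "203.0.113.99")]   # nothing asked for this: suspect

WINDOW = 5.0  # seconds within which a reply should follow its request

def unsolicited(outbound, inbound, window=WINDOW):
    # Flag any outbound flow that has no matching inbound request
    # from the same address within the allowed time window.
    flagged = []
    for ts, dest in outbound:
        solicited = any(dest == client and 0 <= ts - req_ts <= window
                        for req_ts, client in inbound)
        if not solicited:
            flagged.append((ts, dest))
    return flagged

print(unsolicited(outbound, inbound))  # -> [(250.0, '203.0.113.99')]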

If packets of data originate from the server without any prompting, that would eventually be noticed by a dedicated security team performing regular log monitoring and server security scans. The security team might not be able to pinpoint the reason (i.e., a hardware hack) for the unprompted outbound data, but they will be able to see it.

I have no idea how smart such a tiny chip could actually be. Such a tiny chip likely would not have enough memory to store any gathered payload data. Instead, it would have to store that payload either on the operating system’s disks or in RAM. If the server is cut off from the Internet, as most internal servers are, that disk or RAM would eventually fill up because the payload could never be transferred to wherever it needed to go. Again, systems administrators would notice the spike in /tmp or RAM usage caused by the chip’s inability to send its payload.
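That kind of spike is visible to even the most hand-rolled monitoring. Here’s an illustrative Python sketch (Linux-only, with arbitrary thresholds; real shops run proper monitoring systems rather than a script like this):

```python
# Rough sketch of the checks a sysadmin's monitoring would run to catch
# /tmp or RAM quietly filling up. Thresholds are arbitrary examples.
import shutil

def check_tmp(threshold=0.90):
    """Warn when /tmp crosses a usage threshold."""
    usage = shutil.disk_usage("/tmp")
    fraction = usage.used / usage.total
    if fraction > threshold:
        print(f"WARNING: /tmp is {fraction:.0%} full")

def check_memory(threshold=0.90):
    """Warn when used memory crosses a threshold (reads Linux /proc/meminfo)."""
    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            meminfo[key] = int(value.split()[0])  # values are reported in kB
    used_fraction = 1 - meminfo["MemAvailable"] / meminfo["MemTotal"]
    if used_fraction > threshold:
        print(f"WARNING: memory is {used_fraction:.0%} used")

check_tmp()
check_memory()
```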

If the hacking chip simply gives remote control access to the server without delivering data at all, then that would also be detected by an IDS. Anyone attempting to access a port that is not open will be blocked. If the chip makes an outbound connection to a server in China and leaves it open, that connection would eventually be detected. Again, a dedicated security team would see the unusual traffic to and from the server and investigate.
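Even a simple host-level check can spot a long-lived connection like that. Here’s an illustrative Python sketch; it assumes the third-party psutil package is installed, and the list of “expected” networks is made up for the example:

```python
# Sketch of spotting an established connection to an unexpected remote
# network. EXPECTED_PEERS is hypothetical; a real deployment would pull
# this from the site's network inventory.
import ipaddress
import psutil

# Hypothetical: the only remote networks this server legitimately talks to.
EXPECTED_PEERS = [ipaddress.ip_network("10.0.0.0/8")]

def suspicious_connections():
    """Yield established TCP connections whose remote end is not on the expected list."""
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        remote = ipaddress.ip_address(conn.raddr.ip)
        if not any(remote in net for net in EXPECTED_PEERS):
            yield conn

for conn in suspicious_connections():
    print(f"ALERT: pid {conn.pid} holds a connection to {conn.raddr.ip}:{conn.raddr.port}")
```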

If the hacking chip wants to run code, it would need to compile it first. That implies having a compiler in that tiny chip. Doubtful. If the system builder installs a compiler, the spy chip might be able to leverage it, assuming it has any knowledge of the operating system currently installed. That means the chip would have to know about many different versions of Linux, BSD, MacOS X, Windows and so on, then have code ready to deploy for each of these systems. Unlikely.

Standards and Protocols

Bloomberg seems to think there’s some mystery box here that allows China to have access to these servers without bounds. The point of having multi-layer security is to prevent such access. Even if the motherboards were compromised, most of these servers would end up behind multiple firewalls in combination with continuous security monitoring. Even more than this, many companies segregate servers by type. Servers performing services that need a high degree of security have very limited ability to do anything but their one task. Even getting into these servers can be a challenge, even for administrators.

For web servers in a DMZ, which are open to the world, capturing data might be easier. However, even if the hacker at SuperMicro did know which company placed an order for motherboards, they wouldn’t know how those servers would ultimately be deployed and used. This means these chips could be placed into server roles behind enough security to render their ability to spy worthless.

It’s clear these reporters are journalists through and through. They have no real experience as a systems administrator, network engineer or security administrator. Perhaps it’s time for Bloomberg to hire technical consultants who can help guide its articles when they involve technical matters. It’s clear there was no guidance from any technical person who could have steered Jordan away from some of the ludicrous statements he’s made.

Bloomberg, hire a technical consultant the next time you chase one of these “security” stories, or give it up. At this point, I’m considering Bloomberg to be nothing more than a troll looking for views.


If you enjoy reading Randocity, please like, subscribe and leave a comment below.
