Random Thoughts – Randocity!

Are folding smartphones practical?

Posted in computers, ipad, mobile devices by commorancy on April 30, 2019

Today, let’s explore folding smartphones. Are they practical? Do they have a place in the market? Will they last? Are they innovative? Let’s explore.

Tablets vs. Folding Smartphone

Looking at the Huawei Mate X and the Samsung Galaxy Fold folding devices, two things become abundantly clear. First, they fold open into the form factor of a tablet. Second, they command a price that’s way, way higher than an actual tablet.

There are additional problems with these phones. When the phone is folded open, you can’t hold it to your ear and use it as you would a phone. You must fold it closed to regain the phone form factor. Because the larger screen is the primary reason to buy one of these devices, the folding aspect is less about being a phone and more about being a tablet. Or, let me put that another way: it’s a gimmick. Why is it a gimmick? Because in addition to the tablet size, you also add creases and marks to the plastic surface each time you fold and unfold the phone. Ultimately, it becomes less about being a useful tablet and more about the novelty of the folding screen.

For a product that’s supposed to be a premium, top-tier device, I don’t know about you, but I want my surface to be (and remain) pristine. I don’t want to feel surface bumps or lines running down the center of the fold area. I certainly don’t want this stretching and bending plastic to get worse over time. Yet, that’s where these phones clash with…

Materials Science

In other words, this is where these phones meet reality (or physics). Fold any type of material and it will become marred and marked by the folding action. As the folding continues, the problems will increase, with the surface becoming more and more marred. It’s simply the nature of folding something. It’s a limitation of the way physical objects behave when folded. It’s nature.

When applied to a phone’s case design, you will continue to see the fold area gain marks, bumps and imperfections due to the folding action. To me, this doesn’t say “premium”. It says “cheap”. Plastics see to that “cheapness”. After all, plastics are some of the cheapest materials around today… and plastics are the only substances capable of holding up to any level of folding with a minimum of problems. However, a minimum of problems doesn’t mean zero problems.

The only way this could change is if materials could be made out of a polymer that can heal itself under these folding stresses and stretch and relax appropriately during the fold. To date, no one has been able to produce such a material. This means that folding screen surfaces will inevitably become marked and marred with each fold and unfold action. Over time, it will eventually become a sheer mess of marks… which also assumes the folding action of the OLED screen itself will survive that many folds. Just consider how a paperback book’s spine looks after you’re done reading it. That same effect happens to plastic, even the most resilient of plastics.

OLED Screens and Electronics

I’m not a materials scientist. With that said, I’m unaware of any clear plastic sheet material that can survive being folded and unfolded many times. Silicone might work, but even silicone might degrade or break over time. Consider how many times people might use a folding screen per day: it could be folded and unfolded perhaps a hundred times in a single day. If you unfolded the screen just once per day, in 1 year you’d have unfolded the screen 365 times. At 100 folds per day, that’s 36,500 folds per year… probably more.
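
Just to put rough numbers on that wear (the daily fold counts below are assumptions for illustration, not measurements), here is a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope yearly fold counts for a folding phone.
# The daily figures below are illustrative assumptions, not measured usage.
daily_fold_estimates = {"light use": 10, "moderate use": 40, "heavy use": 100}

for usage, folds_per_day in daily_fold_estimates.items():
    folds_per_year = folds_per_day * 365
    print(f"{usage}: {folds_per_day} folds/day -> {folds_per_year:,} folds/year")

# heavy use: 100 folds/day -> 36,500 folds/year
```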

While notebook manufacturers have more or less worked out the folding problem with LCD screens (they use flexible ribbon cables), notebook hinges and components do eventually wear out from regular opening and closing.

In a phone, the problem will be ten times greater. Not only will the phone wear out faster than a non-folding model, the phone will be worth far, far less at the end of your use. No one is going to want to buy a used folding phone that looks like a used paperback book.

Since the hinges on these devices have to be uniquely designed for a smaller phone form factor and to avoid getting in the way of the screen surface, these designs are likely ripe for defects… particularly the first generation phones. And all for what?

A Tablet?

The single unique benefit of the folding phone is to turn the screen into the size of a tablet. While single body phablets worked great when they arrived, this idea of making the screen even bigger in a phone doesn’t make much sense. Yes, it’s unique. Yes, it’s probably a way to make more sales. But, is it really a good idea? Not really.

Tablets already do what they need to do and they do it well. Arguably, the best tablet I’ve seen created by Samsung is the Galaxy Tab S. It had the perfect form factor for watching movies. It fit in the hand nicely. It had a perfect weight. It also had that amazing OLED screen. It had everything you could want in a tablet.

Now, with folding screens comes a whole new paradigm of software to drive the folding action. When unfolded, it’s basically a tablet. When folded, it becomes a phone form factor. To move between the small folded screen and the larger unfolded screen seamlessly within apps, app support must be built. This requires a whole new set of OS libraries and software to support that action.

Unfortunately, neither Android nor iOS supports this folding screen usability. Instead, Samsung has cobbled together some early drivers and software for its Galaxy Fold. With Huawei’s Mate X, it’s not even that far along. If you buy into one of these convertibles, you’re going to be sorely disappointed when moving between the small and large screens within the same app. Some apps might update properly; many more will not.

That doesn’t mean the OSes won’t catch up, but it will take some time. It also means growing pains until the OS technology catches up.

Phone and Tablet Together

Tablets and phones should be married. I’ve said that for quite some time. There’s no reason to carry around two devices when you can carry around one. However, it doesn’t need to fold. Carrying around a tablet as both a tablet and a phone is perfectly fine. Simply marry the innards of a phone into a tablet and voila, a new device. I’d be perfectly fine carrying around a tablet the size of the iPad Mini as my primary phone. It’s small enough to be portable and large enough to do what I need to do on a tablet. There’s no need to fold it in half.

Gimmicks

Unfortunately, technology has moved away from producing useful new features and has moved firmly into adding gimmicks to sell new devices: from FaceID to the ever-growing number of unnecessary cameras to, now, this folding action. For cameras, one camera is fine. Two borders on a gimmick. Adding three or more cameras is most definitely a gimmick to part you from your money. You don’t need multiple cameras on a phone. One is enough.

Folding is also a gimmick. The idea of folding isn’t new. We’ve had folding books, folding paper and folding binders… heck, there are even “folders” so named to hold paper in filing cabinets.

Folding a phone? Not necessary. Gimmick? Definitely. It’s particularly a gimmick because of the problem with materials science. If you know your phone is going to end up looking like trash at the end of one year’s use, then why bother? I know phones are designed to last one year before buying another, but that purchase cycle is insane. I’ve never fallen into that manufacturing and purchasing trap. I expect my phones to last at least 3 years, sometimes without even needing a battery change.

I’ve always held onto my phone for at least 1-2 new phone release cycles before buying a new one. Lately, it’s been 2-3 cycles because I’m currently invested in the Apple universe and I vehemently dislike the iPhone X’s design. I have an iPhone 7 Plus. I abhor the notched screen. I dislike that Apple invested in a costly OLED screen only to include that notch and also reduce the color rendition to mimic an LCD screen. If you’re planning on degrading an OLED screen’s rendition, then use the cheaper LCD screen technology. An OLED screen offers a very intense, saturated look. Some people don’t like it, but many do. The point of offering a screen capable of that level of color saturation is to make use of it, not hide it.

With the iPhone X, I dislike that there’s no TouchID button. I also dislike that the screen isn’t flush to the full edge of the case. There’s still a small black bezel around the entire screen except near that ugly, ugly notch. I also don’t like the introduction of the rounded display corners. It worked on the Mac back in the day, but not on a phone. Keep the corners firmly square.

Worse, at the time the iPhone X arrived, my iPhone 7 Plus’s screen was still larger than the iPhone X’s. It wasn’t until the iPhone XS Max arrived that we got a comparable screen size to the 7 Plus. I digress.

Gimmicks are now firmly driving the phone industry rather than outstanding design and usability features. The last outstanding iPhone design that Apple produced was arguably the iPhone 7. It solved all of the glass design problems of the iPhone 4, the small screen of the iPhone 5 and the bendgate problems of the iPhone 6. It is arguably the best phone Apple has yet designed. Then, they introduced the abomination known as the iPhone X.

At the same time as the iPhone X, Apple introduced the iPhone 8. The iPhone 8 seems much like an extension of the iPhone 7, but with wireless charging. Yes, wireless charging would have been great IF Apple hadn’t cancelled the AirPower charger base they had promised. Now that that product is non-existent, the point of wireless charging an iPhone has more or less evaporated. Sure, you can wirelessly charge an iPhone with a Qi charger, but at such a slow rate it’s not worth considering.

The AirPower’s whole reason to exist was to charge the phone (and other devices) supposedly faster than a Lightning cable. Perhaps Apple will finally release its fast-charging specs to the industry so that Qi chargers can build in this faster charging feature and offer charging times similar to the aforementioned, but now defunct, AirPower base. But we know Apple; they’ve just shot themselves in the foot and they won’t do anything about it. Now that the AirPower is dead, so likely too is the hope of super fast wireless charging.

AirPods 2

This whole situation with the AirPower (and really this is more about Apple’s failure to deliver a workable product) is made far, far worse with the release of the AirPods 2. It’s like Apple has some big sadistic streak towards its customers: cancelling a completely necessary product in one breath, then announcing the AirPods 2 “with wireless charging case” in the next.

One of the primary reasons the AirPods 2 exist is for the wireless charging case. Unfortunately, even with Lightning, the charging speed of the AirPods is still incredibly slow. Considering how much slower Qi chargers operate, it will take far more time to charge the case of the AirPods 2. You don’t want this if you’re trying to get out the door to your destination. This means that anyone who bought the AirPods 2 with the wireless charging case (a case that costs $80 separately or adds $40 to the price of the standard AirPods) banking on the AirPower’s faster performance has now been misled by Apple. Thus, the primary selling point of the AirPods 2 is now worthless.

If anything, the cancellation of the AirPower wireless charging pad clearly shows Apple’s failure as a company. Not only did the engineers fail to design and deliver a seemingly simple device, Apple as a company failed the consumer by not carrying through with its ecosystem continuity plans. Plans that, had they come to fruition, could have seen a much wider array of wireless devices aided by the AirPower.

The AirPods are pretty much a case of “you-can’t-have-one-without-the-other”. Failing the delivery of the AirPower means you’ve failed the delivery of the AirPods 2 by extension. It’s this double whammy failure that will hit Apple hard.

In fact, it’s even worse than a double-whammy for the future of Apple. It impacts future iPhone sales, future iPad sales, future Apple Watch sales and, in general, any other wireless charging device Apple might have had in its design queue. The failure to deliver the AirPower base is a major blow to Apple’s innovation and the Apple ecosystem as a whole.

Apple’s Apathy

The management team at Apple appears to be apathetic to this wider problem. I can hear them now, “Let’s just cancel AirPower”. Another person says, “But, it’s going to be needed for many future devices”. Another person says, “Don’t worry about that. Just cancel it.”

Apathy is the antithesis of innovation. These two concepts have no symbiotic relationship. There was a time when Apple (or more specifically, Steve Jobs) would push his teams to deliver amazingly designed products with features 5-10 years ahead of their time. Now Apple can’t even deliver a product that already exists in the marketplace from other manufacturers.

You can’t run a business on apathy. You can only run a business on doing. If Apple is smart, they’ll follow the cancellation of the AirPower by quickly announcing an even better wireless charging alternative that’s even faster than the AirPower would have been. Without a solid, reliable and performant wireless charging system, devices like the now wirelessly charging AirPods 2 are left hanging. The Apple Watch is left hanging. And… Apple’s flagship product, the iPhone X, is also left hanging without a net.

Innovation and Gimmicks

While I know I got off on an Apple tangent, it was to prove a point. That point being that gimmicks like wireless charging cases must have functional sister products to bring them to life. Without such symbiotic sister products, a half-product is simply an on-paper gimmick to sell more product.

Clearly, Apple is now firmly in gimmick territory in its attempts to make money. So are Samsung, Huawei and even LG. True innovation is about making new and exciting products we’ve never seen, not about adding more cameras, bigger batteries, thinner bodies, a pencil or even, yes, folding. These features are the “accessories” that add value to an innovative product, but they are not primary driving factors.

If you want to wow the industry, you make a product no one has ever seen before. We’ve seen both the Huawei Mate X and the Galaxy Fold before, in tablets and in phones. Marrying the two together doesn’t make innovation, it makes iteration. There’s a substantial difference between iteration and innovation. Iteration is taking two existing concepts and marrying them together. Innovation is producing a product that has never before existed. Tablets already exist, even if they don’t fold.

The iPad as Innovation

The iPad is a game-changing, innovative device. The only even remotely close product would have been Grid’s GRiDPad in the early ’90s. The only similarity between the iPad and the GRiDPad is the fact that they were both tablets by function. Both have completely different philosophies on what a tablet is, how it works and how it looks. The GRiDPad failed because it didn’t know what to be at a time when it needed a clear reason to exist. This is particularly true when such a form factor had never before existed. People need to be able to wrap their heads around why a tablet needs to exist. With Grid, they couldn’t.

The reason Apple’s iPad succeeded was not only because of the form factor, but because Apple also put an amazing amount of time and thought into how a tablet form factor works, feels in the hand and how the touch interface works. They gave people the understanding of how and why a tablet is useful… something Grid failed to do with the GRiDPad. It also didn’t hurt that Apple had a solid, robust operating system in MacOS X that they could tweak and use as a base to drive the user interface. Grid, on the other hand, didn’t. They didn’t build an ecosystem, they didn’t have an app store, they didn’t have a proper operating system, they didn’t really even have apps. There was the tablet, but on the other side of the equals sign there was nothing.

Apple’s design thought of nearly everything from top to bottom and from form to function to ecosystem. Apple offered the consumer the total package. Grid got the form down, but not the function. Apple nailed nearly everything about the iPad from the start.

In fact, Apple nailed so much about the iPad from the beginning that Apple has not actually been able to improve upon that design substantially. Everything that Apple has added to iOS has been created not to improve upon the touch UI, but to add missing features, like cut and paste and Siri. In fact, Siri is an equally important innovation for the iPad, but it’s not truly needed there. It’s a much more important innovation for the iPhone because of the hands-free operation a phone needs while driving. Siri is, in fact, the single most important achievement for creating a safer driving experience… something you won’t be doing on an iPad, but you will be doing with an iPhone or Apple Watch.

Steve Jobs

The Apple achievements I mention wouldn’t have been possible without Steve Jobs. Steve was not only a truly masterful marketer, he was also a visionary. He may not have personally designed the product, but he knew exactly what he wanted in the device. He was definitely visionary when it came to simplicity of design, when combined with everyday life.

You definitely want simplicity. You want easy to access software systems. You want intuitive touch interfaces. You want to be able to get in and out of interfaces in one or two touches. You don’t want to dig ever deeper in menu after menu after menu simply to get to a single function. Steve Jobs very much endorsed Keep-It-Simple-Stupid (or the KISS) philosophy. For example, the creation of single button mice. The placing of a single button on the front of the iPad. These are all very much the KISS design philosophy. It’s what makes people’s lives easier rather than more complicated.

Unfortunately, Steve Jobs’s death in 2011 left a huge KISS gap at Apple, which has only widened since. Even iOS and MacOS X have succumbed to this change in design philosophy. Instead of adopting KISS, Apple has abandoned that design goal and, instead, replaced it with deeper and deeper menus, more complicated UI interfaces, less simple user experiences and buggier releases. The bugs are simply an outcome of dropping the KISS design idea. More complicated software means more bugs. Less complicated software means fewer bugs.

Some might argue FaceID makes your life simpler. Yes, it might… when it works, at the added cost of privacy problems. Problems that were solved just as simply with TouchID, which added none of those nasty privacy issues.

Samsung and Apple

While Samsung played catch-up with Apple for quite a while, Samsung got ahead by buying into component manufacturing, including the manufacture of OLED screens. In fact, Samsung became one of the leaders in OLED screen fabrication. If there’s an OLED screen in a product, there’s a high likelihood it was made by Samsung.

This meant that most OLED Android smartphones contain a Samsung part even if the phone was designed and produced by LG or Huawei or Google. This component-level aspect of Samsung’s technology strategy has helped Samsung produce some of the best looking and functioning Android smartphones. For this same reason and because of the ongoing Samsung vs. Apple rivalry, Apple shunned OLED for far too long and refused to add OLED screens to its devices… thus stunting Apple’s ability to innovate in the iPhone space for many years.

The OLED screen also allowed Samsung to produce the first “phablet” (combination phone and tablet): bigger than most smartphones, smaller than a tablet. It offered users a larger phone screen to better surf Twitter, Facebook and Instagram. It was an iterative improvement to be sure, definitely not innovative in the truest sense. However, it definitely leapfrogged Samsung way ahead of Apple in screen quality and size. This is where Samsung leaped over Apple and carved out a serious niche for itself… and it is also what propelled Android phones front and center. The “phablet” is what firmly propelled Samsung ahead of Apple and what has firmly kept them there.

In fact, Apple is now so far behind Samsung that it is playing continual catch-up with its screen technologies, with wireless charging, with smart watches and with pretty much every other “innovation” Samsung has offered since 2015.

One might argue that the AirPods were something new and innovative. Sure, but they were simply an iterative improvement over the EarPods (the wired version). The part of the AirPods that is, in fact, innovative is not the earbuds themselves, but the magnetic charging case. This case design is the thing that sets these earbuds apart from every other set of wireless earbuds. This case is also one of the last few KISS design bastions I’ve seen come out of Apple. Unfortunately, as sleek as the case design is, the software behind it is clumsy and not at all KISS in design.

The AirPods pair quickly and easily using an instant recognition system, but actually using the AirPods can be a chore. Sometimes the AirPods fail to connect at all. Sometimes one of them fails to connect. When a phone call comes in and you place an earbud in your ear, the phone still answers on the internal speaker even though the earbud is connected. When you try to move the audio to the earbud, the option isn’t even available. Sometimes you hear dropouts and stuttering while listening to music just inches away from the phone. Yes, the software is entirely clumsy. It’s so clumsy, in fact, it’s really something I would have expected to see from Android instead of iOS.

Commitment to Excellence

When Jobs was operating the innovation arm at Apple, the commitment to excellence was palpable. Since Jobs left us, Apple’s commitment to excellence has entirely vanished. Not only are their products no longer produced with the Jobs level of expected excellence, they’re not even up to a standard level of industry excellence. It’s now what one might expect to get from McDonald’s, not a three-star Michelin-rated restaurant. Apple, at one point, was effectively a three-star Michelin-rated restaurant. Today, Apple is the Wendy’s of the computer world. Wendy’s is better than most and they make a good hamburger, but it is in no way gourmet. This is what Apple has become.

Samsung fares even worse. Samsung has never been known for its commitment to excellence. In fact, for a long time, I’ve been aware that Samsung’s products, while pretty and sporting great screens, are not at all built to last. They have small parts that wear out quickly and eventually break. Sometimes the units just outright fall apart. For the longest time I steered clear of Samsung products simply because their commitment to excellence was so far below Apple’s (even where Apple is today) that I simply couldn’t trust Samsung to produce a lasting product.

Recently, Samsung has mostly proven me wrong, at least for the Galaxy S5 and the Galaxy Tab S. This smart phone and tablet have held up amazingly well. The OLED screens still look tantalizingly sharp and crisp. The processors are still fast enough to handle most of what’s being pushed out today… which is still much better than the speed of an iPhone 4S or iPhone 5 released around the same time. Apple’s products simply don’t stand up to the test of time. However, Samsung’s older products have. That’s a testament to the improved build quality of Samsung devices.

However, commitment to excellence is not a commitment to innovation. The two, while related, are not the same. I’d really like to see Apple and Samsung both commit to excellence in innovation instead of creating devices based on gimmicks.

Full Circle

We’ve explored a lot of different aspects of technology within this article. Let’s bring it all home. The point is that innovation, true innovation, is what drives technology forward. Iterative innovation does not. It improves a device slightly, but it waters down the device in the process. You don’t want to water down devices; you want to build new, innovative devices that make our homes better, faster, safer and easier, make access to information quicker and, overall, improve our daily lives.

We don’t want to fight with devices to hear our voices over a fan. We don’t want to have to guess the phrase to use with Siri through iterative trial-and-error to make it give us specific information. We don’t want to have to flash the phone in front of our faces several times before FaceID recognizes us. We don’t want to dig through menu, after menu, after menu simply to enable or disable a function. That’s not easy access, that’s complication. Complications belong on smart watches, not on phones.

In short, we need to get back to the KISS design philosophy. We need to declutter, simplify, make devices less obtuse and more straightforward. Lose the menus and give us back quick access to device functions. We need to make buttons bigger, rather than smaller on touch screens. Teeny-tiny buttons have no place on a touch screen. We’ve gone backwards rather than forwards with touch interfaces on tablets and phones… yes, even on iOS devices.

Folding phones are not simple. In fact, they are the opposite of simple. Simple is making phone usability easier, not trickier. Adding a folding screen adds more complication to the phone, not less. Lose the folding. We need to shorten, simplify and reduce. We need to make mobile devices, once again, Steve Jobs simple.

How much data does it take to update my PS4 or Xbox One or Switch?

Posted in computers, updates, video game console by commorancy on May 10, 2018

It seems this is a common question regarding the most recent gaming consoles. Let’s explore.

Reasons?

  • If the reason you are asking this question is because you’re concerned with data usage on your Internet connection or if your connection is very slow, you’ll find that this answer will likely not satisfy you. However, please keep reading.
  • If the reason you are asking this question is because you want to predict the amount of data more precisely, then skip down to the ‘Offline Updates’ section below.
  • If the reason you are asking this question is because you’re simply curious, then please keep reading.

Xbox One, PS4 and Switch Update sizes

The PS4, Xbox One and Switch periodically patch and update their console operating systems for maximum performance, to squash bugs and to improve features. However, this process is unpredictable and can cause folks who are on metered Internet connections no end of frustration.

How much data will it need to update?

There is no way to know … let’s pause to soak this in …

How much data is needed is entirely dependent on how recently you’ve updated your console. For example, if you’ve kept your console up to date all along the way, the next update will only be as big as whatever the newest update is. With that said, there’s no way to gauge even that size in advance. Neither Microsoft, Sony nor Nintendo publishes update sizes in advance. They are the size they are. If it fixes only a small set of things, it could be 50-100 megabytes. If it’s a full blown point release (5.0 to 5.1), it could be several gigabytes in size. If it’s a smaller release, it could be 1GB.

If your console is way out of date (i.e., if you last turned it on 6 months ago), your console will have some catching up to do. This means that your update may be larger than for someone who updates their console with every new release. If the base update is 1GB, you might have another 1GB of catch-up before the newest update can be applied. This catch-up update system applies primarily to the Xbox One and not to the PS4 or Switch.

Xbox One vs PS4 vs Switch Update Conventions

Sony and Nintendo both choose a bit more of a one-size-fits-all update process when compared to Microsoft. Because of this, we’ll discuss the Xbox One first. Since the Xbox One is based, in part, on Windows 10, it follows the same update conventions as Windows 10. However, because the Xbox One also uses other embedded OSes to drive other parts of the console, those pieces may also require separate updates of varying sizes. This means that for the Xbox One to update, it has a process that scans the system for currently installed software versions, then proceeds to download everything needed to bring all of those components up to date.

Sony and Nintendo, on the other hand, don’t seem to follow this same convention. Instead, the Switch and PS4 typically offer only point-release updates. This means that everyone gets the same update at the same time in one big package. In this way, it’s more like an iPhone update.

For full point-release updates, the Xbox One also works this same way. For interim updates, it all depends on what Microsoft chooses to send out compared to what’s already on your Xbox One. This means that the Xbox One can update more frequently than the PS4 by keeping underlying individual components current if Microsoft so chooses. This is why the Xbox One can offer weekly updates where the PS4 and the Switch typically offer only quarterly or, at least, much less frequent updates.

Size of Updates

If you want to know the size of a specific update, you have to begin the update process. This works the same on the PS4, the Xbox One or the Switch. This means you have to kick off the update. Once you do this, the download progress bar will show you the size of the download. This is the only way to know how big the update is directly on the console.

However, both the PS4 and the Xbox One allow you to download your updates manually via a web browser (PC or Mac). You can then format a memory stick, copy the files to USB and restart the console in a specific way to apply the updates. This manual process still requires you to download the updates in full and, thus, uses the same bandwidth as performing this action on the console. This process requires you to also have a sufficiently sized and properly formatted USB memory stick. For updating the PS4, the memory stick must be formatted exFAT or FAT32. For updating the Xbox One, it must be formatted NTFS. The Nintendo Switch doesn’t provide offline updates.

Cancelling Updates in Progress

The Xbox One allows you to cancel the current system update in progress by unplugging the LAN cable and/or disconnecting WiFi, then turning off the console. When the console starts up without networking, you can continue to play games on your console, but you will not be able to use Xbox Live because of the lack of networking.

Once you plug the network back in, the system will again attempt to update. Or, you can perform an offline update with the Xbox One console offline. See Offline Updates just below.

You can also stop the PS4 download process by going to Notifications, selecting the download, pressing the X button and selecting ‘Cancel and Delete’ or ‘Pause’. Note, this feature is available on PS4 firmware 5.x or newer. If your PS4 firmware is very old, you may not have this option in the Notifications area. You will also need to go into settings (Xbox One or PS4) and disable automatic updates; otherwise, the console could download updates without you seeing it.

How to disable automatic updates:

With that said, you cannot stop system updates on the Nintendo Switch once they have begun. Nintendo’s downloads are usually relatively small anyway. Trying to catch them in progress and stop them may be near impossible. It’s easier to follow the guides above and prevent them from auto-downloading.

Also note, any of the consoles may still warn you that an update is available and prompt you to update your console even if you have disabled automatic software downloads.

*This setting on the Nintendo Switch may exclude firmware updates; your mileage may vary.

Offline Updates

Xbox One

The Xbox One allows you to update your system offline using a Windows PC. This type of update is not easily possible with a Mac. Mac computers don’t natively support formatting or writing to NTFS, but there are tools you can use (Tuxera NTFS for Mac).

To use the Offline System Update, you’ll need:

  • A Windows-based PC with an Internet connection and a USB port.
  • A USB flash drive with a minimum 4 GB of space formatted as NTFS.

Most USB flash drives come formatted as FAT32 and will have to be reformatted to NTFS. Note that formatting a USB flash drive for this procedure will erase all files on it. Back up or transfer any files on your flash drive before you format the drive. For information about how to format a USB flash drive to NTFS using a PC, see How to format a flash drive to NTFS on Windows.

  1. Plug your USB flash drive into a USB port on your computer.
  2. Open the Offline System Update file OSU1.
  3. Click Save to save the console update .zip file to your computer.
  4. Unzip the file by right-clicking on the file and selecting Extract all from the pop-up menu.
  5. Copy the $SystemUpdate file from the .zip file to your flash drive.
    Note: The files should be copied to the root directory, and there shouldn’t be any other files on the flash drive.
  6. Unplug the USB flash drive from your computer.
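
If you’d rather script the PC-side portion of these steps, here’s a minimal sketch of the unzip-and-copy part. The download location and the E: drive letter are assumptions; the OSU1 package itself still has to be downloaded from Microsoft as described above.

```python
# Minimal sketch: extract the downloaded OSU1 package and copy $SystemUpdate
# to the root of an NTFS-formatted USB flash drive.
# Assumptions: OSU1.zip was saved to Downloads and the flash drive is E:\.
import shutil
import zipfile
from pathlib import Path

osu_zip = Path.home() / "Downloads" / "OSU1.zip"      # assumed download location
extract_dir = Path.home() / "Downloads" / "OSU1"
usb_root = Path("E:/")                                # assumed drive letter, NTFS-formatted

# Unzip the console update package.
with zipfile.ZipFile(osu_zip) as zf:
    zf.extractall(extract_dir)

# Copy $SystemUpdate to the root of the flash drive (which should hold nothing else).
src = extract_dir / "$SystemUpdate"
dst = usb_root / "$SystemUpdate"
if src.is_dir():
    shutil.copytree(src, dst)
else:
    shutil.copy2(src, dst)

print("Done. Safely eject the drive, then plug it into the Xbox One.")
```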

PlayStation 4

You can also update your PS4 console offline using Sony’s system updates. Here’s the procedure for PS4 offline updates. Note, the USB memory stick must be formatted either exFAT or FAT32. The PS4 doesn’t support any other types of stick formats. This means, if you buy a USB stick intended to be used on Windows, you will need to reformat it properly before you can use it on the PS4.

Update using a computer

For the standard update procedure, follow the steps below.

The following things are needed to perform the update:

  • PlayStation®4 system
  • Computer connected to the Internet
  • USB storage device, such as a USB flash drive, with approximately 460 MB of free space

  1. On the USB storage device, create the folders for saving the update file. Using a computer, create a folder named “PS4”. Inside that folder, create another folder named “UPDATE”.
  2. Download the update file from Sony and save it in the “UPDATE” folder you created in step 1. Save the file with the file name “PS4UPDATE.PUP”.
  3. Connect the USB storage device to your PS4™ system, and then from the function screen, select Settings > [System Software Update]. Follow the on-screen instructions to complete the update.

If your PS4™ system does not recognize the update file, check that the folder names and file name are correct. Enter the folder names and file name in single-byte characters using uppercase letters.
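
As a rough sketch, the folder layout and copy steps above could be scripted from a computer like this. The USB drive letter and the download location are assumptions, and PS4UPDATE.PUP still has to be downloaded from Sony’s site first.

```python
# Minimal sketch: build the PS4/UPDATE folder structure on a USB stick and
# copy the downloaded system software file into it.
# Assumptions: the stick is already exFAT or FAT32 and mounted at E:\, and
# PS4UPDATE.PUP has already been downloaded to the Downloads folder.
import shutil
from pathlib import Path

usb_root = Path("E:/")                                     # assumed drive letter
downloaded_pup = Path.home() / "Downloads" / "PS4UPDATE.PUP"

# Folder and file names must be uppercase, exactly as the PS4 expects.
update_dir = usb_root / "PS4" / "UPDATE"
update_dir.mkdir(parents=True, exist_ok=True)

shutil.copy2(downloaded_pup, update_dir / "PS4UPDATE.PUP")
print("Copied update to", update_dir / "PS4UPDATE.PUP")
```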

Nintendo Switch Updates

Nintendo doesn’t offer offline updates at all. The Nintendo Switch only supports Internet updates. There is currently no way to download or update your Switch via USB stick or SD card. The Nintendo Switch is the newest of the consoles, so it’s possible that Nintendo could offer an offline update mechanism some time in the future. However, knowing Nintendo, don’t hold your breath for this feature.

Offline Updates are Point Release Only

These offline update processes apply point-release updates only and not interim updates. Interim updates must still be applied directly from the console. Interim updates scan your system, find what’s needed, then download the patches. This can only be performed on the console. This means you could find that after installing a point release, the Xbox One may still require an additional update or two.

Updates and Internet Connectivity

Game consoles require updates to keep them current. The primary reason for most updates is to keep your games and your friends’ games in sync when playing multiplayer games. This prevents you from having a network edge over another player. When all game consoles are running the same version, all multiplayer activities are on the same playing field.

For this reason, Xbox Live and the PlayStation Network (PSN) require all users to update to use networking features. If you decline or postpone any updates, both the Xbox One and the PS4 will deny you access to networking features. You must update both the console and the games to continue using networking.

If you don’t intend to use the network features such as multiplayer or leader boards, then you don’t need to worry about this. However, if you’re not using the networking features, then there’s no reason to buy Xbox Live or PSN. So far, Nintendo doesn’t yet offer a network capable of multiplayer gaming like Xbox Live or PSN, but as soon as they do I’m quite sure they will enforce the same requirements.

Pushing off Updates

While you can postpone updates to your console, it’s not always the best idea. I get that some people are on metered networking connections and can’t afford to download 20GB sized updates. But, at the same time, this is how consoles work. If you’re looking for a console that supports offline updates, then you’ll want to look at the PS4 or the Xbox One. You might want to skip the Switch if this is a show stopper for you.

As we move into the future, these consoles will continue to assume more and more connectivity is always available. Don’t be surprised to find that both the Xbox One and PS4 discontinue their offline update feature at some point in the future.

Though, Sony will still need to provide a way to install the operating system when a hard drive is replaced. However, that won’t help you with updating your console offline.

If you have a reason to want to know your download sizes more precisely, other than what I mention above, please leave a comment below and let me know.

CanDo: An Amiga Programming Language

Posted in computers, history by commorancy on March 27, 2018

At one point in time, I owned a Commodore Amiga. This was back when I was in college. I first owned an Amiga 500, then later an Amiga 3000. I loved spending my time learning new things about the Amiga and I was particularly interested in programming it. While in college, I came across a programming language by the name of CanDo. Let’s explore.

HyperCard

Around the time that CanDo came to exist on the Amiga, Apple had already introduced HyperCard on the Mac. It was a ‘card’ based programming language. What that means is that each screen (i.e., card) had a set of objects such as fields for entering data, placement of visual images or animations, buttons and whatever other things you could jam onto that card. Behind each element on the card, you could attach written programming functions and conditionals (if-then-else, do…while, etc). For example, if you had an animation on the screen, you could add a play button. If you click the play button, a function would be called to run the animation just above the button. You could even add buttons like pause, slow motion, fast forward and so on.

CanDo was an interpreted object-oriented language written by a small software company named Inovatronics out of Dallas. I want to say it was released around 1989. Don’t let the fact that it was an interpreted language fool you. CanDo was fast for an interpreted language (by comparison, I’d say it was proportionally faster than the first version of Java), even on the then-current 68000 CPU series. The CanDo creators took the HyperCard idea, expanded it and created their own version on the Amiga. While it supported very similar ideas to HyperCard, it certainly wasn’t identical. In fact, it was a whole lot more powerful than HyperCard ever dreamed of being. HyperCard was a small infant next to this product. My programmer friend and I would come to find exactly how powerful the CanDo language could be.

CanDo

Amiga owners only saw what INOVATronics wanted them to see in this product: a simplistic and easy-to-use user interface consisting of a ‘deck’ (i.e., deck of cards) concept where you could add buttons or fields or images or sounds or animation to one of the cards in that deck. They were trying to keep this product as easy to use as possible. It was, for all intents and purposes, a drag-and-drop programming language, but it closely resembled HyperCard in functionality, not language syntax. For the most part, you didn’t need to write a stitch of code to make most things work. It was all just there. You could pull a button over and a bunch of pre-programmed functions could be placed onto the button and attached to other objects already on the screen. As a language, it was about as simple as you could make it. I commend the INOVATronics guys on the user-friendly aspect of this system. This was, hands down, one of the easiest visual programming languages to learn on the Amiga. They nailed that part of this software on the head.

However, if you wanted to write complex code, you most certainly could do this as well. The underlying language was completely full featured and easy to write. The syntax checker was amazing and would help you fix just about any problem in your code. The language had a full set of typical control constructs including for loops, if…then…else, do…while, while…until and even do…while…until (very unusual to see this one). It was a fully stocked, mostly free-form programming language, not unlike C, but easier. If you’re interested in reading the manual for CanDo, it is located at the end of this section below.

As an object-oriented language, internal functions were literally attached to objects (usually screen elements); for example, a button. The button would then have a string of code or functions that drove its internal functionality. You could even dip into that element’s functions to get data out (from the outside). Like most OO languages, the object itself is opaque. You can’t see its function names or use them directly; only the object that owns that code can. However, you could ask the object to use one of its functions and return data back to you. Of course, you had to know that function existed. In fact, this would be the first time I would be introduced to the concept of object-oriented programming in this way. There was no such thing as free-floating code in this language. All code had to exist as an attachment to some kind of object. For example, it was directly attached to the deck itself, to one of the cards in the deck, to an element on one of the cards or to an action associated with that object (mouse, joystick button or movement, etc).
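
To illustrate the idea in modern terms (this is plain Python, not CanDo/AV1 syntax, and the names are invented for illustration): code lives attached to an object such as a button, the internals stay opaque, and an outsider has to ask the object to run one of its own functions to get data back out.

```python
# Conceptual sketch only -- plain Python, not CanDo/AV1 syntax.
# Shows code attached to an object (a button) whose internals stay opaque.
class Button:
    def __init__(self, label):
        self.label = label
        self._click_count = 0      # internal state, not meant to be touched directly

    def _on_click(self):           # the "attached" code; outsiders don't call this by name
        self._click_count += 1
        print(f"{self.label} pressed")

    def press(self):               # the action the deck would wire to a mouse event
        self._on_click()

    def ask(self, question):       # the sanctioned way to get data back out of the object
        if question == "click_count":
            return self._click_count
        raise ValueError("unknown question")

play = Button("Play")
play.press()
print(play.ask("click_count"))     # -> 1
```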

CanDo also supported RPC calls. This was incredibly important for communication between two separately running CanDo deck windows. If you had one deck window with player controls and another window with a video you wanted to play, you could send a message from one window to the other to perform an action in that window, like play, pause, etc. There were many reasons to need many windows open, each communicating with the others.

The INOVATronics guys really took programming the Amiga to a whole new level… way beyond anything in HyperCard. It was so powerful, in fact, there was virtually nothing stock on the Amiga it couldn’t control. Unfortunately, it did have one downside. It didn’t have the ability to import system shared libraries on AmigaDOS. If you installed a new database engine on your Amiga with its own shared function library, there was no way to access those functions in CanDo by linking it in. This was probably CanDo’s single biggest flaw. It was not OS extensible. However, for what CanDo was designed to do and the amount of control that was built into it by default, it was an amazing product.

I’d also like to mention that TCP/IP was just coming into existence with modems on the Amiga. I don’t recall how much CanDo supported network sockets or network programming. It did support com port communication, but I can’t recall if it supported TCP/IP programming. I have no doubt that had INOVATronics stayed in business and CanDo progressed beyond its few short years in existence, TCP/IP support would have been added.

CanDo also supported Amiga Rexx (ARexx), which could be used to add features that CanDo didn’t support directly. Though ARexx worked, it wasn’t as convenient as having a feature supported directly by CanDo.

Here are the CanDo manuals if you’re interested in checking out more about it:

Here’s a snippet from the CanDo main manual:

CanDo is a revolutionary, Amiga specific, interactive software authoring system. Its unique purpose is to let you create real Amiga software without any programming experience. CanDo is extremely friendly to you and your Amiga. Its elegant design lets you take advantage of the Amiga’s sophisticated operating system without any technical knowledge. CanDo makes it easy to use the things that other programs generate – pictures, sounds, animations, and text files. In a minimal amount of time, you can make programs that are specially suited to your needs. Equipped with CanDo, a paint program, a sound digitizer, and a little bit of imagination, you can produce standalone applications that rival commercial quality software. These applications may be given to friends or sold for profit without the need for licenses or fees.

As you can see from this snippet, INOVATronics thought of it as an ‘Authoring System’ not as a language. CanDo itself might have been, but the underlying language was most definitely a programming language.

CanDo Player

CanDo’s creation process worked like this: you would create your CanDo deck and program it in the deck creator. Once your deck was completed, you only needed the CanDo player to run your deck. The player ran with much less memory than the entire CanDo editor system. The player was also freely redistributable. However, you could run your decks from the CanDo editor if you preferred. The CanDo Player could also be appended to the deck to become a pseudo-executable that allowed you to distribute your executable software to other people. Also, anything you created in CanDo was fully redistributable without any strings attached to CanDo. You couldn’t distribute CanDo itself, but you could distribute anything you created in it.

The save files for decks were simple byte-compiled packages. Instead of having to store human-readable words and phrases, each language keyword had a corresponding byte code. This made stored decks much smaller than keeping all of the human-readable code there. It also made the code a lot trickier to read if you opened the deck up in a text editor. It wasn’t totally secure, but it was better than having your code out there for all to see when you distributed a deck. You would actually have to own CanDo to decompile the code back into a human-readable format… which was entirely possible.
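
As a hedged illustration of the general idea only (this is not CanDo’s actual deck format), byte-compiling amounts to swapping each known keyword for a compact code before saving, and reversing the mapping to decompile:

```python
# Conceptual sketch of keyword byte-compiling -- not CanDo's real deck format.
KEYWORDS = {"if": 0x01, "then": 0x02, "else": 0x03, "do": 0x04, "while": 0x05}
REVERSE = {code: word for word, code in KEYWORDS.items()}

def compile_tokens(tokens):
    """Replace known keywords with single-byte codes; keep everything else as text."""
    return [KEYWORDS.get(t, t) for t in tokens]

def decompile_tokens(compiled):
    """Turn byte codes back into human-readable keywords."""
    return [REVERSE.get(t, t) if isinstance(t, int) else t for t in compiled]

source = ["if", "Clicked", "then", "do", "PlaySound"]
compiled = compile_tokens(source)
print(compiled)                     # [1, 'Clicked', 2, 4, 'PlaySound']
print(decompile_tokens(compiled))   # back to the readable form
```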

The CanDo language was way too young to support more advanced code security features, like password encryption before executing the deck, even though PGP was a thing at that time. INOVATronics had more to worry about than securing your created deck’s code from prying eyes, though they did improve this as they upgraded versions. I also think the INOVATronics team was just a little naïve about how easy it would be to crack open CanDo, let alone user decks.

TurboEditor — The product that never was

A programmer friend who was working towards his CompSci masters owned an Amiga, and also owned CanDo. In fact, he introduced me to it. He had been poking around with CanDo and found that it supported three very interesting functions: the ability to decompile its own code into human-readable format to allow modification, syntactically check the changes and recompile it, all while still running. Yes, you read that right. It supported on-the-fly code modification. Remember this, it’s important.

Enter TurboEditor. Because of this one simple little thing (not so little, actually) that my friend found, we were able to decompile the entire CanDo program and figure out how it worked. Remember that important thing? Yes, that’s right, CanDo was actually written in itself and it could modify pieces that were currently executing. Let me clarify this just a little. One card could modify another card, then pull that card into focus. The actual card wasn’t currently executing, but the deck was. In fact, we came to find that CanDo was merely a facade. We also found that there was a very powerful object-oriented, fully reentrant, self-modifying programming language under that facade of a UI. In fact, this is how CanDo’s UI worked. Internally, it could take an element, decompile it, modify it and then recompile it right away and make it go live, immediately adding the updated functionality to a button or slider.

While CanDo could modify itself, it never did this. Instead, it utilized a parent-child relationship. It always created a child sandbox for user-created decks. This sandbox area is where the user built new CanDo decks. This sandbox approach is also how CanDo built and displayed a preview of your deck’s window. The sandbox would then be saved to a deck file and executed as necessary. In fact, it would be one of these sandbox areas that we would use to build TurboEditor, in TurboEditor.

Anyway, together, we took this find one step further and created an alternative CanDo editor that we called TurboEditor, so named because you could get into it and edit your buttons and functions much, much faster than CanDo’s somewhat sluggish and clunky UI. In fact, we took a demo of our product to INOVATronics’s Dallas office and pitched the idea of this new editor to them. Let’s just say, that team was lukewarm and not very receptive to the idea initially. While they were somewhat impressed with our tenacity in unraveling CanDo to the core, they were also a bit dismayed and a little perturbed by it. Though, they warmed to the idea a little. Still, we pressed on hoping we could talk them into the idea of releasing TurboEditor as an alternative script editor… as an adjunct to CanDo.

Underlying Language

After meeting with and having several discussions with the INOVATronics team, we found that the underlying language actually had a different name: AV1. Even then, everyone called it ‘CanDo’ rather than by that name. Suffice it to say that I was deeply disappointed that INOVATronics never released the underlying fully opaque, object-oriented, reentrant, self-modifying, on-the-fly AV1 language or its spec. If they had, it would have easily become the go-to deep programming language of choice for the Amiga. Most people at the time had been using C if they wanted to dive deep. However, INOVATronics had a product poised to take over for C on the Amiga in nearly every way (except for the shared library thing, which could have been resolved).

I asked the product manager while at the INOVATronics headquarters about releasing the underlying language and he specifically said they had no plans to release it in that way. I always felt that was shortsighted. In hindsight for them, it probably was. If they had released it, it could have easily become CanDo Pro and they could have sold it for twice the price or more. They just didn’t want to get into that business for some reason.

I also spoke with several other folks while I was at INOVATronics. One of them was the programmer who actually built much of CanDo (or, I should say, the underlying language). He told me that he built the key pieces of CanDo (the compiler portions) in assembly and the rest with just enough C to bootstrap the language into existence. The C was also needed to link in the necessary Amiga shared library functions to control audio, animation, graphics and so on. This new language was very clever and very useful for at least building CanDo itself.

It has been confirmed by Jim O’Flaherty, Jr. (formerly Technical Support for INOVATronics) via a comment that the underlying language name was, in fact, AV1, with the AV portion meaning audio-visual. This underlying object-oriented Amiga programming language was, at the time, an entirely new language designed specifically to control the Amiga computer.

Demise of INOVAtronics

After we got what seemed like a promising response from the INOVATronics team, we left their offices. We weren’t sure it would work out, but we kept hoping we would be able to bring TurboEditor to the market through INOVATronics.

Unfortunately, our hopes dwindled. As weeks turned into months waiting for the go-ahead for TurboEditor, we decided it wasn’t going to happen. We did call them periodically to get updates, but nothing came of that. We eventually gave up, not because we didn’t want to release TurboEditor, but because INOVATronics was apparently having trouble staying afloat. Their CanDo flagship product at the time wasn’t able to keep the lights on for the company. In fact, they were probably floundering when we visited them. I will also say their offices were a bit of a dive. They weren’t in the best area of Dallas and were in an older office building. The office was clean enough, but the space just seemed a little well worn.

Within a year of meeting the INOVATronics guys, the entire company folded. No more CanDo. It was also a huge missed opportunity for me in more ways than one. I could have gone down to INOVATronics at the time and bought the rights to the software in a fire sale and resurrected it as TurboEditor (or the underlying language). Hindsight is 20/20.

We probably could have gone ahead and released TurboEditor after the demise of INOVATronics, but we had no way to support the CanDo product without having their code. We would have had to buy the rights to the software code for that.

So, there you have it. My quick history of CanDo on the Amiga.

If you were one of the programmers who happened to work on the CanDo project at INOVATronics, please leave a comment below with any information you may have. I’d like to expand this article with any information you’re willing to provide about the history of CanDo, this fascinating and lesser known Amiga programming language.

 

App-casting vs Screen Casting vs Streaming

Posted in Android, Apple, computers by commorancy on October 8, 2016

A lot of people seem to be confused by these three types of broadcasting software, including using AppleTV and Chromecast for this. I’m here to help clear that up. Let’s explore.

Streaming and Buffering

What exactly is streaming? Streaming is when software takes content (a music file, movie file, etc.) and sends it out in small chunks from the beginning to the end of the file over a network. With streaming, there is a placeholder: a point-in-time entry point where you begin watching. In other words, when you join a streaming feed, you’re watching that feed live. If you join 20 minutes in, you’ll miss the first 20 minutes that have already played. The placeholder is the point in time that’s currently being played from the media.

What about broadcasting? Is it the same? Yes, it is a form of streaming that is used during app-casting and screen casting. So, if you join a live screen casting feed, you won’t get to see what has already played; you only get to see the stream from the point at which you joined it, already in progress.

Streaming also uses buffering to support its actions. That means that during the streaming process, the application buffers up a bunch of content into memory (the fastest type of storage possible) so that it can grab the next chunk rapidly and send it to the streaming service for smooth continuous playback. Buffering is used to avoid access to slow devices like hard drives and other storage devices which may impair smooth playback. Because of buffering, there may be a delay in what your screen shows versus what the person watching sees.
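
As a rough sketch of those two ideas together, here’s chunked reading feeding a bounded in-memory buffer. The file name, chunk size and buffer depth are arbitrary illustration values, not anything a real streaming service uses.

```python
# Rough sketch: a reader thread streams a file in small chunks into a bounded
# in-memory buffer (queue) while a "player" drains it for smooth playback.
import queue
import threading

CHUNK_SIZE = 64 * 1024             # 64 KB chunks (arbitrary)
buffer = queue.Queue(maxsize=32)   # bounded buffer: roughly 2 MB held in memory

def stream_file(path):
    with open(path, "rb") as media:
        while True:
            chunk = media.read(CHUNK_SIZE)
            if not chunk:
                break
            buffer.put(chunk)      # blocks when the buffer is full (back-pressure)
    buffer.put(None)               # sentinel: end of stream

def play():
    while True:
        chunk = buffer.get()
        if chunk is None:
            break
        # A real player would decode and render the chunk here.
        print(f"played {len(chunk)} bytes")

threading.Thread(target=stream_file, args=("movie.mp4",), daemon=True).start()
play()
```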

Streaming encodes the content to a streaming format at broadcast time. It is also decoded by the client during streaming. Therefore, the endpoint client viewer may choose to reduce the resolution of the content to improve streaming performance. This is why, if you’re watching Netflix or Amazon, the resolution may drop to less than HD. However, if you’re watching content across a local network at home, this should never be a problem (unless your network or WiFi is just really crappy).

Note, I will use the word stream and cast interchangeably to mean the same thing within this article.

Screen Casting (i.e., Screen Mirroring)

Screen casting is broadcasting the screen of your device itself. For example, if you want to broadcast the screen of your MacBook or your Android tablet, it will broadcast at whatever resolution your screen is currently running. If your resolution is 1920×1080, then it will stream your screen at HD resolution. If your screen’s resolution is less than this, it will stream the content at less than HD. If your screen resolution is more than this, it will stream at that resolution. Though, with some streaming software, you can set a top end resolution and encoder to prevent sending out too much data.

Because screen casting or mirroring only casts at the resolution of your screen, it is not optimal for streaming movies (unless your movie is 1080p and matches your screen’s resolution). If your screen runs at a lower resolution than the content, it is not optimal for watching movies. If you want to watch UltraHD movies, this is also not possible in most cases (unless your PC has an extremely advanced graphics card).

For many mobile devices, and because screen resolutions vary, it’s likely your screen resolution is far lower than that of the content you want to watch. For this reason, app developers created app-casting.

App-casting

What exactly is app-casting? App-casting distances itself from the screen resolution by streaming the content at the content’s resolution. App-casting is when you use AppleTV or Chromecast to stream content from an app-cast enabled application on your computer or mobile device. Because the content dictates the resolution, there are no pesky screen resolution problems to get in the way. This means content streamed through applications can present their content at full native resolutions.

For Netflix, ABC TV, NBC TV, Hulu and Amazon, this means you’ll be watching those movies and TV shows in glorious full 1080p resolution (or whatever the app-casting receiver supports and also based on the content). For example today, AppleTV and Chromecast only support up to HD resolution (i.e., 1080p). In the future, we may see UltraHD versions of AppleTV and Chromecast become available. However, for now, we’re limited to HD with these devices.

Though, once UltraHD versions of AppleTV and Chromecast arrive, streaming to these devices will also mean heftier bandwidth requirements. Your home network might be fine for casting 1080p content, but UltraHD content may not run quite as well without better bandwidth. To stream UltraHD 4K content, you may have to upgrade your wireless network.

Note that Google has recently announced an UltraHD 4k Chromecast will be available in November 2016.

Chromecast and AppleTV

These are the two leading app-streaming devices on the market. AppleTV supports iOS app streaming and Chromecast supports Android OS streaming. While these are commonly used and sold for this purpose, they are by no means the only software or hardware solutions on the market.

For example, DLNA / UPnP is common for streaming to TVs, Xbox One and PS4. This type of streaming can be found in apps available on both iOS and Android (as well as MacOS, Linux and Windows). When streaming content from a DLNA compatible app, you don’t need to have a special receiver like AppleTV or Chromecast. Many smart TVs today support DLNA streaming right out of the box. To use DLNA, your media device needs to present a list of items available. After selection, DLNA will begin streaming to your TV or other device that supports DLNA. For example, Vizio TVs offer a Multimedia app from the Via menu to start DLNA search for media servers.

Note that you do not have to buy an AppleTV or Chromecast to stream from your tablet, desktop or other device. There are free and paid DLNA, Twitch and YouTube streaming apps. You can stream your display, and possibly even your apps, using third-party apps. You’ll need to search for a DLNA streaming app in whichever app store is associated with your device.

DLNA stands for Digital Living Network Alliance. It is an organization that advocates for content streaming around the home.

App-casting compatibility

To cast from an application on any specific operating system to devices like Chromecast or AppleTV, the app must support this remote display protocol. Not all apps support it, though Apple and Google built apps do. Third party applications must build their software to support these external displays. If the app doesn’t support it, you won’t see the necessary icon to begin streaming.

For example, to stream on iOS, a specific icon appears to let you know that an Apple TV is available. For Android, a similar icon also appears if a Chromecast is available. If you don’t see the streaming icon on your application, it means that your application does not support streaming to a remote display. You will need to ask the developer of that software to support it.

There are also third party casting apps that support streaming video data to remote displays or remote services like Twitch or YouTube. You don’t necessarily need to buy an AppleTV or Chromecast to stream your display.

Third Party Streaming Apps

For computers or mobile devices, there are a number of streaming apps available. Some require special setups, some support Twitch or YouTube and others support DLNA / UPnP. If you’re looking to stream content to the Internet, pick one that supports Twitch or YouTube. If you want to stream data just to your local network, find one that supports DLNA.

You’ll just need to search through the appropriate app store to find the software you need. Search for DLNA streaming and you’ll find a number of apps that support this protocol. Note that apps that don’t require the use of Chromecast or AppleTV tend to be less robust at streaming. This means they may crash or otherwise not work as expected. Using AppleTV or Chromecast may be your best alternative if you need to rely on perfect streaming for a project or presentation.

Basically, for stability and usability, I recommend using an AppleTV or Chromecast. But, there are other software products that may work.

How to stop Mac dock icon bouncing

Posted in Apple, botch, computers by commorancy on September 28, 2015

When an application starts up in MacOS X Yosemite, it bounces the application dock icon a few times, then stops bouncing once the application has started. For me, this is perfectly fine because at least there’s a positive response. Positive response is never a bad thing in operating system design.

Unfortunately, Apple decided to overload this same bouncing behavior for notifications, bouncing a dock icon to get your attention. For me, this is definitely not wanted. Not only is it extremely annoying, it never stops until you go touch that icon, and it performs this bouncing way too frequently. There are much better ways to get user attention than by bouncing the dock icon. Thankfully, there’s a way to stop this annoying and unwanted UI behavior. Let’s explore.

Defaults Database

Apple has what’s known as the user defaults database. It is a database of settings not unlike the old UNIX dotfile system, but much more extensive. Unfortunately, most developers don’t document which settings can go into the defaults database, and many of the settings may be hidden. However, you can easily read the values by opening Terminal.app and typing:

$ defaults read com.apple.dock | more

This command will spew out a lot of stuff, so you’ll want to pipe it to more to page through it. Each app has its own namespace, similar in format to com.apple.dock, that you can review. Not all apps support changing settings this way. For other apps, simply replace com.apple.dock with the appropriate application namespace to read that application’s settings. If you decide to change any of the values, you may have to kill and restart the application or log out and log back in.
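If you just want to check a single key, or poke around another app’s settings, the same command works per-key and per-domain. For example (the no-bouncing key shown here only exists after you’ve set it in the next section, and the grep filter is just an illustration):

$ defaults domains | tr ',' '\n' | grep -i dock    # list preference domains with "dock" in the name
$ defaults read com.apple.dock no-bouncing         # read a single key; errors if the key isn't set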

In short, there is a way to stop the bouncing using the defaults command. To do this, you will need to update the defaults database for com.apple.dock with the correct setting to stop it.

Stop the Bouncing
To stop the bouncing of dock icons, open a terminal shell and, at a command prompt, type the following:

$ defaults write com.apple.dock no-bouncing -bool TRUE
$ killall Dock

Keep in mind that this is a global setting. This stops the dock icon bouncing for every application on your system for all notifications. The launch (startup) bouncing is not controlled by this setting; for that, see the Dock preferences pane (“Animate opening applications”).

You can always reenable the bouncing at any time by opening terminal and then typing:

$ defaults write com.apple.dock no-bouncing -bool FALSE
$ killall Dock
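As an alternative to writing FALSE, you can also delete the key entirely, which returns the Dock to its default (bouncing) behavior:

$ defaults delete com.apple.dock no-bouncing
$ killall Dock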

Note that the defaults database is stored locally in each user account. So, if you log into several different accounts on your Mac, you’ll need to do this for each of your accounts.

Please leave me a comment below if this doesn’t work for you.

Flickr’s new interface review: Is it time to leave Flickr?

Posted in botch, cloud computing, computers, social media by commorancy on May 21, 2013

Yahoo’s Flickr has just introduced their new ’tile’ interface (not unlike Windows Metro tiles) as the new user interface experience. Unfortunately, it appears that Yahoo introduced this site without any kind of preview, beta test or user feedback. Let’s explore.

Tile User Experience

The tile interface may at first appear enticing. But you quickly realize just how busy, cluttered, cumbersome and ugly this new interface is when you actually try to navigate and use it. The interface is very distracting and, again, overly busy. Note, it’s not just the tiles that are the problem. When you click an image from the tile sheet, it takes you to a huge black background with the image on top. Then you have to scroll and scroll to get to the comments. No, not exactly how I want my images showcased. Anyway, let me start by saying that I’m not a fan of these odd square tile interfaces (which look like a bad copycat of a Mondrian painting). The interface has been common on the Xbox 360 for quite some time and is now standard for the Windows Metro interface. While I’ll tolerate it on the Xbox as a UI, it’s not an enticing user experience. It’s frustrating and, more than that, it’s ugly. So why exactly Yahoo decided on this user interface as their core experience, I am completely at a loss… unless this is some bid to bring back the Microsoft deal they tossed out several years back. I digress.

Visitor experience

While I’m okay with the tiles being the primary visitor experience, I don’t want this interface as my primary account owner experience. Instead, there should be two separate and distinct interfaces. An experience for visitors and an experience for the account owner.  The tile experience is fine for visitors, but keep in mind that this is a photo and art sharing site.  So, I should be able to display my images in the way I want my users to see them.  If I want them framed in black, let me do that. If I want them framed in white, let me do that. Don’t force me into a one-size-fits-all mold with no customization. That’s where we are right now.

Account owner experience

As a Flickr account owner, I want an experience that helps me manage my images, my sets, my collections and, most of all, the comments and statistics about my images. The tile experience gives me none of this. It may seem ‘pretty’ (ahem, pretty ugly), but it’s not at all conducive to managing the images. Yes, I can hear the argument that there is the ‘organizr’ that you can use. Yes, but that’s of limited functionality. I preferred the view where I could see view counts at a glance, whether someone has favorited a photo, whether there are any comments, etc. I don’t want to have to dig down into each photo to find this information; I want it at a glance. Hence the need for an account owner interface experience that’s separate from what visitors see.

Customization

This is a photo sharing site. These are my photos. Let me design my user interface experience to match the way I want my photos to be viewed. It is a gallery after all. If I were to show my work at a gallery, I would be able to choose the frames, the wall placement, the lighting and all other aspects about how my work is shown. Why not Flickr? This is what Flickr needs to provide. Don’t force us into a one-size-fits-all mold of something that is not only hideous to view, it’s slow to load and impossible to easily navigate.  No, give me a site where I can frame my work on the site. Give me a site where I can design a virtual lighting concept.  Give me a site where I can add virtual frames. Let me customize each and every image’s experience that best shows off my work.

Don’t corner me into a single user experience where I have no control over look and feel. If I don’t like the tile experience, let me choose from other options. This is what Flickr should have been designing.

No Beta Test?

Any site that rolls out a change as substantial as what Flickr has just pushed usually offers a preview window.  A period of time where users can preview the new interface and give feedback. This does two things:

  1. Gives users a way to see what’s coming.
  2. Gives the site owner a way to tweak the experience based on feedback before rolling it out.

Flickr didn’t do this. It is a huge mistake to think that users will just silently accept any interface some random designer throws out there. The site is as much the users’ as it is Yahoo’s. It’s a community effort. Yahoo provides us with the tools to present our photos; we provide the photos that enhance their site. Yahoo doesn’t get this concept. Instead, they have become jaded and feel that they can do whatever they want and users will ‘have’ to accept it. This is a grave mistake for any web sharing site, least of all Flickr. Flickr: stop, look and listen. Now is the time.

Photo Sharing Sites

Beyond Flickr, there are many, many photo sharing sites on the Internet. Flickr is not the only one. As content providers, we can simply take our photos and move them elsewhere. Yahoo doesn’t get this concept. They think they have some kind of captive audience. Unfortunately, this thinking is why Yahoo’s stock is now at $28 a share and not $280 a share. We can move our photos to a place with a better experience (i.e., Picasa, DeviantArt, Photobucket, 500px, etc.). Yahoo needs to wake up and realize they are not the only photo sharing site on the planet.

Old Site Back?

No, I’m not advocating a move back to the old site. I do want a new user experience with Flickr, just not this one. I want an experience that works for my needs. I want an interface that lets me showcase my images the way I want. I want a virtual gallery that lets me customize how my images are viewed, not those hideous and slow tiles. Why not take a page from the WordPress handbook and support gallery themes? Let me choose a theme (or design my own) that lets me decide how to best represent my imagery. This is the user experience that I want. This is the user experience I want my visitors to have. These are my images; let me show them in their best light.

Suggestions for @Yahoo/@Flickr

Reimagine. Rethink. Redesign. I’m glad to see that Yahoo is trying new things. But the designers need to be willing to admit when a new idea is a failure and redesign it until it does work. Don’t stop coming up with new ideas. Don’t think that this is the way it is and there is nothing more. If Yahoo stops at this point, with the interface as it is now, the site is dead and very likely Yahoo with it. Yahoo is very nearly on its last legs anyway. Making such a huge blunder with such a well respected (albeit antiquated) site could well be the last thing Yahoo ever does.

Marissa, have your engineers take this back to the drawing board and give us a site that we can actually use and that we actually want to use.


iPhone Risk: Your Employer and Personal Devices

Posted in best practices, cloud computing, computers, data security, Employment by commorancy on May 5, 2013

So, you go to work every day with your iPhone, Android phone or even an iPod. You bring it with you because you like having the convenience of people being able to reach you or because you listen to music. Let’s get started so you can understand your risks.

Employment Agreements

We all know these agreements. We typically sign one whenever we start a new job. Employers want to make sure that each employee remains responsible throughout employment, and some even require the employee to remain responsible after leaving the company for a specified (or sometimes unspecified) period of time. That is, these agreements make you, as an employee, personally responsible for not sharing things that shouldn’t be shared. Did you realize that many of these agreements extend to anything on your person and can include your iPhone, iPod, Android phone, Blackberry or any other personal electronic device that you carry onto the property? Thus, the Employment Agreement may allow your employer to seize these devices to determine if they contain any data they shouldn’t contain.

You should always take the time to read these agreements carefully and thoroughly. If you don’t or can’t decipher the legalese, you should take it to an attorney and pay the fee for them to review it before signing it.  You might be signing away too many of your own personal rights including anything you may be carrying on your person.

Your Personal Phone versus Your Employer

We carry our personal devices to our offices each and every day without really thinking about the consequences. The danger, though, is that many employers now allow you to load up personal email on your own personal iDevices. Doing this can especially leave your device at risk of legal seizure or forfeiture under certain conditions.  So, always read Employment Agreements carefully. Better, if your employer requires you to be available remotely, they should supply you with all of the devices you need to support that remote access. If that support means you need to be available by phone or text messaging, then they should supply you with a device that supports these requirements.

Cheap Employers and Expensive Devices

As anyone who has bought an iPhone or an Android phone can attest, these devices are not cheap. Because many people are buying these for their own personal use, employers have become jaded by this and leech onto this freebie by allowing employees to use their own devices for corporate communication purposes. This is called a subsidy. You are paying your cell phone bill and giving part of that usage to your employer, unless your employer is reimbursing you part or all of your plan rate. If you are paying your own bill without reimbursement, but using the device to connect to your company’s network or to corporate email, your device is likely at high risk should there be a legal need to investigate the company for any wrongdoing. This could leave your device at risk of being pulled from your grasp, potentially forever.

If you let the company reimburse part or all of your phone bill, especially on a post-paid plan, the company could seize your phone on termination as company property. The reason: post-paid plans pay for the cost of the phone as part of your bill. If the company reimburses more than 50% of the phone cost as part of your bill, they could legally own the phone at the end of your employment. Even if the company doesn’t reimburse your plan, your employer could still seize your device if you put corporate communication on your phone, because it then contains company property.

What should I do?

If the company requires that you work remotely or have access to company communication after hours, they need to provide you with a device that supports this access. If they are unwilling to provide you with a device, you should decline to use your personal device for that purpose. At least, you should decline unless the employment agreement specifically states that they can’t seize your personal electronics. Although, most employers likely won’t put a provision in that explicitly forbids them from taking your device. Once you bring your device on the property, your employer can claim that your device contains company property and seize it anyway. Note that even leaving it in your car could be enough if the company WiFi reaches your car in its parking space.

Buy a dumb phone and use that at work. By this I mean, buy a phone that doesn’t support WiFi, doesn’t support a data plan, doesn’t support email, doesn’t support bluetooth and that doesn’t support any storage that can be removed. If your phone is a dumb phone, it cannot be claimed that it could contain any company file data.  If it doesn’t support WiFi, it can’t be listening in on company secrets.  This dumb phone basically requires your company to buy you a smart phone if they need you to have remote access to email and always on Internet. It also prevents them from leeching off your personal iPhone plan.

That doesn’t mean you can’t have an iPhone, but you should leave it at home during work days. Bring your dumb phone to work. People can still call and text you, but the phone cannot be used as a storage vehicle for company secrets (unless you start entering corporate contacts into the phone book). You should avoid entering any company contact information in your personal phone’s address book. Even this information could be construed as confidential data and could be enough to have even your dumb phone seized.

If they do decide to seize your dumb phone, you’ve only lost a small amount of money in the phone and it’s simple to replace the SIM card in most devices. So, you can probably pick up a replacement phone and get it working the same day for under $100 (many times under $30).

Request to Strike Language from the Employment Agreement

Reading through your Employment Agreement can make or break the deal of whether or not you decide to hire on. Some Employment Agreements are way overreaching in their goals. Depending on how the management reacts to your request to strike language from the Employment Agreement may tell you the kind of company you are considering. In some cases, I’ve personally had language struck from the agreement and replaced with an addendum to which we both agreed and signed. In another case, I walked away from the position because both the hiring and HR managers refused to alter the Employment Agreement containing overreaching language. Depending on how badly they want to fill the position, you may or may not have bargaining power here. However, if it’s important to you, you should always ask. If they decline to amend the agreement, then you have to decide whether or not the position is important enough to justify signing the Agreement with that language still in place.

But, I like my iPhone/iPad/iPod too much

Then you take your chances with your employer. Only you can judge your employer’s intent (and by reading your employment agreement). When it comes down to brass tacks, your employer will do what’s right for the company, not for you. The bigger the company gets, the more likely they are to take your phone and not care about you or the situation. If you work in a 1000+ employee company, your phone seizure risk greatly increases. This is especially true if you work in any position where you may have access to extremely sensitive company data.

If you really like your device, then you should protect it by leaving it someplace away from the office (and not in your car parked on company property). This will ensure they cannot seize it from you when you’re on company property. However, it won’t stop them from visiting your home and confiscating it from you there.

On the other hand, unlike the dumb phone example above, if they seize your iPhone, you’re looking at a $200-500 expense to replace the phone plus the SIM card and possibly other expenses. If you have synced your iPhone with your computer at home and data resides there, that could leave your home computer at risk of seizure, especially if the Federal Government is involved. Also, because iCloud now stores backups of your iDevices, they could petition the court to seize your Apple ID from Apple to gain access to your iDevice backups.

For company issued iPhones, create a brand new Apple ID using your company email address. Have your company issued phone create its backups in your company created Apple ID. If they seize this Apple ID, there is no loss to you. You should always, whenever possible, create separate IDs for company issued devices and for your personal devices. Never overlap personal and company login IDs, no matter how tempting it may be. This includes doing such things as linking your personal Facebook, Google, LinkedIn, Yahoo or any other personal site accounts to your corporate issued iPhone or apps. If you take any personal photographs using your company phone, you should make sure to get them off of the phone quickly. Better, don’t take personal pictures with your company phone. If you must sync your iPhone with a computer, make sure it is only a company computer. Never sync your company issued iPhone or iPad with your personally owned computer. Only sync your device with a company issued computer.

Personal Device Liabilities

Even if during an investigation nothing is turned up on your device related to the company’s investigation, if they find anything incriminating on your device (i.e., child porn, piracy or any other illegal things), you will be held liable for those things they find as a separate case. If something is turned up on your personal device related to the company’s investigation, it could be permanently seized and never returned.  So, you should be aware that if you carry any device onto your company’s premises, your device can become the company’s property.

Caution is Always Wise

With the use of smart phones comes unknown liabilities when used at your place of employment. You should always treat your employer and place of business as a professional relationship. Never feel that you are ‘safe’ because you know everyone there. That doesn’t matter when legal investigations begin. If a court wants to find out everything about a situation, that could include seizing anything they feel is relevant to the investigation. That could include your phone, your home computer, your accounts or anything else that may be relevant. Your Employment Agreement may also allow your employer to seize things that they need if they feel you have violated the terms of your employment. Your employer can also petition the court to require you to relinquish your devices to the court.

Now, that doesn’t mean you won’t get your devices, computers or accounts back. But, it could take months if the investigation drags on and on. To protect your belongings from this situation, here are some …

Tips

  • Read your Employment Agreement carefully
  • Ask to strike language from Agreements that you don’t agree with
  • Make sure agreements with companies eventually expire after you leave the company
  • NDAs should expire after 5-10 years after termination
  • Non-compete agreements should expire 1 year after termination
  • Bring devices to the office that you are willing to lose
  • Use cheap dumb phones (lessens your liability)
  • Leave memory sticks and other memory devices at home
  • Don’t use personal devices for company communication (i.e., email or texting)
  • Don’t let the company pay for your personal device bills (especially post-paid cell plans)
  • Prepaid plans are your friend at your office
  • Require your employer to supply and pay for iDevices to support your job function
  • Turn WiFi off on all personal devices and never connect them to corporate networks
  • Don’t connect personal phones to corporate email systems
  • Don’t text any co-workers about company business on personal devices
  • Ask co-workers to refrain from texting your personal phone
  • Use a cheap mp3 player without WiFi or internet features when at the office
  • Turn your personal cell phone off when at work, if at all possible
  • Step outside the office building to make personal calls
  • Don’t use your personal Apple ID when setting up your corporate issued iPhone
  • Create a new separate Apple ID for corporate issued iPhones
  • Don’t link iPhone or Android apps to personal accounts (LinkedIn, Facebook, etc)
  • Don’t take personal photos with a company issued phone
  • Don’t sync company issued phones with your personally owned computer
  • Don’t sync personal phones with company owned computers
  • Replace your device after leaving employment of a company

Nothing can prevent your device from being confiscated under all conditions. But, you can help reduce this outcome by following these tips and by segregating your personal devices and accounts from your work devices and work accounts. Keeping your personal devices away from your company’s property is the only real way to help prevent it from being seized. But, the company could still seize it believing that it may contain something about the company simply because you were or are an employee. Using a dumb prepaid phone is probably the only way to ensure that on seizure, you can get a phone set up and your service back quickly and with the least expense involved. I should also point out that having your phone seized does not count as being stolen, so your insurance won’t pay to replace your phone for this event.

Windows 8 PC: No Linux?

Posted in botch, computers, linux, microsoft, redmond, windows by commorancy on August 5, 2012

According to the rumor mill, Windows 8 PC systems will come shipped with a new BIOS replacement using UEFI (the extension of the EFI standard).  This new replacement boot system apparently comes shipped with a secured booting system that, some say, will be locked to Windows 8 alone.   On the other hand, the Linux distributions are not entirely sure how the secure boot systems will be implemented.  Are Linux distributions being prematurely alarmist? Let’s explore.

What does this mean?

For Windows 8 users, probably not much.  Purchasing a new PC will be business as usual.  For Microsoft, and assuming UEFI secure boot cannot be disabled or reset, it means you can’t load another operating system on the hardware.  Think of locked and closed phones and you’ll get the idea.  For Linux, that would mean the end of Linux on PCs (at least, not unless Linux distributions jump through some secure booting hoops).  Ok, so that’s the grim view of this.  However, for Linux users, there will likely be other options.  That is, buying a PC that isn’t locked.  Or, alternatively, resetting the PC back to its factory default state of being unlocked (which the UEFI should support).

On the other hand, dual booting may no longer be an option with secure boot enabled.  That means it may not be possible to install both Windows and Linux onto the system and choose which to boot at boot time.  Then again, we do not know whether Windows 8 requires UEFI secure boot to boot or whether it can be disabled.  So far it appears to be required, but if you buy a boxed retail edition of Windows 8 (which is not yet available), it may be possible to disable secure boot.  It may be that some of the released-to-manufacturing (OEM) editions require secure boot and some editions may not.

PC Manufacturers and Windows 8

The real question here, though, is what’s driving UEFI secure booting?  Is it Windows?  Is it the PC manufacturers?  Is it a consortium?  I’m not exactly sure.  Whatever the impetus is to move in this direction may lead Microsoft back down the antitrust path once again.  Excluding all other operating systems from PC hardware is a dangerous precedent as this has not been attempted on this hardware before.  Yes, with phones, iPads and other ‘closed’ devices, we accept this.  On PC hardware, we have not accepted this ‘closed’ nature because it has never been closed.  So, this is a dangerous game Microsoft is playing, once again.

Microsoft anti-trust suit renewed?

Microsoft should tread on this ground carefully.  Asking PC manufacturers to lock PCs to exclusively Windows 8 use is a lawsuit waiting to happen.  It’s just a matter of time before yet another class action lawsuit begins and, ultimately, turns into a DOJ antitrust suit.  You would think that Microsoft would have learned its lesson by its previous behaviors in the PC marketplace.  There is no reason that Windows needs to lock down the hardware in this way.

If every PC manufacturer begins producing PCs that preclude the loading of Linux or other UNIX distributions, this treads entirely too close to antitrust territory for Microsoft yet again.  If Linux is excluded from running on the majority of PCs, this is definitely not wanted behavior.  This rolls us back to the time when Microsoft used to lock the loading of Windows onto the hardware over every other operating system on the market.  Except that last time, nothing stopped you from wiping the PC and loading Linux; you just had to pay the Microsoft tax to do it.  At that time, you couldn’t even buy a PC without Windows.  This time, according to reports, you cannot even load Linux with secure booting locked to Windows 8.  In fact, you can’t even load Windows 7 or Windows XP, either.  Using UEFI secure boot on Windows 8 PCs treads within millimeters of the same collusive behavior that Microsoft was called on many years back, and ultimately went to court over and lost much money on.

Microsoft needs to listen and tread carefully

Tread carefully, Microsoft.  Locking PCs to running only Windows 8 is as close as you can get to the antitrust suits you thought you were done with.  Unless PC manufacturers give ways of resetting and turning off the UEFI secure boot system to allow non-secure operating systems, Microsoft will once again be seen in collusion with PC manufacturers to exclude all other operating systems from UEFI secure boot PCs.  That is about as antitrust as you can get.

I’d fully expect to see Microsoft (and possibly some PC makers) in DOJ court over antitrust issues.  It’s not a matter of if, it’s a matter of when.  I predict another antitrust suit will have materialized by early 2014, assuming the predictions about how UEFI secure boot works come true.  On the other hand, this issue is easily mitigated by UEFI PC makers allowing users to disable secure boot, permitting a BIOS-style boot and Linux to be loaded.  So, the antitrust suits will entirely hinge on how flexibly the PC manufacturers set up UEFI secure booting.  If both Microsoft and the PC makers have been smart about this change, UEFI secure boot can be disabled.  If not, we know the legal outcome.

Virtualization

For Windows 8, it’s likely that we’ll see more people moving to use Linux as their base OS with Windows 8 virtualized (except for gamers where direct hardware is required).  If Windows 8 is this locked down, then it’s better to lock down VirtualBox than the physical hardware.

Death Knell for Windows?

Note that should the UEFI secure boot system be as closed as predicted, this may be the final death knell for Windows and, ultimately, Microsoft.  The danger is in the UEFI secure boot system itself.  UEFI is new and untested in the mass market.  This means that not only is Windows 8 new (and we know how that goes bugwise), now we have an entirely new, untested boot system in secure boot UEFI.  This means that if anything goes wrong in this secure booting system, Windows 8 simply won’t boot.  And believe me, I predict there will be many failures in the secure booting system itself.  The reason: we are still relying on mechanical hard drives that are highly prone to partial failures.  And while solid state drives are better, they can also go bad.  So, whatever data the secure boot system relies on (i.e., decryption keys) will likely be stored somewhere on the hard drive.  If this sector of the hard drive fails, no more boot.  Worse, if this secure booting system requires an encrypted hard drive, that means no access to the data on the hard drive after failure, ever.

I’d predict there will be many failures related to this new UEFI secure boot that will lead to dead PCs.  Not only dead PCs, but PCs that offer no access to the data on their hard drives.  So people will lose everything on their computer.

As people realize this aspect of local storage on an extremely closed system, they will move toward cloud service devices to prevent data loss.  Once they realize the benefits of cloud storage, the appeal of storing things on local hard drives, and most of the reasons to use Windows 8, will be lost.  Gamers may be able to keep the Windows market alive a bit longer.  On the other hand, this is why a gaming company like Valve Software is hedging its bets and releasing Linux versions of its games.  For non-gamers, desktop and notebook PCs running Windows will be less and less needed and used.  In fact, I contend this is already happening.  Tablets and other cloud storage devices are already becoming the norm.  Perhaps not so much in the corporate world as yet, but once cloud based Office suites get better, all bets are off.  So, combined with the already trending move towards limited-storage cloud devices, closing down PC systems in this way is, at best, one more nail in Windows’ coffin.  At worst, Redmond is playing Taps for Windows.

Closing down the PC market in this way is not the answer.  Microsoft has stated it wants to be more innovative, as Steve Ballmer recently proclaimed.  Yet moves like this prove that Microsoft has clearly not changed and has no innovation left.  Innovation doesn’t have to, and shouldn’t, lead to closed PC systems and antitrust lawsuits.

How to format NTFS on MacOS X

Posted in Apple, computers, Mac OS X, microsoft by commorancy on June 2, 2012

This article is designed to show you how to mount and manage NTFS partitions in MacOS X.  Note the prerequisites below as it’s not quite as straightforward as one would hope.  That is, there is no native MacOS X tool to accomplish this, but it can be done.  First things first:

Disclaimer

This article discusses commands that will format, destroy or otherwise wipe data from hard drives.  If you are uncomfortable working with commands like these, you shouldn’t attempt to follow this article.  This information is provided as-is and all risk is incurred solely by the reader.  If you wipe your data accidentally by the use of the information contained in this article, you solely accept all risk.  This author accepts no liability for the use or misuse of the commands explored in this article.

Prerequisites

Right up front I’m going to say that to accomplish this task, you must have the following prerequisites set up:

  1. VirtualBox installed (free)
  2. Windows 7 (any flavor) installed in VirtualBox (you can probably use Windows XP, but the commands may be different) (Windows is not free)

For reading / writing to NTFS formatted partitions (optional), you will need one of the following:

  1. For writing to NTFS partitions on MacOS X: a third-party NTFS driver such as Tuxera NTFS or ntfs-3g (discussed below).
  2. For reading from NTFS: MacOS X can natively mount and read from NTFS partitions in read-only mode. This is built into Mac OS X.

If you plan on writing to NTFS partitions, I highly recommend Tuxera over ntfs-3g. Tuxera is stable and I’ve had no troubles with it corrupting NTFS volumes which would require a ‘chkdsk’ operation to fix.  On the other hand, ntfs-3g regularly corrupts volumes and will require chkdsk to clean up the volume periodically. Do not override MacOS X’s native NTFS mounter and have it write to volumes (even though it is possible).  The MacOS X native NTFS mounter will corrupt disks in write mode.  Use Tuxera or ntfs-3g instead.

Why NTFS on Mac OS X?

If you’re like me, I have a Mac at work and Windows at home.  Because Mac can mount NTFS, but Windows has no hope of mounting MacOS Journaled filesystems, I opted to use NTFS as my disk carry standard.  Note, I use large 1-2TB sized hard drives and NTFS is much more efficient with space allocation than FAT32 for these sized disks.  So, this is why I use NTFS as my carry around standard for both Windows and Mac.

How to format a new hard drive with NTFS on Mac OS X

Once you have Windows 7 installed in VirtualBox and working, shut it down for the moment.  Note, I will assume that you know how to install Windows 7 in VirtualBox.  If not, let me know and I can write a separate article on how to do this.

Now, go to Mac OS X and open a command terminal (/Applications/Utilities/Terminal.app).  Connect the disk to your Mac via USB or whatever method you wish the drive to connect.  Once you have it connected, you will need to determine which /dev/diskX device it is using.  There are several ways of doing this.  However, the easiest way is with the ‘diskutil’ command:

$ diskutil list
/dev/disk0
   #:                       TYPE NAME                 SIZE        IDENTIFIER
   0:      GUID_partition_scheme                      *500.1 GB   disk0
   1:                        EFI                       209.7 MB   disk0s1
   2:                  Apple_HFS Macintosh HD          499.8 GB   disk0s2
/dev/disk1
   #:                       TYPE NAME                 SIZE        IDENTIFIER
   0:      GUID_partition_scheme                      *2.0 TB     disk1
/dev/disk2
   #:                       TYPE NAME                 SIZE        IDENTIFIER
   0:     Apple_partition_scheme                      *119.6 MB   disk2
   1:        Apple_partition_map                       32.3 KB    disk2s1
   2:                  Apple_HFS VirtualBox            119.5 MB   disk2s2

 
Locate the drive that appears to be the size of your new hard drive.  If the hard drive is blank (a brand new drive), it shouldn’t show any additional partitions. In my case, I’ve identified that I want to use /dev/disk1.  Remember this device file path because you will need it for creating the raw disk vmdk file. Note the nomenclature above:  The /dev/disk1 is the device to access the entire drive from sector 0 to the very end.  The /dev/diskXsX files access individual partitions created on the device.  Make sure you’ve noted the correct /dev/disk here or you could overwrite the wrong drive.

Don’t create any partitions with MacOS X in Disk Utility or in diskutil as these won’t be used (or useful) in Windows.  In fact, if you create any partitions with Disk Utility, you will need to ‘clean’ the drive in Windows.

Creating a raw disk vmdk for VirtualBox

This next part will create a raw connector between VirtualBox and your physical drive.  This will allow Windows to directly access the entire physical /dev/disk1 drive from within VirtualBox Windows.  Giving Windows access to the entire drive will let you manage the entire drive from within Windows including creating partitions and formatting them.

To create the connector, you will use the following command in Mac OS X from a terminal shell:

$ vboxmanage internalcommands createrawvmdk \
-filename "/path/to/VirtualBox VMs/Windows/disk1.vmdk" -rawdisk /dev/disk1

 
It’s a good idea to create the disk1.vmdk where your Windows VirtualBox VM lives. Note, if vboxmanage isn’t in your PATH, you will need to add it to your PATH to execute this command or, alternatively, specify the exact path to the vboxmanage command. In my case, this is located in /usr/bin/vboxmanage.  This command will create a file named disk1.vmdk that will be used inside your Windows VirtualBox machine to access the hard drive. Note that creating the vmdk doesn’t connect the drive to your VirtualBox Windows system. That’s the next step.  Make note of the path to disk1.vmdk as you will also need this for the next step.

Additional notes: if the drive already has any partitions on it (NTFS or MacOS), you will need to unmount any mounted partitions before Windows can access it and before you can createrawvmdk with vboxmanage.  Check ‘df’ to see if any partitions on the drive are mounted.  To unmount, either drop the partition(s) on the trashcan, use umount /path/to/partition or use diskutil unmount /path/to/partition.  You will need to unmount all partitions on the drive in question before Windows or vboxmanage can access it.  Even one mounted partition will prevent VirtualBox from gaining access to the disk.
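As a rough example of that pre-flight check (the volume name ‘Data’ below is just an illustration; use whatever your partition is actually called), something like this should do it:

$ df -h                              # see what's currently mounted
$ diskutil unmount /Volumes/Data     # unmount a single partition by mount point
$ diskutil unmountDisk /dev/disk1    # or unmount every partition on the disk at once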

Note, if this is a brand new drive, it should be blank and it won’t attempt to mount anything.  MacOS may ask you to format it, but just click ‘ignore’.  Don’t have MacOS X format the drive.  However, if you are re-using a previously used drive and wanting to format over what’s on it, I would suggest you zero the drive (see ‘Zeroing a drive’ below) as the fastest way to clear the drive of partition information.

Hooking up the raw disk vmdk to VirtualBox

Open VirtualBox.  In VirtualBox, highlight your Windows virtual machine and click the ‘Settings’ cog at the top.

  • Click the Storage icon.
  • Click the ‘SATA Controller’
  • Click on the ‘Add Hard Disk’ icon (3 disks stacked).
  • When the ? panel appears, click on ‘Choose existing disk’.
  • Navigate to the folder where you created ‘disk1.vmdk’, select it and click ‘Open’.
  • The disk1.vmdk connector will now appear under SATA Controller

You are ready to launch VirtualBox.  Note, if /dev/disk1 isn’t owned by your user account, VirtualBox may fail to open this drive and show an error panel.  If you see any error panels, check to make sure no partitions are mounted and  then check the permissions of /dev/disk1 with ls -l /dev/disk1 and, if necessary, chown $LOGNAME /dev/disk1.  The drive must not have any partitions actively mounted and /dev/disk1 must be owned by your user account on MacOS X.  Also make sure that the vmdk file you created above is owned by your user account as you may need to become root to createrawvmdk.

Launching VirtualBox

Click the ‘Start’ button to start your Windows VirtualBox.  Once you’re at the Windows login panel, log into Windows as you normally would.  Note, if the hard drive goes to sleep, you may have to wait for it to wake up for Windows to finish loading.

Once inside Windows, do the following:

  • Start->All Programs->Accessories->Command Prompt
  • Type in ‘diskpart’
  • At the DISKPART> prompt, type ‘list disk’ and look for the drive (based on the size of the drive).
    • Note, if you have more than one drive that’s the same exact size, you’ll want to be extra careful when changing things as you could overwrite the wrong drive.  If this is the case, follow these next steps at your own risk!
DISKPART> list disk
  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           40 GB      0 B
  Disk 1    Online         1863 GB      0 B        *
  • In my case, I am using Disk 1.  So, type in ‘select disk 1’.  It will say ‘Disk 1 is now the selected disk.’
    • From here on down, use these commands at your own risk.  They are destructive commands and will wipe the drive and data from the drive.  If you are uncertain about what’s on the drive or you need to keep a copy, you should stop here and backup the data before proceeding.  You have been warned.
    • Note, ‘Disk 1’ is coincidentally named the same as /dev/disk1 on the Mac.  It may not always follow the same naming scheme on all systems.
  • To ensure the drive is fully blank type in ‘clean’ and press enter.
    • The clean command will wipe all partitions and volumes from the drive and make the drive ‘blank’.
    • From here, you can repartition the drive as necessary.

Creating a partition, formatting and mounting the drive in Windows

  • Using diskpart, here are the commands to create one partition using the whole drive, format it NTFS and mount it as G: (see commands below):
DISKPART> select disk 1
Disk 1 is now the selected disk
DISKPART> clean
DiskPart succeeded in cleaning the disk.
DISKPART> create partition primary
DiskPart succeeded in creating the specified partition.
DISKPART> list partition
  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
* Partition 1    Primary           1863 GB  1024 KB
DISKPART> select partition 1
Partition 1 is now the selected partition.
DISKPART> format fs=ntfs label="Data" quick
100 percent completed
DiskPart successfully formatted the volume.
DISKPART> assign letter=g
DiskPart successfully assigned the drive letter or mount point.
DISKPART> exit
Leaving DiskPart...

 

  • The drive is now formatted as NTFS and mounted as G:.  You should see the drive in Windows Explorer.
    • Note, unless you want to spend hours formatting a 1-2TB sized drive, you should format it as QUICK.
    • If you want to validate the drive is good, then you may want to do a full format on the drive.  New drives are generally good already, so QUICK is a much better option to get the drive formatted faster.
  • If you want to review the drive in Disk Management Console, in the command shell type in diskmgmt.msc
  • When the window opens, you should find your Data drive listed as ‘Disk 1’

Note, the reason to use ‘diskpart’ over the Disk Management Console is that you can’t use ‘clean’ in the Disk Management Console; this command is only available in the diskpart tool, and it’s the only way to completely clean the drive of all partitions to make the drive blank again.  This is especially handy if you happen to have previously formatted the drive with the MacOS X Journaled filesystem and there’s an EFI partition on the drive.  The only way to get rid of a Mac EFI partition is to ‘clean’ the drive as above.

Annoyances and Caveats

MacOS X always tries to mount recognizable removable (USB) partitions when they become available.  So, as soon as you have formatted the drive and have shut down Windows, Mac will likely mount the NTFS drive under /Volumes/Data.  You can check this with ‘df’ in Mac terminal or by opening Finder.  If you find that it is mounted in Mac, you must unmount it before you can start VirtualBox to use the drive in Windows.  If you try to start VirtualBox with a mounted partition in Mac OS X, you will see a red error panel in VirtualBox.  Mac and Windows will not share a physical volume.  So you must make sure MacOS X has unmounted the volume before you start VirtualBox with the disk1.vmdk physical drive.

Also, the raw vmdk drive is specific to that single hard drive.  You will need to go through the steps of creating a new raw vmdk for each new hard drive you want to format in Windows unless you know for certain that each hard drive is truly identical.  The reason is that vboxmanage discovers the geometry of the drive and writes it to the vmdk.  So, each raw vmdk is tailored to each drive’s size and geometry.  It is recommended that you not try to reuse an existing physical vmdk with another drive.  Always create a new raw vmdk for each drive you wish to manage in Windows.

Zeroing a drive

While the clean command clears off all partition information in Windows, you can also clean off the drive in MacOS X.  The way to do this is by using dd.  Again, this command is destructive, so be sure you know which drive you are operating on before you press enter.  Once you press enter, the drive will be wiped of data.  Use this section at your own risk.

To clean the drive use the following:

$ dd if=/dev/zero of=/dev/disk1 bs=4096 count=10000

 
This command writes 10,000 blocks of 4,096 bytes, all zeros (about 40 MB), which should overwrite the partition information at the start of the drive.  Note that GPT-partitioned disks also keep a backup partition table at the end of the drive, so the diskpart ‘clean’ command is the more thorough option.  You may not need to do this at all, as ‘clean’ is usually sufficient.

Using chkdsk

If the drive has become corrupted or is acting in a way you think may be a problem, you can always go back into Windows with the disk1.vmdk connector and run chkdsk on the volume.  You can also use this on any NTFS or FAT32 volume you may have.  You will just need to create a physical vmdk connector, attach it to your Windows SATA controller and make sure MacOS X doesn’t have it mounted.  Then, launch VirtualBox and clean it up.
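For example, inside the Windows VM, a typical check of the volume formatted earlier would look something like this (G: being the drive letter assigned above; /f tells chkdsk to fix any errors it finds):

C:\> chkdsk G: /f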

Tuxera

If you are using Tuxera to mount NTFS, once you exit out of Windows with your freshly formatted NTFS volume, Tuxera should immediately see the volume and mount it.  This will show you that NTFS has been formatted properly on the drive.  You can now read and write to this volume as necessary.

Note that this method to format a drive with NTFS is the safest way on Mac OS X.  While there may be some native tools floating around out there, using Windows to format NTFS will ensure the volume is 100% compliant with NTFS and Windows.  Using third party tools not written by Microsoft could lead to data corruption or improperly formatted volumes.

Of course, you could always connect the drive directly to a Windows system and format it that way. ;)


How not to run a business (Part 3) — SaaS edition

Posted in business, cloud computing, computers by commorancy on May 8, 2012

So, we’ve talked about how not to run a general business, let’s get to some specifics. Since software as a service (SaaS) is now becoming more and more common, let’s explore software companies and how not to run these.

Don’t add new features because you can

If a customer is asking for something new, then add that new feature at some appointed future time. Do not, however, think that the feature needs to be implemented tomorrow. On the other hand, if you have conceived of something that you think might be useful, do not spend time implementing it until someone is actually asking for it. This is an important lesson to learn. It’s a waste of time to write code that no one will actually use. So, if you think your feature has some merit, invite your existing customers to a discussion by asking them if they would find the proposed feature useful. Your customers have the final say. If the majority of your customers don’t think they would use it, scrap the idea. Time spent writing a useless feature is time wasted. Once written, the code has to be maintained by someone and is an additional waste of time.

Don’t tie yourself to your existing code

Another lesson to learn is that your code (and app) needs to be both flexible and trashable. Yes, I said trashable. You need to be willing to throw away code and rewrite it if necessary. That means, code flows, changes and morphs. It does not stay static. Ideas change, features change, hardware changes, data changes and customer expectations change. As your product matures and requires more and better infrastructure support, you will find that your older code becomes outdated. Don’t be surprised if you find yourself trashing much of your existing code for completely new implementations taking advantage of newer technologies and frameworks. Code that you may have written from scratch to solve an early business problem may now have a software framework that, while not identical to your code, will do what your code does 100x more efficiently. You have to be willing to dump old code for new implementations and be willing to implement those ideas in place of old code. As an example, usually early code does not take high availability into account. Therefore, gutting old code that isn’t highly available for new frameworks that are is always a benefit to your customers. If there’s anything to understand here, code is not a pet to get attached to. It provides your business with a point in time service set. However, that code set must grow with your customer’s expectations. Yes, this includes total ground-up rewrites.

Don’t write code that focuses solely on user experience

In software-as-a-service companies, many early designs can focus solely on what the code brings to the table for customer experience. The problem is that the design team can become so focused on writing the customer experience that they forget all about the manageability of the code from an operational perspective. Don’t write your code this way. Your company’s ability to support that user experience will suffer greatly from this mistake. Operationally, the code must be manageable, supportable, functional and must also start up, pause and stop consistently. This means, don’t write code so that when it fails it leaves garbage in tables, half-completed transactions with no way to restart the failed transactions or huge temporary files in /tmp. This is sloppy code design at best. At worst, it’s garbage code that needs to be rewritten.

All software designs should plan for both the user experience and the operational functionality. You can’t expect your operations team to become the engineering code janitors. Operations teams are not janitors for cleaning up after sloppy code that leaves garbage everywhere. Which leads to …

Don’t write code that doesn’t clean up after itself

If your code writes temporary tables or otherwise uses temporary mechanisms to complete its processing, clean this up not only on a clean exit, but also during failure conditions. I know of no languages or code that, when written correctly, cannot cleanup after itself even under the most severe software failure conditions. Learn to use these mechanisms to clean up. Better, don’t write code that leaves lots of garbage behind at any point in time. Consume what you need in small blocks and limit the damage under failure conditions.
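As a minimal sketch of what “cleaning up after itself” can look like at the shell level (the file name and the job body here are hypothetical), a trap handler removes scratch files even when the script dies partway through:

#!/bin/sh
# Remove the temporary file on normal exit, Ctrl-C or termination.
TMPFILE=$(mktemp /tmp/myjob.XXXXXX)       # hypothetical scratch file
trap 'rm -f "$TMPFILE"' EXIT INT TERM
# ... real work happens here, writing intermediate data to "$TMPFILE" ...
exit 0                                    # the trap still fires and deletes the file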

Additionally, if your code needs to run through a series of processing steps, checkpoint those steps. That means saving the checkpoint somewhere, so that if you fail at step 3 of 5, another process can come along, continue at step 3 and move forward. Leaving half-completed transactions exposes your customers to user experience problems. Always make sure your code can restart after a failure at the last checkpoint. Remember, user experience isn't limited to a web interface…
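
Here's one hedged way to do that, assuming a simple JSON file as the checkpoint store (a database row or queue entry works just as well):

```python
import json
import os

CHECKPOINT_FILE = "job_checkpoint.json"  # hypothetical checkpoint store

def load_checkpoint():
    # Returns the number of the last step that finished successfully (0 = none).
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as fh:
            return json.load(fh).get("last_completed_step", 0)
    return 0

def save_checkpoint(step):
    with open(CHECKPOINT_FILE, "w") as fh:
        json.dump({"last_completed_step": step}, fh)

def run_pipeline(steps):
    start = load_checkpoint()
    for index, step in enumerate(steps, start=1):
        if index <= start:
            continue  # already completed in a previous run
        step()
        save_checkpoint(index)  # a crash during step 3 means the next run resumes at step 3
    if os.path.exists(CHECKPOINT_FILE):
        os.remove(CHECKPOINT_FILE)  # all steps done; clean up after ourselves
```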

Don’t think that the front end is all there is to user experience

One of the mistakes that a lot of design teams fall into is thinking that the user experience is tied only to the way the front end behaves. Unfortunately, this design approach has failure written all over it. Operationally, the back end processing is as much a part of the user experience as the front end interface. Sure, the interface is what the user sees and how the user interacts with your company's service. At the same time, what the user does on the front end directly drives what happens on the back end. Since your service is likely multiuser, each user's work needs its own allocation of back end resources to complete their requests. Designing the back end to process user requests serially will lead to backups when you have 100, 1,000 or 10,000 users online.

It's important to design both the front end experience and the back end processing to support a fully scalable multiuser experience. Most operating systems today are fully capable of multitasking with both multiprocess and multithreaded support. So, take advantage of these features and run your users' processing requests concurrently, not serially. Even better, make sure they can scale properly.
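
As a rough sketch (Python, with a hypothetical handle_request worker), a worker pool processes each user's request concurrently while still capping total resource use:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical worker: each user's request gets its own unit of back end work
# instead of waiting in a single serial queue.
def handle_request(user_request):
    # ... fetch data, run the report, write the result ...
    return f"completed {user_request}"

def process_requests(user_requests, max_workers=16):
    # max_workers is a deliberate cap, not "unlimited" -- tune it to your hardware.
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(handle_request, req): req for req in user_requests}
        for future in as_completed(futures):
            results.append(future.result())
    return results
```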

Don’t write code that sets no limits

One of the most damaging things you can do for user experience is to tell your customers there are no limits in your application. As soon as those words leave your lips, someone will be on your system testing that statement: first by seeing how much data it takes before the system breaks, then by stating that you are lying. Bad from all angles. The takeaway here is that all systems have limits: disk capacity, disk throughput, network throughput, network latency, the unpredictability of the Internet itself, database limits, process limits and so on. There are limits everywhere in every operating system, every network and every application. You can't state that your application offers unlimited capabilities without that being a lie. Eventually, your customers will hit a limit and you'll be standing there scratching your head.

No, it's far simpler not to make this statement. Set quotas, set limits, set the expectation that data sets perform best when they remain within a certain range. Customers are actually much happier when you give them realistic limits and set their expectations appropriately. Far-fetched claims leave your company open to problems. Don't do this.
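
A minimal sketch of what "set quotas" can look like in code (the figures and names here are placeholders, not recommendations):

```python
# Hypothetical quota check: warn or reject before the customer slams into a
# hard platform limit, rather than promising "unlimited" storage.
QUOTA_BYTES = 50 * 1024**3   # published per-account limit (example figure)
SOFT_WARNING_RATIO = 0.8     # surface a warning in the UI at 80% usage

class QuotaExceeded(Exception):
    pass

def check_quota(current_usage_bytes, upload_size_bytes):
    projected = current_usage_bytes + upload_size_bytes
    if projected > QUOTA_BYTES:
        raise QuotaExceeded("Upload would exceed the account's storage quota.")
    if projected > QUOTA_BYTES * SOFT_WARNING_RATIO:
        return "warn"  # let the front end set expectations before the limit is hit
    return "ok"
```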

Don’t rely on cron to run your business

Ok, so I know some people will say, why not? Cron, while a decent scheduling system, isn't without its share of problems. One of its biggest problems is that its smallest level of granularity is once per minute; if you need something to run more frequently than every minute, you are out of luck with cron. Cron also relies on hard-coded crontab entries and scripts placed in specific directories in order to function. Cron has no API, and it exposes no statistics other than what you can dig out of log files. Note, I'm not hating on cron. Cron is a great system administration tool and has a lot going for it when used for relatively infrequent administrative tasks. It's just not designed for heavy, mission-critical load. If you're doing distributed processing, you will need a way to launch work in a decentralized fashion anyway, so cron likely won't work in a distributed environment. Cron also has a propensity to stop working internally while leaving itself in the process list, so monitoring systems will think it's working when it's not actually launching any tasks.

If you're a Windows shop, don't rely on the Windows Task Scheduler to run your business either. Why? The original Task Scheduler shipped bundled with Internet Explorer (IE), and changes to IE or the operating system can ripple into it. Considering the frequency with which Microsoft releases updates to both the operating system and IE, you'd be wise to find another scheduler that is not likely to be impacted by Microsoft's incessant need to modify the operating system.

Find or design a more reliable scheduler that works in a scalable, fault-tolerant way.
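
For sub-minute work on a single host, even a simple in-process loop can outdo cron. This hedged Python sketch runs a task every ten seconds, logs each run for monitoring, and keeps going when an individual run fails; genuinely distributed, fault-tolerant scheduling still calls for a real job queue or scheduler service:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)

def run_forever(task, interval_seconds=10):
    # Runs `task` at sub-minute granularity, something cron cannot do.
    while True:
        started = time.monotonic()
        try:
            task()
            logging.info("task completed")
        except Exception:
            # A single failure is logged and retried on the next interval
            # instead of silently killing the scheduler.
            logging.exception("task failed; retrying on the next interval")
        elapsed = time.monotonic() - started
        time.sleep(max(0, interval_seconds - elapsed))
```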

Don't rely on monitoring systems (or your operations team) to find every problem, or to find it quickly

Monitoring systems are designed by humans to find problems and alert. Monitoring systems are, by their very nature, reactive. This means that monitoring systems only alert you AFTER they have found a problem, never before. Worse, most monitoring systems only alert after multiple consecutive checks have failed. This means that not only is the service down, it's probably been down for 15-20 minutes by the time the system alerts. By then, your customers may have already noticed that something is wrong.

Additionally, to monitor any given application feature, the monitoring system needs a window into that specific feature. For example, monitoring Windows WMI components or Windows message queues from a Linux monitoring system is nearly impossible; Linux has no native components for accessing the Windows WMI system or Windows message queues. A third-party monitoring system with an agent process on the Windows host may be able to access WMI, but it may not.

Always design your code to provide a window into critical application components and functionality for monitoring purposes. Without such a window, these applications can be next to impossible to monitor. Better yet, design using standardized components that work across all platforms instead of relying on platform-specific components, or choose a single platform for your business environment and stick with that choice. Note that it is not the responsibility of the operations team to find windows to monitor; it's the application engineering team's responsibility to provide them.
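
One hedged example of such a window (Python standard library only; the state fields and thresholds are placeholders) is a tiny HTTP health endpoint that any monitoring system on any platform can poll:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical application state, updated elsewhere by the running service.
APP_STATE = {"queue_depth": 0, "last_batch_ok": True}

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_response(404)
            self.end_headers()
            return
        healthy = APP_STATE["last_batch_ok"] and APP_STATE["queue_depth"] < 1000
        body = json.dumps({"healthy": healthy, **APP_STATE}).encode()
        # 200 when healthy, 503 when not -- easy for any monitoring check to interpret.
        self.send_response(200 if healthy else 503)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```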

Don’t expect your operations team to debug your application’s code

Systems administrators are generally not programmers. Yes, they can write shell scripts, but they don't write application code. If your application is written in PHP or C or C++ or Java, don't expect your operations team to review, debug or even understand your application's code. Yes, they may be able to read some Java or PHP, but their job is not to write or review your application's code. Systems administrators are tasked with managing the operating systems and components, that is, with making sure the hardware and operating system are healthy so the application can function and thrive. They are not tasked with writing or debugging your application's code. Debugging the application is a task for your software engineers. Yes, a systems administrator can find bugs and report them, just as anyone can; determining why a bug exists is your software engineers' responsibility. If you expect your systems administrators to understand your application's code at that level of detail, they are no longer systems administrators, they are software engineers. Keeping job roles separate is important in keeping your staff from becoming overloaded with unnecessary tasks.

Don’t write code that is not also documented

This is a plain and simple programming 101 issue. Your software engineers' responsibility is to write robust code, but also to document everything they write. That's their job responsibility and should be part of their job description. If they do not, cannot or are unwilling to document the code they write, they should be put on a performance review plan and, without improvement, walked to the door. Without documentation, reverse engineering their code can take new personnel weeks. Documentation is critical to your business's continued success, especially when personnel change. Think of this like you would disaster recovery: if you suddenly no longer had your current engineers available and had to hire all new engineers, how quickly could the new engineers understand your application's code well enough to release a new version? This ends up being a make-or-break situation, and documentation is the key.

Thus, documentation must be part of any engineer's responsibility when they write code for your company. Code review by management is equally important to ensure that the code not only seems reasonable (i.e., no gotos), but is fully documented and attributed to its author. Yes, the author's name and the date the code was written should be included in comments surrounding each section of code. All languages provide ways to comment within the code; require your staff to use them.
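
A small sketch of what this can look like in practice (the module, author and figures are invented placeholders, not a prescription):

```python
# Module: billing/prorate.py
# Author: J. Doe (example attribution)
# Date:   2011-06-14 (example date)
# Purpose: prorate a subscription charge for a partial billing period.

def prorate(monthly_rate, days_used, days_in_month):
    """Return the prorated charge for a partial month.

    Args:
        monthly_rate: full price for one month of service.
        days_used: number of days the service was active.
        days_in_month: total days in the billing month.
    """
    return round(monthly_rate * days_used / days_in_month, 2)
```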

Don’t expect your code to test itself or that your engineers will properly test it

Your software engineers are far too close to the code to determine whether it works correctly under all scenarios. Plain and simple, software doesn't test itself. Use an independent quality testing group to ensure that the code performs as expected based on the design specifications. Yes, always test against the design specifications. Clearly, your company should have a road map of features and exactly how those features are expected to perform, and those features should be driven by customer requests. Your quality assurance team should have a list of all new features going into each release so they can write thorough test cases well in advance. Then, when the code is ready, they can put the release candidate into the testing environment and run through their test cases. As I said, don't rely on your software engineers to provide this level of test coverage. Use a full quality assurance team to review and sign off on the test cases and to ensure that the features work as defined.
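
For illustration, a hedged sketch of spec-driven test cases (written for pytest, reusing the hypothetical prorate module from the documentation example above):

```python
# QA test cases derived from the written design specification,
# not from the engineer's own assumptions about the code.
from billing.prorate import prorate  # assumed module from the earlier sketch

def test_full_month_charges_full_rate():
    # Spec: a full month of usage is billed at the full monthly rate.
    assert prorate(30.00, 30, 30) == 30.00

def test_half_month_charges_half_rate():
    # Spec: usage is prorated by the day.
    assert prorate(30.00, 15, 30) == 15.00

def test_zero_days_charges_nothing():
    # Spec: no usage, no charge.
    assert prorate(30.00, 0, 30) == 0.00
```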

Don’t expect code to write (or fix) itself

Here's another one that should be self-explanatory. When a feature comes along that needs to be implemented, don't expect the code to spring up out of nowhere. You need competent technical people who fully understand the design to write the code for any new feature. But just because an engineer has written code doesn't mean the code actually implements the feature. Always have test cases ready to ensure that the implemented feature actually performs the way it was intended.

If the code doesn't do what it's supposed to once implemented, it obviously needs to be rewritten until it does. If the written code doesn't match the requested feature, either the engineer didn't understand the requested feature well enough to implement it correctly, or the feature wasn't documented well enough before being sent to the engineering team. Always document features completely, with pseudo-code if necessary, before sending them to engineering to write actual code. If you're using an agile engineering approach, review progress frequently and test the feature along the way.

Additionally, if the code doesn't work as expected and is rolled to production broken, don't expect that code to magically start working or assume that the production team has some kind of magic wand to fix the problem. If it's a coding problem, it's a software engineering task to resolve. Whether or not the production team (or even a customer) manages to find a workaround is irrelevant to actually fixing the bug. If a bug is found and documented, fix it.

Don’t let your software engineers design features

Your software engineers are there to write code for features derived from customer feedback. Don't let them write code for features not on the current road map. That's a waste of time and it doesn't help get your newest release out the door. Make sure your software engineers remain focused on the set of features destined for the next release; focusing on anything else could delay that release. If you want to stick to a specific release date, always keep your engineers focused on the features destined for that release. Of course, fixing bugs from previous releases is also a priority, so make sure they have enough time to work on these while still coding the newest release. If you have the manpower, focus some people on bug fixing and others on new features. If the code is documented well enough, a separate bug-fixing team should have no difficulty creating patches for bugs in the current release.

Don’t expect to create 100% perfect code

So, this one almost goes without saying, but it does need to be said: nothing is ever bug free. This section is here to illustrate why you need to design your application using a modular patching approach. It goes back to operational manageability (as stated above). Design your application so that code modules can be drop-in replaced easily while the code is running. This means that the operations team (or whoever is tasked with patching) simply drops a new code file in place, tells the system to reload, and within minutes the new code is operating. Modular drop-in replacement while running is the only way to prevent major downtime (assuming the code is fully tested). As a SaaS company, you should always design your application with high availability in mind. Full code releases, on the other hand, should follow a separate installation process from drop-in replacement, although if you would like to utilize the dynamic patching process for more agile releases, that is definitely an encouraged design feature. The more manageability and rapid deployment you design into your code for the operations team, the fewer operations people you need to manage and deploy it.
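
A hedged sketch of what drop-in patching can look like for an interpreted service (Python here; the reports module and the SIGHUP reload convention are placeholders for whatever your application actually uses):

```python
import importlib
import signal
import time

import reports  # hypothetical plug-in module shipped as a drop-in patch file

# Operations replaces reports.py on disk, then signals the running service to
# reload it -- no full restart, no downtime for a tested one-module patch.
def handle_reload(signum, frame):
    global reports
    reports = importlib.reload(reports)

if __name__ == "__main__":
    signal.signal(signal.SIGHUP, handle_reload)  # e.g. kill -HUP <pid> after dropping in the file
    while True:
        # ... serve requests using the (possibly reloaded) reports module ...
        time.sleep(1)
```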

Without the distraction of long, involved release processes, the operations team can focus on hardware design, implementation and general growth of the operations processes. The more distractions your operations team has with bugs, bug fixes, patches and general code-related issues, the less time they have to spend on the infrastructure side making your application perform its best. The operations team also has to keep up with operating system patches, software releases, software updates and security issues that may affect your application or the security of your users' data.

Don’t overlook security in your design

Many people who write code do so to implement a feature without any thought to security. I'm not necessarily talking about blatantly obvious things like using logins and passwords to get into your system (although if you don't have this, you need to add it; logins are required if you want multiple users on your system at once). No, I'm discussing the more subtle but damaging security problems such as cross-site scripting and SQL injection attacks. Always have your site's code thoroughly tested against a suite of security tools prior to release, and fix any security problems revealed before rolling that code out to production. Don't wait until the code rolls to production to fix security vulnerabilities. If your quality assurance team isn't testing for security vulnerabilities as part of the QA sign-off process, then you need to rethink and restructure your QA testing methodologies. Otherwise, you may find yourself becoming the next Sony PlayStation Store headline on Yahoo News or CNN. You don't really want this type of press for your company, and you don't want your company to be known for losing customer data.
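
On the SQL injection point specifically, the fix is almost always parameterized queries. A minimal sketch (Python, with sqlite3 standing in for whatever database driver you actually use):

```python
import sqlite3  # stand-in driver for illustration

def find_user(conn, username):
    # Parameterized query: the driver treats the value as data, not SQL,
    # so input like "alice'; DROP TABLE users;--" cannot alter the statement.
    cursor = conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    )
    return cursor.fetchone()

# Never build the statement by string concatenation:
# conn.execute("SELECT ... WHERE username = '" + username + "'")  # injectable
```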

Additionally, you should always store user passwords and other sensitive user data in a salted, one-way hashed form. You can store the last 4 digits of a social security number or account number in clear text, but do not store the whole number in plain text, with two-way (reversible) encryption, or in a form that is easily cracked (such as an unsalted MD5 hash). Always use a reasonably strong, salted one-way hashing algorithm to store sensitive data. If you need to use that data, this approach requires the user to enter the whole string to unlock whatever it is they are trying to access.
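
A minimal sketch using only the Python standard library (a dedicated library such as bcrypt or argon2 is preferable where available; the iteration count here is illustrative):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative; tune upward as hardware improves

def hash_password(password):
    # A fresh random salt per password defeats precomputed (rainbow table) attacks.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; neither reveals the password

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, stored_digest)
```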

Don’t expect your code to work on terabytes of data

If you're writing code that manages SQL queries or, more specifically, constructs SQL queries from some kind of structured input, don't expect your query to return quickly when run against gigabytes or terabytes of data, thousands of columns, or billions of rows or more. Test your code against large data sets. If you don't have a large data set to test against, you need to find or build one. Plain and simple: if you can't replicate your biggest customers' environments in your test environment, then you cannot test all the edge cases in the code that was written. SQL queries pay heavy penalties against large data sets because of explain plans and the statistics tables that must be built. If you don't test your code, you will find that those statistics are not built the way you expect and the query may take 4,000 seconds instead of 4 seconds to return.
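
One hedged habit that helps: inspect the query plan against a realistically sized test copy before the generated query ever reaches production. Shown here with SQLite's EXPLAIN QUERY PLAN; MySQL and PostgreSQL have their own EXPLAIN variants, and the database and table names are placeholders:

```python
import sqlite3

def explain(conn, query, params=()):
    # Prefixing the generated query with EXPLAIN QUERY PLAN returns the plan
    # rows instead of executing the query itself.
    plan = conn.execute("EXPLAIN QUERY PLAN " + query, params).fetchall()
    for row in plan:
        print(row)
    return plan

conn = sqlite3.connect("large_test_copy.db")  # test copy seeded with production-scale data
explain(conn,
        "SELECT * FROM orders WHERE customer_id = ? AND created_at > ?",
        (42, "2011-01-01"))
# A full table scan in the plan on a billion-row table is your 4,000-second query.
```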

Alternatively, if you're working with very large data sets, it might be worth exploring technologies such as Hadoop and Cassandra instead of traditional relational databases like MySQL. Hadoop and Cassandra are NoSQL implementations, however, so you largely forfeit structured queries for retrieving the data; in exchange, very large data sets can often be randomly read and written much faster than with SQL ACID database implementations.

Don’t write islands of code

You would think that in this day and age people would understand how frameworks work. Unfortunately, many people don't, and they continue to write code that isn't library or framework based. Let's get you up to speed on this topic. Instead of writing little disparate islands of code, roll the code up into shared frameworks or shared libraries. This allows other engineers to use and reuse that code in new ways. If it's a new feature, it's possible that another, unrelated bit of code may need to pull some data from an earlier feature. Frameworks are a great way to ensure that reusing code is possible without reinventing the wheel or copying and pasting code all over the place. Reusable libraries and frameworks are the future. Use them.

Of course, these libraries and frameworks need to be fully documented, with specifications of their calls, before they can be reused by other engineers in other parts of the code. So, documentation is critical to code reuse. Better yet, object-oriented programming allows not only reuse but inheritance: you can inherit from a base class and add your own custom extensions to expand its usefulness.
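
A small hedged illustration of that reuse-plus-inheritance idea (the class names are invented for the example):

```python
# Hypothetical shared library class and a feature-specific extension of it.
class ReportExporter:
    """Base exporter shared across the application (documented for reuse)."""

    def rows(self):
        # Subclasses supply their own data rows.
        raise NotImplementedError

    def export(self):
        # Shared CSV-style formatting logic lives in one place.
        return "\n".join(",".join(str(v) for v in row) for row in self.rows())

class InvoiceExporter(ReportExporter):
    """Inherits the shared export logic; only supplies its own data."""

    def __init__(self, invoices):
        self.invoices = invoices

    def rows(self):
        for inv in self.invoices:
            yield (inv["id"], inv["total"])

# Usage: InvoiceExporter([{"id": 1, "total": 99.95}]).export()
```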

Don’t talk and chew bubble gum at the same time

That is, don't try to be too grandiose in your plans. Your team has limited time between the start of a development cycle and the roll out of a new release. Make sure that your feature set is compatible with this deadline. Sure, you can throw in everything including the kitchen sink, but don't expect your engineering team to deliver on time or, if they do manage to deliver, that the code will work half as well as you expect. Instead, pare your feature sets down to manageable chunks. Then, group the chunks into releases throughout the year. Set the expectation that you want a certain feature set in a given release, but make sure that feature set is attainable in the time allotted with the number of engineers you have on staff. If you have a team of two engineers and a development cycle of one month, don't expect them to implement hundreds of complex features in that time. Be realistic, but at the same time, know what your engineers are capable of.

Don’t implement features based on one customer’s demand

If someone made a sales promise to deliver a feature to one, and only one, customer, you've made a serious business mistake. Never promise an individual feature to an individual customer. While you may be able to retain that customer by implementing that feature, you will run yourself and the rest of your company ragged trying to fulfill the promise. Worse, that customer has no loyalty to you. Even if you expend a 2-3 week, day-and-night coding frenzy to meet the customer's requirement, the customer will not be any more loyal to you after you have released the code. Sure, it may make the customer briefly happy, but at what expense? You likely won't keep them as a customer much longer. By the time you've gotten to this level of desperation with a customer, they are likely already on their way out the door, so these crunch requests are usually last-ditch efforts at customer retention and customer relations. Worse still, the company runs itself ragged trying desperately to roll out this new feature while almost completely ignoring all other customers and projects needing attention, yet these harried features end up as such customized one-offs that no other customer can even use them without a major rewrite. So, the code is effectively useless to anyone other than the requesting customer, who is likely within inches of terminating their contract. Don't do it. If your company gets into this desperation mode, you need to stop and rethink your business strategy and why you are in business.

Don’t forget your customer

You need to hire a high quality sales team that is attentive to customer needs. But, more than this, they need to periodically talk to your existing clients on customer-relations terms: ask the right questions and determine whether the customer is happy with the services. I've seen many cases where a customer appears completely happy with the services when, in reality, they have either been shopping around or have been approached by a competitor and wooed away with a better deal. You can't assume that any customer is so entrenched in your service that they won't leave. Instead, your sales team needs to take a proactive approach and reach out to customers periodically to get feedback, determine needs and ask if they have any questions regarding their services. If a contract is within 3 months of renewal, the sales team needs to be on the phone discussing renewal plans. Don't wait until a week before the renewal to contact your customers. By a week out, it's likely that the customer has already been approached by the competition and it's far too late to participate in any vendor review process. You need to know when the vendor review process happens and always submit yourself to that process for continued business consideration from that customer. Just because a customer has a current contract with you does not make you a preferred vendor, which is why it's important to contact your customer and ask when the vendor review process begins. Don't blame the customer when you weren't included in a vendor review and purchasing process; it's your sales team's job to find out when vendor reviews commence.

Part 2 | Chapter Index | Part 4
