Random Thoughts – Randocity!

How not to run a business (part 10): Undermining Your Business

Posted in botch, business by commorancy on May 31, 2015

There are lots of subtle actions you can take that unknowingly undermine your business success. Let’s explore.

Don’t fire a position multiple times in a row

While this somewhat depends on the position, it’s never a good sign when you must fire staff from the same position over and over. Even if you’re disenchanted with the people you’ve hired to do the work, firing the same position more than three times says more about your needs than about the people. Clearly, what you need from the position and the type of people you are hiring are mismatched. You need to rethink the job requirements and hire the correct talent to fit that role. When you fire leaders in this way (i.e., VPs, SVPs or C-level execs), the situation is even more detrimental to your business.

How will this undermine your business? Firing a position more than three times says several things. First, it says you don’t know what you’re looking for. Second, among all of those people you’ve hired and fired, word will get around, and competent candidates won’t even consider the position. The most talented pools are relatively small and they do talk to one another, so firing a position multiple times means word will spread not to hire on at that company. Once that happens, it is never a good thing for your business. The higher the profile of the position, the faster the word spreads, and this will severely undermine your ability to acquire top-end talent.

Additionally, the word will get around in the recruiter community, and recruiters will likewise choose not to place talent at your company.

Don’t choose an industry without researching its requirements

When you’re setting up your business plan, you need to thoroughly research the industry your business will be in. For example, selling a product or service to the medical industry is profoundly different than selling it to marketing teams, which is also profoundly different than selling your wares to the government or doctors or lawyers or farmers.

How will this undermine your business? If you fail to properly research the types of clients you are working to obtain, you won’t understand their demands and requirements. Every industry is bound by laws and regulations (some industries more than others). If you fail to realize that the industry you are targeting requires strict compliance with laws, you may also fail to understand how those laws apply to your business, and your company’s compliance may be unattainable and far too costly. If you can’t comply with the laws, you may never land the deals on which your business depends. Yes, for many companies, vendors must comply with certain laws, security requirements and industry standards before the company will agree to close a deal. Failing to research these requirements may undermine your ability to remain in business. Or, it may relegate your business to smaller companies needing much smaller deals.

Don’t underestimate the power of word of mouth

Sites like Glassdoor exist for a reason. Before Glassdoor, there was no transparency except by word of mouth. Now, there is.

How will this undermine your business? Before Glassdoor, recruiters made the determination of where to place candidates. Recruiters won’t choose to place prospective employees in toxic environments; that is, environments where firings are common, turnover is high and employee morale is extremely low. They won’t place anyone there for a very good reason… they know the new employee won’t stay long enough for the recruiter to collect their commission. Recruiters only get paid their commission if the placed employee stays for a specified duration (3-6 months, depending on the placement contract). Recruiters, therefore, will not place candidates into what they know will be a short-lived role. It also likely means that when you do find recruiters who will work with your hiring needs, they are probably unaware of the problem. Once they make a placement and realize they’re making no money, don’t expect to hear from them again.

Additionally, with sites like Glassdoor, employees can remain anonymous and be brutally honest about their experiences at your business. This can also undermine your ability to hire. Sites like Twitter and Facebook just compound the problem. As word gets around and your business’s reputation becomes tainted, you’ll find it nearly impossible not only to attract good talent, but to retain it. Word of mouth in an industry is as good as gold to a job candidate, but it can be poison for a business. You need to make sure your word of mouth is always high-quality praise, never negative backlash. If you choose to ignore word of mouth, it will be to the detriment of your company.

Don’t skimp on employee perks

Employees spend 8 or more hours of their day working for you (not to mention commute time). Morale is a big part of that work day. If morale sinks low, your employees will head for the exits. Perks help keep employee morale up. Choosing the right morale boosters is critically important to employee retention.

How will this undermine your business? The Shining said it best: “All work and no play makes Jack a dull boy”. I’m not recommending that you let your employees play all day, but offering employees a place to relax when they’re on a break is important. You also need to understand what other companies in your vicinity are offering their employees. While I understand that every company can’t offer all of the same perks, you need to offer at least some of them. Not offering perks to your employees is tantamount to telling would-be employees and recruiters, “Don’t place people here”. Word of mouth spreads, once again, and you’ll find it hard to hire strong candidates because the perks are better somewhere else.

Secondarily, this goes back to several of the previous Don’ts. A lack of perks, combined with a problematic industry and poor morale leading to negative word of mouth, can put your business in a tough hiring spot. You could find it nearly impossible to hire staff across the board. What you’re left with are people who are not at the top end of the hiring pool, but in the middle to low end. Once your business is forced to hire lesser-qualified staff, your business will tank.

Perks include things like free food, subsidized or free daycare, and subsidized or free transportation to and from work. Other perks can include tuition reimbursement, travel discounts and store discounts. When you skimp on perks while other companies don’t, you will find it difficult to hire and keep talent, especially top-end talent.

Don’t assume you can live without top end talent

Without top end talent, your business is doomed to mediocrity.

How will this undermine your business? Once you hit the mediocrity stage, your customers will leave you one by one. They will realize you aren’t providing the quality service you promised. Oh, you’ll still get some new signups, until they too realize the mediocrity of your business. Competition is fierce and it’s guaranteed that your business will have competition. If your competitors are doing it better than you, your product or service offering will end up behind all the others. It’s important to understand that top-end talent drives your business forward. Low to mid-level talent keeps the status quo. Top-end talent provides innovation; low to mid-level talent doesn’t.

Once you understand this fundamental distinction between the levels of talent, you will understand why you pay top dollar to have top end talent. And note, I’m not necessarily talking about top-end talent in Director, VP, SVP or C-Level positions. While it helps to have top-end talent there, those positions do not typically get the work done day to day. It’s those doing the hands-on day-to-day work that are driving your business forward. You want motivated self-starters who are willing to own the work that they do. Low to mid-level talent won’t actually own their work. Ownership of work is critically important when looking for talented staff to hire.

Note that some industries are harder to hire for than others. If you choose, for example, to open a business that does email marketing, you’ll find it very difficult to hire into this industry no matter what the position. Most technical people understand spam and realize they don’t want their resume to contain anything to do with a ‘spam related’ business.

Don’t ignore the value of social media

Social media offers a brand new marketing approach: a grassroots marketing team at your disposal. For example, if you can get your business placed onto certain people’s Facebook or Twitter feeds, this can drive lots of people to your business.

How will this undermine your business? If you fail to understand the power of social media, either by ignoring it or by assuming it is pointless, you have deluded yourself, and this immediately undermines your business. Every marketing technique is appropriate and should be exploited to its fullest. Interactive marketing is even better. If you hire people to actively scout Twitter and Facebook, they can write comments that counter any negative feedback. By hiring a team of people to manage all social media outlets, you can head off problems before they even start. Having an active team countering Twitter or Facebook posts shows your business is proactive and willing to counteract negative postings by disgruntled customers.

Don’t ignore the power of mobile marketing

Smartphones are now ubiquitous, yet this technology as a marketing platform is still in its infancy. While it is, you can latch onto it early and get an edge over your competition. Companies that embrace mobile marketing now will have the upper hand when marketing on these devices becomes commonplace (and when email is ultimately dead).

How will this undermine your business? If you fail to understand that mobile marketing is the future, you will also fail to understand how quickly you can reach your customers, with the immediacy of a phone call. Email, for example, is a slow mechanism by nature; it can take anywhere from 5 minutes to several days for people to read your email. With push notifications, your marketing reaches the user instantaneously. They can then go find out what the hubbub is all about within seconds. Nearly every push notification is read immediately. Emails take far, far longer and are prone to spam blockers, image blockers and link blockers. Mobile marketing, at least today, is under no such blocking constraints. If you take advantage early, you gain a significant marketing edge for your business.

To take maximum advantage, however, you need a solid and useful app: one you can direct push notifications to and deliver critical information through. If you’re a retail business, for example, offering coupon discounts, such as 20% off a product, can be the difference between a purchase and no purchase. Discounts are always a good idea when done on a regular schedule, but not too frequently.

Mobile marketing needs to be relevant, targeted and location-based. You need to know exactly what each customer’s interests are and provide them with spot-on marketing that gives them exactly what they need, when they need it. For location-based marketing, if a customer is close to one of your stores, you should immediately send a push notification about any special offers at that store.

Don’t change upper management every year

Or.. even every other year. This can be a hard one to accomplish, depending on lots of factors, and getting it wrong can cause severe morale problems.

How will this undermine your business? Whether it’s through firings, so-called ‘voluntary’ severance (aka The Velvet Hammer), people quitting, demotions, promotions or lateral moves, frequent departures in the upper management team create more questions than answers for employees and, at a public company, stockholders. Stability in the upper management team (at least 4-5 years at a time) speaks volumes about loyalty and stability, and lets employees actually recognize their leadership. Changing this team frequently says something is wrong internally. Not only do employees begin questioning what’s going on, these changes plant seeds of “maybe I need to seek employment elsewhere”. Operating a musical-chairs management team will undermine your ability to operate your business. It takes at least 6 months for any new hire to find their feet in a position. Changing critical management positions often means the person in the position never has time to understand what they need to get done. Worse, just about the time they’re ready to get something done, they’re being shown the door.

You can’t operate your business with a constant stream of new people in critical management positions. If you aren’t absolutely sure the person is the right person, don’t hire them. It’s far better to leave the position vacant for just the right person than it is to fill it with a person you know won’t work out. Additionally, if you can’t ever find someone to fill the role to your satisfaction, perhaps you’re looking for answers in the wrong place. You might want to start by looking at yourself and your expectations of the role. If you can’t clearly define the expectations of that management role, don’t expect anyone you hire to magically gain this understanding and define it for you. That will never happen.

Part 9 | ↓ Parts 10.1, 10.2, 10.3, 10.4 | Chapter Index | Part 11


Rant Time: You gotta hate Lollipop

Posted in Android, botch, business by commorancy on May 27, 2015

You know, I can’t understand the predilection for glaring white backgrounds and garish bright colors on a tablet. In comes Lollipop, trying to act all like iOS and failing miserably at it. OMG, Lollipop has to be one of the most garish and horrible UIs to come along in a very long time. Let’s explore.

Garish Colors on Blinding White

Skeuomorphism had its place in the computer world. Yes, it was ‘old timey’ and needed to be updated, but to what exactly? One thing can be said: skeuomorphism was at least easy on the eyes. But Lollipop, with its white backgrounds and horrible teals, pinks and oranges? Really? This is considered ‘better’? Sorry, but no. A thousand times, no. As a graphic designer and artist, I consider this one of the worst UI choices for handheld devices.

If, for example, the engineers actually used the light sensor on the damned things to determine when the room is dark and then changed the UI to something easier on the eyes, I’d be all over that. But, nooooooo. You’re stuck with these stupid blinding white screens even when the room is pitch black. So there you are with your flashlight lighting up your face, all while trying to use your tablet. I mean, how stupid are these UI designers? You put light sensors on it… use them.

Stupid UI Designers?

Seriously, I’ll take skeuomorphism over these blazing white screens any day. I mean, seriously? Who in their right mind thought this in any way looked good? Why rip a page from Apple’s horrible design book when you don’t have to? I’ll be glad when Lollipop is a thing of the past and Google has decided to blaze its own UI trail. No, Google, you don’t need to follow after Apple.

Just because some asinine designer at Apple thinks this looks good doesn’t mean that it actually does. Get rid of the white screens. Let’s go back to themes so we can choose the way we want our systems to look. Blaze your own path and give users the choice of the look of their OS. Choice is the answer, not forced compliance.

Smaller and Smaller

What’s with the smaller and smaller panels and buttons all of a sudden? At first the pull-down was large and fit nicely on the screen. The buttons were easy to touch and the sliders easy to move. Now it’s half the size, with the buttons and sliders nearly impossible to grab and press. Let’s go back to sizing buttons so they are finger-friendly on a tablet, mkay? The notification pulldown has been reduced in size for no apparent reason. Pop-up dialogs are half the size, and the buttons and sliders on them are twice as hard to hit with a finger.

Google, blaze your own path

Apple has now become the poster child for how not to design a UI. You don’t want to rip pages from their book. Take your UI designers into a room and let them come up with ideas that are unique to Google and Android. Don’t force them to use a look and feel from an entirely different company, built on ideas that are outright horrible.

Note, I prefer dark or grey backgrounds. They are much easier on the eyes than blazing white backgrounds. White screens are great for only one thing, lighting up the room. They are extremely hard on the eyes and don’t necessarily make text easier to read.

Google, please go back to blazing your own trail separately from Apple. I’ll be entirely glad when this garish-colors-on-white fad goes the way of the Pet Rock, and it won’t be soon enough. Once this stupid trend is finally gone, I’ll be shouting good riddance from the top of the Los Altos hills. For now, dayam Google, get it together will ya?


Rant Time: No Survey For You

Posted in best practices, botch, business by commorancy on May 17, 2015

While I understand the need to ask for surveys or ratings after a purchase or after talking to a sales or service rep, give us a friggin’ break from the constant hounding. Flat out, I am not doing them any more.

Wasted Time

It seems that more and more places want surveys after every interaction. Either they want vocal surveys over the phone after the call, they place links on receipts, or they send long and torturous Survey Monkey surveys. Worse, the surveys are getting longer and longer, and the requests are getting so in-your-face now. These requests are way overreaching… and I’m not going to do any of them.

Yes, I’ll do them if I have a bad experience, but other than that, suck it up. I’m not doing it. So, don’t send me 2, 3 or 4 ‘reminder’ emails that I need to go and do it. Give it a rest. I’m not doing it. Worse, if you keep sending me these emails, I’m highly likely to mark them as spam, which isn’t going to help your email reputation. So, give that reminder thing a rest!

No intention of doing surveys

As the title says, not doin’ it. It’s a waste of my time to do these long survey forms that don’t really help me in any substantial way. If you want me to participate in your survey, why not give me an incentive? Like money off my next bill or a coupon for money off my next purchase? Seriously, how hard is it? If you really want me to do it, give me an incentive to do it. I’m not here to run your business for you. That’s your job. My feedback is likely to be tossed anyway. So, that 15-20 minutes I just spent on your behalf is a total waste of my time. If you want me to do them, then give me a substantial reason.

Bad service = bad review

On the flipside, if your service is awful, expect a bad review. So, you might not want to ask for them. Of course, if you actually intend to make your service better, then by all means ask. Not that I’m going to fill out a survey if the service is good. It just frustrates me when I fill out a survey and submit it to a company that has no intention of changing (Comcast, Verizon, AT&T, et al).

I get the reason for asking for these surveys, but let’s end this trend. Let’s figure out a way to get what you need in another way. Surveys don’t provide you with what you need anyway. You may think they do, but they don’t. In the end, they don’t work to improve things and, in many cases, fall on deaf ears. So, they’re pointless. For this reason, and for the lack of incentive, I’m not doing any future surveys and will decline them at every chance. I also plan to start marking them as spam at every turn. So, I’d seriously suggest businesses start being much more careful when sending after-the-fact emails asking for completion of surveys.

Bottom line… no survey for you.


ffmpeg: A recipe for HD video on portables

Posted in video conversion by commorancy on May 17, 2015

The first thing people are likely to ask about this article is how to rip a blu-ray disk. Unfortunately, I’ll have to leave that task for you to figure out. There are plenty of places on the net to find this information. I’d suggest you visit www.doom9.org and poke around there to find your answer. Once you have your HD video ripped, that’s where this article will help you produce a high quality portable video experience for your tablet or phone.

What is ffmpeg?

I’ll start with an analogy: what sox is to audio, ffmpeg is to video (and audio). Sox is a great tool if you need to manipulate wave files, mp3s or au files. But these audio formats are not so great when you want to use them as a soundtrack in a video. That’s where ffmpeg shines.

ffmpeg is a tool that converts one video + audio format into another. For example, a blu-ray disk contains a file format known as m2ts. This is a type of container that holds various types of video, audio, chapter and subtitle information (many times, several of each). However, the m2ts file, while playable with VLC, is not so playable on an iPad, iPod or a Samsung Galaxy Tab. In fact, portables won’t play the formats contained in an m2ts container. Of course, it’s not that you’d want to play this format on your tablet anyway, because an m2ts file can be 25GB-40GB in size. Considering an iPad has, at most, 128GB, you’d only be able to store about 3 of these honking videos… not that you’d actually be able to play them, as the format is incompatible. To the rescue: ffmpeg.
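If you’re curious what’s actually inside one of these containers, ffprobe (which ships alongside ffmpeg) will list every video, audio and subtitle stream it finds. A quick sketch, with a placeholder path:

ffprobe -show_streams "/path/to/input.m2ts"

Run against a blu-ray rip, you’ll typically see one video stream plus several audio and subtitle streams, which is exactly the pile of data we’re about to boil down.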

ffmpeg and video containers

The format of choice that Apple prefers is H.264 + AAC. The first thing I will say is that the libraries needed to get ffmpeg to produce this format are not in the pre-compiled version. Instead, you’ll need to set aside a weekend to compile the latest version of ffmpeg with the non-free libraries. By ‘non-free’, I mean these libraries are potentially encumbered by copyrighted or patented code. For this reason, the pre-compiled versions do not contain such code, and it’s therefore impossible to produce an H.264 + AAC mp4 video file with them. This means you have to compile ffmpeg yourself.

Explaining how to compile ffmpeg is a bit beyond the scope of this article. If you are interested in this topic, please leave a comment below and let me know. I will state, of compiling this, that the --enable-nonfree option when running configure on ffmpeg is only half the battle. You first need to go get the non-free libraries and compile them separately. Then, when compiling ffmpeg, reference the already-compiled non-free AAC shared libraries so that ffmpeg can link against them. ‘Nuf said about compiling it.
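For the curious, here’s a rough sketch of what the configure step looks like, assuming you’ve already built and installed libx264 and libfdk-aac somewhere the build can find them (your prefix and paths may vary):

./configure --enable-gpl --enable-nonfree --enable-libx264 --enable-libfdk-aac
make
sudo make install

The --enable-nonfree flag is what permits ffmpeg to link against libfdk_aac at all.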

The good thing about ffmpeg is that this tool understands nearly every video container type out there. This includes VOB (DVD format) and m2ts (blu-ray format). For what it’s worth, it also understands HD-DVD format even though this format is long dead.

Converting with ffmpeg

There are what seem like a ton of options when you type in ‘ffmpeg --help’. In fact, the help is so daunting as to turn off many beginners, who might then look for something else. Yes, there are tons of options that can tweak the resulting video output. That’s why this article is here. I have found what I believe to be the perfect video conversion method, from the 30GB m2ts format to a roughly 3GB file that fits quite comfortably on an iPad or Galaxy Tab S and still retains HD quality. Without further ado, let’s get to the recipe:

Pass 1

/usr/local/bin/ffmpeg -y -i "/path/to/input.m2ts" -f mp4 -metadata title="movie_title" -vcodec libx264 -level 31 -s 1920x1080 -vf crop=1920:800:0:140 -b:v 3600k -bt 1024k -bufsize 10M -maxrate 10M -g 250 -coder 0 -partitions 0 -me_method dia -subq 1 -trellis 0 -refs 1 -flags +loop -cmp +chroma -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -qcomp 0.6 -qmin $qmin -qmax $qmax -qdiff 4 -threads number_of_cpu_cores -pass 1 -acodec libfdk_aac -cutoff 20000 -ab 192k -ac 2 "/path/to/output.mp4"

Pass 2

/usr/local/bin/ffmpeg -y -i "/path/to/input.m2ts" -f mp4 -metadata title="movie_title" -vcodec libx264 -level 31 -s 1920x1080 -vf crop=1920:800:0:140 -b:v 3600k -bt 1024k -bufsize 10M -maxrate 10M -g 250 -coder 0 -partitions 0 -me_method dia -subq 1 -trellis 0 -refs 1 -flags +loop -cmp +chroma -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -qcomp 0.6 -qmin $qmin -qmax $qmax -qdiff 4 -threads number_of_cpu_cores -pass 2 -acodec libfdk_aac -cutoff 20000 -ab 192k -ac 2 "/path/to/output.mp4"

Note that libfdk_aac is a non-free library and will not be in the free, pre-compiled version. Also note that $qmin, $qmax and number_of_cpu_cores (the value for -threads) are placeholders for values you choose yourself.

Two Passes? Doesn’t that take longer? 

Yes, it does. Two passes are necessary to provide the best motion quality in the video. The first pass gathers statistics about each frame transition in the film and stores them in a statistics file. The second pass reads this very large (~2GB) first-pass statistics file and uses it to create smooth transitions between each frame of the output video. For this reason, you need to make sure you have plenty of free disk space. This two-pass system removes the herky-jerky video experience (especially on horizontal camera pans). Ultimately, what you’ll find is that the recipe above gives the best combination of quality and size I’ve yet found from any conversion. For a 90-120 minute film, you’re looking at around a 2.5GB-4GB resulting mp4 for full 1080p + stereo audio. This file is compatible with the iPad and the Samsung Galaxy series.

Conversion of a 90-120 minute film can take anywhere between 40 and 70 minutes. You can run conversions in parallel on your system, but you’ll need to run them in separate directories to keep the first-pass statistics files separate, as sketched below.
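Here’s a minimal sketch of what parallel runs might look like, assuming the recipe above is wrapped in a small shell function (trimmed here for readability; file names are placeholders). As an alternative to separate directories, ffmpeg’s -passlogfile option gives each job its own statistics log prefix, so two jobs can share one directory without colliding:

convert() {   # $1 = input m2ts, $2 = output mp4, $3 = pass number
  /usr/local/bin/ffmpeg -y -i "$1" -f mp4 -vcodec libx264 -level 31 \
    -b:v 3600k -bufsize 10M -maxrate 10M \
    -pass "$3" -passlogfile "${2%.mp4}" \
    -acodec libfdk_aac -cutoff 20000 -ab 192k -ac 2 "$2"
}
( convert movie1.m2ts movie1.mp4 1 && convert movie1.m2ts movie1.mp4 2 ) &
( convert movie2.m2ts movie2.mp4 1 && convert movie2.m2ts movie2.mp4 2 ) &
wait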

Stereo audio? Why Stereo?

I always convert down to stereo for good reason: these videos are intended for portable devices. I’ve yet to see any tablet, computer or phone support 5.1 or 7.1 audio with the built-in audio hardware. Leaving the multi-channel audio intact is basically pointless and consumes extra disk space. Sure, if you want to export the video and audio to something like the Apple TV, it might be handy to have. But then you’d probably want to stream the m2ts file rather than a slightly more sucky mp4. After all, converting to mp4 is equivalent to ripping a CD to mp3: it makes the file much smaller, but it’s also lossy. However, in this case, sucky isn’t sucky at all, and most portable devices support stereo.

In other words, the output is stereo because portable devices don’t support multichannel audio. At least, not yet.
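For reference, the downmix comes from the audio flags already in the recipe above; -ac 2 is what actually reduces the channel count to stereo:

-acodec libfdk_aac -cutoff 20000 -ab 192k -ac 2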

How good is the video quality?

Excellent. It will not be exactly the same quality as the m2ts file (that’s not possible, considering it’s being compressed), but when viewing on a portable, it’s practically like watching a blu-ray disk. The video is still 1080p. It’s that good, all in about 3GB. Note the crop size above: it is designed to remove the black bars from the top and bottom. When blu-rays are encoded, they are encoded at full 16:9 even when the video is wide screen. This means the black bars are stored as extra video information in the video file. Removing this unnecessary black space avoids encoding blank video, thus reducing the size of the output file. Because these black bars are totally unnecessary, removing them also makes for a better viewing experience when watching the video in a picture-in-picture style window.

However, the crop value given is intended for the widescreen 2.35:1 and 2.39:1 films that have been produced since the mid-1970s. Widescreen films produced in the 1960s and earlier have different aspect ratios, and using this crop value will cut off pieces of the top and bottom. There are also some films that use unique and creative aspect ratios that won’t work with this crop value. And many animated films are full 16:9, in which case you’ll want to remove the -vf crop=1920:800:0:140 argument entirely to prevent ffmpeg from cropping.

Calculate Crop

To calculate crop, here’s how to do it. The arguments are crop=hwidth:vheight:hoffset:voffset, where:

  • hwidth = horizontal width of input video
  • vheight = vertical height of video
  • hoffset = how much to move the video horizontally (into frame)
  • voffset = how much to move the video vertically (into frame)

Note that my crop recipe above does no horizontal cropping, but horizontal cropping works just like vertical cropping. For the vertical crop, note that I’ve reduced the frame canvas height from 1080 to 800 to give the frame the proper aspect ratio. But reducing the height from 1080 to 800 only changes the size of the video canvas; it doesn’t move the content back into frame. That’s what hoffset and voffset are for. To calculate voffset and move the video into the newly sized canvas, subtract 800 from 1080 (1080 – 800 = 280), then divide the result by 2 (280 / 2 = 140). The voffset value is 140. One important note about cropping: if you divide and get a floating point number like 141.2222, you can’t use it. Adjust your values so voffset is always a whole number. To do this, make sure that when you subtract your crop height from 1080, the result is an even number, so the division by 2 also yields a whole number.
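As a worked example with my own numbers (not the recipe’s), take a hypothetical 2.35:1 film in a 1920x1080 frame. The visible picture height is 1920 / 2.35 ≈ 817, which rounds down to the even value 816. Then voffset = (1080 – 816) / 2 = 132, giving:

-vf crop=1920:816:0:132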

VBV underflow warnings?

Note that when converting some content, the conversion output may occasionally show VBV underflow warnings. This is intentional. While it is possible to get rid of these warnings by raising or removing the -bufsize and -maxrate options, raising these values may also increase the size of the output movie file. The underflow warning means there’s simply too much input to fit in the requested output bitrate. That’s why I say this video won’t be exactly identical to the input file. The selected settings provide the perfect marriage between size, quality and functionality. If you raise or eliminate the -maxrate option to get rid of the warnings, the size of the output video file will increase too. Because I prefer the mp4 file sizes to remain in the 2.5-4GB range, and because the VBV underflow warnings do not materially impact the resulting output file, I have chosen not to fix this. However, if you would like to get rid of these warnings, you can remove the -bufsize 10M -maxrate 10M arguments or increase these values as you see fit. You might first want to read the ffmpeg wiki article describing the interrelationships between -b:v, -bufsize and -maxrate.

Because I also want the final videos to have the steadiest bitrate possible (very important for any kind of streaming), I allow the VBV underflow warnings. Removing the -maxrate and -bufsize options (to get rid of the warnings) will allow the bitrate to vary wildly at times, which can cause unsteady or choppy playback in some players, especially when streaming over a network. To avoid wild variability in the bitrate, I intentionally force -bufsize and -maxrate to a specific size.
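If you do want the warnings gone and can live with a larger, more variable file, the change is confined to the rate-control flags in both passes. The values here are illustrative, not a recommendation:

-b:v 3600k -bufsize 20M -maxrate 20M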

Enjoy the recipe; hopefully it’s helpful. If you get good results from these ffmpeg recipes, please leave a comment below. Also, feel free to tweak the recipes however you see fit. If you find something even better than what I show above, please post a comment and let me know what you did. I’m always interested in seeing what people can do with ffmpeg.

How not to run a business (Part 9): Culture Clash and Acquisitions

Posted in best practices, business by commorancy on April 18, 2015

Okay, so now your business is big enough (and making enough revenue) to consider acquisitions. But making acquisitions can be tough, and part of what makes them tough is the hard decisions needed to ensure the acquisition’s success. Yet some companies haven’t the first clue about how to make these decisions, especially when company culture is involved.

Don’t let the company you are acquiring dictate any company culture demands

In other words, walk away from any acquisition deal where the owners demand (as part of the deal) to be allowed to continue their current company culture. No, no, NO! Do not allow this! Never concede by allowing the acquired company’s culture to remain as part of the acquisition. If you do, it will tie your hands when it comes time to merge the acquired company into yours. It must all become a single company culture or you will never make the acquisition a success.

At some point, you must merge the people and the cultures. If you don’t nip having two cultures in the bud, you’ll end up with part of the company doing things one way and another part doing things entirely differently. You can’t have your company culture fractured across the boundary of an acquired entity, or you will never get rid of culture problems. Basically, don’t tie your hands before the deal is done.

Don’t let the acquired company executives dictate how their section will continue to operate

This goes hand in hand with company culture, but is distinctly different. Executives of the company being acquired do not want to lose their tenure, authority, position or compensation after having been acquired. Ultimately, that’s not possible, and it can’t be allowed. You can concede this for a short time during a transition period, but you cannot allow it to remain after the transition period. If the acquired company’s executives don’t like it, they can leave. If you concede this point, you will never successfully merge the two entities.

This is one of the hard choices you must make. For companies being acquired, you have to lay down the law. If a person can have a role in the new company and can accept your company culture, give it to them. If they don’t have a role, lay them off. If they can’t accept the company culture, lay them off. If they are unwilling to work within the constraints of your company’s goals and processes, lay them off. This is a hard decision, but one that must be made. You cannot keep the acquired company’s structure and processes around in your business. If a process you’ve inherited makes sense, then yes, you can integrate it. But typically this never happens; the company being acquired almost never has more mature processes than yours.

Don’t allow an acquired company to remain located in a separate city from your business

Another hard choice, but one that is entirely necessary. You cannot leave the office open in the city where the acquired entity was located. You should dictate, as part of the acquisition terms, that you will close it and relocate to your headquarters the staff who choose to stay. While you can leave the office open during the transition period, you cannot leave it open permanently. If you do, you will never integrate the staff into your business; they will forever retain their own culture in that office. Acquired staff must move to your headquarters or leave the company. If that’s a deal breaker, walk away from the deal.

The only exception to this rule is acquiring foreign entities. If you are a US entity and acquire a Japanese office, this is the one time you will want to keep that entity intact. However, in the domestic US, the rule is: close the office. You can re-open and restaff an office in that same city later, but the acquired entity’s office must be closed as soon as possible to set the tone that your company is one culture and one team.

Don’t make the staff of the company the most important piece of the acquisition

Unless you are a staffing firm acquiring another staffing firm, you typically acquire a company for its customer base or its technology, rarely ever for its staff. You need to keep in perspective exactly why you are buying a company… and it’s rarely ever for staff. However, if you are buying a software company, it’s probably a good idea to keep a few key developers for at least a short transition period. But do not keep them on staff forever. Once they have turned over their braintrust and code to your engineers, usher them out of the building. I’ll reiterate: you buy a company for its technology or customer base, never for its employees. However, if those key employees are willing to relocate and willing to accept your culture (usually not), then you can invite them to stay. Otherwise, you should put that key staff on a 6-month contract to transition the software and documentation to your team, then usher them out.

Don’t hire executives for more than a 1 year contract on acquisitions

When you buy a company, you’re technically hiring these employees and execs blind. Sure, you could assume that the employees there did something right to get the company to the point where you considered buying it, but you may be making the wrong assumption. It’s entirely possible that the people (or person) who created the product or service has long since walked and you’re buying a shell in maintenance mode. Based on this fact alone, you should be prepared to walk everyone in the acquired company to the door. If you aren’t prepared to do this, you’ll have no hope of successfully merging two entirely different cultures. If you’re not prepared to fire every single acquired employee, you shouldn’t be in the business of making acquisitions.

If the acquired employees are not acutely aware and accept that your culture is the dominant culture, they will not fit in nor follow your company’s processes. Even if they are aware of this fact, they may still choose not to follow your company’s processes (see allegiances below). You should be prepared to let any acquired employee go quickly. In fact, you should plan to let these employees go after the transition period is over. This prevents culture issues entirely.

Don’t get lulled into thinking that a technology acquisition will save your business

It won’t. Plain and simple. If your own product or service isn’t cutting it, any company you purchase will not typically be any more successful than yours. In fact, you may find that it may make no money at all and you’ll end up (best case) giving it away for free or (worst case) shutting it all down and dumping it.

You should understand that, like any business, ideas come and ideas go. Some work, some don’t. Buying a company for software, hardware or specific technologies isn’t without risk. Sometimes you gamble and win, sometimes you lose. There is no crystal ball for this. But you must be willing and prepared to throw away everything from an acquisition. This is yet another tough decision, but it’s one that needs to be clearly understood. If you are unwilling to acknowledge the failure of an acquisition, then you shouldn’t be in the business of acquiring companies.

Don’t create new positions for acquired executive staff

If there isn’t a position already open, do not create fake titles for executive staff. You should explain, at the bargaining table, that there is no position available for their skills within your company, and make it perfectly clear that they won’t have a role in the new merged company. Of course, you can compensate them, but they will have no job. If they won’t accept that, walk away from the deal. Additionally, don’t create co-presidents or co-CEOs or co-anything. Dual roles in your business generally do not work. Not only will your staff be confused over whom they report to, double decision makers lead to decision problems, never solutions. Additionally, you likely don’t know any of these acquired executives. Sure, they might appear knowledgeable, but they didn’t go through your official interview process. They bypassed that process and became your employees through acquisition. There is no accounting for their knowledge, skills, background or abilities.

One other point I should make here is about allegiance. Keeping executives from an acquisition in a position of power, especially co-leader positions, enables acquired employees to retain their allegiances to their former leaders rather than forming new allegiances with your leaders. These fractured allegiances are likely to lead to more problems in the future. This goes back to company culture above. If you keep acquired staff and executives on board, you are asking for culture clash problems. This can be eliminated by letting acquired staff go after the transition period is over, including executives.

Don’t skip the interview process for acquired staff

If you want to hire on any employee from an acquisition, put them through the same hiring process as any candidate. Have your teams interview them and determine if they fit the position based on their skills. If the staff like and accept them, hire them. If they don’t, walk them to the door. Do not blindly accept staff from an acquisition simply because the company was acquired. Follow your standard hiring practices when considering bringing on staff from an acquisition. Make sure that the acquired company is fully aware that every staff member will need to go through a rehire process with your hiring managers. If they don’t fit the skills needed for an open position, don’t hire them.

Don’t avoid reviewing your acquisition progress yearly

Company technologies and staff don’t always integrate nicely, especially over time. You need to review the progress of any acquisition regularly. Don’t just assume that the acquisition is working perfectly simply because you hear nothing about it. Instead, you need to go digging for information. Ask people on your team what they think of the acquisition and whether it was successful. Get opinions from your team members and understand what they are saying. If your team members won’t give candid information, ask them to fill out a survey and offer a notes section at the end for free-form comments. Assuming the survey is truly anonymous, the employees will be open and candid with you. You need to know when company culture clashes exist. These cannot be swept under the rug.

Part 8 | Chapter Index | Part 10


Fetch song of the Week: Let Me Go by Avril Lavigne

Posted in music by commorancy on April 13, 2015

Random song of the Week: Playmate to Jesus by Aqua

Posted in music by commorancy on April 5, 2015

Star Trek Voyager: Inconsistencies Abound

Posted in entertainment, writing by commorancy on April 2, 2015

I’ve recently decided to rewatch all of the seasons of Star Trek Voyager. I missed many of the later episodes and decided now is the time to watch them. One thing I have noticed is that time has not been kind to this series, and neither have the writers. Let’s explore.

Seasons 1, 2 and 3

The first thing you’ll notice about season one is the dire predicament in which Voyager is placed. While attempting to rescue a Maquis ship, Voyager gets pulled into an unknown anomaly and is sent hurtling into the Delta quadrant. Because the Maquis ship is needed as an explosive, both crews end up merged, ‘assimilated’ onto Voyager. This is where the fun begins.

The first season sees a lot of resistance and animosity from the Maquis crew towards Star Fleet. Captain Janeway makes some questionable decisions, like blowing up the Caretaker array instead of trying to salvage it, thus stranding everyone in the Delta quadrant. From here, we see many a shuttle accident in between holodeck romps. It seems that every time a shuttle tries to land somewhere (for whatever reason), it ends up crashing and Voyager has to come to the rescue. If we’re not watching downed shuttles get rescued, we’re playing with stupid characters on the holodeck or beaming critical staff (sometimes the Captain herself) into inexcusably dangerous situations.

The second and third seasons keep expanding on what was started in the first. But one thing you’ll notice is that while Janeway keeps close tabs on stock depletion in the first season, all that subtext is dropped by the second. By the third season, it became a monster-of-the-week series where Voyager was ‘reset’ at the beginning of each episode to have a full crew, a full armament of torpedoes and a full complement of shuttlecraft. Additionally, any damage sustained in a previous episode was non-existent in the next. The only continuity pulled forward was the replicator rations, and that plot device was only kept to give the Neelix character some work to do as a makeshift chef in the Captain’s private dining room.

Unfortunately, dropping the limited stock, rations, crew complement and limited shuttlecraft supply was a singularly bad move by the writers for this series. Seeing Voyager become increasingly damaged throughout the series would have added to the realism and cemented the dire predicament in which this ship was placed. In fact, in the episode Equinox (straddling seasons 5 and 6), the Equinox’s ship and crew are likely similar to how Voyager’s should have looked by that point in the journey. Also, at some point in the journey through the Delta quadrant, Janeway would have had to drop the entire Star Fleet pretext to survive. If, like the Equinox, half the crew had been killed in a battle, Janeway would have been forced to reconsider the Prime Directive and Star Fleet protocol. In fact, this entire premise could have started a much more compelling story arc at a time when Voyager’s relevance as a series was seriously waning and viewership dropping. Taking Voyager out of its sterile, happy-go-lucky situation and placing it into more dire, realistic circumstances could have drawn an entirely new audience. Situations not unlike this would ultimately be played out in later series like BSG, where this type of realism would become the norm and a breath of fresh air compared to the previously tired, formulaic series.

Star Trek, up to Voyager, had always been a sterile yet friendly franchise where each episode arc closed with a happy ending. Each episode was tied up far too neatly in a pretty little bow, possibly also wrapped in a morality play. While that worked in the 60s and seemed to work in the 80s for TNG, during the 90s that premise wore extremely thin. By the 2000s, gritty realism was the way of series like Stargate, 24, Lost, BSG and Game of Thrones. That new influx of gritty realism made Voyager, DS9 and TNG seem quaint and naïve by comparison. Instead of perfectly coiffed hair and immaculately cleaned and pressed uniforms, we would now see dirty costumes, unmanaged hair, very little makeup and character scenarios where everything doesn’t work out perfectly at the end.

While Brannon Braga, Rick Berman, Michael Piller and Jeri Taylor should get a few kudos for attempting to keep Star Trek alive, they did so at the cost of not keeping up with the times, sacrificing the franchise as a result. Even when Voyager was introduced, its episodic formula was already wearing thin; even during its initial run, it was somewhat quaint and naïve. Like attempting to recreate the 1970s Brady Bunch exactly as it was, but in the 2000s, Voyager was a throwback to the past. This is mostly the reason I stopped watching it during its original airing. Like an old comfort toy from childhood, eventually you have to leave it behind and mature. Star Trek Voyager just didn’t grow up and mature with the prevailing winds of change, its audience’s age demographic or the prevailing TV series landscape. It’s ironic: Star Trek is about growth, maturity and learning, yet while the producers and writers were churning out weekly stories about these very topics, they couldn’t manage to keep up with the growth trends in their own industry. In short, Voyager needed a drastic mid-series makeover (after season 3) to keep up with the changing times.

Inconsistencies

In the first season specifically, Janeway institutes replicator rations and power-saving measures, yet fully allows the crew to use the holodeck at will. Seriously, the holodeck is probably one of the top energy drains on that ship, and you’re going to let the crew use this power-hungry thing willy-nilly? Yet you force the crew onto limited replicator rations? Why not disable the holodeck except for emergency use and let the crew have all the replicator rations they want? It seems fair to me.

Again, in the first season, Janeway identifies that the ship has limited shuttle and torpedo complements. Yet in the 3rd and later seasons, Voyager is popping off photon torpedoes like candy. I also have no idea just how many shuttles have been destroyed, disabled or otherwise left as junk on planets, yet Voyager seems to have an infinite supply of them, along with an infinite supply of crew and torpedoes. I believe it was counted that Voyager shot off somewhere close to 98 torpedoes across the entire 7-season run. And considering that those 7 seasons covered only 7 of Voyager’s 23 years in the Delta quadrant, extrapolating means Voyager would have shot over 320 torpedoes in those 23 years, when they only had 38 on board.

On top of all of this, Janeway is a completely reckless captain. She continually puts her crew in harm’s way: intentionally hunting for resources, scouring through junk, investigating, exploring, trying to salvage Borg cubes. You name it, Janeway has had her crew recklessly do it, instead of the obvious… trying to find a way home. How that crew managed not to mutiny and kick her out of the captain’s chair is beyond me. Janeway is seriously the most reckless captain in Star Fleet, far and above Kirk in recklessness.

Episode Writing Continuity Carelessness

In Season 4, Episode 23, entitled Living Witness, the Doctor is reactivated 700 years in the future on the Kyrian home planet in the Delta quadrant. There was never any suggestion that this episode was built on any kind of temporal anomaly. The Doctor finds he is part of a museum exhibit and is called upon to clear Voyager’s name as the ship that supposedly started the Kyrians’ war. Ignoring the stupid war premise, which really makes no difference one way or another, what this episode states is that the Doctor’s holomatrix was downloaded during an attack on Voyager and left on the planet for 700 years.

Let me pause here for a moment to catch everyone up, since there have been some questions about this specific episode’s setup (which was, by the way, also inconsistent). Pretty much the entire series before and after Living Witness drilled home the point, time and time again, that due to the Doctor’s expanded holomatrix, ‘he’ was ‘unique’ and ‘uncopyable’. Because this point was driven home repeatedly, and because it was used as a plot device to ensure both the audience and the Voyager crew understood just how human the Doctor was, we are told the Doctor is unique, individual, indispensable, irreplaceable and able to die. There was even a Kes episode about this whole idea, and not the only one: when the rest of the crew was ready to reboot the Doctor because his holomatrix had degraded so badly, Kes vouched for his uniqueness and individuality and stood up for the Doctor (when he couldn’t) to keep trying to preserve him intact. If it had been as easy as making a backup copy and restoring it, the ship could have used a backup doctor several times when the ‘real’ Doctor went on away missions, instead of leaving Kes and Paris to run Sickbay. They could even have overlaid a backup copy onto his later degraded version to clean his matrix up. Yet this never manifests, not once, in any episode. In fact, as I said, the writers did everything they could to ensure we understood that he was uncopyable, not even with the mobile emitter. So, what does this all mean? It means the mobile emitter that was found contained the actual Doctor, not a copy as was theorized.

What this story flaw also says is that there should no longer be an EMH on Voyager after the Doctor is left on this planet for over 700 years. It also means that no episode after this event should ever see this EMH program again. In another episode, Harry Kim tries to recreate the EMH after the Doctor is thought to be lost, but after Kim fails, he leaves Paris to fend for himself in Sickbay. This means there is exactly one Doctor, and he was left on the Kyrian planet. The Doctor serves the Kyrians for a period of time, but eventually finds his way home to Earth 700-800 years after Voyager. Yet, in episodes after Living Witness, the Doctor is happily helping folks in Sickbay once again, including appearing in the final episode, entitled Endgame.

Now, one could argue that Living Witness happened sometime later, at the end of Voyager’s run, but then why is it in season 4? It also means that for at least some duration of Voyager’s trip, the Doctor’s EMH program was not available. Though B’Elanna might have created a new rudimentary EMH, we never saw it. Yet, in Season 7, Episode 23, Endgame, we see the Doctor come strolling through Voyager’s party 23 years later. Assuming the episode Living Witness to be true, this is a major continuity error. The Doctor should not be in Endgame at all; he should still be deactivated on the Kyrian homeworld.

Let’s also consider how it’s even possible that the mobile emitter was left behind (or stolen) in Living Witness. Since there was only ever one mobile emitter, that logically means the Doctor should not have had the mobile emitter in any episode after Living Witness (assuming we accept the ‘backup’ idea, which I don’t). Yet we continue to see the mobile emitter used in episodes all the way to the very end, when Voyager returns. This episode contains far too many consistency problems and should not have aired.

Lack of Season-wide Story Arc

Star Trek: The Next Generation attempted to create a few longer story arcs, but the writers never really embraced arcs beyond the borders of an episode (or multi-part episode). Though some character relationship arcs did reach beyond those borders (i.e., love relationships, children, cultural rituals, marriages, etc.), arcs related to alien races, ship resources, ship damage or astral phenomena (with the exception of the Q) were almost never carried forward. For example, in TNG’s season 7, the Force of Nature episode forced Star Fleet to institute a warp speed limit due to warp drive’s destruction of subspace. That speed-limit arc carried through a few episodes, but was ultimately dropped and ignored during Voyager. It was dropped primarily because it didn’t help the writers produce better episodes. By forcing starships to travel at slower warp speeds, nothing good came from this plot device. In fact, this speed limit would only have served to hinder Voyager in getting home. Clearly, the writers had not yet conceived of Voyager when TNG’s Force of Nature aired; otherwise, the producers might have reconsidered airing this episode.

Also, because warp speed is a fairly hard concept to imagine in general, artificially limiting speeds in a series where fantasy and space travel are the end goal actually served to undermine the series. There were many ideas that could have created larger, more compelling story arcs besides setting an unnecessary speed limit. The sole purpose of the speed limit, I might add, was to make Star Trek appear eco-friendly towards the inhabitants of the Milky Way… as if it even needed that moniker. I digress.

Even as TNG was ending, other non-Trek series were beginning to use very large and complex story arcs. Yet Star Trek TNG, DS9 and Voyager clung tightly to story arcs that fit neatly within a 42-minute episode border. This closed border ultimately limited what appeared in subsequent episodes. Very rarely did something from a previous episode appear in a later one unless it was relationship-driven, or the writers were hard up for stories and wanted to revisit a specific plot element. In Voyager, it happens in the season 5 episode Course: Oblivion (an episode that isn’t even about Voyager’s actual crew), which is a sequel to the season 4 episode Demon (where the crew lands on a Class Y planet and is cloned by a bio-mimetic gel). These types of story sequels are rare in the Star Trek universe, especially across season boundaries, but they did occasionally happen. Even so, Star Trek stayed away from season-wide or multi-season story arcs, with the exception of character relationship arcs.

Janeway’s Inconsistencies

The writers were not kind to the Janeway character. One minute she’s spouting the Prime Directive and the next she’s violating it. There is no consistency here at all. Whatever the story requires forces Janeway’s ethics out the airlock. The writers took no care to keep her character consistent, forthright, honest and fair. No, she will do whatever it takes to make the story end up the way the writers want. It’s too bad, too, because in the beginning the Janeway character started out quite forthright. By the time Seska leaves the ship, I’m almost rooting for a mutiny to get Janeway out of the way. In fact, I actually agreed with Seska to a certain extent. Janeway’s number one priority was to protect the crew and get them safely back to the Alpha quadrant as quickly as possible. Instead, Janeway feels needlessly compelled to gallivant all over the Delta quadrant for 23 years, making more enemies than friends, killing her crew one by one, destroying shuttles, using up torpedoes, using up ship resources and generally being a nuisance.

Worse, Janeway’s diplomatic skills with alien races are about as graceful as a hammer hitting your thumb. She just didn’t get it. The Sisko character in DS9 got it. The Seska character got it. Janeway, definitely not. While she may have been trained to captain the tiny Voyager, she had absolutely zero diplomatic skill. I’m guessing that’s why Starfleet never tapped her to helm a Galaxy-class ship and, instead, forced her into the tiny Intrepid class for scientific exploration.

I’m not even sure why Starfleet tapped Voyager to go find the Maquis ship in the first episode. While Voyager may be somewhat more maneuverable, a Galaxy-class ship would have been better suited to find and bring back the Maquis vessel. So even the series’ premise started out wrong.

Commentary

Time has also not been kind to the Voyager episodes themselves. Both The Next Generation and Voyager relied on the weekly episodic nature of television. The seven-day span between episodes gave viewers time to forget all about the last one before the next aired. That time gap helped the series… a lot! But in the age of DVD sets and Netflix, where there are no commercials and no need to wait to watch the next episode, watching Voyager in rapid succession shows just how glaring the continuity flaws are. No, this format is definitely not kind to Voyager. And it’s not even just the continuity errors. It’s the stupid decisions. Like arbitrarily deciding it’s perfectly okay to leave holodeck simulations running even when the ship is running out of power with no way to replenish it. Like firing yet another large volley of photon torpedoes at a Borg ship when you only have 38 on board. Like continually and intentionally sending shuttlecraft into known atmospheric disturbances, only for them to be disabled and downed. Janeway is the very definition of reckless with her ship, her command, her crew and their lives. Yet no one on board saw it, commented on it or mentioned it. Seska came close, but she left the ship before she got that far with Janeway.

Overall, when it originally aired, it was more enjoyable. Today, it’s a quaint series with many glaring flaws, no overall story progression and a silly ending. Frankly, I’m surprised this series actually ran for 7 years. It should have ended at about the fifth season. Basically, after Kes (Jennifer Lien) left and the series picked up Seven of Nine (Jeri Ryan), it all went downhill.

If anything is responsible for killing off the Star Trek franchise, it’s Voyager. Yes, Enterprise came after, but Enterprise was just too foreign to really make it as a full-fledged Star Trek. It was really a casualty of Voyager rather than being to blame for the demise of Star Trek.

Favorite song of the week: Nuclear by Mike Oldfield

Posted in music by commorancy on March 28, 2015

Apple’s newest MacBook: Simply Unsatisfying

Posted in Apple, botch, business, california by commorancy on March 12, 2015

It’s not a MacBook Air. It’s not a MacBook Pro. It’s simply being called the MacBook. Clever name for a computer, eh? It’s not like we haven’t seen this brand before. What’s the real trouble with this system? A single USB-C connector. Let’s explore.

Simplifying Things

There’s an art to simplification, but it seems Apple has lost its ability to rationally understand this fundamental concept. Jobs got it. Oh man, did Jobs get the concept of simplification in spades. Granted, not all of Jobs’s meddling in simplification worked. Like a computer with only a mouse and no keyboard. Great concept, but you really don’t want to enter text through an on-screen keyboard. This is the reason the iPad is so problematic for anything other than one-liners. At least, not unless there’s some kind of audio dictation system. At the time, the Macintosh didn’t have such a system. With Siri, however, we do. I’m not necessarily endorsing that Apple bring back the concept of a keyboard-less computer, though, in fact, with a slight modification to Siri’s dictation capabilities it would now be possible.

Instead, the new MacBook has taken things away from the case design. More specifically, it has replaced all of those, you know, clunky, annoying and confusing USB 3.0 and Thunderbolt connectors that mar the case experience. Apple’s engineers have taken this old and clunky experience and ‘simplified’ it down to exactly one USB-C port (excluding the headphone jack… and why do we even need that jack again?).

The big question: “Is this really simplification?”

New Case Design

Instead of the full complement of ports we previously had, such as the clever MagSafe power port, one or two Thunderbolt ports, two USB 3.0 ports and an SD card slot, now we have exactly one USB-C port. And it’s not even a well-known or widely used port style yet.

Smart. Adopt a port that literally no one is using and then center your entire computer’s universe around this untried technology. It’s a bold, if not risky, maneuver for Apple. No one has ever said Apple isn’t up for risky business ideas. It’s just odd that they centered it on an open standard rather than something custom designed by Apple. Let’s hope Apple has massively tested plugging and unplugging this connector. If it breaks, you’d better hope your AppleCare service is active. And since plugging and unplugging falls under wear and tear, it might not even be covered. Expect to spend more time at the Genius Bar arguing over whether your computer is covered when this port breaks. On the other hand, we know the MagSafe connector is almost impossible to break. How about this unknown USB-C connector? Does it have the same functional lifespan? My guess is no.

I also understand that the USB-C technology automatically inherits the 10 Gbps bandwidth standard and has a no-confusion, plug-in-either-way connector style. But it’s not as if Thunderbolt didn’t already offer the same transfer speed, albeit without the plug-in-either-way cable. So I’m guessing this means Thunderbolt is officially dead?

What about the Lightning cable? Apple only recently designed and introduced the Lightning connector for charging and data transfer. Why not use the Lightning connector and add a faster data transfer standard on top of it? Apple spent all this time and effort on this cool new cable for charging and data transfer, but what the hell? Let’s just abandon that too and go with USB-C? Is it all about throwing the baby out with the bathwater over at Apple?

I guess the fundamental question is… really, how important is this plug-in-either-way connector? Is Apple insinuating that the general public is so dumb it can’t figure out how to plug in a cable? Yes, trying to insert a microUSB connector in the dark (because it only goes in one direction) can be a hassle. But the real problem isn’t that it’s a hassle; the real problem is that the connector itself was engineered all wrong. Fitting a microUSB cable into a port is only a problem because it’s metal on metal. Even when you do manage to line it up in the right direction, it sometimes still won’t go in. That’s a fundamental flaw in the port connector design. It has nothing to do with directionality. I digress.

Fundamentally, a plug-in-either-way cable should be the lowest item on the agenda. The highest should be simplifying to give a better user experience overall, not hobbling the computer to the point of being unnecessarily problematic.

Simply Unsatisfying

Let’s get into the meat of this whole USB-C deal. While the case now looks sleek and minimal, it doesn’t really simplify the user experience. It merely changes it. It’s basically a shell game: it moves the ball from one cup to another, but fundamentally doesn’t change the ball itself. So, instead of carrying only a power adapter and the computer, you are now being forced to carry a computer, a power adapter and a dock. I fail to see how this simplifies the user experience at all. I left docks behind when I walked away from Dell notebooks. Now we’re being asked to use a dock again by, of all companies, Apple?

The point of making changes to any hardware (or software) design is to improve usability and the user experience. Changing the case to offer a single USB-C port enhances neither. This is merely a cost-cutting measure by Apple. Apple no longer needs to pay for all of those arguably ‘extra’ (and costly) ports on the case. Removing all of those ‘extraneous’ ports means less cost for the motherboard and the case die-cuts, but at the expense of the user, who must now carry around more things to support the computer. That doesn’t simplify anything for the user. It also burdens the user by forcing them to pay more money for things that were previously included in the system itself. Not to mention requiring the user to carry around yet more dongles. I’ve never known Apple to foist less of an experience on the user as a simultaneous cost-cutting and accessory money-making measure. This is most definitely a first for Apple, but not a first for which they want to become known. Is Apple now taking pages from Dell’s playbook?

Instead of walking out of the store with a computer ready in hand, you now have to immediately run to the accessory aisle and spend another $100-200 (or more) on these ‘extras’. Extras, I might add, that were previously included in the cost of the previous-gen computers. But now, they cost extra. So that formerly $999 computer that already had everything you needed will now cost you $1100-1200 or more (once you factor in the bag you’ll need to carry all of these extras).

Apple’s Backward Thinking?

I’m sure Apple is thinking that eventually that’s all we’ll need. No more SD cards, no more Thunderbolt devices, no more USB 3 connectors. We’ll just do everything wirelessly. After all, you have the (ahem) Apple TV for a wireless remote display (which would be great if only that technology didn’t suck so badly on latency and suffer horrible MPEG artifacting because the bit rate is too low).

Apple likes to think it’s thinking about the future. But by the time the future arrives, what Apple has chosen is already outdated, because it turns out no one else is actually using that technology. Then Apple has to resort to a new connector design or a new industry standard because no other computers adopted what Apple was pushing.

For example, Thunderbolt is a tremendous idea. By today, this port should have been widely used and widely supported, yet it isn’t. Few hard drives use it. Few peripherals support it. Other than Apple’s use of the port to drive extra displays, that’s about the extent of it. It’s effectively a dead port on the computer. Worse, just about the time Thunderbolt might actually be picking up steam, Apple dumps it in favor of USB-C, which offers the same transfer speed. At best, a lateral move, technologically speaking. If this port had offered 100 Gbps, I might not have even written this article.

Early Adopter Pain

What this all means is that users who buy into this new USB-C-only computer (I intentionally ignore the headphone jack because it’s still pointless) will suffer early adopter pains. Not only will you be almost immediately tied to buying Apple gear, Apple has likely set up the USB-C connector to require licensed and ID’d cables and peripherals. That means if you buy a third-party unlicensed cable or device, Apple is likely to prevent it from working, just as it did with unlicensed Lightning cables on iOS.

This also means that, for at least 1-2 years, you’re at the mercy of Apple to provide you with the right dongle. If you need VGA and there’s no dongle, you’re outta luck. If you need a 10/100 network adapter, outta luck. Until a specific situational adapter becomes available, you’re stuck. Expect some level of pain when you buy into this computer.

Single Port

In addition to all of the above, let’s fundamentally understand what a single port means. If you have your power brick plugged in, that’s it. You can’t plug anything else in. Oh, you need to run two monitors, read from an SD card, plug in an external hard drive and charge your computer? Good luck with that. That is, unless you buy a dock that offers all of these ports.

It’s a single port being used for everything. That means it has a single 10 Gbps path into the computer. So, if you plug in a hard drive that consumes 5 Gbps and a 4K monitor that consumes 2 Gbps, you’ve already consumed most of that connector’s bandwidth into the computer. Or, what if you need 10 Gbps Ethernet? That alone consumes the entire bandwidth of this single USB-C connector. Good luck trying to run a hard drive and a monitor on top of that.
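To make the arithmetic concrete, here’s a quick back-of-the-envelope tally, sketched in Python (the per-device figures are this article’s rough estimates, not measured numbers):

# Devices sharing the MacBook's single USB-C port (all figures in Gbps).
PORT_CAPACITY_GBPS = 10

devices = {
    "external hard drive": 5,
    "4K monitor": 2,
}

used = sum(devices.values())
print("Used %d of %d Gbps; %d Gbps left for everything else"
      % (used, PORT_CAPACITY_GBPS, PORT_CAPACITY_GBPS - used))
# -> Used 7 of 10 Gbps; 3 Gbps left for everything else

# Add a 10 GbE adapter and the port is oversubscribed all by itself.
devices["10 GbE adapter"] = 10
print("Oversubscribed by %d Gbps" % (sum(devices.values()) - PORT_CAPACITY_GBPS))
# -> Oversubscribed by 7 Gbps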

Where an older MacBook Air or Pro had two 5 Gbps USB 3 ports plus one or two 10 Gbps Thunderbolt ports (offering well over 10 Gbps of paths into the computer), the new MacBook supports a maximum of 10 Gbps over that single port. Not exactly the best trade-off for performance. Of course, the reality is that current Apple motherboards may not actually be capable of handling a 30 Gbps input rate, but the ports were at least there to try. Though I would expect the motherboard to handle an input rate greater than 10.
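Summing the theoretical paths side by side (again, just a sketch, using the port counts and speeds cited above):

# Theoretical aggregate input paths, in Gbps.
old_macbook_ports = [5, 5, 10, 10]  # two USB 3 ports plus two Thunderbolt ports
new_macbook_ports = [10]            # one USB-C port

print(sum(old_macbook_ports))  # 30
print(sum(new_macbook_ports))  # 10

That’s a two-thirds reduction in theoretical input paths, whatever the motherboard could actually sustain.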

With the new MacBook, you are firmly stuck at a maximum input speed of 10 Gbps because it’s a single port. Again, an inconvenience to the user. Apple once again assumes that 10 Gbps is perfectly fine for all use cases. I’m guessing Apple hopes users simply won’t notice. Technologically, this is a step backward, not forward.

Overall

Between the early adopter problems and the relevancy problems USB-C has yet to overcome, this computer now offers a more convoluted user experience. Additionally, instead of offering something that would be truly more useful and enhance usability, such as a touch screen paired with an exclusive Spotlight mode, Apple opted to take this computer in a questionable direction.

Sure, the case colors are cool and the idea of a single port is intriguing, but it’s only when you delve into the usefulness of that single port that the design quickly unravels.

Apple needs a whole lot of help in this department. I’m quite sure that had Jobs been alive, while he might have introduced the simplified case design, it would have been overshadowed by the computer’s feature set (i.e., touch screen, better input device, better dictation, etc.). Instead of trying to wow people with a single USB-C port (which offers more befuddlement than wow), Apple should have fundamentally improved the actual usability of this computer by enhancing the integration between the OS and the hardware.

The case design ultimately doesn’t much matter; the usability of the computer itself does. Until Apple understands that we don’t really care what the case looks like, as long as it provides what we need to compute without added hassles, weight and costs, Apple’s designers will keep running off on tangents, spending useless cycles redesigning minimalist cases that don’t benefit from it. At the very least, Apple needs to understand that there is a point of diminishing returns in rethinking minimalist designs… and with this MacBook, the designers have gone well beyond it.