Shopping Frustration: When coupon codes don’t work
Nothing is more frustrating during online shopping than when an e-tailer sends out a coupon code for a one-day sale and the code doesn’t work. I have to wonder, are these sites just stupid, clueless or technically inept? Let’s explore.
Holiday Shopping Spree
If you’re like me, you shop for things when people send you coupon codes. Specifically, I shop when things are wearing out, and I try to time those purchases to when coupon codes are available. So, I like to wait for sale days like Memorial Day, President’s Day or, like today, Labor Day, and I’m happy when companies where I like to shop send me a 20% or 30% off coupon. I generally take advantage of these deals because they don’t appear that frequently and they let me replace clothes that are wearing out.
Clickable Ad Banners in Email
Unfortunately, many of these e-tail sites are so inept or mismanaged that they email out the code but forget to activate it. Sometimes they deactivate it too early. Worse, they send an email with a big clickable banner ad describing this ‘Sale’ that, when you click, takes you to their home page rather than to the sale items the code applies to, leaving you to wonder what the heck is actually on sale. One word comes to mind: inept. Retailers, this is a seriously stupid practice. If you send out an email saying you’re having a 20% off sale, a click should immediately take you to the sale item(s). Don’t make your customers guess what’s on sale. When I’m dumped on the front page instead, I close the browser, delete the email and move on. Sorry, you’ve just lost a sale and I simply won’t shop there. I know I’m not alone in this. A lot of people fill their carts and then abandon or empty them because of stupid things like coupon codes that don’t work.
Coupon Codes that Don’t Work
I’ve had many occasions where a company sends me a coupon code that, when I type it into the cart and click ‘Apply’, produces the message ‘This coupon is not valid’ or ‘This coupon does not apply to the items in your cart’. This goes back to the issue above. If you’re planning to issue a coupon code and spend the time and effort to email your list with it, you damned well better test that the code works, and you damned well better make sure customers know which items the code applies to. Don’t make your customers guess. Additionally, for 24-hour sales, you should also make sure the code works until midnight, and by that I mean midnight in the customer’s timezone, not just your company’s timezone. A coupon that expires at midnight in your company’s timezone can expire at midday in some locales. The code should expire at midnight wherever your shopper resides or, better, sometime during the following day, so it can’t lapse before the sale day is over for any customer and late shoppers can still take advantage. After all, isn’t the idea behind a coupon code to get people onto your site to purchase?
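To make the ‘expire it the following day’ idea concrete, here is a minimal sketch of a server-side expiry check. It is my own illustration, not anything a particular shopping cart actually does; it assumes GNU date on the server, and the sale date and messages are hypothetical.

```bash
#!/bin/sh
# Expire a one-day coupon two calendar days (UTC) after the sale date.
# That cutoff lands after local midnight on the sale day in every timezone,
# and gives late shoppers part of the following day as well.
SALE_DATE="2012-09-03"                               # hypothetical Labor Day sale
CUTOFF=$(date -u -d "$SALE_DATE + 2 days" +%s)       # seconds since epoch, UTC
NOW=$(date -u +%s)

if [ "$NOW" -lt "$CUTOFF" ]; then
    echo "coupon accepted"
else
    echo "coupon expired"
fi
```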
Customers walking away
Making stupid moves like not activating coupons, deactivating them early or making your customers guess which merchandise a coupon applies to is just bad practice. You probably think I’m talking about small mom-and-pop shops here. No, these are well-known, well-respected companies making these most basic of mistakes, like Jockey, Tommy Bahama and Zagg.
Nothing is more frustrating than filling up your cart with merchandise expecting to use a coupon code, only to find that it doesn’t work. Or, worse, not being able to find the merchandise the sale or coupon applies to. In these cases, I empty the cart, close the browser window and delete the email. If a company does this more than once, I remove myself from its email list, as it’s quite clear the company does not have its act together. Which, if you think about it, is completely odd. These are retailers in business to make money. If you offer a sale that uses a coupon code and that code doesn’t work, do you really think people are going to pay full price anyway? No. Selling your merchandise is your bread and butter, and if you want people to buy your stuff, then your email ads need to reflect the reality of your site. If they don’t, then you have even more serious issues on your hands, not the least of which might be considered fraud.
Amazon Better?
I just don’t understand this practice. This is why Amazon is kicking butt. With Prime, you get 2-day shipping included and the best price without hassling with coupon codes. Sure, you might be able to find the item slightly cheaper at some mom-and-pop shop. But the hassle of setting up a new account and dealing with yet more email from a retailer that can’t get it right outweighs the few pennies of savings you might get from that mom-and-pop shop. So, I always find myself back at Amazon, at least for hassle-free purchasing. I don’t want to deal with coupon codes that don’t work, sites that don’t specify what’s on sale or other silly, stupid problems like these.
For those sites that do this, fix your sites or lose the sale and be trampled by Amazon. It’s quite simple.
Patent Wars: When IP protection becomes anti-competitive
So, who wins when companies like Apple and Samsung battle over intellectual property? No one. Here’s why.
Apple doesn’t win
Apple thinks they will win because they believe this action will block a rival product on the grounds that they invented it first. In fact, it’s not that they ‘invented’ it first, it’s that they patented it first. Whoever gets to the patent office first gets exclusivity. That’s how patent law works. However, Apple won’t win, because of the negative publicity backlash now unfolding around the Apple brand. The backlash against Apple is already beginning, and it may end up becoming Apple’s downfall.
Seriously, are we to believe that there is any possibility of confusion between a Samsung device running Android and an Apple device running iOS? The operating systems aren’t even remotely similar. The sole reason to prevent another company from putting something on the market is to avoid brand or product confusion. I hardly think that anyone would confuse a Samsung Galaxy device clearly labeled with the Samsung brand with an Apple device clearly labeled with the Apple brand. Heck, the Galaxy devices don’t even resemble the iPhone now.
Clerk: Why are you returning this device today?
Consumer: Oh, I’m bringing this Samsung back because I thought it was an iPhone.
I don’t think so. This is not a likely scenario at all. I can’t imagine any consumer walking into a retailer and confusing a Galaxy S with an iPhone. So, why is Apple so adamant that this device is a threat to their survival? In fact, if anything is a threat to Apple’s survival, it’s Apple. Playing these legal games is the best way to make consumers aware of, and interested in, the exact devices Apple hopes to keep off store shelves. If Apple had left well enough alone, these devices would have fallen into obscurity on their own and the iPhone would still reign supreme. Calling undue attention to another device, in just the way Apple is doing, is ripe to backfire on Apple. And, backfire it appears to be doing. Way to go, Apple.
Samsung doesn’t win
I’m not going to cheer for Samsung here. Are they a victim? Not really. They’re a large corporation that’s out to make a buck on a design that’s far too similar to one someone else created. I won’t say that Apple is in the right here, but Samsung is also not in the right for doing what they did. I personally don’t like Samsung devices. They’re too unreliable and don’t last. I’ve bought many Samsung devices and they just don’t hold up; the quality is too low for the price they charge. Making quality products is a whole separate issue from producing a product that cashes in on a competitor’s look. Samsung, at least have the decency to hire designers who produce original-looking device designs. It’s really not that hard. There are plenty of good industrial designers who could produce a high-quality, unique case design that rivals Apple’s designs without looking remotely like an Apple product. More than that, though, why not make products that actually last?
Consumers don’t win
Getting injunctions to prevent products from hitting store shelves is tantamount to legalized anti-competitive practice. Legalized because the courts agree and, further, issue injunctions preventing these devices from reaching the shelves or being sold within the US. This hurts the consumer because now there is less choice. Apple’s thinking is that with less choice comes more likelihood that the consumer will choose Apple instead. Unfortunately, Apple didn’t take into account the PR nightmare that’s unfolding here. Apple, don’t underestimate the consumer’s intelligence. Consumers understand that Apple is taking legalized anti-competitive measures to try to win the consumer choice war. It is, however, the consumer’s choice as to what phone to buy and use. It is not Apple’s choice. Companies, when they get to a certain size and arrogance, tend to forget or choose to ignore consumer choice. This is capitalism and consumers have freedom of choice.
Consumers will vote with their wallets in the end, and that will likely be to Apple’s detriment in the long haul. Instead, Apple needs to drop this lawsuit now and let these Samsung devices onto the market. Let the devices succeed or fail on their own merits. The consumers will decide what they want to use. Since there is no real possibility that consumers could mistake a Galaxy S Android phone for an iOS-based iPhone, there is really no damage done here. It’s only perceived damage.
The real damage being done today, that Apple is doing to itself, is the public relations debacle they face with consumer sentiment. Consumer sentiment is real and it is tangible and it can make or break a company. The longer these IP issues drag on and the more devices they try to block, the more people will pull away from Apple and leave the company, once again, high and dry.
Apple’s future uncertain
Apple needs to stop, look and listen. They need to make better, faster and more useful devices instead of pulling out the legal team to fight a losing battle. Keep the innovation going. Forget the old wars and move on. Heck, the whole thing started because Samsung made a phone that resembled the iPhone 3 case style, and Apple doesn’t even sell that case style anymore. The Galaxy Tab looks nothing like an iPad either. So, the whole ‘it looks like an iOS device’ issue is now moot. It’s just being dragged on because of Jobs’ complete hatred of Android.
Unfortunately for Apple, Android is here to stay and it’s not going away anytime soon. Locking out Samsung does not in any way lock out LG or HTC or any other device that runs Android. Instead, Apple needs to focus on innovation with iOS and its new devices and drop this PR nightmare that’s now unfolding in the consumer space. If Apple wants to drive a wedge between the consumer and the company, Apple’s current legal strategy is perfect. If Apple wants to produce high-quality, easy-to-use devices, that goal has nothing to do with blocking the sale of similar devices through legal channels.
Apple is now officially full of sour grapes.
How not to run a business (Part 5) — Meeting Edition
In this edition of ‘How not to run a business’, the topic is meetings. Do they help or hurt your business? Let’s explore. I’ll start this one out with a ‘Do’.
Do set up meetings between sales staff and prospective clients
Sales meetings are largely out of the scope of this article. They are the only truly critical and needed business meetings, and they are part of a salesperson’s job. So, for sales meetings, bring in anyone who is needed to ensure a deal is closed. If that means bringing in technical staff, then do it. If that means bringing in the CEO, do it. Of course, the level of personnel involvement also depends on the size of the deal. If it’s a $10 a month deal, it’s probably not worth involving everyone in the company. If it’s a $300,000 a year contract, then by all means hold meetings with whomever it takes to close the deal. Sales meetings are the only type of meeting where some of the rules below do not apply, so keep this in mind when reading through this article. The only other piece of advice I will add that’s outside the scope of this article: don’t oversell. Your sales team is there to help close deals you can actually support. They should never close a deal based on something that doesn’t presently exist as a product. Selling vapor products is a huge corporate no-no.
Don’t create a meeting based on personal opinion
Meetings are about communication transfer, not about personal opinions. Yes, we all have opinions, but business meetings are not the platform to express them. You can do that in email or by stopping by someone’s desk. Opinions impart no useful information about an objective. Getting the job done and deciding who handles specific pieces of that job: that’s a valid reason to call a meeting. Discussing why you don’t like something about the business is opinion and irrelevant to the job at hand. If you have an opinion that leads to a fundamental design change that solves a problem, then by all means call a meeting about the design change, not about your opinion.
Don’t expect productivity from employees while in meetings
Meetings quite simply halt employee productivity. When an employee is away from his or her desk, sitting in a conference room listening to someone discuss something irrelevant to the job at hand, that is quite simply lost productivity. As a manager or business owner, you hire your employees to be at their desks doing the job you hired them to do. If they are continually required to attend meetings, they cannot be at their desks doing that job. This means that every minute the employee spends in a meeting is a minute you paid for that employee to not do their job and not be productive. Meetings often solve nothing, which leads to completely lost productivity.
Don’t expect your employees to make up for productivity lost while in meetings
If an employee spends half or more of their day attending meetings, don’t expect that employee to put in overtime or spend after-hours time making up for the productivity lost in those meetings. This is a completely unfair work-life balance request. You have asked them to sacrifice their personal time (either on or off the clock) to make up for time spent in meetings on the clock. This is not a fair trade and should not be expected. If you expect this, you will eventually lose the talent it took you so long to find and hire.
Don’t call meetings with people who do not need to be there
Invite only the absolute minimum number of people you need to any meeting. Everyone else can learn from someone else. So, if a manager has 10 staff, only bring 1 or 2 of them into the meeting and leave the rest at their desks working. Don’t invite all 10 simply because you want to have a staff meeting. You can rotate your staff through the staff meetings weekly so that each of them participates at some point, but they don’t all need to be there every single time. Alternatively, sit with your staff at their desks one at a time and spend 5 minutes or so catching up on expected completion times for projects or other deadline work.
Don’t hold hour long meetings
Meetings should be as brief as possible. Fifteen (15) minutes is long enough to impart most necessary information and short enough to prevent the meeting from degrading into a pissing match between several people or drifting into unrelated discussions. At the same time, it keeps employees from being away from their desks and unproductive for long. Productivity is the key to your business success. The more productive your employees are, the more productive your business will be. Lack of productivity can be directly attributed to useless meetings, among other time wasters.
Don’t hold (or allow your staff to hold) useless meetings
What exactly is a useless meeting?
- Meetings that rehash existing topics and add no new information.
- Meetings that are simply platforms for employees to express opinions.
- Meetings that discuss extremely distant possible future projects without knowing any exact information.
- Holding excessive numbers of meetings in a single day (leads to meeting overload).
- Meetings that are overly long and overly verbose.
- Meetings that degrade into unrelated topics.
- Meetings that end up with multiple groups dividing and talking at once.
Meetings need to be as long as necessary to explain a given topic and short enough to limit productivity loss.
Don’t hold meetings every single day of the week
Employees want to work at their desks, not sit in conference rooms doing no work while listening to someone else chatter. You hired your employees to do a job; holding too many meetings is wasteful and means you’re paying these employees to sit in meetings rather than do the job you hired them to do.
Don’t allow staff to hold meetings that consume nearly every work hour
When a company gets to a certain size, usually above 100 employees, meetings start becoming excessive. People begin scheduling meetings to discuss anything and everything. I’ve personally been pulled into meetings that consumed every single hour of my work day including, no surprise, the lunch hour. Granted, free food was supplied, but that doesn’t make up for all the work that didn’t get done. This is meeting overload. At the end of the day, you walk away from work knowing you got nothing done and, at the same time, feeling like the meetings accomplished nothing. It was a completely unproductive day, but my employer paid me nonetheless. Then the employee realizes they have about three tasks due the day after that stretch of meetings. Meetings should not pull in staff who have critical deadlines the next day.
Don’t hold meetings during lunch hour without supplying lunch
If you plan to hold a meeting that spans through the lunch hour, then supply lunches to your staff. Don’t expect them to take a late lunch or skip their lunch as they might be tied up getting other work done and have no time to take a lunch after that meeting. This is both unfair to the employee and can get your business into legal hot water if any employees file a grievance. If at all possible, let your employees leave for lunch and reconvene the meeting after lunch is over or, alternatively, expect to order lunches for meetings that span the lunch hour.
Don’t let your meetings run long
Meetings need to be a predetermined length. Many times, meetings degrade into a pissing match between two or more people over a single thing. Nip that behavior in the bud quickly. Have those employees table the discussion for later or take it out of the room. The rest of the attendees likely don’t need, or want, to hear it. Additionally, if you are unable to impart all of the information you expected to and the meeting is at an end, schedule a followup meeting for later, but not the next day. Let people digest what they’ve heard. By the time you reconvene, there may be new information that would have invalidated your extra information (or even the entire meeting). If you can cut your meeting short, then do so.
Don’t feel obligated to use all of the reserved meeting time
If you have reserved a one-hour slot but are done with what you need to say in 10 minutes, leave the conference room. Do not continue to hold the meeting simply because you have the room reserved. Let your employees get back to their desks as fast as possible. You hired your staff to do a specific job; let them do it. Remember that keeping people in extended meetings takes employees away from their desks.
Don’t schedule excessively long meetings
Schedule only the maximum amount of time you need to impart the information required. Don’t write a novel-sized agenda, set up a 4-hour meeting and expect many attendees. Business meetings need to remain short. The shorter the better. Fifteen minutes is the optimal time: long enough to get done what you need, short enough to get people back to their desks and productive.
Don’t expect great things out of meetings
Meetings are a mixed bag. Sometimes they work, sometimes they fail. I’ve been to many meetings where nothing was accomplished. That is, we were no better off after attending the meeting than before we joined it. If you suspect (or know) your meeting will not bear fruit, then bring in the minimum number of people. If you didn’t realize your meeting would be fruitless, then you need to understand why the meeting failed before setting up another meeting on that same topic. Don’t continue to press a failed topic if it’s not going anywhere. Drop the topic and move on.
Don’t schedule a meeting between two people
Meetings are intended for 3 or more people unless the meeting happens to be an interview. Two-person conference room meetings are a waste. Send email, call the person or stop by their desk to ask your questions. Don’t go through the motions of reserving a conference room for two people.
Don’t expect a meeting to produce as much as someone working at their desk
Employees know their jobs. They know what they are doing. Or, at least they should know what they are doing as that’s why you hired them. Meetings are generally designed to discuss unknowns (how do we do this, how can we fix that, what happened with this, where are we with regards to blah, etc). Some of these questions can be asked one-on-one to the individuals involved and do not necessarily need 20 people together to ask this single question and get this single answer. Taking a number of people away from their desks for extended periods means that the employees are getting further behind in their work for topics that could be better handled in other ways. So, those employees now have to make up that one or possibly several hours of dead time for work that they were unable to do while sitting in a conference room. So, pull in only the people who absolutely must be there. Don’t bring in people who have no participation in that discussion.
Don’t use a meeting as a public whipping post
Meetings are, and should be, about business topics. That is, topics that further the business along. Meetings are not intended to be used as subterfuge to get people into a room for group tongue lashings. If you need to chastise an individual or group for failing to perform, do this one-on-one with each individual. If you need to have a group discussion about a failure, then produce an improvement plan. That is, design a ‘Here’s how we can do better next time’ approach. Chastising people without a way to correct the issues is fruitless. This type of meeting only serves to demoralize the team without producing anything. Again, if you need this type of meeting, then bring something positive to it by discussing how to correct the issue and giving improvement points for each team member to work through. Putting together a fail meeting solely to chastise employees can open your business up to hostile-workplace legal issues, so be careful with these types of meetings.
Do encourage other communication methods for meetings
With GoToMeeting, Skype, Hangouts, IM and SMS you can easily talk to people in many other ways than by holding a physical gathering in a room. Find alternative methods that keep people at their desks. They can still attend and participate in the meeting when they are needed; otherwise, they can be productive at their desks. Taking your staff away from their desks for conference room gatherings is the fastest way to lose the productivity you are paying your staff to produce. Keep people at their desks rather than sitting in a conference room listening but producing nothing.
← Part 4 | Chapter Index | Part 6 →
Tingle or vibration from the back of the iPad while charging
You may or may not have noticed, but if you run your hand along the back of the iPad (or even an iPod touch) while it’s charging, especially when using a wall power adapter, you may feel a vibration or tingling sensation on your hand. You might be wondering what it is. This article is short and sweet, so let’s explore.
Charging your iPad
When you plug your iPad into a wall outlet (or any charger for that matter), you would think the current should go into the device alone. Well, it doesn’t. A small amount of voltage is present on the metal case by design. Here is a comment on Apple’s forum quoted from Apple’s support team regarding this issue:
There is measurable AC voltage across the external metal parts when an iPad charges. The measured voltage is within the SELV (Separated Extra-Low Voltage) limit, which means that the iPad is safe to touch. Additionally, the touch current is within the safety limit according to UL/IEC 60950 (Safety of Information Technology Equipment).[1]
So, there you have it. This is by design and nothing to be concerned about. That said, what Apple could have done is route that stray current on the case surface to an LED to soak it up so you feel nothing. Of course, that means the iPad would need an external LED, but it wouldn’t be a bad thing to be able to see that the device is charging without having to turn it on.
Checking your iPad with Apple
Note, if you get anything more than a mild sensation from the back of the iPad, then you should take it back to Apple. The current you feel from the back should be minuscule. If you see any sparks or feel anything more than a slight vibration, your iPad might be electrically defective. If you’re unsure, take it to Apple and have them check it out.
So, there you go.
[1] Apple’s Discussion Forum Commentary on this issue
Windows 8 PC: No Linux?
According to the rumor mill, Windows 8 PC systems will ship with a BIOS replacement using UEFI (the extension of the EFI standard). This new boot system apparently includes a secure booting mechanism that, some say, will be locked to Windows 8 alone. Meanwhile, the Linux distributions are not entirely sure how the secure boot systems will be implemented. Are Linux distributions being prematurely alarmist? Let’s explore.
What does this mean?
For Windows 8 users, probably not much. Purchasing a new PC will be business as usual. If Microsoft gets its way and UEFI secure boot cannot be disabled or reset, it means you can’t load another operating system on the hardware. Think of locked and closed phones and you’ll get the idea. For Linux, that would mean the end of Linux on PCs (at least unless Linux distributions jump through some secure booting hoops). Ok, so that’s the grim view of this. However, for Linux users, there will likely be other options. That is, buying a PC that isn’t locked. Or, alternatively, resetting the PC back to its factory default state of being unlocked (which the UEFI should support).
On the other hand, dual booting may no longer be an option with secure boot enabled. That means it may not be possible to install both Windows and Linux onto the system and choose one or the other at boot time. Further, we do not know whether Windows 8 requires UEFI secure boot to boot or whether it can be disabled. So far it appears to be required, but if you buy a boxed retail edition of Windows 8 (which is not yet available), it may be possible to disable secure boot. It may be that some of the released-to-manufacturing (OEM) editions require secure boot while other editions do not.
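As an aside, and purely as a sketch under assumptions (it presumes a Linux kernel that exposes EFI variables under /sys/firmware/efi/efivars, which not every setup does), you can usually tell from a running Linux system whether the firmware is enforcing secure boot by reading the standard SecureBoot variable:

```bash
#!/bin/sh
# The GUID below is the standard EFI global-variable GUID; the variable's
# last byte is 1 when secure boot is enforced and 0 when it is not.
VAR=/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c

if [ -e "$VAR" ]; then
    # Dump the bytes, flatten to one line, and take the last (data) byte.
    state=$(od -An -tu1 "$VAR" | tr -s ' \n' ' ' | awk '{print $NF}')
    if [ "$state" = "1" ]; then
        echo "Secure boot: enabled"
    else
        echo "Secure boot: disabled"
    fi
else
    echo "No SecureBoot variable found (legacy BIOS boot, or EFI variables not exposed)"
fi
```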
PC Manufacturers and Windows 8
The real question here, though, is what’s driving UEFI secure booting? Is it Windows? Is it the PC manufacturers? Is it a consortium? I’m not exactly sure. Whatever the impetus is to move in this direction may lead Microsoft back down the antitrust path once again. Excluding all other operating systems from PC hardware is a dangerous precedent as this has not been attempted on this hardware before. Yes, with phones, iPads and other ‘closed’ devices, we accept this. On PC hardware, we have not accepted this ‘closed’ nature because it has never been closed. So, this is a dangerous game Microsoft is playing, once again.
Microsoft anti-trust suit renewed?
Microsoft should tread on this ground carefully. Asking PC manufacturers to lock PCs to Windows 8 exclusively is a lawsuit waiting to happen. It’s just a matter of time before yet another class action lawsuit begins and, ultimately, turns into a DOJ antitrust suit. You would think that Microsoft would have learned its lesson from its previous behavior in the PC marketplace. There is no reason that Windows needs to lock down the hardware in this way.
If every PC manufacturer begins producing PCs that preclude the loading of Linux or other UNIX distributions, this treads entirely too close to antitrust territory for Microsoft yet again. Excluding Linux from running on the majority of PCs is definitely unwanted behavior. It rolls us back to the days when Microsoft locked Windows onto the hardware to the exclusion of every other operating system on the market. Except that last time, nothing stopped you from wiping the PC and loading Linux. You just had to pay the Microsoft tax to do it; at that time, you couldn’t even buy a PC without Windows. This time, according to reports, you cannot even load Linux with secure booting locked to Windows 8. In fact, you can’t load Windows 7 or Windows XP, either. Using UEFI secure boot on Windows 8 PCs treads within millimeters of the same collusive behavior that Microsoft was called on many years back, and ultimately went to court over and lost much money on.
Microsoft needs to listen and tread carefully
Tread carefully, Microsoft. Locking PCs to running only Windows 8 is as close as you can get to the antitrust suits you thought you were done with. Unless PC manufacturers provide ways of resetting and turning off the UEFI secure boot system to allow non-secure operating systems, Microsoft will once again be seen as colluding with PC manufacturers to exclude all other operating systems from UEFI secure boot PCs. That is about as antitrust as you can get.
I’d fully expect to see Microsoft (and possibly some PC makers) in DOJ court over antitrust issues. It’s not a matter of if, it’s a matter of when. I predict that by early 2014 another antitrust suit will have materialized, assuming these predictions about UEFI secure boot come true. On the other hand, this issue is easily mitigated if UEFI PC makers allow users to disable secure boot, fall back to a BIOS-style boot and load Linux. So, the antitrust suits will hinge entirely on how flexibly the PC manufacturers set up UEFI secure booting. If both Microsoft and the PC makers have been smart about this change, UEFI secure booting can be disabled. If not, we know the legal outcome.
Virtualization
For Windows 8, it’s likely that we’ll see more people moving to Linux as their base OS with Windows 8 virtualized (except for gamers, where direct hardware access is required). If Windows 8 is this locked down, then it’s better to let it lock down a VirtualBox virtual machine than the physical hardware.
Death Knell for Windows?
Note that should the UEFI secure boot system be as closed as predicted, this may be the final death knell for Windows and, ultimately, Microsoft. The danger is in the UEFI secure boot system itself. UEFI is new and untested in the mass market. This means that not only is Windows 8 new (and we know how that goes bugwise), we now have an entirely new, untested boot system in UEFI secure boot. If anything goes wrong in this secure booting system, Windows 8 simply won’t boot. And believe me, I predict there will be many failures in the secure booting system itself. The reason: we are still relying on mechanical hard drives that are highly prone to partial failures. Even though solid state drives are better, they can also go bad. So, whatever data the secure boot system relies on (i.e. decryption keys) will likely be stored somewhere on the hard drive. If that sector of the hard drive fails, no more boot. Worse, if the secure booting system requires an encrypted hard drive, that means no access to the data on the drive after a failure, ever.
I predict there will be many failures related to this new UEFI secure boot that will lead to dead PCs. And not just dead PCs, but PCs that offer no access to the data on their hard drives. People will lose everything on their computers.
As people realize how fragile local storage is on such an extremely closed system, they will move toward cloud service devices to prevent data loss. Once they realize the benefits of cloud storage, the appeal of storing things on local hard drives, and most of the reasons to use Windows 8, will be lost. Gamers may be able to keep the Windows market alive a bit longer; on the other hand, this is why a gaming company like Valve Software is hedging its bets and releasing Linux versions of its games. For non-gamers, desktop and notebook PCs running Windows will be less and less needed and used. In fact, I contend this is already happening. Tablets and other cloud storage devices are already becoming the norm. Perhaps not so much in the corporate world as yet, but once cloud-based Office suites get better, all bets are off. So, combined with the already trending move towards limited-storage cloud devices, closing down PC systems in this way is, at best, one more nail in Windows’ coffin. At worst, Redmond is playing Taps for Windows.
Closing down the PC market in this way is not the answer. Microsoft has stated it wants to be more innovative, as Steve Ballmer recently proclaimed. Yet moves like this prove that Microsoft has clearly not changed and has no innovation left. Innovation doesn’t have to, and shouldn’t, lead to closed PC systems and antitrust lawsuits.
How not to run a business (Part 4) — Performance Evaluations
Do employee performance evaluations help or hurt your business? Are evaluations even necessary? The HR team may say, “Yes!”. But that’s mostly because they have a vested interest in keeping their jobs. If evaluations are performed incorrectly (and the majority of the time they are), they can hurt your company and your relationship with your employees. Employee evaluations are also always negative experiences, so even this aspect can hurt your relationship with your employees. Let’s explore why.
Don’t let your Human Resources staff design the employee evaluations
If you absolutely must create and implement the tired ‘once-a-year’ evaluation system, then at least make sure you do it correctly. That is, assuming there is a ‘correct’ way to do this tired old thing. Employee evaluations should be designed by someone who is knowledgeable about writing evaluations and who has written them in the past. Using a service company like SuccessFactors or ADP to deploy your evaluations is fine, but not required. Someone must still be tasked with designing the questions asked of the employee during the evaluation process.
Make sure your designer fully understands what is being asked of employees during the process, how it pertains to your business and most importantly, that the questions pertain to job performance and not to nebulous concepts like ‘core values’. Make sure the evaluation asks questions related to an employee’s actual job performance. The questions should also be relevant to all job roles within the company. Evaluations that target the sales teams with questions surrounding ‘customer interactions’ won’t apply to technical roles that have no customer facing aspects. Either create unique evaluation question options that apply to each department, or keep the questions generic enough that all job roles fit the questions.
Don’t ‘stack’ your evaluations
By ‘stacking’, I mean mandating that your managers give out a certain number of excellent, good and poor reviews (i.e., ‘stacking’ the reviews towards certain employees, a form of favoritism). If your managers happen to have very good teams, stacking means that one or more of those individuals will end up with poor performance reviews even though they performed well. Stacking is the best way to lose good employee talent.
Your staff has spent a lot of time and effort trying to locate the right employees for each job. With one stale (and lopsided) internal process, you may effectively, if inadvertently, walk some of those employees to the door. Employees won’t stay where they feel they are not being treated fairly even while putting out high quality work. If a good employee is targeted with a bad review, don’t underestimate their ability to notice your stacked evaluation system and write about it on places like Glassdoor. Keep in mind that this is especially important for technical roles where talent can be extremely hard to find. Note, there are underperformers, but a once-a-year evaluation process is not likely to find many of them. Only an ongoing, regular evaluation process will find underperformers. Even more than this, only the manager can find underperformers, via weekly one-on-one sessions going through each employee’s work output.
Let your evaluation chips fall where they may. If a team ends up with all excellent reviews, so be it. Don’t try to manipulate some of them down because you feel the need to limit cost-of-living raises. This comes back to paying your employees what they are worth. Note, this assumes that reviews are tied to merit increases. Don’t assume that employees won’t know the evaluations are stacked when you stack them. That’s not only a condescending view, it vastly underestimates the intelligence of your workforce. If you’re thinking of decoupling evaluations from merit increases, see the next Don’t.
Don’t decouple evaluations from some form of merit increase
If you decouple employee evaluations from merit increases, you decouple the reason for employees to do evaluations. The question then becomes, “What’s the point in doing this?” If there’s any question surrounding the employee evaluation process, then your employees will not be motivated to participate. This also means that your evaluations will be worthless in the end. And the employees will know this, too. Tying evaluations to merit or cost-of-living increases ensures that all employees are motivated to participate properly in the process. However, keeping evaluations tied to merit also means the process can lead to ‘stacking’. Avoid ‘stacking’ like the plague. If you really want to keep your employees on board, then let the evaluations remain truthful.
Additionally, when you decouple merit increases from the evaluation process, why have evaluations at all? Managers should be regularly evaluating their employees for work output and effectiveness. If they aren’t, then you have a bigger manager problem on your hands. If there’s no real reason to do evaluations, expect some employees to opt out of the review process. If they choose to opt out, let them. Forcing them to participate only leads to forced evaluations, which may ultimately have them leave the company anyway and provide you with nothing of value.
Don’t require employees to rate their own performance numerically
Numerical or ‘star’ ratings are worthless. Numbers say nothing about the employee’s work ethic or performance. They are a failed attempt at trying to ‘rate’ an individual. The trouble is, if you artificially cap the scale by saying ‘No one is a 5’ on a scale of 1 to 5, then you have effectively made the scale 1 to 4. In that case, make the scale 1 to 4 and not 1 to 5. If you are using a scale of 1 to 5, then use the entire scale. If a person is a 5, then they are a 5. They are not a 4. This is similar to stacking. Do not artificially limit any part of the evaluation to make high performers appear lower than they are. This is counterproductive and unnecessary and makes employees feel under-appreciated. If that’s the intent, then it’s a job well done. However, it may lead to employee loss. Again, you spent all that time recruiting the talent; don’t squander that time, effort and money. Rating employees and artificially capping the scale is yet another visible employee negative.
Don’t do employee performance evaluations simply because you can
Employee evaluations are important for the manager and the employee to discuss performance issues and where performance can be improved. That’s the point of this process. It is not about anything other than getting the manager and the employee on the same page. Running this through multiple managers and multiple staff all the way up the chain to the CEO is pointless. Not only is it a severe time waster for those above the employee’s manager, it’s also a privacy issue that, for some reason, upper management and the human resources department alike think they should be privy to. In reality, any performance issues are between the manager and the employee. Ultimately, because of upper management’s prying eyes, any actual performance issues are not likely to appear on an evaluation, because they might turn into a hostile workplace or HR violation issue. Most evaluations are highly sanitized by both the employee and the manager. Any real work issues are discussed in private between the manager and the employee. They are never included on HR-based performance evaluations.
For example, an employee with poor hygiene and who is causing issues around the office could cause some severe HR legal issues if this information is placed onto a written employee evaluation. Yet, it is a performance issue. How do you document this without causing potential legal issues? This is the problem with once-a-year employee evaluations. Employee evaluations tend not to document the types of issues which result in legal issues for a company. These types of issues are sanitized from evaluations for this reason. This also means that company wide evaluations are by their very nature not completely accurate. If they’re not accurate, why do them?
Let the managers handle all performance issues internally. If the process needs documentation, then have the manager do so, but do so privately. Airing the dirty laundry for all to see is ripe for hostile-workplace issues and could document potential legal issues that could arise should the employee leave as a result of a documented performance problem. Note that anything written and placed into the employee file can become legal fodder should employee legal issues arise. If the evaluation process documents an illegal activity within the company, then your business is at risk. Leading to…
Don’t sanitize employee evaluations after-the-fact
If there is something written on an employee evaluation that puts your business at legal risk, don’t sanitize the evaluation or destroy it after the fact. This will make things far worse for your business. Instead, leave it as it is. If it’s a legal risk, you can defend yourself in court even if it’s in the document. Removing it from the document, or removing the entire document, is far more problematic legally than leaving it there. Note, if your employee has to write any part of the evaluation, they can make a copy for themselves. If an employee unknowingly describes an illegal business activity on the evaluation, your business is at risk whether or not someone in your organization deletes or sanitizes it. If you are concerned that some illegal activity could appear on an employee evaluation, it may be smarter not to do evaluations at all. An employee may keep their own copy for their records; you can’t easily expunge an employee’s personal records.
Don’t expect much productivity out of your employees during evaluation week
Employee evaluations kill at least a week of productive time for every employee in the company. Instead of focusing on the job at hand, employees are focusing on paperwork that is not related to their job. Expect to lose about a week of productivity for the paperwork portion alone. If your employees’ work time is important to you, you need to understand that during the evaluation process far less output than normal will get done. This means you should choose a slow time of the year to perform evaluations. The more you ask of the employee on the evaluation forms, the less actual work they get done. Be careful with this process as it can lead to a lot of lost productivity. Note, there will also be a week or two of aftermath from the evaluation process where employees will reflect, brood and be distracted as a result of the outcome of their evaluation with their manager. Without any upside to doing the evaluation, this process simply leaves that bad taste to fester. Which leads to…
Don’t expect sunshine and rainbows
Employee evaluations are by their very nature negative job experiences. Always. Evaluations never give glowing job performance reviews. They are always there to show all of the flaws and weaknesses of the employee and make sure they feel like crap for at least a week or two following completion of the evaluation. This can negatively impact productivity after the evaluation is complete. You need to understand that this process is by its very nature a negative job experience. It is never a positive experience. The only positive is a merit increase, if it comes. For an employee suffering through another performance evaluation, the upside is hopefully a higher paycheck. If you decouple merit increases (as stated above), the employee evaluation process becomes a completely negative experience without any upside benefit to the employee. In fact, there is very little, if any, upside benefit to the company, either. The process then becomes an exercise in futility. If you really want to make your employees feel like crap for several weeks, this is the way to do it.
Think twice before implementing an evaluation system solely because you think it’s necessary. If employees feel that their evaluation is unfair (many will), expect a number of people to walk away from the company. Expect those who stay to underperform for at least a week following any evaluation. Expect some employees to brood and eventually leave months after their review. You will need to accept some employee departures as a result. Other employees will realize the exercise in futility and seek a job elsewhere. Some may realize the unfairness of the ‘stacking’ and try to find an employer that is more fair about this process. Make sure you are well aware of the full ramifications of an evaluation system before you implement it.
Make sure employees get some kind of positive benefit after the evaluation is complete (preferably a merit increase). If you’re planning to make your employees suffer through this negative job experience, then you need to be prepared to offer some sunshine and rainbows to your employees at the end to make the process go down easier. As Mary Poppins once said, “A spoonful of sugar helps the medicine go down”. You need to find that spoonful of sugar… and I don’t mean literally, either (don’t be funny and put a sugar cube on their desks).
Note that the evaluation process should never get in the way of actual work. Yet, it does. It interjects itself between the manager and the employee in a way that can drive a wedge between the employee and the company. A wedge that might otherwise not be there were sleeping dogs left lying, as it were. Employee evaluations can open a Pandora’s box with some individuals, so be careful with this process.
Do think up a better way than the traditional performance review system
If you can come up with a new, improved performance system that works better than the old, stale, negative system, then by all means implement it in your company. Such a system would do wonders for making this process much smoother. Unfortunately, I do not believe such a thing exists. In reality, having monthly one-on-ones between the employee and manager should suffice as an ongoing performance review system. It’s far less negative than the once-a-year evaluation, which is mostly pointless. Do away with the once-a-year evaluation system and implement an ongoing manager-and-employee relationship-building system that keeps the employee far more on track than a once-a-year system that really benefits no one.
Employee evaluations can both help and hurt your company at the same time. Evaluations can open up problems that have nothing to do with an employee’s ability to perform their job properly and, at the same time, they always end up as a negative experience for all involved. If you really enjoy running your employees through the wringer once a year, the stale old evaluation process is the way to do it. Worse, though, is that because it’s a once-a-year event, it doesn’t really serve much purpose unless it is tied to a merit increase. If it’s not tied to a merit increase, it is a fruitless exercise. This is part of the reason many companies no longer do once-a-year evaluations.
Basically, do not feel compelled to run evaluations simply because you think you need them. Think twice before implementing these tired vehicles when they don’t really benefit anyone. If you must set up a performance evaluation system, then conduct it once a month between the manager and the employee. Let them discuss active projects, what’s going on today, and current performance issues. An ongoing, regular, relevant performance evaluation system is much more conducive to good job performance and ends up as a much more relaxed and positive experience. Out with the old and in with the new.
Don’t run an evaluation for an employee with 3 or more managers in 6 months
This one is pretty self-explanatory. However, it should be said that if an employee gets a new manager 2 months before the evaluation process is set to begin, the employee has no hope of a fair evaluation. If the employee’s old manager is still part of the organization, then enlist that manager to complete that employee’s evaluation. If the old manager is no longer part of the organization, then skip this employee’s evaluation.
An employee cannot be properly evaluated by a new manager with two or fewer months of service with that employee. Employees in this circumstance should also have the ability to opt out of the evaluation process entirely. If they can’t get a fair, impartial evaluation of 6 to 12 months of that year’s service from their current manager, then they shouldn’t be obligated to submit an evaluation. I’ll also point out that a change in the management team is not the employee’s responsibility. Penalizing an employee’s yearly performance review because of management changes is not the fault of the employee; it’s the fault of your management team.
Unless there has been at least one manager who has managed that employee for a minimum of 6 continuous months of the year, evaluations shouldn’t be performed for that employee.
← Part 3 | Chapter Index | Part 5 →
Bluetooth Mouse Pairing: Fix ‘Authentication Error’ in Windows 7
Every once in a while my bluetooth dongle decides to go wacky on me and the mouse won’t work any longer. Sometimes the keyboard also. Usually, I can unplug the dongle and replug it. This generally recovers both the mouse and the keyboard. Sometimes it requires re-pairing one or both of the devices. Today was a re-pairing day (at least for the mouse). Except, today didn’t go at all smoothly.
Note: Before proceeding with any pairing operation on battery powered devices such as mice or keyboards, always make sure your batteries are fresh. Dead or dying batteries can cause pairing problems simply because the wireless transmitter in the device may not produce a stable enough signal for the receiver. Dead or dying batteries can also be the source of general device connectivity problems.
The Problem
Normally I just go into ‘Devices and Printers’, delete the device and pair it again. This usually works seamlessly. Today, not so much. I delete the Targus mouse from ‘Devices and Printers’ and that part works correctly. I then put the mouse into discovery mode and start the ‘Add a Bluetooth Device’ panel. The panel finds the mouse fine. I select the mouse and click ‘Next’. I then see the next image.
So, this is a reasonably stupid error because it’s a mouse. Mice don’t have authentication errors because they don’t use pairing codes. I have no idea why Windows would even present this. It’s clear that something is completely borked in Windows. And, you know, this is one of the things about Windows I absolutely hate. It gives stupid errors like this without any hope for resolution. Note that clicking the little blue link at the bottom of the window is completely worthless. Clicking that link won’t help you resolve this issue. It leads you to some worthless help page that leaves more questions than answers and only serves to waste time. I digress.
So, now that I’ve received this error, I proceed to Google to find an answer. Well, I didn’t find one. After traversing several forums where people were asking the same question, I found no answers there. Then I searched the registry, thinking Windows had left some garbage behind from the previous pairing. Nope, that search was a waste. So now, I’m basically at the trial and error phase of resolution.
I finally get to Microsoft’s knowledgebase, which is probably where I should have visited first. Unfortunately, even that didn’t help, but I did find that Windows Server doesn’t support Bluetooth devices (not that that’s very helpful for my issue because I’m on Windows 7). What visiting this page at Microsoft did do is give me an idea of how to proceed, based on some images I saw. Not images of what I’m about to show you, though. Just an image of something that triggered a thought about how silly Microsoft is, which led to another thought, and so on, leading to the fix below.
The Fix
So, I go back to trying to pair again. I set the mouse up into pairing mode and then start ‘Add a Bluetooth Device’. Instead, this time I decide to right click the device about to be added:
You’ll need to do this pretty quickly as the device won’t stay in pairing mode for very long. So, click ‘Properties’ and you’ll see the following window:
Now, check the box next to ‘Drivers for keyboard, mice, etc (HID)’ and click ‘OK’. This should immediately pair the device without the ‘Authentication Error’ panel appearing. At least, this fix worked perfectly for my situation. I can’t guarantee it will work with every Bluetooth mouse or every Bluetooth adapter, so your results may vary. It’s definitely worth a try, though.
Note: The differences in Bluetooth drivers may prevent this fix from working across the board. So, you will have to try this and relay your experience of whether or not it works for you.
Note, after I unpaired the mouse and re-paired it after having done the above, I now see the following panel instead of the authentication error panel. This is the correct panel for the mouse. Clicking ‘Pair without using a code’ now works perfectly for this device. I have no idea what caused the other panel to appear above. Note that once Windows gets into that state, it stays there. I’m not sure why Windows would cache an error, but apparently it does. I’m at a complete loss as to why Microsoft would cache anything to do with real-time device connection activities like this! However, the mouse now unpairs and pairs correctly again. Whatever causes this issue, the Windows development team needs to fix it.
These are the stupid little things that make Windows such a hacky, time-wasting experience. It’s these stupid, quirky behaviors that give Microsoft a bad rap and that keep Microsoft perceived as an inept operating system development company. It’s problems like this that make Windows a 1990’s-level computer experience.
And, I’m not just talking about the error itself. I’m talking about the overall experience surrounding the error and the lack of any help in finding an answer. It’s having to resort to searching Google for answers because Microsoft’s knowledgebase has nothing and offers no answers. It’s having to guess, using trial and error, to find an answer. It’s the bad experience and bad taste that this whole episode leaves. Microsoft, get your sh*t together. It’s long past time for Windows to be done with broken, time-wasting experiences like this. If there is a resolution to a problem, then the time is long past due to lead users who see errors like this one to an exact resolution page with step-by-step instructions that work. Clearly, there is a resolution to my issue and I present it here. Why can’t your team do the same?
Seriously, I don't understand why Microsoft relies on sites like mine to help users fix problems that Microsoft cannot be bothered to document properly. Yes, I realize I'm contributing to the problem by writing this article and 'helping' Microsoft out. Note, however, that it's not so much about helping Microsoft as it is about helping users who run into this same stupid experience. The purpose of this article is to show just how stupid the experience is. It's clear that Microsoft has no interest in giving its own users, who PAID for this product, real support and documentation. So, why do we continue to use Windows?
How to format NTFS on MacOS X
This article is designed to show you how to mount and manage NTFS partitions in MacOS X. Note the prerequisites below as it’s not quite as straightforward as one would hope. That is, there is no native MacOS X tool to accomplish this, but it can be done. First things first:
Disclaimer
This article discusses commands that will format, destroy or otherwise wipe data from hard drives. If you are uncomfortable working with commands like these, you shouldn’t attempt to follow this article. This information is provided as-is and all risk is incurred solely by the reader. If you wipe your data accidentally by the use of the information contained in this article, you solely accept all risk. This author accepts no liability for the use or misuse of the commands explored in this article.
Prerequisites
Right up front I’m going to say that to accomplish this task, you must have the following prerequisites set up:
- VirtualBox installed (free)
- Windows 7 (any flavor) installed in VirtualBox (you can probably use Windows XP, but the commands may differ; note that Windows itself is not free)
For reading / writing to NTFS formatted partitions (optional), you will need one of the following:
- For writing to NTFS partitions on MacOS X:
- Tuxera NTFS (not free) or
- ntfs-3g (free)
- For reading from NTFS, MacOS X can natively mount and read from NTFS partitions in read-only mode. This is built into Mac OS X.
If you plan on writing to NTFS partitions, I highly recommend Tuxera over ntfs-3g. Tuxera is stable and I’ve had no troubles with it corrupting NTFS volumes which would require a ‘chkdsk’ operation to fix. On the other hand, ntfs-3g regularly corrupts volumes and will require chkdsk to clean up the volume periodically. Do not override MacOS X’s native NTFS mounter and have it write to volumes (even though it is possible). The MacOS X native NTFS mounter will corrupt disks in write mode. Use Tuxera or ntfs-3g instead.
Why NTFS on Mac OS X?
If you’re like me, I have a Mac at work and Windows at home. Because Mac can mount NTFS, but Windows has no hope of mounting MacOS Journaled filesystems, I opted to use NTFS as my disk carry standard. Note, I use large 1-2TB sized hard drives and NTFS is much more efficient with space allocation than FAT32 for these sized disks. So, this is why I use NTFS as my carry around standard for both Windows and Mac.
How to format a new hard drive with NTFS on Mac OS X
Once you have Windows 7 installed in VirtualBox and working, shut it down for the moment. Note, I will assume that you know how to install Windows 7 in VirtualBox. If not, let me know and I can write a separate article on how to do this.
Now, go to Mac OS X and open a command terminal (/Applications/Utilities/Terminal.app). Connect the disk to your Mac via USB or whatever method you wish the drive to connect. Once you have it connected, you will need to determine which /dev/diskX device it is using. There are several ways of doing this. However, the easiest way is with the ‘diskutil’ command:
$ diskutil list
/dev/disk0
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:      GUID_partition_scheme                     *500.1 GB   disk0
   1:                        EFI                      209.7 MB   disk0s1
   2:                  Apple_HFS Macintosh HD         499.8 GB   disk0s2
/dev/disk1
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:      GUID_partition_scheme                     *2.0 TB     disk1
/dev/disk2
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:     Apple_partition_scheme                     *119.6 MB   disk2
   1:        Apple_partition_map                      32.3 KB    disk2s1
   2:                  Apple_HFS VirtualBox           119.5 MB   disk2s2
Locate the drive that appears to be the size of your new hard drive. If the hard drive is blank (a brand new drive), it shouldn’t show any additional partitions. In my case, I’ve identified that I want to use /dev/disk1. Remember this device file path because you will need it for creating the raw disk vmdk file. Note the nomenclature above: The /dev/disk1 is the device to access the entire drive from sector 0 to the very end. The /dev/diskXsX files access individual partitions created on the device. Make sure you’ve noted the correct /dev/disk here or you could overwrite the wrong drive.
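If you want to double-check that you've picked the right device before doing anything destructive, 'diskutil info' prints the details for a single disk. A minimal example (field names and values here are illustrative and vary slightly between macOS versions):

$ diskutil info disk1
   Device Identifier:  disk1
   Device Node:        /dev/disk1
   Protocol:           USB
   Total Size:         2.0 TB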
Don’t create any partitions with MacOS X in Disk Utility or in diskutil as these won’t be used (or useful) in Windows. In fact, if you create any partitions with Disk Utility, you will need to ‘clean’ the drive in Windows.
Creating a raw disk vmdk for VirtualBox
This next part will create a raw connector between VirtualBox and your physical drive. This will allow Windows to directly access the entire physical /dev/disk1 drive from within VirtualBox Windows. Giving Windows access to the entire drive will let you manage the entire drive from within Windows including creating partitions and formatting them.
To create the connector, you will use the following command in Mac OS X from a terminal shell:
$ vboxmanage internalcommands createrawvmdk \
    -filename "/path/to/VirtualBox VMs/Windows/disk1.vmdk" -rawdisk /dev/disk1
It’s a good idea to create the disk1.vmdk where your Windows VirtualBox VM lives. Note, if vboxmanage isn’t in your PATH, you will need to add it to your PATH to execute this command or, alternatively, specify the exact path to the vboxmanage command. In my case, this is located in /usr/bin/vboxmanage. This command will create a file named disk1.vmdk that will be used inside your Windows VirtualBox machine to access the hard drive. Note that creating the vmdk doesn’t connect the drive to your VirtualBox Windows system. That’s the next step. Make note of the path to disk1.vmdk as you will also need this for the next step.
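If the shell can't find vboxmanage, something like the following usually tracks it down. The application-bundle path shown is typical of VirtualBox installs on MacOS X, but check your own system:

$ which vboxmanage VBoxManage
$ ls /Applications/VirtualBox.app/Contents/MacOS/ | grep -i vboxmanage
$ export PATH="$PATH:/Applications/VirtualBox.app/Contents/MacOS"   # or call it by its full path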
Additional notes: if the drive already has any partitions on it (NTFS or MacOS), you will need to unmount any mounted partitions before Windows can access it and before you can run createrawvmdk with vboxmanage. Check 'df' to see whether any partitions on the drive are mounted. To unmount, either drag the partition(s) to the Trash, use umount /path/to/partition, or use diskutil unmount /path/to/partition. Every partition on the drive in question must be unmounted before Windows or vboxmanage can access it; even one mounted partition will prevent VirtualBox from gaining access to the disk.
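As a quick sketch of that check-and-unmount sequence ('Data' is just a stand-in for whatever your volume is actually called):

$ df -h                              # anything mounted from the disk will show up here
$ diskutil unmountDisk /dev/disk1    # unmounts every partition on the disk in one shot
$ diskutil unmount /Volumes/Data     # or unmount a single volume by its mount point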
Note, if this is a brand new drive, it should be blank and it won’t attempt to mount anything. MacOS may ask you to format it, but just click ‘ignore’. Don’t have MacOS X format the drive. However, if you are re-using a previously used drive and wanting to format over what’s on it, I would suggest you zero the drive (see ‘Zeroing a drive’ below) as the fastest way to clear the drive of partition information.
Hooking up the raw disk vmdk to VirtualBox
Open VirtualBox. In VirtualBox, highlight your Windows virtual machine and click the ‘Settings’ cog at the top.
- Click the Storage icon.
- Click the ‘SATA Controller’
- Click on the ‘Add Hard Disk’ icon (3 disks stacked).
- When the question panel appears, click on 'Choose existing disk'.
- Navigate to the folder where you created ‘disk1.vmdk’, select it and click ‘Open’.
- The disk1.vmdk connector will now appear under SATA Controller
You are ready to launch VirtualBox. Note, if /dev/disk1 isn't owned by your user account, VirtualBox may fail to open the drive and show an error panel. If you see any error panels, check that no partitions are mounted, then check the permissions of /dev/disk1 with ls -l /dev/disk1 and, if necessary, chown $LOGNAME /dev/disk1. The drive must not have any partitions actively mounted, and /dev/disk1 must be owned by your user account on MacOS X. Also make sure the vmdk file you created above is owned by your user account, as you may have needed to become root to run createrawvmdk.
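For example, assuming the vmdk lives at the path used earlier (adjust the paths to your own; the device node's ownership typically resets whenever the disk is re-attached):

$ ls -l /dev/disk1                                           # check who owns the device node
$ sudo chown $LOGNAME /dev/disk1
$ ls -l "/path/to/VirtualBox VMs/Windows/disk1.vmdk"         # the vmdk should be owned by you as well
$ sudo chown $LOGNAME "/path/to/VirtualBox VMs/Windows/disk1.vmdk"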
Launching VirtualBox
Click the ‘Start’ button to start your Windows VirtualBox. Once you’re at the Windows login panel, log into Windows as you normally would. Note, if the hard drive goes to sleep, you may have to wait for it to wake up for Windows to finish loading.
Once inside Windows, do the following:
- Start->All Programs->Accessories->Command Prompt
- Type in ‘diskpart’
- At the DISKPART> prompt, type ‘list disk’ and look for the drive (based on the size of the drive).
- Note, if you have more than one drive that’s the same exact size, you’ll want to be extra careful when changing things as you could overwrite the wrong drive. If this is the case, follow these next steps at your own risk!
DISKPART> list disk
  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           40 GB     0 B
  Disk 1    Online         1863 GB     0 B        *
- In my case, I am using Disk 1. So, type in ‘select disk 1’. It will say ‘Disk 1 is now the selected disk.’
- From here on down, use these commands at your own risk. They are destructive commands and will wipe the drive and data from the drive. If you are uncertain about what’s on the drive or you need to keep a copy, you should stop here and backup the data before proceeding. You have been warned.
- Note, ‘Disk 1’ is coincidentally named the same as /dev/disk1 on the Mac. It may not always follow the same naming scheme on all systems.
- To ensure the drive is fully blank type in ‘clean’ and press enter.
- The clean command will wipe all partitions and volumes from the drive and make the drive ‘blank’.
- From here, you can repartition the drive as necessary.
Creating a partition, formatting and mounting the drive in Windows
- Using diskpart, here are the commands to create one partition using the whole drive, format it NTFS and mount it as G: (see commands below):
DISKPART> select disk 1
Disk 1 is now the selected disk
DISKPART> clean
DiskPart succeeded in cleaning the disk.
DISKPART> create partition primary
DiskPart succeeded in creating the specified partition.
DISKPART> list partition
  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
* Partition 1    Primary           1863 GB  1024 KB
DISKPART> select partition 1
Partition 1 is now the selected partition.
DISKPART> format fs=ntfs label="Data" quick
100 percent completed
DiskPart successfully formatted the volume.
DISKPART> assign letter=g
DiskPart successfully assigned the drive letter or mount point.
DISKPART> exit
Leaving DiskPart...
- The drive is now formatted as NTFS and mounted as G:. You should see the drive in Windows Explorer.
- Note, unless you want to spend hours formatting a 1-2TB sized drive, you should format it as QUICK.
- If you want to validate the drive is good, then you may want to do a full format on the drive. New drives are generally good already, so QUICK is a much better option to get the drive formatted faster.
- If you want to review the drive in Disk Management Console, in the command shell type in diskmgmt.msc
- When the window opens, you should find your Data drive listed as ‘Disk 1’
Note, the reason to use 'diskpart' over the Disk Management Console is that you can't use 'clean' in the Disk Management Console. That command is only available in the diskpart tool, and it's the only way to completely clean the drive of all partitions and make it blank again. This is especially handy if you happen to have previously formatted the drive with the MacOS X Journaled filesystem and there's an EFI partition on the drive. The only way to get rid of a Mac EFI partition is to 'clean' the drive as above.
Annoyances and Caveats
MacOS X always tries to mount recognizable removable (USB) partitions when they become available. So, as soon as you have formatted the drive and have shut down Windows, Mac will likely mount the NTFS drive under /Volumes/Data. You can check this with ‘df’ in Mac terminal or by opening Finder. If you find that it is mounted in Mac, you must unmount it before you can start VirtualBox to use the drive in Windows. If you try to start VirtualBox with a mounted partition in Mac OS X, you will see a red error panel in VirtualBox. Mac and Windows will not share a physical volume. So you must make sure MacOS X has unmounted the volume before you start VirtualBox with the disk1.vmdk physical drive.
Also, the raw vmdk drive is specific to that single hard drive. You will need to go through the steps of creating a new raw vmdk for each new hard drive you want to format in Windows unless you know for certain that each hard drive is truly identical. The reason is that vboxmanage discovers the geometry of the drive and writes it to the vmdk. So, each raw vmdk is tailored to each drive’s size and geometry. It is recommended that you not try to reuse an existing physical vmdk with another drive. Always create a new raw vmdk for each drive you wish to manage in Windows.
Zeroing a drive
While the clean command clears off all partition information in Windows, you can also clean off the drive in MacOS X. The way to do this is by using dd. Again, this command is destructive, so be sure you know which drive you are operating on before you press enter. Once you press enter, the drive will be wiped of data. Use this section at your own risk.
To clean the drive use the following:
$ dd if=/dev/zero of=/dev/disk1 bs=4096 count=10000
This command writes 10,000 blocks of 4,096 bytes each, roughly 40 MB of zeros, to the start of the drive. That is enough to overwrite the partition table and leave the drive looking blank. You may not need to do this at all, as the diskpart 'clean' command is usually sufficient.
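For example, assuming /dev/disk1 is still the target drive (unmount it first, or dd will typically fail with a 'Resource busy' error):

$ diskutil unmountDisk /dev/disk1
$ sudo dd if=/dev/zero of=/dev/disk1 bs=4096 count=10000
$ diskutil list disk1    # the old partition entries should no longer be listed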
Using chkdsk
If the drive has become corrupted or is acting in a way you think may be a problem, you can always go back into Windows with the disk1.vmdk connector and run chkdsk on the volume. You can also do this for any NTFS or FAT32 volume you may have; you will just need to create a physical vmdk connector, attach it to your Windows SATA controller and make sure MacOS X doesn't have it mounted. Then launch VirtualBox and clean it up.
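A minimal example from a Command Prompt inside the Windows VM, assuming the volume is still mounted as G: as in the formatting steps above (add /r to also scan for bad sectors, which takes much longer):

C:\> chkdsk g: /f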
Tuxera
If you are using Tuxera to mount NTFS, once you exit out of Windows with your freshly formatted NTFS volume, Tuxera should immediately see the volume and mount it. This will show you that NTFS has been formatted properly on the drive. You can now read and write to this volume as necessary.
Note that this method to format a drive with NTFS is the safest way on Mac OS X. While there may be some native tools floating around out there, using Windows to format NTFS will ensure the volume is 100% compliant with NTFS and Windows. Using third party tools not written by Microsoft could lead to data corruption or improperly formatted volumes.
Of course, you could always connect the drive directly to a Windows system and format it that way. ;)
How not to run a business (Part 3) — SaaS edition
So, we’ve talked about how not to run a general business, let’s get to some specifics. Since software as a service (SaaS) is now becoming more and more common, let’s explore software companies and how not to run these.
Don’t add new features because you can
If a customer is asking for something new, then add that new feature at some appointed future time. Do not, however, assume that the feature needs to be implemented tomorrow. On the other hand, if you have conceived something that you think might be useful, don't spend time implementing it until someone is actually asking for it. This is an important lesson to learn: it's a waste of time to write code that no one will actually use. So, if you think your feature has merit, invite your existing customers to a discussion by asking them whether they would find the proposed feature useful. Your customers have the final say. If the majority of your customers don't think they would use it, scrap the idea. Time spent writing a useless feature is time wasted, and once written, that code has to be maintained by someone, which wastes even more time.
Don’t tie yourself to your existing code
Another lesson to learn is that your code (and app) needs to be both flexible and trashable. Yes, I said trashable. You need to be willing to throw away code and rewrite it if necessary. Code flows, changes and morphs; it does not stay static. Ideas change, features change, hardware changes, data changes and customer expectations change. As your product matures and requires more and better infrastructure support, you will find that your older code becomes outdated. Don't be surprised if you find yourself trashing much of your existing code for completely new implementations that take advantage of newer technologies and frameworks. Code that you wrote from scratch to solve an early business problem may now be covered by a software framework that, while not identical to your code, does what your code does 100x more efficiently. You have to be willing to dump old code and implement new ideas in its place. As an example, early code usually does not take high availability into account, so gutting old code that isn't highly available in favor of new frameworks that are is always a benefit to your customers. If there's anything to understand here, it's that code is not a pet to get attached to. It provides your business with a point-in-time set of services, but that code must grow with your customers' expectations. Yes, this includes total ground-up rewrites.
Don’t write code that focuses solely on user experience
In software-as-a-service companies, many early designs can focus solely on what the code brings to the table for customer experience. The problem is that the design team can become so focused on writing the customer experience that they forget all about the manageability of the code from an operational perspective. Don’t write your code this way. Your company’s ability to support that user experience will suffer greatly from this mistake. Operationally, the code must be manageable, supportable, functional and must also start up, pause and stop consistently. This means, don’t write code so that when it fails it leaves garbage in tables, half-completed transactions with no way to restart the failed transactions or huge temporary files in /tmp. This is sloppy code design at best. At worst, it’s garbage code that needs to be rewritten.
All software designs should plan for both the user experience and the operational functionality. You can’t expect your operations team to become the engineering code janitors. Operations teams are not janitors for cleaning up after sloppy code that leaves garbage everywhere. Which leads to …
Don’t write code that doesn’t clean up after itself
If your code writes temporary tables or otherwise uses temporary mechanisms to complete its processing, clean these up not only on a clean exit, but also under failure conditions. I know of no language that, when used correctly, cannot clean up after itself even under the most severe failure conditions. Learn to use these mechanisms. Better, don't write code that leaves lots of garbage behind at any point in time; consume what you need in small blocks and limit the damage under failure conditions.
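As a minimal sketch of the idea in shell (the same principle applies in any language; the file names and the 'work' step are invented for the example), a trap handler guarantees the temporary file disappears whether the script finishes cleanly or dies partway through:

#!/bin/sh
# Create a private temp file and guarantee it is removed on any exit path.
TMPFILE=$(mktemp /tmp/report.XXXXXX) || exit 1
trap 'rm -f "$TMPFILE"' EXIT INT TERM

# ... do the real work, staging intermediate results in "$TMPFILE" ...
date > "$TMPFILE"

# Publish only the completed result; the trap still removes the staging file.
cp "$TMPFILE" ./report-latest.txt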
Additionally, if your code needs to run through a series of processing steps, checkpoint those steps; that is, save the checkpoint somewhere. Then, if the process fails at step 3 of 5, another process can come along, pick up at step 3 and move forward. Leaving half-completed transactions opens your customers up to user experience problems. Always make sure your code can restart after a failure from the last checkpoint. Remember, user experience isn't limited to a web interface…
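Here is a tiny shell sketch of that checkpoint idea; the step scripts are hypothetical placeholders for whatever your pipeline actually does:

#!/bin/sh
# Resume a multi-step job from the last step that completed successfully.
CKPT=./job.checkpoint
last=$(cat "$CKPT" 2>/dev/null || echo 0)

run_step() {
    step=$1; shift
    if [ "$last" -lt "$step" ]; then
        "$@" || exit 1           # stop here; the checkpoint still names the last good step
        echo "$step" > "$CKPT"   # advance the checkpoint only after the step succeeds
    fi
}

run_step 1 ./extract.sh
run_step 2 ./transform.sh
run_step 3 ./load.sh
rm -f "$CKPT"                    # all steps finished; clear the checkpoint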
Don’t think that the front end is all there is to user experience
One of the mistakes that a lot of design teams fall into is thinking that the user experience is tied to the way the front end interacts. Unfortunately, this design approach has failure written all over it. Operationally, the back end processing is as much a user experience as the front end interface. Sure, the interface is what the user sees and how the user interacts with your company’s service. At the same time, what the user does on the front end directly drives what happens on the back end. Seeing as your service is likely to be multiuser capable, what each user does needs to have its own separate allocation of resources on the back end to complete their requests. Designing the back end process to serially manage the user requests will lead to backups when you have 100, 1,000 or 10,000 users online.
It’s important to design both the front end experience and the back end processing to support a fully scalable multiuser experience. Most operating systems today are fully capable of multitasking utilizing both multiprocess and multithreaded support. So, take advantage of these features and run your user’s processing requests concurrently, not serially. Even better, make sure they can scale properly.
Don’t write code that sets no limits
One of the most damaging things you can do for user experience is tell your customers there are no limits in your application. As soon as those words are uttered from your lips, someone will be on your system testing that statement: first by seeing how much data it takes to break the system, then by stating that you are lying. Bad from all angles. The takeaway here is that all systems have limits: disk capacity, disk throughput, network throughput, network latency, the unreliability of the Internet itself, database limits, process limits and so on. There are limits everywhere in every operating system, every network and every application. You can't claim that your application offers unlimited capabilities without that being a lie. Eventually, your customers will hit a limit and you'll be standing there scratching your head.
No, it's far simpler not to make this claim. Set quotas, set limits, and set the expectation that data sets perform best when they stay within a given range. Customers are actually much happier when you give them realistic limits and set their expectations appropriately. Far-fetched claims leave your company open to problems. Don't do this.
Don’t rely on cron to run your business
Ok, so I know some people will say: why not? Cron, while a decent scheduling system, isn't without its own share of problems. One of its biggest limitations is that its smallest level of granularity is once per minute; if you need something to run more frequently than every minute, you are out of luck with cron. Cron also requires hard-coded entries that must live in specific files or directories for cron to function. Cron doesn't have an API, and it exposes no statistics other than what you can dig out of log files. Note, I'm not hating on cron. Cron is a great systems administration tool and has a lot going for it for relatively infrequent tasks. It's just not designed for heavy, mission-critical load. If you're doing distributed processing, you will need a more decentralized way to launch tasks anyway, so cron likely won't work in a distributed environment. Cron also has a propensity to stop working internally while leaving itself running in the process list, so monitoring systems will think it's working when it's not actually launching any tasks.
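To make the granularity point concrete, here is what crontab entries look like (the script name is made up for the example). One minute is the floor, and the usual workaround for anything finer is stacking staggered sleep calls, exactly the kind of fragile hack you don't want running your business:

# m  h  dom mon dow  command            (edit with: crontab -e)
*    *  *   *   *    /usr/local/bin/process-queue.sh
# "every 15 seconds" means several entries, each still firing only once per minute:
*    *  *   *   *    sleep 15 && /usr/local/bin/process-queue.sh
*    *  *   *   *    sleep 30 && /usr/local/bin/process-queue.sh
*    *  *   *   *    sleep 45 && /usr/local/bin/process-queue.sh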
If you're a Windows shop, don't rely on the Windows scheduler to run your business either. Why? The Windows scheduler has historically been tied to Internet Explorer (IE); when IE changes, the scheduler can change or break along with it. Considering the frequency with which Microsoft releases updates not only to the operating system but to IE, you'd be wise to find another scheduler that is not likely to be impacted by Microsoft's incessant need to modify the operating system.
Find or design a more reliable scheduler that works in a scalable fault tolerant way.
Don't rely on monitoring systems (or your operations team) to find every problem, or to find it in a timely manner
Monitoring systems are designed by humans to find problems and alert. Monitoring systems are by their very nature, reactive. This means that monitoring systems only alert you AFTER they have found a problem. Never before. Worse, most monitoring systems only alert of problems after multiple checks have failed. This means that not only is the service down, it’s been down for probably 15-20 minutes by the time the system alerts. In this time, your customers may or may not have already seen that something is going on.
Additionally, to monitor any given application feature, the monitoring system needs a window into that specific feature. For example, monitoring Windows WMI components or Windows message queues from a Linux monitoring system is nearly impossible; Linux has no native components to access the Windows WMI system or Windows message queues. That said, a third-party monitoring system with an agent process on the Windows system may be able to access WMI, but it may not.
Always design your code to provide a window into critical application components and functionality for monitoring purposes. Without such a monitoring window, these applications can be next to impossible to monitor. Better, design using standardized components that work across all platforms instead of relying on platform specific components. Either that or choose a single platform for your business environment and stick with that choice. Note that it is not the responsibility of the operations team to find windows to monitor. It’s the application engineering team’s responsibility to provide the necessary windows into the application to monitor the application.
Don’t expect your operations team to debug your application’s code
Systems administrators are generally not programmers. Yes, they can write shell scripts, but they don't write application code. If your application is written in PHP or C or C++ or Java, don't expect your operations team to review your application's code, debug it or even understand it. Yes, they may be able to read some Java or PHP, but their job is not to write or review your application's code. Systems administrators are tasked with managing the operating systems and components; that is, making sure the hardware and operating system are healthy so the application can function and thrive. They are not tasked with writing or debugging your application's code. Debugging the application is a task for your software engineers. Yes, a systems administrator can find bugs and report them, just as anyone can, but determining why that bug exists is your software engineers' responsibility. If you expect your systems administrators to understand your application's code at that level of detail, they are no longer systems administrators; they are software engineers. Keeping job roles separate is important in keeping your staff from becoming overloaded with unnecessary tasks.
Don’t write code that is not also documented
This is plain and simple programming 101. Your software engineers' responsibility is to write robust code, but also to document everything they write. That's their job and it should be part of their job description. If they do not, cannot or are unwilling to document the code they write, they should be put on a performance review plan and, without improvement, walked to the door. Without documentation, reverse engineering their code can take weeks for new personnel. Documentation is critical to your business's continued success, especially when personnel change. Think of this like you would disaster recovery: if you suddenly no longer had your current engineers and had to hire all new ones, how quickly could the new engineers understand your application's code well enough to release a new version? This can end up a make-or-break situation. Documentation is the key here.
Thus, documentation must be part of any engineer's responsibility when they write code for your company. Code review by management is equally important to ensure that the code not only seems reasonable (i.e., no gotos), but is fully documented and attributed. Yes, the author's name and the date the code was written should be included in comments around each section of code. All languages provide ways to comment within the code; require your staff to use them.
Don’t expect your code to test itself or that your engineers will properly test it
Your software engineers are far too close to the code to determine whether it works correctly under all scenarios. Plain and simple, software doesn't test itself. Use an independent quality testing group to ensure that the code performs as expected based on the design specifications. Yes, always test against the design specifications. Clearly, your company should have a road map of features and exactly how those features are expected to perform, and those features should be driven by customer requests. Your quality assurance team should have a list of all new features going into each release so they can write thorough test cases well in advance. Then, when the code is ready, they can put the release candidate into the testing environment and run through their test cases. As I said, don't rely on your software engineers to provide this level of test coverage. Use a full quality assurance team to review and sign off on the test cases to ensure that the features work as defined.
Don’t expect code to write (or fix) itself
Here’s another one that would be seemingly self-explanatory. Basically, when a feature comes along that needs to be implemented, don’t expect the code to spring up out of nowhere. You need competent technical people who fully understand the design to write the code for any new feature. But, just because an engineer has actually written code doesn’t mean the code actually implements the feature. Always have test cases ready to ensure that the implemented feature actually performs the way that it was intended.
If the code doesn’t perform what it’s supposed to after having been implemented, obviously it needs to be rewritten so that it does. If the code written doesn’t match the requested feature, the engineer may not understand the requested feature enough to implement it correctly. Alternatively, the feature set wasn’t documented well enough before having been sent to the engineering team to be coded. Always document the features completely, with pseudo-code if necessary, prior to being sent to engineering to write actual code. If using an agile engineering approach, review the progress frequently and test the feature along the way.
Additionally, if the code doesn't work as expected and is rolled to production broken, don't expect that code to magically start working or the production team to have some kind of magic wand to fix the problem. If it's a coding problem, it's a software engineering task to resolve. Whether or not the production team (or even a customer) manages to find a workaround is irrelevant to actually fixing the bug. If a bug is found and documented, fix it.
Don’t let your software engineers design features
Your software engineers are there to write code based on features derived from customer feedback. Don't let your software engineers write code for features that aren't on the current road map. This is a waste of time and doesn't help get your newest release out the door. Make sure that your software engineers remain focused on the current set of features destined for the next release; focusing on anything else could delay that release. If you want to hit a specific release date, always keep your engineers focused on the features destined for that release. Of course, fixing bugs from previous releases is also a priority, so make sure they have enough time to work on those while still coding for the newest release. If you have the manpower, focus some people on bug fixing and others on new features. If the code is documented well enough, a separate bug-fixing team should have no difficulty creating patches to fix bugs from the current release.
Don’t expect to create 100% perfect code
So, this one almost goes without saying, but it does need to be said: nothing is ever bug free. This section is here to illustrate why you need to design your application using a modular patching approach. It goes back to operational manageability (as stated above). Design your application so that code modules can be drop-in replaced easily while the code is running. This means the operations team (or whoever is tasked with your patching) simply drops a new code file in place, tells the system to reload, and within minutes the new code is operating. Modular drop-in replacement while running is the only way to prevent major downtime (assuming the code is fully tested). As a SaaS company, you should always design your application with high availability in mind. Full code releases, on the other hand, should have a separate installation process from drop-in replacement, although if you would like to use the dynamic patching process for more agile releases, that is definitely an encouraged design feature. The more easily you design manageability and rapid deployment into your code for the operations team, the fewer operations people you need to manage and deploy it.
Without the distractions of long involved release processes, the operations team can focus on hardware design, implementation and general growth of the operations processes. The more distractions your operations team has with regards to bugs, fixing bugs, patching bugs and general code related issues, the less time they have to spend on the infrastructure side to make your application perform its best. As well, the operations team also has to keep up with operating system patches, software releases, software updates and security issues that may affect your application or the security of your user’s data.
Don’t overlook security in your design
Many people who write code, write code to implement a feature without thought to security. I’m not necessarily talking about blatantly obvious things like using logins and passwords to get into your system. Although, if you don’t have this, you need to add it. It’s clear, logins are required if you want to have multiple users using your system at once. No, I’m discussing the more subtle but damaging security problems such as cross-site scripting or SQL injection attacks. Always have your site’s code thoroughly tested against a suite of security tools prior to release. Fix any security problems revealed before rolling that code out to production. Don’t wait until the code rolls to production to fix security vulnerabilities. If your quality assurance team isn’t testing for security vulnerabilities as part of the QA sign off process, then you need to rethink and restructure your QA testing methodologies. Otherwise, you may find yourself becoming the next Sony Playstation Store news headline at Yahoo News or CNN. You don’t really want this type of press for your company. You also don’t want your company to be known for losing customer data.
Additionally, you should always store user passwords and other sensitive user data in one-way encrypted (hashed) form. You can store the last four digits of a social security number or account number in clear text, but do not store the whole number in plain text, with two-way (reversible) encryption, or in a form that is easily cracked (such as an unsalted MD5 hash). Always use reasonably strong one-way algorithms to store sensitive data. If you need access to that data, require the user to enter the whole string to unlock whatever it is they are trying to access.
Don’t expect your code to work on terabytes of data
If you're writing code that manages SQL queries or, more specifically, constructs SQL queries from some kind of structured input, don't expect your query to return promptly when run against gigabytes or terabytes of data, thousands of columns, or billions of rows or more. Test your code against large data sets. If you don't have a large data set to test against, you need to find or build one. Plain and simple, if you can't replicate your biggest customers' environments in your test environment, then you cannot test all the edge cases in the code that was written. SQL queries pay heavy penalties against large data sets because of explain plans and the statistics tables that must be built; if you don't test your code, you may find that those statistics tables are not built the way you expect and the query takes 4,000 seconds instead of 4 seconds to return.
Alternatively, if you're working with very large data sets, it might be worth exploring technologies such as Hadoop and Cassandra instead of traditional relational databases like MySQL; they handle large data sets in more efficient ways. Note, however, that Hadoop and Cassandra are NoSQL implementations, so you forfeit structured queries to retrieve the data, but very large data sets can be randomly read and written, in many cases, much faster than with SQL ACID database implementations.
Don’t write islands of code
You would think in this day and age that people would understand how frameworks work. Unfortunately, many people don’t and continue to write code that isn’t library or framework based. Let’s get you up to speed on this topic. Instead of writing little disparate islands of code, roll the code up under shared frameworks or shared libraries. This allows other engineers to use and reuse that code in new ways. If it’s a new feature, it’s possible that another bit of unrelated code may need to pull some data from another earlier implemented feature. Frameworks are a great way to ensure that reusing code is possible without reinventing the wheel or copying and pasting code all over the place. Reusable libraries and frameworks are the future. Use them.
Of course, these libraries and frameworks need to be fully documented with specifications of the calls before they can be reused by other engineers in other parts of the code. So, documentation is critical to code reuse. Better, the use of object oriented programming allows not only reuse, but inheritance. So, you can inherit an object in its template form and add your own custom additions to this object to expand its usefulness.
Don’t talk and chew bubble gum at the same time
That is, don't be too grandiose in your plans. Your team has limited time between the start of a development cycle and the roll-out of a new release. Make sure that your feature set is compatible with this deadline. Sure, you can throw in everything including the kitchen sink, but don't expect your engineering team to deliver on time or, if they do manage to deliver, that the code will work half as well as you expect. Instead, pare your feature sets down to manageable chunks, then group the chunks into releases throughout the year. Set the expectation that you want a certain feature set in a given release, but make sure that feature set is attainable in the time allotted with the number of engineers you have on staff. If you have a team of two engineers and a development cycle of one month, don't expect them to implement hundreds of complex features in that time. Be realistic, but at the same time, know what your engineers are capable of.
Don’t implement features based on one customer’s demand
If someone made a sales promise to deliver a feature to one, and only one, customer, you've made a serious business mistake. Never promise an individual feature to an individual customer. You may retain that customer by implementing the feature, but you will run yourself and the rest of your company ragged trying to fulfill the promise. Worse, that customer has no loyalty to you. Even if you spend 2-3 weeks in a day-and-night coding frenzy to meet the customer's requirement, the customer will not be any more loyal to you after you have released the code. Sure, it may make the customer briefly happy, but at what expense? You likely won't keep them as a customer much longer anyway; by the time you've reached this level of desperation with a customer, they are probably already on the way out the door. These crunch requests are usually last-ditch efforts at customer retention and customer relations. Worse, the company runs itself ragged desperately trying to roll out this new feature, almost completely ignoring all other customers and projects needing attention, and these harried features end up as such customized one-offs that no other customer can even use them without a major rewrite. So, the code is effectively useless to anyone other than the requesting customer, who is likely within inches of terminating their contract. Don't do it. If your company gets into this desperation mode, you need to stop and rethink your business strategy and why you are in business.
Don’t forget your customer
You need to hire a high quality sales team who is attentive to customer needs. But, more than this, they need to periodically talk to your existing clients on customer relations terms. Basically, ask the right questions and determine if the customer is happy with the services. I’ve seen so many cases where a customer appears completely happy with the services. In reality, they have either been shopping around or have been approached by competition and wooed away with a better deal. You can’t assume that any customer is so entrenched in your service that they won’t leave. Instead, your sales team needs to take a proactive approach and reach out to the customers periodically to get feedback, determine needs and ask if they have any questions regarding their services. If a contract is within 3 months of renewal, the sales team needs to be on the phone and discussing renewal plans. Don’t wait until a week before the renewal to contact your customers. By a week out, it’s likely that the customers have already been approached by competition and it’s far too late to participate in any vendor review process. You need to know when the vendor review process happens and always submit yourself to that process for continued business consideration from that customer. Just because a customer has a current contract with you does not make you a preferred vendor. More than this, you want to always participate in the vendor review process, so this is why it’s important to contact your customer and ask when the vendor review process begins. Don’t blame the customer that you weren’t included in any vendor review and purchasing process. It’s your sales team’s job to find out when vendor reviews commence.
← Part 2 | Chapter Index | Part 4 →
Amazon Kindle: Buyer’s Security Warning
If you're thinking of purchasing a Kindle or Kindle Fire, beware. Amazon ships the Kindle pre-registered to your account, meaning it is tied to your account before the item even arrives. What does that mean? It means the device is ready to make purchases from your account without being in your possession. Amazon does this to make it 'easy'. Unfortunately, this is a huge security risk, and you need to take some precautions before the Kindle arrives.
Why is this a risk?
If the package gets stolen, it is not only a hassle to get the device replaced; the thief can rack up purchases on that device against your Amazon account and your registered credit card without you being immediately aware. The bigger security problem is that the Kindle, unlike the iPad, does not ask for a password to purchase content. Once registered to your account, the device has already been given consent to purchase with no further security, so content can be charged straight to your credit card without any further prompts. You will only find out about the purchases after they have been made, through email receipts. At that point, you will have to dispute the charges with Amazon and, likely, with your bank.
This is bad on many levels, but it's especially bad while the item is in transit, before you receive the device in the mail. If the device is stolen in transit, your account could end up being charged for content by the thief, as described above. Also, if you have a child you would like to let use the device, they can also make purchases easily, because the device is registered and requires no additional passwords. They just click, and you've bought.
What to do?
When you order a Kindle, you will want to find and de-register that Kindle (may take 24 hours before it appears) until it safely arrives into your possession and is working as you expect. You can find the Kindles registered to your account by clicking (from the front page while logged in) ‘Your Account->Manage Your Kindle‘ menu then click ‘Manage Your Devices‘ in the left side panel. From here, look for any Kindles you may have recently purchased and click ‘Deregister’. Follow through any prompts until they are unregistered. This will unregister that device. You can re-register the device when it arrives.
If you’re concerned that your child may make unauthorized purchases, either don’t let them use your Kindle or de-register the Kindle each time you give the device to your child. They can use the content that’s on the device, but they cannot make any further purchases unless you re-register the device.
Kindle as a Gift
Still a problem. Amazon doesn’t recognize gift purchases any differently. If you are buying a Kindle for a friend, co-worker or even as a giveaway for your company’s party, you will want to explicitly find the purchased Kindle in your account and de-register it. Otherwise, the person who receives the device could potentially rack up purchases on your account without you knowing.
Shame on Amazon
Amazon should stop this practice of pre-registering Kindles, pronto. A Kindle should only be registered to an account after the device has arrived in the possession of its rightful owner. Then, and only then, should the device be registered to the consumer's Amazon account as part of the setup process, using an authorized Amazon login and password (or through the Manage Your Devices section of the Amazon account). The consumer should be the sole party responsible for authorizing devices on their account. Amazon needs to stop pre-registering devices before the item ships. This is a bad practice and a huge security risk to the holder of the Amazon account who purchased the Kindle. It also makes gifting Kindles extremely problematic. Amazon, it's time to stop this bad security practice or place more security mechanisms on the Kindle before a purchase can be made.