Random Thoughts – Randocity!

How to format NTFS on MacOS X

Posted in Apple, computers, Mac OS X, microsoft by commorancy on June 2, 2012

This article is designed to show you how to mount and manage NTFS partitions in MacOS X.  Note the prerequisites below as it’s not quite as straightforward as one would hope.  That is, there is no native MacOS X tool to accomplish this, but it can be done.  First things first:

Disclaimer

This article discusses commands that will format, destroy or otherwise wipe data from hard drives.  If you are uncomfortable working with commands like these, you shouldn’t attempt to follow this article.  This information is provided as-is and all risk is incurred solely by the reader.  If you wipe your data accidentally by the use of the information contained in this article, you solely accept all risk.  This author accepts no liability for the use or misuse of the commands explored in this article.

Prerequisites

Right up front I’m going to say that to accomplish this task, you must have the following prerequisites set up:

  1. VirtualBox installed (free)
  2. Windows 7 (any flavor) installed in VirtualBox (you can probably use Windows XP, but the commands may be different) (Windows is not free)

For reading / writing to NTFS formatted partitions (optional), you will need the following:

  1. For writing to NTFS partitions on MacOS X: a third-party driver such as Tuxera NTFS for Mac (commercial) or ntfs-3g (free), discussed below.
  2. For reading from NTFS: MacOS X can natively mount and read from NTFS partitions in read-only mode. This is built into Mac OS X.

If you plan on writing to NTFS partitions, I highly recommend Tuxera over ntfs-3g. Tuxera is stable and I’ve had no troubles with it corrupting NTFS volumes which would require a ‘chkdsk’ operation to fix.  On the other hand, ntfs-3g regularly corrupts volumes and will require chkdsk to clean up the volume periodically. Do not override MacOS X’s native NTFS mounter and have it write to volumes (even though it is possible).  The MacOS X native NTFS mounter will corrupt disks in write mode.  Use Tuxera or ntfs-3g instead.

Why NTFS on Mac OS X?

If you’re like me, you have a Mac at work and Windows at home.  Because a Mac can mount NTFS, but Windows has no hope of mounting MacOS Journaled filesystems, I opted to use NTFS as my disk carry standard.  Note, I use large 1-2TB hard drives, and NTFS is much more efficient with space allocation than FAT32 on disks of this size.  So, this is why I use NTFS as my carry-around standard for both Windows and Mac.

How to format a new hard drive with NTFS on Mac OS X

Once you have Windows 7 installed in VirtualBox and working, shut it down for the moment.  Note, I will assume that you know how to install Windows 7 in VirtualBox.  If not, let me know and I can write a separate article on how to do this.

Now, go to Mac OS X and open a command terminal (/Applications/Utilities/Terminal.app).  Connect the disk to your Mac via USB or whatever method you wish to use to connect the drive.  Once you have it connected, you will need to determine which /dev/diskX device it is using.  There are several ways of doing this.  However, the easiest way is with the ‘diskutil’ command:

$ diskutil list
/dev/disk0
   #:                     TYPE NAME          SIZE       IDENTIFIER
   0:    GUID_partition_scheme              *500.1 GB   disk0
   1:                      EFI               209.7 MB   disk0s1
   2:                Apple_HFS Macintosh HD  499.8 GB   disk0s2
/dev/disk1
   #:                     TYPE NAME          SIZE       IDENTIFIER
   0:    GUID_partition_scheme              *2.0 TB     disk1
/dev/disk2
   #:                     TYPE NAME          SIZE       IDENTIFIER
   0:   Apple_partition_scheme              *119.6 MB   disk2
   1:      Apple_partition_map               32.3 KB    disk2s1
   2:                Apple_HFS VirtualBox    119.5 MB   disk2s2

Locate the drive that appears to be the size of your new hard drive.  If the hard drive is blank (a brand new drive), it shouldn’t show any additional partitions. In my case, I’ve identified that I want to use /dev/disk1.  Remember this device file path because you will need it for creating the raw disk vmdk file. Note the nomenclature above:  The /dev/disk1 is the device to access the entire drive from sector 0 to the very end.  The /dev/diskXsX files access individual partitions created on the device.  Make sure you’ve noted the correct /dev/disk here or you could overwrite the wrong drive.
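Before touching the disk, it’s worth confirming the identity of the device you picked. A quick sanity check (a sketch; it assumes the candidate device is disk1, as in the listing above, so adjust to match your own output):

```shell
# Confirm the identity and size of the candidate disk before doing anything
# destructive. 'disk1' is an assumption; substitute your own device.
disk="disk1"
if command -v diskutil >/dev/null 2>&1; then
  diskutil info "$disk" | grep -E 'Device Node|Name|Size'
else
  echo "diskutil is only available on Mac OS X"
fi
```

If the size or media name doesn’t match the drive you just plugged in, stop and re-check `diskutil list` before proceeding.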

Don’t create any partitions with MacOS X in Disk Utility or in diskutil as these won’t be used (or useful) in Windows.  In fact, if you create any partitions with Disk Utility, you will need to ‘clean’ the drive in Windows.

Creating a raw disk vmdk for VirtualBox

This next part will create a raw connector between VirtualBox and your physical drive.  This will allow Windows to directly access the entire physical /dev/disk1 drive from within VirtualBox Windows.  Giving Windows access to the entire drive will let you manage the entire drive from within Windows including creating partitions and formatting them.

To create the connector, you will use the following command in Mac OS X from a terminal shell:

$ vboxmanage internalcommands createrawvmdk \
-filename "/path/to/VirtualBox VMs/Windows/disk1.vmdk" -rawdisk /dev/disk1

It’s a good idea to create the disk1.vmdk where your Windows VirtualBox VM lives. Note, if vboxmanage isn’t in your PATH, you will need to add it to your PATH to execute this command or, alternatively, specify the exact path to the vboxmanage command. In my case, this is located in /usr/bin/vboxmanage.  This command will create a file named disk1.vmdk that will be used inside your Windows VirtualBox machine to access the hard drive. Note that creating the vmdk doesn’t connect the drive to your VirtualBox Windows system. That’s the next step.  Make note of the path to disk1.vmdk as you will also need this for the next step.
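As a sanity check, the raw-disk vmdk that createrawvmdk writes is a small plain-text descriptor, so you can inspect it to confirm it references the right device (the path below is the example path from the command above, not a real location):

```shell
# Peek at the raw-disk descriptor (a plain-text file) to confirm it points
# at the intended physical device. The path is the example path from above.
vmdk="/path/to/VirtualBox VMs/Windows/disk1.vmdk"
if [ -f "$vmdk" ]; then
  head "$vmdk"
else
  echo "descriptor not found at $vmdk"
fi
```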

Additional note: if the drive already has any partitions on it (NTFS or MacOS), you will need to unmount any mounted partitions before Windows can access it and before you can createrawvmdk with vboxmanage.  Check ‘df’ to see if any partitions on the drive are mounted.  To unmount, either drop the partition(s) on the trashcan, use umount /path/to/partition, or use diskutil unmount /path/to/partition.  You will need to unmount all partitions on the drive in question before Windows or vboxmanage can access it.  Even one mounted partition will prevent VirtualBox from gaining access to the disk.
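Rather than unmounting partitions one at a time, diskutil can also take every volume on the device offline in one step (a sketch; it assumes the target drive is /dev/disk1):

```shell
# Unmount all mounted partitions on the target drive in one command so
# VirtualBox can open the raw device. Assumes the target is /dev/disk1.
disk="/dev/disk1"
if command -v diskutil >/dev/null 2>&1; then
  diskutil unmountDisk "$disk"
else
  echo "diskutil is only available on Mac OS X"
fi
```

Afterward, `df` should show no mounted volumes from that device.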

Note, if this is a brand new drive, it should be blank and it won’t attempt to mount anything.  MacOS may ask you to format it, but just click ‘ignore’.  Don’t have MacOS X format the drive.  However, if you are re-using a previously used drive and wanting to format over what’s on it, I would suggest you zero the drive (see ‘Zeroing a drive’ below) as the fastest way to clear the drive of partition information.

Hooking up the raw disk vmdk to VirtualBox

Open VirtualBox.  In VirtualBox, highlight your Windows virtual machine and click the ‘Settings’ cog at the top.

  • Click the Storage icon.
  • Click the ‘SATA Controller’
  • Click on the ‘Add Hard Disk’ icon (3 disks stacked).
  • When the question panel appears, click on ‘Choose existing disk’.
  • Navigate to the folder where you created ‘disk1.vmdk’, select it and click ‘Open’.
  • The disk1.vmdk connector will now appear under SATA Controller

You are ready to launch VirtualBox.  Note, if /dev/disk1 isn’t owned by your user account, VirtualBox may fail to open this drive and show an error panel.  If you see any error panels, check to make sure no partitions are mounted and then check the permissions of /dev/disk1 with ls -l /dev/disk1 and, if necessary, sudo chown $LOGNAME /dev/disk1.  The drive must not have any partitions actively mounted and /dev/disk1 must be owned by your user account on MacOS X.  Also make sure that the vmdk file you created above is owned by your user account, as you may have needed to become root to createrawvmdk.
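The ownership check and fix look something like this (a sketch; changing ownership of a /dev node normally requires sudo, so the chown is shown commented out):

```shell
# Check who owns the raw device node; VirtualBox needs it readable and
# writable by your user account. Assumes the target is /dev/disk1.
disk="/dev/disk1"
if [ -e "$disk" ]; then
  ls -l "$disk"
  # If it is owned by root, hand it to your login user:
  # sudo chown "$LOGNAME" "$disk"
else
  echo "$disk does not exist on this system"
fi
```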

Launching VirtualBox

Click the ‘Start’ button to start your Windows VirtualBox.  Once you’re at the Windows login panel, log into Windows as you normally would.  Note, if the hard drive goes to sleep, you may have to wait for it to wake up for Windows to finish loading.

Once inside Windows, do the following:

  • Start->All Programs->Accessories->Command Prompt
  • Type in ‘diskpart’
  • At the DISKPART> prompt, type ‘list disk’ and look for the drive (based on the size of the drive).
    • Note, if you have more than one drive that’s the same exact size, you’ll want to be extra careful when changing things as you could overwrite the wrong drive.  If this is the case, follow these next steps at your own risk!
DISKPART> list disk

  Disk ###  Status    Size     Free     Dyn  Gpt
  --------  --------  -------  -------  ---  ---
  Disk 0    Online      40 GB      0 B
  Disk 1    Online    1863 GB      0 B        *
  • In my case, I am using Disk 1.  So, type in ‘select disk 1’.  It will say ‘Disk 1 is now the selected disk.’
    • From here on down, use these commands at your own risk.  They are destructive commands and will wipe the drive and data from the drive.  If you are uncertain about what’s on the drive or you need to keep a copy, you should stop here and backup the data before proceeding.  You have been warned.
    • Note, ‘Disk 1’ is coincidentally named the same as /dev/disk1 on the Mac.  It may not always follow the same naming scheme on all systems.
  • To ensure the drive is fully blank type in ‘clean’ and press enter.
    • The clean command will wipe all partitions and volumes from the drive and make the drive ‘blank’.
    • From here, you can repartition the drive as necessary.

Creating a partition, formatting and mounting the drive in Windows

  • Using diskpart, here are the commands to create one partition using the whole drive, format it NTFS and mount it as G: (see commands below):
DISKPART> select disk 1
Disk 1 is now the selected disk.
DISKPART> clean
DiskPart succeeded in cleaning the disk.
DISKPART> create partition primary
DiskPart succeeded in creating the specified partition.
DISKPART> list partition

  Partition ###  Type     Size     Offset
  -------------  -------  -------  -------
* Partition 1    Primary  1863 GB  1024 KB

DISKPART> select partition 1
Partition 1 is now the selected partition.
DISKPART> format fs=ntfs label="Data" quick
  100 percent completed
DiskPart successfully formatted the volume.
DISKPART> assign letter=g
DiskPart successfully assigned the drive letter or mount point.
DISKPART> exit
Leaving DiskPart...

  • The drive is now formatted as NTFS and mounted as G:.  You should see the drive in Windows Explorer.
    • Note, unless you want to spend hours formatting a 1-2TB sized drive, you should format it as QUICK.
    • If you want to validate the drive is good, then you may want to do a full format on the drive.  New drives are generally good already, so QUICK is a much better option to get the drive formatted faster.
  • If you want to review the drive in Disk Management Console, in the command shell type in diskmgmt.msc
  • When the window opens, you should find your Data drive listed as ‘Disk 1’

Note, the reason to use ‘diskpart’ over Disk Management Console is that you can’t use ‘clean’ in Disk Management Console; this command is only available in the diskpart tool, and it’s the only way to completely clean the drive of all partitions to make the drive blank again.  This is especially handy if you happen to have previously formatted the drive with the MacOS X Journaled filesystem and there’s an EFI partition on the drive.  The only way to get rid of a Mac EFI partition is to ‘clean’ the drive as above.

Annoyances and Caveats

MacOS X always tries to mount recognizable removable (USB) partitions when they become available.  So, as soon as you have formatted the drive and have shut down Windows, Mac will likely mount the NTFS drive under /Volumes/Data.  You can check this with ‘df’ in Mac terminal or by opening Finder.  If you find that it is mounted in Mac, you must unmount it before you can start VirtualBox to use the drive in Windows.  If you try to start VirtualBox with a mounted partition in Mac OS X, you will see a red error panel in VirtualBox.  Mac and Windows will not share a physical volume.  So you must make sure MacOS X has unmounted the volume before you start VirtualBox with the disk1.vmdk physical drive.

Also, the raw vmdk drive is specific to that single hard drive.  You will need to go through the steps of creating a new raw vmdk for each new hard drive you want to format in Windows unless you know for certain that each hard drive is truly identical.  The reason is that vboxmanage discovers the geometry of the drive and writes it to the vmdk.  So, each raw vmdk is tailored to each drive’s size and geometry.  It is recommended that you not try to reuse an existing physical vmdk with another drive.  Always create a new raw vmdk for each drive you wish to manage in Windows.

Zeroing a drive

While the clean command clears off all partition information in Windows, you can also clean off the drive in MacOS X.  The way to do this is by using dd.  Again, this command is destructive, so be sure you know which drive you are operating on before you press enter.  Once you press enter, the drive will be wiped of data.  Use this section at your own risk.

To clean the drive use the following:

$ dd if=/dev/zero of=/dev/disk1 bs=4096 count=10000

This command writes 10,000 blocks of 4096 zeroed bytes each (about 41 MB) at the start of the drive.  This overwrites any partition information and clears the beginning of the drive.  You may not need to do this, as the diskpart ‘clean’ command may be sufficient.

Using chkdsk

If the drive has become corrupted or is acting in a way you think may be a problem, you can always go back into Windows with the disk1.vmdk connector and run chkdsk on the volume.  You can also use this on any NTFS or FAT32 volume you may have.  You will just need to create a physical vmdk connector and attach it to your Windows SATA controller and make sure MacOS X doesn’t have it mounted. Then, launch VirtualBox and clean it up.

Tuxera

If you are using Tuxera to mount NTFS, once you exit out of Windows with your freshly formatted NTFS volume, Tuxera should immediately see the volume and mount it.  This will show you that NTFS has been formatted properly on the drive.  You can now read and write to this volume as necessary.

Note that this method to format a drive with NTFS is the safest way on Mac OS X.  While there may be some native tools floating around out there, using Windows to format NTFS will ensure the volume is 100% compliant with NTFS and Windows.  Using third party tools not written by Microsoft could lead to data corruption or improperly formatted volumes.

Of course, you could always connect the drive directly to a Windows system and format it that way. ;)


How not to run a business (Part 3) — SaaS edition

Posted in business, cloud computing, computers by commorancy on May 8, 2012

So, we’ve talked about how not to run a general business, let’s get to some specifics. Since software as a service (SaaS) is now becoming more and more common, let’s explore software companies and how not to run these.

Don’t add new features because you can

If a customer is asking for something new, then add that new feature at some appointed future time. Do not, however, think that that feature needs to be implemented tomorrow. On the other hand, if you have conceived something that you think might be useful, do not spend time implementing it until someone is actually asking for it. This is an important lesson to learn. It’s a waste of time to write code that no one will actually use. So, if you think your feature has some merit, invite your existing customers to a discussion by asking them if they would find the proposed feature useful. Your customers have the final say. If the majority of your customers don’t think they would use it, scrap the idea. Time spent writing a useless feature is time wasted. Once written, the code also has to be maintained by someone, which wastes yet more time.

Don’t tie yourself to your existing code

Another lesson to learn is that your code (and app) needs to be both flexible and trashable. Yes, I said trashable. You need to be willing to throw away code and rewrite it if necessary. That means code flows, changes and morphs. It does not stay static. Ideas change, features change, hardware changes, data changes and customer expectations change. As your product matures and requires more and better infrastructure support, you will find that your older code becomes outdated. Don’t be surprised if you find yourself trashing much of your existing code for completely new implementations taking advantage of newer technologies and frameworks. Code that you may have written from scratch to solve an early business problem may now have a software framework that, while not identical to your code, will do what your code does 100x more efficiently. You have to be willing to dump old code and implement new ideas in its place. As an example, early code usually does not take high availability into account. Therefore, gutting old code that isn’t highly available in favor of new frameworks that are is always a benefit to your customers. If there’s anything to understand here, it’s that code is not a pet to get attached to. It provides your business with a point-in-time service set. However, that code set must grow with your customers’ expectations. Yes, this includes total ground-up rewrites.

Don’t write code that focuses solely on user experience

In software-as-a-service companies, many early designs can focus solely on what the code brings to the table for customer experience. The problem is that the design team can become so focused on writing the customer experience that they forget all about the manageability of the code from an operational perspective. Don’t write your code this way. Your company’s ability to support that user experience will suffer greatly from this mistake. Operationally, the code must be manageable, supportable, functional and must also start up, pause and stop consistently. This means, don’t write code so that when it fails it leaves garbage in tables, half-completed transactions with no way to restart the failed transactions or huge temporary files in /tmp. This is sloppy code design at best. At worst, it’s garbage code that needs to be rewritten.

All software designs should plan for both the user experience and the operational functionality. You can’t expect your operations team to become the engineering code janitors. Operations teams are not janitors for cleaning up after sloppy code that leaves garbage everywhere. Which leads to …

Don’t write code that doesn’t clean up after itself

If your code writes temporary tables or otherwise uses temporary mechanisms to complete its processing, clean this up not only on a clean exit, but also during failure conditions. I know of no languages or code that, when written correctly, cannot cleanup after itself even under the most severe software failure conditions. Learn to use these mechanisms to clean up. Better, don’t write code that leaves lots of garbage behind at any point in time. Consume what you need in small blocks and limit the damage under failure conditions.
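As one concrete illustration (a shell sketch; the equivalent in application languages is try/finally blocks or signal handlers), a trap on EXIT guarantees the cleanup runs on normal completion, on errors and on most fatal signals:

```shell
# Create scratch space, and guarantee it is removed on any exit path:
# normal completion, an explicit 'exit 1', or a signal like SIGTERM.
workdir="$(mktemp -d)"
cleanup() { rm -rf "$workdir"; }
trap cleanup EXIT

echo "scratch data" > "$workdir/tmpfile"
# ... real work happens here; whenever the script exits, cleanup() runs
# and $workdir disappears, even if a step above it failed ...
```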

Additionally, if your code needs to run through a series of processing steps, checkpoint those steps. That means saving the checkpoint somewhere. So, if the process fails at step 3 of 5, another process can come along, continue at step 3 and move forward. Leaving half-completed transactions leaves your customers open to user experience problems. Always make sure your code can restart after a failure at the last checkpoint. Remember, user experience isn’t limited to a web interface…
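A checkpoint can be as simple as a small state file recording the last completed step; on restart, the process skips everything at or below that number. A minimal shell sketch (the five-step job and the checkpoint file are invented for illustration):

```shell
# Resume-from-checkpoint sketch: each step records its completion, so a
# rerun after a crash continues at the first unfinished step.
ckpt="$(mktemp)"                       # in real use, a stable per-job path
last="$(cat "$ckpt" 2>/dev/null)"
last="${last:-0}"                      # no checkpoint yet -> start at step 1
for step in 1 2 3 4 5; do
  [ "$step" -le "$last" ] && continue  # already done in a previous run
  echo "running step $step"            # ... real work for this step ...
  echo "$step" > "$ckpt"               # checkpoint only after success
done
```

If the job dies after writing "3", the next run reads the file, sets last=3 and resumes at step 4 instead of redoing steps 1 through 3.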

Don’t think that the front end is all there is to user experience

One of the mistakes that a lot of design teams fall into is thinking that the user experience is tied to the way the front end interacts. Unfortunately, this design approach has failure written all over it. Operationally, the back end processing is as much a user experience as the front end interface. Sure, the interface is what the user sees and how the user interacts with your company’s service. At the same time, what the user does on the front end directly drives what happens on the back end. Seeing as your service is likely to be multiuser capable, what each user does needs to have its own separate allocation of resources on the back end to complete their requests. Designing the back end process to serially manage the user requests will lead to backups when you have 100, 1,000 or 10,000 users online.

It’s important to design both the front end experience and the back end processing to support a fully scalable multiuser experience. Most operating systems today are fully capable of multitasking utilizing both multiprocess and multithreaded support. So, take advantage of these features and run your users’ processing requests concurrently, not serially. Even better, make sure they can scale properly.

Don’t write code that sets no limits

One of the most damaging things you can do for user experience is tell your customers there are no limits in your application. As soon as those words are uttered from your lips, someone will be on your system testing that statement. First by seeing how much data it takes before the system breaks, then by stating that you are lying. Bad from all aspects. The takeaway here is that all systems have limits such as disk capacity, disk throughput, network throughput, network latency, the Internet itself is problematic, database limits, process limits, etc. There are limits everywhere in every operating system, every network and every application. You can’t state that your application gives unlimited capabilities without that being a lie. Eventually, your customers will hit a limit and you’ll be standing there scratching your head.

No, it’s far simpler not to make this statement. Set quotas, set limits, set expectations that data sets perform best when they remain between a range. Customers are actually much happier when you give them realistic limits and set their expectations appropriately. Far fetched statements leave your company open to problems. Don’t do this.

Don’t rely on cron to run your business

Ok, so I know some people will say: why not? Cron, while a decent scheduling system, isn’t without its own share of problems. One of its biggest problems, however, is that its smallest level of granularity is once per minute. If you need something to run more frequently than every minute, you are out of luck with cron. Cron also requires hard-coded entries (crontab lines or scripts placed in specific directories) to function. Cron doesn’t have an API. Cron supports no external statistics other than by digging through log files. Note, I’m not hating on cron. Cron is a great system administration tool, and it works well for relatively infrequent administrative tasks. It’s just not designed to be used under heavy mission-critical load. If you’re doing distributed processing, you will need a more decentralized way to launch jobs anyway, so cron likely won’t work in a distributed environment. Cron also has a propensity to stop working internally while remaining in the process list, so monitoring systems will think it’s working when it’s not actually launching any tasks.
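If you must work around cron’s one-minute floor, the usual pattern is a single per-minute cron entry that runs a short loop, firing the task several times within that minute. A sketch (my_task is a hypothetical stand-in for the real job, and the interval would be 15 in real use, giving four runs per minute):

```shell
# Per-minute cron wrapper that fires a task every $interval seconds.
# my_task is a placeholder; with interval=15 and runs=4 this spans a minute.
my_task() { count=$((count + 1)); echo "run $count"; }
interval=1   # use 15 in the real cron wrapper
runs=4
count=0
i=1
while [ "$i" -le "$runs" ]; do
  my_task
  sleep "$interval"
  i=$((i + 1))
done
```

This is still subject to all of cron’s other weaknesses above; it only papers over the granularity limit.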

If you’re a Windows shop, don’t rely on the Windows Task Scheduler to run your business. Why? The original Task Scheduler shipped as a component of Internet Explorer (IE), and it remains tied to components that Microsoft updates frequently; when those change, the scheduler can stop or fail. Considering the frequency with which Microsoft releases updates to not only the operating system but also to IE, you’d be wise to find another scheduler that is not likely to be impacted by Microsoft’s incessant need to modify the operating system.

Find or design a more reliable scheduler that works in a scalable fault tolerant way.

Don’t rely on monitoring systems (or your operations team) to find every problem, or to find problems in a timely manner

Monitoring systems are designed by humans to find problems and alert. Monitoring systems are, by their very nature, reactive. This means that monitoring systems only alert you AFTER they have found a problem. Never before. Worse, most monitoring systems only alert on problems after multiple checks have failed. This means that not only is the service down, it’s probably been down for 15-20 minutes by the time the system alerts. In this time, your customers may or may not have already seen that something is going on.

Additionally, to monitor any given application feature, the monitoring system needs a window into that specific feature. For example, monitoring Windows WMI components or Windows message queues from a Linux monitoring system is nearly impossible; Linux has no components at all to access the Windows WMI system or Windows message queues. That said, a third-party monitoring system with an agent process on the Windows system may be able to access WMI, but it may not.

Always design your code to provide a window into critical application components and functionality for monitoring purposes. Without such a monitoring window, these applications can be next to impossible to monitor. Better, design using standardized components that work across all platforms instead of relying on platform specific components. Either that or choose a single platform for your business environment and stick with that choice. Note that it is not the responsibility of the operations team to find windows to monitor. It’s the application engineering team’s responsibility to provide the necessary windows into the application to monitor the application.

Don’t expect your operations team to debug your application’s code

Systems administrators are generally not programmers. Yes, they can write shell scripts, but they don’t write code. If your application is written in PHP or C or C++ or Java, don’t expect your operations team to review your application’s code, debug the code or even understand it. Yes, they may be able to review some Java or PHP, but their job is not to write or review your application’s code. Systems administrators are tasked to manage the operating systems and components. That is, to make sure the hardware and operating system are healthy for the application to function and thrive. Systems administrators are therefore not tasked to write or debug your application’s code. Debugging the application is the task for your software engineers. Yes, a systems administrator can find bugs and report them, just as anyone can. Determining why that bug exists is your software engineers’ responsibility. If you expect your systems administrators to understand your application’s code at that level of detail, they are no longer systems administrators; they are software engineers. Keeping job roles separate is important in keeping your staff from becoming overloaded with unnecessary tasks.

Don’t write code that is not also documented

This is a plain and simple programming 101 issue. Yes, it’s very simple. Your software engineers’ responsibilities are to write robust code, but also to document everything they write. That’s their job responsibility and should be part of their job description. If they do not, cannot or are unwilling to document the code they write, they should be put on a performance review plan and, without improvement, walked to the door. Without documentation, reverse engineering their code can take weeks for new personnel. Documentation is critical to your business’s continued success, especially when personnel changes. Think of this like you would disaster recovery. If you suddenly no longer had your current engineers available and you had to hire all new engineers, how quickly could the new engineers understand your application’s code enough to release a new version? This ends up being a make-or-break situation. Documentation is the key here.

Thus, documentation must be part of any engineer’s responsibility when they write code for your company. Code review by management is equally important to ensure that the code not only seems reasonable (i.e., no gotos), but is fully documented and attributed to that person. Yes, the author’s name should be included in comments surrounding each section of code they write, along with the date the code was written. All languages provide ways to comment within the code; require your staff to use them.

Don’t expect your code to test itself or that your engineers will properly test it

Your software engineers are far too close to the code to determine if the code works correctly under all scenarios. Plain and simple, software doesn’t test itself. Use an independent quality testing group to ensure that the code performs as expected based on the design specifications. Yes, always test based on the design specifications. Clearly, your company should have a road map of features and exactly how those features are expected to perform. These features should be driven by customer requests for new features. Your quality assurance team should have a list of all new features going into each release, so they can write thorough test cases well in advance. Then, when the code is ready, they can put the release candidate into the testing environment and run through their test cases. As I said, don’t rely on your software engineers to provide this level of test cases. Use a full quality assurance team to review and sign off on the test cases to ensure that the features work as defined.

Don’t expect code to write (or fix) itself

Here’s another one that would be seemingly self-explanatory. Basically, when a feature comes along that needs to be implemented, don’t expect the code to spring up out of nowhere. You need competent technical people who fully understand the design to write the code for any new feature. But, just because an engineer has actually written code doesn’t mean the code actually implements the feature. Always have test cases ready to ensure that the implemented feature actually performs the way that it was intended.

If the code doesn’t perform what it’s supposed to after having been implemented, obviously it needs to be rewritten so that it does. If the code written doesn’t match the requested feature, the engineer may not understand the requested feature enough to implement it correctly. Alternatively, the feature set wasn’t documented well enough before having been sent to the engineering team to be coded. Always document the features completely, with pseudo-code if necessary, prior to being sent to engineering to write actual code. If using an agile engineering approach, review the progress frequently and test the feature along the way.

Additionally, if the code doesn’t work as expected and is rolled to production broken, don’t expect that code to magically start working or that the production team has some kind of magic wand to fix the problem. If it’s a coding problem, it is a software engineering task to resolve. Whether or not the production team (or even a customer) manages to find a workaround is irrelevant to actually fixing the bug. If a bug is found and documented, fix it.

Don’t let your software engineers design features

Your software engineers are there to write code based on features derived from customer feedback. Don’t let your software engineers write code for features not on the current road map. This is a waste of time and, at the same time, doesn’t help get your newest release out the door. Make sure that your software engineers remain focused on the current set of features destined for the next release. Focusing on anything other than the next release could delay that release. If you want to stick to a specific release date, always keep your engineers focused on the features destined for the latest release. Of course, fixing bugs from previous releases is also a priority, so make sure they have enough time to work on these while still coding the newest release. If you have the manpower, focus some people on bug fixing and others on new features. If the code is documented well enough, a separate bug-fixing team should have no difficulties creating patches to fix bugs from the current release.

Don’t expect to create 100% perfect code

So, this one almost goes without saying, but it needs to be said: nothing is ever bug free. This section is here to illustrate why you need to design your application using a modular patching approach. It goes back to operations manageability (as stated above). Design your application so that code modules can be drop-in replaced easily while the code is running. This means the operations team (or whoever is tasked with your patching) simply drops a new code file in place, tells the system to reload, and within minutes the new code is operating. Modular drop-in replacement while running is the only way to prevent major downtime (assuming the code is fully tested). As a SaaS company, you should always design your application with high availability in mind. Full code releases, on the other hand, should have a separate installation process from drop-in replacement. Although, if you would like to utilize the dynamic patching process for more agile releases, this is definitely an encouraged design feature. The more easily you design manageability and rapid deployment into your code for the operations team, the fewer operations people you need to manage and deploy it.
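As a sketch of what drop-in replacement can look like in practice, here is a minimal Python example that hot-swaps a running module with `importlib.reload`. The `billing` module, its file layout and its contents are hypothetical stand-ins for a real deployable code module; a production system would add locking and validation around the reload.

```python
# Minimal sketch: a running service reloads a patched code module from disk
# without a restart. The "billing" module name and contents are hypothetical.
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # always recompile from source on reload

# Simulate a deployed code module on disk.
plugin_dir = tempfile.mkdtemp()
module_path = pathlib.Path(plugin_dir) / "billing.py"
module_path.write_text("VERSION = 1\ndef rate(amount):\n    return amount * 0.10\n")

sys.path.insert(0, plugin_dir)
billing = importlib.import_module("billing")
assert billing.VERSION == 1

# Operations drops a patched file in place while the service keeps running...
module_path.write_text("VERSION = 2\ndef rate(amount):\n    return amount * 0.12\n")

# ...then tells the system to reload; the new code is live with no downtime.
importlib.reload(billing)
assert billing.VERSION == 2
```

The key design point is that the rest of the application holds a reference to the module, not to its functions, so a reload is picked up on the next call.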

Without the distractions of long, involved release processes, the operations team can focus on hardware design, implementation and general growth of the operations processes. The more distractions your operations team has with bugs, fixing bugs, patching bugs and general code-related issues, the less time they have to spend on the infrastructure side to make your application perform its best. As well, the operations team also has to keep up with operating system patches, software releases, software updates and security issues that may affect your application or the security of your users’ data.

Don’t overlook security in your design

Many people who write code implement a feature without a thought to security. I’m not necessarily talking about blatantly obvious things like requiring logins and passwords to get into your system. Although, if you don’t have those, you need to add them; logins are clearly required if you want multiple users on your system at once. No, I’m discussing the more subtle but damaging security problems such as cross-site scripting or SQL injection attacks. Always have your site’s code thoroughly tested against a suite of security tools prior to release, and fix any security problems revealed before rolling that code out to production. Don’t wait until the code rolls to production to fix security vulnerabilities. If your quality assurance team isn’t testing for security vulnerabilities as part of the QA sign-off process, then you need to rethink and restructure your QA testing methodologies. Otherwise, you may find yourself becoming the next Sony PlayStation Store news headline at Yahoo News or CNN. You don’t want this type of press for your company, and you don’t want your company to be known for losing customer data.
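To make the SQL injection class of bug concrete, here is a minimal sketch using Python’s built-in sqlite3 module. The `users` table and the attack string are purely illustrative; the fix is the same in any language and database: parameterized queries, never string concatenation of user input.

```python
# Sketch of SQL injection and its fix, using the stdlib sqlite3 module.
# Table, columns and the attack string are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

malicious = "nobody' OR '1'='1"

# Vulnerable: concatenating user input into the query returns every row.
leaked = db.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'").fetchall()
assert len(leaked) == 2  # the attacker sees all users

# Safe: a parameterized query treats the input as data, not as SQL.
safe = db.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
assert safe == []
```

A security test suite should exercise exactly this kind of input against every query path before QA sign-off.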

Additionally, you should always store user passwords and other sensitive user data in one-way hashed form. You can store the last 4 digits of a social security number or account number in clear text, but do not store the whole number in plain text, with two-way encryption, or in an easily cracked form (such as an unsalted MD5 hash). Always use a reasonably strong, salted one-way hashing algorithm to store sensitive data. If you need access to that data, this will require the user to enter the whole string to unlock whatever it is they are trying to access.
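As a sketch of one-way storage, the example below uses salted PBKDF2 from Python’s standard library. The function names and the iteration count are illustrative choices, not a prescription; tune the iteration count to your own hardware.

```python
# Sketch of salted, one-way password storage with stdlib PBKDF2.
# Only (salt, digest) are stored; the password itself never is.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative; tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage."""
    salt = os.urandom(16)  # a fresh random salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Recompute and compare in constant time; the original can't be recovered.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

Because the hash is one-way, even a full database leak exposes no usable passwords, which is exactly the property two-way encryption cannot give you.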

Don’t expect your code to work on terabytes of data

If you’re writing code that manages SQL queries or, more specifically, are constructing SQL queries based on some kind of structured input, don’t expect your query to return in a timely fashion when run against gigabytes or terabytes of data, thousands of columns, or billions of rows or more. Test your code against large data sets. If you don’t have a large data set to test against, find or build one. Plain and simple: if you can’t replicate your biggest customers’ environments in your test environment, then you cannot test all edge cases against the code that was written. SQL queries incur heavy penalties against large data sets because of the explain plans and statistics tables that must be built. If you don’t test your code, you may find that these statistics tables are not at all built the way you expect, and the query takes 4,000 seconds instead of 4 seconds to return.
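A small illustration of the planner problem, using SQLite as a stand-in for a production database. The table, column and index names are made up; the point is the before-and-after difference in the query plan, which is invisible until you test against a realistically sized data set.

```python
# Sketch: the same query does a full-table scan until an index exists.
# SQLite stands in for a production database; names are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(i % 1000, i * 0.5) for i in range(100_000)])

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

# Without an index, the planner must scan every one of the 100,000 rows.
plan = db.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
assert "SCAN" in plan

# With an index, the planner seeks directly to the matching rows.
db.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = db.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
assert "USING INDEX" in plan
```

On a few hundred test rows both plans finish instantly, which is precisely why large-data testing is the only way to catch the slow one.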

Alternatively, if you’re using very large data sets, it might be worth exploring technologies such as Hadoop and Cassandra instead of traditional relational databases like MySQL to handle these large data sets more efficiently. Hadoop and Cassandra are NoSQL implementations, however, so you forfeit the use of structured queries to retrieve the data, but very large data sets can be randomly accessed and written, in many cases, much faster than with SQL ACID database implementations.

Don’t write islands of code

You would think in this day and age that people would understand how frameworks work. Unfortunately, many people don’t and continue to write code that isn’t library or framework based. Let’s get you up to speed on this topic. Instead of writing little disparate islands of code, roll the code up under shared frameworks or shared libraries. This allows other engineers to use and reuse that code in new ways. If it’s a new feature, it’s possible that another bit of unrelated code may need to pull some data from an earlier implemented feature. Frameworks are a great way to ensure that reusing code is possible without reinventing the wheel or copying and pasting code all over the place. Reusable libraries and frameworks are the future. Use them.

Of course, these libraries and frameworks need to be fully documented, with specifications of the calls, before they can be reused by other engineers in other parts of the code. So, documentation is critical to code reuse. Better yet, object-oriented programming allows not only reuse but inheritance. You can inherit an object in its template form and add your own custom additions to expand its usefulness.
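A minimal sketch of that kind of reuse, with hypothetical `Report` classes standing in for a shared library: the base class carries the documented template, and feature code inherits and extends it instead of copy-pasting.

```python
# Sketch of reuse through inheritance. All class and method names are
# illustrative stand-ins for a documented shared-library API.

class Report:
    """Shared-library base: renders a titled report. Subclass and override body()."""

    def __init__(self, title: str):
        self.title = title

    def body(self) -> str:
        return "(empty report)"

    def render(self) -> str:
        # Common layout lives in one place; every subclass inherits it.
        return f"== {self.title} ==\n{self.body()}"

class SalesReport(Report):
    """Feature code: inherits the template and adds its own custom body."""

    def __init__(self, title: str, totals: dict[str, float]):
        super().__init__(title)
        self.totals = totals

    def body(self) -> str:
        return "\n".join(f"{region}: {amount:.2f}"
                         for region, amount in self.totals.items())

# The common header/footer logic was never rewritten, only inherited.
print(SalesReport("Q3 Sales", {"east": 1200.0, "west": 800.5}).render())
```

Any bug fixed in `Report.render` is automatically fixed for every subclass, which is the maintenance payoff of putting shared code in one framework rather than in islands.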

Don’t talk and chew bubble gum at the same time

That is, don’t try to be too grandiose in your plans. Your team has limited time between the start of a development cycle and the roll out of a new release. Make sure that your feature set is compatible with this deadline. Sure, you can throw everything in including the kitchen sink, but don’t expect your engineering team to deliver on time or, if they do actually manage to deliver, that the code will work half as well as you expect. Instead, pare your feature sets down to manageable chunks. Then, group the chunks together into releases throughout the year. Set expectations that you want a certain feature set in a given release. Make sure, however, that the feature set is attainable in the time allotted with the number of engineers you have on staff. If you have a team of two engineers and a development cycle of one month, don’t expect these engineers to implement hundreds of complex features in that time. Be realistic, but at the same time, know what your engineers are capable of.

Don’t implement features based on one customer’s demand

If someone made a sales promise to deliver a feature to one, and only one, customer, you’ve made a serious business mistake. Never promise an individual feature to an individual customer. While you may be able to retain that customer by implementing that feature, you will run yourself and the rest of your company ragged trying to fulfill the promise. Worse, that customer has no loyalty to you. So, even if you expend a two-to-three-week, day-and-night coding frenzy to meet the customer’s requirement, the customer will not be any more loyal to you after you have released the code. Sure, it may make the customer briefly happy, but at what expense? You likely won’t keep this customer much longer anyway. By the time you’ve gotten to this level of desperation with a customer, they are likely already on the way out the door. These crunch requests are usually last-ditch efforts at customer retention and customer relations. Worse still, the company runs itself ragged desperately trying to roll out this new feature, almost completely ignoring all other customers and projects needing attention, yet these harried features end up as such customized one-offs that no other customer can even use them without a major rewrite. So, the code is effectively useless to anyone other than the requesting customer, who is likely within inches of terminating their contract. Don’t do it. If your company gets into this desperation mode, you need to stop and rethink your business strategy and why you are in business.

Don’t forget your customer

You need to hire a high quality sales team that is attentive to customer needs. But, more than this, they need to periodically talk to your existing clients on customer relations terms. Basically, ask the right questions and determine if the customer is happy with the services. I’ve seen many cases where a customer appears completely happy while, in reality, they have either been shopping around or have been approached by the competition and wooed away with a better deal. You can’t assume that any customer is so entrenched in your service that they won’t leave. Instead, your sales team needs to take a proactive approach and reach out to customers periodically to get feedback, determine needs and ask if they have any questions regarding their services. If a contract is within 3 months of renewal, the sales team needs to be on the phone discussing renewal plans. Don’t wait until a week before the renewal to contact your customers. By a week out, it’s likely that the customer has already been approached by the competition and it’s far too late to participate in any vendor review process. You need to know when the vendor review process happens and always submit yourself to that process for continued business consideration from that customer. Just because a customer has a current contract with you does not make you a preferred vendor. Don’t blame the customer if you weren’t included in the vendor review and purchasing process; it’s your sales team’s job to find out when vendor reviews commence.

Part 2 | Chapter Index | Part 4


Amazon Kindle: Buyer’s Security Warning

Posted in best practices, computers, family, security, shopping by commorancy on May 4, 2012

If you’re thinking of purchasing a Kindle or Kindle Fire, beware. Amazon registers the Kindle to your account in advance, while the item is still being shipped. What does that mean? It means the device is ready to make purchases from your account without even being in your possession. Amazon does this to make it ‘easy’. Unfortunately, this is a huge security risk. You need to take some precautions before the Kindle arrives.

Why is this a risk?

If the package gets stolen, it is not only a hassle to get the device replaced; it means the thief can rack up purchases from your Amazon account on your registered credit card without you being immediately aware. The bigger security problem, however, is that the Kindle does not require a login and password to purchase content. Once registered to your account, the device has already been given consent to purchase without any further security. Because the Kindle does not require a password to purchase content, unlike the iPad, which asks for a password before a purchase, the Kindle can easily charge content to your credit card without any further prompts. You will only find out about the purchases after they have been made, through email receipts. At that point, you will have to dispute the charges with Amazon and, likely, with your bank.

This is bad on many levels, but it’s especially bad while the item is in transit, before you receive the device in the mail. If the device is stolen in transit, your account could end up being charged for content by the thief, as described above. Also, if you have a child you would like to let use the device, they can easily make purchases too, because the device is registered and requires no additional passwords. They just click, and you’ve bought.

What to do?

When you order a Kindle, you will want to find and de-register that Kindle (it may take 24 hours before it appears) until it safely arrives in your possession and is working as you expect. You can find the Kindles registered to your account by clicking (from the front page while logged in) the ‘Your Account->Manage Your Kindle‘ menu, then clicking ‘Manage Your Devices‘ in the left side panel. From here, look for any Kindles you may have recently purchased, click ‘Deregister’ and follow through any prompts. You can re-register the device when it arrives.

If you’re concerned that your child may make unauthorized purchases, either don’t let them use your Kindle or de-register the Kindle each time you give the device to your child. They can use the content that’s on the device, but they cannot make any further purchases unless you re-register the device.

Kindle as a Gift

Still a problem. Amazon doesn’t treat gift purchases any differently. If you are buying a Kindle for a friend, co-worker or even as a giveaway for your company’s party, you will want to explicitly find the purchased Kindle in your account and de-register it. Otherwise, the person who receives the device could potentially rack up purchases on your account without you knowing.

Shame on Amazon

Amazon should stop this practice of pre-registering Kindles pronto. A Kindle should only be registered after the device has arrived in the possession of its rightful owner. Then, and only then, should the device be registered to the consumer’s Amazon account as part of the setup process, using an authorized Amazon login and password (or in the Manage Your Devices section of the Amazon account). The consumer should be the sole party responsible for authorizing devices on their account. Pre-registering devices before the item ships is a bad practice and a huge security risk to the holder of the Amazon account who purchased the Kindle. It also makes gifting Kindles extremely problematic. Amazon, it’s time to stop this bad security practice or place more security mechanisms on the Kindle before a purchase can be made.


When Digital Art Works Infringe

Posted in 3D Renderings, art, best practices, computers, economy by commorancy on March 12, 2012

What is art?  Art is an image expression created by an individual using some type of media.  Traditional media typically includes acrylic paint, oil paint, watercolor, clay or porcelain sculpture, screen printing, metal etching and printing, or any other tangible type of media.  Art can also be made from found objects such as bicycles, inner tubes, paper, trash, tires, urinals or anything else that can be found and incorporated.  Sometimes the objects are painted, sometimes not.  Art is the expression once it has been completed.

Digital Art

So, what’s different about digital art?  Nothing really.  Digital art is still based on digital assets, including software and 3D objects, used to produce pixels in a 2D format that depicts an image.  Unlike traditional media, digital media is limited to flat 2D imagery when complete (unless printed and turned into a real world object… which then becomes another form of ‘traditional found art media’ as listed above).

Copyrights

What are copyrights?  Copyrights are rights to copy a given likeness of something, restricting usage to only those who have permission.  That is, an object or subject, either real-world or digital-world, has been created by someone, and any likeness of that subject is considered copyrighted.  This has also extended to celebrities, in that their likenesses can be controlled by the celebrity.  That is, the likeness of a copyrighted subject is controlled strictly by the owner of the copyright.  Note that copyrights are born as soon as the object or work exists.  These are implicit copyrights.  These rights can be explicitly registered by submitting a form to the U.S. Copyright Office or similar agencies in other parts of the world.

Implicit or explicit, copyrights are there to restrict usage of that subject to those who wish to use it for their own gain.  Mickey Mouse is a good example of a copyrighted property.  Anyone who creates, for example, art containing a depiction of Mickey Mouse is infringing on Disney’s copyright if no permission was granted before usage.

Fair Use

What is fair use?  Fair use is supposed to allow use of copyrighted works without permission.  Unfortunately, what’s considered fair use is pretty much left up to the copyright owner to decide.  If the copyright holder decides that a depiction is not fair use, it can be challenged in a court of law.  This pretty much means that any depiction of any copyrighted character, subject, item or thing can be challenged in a court of law by the copyright holder at any time.  In essence, fair use is a nice concept, but it doesn’t really exist in practice.  There are clear cases where a judge will decide that something is fair use, but only after ending up in court.  Basically, fair use should be defined so clearly and completely that, when something is used within those constraints, no court is required at all.  Unfortunately, fair use isn’t defined that clearly.  Copyrights leave anyone attempting to use a copyrighted work at the mercy of the copyright holder in all cases except when permission is granted explicitly in writing.

Public Domain

Public domain means there is no copyright.  That is, the copyright no longer exists, and the work can be freely used, given away, sold or copied in any way, by anyone, without permission.

3D Art Work

When computers first gained reasonable graphics, paint packages became common.  That is, a way to push pixels around on the screen to create an image.  At first, most of the usage of these packages was for utility (icons and video games).  Inevitably, this media evolved to mimic real world tools such as chalk, pastels, charcoal, ink, paint and other media.  But, these paint packages were still simply pushing pixels around on the screen in a flat way.

Enter 3D rendering.  These packages now mimic 3D objects in a 3D space.  These objects are placed into a 3D world and then effectively ‘photographed’.  So, 3D art has more in common with photography than it does painting.  But, the results can mimic painting through various rendering types.  Some renderers can simulate paint strokes, cartoon outlines, chalk and other real world media.  However, instead of just pushing pixels around with a paint package, you can load in 3D objects, place them and then ‘photograph’ them.

3D objects, Real World objects and Copyrights

All objects become copyrighted by the people who create them.  So, a 3D object may or may not need permission for usage (depending on how it was copyrighted).  However, when dealing with 3D objects, the permissions for usage are usually limited to copying and distribution of said objects.  Copyright does not generally cover creating a 3D rendered likeness of an object (unless, of course, the likeness happens to be of Mickey Mouse), in which case it isn’t the object that’s copyrighted, but the subject matter.  This is the gray area surrounding the use of 3D objects.  In the real world, you can run out and take a picture of your Lexus and post it on the web without any infringement.  In fact, you can sell your Lexus to someone else, because of the First Sale Doctrine, even though that object may be copyrighted.  You can also sell the photograph you took of your Lexus because it’s your photograph.

On the other hand, if you visit Disney World and take a picture of a costumed Mickey Mouse character, you don’t necessarily have the right to sell that photograph.  Why?  Because Mickey Mouse is a copyrighted character and Disney holds the ownership on all likenesses of that character.  You also took the photo inside the park which may have photographic restrictions (you have to read the ticket). Yes, it’s your photograph, but you don’t own the subject matter, Disney does.  Again, a gray area.  On the other hand, if you build a costume from scratch of Mickey Mouse and then photograph yourself in the costume outside the park, you still may not be able to sell the photograph.  You can likely post it to the web, but you likely can’t sell it due to the copyrighted character it contains.

In the digital world, these same ambiguous rules apply with even more exceptions.  If you use a 3D object of Mickey Mouse that you either created or obtained (it doesn’t really matter which because you’re not ultimately selling or giving away the 3D object) and you render that Mickey Mouse character in a rendering package, the resulting 2D image is still copyrighted by Disney because it contains a likeness of Mickey Mouse.  It’s the likeness that matters, not that you used an object of Mickey Mouse in the scene.

Basically, the resulting 2D image and the likeness it contains is what matters here.  If you happened to make the 3D object of Mickey Mouse from scratch (to create the 2D image), you’re still restricted.  You can’t sell that 3D object of Mickey Mouse either.  That’s still infringement.  You might be able to give it away, though Disney could still balk, as it is unlicensed.

But, I bought a 3D model from Daz…

“am I not protected?” No, you’re not.  If you bought a 3D model of the likeness of a celebrity or of a copyrighted character, you are still infringing on that copyrighted property without permission.  Even if you use Daz’s own Genesis, M4 or other similar models, you could still be held liable for infringement from the use of those models.  Daz grants usage of their base models in 2D images.  If you dress the model up to look like Snow White or Cruella de Vil from Disney’s films, these are Disney-owned copyrighted characters.  If you dress them up to look like Superman, same story from Warner Brothers.  Daz’s protections extend only to the base figure they supply, not to what you create once you dress and modify them.

The Bottom Line

If you are an artist and want to use any highly recognizable copyrighted characters like Mickey Mouse, Barbie, G.I. Joe, Spiderman, Batman or even current celebrity likenesses of Madonna, Angelina Jolie or Britney in any of your art, you could be held accountable for infringement as soon as the work is sold.  It may also be considered infringement if the subject is used in a way that is inappropriate or inconsistent with the character’s personality.  The Andy Warhol days of using celebrity likenesses in art are over (unless you explicitly commission a photograph of the subject and obtain permission to create the work).

It doesn’t really matter that you used a 3D character to simulate the likeness or who created that 3D object, what matters is that you produced a likeness of a copyrighted character in a 2D final image.  It’s that likeness that can cause issues.  If you intend to use copyrighted subject matter of others in your art, you should be extra careful with the final work as you could end up in court.

With art, it’s actually safer not to use recognizable copyrighted people, objects or characters in your work.  With art, it’s all about imagination anyway.  So, use your imagination to create your own copyrighted characters.  Don’t rely on the works of others to carry your artwork as profit motives are the whole point of contention with most copyright holders anyway.  However, don’t think you’re safe just because you gave the work away for free.

3D TV: Flat cutouts no more!

Posted in computers, entertainment, movies, video gaming by commorancy on February 18, 2012

So, I’ve recently gotten interested in 3D technology. Well, not recently exactly; 3D technologies have always fascinated me, even back in the blue-red glasses days. However, since there are new technologies that better take advantage of 3D imagery, I’ve recently taken an interest again. My interest was additionally sparked by the purchase of a Nintendo 3DS. With the 3DS, you don’t need glasses, as the technology uses small louvers to block out the image to each eye.  This is similar to lenticular technologies, but it doesn’t use prisms.  Instead, small louvers block light to each eye.  Not to get into too many technical details, but the technology works reasonably well; it simply requires viewing the screen at a very specific angle or the effect breaks down.  For portable gaming, it works ok, but because of the very specific viewing angle, it breaks down further when the action in the game gets heated and you start moving the unit around.  So, I find that I’m constantly shifting the unit to get it back into the proper position, which is, of course, very distracting when you’re trying to concentrate on the game itself.

3D Gaming

On the other hand, I’ve found that with the Nintendo 3DS, the games appear truly 3D.  That is, the objects in the 3D space appear geometrically correct.  Boxes appear square.  Spheres appear round.  Characters appear to have the proper volumes and shapes and move around in the space properly (depth-perception wise).  All appears to work well with 3D games.  In fact, the marriage of 3D display technology with 3D games works very well.  Although, because of the specific viewing angle, the jury is still out on whether it actually enhances the game play enough to justify it.  However, since you can turn the 3D off or adjust the effect to be stronger or weaker, you can do some things to reduce the specific viewing angle problem.

3D Live Action and Films

On the other hand, I’ve tried viewing 3D shorts filmed with actual cameras.  For whatever reason, filmed 3D doesn’t work at all.  I’ve come to realize that while 3D gaming calculates the vectors exactly in space, with a camera you’re capturing two 2D images only slightly apart.  So, you’re not really sampling enough points in space; you’re just marrying two flat images taken a specified distance apart.  As a result, this 3D doesn’t truly appear to be 3D.  In fact, what I find is that this type of filmed 3D ends up looking like flat parallax planes moving in space.  That is, people and objects end up looking like flat cardboard cutouts.  These cutouts appear to be placed in space at a specified distance from the camera.  It kind of reminds me of a moving shadowbox.  I don’t know why this is, but it makes filmed 3D far less than impressive; it appears fake and unnatural.

At first, I thought this to be a problem with the size of the 3DS screen.  In fact, I visited Best Buy and viewed a 3D film on both a large Samsung and Sony monitor.  To my surprise, the filmed action still appeared as flat cutouts in space.  I believe this is the reason why 3D film is failing (and will continue to fail) with the general public.  Flat cutouts that move in parallax through perceived space just doesn’t cut it. We don’t perceive 3D in this way.  We perceive 3D in full 3D, not as flat cutouts.  For this reason, this triggers an Uncanny Valley response from many people.  Basically, it appears just fake enough that we dismiss it as being slightly off and are, in many cases, repulsed or, in some cases, physically sickened (headaches, nausea, etc).

Filmed 3D translated to 3D vector

To resolve this flat cutout problem, film producers will need to add an extra step in their film process to make 3D films actually appear 3D when using 3D glasses.  Instead of just filming two flat images and combining them, the entire filming and post-processing pipeline needs to be reworked.  The 2D images will need to be mapped onto a 3D surface in a computer.  These 3D environments are then ‘re-filmed’ into left and right eye information from the computer’s vector information.  Basically, the film will be turned into 3D models and filmed as a 3D animation within the computer.  This will effectively turn the film into a 3D vector video game cinematic.  Once mapped into a computer 3D space, this should immediately resolve the flat cutout problem, as now the scene is described by points in space and can be captured properly, much the way the video game works.  So, the characters and objects now appear to have volume along with depth in space.  There will need to be some care taken in the conversion from 2D to 3D, as it could look bad if done wrong.  But, done correctly, this will completely enhance the film’s 3D experience and reduce the Uncanny Valley problem.  It might even resolve some of the issues causing people to get sick.

In fact, it might even be better to store the film into a format that can be replayed by the computer using live 3D vector information rather than baking the computer’s 3D information down to 2D flat frames to be reassembled later. Using film today is a bit obsolete anyway.  Since we now have powerful computers, we can do much of this in real-time today. So, replaying 3D vector information overlaid with live motion filmed information should be possible.  Again, it has the possibility of looking really bad if done incorrectly.  So, care must be taken to do this properly.

Rethinking Film

Clearly, to create a 3D film properly, as a filmmaker you’ll need to film the entire scene with not just 2 cameras, but at least 6-8, either in a full 360 degree rotation or at least 180 degrees.  You’ll need this much information for the computer to translate the footage into a believable model.  A model that can be rotated around using cameras placed in this 3D space so it can be ‘re-filmed’ properly.  Once the original filmed information is placed onto the extruded 3D surface and the film is animated onto these surfaces, the 3D will come alive and will really appear to occupy space.  So, when translated to a 3D version of the film, it no longer appears like flat cutouts and now appears to have true 3D volumes.

In fact, it would be best to have a computer translate the scene you’re filming into 3D information as you are filming.  This way, you have the vector information from the actual live scene rather than trying to extrapolate this 3D information from 6-8 cameras of information later.  Extrapolation introduces errors that can be substantially reduced by getting the vector information from the scene directly.

Of course, this isn’t without cost because now you need more cameras and a filming computer to get the images to translate the filmed scene into a 3D scene in the computer.  Additionally, this adds the processing work to convert the film into a 3D surface in the computer and then basically recreate the film a second time with the extruded 3D surfaces and cameras within the 3D environment.  But, a properly created end result will speak for itself and end the flat cutout problem.

When thinking about 3D, we really must think truly in 3D, not just as flat images combined to create stereo.  Clearly, the eyes aren’t tricked that easily and more information is necessary to avoid the flat cutout problem.

iPad: One year later…

Posted in Apple, cloud computing, computers, ipad by commorancy on May 8, 2011

The iPad was introduced very close to this time last year.  Now that the iPad 2 is out, let’s see how well it’s going for Apple and for this platform as a whole.

Tablet Format

The tablet format seems like it should be a well-adopted platform. But, does the iPad (or any tablet) really have many use cases?  Yes, but not where you think. I'm not sure Apple even knew the potential use cases for a tablet format before releasing it. Apple just saw that they needed a netbook competitor, so they decided to go with the iPad. I am speculating that Apple released it with as wide an array of software and development tools as possible to see exactly where it could go. After all, they likely had no idea if it would even take off.

Yes, the iPad has seen wide (and wild) adoption.  Although, market saturation is probably close at hand given the number of iPads sold combined with the Android tablet entries (Samsung's Galaxy S, Toshiba's tablet and other tablets out or about to be released).  That is, those people who want a tablet can now have one. But, the main question is: what are most people using a tablet for?

My Usage

I received an iPad as a gift (the original iPad, not the iPad 2). I find myself using it at work first and foremost to take notes. I can also use it as a systems admin tool in a pinch. So, instead of carrying paper and pencil into a meeting, I take notes in the notepad app. This is actually a very good app for taking quick notes. Tap typing is nearly silent, so no clicky key noises or distracting pencils. The good thing, though, is that these notes sync with Gmail, so you can read all your notes there. You can't modify the notes in Gmail, but at least you have them there. You can modify them on the iPad, though.  You can also sync your notes to other places as well.

My second use case is watching movies. So, I have put nearly my entire collection of movies on the iPad. Of course, they don’t all fit in 32GB, so I have to pick and choose which ones get loaded. The one thing the iPad needs, for this purpose, is more local storage. I’d like to have a 128GB or 256GB storage system for the iPad. With that amount of space, I could probably carry around my entire movie collection. In fact, I’d forgo the thinness of the iPad 2 by adding thickness to support a solid state 256GB drive.

The rest of my use cases involve reading email and searching and, sometimes, listening to music… although, I have an iPod touch for that.  I might listen to music more if it had a 256GB solid state drive.

Cloud Computing and Google

This article would be remiss not to discuss competition to the iPad.  There is one thing about Google's Android platform that should be said: Android is completely integrated with Google's platform.  Apple's iPad isn't.  Google has a huge array of already functional and soon-to-be-released cloud apps that Android can take advantage of.  Apple, on the other hand, is extremely weak on cloud apps.  The only cloud app they have is the iTunes store, and that is really a store, not a cloud app.  So, excluding iTunes, there really aren't any cloud platforms for Apple's devices.  That's not to say that the iPad is excluded from Google; it's just not nearly as integrated as an Android tablet.

Eventually, Android may exceed the capabilities of Apple's iOS platform.  In some ways, it already has (at least in cloud computing offerings).  However, Android is still quite a bit buggier than iOS. iOS's interface is much more streamlined, slick and consistent.  The touch typing system is far easier to use on an iPad than on Android. Finally, the graphics performance on Android is incredibly bad. With Android, scrolling and movement are choppy, with an extremely slow frame rate.  Apple's interface is much more fluid and smooth and uses a high frame rate.  The transitions between various apps are clean and smooth in iOS, but not in Android.  Google must address this graphics performance issue in Android.  A choppy, slow interface is not pretty and makes the platform seem cheap and underpowered.  Worse, the platform is inconsistent from manufacturer to manufacturer (icons differ between manufacturers).  Google has to address these performance and consistency issues to bring Android to the level where it needs to be.

Apple’s Blinders

That said, the iPad (or more specifically, Apple) needs to strengthen its cloud offerings.  If that means partnering with Google, then so be it.  Apple needs something like Google Docs and Google Voice.  It also needs cloud storage.  It needs to create Apple-branded offerings that integrate with the iPad natively, not as third-party add-ons through the app store.  This is what Apple needs to work on.  Apple is so focused on its hardware and making the next device that it's forgetting it needs to support its current devices with more and better cloud offerings.  This is what may lead Apple out of the tablet race. This may also be what makes Google the leader in this space.

So, what things do you use your iPad for?

Let’s Find Out

Poll 1 Poll 2
Tagged with: , , ,

A call to boycott ABC’s V series

Posted in computers, entertainment, itunes, science fiction, streaming media, TV Shows by commorancy on January 20, 2011

[Update: V has been cancelled as of May 13th. Bye 'V'.]

I have personally decided to boycott watching the new V series. No, not because the series isn't good. It's a reasonably good series, so far. No, it's also not for any creative or story reasons you might think. The reason I have decided to boycott the V series is that whoever owns the rights to or produces this series has decided to no longer allow streaming of new episodes in any form or on any Internet site, like Hulu or iTunes.

No more V on Hulu?

It’s not just Hulu that’s cut out of streaming for this show. It’s all streaming sites including ABC’s very own ABC.com site. You would think that since ABC owns the broadcast rights to the series and, in fact, are the ones who make the very decision whether V lives or dies as a series, that ABC would have the rights to stream this program online. No, apparently they do not. Very odd. It’s also not available on iTunes or Amazon either.

It almost seems like the producers are biting the hand that feeds them (in more ways than just one). Seriously, not even allowing ABC.com to stream episodes of V on their own site? This seems like the kiss of death for this series.

Rationale behind this decision

I have no inside scoop here, so I really have no idea what the producers were thinking. But, I can only guess that the reasoning is to force viewers to watch the show live on ABC (the TV channel) and only on the TV channel for its first run. So, on the one hand, this seems like a ratings bonanza. On the other hand, let’s explore the downside of this decision.

Viewer Demographics

Because V is very much a long continuous story arc format, if you miss even two episodes, you’re hopelessly lost. V isn’t a one-off monster-of-the-week series where you can watch an episode now and then. No, it is a long deep story arc that needs to be watched one episode at a time in order.

On top of the long story arc format, it is a science fiction program involving heavy uses of technology and intrigue. This genre choice automatically limits the types of viewers. So, the types of viewers that V tends to draw in are those who tend to be younger, tech savvy, internet knowledgeable types. Basically, the kind of viewers who tend to watch things on Hulu and download content from iTunes.

Producer miscalculation

So, on the one hand, the appearance is that this decision should allow the program to get higher ratings by forcing people to watch it live. On the other hand, Hulu and iTunes (and others) no longer have the rights to carry the back catalog of episodes to allow people to catch up. If viewers can’t catch up, they’ll not watch it live either. If you get lost, there is no reason to watch as you can’t understand what’s going on anyway. So, turn the channel and watch something else.

By alienating the exact demographic who tends to watch programs on Hulu combined with the lack of back catalog of episodes on Hulu for people to catch up with missed episodes, my guess is that this decision will seriously backfire on the producers. The ratings will, instead, drop and drop precipitously as the season progresses. In fact, I’d venture to guess that this decision may, in fact, be the sole reason for the death of this series. It’s clear that ABC won’t keep V on the air without viewers. We know that. But, you can’t keep viewers watching V by trying to appeal to the wrong demographic or by pissing on the fan base.

The streaming and Internet genie is out of the bottle. You can’t go back to a time before the Internet and Hulu existed. The producers seriously need to understand this. It’s unfortunate that the producers chose V for this experiment. So far, V appears to be a good series and is probably worth watching. But, the producers also need to realize that removing choices of where and how this program can be viewed is not the answer. You need more viewers, not less.

Underground distribution

Of course, that just means that people will create xvids or mp4s of the show and distribute them via torrents. Instead of seeing legitimate views on legitimate sites with legitimate ad revenue, the whole thing now gets pushed underground where there is no ad revenue and views don’t help the show or the producers at all. Not smart. Not smart at all.

What is the answer?

The answer lies with Nielsen Ratings. In a time when streaming and instant (day-after) releases are nearly commonplace, Nielsen still has no strategy to cover this media with ratings. TV ratings are still counted only by live views. This company is seriously antiquated. It still relies solely on active Nielsen households watching programs live. Hulu views, DVR views and iTunes downloads do not count towards viewership or ratings. Yet, these 'day after' views can be just as relevant (or even more so) today than live views. Today, counting only live views is fundamentally wrong.

Change needs to come from the ratings companies, not from producers trying to force the 70s viewing style in 2011. Nielsen needs to count all views of a program no matter where or when they happen. The ratings game needs to change and must change to accommodate the future of TV. As TVs become Internet connected, this change will become even more important. Eventually, TV programming will be seamlessly delivered over the Internet. In fact, there will come a time when you 'tune in' and you won't even know if it's streamed or over the air. In fact, why should you care? A view is a view, whether live or a month later.

Understanding Neilsen’s antiquated system

Of course, once you understand Nielsen's outdated model, you can also understand why Nielsen is not counting any ratings other than live TV. Why is that? Because counting any medium other than live TV threatens the very existence of Nielsen's service. Once broadcasters realize they can gather these numbers directly through Hulu, Roku, Slingbox, Netflix and other DVR and on-demand technologies, there is no need for Nielsen. That is, once we've moved to 100% streaming TV, it's easy to get accurate counts. Nielsen's service was born out of the need to track viewers in a time when the Internet did not exist. With the Internet, it's much easier to track viewer activity and data in real time. It's also easy to get this information right from the places that have the rights to stream. So, with these real-time reporting methodologies, Nielsen really is no longer necessary.

Nielsen has always used an extrapolation methodology for its ratings statistics, anyway. That is, only a tiny subset of homes throughout the country are Nielsen households. When these Nielsen households watch, their small numbers are extrapolated to the larger population, even though there is really no way to know what non-Nielsen households are watching. So, Nielsen's ratings are actually very inaccurate. Counting the number of views from Hulu, iTunes, Amazon, Roku, Slingbox, Netflix and other streaming sites and technologies is exact and spot-on accurate. In fact, these numbers are so exact that they can even be traced back to specific hardware devices and specific households, something Nielsen's rating systems have never been capable of doing. This is why Nielsen is scared to count online views. This is why Nielsen is no longer needed.
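As a toy illustration of the difference between sample extrapolation and exact counting, consider the calculation below; the household figures are invented for illustration and are not real Nielsen numbers.

```python
# Toy sample-based ratings extrapolation. A tiny metered panel's
# viewing share is scaled up to the whole population, which is why
# the estimate carries sampling error that exact streaming logs don't.
# All counts here are invented for illustration.

def extrapolate_viewers(sample_watching: int, sample_size: int, total_households: int) -> float:
    """Scale the panel's viewing share up to the full population."""
    return sample_watching * total_households / sample_size

# If 250 of 25,000 metered households watched (a 1% share),
# the extrapolated national audience is 1.15 million households.
print(extrapolate_viewers(250, 25_000, 115_000_000))  # 1150000.0
```

A streaming service, by contrast, simply counts every playback event, so no extrapolation (and no sampling error) is involved.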

Goodbye V

It was nice knowing ya. My instincts all say that the fan backlash from this decision will be swift and final. If this series manages to make it to the end of the 2011 spring season without cancellation, I'll be amazed. However, if ABC cancels this show before June, that won't surprise me. So, unless the producers make an about-face really fast with regard to this no-streaming experiment, this series is likely already cancelled… it just doesn't yet know it. I'd also urge anyone reading (and especially Nielsen households) to boycott the new V series and send a message to the producers that not offering streaming options is not acceptable and that a program is dead without them. I can tell you that I won't watch this series again until streaming options become available. This is not really a problem for me, as there are plenty of other TV shows available. The problem here is for the cast and crew. These people are dedicating their time, effort and livelihoods to putting this series together, only to be screwed over by the producers. Such is life in Hollywood, I guess.

Useless excess: Fashion Victim Edition

Posted in Apple, computers, iphone, ipod by commorancy on December 8, 2010

For whatever reason today, a lot of people can’t seem to temper their purchasing of useless things.  I have to admit that I’ve been guilty of this on occasion myself, but I try to exercise restraint with purchases by asking, “Do I have a real need?”

Purchasing excess

I see lots of people buying things without having really justified a need in their lives.  I'd say the most egregious example of this useless excess is the iPad.  So many people walked into the purchase of this device not knowing how it would enrich their lives, how they might use it or what benefits it might offer.  Is the iPad useless excess?  I'd say so.  I still haven't fully justified the purchase of this device for myself.  The only justifications I have right now are the larger screen and reading email in a portable way.  Since I don't avidly read digital books, that part isn't really all that useful to me.  I do have an iPod Touch, though, and have found that device to immensely enrich my life.  It solved my portable music need, and it has a browser, a Kindle app, email and a few admin apps for in-a-pinch situations. It has a long battery life, so I have something to use pretty much anywhere, again, in a pinch.   So, the cost and use of this isn't useless excess for me.  On the other hand, the iPad isn't that portable, so it really doesn't work for things like portable music.

Is an iPad worth $500?  Not yet, for me. However, there are times when I'm walking around the office and having an iPad in hand could come in handy for spot email reading or forwarding an email.  Since it also supports some administrative tools, I might even be able to justify it for the use of those tools. On the other hand, a netbook is a more powerful hardware tool (i.e., USB ports, networking ports, SD card slot, etc.).  So, hardware-wise, a netbook is much more justified for what I do. They're just a bit more cumbersome to use than an iPad.  Then again, composing email on an iPad is basically useless.  I'd much rather have a real keyboard, so I'd definitely need a dock for extended use of an iPad.

Keeping up with the Joneses

A lot of useless excess stems from 'social' reasons.  Some people just want to show off their money.  The reality is, I find this disturbing.  Why would you want to buy something just to walk around and flaunt it?  I really don't relish the thought of being robbed or mugged. I mean, I can somewhat understand fashion.  Not so much fashion excess (i.e., diamond-studded bling), but we have become accustomed to wearing fashion to accentuate ourselves.  I don't personally go for high fashion, though.  Useful fashion yes, excess fashion no.  Unfortunately, an iPad is not a fashion accessory.  No computer or electronic device is (other than those trashy flashing earrings). So, why must people treat Apple products (and some computers and phones) as fashion when they clearly aren't?  You should always buy a computer for a need in your life, not because your next door neighbor has one or you 'think' it might be useful.

Coffee table paperweights

Now that the iPad has been out for about 9 months, I'm still not finding a solid use for it in my personal life. For business use, I have a couple of reasons (cited above), but these are not yet enough to justify a $500 expense.  In fact, I would think there's going to be a growing used market for iPads very quickly here.  People will realize they don't need or use them and will need the money more, especially when it is no longer the 'chic' device (and that point is quickly approaching).  Right now is also the prime time to get rid of your iPad, before it goes out of 'fashion'.  Additionally, it's almost guaranteed that by spring 2011, Apple will have a new model iPad ready to ship.   This will majorly reduce the resale value of the 2010 iPad.  So, if you want to sell your iPad for any decent amount of change, you should consider doing it now.  Otherwise, sitting on it will only devalue it down to probably the $150-200 range by the end of 2011 and less than that by 2012.

By now, people should really know if the iPad has a use in their life.  Only you can answer that question, but if the most you do is turn it on once a week (or less), it’s a paperweight.  You should probably consider selling it now before the new iPad is released if you want any return on your investment.  Granted, you may have paid $500, but you’re likely only to get about $200-250 (16GB version) depending on where you sell.  If you put it on eBay as an auction, you might get more money out of it ($450, if you’re lucky).  By this time next year, though, you probably won’t get half that amount on eBay.

As another example, see the Wii.  Now that the Wii has been out for several years, it is no longer the ‘chic’ thing to own.  Today, people are likely purchasing it because they want to play a specific game title.  And, that’s how it should be.  You should always buy computer gear for the software it runs, not because it’s the ‘thing to have’.  Wii consoles are now in a glut and easy to find.  So, if you want one today, it’s very easy to get them.

Gift excess

I know people who buy gift items not because it’s a useful gift, but because it’s the thing to have.  Worse, though, is that the person who receives the gift doesn’t even use it or carry it.  In this example case, it’s an iPad 64GB version.  Yet, this person doesn’t carry it around or, indeed, even use it.  Instead, they prefer to use their 2-3 year old notebook.  What does that say about the usefulness of such useless excess?

Is the iPad considered useless excess? At the moment, yes.  There may be certain professions that have found a way to use the iPad as something more than a novelty, but I've yet to see a business convert to using iPads as its sole means of corporate management.  For example, it would say something if FedEx adopted the iPad as its means of doing business.  Instead of the small hand scanners, they could carry around the iPad to do this work.  Oh, that's right, there's no camera on the iPad, so scanning isn't even possible.

While this article may seem to specifically bash the iPad, it isn't intended to focus solely on it.  The iPhone is another example of useless excess.  You pay $200 just to get the phone, then you're locked into a 2-4 year contract at a minimum of $80 a month.  And, the worst part: the iPhone isn't even a very good phone.  Dare I say, Nokia and Motorola still make better quality phone electronics than Apple ever has.  Apple is a computer maker, not a phone maker, so they still don't have the experience with phone internals.  So, when talking to people on the iPhone, the voice quality, call quality and clarity suffer compared to better-made handsets.  Again, people justify the purchase of an iPhone 4 because of the 'apps', not because of quality.  Worse, though, is that many people buying iPhones are doing so because it's 'the thing to have', not because it's actually useful in their lives.  If the only thing you find yourself doing with the iPhone is talking on the phone, then you're a victim of useless excess.

How to curb useless excess

Ask yourself, ‘How will this thing make me more productive, or solve a problem?’  If you cannot come up with an answer, it’s useless excess.  Once you find at least one real need for a device, then the purchase is justified.  If you just want it to have it, that’s useless excess.  Just having something because you can doesn’t make you a better person.  It just makes you a victim of useless excess.  Simply because you can afford something doesn’t mean you should.

How do you justify an iPad purchase?  For example, if you intend to mount it into a door of your kitchen as an internet recipe retrieval device and you bake or cook every day, that would be one way it could enrich your life.  Although, it’s also not impervious to water or other wet ingredients, so you might want to cover it to avoid those issues.  In other words, for a computer to not be considered useless excess, it would need to be used every day to provide you with useful information you can’t otherwise get.

If you’re looking for a holiday gift, don’t just buy an iPad because you can, buy it because the person will actually use and actually needs it to solve a problem.

iPad, iPod, iPhone, iConsume

Posted in Apple, cloud computing, computers by commorancy on July 30, 2010

While all of these new Apple devices seem really great on the surface (pun intended), with no effective local storage, the design behind these devices gives no thought to the creation or export of created content. The design clearly targets consumption of digital goods. Effectively, this is a one-way device for content. That is, content goes in, but it doesn't come back out. The question remains, however: does Apple think that we are only consumers now? Are we now relegated to being just a bunch of ravenous, money-spending consumers? Don't we have brains in our heads, or creativity, or imagination? Are we just a bunch of finger-pushing consumers with portable devices?

Consumerism

If there’s anything that Apple has done in recent years with these one-way devices, it’s to solidify consumerism. That is, to sell us products that are essentially one-way content input devices. Granted, it has a camera so we can take pictures or video. And yes, they may have even managed to get a video editor onto an iPad, but these apps aren’t designed for professional level editing (or even prosumer level editing). Sure, it’s fine for some random party or perhaps even a low quality wedding souvenir, but these consumer-centric devices really don’t offer much for creativity or imagination, let alone software development. It doesn’t even much offer a way to produce a spreadsheet or a word processor document. No, these platforms are almost entirely designed for consumption of digital goods (i.e., books, movies, magazines, music, web content, games, etc).

Lack of Creativity

These devices were designed to consume, not create. Consume, consume, consume and then consume some more. Yes, some creativity apps have popped up, but they’re more game than serious. They’re there to let you play when you’re bored. Even these creativity apps must be consumed before you can use them. As these are really read-only devices (no hard drive, external storage or other ways of getting things out of the device), these creativity apps really aren’t meant to be taken seriously. In other words, these apps are there to placate those who might think these are consumer focused only. In reality, these creative apps are shells of what a real creative app looks like, such as Photoshop, Illustrator, AutoCAD or Maya. Even prosumer apps like Poser and Daz Studio are still leaps and bounds better than anything that’s available on these iConsumer devices.

Computers vs iConsumers

Computers are designed as well-rounded devices capable of not only consuming content, but creating it. That is, as a computer owner, you have the choice to both produce and consume content. Granted, there are a lot of people who have no intention of writing music, painting a digital work, developing an application or writing a novel. However, with a computer, you have these choices available. With iConsumer devices, you really don't. On iOS 4 or even Android, these devices just don't have enough resources to run these types of full-sized apps. So, you won't find a full Office suite on the Droid or an iPhone. Even something as big as the iPad still doesn't have a productivity suite that works in a proper or efficient way. Granted, Android likely supports Google Docs. But, even still, I don't want to sit around all day pecking information into the chiclet keyboard of a phone. Give me a solid, full-sized QWERTY keyboard any day for creation.

Cloud Computing, Operating Systems and a step backwards

Apple definitely dropped the ball on this one. With a device like the iPad without any local storage, the only way this device could actually create is by using cloud computing services. Unfortunately, Apple offers nothing in cloud computing. The iTunes store is a poor alternative; in fact, the iTunes store is just a store. It doesn't offer online apps like Google Docs, and it doesn't offer any type of web-based or cloud-based services that the iPad can consume. The sole way to deal with the iPad is through apps that you must download from the store. Yes, there may be 'an app for that', but many times there isn't.

The other difficulty with apps is that they don't work together on the device. There is no app synergy. For example, NeXTStep (the operating system that gave birth to Mac OS X and, later, iOS 4) was a brilliant design. It offered a system where any app could extend the operating system by adding new controls or features. Newly installed apps could then consume those controls within their own app frameworks (sometimes even launching the other app to do so). With iPhone OS (any version), Jobs has taken a huge step backwards in computing. No longer is this extension system available. Apps are just standalone things that don't interact or interrelate with one another. Yes, multitasking may now be back, but they're still just standalone things. About the extent of interrelation between apps is having one app launch Safari and open a URL. Wow, so sophisticated.
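The extension model described above can be sketched as a simple service registry, where one installed app contributes a capability and another consumes it. This is a hypothetical illustration of the pattern only, not actual NeXTStep or iOS API; all names are invented.

```python
# Hypothetical sketch of an app-extension registry of the kind the
# article attributes to NeXTStep: installed apps register services,
# and any other app can discover and invoke them. All names invented.

class ServiceRegistry:
    def __init__(self) -> None:
        self._services = {}

    def register(self, name: str, handler) -> None:
        """An installed app contributes a new capability to the system."""
        self._services[name] = handler

    def invoke(self, name: str, payload):
        """Any other app consumes a registered capability by name."""
        if name not in self._services:
            raise KeyError(f"no installed app provides {name!r}")
        return self._services[name](payload)

registry = ServiceRegistry()
# A spellcheck app registers itself; a mail app can now use it.
registry.register("spellcheck", lambda text: text.replace("teh", "the"))
print(registry.invoke("spellcheck", "teh quick brown fox"))  # the quick brown fox
```

Standalone apps, by contrast, are each a closed world; the most they share is a URL handed off to the browser.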

Notebook and creation tools

Granted, there are a lot of people whose sole goal is to consume. And yes, it's probably true that most people only want to consume. The question is, though: do you want to give up the ability to create just to consume? That's exactly what you give up when you buy into the iPad, iPod or iPhone. When these portable devices can consume and create content equally well, and don't force consumers to make this choice when purchasing a device, then they will have reached their true potential. Until then, I see these consumerist devices as easy ways to give your money away. For people who don't need portable creation tools, that's fine. For those of us who do, a full-fledged hard drive-equipped notebook is still the only portable device that fills this void.

Cloud Computing Standards

We are not where we need to be. Again, the iPad was a shortsighted, rapidly-designed device. It was designed for a small, singular purpose, but that purpose wasn't designed for the long term. Sure, the OS is upgradeable and perhaps the device may get to that point. Perhaps not. Apple has a bad habit of releasing new devices that make the old ones obsolete within months of the previously-new device. So, even if a device is truly capable of reaching its potential, Apple will have tossed it aside for a new hardware device 6-10 months later.

Clearly, cloud computing will need to establish a standard by which all cloud computing devices will operate. That means, the standards will discuss exactly how icons are managed, how apps are installed and how people will interface with the cloud apps. That doesn’t mean that different devices with different input devices can’t be created. The devices can, in fact, all be different. One computer may be keyboard and mouse based, another may be touch surface based. What the cloud standards will accomplish is a standard by which users will interact with the cloud. So, no matter what computer you are using, you will always consume the cloud apps the same way. That also means the cloud apps will always work the same no matter what interface you are using.

We are kind of there now, but the web is fractured. We currently have no idea how to find new sites easily. Searching for new sites is a nightmare. Cloud computing standards would need to reduce the nightmare of search, increase ease of use for consumers and provide standardized ways of dealing with cloud computing services. In other words, the web needs dramatic simplification.

Cloud Computing and the iPad

The iPad is the right device at the wrong time and consumers will eventually see this once a real cloud computing device hits the market. Then, the iPad will be seen as a crude toy by comparison. A true cloud computing device will offer no storage, but have a huge infrastructure of extensible interrelated apps available online. Apps similar to Google Docs, but so many more types all throughout every single app category. From games, to music, to video, to photography, to finance, to everything you can imagine. Yes, a true cloud computing device will be able to consume as freely as it can create. A cloud computing OS will install apps as links to cloud services. That is, an icon will show on the ‘desktop’, but simply launch connectivity right into the cloud.

Nothing says that you need a mouse and keyboard to create content, but you do need professional quality to produce professional content. I liken the apps on the iPad to plastic play money you buy for your kids. Effectively, they’re throw-away toy apps. They’re not there for serious computing. To fully replace the desktop with cloud computing, it will need fully secured full-featured robust content creation and consumption applications. You won’t download apps at all. In fact, you will simply turn your portable computer on and the cloud will do the rest. Of course, you might have to use a login and password and you might be required to pay a monthly fee. But, since people are already paying the $30 a month for 3G service, we’re already getting accustomed to the notion of a monthly service fee. It’s only a matter of time before we are doing all of our computing on someone else’s equipment using a portable device. For listening to music, we’ll need streaming. But, with a solid state cache drive, the device will automatically download the music and listen offline. In fact, that will be necessary. But this is all stuff that must be designed and thought out properly long before any cloud device is released. …something which Apple did not do for the iPad. What they did do, though, is create the perfect digital consumption device. That is, they produced a device that lets them nickel and dime you until your wallet hates you.


Deep Tech #1: Momentus XT, Microsoft Kinect, Micro PCs

Posted in computers, windows by commorancy on July 25, 2010

Momentus XT

Here’s something that holds some promise for notebook hard drives, but don’t get your hopes up too high. Seagate has released the Momentus XT notebook hard drive. It’s a hybrid drive that combines solid state cache technology and a 7200 RPM mechanical spindle. The thought behind this drive technology is to help speed up your notebook’s hard drive performance. The upside, the SSD cache apparently does help speed the system up. The downside is that it only works on notebooks where the bottleneck is the hard drive.

The reality is, in many notebooks this drive may not speed up the system at all. Most notebook manufacturers cut corners on the underlying bus architecture, so the motherboard ends up being the bottleneck, not the hard drive. That’s also why notebook makers put in 5400 RPM drives: to 1) increase battery life and 2) reduce heat. A faster drive requires more power and radiates more heat. So if you’re looking to keep your system as cool and quiet as possible with the longest battery life, the Momentus XT may not be a great choice. Considering that the drive costs around $120-200 for 500GB with no guarantee of a performance improvement, it may not be worth the gamble if your notebook is more than a year old.

On the other hand, if you’re looking for a portable USB 2 or 3 drive that doesn’t need an extra power supply, this drive may well be the answer. Although I haven’t found it prepackaged for purchase, it’s simple enough to build your own portable drive from this drive and an external USB 3 enclosure.

The bad news about this drive: don’t expect to find it at your local tech retailer. Fry’s, Best Buy and Microcenter don’t carry it, and neither do Target, Walmart, Sears or any other local retailer. If you want this drive, you’ll need to order it from an Internet technology e-tailer like Newegg or Amazon and then wait for it to be delivered.

Microsoft Kinect

This device, formerly known as Project Natal, turns your body into a game controller. I don’t know about you, but this really doesn’t sound that appealing. Back in the late 90s, I saw a full-body controller game at an arcade. Not only did it require a large amount of space (so you don’t knock things over or fall and hurt yourself), it just seemed clumsy and awkward. Fast forward to the Microsoft Kinect, and I see the same issues. Perhaps I’d flail my arms and legs for 20 or 30 minutes to get a workout (à la Wii Fit), but I just don’t want to stand around all day doing that to play Red Dead Redemption. I just don’t see it happening.

Considering the $150 price tag of this device (which, by the way, is only $50 less than the cost of an Xbox 360 itself), I’m just not feeling the love here. Overall, I think this novelty device will garner some support in small circles, but as with most Microsoft novelty tech, it’s pretty much dead in the water at $150. If they could bundle it with an Xbox 360 and a game for a $250-300 price tag, then maybe. But as it stands, I’m not predicting that this device will last.

The Micro PC

If you’re looking for a small computer to fill that niche in your entertainment center, then perhaps the Dell Zino HD or the Viewsonic VOT 550 will fit the bill; both appear to be quite capable tiny computers. I’ve been looking for a small, well-designed PC for a specific purpose: a computer about the size of an Apple Mac Mini that runs Windows. Yes, I could probably get a Mac Mini and load Windows on it, but I’d really rather get a PC designed for that task. With a Mac Mini, it feels like a square-peg-round-hole situation. A PC designed for Windows would run it with far fewer problems and probably have better driver support.

Overall, I like the idea of both the Zino HD and the Viewsonic VOT 550; I’d just like to see something as small as the Mac Mini. If Apple can produce such a small computer, I’m not sure why Dell, Gateway or other manufacturers can’t do it with PC hardware.