Random Thoughts – Randocity!

Software Engineering and Architecture

Posted in botch, business, Employment by commorancy on October 21, 2018

Here's a subject with which I'm all too familiar and one that's in need of commentary. Since my profession is technical in nature, I've definitely run into various issues regarding software engineering, systems architecture and operations. Let's explore.

Software Engineering as a Profession

One thing software engineers like is being able to develop their code on their local laptops and desktops. That's great for rapid development, but it causes many problems later, particularly when it comes to security, deployment, systems architecture and operations.

For a systems engineer or devops engineer, the problem arises when that code needs to be productionized. This is fundamentally a problem with pretty much any newly designed software system.

Having come from a background of systems administration, systems engineering and devops, I know there is a lot to consider when deploying freshly designed code.

Designing in a Bubble

I've worked in many companies where development occurs offline on a notebook or desktop computer. The software engineer has built out a workable environment on their local system. The problem is, this local environment doesn't take into account constraints that may be in place in a production environment, such as internal firewalls, ACLs, web caching systems, software version differences, lack of compilers and other such security or software constraints.

What this means is that, far too many times, deploying the code for the first time is fraught with problems. Specifically, problems that were never encountered on the engineer's notebook… and problems that sometimes fail extremely badly. Many of these failures are silent (the worst kind), where everything looks like it's functioning normally, but the code is sending its data into a black hole and nothing is actually working.

This is the fundamental problem with designing in a bubble without any constraints.

I understand that building something new is fun and challenging, but not taking into account the constraints the software will be under when finally deployed is naive at best and reckless at worst. It also makes life as a systems engineer or devops engineer a living hell for several months until all of these little failures are sewn shut.

It’s like receiving a garment that looks complete, but on inspection, you find a bunch of holes all over that all need to be fixed before it can be worn.

Engineering as a Team

To me, this situation means that the software engineer is not a team player. They might be playing on the engineering team, but they're not playing on the company team. Part of software design is designing for the full use case of the software, including not only code authoring, but systems deployment.

If systems deployment isn’t your specialty as a software engineer, then bring in a systems engineer and/or devops engineer to help guide your code during the development phase. Designing without taking the full scope of that software release into consideration means you didn’t earn your salary and you’re not a very good software engineer.

Yet Silicon Valley is willing to pay these "Principal Engineers" top dollar even as they fail to do their jobs.

Building and Rebuilding

It's entirely a waste of time to get to the end of a software development cycle and claim "code complete" when that code is nowhere near complete. I've had so many situations where software engineers toss their code to us as complete and expect the systems engineer to magically make it all work.

It doesn't work that way. Code works when it's written with an understanding of the architecture where it will be deployed. Only then can the code be 100% complete, because only then will it deploy and function without problems. Until that point is reached, it cannot be considered "code complete".

Docker and Containers

More and more, systems engineers want to get out of the long, drawn-out business of integrating square code into a round production hole. Eventually, after much time has passed, molding the code into that round hole is possible, but it usually takes months. Months that could have been avoided if the software engineer had designed the code in an environment where the production constraints exist.

That's part of the reason for containers like Docker. When a container like Docker is used, the whole container can then be deployed without thought to square pegs in round holes. Instead, whatever flaws are in the Docker container are there for all to see because the developer put them there.
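To make this concrete, here's a minimal sketch of what shipping code as a container might look like. It's illustrative only; the base image, jar name and port are hypothetical and not from any particular project:

# Illustrative Dockerfile; base image, jar name and port are hypothetical
FROM openjdk:8-jre-slim

WORKDIR /app

# The developer builds the jar and bakes it into the image,
# so production runs exactly the bits that were tested
COPY target/myservice.jar /app/myservice.jar

EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/myservice.jar"]

The point is that the runtime, dependencies and start command travel with the code, so any constraint the code relies on is visible to everyone who runs the image.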

In other words, the middle folks who take code from engineering and mold it onto production gear don't relish the thought of ironing out hundreds of glitchy problems until it all works seamlessly. Sure, it's a job, but at some level it's also a bit janitorial, wasteful and unnecessary.

Planning

Part of the reason for these problems is the delineation between the engineering teams and the production operations teams. Because many organizations separate these two functional teams, it forces the above problem. Instead, these two teams should be merged into one and work together from project and code inception.

When a new project needs code to be built that will eventually be deployed, the production team should be there to move the software architecture onto the right path and be able to choose the correct path for that code all throughout its design and building phases. In fact, every company should mandate that its software engineers be a client of the operations team. Meaning, they're writing code for operations, not the customer (even though the features eventually benefit the customer).

The point here is that the code's functionality is designed for the customer, but deploying and running that code is entirely the operations team's job. Yet, so many software engineers don't give a single thought to how much the operations team will be required to support that code going forward.

Operational Support

For every component needed to support a specific piece of software, there needs to be a likewise knowledgeable person on the operations team to support that component. Not only do they need to understand that it exists in the environment, they need to understand its failure states, its recovery strategies, its backup strategies, its monitoring strategies and everything else in between.

This is also yet another problem that software engineers typically fail to address in their code design. Ultimately, your code isn’t just to run on your notebook for you. It must run on a set of equipment and systems that will serve perhaps millions of users. It must be written in ways that are fail safe, recoverable, redundant, scalable, monitorable, deployable and stable. These are the things that the operations team folks are concerned with and that’s what they are paid to do.

Each new code deployment makes the environment just that much more complex.

The Stacked Approach

This is an issue that happens over time. No software engineer wants to work on someone else's code. Instead, it's much easier to write something new from scratch. That's easy for the software engineer, but it's difficult for the operations team. As these new pieces of code get written and deployed, the technical debt and burden on the operations staff drastically increase. Meaning, it pushes the problems off onto the operations team, who must keep supporting more and more components if none ever get rewritten or retired.

In one organization where I worked, we had such an approach to new code deployment. It made for a spider's web mess of an environment. We had so many environments and so few operations staff to support them that the on-call staff were overwhelmed by incessant pages from so many of these components.

That’s partly because the environment was unstable, but that’s partly because it was a house of cards. You shift one card and the whole thing tumbles.

Software stacking might seem like a good strategy from an engineering perspective, but that's only because the software engineers don't have to provide first-line support for it. Sometimes they don't have to support it at all. Yes, stacking makes code writing and deployment much simpler.

How many times can an engineering team do this before the house of cards tumbles? Software stacking is not an ideal any software engineering team should endorse. In fact, it simply comes down to laziness. You're a software engineer because writing code is hard, not because it is easy. You should always do the right thing even if it takes more time.

Burden Shifting

While this is related to software stacking, it is separate and must be discussed separately. We called this problem "throwing shit over the fence". It happens a whole lot more often than one might like to realize. When designing in a bubble, it's really easy to call "code complete" and "throw it all over the fence" as someone else's problem.

While I understand this behavior, it has no place in any professionally run organization. Yet, I’ve seen so many engineering team managers endorse this practice. They simply want their team off of that project because “their job is done”, so they can move them onto the next project.

You can't just throw shit over the fence and expect it all to just magically work on the production side. Worse, I've had software engineers actually ask my input on the use of specific software components in their software design. Then, when their project failed because that component didn't work properly, they threw me under the bus for that choice. Nope, that's not my issue. If your code doesn't work, that's a coding and architecture problem, not a component problem. If that open source component didn't work in real life for other organizations, it wouldn't be distributed around the world. If a software engineer can't make that component work properly, that's a coding and software design problem, not an integration or operational problem. Choosing software components is ultimately the software engineer's choice: use whatever is necessary to make the software system work correctly.

Operations Team

The operations team is the lifeblood of any organization. If the operations team isn't given the tools to get its job done properly, that's a problem with the organization as a whole. The operations team is the third-hand recipient of someone else's work. We step in and fix problems, many times without any knowledge of the component or the software. We do this sometimes by deductive logic and trial and error, sometimes by documentation (if it exists) and sometimes with the help of a software engineer on the phone.

We use all available avenues at our disposal to get that software functioning. In the middle of the night the flow of information can be limited. This means longer troubleshooting times, depending on the skill level of the person triaging the situation.

Many organizations treat their operations team as a bane, a burden, something that shouldn't exist but does out of necessity. This degrading view typically comes top down from the management team. Instead of treating the operations team as second-class citizens, treat this team with all of the importance it deserves. The operations team is not a burden, nor is it simply there out of necessity. It exists to keep your organization operational and functioning. It keeps customer data accessible, reliable, redundant and available. It is responsible for long term backups, storage and retrieval. It's responsible for the security of that data and making sure spying eyes can't get to it. It is ultimately responsible for making sure the customer experience remains at a high standard of excellence.

If you recognize this problem in your organization, it’s on you to try and make change here. Operations exists because the company needs that job role. Computers don’t run themselves. They run because of dedicated personnel who make it their job and passion to make sure those computers stay online, accessible and remain 100% available.

Your company's uptime metrics are directly impacted by the quality of your operations team staff members. These are the folks using the digital equivalent of chewing gum and shoelaces to keep the system operating. They spend many a sleepless night keeping these systems online. And, they do so without much, if any, thanks. It's all simply part of the job.

Software Engineer and Care

It’s on each and every software engineer to care about their fellow co-workers. Tossing code over the fence assuming there’s someone on the other side to catch it is insane. It’s an insanity that has run for far too long in many organizations. It’s an insanity that needs to be stopped and the trend needs to reverse.

In fact, by merging the software engineering and operations teams into one, it will stop. It will stop by virtue of having the same bosses operating both teams. I'm not talking about only at the VP level. I'm talking about software engineering managers needing to take on the operational burden of the components they design and build. They need to understand and handle day-to-day operations of these components. They need to wear pagers and understand just how much operational work their component creates.

Only then can engineering organizations change for the positive.


As always, if you can identify with what you’ve read, I encourage you to like and leave a comment below. Please share with your friends as well.


Rant Time: Apple Music vs Twitter

Posted in Apple, botch, business, california by commorancy on September 12, 2018

I know I've been on a tirade with the number of rants recently, but here we are. I rant when there's something to rant about. This time it's about sharing Apple Music playlists on Twitter… and just how badly this feature is broken. Worse, just how Apple itself is broken. Let's explore.

Twitter Cards

Twitter has a feature they call Twitter cards. It’s well documented and requires a number of meta tags to be present in an HTML page. When the page is shared via Twitter, Twitter goes looking at the HTML for its respective Twitter meta tags to generate a Twitter card.

A Twitter card comes in two sizes and looks something like this:

Small Twitter Card

Large Twitter Card

What determines the size of the Twitter card seems to be the size and ratio of the image. If the image is square in size (144×144 or larger), Twitter creates a small card as shown at the top. If the image ratio is not square and larger than 144×144, Twitter produces a large Twitter card. The difference between the cards is obvious:

  • Small card has an image to the left and text to the right
  • Large card has image above and text below

It’s up to the person sharing on Twitter to decide which size is most appropriate. Personally, I prefer the larger size because it allows for a much larger image.
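For reference, Twitter's Cards documentation selects between the two layouts with the twitter:card meta value. Here's a minimal sketch of each; the titles, descriptions and image URLs are placeholders, not Apple's actual tags:

<!-- Small (summary) card -->
<meta name="twitter:card" content="summary">
<meta name="twitter:title" content="Example title">
<meta name="twitter:description" content="Example description">
<meta name="twitter:image" content="https://example.com/image-600x600.png">

<!-- Large (summary_large_image) card -->
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:image" content="https://example.com/image-1200x628.png">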

Apple Music Playlist Sharing

Here's where the RANT begins… hang onto your hats, folks. Apple's engineering team doesn't get Twitter cards…. AT. ALL! Let me give an example of this. Here's a playlist I shared on Twitter:

Apple Music Playlist Twitter Card

What's wrong with this Twitter card? If you guessed the image is way too tiny, you'd win. Apple doesn't understand the concept of producing a 144×144 image properly. Here's the fundamental problem. In iTunes, my playlist image was uploaded as a 1200×1200 image. That's more than large enough for any use on the net. Here's how it looks in iTunes, albeit scaled somewhat small:

iTunes Playlist Image

Note, iTunes retains the full image size, but scales the image as needed. If you look at the playlist on the web, it looks like this with a much larger scaled image:

Apple Playlist Web

As you can see, the image scales properly and still looks good even larger. Yes, even large enough to produce a 144×144 image on a Twitter card.

Here’s the Twitter card metadata on that Apple Music Preview page:

meta id="1" name="twitter:title" content="‎AstroWorld Pioneer by Klearnote" class="ember-view"

meta id="2" name="twitter:description" content="‎Playlist · 22 Songs" class="ember-view"

meta id="3" name="twitter:site" content="@appleMusic" class="ember-view">

meta id="4" name="twitter:domain" content="Apple Music" class="ember-view">

meta id="5" name="twitter:image" 
content="https://is5-ssl.mzstatic.com/image/thumb/SG-S3-US-Std-Image-000001/v4/a2/c6/6f/a2c66fc6-a63b-f590-c6db-e41aebfc327c/image/600x600wp.png" 
class="ember-view"

meta id="6" name="twitter:card" content="summary" class="ember-view"

You'll notice that the twitter:image tag (id 5) above is the relevant piece. Let's look at that image now…

600x600wp

Scaled down; the original image is 600×600.

You'll notice that the actual playlist artwork is only about 213×213 pixels, centered in a light grey box that's 600×600. Yes, that thick light grey border is part of the image. This is actually how the image is being produced by Apple on their servers. That would be okay if the artwork filled the full 600×600 pixels. Unfortunately, it doesn't. Twitter will scale any image down to its preferred size of 144×144 pixels for small Twitter cards. Here's what a 144×144 image looks like when scaled by WordPress:

600x600wp

Small, but reasonably clear. Here's Twitter's crap-scaled, unreadable version:

twitter-144x144

I have no idea what Twitter is using to scale its images, but it looks like absolute trash. The bigger problem isn't that Twitter has scaled this image down, it's that Apple has provided Twitter with such an already small and crap looking playlist image. Why have a 144×144 image if you're only going to use 1/9th of the space? Apple, why would you not want to use the entire 144×144 image space and make the image look like this:

pioneer-1200x1200

That sized image would make the Twitter card look like this…

TwitterCardFixed

… instead of this absolute shit looking card…

TwitterCardBroken

How the Mighty Have Fallen

Apple used to be a well-respected company that always prided itself on doing things correctly and producing high quality products. Today, it's a shadow of its former self. Producing products as crappy as this only serves as a detriment to all of the other products they now offer. It's clear Apple Music is an afterthought; Apple seems to have only one engineer assigned to this software product… maybe none.

It's also clear Apple doesn't respect anyone's standards, not even its own. I consider this absolutely crap attention to detail. Seriously, who wants their images scaled to the point of being unreadable? No one!

Yet, when I called Apple Support to report this issue, I was told, "This is expected behavior". Expected by whom? Who would ever expect an image to be scaled to the point of nonrecognition? No one. If this is the level of software development effort we're now seeing from Apple, then I don't even want to think about what corners are being cut on their hardware products.

What’s next? Apple watches catching on fire and exploding on people’s wrists? Phones taking out people’s ears? If I can no longer trust Apple to uphold the standards of high quality, then the mighty have truly fallen. There is no hope for Apple no matter how much crap they try to peddle.

Apple, Hear Me!

If you are serious about your business, then you need to be serious about all aspects of it, including offering high quality products, services and features. This goes all the way down to playlist sharing on Twitter. My experience dealing with Apple on this matter was so amateur, including the way Apple Music itself is being handled, that I have to ask: why should I continue to use your products? Give me a reason to pay you $99 for such shit service! Seriously, in addition to the above, I'm also finding what appear to be bootlegged music products on Apple Music, yet you're passing them off as official releases?

And as suggested by your representative, why should I contact Twitter for this issue? Twitter’s features work properly when provided with the correct information. As has been stated for years in software engineering, “Garbage In, Garbage Out”. It is you, Apple, who are providing Twitter with garbage information. It’s not a Twitter problem, it’s an Apple problem. Also, because this is an Apple engineering problem to solve, why should I contact Twitter on Apple’s behalf? I don’t work for you. You need to have YOUR engineering team contact Twitter and have them explain to you the errors of your ways.

This is just the tip of the iceberg here. There's so much wrong at Apple that if you continue to entrust your family's safety to Apple's products, you may find one of your family members injured or dead. Apple, wake up and learn to take quality seriously.

The next time you are shopping for a computer or a watch device, you need to ask yourself, “Do I really trust Apple to provide safe choices for me or my family?”

Apple has now officially and truly reached the level of shit!

Broken Apple Image credit: The King of The Vikings via DeviantArt


Rant Time: The problem with Twitter

Posted in botch, business, social media by commorancy on August 27, 2018

Twitter began as a lofty idea for small text social conversation. For many of its early years, it managed to keep some semblance of order and decency. As of 2018, the platform has devolved into something far less useful and more problematic. Let's explore.

Primary Twitter Topics

Today, Twitter is primarily dominated by breaking news, gun control and political rhetoric, sometimes all three at the same time. While these topics do have a place, reading these dominant conversations every moment of every day is tiring, and it goes against the diversity of content the platform is intended to offer. They don't really have a place as the dominant force on Twitter; they exist simply to clog up each Twitter user's feed.

Twitter’s Failings and Slow Development

When Twitter began back in 2006, it offered a fairly limited social conversation platform with its 140 character limit. In fact, that limit wasn't raised until late 2017, more than a decade later, when it went up to 280 characters. Talk about slow development! The 140 character limit was a holdover from SMS days, and SMS to this day holds a similar limit. I do not know why Twitter chose this arbitrarily small amount of text for a social conversation platform. It had no relation to SMS and couldn't send SMS messages, so it never made sense.

Twitter has also firmly embraced the "no edit" mantra, to the chagrin of many. To modify a tweet, you must delete it, then recreate it. This is a cumbersome hassle. It also means that any feedback you had on that tweet is forfeit. There's a real incentive to get the tweet right the first time. For a text discussion platform in 2018, this limitation is completely ludicrous. Clarity of thought is extremely important in all text mediums. The only way to ensure clarity of thought is via editing. We all make mistakes when typing, such as their for there or they're. These are extremely common typing mistakes. Sometimes it's the accidental misuse of homonyms. There are plenty of other types of common mistakes. There is also rewording. Yet here we are… 12 years after Twitter's inception and we STILL can't edit a tweet. What the hell is going on over there at Twitter, Jack Dorsey?

Twitter has grown little since 2006, offering only better privacy, limited feed customization, an ad platform and some UI improvements; it has done next to nothing to improve user functionality. I've worked at companies where the product has almost completely performed a 180º turn in product features in only 1-2 years. Twitter has remained nearly stagnant, feature-wise, and has implemented long-clamored-for features at an absolute snail's pace (read: it hasn't implemented them) in its 12 years of existence.

Censorship

As we all should understand, First Amendment free speech protections do not apply to private corporations. This ultimately means that no speech submitted on the Twitter platform is protected. As much as people want to complain that some left- or right-winger has been suspended, banned or otherwise dismissed from Twitter, that is Twitter's right. Twitter is not a government owned or operated corporation. Therefore, it can censor, delete, suspend or otherwise prevent a user or entity from putting any content onto its platform for any reason.

What this means is that Twitter can do whatever they wish and claim violations of ‘terms of service’. After all, Twitter writes the terms of service and can modify them at any time without notification to anyone. In fact, Twitter isn’t even required to have explicit terms listed and they can still delete or suspend anyone they wish, for any reason. As I said, free speech protections on Twitter do not apply.

Leadership Team

Jack Dorsey heads up the leadership team at Twitter as CEO. In the last 1-2 years, he's spouted rhetoric about reduction of hate speech on Twitter. What that ultimately translates to, within Twitter's current moderation tool limits, is deletion of selected speech or accounts, regardless of whether it contains hate speech or not. If Twitter doesn't like what you have to say, out you go.

Nowhere is this more evident than with users who have amassed 15k followers or more. One foible on one of these accounts and Twitter closes it. No no, can't have 15k or more followers seeing something that Twitter management doesn't like. Even celebs aren't immune to this. If you are reading this article and you have amassed more than 6,000 followers, your account is at risk with each tweet you post, particularly if your speech primarily consists of political messages, controversial topics or divisive ideas (NRA vs Gun Control, Abortion vs Pro Life, Trump bashing, etc.).

The current technical means at Twitter's disposal to reduce this kind of speech consist of tweet deletion, account suspension or bans. Twitter has no other means at its disposal. In reality, Twitter has dug the hole it is now in. Twitter failed to foresee the problems of user scale. Whenever the total user base grows, so too are the problems that come with it amplified. Twitter should have initially implemented some level of moderation and anointed users to help moderate its platform, in a similar fashion to both Wikipedia and Reddit. It didn't.

Twitter is to Blame

Twitter has only itself to blame for not taking proactive action sooner and for failing to build more complete moderation tools sooner. Social platforms that have implemented automated self-moderation systems have done exceptionally well. When the community downvotes certain content past a certain level, Twitter should not promote it into users' feeds. In fact, Twitter's continual promotion of tweets into people's primary feeds has actually propagated hate and problematic speech. Instead, Twitter should have been building a self-policing platform from day one, or at least within the first couple of years. It chose not to.

Even today, Twitter still hasn't built a self-policing platform. I regularly find hate speech in my feed. Worse, while I can mark the stuff I like with a heart, I have no such action to force items out of my feed that I choose not to see. The best I can do is mute or block the account. Why is that, Twitter? Why can't I mark individual types of tweets that I no longer want to see and have that content removed from my feed? Why do I have to trudge all the way into preferences and put in mute words or, even more sledgehammery, mute or block the user? Even then, that only affects my account. It has no impact on Twitter globally.

Employing Social Moderation and Tweet Grading

Using social moderation is both effective and necessary when you're working with millions of users sending millions of messages per day. Twitter is a social platform. Let's use the social interaction of those millions of people to bubble worthy messages to the top and sink crap messages so they never get seen. This is the ONLY way to effectively moderate at scale on a social platform. The value of each tweet is in its worth to viewers. Many viewers marking a tweet downward means fewer people see it. Many viewers marking a tweet up means more people see it. I can't imagine that any sane person would choose to vote up hate speech, death threats or similar unacceptable or violent content.

I'm not saying that any one user should have undue influence over a tweet's popularity. In fact, users will need to build their trust and reputation levels on the platform. For example, newly created accounts might not even be able to influence the upward or downward momentum of tweets right out of the gate. It might take 2-4 months of interactions on the platform before the user's interactions begin to count. This way, users can't go out and create 100 or more accounts in an attempt to get their tweet to the top of popularity. In fact, any tweet that ends up getting upvotes from too many newly created accounts, without other upvotes, should be marked as suspect, and those accounts should have their trust levels locked or reduced.

Additionally, it should take interactions from many trusted accounts simultaneously to raise a tweet’s popularity substantially, particularly if those accounts have no relationship to one another (not following each other). This says that independent users have found a tweet’s content to be worthy of interaction by others.

This isn’t to say this is the only algorithm that could be built to handle social moderation, but it would definitely be a good start and a way to take this platform to the next level. Conversely, I will state that building an algorithm to scan and rate a tweet based solely on its textual content is next to impossible. Using the power of social interaction to grade a tweet and raise or lower its value is the best way to force those who want to game the system out of the platform.
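Purely as an illustration of the idea, here's a rough sketch of what trust-weighted vote counting could look like. The class names, thresholds and weights are all invented for this example; they aren't anything Twitter actually implements:

import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Hypothetical sketch of trust-weighted tweet scoring; all names and numbers are invented
class Vote {
    final Instant accountCreated; // when the voting account was created
    final boolean followsAuthor;  // related accounts carry less independent weight
    final int direction;          // +1 upvote, -1 downvote

    Vote(Instant accountCreated, boolean followsAuthor, int direction) {
        this.accountCreated = accountCreated;
        this.followsAuthor = followsAuthor;
        this.direction = direction;
    }
}

class TweetScorer {
    // Accounts younger than ~90 days contribute nothing, per the "earn trust first" idea
    static final Duration MIN_ACCOUNT_AGE = Duration.ofDays(90);

    static double score(List<Vote> votes) {
        double total = 0.0;
        for (Vote v : votes) {
            Duration age = Duration.between(v.accountCreated, Instant.now());
            if (age.compareTo(MIN_ACCOUNT_AGE) < 0) {
                continue; // new accounts can't influence visibility yet
            }
            // Independent (non-following) accounts count more than connected ones
            double weight = v.followsAuthor ? 0.5 : 1.0;
            total += weight * v.direction;
        }
        return total; // feed ranking would promote high scores and bury low ones
    }
}

Feed ranking would then promote tweets with high scores and bury tweets with low ones, with no hand moderation involved.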

Also, there should be no exemptions from the system. Not for CNN, not for Procter & Gamble, not for any account. Social moderation needs to apply to all accounts or it's worthless.

I'm not saying that social moderation is in any way a perfect solution. It isn't. But, at least it can be fair when implemented properly. Can this kind of system be gamed? Probably. But the engineers would need to watch for this eventuality and be ready to make changes to prevent further gaming of the system. Eventually, the holes will be patched.

Multiple feeds and Topics

Here's another area where Twitter has failed. As with any social platform, users have likes and dislikes and topic preferences they want to see. For example, I really don't want to see political bashing. That's not my thing. I'd prefer a feed that is politics-free. My only interest in politics and political candidates is when there's an imminent election. Otherwise, I want it out of my feed. Same for NRA / Gun control arguments. Same for Trump tweet bashing. Same for Pro Life vs Abortion. I don't want to waste time with these types of divisive, controversial topics in my feed. I have better uses for my time. If I want to see that content, I will explicitly go searching for it. I don't want it to automatically appear in my feed.

Yet Twitter has not implemented any customized feeds based on likes, hobbies or information preferences (e.g., new technology). Instead, Twitter has based this on following Twitter accounts that offer such information. The problem is chasing down those accounts to follow. Even then, because those accounts might only post new on-topic information 20% of the time, the other 80% of the time I would see stuff I don't want in my feed. Herein lies the problem with Twitter: it shouldn't be based on following a user, it should be based on following conversation topics.

I'd prefer to customize my feeds (and have several feeds hooked to different topics) and subscribe to those feeds. I don't need to follow any given account that talks about stuff I'm not interested in. Instead, by following topics, my feed gets interesting tweets. I can then discover new accounts to follow and also discover topics I'm interested in. This is the single important piece that Jack and team have sorely failed to address within the Twitter platform. To reiterate, I want to see stuff in my feeds that I am interested in, even if I don't follow that account. I don't want to see stuff I'm not interested in at all, even if I follow an account that tweets about it. Following by topic is more important than following by user.

This is the power of social media. This is the power of Twitter. This is what is missing to make Twitter a complete platform… this, in addition to social moderation.

Twitter’s Hand Moderation

Instead of implementing a social moderation system or an interest-based feed system, Twitter has spent its time hand moderating by suspending and banning accounts, all in service of its stated goal of "reducing hate speech". While deleting an account may reduce that account's ability to post hate speech, it doesn't stop the user from creating a brand new account and starting all over again. This is the flaw in Twitter's user-follow model.

Only the above two designs, 1) topic-based multiple feeds and 2) social moderation, will lead to lasting change within the Twitter platform. Nothing else will. Twitter's hand moderation technique is merely a small band-aid with limited scope. It will never make a dent in reducing hate speech on Twitter. Lasting change only comes from innovating the platform in new and better ways that improve the end user experience and, at the same time, improve the signal-to-noise ratio.

It’s time for Twitter to step up and actually begin innovating its platform in substantial new and meaningful ways… or it will perish.


Why you should NOT use Disqus on your site!

Posted in botch, business, california by commorancy on October 26, 2017

What is Disqus (pronounced discuss)? This is a service that purports to offer an embedded comment / discussion service to your blog or website. Seems like a good feature, but let’s explore why this service shouldn’t be used.

Discussion Forums

Any good blog site or article site should offer a way to allow for comments. However, I find far too many sites that don’t offer comments at all. This is not the focus of this article, but it is one of my pet peeves. Should you choose to add a discussion or comment service, you should not consider using Disqus at all. Why?

Every good discussion package should offer a way to moderate posts and see every post that's been submitted to your article. I believe that while Disqus does offer moderation, it also has a built-in spam detection package that hides posts flagged as spam from you. The problem with Disqus is that not only is its spam detection heinously faulty, filtering out many valid posts as false positives, but Disqus does nothing about it. This means that, as a site owner, you could be losing many, many valuable and valid comments to Disqus's spam detection system.

As a site owner, you won't even get to see those flagged posts to know they were there. They are simply hidden away on the Disqus profile of the user who posted the comment. Secondarily, the person leaving the comment can do nothing to get their comment un-spammed. Once it's caught by Disqus's spam filter, that comment is lost for all eternity. Disqus staff not only don't monitor these failures, they do nothing about them. Disqus offers a comment platform and they can't even do that job.

If a user clicks the "This is not spam" button, nothing happens. The post is not reposted. No one at Disqus looks at the comment. No one approves it. So, the comment remains in perpetual limbo solely on the user's Disqus profile.

Disqus as a Discussion Service

As a site owner contemplating embedding Disqus as a comment platform for your site, you want to know that your readers' comments appear timely and in full. This is guaranteed not to happen with Disqus. You don't want to use a half-baked discussion system thinking you're actually getting to see all comments on your posts. With Disqus, I'd guess at least 50% of all comments left on an article are lost to Disqus's extremely stupid spam filtering system. That number might even be higher. If you actually want to see all participation on your posts, you should find another system to enable comments on your articles. DO NOT rely on the Disqus platform, as it WILL lose valuable comments from your readers… comments that you will never see.

If you really value reader feedback and participation, do yourself a favor and DO NOT USE Disqus as a platform. Until this company actually gives a damn about your users and actually gives you the tools to manage every user response (spam-filtered or not), you should find another service to add discussion feedback to the articles you post.

Better, lead your users to your other social media sites where open discussions are, in fact, permitted without the draconian spam engine that Disqus currently employs to hide and censor valid and valuable comments from you.


Make LuxRender render faster

Posted in 3D Renderings, Daz Studio by commorancy on March 2, 2015

In addition to writing blogs here at Randosity, I also like creating 3D art. You can see some of it off to the right side of the screen in the Flickr images. I point this out because I typically like to use Daz Studio to do my 3D work. I also prefer working with the human form over still life, but occasionally I’ll also do a still life, landscape or some other type of scene. Today, I’m going to talk about a rendering engine that I like to use: LuxRender.  More specifically, how to get it to render faster. You can read more about it at www.luxrender.net. Let’s explore.

3Delight and Daz Studio

Daz Studio is what I use to compose my scenes. What comes built into Daz Studio is the rendering engine named 3Delight. It's a very capable biased renderer. That is, it prefers to use lighting and internal shortcuts to do its rendering work. While 3Delight does support global illumination (aka GI, or bounced lighting), it doesn't do it as well or as fast as I would like. When GI is turned on, it takes forever for 3Delight to calculate the bounced light on surfaces. Unfortunately, I don't have that long to wait for a render to complete. So, I turn to a more capable renderer: LuxRender. Keep in mind that I do still render in 3Delight and I'm able to get some very realistic scenes out of it as well. But those scenes have a completely different look than Lux and they typically take a whole lot longer to set up (and a ton more lights).

LuxRender

What's different about Lux? The developers consider it an unbiased renderer; that is, it is physics-based. In fact, all renderers attempt to use physics, but Lux attempts to use physics on all light sources. What is the end result? Better, more accurate, more realistic lighting… and lighting is the key to making a scene look its best. Without great lighting, the objects within a scene look dull, flat and without volume. It would be like turning the lights off in a room and attempting to take a photograph without a flash: what you get is a grainy, low-light, washed-out and flat image. That's not what you want. For the same reason you use a flash in photography, you want to use LuxRender to produce images.

Now, I'm not here to say that LuxRender is a perfect renderer. No no no. It is, by far, not perfect. It has its share of flaws. But, for lighting, it can produce some of the most realistically lit scenes from a 3D renderer that I've found. Unfortunately, this renderer is also slow. Not as slow as 3Delight with GI enabled, but by no stretch fast. Though, the more light you add to a scene, the faster Lux renders.

However, even with sufficient lighting, there are still drawbacks to how fast it can render. Let’s understand why.

LuxRender UI

The developers who designed LuxRender also decided that it needed a UI: a tool that allows you to control and tweak your renders (even while they're rendering). I applaud what the LuxRender team has done with the UI in terms of image tweaking functionality, but for all of the great things in the UI, there are not-so-smart things done on the rendering side. As cool and tweakable as a render-in-progress is, it should never take away from how fast the renderer can render. Unfortunately, it does.

Let's step back a minute. When you use Daz Studio, you need a bridge to operate Lux. It needs to be able to export the scene into a format that Lux can parse and render. There are two bridges out there. The first is Reality. The second is Luxus. I'll leave it to you to find the bridge that works best for you. However, Reality has versions for both Daz Studio and Poser. So, if you have both of these, you can get each of these versions and have a similar experience between these two different apps. If you're solely in the Daz world, you can get Luxus and be fine. Once you have this bridge and you export a scene to LuxRender, that's when you'll notice a big, glaring, sore-thumb problem while rendering.

Render Speed and LuxRender UI

When I first began using LuxRender, one thing became very apparent. LuxRender has this annoying habit of stopping and starting. Because my computer has fans that speed up when the CPU is put under load and slow down when not, I can hear this behavior.  What I hear is the fans spinning up and spinning down at regular intervals. I decided to investigate why. Note, renderers should be capable of running all of the CPU cores at full speed until the render has completed. 3Delight does this. Nearly every other rendering engine does this, but not LuxRender.

Here's part of the answer. There are three automatic activities inside the LuxRender UI while rendering:

  1. Tonemapping
  2. Saving the image to disk from memory
  3. Write FLM resume file

These activities outright halt the rendering process, sometimes for several minutes. This is insane. Now, let's understand why. Most systems today offer 4 or more cores (8 or more hyperthreaded cores). Since you have more than one core, it makes no sense to stop all of the cores just to do one of the above tasks. No. Instead, the developers should have absconded with one of the cores for these housekeeping processes, leaving the rest of the cores to continue doing rendering work all of the time. The developers didn't do this. Instead, they stop all cores, use one core (or less) to write the file to disk or update the GUI display, and then wait and wait and wait. Finally, the cores start up again. This non-rendering time adds up to at least 5 minutes. That's 5 minutes where zero rendering is taking place. That's way too long.
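To illustrate the alternative design, here's a hedged Java sketch of the pattern being described: a single background thread handles the periodic snapshot/save work while the worker threads keep rendering. The names and intervals are invented for this example and have nothing to do with LuxRender's actual (C++) code:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RenderLoop {
    public static void main(String[] args) {
        int workers = Runtime.getRuntime().availableProcessors() - 1;

        // One thread owns all housekeeping: UI refresh, image write, resume file
        ScheduledExecutorService housekeeping = Executors.newSingleThreadScheduledExecutor();
        housekeeping.scheduleAtFixedRate(RenderLoop::saveSnapshot, 1, 1, TimeUnit.HOURS);

        // The remaining cores render continuously and are never paused for saves
        for (int i = 0; i < workers; i++) {
            new Thread(RenderLoop::renderForever).start();
        }
    }

    static void saveSnapshot() { /* tonemap and write the current image to disk */ }

    static void renderForever() { /* trace samples in a loop */ }
}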

How do I get around this issue? Well, I don’t entirely. If you want to use LuxRender, you should run over to luxrender.net and make a complaint to solve this problem. The second thing to do is set the tonemapping interval to 3600 seconds, the image write to disk interval to 3600 seconds and the FLM write interval to 3600 seconds. That means it will only save to disk every 1 hour. It will only update the screen every 1 hour and save a resume file every 1 hour. That means that LuxRender will have 1 hour of solid render time without interruptions from these silly update processes. This is especially important when you’re not even using the LuxRender UI.
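If you'd rather not click through the UI every time, the same intervals can also be set in the exported .lxs scene file's Film section. I'm recalling these parameter names from memory, so treat them as an assumption and verify them against a scene exported by your bridge:

Film "fleximage"
    "integer displayinterval" [3600]  # seconds between tonemapping/display refreshes
    "integer writeinterval" [3600]    # seconds between image writes to disk
    "integer flmwriteinterval" [3600] # seconds between FLM resume-file writes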

Note that many applications set up intervals as short as a few seconds. That's stupid considering the above. Yeah, we all want instant gratification, but I want my image to render its absolute fastest. I don't need to see every single update interval in the UI. No, if I want to see an update, I can ask the UI to provide me that update when I bring it to the front. Automatically updating the UI at 10 second intervals (and stopping the rendering) is just insane and a waste of time, especially when I can simply refresh the UI myself manually. In fact, there is absolutely no need for an automatic refresh of the UI, ever.

Network Rendering

The second way to speed up rendering is to use other systems you may have around the house. They don’t necessarily need to be the fastest thing out there. But, even adding one more machine to the rendering pool makes a big difference on how fast your image might complete. This is especially important if you’re rendering at sizes of 3000 by 3000 pixels or higher.

System Specs and Lux

Of course, buying a capable system will make rendering faster. To render your absolute fastest in Lux, it's a given that you need CPU power, CPU cache and large amounts of RAM. So, get what you can afford, but make sure it has a fair number of CPU cores, a reasonable L1 and L2 cache and at least 16GB of RAM (for 3k by 3k or larger images). If you add one or more GPUs to the mix, Lux will throw that processing power on top and render even faster. But this doesn't solve the problem described above. Even if you have 32 cores, 128GB of RAM and the fastest L1 and L2 caches, it still doesn't solve the stopping and starting problem.

If you want to dabble in LuxRender, you should run over to luxrender.net and file a complaint about this cycling problem. In this day and age of multiple cores and multithreading, stopping the render process to save a file or update a UI is absolutely insane. To get your fastest renders, set the update intervals to 3600 seconds. Note, though, that if LuxRender crashes during one of those one-hour intervals, you will lose all of that work. I haven't had that happen while rendering, though.

So, that’s how you get your fastest render out of LuxRender.

 

How not to run a business (Part 6): Coding Edition

Posted in best practices, business by commorancy on August 6, 2013

So… you decide to open a business to write and sell software. Your business can choose from several different software development methodologies and strategies to help you get that software off the ground. You can choose the waterfall approach or use an agile approach. There are many approaches that can work, but all approaches have both benefits and drawbacks. Depending on the type of business your company is in, you need to think through how each type of coding method can affect your customer. Note that the goal behind most methods of development is to drive the process to completion, not so much to provide quality. With either Agile or Waterfall, both approaches can let you down if you’re not actively driving quality all along the way. Let’s explore.

Don’t choose a software development strategy just because you think it will allow you to complete the software on time. Any strategy you employ must make sure quality is number one or you face customer problems. Simply getting the software done and on time is not enough. Quality has to remain at the top for any software your team writes.

Don’t let your customers become guinea pigs. Software development is for your customers’ benefit. Thoroughly testing code is important. Don’t let this fall through the cracks or your customers will suffer the consequences and end up as beta testers.

Don't employ only happy path programming efforts when writing code. Coding solely for the happy path leaves your customers vulnerable to the unhappy paths. Coding for the happy path is equivalent to intentionally skipping big pieces of testing. Your customer will pay the price when they fall into an unhappy path trap. Your staff then has to respond by spending time doing data fixups and writing patches to fix the missed holes, while your sales/support teams are on the phone giving customers false reassurances.

Don't miss crucial QA steps just because you ran out of time. If time is the most important thing when coding, then the code quality will suffer and so will your customer. Again, do not use your customers as guinea pigs unless you like them walking away from your business.

Let’s understand more why the above is important!

The Happy Path is not happy for your customers. Utilizing Happy Path coding is solely for the convenience and benefit of your programmers in getting work done rapidly. Getting things done rapidly, but poorly, is not good for your business or your customer. Happy Path software development simply doesn't work.

As an example, imagine walking down a block in downtown NYC. You walk the block from corner to corner without any diversions or problems arising. Let's call this the Happy Path. Now, let's say you walk that walk again, but this time you stumble over a manhole and fall. You were so focused on the destination, you didn't pay attention to the manhole cover that wasn't fully closed. This is an unhappy path. Let's say that the next time you walk this path, you know the manhole cover isn't fully closed and you avoid it. Except this time you're so focused on avoiding the manhole cover that you walk into a tree. Yet another unhappy path.

Now, imagine this is your customer. Each time they try to navigate down your Happy Path, they fall into one trap after another because your software doesn’t handle these pitfalls. Writing code solely for the Happy Path is definitely not your business friend. Your customers will become frustrated and eventually find another company with a more stable product. It doesn’t matter how good the features are, it matters that the software is stable. A customer places trust in your software, but that trust is broken when the software breaks often.
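To put the analogy into code terms, here's a hedged Java sketch of the difference; the class, method names and the CSV-of-amounts scenario are invented purely for illustration:

class OrderTotals {
    // Happy path version: assumes the input is always well formed.
    // A null string, an empty field or a non-numeric value crashes the request.
    static double happyPathTotal(String csvAmounts) {
        double total = 0.0;
        for (String part : csvAmounts.split(",")) {
            total += Double.parseDouble(part);
        }
        return total;
    }

    // Robust version: handles the unhappy paths the customer will eventually hit
    static double robustTotal(String csvAmounts) {
        if (csvAmounts == null || csvAmounts.trim().isEmpty()) {
            return 0.0; // nothing to sum
        }
        double total = 0.0;
        for (String part : csvAmounts.split(",")) {
            String trimmed = part.trim();
            if (trimmed.isEmpty()) {
                continue; // tolerate stray commas
            }
            try {
                total += Double.parseDouble(trimmed);
            } catch (NumberFormatException e) {
                // log and skip the bad value instead of failing the whole request
            }
        }
        return total;
    }
}

The happy path version is quicker to write; the robust version is the one that survives real customer input.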

Coding for your customer

Your customer is the most important thing you have in your business. They drive your revenue and keep you in business. You should never play games with them and you should never use them as paid beta testers. But writing code that only utilizes the Happy Path intentionally leads your customers and your company into unhappy pitfalls. Wouldn't you rather have your team find these pitfalls before your customer does? When your customer finds the bug before you do, it makes your team and company look inept. This is never a good position to be in, especially when you are trying to establish yourself as a high quality software company.

The Happy Path may only provide 20-50% tested code paths. Is the other 50-80% to be left for your customers to test while they pay for and use your product? The Happy Path only leads to unhappy customers. Instead, if your software developers test a Robust Path all along the way, your team should catch at least 80-90% of the bugs, leaving a very small percentage of edge case bugs for your customers to find. So, instead of having your team work on constant bug fixes and/or constantly fixing or restoring customer data, your team can focus on the next release's features. Unfortunately, customer fixups and customer phone calls over these issues are big time wasters. Wasted time that can be completely avoided by writing the code correctly the first time.

The reality is that when writing software, your team can either spend their time crash-proofing the code up front or spend even more time crash-proofing it after it's already in production and making the customer unhappy. Either way, the problems will get fixed. It's just that you, as a company owner, need to decide whether it's more important to have happy, satisfied customers or a fast development cycle. You can't really have both. Customers become especially disenchanted when they're paying you a hefty fee for your service and end up as beta testers. Customers always expect solid, robust code. They don't pay to be beta testers.

Customer Churn

Keeping the customer happy should always be your number one priority. Unfortunately, you can't do that if the code that's being written is crashing and generally providing a less than stellar experience to your customers. You have to decide if you want your team to spend their time bug-proofing the code, or have even more of your staff spend their time after the release smoothing out customer dissatisfaction issues in combination with bug fixes. So, not only is your sales team's and customer care team's time spent making the customer happy again, but your engineering team's time is also incorrectly spent rewriting code a second, third or fourth time to fix bugs. Depending on your SLAs, you might even be violating them with certain bugs.

This can mean at least three times more work created for your staff than simply having your developers write robust code from the beginning. Sure, it might require a month longer development cycle, a bigger QA test cycle, but that extra time will pay for itself in having happy satisfied paying customers and fewer customer incidents. Customer satisfaction keeps your development team focused on the next feature set, keeps your sales team focused on new sales and keeps your customer support team educating users about how to use your product. Quality is the key, not speed.

Bug Fixing after a Release

I'm not saying there won't be bugs to fix or unhappy customers. Bugs will be found even if your team appears to write the most perfect code. However, writing high quality code from the beginning will drastically reduce the cycle of bug fixing and patches. This means making sure your development staff are all trained and knowledgeable about the languages they are required to use. Introducing new programming languages without proper training is a recipe for problems. Learning a new language on the go is the best way to write bad code. Properly trained engineers will usually produce much higher quality code. Don't ignore the Quality Assurance team's role; make sure they have a full and complete test suite and solid test cases. Unit testing works okay for developers, but once the code is all assembled, it needs a full QA test suite.
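As a small illustration, developer-level unit tests for the earlier robustTotal() sketch might look like this (JUnit 4; again, the names are invented, and this is only the developer side of testing, not the full QA suite):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderTotalsTest {
    @Test
    public void sumsWellFormedInput() {
        assertEquals(6.0, OrderTotals.robustTotal("1,2,3"), 0.0001);
    }

    @Test
    public void toleratesNullEmptyAndBadValues() {
        // the unhappy paths: null input, stray commas, non-numeric junk
        assertEquals(0.0, OrderTotals.robustTotal(null), 0.0001);
        assertEquals(3.0, OrderTotals.robustTotal("1,,2,abc"), 0.0001);
    }
}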

Also, if your feature set doesn’t cover your customer’s needs properly, satisfaction can also drop. This can happen if your business is fighting bad code rather than listening to what your customers want. Of course, this is a somewhat separate issue which I will discuss in another installment.

Java

I want to take a moment to discuss using Java for applications. Using Java is, again, a convenience to support speed coding efforts. More and more companies want it done ‘fast’.

With more and more compressed timelines, too many people seem to think that writing software in Java is easy, quick and simple. This is a fallacy. It isn't. While writing the code may appear simple at first glance, the whole JVM adds a huge level of operating complexity that engineers and management fail to understand or simply overlook. In theory, you should be able to deploy your .jar file and be done with it. It's not that simple. The JVM has heap space sizing issues and garbage collection that can easily turn what seems like reasonable code into a nightmare for your operations team to support and a nightmare for your customers. Basically, the JVM is an unpredictable beast.

Let's understand this better. The JVM tries to make coding simple and easy because it's interpreted. That thinking is a trap. From a coding perspective, it does make coding a whole lot easier, as there are lots of frameworks that can be used and code examples to be had. Unfortunately, nothing ever comes for free. There are always strings attached. Java does a whole lot of internal housekeeping so the coder doesn't have to. This ease of writing the code is completely negated by the JVM itself. To spare the coder the extra work of freeing up variables and objects, the JVM takes care of all of that. But the price paid is the Garbage Collector. So, instead of the coder doing this work in code, the Garbage Collector (GC) allegedly does it for you. We won't even get into just how ugly and horrible JVM logging is when you're trying to determine what went wrong.

In reality, the GC can end up spending so much time doing all of this extraneous cleanup work that no actual code work gets done. The reasons behind this issue can range from bad Java code (e.g., object leaks, memory leaks, file descriptor leaks, etc.) to huge swings in memory usage (creating GB-sized objects and freeing them often). As Big Data becomes more commonplace, it's worth noting the JVM really was not designed to properly handle Big Data objects, strictly because of the overhead of the GC. That means you need someone who's extremely knowledgeable about tweaking the JVM's heap sizes, GC frequency and the other tweakable parameters inside the JVM so that it doesn't get into this condition. It also means much more precise monitoring to determine when it is in this condition.
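To give a sense of what that tuning looks like in practice, here's the kind of standard JVM command line an operations team ends up owning for a Java 8-era service. The heap sizes, pause target, jar name and log path are illustrative only:

# Illustrative startup command; sizes, paths and the jar name are hypothetical
java -Xms4g -Xmx4g \
     -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
     -verbose:gc -XX:+PrintGCDetails -Xloggc:/var/log/myservice/gc.log \
     -jar myservice.jar

Fixing -Xms and -Xmx to the same value avoids heap resize churn, the collector choice and pause target shape GC behavior, and the GC log is what operations reaches for when the service stalls. None of this is the application's business logic; it's pure overhead that someone still has to own.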

In some big data use cases, Java may not even be workable. If you really need to move big data around fast, you should consider a compiled language first.

In essence, the engineering team has now pushed the normal robust coding and cleanup work off onto the operations team to manage via the JVM container. Now the operations team has to become expert in JVM management, constantly tweaking Java to keep it properly tuned and working. Worse, they now have to understand the code to even begin to diagnose a root cause of failure. In other words, it requires your operations staff to have a much higher level of knowledge about Java, Java coding and JVMs than languages that don’t require a JVM would.

Using C, C++ or other compiled languages

Even though compiled languages can require a much longer development cycle and more explicit handling of objects, they do two things for your company: 1) force your development team to write better code and 2) get rid of interpreted languages (and containers). Beyond the tremendous speed gain your application will see from being compiled, the operations overhead to manage the application is drastically reduced. Writing a UNIX daemon to handle an operational task might require nothing more than a simple configuration file and a ‘service’ script to restart it. No knowledge of a JVM container, of GC or of heap sizes is required.

Memory usage is always a concern, but not in the same way as with Java. In fact, it’s far, far simpler to troubleshoot and manage compiled applications than it is to troubleshoot and manage JVM container apps. If a compiled app goes off the rails, you know for certain that it was the app that did it. If a JVM-contained app goes off the rails, you don’t know if it was the app itself or the JVM container that spiraled out of control.

When a JVM-contained app fails, you’re left trying to determine whether it was a bug in your company’s code running in the container, a bug in Oracle’s Java itself or a third-party component problem. That leaves too many variables to try to diagnose at once. With compiled languages, this troubleshooting is almost always far less ambiguous and is usually as simple as running ‘strace’ or ‘top’, or reviewing a core dump.

Business Choices

Whatever approach your team chooses, quality must remain number one. When quality is sacrificed for the sake of development speed, your customers will suffer and, in turn, so will the bottom line. Some customers may be willing to deal with a bug occasionally. But if bugs are continual and constant after every release, eventually they will go find another service. Stability and reliability are the keys to making sure your company continues to succeed, whether it provides an iPad app or intends to become the next Google. Innovation keeps your customers coming back for more, but you can’t innovate if your team is constantly fighting bad code.

Part 5 | Chapter Index | Part 7

Restore a Mac formatted 6th Gen iPod nano in Windows 7

Posted in Apple, botch, Mac OS X by commorancy on September 22, 2012

I recently picked up a sixth generation iPod nano refurbished from GameStop. When I got home and plugged it into iTunes for Windows 7, iTunes recognized it as a Macintosh formatted iPod and said that it needed to be restored. Here’s where the fun begins… not. Several things happened after I plugged it in. First, Windows recognized it as drive O: and opened a requester wanting to format the iPod. This format panel stays open until cancelled. Second, when I tried to restore the iPod, iTunes kept showing me error 1436, a rather nondescript error that takes you to a mostly generic Apple help page that is only moderately helpful. I take that back, this help page wasn’t helpful at all.

Note, Macintosh formatted iPods cannot be used with Windows. However, Windows formatted iPods can be used on both Windows and Macs. So, this is simply a problem that exists because this iPod was originally formatted on a Mac. Such stupid issues cause such time-wasting problems.

How did the first restore go?

It didn’t. I realized the above-mentioned Windows disk format panel had the iPod open and that the 1436 error was due to this. However, that was just the beginning of the problems. When I cancelled that panel and tried the restore again, I got a different issue. Basically, iTunes opens a progress bar that keeps moving without any progress. I wasn’t sure if this progress panel was normal or abnormal, though I suspected abnormal after 3 minutes without any change. So, I began searching for how long an iPod restore should take. I found that a restore should complete in only a few minutes (less, actually). So, I knew something was wrong when it wasn’t making any progress.

Disk Mode

It was clear that iTunes wasn’t going to restore this iPod through its normal means. I began searching the net for how to recover this iPod and ran into a site that led me to Apple’s How to put an iPod in Disk Mode help page. This page is actually very useful and is where the 1436 error page should have led me, but didn’t.

What is Disk Mode? Disk Mode puts the iPod into a state that allows it to be formatted as a disk. Well, you don’t really want to format it. Instead, in Disk Mode, it gets rid of all that pesky Macintosh formatting garbage and actually lets you restore it properly. For the sixth gen iPod nano, to put it in Disk Mode, press and hold the power and volume down buttons until the screen turns black and the Apple logo appears. When you see the Apple logo, press and hold both volume up and down buttons until the iPod shows a white screen. This is the Disk Mode screen.

Recovering

At this point, I plugged the iPod back in with iTunes running, and iTunes saw that the iPod was ‘corrupted’ and asked to restore it. Well, the restoration this time went like a champ. No issues at all. However, after I restored it, I did have to close and restart iTunes. Until I did that, iTunes kept telling me that the iPod was in ‘Recovery Mode’ even though, based on the iPod’s screen, I knew that it wasn’t. After restarting iTunes, that stopped and it finally recognized the iPod as new and let me put music on it. Yay!

So, there you have it. It should have been as simple as plug in and restore, though. But Apple had to make this a chore because of the PC vs Mac formatting thing. Seriously, is that even necessary?

Design

Let me take a moment to commend Apple on the design of this iPod nano. When the long skinny nano was first released, I thought it was kind of cool, but not worth it. Then the smaller squatty nano arrived and I liked that design so much that I bought one. I got my use out of that and eventually bought an iPod touch. However, the iPod touch isn’t useful in all circumstances and I wanted something smaller and lighter. When this nano was released, I thought it was a great idea and well executed, save for the fact that it has no application support. So, here’s where Apple dropped the ball on this one.

The size and weight are awesome. The look is great, especially if you get a watch band. It just needed a refresh to add a few more features like Bluetooth, video (although, not really necessary in my book) and app support. I loved the square display because it’s the exact image ratio of CD covers. So, it was the perfect marriage between a music player and a user interface. Some people complained that the touch display was overkill. Perhaps, but I always liked it. That said, I never actually needed one of these, and I still don’t really need one. The reason I bought one is that Apple has discontinued this model in favor of its bigger-screen cousin.

The new nano, however, is neither nano in size nor really that small. This nano was the perfect size and the perfect shape. It truly deserved the name nano. However, the new nano is really not deserving of that name. The screen is too big and it’s really just a dumbed-down iPod touch. Yes, the new nano has video capabilities, but so what? I don’t plan on ever loading video onto it. Without WiFi or streaming mechanisms, there’s no point. I realize Apple wants to enrich their ecosystem (read: sell more videos to people), but this isn’t the device to do it. In fact, this latest nano design, shipping in late 2012, is really not that great looking. I feel that it’s stepping too far into the same territory as the iPod touch. So, why do this? It’s also bigger, bulkier and likely heavier. The battery life is probably even shorter. It’s no longer a small portable player.

The 6th generation iPod nano (this one I just bought) is truly small and light. It can go just about anywhere and has a built-in clip even! It lacks some features, yes, but for a music player I certainly don’t miss them. If you’re thinking of buying a 6th generation iPod nano, you should do it now while the Apple outlet still has them in stock. Yes, they are refurbished, but they’re still quite spectacular little music players. However, don’t go into the purchase expecting the feature-set of an iPhone or an iPod touch. It’s not here. If you go into the purchase thinking it’s an iPod shuffle with a display, then you won’t be disappointed with the purchase.

Apple’s ever changing product line

What I don’t get about Apple is removing a product from its lineup that clearly has no competition in the marketplace at all, let alone any competition within Apple’s own product lineup. Yet, here we are. Apple is dropping the 6th generation design in favor of the 7th generation design that’s bigger and bulkier (and likely heavier). In fact, it looks a lot like a smaller, dumbed-down iPod touch.

In reality, the 7th gen nano is so close to becoming a tiny iPod touch clone that it clearly competes with the Touch. This is bad. The 6th generation nano (pictured above) in no way competes with the iPod touch, other than having a tiny touch screen. The 6th generation nano design clearly still has a place in Apple’s lineup. I just don’t get why they dump products from their lineup and replace them with designs that aren’t likely to sell better (other than to those people who complained you couldn’t play video on the 6th gen nano). The 6th gen nano is great for the gym or while running. However, after this newest nano is introduced, if you want a square-sized small music player, you have to get a shuffle with no display. The bigger, bulkier 7th gen design just won’t work for most activity use cases. Apple, your design team needs to better understand how these devices are actually being used before you put pen to paper on new designs, let alone release them for public consumption. Why is it always just one device? Why can’t you have both in the product lineup?

Of course, if they had retained an updated 6th gen model while adding the 7th gen model, that would make a lot more sense. Removing the older model in favor of this one doesn’t work; this is not a replacement design. You can’t wear this one like a watch, so that whole functionality is gone. What I would like to have seen is two models: a 6th gen revamped to add more features like Bluetooth and perhaps a camera and, at the same time, this new video-capable model. The updated 6th gen doesn’t need to play back movies; the screen is too tiny for that. In fact, the screen on this new 7th gen model is too tiny for that. Even the iPod touch is too tiny for watching movies, in practicality. It’s not until you get to the iPad that watching a movie even becomes practical. In a pinch, yes, you could watch a video or movie, but you’d be seriously straining your eyes. I’d rather watch (and not strain my eyes) on a much bigger screen. No, an updated square-format touch screen iPod is still very much necessary in the lineup. I understand Apple’s need for change here, but not at the cost of the use case that’s now lost with this 7th generation iPod. Sometimes, Apple just doesn’t seem to get it. This is just one of a new series of cracks in the armor of the new Jobs-less era Apple. Welcome to the new Apple, folks.


Patent Trolls or why software patents should be abolished!

Posted in business, free enterprise, politics by commorancy on May 21, 2011

The patent system was originally designed to provide inventors exclusive rights to their invented ideas. But there used to be a catch: the idea must lead to a real-world tangible device. The patent system was also conceived long before computers existed. So, at the time the patent system was conceived, it was designed as a way for inventors to retain exclusive control over their ideas for tangible devices without other people stealing or profiting from those ideas.

The patent system is enforced by the legal system. It is sanctioned by governments (specifically in the US, by the US Patent Office – USPTO – and the legislative system) to protect individuals’ patents from use by others who seek to profit from those previously ‘patented’ ideas. Enforcing a patent involves suing an alleged infringer and then having a court of law rule on whether the alleged infringer has, in fact, infringed. The burden of proof, then, falls on the patent holder to prove infringement. And, of course, it ties up the legal system to resolve the dispute.

Tangible vs Intangible Devices

The patent system was conceived at a time when the ultimate outcome of a patent idea was to produce a tangible physical good. That is, something that ultimately exists in the real world like a pen, a toaster, a drill, a telephone or a light bulb. The patented idea itself is not tangible, but the idea described within the patent should ultimately produce a tangible real world item if actually built. This is why ideas that lead to intangible things were never allowed to be patented and are only allowed to be copyrighted or trademarked.

Fast forward to when the first computers came into existence (the 30s-60s), and then to the 70s, when the US Patent Office began granting software patents en masse (although the first software patent was apparently granted in 1966). Software, unfortunately, is not a tangible thing and, for the most part, is simply a set of ideas expressed through a ‘programming language’ with finite constructs. Modern programming languages, specifically, are designed with limited constructs to produce structured code. That is, an application follows a specific set of pre-built rules to basically take data in and present data out (in specific, unique ways). Ultimately, that’s what a program does: take data in, process it and spit data out in a new way.

Software Design Limits

Because modern programming languages have limited constructs from which to build an application, and are further constrained by such limits as application programming interface (API) frameworks, operating system function calls, hardware limitations and other such constraints, writing an application becomes an exercise in compromise. That is, you must compromise programming flexibility for the ease and speed of using someone else’s API framework. Of course, you can write anything you want from scratch if you really want, but most people choose to use pre-existing frameworks to speed the development process. Using external frameworks also reduces the time to complete a project. At the same time, including third-party API systems is not without its share of coding and legal issues. Programmatically speaking, using a third-party API opens up your code to security problems and puts implicit trust in that API that it’s ‘doing the right thing’. Clearly, the functionality derived from the external framework may outweigh the security dangers present within the framework. From a legal perspective, you also don’t know what legal traps your application may fall into as a result of using someone else’s API framework. If they used legally questionable code within the framework, that will also bring your application into question because you used that framework inside your app (unless, of course, it’s a SOAP/REST internet framework accessed remotely).

With all that said, embedding frameworks in your app severely constricts your ability to control what your program is doing. Worse, though, if you are using a high level programming language like C, C++, Objective-C, C# or any other high level language, you are limited by that programming language’s built-in constructs. So, even if you choose to code everything from scratch, it’s very likely you could write code substantially similar to something that someone else has already written. Because high level languages have limited constructs, there are only so many ways to build an application that, for example, extracts data from a database. So, you have to follow the same conventions as everyone else to accomplish this same task.
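
As a concrete illustration of that last point, here is roughly what ‘extract data from a database’ looks like in Java using the standard JDBC API. The connection string, table and column names are made up; the point is that almost any developer given this task independently would land on essentially this shape, because the language and the API leave little room to do it any other way:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FetchCustomers {
        public static void main(String[] args) throws Exception {
            // Connection details, table and column names are hypothetical.
            try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/shop", "user", "pass");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, name FROM customers")) {
                while (rs.next()) {  // loop over each row returned
                    System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                }
            }
        }
    }

Open a connection, run a query, loop over the rows: the structure is dictated by the constructs available, which is exactly the problem when someone holds a patent on something this ordinary.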

Software Patents are bad

Because of these limited high level language constructs, there is a high probability that someone writing an application will write code that has already been written hundreds of times before. And note, that’s not an accident. That happens because do…while, for() and while() loops, as well as if conditionals, are always used in the same way. Worse, you can’t deviate from these language constructs because they are pretty much the same in any language. If these constructs didn’t exist, you couldn’t easily make decisions within your code (i.e., if X is greater than 3, do this, else do that).
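
Spelled out in Java, that last example is about as generic as code gets; there is essentially one natural way to write it, which is exactly the point. The names here are placeholders:

    // The canonical loop-plus-decision shape from the sentence above; the names
    // are placeholders, but nearly any developer would write this same structure.
    public class Decide {
        static void doThis(int x) { System.out.println("this: " + x); }
        static void doThat(int x) { System.out.println("that: " + x); }

        public static void main(String[] args) {
            int[] values = {1, 2, 3, 4, 5};   // placeholder data
            for (int x : values) {
                if (x > 3) {
                    doThis(x);
                } else {
                    doThat(x);
                }
            }
        }
    }

Two developers who have never met will produce near-identical versions of this, and neither of them copied anything.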

Why are software patents bad? Simply put, because languages are written with such limited programming concepts, the probability of reinventing something that has already been invented is far too high. Unlike devising a real-world idea, where the probability that someone could come up with the same exact idea is likely near zero, with software written from language constructs the probability is far higher than 70% that someone could design the same (or substantially similar) code, idea or construct. And that high probability is strictly because of the limits and constructs imposed by the high level language.

Yet, the USPTO has decided to allow and grant software patents knowing that the probability of creating substantially similar ideas within the software world is that high. Yes, probabilities should play a part in whether or not to grant patents.

Probabilities

Probability in idea creation is (and should always be considered as) how likely someone is to create something substantially similar to someone else’s idea. Probability should always be relevant in granting patents. Patents need to be unique and individual. That is, a patent should be granted based on something that multiple people could not devise, guess, build or otherwise conceive accidentally. Because real-world tangible items are constrained only by the elements here on Earth, inventions using Earth’s elements are, for all intents and purposes, effectively infinite. Because software code uses a much smaller number of constructs, which limit and constrain programming efforts, that smaller set increases the chances and the probabilities that someone can create something similar. In fact, it increases the probabilities by orders of magnitude. I’m sure an expert on statistics and probabilities could even come up with real-world probability figures comparing element-based inventions and software-based inventions. Suffice it to say, even without this analysis, it’s quite clear that it’s far too easy for someone to devise something substantially similar in software without even really trying.

Software patents are bad, revisited

Basically, it’s far too easy for someone to devise something someone else has already conceived using software. On top of this, the USPTO has seen fit to grant software patents that are way too obvious anyway. That is, they’ve granted patents on software ideas as commonplace as cotton, strawberries, a nail and yarn. Worse, because of these completely obvious patents, patent trolls (people who do nothing but file patents without any intent of producing anything) game the system and produce completely obvious patents. This has created a land mine situation for the software industry. It’s especially bad because it’s virtually impossible to search for existing patents before writing software.

So, as a software developer, you never know when you might step on one of these land mines and get a ‘cease and desist’ notification from a patent troll. That is, someone who has patented some tiny little thing that’s completely obvious, yet your application takes advantage of that thing somewhere because you just happened upon one of the easy-to-reach constructs in a language. Patents should only be granted for ideas that someone could not easily create by sheer accident. Yet, here we are.

Ideas now patented

Worse, software is not and has never been tangible. That is, software doesn’t and cannot exist in the real world. Yes, software exists on real-world devices, but the software itself is just a series of bits in a storage device. It is not real and will never be real or ever see the light of day. That is, software is just an idea, an idea with a structured format. It is not real and will never have a tangible physical shape, like a toaster. We will never be able to have tactile interaction with software. Hardware, yes, is tactile. Software, no. The software’s running code itself cannot stimulate any of our five senses: not sight, hearing, touch, smell or taste. Someone might argue that software does produce visual and audible interaction. Yes, the output of the software produces these interactions. That is, the software processes the input data and produces output data, and the input and output data have sight and sound interaction. You still aren’t seeing or hearing the software code doing the processing. That’s under the hood and cannot be experienced by our five senses. For this reason, software is strictly an idea, a construct. It is not a tangible good.

Patents are a form of personal law

That is, the owner of the patent now has a legal ‘law’ that they need to personally enforce. That patent number gives them the right to take anyone to court to enforce their ‘law’, er… patent. No entity in government should be allowed to grant personal law, especially not for intangible things. I can understand granting patents on tangible items (a specialty hair clip, a curling iron, a new type of pen, etc.). That makes sense, and it’s easy to see infringement because you can see and touch the fake. It takes effort, time and money to produce such a tangible item. Software patents require nothing: just an application to the USPTO, a payment, and then waiting for the patent to be granted. After the patent has been granted, take people to court, win and wait for royalties. This is wrong.

All software patents should be immediately abolished and invalidated

Why?

  • Software patents only serve corporations in money-making ventures. Yet, software patents really serve to bog down the legal system with unnecessary actions.
  • Software patents stifle innovation due to ‘land mines’. Many would-be developers steer clear of writing any code for fear of the legal traps.
  • Software patents are granted despite probabilities far too high that someone will independently produce something similar, based on limited high level language constructs.
  • Because software language constructs are, by comparison, much smaller in number than the Earth elements available for inventing real-world devices, the probabilities say it’s too easy to recreate something substantially similar to someone else’s software.
  • Software is intangible and cannot expose itself as anything tangible (which goes against the original idea of patents in the first place).
  • Software patents will reach critical mass. Eventually, the only people left writing code will be large corporations who can afford to defend against legal traps.
  • Software patents are now being granted without regard to obviousness.

As a result, all software patents, past and present, should be immediately invalidated. If we continue down this path of software patents, a critical mass will eventually be reached where writing software becomes such a legal land mine that developers will simply stop developing. I believe we’ve already seen the beginnings of this. Eventually, the only people left who can afford to develop software will be large corporations with deep pockets. Effectively, software patents will stifle innovation to the point that small developers will no longer be able to legally defend against the patent trolls and large corporations seeking to make money off ‘licensing’. The patent system needs to go back to a time when the only patents granted were patents describing tangible physical goods. Patents that do not describe tangible physical goods should be considered ideas and covered under copyright law only.
