Random Thoughts – Randocity!

The design failure of SE Linux?

Posted in botch, data security, software by commorancy on August 20, 2023

[Image: numerous padlocks on a metal bridge railing]

Buckle up, folks. Let’s embark on a wild and whimsical journey into the quirky world of SE Linux. Oh yes, we’re diving deep into the mysterious realm of this oh-so-important “security” thingamajig, which may sound a bit dull, but trust us, it’s secretly fascinating. Grab your virtual popcorn and Starbucks, sit back, and let’s unravel this enigmatic Linux subsystem together! Let’s explore.

What is SE Linux?

SE Linux stands for Security-Enhanced Linux (SEL); a catchphrase, more or less. Developers love giving their add-ons names like SE Linux. In reality, what does SE Linux actually do? The name doesn’t really say. It does say it has something to do with security, but short of digging deep into the documentation, you have no real idea what SE Linux actually is.

Let me start by saying that SE Linux makes Linux incompatible with applications written for standard Linux. Why? Security-Enhanced Linux attempts to lock down the internals of Linux, but it does so in a way that breaks nearly every regular application ever written. In essence, enabling SE Linux is sure to break all of your third-party apps.

Why does SE Linux break the apps? Because SE Linux is given complete control to restrict access down to the individual system-call level and down to the content-serving level. What that means is that a call like execve() could receive “access denied” if a program were to attempt to use it with SE Linux enabled… yes, even if the program is operating as the “root” user. Even serving up HTTP content from a path that shouldn’t have HTTP content can be denied.
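When a call is denied, the kernel logs an AVC (Access Vector Cache) record to the audit log, assuming the audit daemon is running. Here’s a quick, hedged sketch of how an administrator hunts for one (the output line is illustrative, with timestamps and most fields elided, and comm="myapp" is a made-up program name):

# ausearch -m avc -ts recent
----
type=AVC msg=audit(...): avc:  denied  { execute } for pid=... comm="myapp" ...

Without the ausearch tool installed, grepping /var/log/audit/audit.log for the string “avc:  denied” turns up the same records.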

Because the “root” user has always had unbridled access to EVERYTHING in a UNIX operating system, allowing SE Linux to constrain the “root” user to no more access than a regular user automatically breaks the idea of what Linux is.

SE Linux Modes

Before getting too deep into the weeds, someone is likely to point out that there are two modes in which SE Linux operates: 1) Permissive and 2) Enforcing. Unfortunately, “Permissive” mode isn’t as permissive as one would hope; it’s a more-or-less useless operating mode intended strictly for testing purposes. Even enabling “Permissive” can still break applications, simply because “Permissive” isn’t exactly the same as having SE Linux disabled entirely.
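For reference, here’s a quick sketch of checking and flipping modes on a typical Red Hat style system. The setenforce change lasts only until reboot; the persistent setting lives in /etc/selinux/config:

# getenforce
Enforcing
# setenforce 0
# getenforce
Permissive
# grep ^SELINUX= /etc/selinux/config
SELINUX=enforcing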

[Image: Crossing Guard]

When SE Linux is entirely disabled, this is (and was) the natural state of Linux (and UNIX) since the day UNIX was first introduced. The problem is, SE Linux was designed by the NSA (National Security Agency) as patches to Linux and, more specifically, to Linux’s kernel. The NSA isn’t really a software developer. As such, this agency has shoe-horned into Linux a system that not only fundamentally breaks UNIX, it fundamentally changes Linux into something other than UNIX.

UNIX was founded on the principle that it should work in a very specific way, a way that enhances computing. Unfortunately, SE Linux has shoehorned its way into the operating system as a watchdog whose sole purpose is to get in the way of computing; to be that crossing guard who throws up a STOP sign and prevents you from crossing… even if you’re a firetruck on the way to a fire.

Linux Security

Linux has always been a relatively secure operating system, so long as you maintain good password quality, close down unnecessary and unneeded services, regularly apply security patches and utilize best practices when installing new software. Combine all of these proactive management best practices with a solid firewall and it’s relatively unheard of for a Linux system to be broken into, let alone exploited with malicious code. Nearly all deployed malicious code found on Linux servers is there because hackers gained root access to the server and then manually installed it.

Yet, the NSA felt it necessary to effectively break Linux in order to introduce a “new” watchdog system that watches every system call being used on the operating system. More than just watching those calls, it interferes with some of them, preventing them from occurring.

This doesn’t just break Linux, it guts Linux into oblivion. It’s no wonder, then, that the vast majority of sites (and managers) running Linux disable SE Linux as the first step in deploying a new server. Who wants to have to deal with broken software?

Third Party Software

You would think third-party software manufacturers would have embraced SE Linux due to its alleged extra security. You’d have thought wrong. Most manufacturers still don’t embrace SE Linux due to its hodge-podge nature. It doesn’t help that most systems administrators and systems managers also don’t understand SE Linux or its internals… but that’s not the real problem.

The real problem is the developers. Developers build their software on laptops and other convenient computers running Linux, but they disable SE Linux so that it doesn’t get in their way when writing code. Writing and testing code is difficult enough without having to debug SE Linux when code failures begin. By disabling SE Linux, developers take that annoyance out of the equation. Rightly so. Why have a subsystem enabled whose sole purpose is to get in your way?

The problem is, without developing code WITH SE Linux running, the problem gets thrown onto the systems administrators and/or systems engineers to solve after the fact. The developer is all, “Here you go” (handing the systems engineer the finished software), leaving the systems engineer the problem of attempting to get the software working with SE Linux enabled. Most times, that ask is impossible. A systems engineer doesn’t have access to the source code, so they can’t guide the developer to rewrite or redo portions of the code to make it compatible with SE Linux.

What that ultimately means is that SE Linux gets disabled on production servers simply to deploy that developer’s code. Without every developer both enabling and understanding SE Linux on their development servers and, most importantly, using it during software development, there is no way a systems administrator or systems engineer can make software work with SE Linux after the fact. Software is either designed to work properly within the constraints of SE Linux or it is not.

This is the fundamental problem with the compatibility level of SE Linux. This is also a primary design failure of SE Linux by the NSA; that, and SEL’s failure to actually secure the server. In other words, new subsystems must remain fully backward compatible with what has come before. If a subsystem can’t remain backward compatible, then it ultimately won’t be used… and that’s actually where we are.

DOD and SE Linux

To be certified by the Department of Defense (DOD) per Security Technical Implementation Guide (STIG) compliance, a UNIX system must enable SE Linux as ‘Enforcing’ (the strongest level offered). For those companies who wish to do business with the government, or more specifically with the Department of Defense, STIG compliance is a must. By extension, STIG compliance does mean enabling SE Linux (among a whole slew of additional DOD security requirements).
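As a rough sketch of what checking that compliance looks like in practice on Red Hat systems, the usual tool is OpenSCAP with the SCAP Security Guide content. The package names, data-stream path and profile shorthand below are typical of recent RHEL releases but vary by version, so treat them as illustrative:

# yum install scap-security-guide openscap-scanner
# oscap xccdf eval --profile stig --report /tmp/stig-report.html \
      /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml

The generated HTML report flags every failed STIG rule, including whether SE Linux is set to ‘Enforcing’.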

Businesses must then make a choice. Seek to do business with the US Government or not. If you’re running Linux operating systems as part of whatever service you intend to offer to the US Government, you must comply with the requirements defined in the Defense Information Systems Agency’s (DISA’s) STIGs (which, as stated above, includes enabling SE Linux… and all that falls out of that).

Are there ways around SE Linux’s Incompatibility?

Yes, but it’s not always easy or fast. Heads up. This is the dull part. So as not to dive too deep into the sysadmin weeds as to why, Red Hat publishes a comprehensive guide to SE Linux’s incompatibilities (and how to work around them all). However, we will still need to dive deep enough to get this article’s point across.

For example, consider customizing an Apache HTTP configuration as follows (a perfectly normal thing to do). This customization yields the following problems when SE Linux is enabled:

The httpd package is installed and the Apache HTTP server is configured to 
listen on TCP port 3131 and to use the /var/test_www/ directory instead of 
the default /var/www directory or the default port of 80.

# systemctl start httpd
# systemctl status httpd
...
httpd[14523]: (13)Permission denied: AH00072: make_sock: could not bind 
to address [::]:3131
...
systemd[1]: Failed to start The Apache HTTP Server.

With SE Linux disabled on a Linux system, Apache’s HTTP server would happily start up just fine. With SE Linux enabled and set to ‘Enforcing’, starting httpd with the above modified config yields “Permission denied” at the point when httpd attempts to bind to port 3131.

It gets worse. To modify SE Linux to allow httpd to listen on port 3131, you have to execute the following SE Linux permission modification command:

# semanage port -a -t http_port_t -p tcp 3131
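You can verify that the port actually took by listing the policy’s port definitions; a hedged sketch (the exact set of default ports varies by policy version):

# semanage port -l | grep -w http_port_t
http_port_t    tcp    3131, 80, 81, 443, 488, 8008, 8009, 8443, 9000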

That’s just the beginning. Even after executing that semanage port command and then restarting httpd, the changed directory yields the following error when attempting to retrieve content:

# wget localhost:3131/index.html
...
HTTP request sent, awaiting response... 403 Forbidden

Why 403 Forbidden? Well duh…

# sealert -l "*"
...
SELinux is preventing httpd from getattr access on the 
file /var/test_www/html/index.html.
...

SE Linux has prevented httpd from getattr access on /var/test_www/html/index.html. This again requires manually reconfiguring SE Linux to allow the new directory location for httpd. First, though, we must understand why SE Linux doesn’t like this path and file.

# matchpathcon /var/www/html /var/test_www/html
/var/www/html       system_u:object_r:httpd_sys_content_t:s0
/var/test_www/html  system_u:object_r:var_t:s0

The SE Linux command matchpathcon (so intuitively named here) shows that the content type used for /var/www/html (the standard default location) isn’t the same as what’s defined for /var/test_www/html. Thus, SE Linux won’t allow HTML content to be served from the customized directory because that directory isn’t labeled as content httpd may serve. Can we say, “minutiae?” I knew that you could.

That means redefining the context for /var/test_www so that its content carries the httpd_sys_content_t type. To do that, a system admin would need to execute the following:

# semanage fcontext -a -e /var/www /var/test_www

BUT, the command executed just above doesn’t actually relabel the existing files and dirs within /var/test_www. Oh, no no no. Now you have to run yet another command to recursively relabel all sub-directories and files with the httpd_sys_content_t type of data. You do that with…

# restorecon -Rv /var/
...
Relabeled /var/test_www/html from unconfined_u:object_r:var_t:s0 to
unconfined_u:object_r:httpd_sys_content_t:s0
Relabeled /var/test_www/html/index.html from unconfined_u:object_r:var_t:s0 to
unconfined_u:object_r:httpd_sys_content_t:s0

A systems administrator can spend all of this time doing this additional reconfiguration work each and every time a new web directory is needed… OR, a systems administrator can disable SE Linux and avoid all of this work.
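For the record, “disabling SE Linux” is itself a one-line configuration change plus a reboot on Red Hat style systems; a quick sketch:

# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# reboot

It’s easy to see why that one-liner wins out over the pages of relabeling above.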

Janitorial Work

Even if you don’t understand a word of what was said just above, it’s easy to see that it’s an absolute mess. Not only does SE Linux require a systems administrator to configure all of this extra junk, it requires a systems administrator to understand all of the above NEW commands needed to manage SE Linux AND have a firm grasp of those commands’ nuances and quirks. Missing even one tiny thing can cause the whole application to break or fail in unexplained ways.

For example, the 403 Forbidden error could have led an inexperienced systems admin down a rabbit hole simply because they didn’t know that SE Linux was enabled as ‘Enforcing’. An inexperienced admin might not put two and two together to understand that SE Linux is actually the culprit.
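The two quick checks that would have saved that rabbit hole, sketched here with trimmed output (sealert comes from the setroubleshoot-server package on Red Hat systems):

# sestatus
SELinux status:                 enabled
Current mode:                   enforcing
...
# sealert -a /var/log/audit/audit.log

The first confirms that SE Linux is enforcing; the second summarizes, in plain language, every denial recorded in the audit log.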

It’s easy to see why many, many businesses running Linux make it a policy to instantly disable SE Linux. If your company is not doing business with the government, there’s no need to make your systems administrators do all of this extra work when they could be performing other more critical tasks.

On the flip side, if your business is currently negotiating with the DOD for a contract, then you’d better get your systems administrators trained up quick on SE Linux. More than this, you’d better run an audit of the software your business uses to determine whether that software can easily be made compatible with SE Linux. Hint: it probably isn’t easy.

DOD Exceptions?

Does the DOD allow for exceptions? Yes, but they’re limited and likely only for a limited time. Meaning, if you can’t enable SE Linux right away due to software limitations, you’ll need to document exactly why. Even then, your team had better have a plan to get SE Linux implemented soon or else your contract might dry up. It only takes one other vendor stepping up that IS fully compliant with DISA STIGs for your company to lose its contract.

Does SE Linux improve security?

This is actually a very good question. The short answer is, no. SE Linux requires a systems administrator to drastically increase their workload to manage application permissions. SE Linux forces an administrator to explicitly define permissions for each application down to incredible minutiae. Once that long-tailed, convoluted configuration is complete, the application then works again exactly as it always has (i.e., as it did without SE Linux).

Here’s the key! Because most exploits rely on standard app functionality to work, SE Linux would happily allow an exploit to occur simply via that application’s normal functions. The only exception would be if the systems administrator explicitly disallowed use of specific system function calls. However, if an application uses that function call even once during normal operation, disallowing it could cause the application to fail in very unexpected ways, possibly even leading to an OS cascade failure / core dump.
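To make that concrete: a confined Apache process runs in the httpd_t domain, and any exploit code running inside that process inherits every permission httpd_t has been granted. You can see the domain a process runs in with ps (the PID and fields here are illustrative):

# ps -eZ | grep httpd
system_u:system_r:httpd_t:s0    14523 ?   00:00:00 httpd

If httpd_t is allowed to read httpd_sys_content_t files, so is anything an attacker manages to run inside httpd.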

Further, SE Linux is effectively an enhanced permissions system, but it does nothing to watchdog an application’s behaviors to ensure that the application itself is functioning correctly or normally.

What this further means is that a system administrator would need to become a software developer to read through and understand the entire application’s source code to know when or if an application uses a specific function call that the administrator wishes to deny. While many systems administrators can be programmers, not all of them are. More than this, many systems administrators who can code are barely more than novices. Were a systems administrator actually a software developer in disguise, then why would they remain a systems administrator by trade? Thus, most systems administrators know enough to read some code (i.e., novice), but not enough to actually write complex code.

Let’s take this one step further. Putting a system administrator in the position of unilaterally denying access to specific function calls is not what systems administrators are tasked to do. That’s defining policy. That’s not an SA’s job. Expecting an SA to take on this type of job turns an SA into a security manager or policy manager, not a systems administrator. Systems administration is exactly how those two words sound: administration of systems. Meaning, management of systems, making sure those systems operate fine, occasionally install software and/or operating systems, manage configurations of systems and debug it all when it doesn’t work correctly. Systems administrators are even tasked with winding down old hardware and systems to dispose of them.

Systems administrators don’t make policy, but will enforce policy as defined by managers… so long as that policy makes sense and doesn’t interfere with the operation of the network, server or application. However, not all systems administrators are knowledgeable enough to foresee if any specific policy change might end in bad results.

Policy Implementation

Here’s a situation that can get systems administrators into hot water easily. Managers all congregate and decide to implement a new policy that execve() cannot be called from within any application. The policy is handed to a systems administrator to implement. The SA is relatively new and doesn’t fully understand either the systems or the software operating on those systems. The SA does understand SE Linux well enough to implement the change as requested and, thus, does so.

Within an hour (or less), the company’s primary paid application is down, the servers are behaving erratically, memory is spiking and the systems are actually crashing and rebooting. Effectively, the business’s servers are down.

Here’s a situation where the company’s executives made an unwise and untested decision and forced implementation down onto a person with very little experience. The person happily obliged, thinking the managers already knew it would work. Why would these managers expect a new SA to jump through many hoops testing all of this? The SA would assume that if the request landed on his/her desk, it must already have been tested.

Yet, it wasn’t. Here’s the rub. Because the SA did the actual work to implement the change to the systems, the SA will be held responsible for the outage (possibly up to and including termination). Ideas from managers never get blamed. The people who get blamed are the systems administrators who “should have known better” and, specifically, the person who actually “pulled the trigger” by performing the configuration change.

Enabling SE Linux as ‘Enforcing’ is the same situation. If you ask your SA team to implement this change without performing any testing, then expect your business to go down. Almost no applications are properly configured to handle SE Linux set to ‘Enforcing’ prior to enabling it.

Heading down the SE Linux Road

If a company wishes to implement SE Linux as ‘Enforcing’, then you’d best test, test, test and then test some more. You can’t just turn SEL on like a light switch and expect it all to work just as it had. Making this decision means testing. More than this, it means ensuring all systems administrators are not only familiar with SE Linux itself (and its commands), but also familiar with all applications installed and running on the company’s servers.

Once SEL is enabled, the applications are likely to begin failing unless the systems administrators have configured those specific applications under SEL before. A commonly cited testing workflow is sketched below.
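The workflow: run the box in Permissive mode, exercise the application thoroughly, then convert the logged denials into a local policy module. A hedged sketch (the module name myapp_local is made up for illustration):

# setenforce 0
    (run the application through all of its normal operations)
# ausearch -m avc -ts boot | audit2allow -M myapp_local
# semodule -i myapp_local.pp
# setenforce 1

Note that this blindly allows everything the application tried to do during testing; which rather neatly underscores this article’s point about SEL’s actual security value.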

What have we learned?

Let’s explore all that we’ve learned about SE Linux.

  1. SE Linux is a deep dive permissions system add-on for Linux. It primarily enhances security through obscurity. We already know that security through obscurity doesn’t work.
  2. SE Linux is fraught with peril. Unless systems administrators are properly trained to both understand SEL and how to configure apps under SEL, enabling SEL can lead to problems.
  3. SE Linux doesn’t improve security because once apps are configured under SEL, they are just as vulnerable to being exploited as if SEL were not enabled.
  4. SE Linux increases workload for systems administrators because not only do they need to do their normal Linux administration jobs, they must also deep dive into SE Linux a lot to make sure it is and remains correctly configured and functional.
  5. SE Linux is an overall hassle to manage.
  6. SE Linux is not required unless you’re attempting to win a contract with the United States Department of Defense.

Overall, the design behind SE Linux seems to have had noble intentions. Unfortunately, SE Linux is actually much the same as requiring someone to spend time hanging padlocks off of a chain-link fence, as illustrated in this article’s opening image. Those padlocks don’t serve to protect the fence. The fence is still doing all of the protection work.

These padlocks symbolize the exact way that SE Linux attempts to protect an operating system. The operating system is the chain-link fence… and the OS does all of the protecting. The padlocks (SEL) only serve to clutter up that fence; they don’t actually do much of anything to improve security.


The Microsoft Botch — Part II

Posted in botch, microsoft, redmond, windows by commorancy on January 17, 2009

In a question about The Microsoft Botch blog article, jan_j on Twitter asks, “Do you think Microsoft is going down?” As commentary on that question, I put forth this article.

I’ll start by saying, “No”. I do not think that Microsoft is ‘going down’. Microsoft is certainly in a bad way at this point in time, but they still have far too much market share with Windows XP, Windows 2000 and Windows 2003 Server, as well as Exchange and several other enterprise products. So, the monies they are making off of these existing installations (and licenses) will carry them on for quite some time. Combine that with Xbox Live and the licensing of Xbox 360 games and Microsoft isn’t going anywhere for quite a while. The real question to ask, though, is: Is Microsoft’s userbase dwindling? At this point, it’s unclear, but likely. Since the Vista debacle, many users and IT managers have contemplated less expensive alternative installations, including Linux. The sheer fact that people are looking for alternatives doesn’t say good things about Microsoft.

As far as alternatives go, MacOS X isn’t necessarily less expensive than Windows, but it is being considered by some as one possible replacement for Windows. Some people have already switched. MacOS X may, however, be less expensive in the long term strictly due to maintenance and repair costs. Linux can be less expensive than Windows (as far as installation software costs and continuing licenses), but it requires someone who’s knowledgeable to maintain it.

In comparison…

To compare Microsoft to another company from the past, IBM comes to mind. IBM was flying high with their PCs in the early days, but that quickly crumbled when IBM started botching things up. That, and PC clones took off. To date, there has not been a Windows OS clone to compete head-to-head with Microsoft, so Microsoft has been safe from that issue. But Linux and MacOS X do represent alternative operating systems that function quite well in their own environments. MacOS X and Linux do, however, interoperate poorly with Windows in many specific cases (primarily thanks to Microsoft).

Linux as a replacement

While it is possible to replace Windows with Linux and have a functional system, the Windows compatibility limitations become apparent rapidly. Since most of the rest of the world uses Windows, Linux doesn’t have fully compatible replacement software for the Windows world. That’s because of Microsoft’s close-to-the-vest approach to software, combined with its strategy of releasing just enough information to allow half-baked Windows compatibility. Thus, Linux (and other non-Microsoft OSes) can’t compete in a Windows world. This is a ‘glass is half empty or half full’ argument. On its own, Linux interoperates well with other Linux systems. But when you try to pair it with Windows, certain aspects just fall apart.

That doesn’t mean Linux is at fault.  What it usually means is that Microsoft has intentionally withheld enough information so as to prevent Linux from interoperating.  Note, there is no need to go into the gritty details of these issues in this article.  There are plenty of sites on the Internet that can explain it all in excruciating detail.

However, if your company or home system doesn’t need to interoperate with Windows, then Linux is a perfectly suitable solution for nearly every task (e.g., reading email, browsing, writing blogs, etc). If, however, someone wants to pass you an Adobe Illustrator file or you receive a Winmail.dat file in your email, you’re kind of stuck. That’s not to say you can’t find a workable solution with some DIY Linux tools, but you won’t find these out of the box.

This is not meant to berate Linux. It is simply a decision by Microsoft to limit the compatibility and interoperability of non-Microsoft products. That decision is deliberate; Windows is specifically and intentionally designed that way.

Microsoft’s days ahead

Looking at Microsoft’s coming days, it’s going to be a bit rough even when Windows 7 arrives. If Windows 7 is based on Vista and carries the same hardware requirements as Vista, Windows 7 won’t be any more of a winner than Vista was.

Microsoft needs to do some serious rethinking. They need to rethink not only how their products are perceived by the public, but also what they think is good for the public. Clearly, Microsoft is not listening to their customers. In Vista, Microsoft made a lot of changes without really consulting their target userbase and, as a result, ended up with a mostly disliked operating system.

Apple, on the other hand, is able to introduce new, innovative tools that simplify things instead of making life more of a hassle. Microsoft isn’t doing this.

Rocky Road

While this flavor of ice cream might be appealing, Microsoft’s road ahead won’t be quite so sweet. They are heading into a few rocky years. Combine their bad software design decisions with a bad economy and you’ve got a real problem. Microsoft’s problems, though, primarily stem from a lack of vision. The Windows roadmap is not clear. Instead of actually trying to lay out design goals for the next several revisions, Microsoft appears to be making it up as they go along… all the while hoping that the users will like it. But their designers really do not have much in the way of vision. The biggest change that Microsoft made to Windows was the Start button. That’s probably the single most innovative thing that Microsoft has done (note that the Start button is not really that great of a design anyway).

Microsoft forces everyone else to do it the Windows way

Microsoft’s main problem with Windows stems from its lack of interoperability between Windows and other operating systems. While Windows always plays well with Windows (and other Microsoft products), it rarely plays well with other OSes. In fact, Microsoft effectively forces other OSes and devices to become compatible with Windows. Apple has been the one exception to this with many of their products. Apple has managed to keep their own proprietary devices mostly off of Windows (with the exception of the iPhone and iPods). Even Apple has had to succumb to the pressures of Microsoft (with certain products) and compete in the Microsoft world even though Apple has its own successful operating system. Note, however, that Apple’s software on Windows leaves a lot to be desired as far as full compatibility goes.

Microsoft has an initiative to allow open source projects access to deeper Microsoft technologies, to allow for better compatibility between open source projects and Windows. There are two sides to this ‘access’. The first is that it does help open source projects become more compatible. On the other side, the developer must sign certain legal agreements that could put the open source project in jeopardy if Microsoft were to press those agreements. So, getting that interoperability becomes a double-edged sword.

The tide is turning

Microsoft’s somewhat dwindling Windows installations, lack of quality control and bungling of major products may lead more and more people away from Microsoft to more stable alternatives. But the market is fickle. As long as people continue to generally like Microsoft products and solutions, Microsoft will never be gone.

Note, you can follow my Twitter ramblings here.