Tag: process management

Un-killable Process

Scott had a Dolphin instance veg out on him. Not much for it other than killing the process. Except it didn’t die. Now SIGTERM (15) I don’t expect to kill a vegged-out process, but SIGKILL (9)? I’ve never seen that fail. A little research later, and I’ve discovered “uninterruptible sleep”. Which, at 11 PM, is starting to sound really good to me. But not something I associate with computer programs. Essentially, processes that are waiting on I/O briefly pop into this state and pop back out when the I/O operation completes. While a process is in that state, the kernel doesn’t deliver signals to it; even SIGKILL just sits pending until the I/O call returns. Code needs to have timeouts to prevent the application from getting stuck waiting on I/O. And, evidently, Scott has discovered a scenario in which Dolphin does not have a timeout.

How can you tell that your process is stuck in uninterruptible sleep? Use “ps u” (or “ps aux” for all processes) and check the “STAT” column for a “D”.

From “man ps”:

PROCESS STATE CODES
Here are the different values that the s, stat and state output
specifiers (header "STAT" or "S") will display to describe the state of
a process:

    D    uninterruptible sleep (usually IO)
    I    Idle kernel thread
    R    running or runnable (on run queue)
    S    interruptible sleep (waiting for an event to complete)
    T    stopped by job control signal
    t    stopped by debugger during the tracing
    W    paging (not valid since the 2.6.xx kernel)
    X    dead (should never be seen)
    Z    defunct ("zombie") process, terminated but not reaped by its parent

For BSD formats and when the stat keyword is used, additional
characters may be displayed:

    <    high-priority (not nice to other users)
    N    low-priority (nice to other users)
    L    has pages locked into memory (for real-time and custom IO)
    s    is a session leader
    l    is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
    +    is in the foreground process group
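
If you’d rather not eyeball the STAT column by hand, a few lines of Python can shell out to ps and report only the D-state processes. This is a rough sketch: it assumes the “-eo pid,stat,comm” column selection, which is standard in procps ps, and it just prints what it finds.

import subprocess

# '-e' selects every process; '-o pid,stat,comm' limits the output to the
# three columns we care about.
output = subprocess.check_output(['ps', '-eo', 'pid,stat,comm'], text=True)

for line in output.splitlines()[1:]:  # skip the header row
    pid, stat, comm = line.split(None, 2)
    # The first character of STAT is the state code; anything after it is
    # one of the BSD-format modifiers from the list above.
    if stat.startswith('D'):
        print(pid, stat, comm)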

Rebooting clears the process (as does sorting out whatever is blocking the I/O operation). But, it turns out, there are processes that “kill -9” won’t terminate.

New Process Police

As an operational support group, we did not have a software development methodology. That doesn’t mean we didn’t develop software; one of the great things about operational support is the ability to automate day-to-day tasks to reduce workload. Why have someone check for application patches when a process can watch an RSS feed or file repository and notify us when there’s an update? Why have someone clickity-click provisioning users into groups when the user can make a web request, the group owner can approve the request, and an automated process can add the user to the group? The end result of our automation programming is, well, quite a bit of software.

And with a small number of people, informal application development worked. It wasn’t ideal, but it worked. If you want to write in Java while I use C# … not ideal, but the alternative is that one of us needs to learn a new programming language. The problem is that the next guy we hired uses VBS, the guy after that uses PowerShell … and I’ll use perl for simpler processes. Then someone starts tweaking my code and buggers it up … and we’ve got to figure out what happened and roll back from some tape backup.

To get our internal software development organized, I developed a process and ran a training session so everyone was familiar with both the process and the tools. Some of us have used the process well: don’t edit production code, clone the repo locally, make a branch for your edits, test it and have another group member sign off on the changes, merge your branch back into master, test more, then pull the code into production. The majority, it seems, have not followed the process at all. Changes are made to the code running in production and never incorporated into the Git repo. Six months after the new development process was put in place, half of our code contains improperly made changes!

To an extent, I consider this a management problem … if the department doesn’t want software development to be a free-for-all, then the department managers need to ensure their staff follows the process. If the department wants everyone to do their own thing, then get rid of the process and declare our methodology to be “do whatever you want”. The challenge for managers, though, is that they don’t know when someone has edited code in production and failed to commit their changes to the repo. If only there were some way to watch for improperly edited code and alert us promptly.

Other scripts I’ve found that perform a similar function attempt to parse ‘git status’ output to identify all sorts of issues, but that doesn’t address the specific problem I’ve got. To identify offenders, I wrote a quick Python script that searches a directory tree for git repositories and alerts us when changes have not been staged for commit. Changes that have already been staged won’t be flagged, but for the particular problem we encounter most frequently … there are alerts for that.
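
That script isn’t reproduced here, but the core idea looks something like the sketch below: walk the tree for .git directories and let git’s own exit code flag unstaged modifications (“git diff --quiet” exits non-zero when tracked files differ from the index). The scan root and the print-based alert are placeholders, not the real paths or notification mechanism.

import os
import subprocess

def find_git_repos(root):
    """Walk the directory tree, yielding every directory that contains a .git folder."""
    for dirpath, dirnames, _filenames in os.walk(root):
        if '.git' in dirnames:
            yield dirpath
            dirnames.remove('.git')  # no need to descend into git's metadata

def has_unstaged_changes(repo_path):
    """'git diff --quiet' exits 1 when tracked files have unstaged modifications."""
    result = subprocess.run(['git', '-C', repo_path, 'diff', '--quiet'])
    return result.returncode != 0

if __name__ == '__main__':
    # placeholder scan root -- point this at wherever the production code lives
    for repo in find_git_repos('/path/to/production/code'):
        if has_unstaged_changes(repo):
            # the real script sends a notification; printing stands in here
            print('Unstaged changes in ' + repo)

Catching staged-but-uncommitted work would just be a second call with “git diff --cached --quiet”, which is exactly the case noted above that this check deliberately ignores.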