
Joe Dog Software

Proudly serving the Internets since 1999

An Old Dog Learns A New Trick

Beginning with version 3.0.6-beta2, siege reacts differently to --reps=once.

In the past, when you invoked --reps=once, each siege user would invoke each URL in the file exactly one time. If urls.txt contained 100 URLs and you ran -c10 --reps=once, siege would finish its business with 1000 hits.

That was then.

This is now: siege runs each URL in the file exactly once. If you run -c10 --reps=once, then siege will split the file among all 10 users and hit each URL one time. Whereas in the past, you’d finish with 1000 hits, you now finish with 100 hits.
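To put numbers on that, here’s a quick sketch (urls.txt is the 100-URL file from the example above):

siege -c10 --reps=once -f urls.txt
# 3.0.5 and earlier: every user runs the whole file, 10 x 100 = 1000 hits
# 3.0.6-beta2 and later: the file is split among the users, 100 hits in total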

This should give you greater control by making tests more precise.



Why Do Investors Love Amazon?

What’s happening at Amazon isn’t supposed to happen in modern finance. Shares are rising as profits are falling:

Amazon shares are up around 150 percent since mid-2010, which perhaps not coincidentally was the last time the company had sizable profits. In other words, investors really decided they loved the company only when net income began to slide.

Any fool can run a profitable company but it takes a gutsy person to build the world’s largest retailer….

[New York Times – All Amazon Is Missing Is a Profit]



Fido Learns A New Trick

I use Mondoarchive to create Linux recovery disks. Each server writes ISO images to a shared volume on a weekly basis. If any file inside that directory is older than seven days, then a server failed to create an ISO. In order to monitor this directory for failure, I added a new feature to fido. Exciting!
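If you just want to eyeball that condition from a shell, a rough equivalent with find (assuming the ISO images land under /export, as in the configuration below) looks like this:

# list files under /export whose contents haven't changed in more than seven days
find /export -type f -mtime +7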

Starting with version 1.1.0, fido can monitor a file or directory to see if it — or any file inside it — is older than a user-configurable period of time. If fido discovers a file whose modification date exceeds the configured time, it fires an alert.

The following example illustrates how to configure the use case above:

/export {
  rules = exceeds 8 days
  exclude = ^\.|^lccns178$|^lccns179$|^lccns335$|^lccns336$
  throttle = 12 hours
  action = /etc/fido/notify.sh
}

This file block applies to “/export”, which is a directory. Since it’s a directory, the rules apply to every file inside it. In this case ‘rules’ is pretty straightforward: we’re looking for files that exceed eight days in age. The rule always follows this format: exceeds [int] [modifier]. The modifier can be seconds, minutes, hours or days. If you take the long view — if you’re concerned about events far into the future — then you’ll have to do some math. We don’t designate years, so you’ll have to use 1825 days if you want to be alerted five years out.
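For instance, a block that watches an archive directory for anything more than five years old might look like this (the path here is made up; the rules line is the point):

/some/archive {
  rules = exceeds 1825 days
  action = /etc/fido/notify.sh
}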

Back in the /export block we also find a new feature: ‘exclude’ takes a regular expression and tells fido which files to ignore. The pattern above skips hidden dot files along with a handful of specifically named entries. Currently, ‘exclude’ only works inside a file block with an exceeds rule, but I plan to make better use of it.

One last feature rounds out the block: the ‘throttle’ directive tells fido how long to wait between alerts. In this scenario, fido will trigger an alert the second it finds a file that exceeds eight days in age. If the problem is not addressed within twelve hours, it fires another alert. Alerts continue at twelve-hour intervals until the problem is corrected.
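The action is simply a script that fido runs when an alert fires. The real /etc/fido/notify.sh isn’t shown here, but a minimal stand-in could do nothing more than send mail; the address and wording below are purely illustrative:

#!/bin/sh
# bare-bones fido action: mail a fixed warning to the admin
echo "fido: a file under /export is more than 8 days old" | \
  mail -s "ISO backup check failed" root@localhost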

I hope you enjoy these features. If there are enhancements you’d like to see, feel free to contact me either in the comments or by email.



Gub’mint (IT) Mule

Sean Gallagher has an interesting piece on Ars Technica. He asks, “Why do government IT projects fail so hard and so often?” Gallagher provides several reasons, most of which are symptoms of a large organization. Let’s examine that list.

1. The government uses antiquated technologies. Its bureaucracy is slow to move and slow to adapt. Older technologies remain long after their life cycles expire, largely because the approval process for new ones is long and arduous. You can imagine many frustrating meetings that end with, “Fsck it. We’ll put it on XP.”

2. Its user base is really large. Gallagher cites as one example a DOD email rollout that touched 1.5 million users. That’s an astonishing number for an in-house IT department. Certainly there are web companies with more users — there are 425 million Gmail accounts, for instance — but Google does web for profit. The Army’s IT department is an expense.

3. Its metrics are flawed. Gallagher notes that many government IT dashboards are filled with nice metrics that contain a lot of nines. Unfortunately, those nines have little bearing on end-user experience. If department CIOs measured things that mattered, their dashboards would be filled with zeros.

I’m sure these are valid criticisms, but how do they differ from those of any other large organization? I work for a large corporation, and this week my company finally moved my laptop off Windows XP. There’s no way the entire organization will be XP-free before the end of XP’s life cycle. It took a Great Recession to convince management that Linux was a viable alternative to HP-UX.

Our user base is 15,000, and every internal roll-out contains some glitches or problems. The Army rolls out applications to a user base that is two orders of magnitude larger than ours. Where we roll out in a carefully controlled environment, they have to provide service to all corners of the world. Some of those corners are pre-fab barracks on a mountain top in Afghanistan.

I have yet to meet a person in my company who likes our out-sourcing partner. The bean counters like how little it costs, but even they aren’t happy with its services. Yet if you look at this partner’s dashboard, you’ll find it filled with as many nines as those government CIOs’ dashboards. Faulty metrics aren’t limited to the public sector.

The Affordable Care Act website famously crashed during its rollout. Yet on the surface it wasn’t subject to many of government IT’s shortcomings. Most of the work was handled by a private partner. They used Apache web servers on Linux. The site was fronted by the Akamai CDN, which greatly reduces load by moving content closer to users. Most importantly, the site wasn’t tied to antiquated government infrastructure. Yet it failed. Why?

When you examine the site, you find it’s simply not optimized for heavy traffic. The pages are too heavy and they contain too many elements. Fifty-six JavaScript files? Really? Then we learn the system was tested under a load an order of magnitude less than what it received on October 1st. The site was basically slashdotted.

Certainly there are good failures and bad failures. Collapsing under the weight of your own popularity is a good one. Still, with better planning and better coding, the Affordable Care site could have had a more successful roll-out. Those operations require a high level of expertise, which brings us to what I suspect is the real reason government IT projects fail: constrained by the taxpayer’s dime, government can’t attract the talent necessary to serve a very large user base. In other words, we get what we pay for.