
Joe Dog Software

Proudly serving the Internets since 1999

Shellshocked

Wired provides an interesting angle on the bash shell bug that has all your panties in a bunch.

[Brian] Fox drove those tapes to California and went back to work on Bash; other engineers started using the software and even helped build it. And as UNIX gave rise to GNU and Linux—the OS that drives so much of the modern internet—Bash found its way onto tens of thousands of machines. But somewhere along the way, in about 1992, one engineer typed a bug into the code. Last week, more than twenty years later, security researchers finally noticed this flaw in Fox’s ancient program. They called it Shellshock, and they warned it could allow hackers to wreak havoc on the modern internet.

[Wired: The Internet Is Broken]
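If you want to know whether your own bash is affected, the widely circulated one-line test for the flaw (not something specific to this site) looks like this:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

A vulnerable bash prints “vulnerable” before running the echo; a patched one just runs the echo.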




Is Hardware Outpacing Software Or Is It The Other Way Around?

Here’s an interesting experiment.

After hearing two strong players argue that the only real progress in chess engines in the last ten years was due to faster computers, a special match was played to challenge this idea. Komodo 8 ran on a smartphone while a top engine of 2006 used a modern i7 computer that runs 50 times faster. This is the difference between Usain Bolt and the Concorde. Guess what happened?




Fido 1.1.3

Your JoeDog had a requirements change. “Stupid requirements!” He had to ensure each file in a directory and all its sub-directories was less than eight days old. Unfortunately, Your Fido didn’t traverse directory trees. He stood watch only at the top of the tree.

That’s the problem with dogs: they have a mind of their own.

Without much effort, fido learned a new trick. It now recursively searches a directory for files. To leverage this feature, you’ll have to give it a command. “Recurse, boy, recurse!”

/export {
  rules = exceeds 7 days
  exclude = ^.|CVS|Makefile
  action = /usr/local/bin/sendtrap.sh
  recurse = true
}

recurse takes one of two values, true or false. True means search the tree and false means remain at the top level. If you don’t set a recurse directive, then fido will treat it as false, i.e., it will remain in the top directory.
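If you want a rough feel for what the recursive sweep does, here’s a shell analogy of the block above; the find one-liner is illustrative, not how fido is implemented:

# One-off check: run the action script on every file under /export
# older than 7 days, skipping hidden files, CVS and Makefile
find /export -type f -mtime +7 \
  ! -name '.*' ! -name 'CVS' ! -name 'Makefile' \
  -exec /usr/local/bin/sendtrap.sh {} \;

The difference, of course, is that fido stands watch continuously instead of running once.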

[Trending: Fido-1.1.3]




Linux Bare Metal Recovery With Rear

Your JoeDog loves rear! And who doesn’t, amirite?

Except it’s not that rear. It’s an acronym for Relax and Recover, a Linux bare metal recovery tool.

Your JoeDog has been using Mondo for cloning systems. It’s good software that served him well despite difficulties moving from one hardware set to another. If Your JoeDog archived sd disks and recovered to cciss, then he was knee-deep in i-want-my-lvm hell.

Rear makes those types of migrations much easier. If you archive a server using one type of disk driver and recover it to one that requires another, rear reworks the disk layout for you. It’s also configured to ignore external disks. If you archive a server connected to a SAN, rear simply ignores those multipath devices.

Like Mondo, you can archive and recover from an NFS server. Here’s a suggested configuration for NFS archiving. Place these directives inside /etc/rear/local.conf:

OUTPUT=ISO
BACKUP=NETFS
NETFS_URL=nfs://10.37.72.44/export
NETFS_OPTIONS=rw,nolock,noatime
OUTPUT_URL=file:///export

To archive the system, run ‘rear -v mkbackup’.
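Since archives like these are usually taken on a schedule, a cron entry is the natural home for that command. Here’s a hypothetical crontab line; the rear path, log file, and schedule are assumptions you’d adjust for your system:

# Weekly rear archive: Sundays at 02:00, output appended to a log
0 2 * * 0 /usr/sbin/rear -v mkbackup >> /var/log/rear.log 2>&1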

This configuration creates an ISO image called ‘rear-hostname.iso’ inside 10.37.72.44/export/hostname. To recover the server, burn that ISO onto a CD and boot the system with it. Select the Recover option, then run ‘rear recover’ at the command prompt.

“It’s that simple,” Your JoeDog said with the zeal of a recent convert. He’ll be back to bitch about rear in a couple weeks but for now it’s nothing but love….



Is A Port Number Required in the HTTP Host Header?

Well? Is it?

How’s this for a definitive answer: “Yes and no.”

We find the answer in RFC 2616 section 14.23:

The Host request-header field specifies the Internet host and port number of the resource being requested, as obtained from the original URI:

Host = “Host” “:” host [ “:” port ]

A “host” without any trailing port information implies the default port for the service requested (e.g., “80” for an HTTP URL).

So if an HTTPS request is made to a non-standard port, say 29043, then you should send a port even though the RFC doesn’t compel you to. And if you make HTTP or HTTPS requests to standard ports, then it’s probably best to omit the port string.

The above is my interpretation. I’ve maintained an HTTP client for thirteen years and this has been a point of contention. In the course of all that time, I’ve added and dropped :port from the header. Like Jason in a hockey mask, it keeps coming back. In its latest iteration, siege implements the interpretation you see above. If the port is non-standard, it appends :port to the string. If it is standard, then it simply sends the host.
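In shell pseudocode, the rule siege now follows looks something like this; it’s a sketch of the behavior described above, not siege’s actual C source:

# Print the Host header for a given scheme, host and port
host_header() {
  local scheme=$1 host=$2 port=$3
  if { [ "$scheme" = "http" ]  && [ "$port" = "80" ];  } || \
     { [ "$scheme" = "https" ] && [ "$port" = "443" ]; }; then
    echo "Host: $host"          # standard port: omit it
  else
    echo "Host: $host:$port"    # non-standard port: append it
  fi
}

host_header https www.joedog.org 29043    # prints: Host: www.joedog.org:29043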

Look for this feature in siege-3.0.8.




Siege 3.0.7 Release

Here’s the format for a Location header: Location: absolute_url

Unfortunately, many developers don’t care about standards, and Internet Exploder is famous for letting them get away with it. When siege followed the letter of the law, I was inundated with bug reports that weren’t bugs at all. If siege is confused by Location: /haha, that’s your developer’s problem, not mine. Against my better judgement and beginning with siege-3.0.6, I started constructing absolute_urls from relative paths. Alas, my parser missed a use case: localhost. Siege 3.0.6 will barf on this:

Location: http://localhost/haha_or_whatever

Technically, I didn’t miss localhost. If you look at url.c:459 you’ll see this:

// XXX: do I really need to test for localhost?

It didn’t occur to me that people would run siege on the same server as their web server. My bad. There are many tests besides load tests.
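For the curious, the resolution now amounts to something like this; it’s an illustrative shell sketch of the logic, not the code in url.c:

# Build an absolute URL from a Location header value
resolve_location() {
  local base=$1 location=$2        # base is scheme://host[:port]
  case "$location" in
    http://*|https://*) echo "$location" ;;           # already absolute
    /*)                 echo "${base}${location}" ;;  # relative path: prepend base
  esac
}

resolve_location http://localhost /haha    # prints: http://localhost/haha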

All siege users running version 3.0.6 should upgrade to siege-3.0.7.tar.gz.



It Knows Me Better Than I Know Myself….

I write a lot of software with which I interact. If it’s easy for me, then it’s easy for you. I try to keep it easy for me. JoeDog’s Pinochle is the first program against which I’ve competed. It’s been a surreal experience.

The program was designed to be competitive against me. Tonight it took two out of three games. The damn thing knows me inside and out. And why not? I wrote it. And while I can exploit some knowledge of its inner workings, I can’t predict all its behavior. It was designed to learn bidding from experience.

Bidding is the hardest aspect of this game. The team that wins the bid has an incredible opportunity to earn a lot of points. At the same time, overbids come at a large price. A failure to make the bid means the bid is deducted from your score.

When the game was first released, its bids were implemented programmatically. I like to think I’m a pretty good programmer, but that version of the game played like a moron. To improve it, I had the game play itself hundreds of thousands of times. It would store those results and use them to generate future bids.

This implementation has resulted in a much more competitive program. Now it bids more aggressively, much more aggressively. It bids like me, which is odd because I didn’t tell it to do that. I told it to learn from its experience and as a result of that experience, its personality morphed into mine.




Pinochle

Today I’m pleased to announce the first public release of JoeDog’s Pinochle. It’s a computerized version of the classic card game. It plays a four-player variation in which you are paired with a computer player against two computer players. Exciting!

This project is notable for two reasons: 1.) it’s the first time I’ve released software with a graphical interface, and 2.) it’s the first major project I’ve completed in Java.

JoeDog’s Pinochle is the culmination of hundreds of hours of work over the past several years. The groundwork was laid on planes and trains. It offered an enjoyable way to pass the time as I sat in traveling tubes. Last September I finally achieved a functioning version of the game. Since then, I’ve honed its ability to play a decent game.

Because pinochle maintains a strict set of rules for governing play, most of the “intelligence” in this game was implemented programmatically. Unfortunately, its original ability to assess and bid a hand was very weak. In order to improve that, I’ve built experience into the game. It played itself for hundreds of hours and stored those outcomes. Now when it bids a hand, it consults past experience to shape future results. A skilled human can still beat it, but give me time. It will get better with each ensuing release.

Future

I’m currently honing the game’s ability to beat the pants off you people. That may take some time. Once I’ve built adequate intelligence into the game, I’d like to add more variation. A double-deck version will be added at some later date. I’d also like to add three- and five-player variations.

License

This game is currently distributed as shareware. It contains some code that was published without licensing terms, and the author has not answered my inquiries about its license. It’s probably in the public domain, but until I get verification I can’t release it under an open source license. If you’d like a copy of the code, send me an email. Until then, enjoy the binaries.



An Old Dog Learns A New Trick

Beginning with version 3.0.6-beta2, siege reacts differently to --reps=once.

In the past, when you invoked --reps=once, each siege user would invoke each URL in the file exactly one time. If urls.txt contained 100 URLs and you ran -c10 --reps=once, siege would finish its business with 1000 hits.

That was then.

This is now: siege runs each URL in the file exactly once. If you run -c10 --reps=once, then siege will split the file among all 10 users and hit each URL one time. Whereas in the past you’d finish with 1000 hits, you now finish with 100 hits.
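Concretely, with a urls.txt containing 100 URLs (the file name here is just an example):

siege -c10 --reps=once -f urls.txt
# old behavior: 10 users x 100 URLs = 1000 hits
# new behavior: 100 URLs split across 10 users = 100 hits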

This should give you greater control by making tests more precise.



Fido Learns A New Trick

I use Mondoarchive to create Linux recovery disks. Each server writes ISO images to a shared volume on a weekly basis. If any file inside that directory is older than seven days, then a server failed to create an ISO. In order to monitor this directory for failure, I added a new feature to fido. Exciting!

Starting with version 1.1.0 (click to download), fido can monitor a file or directory to see if it, or any file inside it, is older than a user-configurable period of time. If fido discovers a file whose modification date exceeds the configured time, it fires an alert.

The following example illustrates how to configure the use case above:

/export {
  rules = exceeds 8 days
  exclude = ^.|^lccns178$|^lccns179$|^lccns335$|^lccns336$
  throttle = 12 hours
  action = /etc/fido/notify.sh
}

This file block applies to “/export”, which is a directory. Since it’s a directory, the rules apply to every file inside it. In this case ‘rules’ is pretty straightforward. We’re looking for files that exceed eight days in age. This rule will always follow this format: exceeds [int] [modifier]. The modifier can be seconds, minutes, hours or days. If you take the long view, i.e., you’re concerned about events far into the future, then you’ll have to do some math. We don’t designate years, so you’ll have to use 1825 days if you want to be alerted five years out.
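For example, all of the following are valid rules; the values are illustrative:

rules = exceeds 45 seconds
rules = exceeds 90 minutes
rules = exceeds 36 hours
rules = exceeds 1825 days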

We also find a new feature inside this block. ‘exclude’ takes a regular expression and tells fido which files to ignore. Currently, ‘exclude’ only works inside a file block with an exceeds rule, but I plan to make better use of it.

Finally, there’s a feature we’ve never seen before. The ‘throttle’ directive tells fido how long to wait between alerts. In this scenario, fido will trigger an alert the second it finds a file that exceeds eight days. If the problem is not addressed within twelve hours, it will fire another alert. Alerts will continue at twelve-hour intervals until the problem is corrected.

I hope you enjoy these features. If there are enhancements you’d like to see, feel free to contact me either in the comments or by email.