
Joe Dog Software

Proudly serving the Internets since 1999

Amazon Web Services Free Edition

(Or how to run a website on a shoestring budget)

Last fall, Your JoeDog moved this site into Amazon’s web cloud. He’s using a micro instance on the free tier. It’s free for a year then $0.017 an hour after that.

Note that “micro” part. We’re talking about a pretty lean server. When it first came online, this site screeched to a halt at semi-irregular intervals. It was running out of memory. To increase its capacity while remaining in the free tier, Your JoeDog added some swap. “How do you add swap space in AWS?” Glad you asked. Here’s how:

  $ sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024   # create a 1GB file of zeros
  $ sudo chown root:root /var/swap.1                            # owned by root
  $ sudo chmod 600 /var/swap.1                                  # readable and writable only by root
  $ sudo /sbin/mkswap /var/swap.1                               # format the file as swap space
  $ sudo /sbin/swapon /var/swap.1                               # and turn it on

You can check your creation with the free command:

  $ free -m
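
One more step worth noting: swapon only enables the file until the next reboot. To make the swap file permanent, you can append an entry to /etc/fstab; this is standard Linux housekeeping rather than anything AWS-specific:

  $ echo '/var/swap.1 swap swap defaults 0 0' | sudo tee -a /etc/fstab   # mount the swap file at boot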

By adding swap, Your JoeDog was better able to keep this site humming. Unfortunately, it still locked up. One day, it locked up for an extended period of time.

To monitor the site’s availability, we signed up for Pingdom. Its free tier lets you monitor a single URL and sends text alerts. (Email won’t do us much good since that service is hosted here.)

Not long after the alerts were configured, one fired. The site was down(ish). Downish? What’s that mean? It was more like a series of brief outages. While this was going on, Your JoeDog’s inbox started filling with new-comment-needs-approval messages.

LINK SPAMMERS!! Some asshole was botting the site with unthrottled comment posts and they essentially DOS’d it.

To free up resources, Your JoeDog created an AWS database instance and moved his content from a local database with an export/import. There’s only one reason you shouldn’t do the same: cost. After the free period, you’ll be charged for that as well.
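
If you do the same, the export/import itself is nothing exotic. Assuming a MySQL-backed site and an RDS MySQL instance (the database name and endpoint below are placeholders), it boils down to a dump and a load:

  $ mysqldump -u root -p joedog > joedog.sql                  # dump the local database
  $ mysql -h <rds-endpoint> -u admin -p joedog < joedog.sql   # load it into the RDS instance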

So what’s the moral of this story? If you can afford it, don’t waste your time on the free instance. These micro VMs are too light to handle traffic bursts. And if you’re a serious business, then you really shouldn’t bother. In the grand scheme of things, Amazon’s computing-for-lease is really inexpensive … except, of course, if you’re a lowly open source developer.




Robots Schmobots

Total Factor Productivity

Total factor productivity measures residual growth that cannot be explained by production inputs. Its level is determined by the efficiencies of labor and capital in production. Huh? What are you talking about? Basically, it’s a measure of productivity attributed to technological or organizational improvements. Then why didn’t you say that?

As you can see in this chart Your JoeDog cooked up with FRED, total factor productivity has leveled off since the late 1990s. In other words, the pace at which humans are replaced by robots has slowed during the Internet Age. More people lost jobs to automation in the 80s and 90s than lose them now. That’s comforting … I guess.

So all the angry monkeys pecking CAPSLOCKED rants about automation in the comments sections are simply displaying ignorance. Automation isn’t their enemy – it’s bumbling economic stewardship.

There’s no reason to believe that employment won’t return to its 1990s levels if policy makers increase aggregate demand or make labor scarcer, i.e., spend money or enact labor laws. Unfortunately, the people George Carlin referred to as “the owners of this country” oppose both measures. Blame them, not robots.




Will Artificial Intelligence Take Over The World?

Some are concerned that self-improving artificial intelligence will destroy the world. Stephen Hawking thinks it could destroy mankind. Elon Musk recently tweeted, “I hope we’re not just the biological boot loader for digital superintelligence.” The Guardian thinks intelligent machines will disrupt jobs while leaving humans physically unharmed.

The idea is this: we build robots with smart digital brains. Those robots use those brains to build even smarter brains. Once they achieve recursive improvement, robot intelligence will rapidly advance. The most advanced species on the planet won’t be a species at all – it will be a line of superintelligent robots. Welcome to the technological singularity.

People fear these machines will one day turn on humans and destroy mankind. That could occur in one of two ways: 1.) They intentionally destroy us or 2.) They unintentionally destroy us.

The first scenario requires a goal. To intentionally destroy us, robots must want to destroy us. They’ll need a test case to measure improvement: “This feature kills better than the old one.” But computers don’t feel emotions. They’re not driven by love or hate. If they’re driven to kill humans, it’s only because we programmed that feature in the first place. We could be that stupid, but this seems improbable.

The second scenario seems more likely. Self-improving artificial intelligence could be programmed to reverse global warming. When the robot is ready to go live with its fix, it had better get it right or it could render the planet inhospitable to life. In other scenarios, robots could deprive humans of resources as they work tirelessly to achieve a goal.

But what if robots achieve something akin to emotion? If they can set their own goals or follow their own “interests”, then who knows where this technology will go. They may devote resources to solving math problems or they might hunt humans for sport. Either way, if technology destroys humanity, we’ll have only ourselves to blame….




CTR Is Hard

Sproxy is a word Your JoeDog invented to describe his [S]iege [Proxy]. At the time of this writing, this site has the top three positions for ‘sproxy’ on Google. In the past week, nine hundred people typed ‘sproxy’ into the Google machine. Of those nine hundred, only 110 clicked a link to this site. That’s a 12.22% click-through rate for a made-up word that describes an esoteric piece of software that exists right on this very site. Let’s just say that falls a little below expectation….




Nerd Splaining Large Numbers

Holy shit — the Economist really outdid itself. What now? In this post, it explained why Gangnam Style will break YouTube’s view counter. It used 3,726 characters and 612 words to explain that computer integers don’t go on forever. When the Gangnam Style counter reaches 2,147,483,647, it will stop counting. Why?

Integers are stored as a series of ones and zeros. On a 32-bit platform, a counter only gets 32 consecutive ones or zeros to hold its value. Go to this binary to decimal calculator and put 32 ones in the binary field. Press “Calculate” and you’ll get this answer: 4294967295.

But the Gangnam Style counter maxes out at half of that? How come? That’s because the counter is a signed integer: computers need negative numbers too, so one bit is reserved for the sign. The range straddles zero, running from -2,147,483,648 to 2,147,483,647, and Gangnam Style is approaching the upper bound.
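
You can see both bounds from a shell prompt. Bash does its arithmetic in 64 bits, so these expressions have plenty of headroom:

  $ echo $(( (1 << 32) - 1 ))   # 32 ones in binary: the unsigned 32-bit maximum
  4294967295
  $ echo $(( (1 << 31) - 1 ))   # 31 ones plus a sign bit: the signed 32-bit maximum
  2147483647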

If YouTube switched the counter to a 64-bit integer, it could capture up to about 9.2 quintillion views.
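
That 64-bit ceiling can be coaxed out of bash as well; it’s computed in two halves here so the arithmetic itself stays within bash’s 64-bit range:

  $ echo $(( (1 << 62) - 1 + (1 << 62) ))   # 2^63 - 1, the signed 64-bit maximum
  9223372036854775807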

Remember kids, there are 10 kinds of people in this world. Those who understand binary numbers and those who don’t.

[Economist: Wordy Word Words on Computer Integers]




Why Do Investors Love Amazon?

What’s happening at Amazon isn’t supposed to happen in modern finance. Shares are rising as profits are falling:

Amazon shares are up around 150 percent since mid-2010, which perhaps not coincidentally was the last time the company had sizable profits. In other words, investors really decided they loved the company only when net income began to slide.

Any fool can run a profitable company but it takes a gutsy person to build the world’s largest retailer….

[New York Times – All Amazon Is Missing Is a Profit]



Gub’mint (IT) Mule

Sean Gallagher has an interesting piece on (ars)technica. He asks, “Why do government IT projects fail so hard and so often?” Gallagher provides several reasons, most of which are symptoms of a large organization. Let’s examine that list.

1. The government uses antiquated technologies. Its bureaucracy is slow to move and slow to adapt. Older technologies remain long after their life cycles expire, largely because the approval process for new ones is long and arduous. You can imagine many frustrating meetings that end with, “Fsck it. We’ll put it on XP.”

2. Its user base is really large. Gallagher cites as one example a DOD email rollout that touched 1.5 million users. That’s an astonishing number for an in-house IT department. Certainly there are web companies with more users — there are 425 million Gmail accounts, for instance — but Google does web for profit. The Army’s IT department is an expense.

3. Flawed metrics. Gallagher notes that many government IT dashboards are filled with nice metrics that contain a lot of nines. Unfortunately, those nines have little bearing on end user experience. If department CIOs measured things that mattered, those dashboards would be filled with zeros.

I’m sure these are valid criticisms, but how do they differ from other large organizations? I work for a large corporation, and this week my company finally moved my laptop off Windows XP. There’s no way the entire organization will be XP-free by the end of XP’s life cycle. It took a Great Recession to convince management that Linux was a viable alternative to HP-UX.

Our user base is 15,000, and every internal roll-out comes with some glitches or problems. The Army rolls out applications to a user base that is two orders of magnitude larger than ours. Where we roll out in a carefully controlled environment, they have to provide service to all corners of the world. Some of those corners are pre-fab barracks on an Afghan mountaintop.

I’ve yet to meet a person in my company who likes our out-sourcing partner. The bean counters like how little they cost, but they’re not happy with their services either. Yet if you look at this partner’s dashboard, you’ll find it’s filled with as many nines as those government CIOs’ dashboards. Faulty metrics aren’t limited to the public sector.

The Affordable Care website famously crashed during its rollout. Yet on the surface it wasn’t subject to many of government IT’s shortcomings. Most of the work was handled by a private partner. They used Apache web servers on Linux. The site was fronted by the Akamai CDN, which greatly reduces load by moving content closer to users. Most importantly, the site wasn’t tied to antiquated government infrastructure. Yet it failed. Why?

When you examine the site, you find it’s simply not optimized for heavy traffic. The pages are too heavy and they contain too many elements. Fifty-six JavaScript files? Really? Then we learn the system was tested under load that was an order of magnitude less than what it received on October 1st. The site was basically slashdotted.
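
In fairness, testing at something closer to real-world load isn’t hard to arrange. Siege, the load tester hosted on this very site, will happily simulate a few hundred concurrent users; the URL and file names below are placeholders:

  $ siege -c 250 -t 10M https://staging.example.gov/   # 250 concurrent users for ten minutes, one URL
  $ siege -c 250 -t 10M -i -f urls.txt                 # or random URLs drawn from a file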

Certainly there are good failures and bad failures. Collapsing under the weight of your own popularity is a good one. Still, with better planning and better coding, the Affordable Care site could have had a more successful roll-out. Those operations require a high level of expertise, which brings us to what I suspect is the real reason government IT projects fail: constrained by the taxpayer’s dime, government can’t attract the talent necessary to service a very large user base. In other words, we get what we pay for.



A Contemporary Technology Catches Up With Ancient Rome

It’s generally accepted that the contemporary world is more technologically advanced than the ancient one. The Etruscans may have dreamed of space travel, but they were unable to transport themselves to Schenectady, New York, let alone the moon. Yet we can’t be too smug. Sure we carry the Internets in our pockets and heat our meals in seconds, but we can’t touch ancient Rome when it comes to concrete.

Throughout the Mediterranean basin, there are ancient harbors constructed with 2,000-year-old Roman concrete that remain more or less in perfect working condition. And as we gaze upon the remnants of the ancient world, we see aqueducts, roads and buildings that have survived remarkably well over time. When we compare these structures with our own, we find contemporary concrete sadly lacking.

Roman concrete was superior to our own and now scientists understand why:

The secret to Roman concrete lies in its unique mineral formulation and production technique. As the researchers explain in a press release outlining their findings, “The Romans made concrete by mixing lime and volcanic rock. For underwater structures, lime and volcanic ash were mixed to form mortar, and this mortar and volcanic tuff were packed into wooden forms. The seawater instantly triggered a hot chemical reaction. The lime was hydrated — incorporating water molecules into its structure — and reacted with the ash to cement the whole mixture together.”

The Portland cement formula crucially lacks the lime and volcanic ash mixture. As a result, it doesn’t bind quite as well when compared with the Roman concrete, researchers found. It is this inferior binding property that explains why structures made of Portland cement tend to weaken and crack after a few decades of use, Jackson says.