AppleCare is the best!

Published on 06/16/09

I have had my iPhone for about six months and have loved every minute of owning it.

Before yesterday, I had called AppleCare twice to get my earbuds replaced (I had blown out one of the speakers each time). Both times they overnighted a new pair to me.

This last weekend my iPhone started acting wonky. I would press the volume-down button and the phone would act as if I had flipped the silent switch off. If the phone was in silent mode, the volume-down button would not work at all.

Also, when the phone was in sleep mode, the screen would come on every 30 seconds as if I had pressed one of the buttons. The screen would shut off after 7 seconds (as it is supposed to) but would come back on 30 seconds later.

I had gone through and erased everything, doing a complete restore several times (without backing anything up). Even with the phone completely wiped it was still behaving the same. So I knew it wasn’t a software issue. The phone hadn’t been dropped or damaged. It just went on the fritz.

So I called AppleCare yesterday morning and explained the problem. Since I don’t live anywhere near an Apple Store and I didn’t buy the phone at Best Buy, calling AppleCare was my only option.

This morning I got my brand-spankin’-new phone. Everything works and I am happy. Apple put a hold on my credit card to cover the cost of the phone in case I don’t send the old one back (at their expense), but that wasn’t a big deal. I highly recommend Apple and their support and warranty coverage. It has been a very easy ownership experience!

I want lots of spam

Published on 06/12/09

I have never had trouble with spam on any of my email accounts. I don’t know why; I just never have. I never sign up for anything fishy online. I never publish my email addresses online. I guess spammers just don’t know my email addresses exist (I have several).

But now I am in the process of setting up spam filtering on my email server because other email accounts on the server are getting lots of spam. The problem is I have no real way of judging how effective it is because I don’t get any spam.

So I set up an email account at emailtest@anideaweb.com that I am publishing every place I can think of. I have already used emailtest@anideaweb.com to sign up for several free-iPod schemes. And I am publishing emailtest@anideaweb.com here on the internet for all the spam bots in the world to see and enjoy.

Please, Mr. Spammer, send some email to emailtest@anideaweb.com. I would love to hear all about the special offers I could get. I would love to help you collect your relative’s inheritance since you are stuck in Africa and can’t come here to claim it. I’m not really interested in anything dirty, but I would sincerely appreciate your thinking of me in your offerings. Please, send email to emailtest@anideaweb.com. I am waiting for you.

UPDATE One note about this: if you try something similar, don’t give out a real phone number. I have been doing this for all of 15 minutes and I have already gotten my first call. Fortunately, I gave my Google Voice number, so I can flag it as spam (though I guess it technically isn’t spam since I gave it to them).

UPDATE #2 (6/16/09) I got 44 spam emails yesterday (up from 24 the first day). Today isn’t even half over and I am up to 33. This is awesome! Now, if I could just get Postfix to communicate properly with amavisd-new…
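In case anyone is fighting the same Postfix-to-amavisd-new wiring, the stock setup from the amavisd-new docs uses a content_filter plus a loopback re-injection listener. This is a sketch of that standard arrangement, assuming amavisd-new listens on port 10024 and Postfix re-accepts filtered mail on 10025 (adjust ports and limits to your install):

```
# main.cf -- hand incoming mail to amavisd-new
content_filter = smtp-amavis:[127.0.0.1]:10024

# master.cf -- the filter transport and the re-injection listener
smtp-amavis unix  -    -    n    -    2    smtp
    -o smtp_data_done_timeout=1200
    -o disable_dns_lookups=yes
127.0.0.1:10025 inet n    -    n    -    -    smtpd
    -o content_filter=
    -o mynetworks=127.0.0.0/8
    -o smtpd_recipient_restrictions=permit_mynetworks,reject
```

The empty content_filter override on the 10025 listener is what keeps filtered mail from looping back through amavisd-new a second time.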

Starbase Atlantis

Published on 06/11/09

My son was able to take part in a program on the navy base called Starbase Atlantis, a week-long science day camp.

We made a thank you page here.

Being a work-from-home dad

Published on 05/21/09

Being a work-from-home dad is challenging. The hardest part is separating work from home, and I am not just talking about work hours from home hours. The fact is—I am always at work. Even if I am not actually doing work at the time, chances are I am thinking about it or planning something related to it.

When I had a 9 to 5 job it was easy to separate work from home. I never really thought about work while I was at home; it stayed at the office. (Although, to be honest, the home hours of my last year and a half at my 9 to 5 job were spent working on things in preparation for going out on my own.)

In this new world though (I mean new to me), it is way too easy to get caught up in work all the time, and not switch the work brain off when I should be focusing on my family. I was reminded of that fact this morning while cleaning off my (work) desk.

Logan and I are finishing up his first full year of home-schooling today (that would be another long post for another time) and I found a note that my youngest son left for me some weeks back. To my shame, I didn’t really read it when he gave it to me, but I did today and I was smitten in my conscience. I posted the note for you below.

Just a little explanation—my wife’s work hours are going to be reduced this summer from what they are during the school year. She is planning on taking the kids to the beach as often as she can. There are five in our family.

[Attached image: the note from my son]

Sniffle.

Rolling restarts for mongrel_cluster_ctl

Published on 04/14/09

I have been doing a lot of updating on Net-at-hand over the last couple of months while working on the plugin architecture.

Before the server upgrade I did recently, restarting my cluster of mongrels was kind of dicey. I was using so much swap, I guess, that some of the ports would refuse to restart and would just hang. Often I would have to go in and kill the processes by hand, clear out my pids folder, and start it back up. Needless to say, I am sure there were many requests that saw the old “We’re down for maintenance” sign.

Upgrading the server and fixing some of the memory leaks fixed most of that. The mongrels restart almost instantly now, but I was still getting dropped requests because all the mongrels would stop together and then all restart together.

After some googling, I found this patch for mongrel_cluster_ctl that restarts the mongrels one at a time, so requests are still passed along to the working mongrels while one is being taken care of. I tried it for the first time this morning and I am smiling. I did a smattering of page reloads while restarting and only one was a little slow (while the mongrel finished loading). Not one was turned away.
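The idea behind the patch can be sketched in a few lines of Ruby. The stop/start callbacks here are stand-ins that just record the order of operations; in a real cluster they would shell out to mongrel_rails with the right pid file. This is my sketch of the approach, not the patch itself:

```ruby
# Rolling restart: cycle one mongrel at a time so the others keep serving.
PORTS = [8000, 8001, 8002]

def rolling_restart(ports, stop:, start:)
  ports.each do |port|
    stop.call(port)   # real version: `mongrel_rails stop -P tmp/pids/mongrel.#{port}.pid`
    start.call(port)  # real version: `mongrel_rails start -d -p #{port} -P ...`
  end
end

# Stand-in callbacks that log what happens, and in what order.
log = []
rolling_restart(PORTS,
                stop:  ->(p) { log << [:stop, p] },
                start: ->(p) { log << [:start, p] })
# At no point are two mongrels down at once.
```

The whole trick is just sequencing: each port is fully restarted before the next one is touched, so the proxy always has live backends to send requests to.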

Now, a couple of issues remain with this approach.

  • One is that I can’t perform database migrations with this approach. I can’t have old versions of the application still running once the new database schema is in play. I would rather people see the “site maintenance” page than the “oops!” page.
  • The second issue has to do with my front-end server, nginx. Right now, I am running six mongrel instances and nginx is proxying requests to those mongrels. However, nginx uses a round-robin proxying strategy, basically just going down the list in order. This has generally worked ok for me, but if nginx sends a request to a mongrel that is being restarted (rather than just going to the next one), I imagine the request would get dropped. I am planning to fix this in the not-too-distant future, because nginx’s proxying also creates issues when someone is uploading a large file (which seems to be happening more regularly now!).
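For that second issue, nginx can be told to skip a backend that refuses connections and retry the request on the next one, which would cover the brief window while a mongrel restarts. A sketch, assuming mongrels on ports 8000–8005 (the directives are real nginx ones; the ports and timings are illustrative):

```
upstream mongrels {
    # After one failed connection, skip this backend for a couple of seconds.
    server 127.0.0.1:8000 max_fails=1 fail_timeout=2s;
    server 127.0.0.1:8001 max_fails=1 fail_timeout=2s;
    server 127.0.0.1:8002 max_fails=1 fail_timeout=2s;
    # ...and so on for the remaining three mongrels
}

server {
    listen 80;
    location / {
        proxy_pass http://mongrels;
        # On a connection error or timeout, retry on the next backend.
        proxy_next_upstream error timeout;
    }
}
```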

RMagick and memory leaks

Published on 04/04/09

Last night I upgraded the slice that Net-at-hand is running on because my mongrels kept leaking memory. I was running four of them at the time and I would invariably get one or two that would spike to over 140MB and that would bog the whole system down.

I knew that RMagick had something to do with it, but since I was dutifully calling GC.start whenever I used it, I figured it would all work out in the end.

Well, this morning I checked on my mongrels (I am now running six of them) and they were all humming along fine, each taking up about 90MB of RAM. I decided to try an image upload to see how it would work. After uploading a 6MB image, one of the mongrels spiked to 150MB and stayed there. That was not good! If two or three users uploaded images at the same time, I would end up going into swap again, and I couldn’t have that.

After some more googling, I found this article, which introduced me to the destroy! method that can be called on images. When you are done with an image, you can destroy it, and the memory it used is released.

I tried calling it on the Magick::ImageList first, but it did not help. I ended up calling it on each image, and that fixed the problem.
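The pattern looks roughly like this. This is a sketch, assuming RMagick 2.x; the filenames and the resize step are illustrative, not what Net-at-hand actually does:

```ruby
require 'rmagick'  # assumption: RMagick 2.x; older versions may lack destroy!

list  = Magick::ImageList.new('upload.jpg')
thumb = list.first.resize_to_fit(200, 200)
thumb.write('thumb.jpg')

# GC.start was not reliably releasing ImageMagick's memory here;
# destroy! frees the pixel data immediately, per image.
thumb.destroy!
list.each { |img| img.destroy! }
```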

I had done a bunch of googling for rmagick memory leaks and hadn’t found much (admittedly I did not spend hours looking through the results list). Hopefully, this will help you if you are having the same trouble I was.

UPDATE
I am really excited about the results of this fix. It is working flawlessly. Just as an example, I set up the system to process a 68MB image. When I did, the mongrel handling it climbed to over 200MB of RAM usage, but it dropped right back down to normal. GC.start was not completely fixing problems like this on my system. I am not sure why, because from what I understand, it should have. But now the problem is gone. Praise the Lord!

Things seem to be going so well, actually, that I could probably downgrade back to the smaller slice, but I am not going to. With the introduction of a plugin system on Net-at-hand I don’t want to take any chances.

A cheap application server for rails

Published on 03/19/09

When I got started as a freelancer back in 2007, I spent a little time developing a client website with Rails that I could use to generate invoices for completed work. The app had been running on the same server that Net-at-hand is running on, which was fine until I upgraded Net-at-hand to Rails 2.2 a couple of months ago.

After the upgrade, I found myself looking for a way to get aib (an idea billing) off that server. I didn’t want to move it to my email server because I didn’t want the expense of upgrading that server.

I have a server here at home (a hand-me-down Quicksilver G4) that I thought of using, but after some experimenting, I discovered that Cox, my ISP, blocks port 80 for inbound traffic. So I thought I was out of luck.

After a couple of days, it hit me that I could just use my server at home as an application server, with requests going to my Net-at-hand server and nginx proxying them to my application server on port 3000.

Obviously, for a production setup of a major site, this might not be the best approach; it definitely increases bandwidth usage. But my billing site doesn’t get many visitors, and the website’s responsiveness has definitely improved since the move. And I saved money in the process.
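The nginx side of that setup is just a reverse proxy. Here is a sketch of what the server block might look like; the hostnames are placeholders, not my actual setup:

```
server {
    listen 80;
    server_name billing.example.com;

    location / {
        # Forward everything to the Rails app running at home on port 3000.
        proxy_pass http://home-server.example.com:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```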

UPDATE
I’ve been thinking about this some more and I realized that the data going between my web server and my home-made application server is probably not encrypted. I am sure that I could remedy this, but I probably won’t any time soon. The information passing through is not really sensitive. I guess if one of my clients wants to pay a hacker thousands of dollars to find out how to erase an invoice of a couple hundred dollars, I should just let them have it.