
Migrating a “degraded” Amazon EC2 Instance

I’ve been using AWS for a few years now, and it has been rock solid. Last Sunday one of my sites became unreachable; when I got home a couple of hours later, I was able to ssh into the instance and everything seemed to be working perfectly. I checked the utmp logs and saw that the instance had been rebooted. A while later I got this email from Amazon:

From: Amazon EC2 Notification
Subject: Notice: Degraded Amazon EC2 Instance

Hello,

We have noticed that one or more of your instances are running on a host degraded due to hardware failure.

i-xxxxxx

The host needs to undergo maintenance and will be taken down at 12:00 GMT on 2010-06-23. Your instances will be terminated at this point.

The risk of your instances failing is increased at this point. We cannot determine the health of any applications running on the instances. We recommend that you launch replacement instances and start migrating to them.

Feel free to terminate the instances with the ec2-terminate-instance API when you are done with them.

Sincerely,
The Amazon EC2 Team

It sounded like they would terminate the instance because of a hardware failure, and that would be very bad – this is a high-volume eCommerce site. I looked around for the best way to “clone” the instance and relaunch it, and it turned out to be really simple. When I set up EC2 infrastructure I always put the important data on an EBS volume – /home, the MySQL storage, and most of the configuration in /etc like the Apache vhost configs. I also use an Elastic IP address so I can switch to another instance easily without modifying DNS records at all. So all I had to do was:

  • get all your AWS access keys, certs, and user ID onto the instance
  • create a folder for the AMI bundling work
  • bundle the root volume on the dying instance
    $ sudo mkdir /mnt/ami && sudo ec2-bundle-vol -d /mnt/ami -k pk-CKXXXXXXXXXXXX.pem -u 12345678 -c cert-CKXXXXXXXXXXXXXXX.pem
  • upload the bundle to S3 and register the AMI
    $ ec2-upload-bundle -b somesite-post-degraded -m /mnt/ami/image.manifest.xml -a XXXXXXXXXX -s XXXXXXXXXXXXX/00XX
    $ ec2-register somesite-post-degraded/image.manifest.xml
  • launch a new instance with the AMI
  • detach the EBS volume from the old instance
  • attach the EBS volume to the new instance
  • re-assign the Elastic IP to the new instance
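
The last few steps can be done with the same command-line tools; roughly like this (the AMI ID, keypair name, new instance ID, and Elastic IP are all placeholders):

$ ec2-run-instances ami-xxxxxxxx -k my-keypair -z us-east-1c
$ ec2-detach-volume vol-xxxxxxx
$ ec2-attach-volume vol-xxxxxxx -i i-yyyyyyy -d /dev/sdh
$ ec2-associate-address -i i-yyyyyyy 75.101.xxx.xxx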

You can do a lot of these tasks from the AWS Management Console. All of that took about two hours, and most of the time was spent waiting for the AMI to bundle and upload, as it was pretty large. Everything worked perfectly after the migration. When I set up the EC2 infrastructure I had planned for situations like this, and in theory a migration should go without a glitch, but I had never actually needed to migrate an instance. It’s good to know that everything actually worked as designed.

from ServerBeach to Amazon AWS/EC2

I’ve had a dedicated server at ServerBeach for about 5 years, and overall they were pretty solid. One time the hard drive died, but they were able to mount the dead drive in read-only mode so I could recover all my data. After they were acquired by Peer 1 there were some connectivity issues, but after a few tickets it was pretty solid again.

My ServerBeach server was so old that it was one of their EOL (end-of-life) servers, meaning they no longer stocked spare parts for it – kinda not good. Ray and I have been hosting our Ruby on Rails projects on EC2 for quite a while; OnMyList is also hosted on EC2, with software load balancing and multiple app and database instances. I had always planned to consolidate things and move my ServerBeach setup to AWS/EC2, but I never got around to it. About a month ago ServerBeach sent me an email saying they had to reassign the IP addresses on the server. That is a bit of a pain because I would have to change the DNS on all the domains, so it pretty much gave me enough incentive to move.

Ray and I started with the Intrepid Ubuntu AMI by Eric Hammond. To get a static IP, I allocated an Elastic IP address; an Elastic IP is free as long as it is associated with a running instance, and you can re-assign it to any instance. We also use Elastic Block Store (EBS) for persistent storage: the entire /home, the MySQL database, and most of the important configuration files in /etc are symlinked to the EBS volume. With EBS you can easily create snapshots for backups using the ec2-create-snapshot command included in the EC2 tools.
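
Snapshotting the volume for a backup is then a one-liner (the volume ID is a placeholder):

% ec2-create-snapshot vol-xxxxxxx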

To get an Elastic IP, just do:

% ec2-allocate-address
ADDRESS 75.101.xxx.xxx

To associate that address to an instance:

% ec2-associate-address -i i-xxxxxxx 75.101.xxx.xxx
ADDRESS 75.101.xxx.xxx i-xxxxxxx

To create an EBS volume (make sure the zone is the same as that of the instance):

% ec2-create-volume --size 50 -z us-east-1c
VOLUME vol-xxxxxx 50 us-east-1c creating 2008-10-31T18:01:29+0000

To attach the volume to an instance:

% ec2-attach-volume vol-xxxxxxx -i i-xxxxxxx -d /dev/sdh
ATTACHMENT vol-xxxxxxx i-xxxxxxx /dev/sdh attaching 2008-10-31T18:01:46+0000

Now ssh into the instance and run mkfs. I’m using ext3, but you can use whatever you want:

% yes | mkfs -t ext3 /dev/sdh
mke2fs 1.41.3 (12-Oct-2008)
/dev/sdh is entire device, not just one partition!
Proceed anyway? (y,n) Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736,
1605632, 2654208, 4096000, 7962624, 11239424
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

After this you can add a line to /etc/fstab to mount the EBS volume at a mountpoint.
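
A minimal sketch, assuming you want the volume mounted at /vol (the mountpoint and the noatime option are just my choices):

% mkdir /vol
% echo "/dev/sdh /vol ext3 noatime 0 0" >> /etc/fstab
% mount /vol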

Everything on the server was pretty much installed from APT, and we went with BIND9 for DNS. Rails apps are hosted with Passenger as usual, and God is used for process monitoring. We host the primary DNS for all our domains on the instance, and the secondaries are hosted at EditDNS. I chose EditDNS because they support SRV records, which we need for Jabber. All our domains use Google Apps; there is no reason to run my own mail server when Google provides superb reliability, spam filtering, and secure IMAP and SMTP, and the Gmail web interface is way better than any open source webmail package I’ve seen. Pretty much all the hosted domains share the same zone file, which makes DNS management super easy.

If you have BIND9 set up but the server isn’t answering remote queries, check that port 53 is open. With EC2 you have to open up ports manually; most EC2 setup instructions open port 22 for SSH and 80 for HTTP, and anything else you have to authorize yourself:

ec2-authorize default -P udp -p 53
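
UDP covers normal lookups, but zone transfers to the secondaries (AXFR) run over TCP, so you will most likely want TCP on port 53 open as well:

ec2-authorize default -P tcp -p 53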

For Rails apps we set up a deployer user to make things really simple, and the Capistrano recipe is pretty straightforward. We use remote_cache with Git to make deploys very fast.

To migrate user data over from my old server, I first created the users on the new system manually; I didn’t feel comfortable copying over /etc/passwd, the shadow files, and what not, and I also had to get rid of quite a few old accounts. After the users and home directories had been created, I used rsync over SSH to copy everything across. I ran rsync with quite a few exclusions – for example, I wanted to skip all the Maildirs and the SpamAssassin Bayesian database files, since I didn’t need them anymore and a lot of them were pretty huge. If you run rsync in archive mode (rsync -a), permissions and ownership are synced automatically to the users and groups with the same names. I also had to migrate the Apache 1 virtual host configs to Apache 2 ones, but that was pretty straightforward. SSL setup was also pretty trivial in Apache 2; I moved the hostname over and was able to reuse my InstantSSL certificate.
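
The rsync invocation looked roughly like this (the hostname and exclusions are illustrative; run it as root on the new server so ownership can be mapped):

% rsync -av -e ssh --exclude 'Maildir/' --exclude '.spamassassin/' root@oldserver:/home/ /home/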

To migrate the MySQL databases, we just did mysqldumps and loaded them on the new server. I realize this probably wouldn’t work if you were migrating a high-traffic site, but fortunately we could tolerate a few hours of downtime.
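
Something along these lines, with placeholder credentials and hostname:

% mysqldump -u root -p --all-databases > dump.sql    # on the old server
% scp dump.sql newserver:                            # copy it over
% mysql -u root -p < dump.sql                        # on the new server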

After everything was running smoothly, some of our domains still had longer TTL values and people were still going to the old IP, so I set up Pound on the old server to forward all HTTP and HTTPS traffic over to the new IP – it worked like a charm. I went with Pound because it was the only option available in APT on the old Debian sarge server. Next time I do a migration like this, I will lower the TTLs of the domains a couple of days before the migration, though even then I would be assuming resolvers obey the specified TTLs, and many probably don’t. I use OpenDNS and it was pretty easy to expire their cache manually.
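
Pound needs very little configuration for this kind of blind forwarding; a minimal sketch of the HTTP side, pointing at the new Elastic IP (an HTTPS listener would need a similar ListenHTTPS section with the certificate):

ListenHTTP
    Address 0.0.0.0
    Port    80
End

Service
    BackEnd
        Address 75.101.xxx.xxx
        Port    80
    End
End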

It’s been a few weeks and the instance has been rock solid. Amazingly, it performs much better than the ServerBeach dedicated server, but since that server was over 5 years old, that is not saying much. Cost-wise, the EC2 instance is about $20 a month cheaper than the dedicated host (I was paying about $90 a month for the server plus 2 additional IPs). Ray and I have been using EC2 heavily for over a year, so scaling horizontally with more instances is pretty trivial, and it’s far quicker than getting another dedicated server set up with a hosting provider.

$100 off Kindle if you apply for the Amazon.com Chase card

I know a couple of you guys have been wanting a Kindle and haven’t pulled the trigger yet for one reason or another. What if the Kindle could be had for only $259? Would that make your decision a bit easier? 🙂

I just got an email from Amazon saying that, for a limited time, if you get the Chase Amazon.com Rewards Visa Card you get $100 off – not a bad deal.

[Link to Kindle on Amazon]