Category Archives: Tech

WordPress image lazy-loading plugin

Recently I’ve been spending a lot of time on a new Rails project that deals with photos and uses a lot of jQuery. I love the jQuery image lazy load plugin, and since my blog is pretty image-heavy, I figured it would benefit from it too. The plugin is trivial to use, but since my blog tracks WordPress via Subversion, I wanted to package the change as a WordPress plugin so I wouldn’t have to touch the WordPress code itself. After 15 minutes or so I hacked together a plugin for this.

You can download it at the plugins directory at WordPress. It’s also available on Github.

It took a day to get commit rights to the WordPress plugins Subversion repo. Since I use Git and had already pushed to Github, it was a bit of an annoyance to add the empty Subversion repo with git-svn. I won’t go into the steps, but this thread helped a lot. I must say that since it’s a shared Subversion repo, the revision number started at 119065, so git svn fetch took a very long time; I spent almost an hour just getting to the point where I could do git svn dcommit. And now every dcommit takes a while, because it pretty much has to compute the diffs for all the recent commits and apply them to Subversion. After using Git for so long I had forgotten how painful Subversion is.

Just added Facebook Connect to this blog

[Screenshot of Facebook Connect on this blog, uploaded with plasq’s Skitch]

Sociable released a WordPress plugin that links WordPress blogs to Facebook with Facebook Connect. I just installed it; it took about 5 minutes and it works great. It lets you link your FB account to this blog, so you can post comments with your FB identity, and comments posted via Facebook Connect are automatically approved. You also get to use the familiar fb:multi-friend-selector to invite friends to this blog.

RESTful in-place edit in Rails and jRails/jQuery

I’ve been using jRails in my recent Rails projects. The original Rails in-place editing plugin uses script.aculo.us, and there is a jRails port of it, but neither of them is RESTful: they both create extra actions to update the in-place edit fields.

I found janv’s rest_in_place plugin, which uses the default update action to update the field, so no route changes are necessary. I had some problems with the plugin at first, but after a pull-request correspondence it now works well. Here are the highlights of how to use it; keep in mind that I use HAML.

The plugin’s init.rb doesn’t load anything for you, so you have to go into your application layout and include the js file:

    = javascript_include_tag 'jquery.rest_in_place.js'

If you have CSRF protection on, this plugin also requires you to set a javascript variable. If you have jRails it automatically appends the token to ajax requests, but then you would have to modify the plugin a bit to get it to work.

    :javascript
      rails_authenticity_token = '#{form_authenticity_token}'

In your controller’s show action, handle the javascript response:

  def show
    respond_to do |format|
      format.html # show.html.erb
      format.js   { render :json => @model }
    end
  end
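
The plugin talks to your standard RESTful update action, so nothing exotic is needed there either. For reference, here is a minimal sketch of what that can look like in Rails 2, with Model standing in for your actual resource; this is an illustration, not the plugin’s required code:

  def update
    @model = Model.find(params[:id])  # "Model" is a placeholder for your resource
    respond_to do |format|
      if @model.update_attributes(params[:model])
        format.html { redirect_to @model }
        # the plugin refreshes the value from the show action above,
        # so an empty success response is enough here
        format.js   { render :nothing => true }
      else
        format.html { render :action => 'edit' }
        format.js   { render :nothing => true, :status => :unprocessable_entity }
      end
    end
  end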

Then you can mark up the editable fields in your views:

    - div_for @model do
      %p
        = label_tag "Name"
        %br
        %span.rest_in_place{ :attribute => 'name' }
          =h @model.name
      %p
        = label_tag "Location"
        %br
        %span.rest_in_place{ :attribute => 'location' }
          =h @model.location

37signals also saw the benefits of AWS

37signals moved TaDaList to run on pretty much the same things I’m using: EC2, EBS, Elastic IPs, Apache, Passenger. They also started from the same Ubuntu Intrepid images. 🙂

Joshua posted more info on their setup in the comments section of the blog post:

Joshua Sierles 28 Nov 08

Matt,

Our custom image is based on the Ubuntu Intrepid images from Alestic. We install useful EC2 gems and base packages, bundle, then provision each instance by role. Working with EC2 is so easy, we don’t see much value in using a third-party provider for our scale.

Yaroslav,

EBS and Elastic IPs were the primary motivation for moving to EC2. We use EBS extensively: for MySQL data, logs and local repository mirrors. Performance so far is excellent. Snapshotting volumes is a breeze and makes setting up MySQL slaves and staging environments really easy.

[From Ta-da List on Rails 2.2, Passenger And EC2 – (37signals)]

From ServerBeach to Amazon AWS/EC2

I’ve had a dedicated server at ServerBeach for about 5 years, and overall they were pretty solid. One time the hard drive died, but they were able to mount the dead drive in read-only mode so I could recover all my data. After they were acquired by Peer 1 there were some connectivity issues, but after a few tickets things were pretty solid again.

My ServerBeach server was so old that it was one of their EOL (end-of-life) servers, meaning they no longer stocked spare parts for it, which is not exactly reassuring. Ray and I have been hosting our Ruby on Rails projects on EC2 for quite a while; OnMyList is also hosted on EC2 with software load-balancing and multiple app and database instances. I had always planned to consolidate things and move my ServerBeach setup to AWS/EC2, but I never got around to it. About a month ago ServerBeach sent me an email saying they had to reassign the IP addresses on the server. That would mean changing the DNS for all my domains anyway, which pretty much gave me enough incentive to move.

Ray and I started with the Intrepid Ubuntu AMI by Eric Hammond. To get a static IP, I allocated an Elastic IP Address; you get one Elastic IP free per running instance, and you can assign it to any instance. We also use Elastic Block Store (EBS) for persistent storage: the entire /home, the MySQL database, and most of the important configuration files in /etc are symlinked to the EBS volume. With EBS you can easily create snapshots for backups using the ec2-create-snapshot command included in the EC2 tools.
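
For example, once a volume is attached and mounted (covered below), backing it up is a one-liner; the volume ID here is just a placeholder:

% ec2-create-snapshot vol-xxxxxxx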

To get an Elastic IP, just do:

% ec2-allocate-address
ADDRESS 75.101.xxx.xxx

To associate that address to an instance:

% ec2-associate-address -i i-xxxxxxx 75.101.xxx.xxx
ADDRESS 75.101.xxx.xxx i-xxxxxxx

To create an EBS volume (make sure the zone is the same as that of the instance):

% ec2-create-volume --size 50 -z us-east-1c
VOLUME vol-xxxxxx 50 us-east-1c creating 2008-10-31T18:01:29+0000

To attach the volume to an instance:

% ec2-attach-volume vol-xxxxxxx -i i-xxxxxxx -d /dev/sdh
ATTACHMENT vol-xxxxxxx i-xxxxxxx /dev/sdh attaching 2008-10-31T18:01:46+0000

Now ssh into the instance and run mkfs. I’m using ext3, but you can use whatever filesystem you want:

% yes | mkfs -t ext3 /dev/sdh
mke2fs 1.41.3 (12-Oct-2008)
/dev/sdh is entire device, not just one partition!
Proceed anyway? (y,n) Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736,
1605632, 2654208, 4096000, 7962624, 11239424
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

After this you can add a line to /etc/fstab to mount the EBS volume to a mountpoint.
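
Something along these lines does the trick; /vol is just the mountpoint I’m using for illustration:

% mkdir /vol
% echo "/dev/sdh /vol ext3 defaults 0 0" >> /etc/fstab
% mount /vol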

Everything on the server was pretty much installed from APT, and we went with BIND9 for DNS. Rails apps are hosted with Passenger as usual, and God is used for monitoring. We host the primary DNS for all our domains on the instance, and secondary DNS is hosted at EditDNS. I chose EditDNS because they support SRV records, which we need for Jabber. All our domains use Google Apps; there is no reason to run my own mail server when Google provides superb reliability, spam filtering, and secure IMAP and SMTP service, and the Gmail web interface is way better than any open source webmail package I’ve seen. Pretty much all the hosted domains share the same zone file, which makes DNS management super easy. One gotcha: if you have BIND9 set up but the server isn’t answering remote queries, check that port 53 is open. With EC2 you have to open up ports manually, and most EC2 setup instructions only open port 22 for SSH and 80 for HTTP, so anything else you need to authorize yourself:

ec2-authorize default -P udp -p 53
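
Since the secondary name servers at EditDNS pull zone transfers over TCP, you will most likely want TCP port 53 open as well:

ec2-authorize default -P tcp -p 53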

For Rails apps we set up a deployer user to keep things really simple, and the Capistrano recipe is pretty straightforward. We use remote_cache with Git to make deploys very fast.
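
The relevant bits of a recipe like this look roughly as follows; the application name, repository URL, hostnames and paths are made-up placeholders, and the Git plus remote_cache settings are the part that matters:

    # config/deploy.rb (Capistrano 2) -- a sketch with placeholder values,
    # not our exact recipe
    set :application, "myapp"
    set :user,        "deployer"            # the dedicated deploy user
    set :use_sudo,    false

    set :scm,         :git
    set :repository,  "git@example.com:myapp.git"
    set :deploy_via,  :remote_cache         # keep a cached clone on the server so
                                            # each deploy only fetches new commits
    set :deploy_to,   "/home/deployer/apps/#{application}"

    role :app, "example.com"
    role :web, "example.com"
    role :db,  "example.com", :primary => true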

To migrate user data over from my old server, I first created the users on the new system manually; I didn’t feel comfortable copying over /etc/passwd, the shadow file, and so on, and I had to get rid of quite a few old accounts anyway. After the users and home directories were created, I used rsync over SSH to copy everything, with quite a few exclusions (roughly like the example below): for instance I skipped all the Maildirs and the SpamAssassin Bayesian database files, because I didn’t need them anymore and a lot of them were pretty huge. If you run rsync in archive mode (rsync -a), permissions and ownership are synced automatically to the users and groups with the same names. I also had to migrate Apache 1 virtual host configs to Apache 2, but that was pretty straightforward. SSL setup was also trivial in Apache 2; I moved the hostname over and was able to reuse my InstantSSL certificate.
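
The rsync invocation was along these lines, run from the old server; the hostname and excludes here are illustrative rather than my exact list:

% rsync -a -e ssh --exclude 'Maildir/' --exclude '.spamassassin/' \
      /home/ root@new-server:/home/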

To migrate the MySQL databases, we just did mysqldumps and loaded them on the new server. I realize this probably wouldn’t work if you’re migrating a high-traffic site, but fortunately we could tolerate a few hours of downtime.
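
The dump-and-load itself is nothing fancier than something like this, with whatever MySQL credentials you use:

% mysqldump -u root -p --all-databases > all-databases.sql   # on the old server
% mysql -u root -p < all-databases.sql                       # on the new server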

After everything was running smoothly, some of our domains still had longer TTL values and people were still hitting the old IP, so I set up Pound on the old server to forward all HTTP and HTTPS traffic to the new IP (a sample config is below); it worked like a charm. I went with Pound because it was the only option available in APT on the old Debian sarge server. Next time I do a migration like this I will lower the TTLs of the domains a couple of days before the move, though even then I’m assuming resolvers obey the specified TTLs, and many probably don’t. I use OpenDNS, and it was pretty easy to expire their cache manually.
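
If you haven’t used Pound before, the forwarding config is tiny. A sketch of the plain-HTTP side looks roughly like this, with the new Elastic IP as the backend (the HTTPS side just needs a matching ListenHTTPS block with the certificate):

    ListenHTTP
        Address 0.0.0.0
        Port    80
        Service
            BackEnd
                Address 75.101.xxx.xxx   # the new Elastic IP
                Port    80
            End
        End
    End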

It’s been a few weeks and the instance has been rock solid. Amazingly, it performs much better than the ServerBeach dedicated server, though since that server was over 5 years old, that’s not saying much. Cost-wise, the EC2 instance is about $20 a month cheaper than the dedicated host (I was paying about $90 a month for the server plus 2 additional IPs). Ray and I have been using EC2 heavily for over a year, so scaling horizontally with more instances is pretty trivial, and it’s far quicker than getting another dedicated server set up with a hosting provider.

“Late 2008” MacBook Pro display color profile calibration

I spent most of the morning messing with color profiles on my MBP. The “late 2008” MBPs have shipped with 2 different panel models so far: 9C84 and 9C85. I have the 9C84, probably because I got mine on release day. I’m not sure which model is better, but having the display calibrated correctly matters more than which model number you have.

I really should get a hardware calibration device to calibrate my screens and printer. I already have an ICC profile for printing with my printer (Canon i9900) and my photo papers (Ilford Gallerie Pearl); if you use Ilford papers you can download profiles here. I spent more than an hour looking at comparisons and reviews of the different hardware calibration devices, and they are pretty confusing, since it’s not just the hardware that matters: the software makes a huge difference too. It doesn’t seem like I can get anything I’d be completely satisfied with without spending over $1k. If I were to get something now, I’d probably go with the Eye-One Display 2 by X-Rite. If you’re interested, here’s a link to get it from Amazon.

I googled to see if I could find profiles calibrated with different devices and software, and found this thread on the MacRumors forums, which was exactly what I needed. I tried pretty much all the profiles uploaded there, and these 2 looked the best to me. I am using the D65 one, but most people will probably prefer the native-whitepoint one. (I’ve shared these 2 profiles here and here.)

By the way, for my photography I shoot RAW in Adobe RGB and open the files at 16-bit in Adobe Camera Raw (CS4). I never convert them to sRGB before publishing my work online, but I do embed the Adobe RGB profile in the JPEGs, so they should look fine in applications that support color management; if yours doesn’t, the colors will look really messed up. Safari in Leopard uses ColorSync, so if you use Safari you’ll be fine. If you use Firefox 3 on Leopard, you need to enable color management yourself, as it is disabled by default; you can either edit the settings in about:config or install this add-on. If you’re one of the unfortunate few still on Windows, I believe Vista has color management built in; if Vista sucks too much and you’re still on XP (good choice!), you can try downloading the Microsoft Color Control Panel.

Now my colors from display to print are pretty close, so I’m fine as far as shooting and selling prints goes. But I upload a lot of images to Flickr, so I may start converting those to sRGB so they won’t look ridiculous in browsers that don’t support embedded color profiles or color management at all. Any color management or digital workflow tips you’d like to share? Please post them in the comments.

If you’re new to color management, René Damkot wrote a great post about the topic at Canon Digital Photography Forums.

Flash 10 security changes: setClipboard() requires user interaction

This is probably why the click-to-copy share links don’t work with Flash 10:

Setting data on the system Clipboard requires user interaction

In Flash Player 9, ActionScript could set data on the system Clipboard at any time. With Flash Player 10, the System.setClipboard() method may be successfully called only through ActionScript that originates from user interaction. This includes actions such as clicking the mouse or using the keyboard. This user interaction requirement also applies to the new ActionScript 3.0 Clipboard.generalClipboard.setData() and Clipboard.generalClipboard.setDataHandler() methods.
What is impacted?

This change can potentially affect any SWF file that makes use of the System.setClipboard() method. This change affects SWF files of all versions played in Flash Player 10 and later. This change affects all non-application content in Adobe AIR; however, AIR application content itself is unaffected.
What do I need to do?

Any existing content that sets data on the system Clipboard using the System.setClipboard() method outside of an event triggered by user interaction will need to be updated. Setting the Clipboard will now have to be invoked through a button, keyboard shortcut, or some other event initiated by the user.

[From Adobe – Developer Center : Understanding the security changes in Flash Player 10]