Emanuele Preda’s Blog

Start here

Who is the very first pioneer of Mobile Internet?

My team and I are organizing the celebration of the 20th birthday of SMS.it, which I founded in 1996, launching a service that delivered urgent e-mails directly to any mobile phone.

Since that service clearly belonged to the Mobile Internet industry rather than to the Short Message one, one of the best ideas that came out of our brainstorming session was to invite the very first worldwide pioneer of Mobile Internet services.

Then we started to dig through the net, searching for any solution commercially launched before September 1996 that allowed the general public to access any Internet service (web, email, etc…) on the go: without a computer, a modem or a fixed phone line, and without a special, expensive device.

What we found out was a real surprise: NOTHING!

Meaning: it would be extremely easy to invite the very first pioneer of Mobile Internet services, worldwide… since it was …ME!





How to combine two scanned PDFs with front and (reversed) back pages

It’s curious: Mac OS X doesn’t include a tool for automatically scanning two-sided (front & back) documents on a front-only feeder scanner.

I found a way to automate the process.

I assume you have two PDF files: front.pdf, which contains the first page and all the odd pages, and back.pdf, which contains all the even pages, scanned starting from the back, so the first page in that PDF is the last even page.

So we need to perform two operations: invert the page order in back.pdf (so that its first page becomes the last one) and then combine the two PDFs into one, taking the first page from front.pdf, the first from back.pdf, the second from front.pdf, the second from back.pdf, and so on.
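The collation order can be sketched as a toy Ruby function (just an illustration, not part of the scripts below): undo the scanner's reversal of the even pages, then zip the two lists together.

```ruby
# Toy illustration: odd pages in order, even pages as they come out of
# the scanner (i.e. reversed), interleaved into a single page list.
def collate(odd_pages, reversed_even_pages)
  evens = reversed_even_pages.reverse    # undo the scan order
  odd_pages.zip(evens).flatten.compact   # 1, 2, 3, 4, ...
end

puts collate([1, 3, 5], [6, 4, 2]).inspect  # => [1, 2, 3, 4, 5, 6]
```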

To perform the first operation (inversion), download and install PDFTK.

Then create this script with TextEdit and save it under the name reverse-pdf.rb in the same folder as your two PDFs.


#!/usr/bin/env ruby
# Check that pdftk is installed before running this.

if __FILE__ == $0
  if ARGV.length != 2
    puts "Syntax: #{__FILE__} pdf_to_reverse.pdf <page count>"
    exit 1
  end

  pdf = ARGV[0]
  reversed_pdf = pdf.gsub(/\.pdf/i, "_reversed.pdf")
  page_count = ARGV[1]

  # pdftk's page range "N-1" emits the pages from the last down to the first
  `pdftk #{pdf} cat #{page_count}-1 output #{reversed_pdf}`
end

Then launch Terminal, cd into the right folder, make the script executable with “chmod +x reverse-pdf.rb”, and call:

./reverse-pdf.rb back.pdf 55

where 55 is the last page number (= the total number of pages).

The script will create, in the same folder, a file called back_reversed.pdf with the pages inverted.

For the second part, create this script with TextEdit and save it as combine-pdf.rb (making it executable as well):


#!/usr/bin/env ruby
# Run this on OS X to shuffle two PDFs, where the even pages are already
# reversed (reverse them with the other script).

if __FILE__ == $0
  if ARGV.length != 3
    puts "Syntax: #{__FILE__} odds.pdf reversed_evens.pdf output.pdf"
    exit 1
  end

  odds_pdf = ARGV[0]
  reversed_evens_pdf = ARGV[1]
  output_pdf = ARGV[2]

  # Obviously, this only works on OS X. I didn't see an easy way to combine
  # PDFs in pdftk or the other tools I searched for.
  `python '/System/Library/Automator/Combine PDF Pages.action/Contents/Resources/join.py' --output '#{output_pdf}' --shuffle '#{odds_pdf}' '#{reversed_evens_pdf}'`
end

This nice script uses Automator’s join.py in “shuffle” mode, which takes a page alternately from each document to combine the PDFs.

You can invoke it with:

./combine-pdf.rb front.pdf back_reversed.pdf  combined.pdf
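As a footnote: if you have pdftk 1.44 or newer, its built-in shuffle operation plus a reversed page range can reportedly replace both scripts with a single command. A sketch, with the file names assumed as above:

```ruby
# One-command alternative (assumes pdftk >= 1.44, which adds "shuffle";
# the handle range "Bend-1" walks back.pdf from its last page to its first,
# so this single call both reverses and collates).
cmd = "pdftk A=front.pdf B=back.pdf shuffle A Bend-1 output combined.pdf"
`#{cmd}` if system("which pdftk > /dev/null 2>&1")
```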

Source of the scripts: http://jawspeak.com/2009/08/05/merging-pdfs-on-mac-os-x-from-a-non-duplex-scanner/

Downloading an Amazon EC2 AMI to local drive

This is an addendum to Jiaqi Zhang’s very precise post http://weaponshot.wordpress.com/2012/04/08/downloading-an-ami-to-local/ on downloading an EBS-backed Amazon EC2 AMI, extended so that you can also BOOT the downloaded instance.

I keep the same step numbers as the original post, for your reference.

1. Choose an existing EBS-backed AMI that you want to download, launch it if it isn’t running, and check that it uses an ext4 filesystem by invoking “df -T” inside the instance.

3.1 Use “su” to change to root.

3.3 Download your credentials to your computer, such as pk-XXX.pem and cert-XXX.pem. These can be found in the X.509 certificates tab of your credentials panel, reached from the upper right of the console screen (just click your name).

3.4 Copy them to your AMI instance using “scp -i <identity_file.pem> <pk-XXX.pem> <cert-XXX.pem> ec2-user@your_ami:~/directory”. Here identity_file.pem is the key file you downloaded when you created the instance or the key pair.

3.5 Log in to the instance and invoke “ec2-bundle-vol -k <pk-XXX.pem> -c <cert-XXX.pem> -u <user_id>”. The two .pem files are the ones you just copied in the previous step. The user_id is the string of digits you can find in your “account activity” page (upper right, under your name), including the dash signs.

(If you don’t have the ec2-ami-tools pre-installed, see instructions in the original post)

4. Bundling an image means compressing it and cutting it into a bunch of files, which you will see in the /tmp dir.

4.1 To upload it to S3: create a bucket in your S3 console panel and name it, e.g., “mybuck”. Don’t use capital letters, spaces, dashes or underscores.

4.2 In your AMI instance, “cd /tmp” and invoke “ec2-upload-bundle -b <mybuck> -m <manifest_file> -a <access_key> -s <secret_key>”. Here the manifest_file is the XML file automatically generated when you invoked the bundle command; it should be under the /tmp directory together with those image.part.XX files. You can find the access_key and secret_key in your credentials panel under the “access keys” tab. The secret key is hidden by default, so click “show” to make it visible: copy & paste both of them, and you are there.

Now that all the image files are uploaded, let’s download them to our local machine.

Go into VMware and start an Ubuntu machine. You can download Ubuntu freely from the official website. If you use VMware 3.1.4, don’t download 12.x, because the VMware Tools, necessary to share folders between your real machine and your VM, are supported only on 10.x. Choose 32-bit/64-bit according to the EC2 machine you are importing. This is very important!

Once you have downloaded the right .iso, create a new virtual machine in VMware using the .iso as the installation disk.

When your Ubuntu machine is running, VMware should install the VMware Tools for you automatically (just check the Virtual Machine menu and wait). Verify with “ls -l /mnt/hgfs” that you see your shared folder. If not, install the VMware Tools manually (it’s an option in the Virtual Machine menu).

6.1 Now install the Amazon EC2 toolkit on your local machine with the command “apt-get install ec2-ami-tools” (and also ec2-api-tools if you want to control EC2 VMs from there).

To run these commands, you have to copy the two .pem files from your computer into your home dir, using the shared folder /mnt/hgfs.

Now create a dir (e.g. in your home dir), cd into it and invoke “ec2-download-bundle -b <mybuck> -a <access key> -s <secret key> -k <pkXXX.pem>”: this will download the bundled image files from your S3 bucket into that dir.

In this same dir, invoke “ec2-unbundle -k <pkXXX.pem> -m <image.manifest.xml>”. You should get back a 10GB file named “image” (the file size depends on the type of AMI; “small” ones get 10GB).

Now there are two possibilities:

-> If you only want to mount your image, not boot it, you can simply install qemu with “apt-get install qemu” and invoke “qemu-img convert -f raw -O vmdk image /tmp/ec2-image.vmdk”. Then move this .vmdk file into the shared folder, halt the VM, attach the new vmdk to your VM, start the VM again, run “df” to check whether your boot disk is /dev/sda or /dev/sdb, and mount the new disk with “mkdir /mnt/yourdisc” and “mount -t ext4 /dev/sdX /mnt/yourdisc”, where X is b if your boot disk is a, or vice versa.

Then “cd /mnt/yourdisc” …and there is all your stuff!

— *** —

-> But if you want to boot your instance, there are some additional steps. FYI, this post says it’s so hard that it’s not worth it, but this other one shows a way to do it, even if incomplete, and after many hours I managed to do it.

You just obtained your 10GB image file, right?

Now shut down your VM, go to the hard drive settings panel in VMware and create a secondary disc to attach to your instance. Let’s select an 11GB SCSI drive: you should make it about 10% larger than your original disc (we assume the image is 10GB).

7.3 Boot up your VM again and run “fdisk -l” to check that you see /dev/sdb (assuming /dev/sda is your primary disk) with no valid partition table, because it’s not formatted yet.

Before copying your data onto it from the image file, install gparted (“apt-get install gparted”) and run it. In gparted, choose Device / Create partition table with the standard (msdos) label, then create an ext4 partition leaving 1MB free at the beginning. It would probably be even better to create a 9GB partition and 1GB of swap space. Deselect the “round to cylinders” tick, or it will not leave this 1MB. Also create the ~990MB swap partition. Commit, then right-click the primary partition and activate the “boot” flag.

7.5 Now, as su (root user), invoke “dd if=image of=/dev/sdb1”. It can take anywhere from a few minutes to a whole night, depending on your hardware: I have a MacBook Pro with an SSD, so it took only 6 minutes. This command copies the data from the unbundled raw image onto the partition you created.

7.6 Now invoke “mkdir /mnt/ec2” and then “mount -t ext4 /dev/sdb1 /mnt/ec2”: in the /mnt/ec2 directory you will see all the files of your AMI. Up to here it’s very similar to the mount-only option, but we left 1MB of space at the beginning to make the disk bootable.

Before making the disc bootable with grub, I replaced the kernel in the /boot directory, taking it from an Ubuntu 10.04 64-bit (because my Amazon Linux AMI was 64-bit).

I just saved the old /mnt/ec2/boot directory and created a new one by copying the whole /boot directory from the Ubuntu 10.04.

Then I created a /mnt/ec2/boot/grub/menu.lst file with this content:

title EC2 with kernel 2.6.32 from Ubuntu10.04
root (hd0,0)
kernel /boot/vmlinuz-2.6.32-38-generic root=/dev/sda1
initrd /boot/initrd.img-2.6.32-38-generic

The two file names are simply those of the vmlinuz* and initrd* files in the boot directory.

Pay attention: inside this menu we use hd0 and sda1 because, when it boots, this disk will be the first (and only) one. But for now this disk is the second one, which is why the following commands use hd1 and sdb.

Now invoke “grub-install --root-directory=/mnt/ec2 /dev/sdb” (double-check with df that your new hard disc really is /dev/sdb, or you could have trouble booting from the primary disc again!).

Then invoke “grub --device-map=/dev/null” and, at the grub> prompt, type the following commands (I leave the output of the commands in for your reference).

You may have to change the numbers in the geometry command: run “fdisk -l -u” and look at your cylinders, heads and sectors. Heads and sectors will be 63 and 255, but the cylinder count changes if you choose a different size for your disc.

grub> device (hd1) /dev/sdb
grub> geometry (hd1) 1435 63 255
drive 0x81: C/H/S = 1435/63/255, The number of sectors = 23053275, /dev/sdb
   Partition num: 0,  Filesystem type is ext2fs, partition type 0x83
grub> root (hd1,0)
grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd1)"...  17 sectors are embedded.
 Running "install /boot/grub/stage1 (hd1) (hd1)1+17 p (hd1,0)/boot/grub/stage2
/boot/grub/menu.lst"... succeeded
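If you are unsure about the cylinder count, it can be derived from the total sector count that “fdisk -l -u” prints, since cylinders = total sectors / (heads × sectors per track). With the 63/255 values above:

```ruby
# Cylinder count for grub's "geometry" command, from fdisk's sector total.
total_sectors = 23_053_275   # as reported by fdisk -l -u for this 11GB disc
heads, sectors = 63, 255     # the values grub detected above
cylinders = total_sectors / (heads * sectors)
puts cylinders               # => 1435
```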

Now the disc is ready to be booted, but we still have to:

- copy /lib/modules from your Ubuntu machine, or you won’t be able to run some commands that need the right modules compiled for your new kernel.

After doing this, a known bug in evbug.ko will spam your console, so you have to:

cd /lib/modules/$(uname -r)/kernel/drivers/input

mv evbug.ko evbug.ko.disabled

and also add a row with “blacklist evbug” to /etc/modprobe.d/blacklist.conf.

Also rename (mv) /usr/bin/cloud-init and cloud-init-cfg, or you’ll have to wait 3 or 4 minutes at boot with a lot of errors from these programs trying to connect to Amazon intranet addresses.

Last but not least, go to /etc/shadow on your Ubuntu64 machine, copy your encrypted password and paste it into the root entry of /mnt/ec2/etc/shadow, or you will not be able to log in as root (your ec2-user password should still work, though, and so should “sudo su”).

Now halt the VM and create a fresh, empty machine (I chose a CentOS 64-bit, but it should also work with an Ubuntu, if it makes any difference: it’s only an empty machine, so it should be the same). Tell VMware to use an existing disk, browse into the other Ubuntu 64 machine, find the secondary disc and choose the default option (to make a copy of that disk). At the end of the copy, cross your fingers and run the machine.

The console is really hard to use, so you’ll want to log in via ssh. But you’ll see that SSH is not working, because its keys are generated by cloud-init, which no longer runs.

It should be possible to generate the keys with the ssh-keygen command, but I didn’t know how to use it. I tried to reinstall with yum, but after removing it I couldn’t install it back; so I downloaded the OpenSSL and OpenSSH sources from their .org sites, compiled and installed them following the instructions, and ssh is working perfectly now.
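For the record, regenerating a missing host key with ssh-keygen is a one-liner per key type. A sketch: on the real VM you would run it as root with the key placed in /etc/ssh; here it writes into a temp dir just to show the invocation.

```ruby
require "tmpdir"

# Generate an SSH host key pair the way cloud-init would have done.
# keydir stands in for /etc/ssh, which needs root to write into.
keydir = Dir.mktmpdir
if system("which ssh-keygen > /dev/null 2>&1")
  system("ssh-keygen", "-q", "-t", "rsa", "-N", "",
         "-f", File.join(keydir, "ssh_host_rsa_key"))
  puts Dir.children(keydir).sort.inspect   # the private key and its .pub
end
```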

Please cite my blog, if you want to repost/share this (or part of) the article. Thanks!

Eluana: the Church contradicts itself

Welby was denied his funeral because, they said, he was a self-confessed suicide. I don’t think they’ll have the courage to do the same to Eluana, whose plug was pulled because the Supreme Court accepted the proof that this was her will, expressed when she could still actually make a statement, that is, 17 years ago.

It should be remembered, though, that both Welby and Eluana did not wish to “end (more…)

Event Driven or Event Driver?

The week starts and brings a whole series of things happening around you: phone calls, emails, messages via chat or SMS, people visiting, and anything else coming at you from the outside.

There are only two ways to react to, and to handle, these external stimuli.

1. Event Driven (the events drive you)

You fit into this category if you (more…)

The Frog’s push of legs

Imagine a pot filled with cold water. A frog is quietly swimming in it. The fire is lit under that pot. Water starts warming up. Soon it becomes lukewarm. The frog finds this rather pleasant and keeps swimming. The temperature (more…)

Worse than the ostrich

Italians have the same severe fault that the ostrich is usually, but wrongly(*), branded with: they gaily hide their heads in the sand, without seeing incoming dangers or catastrophes.

I find extraordinary the quiet serenity of my compatriots, past and present, living under that same volcano that has already given many signs of eruption, or (more…)