IMAP hosting migration

Long gone are the days when I hosted my own websites, email and all. The amount of time it takes to manage all that in a safe and stable manner just isn’t worth it now that companies have developed all these interfaces for managing your stuff.

For years I had the majority of my sites running on Surftown. I had a grandfathered plan that was unlimited everything, and it cost practically nothing compared to others. Then Surftown was sold to UnoEuro and the service changed into something really odd. The price went up significantly, but I just didn’t have the time to move, so I folded and paid the now 3x price. Then UnoEuro was bought by Simply, and the price was jacked up again. For the dozens of sites I was running there, I was now paying 10x more than what I started out paying in 2005. Needless to say, even though it’s quite an effort to move – I took the plunge and decided to do it. I signed up for SiteGround GoGeek and moved things over there. I didn’t realize that the promotional price only applies to your first signup, and I only signed up for a year. A year passed and the renewal came up at 3x the initial price – so I asked if they could give me some discount. NO.

So once again, I am moving everything. This time around I signed up for the maximum time possible (3 years) and got a decent price. Now, I was also running some company emails on SiteGround and didn’t want to just lose all those – so I had to figure out a way to migrate it all.

Turns out there are free online tools for that, and I migrated ten or so email accounts in a matter of an hour using Gilles Lamiral’s imapsync. It went so well that I decided to donate some money to the guy. This tool easily saved me half a day of coding my own migration tool and was super easy to use. Web interface – can’t beat that. Check it out here
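For the command-line flavor, a migration per account looks roughly like the sketch below. The hostnames, user names and passwords are placeholders, not my actual setup – check `imapsync --help` for the full option list of your version:

```shell
#!/bin/sh
# Copy one mailbox from the old IMAP host to the new one over SSL.
# All host/user/password values are hypothetical placeholders.
imapsync \
  --host1 imap.oldhost.example --ssl1 \
  --user1 "user@example.com" --password1 "old-secret" \
  --host2 imap.newhost.example --ssl2 \
  --user2 "user@example.com" --password2 "new-secret"
```

Running it once per account in a small loop is how ten accounts fit into an hour.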

Sometimes you find something like this and it saves you a bunch of time. I just had to share it.

Good luck with your imap account migrations!

Very different business

A quick story – how I got involved in a very different business.

A childhood friend of mine back in Sweden was once a pretty successful DJ. I guess family happened and it was hard for him to be on the road and away from home – so he started an environmentally friendly car care business. Now, we’ve been friends since we were 15-16 or so, so I wasn’t really surprised. He was into cars when we were growing up, maybe not as much as I was, but a decent amount. We were both into motorcycles as well for a while, but I left that scene after my second real accident. Just didn’t want to be there for a third. Anyway, this business was doing fairly well and attracted attention, and eventually investors. The investors wanted to grow the business into something my friend didn’t, and they parted ways. At that point, my dear friend had spent a lot of his time getting this business to grow, so parting ways with an investor and splitting up assets is hard and can cause some heartache (after all, a business is your baby).

During this heartache period, my friend reached out to run an idea by me. While running the car care business, he had come across another individual and they became friends. This individual was involved in a small business that needed my friend’s help figuring out how to protect their products from UV rays and weathering/fading when exposed to the sun for prolonged periods.

To cut to the point: my friend asked me if I would be interested in helping him and his friend out with a business idea – what could be done to bring it to the next level.

After hearing the idea and giving it some thought I was sold and completely invested in moving it forward. We developed a plan and evolved the products to something very different and very disruptive in a very stale genre.

We are changing and disrupting a business that hasn’t seen much change in the past 100 years or so. Now, hold your breath (or not): we are offering fully customized gravestones, headstones and memorials. Nothing new, you would think; except for how we’re offering them, the fact that they’re engineered and environmentally friendly, and the U.S. Pat. Pending technology used for transferring text and images onto them. The U.S. Pat. Pending full headstone assembly makes it easier to ship, but also cuts installation costs while making the result safer and more solid. The latter is accomplished by our U.S. Pat. Pending headstone mounting plate.

How are we offering them? Well, anyone can design their piece using our user-friendly online editor. The final design will look exactly as it does on the screen, and the order can be placed right away.

We offer a delivery guarantee of as low as 6 weeks when using any of our signature models (completely personalized and customized text/images, of course). If you want a shape that is not part of any of our signature models, we will work with you to create one to your specifications.

Our U.S. Pat. Pending digital image transfer technology makes it possible to transfer full-color text and images to any surface of the product.

We offer a 25-year warranty against any product defects as well as any color fading of text or images.

Our products are environmentally friendly, safer and lighter than the competition’s.

Who would have thought I would be part of a gravestone business? Who would have thought I would engineer these kinds of products? It’s interesting how life provides opportunities.

I’m not expecting to spend time on this on a day-to-day basis, as the others will be running it. I’ve done my tasks – getting it engineered, optimizing production, launching the site..

This doesn’t, however, take away my passion for working with data persistence technology and building successful, efficient teams working with cutting-edge technology. I’ve been fortunate to spend my free time on this side project while working at Adobe. If you ever have doubts, give me a shout. My team has been involved in some pretty amazing and intriguing projects. The atmosphere and creativity at the company have definitely boosted my own creativity, both technically and in other ways. We’ve accomplished some pretty amazing stuff and we’re set for even more greatness this year – I am excited!

NetApp file size limitation. What not to do..

So, I’ve had the pleasure (not so much) of witnessing first hand what happens when you go about your day thinking everything is fine and dandy with your MySQL servers and – boom – one of your tables has grown to 16TB and is now saying it’s full. Ok, don’t flame me too much; I know one 16TB table isn’t amazing, but unfortunately the app in question is a legacy app and this particular table is used as an audit log of changes. Not a big deal if the application had a sane retention policy for this data and cleaned it up when it was done with it. After all, developers think about data lifecycle when they write applications *grin*.

So, we run some of our database environments on top of NFS storage (yes, not optimal, but this thing was here before I got here and has been hard to kill). The NFS storage in question is NetApp and if you are going to run NFS as database storage – NetApp would be the way to go.

My team does not manage the storage and we rely on our storage team to manage and maintain/monitor the storage, which they do very well.

Unfortunately, neither my team nor the storage team was aware that NetApp has a single-file size limit on NFS, and that limit is 16TB. All good, you would think – who has a 16TB file anyway? Well, for starters: say you are not running innodb_file_per_table. Then all your data is stored in the shared ibdataX files, and who knows how you have those set up (most have them auto-grow), which means you would be limited to a total InnoDB database size (all schemas and all tables, including indexes) of 16TB.
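The usual way to keep individual files smaller is innodb_file_per_table, which puts each table in its own .ibd file so the per-file ceiling applies per table rather than to the whole instance. A minimal my.cnf fragment as a sketch (note that only tables created or rebuilt after enabling it move out of the shared tablespace):

```ini
[mysqld]
# Store each InnoDB table in its own .ibd file instead of ibdata1.
# Existing tables stay in the shared tablespace until rebuilt,
# e.g. with ALTER TABLE ... ENGINE=InnoDB.
innodb_file_per_table = 1
```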

Further investigation shows that there’s also a single-LUN size limit on NetApp (that is, if you figure you can just run iSCSI on NetApp to get around the file size limit). That size limit is *drum roll* – you guessed it – 16TB. This is because *hold your breath* NetApp creates a file in a NetApp volume, which it presents as a LUN..

Ok, not a big issue; create multiple LUNs in the same NetApp volume (again, NetApp puts everything in volumes). Then use LVM: assign each LUN to a PV, pool those into a VG, and finally present them to the OS as an LV. You can now have files as big as you want with no size limitation, and still use NetApp snapshots with full consistency for MySQL backups.
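As a rough sketch, the LVM layering described above looks like this. The device names are assumptions for two iSCSI LUNs showing up as /dev/sdb and /dev/sdc; substitute your own:

```shell
#!/bin/sh
# Two LUNs from the same NetApp volume, presented over iSCSI.
# /dev/sdb and /dev/sdc are hypothetical device names.
pvcreate /dev/sdb /dev/sdc            # one PV per LUN
vgcreate vg_mysql /dev/sdb /dev/sdc   # pool both PVs into one VG
# One LV spanning both PVs; files on it can now exceed the
# 16TB single-LUN limit.
lvcreate -l 100%FREE -n lv_data vg_mysql
mkfs.ext4 /dev/vg_mysql/lv_data
mount /dev/vg_mysql/lv_data /var/lib/mysql
```

More LUNs can be added to the VG later with pvcreate + vgextend + lvextend as the table keeps growing.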

Or – switch to storage with low latency, direct attached or SAN, and never look back.

ERROR 1200 (HY000): The server is not configured as slave

So, you’ve restarted mysqld, and the slave did not start. You issue SHOW SLAVE STATUS\G and notice that everything is fine except that “Slave_IO_Running” and “Slave_SQL_Running” both say “No”. You don’t have skip-slave-start in your my.cnf file. Why did this happen? Baffled, you issue “START SLAVE;” and get:
ERROR 1200 (HY000): The server is not configured as slave; fix in config file or with CHANGE MASTER TO

This has happened to me numerous times throughout my time administering MySQL servers. It doesn’t happen frequently enough not to baffle you at first glance, but then you start remembering the last time it happened..

The variable server_id always needs to be set when running replication. Check whether it is set by issuing "SHOW VARIABLES LIKE 'server_id';". The value should not be "0" – with a server_id of 0, the slave refuses to run. Also, the master’s server_id and the slave’s server_id must not be the same.

You can set the server id on the fly by issuing "SET GLOBAL server_id=<id>;", then issue a START SLAVE;
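To make the fix survive the next restart, also set it in my.cnf. The value below is just an example – each server in the replication topology needs its own unique, non-zero id:

```ini
[mysqld]
# Must be non-zero and unique per server in the replication topology.
server-id = 2
```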

We should be fine now..

Ugh, I know this is fairly easy to find, but in an attempt to keep it fresher in memory, a quick blog post helps me remember. 🙂

IPCOP install on headless device with serial console (ALIX 2D13)

In the voyage of choosing an OS to run on this device, I realized I needed an easy way to install OSes onto the CF card. Now, this system does not have any display output other than serial; any boot loader (in my case GRUB) needs to be aware of that.

The easiest way to try different OSes is to have the installation media network-mounted and just PXE boot the device, choosing the OS in pxelinux.cfg (when trying Linux OSes, that is). I set it up as follows (since I have an Ubuntu machine anyway), but a similar approach will work for other distributions/OSes. You will need to install a tftp server, a dhcp server and an http server. The “server” providing these three services needs a static IP, and other dhcp servers on that network need to be turned off.


1) Make sure your ALIX is ready with the CF card inserted, and network cable connected to your first/rightmost interface (closest to the power input)
2) Setup and start a tftp server on the machine you will be using as a server. On Ubuntu, the easiest way would be using

apt-get install tftpd-hpa

(Note that the server package is “tftpd-hpa” – “tftp-hpa” is just the client.)
3) Setup and start a http server on the machine you will be using as a server. You might already have one running, so check before. I already had lighttpd running.
4) Setup and start a dhcp server on your network – either on your server, or if you already have a dhcp server, make sure it can serve the options PXE clients need. I used the ISC dhcp server on the same host as above. Package: isc-dhcp-server
5) Download ipcop-<version>-install-netboot.i486.tgz and ipcop-<version>-install-cd.i486.iso to your server host.
6) Untar the first (the netboot install file) into the tftp root, copy the “ipcop-pxe-serial-<version>.model” file to “default” in the same directory, then copy the “pxelinux.0” file from “<tftp-root>/ipcop/<version>/i486/pxelinux.0” to your tftp root
7) Make ipcop iso available via your http server either by copying the contents of the iso to a directory in the http root, or mount it to a directory in your http root. I symlinked my mount to ipcop_iso in the root.
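Steps 6 and 7 as a rough shell sketch. The tftp and http root paths are assumptions for a stock Ubuntu setup, and <version> is the placeholder for the IPCop release you downloaded:

```shell
#!/bin/sh
# Assumed paths; adjust to your tftpd/httpd configuration.
TFTP_ROOT=/var/lib/tftpboot
HTTP_ROOT=/var/www

# Step 6: unpack the netboot tarball into the tftp root, promote the
# serial-console pxelinux config to "default", and copy pxelinux.0 up.
tar xzf ipcop-<version>-install-netboot.i486.tgz -C "$TFTP_ROOT"
cp "$TFTP_ROOT/ipcop/<version>/i486/ipcop-pxe-serial-<version>.model" \
   "$TFTP_ROOT/ipcop/<version>/i486/default"
cp "$TFTP_ROOT/ipcop/<version>/i486/pxelinux.0" "$TFTP_ROOT/"

# Step 7: loop-mount the install ISO and symlink it into the http root.
mkdir -p /mnt/ipcop_iso
mount -o loop ipcop-<version>-install-cd.i486.iso /mnt/ipcop_iso
ln -s /mnt/ipcop_iso "$HTTP_ROOT/ipcop_iso"
```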
8) Configure your dhcp server with your tftp server’s IP as “next-server”, and make sure to pass the pxelinux.0 filename to PXE clients as well. I added these lines to dhcpd.conf:

next-server <my.servers.ip.address>;
host alix {
hardware ethernet <mac address of the alix>;
filename "pxelinux.0";
}

9) Connect the ALIX using serial to a machine with a serial port (maybe your server above), and fire up your choice of serial terminal. I used minicom, and since I don’t have a serial port but a USB to SERIAL converter, the device was specified as the device created when the converter was inserted:

minicom --device /dev/ttyUSB0 --baudrate 38400 --8bit --statline

10) Connect power to your ALIX board. You should see output on your serial terminal immediately. Something like this:

PC Engines ALIX.2 v0.99h
640 KB Base Memory
261120 KB Extended Memory.

Press “S” to enter BIOS configuration, “E” to enable PXE boot and “Q” to quit, choosing “Y” at the “Do you want to save” prompt.

Your ALIX board should now PXE boot. Follow the install instructions. You will be prompted for the media you want to install from. Choose HTTP and enter the server where you put your ipcop iso, together with the full http path. If your ALIX board did not PXE boot, check the logs and your DHCP/tftp setup – those are the only services involved during PXE boot.

Now, I had issues with ipcop not wanting to boot after the install. I got a boot error straight after the POST sequence. I am running an 8GB CF card and thought that might be the issue (since the IPCop docs state it supports cards up to 4GB), but after mucking around a bit I decided to re-install – and that solved it (ugh..). I am guessing something failed during the first install’s MBR write to the CF card. I can’t really rewind now and check the MBR out.. Should have done that in the first place.. I did however get a tip (on the forum) to update to the latest ALIX BIOS (which I already had on the board): tinyBIOS 0.99h.

Home Project; Router/WiFi AP/Server on ALiX 2D13


So, after trying a couple of different APs and realizing they were either unstable or lacking important features, I decided to build my own. It’s apparently not that easy to find a nice, stable AP/router/switch nowadays. You can’t just go and buy one for $100-$150 and expect it to be stable. And by stable I mean not being forced to power-cycle the device every couple of days.

After my last D-Link (DGL-4100) died, I bought myself a corporate-grade Cisco/Linksys device (WRV200). It promised a lot of what I wanted from a device like that, and it gave me the option to set up VPN (road-warrior style and site-to-site). FAIL! It promised, but did not live up to my expectations; sure, you were able to do all of what it promised, but if you started to use it for real, the poor router overheated and hung. Since I bought 3 of these to interconnect 3 sites using VPN, I was not very happy.. It took a while to get them returned, going through the procedure of proving they did not live up to the promises made..

I ended up researching alternatives and found MikroTik and their line of hardware, RouterBOARDs. I wasn’t sure whether I would like running a scaled-down, restricted Linux-based router OS where I could not alter the things I wanted; but after trying it for a month or so, I decided to buy one and play around with it.. Now, I thought the RB493UAH that I got delivered was a broken promise too, as it hung after just half a day of heavy use. This proved to be a hardware error, and I got a new one sent out without having to return the old one in advance. Great service here! Anyway, the second one has been up and running for 2.5 years without needing a reboot. Firewalling, connection tracking and all that jazz are enabled, and I still get 100mbps throughput between all interfaces. Torrenting with 300-500 simultaneous connections from more than one internal host is smooth, and I get maximum throughput both ways (in/out), full duplex. I also have one wireless card in it, and am adding a new 802.11n card in the next couple of weeks. Anyway, I finally ended up getting 2x RB433UAH with various wireless cards, and am running VPN between sites just fine as well. whee..

ALIX 2D13:

Since I have already been playing around with my fit-PC Slim for the past 3.5 years, and it’s been a fun little device, I decided to try a new router/AP/server project. I bought an ALIX 2D13 board (based on the same AMD Geode LX800 CPU and CS5536 chipset as the fit-PC Slim), 8GB of 30MB/s flash, a TP-Link wireless N card and some other knick-knacks to get things going.. I agree it feels kind of odd to buy a new device (3.5 years later) with the same old CPU as my old fit-PC Slim, but that one has been 100% stable running Ubuntu 8.04 from the start.

I have not decided what OS I will be running, but will mess around with it in the free days around x-mas. Options I am looking at are:

IPCop, IPFire, Alpine Linux, Voyage Linux, OpenWRT, Ubuntu, Linux Mint or FreeBSD.. I’ve been considering pfSense and m0n0wall, but they are just too scaled down. I want to be able to do more than just a router/AP.

So random

The weather here in London is so random I don’t know where to begin.. Not to go into too much detail:

I look out the window before I leave for work, and it’s cloudy, but no rain and the streets are dry..

I take the elevator down, pass through the lobby, open the door and am faced with hard rain. I turn around to get a different jacket (one with a hood).

Get up to the apartment, put a hoodie on and head out again. Pass the lobby, open the door – no rain.. I walk to the subway (tube), take the Jubilee line, then change at Green Park for the Piccadilly line. Arrive at Leicester Square 20 minutes later, get out of the tube station and it’s pouring down again.. ugh..

Interesting new thread..

The MySQL UC (called the O’Reilly MySQL Conference & Expo nowadays) has concluded.. There was a lot of interesting stuff going on, and it was great to get a refresher.. It was also a useful “see how others do it” exercise.

Something that comes to mind is the new threads added in InnoDB. Too bad they first show up in 5.6.2+ – we could have used them now.. 🙂 But isn’t that always the case? We are facing the trouble of grooming a table with loads of historical data. It’s running InnoDB, so we all know it’s kind of expensive to do massive deletes from it, especially if you’re not doing them on your PK. We were debating how to set up a groomer job on a table like this, and no matter how we do it, the only feasible way is to delete by PK. Massive deletes will still cause performance degradation when the pages become too dirty. This has of course changed with the new additions to InnoDB in 5.6.2+. Deleting by PK is still the fastest way, and will be faster still when we one day switch to 5.6.x. Read more about the new InnoDB stuff here: InnoDB Page Cleaner Thread
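A groomer along those lines can be sketched as below – the schema and retention window are hypothetical (an audit_log table with an auto-increment id PK and a created_at column). The idea is to pay for one range scan up front to find the highest PK below the cutoff, then delete in small PK-ordered batches so each transaction stays short and the purge/flushing machinery can keep up:

```shell
#!/bin/sh
# Hypothetical schema: audit_log(id PK AUTO_INCREMENT, created_at, ...)
DB=myapp
BATCH=5000

# One dated range scan; every delete after this is PK-only.
MAX_ID=$(mysql -N -e \
  "SELECT COALESCE(MAX(id), 0) FROM audit_log
   WHERE created_at < NOW() - INTERVAL 90 DAY;" "$DB")

while :; do
  ROWS=$(mysql -N -e \
    "DELETE FROM audit_log WHERE id <= $MAX_ID ORDER BY id LIMIT $BATCH;
     SELECT ROW_COUNT();" "$DB")
  [ "$ROWS" -lt "$BATCH" ] && break
  sleep 1   # give replication and page flushing room to breathe
done
```

The sleep between batches is the crude knob for how hard the groomer leans on the server.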

MySQL UC 2011 Concluded.

MySQL UC 2011 (Now called O’Reilly MySQL Conference & Expo) concluded yesterday.

It was for sure a very rewarding conference, but one may debate the absence of MySQL itself (Oracle). Normally there is a lot going on and a lot of announcements from MySQL – and in this case, Oracle owns MySQL. I don’t know if they are trying to make a statement by not showing up and providing information to the community, or if they are too self-absorbed and want to pull open-database people over to Oracle World instead. I don’t know, and I really hope it’s not the latter. They do seem to put a lot of money and effort into continued development of MySQL and InnoDB.

Monty made a bold move in his keynote, and later had to apologize in a public blog post: monty says. Tobias Asplund and I had a decent chat with him at the SkySQL dinner, and there is a lot of interesting new stuff happening in MariaDB. A lot of effort has been put into subquery optimization, and the execution plans are now better than ever. Hopefully Oracle will not be too scared to adopt those patches. We will have to try MariaDB here at Marin to see if we get any performance improvements running subqueries. It’s not a production test, just to calm those worried souls. 🙂

I enjoyed Yoshinori Matsunobu’s tutorial. There was a lot of refreshing material, but also some new things I had not been able to test before. Good stuff! Check out Matsunobu’s blog

I also enjoyed many of the Facebook talks. It’s very rewarding to see how they attack problems others never see. The amount of servers and data those guys are handling is insane. Their data-drift spotting stuff was awesome as well.

I am surprised by the number of people looking for DBAs or data architects. Almost every talk or keynote mentioned they were hiring.

All in all, it was a great conference. I am almost looking forward to the next one. 🙂

Enable iPhone Emoji (smiley) icons

I successfully enabled the Emoji keyboard on my iPhone 4, making it possible to use a lot of fun icons in SMS, MMS, email, notes – well, wherever you type something on your phone. There are applications in the App Store that will help you do this (for money), but there is one free application with an “easter egg” (hidden feature) that will do it for you. This application is called SpellNumber.

Once you download the app, launch it and enter this secret number: 91929394.59, followed by enter (once). Quit the app (pressing the home button) and reboot your phone (iOS 4.x caches the available keyboards, so a reboot is needed). Once rebooted, go to Settings -> General -> International -> Keyboards -> Add New Keyboard. You should see a keyboard named Emoji there. Enable it, and whoops: you have Emoji icons. (Touch the globe icon beside the spacebar to switch between keyboards.)