ERROR 1200 (HY000): The server is not configured as slave

So, you've restarted mysqld, and the slave did not start. You issue SHOW SLAVE STATUS\G and notice that everything looks fine, except that "Slave_IO_Running" and "Slave_SQL_Running" both say "No". You don't have skip-slave-start in your my.cnf file. Why did this happen? Baffled, you issue START SLAVE; and get:
ERROR 1200 (HY000): The server is not configured as slave; fix in config file or with CHANGE MASTER TO

This has happened to me numerous times while administering MySQL servers. It does not happen frequently enough to avoid baffling you at first glance, but then you start remembering the last time it happened..

The server_id variable always needs to be set to something when running replication. Check whether it is set by issuing SHOW VARIABLES LIKE 'server_id';. The value should not be "0" or "1" ("1" is normally the default). Also, the master's server_id and the slave's server_id must not be the same.

You can set the server id by issuing SET GLOBAL server_id=<id>; and then issue START SLAVE;
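For reference, a minimal sketch of the whole check-and-fix sequence from the shell. The id value 2 and the root login are just example assumptions here; pick an id that is unique across your replication topology:

# Check the current server_id (0 usually means it was never configured)
mysql -uroot -p -e "SHOW VARIABLES LIKE 'server_id';"

# Set a unique, non-zero id at runtime (2 is just an example value)
mysql -uroot -p -e "SET GLOBAL server_id=2;"

# Kick the replication threads off again and verify
mysql -uroot -p -e "START SLAVE;"
mysql -uroot -p -e "SHOW SLAVE STATUS\G"

Remember that SET GLOBAL is not persistent, so also add server-id=2 under [mysqld] in my.cnf, or you will be back here after the next restart.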

We should be fine now..

Ugh, I know this is fairly easy to find, but in an attempt to keep it fresher in memory, a quick blog post helps with remembering. :)

IPCOP install on headless device with serial console (ALIX 2D13)

In the voyage of choosing an OS to run on this device, I realized I needed an easy way to install OSes onto the CF card. Now, this system does not have any display output other than serial; any boot loader (in my case GRUB) needs to be aware of that.

The easiest way to try different OSes is to have the installation media network mounted, and just PXE boot the device, choosing the OS in pxelinux.cfg (if trying Linux OSes, that is). I set it up as follows (since I have an Ubuntu machine anyway), but a similar approach will work for other distributions/OSes. You will need to install a tftp server, a dhcp server and an http server. The "server" that will provide these three services needs a static IP, and any other dhcp servers on that network need to be turned off. (A consolidated sketch of the server-side commands follows the step list below.)

1) Make sure your ALIX is ready with the CF card inserted, and network cable connected to your first/rightmost interface (closest to the power input)
2) Set up and start a tftp server on the machine you will be using as a server. On Ubuntu, the easiest way is:

apt-get install <package>

I chose the tftpd-hpa package.
3) Set up and start an http server on the machine you will be using as a server. You might already have one running, so check first. I already had lighttpd running.
4) Set up and start a dhcp server on your network, either on your server or, if you already have one, make sure it can serve the options PXE clients need. I used the ISC dhcp server on the same host as above. Package: isc-dhcp-server
5) Download ipcop-<version>-install-netboot.i486.tgz and ipcop-<version>-install-cd.i486.iso to your server host.
6) Untar the first file (the netboot install tarball) into the tftp root, copy the "ipcop-pxe-serial-<version>.model" file to "default" in the same directory, then copy the "pxelinux.0" file from "<tftp-root>/ipcop/<version>/i486/pxelinux.0" to your tftp root
7) Make the ipcop iso available via your http server, either by copying the contents of the iso to a directory in the http root, or by mounting it to a directory in your http root. I symlinked my mount to ipcop_iso in the root.
8) Configure your dhcp server to have your tftp server's ip as "next-server", and make sure to pass the pxelinux.0 filename in the PXE config as well. I added these lines to dhcpd.conf:

next-server <my.servers.ip.address>;
host alix {
  hardware ethernet <mac address of the alix>;
  filename "pxelinux.0";
}

9) Connect the ALIX via serial to a machine with a serial port (maybe your server above), and fire up your serial terminal of choice. I used minicom, and since I don't have a serial port but a USB-to-serial converter, the device I specified was the one created when the converter was plugged in:

minicom --device /dev/ttyUSB0 --baudrate 38400 --8bit --statline

10) Connect power to your ALIX board. You should see output on your serial terminal immediately. Something like this:

PC Engines ALIX.2 v0.99h
640 KB Base Memory
261120 KB Extended Memory.

Press "S" to enter the BIOS configuration, "E" to enable PXE boot, and "Q" to quit, choosing "Y" at the "Do you want to save" prompt.

Your ALIX board should now PXE boot. Follow the install instructions. You will be prompted for the media you want to install from. Choose HTTP and enter the server where you put your ipcop iso, together with the full http path. If your ALIX board did not PXE boot, search the logs and check your DHCP/tftp setup; those are the only services used during the PXE boot itself.
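For reference, here is a rough consolidated sketch of the server-side part (steps 2 through 8) as shell commands on Ubuntu. The tftp root, iso paths and the exact locations of the files inside the netboot tarball are assumptions from my own notes, so double-check them against what you actually get:

# 2) tftp server (tftp root assumed to be /srv/tftp; check TFTP_DIRECTORY in /etc/default/tftpd-hpa)
apt-get install tftpd-hpa

# 3) http server (I already had lighttpd)
apt-get install lighttpd

# 4) dhcp server
apt-get install isc-dhcp-server

# 5+6) unpack the netboot tarball into the tftp root, then put the serial
#      pxelinux config and pxelinux.0 where the firmware will look for them
cd /srv/tftp
tar xzf /path/to/ipcop-<version>-install-netboot.i486.tgz
cp ipcop-pxe-serial-<version>.model default
cp ipcop/<version>/i486/pxelinux.0 .

# 7) expose the install iso over http (I symlinked a loop mount into the http root)
mkdir -p /mnt/ipcop_iso
mount -o loop /path/to/ipcop-<version>-install-cd.i486.iso /mnt/ipcop_iso
ln -s /mnt/ipcop_iso /var/www/ipcop_iso

# 8) add the next-server/filename lines from above to your dhcpd.conf,
#    then restart the dhcp server
service isc-dhcp-server restart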

Now, I had issues with ipcop not wanting to boot after the install. I got "Boot error" straight after the POST sequence. I am running an 8GB CF card, and thought that might be the issue (since the IPCOP docs state it supports cards up to 4GB), but after mucking around a bit I decided to re-install, and that solved it (ugh..). I am guessing something failed when the first install wrote the MBR to the CF card. I can't really rewind now and check the MBR out.. Should have done that in the first place.. I did, however, get a tip (on the ipcops.com forum) to update to the latest ALIX bios (which I already had on the board): tinyBIOS 0.99h.

Home Project: Router/WiFi AP/Server on ALIX 2D13

Background:

So, after trying a couple of different APs, and realizing they were either unstable or lacking important features, I decided to build my own. It's apparently not that easy to find a nice and stable AP/router/switch nowadays. You can't just go and buy one for $100-$150 and expect it to be stable. And by stable I mean not being forced to power-cycle the device every couple of days.

After my last D-Link (DGL-4100) died, I bought myself a corporate grade Cisco/Linksys device (WRV200). It promised a lot of what I wanted from a device like that, and it gave me the option to set up VPN (road-warrior style and site-to-site). FAIL! -> It promised, but did not live up to my expectations. Sure, you were able to do all of what it promised, but if you started to use it for real, the poor router overheated and hung. Since I bought 3 of these to interconnect 3 sites using VPN, I was not very happy.. It took a while to have them returned, going through the procedure of proving they did not live up to the promises made..

I ended up researching alternatives, and found MikroTik (www.mikrotik.com) and their line of hardware, routerboards (www.routerboard.com). I wasn't sure if I would like running a scaled-down and restricted Linux-based router OS where I could not alter the things I wanted, but after trying it for a month or so, I decided to buy one and play around with it.. Now, I thought the RB493UAH that I got delivered was a broken promise too, as it was hanging after just half a day of heavy use. This proved to be a hardware error, and I got a new one sent out without having to return the old one in advance. Great service here! Anyway, the second one has been up and running for 2.5 years without having to reboot. Firewalling, connection tracking, and all that jazz is enabled and I still get 100mbps throughput between all interfaces. Torrenting with 300-500 simultaneous connections from more than one internal host is smooth and I get maximum throughput both ways (in/out) duplex. I also have one wireless card in it, and am adding a new 802.11n card in the next couple of weeks. Anyway, I finally ended up getting 2x RB433UAH, with various wireless cards, and am running VPN between the sites just fine as well. whee..

ALIX 2D13:

Since I have already been playing around with my fit-pc slim for the past 3.5 years, and it's been a fun little device, I decided to try a new router/AP/server project. I bought an ALIX 2D13 board (based on the same AMD Geode LX800 CPU and CS5536 chipset as the fit-pc slim), 8GB of 30MB/s flash, a TP-Link wireless N card and some other knick-knacks to get things going.. I agree it feels kind of old to buy a new device (3.5 years later) with the same old CPU as my old fit-pc slim, but that one has been 100% stable running Ubuntu 8.04 from the start.

I have not decided what OS I will be running, but will mess around with it in the free days around x-mas. Options I am looking at are:

IPCop, IPFire, Alpine Linux, Voyage Linux, OpenWRT, Ubuntu, Linux Mint or FreeBSD.. I've been considering pfSense and m0n0wall but they are just too scaled down. I want to be able to do more than just a router/AP.

So random

The weather here in London is so random I don't know where to begin.. Not to go into too much detail:

I look out the window before I leave for work, and it’s cloudy, but no rain and the streets are dry..

I take the elevator down, pass through the lobby, open the door and am faced with hard rain. I turn around to get a different jacket (one with a hood).

Get up to the apartment, put a hoodie on and get out again. Pass the lobby, open the door - no rain.. I walk to the subway (tube), take the Jubilee line, then change at Green Park for the Piccadilly line. Arrive at Leicester Square 20 minutes later, get out of the tube station and it's pouring down again.. ugh..

Interesting new thread..

The MySQL UC (called the O'Reilly MySQL Conference & Expo nowadays) has concluded.. There was a lot of interesting stuff going on, and it was great to get a refresher.. It was also a useful "see how others do it" exercise.

Something that comes to mind is the new threads added in InnoDB. Too bad they first show up in 5.6.2+. We would have needed them now.. :) But isn't that always the case? We are facing the trouble of grooming a table, removing loads of historical data. It's running InnoDB, so we all know it's kind of expensive to do massive deletes from it, especially if you're not deleting by your PK. We were debating how to set up a groomer job on a table like this, and no matter how we do it, the only feasible way is to delete by PK. Massive deletes will still cause performance degradation when the pages become too dirty. This has of course changed with the new additions to InnoDB in 5.6.2+. Deleting by PK is still the fastest way, and will be even faster when we one day switch to 5.6.x. Read more about the new InnoDB stuff here: InnoDB Page Cleaner Thread
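As a side note, the way such a groomer job usually ends up looking is a loop that deletes small primary-key ranges with a pause in between, roughly like this sketch (the schema, table and column names here are made up for illustration):

#!/bin/bash
# Rough sketch: groom old rows from an InnoDB table in small PK-ranged batches,
# so we never take huge row locks or dirty too many pages in one go.
# Assumes credentials in ~/.my.cnf; mydb.history, id and created are made-up names.
cutoff="2010-01-01"
batch=10000
min=$(mysql -N -e "SELECT MIN(id) FROM mydb.history WHERE created < '$cutoff'")
max=$(mysql -N -e "SELECT MAX(id) FROM mydb.history WHERE created < '$cutoff'")

for ((s=min; s<=max; s+=batch)); do
  mysql -e "DELETE FROM mydb.history WHERE id BETWEEN $s AND $((s+batch-1)) AND created < '$cutoff'"
  sleep 0.5   # give purge/flushing a chance to keep up between batches
done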

MySQL UC 2011 Concluded.

MySQL UC 2011 (Now called O’Reilly MySQL Conference & Expo) concluded yesterday.

It was for sure a very rewarding conference, but one may debate the absence of MySQL itself (Oracle). Normally there is a lot going on, and a lot of announcements from MySQL, and in this case Oracle owns MySQL. I don't know if they are trying to make a statement by not showing up and providing information to the community, or if they are too self-absorbed and want to pull open database people over to Oracle OpenWorld instead. I don't know, and I really hope it's not the latter. They do seem to put a lot of money and effort into continued development of MySQL and InnoDB.

Monty made a bold move in his keynote, and later had to apologize in a public blog post: monty says. Tobias Asplund and I had a decent chat with him at the SkySQL dinner, and there is a lot of new interesting stuff happening in MariaDB. A lot of effort has been put into subquery optimization and the execution plans are now better than ever. Hopefully Oracle will not be too scared to adopt those patches. We will have to try MariaDB here at Marin to see if we get any performance improvements running subqueries. It's not a production test, just to calm those worried souls. :)

I enjoyed Yoshinori Matsunobu's tutorial. There was a lot of refreshing material, but also some new things I have not been able to test before. Good stuff! Check out Matsunobu's blog

I also enjoyed many of the Facebook talks. It's very rewarding to see how they attack problems others never see. The amount of servers and data those guys are handling is insane. Their data-drift spotting stuff was awesome as well.

I am surprised by the number of people looking for DBAs or data architects. Almost every talk or keynote mentioned they were hiring.

All in all, it was a great conference. I am almost looking forward to the next one. :)

Enable iPhone Emoji (smiley) icons

I successfully enabled the Emoji keyboard on my iPhone 4, making it possible to use a lot of different funny icons in SMS, MMS, email, notes, well, wherever you type something on your phone. There are applications in the App Store that will help you do this (for money), but there is one free application with an "easter egg" (hidden feature) which will do it for you. This application is called SpellNumber.

Once you download this app, launch it and enter this secret number: 91929394.59, followed by enter (once). Quit the app (by pressing the home button) and reboot your phone (iOS 4.x caches the available keyboards, so a reboot is needed). Once rebooted, go to Settings -> General -> International -> Keyboards -> Add New Keyboard. You should have a keyboard named Emoji there. Enable it, and whoops: you have Emoji icons. (You touch the globe icon just beside the spacebar to switch between keyboards.)

UPDATE on huge table without index

This is something that keeps coming back no matter where I work. It's something I always do the same way, but it takes a minute to remember how I did it last time. I guess it's time to share something super easy.

Scenario:
You have a huge table with constant activity, containing terabytes of data. You need to update/delete roughly a million random rows. Random selects from the table keep some of the indexes you could use hot. Running one big update that locks rows/the table is not an option, since you have approx. 40-50 inserts/sec into the table. So you need to run smaller batches when updating the table. LIMIT would be nice, but OFFSET is not supported together with LIMIT in UPDATE or DELETE.

Updating using IN and a subquery together with LIMIT is not supported in 5.0.x (unsure about 5.1.x and forward), else this would have been a viable solution.

Selects with specific WHERE clauses might also take longer due to some indexes/keys being too big.

Solution:
Select all the unique/primary keys into a temp table with an auto-increment primary key, and then update the production table using this temporary table as a reference for which rows to update, with a BETWEEN clause on the temporary table's auto_increment field.
You may also enclose everything in a transaction and delete the same BETWEEN range from the temporary reference table. That will help you keep track of what you actually did, in case you need to abort.

Example
Production table:

CREATE TABLE `huge_table` (
`request_id` bigint(20) NOT NULL auto_increment,
`customer_id` int(11) default NULL,
`data_id` bigint(20) default NULL,
`group_id` int(11) default NULL,
`user_id` int(11) default NULL,
`action_id` int(11) default NULL,
`external_id` int(11) default NULL,
`user_data` varchar(2048) default NULL,
`entry` text,
`extra_term` varchar(255) default NULL,
`transaction` text,
`receive_time` datetime default NULL,
`from_id` varchar(32) default NULL,
`url` varchar(2048) default NULL,
`from_ip` varchar(2048) default NULL,
`useragent` varchar(2048) default NULL,
`tz` int(11) default NULL,
`cvalue` varchar(255) default NULL,
`rvalue` varchar(255) default NULL,
`uid` varchar(40) default NULL,
PRIMARY KEY (`request_id`),
KEY `k1` (`receive_time`,`action_id`,`url`,`from_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1047142423 DEFAULT CHARSET=utf8

You need to update action_id = 100 on approx 1.5m rows which have receive_time between '2010-08-01 00:00:00' and '2010-08-31 23:59:59' where from_id equals 'q7b4x5aa0303erer'.

temporary table:

CREATE TABLE `tmp_ids` (
`id` int(11) NOT NULL auto_increment,
`prod_id` bigint(20) default NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=15424160 DEFAULT CHARSET=utf8

Fill the temporary table with the id’s you need to update in the production table:

INSERT INTO tmp_ids (prod_id) SELECT request_id FROM huge_table USE INDEX (k1) WHERE from_id='q7b4x5aa0303erer' AND receive_time BETWEEN '2010-08-01 00:00:00' AND '2010-08-31 23:59:59';

Then just iterate this in a bash script or your favorite scripting language (increasing the BETWEEN values of course):

UPDATE huge_table ht JOIN tmp_ids ti ON ht.request_id=ti.prod_id SET ht.action_id=100 WHERE ti.id BETWEEN x AND y;
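For a single batch, the transactional variant mentioned above (updating and then deleting the same range from the reference table) would look roughly like this; x and y are the BETWEEN bounds for that batch, and the user/schema names are placeholders:

mysql -uyour_user -p -Dyour_schema -e "
START TRANSACTION;
UPDATE huge_table ht JOIN tmp_ids ti ON ht.request_id=ti.prod_id
  SET ht.action_id=100 WHERE ti.id BETWEEN x AND y;
DELETE FROM tmp_ids WHERE id BETWEEN x AND y;
COMMIT;"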

example bash script:

#!/bin/bash

# Config params
user="username"
pass="password"
host="hostname"
db="schema_name"
tmptbl="tmp_table"        # reference table (tmp_ids in the example above)
livetbl="live_table"      # production table (huge_table in the example above)
tmpcol="live_id"          # column in the reference table holding the live table's PK
livecol="request_id"      # PK column in the live table
tmpkey="id"               # auto_increment key in the reference table
setval="action_id=100"    # SET clause for the update
waittime="0.5"

# Print usage function
function usage() {
echo "Usage: $(basename $0) <rows>"
echo "rows - number of rows to process in one go."
exit 0
}

# Since we need at least one argument to continue, check that it is supplied
[ -z "$1" ] && usage

# get that argument
rows=$1
# Check which is the first key we will use in the temp table
s=`mysql -N -u$user -p$pass -h$host $db -e"select min($tmpkey) from $db.$tmptbl;"`
# Set the latter between value
let b=$s+$rows
# Count how many rows we have to update so we know how long to loop
count=`mysql -N -u$user -p$pass -h$host $db -e"select count(*) from $db.$tmptbl;"`

# Print starting line
echo "Starting up with updates from id: $s to $b, performing $rows at one go.."

# loop till we are done
while [ 1 ];
do

# are we done? If so, stop looping
if (($count <= 0));
then
break
fi

# Run actual db updates
echo -n "updating $livetbl between $s and $b.."
mysql -N -u$user -p$pass -h$host $db -e"update $livetbl td join $tmptbl i on td.$livecol=i.$tmpcol set $setval where i.$tmpkey between $s and $b"
echo -n " deleting in $tmptbl.."
mysql -N -u$user -p$pass -h$host $db -e"delete from $db.$tmptbl where $tmpkey between $s and $b"
echo -n " Done!"

# set between rows for the next batch
let s=$s+$rows
let b=$b+$rows
let count=$count-$rows

# wait if there are more records to update
if (($count > 0));
then
echo " ..waiting.. still $count records to update";
sleep $waittime
fi
done

# All done
echo
echo "Script done. No more records to update"
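Save the script (as, say, batch_update.sh; the name is just for the example), fill in the config parameters at the top to match your tables, and run it with the batch size as the only argument:

chmod +x batch_update.sh
./batch_update.sh 10000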

iPhone time zone issue – Resolved

So, ever since I got my iPhone 4, I have had issues where my calendar has been showing and alerting me about my appointments one hour ahead.

I could enter an appointment today at 5pm on my phone, choosing to alert 30 minutes ahead of the appointment – The result would be that the phone would alert me at 3:30pm instead of 4:30pm. Even though it showed the correct time in the calendar on the phone.

On a side note, both MobileMe and our company's Exchange would save the appointment at 4pm. Now, this has to do with my time zone settings. I was playing around with those, but saw no real change.. It was always displaying one hour off from what I entered. Also, if I entered an appointment at the right time in any of the above calendars, the iPhone calendar would show it one hour in advance.

I booked an appointment at the Genius Bar at Apple, and they were as stunned as me.. But playing around with the phone together, we found a second time zone setting under Settings -> Mail, Contacts, Calendars called Time Zone Support. Apparently, if you want time zones to work when you travel (to have your appointments at the right time wherever you are located), you have to turn Time Zone Support OFF! (?). This will shift the appointments to the local time as you move between time zones.

I guess I might not be the only one with these issues.. So sharing.. :)

Oracle.. And so it begins..

Oracle sues Google over Android and Java

Ever since Oracle announced the acquisition of Sun Microsystems, I have been thinking something like this would happen. Oracle is not exactly known for contributing to the Open Source community, even if they claim they are committed to helping and enhancing it. It feels like their ultimate goal is to close-source everything. I can't understand the nature of such an organization. Isn't shared knowledge more knowledge? We all want to evolve, and what better way than not reinventing the wheel over and over again. Soon Oracle will want you to pay a license fee for every execution of your Java application (per processor) :)

I am dreading what they will do with MySQL when they get there.. Same licensing model as Oracle DB? :)

Just had to comment on this, as I think it is ridiculous and definitely not adding anything to the community.
