Friday 19 December 2014

MySQL Plugins: Installation, Uninstallation, and Types of Installation.

In this post I would like to walk you through installing plugins, uninstalling plugins, and the various ways plugins can be installed.

MySQL gives its users great flexibility to control and configure their DB servers based on their own requirements.

MySQL 5.1 and up supports a plugin API that enables creation of server components. Plugins can be loaded at server startup, or loaded and unloaded at runtime without restarting the server.

Go through MySQL Plugin API for more information.

PREREQUISITE : Make sure that you have copied the required plugin libraries to the plugin directory and added the corresponding plugin configuration parameters to the configuration file.

Coming to the installation types, plugins can be installed in the following two ways :

1)  Plugins installed with the INSTALL PLUGIN statement, which is a permanent method of installing a plugin.

    A plugin that is located in a plugin library file can be loaded at run time with the INSTALL PLUGIN statement. The statement also registers the plugin in the mysql.plugin table to cause the server to load it on subsequent restarts. For this reason, INSTALL PLUGIN requires the INSERT privilege for the mysql.plugin table.

    EX : mysql> INSTALL PLUGIN plugin_name SONAME 'shared_library_name';
   
    This is the statement used to install a plugin. As mentioned in the "PREREQUISITE" section above, make sure that the plugin library is placed in the plugin directory specified in the configuration file.

    Log in to the DB instance and run the above statement. For example, if you are installing an LDAP authentication plugin, place "auth_ldap.so" in the plugin directory and execute the following statement.

        mysql> INSTALL PLUGIN auth_ldap SONAME 'auth_ldap.so';
    
    INSTALL PLUGIN also registers the plugin by adding a line that indicates the plugin name and library file name to the mysql.plugin table. At server startup, the server loads and initializes any plugin that is listed in the mysql.plugin table.

    Once the plugin installation is done, you can go ahead and add the plugin-specific configuration parameters to the configuration file.

    To check the list of installed plugins, run "SHOW PLUGINS;"; it will display the list of plugins running on the instance.
   
    For more information go through Installing Plugins.

2) The second way is to load the plugin at MySQL instance startup itself using the --plugin-load startup option.

     A plugin that is located in a plugin library file can be loaded at server startup with the --plugin-load option. Normally, the server enables the plugin at startup, although this can be changed with the --plugin_name option.

    The option value is a semicolon-separated list of name=plugin_library pairs. Each name is the name of the plugin, and plugin_library is the name of the shared library that contains the plugin code. If a plugin library is named without any preceding plugin name, the server loads all plugins in the library. Each library file must be located in the directory named by the plugin_dir system variable.
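The option-value format just described can be sketched with a small parser. This is purely illustrative Python (the function name and return shape are my own), not the server's actual parsing code:

```python
def parse_plugin_load(value):
    """Parse a --plugin-load value: a semicolon-separated list of
    name=plugin_library pairs.

    A bare library name (no '=') means "load all plugins in that
    library", represented here with name=None.
    """
    entries = []
    for item in value.split(";"):
        item = item.strip()
        if not item:
            continue  # tolerate a trailing semicolon
        if "=" in item:
            name, library = item.split("=", 1)
            entries.append((name, library))
        else:
            entries.append((None, item))  # whole library: load every plugin in it
    return entries

# One named plugin plus one whole library:
print(parse_plugin_load("auth_ldap=auth_ldap.so;validate_password.so"))
```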

    This option does not register any plugin in the mysql.plugin table. For subsequent restarts, the server loads the plugin again only if --plugin-load is given again. That is, this option effects a one-time installation that persists only for one server invocation.

    If the server knows about a plugin when it starts (for example, because the plugin is named using a --plugin-load option or registered in the mysql.plugin table), the server loads and enables the plugin by default. It is possible to control activation for such a plugin using a --plugin_name[=value] startup option named after the plugin.

    EX : If you are installing an LDAP authentication plugin, place "auth_ldap.so" in the plugin directory and add the "--plugin-load=auth_ldap.so" configuration parameter to the configuration file.

    For more information go through Installing Plugins.

Uninstalling Plugins:

    A plugin known to the server can be uninstalled to disable it at run time with the UNINSTALL PLUGIN statement. The statement unloads the plugin and removes it from the mysql.plugin table if it is registered there. For this reason, the UNINSTALL PLUGIN statement requires the DELETE privilege for the mysql.plugin table. With the plugin no longer registered in the table, the server will not load the plugin automatically on subsequent restarts.

    UNINSTALL PLUGIN can unload plugins regardless of whether they were loaded with INSTALL PLUGIN or --plugin-load.

    UNINSTALL PLUGIN is subject to these exceptions:

            It cannot unload plugins that are built in to the server. These can be identified as those that have a library name of NULL in the output from INFORMATION_SCHEMA.PLUGINS or SHOW PLUGINS.

            It cannot unload plugins for which the server was started with --plugin_name=FORCE_PLUS_PERMANENT, which prevents plugin unloading at runtime. These can be identified from the LOAD_OPTION column of the INFORMATION_SCHEMA.PLUGINS table.
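The two exceptions above can be summarized as a small check. The sketch below is illustrative Python; the dictionary keys mimic the PLUGIN_LIBRARY and LOAD_OPTION columns of INFORMATION_SCHEMA.PLUGINS, and the function name is my own:

```python
def can_uninstall(plugin_row):
    """Mirror the two UNINSTALL PLUGIN exceptions.

    plugin_row imitates a row from INFORMATION_SCHEMA.PLUGINS:
    a NULL (None) PLUGIN_LIBRARY marks a built-in plugin, and
    LOAD_OPTION == 'FORCE_PLUS_PERMANENT' blocks runtime unloading.
    """
    if plugin_row["PLUGIN_LIBRARY"] is None:
        return False  # built in to the server
    if plugin_row["LOAD_OPTION"] == "FORCE_PLUS_PERMANENT":
        return False  # pinned at startup, cannot be unloaded at runtime
    return True

print(can_uninstall({"PLUGIN_LIBRARY": "auth_ldap.so", "LOAD_OPTION": "ON"}))
```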

    For more information go through Installing & Uninstalling Plugins.



Friday 21 November 2014

MySQL/Percona Custom Installation using tar.gz (precompiled binaries) files / Setting up multiple versions of MySQL DB instances on a single server.


Through this post, I would like to walk you through MySQL custom installation using tar.gz (precompiled binaries) files.

This post will give you an idea of how to do a custom installation of MySQL, Percona, or MariaDB.

It will also help you set up and configure multiple versions of MySQL, Percona, or MariaDB servers on a single server.

Follow the steps listed below to set up either a single version or multiple versions of MySQL on one server.

STEP1 : Create a group and a user for MySQL by following these steps; skip this step if you have already created a user and group for MySQL.

For easy understanding, I am choosing mysqldba as the group and mysql as the user.
A) groupadd mysqldba
B) useradd mysql -d "/customlocationpath" -g mysqldba
EX : useradd mysql -d "/home/users/mysql" -g mysqldba

NOTE : Based on your operating system, standardization, and needs, you may have to take care of certain extra steps in STEP1.

STEP2 : Download the required version of software based on your needs.

STEP3 : Choose and create a directory for software and for data directory.

A) For software installation I am choosing “/opt/product/mysql”.
  mkdir -p /opt/product/mysql
  chown -R mysql:mysqldba /opt/product/mysql
  chmod -R 750 /opt/product/mysql

B) For data directory I am going with “/data/mysql”.
  mkdir -p /data/mysql
  chown -R mysql:mysqldba /data/mysql
  chmod -R 700 /data/mysql

STEP4 : Software installation. Make sure you run the following steps as the mysql user.

NOTE : To demonstrate the installation, I am going with MySQL 5.6.21; the following steps (commands) will change based on the MySQL version.

cd /opt/product/mysql
tar -zxvf "/home/trimurthy/Downloads/mysql-5.6.21-linux-glibc2.5-x86_64.tar.gz"
mv mysql-5.6.21-linux-glibc2.5-x86_64 mysql5621

Now the MySQL 5.6.21 base directory is /opt/product/mysql/mysql5621.

STEP5 : Prepare a configuration file based on the chosen software/data directories.

NOTE : Add the following extra configuration parameters to your MySQL config file.

[mysqld_multi]
mysqld = /opt/product/mysql/mysql5621/bin/mysqld_safe
mysqladmin=/opt/product/mysql/mysql5621/bin/mysqladmin
user=usertoshutdown   # this user is used to shut down the mysql server

In your configuration file, make sure you use [mysqldNNNN] (where NNNN is the port number) instead of just [mysqld], because we will be using mysqld_multi to start and stop the MySQL DB instances.
EX : if your port number is 3326, replace [mysqld] with [mysqld3326] in your configuration file.
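The group-naming convention can be sketched as a tiny helper (illustrative Python only; mysqld_multi itself performs this group lookup internally):

```python
def multi_section_name(port):
    """mysqld_multi looks up a [mysqldNNNN] group, where NNNN is the
    instance number. This post's convention is to use the port number
    as that suffix, so instance 3326 lives under [mysqld3326]."""
    return "[mysqld%d]" % port

print(multi_section_name(3326))
```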

STEP6 : MySQL Instance installation/DB installation.

NOTE : Before starting the installation process, make sure you have created all the required directories mentioned in the configuration file.

Set the PATH to the bin directory of the chosen MySQL version.
EX : export PATH=/opt/product/mysql/mysql5621/bin:$PATH

Install/configure the Instance with the following command.
mysql_install_db --defaults-file=/data/mysql/configfiles/mysql_3326.cnf --user=mysql --basedir=/opt/product/mysql/mysql5621/ --datadir=/data/mysql/var/lib/mysql/

Make sure the installation/configuration of the instance succeeds.

STEP7 : Start the MySQL instance by using the following commands.

NOTE : make sure you have set the path to the right version of MySQL.

To start MySQL instance.
mysqld_multi --defaults-file=/data/mysql/configfiles/mysql_3326.cnf start 3326

Log in to the DB, clean up the anonymous users, and set passwords for the root users, or run mysql_secure_installation.
Go through Link for more information.


Use the following commands to check the status of, start, and stop the MySQL instance.
mysqld_multi --defaults-file=/data/mysql/configfiles/mysql_3326.cnf report 3326
mysqld_multi --defaults-file=/data/mysql/configfiles/mysql_3326.cnf start 3326
mysqld_multi --defaults-file=/data/mysql/configfiles/mysql_3326.cnf stop 3326

Please feel free to comment/reach me if you have any questions regarding this installation.


Saturday 13 September 2014

Variables to take care of while using InnoDB Persistent Statistics

As part of this post I would like to share more information about how InnoDB persistent statistics work, which other variables we as DBAs should take care of while using persistent statistics, a real-time problem I came across, and its solution.

PROBLEM : Query execution taking around 80% more time on MySQL 5.6 compared to  5.1.

REASON : We recently upgraded a legacy DB instance from MySQL 5.1 to 5.6 (done in two phases: 5.1 -> 5.5 -> 5.6). We took care of most of the variable tuning required as part of the upgrade. Later we learned that certain queries were taking around 80% more time than usual; we started digging into the issue and found that it was because of "InnoDB Persistent Statistics".

Before going through the problem in detail, I would first like to go through how exactly InnoDB persistent statistics work.

InnoDB has always computed statistics for each InnoDB table to help the optimizer find the most efficient query execution plan. Now you can make these statistics persistent, so that the index usage and join order for a particular query is less likely to change. This feature is on by default, enabled by the configuration option innodb_stats_persistent.

If "innodb_stats_persistent" is not enabled, InnoDB statistics are recalculated on any of the following events:

* When the table is opened

* When the table has changed a lot (10% of the table has been updated/deleted or inserted)

* When ANALYZE TABLE is run

* When SHOW TABLE STATUS or SHOW INDEX FROM is run

* When InnoDB Monitor is turned ON

But if "innodb_stats_persistent" is enabled, InnoDB does not recalculate the statistics automatically; it does so only when ANALYZE TABLE is run. This means that the same query plan will always be chosen, even after MySQL is restarted. A query plan using the persistent statistics can only change after ANALYZE TABLE has been run (manually). This also means that the user and/or DBA is responsible for running ANALYZE TABLE regularly if the table data has changed a lot, unless "innodb_stats_auto_recalc" is ON, in which case a recalculation is triggered once the data changes by more than a certain amount.

If "innodb_stats_persistent" is enabled, then the following two variables play a major role, and every DBA must be aware of them.

1) innodb_stats_persistent_sample_pages : The number of index pages to sample when estimating cardinality and other statistics for an indexed column, such as those calculated by ANALYZE TABLE. Increasing the value improves the accuracy of the index statistics, which can improve the query execution plan, but it will take more time (as the number of pages increases, the time to complete ANALYZE TABLE also increases; it is directly proportional, and it may also cause MySQL to do more I/O operations).

2) innodb_stats_auto_recalc : Causes InnoDB to automatically recalculate persistent statistics after the data in a table is changed substantially. The threshold value is currently 10% of the rows in the table. This setting applies to tables created when the innodb_stats_persistent option is enabled, or where the clause STATS_PERSISTENT=1 is enabled by a CREATE TABLE or ALTER TABLE statement.

For bigger tables with more indexes, the default value of "innodb_stats_persistent_sample_pages=20" may not give good execution plans; in that case, increasing this value may produce better query execution plans. But be aware that it will then take more time to calculate the statistics, and ANALYZE TABLE should be run manually when the table data changes by more than 10%, as mentioned above.
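The recalculation rule described above can be sketched as follows. This is a toy model of the behaviour, not InnoDB code; the function and parameter names are my own:

```python
def needs_auto_recalc(rows_in_table, rows_changed_since_analyze, auto_recalc=True):
    """With innodb_stats_auto_recalc ON, persistent statistics are
    recalculated once roughly 10% of the table's rows have changed;
    with it OFF, only a manual ANALYZE TABLE refreshes them."""
    if not auto_recalc:
        return False  # DBA must run ANALYZE TABLE by hand
    return rows_changed_since_analyze > 0.10 * rows_in_table

print(needs_auto_recalc(1000000, 50000))   # only 5% changed
print(needs_auto_recalc(1000000, 150000))  # 15% changed, past the threshold
```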

SOLUTION : The problem was that the index statistics were not good enough to generate a better query execution plan; that is why some queries involving bigger tables were taking around 80% more time than usual. Although we ran OPTIMIZE TABLE at the database level as part of the upgrade activity, to defragment tables and refresh the index statistics, it did not help. Later, for the bigger tables, we tried various values of "innodb_stats_persistent_sample_pages", confirmed that this variable was the cause, and came up with a good number of pages based on table size. In the end, query execution time came back to normal, even a bit better than on 5.1.

For more information go through : http://dev.mysql.com/doc/refman/5.6/en/innodb-persistent-stats.html


 

Monday 23 June 2014

How the MySQL optimizer decides which filesort algorithm to use while processing an ORDER BY clause.

Today, while going through the variable "max_length_for_sort_data", I learned some good information about how ORDER BY works internally and which kind of algorithm is used to process it. I found it interesting, so I decided to share this information with you as well.

As of MySQL 5.6, there are two variations of the filesort algorithm, as explained below :

    1) The original filesort algorithm
    2) The modified filesort algorithm

There are a few differences between the original and modified filesort algorithms, such as how they retrieve and process the data from the table.

I would like to focus mainly on the following :
    a) The "original filesort" algorithm retrieves the row ID and the columns specified in the ORDER BY clause, puts them in the sort buffer, sorts the data there, and then retrieves the data for the other columns specified in the SELECT statement from the table again. So it reads the data from the table twice.
    b) The "modified filesort" algorithm, on the other hand, retrieves the row ID and all the columns specified in the SELECT statement from the table into the sort buffer, then sorts the data based on the columns specified in the ORDER BY clause and sends the output, so the table is read only once.

Suppose that a table t1 has four VARCHAR columns a, b, c, and d and that the optimizer uses filesort for this query:

SELECT * FROM t1 ORDER BY a, b;

The query sorts by a and b, but returns all columns, so the columns referenced by the query are a, b, c, and d. Depending on which filesort algorithm the optimizer chooses, the query executes as follows:

For the original algorithm, sort buffer tuples have these contents: (fixed size a value, fixed size b value, row ID into t1)
The optimizer sorts on the fixed size values. After sorting, the optimizer reads the tuples in order and uses the row ID in each tuple to read rows from t1 to obtain the select list column values.

For the modified algorithm, sort buffer tuples have these contents: (fixed size a value, fixed size b value, a value, b value, c value, d value)
The optimizer sorts on the fixed size values. After sorting, the optimizer reads the tuples in order and uses the values for a, b, c, and d to obtain the select list column values without reading t1 again.

So how does MySQL decide which filesort algorithm to use?

Based on the value of "max_length_for_sort_data", MySQL decides which filesort algorithm to use. Let's go through how it works in more detail.

For a given query, the optimizer first calculates the row_size, which includes the row ID, pointers, and the data for the columns specified in the SELECT statement's select list and ORDER BY clause. If row_size > max_length_for_sort_data, the optimizer uses the "original filesort" algorithm; if row_size <= max_length_for_sort_data, it uses the "modified filesort" algorithm.
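That decision rule can be sketched as follows (illustrative Python; the real row_size computation also accounts for per-column length bytes and pointers, which this sketch takes as a given number):

```python
def filesort_algorithm(row_size, max_length_for_sort_data=1024):
    """row_size covers the row ID, pointers, and the data for every
    column named in the SELECT list and the ORDER BY clause.
    The default for max_length_for_sort_data in MySQL 5.6 is 1024."""
    if row_size > max_length_for_sort_data:
        return "original"  # <sort_key, rowid>: re-reads the table after sorting
    return "modified"      # <sort_key, additional_fields>: single table read

# The post's table t1 works out to a 171-byte row size:
print(filesort_algorithm(171, 128))  # CASE #1 below
print(filesort_algorithm(171, 171))  # CASE #2 below
```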

The following is a detailed example that illustrates how exactly it works :

mysql> create table t1 (a varchar(10), b varchar(10), c varchar(10), d varchar(10));
Query OK, 0 rows affected (0.00 sec)

mysql> insert into t1 values('1','1','1','1'),('2','2','2','2'),('3','3','3','3');
Query OK, 3 rows affected (0.17 sec)
Records: 3  Duplicates: 0  Warnings: 0


mysql> SET OPTIMIZER_TRACE="enabled=on",END_MARKERS_IN_JSON=on;
Query OK, 0 rows affected (0.00 sec)

mysql>  SET OPTIMIZER_TRACE_MAX_MEM_SIZE=90000000;
Query OK, 0 rows affected (0.00 sec)

CASE #1 :

mysql> set max_length_for_sort_data=128;
Query OK, 0 rows affected (0.00 sec)

NOTE : I am reducing the value of "max_length_for_sort_data" from its default (1024) to 128 for testing.

mysql>  select * from t1 order by a,b;
+------+------+------+------+
| a    | b    | c    | d    |
+------+------+------+------+
| 1    | 1    | 1    | 1    |
| 2    | 2    | 2    | 2    |
| 3    | 3    | 3    | 3    |
+------+------+------+------+

mysql> select * from information_schema.optimizer_trace\G
*************************** 1. row ***************************
                            QUERY: select * from t1 order by a,b
                            TRACE: {
  "steps": [
    {
      "join_preparation": {
        "select#": 1,
        "steps": [
          {
            "expanded_query": "/* select#1 */ select `t1`.`a` AS `a`,`t1`.`b` AS `b`,`t1`.`c` AS `c`,`t1`.`d` AS `d` from `t1` order by `t1`.`a`,`t1`.`b`"
          }
        ] /* steps */
      } /* join_preparation */
    },
    {
      "join_optimization": {
        "select#": 1,
        "steps": [
          {
            "table_dependencies": [
              {
                "table": "`t1`",
                "row_may_be_null": false,
                "map_bit": 0,
                "depends_on_map_bits": [
                ] /* depends_on_map_bits */
              }
            ] /* table_dependencies */
          },
          {
            "rows_estimation": [
              {
                "table": "`t1`",
                "table_scan": {
                  "rows": 3,
                  "cost": 1
                } /* table_scan */
              }
            ] /* rows_estimation */
          },
          {
            "considered_execution_plans": [
              {
                "plan_prefix": [
                ] /* plan_prefix */,
                "table": "`t1`",
                "best_access_path": {
                  "considered_access_paths": [
                    {
                      "access_type": "scan",
                      "rows": 3,
                      "cost": 1.6,
                      "chosen": true,
                      "use_tmp_table": true
                    }
                  ] /* considered_access_paths */
                } /* best_access_path */,
                "cost_for_plan": 1.6,
                "rows_for_plan": 3,
                "sort_cost": 3,
                "new_cost_for_plan": 4.6,
                "chosen": true
              }
            ] /* considered_execution_plans */
          },
          {
            "attaching_conditions_to_tables": {
              "original_condition": null,
              "attached_conditions_computation": [
              ] /* attached_conditions_computation */,
              "attached_conditions_summary": [
                {
                  "table": "`t1`",
                  "attached": null
                }
              ] /* attached_conditions_summary */
            } /* attaching_conditions_to_tables */
          },
          {
            "clause_processing": {
              "clause": "ORDER BY",
              "original_clause": "`t1`.`a`,`t1`.`b`",
              "items": [
                {
                  "item": "`t1`.`a`"
                },
                {
                  "item": "`t1`.`b`"
                }
              ] /* items */,
              "resulting_clause_is_simple": true,
              "resulting_clause": "`t1`.`a`,`t1`.`b`"
            } /* clause_processing */
          },
          {
            "refine_plan": [
              {
                "table": "`t1`",
                "access_type": "table_scan"
              }
            ] /* refine_plan */
          }
        ] /* steps */
      } /* join_optimization */
    },
    {
      "join_execution": {
        "select#": 1,
        "steps": [
          {
            "filesort_information": [
              {
                "direction": "asc",
                "table": "`t1`",
                "field": "a"
              },
              {
                "direction": "asc",
                "table": "`t1`",
                "field": "b"
              }
            ] /* filesort_information */,
            "filesort_priority_queue_optimization": {
              "usable": false,
              "cause": "not applicable (no LIMIT)"
            } /* filesort_priority_queue_optimization */,
            "filesort_execution": [
            ] /* filesort_execution */,
            "filesort_summary": {
              "rows": 3,
              "examined_rows": 3,
              "number_of_tmp_files": 0,
              "sort_buffer_size": 63224,
              "sort_mode": "<sort_key, rowid>"
            } /* filesort_summary */
          }
        ] /* steps */
      } /* join_execution */
    }
  ] /* steps */
}
MISSING_BYTES_BEYOND_MAX_MEM_SIZE: 0
          INSUFFICIENT_PRIVILEGES: 0
1 row in set (0.01 sec)

From the above output, if we look at the "filesort_summary" section, "sort_mode": "<sort_key, rowid>" indicates that the filesort algorithm used is the "original filesort".

I tried different values for "max_length_for_sort_data", and the switch finally happened at "max_length_for_sort_data=171", because the row length is 171 bytes (it did not work for me even at 170, and it worked in all cases where the value was >= 171).

CASE #2 :

mysql> set max_length_for_sort_data=171;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from t1 order by a,b;
+------+------+------+------+
| a    | b    | c    | d    |
+------+------+------+------+
| 1    | 1    | 1    | 1    |
| 2    | 2    | 2    | 2    |
| 3    | 3    | 3    | 3    |
+------+------+------+------+
3 rows in set (0.00 sec)

mysql> select * from information_schema.optimizer_trace\G
*************************** 1. row ***************************
                            QUERY: select * from t1 order by a,b
                            TRACE: {
  "steps": [
    {
      "join_preparation": {
        "select#": 1,
        "steps": [
          {
            "expanded_query": "/* select#1 */ select `t1`.`a` AS `a`,`t1`.`b` AS `b`,`t1`.`c` AS `c`,`t1`.`d` AS `d` from `t1` order by `t1`.`a`,`t1`.`b`"
          }
        ] /* steps */
      } /* join_preparation */
    },
    {
      "join_optimization": {
        "select#": 1,
        "steps": [
          {
            "table_dependencies": [
              {
                "table": "`t1`",
                "row_may_be_null": false,
                "map_bit": 0,
                "depends_on_map_bits": [
                ] /* depends_on_map_bits */
              }
            ] /* table_dependencies */
          },
          {
            "rows_estimation": [
              {
                "table": "`t1`",
                "table_scan": {
                  "rows": 3,
                  "cost": 1
                } /* table_scan */
              }
            ] /* rows_estimation */
          },
          {
            "considered_execution_plans": [
              {
                "plan_prefix": [
                ] /* plan_prefix */,
                "table": "`t1`",
                "best_access_path": {
                  "considered_access_paths": [
                    {
                      "access_type": "scan",
                      "rows": 3,
                      "cost": 1.6,
                      "chosen": true,
                      "use_tmp_table": true
                    }
                  ] /* considered_access_paths */
                } /* best_access_path */,
                "cost_for_plan": 1.6,
                "rows_for_plan": 3,
                "sort_cost": 3,
                "new_cost_for_plan": 4.6,
                "chosen": true
              }
            ] /* considered_execution_plans */
          },
          {
            "attaching_conditions_to_tables": {
              "original_condition": null,
              "attached_conditions_computation": [
              ] /* attached_conditions_computation */,
              "attached_conditions_summary": [
                {
                  "table": "`t1`",
                  "attached": null
                }
              ] /* attached_conditions_summary */
            } /* attaching_conditions_to_tables */
          },
          {
            "clause_processing": {
              "clause": "ORDER BY",
              "original_clause": "`t1`.`a`,`t1`.`b`",
              "items": [
                {
                  "item": "`t1`.`a`"
                },
                {
                  "item": "`t1`.`b`"
                }
              ] /* items */,
              "resulting_clause_is_simple": true,
              "resulting_clause": "`t1`.`a`,`t1`.`b`"
            } /* clause_processing */
          },
          {
            "refine_plan": [
              {
                "table": "`t1`",
                "access_type": "table_scan"
              }
            ] /* refine_plan */
          }
        ] /* steps */
      } /* join_optimization */
    },
    {
      "join_execution": {
        "select#": 1,
        "steps": [
          {
            "filesort_information": [
              {
                "direction": "asc",
                "table": "`t1`",
                "field": "a"
              },
              {
                "direction": "asc",
                "table": "`t1`",
                "field": "b"
              }
            ] /* filesort_information */,
            "filesort_priority_queue_optimization": {
              "usable": false,
              "cause": "not applicable (no LIMIT)"
            } /* filesort_priority_queue_optimization */,
            "filesort_execution": [
            ] /* filesort_execution */,
            "filesort_summary": {
              "rows": 3,
              "examined_rows": 3,
              "number_of_tmp_files": 0,
              "sort_buffer_size": 202091,
              "sort_mode": "<sort_key, additional_fields>"
            } /* filesort_summary */
          }
        ] /* steps */
      } /* join_execution */
    }
  ] /* steps */
}
MISSING_BYTES_BEYOND_MAX_MEM_SIZE: 0
          INSUFFICIENT_PRIVILEGES: 0
1 row in set (0.01 sec)

From the above output, if we look at the "filesort_summary" section, "sort_mode": "<sort_key, additional_fields>" indicates that the filesort algorithm used is the "modified filesort". Also observe that the sort buffer size used is higher (202091) for the "modified filesort" algorithm and lower (63224) for the "original filesort" algorithm; this is because the "modified filesort" algorithm keeps all the SELECT columns' data in the sort buffer.

CONCLUSION : For better execution of ORDER BY queries, we may need to increase this value to some extent based on requirements; before changing it, do some testing and only then proceed. In most cases, not all of the columns in the SELECT statement's select list are specified in the ORDER BY clause; if that is the case, leaving this value at its default might give better performance than changing it.

For information about the optimizer trace, see MySQL Internals: Tracing the Optimizer



Tuesday 3 June 2014

How MySQL Manages InnoDB Buffer Pool Internally

InnoDB manages the pool as a list, using a variation of the least recently used (LRU) algorithm. When room is needed to add a new block to the pool, InnoDB evicts the least recently used block and adds the new block to the middle of the list. This “midpoint insertion strategy” treats the list as two sublists:

    At the head, a sublist of “new” (or “young”) blocks that were accessed recently.

    At the tail, a sublist of “old” blocks that were accessed less recently.


This algorithm keeps blocks that are heavily used by queries in the new sublist. The old sublist contains less-used blocks; these blocks are candidates for eviction.

The LRU algorithm operates as follows by default:

    3/8 of the buffer pool is devoted to the old sublist.

    The midpoint of the list is the boundary where the tail of the new sublist meets the head of the old sublist.

    When InnoDB reads a block into the buffer pool, it initially inserts it at the midpoint (the head of the old sublist). A block can be read in because it is required for a user-specified operation such as an SQL query, or as part of a read-ahead operation performed automatically by InnoDB.

    Accessing a block in the old sublist makes it “young”, moving it to the head of the buffer pool (the head of the new sublist). If the block was read in because it was required, the first access occurs immediately and the block is made young. If the block was read in due to read-ahead, the first access does not occur immediately (and might not occur at all before the block is evicted).

    As the database operates, blocks in the buffer pool that are not accessed “age” by moving toward the tail of the list. Blocks in both the new and old sublists age as other blocks are made new. Blocks in the old sublist also age as blocks are inserted at the midpoint. Eventually, a block that remains unused for long enough reaches the tail of the old sublist and is evicted.

By default, blocks read by queries immediately move into the new sublist, meaning they will stay in the buffer pool for a long time. A table scan (such as performed for a mysqldump operation, or a SELECT statement with no WHERE clause) can bring a large amount of data into the buffer pool and evict an equivalent amount of older data, even if the new data is never used again. Similarly, blocks that are loaded by the read-ahead background thread and then accessed only once move to the head of the new list. These situations can push frequently used blocks to the old sublist, where they become subject to eviction.
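The midpoint-insertion behaviour described above can be simulated with a short toy model. This is illustrative Python only: real InnoDB also honours innodb_old_blocks_pct and innodb_old_blocks_time, which this sketch ignores, and in the real list blocks in both sublists age as others are made new:

```python
class MidpointLRU:
    """Toy model of InnoDB's split LRU: index 0 is the head (newest),
    the last index is the tail (the eviction candidate). New reads go
    to the midpoint; 3/8 of the capacity forms the old sublist."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = []  # head ... tail

    @property
    def midpoint(self):
        # boundary between the new sublist (5/8) and the old sublist (3/8)
        return (self.capacity * 5) // 8

    def read(self, block):
        """A block read into the pool enters at the head of the old sublist."""
        if len(self.blocks) >= self.capacity:
            self.blocks.pop()  # evict the tail of the old sublist
        self.blocks.insert(min(self.midpoint, len(self.blocks)), block)

    def access(self, block):
        """Accessing a block makes it young: move it to the head of the pool."""
        self.blocks.remove(block)
        self.blocks.insert(0, block)
```

For example, reading eight blocks into a capacity-8 pool leaves the later reads parked in the old sublist; touching one of them with access() promotes it to the head, while an untouched old block drifts to the tail and is the first evicted by the next read.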

For more information go through http://dev.mysql.com/doc/refman/5.6/en/innodb-buffer-pool.html




Wednesday 28 May 2014

How to set up crash-safe slave replication with MySQL 5.6

Today, while working on a dev replication DB instance that had crashed (because of a power failure), I came to know that a few defaults need to be changed to set up a "crash-safe slave" on MySQL 5.6.

Before discussing the actual problem, let me first share the configuration options used for the existing DB instance.

The DB instance has been configured with master_info_repository=TABLE, relay-log-info-repository=TABLE.

After the server restart, I started the DB instance and it came up without any issues. I then logged into the DB server and ran "show slave status;"

I was shocked by the information from "show slave status\G", which was showing:

Master_Log_File: bin_1840.000001
Read_Master_Log_Pos: 43762
Relay_Log_File: relay_1836.000002
Relay_Log_Pos: 87537
Relay_Master_Log_File: bin_1840.000001
Exec_Master_Log_Pos: 87375

If we look at "Read_Master_Log_Pos:", it says position 43762, but "Exec_Master_Log_Pos:" shows 87375, and both "Master_Log_File:" and "Relay_Master_Log_File:" are working on the same file "bin_1840.000001". How is this possible? The exec master log position should always be less than or equal to the read master log position, because only what the slave has read from the master can be executed on the slave side. This was strange, and I was facing this kind of issue for the first time.

So I started browsing for information on this, and found that from MySQL 5.6 onwards the default value has been changed to 10000 for the following variables:

sync_master_info
sync_relay_log
sync_relay_log_info

If the value of these variables is greater than 0, the slave synchronizes its master info repository, relay log and relay log info repository to disk after every N events set by these variables; if it is 0, they are never explicitly synchronized. For more information go through the MySQL reference manual.

That means the slave does not update its status after each event: it waits until 10000 events have been executed on the slave and only then updates the slave information. That is exactly what happened here. The slave updated the slave_master_info table at log position 43762 and was waiting for the next 10000 events before updating again, but meanwhile the slave crashed, which is why it shows "Read_Master_Log_Pos:" as 43762 after restart. That much is understandable.

But it was still not clear how the slave got "Exec_Master_Log_Pos:" up to 87375. The following output provided a few details.

mysql> select * from mysql.slave_relay_log_info\G

*************************** 1. row ***************************
  Number_of_lines: 7
   Relay_log_name: /xxxxx/var/log/binlogs/relay_1836.000002
    Relay_log_pos: 87537
  Master_log_name: bin_1840.000001
   Master_log_pos: 87375
        Sql_delay: 0
Number_of_workers: 0
               Id: 1

Again, going back to "sync_relay_log_info": the effect of this variable on the slave depends on the server's relay_log_info_repository setting (FILE or TABLE) and, if it is TABLE, on whether the storage engine used by the relay log info table is transactional (such as InnoDB) or not (MyISAM). If relay_log_info_repository is set to TABLE and the table uses InnoDB, the table is updated after each transaction and N is effectively ignored; if the table uses MyISAM, it is updated after every N events. For more information go through the MySQL reference manual.
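The interplay of these two flush frequencies can be sketched with a toy simulation (illustration only, not server code — positions are modeled as simple event counters, and the function name is mine):

```python
def positions_after_crash(events_executed, sync_master_info=10000):
    """Toy model: the read position (master info) is flushed only every
    sync_master_info events, while the exec position lives in a
    transactional InnoDB table and is flushed with every transaction.
    After a crash, only the flushed values survive."""
    read_flushed = 0
    exec_flushed = 0
    for i in range(1, events_executed + 1):
        if i % sync_master_info == 0:
            read_flushed = i   # periodic flush, per sync_master_info
        exec_flushed = i       # flushed as part of each transaction commit
    return read_flushed, exec_flushed

print(positions_after_crash(12345))  # (10000, 12345): exec ahead of read
```

This reproduces the shape of the anomaly above: after a crash, the surviving read position lags at the last multiple of 10000, while the exec position is current.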

So now it is clear why "Read_Master_Log_Pos:" is at 43762 while "Master_log_pos:" is at 87375, and the problem is clearly understood. What, then, is the solution?

If we look at "relay_log_recovery": it enables automatic relay log recovery immediately following server startup, which means the replication slave discards all unprocessed relay logs and retrieves them from the replication master. In other words, the slave will start reading the master's binary logs from Exec_Master_Log_Pos: 87375 (i.e. Master_log_pos: 87375). For more information go through the MySQL reference manual.

So I started the slave after adding the "relay_log_recovery=ON" variable to the configuration file, logged into the DB instance and checked show slave status again. Now Read_Master_Log_Pos: 87375 and Exec_Master_Log_Pos: 87375 point to the same position, which solved the problem.

Master_Log_File: bin_1840.000001
Read_Master_Log_Pos: 87375
Relay_Log_File: relay_1836.000003
Relay_Log_Pos: 4
Relay_Master_Log_File: bin_1840.000001
Exec_Master_Log_Pos: 87375

I started the slave, and after some time it got in sync with the master without any issues. I checked for consistency between master and slave, and they are consistent.

So finally I came to know that the following configuration options make sure that a crash-safe slave works without any issues:

master_info_repository=TABLE
relay-log-info-repository=TABLE
relay_log_recovery=ON

Additional reference : http://www.mysqlperformanceblog.com/2013/09/13/enabling-crash-safe-slaves-with-mysql-5-6/




Thursday 22 May 2014

DB instance is not coming up with InnoDB: Fatal : Cannot initialize AIO sub-system error.

InnoDB: Fatal : Cannot initialize AIO sub-system

PROBLEM :

While setting up multiple MySQL DB instances on a single Linux server, bringing up a DB instance results in the following error, and the instance does not come up.

ERROR DETAILS :

2014-05-20 10:11:15 2b6b7f997520 InnoDB: Warning: io_setup() failed with EAGAIN. Will make 5 attempts before giving up.
InnoDB: Warning: io_setup() attempt 1 failed.
InnoDB: Warning: io_setup() attempt 2 failed.
InnoDB: Warning: io_setup() attempt 3 failed.
InnoDB: Warning: io_setup() attempt 4 failed.
InnoDB: Warning: io_setup() attempt 5 failed.
2014-05-20 10:11:18 2b6b7f997520 InnoDB: Error: io_setup() failed with EAGAIN after 5 attempts.
InnoDB: You can disable Linux Native AIO by setting innodb_use_native_aio = 0 in my.cnf
2014-05-20 10:11:18 623 [ERROR] InnoDB: Fatal : Cannot initialize AIO sub-system
2014-05-20 10:11:18 623 [ERROR] Plugin 'InnoDB' init function returned error.
2014-05-20 10:11:18 623 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2014-05-20 10:11:18 623 [ERROR] Unknown/unsupported storage engine: InnoDB
2014-05-20 10:11:18 623 [ERROR] Aborting


SOLUTION :

1) One quick solution, according to the error log, is to start the DB instance after adding innodb_use_native_aio = 0 to the configuration file. However, this is neither a good nor a permanent solution.

Running with a large number of InnoDB I/O threads, and especially running multiple such instances on the same server machine, can exceed capacity limits on Linux systems. In this case, you may receive the following error:

EAGAIN: The specified maxevents exceeds the user's limit of available events.

You can typically address this error by writing a higher limit to /proc/sys/fs/aio-max-nr.

However, if a problem with the asynchronous I/O subsystem in the OS prevents InnoDB from starting, you can start the server with native AIO disabled (innodb_use_native_aio=0 in the option file). This option may also be turned off automatically during startup if InnoDB detects a potential problem such as a combination of tmpdir location, tmpfs filesystem, and Linux kernel that does not support AIO on tmpfs.

For more information about "innodb_use_native_aio" go through http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_use_native_aio

2) The right and permanent solution to this problem is as follows:

Check the current value of "aio-max-nr" as follows:

cat /proc/sys/fs/aio-max-nr
65536

Increase aio-max-nr to a higher value as follows.

Add the following to /etc/sysctl.conf and reload it (e.g. with sysctl -p):
fs.aio-max-nr=1048576
fs.file-max = 6815744

cat /proc/sys/fs/aio-max-nr
1048576

I started the DB instance and it came up without any issues.
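To sanity-check whether a chosen limit is large enough, you can estimate how many AIO events all instances on the host might request. This sketch assumes, as commonly cited for MySQL 5.6, that each InnoDB I/O thread (reads, writes, plus a couple of internal threads such as insert buffer and log I/O) reserves 256 slots — these per-thread numbers are an assumption, so verify them against your version's documentation:

```python
def estimated_aio_events(instances, read_threads=4, write_threads=4,
                         extra_threads=2, slots_per_thread=256):
    """Estimate total AIO events requested by `instances` MySQL servers.
    read/write defaults match innodb_read_io_threads / innodb_write_io_threads;
    extra_threads covers internal I/O threads (assumption)."""
    per_instance = (read_threads + write_threads + extra_threads) * slots_per_thread
    return instances * per_instance

print(estimated_aio_events(1))   # 2560 per instance with default thread counts
print(estimated_aio_events(30))  # 76800 -- already above the old 65536 default
```

This makes it easy to see why many instances on one box blow past the default limit even though a single instance is nowhere near it.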




Friday 18 April 2014

mysqldump: Couldn't execute 'SET OPTION SQL_QUOTE_SHOW_CREATE=1':

After upgrading a MySQL DB to MySQL 5.6.16, mysqldump results in the following error.

shell> mysqldump -h sandbox -P9216 -u root -p dumptest > dumptest.sql
Enter password:
mysqldump: Couldn't execute 'SET OPTION SQL_QUOTE_SHOW_CREATE=1': You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_QUOTE_SHOW_CREATE=1' at line 1 (1064)

The reason for this is that, before starting to take a backup of the given database or tables, mysqldump first sets the SQL_QUOTE_SHOW_CREATE option by running "SET OPTION SQL_QUOTE_SHOW_CREATE=1" on the server. Earlier versions of MySQL (i.e. MySQL < 5.6) have no problem with this, because those servers support "SET OPTION" statements. Since MySQL 5.6.5, the "SET OPTION" syntax is no longer accepted in favor of SET without OPTION. That is the reason why mysqldump fails with the above-mentioned error.

In newer versions of the MySQL client binaries (mysqldump), "SET OPTION SQL_QUOTE_SHOW_CREATE=1" is replaced by "SET SQL_QUOTE_SHOW_CREATE=1".

NOTE : The statement "SET SQL_QUOTE_SHOW_CREATE=1" will execute on all MySQL versions, i.e. MySQL 5.1, 5.5 and 5.6.

The solution to this problem is to upgrade your MySQL client binaries.

Let's see what difference using "SET SQL_QUOTE_SHOW_CREATE=1" makes.

mysql> show variables like 'SQL_QUOTE_SHOW_CREATE';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| sql_quote_show_create | OFF   |
+-----------------------+-------+
1 row in set (0.00 sec)

mysql> show create table rpl3\G
*************************** 1. row ***************************
       Table: rpl3
Create Table: CREATE TABLE rpl3 (
  id int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(15) DEFAULT NULL,
  sid varchar(100) DEFAULT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

mysql> SET SQL_QUOTE_SHOW_CREATE=1;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like 'SQL_QUOTE_SHOW_CREATE';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| sql_quote_show_create | ON    |
+-----------------------+-------+
1 row in set (0.00 sec)

mysql> show create table rpl3\G
*************************** 1. row ***************************
       Table: rpl3
Create Table: CREATE TABLE `rpl3` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(15) DEFAULT NULL,
  `sid` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

If we observe the output of the above two show create statements, the result differs based on sql_quote_show_create. If "sql_quote_show_create" is enabled, the table name and column names are quoted, which avoids errors when names clash with reserved words.

For more information about this bug, go through MySQL Bug Report.





MySQL Threads

The MySQL server creates the following threads:

Connection manager threads handle client connection requests on the network interfaces that the server listens to. On all platforms, one manager thread handles TCP/IP connection requests. On Unix, this manager thread also handles Unix socket file connection requests.

A signal thread handles all signals. This thread also normally handles alarms and calls process_alarm() to force timeouts on connections that have been idle too long.

InnoDB main thread / background threads: the main InnoDB thread is a special thread that just runs in a loop doing various kinds of work. In older versions of InnoDB it did just about everything, and that was often a source of problems. It was once responsible for keeping the transaction logs flushed as well as handling asynchronous I/O requests and much more. A stall in any one of those places would cause the others to fall far behind.

Since then many tasks handled by the main thread have been split out into separate threads, which means even if there's a stall in one, others can continue to function and help make sure the server keeps responding.

For InnoDB there will be additional read and write threads by default. The number of these are controlled by the innodb_read_io_threads and innodb_write_io_threads parameters.

On a master replication server, connections from slave servers are handled like client connections: There is one thread per connected slave.

On a slave replication server, an I/O thread is started to connect to the master server and read updates from it. An SQL thread is started to apply updates read from the master. These two threads run independently and can be started and stopped independently.

If the server is started with the --flush_time=val option, a dedicated thread is created to flush all tables every val seconds.

If the event scheduler is active, there is one thread for the scheduler, and a thread for each event currently running.

For more information go through MySQL Threads.




Monday 14 April 2014

MySQL InnoDB Row Level Locking Mechanism

We know that the MySQL InnoDB engine provides row-level locking, while MyISAM can only do table-level locking. But InnoDB's row-level locking mechanism may not work as you expect. InnoDB does not really lock rows of data; instead, it sets locks on every index record that is scanned while processing the SQL statement. This means that only when your SQL statement uses an index to query data will InnoDB use row-level locks. Otherwise, InnoDB uses a table-level lock.

Let's do some simple tests.

Create a table called innodb_lock_test as below (note that, to start with, there is no index on the id column):

mysql> create table innodb_lock_test(id int,name varchar(10)) engine=innodb;
Query OK, 0 rows affected (0.15 sec)
mysql> insert into innodb_lock_test values(1,'Trimurthy'),(2,'Anand'),(3,'Krishna'),(4,'Prasad');
Query OK, 4 rows affected (0.00 sec)
Records: 4 Duplicates: 0 Warnings: 0


Now Lets open 2 sessions.

SESSION #1 :

mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from innodb_lock_test where id =1;
+------+-----------+
| id   | name      |
+------+-----------+
|    1 | Trimurthy |
+------+-----------+
1 row in set (0.00 sec)

SESSION #2 :

mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from innodb_lock_test where id =2;
+------+-------+
| id   | name  |
+------+-------+
|    2 | Anand |
+------+-------+
1 row in set (0.00 sec)


SESSION #1 :

mysql> select * from innodb_lock_test where id =1 for update;
+------+-----------+
| id   | name      |
+------+-----------+
|    1 | Trimurthy |
+------+-----------+
1 row in set (0.00 sec)


SESSION #2 :
mysql> select * from innodb_lock_test where id =2 for update;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

This example shows us that InnoDB can only use table-level locking when no index is available. In session 1, it looks like we only set a lock on one row (where id=1 for update). But since we have no index on the id column, InnoDB actually locks the whole table, so in session 2, when we try to lock a different row (where id=2 for update), we have to wait until the lock in session 1 is released.

Now, let's add an index on the id column and repeat the same test.

mysql> alter table innodb_lock_test add index id(id);
Query OK, 0 rows affected (0.24 sec)
Records: 0  Duplicates: 0  Warnings: 0

SESSION #1 :

mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from innodb_lock_test where id =1;
+------+-----------+
| id   | name      |
+------+-----------+
|    1 | Trimurthy |
+------+-----------+
1 row in set (0.00 sec)

SESSION #2 :

mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from innodb_lock_test where id =2;
+------+-------+
| id   | name  |
+------+-------+
|    2 | Anand |
+------+-------+
1 row in set (0.00 sec)


SESSION #1 :

mysql> select * from innodb_lock_test where id =1 for update;
+------+-----------+
| id   | name      |
+------+-----------+
|    1 | Trimurthy |
+------+-----------+
1 row in set (0.00 sec)


SESSION #2 :

mysql> select * from innodb_lock_test where id =2 for update;
+------+-------+
| id   | name  |
+------+-------+
|    2 | Anand |
+------+-------+
1 row in set (0.00 sec)

If we observe session 1 and session 2, both queries got executed (meaning both acquired locks on their respective rows) because of the index.

CONCLUSION : InnoDB row-level locking works only when your SQL statement uses an index to query data.



Sunday 13 April 2014

How to calculate InnoDB log file usage.


In this example, I will demonstrate how to check the amount of InnoDB log file space in use
(checking current usage at peak times). Examine the InnoDB monitor output
(i.e. SHOW ENGINE INNODB STATUS\G) and look at the LOG section.

---
LOG
---
Log sequence number 5428 3708369329
Log flushed up to   5428 3708369329
Last checkpoint at  5428 834766592
0 pending log writes, 0 pending chkp writes


The InnoDB log file usage can be calculated with the following formula:

The log sequence number is ((5428 * 4 * 1024*1024*1024) + 3708369329) = 23316790852017
The last checkpoint is at ((5428 * 4 * 1024*1024*1024) + 834766592) = 23313917249280

So the actual InnoDB log file usage is 23316790852017 - 23313917249280 = 2873602737 bytes, which is about 2740 MB.

innodb_log_file_size = 1048576000
innodb_log_files_in_group = 4

Total log files size = 1048576000 * 4 = 4000MB

The above calculation applies only to MySQL versions below 5.5;
for MySQL 5.5 and later, use the following calculation.


---
LOG
---
Log sequence number 2735016708
Log flushed up to   2734016690
Last checkpoint at  2380597012

Perform the following calculation (formula) to know the log space used. The values to use in the calculation are "Log sequence number" and "Last checkpoint at".

mysql> SELECT (2735016708 - 2380597012)/1024/1024 "Space used in MB";
+------------------+
| Space used in MB |
+------------------+
|     338.00096130 |
+------------------+
1 row in set (0.01 sec)
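Both formulas can be wrapped into small helpers (the function names are mine; the arithmetic follows the calculations above, where the pre-5.5 output prints each LSN as a high word in units of 4 GB plus a low word):

```python
def log_used_pre55(lsn_high, lsn_low, ckpt_high, ckpt_low):
    """Pre-5.5 SHOW ENGINE INNODB STATUS prints each LSN as two numbers:
    value = high * 4GB + low. Returns bytes of log space in use."""
    four_gb = 4 * 1024 * 1024 * 1024
    return (lsn_high * four_gb + lsn_low) - (ckpt_high * four_gb + ckpt_low)

def log_used_55(log_sequence_number, last_checkpoint_at):
    """5.5+ prints a single LSN, so usage is a plain difference in bytes."""
    return log_sequence_number - last_checkpoint_at

# the two worked examples from this post:
print(log_used_pre55(5428, 3708369329, 5428, 834766592))         # 2873602737 bytes
print(log_used_55(2735016708, 2380597012) / 1024 / 1024)         # ~338.0 MB
```

Feeding in the numbers from the two LOG sections above reproduces the 2873602737-byte (about 2740 MB) and ~338 MB results.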




InnoDB Transaction Isolation Levels and Concurrency

When multiple clients run transactions concurrently, three problems that may result are dirty reads, non-repeatable reads, and phantoms. These occur under the following circumstances:

A dirty read is a read by one transaction of uncommitted changes made by another. Suppose that transaction T1 modifies a row. If transaction T2 reads the row and sees the modification even though T1 has not committed it, that is a dirty read. One reason this is a problem is that if T1 rolls back, the change is undone but T2 does not know that.

A non-repeatable read occurs when a transaction performs the same retrieval twice but gets a different result each time. Suppose that T1 reads some rows and that T2 then changes some of those rows and commits the changes. If T1 sees the changes when it reads the rows again, it gets a different result; the initial read is non-repeatable. This is a problem because T1 does not get a consistent result from the same query.

A phantom is a row that appears where it was not visible before. Suppose that T1 and T2 begin, and T1 reads some rows. If T2 inserts a new row and T1 sees that row when it reads again, the row is a phantom.
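The dirty-read case can be sketched with a toy single-row store (pure illustration — this is not how InnoDB implements visibility, and the class is hypothetical):

```python
class ToyRow:
    """One row with a committed value and an optional pending change."""

    def __init__(self, value):
        self.committed = value
        self.pending = None        # T1's uncommitted modification

    def write(self, value):        # T1 modifies the row, not yet committed
        self.pending = value

    def commit(self):
        if self.pending is not None:
            self.committed = self.pending
        self.pending = None

    def rollback(self):
        self.pending = None

    def read(self, isolation):
        # READ UNCOMMITTED sees pending changes (a dirty read);
        # higher levels see only committed data
        if isolation == "READ UNCOMMITTED" and self.pending is not None:
            return self.pending
        return self.committed

row = ToyRow(10)
row.write(99)                        # T1 updates but does not commit
print(row.read("READ UNCOMMITTED"))  # 99 -- dirty read by T2
print(row.read("READ COMMITTED"))    # 10 -- T2 sees only committed data
row.rollback()                       # T1 rolls back; the dirty value vanishes
print(row.read("READ UNCOMMITTED"))  # 10
```

The rollback at the end is exactly the hazard described above: a reader at READ UNCOMMITTED acted on the value 99, which afterwards never existed.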

InnoDB implements four isolation levels that control the visibility of changes made by one transaction to other concurrently executing transactions:

READ UNCOMMITTED : allows a transaction to see uncommitted changes made by other transactions. This isolation level allows dirty reads, non-repeatable reads, and phantoms to occur.

READ COMMITTED : allows a transaction to see changes made by other transactions only if they’ve been committed. Uncommitted changes remain invisible. This isolation level allows non-repeatable reads and phantoms to occur.

REPEATABLE READ : ensures that if a transaction issues the same SELECT twice, it gets the same result both times, regardless of committed or uncommitted changes made by other transactions. In other words, it gets a consistent result from different executions of the same query. In some database systems, this isolation level allows phantoms, such that if another transaction inserts new rows in the interval between the SELECT statements, the second SELECT will see them. This is not true for InnoDB; phantoms do not occur at the REPEATABLE READ level.

SERIALIZABLE : completely isolates the effects of one transaction from others. It is similar to REPEATABLE READ with the additional restriction that rows selected by one transaction cannot be changed by another until the first transaction finishes.

The essential difference between REPEATABLE READ and SERIALIZABLE is that with REPEATABLE READ, one transaction cannot modify rows another has modified, whereas with SERIALIZABLE, one transaction cannot modify rows that another has merely read.




Saturday 12 April 2014

MySQL upgrade issues with SQL statements

The following errors were encountered during the validation of new MySQL versions, including MySQL 5.5 and 5.6.

#1) As of MySQL 5.1 there is no problem with the "TRUNCATE TABLE" statement while truncating a parent table if its child table doesn't have any matching rows, or if the child table was created with cascade delete or cascade update constraints.

Since MySQL 5.5, the "TRUNCATE TABLE" statement fails with the following error while truncating such a table, even though its child table doesn't have any matching rows or was created with cascade delete or cascade update constraints.

ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails (`test`.`child`, CONSTRAINT `child_ibfk_1` FOREIGN KEY (`parent_id`) REFERENCES `parent` (`id`)).


Although TRUNCATE TABLE is similar to DELETE, it is classified as a DDL statement rather than a DML statement. Since MySQL 5.5 it differs from DELETE in the following ways:

Truncate operations drop and re-create the table, which is much faster than deleting rows one by one.

TRUNCATE TABLE does not cause ON DELETE triggers to fire, and it cannot be performed for InnoDB tables with parent-child foreign key relationships.

Any AUTO_INCREMENT value is reset to its start value. This is true even for MyISAM and InnoDB, which normally do not reuse sequence values.

SUMMARY : Please make sure that you are not using TRUNCATE TABLE anywhere in your code on tables with foreign-key relationships; going forward, use the DELETE statement instead of TRUNCATE TABLE.

For more information please go through http://dev.mysql.com/doc/refman/5.6/en/truncate-table.html



#2) MySQL 5.1 is not strict about data types and their associated values.

The following statement works fine in MySQL 5.1.

insert into testdatatype2 select * from testdatatype where status1=1;

The same statement results in the following error in MySQL 5.5 and 5.6.
ERROR 1292 (22007): Truncated incorrect DOUBLE value: 'OFF'

Before starting our discussion, let's first go through the table structures used.

mysql> show create table testdatatype\G
*************************** 1. row ***************************
       Table: testdatatype
Create Table: CREATE TABLE `testdatatype` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL,
  `status1` varchar(5) DEFAULT NULL,
  `status2` int(5) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

mysql> show create table testdatatype2\G
*************************** 1. row ***************************
       Table: testdatatype2
Create Table: CREATE TABLE `testdatatype2` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL,
  `status1` varchar(5) DEFAULT NULL,
  `status2` int(5) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

mysql> select * from testdatatype;
+------+-----------+---------+---------+
| id   | name      | status1 | status2 |
+------+-----------+---------+---------+
|    1 | trimurthy | OFF     |       2 |
|    2 | anandh    | 1       |       2 |
+------+-----------+---------+---------+
2 rows in set (0.00 sec)

mysql> select * from testdatatype2;
Empty set (0.00 sec)

Both tables have the same structure; only the names differ. If we observe the table structure, the status1 column is defined as varchar(5),
and the table holds both string and numeric data in the status1 column.

In MySQL 5.1 the following statement worked without any errors or warnings.

mysql> insert into testdatatype2 select * from testdatatype where status1=1;
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> select * from testdatatype2;
+------+-----------+---------+---------+
| id   | name      | status1 | status2 |
+------+-----------+---------+---------+
|    1 | trimurthy | OFF     |       2 |
|    2 | anandh    | 1       |       2 |
+------+-----------+---------+---------+
2 rows in set (0.00 sec)

But in MySQL 5.5 and 5.6 the same insert statement results in the following error.

mysql> insert into testdatatype2 select * from testdatatype where status1=1;
ERROR 1292 (22007): Truncated incorrect DOUBLE value: 'OFF'

This is because, from MySQL 5.5 onwards, the server is stricter about data types.

To resolve the above-mentioned error, quote the value in the WHERE clause as follows, which is the SQL standard:

insert into testdatatype2 select * from testdatatype where status1='1';


#3) In MySQL 5.1 and 5.5 there is no problem with index creation on large varchar columns: those versions create the index on only the first 255 characters of the given column and issue the warning "Specified key was too long; max key length is 767 bytes". MySQL 5.6, however, fails with "ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes".

The following is an example :

MySQL 5.1 :

mysql> create table compatibility_check ( id int, name varchar(50), description varchar(500), key key_desc (description));

Query OK, 0 rows affected, 1 warning (0.02 sec)

mysql> show warnings;

+---------+------+---------------------------------------------------------+
| Level   | Code | Message                                                 |
+---------+------+---------------------------------------------------------+
| Warning | 1071 | Specified key was too long; max key length is 767 bytes |
+---------+------+---------------------------------------------------------+
1 row in set (0.00 sec)

mysql> show create table compatibility_check;

CREATE TABLE `compatibility_check` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(50) DEFAULT NULL,
  `description` varchar(500) DEFAULT NULL,
  KEY `key_desc` (`description`(255))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


MySQL 5.5 :

mysql> create table compatibility_check ( id int, name varchar(50), description varchar(500), key key_desc (description));

Query OK, 0 rows affected, 1 warning (0.03 sec)

mysql> show warnings;

+---------+------+---------------------------------------------------------+
| Level   | Code | Message                                                 |
+---------+------+---------------------------------------------------------+
| Warning | 1071 | Specified key was too long; max key length is 767 bytes |
+---------+------+---------------------------------------------------------+
1 row in set (0.00 sec)

mysql> show create table compatibility_check;

CREATE TABLE `compatibility_check` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(50) DEFAULT NULL,
  `description` varchar(500) DEFAULT NULL,
  KEY `key_desc` (`description`(255))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


MySQL 5.6 :

mysql> create table compatibility_check ( id int, name varchar(50), description varchar(500), key key_desc (description));

ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

mysql> create table compatibility_check ( id int, name varchar(50), description varchar(500), key key_desc (description(100)));

Query OK, 0 rows affected (0.03 sec)


SUMMARY : Make sure that you are not creating indexes on large varchar columns anywhere in your code; going forward, use "KEY key_desc (description(255))" instead of "KEY key_desc (description)" if you have to create an index on varchar columns longer than 255 characters.
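The 255-character figure follows directly from the 767-byte key limit and utf8's maximum of 3 bytes per character. A quick sanity check (the function name is mine; the 4-bytes-per-character case corresponds to utf8mb4):

```python
def max_index_prefix_chars(max_key_bytes=767, bytes_per_char=3):
    """Largest column prefix (in characters) that fits in the key limit."""
    return max_key_bytes // bytes_per_char

print(max_index_prefix_chars())                  # 255 for utf8 (3 bytes/char)
print(max_index_prefix_chars(bytes_per_char=4))  # 191 for utf8mb4
```

This is why the server truncates the key to description(255) for a utf8 column: 255 * 3 = 765 bytes fits under the limit, while 256 characters would need 768 bytes.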


