[HyperDB] Partitioning wpmu data
felix.gushansky at internetbrands.com
Thu Apr 23 19:35:39 GMT 2009
Thanks, Callum and Andy - all good info. In terms of migrating existing
data, we may be in luck, as it may end up not being a problem. It seems
there are many (many!) one-record tables, which means folks signed up for a
blog but never really used it. We're debating cleaning it up by giving users
a 45-day use-it-or-lose-it notice. We believe that would drastically reduce
the existing dataset.
As for pattern matching: looking at the code, particularly
get_dataset_from_table, I see that if the exact table name is not found in
the table-to-dataset array, it then tries to match each pattern key in the
array against the table name being queried. What I do not understand is the
significance of '/' == substr( $pattern, 0, 1 ) in the
if ( '/' == substr( $pattern, 0, 1 ) && preg_match( $pattern, $table ) )
statement. Why do we require a '/' to be the first character of the array
key in order to try to pattern match it? And if '/' *is* the first
character, then it and the rest of the key will be matched against the
table name being queried, which would normally not have a '/' as its first
character.
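My best guess at the answer, for anyone following along: the leading '/' is how the code distinguishes a regex key from a literal table name. PCRE patterns in PHP must be wrapped in delimiters, conventionally '/.../' , so a key beginning with '/' can safely be treated as a pattern and passed to preg_match as-is, while a real table name never starts with '/'. A minimal sketch of that lookup logic (not HyperDB's actual source; the map contents are made up):

```php
<?php
// Sketch of resolving a table name against a dataset map whose keys
// are either literal table names or PCRE patterns. A leading '/'
// marks a key as a regex, since PHP regexes carry their delimiters.
function lookup_dataset( $table, array $map ) {
    // 1. Exact (literal) match first.
    if ( isset( $map[ $table ] ) ) {
        return $map[ $table ];
    }
    // 2. Fall back to pattern keys, identified by their '/' delimiter.
    foreach ( $map as $pattern => $dataset ) {
        if ( '/' == substr( $pattern, 0, 1 ) && preg_match( $pattern, $table ) ) {
            return $dataset;
        }
    }
    return null; // no dataset found
}

$map = array(
    'wp_users'   => 'global',           // literal key
    '/^wp_\d+_/' => 'blogs',            // pattern key: wp_<nnnn>_* tables
);

echo lookup_dataset( 'wp_1234_posts', $map ); // blogs
echo "\n";
echo lookup_dataset( 'wp_users', $map );      // global
```

So a pattern key like '/^wp_\d+_/' never matches as a literal name, and a literal name like 'wp_users' is never mistaken for a pattern.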
Thanks, folks. Again, appreciate all your help.
> From: Callum Macdonald <lists.automattic.com at callum-macdonald.com>
> Reply-To: <hyperdb at lists.automattic.com>
> Date: Thu, 23 Apr 2009 11:57:29 -0700
> To: <hyperdb at lists.automattic.com>
> Subject: Re: [HyperDB] Partitioning wpmu data
> As Andy said, HyperDB is about on the money for what you want.
> The biggest challenge you'll face will be moving the data. You have 11k
> tables in one database right now, you need to move those to new
> databases. The MultiDB class uses the same method as HyperDB did
> originally, 4096 databases, first 3 digits of the md5 of the blog_id
> (the NNN number).
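If I follow Callum's description of the MultiDB scheme correctly, the "first 3 digits of the md5" are hex digits, which gives 16^3 = 4096 possible buckets - hence the 4096 databases. A one-line sketch (my reading of the scheme, not MultiDB's code):

```php
<?php
// First 3 hex digits of md5(blog_id) select one of 16^3 = 4096
// database buckets (the NNN number in the scheme described above).
function db_bucket( $blog_id ) {
    return substr( md5( (string) $blog_id ), 0, 3 );
}

echo db_bucket( 1234 ); // 81d
```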
> I think your best bet will be to use HyperDB and create your own system
> for managing which blog is in which database (maybe a lookup table like
> Andy mentioned). That way you'll be able to migrate the data over time.
> Other options that occur to me:
> Migrate all the data at once. Take the whole system down for a few hours
> and move all the tables as you need to. Pray nothing goes wrong. :-)
> You could add some logic that checks if the table is found in the new
> target database, and if not, fall back on the global database. That
> would allow you to move blogs one at a time, without changing any code.
> If the blog tables are in the new database, the queries go there, if
> not, the queries go to the master database.
> If you're looking for help, you could speak to the guys at Incsub. I'm a
> (present and about to be) retired staffer there, so I'm biased. :-)
> Best of luck with the project.
> Cheers - Callum.
> On Thu, 2009-04-23 at 10:32 -0700, Felix Gushansky wrote:
>> Hello, folks. We have just inherited a site that uses WordPress MU (ver
>> 2.5.1) with MySQL 5 on the back end. We soon learned that the blog db is
>> huge. Not necessarily size-wise (it is slightly over a gig), but table-wise.
>> That db contains over 11,000 tables, with a total of over 40,000 files in
>> the same directory. That hugely affects performance AND maintenance, to the
>> point, in fact, that we cannot back up the db using mysqldump. We had to
>> write a script that dumps one table at a time.
>> So, we need to scale somehow. That's why we have looked at HyperDB, which
>> seems to have been built just for that. However, the problem we are seeing
>> is that in our case the tables in the blog db are created and named
>> dynamically. For each new user wpmu creates multiple tables (8?) in the
>> database and names them like wp_<nnnn>_comments, wp_<nnnn>_links,
>> wp_<nnnn>_options, wp_<nnnn>_postmeta, wp_<nnnn>_posts, etc, where <nnnn> is
>> a four digit integer. Can we take advantage of HyperDB in this case? And if
>> not what are our options?
>> Thanks very much folks. Your help is very much appreciated.
>> -- Felix
>> HyperDB mailing list
>> HyperDB at lists.automattic.com