Archive for the ‘Shard’ Category

mixi.jp Architecture

July 10, 2007

Mixi is a fast-growing social networking site in Japan. They provide services like diary, community, messaging, reviews, and photo albums. Having a lot in common with LiveJournal, they also developed many of the same approaches. Their write-up on how they scaled their system is easily one of the best out there.

Site: http://mixi.jp

Information Sources

mixi.jp – scaling out with open source

Platform

Linux
Apache
MySQL
Perl
Memcached
Squid
Shard

What’s Inside?

They grew to approximately 4 million users in two years and are adding over 15,000 new users/day.
Ranks 35th on Alexa and 3rd in Japan.
More than 100 MySQL servers
Add more than 10 servers/month
Use non-persistent connections.
Diary traffic is 85% read and 15% write.
Message traffic is 75% read and 25% write.
Ran into replication performance problems so they had to split the database.
Considered splitting horizontally by user or vertically by table type.
They ended up partitioning by both table type and user, so all the messages for a group of users are assigned to a particular database. The partitioning key decides which database a piece of data is stored in (see the sketch after this list).
For caching they use memcached with 39 machines x 2 GB memory.
Stores more than 8 TB of images with about 23 GB added per day.
MySQL is only used to store metadata about the images, not the images themselves.
Images are either frequently accessed or rarely accessed.
Frequently accessed images are cached using Squid on multiple machines.
Rarely accessed images are served from the file system; there’s no benefit in caching them.
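
As a concrete illustration of the partitioning key idea above, here is a minimal sketch in Python (mixi’s stack is Perl, and the shard map, database names, and modulo hash are assumptions for illustration, not mixi’s actual scheme):

```python
# Illustrative only: route a (table type, user id) pair to one of several
# MySQL databases, as in the table-type + user partitioning described above.
SHARDS = {
    "message": ["msg-db-0", "msg-db-1", "msg-db-2"],
    "diary":   ["diary-db-0", "diary-db-1"],
}

def shard_for(table_type: str, user_id: int) -> str:
    """Pick the database holding this user's rows for this table type."""
    hosts = SHARDS[table_type]
    return hosts[user_id % len(hosts)]  # the partitioning key decides the database

print(shard_for("message", 1001))  # all of user 1001's messages live on msg-db-2
```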

Lessons Learned

When using dynamic partitioning it’s difficult to pick keys and algorithms for where data should be stored.

Once you partition data you can no longer do joins and you have to open a lot of connections to different databases to merge the data back together.
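
To make that concrete, here is a hedged sketch of the lost-join problem: messages and user profiles live in different databases, so a one-line JOIN becomes two queries plus an in-memory merge (the query() helper, schema, and connections are assumptions for illustration):

```python
def query(conn, sql, params=()):
    cur = conn.cursor()
    cur.execute(sql, params)
    return cur.fetchall()

def inbox_with_sender_names(user_id, message_db, user_db):
    # Query 1: the user's messages, from the message shard.
    msgs = query(message_db,
                 "SELECT sender_id, body FROM messages WHERE recipient_id = %s",
                 (user_id,))
    if not msgs:
        return []
    # Query 2: sender names, from a different database entirely.
    sender_ids = sorted({sender_id for sender_id, _ in msgs})
    marks = ",".join(["%s"] * len(sender_ids))
    names = dict(query(user_db,
                       f"SELECT id, name FROM users WHERE id IN ({marks})",
                       tuple(sender_ids)))
    # The merge step the database's JOIN used to do for us:
    return [(names.get(sid, "unknown"), body) for sid, body in msgs]
```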

It’s hard to add new hosts and rearrange data when you partition. For example, let’s say your partitioning algorithm stores all the messages for users 1-N on host 1. Now let’s say host 1 becomes overburdened and you want to repartition users across more hosts. This is very difficult to do.
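
A quick numeric illustration of why: with naive modulo placement, growing the host count remaps almost every user (consistent hashing or a directory table is the usual mitigation; whether mixi used either is not stated here):

```python
# Count how many of 100,000 users change hosts when a 4-host cluster grows to 5.
def host_for(user_id: int, n_hosts: int) -> int:
    return user_id % n_hosts  # naive modulo placement

moved = sum(host_for(u, 4) != host_for(u, 5) for u in range(100_000))
print(f"{moved / 100_000:.0%} of users would have to migrate")  # 80%
```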

By using distributed memory caching they rarely hit the DB, and their average page load time is about 0.02 seconds. This reduces the problems associated with partitioning.
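
A minimal read-through sketch of that caching pattern, written against the python-memcached client (mixi’s implementation is Perl, and the key scheme, hostnames, and TTL here are assumptions):

```python
import memcache  # python-memcached; the Perl equivalent would be Cache::Memcached

mc = memcache.Client(["cache-0:11211", "cache-1:11211"])  # illustrative hosts

def db_fetch_diary(db, diary_id):
    cur = db.cursor()
    cur.execute("SELECT title, body FROM diaries WHERE id = %s", (diary_id,))
    return cur.fetchone()

def get_diary(db, diary_id):
    key = f"diary:{diary_id}"
    entry = mc.get(key)               # hit: the DB is never touched
    if entry is None:                 # miss: fall through to MySQL once
        entry = db_fetch_diary(db, diary_id)
        mc.set(key, entry, time=300)  # illustrative 5-minute TTL
    return entry
```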

You will often have to develop strategies based on the type of content. For example, images will be treated differently than short text posts.
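
For instance, the hot/cold image split described above might look like the following request routing (the threshold and hostnames are invented for illustration):

```python
HOT_THRESHOLD = 100  # views/day before an image counts as frequently accessed

def image_url(image_id: str, views_per_day: int) -> str:
    # Hot images go through the Squid cache tier; cold ones are served
    # straight from the file system, where caching would buy nothing.
    if views_per_day >= HOT_THRESHOLD:
        return f"http://cache.example.jp/img/{image_id}"
    return f"http://files.example.jp/img/{image_id}"

print(image_url("photo-123.jpg", 5000))  # cache tier
print(image_url("photo-456.jpg", 2))     # file system
```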

Social networking sites are very time oriented, so it might be useful to partition data by time as well as user and type.
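
One hedged way to fold time into the key, purely as a sketch (the bucket count and key format are invented):

```python
from datetime import date

def shard_key(table_type: str, user_id: int, when: date) -> str:
    # Partition by type, user bucket, and month, so that old months can be
    # archived or moved off hot hardware without touching recent data.
    user_bucket = user_id % 16
    return f"{table_type}:{user_bucket}:{when:%Y-%m}"

print(shard_key("message", 1001, date(2007, 7, 10)))  # message:9:2007-07
```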

LiveJournal Architecture

July 9, 2007

A fascinating and detailed story of how LiveJournal evolved their system to scale. LiveJournal was an early player in the free blog service race and faced issues from quickly adding a large number of users. Blog posts come fast and furious, which causes a lot of writes, and writes are particularly hard to scale. Understanding how LiveJournal faced their scaling problems will help any aspiring website builder.

Site: http://www.livejournal.com/

Information Sources

LiveJournal – Behind The Scenes Scaling Storytime
Google Video
Tokyo Video
2005 version

Platform

Linux
MySQL
Perl
Memcached
MogileFS
Apache

What’s Inside?

Scaling from 1, 2, and 4 hosts to clusters of servers.
Avoid single points of failure.
Using MySQL replication only takes you so far.
Becoming IO bound kills scaling.
Spread out writes and reads for more parallelism.
You can’t keep adding read slaves and scale.
Sharded storage approach, using DRBD, for maximal throughput. Shards are allocated based on roles.
Caching to improve performance with memcached. Two-level hashing to distribute data across RAM (see the sketch after this list).
Perlbal for web load balancing.
MogileFS, a distributed file system, for parallelism.
TheSchwartz and Gearman for distributed job queuing to do more work in parallel.
Solving persistent connection problems.
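
As a sketch of the two-level hashing mentioned in the list above (the hosts and hash choice are illustrative; LiveJournal’s clients are Perl): the client hashes the key to pick a memcached host, and that host’s own hash table is the second level.

```python
import hashlib

CACHE_HOSTS = ["mc-0:11211", "mc-1:11211", "mc-2:11211"]  # illustrative

def host_for(key: str) -> str:
    # Level one: a client-side hash maps the key onto one host, turning a
    # fleet of machines into a single addressable pool of RAM.
    digest = hashlib.md5(key.encode()).hexdigest()
    return CACHE_HOSTS[int(digest, 16) % len(CACHE_HOSTS)]

# Level two happens inside memcached itself, which hashes the key again
# into its internal table on the chosen host.
print(host_for("user:1001:profile"))
```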

Lessons Learned

Don’t be afraid to write your own software to solve your own problems. LiveJournal has provided incredible value to the community through their efforts.

Sites can evolve from small one- or two-machine setups to larger systems as they learn about their users and what their system really needs to do.

Parallelization is key to scaling. Remove choke points by caching, load balancing, sharding, clustering file systems, and making use of more disk spindles.

Replication has a cost. You can’t just keep adding more and more read slaves and expect to scale.

Low-level issues, like which OS event notification mechanism to use, file system and disk interactions, threading and event models, and connection types, matter at scale.
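
To illustrate the event notification point: Python’s selectors module picks the best mechanism the OS provides (epoll on Linux, kqueue on BSD), letting one thread watch thousands of sockets. The sketch below illustrates the issue and is not LiveJournal’s code:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll/kqueue where the OS provides it

server = socket.socket()
server.bind(("0.0.0.0", 8080))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

while True:
    for key, _events in sel.select():
        if key.fileobj is server:
            conn, _addr = server.accept()   # new client
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            conn = key.fileobj
            data = conn.recv(4096)
            if data:
                conn.send(data)             # echo back
            else:
                sel.unregister(conn)        # client hung up
                conn.close()
```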

Large sites eventually turn to a distributed queuing and scheduling mechanism to distribute large work loads across a grid.
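
LiveJournal’s tools for this were Gearman and TheSchwartz; the pattern itself, reduced to an in-process sketch (a real deployment replaces the Queue with a networked job server):

```python
import queue
import threading

jobs = queue.Queue()  # stands in for a networked job server like Gearman

def worker():
    while True:
        job = jobs.get()
        if job is None:                  # shutdown sentinel
            jobs.task_done()
            break
        kind, payload = job
        # Expensive work (image resizing, email, feed fan-out) runs here,
        # off the web request path.
        print(f"processed {kind}: {payload}")
        jobs.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for t in workers:
    t.start()

# Web frontends enqueue and return to the user immediately:
jobs.put(("resize_image", "photo-123.jpg"))
jobs.put(("send_email", "user-456"))

jobs.join()                              # wait for outstanding work
for _ in workers:
    jobs.put(None)                       # tell each worker to stop
for t in workers:
    t.join()
```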