MySQL startup targets SSDs

Need a MySQL storage engine tailored for solid-state drives? Start-up Hexagram 49 has unveiled an engine dubbed RethinkDB. The company says it's optimized for SSDs, claiming it can deliver performance ten times faster than existing databases. RethinkDB is the work of computer science students Michael Glukhovsky, Slava …

COMMENTS

This topic is closed for new posts.
  1. Matt 21

    You appear to have printed a marketing release!

    It is certainly not true that tuning databases to specific hardware is a new trend; look at CAFS from ICL in the late 80s, for example.

    It's also not really true that Sybase, Oracle etc. have problems with SSDs. In fact, quite the opposite: they can already take full advantage of the higher I/O performance. In reality 99.99% of systems are not IO bound when using well laid out traditional disks anyway.

    Finally, you seem to imply that SSDs use less 'leccy, which according to most reviews doesn't actually seem to be the case.

  2. Steven Jones

    @Matt 21

    "In reality 99.99% of systems are not IO bound when using well laid out traditional disks anyway."

    So only one in 10,000 databases is I/O bound on traditional disks? All I can say is that we must have almost all of them in the UK - we have lots of DBs where the most significant single wait event is due to the latency of I/O. It may not be relevant to all DBs, but in the transactional area latency on random reads is a limiting factor, and it's not getting better (yes, and that is after throwing memory at the problem, using enterprise arrays with NV cache, 15K drives and all the rest). Once you get down to random access times of about 5ms there is nowhere else to go with physical disks. Of course if you have a small enough DB that it fits in cache, then it's not so much of a problem (but beware the horrendous start-up performance with a cold cache).
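
    To put rough numbers on that 5ms figure (a back-of-envelope sketch; the per-transaction read count is a made-up illustration, not from any particular system):

        # ~5 ms per random read caps a spindle at ~200 random IOPS, so
        # latency, not bandwidth, bounds transactional throughput.
        seek_ms = 5.0                         # random access time, physical disk
        iops_per_disk = 1000.0 / seek_ms      # ~200 random reads/sec per spindle
        reads_per_txn = 10                    # hypothetical transaction profile
        print(iops_per_disk / reads_per_txn)  # ~20 txn/sec per disk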

    It's generally not such an issue with data mining and the like, where bandwidth is what matters, not IOPS.

    Of course Oracle and the like will benefit from SSDs. However, traditional DBs are optimised for physical disks and data is laid out to take account of it. Removing the random access penalty by having an effectively zero seek time would open up a lot of possibilities for arranging data. Write-anywhere approaches become viable - simply avoid over-writing data in situ, as there is no longer a penalty, which makes transaction back-outs easier. Access to data is then through a re-direction process.
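
    To sketch the write-anywhere idea (a toy illustration only - the class and method names are mine, not any real engine's design): every write appends rather than over-writing in situ, reads go through a redirection map, and backing out a transaction is just re-pointing the map.

        # Toy write-anywhere store: append-only log plus redirection map.
        class WriteAnywhereStore:
            def __init__(self):
                self.log = []        # append-only storage (stand-in for flash pages)
                self.redirect = {}   # key -> offset of the current version

            def write(self, key, value):
                old = self.redirect.get(key)   # remember for back-out
                self.log.append(value)         # never over-write in situ
                self.redirect[key] = len(self.log) - 1
                return old

            def read(self, key):
                return self.log[self.redirect[key]]  # one level of indirection

            def back_out(self, key, old_offset):
                # undoing a write is just re-pointing, no data movement
                self.redirect[key] = old_offset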

  3. Anonymous Coward

    @Steven

    Perhaps I can be clearer: it's not that the DBs would not benefit from faster I/O; what I wanted to say is that in most cases the performance is good enough, and that tuning the SQL and improving the design bring much bigger benefits.

    Also note that I said well laid out disks, none of this RAID 5 crap for example.

    I've been doing this kind of thing for many years, but in recent years I'd say that most of my clients, from large banks to telcos, are getting better than a 90% cache hit ratio, which does mean you get slow queries from a cold cache, but it's not normally that bad (perhaps you're using an older RDBMS?). So random reads are generally not a problem, as they are served from cache.

    If you've got a heavy write system then better I/O might help. Having said that, it's not too difficult to get above 1,000 OLTP-style transactions a second with traditional disks.

  4. David Shone

    @AC

    While I agree in principle, in practice many systems are not laid out well. And after many years of troubleshooting these problems I've found that even when they are, a common problem is shared storage - e.g. a big array on a SAN where the physical disks are sliced to provide many spindles for IOPS, often leaving unused space. Some time later, the storage team allocates this space for another purpose (often another server), and the performance of the original application is degraded as the LUN latencies increase.

    Also, a 90%+ hit ratio is all very well, but it's the cost of the misses that hurts you; if a miss takes 10x longer to service than a hit (and it's often worse), then the 10% of misses cost more than the 90% hits, roughly halving the throughput.
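
    Putting numbers on it (using the 90% / 10x figures above):

        # 90% hits at unit cost, 10% misses at 10x the cost (often worse):
        hit_ratio, hit_cost, miss_cost = 0.90, 1.0, 10.0
        avg = hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost
        print(avg)  # 1.9: the misses contribute 1.0, the hits only 0.9, so
                    # average service time nearly doubles, halving throughput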

    SSDs - used appropriately - can make a huge difference and may turn out to be less expensive than lots of spindles that would otherwise have to be left mostly empty.

  5. Mike 61

    WTF

    "The work is part of a growing trend to tune applications, particularly databases, to the underlying hardware in PCs and servers to speed performance and cut down on additional hardware."

    Growing trend? Ya, sure - growing since the invention of the computer, maybe. You ALWAYS tune your database AND OS to the underlying hardware. This is why you can't virtualize a well tuned, well laid out machine running heavy loads: it just adds more load, and you get nothing.

    I have had an [unnamed virtualization vendor] come in and do an eval on our systems, and they basically told me the same thing.

    But, on the gripping hand, they are still students, and probably don't know any better.

  6. Steven Jones

    @Matt 21

    We have a system with > 99.8% cache hit in Oracle, and OLTP transactions are still 65% waiting on read I/O. All logging and writing is to NV cache, so writes are sub-ms. The DB is heading towards the 16TB region and it has many thousands of users.
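
    The arithmetic behind that is straightforward (the per-transaction figures below are assumed for illustration, not our actual numbers):

        # A 99.8% hit ratio still leaves plenty of physical reads if the
        # transaction does enough logical reads in the first place.
        logical_reads = 10_000     # logical reads per transaction (assumed)
        miss_ratio = 0.002         # i.e. 99.8% buffer cache hit
        read_ms = 6.0              # SAN random read service time
        print(logical_reads * miss_ratio * read_ms)  # 120 ms of read wait/txn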

    Much of the trouble is caused by COTS packages generated from meta-data configurations. A lot of them are very resource intensive. Many batch processes on these are essentially transactional with lots of random I/O, and reducing latency by an order of magnitude will make a massive difference.

    As for RAID-5: on enterprise arrays with huge NV caches it can work well (for sequential writes, RAID-5 has lower overheads than RAID-1). Ultimately random reads are constrained by disk latency. RAID-1 can give you an advantage at higher utilisation rates, as there are two disks to read from rather than one - simple queuing theory. However, if you are seeing 6ms service times on a SAN you aren't going to get a dramatic difference by going to RAID-1. Writes are unaffected as they are all cached anyway.
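
    The "simple queuing theory" point can be made concrete with an M/M/c model (the offered load below is assumed for illustration; the 5ms service time is the physical-disk figure from earlier): at high utilisation, two readable copies cut queueing delay dramatically, even though each individual read is no faster.

        from math import factorial

        def mmc_response_ms(lam, mu, c):
            # Mean response time of an M/M/c queue via the Erlang C formula:
            # lam = arrivals/sec, mu = per-server service rate, c = servers.
            rho = lam / (c * mu)
            assert rho < 1, "offered load exceeds capacity"
            a = lam / mu
            tail = a**c / (factorial(c) * (1 - rho))
            erlang_c = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
            wq = erlang_c / (c * mu - lam)     # mean time spent queueing
            return (wq + 1 / mu) * 1000        # plus service time, in ms

        mu = 200.0    # ~5 ms per random read -> 200 reads/sec per spindle
        lam = 150.0   # offered random-read load, reads/sec (assumed, fairly busy)
        print(mmc_response_ms(lam, mu, 1))  # ~20 ms: one readable copy
        print(mmc_response_ms(lam, mu, 2))  # ~5.8 ms: two copies (RAID-1)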

    The problem is that 20TB of SSD is still too expensive. Maybe in a couple of years time.

  7. Matt 21

    @Steven

    I suppose I was trying to say that there are a lot of sites that have databases smaller than 50GB with only a hundred or so active users, so they generally don't see the issues you are seeing. Your kind of problem is the sort of thing I've seen at bigger clients, but numerically they don't make up a large percentage, if you see what I mean.

    I'm not sure why you're seeing only a 0.2% miss ratio but 65% of transactions waiting on physical reads... But we're not really going to get to the bottom of this kind of thing on a comments page, are we :-)

    My experience (with five recent customers using large SANs on EMC or HDS) is that they are a nightmare and that bunging a lot of cache at them didn't really help - somehow we didn't manage even with writes being cached. The writing of parity bits is one of the problems, but SANs are a bit of a black art.

    I found RAID 10 with local disks gave me the best performance, but that's another story...

    Anyway, my main point is that for a lot of people SSDs are not the silver bullet.

  8. Anonymous Coward
    Thumb Up

    Thank You

    "...have not been architecturally geared to..."

    I'd like to express my heartfelt thanks to the author for not using the soundbite "architected".

    Well done, sir.

  9. batfastad

    Gigabyte i-RAM

    We've been running our MySQL DB on a Gigabyte i-RAM ramdisk partition for a while now, and performance is phenomenal with MyISAM.

    The Gigabyte i-RAM is a great unit but has a max capacity of 4GB. The advantage over a conventional ramdisk is that it's battery backed, so if the power gets cut from your power supply your data's still good. The battery lasts about 14 hours, and we have a tight backup system in place as well.

    It's only our poxy intranet DB, which is about 2GB, but I find it helps when we need to run beast queries that return many rows.
