Following my previous testing I was surprised to find that Linode's conventional disk-based VPSes provide disk performance significantly ahead of Digital Ocean's cutting-edge SSDs. I was suspicious that my results may have been skewed by caching or RAID goings-on, so I wanted to run some more thorough tests and try to discover what was behind these numbers. The further results certainly make for interesting reading.
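Each test below was run several times per provider. As a rough illustration of the procedure (a minimal sketch of a repeat-run loop, not the exact commands behind the published numbers):

```
# hypothetical harness: repeat the sequential write test three times,
# keeping only the summary line that dd prints to stderr
for i in 1 2 3; do
    dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc 2>&1 | tail -n 1
done
```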
Sequential write, measured using dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc (first three runs: Digital Ocean; last three: Linode).
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.48589 s, 239 MB/s
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.26862 s, 329 MB/s
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.48705 s, 239 MB/s
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.89327 s, 1.2 GB/s
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.894649 s, 1.2 GB/s
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.902375 s, 1.2 GB/s
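For context, conv=fdatasync makes dd call fdatasync() once at the end of the run, so the reported throughput includes flushing the written data out of the page cache. An alternative (not what was run above) is to bypass the page cache entirely with dd's oflag=direct:

```
# not used in these tests: O_DIRECT writes skip the page cache,
# so every 1M block goes straight to the storage layer as it is written
dd if=/dev/zero of=tempfile bs=1M count=1024 oflag=direct
```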
Unbuffered (cold-cache) read, measured using echo 3 > /proc/sys/vm/drop_caches followed by dd if=tempfile of=/dev/null bs=1M count=1024 (first three runs: Digital Ocean; last three: Linode).
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.84947 s, 377 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.74739 s, 391 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.48869 s, 239 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.08118 s, 516 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.8766 s, 572 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.10313 s, 511 MB/s
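One caveat with this method, though it does not change the ordering above: drop_caches can only evict clean pages, so any dirty pages left over from the write test would survive it. Syncing first gives a more reliably cold cache:

```
# flush dirty pages first so drop_caches can evict as much as possible,
# then time a genuinely cold read of the test file
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=tempfile of=/dev/null bs=1M count=1024
```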
Buffered read, measured using dd if=tempfile of=/dev/null bs=1M count=1024 without dropping caches first (first three runs: Digital Ocean; last three: Linode).
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.2324 s, 332 MB/s
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.43445 s, 441 MB/s
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.8593 s, 376 MB/s
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.0949 s, 981 MB/s
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.08126 s, 993 MB/s
# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.0886 s, 986 MB/s
Measured using bonnie++ -d /tmp -r 4096 -u root (first run: Digital Ocean; second run: Linode).
# bonnie++ -d /tmp -r 4096 -u root
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
localhost 8G 355 99 158020 44 159182 61 1845 99 336481 79 5756 391
Latency 58130us 523ms 324ms 8754us 31527us 21232us
Version 1.96 ------Sequential Create------ --------Random Create--------
localhost -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 20436 75 +++++ +++ +++++ +++ 26701 95 +++++ +++ +++++ +++
Latency 5741us 1381us 1041us 4130us 1693us 720us
1.96,1.96,localhost,1,1391329231,8G,,355,99,158020,44,159182,61,1845,99,336481,79,5756,391,16,,,,,20436,75,+++++,+++,+++++,+++,26701,95,+++++,+++,+++++,+++,58130us,523ms,324ms,8754us,31527us,21232us,5741us,1381us,1041us,4130us,1693us,720us
# bonnie++ -d /tmp -r 4096 -u root
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
localhost 8G 384 99 571790 99 435115 71 1066 99 686460 60 +++++ +++
Latency 26159us 12166us 12410us 9310us 94473us 5744us
Version 1.96 ------Sequential Create------ --------Random Create--------
localhost -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency 6926us 289us 5780us 3281us 83us 3262us
1.96,1.96,localhost,1,1391307867,8G,,384,99,571790,99,435115,71,1066,99,686460,60,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,26159us,12166us,12410us,9310us,94473us,5744us,6926us,289us,5780us,3281us,83us,3262us
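The last line of each bonnie++ run is a machine-readable CSV summary. Counting fields in the lines above, the sequential block write and block read rates appear to be fields 10 and 16 (in K/sec), which is where the Bonnie figures in the table below come from once converted to MB/s. A quick sketch of that conversion, using the Digital Ocean CSV line (truncated after the fields we need):

```
# illustrative only: extract block write (field 10) and block read (field 16) from the
# bonnie++ CSV summary and convert K/sec to MB/s (truncating, as the table below does)
echo "1.96,1.96,localhost,1,1391329231,8G,,355,99,158020,44,159182,61,1845,99,336481,79,5756,391" \
  | awk -F, '{ printf "block write: %d MB/s, block read: %d MB/s\n", $10/1024, $16/1024 }'
# -> block write: 154 MB/s, block read: 328 MB/s
```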
Linode is doing a great job of deploying and configuring traditional hard disks. They have a system which not only provides strong sequential performance, but also performs strongly on random access. Looking at the test results below, Linode's disks are several times faster than Digital Ocean's SSDs.
| Test | Digital Ocean | Linode |
|---|---|---|
| Seq Write | 269 MB/s | 1200 MB/s |
| Unbuffered read | 335 MB/s | 533 MB/s |
| Buffered read | 383 MB/s | 986 MB/s |
| Bonnie read | 328 MB/s | 670 MB/s |
| Bonnie write | 154 MB/s @ 44% CPU | 558 MB/s @ 99% CPU |
| Bonnie update | 155 MB/s @ 61% CPU | 424 MB/s @ 71% CPU |
| Bonnie random seeks | 60/s | 79/s |
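For the dd-based rows, the figure for each provider appears to be the mean of its three runs; for example, Digital Ocean's sequential write is (239 + 329 + 239) / 3 = 269 MB/s. Reproducing that is trivial (illustrative only):

```
# illustrative only: average the three Digital Ocean sequential write runs shown earlier
echo 239 329 239 | awk '{ printf "%.0f MB/s\n", ($1 + $2 + $3) / 3 }'
# -> 269 MB/s
```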
I was curious whether the Digital Ocean droplet's performance was CPU or RAM bound in any way. The Bonnie++ documentation[^1] suggests that CPU reporting on multi-core machines may be incorrect, so I wanted to rule that out. I spun up an 8-core/8GB droplet and ran the same tests, but observed similar performance to the basic 512MB droplet.
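A rough way to double-check that the dd runs themselves are disk-bound rather than CPU-bound is to watch CPU utilisation while a test is in flight (a sketch of such a check, not something from the runs above):

```
# hypothetical check: start a write test and sample system stats once a second while it runs;
# a high 'wa' (I/O wait) column with low 'us'/'sy' suggests the bottleneck is the disk, not the CPU
dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc &
vmstat 1 5
wait
```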
Tomorrow I'll start to investigate why Digital Ocean's memory bandwidth figure was significantly higher than all the others.
[^1]: http://www.textuality.com/bonnie/advice.html