Once upon a time my dad worked on supercomputers at Los Alamos National Laboratory and sometimes he would spend a minute running a little test to see about how fast a new supercomputer was. I kept that test, flops.c, very much as I received it in 1992. Your phone is probably much faster than the fastest supercomputer of 1992.
This isn't a very good benchmark, but it's simple and easy. It runs a few basic numeric algorithms with a known number of adds, subtracts, multiplies, and divides, and figures out how many floating point operations per second your CPU can do. It doesn't take advantage of multi-core or SIMD instructions. It doesn't exercise the memory system and probably all fits in L1 cache. It just tests how fast your CPU can do math (and that your compiler isn't terrible at making that happen).
Over the years I transliterated flops.c into other languages to test them and their compilers and interpreters. Python has a pretty slow interpreter, around 1-5% of the speed of C. JavaScript got amazingly good and can run at 80-90% of the speed of C. Java got to that speed around 2007 or 2008. The Go compiler is surprisingly good for a newer language; it knows some tricks GCC doesn't. Julia is a newer language that should have the potential to run at full speed but apparently still needs some tweaking (as of 2018-05).
Note that Go and Julia have some compiler trick the rest are missing. Test 2 calculates Pi from atan(1.0) by Taylor series. Without reading the assembly, I'm guessing there's at least some fused multiply-add going on. GCC 7.3.0 with -O3 didn't get it. Go reports 147 gigaflops on Test 2, and Julia 22 gigaflops; everything else shows my CPU in the 5-6 gigaflop range.
Source on GitHub:
https://github.com/brianolson/flops
Saturday, May 26, 2018